Classification Using Neural Networks

Implementing neural network layers in Keras, visualizing the network, and performing binary classification.

Dinesh
4 min read · Jan 5, 2021

Classification problems can be categorized into two types, binary and multi-class, based on the number of classes. There are also other classification settings, such as multi-label and imbalanced classification. Here we will focus on binary and multi-class classification using neural networks and their implementation in Python.

Binary Classification
Generally, a binary classification problem concerns one class that is the positive state and another class that is the negative state. The positive class is assigned the label 1 and the negative class is assigned the label 0, for example email spam detection (spam or not) or a patient having a disease or not. The following diagram shows a basic neural network for binary classification.

A neural network with 8 input features, one hidden layer of 3 nodes (one of which is a bias node), and 1 output node that decides a positive or negative result.

Data Preparation
We are using a sample disease dataset in which each patient has 8 different symptom values, and the result column contains 0s and 1s, where 0 represents negative and 1 represents positive. Download the dataset here.

import pandas as pd
#read dataset
df = pd.read_csv('sample_disease_dataset.csv')
df.head()
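
As a quick sanity check before preprocessing, you can look at the class balance of the label column (taken here by position as the 9th column, matching the slicing used later); this is a small optional sketch:

# count positive (1) and negative (0) samples in the label column (column index 8)
print(df.iloc[:, 8].value_counts())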

We are not going to do much data analysis and will jump directly to data preparation. To normalize our data we will use min-max normalization.

from sklearn import preprocessing
#min-max normalization
x = df.values #returns a numpy array
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
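
For reference, MinMaxScaler rescales each column independently to the [0, 1] range; a minimal NumPy equivalent of what it computes (illustrative only, assuming no constant columns) is:

import numpy as np
# column-wise min-max normalization: (x - min) / (max - min)
x_manual = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))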

After normalization we split our data into three parts: train, test and validation. First we split the dataset 70/30 and keep the 30% as test data; the remaining 70% is further split into 80% training data and 20% validation data.

from sklearn.model_selection import train_test_split
# split into input (X) and output (y) variables
X = x_scaled[:, 0:8]
y = x_scaled[:, 8]
X_t, X_test, y_t, y_test = train_test_split(X, y,
                                             test_size=0.3,
                                             random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_t, y_t,
                                                   test_size=0.2,
                                                   random_state=12)
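
This gives roughly 56% of the full dataset for training, 14% for validation and 30% for testing; printing the shapes is a quick way to confirm the proportions:

# check how many samples ended up in each split
print('train:', X_train.shape, 'val:', X_val.shape, 'test:', X_test.shape)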

Now we will build our neural network model using the Keras Sequential API, and we will keep it small and simple to understand. As we have 8 input parameters, the input layer will have 8 nodes, and we will have one hidden layer of 2 nodes plus a bias. You can follow along with the summary and model block diagram below.

# import the required Keras classes
from keras.models import Sequential
from keras.layers import Dense
from keras import utils
# 8 inputs -> hidden layer of 2 ReLU units -> 1 sigmoid output
model = Sequential()
model.add(Dense(2, input_dim=8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
# visualize the network (requires pydot and graphviz)
utils.plot_model(model, to_file='model.jpg', show_shapes=True, show_layer_names=True)
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
history = model.fit(X_train,
                    y_train,
                    validation_data=(X_val, y_val),
                    epochs=1500,
                    batch_size=128,
                    shuffle=True,
                    verbose=1)

Here we use binary cross-entropy as the loss function and Adam as the optimizer, with a batch size of 128, training for 1500 epochs.
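
For reference, binary cross-entropy for a single example with true label y and predicted probability p is -[y*log(p) + (1-y)*log(1-p)], averaged over the batch; a minimal NumPy sketch of that formula (illustrative only, not the Keras source) looks like this:

import numpy as np
def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    # clip predictions so log(0) never occurs
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))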

import matplotlib.pyplot as plt
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper right')
plt.show()
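
The history object also stores accuracy per epoch, so you can plot accuracy curves the same way (the keys are 'accuracy' and 'val_accuracy' in recent Keras versions; older versions use 'acc' and 'val_acc'):

# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='lower right')
plt.show()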

Results
We got a training accuracy of 0.78 with a loss of 0.46, and a validation accuracy of 0.77 with a loss of 0.44.

Plot of training loss vs. validation loss
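
Before looking at per-class metrics, it is worth checking how the model does on the held-out test set; a minimal sketch using model.evaluate:

# evaluate loss and accuracy on the test split
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=0)
print('test loss:', round(test_loss, 2), 'test accuracy:', round(test_acc, 2))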

from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix, classification_report
# predict probabilities on the test set, threshold at 0.5 and flatten to 1-D
results = (model.predict(X_test) > 0.5).astype(int).ravel()
print(classification_report(y_test, results))
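
Since confusion_matrix and accuracy_score are already imported, you can also print the raw error breakdown (rows are true classes, columns are predicted classes) and the overall test accuracy:

print(confusion_matrix(y_test, results))
print('accuracy:', accuracy_score(y_test, results))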

Generally we use machine learning models such as Logistic Regression, Naive Bayes, K-Nearest Neighbors, Decision Trees, Random Forests and Support Vector Machines for problems like this, but here we tried neural networks as a study exercise. Neural networks typically need a much larger dataset to reach good accuracy.
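
As a rough point of comparison, a logistic regression baseline on the same split takes only a few lines; this is an illustrative sketch, not part of the original notebook:

from sklearn.linear_model import LogisticRegression
# fit a simple baseline classifier on the training split
baseline = LogisticRegression(max_iter=1000)
baseline.fit(X_train, y_train)
# compare its test accuracy against the neural network
print('baseline test accuracy:', accuracy_score(y_test, baseline.predict(X_test)))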

You can get the notebook here

https://github.com/Dinesh317/Classification_using_Neural_network

