As shown

in Fig. 3, the lowest level of this architecture is the input layer, with 2D n×n images as inputs. With

local receptive fields, upper-layer neurons extract elementary and then increasingly complex

visual features. Each convolutional layer (Fig. 3) is composed of multiple

feature maps, which are constructed by convolving inputs with different filters.

In other words, the value of each unit in a feature map depends on a local receptive field in the previous layer and on the filter. This is

followed by a nonlinear activation:

Y_j^(l) = f( Σ_i K_ij * x_i^(l−1) + b_j )    (4)

where Y_j^(l) is the jth output feature map of convolutional layer l; f(·) is a nonlinear function (many recent implementations use a scaled hyperbolic tangent, f(x) = 1.7159·tanh(2x/3), as the activation function). K_ij is a trainable filter (or kernel) in the filter bank that convolves with the feature map x_i^(l−1) from the previous layer to produce a new feature map in the current layer. The symbol * represents a discrete convolution operator and b_j is a bias. Note that each filter K_ij can connect to all or a portion of the feature maps in the previous layer. A sub-sampling layer then reduces the spatial resolution of the feature maps. In general, each unit in the sub-sampling layer

is constructed by averaging a 2×2 area in the feature map or by max pooling

over a small region. The key parameters to be learned are the weights between layers, which are normally trained by standard back-propagation and a gradient-descent algorithm, with mean squared error as the loss function.
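As a concrete illustration, the forward pass of Eq. 4 with the scaled tanh activation, together with 2×2 sub-sampling, can be sketched as follows (function names and shapes are illustrative, not taken from any particular implementation):

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' 2-D discrete convolution of one feature map x with kernel k."""
    kf = k[::-1, ::-1]  # flip the kernel: true convolution, not cross-correlation
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kf)
    return out

def scaled_tanh(x):
    # f(x) = 1.7159 * tanh(2x/3), the activation quoted in the text
    return 1.7159 * np.tanh(2.0 * x / 3.0)

def conv_layer_forward(maps, filters, biases):
    """Eq. 4: Y_j^(l) = f(sum_i K_ij * x_i^(l-1) + b_j).

    maps    -- list of 2-D input feature maps x_i
    filters -- filters[j][i] is the kernel K_ij for output map j
    biases  -- biases[j] is the scalar bias b_j
    """
    outputs = []
    for bank, b in zip(filters, biases):
        acc = sum(conv2d_valid(x, k) for x, k in zip(maps, bank))
        outputs.append(scaled_tanh(acc + b))
    return outputs

def pool2x2(x, mode="max"):
    """Sub-sampling: reduce each 2x2 area to its max (or its average)."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            block = x[2 * i:2 * i + 2, 2 * j:2 * j + 2]
            out[i, j] = block.max() if mode == "max" else block.mean()
    return out
```

Each output map is the activated sum of the convolutions of all connected input maps, and the pooling step halves the spatial resolution.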

Alternatively, training deep CNN architectures can be unsupervised. Herein we review a particular method for unsupervised training of CNNs: predictive sparse decomposition (PSD). The idea is to approximate inputs X with a linear combination of basis functions with sparse coefficients.

Z* = arg min_Z ‖X − WZ‖₂² + λ‖Z‖₁ + α‖Z − D·tanh(KX)‖₂²

(5)

where W is a matrix with a

linear basis set, Z is a sparse coefficient matrix, D is a

diagonal gain matrix and K is the filter bank with predictor parameters.

The goal is to find the optimal basis set W and the filter bank K that simultaneously minimize the reconstruction error (the first term in Eq. 5), subject to a sparse representation (the second term), and the code-prediction error (the third term in Eq. 5, which measures the difference between the predicted code and the actual code and preserves invariance to certain distortions).

PSD can be trained with a feed-forward encoder so that the filter bank and the pooling are learned together.
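To make the three terms of Eq. 5 concrete, the objective value for a candidate code Z can be sketched as below; the matrix shapes and the weights `lam` and `alpha` are illustrative assumptions, not values from the original PSD work:

```python
import numpy as np

def psd_loss(X, W, Z, D, K, lam=0.1, alpha=1.0):
    """Value of the PSD objective in Eq. 5 for a candidate code Z.

    X: input column vectors, shape (d, n)
    W: linear basis (dictionary), shape (d, m)
    Z: sparse coefficient matrix, shape (m, n)
    D: diagonal gain matrix, shape (m, m)
    K: predictor filter bank, shape (m, d)
    lam, alpha: sparsity / prediction weights (hypothetical values)
    """
    recon = np.sum((X - W @ Z) ** 2)                       # reconstruction error
    sparsity = lam * np.sum(np.abs(Z))                     # L1 sparsity penalty
    pred = alpha * np.sum((Z - D @ np.tanh(K @ X)) ** 2)   # code-prediction error
    return recon + sparsity + pred
```

Minimizing this jointly over Z, W, D, and K yields codes that both reconstruct the input and are cheaply predictable by the feed-forward encoder D·tanh(KX).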

In

summary, inspired by biological processes, CNN algorithms learn a hierarchical feature representation by utilizing strategies like local receptive fields, shared weights, and sub-sampling. Each

filter bank can be trained with either supervised or unsupervised methods. A

CNN is capable of learning good feature hierarchies automatically and providing

some degree of translational and distortional invariances.38

C) MULTI-LAYER PERCEPTRON

A

multilayer perceptron (MLP) is a

class of feed-forward

artificial

neural network. An MLP consists of at least three layers of nodes. Except for the

input nodes, each node is a neuron that uses a nonlinear activation

function. An MLP utilizes a supervised

learning technique called back-propagation

for training.39

Its multiple layers and non-linear activations distinguish an MLP from a linear perceptron.

It can distinguish data that is not linearly

separable.40

The

MLP consists of three or more layers (an input layer and an output layer with

one or more hidden layers) of

nonlinearly activating nodes, making it a deep

neural network.

Since MLPs are fully connected, each node in one layer connects with a certain weight w_ij to every node in the following layer.

If a multilayer perceptron has a linear activation

function in all neurons, that is, a linear function that maps the weighted inputs to the output of each neuron, then linear algebra shows that any number of layers can be

reduced to a two-layer input-output model. In MLPs some neurons use a nonlinear activation function that

was developed to model the frequency of action potentials, or firing, of biological neurons.
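The claim that purely linear layers collapse to a single linear map can be checked numerically; a minimal sketch (matrix names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # first "layer": linear map from 3 to 4 units
W2 = rng.normal(size=(2, 4))   # second "layer": linear map from 4 to 2 units
x = rng.normal(size=3)

deep = W2 @ (W1 @ x)           # two stacked linear layers
collapsed = (W2 @ W1) @ x      # one equivalent linear layer
assert np.allclose(deep, collapsed)
```

Because matrix multiplication is associative, no depth of linear layers adds representational power; the nonlinearity is what prevents this collapse.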

The two common activation functions are both sigmoids, and are described by

y(v_i) = tanh(v_i)   and   y(v_i) = (1 + e^(−v_i))^(−1).

The first is

a hyperbolic

tangent that ranges from -1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0

to 1. Here y_i is the output of the ith node (neuron) and v_i is the weighted sum of the input connections.
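A minimal sketch of a single node applying either sigmoid to its weighted input sum (function names are illustrative):

```python
import math

def tanh_act(v):
    """y(v_i) = tanh(v_i): ranges from -1 to 1."""
    return math.tanh(v)

def logistic_act(v):
    """y(v_i) = 1 / (1 + e^(-v_i)): ranges from 0 to 1."""
    return 1.0 / (1.0 + math.exp(-v))

def node_output(weights, inputs, activation=tanh_act):
    # v_i is the weighted sum of the node's input connections
    v = sum(w * x for w, x in zip(weights, inputs))
    return activation(v)
```

The two functions are related: the logistic function is a shifted, rescaled tanh, which is why both produce similarly shaped (sigmoid) response curves.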

Learning occurs in the perceptron by changing

connection weights after each piece of data is processed, based on the amount

of error in the output compared to the expected result. This is an example of supervised

learning, and is carried out through back-propagation,

a generalization of the least mean squares algorithm in the linear perceptron.

We represent the error in output node j for the nth data point by

e_j(n) = d_j(n) − y_j(n),

where d_j is the target value and y_j is the value produced by the perceptron. The node weights are adjusted based on corrections that minimize the error in the entire output, given by

E(n) = (1/2) Σ_j e_j²(n).

Using gradient descent, the change in each weight is

Δw_ji(n) = −η (∂E(n)/∂v_j(n)) y_i(n),

where y_i is the output of the previous neuron and η is the learning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations.

The derivative to be calculated depends on the induced local field v_j, which itself varies. It is easy to prove that for an output node this derivative can be simplified to

−∂E(n)/∂v_j(n) = e_j(n) φ′(v_j(n)),

where φ′ is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is

−∂E(n)/∂v_j(n) = φ′(v_j(n)) Σ_k (−∂E(n)/∂v_k(n)) w_kj(n).

This depends on the change in weights of the kth nodes, which represent the output layer. So the hidden-layer weights are updated using the output-layer weight derivatives and the derivative of the activation function; in this sense the error propagates backward through the activation function, hence the name back-propagation.41
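The update rules above can be put together in a single training step for a one-hidden-layer MLP; this is a minimal sketch with the logistic activation (network shapes and the learning rate are illustrative):

```python
import math

def logistic(v):
    return 1.0 / (1.0 + math.exp(-v))

def dlogistic(v):
    # phi'(v) for the logistic activation: y(1 - y)
    y = logistic(v)
    return y * (1.0 - y)

def train_step(x, d, W1, W2, eta=0.5):
    """One back-propagation step for a single-hidden-layer MLP.

    x: input vector; d: target vector
    W1[j][i]: weight from input i to hidden node j
    W2[k][j]: weight from hidden node j to output node k
    Returns E(n) = (1/2) sum_j e_j^2(n) before the update.
    """
    # forward pass: induced local fields v and outputs y for each layer
    v1 = [sum(w * xi for w, xi in zip(row, x)) for row in W1]
    y1 = [logistic(v) for v in v1]
    v2 = [sum(w * yi for w, yi in zip(row, y1)) for row in W2]
    y2 = [logistic(v) for v in v2]

    # output nodes: -dE/dv_k = e_k(n) * phi'(v_k)
    delta2 = [(dk - yk) * dlogistic(vk) for dk, yk, vk in zip(d, y2, v2)]
    # hidden nodes: -dE/dv_j = phi'(v_j) * sum_k delta_k * w_kj
    delta1 = [dlogistic(vj) * sum(delta2[k] * W2[k][j] for k in range(len(W2)))
              for j, vj in enumerate(v1)]

    # weight updates: delta_w_ji = eta * delta_j * y_i
    for k in range(len(W2)):
        for j in range(len(y1)):
            W2[k][j] += eta * delta2[k] * y1[j]
    for j in range(len(W1)):
        for i in range(len(x)):
            W1[j][i] += eta * delta1[j] * x[i]
    return 0.5 * sum((dk - yk) ** 2 for dk, yk in zip(d, y2))
```

Repeating this step on a training example drives E(n) down: the output-layer deltas are computed first, then propagated backward to obtain the hidden-layer deltas, exactly as in the derivation above.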