Small neural network

Notes taken while learning how to build a neural network for artistic purposes.
January 27, 2019

Below are some notes that I took while following Daniel Shiffman’s video tutorials on neural networks, which are heavily inspired by Make Your Own Neural Network, a book written by Tariq Rashid. The concepts and formulas here are not my original material; I only wrote them down in order to better understand and remember them.

Feedforward algorithm

The calculations made by one of the network’s layers, which take into account its synaptic “weights”, can be represented by the matrix product below, in which $h$ represents an intermediary layer (or “hidden layer”) of the network, $w$ represents the weights and $x$ represents the inputs. In this inverted notation, $w_{ij}$ indicates the weight from $j$ to $i$.

$$\begin{bmatrix} h_1 \\ h_2 \end{bmatrix} = \begin{bmatrix} w_{11} & w_{12} & w_{13} \\ w_{21} & w_{22} & w_{23} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$

This product can also be represented thusly:

$$h_1 = (w_{11} \times x_1) + (w_{12} \times x_2) + (w_{13} \times x_3)$$
$$h_2 = (w_{21} \times x_1) + (w_{22} \times x_2) + (w_{23} \times x_3)$$

And it’s also possible to simplify even more:

$$H = W \cdot X$$

We must also add the bias $B$, whose value is 1. Folding the biases $b_1, b_2$ into the weight matrix and appending a constant 1 to the inputs gives:

$$\begin{bmatrix} h_1 \\ h_2 \end{bmatrix} = \begin{bmatrix} w_{11} & w_{12} & w_{13} & b_1 \\ w_{21} & w_{22} & w_{23} & b_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ 1 \end{bmatrix}$$

$$h_1 = (w_{11} \times x_1) + (w_{12} \times x_2) + (w_{13} \times x_3) + b_1$$
$$h_2 = (w_{21} \times x_1) + (w_{22} \times x_2) + (w_{23} \times x_3) + b_2$$

$$H = \sigma(W_{IH} \cdot X + B_H)$$
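As a minimal sketch of the calculation above (the weight and input values are arbitrary, chosen only for illustration), the bias can be folded into the weight matrix by appending a column of biases and a constant 1 to the inputs:

```python
import numpy as np

# Hidden-layer weights, with the biases b1, b2 as a last column.
W = np.array([[0.9, 0.3, 0.4, 0.1],   # w11 w12 w13 b1
              [0.2, 0.8, 0.2, 0.5]])  # w21 w22 w23 b2

# Inputs, augmented with a constant 1 so the bias column is applied.
x = np.array([0.9, 0.1, 0.8, 1.0])

# h1 and h2, before the activation function is applied.
h = W @ x
```

Each entry of `h` matches the expanded sums above: for example, `h[0]` is `0.9*0.9 + 0.3*0.1 + 0.4*0.8 + 0.1`.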

The sigmoid function will be used as the activation function:

$$\sigma(x) = \frac{1}{1 + e^{-x}}$$

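In Python, the sigmoid can be written directly from its definition:

```python
import math

def sigmoid(x):
    # Squashes any real number into the interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))
```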
The calculation of the output layer Y will finally be done this way:

$$Y = \sigma(W_{HO} \cdot H + B_O)$$

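Putting both layers together, a full feedforward pass could be sketched like this (the function and variable names are mine, not from the tutorials, and the weight values are arbitrary):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feedforward(x, w_ih, b_h, w_ho, b_o):
    # Hidden layer: weighted sums of the inputs plus bias, then sigmoid.
    h = sigmoid(w_ih @ x + b_h)
    # Output layer Y: the same operation applied to the hidden activations.
    return sigmoid(w_ho @ h + b_o)

# Toy network: 3 inputs, 2 hidden neurons, 1 output.
w_ih = np.array([[0.9, 0.3, 0.4],
                 [0.2, 0.8, 0.2]])
b_h = np.array([0.1, 0.5])
w_ho = np.array([[0.3, 0.6]])
b_o = np.array([0.2])

y = feedforward(np.array([0.9, 0.1, 0.8]), w_ih, b_h, w_ho, b_o)
```

Because the sigmoid is the last operation, the output always lands strictly between 0 and 1.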
Once the feedforward is done, we are able to calculate the error $e$, which must then be sent back from the output layer to the preceding layers, by backpropagation. Here, $w_{ij}$ represents the weight $w$ between output neuron $j$ and hidden neuron $i$.

$$e_{h_i} = \sum_j \frac{w_{ij}}{\sum_k w_{kj}} \, e_j$$

We will also simplify this calculation by not normalizing the weights before multiplying them with the error:

$$e_{h_i} = \sum_j w_{ij} \, e_j$$

Which is equal to this matrix product:

$$\begin{bmatrix} e_{h_1} \\ e_{h_2} \end{bmatrix} = \begin{bmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \end{bmatrix} \begin{bmatrix} e_1 \\ e_2 \end{bmatrix}$$

$$E_H = W_{HO}^{T} \cdot E_O$$

It should be noted that the weights matrix that was used during the feedforward was transposed to be used for the backpropagation.
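A small sketch of this transposition, with arbitrary example values: the same matrix that carries the hidden activations forward carries the output errors back once transposed.

```python
import numpy as np

# Feedforward weights from the hidden layer to the output layer
# (rows index output neurons, columns index hidden neurons).
w_ho = np.array([[0.3, 0.7],
                 [0.6, 0.4]])

# Errors measured at the two output neurons.
e_output = np.array([0.8, 0.5])

# The transposed feedforward matrix distributes the output errors
# back onto the hidden neurons.
e_hidden = w_ho.T @ e_output
```

Here `e_hidden[0]` is `0.3*0.8 + 0.6*0.5`, i.e. each hidden neuron receives the output errors weighted by its own connections.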


Other resources


This blog post is part of my research project Towards an algorithmic cinema, started in April 2018. I invite you to read the first blog post of the project to learn more about it.