A little while ago I started reading the free online book "Neural Networks and Deep Learning" by Michael Nielsen. If you're looking for an introduction to the subject, it's a really great place to start.
In the first two chapters Michael explains how to use stochastic gradient descent to train a feed-forward neural network to recognise handwritten digits. The highlight for me is the solid explanation of the theory, applied in situ to a real-world problem. The result is 74 lines of Python code capable of recognising handwritten digits with an error rate of less than 4%.
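To give a flavour of what that training boils down to, here's a minimal sketch of the mini-batch SGD update rule, w ← w − (η/m)·Σ∇w, written against GSL as in the C port. The function name and the assumption that the per-batch gradients have already been summed into grad_sum are mine; the port itself may structure this differently.

```c
#include <gsl/gsl_matrix.h>

/* Apply the SGD rule  w <- w - (eta/m) * sum(nabla_w)  for one weight
 * matrix. grad_sum is assumed to already hold the gradients summed over
 * a mini-batch of m training examples (it is clobbered in the process). */
static void sgd_update(gsl_matrix *w, gsl_matrix *grad_sum,
                       double eta, size_t m)
{
    gsl_matrix_scale(grad_sum, eta / (double)m); /* grad_sum *= eta/m    */
    gsl_matrix_sub(w, grad_sum);                 /* w -= scaled gradient */
}
```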
As an exercise I decided to re-implement Michael's Python example in C99, using the GNU Scientific Library (GSL) for matrix/vector operations and Catch for some (limited) unit tests.
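As a taste of what the GSL side looks like, here's a rough sketch of a single feed-forward layer step, a' = σ(Wa + b), using gsl_blas_dgemv. The helper names are mine, and the real port may organise this differently.

```c
#include <math.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_blas.h>

static double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

/* One layer of the forward pass: out = sigmoid(W * in + b).
 * out must be pre-allocated with the layer's output size. */
static void layer_forward(const gsl_matrix *W, const gsl_vector *b,
                          const gsl_vector *in, gsl_vector *out)
{
    gsl_vector_memcpy(out, b);                          /* out = b          */
    gsl_blas_dgemv(CblasNoTrans, 1.0, W, in, 1.0, out); /* out = W*in + b   */
    for (size_t i = 0; i < out->size; i++)              /* apply sigmoid    */
        gsl_vector_set(out, i, sigmoid(gsl_vector_get(out, i)));
}
```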
To allow a direct comparison with the Python example, I've set it up to read in the first 50,000 images from the MNIST database of handwritten digits for training, and to use the final 10,000 to test the network after each epoch, just like in the example.
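The MNIST files use the simple big-endian IDX format, so loading them in C99 is straightforward. A rough sketch of reading the image-file header might look like the following; the filename and magic-number check are standard MNIST, but the port's actual loader may differ.

```c
#include <stdio.h>
#include <stdint.h>

/* IDX files store 32-bit integers big-endian; read one portably. */
static uint32_t read_be32(FILE *f)
{
    uint8_t b[4];
    if (fread(b, 1, 4, f) != 4) return 0;
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

int main(void)
{
    FILE *f = fopen("train-images-idx3-ubyte", "rb");
    if (!f) return 1;

    uint32_t magic = read_be32(f); /* 0x00000803 for image files   */
    uint32_t count = read_be32(f); /* 60,000 in the training file  */
    uint32_t rows  = read_be32(f); /* 28                           */
    uint32_t cols  = read_be32(f); /* 28                           */

    if (magic != 0x00000803) { fclose(f); return 1; }
    printf("%u images of %ux%u pixels\n", count, rows, cols);
    /* ... the raw pixel bytes (one per pixel, row-major) follow ... */
    fclose(f);
    return 0;
}
```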
As you'd expect, the C port behaves just like the Python example, with some minor variation likely due to differences in random-number generation and floating-point rounding. One nice bonus: despite neither version being optimised for speed, the C implementation runs about 15 times faster on my machine, making it less time-consuming to play with the various parameters.
Finally, a big thank you to Michael for taking the time to write this book, and for making it freely available.
The code for the C port can be found here.