Spiking Neural Network Experiments
These are some experiments I did to learn about a special kind of neural network algorithm inspired by the biology and mathematics of the brain: spiking neural networks. The goal of these projects was to build an intuition for how the brain learns and how it differs from current machine learning techniques.
The actual human brain is unfathomably complex, so a lot of this research takes a specific neural mechanism and pushes it to its logical limit. The mechanism I focus on is synchrony, which is when two neurons use chemical communication to synchronize their firing. I wanted to see whether synchrony emerges in biologically inspired machine learning algorithms for spiking neural networks, the intuition being that synchrony could help associate an input and an output in a neural network. Here's a quick introduction to spiking neural networks for newbies, but I am definitely not an expert, so please reach out if something sounds incorrect.
The general idea
In the brain, neurons are connected to each other via pathways. A neuron has pathways coming in, called dendrites, and a pathway going out, called an axon. The neuron has a threshold of charge it needs to reach before it can send a signal out along its axon, and there are various mathematical models for producing a neuron's output signals from its incoming signals. The resulting sequence of output signals is called a spike train, and it looks like this.
This simplification is similar to an artificial perceptron, like those found in the neural networks behind modern AI. However, unlike a perceptron network, the brain and a spiking neural network are modeled with differential equations and have an explicit time component.
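To make that concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the simpler differential-equation models, stepped with forward Euler. The constants are illustrative placeholders, not values from any particular model or paper.

```c
#include <stdio.h>

/* Minimal leaky integrate-and-fire neuron, stepped with forward Euler.
 * All constants are illustrative; real models tune these carefully. */
#define DT        1.0f   /* timestep (ms) */
#define TAU       20.0f  /* membrane time constant (ms) */
#define V_REST    0.0f   /* resting potential */
#define V_THRESH  1.0f   /* firing threshold */
#define V_RESET   0.0f   /* potential after a spike */

/* Advance the membrane potential by one timestep given the summed
 * input current arriving on the dendrites. Returns 1 if the neuron
 * spiked (sent a signal down its axon), 0 otherwise. */
int lif_step(float *v, float input_current) {
    *v += DT / TAU * (-(*v - V_REST) + input_current); /* leak + drive */
    if (*v >= V_THRESH) {
        *v = V_RESET;   /* fire and reset */
        return 1;
    }
    return 0;
}

int main(void) {
    float v = V_REST;
    for (int t = 0; t < 100; t++) {
        int spiked = lif_step(&v, 1.5f); /* constant input current */
        printf("%d", spiked);            /* crude spike train printout */
    }
    printf("\n");
    return 0;
}
```

The printed string of 0s and 1s is exactly the kind of spike train described above: the neuron charges up, crosses its threshold, fires, and resets.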
Here's a rudimentary demo I made of how electric signals are passed along inside the brain and a spiking neural network.
Try the spiking neural network playground
Neural Plasticity
In a spiking neural network, time is inherently part of the calculation, so the network may be capable of learning to respond to streams of data, not just static matrices. Moreover, time can be used to enable learning with algorithms other than traditional backpropagation. One such method is called spike-timing-dependent plasticity, or STDP. It's beyond the scope of this essay to explain the whole thing, but the basic idea is that a synapse is strengthened or weakened based on the relative timing of the two neurons' spikes, which lets neurons synchronize with very little work. So the goal would be to synchronize the input and output neurons through a large multilayer network.
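Here's a sketch of a basic pair-based STDP weight update to show the shape of the rule. The learning rates and time constants are placeholders, and the rule actually used in the paper below differs in its details.

```c
#include <math.h>

/* Pair-based STDP: if the presynaptic neuron fires shortly before the
 * postsynaptic one, strengthen the synapse; if it fires shortly after,
 * weaken it. Constants are placeholders, not values from any paper. */
#define A_PLUS    0.01f   /* potentiation learning rate */
#define A_MINUS   0.012f  /* depression learning rate */
#define TAU_PLUS  20.0f   /* potentiation time window (ms) */
#define TAU_MINUS 20.0f   /* depression time window (ms) */

/* Update one synaptic weight given the most recent pre- and
 * postsynaptic spike times (in ms). Weight is clamped to [0, 1]. */
float stdp_update(float w, float t_pre, float t_post) {
    float dt = t_post - t_pre;
    if (dt > 0.0f)                       /* pre before post: potentiate */
        w += A_PLUS * expf(-dt / TAU_PLUS);
    else if (dt < 0.0f)                  /* post before pre: depress */
        w -= A_MINUS * expf(dt / TAU_MINUS);
    if (w < 0.0f) w = 0.0f;
    if (w > 1.0f) w = 1.0f;
    return w;
}
```

The whole rule is local: each synapse only needs the spike times of the two neurons it connects, which is why no global error signal or backward pass is required.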
The coolest part is that the network starts out knowing almost nothing, and this algorithm theoretically gives you unsupervised learning, concurrent neurons, streaming input data, and dynamic network topologies all for free! This is pretty cool, since at the time it was most common to train spiking neural networks by transferring the weights from a regular neural network.
"Event-Based, Timescale Invariant Unsupervised Online Deep Learning With STDP." Johannes Thiele. (2018).I may revisit this article with some more math, but for the curious I highly recommend the paper. For my final project in my mathematical neuroscience class, (which I initially took because of this project), I implemented a version of this network in C. This wasn't a great idea, because it was very hard to debug, but I handed a bunch of data off to my partner who was able to verify that the model was actually learning.
The original authors used a library for their experiments, but doing everything manually in C meant I had to make tons of decisions about values like the timestep size and the method of encoding data into spike trains. Ultimately I didn't get any impressive results beyond proof that it was learning a little bit.
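As one example of what "encoding data into spike trains" can look like, here is a minimal Poisson-style rate-coding sketch. This is a common scheme, not necessarily the exact encoding used in this project or in the paper.

```c
#include <stdio.h>
#include <stdlib.h>

/* Poisson-style rate coding: interpret a normalized input value (0..1)
 * as a firing probability per timestep, so stronger inputs produce
 * denser spike trains. One common encoding scheme among several. */
void encode_rate(float intensity, int n_steps, int *spike_train) {
    for (int t = 0; t < n_steps; t++) {
        float r = (float)rand() / (float)RAND_MAX;
        spike_train[t] = (r < intensity) ? 1 : 0;
    }
}

int main(void) {
    int train[50];
    encode_rate(0.3f, 50, train);   /* e.g. a pixel at 30% intensity */
    for (int t = 0; t < 50; t++)
        printf("%d", train[t]);
    printf("\n");
    return 0;
}
```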
Here's our final presentation from the class, which goes through some of this in a bit more depth and dreams up a few very theoretical applications in neuromorphic computing. In the brain, neurons synchronize over time through chemical signals; we wanted to use this synchrony to build associations between inputs and outputs in a neural network.
Hilariously, right after finishing this work, the original authors got back to us and cautioned that spiking neural networks are a horrible dead-end, and that they had since decided that backpropagation was far superior.