# An algorithm for simulating neuronal networks

Tagged as: neuroscience, algorithm
written on 2013-06-04

The simulation of a neuronal network model with N neurons can be represented as a sequence of transformations. We assume the constant matrix W of size N by N contains the synaptic weight of each connection in the network. We further assume that synaptic transmission delays are represented by a delay matrix D of dimensions N by T, where T is the global delay expressed as a number of time steps.
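As a minimal sketch of this setup in NumPy (the concrete sizes and the random initialization of W are illustrative assumptions, not part of the model):

```python
import numpy as np

N = 100   # number of neurons (illustrative)
T = 16    # global delay in time steps (illustrative)

rng = np.random.default_rng(0)

# Synaptic weight matrix: W[i, j] is the weight of the connection
# from neuron i to neuron j.
W = rng.normal(0.0, 0.1, size=(N, N))

# Delay matrix: column t holds the input that will arrive t time
# steps from now; it starts out empty (all zeros).
D = np.zeros((N, T))
```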

• The first transformation computes the current state of all neurons given the previous state vector A^{t-1} of size N and the previous input vector I^{t-1}, also of size N,
A_{i}^{t} = f_{i} (A_{i}^{t-1}, I_{i}^{t-1})
where f is the function that computes neuronal and synaptic dynamics.
• The next transformation computes the set S of neurons that have spiked during the current time step,
S = \{ i \mid spike(A_{i}^{t}) = true \}
where spike is a function that determines whether a spike has occurred given the current state of a neuron.
• The next transformation computes weighted sums of the inputs,
J_{j} = \sum_{i \in S} W_{i,j} \, spikecount(A_{i}^{t})
where J is a vector of the same dimensionality as I, and spikecount is a function that returns the number of spikes that have occurred given the current state of a neuron.
• The final transformation computes the new delay matrix and input vector,
I_{i}^{t} = D_{i,1}^{t-1}
D^{t} = nshift (cat (D, J, 2), 1)
where function cat concatenates J to the right side of D, and function nshift shifts the columns of the resulting matrix to the left by one, discarding the left-most column, which has just been consumed as I^{t}.
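The four transformations above can be sketched as a single NumPy step function. The text leaves f and spike abstract; as illustrative assumptions this sketch uses a leaky integrator for f, a fixed threshold for spike, and at most one spike per neuron per step (so spikecount is 1 for every spiking neuron):

```python
import numpy as np

def step(A, I, W, D, threshold=1.0, decay=0.9):
    """One time step: the four transformations from the text.

    A : state vector (here: membrane potentials), shape (N,)
    I : input vector, shape (N,)
    W : synaptic weight matrix, shape (N, N)
    D : delay matrix, shape (N, T)
    """
    # 1. Update neuronal state. An illustrative choice of f:
    #    a leaky integrator driven by the delayed input.
    A = decay * A + I

    # 2. Detect the set S of spiking neurons; spike(A_i) is
    #    modeled here as A_i >= threshold.
    S = np.flatnonzero(A >= threshold)

    # 3. Weighted sums: J_j = sum over i in S of W[i, j] * count_i,
    #    with count_i = 1 under the one-spike-per-step assumption.
    J = W[S, :].sum(axis=0)

    # Reset spiking neurons (in practice part of the dynamics of f).
    A[S] = 0.0

    # 4. Read the input arriving now (first column of D), append J
    #    as the right-most column, and shift all columns left by one.
    I_new = D[:, 0].copy()
    D = np.concatenate([D[:, 1:], J[:, None]], axis=1)

    return A, I_new, D, S
```

Driving this for a number of steps only requires threading A, I, and D through successive calls, e.g. `A, I, D, S = step(A, I, W, D)` inside a loop.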

The algorithm outlined above assumes a single type of synaptic connection. This restriction can be lifted by extending W to be of dimensions N by N by K, where K is the number of synaptic connection types, and extending I to be of dimensions N by K. The assumption that W is constant can be lifted by introducing a function that computes the new value of W for each time step.
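A rough sketch of how the weighted-sum and delay transformations generalize to K connection types (the sizes and the example spiking set are hypothetical; the text does not fix the shape of the extended delay structure, so D of shape N by T by K is an assumption):

```python
import numpy as np

N, T, K = 50, 8, 3  # illustrative sizes; K connection types

rng = np.random.default_rng(2)
W = rng.normal(0.0, 0.1, size=(N, N, K))  # per-type weights
D = np.zeros((N, T, K))                    # per-type delay buffer

# Hypothetical set of spiking neurons for this step.
S = np.array([1, 4, 7])

# The weighted sum now yields one input column per connection type.
J = W[S, :, :].sum(axis=0)        # shape (N, K)

# The delay update shifts along the time axis, per type.
I_new = D[:, 0, :].copy()                           # shape (N, K)
D = np.concatenate([D[:, 1:, :], J[:, None, :]], axis=1)
```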