I would have liked to do a bit more research before posting again here, but I wanted to provide an update before the AGM so that you guys can use this to inform any questions you may want to raise.
In early April I emailed Kris Carlson as an alternate path to try to resolve my concerns about the Akida hardware resources. A few days later I received an email from Peter van der Made, which started a lengthy series of emails over the last 7 weeks trying to sort out my concerns about the hardware resources required to meet the published Akida claims. A chronological, 26-page copy of the entire exchange is attached to this post (PVDM_Correspondence.pdf); it was redacted only to remove contact information. In the emails from May 13th and 15th he gave permission to share the information.
Some of the highlights from this discussion:
- The Akida neural architecture is a fairly significant departure from SNAP64 and the published patents, and employs some kind of hardware that can be reallocated between synapses and neurons.
- The advertised 10B synapse number is "equivalent synapses", which is easily calculated from the network definition by summing the (filter_size * number_of_strides) * filter_count across all layers. In contrast, the number of physical synapses in an Akida implementation is the sum of filter_size * filter_count across all layers.
- It is still ambiguous what the 1.2M neuron number refers to, but most recently it was defined as an average neuron count. See the 2nd concern below.
- Peter was consistent in stating that the hardware is not multiplexed. If the Akida specification is legitimate, then I think that a proper interpretation of this is probably central to the explanation.
- Peter has been far more available and responsive than I anticipated. He was aware of this thread and expressed a desire to resolve the concerns. He will probably also be aware of this post, and I hope it spurs further clarification.
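To make the two counting conventions concrete, here is a small sketch of the calculation described above, applied to a made-up three-layer network. The layer sizes are my own placeholders, not anything from the correspondence:

```python
# Equivalent vs. physical synapse counts, per the formulas Peter gave.
# Layer sizes below are hypothetical, not from any Akida network.

layers = [
    # (filter_size, number_of_strides, filter_count)
    (3 * 3 * 3,   224 * 224, 64),    # e.g. a 3x3x3 filter swept over a 224x224 map
    (3 * 3 * 64,  112 * 112, 128),
    (3 * 3 * 128, 56 * 56,   256),
]

# "Equivalent synapses": (filter_size * number_of_strides) * filter_count, summed
equivalent = sum(fs * strides * fc for fs, strides, fc in layers)

# Physical synapses: filter_size * filter_count, summed (the stride term drops out)
physical = sum(fs * fc for fs, strides, fc in layers)

print(f"equivalent synapses: {equivalent:,}")
print(f"physical synapses:   {physical:,}")
```

Even for this toy network the two numbers differ by about four orders of magnitude, which is why it matters so much which one a published spec refers to.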
And a few key concerns:
- After establishing the technique for calculating the number of hardware entities, the network given as the exemplar for 10 billion equivalent synapses appears to require resources that are not feasible for a $10-$20 chip on a 28nm process (and probably not at any price or on any process). I could be wrong, but something in the description will have to change for me to see it as feasible.
- It sometimes seemed that the explanations were changing in response to my further queries. For example, in the first email on April 5 Peter stated that the published neuron and synapse counts were the "maximum number of neurons and maximum number of synapses". This was restated as 1.2M physical neurons and 10B "equivalent synapses" later the same day. But then in the May 17 email he says that "The 1.2M neurons we published is an average figure" after I observed that the given network required more than 1.2M physical neurons.
- In the course of the discussion about the image classifier CNNs, Peter suggested that the discrepancy was due to CNNs reporting "neuron equivalents", and it was inferred that an Akida physical neuron is reused on each convolutional stride. However, that did not align with the subsequent description of the Akida physical neuron, and the technique given for calculating physical neurons in Akida ended up being identical to the count of neurons in a CNN.
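For what it's worth, the gap between the two neuron-counting conventions is easy to illustrate. The layer below is hypothetical (the sizes are mine, not from the Akida materials):

```python
# Two neuron-counting conventions for a single hypothetical conv layer.

out_h, out_w, filter_count = 224, 224, 64

# Standard CNN convention: one neuron per output position per filter.
cnn_neurons = out_h * out_w * filter_count

# "Physical neuron reused on each stride" convention: one neuron per filter.
reused_neurons = filter_count

print(f"CNN-style count:    {cnn_neurons:,}")
print(f"reused-style count: {reused_neurons:,}")
```

Under the standard CNN convention, this single layer already exceeds the published 1.2M figure, which is the discrepancy I raised; under the reuse convention it needs only 64 neurons. If the Akida counting technique is truly identical to the CNN one, the reuse explanation doesn't hold together.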
As it currently stands, I am unfortunately more concerned about the viability of Akida than I was at the start of this inquiry. Although my understanding of the design and the published numbers has been clarified, extracting those clarifications was a confusing process, and the technology still does not appear feasible to me as described. I am open to the possibility that there are additional explanations that would resolve this - but there is good reason why Loihi has "only" 131K neurons and 130M synapses. I am currently awaiting access to the development environment to see if that provides additional clarification. If I had to guess at an explanation that would change my mind, I think the chip specifications could be feasible if the reallocatable hardware is just space in SRAM, and the synaptic and neural hardware entities are actually being reused within a layer (i.e., multiplexing, serialization, or pipelining) by sequentially pulling and storing values to the SRAM. But Peter repeatedly stated that this is not the case. It's possible that there is a severe disconnect between us on what that actually means.
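To show why I doubt that billions of non-multiplexed synapses fit on a 28nm die, here is the kind of back-of-envelope area estimate I have in mind. Both inputs are my own rough assumptions (the bit-cell area is the commonly quoted ballpark figure for a 28nm 6T SRAM cell), not anything Brainchip has stated:

```python
# Rough area sanity check: if the hardware is genuinely not multiplexed,
# each of the 10B synapses needs its own physical storage.
# All figures below are order-of-magnitude assumptions, not Akida specifics.

SYNAPSES = 10_000_000_000   # the advertised 10B figure, taken as physical
BITS_PER_SYNAPSE = 4        # a guess; even 1 bit barely changes the conclusion
UM2_PER_BIT = 0.12          # approximate 28nm 6T SRAM bit-cell area, in um^2

area_mm2 = SYNAPSES * BITS_PER_SYNAPSE * UM2_PER_BIT / 1e6  # um^2 -> mm^2
print(f"SRAM area for synapse weights alone: {area_mm2:,.0f} mm^2")
```

The result is thousands of mm^2 for weight storage alone, while even the largest commercial dies are a few hundred mm^2 and a $10-$20 part would be far smaller. That's the core of my feasibility concern, and why some form of reuse seems necessary to me.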
Lastly, I also contacted Socionext to confirm that the MoU was in place and asked whether materials had been provided by Brainchip for their evaluation. They refused to say anything and would only refer me to the Brainchip press release. I don't think there's much to read into that, but I thought I would share.
If anybody has any questions that they want to ask me before the AGM, you should do so before the 24th. My availability will be limited thereafter.