In 2019, at a time when NASA may well have been speaking with Brainchip, NASA commissioned three researchers at Harvard University to explore the feasibility of convolutional spiking neural networks. I have only been able to access extracts of two of these works, but read together they make clear that these researchers found spike timing dependent plasticity (STDP) to be an essential ingredient in making scientifically viable spiking neural networks possible.
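For readers unfamiliar with the mechanism, the essence of STDP is that a synapse is strengthened when the presynaptic neuron fires just before the postsynaptic neuron, and weakened when the order is reversed. Here is a minimal sketch of the classic pair-based rule in Python; the parameter values are illustrative assumptions only, not anything specific to Brainchip or these papers:

```python
import numpy as np

# Pair-based STDP with an exponential learning window.
# Amplitudes and time constant below are illustrative assumptions.
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # width of the STDP window (ms)

def stdp_delta_w(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post -> potentiate (LTP)
        return A_PLUS * np.exp(-dt / TAU)
    elif dt < 0:  # post fires before pre -> depress (LTD)
        return -A_MINUS * np.exp(dt / TAU)
    return 0.0

# A causal pairing strengthens the synapse...
print(stdp_delta_w(t_pre=10.0, t_post=15.0))   # positive change
# ...while an anti-causal pairing weakens it.
print(stdp_delta_w(t_pre=15.0, t_post=10.0))   # negative change
```

The key property, which both abstracts below rely on, is that learning is driven purely by local spike timing rather than by a global gradient signal.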
It is trite to say that we know with absolute certainty that the company which owns and controls the use of STDP is Brainchip. If you need scientifically viable spiking neural networks, they are as a result the only company in the world to go to, and as we know this is in fact what NASA has done.
As to Tensor Innovation Partners, the only possible way they can fulfil their Phase I and Phase II agreement with NASA is to use Brainchip, as no one else has STDP.
Tensor Innovation Partners obviously should never have disclosed a relationship with Brainchip as they did, and that was quickly remedied on their website.
The relationship with Intel is not, on its face, for the purpose of using Loihi chips (leave aside the fact that they do not exist for this purpose) but to use the Intel development environment for creating algorithms for use with the AKD1000, AKD2000 and AKD3000.
It should be remembered that AKD chips contain a CNN2SNN converter, which means that any algorithm that can be implemented as a CNN can be converted on chip to an SNN. The use of Intel's open-sourced development environment, once appointed a partner, greatly facilitates the development of such algorithms. Intel is a NASA partner, and it only requires a little imagination to consider that this existing relationship between Intel and NASA played a small part in a start-up like Tensor Innovation Partners being admitted as an Intel partner for this purpose. The words of Anil Mankar ring loudly in this regard: "If you want to do research come to us (Intel); if you want a chip go to Brainchip."
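For context on what a CNN-to-SNN conversion does, one widely used software approach is rate coding: a trained CNN's ReLU activations are reinterpreted as the firing rates of integrate-and-fire neurons, so the network's arithmetic becomes event-driven. The sketch below illustrates that general idea only; it is an assumption for illustration and says nothing about how Brainchip's proprietary converter actually works:

```python
import numpy as np

# Rate-coding view of CNN-to-SNN conversion: an integrate-and-fire
# layer driven by a constant input current produces spike rates that
# approximate the ReLU activations of the original CNN layer.
rng = np.random.default_rng(0)

def relu_layer(x, w):
    return np.maximum(0.0, x @ w)

def if_layer_rates(x, w, t_steps=200, threshold=1.0):
    """Run an integrate-and-fire version of the same layer for t_steps."""
    membrane = np.zeros(w.shape[1])
    spike_count = np.zeros(w.shape[1])
    for _ in range(t_steps):
        membrane += x @ w           # integrate the (constant) input current
        fired = membrane >= threshold
        spike_count += fired        # emit a spike where threshold is crossed
        membrane[fired] -= threshold
    return spike_count / t_steps    # firing rates approximate ReLU outputs

x = rng.random(4)
w = rng.normal(scale=0.3, size=(4, 3))
print(relu_layer(x, w))     # analog activations of the CNN layer
print(if_layer_rates(x, w)) # spike rates converge toward them
```

The longer the network is allowed to run (more time steps), the closer the spike rates track the original activations, which is the basic trade-off in conversion-based SNNs.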
My opinion only. DYOR.
Deep Convolutional Spiking Neural Networks for Image Classification
Vaila, Ruthvik; Chiasson, John; Saxena, Vishal
Abstract
Spiking neural networks are biologically plausible counterparts of artificial neural networks. Artificial neural networks are usually trained with stochastic gradient descent, while spiking neural networks are trained with spike timing dependent plasticity. Training deep convolutional neural networks is a memory- and power-intensive job, and spiking networks could potentially help in reducing the power usage. There is a large pool of tools from which one can choose to train artificial neural networks of any size; on the other hand, all the available tools for simulating spiking neural networks are geared towards computational neuroscience applications and are not suitable for real-life applications. In this work we focus on implementing a spiking CNN using TensorFlow to examine the behaviour of the network and empirically study the effect of various parameters on its learning capabilities, and we also study catastrophic forgetting in the spiking CNN and the weight initialization problem in R-STDP using the MNIST and N-MNIST data sets.
Pub Date: March 2019
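The R-STDP mentioned at the end of this abstract is reward-modulated STDP: a pre/post spike pairing does not change a weight directly but accumulates an eligibility trace, and a later reward signal gates the actual update. A minimal sketch with illustrative parameter values, not the paper's actual implementation:

```python
import numpy as np

# Reward-modulated STDP (R-STDP): the STDP pairing builds an
# eligibility trace; reward converts the trace into a weight change.
# Learning rate and decay constant below are illustrative assumptions.
LR, TAU_E = 0.05, 50.0   # learning rate, eligibility-trace decay (ms)

def rstdp_step(w, trace, stdp_dw, reward, dt=1.0):
    trace = trace * np.exp(-dt / TAU_E) + stdp_dw  # decay, add new pairing
    w = w + LR * reward * trace                    # reward gates learning
    return w, trace

w, trace = 0.5, 0.0
# A causal pairing builds eligibility; the reward arrives one step later.
w, trace = rstdp_step(w, trace, stdp_dw=0.01, reward=0.0)  # no reward yet
w, trace = rstdp_step(w, trace, stdp_dw=0.0, reward=1.0)   # reward potentiates
print(w, trace)
```

Because the update is proportional to the existing trace and weight state, poorly chosen initial weights can stall learning, which is the weight initialization problem the abstract refers to.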
Continuous Learning in a Single-Incremental-Task Scenario with Spike Features
Vaila, Ruthvik; Chiasson, John; Saxena, Vishal
Abstract
Deep Neural Networks (DNNs) have two key deficiencies: their dependence on high-precision computing and their inability to perform sequential learning, that is, when a DNN is trained on a first task and the same DNN is then trained on the next task, it forgets the first task. This phenomenon of forgetting previous tasks is also referred to as catastrophic forgetting. On the other hand, a mammalian brain outperforms DNNs in terms of energy efficiency and the ability to learn sequentially without catastrophically forgetting. Here, we use bio-inspired Spike Timing Dependent Plasticity (STDP) in the feature extraction layers of the network with instantaneous neurons to extract meaningful features. In the classification sections of the network we use a modified synaptic intelligence, which we refer to as the cost per synapse metric, as a regularizer to immunize the network against catastrophic forgetting in a Single-Incremental-Task (SIT) scenario. In this study, we use the MNIST handwritten digits dataset, divided into five sub-tasks.
Pub Date: May 2020
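The "cost per synapse" regularizer this second abstract describes follows the synaptic-intelligence idea: weights that were important for earlier tasks are anchored with a quadratic penalty, so training on a new task does not overwrite them. A minimal sketch, using placeholder importance values rather than the trajectory-derived ones the paper computes:

```python
import numpy as np

# Synaptic-intelligence-style regularizer against catastrophic
# forgetting: penalize movement of weights in proportion to how
# important they were for previously learned tasks.
LAMBDA = 0.1   # regularization strength (illustrative assumption)

def regularized_loss(task_loss, w, w_old, importance):
    penalty = np.sum(importance * (w - w_old) ** 2)
    return task_loss + LAMBDA * penalty

w_old = np.array([0.8, -0.2, 0.1])       # weights after the first task
importance = np.array([5.0, 0.1, 0.1])   # first weight mattered most
w = np.array([0.3, 0.4, -0.3])           # candidate weights for task 2
print(regularized_loss(task_loss=1.0, w=w, w_old=w_old, importance=importance))
```

The effect is that low-importance weights stay free to learn the new sub-task while high-importance ones are held near their old values, which is how the network retains the earlier sub-tasks in the SIT scenario.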