
Paper from Nicolas Oros, BrainChip Scientist

    Maiden post, guys. Some interesting reading involving a BrainChip scientist experimenting with the IBM TrueNorth.

    Link: Article

    A Self-Driving Robot Using Deep Convolutional Neural Networks on Neuromorphic Hardware

    Tiffany Hwu∗†, Jacob Isbell‡, Nicolas Oros§, and Jeffrey Krichmar∗¶
    ∗Department of Cognitive Sciences, University of California, Irvine, Irvine, California, USA, 92697
    †Northrop Grumman, Redondo Beach, California, USA, 90278
    ‡Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, USA, 20742
    §BrainChip LLC, Aliso Viejo, California, USA, 92656
    ¶Department of Computer Science, University of California, Irvine, Irvine, California, USA, 92697
    Email: [email protected]

    Abstract
    Neuromorphic computing is a promising solution for reducing the size, weight and power of mobile embedded systems. In this paper, we introduce a realization of such a system by creating the first closed-loop, battery-powered communication system between an IBM TrueNorth NS1e and an autonomous Android-Based Robotics platform. Using this system, we constructed a dataset of path-following behavior by manually driving the Android-Based robot along steep mountain trails and recording video frames from the camera mounted on the robot along with the corresponding motor commands. We used this dataset to train a deep convolutional neural network implemented on the TrueNorth NS1e. The NS1e, which was mounted on the robot and powered by the robot's battery, resulted in a self-driving robot that could successfully traverse a steep mountain path in real time. To our knowledge, this represents the first time the TrueNorth NS1e neuromorphic chip has been embedded on a mobile platform under closed-loop control.

    INTRODUCTION
    As the need for faster, more efficient computing continues to grow, the observed rate of improvement in computing speed shows signs of leveling off [1]. In response, researchers have been looking for new strategies to increase computing power. Neuromorphic hardware is a promising direction, taking a brain-inspired approach to achieve orders of magnitude lower power than traditional von Neumann architectures [2], [3]. Mimicking the computational strategy of the brain, the hardware uses event-driven, massively parallel and distributed processing of information. As a result, the hardware has low size, weight, and power, making it ideal for mobile embedded systems.

    In exploring the advantages of neuromorphic hardware, it is important to consider how this approach might be used to solve existing needs and applications. One such application is autonomous driving [4]. For an autonomous mobile platform to perform effectively, it must be able to process large amounts of information simultaneously, extracting salient features from a stream of sensory data and making decisions about which motor actions to take [5]. In particular, the platform must be able to segment visual scenes into objects such as roads and pedestrians [4]. Deep convolutional neural networks (CNNs) [6] have proven very effective for many tasks, including self-driving. For instance, Huval et al. used deep learning on a large dataset of highway driving to perform a variety of functions such as object and lane detection [7]. Recently, Bojarski et al. showed that tasks such as lane detection do not need to be explicitly trained [8]. In their DAVE-2 network, an end-to-end learning scheme was presented in which the network is simply trained to classify images from the car's cameras into steering commands learned from real human driving data. Intermediate tasks such as lane detection were automatically learned within the intermediate layers, saving the work of selecting these tasks by hand.

    Such networks are well suited to neuromorphic hardware because of the large amount of parallel processing involved. In fact, many computer vision tasks have already been successfully transferred to the neuromorphic domain, such as handwritten digit recognition [9] and scene segmentation [10]. However, less work has been done on embedding neuromorphic hardware on mobile platforms. One example is NENGO simulations embedded on SpiNNaker boards controlling mobile robots [11], [12]. Addressing the challenges of physically connecting these components, as well as creating a data pipeline for communication between the platforms, is an open issue, but one worth pursuing given the small size, weight and power of neuromorphic hardware.

    At the Telluride Neuromorphic Cognition Workshop 2016, we embedded the IBM TrueNorth NS1e [13] on the Android-Based Robotics platform [14] to create a self-driving robot that uses a deep CNN to travel autonomously along an outdoor mountain path. The result of our experiment is a robot that is able to use video frame data to steer along a road in real time with low-power processing.

    PLATFORMS
    A. IBM TrueNorth


    The IBM TrueNorth (Figure 1) is a neuromorphic chip with a multicore array of programmable neurons. Within each core, there are 256 input lines connected to 256 neurons through a 256x256 synaptic crossbar array. Each neuron on a core is connected to every other neuron on the same core through the crossbar, and can communicate with neurons on other cores through their input lines. In our experiment, we used the IBM NS1e board, which contains 4096 cores, 1 million neurons, and 256 million synapses. An integrate-and-fire neuron model with 23 parameters was used, with trinary synaptic weights of -1, 0, and 1. Because the TrueNorth has been used to run many types of deep convolutional networks, and can be powered by an external battery, it served as ideal hardware for this task [15], [16].
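
    To make the crossbar arithmetic concrete, the sketch below is a toy Java illustration, not the actual 23-parameter TrueNorth neuron model: a single simplified integrate-and-fire neuron driven by 256 binary input lines with trinary weights, which fires when its membrane potential reaches a threshold. Leak, stochastic modes, and the chip's other parameters are omitted.

    // Toy illustration only: a simplified integrate-and-fire neuron with
    // trinary synaptic weights (-1, 0, +1), standing in for one column of a
    // 256x256 TrueNorth crossbar. Leak and the chip's other parameters are omitted.
    public class TrinaryNeuronSketch {
        static final int INPUTS = 256;

        private final int[] weights = new int[INPUTS]; // each entry is -1, 0, or +1
        private final int threshold;
        private int membranePotential = 0;

        TrinaryNeuronSketch(int threshold) {
            this.threshold = threshold;
        }

        void setWeight(int inputLine, int w) {
            if (w < -1 || w > 1) throw new IllegalArgumentException("trinary weights only");
            weights[inputLine] = w;
        }

        // One tick: integrate the binary input spikes; fire and reset if over threshold.
        boolean tick(boolean[] inputSpikes) {
            for (int i = 0; i < INPUTS; i++) {
                if (inputSpikes[i]) membranePotential += weights[i];
            }
            if (membranePotential >= threshold) {
                membranePotential = 0;
                return true;   // the neuron emits a spike this tick
            }
            return false;
        }

        public static void main(String[] args) {
            TrinaryNeuronSketch n = new TrinaryNeuronSketch(2);
            n.setWeight(0, 1);
            n.setWeight(1, 1);
            boolean[] spikes = new boolean[INPUTS];
            spikes[0] = true;
            spikes[1] = true;
            System.out.println(n.tick(spikes));  // true: potential 2 reaches threshold 2
        }
    }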

    B. Android-Based Robotics

    The Android-Based Robotics platform (Figure 2) was created at the University of California, Irvine, using entirely off-the-shelf commodity parts and controlled by an Android phone [14]. The robot used in the present experiment, the CARLorado, was constructed from a Dagu Wild-Thumper All-Terrain chassis that could easily travel through difficult outdoor terrain. An IOIO-OTG microcontroller (SparkFun Electronics) communicated through a Bluetooth connection with the Android phone (Samsung Galaxy S5). The phone provided extra sensors such as a built-in accelerometer, gyroscope, compass, and global positioning system (GPS). The IOIO-OTG controlled a pan and tilt unit that held the phone, a motor controller for the robot wheels, and ultrasonic sensors for detecting obstacles. Instructions for building the robot can be found at: http://www.socsci.uci.edu/~jkrichma/ABR/. A differential steering technique was used, moving the left and right sides of the robot at different speeds for turning. The modularity of the platform made it easy to add extra units such as the IBM TrueNorth. Software for controlling the robot was written in Java using Android Studio. With various support libraries for the IOIO-OTG, open-source libraries for computer vision such as OpenCV, and sample Android-Based Robotics code (https://github.com/UCI-ABR), it was straightforward to develop intelligent controls.
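
    As an illustration of the differential steering described above, here is a minimal Java sketch (hypothetical names, not the actual Android-Based Robotics code) that maps a discrete left/forward/right command to per-side wheel speeds; the real controller drives the wheels through the IOIO-OTG and its motor controller.

    // Hypothetical helper: differential steering maps a discrete command to
    // per-side wheel speeds. Turning is achieved by running the two sides at
    // different speeds rather than by turning the wheels themselves.
    public class DifferentialSteeringSketch {
        enum Command { LEFT, FORWARD, RIGHT }

        // Returns {leftSpeed, rightSpeed} in the range [-1, 1].
        static double[] wheelSpeeds(Command cmd, double cruise, double turnGain) {
            switch (cmd) {
                case LEFT:    // slow the left side, speed up the right side to turn left
                    return new double[]{cruise - turnGain, cruise + turnGain};
                case RIGHT:   // mirror image for a right turn
                    return new double[]{cruise + turnGain, cruise - turnGain};
                default:      // FORWARD: both sides at cruising speed
                    return new double[]{cruise, cruise};
            }
        }

        public static void main(String[] args) {
            double[] s = wheelSpeeds(Command.LEFT, 0.5, 0.3);
            System.out.printf("left=%.2f right=%.2f%n", s[0], s[1]);
        }
    }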


    METHODS AND RESULTS
    A. Data Collection
    First, we created datasets of first-person video footage from the robot and the motor commands issued to the robot as it was manually driven along a mountain trail in Telluride, Colorado (Figures 5 and 8, top). This was done by creating an app in Android Studio that was run on both a Samsung Galaxy S5 smartphone and a Samsung Nexus 7 tablet (Figure 3). The smartphone was mounted on the pan and tilt unit of the robot with the camera facing ahead. JPEG images captured by the smartphone's camera were saved to an SD card at 30 frames per second, at a resolution of 176 by 144 pixels. Through a Wi-Fi Direct connection, the video frame data was streamed from the phone to a handheld tablet that controlled the robot. The tablet displayed controls for moving the robot forward and steering it left and right. These commands (left, right, forward) were streamed from the tablet to the smartphone over the same Wi-Fi Direct connection and saved on the smartphone as a text file. A total of four datasets were recorded on the same mountain trail, each covering a round trip of 0.5 km up and down a single trail segment. To account for different lighting conditions, we spread the recordings across two separate days, performing one recording in the morning and one in the afternoon of each day. In total we collected approximately 30 minutes of driving data. By matching the time stamps of the motor commands to the video images, we determined which commands corresponded to which images. Images that were not associated with a left, right, or forward movement (for example, when the robot was stopped) were excluded. Due to lack of time, only the first day of data collection was used for training.
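
    The label-matching step lends itself to a short sketch. The Java snippet below is hypothetical, assuming commands and frames are keyed by millisecond timestamps: it assigns to each frame the most recent motor command and drops frames whose nearest command is a stop, mirroring the exclusion described above.

    import java.util.*;

    // Hypothetical sketch of the labeling step: for each saved frame timestamp,
    // take the most recent motor command as the training label, and drop frames
    // whose nearest command is a stop.
    public class LabelFramesSketch {
        static Map<Long, String> labelFrames(List<Long> frameTimesMs,
                                             TreeMap<Long, String> commandLog) {
            Map<Long, String> labels = new LinkedHashMap<>();
            for (long t : frameTimesMs) {
                Map.Entry<Long, String> cmd = commandLog.floorEntry(t); // last command at or before t
                if (cmd == null || cmd.getValue().equals("stop")) continue; // exclude stops
                labels.put(t, cmd.getValue()); // "left", "right", or "forward"
            }
            return labels;
        }

        public static void main(String[] args) {
            TreeMap<Long, String> log = new TreeMap<>();
            log.put(1000L, "forward");
            log.put(1400L, "left");
            log.put(1900L, "stop");
            System.out.println(labelFrames(Arrays.asList(1100L, 1500L, 2000L), log));
            // {1100=forward, 1500=left}  (the frame at 2000 ms is dropped)
        }
    }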

    B. EEDN Framework
    We used the dataset to train a deep convolutional neural network with the Energy-Efficient Deep Neuromorphic Network (EEDN) framework, which structures a network to run efficiently on the TrueNorth [15]. In brief, a traditional CNN is transferred to the neuromorphic domain by connecting the neurons on the TrueNorth with the same connectivity as the original CNN. Input values to the original CNN are translated into input firing patterns on EEDN, and the resulting firing rates of each neuron correspond to the values seen in the original CNN. To distribute a convolutional operation among cores of the TrueNorth, the layers are divided along the feature dimension into groups (Figure 4). When a neuron targets multiple core inputs, exact duplicates of the neuron and its synaptic weights are created, either on the same core or on a different core. The response of each neuron is the binary thresholded sum of its synaptic input, in which the trinary weight values are determined by different combinations of two input lines. A more complete explanation of the EEDN flow and the structure of the convolutional network (1-chip version) can be found in [15]. The video frames were preprocessed by down-sampling them to a resolution of 44 by 36 pixels and separating them into red, green, and blue channels. The output is a single layer of three neuron populations, corresponding to the three classes of turning left, going straight, and turning right, as seen in Figure 5. Using the MatConvNet package, a Matlab toolbox for implementing convolutional neural networks, the network was trained to classify images into motor commands. For instance, if the image showed the road to be left of center, the CNN would learn the human-trained command of steering to the left. To test accuracy, the dataset was split into train and test sets by using every fifth frame as a test frame (20 percent of the dataset in total). We achieved an accuracy of over 90 percent; training took 10,000 iterations and a few hours. Training was performed separately from the TrueNorth chip, producing trinary synaptic weights (-1, 0, 1) that could be used interchangeably in a traditional CNN or in EEDN.
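
    The train/test split is simple enough to show directly. The sketch below (hypothetical helper names; the actual split was done in Matlab alongside MatConvNet) holds out every fifth frame as test data, giving the roughly 20 percent test fraction mentioned above.

    import java.util.*;

    // Minimal sketch of the split described above: every fifth labeled frame
    // is held out for testing (roughly 20 percent of the data).
    public class SplitSketch {
        static void split(List<String> framePaths,
                          List<String> train, List<String> test) {
            for (int i = 0; i < framePaths.size(); i++) {
                if (i % 5 == 4) test.add(framePaths.get(i));  // every fifth frame
                else            train.add(framePaths.get(i));
            }
        }

        public static void main(String[] args) {
            List<String> frames = new ArrayList<>();
            for (int i = 1; i <= 10; i++) frames.add("frame_" + i + ".jpg");
            List<String> train = new ArrayList<>(), test = new ArrayList<>();
            split(frames, train, test);
            System.out.println("train=" + train.size() + " test=" + test.size()); // train=8 test=2
        }
    }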

    C. Data Pipeline
    With the methods used in [15], the weights of the trained network were transferred to the TrueNorth NS1e. The CNN ran on the TrueNorth with input fed from the camera of the Samsung Galaxy S5 over a TCP/IP connection. To achieve this, the phone had to replicate the preprocessing used when training the network. This preprocessing was implemented on the phone using the Android OpenCV scaling function to downsample the images, after which the images were separated into red, green, and blue channels. Next, the filter kernels from the first layer of the CNN were pulled from the EEDN training output and applied to the image using a 2D convolution function from the Android OpenCV library. The result of the convolution was thresholded into a binary spiking format, such that any neuron with activity greater than zero was set to spike. The spiking input to the TrueNorth was sent in XYF format, where X, Y, and F are the three coordinates that identify a spiking neuron within a layer. At each tick of the TrueNorth NS1e, a frame was fed into the input layer by sending the XYF coordinates of all neurons that spiked for that frame. A detailed diagram of the pipeline is shown in Figure 7. Output from the TrueNorth NS1e was sent back to the smartphone through the TCP/IP connection in the form of a class histogram, which indicated the firing activity of the output neurons. The smartphone could then determine which output neuron population was most active and issue the corresponding motor command to the robot.
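
    A rough Java/OpenCV sketch of the phone-side encoding follows. It is illustrative only: it assumes the OpenCV Java bindings with the native library already loaded, treats kernels as a stand-in for the first-layer filters pulled from the EEDN training output, and, for simplicity, applies one 2D kernel per colour channel rather than summing responses across channels as a real first layer would.

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;
    import java.util.ArrayList;
    import java.util.List;

    // Illustrative sketch of the phone-side preprocessing: downsample, split
    // channels, convolve with first-layer kernels, threshold to binary spikes,
    // and emit XYF coordinates. `kernels` is a hypothetical stand-in for the
    // filters exported from EEDN training.
    public class SpikeEncoderSketch {

        // Encodes one camera frame into (x, y, f) spike coordinates.
        static List<int[]> encodeFrame(Mat bgrFrame, List<Mat> kernels) {
            // Downsample to the 44x36 input resolution used for training.
            Mat small = new Mat();
            Imgproc.resize(bgrFrame, small, new Size(44, 36));

            // Separate into colour channels (OpenCV stores frames as B, G, R).
            List<Mat> channels = new ArrayList<>();
            Core.split(small, channels);

            List<int[]> spikes = new ArrayList<>();
            int f = 0;
            for (Mat channel : channels) {
                for (Mat kernel : kernels) {
                    // Simplification: one 2D kernel per colour channel; a real
                    // first layer sums filter responses across all three channels.
                    Mat response = new Mat();
                    Imgproc.filter2D(channel, response, CvType.CV_32F, kernel);

                    // Binary threshold: any activity greater than zero becomes a spike.
                    for (int y = 0; y < response.rows(); y++) {
                        for (int x = 0; x < response.cols(); x++) {
                            if (response.get(y, x)[0] > 0) {
                                spikes.add(new int[]{x, y, f});  // XYF spike coordinate
                            }
                        }
                    }
                    f++;
                }
            }
            return spikes;  // sent to the NS1e over TCP/IP, one frame per tick
        }
    }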

    D. Physical Connection of Platforms
    The TrueNorth was powered by connecting the robot’s battery terminals from the motor controller to a two-pin battery connection on the NS1e board. It was then secured with velcro to the top of the housing for the IOIO and motor controller. A picture of the setup is seen in Figure 6. The robot, microcontroller, motor controller, servos, and NS1e were powered by a single Duratrax NiMH Onyx 7.2V 5000mAh battery.

    E. Testing
    With this wireless, battery-powered setup, the trained CNN was able to successfully drive the robot on the mountain trail (Figure 8). A wireless hotspot was necessary to create a TCP connection between the TrueNorth NS1e and the Android phone. We placed the robot on the same section of the trail used for training. The robot steered according to the class histograms received from the TrueNorth output, which provided total firing counts for each of the three output neuron populations. Steering was done by determining from the histogram which output population fired the most and steering in that direction. As a result, the robot stayed near the center of the trail, steering away from the green brush on both sides. At some points, the robot did travel off the trail and needed to be manually redirected back towards the center. The robot drove approximately 0.5 km uphill and then returned 0.5 km downhill with minimal intervention. It should be noted that there was a steep dropoff on the south side of the trail, so extra care was taken to make sure the robot did not tumble down the mountainside. A video of the path following performance can be seen at .
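
    The steering decision itself reduces to an argmax over the class histogram. A minimal Java sketch (hypothetical names, not the actual app code) is shown below: given the firing counts for the left, straight, and right populations returned by the NS1e, it picks the most active class and maps it to a motor command.

    // Minimal sketch of the steering decision: the NS1e returns total firing
    // counts for the three output populations, and the phone steers toward
    // whichever class fired most.
    public class SteerFromHistogramSketch {
        enum Command { LEFT, FORWARD, RIGHT }

        // counts[0] = turn left, counts[1] = go straight, counts[2] = turn right
        static Command decide(int[] counts) {
            int best = 0;
            for (int i = 1; i < counts.length; i++) {
                if (counts[i] > counts[best]) best = i;
            }
            switch (best) {
                case 0:  return Command.LEFT;
                case 2:  return Command.RIGHT;
                default: return Command.FORWARD;
            }
        }

        public static void main(String[] args) {
            System.out.println(decide(new int[]{12, 48, 30}));  // FORWARD
        }
    }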

    DISCUSSION
    To the best of our knowledge, the present setup represents the first time the TrueNorth NS1e has been embedded on a mobile platform under closed-loop control. It demonstrated that a low-power neuromorphic chip could communicate with a smartphone in an autonomous system. It also showed that a CNN built with the EEDN framework was sufficient to achieve a self-driving application. Furthermore, the complete system ran in real time and was powered by a single off-the-shelf, hobby-grade battery, demonstrating the power efficiency of the TrueNorth NS1e. An expansion of this work would require better quantification of the robot's performance. This could be achieved by tracking the number of times the robot had to be manually redirected, or by comparing the CNN classifier accuracy on the training set of images with its accuracy on the images captured in real time. Increasing the amount of training data would likely increase the classifier accuracy, since only 15 minutes of data were used for training, compared with other self-driving CNNs [7], [8], which have used several days or even weeks of data. Our success was due in part to the simplicity of the landscape, with an obvious red hue to the dirt road and a bold green hue for the bordering areas. It would therefore be useful to test the network in more complex settings. Additionally, while the main purpose of the project was to demonstrate a practical integration of neuromorphic and non-neuromorphic hardware, it would also be useful to calculate the power savings of running the CNN computations on neuromorphic hardware instead of directly on the smartphone.

    CONCLUSION
    In this trailblazing study, we have demonstrated a novel closed-loop system between a robotic platform and a neuromorphic chip operating in a rugged outdoor environment. We have shown the advantages of integrating neuromorphic hardware with popular machine learning methods such as deep convolutional neural networks, and that neuromorphic hardware can be integrated with smartphone technology and off-the-shelf components to produce a complete autonomous system. The present setup is one of the first demonstrations of using neuromorphic hardware in an autonomous, embedded system.


    ACKNOWLEDGMENT
    The authors would like to thank Andrew Cassidy and Rodrigo Alvarez-Icaza of IBM for their support. This work was supported by the National Science Foundation Award number 1302125 and Northrop Grumman Aerospace Systems. We also would like to thank the Telluride Neuromorphic Cognition Engineering Workshop, The Institute of Neuromorphic Engineering, and their National Science Foundation, DoD and Industrial Sponsors.


    REFERENCES
    [1] J. Backus, “Can programming be liberated from the von Neumann style? A functional style and its algebra of programs,” Communications of the ACM, vol. 21, no. 8, pp. 613–641, 1978.
    [2] C. Mead, “Neuromorphic electronic systems,” Proceedings of the IEEE, vol. 78, no. 10, pp. 1629–1636, 1990.
    [3] G. Indiveri, B. Linares-Barranco, T. J. Hamilton, A. van Schaik, R. Etienne-Cummings, T. Delbruck, S.-C. Liu, P. Dudek, P. Häfliger, S. Renaud et al., “Neuromorphic silicon neuron circuits,” Frontiers in Neuroscience, vol. 5, p. 73, 2011.
    [4] S. Thrun, “Toward robotic cars,” Communications of the ACM, vol. 53, no. 4, pp. 99–106, 2010.
    [5] J. Levinson, J. Askeland, J. Becker, J. Dolson, D. Held, S. Kammel, J. Z. Kolter, D. Langer, O. Pink, V. Pratt et al., “Towards fully autonomous driving: Systems and algorithms,” in Intelligent Vehicles Symposium (IV), 2011 IEEE. IEEE, 2011, pp. 163–168.
    [6] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, “Backpropagation applied to handwritten zip code recognition,” Neural computation, vol. 1, no. 4, pp. 541–551, 1989.
    [7] B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue et al., “An empirical evaluation of deep learning on highway driving,” arXiv preprint arXiv:1504.01716, 2015.
    [8] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang et al., “End to end learning for self-driving cars,” arXiv preprint arXiv:1604.07316, 2016.
    [9] J. H. Lee, T. Delbruck, and M. Pfeiffer, “Training deep spiking neural networks using backpropagation,” arXiv preprint arXiv:1608.08782, 2016.
    [10] Y. Cao, Y. Chen, and D. Khosla, “Spiking deep convolutional neural networks for energy-efficient object recognition,” International Journal of Computer Vision, vol. 113, no. 1, pp. 54–66, 2015.
    [11] J. Conradt, F. Galluppi, and T. C. Stewart, “Trainable sensorimotor mapping in a neuromorphic robot,” Robotics and Autonomous Systems, vol. 71, pp. 60–68, 2015.
    [12] F. Galluppi, C. Denk, M. C. Meiner, T. C. Stewart, L. A. Plana, C. Eliasmith, S. Furber, and J. Conradt, “Event-based neural computing on an autonomous mobile platform,” in 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014, pp. 2862–2867.
    [13] P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada, F. Akopyan, B. L. Jackson, N. Imam, C. Guo, Y. Nakamura et al., “A million spiking-neuron integrated circuit with a scalable communication network and interface,” Science, vol. 345, no. 6197, pp. 668–673, 2014.
    [14] N. Oros and J. L. Krichmar, “Smartphone based robotics: Powerful, flexible and inexpensive robots for hobbyists, educators, students and researchers,” Center for Embedded Computer Systems, University of California, Irvine, Irvine, California, Tech. Rep. 13-16, 2013.
    [15] S. K. Esser, P. A. Merolla, J. V. Arthur, A. S. Cassidy, R. Appuswamy, A. Andreopoulos, D. J. Berg, J. L. McKinstry, T. Melano, D. R. Barch, C. di Nolfo, P. Datta, A. Amir, B. Taba, M. D. Flickner, and D. S. Modha, “Convolutional networks for fast, energy-efficient neuromorphic computing,” Proceedings of the National Academy of Sciences, vol. 113, no. 41, pp. 11441–11446, 2016.
    [16] F. Akopyan, “Design and tool flow of IBM’s TrueNorth: An ultra-low power programmable neurosynaptic chip with 1 million neurons,” in Proceedings of the 2016 International Symposium on Physical Design. ACM, 2016, pp. 59–60.
 