Abstract
As part of a Phase 1 feasibility study, we evaluated the viability of developing Cyber-NeuroRT, a real-time, HPC-scale neuromorphic cyber agent software package. We evaluated several scalable neuromorphic techniques for detecting and predicting cybersecurity threats, compared full-precision machine learning models with neuromorphic models, and developed an end-to-end proof of concept (POC). Upon completion of the Phase 2 prototype, we expect dramatic reductions in latency and power, up to 100x, without sacrificing accuracy, enabling quicker response times and savings in operating costs. Cyber-NeuroRT will be a real-time, neuromorphic-processor-based monitoring tool that predicts and raises alerts on cybersecurity threats and warnings using the Intel Loihi 1 and BrainChip Akida neuromorphic platforms.

For our Phase 1 POC development, we used 450,000 Zeek log entries containing a mixture of normal and malicious data to train the supervised ML models. The study covered eight attack types: backdoor, DDoS, DoS, injection, password, ransomware, scanning, and XSS. Source files were Zeek log files and packet capture (PCAP) files containing both malicious and normal records. We used both supervised and unsupervised algorithms, including spiking neural networks (SNNs) and CNN-to-SNN conversion with unsupervised and supervised learning rules.

To build a full-fledged prototype of Cyber-NeuroRT, we plan to transition the proof-of-concept work to a larger data set with additional threat types and other datasets from an HPC environment. HPC environments operate at larger scales than traditional IT domains, and our solution should be able to monitor and predict events at more than 160,000 inferences per second. Weight precision and the number of neurons are two SNN software parameters to explore. The chip can be tuned between high- and low-power modes, and performance can be studied as a function of power draw. Evaluation will be performed across a variety of datasets and parameter settings to estimate deployment performance. We will work on efficiency scaling of SNN algorithms in terms of accuracy and hardware metrics such as power and energy consumption. Since cybersecurity attack classification is a temporal process, we will leverage recent advancements in the algorithm community to map the temporal dynamics of SNNs onto recurrent architectures. Further, to adapt to novel attack vectors, we will explore unsupervised learning techniques in a dynamic network architecture, growing or shrinking the network as novel attack vectors arise. We will also perform an algorithm-hardware co-design analysis, ensuring that our algorithm proposals cater to specific constraints of the Akida and Loihi processors, such as network size and bit quantization levels.
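To make the recurrent mapping and quantization constraints above concrete, the following is a minimal sketch, assuming a PyTorch environment, of a leaky integrate-and-fire (LIF) layer expressed as a recurrent state update, with uniform weight quantization to a configurable bit width. All class names, shapes, and constants here are illustrative assumptions, not the Cyber-NeuroRT implementation or the Akida/Loihi toolchain APIs.

```python
# Illustrative sketch only: a leaky integrate-and-fire (LIF) layer written as a
# recurrent update, plus simple uniform weight quantization. Names, shapes, and
# constants are assumptions for illustration.
import torch
import torch.nn as nn


def quantize_weights(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Uniformly quantize weights to a signed grid with `bits`-bit resolution."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    return torch.round(w / scale).clamp(-qmax, qmax) * scale


class LIFRecurrentLayer(nn.Module):
    """Maps SNN temporal dynamics onto a recurrent state update: the membrane
    potential decays each time step, integrates input current, and emits a
    spike (then resets) when it crosses the firing threshold."""

    def __init__(self, in_features: int, out_features: int,
                 beta: float = 0.9, threshold: float = 1.0, weight_bits: int = 4):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features, bias=False)
        self.beta = beta            # membrane decay per time step
        self.threshold = threshold  # firing threshold
        self.weight_bits = weight_bits

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: (time_steps, batch, in_features) -> spikes: (time_steps, batch, out_features)
        w_q = quantize_weights(self.fc.weight, self.weight_bits)
        mem = torch.zeros(x_seq.shape[1], self.fc.out_features)
        spikes = []
        for x_t in x_seq:
            current = torch.nn.functional.linear(x_t, w_q)
            mem = self.beta * mem + current          # leaky integration
            spk = (mem >= self.threshold).float()    # hard threshold
            mem = mem - spk * self.threshold         # soft reset on spike
            spikes.append(spk)
        return torch.stack(spikes)


# Example: 20 time steps of rate-coded features from a batch of 8 flow records.
if __name__ == "__main__":
    layer = LIFRecurrentLayer(in_features=32, out_features=10, weight_bits=4)
    x = (torch.rand(20, 8, 32) < 0.3).float()  # Bernoulli/rate-coded input spikes
    out = layer(x)
    print(out.shape)  # torch.Size([20, 8, 10])
```

In a deployment-oriented study of this kind, the weight bit width and neuron count would be swept alongside the chip's power modes to trade accuracy against power draw, as described above.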
3.1 Some of the features of the Cyber-NeuroRT prototype shall include:
- Ability to monitor, predict, and provide system-wide alerts of impending cybersecurity threats and warnings at scale by collecting and prioritizing data from Zeek logs and PCAP files streamed in real time or processed in batch.
- Expanded and refined training techniques, such as CNN-to-SNN conversion, direct backpropagation training through surrogate gradient methods, and local unsupervised Spike-Timing-Dependent Plasticity (STDP) approaches (see the sketch after this list).
- Comparison of threat-detection performance between neuromorphic and GPU-based systems, and between the BrainChip Akida and Intel Loihi processors.
- Ability to process data system-wide at an unprecedented scale, enabling adaptive, streaming analysis for monitoring and maintaining large-scale scientific computing integrity.
- Dashboards for security administrators and security analysts.
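As a companion to the training techniques listed above, here is a minimal sketch, again in PyTorch and purely illustrative, of the surrogate-gradient trick: the forward pass applies a hard spiking threshold, while the backward pass substitutes a smooth fast-sigmoid derivative so ordinary backpropagation can train the network. The slope constant and tensor shapes are assumptions, not the project's actual training code.

```python
# Illustrative sketch only: a Heaviside spike with a fast-sigmoid surrogate gradient,
# the standard trick that lets spiking layers be trained with backpropagation.
import torch


class SurrogateSpike(torch.autograd.Function):
    """Forward: hard threshold (spike = 1 if membrane exceeds threshold).
    Backward: replace the zero-almost-everywhere derivative with a
    smooth fast-sigmoid surrogate so gradients can flow."""

    slope = 25.0  # sharpness of the surrogate; a tunable assumption

    @staticmethod
    def forward(ctx, mem_minus_threshold):
        ctx.save_for_backward(mem_minus_threshold)
        return (mem_minus_threshold > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        surrogate = 1.0 / (SurrogateSpike.slope * x.abs() + 1.0) ** 2
        return grad_output * surrogate


# Usage: spikes are non-differentiable in the forward pass, but the surrogate
# lets a loss defined on spike activity backpropagate into membrane potentials.
mem = torch.randn(8, 10, requires_grad=True)   # membrane potential minus threshold
spikes = SurrogateSpike.apply(mem)
loss = spikes.sum()
loss.backward()
print(mem.grad.shape)  # torch.Size([8, 10])
```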
* Information listed above is at the time of submission. *
This SBIR evidently reached a successful conclusion.