
All roads lead to JAST, page-36

    This seems a very lightweight "invention". The specification only has two drawings, one a flow chart, and the other a rudimentary block diagram.

    So I doubt it will be a competitor for Akida, the various patent specifications of which provide detailed descriptions of the SNN and CNN2SNN conversion.

    The description is vague and imprecise.

    This is the entire description of the invention:


    DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
    FIG. 1 schematically shows a representation of a method (10) for creating a pulsed neural network (SNN).
    The method begins with step 11. In this step, a deep neural network is provided. This deep neural network includes a plurality of layers, each of which is connected to one another. The layers may each include a plurality of neurons. The deep neural network may be an already trained deep neural network or a deep neural network, in which the parameters are randomly initialized, for example.

    In subsequent step 12, the deep neural network is assigned a control pattern. The control pattern characterizes in which sequence the layers ascertain their intermediate variables. For example, the control pattern may characterize that the layers calculate their output variables sequentially one after the other. In this case, each layer must wait until it is provided the respective input variable, so that this layer is then able to ascertain its intermediate variable. For example, the control pattern may also characterize that the layers are executed completely in parallel (cf. streaming rollout).
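    (My aside: to pin down what this "control pattern" seems to mean, here's a rough Python sketch of the two extremes the patent mentions — sequential layer updates versus fully parallel ones, as in a streaming rollout. The class and function names are my own guesses, not anything from the specification.)

```python
# Hypothetical sketch of step 12's "control pattern": the same layer stack
# evaluated sequentially vs. fully in parallel. All names are illustrative.

class Layer:
    def __init__(self, name):
        self.name = name
        self.state = 0.0  # last intermediate variable this layer produced

    def compute(self, x):
        self.state = x + 1.0  # stand-in for the real layer function
        return self.state

def step_sequential(layers, x):
    # Each layer waits for its predecessor's fresh output:
    # the input traverses the whole stack in one time step.
    for layer in layers:
        x = layer.compute(x)
    return x

def step_parallel(layers, x):
    # All layers fire at once, each reading its predecessor's *previous*
    # state, so new input propagates only one layer per time step.
    inputs = [x] + [l.state for l in layers[:-1]]
    for layer, inp in zip(layers, inputs):
        layer.compute(inp)
    return layers[-1].state

layers = [Layer(f"L{i}") for i in range(3)]
print(step_sequential(layers, 0.0))  # 3.0 after a single sequential step

layers2 = [Layer(f"L{i}") for i in range(3)]
for _ in range(3):
    out = step_parallel(layers2, 0.0)
print(out)  # 3.0 only after 3 parallel steps
```

    The point of the parallel pattern is that every layer can run every tick, which is what makes the later per-connection delays meaningful.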

    Once step 12 has been concluded, step 13 follows if the deep neural network provided in step 11 is not yet trained. Step 13 is skipped if the neural network has already been trained.

    In step 13, the deep neural network is trained using the control pattern. In this case, training takes place using training data, which includes training input variables and respectively assigned training output variables, in such a way that the deep neural network ascertains as a function of the training input variables their respectively assigned training output variables. In the process, the parameters of the deep neural network may be adapted with the aid of a gradient descent method, so that the deep neural network ascertains the respectively assigned training output variables.

    The gradient descent method may optimize a “categorical cross entropy” cost function as a function of the parameters of the deep neural network. The input variable, for example, an image, is preferably applied multiple times in succession during the training to the deep neural network, and the deep neural network ascertains multiple times one output variable each to this input variable based on the control pattern. Alternatively, a sequence of input variables may also be used.
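    (My aside: as best I can decode step 13, the same input is presented several times per example and a categorical cross entropy loss is driven down by gradient descent. Here's a toy NumPy sketch of that loop — the single-layer "network", `n_repeats`, learning rate, and all names are my assumptions, not the patent's.)

```python
# Toy sketch of step 13's training as I read it: repeated presentation of
# one input, categorical cross entropy, plain gradient descent. Illustrative
# only -- the patent gives no architecture or hyperparameters.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # parameters of a toy single-layer "deep network"

def forward(x, W):
    logits = x @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()  # softmax class probabilities

def train_step(x, label, W, lr=0.1, n_repeats=5):
    # Present the same input n_repeats times and average the gradient,
    # mimicking "applied multiple times in succession during the training".
    grad = np.zeros_like(W)
    onehot = np.eye(3)[label]
    for _ in range(n_repeats):
        p = forward(x, W)
        grad += np.outer(x, p - onehot)  # softmax + cross-entropy gradient
    return W - lr * grad / n_repeats

x, label = rng.normal(size=4), 2
for _ in range(50):
    W = train_step(x, label, W)
print(forward(x, W).argmax())  # converges to the trained class, 2
```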

    Once step 13 or step 12 has been concluded, step 14 follows. In this step, the trained deep neural network is converted into a pulsed neural network. During the conversion, the architecture and the parameterization of the deep neural network are used in order to create the pulsed neural network. The activations of the neurons of the deep neural network may be translated into proportional fire rates of the neurons of the pulsed neural network. For a detailed explanation regarding the behavior of the conversion, reference is made to the document “Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification” cited at the outset. In addition, the connections of the pulsed neural network are each assigned a delay as a function of the control pattern of the deep neural network used. An argmax-output layer of the pulsed neural network is preferably used, which counts all arriving pulses over a predefinable time interval t readout and applies the mathematical operator argmax across the counted pulses of the neurons of the argmax output layer.
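    (My aside: the two concrete mechanisms I can extract from step 14 are (a) analog activations mapped to proportional fire rates and (b) an argmax output layer that just counts spikes over a readout window. A minimal sketch, assuming a Poisson spike model — the rate ceiling, window length, and Poisson choice are my assumptions, not stated in the specification.)

```python
# Minimal sketch of step 14's conversion ideas: proportional rate coding
# plus a spike-counting argmax readout. Assumed, not taken from the patent.

import random

random.seed(1)

def activation_to_rate(a, a_max, max_rate=100.0):
    # Fire rate proportional to the (non-negative) analog activation.
    return max(0.0, a) / a_max * max_rate

def argmax_readout(rates, t_readout=1.0, dt=0.001):
    # Count Poisson spikes per output neuron over the readout interval
    # t_readout, then apply argmax across the counts.
    counts = [0] * len(rates)
    for _ in range(int(t_readout / dt)):
        for i, r in enumerate(rates):
            if random.random() < r * dt:
                counts[i] += 1
    return counts.index(max(counts)), counts

activations = [0.2, 1.5, 0.7]
rates = [activation_to_rate(a, max(activations)) for a in activations]
winner, counts = argmax_readout(rates)
print(winner)  # the class with the highest analog activation wins: 1
```

    Note how little of the hard part (the per-connection delays derived from the control pattern) this pins down — the patent leaves that mapping entirely unspecified.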

    Optional step 15 may then be carried out, in which the pulsed neural network is operated as a function of the assigned delays.

    The pulsed neural network may be used, for example, for an at least semi-autonomous robot. The at least semi-autonomous robot may be an at least semi-autonomous vehicle, for example. In a further exemplary embodiment, the at least semi-autonomous robot may be a service robot, an assembly robot or stationary production robot, alternatively, an autonomous flying object, such as a drone.

    In one preferred specific embodiment, the at least semi-autonomous vehicle includes an event-based camera. This camera is connected to the pulsed neural network, which ascertains at least one output variable as a function of provided camera images. The output variable may be forwarded to a control unit.

    The control unit controls an actuator as a function of the output variable; preferably, it controls this actuator in such a way that vehicle 10 carries out a collision-free maneuver. In the first exemplary embodiment, the actuator may be a motor or a braking system of the vehicle. In a further exemplary embodiment, the semi-autonomous robot may be a tool, a work machine or a production robot. A material of a workpiece may be classified with the aid of the pulsed neural network. The actuator in this case may be a motor that drives a grinding head.

    FIG. 2 schematically shows a representation of a device 20 for training the deep neural network, in particular, for carrying out the steps for training. Device 20 includes a training module 21 and a module 22 to be trained. This module 22 to be trained contains the deep neural network. Device 20 trains the deep neural network as a function of output variables of the deep neural network and, preferably using predefinable training data. The training data expediently include a plurality of detected images, or sound sequences, text excerpts, event-based signals, radar signals, LIDAR signals or ultrasonic signals, each of which are labeled. During the training, parameters of the deep neural network stored in a memory 23 are adapted.

    The device further includes a processing unit 24 and a machine-readable memory element 25. A computer program may be stored on memory element 25, which includes commands which, when the commands are executed on processing unit 24, result in processing unit 24 carrying out the method for creating the pulsed neural network as shown, for example, in FIG. 1.

    This is so vague and imprecise that I can't imagine why they paid the money to file the patent application.

    Ella would not have approved.
    Last edited by BarrelSitter: 03/05/21
 