
Allowed: Neural processor based accelerator system and method

    To understand the importance of the allowed patent, note that these earlier patents are referenced within it:


    https://patents.google.com/patent/US8648867B2/en

    Graphic processor based accelerator system and method

    Abstract
    An accelerator system is implemented on an expansion card comprising a printed circuit board having (a) one or more graphics processing units (GPU), (b) two or more associated memory banks (logically or physically partitioned), (c) a specialized controller, and (d) a local bus providing signal coupling compatible with the PCI industry standards (this includes but is not limited to PCI-Express, PCI-X, USB 2.0, or functionally similar technologies). The controller handles most of the primitive operations needed to set up and control GPU computation. As a result, the computer's central processing unit (CPU) is freed from this function and is dedicated to other tasks. In this case a few controls (simulation start and stop signals from the CPU and the simulation completion signal back to CPU), GPU programs and input/output data are the information exchanged between CPU and the expansion card. Moreover, since on every time step of the simulation the results from the previous time step are used but not changed, the results are preferably transferred back to CPU in parallel with the computation.
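
    For anyone wanting to picture what "transferred back to CPU in parallel with the computation" means in practice, here is a minimal host-side Python sketch of that overlap pattern. Because each time step only reads (never modifies) the previous step's results, copying those results back to the host can run alongside the next step's computation. The function names, the toy update rule, and the array sizes are made-up stand-ins for illustration, not details from the patent.

    ```python
    import threading
    import numpy as np

    def device_compute(state):
        # Stand-in for a GPU kernel: advance the simulation by one time step.
        return state * 0.99 + 0.01

    def copy_to_host(result, host_buffers, step):
        # Stand-in for the DMA transfer of a finished time step back to CPU memory.
        host_buffers[step] = result.copy()

    def run_simulation(steps, n=1_000_000):
        state = np.random.rand(n).astype(np.float32)
        host_buffers = {}
        copy_thread = None
        for step in range(steps):
            new_state = device_compute(state)   # compute the next time step
            if copy_thread is not None:
                copy_thread.join()              # previous copy must have finished
            # The next step only reads this result and never changes it, so it
            # can be copied back to the host while that step computes.
            copy_thread = threading.Thread(
                target=copy_to_host, args=(new_state, host_buffers, step))
            copy_thread.start()
            state = new_state
        if copy_thread is not None:
            copy_thread.join()
        return host_buffers

    if __name__ == "__main__":
        buffers = run_simulation(steps=5)
        print(len(buffers), "time steps copied back to the host")
    ```

    In the patented system this overlap is handled by the on-card controller rather than the CPU; the sketch above only shows why the overlap is possible at all.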



    https://patents.google.com/patent/US8131659B2/en

    Field-programmable gate array based accelerator system

    Current Assignee: Microsoft Technology Licensing LLC

    Abstract
    Accelerator systems and methods are disclosed that utilize FPGA technology to achieve better parallelism and processing speed. A Field Programmable Gate Array (FPGA) is configured to have a hardware logic performing computations associated with a neural network training algorithm, especially a Web relevance ranking algorithm such as LambdaRank. The training data is first processed and organized by a host computing device, and then streamed to the FPGA for direct access by the FPGA to perform high-bandwidth computation with increased training speed. Thus, large data sets such as that related to Web relevance ranking can be processed. The FPGA may include a processing element performing computations of a hidden layer of the neural network training algorithm. Parallel computing may be realized using a single instruction multiple data streams (SIMD) architecture with multiple arithmetic logic units in the FPGA.
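
    The streaming-plus-SIMD arrangement in that abstract can be sketched the same way: the host organizes the training data and feeds it in batches to a processing element that evaluates the hidden layer as one vectorized operation. This is only a minimal numpy illustration of the idea; the tanh activation, the layer sizes, and the batch size are assumptions for the example, not details from the patent.

    ```python
    import numpy as np

    def hidden_layer_pe(batch, weights, bias):
        # Stand-in for the FPGA processing element: the hidden layer is applied
        # to a whole batch at once as a single vectorized (SIMD-like) operation.
        return np.tanh(batch @ weights + bias)

    def stream_batches(data, batch_size):
        # Host side: the pre-organized training data is streamed in fixed-size
        # chunks, mimicking the high-bandwidth feed to the FPGA.
        for start in range(0, len(data), batch_size):
            yield data[start:start + batch_size]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        features = rng.random((10_000, 64), dtype=np.float32)  # query-document features
        weights = rng.standard_normal((64, 16)).astype(np.float32)
        bias = np.zeros(16, dtype=np.float32)

        hidden_outputs = [hidden_layer_pe(b, weights, bias)
                          for b in stream_batches(features, batch_size=512)]
        print(sum(h.shape[0] for h in hidden_outputs), "examples processed")
    ```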

