To understand the importance of the patent, these are the patents referenced within it:
https://patents.google.com/patent/US8648867B2/en
Graphic processor based accelerator system and method
Abstract
An accelerator system is implemented on an expansion card comprising a printed circuit board having (a) one or more graphics processing units (GPU), (b) two or more associated memory banks (logically or physically partitioned), (c) a specialized controller, and (d) a local bus providing signal coupling compatible with the PCI industry standards (this includes but is not limited to PCI-Express, PCI-X, USB 2.0, or functionally similar technologies). The controller handles most of the primitive operations needed to set up and control GPU computation. As a result, the computer's central processing unit (CPU) is freed from this function and is dedicated to other tasks. In this case a few controls (simulation start and stop signals from the CPU and the simulation completion signal back to CPU), GPU programs and input/output data are the information exchanged between CPU and the expansion card. Moreover, since on every time step of the simulation the results from the previous time step are used but not changed, the results are preferably transferred back to CPU in parallel with the computation.
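The key idea in that last sentence is overlap: each time step only reads the previous step's results, so the copy back to the CPU can run concurrently with the next step's computation. A rough sketch of that pattern in plain Python (threads standing in for the GPU/DMA engine; `compute_step` and `transfer_back` are illustrative names, not from the patent):

```python
import threading

def simulate(num_steps, compute_step, transfer_back):
    """While step t is being computed, the immutable results of step
    t-1 are copied back to the host on a separate thread."""
    prev_result = None
    for t in range(num_steps):
        xfer = None
        if prev_result is not None:
            # Previous results are read-only, so this copy can safely
            # overlap with the current step's computation.
            xfer = threading.Thread(target=transfer_back, args=(prev_result,))
            xfer.start()
        prev_result = compute_step(t)  # stands in for the GPU kernel launch
        if xfer is not None:
            xfer.join()
    transfer_back(prev_result)  # final step's results
```

On real hardware this would be done with asynchronous DMA transfers rather than host threads, but the scheduling structure is the same.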
https://patents.google.com/patent/US8131659B2/en
Field-programmable gate array based accelerator system
Current Assignee: Microsoft Technology Licensing LLC
Abstract
Accelerator systems and methods are disclosed that utilize FPGA technology to achieve better parallelism and processing speed. A Field Programmable Gate Array (FPGA) is configured to have a hardware logic performing computations associated with a neural network training algorithm, especially a Web relevance ranking algorithm such as LambdaRank. The training data is first processed and organized by a host computing device, and then streamed to the FPGA for direct access by the FPGA to perform high-bandwidth computation with increased training speed. Thus, large data sets such as that related to Web relevance ranking can be processed. The FPGA may include a processing element performing computations of a hidden layer of the neural network training algorithm. Parallel computing may be realized using a single instruction multiple data streams (SIMD) architecture with multiple arithmetic logic units in the FPGA.
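For anyone wondering what "a processing element performing computations of a hidden layer" means in practice, it is essentially one weighted sum per hidden unit, all evaluated in parallel. A minimal sketch, with NumPy vectorization standing in for the FPGA's parallel ALUs (the activation function is my assumption; the abstract doesn't name one):

```python
import numpy as np

def hidden_layer(x, w, b):
    """Hidden-layer forward pass: each row of w drives one hidden unit,
    and the matrix-vector product computes all units at once, SIMD-style.
    tanh is a common choice of activation, assumed here for illustration."""
    return np.tanh(w @ x + b)
```

On the FPGA each row of the product would map to one arithmetic logic unit, which is where the SIMD speed-up over a sequential CPU loop comes from.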
---
- Allowed: Neural processor based accelerator system and method
Last | Change | Mkt cap
---|---|---
17.5¢ | 0.005 (2.94%) | $342.8M

Open | High | Low | Value | Volume
---|---|---|---|---
17.5¢ | 18.0¢ | 17.0¢ | $737.6K | 4.203M
Buyers (Bids)

No. | Vol. | Price($)
---|---|---
69 | 1587304 | 0.170

Sellers (Offers)

Price($) | Vol. | No.
---|---|---
0.175 | 213251 | 4
Market Depth

Buyers (Bids)

No. | Vol. | Price($)
---|---|---
69 | 1587304 | 0.170
49 | 923903 | 0.165
67 | 1409816 | 0.160
23 | 964104 | 0.155
69 | 1820964 | 0.150

Sellers (Offers)

Price($) | Vol. | No.
---|---|---
0.175 | 123251 | 3
0.180 | 1205075 | 25
0.185 | 924511 | 27
0.190 | 1431809 | 21
0.195 | 984971 | 15
Last trade - 4:10pm 09/08/2024 (20 minute delay)
[BRN (ASX) Chart]