BRN (ASX): BrainChip Holdings Ltd

New Partnership, page-36

    The following three extracts provide an answer:

    “REASONS FOR SPACECRAFT VIBRATION TESTING

    The primary reason for performing a spacecraft level vibration test is to follow the “Test Like You Fly” (TLYF) philosophy in which the vibration test provides the final chance to verify that the as-built payload will perform as expected after exposure to the low-frequency launch environment. The spacecraft level vibration test along with the spacecraft level acoustic test and shock test covers the full range of expected dynamic environments that the payload will experience during launch.
    The spacecraft level vibration test is the only test that simulates the low/mid-frequency mechanically transmitted launch vibration environment. Acoustics tends to drive structural response above 100 Hz for most spacecraft modes. The acoustic 1/2 wavelength must be less than the smallest S/C dimension for significant excitation of non-baffled panels. At 100 Hz, this characteristic dimension is roughly 5 feet, so that below 100 Hz only large surface area items such as solar arrays and antenna dishes will respond to direct acoustic input. For most launch vehicles the acoustic spectrum rolls off quickly below 100 Hz, such that the input is roughly 10 dB below the peak SPL.

    Most primary spacecraft structure and very heavy components that attach to the spacecraft will not be excited by acoustics and will respond primarily to mechanically transmitted energy. One critical reason for running both a vibration test and an acoustic test at the spacecraft level is that each type of input excites the spacecraft structure differently and will screen for different types of failure modes. This is illustrated by the fact that both a coupled loads analysis and a separate vibro-acoustic analysis are required to develop the full set of launch loads and environments experienced by the spacecraft and the hardware mounted to it.
    With all spacecraft, and especially with the spacecraft produced by GSFC and JPL that typically include very sensitive one-of-a-kind science instruments, there is always the concern about the impact that workmanship will have on the ability of the hardware to perform as expected. The spacecraft level vibration test is the only test that will put significant loads into both primary and secondary structure, which responds dynamically to the low-frequency launch environment. This means it is the only test that will verify the installation and workmanship of structural interfaces between the spacecraft and attached hardware. Typically all components and most subsystems are tested separately and can be considered qualified for launch based on that lower level of assembly testing. But the spacecraft vibration test is the only test that will screen for workmanship issues related to installation of the hardware on the spacecraft.
    Finally, the spacecraft that are being built by JPL and GSFC are typically one-of-a-kind spacecraft with instruments and sensors that are fabricated and tested by many different organizations from aerospace contractors, international partners and University organizations. These spacecraft are not multiple copies of production hardware in which workmanship and design issues may have already been addressed in prior builds. For these types of payloads, we rely very heavily on system level testing (vibration, acoustic, and shock) to uncover design and workmanship issues which may have been missed based on testing at lower levels of assembly due to test limitations, inadequate test specification, or inadequate simulation of boundary conditions.”
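
    As a quick sanity check on the "roughly 5 feet" figure in the extract above: the acoustic half-wavelength is c/(2f). A minimal Python sketch, where the speed-of-sound value (~343 m/s in air) is my assumption, not something stated in the extract:

```python
# Sanity check: acoustic half-wavelength at 100 Hz.
# Assumption (mine, not from the extract): speed of sound ~343 m/s in air.
SPEED_OF_SOUND_M_S = 343.0
M_TO_FT = 3.28084

def half_wavelength_ft(freq_hz: float) -> float:
    """Half the acoustic wavelength at freq_hz, in feet: c / (2 * f)."""
    return SPEED_OF_SOUND_M_S / (2.0 * freq_hz) * M_TO_FT

print(f"{half_wavelength_ft(100.0):.1f} ft")  # ~5.6 ft, i.e. "roughly 5 feet"
```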

    AND:
    For the past several years, much has been written about systems data collection onboard modern airplanes: GE jet engines collect information at 5,000 data points per second; a Boeing 787 generates an average of 500 GB of system data a flight; an Airbus A380 is fitted with as many as 25,000 sensors. (19 Dec 2014)
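
    To put that 500 GB per flight in perspective, it implies a sustained average data rate in the low tens of megabytes per second. A rough back-of-the-envelope sketch, where the 10-hour flight duration is my assumption purely for illustration:

```python
# Rough average rate implied by "500 GB of system data a flight".
# Assumption (mine): a 10-hour flight; actual durations vary widely.
GB_PER_FLIGHT = 500
FLIGHT_HOURS = 10

rate_mb_per_s = GB_PER_FLIGHT * 1000 / (FLIGHT_HOURS * 3600)
print(f"~{rate_mb_per_s:.1f} MB/s sustained average")  # ~13.9 MB/s
```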
    AND:

    Functional safety vs. reliability
    A given in any discussion of automotive electronics is the tight relationship between reliability and functional safety. Functional safety focuses on avoiding injuries, whereas reliability is about whether the car works and does not need repair. But with increasing amounts of autonomy, there is plenty of overlap.

    “What happens if a rock hits the sensor? In addition to reliability on its own, we have to look at functional safety for self-driving cars, and the standard driving some of this activity is ISO 26262. It’s at the heart of a lot of things we do at the design stage,” said Maamari. “It’s okay for the chip to fail as long as it fails safely. That’s the pure focus of functional safety. If in a self-driving car, whether a chip fails or lightning strikes, it’s critical that the car does not crash. No injury is caused. Reliability, of course, is important. You’d rather have the chip not fail in the first place. So that’s desirable both for functional safety, but also for quality.”
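
    The "fails safely" idea in Maamari's comment maps onto a simple software pattern: detect the fault and force a known-safe state rather than carrying on with bad data. A minimal sketch of that pattern; the sensor name and the safe-state action are hypothetical, for illustration only:

```python
# Illustrative fail-safe pattern: a fault is tolerable only if it is
# detected and the system degrades to a known-safe state.
class SensorFault(Exception):
    pass

def read_lidar_m() -> float:
    """Hypothetical sensor read; raises SensorFault on failure."""
    raise SensorFault("no return signal")

def control_step() -> str:
    try:
        distance = read_lidar_m()
        return f"track obstacle at {distance:.1f} m"
    except SensorFault:
        # Functional safety: never guess; degrade to a controlled stop.
        return "safe state: alert driver, controlled stop"

print(control_step())
```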

    Understanding failure is essential for the auto industry. “There are many opportunities for failure,” said Baruch. “Getting to the point where you can repeat the process in a controlled manner and find root-cause issues inside those production lines, or between plants or different suppliers, is creating a significant challenge for reliability. Whatever you’re producing needs to be repeatable, and you need to be able to trust it. That’s causing a significant shift in how these companies are operating.”

    Reliability requirements for automotive electronics have been defined and graded by the Automotive Electronics Council (AEC), with AEC-Q100 (for ICs) and AEC-Q200 (for passive components) the go-to standards for stress testing automotive parts. Heat, humidity, and vibration are all risk factors that can destroy a chip, but materials, design, and manufacturing processes also can make chips more or less susceptible to those risk factors. This gets complicated, and the details are important.
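
    For reference, AEC-Q100 parts are graded by ambient operating temperature range; the "Grade 1/0" in the paper title below refers to these grades. A compact summary as a Python mapping, using the commonly cited grade definitions (verify against the current AEC-Q100 revision before relying on them):

```python
# AEC-Q100 ambient operating temperature grades (commonly cited values;
# check the current AEC-Q100 revision before relying on them).
AEC_Q100_GRADES_C = {
    0: (-40, 150),  # harshest, e.g. near-engine electronics
    1: (-40, 125),  # most under-hood electronics
    2: (-40, 105),
    3: (-40, 85),   # most passenger-cabin electronics
}

for grade, (t_min, t_max) in AEC_Q100_GRADES_C.items():
    print(f"Grade {grade}: {t_min} to +{t_max} °C")
```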

    “Judicious use of thermomechanical modeling is needed throughout the development and qualification process,” writes Amkor’s R. Dias et al. in the 2019 research paper, “Challenges and Approaches to Developing Automotive Grade 1/0 FCBGA Package Capability.” “Polymeric materials undergo permanent changes when subjected to high temperatures for extended periods of time. Depending on the ambience, this may include material oxidation as well as mechanical property changes resulting in embrittlement. The presence of humidity can also lead to loss of adhesion at the die passivation and substrate solder mask interfaces.”

    Beyond materials science, random errors such as an alpha particle striking a critical component can cause reliability issues. And then there is the issue of software reliability.

    “Software is challenging because it doesn’t follow any rules of physics,” said Dennis Ciplickas, vice president of advanced solutions at PDF Solutions. “Hardware sounds hard, but it actually follows some boundary conditions. With software, you can change one thing and have massive unintended costs.”

    One solution is redundancy, but that adds both expense and weight. Redundancy is the normal way to achieve reliability in aviation, but in automotive it needs to be limited to specific systems within a vehicle.

    “I come from an aviation background and redundancy is the way that we normally take care of reliability issues — three flight computers voting on who’s right. But we don’t have that luxury in an automobile, where we’re trying to save a nickel,” said Jay Rathert, senior director of strategic collaborations at KLA.

    In a car, redundancy is a balancing act. “With a larger SoC, redundancies come in many forms,” said Maamari. At the coarse end, the technique is “literally duplicating some CPUs or some blocks that have a core function. You duplicate them, check the output, and make sure that the two get the same output, and then flag an issue if either of the two shows something that is not consistent. That’s quite expensive, so it’s done for core functions, and it plays a dual role, actually. You can constantly check whether they’re consistent, but it also allows you to do some level of self-test during operation. You can bring one of the two down, do a self-test on it while the system still functions, and then bring it back. That is expensive because you duplicate an entire block or processor.”
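
    The duplicate-and-compare technique Maamari describes is dual modular redundancy: run two copies of the same block, cross-check their outputs, and flag any mismatch. A minimal sketch of the compare-and-flag logic; the names are illustrative, not from any vendor API:

```python
# Dual modular redundancy (DMR) sketch: run two copies, compare outputs,
# flag a fault on mismatch. DMR detects a fault but cannot say which copy
# is wrong; that is what self-test (or a third copy) is for.
class LockstepMismatch(Exception):
    pass

def dmr_execute(block, inputs):
    """Run two copies of `block` on the same inputs and cross-check."""
    out_primary = block(inputs)  # primary copy
    out_shadow = block(inputs)   # redundant copy (separate hardware in practice)
    if out_primary != out_shadow:
        raise LockstepMismatch(f"{out_primary!r} != {out_shadow!r}")
    return out_primary

print(dmr_execute(lambda x: x * 2, 21))  # 42 when both copies agree
```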

    Other redundancies are more precisely designed in. “Sometimes it comes just at the memory register level, like a flip-flop in logic design, where you identify some registers that play critical functions. Either you replace it with a much more tolerant part, or you put in triple modular redundancy, where you have three of them and it is a vote of the three. So it is fine granularity, coarse granularity, and a variety of other techniques. It’s all a balancing act to keep the cost down.”
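
    Triple modular redundancy adds a third copy so a single fault can be not just detected but out-voted. A minimal majority-vote sketch, illustrative only:

```python
# Triple modular redundancy (TMR) sketch: three copies, majority vote.
from collections import Counter

def tmr_vote(a, b, c):
    """Return the majority value; a single faulty copy is out-voted."""
    value, count = Counter([a, b, c]).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one copy faulted")
    return value

print(tmr_vote(1, 1, 0))  # -> 1: the lone corrupted value loses the vote
```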




    I think if you have read to this point you will already have worked out why successful adoption of AKIDA in SPACE is a driver of terrestrial success: “used in space” is the foil to “nobody ever got sacked for buying IBM”.

    My opinion only DYOR

    Fact Finder

 