‘Shift Left and Stretch Right’ Is the Future of SDVs – EE Times Europe

In the world of embedded systems, software and hardware have traditionally been intricately connected. Given the resource constraints and deadlines, developers are compelled to ensure flawless integration between software and hardware. That, in turn, usually means extensive rounds of on-device testing to ensure interactions between interrupts, device drivers and application-level logic work as intended.

This traditional approach to development in embedded systems is increasingly at odds not just with rapid product development lifecycles but with the demands of service-led business models. A prime example lies in automotive design, where OEMs are embracing the concept of the software-defined vehicle (SDV). When a vehicle reaches the market today, its capabilities are almost completely fixed, with only critical firmware updates applied during regular garage-service visits. The SDV, by contrast, is designed to be enhanced over its entire lifetime.

Making the SDV possible calls for a platform approach. Much of the software today runs on a broad spectrum of dedicated microcontrollers distributed around the car, arranged in many subsystems. Currently, this software changes infrequently. Looking ahead, to allow future functional upgrades via software, these functions need to be consolidated into a smaller number of high-performance multicore microprocessors. The in-vehicle network delivers real-time messages to control motors and other actuators in the car and transfers sensor inputs to the many tasks that run in each multicore processor complex.

The decoupling of hardware and software allows greater flexibility in application and architecture design and allows for increases in functionality over time (Figure 1). Software defines the experience for the driver by improving safety, cutting energy consumption and improving vehicle reliability. It also becomes a key differentiator for the OEM, which can create new revenue streams with cloud-based services available to drivers after the sale of the vehicle. Another advantage for the OEM is that the decoupling encourages greater software reuse across different vehicles, enabling a single application to support many vehicle designs, with minimal adaptation for each one.

These trends will change the way in which software is created and maintained. One change is to shift left to complete software earlier in the product development lifecycle, even when prototype hardware is not available. The second is to stretch right to support the ability to update the vehicles after they are in the hands of the drivers, using over-the-air (OTA) updates to add functionality to the vehicles throughout their lifetime.

Though shifting left and stretching right may look as though they demand different approaches, their requirements largely coincide if development teams choose the right software development methodology (Figure 2). This methodology is built on top of the concept of continuous integration and continuous, or near-continuous, deployment. It is an approach that has been used successfully in the enterprise space.

Embedded systems, such as SDVs, are turning to many of the same supporting technologies like virtualization and the use of software containers to isolate software modules and abstract them from the underlying hardware. The approach also provides easier integration with the cloud-based processes that will be employed for many of the value-added services OEMs will offer. These services will often fuse the core car capabilities with AI and analytics hosted in the cloud.

The core change for embedded systems is to remove the need for prototyping on physical hardware or at least reduce it to the bare minimum to ensure that assumptions about timing and hardware behavior are realistic.

In the cloud environment, containerization has been an important element in the adoption of continuous integration and deployment methodologies. Containers reduce the hardware dependencies of an application. They achieve this by packaging applications alongside the support libraries and device drivers with which they were tested and by isolating them from the underlying operating system. In the embedded environment, an additional layer of isolation and protection is enabled by virtualization. Under virtualization, a hypervisor maps the I/O messages to the underlying hardware. The hypervisor's management of the virtual platform also helps enforce secure isolation between functionally independent tasks running on the same processor complex.
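The isolation role the hypervisor plays can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the class names, the per-guest device maps and the toy `write` interface stand in for the far richer virtual-device interfaces a real automotive hypervisor exposes.

```python
# Hypothetical sketch: a hypervisor routes each guest's virtual I/O to the
# physical devices assigned to that guest, so one partition can never
# address hardware belonging to another. Names are illustrative only.

class PhysicalDevice:
    def __init__(self, name):
        self.name = name
        self.log = []          # record of values written, for inspection

    def write(self, value):
        self.log.append(value)

class Hypervisor:
    """Maps each guest's virtual device IDs to physical devices."""
    def __init__(self):
        self._maps = {}        # guest_id -> {virtual_dev: PhysicalDevice}

    def assign(self, guest_id, virtual_dev, device):
        self._maps.setdefault(guest_id, {})[virtual_dev] = device

    def io_write(self, guest_id, virtual_dev, value):
        devices = self._maps.get(guest_id, {})
        if virtual_dev not in devices:
            raise PermissionError(f"guest {guest_id} cannot access {virtual_dev}")
        devices[virtual_dev].write(value)

# Two guests share the processor complex but see disjoint hardware.
motor = PhysicalDevice("motor-ctrl")
display = PhysicalDevice("cabin-display")
hv = Hypervisor()
hv.assign("powertrain", "dev0", motor)
hv.assign("infotainment", "dev0", display)

hv.io_write("powertrain", "dev0", 42)    # reaches the motor controller only
```

The same `dev0` identifier resolves to different hardware per guest, which is the essence of the secure isolation described above.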

Containerization will help boost flexibility and the ability of OEMs to deploy updates, particularly in parts of the system where OTA updates are likely to be frequent, such as the infotainment modules in the vehicle cabin. However, though they will be more decoupled, hardware interfaces and the dependencies they impose will remain vitally important to the functionality of the car's real-time control and safety systems. Developers will want to see how changes in interrupt rates and the flow of sensor data will affect the way their software responds. The answer lies in the digital twin.

A digital twin is a model of the target that replicates hardware and firmware behavior to the required level of detail needed for testing. The key advantage of the digital twin is that developers do not need to access hardware to perform most of their tests. The twin can run in desktop tools or cloud-based containers either in interactive debug mode or in highly automated regression test suites. The regressions perform a variety of tests that accelerate quality-control checks whenever changes are made. Increasingly, teams are making use of analytics and machine-learning techniques to home in on bugs faster and remove them from the build before they progress too far.
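A regression run of the kind described can be sketched in a few lines. The `TwinModel` here is a deliberately toy stand-in (a wheel-speed conversion with made-up constants); a real twin would simulate the processor, peripherals and firmware to the required level of detail. The golden stimulus/response pairs are likewise invented for illustration.

```python
# Minimal sketch of an automated regression check against a digital-twin
# model: replay recorded stimuli and flag any divergence from golden
# responses captured on an earlier, validated build.

class TwinModel:
    """Toy twin: converts raw wheel-speed sensor ticks to km/h."""
    TICKS_PER_REV = 48
    WHEEL_CIRCUMFERENCE_M = 2.0

    def speed_kmh(self, ticks_per_second):
        revs = ticks_per_second / self.TICKS_PER_REV
        return revs * self.WHEEL_CIRCUMFERENCE_M * 3.6

def run_regression(model, cases):
    """Return the list of (stimulus, expected, actual) divergences."""
    failures = []
    for stimulus, expected in cases:
        actual = model.speed_kmh(stimulus)
        if abs(actual - expected) > 1e-6:
            failures.append((stimulus, expected, actual))
    return failures

# Golden values recorded from the previously validated build.
golden = [(0, 0.0), (48, 7.2), (480, 72.0)]
print(run_regression(TwinModel(), golden))   # empty list: no regressions
```

In practice, hundreds of such suites would run in parallel in cloud containers on every change, with the failure lists feeding the analytics that help localize bugs.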

As each update is made, it can be tested against any other code modules or subsystems that might be affected to check whether the changes lead to unexpected interactions or problems. The digital twin does not entirely replace hardware in the project. Conventional hardware-in-the-loop tests will still be used to check the behavior of the digital twin simulations against real-world conditions. But once divergences in behavior are ironed out, the digital twin can be used extensively to support mid-life updates. The extensive pre-hardware tests, which can be run at speed in the cloud across multiple servers, will give OEMs the confidence to roll out OTA updates with new features as they are made ready.

The accuracy of the models used in the digital twin is important, though fully timing-accurate models are not necessary for many of the tests. Highly detailed models that carry full timing information typically run far slower than fast models, which are optimized for analyzing instruction throughput and application logic on the target processor and can run close to real time on cloud-server hardware. Partitioning tests so that only the component or subsystem models that need fully detailed simulation run at that level of detail will optimize test time and streamline the verification process.
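The partitioning step amounts to a routing decision per test. The sketch below assumes a simple boolean "timing sensitive" label on each test; the test names and the two model tiers are invented for illustration, and a real flow would likely derive the routing from requirements traceability rather than a hand-set flag.

```python
# Illustrative sketch of partitioning a test plan so that only
# timing-sensitive tests pay the cost of the detailed simulation.

FAST, DETAILED = "fast-model", "timing-accurate-model"

def partition(tests):
    """Route each (name, timing_sensitive) pair to the cheapest
    simulation tier that can answer its question."""
    plan = {FAST: [], DETAILED: []}
    for name, timing_sensitive in tests:
        plan[DETAILED if timing_sensitive else FAST].append(name)
    return plan

tests = [
    ("infotainment-ui-logic", False),
    ("brake-actuator-latency", True),
    ("ota-package-verify", False),
    ("sensor-fusion-deadline", True),
]
print(partition(tests))
```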

Models can be built by the OEMs and subsystem suppliers. However, the digital twin is an area where partnership with the right semiconductor vendors provides a significant advantage: vendors have committed to delivering models of their silicon platforms as much as a year before the products are ready to ship to OEMs for assembly into prototypes and end products.

As well as supporting shift left lifecycle acceleration, models provide the ability for OEMs and subsystem providers to learn quickly how architectural innovations can benefit the target applications. An example is the magnetoresistive random-access memory (MRAM) that is set to appear in future automotive SoCs, providing a high-performance alternative to flash and a way of overcoming the problems of using volatile DRAM and SRAM for persistent data. A basic model may treat non-volatile memory like flash and MRAM as equivalent and make no distinction in latency or bandwidth: The models just reflect the non-volatile behavior.

The basic model can then be replaced by one with a higher level of detail that reflects the differences in write and read times and other aspects of behavior: for example, the ability to access arbitrary locations without first caching the data in DRAM, or support for rapid writes that suits frequent changes to algorithm parameters and leads to improvements in performance. Those differences can be exploited by changes to the code base that take full advantage of the technology where it is available. As a result, by adopting a model-centric approach to development, software teams can help specify future hardware implementations to improve performance over several generations of a vehicle or other system.
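The two levels of modeling can be sketched as a class hierarchy. All latency figures below are placeholders chosen only to show the qualitative difference (flash pays a large erase cost before programming, MRAM writes in place); they are not vendor data for any real device.

```python
# Sketch of refining a memory model: the basic model captures only
# non-volatile behavior, while detailed models distinguish MRAM's
# in-place byte writes from flash's erase-then-program cycle.
# Latency numbers are illustrative placeholders, not real silicon data.

class BasicNVModel:
    """Basic model: non-volatility only; timing is not modeled."""
    def write_latency_ns(self, num_bytes):
        return 0

class FlashModel(BasicNVModel):
    ERASE_NS = 2_000_000        # block erase dominates small updates
    PROGRAM_NS_PER_BYTE = 10

    def write_latency_ns(self, num_bytes):
        return self.ERASE_NS + num_bytes * self.PROGRAM_NS_PER_BYTE

class MramModel(BasicNVModel):
    WRITE_NS_PER_BYTE = 50      # in-place writes, no erase step

    def write_latency_ns(self, num_bytes):
        return num_bytes * self.WRITE_NS_PER_BYTE

# A 64-byte parameter update: MRAM makes frequent small writes cheap,
# which is what enables rapid persistent updates to algorithm parameters.
print(FlashModel().write_latency_ns(64))   # 2_000_640 ns
print(MramModel().write_latency_ns(64))    # 3_200 ns
```

Software written against the basic model runs unchanged on either refinement; code that exploits the MRAM model's cheap small writes is where the performance gains described above come from.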

Once embedded in the development flow, the methodologies underpinning stretch right will enable continuous improvements to product quality and, in turn, service revenue (Figure 3). The flow of data is not just to the vehicle in the form of OTA updates but also in the other direction: OEMs aim to collect sensor data from the running systems and apply it to a variety of machine-learning and analytics systems. The information feeding into the OEM's data lake can be filtered and applied to the digital twin simulation to gauge how well the different platforms are performing in the real world. In the virtual world of the digital twin, developers can make changes to settings and test new software functions to see how they perform against real-world data.
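This closed loop can be sketched at its simplest: filter the data lake down to one platform, then replay the recorded inputs through the twin under both the deployed setting and a candidate one. The record fields, platform names and the single "controller gain" parameter are all invented for illustration.

```python
# Hedged sketch of the field-data loop: filter telemetry from the data
# lake, then replay it through a twin to compare a candidate tuning
# against the deployed one. Field names and values are hypothetical.

def filter_telemetry(records, platform):
    """Keep only records from the platform under study."""
    return [r for r in records if r["platform"] == platform]

def replay(twin_gain, records):
    """Apply a candidate controller gain to the recorded sensor inputs."""
    return [twin_gain * r["sensor"] for r in records]

lake = [
    {"platform": "ev-gen2", "sensor": 1.0},
    {"platform": "ev-gen1", "sensor": 4.0},
    {"platform": "ev-gen2", "sensor": 2.0},
]
subset = filter_telemetry(lake, "ev-gen2")
deployed = replay(0.5, subset)     # behavior of the shipped software
candidate = replay(0.6, subset)    # proposed change, tested virtually
print(deployed, candidate)
```

Only after the candidate outperforms the deployed behavior against real-world data would it move on to the regression environment and, eventually, an OTA update.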

Improvements can then be tested in the regression environment before being deployed in a new OTA update. This closing of the loop between development and deployment will lead to a much faster cycle of product refinement, improving both existing products and future versions as new hardware is developed, allowing a further shift left for the next generation. It is a further demonstration of how a holistic approach to development, encompassing continuous integration and digital twins, can streamline product design and support. The advantages of this new methodology make the dual targets of shift left and stretch right not just possible but inevitable.


