
Monday, March 2, 2015

Hypervisors, virtualization, and taking control of your safety certification budget

A new webinar on how virtualization can help you add new technology to existing designs.

First things first: should you say “hypervisor” or “virtual machine monitor”? Both terms refer to the same thing, but is one preferable to the other?

Hypervisor certainly has the greater sex appeal, suggesting it was coined by a marketing department that saw no hope in promoting a term as coldly technical as virtual machine monitor. But, in fact, hypervisor has a long and established history, dating back almost 50 years. Moreover, it was coined not by a marketing department, but by a software developer.

“Hypervisor” is simply a variant of “supervisor,” a traditional name for the software that controls task scheduling and other fundamental operations in a computer system — software that, in most systems, is now called the OS kernel. Because a hypervisor manages the execution of multiple OSs, it is, in effect, a supervisor of supervisors. Hence hypervisor.

No matter what you call it, a hypervisor creates multiple virtual machines, each hosting a separate guest OS, and allows the OSs to share a system’s hardware resources, including CPU, memory, and I/O. As a result, system designers can consolidate previously discrete systems onto a single system-on-chip (SoC) and thereby reduce the size, weight, and power consumption of their designs — the trio of factors known collectively as SWaP.

That said, not all hypervisors are created equal. There are, for example, Type 1 “bare metal” hypervisors, which run directly on the host hardware, and Type 2 hypervisors, which run on top of an OS. Both types have their benefits, but Type 1 offers the better choice for any embedded system that requires fast, predictable response times — most safety-critical systems arguably fall within this category.

The QNX Hypervisor is an example of a Type 1 “bare metal” hypervisor.


Moreover, some hypervisors make it easier for the guest OSs to share hardware resources. The QNX Hypervisor, for example, employs several technologies to simplify the sharing of display controllers, network connections, file systems, and peripherals on serial buses such as I2C. As a result, developers can avoid writing custom shared-device drivers, which increase testing and certification costs and typically perform worse than field-hardened, vendor-supplied drivers.
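
To make this concrete, here is a minimal sketch of what guest-side code might look like when the hypervisor exposes a shared peripheral as an ordinary device node. Everything here is invented for illustration: the node name /dev/vdev-i2c0 and the one-byte register protocol are hypothetical, not QNX's actual interface, and a real guest would use the vendor-supplied driver API.

```c
/* Hypothetical example: a guest OS reads a sensor through a
 * hypervisor-managed virtual I2C device. The device node name and the
 * message format are invented for illustration; a real system would use
 * the driver interface documented by the hypervisor vendor. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* The hypervisor's back-end driver owns the physical I2C controller;
     * each guest sees only this virtual front-end node. */
    int fd = open("/dev/vdev-i2c0", O_RDWR);   /* hypothetical node */
    if (fd == -1) {
        perror("open");
        return 1;
    }

    uint8_t reg = 0x00;     /* sensor register to read (example value) */
    uint8_t value;

    /* Write the register address, then read one byte back. The hypervisor
     * serializes requests from all guests onto the shared physical bus. */
    if (write(fd, &reg, 1) != 1 || read(fd, &value, 1) != 1) {
        perror("i2c transfer");
        close(fd);
        return 1;
    }

    printf("sensor register 0x%02x = 0x%02x\n", reg, value);
    close(fd);
    return 0;
}
```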

Adding features without blowing the certification budget
Hypervisors, and the virtualization they provide, offer another benefit: the ability to keep OSs cleanly isolated from one another, even though they share the same hardware. This isolation is attractive to anyone trying to build a safety-critical system while reducing SWaP. Better yet, virtualization can help device makers add new and differentiating features, such as rich user interfaces, without compromising safety-critical components.

That said, hardware and peripheral device interfaces are evolving continuously. How can you maintain compliance with safety-related standards like ISO 26262 and still take advantage of new hardware features and functionality?

Enter a new webinar hosted by my inimitable colleague Chris Ault. Chris will examine techniques that enable you to add new features to existing devices, while maintaining close control of the safety certification scope and budget. Here are some of the topics he’ll address:

  • Overview of virtualization options and their pros and cons
  • Comparison of how adaptive time partitioning and virtualization help achieve separation of safety-critical systems (a minimal partitioning sketch follows this list)
  • Maintaining realtime performance of industrial automation protocols without directly affecting safety certification efforts
  • Using Android applications for user interfaces and connectivity
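
To give a flavor of the partitioning half of that comparison, here is a minimal sketch of reserving a guaranteed CPU budget with the QNX adaptive partitioning scheduler. The SchedCtl() call and its commands are documented QNX Neutrino APIs, but the partition name and budget below are arbitrary, and the field names should be verified against <sys/sched_aps.h> for your release.

```c
/* Sketch: reserving a guaranteed CPU budget for safety-critical work with
 * QNX adaptive partitioning. Constant and field names follow the QNX
 * Neutrino documentation for SchedCtl(); verify them against
 * <sys/sched_aps.h> for your release. */
#include <stdio.h>
#include <string.h>
#include <sys/sched_aps.h>

int main(void)
{
    sched_aps_create_parms create;
    memset(&create, 0, sizeof(create));
    create.name = "safety";        /* partition name (arbitrary)      */
    create.budget_percent = 30;    /* guaranteed 30% of CPU time      */
    create.critical_budget_ms = 0; /* no critical budget in this sketch */

    if (SchedCtl(SCHED_APS_CREATE_PARTITION, &create, sizeof(create)) == -1) {
        perror("SCHED_APS_CREATE_PARTITION");
        return 1;
    }

    /* Move the calling thread into the new partition; threads it spawns
     * inherit the partition, so they all run on the same budget. */
    sched_aps_join_parms join;
    memset(&join, 0, sizeof(join));
    join.id  = create.id;   /* id filled in by the create call */
    join.pid = 0;           /* 0 = calling process             */
    join.tid = 0;           /* 0 = calling thread              */

    if (SchedCtl(SCHED_APS_JOIN_PARTITION, &join, sizeof(join)) == -1) {
        perror("SCHED_APS_JOIN_PARTITION");
        return 1;
    }

    printf("running in partition '%s' with a 30%% budget\n", create.name);
    /* ... safety-critical work here ... */
    return 0;
}
```

The contrast with virtualization is worth noting: adaptive partitioning divides CPU time among subsystems within a single OS (and lets a partition borrow budget that other partitions leave idle), whereas virtualization isolates entire OSs from one another. The webinar weighs when each approach is the right tool.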

Webinar coordinates:
Exploring Virtualization Options for Adding New Technology to Safety-Critical Devices
Time: Thursday, March 5, 12:00 pm EST
Duration: 1 hour
Registration: Visit TechOnLine

Monday, December 9, 2013

So many cores — what to do with them all?

Multi-core processors are clearly becoming the mainstream for automotive infotainment systems. TI’s OMAP processors and their automotive derivatives use dual A15 cores, Freescale's i.MX 6 boasts up to four A9 cores, and other companies also have multi-core architectures in production or on near-term roadmaps. Quad-core A15 processors are just around the corner. As a percentage of overall die area, the CPU core is relatively small, so SoC producers can lay down multiple cores with little impact on cost. GPUs, on the other hand, represent a large percentage of the die area and, as such, are typically instantiated only once per SoC.

Realistically, infotainment systems should no longer be CPU bound. In fact, when looking at leading-edge SoCs available today, many people are asking what to do with all that extra CPU just sitting there, waiting for something to do. At first blush, the most obvious candidates for consolidation are infotainment and ADAS, or infotainment and digital instrument clusters. At the highest level, this is pretty much a no-brainer, so long as the safety requirements mandated for clusters and ADAS can be met.

Thing is, automotive safety standards like ISO 26262 certify at the system level, so the entire system must be certified, not just individual components. That's already a daunting task for a standalone ADAS system or digital instrument cluster; include the full infotainment stack, and the problem can become unmanageable.

Think about your car. Your cluster does a handful of operations whereas your infotainment system runs a full navigation system, voice recognition, multimedia, device connectivity, and, in the connected case, cloud services. People don't get frustrated trying to figure out how your cluster works (I hope), and they don't give up trying to figure out how fast the car is moving. The same cannot be said for many infotainment systems shipping today. Ask your mother to pair her cell phone to her car. I dare you! The complexity involved in attempting to certify a system that combines infotainment and cluster functions is orders of magnitude higher than certifying a cluster alone.

All is not lost, however. Virtualization offers an elegant way to isolate multiple systems running on a single multi-core SoC. By using virtualization, you could seek certification for the cluster without taking on the infotainment problem, and still collapse two formerly discrete systems onto one SoC. You would save money and probably earn a promotion to boot. Just one thing: there is still only one GPU on the die, and both the infotainment system and the cluster rely heavily on it.
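
To illustrate, here is a purely hypothetical configuration sketch; every API name and number below is invented, since each hypervisor has its own configuration mechanism. The shape, however, is typical: give each guest its own cores, memory, and boot image, so that the cluster's certification scope stays small.

```c
/* Entirely hypothetical hypervisor control API, for illustration only;
 * real hypervisors each have their own configuration mechanisms. The
 * point is the partitioning: each guest gets dedicated cores, memory,
 * and a boot image, so certifying the cluster never drags the
 * infotainment stack into scope. */
#include <stddef.h>
#include <stdio.h>

typedef struct {
    const char *name;      /* guest label                         */
    const char *image;     /* guest OS image to boot              */
    unsigned    cpu_mask;  /* which cores this guest may run on   */
    size_t      ram_mb;    /* dedicated RAM for this guest, in MB */
} vm_config;

/* Stub standing in for the (hypothetical) hypervisor control call. */
static int vm_create(const vm_config *cfg)
{
    printf("creating guest '%s': image=%s cpu_mask=0x%X ram=%zuMB\n",
           cfg->name, cfg->image, cfg->cpu_mask, cfg->ram_mb);
    return 0;
}

int main(void)
{
    /* Safety side: the instrument cluster, pinned to core 0 and
     * certified on its own. */
    vm_config cluster = {
        .name = "cluster", .image = "/boot/cluster.img",
        .cpu_mask = 0x1, .ram_mb = 256,
    };

    /* Non-safety side: the full infotainment stack on the other cores. */
    vm_config infotainment = {
        .name = "infotainment", .image = "/boot/ivi.img",
        .cpu_mask = 0xE,   /* cores 1-3 of a quad-core SoC */
        .ram_mb = 1536,
    };

    if (vm_create(&cluster) != 0 || vm_create(&infotainment) != 0)
        return 1;
    return 0;
}
```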

Enter Red Bend Software, a long-time QNX CAR Platform partner for firmware over-the-air (FOTA) updates. They have taken the challenge of virtualizing the GPU head-on: at Telematics Munich, they demonstrated the QNX CAR Platform and a Crank Software-based digital instrument cluster running on dual displays, all driven by a single OMAP 5. I saw the demo and was impressed by how snappy the infotainment side was and how smoothly the cluster needles rendered (60+ fps).


Using virtualization to drive dual displays from a single TI OMAP 5 processor.

According to Red Bend, they have designed a GPU-sharing architecture that enables multiple guest operating systems to access hardware accelerators, including the GPU, while keeping the operating systems isolated from one another and imposing minimal overhead on overall performance. (And any CPU overhead matters less and less, considering how many cores will ship on a single SoC in the near term.) It sounds impressive, but seeing is believing.
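
For a sense of the general shape of such an architecture, here is a drastically simplified sketch of one common pattern: a para-virtualized front-end/back-end split, where each guest enqueues GPU command packets into a shared ring and a single back-end that owns the physical GPU drains them. This is not Red Bend's actual (and proprietary) design, just an illustration of how guests can submit GPU work without ever touching the hardware.

```c
/* Drastically simplified sketch of GPU sharing via a front-end/back-end
 * split: a guest "front-end" enqueues command packets into a shared ring,
 * and a host "back-end" that owns the physical GPU drains them. Not any
 * vendor's real design; an illustration of the general pattern only. */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SLOTS 64  /* must be a power of two */

typedef struct {
    uint32_t opcode;   /* e.g., draw, blit, flush (illustrative) */
    uint32_t arg;
} gpu_cmd;

typedef struct {
    gpu_cmd          slots[RING_SLOTS];
    _Atomic uint32_t head;   /* advanced by the back-end (consumer)  */
    _Atomic uint32_t tail;   /* advanced by the front-end (producer) */
} cmd_ring;

/* Guest-side front-end: enqueue a command, fail if the ring is full.
 * The guest never touches GPU registers directly. */
static int fe_submit(cmd_ring *r, gpu_cmd c)
{
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail - head == RING_SLOTS)
        return -1;                       /* ring full: guest must retry */
    r->slots[tail % RING_SLOTS] = c;
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return 0;
}

/* Host-side back-end: drain one command and run it on the real GPU.
 * Because only the back-end touches hardware, guests stay isolated. */
static int be_drain(cmd_ring *r)
{
    uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head == tail)
        return 0;                        /* nothing pending */
    gpu_cmd c = r->slots[head % RING_SLOTS];
    printf("GPU: opcode=%u arg=%u\n", c.opcode, c.arg); /* stand-in for HW */
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return 1;
}

int main(void)
{
    static cmd_ring ring;                /* zero-initialized */
    fe_submit(&ring, (gpu_cmd){ .opcode = 1, .arg = 640 }); /* "draw"  */
    fe_submit(&ring, (gpu_cmd){ .opcode = 2, .arg = 0 });   /* "flush" */
    while (be_drain(&ring)) { /* back-end would loop in its own thread */ }
    return 0;
}
```

In a real system there would be one ring per guest, the back-end would schedule among them, and the "commands" would be real GPU command buffers; the isolation property comes from exactly this split, since guests see only their ring, never the GPU registers.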

Red Bend will next show this demo in the TI Suite at CES (N115 in the North Hall). If system consolidation is something that keeps you up at night, you should really stop by to see what they have done.