Wednesday, March 11, 2015

Long time, no see: Catching up with the QNX CAR Platform

By Megan Alink, Director of Marketing Communications for Automotive

It’s a fact — a person simply can’t be in two places at one time. I can’t, you can’t, and the demo team at QNX can’t (especially when they’re brainstorming exciting showcase projects for 2016… but that’s another blog. Note to self.) So what’s a QNX-loving, software-admiring, car aficionado to do when he or she has lost touch and wants to see the latest on the QNX CAR Platform for Infotainment? Video, my friends.

One of the latest additions to our QNX Cam YouTube channel is an update to a video made just over two and a half years ago, in which my colleague, Sheridan Ethier, took viewers on a feature-by-feature walkthrough of the QNX CAR Platform. Now, Sheridan’s back for another tour, so sit back and enjoy a good, old-fashioned catch-up with what’s been going on with our flagship automotive product (with time references, just in case you’re in a bit of a hurry).

Sheridan Ethier hits the road in the QNX reference vehicle based on a modified Jeep Wrangler, running the latest QNX CAR Platform for Infotainment.

We kick things off with a look at one of the most popular elements of an infotainment system — multimedia. Starting around the 01:30 mark, Sheridan shows how the QNX CAR Platform supports a variety of music formats and media sources, from the system’s own multimedia player to a brought-in device. And when your passenger is agitating to switch from the CCR playlist on your MP3 device to Meghan Trainor on her USB music collection, the platform’s fast detection and sync time means you’ll barely miss a head-bob.

The QNX CAR Platform’s native multimedia player — the “juke box” — is just one of many options for enjoying your music.

About five minutes in, we take a look at how the QNX CAR Platform implements voice recognition. Whether you’re seeking out a hot latte, navigating to the nearest airport, or calling a co-worker to say you’ll be a few minutes late, the QNX CAR Platform lets you do what you want to do while doing what you need to do — keeping your hands on the wheel and your eyes on the road. Don’t miss a look at concurrency (previously discussed here by Paul Leroux) during this segment, when Sheridan runs the results of his voice commands (multimedia, navigation, and a hands-free call) smoothly and simultaneously.

Using voice recognition, users can navigate to a destination by address or point of interest description (such as an airport).

At eight minutes, Sheridan tells us about one of the best examples of the flexibility of the QNX CAR Platform — its support for application environments, including native C/C++, Qt, HTML5, and APK for running Android applications. The platform’s audio management capability makes a cameo appearance when Sheridan switches between the native multimedia player and the Pandora HTML5 app.

Pandora is just one of the HTML5 applications supported by the QNX CAR Platform.

As Sheridan tells us (at approximately 12:00), the ability to project smartphone screens and applications into the vehicle is an important trend in automotive. With technologies like MirrorLink, users can access nearly all of the applications available on their smartphone right from the head unit.

Projection technologies like MirrorLink allow automakers to select which applications will be delivered to the vehicle’s head unit from the user’s connected smartphone. 

Finally, we take a look at two interesting features that differentiate the QNX CAR Platform — last mode persistence (e.g., the song you were listening to when you turned the car off resumes at the same point when you turn the car back on) and fastboot (which, in the case of QNX CAR, can bring your backup camera to life in 0.8 seconds, well under the NHTSA-mandated 2 seconds). These features work hand-in-hand to ensure a safer, more enjoyable, more responsive driving experience.

Fastboot in 0.8 seconds means that when you’re ready to reverse, your car is ready to show you the way.

Interested in learning more about the QNX CAR Platform for Infotainment? Check out Paul Leroux’s blog on the architecture of this sophisticated piece of software. To see QNX CAR in action, read Tina Jeffrey’s blog, in which she talks about how the platform was implemented in the reimagined QNX reference vehicle for CES 2015.

Check out the video here:


Wednesday, March 4, 2015

“What do you mean, I have to learn how not to drive?”

The age of autonomous driving lessons is upon us.

Paul Leroux
What would it be like to ride in an autonomous car? If you were to ask the average Joe, he would likely describe a scenario in which he sips coffee, plays video games, and spends quality time with TSN while the car whisks him to work. The average Jane would, no doubt, provide an equivalent answer. The problem with this scenario is that autonomous doesn’t mean driverless. Until autonomous vehicles become better than humans at handling every potential traffic situation, drivers will have to remain alert much or all of the time, even if their cars do 99.9% of the driving for them.

Otherwise, what happens when a car, faced with a situation it can’t handle, suddenly cedes control to the driver? Or what happens when the car fails to recognize a pedestrian on the road ahead?

Of course, it isn’t easy to maintain a high level of alertness while doing nothing in particular. It takes a certain maturity of mind, or at least a lack of ADD. Which explains why California, a leader in regulations for autonomous vehicles, imposes restrictions on who is allowed to “drive” them. Prerequisites include a near-spotless driving record and more than 10 years without a DUI conviction. Drivers must also complete an autonomous driving program, the length of which depends on the car maker or automotive supplier in question. According to a recent investigation by IEEE Spectrum, Google offers the most comprehensive program — it lasts five weeks and subjects drivers to random checks.

1950s approach to improving driver alertness. Source: Modern Mechanix blog

In effect, drivers of autonomous cars have to learn how not to drive. And, as another IEEE article suggests, they may even need a special license.

Ample warnings
Could an autonomous car mitigate the attention issue? Definitely. It could, for example, give the driver ample warning before he or she needs to take over. The forward collision alerts and other informational ADAS functions in the latest QNX technology concept car offer a hint as to how such warnings could operate. For the time being, however, it’s hard to imagine an autonomous car that could always anticipate when it needs to cede control. Until then, informational ADAS will serve as an adjunct to, not a replacement for, eyes, ears, and old-fashioned attentiveness.

Nonetheless, research suggests that adaptive cruise control and other technologies that enable autonomous or semi-autonomous driving can, when compared to human drivers, do a better job of avoiding accidents and improving traffic flow. To quote my friend Andy Gryc, autonomous cars would be more “polite” to other vehicles and be better equipped to negotiate inter-vehicle space, enabling more cars to use the same length of road.

Fewer accidents, faster travel times. I could live with that.


2015 approach to improving driver alertness: instrument cluster from the QNX reference vehicle.

Monday, March 2, 2015

Hypervisors, virtualization, and taking control of your safety certification budget

A new webinar on how virtualization can help you add new technology to existing designs.

First things first: should you say “hypervisor” or “virtual machine monitor”? Both terms refer to the same thing, but is one preferable to the other?

Hypervisor certainly has the greater sex appeal, suggesting it was coined by a marketing department that saw no hope in promoting a term as coldly technical as virtual machine monitor. But, in fact, hypervisor has a long and established history, dating back almost 50 years. Moreover, it was coined not by a marketing department, but by a software developer.

“Hypervisor” is simply a variant of “supervisor,” a traditional name for the software that controls task scheduling and other fundamental operations in a computer system — software that, in most systems, is now called the OS kernel. Because a hypervisor manages the execution of multiple OSs, it is, in effect, a supervisor of supervisors. Hence hypervisor.

No matter what you call it, a hypervisor creates multiple virtual machines, each hosting a separate guest OS, and allows the OSs to share a system’s hardware resources, including CPU, memory, and I/O. As a result, system designers can consolidate previously discrete systems onto a single system-on-chip (SoC) and thereby reduce the size, weight, and power consumption of their designs — a trinity of benefits known as SWaP.

That said, not all hypervisors are created equal. There are, for example, Type 1 “bare metal” hypervisors, which run directly on the host hardware, and Type 2 hypervisors, which run on top of an OS. Both types have their benefits, but Type 1 offers the better choice for any embedded system that requires fast, predictable response times — most safety-critical systems arguably fall within this category.

The QNX Hypervisor is an example of a Type 1 “bare metal” hypervisor.


Moreover, some hypervisors make it easier for the guest OSs to share hardware resources. The QNX Hypervisor, for example, employs several technologies to simplify the sharing of display controllers, network connections, file systems, and I/O devices like the I2C serial bus. Developers can, as a result, avoid writing custom shared-device drivers that increase testing and certification costs and that typically exhibit lower performance than field-hardened, vendor-supplied drivers.

Adding features, without blowing the certification budget
Hypervisors, and the virtualization they provide, offer another benefit: the ability to keep OSs cleanly isolated from each other, even though they share the same hardware. This benefit is attractive to anyone trying to build a safety-critical system and reduce SWaP. Better yet, virtualization can help device makers add new and differentiating features, such as rich user interfaces, without compromising safety-critical components.

That said, hardware and peripheral device interfaces are evolving continuously. How can you maintain compliance with safety-related standards like ISO 26262 and still take advantage of new hardware features and functionality?

Enter a new webinar hosted by my inimitable colleague Chris Ault. Chris will examine techniques that enable you to add new features to existing devices, while maintaining close control of the safety certification scope and budget. Here are some of the topics he’ll address:

  • Overview of virtualization options and their pros and cons
  • Comparison of how adaptive time partitioning and virtualization help achieve separation of safety-critical systems
  • Maintaining realtime performance of industrial automation protocols without directly affecting safety certification efforts
  • Using Android applications for user interfaces and connectivity

Webinar coordinates:
Exploring Virtualization Options for Adding New Technology to Safety-Critical Devices
Time: Thursday, March 5, 12:00 pm EST
Duration: 1 hour
Registration: Visit TechOnLine