ARHUD – Redefining Automotive HMI
Date: 2023-03-23

With the advent of the smart cockpit, the driving experience is changing. More and more information and tools are being provided to the driver. These additions bring profound benefits, but they have a flaw: if the HMI medium is not well chosen, they can cause distracted driving. The industry is now searching for solutions.

 

At the recent HMI summit held in Shanghai on March 17, Raythink marketing expert Rex Jiang shared his view that the ARHUD will become the most important HMI in the smart cockpit. With its ability to augment reality and deliver vital, easy-to-understand information without distraction, the ARHUD is the most practical and safe HMI, and its continual improvements in performance and function will eventually pave the way for mainstream adoption.


Below are some of the speech highlights:

 

Raythink marketing expert Rex Jiang: “This morning I read a report stating that traffic accidents are once again on the rise. With the smart cockpit, cars are getting eyes, ears, and brains to help make sense of the complicated environment around us. Cars are getting increasingly intelligent, and the smart cockpit is an inevitable trend. But how best do we deliver these functions to the driver without distracting them?”

 

Will the ARHUD redefine automotive HMI?

 

With the smart cockpit, ever more tools and information are being added to the driving experience. ADAS is an invaluable tool, and most drivers who have used ADAS features swear by them. Navigation is something everyone uses now. Beyond that, there is also infotainment, telephony, and more. But all of these features, helpful and, one could say, essential as they are, share a fatal flaw: they draw the driver’s attention away from the real world and can cause distracted driving.

 

So, what is the solution? What is the best way for the car to interact with the driver? We are now seeing many new ways of presenting information to the driver as car manufacturers and Tier 1 suppliers search for solutions. Some increase the number of screens or combine them; Tesla, for example, decided to remove the instrument cluster and move its information to the central display. We see cars with different feedback systems, such as voice assistance and haptic feedback. And finally, we are seeing more and more cars incorporate the ARHUD.

 

I truly believe the ARHUD is the most important HMI for the smart car. It is the only way to deliver vital information to the driver while limiting distraction. This view is not mine alone; it is shared by our customers, who have increased their orders to a total of roughly 400,000 units as they move the ARHUD from an option to standard equipment. More and more, the ARHUD is being viewed as the future.

At the recent CES, BMW unveiled the i Vision Dee concept, which features the ARHUD as one of its key innovations; BMW’s CEO shared that the ARHUD will eventually replace the dashboard. Other car manufacturers are following this trend, such as VinFast and Li Auto, who are reducing the footprint of the dashboard and relying on the HUD, although it is currently only a WHUD. We are also seeing a big wave of investment into ARHUDs from investors, car manufacturers, and Tier 1 suppliers alike.

 

The ARHUD requires 3 core technologies

 

To achieve a true ARHUD, you need to master 3 key technologies.

First, the ARHUD requires a large FOV (field of view) and a long VID (virtual image distance) for a better user experience. Second, it requires optical design and PGU (picture generation unit) expertise to ensure a stable image that is clear and bright, free of double images and distortion; the design also needs to reduce the volume of the ARHUD to support wider adoption. Third, it requires a robust software platform capable of fusing a variety of inputs in real time and then processing that information with AI algorithms to properly render AR graphics.

 

I’ll discuss each of these 3 keys in more detail.

Why the large FOV?

We need a large FOV because we want to cover as much of the driver’s vision as possible. With a small FOV, the AR graphics crowd into a narrow window and can become distracting; rather than augmenting the real world, they may actually block the driver’s sight, and that is something we have to avoid.

A large FOV is also much more functional in turns: in a sharp turn, the road sweeps entirely outside a small FOV, leaving the graphics misaligned with the real world, while a large FOV can still cover it at least partially.

A small FOV also limits the functions that can be displayed. With more display area, we can introduce new functions and make better use of the space without blocking the driver’s view or adding distraction. As the FOV grows, there is a huge difference in the functionality and the benefit to the driver.

 

 

At different FOVs, we can implement different functions. At a 10° FOV you can show dashboard information, simple lane-departure warnings (LDW), and simple navigation. As you increase the FOV, you can add more features; for example, the navigation arrow can span multiple lanes, which makes it even easier to follow. We can also display blind-spot warnings in a more intuitive way, or even show points of interest (POI). It opens up far more possibilities and allows for the design of better UI and UX.
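As a rough back-of-the-envelope check (my own numbers, not figures from the talk, assuming the quoted FOV is horizontal and a flat virtual image plane), the lateral coverage grows quickly with FOV and VID:

```python
import math

def virtual_image_width(fov_deg: float, vid_m: float) -> float:
    """Lateral coverage of the virtual image at the given VID.

    Flat-plane approximation: width = 2 * VID * tan(FOV / 2),
    treating the quoted FOV as the horizontal field of view.
    """
    return 2 * vid_m * math.tan(math.radians(fov_deg) / 2)

# A 10-degree FOV at a 7.5 m VID vs. a 20-degree FOV at a 15 m VID
print(f"{virtual_image_width(10, 7.5):.1f} m")  # ~1.3 m, under half a lane
print(f"{virtual_image_width(20, 15):.1f} m")   # ~5.3 m, about 1.5 lanes
```

A 10° image at a short VID covers barely a third of one lane, while a 20° image at 15 m spans roughly a lane and a half; drawn in perspective, that is enough for arrows that sweep across adjacent lanes.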

 

We also need a longer VID.

Combined with a large FOV, a longer VID enlarges the coverage of the virtual image.

A longer VID is also required to essentially “trick” the human eye into viewing the virtual images as part of the real world, which provides a much safer driving experience.

In addition, with a shorter VID the eyes must constantly refocus and adjust their ocular angle between the image and the road, which causes fatigue. The time it takes to refocus can also become a safety issue, since refocusing takes longer for older drivers.
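To put rough numbers on that ocular angle (my own illustration, assuming a typical interpupillary distance of about 65 mm), the vergence angle the eyes must hold for an object at distance d is:

```latex
\theta(d) = 2\arctan\!\left(\frac{\mathrm{IPD}}{2d}\right), \qquad
\theta(2.5\,\mathrm{m}) \approx 1.5^{\circ}, \qquad
\theta(15\,\mathrm{m}) \approx 0.25^{\circ}
```

At a 15 m VID the vergence angle is already close to that for distant traffic, so glancing between the graphics and the road requires almost no ocular adjustment, whereas an image at 2 to 3 meters, typical of conventional WHUDs, forces a noticeable change on every glance.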

 

Various studies also show that drivers naturally look at the road from about 15 m ahead and beyond, so we want to aim for that most natural viewing state for the best driving experience.

 

A longer VID brings other benefits as well, such as eliminating ghost (double) images: as the image moves farther away, the angular separation between the reflections off the windshield’s two surfaces shrinks until the eye can no longer resolve it. This lets us implement the ARHUD without a PVB wedge film, cutting costs and simplifying production. A longer VID also lets warnings be presented to the driver from farther away, allowing more time to respond: a 15 m VID gives the driver twice as long to react as a 7.5 m VID.
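One way to read that factor of two (my own arithmetic, not from the talk): if a warning is anchored at the virtual image distance and the car travels at, say, 54 km/h (15 m/s), the time until the vehicle reaches that point is

```latex
t = \frac{d}{v}, \qquad
t_{15\,\mathrm{m}} = \frac{15\,\mathrm{m}}{15\,\mathrm{m/s}} = 1.0\,\mathrm{s}, \qquad
t_{7.5\,\mathrm{m}} = \frac{7.5\,\mathrm{m}}{15\,\mathrm{m/s}} = 0.5\,\mathrm{s}
```

Doubling the anchor distance doubles the reaction window at any speed.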

 

Overall, the longer VID results in a better user experience. Here is another simple illustration of the difference the VID and FOV can make: we don’t want to be limited to the small white display area of a conventional HUD, but instead to fully cover 3 lanes.

 

So we’ve established that a better ARHUD needs a larger FOV and a longer VID, ideally a 20° FOV with a 15 m VID. However, this is quite difficult to achieve with today’s traditional PGUs (TFT and DLP), even with Raythink’s expertise in these conventional technologies. That is why we have always pushed toward better solutions, such as the LBS (laser beam scanning) module.

 

Here we have a simple comparison of the different PGUs. For the sake of time I won’t discuss every detail but just go over some of the key points.

TFT, while very mature and cost-effective, struggles with brightness and temperature, so it has trouble extending the VID: a long VID requires high optical magnification, which concentrates sunlight back onto the panel (solar loading) and can burn it out. TFT also has contrast issues, which can create a visible light window in dark environments.

 

DLP is monopolized by TI, so it is higher priced and less customizable than other solutions. It also requires a larger heatsink, making it difficult to reduce the ARHUD’s volume. These factors make it harder to adapt to a wide range of vehicles.

 

LCOS is still an emerging technology, so much remains to be determined; however, contrast and cost look likely to prevent it from reaching wider adoption.

 

Raythink believes that LBS will be the clear choice for the ARHUD PGU in the near future, so we have invested heavily in developing our own module, the OpticalCore. The OpticalCore PGU offers high brightness, contrast, and resolution in a smaller volume than other PGUs, at a cost approaching that of TFT.

 

Because the OpticalCore PGU “scans” the image out pixel by pixel, it can switch individual pixels fully on or off, which results in lower power consumption, true blacks, and a very high contrast ratio. And because it is laser-based, it also delivers high brightness and a very wide color gamut.
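A toy model (my own sketch with made-up numbers, not Raythink’s implementation) of why per-pixel laser modulation removes the light window:

```python
# Toy model: luminance of a "black" pixel under two architectures.
# All numbers are illustrative assumptions, not measured values.

BACKLIGHT_NITS = 10_000   # hypothetical TFT backlight luminance
LC_LEAKAGE = 1 / 1_000    # a backlit LC cell never blocks light perfectly

def tft_black_level() -> float:
    # The backlight is always on; "black" is whatever leaks through,
    # which is what shows up as a grey light window at night.
    return BACKLIGHT_NITS * LC_LEAKAGE   # -> 10 nits of glow

def lbs_black_level() -> float:
    # The laser is simply not fired for black pixels on a scan line.
    return 0.0                           # -> true black, no light window

print(tft_black_level(), lbs_black_level())
```

With a near-zero black level, the contrast ratio (peak luminance divided by black level) becomes effectively unbounded, which is why the scanned image shows no light window at night.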

 

The OpticalCore PGU therefore shows no light window, even in dark driving conditions. Beyond that, the LBS module also enables a large FOV and long VID, eliminates ghosting (removing the need for a PVB wedge film), and alleviates solar loading concerns.

 

Our OpticalCore is nearing production, and we have already developed multiple prototypes and have successfully implemented the OpticalCore module in an ARHUD, which will be unveiled at the Shanghai expo next month.

 

The 3rd key to the ARHUD is having a robust AR software architecture.

 

First, the ARHUD needs to integrate a wide variety of data sources and perform data fusion to improve accuracy. We obtain and combine information covering vehicle location and orientation, detected threat objects, eye/head position, environmental data, and even road and traffic conditions. The inputs include vision cameras, radar, lidar, maps, light sensors, inertial sensors, the instrument cluster, ADAS, DMS, and more. The software must integrate all of this in real time, with low latency.
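As an illustrative sketch of what such a fusion step might look like (the structure, source names, and field names below are all hypothetical, not Raythink’s actual software):

```python
from dataclasses import dataclass

@dataclass
class SensorSample:
    source: str        # e.g. "camera", "radar", "imu", "dms"
    timestamp: float   # seconds on a common clock
    payload: dict      # decoded measurement

@dataclass
class WorldModel:
    vehicle_pose: tuple   # (x, y, heading) in a local frame
    eye_position: tuple   # (x, y, z) from the DMS, for AR registration
    threats: list         # fused obstacle tracks from camera/radar/lidar

def fuse(samples: list[SensorSample], horizon_s: float = 0.05) -> WorldModel:
    """Combine time-aligned samples into one coherent snapshot.

    Real systems interpolate each source to a common timestamp and
    weight sources by confidence; here we just keep the newest sample
    per source within the fusion horizon, to show the data flow.
    Assumes each source produced at least one recent sample.
    """
    newest = max(s.timestamp for s in samples)
    latest: dict[str, SensorSample] = {}
    for s in samples:
        if newest - s.timestamp <= horizon_s:
            if s.source not in latest or s.timestamp > latest[s.source].timestamp:
                latest[s.source] = s
    return WorldModel(
        vehicle_pose=latest["imu"].payload["pose"],
        eye_position=latest["dms"].payload["eye_xyz"],
        threats=latest["camera"].payload.get("tracks", []),
    )
```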

 

Once we have all this information, we move to the processing and rendering stage. A wide variety of AI algorithms act on this data: distortion compensation, perspective compensation, environment adaptation, priority management, and many more, all to deliver the best AR image to the driver. These algorithms are of the utmost importance, as distortion, low visibility, and unstable images severely reduce the usefulness of the ARHUD.
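To make “perspective compensation” concrete, here is a minimal pinhole-style sketch (my own simplification, with assumed coordinate conventions) of projecting a world-space anchor, such as a navigation waypoint, onto the virtual image plane using the tracked eye position:

```python
def project_to_image_plane(point_xyz, eye_xyz, vid_m=15.0):
    """Project a world point onto the virtual image plane.

    Coordinates: x right, y up, z forward, in a common vehicle frame.
    The virtual image plane sits vid_m ahead of the eye point; similar
    triangles give the point's position on that plane.
    """
    dx = point_xyz[0] - eye_xyz[0]
    dy = point_xyz[1] - eye_xyz[1]
    dz = point_xyz[2] - eye_xyz[2]
    if dz <= 0:
        return None                     # behind the driver, not drawable
    scale = vid_m / dz
    return (eye_xyz[0] + dx * scale,    # x on the image plane
            eye_xyz[1] + dy * scale)    # y on the image plane

# A lane marker 30 m ahead and 1.7 m right of the eye point lands
# 0.85 m right of center on a 15 m virtual image plane.
print(project_to_image_plane((1.7, -1.2, 30.0), (0.0, 0.0, 0.0)))
```

Distortion compensation would then warp these plane coordinates through a calibrated map of the windshield and mirror optics before rasterization; because the eye position feeds the projection, the graphic stays locked to the real object as the driver’s head moves.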

 

After all this processing, we finally output precise AR functions and graphics that can adapt to different situations. The AR functions include ADAS features such as LDW, PCW, collision warnings, blind-spot detection, AR navigation, and more. We can also present AI assistants, infotainment, POI, telephony, and other functions. Finally, the ARHUD needs to fit seamlessly into the smart cockpit and interact with the other screens and car systems for a holistic experience.

 

With our AR Generator SDK software platform, our customers can customize and personalize the user experience, creating an experience that uniquely represents their brand.

 

To reiterate, the ARHUD needs to master these 3 core technologies to truly provide a great driving experience: the large FOV and long VID, the optical design and PGU expertise, and the software platform that allows the ARHUD to fulfill its potential. With these in place, the ARHUD will rapidly take its place as a core HMI of the smart car era.

 

I believe Raythink is uniquely positioned in the ARHUD industry, as we are one of the very few companies able to realize the ARHUD’s true capabilities.

 

 

About Raythink:

  

Raythink is an international company, with our headquarters and factory in Shenzhen, our software algorithm center in India, our optical innovation center in Taipei, and our systems office in Shanghai. Although we were founded in 2019, only about 4 years ago, we have been working on ARHUD technology since 2014 and have achieved many industry firsts, such as the first 20° FOV, 15 m VID ARHUD in 2020, and now the first LBS ARHUD, which will be unveiled next month.

 

We have a wide range of ARHUD products to meet our customers’ needs, ranging from a 6.5° to a 20° FOV and up to a 15 m VID. At any stage of a customer’s growth, we can adapt and provide the most suitable solution. We also have expertise in utilizing and optimizing TFT/DLP solutions, creating a viable upgrade path to our OpticalCore PGU.

 

Because we are able to eliminate ghosting, we also eliminate the need for a PVB wedge film. And our AR Generator SDK is full of features that let our customers create the most advanced user and HMI experiences.

 

Thanks to our leading technology and expertise, we have been awarded multiple mass-production contracts with a total order volume of close to 400,000 units.

 

(The above content is drawn from “Redefining Automotive HMI,” the speech given by Raythink marketing expert Rex Jiang at the 2023 3rd annual GasGoo HMI Conference.)