Human Machine Interfaces (HMIs) are an incredibly important part of consumer electronics: a machine that is clumsy and unintuitive to interact with will rarely be a great success. It is no surprise, then, that HMI has been one of the industry’s leading areas of innovation over the past ten to fifteen years. Phones have evolved from area-consuming, PC-like keyboards to soft-control touchscreens, a change that enabled larger screens and a better multimedia experience on pocket-sized, mass-market mobile devices. In fact, the simplicity of the touchscreen has become so popular that it has been adopted in many other areas where an easy-to-use natural user interface (NUI) is key, such as automotive multimedia systems and advanced medical applications.
At its simplest, HMI can be defined as “how a user interacts with their device”. However, innovations in HMI have a much deeper influence than that. Advancements in HMI have changed not only how we interact with devices, but also what those devices do for us: they have been key to unlocking new functionality and to redefining what our devices mean in our lives. Phones are no longer simply a means of exchanging messages. We can now monitor our health on them, check the weather, play a vast variety of games, draw and edit pictures, or surf the internet with ease. The evolution of the HMI goes hand-in-hand with the evolution of a device’s multimedia content.
Today, richly graphical user interfaces, touchscreens and soft controls are the norm. To enable this, processors have had to evolve to deliver the increased computational performance these devices demand. From a graphical interface perspective, GPU development has been driven by three key areas of demand:
- Increasing resolutions: Since the Google Nexus 10 exceeded full HD 1080p resolution with its 2560x1600 (WQXGA) screen, OEMs have continued to increase pixel density, setting 4K (UHD) resolution as the new goal for mobile silicon. To enable this, ARM is not only increasing the number of potential shader core implementations within each GPU (the ARM® Mali™-T760 GPU can host up to sixteen) but also improving the internal efficiency of the cores themselves and of memory access, to ensure that scaling the number of cores results in a proportional scaling of performance. The sketch after this list gives a feel for the pixel throughput these resolutions imply.
- Diversity of screen sizes: GPUs are needed not only for HD tablets and UHD DTV displays but also for smaller, wearable devices and IoT applications. The growing diversity of consumer devices is pushing semiconductor companies to deliver a correspondingly diverse GPU IP portfolio: a processor suitable for any application. With GPUs ranging from the area-efficient Mali-300 to the high-performing, sixteen-core ARM Mali-T760, this is exactly what ARM offers, and we are continuing to evolve our roadmaps to deliver great graphical experiences on any device.
- Hardware support for more complex content: As NUIs become increasingly life-like, hardware support for features such as the latest APIs becomes crucial to enabling graphics-rich, intuitive content at smooth frame rates. Beyond that, the raw computational power needed to produce the smooth, life-like images expected in high-end devices places ever-increasing demands on the capabilities of their processors. Again, that is where the efficiency of ARM CPUs and GPUs comes into play. Coupled with the configurability and scalability of ARM processors, device manufacturers have the flexibility they need to meet consumer demands, cost-efficiently, across the entire market.
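To put the resolution race into perspective, here is a back-of-the-envelope sketch of the pixel throughput a GPU must sustain at a steady 60 frames per second for the resolutions mentioned above. This is my own illustration; the figures are simple arithmetic, not ARM benchmark data.

```c
/* Pixel throughput implied by common display resolutions at 60 fps.
 * Simple arithmetic only -- illustrative, not a benchmark. */
#include <stdio.h>

int main(void)
{
    struct { const char *name; long w, h; } res[] = {
        { "1080p (Full HD)",  1920, 1080 },
        { "WQXGA (Nexus 10)", 2560, 1600 },
        { "4K UHD",           3840, 2160 },
    };
    const long fps = 60;

    for (int i = 0; i < 3; i++) {
        long pixels = res[i].w * res[i].h;   /* pixels per frame */
        printf("%-17s %9ld pixels/frame  %7.1f Mpix/s at %ld fps\n",
               res[i].name, pixels, pixels * fps / 1e6, fps);
    }
    return 0;
}
```

Moving from 1080p to WQXGA roughly doubles the pixels per frame, and 4K UHD doubles them again, which is why per-core efficiency and memory access have to improve alongside core count.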
I believe the current phase of HMI is still being explored and will continue to see significant innovation. In the world of battery-powered devices, traditional console and PC games have been adapted from controller-based platforms to mobile. With this shift, some developers have mimicked console controllers on the touchscreen, whilst others have found success with new, simple interfaces tailored to the nature of the game (swipes, tilts and so on). This success is inspiring more developers either to design new applications around these proven HMIs, or to create new HMIs tailored to their games, making the entire multimedia experience ever more intrinsically interactive rather than simply conforming to traditional HMI methods.
However, that’s all happening today. What really excites me is what we can see coming in the future.
Across nearly all the evolutions in NUI, you can see a trend towards effortless, instinctive interaction. Physical push buttons have given way to soft buttons; fixed-function devices have had their functionality opened up by this range of application-dependent, software-driven controls. Looking into the near future, I can see the next phase of HMI arriving in the form of our devices “reaching out” to interact with us. Why should I have to remember an easily copied PIN sequence to unlock my device? Why can’t ‘I’ be the key? This trend is at the beginning of its lifecycle, with facial recognition capabilities becoming standard in mobile devices and starting to be used for unlocking phones. As another example, why do we still have to find the controller every time we wish to change the channel or volume on the TV? Why can’t we control TVs ourselves via gesture or voice? Why can’t the TV control itself, reaching out to see if anyone is watching it or whether the content is suitable for the audience (for example, if there are children in the room)? As eyeSight’s CEO, Gideon Shmuel, says:
“In order for an interaction solution to be a true enhancement compared to existing UIs it must be simple, intuitive and effortless to control – and to excel even further, an interaction solution should become invisible to the user. This enhanced machine vision understanding will, in fact, deliver a user aware solution that understands and predicts the desired actions, even before a deliberate command has been given.”
The concepts for these new HMIs have existed for a while, but it is only in the past year that the technology has started to catch up and deliver the desired results within the restricted processing budget of mobile devices. In most cases, a device that “reaches out” to its user does so using gesture recognition, motion detection or facial recognition. Two issues had been holding this back. Firstly, the processing budget for the UI in embedded and mobile devices was not sufficient to support these pixel-intensive, computationally demanding tasks. Advancements such as GPU Compute, OpenCL™ and ARM big.LITTLE™ processing are addressing this issue, increasing the amount of processing possible within the same time budget, and several companies are seeing success in these areas.
[Video: interview with eyeSight]
Secondly, I believe the lack of a flexible, adaptable platform on which these tasks could be developed and matured was holding the technology back. Now that devices with performance-efficient GPU Compute technology are entering the market, such as the recently released Galaxy Note 3, ARM is seeing an explosion in the number of third parties and developers exploring ways in which this new functionality can bring their innovations to life.
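To make the idea of GPU Compute in an HMI pipeline concrete, below is a minimal OpenCL sketch of one of the simplest pixel-intensive tasks mentioned above: frame-difference motion detection, where each pixel of the current camera frame is compared against the previous frame and thresholded into a binary motion mask. This is my own illustrative example, not ARM or eyeSight code; error handling, camera capture and the later recognition stages are all omitted, and it assumes an OpenCL 1.x driver exposing a GPU device (build with something like `cc motion.c -lOpenCL`).

```c
/* Minimal sketch: frame-difference motion detection with OpenCL.
 * Illustration only -- no error handling, synthetic input data. */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

#define W 640
#define H 480

/* Kernel: per-pixel absolute difference of two greyscale frames,
 * thresholded into a binary "motion" mask. */
static const char *src =
"__kernel void motion_mask(__global const uchar *prev,            \n"
"                          __global const uchar *curr,            \n"
"                          __global uchar *mask,                  \n"
"                          uchar threshold)                       \n"
"{                                                                \n"
"    size_t i = get_global_id(0);                                 \n"
"    uchar d  = abs_diff(prev[i], curr[i]);                       \n"
"    mask[i]  = (d > threshold) ? (uchar)255 : (uchar)0;          \n"
"}                                                                \n";

int main(void)
{
    const size_t n = W * H;                 /* one byte per greyscale pixel */
    unsigned char *prev = calloc(n, 1), *curr = calloc(n, 1), *mask = malloc(n);
    curr[123] = 200;                        /* fake a pixel that changed */

    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx      = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q  = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "motion_mask", NULL);

    cl_mem bp = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, n, prev, NULL);
    cl_mem bc = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, n, curr, NULL);
    cl_mem bm = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n, NULL, NULL);

    cl_uchar thresh = 32;
    clSetKernelArg(k, 0, sizeof(bp), &bp);
    clSetKernelArg(k, 1, sizeof(bc), &bc);
    clSetKernelArg(k, 2, sizeof(bm), &bm);
    clSetKernelArg(k, 3, sizeof(thresh), &thresh);

    /* One work-item per pixel: the GPU processes the frame in parallel. */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, bm, CL_TRUE, 0, n, mask, 0, NULL, NULL);

    printf("pixel 123 flagged as motion: %s\n", mask[123] ? "yes" : "no");
    /* release of CL objects omitted for brevity */
    return 0;
}
```

Because every pixel is processed independently, work like this maps naturally onto the parallel shader cores of a GPU, leaving the CPU cluster free for the sequential parts of the pipeline.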
Looking even further ahead, it is clear that HMI will become still more complex as machines start to “reach out” to each other as well as to their users. As devices continue to diversify, I believe we will see a burst of innovation in how they interact and are used in conjunction with, or interchangeably with, each other. As the Internet of Things picks up pace, the conversation will be about HMMI rather than simply HMI; then HMMMI. How will we interact with our devices when all our devices are connected? If my smartphone senses that my hands are cold, will it automatically turn up the heating in the room or the car? If I leave my house but accidentally leave the lights on, will they turn themselves off? Will the advancements in NUI on our mobile devices make interactions on less interactive devices obsolete? Will we even need mobile devices as an interface to the machine world, or will every device with a processor be able to “reach out” to its environment? The possibilities are vast in a user-aware world, and ARM’s role will continue to be to develop the processor IP that enables continuous, ground-breaking innovation.
What are your thoughts on the future of NUI? What will ARM have to do to meet its future needs? Let us know your thoughts in the comments below.