
Tech Symposia 2016 Tech Talks - Graphics, video and virtual reality


Following a fantastic first event in Shanghai, it was off to the airport for a short hop to a much chillier Beijing for round two of 2016’s Tech Symposia. On Monday we talked about the keynotes and major product announcements, so today we’re taking you to the beautiful Beijing Sheraton ballroom, where our technical experts took us on a deeper dive into a huge range of products and processes.


Split into three streams, the Tech Talks covered Next generation processing, Smart embedded & IoT, and Intelligent implementation and infrastructure. Focussing on the first stream, I started with ARM Senior Product Managers Dan Wilson and Roger Barker, who took a closer look at the new products launched on Monday. Having covered market drivers such as Virtual Spaces and the new Vulkan graphics API in the keynotes, Dan used his session to consider Mali-G51’s Bifrost GPU architecture in greater detail.

 

The first Bifrost-based Mali GPU, Mali-G71, is a high-performance product designed for premium devices. To bring quality graphics to mainstream devices based on Mali-G51, specific architectural optimizations were made to rebalance workloads and prioritise graphics processes. A new shader core design allows partners to choose a flexible implementation of single or dual pixel shader cores, with single pixel cores able to handle one texel per cycle and dual pixel cores handling two texels per cycle. Partners can also choose to implement an asymmetric combination of the two, such as an MP3 configuration. Not only are there changes to the available shader cores, but specific optimizations were also made to the texture and varying units. The texture unit changes are designed for increased tolerance to high memory system latency while effectively reducing the pipeline length, cutting silicon die area and power consumption. As Mali-G51 based devices start to appear in consumers’ hands in 2018, it will be great to see how our partners have leveraged these developments to deliver a better user experience on mainstream devices.
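
To put those texels-per-cycle figures in perspective, here is a minimal back-of-the-envelope sketch (in C) of peak texture fill rate for a few hypothetical shader core mixes; the 650 MHz clock and the particular MP3 mix are illustrative assumptions, not product specifications.

```c
/* Back-of-the-envelope texel throughput for hypothetical Mali-G51
 * configurations, using the "one texel per cycle" (single pixel core)
 * and "two texels per cycle" (dual pixel core) figures from the talk.
 * The 650 MHz clock and the MP3 mix are purely illustrative. */
#include <stdio.h>

int main(void)
{
    const double clock_hz = 650e6;          /* illustrative clock */
    struct { const char *name; int single_cores; int dual_cores; } cfgs[] = {
        { "MP1 (1x single)",           1, 0 },
        { "MP2 (2x dual)",             0, 2 },
        { "MP3 (1x single + 2x dual)", 1, 2 },  /* hypothetical asymmetric mix */
    };

    for (unsigned i = 0; i < sizeof cfgs / sizeof cfgs[0]; ++i) {
        double texels_per_cycle = cfgs[i].single_cores * 1.0
                                + cfgs[i].dual_cores   * 2.0;
        double gtexels_per_s = texels_per_cycle * clock_hz / 1e9;
        printf("%-28s %.1f texels/cycle -> %.2f Gtex/s\n",
               cfgs[i].name, texels_per_cycle, gtexels_per_s);
    }
    return 0;
}
```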

[Image: Mali-G51 single and dual pixel shader core configurations]

Following Dan, Roger took to the stage to provide more detail about the new video processor, Mali-V61. He highlighted the evolution of video from live TV broadcast through on-demand streaming right up to today’s newest use cases, where an ever increasing number of users are communicating in live, real-time video. Featuring all-new, super high quality VP9 encode and decode and much improved HEVC encode, the Mali-V61 VPU again provides better-than-ever configurability and choice to partners. Roger explained that this high quality encode matters because of the emergence of time-critical video applications that cannot tolerate the additional processing time required to transcode content between codecs in the cloud before, for example, delivering a VP9 decode. With VP9 encode you can upload your video content in the same format in which it will be decoded, removing the transcoding lag and enabling the fastest possible video experience. In terms of flexibility, we were given a more rounded view of the options available across multicore scaling. With an 8 core configuration, partners can choose to support one 4K stream at 120 frames per second (FPS), or alternatively multiple streams at a variety of quality and FPS performance points from 720p right up to 4K. These are of course just a taste of the technical details, so visit the Connected Community to read the launch blogs for Mali-V61 and Mali-G51 for the full story.
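
As a rough illustration of what that multicore scaling means in practice, the sketch below divides the headline 4K at 120 FPS pixel-rate budget across some common stream formats; it is simple arithmetic on the numbers quoted in the talk, not a Mali-V61 specification.

```c
/* Illustrative pixel-rate budgeting for a multicore VPU, anchored on the
 * headline figure from the talk: an 8-core configuration handling one
 * 3840x2160 stream at 120 FPS. The per-stream maths below is a sketch,
 * not a datasheet. */
#include <stdio.h>

int main(void)
{
    const double budget = 3840.0 * 2160.0 * 120.0;   /* pixels per second */

    struct { const char *name; int w, h, fps; } streams[] = {
        { "4K    @ 60", 3840, 2160, 60 },
        { "1080p @ 60", 1920, 1080, 60 },
        { "720p  @ 30", 1280,  720, 30 },
    };

    for (unsigned i = 0; i < sizeof streams / sizeof streams[0]; ++i) {
        double rate = (double)streams[i].w * streams[i].h * streams[i].fps;
        printf("%s: %.0f Mpix/s, ~%.0f such streams fit in the 4K120 budget\n",
               streams[i].name, rate / 1e6, budget / rate);
    }
    return 0;
}
```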


Another great multimedia session was hosted by our in-house graphics guru Sylwester Bala and boasted the rather poetic title ‘Virtual Reality: Hundreds of millions of pixels in front of your eyes.’ Sylwester took us through the evolution of mobile gaming from the earliest Game Boy platforms right up to the mobile virtual reality applications appearing today. Graphics complexity has increased throughout this timeline, but virtual reality has accelerated the trend even further. For starters, we have to render a marginally different view for each eye in order to create a viewpoint that our brain understands, effectively doubling the graphics workload. The lenses within the head mounted display are needed to correct distance perception in the field of view, but they have the side effect of making the final image appear sunken in the middle, known as a pincushion effect. Barrel distortion therefore has to be applied in post processing to correct this, adding another level of processing complexity. Sylwester discussed the factors which often limit the quality of a VR experience, such as latency, doubled CPU and GPU processing (required by the two separate views) and resolution.

One of the solutions for reducing graphics workload is foveated rendering. This approach effectively splits the display into two concentric regions to mimic the fovea of your eye: the central section is rendered in high resolution and the outer section in lower resolution. The reduced quality of the outer section isn’t perceptible to the viewer but greatly reduces the processing power required. It does, however, mean that instead of rendering twice, for two eyes, we are rendering four times, for four different sections. Sylwester explained how our Multiview extension can render both views, with the required variations, simultaneously. Read more about Multiview in this blog. The really exciting result of using Multiview is its impact on GPU performance. Graphs demonstrated that GPU utilization could be taken down from near 100% (for 1024 x 1024 resolution) to 40% for a 512 x 512 inset resolution, with up to 50% bandwidth savings. Further savings can be achieved by reducing the size of the inset section depending on the specifics of the device and its optics, and further efficiencies still through the use of ARM Frame Buffer Compression for bandwidth-heavy processes like Timewarp, in some cases saving up to 50%.
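
The pixel arithmetic behind those savings is easy to sketch. The snippet below compares a full-resolution eye buffer with a foveated split, using the 1024 x 1024 and 512 x 512 figures from the talk; the quarter-resolution outer region is an assumption for illustration only.

```c
/* Rough pixel-count comparison behind foveated rendering, using the
 * per-eye resolutions mentioned in the talk (1024x1024 full view,
 * 512x512 high-resolution inset). The outer region scale factor is an
 * assumption, not a measured figure. */
#include <stdio.h>

int main(void)
{
    const double full  = 1024.0 * 1024.0;   /* per-eye, non-foveated        */
    const double inset = 512.0 * 512.0;     /* high-res central region      */
    const double outer = full / 4.0;        /* assume outer region shaded
                                               at quarter resolution        */
    const double foveated = inset + outer;

    printf("Per eye: full %.0f px, foveated %.0f px (%.0f%% of full)\n",
           full, foveated, 100.0 * foveated / full);
    printf("Both eyes: 4 render passes instead of 2, but only %.0f%% of "
           "the shaded pixels\n", 100.0 * foveated / full);
    return 0;
}
```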

 

There were so many fantastic sessions that I couldn’t possibly hope to cover them all, but I hope this has given you at least a glimpse of the vast breadth of knowledge we’re so lucky to be able to share with our partners. From here the Tech Symposia tour moves to Shenzhen on 4th November, then on to Taiwan, Korea, Japan and finally India before finishing up for another year. Don’t hesitate to get in touch if there’s something in particular you’d like to hear more about!



Tech Symposia 2016 China roundup: Shanghai – Beijing – Shenzhen


Welcome to sunny Shenzhen for the third and final day of the China leg of ARM Tech Symposia 2016. There’s a very different feel to this city from the moment you leave the airport. Just a few short decades ago this was a sleepy little fishing village, and the speed of its growth into today’s sprawling metropolis of shining skyscrapers with LED displays emblazoned across their sides gives an impression of youthful urgency. The Ritz Carlton plays host to us today; with flashing lights and a Hollywood soundtrack, all the familiar faces were welcomed back to the stage to share their knowledge with a new set of partners and developers.


Asked for his impressions of this year’s ARM Tech Symposia events, VP Worldwide Marketing, Ian Ferguson (now famous for his keynote describing IoT applications for watering his walnuts), said ‘China is such an important market for ARM and its ecosystem. Our ongoing commitment to bringing experts from across our company and the ecosystem, helps equip local companies to develop compelling new products that benefit us all.’ It’s certainly true that the innovation and progression of the Chinese technology industry is pushing forward many of the solutions we’ll all come to rely on in the future. It seems every seat is filled with bright, talented people keen to take emerging technologies to a broader marketplace.

[Image: Ian Ferguson on stage]

For those of us lucky enough to attend all three events it’s been a fantastic opportunity to see talks across all three technical tracks, broadening our own learning along with our partners and colleagues. This learning experience is one of the key aims of the event, with Noel Hurley, GM of Business Segments Group, saying: ‘these events are so well attended and organised (well done to the team!) it’s really encouraging to see all these engineers keen to understand what we are doing and how they can design with ARM technology’. The turnout has indeed been great, with a vast variety of attendees and speakers from all over the world, coming together to share their experience.

 

Having focused on graphics and VR at previous events, today I had the chance to join Judd Heape, from our newly created Imaging and Vision Group, for his talk on the Computer Vision products which joined ARM’s portfolio following the acquisition of Apical earlier this year. Many people are already feeling the benefits of assertive display technologies without even being aware of it. Assertive Display uses pixel-by-pixel tone mapping to adjust specific areas of an image, allowing you to see greater contrast and detail on the screen of your phone even in bright conditions. Not only does this technology improve the viewing quality of your images, it can also save between 20 and 50% of power consumption depending on your settings. It is already working silently in millions of devices and we are now in a position to leverage its full potential for a greater range of consumers, as well as extending it with the latest versions of the product, which can automatically remaster High Dynamic Range (HDR) content for viewing on mobile displays. Also explained were the Assertive Camera product, which uses HDR to improve image capture quality, and a Geometric Distortion Engine which can effectively ‘unravel’ fish-eye style images into standard perspective.
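
To make the idea of pixel-by-pixel tone mapping concrete, here is a toy sketch that brightens dark regions of a greyscale image based on their local neighbourhood. It is emphatically not Apical’s Assertive Display algorithm, just an illustration of adjusting specific areas of an image rather than applying one global curve; the radius, sampling stride and gain curve are all assumptions.

```c
/* A toy per-pixel tone-mapping pass: each output pixel is boosted by a
 * gain derived from the average brightness of its neighbourhood, so dark
 * areas gain contrast while bright areas are left mostly alone. */
#include <stdint.h>
#include <math.h>

void local_tone_map(const uint8_t *in, uint8_t *out, int w, int h)
{
    enum { R = 8 };                              /* neighbourhood radius */
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            long sum = 0, n = 0;                 /* sparse box average   */
            for (int dy = -R; dy <= R; dy += 4)
                for (int dx = -R; dx <= R; dx += 4) {
                    int yy = y + dy, xx = x + dx;
                    if (yy >= 0 && yy < h && xx >= 0 && xx < w) {
                        sum += in[yy * w + xx];
                        ++n;
                    }
                }
            double local = (double)sum / n / 255.0;      /* 0..1          */
            double gain  = 1.0 + 0.8 * (1.0 - local);    /* darker -> more */
            double v     = pow(in[y * w + x] / 255.0, 1.0 / gain) * 255.0;
            out[y * w + x] = (uint8_t)(v > 255.0 ? 255.0 : v);
        }
    }
}
```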

[Image: Computer vision session]

Judd explained that the importance of imaging doesn't stop at capturing and displaying images but now extends to understanding the content of those images. Smart, low power technologies like ARM's Computer Vision engine, which is enabling instant facial recognition and behavior mapping across large groups of people, will also start to change the way we work. Security and safety can be much improved by utilizing this technology to assess overcrowding in transport, for example, and address it before it becomes a danger to commuters. It can also improve personal video content by allowing you to focus specifically on your friends or family members as they play football, or run a marathon for example.

 

Demos, too, are adding value to these events. With dedicated team members and partners on hand to help you try everything from VR to drones, even the coffee breaks can be hugely informative.

 

Having seen just a small proportion of the fantastic presentations across these events I’m in awe of the wealth of potential at our fingertips and am already looking forward to next year’s events to see what one more year of innovation will bring. For those of you lucky enough to join the team at the upcoming events in Taiwan, Korea, Japan and India, there’s a lot to look forward to!

VR hub page - A one stop shop for all things virtual reality


Latest Unreal Engine release 4.14 adds mobile features optimized for Mali GPUs


This week saw the release of the latest Unreal Engine upgrade, UE4.14, and it includes several cool mobile features:

 

VR Multiview Support for Mobile

VR Multiview is an OpenGL ES extension available on Android devices. Multiview allows the developer to render two viewpoints of the same scene, representing the left and right eye perspectives, simultaneously with a single draw call. By submitting each draw once instead of twice, the extension reduces CPU and GPU load compared to traditional stereoscopic rendering. The blog “Understanding Multiview” by Thomas Poulet provides more detail on how to use this extension for maximum benefit.
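
Outside of Unreal, the same extension can be driven directly from OpenGL ES. The sketch below shows the essential setup: a two-layer texture array attached with glFramebufferTextureMultiviewOVR and a vertex shader indexed by gl_ViewID_OVR. It assumes a current EGL context and a driver that exposes OVR_multiview, and omits all error handling.

```c
/* Minimal sketch of setting up stereo rendering with the OVR_multiview
 * OpenGL ES extension. Assumes an EGL context is already current and the
 * driver exposes the extension; error checking omitted. */
#include <GLES3/gl3.h>
#include <GLES2/gl2ext.h>
#include <EGL/egl.h>

/* vertex shader (compiled into the rendering program elsewhere): one draw
 * call writes both views, selected by gl_ViewID_OVR */
static const char *vs_src =
    "#version 300 es\n"
    "#extension GL_OVR_multiview : require\n"
    "layout(num_views = 2) in;\n"
    "uniform mat4 u_view_proj[2];\n"       /* left/right eye matrices */
    "in vec4 a_position;\n"
    "void main() {\n"
    "    gl_Position = u_view_proj[gl_ViewID_OVR] * a_position;\n"
    "}\n";

GLuint create_multiview_fbo(int width, int height)
{
    /* colour target: a 2-layer texture array, one layer per eye */
    GLuint tex, fbo;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, width, height, 2);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    /* the extension entry point is fetched at run time */
    PFNGLFRAMEBUFFERTEXTUREMULTIVIEWOVRPROC glFramebufferTextureMultiviewOVR =
        (PFNGLFRAMEBUFFERTEXTUREMULTIVIEWOVRPROC)
            eglGetProcAddress("glFramebufferTextureMultiviewOVR");

    /* attach both layers; subsequent draws render to both views at once */
    glFramebufferTextureMultiviewOVR(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                     tex, 0, /*baseViewIndex*/ 0, /*numViews*/ 2);
    (void)vs_src;
    return fbo;
}
```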

 

In the UE4.14 editor you can enable Mobile Multiview as per the picture below:

 

[Image: Enabling Multiview in UE4.14]

 

The Multiview feature in UE4 is currently only compatible with Mali GPUs.

 

Improved Vulkan API Support on Android

UE 4.14 enhances Vulkan support for the Samsung Galaxy S7 device, as well as the latest Android devices supporting Android 7 (Nougat) OS.

Vulkan brings many benefits to mobile devices and graphics developers. For developers, the API works cross-platform, covering everything from desktop to consoles and mobile devices. For mobile devices, Vulkan has a much lower CPU overhead than previous graphics APIs thanks to its support for multithreading. Nowadays, mobile devices have, on average, between four and eight CPU cores, so having a multithreading-friendly API is key.
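
A minimal sketch of that multithreading benefit is shown below: several CPU threads each record their own Vulkan command buffer (allocated from per-thread command pools created elsewhere) and the main thread submits them together. Device setup, render passes and synchronisation are deliberately left out; treat it as an illustration of the pattern rather than production code.

```c
/* Sketch of Vulkan's multithreading friendliness: each CPU thread records
 * commands into its own command buffer, then the main thread submits them
 * in one vkQueueSubmit. Device, queue, per-thread pools and buffers are
 * assumed to have been created elsewhere; error handling omitted. */
#include <vulkan/vulkan.h>
#include <pthread.h>

#define NUM_THREADS 4

typedef struct {
    VkCommandBuffer cmd;   /* allocated from this thread's own VkCommandPool */
} WorkerCtx;

static void *record_commands(void *arg)
{
    WorkerCtx *ctx = (WorkerCtx *)arg;

    VkCommandBufferBeginInfo begin = {
        .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
        .flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT,
    };
    vkBeginCommandBuffer(ctx->cmd, &begin);
    /* ... vkCmdBindPipeline / vkCmdDraw calls for this thread's slice ... */
    vkEndCommandBuffer(ctx->cmd);
    return NULL;
}

void record_and_submit(VkQueue queue, WorkerCtx workers[NUM_THREADS])
{
    pthread_t threads[NUM_THREADS];
    VkCommandBuffer cmds[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; ++i)
        pthread_create(&threads[i], NULL, record_commands, &workers[i]);
    for (int i = 0; i < NUM_THREADS; ++i) {
        pthread_join(threads[i], NULL);
        cmds[i] = workers[i].cmd;
    }

    VkSubmitInfo submit = {
        .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .commandBufferCount = NUM_THREADS,
        .pCommandBuffers = cmds,
    };
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
}
```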

 

To learn more about the benefits of Vulkan and how it compares with the OpenGL ES API, read this blog.

 

Forward Shading Renderer with MSAA

Unreal Engine 4.14 introduces a new forward shading renderer which combines high-quality lighting features with Multisample Anti-Aliasing (MSAA) support.

 

Anti-aliasing is a technique for reducing “jaggies”, the step-like edges on lines that should otherwise appear smooth. These steps appear because the display doesn’t have enough resolution to show a line that looks smooth to the human eye. Anti-aliasing tricks the eye into seeing a smooth edge by placing blended pixels on either side of the hard line, weighted by pixel coverage.

[Image: Anti-Aliasing (without and with)]

 

Multiple levels of anti-aliasing are available on ARM Mali GPU hardware. 4x MSAA comes with a close to zero performance penalty on Mali GPUs because the tile buffer supports four samples per pixel by default. MSAA is well suited to VR applications, as the eyes are much closer to the display.
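
In a native OpenGL ES application, 4x MSAA is typically requested when choosing the EGL configuration. A minimal sketch, with context creation and error handling omitted:

```c
/* Requesting a 4x MSAA EGLConfig; on Mali this comes at close to zero
 * cost because the tile buffer already holds four samples per pixel.
 * Context/surface creation is omitted from this sketch. */
#include <EGL/egl.h>

EGLConfig choose_msaa_config(EGLDisplay dpy)
{
    const EGLint attribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_RED_SIZE,   8,
        EGL_GREEN_SIZE, 8,
        EGL_BLUE_SIZE,  8,
        EGL_DEPTH_SIZE, 16,
        EGL_SAMPLE_BUFFERS, 1,   /* enable multisampling        */
        EGL_SAMPLES,        4,   /* ask for 4 samples per pixel */
        EGL_NONE
    };
    EGLConfig config = NULL;
    EGLint num_configs = 0;
    eglChooseConfig(dpy, attribs, &config, 1, &num_configs);
    return num_configs > 0 ? config : NULL;
}
```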

 

More information about MSAA in ARM Mali GPUs is available as part of our developer resources.

 

Automatic LOD Generation

This new feature automatically generates several Levels Of Detail (LODs) for your static meshes. LOD reduces the polygon count in your graphics, as a different LOD is rendered depending on the distance of your mesh from the camera view point. Rendering large meshes consumes a lot of memory and battery power. It’s therefore very important to render at the most efficient LOD level, that is to say, the lowest LOD that does not show visual artefacts or compromise visual quality. By rendering content at this optimal point rather than at the highest detail, mobile devices can benefit from huge rendering time and energy savings.
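
The underlying idea is simple to sketch: estimate how large the mesh appears on screen and pick the coarsest LOD that still looks good. The thresholds in this example are made up for illustration; UE4.14 computes the equivalent screen sizes for you when you tick Auto Compute LOD Distances.

```c
/* Sketch of distance-based LOD selection: estimate the on-screen coverage
 * of a mesh's bounding sphere and pick the first LOD whose (made-up)
 * threshold it still exceeds. */
#include <math.h>

/* approximate screen-space height fraction of a bounding sphere */
static double screen_coverage(double radius, double distance, double fov_y_rad)
{
    double half_angle = atan2(radius, distance);
    return half_angle / (fov_y_rad * 0.5);      /* 1.0 ~ fills the view */
}

int select_lod(double radius, double distance, double fov_y_rad)
{
    static const double thresholds[] = { 0.50, 0.25, 0.10 }; /* LOD0..LOD2 */
    double coverage = screen_coverage(radius, distance, fov_y_rad);

    for (int lod = 0; lod < 3; ++lod)
        if (coverage >= thresholds[lod])
            return lod;                 /* big on screen -> detailed mesh */
    return 3;                           /* far away -> coarsest LOD       */
}
```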

 

UE4.14 can automatically calculate the screen size to use for each LOD level created by ticking “Auto Compute LOD Distances”.

[Image: LOD Settings in UE4.14]

 

To find out more about LODs in UE4.14, or for more details of all the features released in 4.14, please read the UE4.14 release notes.

Huawei Mate 9: Sustained performance for gaming & VR


As the days get shorter and the cold weather begins to creep in, we know it’s that time of year when we can start to get excited about the brand new devices for the upcoming year. A major announcement for us in the ARM® Mali™ Multimedia team is the brand new Huawei Mate 9 smartphone, based on the Kirin 960 chipset. Featuring a dual 20MP / 12MP Leica camera setup, 4K video capture and 64GB of expandable storage, this is of course great news for both consumers and the smartphone market as a whole. Not only that, but it’s also especially exciting for us as one of the first devices to feature both of the premium ARM processors launched earlier this year at Computex, the Cortex®-A73 and Mali-G71.

[Image: Premium Mobile 2016]

The Mali-G71 GPU was the first graphics processor based on our new Bifrost architecture and was designed to support high end use cases like immersive VR gaming, as well as brand new graphics APIs like Khronos’s Vulkan. Superior energy efficiency is achieved through the smart combination of multiple ARM technologies, so as well as the Mali-G71, the Kirin 960 uses ARM big.LITTLE™ technology in an octa-core configuration. It features four high performance 'big' ARM Cortex-A73 cores and four high efficiency 'LITTLE' ARM Cortex-A53 cores. According to xda-developers, the Huawei Mate 9 outperforms its predecessor by around 10% in single-thread and 18% in multi-core performance. Combined with the other advantages of big.LITTLE (longer periods of sustained peak performance and a richer user experience) and Mali-G71, the Kirin 960 chipset in the Huawei Mate 9 will push the boundaries of mobile compute for use cases such as Augmented Reality and Virtual Reality, delivering a leading premium mobile experience.

[Image: big.LITTLE octa-core configuration]

 

Speed was of the essence in terms of handset performance, with Huawei boasting a clever machine learning algorithm that learns your habits as a user and prioritizes application performance accordingly. This allows the power to go where you need it most, ensuring the smoothest performance whilst protecting your privacy by running the algorithm directly on the handset rather than bouncing your data to the cloud.

 

This device hits the market just eight months after the Mali-G71 IP was first made available to HiSilicon’s Kirin team of engineers, an incredibly fast time to market, especially for a device capable of handling such complex content. With the inherent compatibility between the products, not to mention the ability to exploit the Mali-G71’s full coherency between CPU and GPU, it’s great to see Mali-G71 helping our partners shorten their design cycles and deliver the newest devices to consumers faster than previous generations.

 

The decision to design Mali-G71 with Vulkan in mind seems to have provided additional benefits too. Huawei showcased side-by-side screenshots of the Vulkan demo “The Machines” from Directive Games, claiming between 40% and a massive 400% more efficiency compared to the previous API, OpenGL ES! Our own comparisons also showed a massive power saving on Vulkan compared to OpenGL ES; watch the video to see just how beneficial a new, dedicated API can be.

Talking about the decision to make the Vulkan demo for Mali-G71, Atli Már Sveinsson, CEO and co-founder of Directive Games, explained: ‘With the high-speed growth of VR and AR on mobile devices we knew we needed a GPU with enough performance to deliver a really high quality user experience and the Mali-G71 gave us all the power we needed while still reducing energy consumption.’

 

With bigger and better products appearing faster and cheaper we’re still seeing huge leaps forward in the smartphone industry. With demanding new use cases emerging every day the journey is far from over and there’s still much to be done, so watch this space to see what other exciting advancements Huawei and ARM can deliver for premium smartphones!

[Image: Huawei Mate 9]

360 video: It's everywhere you look...


360 degree video is changing not only the way we consume content, but the way we create it. We’re no longer restricted to sharing our experiences in selfies, single photos or even panoramas to capture more of a given scene. With 360 degree video we can now share the whole scene, and not just in static images, but in motion. Better still, gone are the days of retrospective slideshows of your favourite holiday pics; now you can share what’s happening right now with the people you really wish could be there with you.

 

So how does 360 video actually work? Well, first of all we obviously have to capture the entire scene. This is made possible using a series of two or more cameras, as in the image below, to capture different fields of view. In some cases many cameras are used, but more often now we’re seeing two cameras, each capturing a 180 degree view, configured to cover the entire circular scene. Image quality is really important, especially for use with a VR headset, as we know from previous experience that unrealistic focus or resolution can take an immersive experience from fantastic to failure really fast.

[Image: 360 camera setups]

After we’ve captured high quality views of all the angles, we need to consolidate them into one cohesive scene. We do this by ‘stitching’ together each of the individual views, as seamlessly as possible, to create a single panorama that covers the entire 360 degrees. This is, of course, where using only two cameras can make things easier: having only two views to stitch together reduces the number of seams and therefore makes it less likely they’ll be visible to the user.

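At its core, stitching is a per-pixel mapping problem. The sketch below shows the geometric heart of a dual-fisheye stitcher: each pixel of the output equirectangular panorama is turned into a view direction and assigned to whichever 180 degree camera covers it. Real stitchers add lens calibration, seam blending and exposure matching on top of this, so treat it as an illustration only.

```c
/* Core mapping of a dual-fisheye stitcher: output equirectangular pixel
 * -> 3D view direction -> which of the two back-to-back cameras sees it. */
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct { double x, y, z; } Vec3;

/* equirectangular pixel (u,v in 0..1) -> unit direction vector */
Vec3 equirect_to_dir(double u, double v)
{
    double lon = (u - 0.5) * 2.0 * M_PI;     /* -pi .. pi     */
    double lat = (0.5 - v) * M_PI;           /* -pi/2 .. pi/2 */
    Vec3 d = { cos(lat) * sin(lon), sin(lat), cos(lat) * cos(lon) };
    return d;
}

/* which camera sees this direction? 0 = front fisheye, 1 = back fisheye */
int camera_for_dir(Vec3 d)
{
    return d.z >= 0.0 ? 0 : 1;
}
```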

Once we’ve created this circular environment we need to figure out how to use it. Viewing 360 content as a normal video, as you’ve almost certainly done on Facebook, is simple: you can just scroll around the view as you wish to see the areas not immediately in front of you. Viewing it in a VR headset for a truly immersive experience requires a little more work. As we know from our previous forays into VR content creation, we need to create two marginally different views, one for each eye. This ensures the brain can interpret the images as it would in the real world; if we created the two views identically, the brain would intuitively understand that something was wrong and the immersion of the experience would be instantly compromised. To get this right we can use clever technologies like our Multiview extension to create the duplicate views without doubling the rendering overhead. Barrel distortion then needs to be applied to counteract the pincushion effect caused by having the lens right next to the eye. This allows us to experience the 360 video as a fully immersive environment in the privacy of our own headset.
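
The barrel distortion step can be sketched as a simple radial remap of the eye buffer’s texture coordinates. The k1 and k2 coefficients below are placeholders; the real values (and sign convention) come from the headset’s lens profile.

```c
/* Radial remap used for lens-distortion pre-correction. With negative
 * k1/k2 the image is pulled towards the centre (barrel distortion), which
 * is what cancels the lens's pincushion effect. Coefficients here are
 * placeholders, not values for any real headset. */
typedef struct { float u, v; } UV;

UV barrel_distort(UV in, float k1, float k2)
{
    /* work in coordinates centred on the middle of the eye buffer */
    float x  = in.u - 0.5f;
    float y  = in.v - 0.5f;
    float r2 = x * x + y * y;

    /* polynomial radial scale: 1 at the centre, shrinking towards the
     * edges when k1/k2 are negative */
    float scale = 1.0f + k1 * r2 + k2 * r2 * r2;

    UV out = { 0.5f + x * scale, 0.5f + y * scale };
    return out;
}
```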

 

This is still a pretty complex process and might seem beyond the capability of the average user, but it’s no longer the realm of specialist agencies or the several-thousand-dollar custom cameras like the one Obama used to promote the protection of US national parks. With the recent release of the Samsung Gear 360, amongst others, 360 video capture just went mainstream. This tiny device is small and light enough to take with you wherever you go, and high enough quality that the benefits are quickly apparent.

[Image: Samsung Gear 360]

As Samsung’s (brilliant) advert shows, the world is no longer off limits just because you’re sick, or unable to travel, or even double booked for an event. With the easy capturing and immediate sharing of 360 content from a small, portable device, immersive environments and virtual spaces become the domain of the mainstream market.

 

In the interests of research (and not at all a nice day out), a couple of colleagues and I took a field trip into the centre of Cambridge to see just how easy it was to produce a 360 video, in this case a walking tour experience. We wanted to see just how simple the Samsung Gear 360 was to use and how much of our local world we could take to our global colleagues.

 

In this age of unlimited digital images we’re used to taking hundreds of pictures and discarding all but the very best. The disconcerting aspect of 360 video is that, because the cameras point all the way around, there’s no screen and you can’t actually see what it is you’re filming. This brings back the retro feeling of waiting to get your prints back from the developer in the pre-digital age, and it was somehow all the more exciting for the wait. Staged as a romantic walking and punting tour of King’s College, my colleagues and I had a heap of fun playing with our new toy. It was actually very easy to use, with great battery life and super easy upload for editing when we were done. (A note to the user though: a sturdy tripod is a must, as the little convex lenses don’t do too well when falling face first onto gravel from a couple of metres up... Oops.)

 

Intending to take our tour to our Chinese colleagues, we wanted to feature the memorial stone of Xu Zhimo, the famous Chinese poet who spent many years in Cambridge. Not only could we capture a great scene around the memorial stone itself, we also decided to take it a step further. In implementing the video for use with a VR headset we were able to add graphics pointing the user to the most interesting areas of the scene. This also allowed us to overlay graphics showing the full poem, effectively taking a 360 video to both a VR and an AR application with amazing ease. Best of all, you don’t need a top-of-the-line smartphone to enjoy these kinds of Virtual Spaces applications. We tested this video on our brand new Mali-G51 mainstream GPU and on its predecessor, Mali-T830. As you can see from the video below, Mali-G51’s best ever energy efficiency means applications like this can run smoothly even on mainstream devices.

The speed with which these awesome technologies are reaching the hands of the average consumer goes to show just how fast the adoption of VR and related tech is taking off. With DIY virtual spaces on the rise it’s only a matter of time until distance really is no barrier to our professional and social interactions.
