This year GDC turns 30, and it has become the annual pilgrimage for any serious game developer: the place to catch the latest hardware releases, new optimized APIs, developer tools, middleware and game engines.
At ARM we have a myriad of free resources for game developers, from tutorials and developer guides to SDKs, sample code and developer tools. Our engineering team has been working flat out in the run-up to GDC’16 to create new developer resources and update existing ones, so that game developers are all set to get the most out of ARM Mali GPUs and ARM Cortex CPUs. They are dedicated to ensuring you can fully utilize the computational power available and achieve console-quality graphics on mobile platforms.
TALK #1: Vulkan API on Mobile with Unreal Engine 4 Case Study
The new Vulkan graphics API was released just a few weeks ago. Until now, developers had the OpenGL graphics API for desktop environments and OpenGL ES for mobile platforms, the latter being a subset of OpenGL's features trimmed to suit mobile architectures. With the latest advances in mobile technology, however, the graphics APIs were due for an upgrade, and hardware capabilities had reached the point where the two could converge on a single graphics API. Great news for game developers!
The talk is a deep dive into how the Vulkan API works: shaders and pipelines, synchronisation, memory management, command buffers and more. Three key Vulkan feature highlights for mobile platforms are covered in the talk: multithreading, multi-pass render passes and memory management.
Multithreading is key for mobile. Mass-market mobile devices already have four to eight cores, yet previous graphics APIs could not take full advantage of them: spreading rendering across multiple threads meant a lot of context switches, which took a toll on performance. Vulkan takes multithreading to the next level, giving developers flexibility and control over resources, and over when and how work is recorded and submitted across threads.
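To make the pattern concrete, here is a minimal C++ sketch of per-thread command buffer recording against the standard Vulkan headers. It is an illustration of the approach the talk describes, not code from the talk: the device, queue family and render pass are assumed to exist already, and all error handling is omitted.

```cpp
#include <vulkan/vulkan.h>

// One VkCommandPool per worker thread: a pool (and the command buffers
// allocated from it) must not be recorded from two threads at once.
VkCommandBuffer recordSceneChunk(VkDevice device, uint32_t queueFamily,
                                 VkRenderPass renderPass) {
    VkCommandPoolCreateInfo poolInfo{VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO};
    poolInfo.queueFamilyIndex = queueFamily;
    VkCommandPool pool = VK_NULL_HANDLE;            // owned by this thread
    vkCreateCommandPool(device, &poolInfo, nullptr, &pool);

    VkCommandBufferAllocateInfo allocInfo{VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO};
    allocInfo.commandPool        = pool;
    allocInfo.level              = VK_COMMAND_BUFFER_LEVEL_SECONDARY;
    allocInfo.commandBufferCount = 1;
    VkCommandBuffer cmd = VK_NULL_HANDLE;
    vkAllocateCommandBuffers(device, &allocInfo, &cmd);

    VkCommandBufferInheritanceInfo inherit{VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO};
    inherit.renderPass = renderPass;                // executes within this pass

    VkCommandBufferBeginInfo begin{VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO};
    begin.flags            = VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT;
    begin.pInheritanceInfo = &inherit;
    vkBeginCommandBuffer(cmd, &begin);
    // ... record the draw calls for this thread's slice of the scene ...
    vkEndCommandBuffer(cmd);
    return cmd;  // note: the pool must outlive the command buffer's use
}

// On the main thread, the secondary buffers recorded by the worker
// threads are stitched into one primary command buffer:
//   vkCmdExecuteCommands(primaryCmd, count, secondaryCmds);
```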
The multi-pass render passes feature allows the use of on-tile cache memory on mobile. It also enables the driver to perform various optimizations when each pixel rendered in a subpass accesses the result of the previous subpass at the same pixel location. It is similar in concept to Pixel Local Storage, introduced by ARM and available in OpenGL ES as the EXT_shader_pixel_local_storage extension.
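As an illustration of what that looks like in the API (a hedged sketch: the attachment indices and layouts are invented for the example), a second subpass declares the first subpass's colour output as an input attachment, and a BY_REGION dependency tells the driver the read is pixel-local, which is what allows the data to stay on-tile:

```cpp
#include <vulkan/vulkan.h>

// Builds the descriptions for a two-subpass render pass in which
// subpass 1 reads subpass 0's colour output at the same pixel.
// These structures then feed into VkRenderPassCreateInfo / vkCreateRenderPass.
void describeSubpasses(VkSubpassDescription subpasses[2], VkSubpassDependency& dep) {
    // Attachment 0: intermediate G-buffer-style target; attachment 1: backbuffer.
    static VkAttachmentReference gbufferWrite{0, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};
    static VkAttachmentReference gbufferRead {0, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL};
    static VkAttachmentReference backbuffer  {1, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};

    subpasses[0] = {};
    subpasses[0].pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
    subpasses[0].colorAttachmentCount = 1;
    subpasses[0].pColorAttachments    = &gbufferWrite;  // write the intermediate

    subpasses[1] = {};
    subpasses[1].pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
    subpasses[1].inputAttachmentCount = 1;
    subpasses[1].pInputAttachments    = &gbufferRead;   // read it back, same pixel
    subpasses[1].colorAttachmentCount = 1;
    subpasses[1].pColorAttachments    = &backbuffer;

    dep = {};
    dep.srcSubpass      = 0;
    dep.dstSubpass      = 1;
    dep.srcStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    dep.dstStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
    dep.srcAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
    dep.dstAccessMask   = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT;
    dep.dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;  // pixel-local dependency
}
```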
Memory management behaviour depends on the GPU and pipeline architecture: the immediate-mode rendering used mainly on desktop GPUs is very different from the tile-based rendering common in mobile GPUs. In the Vulkan API, memory management is much more explicit than in previous APIs: developers allocate and free memory themselves, whereas in OpenGL memory management is hidden from the programmer.
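A minimal sketch of what "explicit" means in practice, assuming a physical device, device and buffer have already been created; every step below is the application's responsibility, where OpenGL would have done the equivalent behind glBufferData:

```cpp
#include <vulkan/vulkan.h>

// Explicitly allocates, binds and later frees the memory backing 'buffer'.
void backBufferExplicitly(VkPhysicalDevice physicalDevice,
                          VkDevice device, VkBuffer buffer) {
    VkMemoryRequirements req{};
    vkGetBufferMemoryRequirements(device, buffer, &req);

    // Pick a host-visible memory type permitted by req.memoryTypeBits.
    VkPhysicalDeviceMemoryProperties props{};
    vkGetPhysicalDeviceMemoryProperties(physicalDevice, &props);
    uint32_t typeIndex = 0;
    for (uint32_t i = 0; i < props.memoryTypeCount; ++i) {
        if ((req.memoryTypeBits & (1u << i)) &&
            (props.memoryTypes[i].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)) {
            typeIndex = i;
            break;
        }
    }

    VkMemoryAllocateInfo alloc{VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO};
    alloc.allocationSize  = req.size;
    alloc.memoryTypeIndex = typeIndex;
    VkDeviceMemory memory = VK_NULL_HANDLE;
    vkAllocateMemory(device, &alloc, nullptr, &memory);  // explicit allocation
    vkBindBufferMemory(device, buffer, memory, 0);

    // ... upload data and draw with the buffer ...

    vkFreeMemory(device, memory, nullptr);               // explicit deallocation
}
```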
Finally, the talk illustrates the joint collaboration with Epic Games. Epic Games released Vulkan API support in Unreal Engine 4 at MWC, and Epic Games and ARM shared the goal of showcasing impressive real-time graphics with UE4's Vulkan support on the Samsung Galaxy S7. The result is the stunning ProtoStar demo, and this talk covers the challenges faced and the lessons learnt along the way.
TALK #2: Making Light Work of Dynamic Large Worlds
You may have seen the announcement earlier today: the Enlighten team has just released a new feature set specially designed to solve the challenge of bringing dynamic global illumination to large-world games. By developing advanced level-of-detail mechanisms for terrain, non-terrain lightmaps and probes, it is now possible for game studios to add large-scale lighting effects, such as time of day, to complex worlds with huge draw distances.
This is great news: map sizes have been getting bigger and bigger in recent years, and open-world games are hugely popular. The current generation of gaming platforms introduced beautiful real-time rendering of environments where the player can roam freely through forests, canyons and vast open terrain. Running such worlds with dynamic lighting effects at acceptable frame rates required new innovations.
Seastack Bay is the demonstration designed to showcase Enlighten's new large-world feature set. In this talk its lighting artist, Ivan Pedersen, presents the challenges he and the Enlighten team faced when developing a brand-new global illumination technology for open worlds. He will be joined on stage by Dominic Matthews of Ninja Theory, a local game studio that collaborated with the Enlighten team to create Seastack Bay. Ninja Theory is also the studio behind the upcoming title Hellblade, and in this session they will discuss how Enlighten is helping the project fulfil its ambitions.
TALK #3: Achieving High Quality Mobile VR Games
This talk is a joint collaboration between Unity, nDreams and ARM. Unity has led game engines in integrating native VR support for the Samsung GearVR, and nDreams is a leading VR game studio with a team dedicated to developing for mobile VR using the Unity engine.
The talk starts by describing the steps a developer needs to take to port any application or game to VR in Unity and to enable the GearVR developer mode. It then gives a few key recommendations for the challenges you might encounter when porting a game to a VR environment: motion sickness, UI interaction and camera settings. Furthermore, because the VR experience is immersive, the developer needs to bear in mind that any frightening or unsettling situation will be amplified in VR.
The focus of the talk then moves to a series of rendering effects optimized for mobile platforms, all achieved using the local cubemap technique. This covers reflections, an innovative way of implementing dynamic soft shadows and refractions, as well as implementing stereo reflections in VR.
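The core of the local cubemap technique is a local correction of the lookup vector: a naive cubemap lookup is only correct for an infinitely distant environment, so the reflected ray is instead intersected with the environment's bounding volume. The sketch below expresses the correction as plain C++ vector maths for readability (in a real game it runs in the fragment shader); the type and function names are illustrative only.

```cpp
struct Vec3 { float x, y, z; };
static float maxf(float a, float b) { return a > b ? a : b; }
static float minf(float a, float b) { return a < b ? a : b; }

// P: fragment world position (assumed inside the bounding box),
// R: reflection vector, boxMin/boxMax: bounds of the local environment,
// cubePos: position the cubemap was captured from.
Vec3 localCorrect(Vec3 P, Vec3 R, Vec3 boxMin, Vec3 boxMax, Vec3 cubePos) {
    // Ray parameter to each pair of box planes (slab test): take the
    // far plane per axis, then the nearest of those across the axes.
    // Divisions by a zero component yield infinities, as in shader code.
    float tx = maxf((boxMax.x - P.x) / R.x, (boxMin.x - P.x) / R.x);
    float ty = maxf((boxMax.y - P.y) / R.y, (boxMin.y - P.y) / R.y);
    float tz = maxf((boxMax.z - P.z) / R.z, (boxMin.z - P.z) / R.z);
    float t  = minf(minf(tx, ty), tz);
    // Corrected lookup: from the capture position to the hit point.
    return { P.x + R.x * t - cubePos.x,
             P.y + R.y * t - cubePos.y,
             P.z + R.z * t - cubePos.z };
}
```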
Stereo reflections are explained in detail. In VR the left and right eyes are rendered individually, so it is not good practice to use exactly the same reflection for both eyes. The recommendation is to render stereo reflections whenever reflections are a noticeable effect for the user; otherwise the reflections appear flat, without any depth, and the application fails to deliver a fully immersive virtual experience.
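The essence of the fix is that each eye needs its own reflection camera. A small illustrative sketch (plain maths, with names invented for the example) of mirroring each eye's position across a horizontal reflective plane:

```cpp
struct Vec3 { float x, y, z; };

// Reflect an eye position across the plane y = planeY. Each eye's
// mirrored position drives its own reflection camera, so each eye
// samples a reflection rendered from its own point of view.
Vec3 mirrorAcrossPlaneY(Vec3 eyePos, float planeY) {
    return { eyePos.x, 2.0f * planeY - eyePos.y, eyePos.z };
}

// Usage: offset the two eyes from the head position by the
// interpupillary distance 'ipd', then mirror each one separately.
// Vec3 leftEye      = { head.x - ipd * 0.5f, head.y, head.z };
// Vec3 rightEye     = { head.x + ipd * 0.5f, head.y, head.z };
// Vec3 leftReflCam  = mirrorAcrossPlaneY(leftEye,  waterHeight);
// Vec3 rightReflCam = mirrorAcrossPlaneY(rightEye, waterHeight);
```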
The session will cover how to implement and synchronize left and right reflection cameras to achieve stereo reflections in custom shaders using the Unity engine. Afterwards, Unity will explain how and why we perceive 3D and virtual reality, so that developers can apply this knowledge in the Unity engine to build architectural and gaming environments that create a sense of presence and immersion.
The last part of the talk comes from nDreams, who will share their experience of creating the VR titles SkyDIEving, Gunner and Perfect Beach. They will also discuss the research they carried out into implementing movement with controllers for the GearVR and the Google Cardboard, both of which are designed for smartphones.
TALK #4: Optimize your Mobile Games with Practical Case Studies
This talk first introduces you to the ARM tools and skills needed to profile and debug your application. ARM has several tools to help game developers optimize their games, all available free of charge.
First of all, it is recommended that developers use a profiling tool to analyze system performance. ARM provides DS-5 Streamline, which covers whole-system performance (CPU and GPU) so that you can find the performance bottlenecks in your code. The four main culprits are:
- being CPU bound (for instance, the game physics is too complicated)
- being vertex bound (for example, your assets have too many vertices)
- being fragment bound (most commonly, a high degree of overdraw)
- being bandwidth bound (e.g. loading textures every frame, or loading very large textures).
Once the developer has identified where the bottleneck lies, the Mali Graphics Debugger (MGD) traces every Vulkan, OpenGL ES or OpenCL API call to help them understand the rendering cycle, showing the graphics calls and letting them inspect the shaders, textures and buffers. This helps the developer pinpoint the issues. The talk covers all the latest features of MGD: VR support, the geometry viewer, render pass dependencies, shader reports and statistics, frame capture and analysis, alternative drawing modes and more. These all help developers gain a quicker, deeper understanding of performance.
Finally comes the optimization work itself, and six best-practice techniques are explained in detail:
- batching draw calls
- eliminating overdraw
- frustum, occlusion and distance culling (see the sketch after this list)
- Level of Detail (LoD)
- texture compression with ASTC
- mip-mapping and the use of anti-aliasing.
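As a taste of the culling item above, here is a minimal distance-culling sketch in C++ (illustrative names; frustum and occlusion culling follow the same reject-before-drawing pattern with stricter tests):

```cpp
#include <vector>
#include <cstddef>

struct Object {
    float x, y, z;   // bounding-sphere centre (world space)
    float radius;    // bounding-sphere radius
};

// Returns the indices of objects close enough to the camera to draw;
// everything else is skipped before any draw call is issued.
std::vector<size_t> distanceCull(const std::vector<Object>& scene,
                                 float camX, float camY, float camZ,
                                 float maxDrawDistance) {
    std::vector<size_t> visible;
    for (size_t i = 0; i < scene.size(); ++i) {
        const Object& o = scene[i];
        float dx = o.x - camX, dy = o.y - camY, dz = o.z - camZ;
        // Compare squared distances to avoid a sqrt per object.
        float limit = maxDrawDistance + o.radius;
        if (dx * dx + dy * dy + dz * dz <= limit * limit)
            visible.push_back(i);
    }
    return visible;
}
```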
TALK #5: An End-to-End Approach to Physically Based Rendering (PBR)
Wes and Sam work at two companies that produce technology integral to many studios' physically based workflows: Geomerics and Allegorithmic. Despite the increasing number of AAA and independent studios adopting a physically based pipeline, in practice they have seen a widespread lack of understanding of the repercussions that decisions made in the material creation phase have on later phases of the development pipeline, most significantly the lighting phase. Yet if PBR is managed correctly across the entire art team, its benefits come alive:
- The guesswork around authoring surface attributes to look realistic is removed
- An artist can set up a material once and reuse it throughout the game
- Materials have an accurate appearance independent of lighting conditions
- Studios can have a universal workflow that produces consistent artwork across teams and even across companies
No matter whether you are a texture artist or a lighting artist, it is important to understand the fundamentals of each step in the development pipeline to ensure that the work you do has the desired contribution to the final visual result.
To start with, this talk explains the fundamentals of lighting physics and energy conservation in an artist-friendly manner. It will equip texture artists with the knowledge they need to bear in mind throughout the material creation phase in order to create materials that light predictably no matter what the lighting set-up (a worked relation follows the list), including:
- The light ray model
- Specular and diffuse reflections
- Absorption and scattering
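The one relation underpinning all three bullets is energy conservation: a surface can never reflect more light than it receives. In equation form (our summary of the principle, not a slide from the talk):

```latex
E_{\text{incident}} = E_{\text{reflected}} + E_{\text{absorbed}} + E_{\text{transmitted}},
\qquad\text{so in shading terms}\qquad
k_{\text{diffuse}} + k_{\text{specular}} \le 1 .
```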
It will go on to explain the two main PBR workflows and describe in detail the properties texture artists need to consider when supplying information to the shader:
- Metallic workflow
- Specular/glossiness workflow
- Key differences between the workflows (illustrated in the sketch after this list)
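The relationship between the two workflows can be made concrete with the standard conversion from metallic-workflow inputs to specular/diffuse quantities. This is a hedged sketch, not code from the talk, using the common default dielectric F0 of 0.04:

```cpp
struct RGB { float r, g, b; };

static RGB lerp(RGB a, RGB b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// Converts metallic-workflow inputs (baseColor, metallic) to the
// diffuse and specular colours a specular/glossiness workflow stores.
void metallicToSpecular(RGB baseColor, float metallic,
                        RGB& outDiffuse, RGB& outSpecular) {
    const RGB dielectricF0 = {0.04f, 0.04f, 0.04f};
    // Metals have no diffuse term; their base colour tints the specular.
    outDiffuse  = lerp(baseColor, RGB{0.0f, 0.0f, 0.0f}, metallic);
    outSpecular = lerp(dielectricF0, baseColor, metallic);
}
```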
Moving from concept to reality, the talk will discuss practical guidelines for creating PBR textures and remove the guesswork from setting material values. It is worth noting that while adherence to our guidelines will ensure an artist authors maps correctly, the physics principles discussed earlier will equip the audience with the knowledge needed to follow their intuition and explore different creative styles successfully within a PBR pipeline. We will discuss:
- Base color – diffuse (albedo)
- Metal reflectance
- Dielectric F0 (see the worked example below)
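That F0 value is not arbitrary: it is the Fresnel reflectance at normal incidence, computable from the index of refraction. A worked example for a typical dielectric in air (n ≈ 1.5, e.g. glass or common plastics):

```latex
F_0 = \left(\frac{n-1}{n+1}\right)^{2},
\qquad
F_0\big|_{n=1.5} = \left(\frac{1.5-1}{1.5+1}\right)^{2}
                 = \left(\frac{0.5}{2.5}\right)^{2} = 0.04 .
```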
The talk will conclude with a practical demonstration of the theory. Using a single scene, we will vary the texture and lighting set-up, and the audience will be shown the visual impact of supplying both incorrect and correct material information.