
Achieving High Quality Mobile VR Games


Introduction

 

Last month the game developer community celebrated its main event in San Francisco: the Game Developers Conference (GDC). The longest-running event devoted to the game industry set a new record in its 30th edition with more than 27,000 attendees. The expo hall was crowded until the very last minute and many talks were moved to bigger rooms to accommodate the demand.

 

In this blog I would like to provide a round-up of one of the ARM sponsored talks at GDC 2016: Achieving High Quality Mobile VR Games. I had the privilege of sharing the talk with two great colleagues: Carl Callewaert (Unity Technologies Americas Director & Global Leader of Evangelism) and Patrick O'Luanaigh (nDreams CEO).

TalkThreePresenter.jpg

Figure 1. Delivering the presentation at GDC 2016.

The talk was devoted to mobile VR, but each of the speakers presented a different aspect of the topic. I spoke from the perspective of developers and shared our experience in porting the Ice Cave demo to Samsung Gear VR. I also talked about some highly optimized rendering techniques, based on local cubemaps, that we used in the demo to achieve high quality VR content, and discussed the importance of rendering stereo reflections, showing how to implement them in Unity.

 

Carl talked from the perspective of the game platform used by more than half of developers all over the world. He shared the latest news about VR integration into Unity and discussed some very interesting ideas about how to use well-established architectural design principles to build VR gaming environments that create a sense of presence and immersion. To the delight of the attendees, Carl showed the first part of the real-time rendered short film Adam, an impressive photorealistic demo that highlights Unity’s rendering capabilities.

 

Finally, Patrick presented from the perspective of a game studio that has already successfully released several VR games. As part of its development process, nDreams has extensively researched movement in VR. In his talk Patrick shared some of their most interesting findings, part of their commitment to delivering the best user experience across their VR game catalogue.

 

The concept of local cubemaps

 

The content I delivered during the first part of the session was devoted mainly to describing several rendering techniques, based on local cubemaps, that we used in the Ice Cave demo. For those who are not familiar with the concept of local cubemaps, I explain it briefly below.

 

Let’s assume we have a local environment delimited by an arbitrary boundary, and that we have baked the surrounding environment into a cubemap from a given position inside the local environment. We are looking at a star on the boundary in the direction defined by the vector V, and we want to answer the question: what vector do we need to use to retrieve the star from the cubemap texture?

TheConceptOfLocalCubemap.png

Figure 2. The concept of local cubemap.

 

If we use the same vector V we will retrieve the happy face instead of the star, as shown in the left picture of Figure 2. What, then, is the vector we need to use? As the middle picture shows, we need a vector from the cubemap position to the intersection point of the view vector with the boundary. We can solve this type of problem only if we assume some simplifications.

 

We introduce a proxy geometry to simplify the problem of finding the intersection point P, as shown in the picture on the right. The simplest proxy geometry is a box: the bounding box of the scene. We find the intersection point P, build a new vector from the position where the cubemap was baked to the intersection point, and use this new “locally corrected” vector to fetch the texture from the cubemap. The lesson here is that for every vector we use to retrieve whatever we bake into the local cubemap, we need to apply the local correction.
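A minimal sketch of the local correction in Unity-style Cg/HLSL is shown below, assuming a box proxy geometry whose bounds and bake position are passed in as uniforms. The names here (dirWS, posWS, bboxMin, bboxMax, cubemapPosWS) are illustrative, not taken from the Ice Cave source.

    float3 LocalCorrect(float3 dirWS, float3 posWS,
                        float3 bboxMin, float3 bboxMax, float3 cubemapPosWS)
    {
        // Intersect the ray (posWS + t * dirWS) with the bounding box (slab method).
        // posWS is assumed to be inside the box and dirWS to be normalized.
        float3 invDir = 1.0 / dirWS;
        float3 t1 = (bboxMax - posWS) * invDir;
        float3 t2 = (bboxMin - posWS) * invDir;
        // Per axis, the larger t is where the ray exits that slab.
        float3 tFar = max(t1, t2);
        // The ray leaves the box at the smallest of the three exit distances.
        float t = min(min(tFar.x, tFar.y), tFar.z);
        // Intersection point P with the proxy geometry.
        float3 p = posWS + dirWS * t;
        // The corrected vector goes from the cubemap bake position to P.
        return p - cubemapPosWS;
    }

The same helper can be reused for shadows and reflections; only the input vector changes, as the snippets below show.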

 

Improving VR quality & performance

 

Developing games for mobile devices is challenging, as we need to balance runtime resources very carefully. Mobile VR is even more challenging, as we have to deal with the added complexity of stereo rendering and the strict FPS requirements needed to achieve a successful user experience.

 

Several highly efficient rendering techniques based on local cubemaps that we used in the Ice Cave demo have proved very suitable for VR as well.

 

Dynamic Soft Shadows based on local cubemaps

 

As we know, runtime shadows on mobile devices are expensive; in mobile VR they are a performance killer. A new shadow rendering technique based on local cubemaps, developed at ARM, saves runtime resources in mobile VR while providing high quality shadows. The implementation details of this technique can be found in several publications [1, 2, 3].

DynamicSoftShadows.png

Figure 3. Dynamic soft shadows based on local cubemaps.

 

The main idea of this technique is to render the transparency of the local environment boundaries to the alpha channel of a static cubemap off-line. Then, at runtime in the shader, we use the fragment-to-light vector to fetch the texel from the cubemap and determine whether the fragment is lit or shadowed. As we are dealing with a local cubemap, the local correction has to be applied to the fragment-to-light vector before the fetch. The fact that we use the same texture every frame guarantees high quality shadows, free of the pixel shimmering and instabilities that are present with other shadow rendering techniques.
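The runtime part of the technique reduces to a few lines in the fragment shader. The excerpt below is a sketch reusing the LocalCorrect helper above; the uniform and struct names (_LightPosWS, _ShadowCube, input.posWS, etc.) are assumptions, and I assume the alpha channel was baked with 1.0 where the boundary is transparent, i.e. where light can enter.

    // Fragment shader excerpt (illustrative names).
    float3 fragToLight = normalize(_LightPosWS - input.posWS);
    // Apply the local correction to the fragment-to-light vector before the fetch.
    float3 correctedL = LocalCorrect(fragToLight, input.posWS,
                                     _BBoxMin, _BBoxMax, _CubemapPosWS);
    // Alpha = 1 where the boundary is transparent, so it acts directly as a lit factor.
    half lit = texCUBE(_ShadowCube, correctedL).a;
    color.rgb *= lit;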

 

Dynamic soft shadows based on local cubemaps can be used effectively with other runtime shadow techniques to combine shadows from static and dynamic geometry. Another important feature of this technique is that it can efficiently reproduce the softness of shadows, i.e. the fact that shadows become softer the further they are from the object that casts them.

 

CombinedShadows.png

Figure 4. Combined shadows in the Ice Cave demo.

 

Reflections based on local cubemaps

 

The local cubemap technique can also be used to render very efficient, high quality reflections. When using this technique, the local environment is rendered off-line into the RGB channels of the cubemap. Then, at runtime in the fragment shader, we fetch the texel from the cubemap in the direction of the reflection vector. Again, as we are dealing with a local cubemap, we first need to apply the local correction to the reflection vector, i.e. build a new vector from the position where the cubemap was generated to the intersection point P (Figure 5). We finally use the new vector R’ to fetch the texel from the cubemap.
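In shader terms this is once more a local correction followed by a cubemap fetch. The sketch below assumes the same hypothetical names as the previous snippets; _WorldSpaceCameraPos is the standard Unity built-in.

    // Fragment shader excerpt (illustrative names).
    float3 viewDir = normalize(input.posWS - _WorldSpaceCameraPos);
    float3 reflDir = reflect(viewDir, normalize(input.normalWS));
    // The local correction turns the reflection vector R into R' (Figure 5).
    float3 correctedR = LocalCorrect(reflDir, input.posWS,
                                     _BBoxMin, _BBoxMax, _CubemapPosWS);
    half4 reflColor = texCUBE(_ReflCube, correctedR);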

ReflectionsBasedOnLocalCubemaps.png

Figure 5. Reflections based on local cubemaps.

 

The implementation details of this technique can be found in previous blogs [3, 4, 5]. This technique can also be combined with other runtime reflection techniques to integrate reflections from static and dynamic geometry [3, 6].

 

IceCaveCombinedReflections.png

Figure 6. Combined reflections in the Ice Cave demo.

 

Stereo reflections in VR

 

Stereo reflections are important in VR because if reflections are not stereo, i.e. if we use the same texture for both eyes, the user will easily notice that something is wrong in the virtual world. This breaks the sense of full immersion, negatively impacting the VR user experience.

 

For planar reflections rendered at runtime with the mirrored camera technique [6], we need to apply a mirror transformation to the main camera view matrix. We also need a half eye separation shift along the x axis to find the left/right positions from which the reflections must be rendered. The mirrored camera then alternately renders left/right reflections to a single texture, which is used in the shader by the left/right eye of the main camera to apply the reflections to the reflective object.

 

At this point we must achieve complete synchronization between the rendering of the left/right reflection camera and the left/right main camera. The picture below, taken from the device, shows how the left and right eyes of the main camera use different colors in the reflection texture applied to the platform in the Ice Cave demo.

StereoReflCheckTwoColors.png

Figure 7. Left/right stereo reflection synchronization.

If we are dealing with reflections based on local cubemaps, we need to use two slightly different reflection vectors to fetch the texel from the cubemap. For this we need to find (if it is not provided) the left/right main camera positions and build the left/right view vectors used to compute the reflection vector in the shader. Both vectors must be “locally corrected” before fetching the reflection texture from the cubemap, as sketched below.
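A per-eye version of the reflection fetch might look like the following sketch. Here I assume the left/right eye world positions are passed from script as uniforms (for example via material.SetVector) together with an eye index; in Unity’s single-pass stereo mode the built-in unity_StereoEyeIndex could play that role instead. All other names match the hypothetical snippets above.

    // Fragment shader excerpt (illustrative names).
    float3 eyePosWS = (_EyeIndex == 0) ? _LeftEyePosWS : _RightEyePosWS;
    float3 viewDir = normalize(input.posWS - eyePosWS);
    float3 reflDir = reflect(viewDir, normalize(input.normalWS));
    // Each eye's reflection vector must be locally corrected before the fetch.
    float3 correctedR = LocalCorrect(reflDir, input.posWS,
                                     _BBoxMin, _BBoxMax, _CubemapPosWS);
    half4 reflColor = texCUBE(_ReflCube, correctedR);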

 

A detailed implementation of stereo reflections in Unity can be found in a recently published blog [6].

 

Latest Unity improvements

 

During his presentation Carl pointed to the latest Unity effort in VR: the new VR editor, which allows building VR environments directly from within an HMD. At GDC 2016 we saw a live demo showing the progress of this tool in a presentation from Timoni West (Unity Principal Designer).

 

The Adam demo Carl displayed to attendees was also a nice proof-point for how much Unity has advanced in terms of real-time rendering capabilities. The picture below gives some idea of this.

AdamDemo.png

Figure 8. A frame from the first part of the Unity real-time rendered short film “Adam”.

 

Carl also went through some highlights of a presentation he had delivered the day before about how to create a sense of presence in VR. I found his ideas about the importance of creating depth perception when designing VR environments really interesting. The Greeks and Romans knew very well how important it is to correctly manage perspective, light, shadows and shapes to create the sense of presence that invites you to walk around and understand a space.

 

Movement in VR

 

The last part of the talk was devoted to movement in VR. Patrick’s talk attracted much attention from attendees and prompted a lot of questions at the end. Movement in VR is an important topic as it directly influences the quality of the VR experience. The nDreams development team performed extensive research into different types of movement in VR and their impact on several groups of users. The figures Patrick presented about the results of this research were a valuable takeaway for attendees.

 

According to Patrick, mobile VR control will move towards controllers, tracked controllers and hand tracking, allowing more detailed input.

Initial nDreams tests confirmed some basic facts:

  • Movement needs to be as realistic as possible. When moving, aim to keep the speed to around 1.5 m/s, as opposed to, for example, Call of Duty, where the player often moves at 7 m/s. Keep any strafing to a minimum, and keep the strafe speed as low as possible.
  • Don’t take control of the camera away from the player, e.g. with camera shakes or cutscenes.
  • Ensure there is no perceived acceleration. A tiny, negligible acceleration can take the edge off starting and stopping, but acceleration over any sustained period of time is incredibly uncomfortable.

 

nDreamsTheBasics.png

Figure 9. Some nDreams basic findings.

 

In terms of translational movement, nDreams researched two main modalities: instant teleport and blink. Blink is a kind of fast teleport where the move is completed within 120 ms. The movement time is so short that there is no time to experience any sickness, but the user still gets a sense of motion and a tunnel effect. Teleport is seen as more precise due to the additional directional reticule, whereas blink feels more immersive.

 

The rotation study included trigger and snap modalities. Trigger rotations use the shoulder buttons of the controller to rotate in 45-degree steps to the left/right. Snap rotations use the joystick buttons instead. Rotation-wise, participants mostly preferred triggers; however, the consumers who understood snap effectively preferred its flexibility.

 

Some figures showing the results of the movement and rotation research are given below.

 

MovementPreferences.png

Figure 10. Some figures from the nDreams’ movement and rotation research.

 

The list below summarizes some of the most important findings delivered by Patrick O'Luanaigh.

 

  • Movement needs to be as realistic as possible. Ideally keep your speed to around 1.5 m/s.
  • Do not take control of the camera away from the player.
  • Ensure there is no perceived acceleration.
  • Lower movement and strafing speeds are much more comfortable than faster ones.
  • A high rotation speed is seen as more comfortable, since the rotation normally finishes before you start to feel motion sick.
  • The best solution for rotation is to turn with your body. Alternative controls, such as snap rotations, encourage players to move their body to look around.
  • Rotation-wise, participants mostly preferred triggers; however, the consumers who understood snap effectively preferred its flexibility.
  • Fast teleport (blink) at 100 m/s is sickness free and more immersive than simple teleport.
  • Instant teleport is seen as more precise due to the additional directional reticule.
  • Where possible, consider removing movement and rotation altogether.

Figure 11. Summary of findings from nDreams’ research about movement in VR.

 

VR is just taking its first real steps and there is still a lot to explore and learn. This is why Patrick concluded his presentation with a recommendation I really liked: test everything! What works for your game may be different from what works for someone else’s.

 

Conclusions

 

The talk Achieving High Quality Mobile VR Games at GDC 2016 had a great turnout, and lots of questions were discussed at the end. After the talk, many people came to the ARM booth to find out more about the Ice Cave demo and the rendering techniques based on local cubemaps discussed in the talk. What GDC 2016 showed above all was the great uptake VR is experiencing and the increasing interest of the development community and game studios in this exciting technology.

 

Finally, I would like to thank Carl Callewaert and Patrick O'Luanaigh for their great contribution to the presentation.

 

References

  1. Efficient Soft Shadows Based on Static Local Cubemap. Sylwester Bala and Roberto Lopez Mendez, GPU Pro 7, 2016.
  2. Dynamic Soft Shadows Based on Local Cubemap. Sylwester Bala, ARM Connected Community.
  3. ARM Guide for Unity Developers, Mali Developer Center.
  4. Reflections Based on Local Cubemaps in Unity. Roberto Lopez Mendez, ARM Connected Community.
  5. The Power of Local Cubemaps at Unite APAC and the Taoyuan Effect. Roberto Lopez Mendez, ARM Connected Community.
  6. Combined Reflections: Stereo Reflections in VR. Roberto Lopez Mendez, ARM Connected Community.
  7. Travelling Without Moving - Controlling Movement in Virtual Reality. Patrick O'Luanaigh, Presentation delivered at VRTGO, Newcastle, 2015.
