Introduction
Developers have used reflections extensively in traditional game development, so we can expect the same trend in mobile VR games. In a previous blog I discussed the importance of rendering stereo reflections in VR to achieve a successful user experience, and demonstrated how to implement them in Unity. In this blog I show how to render stereo reflections in Unity specifically for Google Cardboard because, while Unity has built-in support for Samsung Gear VR, Google Cardboard support is provided by the Google VR SDK for Unity.
This latest VR SDK supports building VR applications on Android for both Daydream and Cardboard. The use of an external SDK in Unity leads to some specific differences when implementing stereo reflections. This blog addresses those differences and provides a stereo reflection implementation for Google Cardboard.
Combined reflections – an effective way of rendering reflections
In previous blogs [1, 2] I discussed the advantages and limitations of reflections based on local cubemaps. Combined reflections have proved an effective way of overcoming the main limitation of this rendering technique, which derives from the static nature of the cubemap. In the Ice Cave demo, reflections based on local cubemaps are used to render reflections from static geometry, while planar reflections rendered at runtime with a mirrored camera are used for reflections from dynamic objects.
Figure 1. Combining reflections from different types of geometry.
The static nature of the local cubemap also has a positive impact: it allows for faster and higher quality rendering. For example, reflections based on local cubemaps are up to 2.8 times faster than planar reflections rendered at runtime. Using the same texture every frame guarantees high quality reflections with none of the pixel instabilities that appear with techniques that render reflections to a texture every frame.
Finally, as only read operations are involved when using static local cubemaps, bandwidth use is halved. This is especially important on mobile devices, where bandwidth is a limited resource. The conclusion is that, when possible, you should use local cubemaps to render reflections; combined with other techniques, they allow us to achieve higher quality at very low cost.
In this blog I show how to render stereo reflections for Google Cardboard, for both reflections based on local cubemaps and runtime planar reflections rendered with the mirrored camera technique. We assume here that the shader of the reflective material, which combines the reflections from static and dynamic objects, is the same as in the previous blog.
Rendering stereo planar reflections from dynamic objects
In the previous blog I showed how to set up the cameras responsible for rendering planar reflections for the left and right eyes. For Google Cardboard we follow the same procedure, but when creating the cameras we must set the viewport rectangle correctly, as shown below:
Figure 2. Viewport settings for reflection cameras.
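If you prefer to set this up from script rather than in the Inspector, one option (a sketch with hypothetical field names, not the exact settings from Figure 2) is to copy each reflection camera's viewport rectangle from the corresponding main eye camera, so that the two always match whatever layout the Google VR SDK has configured:

using UnityEngine;

public class MatchReflectionViewports : MonoBehaviour
{
    // Hypothetical references, assigned in the Inspector
    public Camera mainLeftCamera;
    public Camera mainRightCamera;
    public Camera leftReflectionCamera;
    public Camera rightReflectionCamera;

    void Start()
    {
        // Copy the normalized viewport rectangle (x, y, width, height)
        // from each eye's main camera to its reflection camera
        leftReflectionCamera.rect = mainLeftCamera.rect;
        rightReflectionCamera.rect = mainRightCamera.rect;
    }
}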
The next step is to attach the script below to each reflection camera:
void OnPreRender()
{
    SetUpReflectionCamera();
    // Invert winding
    GL.invertCulling = true;
}

void OnPostRender()
{
    // Restore winding
    GL.invertCulling = false;
}
The method SetUpReflectionCamera positions and orients the reflection camera; however, its implementation differs from the one provided in the previous blog. The Google VR SDK directly exposes the main left and right cameras, which appear in the hierarchy as children of the Main Camera:
Figure 3. Main left and right cameras exposed in the hierarchy.
Note that the LeftReflectionCamera and RightReflectionCamera game objects appear disabled because we render those cameras manually.
As we can directly access the main left and right cameras, the SetUpReflectionCamera method can build the worldToCameraMatrix of the reflection camera without any additional steps:
void SetUpReflectionCamera()
{
    // Find the reflection plane: position and normal in world space
    Vector3 pos = chessBoard.transform.position;
    // Reflection plane normal in the direction of the Y axis
    Vector3 normal = Vector3.up;
    float d = -Vector3.Dot(normal, pos) - clipPlaneOffset;
    Vector4 reflectionPlane = new Vector4(normal.x, normal.y, normal.z, d);

    Matrix4x4 reflectionMatrix = Matrix4x4.zero;
    CalculateReflectionMatrix(ref reflectionMatrix, reflectionPlane);

    // Update the left reflection camera using the main left camera's
    // position and orientation
    Camera reflCamLeft = gameObject.GetComponent<Camera>();

    // Set view matrix
    Matrix4x4 m = mainLeftCamera.GetComponent<Camera>().worldToCameraMatrix * reflectionMatrix;
    reflCamLeft.worldToCameraMatrix = m;

    // Set projection matrix
    reflCamLeft.projectionMatrix = mainLeftCamera.GetComponent<Camera>().projectionMatrix;
}
The code snippet shows the implementation of the SetUpReflectionCamera method for the left reflection camera. The mainLeftCamera field is a public variable that must be populated by dragging and dropping the Main Camera Left game object onto it. For the right reflection camera the implementation is exactly the same, but uses the Main Camera Right game object instead.
The implementation of the function CalculateReflectionMatrix is provided in the previous blog.
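For reference, a standard implementation, which should match the version from the previous blog, builds the Householder reflection about the plane (normal, d):

static void CalculateReflectionMatrix(ref Matrix4x4 reflectionMat, Vector4 plane)
{
    // Householder reflection about the plane n.x + d = 0,
    // with plane = (n.x, n.y, n.z, d) and n normalized
    reflectionMat.m00 = 1.0f - 2.0f * plane[0] * plane[0];
    reflectionMat.m01 = -2.0f * plane[0] * plane[1];
    reflectionMat.m02 = -2.0f * plane[0] * plane[2];
    reflectionMat.m03 = -2.0f * plane[0] * plane[3];

    reflectionMat.m10 = -2.0f * plane[1] * plane[0];
    reflectionMat.m11 = 1.0f - 2.0f * plane[1] * plane[1];
    reflectionMat.m12 = -2.0f * plane[1] * plane[2];
    reflectionMat.m13 = -2.0f * plane[1] * plane[3];

    reflectionMat.m20 = -2.0f * plane[2] * plane[0];
    reflectionMat.m21 = -2.0f * plane[2] * plane[1];
    reflectionMat.m22 = 1.0f - 2.0f * plane[2] * plane[2];
    reflectionMat.m23 = -2.0f * plane[2] * plane[3];

    reflectionMat.m30 = 0.0f;
    reflectionMat.m31 = 0.0f;
    reflectionMat.m32 = 0.0f;
    reflectionMat.m33 = 1.0f;
}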
The rendering of the reflection cameras is handled by the main left and right cameras. We attach the script below to the main right camera:
using UnityEngine;
using System.Collections;

public class ManageRightReflectionCamera : MonoBehaviour
{
    public GameObject reflectiveObj;
    public GameObject rightReflectionCamera;
    private Vector3 rightMainCamPos;

    void OnPreRender()
    {
        // Render the right reflection camera manually
        rightReflectionCamera.GetComponent<Camera>().Render();

        // Update the reflection texture in the shader of the reflective material
        reflectiveObj.GetComponent<Renderer>().material.SetTexture("_ReflectionTex",
            rightReflectionCamera.GetComponent<Camera>().targetTexture);

        // Pass the right main camera position to the shader in world coordinates
        rightMainCamPos = gameObject.GetComponent<Camera>().transform.position;
        reflectiveObj.GetComponent<Renderer>().material.SetVector("_StereoCamPosWorld",
            new Vector4(rightMainCamPos.x, rightMainCamPos.y, rightMainCamPos.z, 1));
    }
}
This script issues the rendering of the right reflection camera and updates the reflection texture _ReflectionTex in the shader of the reflective material. Additionally, the script passes the position of the right main camera to the shader in world coordinates.
A similar script is attached to the main left camera to handle the rendering of the left reflection camera, with the public variable rightReflectionCamera replaced by leftReflectionCamera.
The reflection texture _ReflectionTex is updated in the shader by the left and right reflection cameras alternately, so it is worth checking that the reflection cameras stay in sync with the main camera rendering. To do this, we can set the reflection cameras to update the reflection texture with different colours. The screenshot below, taken from the device, shows a stable picture of the reflective surface (the chessboard) for each eye.
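A minimal sketch of that check is shown below; the class name is illustrative. Attach it to each reflection camera with a different colour per eye, and the reflection texture becomes a flat colour that makes any left/right mismatch obvious:

using UnityEngine;

public class DebugReflectionColour : MonoBehaviour
{
    // Pick a different colour per eye, e.g. red for left, green for right
    public Color eyeColour = Color.red;

    void OnEnable()
    {
        Camera cam = GetComponent<Camera>();
        cam.clearFlags = CameraClearFlags.SolidColor;
        cam.backgroundColor = eyeColour;
        cam.cullingMask = 0; // render nothing so the texture is a flat colour
    }
}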
Figure 4. Left/right main camera synchronization with the runtime reflection texture.
The OnPreRender method in the script can be further optimized, as in the previous blog, to ensure that it only runs when the reflective object needs to be rendered. Refer to the previous blog for how to use the OnWillRenderObject callback to determine when the reflective surface needs to be rendered; the basic idea is sketched below.
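As a reminder of that idea, here is a minimal sketch with illustrative class and flag names: a script on the reflective object raises a visibility flag in OnWillRenderObject, and the OnPreRender methods of the main camera scripts return early when the flag is not set:

using UnityEngine;

public class ReflectiveObjectVisibility : MonoBehaviour
{
    // Set when any camera is about to render the reflective object
    public static bool IsVisible { get; private set; }

    void OnWillRenderObject()
    {
        IsVisible = true;
    }

    void LateUpdate()
    {
        // Reset before rendering starts; OnWillRenderObject will set it
        // again this frame if the object is actually visible
        IsVisible = false;
    }
}

// In ManageRightReflectionCamera.OnPreRender (and the left equivalent):
//     if (!ReflectiveObjectVisibility.IsVisible) return;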
Rendering stereo reflections from static objects based on local cubemaps
To render reflections based on static local cubemaps we need to calculate the reflection vector in the fragment shader and apply the local correction to it. The locally corrected reflection vector is then used to fetch the texel from the cubemap and render the reflection [1]. Rendering stereo reflections based on static local cubemaps means that we need a different reflection vector for each eye.
The view vector D is built in the vertex shader and is passed as a varying to the fragment shader:
D = vertexWorld - _WorldSpaceCameraPos;
In the fragment shader, D is used to calculate the reflection vector R according to the expression:

R = D - 2(N · D)N

where N is the normal to the reflective surface. This is the standard reflection formula, which the built-in reflect function implements.
To implement stereo reflections we need to provide the vertex shader with the positions of the left and right main cameras to calculate two different view vectors and thus two different reflection vectors.
The last instruction in the scripts attached to the main left and right cameras sends the position of the main left/right cameras to the shader and updates the uniform _StereoCamPosWorld. This uniform is then used in the vertex shader to calculate the view vector:
D = vertexWorld - _StereoCamPosWorld;
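To see how these pieces fit together, below is a compact sketch of the static (cubemap) part of the shader only. The local correction follows the technique described in the first blog; the property and uniform names (_Cubemap, _BBoxMin, _BBoxMax, _CubemapPos) are illustrative, and the combination with the planar reflection texture _ReflectionTex is omitted for brevity:

Shader "Custom/StereoLocalCubemapSketch"
{
    Properties
    {
        _Cubemap ("Reflection Cubemap", CUBE) = "" {}
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            samplerCUBE _Cubemap;
            float4 _StereoCamPosWorld; // updated per eye by the camera scripts
            float3 _BBoxMin;           // bounding box of the local environment
            float3 _BBoxMax;
            float3 _CubemapPos;        // position where the cubemap was baked

            struct v2f
            {
                float4 pos : SV_POSITION;
                float3 D : TEXCOORD0;   // per-eye view vector
                float3 N : TEXCOORD1;   // world-space normal
                float3 vertexWorld : TEXCOORD2;
            };

            v2f vert(appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.vertexWorld = mul(unity_ObjectToWorld, v.vertex).xyz;
                o.N = UnityObjectToWorldNormal(v.normal);
                // Per-eye view vector built from the uniform set in OnPreRender
                o.D = o.vertexWorld - _StereoCamPosWorld.xyz;
                return o;
            }

            // Local correction: intersect R with the bounding box and
            // rebuild the fetch direction relative to the cubemap position
            float3 LocalCorrect(float3 R, float3 vertexWorld)
            {
                float3 firstPlane = (_BBoxMax - vertexWorld) / R;
                float3 secondPlane = (_BBoxMin - vertexWorld) / R;
                float3 furthestPlane = max(firstPlane, secondPlane);
                float dist = min(min(furthestPlane.x, furthestPlane.y), furthestPlane.z);
                float3 intersection = vertexWorld + R * dist;
                return intersection - _CubemapPos;
            }

            float4 frag(v2f i) : SV_Target
            {
                float3 R = reflect(normalize(i.D), normalize(i.N)); // R = D - 2(N.D)N
                return texCUBE(_Cubemap, LocalCorrect(R, i.vertexWorld));
            }
            ENDCG
        }
    }
}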
Once reflections from both static and dynamic objects have been implemented in stereo, we can perceive the depth of the reflections rendered on the chessboard when viewed through the Google Cardboard headset.
Figure 5. Stereo reflections on the chessboard.
Conclusions
The local cubemap technique allows rendering of efficient, high quality reflections from static objects in mobile games. When combined with other techniques, it allows us to achieve higher reflection quality at very low cost.
Implementing stereo reflections in VR contributes to building a believable virtual world and to the sensation of full immersion we want the VR user to enjoy. In this blog we have shown how to implement stereo reflections in Unity for Google Cardboard with minimal impact on performance.
References
1. Reflections Based on Local Cubemaps in Unity
2. Combined Reflections: Stereo Reflections in VR