
Nothing but Static


At GDC 2013 I gave a presentation called Nothing but Static on the ARM booth. Attendance was disappointing, in part because I showed up late, but also because people were understandably nonplussed at the promise of a tech talk on making better use of static geometry in graphical applications. I shot myself in the foot with what I thought was a very clever title.

 

The real aim of the talk was to show how static geometry and textures could be used to create dynamic effects, reducing the bandwidth needed to animate a living environment. This is important because bandwidth is a constant worry in mobile graphics. A colleague of mine, Ed Plowman, regularly points out that if you map the raw compute power of GPUs over time, the ones in the mobile space scale along the same curve that console and desktop GPUs followed before them. If you chart available bandwidth over time, however, the curves diverge, as desktop GPUs developed into power-sucking monstrosities with their own arrays of fans and heat pipes. Bandwidth is almost entirely linked to power, and liquid cooling is a place mobile devices can’t afford to go.

 

With some older engines and applications, if geometry was animated, the animation had to be done on the CPU and the entire mesh re-sent to the GPU every frame. This was because the early mobile GPUs only supported OpenGL® ES 1.1, and with a fixed-function pipeline there was no way to manipulate the mesh after it had been sent. When OpenGL ES 2.0 was launched, it opened up a second possibility: skeletal animation on the GPU. You add an extra attribute to each vertex stating which bone it is aligned to, and then send the bone transformations as a uniform array, so the GPU can look up the relevant transformation for each vertex based on its bone ID.

 

This way the vertices can be stored in a vertex buffer object (VBO) and never change on the GPU; the only information sent per frame is the uniform array of bone transformations. This is certainly more efficient than animating on the CPU, but it still has drawbacks in some cases. If you imagine an example mesh of a human figure, the legs, torso, arms and head would all move independently, meaning a total of 10 bones. Since you’ll be doing nothing strange in the projection part of the matrix you can get away with a 4x3 matrix per bone, but even that means sending 120 floating point values per frame for a single model. If you need the model animated in a very specific way, such as character animation, this is still the most efficient method. Other types of animation needn’t be bound by such restrictions.
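To make the mechanism concrete, here is a minimal sketch of what such a vertex shader might look like in GLSL ES 2.0, assuming one bone per vertex and the 4x3-rows packing described above. All of the names (a_BoneIndex, u_Bones and so on) are my own, not taken from any particular engine, and the maths assumes rotation-and-translation bones only.

```glsl
// Sketch of GPU skinning with one bone per vertex (illustrative names only).
attribute vec3 a_Position;
attribute vec3 a_Normal;
attribute float a_BoneIndex;          // which bone this vertex follows

const int MAX_BONES = 10;
// Each bone is a 4x3 transform stored as three vec4 rows: 10 bones = 120 floats.
uniform vec4 u_Bones[MAX_BONES * 3];
uniform mat4 u_ViewProjection;

varying vec3 v_Normal;

void main()
{
    int b = int(a_BoneIndex) * 3;
    vec4 r0 = u_Bones[b];             // the three rows of this bone's 4x3 matrix
    vec4 r1 = u_Bones[b + 1];
    vec4 r2 = u_Bones[b + 2];

    vec4 p = vec4(a_Position, 1.0);
    // Apply the bone transform row by row (the projection row is implicitly 0,0,0,1).
    vec3 worldPos = vec3(dot(r0, p), dot(r1, p), dot(r2, p));
    // Rotate the normal with the upper 3x3 (valid for rotation + translation bones).
    v_Normal = vec3(dot(r0.xyz, a_Normal), dot(r1.xyz, a_Normal), dot(r2.xyz, a_Normal));

    gl_Position = u_ViewProjection * vec4(worldPos, 1.0);
}
```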

 

In the recent Seemore demo we showed a greenhouse with a monstrous plant growing from the centre of the floor, and dotted around it were writhing, vine-like tentacles, also bursting through the ground. If we’d wanted to animate each of them with its own skeleton we’d have had to carefully trade off the number of bones against the resolution of the movement. Getting a fluid rippling motion over a skeleton is difficult, probably because all the things in real life that move that way (like snakes and cephalopods) have either a huge number of bones or no bones at all.

 

So, for the tentacle we applied a sine wave, with the phase based on the model-space Y coordinate, shifting the tentacle’s vertices in X and Z. There’s a little more to it than a single lateral shift: there was also the matter of setting the surface normals and tangents based on the cosines of the same equation, as well as tilting the mesh around the central axis so it looked like it was really curling around, not just being skewed. The effect was applied along with a few other curvature and pulse equations, all of which had their magnitude increased by distance from the ground, so the base never moved away from the hole in the floor we’d made for it. The total per-frame data was a single vec4 for each tentacle, with the X and Y making the tentacle lean in a given direction (there had originally been plans to have them attempt to touch the player, but it was already freaky enough with them just wiggling) and the Z and W controlling the phase of the wriggling and the pulse.
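As a much simplified sketch of the idea (ignoring the normal, tangent and twisting work, and with made-up constants and uniform names), the vertex shader boils down to something like this:

```glsl
// Simplified tentacle wriggle: a sine wave phased along model-space Y displaces
// X and Z, and everything is scaled by height so the base stays anchored.
attribute vec3 a_Position;

uniform vec4 u_Control;        // xy: lean direction, z: wriggle phase, w: pulse phase
uniform mat4 u_ModelViewProjection;

void main()
{
    vec3 p = a_Position;
    float height = p.y;                              // distance up from the base at y = 0

    float wave  = sin(height * 4.0 + u_Control.z);   // lateral wriggle
    float pulse = 1.0 + 0.05 * sin(height * 8.0 - u_Control.w);

    // Scale every effect by height so the base never leaves its hole in the floor.
    p.xz += (u_Control.xy + vec2(wave * 0.2)) * height;
    p.xz *= pulse;

    gl_Position = u_ModelViewProjection * vec4(p, 1.0);
}
```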

A similar compound curve was applied to the tongue and the stem of the plant to give it a nice S-bend with a pair of circular curves of slowly shifting tightness.

 

Normally when I present this information, this is the point at which someone points out that not everything can just wriggle, and that real-world applications rely upon quite rigid constraints on their animation, which is why I quickly move on to a second example: the pages of a book in the Gesture demo we produced much earlier. The interface was intended to feel as tangible as possible, so interactive elements had to move correctly. One such element was a big, thick book which users could open and whose pages they could flip through.

 

A lot of virtual books have the pages as a pair of solid blocks which hinge in the middle, and I can only surmise from this that a lot of graphics coders have never really looked at a book. A block of pages in a book will never open to a flat surface, particularly if the book has a wide, stiff spine. What actually happens is that the spine bends some of the way and the rest of the bend comes from the pages, which curve out from the middle a short way then lie flat, ending in a chiselled face at the page edges. Getting the book to open in this way would be highly impractical with skeletal animation, but provided the mesh has sufficient vertex resolution, the page curvature can be easily calculated algorithmically by rotating the points around a cylindrical section, the outer radius of which becomes the offset for the chisel on the end. A single uniform float controlled the curvature of the pages in this way, from fully closed to fully open.
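A rough sketch of that bend, assuming the page block lies in the xz plane with the spine along z at x = 0, and with a made-up radius and wrap angle, might look like this in a vertex shader:

```glsl
// Bend the part of the page nearest the spine around a cylindrical section,
// then continue flat, tangent to the end of the arc. u_Curl (0..1) controls
// how tightly the pages wrap; all values here are illustrative.
attribute vec3 a_Position;

uniform float u_Curl;
uniform mat4  u_ModelViewProjection;

void main()
{
    vec3 p = a_Position;

    float radius   = 0.1;                 // radius of the cylindrical section
    float maxAngle = u_Curl * 1.5;        // how far the pages wrap around it
    float arc      = radius * maxAngle;   // length of page consumed by the bend

    if (p.x < arc)
    {
        // This part of the page wraps around the cylinder.
        float a = p.x / radius;
        p = vec3(radius * sin(a), p.y + radius * (1.0 - cos(a)), p.z);
    }
    else
    {
        // The rest lies flat, continuing along the tangent at the end of the arc.
        float rest = p.x - arc;
        vec2 dir = vec2(cos(maxAngle), sin(maxAngle));
        p = vec3(radius * sin(maxAngle) + rest * dir.x,
                 p.y + radius * (1.0 - cos(maxAngle)) + rest * dir.y,
                 p.z);
    }

    gl_Position = u_ModelViewProjection * vec4(p, 1.0);
}
```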

 

A similar equation was then used to animate a single page surface, rising up from this tightly curved rest position into a more relaxed curve, which then inverted as the page turned over, finally landing flat in the rest position on the opposite side. Of course, the fact that it started and ended flush with the pages allowed the textures to be parameterised, so that during the animation one half of the book showed the next page, the other showed the previous one, and the turning page in between, rendered only during the page-turn animation, had a page on either side, giving the perfect illusion of the pages turning one at a time from a solid block of pages.

 

On ARM® Mali™ hardware supporting OpenGL ES 3.0 the vertex shader can access texture sampling functions, which means that algorithmic animation of vertices can use textures as input to give less rigidly mathematical results, such as ocean waves or deformable terrain.
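As a rough illustration (the texture and uniform names are mine, not from any demo), a GLSL ES 3.0 vertex shader can displace a flat grid by a scrolling height texture like this:

```glsl
#version 300 es
// Vertex texture fetch: displace a flat grid by a height texture, e.g. for
// waves or deformable terrain. Texture and uniform names are illustrative.
in vec3 a_Position;
in vec2 a_TexCoord;

uniform sampler2D u_HeightMap;
uniform vec2  u_Scroll;             // time-based scroll of the height field
uniform float u_Amplitude;
uniform mat4  u_ModelViewProjection;

out vec2 v_TexCoord;

void main()
{
    // textureLod avoids relying on derivatives, which are undefined in a vertex shader.
    float h = textureLod(u_HeightMap, a_TexCoord + u_Scroll, 0.0).r;
    vec3 p = a_Position + vec3(0.0, h * u_Amplitude, 0.0);

    v_TexCoord = a_TexCoord;
    gl_Position = u_ModelViewProjection * vec4(p, 1.0);
}
```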

 

Algorithmic animation is not limited to the vertex shader. There’s no end to the number of weird and wonderful effects possible with a little ingenuity in the fragment shader. Combinations of sine waves with a time-controlled phase value can represent anything from a rippling pond to an electric plasma arc. My personal favourite effect in this vein is the dust particle animation from the Timbuktu demo.

 

For this effect each particle was a camera-aligned billboard with texture coordinates ranging from (2, 2) to (-2, -2). This curious set-up meant we could do a quick r = x² + y² to figure out whether the fragment was inside a unit circle and discard any that weren’t. Following on with sqrt(1 - r) we get the third dimension of the surface of a unit sphere inside that circle’s screen space. Since it’s a unit sphere, that position is also the surface normal at that point on the sphere; converted to world space we could then run lighting equations to shade it as a sphere, and use the dot product with the camera vector to fade it at the edges, like a perfectly round cloud in the middle of the billboard.

 

That effect in itself is no big deal, but the magic happens when you add a noisy texture into the mix. Sample the texture based on that original X and Y, as well as a time-based offset value, and use the resulting red and green to offset the X and Y before performing the sphere calculation. What this gives you is a noisy cloud which seems to flow over time. Since this is a particle, it also fades over time, and in the course of that fade the distortion is increased, making the cloud expand, billow and dissipate in a noisy-looking, organic way.
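Condensed into a sketch (uniform names and constants are illustrative, and the lighting is done in view space here for brevity), the fragment shader for such a particle might look like this:

```glsl
// Dust particle: billboard texcoords run from (2, 2) to (-2, -2); a noise
// texture distorts them over the particle's lifetime before the fragment is
// shaded as a point on a unit sphere.
precision mediump float;

varying vec2 v_TexCoord;            // (2, 2) to (-2, -2) across the quad

uniform sampler2D u_Noise;
uniform float u_Time;               // drives the flowing motion
uniform float u_Life;               // 0 at spawn, 1 just before the particle dies
uniform vec3  u_LightDirView;       // light direction in view space
uniform vec4  u_Colour;

void main()
{
    // Offset the coordinates with the red/green of a scrolling noise texture;
    // the distortion grows as the particle fades, so the cloud billows outwards.
    vec2 noise = texture2D(u_Noise, v_TexCoord * 0.25 + vec2(u_Time * 0.05)).rg - 0.5;
    vec2 xy = v_TexCoord + noise * (0.5 + 2.0 * u_Life);

    // Reject anything outside the unit circle, then lift it onto a unit sphere.
    float r = dot(xy, xy);                       // x*x + y*y
    if (r > 1.0) discard;
    vec3 normal = vec3(xy, sqrt(1.0 - r));       // point on a unit sphere == its normal

    // Shade as a sphere and fade towards the silhouette (camera vector is (0, 0, 1) here).
    float diffuse  = max(dot(normal, u_LightDirView), 0.0);
    float edgeFade = normal.z;
    float alpha    = edgeFade * (1.0 - u_Life);

    gl_FragColor = vec4(u_Colour.rgb * (0.3 + 0.7 * diffuse), u_Colour.a * alpha);
}
```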

 

The great thing about this technique is that by changing the texture, the cloud looks different. The same algorithm can produce tight, noisy clouds of dust or soft, billowy clouds of steam. It can even be distorted more strongly in a specific direction to look like a thin, wispy vapour.

 

All these techniques are described in the GDC talk I gave, which was later combined with my second presentation from the same event, about draw-call batching, into a far more attractively titled video called “Dynamic Duo”. For reasons unfathomable to me it is most often referred to as “Optimised Effects for Mobile”, and you can find it on www.malideveloper.arm.com.

 

 

If you’d like to talk about any of the techniques I’ve described in person, I regularly attend game development events and I’m not hard to find. Keep an eye on the ARMMultimedia Twitter feed to see which events we’re attending next. Alternatively, drop a comment in the section below.

