Evaluating compute performance on mobile platforms: an introduction

Using the GPU for compute-intensive processing is all about improving performance compared to using the CPU only. But how do we measure performance in the first place? In this post, I'll touch upon some basics of benchmarking compute workloads on mobile platforms to ensure we are on solid ground when talking about performance improvements.

 

Benchmarking basics

To measure performance, we select a workload and a metric of its performance. Because workloads are often called benchmarks, the process of evaluating performance is usually called benchmarking.

 

Selecting a representative workload is a bit of a dark art, so we will leave this topic for another day. Selecting a metric is more straightforward.

 

The most widely used metric is execution time. To put it bluntly: the lower the execution time, the faster the system. In other words, lower is better.

 

Frequently, the chosen metric is inversely proportional to the execution time, so the higher the metric, the lower the execution time. In other words, higher is better. For example, when measuring memory bandwidth, the usual metric is the amount of data copied per unit of time. Since halving the copy time doubles this metric, higher is indeed better.
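
To make the arithmetic concrete, here is a minimal sketch in Python of turning a measured copy time into a bandwidth figure. The buffer size and elapsed time are made-up illustrative values, not measurements.

    # Convert a measured copy time into a bandwidth metric (higher is better).
    size_bytes = 64 * 1024 * 1024   # 64 MiB copied (illustrative value)
    elapsed_s = 0.025               # measured execution time in seconds (illustrative)

    bandwidth_gbs = size_bytes / elapsed_s / 1e9
    print("Bandwidth: %.2f GB/s" % bandwidth_gbs)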

 

Benchmarking pitfalls

Benchmarking on mobile platforms can be tricky. Running experiments back to back can produce unexpected performance variation, and so can a dwindling battery charge, a hot room temperature or an alignment of stars. Fundamentally, we are talking about battery-powered, passively cooled devices which tend to like saving their battery charge and keeping their temperature down. In particular, dynamic voltage and frequency scaling (DVFS) can get in the way. Controlling these factors (or at least accounting for them) is key to meaningful performance evaluation on mobile platforms.
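
As a rough illustration, one way to account for DVFS is to sample the GPU frequency around a run and flag any change. This is only a sketch: the sysfs path below is hypothetical, and the real location differs across devices and kernels, so consult your platform's documentation.

    # Detect frequency changes (DVFS) around a benchmark run.
    # NOTE: the sysfs path is hypothetical; real paths vary by device and kernel.
    GPU_FREQ_PATH = "/sys/class/misc/mali0/device/clock"  # hypothetical

    def read_gpu_freq():
        with open(GPU_FREQ_PATH) as f:
            return int(f.read().strip())

    def run_benchmark():
        pass  # placeholder: launch your workload here

    freq_before = read_gpu_freq()
    run_benchmark()
    freq_after = read_gpu_freq()
    if freq_before != freq_after:
        print("Warning: GPU frequency changed during the run (DVFS at work?)")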

 

Deciding what to measure and how to measure it deserves special attention. In particular, when focusing on optimising device code (kernels), it's important to measure kernel execution time directly, because host overheads can hide the effects of kernel optimisations.
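
For instance, with OpenCL you can query per-kernel timestamps from events when the command queue is created with profiling enabled. Here is a hedged sketch using PyOpenCL, with a trivial kernel of my own invention (not from the notebook); the profiling counters cover device execution only, excluding host-side queueing and driver overheads.

    # Time a kernel directly via OpenCL event profiling (PyOpenCL).
    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(
        ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)

    src = """
    __kernel void scale(__global float *buf, const float k) {
        size_t i = get_global_id(0);
        buf[i] *= k;
    }
    """
    prog = cl.Program(ctx, src).build()

    data = np.random.rand(1 << 20).astype(np.float32)
    buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                    hostbuf=data)

    evt = prog.scale(queue, data.shape, None, buf, np.float32(2.0))
    evt.wait()
    # Profiling timestamps are in nanoseconds.
    kernel_ms = (evt.profile.end - evt.profile.start) * 1e-6
    print("Kernel execution time: %.3f ms" % kernel_ms)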

 

To illustrate some of the pitfalls, I have created an IPython Notebook, which I encourage you to view before peeking at our next topic.

[Image: a sample from the IPython Notebook]

What's next?

 

Using GPU settings that are ill-suited to performance evaluation is a common mistake, but it should not bite you once you are aware of it. However, even when all known experimental factors are carefully controlled for, experiments on real systems may produce noticeably different results from run to run. To properly evaluate performance, what we really need is a good grasp of basic statistical concepts and techniques...
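
As a small taster, even something as simple as reporting the median alongside the mean can change the story when one run is perturbed. A sketch, with illustrative (made-up) timings:

    # Summarise repeated measurements instead of trusting a single run.
    import statistics

    timings_ms = [10.4, 10.6, 10.3, 12.9, 10.5, 10.4, 10.7, 10.5]  # illustrative

    print("mean   = %.2f ms" % statistics.mean(timings_ms))
    print("median = %.2f ms (robust to the 12.9 ms outlier)" % statistics.median(timings_ms))
    print("stdev  = %.2f ms" % statistics.stdev(timings_ms))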

 

Are you snoring already? I too used to think that statistics was dull and impenetrable. (A confession: statistics was the only subject I flunked at university, I swear!) It turns out it's anything but dull once you apply it to optimising performance! If you are at ARM TechCon on 1-3 October 2014, come along to my live demo, or just wait a little longer and I will tell you all you need to know!

