FPS Versus Frame Time



Written by Robert Dunlop
Microsoft DirectX MVP

FPS: A common yet flawed metric of game performance

One of the most common ways to provide a simple measure of graphics performance in game titles is frame rate, expressed in frames per second (FPS).  However, this measure can be quite deceiving, especially with today's faster video hardware.  While it may give a rough indication of overall performance, FPS is a very poor metric when it comes to making judgments about optimization.

Non-Linearity of FPS values

The problem with using FPS to measure performance, from a mathematical perspective, is that it is non-linear.  But before I go there, let's look at it another way: simply put, FPS answers the wrong side of the question.  When evaluating code performance in a real-time rendering application, what we care about is how long it takes to render each frame, and how much time various sections of code contribute to that total.  FPS gives us the flip side of this: it's like asking how long it took to get from point A to point B, and being told that the car was traveling at 60 miles per hour.  Sure, if we know the distance from A to B we could figure it out, but it's not what we asked!

Now, this may seem like I'm being a bit picky about the details, and to be honest part of it comes from a pet peeve.  Namely, it seems like at least once a week I see a question to the effect of:

"My application was running at 900FPS, then I added rendering of .... and my frame rate dropped to 450FPS.  Why is this feature so slow??  Why is it cutting my performance in half?!??"

Or a common variation on this theme:

"When I render a single object, I'm getting 900FPS.  Then when I render a second object in the scene, my frame rate drops to 450FPS, and 300FPS if I render 3 objects!!  Why is DirectX giving me such terrible performance?  It obviously can't handle many more polygons at this rate!"

I have another issue with a statement like that, but let's address this usage of FPS first.  OK, remember I said we really want to know how long it takes the code to draw a frame, right?  Well, let's take a look at how long we are talking here.  There are 1,000 milliseconds in a second, so if we divide 1000 by the specified frame rates, we find the following times:

1000ms/sec / 900FPS = 1.111.. ms per frame
1000ms/sec / 450FPS = 2.222.. ms per frame
1000ms/sec / 300FPS = 3.333.. ms per frame
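
As a quick sanity check, here is a minimal C++ sketch (my own illustration, not part of any framework code) that performs the same conversion:

#include <cstdio>

// Convert a frame rate in FPS to the time spent on each frame, in milliseconds.
static double FrameTimeMs( double fps )
{
    return 1000.0 / fps;
}

int main()
{
    const double rates[] = { 900.0, 450.0, 300.0, 60.0, 56.25 };
    for ( int i = 0; i < 5; i++ )
        printf( "%7.2f FPS = %7.3f ms per frame\n", rates[i], FrameTimeMs( rates[i] ) );
    return 0;
}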

[Graph: frame rate (FPS) plotted against frame time, for execution times of 1 ms through 40 ms, illustrating the steep, non-linear relationship.]

Hey, notice something going on here?  The frame time changes linearly with the number of objects rendered, but the frame rate does not!  In fact, it is highly non-linear, as shown in the graph above, which plots the frame rate for execution times from 1 millisecond through 40 milliseconds.  Now, to illustrate how radically this can slant one's perception of performance, do you think the person complaining above would react the same way to a drop from 60FPS to 56.25FPS?  Probably not, I think... but check this out:

1000ms/sec / 900FPS = 1.111.. ms per frame
1000ms/sec / 450FPS = 2.222.. ms per frame
Increase in execution time: 1.111.. ms

1000ms/sec / 60FPS =    16.666.. ms per frame
1000ms/sec / 56.25FPS = 17.777.. ms per frame
Increase in execution time: 1.111.. ms!

Given such a disparity, it's easy to see how bad conclusions can be reached, especially when comparing methods in different contexts.  For example, if one method in an app caused a drop from 900FPS to 450FPS, while another method in another engine caused a drop from 60FPS to 55FPS, which would you think is more expensive?  If you've been paying attention, you should suspect that the 5FPS drop is a sign of a greater performance cost than the 450FPS drop seen with the first method!  In fact, that 5FPS drop adds about 1.52 ms per frame (1000/55 - 1000/60), which is 36.4% more execution time than the 1.11 ms added by the 450FPS drop!

So take that as food for thought if you are currently using an FPS counter as a measure of your performance.  If you want a quick indication of performance between profiling sessions, use frame time instead.  In the DX9 framework, for example, you could modify the CD3DApplication::UpdateStats() function to use something like:

// Report frame time (ms per frame) alongside the FPS value:
_sntprintf( m_strFrameStats, cchMaxFrameStats,
            _T("%.02f ms/f %.02f fps (%dx%dx%d)"),
            1000.0f/m_fFPS, m_fFPS,
            ...
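
If you aren't using the framework, you can just as easily measure frame time yourself.  Below is a minimal sketch using the Win32 high-resolution performance counter; the function name GetFrameTimeMs is my own.  Call it once per frame, for example right after presenting the back buffer:

#include <windows.h>

// Returns the time elapsed since the previous call, in milliseconds,
// using the Win32 high-resolution performance counter.
double GetFrameTimeMs()
{
    static LARGE_INTEGER freq = { 0 };
    static LARGE_INTEGER last = { 0 };
    LARGE_INTEGER now;

    if ( freq.QuadPart == 0 )            // first call: initialize
    {
        QueryPerformanceFrequency( &freq );
        QueryPerformanceCounter( &last );
        return 0.0;
    }

    QueryPerformanceCounter( &now );
    double ms = ( now.QuadPart - last.QuadPart ) * 1000.0 / (double)freq.QuadPart;
    last = now;
    return ms;
}

Averaging this value over a few hundred frames before and after a change gives the real cost of that change in milliseconds, which is directly comparable regardless of the base frame rate.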

Till next time....

Robert Dunlop.
1/22/2003

This site, created by DirectX MVP Robert Dunlop and aided by the work of other volunteers, provides a free on-line resource for DirectX programmers.

Special thanks to WWW.MVPS.ORG, for providing a permanent home for this site.

Last updated: 07/26/05.