Furmark is one of those applications that chip designers, or rather GPU designers, are not very fond of. It stresses the chips significantly more than most normal applications such as games or (other) benchmarks. It is sometimes referred to as a power virus, which, given Wikipedia's current (as of this writing) definition, is simply false.

Quote (from http://en.wikipedia.org/wiki/Power_virus):
A power virus is a malicious computer program that executes specific machine code in order to reach the maximum CPU power dissipation (thermal energy output for the central processing unit). Computer cooling apparatus are designed to dissipate power up to the Thermal Design Power, rather than maximum power, and a power virus could cause the system to overheat; if it does not have logic to stop the processor, this may cause permanent physical damage.


Stability Test applications are similar programs which have the same effect as power viruses (high CPU usage) but stay under the user's control. They are used for testing CPUs, for example, when overclocking.[citation needed]
Different micro-architectures typically require different machine code to hit their maximum power. Examples of such machine code do not appear to be distributed in CPU reference materials.
Obviously, the term Stability Test fits the usage model of Furmark-type tools much more closely, since such a tool always stays under the user's control. Further, it does not - to the best of my knowledge - run any architecture-specific functional code.


But what makes it tick - or rather: why does it not scale on my Radeon HD 5870? Furmark, after all, is also a benchmark, and when run in benchmark mode I get the following results (sorry for the old versions of everything, it's just from my archives) for Furmark 1.3.0 running at 1600x1200:
• X850 XT PE: 8 Fps (Catalyst 8.4, 540/587 MHz)
• X1800 XT: 9 Fps (Catalyst 8.4, 621/747 MHz)
• X1950 XTX: 25 Fps (Catalyst 8.7, 648/999 MHz) -> notice a trend already?
• HD 2900 XT: 38 Fps (Catalyst 8.7, 743/999 MHz)
• HD 3870: 39 Fps (Catalyst 8.7, 775/1125 MHz) -> everyone knows, R600 simply had too much bandwidth
• HD 4870: 73 Fps (Catalyst 8.7, 750/900 MHz)
• HD 4870: 37 Fps (Catalyst 8.10, 750/900 MHz) -> AMD having decided that Furmark is evil for RV770
• HD 5870: 116 Fps (Catalyst 9.11, 850/1200 MHz)
Now, when running Furmark 1.6.5 in Stability Test + Extreme Burning mode at 1280x1024, things look a bit different.
• HD 4870: 51 Fps (Nearly all recent Catalyst after 8.10)
• HD 4770: 25 Fps (Nearly all recent Catalyst after 9.10)
• HD 5770: 35 Fps (Nearly all recent Catalyst after 9.11)
• HD 5870: 72 Fps (Nearly all recent Catalyst after 9.10)
• HD 5670: 20 Fps (Nearly all recent Catalyst after 10.3)
Where's the great scaling gone? Note that the HD 5770 has the same number of shaders as the HD 4870 and a 100 MHz higher core clock. Ok, architectures did change, but AMD's did not change so drastically as to explain these massive cutbacks in scaling - whereas before, the Fps increased close to linearly with the number and frequency of the shaders.
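The gap can be made concrete with a quick back-of-the-envelope check. A minimal sketch, assuming the commonly cited stream-processor counts of 800 for RV770/Juniper and 1600 for Cypress (these ALU counts are not stated in this article; the Fps figures are the Stability Test numbers above):

```python
# Naive ALU-throughput scaling: if Furmark were purely shader-bound, Fps
# should scale with (shader count x core clock). Shader counts below are
# the commonly cited SP counts for these chips, not taken from this article.
cards = {
    # name: (shader_count, core_mhz, measured Stability Test Fps)
    "HD 4870": (800, 750, 51),
    "HD 5770": (800, 850, 35),
    "HD 5870": (1600, 850, 72),
}

base = "HD 4870"
b_sp, b_clk, b_fps = cards[base]
for name, (sp, clk, fps) in cards.items():
    # scale the HD 4870's measured Fps by the raw ALU-throughput ratio
    expected = b_fps * (sp * clk) / (b_sp * b_clk)
    print(f"{name}: expected ~{expected:.0f} Fps, measured {fps} Fps")
```

Under this naive model the HD 5870 should land well above twice the HD 4870's result; the measured 72 Fps falls far short of that.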

So I decided to run my HD 5870 through a series of Furmark tests (in Extreme Burning mode) where I scaled core and memory clocks in steps of 5 % of the base clocks, both up (only one 5 % step possible) and down (more steps possible).

Here's what I got:

Furmark scaling with core and memory clocks on Radeon HD 5870


While Furmark shows a little bit of memory dependence at fairly low resolutions, the scaling is much more pronounced across the different core clocks, sometimes even approaching linearity. So there are only a few possible explanations left for why Furmark should be twice as fast on the HD 5870 as on the lower-clocked HD 4870 - and yet it is not.
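The core-clock sweep boils down to checking whether Fps divided by core clock stays constant across the steps. A sketch of that check - the data points are illustrative placeholders, not the measured values from the chart above:

```python
# For perfectly core-bound scaling, fps / core_clock is the same at every
# clock step. The points below are made-up illustrative values for an
# 850 MHz base clock stepped in 5 % increments, NOT this article's data.
measurements = [
    # (core_mhz, fps) at -15 %, -10 %, -5 %, base, +5 %
    (722, 61.5), (765, 65.0), (808, 68.5), (850, 72.0), (893, 75.5),
]

ratios = [fps / mhz for mhz, fps in measurements]
spread = (max(ratios) - min(ratios)) / min(ratios)
# A spread near 0 % indicates near-linear scaling with core clock,
# i.e. the workload is almost entirely core-bound at this resolution.
print(f"Fps-per-MHz spread across clock steps: {spread:.1%}")
```

A large spread would instead point to another bottleneck (memory bandwidth, or a deliberate cap) eating into the core-clock scaling.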