Jun 26, 2011
 

After reading my post on the shortcomings of Amdahl’s law, a reader of the Future Chips blog, Bjoern Knafla (@bjoernknafla), suggested on Twitter that I add a discussion of Gustafson’s law to the blog. Fortunately, I had the honor of meeting Dr. John Gustafson in person when he came to Austin in 2009. The following are my mental notes from my discussion with him. It is a very simple concept that should be understood by all parallel programmers and computer architects designing multicore machines.

IMO there are two distinct pillars of Gustafson’s law. I describe them both in my own words.

Lemma 1:

There exist workloads that are gaseous in nature: when provided with more compute power, they expand to consume the newly provided power.

Such programs are more common than you think. I will give four real-life examples:

1. Bitcoin mining. If I give you more compute power, you will not finish sooner. Instead, you will just mine more coins (a toy sketch of this behavior follows the list).

2. Graphics. If I give you more compute power, you will just run your frames at a higher resolution or with more details.

3. Numerical analysis, such as computing pi. If I give you more compute power, you will just compute more digits of pi.

4. Weather prediction. If I give you more compute power, you will just run your software longer to get even more accurate predictions.

Side note: This property is also found in several non-parallel programs.
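To make Lemma 1 concrete, here is a minimal Python sketch in the spirit of the mining example. It is illustrative only: the two-second budget and the SHA-256 stand-in for mining are my assumptions, not anything from an actual miner. The wall-clock budget stays fixed; extra workers simply produce more hashes instead of finishing sooner.

```python
import hashlib
import time
from multiprocessing import Pool

def mine(seconds):
    """Hash as many nonces as possible within a fixed time budget."""
    count = 0
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        hashlib.sha256(count.to_bytes(8, "little")).digest()  # stand-in for mining work
        count += 1
    return count

if __name__ == "__main__":
    budget = 2.0  # the time budget is fixed; the work expands instead
    for workers in (1, 2, 4):
        with Pool(workers) as pool:
            total = sum(pool.map(mine, [budget] * workers))
        print(f"{workers} worker(s) -> {total} hashes in {budget}s")
```

Every run takes the same two seconds; only the amount of work done grows with the number of workers. That is what I mean by gaseous.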

Lemma 2:

When the problem size is increased, the parallel portion expands faster than the serial portion.

For example, consider Matrix-Matrix Multiply (MMM). The setup of MMM, i.e., initializing the matrices, grows only as O(n^2) in the matrix dimension n (linear in the number of matrix elements), whereas the actual compute is O(n^3).
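A minimal sketch of this, assuming the textbook triple-loop formulation of MMM:

```python
import random

def mmm(n):
    # Setup (serial portion): initializing the matrices touches O(n^2) elements.
    a = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [[random.random() for _ in range(n)] for _ in range(n)]
    c = [[0.0] * n for _ in range(n)]
    # Compute (parallel portion): the triple loop performs O(n^3) multiply-adds;
    # every (i, j) cell is independent, so each could go to a different core.
    for i in range(n):
        for j in range(n):
            acc = 0.0
            for k in range(n):
                acc += a[i][k] * b[k][j]
            c[i][j] = acc
    return c
```

Doubling n quadruples the setup cost but multiplies the compute by eight, so the parallel fraction of the total runtime keeps growing with the problem size.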

What does it imply?

Gustafson’s law only applies to workloads where both of the above conditions hold. If a workload follows Gustafson’s law, it is hurt less by its serial portion: as the problem size scales up with the number of cores, the parallel portion dominates and speedup keeps growing.
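For reference, the two speedup formulas can be put side by side. In the sketch below, s is the serial fraction; note that Amdahl’s law measures s on a fixed, single-core problem, while Gustafson’s law measures it on the scaled, n-core run. The 5% serial fraction is just an illustrative value:

```python
def amdahl_speedup(s, n):
    # Fixed problem size; s is the serial fraction of the one-core run.
    return 1.0 / (s + (1.0 - s) / n)

def gustafson_speedup(s, n):
    # Scaled problem size; s is the serial fraction of the n-core run.
    return n - (n - 1) * s

for n in (2, 8, 64):
    print(n, round(amdahl_speedup(0.05, n), 2), round(gustafson_speedup(0.05, n), 2))
```

At 64 cores with a 5% serial fraction, Amdahl caps the speedup near 15x on a fixed problem, while the scaled speedup under Gustafson is about 61x.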

What does it not imply?

It does NOT imply that Amdahl’s law is dead. It just implies that Amdahl’s law is less important in some workloads. I stress the word less because the serial portion still exists, and we know that it still hurts performance, just with a lesser magnitude.

There are many workloads out there which aren’t gaseous. For example, when sorting a list of numbers in Excel, I will not increase the size of my balance sheet if my computer gets faster. Similarly, when doing spell check, I will not write longer documents if my computer has become faster. Lastly, I will not make my database transactions lengthier if I can run them faster.

Shortcomings of Gustafson’s Law

Just like Amdahl’s law, Gustafson’s law assumes the world splits cleanly into a perfectly parallel portion and a completely serial portion. It also does not account for the overhead associated with the creation/deletion of threads, nor for other types of serial portions such as critical sections. See this post for my list of assumptions. Thus, Gustafson’s law only becomes applicable if (a) the workload obeys all the assumptions of Amdahl’s law, and (b) it obeys the above two lemmas.
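As an aside, a critical section is exactly the kind of serial portion neither law models. A minimal, hypothetical sketch (the shared counter and 4-process setup are mine, purely for illustration):

```python
from multiprocessing import Process, Lock, Value

def worker(lock, counter, iters):
    for _ in range(iters):
        # Critical section: only one process can hold the lock at a time,
        # so this region executes serially no matter how many cores we add.
        with lock:
            counter.value += 1

if __name__ == "__main__":
    lock = Lock()
    counter = Value("i", 0, lock=False)  # synchronized by our explicit lock
    procs = [Process(target=worker, args=(lock, counter, 100_000))
             for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)  # 400000, but the increments ran one at a time
```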

What can programmers do?

Still try to eliminate the serial part. If you can’t eliminate the serial part, at least try to make it grow more slowly than the parallel part when the working set increases. Also try to make the time spent in the serial part constant and independent of the number of threads (I realize this is impossible in most cases); one common trick is sketched below.
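One place where the serial part often grows with the thread count is the final reduction that combines per-thread partial results. A hypothetical sketch of the idea (the helper names are mine): a linear merge costs O(T) serial steps for T threads, while a pairwise tree merge needs only O(log T) levels.

```python
def linear_merge(partials):
    # Serial combine: O(T) steps for T per-thread partial results, so the
    # serial portion grows linearly with the number of threads.
    total = 0
    for p in partials:
        total += p
    return total

def tree_merge(partials):
    # Pairwise combine: O(log T) levels, and the additions within each level
    # are independent, so the serial growth with thread count is much slower.
    while len(partials) > 1:
        partials = [partials[i] + partials[i + 1]
                    if i + 1 < len(partials) else partials[i]
                    for i in range(0, len(partials), 2)]
    return partials[0]

assert linear_merge(list(range(7))) == tree_merge(list(range(7))) == 21
```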

What can architects do?

Keep Gustafson’s law in mind and ensure that workloads that do obey Gustafson’s law do not get penalized. This implies ensuring that a workload that does expand to leverage more cores does not become limited due to the memory system. (I am just asking for a balanced design).
