May 16, 2011

Prof. Yale Patt defines computer architecture as the contract between hardware and software. Computer architects partition the work between hardware and software, and they also define the interface between them: the Instruction Set Architecture (ISA). Computer architects make their decisions based on the needs of the customers and the constraints posed by device physics and finances.

The factors that influence our decisions continue to change. This post describes what, in my opinion, are the top three factors that (should) influence our decisions today.

1. Performance Requirements

Performance is defined as the amount of work a computer can do in a given time, for example, how fast a computer can react to a user action such as a mouse click. The higher the performance, the faster the “reaction time.” Many computing tasks have strict performance requirements, e.g., a computer that takes 25 hours to predict what the weather will be in 24 hours isn’t very useful. Other requirements are strict from a user-experience standpoint: if it takes two minutes for my cell phone to dial a phone number, it is clearly unacceptable. Thus, computer design must always consider the performance requirements that make the computer useful.
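To make the idea of a “reaction time” budget concrete, here is a minimal sketch in C (my own, assuming a POSIX system). The work_item() function and the 100 ms budget are hypothetical stand-ins, not requirements from any real product.

```c
/* Minimal sketch: time one unit of work against a latency budget.
 * work_item() and the 100 ms budget are placeholders. */
#include <stdio.h>
#include <time.h>

static void work_item(void) {
    volatile long sum = 0;
    for (long i = 0; i < 10 * 1000 * 1000; i++)
        sum += i;                      /* stand-in for the real task */
}

int main(void) {
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    work_item();
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ms = (end.tv_sec - start.tv_sec) * 1e3 +
                (end.tv_nsec - start.tv_nsec) / 1e6;
    printf("latency: %.2f ms (budget: 100 ms) -> %s\n",
           ms, ms <= 100.0 ? "meets requirement" : "too slow");
    return 0;
}
```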

2. Power Requirements

Computers need electrical power to do their work, just like human beings need food to function. It is important to consider how much electrical energy is required to perform a particular task: this determines the electricity bill and also the battery life of our cell phones, tablets, and laptops.

Some computers can do the same task with less power while others require more. It is important to point out that the computers that burn less power are often slow, e.g., my cell phone would take hours to open MS Word, but it can run on a small battery for hours. The ones that burn more power are fast, e.g., my desktop burns a lot more power but can open MS Word in seconds; that same desktop would drain a cell phone battery in about a minute.
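A back-of-the-envelope sketch of that gap, using assumed, illustrative numbers rather than figures from the post (roughly a 5 Wh phone battery, a ~1 W phone chip under load, and a ~250 W desktop), since energy is simply power multiplied by time:

```c
/* Back-of-the-envelope power comparison with assumed numbers. */
#include <stdio.h>

int main(void) {
    const double battery_wh    = 5.0;    /* assumed phone battery capacity */
    const double phone_watts   = 1.0;    /* assumed phone chip power draw  */
    const double desktop_watts = 250.0;  /* assumed desktop power draw     */

    printf("phone runtime on battery:   %.1f hours\n",
           battery_wh / phone_watts);
    printf("desktop runtime on battery: %.1f seconds\n",
           battery_wh / desktop_watts * 3600.0);
    return 0;
}
```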

3. Programmer Effort

A major factor in computer design is how much effort is required to program the computer. Programmer effort is very important from an economic standpoint because it determines the cost of the software: the harder a computer is to program, the more expensive its software is. For example, the cost of software run by government defense labs often exceeds hundreds of millions of dollars, but that software does get very high performance (notice the trade-off).
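As a toy illustration of this trade-off (my example, not from the post), here is the same array reduction written the quick way and a hand-tuned way that takes more effort to write, read, and maintain:

```c
/* Same reduction, two levels of programmer effort. */
#include <stddef.h>

/* Low effort: straightforward, but each add waits on the previous one. */
double sum_simple(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* More effort: four independent accumulators expose instruction-level
 * parallelism, which is typically faster on modern cores. */
double sum_tuned(const double *a, size_t n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)           /* leftover elements */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}
```

The tuned version is the same kind of effort the defense-lab example pays for, just at a vastly larger scale.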

Minor factors

Other factors, such as the engineering resources required to design the hardware, also impact the design. In general, an architecture is first defined assuming ample engineering resources, and the design is later simplified as resource constraints get in the way. This is why the actual shipped hardware is often less complex than the architecture initially designed for the product.

Applying this premise to today’s computers …

Together, these three factors form a rigid triangle: it is difficult to build a computer that is fast, power-efficient, and easy to program. So what are chip manufacturers building? I present the following examples:

Programmer Effort / Power / Performance

High / High / High: These are super-fast computers where we pay in power and programmer effort to get ninja performance. The best example is 3D games running on high-performance discrete GPUs. Coding them requires a lot of effort as programmers optimize their code to the last bit, and the chip burns a large amount of power, providing very high performance in return.

High / High / Low: Not a good idea. This would be a failed design in my opinion.

High / Low / High: Network chips like the Intel IXP are great examples of this. Coding anything complex on these chips is a nightmare, but if something can be mapped onto them successfully, they provide very high performance for low power. Other examples that follow this trend to a lesser extent include the Intel Itanium chip. Itanium is a VLIW (Very Long Instruction Word) machine: it requires programmers (and their compilers) to specify instruction scheduling, which increases their work. However, compared to chips like the IXP, Itanium provides a means to get higher efficiency at lower programmer effort.

High / Low / Low: This applies to chips where you need a lot of effort just to get the code working. I can’t think of a good solid example here. Suggestions?

Low / High / High: We have been using architectures in this category for a while. Many common commercial processors fall in this category, e.g., the Intel Core 2 or Core i7 architectures. These chips employ power-hungry techniques like out-of-order execution to extract performance out of the software. Consequently, programmers can write sloppy code but still get high performance, as long as the customer is willing to pay for high power. These chips are found in our desktops and laptops today.

Low / High / Low: This is the case where performance is spent solely on programmer efficiency. Using scripting languages like Python produces these cases: programmer effort is low and performance is low, but power is high because the hardware keeps trying to improve the performance of this code.

Low / Low / High: Would be great, but this doesn’t usually happen on interesting workloads.

Low / Low / Low: There is a large market for these processors, any guesses? Your mobile devices have these chips in them by the dozens. These include the Intel Atom and also the ARM-based chips used by Apple, Qualcomm, etc. In these devices, performance requirements are low, so they trade performance for power. Programmer effort is low because they allow programmers to write code in high-level languages and use automated compilers for some best-effort optimizations.

I would love to hear what others have to say on this topic, especially some programmers. Do you think there are other factors that do (or should) influence computer architects’ decisions?

  34 Responses to “Three factors that influence CPU architecture”

  1. @FutureChips hiya, I’m sure there are lots of ways to do that [quantify programmer effort]. LOC per day is an old-fashioned way; I’d be interested to know what’s done currently.

  2. “@FutureChips: @Dr_Black Is there a method to quantify programmer effort?” -> I say no, it’s like trying to measure an artist painting

  3. @Dr_Black @futurechips cans of soda

  4. @GreyAreaUK lol. Thanks all for your lively comments. Even if we don’t solve #measuringprogrammereffort we get good coder jokes.

  5. Programmer effort really needs to be thought of on two levels. Just as the computer’s architecture divides the hardware from the software, the compiler divides the programmer from the architecture.

    Clearly, both abstractions can be quite leaky. Where the architecture shows through to the application programmer, like the Cell processor or a GPU, there is an impact on everyone who writes code for it.

    On the other hand, an arcane architecture which can be hidden by the compiler is really more of an engineering cost. This gives the designer another dimension to work in when making the power/performance trade-off.
