Jun 06, 2011
 

There are two types of people in this world: those who use cloud computing knowingly and those who use it unknowingly. In fact, the latter often use the cloud more. Apple’s shift to the cloud today is an important landmark. While most people are discussing how it will impact consumers’ lives, I can’t stop thinking about how it will change computer science and the work lives of the “geeks.” Some old skills will become irrelevant and some brand new fields will emerge. In this post, I present my take on how computer science and engineering job descriptions will be impacted by this change.

Traditional computers were designed for two markets: backend servers and client desktops. Many companies followed a one-size-fits-all policy, selling the same hardware (I mean the processors) to both markets. This policy worked because the power/performance requirements were about the same for both segments. Laptops separated these markets by putting tighter power requirements on the client, which led to the advent of mobile-specific processing cores like the Pentium M. The recent cell phone and cloud revolution has pushed this client-server separation even further. Now the dominant client platforms are low-power laptops, cell phones, and tablets. This shift impacts mainstream computer science profoundly. I have tried to organize my thoughts into the sections below.

Impact on Chip Architecture

Architects will need to design chips that balance power-efficiency and performance. I think the following features will be in every chip, and all hardware guys will need to know them:

1. Clock gating and data gating. Turning off units that are not in use will become essential for future chips.

2. Special purpose units. Special purpose hardware units can do work more energy efficiently and faster. They are already “in.” Latest chips include encryption engines and video decoders. I expect to see more of these popping up on the chips.

3. Wider Single-Instruction Multiple Data (SIMD) units. SIMD saves energy by sharing the fetch and retire logic among multiple operations. Vector and SIMD units have proven to save energy in many performance-hungry market segments, e.g., graphics. Both ARM and Intel are adding wider SIMD units, and that trend will continue.

4. Multi-core. Multi-cores save energy if the programmer picks up the burden of finding the parallelism in a problem. I think that cores will become leaner and increase in number. For code that cannot be parallelized, we can expect heterogeneous chips with some fast and some slow cores. Intel’s turbo mode is already a step in this direction.

Impact on Platform Components

I actually feel that platform hardware will become less relevant because the work will move to the cloud. Platforms will become dumb terminals. The innovation in platform design, I think, will be in leveraging the cloud as much as possible while providing excellent GUI and HCI interfaces.

Impact on Software

I think the first job of new software will be to integrate with the cloud. Most software already does that via RESTful APIs, and we will obviously see more and more of it. However, I think programmers will need to do a lot more analysis to decide what runs in the cloud and what runs locally. Naïve decisions will surely worsen the user experience by becoming network bound or client bound. Those who have done some real app design will probably agree that even simple decisions, like whether to render a list of elements on the client or the server, can change the user experience a lot on low-performance devices like the iPhone. I think it is important for programmers to figure out ways to measure and analyze these performance trade-offs.

The second burden on software will be to use the special purpose units and SIMD units that the hardware will provide. IMO the field of software that will be most pivotal in this area is compiler design. Programmers can do some of the work by hand, but they will eventually turn to the compiler to find the best use of the special purpose units.

The third task will be to write efficient code. I have discussed in my earlier posts on this topic that knowing just a little bit of hardware can make programs much more efficient. iCloud just makes my claim even more relevant. Writing better code will increase speed and reduce battery consumption.

The fourth task is to get better at parallel programming. While I side with the software guys that parallel programming is hard, I do feel that this new era requires programmers to write parallel code. Multi-core promises energy-efficiency, but only if programmers deliver the parallelism. With energy-efficiency becoming the top metric, programmers will be even more motivated to write parallel code.

Impact on Network Engineering

Last, but the most important, is the impact on the field of networking. I think networking flourishes the most in the cloud. The demand for reliable, high-bandwidth, low-latency networks grows exponentially as work gets split between the client and the cloud. I am not a networking expert, but I do know that this is where some of the most important research and development will be. Faster routers, high-frequency communications, and better error checking will all be pivotal. In fact, an unexpected “miracle” here could make a lot of the above discussion irrelevant.

Security

Caleb, a regular reader of Future Chips, adds the following insight: “I am thinking particularly of security and privacy, which are both important yet difficult problems in the cloud. Solving important, difficult problems isn’t generally very profitable, but I expect we’ll see some creative solutions with lasting usefulness.”

Security is a major challenge in the cloud; it is perhaps the biggest problem cloud computing introduces on the server side. In fact, if we can solve network latency and security, we can build even thinner client platforms. Needless to say, research in computer architecture, programming languages, and operating systems needs to focus on this problem. By the way, I am not just talking about security in terms of hacking to steal credit card numbers, but also denial-of-service attacks on the cloud.

Conclusions

I actually feel that the demand for good computer scientists and engineers will only increase as a result of this change. It introduces more opportunities to do better design and gain an edge over the competition. That translates into high demand for good engineers, which in turn can impact our paychecks! Comments?

  9 Responses to “iOS 5 goes to the cloud. How does it impact us computer scientists?”

  1. In general I agree with your assessment, but I think it’s important to remember that “the cloud” is not a new concept. The quote “this year’s re-invention of the mainframe is called ‘the cloud’” sums it up nicely for me. Old idea, new (mostly mobile) platform.

    The hardware constraints that made mainframes and thin clients useful were eventually surmounted–the iPad 2 being faster than a Cray 2 is a good example. If history is any indication, the constraints that make cloud computing useful will likely become moot in a few years, probably through advances in networking, although I’m watching for advances in storage and battery technology as well.

    In the meantime, I’m hoping the autonomous nature of the networking technology on which the cloud is built will force the development of well-designed protocols and robust design patterns that can be utilized elsewhere when the Next Big Thing happens. I’m thinking particularly of security and privacy, which are both important yet difficult problems in the cloud. Solving important, difficult problems isn’t generally very profitable, but I expect we’ll see some creative solutions with lasting usefulness.

    • Hey Caleb,

      Thanks for your readership.

      I agree that the cloud isn’t new, but I think iPhone/iCloud (and their likes) are going to make it mainstream like never before. I think the mainframes disappeared because people wanted ubiquitous computing, and it wasn’t possible in the mainframe era because of the lack of wireless internet. I do think clouds have a much better chance of standing the test of time.

      I really liked your security insight. I should have thought of that. All credit to you for the update in the article (see above).

      Aater

      • Hi Aater,

        Glad my comments were useful :) I would like to mention that I’m talking about financial profits when I say “not very profitable.” I wasn’t sure that was completely clear in my original comment.

        I think you’re probably correct about mobile devices making cloud computing a permanent fixture in the technoscape, mostly because I expect the tradition of cloud-based solutions for mobile to last long after the technical limitations are gone. Some traditions in technology are quite long-lasting, usually for economic reasons. Private IPv4 networks come to mind–we’ve had a better solution for at least a decade. Speaking of which, maybe the increased use of cloud computing on mobile devices will be the necessary incentive for ISPs to start supporting IPv6. That would be nifty.

        • Hey Caleb,

          Thanks for clarifying. I had figured that you were discussing finances. In fact, a lot of these “technical” discussions are really about finances because IMO the Dollar is the only metric in our field :-)

          By the way, now that I think about it more, doesn’t solving security promise large financial incentives in the cloud era? If you can build a provably secure cloud, banks and other sensitive organizations will be happy to pay you billions.

          I agree with you that clouds will stay even after the limitations are gone. In fact, I can take it a step further and say that perhaps the limitations will stay because there will be financial incentives to keep them around. The cloud allows an ecosystem for charging consumers a per-use fee. It is a win-win situation because it helps service providers as well as users.

          About IPv6, I totally see that the cloud will motivate the transition. The financial motive will fix it before we know it :-)

          Aater

  2. Good stuff mate… Keep it coming.

  3. [...] from Industry and Academic folks. I personally think it does change my job description a lot (http://www.futurechips.org/thoug…). What's your take?  Add [...]

  4. Meh – unbacked speculation, merest opinion that’s already conventional wisdom, worth only a comment on Reddit (not even a comment on HN).

    • Hey Negative Guy,

      I appreciate you taking the time to read and comment.

      I understand you have your own opinion about the article and I appreciate your sentiment.

      However, I do not agree with you calling it “unbacked speculation.” I think it’s more an educated guess based on how engineering and economics work. The challenges will go up, demand will increase, and salaries will increase. Some simple evidence: please take a look at the salaries that Facebook and Google now pay compared to last year. I speculate the trend will continue.

      If you think it is unbacked, can you please be specific about the part that is unbacked speculation? A healthy discussion on this topic may be useful for the community.

      Aater
