Tuesday, March 29, 2011

Amdahl's Law is Wrong

Amdahl's Law is True

Amdahl’s law ( http://en.wikipedia.org/wiki/Amdahl's_law ) is used to determine the maximum speedup that can be achieved by taking a serial algorithm and parallelizing it. For any given algorithm there is usually some portion that is fundamentally serial. Amdahl’s law states simply that if 1/Xth of the time in your serial algorithm is spent doing fundamentally serial tasks, then no matter how fast you make the parallel portion, the overall algorithm can only be sped up at most X times. For example, if 1/3rd of your algorithm were fundamentally serial, then parallel computation could make it at most 3 times faster.
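
To make the arithmetic concrete, here is a minimal sketch (mine, not from the original post) of the bound in Python; `amdahl_speedup` and its arguments are names invented for illustration:

```python
# A minimal sketch of Amdahl's law: the best overall speedup you can get
# when only the parallel portion of a job is accelerated.

def amdahl_speedup(serial_fraction, processors):
    """Overall speedup when 'serial_fraction' of the work cannot be parallelized."""
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

# If 1/3 of the work is serial, even a million processors top out just below 3x.
for p in (2, 8, 64, 1024, 1_000_000):
    print(p, round(amdahl_speedup(1/3, p), 3))
```

No matter how large the processor count grows, the result approaches but never exceeds 3, which is exactly the X-times limit described above.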

Amdahl's law is true. If you are trying to speed up a specific algorithm with a non-trivial non-parallelizable component, Amdahl's limitation is a hard limit. Further, many algorithms really do have non-trivial non-parallelizable components.

A Parallel Future

It is clear, today, that the only option left to increase the speed of our algorithms is more and more parallel computation. And yet Amdahl’s law puts hard limits on the maximum speedup we can possibly achieve. Have we hit a hard, essential limit to computing?

Amdahl's Law is Wrong

When faced with such an irrefutable law of nature, what do we do? Isn't this an essential limitation that all programs are subject to? Let's take a closer look at exactly what Amdahl's law really says:

  • When speeding up a single serial algorithm with parallel processing, the fundamentally serial 1/Xth limits the maximum speedup to X-times faster.

Problem #1: This only applies to single serial algorithms. We are transitioning from a world where everything we ever did was in a single, serial algorithm, so our natural instinct is that Amdahl's law applies to everything. It does not. There are many applications where it is desirable to run many shared-little or shared-nothing tasks simultaneously. The classic example is a web server where each request is processed by a single serial algorithm, but any number of simultaneous requests can be running at the same time. To speed up your web-site, double the number of servers and double your throughput*.

Problem #2: When we say the "fundamentally serial 1/Xth" of your algorithm limits your parallel speedup, how much is X? When Amdahl's law is explained, X is commonly presented as a small integer, say 3 or 5, limiting the parallel speedup to 3x or 5x. But what happens if X is one million? Or one billion or more? Then Amdahl’s law suddenly seems much less limiting. Many algorithms involve some amount of setup and then doing the same kind of processing on each element in the data-set. These are often called "embarrassingly parallel" algorithms. With yesterday's computers, where even a gig of RAM was a lot, managing data-sets of more than a few million elements was uncommon. But today, with cloud computing on massive arrays of commodity hardware, data-sets with trillions of elements are common.
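
As a rough illustration of that shape - a little serial setup followed by identical, independent work on every element - here is a hedged Python sketch (the per-element function and the data are placeholders of my own, not anything from the post):

```python
# A sketch of an "embarrassingly parallel" job: a tiny serial setup,
# then the same independent computation applied to every element.
from multiprocessing import Pool

def process_element(record):
    # Placeholder for the real per-element work (parse, transform, score, ...).
    # No element depends on any other, so they can all run at once.
    return record * record

if __name__ == "__main__":
    records = range(1_000_000)                  # serial setup: build the dataset
    with Pool() as pool:                        # one worker per available core
        results = pool.map(process_element, records, chunksize=10_000)
    print(len(results))
```

Here the "fundamentally serial 1/Xth" is just the setup and the final collection of results, so X is enormous and the speedup scales almost linearly with the number of cores.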

Amdahl's law is "wrong" in that it is always presented as a serious limitation on parallel computing. It is a serious limitation, but only in converting old code into a parallel framework. Instead, if viewed upside down, Amdahl's law can be presented as a guiding concept for discovering new applications now possible with modern parallel processing.

If your algorithm has a large chunk of fundamentally serial computation, instead of being sad at how little speedup you can get from a straight port into a parallel algorithm, ask yourself what cool new problems you could solve if you could run your serial computation many times - try billions of times - at the same time. There is only so much faster you might be able to run a spell-checker over a given document, but what if you could run it over every document in existence? You'd be able to generate a database that is dramatically better at spelling correction suggestions and knows all words (i.e. Google Suggests). This is a dramatic improvement in the spell-checker's functionality that Amdahl's-law-thinking prematurely squashes.

If you still can't figure out how to get more out of running your program many times in parallel, how about running it over larger data? If our document were broken up into paragraphs, we could run one copy of the spell-checker on each paragraph. Then, if we double the length of the document and double the number of parallel processes, we can still spell-check it in the same amount of time.
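
A hedged sketch of that paragraph-splitting idea, assuming some serial spell-checking routine; the `spell_check` function and toy word list below are stand-ins I invented, not a real library:

```python
# Split a document into paragraphs and spell-check them in parallel.
from concurrent.futures import ProcessPoolExecutor

KNOWN_WORDS = {"the", "quick", "brown", "fox"}   # toy dictionary for the sketch

def spell_check(paragraph):
    # Stand-in for a real, serial spell-checker run on one paragraph.
    return [word for word in paragraph.split() if word.lower() not in KNOWN_WORDS]

def check_document(text):
    paragraphs = text.split("\n\n")              # one independent task per paragraph
    with ProcessPoolExecutor() as pool:          # more paragraphs? add more workers
        return list(pool.map(spell_check, paragraphs))

if __name__ == "__main__":
    print(check_document("the quick brown fox\n\nteh quick brown fx"))
```

Double the number of paragraphs and double the number of workers, and the wall-clock time stays roughly the same.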

The World is Embarrassingly Parallel

Amdahl's law forces us to look at our feet, trying to speed up our old, serial code. Let it go! Look up! Look around! The world is massively, excessively, extraordinarily embarrassingly parallel.

There are billions of people in the world, all acting independently and in parallel, cooperating to form communities, companies, social networks and nations. The human body consists of 100 trillion bacteria and human cells (10x more bacteria than human cells! http://en.wikipedia.org/wiki/Human_flora ). Every cell is a complicated chemical processor, and all of them are running the massively parallel program that is a human being. And that’s just the human biota. The biosphere has an unimaginable number of creatures running the parallel program which is planet Earth. For starters, there are an estimated 10 quintillion (10 billion billion) insects (http://www.si.edu/Encyclopedia_SI/nmnh/buginfo/bugnos.htm).

Practically speaking, everyday embarrassingly parallel computing tasks are everywhere:

  • 2D and 3D graphics
  • Simulation including big science like physics and biology, as well as video game physics
  • Search
  • Data-mining
  • Sensor and signal processing including audio and video
  • NP-complete problems including most of the difficult and interesting optimization problems.
  • Human networks that can be modeled with sparse graphs including social networks and the world-wide-web.
  • etc…

Serious parallel processing is already available on the desktop. GPUs today can have 100s of cores. With cloud computing services, massive computing clusters can be fired up at will by anyone. We are in the age of massive parallel processing. Instead of being scared away from it by Amdahl’s Law, we should rejoice! The opportunities for new applications and novel re-inventions of old applications are vast, and Amdahl’s Law, flipped on its head, can be a guide to discovering them.

Wednesday, March 9, 2011

EVA: The Design Principle of Essence Versus Artifact

2017 Update: It took me 7 years to realize I was one tick away from a great acronym for this principle. Instead of "Essence and Artifact" (EAA - awkward!), I realized the inherent tension between essence and artifact is better expressed as Essence VERSUS Artifact, or EVA. Now, back to what I wrote about it in 2011. -SBD

I’m a designer and builder, and I highly value well-designed things. I both enjoy well-designed things made by other people, and I enjoy making well-designed things myself. I spend a lot of time thinking about what it means to be well-designed. There are many ways to measure the quality of design including aesthetics, form, function, and cost, but there is one design principle I find to be most important. I call it: Essence and Artifact.

What is “Essence and Artifact”?

There are two kinds of constraints on design. The universe we live in forces physical and logical constraints on what is possible. These essential constraints are inviolate. We can't change the charge of an electron any more than we can change the value of π. These constraints define an upper bound on what is possible when designing a solution to a problem. This upper bound is what I call the “essential solution” to a problem. It is often unachievable in practice, but it is the goal any designer strives for and the benchmark by which all solutions are measured.

In addition to the essential constraints of the universe, we are also constrained by historical artifacts. Manufacturing may settle on a standard, and though that standard could easily be rewritten, retooling the infrastructure based on it may become an economic impossibility. Similarly, virtual architectures, such as the Intel x86 instruction set, are so economically entrenched by all the software that depends on them that they can't be practically changed. In addition to constraining our options, artifacts constrain our thinking.

The Essence and Artifact Design Principle

Understand which constraints are essential and which are artificial. Once the artifacts have been identified, discard as many of them as possible so the design can approach the essential solution to the problem.

Keep in mind that every actual thing in existence is an Artifact and is in some way non-essential. However, if our designs approach the upper bound of the essential solution then by definition there is little room to improve upon them. In other words, essential solutions are also timeless, and I think “timeless” is an excellent measure of good design.

Future Posts

In future posts I hope to dive deeper into some of the implications of E&A. Below are a few of the key attributes of each:

Essence is:
  • Timeless
  • Simple
  • Powerful (maximally applicable)
  • Modular
  • Focused
  • Theoretical

Artifacts are:
  • Historical
  • Complex
  • Constrained
  • Difficult
  • Narrowly focused or unfocused
  • Actual