Thursday, December 15, 2011

Where do Profits come from?

This week's EconTalk Podcast: Munger on Profits, Entrepreneurship, and Storytelling

In this week's EconTalk, Munger and Roberts present and discuss several mini-stories about trading. They explore many angles of society's common, mistaken tendency to malign trade, and specifically to malign profit acquired solely through trade without actually "producing anything". The entire 'cast is excellent and well worth your time, whether you listen to it or read the transcript (also available at the link above).

I want to point out two statements from the podcast that were particularly awesome.

"The source of profits is correcting errors."

The whole quote is:
The source of profits is correcting errors. The economy around us is full of errors. And what I mean by errors is a maladjustment, a divergence between what we are actually producing and what we "should" produce in order to use the stuff that we have--our mental resources, physical capital, labor--to produce the highest possible level of satisfaction of what the public wants. -Munger 

If you chew on that for a bit you'll find it is an astonishingly deep insight. In economics there is a concept of optimality where supply has been balanced to exactly meet demand and price has been adjusted to exactly match costs. In other words, an optimal market has no profits. What we have, instead, in the real world, is a sub-optimal market. Resources have not been balanced optimally. There are gaps in the market where goods and services are not being put to their optimal use. If someone can discover these, they can connect the good or service with a more optimal use, benefitting both the producer and the consumer, and gaining some profit. All three parties benefit from the trader correcting an error in the market.

Now for the second awesome statement. First I'll paraphrase:

Trading as a more equitable means of maximizing 
individual satisfaction than 'goodwill'.

The second statement is from the last story Munger and Roberts discuss. It appears to be a true story about a British officer named R.A. Radford who had a background in economics and became a POW during WWII. His POW camp of 2400 inmates lived solely on rations provided by the Red Cross. The rations were identical for every inmate and identical month to month. From an economics point of view, the production of the entire closed society was static and egalitarian. This sets up a fascinating case study of how a society might organize when production is not an issue. Radford later wrote up his economic observations in an essay which is available online: The Economic Organisation of a P.O.W. Camp - R.A. Radford.

The whole quote, directly from Radford's essay is:
Very soon after capture people realized that it was both undesirable and unnecessary, in view of the limited size and the equality of supplies, to give away or to accept gifts of cigarettes or food. "Goodwill" developed into trading as a more equitable means of maximizing individual satisfaction. -Radford

What a wonderful statement! Wonderful, I suppose, if you are already pro-free-markets. If you think about it, though, it makes sense. If I give you the bits of my rations I don't want, I may benefit you some, but I don't know if you are the person in the camp who would benefit most from my gift. How should I answer that question? I need some way of finding out the value of my extra rations to every person in the camp. The problem is there are 2400 people in the camp, and each of them has some bit of extra rations they aren't using and would like to find the most valuable use for. How do you solve such a complicated problem, and do you really need the cooperative 'goodwill' of all 2400 people?

No, you just need trade.

Trade and, by implication, a price system are the best known way of solving such a complicated constraint-optimization problem. Every single trade benefits both parties, generating wealth for both. If it were not so, they wouldn't trade. If there are inefficiencies in the trades, if the overall economy is sub-optimal, if there are errors in the system, then traders can arise to correct those errors and bring the overall system closer to optimality, profiting themselves, yes, but also the people they trade with, and, via proof-by-mathematical-induction, everyone in the camp.
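
To make this concrete, here is a toy Ruby sketch of the camp's problem. Everything in it is made up for illustration: 2400 agents, identical two-good rations, random preferences, and one-for-one swaps that only happen when both sides gain. Total satisfaction never goes down, and after enough voluntary trades it ends up far higher than where the identical rations started.

    # Toy model: identical rations, differing preferences, voluntary one-for-one swaps.
    Agent = Struct.new(:rations, :values) do
      def utility
        rations.zip(values).map { |qty, val| qty * val }.reduce(:+)
      end
    end

    agents = Array.new(2400) { Agent.new([10, 10], [rand, rand]) }
    total  = ->(list) { list.map(&:utility).reduce(:+) }
    puts "satisfaction before trading: #{total.(agents).round(1)}"

    100_000.times do
      a, b = agents.sample(2)
      give, get = [0, 1].shuffle                  # a offers a unit of `give` for a unit of `get`
      next if a.rations[give] < 1 || b.rations[get] < 1
      next unless a.values[get] > a.values[give]  # a must come out ahead...
      next unless b.values[give] > b.values[get]  # ...and so must b, or no trade happens
      a.rations[give] -= 1; a.rations[get] += 1
      b.rations[get]  -= 1; b.rations[give] += 1
    end

    puts "satisfaction after trading:  #{total.(agents).round(1)}"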

Fascinating! This podcast, which was pretty light as EconTalk goes, really got me thinking.

NOTE: All of the above assumes the trades are without fraud or coercion. This appears to have been mostly true in the POW example, though if you read the essay, various interesting market failures did occur and were, for the most part, corrected by market forces. Roberts and Munger address this in their discussion as well.

Thursday, September 22, 2011

Parsing, Ruby and Babel-Bridge


Here is the video from a recent talk I did on parsing and the Babel-Bridge gem I wrote for generating parsers with ease in Ruby. I recommend viewing it full-screen in 480 or 720p so you can see the text in the live-demo. Enjoy!
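
If you don't have time for the video, here is the flavor of the problem the gem addresses. To be clear, this is not Babel-Bridge's API; it is just a hand-rolled recursive-descent parser, in plain Ruby, for a toy sum grammar. A parser-generator DSL lets you declare the grammar rules and skip writing this kind of plumbing yourself.

    # Hand-rolled recursive-descent parser for: expr := number ("+" number)*
    # This is the boilerplate a grammar DSL generates for you.
    class SumParser
      def parse(text)
        @text, @pos = text, 0
        value = expr
        raise "unexpected input at #{@pos}" unless @pos == @text.length
        value
      end

      private

      def expr
        total = number
        total += number while accept("+")
        total
      end

      def number
        match = /\A\d+/.match(@text[@pos..-1]) or raise "expected a number at #{@pos}"
        @pos += match[0].length
        match[0].to_i
      end

      def accept(token)
        return false unless @text[@pos, token.length] == token
        @pos += token.length
        true
      end
    end

    puts SumParser.new.parse("1+2+39")   # => 42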

Wednesday, September 21, 2011

The (Sad) State of Concurrency in Ruby 1.9.2

Why do we want concurrency? Because we want to speed things up!

What is making us slow?
  1. We are waiting for the CPU
  2. We are waiting for network I/O
  3. We are waiting for the user
  4. We are waiting for disk I/O
CPU-Bound
What do we do? We have other cores, let's make use of them!

Threading   The mainstream answer these days is spawning threads. However, if you try this in Ruby you will be in for a surprise. Though Ruby 1.9 uses OS threads, the interpreter has what is called a "global interpreter lock", or "GIL". This means that any time one thread is running Ruby code, no other thread can be running Ruby code. Effectively, in Ruby, even if you have multiple threads, you can use at most one core of your CPU at a time.
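
Here is a minimal sketch of what that looks like on MRI 1.9 (the fib function and the work sizes are arbitrary, and exact timings will vary by machine): splitting the same CPU-bound work across four threads finishes in roughly the same wall-clock time as doing it serially, because the GIL only lets one thread run Ruby code at a time.

    require 'benchmark'

    def fib(n)
      n < 2 ? n : fib(n - 1) + fib(n - 2)
    end

    Benchmark.bm(10) do |bm|
      bm.report("serial")   { 4.times { fib(27) } }
      bm.report("threaded") do
        threads = 4.times.map { Thread.new { fib(27) } }
        threads.each(&:join)   # no faster on MRI; the GIL serializes the Ruby code
      end
    end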

Threading also has all the common problems of race conditions, locks, synchronizing and general "thread safety". This can be a little intimidating to manage, but usually a clean efficient solution can be arrived at. The best idea is to share as little global data as possible. If you are writing a server, assign at most one thread per request and keep all the data for that request local and you are already 99% there.
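
For example, here is a minimal sketch of a thread-per-request echo server using only the standard library (the port number is arbitrary). Every piece of request state lives in the block, local to its own thread, so there is almost nothing shared to protect.

    require 'socket'

    server = TCPServer.new(8080)
    loop do
      client = server.accept            # blocks until a client connects
      Thread.new(client) do |conn|      # one thread per request
        request = conn.gets             # all of this state is local to the thread
        conn.puts("you said: #{request}")
        conn.close
      end
    end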

Notes:
  • Rubinius is said to be working on a version of their runtime that doesn't have a GIL and could therefore use multiple cores.
  • jRuby works today without a GIL and allows the full use of multiple cores.
Forking   Okay, so threading isn't really an answer. Instead of running multiple threads, how about running multiple processes? Generally, this can work. If you run more than one instance of the Ruby VM, you can use more than one core. But, there are a lot of limitations, some of them unique to Ruby.

Processes don't share memory. Once you fork a new process you can still read the state of your program from before the fork, but any changes you make in process B won't update process A and vice versa. Once you fork, you must coordinate your processes through some other means, usually pipes. This is a highly error-prone and tedious "API".
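
Here is a minimal sketch of that coordination (the message itself is just for illustration): after the fork, the parent and child share nothing except the pipe they set up beforehand.

    reader, writer = IO.pipe

    pid = fork do                  # the child starts life as a copy of the parent
      reader.close
      writer.puts("result computed in child process #{Process.pid}")
      writer.close
    end

    writer.close
    puts reader.read               # the parent blocks here until the child is done writing
    Process.wait(pid)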

On Linux and other Unix-based systems (including MacOS), forking a new process is usually very efficient. In the abstract, forking means cloning the entire process. In practice, though, the operating system has a nifty feature called "copy on write" (CoW): the two processes can share almost all of the memory that hasn't been written to since the fork. As soon as one process or the other writes to a piece of memory (usually a CPU "page" = 4k), that page is copied into the writing process. This means that when you fork a new process nothing needs to be copied at first, and only a few things need to be copied over time as you start altering things. This is not only CPU efficient, it is also memory efficient. Great, right? Well, in Ruby it's not so great. Ruby's garbage collector (1.9.x) writes to every object every time it runs. This means that the first time either process triggers garbage collection, the entire process's memory must be copied. In other words, Ruby loses all the benefits of CoW. If you are writing a large application that uses a lot of memory, forking only makes sense for very long-running processes because the overhead of forking is so high.

Notes:
  • Ruby Enterprise Edition patched Ruby 1.8.x to be CoW friendly. There is currently no Enterprise Edition for Ruby 1.9.x and forward support of REE seems to be in question.
  • Rubinius is supposed to be CoW friendly. I have not tested this.
  • jRuby, well, just use Threads. Forking may work, too, if Java allows it and is CoW friendly. Both seem likely, but I don't know.

Conclusion   If your app is CPU-bound, your best bet is to optimize your code. Using multiple cores in Ruby ranges from very hard to impossible unless you are willing to switch fully to jRuby.

Network I/O Bound
EventMachine   If you are writing some sort of server that responds to network requests, or if your app sends out network requests, it is highly likely that you are network I/O bound. In the server case, you'd like to be processing another request while you wait for the first one's input to arrive. In the client case, you'd like to be doing other things while waiting for the server to get back to you.

If your app is network I/O bound, there is a good solution in Ruby: EventMachine. All network I/O is "evented". When you fire off a network request, you pass in a block of what you want to do after the network request completes. The call returns immediately. You can then do other work while the network request processes and when it is done, the EventMachine "Reactor" (global event loop) will queue your block and execute it. Similarly if you are waiting for input, you can pass in a block of what to do once the input is ready, the call returns immediately and you can do other work while the input processes.
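
Here is a minimal sketch of the evented style with EventMachine (the port and the echo behavior are just placeholders). The reactor calls receive_data whenever input arrives; between callbacks it is free to service other connections and timers.

    require 'eventmachine'

    module Echo
      def receive_data(data)               # invoked by the reactor when input arrives
        send_data(data)                    # queues the write and returns immediately
        close_connection_after_writing
      end
    end

    EM.run do
      EM.start_server("0.0.0.0", 8081, Echo)
      EM.add_periodic_timer(1) { puts "the reactor is free to do other work" }
    end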

Eventing has the nifty advantage of being concurrent without having to be "thread-safe". It is known as "cooperative multitasking"; threading is "preemptive multitasking". The downside of eventing is you have to pass around a lot of blocks that are "deferred" to run at some later point. Code execution order can get confusing and your code size goes up significantly.

Another thing to be aware of is eventing only runs on one thread. Even if you are using jRuby, you will still only be using one core. EventMachine has the idea of "offline" processes which run on separate threads from a thread pool. This gets you halfway there, but keep in mind that your Reactor, the global event loop, is still single-threaded. At some point that one thread running the event loop is going to be a bottleneck no matter how many offline threads you have running. Obviously if you are running offline threads you have to think about thread-safety again, too.

Fibers   Fibers are a cool new technology in Ruby 1.9. They are basically threads, except they actually share just one OS thread and they only multitask when you tell them to. This makes thread-safety trivial, much like eventing, PLUS you can still write your code in a linear fashion. A mix of eventing and fibers basically gets you all the simplicity of ignoring thread-safety with little of the conceptual or code overhead of pure eventing. Sounding pretty good, right?
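
A tiny sketch of the cooperative hand-off (the prints are just markers): nothing is preempted, control only moves where you put a resume or a yield, and the code in between still reads top to bottom.

    fiber = Fiber.new do
      puts "fiber: started some work"
      Fiber.yield                    # explicitly hand control back to the caller
      puts "fiber: finished the work"
    end

    fiber.resume                     # runs until the Fiber.yield
    puts "main: other work happens here, no locks required"
    fiber.resume                     # picks up right where the fiber left off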

Conclusion   Eventing, optionally with Fibers, can solve your network-IO-bound situation. It is a clean framework that isn't as complicated to write for as a pure threading solution. The key benefit over threading is simpler concurrency and allegedly higher performance on a single core. The downside is you are limited to one core.

Note   Threading works well for network-bound processes, too. Although Ruby has the GIL, the GIL is released whenever Ruby makes calls to the OS, including networking calls. This means that while a thread is waiting for network IO, another Ruby thread can be running on the core. Threads solve the network-bound problem if you can manage the thread safety.
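
A small sketch of that (the URLs are placeholders): each thread spends most of its time blocked in an OS call, the GIL is released while it waits, and the requests overlap even on MRI.

    require 'net/http'
    require 'uri'

    urls = %w[http://example.com/ http://example.org/]

    threads = urls.map do |url|
      Thread.new do
        # Blocked in the OS waiting for the response; the GIL is released,
        # so the other request proceeds concurrently.
        Net::HTTP.get_response(URI.parse(url)).code
      end
    end

    puts threads.map(&:value).inspect   # e.g. ["200", "200"]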

Waiting for the User
There aren't a lot of user interfaces written in Ruby, but concurrency is a common problem in GUI design. Generally, pure Eventing works great for this. Inputs, in the form of user mouse and keyboard interactions, can trigger events, and outputs, in the form of screen updates, are usually quick enough that they don't need their own threads. I'd be curious to hear from anyone who has practical experience with this in Ruby, but I suspect this isn't a major problem. If your interface is properly evented, you are most likely to end up CPU-bound rather than user-bound.

Disk I/O Bound
What if your app is disk IO bound? I'm writing a store server in Ruby called Monotable. Ideally, under maximum load, it should keep N disks reading and writing at their max capacity at all times. In principle, Eventing and Fibers should work here. Whenever I read or write to disk, I could suspend execution of the current Fiber and let other events fire. When the read/write completes, the system could queue my fiber as an event and it would continue on the main thread when its turn comes up.

Unfortunately, neither EventMachine nor any of the available Fiber-aware gems support evented disk IO. Looking at the Ruby IO library, it appears there is no way to asynchronously read or write and then be notified, or even test later, whether the read/write has completed. I've looked at IO#read_nonblock and IO#write_nonblock, and it is not clear that they are answers to this problem.

Conclusion   Only threading can solve the disk-IO-bound problem. Since the GIL is released during any file IO, other threads can be doing work while the first is blocked. It seems the best solution will be to use EventMachine for network IO and defer any disk IO work to threads.
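
EventMachine's defer gives you that thread pool directly. A minimal sketch (the file path is just a placeholder): the read runs on a pool thread while the reactor keeps servicing events, and the callback runs back on the reactor thread when the read finishes.

    require 'eventmachine'

    EM.run do
      read_op = proc { File.read("/tmp/some_large_file") }   # runs on a pool thread
      on_done = proc do |contents|                           # runs back on the reactor thread
        puts "read #{contents.bytesize} bytes without blocking the reactor"
        EM.stop
      end
      EM.defer(read_op, on_done)

      EM.add_periodic_timer(0.1) { puts "reactor is still responsive" }
    end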

Back to Being CPU Bound...
If you are network IO bound, you can use EventMachine and Fibers to speed you up, but under enough load you will be CPU bound. For disk IO bottlenecks, threads are your best bet, and they will speed you up, but you will ultimately be CPU bound here as well. Unfortunately, Ruby doesn't help us much with this. Rubinius is looking promising, and jRuby is here today with multi-core support, but if you want your code to work with the standard, mainstream Ruby 1.9.x you are mostly out of luck.

This is an incomplete exploration of the problem. I'd love to hear if people discover more on this topic.

This is written 2011-09-21. My current version of Ruby is ruby 1.9.2p290 (2011-07-09 revision 32553) [x86_64-linux].

Other Interpreters
Gems dealing with non-blocking IO
GIL - Global Interpreter Lock / Global VM Lock
No one has an answer to the non-blocking Disk IO question

Sunday, June 19, 2011

Hayek's "The Use of Knowledge in Society" and How I abridged the 1945 Free-Market Article that inspired Wikipedia

Quick Links:

Wikipedia

"'The Use of Knowledge in Society' by Austrian School economist Friedrich von Hayek was central to [Jimmy Wales'] thinking about how to manage Wikipedia." (source)

How is it that one of the most successful egalitarian projects of our time was inspired by the work of the free-market economist Friedrich Hayek? To some it may seem like a contradiction. However, one could also say that Wikipedia is an excellent example of free-market principles at work. After all, Wikipedia has been so successful precisely because the foundation has always attempted to minimize its central authority, instead encouraging the 'market' of contributors and consumers to find their own, organic solutions.

What is in this essay that helped inspire Wikipedia? Who is Hayek? Is there an accessible, abridged version of the essay?

… and will there be a music video? (yes!)


Hayek vs Keynes Rap Anthem

In 2010 I ran across a piece on NPR about a rap video that pitted Keynes against Hayek (listen on NPR). I was surprised at the video's high production value, but what was particularly great was how it gave both economists fair and equal representation. If you agree with me after watching the video that Hayek wins the argument, it will be based on the merit of his arguments in this honest intellectual debate set to the background of a surprisingly compelling rap video.

The video was a tremendous success. It now has over 5.5 million views on YouTube (2016), has been translated into a dozen languages, and has led to an equally great sequel. Best of all, since it is a fair representation of both theories, it has been used in schools as a teaching aid. If you haven't already, watch the video now. You won't regret it.

    EconTalk with Russ Roberts

    After watching the video I looked into the podcast by the video's co-creator, economist Russ Roberts. Much like the video itself, the EconTalk podcast has a free-market bias, but Russ Roberts is consistently honest about his own biases and presents the opposing view in a respectful light. I have been hooked on EconTalk ever since, and have learned a lot.

    In January 2011, Roberts interviewed Bruce Caldwell about Hayek’s life and ideas (listen on EconTalk). At the end of the talk they recommended that the best place to start reading Hayek is a short essay written in 1945 called “The Use of Knowledge in Society.” Though Hayek's ideas are excellent, lucid, and well thought through, I have found his writing verbose and often inscrutable. Having read some of Hayek’s books, including “The Road to Serfdom,” “The Constitution of Liberty,” and “The Fatal Conceit,” I found the idea of a “short work” that concisely presented his ideas very appealing. I was hoping I might finally have a piece of Hayek’s work I could recommend to people.


    Abridging a Great Work

    After reading “The Use of Knowledge in Society,” I agree with Roberts and Caldwell. It is an excellent, stand-alone piece that gets at the heart of the most fundamental question of economics: How is it possible for a society as complex and diverse as ours to function and thrive?

    The essay is short, for Hayek. However, it is still not very accessible to the average reader. It isn’t that the material is inaccessible; Hayek did an excellent job of presenting his case in a way that doesn’t require any special knowledge. It is his prose that requires a lot of work to understand.

    I couldn’t help but think, with full respect to Hayek, that if only the prose could be reworked, this essay would be more accessible and could reach a broader audience. With that thought I decided to try my hand at writing an abridged version of his article. It took me many months, spending an hour here and there, but I am pleased with the result. I believe I managed to cut the length of the article almost in half while maintaining the clarity and intent of Hayek’s original work.

    Here are the links again. Enjoy! I welcome feedback.


    Tuesday, March 29, 2011

    Amdahl's Law is Wrong

    Amdahl's Law is True

    Amdahl’s law ( http://en.wikipedia.org/wiki/Amdahl's_law ) is used to determine the maximum speedup that can be achieved by taking a serial algorithm and parallelizing it. For any given algorithm there is usually some portion that is fundamentally serial. Amdahl’s law states simply that if 1/Xth of the time in your serial algorithm is spent doing fundamentally serial tasks, then no matter how fast you make the parallel portion, the overall algorithm can be sped up at most X times. For example, if 1/3rd of your algorithm were fundamentally serial, then parallel computation could make it at most 3 times faster.
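
    To put numbers to that, here is a quick Ruby sketch of the formula (the serial fraction and worker counts are just examples). With a serial fraction s and N parallel workers, the speedup is 1 / (s + (1 - s) / N), which can never exceed 1 / s = X no matter how large N gets.

        # Speedup from Amdahl's law: serial fraction s, N parallel workers.
        def speedup(s, n)
          1.0 / (s + (1.0 - s) / n)
        end

        s = 1.0 / 3                            # one third of the work is serial
        [2, 8, 64, 1_000_000].each do |n|
          printf("N = %-9d speedup = %.3f\n", n, speedup(s, n))
        end
        # the speedup approaches, but never exceeds, 1 / s = 3.0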

    Amdahl's law is true. If you are trying to speed up a specific algorithm with a non-trivial non-parallelizable component, Amdahl's limitation is a hard limit. Further, many algorithms really do have non-trivial non-parallelizable components.

    A Parallel Future

    It is clear, today, that the only option left to increase the speed of our algorithms is more and more parallel computation. And yet Amdahl’s law puts hard limits on the maximum speedup we can possibly achieve. Have we hit a hard, essential limit to computing?

    Amdahl's Law is Wrong

    When faced with such an irrefutable law of nature, what do we do? Isn't this an essential limitation that all programs are subject to? Let's take a closer look at exactly what Amdahl's law really says:

    • When speeding up a single serial algorithm with parallel processing, the fundamentally serial 1/Xth limits the maximum speedup to X-times faster.

    Problem #1: This only applies to a single serial algorithm. We are transitioning from a world where everything we ever did was in a single, serial algorithm, so our natural instinct is that Amdahl's law applies to everything. It does not. There are many applications where it is desirable to run many shared-little or shared-nothing tasks simultaneously. The classic example is a web server where each request is processed by a single serial algorithm, but any number of simultaneous requests can be running at the same time. To speed up your web site, double the number of servers and double your throughput*.

    Problem #2: When we say the "fundamentally serial 1/Xth" of your algorithm limits your parallel speedup, how big is X? When Amdahl's law is explained, X is commonly presented as a small integer - 3 or 5 - limiting the parallel speedup to 3x or 5x. But what happens if X is one million? Or one billion, or more? Then Amdahl’s law suddenly seems much less limiting. Many algorithms involve some amount of setup followed by the same kind of processing on each element in the data-set. These are often called "embarrassingly parallel" algorithms. With yesterday's computers, where even a gig of RAM was a lot, managing data-sets of more than a few million elements was uncommon. But today, with cloud computing on massive arrays of commodity hardware, data-sets with trillions of elements are common.

    Amdahl's law is "wrong" in that it is always presented as a serious limitation on parallel computing. It is a serious limitation, but only in converting old code into a parallel framework. Instead, if viewed upside down, Amdahl's law can be presented as a guiding concept for discovering new applications now possible with modern parallel processing.

    If your algorithm has a large chunk of fundamentally serial computation, instead of being sad at how little speedup you can get from a straight port to a parallel framework, ask yourself what cool new problems you could solve if you could run your serial computation many times - try billions of times - at the same time. There is only so much faster you might be able to run a spell-checker over a given document, but what if you could run it over every document in existence? You'd be able to generate a database that is dramatically better at spelling-correction suggestions and knows all words (i.e. Google Suggest). This is a dramatic improvement in the spell-checker's functionality that Amdahl's-law-thinking prematurely squashes.

    If you still can't figure out how to get more out of running your program many times in parallel, how about running it over larger data? If the document were broken up into paragraphs, we could run one copy of the spell-checker on each paragraph. Then, if we double the length of the document and double the number of parallel processes, we can still spell-check it in the same amount of time.

    The World is Embarrassingly Parallel

    Amdahl's law forces us to look at our feet, trying to speed up our old, serial code. Let it go! Look up! Look around! The world is massively, excessively, extraordinarily embarrassingly parallel.

    There are billions of people in the world, all acting independently and in parallel, cooperating to form communities, companies, social networks and nations. The human body consists of 100 trillion bacterial and human cells (10x more bacteria than human cells! http://en.wikipedia.org/wiki/Human_flora ). Every cell is a complicated chemical processor, and all of them together are running the massively parallel program that is a human being. And that’s just the human biota. The biosphere has an unimaginable number of creatures running the parallel program which is planet Earth. For starters, there are an estimated 10 quintillion (10 billion billion) insects (http://www.si.edu/Encyclopedia_SI/nmnh/buginfo/bugnos.htm).

    Practically speaking, every-day embarrassingly parallel computing tasks are everywhere:

    • 2D and 3D graphics
    • Simulation including big science like physics and biology, as well as video game physics
    • Search
    • Data-mining
    • Sensor and signal processing including audio and video
    • NP-complete problems including most of the difficult and interesting optimization problems.
    • Human networks that can be modeled with sparse graphs including social networks and the world-wide-web.
    • etc…

    Serious parallel processing is already available on the desktop. GPUs today can have 100s of cores. With cloud computing services, massive computing clusters can be fired up at will by anyone. We are in the age of massive parallel processing. Instead of being scared away from it by Amdahl’s Law, we should rejoice! The opportunities for new applications and novel re-inventions of old applications are vast, and Amdahl’s Law, flipped on its head, can be a guide to discovering them.

    Wednesday, March 9, 2011

    EVA: The Design Principle of Essence Versus Artifact

    2017 Update: It took me 7 years to realize I was one tick away from a great acronym for this principle. Instead of "Essence and Artifact" (EAA - awkward!), I realized the inherent tension between essence and artifact is better expressed as Essence VERSUS Artifact, or EVA. Now, back to what I wrote about it in 2011. -SBD

    I’m a designer and builder, and I highly value well-designed things. I both enjoy well-designed things made by other people, and I enjoy making well-designed things myself. I spend a lot of time thinking about what it means to be well-designed. There are many ways to measure the quality of design including aesthetics, form, function, and cost, but there is one design principle I find to be most important. I call it: Essence and Artifact.

    What is “Essence and Artifact”?


    There are two kinds of constraints on design. The universe we live in forces physical and logical constraints on what is possible. These essential constraints are inviolate. We can't change the charge of an electron any more than we can change the value of π. These constraints define an upper bound on what is possible when designing a solution to a problem. This upper bound is what I call the “essential solution” to a problem. It is often unachievable in practice, but it is the goal any designer strives for and the benchmark by which all solutions are measured.

    In addition to the essential constraints of the universe, we are also constrained by historical artifacts. Manufacturing may settle on a standard, and though the standard itself could easily be rewritten, retooling the infrastructure built around it may be an economic impossibility. Similarly, virtual architectures, such as the Intel x86 instruction set, are so economically entrenched by all the software that depends on them that they can't practically be changed. In addition to constraining our options, artifacts constrain our thinking.

    The Essence and Artifact Design Principle


    Understand which constraints are essential and which are artificial. Once the artifacts have been identified, discard as many of them as possible so that the design can approach the essential solution to the problem.

    Keep in mind that every actual thing in existence is an Artifact and is in some way non-essential. However, if our designs approach the upper bound of the essential solution then by definition there is little room to improve upon them. In other words, essential solutions are also timeless, and I think “timeless” is an excellent measure of good design.

    Future Posts



    In future posts I hope to dive deeper into some of the implications of E&A. Below are a few of the key attributes of each:

    Essence is:
    ·      Timeless
    ·      Simple
    ·      Powerful (Maximally applicable)
    ·      Modular
    ·      Focused
    ·      Theoretical

    Artifacts are:
    ·      Historical
    ·      Complex
    ·      Constrained
    ·      Difficult
    ·      Narrowly Focused or Unfocused
    ·      Actual