
Messages and Transactions, Part 1: Erlang and its Discontents


With the rise of multi-core chips, everyone’s talking about how best to do concurrency. I think the two most interesting emerging paradigms are software memory transactions and message-based programming. This two-post series is about these two paradigms.

(Cedric Beust’s recent post about Erlang inspired me to accelerate this post a bit. Soon: the final Conscientious Software post.)

Message-oriented programming environments are structured around the dictum that all state is decomposed into modular elements that have no direct interaction. All state is local in a message-oriented system. Each stateful entity — processes in Erlang, or vats in E, for example — interacts with other stateful entities solely through asynchronous messaging. Entities can send messages to other entities, and receive messages from other entities. Message receipt happens one at a time; each entity is essentially a single-threaded message loop. Therefore, concurrency conflicts as such cannot happen, because no two messages can ever be operating on the same state concurrently.
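
To make that concrete, here is a minimal sketch of such an entity in Erlang (a toy counter of my own devising, not an example from any of the papers discussed here):

    -module(counter).
    -export([start/0]).

    %% Spawn a counter entity; all of its state (N) lives in the loop argument.
    start() -> spawn(fun() -> loop(0) end).

    %% The entity is a single-threaded receive loop: messages are handled
    %% strictly one at a time, so no two senders ever operate on N concurrently.
    loop(N) ->
        receive
            {increment, Amount} ->
                loop(N + Amount);
            {get, From} ->
                From ! {count, N},    % reply with an asynchronous message
                loop(N)
        end.

A client interacts with it purely by message: C = counter:start(), C ! {increment, 5}, C ! {get, self()}.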

There is much beauty to this model. Erlang heavily leverages the fault isolation properties of such a system; Joe Armstrong’s Erlang thesis (which seems to be offline at the moment?) describes basic patterns of constructing structures of cooperating processes with very high reliability. The basic idea seems to be recursive restartability. A process which receives an invalid message or whose state becomes corrupted is expected to immediately throw a fatal exception and cease to exist. Other supervisor processes monitor their children and detect these failures, restarting the children as necessary.
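
To sketch the supervision idea in code (my own hand-rolled illustration; real Erlang systems would use the OTP supervisor behaviour rather than writing this by hand):

    -module(simple_sup).
    -export([start/1]).

    %% Start a supervisor that keeps a single child process alive.
    start(ChildFun) ->
        spawn(fun() -> supervise(ChildFun) end).

    supervise(ChildFun) ->
        process_flag(trap_exit, true),   % turn child crashes into messages
        Child = spawn_link(ChildFun),
        receive
            {'EXIT', Child, normal} ->
                ok;                       % child finished cleanly
            {'EXIT', Child, _Reason} ->
                supervise(ChildFun)       % child failed: restart it
        end.

The child is expected to crash on any invalid input; the supervisor's only job is to notice and restart (a production supervisor would also cap the restart rate, as OTP's restart intensity does). Stack these supervisors recursively and you get Armstrong's supervision trees.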

In E, the same ideas are used to push forward the frontier of interaction between mutually suspicious code. The core concept here is that on the open internet, particularly as software becomes more sophisticated in its interactions with other software, there is a need to allow systems to interact via messages while remaining able to protect themselves against misbehavior on the part of other systems. Message-oriented programming again serves to isolate, but in this case it is not isolation from unintentional bugs or from external failures, but isolation also from malicious behavior. Really this deserves a whole separate post, so I’ll leave it at that for now.

The great thing about systems like this, in theory, is that they scale very gracefully. Add more entities (processes / vats), and you get scale. Their inherent reliance on distributed messaging means that they work better with more processing power, no matter whether that’s multi-core or multi-machine. You can also upgrade a running system with new code, while maintaining full operation. Erlang is getting a lot of interest right now for this reason.

The thing that frustrates me about the current Erlang wave, however, is that most of the core Erlang papers — such as Joe Armstrong’s thesis — do relatively little to describe how one actually takes a real-world problem and expresses it as a message-passing system. For example, the thesis describes a very high-reliability ATM switch. It gives maybe a half-page sketch of some of the core modules of the switch, and it briefly describes which (not how, only which) of the generic Erlang patterns are used in the switch’s code. But when it comes to the details — how requests are handled and handed off; how overload is dealt with; how failures are recovered at different levels of the system; how new features are implemented on top of existing running code — there’s basically nothing.

I really want to see some much more detailed case studies of actual Erlang systems that implement high-reliability solutions, walking in detail through the whole code and process architecture, and talking about how upgrades and extensions are implemented on top of the base product. Without that level of explanation, it’s not at all obvious how an Erlang system is actually built. One small example: consider a bank account transfer, implemented as a single transaction between two account objects. How would that be built in an Erlang system? Would each bank account be a separate process? If so, how would you achieve transactional semantics across the transfer, which involves a withdrawal from one account and a deposit to the other? Would you need some kind of transaction manager pattern? Are there higher-level patterns in Erlang that address these issues? Most of the Armstrong Erlang papers focus on failure, fault recovery, and supervision, but not on how distributed data structure updates are performed.

The most detailed discussion about an implemented Erlang system that I’ve been able to find is the overview of the Mnesia database. But even here, the focus is primarily on the ways in which the database resembles or leverages other database technologies, and not so much on the ways it leverages Erlang for fault tolerance or online upgrade. The very things that seem to make Erlang unique are the hardest things to get detailed descriptions of! Frustrating!

Cedric’s post also makes a good point that Erlang seems lacking in the areas of error reporting and code structuring. Message-oriented languages in general have an uneasy relationship with traditional object-oriented reuse structures — message-oriented languages in some sense are inherently decoupled, tending towards structural typing (in which messages are described primarily by their contents); whereas traditional class-based languages are oriented towards nominal typing (in which the inheritance lineage of a class determines its compatibility with other classes). There are some hybrid approaches to bridging the two, for example the (somewhat defunct) HydroJ messaging framework for message-oriented programming in Java, but the two paradigms are definitely tricky to couple.
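
A tiny Erlang-flavored illustration of the structural side (my own toy example): the receiver cares only about the shape of the message, and any sender whatsoever that produces that shape is compatible, with no nominal interface in sight.

    -module(logger).
    -export([start/0]).

    start() -> spawn(fun log_loop/0).

    %% Compatibility is decided by the message's structure alone: any process
    %% that sends {log, Level, Text} can talk to this loop, no shared class
    %% hierarchy or declared interface required.
    log_loop() ->
        receive
            {log, Level, Text} ->
                io:format("[~p] ~s~n", [Level, Text]),
                log_loop();
            _Other ->
                log_loop()    % unrecognized shapes are silently ignored here
        end.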

Software transactions and message-oriented systems seem to fall on two ends of a language spectrum. In the transactional case, it’s easy to see how to compose previously-sequential code and achieve some amount of concurrent scaling. In the messaging case, it’s not so easy to know how to take a distributed problem and decompose it into messages. I look forward to much more explanation of the design process for message-oriented systems, since only thus will they be competitive with the software transactional model, which is more immediately intuitive to sequential programmers (especially those who already have database experience).

Coming soon: the software transaction end of the spectrum. (And then back to finish off conscientious software!)

Edit after original post: Looks like Joe already answered my transaction question. Solution: bundle up your updates into a single message that you send to the database server. Dang, that’s a foreign concept to me, being a Java weenie used to the Hibernate wrap-your-serial-code-in-a-thread-local-transaction pattern. Erlang definitely demands you structure your code to its patterns from the ground up. A bad thing? Not necessarily… but what about composable concurrency? Better get my next post done soon!
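
For concreteness, here is my own rendering of Joe’s solution (names and numbers invented): both accounts live inside a single server process, and the entire transfer arrives as one message, so atomicity falls out of the one-message-at-a-time rule with no transaction manager in sight.

    -module(bank).
    -export([start/0]).

    %% The server owns all account state: a dict mapping account id -> balance.
    start() ->
        Accounts = dict:from_list([{alice, 100}, {bob, 50}]),
        spawn(fun() -> loop(Accounts) end).

    loop(Accounts) ->
        receive
            %% The whole transfer is one message, so it executes atomically
            %% with respect to every other request the server receives.
            {transfer, From, To, Amount, Client} ->
                Balance = dict:fetch(From, Accounts),  % crashes if no such account: let it crash
                if
                    Balance >= Amount ->
                        A1 = dict:store(From, Balance - Amount, Accounts),
                        A2 = dict:store(To, dict:fetch(To, A1) + Amount, A1),
                        Client ! {ok, transferred},
                        loop(A2);
                    true ->
                        Client ! {error, insufficient_funds},
                        loop(Accounts)
                end
        end.

A client just sends Bank ! {transfer, alice, bob, 25, self()} and waits for the reply. The cost, of course, is that the two accounts are no longer independent processes, which is exactly the ground-up restructuring Erlang demands.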

Written by robjellinghaus

2007/09/21 at 05:29

Posted in concurrency

Conscientious Software, Part 2: Provable Robustness


Second post in a series. Part 1.

Onwards! Now, what’s really interesting to me is the further implications of this autopoietic / allopoietic dichotomy throughout the software stack. The Conscientious Software paper talks about potential means of implementing autopoietic software in terms of more vaguely defined membranes and other sorts of perception-oriented interfaces between software components, along the lines of Jaron Lanier’s work.

To me, that is a lot harder to think about than the Google example, in which a lot of allopoietic engineering (very clear definition of subsystems, logical contracts, operational proofs, and other extremely “rigid” software engineering techniques) was used to create a system that, to a large extent, is reliably self-sustaining.

In other words, it seems to me that we want our autopoietic systems to be as reliable as we can make them. And the best techniques we have for designing something to be reliable are essentially allopoietic techniques.

Let’s look at this another way. A lot of programming language research right now is devoted to integrating proof systems into various languages. In some sense, every strongly typed language has a proof system built into it. A type system is a proof system (see Wadler’s paper on Proofs As Programs). The power of these proof systems and type systems is steadily increasing — a lot of work lately is going into logics for proving file system safety, data structure heap correctness, distributed messaging consistency, and on and on.
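
The Curry-Howard correspondence behind that slogan, stated loosely:

\[
\begin{array}{rcl}
\text{proposition } A \land B & \leftrightarrow & \text{pair type } (A, B) \\
\text{proposition } A \lor B & \leftrightarrow & \text{sum type } A + B \\
\text{proposition } A \Rightarrow B & \leftrightarrow & \text{function type } A \to B \\
\text{a proof of } A & \leftrightarrow & \text{a program of type } A
\end{array}
\]

Type-check the program and you have machine-checked the proof; the richer the type system, the stronger the properties it can express.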

We are continually developing conceptual techniques to increase the accuracy and correctness with which we can describe the desired properties of a system, and prove that the system in fact has those properties as implemented.

It is a fascinating question, to me, whether all those tools can be turned to the purpose of developing autopoietic systems. Let’s take one example: the above-linked work on logically proving that a file system is reliable.

The intention of this work is to model the state of the computer’s memory relative to the known state of the computer’s disk, in terms of what the memory believes to be true versus what the disk’s contents declare as true. Logical statements relate changes in the memory beliefs to changes in the disk’s beliefs. Overall, there is an error if at any point one can prove that the system’s disk beliefs are inconsistent with memory.
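
Stated very loosely (this is my paraphrase of the idea, not the actual notation of the file-system logic work), the invariant being maintained has roughly this shape:

\[
\forall b.\ \mathit{belief}_{\mathrm{mem}}(b) \text{ is defined} \implies \mathit{belief}_{\mathrm{mem}}(b) \approx \mathit{belief}_{\mathrm{disk}}(b)
\]

In words: wherever memory holds a belief about a disk block, that belief must be reconcilable with what the disk itself asserts; a reachable state that provably violates this is a file system bug.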

In a sense, this proof work is intended to make the file system robust in the face of any and all environmental failures. The robustness comes from construction, not necessarily from recovery. This is building reliability into the system at the very bottom.

It is not too difficult to postulate that such logics could be further developed, for example, to characterize the consistency properties of the Google File System, or of other large-scale distributed software infrastructures. (Boy, what a handwaving sentence! Yes, it’s not too difficult to postulate that. Actually doing it, on the other hand, is likely to be extremely difficult indeed. But over time even hard problems can get solved….)

Another direction in which to deliver increased reliability is internal self-protection. Almost all major organisms on Earth are composed of cellular structures. A cell is defined by its boundary — without a boundary, a cell does not exist. Some programming languages have similarly oriented themselves around the concept of bounded, encapsulated units which are composed in large numbers to create a greater whole. Possibly the most industrially tested and publicly available example of such a language is Erlang, which composes large and immensely reliable systems out of many small software components that communicate only by asynchronous message passing. There is no global state in an Erlang program, and no shared state among independent processes. This makes Erlang innately fault-tolerant, insofar as the failure of any one component cannot immediately cause other components to fail. It also allows the system to be upgraded while continuing to run. A really successful autopoietic system eventually reaches a point where it needs to provide continuous availability, with no externally visible downtime at all, ever. Systems decomposed into pure message-passing subcomponents inherently support this, since individual components can be replaced cleanly while the rest of the system continues normal operation.
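
That upgrade property rests on a simple idiom, sketched below (a bare illustration; OTP’s gen_server and its code_change machinery do this properly): the message loop recurses through the module name, so loading a new version of the module takes effect at the next message.

    -module(upgradable).
    -export([start/0, loop/1]).

    start() -> spawn(fun() -> ?MODULE:loop(0) end).

    %% Because the recursive call is fully qualified (?MODULE:loop/1), each
    %% iteration runs the most recently loaded version of this module. Load
    %% new code with code:load_file(upgradable) and the running process picks
    %% it up on its next message, with no downtime.
    loop(State) ->
        receive
            {work, From} ->
                From ! {done, State},
                ?MODULE:loop(State + 1);
            stop ->
                ok
        end.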

Of course, the autopoietic layers themselves need to be built out of ordinary Erlang processes (a common Erlang idiom is to have a number of worker processes overseen by a smaller pool of supervisor processes — this is exactly an autopoietic software layer), but the innate properties of the language contribute to the enforced internal modularity which leads not only to improved stability but also to ease of distribution. (Autopoietic systems will necessarily also be distributed and parallel systems, since any individual hardware unit may fail, and if the overall system is to survive it must have redundant parallel hardware.)

Large-scale autopoietic systems currently, and in the near future, will combine these properties. They will be systems that provide a general-purpose computing environment, with well-defined and consistent contracts, which are engineered with as much built-in and provable correctness as we can give them, which are modularized into independent and well-protected components, and which provide sufficiently reliable guarantees to the allopoietic programs running in those environments.

What is especially interesting is that this view flies in the face of the original paper’s claim that what’s needed is more vagueness, more inconsistency if you will, more tolerance between components for partial failures or partial communication. I’m not convinced by this part of their argument, at all. I think that we currently lack the ability to engineer in that style. Perhaps ultimately we’ll make greater use of artificial evolution for engineering design, and I would expect such design tools to be far better than we are at leveraging interfaces that are ambiguous or otherwise not rigidly defined.

Next: what happens if software can get sick?

Written by robjellinghaus

2007/09/18 at 04:47

Conscientious Software, Part 1: Allopoiesis and Autopoiesis


Recently some Sun researchers wrote a very interesting paper about the future evolution of software. The paper seeks to characterize a new type of software, which they term “conscientious”. I want to dig into this concept a bit.

This will be a multi-post series; this first post lays the groundwork, then the subsequent posts will delve into some further implications of the basic concepts.

This post will make much more sense if you read their paper first; otherwise, hang on 🙂 Their paper is very much a manifesto, in the sense that it aggressively commingles concepts from biology, engineering, and computer science to propose a very ambitious and not-fully-understood thesis. So, this post is going to do that too, and conceptual rigor be damned. It’s brainstorming time.

Conscientious software might be described (paraphrasing their paper heavily) as software which has a currently unknown degree of self-awareness, in the sense that it has the ability to test itself, to analyze its own functioning, to address external or internal malfunctions appropriately, and to maintain its operation under a variety of adverse conditions. And, of course, to actually do some job that we consider valuable.

There are two kinds of systems they talk about extensively. These are allopoietic systems and autopoietic systems. The technical definitions here come from biology, but I’ll paraphrase heavily and just say that an autopoietic system creates itself, sustains itself, and produces itself, whereas an allopoietic system is externally created and produces something other than itself.

An example they give is the distinction between a living amoeba and a watch. An amoeba consists of a large number of cellular structures and chemical processes. A living amoeba is a dynamic system — its continued existence depends on its active metabolism, by which it takes in nutrients, processes them, excretes material toxic to it, seeks out hospitable environments, and avoids dangerous ones. All the metabolic cycles and other processes that constitute the amoeba are primarily oriented towards self-sustainment.

A watch, on the other hand, has almost no ability to sustain itself. Its sole purpose is to tell time, and that purpose is only useful to the users of the watch, not to the watch itself. If the watch malfunctions, it must be repaired externally.

What does this have to do with software? They claim — and I think I agree — that software needs to develop in a more autopoietic direction. Fragile software, with frequent malfunctions, tricky configuration, and inadequate self-monitoring and self-repair capabilities, is increasingly hard to scale to larger and more complex functions. Yet we clearly want software to be capable of more than it currently is. Autopoiesis is a very thought-provoking concept when considering how to design such self-sustaining systems.

They make another point, which is that a purely autopoietic system — one which is only concerned with self-sustainment — is of little value to us. We want a system which can both sustain itself and produce something valuable. Their paper draws a picture of an autopoietic system containing an allopoietic core — the self-sustaining system devotes a large part of its resources to solving specific, externally defined problems.

This is all terribly abstract. What would such a system look like? Are we talking about some kind of muzzy fuzzy genetically engineered software? What are some examples of autopoietic software today?

They start with garbage collection as an example of an autopoietic software function. In some sense, memory is the environment in which software exists. As software runs, it populates that memory. Without lots of careful management, the memory can easily become cluttered with no-longer-used but not-yet-deallocated objects. It’s no big stretch to consider garbage objects as waste products of the software’s metabolism. (Well, OK, it is a stretch, but like I said, we’re brainstorming here.)

Garbage collection was originally developed to enable LISP programs to be written in the way that was conceptually cleanest — in other words, in a largely functional, allocation-intensive style — without complicating them with memory reclamation logic. Externalizing the garbage collection code meant that the LISP program itself could more or less just do its job, with the underlying metabolism of its environment responsible for cleaning up its waste.

Another example (mine, not theirs), and probably one of the most well-known (or at least well-documented) large-scale examples, is Google’s infrastructure. Google has developed numerous systems — the Google File System, the BigTable large-scale database, and the Chubby lock service — that are inherently widely distributed and fault-tolerant. These systems actively monitor themselves, replicate their data around failed components, and generally self-manage on top of a widely distributed pool of faulty servers. It is even less of a stretch to think of these systems as implementing a metabolic environment for user programs that is composed from cells (servers) that routinely die and are replaced, while the entire system has a dynamic, self-sustaining continuity. (I’m sure many Google admins would laugh very hard at the phrase “self-sustaining” here, but nonetheless these systems do internally monitor and repair themselves.)

The allopoietic component of the Google environment is the applications that actually run on the underlying autopoietic Google services. These applications can (mostly) ignore the details of the hardware they run on and the storage they use; they can simply consume the resources they need (in both persistent storage and CPU) to perform their job.

OK. So that’s autopoiesis and allopoiesis as applied to software as we currently know it. Next post: allopoiesis as the foundation of autopoiesis.

Written by robjellinghaus

2007/09/07 at 03:22