robjsoftware.info

A blog about software – researching it, developing it, and contemplating its future.

Glorious Possibilities #2: The Ultimate Parser

So for various reasons I have been learning a lot about parsers lately.  The #1 thing I have learned is that parsing is all about tradeoffs.  The #2 thing I have learned is that the ultimate parsing environment does not yet exist.

Long-time readers of this blog (assuming I didn’t lose them all in the slightly bobbled switch to WordPress!) know that I’m very interested in growable languages and in extensible programming environments.  Parsing is at the heart of programming language design, since it brings meaning to the text we programmers write.  When writing extensible programming environments, it is important that your language be closed under composition, so extending your language’s grammar still has some meaning (even if it introduces more ambiguity).  Parsers, in this sense, want to be powerful.

On the other end of the spectrum, there are parsers for relatively straightforward formats such as XML or CSS.  These parsers are often running in resource-constrained environments (a cell phone browser) or are dealing with large volumes under high load (an enterprise messaging bus).  For these parsers, high performance is absolutely critical, sometimes in both memory and CPU.  Parsers, in this sense, want to be fast.

There is a very tricky tradeoff between power and speed throughout the parsing landscape.  As just one example, I’ve been experimenting with ANTLR lately.  It is, on many fronts, the best parsing toolkit available.  Terence Parr has done a bang-up job over many years of addressing difficult technical issues while building a very usable tool.  It is just purely wonderful that ANTLR’s generated code has line-by-line comments that come directly from the grammar that generated it.  ANTLRWorks is a very enjoyable tool for language design.  Finally, Terence’s books are immensely useful.

But even ANTLR has some surprises.  I generated CSharp2 code from the example CSS grammar on the ANTLR site.  Then I ran it and looked at its memory footprint with a profiler.  I was not surprised to see an immense number of IToken objects being created — ANTLR’s generated lexer (as of version 3, anyway) allocates a new IToken object for each token parsed.  This alone is a real performance problem for high-load parsers: in a managed language, allocation in the parser’s hot path translates directly into garbage-collection pressure.
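
To make the hot-path point concrete, here is one common mitigation, sketched in Python (the design is generic, not ANTLR's actual API): have the lexer record token metadata into parallel arrays instead of allocating an object per token, and materialize token text only on demand.

```python
# Parallel-array token storage: the lexing loop records integers, and
# token text is materialized only on demand. In C# or Java the lists
# below would be primitive int arrays; Python is just for illustration.

class TokenBuffer:
    """Struct-of-arrays token storage: no token object per token."""
    def __init__(self):
        self.types = []   # token type codes
        self.starts = []  # offset of first character
        self.stops = []   # offset just past the last character

    def add(self, ttype, start, stop):
        self.types.append(ttype)
        self.starts.append(start)
        self.stops.append(stop)

    def text(self, source, i):
        # Only pay for a string when someone actually asks for one.
        return source[self.starts[i]:self.stops[i]]

WS, IDENT, NUMBER = 0, 1, 2

def lex(source, buf):
    i, n = 0, len(source)
    while i < n:
        start, c = i, source[i]
        if c.isspace():
            while i < n and source[i].isspace():
                i += 1
            buf.add(WS, start, i)
        elif c.isdigit():
            while i < n and source[i].isdigit():
                i += 1
            buf.add(NUMBER, start, i)
        else:
            while i < n and not source[i].isspace() and not source[i].isdigit():
                i += 1
            buf.add(IDENT, start, i)

src = "width 42"
buf = TokenBuffer()
lex(src, buf)
print([(t, buf.text(src, i)) for i, t in enumerate(buf.types)])
# [(1, 'width'), (0, ' '), (2, '42')]
```

The parser's inner loop then reads plain integers out of the buffer, and the garbage collector never hears about individual tokens at all.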

There was a surprise, though, and not a good one.  There were close to 5 megabytes of int16[] arrays.  Where did those come from?  From the lexer DFAs, which are stored in static lexer fields and decompressed when the lexer is loaded.  The compressed char[] representation of the lexer DFAs is tiny — certainly not megabytes.  So there is a shockingly high expansion ratio here.  I have to question whether that grammar is really complex enough to motivate a multi-megabyte lexer DFA.  It seems that I am not the only person encountering surprisingly large lexers with ANTLR v3.  It looks like this can be worked around with a hand-written lexer, but wouldn’t it be wonderful if ANTLR itself were able to tune its lexer generation to achieve the same effect (trading off some speed, perhaps, to reduce DFA explosion)?
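
The flavor of table packing at issue can be illustrated with simple run-length encoding: (count, value) pairs expanding into a flat transition row. This is a generic illustration of the technique, not ANTLR's actual serialized format, but it shows how a tiny compressed blob can balloon once the alphabet is 16-bit Unicode:

```python
# Run-length decoding of one hypothetical DFA transition row over a
# 16-bit character alphabet. A handful of (count, value) pairs expands
# into a 65536-entry array; multiply by the number of DFA states and
# the decoded tables dwarf the compressed form.

def rle_decode(pairs):
    out = []
    for count, value in pairs:
        out.extend([value] * count)
    return out

# One row: inputs 'a'..'z' go to state 3, everything else to error (-1).
row_compressed = [(ord("a"), -1), (26, 3), (65536 - ord("a") - 26, -1)]
row = rle_decode(row_compressed)

print(len(row_compressed), "pairs expand to", len(row), "entries")
# 3 pairs expand to 65536 entries
```

Keeping the rows compressed and decoding lazily (or binary-searching the runs directly) trades a little lookup speed for a drastically smaller footprint, which is exactly the tuning knob the lexer generator could expose.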

So there is room for performance improvement even on the most mature tools in the industry.  And on the other end of the spectrum, there are grammatical constructs that are impossible to express in purely context-free parsing frameworks like YACC.  I have been corresponding a bit with Yitzhak Mandelbaum at AT&T, subsequent to his excellent paper (with Trevor Jim and David Walker) on data-dependent parsing.  This has a lot of similarities with ANTLR’s semantic predicates, but seems to be perhaps more powerful and general (being embedded in a framework that is inherently scannerless).  However, being based on Earley parsing, it is hard for this method to gain competitive performance.  The method does support embedded “black-box” parsers, however.
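
The classic example of a data dependency (my own toy, not taken from the paper) is a length-prefixed field, where the amount of input to consume depends on a value parsed a moment earlier. No context-free grammar captures this for unbounded lengths, but a semantic action handles it trivially:

```python
# Data-dependent parsing in miniature: "n:payload", where the number of
# payload characters to consume depends on the *value* of n just parsed.

def parse_length_prefixed(s, pos=0):
    start = pos
    while pos < len(s) and s[pos].isdigit():
        pos += 1
    if pos == start or pos >= len(s) or s[pos] != ":":
        raise SyntaxError("expected '<digits>:' at offset %d" % start)
    n = int(s[start:pos])   # the datum the rest of the parse depends on
    pos += 1                # skip ':'
    if pos + n > len(s):
        raise SyntaxError("payload shorter than declared length")
    return s[pos:pos + n], pos + n

payload, end = parse_length_prefixed("5:hello world")
print(payload)  # hello
```

ANTLR's semantic predicates and the data-dependent grammars in the paper both generalize this pattern; the difference is how much of it stays declarative rather than buried in action code.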

The holy grail might be an environment with a powerful, compositional, potentially ambiguous top-level user language, and an implementation that could aggressively analyze sublanguages of the grammar and choose optimized implementations.  Grammar and language design is a balancing act between ambiguity and generality on the one hand, and efficiency on the other.  Different implementations have different tradeoffs — implementations supporting unambiguous grammars only will inevitably be faster than those that handle ambiguous parses; LR and LL algorithms both have their own idiosyncrasies with respect to the languages they can support; and so on.  So the ideal language environment would itself be extensible with different parsing algorithms that could themselves be composed.  That way, if all you have is a regular expression, you get tightly tuned inner loops and no backtracking; but if you have a highly ambiguous grammar, you get a backtracking and/or chart-parsing implementation, perhaps with lots of data dependencies.  The toolset should let you apply different algorithms to different parts of your language, and should support you in refactoring (for example, the left-corner transform) your grammar.
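
To sketch what choosing optimized implementations per sublanguage might look like, here is a toy in Python with all names invented: the grammar is plain data, a crude analysis classifies each rule as regular or not, and the back end compiles regular rules to a single regex while recursive rules fall back to a general backtracking interpreter.

```python
import re

# Grammar terms as tagged tuples; REF names another rule, enabling recursion.
LIT, SEQ, ALT, STAR, REF = "lit", "seq", "alt", "star", "ref"

def regular(term, rules, seen=()):
    """Crude analysis: a rule that transitively references itself is
    treated as non-regular; everything else in this toy algebra is regular."""
    kind = term[0]
    if kind == LIT:
        return True
    if kind in (SEQ, ALT):
        return all(regular(t, rules, seen) for t in term[1])
    if kind == STAR:
        return regular(term[1], rules, seen)
    return term[1] not in seen and regular(rules[term[1]], rules, seen + (term[1],))

def to_regex(term, rules):
    kind = term[0]
    if kind == LIT:
        return re.escape(term[1])
    if kind == SEQ:
        return "".join(to_regex(t, rules) for t in term[1])
    if kind == ALT:
        return "(?:" + "|".join(to_regex(t, rules) for t in term[1]) + ")"
    if kind == STAR:
        return "(?:" + to_regex(term[1], rules) + ")*"
    return to_regex(rules[term[1]], rules)  # REF

def interpret(term, rules, s, pos):
    """General backtracking matcher: yields every possible end position."""
    kind = term[0]
    if kind == LIT:
        if s.startswith(term[1], pos):
            yield pos + len(term[1])
    elif kind == SEQ:
        def go(i, p):
            if i == len(term[1]):
                yield p
            else:
                for q in interpret(term[1][i], rules, s, p):
                    yield from go(i + 1, q)
        yield from go(0, pos)
    elif kind == ALT:
        for t in term[1]:
            yield from interpret(t, rules, s, pos)
    elif kind == STAR:
        yield pos
        for q in interpret(term[1], rules, s, pos):
            if q > pos:  # guard against zero-width loops
                yield from interpret(term, rules, s, q)
    else:  # REF
        yield from interpret(rules[term[1]], rules, s, pos)

def compile_rule(name, rules):
    term = rules[name]
    if regular(term, rules):
        pattern = re.compile(to_regex(term, rules))
        return lambda s: pattern.fullmatch(s) is not None
    return lambda s: any(p == len(s) for p in interpret(term, rules, s, 0))

rules = {
    # digits : [0-9]* -- regular, compiled to one regex
    "digits": (STAR, (ALT, [(LIT, str(d)) for d in range(10)])),
    # parens : '' | '(' parens ')' -- recursive, gets the general matcher
    "parens": (ALT, [(LIT, ""), (SEQ, [(LIT, "("), (REF, "parens"), (LIT, ")")])]),
}
match_digits = compile_rule("digits", rules)
match_parens = compile_rule("parens", rules)
print(match_digits("2010"), match_parens("((()))"), match_parens("(()"))
# True True False
```

The point of the toy is the architecture, not the (deliberately naive) analysis: because the grammar is data rather than code, the toolset can inspect it, classify its sublanguages, and pick a different engine for each, which is precisely what the holy-grail environment would do at industrial strength.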

Incidentally, this last point is one reason why Parsing Expression Grammars are potentially problematic.  PEGs are a beautiful embedding of a declarative grammar in a functional programming language.  But in some sense most PEG implementations are too embedded — the only representation of the grammar’s structure is in the code itself, and all you can do with your PEG is to run it, not refactor or otherwise transform it.  (And let’s leave aside the issue of ordered choice potentially breaking your compositional intention.)  There is at least one endeavor to make PEG representations transformable, which is extremely interesting and worth following.
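
Both points, the embedding and the ordered-choice pitfall, are easy to demonstrate with a few lines of PEG-style combinators (a minimal sketch of mine, not any particular library). Ordered choice commits to the first alternative that succeeds; and since the resulting grammar exists only as closures, all you can do with it is run it:

```python
# Minimal PEG-style combinators. Each parser maps (input, pos) to a new
# position or None. Choice is *ordered*: once the first alternative
# succeeds, the second is never tried, even if the sequel then fails.

def lit(s):
    def p(inp, pos):
        return pos + len(s) if inp.startswith(s, pos) else None
    return p

def choice(a, b):          # PEG ordered choice: try a, else b
    def p(inp, pos):
        r = a(inp, pos)
        return r if r is not None else b(inp, pos)
    return p

def seq(a, b):
    def p(inp, pos):
        r = a(inp, pos)
        return b(inp, r) if r is not None else None
    return p

# Intuitively "a | ab" should let "ab!" parse, but ordered choice commits:
g = seq(choice(lit("a"), lit("ab")), lit("!"))
print(g("ab!", 0))   # None: 'a' matched, '!' then failed, 'ab' never retried
print(g("a!", 0))    # 2
```

Swapping the alternatives (`choice(lit("ab"), lit("a"))`) fixes this particular grammar, which is exactly the kind of order sensitivity that silently breaks naive composition; and note that nothing about `g` can be inspected or refactored after the fact, since the closures are the only representation it has.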

There is much more that could be said here — error recovery, for instance, is extremely tricky in many cases, and has tight interactions with the algorithm you are using.  Yet you still would like a declarative way to express your grammar’s error behavior, if possible. 

And what about low-level performance?  Which is faster:  a virtual object dispatch in your parser’s inner loop, or a colossal switch statement?  If you are doing code generation, the only answer is probably “generate both versions and then measure it on your specific grammar!”  There are so many tradeoffs here that drawing general conclusions may be nigh impossible.

The basic point I want to make is that parsing is very far from being a properly solved problem.  Tools like ANTLR point the way, but there is more still to be done.  And this is the glorious possibility:  building a parsing environment that is itself composable, and that is fully transparent to the developer at all times, letting the developer choose their implementation tradeoffs at all levels.

As Terence knows better than anyone, this alone could consume an entire career.  Or multiple such careers.  Still, I could see myself having a lot of fun building an F# implementation of this kind of transparent, composable parsing toolkit.  What a useful thing it would be!

Edit: And you know I will be following Terence’s work on ANTLR 4.  Also, this presentation of his on parser implementation is a great look at the state of the art in managed-language parser code generation. 

Edit #2:  I must also mention that ANTLR has soundly convinced me of the benefit of LL style: it is top-down recursive, which is a very natural way for most programmers to think.  Table-driven parsers, without good and scalable visualization, can be quite hard to debug and extend, but ANTLR’s generated code is a model of clarity.
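
A tiny recursive-descent parser shows why: each grammar rule becomes one function, and the call stack is the parse. This sketch of mine (not ANTLR output) directly evaluates the grammar `expr : term (('+'|'-') term)* ;  term : NUMBER | '(' expr ')' ;`

```python
# Hand-written LL(1) recursive descent: one function per grammar rule.
# Stepping through it in a debugger reads exactly like the grammar.

def parse(src):
    toks = (src.replace("(", " ( ").replace(")", " ) ")
               .replace("+", " + ").replace("-", " - ").split())
    pos = 0

    def peek():
        return toks[pos] if pos < len(toks) else None

    def eat(expected=None):
        nonlocal pos
        tok = peek()
        if tok is None or (expected is not None and tok != expected):
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        pos += 1
        return tok

    def expr():                     # expr : term (('+'|'-') term)* ;
        value = term()
        while peek() in ("+", "-"):
            if eat() == "+":
                value += term()
            else:
                value -= term()
        return value

    def term():                     # term : NUMBER | '(' expr ')' ;
        if peek() == "(":
            eat("(")
            value = expr()
            eat(")")
            return value
        return int(eat())

    value = expr()
    if peek() is not None:
        raise SyntaxError("trailing input")
    return value

print(parse("1 + (2 - 3) + 4"))  # 4
```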

Written by robjellinghaus

2010/05/04 at 22:51

Posted in Uncategorized

Glorious Possibilities #1: the Holofunk

Yow!  Two postings in one night!

One reason for my reduced posting here is that I’ve realized this blog works a lot better if I have a personal hacking project. Hacking on something I can publicly talk about makes for much more interesting material. So I am glad that I have a surfeit of possibilities there, one or more of which will definitely be happening this year. The next three posts are devoted to these lovely options, any or all of which may or may not wind up happening, but which are nonetheless interesting in their own right.

Up today: the Holofunk.

A year or two ago I became aware of a British performer named Beardyman. I am an old school raver, and so I still follow the electronic / dance music scene. Beardyman won the best beatboxer in the UK prize two years in a row (a feat no one else has accomplished), and these days he does one-man musical shows with his voice and a bevy of samplers and sequencers. The net result is that he is literally a one-man band. I find his work incredibly inspirational, as I like the electronic music scene and my voice is the only instrument I am much good with. So Beardyman is one inspiration.

Another inspiration of mine is my old college friend Leon Dewan. He and his cousin have created the Dewanatron, a unique device for making sounds never before conceived. I had the pleasure of joining them in a performance in Seattle recently (that would be me on the left at 3:15), and seeing them hacking their musical hardware was quite compelling. It’s fun to make weird sounds, bringing order out of chaos and chaos out of order.

Now then, please integrate the above influences with the following facts:

1) There exists a piece of software named Ableton Live. (How existentially qualified of me!) This is a very powerful sequencing / synthesizing / sampling application used by many DJs and electronic music producers. Recently the Ableton people have done a lot of work to integrate Ableton Live with another audio product named Max, which is an exceptionally scriptable and flexible audio toolbox. The combined product is named Max for Live. There are existing C# libraries that allow Max, and hence Max for Live, to be controlled over a TCP connection.

2) 3D graphics cards are currently shipping; in fact, most mid-range to high-end PC graphics cards can already drive a 3D display, and the hardware is about to become very widely available.

3) Microsoft this year, if their rumored timeframes hold true, is going to ship an attachment for the Xbox 360. This attachment is currently called Project Natal. It is essentially mass-market full-body video-based motion capture and 3D body tracking. The idea is that you just wave your arms and move your body and the thing watches you and can track you in realtime. (Actually, current rumors have it that the thing has about 100ms of lag, so you can’t expect instantaneous response out of it.) If it lives up to the hype — admittedly a big “if” — it really could be a step forward in interface technology.

So, let’s combine the artistic influences with the technical possibilities.

Let’s say that in late 2010 you could put together a reasonably fast PC with a 3D graphics card, an installation of Max for Live, and a PC-based Natal development kit. Let’s further say that Natal actually works, and that it is possible to — with enough experimentation and false starts — build an application that lets you directly manipulate 3D objects. (Hopefully Natal has at least some ability to detect finger position, or grabbing anything will be hard….)

If you could hook up a grab-and-drag Natal application to Max for Live, you could build a gesturally controlled 3D sound space. Add a microphone and whatever instruments you like, and make it really easy to script new gestures and new kinds of 3D sound-controlling objects.

It’d be a sonic holodeck. Punch the throbbing red sphere and start recording a new loop. Grab a sound and rip it in half, then push the two copies slightly out of tempo. Wave your arms around to change pitch, volume, modulation, or crossfade. I have no real idea what the detailed interface would be since I haven’t played with Natal for real yet.

But what I do know is that there is a real chance this could be an immensely entertaining thing for creative musicians to play with. I also know that I really want to play with it myself!

So that’s my first possible 2010 hacking project: the Holofunk. That’s what I’ll call it. If, of course, I ever finish it. If I do get somewhere with it, I will open source it, probably on Codeplex. And if anyone else is equally inspired by this, or otherwise has more ability to execute on it than I do (given my great day job and busy family life), by all means feel free! There is going to be some seriously fun art coming out of the techno scene in the next few years, and I can’t wait. I love the 21st century!

Written by robjellinghaus

2010/04/13 at 21:15

Time keeps on slipping, slipping, slipping

I originally thought that one posting a month was a reasonable and sustainable rate for this blog.

And actually, I still do think that. But I also think I’ve been less than effective at making it happen. Which is why I am going to make up for lost time. In 2009, I made… oh dear… five postings. That leaves a 2009 deficit of seven, plus three so far this year.

And it’s not as though I don’t appreciate the recent attention (for some definition of “recent”) that this blog has gotten. I do appreciate it. However, I don’t plan to make any similar posts in the foreseeable future. There is plenty of other stuff to talk about!

Working at Microsoft is an odd experience in some ways, because the company is so epically big that its output varies widely. So on the one hand there are plenty of failures to be embarrassed about, but on the other hand there are sometimes big wins that bring a kind of transitive corporate patriotic pride. For example, I never wanted a Windows phone before; no one had much good to say about them. But the bizarrely named Windows Phone 7 Series phones look great and I want one badly.

In fact, now I really need one, because I recently lost my only cell phone. So the question now is whether I can live without any cell phone whatsoever until these new goodies ship. They’d better ship in fall 2010 or I might have to go over somewhere and slap someone. Really, it is mostly catchup with the iPhone, but it is still nice to see a big company get with the program.

Next up: hacking resolutions for 2010!

Written by robjellinghaus

2010/04/13 at 21:13

Good bye, Blogger

If you’ve read this blog before, you may have noticed it now looks different.  I’ve moved from Blogger to WordPress, and based on experience so far, I’m not looking back.

Well, actually I guess I will, because it’s worth blogging about 🙂  I created this blog on Blogger almost three years ago.  The features Blogger had when I created this blog are almost exactly the features Blogger has today.

In some ways this was sort of fine.  I was used to the simple little textarea Blogger post window.  I was accustomed to the minimalist comments handling and the fairly clunky UI.

But then I took a hiatus from blogging, and when I came back, my most recent posts were fairly heavily spammed.  And that was not acceptable.

It’s odd, because Google is good at dealing with spam in Gmail.  I’ve never had a single spam message there.  But both Google Groups and Blogger are definite spam victims at this point.  Google has a hit or miss record with its products, and Blogger has definitely been neglected.

WordPress, on the other hand, is widely considered to have the best spam control in the blogging industry.  I can already say that the blog editing UI is streets ahead of Blogger.  The content import from Blogger seems to have worked great.  So far I’m quite pleased, and willing to pay the $10 per year for domain mapping.

I am also working on a content revival for this blog.  Stay tuned!

Written by robjellinghaus

2010/03/10 at 07:59

Because I can

This is a list of worthy programmers. It makes me happy.

Unfortunately that is all I can say about this list at this time. Hopefully someday my disclosure may be fuller.

Written by robjellinghaus

2009/09/11 at 07:24

Posted in Uncategorized

Sex and the Clueless Coders

I am very late to the “party” here, but I can’t help chiming in on the recent spate of programming conference presenters dropping bits of porn into their presentations. First it was the Golden Gate Ruby Conference in April, where a presentation on CouchDB (hmm, now I know what couch they meant!) included some racy pictures of mostly nekkid women. What’s more astonishing is that David Heinemeier Hansson, the Big Dog of the Ruby community, apparently thought this was all quite appropriate. I guess he wants the Ruby community to be all anti-corporate and rock-star. Or something.

And now as I was doing my remedial surfing to catch up on the story from April, I find that something similar went down (ahem) at a Minneapolis Flash conference, where the presenter did a big ol’ animation of all kinds of X-rated activities.

It’s funny, in a sad way, to see so many clueless male geeks sticking up for the L33t Rebels Busting Out The Pr0n. I mean, my God, guys, the entire Internet is filled with images of hotties of all genders, ages, species, and descriptions. Why do you feel the need to shove it in peoples’ faces at a geek conference?

I work for Microsoft now, which I suppose is an arch-corpocracy by the definitions of these running-wild-and-free, swinging-low coder cowboys (operative word being boys). Here at Microsoft we have this little thing called an HR policy. What it means is that if I wallpapered my desktop or office with pictures like the ones these zany idiots are slathering all over the place, I WOULD GET FIRED. And deservedly so. Because while sex is great, mixing it with work is guaranteed trouble for everyone.

It’s especially appalling how absolutely butt-ignorant many of these testosterone-poisoned hackers seem to be. I lived in San Francisco for a good twenty years, and I got to know a raft of feminist sex workers and general sex-positive people. And the one abiding principle that everyone thrived on was respect. Listening to what other people want and how other people feel — what a concept!!! And that’s exactly what all these indignant self-important immature coderboys are apparently utterly unable to do, when confronted with the thoroughly understandable discomfort of the women and the more enlightened men who were present at these code talks that turned into bad peep shows.

I love programming. I love computer science. The world of software has unlimited potential. And the lack of women in this field is beyond tragic. This summer I’m actually managing a female intern in our group; she is rocking the code in a serious way and I’m honored to get to be her manager. Looking at this unbelievable travesty of civilized behavior at these conferences, I’m just really, really glad we’re not on any Ruby or Flash projects, and I’m also glad we’re in a Big Boring Corporation, because not only are we working on things that are astonishingly cool despite our corporate overlords, we are protected from the kind of juvenile bullshit that seems to have infected the Ruby world.

We want MORE women in this field, not less — but if the numbskull, sexist Rubyists and their ilk have their way, all we’ll have is stupid guys with more balls than brains. And that’s bad for everyone. KEEP IT IN YOUR DAMN PANTS, IDIOTS! And do your pr0n surfing in private!

Edit: Here’s the big thread on the Ruby debacle, with the presenter in question chiming in. Now you know which side I’m on….

More editing: Here’s Martin Fowler’s excellent summary of the debacle. And here’s DHH himself — scroll down to April 27. Kind of funny how he seems to think that enjoying the movie Pulp Fiction and putting porn into a technical conference are somehow related. Dude, don’t be so proud of being R-rated — I’m sometimes very X-rated myself, but you won’t find out about it here, because I know better than to get my boundaries mixed up! Good fences make good neighbors.

Written by robjellinghaus

2009/07/07 at 07:24

Bob vs. Gavin, no holds barred

As this blog chronicles, I was in the Java world from 1996 to 2008. Near the end of the road, I got involved in the Seam and GWT communities, and I met Gavin King and Bob Lee. Gavin developed Hibernate and Seam (both wonderful), and Bob created Guice at Google (which I hear is wonderful, though I haven’t looked at it much). (Sorry, can’t be arsed to link all these terms; search = your friend!)

Then we had to sell our house (housing bubble near-miss) and we decided to move to Seattle, and now I work for Microsoft and am very happy, and I lost touch with what was going on in the Java world. The last I heard, Gavin was working on the Web Beans JSR (JSR-299), collaborating closely with Bob, and Web Beans was going to be the best of Seam and the best of Guice standardized and pushing Java EE forwards. Ah, how lovely a picture!

However, I just opened my RSS feed on Gavin’s blog, and was shocked — SHOCKED — to see serious trouble in paradise. For example, quoth Gavin:

Bob, if you’re honestly trying to argue that Guice – which relies upon procedural Java code to specify dependency-related metadata – is as inherently toolable as a system that allows all dependency-related metadata to be specified using annotations (or XML for that matter), I’m just going to leave this discussion now, because that’s absurd.

And quoth Bob (replying to a different post, I’m cherry-picking not summarizing):

This is a little disingenuous. As you know, the lead of a JSR has absolute power. Most leads don’t abuse this power. They listen to their (highly representative) EG, achieve consensus, and very rarely make executive decisions. I think you’ll agree that you are more of a dictator. Yes, you took what you thought were the best ideas from Guice, but I found working with you as a lead and changing your mind on anything to be an exercise in frustration. I can’t count how many hours I wasted convincing you that Seam-style injection was fundamentally flawed only to have you switch to using proxies which have their own set of problems. I even brought Josh Bloch in one time to help settle a debate, but you cursed at and insulted him. I sincerely wish I had that part of my life back. By joining your JSR, Spring would not only validate it, but they’d have to give you absolute power over themselves. Based on my experience, I wouldn’t recommend they do that.

Zow! What’s also strange is that this comment shows up in my RSS feed for Gavin’s page, but I’m not seeing it on the comments web page itself. So if you want to see the fur fly, go straight to the feed.

This is the kind of thing I miss the least about the Java world. I theoretically admire the openness of the JCP/JSR process, and in theory it should lead to better results than a more closed process. But in practice, normal human perversity just gets in the way — the kinds of personalities that drive specs forwards tend to be very focused, and prone to conflict. So now it looks like there are going to be multiple JSRs describing dependency annotations, and the two people who could best work it out seem to be at each others’ throats (as far as their respective specs are concerned). Disappointing.

Neal Gafter recently left Google — and the entire Java world — and came to Microsoft, for similar reasons… he put man-years of work into the Java closures spec, and then it was killed due to backroom political pressure. The Microsoft model is more like, we own everything, and we will do what we think is best. Coming from the Java world, I used to think that made Microsoft the Evil Borg. But now, on the inside, I see there are a lot of benefits to having a single decision point. (Well, God knows there are huge political issues even inside Microsoft, but it’s still an order of magnitude less than the Java world!)

Written by robjellinghaus

2009/05/08 at 18:04

Intentional shipped!!!

Intentional Software was founded in 2002 by Charles Simonyi, one of Microsoft’s first billionaires. They’ve been pretty vague ever since then, which led to the usual vaporware skepticism.

I’ve been following them for some time, because I had extremely positive results using domain-driven design at Nimblefish (my first serious enterprise software job). I’ve also blogged extensively here before about the appeal of extensible languages.

Well, last week they finally gave a real demo, and it turns out they’ve been very busy for these last seven years. The Intentional team has made some real leaps forward in the whole concept of language construction, multi-language projection, and bidirectional editing. Basically, their system lets you use any number of different languages to describe a problem domain; you can create your own languages, project them as Ruby/Java/C# code, tables, or diagrams, edit them in any format and have the edits translated directly to the others (to the extent possible), run the model directly while editing it, and just generally take metaprogramming to the next level.

Yes, I’m excited; it’s not every year (or even every decade) that you see a demo like this.

Here’s a breakdown of the contents, if you need to optimize your optical cycles:

  • 0:00 – 9:00: basic Powerpoint conceptual overview
  • 9:00 – 13:00: more technical Powerpoint about the structure of their system
  • 13:00 – 21:00: illustrating the “metamodel” for a state machine domain-specific language
  • 21:00 – 31:00: bidirectionally projecting and editing a state machine via Ruby, UML, etc.
  • 31:00 – 36:00: demonstrating an electronics DSL, including a live evaluation of electrical flow while editing the circuit graphically
  • 36:00 – 43:00: the Cap Gemini beta test product, implementing a pension system with live Intellisense on the actuarial math equations, and a temporal rule language for pension rules
  • 43:00 – 46:00: the multi-user versioning support that operates at the level of their fundamental tree-based data structure
  • 46:00 – end: various Q&A

If you only have 15 minutes, watch 31:00 – 46:00.

There are various systems that have done various subsets of all this before, but I’ve never seen it packaged in a unified way. Ever. It’s time to start watching Intentional very closely. It may also be time to check out the open-source JetBrains Meta Programming System. Software is all about raising levels of abstraction, and we might just have some new cranes coming online.

Edit: I just checked out the Meta Programming System’s tutorial, and yay! Looks like it’s a free version of many of these concepts that we can play with now! Time to tinker….

Written by robjellinghaus

2009/04/26 at 02:59

Miracles can happen: CACM

So I’ve been a member of the ACM for many years. For a long time it was the only way to get at their Digital Library, which was the motherlode for research paper junkies like me. That made it worth putting up with their magazine, Communications of the ACM (CACM for short), which was really remarkable for how it never had anything worth reading. Pretty much all the lead articles it published were turgid studies about the sociological makeup of MIS departments, or the nature of collaboration in enterprises, or other strange bureaucratic stuff that only had a tenuous connection to programming as I knew it.

Last spring I got my current (unbelievably excellent) job at Microsoft, which has free corporate access to the Digital Library. Hmm, I thought, maybe I should drop my ACM membership.

They must have been snooping on me, because almost exactly then, they announced a complete editorial revamp of CACM. Refereed articles! Hardcore software / hardware research papers! Suddenly they were talking my language. And even better, they actually did it.

The new CACM is frankly the best computing magazine I’ve ever seen. A wide range of articles, a high technical bar, many diverse subjects of great interest… it’s really a winner. Puts the old Byte / Dr. Dobb’s / etc. to shame. Of course, all those mags are dead, too… but the itch they used to scratch is scratched much better by CACM! Who would have thought?

And the new CACM web site is no slouch, either.

I guess sometimes miracles do happen. Thank you, ACM, for doing such a good job on this reboot!

(And yes, it’s been a while… I’ve been sick, the whole family had the flu, the dog ate my homework, it’s been awful dark outside, gimme a break here! At least I’m back! 🙂

Written by robjellinghaus

2009/03/27 at 03:46

The Five Stages of Programming

Programming is an interesting job, because it goes in continual cycles. Each part of the cycle has its own emotion that goes with it, too.

When starting off a new project, there’s a learning curve that goes with it. You’re spinning up, reading code, reading technical papers, trying to figure out what the hell you’re going to do. The main emotion here is puzzlement — how is this thing going to work? What’s the interface? What’s the feature set? What the heck is going ON?

After that comes early implementation. In this phase, the main emotion is nervousness. You think you know how it’s going to work, but there’s nothing really there yet. So you’re hacking madly away, trying to get enough of a skeleton in place that you can start to make it dance. Forget about getting flesh on the bones, you’re just trying to come up with something that can stand up! Since you don’t really know what you’re doing yet, it could all still fall apart on you.

Once you’re out of those woods, you’re into late implementation. Here, the main emotion is adrenalin. You’re charging on all cylinders, driving at full throttle. The bones are rapidly becoming enfleshinated, and you’re in the zone. This is in some ways the most satisfying part of the whole cycle, because now you start to see some real results from what you’ve been working on.

The last phase is debugging. Here, the emotion swings wildly between frustration and relief. You’re almost done… except you’re not! There’s a bug! Fix it, fast! OK… and on to the next test… and WHAMO, another weird bug! Grrrr. OK, got that one done… YES! IT WORKS!!! Ship it!

And then the whole cycle starts over again.

So that’s my job: puzzlement, nervousness, adrenalin, frustration, and relief. Of course sometimes you take a few steps back. For example, right now I made it all the way to relief, but I’m about to backslide into nervousness. The best-case scenario, though, is when you make it to relief and then you can keep building on the code you just finished… then you have a kind of secure happy foundation under you, reassuring you that even if your current layer falls to bits in a welter of recrimination, at least you know the relief — that fantastic sense of accomplishment that comes with writing a software machine from thin air, that has real value and usefulness — is still out there, in the future, waiting for you.

That’s what software is, to me: the promise of progress, of building on what’s come before, making it better. And this emotional cycle is what it takes to make that happen. So I’ll close with a word that sums it all up for me:

Onwards!

Written by robjellinghaus

2008/12/30 at 05:03
