There's an argument commonly heard these days that open-source software is all very well for infrastructure or commodity software where the requirements are well-established, but that it can't really innovate. I laugh when I hear this, because I remember when the common wisdom was exactly the opposite -- that we hackers were great for exploratory, cutting-edge stuff but couldn't deliver reliable product.
How quickly people forget. We built the World Wide Web, fer cripessakes! The original browser and the original webservers were built by a hacker at CERN, not in some closed-door corporate shop. Before that, years before we got Linux and our own T-shirts, people who would later identify their own behavior correctly as open-source hacking built the Internet.
It seems to me that bringing the Internet and the World Wide Web into being ought to count as enough "innovation" for any one geological era. But it didn't start or stop there. Nobody had even conceived of cross-platform graphics engines before the X window system. The entire family of modern scripting languages descends from open-source Perl, and almost all draw critical strengths and innovative drive from the size and diversity of their open-source communities.
Even in user-interface design, much of the most innovative work going on today is in open source. Consider, for example, the Metisse/Facades system. Or just the astonishing, eye-popping visual experimentalism of Compiz/Fusion under Linux.
It's actually corporations who have trouble innovating. Innovation is too disruptive of established business models and practices; it's risky, and it involves coping with those annoying prima donnas at the R&D lab. Consequently, even well-intentioned big companies like Xerox that are smart enough to fund real research centers like Xerox PARC often reject the truly groundbreaking ideas from their own researchers. Today you'd be extremely hard pressed to find any of the really cool ideas from Microsoft R&D being deployed in actual Microsoft products.
The process of innovation and deployment in open source is of course not friction-free, but it certainly looks that way when compared to the corporate world. One of my favorite current examples is the way Guido van Rossum and the Python community are gearing up to re-invent their language for its 3.0 release. Their "Python Enhancement Proposal" process for fostering and filtering novel ideas by individual contributors repays careful study; like the Internet RFC process (on which it's clearly modeled) it produces a combination of innovative pace and successful deployment that even Bell Labs in its heyday could not have dreamed of sustaining.
Yet, somehow we still see earnest screeds like this one by Christophe de Dinechin:
What I'd like to see happen is genuine open-source innovation. But I'm afraid this cannot happen, because real innovation requires a lot of money, and corporations remain the best way to fund such innovation, in general with high hopes to make even more money in return.
The easy, cheap reply would be to write the author off as a blithering idiot who has failed to notice that his entire environment has been drastically reshaped by open-source innovation, and the proof slaps him in the face every time he looks at a browser. But, in fact, I think he (and others like him) are not idiots; they are reasonably bright people making a couple of serious and identifiable errors in their reasoning about open source, closed source, and innovation.
Error the first: ignoring the present value of open-source innovations in the recent past when projecting the difference in expected returns between open-source and closed-source innovation strategies. This is what M'sieu de Dinechin is doing when he fails to notice that Tim Berners-Lee was a hacker operating in open source, and his successors mostly likewise.
Error the second: discounting innovations that are not user-visible and salable by a marketing department. OK, the latest piece of eye candy from Apple is very nice, but if you ask me how it compares to the present value of (say) the open-source BIND daemon, the answer is "no contest"; one just looks pretty, the other is fundamental to the entire frickin' web-centered economy.
Error the third: ignoring work like Metisse/Facades because it isn't yet deployed on enough machines to show up on a marketing survey. The problem here is that people like de Dinechin wind up erroneously taking the ability of corporations to sell incremental improvements into an established marketplace as their major proxy for measuring the ability to originate innovations in the first place. This makes their view of what constitutes 'innovation' nearsighted even where it's not altogether blind.
Error the zeroth: confusing two issues, one of which is "which strategy globally maximizes innovation?" and the other one of which is "how do I, the hungry would-be innovator, get paid?" This is the big one; I'm numbering it Error Zero because I think it's at the bottom of almost all the other systematic mistakes de Dinechin and people like him are prone to, including errors One through Three.
de Dinechin, and people like him, have a simple and linear model of how innovation works. Pay a bright guy like de Dinechin, stand back, and watch the brilliant stuff come out and change the world. In this model, if you don't pay bright guys like de Dinechin, innovative stuff doesn't come out because they're too busy grinding out COBOL or something so they can eat -- no world-changes, so sad.
This model is very appealing to people like de Dinechin, who have an understandably strong desire to be paid for being smart and creative. Heck, it appeals to me for exactly the same reason. Unfortunately, and unlike de Dinechin, I know that it is seriously false-to-fact.
I have a very different model of how innovation, at least in software, actually works. One of its premises can be expressed by what I shall now dub the Iron Law of Software R&D: If you are a programmer developing innovative software, the odds that you will be permitted to finish it and that it will actually be deployed are, other things being equal, inversely proportional to the product of your depth of innovation and your job security.
That is, the cushier your corporate sinecure is, the less likely it is that you will make a difference. The more innovative your software is, the less likely it is that you will actually be supported all the way to deployment.
The reason for this is dead simple. Corporations exist to mitigate investment risk. The larger and more stable a corporation is, the more resistant it is to disruption of its practices and business model, including the unavoidable short-term disruptions attending what might be long-term innovative gain. Net-present-value accounting therefore almost always leads to the conclusion that innovation is a mistake.
"But what about Bell Labs?" I hear you sputter. Ah, yes, that archetype of the halcyon days of corporate research. Well, for one, Bell Labs is dead; pressure for short-term returns has made the kind of empire-building it represented effectively impossible today. And even in its heyday, Ken Thompson had to write Unix as an after-hours project on a piece of salvaged junk, then sneak Unix tapes out the back door to deploy it.
The Iron Law explains neatly why most of the research that came out of Xerox PARC was eventually deployed by other corporations, mostly startups with no preexisting business model in jeopardy. And why you get the most actual, deployed innovation in open source -- because the people whose revenue streams we're jeopardizing (if any) aren't the same people who are funding us (if any).