The traditional take on weblogging was that it generated a lot of amateur writing, but lacked the guiding hand of professional editorial control. In this piece, Editorializing is Parallelizable (which is based on a post to JD Lasica : http://jd.manilasites.com/2001/05/29), I point out that editorializing (fixing factual errors, passing critical comment, calling out lies) is the equivalent of the software developer's debugging. And, just as a large number of eyeballs allows a community of free software users and developers to test and debug software faster and more effectively than a professional company, so a large number of readers and bloggers will debug and editorialize the news and opinions going round in their community.
The effect is that any particular weblog may be flawed, but start reading around a few and you'll see factual errors picked up, opinions criticised, and dishonesty flamed.
Editorializing is Parallelizable ... is my new slogan, and my new way of thinking about weblogs. It's a deliberate echo of the basic principle EricRaymond suggested to explain the success of the Open Source movement : "Debugging is Parallelizable". (In fact he attributes the phrase to Jeff Dutky; Raymond's own characterization is "Given enough eyeballs, all bugs are shallow". See TheCathedralAndBazaar.)
To my mind, this is an excellent parallel with what's going on in the blogging world. The Open Source movement discovered that a large number of not very co-ordinated, individually motivated amateurs (in the sense of being unpaid volunteers) managed to produce software of comparable, if not superior, quality to that produced by the wealthiest corporations using the most rigorous software engineering methodologies.
How could that happen?
StuartKauffman might be able to explain. Something to do with the emergent orderliness of very large numbers of interacting parts maybe. But Raymond also brought a new understanding through his insight that the work involved in developing software consisted largely of the parallelizable activity of discovering and fixing bugs.
Which brings us back to blogging. A lot of people are talking about blogs as writing. Looked at this way, the proliferation of blogs is a challenge to the republic of letters. Whether blogs are criticised as amateurish, unedited rants, or celebrated as vehicles of free expression, the assumption is that they must swamp the web in yet more low-grade infoglut, and make the good information harder to track down and verify.
But in fact, the majority of blogging effort seems to me to be editorializing : identifying and drawing attention to good writing and information, adding interpretation and opinion, making cogent criticisms, filtering multiple sources to present particular viewpoints or arguments. And as with debugging, this is highly parallelizable. Each blog filters and refines the information feed for the next one down the line. Find a discerning, enthusiastic blogger whose taste and range of interests overlap yours, and you will have found a source of the finest quality, distilled information and comment. Essentially, you have found the sea of information debugged and customized for you.
And as with open source, we can expect the good things to scale up with the number of bloggers. The more bloggers rampaging across the web like army ants, devouring everything before them, the easier, not harder, it will be to find the most useful and reliable information. That will be the stuff the bloggers are all pointing to and discussing.
But we can, of course, expect trouble from the traditional media companies. Blogs are in competition with traditional media for the following reason. As ClayShirky suggests (http://www.openp2p.com/pub/a/p2p/2000/12/19/micropayments.html), selling individual news items isn't a viable business model. Most commercial newspapers' business model is to sell aggregations of news. Hence you trust a newspaper to provide a package of selected news writing and comment. (In fact the raw news material often comes from general sources like Reuters.)
Now, many content-based businesses are concerned to keep that aggregation model - if not through physical aggregation on paper, then through selling their branded editorializing. Clearly bloggers pose a threat to that. I don't trust many media brands to select stuff for me - that is, I don't turn to the front page of CNET or MSN for my overview of what's relevant to me. Instead I trust DaveWiner, Tomalak's Realm, Camworld, Scobleizer etc. as my aggregating, editorializing brands ... So, once the media companies realize this, watch for them to turn on bloggers and denounce "deep linking" to articles as viciously as the RIAA went after Napster or Microsoft are going after the open source movement ;->
This is a pretty straight (but tidier) copy of the mail I sent to JD Lasica after the first discussion. A useful way to get some of my thoughts together re : blogging == distributed editing.
Thanks for your response to my comments. But I think you missed one of my main emphases. You are still implying that in a sea of opinion we need a professional media to provide the role of guarantor of reliability or trustworthiness, whereas my point is that reliability is an emergent effect of a large number of imperfect editors. Imagine the comparable objection to the open source movement : "It's all very well these amateurs writing code, but without professional managers to run proper test schedules, these open source projects will never have code as robust or reliable as that generated inside properly managed projects."
However, the result has turned out to be different. Much open source software is lousy, but given a project with enough interest, the law of large numbers of eyeballs kicks in, and the code being worked on by this community ends up more reliable and robust than that from projects run with top-down management techniques. That can only happen because debugging is the sort of thing that can be parallelized. Now I think that the blogging phenomenon has revealed another activity which can be parallelized in the same way : sifting through multiple opinions to find the most interesting / accurate stuff.
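The "law of large numbers of eyeballs" has a simple probabilistic reading, which can be sketched in a few lines of Python. The model and the probability value are my illustrative assumptions, not from the original text: if each independent reader spots a given error with some small probability p, the chance that at least one of N readers catches it is 1 - (1 - p)^N, which climbs rapidly towards certainty as N grows.

```python
def p_caught(p: float, n: int) -> float:
    """Probability that at least one of n independent readers,
    each with per-reader detection probability p, spots an error."""
    return 1 - (1 - p) ** n

# Assumed per-reader detection probability of 1% (purely illustrative):
# with 1 reader the error almost always survives, but with 1000 readers
# it is almost certain to be caught.
for n in (1, 10, 100, 1000):
    print(f"{n:4d} readers -> {p_caught(0.01, n):.3f}")
```

Under this toy model, 100 readers already catch the error well over half the time, and 1000 readers catch it essentially always - which is the intuition behind both "Debugging is Parallelizable" and the claim about blogging made above.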
For subjects with a large enough interest base (enough eyeballs), the same law of large numbers will kick in, and the sifting effect of the community will start to outperform traditional media organizations and their quality control techniques (such as FactCheckers, sacking journalists who make up stories, etc.)
There are already many cases where a community provides better information than commercial media organizations. But these are often dismissed as unusual, on the grounds that the subject is especially technical and the community is a community of technical experts.
Personally, I don't think so. We've seen the effect of communities of experts outperforming traditional media organizations earliest in technical fields because these technical communities were first onto the internet in large numbers, and first to achieve the requisite critical mass. But the principle is universal. Expect to see it next in other areas of expertise : gardening, bird watching, custom cars, pet care, history etc.
Once a community of interest on the net has achieved the critical mass to outperform the available commercial media, I think there's no going back. The community will suck attention away from, and steal brand loyalty from, the commercial rivals. Those rivals may remain valued parts of the community, but they will have no special status.
Fine rhetoric. But is there any evidence for this? Particularly the claim that factual errors get "debugged" or fixed as they propagate through the BlogoSphere. Where can we find actual studies?