Hi Frank -
"Has anyone read the above piece from the Summer 2007 issue of strategy + business? I was hoping to read some comments on that one. Anyone?"
The piece had some parallels to our discussion about elites. I agreed with the thesis that there is much to be gained from "bottom up" harvesting, both of thought and of information. I also agreed that such harvesting demands careful sorting of the wheat from the chaff.
It's at that point where my views went a little further, perhaps, than the author's. While not disagreeing with what he states, my experience is that almost anything built by committee (a true peer group) is likely to be flawed.
This is where the discussion of elites merges, but I don't necessarily like the term "elite":
"Élite, n. [F., fr. élire to choose, L. eligere. See Elect.]
A choice or select body; the flower; as, the élite of society."
Source: Webster's Revised Unabridged Dictionary (1913)
The author's "meritocracy" is probably the better term for the ultimate judge of what should be found valid and acted upon. But even within a meritocracy, we can still have a body whose individuals have differing agendas (personal: ambition, monetary gain; general: moral, political). Anybody who's watched the parade of "expert witnesses" in litigation, each contradicting the other, can certainly be skeptical about the worth of special expertise.
In fact, it's not enough to have a Good Idea. As stated previously, what's good is defined by what's possible, and what's achievable. Those criteria may mitigate or even defeat a concept that's theoretically valid. With Linux, or Netscape, or Wikipedia, the boundaries aren't limited by the "Open Source" issue; they're also defined by the task itself. With programming we're talking output that works: something measurable. Yes-no. With Wikipedia, we're discussing output that is "generally agreed" to be "true": a semantic minefield, if ever there was one. Can output be measured? Maybe. We know that there are obvious deficiencies, acknowledged by Wales himself.
What's possible is understood, with both Linux and Wikipedia. What's achievable with Linux is far less easily achieved with Wikipedia.
Linux and Wikipedia also differ in that there is a solution to the problems of one, but not necessarily the other. Linux has an Ultimate Judge, who happens also to possess certain qualities sufficient to guide the concept properly. The scope of the endeavour is narrow enough to be guided by one person, despite multiple contributors.
OTOH, Wikipedia solicits contributions across the breadth of human knowledge. There is no group of persons (see "expert witnesses" above) who could be expected to agree on everything, and no individual who could possibly have sufficient knowledge to gather and evaluate all valid content. The answer in such a case is to fall back on method; that is how Britannica produced a superior result: weaker and slower in the gathering, but stronger on evaluation.
You could make a comparison to those occasions where police solicit tips from the public on a major crime. Typically they're flooded with information that must be sorted and evaluated, most of which turns out to be irrelevant. Within the evaluators there exists a hierarchy. There's a parallel in counterintelligence: hundreds, maybe thousands of contributors, called analysts, supplying facts, statistics, and trends. From those, some graduate to senior positions, where they exercise a screening function on what's likely to be true. Despite the existence of a hierarchy and extensive screening, history demonstrates that counterintelligence often misses what is "true". What should we expect from Wikipedia, then?
If you were CEO of a company employing Torvalds and Wales on their separate projects (not likely, but just for the sake of argument), you'd probably conclude that Wales' project was flawed because its output was unreliable, even unusable at times.
You'd probably also conclude that programming (code) is different from raw information: information doesn't need to "work"; code does. In one case, what's admissible for screening is to a large extent self-limiting; in the other, it is not. Even after diligent review and acceptance, the validity of Wikipedia content can be arguable, debatable, and only probable.
"Open source" doesn't define the difference between Linux (or Netscape) and Wikipedia. "Peers" doesn't describe the difference between the contributors. In any case where massive amounts of information must be evaluated, commensurate resources must be allocated to review. If there's no (or insufficient) quality control on the path from input to output, the result is guaranteed to be questionable. GIGO isn't just for programmers, but Linux has a lot less GI.
There are pragmatic limits on what can be achieved with Open Source, if that's the correct term for what we're discussing. Some of the difficulties aren't obvious. The only way to achieve the benefits without the drawbacks is by effective screening, for which someone has to be made responsible. With Linux (notwithstanding "peer" input) Torvalds can and does exercise that responsibility. With Wikipedia, it's questionable whether the problem is solvable, without resorting to methods similar to those used by traditional encyclopedias. For pragmatic reasons those methods are somewhat exclusionary, right from the get-go: certainly not "Open Source".
The author states, "It seems fair to say that although the bazaar should be defined by diversity, the cathedral should be defined by talent. When you move from the bazaar to the cathedral, it’s best to leave your democratic ideals behind."
I agree, but he should recognize the differences between the two tasks. He should also question whether even the finest talent can ever "solve" Wikipedia's task-related problems without a massive commitment of resources.
Jim