My alma mater runs an annual contest called the "Bulwer-Lytton Fiction Contest", honoring Edward Bulwer-Lytton, the author who penned the opening line that begins "It was a dark and stormy night..." In the contest, entrants try to write the worst opening line for a fictitious novel. Last year's winner was Sera Kirk, who wrote:
A small assortment of astonishingly loud brass instruments raced each other lustily to the respective ends of their distinct musical choices as the gates flew open to release a torrent of tawny fur comprised of angry yapping bullets that nipped at Desdemona's ankles, causing her to reflect once again (as blood filled her sneakers and she fought her way through the panicking crowd) that the annual Running of the Pomeranians in Liechtenstein was a stupid idea.
In my last post, I was practicing for next year's competition <g>
Back on subject, I think a major drag on software productivity is exactly this 'big mix of vendors'. The test/support matrix for a multi-platform software product can be pretty scary. Multiply the hardware/OS/OS-version combinations by the external components (and their versions) and by your own release versions, and you can literally get into hundreds of different combinations. For example, if you support all the 'current' versions of Windows (95/98/2000/Me/NT), Unix (a couple of versions of Solaris), and the Mac, embed two or three packages, and support three versions of your own product, you might be talking 200+ combinations. Throw in different DBMS, reporting, analytics, and other associated packages, and you could be into the thousands.
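To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. All the counts are hypothetical, chosen only to roughly match the example above:

    from math import prod

    # Hypothetical support matrix, roughly matching the example above:
    # 8 platforms (5 Windows, 2 Solaris, 1 Mac), two embedded packages
    # at 3 versions each, and 3 supported releases of our own product.
    dimensions = {
        "platform": 8,
        "embedded package A": 3,
        "embedded package B": 3,
        "our releases": 3,
    }
    print(prod(dimensions.values()))  # 8 * 3 * 3 * 3 = 216 combinations

    # Throw in a few associated packages (DBMS, reporting, analytics)...
    dimensions.update({"DBMS": 4, "reporting": 3, "analytics": 2})
    print(prod(dimensions.values()))  # 216 * 24 = 5184 combinations

Every added dimension multiplies the total, which is why the matrix blows up from hundreds into thousands so quickly.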
You know how virtually all software producers handle this? By what is called 'sparse matrix' testing; that is, they pick the most likely combinations, test those, and ignore the rest. They might test 10%, 5%, even 2% of the possible combinations. That means a large percentage of the shipped configurations are never tested at all.
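Here is a minimal sketch of what that looks like in practice. The platform names, package versions, 'must test' list, and 10% budget are all made up for illustration:

    import itertools
    import random

    platforms = ["Win95", "Win98", "Win2000", "WinMe", "WinNT",
                 "Solaris7", "Solaris8", "Mac"]
    embedded = ["pkg-1.0", "pkg-1.1", "pkg-2.0"]
    releases = ["3.0", "3.1", "4.0"]

    # The full matrix: every combination a customer could conceivably run.
    full_matrix = list(itertools.product(platforms, embedded, releases))

    # The "most likely" combinations; these always get tested.
    must_test = {("Win2000", "pkg-2.0", "4.0"), ("WinNT", "pkg-1.1", "3.1")}

    # Spot test: a ~10% budget, filled out with a random sample of the rest.
    budget = max(len(full_matrix) // 10, len(must_test))
    pool = [c for c in full_matrix if c not in must_test]
    spot_test = must_test | set(random.sample(pool, budget - len(must_test)))

    print(f"testing {len(spot_test)} of {len(full_matrix)} combinations")
    # e.g. testing 7 of 72; the other 65 combinations ship untested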
Multiply this by the number of released versions (major releases, minor releases, patch releases) and the matrix becomes even more 'sparse'. Sure, there are ways to help with this, including automated testing. But automated testing never seems to produce all the coverage needed, and test verification is difficult to automate.
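One crude way to quantify how thin the coverage gets is pairwise coverage, a standard idea from combinatorial testing: a bug triggered by two interacting factors is only caught if those two values ever appear together in some tested combination. A standalone sketch, reusing the same hypothetical matrix as above:

    import itertools
    import random

    platforms = ["Win95", "Win98", "Win2000", "WinMe", "WinNT",
                 "Solaris7", "Solaris8", "Mac"]
    embedded = ["pkg-1.0", "pkg-1.1", "pkg-2.0"]
    releases = ["3.0", "3.1", "4.0"]
    full_matrix = list(itertools.product(platforms, embedded, releases))
    spot_test = random.sample(full_matrix, len(full_matrix) // 10)

    def value_pairs(combo):
        # Tag each value with its dimension index so identical strings in
        # different dimensions can't collide, then take all pairs.
        return set(itertools.combinations(enumerate(combo), 2))

    all_pairs = set().union(*(value_pairs(c) for c in full_matrix))
    covered = set().union(*(value_pairs(c) for c in spot_test))
    pct = 100 * len(covered) / len(all_pairs)
    print(f"{len(covered)} of {len(all_pairs)} value pairs covered ({pct:.0f}%)")

Even a spot test that looks respectable on paper typically leaves most of the two-factor interactions unexercised, which is one place surprise field bugs can come from.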
So, companies that support a large matrix have a choice: pour an enormous amount of resources into testing and test automation, or take a chance and 'spot test'.