>>> "A...productive programmer can write 30000 lines of code per year..." This is about 15 lines per hour. If a programmer makes $100000 per year [probably low when all costs are included] this comes to $3/line, or, more realistically, $3 to $5/line. <<<
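For what it's worth, the quoted arithmetic does check out, assuming a roughly 2,000-hour work year (my assumption; the quote doesn't say):

```python
# Sanity-check the quoted figures. The 2,000-hour work year
# (~50 weeks * 40 hours) is my assumption, not from the quote.
lines_per_year = 30_000
hours_per_year = 2_000
cost_per_year = 100_000   # dollars, fully loaded, per the quote

lines_per_hour = lines_per_year / hours_per_year
dollars_per_line = cost_per_year / lines_per_year

print(lines_per_hour)               # 15.0 lines/hour, matching the quote
print(round(dollars_per_line, 2))   # 3.33 dollars/line
```

Which is exactly why the headline numbers sound plausible — the arithmetic is fine; it's the 30,000-lines-per-year input that is fantasy.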
Sadly, in most situations this is wildly over-optimistic. It is also vague, since a 'line' of C++ may do 100 times what a line of assembler does. On top of that, in the most productive environments for a given piece of functionality, e.g. in some cases 4GL database environments, the total amount of code may be (one hopes) very small, and most of the hours may go to server setup, ER design, user interviewing, or other tasks.
It is true that, without testing, design, analysis, management, or extensive debugging time, and without version control or any other team members to worry about, a single programmer in an ideal situation on a hot streak, reusing already-tested code and attacking a well-understood problem domain, might sustain a rate like that for a while.
I once used MSFT's own figures for number of products, number of programmers on staff, and total lines of code to come up with something like 2 lines of code per programmer per day. (They ran a full-page ad with the data (!)) That depended somewhat on whom you counted as a programmer (I think they included line managers and so forth). That was circa 1992. It's a bad result, but not an unusual one. (MSFT might differ with me on the interpretation, but then I don't have their internal numbers, if any.)
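The back-of-envelope calculation was nothing fancier than total lines divided by programmer-days. The inputs below are made-up placeholders, since I no longer have the ad's actual figures:

```python
# Hypothetical illustration only -- these inputs are placeholders,
# NOT the real figures from the MSFT ad.
total_lines = 10_000_000    # shipped lines across all products (placeholder)
programmers = 2_000         # everyone counted as a "programmer" (placeholder)
years = 10                  # years of development effort (placeholder)
workdays_per_year = 250

lines_per_programmer_day = total_lines / (programmers * years * workdays_per_year)
print(lines_per_programmer_day)   # 2.0 with these placeholder inputs
```

The point is that any plausible set of real inputs lands in the single digits, orders of magnitude below 15 lines per hour.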
One problem is that productivity takes a huge negative hit as staff sizes grow, as was amply documented in 'The Mythical Man-Month' thirty years ago.
Another problem is that as development environments get easier to use, they do so mostly by becoming less general (not everything is a database problem, for example), while improvements in general software engineering have turned out to be incremental. At best, a given complex project can probably be done today in a third to half the time it would have taken thirty years ago, and that gain is mostly due to hardware improvements (hi-res terminals, speed), not programming environments. And that counts only actual programming and compiling time.
Then you need to add in maintenance time.
Anyway, I have had days when I wrote a couple hundred lines of code and they just worked right away. That seldom happens. Several times as often, I spend hours to days chasing down some elusive failure, usually when several failures at work are disguising each other and I am trying to change one variable at a time.
Fewer than half of all software projects succeed, too. You could say programmer productivity on those is zero, though the failure is often at the business or product-idea level.
If you want better info than this, I suggest one of the software quality associations. Software metrics is a very tricky field, one that has always appealed to managers because of the possibility of quantifying programmer productivity. But as quality managers know, the programmer pushing out a huge number of lines of code per day (and night) is quite likely to be the one whose code is thrown out entirely in the end; who has the worst problems interacting with his/her team; who abandoned the design goals that were hard to code for ones that were easier to code, without consulting customers or designers; who requires one or more maintenance programmers to be employed forever to keep his/her code working; who becomes a permanent fixture in a self-created sinecure because nobody else can read the code; and who is most likely to trash the project and/or the whole company. It is not always their own fault, either. Usually a naive manager rewards this behaviour, and the naive programmer is too eager to please.
Plus, environments and 'meaning per code line' (my phrase) vary wildly. On top of that, the best programmers sometimes generate the fewest lines of code for a particular function point or definition, and reuse the most code, at the cost of appearing less productive to the naive observer.
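A toy illustration of why raw line counts reward the wrong thing: these two functions do exactly the same work (the same 'function point', loosely speaking), yet one scores several times higher by lines of code:

```python
# Two equivalent implementations of the same task:
# summing the squares of a list of numbers.

def sum_squares_terse(xs):
    # One line of logic, reusing the built-in sum().
    return sum(x * x for x in xs)

def sum_squares_verbose(xs):
    # Identical behavior, many more "productive" lines.
    total = 0
    for x in xs:
        square = x * x
        total = total + square
    return total

data = [1, 2, 3, 4]
assert sum_squares_terse(data) == sum_squares_verbose(data) == 30
```

By a lines-per-day metric the verbose author looks several times more productive; by any sane measure they delivered the same thing, with more to read and maintain.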
Also under the heading of 'your results may vary': IBM once tested a bunch of their programmers on lines of COBOL code written per day. They found that matched pairs of programmers, of the same age, education, background, years of experience, employee review ratings, and so on, could differ from each other by as much as 20 times on particular test assignments. Sometimes this was consistent; sometimes it lasted for only one test. So go figure. If IBM has one thing, it's consistent hiring practices. Then again, lots of people cheat: at college, on resumes, during programming and programming tests.
I suggest looking at some of the old research on function points, software metrics, and so forth before you jump to any conclusions about the validity of calculating programmer productivity that way. This has been discussed ad nauseam in general SE periodicals like IEEE Computer and Communications of the ACM, and in the periodicals of specialist organizations and ACM SIGs. You could start with the ACM and IEEE web sites, maybe.
Hope this wasn't more than you wanted.
Chaz