Why Didn’t It Take Off? Pathology of a Floundering Web 2.0 Startup

May 25, 2008

Tipjoy is a web startup, funded by Y Combinator, for tipping web authors and personalities. It went live in February to decent press coverage. The concept is simple: if you see something you like, tip the author by clicking a participating author’s link, using the Tipjoy bookmarklet, or going to the Tipjoy site and entering the URL.

It’s a good idea and the founders have worked hard to overcome a number of the inherent problems with small money transfers. However, there’s one little problem remaining. It’s not generating revenue.

On the main page of Tipjoy the technically savvy, but perhaps not so financially savvy, founders post their latest statistics:

$2,519.01 (red arrow) is not a large sum of money. Compound that with the time period indicated by the blue arrow. Elsewhere on the site the founders explain that they charge a 3% transaction fee. In other words, they pulled in $75.57 over four months. Nice. That probably doesn’t even cover hosting fees.

Of course this isn’t the full story. There is likely anywhere from a few hundred to a few thousand dollars sitting in accounts, either waiting to be given out as tips or waiting to be claimed by recipients. With money markets at around 2.5%, that may represent another $20 in interest. Thrilling.
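For the curious, the arithmetic behind those figures works out as a quick back-of-the-envelope script. The fee and tip totals come from the post itself; the $2,400 average float balance is my own assumption, chosen to be consistent with the roughly $20 interest figure above:

```python
# Back-of-the-envelope check of the Tipjoy figures quoted above.
tips_total = 2519.01      # total tips shown on Tipjoy's front page
fee_rate = 0.03           # Tipjoy's stated 3% transaction fee

revenue = tips_total * fee_rate
print(f"Fee revenue over ~4 months: ${revenue:.2f}")  # → $75.57

# Interest on the unclaimed "float". The average balance is an assumption.
float_balance = 2400.00   # assumed average money sitting in accounts
annual_rate = 0.025       # ~2.5% money market rate
months = 4

interest = float_balance * annual_rate * months / 12
print(f"Interest on the float: ${interest:.2f}")      # → $20.00
```

Either way you slice it, total income is well under $100.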

It appears the failure to generate revenue has had its costs. The founders haven’t blogged for nearly a month, and previous postings are uninspiring to say the least. It seems that around the end of March the founders’ attention wandered from the site, after less than two months live! How many successful businesses ever got it right in that short a time? <Right now imagine a typical rant about generation Y’s failure to follow through>

 OK, now clear those thoughts. Generation bashing is never fair. Let’s look at the site itself.

 First, what’s right.

  1. Design: It’s an easy to understand, easy to use site. The color scheme is clear and inviting.

  2. Concept: The idea is simple and the implementation is straightforward.

  3. Technology: Excellently implemented. The site isn’t foundering through lack of technical expertise.

  4. Ease of use: It took about 15 seconds to sign up. Try it.

 So what went wrong? Let’s disassemble it a bit:

Product: In my opinion the product stinks. No, not the idea. The idea is great. Not the implementation. It’s a solid web site. But the product. Let me illustrate. As you may know I live in Tennessee and it gets hot in the summer. A few years ago a deer got hit by a car along the road I drive to work. After a few days of that mid-summer heat that thing could be smelled from nearly a mile away. Some people may take from that experience that deer stink. Well, not all deer. Rotten deer. This product is rotten.

Tipping is a very abstract “product”. “Social well being” is probably as close as it gets. One evening my wife and I dined at our local Olive Garden and by bad luck ended up with a barely competent–if that–waiter. Luckily the lady serving tables near us was very good and bailed him out a couple times. Understandably his tip was very small. But on the way out the door I slipped her a couple bucks. She felt good and I felt good.

Replicating that experience online is difficult. Tipjoy is notably lacking in that regard. The main page is made up of a bunch of (dry) statistics. Hello, this isn’t Microsoft Excel! What are they trying to sell, business analytics? Where’s the sidebar advertising how your Tipjoy data can be added to your Facebook/Myspace page? Where’s the banner that says “Show your girlfriend you really like her latest Facebook post”? Where’s the “attach a message to the tip” feature? Tipjoy is a social site; it must market itself as a social service. It must integrate into life, or life will continue to pass by outside of it.

Advertising: People have to know about a site in order to use it. Tipjoy did enjoy good initial coverage from TechCrunch and other scattered outlets. However, it lacks long-term support. If I were them I’d contact as many bloggers and Myspacers (if that’s what they’re called; I’m getting too old school for that stuff) as possible to include links on their sites. Tipjoy must become integrated into the social framework.

And one last thing. No credit cards? Hey dudes, it’s 2008, Web 2.0. Instead of “Coming Soon” it should say “transaction charge”. Leave the choice to the consumer.

Lesson: To succeed in Web 2.0 your site cannot be an optional layer added to people’s lives. It must be inserted directly into the lives of the consumer. Until Tipjoy finds that recipe it can only founder.

Like this article?  Leave me a tip


 Disclaimer: I have no investment in Tipjoy, nor do I have any business relationship with the company. I don’t know the founders personally, but best wishes to them. I have only the greatest respect for anyone who has the guts to start a new company. My analysis may sound harsh, but remember that anyone who gets as far as Tipjoy is clearly a star.


Want the same analysis on your web 2.0 business?   See the about page for details


The Meaning of (Software) Quality

February 14, 2008

Introduction

I had just finished reading a bedtime story to my young son, but before putting up the book I flipped a few pages to see what was coming up and came across a nice full page COLOR picture. Keep in mind this book is an original 1950 edition of Uncle Arthur’s Bedtime Stories. In 1950 color was not an easy or affordable technology. The color photos were carefully chosen and strategically scattered—one every few chapters. My son and I paused to take a look at the picture before putting the book up.

Color photos are a dime a dozen now.  I get fancier printing on credit card offers. Yet, a slightly grainy 58 year old photo still had the capacity to engage. It was something extra that had been put into the book on purpose to make the book extra special. In short, it had quality.  Quality that was still obvious 58 years later.

Quality in Software

Of all the traits we strive to put into software, quality is one of the hardest. Merriam Webster defines quality as “a degree of excellence”. But excellence at what? At the most basic level quality is a measure of functionality.

Base Definition: Quality Products are Accurately Specified and Match the Specification

That doesn’t necessarily mean a formal set of “design documents”. However, every software system needs to have a clearly defined purpose and it needs to actually work! We spend a lot of effort on this: requirements analysis, use case analysis, unit testing, regression testing, etc. And for good reason.  If the software “doesn’t work” it will never achieve anything. However, once we’ve satisfied this basic level of quality we should move on to a higher quality.

Advanced Definition (Inherits Base): Quality is the Delivery of Exceptionally Meaningful Features

This is “fuzzy” and “illogical” and that’s OK. A good illustration is the classic Star Trek tension between Mr. Spock’s Vulcan “logic” and the “illogical” humanity of the rest of the crew. Software is typically approached with a Vulcan perspective, which is OK when defining specifications and testing logic.  But software is for humans. It’s the human quality that allows iTunes to realize that when I put a music CD into my drive it means I want to rip it. (Windows Media Player thinks I want to click on the “CD” tab and then click the “RIP CD” button.)  Microsoft misses from time to time on the human level, but it has made some hits.  When Word started suggesting spelling corrections as a right-click mouse option I was overjoyed; no longer did I have to wait and spell check the entire document and lose my sense of context.  I could correct my mistakes immediately, in context, as if they never happened. I still feel a sense of liberation each time I zap a misspelling.

Summary

Quality is a two-sided coin.  Quality is about accuracy, but it’s also about emotion.  Quality software must work.  But it must also give something meaningful to the lives of the people who use it.  “Meaningful” may be “fuzzy” and “illogical”, but it’s what divides the accurate from the fantastic, the good from the great.

Microsoft’s Yahoo Blunder

February 3, 2008

The news of Microsoft’s $40 billion offer to buy Yahoo astounded me. (CNN) Almost in a daze I listened to it again and again on one media outlet after another. All reported the same story: Google is gobbling up the lucrative Internet advertising market and Microsoft wants a share. I’ve seen some bad business moves, the Time Warner/AOL merger for example. But this one tops the charts. Not only will it fail. It will fail spectacularly.

Microsoft has had a remarkably successful run selling software. I use this term rather loosely since much of the software it sold is operating system code. Selling “code” may be a better term. Excluding a few minor (and late) exceptions—such as the Xbox–Microsoft made its fortune selling code.

In the early days selling code worked well. In fact, it was the only model around and it still works well in niche markets. In the 80’s anything computer related was by modern standards a niche market. Microsoft focused on owning the largest of these niches and rode those markets to stardom. Microsoft still thinks in terms of owning software market space. Even worse, Microsoft seems to think that everyone else thinks that way too.

This is where Microsoft comes into direct ideological conflict with Yahoo and Google. Yahoo and Google do not sell software. They sell advertising. Yahoo and Google don’t act like software companies. They don’t think like software companies. Because they aren’t software companies! They are advertising companies. Microsoft doesn’t understand the advertising business. Most people don’t. Yet it’s a very simple business.

Definition: Advertising is the art of consistently drawing lots of attention.

That’s it. Find a way to consistently draw attention and you have created advertising space. If you don’t believe me, think about the Super Bowl. It is the king of advertising. Why? It gets LOTS of ATTENTION. World class sports and world class entertainment rolled up into one massive show available for, you guessed it, FREE.

Google has been winning the Internet advertising war because it’s really good at getting lots of attention. In addition to providing the best search engine on earth, Google has found another way to get lots of attention: offer free software services. People flock to Google services by the millions. Google doesn’t really care about the “code markets” it’s poaching. It doesn’t even think in terms of “code markets”. Google is just looking for “cheap” attention, and it’s getting it. The fact they have eaten up some of Microsoft’s “code markets” is mostly an accidental byproduct of their business plan.

Yahoo is very similar to Google, just less successful. In order for Yahoo to regain momentum it must capture advertising space (i.e. get and hold attention) quickly. That will come into direct conflict with the existing Microsoft for-sale software model. Microsoft will never let Yahoo poach any of its “code markets” until those products are no longer producing significant revenue. By that time Google will have poached all the useful advertising space and moved on. A merger with Microsoft is the death knell for Yahoo’s ability to compete with Google.

On the other side, Microsoft gains very little from Yahoo.  An advertising company has very little in common with a niche software company. It can do nothing to defend Microsoft from Google. Microsoft may gain something in intellectual property and people, but certainly those things can be acquired much more cheaply from small startup firms. Ultimately, the main problem with the Yahoo acquisition is that it will distract Microsoft from what it really should be doing: abandoning declining niches (like OS and Office products) and moving into new niche space. The purchase of the ProClarity business intelligence suite was an excellent start down that road. Microsoft needs to build on those kinds of small purchases.

I predict that if Microsoft goes through with the Yahoo purchase it will find itself caught between being yesterday’s niche software player and tomorrow’s advertising company and will fail at both.

 

Disclosure: I do not own any positions in Microsoft, Yahoo or Google.

Twelve Lessons From Writing Documentation

November 18, 2007

We all believe good system documentation is important, but who actually does it?  The typical corporate IT system documentation usually consists of repository comments, a few moldering requirements documents abandoned on some network share and maybe a few inline code comments.

A couple years ago I finally tired of the standard “documentation process” and actually started to document some of what I do. 

Here are some lessons I’ve learned:

1) Like any skill, writing documentation improves with practice.  As skill improves it also becomes more fun.  (Yes, I said fun.  It really *can* be fun)

2) Some of the “classic” documents (requirements for example) turn out to have a very short useful life.  Other nameless documents—instructions for running a complex script—turn out to be highly useful.

3) Documentation is best written in short segments.  I use a wiki—it facilitates the kind of quick, “jot down an idea” thoughts that are often the most useful.  Writing in short segments also keeps it from getting boring.

4) Documentation must be easy to update.  Otherwise it gets out of date and loses the trust factor.  Again I really like the wiki for that.

5) Documentation must be easy to search.  Anything difficult to reference—like the traditional formal document stored on a shared drive—simply won’t be used.

6) Documentation is for programmers and users, not management.  Keep it professional (even if the document on multi-threading seems like it deserves that expletive!), but casual.

7) Keep documents small.  I prefer 1 to 2 printed pages.  If any single document is more than 3 pages long it’s covering too much ground and should be broken into its component parts.  It’s easier to navigate short documents and they are easier to search.  (Fewer false positives)  Leverage the power of the hyperlink!

8) Scale the documentation.  Systems are naturally hierarchical.  Systems break into sub-systems, which become classes and so on.  Let the documentation naturally break out into patterns that fit either the architecture or the usage of your systems.

9) It’s OK to have documentation outside the code.  Documentation that’s tied to specific parts of the code should be in the code (class and method documentation, for example).  But system and subsystem documentation may work better in a wiki.  The important thing is that the documentation is easily accessible and universally recognized as the place to go.

10) Not all documentation is code related.  (For example, a dictionary of business terms/acronyms) There needs to be a place to store non-code documentation.

11) Documentation should answer questions of what, where and why.  The code already answers “how” so don’t waste time on that.  For those who think code is “self documenting”, just try to write code that explains why a particular design or architecture was chosen instead of a competing alternative. . .   Some of the most useful comments are ones like “we tried doing… but that turned out to be a bad idea because. . .”

12) Documentation should be shared.  The more eyes that see it, the more valuable it becomes.

But in the end, the biggest lesson–the person who uses my documentation the most:  me.  I’m amazed at how often I go back and read something I wrote to refresh my memory on a certain system, process or whatever.  The time I’ve spent writing has easily paid for itself just in time saved from re-researching.

What Else to Learn at University

October 4, 2007

The other day I joined in on a short conversation between an intern and a full-time developer.  They were discussing what a university student should learn in addition to what’s taught in the classroom.  They weren’t talking about make-up work for an out-of-date curriculum (which unfortunately is so common), but truly how to go from good to great.

Here are some of the ideas that came out in that conversation and what I thought of later.  If you have any other suggestions please feel free to add your comments below.

1)       Master SQL.  Data is the core of nearly every system.  While the database schema may be generic, the data in the schema is the least generic piece of the system.  All the quirks and undocumented requirements of the business will quickly find their (often ugly) way into the data.  Strong DB skills are essential to create and maintain effective and efficient systems.  Bosses are always on the lookout to pick up job candidates with stronger than average database skills.

 

2)       Learn a DB language.  T-SQL and PL/SQL are easy to learn and extremely valuable in the workplace.  Very few students graduate with DB language skills.  The few that do have a massive advantage in the job market.

 

3)       Watch out for teaching languages.  Explicit teaching languages like Pascal are a thing of the past, but languages that are taught a lot (C++ and Java for example) carry a lot of baggage.  First, everyone knows them.  Each job opening in that language will have dozens of candidates—many with prior experience.  Second, the language is usually not growing quickly so the opportunities for advancement are limited.  You should master the teaching language—but don’t bet your economic future on it.  Instead, learn a non-teaching language.  Find a language that interests you and has a growing job market.  Master that language.

 

4)       Get an internship that uses your non-teaching language of choice.  If that falls through—get a different internship.  Practical experience in the “real world” simply can’t be matched in the classroom.

 

5)       Do a bootstrapped startup.  Build a software product and sell it.  But whatever you do, don’t spend a dollar on it that the business hasn’t already earned.  The point of a startup is experience—hiring someone else to do the dirty work will give them both the experience and your money.  Don’t waste it.  Grow slowly and leverage momentum.  Become the world’s expert in the product you are developing and selling.  You’ll either succeed or have a very impressive resume or both.

 

6)       Be balanced.  Writing code is not everything.  I am not referring to beer bashes; the party life will do little more than lower your grades.  (One imbalance is never corrected by another.)  I mean seriously engage in life outside of computing.  Volunteer, attend church, sing in a choir and play in intramural sports.  The great secret of technology is that writing great software is only half about technology.  The other half is about people.  Great systems can only come into being when the people who write them understand people, their needs, their dreams and their desires.  Software must be shaped to be human.

 

Far too many lines of beautiful code are written that no one will ever use because they don’t do anything useful for anyone.  It is such a shame.  Great software can only be written when the walls of isolation between IT and non-IT are broken down.

 

I was fortunate to attend a small Christian school that not only has a top notch CS program, but also a great liberal arts curriculum.  Rubbing shoulders with English, PE, Nursing, Religion and Psychology majors on a daily basis enriched and widened my life in a way nothing else could. 

There was the supper at the cafeteria where a psychology major friend of mine played a small (non-embarrassing, thankfully!) psychology experiment on me, then explained the experiment and its meaning.

 

There was the athletic friend who twice tried to teach me how to hit a golf ball (and failed miserably).

 

And on and on

               

Learning to understand (and appreciate) the different skills and ways of thinking of the “non-technical” world has paid off over and over in the years since.

Smashing the Myth of Predictable Software Development Part 2: Living with Unpredictability

September 25, 2007

 

In a previous post I argued that software development is inherently unpredictable.  I believe the best metaphor for software is a growing human relationship, from first meeting to marriage.  Relationships, like software, grow along a predictable path, but at a random, unpredictable rate.  Thus they perfectly illustrate the nature of a software “project”: too many unknowns to allow for a predictable schedule.

Effectively creating software requires a process that can flex with this unpredictability and still produce useful results.

Fundamentally there are 3 levers that control software development.

1)       Risk

2)       Scope

3)       Schedule

 Risk

To have any hope of consistently delivering software developers must have control over risk.  This includes things like: leaving room in the budget for beefier hardware, developing and testing key architectural components before investing too heavily in the rest of the code, identifying scalability bottlenecks and testing them with code fragments that simulate full system activity, etc.   

Unfortunately, risk is consistently underestimated and often costs are cut in this area with catastrophic results.  In particular the concept of risk can be hard to get across to non-technical users.  Many people outside of IT misinterpret open and honest discussions of risk as a lack of confidence.

Personally, I’ve found the best way to approach risk with non-technical people is to (over) explain the reason for discussing risk in detail, namely that this is how we avoid problems and produce “SUCCESS”!  We programmers (and other IT people) take it for granted that risk management is the yellow brick road that leads to Oz, but non-technical users do not have that knowledge hard-wired in.  As the technical experts we programmers are often expected to double as teachers for those without the benefit of our technical training.  For those still in school: analyze your favorite teachers and take some notes on their methods.  Teaching skills can be a real asset to a programming career.

Scope

Control over scope is rare.  There is the infamous “scope creep” that wreaks havoc, but even more insidious is hidden scope.  I can’t count how many times I’ve had a request that seemed fairly straightforward.  Then somewhere between two days and two weeks later I learn of some funky little quirk in the business process that totally trips up the design.  The requirements remain unchanged while the project balloons to 4 times its original size.

A few software houses have managed to control scope.  Intuit, maker of popular financial software, releases new products on a yearly cycle.  There are some hard-and-fast scope requirements, tax law for example, but most of their scope is flexible.  Features that aren’t stable in time for the 2008 version end up in the 2009 edition.

Starting with Windows 95, Microsoft seemed to be flirting with that model, but their markets (or maybe the company itself) seem to prefer well defined improvements in scope over precise release schedules, so they have never been able to deliver consistently timed releases.  Since 2000 they seem to be steadily drifting away from dated releases.  (Ironically, this excludes IT-oriented products like SQL Server and Visual Studio; we developers like our naming schemes simple and mathematical.  Just the word “Vista” is enough to make me physically ill. 🙂)

Schedule

While scope is somewhat flexible it is typically not flexible enough to account for all the unexpected roadblocks.  The only other alternative is to vary the schedule enough to allow for the work to be completed.  Management (particularly non-IT management) finds this unsettling.  Many of the software horror stories circulating today have their origin in management that is unwilling to sacrifice either scope or schedule and ends up sacrificing staff instead.

In order to work around the scope/schedule problem some have resorted to saying—“We can meet the schedule, but must cut quality.”  Personally, I see this as a cop out.  Quality is an attribute of scope.  A buggy or hard to use solution is an unfinished solution.  Phrasing scope in terms of “quality” might be easier for non-technical people to understand.  However, it gives the idea that quality is optional.  The results are rarely pretty.

Personally, my approach to schedule management is to keep a close relationship with my user/customer community updating them regularly on the status and challenges of the current project.  Initially this approach often results in a burst of initial scope when people realize that someone in IT is finally listening!  However, it’s not long before a balance emerges.  I’ve found that (most) customers will voluntarily cut/modify scope in order to achieve their top priorities within a reasonable schedule.

However, a note of warning is due here.  A key part of a good relationship with a customer is to emphasize the infrastructure investments that must be made.  The customer must “own” not only their features, but the entire system that supports those features.  Otherwise they won’t value and appreciate the huge investment that must be made.  Software is a lot like an iceberg.  The 20% that is visible, the UI, is supported by the 80% that is under the water.

Summary

Far too often programmers are left without control of scope or schedule and are forced to absorb the randomness of software development by working extra hours.  If the software industry recognized the true nature of software we’d abandon our project-centric approach and turn to a more dynamic model: one of evolving systems (or sets of interacting systems) developed over the long term with an eye towards process and integration.  (I see signs of this in the current wave of Web 2.0 companies.)

Additional Reading: In this article Joel Spolsky comments that the construction phase of software can be estimated with decent accuracy, but the design and bug fixing phases are unpredictable.  An interesting idea with a number of practical applications.

Change Without Risk?

September 9, 2007

 

“Change Without Risk” the slogan read.  This was on a poster advertising the latest Oracle DB release that came shrink-wrapped with the latest copy of Oracle Magazine.  “Change Without Risk”. . . what a laugh.  The very idea is ludicrous. Please, someone put a leash on those marketing guys!

Everything has risk.  Upgrading has risk.  Not upgrading can have even more risk.  Evaluating and managing risk is second nature to those of us in IT.  But, it’s easy to get into a rut and not think deeply about risk until some silly piece of marketing hype comes along.

There are four basic ways to handle risk:

 Retain

The risks we keep.  Some of these are chosen—for example deciding to go with a backup tape vs. a hot offsite backup.  But most retained risks are not thought through and come as a surprise—particularly to non-technical people.  Unfortunately, a common “suit” strategy seems to be to “save” money by retaining more risk.  In most cases I think the “suits” simply don’t understand the extent and scope of the risks they are working with.

 Ameliorate

Find a way to reduce risk.  For example: back up the hard drive, provide a backup system, write maintainable code.  Ameliorating risk is usually the most effective way to manage risk.  However, it comes at a cost.  Usually the cost is relatively low, but one that non-technical decision makers may cut if they don’t understand the consequences.  I find that when I tell real stories of real failures in similar situations, people connect with and understand the risks a lot better and are more willing to take the necessary steps.

 Avoid

Choose a different solution that avoids the risk altogether.  I often code the “hard” parts of my applications first so I can identify the areas of greatest risk while I still have time to re-design and avoid risky solutions.

 Share

This is mostly applicable in the insurance world, but I think there are some applications to programming.  For example, I can code a robust data access layer and share it across several servers so that if one server goes down the others can seamlessly pick up the load.
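A minimal sketch of that idea in Python. Everything here is hypothetical: the server names and the `fetch` function stand in for a real data-access call, with the primary pretending to be down so the failover path is exercised:

```python
# "Share the risk" across replicas: if one server fails, try the next.
def fetch(server: str, query: str) -> str:
    """Stand-in for a real data-access call; raises when a server is down."""
    if server == "db-1":                      # pretend the primary is down
        raise ConnectionError(f"{server} unreachable")
    return f"{server} answered: {query}"

def fetch_with_failover(servers, query):
    """Try each server in turn; fail only if every server fails."""
    last_error = None
    for server in servers:
        try:
            return fetch(server, query)
        except ConnectionError as e:
            last_error = e                    # remember the failure, move on
    raise RuntimeError("all servers failed") from last_error

print(fetch_with_failover(["db-1", "db-2", "db-3"], "SELECT 1"))
# → db-2 answered: SELECT 1
```

The risk hasn’t disappeared; it has been spread thin enough that a single failure no longer takes the system down.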

 Summary

It’s impossible to live life without risk, but with proper precautions risk can be reduced to a point where it’s livable.  And to y’all over at Oracle: next time you make a poster, please advertise some real benefits, not some “Change Without Risk” hype.  I really would like a poster I can feel good about hanging up beside (or on top of) my Microsoft poster without embarrassment.  🙂

  

P.S. For those of you patiently awaiting the follow-up to my previous post: I’m working on it and should have it finished in the next few days.

The Myth of Software Estimation

August 26, 2007

I cringe every time I see or hear a mention of software estimation. After years of watching my estimates and the estimates of my colleagues fail, I’ve come to the conclusion that beyond the roughest of figures software estimation is impossible. This is not due to negligence or incompetence. Nor is it due to an immaturity of the software field. It is simply a fact of software development that software estimation is a myth.

The Right Metaphor
To defeat the estimation myth, we must first discard the faulty metaphors assigned to software: construction, writing, engineering. In reality, developing software is more akin to a developing human relationship.

When I was in grad school I began to notice that from time to time a particular young woman would join my table at the cafeteria for lunch. I didn’t think much about it until one day I looked up at her and realized that perhaps the reason she was sitting at my table was because she liked me.

We geeks aren’t known for our social intuition, so this was a big moment.  I did some rough calculations on the speed and complexity of relationship building and, making proper adjustments for personality factors, estimated we’d be married in 18 months . . . not!

A marriage “go live” date was the last thing on my mind. In fact it was months and several relational milestones later before we even discussed a date. As it turned out, circumstances beyond our control delayed even that date. But in the end the pieces fell together and we were happily married. Now we have a house, a mortgage and two small children.

If we substitute “software project” for “young woman” and “maintenance and support” for “mortgage and family responsibilities” we would have the perfect description of the software development project.

Relationships Are Unpredictable
Why can’t we predict relationships? It is certainly not due to insufficient study. People have strived to understand how love works for thousands of years. Yet, at best, matchmaking has always been an art of intuition and guesswork. There are simply too many variables and too many unknowns to predict how two humans will relate. The same is true of software. Each new project is built in unknown territory. Even a simple project has a myriad of variables and unknown factors. It is simply impossible to predict the result.

An Example
A few months ago a web service I’d written began to eat up memory like crazy. I’d coded it using a modern garbage-collected language, so theoretically a memory leak was impossible. But it was happening. I tracked down every obvious cause. I talked to my co-workers. No luck. Finally, after two weeks, I discovered the problem: a small bug inside a loop was interacting with Microsoft Active Directory to create hundreds of AD tokens, and another bug caused these tokens to skip garbage collection. Schedule? Throw that to the wind.

Every software developer has stories like this one. You probably have one from just last month, or even last week! The interaction between human and computer is simply too complex to predict when the next brick wall will come sailing out of nowhere. I like to tell junior developers that the art of developing software is like running hurdles—except that instead of hurdles we run at brick walls and in the middle of the night. When we hit one we scramble up, dig beneath, or bash our way through then sprint headlong into the next wall.

Large Project Estimation
Individual estimation is impossible. However, if enough developers are involved, the random occurrence of schedule-breaking events averages out and becomes measurable “white noise”. That “white noise” can then be factored into the estimate as overhead. Estimation models such as COCOMO are built on this predictability.
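To see why averaging helps, here is a minimal simulation. The delay distribution is entirely invented for illustration: a single developer’s delay is pure chance, but the average across many developers settles near a predictable mean that can be budgeted as overhead.

```python
import random

random.seed(42)

def developer_delay():
    # Hypothetical distribution: most tasks land on time, but an
    # occasional "brick wall" blows the schedule by weeks.
    return random.choice([0, 0, 0, 0, 0, 1, 2, 10])  # weeks late

def average_delay(n_devs):
    # Mean delay observed across n_devs independent developers.
    return sum(developer_delay() for _ in range(n_devs)) / n_devs

# One developer is a coin flip; thousands converge toward the
# distribution's true mean (13/8 = 1.625 weeks of overhead).
print(average_delay(1), average_delay(100), average_delay(10_000))
```

The larger the team, the closer the observed average sits to 1.625 weeks: exactly the kind of overhead figure a model like COCOMO bakes in.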

However, there are clear limitations to this type of model.

  • A limited number of critical-path options may narrow a project schedule to a tiny set of dependencies. Once a schedule depends on a very small group of developers (or a single developer), it can (and will!) be blown about by random chance.
  • If a project depends on unproven technologies, or on technology unproven in the domain, the number of random roadblock events increases unpredictably. A key assumption of large-project estimation is that the average number of schedule-delaying events per developer is known and can be factored in as overhead. With new technologies this cannot be known and cannot be accurately factored into the estimate.
  • Range of precision: models are appealing because they provide a number; however, even in ideal conditions they are far from precise.
In short, I would be extremely cautious about scheduling or budgeting based solely on the results of a software estimation model. These models are still useful, but they should be weighed carefully in light of the limitations above. One of the best uses may in fact be as an antidote to unrealistic optimism, i.e. “No, this isn’t a 6 month project. It really WILL take *THREE FULL YEARS!*”
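For concreteness, the Basic COCOMO model estimates effort as a power law of code size. The sketch below uses Boehm’s published “organic mode” coefficients; calibrating the constants to your own shop (and trusting the resulting single number) is where the trouble described above begins.

```python
# Basic COCOMO, organic-mode coefficients (Boehm).
A, B = 2.4, 1.05   # effort   = A * KLOC**B     (person-months)
C, D = 2.5, 0.38   # schedule = C * effort**D   (calendar months)

def cocomo_estimate(kloc):
    effort = A * kloc ** B
    months = C * effort ** D
    return effort, months

for kloc in (10, 50):
    effort, months = cocomo_estimate(kloc)
    print(f"{kloc} KLOC: {effort:.0f} person-months over {months:.1f} months")
```

The model hands back one tidy number per input, which is precisely the seductive precision the “range of precision” bullet warns about.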

    Summary
    We (and our users) must accept that a software estimate is about as good as a prediction of marriage made on a first date.

    Upcoming Article: How to keep your “customers” happy without estimates. (Coming in early September)


    Further Reading:
    Large Limits to Software Estimation

    Is Leadership a Programming Skill?

    August 15, 2007

    The other day I posted an article on leadership to programming.reddit.com.  Within minutes it had been voted down.  The article was popular elsewhere, so it wasn’t the article itself.  Apparently the consensus is that leadership doesn’t fit with programming.  I disagree.  Here’s why leadership *IS* a programming skill:

    1) The Trend Towards Integration
    The days of isolated systems that don’t communicate with each other disappeared years ago.  We are often called upon to write code that forms the glue between various systems, and between various groups of people.

    In my own job I often find myself working directly or indirectly with dozens of people.  The social integration task may be as much of a challenge as the technical task.  Programmers must be able to effectively work with different people in different roles. 

    2) Projects Are Bigger than One Person – And Include Non-IT Members
    One day I learned of a system problem and went to talk with the clerk who reported it to get more information.  To my surprise the problem had started days before, but it wasn’t big enough to bother his boss about.  I didn’t hear about it until it had reached the minor-crisis level.  I immediately made a mental note to add this clerk to my “team”.  From then on I regularly checked back to find out if any new issues had come up.  It didn’t take long for him to learn the scenarios that indicated potential problems, and he would report anything strange that came his way.  The result of this team: timely first-hand information and some good PR for IT.

    Programmers tend to see their team as only fellow coders.  The many programs that poorly fill user needs are evidence enough of the harm this causes.  Effective programmers draw the user community into the “team” and together work towards common goals.  Tactfully overcoming geographic and organizational hurdles is as much a part of a programmer’s job as design patterns and class hierarchies.

    3) Effective Leaders Know How to be Good Followers
    Good leaders know how to organize and focus efforts on the task at hand.  Those skills make it easier to contribute to any project—regardless of who is leading.  If you don’t have the humility to be a good follower, you won’t be a good leader.  People don’t voluntarily choose to follow someone on an ego trip!

    4) The Ego Problem
    Let’s face it: a large percentage of the people in IT are in it to feed oversized egos.  (From time to time that has included me!)  The results are devastating.  Leadership counters self-absorption by focusing energies on improving the community.  Over the last few years I’ve seen a lot of positive growth in this area.  The growing strength of the open source movement is a powerful testimony to the great attitudes of so many programmers.

    Summary
    In university I decided not to be a programmer because I enjoy people too much to stand the idea of spending 100% of my time staring at a computer screen.  Thankfully, that detour didn’t last too long.  How misinformed I was: I can’t remember the last time I spent 100% of my day staring at the computer screen without some break to work with a colleague, answer a question, discuss a design, etc.  Computers were built by people and for people.  The human factor can never (and should never) be removed from that equation.


    Jeff Staddon is a full time software developer living near Chattanooga, TN. USA


    effectiveexecutive.jpg

    Jeff’s Book Recommendations:

    Probably due to its title, The Effective Executive is almost unknown inside IT. However, I found it to be very useful. (It’s really about being an effective knowledge worker.) While aimed at management, most of the content is applicable to any IT role.

     

    What Databases Should Do For Me

    August 12, 2007

    When I was a kid, Christmas was the most exciting day of the year. I’d rip open the packages to see if I finally owned what I’d spent months hoping for.  In technology we don’t have a scheduled yearly Christmas, but occasionally we get exciting new technology.  And I can dream of the great things I’d like to see.  Here’s my wish list of what I’d like databases to do:

    Automatic Transactions
    Many times when testing or troubleshooting I need to see the history of record changes. I wish the following SQL were valid:

      SELECT * FROM item_master
      WHERE item_master.item_id = 475
      FROM SYSDATE - 1 TO SYSDATE

    Some of you are going to argue, “What’s wrong with creating a trigger and writing changes to a transactional table?”  There isn’t anything wrong with that approach, but it’s a time waster: create the trigger, create the table, maintain the table when fields are added or dropped, create reports on the transactional table, blah, blah.  Millions of databases around the world face the same problem and apply the same generic solutions.  Generic problems should be solved in one place, not millions of places.
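    For anyone who hasn’t written one, the hand-rolled approach being dismissed here looks roughly like this sketch, using SQLite from Python so it’s self-contained. The `item_master` schema and the price values are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A tiny invented item_master table plus its hand-rolled history table.
cur.executescript("""
CREATE TABLE item_master (item_id INTEGER PRIMARY KEY, price REAL);
CREATE TABLE item_master_history (
    item_id INTEGER, price REAL,
    changed_at TEXT DEFAULT CURRENT_TIMESTAMP
);
-- The trigger every shop writes by hand: copy the old row on update.
CREATE TRIGGER item_master_audit AFTER UPDATE ON item_master
BEGIN
    INSERT INTO item_master_history (item_id, price)
    VALUES (OLD.item_id, OLD.price);
END;
""")

cur.execute("INSERT INTO item_master VALUES (475, 9.99)")
cur.execute("UPDATE item_master SET price = 12.50 WHERE item_id = 475")
conn.commit()

history = cur.execute(
    "SELECT item_id, price FROM item_master_history WHERE item_id = 475"
).fetchall()
print(history)  # the pre-update row: [(475, 9.99)]
```

    Every field later added to `item_master` must also be added to the history table and the trigger, which is exactly the maintenance tax the wished-for time-travel syntax would eliminate.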

    Automatic Indexing
    In my opinion, discussions about which database is fastest are usually moot. In the practical world the fastest database is the one with the best indexing (and sufficient I/O capacity). Most databases are severely under-indexed. Above a certain cost threshold, databases should keep statistics on queries and automatically create indexes for costly or common queries. (Probably a combination of the two factors: a very common query with low cost deserves an index as much as an uncommon query with high cost.)  The RDBMS would also need to track the cost savings of the new indexes and drop them if they no longer provide sufficient savings.  There would need to be a cap on index creation if a table has too many indexes (i.e. if the indexes are interfering with updates and inserts).  Index creation would need to be load-aware so it wouldn’t kick off during the busiest time of the day. . . as you can see, this would not be trivial.
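    A toy version of the frequency-times-cost heuristic sketched above might look like this; every query name, cost figure, and the threshold itself are invented:

```python
from collections import Counter

# Invented numbers: per-execution cost and execution counts per query.
query_cost = {"q_by_item": 5.0, "q_full_scan": 900.0, "q_rare": 2.0}
executions = Counter({"q_by_item": 10_000, "q_full_scan": 40, "q_rare": 100})

THRESHOLD = 30_000  # assumed tuning knob for "worth an index"

def index_candidates():
    # frequency * cost: a cheap-but-common query can score as high
    # as a rare-but-expensive one, combining the two factors.
    return sorted(q for q in executions
                  if executions[q] * query_cost[q] > THRESHOLD)

print(index_candidates())  # ['q_by_item', 'q_full_scan']
```

    A real implementation would also have to measure write overhead, honor the index cap, and schedule builds off-peak, which is where the “not trivial” part comes in.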

    Automatic indexing would not replace manually defined (i.e. permanent) indexes, but it would be awesome.  Especially for purchased applications.  Purchased applications are chronically mal-indexed (one reason being that no two customers use the application in exactly the same way), and no one at the customer site knows what goes on under the hood to fix it.  Indexing problems in purchased applications are rarely resolved in a timely manner (or at all).

    Appeal to MySQL Developers
    I wish these features were part of Oracle or SQL Server, but other than natural product evolution I don’t think the giants have much innovation left in them. If we’re going to see innovation in databases it has to come from somewhere else.


    Jeff’s Book Recommendations:

    The Mythical Man-Month is the software engineering classic. This book should be mandatory reading for the professional programmer.