Every once in a while, there is a need to create a working prototype of some experimental functionality. Let's say you are a programmer. A marketing guy asks you to make something up from a wireframe he gives you... Of course this is wrong right from the beginning: in a large project you should consult the Architect before making any add-ons or changes to the application, Product Management shouldn't report into marketing, and he probably forwarded it straight from the customer anyway... But sometimes you need to try something out. Show it to your customer and your QA team and tell them it's an alpha. Show it to your users and tell them it's a beta. Then your marketing boss comes along and says: "Good job! It's working! Leave it!" At this point, the new product has made it through a release cycle, but users haven't started using it. After a while, it stops working...
So what is a prototype? Some people picture libraries or other components stuck together to look like a finished product (which it's not). A prototype lacks key elements of software design and implementation: modularity, documentation, unit tests... It can have unpredictable capacity limitations (because of limited testing and no design phase). The resulting code will be unreadable to developers who didn't participate in its creation.
Tell a programmer to make it quick, or to hurry up, and he will produce something more or less fitting this description.
My experience suggests that if you keep making iterative changes to the prototype, bad things happen:
- Your new project can suddenly become just too big. Even if you use frameworks with well-defined design patterns, like MVC, no one person can grasp all the relations between modules, changes in state and logic, etc. (no technical leadership).
- If you didn't write documentation, only a small number of programmers understand and work on the core application (which is usually just a fragment of the first prototype).
- Some functionality can become very difficult to maintain after its author leaves the team.
- Poorly designed or undocumented code accumulates bloat and becomes slow and buggy.
- If you forget about tests, you can't easily rewrite individual modules inside the core without making changes across the whole package, changing APIs and introducing bugs. Your current API is opaque and hard to debug.
- If you didn't document all the application's features, you can't simply rewrite it; in fact, it can be nearly impossible.
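The point about tests deserves a concrete sketch. Here is a minimal example in Python; the `slugify` function is a hypothetical helper invented for illustration, not something from any real prototype. The idea is that even one small test pins down observable behavior, so the implementation behind it can be rewritten freely as long as the test still passes:

```python
def slugify(title):
    """Turn a title into a URL slug. A first, naive implementation."""
    return "-".join(title.lower().split())

def test_slugify():
    # These assertions describe behavior, not implementation details.
    # You can replace the body of slugify() entirely (regex, C extension,
    # whatever) and know you haven't broken callers, as long as they pass.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spaces   Everywhere ") == "spaces-everywhere"

test_slugify()
```

A prototype without such tests offers no safety net: every internal rewrite risks silently changing behavior somewhere in the package.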
Now you have two options: dump your current customers (or wait for them to fire you), or sell your users/clients a new, rewritten version of the product with new functionality and better usability. I've seen it go both ways.
I won't look further into the problem of project management here. I'm more interested in what we can do by changing the solution's architecture.
When making a prototype, an average programmer will dump his blob of code, thus creating his application; a resourceful programmer will use many external calls to libraries, toolkits and frameworks the average programmer hasn't heard of, writing less code and finishing his assignment earlier; but a successful programmer will also create tools of his own. The premise here is that libraries have an interface, tests and documentation.
Using libraries means less code to develop and fewer bugs. Frameworks provide a solid, well-tested base, free built-in functionality, examples and a common programming style. Many tools are also component-based and use consistent terminology, which matters in a large project with many developers. The downside is always the learning curve, and sometimes performance.
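As a small illustration of the "less code, fewer bugs" point, compare parsing a CSV line by hand with using Python's standard `csv` module (the sample line is made up for this sketch). The hand-rolled split looks fine until a field contains a quoted comma:

```python
import csv
import io

line = 'Smith, "Doe, Jane", 42\n'

# Hand-rolled: naive split breaks on the comma inside "Doe, Jane".
naive = [field.strip() for field in line.split(",")]

# Library: the csv module already handles quoting, escaping and dialects.
parsed = next(csv.reader(io.StringIO(line), skipinitialspace=True))

print(naive)   # ['Smith', '"Doe', 'Jane"', '42']
print(parsed)  # ['Smith', 'Doe, Jane', '42']
```

The library version is shorter, and the edge cases it covers are exactly the bugs a prototype's hand-rolled parser would ship with.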
Am I against prototyping? Surprisingly, no! Even if you are average, or have average programmers on your team, you can plan to fail and plan for rewrites. This is what good Product Managers do (they might be hiding this fact from you).
Some time ago I read an opinion that you have to (re)write your application three times, because the third version is generally the best. It goes something like this:
The first is the "I don't know what I'm doing" version, written by trial and error, full of ugly hacks and without a decent design. Sometimes it works quite well, however: despite not being very pretty, it does what it's supposed to.
The second is "V1 is crap, but now that I have figured it all out I can do better!" Often a horrible mess, due to things like wanting to make everything modular, adding every feature possible, and using the latest cool tech and design patterns where they don't belong. It turns out to be slow, huge, buggy and overly complicated to use. There's even a name for this: the "second-system effect".
Based on the lessons learned from the lack of planning in the first and the excesses of the second, the third version has a good chance of being actually decent.
However careful your planning, you may still want to be on the edge of innovation and release unfinished software (and be more like Google). Maybe you want to start an Open Source project, which needs to gain momentum, not lose it.
I will explore my ideas on how to plan for big and small changes in my next post.
As a side note: if you are new at this and you just found out your boss is from marketing, don't do prototyping, and forget about quick hacks when bug-fixing. If you plan on keeping your job longer than a year, focus on writing good code and learn what the current application does. Your work will be appreciated by your coworkers. If you see others doing it, stop them, or start looking for a new workplace. And be careful not to become a superstar developer.