Programmers, generally speaking, like writing code. It seems obvious, but it's important to the point I would like to make.
Software defects arise from writing code. Sure, there are classes of errors that arise because programmers or stakeholders simply get requirements or specifications wrong, but mistakes in understanding or requirements only manifest once they are translated (by flawed, fallible humans) into something executable.
So a very simple way to greatly reduce the number of defects in our software seems to be to stop humans from writing code. By pushing more of the work into compilers and tools (which we can verify with a high degree of confidence), we reduce the areas where human error can lead to software defects.
We're already on this path, essentially. Very few people write very low-level code by hand these days. We rely on compilers to generate executable code for us, which allows us to work at a higher level of abstraction where we are more likely to be able to analyse and discover mistakes without needing to run the program.
Similarly, type systems integrated with compilers and static analysis tools remove the burden on us as programmers to manually verify certain runtime properties of our systems. Garbage collectors remove humans from the memory-allocation game altogether.
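To make the compile-time side of that concrete, here is a minimal sketch (my own illustration, in TypeScript; the "branded" unit types are invented for this example) of a type system checking a property before the program ever runs:

```typescript
// Branded types: the compiler distinguishes two kinds of number,
// so mixing them up is a compile-time error rather than a runtime surprise.
type Celsius = number & { readonly __unit: "C" };
type Fahrenheit = number & { readonly __unit: "F" };

const celsius = (n: number): Celsius => n as Celsius;
const fahrenheit = (n: number): Fahrenheit => n as Fahrenheit;

function toFahrenheit(c: Celsius): Fahrenheit {
  return fahrenheit((c * 9) / 5 + 32);
}

const boiling = toFahrenheit(celsius(100)); // 212
// toFahrenheit(fahrenheit(212)); // rejected by the compiler: wrong unit
```

At runtime these are plain numbers; the unit bookkeeping is done entirely by the type checker, which is exactly the kind of manual verification we no longer have to do ourselves.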
See what I'm getting at? We have progressively removed bits of software development from the reach of application developers. Similarly, the extensive standard libraries packaged with mainstream programming languages (hopefully!) mean that programmers no longer need to create bespoke implementations of often-used features. The less code a programmer writes, the fewer chances he or she has to introduce errors. (Errors in library implementations are a separate issue; still, a finite body of code reused by many people is likely to become far more reliable over time than a bespoke implementation used in one place.)
The rise of various MVC-style frameworks that generate a lot of boilerplate code (e.g. Ruby on Rails, CakePHP, etc.) further shrinks the sphere of influence of the application developer. In an ideal world, we would be able to use all of these sorts of features to ensure that we essentially just write down the interesting bits of our application functionality, and the surrounding tools ensure global consistency is maintained. As long as we can have a high degree of confidence in our tools, we should be producing very few errors.
There is one basic problem: it doesn't go far enough.
Despite their best intentions, Ruby on Rails and CakePHP are basically abominations. I speak only of these two in particular because I've had direct experience with them. Perhaps other such frameworks are not awful. The flaws in both frameworks can essentially be blamed on their implementation languages, and the paradigm that governs their implementations. Without any kind of type safety, and with very little to help the programmer avoid making silly mistakes (e.g. mis-spelling a variable name), we can't really have a high degree of confidence in these tools.
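A hypothetical example of the kind of slip I mean, sketched in TypeScript (the function and the typo are both invented for illustration): in a dynamically checked language a mis-spelled name often just evaluates to nil or undefined at runtime, possibly far from where the mistake was made, whereas a static checker rejects it before the program ever runs.

```typescript
// With static checking, a mis-spelled name is caught at compile time.
function greet(userName: string): string {
  return "Hello, " + userName;
  // return "Hello, " + userNmae; // compile error: Cannot find name 'userNmae'
}

console.log(greet("Ada")); // prints "Hello, Ada"
```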
Compilers, on the other hand, are generally very good. I have a high degree of confidence in most of the compilers I use. Sure, there are occasional bugs, but as long as you're not doing safety-critical development, most compilers are perfectly acceptable.
So why are there still defects in software? First, most new developments still use old tools and technologies. If any kind of meritocracy were in operation, I would guess that very few new things other than OS kernels and time-critical embedded systems would be written in C, but that's simply not the case. Many things that make us much better programmers (by preventing us from meddling in parts of the development process!) are regarded as "too hard" for the average programmer. Why learn how to use a pre-existing implementation that has been tested and refined over many years when you can just roll your own, or keep doing what you've always done? Nobody likes to feel out of their depth, and clinging tightly to old ideas is one way to prevent this.
Having done quite a bit of programming using technologies that are "too hard" (e.g. I'm a big fan of functional programming languages such as ML and Haskell), I think that if you use these technologies as they are designed to be used, you can dramatically reduce the number of defects in your software. I know I criticised methodology "experts" in my previous post for using anecdotal evidence to support claims, but this isn't entirely anecdotal. A language with a mathematical guarantee of type safety removes even the possibility of deliberately constructing programs that exhibit certain classes of errors. They simply cannot happen, and we can have a high degree of confidence in their impossibility. As programmers, we do not even need to consider contingencies or error handling for these cases, because the compiler will simply not allow them to occur. This is a huge step in the right direction. We just need more people to start using these sorts of approaches.
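As a rough illustration of one such class, here is a Maybe-style option type in the spirit of ML and Haskell, sketched in TypeScript (my own example, not a definitive encoding). Consuming code must handle the "nothing" case before it will type-check, so a forgotten missing-value case simply cannot compile:

```typescript
// A Maybe-style type: a value is either present ("just") or absent ("nothing").
type Maybe<T> = { kind: "just"; value: T } | { kind: "nothing" };

// Unlike an unchecked head function, this cannot blow up on an empty list.
function safeHead<T>(xs: T[]): Maybe<T> {
  return xs.length > 0 ? { kind: "just", value: xs[0] } : { kind: "nothing" };
}

function withDefault<T>(m: Maybe<T>, fallback: T): T {
  switch (m.kind) {
    case "just":
      return m.value;
    case "nothing":
      return fallback;
    // no other cases exist, and the compiler knows it
  }
}

withDefault(safeHead([7, 8, 9]), 0); // 7
withDefault(safeHead<number>([]), 0); // 0
```

The point is not the particular encoding but the guarantee: code that forgets the "nothing" case is not a latent bug waiting for production, it is a program the compiler refuses to accept.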
So, the title of this post was "When are we going to stop writing code?", and I ask this with some seriousness. As we shrink the range of things that programmers are responsible for in software development, we shrink the set of software defects that they can cause. Let's keep going! I believe it is very nearly within our reach to stop writing software and start specifying software instead. Write the specification, press a button and have a full implementation that is mathematically guaranteed to implement your specification. Sure, there may be bugs in the specification, but we already have some good strategies for finding bugs in code. With a quick development cycle, we could refine a specification through testing and through static analysis. We can build tools for specifications that ensure internal consistency. And as in the other situations where we have been able to provide humans with more abstract ways to represent their intentions, it becomes much easier for a human to verify the correctness of the representation with respect to their original intentions, without the need to run a "mental compiler" between code and the expected behaviour. This means we can leave people to solve problems and let machines write code.
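In that spirit, here is a toy sketch (entirely my own, in TypeScript, with invented names) of what "write the specification, then check implementations against it" can look like with today's tools. The specification is a pair of checkable properties rather than an algorithm:

```typescript
// A "specification" for sorting, written as properties, not as code.
function isSorted(xs: number[]): boolean {
  return xs.every((x, i) => i === 0 || xs[i - 1] <= x);
}

function sameMultiset(xs: number[], ys: number[]): boolean {
  const a = [...xs].sort((p, q) => p - q);
  const b = [...ys].sort((p, q) => p - q);
  return a.length === b.length && a.every((x, i) => x === b[i]);
}

// The spec: for any input, the output is sorted and is a permutation
// of the input. Any candidate implementation can be checked against it.
function satisfiesSortSpec(
  sort: (xs: number[]) => number[],
  input: number[]
): boolean {
  const output = sort(input);
  return isSorted(output) && sameMultiset(input, output);
}

const mySort = (xs: number[]) => [...xs].sort((p, q) => p - q);
satisfiesSortSpec(mySort, [3, 1, 2]); // true
```

Property-based testing tools such as QuickCheck take this further by generating the inputs automatically; full specification-to-implementation synthesis would go further still.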
That said, it's probably still not realistic for people to stop writing code tomorrow. The tools that exist today are far from perfect, and we're going to be forced to write code for the foreseeable future. But we can get pretty close to the Utopian ideal simply by using the best tools available to us here and now, and in the meantime, I'm going to keep working on writing less code.