Some psychologists object to extending these results to an explanation for superstitious behaviour in humans, but I believe the link between reinforcement and repeated behaviour is relatively well-demonstrated.
What's the point of this pigeon story? I think it applies to programming.
Having a poor mental model of the semantics of a programming language leads to all sorts of bizarre superstitious behaviour. You only have to read thedailywtf for a little while to see all kinds of examples of the programming equivalent of throwing salt over your shoulder and turning around three times. Let me stretch the analogy a little further:
Certain programming structures lead to success in a task, and that success reinforces the techniques used to reach the solution. The "reinforcement" could be a clean compile, a passing test or a successful run. Similarly, there is a kind of punishment schedule - compile errors, failing tests or runtime failures.
It's important to draw a distinction at this point between simple learning-from-experience and the kind of superstitious behaviour that I'm arguing occurs in programming. Let me give an example. Having a poor knowledge of a given language, let's say that I construct a program and apply techniques A, B, C and D. These could be anything, really - naming schemes for variables, grouping of sub-expressions with parentheses, adherence to particular patterns, or anything else. It may be that only A and C were actually necessary for the program to compile and run; unfortunately, I've also included B and D. If I keep applying B and D the next time I write a program, and the time after that, they become entrenched habits. They serve no purpose, except to make the code harder to read for programmers who don't share my habits. Still, this isn't inherently "superstitious" - it's just learning from experience, albeit the school of programming that throws everything at a wall and sees what sticks.
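To make this concrete, here's a hypothetical Python sketch of "techniques A-D" (the function and the tax rate are invented purely for illustration). A and C do real work; B and D are ritual that got entrenched because the program kept working with them in place:

```python
def total_price(items):
    # A: genuinely needed - guard against missing input.
    if items is None:
        return 0.0
    # B: superstition - the extra parentheses add nothing; operator
    # precedence already multiplies before summing.
    subtotal = sum(((item["price"]) * (item["qty"])) for item in items)
    # C: genuinely needed - callers expect tax to be included.
    subtotal *= 1.10  # assumed 10% tax rate, purely illustrative
    # D: superstition - deleting locals "to free memory" does nothing
    # useful here; they vanish when the function returns anyway.
    result = round(subtotal, 2)
    del subtotal
    return result
```

Nothing punishes B and D - the program compiles and runs either way - so they survive every round of "reinforcement" alongside A and C.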
Now, here's the crux: it is inconsistent application of "reward" that leads to superstitious behaviour. This inconsistency exists in two basic forms in programming languages. The first is hidden behaviour: when an environment carries a lot of hidden assumptions, it becomes very difficult to discern why your program works. This is where the compiler or framework takes on a "magical" quality, the rules governing success or failure being so complex as to seem almost random.
The second opportunity for inconsistent reward is late failure. I find this behaviour regularly in high-level, dynamically-typed, interpreted languages. Where the goals of these languages emphasise being accessible and intuitive for beginners, this in many cases equates to "keep running until the error is completely fatal". When failure occurs well away from the actual source of the error, diagnosing and correcting it quickly becomes an exercise in near-random mutation of source code. When cause and effect are very difficult to predict, changing one part of the system and having another piece break (or start working) becomes a reinforcement schedule of its own. This has two dangerous impacts:

1) It leads to unnecessary error handling in places there shouldn't be error handling. The hideous global "try/catch" block wrapping an entire program is a symptom of failure occurring much later than it should.

2) Programmers become timid about touching certain parts of a system. This leads to more "magical thinking", whereby a piece of the system is not only difficult to understand, but is treated as something that should not be understood. This makes it almost impossible to refactor or grow a system in a structured way - there will always be bits of "magical" code waiting to break unexpectedly.
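A small, hypothetical Python sketch of late failure (the `User` and `db` names are invented for illustration): the "late" lookup lets a bad id travel on silently, so the eventual crash points nowhere near the mistake, while the "early" version fails at the source.

```python
from dataclasses import dataclass

@dataclass
class User:
    email: str

db = {"alice": User(email="alice@example.com")}

def find_user_late(db, user_id):
    # A missing id quietly becomes None here - no complaint at the source.
    return db.get(user_id)

# Much later, somewhere else entirely:
#   send_welcome(find_user_late(db, "alicee").email)
# blows up with an AttributeError on None, giving no hint of which
# call supplied the misspelled id.

def find_user_early(db, user_id):
    # Failing at the point of the mistake pins the blame immediately.
    user = db.get(user_id)
    if user is None:
        raise KeyError(f"no such user: {user_id!r}")
    return user
```

The first style is exactly "keep running until the error is completely fatal"; the second trades a little ceremony for an error that names its own cause.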
Of the two sources of inconsistency, the first is the easier to address. It basically corresponds to "learn enough about your environment that it no longer appears magical":
- Test your assumptions regularly - Do you really need those extra parentheses? If you're not sure, try it without.
- Be a square: read the manual - it seems incredibly dull, but it's actually worth understanding your language of choice from the ground up. You may be a good enough programmer to pick up languages just by reading a bit of source code and a few ad hoc tutorials, but there is something to be said for the systematic approach to learning a language - namely, the hidden behaviours of the environment become considerably less magical.
- Avoid magic - If you're considering web frameworks or programming languages or whatever, beware the phrase "it all just works". If you don't understand how or why it works, and you don't believe you are willing to devote the effort to understanding how or why it works, then the benefits of any whiz-bang features are going to be eaten up by the maintenance and correctness issues you'll suffer as a result of superstitious behaviour.
So the only advice I can offer, aside from "fail early", is to employ self-discipline in the absence of language-enforced discipline. Build structures that cause your program to fail early rather than deferring failure, and avoid developing superstitious attachments to particular pieces of code. If everyone spends a little time questioning their own attitudes towards development (and particularly towards debugging), then maybe there will be a little less superstitious coding behaviour in the world.
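As a closing sketch of what a "structure that fails early" might look like in Python (the names are invented for illustration): validate a value once, at the boundary where it enters the program, so nothing invalid can travel deep into the system and detonate later.

```python
class Percentage:
    """A discount value that refuses to exist in an invalid state."""

    def __init__(self, value):
        # Fail at construction time, at the source of the bad value,
        # rather than wherever the number is eventually used.
        if not 0 <= value <= 100:
            raise ValueError(f"percentage out of range: {value}")
        self.value = value

def apply_discount(price, discount):
    # No defensive re-checking and no blanket try/except needed here:
    # a Percentage is valid by construction.
    return price * (1 - discount.value / 100)
```

Because the check lives in one obvious place, there's nothing magical to be superstitious about - a bad discount fails loudly, immediately, and with a message that points at the cause.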