Monday, 31 August 2009

Making money from programming language design

I've had an interest in programming language design for a long time. Lately, however, as I've been working on my prototype bigraphs-based language, I've been pondering the economics of programming language design. Not that I have ever been particularly interested in making vast sums of money from designing programming languages - I'm much more interested in solving interesting problems - but it's telling that PL design is viewed as a bit of a poor cousin to other varieties of mad scheme. You're a startup with a $100k/month burn rate designing a new Web 2.0 application that will simultaneously topple Facebook and Spotify and cure cancer? Sure, sounds exciting! You're designing a new programming language? Oh, it'll never catch on.

The lack of enthusiasm from those outside the PL design community probably stems from the fact that it is a bit of a lame duck in terms of earning money*. I do it for fun and interest (and making the world a better place, etc.), but that argument doesn't fly with some people.

* I know, there are lots of other reasons. The strange ways in which adoption and rejection of languages by users works would make me nervous if I were an investor. Also, sometimes it's hard to get enthusiastic about conversations over the merits of particular syntactic constructs.

So this post is dedicated to various ways that it might be possible to make money from programming language design (and implementation). There is probably a large overlap here with the economics of software in general. This is largely hypothetical, as I can't say that I've succeeded (or even tried) in making any significant amount of money from programming language design. I'm basically just "thinking out loud" here. Some of these ideas might work to earn some beer money for one guy in a garage, while others might only be achievable by huge corporations with millions of dollars of contracts. Either way, my point is simply to enumerate some ways in which programming language design and implementation itself could be financially rewarding.

1) Commercial compilers

By "commercial compilers" I mean you create a single reference implementation that is "the compiler", and then you sell licenses for the compiler software. This is probably not a serious option outside a few specialist domains, given the myriad of production-quality compilers for most languages, almost all of which are free and open-source. This might be an option if you are targeting some very specific (and probably very expensive) piece of hardware, especially if your language provides some value that the vendor-supplied implementation does not. The "specific" part is probably the important part. In the absence of other options, people become much more willing to part with money. If they can just use the same open-source compiler for the same old language they've always used, they probably will.

Incidentally, I am interested to know how the economics of the entire .NET thing has worked out. I assume Visual Studio is still a commercial product? It's been several thousand years since I looked at it. I assume the incentive for the creation of C# was not to profit directly from the language, but rather to create a big vendor lock? The existence of Mono creates an even stranger situation. If you know more about this side of .NET than me, I'm eager to learn more.

2) Free compilers, commercial add-ons

This is a pretty simple model. Open source the compiler, then sell value-added IDEs, debuggers, accelerators, static analysis tools and anything else you can think of. My understanding is that this is largely the route taken by Zend. It's probably not a bad approach, weakly corresponding to the "first hit is free" school of business employed by many entrepreneurs working on street corners. If you control both the language specification and the implementation, you are likely the best-positioned to create accompanying tools that "professional" users of the language are likely to want or need. This permits people to become fully familiar with the language as hobbyists, produce real (useful) software using the freely available tools, and then move on to commercial programming environments where the cost of tools is hardly considered.

Personally though, I'm pretty stingy when it comes to buying software that I don't really need. Commercial IDE? I don't use IDEs anyway. Commercial debugger? I'll do print-debugging if I have to. The "our implementation is crap unless you buy X" stuff is pretty much an automatic way of making me not give your language a second thought.

3) Sell support

You know, the old open source trick. I have no interest in the debate as to whether this is commercially viable or not, but it seems to work for some people. I'd rather not. I assume this would essentially equate to some kind of commercial support, either becoming a producer of software in your language, or providing training and tool-support services. This might extend to providing some kind of certification process for programmers.

4) Domain-specific embeddings

Domain-specific languages are hot right now. I quite like the idea that a single language design could then be specialised into domains in exchange for money. I think the key difference between making money from programming languages and software in general is that people are almost completely unwilling to pay for compilers or tools for general-purpose languages. There's just no need in this day and age. However, within a domain-specific context, it would appear that there may be more acceptance and willingness to pay for tools and support. It might be that you can convince large company X to pay you to create a DSL for their particular domain, or it might simply be a matter of convincing users of your general-purpose language that their particular task would be easier/better/faster in some domain-specific variant that you sell.

5) License the spec/controlled membership

Form a consortium! Charge bucketloads of money for interested parties to become members. You probably need a lot of users with a lot of money (read: "enterprise") before this would cease to be a joke. If you could get into this sort of situation, you're probably already making money from your language in one way or another. Licensing the specification, certifying implementations, and simply collecting fees for pretty much anything sounds appealing.

6) Write a book

I know even less about the economics of publishing than I do about the economics of programming languages, but I assume the fact that people continue to publish books means that someone, somewhere is making money from them. Writing a book about your language seems to be a good way to kill two birds with one stone (i.e. write good-quality documentation, and possibly make some money if enough people buy it).

7) Ad Revenue

I doubt this is going to be enough to retire on, but programming language websites often have the advantage of traffic. Hosting the tool downloads and the documentation can provide an incentive for a lot of repeat visitors. It might not be much, but it's always nice when your hobby is able to buy you a pizza a couple of times a year, right?

8) Conferences and speaking engagements

I'm not sure how large a community has to get before speaking engagements would be likely to take up more than a week in any given year. Conferences are probably hit-and-miss in terms of revenue generation. You might end up making a little bit of money, but you could similarly lose a whole bunch.

9) Embedded runtimes/interpreters/compilers

If you could convince a manufacturer of some device (e.g. a mobile phone) to include your implementation of language X, then presumably you could negotiate per-device royalties. I assume that some of the mobile Java implementations provide some sort of financial incentive for their creators?

10) ???

Can anyone else think of ways in which programming language design (or implementation) can be used to generate revenue? Let me know in the comments, because if you have a guaranteed get-rich-quick scheme, I might have a programming language to sell you :-p

Wednesday, 26 August 2009


I'm working on a programming language that follows the general principles of Bigraphs and bigraphical reactive systems. For simplicity's sake, think term-rewriting with process mobility.

I've written up something that may eventually become documentation for an implementation of this language at:

It's still a work in progress, but I would definitely appreciate any thoughts or comments. You can either comment here or email me using the address in the side-bar.

Tuesday, 25 August 2009

I laugh at your puny human 'objects'

Let's say that Earth is being invaded by aliens. The last tatters of the US government (and a motley crew of scientists and caravan enthusiasts) are holed up in a secret government base in the desert, where a crashed alien ship is stored. One of the dashing action heroes realises that he can engineer a computer virus to take out the alien shields that are preventing any meaningful counter-attack. Our brave hacker whips up some code, which must be delivered to the alien mothership using the (now-repaired) alien fighter. They sneak into the mothership undetected and upload the virus.

At the moment of truth, as they are about to detonate a nuclear device and make their escape, the virus program throws an exception because the behaviour of the 'AlienMothership' class differed slightly from our hero's interpretation, and the aliens succeed in taking over the world.

OK, so I ripped most of that off from Independence Day, but I think it goes to the heart of my least favourite technology: objects.

I'll freely admit that this is a bit of a rant. You're entitled to disagree.

Objects are a bad model for the universe. There, I said it. Object Oriented Programming (OOP) is oversold as a "natural" way of modelling things. It really isn't. Let's examine why:

Let's start with encapsulation. I have no problem with encapsulation, as such. It's a great idea. Except when it isn't. When you have hidden (mutable) state, I think it's pretty much a terrible idea. You define a nice tidy interface (or class definition, or whatever terminology you prefer), and then call some methods from another location. At any given moment, that object is in some state, and the meaning of calling any given method depends entirely on that state. X.a() then X.b() may be completely different to X.b() then X.a(). This implied ordering is not really expressed anywhere. There might be an in-source comment or a note in the documentation that mentions it, but really, you're on your own here.

I'm not talking about incorrect behaviour here either, where it would be possible to check (at runtime) that a call to X.a() never occurs after X.b(). I'm referring to a situation where both execution orders are perfectly valid, but have totally different meanings. How can I resolve this? Well, naturally, I just look inside the class and figure out how internal state is modified and what the behaviour will be depending on the internal state. Whoops! I'm basically back in imperative-land.
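To make the complaint concrete, here's a deliberately silly class of my own invention (in Python for brevity - the same applies in Java or C++): both call orders are perfectly valid, both typecheck, but they mean completely different things, and nothing in the interface tells you so.

```python
class Turnstile:
    """Hidden mutable state: the meaning of each call depends on history."""

    def __init__(self):
        self._credit = 0   # hidden state, invisible through the interface
        self.entries = 0

    def a(self):
        # Insert a coin.
        self._credit += 1

    def b(self):
        # Push the arm: admits one person, but only if a coin has been paid.
        if self._credit > 0:
            self._credit -= 1
            self.entries += 1

x = Turnstile()
x.a(); x.b()       # coin, then push: one person gets through
y = Turnstile()
y.b(); y.a()       # push, then coin: nobody gets through
```

Same two method calls, totally different outcomes, and the only way to know is to read the class internals.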

To summarise, I think hiding behaviour inside objects makes no sense. "Hiding" computation away is basically ignoring the point of the program. Abstracting away irrelevant computation makes sense, but OOP takes this paradigm way too far, hiding away both irrelevant computation and the computation we are interested in.

I asserted earlier that I thought objects were a poor model for the universe. I'll try to elaborate on this claim.

The classic OO examples always feature things that are somewhat modular in their design, interacting in terms of clearly defined events, with some internal state. For example, one might model a bicycle with wheels and a 'pedal()' method. Or a 'Point' class that perfectly captures the fact that a point has x and y coordinates, with some methods to handle translation or rotation or any other interesting operation.

But a 'point' is just data. Why does it have behaviour? In this case we're basically just using objects to control name spaces. It's nice to be able to have the word 'point' appear in operations on points, and have all of the functionality associated with a point in one place. Great, so let's do that, but instead of hiding the state, let's have a 'Point' module with no internal state, which gathers up all of the point-related functionality that operates on values of type 'point'.
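Sketched in Python for concreteness (what I actually have in mind is an SML structure over a point type, but the shape is the same): the point is plain data, and a stateless module collects the operations on it.

```python
import math

# A point is just data: an immutable (x, y) pair. No behaviour is attached
# to it; these module-level functions carry all the point-related
# functionality, and the module itself holds no state.

def translate(p, dx, dy):
    x, y = p
    return (x + dx, y + dy)

def rotate(p, theta):
    # Rotate p about the origin by theta radians.
    x, y = p
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

print(translate((1.0, 2.0), 3.0, 4.0))   # (4.0, 6.0)
```

We keep the nice name-spacing ("all the point stuff lives in one place") without pretending that a coordinate pair has behaviour.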

A bike won't do anything without a person to pedal it. A bike makes good sense from an OOP point of view. We can have some internal state, recording things about the bike: its position, and maybe its speed and direction of travel. The "pedal()" and "turnHandleBars()" methods alter this internal state. But a bike exists in the real world. The "world" records the position of the bike, not the bike itself. You can't pick up a bike in the real world and ask it where it is, or where it is going. A real bike stores none of this information; rather, the context in which the bike exists tells us all of that information.

So when we model a 'Bike' in an OO context, we're actually talking about a bike in a particular context. The claimed decoupling just hasn't happened. It's unlikely that any natural implementation of a bike could be picked up and re-used in a context where different world-assumptions exist. Perhaps we want to ride a bike on the moon! It seems unlikely we'd be able to do that without going back and changing some internal 'gravity' parameter inside the object. When we model the world this way, we are encoding all sorts of assumptions about the world into the object itself.

If we do the 'right thing' here and remove any world-specific state from the Bike class, what have we got left? In this case, essentially an empty object that simply records the fact that a bike exists. Why do we need an object to do that?

Then there's the question of active and passive behaviour. A bike does not move itself. A person does. A method in the 'Person' class might call the 'pedal' method on the Bike class. This is a relatively sane operation. Calling methods on a Person becomes a much stranger kind of game. Obviously this is a modelling task, so we're abstracting away from the real world, but there is only one kind of method-calling, which models the interaction between a passive system and an active one. Where is the thread of program control? What about active objects calling active objects? Where do 'utility' functions fit in? They have no real-world equivalent, but they still get shoe-horned into the OO paradigm. Don't even mention concurrency in this situation - that's just a horror show waiting to happen.

This isn't just irrational OOP-hatred, by the way. I used to do a lot of programming in Java and C++. These days I use SML. I've found it to be a vast improvement on the situation, and here is why:
  • Records, Tuples and recursive data types fulfil most of the "structured data storage" uses of objects.
  • The module system provides the kind of name-space control and "bundling" for which people use classes.
  • Referential transparency means you can easily understand the meaning and purpose of a given function that operates on data. There is no "hidden state" which modifies this behaviour in unpredictable ways.
  • The "control" of the program follows one predictable path of execution, making mental verification considerably easier.
  • Important computation isn't inadvertently "hidden away". If computation happens, it is because there is a function that does it explicitly.
  • Clear distinction between "passive" data and "active" computation.
  • No hidden state means it is much harder to unintentionally model the object of interest and its context together.
The only thing that object orientation does for most programs is to provide basic name-space control. I don't think it significantly helps maintainability or readability. There are much better ways to do name-space control than wheeling out a giant, unwieldy object system that everything has to be crammed into.

I think the problem essentially exists as a result of static modelling of the universe. If you freeze a moment in time, then sure, everything does look like objects. But we're interested in behaviour and time and computation, none of which are captured very well by bolting on some "behaviour" to an essentially static model of a system. And sadder still, instead of putting the behaviour somewhere sensible, it gets wedged into the static models themselves!

There are a few other things I really dislike. Inheritance, for example. This post is getting really long, though, so I'll stop here with the comment: if I were fighting the alien hordes, it wouldn't be with objects.

Friday, 21 August 2009

Worrying trends in programming languages: Javascript

Why is Javascript so popular?

It seems that in certain circles, the answer to any question appears to be "Javascript!". With the rise of large social networking platforms and browser-based "desktop-style" applications, it appears that more and more of what we do on a daily basis is likely to be powered by traditionally "web" technologies.

The announcement of Google Chrome OS seems to suggest that at least some people believe that the desktop and the web should no longer be distinct at all. This is totally worthy as a goal, and I like the idea of removing the distinction. However, the implications are a bit scary.

Some people (me included) don't really like writing Javascript, but luckily there are several compiler projects that will generate Javascript from your non-Javascript code (Google Web Toolkit springs to mind immediately, though I'm sure there are many others).

Javascript is an odd triumph in programming language terms. It is almost universally available in modern web browsers. It's truly "cross platform" in all the ways that Java really isn't, in that most implementations are reasonably standard across platforms, without the requirement to install a vendor-specific virtual machine or anything like that.

But couldn't we do better? Why on earth would we take a language that can be compiled to efficient native code and then compile it into a slow, interpreted language? We appear to be ignoring almost all of the advances made in programming languages in the last 20 years. By compiling to Javascript, we throw away information that would permit efficient optimisation or static analysis for security. It just seems insane!

The situation is complicated further by the likes of Adobe's Flash and (more recently) Microsoft's Silverlight. With Flash, I am basically instantiating yet another Javascript (well, Actionscript) environment inside my browser. The goals are essentially the same. If it's possible (and even desirable!) to start writing desktop-style applications in Javascript, shouldn't Javascript be able to do almost everything that Flash or Silverlight can? Surely with a little thinking, we could do better.

While I'm perfectly aware of why Javascript has become as popular as it is, I don't see why it should continue to become more popular. One way out would be to embed a virtual machine inside the web browser that permits execution of some quasi-compiled form of Javascript. Open up the floodgates and let other languages target the same VM directly. With a bit of JIT compilation, it could probably even be an efficient way to execute some programs, within a browser-based security model. You could even implement a version of the VM in Javascript itself for legacy browsers that don't implement said VM.

But this still appears to be an imperfect solution. Ignoring the problems associated with trying to encourage standardisation or adoption of web technologies, it's still wasteful and just replaces the question "why are we writing desktop applications in Javascript?" with "Why are we running desktop applications inside a web browser?"

The basic technological need that Javascript fulfils very well is the ability to run untrusted code in a standard way, without needing to download or install anything. It provides a standard way to display and interact with an application through the browser, while still allowing nearly limitless customisation.

So putting on our programming language designer hats, what would a Javascript-replacement need to provide?
  1. Security - the ability to run untrusted code in such a way that we need not worry about its source. Similarly, we must be able to control the information it has access to, allowing it certain types of information but at a level far more restrictive than user-level operating system access control would provide.
  2. Zero installation - We may want the illusion of persistence, but we want to be able to access this application from any machine we may sit down at, without worrying about platform installation restrictions or where our data goes.
  3. Uniform display mechanism - HTML still appears to be the best candidate here. We still need to be able to render HTML, but I imagine if we could provide some native components that support hardware acceleration for things like video and 3D applications, we might kill two birds with one stone.
So, in my very armchair-philosophical way, I think that this can all be solved in one easy step:

Download, compile and run anything presented within a web page

That's right! Let's just run it. It would need to be in some sort of bytecode form, but that bytecode would be designed in such a way that it is relatively easy for the browser (or operating system?) to transparently compile it to native code. The application has the full privileges afforded to any other desktop application. It can access and manipulate the DOM in the web browser by any means it wishes. We could presumably provide a list of resources available to the application, so that if we want for it to be able to access an OpenGL or DirectX surface to do fast 3D, or display streaming video, this is fine.

Now, before you write me off as a complete moron, I know, this would probably present some security problems. We might not want to allow untrusted programs unrestricted access to our file system or memory. This is where I think we need one more addition to our system: Proof-Carrying Code.

The basic idea is that the bytecode that is embedded as a resource in a web page comes with a proof. While writing proofs is difficult, checking proofs is computationally inexpensive and relatively simple. The program provides a proof that it is well-behaved with respect to the restrictions we wish to place upon untrusted web-based programs, and our browser can verify that the proof does indeed guarantee that the bytecode is well-behaved. You can think of the proof as a portfolio of evidence for the safety of executing the bytecode - the browser can inspect this evidence (and the bytecode) and establish for itself that the security properties hold - we don't need to "trust" the proof at all. As long as this proof-checking succeeds, we can run this program with total confidence in its inability to do anything bad to our system.

However, as I said before, writing proofs is difficult. Except that in this case, our compiler should be able to do it largely automatically. If you write your web-based program in a high-level language that provides no facilities for doing anything bad, then it should be relatively straightforward to automatically construct a proof that will satisfy the browser's proof-checker. The difficulty associated with proving that the program is well-behaved is going to be largely proportional to the low-level constructs you wish to access. It might not be viable to verify certain kinds of programs - I'm not sure, but at least our efforts would be going into figuring out how to construct proofs automatically, rather than figuring out how to force everything we want to do with computers into the limitations of an antiquated scripting language.
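To give a flavour of how cheap the checking side can be, here is a toy of my own construction (in Python; real PCC systems use logical frameworks and are nothing like this simple). The "bytecode" is a list of stack operations, the "proof" annotates each instruction with the stack depth before it executes, and the checker validates the annotations in one linear pass - it never has to trust the producer.

```python
LIMIT = 8  # the safety policy: stack depth must stay within [0, LIMIT]

def step(op, depth):
    # Abstract effect of one instruction on the stack depth.
    if op == "push":
        return depth + 1
    if op == "pop":
        return depth - 1
    raise ValueError("unknown op: " + op)

def check(code, proof):
    """proof claims the depth before each instruction, plus the final depth.
    Checking is a single cheap pass; producing the proof was the hard part."""
    if len(proof) != len(code) + 1 or proof[0] != 0:
        return False
    for op, d, d_next in zip(code, proof, proof[1:]):
        # Each claimed depth must respect the policy, and each annotation
        # must be consistent with the instruction's actual effect.
        if not (0 <= d <= LIMIT) or step(op, d) != d_next:
            return False
    return 0 <= proof[-1] <= LIMIT

code = ["push", "push", "pop"]
print(check(code, [0, 1, 2, 1]))  # True: an honest proof is accepted
print(check(code, [0, 1, 2, 3]))  # False: a lying proof is caught
```

The producer (the compiler) does all the expensive reasoning; the consumer (the browser) only replays a linear consistency check.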

Thursday, 20 August 2009

A prototype static analysis tool for observable effects

I've had a piece of code kicking around for a while that hasn't really been doing anything, so I've decided to wrap it in a basic web front-end and unleash it upon the unsuspecting public.

It's a tool for performing a very basic kind of specification checking for a simple ML-like language. The idea is that you construct a specification consisting of *traces* of observable effects, and then the compiler can statically determine whether your program behaves according to this specification. It does this by automatically inferring a process algebra-like behavioural type for the program, and then using this as a representation against which to perform specification checking in a tractable way.

An example:
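(The syntax below is assumed for illustration, reconstructed from the description that follows - the tool's actual notation may well differ.)

```text
Operations
  insertCoin, button1, button2, takeDrink1, takeDrink2

Necessary

Safe
  insertCoin; button1; takeDrink1
  insertCoin; button2; takeDrink2

Forbidden
  _* insertCoin _* insertCoin _*
```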




The Operations section defines which operations are of interest to us. Naturally our program could be producing any number of observable actions, but we may only be interested in specifying certain types of behaviours. In this case, we're specifying a program that interacts with a vending machine.

Each action (e.g. insertCoin) corresponds to some library call in the language that produces a side-effect. Because the language (which I will describe shortly) is "mostly functional" (in the style of ML), there is no mechanism to introduce new side-effects - they have to come from a built-in library function, or some foreign function interface.

The (empty) Necessary section is the strongest type of specification - it says "the program must be able to behave according to all of these traces, and must not be able to exhibit any other behaviour". When it is empty (as in this example), it has no effect.

The Safe section specifies a weak liveness property, "the program must be able to behave according to at least one of these traces". We have two separate traces defined in our Safe section in this example, each corresponding to a "sensible" interaction between our program and the vending machine. The compiler will halt and issue an error if the program is not able to behave according to any of the traces in a non-empty Safe section.

The final section is the Forbidden traces. These act as a general "pattern match" across all the possible behaviours that a program is able to exhibit. In this case, we specify that we never want our program to be able to insert two coins into the machine. The underscore (_) operator is a wildcard, standing for any action that appears in the Operations section. The asterisk (*) has the normal meaning one would expect from regular expressions, meaning "zero or more occurrences of this action". So the final trace demands that there never be two occurrences of insertCoin in any of the possible program behaviours.
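Since Forbidden traces are essentially regular expressions over the action alphabet, that final trace can be rendered as an ordinary regex. This is purely my own illustration of the intended meaning (in Python) - the tool does not literally do this:

```python
import re

ops = ["insertCoin", "button1", "button2", "takeDrink1", "takeDrink2"]
any_op = "(?:%s)" % "|".join(ops)

# `_* insertCoin _* insertCoin _*`: `_` is any single action, `*` repeats.
forbidden = re.compile(
    r"^(?:{a} )*insertCoin( {a})* insertCoin( {a})*$".format(a=any_op))

def violates(trace):
    # A trace (a list of action names) violates the spec if it matches.
    return forbidden.match(" ".join(trace)) is not None

print(violates(["insertCoin", "button1", "insertCoin"]))   # True
print(violates(["insertCoin", "button1", "takeDrink1"]))   # False
```

The difference, of course, is that the tool checks this pattern against every behaviour the program *could* exhibit, statically, rather than against one observed trace.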

Now, an implementation!

fun main () =
  let
    val _ = insertCoin ()
  in
    if rand_boolean ()
    then (button1 (); takeDrink1 ())
    else (button2 (); takeDrink2 ())
  end

val _ = main ()

This implements our specification. The web-based tool will report "specification satisfied", and will print a visual representation of the behavioural type that it has inferred from the program text.

The tool is at:

I will upload the (as yet) unpublished paper describing the internals of the system and hopefully some better examples and documentation later tonight. Let me know if you would like to know more.

The tool is pretty fragile at the moment. The error reporting is very bad in most cases. Let me know if you come across any totally weird behaviour, because part of this unleashing-on-the-public exercise is to try to figure out what is and isn't working, and how I can improve it, so I would appreciate any feedback.

The idea (before you ask) is to encourage a very lightweight form of specification. There is no execution going on here. It is supposed to mimic the function of certain types of unit tests in a completely static way. The way it is implemented at the moment is a rough prototype. Once I have proved to myself that this is a feasible kind of static analysis to perform, I may work on integrating it into a compiler for a general-purpose language. There is no particular restriction on the source language; however, it is most likely to be feasible for a strongly typed functional language.

Tuesday, 18 August 2009

Hardware/Software Co-design

I'll preface this post by saying: I don't know much about hardware.

I have, however, spent a lot of time around hardware engineers. As a very naive programmer, I generally assumed that what they did was programming with some "extra bits" added, though I've long since been corrected.

Hardware is fast

How fast? For FPGA developments, I believe many applications can be 200 times faster. I believe even a reasonably inexperienced hardware designer can achieve speed ups of 50-100 times on many applications. It's also worth thinking about power consumption. The power consumption of an FPGA can often be a small fraction of the equivalent processing power required to hit the same performance constraints using commercially-available microprocessors.

This all sounds wonderful. Why aren't we all doing this? Well, for me, the problem is basically me.

I have neither the tools, nor the experience, to design non-trivial solutions to problems in hardware. If I were to sit down and try to solve a problem in hardware, I would probably end up writing VHDL or Verilog in much the same way that I would write C or Java, resulting in really awful hardware that would be unlikely to be efficient. The maximum clock-rate would be very low, and finding a device big enough to fit my resulting design on could become an expensive undertaking.

That said, I still really like the idea of using hardware to solve problems. While specialist hardware engineers will always have the job of designing high-performance hardware with tight performance constraints, is there any low-hanging fruit which we (as programmers) could target for using reconfigurable logic devices as part of our programs?

The most obvious way of achieving this seems to be with the general idea of "hardware acceleration", where a special-purpose device (such as a PCI-X card with an ASIC or FPGA implementation of some algorithm) takes over some of the hard work from the CPU. The device manufacturer provides some sort of software library that hides the hardware interaction behind nice high-level function calls that are familiar territory for programmers.

These are great for specific applications, but I'm not aware of any "general purpose" cards that would permit programmers to integrate them into an arbitrary application without having some knowledge of hardware design.

Perhaps the next obvious alternative is compiling software into hardware. This is dangerous territory. There have been so many claims of software-to-hardware compilers that I won't even bother to list them. I'm sure you can find hundreds with a quick Google search. The problem with most of these is that they are actually just hardware semantics hidden underneath a veneer of some mainstream programming language. A programmer familiar with the source language will still not be able to produce efficient hardware without knowing something about hardware.

I think there could be two possible solutions, which I will sketch here:

Component-based Approach

Presumably we could get experienced hardware engineers to implement a wide variety of software-style components as efficient hardware. We use the same approach as the domain-specific hardware accelerator manufacturers, hiding the complexity of hardware interaction behind some nice software libraries. If you had a significant portion of the expensive operations from a programming language's standard library available as hardware components, it wouldn't take a particularly smart compiler to replace certain standard library calls with calls to the (functionally identical) hardware-wrapper library.

But there are some problems that I can see.

  • It is unlikely to be efficient in terms of communication, area or power consumption to simply push a whole lot of small, infrequently-used library calls into hardware. There would need to be a lot of static analysis (or runtime profiling?) to determine where there are efficiencies to be gained.
  • The compiler would be forced to "schedule" components into the finite area available on the FPGA. This would have implications for latency and throughput, and it might be necessary to buffer large numbers of calls before executing them.
  • The performance gains from doing operations in hardware are likely to be eaten up by the communication overhead resulting from having to move data from registers in the CPU (or from processor cache, or from main memory) to the FPGA.
  • In a traditional imperative setting, a call out to hardware is an interruption of program control: the program will have to stop and wait for the results from the FPGA. The alternative is concurrency, but as I've previously said, concurrency is hard, and this is the type of heterogeneous task-level parallelism that is really hard.

So it appears that this approach would require putting a lot of smarts into the compiler. We would need good performance heuristics, something resembling auto-vectorisation and automatic parallelisation, and likely an ability to trade-off latency for throughput.

It all seems a little bit challenging, given the state of the art of each of those areas. It would appear that some of the purely functional languages would be best-suited to this kind of approach, but even in the absence of side-effects it seems uh... difficult, at best.

Language-based Approach

So if we're not going to expect the compiler to perform a rather sophisticated brand of super-compilation on our programs, can we instead structure our programs (and programming languages) to make this kind of hardware/software co-design easier?

I believe that this is one of the possible future directions for the JStar project, although I'm not aware of the technical details in this case. The data-structure-free approach may permit the kinds of analyses required to allow for efficient scheduling of components onto an external FPGA, so it definitely seems like an interesting avenue to keep an eye on.

Another avenue has to be concurrent (and functional?) languages. These are designed to express concurrent computation, which is essentially what we are trying to maximise, while minimising the presence of mutable state. With mutable state largely out of the way, you are free to have copies of data lying around. There are still questions as to how we maximise the efficiency (and indeed, even how we approach predicting the efficiency).

I'm aware of efforts to compile Esterel to hardware, though I have no idea to what degree this is practical or whether it is used in industry.

It strikes me that a model-based approach to software development could be used to make the entire process a lot less painful. By starting with a high-level model, it would be possible to refine towards both hardware and software implementations of different parts of the system. Provided you could contrive acceptable heuristics for predicting performance, it seems like you could start to play this sort of game:

I have a high-level abstract specification S.

I have a set of (hardware) components:

H = {h1,h2,h3}

I have some software components that are synthesised from S, or are user-supplied:

W = {w1,w2,w3}

We want to choose subsets X = {x1, ..., xn} ⊆ H and Y = {y1, ..., ym} ⊆ W, such that:

S ⊑ x1 || ... || xn || y1 || ... || ym

Where '⊑' is some appropriate notion of refinement and '||' is parallel composition. I'm not sure if this is more tractable than simply solving the problem outright for a given language, but it certainly seems like it would make verifying the correctness of the decomposition of the program into software and hardware components considerably easier than it would be for a traditional compiler.
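
Assuming you had some crude cost heuristics, the selection part of this game could be sketched as a brute-force search. All the component names, coverage sets and costs below are invented for illustration, and real refinement checking is nothing like simple set inclusion; this is only meant to show the shape of the search.

```python
from itertools import chain, combinations

# Toy sketch of the selection game: pick the cheapest mix of hardware
# and software components that covers a required set of operations.
# Coverage-as-sets stands in (very crudely) for refinement.
def powerset(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def choose(components, required_ops):
    """components: {name: (ops_covered, cost)}; returns (subset, cost)."""
    best, best_cost = None, float("inf")
    for subset in powerset(components):
        ops = set().union(*(components[c][0] for c in subset)) if subset else set()
        cost = sum(components[c][1] for c in subset)
        if required_ops <= ops and cost < best_cost:
            best, best_cost = set(subset), cost
    return best, best_cost

# Hypothetical components: h* are hardware, w* are software.
components = {
    "h1": ({"fft"}, 5), "h2": ({"crypto"}, 4),
    "w1": ({"fft"}, 2), "w2": ({"crypto", "io"}, 7),
}
```

For these made-up numbers the cheapest covering of {"fft", "crypto"} mixes one software and one hardware component, which is exactly the kind of outcome you'd hope the heuristics could find.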

Is anyone aware of any projects that attempt this kind of thing? I'd be interested to hear from hardware and software people alike about their opinions on enabling programmers with no specialist hardware knowledge to use hardware in a manner that is both efficient and reasonably familiar.

Monday, 17 August 2009


I'm working on another post, but I thought I would briefly mention two things:

1) I have turned on Google AdSense for this blog. Let me know if the ads are particularly irritating. I mostly turned it on for my own personal curiosity, rather than because I'm hoping to quit my day job using the ad revenue.

2) I'm on Twitter - - I've actually had an account for a long time, but I'm actively using it now.

Sunday, 16 August 2009

Simple patterns for better arguments about programming languages

I somehow get myself into a lot of arguments about programming languages. Perhaps it's because I talk about them a lot, and everybody has an opinion!

Getting into arguments about programming languages generally means listening to a large number of value-based judgements until somebody gets bored and admits that language X really isn't as bad as he or she originally asserted, and yes, maybe language Y does have some flaws. This is largely counter-productive. If we're going to argue about the best way to skin a cat*, let's argue about things we can actually compare. So I present in this post a number of more objective ways of talking about the merits of programming languages such that you might actually persuade somebody that your well-informed opinion is actually valid!

(* Please don't, that's cruel and horrible)

I often hear the word "powerful" used to describe the relative merits of programming languages, as in "People use Ruby because it is more powerful". This is often used in conjunction with lots of other words like "maintainable", "simple", "faster", "easier" etc etc. These are all largely intangible qualities that could just as well be described by "I prefer language X".

That said, maybe there is some value in comparing languages. So what can and can't we measure?

It's worth creating a distinction at this point between the programming language itself (i.e. syntax and semantics - the set of valid programs), and the eco-system that exists around the language, in terms of compilers, tools, libraries, user-bases etc. Sometimes these overlap in a big way, especially in cases where there exists only a single implementation of a given language.

Comparison 1: Performance

We have some reasonably good metrics for "performance" of a given implementation for a certain application. These are often pulled out when one party in a heated discussion about the merits of a programming language demands evidence to support claims of superiority or inferiority. That said, as a metric for comparing languages, they are pretty lousy. Sure, it's interesting that you can write a compiler for some language X that outperforms a compiler for language Y for a certain application. Big deal. It might say something about the ability of the language to express efficient solutions to problems, or it might just tell you that one compiler is better than another.

Before I wave my hands and dismiss "performance" as a useful metric altogether, it is true that some languages lend themselves to efficient implementations better than others. The Computer Language Benchmarks Game makes for interesting reading, even if it is not a completely scientific means of comparing whole languages. Without beating up on anyone's favourite languages, languages that permit dynamic behaviour (usually requiring interpretation rather than compilation-proper) don't really lend themselves to static analysis and optimisation. Sure, it happens, but it is added to the runtime cost of the application (because it is done, by necessity, at runtime). I don't think anybody would really have a problem admitting that languages like PHP, Python, Perl and Ruby are unlikely to be considered in the same performance category as C/C++, ML, Fortran and a bunch of other languages that do lend themselves to good compile-time optimisation and compilation to fast, native code (or something with comparable performance).

So, take-home talking point number one: Language X lends itself to compile-time optimisation and efficient code-generation that is better than language Y's.

You need to be careful with this one though - just because people haven't written good optimising compilers for a language doesn't mean it's not possible or even difficult! Sometimes it's just not a priority. If you're going to use this talking point, you need to be able to substantiate it! Think about things like: presence (or absence) of very dynamic features, referential transparency, treatment of constants, and syntactic ambiguity resolvable only at runtime (Perl, I'm looking at you).
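
For a concrete illustration of why very dynamic features hurt, consider that in Python even a module-level "constant" can be reassigned at runtime, so a compiler cannot safely constant-fold or inline around it:

```python
import math

# A dynamic feature defeating static optimisation: because module
# attributes are mutable, the meaning of math.pi inside area() cannot
# be fixed at compile time.
def area(r):
    return math.pi * r * r

original = area(2.0)   # uses the usual value of pi
math.pi = 3.0          # perfectly legal in Python
patched = area(2.0)    # the "same" call now computes something else
```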

Comparison 2: Readability

This argument is abused regularly. Arguments about readability often boil down to "Language X is different to language Y, which I am used to, therefore I find it harder to read!". This is not a good argument. You're basically just saying "I find it easier to use things that I am familiar with", which is uh, pretty obvious, really. Some other person who uses X more than Y probably finds Y equally unreadable.

There might be one tiny point for comparison here, although I would hesitate to use it unless you're very sure that you're correct. It basically goes like this:

How much information about the expected run-time behaviour of a program can I discern from the structure of a small piece of the program?

To elaborate, what is the smallest piece of the program that I can look at independent of the rest while still having at least some idea what it is meant to do? Can I look at just one line? A few lines? A page of code? The entire program spread over multiple files? This is probably equivalent to some kind of modularity property, although I don't mean this in terms of objects or module systems. I mean, "if something is true within this piece of code now, will it still be true no matter what else I add to the system?"

I figure this corresponds to "the amount of things you need to keep in your head" in order to read code. Smaller is probably better.

Talking point two: Language X requires the programmer to mentally store fewer things in order to read and understand the program than language Y.

You'll need to be able to substantiate this if you're going to use it. Think about things like scoping rules, syntactically significant names and consistency. Does this operator always mean the same thing? No? Does its meaning vary in some consistent way that is indicated by a (relatively small) context? It doesn't? Oh dear.

Comparison 3: Bugs!

Don't get drawn into this one lightly. Discussions about bugs can go on for days. The sources of bugs, the causes of bugs, and what language design decisions influence the ability to detect (and indeed prevent) bugs.

Being a formal methods geek, I'm pretty passionate about this point. I think bugs in all their forms are bad things. Some people see them as an unavoidable side-effect of producing software, much like mopping a floor will make the floor all wet.

I think a well-designed language should allow a compiler to prevent you from making dumb mistakes. I'm going to beat up on PHP here, because really, anybody who has used PHP pretty much loathes it. My pet peeve:

$someVeryComplicatedVariableName = doSomeStuff();

print $someVeryComplicatedName;

Whoops! I guess I didn't type that variable name correctly the second time. Oh well, don't worry, PHP's default behaviour is to initialise it to an empty value that is coerced to an appropriate type for the context. In this case, my program would print nothing at all. Sure, it issues a notice somewhere, but this isn't considered an error in PHP, because we don't declare things! In this case, the design of the language prevents the compiler/interpreter from stopping us from doing something very dumb.
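
For contrast, here is the same slip in Python. Python doesn't catch it at compile time either, but at least it fails loudly with a NameError at runtime rather than silently carrying on with an empty value:

```python
# The same misspelling as the PHP example above, in Python. Referencing
# an undefined name raises a NameError instead of silently coercing to
# an empty value.
def do_some_stuff():
    return "result"

some_very_complicated_variable_name = do_some_stuff()

try:
    print(some_very_complicated_name)  # misspelled on purpose
    error = None
except NameError as exc:
    error = exc
```

A statically checked language would reject the program outright, which is the behaviour I'm arguing for.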

Languages that lend themselves to compilers that enforce global consistency in some way are good at preventing errors. I've already said in a previous post that I'm a fan of type systems that prevent dumb mistakes. Obviously in a programming language there is always some conceptual gap between what we expect the compiler to do, and what it actually does. To me a well-designed programming language is one that will prevent me from being inconsistent, and which will tell me if my understanding of a program differs significantly from the compiler's interpretation of it (manifested in the way I am writing my program, probably as a type error!).

Finally, how many things are put in front of me that I have to get right in order for my program to be correct? Oh, I have to do my own memory management? So it would be bad if I got that wrong, right?

Talking point: Language X is designed in such a way that a compiler or interpreter can detect and report a larger class of errors, and prevent more simple mistakes than one for language Y.

Do your research on this one too. Unfortunately there are very few studies with substantial findings on the types of errors that programmers make. You'll need to restrict yourself to tangible classes of errors that are detectable in language X, but will not be detectable in language Y, by virtue of the design of the language. Remember, a bad compiler might not report errors, but that doesn't necessarily mean it's impossible to detect them!

The natural counter to this argument is usually to propose an absurdly complicated way by which it might be possible to simulate the language-based features of language X in language Y at the application level. For example, "We'll search-and-replace to replace every occurrence of variable x with a function that validates its internal structure!" At this point, you should probably just sigh and walk away, safe in the knowledge that you don't need to worry about any such absurd solutions in your own daily life.

Comparison 4: Eco-System Factors

If you want to argue about libraries and user communities, be my guest. If you're having an argument about programming languages, these things are probably important, but they really only tell us the way things are, not why they are like that or how they should be. Arguing about how things should be is not particularly helpful, but similarly, "Everyone uses X so it must be better than Y" is a very unhelpful argument. A lot of people get cancer, too. Doesn't mean I'm lining up for it.

Talking point: Language X has considerably more library support than language Y for what I am interested in doing.

Perfectly valid. Just don't fall into the trap of suggesting that having more libraries means that one language is better than any other. Most languages worth their salt actually provide some sort of foreign function interface, so the idea of a library being completely unavailable in a language is pretty rare. You might have to put a bit of work into it, but it's certainly not a limitation of the language just because someone wrote a library for X and not for Y.

Comparison 5: Non-comparisons

This is my "please avoid" list. If you use any of these arguments in a discussion on programming languages, you should really look at what evidence you have, because you probably don't have any:

  • Lines of Code - please, please avoid using this. It's just not relevant. If you use this, you have reduced an argument about the relative merits of programming languages to one about typing speed.
  • Elegance - I know what you mean, some solutions are elegant, but this really can't be quantified. It's probably the solution that is elegant, rather than the solution in a given language. It could probably be expressed elegantly in almost any language by a programmer sufficiently familiar with the conventions of that language.
  • "I don't need a language-based solution for that, we have social conventions". No, no, no. What you're saying here is that you are accepting that this problem is still a problem if people don't follow the conventions. Guess what? People are really bad at following rules. Some people are just born rule-breakers.
  • "Language X is too restrictive". Really? I find those pesky laws about not going on murderous rampages rather restrictive too. If you can't qualify this statement with a very specific example of something that is both within the domain of the language, and which genuinely can't be done using the features of the language in question, then you're just saying "Language X doesn't let me do things exactly how I did them in language Y".
So I hope this all provides some food for thought. I would love to hear of any other arguments that people resort to about programming languages that really annoy you. Or perhaps I've missed some genuinely valuable comparisons that can be made?

Saturday, 15 August 2009

Sources of inspiration and Interesting directions for programming languages

I try to keep abreast of new developments in programming languages, but it's pretty easy to fall behind. There are so many small languages (I mean that in terms of user-base or general appeal) that may incorporate interesting features, but which are not all that appealing in and of themselves. I've decided to dedicate this post to something of a sketch of where I would like to take my own work in programming languages, and hopefully illustrating a few interesting sources of inspiration for anyone who has an interest in programming languages.

I've tried to provide some links to projects which are currently working towards these goals in different ways, although it is unlikely to be complete. Let me know if you think I've missed any significant developments in any of these areas.

Exotic Type Systems

I really like strong, static typing. I'm a big fan of SML, for example, and the combination of type safety with good type inference means big benefits with very little effort. I really don't want to get into the entire dynamic/static typing argument, but needless to say, I think that it's worth having your compiler do the work for you. An inability to formulate programs in this way strikes me as a result of lack of imagination, rather than some inherent limitation. Every time I hear somebody claim that they need the large-scale dynamic behaviour of language X in order to Y, I can't help but think that I really don't need the bugs.

But strong, static type systems are old news now. More and more languages are integrating them and buying into the dream. But there is even more that a type system can give you! One promising direction: dependent types.

If you've used something like ML or Haskell before, dependent types are a reasonably intuitive extension. Imagine the type signature for a function append that takes two lists and joins them together:

val append : 'a list * 'a list -> 'a list

Brilliant. We've cut out the possibility (at compile time) of anyone trying to pass lists of different types, or using the result of append in such a way that they expect append([1,2,3],[4,5,6]) to return a list of strings. It doesn't, however, tell us very much about the lists themselves, and that's where dependent types come in!

val append : 'a list<n> * 'a list<m> -> 'a list<n+m>

Now we have added additional information about the length of the list. A correct implementation of append must implement this behaviour, and a programmer cannot then attempt to use the result of append in any way that is inconsistent with this behaviour. Similarly, we can start to check some interesting properties:

val zip : 'a list<n> * 'b list<n> -> ('a * 'b) list<n>

Which tells us (most importantly) that the two lists passed to the zip function must be of the same length (n).

This is an absurdly simple treatment of dependent types, featuring only computation on integer values (in no particular syntax - this is just some ML-inspired syntax which one might use to express dependent types). While projects like Dependent ML have restricted dependent type values to simple arithmetic on integers for tractability reasons, dependent types could extend to permitting any value (or expression yielding a value) in the language to be used as a value within a dependent type. This kind of expressive power quickly leads to the ability to construct specifications of programs within the type system, which are then statically checked at compile time.
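
If your language doesn't have dependent types, you can at least mimic the promises above with runtime assertions. This Python sketch checks dynamically what a dependently typed compiler would verify statically (and is, of course, far weaker, since violations only show up when the code runs):

```python
# Runtime emulation of the length-indexed signatures above. A dependent
# type system would discharge these checks at compile time; here they
# are mere assertions.
def append(xs, ys):
    result = xs + ys
    assert len(result) == len(xs) + len(ys)  # corresponds to 'a list<n+m>
    return result

def zip_same_length(xs, ys):
    assert len(xs) == len(ys), "zip requires two lists of the same length n"
    return list(zip(xs, ys))
```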

Links of interest: Coq, ATS, Epigram

Visual Programming Languages

I know, people have been going on about visual programming languages for years. There is just nothing that appealing about them as a concept. I wouldn't use one. But for some reason, I still think they are a good idea, which makes me think that the problems might be in the execution, rather than the theory. For me at least, there is just some kind of instinctual cringe factor about them though.

That said, I think they are very likely to provide a good way to manage complexity going forward. I have no idea what such a system might look like, but my experience working with process algebras has always been that one labelled transition system diagram is a whole lot more useful to my understanding than a page full of notation. As the types of systems we want to describe get more complex (particularly with reference to mobile and distributed systems), I think visual languages may be the best way forward.

Bigraphs are interesting in this way, as the visual representation is (according to Milner's original paper) considered primary, with any textual representation being second to the visual representation. Bigraphs deserve a post all of their own (and they will likely get one soon), but I think if we can figure out a way to describe programs and systems visually such that unnecessary complexity is hidden, they might start to catch on.

Links of interest: The Wikipedia Article (see how many there are!?)

Concurrent Programming Languages

I go on about concurrency a lot, but there's a good reason for that. It's interesting! Languages with process algebras hidden inside hold a special place in my heart. The Occam Programming Language is an example of brilliant elegance (as was the Transputer for which it was designed), well ahead of its time. In Occam, things are basically concurrent by default: the programmer must deliberately write sequential code, rather than everything being sequential by default simply by virtue of the order in which it is written down, as is the case in most languages. This definitely appeals to me.

Also of note are languages that implement Futures in a clean way. I'm sure many languages do; however, my only real exposure to futures has been through Alice ML. Futures are (I think) a nice way to think about certain concurrent computations, because they can essentially just be thought of as a special kind of laziness.
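
As a rough illustration of the "special kind of laziness" view, here is the same idea using Python's concurrent.futures (not Alice ML, but the shape is similar): the computation is kicked off eagerly, and you only block at the moment you actually touch the value.

```python
from concurrent.futures import ThreadPoolExecutor

# A future: the computation starts immediately on another thread, and
# result() blocks only when (and if) the value is actually demanded.
def slow_square(n):
    return n * n

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(slow_square, 7)  # computation begins here
    # ... other work could happen here concurrently ...
    value = future.result()               # we only wait at the point of use
```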

Links: Occam, Erlang, AliceML, Mozart/Oz, Clojure*

* Note: I'm seriously allergic to Lisp and all its relatives. I'm not going to go into the reasons because I'll get flamed, but needless to say, I wouldn't use it.

As great as these languages are, I'd like to see things go a lot further in this direction. Bigraphical Programming Languages seem very promising, although it is still early days. Given that it's possible to represent the lambda calculus within bigraphs, it seems like this could take us a long way towards closing the gap between the way we create software now, and the way we would like to create software in the future. Given the number of things that have processors in them these days, the notion of software existing as a process on a single machine seems very limiting. Cloud computing is a good start, but what about totally ubiquitous computing? Advances in programming languages are keeping up to date with the way people create software, but I'm interested in how we'll create software 50 years from now.

More Theorem Proving

Writing software inside a theorem prover is a reasonably rewarding experience. I haven't done a heck of a lot of it, and it's definitely hard work, but the benefits are obvious. Instead of making a mental note or leaving a small comment about checking if a certain property of your program is true, you write down a lemma and prove it. I think this style of designing software is definitely beyond the reach of most people (it's mostly beyond my reach for anything above a reasonably small size), simply because of the effort involved. Given that most people refuse to engage with type systems that tell them they are wrong, I can't imagine programmers queuing up to be told they are wrong more often.

That said, I think there is definitely room for making greater use of theorem proving "behind the scenes" in programming languages and compilers. I think with tool support, this could be done in such a way that made little impact on the way programmers work. More static analysis can only be a good thing, especially if it gives us the means to provide early feedback to programmers about properties that may or may not be true. Going one step further towards tools that would ask the programmer to clarify their intentions could be very helpful. Hopefully it could be done in a less irritating way than a certain office-assistant paper clip. In any case, I think there is a lot to be gained by shrinking the gap between "programming" and "theorem proving" as separate activities.

Interesting links: Coq, Isabelle, Daikon, ESC/Java2

Tuesday, 11 August 2009

If concurrency is easy, you're either absurdly smart or you're kidding yourself

Concurrency is hard. I like to think I'm a pretty smart guy. I'm definitely a very good programmer, and I've had plenty of theoretical and practical experience with concurrency. I still find concurrency very hard.

Concurrency is viewed as black magic, even by programmers who use it on a daily basis. When people tell me that concurrency is easy, it generally means one of two things: they are super-humanly smart, or the way they are using concurrency is restricted to nice simple patterns of interaction to which we can apply some simple mental analogy. Thread pools are fairly easy to think about, for example.

I'm talking about the kind of eye-wateringly difficult concurrency that arises when you are trying to extract some sort of task-level parallelism, or when you are working with loosely-coupled distributed systems (read "the general case"). The difficulty of concurrency is usually mentioned in the same breath as a list of common defects (e.g. race conditions, deadlocks, livelocks, etc), however I think it is important to remember that these things are all just outcomes that arise as a result of the difficulty, in the same way that hitting the ground is the probable outcome of falling off a building.

The possibility for a programmer to introduce problems such as deadlocks and race conditions is not what makes concurrency difficult. It's our total inability to mentally reason about concurrency that makes it difficult.

Most programmers above a certain level are very good at conceptualising large, complex systems. They can interrogate perceived weaknesses in a program before it is even written. This is how good programmers manage to write programs that are largely free of defects and that generally work in the way that is expected. Small-scale errors aside, a good high-level conceptual understanding of the system, coupled with an ability to mentally simulate the behaviour of any portion of a program provides programmers with most of the tools they need to construct good-quality programs. The programmer is likely to know (at least at some high level) how the entire system is going to work at any given moment.

Large-scale concurrency robs us of this ability to easily mentally simulate behaviour. It makes our mental models unimaginably complex. When it comes to concurrency, most of the intuitions we use to guide us through development simply break down. I think of it as a sort of night-blindness.

So what is different? Humans are very good at causality. When presented with the present state of the universe, plus some action, we can generally predict with a high degree of accuracy what the state of the universe will be after that action has been performed. We can integrate huge numbers of factors into our calculation of the future state of the universe - gravity, wind, light, sound, heat. If we start rolling a ball down a slope, we can imagine at any point in time approximately where that ball is going to be. What we don't expect, or even attempt to predict, is an inter-dimensional portal opening, an alternative-reality version of ourselves stepping out of the void and setting the ball on fire. And that's kinda concurrency for you. We are good at imagining ourselves in a situation. We are the observer in the code. Unfortunately in a concurrent situation, we are the observer in many different "realities".

Even with the knowledge that there is some algorithmic nature to the opening of dimensional portals during our ball-rolling activities, and that the alternative-reality observer's pyromaniac tendencies can be moderated by holding up a certain flag, our mental models of the universe just don't extend to predicting this kind of behaviour. If I were inclined towards evolutionary psychology, I would probably suggest that it's because as a species, we don't encounter these kinds of situations in our natural habitats very often. We can predict cause-and-effect, but the idea of having to maintain a model of a second universe with its own rules as well as our own is just beyond the needs of most early humans, who were probably much more concerned with procreating and trying to avoid being eaten by tigers.

It's not just the breakdown of causality that is a problem - it's also syntax. I know I claimed I wasn't going to talk about syntax just two short posts ago, but it's kinda important here.

When we think about some function "f", we can conceptualise it as something like:

(S,x) ---(some computation)---> (S',y)

Where "S" on the left represents the state of the world before the computation occurs, "x" is some input, "S'" is the state of the world after the computation has occurred and "y" is some output. Given that most programming languages give us the ability to control scope, "S" is probably pretty small. We usually don't even bother writing it down, leading to the more comfortable:

f(x) = (some computation)

With an implied world-state being accessible to us before and after the computation. Causality applies and all is well with the world. We can mentally substitute any occurrence of "f(x)" with the behaviour of the computation.

Now, consider something concurrent. A given transition in a labeled transition system has one input state, one output state, plus a side-effect:

x ---a---> y

Assuming some interpretation of this LTS using name-based synchronisation, any other occurrence of "a" in the system is going to be causally related to this occurrence of "a". We're used to names just being names - they have no special meaning. Well now "a" does have a special meaning. This is where it all starts to get a bit dicey. We no longer have all of the information we need in front of us. Causality has gone from being able to consider events locally within some frame of reference to needing to maintain a globally consistent mental model of all the ways that an "a" action could occur anywhere in the universe. Reading this description of behaviour in isolation is no longer sufficient to tell us the whole story.
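
To see how quickly local reasoning evaporates, here is a deliberately tiny Python model of name-based synchronisation (invented purely for illustration, and nothing like a real process calculus implementation): two processes step together only when they offer complementary actions on the same name, so what one process can do depends on every other occurrence of that name.

```python
# Toy model of name-based synchronisation. Each process is a list of
# (name, polarity) pairs: "+" offers an action, "-" accepts it. The
# pair of processes can only advance by handshaking on a shared name.
def synchronised_steps(proc1, proc2):
    trace = []
    for (name1, pol1), (name2, pol2) in zip(proc1, proc2):
        if name1 == name2 and pol1 != pol2:
            trace.append(name1)   # handshake on the shared name
        else:
            break                 # no matching action: the pair is stuck
    return trace
```

The pair gets stuck as soon as the names stop matching, and you can only know that by looking at both processes at once.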

This isn't just some problem with name-based synchronisation. If anything, I think name-based point-to-point communication is considerably simpler than most of what we do in practice, and yet we are already at a point where our ability to rely on local information to give us clues as to how this piece of the system will behave has been taken away.

Introducing locking is scarier still. We're now required to know the state of our own world at any given point in time, as well as the state of every other world at any given point in time. And the notions of time in each world are not even the same.
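
For a concrete taste of why, consider the textbook case: two threads taking the same two locks in opposite orders. The sketch below (my own brute-force state-space search, not any standard tool) enumerates every interleaving of the two worlds and finds the one state from which neither thread can ever move again.

```python
# Two threads take the same two locks in opposite orders - the textbook
# deadlock. Exhaustively exploring every interleaving finds the state
# where each thread is blocked waiting on a lock the other one holds.
THREADS = [
    [("acquire", "L1"), ("acquire", "L2"), ("release", "L2"), ("release", "L1")],
    [("acquire", "L2"), ("acquire", "L1"), ("release", "L1"), ("release", "L2")],
]

def explore():
    """Depth-first search of the interleaving space; returns deadlocked states."""
    done = (len(THREADS[0]), len(THREADS[1]))
    start = (0, 0, frozenset())          # (pc of T1, pc of T2, {(lock, owner)})
    seen, stack, deadlocked = set(), [start], set()
    while stack:
        state = stack.pop()
        if state in seen:
            continue
        seen.add(state)
        pc0, pc1, held = state
        owners = dict(held)
        successors = []
        for t, pc in ((0, pc0), (1, pc1)):
            if pc == len(THREADS[t]):
                continue                 # this thread has already finished
            op, lock = THREADS[t][pc]
            if op == "acquire" and lock in owners:
                continue                 # blocked: the other thread holds it
            new = dict(owners)
            if op == "acquire":
                new[lock] = t
            else:
                del new[lock]
            successors.append((pc0 + (t == 0), pc1 + (t == 1), frozenset(new.items())))
        if not successors and (pc0, pc1) != done:
            deadlocked.add(state)        # nobody can move, yet work remains
        stack.extend(successors)
    return deadlocked
```

No single thread's code contains the deadlock: it only exists in the product of the two worlds, which is exactly what our heads are bad at holding.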

I think it's a wonder that we get concurrency right as often as we do - and that we do so as much by careful attention to detail and dumb luck as by good planning or any natural aptitude for this type of thing.

Now, before you jump up and down and tell me that "insert-your-favourite-concurrency-model-here addresses these problems", I know there are strategies for managing the complexity of these tasks. I did concurrency theory for a living, so I know some approaches are better than others. My point here was essentially to illustrate why it is important to manage complexity when doing concurrency. It's definitely not just a case of "you can't do concurrency so that means you are stupid". I'm arguing that pretty much everyone is bad at concurrency beyond some simple patterns of interaction that we cling to because we can maintain a good mental model by using them.

In my next exciting installment, I'll talk about how I think we can improve the state of the art. Spoiler alert: it involves Bigraphs, and some ideas that I hope I might be able to turn into a viable commercial proposition one of these days.

Oh, also, I recall reading a really good paper about the problems with syntax for representing terms in process algebras a few years ago, and I can't for the life of me remember what it was. If anyone has any ideas what it might have been, let me know. I'll give you a cookie.

Sunday, 9 August 2009

When are we going to stop writing code?

Programmers, generally speaking, like writing code. It seems obvious, but it's important to the point I would like to make.

Software defects arise from writing code. Sure, there are classes of errors which arise as a result of programmers or stakeholders actually just getting requirements or specifications wrong, but mistakes in understanding or requirements only manifest once they are translated (by flawed, fallible humans) into something executable.

So a very simple way to greatly reduce the number of defects that exist in our software seems to be to stop humans writing code. By pushing more of the work into compilers and tools (which we can verify with a high degree of confidence), we reduce the areas where human error can lead to software defects.

We're already on this path, essentially. Very few people write very low-level code by hand these days. We rely on compilers to generate executable code for us, which allows us to work at a higher level of abstraction where we are more likely to be able to analyse and discover mistakes without needing to run the program.

Similarly, type systems integrated with compilers and static analysis tools remove the burden on us as programmers to manually verify certain runtime properties of our systems. Garbage collectors remove humans from the memory-allocation game altogether.

See what I'm getting at? We have progressively removed bits of software development from the reach of application developers. Similarly, the use of extensive standard libraries packaged with mainstream programming languages (hopefully!) means programmers no longer need to create bespoke implementations of often-used features. The less code a programmer writes, the fewer chances he or she has to introduce errors (errors in library implementations are a separate issue - however, a finite amount of code re-used by many people is likely to become far more reliable over time than a bespoke implementation used in just one program).

The rise of various MVC-style frameworks that generate a lot of boilerplate code (e.g. Ruby on Rails, CakePHP, etc.) further shrinks the sphere of influence of the application developer. In an ideal world, we would be able to use all of these sorts of features to ensure that we essentially just write down the interesting bits of our application functionality, and the surrounding tools ensure global consistency is maintained. As long as we can have a high degree of confidence in our tools, we should be producing very few errors.

There is one basic problem: it doesn't go far enough.

Despite their best intentions, Ruby on Rails and CakePHP are basically abominations. I speak only of these two in particular because I've had direct experience with them. Perhaps other such frameworks are not awful. The flaws in both frameworks can essentially be blamed on their implementation languages, and the paradigm that governs their implementations. Without any kind of type safety, and with very little to help the programmer avoid making silly mistakes (e.g. mis-spelling a variable name), we can't really have a high degree of confidence in these tools.

Compilers, on the other hand, are generally very good. I have a high degree of confidence in most of the compilers I use. Sure, there are occasional bugs, but as long as you're not doing safety-critical development, most compilers are perfectly acceptable.

So why are there still defects in software? First, most new developments still use old tools and technologies. If any kind of meritocracy were in operation, I would guess that very little new software other than OS kernels and time-critical embedded systems would be written in C, but that's simply not the case. Many things that would make us much better programmers (by preventing us from meddling in parts of the development!) are regarded as "too hard" for the average programmer. Why learn how to use the pre-existing implementation that has been tested and refined over many years when you can just roll your own, or keep doing what you've always done? Nobody likes to feel out of their depth, and clinging tight to old ideas is one way to prevent this.

Having done quite a bit of programming using technologies that are "too hard" (e.g. I'm a big fan of functional programming languages such as ML and Haskell), I think that if you use these technologies as they are designed to be used, you can dramatically reduce the number of defects in your software. I know I criticised methodology "experts" in my previous post for using anecdotal evidence to support claims, but this isn't entirely anecdotal. A language with a mathematical guarantee of type safety removes even the possibility of deliberately constructing programs that exhibit certain classes of errors. They simply cannot happen, and we can have a high degree of confidence in their impossibility. As programmers, we do not even need to consider contingencies or error handling for these cases, because the compiler will simply not allow them to occur. This is a huge step in the right direction. We just need more people to start using these sorts of approaches.
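
As a flavour of the idea (sketched here in Python for accessibility - the actual guarantee comes from a compiler like ML's or Haskell's, which Python does not provide): an optional result modelled on ML's "option" type can only be used by first taking it apart, so "forgot to check for a missing value" stops being an expressible program.

```python
# A sketch of ML's "'a option" / Haskell's "Maybe a": lookup never hands
# back a raw value, so the caller is forced to handle both the present
# and absent cases. In ML or Haskell the compiler *rejects* any program
# that skips the check; Python can only express the shape of the idea.
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

T = TypeVar("T")

@dataclass
class Some(Generic[T]):
    value: T

@dataclass
class Nothing:
    pass

Option = Union[Some[T], Nothing]

def lookup(table: dict, key: str) -> "Option[int]":
    return Some(table[key]) if key in table else Nothing()

# Taking the value apart handles both cases; there is no way to "just use"
# the result and hit the missing-value error at some distant point later.
def with_default(opt: "Option[int]", default: int) -> int:
    return opt.value if isinstance(opt, Some) else default
```
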

So, the title of this post was "When are we going to stop writing code?", and I ask this with some seriousness. As we shrink the range of things that programmers are responsible for in software development, we shrink the set of software defects that they can cause. Let's keep going! I believe it is very nearly within our reach to stop writing software and start specifying software instead. Write the specification, press a button and have a full implementation that is mathematically guaranteed to implement your specification. Sure, there may be bugs in the specification, but we already have some good strategies for finding bugs in code. With a quick development cycle, we could refine a specification through testing and through static analysis. We can build tools for specifications that ensure internal consistency. And as in the other situations where we have been able to provide humans with more abstract ways to represent their intentions, it becomes much easier for a human to verify the correctness of the representation with respect to their original intentions, without the need to run a "mental compiler" between code and the expected behaviour. This means we can leave people to solve problems and let machines write code.

That said, it's probably still not realistic for people to stop writing code tomorrow. The tools that exist today are far from perfect. We're still going to be forced to write code for the foreseeable future. We can get pretty close to the Utopian ideal simply by using the best tools available to us here and now, and in the meantime, I'm going to keep working on writing less code.



I've decided to start collecting some of my thoughts and musings on programming languages and semantics in this blog. This is partially a reaction against repeated appearances of the writings of Louis Savain on various blogs and news sites that I read. I have better things to do with my time than answer any of his over-reaching claims about programming in detail, but I disagree with most of them. The primary reason for concentrating on semantics is that I find too many discussions of programming and software development quickly veer into discussions about syntax. I find discussions about the merits of parentheses or value-judgements about "readability" really dull. A supervisor of mine once memorably said:

"Discussions about syntax are much like reading Shakespeare and then remarking on the excellent quality of the spelling".

And how true that is. Similarly, almost all of the (interesting) things that good programmers do relate to constructing abstractions, maintaining a consistent mental model of a complex system, and having a firm understanding of semantics, not of syntax. To return to the previous analogy, knowing lots about the syntax of a particular programming language is about as useful in the construction of quality programs as knowing how to spell is in the construction of quality plays.

I use "quality" here without expanding on the definition. I think most people could come up with a sufficient definition of "good quality software", which might include things like not crashing, not creating security vulnerabilities, and generally behaving in a sensible manner when faced with both valid and invalid inputs. Software Engineering Tips has recently published an interesting article on what makes a bad programmer, and I find myself generally agreeing with most of the sentiments expressed.

So, I hope to maintain a strong focus on formal methods, programming languages, concurrency and theoretical computer science, hopefully with only passing references to syntax or the particulars of implementations. Similarly, I will likely stay away from the "human factors" involved in software development (e.g. methodology). This is not because I think methodology is not important - quite the opposite is true. I've just found that this is ground that has been endlessly covered in some detail by many authors, generally without a lot of hard evidence. Such discussions generally devolve very quickly into the presentation of various anecdotes around vaguely-defined concepts of "productivity" and "ease of use", followed by some dubious conclusions that don't really follow from the scant evidence. I don't claim that I will be objective or comprehensive. Excellent sites such as Lambda The Ultimate already cover a broad range of topics in programming languages, logic and semantics with much more objective and substantiated viewpoints than I could ever hope to provide. Instead, I will express opinions and a personal viewpoint on developments and topics within the area.

Finally, a little bit about me. I am a programmer by trade. I have worked both on the sorts of everyday applications that are the bread-and-butter of the software industry, as well as research projects involving compilers and software quality. I hope you enjoy my musings, and don't hesitate to comment if you disagree!