I've tried to provide links to projects that are currently working towards these goals in different ways, although the list is unlikely to be complete. Let me know if you think I've missed any significant developments in any of these areas.
Exotic Type Systems
I really like strong, static typing. I'm a big fan of SML, for example, and the combination of type safety with good type inference means big benefits with very little effort. I really don't want to get into the entire dynamic/static typing argument, but needless to say, I think it's worth having your compiler do the work for you. An inability to formulate programs in this way strikes me as a result of a lack of imagination, rather than some inherent limitation. Every time I hear somebody claim that they need the large-scale dynamic behaviour of language X in order to Y, I can't help thinking that I really don't need the bugs.
But strong, static type systems are old news now. More and more languages are integrating them and buying into the dream. But there is even more that a type system can give you! One promising direction: dependent types.
If you've used something like ML or Haskell before, dependent types are a reasonably intuitive extension. Imagine the type signature for a function append that takes two lists and joins them together:
val append : 'a list * 'a list -> 'a list
Brilliant. We've ruled out (at compile time) the possibility of anyone passing lists of different types, or using the result of append as though, say, append([1,2,3],[4,5,6]) returned a list of strings. The signature doesn't, however, tell us very much about the lists themselves, and that's where dependent types come in!
val append : 'a list<n> * 'a list<m> -> 'a list<n+m>
We have now added information about the lengths of the lists. A correct implementation of append must exhibit this behaviour, and a programmer cannot use the result of append in any way that is inconsistent with it. Similarly, we can start to check some interesting properties:
val zip : 'a list<n> * 'b list<n> -> ('a * 'b) list<n>
This tells us (most importantly) that the two lists passed to the zip function must be of the same length (n).
This is an absurdly simple treatment of dependent types, featuring only computation on integer values, and written in an ML-inspired syntax of my own invention. While projects like Dependent ML have restricted dependent type values to simple arithmetic on integers for tractability reasons, dependent types can extend to permitting any value (or expression yielding a value) in the language to be used within a type. This kind of expressive power quickly leads to the ability to construct specifications of programs within the type system, which are then statically checked at compile time.
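In a dependently typed language, a signature like the one for append above is real, checkable code. Here is a minimal sketch in Lean 4; the Vect type and the append definition are my own illustration rather than anything from the projects linked below, and Lean's definition of addition happens to make m + n the convenient index order:

```lean
-- A length-indexed list: the length is part of the type.
inductive Vect (α : Type) : Nat → Type where
  | nil  : Vect α 0
  | cons : {n : Nat} → α → Vect α n → Vect α (n + 1)

-- The type alone guarantees that the output length is the sum of
-- the input lengths; an implementation that returned a list of the
-- wrong length simply would not typecheck.
def Vect.append {α : Type} {n m : Nat} :
    Vect α n → Vect α m → Vect α (m + n)
  | .nil,       ys => ys
  | .cons x xs, ys => .cons x (Vect.append xs ys)
```

The compiler checks the length arithmetic in each branch: the nil case must produce a list of length m + 0, and the cons case one of length (m + k) + 1.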
Links of interest: Coq, ATS, Epigram
Visual Programming Languages
I know, people have been going on about visual programming languages for years. There is just nothing that appealing about them as a concept, and I wouldn't use one; for me at least, there is some kind of instinctive cringe factor about them. But I still think they are a good idea, which makes me suspect that the problems lie in the execution, rather than the theory.
That said, I think they are very likely to provide a good way to manage complexity going forward. I have no idea what such a system might look like, but my experience working with process algebras has always been that one labelled transition system diagram is a whole lot more useful to my understanding than a page full of notation. As the types of systems we want to describe get more complex (particularly with reference to mobile and distributed systems), I think visual languages may be the best way forward.
Bigraphs are interesting in this respect, as the visual representation is (according to Milner's original paper) considered primary, with any textual representation secondary to it. Bigraphs deserve a post of their own (and they will likely get one soon), but I think if we can figure out a way to describe programs and systems visually such that unnecessary complexity is hidden, visual languages might start to catch on.
Links of interest: The Wikipedia Article (see how many there are!?)
Concurrent Programming Languages
I go on about concurrency a lot, but there's a good reason for that: it's interesting! Languages with process algebras hidden inside hold a special place in my heart. The Occam Programming Language is an example of brilliant elegance (as was the Transputer for which it was designed), well ahead of its time. In Occam, processes are concurrent by default. The programmer must deliberately ask for sequential execution, rather than getting sequentiality simply by virtue of the order in which statements are written down, as is the case in most languages. This definitely appeals to me.
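For flavour, here is a rough Python analogue of an Occam channel and a PAR block. The names are mine, and the analogy is loose: a real Occam channel is a fully synchronous rendezvous, which a bounded Queue only approximates.

```python
import queue
import threading

# An Occam channel is synchronous and unbuffered; a Queue with
# maxsize=1 is the closest cheap analogue in Python's stdlib.
chan = queue.Queue(maxsize=1)

def producer():
    for i in range(3):
        chan.put(i)                   # roughly Occam's  chan ! i

received = []

def consumer():
    for _ in range(3):
        received.append(chan.get())   # roughly Occam's  chan ? x

# Roughly Occam's PAR: both processes run concurrently, and the
# program waits for both to finish.
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

print(received)  # [0, 1, 2]
```

In Occam the concurrent composition is the default reading; in Python (as in most languages) it takes this much ceremony, which is rather the point.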
Also of note are languages that implement futures in a clean way. I'm sure many languages do; however, my only real exposure to futures has been through Alice ML. Futures are (I think) a nice way to think about certain concurrent computations, because they can essentially be thought of as a special kind of laziness.
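The "special kind of laziness" view can be sketched with Python's standard library, purely as an analogy (Alice ML's dataflow futures are more pervasive and implicit than this): the computation starts concurrently, and you block only at the moment you demand the value.

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(x):
    # Stand-in for some expensive computation.
    return x * x

with ThreadPoolExecutor() as pool:
    fut = pool.submit(slow_square, 7)  # returns immediately with a future
    # ... the main line of computation carries on here ...
    result = fut.result()              # blocks only when the value is needed

print(result)  # 49
```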
Links: Occam, Erlang, AliceML, Mozart/Oz, Clojure*
* Note: I'm seriously allergic to Lisp and all its relatives. I'm not going to go into the reasons because I'll get flamed, but needless to say, I wouldn't use it.
As great as these languages are, I'd like to see things go a lot further in this direction. Bigraphical Programming Languages seem very promising, although it is still early days. Given that it's possible to represent the lambda calculus within bigraphs, it seems like this could take us a long way towards closing the gap between the way we create software now and the way we would like to create software in the future. Given the number of things that have processors in them these days, the notion of software existing as a process on a single machine seems very limiting. Cloud computing is a good start, but what about totally ubiquitous computing? Programming languages are keeping pace with the way people create software today, but I'm interested in how we'll create software 50 years from now.
More Theorem Proving
Writing software inside a theorem prover is a reasonably rewarding experience. I haven't done a heck of a lot of it, and it's definitely hard work, but the benefits are obvious. Instead of making a mental note or leaving a small comment about checking whether a certain property of your program is true, you write down a lemma and prove it. I think this style of designing software is beyond the reach of most people (it's mostly beyond my reach for anything above a reasonably small size), simply because of the effort involved. Given that most people refuse to engage with type systems that tell them they are wrong, I can't imagine programmers queuing up to be told they are wrong more often.
That said, I think there is definitely room for making greater use of theorem proving "behind the scenes" in programming languages and compilers. I think with tool support, this could be done in such a way that made little impact on the way programmers work. More static analysis can only be a good thing, especially if it gives us the means to provide early feedback to programmers about properties that may or may not be true. Going one step further towards tools that would ask the programmer to clarify their intentions could be very helpful. Hopefully it could be done in a less irritating way than a certain office-assistant paper clip. In any case, I think there is a lot to be gained by shrinking the gap between "programming" and "theorem proving" as separate activities.
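As a tiny taste of the style, here is the "mental note" turned into a machine-checked statement in Lean 4 (the lemma and its name are my own toy example):

```lean
-- Instead of a comment saying "reversing a list twice should give
-- back the original list", we state it and prove it; `simp` finds
-- the proof using the standard library's rewrite rules.
theorem reverse_twice {α : Type} (xs : List α) :
    xs.reverse.reverse = xs := by
  simp
```

The "behind the scenes" vision amounts to tools discharging obligations like this automatically, only surfacing the ones they cannot prove.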
Interesting links: Coq, Isabelle, Daikon, ESC/Java2
Comment: This is well done. Who are you? Do you have an email?

Reply: Thanks. I'm Gian Perrone. I'm a computer scientist by training. My email is my first name, followed by a '.', followed by my last name, followed by gmail.com.