Thursday 24 September 2009

Two features that would improve my life

If you've ever seen From the Earth to the Moon, you might remember the episode about the creation of the Lunar Lander. Tom Kelly is shown standing outside, repeatedly throwing a red ball against a wall while he tries to think of solutions to hard problems. This scene has always stuck with me because I think it is the only cinematic depiction of "problem solving" that has ever seemed familiar.

I cite this scene because it echoes the way I solve problems. I think a lot, and I do strange things in an undirected manner until the problem gives way. When I'm actually creating software, I generally know how most of it is going to work before I even start typing. Writing all the supporting code and boilerplate is not intellectually challenging - and I think this is the majority of the "programming" that a commercial programmer is called upon to do.

In the midst of this programming-as-typing, though, there are often hard problems that need to be solved. These are usually the bits I find myself actually needing to think about - they are hard to reason about, and in many instances I find that I "code around" them. It's the kind of situation where you might write a stub and promise yourself you'll come back and fill it in later, once you understand the problem (and its solution) a little better.

A recent example for me - I was attempting to figure out how to do a novel kind of type inference over complex terms. I wrote an "infer" function that takes a piece of a program and returns a typed piece of program. Then I wrote some tests, and my implementation consisted of just returning "unknown type". Not very useful.
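
To make that concrete, the stub looked something like the following sketch (the term language and the names here are simplified inventions for this post rather than my actual code, but the shape is right):

    (* A toy term language, and a type language with an explicit "unknown". *)
    datatype ty = Unknown
                | Base of string
                | Arrow of ty * ty

    datatype term = Var of string
                  | App of term * term
                  | Lam of string * term

    (* A "typed piece of program" is just a term paired with its type. *)
    type typed_term = term * ty

    (* The stub: everything gets Unknown until I actually understand the
       inference algorithm I'm trying to invent. *)
    fun infer (t : term) : typed_term = (t, Unknown)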

I have a bunch of strategies I usually use when I'm trying to solve these kinds of problems. I scribble on paper a lot, trying to contrive concrete examples that I can start to generalise into an actual algorithm. That's fine at the early stages, and helps my understanding a lot.

Another strategy is to use some sort of REPL (interactive environment) and try to write bits of code that get me closer to my goal. I usually use MLton for development, so this means switching over to SML/NJ so that I can start to "sketch" in code.
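
A sketching session might look something like this - a contrived SML/NJ fragment using the made-up types from the stub above, just to show the flavour:

    - datatype ty = Unknown | Base of string | Arrow of ty * ty;
    datatype ty = Unknown | Base of string | Arrow of ty * ty
    - fun occurs name (Base b) = name = b
    =   | occurs name (Arrow (a, b)) = occurs name a orelse occurs name b
    =   | occurs name Unknown = false;
    val occurs = fn : string -> ty -> bool
    - occurs "int" (Arrow (Base "int", Unknown));
    val it = true : bool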

Often these two approaches are enough to get me most of the way to a solution. If the problem is really hard, they might not be. In these cases, I generally wheel out my trusty theorem prover, Isabelle. At this point, it becomes possible to re-implement the bits of supporting code that I need in order to express the problem, and then start testing my assumptions by actually proving properties about the kinds of functions I'm "sketching". This forces me to make my assumptions and intentions very concrete, because a theorem prover refuses to make educated guesses.

So, having gone through that entire process, I was thinking of something that would make my life better: a language-aware prototyping environment.

Let me explain. I have very little tolerance for (or desire to use) visual programming environments, "CASE tools", or very dynamic languages that permit "rapid prototyping". I'm talking about something that will help while I'm trying to solve hard problems embedded within lots of other code during "real" software development.

Basically, I want to be able to access, from within a prototyping environment, all of the supporting code and boilerplate that I've written in order to express my problem. Having to re-implement things inside Isabelle is a pain, particularly if I'm dealing with problems that are linked to the way things are represented. I have to rebuild all of the structures in another language just to be able to restate the problem I'm trying to solve.

So I want to be able to instantiate a prototyping environment exactly where I would currently just write a stub. I want to be able to see visualisations of data easily. I'd like to be able to do some of the things that I do with the REPL. I'd like to be able to easily save and re-run test cases. So far, so pedestrian.

And this is where it starts to get more interesting. I want to be able to write partial specifications. In this kind of "exploratory programming", I want to be able to apply "fuzzy" constraints, mixing them with valid program code, until they become clearer in my head. I think a good way to do this would be to employ some of the techniques from Qualitative Spatial and Temporal Reasoning (QSTR). These could be quite simple things, for example: "this value gets bigger over time", "this event is going to occur before this one", and, when dealing with complex data structures, "this value exists inside this other one".
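
To give a flavour of what I mean, here is the kind of thing I imagine writing. The syntax below is entirely made up - nothing like it exists today - and the names are only there for illustration:

    (* Entirely hypothetical syntax: a stub whose body is a bundle of
       qualitative constraints rather than executable code. *)
    fun rebalance (t : tree) : tree =
      prototype
        constrain "the depth of the result is no greater than the depth of t";
        constrain "the result contains every element of t";
        constrain "over repeated calls, the depth of the result gets smaller"
      end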

None of this would be directly executable, naturally, but with a range of visualisation options and some consistency checking, it would give you the ability to interact with a partially-written program in ways that you would otherwise have to do manually in a REPL. Deriving new facts from some of the QSTR statements would be useful too - if I have a bit of code saying that x gets bigger over time, then perhaps it could tell me that y will get smaller over time. It could draw me a pretty graph! By being able to mix "real" code with these kinds of qualitative statements, I think you could start to get some interesting results. The trick would be to integrate it into the compiler so that it's not necessary to do this in a "dynamic" way - the stuff that is not directly executable just gets compiled into some code that implements the prototyping environment.

That's the first feature that I think would improve my life. It would serve as a (metaphorical) "red ball" to throw against the wall while you try to solve a hard problem.

The second is a fairly simple extension: generate tests, specifications and documentation from this prototyping process. By including your compiler in the prototyping process, you are actually starting to capture a whole bunch of useful information.

If I specified that A comes before B, then surely there are some useful tests (what happens if B comes before A, for example?). If I made some statement about the depth of a tree versus the number of iterations through a piece of code, then that might be an interesting test or profiling metric.
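
For instance, from an ordering constraint like "openFile happens before readFile", the environment could spit out a test that deliberately violates the ordering. The function, handle and exception names below are hypothetical, purely for illustration:

    (* Hypothetical generated test: perform the events in the "wrong" order
       and check that the violation is actually detected. *)
    fun test_read_before_open () =
      let
        val h = newHandle ()   (* hypothetical helper *)
      in
        (readFile h; raise Fail "readFile succeeded before openFile")
        handle NotOpen => ()   (* the expected failure: B before A *)
      end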

From a documentation standpoint, it seems like you could generate some useful documentation that provides insight into how a "difficult" piece of code actually works, because hopefully it would be possible to capture some of the rationale and the intuitive problem solving process that led to the eventual solution. By keeping track of the revisions made inside the prototyping environment, you essentially have a record of the "story" of the code as it becomes more concrete.

While it would probably be a reasonable amount of work, it's certainly not impossible. I believe the minimal set of functionality that would actually make this a useful feature would be quite small. I think this could be a "killer app" for some programming language - as long as the rest of the language was sane, I think I could be convinced to make a switch.

6 comments:

  1. Very interesting idea... Because our knowledge and intelligence are limited, let the machines help us find the right solutions.
    At some point, though, we won't be able to follow them anymore with our limited capacities, but your approach sounds really useful.

  2. Have you looked into Behavior-Driven Development? Some of what you describe is exactly how I program when using a mocking framework.

    The mock is programmed specifically as a prototype when you test first and then write code.

  3. Aside from the interactive environment (which sounds like a joy to use and a chore to build), I like the idea of writing assertions (in whatever language) that can refer to values of mutable state over time (or locals over recursion depth) without requiring me to implement logging/aggregation of the values as they vary over time. If we're only concerned with values as they are at a given instant/scope, assertions in the native program language are sufficient.

    This reminds me of the more modest wish I've had: annotation of my source code variables with a trace or summary of their typical/random/min/max/mode values. An interactive interface would allow for more full expansion of the value-history, but as a minimum, I'd get a brief summary next to each variable reference of what the variable held at various times.

    At least many new languages (e.g. F#) allow you to meaningfully print anything short of a lambda.

  4. Илья Казначеев: No, I haven't. My initial googling makes me think it's Forth-like. Have I got that wrong?

    uglycode: I've done model-driven development and model-based testing, which at least seems like it should be similar on the surface. Got a link to anything specific?

    Jonathan: I agree it would probably be awful to implement, which is kinda why I wrote a blog post about it rather than writing a blog post announcing the release of it!

    I think your suggestions for things short of an interactive environment are probably quite realistic. I insisted on interactivity simply because I thought it would make the development cycle considerably shorter, but maybe the compiler spitting out as much as it can, along with some profiling-style output, would be very nearly as good.

  5. Right, I see three options controlling the amount of information available in the trace/profile (since ALL the history of the computation may be unwieldy):

    1) noninteractive (short of recompiling with different options). You can still have an interactive tool for navigating/summarizing

    2) runtime configured (no need to recompile the program), but only at startup, i.e. if debugging interactively, you're stuck with the verbosity you had on startup

    3) fully interactive - express summarizing/asserting/filtering on any particular values/scopes mid-execution. I think this is what you want.

    On a vaguely related note, I'm not sure what to make of time-travel debugging, either, as recently supported in gdb (it's been in OCaml for a while in some form) - what would I do with it? A REPL and something like QuickCheck seem more useful. I generally get very little out of the debugger except the opportunity to find invalid state without having to log/print it myself, and (in C++-land) the location of a segfault. For C/C++, valgrind is actually more useful than the debugger.
