A post about Arthur Whitney and kOS made the rounds a few days ago. It concerns a text editor Arthur made with four lines of K code, and a complete operating system he’s working on. These were all built in K, a vector oriented programming language derived from APL. This reminded me that I really need to look at APL after all of the language ranting I’ve done recently.

Note: For the purposes of this post I’m lumping K, J, and the other APL derived languages in with APL itself, much as I’d refer to Scheme or Clojure as Lisps.

After reading up, I’m quite impressed with APL. I’ve always heard that it can do complex tasks in a fraction of the code required by other languages, and be super fast. It turns out this is very true. Bernard Legrand's APL – a Glimpse of Heaven provides a great overview of the language and why it’s interesting.

APL is not without its problems, however. The syntax is very hard to read. I’m sure it becomes easier once you get used to it, but I still spent a lot more time analyzing a single line of code than I would in any other language.

APL is fast and compact for some tasks, but not others. Fundamentally it’s a bunch of operations that work on arrays. If your problem can be phrased in terms of array operations then this is awesome. If it can’t then you start fighting the language and eventually it bites you.

I found anything with control structures to be cumbersome. This isn’t to say that APL can’t do things that require an if statement, but you don’t get the benefits. This code to compute a convex hull, for example, seems about as long as it would be in a more traditional language. Within a factor of two, at least. It doesn’t benefit much from APL’s strengths.

Another challenge is that the official syntax uses non-ASCII characters. I actually don’t see this as a problem. We are a decade and a half into the 21st century and can deal with non-ASCII characters quite easily. The challenge is that the symbols themselves are unfamiliar to most people. I didn’t find it hard to pick up the basics after reading a half hour tutorial, so I think the real problem is that the syntax scares programmers away before they ever try it.

I also think enthusiasts focus on how much better APL is than other languages, rather than simply showing someone why they should spend the time to learn it. They need to show what it can do that is also practical. While it’s cool to be able to calculate all of the primes from 1 to N in just a few characters, that isn’t going to sell most developers because that’s not a task they actually need to accomplish very often.

APL seems ideal for solving mathematical problems, or at least a good subset of them. The problem for APL is that Mathematica, MATLAB, and various other tools have sprung up to do that better.

Much like Lisp, APL seems stuck between the past and the future. For the things it’s really good at, it is too general; more recent specialized tools do the job better. Yet APL isn't general enough to be a good general purpose language. And many general purpose languages have added array processing support (often through libraries) that makes them good enough for the things APL is good at. Java 8 streams and lambda functions, for example. Thus it remains stuck in a few niches like high-speed finance. This is not a bad niche to be in (highly profitable, I’m sure) but APL will never become widely used.

That said, I really like APL for the things it’s good at. I wish APL could be embedded in a more general purpose language, much like regular expressions are embedded in JavaScript. I love the concept of a small number of functions that can be combined to do amazing things with arrays. This is the most important part of APL — for me at least — but it’s hidden behind a difficult notation.

I buy the argument that any notation is hard to understand until you learn it, and with learning comes power. Certainly this is true for reading prose.

Humans are good pattern recognizers. We don’t read by parsing letters. Only children just learning to read go letter by letter. The letters form patterns, called words, that our brains recognize in their entirety. After a while children's brains pick up the patterns and process them whole. In fact our brains are so good at picking up patterns that we can read most English words with all of the letters scrambled as long as the first and last letters are correct.

I’m sure this principle of pattern recognition applies to an experienced APL programmer as well. They can probably look at this

x[⍋x←6?40]

and think: pick six random numbers from 1 to 40 and return them in ascending order.
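For comparison, here is a rough JavaScript equivalent of that one-liner (my own sketch; the function name is made up):

    // Rough equivalent of x[⍋x←6?40]: deal six distinct random
    // numbers from 1 to 40, then return them sorted ascending.
    function deal(n, max) {
        var pool = [];
        for (var i = 1; i <= max; i++) pool.push(i);
        var picked = [];
        for (var j = 0; j < n; j++) {
            var k = Math.floor(Math.random() * pool.length);
            picked.push(pool.splice(k, 1)[0]);
        }
        return picked;
    }

    var x = deal(6, 40).sort(function(a, b) { return a - b; });

Ten characters of APL versus a dozen-odd lines of JavaScript: that’s the compactness argument in a nutshell.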

After a time this mental processing would become natural. However, much like with writing, code needs spacing and punctuation to help the symbolic "letters" form words in the mind of the programmer. Simply pursuing compactness for the sake of "mad skillz props" doesn’t help anyone. It just makes for write-only code.

Were I to reinvent computing (in my fictional JoshTrek show where the computer understands all spoken words with 200% accuracy), I would replace the symbols with actual meaningful words, then separate them into chunks with punctuation, much like sentences.

this

x[⍋x←6?40]
would become

deal 6 of 1 to 40 => x, sort_ascending, index x

The symbols are replaced with words and the ordering swapped, left to right. It still takes some training to understand what it means, but far less. It’s not as compact but far easier to pick up.

So, in summary, APL is cool and has a lot to teach us, but I don’t think I’d ever use it in my daily work.

addendum:

Since writing this essay I discovered Q, also by Arthur Whitney, which expands K’s terse syntax, but I still find it harder to read than it should be.

I have a problem. Sometimes I get something into my head and it sticks there, taunting me, until I do something about it. Much like the stupid song stuck in your brain, you must play the song to be released from its grasp. So it is with software.

Last week I had to spend a lot of time in Windows working on a port of Electron. This means lots of Node scripts and Git on the command line.

Windows Pains

It may sound like it sometimes, but I really don't hate Windows. It's a fine GUI operating system but the command shell sucks. Really, really bad. Powershell is an improvement but still pretty bad. There has to be something better. I don't want to hate myself and throw my laptop across the room while coding. It dampens productivity. This blog was the result of that rage face. A tiny birdy told me things will get a lot better in Windows 10. I sure hope so.

In the past I would have used Cygwin, which is a port of Bash and a bunch of unix utilities. Sadly it never worked very well (getting POSIX compliant apps to run on Windows is just a big ball of pain) and support has dwindled in recent years.

Then something happened. After pondering for a while I realized I didn't actually care about having standard Unix utilities. Really I just want the Bash interface. I want a command line interpreter that has a proper history, tab completion, and directory navigation. I want ls and more and cd. I don't actually care if they are spec compliant and can be used in Bash shell scripts. I don't really care about shell scripts at all, since I write everything in Node now. I just want the interface.

I could make a new shell, something simple that would get the job done. Node is already ported to Windows, it's built around streams, and NPM gives me access to endless existing modules. That's 90% of the work already done. I just need to stitch it together.

Photon

And so Photon was born.

Photon is about 250 lines of JavaScript that give you a command line with ls, cp, mv, rm, rmdir, mkdir, more, pwd, and the ability to call other programs like git. It has a very simple form of tab completion (rather buggy), and uses ANSI colors and tables for formatting. (For some reason there are approximately 4.8 billion ANSI color modules for Node.)

All you need to do is npm install -g photonsh then photonsh to get this:

[image: Photon Shell screenshot]

Most features were trivial to implement. Here is the function for cp.

    cp: function(a,b) {
        // resolve both arguments against the shell's current directory
        var src = path.join(cwd,a);
        var dst = path.join(cwd,b);
        if(!fs.existsSync(src))         return fileError("No such file: ",a);
        if(!fs.statSync(src).isFile())  return fileError("Not a file: ",a);
        // stream the copy so large files don't have to fit in memory
        var ip = fs.createReadStream(src);
        var op = fs.createWriteStream(dst);
        ip.pipe(op);
    },

Pretty much exactly what you would expect. For the buffered editor with history I used Node's built-in readline module, which includes callbacks for tab completion.
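If you haven't used it, the readline approach looks roughly like this (a simplified sketch, not Photon's exact code; the command list is made up):

    // Sketch of a readline-based prompt with tab completion.
    var readline = require('readline');

    var rl = readline.createInterface({
        input: process.stdin,
        output: process.stdout,
        // completer returns [matches, the substring being matched]
        completer: function(line) {
            var commands = ['ls','cp','mv','rm','rmdir','mkdir','more','pwd'];
            var hits = commands.filter(function(c) {
                return c.indexOf(line) === 0;
            });
            return [hits.length ? hits : commands, line];
        }
    });

    rl.setPrompt('photon> ');
    rl.prompt();
    rl.on('line', function(line) {
        // dispatch to the built-in commands here
        rl.prompt();
    });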

The hard part

The grand irony here is that I wrote it because of my Windows pain but have yet to actually run it on Windows. I stopped that Windows porting effort for other reasons, so now I just have this program I randomly wrote. Rather than waste the man-months of effort (okay, it was really only about 3 hours), I figured something like this should be shared with the world so that others might learn from my mistakes.

Speaking of mistakes, Photon is horribly buggy and you probably shouldn't run it. No really, it could totally delete your hard drive and stuff. More importantly, Node TTY support is iffy. It turns out Unix shells are very hard to write because of lots of semi-documented assumptions. Go try to write Xterm sometime. There's a reason few people have done it.

In theory a unix shell is simple. You exec a program and pipe its output to stdout until it's done. The same with input. But what about buffering? But what about ANSI codes? But what about raw keyboard input? Apparently there is a whole world of ad hoc specs for how command line apps do 'interactive' things. Running grep from exec is easy. Running vim is not.

In the end I found pausing Node's own REPL interface then execing with the 'inherit' flag worked most of the time. I'm sure there's a better way to do it, but casual Googling with Bing hasn't found it yet.
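In code form the trick looks something like this (a sketch of the idea; spawnSync is the synchronous API, which keeps the example short):

    // Hand the real TTY to the child, then take it back.
    var spawnSync = require('child_process').spawnSync;

    function runExternal(rl, cmd, args) {
        rl.pause();                                   // stop our own input handling
        spawnSync(cmd, args, { stdio: 'inherit' });   // child gets raw stdin/stdout
        rl.resume();                                  // reclaim the terminal
    }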

Onward!

So where does Photon go from here? I have no idea. There's tons of things you could do with it. Node can stream anything, so copying a remote URL to a local file should be trivial. Or you could build a text mode raytracer. Whatever. The choice is yours. Choose wisely. Or don't. The code will still be here (on github).

Enjoy!

I need to move on to other projects so I’m wrapping up the rest of my ideas in this blog. Gotta get it outta my brainz first.

The key concept I’ve explored in this series is that the code you see in an editor need not be identical to what is stored on disk, or the same as what is sent to the compiler. If we relax this constraint then a world of opportunity opens up. We’ve been writing glorified text files for 40 years. We can do better. Let’s explore.

Keywords

Why can’t you name a variable for? Because in many common languages for is a reserved word. You, as the programmer, aren’t allowed to use for because it represents a particular loop construct. The underlying compiler doesn’t actually care, of course. It doesn’t care about the name of any of your variables or other words in your code. The compiler just needs them to be unique symbols, some of which are mapped to existing constructs like conditionals and loops.

If the compiler doesn’t care then why can’t we do it? Because the parser (the ‘front end’ of the compiler) does care. The parser needs to unambiguously transform a stream of ASCII text into an abstract syntax tree. It’s the unambiguous part that’s the trouble. The syntax restrictions in most common languages are there to make the parser happy. If the parser was magical and could just "know what we meant" then any syntax could be used. Perhaps even syntax that made more sense to the human rather than the computer.

Fundamentally, this is what typographic programming does. It lets us tell the parser which text is what without using specific syntax rules. Instead we use color or font choices to indicate whether a given chunk of text is a variable or keyword or something else. Of course editing in such a system would be a pain, but we already know how to solve that problem. Graphical word processors are proof that it is possible. Before we get to how we solve it let us consider why. Would such a system have enough benefits to outweigh the cost of building it? What new things could we do?

Nothing’s reserved

If we use typography to indicate syntax, then keywords no longer need to be reserved. Any keyword could be used as a variable and any text string could be used as a keyword. You could swap for with fore or thusly. You could even use spaces in keywords, as in for each of. These aren’t very useful examples but the compiler could easily handle them.

With the syntactic restrictions lifted we are free to explore new control flow constructs. How about forever to mean an infinite loop, and 10 times for standard fixed-length loops? It’s all the same to the compiler but the human reading it would better understand the meaning.
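You can approximate the second one in today's languages with a plain helper function (a JavaScript sketch), but the point is that in such a system it would be real syntax, not a library call:

    // Faking a "10 times" construct with a helper (sketch):
    function times(n, body) {
        for (var i = 0; i < n; i++) body(i);
    }

    times(10, function(i) {
        console.log("iteration", i);
    });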

Custom Operators

If nothing is reserved then user defined operators become easy. After all, what is an operator but a function with a one-character name from a restricted character set? In Python, 4 + 5 is just sugar for (4).__add__(5).

With no syntax rules anything could be an operator. Operators could have multiple letter names, or other symbols from the full unicode set. The only reason operators are given special treatment to begin with is because they represent functions which are so commonly used (like arithmetic) that we want a shorthand. With free syntax we can create a shorthand for the functions that are useful to the task at hand rather than the abstract general purpose tasks the language inventors imagined.

Let’s look at something more concrete. Using complex numbers and vectors is common in graphics programming, but we have to use clumsy and verbose syntax in most languages. This is sad. Mathematics already has invented compact notation for these concepts but we can’t use them due to ASCII limitations. Without these limitations we could add complex numbers with the plus sign like this:

A + B

instead of

complexAdd(A,B)

To help the programmer remember these are complex numbers they could be rendered in a different color.

There are two ways to multiply vectors: the dot product and the cross product. They have very different meanings. With full unicode we could use the correct symbols like this:

A ⋅ B  // dot product
A ⨯ B // cross product

No ambiguity at all. It would be too much to expect a language to support every possible notation. Much better instead to have a language that lets the programmer create their own notation.

Customization in Practice

So how would this work in practice? At some point the code must be transformed into something the compiler understands. Let’s postulate a hypothetical language called X. X has no syntax rules, only the semantic rules of its AST. To tell the compiler how to convert the code into the AST we must provide our own rules. Something like this.

fun => function
cross => cross
dot => dot
|x| => magnitude(x)

fun intersection(V,R) {
     return V dot R / |V|;
}

We have now defined a mini language in X which still compiles to the same syntactic structure.

Of course typing all of these rules in every file (or compilation unit) would be a huge pain, so we could include them much as we include external libraries.

@include math.rules

fun intersection(V,R) {
     return V dot R / |V|;
}

Most importantly, not only would the compiler understand these rules but so would the editor. The editor can now indicate that V ⋅ R is valid only if they are both vectors. It could enforce the rules from the rule file. Now our code is limited only by the imagination of our rule writers, not the fixed compiler.

In practice, were X to become popular, we would not see everyone making up their own rules. Instead usage would gather around a few popular rulesets, much as JavaScript gathered around a few popular libraries like jQuery. We might call each of these rulesets dialects, each a particular flavor derived from the base X language. Custom DSLs would become trivial to implement. It would be common for developers to use one or two "standard" dialects for most of their code but use a special purpose dialect for a particular task.

The important thing here is that the language no longer has a fixed syntax. It can adapt and evolve as needed. All without changing the compiler.

How do you edit?

I hope I’ve convinced you that a flexible syntax delimited by typography is useful. Many common idioms like iteration, accumulation, and data structure traversals could be distilled to concise syntax. And if it has problems then we can tweak it.

There is one big problem though. How would you actually edit code like this?

Fortunately this problem has already been solved by graphical word processors. These tools use color, font, size, weight and spacing to distinguish one element from another. Choosing the mode for a variable is as simple as selecting it with the cursor and using a drop down.

Manually highlighting an entire page of code would quickly grow tedious, of course. For common operations, like declaring a variable, the programmer could type a special symbol like @. This tells the editor that the next letters are a variable name. The programmer ends it with @ or by pressing the spacebar or return key. This @ symbol doesn’t exist in the code. It is simply to indicate to the editor that the programmer wants to be in ‘variable’ mode. Once the mode is finished the @’s go away and the text is rendered with the ‘variable’ font. This is no different than using star word star to indicate bold in Markdown text. The stars never appear in the rendered text.

The choice of the @ symbol doesn't matter as long as it's easy with the user's native keyboard. @ is good for US keyboards. French or Russians might use something else.

Resolving Ambiguity

Even using manual markup might become tedious, though. Fortunately the editor can usually figure out the meaning of any given token by using the dialect rules. If the rules indicate that # equals division then the editor can just do the right thing. Using manual highlighting would only be necessary if the dialect itself introduces an ambiguity. (ex: # means division and also the start of a hex value)

What about multiplying vectors? You could type in either of the two proper symbols, but the average keyboard doesn’t support those directly. You’d have to memorize a unicode code point or use a floating dialog. Alternatively, we could use code completion. If you type * then the editor knows this must be either dot or cross product. It provides only those two choices in a drop down, much as we auto-complete method names today.

Using a syntax free language does not fully remove the need to resolve ambiguity, it just moves the resolution process to edit time rather than compile time. This is good. The human is present at edit time and can explain to the computer what is correct. The human is not there at compile time, so any ambiguity must result in an error that the human must come back and fix. Furthermore, resolving the ambiguity need only happen once, when the human types it, not every time the code is compiled. This will further reduce code regressions when other parts of the system change.

Undoubtedly we would discover more edge cases, but these are all solvable. Modern GUI word processors and spreadsheets prove this. A more challenging issue is version control.

Versioning

Code changes over time. It must be versioned. I don’t know why it took 40 years for us to invent distributed version control systems like Git, but at least we have it now. It would be a shame to give that up just as we’ve gotten the world on board. The problem is Git and other VCSs don’t really understand code. They just understand text. There are really only two ways to solve this:

1) modify Git, and the other tools around it (diff viewers, GitHub’s website, etc.) to support binary diffs specific to our new system.

2) make the on disk format be pure text.

Clearly option 1 is a non-starter. One day, once language X takes over the world, we could ask the GitHub team to add support for X diffs, but that’s a long ways off. We have to start with option 2.

You might think I’m going back on what I said at the start. After all, I stated we should no longer be writing code as text on disk, but that is exactly what I am suggesting. What I don’t want is to store the same thing that we edit. From the VCS’s point of view the editor and visual representation are irrelevant. The only thing that matters is the file on disk. X needs a canonical serialization format. Regardless of what tool you use to edit X, as long as it saves to the same format we are fine. This is no different than SQL or HTML. Everyone has their favorite tool, but they all write to the same format.

Canonical Serialization Format

X’s serialization format should obviously be plain text. Plain 7-bit ASCII would be fine, though I think we could handle UTF-8 easily. Most modern diff tools can work with UTF-8 cleanly, so Japanese comments and math symbols would come through just fine.

The X format should also be unambiguous. Variables are marked up explicitly as variables. Operators as operators. There should be no need for the parser to guess at anything or interpret syntax rules. We could use one of the many existing formats like JSON, XML, or even LaTeX. It doesn’t really matter since humans will rarely need to look at them.
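For instance, the intersection function from above might serialize to something like this (purely illustrative; a real format would carry much more detail):

    { "type": "function",
      "name": "intersection",
      "params": ["V", "R"],
      "body": [
        { "type": "return",
          "expr": { "op": "divide",
                    "left":  { "op": "dot",       "args": ["V", "R"] },
                    "right": { "op": "magnitude", "args": ["V"] } } }
      ] }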

But.... since we are defining a new serialization format anyways, there are a few useful things we could add.

Code as Graph

Code is really just a graph. Graphs can be serialized in many ways. Rather than using function names inline they could be represented by identifiers which point to a lookup table. Then, if a function is renamed the code only changes in one place rather than at every point in the code where the function is used. This creates a semantic diff that the diff tool could render as ‘function Y renamed to Z’.

v467 = foo
v468 = bar
v469 = baz

fun v467 () {
return v468 + v469;
}

Semantic diff-ing could be very powerful. Any refactoring should be reducible to its essential meaning: moved X to a new class or extracted Y from Z. Whitespace changes would be ignored (or never stored in the first place). Commit messages could be context aware: changed X in the unit test for Y and added logging to Z. Our current tools just barely understand when a file has been renamed instead of deleted and a new one added. There’s a lot of room for innovation here.

Wrap Up

I hope I’ve convinced you there is value in this approach. Building language X still won’t be easy. To be viable we have to make a compiler, useful dialect definitions, and a visual editor; all at the same time. That’s a lot of work before anyone else can use it. Building on top of existing tools like Eclipse or Atom.io would help, but I know it’s still a big hill to climb. Trust me. The view will be worth it.

Lately I've been digging into Rust, a new programming language sponsored by Mozilla. They recently rewrote their docs and announced a roadmap to 1.0 by the end of the year, so now is a good time to take a look at it. I went through the new Language Guide last night then wrote a small ray tracer to test it out.

One of the biggest success stories of the last three decades of programming is memory safety. C and C++ may be fast but it's very easy to have dangling pointers and buffer overflows. Endless security exploits come back to this fundamental limitation of C and C++. Raw pointers == lack of memory safety.

Many modern languages provide this memory safety, but they do it at runtime with references and garbage collection. JavaScript, Java, Ruby, Python, and Perl all fall into this camp. They accomplish this safety at the cost of runtime speed. While they are typically JITed today instead of interpreted, they are all slower than C/C++ because of their runtime overhead. For many tasks this is fine, but if you are building something low level or where speed really matters, then you probably go back to C/C++ and all of the problems it entails.

Rust is different. Rust is a statically typed compiled language meant to target the same tasks that you might use C or C++ for today, but its whole purpose in life is to promote memory safety. By design, Rust code can't have dangling pointers, buffer overflows, or a whole host of other memory errors. Any code which would cause this literally can't be compiled. The language doesn't allow it. I know it sounds crazy, but it really does work.

Most importantly, Rust achieves all of these memory safety guarantees at compile time. There is no runtime overhead, making the final code as fast as C/C++, but far safer.

I won't go into how all of this works, but the short description is that Rust uses several kinds of pointers that let the compiler prove who owns memory at any given moment. If you write up a situation where the compiler can't predict what will happen, it won't compile. If you can get your code to compile then you are guaranteed to be memory safe.

I plan to do all of my native coding in Rust when it hits 1.0. Rust has a robust FFI so interfacing with existing C libs is quite easy. Since I absolutely hate C++, this is a big win for me. :)

Coming into Rust I was worried the pointer constraints would make writing code difficult, much like puzzle languages. I was pleasantly surprised to find it pretty easy to code in. It's certainly more verbose than a dynamic language like JavaScript, but I was able to convert a JS ray tracer to Rust in about an hour. The resulting code roughly looks like what you'd expect from C, just with a few differences. Let's take a look.

First, the basic type definitions. I created a Vector, Sphere, Color, Ray, and Light class. Rust doesn't really have classes in the C++/Java sense, but it does have structs enhanced with method implementations, so you can think of them similar to classes.

use std::num;


struct Vector {
    x:f32,
    y:f32,
    z:f32
}

impl Vector {
    fn new(x:f32,y:f32,z:f32) -> Vector {
       Vector { x:x, y:y, z:z }
    }
    fn scale(&self, s:f32)    -> Vector  { 
       Vector { x:self.x*s, y:self.y*s, z:self.z*s }
    }
    fn plus(&self, b:Vector)  -> Vector  { 
      Vector::new(self.x+b.x, self.y+b.y, self.z+b.z)
    }
    fn minus(&self, b:Vector) -> Vector  {
       Vector::new(self.x-b.x, self.y-b.y, self.z-b.z) 
    }
    fn dot(&self, b:Vector)   -> f32     { 
      self.x*b.x + self.y*b.y + self.z*b.z 
    }
    fn magnitude(&self)       -> f32     { 
      (self.dot(*self)).sqrt() 
    }
    fn normalize(&self)       -> Vector  { 
      self.scale(1.0/self.magnitude())  
    }
}

struct Ray {
    orig:Vector,
    dir:Vector,
}

struct Color {
    r:f32,
    g:f32,
    b:f32,
}

impl Color {
    fn scale (&self, s:f32) -> Color {
        Color { r: self.r*s, g:self.g*s, b:self.b*s }
    }
    fn plus (&self, b:Color) -> Color {
        Color { r: self.r + b.r, g: self.g + b.g, b: self.b + b.b }
    }
}

struct Sphere {
    center:Vector,
    radius:f32,
    color: Color,
}

impl Sphere {
    fn get_normal(&self, pt:Vector) -> Vector {
        return pt.minus(self.center).normalize();
    }
}

struct Light {
    position: Vector,
    color: Color,
}

Without knowing the language you can still figure out what's going on. Types are specified after the field names, with f32 meaning a 32-bit floating point value (and i32 a 32-bit integer). There's also a slew of finer grained number types for when you need tight memory control.

Next up I created a few constants.

static WHITE:Color  = Color { r:1.0, g:1.0, b:1.0};
static RED:Color    = Color { r:1.0, g:0.0, b:0.0};
static GREEN:Color  = Color { r:0.0, g:1.0, b:0.0};
static BLUE:Color   = Color { r:0.0, g:0.0, b:1.0};

static LIGHT1:Light = Light {
    position: Vector { x: 0.7, y: -1.0, z: 1.7} ,
    color: WHITE
};

Now in my main function I'll set up the scene and create a lookup table of one letter strings for text mode rendering.

fn main() {
    println!("Hello, worlds!");
    let lut = vec!(".","-","+","*","X","M");

    let w = 20*4i;
    let h = 10*4i;

    let scene = vec!(
        Sphere{ center: Vector::new(-1.0, 0.0, 3.0), radius: 0.3, color: RED },
        Sphere{ center: Vector::new( 0.0, 0.0, 3.0), radius: 0.8, color: GREEN },
        Sphere{ center: Vector::new( 1.0, 0.0, 3.0), radius: 0.3, color: BLUE }
        );

Now let's get to the core ray tracing loop. This looks at every pixel to see if its ray intersects with the spheres in the scene. It should be mostly understandable, but you'll start to see the differences with C.

    for j in range(0,h) {
        println!("--");
        for i in range(0,w) {
            //let tMax = 10000f32;
            let fw:f32 = w as f32;
            let fi:f32 = i as f32;
            let fj:f32 = j as f32;
            let fh:f32 = h as f32;

            let ray = Ray {
                orig: Vector::new(0.0,0.0,0.0),
                dir:  Vector::new((fi-fw/2.0)/fw, (fj-fh/2.0)/fh,1.0).normalize(),
            };

            let mut objHitObj:Option<(Sphere,f32)> = None;

            for obj in scene.iter() {
                let ret = intersect_sphere(ray, obj.center, obj.radius);
                if ret.hit {
                    objHitObj = Some((*obj,ret.tval));
                }
            }

The for loops are done with a range function which returns an iterator. Iterators are used extensively in Rust because they are inherently safer than direct indexing.

Notice the objHitObj variable. It is set based on the result of the intersection test. In JavaScript I used several variables to track if an object had been hit, and to hold the hit object and hit distance if it did intersect. In Rust you are encouraged to use options. An Option is a special enum with two possible values: None and Some. If it is None then there is nothing inside the option. If it is Some then you can safely grab the contained object. Options are a safer alternative to null pointer checks.

Options can hold any object thanks to Rust's generics. In the code above I tried out something tricky and surprisingly it worked. Since I need to store several values I created an option holding a tuple, which is like a fixed size array with fixed types. objHitObj is defined as an option holding a tuple of a Sphere and an f32 value. When I check if ret.hit is true I set the option to Some((obj,ret.tval)), meaning the contents of my object pointer and the hit distance.

Now let's look at the second part of the loop, once ray intersection is done.

            let pixel = match objHitObj {
                Some((obj,tval)) => lut[shade_pixel(ray,obj,tval)],
                None             => " "
            };

            print!("{}",pixel);
        }
    }

Finally I can check and retrieve the option values using an if statement or a match. Match is like a switch/case statement in C, but with super powers. It forces you to account for all possible code paths; the compiler rejects a match with a missing case. In the code above I match the Some and None cases. In the Some case it pulls out the nested objects and gives them the names obj and tval, just like the tuple I stuffed into it earlier. This is called destructuring in Rust. If there is a value then it calls shade_pixel and returns the character in the lookup table representing that grayscale value. If the None case happens then it returns a space. In either case we know the pixel variable will have a valid value after the match. It's impossible for pixel to be null, so I can safely print it.

The rest of my code is basically vector math. It looks almost identical to the same code in JavaScript, just strongly typed.

fn shade_pixel(ray:Ray, obj:Sphere, tval:f32) -> uint {
    let pi = ray.orig.plus(ray.dir.scale(tval));
    let color = diffuse_shading(pi, obj, LIGHT1);
    let col = (color.r + color.g + color.b) / 3.0;
    (col * 6.0) as uint
}

struct HitPoint {
    hit:bool,
    tval:f32,
}

fn intersect_sphere(ray:Ray, center:Vector, radius:f32) -> HitPoint {
    let l = center.minus(ray.orig);
    let tca = l.dot(ray.dir);
    if tca < 0.0 {
        return HitPoint { hit:false, tval:-1.0 };
    }
    let d2 = l.dot(l) - tca*tca;
    let r2 = radius*radius;
    if d2 > r2 {
        return HitPoint { hit: false, tval:-1.0 };
    }
    let thc = (r2-d2).sqrt();
    let t0 = tca-thc;
    //let t1 = tca+thc;
    if t0 > 10000.0 {
        return HitPoint { hit: false, tval: -1.0 };
    }
    return HitPoint { hit: true, tval: t0}

}

fn clamp(x:f32,a:f32,b:f32) -> f32{
    if x < a { return a;  }
    if x > b { return b; }
    return x;
}

fn diffuse_shading(pi:Vector, obj:Sphere, light:Light) -> Color{
    let n = obj.get_normal(pi);
    let lam1 = light.position.minus(pi).normalize().dot(n);
    let lam2 = clamp(lam1,0.0,1.0);
    light.color.scale(lam2*0.5).plus(obj.color.scale(0.3))
}

That's it. Here's the final result.

[image: the ray traced spheres rendered in text mode]

So far I'm really happy with Rust. It has some rough edges they are still working on, but I love the direction they are going. It really could be a replacement for C/C++ in lots of cases.

Buy or no buy? Buy! It's free!

After the more abstract talk I’d like to come back to something concrete. Regular Expressions, or regex, are powerful but often inscrutable. Today let’s see how we could make them easier to use through typography and visualization without diminishing that power.

Regular Expressions are essentially a mini language embedded inside of your regular language. I’ve often seen regex written like this,

new Regex("^\\s+([a-zA-Z]|[0-9])\\w+\\$$") // regex for special vars

then sometimes reformatted like this

//regex for special vars
new Regex(
     "^"                  // start of line
     + "\\s+"             // one or more whitespace characters
     + "([a-zA-Z]|[0-9])" // one letter or number
     + "\\w+"             // one or more word characters
     + "\\$"              // the literal dollar sign
     + "$"                // end of line
)

The fact that the author had to manually split up the text and add comments simply screams for a better syntax. Something far more readable but still terse. It would seem that readability and conciseness would be mutually exclusive. Anything more readable would also be far longer, eliminating much of the value of regex, right?

Au contraire! We have a special power at our disposal: we can render whatever we want in the editor. The compiler doesn’t care as long as it receives the description in some canonical form. So rather than choosing readable vs terse we can cheat and do both.

Font Styling

Let’s start with the standard syntax but replacing delimiters and escape sequences with typographic choices. This regex looks for variable names in a made up language. Variables must start with a letter or number, followed by any word character, followed by a dollar sign. Here is the same regex using bold italics for all special chars:

[image: the regex with special characters set in bold italics]

Those problematic double escaped backslashes are gone. We can tell which dollar sign is a literal and which is a magic variable thanks to font styling.

Color and Brackets

Next we can turn the literals green and the braces gray, so it’s clear which parts are actual words. We can also replace the ^ and $ with symbols that actually look like the beginning and end of a line: opening and closing ceiling brackets, or floor brackets.

[image: the regex with green literals and line-boundary bracket symbols]

Now one more thing: let’s replace the w and s magic characters with something that looks even more different from plain text: letter boxes. These are actual Unicode characters from the Enclosed Alphanumeric Supplement block.

[image: the regex with boxed letter glyphs for the magic characters]

Now the regex is much easier to read but still compact. However, unless you are really familiar with regex syntax you may still be confused. You still have a bunch of specific symbols and notation to remember. How could we solve this?

Think Bigger

Let’s invent a second notation that is even easier to read. We already have a clue of how this should work. Good programmers write the regex vertically with comments as an ad hoc secondary notation. Let’s make it official.

If we have two views then the editor could switch between them as desired. When the user clicks on the regex it will expand to something like this:

[image: the regex expanded into a spreadsheet-like detail view]

We get what looks like a tiny spreadsheet where we can edit any term directly, using code completion so we don’t have to remember the exact terms. Furthermore the IDE can show the extra documentation hints only when we are in this "detailed" mode.

Even with this though, there is still a problem with regex. Just by looking at it you can’t always tell what it will match and what it won’t. Without a long comment, how do programmers today know what a regex will match? They use their brains. Yes, they actually look at the text and imagine what it will match in their heads. They essentially simulate the regex mentally. This is totally backwards! The whole point of a computer is to simulate things for us, not the other way around. Can’t we make the computer do its job?!

Simulate

Instead of simulating the regex mentally, let’s give it some strings and have the computer tell us what will match. Let’s see what that would look like:

[image: a regex tester fed with example strings]

We can type in as many examples as we want. Having direct examples also lets us test out the edge cases, which is often where regexes fail.
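The core of such a tester is almost embarrassingly small (a JavaScript sketch; the example strings are made up):

    // Run the regex against example strings and report what matches.
    var regex = /^\s+([a-zA-Z]|[0-9])\w+\$$/;

    var examples = ["  foo$", "bar$", "  baz"];

    examples.forEach(function(ex) {
        console.log(JSON.stringify(ex), regex.test(ex) ? "match" : "no match");
    });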

Hmm. You know… this is basically some unit tests for the regex. If we just add another column like this:

[image: the tester with an expected-result column; the failing test is highlighted]

then we can see at a glance if the regex is working as expected. In this example the last test is failing so it is highlighted in red.
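In code the idea just grows an expected column (again a sketch with made-up cases; the last one fails, as in the screenshot):

    // Unit tests for the regex: fail when actual != expected.
    var regex = /^\s+([a-zA-Z]|[0-9])\w+\$$/;

    var tests = [
        { input: "  foo$", expect: true  },
        { input: "bar$",   expect: false },
        { input: "  b$",   expect: true  }   // fails: \w+ wants one more char
    ];

    tests.forEach(function(t) {
        var ok = regex.test(t.input) === t.expect;
        console.log(ok ? "pass" : "FAIL", JSON.stringify(t.input));
    });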

Most importantly, the unit tests are right next to the regex in the code; right where they are used. If we collapse the triangle then everything goes away. It’s still there, just hidden, until we need it.

This is the power of having a smart IDE and flexible syntax. Because these are only visualization changes it would work with any existing compiler. No new language required.

I’ve talked a lot about ways to improve the syntax and process of writing code. In Typographic Programming Language, Fonts, and Tabs Vs Spaces I covered the details of how to improve programming. However, I haven't really talked about my larger vision. Where am I actually going with this?

My real goal is to build the Ultimate IDE and Programming Language for solving problems cleanly and simply. Ambitious much?

Actually, my real goal is to create the computer from Star Trek (the one with Majel Barrett's voice).

Actually, my real goal is to create a cybernetically enhanced programmer.

Okay. Let’s back up a bit.

The Big Picture

When we are programming, what are we really doing? What is the essence? Once we figure that out we should be able to work backwards from there.

Programming is when you have a problem to solve and you tell the computer how to solve it for you. That’s really it. Everything we do comes back to that fundamental. Functions are ways of encapsulating solving stuff in small chunks so we can reason about it. The same thing with objects (often used to "model" what we are solving). Unit tests are to help verify what we are solving, though it often helps to define the problem as well. At the end of the day it's all to serve the goal of teaching the computer how to solve problems for us.

So how could we compare one PL against another?

Metrics

From here out I'm going to use Programming Language, PL, to mean the entire system of IDE, compiler, editor, build tools, etc. The entire system that lets you have the computer solve a problem for you.

How can we say that one PL is better than another? By what metric? Well, how about what lets you solve the problem the fastest, or the easiest, or with the least headache. That sounds nice and is certainly true, but it's rather subjective. We need something more empirical.

Hmm. That's not very helpful. Let's back up again.

Paper and Pencil

What is a pad of paper and a pencil? It’s a thinking device. If you want to add up some numbers you can do it in your head, but after more than a few numbers it becomes tricky to manage them all. So we write them down. We outsource part of the process to some paper.

If we want to do division we can do it with the long division process. This is actually not the most efficient way to divide numbers, but it works well because you can have the paper manage all of the state for you.

What if you need to remember a list of things to do? You write it down on paper. The paper becomes an extension of your brain. It is a tool for thinking. This perhaps explains some people’s fetish for Moleskine-style sketchbooks (not that I would ever invest in a Kickstarter for sketchbooks).

A Tool for Thinking

If we think of a programming language as a tool for thinking, then it becomes clear. The PL helps you tell the computer what to do. So a good PL would help you with the parts of programming that are hard: namely, keeping state in your head. A bad PL requires you to remember a lot of things.

For example, suppose you write a function that calls another function named 'foo'. You must remember what this other function does and what it accepts and what it returns. If the function is named well, say 'increment', then the function name itself helps your brain. You have less to remember because the function name carries information.

Now suppose we have type information in our PL. The function increment only takes numbers. Now I don’t have to remember that it only takes numbers; the compiler will enforce it for me. Of course I only find this out when I compile the code. To make the PL give me less to remember the IDE can compile constantly, giving me an immediate warning when I do something bad. Code completion can also help by only suggesting local values that are numbers.

With these types of features the PL acts as a tool for thinking. An extension of the brain by holding state.

So we can say a PL is “better” if it reduces the cognitive load of the programmer. Things like deterministic performance and runtime speed are nice, but they are (or at least should be) secondary to reducing the cognitive load of the programmer. Of course a program which is too slow to be used is a failure, so ultimately it does matter. However computers get faster. The runtime performance is less of a concern than the quality of the code. Perhaps this explains the resurgence in functional programming styles. The runtime hit matters less than it did 30 years ago, so we can afford to waste cycles on things which reduce the cognitive load of the programmer.

Regex

Now, where were we? Oh, right; the Fire Swamp.

Let's look at a more concrete example. Regular expressions. Regexes are powerful. They are concise. But a given regex is not easy to read if you didn’t write it yourself; or even if you did but you didn’t just write it. Look at your regex from 6 months ago sometime. I hope you added good documentation.

A regex conveys a lot of information. You have a lot to load up into your brain. When you look at a regex to see what it does you have to start simulating it in your brain. Your brain basically becomes a crappy computer that executes the regex on hypothetical strings.

That's crazy. Why are we simulating a state machine in our heads? That’s what we have computers for! To simulate things. We should have the PL show the regex to us in an easier to understand form, or perhaps even multiple forms. With unit tests. And visualizers. Or an embedded regex simulator to show how it works. I have a lot to say about regular expressions in an upcoming blog, but for now I'll just say they are some extremely low hanging fruit in the mission to reduce cognitive load.

Expanding Complexity

Now you might think, if we make a PL which sufficiently reduces cognitive load then programmers will have little to do. Programming will become so easy that anyone could do it. True, this might happen to some degree. In fact, I would argue that letting novices program is actually a good thing (though we would call it problem description rather than programming). However, in general this won’t happen. As we decrease the cognitive load we increase the complexity of the programs we can make.

There seems to be a limit to how much complexity the human brain can handle. This limit varies from person to person of course. And it is affected by your health, hunger level, stress, tiredness, etc. (Never code on an empty stomach.) But there is a limit.

Historically better tools have reduced the complexity at hand to something below the cognitive limit of the programmer. So what did we do? We tackled more complex tasks.

Josh’s first postulate of complexity: Just as data always expands to fill available disk space, programming tasks always increase in complexity to fit our brain budget.

Offloading complexity to tools merely allows us to tackle bigger problems. It’s really no different than the brain boost we got from inventing paper and pencil, just separated by a few thousand years. [1]

Now I think we can agree that a PL which reduces more cognitive load of the human is better than one which reduces less. That's our metric. So how can we turn this into something actionable? Does it suggest possible improvements to real world programming languages?

The answer is yes! If we stop focusing on PL implementations but rather the user experience of the programmer, then many ideas become readily available. Everything I’ve discussed in previous blogs is driven by this core principle. Showing a color or image inline reduces cognitive load because you don’t have to visualize in your head what the color or image actually looks like. The editor can just do it for you. This is just the beginning.

My Forever Project

These are ideas I’ve been working on a long time. In fact I recently realized some of these ideas have been in my head for twenty years. I found evidence of tools I (attempted) to write from my college papers. It’s only recently that everything has started to gel into an expressible form. It’s also only recently that we’ve had the "problem" of computers with too much computational power going to waste. This is my Forever Project.

There’s so much possibility now. If a notepad and a text editor are our current cybernetic enhancements, what else could we build? Could Google Glass help you with programming? How about an Oculus Rift? Or using an iPad as an extra screen? The answer is Yes! We could definitely use these to reduce cognitive load while interacting with a computer. We just might not call all of these tasks "programming" but they are (or will be shortly as software continues to eat the world).

deep breath

My concept summarized: Programming systems should not be thought of as ways to make a computer do tricky things. They are ways to make a computer solve problems for you. Problems you would have to do in your head without them (assuming you could do them at all). Thus PLs are tools for thinking. This gives us a metric. Do particular PLs help us think better and solve problems better? As it turns out, most PLs fail miserably by this metric. Or at least they could be a whole lot better. Many ideas from the 60s and 70s still remain unimplemented and unused.

But don’t be sad. Our field is less than 100 years old. We are actually doing pretty well. It took a few thousand years to invent suspension bridges and they still don’t work 100% of the time.

So from now on let us consider how to make better tools for thinking. Everything Bret Victor has been doing, and Jonathan Edwards, and even the work from the 70s & 80s that Alan Kay did with Smalltalk and the Dynabook come back to the same thing: building better tools for thinking. With better tools we can think better thoughts. With better tools we can solve bigger problems.

So let’s get crackin'!

footnote: [1] Before someone calls me on this, I have no idea when paper and pencil were first invented for the purposes of reducing problem complexity. Probably in ancient Egypt for royal bookkeepers. I'll leave that as an exercise to the reader.

So far my posts on Typographic Programming have covered font choices and formatting. Different ways of rendering the source code itself. I haven’t covered the spacing of the code yet, or more specifically: indentation. Or even more specifically: tabs vs spaces.

Put on your asbestos suits, folks. It’s gonna get hot in this kitchen.

Traditionally source code has been rendered with a monospace font. This allows for manual horizontal positioning with spaces or tab characters. Of course the tab character doesn’t have a defined width (I’ll explain why in a moment) so flame wars have erupted around spaces vs tabs, on par with the great editor wars of the last century. Ultimately these are pointless arguments. Tabs vs spaces is an artifact of trying to render code into a monospace grid of characters. It’s the 21st century! We can do better than our dads' 1970s terminals. In fact, they did better in the 19th century!

In The Beginning

Let’s start at the beginning. Fixed whitespace indenting can be used to line things up so they become pretty, and therefore easier to read. But that's a lot of work. All that pressing of space bars and adjusting when things change.

Instead of manually controlling whitespace, what if we used tab stops? I don’t mean the tab character, which is mapped to either 4 or 8 spaces, but actual tab stops. Yes. They used to be a real physical thing.

[image: typewriter tab stops]

In the olden days, back when we used manual typewriters (I think I was the last high school class to take typing on such machines), there was such a thing as a tabstop. These were vertical brackets along the page (well, along that metal bar at the bottom of the current line). These tiny pieces of metal literally stopped the tabs, thus giving them the name tabstops. We were so creative with names in those days.

When you hit the tab key the cursor (a rapidly spinning metal ball imprinted with the noun: “Selectric”) would jump from the left edge of the paper to the first tabstop. Hit tab again and it would go to the next tabstop. Now of course, these tab stops were adjustable, so you could choose the indenting style you wanted for your particular document.

Let me repeat that. The tabs stops could be adjusted to the indenting style of your particular document. Inherent is the concept that there is no “one right way”, but rather the format must suit the needs of the particular document, or part of a document, that you are writing.

When WYSIWYG editors came along they preserved the notion of a tabstop. They even made it better by giving you nice vertical lines to see the effect of changing the tabstop. When you hit tab the text would move to the stop. If you later move the stop then the text aligned with it will magically move as well. Dynamic tabstops! Yay. We can finally rock like it's the 1990s.

[image: Word for Mac, circa 1991]

Semantic Indentation

So why do we go back to the 1970s with our text editors? Tabstops are a simple concept for semantically (sorta) indenting our code. Let’s see what some code would look like with simple tabular semantic indenting.

Here’s some code with no formatting other than a standard indent.

[image: code with only a standard indent]

This is your typical Cish code with brackets and parameters. It would be nice to line up the parameters with their types. The drawRect code is also similar between lines. We should clean that up too.

Here is code with semantic indenting.

[image: the same code with semantic indenting]
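In spirit it looks something like this (an illustrative sketch with made-up code, not the original screenshot):

    // A made-up example: the editor's tab stops line tokens up in
    // columns, like spreadsheet cells, across declaration and calls.
    function drawRect( ctx,   x,   y,   width,  height ) {
        ctx.fillRect(         x,   y,   width,  height );
    }

    drawRect(          ctxA,  0,   0,   100,    50     );
    drawRect(          ctxB,  10,  10,  80,     30     );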

How would you type in such code? When you hit the tab key the text advances to the next tabstop. These tabstops are dynamic, however. Instead of giving you a line with a ruler at the top, the tabstops automatically expand to fit the text in the column. Essentially they act more like spreadsheet cells than tab stops.

Furthermore, the text will be left aligned at the tabstop by default, but right aligned for text that ends with a comma or other special character. This process is completely automatic and hidden, of course. The programmer just hits the tab key and continues typing, the IDE handles all of the formatting details, as it should be. We humans write the important things (the code) and let the computer handle the busy work (formatting).

The tab stops (or columns if you think of it as a table) don’t extend the entire document. They only go down as far as the next logical chunk. There could be multiple ways to define a ‘chunk’ semantically, but one indicator would be a double space. If you break the code flow with a double space then it will revert to the document / project wide defaults. This lets us use standard indentation for common structures like functions and flow control bracing, while still allowing for custom indentation when needed.

Ludicrous Speed

Furthermore, using semantic indentation could completely remove the need for braces as block delimiters. Semantic indentation can replace where blocks begin and end.

if(x==y) {
     foo
} else {
     bar
}

becomes

if x==y
     foo
else
     bar
and

if(x==y) {
     foo
}

can become

if x==y
     foo

or even

if x==y  foo

using the tab character.

This might be a tad confusing, however, because there is only whitespace between the x==y and the foo. How do we know it's not a space instead? If you hit the tab key, which indicates you are going to the next chunk instead of just a long conditional expression, the editor could draw a light glyph where the tab is. Perhaps a rightward unicode arrow.

Now I know the Rubyists and Pythoners will say that they've already removed the block delimiters. Quite true, but this goes one step further.

Python takes the choice of whitespace away from the programmer, but the programmer still has to type it. With semantic indentation the entire job of formatting is taken away. You just type your code and the editor does the right thing. Such a system also opens the door for alternative rendering of the code in particular circumstances.

Better Fonts

And of course we come to our final advantage. Without manual formatting with spaces we don't need to be restricted to a monospace font anymore. Our code could look like this:

[image: the same code set in a proportional font]

Semantic indenting. Less typing, more readable code. Let’s rock like it’s the 1990s!

Apparently my last post hit HackerNews and I didn’t know it. That’s what I get for not checking my server logs.

Going through the comments I see that about 30% of people find it interesting and 70% think I’m an idiot. That’s much better than usual, so I'm going to tempt fate with another installment. The general interest in my post (and let’s face it, typography for source code is a pretty obscure topic of discussion), spurred me to write a follow up.

In today’s episode we’ll tour the fonts themselves. If we want to reinvent computing it’s not enough to grab a typewriter font and call it a day. We have to plan this carefully, and that starts with a good selection of typefaces.

Note that I am not going to use color or boxes or any other visual indicators in this post. Not that they aren’t useful, but today I want to see how far we could get with just font styling: typeface, size, weight, and style (italics, small caps, etc.).

Since I'm formatting symbolic code, data, and comments, I need a range of typefaces that work well together. I’ve chosen the Source Pro family from Adobe. It is open source and freely redistributable, with a full set of weights and italics. More importantly, it has three main faces: a monospace font, Source Code Pro; a serif font, Source Serif Pro; and a sans-serif font, Source Sans Pro. All three are specifically designed to work together and have a range of advanced glyphs and features. We won't use many of these features today but they will be nice to have in the future.

Let’s start formatting some code. This will be a gradual process: we choose the basic formatting, then build on top of it for different semantic chunks.

For code itself we will stick with the monospace font: Source Code Pro. I would argue that a fixed width font is not actually essential for code, but that’s an argument for another day when we look at indentation. For today we’ll stick with fixed width.

Code comments and documentation will use Source Serif Pro. Why? Well, comments don’t need the explicit alignment of a monospace font, so definitely not Source Code Pro. Comments are prose. The sans serif font would work okay but for some reason when I think "text" I think of serifs. It feels more like prose. More texty.

So I won’t use Source Sans Pro today but I will save it for future use. Using the Source [x] Pro set gives us that option.

Below is a simple JavaScript function set with the default weights of those two fonts. This is the base style we will work from.

[image: the base style: Source Code Pro for code, Source Serif Pro for comments]

So that’s a good start, but I can immediately think of a few improvements. Code (at least in C derived languages) has five main elements: comments, keywords, symbols, literals, and miscellaneous — or what I like to call ‘extraneous cruft’. It’s mainly parentheses and brackets for delimiting function and procedure bodies. It is possible to design a language which uses ordering to reduce the need for delimiters, or to be rid of them completely with formatting conventions (as I talked about last week). However, today’s job is to restyle without changing the code, so let’s leave them unmolested for now. Instead we will minimize their appearance by setting them in a thin weight. (All text is still in black, though.)

Next up is symbols. Symbols are the parts of a program that the programmer can change. These are arguably the most important part of the program, the parts we spend the most time thinking about, so let’s make them stand out with a very heavy weight: bold 700.

[image: the code with the delimiters thinned and the symbols in bold]

Better, but I don’t like how the string literal blends in with the rest of the code. String literals are almost like prose, so let’s show them in serif type, this time with a bolder weight and shrunk a tiny bit (90% of normal).

For consistency I did the same with numeric literals. I’m not sure if ‘null’ is really a literal or a symbol, but you can assign it as a value so I’ll call it a literal.

[image: string and numeric literals set in the smaller serif face]

Next up is keywords. Keywords are the part of the language that the programmer cannot change. They are strictly defined by the language and completely reserved. Since they are immutable it doesn’t really matter how we render them. I could use a smiley face for the function keyword and the compiler wouldn’t care. It always evaluates to the same thing. However, unlike my 3yr old’s laptop, I don’t have a smiley face key on my computer; so let’s keep the same spelling. I do want to do something unorthodox though. Let’s put the keywords in small caps.

Small caps are glyphs the size of lower case letters, but drawn like the upper case letters. To do small caps right you can’t just put your text in upper case and shrink it down. It would look strange. Small caps are actually different glyphs designed to have a similar (but not identical) width and a shorter height. They are hard to generate programmatically. This is one place where a human has to do the hard work. Fortunately we have small caps at our disposal thanks to the great contributions by Logos and type designer Marc Weymann. Open source is a good thing.

[image: keywords set in small caps]

Now we’re getting somewhere. The code has a dramatically different feel.

There’s one more thing to address: the variables. Are they symbols like function names? Yes, but they feel different from function names. They are also not usually prefixed with a parent object or namespace specifier. Really we have three cases: a fully qualified symbol like ‘baz’ in foo.bar.baz, the prefix part (foo.bar), and standalone variables that aren’t qualified at all (like ‘x’). This distinction applies whether the symbol is a function or an object reference (it could actually be both in JavaScript).

In the end I decided these cases are related but distinct. Standalone symbols have a weight of 400. Technically this is the default weight in CSS and shouldn’t appear to be ‘bold’, but since the base font is super light, regular will feel heavier against it. The symbol at the end of a qualifier chain will also be bold, but with a weight of 700. Finally the prefix part will be italics to further distinguish it. There really isn’t a right answer here; other combinations would work equally well, so I just played around until I found something that felt right.

This is the final version:

[image: the final styled code]

I also shrunk the comments to 80%. Again it just felt right, and serifed fonts are easier to read in longer lines, so the comments can handle the smaller size.

Here’s a link to the live mockup in HTML and CSS. This design turned out much better than I originally thought it would. We can do a lot without color and spacing changes. Now imagine what we could do with our full palette of tools. But that will have to wait for next time.

BTW, if you submit this to Hacker News or Reddit please let me know via Twitter so I can answer questions.

Allow me to present a simple thought experiment. Suppose we didn’t need to store our code as ASCII text on disk. Could we change the way we write -- and more importantly read -- symbolic code? Let’s assume we have a magic code editor which can read, edit, and write anything we can imagine. Furthermore, assume we have a magic compiler which can work with the same. What would the ideal code look like?

Well, first we could get rid of delimiters. Why do we even have them? Our sufficiently stupid compilers.

Delimiters like quotes are there to let the compiler know when a symbol ends and a literal begins. That’s also why variables can’t start with a number; the compiler wouldn’t know if you meant a variable name or a numeric literal. What if we could distinguish between them using typography instead?

Here’s an example.

[image: a print statement with the string literal in a green box, no quotes]

This example is semantically equivalent to:

print "The cats are hungry."; //no quotes or parens are needed

Rendering the literal inside a special colored box makes it more readable than the plain text version. We live in the 21st century. We have more typographic options than quotes! Let’s use them. A green box is but one simple option.

Let’s take the string literal example further:

[image: string literals concatenated and interpolated inline]

Without worrying about delimiting the literals we don’t need extra operators for concatenation; just put them inline. In fact, there becomes no difference between string concatenation and variable interpolation. The only difference is how we choose to render them on screen. Number formatting can be shown inline as well, but visually separated by putting the format control in a gray box.

[image: number formatting control shown inline in a gray box]

Also notice that comments are rendered in an entirely different font, and pushed to the side (with complete Unicode support, of course).
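
As an aside, modern JavaScript’s template literals get at the same idea in plain text: concatenation and interpolation become just two spellings of the same thing. A tiny sketch:

var cats = 5;
// concatenation and interpolation express the same thing:
var a = "The " + cats + " cats are hungry.";
var b = `The ${cats} cats are hungry.`; // template literal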

Once we’ve gone down the road of showing string literals differently we could do the same with numbers.

[image: numeric literals rendered typographically]

Operators are still useful of course, but only when they represent a real mathematical operation. It turns out there is a separate glyph for multiplication (it’s not just an x), but it’s still visually confusing. Maybe a proper dot would be better.

Since some numbers actually carry units, this hypothetical language would need separate types for those units. They could be rendered as prose by using common metric abbreviations.

[image: quantities with units rendered using metric abbreviations]

In a sense, a number with units really is a different thing than plain numbers. It’s nice for the programming language to recognize that.

As long as we are going to have special support for numeric and string literals, why not go all the way:

Color literals

[image: a color literal rendered as a swatch]

Image literals

[image: an image literal rendered as a thumbnail]

Byte arrays

[image: a byte array literal rendered visually]

If our IDEs really understood the concepts represented in the language then we could write highly visual but still symbolic code. If only our compilers were sufficiently smart.

I’m not saying we should actually code this way but thought experiments are a good way to find new ideas; ideas that we could then apply to our existing programming systems.

Animation is just moving something over time. The rate at which that something moves is defined by a function called an easing equation or interpolation function. It is these equations which make something move slowly at the start and speed up, or slow down near the end. These equations give animation a more lifelike feel. The most common set of easing equations come from Robert Penner's book and webpage.

Penner created a very complete set of equations, including some fun ones like 'spring' and 'bounce'. Unfortunately their formulation isn't the best. Here is one of them in JavaScript:


easeInCubic: function (x, t, b, c, d) {
    return c*(t/=d)*t*t + b;
},
easeOutCubic: function (x, t, b, c, d) {
    return c*((t=t/d-1)*t*t + 1) + b;
},

They really obscure the meaning of the equations, making them hard to understand. I suspect they were designed this way for efficiency, since they were first implemented in Flash ActionScript, where speed mattered on an inefficient interpreter. It makes them much harder to understand and extend, however. Fortunately, we can refactor them to be a lot clearer.

Let's start with the easeInCubic equation. Ignore the x parameter (I'm not sure why it's there since it's not used). t is the current time, starting at zero. d is the duration. b is the starting value and c is the change in value, so the animation ends at b + c.

easeInCubic: function (x, t, b, c, d) {
    return c*(t/=d)*t*t + b;
},

If we divide t by d before calling this function, then t will always be in the range of 0 to 1, and d can be left out of the function. If we define the returned version of t to also be from 0 to 1, then the actual interpolation from a start value b to an end value c can also be done outside of the function. Here is some code which will call the easing equation and then interpolate the results:


                var t = t/d;
                t = easeInCubic(t);
                var val = b + t*(c-b);

We have moved all of the common code outside the actual easing equation. What that leaves is a value t from 0 to 1 which we must transform into a different t. Here's the new easeInCubic function.


        function easeInCubic(t) {
            return Math.pow(t,3);
        }

That is the essence of the equation. Simply raising t to the power of three. The other formulation might be slightly faster, but it's very confusing, and the speed difference today is largely irrelevant since modern VMs can easily optimize the cleaner form.
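
To see the refactored pieces working together, here's a minimal sketch of a driver loop (my own example, not Penner's; `box` stands in for whatever element you're animating):

function animate(easing, from, to, duration, onUpdate) {
    var start = Date.now();
    function tick() {
        var t = Math.min((Date.now() - start) / duration, 1); // normalize time to 0..1
        var eased = easing(t);                                // transform t
        onUpdate(from + eased*(to - from));                   // interpolate from..to
        if (t < 1) requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);
}

// slide a hypothetical box 300 pixels to the right over half a second
animate(easeInCubic, 0, 300, 500, function(x) {
    box.style.left = x + "px";
});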

Now let's try to transform the second one. An ease out is the same as an ease in except in reverse. If t went from 0 to 1, then the out version will go from 1 to 0. To get this we subtract t from 1, as shown:

        function cubicOut(t) {
            return 1-Math.pow(1-t,3);
        }

However, this looks awfully close to the easeIn version. Rather than writing a new equation we can factor out the differences. Subtract t from 1 before passing it in, then subtract the result from one after getting the return value. The out form just invokes the in form:

function easeOutCubic(t) {
     return 1 - easeInCubic(1-t);
}

Now we can write other equations in a similarly compact form. easeInQuad and easeOutQuad go from:

easeInQuad: function (x, t, b, c, d) {
    return c*(t/=d)*t + b;
},
easeOutQuad: function (x, t, b, c, d) {
    return -c*(t/=d)*(t-2) + b;
},

to

function easeInQuad(t) {  return t*t; }
function easeOutQuad(t) { return 1-easeInQuad(1-t); }

Now let's consider the in-out equations, which smooth both ends. In reality it's just scaling the easeIn to the first half of t, from 0 to 0.5, then applying an easeOut to the second half, from 0.5 to 1. Rather than this complex form (here's the quad version):

easeInOutQuad: function (x, t, b, c, d) {
    if ((t/=d/2) < 1) return c/2*t*t + b;
    return -c/2*((--t)*(t-2) - 1) + b;
},

We can compose our previous functions to define the cubic version like so:

function easeInOutCubic(t) {
    if(t < 0.5) return easeInCubic(t*2.0)/2.0;
    return 1 - easeInCubic((1-t)*2)/2;
}
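
A quick sanity check (my addition) shows the two halves meeting at the midpoint:

console.log(easeInOutCubic(0.4999)); // ~0.4997, approaching 0.5
console.log(easeInOutCubic(0.5));    // exactly 0.5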

Much cleaner.

Here is the original form of elastic out, which gives you a cartoon like bouncing effect:

easeOutElastic: function (x, t, b, c, d) {
    var s = 1.70158; var p = 0; var a = c;
    if (t == 0) return b;
    if ((t/=d) == 1) return b + c;
    if (!p) p = d*.3;
    if (a < Math.abs(c)) { a = c; var s = p/4; }
    else var s = p/(2*Math.PI) * Math.asin(c/a);
    return a*Math.pow(2,-10*t) * Math.sin((t*d-s)*(2*Math.PI)/p) + c + b;
},

and here is the reduced form:


        function easeOutElastic(t) {
            var p = 0.3;
            return Math.pow(2,-10*t) * Math.sin((t-p/4)*(2*Math.PI)/p) + 1;
        }

Moral of the story: "Math is your friend" and "always refactor".

These new equations will be included in a future release of Amino, my cross platform graphics library.

Software libraries are good. They allow abstraction and encapsulation, which encourages reuse. They also allow the library to be written by one person and used by another. This is reuse at the programmer level, not just the system level. A library can be written by a person who has domain expertise but used by someone who has little or no expertise in that domain. For example: an XML parser. The implementer knows the XML spec inside and out, but the user needs only a basic understanding of XML in order to use the lib.

Now that we've established that libraries are good (regardless of whether they are implemented as classes, packages, DLLs, etc.), how can we extend them? Traditionally libraries are very limited in how they can influence the experience of the programmer. The coder simply has new functions to call. A more advanced case is component driven GUI builders, where the lib exposes more information so that the IDE can make the library easier to use. Annotating the types in a map component could let the GUI builder draw markers, use a dummy tile set, and set positions. This is all standard stuff. Could libraries do more? How could the library author further influence the experience of the library user?

Let's do a quick brainstorm:

  • Extend the language with new operators and syntax specific to the library. A physics library could import operator overloads to enable more math like syntax when dealing with physical units and vectors.
  • Tests that verify aspects of the code that calls the library. This could automate the advice that would be in the documentation, such as "don't pass null to certain functions" or "this function returns an object that you must dispose of".
  • If the lib is used from within an IDE, the lib can insert new documentation into the searchable index of docs.
  • The lib could come with example code and reusable annotated code snippets that would only be applied to projects using the lib.
  • The lib could insert a new package manager into the compiler toolchain. No more messing with Maven POMs.
  • The lib can provide the compiler with new code optimizers for particular tasks. An image analysis library could instruct the compiler on how to better use SIMD instructions when available.

Obviously modifying the environment of the programmer is a dangerous activity so we would need very strict scoping on the changes. A library which extends the language applies only in the parts of the code that actually import the library. Changes are scoped to the code which asks for them.

Fundamentally a library is a thing created by one programmer for use by another programmer (or the same programmer at a future date). We should extend the concept of a library to apply to the entire programming experience, not just adding objects and methods. This would allow all environment extensions to be managed in a standard way that is more understandable, flexible, and scoped. I would never need to set up a makefile again. I could simply import in my main.foo code the package manager, repos, code chunks and syntax that I want to use for this project. The library extensions to the compiler handle everything else automatically.

Clearly there are challenges with such an approach. We would have to define all of the various ways a library could modify your environment. How would compiler hooks actually work? Annotation processors? A function to modify the AST at different stages of compilation? Would it imply a particular back end like LLVM? I don't have the answers. There clearly are challenges to be solved, but I think the approach is a step forward from today, where we have many different components of our programming environment that are dependent on each other but don't actually know about each other. Having a way to organize this mess, even if imperfect, would certainly help us make better software.

When working on big projects I often create little projects to support the larger effort. Sometimes these little projects turn into something great on their own. It's time for me to tell you about one of them: AppBundler.

AppBundler is a small tool which packages up Java (client side) apps with a minimum of fuss. From a single app description it can generate Mac OSX .app bundles, Windows .EXE files, JNLPs (Java Web Start), double clickable jars; and as of yesterday evening: webOS apps! I started the project to support Leonardo Sketch but I think it's time for AppBundler to stand on its own.

Packaging Java apps has historically been an exercise in creative swearing. The JVM provides no packaging mechanism other than double clickable jars, which are limited and feel nothing like native apps. Mac and Windows have their own executable formats that involve native code, and Sun has never provided tools to support them. Java Web Start was supposed to solve this, but it never took off the way the creators hoped and has its own idiosyncrasies. Long term we will have more and more environments with Java available but with different native package systems. Add in native libs, file extension registration, and other metadata, and now you've got a real problem. After hacking Ant files for years to deal with the issue I decided it was finally time to encode my build scripts and Java lore into a new tool that will solve the issue once and for all. Thus AppBundler was born.

How it works

You create a simple XML descriptor file for your application. It lists the jars that make up your app along with some metadata like the App name and main class. It can optionally include icons, file extensions, and links to native libraries.

<?xml version="1.0" encoding="UTF-8"?>
<app name="Amino Particles"> 
   <jar name="Amino2.jar"/> 
   <jar name="amino_sdl.jar"/> 
   <jar name="examples.jar" 
         main-class="com.joshondesign.amino.examples.Particles"/> 
   <property name="com.joshondesign.amino.impl" value="sdl"/> 
   <native name="sdl"/> 
</app> 

Then you run AppBundler on this file from the command line along with a list of directories where the jars can be found. In most apps you have a single directory with all of your jars, plus the app jar itself, so you usually only need to list two directories. You also specify which output format you want or --all for all of them. Here's what it looks like called from an ant script (command line would be the same).

 <java classpath="lib/AppBundler.jar;lib/XMLLib.jar" classname="com.joshondesign.appbundler.Bundler" fork="true"> 
<arg value="--file=myapp.xml"/> <arg value="--target=mac"/> <arg value="--outdir=dist/"/> <arg value="--jardir=lib/"/> <arg value="--jardir=build/jars/"/> </java> 
AppBundler will then generate the executable for each output format.

What it does

For Mac it will create a .APP bundle containing your jars, then include a copy of the JavaApplicationStub and generate the correct Info.plist files (Mac specific metadata files), and finally zip up the bundle. For Windows it uses JSmooth to generate a .EXE file with an icon and class path. For Java WebStart it will generate the JNLP file and copy over the jars. For double click jar files it will actually squish all of your jars together into a single jar with the right META-INF files. And all of the above works with native libraries like JOGL too. For each target it will set the correct library paths and do tricky things like decompress native libs into temp directories. Registering for file extensions and requesting JREs mostly works.

What about webOS?

All of the platforms except webOS ship with a JVM or can auto-install one on demand (the Windows EXEs do this). There is no such option for webOS, however. webOS has a high level HTML 5 based SDK and a low level C based PDK. To run Java on webOS you'd have to provide your own JVM and class libraries, so that's exactly what I've done.

The full OpenJDK would be too big to run on a lightweight tablet, and a port would take a team of people months to do. Instead I cross compiled the amazing open source JVM Avian to ARM. Avian was meant to be embedded and already has an ARM port, so compiling it was a snap. Avian can use the full OpenJDK runtime, but it also comes with its own minimal classpath.jar that provides the bare minimum needed to run Java code. Using the smaller runtime meant we wouldn't have a GUI like Swing, but using Swing would require months of AWT porting anyway, which I wasn't interested in doing.

Instead I created a new set of Java bindings to SDL (Simple DirectMedia Layer), a low level graphics API available on pretty much every platform. Then I created a port of Amino (my 2D scene graph library) to run on top of SDL. It sounds complicated (and it was, actually), but the scripts hide the complexity. The end result is a set of tools to let you write graphical apps with Java on webOS. Amino already has ports to Java2D and HTML 5 Canvas (and OpenGL is in the works), so you can easily create cross platform graphics apps. And now with AppBundler you can easily package them as well.

Interestingly, Avian runs nicely on desktops too, so putting Java apps into the Mac App Store might now be possible. There are already some enterprising developers trying to get Avian working on iOS.

How you can help.

While functional, I consider AppBundler to be in an alpha state. There are lots of things that need work. In particular it needs Linux support (generate rpms or debs?) and a real Ant task instead of the Java exec commands you see above. I would also like to have it included in Maven and any other useful repo. And as a final request (as long as I have you here), I need some servers to run build tests on. I already have Hudson running on a Linux server. I'd love it if someone could run a Hudson slave for me on their Windows or Mac server. And of course we need lots of testing and bug fixes. If you are interested please join the mailing list.

Client Java Freedom

AppBundler is another facet of my efforts to help Java provide a good user experience everywhere. Apps should always feel native, and that includes the installation and startup experience. I've used AppBundler to provide native installs of Leonardo Sketch on every desktop platform. I hope AppBundler will help you do the same. Enjoy! -Josh


Mmmwaa haa haa. It lives! I've gotten Java to run on webOS natively with a new set of Java SDL bindings. That means it just *might* be time to start a new project. Read on for how it works and how you could help.

 

For a while I've been following an open source project called Avian. It's a very lightweight and highly portable JVM that can run almost anywhere. Recently I tried a new build and was able to get the ARM port running on webOS!  This is good news because Avian can run pretty much any Java code if you supply it with the right runtime (it can optionally work with the OpenJDK libs).

Now, of course, getting a command line app to run is not very interesting. Really we want to talk to the screen and make some real graphical applications. So that brings us to part two: SDL.

If you've been doing desktop programming for a while you've probably heard of SDL, the Simple DirectMedia Layer. It's a fairly low level graphics and audio API that runs pretty much everywhere, including on webOS. But, like many low level APIs, it's written in C. So if I want to use Java I need some wrapper to call it. The existing wrappers out there are very old and didn't work well on Mac, so it was time to build my own.

Over the weekend I learned how to use Swig, a JNI wrapper generator, and successfully ran my new SDL wrappers on Mac, Linux, and webOS. Here's a quick screenshot:

 

[screenshot of the new SDL bindings running]

It's not much, but it proves that everything is working.

So what's the next step?  Honestly... I don't know. I created this specifically to let me code Java on webOS, but the SDL bindings would probably be useful for cross platform desktop applications as well.  We could port Amino to it, or do some funky multitouch stuff. It would certainly be great for people creating games. I need your advice.

What would you like to do with this library? What higher level APIs would you like? If you have any ideas of what you'd do with this lib, or would like to contribute to the project (help on Windows compilation would be greatly appreciated), then please message me on twitter: @joshmarinacci.

Thanks!

- Josh

You may be a new programmer, or a web designer, or just someone who's heard the word 'RegEx', and asked: What is a Regex? How do I use it? And why does it hurt my brain? Well, relax. The doctor is in. Here's two aspirin and some water.

What is it doctor?

Oh, it's two parts hydrogen and one part oxygen; but that's not important right now. We're here to talk about RegEx. RegEx is short for Regular Expressions, but that's also not important right now. Regexs are just patterns for matching bits of text. Whenever you hear RegEx, just think: pattern. In fact, I'll stop using the word regex right now, since it mainly sounds like the kind of medicine you'll need after trying to write a complex regex.

What are these patterns good for?

Patterns are mainly used for three things: to see if some text contains the pattern (matching), to replace part of the text with other text (replacement), and to pull out portions of the text for later use (extraction). Patterns contain a combination of regular letters and special symbols like ., *, ^, $, \w and \d. Most programming languages use pattern matching with a subset of the Perl syntax. My examples will use JavaScript, but the same pattern syntax should work in Java, Python, Ruby, Perl, and many other languages.

Matching

Suppose you want to know if the text "Sally has an apple and a banana." contains the word 'apple'. You would do it with the pattern 'apple'.

var text = "Sally has an apple and a banana.";
if(text.match(/apple/)) {
    console.log("It matches!");
}

Now suppose you want to know if the text begins with the word 'apple'. You'd change the pattern to '^apple'. The ^ is a special symbol meaning 'the start of the text'. So this will only match if the 'a' of apple is right after the start of the text. Call it the same way as before.

var text = "Sally has an apple and a banana.";
if(text.match(/^apple/)) {
    //this won't be called because the text doesn't start with 'apple'
    console.log("It matches!");
}

Besides the ^ symbol for 'start of text', here are some other symbols that are important to know (there are far more than this, but these are the most important).

$ = end of the text
\s = any whitespace character (spaces, tabs, newlines)
\S = anything *but* whitespace
\d = any digit (0-9)
\w = any word character (upper & lower case letters, digits, and the underscore _)
. = anything (letter, number, symbol, whitespace)
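
Here's a quick sketch putting a few of these symbols to work:

var text = "Agent 007 reporting";
if(text.match(/\d/)) console.log("contains a digit");
if(text.match(/\s/)) console.log("contains whitespace");
if(text.match(/ing$/)) console.log("ends with 'ing'");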

If you want to match the same letter multiple times you can do this with a quantifier. For example, to match the letter q one or more times put the '+' symbol after the letter.

var text = "ppqqp";
if(text.match(/q+/)) console.log("there's at least one q");

For zero or more times use the '*' symbol. (Careful: since zero occurrences still count, /q*/ will match any string at all.)

var text = "ppqqp";
if(text.match(/q*/)) console.log("there's zero or more q's");

You can also group letters with parenthesis:

var text = "ppqqp";
if(text.match(/(pq)+/)) console.log("found at least one 'pq' match");

So, to recap:

. = anything
x+ = match 'x' one or more times
x* = match 'x' zero or more times
x|y = match x or y

ex: match foo zero or more times, followed by bar one or more times: (foo)*(bar)+
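
And a quick sketch of that recap example:

var text = "foofoobarbar";
// anchored with ^ and $ so the whole string must match
if(text.match(/^(foo)*(bar)+$/)) console.log("it matches!");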

Replacing text

Now that you can match text, you can replace it. Replace 'Sally' with 'Billy', or every instance of 'ells' with 'ines' (note the g flag on the second pattern, which makes the replace global; more on that in the Modifiers section below).

var text = "Sally sells seashells"
var text2 = text.replace(/Sally/, "Billy"); //turns into "Billy sells seashells"
var text3 = text.replace(/ells/g, "ines"); //turns into "Sally sines seashines"

Modifiers

Most pattern APIs have a few modifiers to change how the search is executed. Here are the important ones:

Make the search case insensitive: text.match(/pattern/i)

Normally the patterns are case sensitive, meaning the pattern 'apple' won't match the word 'Apple'. Add the i parameter to match() to make it case insensitive.

Make the search multiline: text.match(/pattern/m)

Normally the ^ and $ anchors only match at the very start and end of the whole string. With the m parameter they also match at the start and end of each line, so you can anchor a pattern to any line of a multi-line string.

Make a replace global: text.replace(/foo/g, "bar")

Normally the replace() function will only replace the first match. If you want to replace every match in the string use the g parameter. This means you could replace every copy of 'he' to 'she' in an entire book with a single replace() call.
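
Here's a quick sketch showing all three modifiers in action:

var text = "apples are red\nbananas are yellow";
console.log(text.match(/^bananas/));   // null: ^ only matches the very start
console.log(text.match(/^bananas/m));  // matches: with m, ^ also matches after a newline
console.log(text.match(/APPLES/i));    // matches: i ignores case
console.log(text.replace(/are/g, "were")); // g replaces every match, not just the first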

Substring Extraction

Another major use for patterns is string extraction. When you do a match, every group of parentheses becomes a submatch, which you can use individually. Suppose you have a text string with a date in it and you want to get the year, month, and day parts out of it. You could do it like this (note the pattern is a regex literal, not a string, so the backslashes aren't eaten):

var text = "I was born on 08-31-1975 and I'm a Virgo."
var parts = text.match(/(\d\d)-(\d\d)-(\d\d\d\d)/);
//pull out the matched parts
var month = parts[1]; //08
var day = parts[2]; //31
var year = parts[3]; //1975
//parts[0] would give you the entire match, eg: 08-31-1975

The Cheat Sheet

The standard pattern syntax in most languages is expansive and complex, so I'll only list the ones that are actually useful. For a full list refer to the documentation for the programming language you are working with.

Match anywhere in the text: "Sally sells seashells".match(/ells/) (matches both sells and seashells)
Match beginning of text: "Sally sells seashells".match(/^Sally/)
Match end of text: "Sally sells seashells".match(/ells$/) (matches only the seashells at the end)

any three word characters in a row \w\w\w
anything .
anything followed by any letter .\w
the letter q followed by any letter q\w
the letter q followed by any white space q\s
the letter q one or more time q+
the letter q zero or more times q*

any number \d
any number with exactly two digits: \d\d
any number at least two digits long: \d+
any decimal number \d+\.\d+ //ex: 5.0
any number with an optional decimal \d+(\.\d+)? //ex: 5.0 or 5
match the numbers in this date string: 2011-04-08 (\d\d\d\d)-(\d\d)-(\d\d)
also be able to match this date string: 2001-4-8 (\d\d\d\d)-(\d+)-(\d+)
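
And a quick sketch running those two date patterns:

var strict = "2011-04-08".match(/(\d\d\d\d)-(\d\d)-(\d\d)/);
console.log(strict[1], strict[2], strict[3]); // 2011 04 08
var loose = "2001-4-8".match(/(\d\d\d\d)-(\d+)-(\d+)/);
console.log(loose[1], loose[2], loose[3]); // 2001 4 8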

Conclusion

Patterns are a very complex subject so I've just tried to give you the basics. While complex, they are also incredibly powerful and useful. As you learn them you'll find you use them more and more for all sorts of cool things. For a more in-depth tutorial read the Mozilla JavaScript Regular Expression guide.

PS: Found a bug in the above code? Want to suggest more examples of useful regex's? Want to just shoot the breeze? I've turned off comments due to 99% of it being spam, so please tweet your suggestions to me instead: @joshmarinacci. Thanks for understanding!

Teach them well and let them lead the way
-- someone who sounds like Whitney Houston

The Status Quo

Last summer I released Leonardo, a cross platform vector drawing tool. Later in the year I released an alpha of Amino, the new Java UI toolkit I wrote in the process of building Leonardo. Since then I've released updates for both, but haven't quite reached a 1.0 for Amino. It's time for some changes: Amino 2 and Amino.js.

Overall I've been pretty pleased with most of Amino. It is reasonably fast, pretty skinnable, and has a very clean API with generics, an event bus, and isolated models. The underlying graphics stack, however, is a mess. Originally I planned to build a second graphics stack on top of JOGL, but it's never worked properly (you can still see the disabled code in the repo). Amino has a basic component tree structure built on a Java2D wrapper but for a modern UI toolkit we need more. We need a real scene graph. Something that has animation and transforms built right in. A library fully isolating the app from the underlying graphics, with framerate and buffering control. Without a real scenegraph Amino will never work well on top of JOGL.

Modern Graphics Research

You may have noticed a slowdown in Amino and Leo commits. That's because I've spent the last few months researching modern hardware acceleration and teaching myself OpenGL pixel shaders. (Oh, and traveling around the world, announcing new products, and getting ready for my first baby to arrive in May.) After all of the research I've come to a few surprising conclusions (surprising to me, at least).

First, on modern GPUs memory usage and bandwidth are the bottlenecks, not drawing speed. In the realm of 2D graphics, at least, our biggest headache is managing the data structures and shipping them up to the GPU. The pixel shader units themselves are far faster than what we need. It's often much better to send a tiny amount of information over the bus and use an inefficient algorithm in a pixel shader, rather than do the work on the CPU and upload a texture.

Two: OpenGL sucks for traditional 2D drawing tasks. Sure, it's great at compositing, but the actual line and polygon routines are primitive if you want antialiased shapes and smooth compositing. There's a whole lot of stuff we have to rewrite by hand on the CPU or re-implement with shaders.

Three: on platforms without accessible GPUs we can do lots of tricks to speed up apparent drawing speed. Responsiveness and consistency matter more than raw drawing speed. And if a GPU is available, then we can efficiently cache portions of the drawing as textures and use the GPU for compositing. All of this is hard to do in a reusable way without a real scene graph to build apps on.

Four: Web and mobile apps need a scenegraph as well. For a long time I've mainly thought of web apps as UIs built with static images and text. However, in the last year HTML Canvas has come a long way. Games with sustained 60fps are quite possible in recent builds of Chrome, and mobile devices are catching up as well. (I'm amazed with what I've been able to do on the upcoming TouchPad). So, if we are going to put high powered UIs on web technology, then we need a real scene graph.

Amino 2: The New Plan

I've decided to start a new branch called Amino 2. The current Amino will be split into two parts:

  • Amino Core, the scenegraph, event bus, and common utility classes
  • Amino UI, the UI controls and CSS skinning layer

Amino Core will have multiple implementations. First, a scenegraph for desktop Java apps that will use a mixture of Java2D and JOGL. Eventually it may be possible to do full anti-aliased shape and clip rendering with pure shaders, but this is still an open area of research. For now we will do the shape rasterization in software with the existing mature Java2D stack, then do compositing, fills, and effects on the GPU with shaders. Over time we may shift more to the GPU. With a scenegraph this will be transparent to the app developer, which is a very good thing.

Announcing Amino.js

Amino Core will also have a second implementation, built in JavaScript and HTML Canvas. I'm calling this Amino.js. It will let you build a scene graph of shapes and images, complete with declarative animations and special effects, all drawn efficiently into a canvas on your page. Amino.js will not have the CSS and UI controls layer, because that stuff is already well provided by many existing JavaScript libraries. There's no point in making new stuff where it's not needed.

I've also considered making a pure in memory version of the scenegraph without any Java2D dependencies, or porting it to a pure C++ lib on SDL or OpenGL. However, these requires a lot of research into rasterization algorithms so I'm not going to work on them unless someone really wants it.

Next Steps

That's it for today. I've built a prototype of Amino.js and over the next few weeks I'll show you more of the API and walk through its construction. I hope you'll find it interesting to watch as the pieces come together, and any code contributions will be very welcome. Come back next week for some code samples and I'll dive into a puzzle game prototype called Osmosis. Here's a few quick demos to whet your appetite.

[image: animated bar chart]

[image: draggable scene nodes]

[image: simple particles at 60fps]

Oh, and I forgot to mention that Amino2 and Amino.js will again be fully free and open source. BSD for the win!

Most of my free time work for the past few months has gone into Amino, the UI toolkit that Leonardo is built on, but Leo itself has gotten a few improvements as well. I'm happy to announce that the next beta of Leo is up, including:

  • Amino gives us a more uniform look and feel, now skinnable with CSS
  • Rulers!
  • The Make a Wish Button, now the easiest way to send feedback.
  • the canvas properly scrolls and zooms with real scrollbars now
  • Under the "Path" menu you can now add, subtract, and intersect shapes.
  • export to PDF
  • actual unit tests for exporting to SVG & PNG, almost every kind of shape is supported now
  • Tons of bug fixes

Unfortunately, as part of the Amino toolkit overhaul certain things have gotten slower. In particular doing layout when you resize the window is significantly slower. Don't worry, I have lots of speed improvements coming to Amino which will address this in Leo.

As always, please test and file bug reports. The export filters in particular probably still have bugs to fix.

Builds are here:

As part of my ongoing efforts to create better designed software, I somehow ended up creating my own new UI toolkit. This is really a part of my belief that a decade from now 90% of people will use phones, slates, or netbooks as their primary computing device. Amino is my experiment in building software for that other 10%: the content creators who need killer desktop apps, the programmers who want great tools, and the knowledge workers who need to manage incredible amounts of information at lightning speed. Amino is the toolkit for these apps.

Read the full description of Amino at my Java.net blog, or go play with Amino right now.