So far my posts on Typographic Programming have covered font choices and formatting: different ways of rendering the source code itself. I haven’t covered the spacing of the code yet, or more specifically: indentation. Or even more specifically: tabs vs spaces.

Put on your asbestos suits, folks. It’s gonna get hot in this kitchen.

Traditionally source code has been rendered with a monospace font. This allows for manual horizontal positioning with spaces or tab characters. Of course the tab character doesn’t have a defined width (I’ll explain why in a moment), so flame wars have erupted around spaces vs tabs, on par with the great editor wars of the last century. Ultimately these are pointless arguments. Tabs vs spaces is an artifact of trying to render code into a monospace grid of characters. It’s the 21st century! We can do better than our dads’ 1970s terminals. In fact, they did better in the 19th century!

In The Beginning

Let’s start at the beginning. Fixed whitespace indenting can be used to line things up so they become pretty, and therefore easier to read. But that's a lot of work. All that pressing of space bars and adjusting when things change.

Instead of manually controlling whitespace, what if we used tab stops? I don’t mean the tab character, which is typically mapped to either 4 or 8 spaces, but actual tab stops. Yes, they used to be a real physical thing.

image of typewriter tab stop

In the olden days, back when we used manual typewriters (I think I was in the last high school class to take typing on such machines), there was such a thing as a tabstop. These were little vertical brackets along the page (well, along that metal bar at the bottom of the current line). These tiny pieces of metal literally stopped the carriage when you hit tab, thus giving them the name tabstops. We were so creative with names in those days.

When you hit the tab key the cursor (a rapidly spinning metal ball imprinted with the noun “Selectric”) would jump from the left edge of the paper to the first tabstop. Hit tab again and it would go to the next tabstop. Of course these tab stops were adjustable, so you could choose the indenting style you wanted for your particular document.

Let me repeat that. The tab stops could be adjusted to the indenting style of your particular document. Inherent in this is the concept that there is no “one right way”; rather, the format must suit the needs of the particular document, or part of a document, that you are writing.

When WYSIWYG editors came along they preserved the notion of a tabstop. They even made it better by giving you nice vertical lines to see the effect of changing the tabstop. When you hit tab the text would move to the stop. If you later moved the stop, the text aligned with it would magically move as well. Dynamic tabstops! Yay. We can finally rock like it’s the 1990s.

Word for Mac, circa 1991

Semantic Indentation

So why do we go back to the 1970s with our text editors? Tabstops are a simple concept for semantically (sorta) indenting our code. Let’s see what some code would look like with simple tabular semantic indenting.

Here’s some code with no formatting other than a standard indent.

image of code with standard indentation

This is your typical C-ish code with brackets and parameters. It would be nice to line up the parameters with their types. The drawRect calls are also similar from line to line; we should clean that up too.

Here is code with semantic indenting.

image of the same code with semantic indenting

How would you type in such code? When you hit the tab key the text advances to the next tabstop. These tabstops are dynamic, however. Instead of making you set them against a ruler at the top of the document, the tabstops automatically expand to fit the text in the column. Essentially they act more like spreadsheet cells than traditional tab stops.

Furthermore, the text will be left aligned at the tabstop by default, but right aligned for text that ends with a comma or other special character. This process is completely automatic and hidden, of course. The programmer just hits the tab key and keeps typing; the IDE handles all of the formatting details, as it should. We humans write the important things (the code) and let the computer handle the busy work (the formatting).

The tab stops (or columns, if you think of it as a table) don’t extend through the entire document. They only go down as far as the next logical chunk. There could be multiple ways to define a ‘chunk’ semantically, but one indicator would be a blank line: if you break the code flow with one, the layout reverts to the document- or project-wide defaults. This lets us use standard indentation for common structures like functions and flow-control bracing, while still allowing custom indentation where needed.
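To make the idea concrete, here’s a minimal sketch (my own illustration, not code from any real editor) of how an editor might size the dynamic tab stops for one chunk. It splits each line on the tab character and makes every column as wide as its widest cell, measuring in characters; a real editor working with a proportional font would measure in pixels instead.

function layoutChunk(lines) {
    //split each line into cells on the tab character
    var rows = lines.map(function(line) { return line.split('\t'); });

    //find the widest cell in each column of this chunk
    var widths = [];
    rows.forEach(function(cells) {
        cells.forEach(function(cell, col) {
            widths[col] = Math.max(widths[col] || 0, cell.length);
        });
    });

    //pad each cell out to its column width plus a small gap
    return rows.map(function(cells) {
        return cells.map(function(cell, col) {
            while (cell.length < widths[col] + 2) cell += ' ';
            return cell;
        }).join('');
    }).join('\n');
}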

Ludicrous Speed

Furthermore, using semantic indentation could completely remove the need for braces as block delimiters. The indentation itself can mark where blocks begin and end.

if(x==y) {
     foo
} else {
     bar
}

becomes

if x==y
     foo
else
     bar

and

if(x==y) {
     foo
}

can become

if x==y
     foo

or even

if x==y  foo

using the tab character.

This might be a tad confusing, however, because there is only whitespace between the x==y and the foo. How do we know it’s a tab and not just a space? Since hitting the tab key indicates you are starting the next chunk rather than continuing a long conditional expression, the editor could draw a light glyph where the tab is; perhaps a rightward Unicode arrow.

Now I know the Rubyists and Pythoners will say that they've already removed the block delimiters. Quite true, but this goes one step further.

Python takes the choice of whitespace away from the programmer, but the programmer still has to type it in by hand. With semantic indentation the entire job of formatting is taken away: you just type your code and the editor does the right thing. Such a system also opens the door to alternative renderings of the code in particular circumstances.

Better Fonts

And of course we come to our final advantage. Without manual formatting with spaces we don't need to be restricted to a monospace font anymore. Our code could look like this:

image of the same code set in a proportional font

Semantic indenting. Less typing, more readable code. Let’s rock like it’s the 1990s!

The Art of LEGO Design: Creative Ways to Build Amazing Models

by Jordan Schwartz

No Starch Press is really doubling down on their Lego books. Their latest is a stunner. The Art of LEGO Design by Jordan Schwartz is less of an art book and more of a hands-on guide. It shows actual techniques used by the Lego artists featured in other No Starch books like Mike Doyle’s Beautiful Lego.

As one of the LEGO Group’s youngest staff designers, Jordan Schwartz worked on a number of official LEGO sets. His attention to detail and gift for teaching really come through in the book. Many of the models that seem impossible at first, such as stained glass windows, are actually pretty simple once explained in the book.

The Art of LEGO Design is the perfect entry point to the rabbit hole that is Lego sculpture. With Lego-specific jargon like cheese slopes, SNOT (Studs Not On Top), and the Lowell sphere, it would be easy to get lost, but Jordan explains it all clearly, along with general design principles like texture and composition.

The book takes the reader through many styles of construction and the technical advantages of different pieces, with a dash of Lego history thrown in. I didn’t know there was once a Lego set with a fuzzy bear rug, or a line of big Lego people for the Technic sets.

Read or Read Not? Read!

Get it from No Starch Press

Apparently my last post hit Hacker News and I didn’t know it. That’s what I get for not checking my server logs.

Going through the comments I see that about 30% of people find it interesting and 70% think I’m an idiot. That’s much better than usual, so I'm going to tempt fate with another installment. The general interest in my post (and let’s face it, typography for source code is a pretty obscure topic of discussion) spurred me to write a follow-up.

In today’s episode we’ll tour the fonts themselves. If we want to reinvent computing it’s not enough to grab a typewriter font and call it a day. We have to plan this carefully, and that starts with a good selection of typefaces.

Note that I am not going to use color or boxes or any other visual indicators in this post. Not that they aren’t useful, but today I want to see how far we could get with just font styling: typeface, size, weight, and style (italics, small caps, etc.).

Since I'm formatting symbolic code, data, and comments, I need a range of typefaces that work well together. I’ve chosen the Source Pro family from Adobe. It is open source and freely redistributable, with a full set of weights and italics. More importantly, it has three main faces: a monospace font (Source Code Pro), a serif font (Source Serif Pro), and a sans-serif font (Source Sans Pro). All three are specifically designed to work together and have a range of advanced glyphs and features. We won't use many of these features today but they will be nice to have in the future.

Let’s start formatting some code. This will be a gradual process: we choose the basic formatting, then build on top of it for the different semantic chunks.

For code itself we will stick with the monospace font: Source Code Pro. I would argue that a fixed width font is not actually essential for code, but that’s an argument for another day when we look at indentation. For today we’ll stick with fixed width.

Code comments and documentation will use Source Serif Pro. Why? Well, comments don’t need the explicit alignment of a monospace font, so definitely not Source Code Pro. Comments are prose. The sans serif font would work okay but for some reason when I think "text" I think of serifs. It feels more like prose. More texty.

So I won’t use Source Sans Pro today but I will save it for future use. Using the Source [x] Pro set gives us that option.

Below is a simple JavaScript function set with the default weights of those two fonts. This is the base style we will work from.

image of the JavaScript function in the base style

So that’s a good start, but I can immediately think of a few improvements. Code (at least in C-derived languages) has five main elements: comments, keywords, symbols, literals, and miscellaneous, or what I like to call ‘extraneous cruft’. The cruft is mainly parentheses and brackets for delimiting function and procedure bodies. It is possible to design a language which uses ordering to reduce the need for delimiters, or to get rid of them completely with formatting conventions (as I talked about last week). However, today’s job is just to restyle without changing the code, so let’s leave them unmolested for now. Instead we will minimize their appearance by setting them in a thin weight. (All text is still in black, though.)

Next up: symbols. Symbols are the parts of a program that the programmer can change. They are arguably the most important part of the program, the parts we spend the most time thinking about, so let’s make them stand out with a very heavy weight: bold 700.

image of the code with cruft thinned and symbols in bold

Better, but I don’t like how the string literal blends in with the rest of the code. String literals are almost like prose so let’s show them in serif type, this time with a bolder weight and shrunk a tiny bit (90% of normal).

For consistency I did the same with numeric literals. I’m not sure if ‘null’ is really a literal or a symbol, but you can assign it as a value so I’ll call it a literal.

image of the code with string and numeric literals set in serif

Next up: keywords. Keywords are the part of the language that the programmer cannot change. They are strictly defined by the language and completely reserved. Since they are immutable it doesn’t really matter how we render them. I could use a smiley face for the function keyword and the compiler wouldn’t care. It always evaluates to the same thing. However, unlike my 3-year-old’s laptop, I don’t have a smiley face key on my computer, so let’s keep the same spelling. I do want to do something unorthodox though. Let’s put the keywords in small caps.

Small caps are glyphs the size of lower case letters, but drawn like the upper case letters. To do small caps right you can’t just put your text in upper case and shrink it down. It would look strange. Small caps are actually different glyphs designed to have a similar (but not identical) width and a shorter height. They are hard to generate programmatically. This is one place where a human has to do the hard work. Fortunately we have small caps at our disposal thanks to the great contributions by Logos and type designer Marc Weymann. Open source is a good thing.

image of the code with keywords in small caps

Now we’re getting somewhere. The code has a dramatically different feel.

There’s one more thing to address: the variables. Are they symbols like function names? Yes, but they feel different from function names. They are also not usually prefixed with a parent object or namespace specifier. Really we have three cases: a fully qualified symbol like ‘baz’ in foo.bar.baz, the prefix part (foo.bar), and standalone variables that aren’t qualified at all (like ‘x’). This distinction applies whether the symbol is a function or an object reference (it could actually be both in JavaScript).

In the end I decided these cases are related but distinct. Standalone symbols have a weight of 400. Technically this is the default weight in CSS and shouldn’t appear to be ‘bold’, but since the base font is super light, regular will feel heavier against it. The symbol at the end of a qualifier chain will also be bold, but with a weight of 700. Finally, the prefix part will be in italics to further distinguish it. There really isn’t a right answer here; other combinations would work equally well, so I just played around until I found something that felt right.

This is the final version:

image of the final styled code

I also shrunk the comments to 80%. Again it just felt right, and serifed fonts are easier to read in longer lines, so the comments can handle the smaller size.
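For reference, here’s a rough sketch of the whole scheme as a token-category-to-style mapping, written as JavaScript style objects. This is my own summary, not the actual CSS from the mockup linked below; the exact numeric values for the thin cruft weight and the literal weight aren’t given in the text, so 200 and 600 are guesses.

//Hypothetical mapping from token category to style, mirroring the choices above.
var codeStyles = {
    cruft:    { fontFamily: 'Source Code Pro',  fontWeight: 200 },                      //parens and brackets; weight is a guess
    keyword:  { fontFamily: 'Source Code Pro',  fontVariant: 'small-caps' },            //true small caps may need the OpenType 'smcp' feature
    variable: { fontFamily: 'Source Code Pro',  fontWeight: 400 },                      //standalone symbols like 'x'
    prefix:   { fontFamily: 'Source Code Pro',  fontWeight: 400, fontStyle: 'italic' }, //foo.bar in foo.bar.baz
    symbol:   { fontFamily: 'Source Code Pro',  fontWeight: 700 },                      //baz in foo.bar.baz, and other symbols
    literal:  { fontFamily: 'Source Serif Pro', fontWeight: 600, fontSize: '90%' },     //strings, numbers, null; weight is a guess
    comment:  { fontFamily: 'Source Serif Pro', fontSize: '80%' }
};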

Here’s a link to the live mockup in HTML and CSS. This design turned out much better than I originally thought it would. We can do a lot without color and spacing changes. Now imagine what we could do with our full palette of tools. But that will have to wait for next time.

BTW, if you submit this to Hacker News or Reddit please let me know via Twitter so I can answer questions.

Allow me to present a simple thought experiment. Suppose we didn’t need to store our code as ASCII text on disk. Could we change the way we write -- and more importantly read -- symbolic code? Let’s assume we have a magic code editor which can read, edit, and write anything we can imagine. Furthermore, assume we have a magic compiler which can work with the same. What would the ideal code look like?

Well, first we could get rid of delimiters. Why do we even have them? Because of our sufficiently stupid compilers.

Delimiters like quotes are there to let the compiler know when a symbol ends and a literal begins. That’s also why variables can’t start with a number; the compiler wouldn’t know if you meant a variable name or a numeric literal. What if we could distinguish between them using typography instead?

Here’s an example.

image of the print statement with the string literal rendered in a colored box

This example is semantically equivalent to:

print "The cats are hungry."; //no quotes or parens are needed

Rendering the literal inside a special colored box makes it more readable than the plain text version. We live in the 21st century. We have more typographic options than quotes! Let’s use them. A green box is but one simple option.
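One way to picture what the magic editor stores is a list of typed tokens rather than a string of delimited characters. The shape below is purely illustrative, not a proposal for a real on-disk format; the point is that the literal is a typed node, so no quote characters exist in the data, and the colored box is just one possible rendering of a node whose type is 'string'.

var line = [
    { type: 'symbol', value: 'print' },
    { type: 'string', value: 'The cats are hungry.' }
];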

Let’s take the string literal example further:

image of string concatenation and variable interpolation rendered inline

Without worrying about delimiting the literals we don’t need extra operators for concatenation; just put everything inline. In fact, there is no longer any difference between string concatenation and variable interpolation; the only difference is how we choose to render them on screen. Number formatting can be shown inline as well, but visually separated by putting the format control in a gray box.

image of inline number formatting, with the comment pushed to the side

Also notice that comments are rendered in an entirely different font, and pushed to the side (with complete Unicode support, of course).

Once we’ve gone down the road of showing string literals differently we could do the same with numbers.

image of numeric literals and math operators

Operators are still useful of course, but only when they represent a real mathematical operation. It turns out there is a separate glyph for multiplication (it’s not just an x), but it’s still visually confusing. Maybe a proper dot would be better.

Since some numbers actually represent quantities with units, this hypothetical language would need separate types for them. They could be rendered as prose using common metric abbreviations.

image of numbers with units rendered using metric abbreviations

In a sense, a number with units really is a different thing than a plain number. It’s nice for the programming language to recognize that.
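As a sketch of what that recognition might look like, a hypothetical runtime could store a quantity as a value plus a unit and only abbreviate it at render time. The names here are mine, purely for illustration.

//Purely illustrative: a quantity keeps its unit as data,
//and the renderer decides how to abbreviate it on screen.
function Quantity(value, unit) {
    this.value = value;
    this.unit = unit;   //e.g. 'meters', 'seconds'
}
Quantity.prototype.render = function() {
    var abbreviations = { meters: 'm', seconds: 's', grams: 'g' };
    return this.value + ' ' + (abbreviations[this.unit] || this.unit);
};

var distance = new Quantity(5, 'meters');
console.log(distance.render());  //prints "5 m"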

As long as we are going to have special support for numeric and string literals, why not go all the way:

Color literals

image of a color literal

Image literals

image of an image literal

Byte arrays

image of a byte array literal

If our IDEs really understood the concepts represented in the language then we could write highly visual but still symbolic code. If only our compilers were sufficiently smart.

I’m not saying we should actually code this way but thought experiments are a good way to find new ideas; ideas that we could then apply to our existing programming systems.

This is part 3 of a series on Amino, a JavaScript graphics library for OpenGL on the Raspberry Pi. You can also read part 1 and part 2.

Amino is built on Node.js, a robust JavaScript runtime married to a powerful IO library. That’s nice and all, but the real magic of Node is the modules. For any file format you can think of, someone has probably written a Node module to parse it. For any database you might want to use, someone has made a module for it. npmjs.org lists nearly ninety thousand packages! That’s a lot of modules ready for you to use.

For today’s demo we will build a nice rotating display of news headlines that could run in the lobby of an office using a flatscreen TV on the wall. It will look like this:

image

We will fetch news headlines from RSS feeds. Feeds are easy to parse using Node streams and the feedparser module. Let’s start by creating a parseFeed function. This function takes a URL, loads the feed from it, extracts the title of each article, then calls the provided callback function with the list of headlines.

var FeedParser = require('feedparser');
var http = require('http');


function parseFeed(url, cb) {
    var headlines = [];

    http.get(url, function(res) {
        res.pipe(new FeedParser())
            .on('meta',function(meta) {
                //console.log('the meta is',meta);
            })
            .on('data',function(article) {
                console.log("title = ", article.title);
                headlines.push(article.title);
            })
            .on('end',function() {
                console.log("ended");
                cb(headlines);
            })
    });
}

Node uses streams. Many functions, like the http.get() function, return a stream. You can pipe this stream through a filter or processor. In the code above we use the FeedParser object to filter the HTTP stream. This returns a new stream which will produce events. We can then listen to the events as the data flows through the system, picking up just the parts we want. In this case we will watch for the data event, which provides the article that was just parsed. Then we add just the title to the headlines array. When the end event happens we send the headlines array to the callback. This sort of streaming IO code is very common in Node programs.

Now that we have a list of headlines let’s make a display. We will hard-code the size to 1280 x 720, a common HDTV resolution. Adjust this to fit your own TV if necessary. As before, the first thing we do is turn the titles into a CircularBuffer (see the previous blog post) and create a root group.
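The CircularBuffer class comes from part 2 and isn’t shown here. If you don’t have it handy, a minimal stand-in that matches how it’s used below (a next() call that cycles through the items) might look like this; the real one may differ.

//Minimal stand-in for the CircularBuffer from part 2: next() returns the
//items one at a time and wraps back to the start when it reaches the end.
function CircularBuffer(items) {
    this.items = items;
    this.index = -1;
}
CircularBuffer.prototype.next = function() {
    this.index = (this.index + 1) % this.items.length;
    return this.items[this.index];
};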

var amino = require('amino.js');
var sw = 1280;
var sh = 720;

parseFeed('http://www.npr.org/rss/rss.php?id=1001',function(titles) {
    amino.start(function(core, stage) {

        var titles = new CircularBuffer(titles);
        var root = new amino.Group();
        stage.setSize(sw,sh);
        stage.setRoot(root);

…

The RSS feed will be shown as two lines of text, so let’s create a text group and then two text objects. We also create a background group to use later. Shapes are drawn in the order they are added, so we have to add the bg group before the textgroup.

        var bg = new amino.Group();
        root.add(bg);

        var textgroup = new amino.Group();
        root.add(textgroup);

        var line1 = new amino.Text().x(50).y(200).fill("#ffffff").text('foo').fontSize(80);
        var line2 = new amino.Text().x(50).y(300).fill("#ffffff").text('bar').fontSize(80);
        textgroup.add(line1,line2);

Each Text object has the same position, color, and size except that one is 100 pixels lower down on the screen than the other. Now we need to animate them.

The animation consists of three sections: set the text to the current headline, rotate the text in from the side, then rotate the text back out after a delay.

In the setHeadlines function, if the headline is longer than the max we support (currently set to 34 letters) we chop it into two pieces. If we were really smart we’d be careful not to break words; I’ll leave that as an exercise for the reader (one possible approach is sketched after the code below).

        function setHeadlines(headline,t1,t2) {
            var max = 34;
            if(headline.length > max) {
                t1.text(headline.substring(0,max));
                t2.text(headline.substring(max));
            } else {
                t1.text(headline);
                t2.text('');
            }
        }
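If you did want to avoid breaking words, one possible approach (my sketch, not from the original post) is to break at the last space before the cutoff and fall back to a hard break when there isn’t one:

        //word-aware variant: break at the last space before max if possible
        function setHeadlinesWordAware(headline,t1,t2) {
            var max = 34;
            if(headline.length <= max) {
                t1.text(headline);
                t2.text('');
                return;
            }
            var cut = headline.lastIndexOf(' ', max);
            if(cut <= 0) {
                //no space to break on, fall back to a hard break
                t1.text(headline.substring(0,max));
                t2.text(headline.substring(max));
            } else {
                t1.text(headline.substring(0,cut));
                t2.text(headline.substring(cut+1));
            }
        }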

The rotateIn function calls setHeadlines with the next title, then animates the Y rotation axis from 220 degrees to 360 over two seconds (2000 milliseconds). It also triggers rotateOut when it’s done.

        function rotateIn() {
            setHeadlines(titles.next(),line1,line2);
            textgroup.ry.anim().from(220).to(360).dur(2000).then(rotateOut).start();
        }

A quick note on rotation. Amino is fully 3D so in theory you can rotate shapes in any direction, not just in the 2D plane. To keep things simple the Group object has three rotation properties: rx, ry, and rz. These each rotate around the x, y, and z axes. The x axis is horizontal and fixed to the top of the screen, so rotating around the x axis would flip the shape from top to bottom. The y axis is vertical and on the left side of the screen. Rotating around the y axis flips the shape left to right. If you want to do a rotation that looks like the standard 2D rotation, then you want to go around the Z axis with rz. Also note that all rotations are in degrees, not radians.
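For example, a flat 2D spin of the text group would just animate rz instead, using the same anim() chain shown above (this line is illustrative and assumes rz animates exactly like ry):

        //hypothetical example: a full 2D spin around the z axis, in degrees
        textgroup.rz.anim().from(0).to(360).dur(2000).start();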

The rotateOut() function rotates the text group back out from 0 to 140 over two seconds, then triggers rotateIn again. Since each function triggers the other they will continue to ping pong back and forth forever, pulling in a new headline each time. Notice the delay() call. This will make the animation wait five seconds before starting.

        function rotateOut() {

            textgroup.ry.anim().delay(5000).from(0).to(140).dur(2000).then(rotateIn).start();
        }

Finally we can start the whole shebang off by calling rotateIn for the first time.

        rotateIn();

What we have so far will work just fine but it’s a little boring because the background is pure black. Let’s add a few subtly moving rectangles in the background.

First we will create the three rectangles. They each fill the screen and are 50% translucent, in red, green, and blue.

        //three rects that fill the screen: red, green, blue.  50% translucent
        var rect1 = new amino.Rect().w(sw).h(sh).opacity(0.5).fill("#ff0000");
        var rect2 = new amino.Rect().w(sw).h(sh).opacity(0.5).fill("#00ff00");
        var rect3 = new amino.Rect().w(sw).h(sh).opacity(0.5).fill("#0000ff");
        bg.add(rect1,rect2,rect3);

Now let’s move the two back rectangles off the left edge of the screen.

        //animate the back two rects
        rect1.x(-1000);
        rect2.x(-1000);

Finally we can slide them from left to right and back. Notice that these animations set loop to -1 and autoreverse to true. The loop count sets how many times the animation will run; using -1 makes it run forever. The autoreverse property makes the animation alternate direction each time: rather than going from left to right and starting over at the left again, it will go left to right and then right to left. Finally, the second animation has a shorter duration and a five second delay. This staggers the two animations so they will always be in different places. Since all three rectangles are translucent the colors will continually mix and change as the rectangles slide back and forth.

        rect1.x.anim().from(-1000).to(1000).dur(5000)
            .loop(-1).autoreverse(true).start();
        rect2.x.anim().from(-1000).to(1000).dur(3000)
            .loop(-1).autoreverse(true).delay(5000).start();

Here’s what it finally looks like. Of course a still picture can’t do justice to the real thing.

image

image