I’ve used a lot of cross platform desktop toolkits over the years. I’ve even built some when I worked on Swing and JavaFX at Sun, and I continue development of Amino, an OpenGL toolkit for JavaScript. I know far more about toolkits than I wish. You would think the hardest part of making a good toolkit is the graphics or the widgets or the API. Nope. It’s deployment. Client side Java failed because of deployment: how to actually get the code running on the end user’s computer 100% reliably, no matter what system they have. Deployment on the desktop is hard. Perhaps some web technology can help.


Today we’re going to play with a new toolkit called Thrust. Thrust is an embeddable web view based on Chromium, similar to Atom-Shell or Node-webkit, but with one big difference. The Chromium renderer runs in a separate process that your app communicates with over a simple JSON based RPC pipe. This one architectural decision makes Thrust far more flexible and reliable.

Since the actual API is over a local connection instead of C bindings, you can use Thrust with the language of your choosing. Bindings already exist for NodeJS, Go, Scala, and Python. The bindings just speak the RPC protocol so you could roll your own bindings if you want. This split makes Thrust far more reliable and flexible than previous Webkit embedding efforts. Though it’s still buggy, I’ve already had far more success with Thrust than I did with Atom-Shell.

For this tutorial I’m going to show you how to use Thrust with NodeJS, but the same principles apply to other language bindings. This tutorial assumes you already have node installed and a text editor available.

A simple app

First create a new directory and node project.

mkdir thrust_tutorial
cd thrust_tutorial
npm init
Accept all of the defaults for npm init.

Now create a minimal start.html page that looks like this.

<h1>Greetings, Earthling</h1>

Create another file called start.js and paste this in it:

var thrust = require('node-thrust');
var path   = require('path');

thrust(function(err, api) {
    var url = 'file://' + path.resolve(__dirname, 'start.html');
    var window = api.window({
        root_url: url,
        size: {
            width: 640,
            height: 480,
        },
    });
    window.show();
});
This launches Thrust with the start.html file loaded in it. Notice that you have to use an absolute URL with a file: protocol, because Thrust acts like a regular web browser. It needs real URLs, not just file paths.

Installing Thrust

Now install Thrust and save it to the package.json file. The node bindings will fetch the correct binary parts for your platform automatically.

npm install --save node-thrust

Now run it!

node start.js

You should see something like this:


A real local app in just a few lines. Not bad. If your app has no need for native access (loading files, etc.) then you can stop right now. You have a local page up and running. It can load JavaScript from remote servers, though I’d copy them locally for offline usage.

However, you probably want to do something more than just display a static page. The advantage of Node is the amazing ecosystem of modules. My personal use case is an Arduino IDE. I want the Node side for compiling code and using the serial port. The web side is for editing code and debugging. That means the webpage side of my app needs to talk to the Node side.

Message Passing

Thrust defines a simple message passing protocol between the two halves of the application. This is mostly hidden by the language binding. The Node function window.focus() actually becomes a message sent from the Node side to the Chromium side over an internal pipe. We don’t need to care about how it works, but we do need to pass messages back and forth.

On the browser side, add this code to start.html to send and receive messages using the THRUST.remote object:

<script type='text/javascript'>
    THRUST.remote.listen(function(msg) {
        console.log("got back a message " + JSON.stringify(msg));
    });
    THRUST.remote.send({message:"I am going to solve all the world's energy problems."});
</script>

Then receive the message and respond on the Node side with this code:

    window.on('remote', function(evt) {
        console.log("got a message " + JSON.stringify(evt));
        window.remote({message:"By blowing it up?"});
    });

The messages may be any Javascript object that can be serialized as JSON, so you can't pass functions back and forth, just data.
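You can see why functions can’t cross the pipe with a quick sketch: JSON.stringify, which is effectively what happens to every message, silently drops function properties.

```javascript
// JSON round-trip keeps plain data but silently drops functions,
// which is why Thrust messages must be pure data.
var msg = {
    text: "hello",
    count: 42,
    reply: function() { return "I won't survive serialization"; }
};
var wire = JSON.stringify(msg);   // what actually crosses the pipe
var received = JSON.parse(wire);

console.log(wire);                // {"text":"hello","count":42}
console.log('reply' in received); // false: the function is gone
```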

If you run this code you'll see a bunch of debugging information on the command line including the console.log output.

[55723:1216/141201:INFO:remote_bindings.cc(96)] REMOTE_BINDINGS: SendMessage
got a message {"message":{"message":"I am going to solve all the world's energy problems."}}
[55721:1216/141202:INFO:api.cc(92)] [API] CALL: 1 remote
[55721:1216/141202:INFO:thrust_window_binding.cc(94)] ThrustWindow call [remote]
[55721:1216/141202:INFO:CONSOLE(7)] "got back a message {"message":"By blowing it up?"}", source: file:///Users/josh/projects/thrust_tutorial/start.html (7)

Notice that both ends of the communication are here: the Node and HTML sides. Thrust automatically redirects console.log from the HTML side to standard out. I did notice, however, that it doesn't handle the multiple-arguments form of console.log, which is why I use JSON.stringify(). Unlike in a browser, doing console.log("some object", obj) would result in only the "some object" text, not the structure of the actual object.
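One way around the single-argument limitation is a tiny helper (the names formatAll and logAll here are my own, not part of Thrust) that folds every argument into one string before logging:

```javascript
// Hypothetical helper: fold all arguments into a single string so the
// single-argument console.log that Thrust forwards still shows structure.
function formatAll(args) {
    var parts = [];
    for (var i = 0; i < args.length; i++) {
        var a = args[i];
        parts.push(typeof a === 'string' ? a : JSON.stringify(a));
    }
    return parts.join(' ');
}

function logAll() {
    console.log(formatAll(arguments));
}

logAll("got object:", {message: "By blowing it up?"});
// prints: got object: {"message":"By blowing it up?"}
```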

Now that the UI can talk to node, it can do almost anything. Save files, talk to databases, poll joysticks, or play music. Let’s build a quick text editor.

Building a text editor

Create a file called editor.html.

<script src="http://cdnjs.cloudflare.com/ajax/libs/ace/1.1.3/ace.js" type="text/javascript" charset="utf-8"></script>
<script src='http://code.jquery.com/jquery-2.1.1.js'></script>

<style type="text/css" media="screen">
#editor {
    position: absolute;
    top: 30px;
    right: 0;
    bottom: 0;
    left: 0;
}
</style>

<button id='save'>Save</button>
<div id="editor">function foo(items) {
    var x = "All this is syntax highlighted";
    return x;
}</div>

<script type='text/javascript'>
    var editor = ace.edit("editor");

    $('#save').click(function() {
        THRUST.remote.send({action:'save', content: editor.getValue()});
    });
</script>

This page initializes the editor and creates a handler for the save button. When the user presses save it will send the editor contents to the Node side. Note the #editor CSS that gives it a size. Without a size, an Ace editor will shrink to 0.

This is the new Node side code, editor.js

var fs = require('fs');
var thrust = require('node-thrust');

thrust(function(err, api) {
    var url = 'file://' + require('path').resolve(__dirname, 'editor.html');
    var window = api.window({
        root_url: url,
        size: {
            width: 640,
            height: 480,
        },
    });
    window.show();

    window.on('remote', function(evt) {
        console.log("got a message " + JSON.stringify(evt));
        if(evt.message.action == 'save') return saveFile(evt.message);
    });
});

function saveFile(msg) {
    fs.writeFileSync('editor.txt', msg.content);
    console.log("saved to editor.txt");
}
Run this with node editor.js and you will see something like this:


Sweet. A real text editor that can save files.

Things I haven't covered

You can control native menus with Thrust. On Windows or Linux this should be done in the app view itself with the various CSS toolkits. On Mac (or Linux with Unity) you will want to use the real menubar. You can do this with the API documented here. It's a pretty simple API, but I want to mention something that might bite you: menus in the menubar will appear in the order you create them, not the order you add them to the menubar.

Another thing I didn’t cover, since I’m new to it myself, is webviews. Thrust lets you embed a second webpage inside the first, similar to an iframe but with stricter permissions. This webview is very useful if you need to load untrusted content, such as an RSS reader might do. The webview is encapsulated, so you can run code that might crash in it. This would be useful if you were making, say, an IDE or application editor that must repeatedly run some (possibly buggy) code and then throw it away. I’ll do a future installment on webviews.

I also didn't cover packaging. Thrust doesn’t handle packaging with installers. It gives you an executable and some libraries that you can run from a command line, but you still must build a native app bundle / deb / rpm / msi for each platform you want to support if you want a nicer experience. Fortunately there are other tools to help you do this, like InnoSetup and FPM.

In the follow up to last year’s Beautiful Lego, Mike Doyle brings us back for more of the best Lego models from around the world. This time the theme is Dark. As the book explains it: “destructive objects, like warships and mecha, and dangerous and creepy animals… dark fantasies of dragons and zombies and spooks.” I like the concept of a theme, as it helps focus the book. The theme of Dark was stretched a bit to include banks, cigarettes, and vocaloids (mechanical Japanese pop-stars), but it’s still 300+ gorgeous pages of the world’s best Lego art. Beautiful Lego 2 is filled to the brim with Zerg-like insect hordes, a lot of Krakens, and some of the cutest mechs you’ve ever seen.


Unlike the previous book, Beautiful Lego 2 is a hardcover. I guess the first book was popular enough that No Starch Press could really invest in this one, and it shows. It’s a thick book with proper stitching, a dust jacket, and quality paper. The book lies flat when open, like a good library edition would. This is a book that will still look new in a few decades.

Beautiful Lego 2 is a true picture book, and well worth the price for the Lego fan in your family. I know you can get an electronic edition, but this is the sort of book that lets us know why physical books still exist. BTW, you can get 30% off right now with the coupon code SHADOWS.

Buy it now at NoStarch.com

My hatred of C and C++ is world renowned, or at least it should be. It's not that I hate the languages themselves, but the ancient build chain: a hack of compilers and #defines that has to be modified for every platform. Oh, and segfaults and memory leaks. The usual. Unfortunately, if you want to write fast graphics code you're pretty much going to be stuck with C or C++, and that's where Amino comes in.

Amino, my JavaScript library for OpenGL graphics on the Raspberry Pi (and other platforms), uses C++ code underneath. NodeJS binds to C++ fairly easily so I can hide the nasty parts, but the nasty is still there.

As the Amino codebase has developed the C++ bindings have gotten more and more complicated. This has caused a few problems.

3rd Party Libraries

First, Amino depends on several 3rd party libraries; libraries the user must already have installed. I can pretty much assume that OpenGL is installed, but most users don't have FreeType, libpng, or libjpeg. Even worse, most don't have GCC installed. While I don't have a solution for the GCC problem yet, I want to get rid of the libpng and libjpeg dependencies. I don't use most of what those libraries offer, and they represent just one more thing to go wrong. Furthermore, there's no point in dealing with file paths on the C++ side when Node can work with files and streams very easily.

So, I ripped that code completely out. Now the native side can turn a Node buffer into a texture, regardless of where that buffer came from. JPEG, PNG, or in-memory data are all treated the same. To decompress JPEGs and PNGs I found two image libraries, NanoJPEG and uPNG (each a single file!), that get the job done with minimal fuss. This code is now part of Amino, so we have fewer dependencies and more flexibility.

I might move to pure JS for image decoding in the future, as I'm doing for my PureImage library, but that might be very slow on the Pi, so we'll stick with native for now.

Shader Debugging

GPU Shaders are powerful but hard to debug on a single platform, much less multiple platforms. As the shaders have developed I've found more differences between the Raspberry Pi and the Mac, even when I use ES 2.0 mode on the Mac. JavaScript is easier to change and debug than C++ (and faster to compile on the Pi), so I ripped out all of the shader init code and rewrote it in JavaScript.

The new JS code initializes the shaders from files on disk instead of inline strings, as before. This means I don't have to recompile the C++ side to make changes. This also means I can change the init code on a per platform basis at runtime rather than #defines in C++ code. For speed reasons the drawing code which uses the shaders is still in C++, but at least all of the nasty init code is in easier to maintain JS.

Input Events

Managing input events across platforms is a huge pain. The key codes vary by keyboard and particular locale. Further complicating matters, GLFW, the Raspbian Linux kernel, and the web browser all use different values for different keys, as well as different mouse and scroll events. Over the years I've built key munging utilities over and over.

To solve this problem I started moving Amino's keyboard code into a new Node module: inputevents. It does not depend on Amino and will, eventually, be usable in plain browser code as well as on Mac, Raspberry Pi, and wherever else we need it to go. Eventually it will support platform specific IMEs, but that's a ways down the road.
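The core of any such normalization module is a per-source lookup table mapping raw codes to common names. Here is a minimal sketch using the browser's legacy DOM keyCode values (the table and function names are my own, not from inputevents):

```javascript
// Sketch of key normalization: each input source gets its own table
// mapping raw codes to common names. These are the legacy DOM keyCode
// numbers; a GLFW or Linux kernel table would hold different values.
var BROWSER_KEYS = {
    13: 'ENTER',
    27: 'ESCAPE',
    32: 'SPACE',
    37: 'ARROW_LEFT',
    38: 'ARROW_UP',
    39: 'ARROW_RIGHT',
    40: 'ARROW_DOWN'
};

function normalizeKey(table, rawCode) {
    return table[rawCode] || 'UNKNOWN';
}

normalizeKey(BROWSER_KEYS, 13);   // 'ENTER'
```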

Random Fixes and Improvements

Since I was in the code anyway, I fixed some alpha issues with dispmanx on the Pi, made opacity animatable, added more unit tests, turned clipping back on, built a new PixelView retained buffer for setting pixels under OpenGL (ex: software generation of fractals), and started using node-pre-gyp to make building the native parts easier.

I've also started on a new rich text editor toolkit for both Canvas and Amino. It's extremely beta right now, so don't use it yet unless you like 3rd degree burns.

That's it for now. Enjoy, folks.

Carthage and C++ must be destroyed.

I recently added the ability to set individual pixels in Amino, my NodeJS based OpenGL scene graph for the Raspberry Pi. To test it out I thought I'd write a simple Mandelbrot generator. The challenge with CPU intensive work is that Node has only one thread. If you block that thread, your UI stops. Dead. To solve this we need a background processing solution.

A Simple Background Processing Framework

While there are true threading libraries for Node, the simplest way to put something into the background is to start another Node process. It may seem like starting a process is heavyweight compared to a thread in other languages, but if you are doing something CPU intensive, the cost of the fork() call is tiny compared to the rest of the work you are doing. It will be lost in the noise.

To be really useful, we don't want to just start a child process, but actually communicate with it to give it work. The child_process module makes this very easy. child_process.fork() takes the path to another script file and returns an event emitter. We can send messages to the child through this emitter and listen for responses. Here's a simple class I created called Workman to manage the processes.

var child = require('child_process');

var Workman = {
    count: 4, chs: [], current: 0,
    init: function(chpath, cb, count) {
        if(typeof count == 'number') this.count = count;
        console.log("using thread count", this.count);
        for(var i=0; i<this.count; i++) {
            this.chs[i] = child.fork(chpath);
            this.chs[i].on('message', cb);
        }
    },
    sendWork: function(msg) {
        this.chs[this.current].send(msg);
        this.current = (this.current+1) % this.count;
    }
};

Workman creates count child processes, then saves them in the chs array. When you want to send some work, call the sendWork function. This will send the message to one of the children, round robin style.

Whenever a child sends an event back, the event will be handed to the callback passed to the workman.init() function.
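The round-robin dispatch itself is just an index that wraps around. Here is a standalone sketch of that logic, with mock children standing in for the forked processes so it runs without spawning anything:

```javascript
// Standalone sketch of round-robin dispatch. Mock children record which
// worker each message lands on, standing in for forked processes.
var sent = [];
var mockChildren = [0, 1, 2, 3].map(function(id) {
    return { send: function(msg) { sent.push({worker: id, msg: msg}); } };
});

var current = 0;
function sendWork(msg) {
    mockChildren[current].send(msg);
    current = (current + 1) % mockChildren.length;
}

for (var i = 0; i < 6; i++) sendWork({row: i});
// rows 0..5 land on workers 0, 1, 2, 3, 0, 1
```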

Now that we can talk to the child processes it's time to do some drawing.

Parent Process

This is the code to actually talk to the screen. First the setup. pv is a new PixelView object. A PixelView is like an image view, but you can set pixel values directly instead of using a texture from disk. w and h are the width and height of the texture in the GPU.

var pv = new amino.PixelView().pw(500).w(500).ph(500).h(500);

var w = pv.pw();
var h = pv.ph();

Now let's create a Workman to schedule the work. We will submit work for each row of the image. When work comes back from the child process the handleRow function will handle it.

var workman = Workman;
workman.init(__dirname + '/mandle_child.js', handleRow); // child script filename assumed

var scale = 0.01;
for(var y=0; y<h; y++) {
    var py = (y-h/2)*scale;
    var msg = {
        x0: -2.5, x1: 2.5, // x range chosen to frame the fractal
        y: py,
        iw: w, iy: y,
        iter: 100,
    };
    workman.sendWork(msg);
}

Notice that the work message must contain all of the information the child needs to do its work: the start and end values in the x direction, the y value, the length of the row, the index of the row, and the number of iterations to do (more iterations makes the fractal more accurate but slower). This message is the only communication the child has with the outside world. Unlike with threads, child processes do not share memory with the parent.

Here is the handleRow function which receives the completed work (an array of iteration counts) and draws the row into the PixelView. After updating the pixels we have to call updateTexture to push the changes to the GPU and screen. lookupColor converts the iteration counts into a color using a look up table.

function handleRow(m) {
    var y = m.iy;
    for(var x=0; x<m.row.length; x++) {
        var c = lookupColor(m.row[x]);
        pv.setPixel(x, y, c[0], c[1], c[2], 255);
    }
    pv.updateTexture();
}

var lut = [];
for(var i=0; i<10; i++) {
    var s = (255/10)*i;
    lut.push([s, s, s]);
}

function lookupColor(iter) {
    return lut[iter%lut.length];
}

Child Process

Now let's look at the child process. This is where the actual fractal calculation is done. It's your basic Mandelbrot: for each pixel in the row it iterates a complex number until the magnitude exceeds 2 or it hits the maximum number of iterations. Then it stores the iteration count for that pixel in the row array.

function lerp(a,b,t) {
    return a + t*(b-a);
}

process.on('message', function(m) {
    var row = [];
    for(var i=0; i<m.iw; i++) {
        var x0 = lerp(m.x0, m.x1, i/m.iw);
        var y0 = m.y;
        var x = 0.0;
        var y = 0.0;
        var iteration = 0;
        var max_iteration = m.iter;
        while(x*x + y*y < 2*2 && iteration < max_iteration) {
            var xtemp = x*x - y*y + x0;
            y = 2*x*y + y0;
            x = xtemp;
            iteration = iteration + 1;
        }
        row[i] = iteration;
    }
    process.send({iy: m.iy, row: row});
});

After every pixel in the row is complete it sends the row back to the parent. Notice that it also sends an iy value. Since the children could complete their work in any order (if one row happens to take longer than another), the iy value lets the parent know which row this result is for so that it will be drawn in the right place.
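Because the rows can come back in any order, the parent can use iy to slot each result directly into place. A minimal sketch of that reordering, with strings standing in for real pixel rows:

```javascript
// Sketch: results arrive in any order; iy puts each row in its slot.
var image = new Array(4);

function handleResult(m) {
    image[m.iy] = m.row;   // iy says which row this result belongs to
}

// Simulate out-of-order arrival:
handleResult({iy: 2, row: 'row-2'});
handleResult({iy: 0, row: 'row-0'});
handleResult({iy: 3, row: 'row-3'});
handleResult({iy: 1, row: 'row-1'});
// image is now ['row-0','row-1','row-2','row-3'] regardless of arrival order
```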

Also notice that all of the calculation happens in the message event handler. This will be called every time the parent process sends some work. The child process just waits for the next message. The beauty of this scheme is that Node handles any overflow or underflow of the work queue. If the parent sends a lot of work requests at once they will stay in the queue until the child takes them out. If there is no work then the child will automatically wait until there is. Easy-peasy.

Here's what it looks like running on my Mac. Yes, Amino runs on Mac as well as Linux. I mainly talk about the Raspberry Pi because that's Amino's sweet spot, but it will run on almost anything. I chose Mac for this demo simply because I've got 4 cores there and only 1 on my Raspberry Pi. It just looks cooler to have four bars spiking up. :)


This code is now in the aminogfx repository under demos/pixels/mandle.js.

A post about Arthur Whitney and kOS made the rounds a few days ago. It concerns a text editor Arthur made with four lines of K code, and a complete operating system he’s working on. These were all built in K, a vector oriented programming language derived from APL. This reminded me that I really need to look at APL after all of the language ranting I’ve done recently.

Note: For the purposes of this post I’m lumping K, J, and the other APL derived languages in with APL itself, much as I’d refer to Scheme or Clojure as Lisps.

After reading up, I’m quite impressed with APL. I’ve always heard it can do complex tasks in a fraction of the code of other languages, and be super fast. It turns out this is very true. Bernard Legrand's APL – a Glimpse of Heaven provides a great overview of the language and why it’s interesting.

APL is not without its problems, however. The syntax is very hard to read. I’m sure it becomes easier once you get used to it, but I still spent a lot more time analyzing a single line of code than I would in any other language.

APL is fast and compact for some tasks, but not others. Fundamentally it’s a bunch of operations that work on arrays. If your problem can be phrased in terms of array operations, then this is awesome. If it can’t, then you start fighting the language, and eventually it bites you.

I found anything with control structures to be cumbersome. This isn’t to say that APL can’t do things that require an if statement, but you don’t get the benefits. This code to compute a convex hull, for example, seems about as long as it would be in a more traditional language, within a factor of 2 at least. It doesn’t benefit much from APL’s strengths.

Another challenge is that the official syntax uses non-ASCII characters. I actually don’t see this as a problem. We are a decade and a half into the 21st century and can deal with non-ASCII characters quite easily. The challenge is that the symbols themselves are unfamiliar to most people. I didn’t find it hard to pick up the basics after reading a half hour tutorial, so I think the real problem is that the syntax scares programmers away before they ever try it.

I also think enthusiasts focus on how much better APL is than other languages, rather than simply showing someone why they should spend the time to learn it. They need to show what it can do that is also practical. While it’s cool to be able to calculate all of the primes from 1 to N in just a few characters, that isn’t going to sell most developers because that’s not a task they actually need to accomplish very often.

APL seems ideal for solving mathematical problems, or at least a good subset of them. The problem for APL is that Mathematica, MATLAB, and various other tools have sprung up to do that better.

Much like Lisp, APL seems stuck between the past and the future. It is too general for the things it’s really good at; more recent specialized tools do the job better. Yet APL isn't general enough to be good as a general purpose language. And many general purpose languages have added array processing support (often through libraries) that makes them good enough for the things APL is good at. Java 8 streams and lambda functions, for example. Thus it remains stuck in a few niches like high speed finance. This is not a bad niche to be in (highly profitable, I’m sure), but APL will never become widely used.

That said, I really like APL for the things it’s good at. I wish APL could be embedded in a more general purpose language, much like regular expressions are embedded in JavaScript. I love the concept of a small number of functions that can be combined to do amazing things with arrays. This is the most important part of APL — for me at least — but it’s hidden behind a difficult notation.

I buy the argument that any notation is hard to understand until you learn it, and with learning comes power. Certainly this is true for reading prose.

Humans are good pattern recognizers. We don’t read by parsing letters; only children just learning to read go letter by letter. The letters form patterns, called words, that our brains recognize in their entirety. After a while children's brains pick up the patterns and process them whole. In fact, our brains are so good at picking up patterns that we can read most English words with all of the inner letters scrambled, as long as the first and last letters are correct.
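You can demonstrate the effect with a small scrambler that shuffles only the inner letters of a word (the function name is my own invention for illustration):

```javascript
// Demonstrates the scrambled-word effect: shuffle only the inner letters,
// keeping the first and last in place.
function scrambleWord(word) {
    if (word.length <= 3) return word;   // nothing inner to shuffle
    var inner = word.slice(1, -1).split('');
    for (var i = inner.length - 1; i > 0; i--) {   // Fisher-Yates shuffle
        var j = Math.floor(Math.random() * (i + 1));
        var t = inner[i]; inner[i] = inner[j]; inner[j] = t;
    }
    return word[0] + inner.join('') + word[word.length - 1];
}

scrambleWord("recognizer");   // e.g. "rgeizocner": still readable
```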

I’m sure this principle of pattern recognition applies to an experienced APL programmer as well. They can probably look at this


and think: pick six random numbers from 1 to 40 and return them in ascending order.
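For comparison, the same operation spelled out in JavaScript takes a dozen lines rather than a handful of characters (dealSorted is a hypothetical name, echoing APL's "deal" operator):

```javascript
// Pick six distinct random numbers from 1 to 40 and return them
// in ascending order: the JavaScript version of the APL one-liner.
function dealSorted(n, max) {
    var picked = {};
    var out = [];
    while (out.length < n) {
        var v = 1 + Math.floor(Math.random() * max);
        if (!picked[v]) { picked[v] = true; out.push(v); }
    }
    return out.sort(function(a, b) { return a - b; });
}

dealSorted(6, 40);   // e.g. [3, 11, 17, 24, 31, 40]
```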

After a time this mental processing would become natural. However, much like with writing, code needs spacing and punctuation to help the symbolic "letters" form words in the mind of the programmer. Simply pursuing compactness for the sake of "mad skillz props" doesn’t help anyone. It just makes for write-only code.

Were I to reinvent computing (in my fictional JoshTrek show where the computer understands all spoken words with 200% accuracy), I would replace the symbols with actual meaningful words, then separate them into chunks with punctuation, much like sentences.


would become
deal 6 of 1 to 40 => x, sort_ascending, index x

The symbols are replaced with words and the ordering is swapped to read left to right. It still takes some training to understand what it means, but far less. It’s not as compact, but it's far easier to pick up.

So, in summary, APL is cool and has a lot to teach us, but I don’t think I’d ever use it in my daily work.


Since writing this essay I discovered Q, also by Arthur Whitney, which expands K’s terse syntax, but I still find it harder to read than it should be.