My hatred of C and C++ is world renowned, or at least it should be. It's not that I hate the languages themselves, but the ancient build chain: a hodgepodge of compilers and #defines that has to be modified for every platform. Oh, and segfaults and memory leaks. The usual. Unfortunately, if you want to write fast graphics code you're pretty much stuck with C or C++, and that's where Amino comes in.

Amino, my JavaScript library for OpenGL graphics on the Raspberry Pi (and other platforms), uses C++ code underneath. NodeJS binds to C++ fairly easily so I can hide the nasty parts, but the nasty is still there.

As the Amino codebase has developed, the C++ bindings have gotten more and more complicated. This has caused a few problems.

3rd Party Libraries

First, Amino depends on several 3rd party libraries; libraries the user must already have installed. I can pretty much assume that OpenGL is installed, but most users don't have FreeType, libpng, or libjpeg. Even worse, most don't have GCC installed. While I don't have a solution for the GCC problem yet, I want to get rid of the libpng and libjpeg dependencies. I don't use most of what those libraries offer, and they represent just one more thing to go wrong. Furthermore, there's no point in dealing with file paths on the C++ side when Node can work with files and streams very easily.

So, I ripped that code completely out. Now the native side can turn a Node buffer into a texture, regardless of where that buffer came from. JPEG, PNG, or in-memory data are all treated the same. To decompress JPEGs and PNGs I found two image libraries, NanoJPEG and uPNG (each a single file!), that get the job done with minimal fuss. This code is now part of Amino, so we have fewer dependencies and more flexibility.

I might move to pure JS for image decoding in the future, as I'm doing for my PureImage library, but that might be very slow on the Pi, so we'll stick with native for now.

Shader Debugging

GPU shaders are powerful but hard to debug on a single platform, much less multiple platforms. As the shaders have developed I've found more and more differences between the Raspberry Pi and the Mac, even when I use ES 2.0 mode on the Mac. JavaScript is easier to change and debug than C++ (and faster to compile on the Pi), so I ripped out all of the shader init code and rewrote it in JavaScript.

The new JS code initializes the shaders from files on disk instead of inline strings, as before. This means I don't have to recompile the C++ side to make changes. It also means I can change the init code on a per-platform basis at runtime rather than with #defines in C++ code. For speed reasons the drawing code which uses the shaders is still in C++, but at least all of the nasty init code is in easier-to-maintain JS.

Input Events

Managing input events across platforms is a huge pain. Key codes vary by keyboard and locale. Further complicating matters, GLFW, the Raspbian Linux kernel, and the web browser all use different values for different keys, as well as for mouse and scroll events. Over the years I've built key-munging utilities over and over.

To solve this problem I started moving Amino's keyboard code into a new Node module: inputevents. It does not depend on Amino and will, eventually, be usable in plain browser code as well as on Mac, Raspberry Pi, and wherever else we need it to go. Eventually it will support platform-specific IMEs, but that's a ways down the road.

Random Fixes and Improvements

Since I was in the code anyway, I fixed some alpha issues with dispmanx on the Pi, made opacity animatable, added more unit tests, turned clipping back on, built a new PixelView retained buffer for setting pixels under OpenGL (ex: software generation of fractals), and started using node-pre-gyp to make building the native parts easier.

I've also started on a new rich text editor toolkit for both Canvas and Amino. It's extremely beta right now, so don't use it yet unless you like 3rd degree burns.

That's it for now. Enjoy, folks.

Carthage and C++ must be destroyed.

I recently added the ability to set individual pixels in Amino, my Node JS based OpenGL scene graph for the Raspberry Pi. To test it out I thought I'd write a simple Mandelbrot generator. The challenge with CPU-intensive work is that Node only has one thread. If you block that thread your UI stops. Dead. To solve this we need a background processing solution.

A Simple Background Processing Framework

While there are true threading libraries for Node, the simplest way to put something into the background is to start another Node process. It may seem like starting a process is heavyweight compared to a thread in other languages, but if you are doing something CPU-intensive the cost of spawning the process is tiny compared to the rest of the work you are doing. It will be lost in the noise.

To be really useful, we don't want to just start a child process, but actually communicate with it to give it work. The child_process module makes this very easy. child_process.fork() takes the path to another script file and returns a ChildProcess object, which is an event emitter. We can send messages to the child through this object and listen for responses. Here's a simple class I created called Workman to manage the processes.

var child = require('child_process');

var Workman = {
    count: 4,
    chs:[],
    init: function(chpath, cb, count) {
        if(typeof count == 'number') this.count = count;
        console.log("using worker count", this.count);
        //fork one child per worker, each running the same script
        for(var i=0; i<this.count; i++) {
            this.chs[i] = child.fork(chpath);
            this.chs[i].on('message',cb);
        }
    },
    sendcount:0,
    //hand each message to the next child, round robin
    sendWork: function(msg) {
        this.chs[this.sendcount%this.chs.length].send(msg);
        this.sendcount++;
    }
}

Workman creates count child processes, then saves them in the chs array. When you want to send some work to it, call the sendWork function. This will send the message to one of the children, round robin style.

Whenever a child sends an event back, the event will be handed to the callback passed to the workman.init() function.

Now that we can talk to the child processes it's time to do some drawing.

Parent Process

This is the code to actually talk to the screen. First the setup. pv is a new PixelView object. A PixelView is like an image view, but you can set pixel values directly instead of using a texture from disk. w and h are the width and height of the texture in the GPU.

var pv = new amino.PixelView().pw(500).w(500).ph(500).h(500);
root.add(pv);
stage.setRoot(root);

var w = pv.pw();
var h = pv.ph();

Now let's create a Workman to schedule the work. We will submit work for each row of the image. When work comes back from the child process the handleRow function will handle it.

var workman = Workman;
workman.init(__dirname+'/mandle_child.js',handleRow);
var scale = 0.01;
for(var y=0; y<h; y++) {
    var py = (y-h/2)*scale;
    var msg = {
        x0:(-w/2)*scale,
        x1:(+w/2)*scale,
        y:py,
        iw: w,
        iy:y,
        iter:100,
    };
    workman.sendWork(msg);
}
Notice that the work message must contain all of the information the child needs to do its work: the start and end values in the x direction, the y value, the length of the row, the index of the row, and the number of iterations to do (more iterations makes the fractal more accurate, but slower). This message is the only communication the child has with the outside world. Unlike threads, child processes do not share memory with the parent.

Here is the handleRow function, which receives the completed work (an array of iteration counts) and draws the row into the PixelView. After updating the pixels we have to call updateTexture to push the changes to the GPU and screen. lookupColor converts the iteration counts into a color using a lookup table.

function handleRow(m) {
    var y = m.iy;
    for(var x=0; x<m.row.length; x++) {
         var c = lookupColor(m.row[x]);
         pv.setPixel(x,y,c[0],c[1],c[2],255);
    }
    pv.updateTexture();
}
var lut = [];
for(var i=0; i<10; i++) {
    var s = (255/10)*i;
    lut.push([0,s,s]);
}
function lookupColor(iter) {
    return lut[iter%lut.length];
}

Child Process

Now let's look at the child process. This is where the actual fractal calculations are done. It's your basic Mandelbrot. For each pixel in the row it iterates a complex number until its magnitude exceeds 2 or it hits the maximum number of iterations. Then it stores the iteration count for that pixel in the row array.

function lerp(a,b,t) {
    return a + t*(b-a);
}
process.on('message', function(m) {
    var row = [];
    for(var i=0; i<m.iw; i++) {
        var x0 = lerp(m.x0, m.x1, i/m.iw);
        var y0 = m.y;
        var x = 0.0;
        var y = 0.0;
        var iteration = 0;
        var max_iteration = m.iter;
        while(x*x + y*y < 2*2 && iteration < max_iteration) {
            var xtemp = x*x - y*y + x0;
            y = 2*x*y + y0;
            x = xtemp;
            iteration = iteration + 1;
        }
        row[i] = iteration;
    }
    process.send({row:row,iw:m.iw,iy:m.iy});
})

After every pixel in the row is complete it sends the row back to the parent. Notice that it also sends an iy value. Since the children could complete their work in any order (if one row happens to take longer than another), the iy value lets the parent know which row this result is for so that it will be drawn in the right place.

Also notice that all of the calculation happens in the message event handler. This will be called every time the parent process sends some work. The child process just waits for the next message. The beauty of this scheme is that Node handles any overflow or underflow of the work queue. If the parent sends a lot of work requests at once they will stay in the queue until the child takes them out. If there is no work then the child will automatically wait until there is. Easy-peasy.

Here's what it looks like running on my Mac. Yes, Amino runs on Mac as well as Linux. I mainly talk about the Raspberry Pi because that's Amino's sweet spot, but it will run on almost anything. I chose the Mac for this demo simply because I've got four cores there and only one on my Raspberry Pi. It just looks cooler to have four bars spiking up. :)

text

This code is now in the aminogfx repository under demos/pixels/mandle.js.

This is part 3 of a series on Amino, a JavaScript graphics library for OpenGL on the Raspberry Pi. You can also read part 1 and part 2.

Amino is built on Node JS, a robust JavaScript runtime married to a powerful IO library. That’s nice and all, but the real magic of Node is the modules. For any file format you can think of, someone has probably written a Node module to parse it. For any database you might want to use, someone has made a module for it. npmjs.org lists nearly ninety thousand packages! That’s a lot of modules ready for you to use.

For today’s demo we will build a nice rotating display of news headlines that could run in the lobby of an office using a flatscreen TV on the wall. It will look like this:

image

We will fetch news headlines as RSS feeds. Feeds are easy to parse using Node streams and the feedparser module. Let’s start by creating a parseFeed function. This function takes a url. It will load the feed from the url, extract the title of each article, then call the provided callback function with the list of headlines.

var FeedParser = require('feedparser');
var http = require('http');


function parseFeed(url, cb) {
    var headlines = [];

    http.get(url, function(res) {
        res.pipe(new FeedParser())
            .on('meta',function(meta) {
                //console.log('the meta is',meta);
            })
            .on('data',function(article) {
                console.log("title = ", article.title);
                headlines.push(article.title);
            })
            .on('end',function() {
                console.log("ended");
                cb(headlines);
            })
    });
}

Node uses streams. Many functions, like the http.get() function, return a stream. You can pipe this stream through a filter or processor. In the code above we use the FeedParser object to filter the HTTP stream. This returns a new stream which will produce events. We can then listen to the events as the data flows through the system, picking up just the parts we want. In this case we will watch for the data event, which provides the article that was just parsed. Then we add just the title to the headlines array. When the end event happens we send the headlines array to the callback. This sort of streaming IO code is very common in Node programs.

Now that we have a list of headlines, let’s make a display. We will hard-code the size to 1280 x 720, a common HDTV resolution. Adjust this to fit your own TV if necessary. As before, the first thing we do is turn the titles into a CircularBuffer (see the previous blog) and create a root group.

var amino = require('amino.js');
var sw = 1280;
var sh = 720;

parseFeed('http://www.npr.org/rss/rss.php?id=1001',function(titles) {
    amino.start(function(core, stage) {

        var titles = new CircularBuffer(titles);
        var root = new amino.Group();
        stage.setSize(sw,sh);
        stage.setRoot(root);

…

The RSS feed will be shown as two lines of text, so let’s create a text group then two text objects. Also create a background group to use later. Shapes are drawn in the order they are added, so we have to add the bg group before the textgroup.

        var bg = new amino.Group();
        root.add(bg);

        var textgroup = new amino.Group();
        root.add(textgroup);

        var line1 = new amino.Text().x(50).y(200).fill("#ffffff").text('foo').fontSize(80);
        var line2 = new amino.Text().x(50).y(300).fill("#ffffff").text('bar').fontSize(80);
        textgroup.add(line1,line2);

Each Text object has the same position, color, and size except that one is 100 pixels lower down on the screen than the other. Now we need to animate them.

The animation consists of three sections: set the text to the current headline, rotate the text in from the side, then rotate the text back out after a delay.

In the setHeadlines function, if the headline is longer than the max we support (currently set to 34 letters) we chop it into pieces. If we were really smart we’d be careful about not breaking words, but I’ll leave that as an exercise for the reader.

        function setHeadlines(headline,t1,t2) {
            var max = 34;
            if(headline.length > max) {
                t1.text(headline.substring(0,max));
                t2.text(headline.substring(max));
            } else {
                t1.text(headline);
                t2.text('');
            }
        }

The rotateIn function calls setHeadlines with the next title, then animates the Y rotation axis from 220 degrees to 360 over two seconds (2000 milliseconds). It also triggers rotateOut when it’s done.

        function rotateIn() {
            setHeadlines(titles.next(),line1,line2);
            textgroup.ry.anim().from(220).to(360).dur(2000).then(rotateOut).start();
        }

A quick note on rotation. Amino is fully 3D so in theory you can rotate shapes in any direction, not just in the 2D plane. To keep things simple the Group object has three rotation properties: rx, ry, and rz. These each rotate around the x, y, and z axes. The x axis is horizontal and fixed to the top of the screen, so rotating around the x axis would flip the shape from top to bottom. The y axis is vertical and on the left side of the screen. Rotating around the y axis flips the shape left to right. If you want to do a rotation that looks like the standard 2D rotation, then you want to go around the Z axis with rz. Also note that all rotations are in degrees, not radians.
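Since the rotation properties take degrees, angles computed with JavaScript’s own Math functions (which work in radians, e.g. Math.atan2) need converting first. A tiny helper for that:

```javascript
// Math.atan2, Math.sin, etc. work in radians;
// Amino's rx/ry/rz properties take degrees.
function toDegrees(rad) {
    return rad * 180 / Math.PI;
}

// e.g. a quarter turn in the 2D plane:
// textgroup.rz(toDegrees(Math.PI / 2)); // 90 degrees around the z axis
```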

The rotateOut() function rotates the text group back out from 0 to 140 over two seconds, then triggers rotateIn again. Since each function triggers the other they will continue to ping pong back and forth forever, pulling in a new headline each time. Notice the delay() call. This will make the animation wait five seconds before starting.

        function rotateOut() {

            textgroup.ry.anim().delay(5000).from(0).to(140).dur(2000).then(rotateIn).start();
        }

Finally we can start the whole shebang off by calling rotateIn the first time.

        rotateIn();

What we have so far will work just fine but it’s a little boring because the background is pure black. Let’s add a few subtly moving rectangles in the background.

First we will create the three rectangles. They each fill the screen and are 50% translucent, in red, green, and blue.

        //three rects that fill the screen: red, green, blue.  50% translucent
        var rect1 = new amino.Rect().w(sw).h(sh).opacity(0.5).fill("#ff0000");
        var rect2 = new amino.Rect().w(sw).h(sh).opacity(0.5).fill("#00ff00");
        var rect3 = new amino.Rect().w(sw).h(sh).opacity(0.5).fill("#0000ff");
        bg.add(rect1,rect2,rect3);

Now let’s move the two back rectangles off the left edge of the screen.

        //animate the back two rects
        rect1.x(-1000);
        rect2.x(-1000);

Finally we can slide them from left to right and back. Notice that these animations set loop to -1 and autoreverse to true. The loop count sets how many times the animation will run; -1 makes it run forever. The autoreverse property makes the animation alternate direction each time: rather than going from left to right and starting over at the left again, it will go left to right, then right to left. Finally, the second animation has a five second delay, which staggers the two animations so they will always be in different places. Since all three rectangles are translucent, the colors will continually mix and change as the rectangles slide back and forth.

        rect1.x.anim().from(-1000).to(1000).dur(5000)
            .loop(-1).autoreverse(true).start();
        rect2.x.anim().from(-1000).to(1000).dur(3000)
            .loop(-1).autoreverse(true).delay(5000).start();

Here’s what it finally looks like. Of course a still picture can’t do justice to the real thing.

image

image

This is the second blog in a series about Amino, a Javascript OpenGL library for the Raspberry Pi. The first post is here.

This week we will build a digital photo frame. A Raspberry Pi is perfect for this task because it plugs directly into the back of a flat screen TV through HDMI. Just give it power and network and you are ready to go.

Last week I talked about the new Amino API built around properties. Several people commented that I didn’t say how to actually get and run Amino. Very good point! Let’s kick things off with an install-fest. These instructions assume you are running Raspbian, though pretty much any Linux distro should work.

Amino is fundamentally a Node JS library, so first you’ll need Node itself. Fortunately, installing Node is far easier than it used to be. In brief: update your system with apt-get, download and unzip Node from nodejs.org, and add node and npm to your path. Verify the installation with npm --version. I wrote full instructions here.

Amino uses some native code, so you’ll need node-gyp and GCC. Verify GCC is installed with gcc --version. Install node-gyp with npm install -g node-gyp.

Now we can checkout and compile Amino. You’ll also need Git installed if you don’t have it.

git clone git@github.com:joshmarinacci/aminogfx.git
cd aminogfx
node-gyp clean configure --OS=raspberrypi build
npm install
node build desktop
export NODE_PATH=build/desktop
node tests/examples/simple.js

This will get the amino source, build the native parts, then build the Javascript parts. When you run node tests/examples/simple.js you should see this:

image

Now let’s build a photo slideshow. The app will scan a directory for images, then loop through the photos on screen. It will slide the photos to the left using two ImageViews, one for the outgoing image and one for the incoming, then swap them. First we need to import the required modules.

var amino = require('amino.js');
var fs = require('fs');
var Group = require('amino').Group;
var ImageView = require('amino').ImageView;

Technically you could call amino.Group() instead of importing Group separately, but it makes for less typing later on.

Now let’s check that the user specified an input directory. If so, then we can get a list of images to load.

if(process.argv.length < 3) {
    console.log("you must provide a directory to use");
    return;
}

var dir = process.argv[2];
var filelist = fs.readdirSync(dir).map(function(file) {
    return dir+'/'+file;
});

So far this is all straightforward Node stuff. Since we are going to loop through the photos over and over again we will need an index to increment through the array. When the index reaches the end it should wrap around to the beginning, and handle the case when new images are added to the array. Since this is a common operation I created a utility object with a single function: next(). Each time we call next it will return the next image, automatically wrapping around.

function CircularBuffer(arr) {
    this.arr = arr;
    this.index = -1;
    this.next = function() {
        this.index = (this.index+1)%this.arr.length;
        return this.arr[this.index];
    }
}

//wrap files in a circular buffer
var files = new CircularBuffer(filelist);

Now let’s set up a scene in Amino. To make sure the threading is handled properly you must always pass a setup function to amino.start(). It will set up the graphics system and then give you a reference to the core object and a stage, which is the window you can draw in. (Technically it’s the contents of the window, not the window itself.)

amino.start(function(core, stage) {
    stage.setSize(800,600);

    var root = new Group();
    stage.setRoot(root);


    var sw = stage.getW();
    var sh = stage.getH();

    //create two image views
    var iv1 = new ImageView().x(0);
    var iv2 = new ImageView().x(1000);

    //add to the scene
    root.add(iv1,iv2);
…
});

The setup function above sets the size of the stage and creates a Group to use as the root of the scene. Within the root it adds two image views, iv1 and iv2.

The images may not be the same size as the screen so we must scale them. However, we can only scale once we know how big the images will be. Furthermore, the image view will hold different images as we loop through them, so we really want to recalculate the scale every time a new image is set. To do this, we will watch for changes to the image property of the image view like this.

    //auto scale them with this function
    function scaleImage(img,prop,obj) {
        var scale = Math.min(sw/img.w,sh/img.h);
        obj.sx(scale).sy(scale);
    }
     // call scaleImage whenever the image property changes
    iv1.image.watch(scaleImage);
    iv2.image.watch(scaleImage);

    //load the first two images
    iv1.src(files.next());
    iv2.src(files.next());

Now that we have two images we can animate them. Sliding images to the left is as simple as animating their x property. This code will animate the x position of iv1 over 3 seconds, starting at 0 and going to -sw. This will slide the image off the screen to the left.

iv1.x.anim().delay(1000).from(0).to(-sw).dur(3000).start();

To slide the next image onto the screen we do the same thing for iv2,

iv2.x.anim().delay(1000).from(sw).to(0).dur(3000)

However, once the animation is done we want to swap the images and move them back, so let’s add a then(afterAnim) function call. This will invoke afterAnim once the second animation is done. The final call in the chain is to the start() function. Until start is called nothing will actually be animated.

    //animate out and in
    function swap() {
        iv1.x.anim().delay(1000).from(0).to(-sw).dur(3000).start();
        iv2.x.anim().delay(1000).from(sw).to(0).dur(3000)
            .then(afterAnim).start();
    }
     //kick off the loop
    swap();

The afterAnim function moves the ImageViews back to their original positions and moves the image from iv2 to iv1. Since this happens between frames the viewer will never notice anything has changed. Finally it sets the source of iv2 to the next image and calls the swap() function to loop again.

    function afterAnim() {
        //swap images and move views back in place
        iv1.x(0);
        iv2.x(sw);
        iv1.image(iv2.image());
        // load the next image
        iv2.src(files.next());
        //recurse
        swap();
    }

A note on something a bit subtle. The src of an ImageView is a string, either a URL or a file path, which refers to the image. The image property of an ImageView is the actual in-memory image. When you set src to a new value the ImageView will automatically load it, then set the image property. That’s why we added a watch function to iv1.image, not iv1.src.

Now let’s run it. The last argument is the path to a directory containing some JPGs or PNGs.

node demos/slideshow/slideshow.js demos/slideshow/images

If everything goes well you should see something like this:

image

By default, animations will use a cubic interpolator so the images will start moving slowly, speed up, then slow down again when they reach the end of the transition. This looks nicer than a straight linear interpolation.
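The difference between the two is easy to see in the interpolation curves themselves. The cubic shown here is the classic smoothstep polynomial; Amino’s exact easing curve may differ, but the shape is the same idea:

```javascript
// t runs from 0 (animation start) to 1 (animation end);
// the return value is the fraction of the distance covered so far.
function linear(t) { return t; }
function cubicEase(t) { return t * t * (3 - 2 * t); } // classic smoothstep

// Both cover half the distance at the midpoint, but near the edges
// the cubic moves much more slowly, which is what makes it look smooth:
// cubicEase(0.1) is about 0.028, versus linear(0.1) at 0.1
```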

So that’s it. A nice smooth slideshow in about 80 lines of code. By removing comments and utility functions we could get it under 40, but this longer version is easier to read.

Here is the final complete code. It is also in the git repo under demos/slideshow.

var amino = require('amino.js');
var fs = require('fs');
var Group = require('amino').Group;
var ImageView = require('amino').ImageView;

if(process.argv.length < 3) {
    console.log("you must provide a directory to use");
    return;
}

var dir = process.argv[2];
var filelist = fs.readdirSync(dir).map(function(file) {
    return dir+'/'+file;
});


function CircularBuffer(arr) {
    this.arr = arr;
    this.index = -1;
    this.next = function() {
        this.index = (this.index+1)%this.arr.length;
        return this.arr[this.index];
    }
}

//wrap files in a circular buffer
var files = new CircularBuffer(filelist);


amino.start(function(core, stage) {
    stage.setSize(800,600);

    var root = new Group();
    stage.setRoot(root);


    var sw = stage.getW();
    var sh = stage.getH();

    //create two image views
    var iv1 = new ImageView().x(0);
    var iv2 = new ImageView().x(1000);

    //add to the scene
    root.add(iv1,iv2);

    //auto scale them
    function scaleImage(img,prop,obj) {
        var scale = Math.min(sw/img.w,sh/img.h);
        obj.sx(scale).sy(scale);
    }
    iv1.image.watch(scaleImage);
    iv2.image.watch(scaleImage);

    //load the first two images
    iv1.src(files.next());
    iv2.src(files.next());



    //animate out and in
    function swap() {
        iv1.x.anim().delay(1000).from(0).to(-sw).dur(3000).start();
        iv2.x.anim().delay(1000).from(sw).to(0).dur(3000)
            .then(afterAnim).start();
    }
    swap();

    function afterAnim() {
        //swap images and move views back in place
        iv1.x(0);
        iv2.x(sw);
        iv1.image(iv2.image());
        iv2.src(files.next());
        //recurse
        swap();
    }

});

Thank you and stay tuned for more Amino examples on my blog.

Amino repo

I’ve been working on Amino, my graphics library, for several years now. I’ve ported it from pure Java, to JavaScript, to a complex custom-language generator system (I was really into code-gen two years ago), and back to JS. It has accreted features and bloat. And yet, through all that time, even with blog posts and the goamino.org website, I don’t think anyone but me has ever used it. I had accepted this fact and continued tweaking it to meet my personal needs; satisfied that I was creating something that lets me build other useful things. Until earlier this year.

OSCON

In January I thought to submit a session to OSCON entitled Data Dashboards with Amino, NodeJS, and the Raspberry Pi. The concept was simple: Raspberry Pis are cheap but have a surprisingly powerful GPU. Flat screen TVs are also cheap; I can get a 32in model at Costco for $200. Combine them with a wall mount and you have a cheap data dashboard. Much to my shock, the talk was accepted.

The session at OSCON was very well attended, proving there is clearly interest in Amino, at least on the Raspberry Pi. The demos I was able to pull off for the talk show that Amino is powerful enough to really push the Pi. My final example was an over-the-top futuristic dashboard of 'Awesomonium Levels', clearly at home in every super villain’s lair. If Amino can pull this off then it’s found its niche. X Windows and browsers are so slow on the Pi that people are willing to use something different.

globe

Refactoring

However, Amino still needs some work. While putting the demos together for my session I noticed how inefficient the API is. I’ve been working on Amino in various forms for at least three years, so the API patterns were set quite a while ago. The objects full of getters and setters clearly reflect my previous Java background. Not only have I improved my JavaScript skills since then, I have read a lot about functional programming styles lately (book reports coming soon). This let me finally see ways to improve the API.

Much like any other UI toolkit, the core of the Amino API has always been a tree of nodes. Architecturally there are actually two trees, the Javascript one you interact with and the native one that actually makes the OpenGL calls; however the native one is generally hidden away.

Since I came from a Java background I started with an object full of properties accessed with getters and setters. While this works, the syntax is annoying. You have to type the extra get/set words and remember to camel case the property names. Is the font name set with setFontName or setFontname? Because the getter and setter functions were just functions there was no place to access the property as a single object. This means other property functions have to be controlled with a separate API. To animate a property you must refer to it indirectly with a string, like this:

var rect = new amino.ProtoRect().setX(5);
var anim = core.createPropAnim(rect,'x',0,100,1000);

Not only is the animation configured through a separate object (core) but you have to remember the exact order of the various parameters for starting and end values, duration, and the property name. Amino needs a more fluent API.

Enter Super Properties

After playing with Javascript native getters and setters for a bit (which I’ve determined have no real use) I started looking at the JQuery API. A property can be represented by a single function which both gets and sets depending on the arguments. Since functions in Javascript are also objects, we can attach extra functionality to the function itself. Magic behavior like binding and animation. The property itself becomes the natural place to put this behavior. I call these super properties. Here’s what they look like.

To get the x property of a rectangle

rect.x()

to set the x property of a rectangle:

rect.x(5);

the setter returns a reference to the parent object, so super properties are chainable:

rect.x(5).y(5).w(5);

This syntax is already more compact than the old one:

rect.setX(5).setY(5).setWidth(5);

The property accessor is also an object with its own set of listeners. If I want to know when a property changes I can watch it like this:

rect.x.watch(function(val) {
     console.log("x changed to "+val);
});

Notice that I am referring to the accessor as an object, without the parentheses.
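Under the hood this pattern is just a closure with extras hung off the function object. Here’s a minimal sketch of the idea; this is an illustration of the pattern, not Amino’s actual implementation, and makeProp is a name I made up:

```javascript
function makeProp(owner, initial) {
    var value = initial;
    var listeners = [];
    var prop = function(v) {
        if (arguments.length === 0) return value;  // no args: getter
        value = v;                                 // one arg: setter
        listeners.forEach(function(l) { l(value); });
        return owner;                              // return the owner so calls chain
    };
    // extra behavior lives on the function object itself
    prop.watch = function(cb) { listeners.push(cb); };
    return prop;
}

var rect = {};
rect.x = makeProp(rect, 0);
rect.y = makeProp(rect, 0);
rect.x(5).y(10);   // chained set
rect.x();          // get: returns 5
```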

Now that we can watch for variable changes we can also bind them together.

rect.x.bindto(otherRect.x);

If we combine binding with a modifier function, then properties become very powerful. To make rect.x always be the value of otherRect.x plus 10:

rect.x.bindto(otherRect.x, function(val) {
     return val + 10;
});

Modifier functions can be used to convert types as well. Let’s format a string based on a number:

label.text.bindto(rect.x, function(val) {
     return "The value is " + val;
});
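Under the hood, bindto can be little more than a watch listener that pushes each new value through the optional modifier. Here is a standalone sketch, assuming a property is a getter/setter function with a watch method as described above; this is illustrative, not Amino's source.

```javascript
// Hypothetical bindto: forward changes from a source property to a
// target property, optionally transformed. Illustrative only.
function bindto(target, source, modifier) {
    source.watch(function(val) {
        // pass the value through the modifier if one was supplied
        target(modifier ? modifier(val) : val);
    });
}
```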

Since JavaScript is a functional language we can improve this syntax with some meta-functions. This example creates a reusable string formatter.

function formatter(str) {
     return function(val) {
          return str.replace('%', val);
     };
}

label1.text.bindto(rect.x, formatter('the x value is %'));
label2.text.bindto(rect.y, formatter('the y value is %'));

Taking a page out of jQuery's book, I added a find function to the Group object. It returns a selection which proxies the properties to the underlying objects. This lets me manipulate multiple objects at once.

Suppose I have a group with ten rectangles. Each has a different position but they should all be the same size and color:

group.find('Rect').w(20).h(30).fill('#00FF00');

Soon Amino will support searching by CSS class and ID selectors.
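A selection like the one find returns can be sketched as a thin proxy object. The makeSelection helper and the explicit property list here are invented for illustration; the real implementation would build the proxies from the matched nodes' own properties.

```javascript
// Illustrative sketch of a selection proxy; makeSelection is an
// invented helper, not Amino's API.
function makeSelection(nodes, propNames) {
    var sel = {};
    propNames.forEach(function(name) {
        sel[name] = function(v) {
            nodes.forEach(function(n) { n[name](v); }); // forward to every node
            return sel;                                 // keep the chain going
        };
    });
    return sel;
}
```

With this, makeSelection(rects, ['w', 'h', 'fill']).w(20).h(30).fill('#00FF00') behaves like the group.find example above.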

Animation

Let's get back to animation for a second. The old syntax was like this:

var rect = new amino.ProtoRect().setX(5);
var anim = core.createPropAnim(rect, 'x', 0, 100, 1000);

Here is the new syntax:

var rect = new Rect().x(5);
rect.x.anim().from(0).to(100).dur(1000).start();

We don't need to pass in the object and property because the anim function is already attached to the property itself. Chaining the functions makes the syntax more natural. The from and dur functions are optional. If you don't specify them, the animation will start from the current value of the property (which is usually what you want anyway) and use a default duration (1/4 second). Without those it looks like:

rect.x.anim().to(100).start();

Using a start function makes the animation behavior more explicit. If you don’t call start then the animation doesn’t start. This lets you set up and save a reference to the animation to be used later.

var anim = rect.x.anim().from(-100).to(100).dur(1000).loop(5);
//some time later
anim.start();

Delayed start also lets us add more complex animation control in the future, like chained or parallel animations:

Anim.parallel([
     rect.x.anim().to(1000),
     circle.radius.anim().to(50),
     group.y.anim().from(50).to(100)
]).start();
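A parallel combinator like that can be very small. Here is one possible sketch, under the assumption that every animation object exposes a start function; Anim.parallel is a future direction, not a shipped Amino API.

```javascript
// Sketch of a parallel combinator over animations; assumes each
// animation object has a start() method. Illustrative only.
var Anim = {
    parallel: function(anims) {
        return {
            start: function() {
                anims.forEach(function(a) { a.start(); }); // kick them all off together
                return this;
            }
        };
    }
};
```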

I'm really happy with the new syntax. Simple functions built on a common pattern of objects and super properties. Not only does this make a nicer syntax, but the implementation is cleaner as well. I was able to delete about a third of Amino's JavaScript code! That's an all-round win!

Since this changes Amino so much I’ve started a new repo for it. The old amino is still available at:

https://github.com/joshmarinacci/aminolang

The new amino, the only one you should be using, is at:

https://github.com/joshmarinacci/aminogfx

That’s it for today. Next time I’ll show you how to build a full screen photo slideshow with the new animation API and a circular buffer. After that we’ll dig into 3D geometry and make a cool spinning globe.

I'm finally back from OSCON, and what a trip it was. Friend of the show wxl came with me to assist and experience the awesomeness that is OSCON. Over the next few days I'll be posting about the three sessions we taught and many, many sessions we attended. A splendid time is guaranteed for all. To kick things off, here is the code from my Amino talk.

Amino is my JavaScript graphics library I've been working on for a few years. Last year I started a big refactor to make it work nicely on the Raspberry Pi. Once we get X windows out of the way we can talk to the Pi's GPU using OpenGL ES. This makes things which were previously impossible, like 3D spinning globes, trivial on the Raspberry Pi.

For the talk I started by explaining how Node and Amino work on the Raspberry Pi, then showed some simple code to make a photo slideshow (in this case, Beatles album covers).

beatles

Next we showed a 3D rotating text RSS headline viewer.

rss viewer

And finally, using the same GeoJSON code from the D3/Canvas workshop, a proper rotating 3D globe.

globe

Hmm... Did you ever notice that the earth with just countries and no water looks a lot like the half-constructed Death Star in Return of the Jedi?

Of course, my dream has always been to create those cool crazy computer interfaces you see in sci-fi movies. You know, the ones with translucent graphs full of nonsense data and spinning globes. And even better, we made one run on the Raspberry Pi. Now you can always know the critical Awesomonium levels of your mining colony.

awesomonium

Source for the demos on DropBox.

I am happy to announce the 1.1 release of Amino, my open source JavaScript graphics library, is ready today. All tests are passing. The new site and docs are up. (Generated by a new tool that I'll describe later). Downloads activate! With the iPad 3 coming any day now I thought it would be good to take a look at what I've done to make Amino Retina Ready (™). Even if you don't have a retina screen it will improve your apps.

The Problem

But first, let's backtrack and talk for a second about how Canvas works. Canvas isn't really a part of the DOM. To the web browser it just looks like a rectangular image that can be updated. It's just pixels. This low level access is powerful but comes with some tradeoffs. Since the rest of the browser just sees a Canvas element as an image, it will scale the canvas as an image as well. If the user zooms the page then the canvas 'image' will be scaled up, resulting in pixelation. This is also a problem with Apple's hi-dpi retina screens. Though they have double the resolution of their non-retina counterparts, they still report the same screen sizes. Essentially everything on the page is given a 2x zoom, so your canvas isn't taking advantage of the full pixel density. (This isn't strictly true, but bear with me for a second). Finally, if you scale the canvas area directly using CSS transforms or a proportional size (like width:50%) then the canvas may change size over the lifetime of the page, again giving the user zoomed-in pixels. So how can we deal with this?

The Solution

Simple: we check if the real size of the canvas is different than the specified size. If it is then we can update the canvas size directly to match what is really on screen. The code looks like this:
if(canvas.width != canvas.clientWidth) { canvas.width = canvas.clientWidth; }
To deal with a retina display we just scale an extra 2x by checking for window.devicePixelRatio == 2. To tell when the user has changed the page by zooming or resizing we could hook up all sorts of event listeners, but I prefer to simply check on repaints since most things I do are animated. Of course we have to set the canvas height as well, which brings up the question: how should we scale things? If the canvas is uniformly scaled then you can calculate a ratio and multiply the height by that. If the canvas is *not* uniformly scaled, say because the width is set to a percentage but the height is not, then you can automatically scale to fit, or stretch it to fill the new size. In the end I found only a few combinations to actually be useful:
  • Do nothing. Don't adjust the canvas size or scaling at all.
  • Resize the canvas but don't mess with scaling. This essentially turns the canvas into a resizable rectangle, leaving it up to the content to adjust as desired.
  • Uniformly scale the content to fit.
To handle all of this I gave Amino two properties: Canvas.autoSize and Canvas.autoScale. autoSize controls whether the canvas should adapt at all. autoScale controls whether it will rescale the content. Amino will handle all of the math and detect retina displays. All you have to do is choose which option you want. I haven't tested this on IE yet (I still need to get the new Win 8 preview) but I have tested this on Firefox, Chrome, Safari and Mobile Safari on an iPhone 4S. Check out the tests here to see it in action. Be sure to check out the new Amino site and download 1.1 here.
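Folded into one function that runs on each repaint, the check described above might look like this. It's a sketch of the approach, not Amino's exact code: adaptCanvas and the autoScale flag are illustrative names, and the pixel ratio is passed in so that in a browser you would supply window.devicePixelRatio || 1.

```javascript
// Sketch of the per-repaint size check; adaptCanvas is an illustrative
// name. In the browser, ratio would be window.devicePixelRatio || 1.
function adaptCanvas(canvas, ctx, ratio, autoScale) {
    var w = canvas.clientWidth * ratio;
    var h = canvas.clientHeight * ratio;
    if(canvas.width !== w || canvas.height !== h) {
        canvas.width = w;   // resize the backing store to the real on-screen size
        canvas.height = h;
        if(autoScale) {
            // keep drawing in CSS pixels while rendering at full density
            ctx.setTransform(ratio, 0, 0, ratio, 0, 0);
        }
    }
}
```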

Amino 1.1 is on its way, and despite the small version number difference, the changes will be big. We are dropping Java support and heavily refactoring the JavaScript version.

First things first: I'm dropping support for Java. I have gotten essentially no downloads or feature requests for the Java version of Amino, which tells me that almost no one is using it. If you want to do desktop Java graphics then I suggest moving to JavaFX. It is well supported, has received many excellent updates in the past six months, and it's open source now. If I had known JavaFX was going to reboot with a pure Java version and great hardware acceleration then I probably wouldn't have started Amino to begin with. Leo itself will stay on the older version of Amino until I can get it ported, but no new features will go into the Java port. I highly suggest you check out JavaFX 2.1, now in beta. It even has WebKit integration.

Second, I have done a big refactor to the JavaScript port. The API will change only slightly. The big changes are under the hood in the way it handles animation and multiple canvases (canvii?). Now you can have a single core Amino engine per page that supports multiple canvases. They will all repaint quickly with minimal tearing while being as efficient as possible. This is a must on mobile devices, and in ebooks where multiple canvases per page are common.

Other stuff of note:

  • Bitmap Text: support the custom styled font output of Leonardo Sketch using dynamic bitmap sprites.
  • Animate DOM elements as well as shapes on the canvas.
  • Improvements to the animation api to support chaining, parallel animations, callbacks, and before/after actions.
  • Split into three files: core, shapes, and bitmap effects. Now you can do cool animation without including the scripts you don't need.
  • New API documentation.
  • Simple integration with Three.js for 3D objects with 2D canvas.
  • Touch events for mobile devices.

Not everything is in the beta build yet, but it will be coming soon. Please try it out and give me feedback.

Over the past few weeks I've done more experiments and improvements to my ebook prototype. I'm still not sure what I'm going to do with it once I'm all done, but it's been an educational exercise nonetheless. Here's what I've done so far:

I've reorganized the code and put it into a github repo. Everything I'm going to show you is available to use and fork from right here. Click on any screenshot below to see a live example.

Typography

If we are going to call this stuff books then we need good typography. Fortunately 2011 saw the widespread adoption of web fonts. We can now use any custom font we want in any webpage, which includes the iPad with the release of iOS 5. To that end I tried searching for some custom fonts that add a bit of flavor while still remaining readable. After this font test I mocked up a full page with better typography for reading. This included bigger font-size contrast, looser line height, a narrower column, auto-capitalizing the subheading at the top, and some inline images. Overall I'm happy with what can be done in pure CSS after just a few minutes' work.

Google ChromeScreenSnapz036

 

Pagination

When we talk about typography we also have to consider how the reader should view long form text. The world seems roughly split between scrolling and swiping between pages. I honestly hate page swiping unless the content really needs to be paginated (like a children's book). I did try some pagination experiments using CSS multi-columns but I'm not happy with the results. Perhaps with some more work they would be usable.

Google ChromeScreenSnapz038

 

Instead I've worked on scrolling. Scrolling feels very natural on a touch screen device, but you still need some static navigation to know where you are, switch pages, and view the table of contents. Fortunately CSS fixed positioning is pretty well supported these days, with IFRAMES as a useful backup, so it wasn't too hard.

 

Interactive Chart

I cooked up a simple bubble chart using Amino and some real data from the World Data Bank. You can use the slider to move time from 1960 to the present and see how the data changes. I think this type of interactive visualization will be very useful in ebooks.

Google ChromeScreenSnapz034

 

Interactive Code

To teach how to use a visual API like Canvas we should have a visual tool. We should be able to see what happens when we change variables, and actually see the code and canvas example change in real time. In my research I came across an amazing toolkit by Bret Victor called Tangle. Using that as a base I prototyped a simple tool for interactive text snippets. It rewrites any canvas javascript function you give it into a live example with formatted source. When you drag on one of the interactive variables (indicated in red) a popup appears to show you the value. As you drag left and right the value changes and the canvas updates to show the new result. This is the most direct way to learn what a function does.

Google ChromeScreenSnapz035

Code Wrapping

Another problem with code snippets is that they often are too long to fit on a single line. I could configure a PRE tag to wrap the long lines, but whitespace is usually significant in code. I don't want to create a situation where the reader thinks something is two lines when it is really one, such as a command line example they are trying to type in. Still, we need a way to view long lines. I played with various scrolling techniques but ultimately found they added more problems than they solved. Instead I found this great technique by [name] to creatively wrap lines. A wrapped line is shown indented with an arrow symbol to indicate it is wrapped from the previous line. This removes any ambiguity.

 

Google ChromeScreenSnapz037

 

Three Dee

The canonical example of what a digital book can do that a paper book can't is spinning an object around in 3D. I don't know how useful this will be in practice, but I wanted to prove it was possible. Using the amazing Three.js library I embedded a simple block that you can rotate using mouse drags or touch events. Three.js is really designed to be used with WebGL, which isn't supported yet on most mobile platforms, but it can also render with plain canvas. It's not as fast, of course, but for simple flat-shaded models it works well enough.

Google ChromeScreenSnapz039

 

Putting it all together

To really show off what this can do I put together a book demo using some real content from my HTML Canvas Deep Dive presentation last summer. It has a title page, table of contents (generated with a simple nodejs script), and two full chapters with code snippets, examples, and photo slideshows. I think the results speak for themselves.

PreviewScreenSnapz001

Next Steps

I don't really know what the next steps are. I'm going to finish up the canvas book (probably eight chapters by the time I'm done) and use PhoneGap to put it in the iPad, webOS, and Android app stores. After I've proven it's possible I'm not sure what to do next. I did this as an experiment to research the state of the art. I think what I've put together could become a great set of tools for building interactive ebooks since the markup you actually have to write is rather minimal. Let me know if you have a use for this code if I fully developed it.

 


Vacation and travel is over and I'm happy to say things are moving again. I'm feeling refreshed and I have a lot to share with you in 2012; starting with the new book I'm writing for O'Reilly! Read on, MacDuff.

The Book

I've been working on a new book for O'Reilly, tentatively titled Building Mobile Apps in Java. I mentioned it briefly on Twitter but haven't gone into the details before. It will show you how to use GWT and PhoneGap to build mobile apps. With these two open source technologies you can code in Java but target pretty much any mobile platform such as iOS, Android, and webOS.

The book will cover the technical aspects of using GWT & PhoneGap. It will also dive into how to design for mobile. Navigation and performance varies greatly across devices, so it's an important topic. Oh, and the last chapter will show you how to make a mobile platform game with real physics. Tons of fun.

Building Mobile Apps in Java will be an eBook about 60 pages long, available every where O'Reilly publishes their ebooks. Look for it in February or early March.

Open Source and Speaking

For 2012 I want to spend some time doing more actual design work. I'm planning a new hand built wordpress theme for my blog, including proper phone and tablet support. I also have a few art side projects that you'll get to see later in the year.

And speaking of design, I have significant new releases of Amino and Leonardo Sketch coming. If you are in the Portland area come to the January PDX-UX meeting, where I will be presenting how to do wireframing with Leonardo Sketch. I'll give a brief overview of Leo and show off some of the great export and collaboration features.

I will also be doing a 5 minute Ignite talk in Portland on February 9th about the future of ebooks and what a Hogwarts Textbook would look like.

Onward!

Finally, I plan to post both more and less on this blog. I used to do short posts on small topics or collections of links. I've found social networks better for that sort of thing, so I'll do that on Twitter and Google Plus from now on. Instead I want to use the blog for more long form content, such as my well-read HTML Canvas Deep Dive. Look for more long essays on canvas, app stores, and technology trends this year.

2012 is finally here!

 

 

When working on big projects I often create little projects to support the larger effort. Sometimes these little projects turn into something great on their own. It's time for me to tell you about one of them: AppBundler.

AppBundler is a small tool which packages up Java (client side) apps with a minimum of fuss. From a single app description it can generate Mac OSX .app bundles, Windows .EXE files, JNLPs (Java Web Start), double clickable jars; and as of yesterday evening: webOS apps! I started the project to support Leonardo Sketch but I think it's time for AppBundler to stand on its own.

Packaging Java apps has historically been an exercise in creative swearing. The JVM provides no packaging mechanism other than double clickable jars, which are limited and feel nothing like native apps. Mac and Windows have their own executable formats that involve native code, and Sun has never provided tools to support them. Java Web Start was supposed to solve this, but it never took off the way the creators hoped and has its own idiosyncrasies. Long term we will have more and more environments with Java available but with different native package systems. Add in native libs, file extension registration, and other metadata; and now you've got a real problem. After hacking Ant files for years to deal with the issue I decided it was finally time to encode my build scripts and Java lore into a new tool that will solve the issue once and for all. Thus AppBundler was born.

How it works

You create a simple XML descriptor file for your application. It lists the jars that make up your app along with some metadata like the App name and main class. It can optionally include icons, file extensions, and links to native libraries.

<?xml version="1.0" encoding="UTF-8"?>
<app name="Amino Particles"> 
   <jar name="Amino2.jar"/> 
   <jar name="amino_sdl.jar"/> 
   <jar name="examples.jar" 
         main-class="com.joshondesign.amino.examples.Particles"/> 
   <property name="com.joshondesign.amino.impl" value="sdl"/> 
   <native name="sdl"/> 
</app> 

Then you run AppBundler on this file from the command line, along with a list of directories where the jars can be found. In most apps you have a single directory with all of your jars, plus the app jar itself, so you usually only need to list two directories. You also specify which output format you want, or --all for all of them. Here's what it looks like called from an Ant script (the command line arguments would be the same).

<java classpath="lib/AppBundler.jar;lib/XMLLib.jar"
      classname="com.joshondesign.appbundler.Bundler" fork="true">
    <arg value="--file=myapp.xml"/>
    <arg value="--target=mac"/>
    <arg value="--outdir=dist/"/>
    <arg value="--jardir=lib/"/>
    <arg value="--jardir=build/jars/"/>
</java>
AppBundler will then generate the executable for each output format.

What it does

For Mac it will create a .APP bundle containing your jars, then include a copy of the JavaApplicationStub and generate the correct Info.plist files (Mac specific metadata files), and finally zip up the bundle. For Windows it uses JSmooth to generate a .EXE file with an icon and class path. For Java WebStart it will generate the JNLP file and copy over the jars. For double click jar files it will actually squish all of your jars together into a single jar with the right META-INF files. And all of the above works with native libraries like JOGL too. For each target it will set the correct library paths and do tricky things like decompress native libs into temp directories. Registering for file extensions and requesting JREs mostly works.

What about webOS?

All of the platforms except webOS ship with a JVM or one can be auto-installed on demand (the Windows EXEs do this). There is no such option for webOS, however. webOS has a high level HTML 5 based SDK and a low level C based PDK. To run Java on webOS you'd have to provide your own JVM and class libraries, so that's exactly what I've done. The full OpenJDK would be too big to run on a lightweight tablet, and a port would take a team of people months to do. Instead I cross compiled the amazing open source JVM Avian to ARM. Avian was meant to be embedded and already has an ARM port, so compiling it was a snap. Avian can use the full OpenJDK runtime, but it also comes with its own minimal classpath.jar that provides the bare minimum needed to run Java code. Using the smaller runtime meant we wouldn't have a GUI like Swing, but using Swing would require months of AWT porting anyway, which I wasn't interested in doing. Instead I created a new set of Java bindings to SDL (Simple DirectMedia Layer), a low level graphics API available on pretty much every platform. Then I created a port of Amino (my 2D scene graph library) to run on top of SDL. It sounds complicated (and it was, actually), but the scripts hide the complexity. The end result is a set of tools to let you write graphical apps with Java on webOS. Amino already has ports to Java2D and HTML 5 Canvas (and OpenGL is in the works), so you can easily create cross platform graphics apps. And now with AppBundler you can easily package them as well. Interestingly, Avian runs nicely on desktops, so putting Java apps into the Mac App Store might now be possible. There are already some enterprising developers trying to get Avian working on iOS.

How you can help.

While functional, I consider AppBundler to be in an alpha state. There are lots of things that need work. In particular it needs Linux support (generate rpms or debs?) and a real Ant task instead of the Java exec commands you see above. I would also like it to be included in Maven and any other useful repo. And as a final request (as long as I have you here), I need some servers to run build tests on. I already have Hudson running on a Linux server. I'd love it if someone could run a Hudson slave for me on their Windows or Mac server. And of course we need lots of testing and bug fixes. If you are interested please join the mailing list.

Client Java Freedom

AppBundler is another facet of my efforts to help Java provide a good user experience everywhere. Apps should always feel native, and that includes the installation and startup experience. I've used AppBundler to provide native installs of Leonardo Sketch on every desktop platform. I hope AppBundler will help you do the same. Enjoy! -Josh


After several months of work, nestled in between getting webOS 3.0 out the door and prepping the nursery for the pending arrival of my first child, I am happy to announce the release of Amino 1.0. I have been eagerly following the development of HTML 5 Canvas support in the major browsers as well as ensuring the HP TouchPad will have great support for it. Amino is a great way to use the power of Canvas in modern mobile and web applications.

FirefoxScreenSnapz075.png

What is Amino?

Amino is a small scene graph library for both JavaScript and Java, letting you embed smooth animated graphics in webpages with the Canvas tag and in desktop Java applications with a custom JComponent. Amino provides a simple API to do common tasks, while still being extensible. For example, to create a rectangle that loops back and forth in a webpage, just do this:

//setup
var runner = new Runner();
runner.setCanvas(document.getElementById('canvas'));
//create a rect filled with red and a black 5px border
var r = new Rect()
    .set(10,20,50,50)
    .setFill("red")
    .setStroke("black")
    .setStrokeWidth(5);
runner.setRoot(r);
//animate r.x from 0 -> 300 over 5.5 seconds, and repeat
runner.addAnim(new PropAnim(r, "x", 0, 300, 5.5).setLoop(true));
//kick off the animation
runner.start();

See the results on the Amino homepage here.
PreviewScreenSnapz011.png

What can it do?

Amino can draw basic shapes (rects, circles, paths) and animate them using properties (width goes from 10 to 20 over 3.2 seconds) and callbacks. It can also buffer images to speed up common operations, manage varying framerates on different devices, and do Photoshop-like image filtering (brightness, contrast, saturation). And finally, Amino supports keyboard and mouse events on nodes within the scene, not just on DOM elements. In short, it's a portable scene graph for making interactive applications. Amino handles the hard work of processing input and scheduling the drawing to ensure a fast, consistent framerate.

How about some examples?

I'm glad you asked. I've put together a couple of examples to show the variety of things you could do with Amino.

PlanetRover is a simple multilevel sidescroller game with jumping and collision detection.

FirefoxScreenSnapz076.png

Big Fish, Little Fish is a page from a hypothetical children's ebook, showing how text can be enriched with animation, images, and custom fonts.

FirefoxScreenSnapz075.png

This LineChart component is a super easy way to render graphical data in the browser with a minimal API.

FirefoxScreenSnapz074.png

These examples and more are available on the Amino Gallery Page.

How do I get it? What's the license?

Amino is fully open source with the BSD license. You can download the source and binaries from here, or link directly to amino-1.0b.js from here in your app with a script tag.

If you'd like to contribute, or just want to let us know the cool stuff you are doing with Amino, please join the developer list or send an email to joshua at marinacci dot org

Another month has gone by with no update to Leonardo, or a real release of Amino. It's interesting how life changes. When I started these projects last summer I had no idea Jen and I would be having a baby in a month, nor did I truly have any notion of how much my life would change. Everyone always says having children will change your life, but you never really understand it until you do it yourself, and our journey has just begun.

So, the upshot of all this rambling is that kids take time, and when you have to distribute a finite resource between multiple buckets, something has to get less. Sadly this time the short straw goes to my open source projects. It doesn't mean I won't work on them anymore, just at a slower pace. However, in order to feel at peace with myself I need to leave them in a state where they can still progress without my large time commitment. That's what this post is about.

I've spent the last year working on two main open source projects called Leonardo and Amino. Quick recap: Amino is a scene graph library for Java and JavaScript. Leonardo is a vector drawing program built on top of Amino. I want to get them both to a state where they are stable, useful, and can live on their own. Hopefully more of my job will be driving the community and integrating patches rather than directly coding everything. Every project reaches a point where it should stop being driven by a singular vision, and instead be driven by the needs of actual users (the good projects, anyway). Now is that time. Time to focus on gaining adoption, growing a community, and making these projects rock-freaking-solid.

Concrete Plans

Amino

Amino basically works. Shapes are animated and drawn on screen, input events are properly transformed, and it's got pixel effects on both the Java and JavaScript sides. Next come speed, efficiency, and features driven by actual use.

Amino finally has its own website at GoAmino.org and I've set up auto-builds for both the Java and JavaScript versions. They also have a redesigned API doc template that you can see here. Last steps before a 1.0 release: bug fixes for Mobile Safari and Firefox, more demos, and a tutorial for integrating with Swing apps. (Oh, and if someone has a nice spring easing function, that would be swell.) Target: next weekend.

Leonardo

It's basically done. It lets you draw vector art of shapes, images, and paths; and also create attractive presentations (which is just multiple pages of vector art). Now comes polish and adoption and export features. I suspect the value will really be in the export features so I need to know from you guys: what do you want?

In concrete terms I have a bunch of old bugs to fix and will finish the redesigned fill picker (colors, swatches, gradients, patterns, etc.) I also need your help updating the translations. Once that's done I'll clean up the website and cut a 1.0 release. Target: end of April.

Next Steps

In short, there's a lot of work for the next few weeks, but with luck (and hopefully some great feedback from you), both Amino and Leonardo will be just fine.

In today's post I'll dive into Amino's new buffering support. At the end we'll talk about new API docs for Amino, the roadmap, and a request for help with a domain name.

A big part of making a scenegraph fast is using lots of little tricks to do as little work as possible. Most of these tricks are decades old, but they still work. What makes a scenegraph good is letting developers easily use these tricks without having to code up anything special.

Dirty rect tracking is one such trick, but I haven't implemented it in Amino yet so I'll cover that in a few weeks. Another common trick is using intermediate buffers to store effects that are expensive to compute, such as blurs, shadows, and Photoshop style adjustments. The beauty of buffers is that you can do pretty much any crazy thing you can think of, as long as you can figure out how to draw it into a buffer first. A good third of Swing Hacks, the book Chris Adamson and I wrote, is just different clever ways of using buffers.

Given the importance of buffers I made this a central feature of Amino. But before I go any further, how about a demo!

Zoom Effect

First, an MP3 style Visualizer. I say MP3 style because it's not actually working with audio. I'm just generating random data then pushing it through a simple buffer effect: draw into buffer1, copy buffer1 into buffer2 with stretching, flip buffers and repeat. It's a simple technique but if you do it over and over the results are very cool.

Google ChromeScreenSnapz020.png
MP3 Visualizer. Click to view

On a modern browser you should get a solid 60fps. BTW, a quick shout out to Internet Explorer 9. The guys at MS have done a top notch job. Amino runs beautifully there, no matter what I threw at it.

The Buffer API

Buffering is broken up into two parts. First is the actual Buffer object, which is a bitmap with a fixed width and height that you can draw into and can be drawn into other buffers or the screen. In the Java version of Amino this is a wrapper around BufferedImage. In the JavaScript version Amino creates an offscreen canvas object to use as a buffer.

The simplest use for a Buffer is in the BufferNode, which just draws its child into the internal buffer for safekeeping. If the child hasn't changed by the next drawing pass then it will draw from the buffer instead. This is the simplest use case, but very important because you can draw a bunch of complex stuff and save it by simply wrapping it in a buffer. Here's a quick example:

//create a group with 1000 child rects
var g = new Group();
for(var i=0; i<1000; i++) {
    g.add(new Rect().set(i,i,50,50));
}

//wrap the group in a BufferNode
var b = new BufferNode(g);
runner.setRoot(b);

The code above creates a group with a thousand rectangles. This will probably be slow to draw, but by wrapping it in a buffer it's only drawn once and then saved for later. Now the rest of your scene can draw at full speed.

Real Time Photo Adjustments

The next big use of buffers is for special effects like Photoshop filters. As of today Amino has effects for blur, shadow, and photo adjustment (saturation, brightness, contrast). Each of these effects uses one or more buffers internally to manipulate the pixels before drawing to the screen. Blurring is a big topic, so I'll cover that in its own blog post later. Today I'll cover the photo adjustment.

Adjusting the saturation, brightness, and contrast of an image is actually pretty simple. It's just basic math and a lot of copying:

  • create two buffers
  • draw your photo into buffer 1
  • loop over every pixel in buffer 1
  • pull out the red, green, and blue components of that pixel's color
  • calculate a new red, green, and blue using some math
  • set the same pixel in buffer 2 using the new calculated color

For brightness, contrast, and desaturation the equations are:

//brightness
new color = old color + brightness
//contrast
new color = (old color - 0.5)*contrast + 0.5
//desaturation
new color = old.red*0.21 + old.green*0.71 + old.blue*0.07

I won't bore you with the details of actually extracting the components with hex math and stuffing them back into the new pixels (well, maybe I will in a later blog on canvas performance). The point is you can do lots of effects with simple math.
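The per-pixel math above can be sketched like this. To be clear, this is an illustration of the equations, not Amino's actual implementation: it works on plain `{r,g,b}` objects with 0.0-1.0 float components rather than canvas ImageData, and the function names are made up for the example.

```javascript
// Keep a color component in the 0.0 - 1.0 range.
function clamp(v) { return Math.max(0, Math.min(1, v)); }

// Apply the desaturation, contrast, and brightness equations to one pixel.
// saturation=1, contrast=1, brightness=0 leaves the pixel unchanged.
function adjustPixel(px, brightness, contrast, saturation) {
    // grayscale value, weighted for perceived luminance
    var gray = px.r*0.21 + px.g*0.71 + px.b*0.07;
    function channel(c) {
        c = c + (gray - c) * (1 - saturation);   // blend toward gray
        c = (c - 0.5) * contrast + 0.5;          // contrast around midpoint
        c = c + brightness;                      // brightness offset
        return clamp(c);
    }
    return { r: channel(px.r), g: channel(px.g), b: channel(px.b) };
}

// Copy buffer 1 into buffer 2, adjusting each pixel along the way.
function adjustBuffer(buffer1, brightness, contrast, saturation) {
    return buffer1.map(function(px) {
        return adjustPixel(px, brightness, contrast, saturation);
    });
}
```

In the real thing you'd pull the components out of ImageData bytes (0-255) and stuff the results back in, but the math in the middle is the same.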

The challenge with code like the above is that it may still be too slow for real-time work. Remember, the goal of Amino is a rock solid 60fps on a desktop browser and 30fps on a mobile device. To keep our framerate promise we need a way to do some work without blocking the UI. That means background threads.

Background Threads, Sorta

On the Java side we can use real background threads for compute-intensive effects, though honestly Java is fast enough for most cases that it hasn't been necessary yet. Canvas in most browsers, however, is slow enough that we can't process an entire large picture (say 512x512) in a single frame. Unfortunately, JavaScript doesn't really have threads, at least not until the forthcoming Worker API is released. So on to our backup plan: cheat.

We are allowed to do some work on the GUI thread as long as we don't take too much time. The solution: break the work into small chunks and distribute them across multiple frames. It won't be quite as 'realtime' but it still allows us to do these effects in the browser without slowing down the UI.

Amino now has an internal class called WorkTile which defines a subset of the full image to be processed. Right now it's set to 32x32 pixels. Once the effect starts it will process one WorkTile at a time until it runs out of time for this frame (currently set to 1000/40 ms). When the next frame arrives it will process a few more WorkTiles until it runs out of time again. After enough frames the image will be completely processed and saved into the final buffer, and the work is terminated. Voila: 'background' processing of images in a browser without blocking the UI.
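The WorkTile idea boils down to a queue of tiles and a per-frame time budget. Here's a sketch of that scheme; the names and structure are illustrative, not Amino's actual internals.

```javascript
// Per-frame time budget, matching the 1000/40 ms mentioned above.
var FRAME_BUDGET_MS = 1000/40;

// Split a width x height image into a queue of tiles.
function makeTiles(width, height, tileSize) {
    var tiles = [];
    for (var y = 0; y < height; y += tileSize) {
        for (var x = 0; x < width; x += tileSize) {
            tiles.push({ x: x, y: y, size: tileSize });
        }
    }
    return tiles;
}

// Process queued tiles until this frame's budget is spent.
// Returns true once every tile has been processed.
function processFrame(tiles, processTile) {
    var start = Date.now();
    while (tiles.length > 0 && (Date.now() - start) < FRAME_BUDGET_MS) {
        processTile(tiles.shift());
    }
    return tiles.length === 0;
}
```

In a browser you'd call `processFrame` from the drawing loop each frame until it returns true, at which point the finished buffer is ready to draw.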

Currently only BackgroundSaturationNode uses this new worker system, but eventually all effects will use it. Here's a demo that changes the saturation, brightness, and contrast of a satellite image of Venus. Click the image to go to the demo page.

Photo Adjustment. Click to run.

API Docs, Website, and Roadmap

Along with the buffering work I've recently written a new doc tool for Amino. I wanted something that would work on both Java and JavaScript code and didn't have the ugly legacy of classic javadoc. It's still a work in progress but I'm happy with the results so far. It's a new design where classes are grouped by category instead of package. Feedback is very welcome.

I think Amino is getting close to a real beta release. It's pretty stable in the major browsers and on mobile devices that support canvas (I haven't tested Android yet, but TouchPad and iPad work great). I have a bit more work to do on events, fills, and animation but once those are done we'll be ready for a 1.0 release.

Now the big question: where to put all of this? I think Amino deserves its own domain, so I'd like your help picking one. Please tweet your suggestions to @joshmarinacci.

That's it for this week. Thanks guys. I think Amino is shaping up to be a rockin' scenegraph library.