SparqEE's Cell is a just-launched Kickstarter project to build a GSM data module ready to integrate with Arduino, Raspberry Pi, or any other embedded hardware kit. In addition to the cellular board they are also offering SIM cards with discounted data service, a first among the projects I've looked at.

Chris Higgins, one of the SparqEE founders, graciously agreed to answer a few questions about the project and their vision for how the Cell will be used. Enjoy!

Josh: Hi Chris. Before we talk about the Kickstarter project and the Cell, could you tell me a bit about your company? Where did the name come from, and where are you located?

Chris: SparqEE is a Southern California company that is “of the people, by the people, for the people,” if I may borrow those words from President Lincoln. We created SparqEE to make great technology that helps the world, but also to change the game. In our society, money is hoarded by a few when it's the workers who drive companies, so SparqEE shares all equity with our team. Anyone who wants to be involved and works hard will get a slice of the pie.

My seven years in the defense industry were very demoralizing because even though I always delivered, successfully ran projects, attained patents, and innovated continuously, my income was fixed. Now, with SparqEE, the harder we work the more benefits we reap.

Josh: How did you found it and find your co-conspirators?

Chris: It started back in 2009. I was running a transportation-based R&D project and met my co-founder while meeting with the University of California, Riverside's Center for Environmental Research and Technology (CE-CERT) team. We immediately recognized each other's talent, had a phone conversation about working together, and then in early 2010 made it official and started SparqEE.

Both co-founders are electrical engineers with heavy backgrounds in programming, hardware, and networking, so we wanted to write that into our name, with “Spark” relating to electricity, the blood of technology.

Now our team has expanded and includes people we’ve worked with in the past, people we’ve met at MeetUp events, others from CoFoundersLabs – good people are everywhere, you just have to look and get involved!

Josh: What other products have you created? What other products will you create?

Chris: As for what we've created already and what we're focusing on, it's vehicle telematics. We already have some history making a device for Raytheon and the Oregon DOT, but we'll be expanding that line greatly – we've got some fun stuff coming out for the car buffs out there! We are bootstrapping in a few different directions, but telematics is one we'd definitely like to focus on.

Josh: Tell me about the Cell, your Kickstarter project. What can it let me do that I can't do with a typical Arduino?

Chris: The CELLv1.0 actually easily plugs into an Arduino, Raspberry Pi, or other breakout board – just take a look at the available shields. So right out of the box you can send messages over the cellular network.

If you take it one step further and look into the very minimal selection of cellular dev kits out there, you’ll soon realize the real reason for launching this Kickstarter.

At face value, the CELLv1.0 hardware is simply much more compact and less expensive than any other option currently on the market, but it's the other facets that are the real differentiator. The entire ecosystem SparqEE has set up is where the value is:

  • Price: definitely an important aspect, as we're driving down prices not only for starter kits in the cellular arena but for production systems too. The CELLv1.0 is definitely the least expensive dev kit currently on the market.
  • Servers: We're offering our servers for use by the community so that users don't need to know anything about the server side; all they see is the CELLv1.0 attached to their Arduino or Raspberry Pi, and then the reception of that data at their internet-enabled device.
  • SIM: With the CELLv1.0 users can use their own SIM straight from their smartphone, a prepaid one, or the SIMs we set up for this Kickstarter, which are actually one of the most beneficial pieces of this project for the community at large.

With M2M (machine-to-machine) applications, and anything cellular really, the providers are one of the biggest hurdles. Since our goal is to make cellular as ubiquitous as Bluetooth and WiFi, we needed to take care of everything, including the providers. So we put together a SIM card offering that works anywhere in the world, is the easiest to set up, has no minimums, and is the lowest cost I've ever seen - check out for more info.

Josh: I see that some of the reward levels include Dev Points. What are dev points and what can I do with them?

Chris: If you take a look at the Kickstarter project under the section titled “Reward Levels,” we list a number of possible breakout boards, such as GPS, accelerometer, and relay, as well as shields for the Arduino and Raspberry Pi. Then we assign a number of “Dev Points” to each one – so essentially you just add up the dev points for the rewards you want and select that purchase level.

Josh: Why did you go with dev points instead of traditional Kickstarter rewards?

Chris: Kickstarter rewards always seem to have way too many options and variations. “Dev Points” are our attempt at simplifying the choices. Since our project offers a number of optional breakout boards and shields, we wanted people to be able to easily select whatever they wanted without having a million reward levels for every combination. For example, if someone wants a relay board and an accelerometer board, all they have to select is 2 dev points under the rewards and that's it!

Josh: Does this really work with Arduino out of the box? Can I use the regular Arduino IDE with it?

Chris: Yes, definitely. As we progress we'll be posting simple, quick explanations and examples showing you exactly how to connect all the hardware and what code to use. For the Arduino, if you get the Arduino shield along with the CELLv1.0, simply plug the CELLv1.0 into the shield and then onto the Arduino, plug in power, open up the Arduino IDE, and you're off – right out of the box without any extra components!

Josh: I'm confused by why there is a separate cellular board and jumper board. What is the advantage of making it two pieces?

Chris: The Jumper board is mainly for development and not meant for production systems (although you could use it for that). The main point of making it two pieces is to allow people to affix the small Cellular board to any product, whether development or production.

To give a bit of insight into the cellular industry – the certifications are very stringent and expensive (~$30k), so by creating and certifying the small Cellular board, it allows others to use this module in their final products without having to go through the whole recertification process.

Josh: What do you think people will do with the Cell once they get it?

Chris: People have already been writing in to tell us about what they're going to do with the CELLv1.0 from tracking their bike for theft protection to real-time updates while racing. Others have mentioned plugging this into their BeagleBoard and thus expanding the capabilities of yet another very useful development platform.

Some ideas we came up with at SparqEE were a vehicle tracker and engine kill switch, a device that could open doors, turn on lights, and control temperature at a remote cabin or beach-house, and even early warning systems looking for heat signatures of forest fires or earthquake monitoring.

The possibilities really are endless with this component, but my favorite idea is to make a remote quadcopter to hold my Canon 60D. I really like photography and videography and with the CELLv1.0 I could fly across the city and snap pictures or take video of anything, anywhere.

The SparqEE CELLv1.0 steps in wherever there is a project that is simply out of reach using Bluetooth and WiFi.

Josh: SIMs for GSM data aren't cheap. Typically you need a full plan like the kind you'd have on your cell phone, right?

Chris: Yes and no. You do need a SIM card with a plan, similar to your cell phone, and they are typically expensive, but we've solved this one too. With the CELLv1.0 you can use the SIM straight from your phone, a prepaid one, or the SIM card offering we put together for this project. We're able to provide SIMs that work anywhere in the world, are the easiest to set up, have no minimums, and are the lowest cost we've ever seen! - Check out for more info.

Josh: How are you able to make your own SIM cards be so cheap? Is there a monthly fee in addition or is it just cents per kb?

Chris: There are no additional monthly, yearly, or other fees. We do have to charge once for the SIM card itself and activation, which comes out to less than $10, but past that there's nothing beyond the cost of the data or SMS usage you actually use. What you see on our page is really the extent of the costs.

As for how we’re able to offer these plans and prices, it’s because people believe in SparqEE and Kickstarter and want to see M2M flourish as much as Bluetooth and WiFi have. With affordable SIM cards, a whole new range of applications is enabled.

Josh: When will the SparqEE SIMs be available?

Chris: Since we've garnered so much positive feedback on the SIM offering itself, we're working to get it up and running for the Kickstarter launch so people who need SIMs will be able to get them with their CELLv1.0.

Josh: The page mentions integrating the Cell into a product. Does that mean I could order a bunch of the boards from you for a discount? When will that be available?

Chris: Our first objective is to deliver to our supporters through Kickstarter – if you believe in our goals and support us through Kickstarter, our first priority is you. Only after the Kickstarter rewards and ecosystem are delivered and set up will we offer units for sale. At that point we imagine people will be able to order either the Jumper or Cellular boards or both, but as far as price is concerned, we're comfortable saying that if you pick up a CELLv1.0 and some dev points through Kickstarter you're getting a great deal – a deal that won't exist later. We appreciate the Kickstarter community and the early funding they're providing.

Josh: Why did you choose to go with a Kickstarter instead of other funding routes?

Chris: I doubt you'll be surprised to hear that investors aren't all that interested in spending money to help the technology community, nor in thin margins. Kickstarter is a great way to get initial seed money for an idea and a little bit of PR, giving you maybe just enough to be noticed if you need another round. But in our case, we believe Kickstarter will provide enough capital infusion to actually allow us to bootstrap our next iterations, circumventing investors entirely.

Josh: When can we expect a followup project from you guys? Cell 2.0 maybe? Anything else on tap?

Chris: We'll have follow-up projects coming right after the CELLv1.0, maybe not on Kickstarter, but we've got a roadmap which includes using the CELLv1.0 in two other projects. People can "Like" our Facebook page to get our updates! But really, we don't imagine that the CELLv1.0 will need to be upgraded for many years – 2G is only just being phased out by AT&T by 2017, so 3G support will last for a great while and the CELLv1.0 will still dominate in both the 2G and 3G spaces.

Josh: What's the one question I should have asked you but didn't?

Chris: How about how we came up with the idea for the CELLv1.0? We were working on the Keychain Tracker project and thought it would be easy to get a small, inexpensive cellular module and SIM and integrate them – we were dead wrong. So we saw that the industry needed someone to come in and minimize the pain for everybody out there and help make cellular technology and SIMs – or what amounts to relationships with cellular providers – more readily available. Thus the Kickstarter project was born.

Thank you Chris. Good luck to the whole SparqEE crew. Go check out the Kickstarter project and the SIM pricing now.

When I started this blog I had hoped to post once or twice a week. If you're one of my few remaining readers, you know that this hasn't happened. My new position at Palm has kept me so busy that I haven't had time to work on the big, long educational posts like Typography 101. I've also been debating whether I should have any Palm- or JavaScript-specific content, or just keep this as a pure design blog.

So... I've come to a resolution: it's my blog and it represents me; all of the facets of my professional life. I'm going to just blog away on all topics, keep the posting flow higher, and not put off things until they are perfect. They never will be. I'll deal.

So... look for a lot more posting from now on. I'll be sure to tag everything appropriately so you can easily filter out Palm, Java, trip reports, or anything else you desire. Thank you, gentle readers.


Jesse is one week old today. An amazing week it has been.

In general I don't post personal things to this blog. A few years ago I consciously decided to keep personal communication on Facebook and restrict my professional online life to this blog and Twitter. That's why I'll ignore Facebook friend requests from people I don't actually know in real life and limit search/info/photo access to friends of friends. However, I'm making an exception today for one simple reason: having a child is life changing, both personally and professionally.

I'm currently on vacation for two weeks to spend time with my new son, and it's true.  My life is changing. I know it sounds cliche, but we have cliches for a reason. That a child would change my life is something I always knew, but didn't understand until now. As I Tweeted a few days ago: now that I've spent hours up all night coaxing a fussy baby to sleep, I finally feel like a father. Life will never be the same again.

That doesn't mean I won't be the GUI nerd I've always been; I'll just focus on different things. For obvious reasons my interest has focused on ebooks and games, especially apps targeted at young children.  I've been involved in the development of the TouchPad for close to a year, and I know the potential of tablet technology to revolutionize how we teach, train, and entertain children.  We have so many great apps prepping for launch, I really couldn't be happier.  I think you'll be happy too.

That's all for today's lazy Sunday post. Time to wake Jesse up for lunch and clean the house (it's amazing how messy a house can get with a baby around).

Welcome to the world Jesse. We've got great stuff to show you.

Picture of Jesse

I'm on the plane home from Mobile World Congress. It's been a heck of a couple of weeks. I've gotten some new coworkers, a new boss, announced our new products, showed off the new devices in Barcelona, and spread the word of webOS itself to anyone who will listen. I've got several blog posts coming about what we've announced, some fun new demos, app speculation, and updates on my open source projects.

But first... I rest. I've been traveling for two weeks straight and need some sleep. In the meantime I'd like to share with you photos of something far more important: the revolution in Egypt. You really must read this post from Peter Turnley, an amazing photographer who ventured to Cairo in the midst of the protests to shoot some amazing photos both before and after the president's capitulation. You won't regret it.


I am happy to announce the 1.1 release of Amino, my open source JavaScript graphics library, is ready today. All tests are passing. The new site and docs are up. (Generated by a new tool that I'll describe later). Downloads activate! With the iPad 3 coming any day now I thought it would be good to take a look at what I've done to make Amino Retina Ready (™). Even if you don't have a retina screen it will improve your apps.

The Problem

But first, let's backtrack and talk for a second about how Canvas works. Canvas isn't really a part of the DOM. To the web browser it just looks like a rectangular image that can be updated. It's just pixels. This low-level access is powerful but comes with some tradeoffs. Since the rest of the browser sees a Canvas element as an image, it will scale the canvas as an image as well. If the user zooms the page then the canvas 'image' will be scaled up, resulting in pixelation. This is also a problem with Apple's hi-dpi retina screens. Though they have double the resolution of their non-retina counterparts, they still report the same screen sizes. Essentially everything on the page is given a 2x zoom, so your canvas isn't taking advantage of the full pixel density. (This isn't strictly true, but bear with me for a second.) Finally, if you scale the canvas area directly using CSS transforms or a proportional size (like width:50%) then the canvas may change size over the lifetime of the page, again giving the user zoomed-in pixels. So how can we deal with this?
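To make the mismatch concrete, here's a tiny sketch of the arithmetic involved. The function name is mine, purely for illustration (it's not part of any browser or Amino API): it computes how much each backing-store pixel gets stretched given the canvas's declared width, its displayed CSS width, and the device pixel ratio.

```javascript
// Hypothetical helper for illustration only. backingWidth is the
// canvas's width attribute (its real pixels); displayedWidth is its
// on-screen CSS size (clientWidth); dpr is window.devicePixelRatio
// (1 on a normal screen, 2 on a retina screen).
function pixelationFactor(backingWidth, displayedWidth, dpr) {
    // how many device pixels each backing-store pixel must cover
    return (displayedWidth * dpr) / backingWidth;
}

// A canvas declared 300px wide but stretched to 400 CSS pixels draws
// each pixel 1.33x too big; the same canvas displayed at its declared
// size on a retina screen still wastes half the available density,
// since the factor is 2.
```

Any factor above 1 means the browser is upscaling your pixels, which is exactly the blur we want to eliminate.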

The Solution

Simple: we check if the real size of the canvas is different than the specified size. If it is then we can update the canvas size directly to match what is really on screen. The code looks like this:
if(canvas.width != canvas.clientWidth) { canvas.width = canvas.clientWidth; }
To deal with a retina display we just scale an extra 2x by checking for window.devicePixelRatio == 2. To tell when the user has changed the page by zooming or resizing we could hook up all sorts of event listeners, but I prefer to simply check on repaints since most things I do are animated. Of course we have to set the canvas height as well, which brings up the question: how should we scale things? If the canvas is uniformly scaled then you can calculate a ratio and multiply the height by that. If the canvas is *not* uniformly scaled, say because the width is set to a percentage but the height is not, then you can automatically scale to fit, or stretch it to fill the new size. In the end I found only a few combinations to actually be useful:
  • Do nothing. Don't adjust the canvas size or scaling at all.
  • Resize the canvas but don't mess with scaling. This essentially turns the canvas into a resizable rectangle, leaving it up to the content to adjust as desired.
  • Uniformly scale the content to fit.
To handle all of this I gave Amino two properties: Canvas.autoSize and Canvas.autoScale. autoSize controls whether the canvas should adapt at all. autoScale controls whether it will rescale the content. Amino will handle all of the math and detect retina displays. All you have to do is choose which option you want. I haven't tested this on IE yet (I still need to get the new Win 8 preview) but I have tested it on Firefox, Chrome, Safari, and Mobile Safari on an iPhone 4S. Check out the tests here to see it in action. Be sure to check out the new Amino site and download 1.1 here.
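Putting the pieces above together, the check amounts to something like this sketch. The function name (resizeCanvas) is mine for illustration; Amino's actual implementation differs, but the idea is the same: run it before each repaint so the backing store always matches the on-screen size.

```javascript
// Sketch of a per-repaint canvas size check (illustrative, not Amino's API).
function resizeCanvas(canvas) {
    // devicePixelRatio is 1 on normal screens, 2 on retina displays
    var ratio = window.devicePixelRatio || 1;
    var desiredWidth  = canvas.clientWidth  * ratio;
    var desiredHeight = canvas.clientHeight * ratio;
    // only touch width/height when they are out of date, since
    // assigning them clears the canvas contents
    if (canvas.width !== desiredWidth || canvas.height !== desiredHeight) {
        canvas.width  = desiredWidth;
        canvas.height = desiredHeight;
        // uniformly scale the context so content drawn in CSS pixel
        // coordinates fills the higher-resolution backing store
        canvas.getContext('2d').setTransform(ratio, 0, 0, ratio, 0, 0);
    }
}
```

Calling this at the top of the repaint loop covers page zoom, CSS resizing, and retina screens in one place, which is essentially the behavior autoSize and autoScale toggle.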

I'm home all by myself this weekend (the missus took the baby to CA to visit family for a few days) so I am at long last catching up on some reading. Today's book is Hackers & Painters: Big Ideas from the Computer Age by Paul Graham. It is a selection of Paul Graham's essays collected into a single volume, most notably Hackers & Painters. Paul Graham is the co-founder of Viaweb, an early software-as-a-service provider that was sold to Yahoo in 1998. He then co-founded the Y Combinator startup incubator. He blogs and speaks regularly on topics of startups, hacking, and creative collaboration.

Though the book was first published in 2004 it has aged pretty well, because Paul mostly focuses on timeless topics: why hackers are the way they are, startups as a great way to create wealth, musings on programming languages. His chapter on web-based software, The Other Road Ahead, is not only still relevant, but more relevant than ever before. Software as a Service is now the order of the day. While he missed the concept of shipping software as a packaged mobile app instead of just websites, he got the core concepts of always-available, always-updated software right on.

Overall I like the book. The musings about what makes a good programming language towards the end are especially good, though you may come to the conclusion that all languages eventually grow to become inferior dialects of Lisp. My biggest beef isn't so much with the book but with Paul Graham's tone. Sometimes he steps beyond merely authoritative to become overbearing and downright arrogant. Especially in the namesake chapter on Hackers and Painters one gets the impression that Paul can be quite a know-it-all dick when he wants to be. He puts his kind of person, the hacker, on a pedestal: equating the best hackers with the great painters of the Renaissance. Uber-men who are experts in many fields and outperform the work of 30 mediocre employees.
This attitude of superiority pervades the book and can become a turn-off for the reader. But the thing is: throughout the book he's completely right. In virtually every essay he digs into the topic and divines the underlying truth, even when we might not wish to acknowledge it. He even has a chapter on that topic: What You Can't Say. I suspect his peculiar narrative tone simply reflects how Paul is in real life. I have to respect people who write how they speak. I always strive for honesty in my writing as well. Would I ever want to work with Mr. Graham in his startup incubator? I don't know. I haven't met him. He may be a right jolly fellow in person. In any case, I enjoyed reading the book and will read his further essays on his blog, just always knowing to adjust the tone to my personal liking.

I have only two real complaints with Hackers & Painters. First, the chapter on catching spam seems very out of place with the rest of the book. Not that I didn't like it; it was just more technical rather than discussing universal topics. Second, he has several chapters which discuss the programming language of the future, or one that might last 100 years. He dives into the problems that must be solved but never proposes any real solutions. I'm not asking for a full BNF, but a few examples of what such a dynamic, concise super-language would look like might be handy. But then, I suspect it would just be full of parentheses.

One final item: read the notes section at the end. More than just the footnotes for the book, they provide more entertaining tidbits to ponder and entice further trips to Wikipedia.

I'm not sure I'll ever be ready to tell everything about my time at Palm. Certainly not now. Perhaps in a novel or an 8-bit video game, one day. I don't know. I really enjoyed my time there and made wonderful friends. It was also two straight years of frustration. For now I suggest you read The Verge's excellent in-depth article on the 31 months from Palm's 2009 CES debut to the end of the platform (and possible rebirth?). After you read it, come back to compare notes. Below are some inaccuracies or clarifications based on my own recollection of events. Think of it as director's commentary, if the director was forced to sit in the back and watch through a 3-inch screen.

Caution Ahead

A note to the reader. Before you continue there are three things you must keep in mind:

  • I am now an employee at Nokia. The comments below reflect entirely my own opinion, not of Nokia or my team. Also note that I am not working in the Windows Phone division (or any product division for that matter) and have no insight into product plans.
  • The comments below reflect entirely my own opinion. This is based on my recollection of events and personal speculation. Others have different recollections. These are mine.
  • I wrote the following comments while reading The Verge's article. The thoughts are spontaneous, passionate, off the cuff, and unedited. Please adjust your expectations accordingly.


Prima (2008). I never got a chance to see Prima. I wish I had. Sounds ugly. I was able to get my hands on a Foleo, though. Insert another epic yarn about a brilliant but doomed product. I do know that Prima was Java based, and closer to JavaME than JavaSE. It had major performance issues because it was built on some wonky embedded JVM they got from a small company (I don't know who) rather than a proper JITed VM like HotSpot or JavaSE Embedded (though that wasn't a shipping product from Sun at the time these decisions were being made).

Prima's JVM at CES 2009. Yes, it was still there running some system services. Those lasted until around the time of webOS 2.0, towards the end of 2010 as I recall. There were always references to it in the open source code dumps, which is why people kept asking me if webOS supported Java. It never did in any way that a developer could actually use. That JVM also sucked up way too much memory, making a lot of system services dog slow.

CES 2009 This is the event which sold me on Palm. When I met Ben and Dion in Sweden to discuss working for Palm later that year this was the event I watched and analyzed to see if I truly wanted to work for Palm. The polish and practice was really amazing. If you haven't seen it you should watch it. It even made up for the truly horrible TV ads Palm later aired.

The de-Mercerization to create Blowfish: I don't know the details of exactly how Blowfish was built. Engineering was very closed off to us (Developer Relations). It was like pulling teeth to get information out of them. I learned a whole lot by searching JIRA bugs and watching commit logs, however. Blowfish turned into a classic Fred Brooks "Second System Effect" death march. The release stretched on and on. Things which worked previously were broken and never fixed. Testing started to fall by the wayside. Quality declined and the release kept slipping. In the end we missed an entire product cycle. This probably doomed us more than anything else.

2010 This is when I joined. The Droid campaign had already screwed Palm (or Palm let itself be screwed by placing so much faith in a carrier). Verizon promised to support the Pre Plus but it turns out that support didn't extend to the actual employees in the retail stores. In the US, at least, a phone lives or dies by the retail staff in the carrier stores. Nothing else matters. Not price. Not features. Not apps. If the retail staff doesn't like you... you die.

Anyway. I'd been at Palm for only a few months when the HP deal was announced. I still feel it was the best decision. The smartphone game of the early 2000s changed forever after the iPhone. It was now a rich man's game. Palm was up against companies literally 10 times its size. To create a viable smartphone platform today you have to be prepared to spend about a billion dollars a year. Palm was simply too small. HP had the size and the cash, and a desire to not just be a Windows OEM forever. It was by far the best option. (All other options would simply have liquidated us.) But the best laid plans of mice and men…

Leo: Well…. We will never really know what happened inside Mr. Apotheker's mind when he decided to cancel all webOS hardware. Did he not have the stomach for a billion dollar a year run rate? Did he never really want to be in the consumer business? Was his talk of "doubling down" a lie all along? We will never know. It was clear then and now that he never believed in the webOS vision. Perhaps if Hurd had not left history would have gone differently. At the very least the TouchPad would have had the chance to develop into the strong product it could have been. We were, for a brief time, the #2 tablet in the US.

Greg Simon: It really hit me hard when Greg Simon left. He was the only one who was getting my graphics bugs fixed! We had no hard specs for what our Canvas and CSS graphics output should be so I made it my personal mission to write tests for every missing feature of the spec and then push to have them implemented. For a long time Greg was the only guy who could help me. He's the one who implemented CSS gradients for me.

TouchPad hardware: One thing I don't see mentioned is that the hardware design for the TouchPad came from HP. It was designed before the acquisition and originally ran Android. That's why you'll occasionally hear rumors of a TouchPad that shipped with Android on it. That's also why the specs were a bit anemic when it shipped over a year later.

So ultimately the question is: why did the TouchPad fail in the marketplace? The software was definitely buggy, but we made pretty rapid improvements in the update releases. The iPhone was buggy when it launched but got better very quickly. The hardware specs were considered anemic, but really it should have been flying with a dual-core 1.2GHz processor; the immature software held it back. My only real complaint about the hardware was the weight and the lack of a camera. The 7" Opal / TouchPad Go would have solved the camera issue but it was still too heavy. The other big issue was the price. It should have gone out the door at $399 or lower, even if we would have lost money for a while. We didn't have a product comparable to the iPad, so we shouldn't have priced it as such.

So why did it fail in the marketplace? Actually, I question that assertion. Clearly we weren't selling as well as the iPad, but based on anecdotal evidence we were selling more than any Android tablet. One Best Buy employee told me the iPad sold about 10 units a day at his store while the TouchPad sold two a day. The other tablets (Android and BlackBerry) were lucky to sell one a week each. While Samsung may have shipped millions of tablets, they later admitted that most of them were never actually sold to end customers. We were the #2 tablet even before the fire sale. A weak #2, but still #2. Not bad for a 1.0 product. If we had continued working on the software, pushing out consistent updates, and shipped the Opal, then we would have gone into the Christmas season with strong momentum.

I will say this for Apple, Google, and Microsoft: they go in big and they don't give up. They take the long view, steadily improving their products over months and years. Spending heavily on advertising and then giving a product only six weeks in the market is *not* long-term thinking. If HP was only going for a quick buck then webOS would never have survived, whether or not Leo was at the helm. This is a game for the rich and patient man.

So then why was the TouchPad canceled? As I said, we will never know. Perhaps the sales projections weren't realistic. But we do know that the *way* it was killed was designed to be irreversible. The press release announcing the cancelation took everyone by surprise, including our partners. AT&T refused to carry the Pre 3 just before it was scheduled to go on sale. The Opal was canceled just days before it would have gone into production for a release in late September. Once you scupper a supply chain that way it takes at least a year to rebuild. The damage was irreversible. Even if Leo had been fired a week later and Meg had reversed the decision, it still was too late. And it was that supply chain which accounted for most of the two-billion-dollar write-down HP had to take later. I sometimes wonder if it would have been cheaper to build the TouchPad Go anyway and still sell it for $99 like the TouchPad. In any case, the decision was made and it was immediately final.

Of note: when Meg Whitman was asked in an all-hands meeting how the board could have voted for such a decision, she said the board was "informed" of the plan, not consulted. No vote was required or taken.

Post cancelation stasis: my biggest memory of that period is when we were scheduled to go on a three-stop tour of Asia (Australia, Singapore, and Beijing) to excite developers about products that now would never ship. In the end we made the trip because Richard Kerris (head of devrel) felt it was important to meet our developers on their own terms and be honest with them. It was both an interesting and awkward experience. I'm glad we did it.

Meg Whitman. During my final months at HP, Meg was refreshingly honest. She took her time to make the decision, but honestly we had plenty of time after the hardware cancellation. A few months either way would make no difference, and she had bigger fish to fry (like the PC division). In the end I think she made the best decisions possible under the circumstances. I wish her and the rest of HP well. I do not envy the work ahead of her.

Sam Greenblat & Martin Risau: I ramped down my travel after my son was born, so I only came into the office twice after the cancellation. I met each of them only briefly. They both seemed smart and competent but had different visions of what Open webOS would be. It doesn't surprise me that there were conflicts.

Other tidbits:

  • There was other hardware in the works beyond the TouchPad Go. I saw prototypes of many products including a transforming tablet / netbook combo with an ingenious sliding hinge and super thin keyboard. I really hope someone makes it one day.
  • On "competing for #3" Yes it would have cost us billions of dollars over a bunch of years, but I think we would have been #2, not #3. It has been nearly a year since the TouchPad launched and Android tablets are still failures. I think there is a lesson in here for us: without a carrier and contract pushing them, Android devices don't sell themselves.
  • WebKit: Using WebKit for the GUI was pure genius. Forking Webkit so you couldn't take advantage of community improvements was pure stupidity.

Oh, and the Palm V is still the best PalmOS device ever. It was the perfect size to fit in a shirt pocket, had a beautiful and unique shape, and was made of good metal materials. And the battery would last forever. It was an iPod Touch ten years early.

Post Mortem

I am still very proud of what we created, especially my teammates in Developer Relations. The TouchPad launched essentially on schedule and had a solid over the air update. We opened the catalog with 10k webOS apps and 600 specifically for the TouchPad. That put us ahead of the number of Android tablet apps at the same time, even though we shipped six months later. We continued to increase the number of apps in the catalog even after the hardware was canceled. We ran fun developer contests and promotions. Our developer community has always been passionate and appreciative, even in the times when we weren't allowed to say much. I'm glad we were able to open source Enyo in a way that lets those developers take their apps to other platforms.

Post Post Mortem

It has been nearly a year since the webOS hardware was canceled. A year that has afforded me some perspective. I left HP not because I didn't enjoy my time there. My reasons were more personal. After seven years of being a technical evangelist for Java, JavaFX, the JavaStore, and then webOS; I was simply tired. Tired of putting emotion and energy into platforms. Tired of the travel and speaking engagements. Tired of the constant product deadlines. Combined with having a (now) one year old child I realized I needed a break.

My current position requires little travel, has a flexible schedule, and lets me work on interesting things not tied to any particular platform. Best of all I specifically *can't* talk about what I'm working on. Perhaps one day I will return to platform evangelism. I still enjoy it. But for now I'm in a nice peaceful place.

Thank you for taking the time to read my tiny part of this epic tale. One day it will truly be the stuff of valley legend.

Yesterday Apple updated their Apple TV product, taking it in a new direction with a 99$ TV dongle that does only content streaming. Apple has long described Apple TV as a 'hobby' because they haven't figured out the right way to create a compelling TV product. Since they've spent millions of dollars building up a new data center in North Carolina to support the streaming catalog of the new Apple TV, presumably they think they've got it figured out now.

I actually think Apple is wrong. I think Apple TV is a failure and will continue to be a failure for one simple reason: it goes against the design philosophy of every one of their successful products.

Now, yes, I realize thems fight'n words; but hear me out.

At its core Apple is a software company that makes money by selling hardware. Mac OS X is an amazing operating system that they monetize by selling high end laptops and desktops. iTunes and the iPod OS are great pieces of software that they monetize by selling iPods. iOS is a great mobile operating system that they monetize by selling high margin iPhones and iPod Touches.

The Apple TV

And then there's the Apple TV. It is a piece of hardware that does very little. When originally launched, the Apple TV streamed music, videos, and photos from your desktop; but there are plenty of cheaper alternatives that are just as good. If you hack the Apple TV you can install your own software to make it do all sorts of cool things, but as a stock device directly from Apple it doesn't do much. Apple updated it with a new interface that added the ability to buy music and video directly from Apple's catalog. In other words, they added nothing except the ability to pay them more money.

Yesterday Apple updated the Apple TV once again to do the exact same thing in a smaller, cheaper package. In some ways it does less, since the hard drive is gone and it's likely harder to jailbreak into a useful product. It is now purely a 99$ catalog for the iTunes Store. I'm sure in Apple's mind this is a win. It can do exactly what the previous version did for less than half the price. Surely that's a win, right?

I don't think so.

Monetizing Software Through Hardware

Apple is a software company that monetizes their software through hardware. The Apple TV is a hardware product that is monetized through a streaming media service. It's fundamentally a different kind of product. Services have never been Apple's strong point (cough *MobileMe*), and this product is just a 99$ access device to their catalog service. It still could be successful thanks to their focus on user experience, but I think it will remain a failure for another reason.

Most people don't want another streaming media service on their TV. I love Hulu Plus and I happily pay 10$/mo to get it, but I watch Hulu on my laptop. I don't own a TV, and that makes me a rarity. Most people have very nice large TVs and already pay upwards of 40$/mo for TV service. They don't really want something which costs extra to do what they already are paying for, no matter how small and cute the box is. Apple TV doesn't provide anything new. It competes with something consumers are already quite happy with but doesn't offer everything the existing product provides. This doesn't sound like a recipe for success unless they can get all of the networks on board to massively beef up their content. Even then I'm not sure it will work because most people really like an all you can eat plan for TV. They don't want to pay by the show.

A Good Apple TV

Now that I've denigrated the Apple TV, let's talk about what would be a compelling product: an Apple TV with apps. Essentially an iPod Touch for your TV.

The modern flat screen TV is the largest and most expensive display in most homes, and yet it's wasted. All we do with them is play video. These displays could do so much more with the right input devices and, most importantly, a good app ecosystem. Here's just a short list of apps that would really add value to a flat screen TV:

  • specialty photo slideshows. Turn your TV into an art gallery
  • music driven lava lamp display to run during music sessions
  • an interactive billboard to combine fun graphics with a live Twitter stream
  • widgets: weather, stocks, rss news feed, world clocks, and your calendar
  • zen space: an underwater simulation with soothing sound and lights
  • games (duh!)
  • video conferencing

I actually think Sony and Microsoft are far closer to creating this vision than Apple is. Both of them have network connectivity, downloadable apps, and are releasing unique forward-looking input devices. Microsoft's Kinect in particular shows promise. With the right software your TV becomes a portal to the world, bringing interactive content and apps into your living room purely through an intuitive gestural interface.

That vision is far more forward-looking and interesting than Apple's, where all they've sold you is a pricey catalog for their own store. And in my mind, that's a big fat fail.

A round up of interesting stuff I've been collecting lately. An interview with Philippe Starck, vintage ads, UI design tips, and electronic comics.

Philippe Starck

Last fall Engadget did a short interview with legendary designer Philippe Starck. Wonderful insights on the world of design and creation.

Also check out the bonus round of word association with Philippe.

Electronic Comics
The most interesting ebook applications to come out of the iPad hoopla are actually the comic book readers. Most readers seem to treat ebooks just like books and focus on recreating the experience of reading a printed book, complete with faded paper and page curls. Only the comic book apps seem to be exploring new forms of interaction, stealing liberally from cinema. Kicker Studio has a great overview of the cinematic reading of electronic comics.

UI Design Tips

Flyosity has a great article on Crafting Subtle & Realistic User Interfaces. Starting from the basics of light it goes through using borders, gradients, and highlights to create the illusion of depth.

Advertising Through the Ages

A great Flickr set of ads from the fifties. I just love the style from that era.

The Evolution of Print Advertising. What it says on the tin: a brief history of ads from the 1760s to the present. I love how in the 70s even banks used psychedelic imagery.


or: "Why I won't work for a social network."

I've had a lot of ideas involving social networking floating around in my head for the past few months. They finally crystallized into a solid conclusion this week: I don't want to work for a social networking company. There are really two distinct but related problems with social networking companies, and combined they form a deal killer for me.


First: they are free. With the possible exception of dating sites, social networking services are free. They do everything in their power to get as many people to sign up as possible. Due to the network effect they really can't charge admission. It's not worth money to join the network until lots of other people are already on it. Combined with the economics of VC funding, the cost of joining is always driven to free. You should always be nervous when someone offers you something for free.

If a service is free then one of three things will happen. The service might charge in the future. This is a valid strategy but likely to piss off lots of current users. Next, the company could simply run out of money one day and shut down. Many great sites have simply died due to a lack of a viable business model. And finally, the site could remain free but make money doing something else; which, in the social networking world, means advertising.

Facebook doesn't charge you money because you aren't the customer. The advertisers who buy your eyeballs are the real customer. This creates an inherent conflict of interest. They serve the advertisers over the users. The users still matter somewhat, since without users there would be nothing to sell, but there is a constant tension between the two. Still, this wouldn't be so bad if it weren't for the second problem: scarcity.

Fundamentally social networks are selling the users' time, and time is the ultimate scarce resource. The only way for a social network to grow is by getting more users or getting more time out of their existing users. There are only twenty four hours in a day and only a certain number of people on the planet. Recent polls indicate that social networks are reaching saturation in developed nations. Most of the people who want to use a social network are already doing so, or will in the next year or two. Without new users social networks must maximize the time they get from each individual user. This is where things really start to get bad.

To further their growth Facebook has an incentive to optimize the number of ads you see per page, and the number of pages you see per day. They constantly look for ways to bring you back to the site and spend more time interacting with it. I call this the Zyngafication of content, after its most famous implementor. I stopped playing Cafe World when I ran the numbers and realized there is no strategy. You can't design a particular cafe or food item that will maximize profit. The most profitable food in the game was (as I recall) a 3 minute appetizer. The point values of foods are weighed to maximize the number of minutes you have to actually spend playing the game per day. Their incentive structure is designed to suck up as much of your time as possible.

Long term this can't be sustainable. As free services try to maximize their profit the number of ads per page will increase, and the number of sneaky ways they get you to stay on the site longer will only grow. The user experience will suffer and eventually users will leave. What we need is a system that treats our most valuable resource, time, as something to be conserved instead of wasted. I suspect this is fundamentally incompatible with an advertising-driven model. And I don't want to work for a company whose business model requires degrading the user experience.

So this brings up a fundamental question: Would you pay to use Facebook if it let you accomplish things more efficiently instead of less? What about photo sharing? Is there a viable model for paid social networks and services?


The past two years have been a hell of a fun ride, but alas it must come to an end. It is with sadness but no regret that I must announce Friday will be my last day at HP / Palm. This was a very difficult decision to make. I have enjoyed my time here and after seeing the webOS roadmap I'm very excited for its future, but it is time for me to do something else.

I am extremely proud of what the webOS Developer Relations team was able to accomplish. We kept the community together through several big platform changes and managed to launch the TouchPad with more native tablet apps than Android had after six months. Even after the webOS hardware was canceled we still managed to grow the app catalog to more than a thousand apps. Recent numbers indicate that our USA Today app has twice the number of downloads as all Android tablets put together. I'm proud of what we built with fewer resources than our gigantic competitors.

Sadly, webOS is a failure as a retail product, at least in its current incarnation. There are many reasons for its failure, ranging from poor marketing to the ravings of a lunatic. Perhaps one day I will write more on the topic, but for now I will let the past be past. webOS has a new future as Open webOS, but one that I must cheer from the sidelines. It is time for me to make some changes in my own life.

I have spent the last seven years championing underdog platforms. Starting with desktop Java and Swing, then NetBeans, the JavaFX platform, and the JavaStore. Finally I spent the last two years doing my best to make webOS a success. I'm proud of the work I've done and the platforms I've helped, but quite frankly I'm damn tired. It's a lot of fun to champion something you are passionate about, but also exhausting. And since Jesse arrived I no longer want to be a platform evangelist; traveling to conferences and staying up late for weeks to ship a product. My life is simply different now.

So on March 12th I will be taking a research position at Nokia. I will get to play with cool future stuff and stay involved with the industry, but at a much slower pace. I simply need a break for a few years. This doesn't mean I will disappear. Rather, I plan to increase my blogging, but on a wide range of topics in more long-form essays.

I do plan to stay involved with webOS. Now that I'm no longer an HP employee I can actually sell my own apps in the app catalog. I will also keep working with Enyo and the Open webOS builds; and making sure Amino and Leo support them. I love my TouchPad and will continue to use it until they pry it from my cold dead hands.

I want to mention my incredible fellow Developer Relations crew members, some of whom were laid off today. And my eternal gratitude to the amazing webOS community. You made all of our hard work worth it.


This is part 3 of a series on Amino, a JavaScript graphics library for OpenGL on the Raspberry PI. You can also read part 1 and part 2.

Amino is built on Node JS, a robust JavaScript runtime married to a powerful IO library. That’s nice and all, but the real magic of Node is the modules. For any file format you can think of someone has probably written a Node module to parse it. For any database you might want to use, someone has made a module for it. The npm registry lists nearly ninety thousand packages! That’s a lot of modules ready for you to use.

For today’s demo we will build a nice display of news headlines that could run in the lobby of an office using a flatscreen TV on the wall. We will fetch news headlines as RSS feeds. Feeds are easy to parse using Node streams and the feedparser module. Lets start by creating a parseFeed function. This function takes a url. It will load the feed from the url, extract the title of each article, then call the provided callback function with the list of headlines.

var FeedParser = require('feedparser');
var http = require('http');

function parseFeed(url, cb) {
    var headlines = [];

    http.get(url, function(res) {
        res.pipe(new FeedParser())
            .on('meta', function(meta) {
                //console.log('the meta is', meta);
            })
            .on('data', function(article) {
                headlines.push(article.title);
            })
            .on('end', function() {
                cb(headlines);
            });
    });
}
Node uses streams. Many functions, like the http.get() function, return a stream. You can pipe this stream through a filter or processor. In the code above we use the FeedParser object to filter the HTTP stream. This returns a new stream which will produce events. We can then listen to the events as the data flows through the system, picking up just the parts we want. In this case we will watch for the data event, which provides the article that was just parsed. Then we add just the title to the headlines array. When the end event happens we send the headlines array to the callback. This sort of streaming IO code is very common in Node programs.

Now that we have a list of headlines, lets make a display. We will hard code the size to 1280 x 720, a common HDTV resolution. Adjust this to fit your own TV if necessary. As before, the first thing we do is turn the titles into a CircularBuffer (see previous blog) and create a root group.

var amino = require('amino.js');
var sw = 1280;
var sh = 720;

parseFeed('',function(titles) { amino.start(function(core, stage) {

var titles = new CircularBuffer(titles);
var root = new amino.Group();
stage.setSize(sw, sh);
stage.setRoot(root);

The RSS feed will be shown as two lines of text, so let’s create a text group then two text objects. I also created a background group that we will use later. Shapes are drawn in the order they are added, so I have to add the bg group before the textgroup.

var bg = new amino.Group();
root.add(bg);

var textgroup = new amino.Group();
root.add(textgroup);

var line1 = new amino.Text().x(50).y(200).fill("#ffffff").text('foo').fontSize(80);
var line2 = new amino.Text().x(50).y(300).fill("#ffffff").text('bar').fontSize(80);
textgroup.add(line1, line2);

Each Text object has the same position and color and size, except that one is 100 pixels lower down on the screen than the other. Now we need to animate them.

The animation consists of three sections: set the text to the current headline, rotate the text in from the side, then rotate the text back out after a delay. This function sets the headlines. If the headline is longer than the max we support (currently set to 34 letters) then chop it into pieces. If we were really smart we’d be careful about not breaking words, but I’ll leave that as an exercise to the reader.

function setHeadlines(headline, t1, t2) {
    var max = 34;
    if(headline.length > max) {
        t1.text(headline.substring(0, max));
        t2.text(headline.substring(max));
    } else {
        t1.text(headline);
        t2.text('');
    }
}
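For the curious, a word-aware variant of that split (the exercise mentioned above) might break at the last space before the limit. splitHeadline is my own helper name, not part of the original code:

```javascript
// Word-aware split: break at the last space before the limit
// instead of chopping mid-word.
function splitHeadline(headline, max) {
    if (headline.length <= max) return [headline, ''];
    var cut = headline.lastIndexOf(' ', max);
    if (cut < 1) cut = max; // no space found: fall back to a hard break
    return [headline.substring(0, cut), headline.substring(cut).trim()];
}
```

You could then assign the two halves to t1 and t2 exactly as setHeadlines does.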

The rotateIn function calls setHeadlines with the next title, then animates the Y rotation axis from 220 degrees to 360 over two seconds (2000 milliseconds). It also triggers ‘rotateOut’ when it’s done.

function rotateIn() {
    //assumes the CircularBuffer's next() method from the previous post
    setHeadlines(, line1, line2);
    textgroup.ry.anim().from(220).to(360).dur(2000).then(rotateOut).start();
}

A quick note on rotation. Amino is fully 3D so in theory you can rotate shapes in any direction, not just in the 2D plane. To keep things simple the group has three rotation values: rx, ry, and rz. These each rotate around the x, y, and z axes. The x axis is horizontal and fixed to the top of the screen, so rotating around the x axis would flip the shape over going away from you then back towards you. The y axis is vertical and on the left side of the screen. Rotating around the y axis makes shapes move away from you and towards the left, then back around. If you want to do a rotation that looks like the standard 2D rotation, then you want to go around the Z axis with rz. Also note that all rotations are in degrees, not radians.
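If your own math produces radians, convert before handing values to these rotation properties. Two obvious helpers (my own, not part of Amino):

```javascript
// Amino's rotations are specified in degrees; convert if your
// math works in radians.
function degToRad(deg) { return deg * Math.PI / 180; }
function radToDeg(rad) { return rad * 180 / Math.PI; }
```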

The rotateOut() function rotates the text group back out from 0 to 140 over two seconds, then triggers rotateIn again. Since each function triggers the other they will continue to ping pong back and forth forever, pulling in a new headline each time. Notice the delay() call. This will make the animation wait five seconds before starting.

function rotateOut() {
    textgroup.ry.anim().delay(5000).from(0).to(140).dur(2000).then(rotateIn).start();
}

Finally we can start the whole shebang off by calling rotateIn the first time.

rotateIn();

What we have so far will work just fine but it’s a little boring because the background is pure black. Let’s add a few subtly moving rectangles in the background. First we will create the three rectangles. They each fill the screen and are 50% translucent, in red, green, and blue.

//three rects that fill the screen: red, green, blue. 50% translucent
var rect1 = new amino.Rect().w(sw).h(sh).opacity(0.5).fill("#ff0000");
var rect2 = new amino.Rect().w(sw).h(sh).opacity(0.5).fill("#00ff00");
var rect3 = new amino.Rect().w(sw).h(sh).opacity(0.5).fill("#0000ff");
bg.add(rect1, rect2, rect3);

Now let’s move the two back rectangles off the left edge of the screen.

//animate the back two rects
rect1.x(-1000);
rect2.x(-1000);

Finally we can slide them from left to right and back. Notice that these animations set loop to -1 and autoreverse to true. The loop count sets how many times the animation will run. Using -1 makes it run forever. The autoreverse property makes the animation alternate direction each time. Rather than going from left to right and starting over at the left again, it will go left to right then right to left. Finally, the second animation has a five second delay. This staggers the two animations so they will always be in different places. Since all three rectangles are translucent the colors will continually mix and change as the rectangles slide back and forth.

rect1.x.anim().from(-1000).to(1000).dur(5000)
    .loop(-1).autoreverse(true).start();
rect2.x.anim().from(-1000).to(1000).dur(3000)
    .loop(-1).autoreverse(true).delay(5000).start();

Here’s what it finally looks like. Of course a still picture can’t do justice to the real thing.

image of full text

image of rotating text

I’ve been working on Amino, my graphics library, for several years now. I’ve ported it from pure Java, to JavaScript, to a complex custom-language generator system (I was really into code-gen two years ago), and back to JS. It has accreted features and bloat. And yet, through all that time, even with blog posts and the website, I don’t think anyone but me has ever used it. I had accepted this fact and continued tweaking it to meet my personal needs, satisfied that I was creating something that lets me build other useful things. Until earlier this year.


In January I thought to submit a session to OSCON entitled Data Dashboards with Amino, NodeJS, and the Raspberry Pi. The concept was simple: Raspberry Pis are cheap but with a surprisingly powerful GPU. Flat screen TVs are also cheap. I can get a 32in model at Costco for 200$. Combine them with a wall mount and you have a cheap data dashboard. Much to my shock the talk was accepted.

The session at OSCON was very well attended, proving there is clearly interest in Amino, at least on the Raspberry Pi. The demos I was able to pull off for the talk show that Amino is powerful enough to really push the Pi. My final example was an over-the-top futuristic dashboard for 'Awesomonium Levels', clearly at home in every super villain’s lair. If Amino can pull this off then it’s found its niche. X windows and browsers are so slow on the Pi that people are willing to use something different.



However, Amino still needs some work. While putting the demos together for my session I noticed how inefficient the API is. I’ve been working on Amino in various forms for at least three years, so the API patterns were set quite a while ago. The objects full of getters and setters clearly reflect my previous Java background. Not only have I improved my Javascript skills since then, I have read a lot about functional programming styles lately (book reports coming soon). This let me finally see ways to improve the API.

Much like any other UI toolkit, the core of the Amino API has always been a tree of nodes. Architecturally there are actually two trees, the Javascript one you interact with and the native one that actually makes the OpenGL calls; however the native one is generally hidden away.

Since I came from a Java background I started with an object full of properties accessed with getters and setters. While this works, the syntax is annoying. You have to type the extra get/set words and remember to camel case the property names. Is the font name set with setFontName or setFontname? Because the getter and setter functions were just functions there was no place to access the property as a single object. This means other property functions have to be controlled with a separate API. To animate a property you must refer to it indirectly with a string, like this:

var rect = new amino.ProtoRect().setX(5);
var anim = core.createPropAnim(rect, 'x', 0, 100, 1000);

Not only is the animation configured through a separate object (core) but you have to remember the exact order of the various parameters for the starting and ending values, duration, and the property name. Amino needs a more fluent API.

Enter Super Properties

After playing with Javascript native getters and setters for a bit (which I’ve determined have no real use) I started looking at the JQuery API. A property can be represented by a single function which both gets and sets depending on the arguments. Since functions in Javascript are also objects, we can attach extra functionality to the function itself. Magic behavior like binding and animation. The property itself becomes the natural place to put this behavior. I call these super properties. Here’s what they look like.

To get the x property of a rectangle:

rect.x();

to set the x property of a rectangle:

rect.x(5);

the setter returns a reference to the parent object, so super properties are chainable:

rect.x(5).y(10);

This syntax is already more compact than the old one:

rect.x(5).y(10);        // new syntax
rect.setX(5).setY(10);  // old syntax

The property accessor is also an object with its own set of listeners. If I want to know when a property changes I can watch it like this:

rect.x.watch(function(val) {
     console.log("x changed to " + val);
});

Notice that I am referring to the accessor as an object, without the parentheses.

Now that we can watch for variable changes we can also bind them together.

rect.x.bindto(otherRect.x);

If we combine binding with a modifier function, then properties become very powerful. To make rect.x always be the value of otherRect.x plus 10:

rect.x.bindto(otherRect.x, function(val) {
     return val + 10;
});

Modifier functions can be used to convert types as well. Let’s format a string based on a number:

label.text.bindto(rect.x, function(val) {
     return "The value is " + val;
});

Since Javascript is a functional language we can improve this syntax with some meta-functions. This example creates a reusable string formatter.

function formatter(str) {
     return function(val) {
          return str.replace('%', val);
     };
}

label1.text.bindto(rect.x, formatter('the x value is %'));
label2.text.bindto(rect.y, formatter('the y value is %'));

Taking a page out of JQuery’s book, I added a find function to the Group object. It returns a selection which proxies the properties to the underlying objects. This lets me manipulate multiple objects at once.

Suppose I have a group with ten rectangles. Each has a different position but they should all be the same size and color:


Soon Amino will support searching by CSS class and ID selectors.


Let's get back to animation for a second. The old syntax was like this:

var rect = new amino.ProtoRect().setX(5);
var anim = core.createPropAnim(rect, 'x', 0, 100, 1000);

Here is the new syntax:

var rect = new Rect().x(5);
rect.x.anim().from(0).to(100).dur(1000).start();

We don’t need to pass in the object and property because the anim function is already attached to the property itself. Chaining the functions makes the syntax more natural. The from and dur functions are optional. If you don’t specify them the animation will start with the current value of the property (which is usually what you want anyway) and use a default duration (1/4 sec). Without those it looks like:

rect.x.anim().to(100).start();

Using a start function makes the animation behavior more explicit. If you don’t call start then the animation doesn’t start. This lets you set up and save a reference to the animation to be used later.

var anim = rect.x.anim().from(-100).to(100).dur(1000).loop(5);
//some time later
anim.start();

Delayed start also lets us add more complex animation control in the future, like chained or parallel animations.


I’m really happy with the new syntax. Simple functions built on a common pattern of objects and super properties. Not only does this make a nicer syntax, but the implementation is cleaner as well. I was able to delete about a third of Amino’s JavaScript code! That’s an all-round win!

Since this changes Amino so much I’ve started a new repo for it. The old amino is still available at:

The new amino, the only one you should be using, is at:

That’s it for today. Next time I’ll show you how to build a full screen photo slideshow with the new animation API and a circular buffer. After that we’ll dig into 3D geometry and make a cool spinning globe.

During SXSW this year I had the great fortune to see the keynote given by Stephen Wolfram. If you’ve not heard of him before, he’s the guy who created Mathematica, and more recently Wolfram Alpha, an online cloud brain. He’s an insanely smart guy with the huge ambition to change how we think.

When Stephen started, back in the early 1980s, he was interested in physics but wasn’t very good at integral calculus. Being an awesome nerd he wrote a program to do the integration for him. This eventually became Mathematica. He has felt for decades that with better tools we can think better, and think better thoughts. He didn’t write Mathematica because he loves math. He wrote it to get beyond math. To let the human specify the goals and have the computer figure out how to do it.

After a decade of building and selling Mathematica he spent the next decade doing science again. Among other things this resulted in his massive tome, A New Kind Of Science, and the creation of Wolfram Alpha, a program that systematizes knowledge to let you ask questions about anything.

In 1983 he invented/discovered a one-dimensional cellular automaton called Rule 30 (he still has its code printed on his business cards). Rule 30 creates lots of complexity from a very simple equation. Even a tiny program can end up producing interesting complexity from very little. He feels there is no distinction between emergent complexity and brain-like intelligence. That is, we don't need a brain-like AI, the typical Strong AI claim. Rather, with emergent complexity we can augment human cognition to answer ever more difficult questions.
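For the curious, Rule 30 is simple enough to sketch in a few lines of JavaScript. The rule itself is standard (each new cell is the left neighbor XORed with the OR of the center and right cells); the fixed width and wrap-around edges here are my own simplifications:

```javascript
// One generation of Rule 30: new cell = left XOR (center OR right)
function rule30Step(row) {
    var next = [];
    for (var i = 0; i < row.length; i++) {
        var l = row[(i - 1 + row.length) % row.length]; // wrap at the edges
        var c = row[i];
        var r = row[(i + 1) % row.length];
        next.push(l ^ (c | r));
    }
    return next;
}

// start from a single live cell and watch complexity emerge
var row = new Array(31).fill(0);
row[15] = 1;
for (var gen = 0; gen < 15; gen++) {
    console.log(row.map(function(b) { return b ? "#" : " "; }).join(""));
    row = rule30Step(row);
}
```

Run it and the characteristic chaotic triangle pattern appears from that one seed cell, which is the whole point: a one-line update rule, genuinely complex output.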

The end result of all of this is the Wolfram Language, which they are just starting to release now in SDK form. By combining this language with the tools in Mathematica and the power of a data-collecting cloud, they have created something qualitatively different. Essentially a super-brain in the cloud.

The Wolfram Language is a 'knowledge based language', as he calls it. Most programming languages stay close to the operation of the machine. Most features are pushed into libraries or other programs. The Wolfram Language takes the opposite approach. It has as much as possible built in; the language itself does as much as possible, automating as much as possible for the programmer.

After explaining the philosophy Stephen did a few demos. He was using the Wolfram tool, which is a desktop app that constantly communicates with the cloud servers. In a few keystrokes he created 60k random numbers, then applied a bunch of statistical tests like mean, numerical value, and skewness. Essentially Mathematica. Then he drew his live Facebook friend network as a nicely laid out node graph. Next he captured a live camera image from his laptop, partitioned it into blocks of size 50, applied some filters, compressed the result to a single final image, and tweeted the result. He did all of this through the interactive tool with just a few commands. It really is a union of the textual, the visual, and the network.

For his next trick, Mr. Wolfram asked the cloud for a time series of air temperatures in Austin for the past year, then drew it as a graph. Again, he used only a few commands and all data was pulled from the Wolfram Cloud brain. Next he asked for the countries which border Ukraine, calculated the lengths of the borders, and made a chart. Next he asked the system for a list of all former Soviet republics, grabbed the flag image for each, then used a ‘nearest’ function to see which flag is closest to the French flag. This ‘nearest’ function is interesting because it isn’t a single function. Rather, the computer will automatically select the best algorithm from an exhaustive collection. It seems almost magical. He did a similar demo using images of hand written numbers and the ‘classify’ function to create a machine learning classifier for new hand drawn numbers.

He’s right. The Wolfram Language really does have everything built in. The cloud has factual data for almost everything. The contents of Wikipedia, many other public databases, and Wolfram’s own scientific databases are built in. The natural language parser makes it easier to work with. It knows that NYC probably means New York City, and can ask the human for clarification if needed. His overall goal is maximum automation. You define what you want the language to do and then it’s up to the language to figure out how to do it. It’s taken 25 years to make this language possible, and easy to learn and guess at. He claims they’ve invented new algorithms that are only possible because of this system.

Since all of the Wolfram Language is backed by the cloud they can do some interesting things. You can write a function and then publish it to their cloud service. The function becomes a JSON or XML web service, instantly live, with a unique URL. All data conversion and hosting is transparently handled for you. All symbolic computation is backed by their cloud. You can also publish a function as a web form. Function parameters become form input elements. As an example he created a simple function which takes the names of two cities and returns a map containing them. Published as a form, it shows the user two text fields asking for the city names. Type in two cities, press enter, and an image of a map is returned. These aren’t just plain text fields, though. They are backed by the full natural language understanding of the cloud. You get auto-completion and validation automatically. And it works perfectly on mobile devices.

Everything I saw was sort of mind blowing if we consider what this system will do after a few more iterations. The challenge, at least in my mind, is how to sell it. It’s tricky to sell a general purpose super-brain. Telling people "It can do anything" doesn't usually drive sales. They seem to be aware of this, however, as they now have a bunch of products specific to different industry verticals like physical sciences and healthcare. They don’t sell the super-brain itself, but specific tools backed by the brain. They also announced an SDK that will let developers write web and mobile apps that use the NLP parser and cloud brain as services. They want it to be as easy to put into an app as Google Maps. What will developers make with the SDK? They don’t know yet, but it sure will be exciting.

The upshot of all this? The future looks bright. It’s also inspired me to write a new version of my Amino Shell with improved features. Stay tuned.

For some reason the concept of Genetic Programming got stuck in my head the other evening. At midnight, after spending about four hours reading up on the topic around the web, I came away disappointed. The concept of evolving code the way genes do is fascinating but the results in the field seem to be very narrow and limiting. Thus began this rant.

This article called Genetic Programming: A Novel Failure probably sums up my thoughts best. Genetic programming is only a slight variation on solution searching algorithms. Based on my reading, most work in the field has focused on how to make systems converge on a solution more quickly, i.e., improving efficiency. This seems wrong, or at best premature.

We live in the 21st century. We have more CPU cycles than we know what to do with. Where are the systems that are wide but shallow? The ones that are really non-deterministic and will generate truly surprising results? We should be wasting cycles exploring new possibilities, not generating new solutions for known problems.

The free ebook, A Field Guide to Genetic Programming, is a great primer on the topic. I read through most of it the other night. My biggest frustration is that almost all genetic programming systems focus on evolving syntax trees, usually some form of Lisp or its semantic equivalent. I see why people would do this. Lisp code is easy to manipulate programmatically, so evolving it should be simple as well. There are other kinds of systems using different gene encodings, such as image arrays and direct byte code. However, these appear to be firmly in the minority. The Field Guide has a chapter on the topic, listing several alternate systems. The fact that these systems receive a single slender chapter while the rest of the book covers syntax trees gives you an idea of how under-explored the topic is.

When I first heard of genetic programming I imagined having a sequence of simple instructions that could be mutated. The instructions would be extremely simple and limited, perhaps simpler than most assembly languages. Evolving syntax trees certainly does let you make progress more quickly, but the generated solutions will be limited to the underlying tree language. Our genetic beasties will never evolve novel flow control systems, or invent a crazy kind of memory register. ASTs are great if we want to produce a human readable program as the result, but it still feels limiting.

I would like to see a system that is as open ended as possible. Create a system of small instructions of uniform length that can only manipulate basic storage and do simple math. Then give them as much freedom as possible. Let them live in a wide fitness landscape. A digital environment with a huge number of potential ecological niches. Ideally we could take this to the next step and give our digital creations physical bodies so that they may evolve targeting real world constraints. Evolving a robot's brain seems far more interesting than figuring out how to gain 10 milliseconds trading stocks. Of course we lose rapid iterations by running them in the real world, so it may be better to run them in a simulation of the real world at many times normal speed. I imagine we could build self driving cars this way, starting with bots that play racing games and then upgrading them to work with real world footage from actual self driving cars.
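To make the idea concrete, here's a toy sketch of that kind of system in JavaScript. Everything here (the three opcodes, the four registers, the hill-climbing loop) is invented for illustration and is far cruder than any real GP framework, but it shows instructions of uniform length being blindly mutated rather than syntax trees being rearranged:

```javascript
// A toy linear "genetic program": a fixed-length list of tiny
// instructions over four registers. Opcodes and encoding are invented.
function run(program) {
    var reg = [0, 0, 0, 0];
    program.forEach(function(ins) {
        var op = ins[0], a = ins[1], b = ins[2];
        if (op === 0) reg[a] = b;               // LOAD a small constant
        if (op === 1) reg[a] = reg[a] + reg[b]; // ADD
        if (op === 2) reg[a] = reg[a] * reg[b]; // MUL
    });
    return reg[0]; // register 0 holds the output
}

// Mutation: copy the program and randomly rewrite one instruction.
function mutate(program) {
    var copy = program.map(function(ins) { return ins.slice(); });
    var i = Math.floor(Math.random() * copy.length);
    copy[i] = [Math.floor(Math.random() * 3),  // random opcode
               Math.floor(Math.random() * 4),  // random register
               Math.floor(Math.random() * 4)]; // random register/constant
    return copy;
}

// Crude evolution: keep a mutant whenever it gets at least as close
// to the target value of 12.
var target = 12;
var best = [[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 0, 1]];
for (var gen = 0; gen < 2000; gen++) {
    var child = mutate(best);
    if (Math.abs(run(child) - target) <= Math.abs(run(best) - target)) {
        best = child;
    }
}
console.log("evolved program computes: " + run(best));
```

Note what's missing on purpose: no grammar, no tree operations, nothing that guarantees the result is readable. The open-ended version I'm arguing for would widen this, not narrow it toward faster convergence.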

Am I wrong? Is there cutting edge genetic programming that is truly open ended? What successes have they made?

Please send feedback and comments to my twitter account @joshmarinacci instead of on the blog. Thanks!


I recently wrote an amusing rant on programming languages called "An open letter to language designers: Kill your sacred cows." It was, um, not well received. If you read some of the comments on Reddit and Hacker News you see that most people think I'm an idiot, a novice, or know nothing about programming languages. *sigh*. Maybe I am.

Most people got hung up on my first point: don't store source code as plain text on disk. I thought it was innovative but relatively uncontroversial. Surely, one day we won't store source as ASCII text files, so let's start thinking about it now. I mean, surely the Starship Enterprise's OS wasn't written in plain text files with VIM. Surely! I guess I underestimated the Internet.

Perhaps Not

So why did I not spark any real discussion? First I must blame myself for writing in the form of a rant. I thought the obnoxious insults would be amusing jokes, very tongue in cheek. Alas, tone is very hard to convey on the Internet through prose. That is completely my fault. I think I made an even bigger error, though. I talked about some interesting ideas (or rather, what *not* to do) but I didn't give any solutions or go in-depth with my justifications. That is why I have written the essay you are reading now: to explore just the first point and see if it makes sense.

Engineering is about tradeoffs. Cheap, fast, and correct: pick any two. So it is with source code stored as something other than ASCII text. Any modification will require changes in the surrounding ecosystem of tools. Version control. Continuous build systems. Diff tools. IDEs. The question is whether the benefits outweigh the negatives plus the cost of change. In this essay I hope to prove to you that the benefits could outweigh the costs; or at least close enough that it is worth exploring.

What is a codebase?

A codebase is fundamentally a large graph structure. If you look at the typical Java code base you have a bunch of classes with varying visibility organized into packages. The classes contain fields and methods which are composed of statements and expressions. A big tree. Since classes can call each other in varying ways, not to mention other libraries on the class path, we get a potentially circular structure. A codebase also has non-code resources like images, as well as some semi-code structures like XML config files (I'll leave out the build infrastructure for now). So we get a big complex graph that grows bigger and more complex over time.

A good IDE will create this codebase graph in memory. This graph is what gives you things like cheap refactoring and code completion. But when you quit your IDE what does it do? It serializes the entire graph as code split across multiple files and directories. When you restart the IDE it must regenerate this entire graph in memory. Though it wastes some CPU, this system would be fine if everything from the in-memory graph was preserved and restored perfectly without any data loss. Sadly it does not. The source code can't preserve developer settings; those are stored elsewhere. It can't store a history of what the developer actually did and when; that is stored elsewhere or not at all. I think there is fundamentally something wrong with the fact that we are converting our complex graph into a lossy data store.

Source Code Is Lossy?

Yes, I think source code is lossy. A source file must meet two opposing demands. It must be human readable and writable with as little annoyance as possible. It must also be parseable into a graph structure (the Abstract Syntax Tree or AST) by the compiler without any ambiguity. These twin demands are sometimes in conflict, and when they are the compiler usually wins. Let me give you a simple example. Nested block comments.

In your typical C derived language like Java you can have block comments delimited by /* and */. You can also use single line comments that begin with // and finish at the end of the line. This basic system is a few decades old and has the unfortunate side effect that block comments cannot be nested. The following code is probably clear to a human but will choke the compiler.

some live code
/* first commented out code
     /* second commented out code */
more first commented out code */
more live code

I know what this means but it is impossible for the compiler. Of course, the compiler could start counting open and closing delimiters to figure it out, but then it might choke on a */ stored inside a string. Or the human could nest the single line comments with the help of an IDE. All of these are solvable problems with a more advanced compiler or improved syntax, but they further complicate the language definition and impose more mental cost on the human doing the coding.

The Alternative

Now consider an IDE and compiler that work directly on the graph. I select some code in my editor and click the 'comment' button. It marks that section of the graph as being in a comment. Now I select a larger chunk of code, containing the first chunk, and mark it as a comment. Internally the IDE has no problem with this. It's just another few layers in the tree graph. It can let me move in and out of the code chunks with no ambiguity because it is operating directly on the graph. If the IDE can store this graph to disk as a graph then the compiler can also do its thing with no ambiguity. If we first serialize to human readable source code then parts of the graph are lost and ambiguity results.
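The nested-comment case can be sketched concretely. In this toy model (the node shapes are invented, not any real IDE's storage format), a comment is just another node in the graph, so nesting comments is no different from nesting any other node, and the compiler simply skips comment subtrees:

```javascript
// Toy graph nodes: a statement is a leaf, a comment wraps children.
function comment(children) { return { type: "comment", children: children }; }
function stmt(text)        { return { type: "stmt",    text: text }; }

// The nested-comment example from above, stored as a graph.
// The inner comment sits inside the outer one with zero ambiguity.
var body = [
    stmt("some live code"),
    comment([
        stmt("first commented out code"),
        comment([stmt("second commented out code")]),
        stmt("more first commented out code")
    ]),
    stmt("more live code")
];

// A "compiler" walking the graph just ignores comment nodes.
// No delimiter counting, no worrying about */ inside strings.
function liveStatements(nodes) {
    var out = [];
    nodes.forEach(function(n) {
        if (n.type === "stmt") out.push(n.text);
        // comment subtrees are skipped entirely
    });
    return out;
}
console.log(liveStatements(body)); // ["some live code", "more live code"]
```

Commenting a region in the editor is then just wrapping nodes; uncommenting is unwrapping. The parsing problem disappears because there is no parse.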

This sounds like a trivial example, and quite frankly it is. That's the point. This is a simple case that is hard for the current system but trivial for a graph based system. How much more work must the compiler do to handle harder problems? Here are a few examples of things that become trivial with a graph system.

  • A version control system that loaded up a graph instead of text files could calculate perfect diffs. It would actually know that you renamed class A to class B because the IDE recorded that history. Instead, today it must reverse engineer this information by noticing one file is gone and another one appeared. If you renamed several classes between commits then the diff tool could get confused. With a graph the diffs would be perfect.
  • With perfect diffs the version control system could visualize them in a way that is far more useful. Rather than a bunch of text diffs it could say: "class A was renamed to B. Class D was renamed to G. Class E was deleted. Methods X, Y, and Z were moved up to superclass R. The contents of methods Q and R were modified. Click to see details."
  • Renaming and refactoring is trivial. We already know this because the IDE does it using an internal graph. What if other tools could do that? You could write a script that would perform complex modifications to entire code bases with ease. (James Gosling once experimented with such a system called Jackpot.)
  • As with comments, multi-line strings become trivial. Just paste it in. The IDE will properly insert it into the graph. The language definition doesn't have to care and the compiler will work perfectly.
  • Variable interpolation in strings is a function of the IDE rather than the language. The IDE can store "for ${foo} justice" as ("for " + foo + " justice") in the internal graph. This is passed to the compiler so it doesn't have to worry about escaping rules. Any sort of text escaping becomes trivial to deal with.
  • Since the parsing step is gone we no longer need to worry about delimiters like {} and (). We can still use them in the IDE to make the code more comprehensible to humans, but some programmers might prefer indentation to delimiters. The IDE will take care of it, showing them if you wish or hiding them. It makes no difference to the underlying graph or the compiler. Here's a great example of an IDE built by Kirill Osenkov with very creative ways to visualize code blocks.

New Things

Once we have fully committed to storing the graph directly rather than the lossy plain text phase, other things become possible. We can store many forms of metadata directly in the graph rather than splitting it out into separate classes or storing it outside the code repo. Now the IDE can focus on making life better for the developer. Here are a few things I'd like to see.

  • A better interface for creating regexes. Define your regex in a special editor embedded into the main code editor. This editor not only provides docs and syntax highlighting, it can store example data along with the regex. Are you parsing credit card numbers? Type in 20 example strings showing the many ways it could go wrong. The editor shows you how they pass or fail. It also stores them with the source, to be executed as unit tests by the continuous build system.
  • Inline docs would be editable with a proper WYSIWYG interface. No more having to remember the particular markup syntax used by this language. (Though you could still switch to raw markdown, html, etc. if you chose.) Images in docs would become more common as well. You could also select a snippet of live code and mark it as being an example for the docs above. Then your example code will never be out of date.
  • Inline unit tests. Why must our unit tests be in a separate hierarchy run with separate commands? If it is good to keep docs near the code then it seems logical to keep the tests with the code as well. The IDE could give me a list of tests relevant to the current block of code I'm looking at. I can see the history of these tests and whether they currently fail. I can create a new test for the current code without creating new classes and files. The easier we make it to create tests the more will be created.
  • Edit and store non-code resources inline. Icons, vector artwork, animation interpolators, hex values for colors. These would all be stored as part of the source and edited inline using the appropriate interface. Colors can be edited with a color picker instead of remembering hex codes. Animation can be edited with a bezier curve instead of guessing at values. The icon would actually be shown visually instead of just a file path. All of these things can be done today, but they become easier and more standardized if we work directly on the graph instead of plain text. Here is an editor called Field that shows some of the possibilities.

  • Once we get away from the file being the atomic unit of code, then we can start to arrange our code visually in other ways; ways that might be more useful to the human rather than the compiler. Chris Granger just posted a video for a cool new IDE concept called Light Table.

Conclusion

I hope I have shown you that there is a wide world of new coding ideas being explored, most of them not related to the language definition but rather to how we human programmers interact with the code. Certainly there are costs associated with such a change, but I think the costs could be dealt with if the benefits are worth it, and I clearly believe they are. We can update version control systems to work with a graph instead of text diffs. We can define a standard for storing the graph on disk so that any IDE can work with it. We can create libraries for manipulating the graph, enabling others to easily create new tools. Those are all solvable problems if we decide we want to.

Ultimately my disappointment with new programming languages, and the impetus for my original essay, is that we live in the twenty first century but we still code like it's the 1970s. I want to see our field move forward by leaps and bounds, not incremental improvements to C, Smalltalk, and Lisp. The challenges our industry faces will be easier to deal with if we aren't wedded to our current notions of how coding is done. 100 core parallelization. Code scaling from non-smartphones to global computing networks. 3D interfaces built from scene graphs and real time imagery. Ubiquitous computing and latency driven design. We have a lot coming, and coming soon, but our language researchers are acting like compilers and languages are a solved problem with only a few tweaks needed. And if we, the real world programmers, are afraid to try anything too radical then perhaps the researchers are right.

But I don't think so. The challenges coming this century are huge. We should invent new tools to solve them.

Richard Kerris leaving HP for "an opportunity outside"

Hot on the heels of my Canvas talk at OSCON (which went very well. Much thanks to everyone who attended), I've put up a post on the developer blog about the great new Canvas stuff in webOS 3.0.  Most importantly, speed has been doubled for certain drawing operations! I'm very proud of the graphics team here at Palm.

You can read the full description of the changes here.

- Josh

Yes, the future is now, and not just because I bought a 2TB hard drive for less than $150. Bionic hands, self driving cars, and printable solar cells...

Bluetooth has a use?!

While it hasn't been widely known, because I am not widely known :), I haven't been a big fan of Bluetooth devices. Due to their short range they end up simply replacing 6 foot wires, at an increased device cost and with the extra hassle of having one more thing to charge (plus interference with endless other devices). But here's something that might change my mind... a Bluetooth hand!

This prosthetic hand lets you tweak settings via Bluetooth. It can handle up to a 200 lb load, which clearly puts it into Six Million Dollar Man range. Hmm. Perhaps that wireless technology is good after all.

Autonomous Audi TT

James Gosling, inventor of Java and my former co-worker at Sun, has been helping some Stanford students work on an autonomous car. Along with Audi, they've announced a TT-S that will attempt to drive the Pike's Peak International Hill Climb. (PS, it's partly running Java and Solaris).

eBooks as eJournalism?

Imagine the world two years from now when the tablet form factor is successful and we are all eReading. Monday Note tackles the question. Less overhead, easier access, newer long form formats emerge. The ebook as a form of journalism. The future might be Awesome!

Paper Solar Cells

They say that the key to making something cheap is to find a way to build it using microchip technology. Then you get to free ride on Moore's law and have someone else fab it for you. Flash memory followed this trend. So did accelerometers. They were once mechanical devices the size of a soda can. Once they could be made using CPU fabbing techniques the price and size dropped precipitously, and now they are being embedded in virtually everything.

So what's the next step beyond microchip tech... paper. If you can make something printable on paper then you can make it cheap. Amazingly cheap. And that's what may happen with solar cells. MIT has demonstrated a solar cell technology using essentially a fancy inkjet printer. The efficiency isn't great, but if it's a factor of a hundred cheaper than rigid cells no one will care.

40+ Vintage Posters to Inspire Your Next Design's Color Palette

From The Design Cubicle: [link]

I'm always looking for design inspiration, especially trends from previous eras. This link has forty really cool retro (or at least retro-looking) posters that pop with great colors. The Nerd eyeglass pattern is one of my favorites.

10 Tips for Creating Compelling Web Copy

From the Web Design Ledger: [link]

Part of creating a good experience for your users is the language you use on your website and in your documentation. Your text must engage the reader and keep them entertained as well as informed. And most of all you must not waste your user's time. After all, user is just short for future customer, and customers don't want you to take them for granted.

Complete Beginner’s Guide to Interaction Design

From UX Booth: [link]

The title says it all!

Yahoo! Design Pattern Library

Yahoo: [link]

Found via the aforementioned guide to interaction design. A catalog of different design patterns from accordion to the convention of labeling stuff with Your.

Branching is Fun

From Mr. Doob [link]

Not at all design related, but I'm a sucker for cool visualizations and graphics effects.

And for when you really must think outside the box

[found via Peta Pixel]

"Outside the Box" from Joseph Pelling on Vimeo.

It's been a while since I've written anything, especially anything other than the topics of Amino and Leonardo. While I rarely talk about my personal life on this site, I feel I should let you all know what's been going on, and how the next few months look.

Of Death and Life

The last couple of months have been a bit rough. My wife had two deaths in her family only five days apart. We've spent most of the last two months with family and traveling to California. At the same time Jen and I are experiencing the joys of having our first child. Our son, Jesse Paul, is due May 27th. The name 'Jesse' is from the biblical father of David, which means "Gift from God". 'Paul' is from my uncle who passed away last year. Uncle Paul taught me to program as a child and had a huge influence on me and my career, and so is very special to me.


Now that we are home, with no more travel in sight for my wife at least, life should return to normal. There's more travel for me, however. In a week I'll fly down to Sunnyvale to spend a week with my team at Palm HQ and prepare for our big event on February 9th. This is going to be huge. I can't wait for you to see what we've got in store. After the event I'll be home for a day and then fly to Barcelona to talk about our new products at Mobile World Congress. If you are going to be at MWC, give me a shout for some drinks.

After MWC I'll have some time at home before doing a series of smaller events (which I'll talk about later), and then be home for Jesse's birth.


I haven't done as much work on my side projects as I wanted, and things will slow down even more once Jesse is born, but I'm still making progress. Here's a rough roadmap:

Leonardo, the open source drawing program. I've been slowly polishing the UI and adding features. Object effects are high on the list (dropshadow, blur, emboss, etc.) as are a full set of gradients and texture fills. I'm hoping for another release by the end of February which will also include more path features like transforms, rotation, and converting shapes to paths. The big UI overhaul and sharing features will have to wait for later.

Amino 1.0: a full UI toolkit. I've continued to fill in missing features and controls, including the new Table and Tree/TreeTable views. I've got a few more bugs and memory leaks to fix before I'll call Amino 1.0 final and finished, but I think it's close.

Amino 2.0. I've started working on Amino 2.0 in a branch. Amino 2 is an almost complete rewrite, including a new cross platform stack that will support Java2D, JOGL with shaders, and potentially HTML canvas. It will be much more modular; strictly separating a super fast 2.5d graphics API, a flexible scenegraph, and then the current Amino GUI widgets and CSS on top. If you've seen this screenshot of playing a high def video through a realtime pixel shader, then you've seen the potential of this approach. The new design will make Amino more flexible and easier to evolve. I don't have a timetable for when it will be done, but I plan to open the repo for contributions very soon. For developers the APIs won't change much, but the under the hood changes will be huge.

So, in short, Amino and Leonardo have a strong future, just a bit slower than originally planned. I also have great new things coming with webOS and I can't wait to teach you how to use some of the cool new stuff. So bear with me. The future looks both bright and busy.

I've been posting little teasers of the CNC machine I'm working on.  It's time to reveal a bit more.

In my spare time I have been working on a tiny two axis CNC machine that will be powerful enough to move a pen around like a plotter. Future versions will be larger and handle more powerful tools, but this is a good start to work out the kinks.  

So here she is in all her glory:

She's a two axis CNC. The X axis is the lower level and the Y axis is the upper, set at 90 degrees. A future version will have a Z axis as well. The structure is made of aluminum extrusion from Open Beam. This version uses about 4 meters.  You can see the nice ABS brackets on the corners, also from OpenBeam.  

Each axis consists of a structure sitting on sliders built from roller skate bearings. For movement I'm using standard stepper motors from SparkFun powered by EasyDriver boards.  The steppers convert rotation into linear motion through the long threadscrews. While you can spend a great deal on threadscrews, these are just standard steel all thread from the hardware store (~3 USD). Everything is bolted together with Open Beam screws, which are standard M3 6mm screws and matching nuts.
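As a rough sanity check on the resolution this setup gives, here's the travel-per-step arithmetic. The specific numbers are my assumptions, not measurements from this machine: a common 5/16"-18 coarse thread, a typical 200-step-per-revolution motor, and the EasyDriver's default 1/8 microstepping:

```javascript
// Rough travel-per-step math for an all-thread lead screw.
// Assumed: 5/16"-18 coarse thread, 200 steps/rev, 1/8 microstepping.
var threadsPerInch = 18;                 // 5/16"-18 coarse all thread
var pitchMm = 25.4 / threadsPerInch;     // axis travel per screw revolution, in mm
var stepsPerRev = 200 * 8;               // full steps per rev * microsteps
var travelPerStepMm = pitchMm / stepsPerRev;

console.log("travel per rev:  " + pitchMm.toFixed(3) + " mm");
console.log("travel per step: " + (travelPerStepMm * 1000).toFixed(2) + " microns");
```

That works out to roughly 1.4 mm of travel per revolution and under a micron per microstep, which is why cheap all thread is plenty precise for dragging a pen around; the slow feed rate is the real tradeoff.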


Stepper motors have to be powered and controlled by a driver board. I'm using the open source EasyDriver from SparkFun.  There are more powerful motors and boards available, but these will suffice for version 1.  They are cheap, effective, and very simple to wire up as you can see in this photo:


Each stepper motor has its own driver board.  The four wires go into the four motor pins of the board, labeled A1, A2, B1, and B2.  These aren't actually labeled on the motors themselves so I had to test and label them by hand.  Each board also has power and ground.  They receive more than 5V since the motors usually need more. In my case I've connected them directly to a 9V battery. This would never be enough power in production, but it's okay for testing and very portable.  I'll dive more into the EasyDrivers later, but for now just know that they can take almost any power you can throw at them and convert it safely to run your motors.

Finally the drivers have ground, step, and direction pins connected to the Arduino, which you can see in this photo:

The Arduino code toggles the two step pins over and over, occasionally changing the direction.  That's all there is to the electronics.  Arduino and the EasyDriver make it very simple.

Now let's look at the mechanicals:


The beams are aluminum extrusions from Open Beam. In addition to being open hardware (the schematics are available to make your own) it has a few nifty features.  Take a look at this end cut:

There is one channel per side. Each channel takes a metric M3 nut or screw.  Hex nuts are the perfect size to slide in easily and not rotate. Because these are standard sizes you can buy screws from other vendors, such as these black anodized 16mm screws I got from Amazon.

I've cut most of the extrusion by hand using a hacksaw and a plastic miter box (the kind used for cutting wood). Eventually I bought a cheap 6" power saw from Harbor Freight that uses cutoff disks. The final cuts are smoother than a hacksaw can produce and are done in a quarter of the time. It's good enough for 60 USD but eventually I will probably upgrade to a proper miter saw with a metal cutting blade.


The extrusions are put together using injection molded ABS plastic brackets, also from Open Beam.  I mostly use angle brackets with a few Ts thrown in.

The stepper motors are attached with Open Beam stepper brackets. These are designed to accept any NEMA 17 stepper motor and come with matching brackets for the other end.

But here is the genius part: Open Beam sells an adapter which accepts #608 roller skate bearings, which look like this:

These brackets make building a lead screw very easy. I chose a 5/16-inch all-thread screw from my local department store (~3 USD). It fits inside a roller skate bearing perfectly.  The blue stuff in the photo is the skate bearing.

Speaking of skate bearings, they are completely awesome. But that will have to wait until tomorrow.

Next Time

I'll go into the bearings and shaft coupler tomorrow, two of the most critical parts of a CNC machine. Oh, and one more question. What should I name this contraption?

I’m happy to announce the release of Electron 0.3. While there are a few under the hood improvements, the big news is a brand new user interface. We’ve rewritten the UI from scratch using Angular JS. This will make Electron more stable and easier to improve in the future.

New UI features:

  • a proper file browser on the left, with collapsing folders
  • collapsing, resizable panels
  • new dialogs to search for boards and libraries
  • new fonts (Source Sans Pro, Source Serif Pro, and Source Code Pro)
  • proper multiple tabs for editing source files

Boards and Libs

The data repo has added support for:

  • Adafruit Trinket & Gemma
  • Flora
  • timer libs
  • Arduino Yun
  • Esplora
  • DigiSpark Tiny / Pro (buggy)
  • Fio

Broken Stuff

A few things that worked before are now broken, so be aware:

  • serial console (filled with mockup text)
  • docs viewer (filled with mockup text)
  • '+'/'-' buttons for creating/deleting sketches

Thanks to contributors:

  • Alan Holding
  • Nick Oliver
  • Walter Lapchynski

The next release (in a few weeks, hopefully) will focus only on bug fixes and app-izing Electron to have only a single download (no git required).

We need your help testing for the next release. Please file your kudos, bugs, and requests here.

Thanks, Josh


I mentioned a few days ago that I updated my Canvas Deep Dive ebook and dropped the price to free. But wait, there's more! I've also completely open sourced the book. My goal with Deep Dive was always to disseminate information as widely as possible. There is no better way to do that than giving it to the world.

The source to the book is now on GitHub here.
I've created a separate repo for the example code here.
The compiled web version of the book is here.

There are several reasons I think Canvas Deep Dive is different from the other canvas resources out there. First, it is comprehensive. Starting from chapter one the reader learns what canvas is, how it works, and the basic functions for drawing shapes. From there it moves on through charts, animation, pixel manipulation, WebGL with ThreeJS, WebAudio, and finally a quick introduction to the experimental webcam API. Along the way the reader is walked through hands-on exercises that ensure they understand the concepts.

Second: the book is interactive. Any text in a gray box has a glossary definition. The first chapter shows off interactive code snippets. Any number in red text can be dragged to change the value and the graphics are redrawn, so the reader can directly see how the code changes the output. The hands-on labs have inline demos so you can compare your code with the expected result.

Now that the source to the book is out there I need your help to make it better. Any contributions are welcome, from correcting spelling mistakes to writing new chapters. When changes are committed the entire book will be rebuilt by my Hudson server here. Periodically I will upload new versions of the book as an app to the iPad and TouchPad app stores. I'd also like help supporting Android tablets (I'm not much of an Android developer, I'm afraid). All pull requests are welcome.

Together we can make this the best way to learn HTML Canvas.


The Last Rocket

I first met Shaun Inman at a conference two years ago where he showed me an iOS game he was working on. Since then his NES-style platformer The Last Rocket has become a hit on iOS, and he has contributed to several Ludum Dares. Shaun and his friends are in the final days of a Kickstarter project to build six retro-styled games in six months. I asked Shaun to tell me a bit about the project and the motives behind it.

As of this writing Retro Game Crunch is only 60% funded with 8 days left. If you pledge now you will get to contribute to the design and plot, then receive an awesome finished game each month along with a variety of bonus goodies.

Josh Hi Shaun. Thanks for joining me. Before we get into the Retro Game Crunch can you tell me a bit about yourself? Where did you go to school? Where do you live now? What did you do before you got into game building?

Shaun Thanks for having me Josh! I'm an independent game designer and developer. I graduated from the Savannah College of Art and Design's Graphic Design program. I currently live in Chattanooga, Tennessee. Before I got into game design, I primarily designed and developed the web apps Mint and Fever.

Josh What drew you into game building? And how did you decide on retro vs more modern art styles and genres?

Shaun I've always played games. When I wasn't playing them I was reading about them. When I wasn't reading about them I was day-dreaming about them. One day I realized that I had cobbled together enough skills in the three core disciplines of game design: graphic design, programming, and music, to tackle creating my own game. When I use the word "retro" I'm referring to limited palette, resolution, and sound channels, and a focus on one or two gameplay mechanics. The limitations reduce distraction and force a designer to zero in on what makes their game fun and unique.

Josh What was the inspiration for the Retro Game Crunch?

Shaun Retro Game Crunch was inspired by other game jams like Ludum Dare. The thematic and time constraints promote the same kind of focus as the retro aesthetic. Usually at the end of these jams, you go back to your other projects. After the most recent Ludum Dare, Rusty, Matt, and I didn't want to abandon the momentum we had built up with Super Clew Land. So we kept working on it for another month. We think the resulting Super Clew Land Complete turned out great and decided we wanted to do it (at least) six more times!

Josh I'm a huge fan of the NES/SNES era Final Fantasies. I've always felt they are the perfect blend of story, graphics, music, and level grinding. Any chance we will see any RPGs from this game crunch, or are you sticking with platformers, or even branching out to other genres?

Shaun We'll definitely be branching out. We love platformers as much as the next guy but we're very conscious of not making the games too same-y. Rusty and I both love the 16-bit Final Fantasies too and would love to make an RPG. It's a tall order for 30 days but if a theme presents itself that would best be served by an RPG we might just have to go for it.

Josh Have you ever collaborated with others before on a game? How do you think these games will be different from your solo efforts?

Shaun Before Super Clew Land, Matt and I worked together on Flip's Escape and an unreleased iOS game. Neven Mrgan, Alex Ogle, and I created Millinaut during a previous Ludum Dare. Seeing what we were able to accomplish in such a short time made me crave collaboration—which is crazy because I've been working solo for more than seven years!

Josh After the six games are done what are your plans for them? Will they be ported to other platforms? Print up t-shirts? Start a Saturday morning cartoon?

Shaun Ha! We haven't really decided what to do with the games after they're in backers' hands. If any of the games seem well-suited for touch input we'd love to port them to iOS. I'd also love to have Ashley Davis, who did the illustrations for the posters, do illustrations of each character we create. An extended poster series? Who knows!

Josh I loved the Last Rocket. It has such a fun style. Any chance we'd see a sequel?

Shaun Maybe. I've already developed a plot and new gameplay elements. I just need to find the time to develop it!

Josh When we first met you showed me a prototype you were working on. In the game the character could switch from Game Boy era graphics to Super Nintendo. Whatever happened to that?

Shaun Mimeo and the Kleptopus King was a ridiculously ambitious project for my first real game. The tech demo was cool but couldn't support a full game (it was just too inefficient). I'd love to pick up the idea again once I'm a little more experienced. And maybe with a bigger team. Four resolutions means four times the graphic, music, and level assets!

Josh It seems that retro game building has gone from a hobby to your full time job. After the Crunch is over do you think you will stick with making games or you will be sick of it?

Shaun I'm looking at Retro Game Crunch as a learning and growing experience. I doubt I'll be sick of it. Rusty, Matt, and I survived the first crunch and are eager to start again. We'll probably weather the six months just fine!


Shaun let me know they just added something new. If you pledge Retro Game Crunch now you will also get a Bonus Jam Pack with Mac/Windows versions of several games by their indie-coder friends, including Fathom, Escape, and Midas.

One last update before the auction begins. I'm labeling everything and checking that all the devices charge. No bum-devices here. I can't vouch for the never been opened vintage PalmOS devices but everything that is open will charge.

New Items

I've received a few more items. Another 32GB ATT TouchPad, including a Touchstone charger. Plus some one-of-a-kind Palm and HP T-shirts that were given out to staff for the original TouchPad launch event. Oh, and webOS Nation sent us some rad coffee mugs. Even if you don't use a webOS device anymore you can still show your webOS pride. Many thanks to all of our donors.


A note on shipping. When you check out after the auction ends you should be able to include extra for shipping. It is $10 for a single item, or $5 if you get multiple items (I don't think anything is so big that it will cost too much more). If you live outside the US just email me to work out exact postage.

We'll see you Sunday morning with the link to the auction to check out all the merch. Remember to join the mailing list so you get the link. Thanks again.

JavaScript has always had properties, but support for property accessors and mutators was only recently added; or rather, it is finally supported in enough browsers that we can reliably use it. Property accessors and mutators are what we would call in the Java world "getters and setters". The cool thing about the new JavaScript version is that you don't call a getX method. Rather, you set a property the way you always have, as a variable assignment, but the mutator method is called underneath.

This is a very powerful feature, and yet my research indicates that they are rarely used in practice. In fact I can find almost no reference to them beyond how the syntax works. Where are the cool crazy things you can do with them? They seem incredibly useful.

Here's how the syntax works:

var foo = {
    _x: 0,
    get x() { return this._x; },
    set x(v) { console.log("setting x to " + v); this._x = v; }
};

foo.x = 5; // prints 'setting x to 5'
console.log(foo.x); // prints 5

The ability to invoke a method whenever a property is set seems very powerful. For example, you could bind the width of a rectangle to the value of a slider. Whenever the slider changes the rectangle would move, no extra update code required. I think the new property system would also let us have proxy methods, meaning we replace a setter or getter with a new implementation that does something special then calls the old implementation.
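The slider-to-rectangle binding described above can be sketched with a plain mutator. The slider and rect objects here are hypothetical stand-ins, and the width mapping is arbitrary:

```javascript
// A minimal sketch of binding via a mutator. 'slider' and 'rect' are
// made-up objects, not from any real widget library.
var rect = { width: 0 };

var slider = {
    _value: 0,
    get value() { return this._value; },
    set value(v) {
        this._value = v;
        // the binding: assigning to the slider updates the rectangle,
        // with no extra update code at the call site
        rect.width = v * 2;
    }
};

slider.value = 50;       // a plain assignment...
console.log(rect.width); // ...but rect.width is now 100
```

The caller never knows a method ran; it just assigned a value.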

Then, after an object is created, we could add:

  • a listener to be called when a property changes
  • a filter on a property to veto certain value changes, e.g. restrict a value to 0-1 and veto anything else
  • a binding between any two variables, so that if x changes then y is updated to reflect the new value
  • a binding from the value of a slider to an object on screen, modified by an equation, e.g. rect.width = slider.value * 3
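The first two items on that list can be retrofitted onto an already-created object with Object.defineProperty. The watch helper and its names are my own invention for this sketch, not an established API:

```javascript
// Sketch: wrap an existing property with a change listener and a veto
// filter, after the object was created.
function watch(obj, prop, listener, filter) {
    var current = obj[prop];
    Object.defineProperty(obj, prop, {
        get: function() { return current; },
        set: function(v) {
            if (filter && !filter(v)) return;   // veto the change
            current = v;
            if (listener) listener(v);          // notify after the change
        }
    });
}

var rect = { opacity: 1 };
var log = [];
// restrict opacity to 0..1 and record every accepted change
watch(rect, "opacity",
    function(v) { log.push(v); },
    function(v) { return v >= 0 && v <= 1; });

rect.opacity = 0.5;   // accepted, listener fires
rect.opacity = 7;     // vetoed: value stays 0.5
console.log(rect.opacity, log);
```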

The new property syntax can also be used to create synthetic properties. A synthetic property is one that has no direct underlying storage. Rather, the value is calculated on the fly, or delegated to alternate storage. For example, you could have a rectangle object with x and y as true properties, then add a position property which is a shorthand that maps to the x and y underneath:

rect.x = 10;
rect.y = 20;
rect.pos = [10,20]; // pos is a synthetic property; really sets x and y underneath
console.log("the rect is at " + rect.pos); // really reads the values from x and y
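A working version of that pos property might look like this (the rect object itself is hypothetical):

```javascript
// Sketch of a synthetic 'pos' property: no storage of its own,
// it delegates to the real x and y properties underneath.
var rect = {
    x: 0,
    y: 0,
    get pos() { return [this.x, this.y]; },       // computed on the fly
    set pos(p) { this.x = p[0]; this.y = p[1]; }  // delegates to x and y
};

rect.pos = [10, 20];
console.log("the rect is at " + rect.pos); // "the rect is at 10,20"
```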

This could let us create more convenient APIs for certain domains like event handling.

function onDrag(mouse) {
    // set the rectangle's position to the mouse's position
    rect.pos = mouse.pos;
    // or set it to the mouse's position plus an offset
    rect.pos = mouse.pos.add(30,40);
}

I don't think we can do operator overloading, or else this would be possible: rect.pos = mouse.pos + [30,40];

We could also do structure swizzling like this:

color.bgra = color.rgba;
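Here is one way that swizzle could actually work, assuming a hypothetical color object that stores its channels as 0-255 numbers:

```javascript
// Sketch: rgba/bgra accessors that make the swizzle above work.
// The color object and its channel layout are assumptions.
var color = {
    r: 0, g: 0, b: 0, a: 255,
    get rgba() { return [this.r, this.g, this.b, this.a]; },
    set rgba(v) { this.r = v[0]; this.g = v[1]; this.b = v[2]; this.a = v[3]; },
    get bgra() { return [this.b, this.g, this.r, this.a]; },
    set bgra(v) { this.b = v[0]; this.g = v[1]; this.r = v[2]; this.a = v[3]; }
};

color.rgba = [255, 0, 0, 255];   // red
color.bgra = color.rgba;         // swizzle: swaps the red and blue channels
console.log(color.rgba);         // [0, 0, 255, 255] -- now blue
```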

Or modify colors stored as RGB using synthetic HSV properties, like this:

color.rgba = 0xff00ff00;
color.hue += 30;
color.saturation = 0.5;
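A synthetic hue property over plain r/g/b storage is doable with the standard RGB/HSV conversion formulas. The color object here is hypothetical, but the math is the textbook conversion:

```javascript
// Sketch: 'hue' has no storage of its own; reading it converts the stored
// rgb channels to HSV, and writing it converts back.
var color = {
    r: 255, g: 0, b: 0,
    get hue() {
        var r = this.r / 255, g = this.g / 255, b = this.b / 255;
        var max = Math.max(r, g, b), min = Math.min(r, g, b), d = max - min;
        if (d === 0) return 0;
        var h;
        if (max === r)      h = ((g - b) / d) % 6;
        else if (max === g) h = (b - r) / d + 2;
        else                h = (r - g) / d + 4;
        return (h * 60 + 360) % 360;
    },
    set hue(h) {
        // keep the current saturation and value, replace only the hue
        var max = Math.max(this.r, this.g, this.b) / 255;
        var min = Math.min(this.r, this.g, this.b) / 255;
        var v = max, s = max === 0 ? 0 : (max - min) / max;
        var c = v * s, x = c * (1 - Math.abs((h / 60) % 2 - 1)), m = v - c;
        var rgb = h < 60 ? [c,x,0] : h < 120 ? [x,c,0] : h < 180 ? [0,c,x]
                : h < 240 ? [0,x,c] : h < 300 ? [x,0,c] : [c,0,x];
        this.r = Math.round((rgb[0] + m) * 255);
        this.g = Math.round((rgb[1] + m) * 255);
        this.b = Math.round((rgb[2] + m) * 255);
    }
};

color.hue += 120;  // rotate red to green: plain assignment, real math underneath
console.log(color.r, color.g, color.b); // 0 255 0
```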

So my real question is: why is no one using this stuff? Where are the cool demos by language gurus who know far more about JavaScript than I do? Am I just missing the magic website that talks about it?

Please send feedback and comments to my twitter account @joshmarinacci instead of on the blog. Thanks!

The call for proposals for OSCON 2013 just went out. OSCON is the one conference I try to speak at every year because the topics are so diverse and interesting. And being just up the road in Portland doesn't hurt either. However, I'm having trouble deciding what to submit. Too many things interest me. So I thought I'd consult the wisdom of the crowd. What do you want to see?

I plan to submit an update to my 3 hour HTML Canvas workshop since it is still a very relevant technology. It has been my most popular session, however, I don't just want to rehash what I did the last two years. What new things would you like to see? A game engine? Graphing algorithms? UI toolkits? More on Audio and Input?

Even if you don't plan to attend OSCON, the Canvas talk may still be relevant to you. The content from these sessions has turned into my popular open source ebook HTML Canvas Deep Dive. The book accounts for 75% of the hits on my blog, making it the most popular thing I've written by far. Any updates for OSCON 2013 will go into a new edition of the book.

I'm also open to other topics that fall within my area of expertise. The internet of things? Arduino hacking? Mobile app design? Usability? I'll even consider some Java stuff if that's what you want. What are your burning topics for 2013?

With all of the hoopla last week about the innovative features in the new iBooks 2 I thought it would be instructive to see what could be done with pure HTML 5. I put together a little demo which adapts to screen sizes and has simple interactive content. Here's what it looks like:

ebook prototype

View the live demo here. I highly suggest you load it up on an iPad as well. Try rotating the screen (or make the browser window narrow if you are on a desktop.) Keep in mind this is just a prototype. It's not skinned or made to look pretty at all.

You can spin the little sphere tree by dragging with your mouse. If you view this on a tablet it will respond to touch events as well. You can click on the photo to see a large fullscreen version. The footer will stay fixed to the bottom. The inline photo and canvas will resize and move to the left edge if you switch to portrait mode.

My goal for this effort is to use pure semantic HTML. If we want to scale this up to a full book then the markup needs to be as close to pure text as possible. I'm using only divs with paragraphs and headers for all textual content. The interactive stuff is canvas or img tags. All interaction is standardized and put into reusable JavaScript files, except for the actual dot tree itself (though it does use reusable components). I had to add a little hack to make the canvas resize properly, but that is also reusable. The footer and general layout is pure CSS.
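I won't claim this is the exact resize hack used in the prototype, but the core of such a hack is usually a small calculation like this sketch (the function name and numbers are illustrative):

```javascript
// Sketch: compute new canvas dimensions that fill the container width
// while preserving the drawing's aspect ratio. In the page, the result
// would be assigned to canvas.width/canvas.height on every resize or
// orientation change, followed by a redraw.
function fitCanvas(containerWidth, aspectRatio, maxWidth) {
    var width = Math.min(containerWidth, maxWidth);
    return { width: width, height: Math.round(width / aspectRatio) };
}

var size = fitCanvas(320, 16 / 9, 600);
console.log(size); // { width: 320, height: 180 }
```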

I think the prototype turned out pretty well. It would be easy to scale up to a full book with one page per chapter. The table of contents, index, and glossary would have to be written by hand unless we have some automated tool to do it, though I think such tools already exist.

So far I haven't found much that the new iBooks 2 format does that couldn't be done with plain HTML. They did add nice columns and shaped floats, but that is part of the CSS 3 specs and could be implemented in the renderer quite easily. (And shaped floats have been hacked into CSS2 for over a decade!)

I do like the look of iBooks Author. It's a very nice visual tool that should let people create books without any programming experience. The new terms of service are a different issue, but after thinking about it I've decided I don't care. iBooks Author is a tool for formatting content to be sold in the iBookstore. It really has no purpose other than that so the terms of use don't make much difference. Amazon has their own tools for their own store as well, and I expect them to get better with the advent of the Kindle Fire.

While I would love to see a beautiful visual editor like iBooks Author that formats content for all possible bookstores I don't really expect Apple to build it for me. Such a solution will have to come from a third party. Fortunately third parties now know that Apple won't compete with them to build such a tool, so I expect we'll see something like it soon.

Anyway, back to my HTML 5 prototype. Anything else you'd like to see it do?

Please send feedback and comments to my twitter account @joshmarinacci instead of on the blog. Thanks!






How to Hide Wires in the Wall

If you are like me, and the fact that you are reading this suggests a certain kinship, then you have many electronic devices scattered around your house; each with their own wires for communication and power. Wires are wondrous. They form the basis of our information economy. Unfortunately, in a shared space like a living room, they are also atrociously ugly. For your consideration: my living room.

Okay. So that doesn't look quite so bad. However, here is what the couch is hiding.

Even worse.


My wife and I have been TV-less for several years, mostly watching Hulu on our laptops. When we re-designed the house after Jesse was born I decided it was time for a TV. We still watch Hulu and Netflix, but now streamed through a tiny Roku box.

Originally I considered painting the wires or hiding them behind a stand of some sort. Then I looked at various rubber strips which can disguise the wires. In the end none of them were quite what I wanted. Really I want the TV to just look like it is floating.

A friend happened to notice the wires a few weeks ago and I asked for suggestions on how to deal with the problem. He said the solution was easy: the local hardware store carries simple boxes to run the cables inside the wall. I proclaimed my horror at drilling interior holes to run the wire, but he continued: since this is a vertical drop directly from the TV to the wall outlet, I would not have to go between studs; just two holes directly into the wallboard.

I've always been nervous about dealing with wallboard attachments but at my friend's urging I gave it a go. First, I needed the special mounts from my local hardware store. They are essentially plastic rectangles with tiny wings on the back. It doesn't matter if you are next to a stud or not, because when you tighten the screws the wings pop out, securing the mount to the wallboard. They don't provide enough grip to attach something heavy, like a shelf, but these rectangles hold no weight; they are just there to guide the wires and keep the hole stable.

Next, I cut two rectangular holes. This is done with a small rough saw called a wallboard knife. I pressed into the wall with the knife until it reached the cavity behind the wallboard then began sawing up and down to create a slot, then expanded the slot into a rectangle. Always start small and slowly expand to fill the space needed for the rectangle mounts. Once they fit tightly I stopped and tightened the screws.

Threading the Needle

The next challenge was getting the wires through the hole. If there was no insulation then my TV cables would simply drop down the cavity to the lower hole. Unfortunately this wall is right next to the garage. Lots of insulation present. Hmm.

Strategy: start small and bootstrap from there. Professionals have something called fish tape, but I only plan to do this once so I didn't spring for it. Instead I began with an un-bent coat hanger wire. The hanger is just barely long enough to stretch between the two holes so I was really hoping it would work. After a few tries I got it. Next I tied twine to one end and pulled the hanger back out, threading twine through the hole. I immediately tied the twine into a loop so it wouldn't slip out. Using the twine I pulled the two cables through: one for power and the other for HDMI.

Unfortunately at this point I reached an unexpected snag. The break between the TV's cord and the extension cord occurred right in the middle of the wall. Putting a connection inside the wall was just asking for trouble. What would I do if it came undone, say by a small child pulling on one end? So I made a quick trip to the store for a longer extension cord and slid the break up to the top hole.

I'm happy with the result

At this point the job is functionally complete. I can power and control the TV with all extraneous boxes (like the Roku) safely hidden on the floor under the couch (no more strain on the Roku connectors). To be nice and polished I want to put covers over the holes. Unfortunately the screw hole spacing on the hole mounts is not the same as regular light and power outlets so standard covers won't work. Instead I purchased a special 'media cover' designed for this purpose, complete with little brushes to keep out the dust. Sadly the gap between the brushes is too narrow to fit the end of the power cord. *le sigh*. Back to the hardware store.

I still haven't solved this last part of the problem. I suspect I will have to get a completely blank cover which does use the right screw hole spacing, then use a dremel to create the appropriate opening for my cables.

Total cost: $3 for the plastic mounts and $6 for longer cables from Amazon. Total installation time: about 30 minutes to cut holes, install brackets, thread the cables, and clean up the mess.

A note: some have said that you shouldn't put power cabling next to A/V cables. In general this is true, but HDMI is digital. The signal is either there or it's not. RF interference generally isn't an issue with pure digital signals. This is also why a $3 HDMI cable (15 feet) from Amazon is every bit as good as the $50 absurdities sold by Monster Cable (link not provided).

If these instructions help you hide wires in your own home, please post in the comments below.



After several months of work, nestled in between getting webOS 3.0 out the door and prepping the nursery for the pending arrival of my first child, I am happy to announce the release of Amino 1.0. I have been eagerly following the development of HTML 5 Canvas support in the major browsers, as well as ensuring the HP TouchPad will have great support for it. Amino is a great way to use the power of Canvas in modern mobile and web applications.


What is Amino?

Amino is a small scene graph library for both JavaScript and Java, letting you embed smooth animated graphics in webpages with the Canvas tag and in desktop Java applications with a custom JComponent. Amino provides a simple API to do common tasks, while still being extensible. For example, to create a rectangle that loops back and forth in a webpage, just do this:

var runner = new Runner();
runner.setCanvas(document.getElementById('canvas'));
// create a rect filled with red and a black 5px border
var r = new Rect().set(10,20,50,50).setFill("red").setStroke("black").setStrokeWidth(5);
runner.setRoot(r);
// animate r.x from 0 -> 300 over 5.5 seconds, and repeat
runner.addAnim(new PropAnim(r, "x", 0, 300, 5.5).setLoop(true));
// kick off the animation
runner.start();

See the results on the Amino homepage here.

What can it do?

Amino can draw basic shapes (rects, circles, paths) and animate them using properties (width goes from 10 to 20 over 3.2 seconds) and callbacks. It also can buffer images to speed up common operations, manage varying framerates on different devices, and do Photoshop like image filtering (brightness, contrast, adjustment). And finally, Amino supports keyboard and mouse events on nodes within the scene, not just for DOM elements. In short, it's a portable scene graph for making interactive applications. Amino handles the hard work of processing input and scheduling the drawing to ensure a fast consistent framerate.

How about some examples?

I'm glad you asked. I've put together a couple of examples to show the variety of things you could do with Amino.

PlanetRover is a simple multilevel sidescroller game with jumping and collision detection.


Big Fish, Little Fish is a page from a hypothetical children's ebook, showing how text can be enriched with animation, images, and custom fonts.


This LineChart component is a super easy way to render graphical data in the browser with a minimal API.


These examples and more are available on the Amino Gallery Page.

How do I get it? What's the License?

Amino is fully open source with the BSD license. You can download the source and binaries from here, or link directly to amino-1.0b.js from here in your app with a script tag.

If you'd like to contribute, or just want to let us know the cool stuff you are doing with Amino, please join the developer list or send an email to joshua at marinacci dot org

So I been thinking. Design comes in waves. First it was page curls and drop shadows. Then came glossy buttons and wet floors, followed by shiny badges and rough textures. Today 'flat' is the leading trend in UI design. It certainly defines the look of Web ’13. Even Apple has jumped on the bandwagon. But what comes next? Where do we go from here? Whither 2014?

The problem with flat is that we can’t make flat flatter. Flat may be perfection, but once you hit 10 you’ve got nowhere to go. We need one more; and I think I've found it: plaid.

I know plaid may seem to be a controversial choice, but it actually has a lot of things going for it. Plaid is more than just what you get when half of your stripes turn left. Plaid can meet or beat Flat on every level. Plaid has skillz.

Plaid Hides the Chrome

One of the core principles of Flat UI design is to make the interface controls subdued so that content will be highlighted. Plaid has this in spades. Take a look at this room.

Subdued UI Controls, from Oxford Design Studio

The plaid design elegantly highlights the non-plaid potted fern in center. A plaid elephant could be hiding in this room and you’d never know it. Plaid is a highly effective way to hide your design.

Plaid Makes a Statement

Flat minimalist UIs are bold. They make a statement. They say: "Hey, here’s my content. Deal with it.” Nothing makes a statement more than plaid. No one’s going to mess with you and your herd of sheep when you're dressed like this.

Plaid means serious business

Plaid is Timeless

Plaid is timeless. It’s here to stay. In the fashion world some designs go in and out of style but plaid has survived. Endless flirtations with polka dots and checker patterns have come and gone, but plaid is still with us. It has a classic design that continues to survive.

These guys wouldn’t look out of place at a modern rave.
image from Reconstructing History

Plaid isn’t Skeuomorphic

Flat UI aims to be authentic. There is nothing more authentic than plaid. While it does originate in real-world materials, plaid is not skeuomorphic like other patterns such as leopard print and zebra stripe.

This zebra is highly skeuomorphic

Plaid is Classy

Plaid is classy. Who wears plaid? Lots of awesome people. One in particular stands out. Who? James Bond, that’s who.

From The Suits of James Bond, possibly the most awesome site in The Internets.

And who didn’t wear plaid? The enemies of James Bond.

See this guy? This guy didn’t wear plaid. Here he is in his minimalist black and white ensemble sipping his presumably flat cappuccino. No plaid to be seen.

Hugo Drax, non-plaid wearer and supervillain

I don’t have to tell you how it turned out when these two met.

"Well plaid, Mr. Bond. Well plaid."

That’s "Sir Plaid" to You

Plaid has a deliciously colorful history. Originally used for straining curds into cheese, this illustrious pattern was little known outside of dairy production until the Great Plaid Revolt of 1763 when members of the Scottish House of Commons disagreed over the proper pronunciation of 'tartan'. Overzealous British soldiers in residence attempted to quell the violence with an ample dose of additional violence, resulting in the death of thousands. (see illustration).

History lesson from the Modern Tartan Shopping Guide

Though less well known, a similar rebellion centuries prior in Iceland resulted in the famed Viking Squid tartan design.

Royal Cephalopods love plaid

Plaid Doesn't Make You Dead

As this advertisement clearly states: plaid helps you stay alive. If you don’t wear plaid, you could end up dead. ‘nuff said.

sage advice

Plaid is Stripes x 2

Plaid is like stripes, but with twice the power. Stripes have long been used in UI design. Even Apple used stripes heavily in their next generation operating system, Mac OS X, presumably drawn from the real-world stripes of Jobs’s ubiquitous pinstriped suits.

You can just feel the stripes waiting to bust out

Apple has removed stripes almost entirely from their current user interfaces, but the fact that they left them in the progress bar shows that they believe stripes have a future. Stripes can still be found across the great plains of Dribbble, though their numbers have dwindled from their high in the mid-2000s.

Stripes may fade or crack over time. Dribbble

Clearly the time has come for something new.

Plaid is Fresh

Some of you may say: "Plaid may be fine for fashion, but I’ve never seen it in UI design." That's exactly my point. Be a leader, not a follower. Plaid UI is an uncharted territory waiting to be conquered by your startling be-striped designs.

There Can Be Only One

Getting started with Plaid

Here are a few tools to help you get started.

Plaid is a design trend waiting to be tapped. No longer will we be confined to flat and boring user interfaces. Plaid will make your designs memorable. When people see your new work there will be no doubt in their minds; they will know you’ve gone to plaid.

May The Schwartz be with you.

The book is done!

I am happy to announce that I have finished writing my new book for O'Reilly on GWT and PhoneGap. Well, not completely finished; my final draft is submitted but O'Reilly's talented team still has to work their technical magic to get it prepped for sale.

Oh yeah, and I got a cool animal as well!


The book is called Building Mobile Applications with Java. It will be about 70 pages long and be ready for purchase in March. The book shows you how to write code in Java, compile it into JavaScript with GWT, then bundle it up as an installable mobile app with PhoneGap. Since PhoneGap was recently donated to Apache the final title may use PhoneGap's new name: Cordova.

The book covers everything you need to get started with GWT and PhoneGap, including how to use the native SDKs for iOS, webOS, and Android. Then we dive into customizing the interface for mobile devices and using open source libraries to access device features. I also cover app UI design and optimization. Finally the book ends with a video game project where we use HTML Canvas to build a tilting physics game with a cartoon blob hero.

I am quite proud of the book, though I often find the process of long-form writing painful. The first 90% is super fun but the second 90% seems to take forever! I hope you enjoy the results. The book will soon be available for order on O'Reilly's website and wherever ebooks are sold.


I've heard a lot of noise recently about these newfangled smartphones and tablets not replacing 'real computers', especially since the announcement of many new tablet products, including the HP TouchPad. That they are just expensive Facebook machines. I've also heard people say that there's no room in the market for more devices: iOS and Android will take up the market and leave nothing for anyone else. It'll be just like the PC wars again!

Well... no. We definitely are going to see a huge shift in the industry over the next couple of years, but there will not be just one or two OSes controlling the market. And laptops won't be obliterated by tablets any more than TV destroyed the movies and radio. We won't see Mac vs PC again, or desktops vs Apple IIs. 2014 won't be like 1984.

First, a disclaimer. These are my opinions, not the opinions of my team or employer. I work in Developer Relations. I have no knowledge of long-term HP strategy, nor do I have any influence on it. These are simply the ramblings of a long-time computing observer.

Tablets are no substitute for 'real' computers

Let's tackle these issues separately. First the claims: 'tablets suck for real work', or 'I would never use one. They are too limiting', or 'They are only for content consumption'. What are we talking about?

By tablet computers I mean things like the TouchPad or the iPad. These are devices which run a non-general-purpose OS. There is no exposed filesystem. Apps are sandboxed and safe. A PC is a desktop or laptop running a general-purpose OS like Windows, Mac OS X, or a full Linux distro. Regardless of the form factor (touch vs keyboard), these are fundamentally different kinds of devices. Now the claims:

Tablets are too limited compared to a real computer: Yes, the current generation of hardware is limiting, but it's going to get better, and fast. My top-of-the-line computer less than 10 years ago had a 400MHz processor with 64MB of RAM, no GPU, and 8GB of slow disk storage. Pretty much all tablet computers far exceed this already, and will soon support printing, directly controlling hardware in your house, and being first-class citizens of the network (assuming Apple ever lets you jettison iTunes from your 'real' computer). They will all get better, and quickly; especially when there is competition.

Tablets suck for real work: Yes, they are primarily designed for content consumption and tasks that require typing a paragraph or less. But guess what: that's a lot of stuff. In fact, that's what 90% of people do 90% of the time on their computers. Most people don't write more than a paragraph at a time on their real computers. Most people surf the web, check Facebook, play games, pay their bills, and write a few short emails. A tablet device that can do 90% of what they need with less fuss and less cost is a big deal. A really big deal. Half of people could never learn to drive before the automatic transmission was invented. Yes, it's that big of a deal.

'I would never use one. They are too limiting'. That's very true. If you get nothing else from this essay, I hope you remember one thing:

They aren't built for you!

These things are built for the 90% of people who don't need everything a full PC does. By definition, if you are reading this blog, then these things aren't built for you. You are a programmer or writer or artist. You need a 'real' computer. In 10 years (probably far less), you will own a tablet computer, but it won't be your only computer.

In ten years I will still have a laptop with a real keyboard, possibly a disk drive, and most certainly an exposed filesystem with regular installable apps. It will still have a command line. (bash4eva!) I'll certainly use a tablet computer as well, but it won't be my only computer. However, for 90% of people, the tablet will do everything they need. It's built for them, not us.

OS Wars

Now that we have the audience for a tablet out of the way, let's look at the OS wars. There's a lot of talk that we'll have just iOS and Android. That they have an insurmountable lead. That the market wants one boutique option and one mass-market indistinguishable option, just like MacOS and Windows.

I really don't think this is the case. I don't think any OS will have more than 25% market share in 10 years. Despite the similarities, the mobile OS market is very different than the PC market. Why? Well, let's compare the world of 1984 with the world of 2011.

  • Hardware: In the 80s and early 90s you bought a PC in a computer store, or maybe a department store like Sears. In 2011 you buy a phone in a cellphone carrier's store, or on the web. You buy a tablet computer in a cellphone store or a mass-market retailer like Target. These stores really didn't exist 30 years ago. Getting distribution for a device is very different now.
  • Apps: In 1984 you bought apps on floppy disks, wrapped in boxes, sitting on a shelf in a computer store. Or maybe Sears and Toys'R'Us (for games at least). There were no 'app stores'. Today, mobile apps are almost always bought in a store provided by the mobile platform itself. It doesn't matter if a retail location wants to carry your apps or not. No one has to fight over physical shelf space. The economics are fundamentally different.
  • Advertising: In 1984 computers were mainly advertised in computer magazines and newspapers. Remember those? Those things that no one my age reads anymore? (I'm 35 by the way.) Now we read and shop online. Or on our phones. Or get recommendations from friends on Twitter. And mobile devices are advertised on television. Advertising has changed. People find out about products in fundamentally different ways.
  • Compatibility: In 1984 software compatibility mattered. Software was hard to write, required huge lead times, couldn't be easily updated, and speed was of the utmost importance. Only the biggest apps would be on more than one platform, so getting apps on your OS was a big deal. IBM went to a lot of trouble making OS/2 Warp work with Windows 3.1 apps, for the sake of compatibility. Apple created expensive Apple II plugin boards for the Mac, all for the sake of app compatibility.

    Today most of the apps we run are backed by platform independent web services. Only the client app is different. And even that is easier thanks to open standards, modern programming languages, and the web. webOS has smaller market share than iOS and Android, but we still have Facebook, AngryBirds, and about 2 million Twitter apps. And you can view all the same websites. Compatibility simply isn't an issue anymore.

The economics of mobile operating systems are fundamentally different than the desktop wars of olde. To say we are in for a repeat of Mac vs Win is like saying the two world wars were identical because they both involved Germany and had the word 'World' in their names. Well the world has changed.

I think there will not be any single OS winner. Instead it will be more like cars. Many different models and vendors to cater to different tastes. They each have their own colors, addons, and spare parts; but they all drive on the same roads (the internet) and all take the same gas (webservices). 2014 simply won't be like 1984. And that's a very good thing.

This is a new future tweeting system I'm working on. It will let me write a post, send it to my blog, then link the blog from twitter, all in the *future*!

Last week I flew down to California for orientation at Nokia. My return flight was supposed to leave Wednesday afternoon, but due to weather and United's bungled computer merger with Continental, I ended up stuck at SFO for a day and a half.

But my loss is your gain! Whoo hoo! Armed with my trusty laptop I fixed some long standing bugs in Leo Sketch and finished up the 1.1 release.

New in this release are:

  • you can end a path without closing it by just clicking the 'end' button
  • gradients autoscale to fit the shape they are in
  • the Arrow tool is restored
  • HTML export supports multiple pages
  • Any shape can be turned into a link that jumps to another page in the document
  • dynamic bitmap font export for Amino
  • text wrapping *yay!*
  • many more export improvements and bug fixes

Download here

2010 is here and I still don't have my flying car or moon rocket, much less a spaceship en route to Jupiter for some serious monolith research. Sadly, I'll have to be satisfied with some baseless and random speculation on the year to come. Take these predictions with a boulder of salt and call me out on them in December.

2009 was a big year for technology. A fairly bright spot in an otherwise dismal economy. From a design perspective we've seen further growth in next-gen UI technologies like Flash, Silverlight, and HTML 5, and a greater focus on design in the app development process. We've also seen major growth of alternatives to the WIMP (window, icon, menu, pointing device) user interface that has dominated computing for the past 30 years. Touch screens, ePaper, accelerometers, and embedded cameras are mainstream technologies and have started working their way into every gadget we buy.

The next year should be an exciting one, but with mostly evolutionary improvement of technologies we have today. I predict nothing out of the blue. (Of course, if something was truly out of the blue we, by definition, wouldn't be able to predict it). Anyway: on with the show!

I've broken my predictions up into three categories: software, hardware, and misc. Each prediction has a percentage indicating the likelihood it will come true. Next December we'll revisit these and see how I did.

Software in 2010

Software is moving to the cloud. The UI of any given app might be implemented in HTML & JavaScript in a web browser, or through an RIA toolkit like Java, Flash, or Silverlight (and occasionally a native toolkit, as with iTunes). But the trend is clear: all software comes from the internet, and a significant portion of its features are implemented on the server side. This is a long-term trend and won't be reversed. Given this trend, here are the effects I think we'll see in 2010:

Social Networking sites become useful. 80%

We've had Facebook, Twitter, LinkedIn, and other social networking sites for a few years now. I think the industry is starting to coalesce into a set of useful offerings now that we've had enough time to figure out what all of this stuff is actually good for. This means both companies and individuals will start to see real value from these services.

The downside is that privacy sinks to an all time low and our public & private lives heavily bleed into each other. I've taken the step of fully separating my personal and professional identities. I now use Facebook only for personal friends that I know in real life, and push my professional life through Twitter, LinkedIn, and this blog. Fortunately the social networking tools have improved enough to make this a relatively painless process. However, given that each service has a different audience I'm often conflicted about what to post where.

OpenID finally takes off: 70%

OpenID is an open standard which lets you log into a website using your username from another website. It promises to free us from having to remember a million different logins. Since Google, Yahoo, and others now expose their user databases through OpenID, I predict a huge increase in websites using it. The era of a universal login might soon be upon us.

Downside: the era of a universal trackable login might soon be upon us.

Cloud enabled netbooks ship: 70%

You will be able to buy a small laptop whose entire software catalog is purchased through an online store on the device. It will probably run Chrome OS, Android, or an Ubuntu variant. All apps will be HTML in the browser or written in a cross-platform RIA toolkit like Java or Flash. If your netbook dies, you just buy a new one and click a button to reinstall everything from the cloud.

We'll have several shipping toolkits that let you target all of the major smartphone OSes: 80%

This means a write-once run everywhere cross-device toolset for smartphones that is actually shipping (not tech demos), and supports iPhone, WebOS (Palm), Android, and Blackberry.

We are pretty much there already. Adobe has promised a version of Flash in 2010 that will let you build apps on WebOS (Palm), BlackBerry, and the iPhone; and some Android phones ship with Flash support. PhoneGap lets you target every device with a WebKit derived browser, which is pretty much everything. Silverlight is being ported to other platforms and MonoTouch lets you write C# code that targets the iPhone. 2010 will see all of these toolkits mature, with several prominent app makers adopting them for all of their apps except games.

Windows Mobile and older JavaME based feature phones aren't on the list above because developers won't care about them. Sad but true. WinMo is dead unless Microsoft does something truly amazing with Windows Mobile 7 (and I sincerely hope they do). Older JavaME based phones have the numbers (2 billion +) but those phones (and their associated cellphone contracts) don't encourage users to buy apps, so they don't matter to developers. If you can't ship apps to them then they don't count.

Hardware in 2010

Hardware improvements shouldn't be surprising: everything will be smaller, faster, and cheaper. Moore's Law will not be repealed. The focus on power efficiency means the world is going mobile. The biggest shift we will see in 2010 is the general purpose 'personal computer' losing sales to more specialized devices. eBook readers, smartphones, tablets, and set-top boxes will continue to chip away at the market share of desktop and laptop computers.

Someone will create a TV attached device worth owning: 30%

Everyone has some sort of a TV-attached device these days. It makes sense: the TV is the largest screen in most homes, so why not do something more with it than just play video? I want a device which will play media, play games, check mail, browse the web, and let me install my own apps; all displayed in gorgeous 1080p. The potential is huge!

Sadly the market is terribly fractured with no standards and a bunch of half finished solutions. The AppleTV has devolved into an iTunes Store front-end. The Wii, XBox, and Playstation all do games well, but their video and content services are weak. The cable companies have semi-decent DVRs, but don't have stores with downloadable apps or access to internet content. The most compelling options today are actually home-brew Linux boxes running MythTV.

I really hope someone can pull the pieces together into a single solution, but I don't have high hopes. It may take an iPhone-like dark horse entrant to change the power structure of this market.

Pico-projectors for Smart-phones become popular: 20%

The concept seems good on paper: a tiny projector that fits in your pocket, attaches to your cellphone, and projects a 4-foot image on a wall or T-shirt. Sounds like science fiction! It seems like a great idea, but in practice the images are too dim and shaky to be useful. It will remain a novelty that most people don't need or want.

Tablet computers will be popular this year: 100%

I'm planning a much longer post on tablets for next week so I'll just summarize here. Both the hardware and software have converged to make a useful tablet computer possible. The key will be building a device that doesn't try to be a full-featured desktop computer in a tablet form factor. Any successful tablet computer will be more like a large iPod Touch than a flat laptop. It will be the next step along the road of making computing ubiquitous and invisible. After all, what is a tablet but a computer where everything has been made invisible except the display?

Given the Apple rumors and pre-announcements from several vendors I expect several tablets to be on sale next year. If they can figure out one or two killer features (most likely relating to media consumption and social networking) then I think they will be very popular.

My specific prediction regarding the Apple tablet: it will be a large iPod Touch running the same OS and managed through iTunes. There will probably be a few extensions to the Cocoa Touch APIs and new UI guidelines for the larger form factor. The basic apps will be retooled for the larger screen but with little change in functionality. iTunes will add a book and magazine store. But that's it. No revolutionary form factor or display technology. Just a large iPod Touch. And it will sell millions. The time is just right.

An ebook reader using something other than eInk ships: 50%

Ebook readers are all the rage right now, and with good reason. They represent the future of paper, but the display technology is still very limited. Virtually all of the ebook readers on the market use the same physical display technology: an electrophoretic fluid from eInk. Refresh rate and contrast are horrible, but it's a good enough start to create a market for better products. A slew of competing technologies are under development, and the success of the Kindle will ensure they get funding, so there's a good chance one of them will ship in 2010.

Smartphones won't improve significantly, but over half of all phones sold in the US will be smartphones: 80%

The iPhone, Android devices, the Palm Pre, and new BlackBerries have solidified what a 'smartphone' is: a general purpose touch-centric device with user installable applications. A year from now I don't expect this to look any different. We'll have some new devices with more memory and faster CPUs, but the landscape will be pretty much the same.

As a side effect of this market stability I expect smartphones to make huge inroads into the general cellphone market. Thanks to Apple's ads the average consumer now understands the benefits of a smartphone with installable apps, and carriers love to sell data plans. I expect over half of all new phones sold next December to be smartphones. AT&T's network will continue to strain under the usage.

Land line phone service falls by half: 80%

This has been a long time coming. I've personally had only a cellphone for the past decade. Thanks to competition from wireless carriers, Skype, and VoIP solutions I expect the retreat from landlines to reach 50% by the end of 2010.

On The Edge

Voice recognition becomes a significant user interface: 20%

Voice recognition will continue to be a technology that's just around the corner but still never arrives. Any growth will come from 411 style services that let you speak a question and receive a webpage answer. Google is pushing heavily in this area with their Google Voice service. I still don't think we'll have true voice command (KITT style) until we have much stronger AI. But that's just around the corner. :)

Self driving cars: 0.001%

The potential is huge, but the chances of me being able to buy a self-driving car this year are almost certainly nil.

However, we are likely to see some interesting advancements on the way to self-driving cars. Some luxury cars can now parallel park themselves and the DARPA challenges have been successful. I suspect we'll see more closed-course demonstrations of cars doing things that would simply be impossible for a human driver to accomplish. Things like drifting and spinning become trivial when you can control each wheel's drive and braking separately. Since most drivers don't have eight feet, the computer will have a leg up on humans.

YouTube, Twitter, and Facebook combine into a single time wasting website: YouTwitFace

Likelihood: 92.6%

Happy New Year!

When redesigning Amino I had a few core goals. These goals are in place to guide the product and ensure we create something genuinely useful, not just "yet another gfx lib".


Amino 2 must be:

  • Simple: the api must be simple and easy to understand
  • Responsive: the goal is to hit a consistent fps for all graphics on screen. 60fps on desktop and 30fps on mobile. This should be very doable with modern devices. The UI must always be responsive, even at the cost of accuracy or graphics complexity.
  • Rich: complex effects and animation should be possible (shadows, gradients, real-world textures, animation) while still hitting a consistent FPS.
  • Speedable: we should be able to add speed improvements, including hardware acceleration, purely in the implementation. Speedups shouldn't affect the API. Existing apps just get faster over time.
  • Subsettable: You should be able to use just the parts you need.
  • Portable: Nothing of the underlying graphics implementation should be exposed. (we aren't there yet).
  • Flexible: for lots of tasks you can just compose nodes together, but sometimes you may want to dig down to the lower levels. This should be possible, as long as you realize your code might not be as portable anymore.


With these goals in mind, here is the basic structure:

A Node is an object in the tree with a draw function. Anything which draws or affects drawing is a subclass of Node. This means all shapes, groups, transforms, imageviews, etc. Nodes also track their dirty state and whether they contain a given point (used for input processing). All nodes have parents (except the topmost node).

A Parent is simply a node that implements the Parent interface. Currently only Group and Transform are parents.

A Scene is a tree of Nodes (an acyclic directed graph).

Everything is done on a single GUI thread. Touching nodes or trying to draw off the GUI thread is an error.

Resources (images, gradients, colors, textures) are immutable, to enable transparent caching.
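Here is a small self-contained sketch of why immutability makes transparent caching safe. This is illustrative only, not Amino's internal code: since a resource can never change after creation, a cache can key off object identity and never needs invalidation.

```javascript
// Cache keyed by resource identity; safe because resources are immutable.
var cache = new Map();
function rasterize(gradient) {
    if(!cache.has(gradient)) {
        // the expensive conversion happens at most once per resource
        cache.set(gradient, { pixels: "rendered:" + gradient.color });
    }
    return cache.get(gradient);
}

var grad = Object.freeze({ color: "red" }); // immutable resource
console.log(rasterize(grad) === rasterize(grad)); // true: same cached result
```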

Amino's internal system handles repaints, animation, and input events for you. You just create the tree of nodes and you are off to the races. By letting Amino handle these things we can ensure a consistent framerate and the best performance possible.

All events are generated by the system (usually by wrapping native events) and passed to your handlers through the event bus. You can listen either to a particular kind of event on a particular object, or to all events of that kind throughout the system. For example: "give me all mouse press events" or "tell me when this node is clicked". Events are automatically transformed into local coordinates when you click on transformed objects, and they are passed along with the target node of the event.
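The targeted-vs-global listening described above can be sketched with a toy event bus. The names here (EventBus, on, fire) are hypothetical and only illustrate the idea; they are not Amino's actual API:

```javascript
// Minimal illustrative event bus: a listener registers for an event type,
// optionally scoped to one target node; null means "all events of this kind".
function EventBus() {
    this.listeners = [];
    this.on = function(type, target, fn) {
        this.listeners.push({type: type, target: target, fn: fn});
    };
    this.fire = function(event) {
        this.listeners.forEach(function(l) {
            if(l.type !== event.type) return;
            if(l.target === null || l.target === event.target) l.fn(event);
        });
    };
}

var bus = new EventBus();
var node = {name: "rect1"};
var hits = [];
//global listener: every press event in the system
bus.on("press", null, function(e) { hits.push("any:" + e.target.name); });
//targeted listener: only presses on this node
bus.on("press", node, function(e) { hits.push("mine"); });
bus.fire({type: "press", target: node});
console.log(hits); // ["any:rect1", "mine"]
```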

A quick note on cross platform support. My goal is to make the Java and JavaScript APIs identical. Wherever possible this is true. However, due to the difference between the languages (namely that JavaScript uses prototype based inheritance) there will be minor differences. In general the same code should work under both with only minor syntactic changes.


The following code creates two colored rectangles with method chaining. Then it creates a square that spins and slides back and forth across the screen. I've included both Java and JavaScript versions so you can see the minor differences.


var runner = new Runner();
runner.setCanvas(document.getElementById("canvas"));

var g = new Group();
g.add(new Rect().setWidth(100).setHeight(50).setFill("green"))
 .add(new Rect().setWidth(50).setHeight(50).setY(100).setFill("yellow"));

var r = new Rect().set(-25,-25,50,50).setFill("white");
var t = new Transform(r).setTranslateX(100).setTranslateY(100);
//add the transformed square to the scene so it will be drawn
g.add(t);

runner.addAnim(new Anim(t,"rotation",0,90,1).setLoop(true).setAutoReverse(false));
runner.addAnim(new Anim(t,"translateX",100,500,4).setLoop(true).setAutoReverse(true));
runner.addAnim(new Anim(t,"translateY",100,150,0.5).setLoop(true).setAutoReverse(true));
runner.root = g;


final Core runner = new Core();
runner.setBackground(Color.BLACK);

Group g = new Group();
g.add(new Rect().setWidth(100).setHeight(50).setFill(Color.GREEN))
 .add(new Rect().setWidth(50).setHeight(50).setFill(Color.YELLOW).setY(100));

Rect r = new Rect().set(-25, -25, 50, 50);
Transform t = new Transform(r).setTranslateX(100).setTranslateY(100);
//add the transformed square to the scene so it will be drawn
g.add(t);

runner.addAnim(new PropAnim(t,"rotation",0,90,1).setLoop(true).setAutoReverse(false));
runner.addAnim(new PropAnim(t,"translateX",100,500,4).setLoop(true).setAutoReverse(true));
runner.addAnim(new PropAnim(t,"translateY",100,150,0.5).setLoop(true).setAutoReverse(true));
runner.root = g;


Amino 2 is still very much a work in progress, but so far I'm happy with the design. I've made several TouchPad apps already with decent performance, and the API really simplifies Java2D coding. Next I'm working on more graphics primitives, buffered effects, and a simple path API.

You can download a daily build here. Please join the Amino Dev List to provide feedback or contribute to the project.


An essay in which I expound upon the benefits of the lowly wire and resist the temptation to wireless-ize the world of personal gadgetry. This weekend, in a futile effort to preserve my back and wrists, I retooled my home office by picking up a new mouse and keyboard. The only things available at the local store were wireless, either Bluetooth or using a proprietary dongle. While reasonably nice to use from an ergonomic standpoint, they immediately began interfering with my network, dropping or repeating keystrokes and mouse clicks. After two days of frustration I returned them. Then, after searching three stores for a decent wired keyboard, I gave up in frustration. Everything was wireless.

The Good

Wireless sounds like a good idea. The promise of "No wires!" means no tangles, no restriction of movement, and no ugly cords all over your desk. This is especially attractive when you use a laptop 100% of the time, as I do. The last thing I want on an airplane is a wire to get tangled up in the seat. The wireless devices all look quite sleek and futuristic. And the accuracy of modern laser trackers on virtually all surfaces is quite simply astounding. Wireless promises a trouble-free computing experience.

The Bad

For all of the good, there are actually a lot of problems with wireless devices. First: all wireless gadgets must have batteries, which means one more thing to monitor, charge, and replace. Next, the minute you have an active network over the air you have to worry about eavesdropping. That means security layers, network protocols, and the bane of Bluetooth: pairing. The act of connecting two devices which are a scant two feet from each other simply isn't worth the pain. I have returned three Bluetooth headsets over the years due to pairing issues. Once you get your device set up and authenticated, you still must worry about interference. I found that my keyboard would drop key presses if I was doing my hourly backup over the network at the same time. Not doing two things at once is the opposite of progress. And finally, to add insult to injury, wireless devices cost more. Now let's consider the alternative.


Wires are simple to use. You attach a wired mouse to your computer by plugging it in. Through the magic of USB, the device is immediately detected and the driver installed. Plus you get power for free, so no more batteries to replace. Nothing goes over the air, so you don't have to worry about eavesdropping: no security system and no pairing. No RF transmission means no interference with the 4.8 billion other wireless devices in my house. In addition to the security aspects, wires are usually faster and cheaper. USB 2.0 is far faster than even the latest WiFi N standards, which I suspect is why Apple doesn't sync the iPhone over WiFi. And the cost, of course, is fantastic. No extra batteries and radio transmitters makes any gadget cheaper to produce. The costs of wireless (financial, technical, and mental) are worth the benefits in some situations. But for lots of things: just go with a wire. It works.

This is the second blog in a series about Amino, a JavaScript OpenGL library for the Raspberry Pi. The first post is here.

This week we will build a digital photo frame. A Raspberry Pi is perfect for this task because it plugs directly into the back of a flat-screen TV through HDMI. Just give it power and network and you are ready to go.

Last week I talked about the new Amino API built around properties. Several people commented that I didn’t say how to actually get and run Amino. Very good point! Let’s kick things off with an install-fest. These instructions assume you are running Raspbian, though pretty much any Linux distro should work.

Amino is fundamentally a Node JS library, so first you’ll need Node itself. Fortunately, installing Node is far easier than it used to be. In brief: update your system with apt-get, download and unzip Node, and add node and npm to your path. Verify the installation with npm --version. I wrote full instructions here.

Amino uses some native code, so you’ll need node-gyp and GCC. Verify GCC is installed with gcc --version. Install node-gyp with npm install -g node-gyp.

Now we can checkout and compile Amino. You’ll also need Git installed if you don’t have it.

git clone
cd aminogfx
node-gyp clean configure --OS=raspberrypi build
npm install
node build desktop
export NODE_PATH=build/desktop
node tests/examples/simple.js

This will get the Amino source, build the native parts, then build the JavaScript parts. When you run node tests/examples/simple.js you should see this:


Now let’s build a photo slideshow. The app will scan a directory for images, then loop through the photos on screen. It will slide the photos to the left using two ImageViews, one for the outgoing image and one for the incoming, then swap them. First we need to import the required modules.

var amino = require('amino.js');
var fs = require('fs');
var Group = require('amino').Group;
var ImageView = require('amino').ImageView;

Technically you could call amino.Group() instead of importing Group separately, but it makes for less typing later on.

Now let’s check that the user specified an input directory. If so, then we can get a list of images to load.

if(process.argv.length < 3) {
    console.log("you must provide a directory to use");
    //bail out if no directory was given
    process.exit(1);
}

var dir = process.argv[2];
var filelist = fs.readdirSync(dir).map(function(file) {
    return dir+'/'+file;
});
So far this is all straightforward Node stuff. Since we are going to loop through the photos over and over again, we need an index to increment through the array. When the index reaches the end it should wrap around to the beginning, and it should handle the case where new images are added to the array. Since this is a common operation, I created a utility object with a single function: next(). Each time we call next() it will return the next image, automatically wrapping around.

function CircularBuffer(arr) {
    this.arr = arr;
    this.index = -1;
    this.next = function() {
        this.index = (this.index+1)%this.arr.length;
        return this.arr[this.index];
    };
}
//wrap files in a circular buffer
var files = new CircularBuffer(filelist);
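The wrap-around behavior is easy to exercise on its own. This standalone snippet repeats the CircularBuffer definition so it runs by itself:

```javascript
function CircularBuffer(arr) {
    this.arr = arr;
    this.index = -1;
    this.next = function() {
        this.index = (this.index+1)%this.arr.length;
        return this.arr[this.index];
    };
}

var buf = new CircularBuffer(['a.jpg','b.jpg','c.jpg']);
console.log(buf.next()); // a.jpg
console.log(buf.next()); // b.jpg
console.log(buf.next()); // c.jpg
console.log(buf.next()); // a.jpg again: wrapped around
```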

Now lets set up a scene in Amino. To make sure the threading is handled properly you must always pass a setup function to amino.start(). It will set up the graphics system then give you a reference to the core object and a stage, which is the window you can draw in. (Technically it’s the contents of the window, not the window itself).

amino.start(function(core, stage) {

    var root = new Group();
    stage.setRoot(root);

    var sw = stage.getW();
    var sh = stage.getH();

    //create two image views
    var iv1 = new ImageView().x(0);
    var iv2 = new ImageView().x(1000);

    //add to the scene
    root.add(iv1);
    root.add(iv2);
The setup function above creates a Group to use as the root of the scene and attaches it to the stage. Within the root it adds two image views, iv1 and iv2.

The images may not be the same size as the screen so we must scale them. However, we can only scale once we know how big the images will be. Furthermore, the image view will hold different images as we loop through them, so we really want to recalculate the scale every time a new image is set. To do this, we will watch for changes to the image property of the image view like this.

    //auto scale them with this function
    function scaleImage(img,prop,obj) {
        var scale = Math.min(sw/img.w, sh/img.h);
        obj.sx(scale).sy(scale);
    }
    //call scaleImage whenever the image property changes
    iv1.image.watch(scaleImage);
    iv2.image.watch(scaleImage);

    //load the first two images
    iv1.src(files.next());
    iv2.src(files.next());

Now that we have two images we can animate them. Sliding an image to the left is as simple as animating its x property. This code will animate the x position of iv1 over 3 seconds, starting at 0 and going to -sw. This will slide the image off the screen to the left.

    iv1.x.anim().from(0).to(-sw).dur(3000);
To slide the next image onto the screen we do the same thing for iv2, starting at sw and ending at 0.

    iv2.x.anim().from(sw).to(0).dur(3000);
However, once the animation is done we want to swap the images and move them back, so let’s add a then(afterAnim) function call. This will invoke afterAnim once the second animation is done. The final call in the chain is to the start() function. Until start is called nothing will actually be animated.

    //animate out and in
    function swap() {
        iv1.x.anim().from(0).to(-sw).dur(3000).start();
        iv2.x.anim().from(sw).to(0).dur(3000).then(afterAnim).start();
    }
    //kick off the loop
    swap();

The afterAnim function moves the ImageViews back to their original positions and moves the image from iv2 to iv1. Since this happens between frames the viewer will never notice anything has changed. Finally it sets the source of iv2 to the next image and calls the swap() function to loop again.

    function afterAnim() {
        //swap images and move views back in place
        iv1.x(0);
        iv2.x(sw);
        iv1.image(iv2.image());
        //load the next image
        iv2.src(files.next());
        swap();
    }

A note on something a bit subtle. The src of an ImageView is a string, either a URL or file path, which refers to the image. The image property of an ImageView is the actual in-memory image. When you set src to a new value the ImageView will automatically load it, then set the image property. That’s why we added the watch function to iv1.image, not iv1.src.

Now let’s run it. The last argument is the path to a directory containing some JPGs or PNGs.

node demos/slideshow/slideshow.js demos/slideshow/images

If everything goes well you should see something like this:


By default, animations will use a cubic interpolator so the images will start moving slowly, speed up, then slow down again when they reach the end of the transition. This looks nicer than a straight linear interpolation.
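
To make the difference concrete, here is a generic cubic ease-in-out function next to a linear one. This is just an illustration of the easing math, not Amino's actual interpolator code:

```javascript
// A generic cubic ease-in-out curve: slow at the ends, fast in the middle.
// (Illustrative only; Amino's real interpolator may differ in detail.)
function cubicInOut(t) {
    return t < 0.5
        ? 4 * t * t * t
        : 1 - Math.pow(-2 * t + 2, 3) / 2;
}

// A linear interpolator moves at constant speed for comparison.
function linear(t) {
    return t;
}

// Early in the transition the cubic curve lags the linear one (slow start),
// and late in the transition it has nearly finished (slow end).
console.log(cubicInOut(0.1), linear(0.1)); // ~0.004 vs 0.1
console.log(cubicInOut(0.9), linear(0.9)); // ~0.996 vs 0.9
```

At t = 0.1 the cubic curve has covered only about 0.4% of the distance while the linear one has covered 10%, which is exactly why the motion feels like it eases in and out.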

So that’s it. A nice smooth slideshow in about 80 lines of code. By removing comments and utility functions we could get it under 40, but this longer version is easier to read.

Here is the final complete code. It is also in the git repo under demos/slideshow.

var amino = require('amino.js');
var fs = require('fs');
var Group = require('amino').Group;
var ImageView = require('amino').ImageView;

if(process.argv.length < 3) {
    console.log("you must provide a directory to use");
    process.exit(1);
}

var dir = process.argv[2];
var filelist = fs.readdirSync(dir).map(function(file) {
    return dir+'/'+file;
});

function CircularBuffer(arr) {
    this.arr = arr;
    this.index = -1;
    this.next = function() {
        this.index = (this.index+1)%this.arr.length;
        return this.arr[this.index];
    };
}

//wrap files in a circular buffer
var files = new CircularBuffer(filelist);

amino.start(function(core, stage) {

    var root = new Group();
    stage.setRoot(root);

    var sw = stage.getW();
    var sh = stage.getH();

    //create two image views
    var iv1 = new ImageView().x(0);
    var iv2 = new ImageView().x(1000);

    //add to the scene
    root.add(iv1);
    root.add(iv2);

    //auto scale them
    function scaleImage(img,prop,obj) {
        var scale = Math.min(sw/img.w, sh/img.h);
        obj.sx(scale).sy(scale);
    }
    //call scaleImage whenever the image property changes
    iv1.image.watch(scaleImage);
    iv2.image.watch(scaleImage);

    //load the first two images
    iv1.src(files.next());
    iv2.src(files.next());

    //animate out and in
    function swap() {
        iv1.x.anim().from(0).to(-sw).dur(3000).start();
        iv2.x.anim().from(sw).to(0).dur(3000).then(afterAnim).start();
    }

    function afterAnim() {
        //swap images and move views back in place
        iv1.x(0);
        iv2.x(sw);
        iv1.image(iv2.image());
        //load the next image
        iv2.src(files.next());
        swap();
    }

    //kick off the loop
    swap();
});

Thank you and stay tuned for more Amino examples on my blog.

Amino repo


I've talked about the tablet takeover several times before on this blog. I still firmly believe my previous statement:
Ten years from now 90% of people will use something like a tablet or smartphone as their primary computing interface. And the remaining 10% will use a desktop OS on something called a workstation.
In fact, I now think I may have been too conservative. Five years is more likely. I've posted several times about this market but I haven't really talked about that other 10%. What will the workstation OS of the future look like? Who will use it? Why would they choose to use it over something like a tablet computer? What new features and apps will they have? This is the first post in a series where I will explore the future of the PC and desktop applications. Over the series I'll cover what I think the future will look like and then dive deep into particular technologies and interfaces that will be required. Don't worry, I haven't forgotten about tablets. I've got more to say about them coming up soon.

Workstations vs Tablets

First, some definitions. I'm going to stop using the terms PC and desktop because they are too ambiguous. I won't use creation vs consumption device because that introduces too many preconceived notions. I've settled on the terms workstation and tablet. A workstation is something running a general purpose operating system on a laptop or desktop. In practice this means a future version of Windows, Mac OS X, or Linux. A tablet is a device running a non-general-purpose operating system. It is probably a phone or tablet form factor, but I expect netbooks to be coming soon as well. While there is clearly a lot of gray area between the two types of device, the key differences are an exposed file system, the ability to install any application, and a heavy focus on keyboard use. I'll talk about why these matter in a moment. In the long run these two types may have shared implementations (as the new Mac OS X Lion demonstrates) but they are still targeted at different audiences.

Why and Who?

Who would actually use these workstations if tablets have become so advanced that they can do what 90% of people want, and with far less fuss? I think workstations are for the pro users, where pro means professional. These are people who use their devices directly to make money, use them for a significant amount of time per day (for work), and most importantly are willing to invest time and money in their devices to get the most out of them. They will be reasonably tech savvy, but that doesn't mean they are super-nerds who can explore file systems and mess with printer drivers. There is work to be done, so the device needs to function perfectly and get out of the way. Who would these people be? First are programmers and web developers, obviously, since they need to directly interact with files and use advanced text manipulation tools. Next I'd include people who use advanced content creation tools: 3D artists, architects and engineers, video editors, technical authors. Anyone who works on large documents and structures with very sophisticated UI needs. I could also see this expanding to the business, finance, medical, and engineering fields, since all of these people process large amounts of data. The key to all of these types of people is their need to create, process, and distribute large amounts of data in very sophisticated ways. They need interfaces that are both wide and deep. They are willing to use a deep interface that requires time to learn, provided they get value proportional to their investment. That last point is probably the most distinguishing feature: these users need professional interfaces and will take the time to learn them, but they also don't want to waste their time, because it is valuable. They will pay a premium for good stuff, but will dump you if you are too much of a hassle. These are the knowledge workers.

What do they need?

How could we design software for the workstation? We need to focus on a core philosophy. The list below is in no way complete, so I'd love to get your feedback.
  • Scalability
  • Efficiency
  • Reliability
  • Customization
Scalability means the software and the interface scale with the task. iMovie is a great way to learn to edit video, but it doesn't scale. You hit a limit very quickly, both in terms of what types of things you can do with it and the sheer amount of media it can handle. iMovie will grind to a halt if you try to edit 100 hours of footage down to a 2 hour movie, and you'll be very frustrated with the interface. iMovie doesn't scale. Final Cut Pro does. While it has a bit of a learning curve, you can do almost anything with it. Other software I'd put in this category includes IDEs (Eclipse, IntelliJ, NetBeans, etc.), Maya (3D modeling), and AutoCAD. I'm sure there are more examples in other industries. Interestingly, I don't think there is a professional app in the text manipulation industry yet (an MS Word that doesn't suck), so perhaps there is an opening for new software there.

Efficiency does not refer to CPU or memory efficiency. A modern computer has an ample supply of both (though battery life could always be longer). I'm referring to the most precious commodity of all: the human's time. Someone who uses a workstation expects their software to make them work faster and more easily. Anything which automates a task or reduces cognitive load is a very good thing. Never waste a human's time. This also applies to interaction design. Why do I have to tell my program to save open files when I quit? It's much better to just auto-save everything and restore it when I return. This reduces the time I have to spend thinking about it, so I can focus on getting my work done. Efficiency also includes shortcuts, automation tools, and filters. Anything to let me work faster.

Reliability: it goes without saying that computers must be reliable. They must do their work properly, never slow down, and never lose data. This is even more important for the knowledge worker, since work time and usually money are at stake. Fortunately PCs have made great strides here, and Mac OS X Lion has some interesting new features to make this happen. This category is mostly outside the realm of user interaction design, however, so I won't say much more about it.

Customization is perhaps the most important of the four. Anyone who uses a tool for a long period of time makes it their own by customizing it. This is perhaps the most defining feature of being human. We integrate tools into our mental system. We modify our tools to suit our needs and give us a competitive advantage. The great painters did not use stock brushes. They either made their own or modified a stock brush to have the exact shape and flow they wanted. Many great programmers have their own specific set of dog-eared tech books and directories full of code snippets to reuse. We customize our keyboard shortcuts, put files in particular places, pin browser tabs, create bookmarks, switch wallpapers, and litter our (physical) desktops with pencil holders and sticky pads. A customized computer is a computer well used and loved by its owner. A stock computer is a computer never used.

Next Steps

I've gone on long enough for today. In the next post I'll cover some kinds of interaction that will meet the needs of the knowledge worker, and show some existing examples. In the meantime let me leave you with a few ideas to ponder:
  • IDEs are some of the most sophisticated applications available thanks to highly advanced UIs (code completion, class generation, syntax highlighting), heavy integration with other tools (put a web server *inside* of your IDE?), and yet are almost all completely free.
  • Only nerds complain that OpenID and OAuth suck for desktop applications. Why?
  • File systems let you track files, not documents, and yet documents are usually what we care about. Can we do better?
  • iTunes might be the most widely used pro app in the world, even if it's recently jumped the shark.

Hi. My name is Josh Marinacci. You might remember me from the webOS Developer Relations team. Despite what happened under HP, webOS is still my favorite operating system. It still has the best features of any OS and an amazing group of dedicated, passionate fans. I deeply cherish the two years I spent traveling the world telling everyone about the magic of webOS.

However, I’m not here to talk to you about webOS. I want to tell you about my brother-in-law, Kevin Hill. Two years ago he was diagnosed with stage 4 melanoma. If you know anything about melanoma, then you know this was 100% terminal just a few years ago. Kevin and my sister Rachel have traveled the country joining every experimental trial to beat this. You can read about their amazing story on their site: The Hill Family Fighters.

The Hill Family

Kevin and Rachel have had amazing success but recently hit a roadblock: the scan a few weeks ago showed a spot on his brain. It’s a testament to his strength that he has continued to work remotely as a sysadmin through all of this (even using his TouchPad from the hospital), but the time has come to slow down. We have not given up hope that he can beat it, but this latest development means he must finally quit work and focus on staying alive.

It will be 90 days before his long term disability kicks in. That's 90 days without income, and just 50% of his salary after that. My sister works part time as a brilliant children’s photographer, but spends most days taking care of Kevin and their two little children, Jude and Evie. We’ve calculated that they need at least ten thousand dollars to Bridge the Gap and get them to the end of the year. This is where you come in.

I am auctioning off my entire collection of webOS devices and swag to help them cover the bills and fight the cancer.

[picture of bottles and phones]

I will be holding an online auction selling everything I have. There will be devices like Pre2s and Pre3s. Limited edition posters and beer steins. My personal water bottle that saved my life in Atlanta (it still bears the dents). In addition some of my former Palm co-workers are donating their own devices and swag. And the highlight is an ultra-rare Palm Foleo with tech support from one of the original hardware engineers.

With your help we can Bridge the Gap for the Hill family. Please add your name to my mailing list. [link] I will send you a note when I have a final date for the auction and when it starts. This list is just for the auction and will be destroyed afterward. Even if you aren’t interested in buying anything I could really use your help getting the word out. Let’s hit the forums, the blogs, and the news sites. The webOS community is the best I’ve ever worked with and I need your help one last time.

Thank you.



I am not entirely sure what to say.  It has taken me two hours to write the following few paragraphs.

Though I never worked at Apple nor had a chance to meet him, I owe my career to Steve Jobs. At the tender age of 8 I learned to program on an Apple IIe and have been hooked ever since. I've used Mac OS X since the first public beta on my tangerine iBook. I used a string of iPods and iPhones before joining Palm to compete with Apple. Steve's products changed desktop computing, music, movies, cellphones, and almost everything else in our modern world, and I thank him as a happy user of those products.

However, I think most of all I thank him for making the world understand the value of design and usability. It's not just an add-on. It's the most important part of a product. I've spent my professional career helping developers build better software thanks to his vision and singular focus. Not only would I not have this career without him, but my entire field (developer evangelism) likely wouldn't even exist.

So thank you Steve. Thank you for teaching us that our job is to make the world better for everyone.

Building my own CNC machine has been quite an educational experience. I've got a better idea of what I'm up against now, and have plans for moving forward.

Lessons Learned so Far

First: hardware is hard. Much harder than software. I expected programming or wiring the electronics to be the difficult part, but thanks to good docs and helpful Arduino libraries I was able to get basic movement going pretty quickly. The biggest challenge has been the mechanical construction of the machine. I've gone through several iterations of the pen mount, sliders, and lead screw attachments, but there is still much more to do. In software it is easy to modify things and test out new ideas; in hardware, creating a new slider took several hours to design, build, and test.

I have also learned how much of hardware success is based on component sourcing. Throughout this project I have added to a growing list of good online shops for getting interesting parts, and learned how to repurpose unexpected materials like plumber's tape. I also learned that Amazon Prime is your friend. I honestly don't know how FedEx can make any money off of them.


So far I'm happy with the progress I've made on my CNC machine. I can now make a pen move in the X and Y directions with a fair amount of repeatability. The machine is a good test of ideas but not reliable enough to actually use yet. The biggest issue is tightness. Everything in the CNC machine has some slack in it. Each component of slack multiplies to make the pen slide all over the place; and a sliding pen means bad drawings. So that will be the main driver of the next version: making it super tight.

The other night I designed a new slider. It still uses roller skate bearings but applies them to the support beam from three sides instead of two. It also uses more rollers spaced further apart. The new sliders greatly improve stiffness without adding too much friction.

To remove the stickiness of the pen I realized I need a way to press the pen into the paper but still have some slack. This calls for a spring. After reading work that others have done I devised a new holder using two ballpoint pens. They are, interestingly enough, identical pens with different brands on the side, most likely from the same swag manufacturer. If you attend conferences you likely have tons of these lying around.

By combining two pen barrels and putting a spring at the top I can get the pen to sort of float in the middle but pressed against the bottom end. The spring pushes it against the paper but lets it slide up when needed. It acts sort of like a tiny shock absorber. Good enough for this version.

On the electronics side I finally mounted the various boards on a sheet of plastic. Using my Dremel drill press I set them up on inexpensive spacers from the Robot Shop. I also gave the steppers proper four pin connectors from Adafruit since I'm always plugging and unplugging them. The drivers still have some breadboarding to get rid of, but things are improving. I also added a real power switch for the motor current instead of having to constantly unplug the wall wart.

On the software side I wrote some new Arduino sketches using the excellent open source library AccelStepper. It handles acceleration and multiple steppers with ease. Now the motors sound like engines powering up.

NES Controller

Oh yeah; my teaser from before. The NES Controller.

I found two original NES controllers at our local electronics recycling shop for five bucks each. I love these things. These two are probably over 20 years old and they still work perfectly. Nintendo designed them to be bullet proof. (Sadly their design of the cartridge connector was not so well thought out).

The controllers use a simple serial protocol over their five pins. You can hook them up directly to an Arduino using jumper wires but I found an NES breakout board (Robot Shop) to give it that nice finished look. The software side is easy using the NESpad library. Note that this code is a few years old and won't compile out of the box. I suggest using my fork of the lib which is patched to compile with Arduino 1.0 and higher.


Every good project needs a name. I'm tired of saying "my CNC drawing machine thingy" over and over. Thus I dub thee "Clyde", after the orange ghost of Pac-Man fame. Apparently his other names (translated from Japanese) are "stupid", "slow guy", and "pokey". Fortunately Clyde is easier to pronounce than Guzuta and Otoboke. He may be slow but he tries really hard and gets the job done.

Next Steps

I'm going to take a week off from Clyde during my parents' visit. Instead we will play with Jesse and finish up some house projects. When I return I will build a new machine over twice the size of the current one: 40cm x 60cm x 10cm. This is big enough to do some real work while still able to fit on my desktop. My latest extrusion order from OpenBeam is enough to build the new machine with plenty left over for reinforcement and experimentation. Have a good weekend!


I love the Arduino platform. I have official boards and lots of derivatives. I love how it makes hardware hacking so accessible. But there’s one thing I hate: the IDE. It’s ugly. It’s ancient. It has to go.

Sure, I know why the IDE is so old and ugly. It’s a hacked up derivative of an older IDE, written in a now-deprecated UI toolkit. Fundamentally, the Arduino guys know hardware and micro-controllers. They aren’t desktop app developers. I suppose we should be happy with what we have. But I’m not.

My previous attempt

About two years ago I created a new IDE built in the same UI toolkit as the original (Java/Swing for those who are interested). It worked roughly the same but looked better. It had a proper text editor (syntax highlighting, line numbers), better serial port and board management, and a few inline docs, but basically worked the same; just some UI improvements.

I posted my new IDE on the Arduino forums and got almost no response. Pondering this over the past two years, I've realized the forums were the wrong place to launch a new IDE. The people there are experts. They are happy enough with their current tools that they don’t want to switch, and they don’t need all of the improvements that a better IDE could provide, like inline documentation and auto-library-installs. They are also more likely to simply use a general purpose IDE. I’ve done this myself and it works okay, but I’m still not satisfied. Arduino is special. It deserves better.

As the Arduino platform grows it needs a better out of the box experience. We have more and more novices hacking for the first time. We need an IDE that is truly top notch. Something that is custom built for Arduino, and the tasks that you do with it.

Some have asked me, “What do you want to do that the existing IDE doesn’t already?”. That’s a good question. If we are going to seriously invest in something new then we need some good reasons. Let’s start with the basics: installing libraries and boards.

Library management

Right now you find a library somewhere on the web, download the zip file, and unzip it into your ‘libraries’ directory. You do all of this outside the IDE even though you will be using it inside the IDE. Next you need some docs. There is no standard for the docs, but there’s probably something to read on the website you got the lib from. At least there are some examples. Oh, but they are buried deep inside a menu.

Now suppose it’s a month later and you are working on a new sketch. Do you remember all of the libraries you have installed? How do you search for them? Do you have the right versions? Do they work with all of your Arduino boards or just some of them?

It’s the year 2014. Why can’t the IDE already know about the libraries out there? It should let you search for them by name or keyword. It should let you search through the example code. Once you find the library you need, it should install it for you automatically. Where does it go on disk? Who cares?! That’s the IDE’s problem. Finding and installing a library should be a single button click (or maybe two clicks if we are picky).

Board management

The many Arduino boards are listed in a nested menu, derived from the boards.txt file. This text file is only updated when the IDE itself updates, which isn’t very often, and the list isn’t comprehensive anyway. If the board you need isn’t listed then you have to add it manually, outside the IDE, duplicating the effort of other developers everywhere. Why can’t the IDE just fetch a list of all boards from the internet somewhere; a list updated by the actual vendors of those boards, so it’s always up to date?

This list of boards actually contains quite a bit of useful information, like how much RAM is in that board. However, this information isn’t actually shown anywhere. It’s only shown to the compiler. The IDE should give you full specs on your chosen board right on screen so you can refer to it whenever you want. Furthermore, the IDE should have extra board info like the number of pins, the input and output voltages, and if we are being generous an actual pinout diagram! All of this information is available on the web, just not in your IDE.

The IDE should just do all of this for you. Choose your board from a gigantic list downloaded from the internet. This list includes detailed specs on every known board, updated as new boards are made. You can refer to the board specs in a side pane, and even correlate the pins in your sketch with the pins on the board reference. This isn’t rocket science folks!

So you can see, there’s good reasons for a new IDE. This is before I’ve gotten to forward looking features like a built in Firmata client or data analysis tools. It’s time. Let’s build this.

Now for something completely different

Two years ago I tried recreating the IDE in the same form. Now I want to do something different. I’ve started a new IDE completely from scratch, written in NodeJS and HTML. While it does use web rendering it won’t be a cloud-based IDE. It will still run locally as an app, but using newer GUI technology than Java Swing. Don’t worry, you’ll have a proper app icon and everything. You won’t know it’s NodeJS underneath. It’ll just be a nice looking GUI.

Since it’s a back to basics effort I’m calling this new tool Electron, the fundamental particle of electronics.

So far I have basic editing, compiling, and uploading working for an Uno. I’ve also built a system for installing libraries and board definitions from the Internet using a git repo. This separate repo will contain a JSON file for every known library and board. Currently it has the basics from boards.txt and a few libs that I use, but more will come online soon. If you want to help, adding to this data repo is the easiest way to start. Pull requests welcome.
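
To give a feel for the approach, here is a sketch of what one board entry in that data repo might look like. The field names below are hypothetical, chosen only for illustration; the real schema lives in the Arduino Data repo:

```javascript
// Hypothetical board descriptor; field names are illustrative, not the
// actual Electron data repo schema. The specs themselves are the Uno's.
var unoEntry = {
    id: "uno",
    name: "Arduino Uno",
    chip: "ATmega328P",
    ram: 2048,        // bytes of SRAM
    flash: 32768,     // bytes of flash
    digitalPins: 14,
    analogPins: 6
};

// With entries like this the IDE can show full specs on screen,
// instead of hiding them in boards.txt for the compiler alone.
console.log(unoEntry.name + ": " + (unoEntry.ram / 1024) + "KB RAM, " +
            (unoEntry.flash / 1024) + "KB flash");
```

Because each entry is plain JSON in a git repo, adding a new board is just a pull request away.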

Electron IDE repo

Arduino Data repo

Here’s a rough screenshot. The GUI will greatly change between now and the 1.0 release, so just think of this as a guideline of the final look.

If you are a Node/HTML hacker please get in touch. There’s tons of work to do. Even if you are just an Arduino user you can help by providing feature requests and testing, testing, testing.

Thank you. Let’s bring Arduino development into the 21st century.

One of the benefits of my job at Nokia is the ability to do in-depth research on new technologies. If you follow me on G+ then you know I've been playing with 3D printers for the past few months. As part of my research I prepared a detailed overview of the 3D printing industry that goes into the technologies, the companies involved, and some speculation about what the future holds, as well as a nice glossary of terms. Nokia has kindly let me share my report with the world. Enjoy!

3D Printing Industry Overview

I recently got a 3D printer and boy are my arms tired!

A bit tongue-in-cheek, but yes 3D printers can be a pain to set up. While I love the concept of 3D printing, the industry is currently in the pre-Model-T phase. There are hundreds of models, mostly incompatible, and they require a lot of futzing and tweaking to get them working properly.

The good news is that 3D printers are getting better very quickly. In the two years I've followed the technology prices have dropped in half and quality has doubled. Exciting stuff, but until a month ago I hadn't actually jumped into the 3D printing market myself. That all changed when I saw the Printrbot Simple.

Printrbot Simple, $300 kit

The Simple is a $300 printer from Printrbot. While most of their printers now come pre-assembled, the Simple is still available as a kit, and at an amazing price. I knew going in that it would likely have limitations, but I can honestly say it was more than worth the price. The prints aren't perfect, but they are pretty good. It's been a great introduction to 3D printing and I expect to make lots of things over the coming months.

Calibration object at 3mm, gray PLA, Printrbot Simple

The Simple is cheap through clever engineering and the fact that you have to assemble it yourself. For an extra $100 they will pre-assemble and calibrate it for you, but if you are serious about 3D printing I think you should build it yourself. It was an amazing learning exercise and gave me a crash course in mechanical engineering.

All that said, I ran into some hiccups during the initial assembly and calibration. I only got decent prints after a few days of futzing. I expect lots of people will be opening Printrbot Simple kits Christmas morning, so I thought I'd spare those happy new Simple owners (perhaps you are one of them) my headaches by writing a new getting started guide. Behold!

Getting Started with the Printrbot Simple

read for moar

This guide started out as my notes while building the kit, but after photos and forum feedback it turned into an epic four-thousand-word tutorial. Did I mention it was epic? I didn't plan to write that much; it just kept going. And you, gentle reader, are the beneficiary.

I've tried to not just cover initial setup, but also teach basic 3D printing terminology. When you start getting into 3D printing as a hobby you will immediately come up against terms like E value, PLA, and hot end. Needless to say these can be confusing to the new enthusiast. I've also included a troubleshooting section to cover the most likely print failures you will face. With pictures. Lots and lots of pictures. Oh, and the source for the whole thing is on github.


I love symbol fonts. My new favorite is Font Awesome, an open source font with over 300 icons. Symbol fonts are great because they are pure vectors. They scale with everything else in your page and look pixel perfect on any DPI display, retina or otherwise.

However, sometimes I really just need an image. While working on an iOS application I wanted to use an image for a button. Finding decent iOS icons can be tricky, but Font Awesome matches the new iOS 7 look pretty well. Of course Font Awesome is a vector font and I need a PNG. What to do, what to do?

Well, I could have used Photoshop to rasterize that one glyph, but that's kind of a pain. Especially if I have to go through the whole process for multiple glyphs. What I want is a rasterizer that will generate an image for every glyph in a font. I hoped someone had already done this for Font Awesome, but I couldn't find one. Sometimes you've just got to do it yourself.

So this is my font slicer.

Please click to view the demo.

The FontSlicer is a pure HTML5 app with everything done client side. I first tried to find a Node.js server-side rasterizer but in the end just went with HTML5 Canvas.

The really cool part is that you don't download a single image. Instead it renders a small PNG image for each glyph in memory using `Canvas.toDataURL`, then streams them into a zip file that you download, thanks to the handy JSZip JavaScript library. Because everything is client side, nothing is actually downloaded from the server after the page loads. I thought that was pretty nifty.
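The glyph-to-zip pipeline can be sketched roughly like this. This is my own reconstruction, not the actual FontSlicer source; `sliceFont`, `dataURLToBase64`, and the 64-pixel canvas size are illustrative assumptions:

```javascript
// Sketch of the FontSlicer idea: draw each glyph onto an in-memory
// canvas, serialize it to a PNG with toDataURL(), and stream the
// base64 payloads into a zip using the JSZip library.

// toDataURL() returns "data:image/png;base64,<payload>"; JSZip accepts
// the raw base64 payload, so we strip everything up to the comma.
function dataURLToBase64(dataURL) {
  return dataURL.substring(dataURL.indexOf(',') + 1);
}

function sliceFont(glyphs, fontFamily) {
  var zip = new JSZip();                          // global from the JSZip script tag
  var canvas = document.createElement('canvas');
  canvas.width = canvas.height = 64;
  var ctx = canvas.getContext('2d');

  glyphs.forEach(function (ch, i) {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.font = '48px ' + fontFamily;
    ctx.textAlign = 'center';
    ctx.textBaseline = 'middle';
    ctx.fillText(ch, canvas.width / 2, canvas.height / 2);
    // The render happens entirely in memory; nothing touches the server.
    var png = canvas.toDataURL('image/png');
    zip.file('glyph-' + i + '.png', dataURLToBase64(png), { base64: true });
  });
  // Returns a Promise<Blob> you can feed to an <a download> link.
  return zip.generateAsync({ type: 'blob' });
}
```

The blob that `generateAsync` resolves with can be turned into an object URL and attached to a download link, so the browser never makes a second network request.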

or, an epic post in which Josh declares his boredom of smartphones, creates a new world order, and redirects this site towards evil purposes.

I Am Tiredman

I am tired of smartphones. It's true. They are done. Baked. And boring. It's not that I don't like mine. It's just that I don't see anything interesting in them anymore. I'm happy to use one, but as a career target for innovation I feel they are done. Before you jump all over me about how I could possibly feel this way when I work for a smartphone company please allow me to explain.

I've spent the last seven years championing underdog platforms. First it was Swing, then JavaFX, then webOS. Throughout it all I saw the progression of smartphones from promising niche technology (my Treo 300 -> Treo 680) to ubiquitous consumer product. This really is a good thing. We now have the ability to check our email, take pictures, listen to music, and design killer apps for a class of tiny devices with immense computing power on a miserly power budget. Simply astounding! The future is now, people, the future is now.

So why am I bored? Because we are basically done. We know what a smartphone is now. There will certainly be innovation in the next ten years, but the iPhone 9x won't look much different from the iPhone 4s. It will be a slab with a touch screen running apps. We know roughly what it will be. It won't be some new form factor that will blow away the competition (Google Glass is interesting but it won't take the place of smartphones. It's … something else).

This type of thing has happened before. In the early days of automobiles there were all sorts of crazy designs that we could never drive today. Some used levers to steer. Some actually used reins like the horse and buggy. Eventually things settled down by the 30s, resulting in the classic design of a car. A car of today would still be recognizable to a person from the 30s. The only huge advance was the automatic transmission, doubling the number of people who could successfully drive a car. We achieved the final form of the car. Everything after that has been incremental improvement. Great improvements of course, but incremental. That's where we are with smartphones, or in many ways "Personal Computing". With the advent of the smartphone, now anyone can use a computer to be productive. Good job, old chaps. Well done, I say, good sir. Well done. Indeed.

So now what?

The Big Picture

The winding down of HP's webOS business and my new non-product position at Nokia has afforded me the time and vantage point to look at the big picture. I'm no longer involved in the phone OS wars. My favorite phone is now 'all of them'. I actually have a Lumia 800, iPhone 4S, Galaxy Nexus, Pre3, and the legendary Meego N9. Finally I've had a chance to catch my breath and get some perspective. Along the way I've noticed a few trends that I think will shape the next 10 to 20 years (limiting myself to computing technology).

We are turning atoms into bits.

Replacing atoms with bits is incredibly powerful. We can lower costs, increase reliability, and expand access to technology that was previously available only to the few. At its best, turning atoms into bits not only improves the efficiency of existing things but makes new things possible.

Take, for example, quad-copters. These small, cheap flying vehicles really came out of nowhere, or at least it seems that way. They are cheap and surprisingly good. They are made possible by the decades-long development of cheap sensors and open source software. An autonomous flying vehicle used to need lots of mechanical parts for navigation as well as a physical shape that was aerodynamically stable. Additional stability implied additional size, weight, and cost. A non-stable vehicle was unthinkable because no human could react fast enough to pilot it.

The magic of drones is that they are controlled by a computer system fast enough to keep up with an unstable system. This removes the constraints of aerodynamic stability, resulting in a drastic cost and weight reduction. Bits replaced atoms. Now that quad-copters are accessible we can think about doing truly new things with them, like hovering servers and taco-delivering robots. A big enough change in efficiency allows a qualitative change, not just a quantitative one. Quad-copters are just the beginning. I expect we will finally get our flying cars this way.

Bits replacing and controlling atoms creates so many new opportunities. I am very excited by the possibilities of 3D printers and home CNC machines. For a few hundred dollars you can build a machine that will carve wood, drill precise holes, create circuit boards, and draw pretty much anything. For a few hundred more you can have an additive 3D printer.

This is a trend that is just starting to pick up steam. I can't wait to see where it leads. If we can really build it. Which brings me to my next point.

We are tracking our atoms with bits

Tracking and controlling our atoms is part of the next big thing. Yes, we've been doing this in some form since the dawn of the industrial revolution, but I have never seen the technology become so democratized. Take a look at Kickstarter. Count the number of projects that are trying to let people build their own contraptions, program their own hardware, and monitor their own world. I think we can take Kickstarter as a leading indicator. We are nearing a big change of some sort centered around sensors and actuators.

Sensors and actuators are the interface between software and hardware. I'll talk more about this in the future, but suffice it to say: programming physical objects is like a breath of fresh air. Software feels more real when it does something beyond calculating math more efficiently.

We can't dream big anymore

I realize this sounds like the opposite of what I've just said, but please hear me out. I feel like we are a very risk-averse culture now. Children can't run on the playground at school anymore for fear of lawsuits (yes, I am serious). In the 50s we dreamed of space colonies; now we can't leave low earth orbit. The original Empire State Building was constructed in 13 months. Today you can't even file the paperwork for the site that quickly. We know how to make expensive buildings but not grand ones. Can you imagine anyone commissioning a building as elegant as Grand Central Station today? I wonder if we just can't dream big anymore.

The smartest engineers of my generation are currently spending their collective brainpower trying to get people to watch more advertisements instead of building the next world changing technology. This is just sad to me. Where will the next Steve Jobs come from if we are just making companies to be sold to Google and Facebook?

I'm a futurist. I'm obsessed with old visions of the future, especially the 1950s. I love the positive view of the future we used to have. How did we stop dreaming? Where are my floating cities and intelligent androids? Where are my replacement kidneys and synthetic eyeballs?

I'm clearly not the only person thinking along these lines.

Next Steps

Many of my interests are converging and I feel like I'm at an inflection point in my career, so it's time to change this site to fit.

I can't talk about my job at Nokia, but I can assure you it's not to build new phone apps. But I can tell you that I'm a software guy. I'd love all hardware to be emulated. It's turtles all the way down. But the realness of controlling physical atoms is intoxicating. During my week at OSCON I filled up on talks about home automation, Arduino, and embedded computation.

I've been doing this blog for three years now. I've focused mainly on software and user interface design. It's time for some changes.

First, I have released my book HTML Canvas Deep Dive as open source. I suspect I won't be able to maintain it as much as I'd like, so the best way to keep it alive is to give it to the world. It's currently on GitHub awaiting pull requests.

Next: Physical computing. I have several projects on my plate around physical computing. I've been doing more with Arduino and ARM chips lately. I've also purchased some stepper motors and a set of extrusions from Open Beam to build my own CNC machine. I'll be blogging as I do this, so look for more posts around building actual real physical things, from the point of view of a software guy learning about hardware. It should be enlightening.

And finally. I can't make the world dream big again. But I do know people who are dreaming big. My mission, which I have chosen to accept, is to bring their stories to you. I am starting a new series of long form innoviews with innovators. (get it? inno-views? inno… ah, never mind). I haven't decided if these will be recorded or written, but I hope to bring out not just what these people are working on but why they do what they do. What drives them to push the boundaries? And what do *they* see as the important challenges ahead?

My blog posts will continue to be about design, and thus the name of this blog, but more on the design of physical things, their interaction with computation, and the design of things by innovative people with a vision of the future they want to see.

I hope you will enjoy the next three years even more than the last.

After 5 amazing years at Sun I have decided not to move on to Oracle. Instead I will be joining Palm as a developer advocate for the WebOS. The WebOS is an open platform with an exciting future on a variety of Palm devices, which I'll talk about in great detail soon. For now though, I want to talk about Sun, why I'm leaving, and the future of Java & JavaFX.

I joined Sun in 2005 to work on the Windows L&F for the Swing team. Since then I've been on several different teams, always working with some incredible engineers. First the Swing team, then the NetBeans team to work on the GUI builder, then the JavaFX team to work on the designer, samples & docs, and general development. Finally I've spent the last year working on the desktop client for the Java Store, written in JavaFX.

I didn't start working with Swing at Sun, though. I've actually been doing Java GUI stuff since before there even was a Swing toolkit. In 1995 I learned Java at the recommendation of my favorite TA, Ian Smith. He was convinced that Java, not C++, was the future of OO languages. Shortly after, I began writing AWT graphics hacks, creating the world's first (to my knowledge) Java ray tracer in 1996. I then spent my last year at Georgia Tech working for Scott Hudson on an experimental GUI toolkit called SubArctic (the demos should still work, actually). That work led to my one-year internship at Xerox PARC, where I got to work with Studio RED and met Mark Weiser (who foresaw today's smartphone and embedded computing revolution). I even got to have dinner with Alan Kay when he was at Disney. (FYI: Alan Kay is the man who invented everything in the 1970s, including the iPad.)

After PARC I worked at a few startups doing interface architecture and view engines until the dot-com bust. As required by law, all programmers in Atlanta must work for at least one of the following: Home Depot, CNN, Cox, Verizon, Coke. I worked for Home Depot & Verizon, then joined Docucorp, doing various client- and server-side UIs for each. Finally, while at Docucorp, I started blogging and wrote Swing Hacks with Chris Adamson. It was Swing Hacks which eventually led to the Sun position, and to where I am today.

So, from 1995 until the present I've spent my professional career working on Java GUIs of some sort. Now that it's the year 2010 (freakshow!) I've decided to start working with something completely different: HTML, JavaScript, and CSS on the WebOS. The Oracle transition seems like a good time to make the change.

Don't think that I'm leaving Java and JavaFX behind. I'm proud of the work we have done, from making UIs easier to code, to reinventing JavaDocs. JavaFX is a great technology with a bright future now that it will have Oracle's financial and marketing support. Now I'll just be involved with them from the other side, as a user and application programmer. I still plan to work on Leonardo, my wireframing tool in the Java Store. I've also got another release of MaiTai ready to ship when JavaFX 1.3 & Prism are out. The JavaStore is going to be a big part of desktop computing, and I look forward to buying lots of great apps through it.

Finally I want to thank my incredible colleagues at Sun. Rich, Jasper, and Amy: you've done a great job designing a GUI toolkit for the 21st century. Jeff, Jeet & Nandini. I've greatly enjoyed working in your teams. And most of all, I feel incredibly fortunate to have worked with the inventor of Java (and the Java Store), James Gosling. I still have your signed dollar on the wall of my office.

When I was in college there were five companies I dreamed about working for. Sun was one of those companies. All my wishes for a bright Java future.

This entry is my first Innovator Interview, with Terence Tam, creator of the amazing OpenBeam aluminum system launched on KickStarter. I first discovered OpenBeam while doing research for my CNC machine. Being so happy with the product, I contacted Terence for an interview. He graciously took time out of his busy schedule to speak with me about OpenBeam, how an engineer cooks a turkey, and lessons learned from running a KickStarter project. I think you will enjoy reading it as much as I enjoyed talking with him.

JM: Why don't you introduce yourself and talk about what OpenBeam is.

TT: Okay so, my name is Terence Tam and I am a mechanical design engineer. I have a day job working at a company that builds optical microscopes for biological sciences research.  I came up with OpenBeam in my spare time.

Prototype Quad-Robot.  Richard DeLeon, Metrix:Create Space

OpenBeam is an extruded aluminum construction system. In the industry it's called a T-slot extrusion and the name comes from the four T shaped slots that run down the sides of the beam. The thing that makes Open Beam different than most of our competitors is that it is an open source system. We publish all the mechanical CAD files, publish the engineer prints and publish all of the specifications, which is why it's called OpenBeam.

JM: So someone could make their own beam on their own out of some other material if they wanted to?

TT: Well, yeah, there could be someone using Kickstarter money for Tangi-beam out there, but more importantly the CAD files for all of the connectors are freely available. You actually could print a set of them on a RepRap if you like. It's probably not going to be cost effective for you to do so, because we injection mold them and we run the mold in pretty high quantities, which brings the cost down. So that brings me to the other part of what makes OpenBeam unique. Because it's open source and crowd funded, I do not have investors to answer to. I designed the system for the lowest cost possible. I did this by injection molding the joining plates and using standard metric nuts instead of proprietary nuts.

JM: Is that common among other systems? To have special dimensions that require special parts?

TT: As far as I know pretty much everyone requires you to buy a special nut from them. And that's actually where most companies make their money. They sell the beam at ten dollars per meter which is a pretty common price, but then charge 30 cents a nut, and with each joint needing 3-5 nuts, this adds up quickly.

JM: Woah. So that's the razors and blades model.

TT: Exactly.

JM: Had you used some other extrusion before and then decided to go your own route?

TT: Yeah. I used 80/20 quite a bit early on in my tech career. I used to mentor for FIRST robotics (Dean Kamen's For Inspiration and Recognition of Science and Technology program), where mentors partner up with kids and we have six and a half weeks to build a robot. We used a lot of 80/20 for our competition robots. It's a nice system. It's a little heavy for competition use, so we'd end up drilling a lot of holes in the extrusions where we didn't need the material, to take the weight off. But it is a very flexible system. I mean, with a team of kids we can give them a bunch of Allen keys and nuts and bolts and in four hours they can build a frame and start driving around, so it's definitely a very powerful system. For work we use 80/20 for some framing for prototyping and such. We've switched to OpenBeam at work, obviously.

JM: It's always good to eat your own dog food.

TT: Right. I definitely use OpenBeam for work projects and personal projects.

Kaloss Crate.  Matt Westervelt, Metrix:Create Space

JM: So do you have a rack in your kitchen made out of OpenBeam?

TT: Um… that hasn't passed the girlfriend acceptance test yet but I use it a lot in my garage for storage. I have some robotics projects that are using it and some 3D printer projects. At work mostly I use it for framework for building test equipment.

JM: Now you said there is a 3D printer project. I notice you just posted on one of your several blogs about something called, what was it, lemon drop?

TT: Lemon Curry. Lemon Curry is actually a RepRap project. As far as I can tell there isn't an official Lemon Curry machine yet; it's still just a collection of links. But it's really a mission to build a DLP-projector-based resin printer. The common RepRaps are all extrusion based, or fused filament fabrication (FFF) machines, where you have a spaghetti of plastic filament passing through what is essentially a hot glue gun, depositing it using either an XY bot or a RepRap-style bot.

JM: And that's why a lot of the things I've seen produced that way, they sort of have these striated lines in them?

TT: Right, if you look closely at them you can see layers where the spaghetti has melted and re-solidified. You can see the different layers. In a DLP resin type machine your building material is actually a photopolymer, which is a kind of plastic that is tuned to harden into solid plastic at a very specific wavelength of light. So there are resins that will harden at about 405nm and 385nm (the UV range). At 405nm you can actually use visible light to hit the resin and it will cure. So the machine that I'm building takes the solid geometry and slices it, and instead of trying to extrude with an XY print head, it takes the cross sections and projects them with a DLP projector onto a vat of resin, and the part is pulled out of this vat.

JM: So it's doing an entire layer at a time, then?

TT: Yes, and potentially it can build faster and you can hit higher accuracy with it. We are currently looking at 100 micron steps in X, Y, and Z. At 100 microns the projector I'm using is 1024x768, so that's a 102.4 x 76.8 mm build space and you can hit 0.1 mm resolution. Each pixel cures a 0.1 x 0.1 mm area on the resin. So it's a pretty high resolution machine, and the biggest advantage over an FFF machine is there won't be any delamination problems. In an FFF build, because of the way the layers are laid down, the part is actually really strong in X and Y, but if you flex it in the Z axis you can actually shear layers off. In a resin printer, because of the way the curing process works, the entire object cures into a solid block, so there aren't layers that can delaminate at a later time.


JM: So how close are you to having your first print?

TT: I want to do this right. What I mean by "doing this right" is, I'm using OpenBeam, of course, and I'm using Open Rail as my linear bearing. I'm developing a lower cost linear bearing carriage for Open Rail. The reason for doing this is, well, I love what the Open Rail guys have done, but I think for 3D printer applications the V-groove bearings are a little bit expensive and, frankly, probably overkill. So I'm trying to develop an injection molded part, and I'm prototyping it by machining it out of production-representative materials so I can test the plastic's self-lubrication properties. I'm probably four to six weeks away from my first print.

JM: So this would be a different carriage but it would still attach to the Open Rail?

TT: Yes, this would be a different slider carriage. My design will be modular so people can adapt it for Maker Slide and Open Rail, and of course Open Rail + OpenBeam combinations. There are a couple of dimensions I'm going for. Specifically, if you are going to use the OpenBeam NEMA 17 motor mount, you can put one of the OpenBeams on each side and that spans a 75mm gap. I'm designing it so that on a 75mm gap, if you put the Open Rail extrusions on the sides, the carriage will slide right over that. You can put matching brackets on the other side, you can use a 608 bearing adapter to mount a tensioner pulley, and so for about twenty-five dollars total worth of hardware you now have a linear rail system. (My goal is to keep the carriage under ten dollars.)

JM: That would be great. When I first started looking into CNC machines I found the linear rail system was always the most expensive part.

TT: Right, and if we can drop the price on that and drop the price on the 3D printer electronics (and I know some guys at Metrix:Create Space working on that problem,) we can drop the price of a 3D printer significantly.

H-Bot Prototype.  Gavin Smith, Robots and Dinosaurs

JM: I'm curious, what drove you to decide to make your own company and launch a new product? Was it just that you wanted it to exist? I mean, lots of people came up against these problems but they didn't decide to start a company around it?


TT: Well, I think when you are building an open source hardware project like this you really need some sort of continuity. This can't be just a flash-in-the-pan project on KickStarter. For all of the guys who've backed me on KickStarter, it would be a real shame if tomorrow I decided "you know what, I'm tired of this, I don't want to do this anymore." I kind of got the feeling that's what MakerBeam did. If you look at their history and you look at their fulfillment, yeah, they fulfilled their KickStarter rewards, but today if you want to buy MakerBeam the only place you can buy it is through Spark Fun in the US. And you can only buy it in either a small or large kit. You can't buy individual components. It's a very restrictive purchasing model. So it's really unfair to all of the guys who have backed them, because now they have a bunch of parts and can't get more. There's nothing worse than not being able to get more and build stuff. So at the beginning of the KickStarter I decided that I would spin up a company to do this, to handle fulfillment and ongoing support.


Now, truth be told, yeah, the extra income source is nice, but I've actually reinvested everything back into my company. I set up the project in such a way that when I bought my first lot of extrusions I would have a lot of additional material left over. If people needed extra material they could order it from me and I could ship it to them. We have a web store up and running now.

JM: So you planned this as a business from the beginning; something that would be sustainable?

TT: Right. I designed this to be sustainable. Someone else is handling fulfillment for me, so when my day job becomes too busy I'm not delaying orders by three or four weeks before I can ship them. Someone else is doing the packaging and shipping. I decided to be very hands off when it comes to day-to-day operations. My time with OpenBeam right now is spent on building the infrastructure and growing the community. I'm working on a forum right now. I'm working on getting open designs out there around OpenBeam so that people can reference them and design around them.

JM: The KickStarter ended fairly recently, right? May or June?


TT: It ended in late April, early May.

JM: When did your web store open?

TT: I never really announced it to the world, but it was running by the end of June. For the first eight weeks I was more concerned about fulfilling the existing KickStarter pledges. With the exception of the trebuchet kits, which we are still building, tuning, and documenting, we've shipped all of our orders from KickStarter. The majority of them shipped in June with a few in July. We actually shipped earlier than the original schedule.

JM: I seem to recall you had some issues with actually figuring out how to mail heavy pieces of metal through the postal service.

TT: *laughs* Yeah. Actually it was FedEx. We had all the orders ready to go but I didn't feel too confident, I guess, so I randomly grabbed five orders and shipped them first. I was actually more concerned about the billing and the database merging and making sure that all the waivers were generated correctly, so I wouldn't be hit with a six-digit FedEx bill. And then the reports came in, like "hey man, you shipped me an empty tube!" or "hey man, I got the extrusions and nothing else!" We later found out the tubes had been breaking open in shipping and FedEx had been sealing them back up for us.

JM: Well, that's a good thing to learn.

TT: So our backers received the orders fully sealed but all of the stuff on the inside was gone! So we put a ship hold into effect. We re-engineered the package. We actually did a bunch of drop testing. FedEx has a standard test. It's something like 6 drops from one meter onto a steel plate.  We engineered our packages to survive that at a minimum. Our package can withstand about twelve drops from eight feet onto concrete. We have this on high speed camera. We looked at the footage and documented everything. FedEx is in the process of certifying our packaging right now so we can get a discounted insurance rate from them. All of this is to put us in a better position so that when we put out a package we know that our customers will receive it in good condition.


It's funny. For most people who do a KickStarter, shipping and handling is probably the last thing on their mind. You go through such a long process to bring a project to life, but that last piece is so critical. The first thing that your customers will see is your packaging. If there is any damage when they open it, all the hard work is washed down the drain. I wish I had paid more attention to my packaging earlier on. That's one of my regrets about the way that my KickStarter went. It's not bad right now, it does the job, but it could have been better.

JM:  I have to say all of the extrusion I ordered arrived in perfect shape. The metal caps on the end are sturdy. I had to get out my big pliers to open it.

TT: But the metal caps are about a buck fifty per package and that's not including the cost of the tubes. And it's a one time use cap. Once we pound that cap in there's no way to get it back out. Someone once ordered something, asked to upgrade the shipping to air freight, and asked me to toss in an extra screwdriver.  I was like, "Umm.. no. You've seen how we package these things." It's pretty much a destructive process to get anything out so… It does the job, but it's kind of a pain in the butt sometimes.

JM: It sounds like you've really designed this to be a coherent business that has a bright future. Are you planning to start ramping up production? Do you have advertising? Are you going to start working with retailers or resellers?

TT: Right now I'm more focused on the international backers. The reason for that is the challenges of shipping internationally are about five times bigger than shipping domestically. Domestically, if someone has heard about OpenBeam already, they know where to get it. The URL is molded into every plastic piece that we sell. Our existing infrastructure, although not great, will handle domestic orders just fine. Twenty percent of our KickStarter backers were international, though. They come from all over the European Union. They come from Australia, New Zealand, the Pacific Rim region. I think I've got a couple of guys in Russia. Supporting these guys is important too, but it can't be a hundred bucks in shipping for fifty bucks in parts. So we are very actively trying to spin up international distributors. Solarbotics actually just became our first international distributor, so if you are Canadian, they carry OpenBeam components now. We are talking to someone in New Zealand and Australia. We are hoping to have Hong Kong coming online as a distribution point. They will service Singapore and probably some of the other international orders until the EU guys have spun up.


So that's where my focus has been. I'd like to do some more advertising in the US. I'm actually kind of on a budget crunch right now so I'm not sure if running an ad is going to be the best way to do this, but I do a lot of "community outreach". So whenever I'm traveling, like in about three or four weeks I have to go to the Twin Cities Minnesota area for a wedding, I always see if there's a local hacker space. I bring an OpenBeam kit and I offer to buy them pizza. That's how I've been selling Open Beam in the US. And it works. I mean, not to sound cocky, but the product sells itself.  The guys who are into building things, they are my target audience; and they screw a couple of parts together and then they get it.

JM: I took some to our local maker space here and the guys were very excited to play around with it.

JM: So, after you get all of your shipping stuff worked out, what do you think is the next step? Do you want to introduce kits? Bulk pricing? New brackets?

TT: So there's a couple of exciting things going on. We've actually partnered up with MicroRax and we're going to be offering pre-cut lengths and pre-cut construction kits. MicroRax will also have the ability to do custom length cut extrusions too. With that I can then offer a kitting service to 3D printer builders. There are a couple of plans out there for Rep Rap machines that use extrusions. I can actually offer a kitting service to guys who are designing 3D printers and my dealer pricing is very competitive, and the total cost of ownership is probably lower than Misumi when you factor in the bracket and fastener costs.

I plan to offer my design in a kit form as well. That competition will help lower 3D printer pricing across the board. I have a couple of friends at Metrix that are working on cheaper 3D printer electronics, so that's pretty cool. We are going to be releasing kits in conjunction with them. With the pre-cut kits, I'm probably going to be shipping some kits out for review and hopefully getting some interest and business that way.

Camera rig with iPod teleprompter.  Matt Westervelt, Metrix:Create Space

JM: I notice that you have a couple of other blogs. There's one for yourself and one for TamLabs, though it looks like you haven't updated them in a while. I'm guessing you've been very busy lately.

TT: *laughs*. Yeah.. So I have a personal blog at  I think the last update is "okay guys! the KickStarter is going live!"

JM: And then you just fell off the face of the earth.

TT: Yup. I like to cook and I like to take pictures, so that's pretty much my outlet for documenting recipes and photos I've shot.

JM: Now did I see in the blog a turkey with wires hooked up to it?

TT: *laugh*. Yes you did. So there's actually a fun story behind that. That was the first time I had to cook for my current girlfriend's, actually soon to be fiancée's, parents. They tasked us with providing the turkey for the Thanksgiving dinner; for both of her grandmothers and her parents and a couple of aunts and uncles. Well, if you ask an engineer to cook a turkey on a barbecue that's what you're going to get. It's a couple of thermocouples monitoring the cooking process. We did a couple of test runs and we ate a lot of smoked turkey in the week leading up to Thanksgiving making sure that we got it down.

JM: So you had all of your trials beforehand. What was the final result?

TT: I thought it was a little bit overcooked to be honest, but they loved it. They said it was one of the moistest turkeys they've ever had. It would have been preferable to have two smaller birds instead of one big bird, that said... the thermocouples are accurate to three degrees centigrade so by monitoring the temperature you can pull it out the moment it's done. There's only about a fifteen degree range for white meat before it turns really, really dry, so that was why we had all the instrumentation and monitoring. It was a really cold winter day in Chicago so I kept the fire going, smoked it over apple wood, and it turned out okay.

JM: Well they liked it, that's all that matters.

TT: Yeah, they liked it! Exactly!

JM: Were you happy with your KickStarter experience? Do you think you would do another one?

TT: Yes. I am definitely looking into another KickStarter, probably one for the linear bearing block once I have it designed. There are some minor issues with KickStarter in how it handles data and the tools. After my last rewards ship out I will be doing a series of blog articles on my post-mortem. I'll probably email the KickStarter team a link to it with some of the constructive criticisms of their website. But overall it has been really positive. The cool thing, and maybe it's because I'm in the open hardware / technology category, is that everybody has been really, really great and really understanding. When I had problems with shipping everybody was like "yeah hey. No worries. We understand. Delays happen. We'd rather that you sort this out before you take orders on the web-store". Not one person complained about the communications or the delays.

JM: Perhaps we are all just so used to dealing with beta software.

TT: Yeah. Certainly when you read some comments on the more consumer product oriented stuff, like the iPhone docks, you hear a lot more people complaining about "oh they slipped their deadline… This is bogus. The guys took their money and ran." I had none of that. I'm really thankful to my backers for that.

JM: When you launch your next product, why are you thinking of going with KickStarter rather than just making a run at trying to sell it?

TT: Well.  If you think about the percentage that they take, the 5% off the top of the project, it's some of the cheapest advertising you can buy. Open Beam raised over $100k. KickStarter took 5% of it, so just over $5k and another $5k or so went to Amazon credit card processing fees. For me to run a single run of ads in Make Magazine, that itself is going to cost $1500. Essentially for $5,000 I got a product that was publicized around the world. I look at the countries and continents that my backers are coming from and I say to myself, “holy crap.” Even on a $20k advertising budget I wouldn't know how to reach some of these guys. That alone is worth it to me.

The other thing that's nice with open hardware is that it takes a lot of the risk out of the picture. Especially with open hardware, where I'm publishing my drawings, my cad files, all the specifications, etc….

JM: I imagine you still have to have a lot of capital to do that initial order.

TT: Yeah, there's a bunch of capital involved, and at the same time my only protection is to move fast and keep innovating, and be reasonable about the prices I charge.

JM: I've been very happy with the beam. I'm moving ahead on my own 3D printer.

TT: I'm really happy to hear that.

JM: What do you think people are going to build with OpenBeam? It sounds like robots are on the table and certainly 3D printers. What else do you think people would want to do with it?

TT: Just today I got an enquiry from someone building a Magic card sorting robot. I thought that was pretty cool.

JM: A magic card… oh the Magic the Gathering cards?

TT: Yeah. *laughs* There's a couple of guys in Spain working on a structured light 3D scanner and another gentleman is building his own hologram studio with OpenBeam. I'm hoping that he'll share some photos of what he has done.

JM: I'd love to see a project gallery on the site.

TT: Yeah. I'm working on that too. There's some pretty cool projects in the works, definitely. There's a school in Marin County that just bought a bunch of OpenBeam so I'm guessing their students will be building a bunch of robots. The cool thing about a construction system like this is that you get a lot of creative folks. Watching what they come up with is quite a reward.

JM: Awesome.  Thank you Terence!

my amino talk from OSCON was pretty *sweet*!

This is part 3 of a series on Amino, a JavaScript graphics library for OpenGL on the Raspberry PI. You can also read part 1 and part 2.

Amino is built on Node JS, a robust JavaScript runtime married to a powerful IO library. That’s nice and all, but the real magic of Node is the modules. For any file format you can think of, someone has probably written a Node module to parse it. For any database you might want to use, someone has made a module for it. The npm registry lists nearly ninety thousand packages! That’s a lot of modules ready for you to use.

For today’s demo we will build a nice rotating display of news headlines that could run in the lobby of an office using a flatscreen TV on the wall. It will look like this:


We will fetch news headlines as RSS feeds. Feeds are easy to parse using Node streams and the feedparser module. Let’s start by creating a parseFeed function. This function takes a URL. It will load the feed from the URL, extract the title of each article, then call the provided callback function with the list of headlines.

var FeedParser = require('feedparser');
var http = require('http');

function parseFeed(url, cb) {
    var headlines = [];

    http.get(url, function(res) {
        res.pipe(new FeedParser())
            .on('meta', function(meta) {
                //console.log('the meta is', meta);
            })
            .on('data', function(article) {
                console.log("title = ", article.title);
                headlines.push(article.title);
            })
            .on('end', function() {
                cb(headlines);
            });
    });
}

Node uses streams. Many functions, like the http.get() function, return a stream. You can pipe this stream through a filter or processor. In the code above we use the FeedParser object to filter the HTTP stream. This returns a new stream which will produce events. We can then listen to the events as the data flows through the system, picking up just the parts we want. In this case we will watch for the data event, which provides the article that was just parsed. Then we add just the title to the headlines array. When the end event happens we send the headlines array to the callback. This sort of streaming IO code is very common in Node programs.

Now that we have a list of headlines let’s make a display. We will hard code the size to 1280 x 720, a common HDTV resolution. Adjust this to fit your own TV if necessary. As before, the first thing we do is turn the titles into a CircularBuffer (see the previous blog post) and create a root group.
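If you skipped the previous post, a CircularBuffer can be as simple as this sketch. The real one may differ in detail, but next() wrapping around forever is the only behavior this demo relies on:

```javascript
// A buffer you can pull from forever: next() walks the list
// and wraps back to the start when it reaches the end.
function CircularBuffer(items) {
    this.items = items;
    this.index = -1;
}
CircularBuffer.prototype.next = function() {
    this.index = (this.index + 1) % this.items.length;
    return this.items[this.index];
};

var buf = new CircularBuffer(['a', 'b']);
console.log(buf.next(), buf.next(), buf.next()); // a b a
```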

var amino = require('amino.js');
var sw = 1280;
var sh = 720;

parseFeed('',function(titles) {
    amino.start(function(core, stage) {

        var titles = new CircularBuffer(titles);
        var root = new amino.Group();


The RSS feed will be shown as two lines of text, so let’s create a text group then two text objects. Also create a background group to use later. Shapes are drawn in the order they are added, so we have to add the bg group before the textgroup.

        var bg = new amino.Group();

        var textgroup = new amino.Group();

        var line1 = new amino.Text().x(50).y(200).fill("#ffffff").text('foo').fontSize(80);
        var line2 = new amino.Text().x(50).y(300).fill("#ffffff").text('bar').fontSize(80);

Each Text object has the same position, color, and size except that one is 100 pixels lower down on the screen than the other. Now we need to animate them.

The animation consists of three sections: set the text to the current headline, rotate the text in from the side, then rotate the text back out after a delay.

In the setHeadlines function, if the headline is longer than the max we support (currently set to 34 letters) then we chop it into two pieces. If we were really smart we’d be careful about not breaking words, but I’ll leave that as an exercise to the reader.

        function setHeadlines(headline,t1,t2) {
            var max = 34;
            if(headline.length > max) {
                t1.text(headline.substring(0,max));
                t2.text(headline.substring(max));
            } else {
                t1.text(headline); t2.text('');
            }
        }

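Handling the word-breaking exercise mentioned above isn't much harder. Here's one possible chopHeadline helper (a plain function, not part of Amino) that breaks on the last space that fits:

```javascript
// Split a headline into at most two lines without breaking words.
// `max` is the maximum number of characters per line (34 above).
function chopHeadline(headline, max) {
    if (headline.length <= max) return [headline, ''];
    var cut = headline.lastIndexOf(' ', max);
    if (cut <= 0) cut = max; // one giant word: fall back to a hard break
    return [headline.substring(0, cut), headline.substring(cut).trim()];
}

console.log(chopHeadline('Small startup ships open source cellular board', 34));
```

Anything past the second line is still dropped, matching the two-line layout of the demo.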
The rotateIn function calls setHeadlines with the next title, then animates the Y rotation axis from 220 degrees to 360 over two seconds (2000 milliseconds). It also triggers rotateOut when it’s done.

        function rotateIn() {
            setHeadlines(titles.next(), line1, line2);
            //animate ry from 220 to 360 degrees over two seconds
            textgroup.ry.anim().from(220).to(360).dur(2000)
                .then(rotateOut).start();
        }

A quick note on rotation. Amino is fully 3D so in theory you can rotate shapes in any direction, not just in the 2D plane. To keep things simple the Group object has three rotation properties: rx, ry, and rz. These each rotate around the x, y, and z axes. The x axis is horizontal and fixed to the top of the screen, so rotating around the x axis would flip the shape from top to bottom. The y axis is vertical and on the left side of the screen. Rotating around the y axis flips the shape left to right. If you want to do a rotation that looks like the standard 2D rotation, then you want to go around the Z axis with rz. Also note that all rotations are in degrees, not radians.

The rotateOut() function rotates the text group back out from 0 to 140 over two seconds, then triggers rotateIn again. Since each function triggers the other they will continue to ping pong back and forth forever, pulling in a new headline each time. Notice the delay() call. This will make the animation wait five seconds before starting.

        function rotateOut() {
            //wait five seconds, then rotate back out from 0 to 140 degrees
            textgroup.ry.anim().delay(5000).from(0).to(140).dur(2000)
                .then(rotateIn).start();
        }


Finally we can start the whole shebang off by calling rotateIn the first time.


What we have so far will work just fine but it’s a little boring because the background is pure black. Let’s add a few subtly moving rectangles in the background.

First we will create three rectangles. They each fill the screen and are 50% translucent, in red, green, and blue.

        //three rects that fill the screen: red, green, blue.  50% translucent
        var rect1 = new amino.Rect().w(sw).h(sh).opacity(0.5).fill("#ff0000");
        var rect2 = new amino.Rect().w(sw).h(sh).opacity(0.5).fill("#00ff00");
        var rect3 = new amino.Rect().w(sw).h(sh).opacity(0.5).fill("#0000ff");

Now let’s move the two back rectangles off the left edge of the screen.

        //animate the back two rects

Finally we can slide them from left to right and back. Notice that these animations set loop to -1 and autoreverse to 1. The loop count sets how many times the animation will run. Using -1 makes it run forever. The autoreverse property makes the animation alternate direction each time. Rather than going from left to right and starting over at the left again, instead it will go left to right then right to left. Finally the second animation has a five second delay. This staggers the two animations so they will always be in different places. Since all three rectangles are translucent the colors will continually mix and change as the rectangles slide back and forth.
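If loop and autoreverse seem mysterious, they fall out of simple arithmetic on time. This standalone sketch (plain JavaScript, independent of Amino's animation engine) computes an animated value at any time t:

```javascript
// Value at time t (ms) of an animation that loops forever.
// With autoreverse, odd-numbered cycles play backwards, so the value
// ping-pongs between `from` and `to` instead of snapping back to `from`.
function animValue(from, to, dur, autoreverse, t) {
    var cycle = Math.floor(t / dur);
    var frac = (t % dur) / dur;
    if (autoreverse && cycle % 2 === 1) frac = 1 - frac;
    return from + (to - from) * frac;
}

console.log(animValue(0, 100, 1000, true, 250));  // 25: a quarter of the way out
console.log(animValue(0, 100, 1000, true, 1250)); // 75: on the way back
```

Without autoreverse, the second call would return 25 again: the animation restarts from the left edge on every cycle.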


Here’s what it finally looks like. Of course a still picture can’t do justice to the real thing.



The twit-o-sphere came alive last week with the news that Adobe is canceling their Flash for Mobile products. I even briefly joined in.  Many see this as evidence that the open web has won (it has), or a justified comeuppance for Adobe's historical slights to Apple (it might be), or perhaps vindication of Steve Jobs' anti-Flash rant (it was), and maybe even that Microsoft was really to blame (it's a stretch).  Lost in all this, I wonder, is the effect this actually has on Adobe beyond their short term problems.

Let's step back a minute and consider what Adobe actually does.  They have some enterprise backend products for document management and (amazingly still used) a server side platform in Cold Fusion.  They have some cloud products built on the Flash platform. Then there is Flex, an enterprise application platform designed to steal Java developers from Sun, as well as a mobile advertising and analytics platform (Omniture). And of course Flash for mobile.

Oh yeah. And they also make Photoshop, Illustrator, Fireworks, and a bunch of other industry leading graphics and content creation tools so pervasive that some have called them the Adobe tax.

This really sounds like two different companies, doesn't it?  I'm not exactly sure when, but at some point Adobe strayed from focusing on high quality content creation tools for designers and artists. They entered the "platform space": trying to be an enterprise company, a software as a service company, and own the mobile content market.  That's a whole lot for any company to do, especially one that traditionally focused on content creation tools.  In the end I think it just became too much for one company to do, and it takes away from the thing they are good at: killer tools for designers.  If killing off mobile Flash lets them focus on their core competency then this is a good thing for everyone.

Adobe is known for tools used by professionals to create content. The Flash designer tool is used by professionals to create animated interactive content. Currently, the format of the final output is a SWF file. Do the purchasers of this tool care? Not really. Flash designers want to create content that is viewable by the most people. The audience wants great content accessible from the most devices. Neither of these two groups of people gives one whit about the actual format of the bits.  Flash, the runtime, was simply a means to an end.  With HTML 5 technologies becoming viable for interactive animated content, the Flash designer tool can simply output a new binary blob to be uploaded onto web servers.  The designers won't care, the audience won't care.  Everyone will get on with making/viewing their content and Flash Designer CS 22 will sell millions of copies.  This really isn't a big deal.

Well, except for one group of people who really truly do care about mobile Flash: the makers of iPad competitors. Apple's refusal to allow Flash onto the Safari mobile browser created a market opening for a device that *would* play Flash.  While it was never a big factor for webOS, it was a flagship feature for the BlackBerry Playbook and various Android Tablets.  They've now lost a checkbox in their feature war with the iPad.

No matter. The world will move on.  The mobile web is built on HTML 5 standards. And in 5 years the mobile web will simply be the web; which may foretell the end of the desktop Flash plugin as well, but the end result is the same. Adobe will continue to sell world class content creation tools. Tools which output whatever format the world actually wants. And now, finally, the world wants HTML 5.

Long Live the Web. Long Live Adobe.

Teach them well and let them lead the way
someone who sounds like Whitney Houston

The Status Quo

Last summer I released Leonardo, a cross platform vector drawing tool. Later in the year I released an alpha of Amino, the new Java UI toolkit I wrote in the process of building Leonardo. Since then I've released updates for both, but haven't quite reached a 1.0 for Amino. It's time for some changes: Amino 2 and Amino.js.

Overall I've been pretty pleased with most of Amino. It is reasonably fast, pretty skinnable, and has a very clean API with generics, an event bus, and isolated models. The underlying graphics stack, however, is a mess. Originally I planned to build a second graphics stack on top of JOGL, but it's never worked properly (you can still see the disabled code in the repo). Amino has a basic component tree structure built on a Java2D wrapper but for a modern UI toolkit we need more. We need a real scene graph. Something that has animation and transforms built right in. A library fully isolating the app from the underlying graphics, with framerate and buffering control. Without a real scenegraph Amino will never work well on top of JOGL.
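To make "scene graph" concrete: at its core it is just a tree of nodes where every child inherits its parent's transform, so moving a group moves everything inside it. A translation-only sketch (invented names, nothing Amino-specific) looks like this:

```javascript
// A scene graph is a tree; a node's world position is its local
// position offset by all of its ancestors.
function Group(x, y) {
    this.x = x; this.y = y;
    this.children = [];
}
Group.prototype.add = function(child) {
    this.children.push(child);
    return this;
};

// Walk the tree, accumulating the translation, and hand each
// leaf's world-space position to the callback.
function traverse(node, ox, oy, cb) {
    var x = ox + node.x, y = oy + node.y;
    if (!node.children || node.children.length === 0) {
        cb(node, x, y);
        return;
    }
    node.children.forEach(function(c) { traverse(c, x, y, cb); });
}

var root = new Group(10, 10);
root.add(new Group(5, 5).add({x: 1, y: 2}));
traverse(root, 0, 0, function(node, x, y) {
    console.log(x, y); // world position of the leaf: 16 17
});
```

A real scene graph adds rotation, scale, dirty tracking, and caching on top of this traversal, but the tree-plus-accumulated-transform structure is the part that lets the toolkit isolate apps from the underlying graphics.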

Modern Graphics Research

You may have noticed a slowdown in Amino and Leo commits. That's because I've spent the last few months researching modern hardware acceleration and teaching myself OpenGL Pixel Shaders. (Oh, and traveling around the world, announcing new products, and getting ready for my first baby to arrive in May). After all of the research I've come to a few surprising conclusions (surprising to me, at least).

First, on modern GPUs memory usage and bandwidth are the bottlenecks, not drawing speed. In the realm of 2D graphics, at least, our biggest headache is managing the data structures and shipping them up to the GPU. The pixel shader units themselves are far faster than what we need. It's often much better to send a tiny amount of information over the bus and use an inefficient algorithm in a pixel shader, rather than do it on the CPU and upload a texture.

Two: OpenGL sucks for traditional 2D drawing tasks. Sure, it's great at compositing, but the actual line and polygon routines are primitive if you want antialiased shapes and smooth compositing. There's a whole lot of stuff we have to rewrite by hand on the CPU or re-implement with shaders.

Three: on platforms without accessible GPUs we can do lots of tricks to speed up apparent drawing speed. Responsiveness and consistency matter more than raw drawing speed. And if a GPU is available, then we can efficiently cache portions of the drawing as textures and use the GPU for compositing. All of this is hard to do in a reusable way without a real scene graph to build apps on.

Four: Web and mobile apps need a scenegraph as well. For a long time I've mainly thought of web apps as UIs built with static images and text. However, in the last year HTML Canvas has come a long way. Games with sustained 60fps are quite possible in recent builds of Chrome, and mobile devices are catching up as well. (I'm amazed with what I've been able to do on the upcoming TouchPad). So, if we are going to put high powered UIs on web technology, then we need a real scene graph.

Amino 2: The New Plan

I've decided to start a new branch called Amino 2. The current Amino will be split into two parts:

  • Amino Core, the scenegraph, event bus, and common utility classes
  • Amino UI, the UI controls and CSS skinning layer

Amino Core will have multiple implementations. First, a scenegraph for desktop Java apps that will use a mixture of Java2D and JOGL. Eventually it may be possible to do full anti-aliased shape and clip rendering with pure shaders, but this is still an open area of research. For now we will do the shape rasterization in software with the existing mature Java2D stack, then do compositing, fills, and effects on the GPU with shaders. Over time we may shift more to the GPU. With a scenegraph this will be transparent to the app developer, which is a very good thing.

Announcing Amino.js

Amino Core will also have a second implementation built in JavaScript and HTML Canvas. I'm calling this Amino.js. It will let you build a scene graph of shapes and images, complete with declarative animations and special effects, all drawn efficiently into a canvas on your page. Amino.js will not have the CSS and UI controls layer, because that stuff is already well provided by many existing JavaScript libraries. There's no point in making new stuff where it's not needed.

I've also considered making a pure in-memory version of the scenegraph without any Java2D dependencies, or porting it to a pure C++ lib on SDL or OpenGL. However, these require a lot of research into rasterization algorithms so I'm not going to work on them unless someone really wants them.

Next Steps

That's it for today. I've built a prototype of Amino.js and over the next few weeks I'll show you more of the API and walk through its construction. I hope you'll find it interesting to watch as the pieces come together, and any code contributions will be very welcome. Come back next week for some code samples and I'll dive into a puzzle game prototype called Osmosis. Here are a few quick demos to whet your appetite.

Animated bar chart

Draggable scene nodes

Simple particles at 60fps

Oh, and I forgot to mention that Amino2 and Amino.js will again be fully free and open source. BSD for the win!

The Web is amazing for answering questions. If you want to answer a question like "what does the .JPG file extension mean", the answer is just an internet search away. Millions of answers. However, if you stray from the common path just a tiny bit things get hairy. What if you want to get a list of all file extensions? This is harder to find. Occasionally you might find a PDF listing them, but if you are asking for all file extensions then you probably want to do something with that list. This means you want the list in some computable form. A database or at least a JSON file. Now you are in the world of ‘public’ data. You are in a world of pain.

Searching for “list of file extensions” will take you to the Wikipedia page, which is open but not computable friendly. Every other link you find will be spam. An endless parade of sites which each claim they are the central repository of file extension data. They all have two things in common:

  • They are filled with horrible spam like ‘scan your computer to speed it up’ and ‘best stock images on the web’ and ‘get your free credit report now’.
  • They let you add new extensions but don’t let you download a complete list of the existing ones.

What I want is basic facts about the world; facts which are generated by the public and really should belong to the public. And I want these facts in a computable form. So far I cannot find such a source for file extensions. These public facts, as they exist on the internet, have morphed into a spam trap: vending tiny bits of knowledge in exchange for eyeball traffic. These sites take a public resource and capture all value from it, providing nothing in return but more virus scanner downloads. That they also provide so little useful information is the reason I have not linked to them (though they are obviously a search away if you care).

The closest I can find to a computable file extension list is the mime type database inside of Linux distros. This brings up a second point. Every operating system, and presumably web browser, needs a list of all file extensions, or at least a reasonable subset. Yet each vendor maintains their own list. Again, these are public facts that should be shared, much as the code which processes them is shared.
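That mime database is at least trivially computable. Each line of a mime.types-style file is a type followed by the extensions that map to it, so inverting it into an extension lookup takes only a few lines (sample data inlined here for illustration):

```javascript
// mime.types format: a type, then its extensions, one mapping per line.
// (A real /etc/mime.types has hundreds of entries.)
var sample = [
    'image/jpeg\tjpeg jpg jpe',
    'image/png\tpng',
    'text/html\thtml htm'
].join('\n');

// Invert the file into an extension -> type lookup table.
function parseMimeTypes(text) {
    var byExt = {};
    text.split('\n').forEach(function(line) {
        var parts = line.trim().split(/\s+/);
        var type = parts.shift();
        parts.forEach(function(ext) { byExt[ext] = type; });
    });
    return byExt;
}

console.log(parseMimeTypes(sample).jpg); // image/jpeg
```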

File Extensions are not the only public facts which suffer the fate of spam capture. I think this hints at a larger problem. If humanity is to enable global computing, then we need a global knowledge base to work from. A knowledge base that belongs to everyone, not just a few small companies, and especially not the spammers.

Wikipedia and its various data offshoots would seem to be the logical source of global computable data, yet the results are dismal. After a decade of asking, Wikipedia’s articles still aren’t computable in any real sense. You can do basic full text search and download articles in (often malformed) wikimarkup. That’s it. Want to get the list of all elements in the periodic table? Not in computable form from Wikipedia. Want to get a list of all mammals? Not from them. Each of these datasets can actually be found on the web, unlike the list of file extensions, but not in a central place and not in the same format. The data offshoots of Wikipedia have even bigger problems, which I will address in a followup blog.

So how do we fix this? Honestly, I don’t know. Many of these datasets do require work to build and maintain and those maintainers need to recoup their costs (though many of them are already paid for with public funds). If this were source code I’d just say it should be a project on GitHub. I think that’s what we need.

We need a GitHub for data. A place we can all share and fork common data resources, beholden to no one and computable by everyone.

Building and populating a GitHub for data, at least for these smaller and well defined data sets, doesn't seem like a huge technical problem. Why doesn’t it exist yet? What am I missing?

Over the weekend I moved app bundler to its own project. It is now hosted on GitHub and has a real Ant task.

In brief, AppBundler is a small tool to turn your collection of jars into a native executable for Mac and Windows, as well as JNLPs and single jar apps. With the new ant task you just define a small XML file listing your jars and the main class, then make this call from ant:


That's it. Now that App Bundler is independent I'm looking for some help improving it. In particular I'd like to support native Linux executables (shellscripts? rpms? debs?), JavaFX apps, bundling native libs (partly working), and embedding a JRE with the app. If you are interested in helping please join the GitHub project.

App Bundler on GitHub

Every day new things happen that bring our visions of the future to the present -- or are just damn crazy ideas that have to be told. Here are a few.

Human Flight

Some guys in Europe have made progress on building Da Vinci's dream of human bird wings. In this video they manage to fly 100 meters. Some say it's a fake, but I sure hope it's real.


Sugar Shot

A group of amateur rocket builders are working on a rocket to sub-orbital space (100km) powered not by your usual high powered rocket fuel, but rather a cheap fuel based on sugar. Wish them luck.


Wooden Skyscraper

Steel and concrete are not the most eco-friendly of skyscraper building materials. Why not return to wood? Architect Michael Green wants to build a 30-story building in Vancouver. Modern fabrication techniques could make it possible.

Wooden Skyscraper @ Co-Design

Copter Madness

What happens when futuristic quad-copter technology combines with hackers obsessed with avoiding law enforcement? Only one outcome is possible: flying web servers. Prototypes already exist.


The future is now, people. The future is now.

The Old Future

And for a reminder of how we built the previous future, The Verge has a great article up on the new book: The Idea Factory: Bell Labs and the Great Age of American Innovation.


In my hunt for what's next I've been reading a lot of books lately. A lot of books. As part of my search I decided to hunt down some of the classics in the computer science field that I'd missed over the years. First articles, then research papers, and some Alan Kay work. That led me to a book I'd always meant to read but never found the time: Mindstorms, by Seymour Papert.

You probably don't know the name Seymour Papert, but if you are reading this blog you almost certainly know one of his greatest creations, and the topic of Mindstorms: the programming language Logo.

My first experience with computers in the early 80s was an Apple II running Logo. Though I obviously didn't realize it at the time, Logo was surreptitiously teaching me math, geometry, and Lisp. (Yes, Logo is a pure functional language). It probably set gears in motion to encourage my later pursuit of software as a career. Here's a quick example:

to spiral :size
   if :size > 30 [stop]  ; an exit condition
   fd :size rt 15        ; many lines of action
   spiral :size * 1.02   ; the tail-end recursive call
end
spiral 10

image from wikipedia
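For readers who don't speak Logo, here's the same spiral translated to JavaScript, with the turtle's position and heading tracked explicitly (a sketch, not a full turtle graphics library):

```javascript
// The Logo spiral, translated: `fd` moves the turtle along its current
// heading, `rt 15` turns it right, and the tail call grows the step by 2%.
function spiral(size, x, y, angle, points) {
    if (size > 30) return points;          // the exit condition
    var rad = angle * Math.PI / 180;
    x += size * Math.cos(rad);             // fd :size
    y += size * Math.sin(rad);
    points.push([x, y]);
    return spiral(size * 1.02, x, y, angle + 15, points); // rt 15, recurse
}

console.log(spiral(10, 0, 0, 0, []).length); // number of segments drawn
```

Wiring the points up to a canvas or SVG path reproduces the spiral drawing; the recursion structure is exactly the Logo procedure's.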

I haven't finished the book yet so I'll leave the review for another day. Instead I want to talk about the main thesis of the book's section on teaching (and really the main point of the book itself): How can we teach math better, and are we even teaching the right thing?

Early on Papert makes the point that children often identify with a deficiency: "I can't learn French, I don't have an ear for languages". These identities become entrenched and difficult to change, and they are reinforced by a society that is largely mathophobic. Then he gets to the part that really made me stop and think:

It is easy to understand why math and grammar fail to make sense to children when they fail to make sense to everyone around them and why helping children to make sense of them requires more than a teacher making the right speech or putting the right diagram on the board. I have asked many teachers and parents what they thought mathematics to be and why it was important to learn it. Few held a view of mathematics that was sufficiently coherent to justify devoting several thousand hours of a child’s life to learning it, and children sense this. When a teacher tells a student that the reason for those many hours of arithmetic is to be able to check the change at the supermarket, the teacher is simply not believed. Children see such “reasons” as one more example of adult double talk. The same effect is produced when children are told school math is “fun” when they are pretty sure that teachers who say so spend their leisure hours on anything except this allegedly fun-filled activity. Nor does it help to tell them that they need math to become scientists— most children don’t have such a plan. The children can see perfectly well that the teacher does not like math any more than they do and that the reason for doing it is simply that it has been inscribed into the curriculum. All of this erodes children’s confidence in the adult world and the process of education. And I think it introduces a deep element of dishonesty into the educational relationship.

quote from Mindstorms by Seymour Papert. emphasis of last sentence in the original text.

This really made me think. Why did I love math and so many kids don't? Was it genetic? (unlikely) Was it a family member? (probably) Was it a good teacher? (certainly). We can't depend on happenstance to make children who like math. When a kid asks the question "Why should I learn math?" what do we tell them that isn't doublespeak? Surely we have a better answer than 'to be able to check the change at the supermarket'. (itself a largely irrelevant task today)

Well, I can't answer this for everyone, but I want a solid response when Jesse one day asks me the question. My best response so far: Math lets you understand things and make things happen.

Your response?

Mmmwaa haa haa. It lives! I've gotten Java to run on webOS natively with a new set of Java SDL bindings. That means it just *might* be time to start a new project. Read on for how it works and how you could help.


For a while I've been following an open source project called Avian. It's a very lightweight and highly portable JVM that can run almost anywhere. Recently I tried a new build and was able to get the ARM port running on webOS!  This is good news because Avian can run pretty much any Java code if you supply it with the right runtime (it can optionally work with the OpenJDK libs).

Now, of course getting a command line app to run is not very interesting. Really we want to talk to the screen to make some real graphical applications. So that brings us to part two: SDL.

If you've been doing desktop programming for a while you've probably heard of SDL, the Simple DirectMedia Library. It's a fairly low level graphics and audio API that runs pretty much everywhere, including on webOS.   But, of course, like many low level APIs it's built in C. So if I want to use Java I need some sort of wrapper to call it.  The existing wrappers out there are very old and didn't work well on Mac, so it was time to build my own.

Over the weekend I learned how to use Swig, a JNI wrapper generator, and successfully ran my new SDL wrappers on Mac, Linux, and webOS.  Here's a quick screenshot:



It's not much, but it proves that everything is working.

So what's the next step?  Honestly... I don't know. I created this specifically to let me code Java on webOS, but the SDL bindings would probably be useful for cross platform desktop applications as well.  We could port Amino to it, or do some funky multitouch stuff. It would certainly be great for people creating games. I need your advice.

What would you like to do with this library? What higher level APIs would you like? If you have any ideas of what you'd do with this lib, or would like to contribute to the project (help on Windows compilation would be greatly appreciated), then please message me on twitter: @joshmarinacci.


- Josh


Thanks to my HTML Canvas Deep Dive at OSCON, .net Magazine asked me to write a tutorial for them. The topic was just something interesting with Canvas. I'm a huge fan of infographics, as well as .net Magazine, so I jumped at the chance to write for them. I recently discovered an amazing treasure trove of data at the World dataBank, so that formed the core of the article.

My tutorial will take you from drawing a few shapes using canvas to processing real data from CSV files, then finally adding some interactivity and sprucing up the graphics. It should take you about one hour to complete, and if you know basic JavaScript then you're all set. It was a lot of fun to write. I hope you enjoy it.
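As a taste of the data-processing step, here is a minimal sketch of CSV parsing in plain JavaScript. It assumes a simple comma-separated format with a header row and no quoted fields; the field names are made up for illustration, not the actual World dataBank columns.

```javascript
// Minimal CSV parsing of the sort the tutorial walks through.
// Assumes a header row and no quoted fields; the field names
// below are hypothetical, not the real World dataBank columns.
function parseCSV(text) {
    const lines = text.trim().split('\n');
    const headers = lines[0].split(',');
    return lines.slice(1).map(function (line) {
        const row = {};
        line.split(',').forEach(function (cell, i) {
            const n = parseFloat(cell);
            // keep numbers as numbers, everything else as strings
            row[headers[i]] = isNaN(n) ? cell : n;
        });
        return row;
    });
}

const data = parseCSV('country,year,value\nUSA,2010,42.5\nFrance,2010,38.1');
// data[0] -> { country: 'USA', year: 2010, value: 42.5 }
```

Once the rows are plain objects like this, drawing them is just a loop over the array with canvas fill calls.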

Read it now

I am giving a presentation on the future of desktop interfaces at OSCON in a few weeks. To help prepare for the session I'd like to use you, gentle readers, as my guinea pigs. The following essay is an extremely rough version of what I will be presenting. Please imagine it with humorous illustrations and no grammatical errors. I will greatly appreciate your feedback. What parts should I expand? Where are my arguments unclear or flawed? Would you come to see this talk?

Welcome to The Future of Desktop Interfaces. I'm afraid your presence is actually a ruse on my part. I am *not* going to tell you what the future of desktop interfaces will be. Rather my goal is to trick you into inventing the future for me. So really you are here because I am lazy. Remember, necessity is the mother of invention. Laziness is the father.


It is my belief that in a decade 90% of people will use a tablet, smartphone or other non-desktop device as their primary computing interface. I actually made this prediction two years ago, so I have only eight years left, and the success of the iPad implies that we may actually be ahead of schedule. At this point I think my prediction is relatively uncontroversial. Tablets and smartphones have a managed or 'curated' computing experience. These devices actually meet the computing needs of most people better than a traditional desktop PC. (Note: for the purposes of this discussion I mean both laptops and desktop computers running Mac OS X, Windows, or a Linux desktop; anything with an exposed filesystem.) These managed computing devices will continue to get better and more powerful, mostly replacing the PC.

So, that's great for the 90%. They will be able to get their jobs done with no fuss. But what about the 10%? What about the people who create content professionally? What about the programmers? The digital artists? The data analysts? The hackers? Are PCs going to atrophy as all attention moves to the 90%? I don't think so. In fact, I think this could be a renaissance of desktop computing. We've been kind of stuck in a rut for the past decade. Desktop interfaces haven't changed much since about 2000. Certainly they are prettier, but they haven't really gotten better or more productive. I think we could have some really interesting things coming. But first, a diversion.

This is a cave painting [image]
This is a painting from the wall of Pompeii, circa 70 AD [image]
This is a painting from the middle ages [image]
This is a painting from the high renaissance [image]
This is a painting from the 1850s [image]

We can see a trend of greater and greater realism built over the centuries, leading up to the height of the great Renaissance painters like Michelangelo. Then a plateau. Once we had achieved realism, where do we go? More portraiture. More landscapes. Painting started to get boring. Then something happened in the 1870s. All of a sudden we get impressionism, cubism, surrealism, and abstract modernism. We see more change in a 70 year period than in the previous 700. What happened?

[pictures of Van Gogh and Picasso paintings]

Photography was invented. Until the photograph the commercial purpose of painting was to capture and recreate reality. Most paintings were portraits of rich people. But no painting could compare with the photograph at recreating reality, especially not for the price. The photograph made realistic portrait painting obsolete. This was both a blessing and a curse. Painters needed to dream up something new to paint. And dream they did. Freed from the need to duplicate reality there was an explosion of new ideas and trends. Painting had new life, which led to amazing things.

I hope the same thing will happen to desktop computers. PCs have been the primary computing interface for humans for the past thirty years. By definition they have to serve the needs of all people. But if 90% of people will use something else, then maybe desktop interfaces can evolve again. Maybe they can change to meet the needs of the 10% far better.


I can't reliably predict anything a decade out in our industry. Things change too quickly and there are too many unknowns. However, there are a few trends that I think we can take a look at. The next few years will be shaped by the following forces:

* Moore's law. We have a glut of CPU, GPU, and storage resources on our personal computers. A glut that we don't really take advantage of yet, and this glut shows no sign of stopping.

* Mature software development tools and toolkits. One engineer can create an app in a few weeks that would have taken a team of programmers months just a decade or two ago. This is partly thanks to our tools like modern IDEs, version control systems, automated build systems, etc. We also have robust toolkits and APIs that let us code at a higher level. Combined with an excess of RAM we can build complex desktop software far faster.

* App stores: while I don't like the curated part of desktop app stores there is no denying that they open the market for smaller software developers. A small team can make an app that will appeal to a very niche market and still sell it profitably because they have access to the entire world. This makes narrower products far more feasible than was possible 10 years ago.

* Ubiquitous networking

* Info glut

The 10 Percent

Before we talk about what the interface will look like, let's talk about the problems that the 10% have. When I first suggested this topic some people thought I meant that interfaces would become hard to use again. That usability is only for the 90%. Not true. The 10% needs quality software as well, but we need it to be deeper. We have specific needs that must be addressed and are willing to spend the time to learn more powerful interfaces. In particular, we have a torrent of information to be managed.

Every time I get a new computer the hard drive doubles and I always fill it up. Only half of this is photos and videos. I also have endless documentation sets, PDFs, gigabytes of emails, backups of my mobile devices, and word docs stretching back to the early nineties. I personally have more information to manage than most companies did thirty years ago. I need a way to manage it.

Along with this information I have lots of devices that I need to manage as well. iCloud is nice but it doesn't scale very well. I have an iPad, two iPhones (for me and my wife), an iPod touch for my son, and several test phones (Android, webOS, Windows Phone, and Meego), and a few cameras. And that's just the mobile devices. I now have a home media server and a Roku box for the TV. Soon I will have home automation components for my thermostat, to control the lights, water the garden, run the sprinklers, handle security, and watch the baby. Thanks to Moore's Law our homes will soon be awash in computing. That's a lot to manage. I can't do it all from an iPad.

So where do we start to address these needs? First: hide the filesystem.

information management

Hide the filesystem. Use robust shared data stores underneath. (Man, I really wish BeOS had survived.) I'm not saying we need a database to replace the filesystem but at least hide it. I don't care how my stuff is stored as long as it works, and works quickly. The example most people are familiar with is iTunes. MP3 libraries were the first widespread case where we hit the complexity wall. Most people simply have too many songs to manage effectively using directories. Instead we need a database that can search and filter by multiple criteria and be very responsive. iTunes does this for music.
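To make the idea concrete, here is a toy sketch of an iTunes-style "smart list" as nothing more than a saved set of predicates applied to a metadata store. The store and its fields are invented for illustration.

```javascript
// Toy sketch: a "smart list" is a saved set of predicates applied
// to a metadata store. The library and its fields are hypothetical.
const library = [
    { title: 'Track A', artist: 'X', genre: 'jazz', rating: 5, year: 1959 },
    { title: 'Track B', artist: 'Y', genre: 'jazz', rating: 3, year: 2001 },
    { title: 'Track C', artist: 'X', genre: 'rock', rating: 4, year: 1972 }
];

// apply every predicate at once: filter by multiple criteria
function smartList(store, predicates) {
    return store.filter(item => predicates.every(p => p(item)));
}

// "jazz recorded before 1970" as a saved query
const oldJazz = smartList(library, [
    item => item.genre === 'jazz',
    item => item.year < 1970
]);
// oldJazz contains only Track A
```

The point is that the query, not the directory tree, becomes the unit of organization; the same pattern would work for any kind of file.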

[picture of iTunes]

I'd like to see this interface style applied to more kinds of files. Here is an app for academic researchers to manage the many scholarly papers they have to read.

[picture of Papers]

[picture of Sparrow]

Sparrow is a great email client for Mac that is much faster than the system default. It syncs with social networks to get avatars. It has intelligent threading. It's good enough that I spent $20 to replace a free system app. But it's just email. It doesn't handle the rest of my communication. And while beautiful it's not very customizable. Customization matters as much as beauty.

I'd like to have a single place to store my external communication: IM logs, SMS logs, all tweets, all emails, all FB posts. One app. One interface. When I'm composing an email to someone I'd like to see all of the messages I've ever sent to them. And of course this should magically sync with all of my online services to keep up to date.

Idea: I want a single place to see all of my code across all projects, one that is aware of all my version control systems. My IDE only manages the projects I'm currently editing. I want something to manage *all* of my projects, both personal and professional, including my build services and the bug tracking systems I'm currently involved with. Most of these systems have APIs, so I don't think it would be difficult to build.

Apple is moving in the direction of system wide saved queries, similar to what iTunes offers, but they are doing it very cautiously. There is so much more we can do.

customization and automation

Just as great painters would make their own paints and canvases, we need the ability to create our own workflows and customize our tools.

* apps communicate together

* build new apps out of pieces of other apps

* customize the general computing environment, but still

* be able to handle anything

The line between a custom workflow and real programming is fuzzy.

Let me give you some examples:

iTunes smart lists and IDE keymaps

We need to take this to the next level. I should have a keymap that works system wide in all apps. The Mac has smart lists that can be used in any app, but only for photos and albums. This should be available for any kind of media list.

I should be able to change the interface of any app. In a drawing program I can change the toolbar, but why doesn't *every* action in an app have a toolbar item? Why can't I create different toolbar sets for different kinds of projects? Then these sets should be shared across apps.


If every action has a toolbar item then we could take this to the next level and actually script our apps together visually. Mac OS X does this with Automator but they never took it as far as it could go. Now it appears that Automator will be crippled by the new Mac App Store restrictions. This is a shame. I think Automator never really took off because not enough apps supported it and there was no easy way to share scripts between people. My ideal vision would be something like: visually create a script to take the current Photoshop document, convert it to PNG, post it to Flickr, then send out a Facebook update and a tweet with the Flickr link *tomorrow morning at 9am EST*. This should not be hard to do in the 21st century, and yet our tools don't make it easy. This is really a problem with app communication. I'm not entirely sure how to solve it, but the potential is huge.
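A rough sketch of what that modular scripting could look like, in plain JavaScript: each app exposes its actions as functions, and a workflow is just their composition. All of the action names here are invented.

```javascript
// Sketch: each app action is a function from document to document,
// and a visual workflow compiles down to a composition of actions.
// exportAsPng and addCaption are hypothetical actions.
function pipeline(...actions) {
    return input => actions.reduce((value, action) => action(value), input);
}

const exportAsPng = doc => ({ ...doc, format: 'png' });
const addCaption = doc => ({ ...doc, caption: 'posted at 9am' });

// build the workflow once, run it whenever (or schedule it for 9am)
const publishWorkflow = pipeline(exportAsPng, addCaption);
const result = publishWorkflow({ name: 'sketch', format: 'psd' });
// result -> { name: 'sketch', format: 'png', caption: 'posted at 9am' }
```

Scheduling and sharing then become operations on the workflow value itself, which is exactly what a monolithic app can't offer.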

If we start to think of a desktop app as a collection of modules rather than a monolithic whole, then we can start doing these sorts of cool things.

With such a modular system the line between customization, automation, and real programming becomes very fuzzy. If we make it easy to create and share these scripts then we can make the desktop twice as powerful.

Integrate the web

Once we make our apps modular and scriptable they can start talking to the web. There are some simple things we can do today. Why don't more desktop apps have 'share via Twitter' buttons? Why doesn't Photoshop have one? I've been working on an open source drawing program called Leonardo Sketch. One of the first features I added was the ability to share a snapshot of what I'm working on right now through Flickr, Twitter, and Facebook.

I'd like to take this to the next level. I recently added an asset manager to Leo Sketch. It's an iTunes-like view that shows all of the clip art, fonts, palettes, and textures I have to work with. But why should my asset collection be limited to what's on my computer? I've added a Flickr search which will find Creative Commons licensed images based on keywords. Next I want to integrate Google Fonts. Then I can access a huge collection of fonts from anywhere in the world, easily searchable.

How else could we integrate the web into desktop apps? How about selecting text and having Bing translate it into another language, right from within my drawing. Or let me see the most popular color swatches that are tagged with 'summer'.

How about an IDE that will let me search for small open source code libraries. Say I'm working on an app that needs to parse a CSV file. Instead of jumping to a web browser I'd like to be able to search GitHub for code snippets that match my current working language. It shouldn't be any more complicated than code completion. It would show me 8 snippets which take a file and return a string, along with their ratings. When I choose the one I want, it downloads the code, compiles it, and puts it in my class path.
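Conceptually that search is just a filter over an index of snippets keyed by what they take and return, sorted by rating. A minimal sketch, with a made-up index:

```javascript
// Sketch of signature-based snippet search: filter a snippet index
// by input and output type, best rated first.
// The index and its entries are invented for illustration.
const snippetIndex = [
    { name: 'readCSV',  takes: 'File', returns: 'String', rating: 4.5 },
    { name: 'readJSON', takes: 'File', returns: 'Object', rating: 4.0 },
    { name: 'slurp',    takes: 'File', returns: 'String', rating: 3.2 }
];

function findSnippets(takes, returns) {
    return snippetIndex
        .filter(s => s.takes === takes && s.returns === returns)
        .sort((a, b) => b.rating - a.rating);
}

// "snippets which take a file and return a string"
const matches = findSnippets('File', 'String');
// matches: readCSV (4.5) first, then slurp (3.2)
```

The hard part, of course, is building and rating the index, not the query.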

Context awareness

Computers must learn from us and do things for us.

The other half of integration and automation is the computer doing things for us without having to request it. This trend is already happening from two very different directions. On one end our phones are learning more about us, which enables services like Siri. On the other end our IDEs are getting smarter and smarter. IntelliJ IDEA not only knows what possible methods can fit the current spot in my code, but it will automatically add the imports for me. Because the editor has so much contextual knowledge about what I'm doing right now (I'm editing a particular place in a particular file in a particular project with a particular class path), it can do lots of things for me, or at least monitor what I am doing and give me advice. We should have all our apps doing this.

My word processor should let me type in an equation and offer to evaluate it for me. I should be able to type in a reference to a stock symbol and have it turn into a link with the current stock value. If I'm listening to music my calendar should tell me about an appointment 15 minutes beforehand with a silent unobtrusive message, then escalate to an alarm and turn off the music as I get closer to the appointment time. There should be a system wide switch to turn off all messages and alerts when I want to be in concentration mode.

My laptop knows where it is based on my phone's GPS and the local wifi access point. It should be able to adjust itself based on the location just as my phone does. The possibilities are huge here, but it worries me that almost all of the innovation is happening on the phone side, not the desktop side.

Identity Management and Service Sharing

Finally, identity management. Originally we thought identity was just a handle. Then we thought it was your real name representing a single unified you. You have one identity on the web, period. But that was wrong too. Now we understand that identity is far more complicated. When I do something on a Google service am I acting as Josh, the author and open source coder, or am I acting as Joshua, the Nokia researcher? The answer is: it depends. I might be either of those, or some mixture.

We also have identity scattered across many places with no way to integrate them while protecting my privacy. This is a huge mess that is going to keep getting worse. I'd like to see an OS wide identity system. Any app, including the browser, which wishes to integrate with the network can ask the identity system for credentials. This allows any app to talk to anything, and at any time I can change which identity I'm using. This could be done per app or even for the desktop as a whole. I can imagine doing this with a virtual desktop system where one desktop is 'Josh the author' with a green background and menu bar. The other desktop is 'Josh the engineer' with a blue background and desktop. Each has credentials that can't be shared between them without explicit actions by me, the human.
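In code, the core of such an identity system could be as small as a lookup keyed by the active identity. A sketch with invented identities, services, and tokens:

```javascript
// Sketch of an OS-wide identity service: an app asks for credentials
// by service name, and the answer depends on which identity is active.
// The identities, services, and tokens are all hypothetical.
const identities = {
    author:   { twitter: 'josh-author-token' },
    engineer: { twitter: 'josh-work-token' }
};
let activeIdentity = 'author';

function credentialsFor(service) {
    return identities[activeIdentity][service];
}

const tokenAsAuthor = credentialsFor('twitter');   // the 'author' token
activeIdentity = 'engineer';                       // e.g. switch virtual desktop
const tokenAsEngineer = credentialsFor('twitter'); // now the 'engineer' token
```

The app never sees the other identity's credentials; switching desktops just changes which row of the table answers.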

We can see some of this in password managers and browsers that sync their settings, but this really needs to be system wide.

A plea to Desktop Linux

There will never be a year that Linux conquers the desktop. That era is over. The world is going mobile. Instead of focusing on a complete desktop OS, create an awesome desktop environment that can be seamlessly layered on top of any proprietary OS. Over time people will use more and more of your stuff, until they can switch entirely. Especially if you make the switch easy by having cloud based backup of everything (hello version control!).

My favorite app for the Mac now is a command line tool called Homebrew. It uses community developed 'recipes' which download and compile lots of Linux programs and libraries. With Homebrew I can be on a Mac but have ridiculously easy access to the rich ecosystem of Linux tools like Apache, ImageMagick, SDL, and Node.

What I want is a graphical Homebrew app. I install it once and log in. It then downloads and configures my favorite email and browser apps, all of the command line programming tools I need, and anything else provided by the open source community. Even my settings are synced from a cloud server. *That* is how Linux can win the desktop. You will never win over the masses. Focus on the hyper users and let them ease into a fully free desktop.


I hope that you have gotten two things out of this essay:

First, most computing is moving to managed devices, but the desktop interface can have a rich future ahead of it if we are willing to build it.

Second, I hope you come away with some ideas for how we can move the desktop forward. We are going to live in the future we make so we'd better make a good one. I understand it's going to be difficult, but this really needs to happen. We must accept that people are reluctant to change, but do it anyway. If you are right then we will win in the end. (or at least our ideas will).

Most of my free time work for the past few months has gone into Amino, the UI toolkit that Leonardo is built on, but Leo itself has gotten a few improvements as well. I'm happy to announce that the next beta of Leo is up, including:

  • Amino gives us a more uniform look and feel, now skinnable with CSS
  • Rulers!
  • The Make a Wish Button, now the easiest way to send feedback.
  • the canvas properly scrolls and zooms with real scrollbars now
  • Under the "Path" menu you can now add, subtract, and intersect shapes.
  • export to PDF
  • actual unit tests for exporting to SVG & PNG, almost every kind of shape is supported now
  • Tons of bug fixes

Unfortunately, as part of the Amino toolkit overhaul certain things have gotten slower. In particular doing layout when you resize the window is significantly slower. Don't worry, I have lots of speed improvements coming to Amino which will address this in Leo.

As always, please test and file bug reports. The export filters in particular probably still have bugs to fix.

Builds are here:

A big part of my new job at Palm is education, in the form of tutorials, blogs, and of course speaking at conferences. Two new speaking engagements have recently come up. Palm Developer Day and OSCON. Read on for details.

Palm Developer Day

This month I'll be doing the day long Introduction to webOS session. In this session I'll take you from zero to 60 in about five hours, giving you everything you need to know to make great apps for webOS. This Developer Day is now sold out (we've expanded; see below), but we are planning to do it again later this year after we work out the kinks. We also plan to video record the sessions and put them on the web. Stay tuned for more details.


I thought we were sold out for the Palm Dev Day, but I've just heard that we have expanded the capacity and reopened registration. Space is going fast, so sign up now!


Yes, I'll be at O'Reilly's Open Source Convention again this summer. In addition to attending (a scant 2 hours from my house this time!) I'll also be speaking on Marketing your Open Source Project on a Shoestring Budget. I'll discuss different ways you can get the word out without breaking your budget, and throw in a few case studies.

If you've never been to OSCON before and live in the Pacific Northwest I highly recommend attending. It's a great opportunity to get out of your particular technology bubble and experience what the rest of the tech world has to offer. I went two years ago and learned about Jython, Arduino, Trac, and extreme unit testing; all worlds away from my usual Java / JavaFX background.

The conference is July 19th-23rd, and I'll be speaking Friday morning. If you make it here be sure to let me know so I can buy you a beer. (Oh yeah, Portland has some of the best microbrews in the country).

A splendid time is guaranteed for all!

Michio Kaku, the science popularizer and theoretical physicist, is always a wonderful speaker. I’ve greatly enjoyed his TED talks. In Physics of the Impossible he takes on the many improbable technologies of science fiction to determine if they are in fact impossible. Surprisingly, few truly are. He divides technologies into three levels of impossible: likely today or in the next 20 years with existing science (ex: replicators), likely in the next hundred or so without violating any known laws of physics (shockingly, time travel is in this bunch), and the truly impossible without some new laws of physics. There are very few things in the last category. It’s an easy read and lots of fun.

Should you read it? Yes!

Physics of the Impossible: A Scientific Exploration into the World of Phasers, Force Fields, Teleportation, and Time Travel, by Michio Kaku

Dr. Kaku on the Daily Show


The webOS auction has ended successfully. Every item sold, some for far more than I thought they would. Combined with some anonymous donations we raised over $6,000 for the Hill Family. I am overwhelmed and incredibly grateful. I knew the webOS community was passionate but I had no idea. We couldn’t have done this without your support. Thank you so much!

Now, on to the details. I’ll be shipping all items out this week. If you won something and haven’t paid yet, please do so. You should have received an email from the site. I’m going to send all domestic items USPS unless you request otherwise. If you are an international buyer please let me know if you have any special shipping requests.

My wife and I are traveling to Atlanta with our little one in a week to spend some time with my family and present them with the check. Again, I cannot thank you enough. I am truly amazed by the webOS community. Thank you.


Today the other shoe dropped. Fortunately it was a soft slipper, not the steel-toed boot to the head I had feared. HP is open sourcing webOS.

What does this mean? Well, I honestly don't know yet. There is a huge amount of planning to be done, but it could be the start of something great. We will have to see. It will be a busy Christmas, that's for sure.

In the meantime I'm working on a few other projects that I hope to share with you soon. Stay tuned. Thanks!

- josh


I've always wanted a magic word processor. Something that helps me organize my thoughts and build ideas organically, rather than spend all of my time worrying about formatting. Something for the internet / cloud age. Given that Microsoft Word hasn't fundamentally changed in over a decade (or possibly two), we aren't likely to get such software from them. Instead, I decided to play around with some ideas using my new favorite programming tool: OMeta.


OMeta's story starts with compilers. Compilers are built of many phases: lexing, parsing, applying optimizations, then generating machine code. OMeta's creator, Alessandro Warth, realized that these are all essentially the same thing: pattern matching. The only difference is whether you are matching a stream of characters, tokens, or parts of a tree. Alex created a single language that lets you match over objects (i.e., anything) to implement all parts of the compiler using a single tool.

OMeta is that single tool: a domain specific programming language for creating pattern matchers. More properly, it is a language extension that can be laid on top of an existing host language. I am using OMetaJS, an extension to JavaScript. The wonderful thing about OMeta is that it's easier to use and extend than dedicated parser generators like ANTLR or Yacc because you always have the host language available to do whatever computation you need.

Editor Demo

As my first experiment with OMeta, and with new word processors, I created a simple Markdown editor. You type raw text into the left side of the screen and see your properly styled text on the right side. But this editor has a trick up its sleeve. You can type simple math expressions inside curly braces and the editor will evaluate them for you.

Play with the editor here

Here's what the OMeta code looks like:

ometa Foo {
    //white space
    toEOL = (~seq('\n') char)*:t '\n' -> t.join(""),
    h1 = "#"   ' ' toEOL:t -> tag("h1",t),
    h2 = "##"  ' ' toEOL:t -> tag("h2",t),
    h3 = "###" ' ' toEOL:t -> tag("h3",t),

    paraend   = seq('\n\n'),
    para      = (expr|strong|em| (~paraend char))+:t -> tag("p",t.join("")),
    text   =      (~seq('\n\n') char)*:t      -> t.join(""),
    strong = "**" (~seq('**')   char)*:t "**" -> tag("strong",t.join("")),
    em     = "*"  (~seq('*')    char)*:t "*"  -> tag("em",t.join("")),
    expr    = "{"  exp:t "}"  -> tag("b",t),
    //code block
    codeblock = fromTo("```\n", "```\n"):t -> tag("pre",tag("code",esc(t))),
    //inline expressions
    num = :n -> parseInt(n),
    term = num,
    expadd = term:a "+" term:b -> (a+b),
    expmul = term:a "*" term:b -> (a*b),
    expsub = term:a "-" term:b -> (a-b),
    expdiv = term:a "/" term:b -> (a/b),
    exp = (expmul|expdiv|expadd|expsub):e -> (" "+e+"") ,
    //pull it all together
    line = space* (h3|h2|h1|codeblock|para):t space* -> t,
    top = line*
}

This editor only understands simple two-term arithmetic but it could easily be expanded with variables and more complex functions.
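For instance, here is what a variable-aware version of the inline expression evaluator might look like in plain JavaScript (not OMeta): the same two-term grammar, but a term can now be a name looked up in an environment object. The function and variable names are invented for illustration.

```javascript
// Plain-JavaScript sketch (not OMeta) of extending the inline
// expression rules with variables: a term is either a number
// or a name looked up in an environment object.
function evalExpr(src, env) {
    // match "term op term", e.g. "3 * 4" or "price + 2"
    const m = src.match(/^\s*(\w+)\s*([+\-*\/])\s*(\w+)\s*$/);
    if (!m) throw new Error('expected: term op term');
    const term = t => (t in env ? env[t] : parseInt(t, 10));
    const a = term(m[1]), b = term(m[3]);
    switch (m[2]) {
        case '+': return a + b;
        case '-': return a - b;
        case '*': return a * b;
        case '/': return a / b;
    }
}

const product = evalExpr('3 * 4', {});               // 12
const withVar = evalExpr('price + 2', { price: 5 }); // 7
```

In the real grammar you would do the same thing by adding an identifier rule alongside `num` in `term` and threading the environment through the semantic actions.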

I've posted the first beta of Leonardo 1.0, ready for your testing pleasure. More details over on the Leonardo blog.

As part of my ongoing efforts to create better designed software, I somehow ended up creating my own new UI toolkit. This is really a part of my belief that a decade from now 90% of people will use phones, slates, or netbooks as their primary computing device. Amino is my experiment building software for that other 10%: the content creators who need killer desktop apps, the programmers who want great tools, and the knowledge workers who need to manage incredible amounts of information at lightning speed. Amino is the toolkit for these apps.

Read the full description of Amino at my blog, or go play with Amino right now.

Today I am talking with Matt Grimm, the final member of the Retro Game Crunch trio. You can also read the previous interviews with Shaun Inman and Rusty Moyher. There's still a few days left to help push the Retro Game Crunch to the finish line. Pledge now!

Josh Before we talk about the Crunch, could you tell me a little bit about yourself?

Matt I'm just your average 28 year old. I grew up playing every video game I could get my hands on. The Dreamcast was my last console. I didn't touch video games for around a decade, got into music and cars. Within the last two years I've started falling in love with old video games all over again.

Josh Where did you go to school?

Matt I attended a local community college, Rose State College. I got my associate's degree in web development. And that's it for school. It is increasingly apparent that real life experience trumps attending college when it comes to the web or similar technologies. I'm glad I never bought into the idea of needing to attend school for years on end while racking up huge amounts of debt. If you can't tell, that's a subject I'm very passionate about. Schools are becoming irrelevant (for certain fields of study) and I hate debt.

I currently have a day job working in web development. I work for LightCMS as a software engineer. I get to do really nerdy stuff with JavaScript / CoffeeScript and love it.

Josh What did you do before you started making music?

Matt I don't know that I can really answer what I was "doing before music". Music has always been a part of me. I've been writing and recording small ideas since I was a kid.

Josh Your bio says you are a sound designer. How is that different than a musician, composer, or game developer?

Matt Sound design is kind of new to me. I don't claim to know anything about it professionally. I kind of stumbled into it because it's part of game development. You need sound effects. And I wanted to make my own. So I started really "listening" to things in the real world and tried figuring out how I could make a believable version of that with the NES sound chip. I think the best sound effect I've created to date is probably a crow's caw. (It's for an unannounced game Shaun Inman and I have been working on, before Retro Game Crunch that is.)

Sound design is very interesting, it's a totally different mindset from writing music. If you're trying to emulate a sound from the real world, you have to slow down, listen to things over and over and try to break them into little components. Then you need to know your tools well enough to translate that into a believable version of the sound. The other side of sound design is even tougher. Coming up with a totally abstract sound to represent an action or character on screen is crazy. I second guess a lot and probably make 4-5 sounds before finally choosing the one that "feels right". With the NES, players still have to use their imagination because it's pretty limited but that's the fun in it.

Josh Do you think there is a lot of overlap between game audio and other genres of music?

Matt There's a ton of overlap with game audio and other music. It's funny how in the 8-bit and 16-bit eras all the game music was trying to sound as real as possible. Now we can have actual music tracks play in games. But the tables have turned. Now real music wants to sound digital and "chippy" like old video games.

Josh Do your skills in one cross over to the other?

Matt Skills for writing music (at least on a computer) totally cross over when trying to make old game music. From the technical side of things it's all basically the same skills. When I was in high school I had one of my first experiences with making music on a computer using FruityLoops (FL Studio now). That helped set a foundation for my understanding of music creation software in general. And honestly I could even go back further and say Mario Paint on the SNES was my first real experience creating music on a computer. If you're comfortable with programs like GarageBand, Reason, etc. moving over to something like Famitracker will be easy. If you're comfortable with programming and writing code (understandably a different skill set) there's options like MML to compose NES music. MML is currently my preferred way of making NES music.

From the less technical and more musical side of things, it all translates skill wise. The fundamentals of making good, interesting music are what's important no matter the medium.

Today I write all my music with a keyboard and GarageBand using the YMCK 8bit plugin. From there I recreate the song in MML code and compile it into the nsf format.

Josh: What's special about game audio?

Matt: What's special, at least in my opinion, about game audio is the limitations. A lot of modern chiptune artists make music with real hardware and use the lo-fi sounds produced by these sound chips, but what I'm talking about is the actual limitations of the format as it was in 1985. You only have a few channels to work with, and the sound engine in your game (that you had to write from scratch) has to manage playing music and sound effects at the same time on those 5 channels. So not only is the audio sonically limited because of the sound chip, but the storage of data and usage of music and sound effects introduce a lot of unique challenges. With the modern technology we use to build games we don't have any of these problems, but I try to impose those limits on myself when composing. While the stuff I'm doing is not 100% accurate to the true limitations, I want it to be as close as possible. It makes everything feel more authentic for sure. The main reason I nerd out hardcore about this stuff is because it's one of my life goals to make a real NES game. I'm essentially training and preparing myself for the real thing.

Josh: Can audio be a part of game interaction beyond background music?

Matt: Definitely. The lack of music can be just as important. Some of my favorite moments in games are when music cuts out and simple sound effects are all that's there to set the mood. It really makes things dramatic. Especially if music had been playing the whole time up until that point. It can make a scary area of a game intense.

Josh: What other games have you worked on? What was your favorite?

Matt: Flip's Escape is the only real commercial release I've been a part of. I worked on Super Clew Land and two other Ludum Dare games I did alone. Behind the scenes I've worked on 3 of my own personal projects. They're kind of in limbo right now. Not sure what will happen. Finally, the unannounced game with Shaun. That one is my favorite. I think it contains some of my best work to date. I hope we get to continue with it after Retro Game Crunch and you all get to hear it someday.

Josh: What musical work have you done outside of games?

Matt: Like most guys, growing up I was in a lot of bands. The last one was Finding Chesterfield. It was just me and a friend writing emo, piano rock tunes. I've played guitar since about 7th grade and have always goofed around on the piano. I don't have any formal training (except for band in jr. high and high school). Not many people know this, but I play the alto sax, and pretty darn well too.

Josh: How did you first get into gaming and what is your favorite game (retro or otherwise)?

Matt: Well, as far back as I can remember we had an Atari 2600 in our house. I was probably 4 or 5 years old. I remember playing Pitfall, Frogger, Kaboom!, Night Driver, Kangaroo, Yars' Revenge, Maze Craze, and many more. My parents (or whoever in the family owned the Atari) had some great taste in games. A year or two later I remember seeing the NES at Sears. I played the first two levels of Super Mario Bros. and my mind was blown. It's all I asked for after that. Maybe a year later I got an NES for Christmas (1991). Best. Christmas. Ever.

Oh man! Choose just one favorite game?! Impossible. Well if I have to narrow it down... it will be on the NES because that's my favorite system. And the best memories I have, are with Battletoads. I still haven't beat that game to this day. I will though. Some. day.

Josh: I've always been a fan of Final Fantasy's themes (the S/NES era). What game has your favorite music?

Matt: Well, let me say I am a sucker for slow ending music. The type of song that makes you tear up a little because you just beat the game. The end themes for Super Mario Land (Game Boy) and Dr. Mario are what I'm talking about. Give me more of that.

There's a ton of games that have killer music so I'll list a few.

  • NES:
    • DuckTales - "The Moon" is easily the 3rd best song on the NES (behind the Super Mario theme and Zelda theme)
    • Battletoads - everything about the music is killer. And don't miss out on that amazing pause screen beat.
    • The Blue Marlin - a true hidden gem in the NES library. I stumbled across it last year while building my NES collection. Gameplay: not so great. The music is mind-blowing.
    • Mother, Balloon Fight & Gyromite - Hirokazu "Hip" Tanaka, my favorite composer of the era. He's worked on a lot of my favorite games but I really like these.
    • Little Nemo - it's just so dreamy and perfect
  • Game Boy:
    • Donkey Kong - a special DK game just for the GB that most people missed. It's epic. I look to it for constant inspiration.
    • Kirby's Dream Land - the game I probably played the most on GB. Super catchy (and happy) tunes
    • Super Mario Land - "Hip" Tanaka!
  • SNES:
    • Super Mario World - a masterpiece. This is my childhood.
    • Earthbound (Mother 2) - more by "Hip" Tanaka. He's a legend.
    • Donkey Kong Country - everyone on earth probably knows "Jungle Hijinx". SO. GOOD.
  • Genesis:
    • Rocket Knight Adventures - another hidden gem. Only this game rules! Gameplay is great and the music is too.

Josh: How do you create music for a game? Do you have music ideas first or does the game come first?

Matt: Well, it can work either way really. I constantly write ideas as they come to me. If it doesn't fit the current project then I save it for later. That said, the game does need to come first for me. It doesn't need to be totally polished or even a finished idea. I just need something to go on. The music has to "fit" the game. I'm not sure there's a way to explain how you know what fits, it just feels right. Seeing visuals really helps me get in the right mindset. It's a mysterious process even to me. You just trust your instincts and go for it.

Josh: I see your partners in crime have thousands of tweets but you only have three. What's up with that? Too busy composing to tweet?

Matt: Haha. I guess you could say that. I do write a lot of music. You're probably looking at my old Twitter account. I recently got the username 8bitmatt. Check it out. There are thousands of useless tweets for you over there. :) I do tend to be a man of few words though.

Josh: What's the one question I should have asked you and didn't?

Matt: What are my musical influences?

I've grown up listening to all genres of music. Prog rock, guitar gods, rap, metal, punk, country, jazz, big band, etc. Everything has probably influenced me in some way. I'll list some of my favorite artists, well, the ones that I think have had the most influence on my style as a musician.

  • Dream Theater - crazy time signatures, fast soloing, breakdowns. It's all thanks to them.
  • MXPX - masters of melodies and songwriting.
  • Larry Coryell - I wish I could play like him.
  • The Secret Handshake / Mystery Skulls - we're the same age but I look up to this guy. He's a genius. Probably the most influential on my style. I hope to work with him on a project someday.

You can read more about Matt on his website and follow him on Twitter.


I've added some more items, including some non-technical books

  • Feng Shui, the traditional oriental way to enhance your life
  • The Everything Groom Book. Know anyone who's engaged?
  • The Cathedral & The Bazaar, Musings on Linux & Open Source by an accidental revolutionary. Eric S Raymond.
  • How to Solve It: A New Aspect of Mathematical Method
  • Less: Accomplishing More by Doing Less: Marc Lesser
  • Introduction to Robotics
  • Shout: The Beatles in Their Generation


This is for my Eugene readers. I'm cleaning out my office and giving away a ton of stuff. Of particular interest: a bunch of programming and other technical books. If you are interested and live in the Eugene area give me a call at 678-458-5810.

O'Reilly Programming books (and a few others)

  • Creating Effective JavaHelp: Kevin Lewis
  • Java Threads, 2nd edition
  • Java RMI
  • Java Distributed Computing
  • J2ME in a Nutshell
  • Version Control with Subversion
  • Programming Perl
  • Advanced Perl Programming
  • Perl Cookbook
  • The C Programming Language: Kernighan and Ritchie
  • The Peopleware Papers: Notes on the human side of software
  • Software Development According 2 Einstein: a slim book of Einstein quotes and commentary.
  • Usability: the Site Speaks for Itself: website usability

Fiction & Non Fiction

  • Confessions of a Public Speaker, Scott Berkun (O'Reilly)
  • Leaving Reality Behind: etoy vs eToys.com and other battles to control cyberspace
  • Airframe, Michael Crichton
  • Beyond Civilization, Humanity's Next Great Adventure, Daniel Quinn
  • How to be a Gentleman
  • In the Shadow of the Gargoyle (short story collection including Harlan Ellison & Neil Gaiman)

Other stuff

  • 512 MB SO-DIMM
  • Red cushion for an Ikea Poang chair (replacements cost $35!)

If you've found this site you probably came from one of my technical blogs on Java, JavaFX or fun JavaFX demos. First let me say: Welcome! I'm glad you are joining me in what I hope will be a fun and engaging site. This post is the first of what will be many posts and essays on the topics of design, usability, and aesthetics. So let me dive in and tell you what this is all about.

This blog is about Design with a capital D. More specifically, user interface design for software, though I plan to cover topics generally applicable to many fields. By 'Design' I mean not just the making-things-pretty aspect of visual aesthetics (commonly called 'graphic design'), but also interaction design, usability, and human-centered engineering in general. I'm not a graphic designer, so making things pretty will be only a small portion of the blog (though hopefully our skills will improve over time together).

Why another design blog? It's true, if you search for 'design blog' in Google you will find endless sources, mostly focused on how to do things in Photoshop and Illustrator, the two great tools used by virtually all graphic designers. Even when their focus isn't specifically on how to do things in those editing tools, they are still targeted at graphic designers. But not this blog. This blog is aimed squarely at software engineers: the people who code up real software used by real people.

Now, why, you may ask, did I make a blog targeted at engineers? Aren't most engineers bad at design? Do they even care? Well, that is exactly the point. Software engineers are the ones most in need of some design education. And while they may not all care about design, my experience writing and speaking has shown me that there are a great many engineers who do in fact care and want to learn more. Thus, this blog.

What can you expect? Well, besides asking my own questions that I then answer, I plan to cover many aspects of designing good human-centered software. I'll cover how to plan and build good user interfaces, how to effectively test software on real users, how to work with graphic designers, and yes: how to make things pretty.

What can you *not* expect? First, I don't plan to cover JavaFX or Swing. I certainly will use them in examples, since my expertise in those technologies makes them effective teaching tools. However, no particular technology will be my focus. The point is to discuss and learn the timeless principles of design. Principles which apply whether you are using JavaFX, Silverlight, CSS, or paper and pencil. Good design is good design.

I think I will leave it there for now and enjoy some rare Pacific Northwest Sunshine (TM). If you choose to add my RSS feed to your reader, then I'll be seeing you again soon. Thanks! - Josh

I know it's been quite a while since I've posted. I have no defense other than to say I have an 18 month old baby. Toddler on the move 24/7 makes for a very tired daddy.

Last night I gave an intro to Arduino talk for our local tech meetup. My research work at Nokia has involved building hardware prototypes to test out ideas, so I've gotten to learn a lot about embedded hardware of all shapes and sizes. Arduino is clearly the leader in this area, making hardware hacking more accessible than it ever has been.

I won't try to recreate my talk here. Such things are better in person. Instead, here are links and descriptions of the many cool Arduino libraries and products we covered.

Arduino and Clone Boards

Because Arduino is an open hardware platform, anyone can take the schematics and build their own compatible boards. Wonderfully, this has resulted in a huge variety of hardware available for purchase at very reasonable prices. Here are a few of my favorites:

Freeduino Leo Stick The LeoStick is another small USB-enabled Arduino. The twist with this one is that it has tons of IO and the USB connector integrated into the PCB, meaning you plug it directly into your computer without any extra cables. It's perfect for bit bashing. Stick some sensors into the pins and start writing code. An ideal prototyping board.

DigiSpark DigiSpark is a Kickstarter project I recently backed. They are a local company (Portland, Oregon) building what they call "an Arduino so cheap you can afford to leave it in your projects". The boards are cheap and tiny, with a slew of attachments to snap on top. They haven't shipped their first production run yet, but they expect to before Christmas.

SeeedStudio's Seeeduino Film is a thin and flexible Arduino clone with a built in LiPo (battery) charger and a chainable communication bus. They are great for wearable projects or tight spaces.

Adafruit's DC Boarduino The DC Boarduino is made specifically for breadboard work. Long and thin, with pins spaced the right distance to drop right in the middle of your breadboard. I love it for prototyping new circuits.

ExtraCore A tiny Arduino on a board, less than one square inch. I'm using one for my alarm clock project. Works great.

The JeeNode is my new favorite Arduino clone. It is small, very power efficient, and has a built in wireless module. You can be sending wireless data in just a few minutes with these little guys, and they are super cheap. US residents can buy them through Modern Device

Makey Makey
Makey Makey isn't your traditional Arduino. You can start building fun things without any coding at all. Just hook alligator clips up to things, plug the board into your computer, and start making things happen. It's hard to explain with words. Just watch the short video. You'll be hooked.


Here are a few Arduino accessories I've purchased and really enjoy.

The 12mm LED strips A strip of bright diffused RGB LEDs. These are amazing. You can control each LED "pixel" independently and they chain together without using extra pins. (You can also split them into multiple shorter strands.) Adafruit's open source library makes it trivial to code with these. I plan to use them as my Christmas lights this year. $40 for a strip of 25 LEDs.

Easy Driver Stepper Motor Driver I've been using these for my home-built CNC machine (still not finished) and I love them. Very easy to work with and quite affordable.

RedPark iPhone serial cable If you need to have your iPhone communicate over a serial port, these cables are pretty much your only option. They come with an iOS library to make serial communication quite easy.

Amazon has the Getting Started with Arduino book on sale for $8.26, written by one of the Arduino founders, Massimo Banzi. Free shipping if you have Amazon Prime.


My new Arduino IDE: ArduinoX

While I love the Arduino platform, I hate the official IDE. It's not really bad, just extremely dated. It clearly looks like a hacked-up Swing app written by someone who doesn't write client-side Java for a living. After trying to fix a few bugs I decided to write my own with a modern UI. My new IDE (currently codenamed ArduinoX) has a better editor, nice fonts, built-in examples and docs (still prototypes), and in general tries to address long-standing usability problems of the official IDE. It still uses the same compiler toolchain underneath, so no code changes are required. It's just a better coding experience.

You can get the latest build for Mac or Windows here

Have a Happy Thanksgiving!

For Christmas Jen and I finally bought a TV after four years of distraction-less living. We finally decided it was time after countless evenings watching Hulu on my 15" laptop. We were adamant, however, that we would not buy cable. We just want a better way of watching Hulu, Netflix, and a few other sites. To make that happen I bought the latest Roku device, Angry Birds edition.

If you aren't familiar with it, the Roku is a small box you attach to your TV with HDMI. It runs special apps that stream content over the internet and can display full HD content. The box itself is quite nice. It's super tiny, makes no noise, and uses a bluetooth remote instead of infra-red. You can also side load content from a USB stick.

For basic streaming of video from Hulu or music from Pandora the Roku works quite well. I wish the interface was a bit prettier and had smoother animation, but it gets the job done with minimal fuss. Apps are where the device has the most potential but also its greatest failure.

I have long been a fan of putting apps on a TV, especially when tied to an internet based store. The concept seems so obvious that I have to assume Apple is working on this (unless they decide to just stream everything through your iPad). Apps are a good way to view video because the content provider can customize the experience, but that is just the beginning.

For most people, the nicest display in their house is the TV. It is usually centrally located and has comfortable seating. And yet we only watch video on it. This seems like a waste. Such a beautiful high resolution screen should be used for much, much more. Dynamic artwork. Video calling. Interactive picture frames. Music visualization. And of course games. The potential is endless.

So why hasn't it happened? Honestly, I don't know. The technology has been ready for a couple of years. Roku and Apple can sell their devices at a profit (or at least break even) for under $100. I think the device makers are too focused on the lure of video. That's where the sexy business model is. They all dream of being the next cable company. The company that sits between you and the content, exacting a per-use toll. This is also the *hardest* part to do. And while they waste time dicking around with video streaming rights they ignore the potential of other uses for a beautiful large screen attached to the internet.

But I digress. Back to the Roku.

The Roku has its own app catalog (called the Channel Store) where you can get both paid and free apps. Unfortunately the selection is meager and the interface is horrible. It's a table of rows, each showing a picture of the app. When you choose an app you see a details screen where you can purchase it. This interface simply won't scale. There is no searching and few categories. The details screen has no ratings or screenshots, just the 'cover art'. And the selection of non-video apps is horrible: a few bad games and radio apps.

I'm not sure why the Roku is like this. They've solved most of the hard parts: initial setup, in app payments, device navigation, OAuth association with online services. They just don't have the apps. App development itself is fairly easy. You must use their proprietary BrightScript, but it's easy to work with. The SDK is just some command line scripts that install right to your test device. They clearly took a few pointers from webOS on how to make development easy. My only suggestion would be switching to JavaScript (and making the native sdk public).

The other thing that makes me wonder is that the features of the device haven't changed much over the past few years, and their development process seems slow. They announced their 3.0 SDK about a year ago but it only just went final, with fewer features than expected. They have Angry Birds running but only allow native development access (C + OpenGL, I'm assuming) for special developers. They promised to open it up to everyone, but nine months later they still haven't. For a startup this seems slow.

There are also a few apps that I really expected to be present and aren't. You can stream photos from Facebook and listen to Pandora, but there is no built-in app to stream music, movies, or photos from your desktop computer. Given that their main competitor, Apple, did this from day one, I'm not sure why it's absent. The few 3rd-party apps which do this are horrible! This is especially surprising since I would think that streaming MP3s from a network drive would be the first thing people in the open source community would do. BTW: I did try Plex but it doesn't do music. The other MP3 apps I found were amazingly slow and ugly.

So, this brings me to three questions that I can't answer:

  1. Why is Roku the way it is? Are they focusing on other things (better content deals)? Are they understaffed? What's up?
  2. Is there any interest in an app to stream music from your iTunes library on your desktop? (It would require a simple one-click server on the computer.) I'd love some help on it to make it non-ugly.
  3. Am I wrong about using modern HDTVs for non-video purposes? Is there simply no interest in this?

It's the fashionable thing to speculate on future Apple products. One idea that continues to get traction is the Apple TV, a complete TV set with integrated Apple-y features. Supposedly to be announced this fall, after failing to appear at *any* event for the past three years, it will revolutionize the TV market, cure global warming, and cause puppies and kittens to slide down rainbows in joy.

I don't buy it. I don't think Apple will make a TV. Televisions are low-margin devices with high capital costs. Most of the current manufacturers barely break even. Furthermore, the market is saturated. Pretty much anyone in the rich world who wants a TV has one. Apple needs *growth* opportunities. The last thing they need is a new product with an upgrade cycle at least *twice* as long as desktop computers. It doesn't make sense.

All that said, speculating about products is a useful mental exercise. It sharpens the mind and helps you focus when working on *real* products. So here we go:

If Apple Made a TV, How Would They Do It?

First let's take care of the exterior. In Apple fashion it would be pretty and slender. Either a nice brushed aluminum frame like the current iMac or a nearly invisible bezel. I suspect they will encourage wall mounting so the TV appears to just float. The current Apple set-top box will be integrated, as will network connections, USB, etc. Nothing to plug in except the power cord.

Next, we can assume the current Apple TV will become the interface for the whole device. A single remote with on-screen controls for everything. While I love my Roku, I hate having to use one remote for the interface and a second for power and volume.

Third, they will probably add a TV app store. I don't think it will feature much in the way of games and traditional apps. Rather, much like the Roku, there will be apps for each channel or service. The current Apple TV essentially has this now with the Netflix and HBO apps. The only difference would be opening the store up to more 3rd-party devs. I think we can also assume this will be another client of your digital hub. Apple already wants us to put all of our music, videos, and photos into iCloud.

So far everything I've described can be done with the current Apple TV set-top box. So why build a TV? Again, I don't think they will; but if they did, they would need to add something to the experience beyond simply integrating a set-top box.

First, a camera for FaceTime. Better yet, four of them, one in each corner of the screen. Four cameras would give you a wide field of view (with 3D capture as a bonus) that can track a fast-moving toddler as they move around the living room. This is perfect for video chatting with the grandparents. Furthermore, there are modern (read: CPU-intensive) vision algorithms that can synthesize a single image from multiple cameras. Right now the camera is always off to the side of the screen, so your eyes never meet when you look at the person on the other end. With these algorithms the Apple TV could create a synthetic image of you as if the camera was right in the middle of the TV. Combined with the wide field of view and a few software tricks, we could finally have video phones done right. It would feel like the other person is right on the other side of your living room. It could even create parallax effects if you move around the room.

Video calls are a clear differentiator between the Apple TV and regular TVs, and something that a set-top box alone couldn't do. I'm not sure it's enough to make the sale, though. What else?

How about the WWDC announcement of HomeKit? An Apple TV sure sounds like a good home hub for smart automation accessories. If you outfit your house with smart locks, door cameras, security systems, and air conditioners, I can see the TV being a nice place to see the overview. Imagine someone comes to the door while you are watching a show. The show scales down to a corner while a security camera view appears on the rest of the screen. You can choose to respond or pretend you aren't home. If it's the UPS guy, you can ask them to leave the package at the front door.

I imagine the integration could go further. Apple also announced HealthKit. The Apple TV becomes a big-screen interface into your cloud of Apple stuff, including your health data. What happens if you combine wearable sensors with an Apple TV? See a live map of people in the house, à la HP's Marauder's Map. An exercise app could take you through a morning routine, using both the cameras and a FitBit to measure your vitals.

A TV really could become another useful screen in your home, something more than just a video portal. I think the idea has a lot of potential. However, other than a camera and microphones, almost everything I've detailed above could be done with a beefed-up standalone Apple TV set-top box. I still don't think a full TV makes sense.

I’m happy to announce Electron 0.2. We’ve done a lot of work to improve the compiler and library tools. The biggest news is Windows and Linux support. Also, you don’t need to pre-install the regular Arduino IDE anymore. Electron will auto-download its own copy of the required toolchain. Here are the details:

  • Initial Windows and Linux support!
  • You don’t need to modify settings.js anymore. Electron will auto detect your Arduino/Sketchbook dir.
  • You don’t need the regular Arduino IDE installed anymore. The appropriate toolchain will be auto-downloaded for you the first time you compile something.
  • User-installed libs work now. Note that user-installed libs take priority over libs installed by the IDE.
  • The serial port will be automatically closed and reopened when you upload a sketch. If this crashes your computer please let me know. I might need to increase the timeouts.
  • Preliminary support for auto-detecting which board you are using by the USB VID/PID. Special thanks to PixnBits for that.
  • Set the serial port speed
  • Sketch rename works now
  • Download progress is shown in the browser (partially)
  • Tons of under-the-hood fixes
  • Auto-scroll the console
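The VID/PID-based board detection mentioned above is, at heart, a lookup from USB vendor/product IDs to a board definition. Here's a minimal Python sketch of the idea; the table entries and function names are illustrative only (a real tool would load the mapping from something like the arduino-data project), not Electron's actual code:

```python
# Sketch: map a USB (vendor id, product id) pair to an Arduino board id.
# The table values below are illustrative; confirm IDs against the
# board definitions your tool actually ships with.
BOARD_TABLE = {
    (0x2341, 0x0043): "uno",       # commonly published IDs for the Uno R3
    (0x2341, 0x8036): "leonardo",  # commonly published IDs for the Leonardo
}

def detect_board(vid, pid, default="uno"):
    """Return the board id for a connected device, falling back to a default
    when the device is unknown (e.g. a clone with a generic USB-serial chip)."""
    return BOARD_TABLE.get((vid, pid), default)
```

The fallback matters in practice: many clone boards use generic FTDI or CH340 serial chips whose IDs say nothing about the board, so auto-detection can only ever be a hint, not a guarantee.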

The arduino-data project was also updated with new boards and libraries:

  • New boards: TinyLily and Gamebuino
  • More networking libs: CmdMessenger, OneWire, PS2Keyboard, SerialControl, SSoftware2Mobile, Webduino
  • More specs on various boards
  • The rest of the built in libraries: Ethernet, Firmata, GSM, LiquidCrystal, SD, SoftwareSerial, TFT, WiFi
  • Support library sub-dependencies

Thanks to contributors:

  • Dan O’Donovan / Dan-Emutex
  • Nick Oliver / PixnBits
  • Sean McCarthy / seandmcarthy
  • trosper

You can get the latest code on the GitHub project. Please test this with every board and OS you have. File bugs in the issue tracker.

I’ve registered a domain for Electron, though don’t bother going there yet. I haven’t built the website. If anyone is a talented webdev who’d like to help with that job, please contact me.

If you’d like a sneak peek at the next version of Electron, check out the mockup here. It’s got a new library picker, a proper tree-based file picker, and resizable panes. It still needs a lot of work before it can go live, but this will give you an idea of where we are going.

Thank you, and keep on testing.

This is a short video (~6 min) where Tim Berners-Lee (Mr. Web himself) talks about the successes of open data. Take special note of the end section, where the OpenStreetMap project is used to help relief efforts in Haiti after the earthquake.

Watch it now!


I hate smartphone lock screens. Not the concept itself, but the many implementations. Everything I’ve seen is too minimal or too complex. This is prime real estate we are talking about. Surely we can do better. Since it’s been on my mind a lot lately I thought it would be fun to redesign the lock screen starting with first principles. Note: I’m a researcher at Nokia by day. This blog and the opinions on it have nothing to do with my work at Nokia. All ideas are my own and do not reflect anything Nokia is working on. This post also does not reflect a desire on my part to actually build a new lock screen for any OS. This is purely a design exercise to explore the space and see if we can do better.

Android image from (I know it’s a modified ROM, but it illustrates the point)

At the beginning

What really is a lock screen for, after all? Its original purpose was to protect the phone from unwanted input, whether from accidentally pressed buttons or an intruder reading your email. While this is still the primary purpose, the lock screen, by virtue of being the easiest place to access on a phone, is where we put lots of extra information like the time or recent calls. In recent years phones, especially Android phones, have created an explosion of new widgets and dashboards on the lock screen. It’s getting out of hand and still doesn’t use the available space as well as it could. We need to start over from first principles.

Fundamentally a lock screen dashboard is about presenting status at a glance. This is the status of anything on the phone, from wifi and signal to time to location to incoming messages. Here’s a list of ‘status’ items I just thought up:

  • current weather conditions
  • battery / power charging
  • currently playing music / other audio
  • calendar
  • todos
  • alarms
  • a running stopwatch
  • location
  • Facebook notifications and photos
  • emails
  • tweets
  • SMS messages
  • and of course the first thing to go on the lock screen: time and date

I’d also like to mention the following which aren’t statuses but which are commonly found on lock screens. I’ll talk about what to do with them later.

  • quickly record a note
  • quickly snap a photo
  • contact info if lost. Technically that last one isn’t status, but it sure would be useful. Perhaps a remote request could put the phone into ‘lost mode’, which then shows that info on the lock screen.

Status at a Glance

So we have a long list of statuses. How should we present them? We can’t list them all, there wouldn’t be room. Should they be interactive? Animated?

First of all, I’d like to go back to our original definition of the lock screen: “status at a glance”. The ‘at a glance’ part is very important. The user’s only required interaction should be pressing the power button to toggle the screen. Too many lock screen widgets want you to interact with them. Anything which allows interaction violates the ‘at a glance’ principle, and it also runs against the original purpose of the lock screen: preventing accidental input. Anything which accepts input on the lock screen could be accidentally pressed.

Let’s break down the problem into chunks. The obvious solution is to list all of the items vertically. This would be too long for the screen, especially if you have a lot of social media notifications. We could scroll but that requires interaction and means you can’t see everything at a glance. Let’s start by removing unneeded things. The alarms are only present if you have an alarm. Most of the time we won’t have an alarm so we can hide it. The same applies to the stopwatch and music interface.

Next let’s focus on only the current items. This is about current status after all. That means the alarms should show you only the next upcoming alarm. The same with the calendar and todo items. These all have an inherent order and timeliness. By only showing what’s next we can help the user focus on what is important as well as reclaim a lot of screen space.


Now, about those Facebook and Twitter notifications. I hate them, but some people like to have them. Let's keep the noise down by only showing direct replies or posts from your most personal group. In Facebook this could be family or friends, or simply the ten people you interact with the most. Google+ has circles to indicate importance. I'm not sure about Twitter. Followers only? The user could tweak these settings, of course, but good defaults are important.

Next, let's collapse those notifications into a single line per platform: all Facebook notifications on one line, all tweets on another, and so on. The line shows the first notification and a count of total unread messages. This system works well for email and SMS messages as well. To keep it current, let's always show the most recent notification by default. That way, if you glance at the screen, turn it off, and more messages arrive before you actually log in to dismiss anything, you will still see new information the next time you turn the phone on.
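To make the collapsing concrete, here's a small JavaScript sketch of the idea. The notification objects and field names are invented for illustration; a real lock screen would pull these from the platform's notification service.

```javascript
// Hypothetical notification objects; a real platform API would differ.
var notifications = [
    { platform: "twitter",  text: "@you nice post!", time: 100 },
    { platform: "twitter",  text: "@you me too",     time: 200 },
    { platform: "facebook", text: "Mom tagged you",  time: 150 }
];

// Collapse to one line per platform: the most recent message plus an unread count.
function collapseByPlatform(items) {
    var lines = {};
    items.forEach(function (n) {
        var line = lines[n.platform];
        if (!line) {
            lines[n.platform] = { latest: n, count: 1 };
        } else {
            line.count += 1;
            if (n.time > line.latest.time) line.latest = n;
        }
    });
    return lines;
}

var lines = collapseByPlatform(notifications);
// The Twitter line would read: "@you me too" (2 unread)
```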

This presents a dilemma, though. If you've got three new @replies on Twitter we have the information to show them all, but we don't want to take up the screen space. There is a temptation to show the messages as some sort of scrolling list or carousel, but this would violate our 'at a glance' principle. What to do? What if we modify our rule slightly: it's okay to allow interaction while locked, as long as the interaction is read-only. The user can't dismiss notifications or reply to messages, as those would change fundamental state. We only allow simple navigation where the cost of accidental interaction is low. Accidentally swiping through a list of tweets in my pocket is not a big problem. Butt-dialing is.

So, with our new modified rule we can collapse the tweets into a single list item scrollable horizontally like a carousel control. This system works with anything that has multiple ‘current’ items: email, alarms, todos, calendar, etc. To preserve our principle of only allowing read-only interaction let’s reset the current item to the most recent whenever something new comes in, or after a timeout of 10 minutes.


Great, now we've shrunk or completely hidden most information, reclaiming much of our lock screen. Now let's get into a few corner cases, starting with the weather. The weather should always show your current location, unless you are flying to another city, in which case you couldn't get weather data anyway. It should indicate what location the weather is for and how long ago it grabbed the data. Furthermore, if the user has configured additional cities (home city, work city, family city) in the weather app, these should show up here using the same swiping mechanism as messages. Having a graphical view of weather would be nice but not essential. What is essential is a textual description of the weather, the temperature, and whether it's going to rain in the next few hours. We want to show information that is both current and relevant. That, my friends, is status.

Battery and charging are self-explanatory. Show the current charge level of the battery, both as time-to-dead and as a percentage. Indicate whether it is currently charging. If the charger is plugged in but it's not charging, that counts as an exceptional condition that must be highlighted. The same goes for a nearly dead battery.

Music and Location

Location is an interesting thing. While we could certainly show your ‘current’ location, constantly polling the GPS sensor drains the battery. Furthermore, knowing your location isn’t actually very useful most of the time. You probably already know where you are. However, if you are on the way to somewhere then you might want to know the ETA (estimated time of arrival) from your current location to your destination at the current speed. In fact, the current speed might also be useful info. If you are in the middle of following a navigational map then this information is already known and could be displayed. iOS does this today and I assume Android does as well. Furthermore, if you have a meeting in the next hour with a location attached to it then that could be used as your destination. Some testing will be required to determine the optimal GPS polling frequency to avoid draining the battery.
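The ETA arithmetic itself is simple: remaining distance over current speed. A minimal sketch (the function name and units are my own):

```javascript
// Estimated time of arrival in minutes, given the remaining distance to the
// destination and the current speed. Units here are miles and miles per hour.
function etaMinutes(remainingMiles, speedMph) {
    if (speedMph <= 0) return Infinity; // stopped: no meaningful ETA
    return remainingMiles * 60 / speedMph;
}

etaMinutes(10, 30); // 20 minutes at a steady 30 mph
```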

A few tweaks

Other possibilities:

  • Current volume level. I'm not sure you need it: if you can't hear something, you know you need to make it louder. Perhaps only a 'mute' indicator.
  • Current accelerometer / compass / gyroscope values. I can't think of a use for these and they drain the battery, so let's drop them.
  • Light sensor: I can't think of a use for this either.

Animation: only animate what is new, say: what is new since the last time you opened the lock screen with a one minute debouncing in case you accidentally toggle the screen a few times.
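Here's one way the debouncing rule could work, sketched in JavaScript (the structure is hypothetical): a screen-on event only counts as a 'real' viewing if the previous toggle was more than a minute ago, so rapid accidental toggles don't mark items as seen.

```javascript
// Returns a screenOn handler. Each call gets the current time and the
// timestamps of all status items, and returns the items to animate:
// everything newer than the last *counted* viewing of the lock screen.
function makeAnimator(debounceMs) {
    var lastViewed = -Infinity; // time of the last counted viewing
    var lastToggle = -Infinity; // time of the last screen-on, counted or not
    return function screenOn(now, itemTimes) {
        // Only advance the 'viewed' baseline if the previous toggle was
        // outside the debounce window; quick re-toggles don't count.
        if (now - lastToggle > debounceMs) lastViewed = lastToggle;
        lastToggle = now;
        return itemTimes.filter(function (t) { return t > lastViewed; });
    };
}

var screenOn = makeAnimator(60 * 1000); // one-minute debounce
```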

To preserve spatial memory, let's make status items always appear in a prioritized order: time, SMS, and alarms at the top; the various messages in the middle; and everything else below.

While we only show the soonest item for calendars, a person's schedule for the day is often the most frequently viewed information. This is the one time I think we should support multiple lines. Rather than making the number of lines customizable, let's just make it a scrollable area that takes up all extra space. This is the 'agenda' view.

Interaction Allowed?

Now we come to the other interaction bugaboo. It is very handy to take a photo from the lock screen. Often your Kodak moment is fleeting. I have a toddler, so I am quite familiar with this situation. Still, taking photos is a significant state change. If we wish to allow it then we should require a more complex gesture to start the action, something more complex than simply tapping a button. If a left-to-right swipe is enough to unlock a phone, then perhaps a vertical swipe from a particular point would do; this is what iOS currently does. I think we could do better, though. Once you have successfully targeted a button with your finger, you can easily drag to anywhere on the screen with little thought. Let's make a spot near the top that you must drag the photo button to. When you tap the button, the target spot becomes visible. Drag to the spot and release to open the camera.

The End Result

I've spent quite a few words talking about how the lock screen should work but nothing about how it looks. This was intentional. Steve Jobs said design is how it works, not how it looks. While I agree with him, aesthetics do in fact matter. He didn't pay all of those visual designers the big bucks for nothing. However, I am not a visual designer. I'm an interaction designer who occasionally messes around with CSS. There is a reason Apple has talented teams of designers who work together to build amazing interfaces. That said, let's take a quick stab at what such a lock screen would look like. Ignore the black status bar at the top. That's from the real OS that I couldn't get rid of. It's a mockup, after all.

Not bad, though we can already see a few needed changes.

The weather looks weird mixed with notifications. I originally thought I'd have multiple cities to swipe through, but that's probably not needed by most people. If I drop that idea then the weather can go in the vacant upper left corner.

Next, the calendar needs some work. Clearly more whitespace and probably some separator lines. The icons are needed for the Twitter and Facebook sections but probably not anywhere else. Do I really need an email icon for the email inbox? A photo of the album for the music player might be nice, though not really necessary.


Overall, I think it's a good start. While there's a lot more to do on the visuals we've got a good framework. More importantly, we have rules that define what should go in the lockscreen and how to add future components. It is these rules which make it a comprehensive design and ultimately, I hope, something that you'd actually want to run on your phone. What's your take on redesigning the lock screen?

Jen was working today, so I spent the day fixing bugs and coding new features in Leonardo. Today's awesome feature: HTML Canvas export. Yes, oh yes! You can draw anything you want in Leonardo, then export it to JavaScript code that draws into Canvas. Why would you want such a feature? Lots of reasons:

  • Maybe you need to draw vector graphics in a browser that supports Canvas, but not SVG.
  • Maybe you want to learn Canvas and have some code to start with.
  • Maybe you are just simply too lazy to draw your own damn pictures in code.

Whatever your reason, Leonardo will be there for you. Look for it in our next release!

Check out this demo output page.
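To give a feel for the feature, here's the kind of code such an export might produce (a hypothetical sketch of the output, not Leonardo's actual generated code):

```javascript
// Hypothetical exported drawing function: renders a red rectangle and a
// blue circle into a 2D canvas context.
function drawScene(ctx) {
    ctx.fillStyle = "#cc0000";
    ctx.fillRect(20, 20, 100, 60);

    ctx.fillStyle = "#0000cc";
    ctx.beginPath();
    ctx.arc(200, 50, 30, 0, Math.PI * 2);
    ctx.fill();
}

// In a page you'd call it with a real context:
// drawScene(document.getElementById("canvas").getContext("2d"));
```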

Now that Apple has given us final specs and cost for the redesigned Mac Pro I’ve heard complaints that it is underpowered and non-expandable, especially for the price. The Pro comes with reasonably beefy CPUs but they will be out of date in a few years. The buyer can only expand the ram and disk, and not so much on the disk side given the lack of available space. So how can this be worth the $3000 entry price Apple is charging?

First we must realize that the Mac Pro isn’t for everyone. It really is for creative professionals who spend a lot of time in Logic, Aperture, Final Cut Pro, Maya, and other pro apps. These people need the maximum ram and processing power possible, and will pay for it. Expandability of storage isn’t a problem because they don’t care about internal storage anyway. Anyone who buys one of these will be using a stack of external drives or NAS. I can buy a 3TB drive at Costco for under 200 bucks! Thus the nice collection of Thunderbolt and USB ports on MacPro’s backside.

More importantly, however, the CPUs aren’t the real focus of the new Mac Pro. Apple is betting that the future of high speed computation is GPU computing. Apple is right.

I recently went to the International Super Computing conference when it was held here in Oregon. At least 50% of the talks were about how to restructure computing tasks to take advantage of GPUs. GPUs are the future of almost all high performance computing. GPUs are not as general purpose as a modern CPU, but if you can structure your problem in a way that a GPU can compute, then you can get a 5x to 10x performance boost per watt (or per dollar). Intel and Nvidia are happy to sell you a stack of GPUs without video connectors; these cards exist purely for GPU computation. Daisy-chained together, a stack of GPUs will beat any traditional supercomputer.

Of course, with the GPUs doing the heavy lifting the challenge becomes how to get your data *to* the GPU quickly. That’s why Apple’s MacPro site spends so much time talking about the IO bus and memory bandwidth. Internal storage? CPU upgrades? Who cares! The MacPro is all about moving data in and out of beefy GPUs as fast as possible.

Apple has been working on this for a while. Initially they started shifting graphics work to the GPU with Quartz Extreme. This enabled the OSX compositing window manager to run smoothly on older hardware. Later Apple introduced full Mac support for OpenCL, a computation companion API to OpenGL. When you write some code in OpenCL the Mac can shift the computation dynamically between the CPU and the GPU. Powerful GPUs can make up for weak CPUs.

And this brings me to the Raspberry Pi, my favorite cheap ARM-based mini-computer -- so cheap I've seen hard drives with Pis glued to the side of them as file servers. At 700MHz the Raspberry Pi's CPU is anemic, but the GPU is surprisingly powerful. Broadcom's VideoCore IV not only supports OpenGL ES 2.0, meaning real shader support, it also has H264 video encoding and decoding in hardware. It can decode a 1080p video in real time on this $35 computer. The CPU just has to stream the compressed video file to memory; the GPU takes care of the rest.

The Pi's GPU also has an interesting API called dispmanx. While it is almost entirely undocumented, I've learned that this API lets you set up an almost unlimited number of hardware layers in the GPU. You can have one layer with 3D content from OpenGL while a second layer plays video and a third shows images. Most importantly, each of these layers can be resized and alpha blended entirely by the GPU. This means we can create a full compositing window manager like OSX and Windows 7 have, all on our tiny 700MHz computer. These guys are already working on a port of the composited Wayland/Weston library to the Raspberry Pi.

While the Raspberry Pi does not support OpenCL, it is possible to use the GPU for accelerated JPG decompression, and there are ongoing efforts to directly target the VideoCore's internal APIs for SIMD processing.

All of this power comes from shifting computation from the general purpose CPU to the custom purpose GPU. This is a long term trend. Over time more and more work will be shifted. GPUs can’t do all computational tasks of course; but if you can transform your problem in to something the GPU can handle (preferably something highly parallel), then you’re golden. He who controls the GPU... controls the world! Now let’s get some cheese, Pinky.


I haven't posted about the iPad (or tablets in general) since before the iPad announcement. I thought this prudent given that we all knew what was announced but I hadn't actually tried using one in person. Last week I played with a couple at my friend's company and my initial thoughts were confirmed: the iPad as an existing product today is interesting but not amazing, but as an indicator of the future it is amazing.

When I picked it up it seemed smaller than in the pictures, and a tad heavier. Very solid, though. The interface is essentially a large iPod touch with a few flourishes. This isn't a bad thing, though. The iPhone interface was clearly designed to scale to larger devices. And on the faster CPU of the iPad it absolutely flies. This is one advantage of controlling both the software and hardware. I am surprised that they didn't put more memory in it, however, given how cheap memory is. Perhaps it was the only way to meet their margins.

The screen is gorgeous. Viewing websites, news, facebook, mail, etc. is a very nice experience. Where it falls down is content creation, specifically typing. Steve called this a magical keyboard or some such during his introduction keynote.

This is a lie.

While it may be better than the iPhone keyboard, it still sucks. Maybe for someone who is a hunt-and-peck typist it would be about as fast as a real physical keyboard, but anyone who is a touch typist will be immediately frustrated (and I mean *immediately*!). It's not their fault; they've done the best possible with their screen real estate, but hard immovable glass is no substitute for real physical keys with edges, dimples, and movement. No comparison.

However, this doesn't matter.

Why? Because most people aren't touch typists. And most people aren't content creators. I fully recognize that I'm not the target market for this. One day the iPad and similar devices will replace the desktop computer for 90% of people. (When I say 'desktop' I mean general purpose computers running full desktop OSes, whether or not they are actual desktops or laptops).

For me the iPad is an expensive novelty. It would live on my living room table as a nice way to read email and news feeds in the morning while I drink my coffee. Then I'd go upstairs to my real computer. But most people aren't like me (or you, given that you are reading a technology blog right now, probably through an RSS reader). Most people use computers for content consumption and communication, not creation. And the iPad is 90% of what they want. Ten years from now 90% of computers will be something like an iPad. And the remaining 10% will be called workstations.

That said, for the target market I think the iPad will still be seen as too expensive and missing features. It will sell well but won't be the smashing success that the iPhone was. But that's fine... Apple's used to that. They will slowly and carefully add features, and lower the price, until they take over the world.

So what's missing for most people? I originally had a laundry list but after taking off things that don't really matter anymore in the 21st century (like CD drives), I came up with only three things:

  1. the iPad must sync to a real computer for software updates and other data like address books
  2. the iPad must sync to a real computer to get music
  3. the iPad can't print

The first one is easy to solve. Sync over the network. Every smart phone not made by Apple can get software updates over the web (my new employer does a great job of it). I expect Apple to enable this in the next year. Mobile Me already does most of the job.

The second one is a bit harder due to the massive amount of data involved. Most people have gigs of MP3s. I expect Apple to solve it by either creating the media server edition of the Time Capsule or else offering music streaming from the cloud... Or both, they've sure got the resources.

The third is already 90% solved. The iPad SDK has render-to-PDF support, which is most of printing. Combined with better network printing to kill off the need for printer drivers, you'd have it.

So that would be it: a laptop replacement for 90% of people. The future is here (90% of it anyway) and it's a pad. Now if only they didn't hate the pencil so much.


Vacation and travel is over and I'm happy to say things are moving again. I'm feeling refreshed and I have a lot to share with you in 2012; starting with the new book I'm writing for O'Reilly! Read on, MacDuff.

The Book

I've been working on a new book for O'Reilly, tentatively titled Building Mobile Apps in Java. I mentioned it briefly on Twitter but haven't gone into the details before. It will show you how to use GWT and PhoneGap to build mobile apps. With these two open source technologies you can code in Java but target pretty much any mobile platform such as iOS, Android, and webOS.

The book will cover the technical aspects of using GWT & PhoneGap. It will also dive into how to design for mobile. Navigation and performance varies greatly across devices, so it's an important topic. Oh, and the last chapter will show you how to make a mobile platform game with real physics. Tons of fun.

Building Mobile Apps in Java will be an eBook about 60 pages long, available everywhere O'Reilly publishes their ebooks. Look for it in February or early March.

Open Source and Speaking

For 2012 I want to spend some time doing more actual design work. I'm planning a new hand-built WordPress theme for my blog, including proper phone and tablet support. I also have a few art side projects that you'll get to see later in the year.

And speaking of design, I have new significant releases of Amino and Leonardo Sketch coming. If you are in the Portland area come to the January PDX-UX meeting. I will be presenting how to do wire framing with Leonardo Sketch. I'll give a brief overview of Leo and show off some of the great export and collaboration features.

I will also be doing a 5 minute Ignite talk in Portland on February 9th about the future of ebooks and what a Hogwarts Textbook would look like.


Finally, I plan to post both more and less on this blog. I used to do short posts on small topics or collections of links. I've found social networks better for that sort of thing, so I'll do that on Twitter and Google Plus from now on. The blog will be reserved for long form content such as my well-read HTML Canvas Deep Dive. Look for more long essays on canvas, app stores, and technology trends this year.

2012 is finally here!



Now that OSCON is over I can get back to working on Electron. That means a new version is coming, and by far the biggest change will be a brand new user interface. I had posted an early preview here but that's now completely out of date. You see, I discovered a new framework.

Currently Electron is built from two main components. The back end is NodeJS code. This is the part that actually compiles and uploads your Arduino code. The front end is HTML and JavaScript. This is the part you actually interact with: the editor, debugging, clicking the compile button, choosing boards, etc. Currently the UI is written in plain HTML with Bootstrap and jQuery. Sadly, this form of development won't scale. jQuery is great for manipulating a few DOM objects but it just doesn't scale up to a full app. I had considered a few UI frameworks like jQuery UI, but several people at OSCON mentioned AngularJS. When I got home I bought a book from O'Reilly and built a few prototypes. I'm so glad that I did.

Angular isn't a set of widgets. It's a JavaScript modules framework with data binding. With Angular I can break Electron up into proper reusable, testable components. It also handles much of the data update boilerplate I previously wrote by hand.

After two days I've completely rewritten the UI in Angular. Almost everything that worked before works in the new UI. The switch over went smoothly thanks to the backend being done entirely with REST requests. I can actually run both the old and new UIs at the same time.

Here is a screenshot of what the new version looks like.


Some time next week I should have v0.3 released. After that the focus will be entirely on integrating Atom-Shell so that you don't need the command line at all. You'll be able to just download a proper app binary like any other desktop program.

Oh yeah, and we had a BOF at OSCON on Electron. I got lots of good feedback that will work its way into the first post-1.0 release.

And one more thing...

If you live here in Eugene I'll be doing an Electron presentation at this week's Eugene Linux User's Group.

It would be an understatement to say that the last year has been busy. With having a baby, launching and then 'unlaunching' the HP TouchPad, lots of conferences, and pushing out several open source project releases, it's just been one heck of a crazy time. Throughout it all I've tried to continue blogging, though not as consistently as I would like. I thought it would be interesting to review the blog stats for the year and see which posts were actually the most popular, rather than which ones I thought were. The results may surprise you. They certainly surprised me.


Traffic has grown for Josh On Design in general. At the beginning of 2011 I was averaging around 20-30 hits per day. For December the daily average is around 180. So that's quite a significant jump. However, it doesn't account for the nearly 50k hits for the entire year. A good portion of that came in two spikes at the end of September and the middle of October. See the anomaly in this chart:


Those two spikes represent when I was linked by heavily trafficked news sites: Reddit and YCombinator's Hacker News. These links are attributed to one piece of content: the notes and exercises from my HTML 5 Canvas Deep Dive workshop that I presented at OSCON in July. In fact, the canvas article accounts for nearly half of all hits I've gotten for the entire year. So, lesson number one: quality deep technical content is greatly appreciated.

The #2 hit is the main page of my blog, which doesn't really tell us anything. #3 is my post about refactoring Amino, with 1700 hits. Note that even though this is #3, it's still only 3.69% of the total for the year. What this tells me is that my blog has a very long tail. Only the canvas deep dive is significantly more popular than the rest of the site.

#4 is a post about my App Bundler project, which lets you easily build native Java exes for a variety of platforms. #5 and #6 are more posts about Amino. This surprises me because Amino doesn't seem to be used very much in practice and the mailing list traffic is minimal, yet posts about it seem to be popular. Perhaps it's a big hit with my Twitter followers.

With 898 hits Typography 101 is the seventh most popular post. This is the first one that doesn't surprise me. I figure reference material will generally pick up a lot of hits over time as people find it through Google. The two articles Future of Desktops and Design of the Workstation OS and Why 2014 won't be like 1984 are rant essays about the industry. I'm not surprised that they were in the top ten since controversial topics seem to always get hits. Finally UI Design Assets and Tools rounds out the top ten with 676 hits for the year.

Overall I'm happy with the blog traffic. Considering I haven't had much time to work on it, especially since Jesse was born, I think it's doing pretty well. I expect traffic to rise significantly early next year when my O'Reilly eBook is published (which reminds me that I've got more editing to do).

I'm officially on vacation today and will be offline for the next three weeks. I've got much needed baby time awaiting. I'll see you in 2012!







You may be a new programmer, or a web designer, or just someone who's heard the word 'RegEx', and asked: What is a Regex? How do I use it? And why does it hurt my brain? Well, relax. The doctor is in. Here's two aspirin and some water.

What is it doctor?

Oh, it's two parts hydrogen and one part oxygen; but that's not important right now. We're here to talk about RegEx. RegEx is short for Regular Expressions, but that's also not important right now. Regexs are just patterns for matching bits of text. Whenever you hear RegEx, just think: pattern. In fact, I'll stop using the word regex right now, since it mainly sounds like the kind of medicine you'll need after trying to write a complex regex.

What are these patterns good for?

Patterns are mainly used for three things: to see if some text contains the pattern (matching), to replace part of the text with other text (replacement), and pulling out portions of the text for later use (extraction). Patterns contain a combination of regular letters and special symbols like ., *, ^, $, \w and \d. Most programming languages use pattern matching with a subset of the Perl syntax. My examples will use JavaScript, but the same pattern syntax should work in Java, Python, Ruby, Perl, and many other languages.


Suppose you want to know if the text "Sally has an apple and a banana." contains the word 'apple'. You would do it with the pattern 'apple'.

var text = "Sally has an apple and a banana.";
if(text.match(/apple/)) { console.log("It matches!");

Now suppose you want to know if the text begins with the word 'apple'. You'd change the pattern to '^apple'. The ^ is a special symbol meaning 'the start of the text'. So this will only match if the 'a' of apple is right after the start of the text. Call it the same way as before.

var text = "Sally has an apple and a banana.";
if(text.match(/^apple/)) { //this won't be called because the text doesn't start with apple console.log("It matches!");

Besides the ^ symbol for 'start of text', here are some other symbols that are important to know (there are far more than this, but these are the most useful).

$ = end of the text
\s = any whitespace (spaces, tabs, newlines)
\S = anything *but* whitespace
\d = any number (0-9)
\w = any word character (upper and lower case letters, numbers, and the underscore _)
. = anything (letter, number, symbol, whitespace)

If you want to match the same letter multiple times you can do this with a quantifier. For example, to match the letter q one or more times put the '+' symbol after the letter.

var text = "ppqqp";
if(text.match(/q+/)) console.log("there's at least one q");

For zero or more times use the '*' symbol.

var text = "ppqqp";
if(text.match(/q*/)) console.log("there's zero or more q's");

You can also group letters with parenthesis:

var text = "ppqqp";
if(text.match(/(pq)+/)) console.log("found at least one 'pq' match");

So, to recap:

.   = anything
x+  = match 'x' one or more times
x*  = match 'x' zero or more times
x|y = match x or y

ex: match foo zero or more times, followed by bar one or more times: (foo)*(bar)+

Replacing text

Now that you can match text, you can replace it. Replace 'Sally' with 'Billy', or the first 'ells' with 'ines'.

var text = "Sally sells seashells"
var text2 = text.replace(/Sally/,"Billy"); //turns into "Billy sells seashells"
var text3 = text.replace(/ells/,"ines"); //turns into "Sally sines seashells" (only the first match is replaced)


Most pattern APIs have a few modifiers to change how the search is executed. Here are the important ones:

Make the search case insensitive: text.match(/pattern/i)

Normally the patterns are case sensitive, meaning the pattern 'apple' won't match the word 'Apple'. Add the i parameter to match() to make it case insensitive.

Make ^ and $ work across multiple lines: text.match(/pattern/m)

Normally the ^ and $ symbols anchor to the start and end of the entire string, even if it contains newline characters '\n'. With the m parameter they also match at the start and end of each line, which lets you anchor patterns inside a multi-line string.

Make a replace global: text.replace(/foo/g, "bar")

Normally the replace() function will only replace the first match. If you want to replace every match in the string use the g parameter. This means you could replace every copy of 'he' to 'she' in an entire book with a single replace() call.
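Putting the modifiers together in JavaScript:

```javascript
// i = case insensitive, g = replace every match
var text = "Apple pie. apple tart. APPLE cake.";
var result = text.replace(/apple/gi, "orange");
// result is "orange pie. orange tart. orange cake."

// m = let ^ and $ anchor each line of a multi-line string
var lines = "one\ntwo\nthree";
var found = /^two$/m.test(lines); // true; without m this would be false
```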

Substring Extraction

Another major use for patterns is string extraction. When you do a match, every group of parentheses becomes a submatch, which you can use individually. Suppose you have a text string with a date in it and you want to get the year and month and day parts out of it. You could do it like this:

var text = "I was born on 08-31-1975 and I'm a Virgo."
var parts = text.match(/(\d\d)-(\d\d)-(\d\d\d\d)/);
//pull out the matched parts
var month = parts[1]; //08
var day = parts[2]; //31
var year = parts[3]; //1975
//parts[0] would give you the entire match, eg: 08-31-1975

The Cheat Sheet

The standard pattern syntax in most languages is expansive and complex, so I'll only list the ones that are actually useful. For a full list refer to the documentation for the programming language you are working with.

Match anywhere in the text: "Sally sells seashells".match(/ells/) (matches both sells and seashells)
Match beginning of text: "Sally sells seashells".match(/^Sally/)
Match end of text: "Sally sells seashells".match(/ells$/) (matches only the seashells at the end)

any word at least three letters long \w\w\w
anything .
anything followed by any letter .\w
the letter q followed by any letter q\w
the letter q followed by any white space q\s
the letter q one or more times q+
the letter q zero or more times q*

any number \d
any number with exactly two digits: \d\d
any number at least two digits long: \d\d+
any decimal number \d+\.\d+ //ex: 5.0
any number with an optional decimal \d+(\.\d+)* //ex: 5.0 or 5
match the numbers in this date string: 2011-04-08 (\d\d\d\d)-(\d\d)-(\d\d)
also be able to match this date string: 2001-4-8 (\d\d\d\d)-(\d+)-(\d+)
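If you want to check these against a real engine, here are a few of the cheat-sheet patterns run through JavaScript:

```javascript
// Decimal numbers, with and without the optional fraction part.
var hasDecimal = /\d+\.\d+/.test("5.0");   // true
var optional = /^\d+(\.\d+)*$/.test("5");  // true

// Pulling the parts out of the two date strings above.
var strict = "2011-04-08".match(/(\d\d\d\d)-(\d\d)-(\d\d)/);
// strict[1] is "2011", strict[2] is "04", strict[3] is "08"
var loose = "2001-4-8".match(/(\d\d\d\d)-(\d+)-(\d+)/);
// loose[2] is "4", loose[3] is "8"
```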


Patterns are a very complex subject so I've just tried to give you the basics. While complex, they are also incredibly powerful and useful. As you learn them you'll find you use them more and more for all sorts of cool things. For a more in-depth tutorial read the Mozilla JavaScript Regular Expression guide.

PS: Found a bug in the above code? Want to suggest more examples of useful regexes? Want to just shoot the breeze? I've turned off comments because 99% of them were spam, so please tweet your suggestions to me instead: @joshmarinacci. Thanks for understanding!

After over a year of living with my custom built blog system I'd finally had enough last weekend. Time for something new. Something less buggy. Something extensible. So I took a day and rewrote it from scratch.

The new blog system is still custom, and still based on Node, but now backed by a real database, RethinkDB. Also, having many more months of Node and JS experience means I have a better idea of what I'm actually doing now. The new system integrates pages and posts into a single system, automatically tracks traffic, and calculates the sidebar links based on which content is the most popular. All with a single DB query, actually.

In any case, the internals probably aren't very interesting to you. More interesting is the new design, hopefully a bit cleaner and easier to read. I chose new colors and fonts, restyled a few things, and built a new custom layout from scratch. Of particular importance to me is that the pages scale properly on all device screens, from the lowly smartphone, through varying tablets, to the gargantuan widescreen desktop computer. If you resize your browser you'll see the fonts and margins shift to always provide the optimum experience.

I just turned the new blog on tonight, so if you notice any broken links or JavaScript errors, please contact me on Twitter, G+, email, etc. Thanks, and have a great weekend.

I'm very excited to announce that two of my presentations have been accepted to OSCON this year (thankfully back in Portland again). OSCON is one of my favorite conferences because I get to learn as well as teach. There is such a diverse set of topics that I try to get out of my comfort zone and learn something new every year. (One year it was an intro to Arduino). OSCON will be this July 25th-29th in Portland, Oregon. And let me tell you: Portland in July is simply beautiful, with the best microbrews in the country.

The first talk is actually in the new OSCON Java conference, running adjacent to the main OSCON conf. My talk is Building Mobile Apps with Java on Non-Java Platforms using GWT and PhoneGap where you'll learn how to code up mobile apps (both web and installable) using pure Java. Even though Java isn't available on platforms like webOS and iOS, we will use the GWT cross compiler to produce native installable apps for those platforms.  Yes, one codebase and all platforms really is possible.

My other presentation is actually a 3-hour hands-on workshop I will be co-teaching with Dave Balmer called HTML 5 Canvas Deep Dive. Dave and I will give you a comprehensive overview of Canvas, including the fiddly bits like pixel effects and mobile optimization. I'm also happy to say that we will show off the HP TouchPad's kickass Canvas implementation. The folks in engineering have done great work over the past few months filling in missing features and nearly doubling the speed.

During OSCON I'll be tweeting at #oscon. Hit me up for drinks if you want to talk about canvas and the mobile web.

For 20% off any OSCON package, use the code Marinacci at this link.


Tomorrow we will hopefully awake with news of a magical Apple tablet, so tonight I thought I'd give you a few things to read as you drift off to tablet dreamland with touchable sugarplums.

A Taxonomy of Device Forms

I had the incredible fortune to meet Mark Weiser when I was an intern at Xerox PARC. We are now building the world he firmly believed would come to pass: a world full of devices, a world where computing has disappeared into the fabric of our everyday lives. He divided the device landscape into several forms, as described in this article. (from Designing Devices)

Tangible Interaction = Form + Computing

Many first-generation tangibles have been whimsical and artistic explorations of what new technology can do. Some are simple; some, more complex. Some are elegant embeddings of display and projection. Some celebrate new materials. Some add sensing in clever ways. The field is still wide open, but one thing is clear: We’re likely to see more, not less, programming in things, and a lot more experimentation.
(from the ACM)

Realism in UI Design

When choosing icons and other symbols for your application it's important to remember when to use realism and when not to. A graphic that is too realistic can actually be harder to recognize and use. (from ignore the code)

The Apple Tablet Interface Must Be Like This

Many years ago Jeff Raskin designed an ultimate device he called the information appliance. This article details his design and how it might match up to the iTablet. (from Gizmodo).

A Brief History of Eye-Tracking

Who knew eyetracking technology started in 1879! (from the UX Booth)

Ribbon Hero

Yes, Microsoft has designed a game into the latest edition of Office. The goal is to teach you how to effectively use the Ribbon bar. (from Lost Garden, an awesome site on game design)

Edison's Kindle

Edison's nickel-based 40,000-page book and other future technology visions that haven't quite come true. I love yesterday's tomorrows. (from Technologizer).

(Apple Tablet photo courtesy of Flickr user Rego)

Progress comes not from inventing new answers, but from discovering new questions. -- some guy

I am bored of technology. As you might guess, this is kind of a problem for someone who is a professional technologist. Sadly, I can't help it. I spent five years working on advanced GUI toolkits, then three working on cutting edge smartphones. As I watched the parade of CES announcements this month I found myself being simply, well… bored. Bigger TVs. Faster smartphones. YouTwitFace social networking integrated into everything. Nothing genuinely new. Nothing to really get me excited. The last thing that really made me say 'wow: this is the future' was the first demos of Xbox Kinect, which sadly have yet to live up to their potential.

What is wrong? We live in an age of computing abundance. My Roku connected TV can stream shows and music from the last fifty years, plus play Galaga. I have five smart phones, each more powerful than a top of the line desktop from the mid 2000s. I can video chat with family two thousand miles away. Clearly we live in the future. So why am I so bored with it all?

I think I am unimpressed because these are technologies I have long expected to be here. Since I was a kid I assumed we would have faster computers, video phones, and ever smaller gadgets. Today's smart phone is merely the latest version of the PalmPilots and Newtons I played with nearly twenty years ago. That they have finally arrived in fairly usable form is not a triumph, but merely expected.

There are only two things that seem interesting to me right now. First is the Raspberry Pi. The Pi is very underpowered by modern standards. A 700MHz CPU with 512MB of RAM seems paltry, but combined with an insanely powerful GPU you get an amazing computer for $35. Never before has this been possible. A change in quantity, if large enough, can become a change in quality. The Pi feels like that to me. But…

Software on the Raspberry Pi still feels slow. Compared to what I had a few years ago it should be massively fast. Is our software simply too crufty and bloated to run efficiently? The Pi should be the new baseline for software. Your app should run smoothly on this computer, and it will run even better everywhere else.

There is one other thing that interests me: my 19 month old son. As I see him explore the world and discover language I once again feel the wonder from my own childhood. The pure joy of learning new things is infectious. Perhaps that is why I find myself again looking for the 'new' in the technology realm.

So, I am searching; and researching. I've spent the last few months looking at computer science papers from the 70s to the present. It's depressing to see how every new programming technology has existed for at least 30 years. I've also been scouring blogs, Reddit, used book stores, and anything else I can find in my quest to answer the question: What is next? What seems futuristic now but will seem obvious in a decade? What will replace social networking and gamification as the next wave to drive the industry forward? What new programming concept will finally help us make better software?

New Questions

If you are hoping for me to give you answers, I'm afraid I will disappoint you. My crystal ball reveals no silver bullets or shiny trinkets from the future. I cannot tell you about life in a decade. I can only offer a few thoughts on what we should be building now, that we might live in a future so packed full of technology it will bore us to tears as much as the present. These are the questions we should be asking.

Can multi-processor computers change our lives?

I recently reread some of the original papers around Smalltalk and the Dynabook. The belief at the time was that personal access to high speed computing technology would change how we live. The following thirty years have shown this belief to be true; but are we nearing the end of this transformation?

It is now generally accepted that the future of Moore's law is to have parallel CPUs rather than faster ones. This is great for server side developers. The everyday programmer can now finally use the last thirty years of research in parallel computation. However, the desktop computing experience hasn't really changed. My laptop has four cores, and yet I still perform the same tasks that I did a decade ago.

The real question: Are there tasks which local parallel computation makes possible that would change our lives? What new thing can I do with my home computer that simply wasn't possible ten years ago? Hollywood of the 90s tells us we should be doing advanced image analysis and global visualizations with our speedy multi-core processors through holographic screens. Reality has turned out less exciting: Farmville. Our computers may be ten times faster, but that doesn't seem to have actually made them better.

How can we replace C?

I can't believe we will use C forever. Surely the operating system on the Starship Enterprise wasn't written in C, and yet I see no way to replace it. This makes me a sad panda.

I hate C. Actually, I don't hate C: the language. It's limited but good at what it does. Rather, I hate C compilers. I hate the macro processor, I hate header files. I hate the entire way C code is produced and managed. Try porting an ARM wireless driver across distros and you will agree. C code doesn't scale cleanly. And yet we have no alternatives? Why?

I think the key problem is the C ABI. I could write a system kernel or library in a higher level language, but to interoperate with the rest of the system I must produce a binary blob compatible with the C ABI. This means advanced constructs like objects can't be exposed. Library linking is further complicated by garbage collection. If one side of a function call is using GC and the other is not, then who is in charge of cleaning up allocated memory? With C it is simple. A linked library is no different than if you had included the code directly in your app. With a GC'd language that library now comes with its own runtime and background processes that must be managed.

Header files don't help either. If I wish to call C code from a non-C language I must parse the entire header file, or hack it in through some language-specific FFI. Since .h files are essentially Turing complete, they must be processed exactly the same way a C compiler would process them, and then I must predict how the compiler generated the original binary. Why doesn't the completed binary contain this information, instead of me having to reverse engineer it from a macro language?

All languages provide a way to link to the C ABI. So if you want to build a library that can be reused by other languages, you have to write it in C. So all libraries are built this way. Which means all new systems link only to the C ABI. And any new languages which want to be linked from other systems compile down to C. You could never build an OS in Go or Ruby because nothing else could link to the modern structures they generate. As long as the C ABI is the center of the computing universe we will be trapped in this cycle.

There must be a way out. Surely these are not insoluble problems, but we have yet to solve them. Instead we pile more layers of abstraction on top. I'm afraid I don't know the answer. I just know it is something we must eventually solve.

How can we reason about software as a whole?

I'll get into this more in a future blog, but the summary is this. Too much effort is spent trying to improve programming languages themselves rather than the ecosystem around them. I've never felt like lack of concurrency primitives or poor type systems were the things preventing me from building amazing software. The real challenges are more mundane problems like trying to upgrade a ten year old database when an unknown number of systems depend on it. The problems we face in the real world seem hopelessly out of sync with the research community. What I want are tools which let us reason about software in the large. All software. All of it.

Imagine a magic database which contained all of the source to the codebase you are working on, in every revision, and with every commit log. Furthermore this database understands every programming language, data format, and config file you use. Finally it also contains the code and history of every open source project ever created. (I said it's magic, remember). What useful questions could you ask such a database? How about:

  • Is library X integrated, or is it really a collection of classes in several groupings that could be sliced apart? And which classes should we target? The Apache Java libraries could really benefit from this.
  • Is there another open source library which could replace this one, and meets the platform, language, and memory dependencies of the rest of my system?
  • How many projects really use method X of library Y? Would changing it be a big deal?
  • What coding patterns are most repeated in a full Linux distro? How many packages would have to change to centralize this code, and how much memory would it save?

We need ways to reason about our software from the metal to the cloud, not just better type systems. It would be like having a profiler for the entire world.

How can we make 10x denser batteries?

While not directly software related, batteries impact everything. I'm not talking about our usual 5%-a-year improvements. I mean 10x better. This requires fundamentally new technology. It may seem mundane, but massively denser batteries change everything. It becomes possible to make power in one part of the country (say, in a protected nuclear plant in the desert) and literally ship the power to its destination in trucks.

Want a flying car? 10x batteries make it possible. Modern sensors and CPUs make self-flying cars possible; we just need 10x power density to make a flight longer than a few minutes. Everything is affected by power density: cars, smart homes, super fast rail, electric supersonic airplanes. Want to save the environment? 10x better batteries do it. Give the world clean water? 10x better batteries.

How can we put an MRI in your shower?

This may sound like a bit of an odd request, but it's another technology that would change the way we live. Many cancers can't be detected until they are big enough to have already caused serious harm. A tiny spot of cancer is hard to find in a full body scan, even with computer assisted image recognition. But imagine you could have a scan of your body taken every day. A computer could easily recognize if a tiny spot has grown bigger over the course of a week, and pinpoint the exact location it started. The solution to cancer, and so many other diseases, is early detection through constant monitoring.

Whenever you see your doctor with an ailment, he goes through a list of symptoms and orders a few tests. The end result is a diagnosis with a certain probability of being true; often a probability far lower than you might expect. Constant full body monitoring changes this equation. Feeling a pain in your side? Looking through day-by-day stats from the last year can easily tell if it's a sign of kidney stones or just the bad pizza you ate last night.

Constant monitoring only works if it is cheap, so cheap that you can afford to do it every day, and automatic so that you actually do it every day. One solution? An MRI equivalent built into your shower. When you take a shower a voice asks you to stand still for 10 seconds while it performs a quick scan. That's it. One scan, every day, could eliminate endless diseases.

Better Questions

As I said at the start, these are just ideas. They aren't prognostications of a future ten years from now. They are simply things we should be working on instead of building the next social network for sharing clips and watching ads. If you want to change the world, ask some bigger questions.

World of Ptavvs, Larry Niven, 188pp, 1966

If you are a scifi reader but don't know Larry Niven then you aren't reading this blog because you don't exist. However, in the off chance that you slipped in from an alternate dimension where Larry Niven never took up writing, then allow me to explain. Larry Niven is known for hard-SF writing, mainly in the 70s and 80s, though he is still writing today. Unlike contemporary SF that moved on to cyberpunk, steampunk, and singularity visions, Niven still writes about humans exploring the cosmos. He is also quite a stickler for scientific accuracy, to the extent he has created an entire universe called "Known Space" with a history extending from the early 21st century to the 32nd.

Larry Niven is probably best known for the Ringworld series, about adventures on a giant ring the diameter of Earth's orbit circling an alien star. The book I just finished, World of Ptavvs, is set in the same universe but much earlier. It also happens to be his first full novel, expanded from several short stories. Given that he was still early in his craft, I was impressed that it was so interesting. Clearly he got better, but even this early work is quite entertaining.

World of Ptavvs is a short novel (almost a novella) about humanity's first-ish contact with an alien species, under the most strange and amusing circumstances. Kzanol, a greedy alien with the power of mind control, is accidentally stuck in a stasis field which freezes him in time. Two billion years ago he crashes on an empty planet that eventually becomes the Earth of today. In the mid 21st century humans find the frozen alien at the bottom of the ocean and attempt mental contact using a man with slight telepathic abilities (he practices on dolphins, who are by this time known to be intelligent). Due to lack of planning on humanity's part, they accidentally free the alien, and in the process the alien imprints his memories on the telepath. So now we have *two* rampaging aliens from billions of years ago bent on conquering the earth.

I know, it sounds super cheesy, but it's actually a very entertaining story with some cool twists. Throw in a team from the ARM (CIA of the future), some angry asteroid miners, and a few stolen spaceships and you get a rockin' adventure. Best of all it's *short*. Less than 200 pages. In an age when many authors feel the need to produce thousand-page tomes it's nice to read a book that is no longer than it needs to be.

So, should you read it? Yes!


I'm very happy to announce that my interactive book, HTML Canvas: A Travelogue, is now available for the iPad. It includes the same great content as the webOS version: a complete introduction to HTML Canvas with interactive examples, for just $4.99. Several bugs have been fixed in this build, which are coming soon to webOS as well.

The app isn't retina tuned yet (since I don't have an iPad 3 yet) but since I'm using web technologies for everything it should look pretty good. Only a few of the screenshots will likely need updating.

If you find any bugs or would like me to add more content, please let me know. Updates for all platforms will always be free.

Buy for Apple iPad

Buy for webOS TouchPad

Last night I declared Internet Bankruptcy.

As has grown increasingly clear this year, I simply don't have the time to keep up with the info-deluge that constitutes my morning routine: coffee. Email. RSS feeds. Facebook. Twitter. More email yet again. Second coffee.

Bedtime is a mirror image: Brush teeth. News feeds. If sleep has not arrived then I turn to custom news apps for Reddit, The Verge, Hacker News, and more. It never ends.

Always behind, always frustrated. Spending hours keeping up with news in a rapidly changing industry. I need to stop. Keeping up is a futile exercise. There is simply too much. And my time is better spent keeping the house clean, playing with Jesse, or, you know, actually working. Time to change.

I've declared internet bankruptcy. It's the only way. I hope you will forgive me. While I'm not completely unsubscribing to everything, I've greatly pruned back my social networking and news reading accounts. Here's the damage:

Facebook, Twitter and Google+ are still present, but I massively pruned back the friends/followers/circles/etc.

News feeds. The bane of my existence. Nuke it from orbit. It's the only way to be sure.

I unsubscribed from all of my RSS feeds on Google Reader, about 500ish feeds. I'm sure I'll be slowly adding some important ones back over the coming year, but I have to start fresh.

Google Plus: Somehow I had two Google accounts. If you were following me on G+, you'll need to switch to my main account. I actually deleted the gmail account; in the process discovering just how much data Google tracks about me across their many properties. Now they only have half as much.

Pinterest: I can't figure out how to actually delete my account, so I deleted the app and turned off email notifications.

LinkedIn: Oddly, no changes. They give me the right amount of information without overloading my inbox.

Apps: Deleted the many site specific apps on my phone, including The Verge, Blue Alien (Reddit), and Hacker News. I left MagPi and The Magazine because they are only updated every few weeks.

I think that's it. At least for now. We'll see what the next few days hold. If I unfriended/unfollowed/untracked you, please don't take it personally. I simply have to shrink my daily e-footprint. We thank you for your support.

My name is Josh Marinacci. I used to be in the webOS Developer Relations team at Palm. I'm auctioning off all of my webOS devices and swag that I've collected over the years, including some very rare items, to help my brother in law fight cancer.

To catch you up: Kevin is my brother in law. He has been fighting stage four melanoma for the past two years. If you know anything about skin cancer then you know that to still be alive after 2 years is incredible. They long ago exhausted standard treatments so he and my sister have traveled around the country to every research program they can find. After some close calls over the summer Kevin had a very encouraging visit with researchers at UCLA. They have switched his medication to a combination of two experimental drugs which are shown to have fewer side effects and greater results. Soon he will be going on full time medical leave to let his immune system fully recover from the three previous treatments and spend time with their two little children. My wife and toddler are flying to Atlanta to see them in a few weeks, ready to help them start this next phase. You can follow their progress on their website.

This auction is part of our efforts to raise $10,000 to help cover his loss of income when he quits his job. So far we've already raised $1,800, but there's a long way to go still. The auction just started. You can check out all the merchandise and start bidding at the 32 Auctions site.

the online auction

The auction will continue until Sunday, the 22nd at 6PM, pacific time. We've got over 40 items for sale, including some rare PalmOS devices, unique artwork, vintage T-Shirts, some great webOS devices, and even the unicorn device, a brand new Palm Foleo. I want to give special thanks to the many members of the webOS team who donated items from their personal collections for this auction. I also want to thank webOS Nations, the number one webOS community site, for donating eight snazzy coffee mugs.

To keep things simple, shipping will be $10/item, or $5/item if you get multiple items. Obviously it makes financial sense to buy at least two things. ;) When the auction ends please include your shipping address with your payment.

One final note. Thank you for signing up to this mailing list. It's been a really great way to keep in touch with people. As promised, the list will be destroyed after the auction ends.

Thanks, and good luck bidding. Sincerely Josh

Though engineering has always been a natural fit for my career, I have long wanted to be an artist. I suppose I'm not truly sure what it means to be an artist, but I still dabble, play, and learn. That seems to be enough.

Photography in particular has always grabbed me. The instant gratification of creating a photo 'right now', combined with the ability to document the world as it (mostly) was, makes it an amazing tool for an artist. In the digital age you can pick up and learn a lot about photography very quickly. With that in mind I decided to paw through my best photos of 2010 to see how I've developed as an artist over the year. Before I get into the photos I'll share my unexpected insight.

Photography is hard. Really *@^% hard! The learning curve from novice to intermediate is fast and gentle. You can learn the mechanics of photography quite easily, especially with an engineering background like mine and a decent camera. However, going from an intermediate photographer to expert or even merely competent is hard. Much harder than I thought. The learning curve is only a tiny bit steeper, but far longer than I ever imagined. As I look over the year of photographs I realize my standards have gotten higher but my execution not as much. It's going to be a long road. More analysis after the break but first: on to the photos!

I'm going to go through the photos in chronological order, since I think it best shows my progression as a photographer.

Nori on porch railing

This is one of those lucky shots that was made better with post processing. Being nearly perfectly black, Nori is incredibly difficult to capture. I also think he doesn't like to be photographed in general unless he's asleep (which is most of the time). You can see his surly "what'choo lookin' at?" expression in this photo. Converting to black and white and messing with contrast & exposure let me bring out a few details in Nori's face and white tusk of a whisker, while focusing the rest of the image on the geometric shape of the railing.

Church Campout

I do all of my artistic photography on a Nikon D50 with one of two lenses, but this photo was taken on my wife's point and shoot. They say the best camera is the one you have with you, and this is all I had with me on a hot summer evening. This image is of a campfire, created after about 4.8 billion experimental shots of waving the camera around.

I learned that while photography will always have an element of randomness and uncertainty, especially with stochastic subjects like fire, the artistry comes from playing with what you can control. In this case I could control the exposure length and the motion of my hands while just letting the fire do its thing.

NBC Studios, NYC

There's something extremely classic about this sign. It feels as if it's been unchanged for 80 years (which is quite possible). Only B&W felt like it could do the sign justice. Just the sign; no more, no less.

Rockefeller Center, west side

Rockefeller Center, front

I love photographing the stately elegance of older skyscrapers. They just simply don't make'm like this anymore. They are also tricky to capture without special perspective-control lenses (which they also don't often make anymore). My two shots of 30 Rock came from wandering around the center for a while during our architecture tour (which I highly recommend, BTW). The time of day also helps. This was around 4:00 PM in late October.

Chrysler Building, sunset.

Making skylines from the top of other buildings look interesting is quite difficult. There's just so much stuff going on that the camera can't possibly capture it and any detail is lost in the jumble. I have probably 50 shots from the Top of the Rock; all equally boring. But this photo sort of took itself. Just as we were leaving I saw the Chrysler building lit up from the setting western sun. Cropping out the bottom made the building look like a noble structure, gazing into the setting sun after a day well spent.

Street Corner at Grand Central Station

They say that New York only truly comes alive at night. How completely true. You could walk around the city for hours taking photos, each one unique and different. It really is a beautiful city.

Chandelier at Grand Central Station

Wow. The inside of Grand Central Station is even more beautiful than the outside. The detail of the chandelier shows just how skilled their artisans were. We can still make buildings as large and expensive as in the old days, but not as grand.

Heart Bokeh of Christmas House Lights

This last photo I took yesterday after unwrapping my newest photography gift, a Bokeh Kit. I quickly learned that shaped bokehs can easily be overdone, but in the context of abstract light they can find a place. What I love about this photo is the extra-bright heart in the center from the house light. Brighter than the Christmas lights, but still similar. I think I'm going to have fun with this kit.

Closing Thoughts

I have two thoughts after looking over the year's photos. First, almost 75% of my photos came from a single day, a trip to New York City in October. While trips are commonly the source of great images because you see things you don't normally see, 75% is still a horrible ratio. Clearly I haven't been taking enough photos this year.

Second, I think I have become a slightly better photographer, but not much. Practice clearly makes perfect. While my technical skills are about the same, I think I've made some definite improvement on the content side. I'm becoming more comfortable with abstract forms and starting to see a flow in the photos. Most have an off-center focal point that leads your eye through the rest of the picture. All good stuff. I think the biggest thing holding me back is that I still hate to photograph people. I'm not yet sure why that is, but it's clearly the thing I must work on in 2011.

Thanks for making it this far through my retrospective. 2011 promises to be a fun year, with some amazing things to share. Thank you.

I know IFRAME, I know. We've had a good run. You've tried hard and back in the day you were somethin' special. The way you could bring in content from any domain was nothing short of amazing, and you certainly took out the trash when you bumped off FRAMESET. But that was then and this is now. A lot has changed in the past few years. CSS and AJAX are really hitting their stride and you just can't hack it. I'm willing to overlook a few margin bugs, but this?! Well this is simply the last straw.

We took a big chance on you. It's time to step up to the mobile platform and you blew it. We're making ebooks with fantastic interactive content. Tons of pages of web content that need to be pulled into a beautiful navigation frame. When people think of embedding a page they think of you: the mutha'effin' IFRAME! The 'I' used to stand for 'Inline', but now it just stands for 'Idiot'. You just blew it.

First you blew it on iOS with that whole two fingered scrolling thing. Seriously? Two fingers? WTF were you thinking?! Then on webOS you rendered perfectly thanks to an embedded web view back end. Sounds great except you forgot one tiny detail: you don't support any @#$! touch events. Your parent window can handle them just fine but you think you are too good for them. You can dance and sing but no one can touch you. You know what the TouchPad is? It's a touch device. Touch is even in the damn name. What the hell were you thinking? And don't even get me started on Android.

And finally, iOS 5. Your one last chance. Stylable with CSS. Proper event bubbling so that single touches work. Should be great. Your swan song. But no! You always expand to fit the content. Always. 100% of the time. Useless. After all we've been through together. After all the good times and bad, this is how you repay me?!

IFRAME. You are dead to me.

(ps, just to show you we are serious, we've already built a beautiful prototype with Enyo 2.0, Ajax, and some clever CSS. Half the work for twice the power. Good riddance to you.)
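For the curious, the iframe-free technique is roughly this. A sketch only: the `embedPage` helper is a made-up name for illustration, not our actual Enyo code.

```javascript
// Sketch of replacing an IFRAME with an XHR fetch into a scrollable DIV.
// embedPage is a hypothetical helper, not part of Enyo or our prototype.
function embedPage(url, containerId) {
    var container = document.getElementById(containerId);
    // CSS overflow gives native one-finger touch scrolling,
    // which IFRAME famously fails to do on mobile.
    container.style.overflow = 'auto';
    container.style.webkitOverflowScrolling = 'touch';
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.onload = function() {
        // Inject the fetched markup directly into the page.
        container.innerHTML = xhr.responseText;
    };
    xhr.send();
}
```

Because the content lives in the parent document, touch events, CSS styling, and sizing all just work.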


Over the years I've worked on a lot of open source projects. I've also worked on quite a few commercial projects. What a lot of them have in common is the need to market themselves to developers, but without any marketing budget. When I worked on JFXStudio my budget was $20 a month from my own pocket.

What you are about to read is an essay form of a presentation I gave at OSCON 2010 a few weeks ago in Portland. I considered just publishing the slides, but it's really not as useful without the spoken words that go with them. Since I consider this such an important topic I rewrote it as an essay that I hope will help open source project leaders market their projects successfully. Enjoy! (photo courtesy of Flickr user EraPhernalia Vintage)

Why should you listen to me?

Before I dive into the essay, let me tell you a quick bit about me and try to convince you why anything I have to say is useful. (We'll see how it goes. If I don't see you at the end I'll know I failed. :)

My name is Josh Marinacci. I worked at Sun for five years and now work on the Developer Relations team at Palm. Over the past decade I've worked on both commercial and open source projects, often with very little marketing budget because they are developer focused.

On the open source side I've founded or been involved with the Java library Flying Saucer (xhtmlrenderer), the widget systems AB5k / Glossitope / WidgetFX, JFXtras, the JFXStudio community site, and SwingLabs/SwingX, and I've been a community leader for years.

Some commercial projects you might have heard of include Swing, NetBeans, JavaFX, the Java Store, and now of course Palm's webOS. While not all of those are still shipping products (I suspect the Java Store was canned), they were all projects that needed to attract developers.

The core of all of this is marketing. Now, when you saw the word marketing in the title of this essay you probably came expecting me to show you some cheap ways to make users aware of your project. While I am going to teach you about that, I'm going to teach you some marketing first. We often hear the word marketing used when really someone means advertising. (A quick aside: this may or may not be the strict business school definition of marketing, but I've found it to be practical and useful for our purposes.)

What are we marketing?

Before I can explain marketing, let’s consider what we are marketing. Throughout this talk I'm going to use the word product over and over. You may say: but we aren't selling anything, we are open source. True, you aren't selling a product for money. But you do have a product. It's the software you've created. And you are selling it. Customers are buying it from you, but they are buying it with their time and their code and their feedback rather than dollars. Even though it's not a monetary transaction all of the principles of marketing apply as if you were a commercial company.

You see marketing really has multiple pieces:
  • Marketing research: who is your market? What do they want?
  • Product design: what does your product need to do to fit the needs of that market?
  • Advertising: making the world aware of your product.
  • Customer feedback: using feedback from your customers (and potential customers) to improve the first three.

You need to know who your product is intended for: your market. You need to make sure your product fits the needs of this market (no less and no more). Then you need to advertise. Advertising isn't a dirty word. Advertising is simply making your target market aware of your product. There are lots of ways to do this that don't involve buying 30-second spots on TV, which we'll cover later in this talk. And finally you need a way to get feedback from your customers, or potential customers, and pipe this information into improvements of your research, design, and advertising. If you take a disciplined approach to marketing then your project will be a success. (Note: promise not legally binding. Not valid in some states. Mind the gap.)

Who is your Market?

This is the most important thing you must figure out, possibly before you even start writing code, because it will shape everything you do.

Now, open source software is often created to scratch an itch, so often you are the target market. That's perfectly valid. The question is whether that market contains more people than just you. Suppose you create an Apache plugin to meet your needs as a website administrator; then your market might be other website administrators. With some thinking and searching you can figure out who your market is, and it may lead you to some users who are very different from yourself.

For example, when I created Flying Saucer I originally imagined it would be for people who wanted to put simple HTML content inside of desktop Java applications. However, I lucked into a new market. I added a PDF export option because I found a good PDF library and thought it would be cool. After I blogged about it I discovered that there were a lot of people who wanted to auto-generate PDF forms and reports from HTML because it was far easier than other code-based PDF solutions. Once we knew this we started adding features to make it better for those kinds of users, things like the CSS 3 extensions for pagination. This was a case of customer feedback leading into our market research and then back into product design.

Defining your product

What is your project about? You need a brief and concise description of your project. If you can't summarize your project in a paragraph, or better yet a single sentence, then you haven't thought through your project enough and should go back to your market research and product design. Even a huge, mature project like Tomcat still has enough focus to describe it briefly.

You should create a description of your project that is one page long, then a shorter one that's one paragraph, then a one sentence version. This will not only help you narrow your focus to the most valuable features, but you'll be reusing these descriptions over and over when you start your advertising.

Here's a simple example:


This screenshot is from the Flying Saucer home page. In a single paragraph it tells the new reader exactly what they need to know: the library renders XML and XHTML content with CSS, and it's 100% Java. Right away the visitor can determine if they should investigate further or move on to look for a different project. This not only keeps them from wasting their time, but also keeps them from wasting your time.

Your Website

To effectively market your project you need a coherent message, and that starts with having a single place for people to learn about your project. It's the 21st century, so that means you need a website.

A Google Code site is not a website.


Other than maybe a short text blurb this site contains no information about your project. It's a great collection of tools for existing contributors but it doesn't provide anything that would make someone want to come back to your website again, much less contribute to your project.

This is a website:


The website is visually interesting and it gives users a taste of the application. This particular site is for a project I created called MaiTai. MaiTai is a tool for creating audio-visual mashups, so I chose a dark color scheme with splashes of intense, seductive colors.

This page first draws you in with the design and some pictures on the side. Then it has a clearly defined structure of news, about, and a gallery of samples. Most importantly, it tells you what the project is about through the tagline at the top and a summary snippet. This is the most important part of your message. This is where you start to reuse the descriptions I had you create above.

Building your site

Keep your site simple. I recommend WordPress or one of the other great blogging / simple CMS platforms out there. As a Java guy I hate the idea of running my site on PHP, but I almost always choose WordPress because it's simple and it has a large, rich community of themes, plugins, and support. It gets the job done with as little fuss as possible. Now, I'd like to think I'm a good developer. Could I build my own website by hand? Sure. But you know what? My goal here is to produce an open source project, not a website. My time is better spent working on the marketing and project code, not mucking with web tools.

Some of you may still think: WordPress? Really? Will your site scale as you add users? Will it have the features you need? Probably not. But that's a problem you want to have. It's not something you need to worry about right now. Worry about it once you have tons of users, and then you'll have some help building the replacement site. Don't shave the yak. Get a site up ASAP and get on with your project.

And this goes double for the visual design of your site. If you are a programmer like me then you aren't a great designer, or at least it will take you a long time to design something good. Instead, choose one of the hundreds of great themes already created for WordPress and customize it. That's what I did: I spent a half hour looking through themes, picked one, then added the splash image at the top. Done. Get back to coding.

Basic Advertising

Okay, so now you've got your target market, a first version of your code, and a website. Now you need some real users.

Step zero: be polite.

The next few weeks will be equal parts exciting and frustrating. Always be polite and considerate. Choose your words carefully and gently. Actually, pretty much throughout life I recommend this. If you are always nice to people and smile then wonderful things will happen. I promise.

Step one: don't get outside users yet.

There are probably still major bugs in your code or website that you've overlooked because you are too close. Get some other developer friends (yes, developers have friends) to look over the site and try the software. Go back and fix what they find. Now go get some users.

Step two: draft a press release.

Remember that one paragraph description I had you create? That goes here, followed by the longer version. That's all a press release is. Yes, I know. Press releases are the bane of modern news, but they exist for a reason. They are a concise description of what you've created, spelled out so clearly that even the dumbest chimp with a newsblog can't get it wrong. Your goal is to get your message out.

Step three: submit your press release to the right people.

And by right people I mean websites and news blogs that are likely to be interested in your project. Is this a game library? There are eight dozen sites for game programmers. Is this a Java project? Then the first people you should email are the editors of the Java news sites. Editors are always looking for new content on a tight deadline, and your well-written press release makes their job that much easier. When you email these people you don't want to come across as spam. Write a personal email to the editor politely explaining that you've been working on a cool new open source project and you'd really appreciate it if they'd mention it and link to your site. Then paste the press release after saying thank you. That's it. That's 90% of launching a project.

Step four: fire up the links.

After getting a little press coverage, start submitting links to the press articles to the various aggregators like Digg and Reddit. You could submit your own press release, but I've found that press coverage of your project is considered more authentic than a link to your own blog, and so it will probably get rated up higher. You should also link to your project from Twitter and Facebook. Your friends are your friends, so they will likely spread the word just because they like you. (Which is why you should always be polite: it gets you more great friends.)

Step five: rinse and repeat

Now that you have some word out there and a connection with a few interested people, you need to do it again. That means you need something new to talk about. That's why I like using blogging platforms for the website. They are built with the idea of continuous new content in mind. Every time you work on a new release, blog about it with a list of what's new and cool. Having trouble building a new feature? Blog about it. Did you get mentioned in the news? Blog about it. Did another project pop up that competes with you? Yes, blog about it. Anything to create new content of interest. Then take a subset of this news and tweet about it. When you have a big new release, send it to the news editors. Keep the cycle going.


Promotions

Every now and then you want to do something special to build interest, usually around a new release of your project. This is called a promotion. There are a bunch of fun things you can do that are simple and cheap.

Have a contest.

Create a contest that is relevant to your project. For JavaFX we ran a contest where you had to code up something cool that fit a theme, using only 30 lines of code, in just one month. Keeping the contest constrained prevents it from becoming unwieldy, and if it's short then it gives you an opportunity to have another contest again soon.

The contest should have a prize, but a cheap one. You should have a prize because you want the contest to feel real; there should be a sense of accomplishment for the winners. But you should keep it cheap so that everyone understands this is really just for fun (plus you have a shoestring budget, remember!). A t-shirt or mug is good, but I'm a big fan of Amazon gift certificates because they can be used anywhere in the world and don't require expensive postage.

Remember that open source is often a status economy. When announcing the winners, recognition is the most important thing. Plus, announcing the winners gives you one more thing to blog about. :)

Write some articles.

As your project matures there will be a need for longer-form documentation and use cases. Lots of news sites want longer-form articles, and some will even pay for them! Kill two birds with one stone by writing a technical article on your project. It can be an intro article or cover how to do something specific and useful with your project, perhaps by combining it with another project. My article on using Flying Saucer with iText was one of my most heavily read articles, and it has continued to serve as great documentation for new project visitors.

Better integrate social networking tools.

When you post to your blog you can cross-post using social networking aggregators. Even better, you can let your users spread the word for you. Install a 'tweet this post' plugin on your blog, and create a Facebook fan page. Fan pages were designed exactly for this case: letting people follow the news of a non-person like a software project. It also means your mom doesn't have to be spammed with updates on your open source code (unless she wants to). Creating a fan page takes just a few seconds and can provide a lot of valuable exposure.


Conferences

Eventually you will want to talk about your project at a conference, perhaps an Open Source Convention of some sort. :) This means you must get comfortable with talking to a large audience. Don't worry: by this point you'll be so comfortable talking about your project that you'll be overjoyed to have more people listen to you.

When you write a session proposal, don't make it sound like a vendor pitch. People come to a conference to learn how to do something or how to solve a problem. If your project helps them with that, great, but your project shouldn't be the focus. Much like with the articles, pitch a session that uses your project to help solve real-world problems. Make sure to spell out what the attendees will learn from your session.

Even if you aren't talking about your project, attending a conference gives you an opportunity to talk about your project to new people. And if you give a talk on another topic you can always mention the project in the 'about me' section and give people an opportunity to come up afterwards and ask questions.

Quick & Easy

Get some cheap and fun business cards. Biz cards are insanely cheap to get these days. Make them something fun; remember, you aren't a for-profit business, so you can afford to be a bit silly if you want. The key is having a card that someone can use to get more information, so it *must* have your website and preferably direct contact info for the person who gave them the card (probably you).

Use your email signature. This is a small bit of advertising space. Put a link and a one-sentence description of your project in your email sig. (Check with your boss before using it in professional communication.)

Self-printed t-shirts. You probably don't have the budget to print up a bunch of t-shirts, but you can use a site like Printfection to create shirts that others can pay to have printed for themselves.

Throw a party. Yes, a party. If you've got a big release coming up then a party is a great way to celebrate it. It doesn't have to be big; it can just be you and some friends getting together at a bar. All that takes is a tweet with the time and place. Be sure to announce it beforehand, encourage locals to attend, and take pictures to post on the blog afterwards. Always be looking for new blog content. Oh, and have some way for people to recognize you at the bar, like a silly hat or project t-shirt.

Marketing Checklist

If you already have an existing project, here's a checklist of things to make sure you've got covered:

  • Description: one-sentence, one-paragraph, and one-page descriptions of your project.
  • Website & code site: blog, mailing list / forum, version control, issue tracker.
  • Social networking: Twitter, Facebook fan page, LinkedIn group.
  • A product roadmap and a marketing roadmap listing the promotions you are planning.


Marketing your open source project doesn't have to be hard, or expensive. It just takes a bit of resourcefulness and passion. Most importantly, be patient. It will take time to build up your developer audience. Keep at it and they will come. Good luck with all of your projects!


Thank you for helping me Bridge The Gap, a webOS auction to help my brother-in-law fight cancer. Your support means more to me than I can say. I'm constantly amazed by the passion of the webOS community. I have some new updates to share with you.

First of all, we have a date! The auction will begin online on Sunday, September 15th and continue until Sunday the 22nd. Our goal is to raise $10,000, and we have two full weeks to spread the word before the auction starts.

I am also excited to let you know about some new auction items that were just donated. An old friend from webOS Developer Relations (think puppies) has donated a never-used AT&T Pre3. Combined with a few new Pre2s recently donated, we've got plenty of phones to auction off.

4G TouchPads x 2

Next, two 4G TouchPads. They are each brand new, in the box, with 32GB of storage. Since they are the 4G model, you should be able to just pop in a SIM and have them work (though I have not tested this personally).

New Palm VII

A sneaky person also donated something you have probably never seen. A brand new, still in the shrink wrap, Palm VII! The future of wireless internet, circa 1999, can be yours!

Palm Pre Art

And finally, for the cherry on top, this sneaky individual has also obtained some rare artwork that was inside the Palm headquarters. Relive the dream of the Palm Pre in your very own home with this beautiful large canvas print.

Thank you again for your support. Please share this email with others. Tweet it. Post it. (G Plus it?). We need to get the word out to as many people as possible. Tell everyone to join the list so they will be notified when the event starts.



It's the fashionable thing to speculate on future Apple products. One idea that continues to get traction is the Apple TV, a complete TV set with integrated Apple-y features. Supposedly to be announced this fall, after failing to appear at *any* event for the past three years, it will revolutionize the TV market, cure global warming, and cause puppies and kittens to slide down rainbows in joy.

I don't buy it. I don't think Apple will make a TV. Televisions are low-margin devices with high capital costs. Most of the current manufacturers barely break even. Furthermore, the market is saturated. Pretty much anyone in the rich world who wants a TV has one. Apple needs *growth* opportunities. The last thing they need is a new product with an upgrade cycle at least *twice* as long as desktop computers. It doesn't make sense.

All that said, speculating about products is a useful mental exercise. It sharpens the mind and helps you focus when working on *real* products. So here we go:

If Apple Made a TV, How Would They Do It?

First let's take care of the exterior. In Apple fashion it would be pretty and slender, with either a nice brushed aluminum frame like the current iMac or a nearly invisible bezel. I suspect they would encourage wall mounting so the TV appears to just float. The current Apple set-top box would be integrated, as would network connections, USB, etc. Nothing to plug in except the power cord.

Next, we can assume the current Apple TV will become the interface for the whole device: a single remote with on-screen controls for everything. While I love my Roku, I hate having to use one remote for the interface and a second for power and volume.

Third, they will probably add a TV app store. I don't think it will feature much in the way of games and traditional apps. Rather, much like the Roku, there will be apps for each channel or service. The current Apple TV essentially has this now with the Netflix and HBO apps. The only difference would be opening the store up to more 3rd party devs.

I think we can assume this will be another client of your digital hub. Apple already wants us to put all of our music, videos, and photos into its cloud. So far everything I've described can be done with the current Apple TV set-top box. So why build a TV? Again, I don't think they will; but if they did, they would need to add something to the experience beyond simply integrating the set-top box.

First, a camera for FaceTime. Better yet, four of them, one in each corner of the screen. Four cameras would give you a wide field of view (with 3D capture as a bonus) that can track fast-moving toddlers as they move around the living room. This is perfect for video chatting with the grandparents. Furthermore, there are modern (read: CPU-intensive) vision algorithms that can synthesize a single image from multiple cameras. Right now the camera is always off to the side of the screen, so your eyes never meet when you look at the person on the other end. With these algorithms the Apple TV could create a synthetic image of you as if the camera was right in the middle of the TV. Combined with the wide field of view and a few software tricks, we could finally have video phones done right. It would feel like the other person is right on the other side of your living room. It could even create parallax effects if you move around the room. Video calls are a clear differentiator between the Apple TV and regular TVs, and something that a set-top box alone couldn't do. I'm not sure it's enough to make the sale, though. What else?

How about the WWDC announcement of HomeKit? An Apple TV sure sounds like a good home hub for smart automation accessories. If you outfit your house with smart locks, door cameras, security systems, and air conditioners, I can see the TV being a nice place to see the overview. Imagine someone comes to the door while you are watching a show. The show scales down to a corner while a security camera view appears on the rest of the screen. You can choose to respond or pretend you aren't home. If it's the UPS guy, you can ask him to leave the package at the front door.

I imagine the integration could go further. Apple also announced HealthKit. The Apple TV becomes a big-screen interface into your cloud of Apple stuff, including your health data. What happens if you combine wearable sensors with an Apple TV? See a live map of people in the house, à la Harry Potter's Marauder's Map. An exercise app could take you through a morning routine, using both the cameras and a FitBit to measure your vitals. A TV really could become another useful screen in your home, something more than just a video portal.

I think the idea has a lot of potential. However, other than a camera and microphones, almost everything I've detailed above could be done with a beefed-up standalone Apple TV set-top box. I still don't think a full TV makes sense.

Animation is just moving something over time. The rate at which the something moves is defined by a function called an easing equation or interpolation function. It is these equations which make something move slowly at the start and speed up, or slow down near the end. These equations give animation a more life like feel. The most common set of easing equations come from Robert Penner's book and webpage.

Penner created a very complete set of equations, including some fun ones like 'spring' and 'bounce'. Unfortunately, their formulation isn't the best. Here is one of them in JavaScript:
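The snippet below is reconstructed from the common jQuery easing plugin formulation of Penner's equations, so treat the exact signature as that plugin's convention: x is unused, t is elapsed time, b the start value, c the change in value, and d the duration.

```javascript
// Penner-style easeOutBounce in the jQuery easing plugin style.
// x is unused; t = elapsed time, b = start, c = change, d = duration.
function easeOutBounce(x, t, b, c, d) {
    if ((t/=d) < (1/2.75)) {
        return c*(7.5625*t*t) + b;
    } else if (t < (2/2.75)) {
        return c*(7.5625*(t-=(1.5/2.75))*t + .75) + b;
    } else if (t < (2.5/2.75)) {
        return c*(7.5625*(t-=(2.25/2.75))*t + .9375) + b;
    } else {
        return c*(7.5625*(t-=(2.625/2.75))*t + .984375) + b;
    }
}
```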


They really obscure the meaning of the equations, making them hard to understand. I think they were designed this way for efficiency, since they were first implemented in Flash ActionScript, where speed was important on the inefficient VMs of the day. It makes them much harder to understand and extend, however. Fortunately, we can refactor them to be a lot clearer.

Let's start with the easeInCubic equation. Ignore the x parameter (I'm not sure why it's there, since it's not used). t is the current time, starting at zero. d is the duration. b and c are the starting and ending values.
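Here is the original, again reconstructed in the jQuery easing plugin style (note that in that formulation c is the change in value rather than the end value):

```javascript
// Penner's easeInCubic in the (x, t, b, c, d) style; x is unused.
// Normalizes t by d, cubes it, then scales by c and offsets by b.
function easeInCubic(x, t, b, c, d) {
    return c*(t/=d)*t*t + b;
}
```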


If we divide t by d before calling this function, then t will always be in the range of 0 to 1, and d can be left out of the function. If we define the returned version of t to also be from 0 to 1, then the actual interpolation of b to c can also be done outside of the function. Here is some code which will call the easing equation then interpolate the results:

                var t = t/d;
                t = easeInCubic(t);
                var val = b + t*(c-b);

We have moved all of the common code outside the actual easing equation. What that leaves is a value t from 0 to 1 which we must transform into a different t. Here's the new easeInCubic function.

        function easeInCubic(t) {
            return Math.pow(t,3);
        }

That is the essence of the equation. Simply raising t to the power of three. The other formulation might be slightly faster, but it's very confusing, and the speed difference today is largely irrelevant since modern VMs can easily optimize the cleaner form.
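Putting the pieces together, a caller drives the animation like this. A sketch only: the animate helper name is mine, not part of Amino's API.

```javascript
// The refactored easing: pure shaping of a 0..1 value.
function easeInCubic(t) {
    return Math.pow(t, 3);
}

// Sample an animation from b to c over duration d at time t.
// "animate" is a hypothetical helper for illustration.
function animate(t, d, b, c, easing) {
    var nt = easing(t / d);   // normalize time to 0..1, then ease it
    return b + nt * (c - b);  // interpolate between start and end
}

// Move from 0 to 100 over 2 seconds, sampled halfway in time:
var val = animate(1, 2, 0, 100, easeInCubic); // 0.5^3 * 100 = 12.5
```

All of the easing functions now share this one interpolation step, which is what makes them so easy to compose.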

Now let's transform the second one. An ease out is the same as an ease in, except in reverse. If t went from 0 to 1, then the out version will go from 1 to 0. To get this we subtract t from 1, as shown:

        function easeOutCubic(t) {
            return 1-Math.pow(1-t,3);
        }

However, this looks awfully close to the easeIn version. Rather than writing a new equation we can factor out the differences. Subtract t from 1 before passing it in, then subtract the result from one after getting the return value. The out form just invokes the in form:

function easeOutCubic(t) {
     return 1 - easeInCubic(1-t);
}

Now we can write other equations in a similarly compact form. easeInQuad and easeOutQuad go from:
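The originals, reconstructed here in the same plugin style (the (x, t, b, c, d) signatures are that plugin's convention, with c as the change in value):

```javascript
// Penner's quad equations in the (x, t, b, c, d) style; x is unused.
function easeInQuad(x, t, b, c, d) {
    return c*(t/=d)*t + b;
}
function easeOutQuad(x, t, b, c, d) {
    return -c*(t/=d)*(t-2) + b;
}
```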



function easeInQuad(t) {  return t*t; }
function easeOutQuad(t) { return 1-easeInQuad(1-t); }

Now let's consider easeInOutCubic. This one smooths both ends of the equation. In reality it just scales the easeIn to the first half of t, from 0 to 0.5, then applies an easeOut to the second half, from 0.5 to 1. Rather than this complex form:
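That complex form looks something like this (reconstructed from the common plugin version; note it measures t against half the duration):

```javascript
// Penner's easeInOutCubic: ease in to the midpoint, ease out after.
function easeInOutCubic(x, t, b, c, d) {
    if ((t/=d/2) < 1) return c/2*t*t*t + b;
    return c/2*((t-=2)*t*t + 2) + b;
}
```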


We can compose our previous functions to define it like so:

        function easeInOutCubic(t) {
            if(t < 0.5) return easeInCubic(t*2.0)/2.0;
            return 1-easeInCubic((1-t)*2)/2;
        }

Much cleaner.

Here is the original form of elastic out, which gives you a cartoon-like bouncing effect:
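It looks something like this, reconstructed from the jQuery easing plugin version (the exact guard clauses vary between copies of the code):

```javascript
// Penner's easeOutElastic in the (x, t, b, c, d) style; x is unused.
function easeOutElastic(x, t, b, c, d) {
    var s = 1.70158, p = 0, a = c;
    if (t === 0) return b;               // start of the animation
    if ((t/=d) === 1) return b + c;      // end of the animation
    if (!p) p = d * 0.3;                 // default period
    if (a < Math.abs(c)) { a = c; s = p/4; }
    else s = p/(2*Math.PI) * Math.asin(c/a);
    return a*Math.pow(2,-10*t) * Math.sin((t*d-s)*(2*Math.PI)/p) + c + b;
}
```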


and here is the reduced form:

        function easeOutElastic(t) {
            var p = 0.3;
            return Math.pow(2,-10*t) * Math.sin((t-p/4)*(2*Math.PI)/p) + 1;
        }

Moral of the story: "Math is your friend" and "always refactor".

These new equations will be included in a future release of Amino, my cross platform graphics library.

Much like a painter or musician, sometimes an idea forms in my head and will not let me rest until it comes out. Usually such an idea is an algorithm or graphics demo, but this time it came in the form of a game; a game which would not quiet until born.

To that end I present to you: Budu Budu Tiki Mon's Super Christmas Adventure, an NES style RPG playable in your browser.

I've always been a fan of NES/SNES-era RPGs, the Final Fantasy series in particular. Though fun to play, they are also easily parodied due to common tropes throughout the games. Each takes place in a different universe with different characters, but they always have a helper named Cid, a flying vehicle of some sort, ridiculous weapons, twisting plots, and backstabbing villains. As I said, ripe for parody. And what better genre to parody than Christmas movies?

Silly characters, a princess to save, amusing dialog, and great chiptunes (gratefully borrowed from 8bitpeoples). This is just a prelude to a full game. SCA contains a small overworld, two villages, and a dungeon. If there is interest, I'd love to turn it into a full game.

I want to stress that the prelude is in no way finished. The game engine is rife with bugs, some characters are missing dialog, and the graphics need further tweaks. There simply wasn't enough time to polish it before the holidays. Such is the life of a toddler's father. Please tweet me any issues and I'll fix 'em ASAP.

Have a Very Merry Christmas!

This is the second blog post in a series about Amino, a JavaScript OpenGL library for the Raspberry Pi. The first post is here.

This week we will build a digital photo frame. A Raspberry Pi is perfect for this task because it plugs directly into the back of a flat screen TV through HDMI. Just give it power and network and you are ready to go.

Last week I talked about the new Amino API built around properties. Several people commented that I didn’t say how to actually get and run Amino. Very good point! Let’s kick things off with an install-fest. These instructions assume you are running Raspbian, though pretty much any Linux distro should work.

Amino is fundamentally a Node JS library, so first you’ll need Node itself. Fortunately, installing Node is far easier than it used to be. In brief: update your system with apt-get, download and unzip Node, and add node and npm to your path. Verify the installation with npm --version. I wrote full instructions here.

Amino uses some native code, so you’ll need node-gyp and GCC. Verify GCC is installed with gcc --version, then install node-gyp with npm install -g node-gyp.

Now we can checkout and compile Amino. You’ll also need Git installed if you don’t have it.

git clone
cd aminogfx
node-gyp clean configure --OS=raspberrypi build
npm install
node build desktop
export NODE_PATH=build/desktop
node tests/examples/simple.js

This will get the Amino source, build the native parts, then build the JavaScript parts. When you run node tests/examples/simple.js you should see this:


Now let’s build a photo slideshow. The app will scan a directory for images, then loop through the photos on screen. It will slide the photos to the left using two ImageViews, one for the outgoing image and one for the incoming, then swap them. First we need to import the required modules.

var amino = require('amino.js');
var fs = require('fs');
var Group = require('amino').Group;
var ImageView = require('amino').ImageView;

Technically you could call amino.Group() instead of importing Group separately, but importing makes for less typing later on.

Now let’s check that the user specified an input directory. If so, then we can get a list of images to load.

if(process.argv.length < 3) {
    console.log("you must provide a directory to use");
    process.exit(1);
}

var dir = process.argv[2];
var filelist = fs.readdirSync(dir).map(function(file) {
    return dir+'/'+file;
});
So far this is all straightforward Node stuff. Since we are going to loop through the photos over and over again, we need an index to increment through the array. When the index reaches the end it should wrap around to the beginning, and it should handle the case where new images are added to the array. Since this is a common operation I created a utility object with a single function: next(). Each time we call next() it returns the next image, automatically wrapping around.

function CircularBuffer(arr) {
    this.arr = arr;
    this.index = -1;
    this.next = function() {
        this.index = (this.index+1)%this.arr.length;
        return this.arr[this.index];
    };
}

//wrap files in a circular buffer
var files = new CircularBuffer(filelist);
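To see the wrap-around behavior in isolation, here's a quick standalone check of CircularBuffer (the file names are made up):

```javascript
// Same CircularBuffer as above, repeated so this snippet runs on its own.
function CircularBuffer(arr) {
    this.arr = arr;
    this.index = -1;
    this.next = function() {
        this.index = (this.index + 1) % this.arr.length;
        return this.arr[this.index];
    };
}

var demo = new CircularBuffer(['a.png', 'b.png', 'c.png']);
console.log(demo.next()); // a.png
console.log(demo.next()); // b.png
console.log(demo.next()); // c.png
console.log(demo.next()); // a.png  (wrapped around)
```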

Now let's set up a scene in Amino. To make sure the threading is handled properly you must always pass a setup function to amino.start(). It will set up the graphics system and then give you a reference to the core object and a stage, which is the window you can draw in. (Technically it's the contents of the window, not the window itself.)

amino.start(function(core, stage) {

    var root = new Group();

    var sw = stage.getW();
    var sh = stage.getH();

    //create two image views
    var iv1 = new ImageView().x(0);
    var iv2 = new ImageView().x(1000);

    //add to the scene
    root.add(iv1);
    root.add(iv2);
    stage.setRoot(root);

The setup function above grabs the size of the stage and creates a Group to use as the root of the scene. Within the root it adds two image views, iv1 and iv2.

The images may not be the same size as the screen so we must scale them. However, we can only scale once we know how big the images will be. Furthermore, the image view will hold different images as we loop through them, so we really want to recalculate the scale every time a new image is set. To do this, we will watch for changes to the image property of the image view like this.

    //auto scale them with this function
    function scaleImage(img,prop,obj) {
        var scale = Math.min(sw/img.w, sh/img.h);
        obj.sx(scale).sy(scale);
    }
    //call scaleImage whenever the image property changes;;

    //load the first two images
    iv1.src(files.next());
    iv2.src(files.next());

Now that we have two images we can animate them. Sliding images to the left is as simple as animating their x property. This code animates the x position of iv1 over 3 seconds, starting at 0 and going to -sw. This will slide the image off the screen to the left.

    iv1.x.anim().from(0).to(-sw).dur(3000);

To slide the next image onto the screen we do the same thing for iv2:

    iv2.x.anim().from(sw).to(0).dur(3000);

However, once the animation is done we want to swap the images and move them back, so let’s add a then(afterAnim) function call. This will invoke afterAnim once the second animation is done. The final call in the chain is to the start() function. Until start() is called nothing will actually be animated.

    //animate out and in
    function swap() {
        iv1.x.anim().from(0).to(-sw).dur(3000).start();
        iv2.x.anim().from(sw).to(0).dur(3000)
            .then(afterAnim).start();
    }
    //kick off the loop
    swap();

The afterAnim function moves the ImageViews back to their original positions and moves the image from iv2 to iv1. Since this happens between frames the viewer will never notice anything has changed. Finally it sets the source of iv2 to the next image and calls the swap() function to loop again.

    function afterAnim() {
        //swap images and move views back in place
        iv1.x(0);
        iv2.x(sw);
        iv1.image(iv2.image());
        //load the next image and loop again
        iv2.src(files.next());
        swap();
    }

A note on something a bit subtle. The src of an image view is a string, either a URL or a file path, which refers to the image. The image property of an ImageView is the actual in-memory image. When you set src to a new value the ImageView will automatically load it, then set the image property. That’s why we added a watch function to iv1.image, not iv1.src.

Now let’s run it. The last argument is the path to a directory containing some JPGs or PNGs.

node demos/slideshow/slideshow.js demos/slideshow/images

If everything goes well you should see something like this:


By default, animations will use a cubic interpolator so the images will start moving slowly, speed up, then slow down again when they reach the end of the transition. This looks nicer than a straight linear interpolation.
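For the curious, here's a generic illustration of the difference (this is the standard cubic ease-in-out formula, not necessarily Amino's internal interpolator):

```javascript
// Linear vs. cubic ease-in-out interpolation of a time value t in [0,1].
function linear(t) { return t; }
function cubicInOut(t) {
    return t < 0.5
        ? 4 * t * t * t
        : 1 - Math.pow(-2 * t + 2, 3) / 2;
}

// Early in the transition the cubic curve moves more slowly than linear:
console.log(linear(0.25));     // 0.25
console.log(cubicInOut(0.25)); // 0.0625
```

Both curves start at 0 and end at 1, but the cubic version spends more of its time near the endpoints, which is what makes the slide look gentler.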

So that’s it. A nice smooth slideshow in about 80 lines of code. By removing comments and utility functions we could get it under 40, but this longer version is easier to read.

Here is the final complete code. It is also in the git repo under demos/slideshow.

var amino = require('amino.js');
var fs = require('fs');
var Group = require('amino').Group;
var ImageView = require('amino').ImageView;

if(process.argv.length < 3) {
    console.log("you must provide a directory to use");
    process.exit(1);
}

var dir = process.argv[2];
var filelist = fs.readdirSync(dir).map(function(file) {
    return dir+'/'+file;
});

function CircularBuffer(arr) {
    this.arr = arr;
    this.index = -1;
    this.next = function() {
        this.index = (this.index+1)%this.arr.length;
        return this.arr[this.index];
    };
}

//wrap files in a circular buffer
var files = new CircularBuffer(filelist);

amino.start(function(core, stage) {

    var root = new Group();

    var sw = stage.getW();
    var sh = stage.getH();

    //create two image views
    var iv1 = new ImageView().x(0);
    var iv2 = new ImageView().x(1000);

    //add to the scene
    root.add(iv1);
    root.add(iv2);
    stage.setRoot(root);

    //auto scale them
    function scaleImage(img,prop,obj) {
        var scale = Math.min(sw/img.w, sh/img.h);
        obj.sx(scale).sy(scale);
    };;

    //load the first two images
    iv1.src(files.next());
    iv2.src(files.next());

    //animate out and in
    function swap() {
        iv1.x.anim().from(0).to(-sw).dur(3000).start();
        iv2.x.anim().from(sw).to(0).dur(3000)
            .then(afterAnim).start();
    }
    //kick off the loop
    swap();

    function afterAnim() {
        //swap images and move views back in place
        iv1.x(0);
        iv2.x(sw);
        iv1.image(iv2.image());
        //load the next image and loop again
        iv2.src(files.next());
        swap();
    }
});

Thank you and stay tuned for more Amino examples on my blog.

Who are you?

Who am I? Or, perhaps more importantly, why should you care what I say about design? Why don't I address the former question first in hopes that you forget about the latter. I'm Josh Marinacci, a software engineer. My brief bio starts with graphics programming at an early age on an Apple IIc my mom borrowed from her school for the summer. By age 14 I was coding up my own simple overhead dungeon games on my 286 in GW-Basic and VB. I earned a bachelors in Computer Science at Georgia Tech (class of 1997), specializing in GVU: Graphics, Visualization, and Usability. Georgia Tech was one of the first universities to have such a program, something I consider a great asset. It was there that I first learned Java, thanks to my favorite TA, Ian Smith, getting me a copy of the early betas; and since then I've coded almost exclusively for the Java platform. After graduation I spent about 9 months as an intern at Xerox PARC (thanks again to my favorite TA), which I would definitely consider a formative place in my career. While I was just a code wrangler for the researchers, I had an amazing opportunity to see early versions of e-paper, blue lasers, MEMS (the tech behind things like on-chip accelerometers and embedded compasses), and computing embedded into non-traditional devices (from skyscraper i-beams to stuffed animals). After PARC I worked in a few startups before the Dotcom bust, then spent a few years in large companies working on UIs for enterprise software. Growing tired of JSP programming, I started writing articles on a variety of topics, focused mostly on GUI programming. In 2005 I co-wrote the book Swing Hacks with Chris Adamson for O'Reilly, focusing on the cool ways you can push Swing to the limits. Swing Hacks led to my position at Sun, where I have worked in a variety of positions, including improvements to the Windows Look and Feel for Swing and the NetBeans GUI builder (Matisse), and then joined the JavaFX team as soon as it was announced in 2007.
Since then I've worked on a visual design tool, the JavaFX Doc tool, personally wrote about half of the launch samples, and endless JavaFX demos. My current position is leading the desktop client for the Java Store, which we hope will unlock the doors for millions of Java developers to make money selling their dream apps to the desktop, mobile devices, and TVs. Throughout it all I've focused on graphics and usability, with a renewed interest in scenegraphs, 3D transitions, and how to make user interfaces better.

What do you know about design?

So now to the second question, why should you care what I say about design? Well, I'm not a designer at all. I am a software engineer through and through. I know some tips and tricks for building UIs but I'm not a graphic designer, and I'm certainly not an artist. I couldn't draw my way out of a paper bag. What I *do* have is a passion for great user experiences. Software has come a long way in the past 30 years since the dawn of graphical user interfaces, but we still have such a long way to go. Most software is still horrible compared to real-world analogs. No one asks how to use a new car. No one is worried about filing cabinets spontaneously losing data. We buy cars with sexy outlines, but (until recently) we don't buy desktop software with outlines that a designer slaved over for 3 years. Despite these failings, software is an increasing part of regular life. Soon almost every facet of your existence will be mediated in some way by software. This makes it our responsibility as software engineers to create products which protect, serve, and delight our users. If we can make even a small improvement in the usability of software through this website, then the results will be well worth the effort.

As I write this I'm flying home from New York City where we (Palm) threw an event known as the webOS Developer Day NYC. Really, we should have called it a party, due to the extreme fun and exhaustion we all experienced. But this post isn't really about the event, it's about the webOS developer community.

I feel very lucky to work on a platform with such a willing and open community around it. It reminds me of my earlier days in the Java world. Lots of people, young and old, some developers, some designers, and some just curious. All of them brought together by their passion for webOS and a desire to share that passion with others.

Though the event was a success and an improvement over the last one by every metric: attendance, location, food, bowling (yes, awesome bowling!), I always worry it's not enough. Was the content good enough? Did we have a well-rounded schedule? Would I have technical gaffes that would embarrass me? Fortunately, everyone was gracious enough to overlook my laptop meltdown on the first day.

Doing what we do is hard. Not only are we in the rocky early days of a growing industry, but we have been competing with companies many times the size of Palm. Building and testing a new OS, combined with the immense amount of work ahead of us, can be truly exhausting.

Perhaps more importantly, having worked on many products I was passionate about over the years, I know how easy it is to become too emotionally involved with a product. I think this is true of many developers. We live and die by every product review, every negative Facebook comment, and every metric that doesn't live up to our tough standards. It's hard and exhausting work, but one thing makes it worth it: the community.

Spending just 48 hours with our passionate webOS community has reenergized me. I saw a developer dive into his new Pre 2. I heard from people new to the platform describe the cool apps they want to build. I spoke to a proud dad who brought his 14 year old son from Michigan to meet up with fellow developers. If this isn't a rich and rewarding community, I don't know what is.

I still know we've got a lot of work ahead of us, but knowing the community is behind us makes me feel great about our ambitious roadmap for the next year. Thank you webOS community. We really appreciate what you do for us.

Below are my three main session proposals for OSCON, plus a few random ideas near the bottom that aren't fleshed out. Please give me some feedback on what you like and don't like. My goal is to have four really solid submissions. Thanks!

HTML 5 Canvas

Games account for about half of the apps in the typical app store. They are among the first things ported to any new platform. Games help drive technology forward. This year's edition of the popular HTML Canvas Deep Dive will focus specifically on building cross-platform games for mobile and desktop. We will cover everything needed to build basic games with animation, scrolling, sound effects and music, image loading, sprites, and even joystick support. Then we will learn how to package them to run on desktop and mobile devices, both in and outside of app stores.


  • Why make games?
  • Why make games in Canvas?
    • it's easy and fun!
    • works everywhere. more places than any other graphics API.
    • keeps getting faster and more powerful.
  • Anatomy of a game
    • game engine: it's more than just a run loop
    • images: sprites, models, and more
    • input: keyboard, touch, and joysticks
    • animation and player control
    • scaffolding: menus and splash screens
  • picking a game engine
    • 2d engines
    • 3d engines
    • rolling your own
  • drawing to the screen with movement
  • handling input:
    • regular events: keyboard and mouse
    • multi-touch: utilities to help
    • gamepad / joystick
    • camera
  • audio:
    • background music
    • sound effects
      • latency
      • doubling up the playback
    • considerations for mobile
  • resource management
  • finishing touches:
    • fullscreen
    • splash screens
    • loading screens
  • sharing your game with the world
    • on the web
    • mobile app stores
    • desktop app stores
    • tools to help
    • case study
  • performance
    • desktop
    • mobile
    • using 3d for 2d work
  • tools to help
    • performance tuning
    • resource editing
    • existing artwork and music you can use

Offer both a full three-hour workshop and a one-hour talk with the same lessons?

Designing The Internet of Things with the 3 Laws of Robotics

Thanks to cheap sensors and even cheaper computing, we are rapidly approaching the age of the smart home: living spaces filled with smart things. Objects connected to each other and to the internet. Thermostats, door switches, lights, windows, gas sensors and toilets.  However, this vision of things to come brings great challenges as well. How do we design interfaces for these devices? How can someone manage a house full of 200 gadgets each demanding new batteries and an IP address?  What if your networked toaster rats you out to the FBI? The challenges of building a safe and understandable Internet of Things are immense. There is one existing ethical framework that can help: Isaac Asimov's Three Laws of Robotics.

In this session we will explore the complex interactions of the Internet of Things and see how the classic Three Laws of Robotics can be applied in these situations. We will cover physical safety, data privacy, setup and maintenance, and general usability. No knowledge of programming or interaction design is required, just an open mind and a desire to see the future.


  • The Internet of things?
    • Why is it cool?
    • What counts and what doesn't?
    • Inside and outside your home.
  • A quick survey of the problems IoT creates:
    • data privacy
    • physical security
    • physical safety
    • data overload
    • management overload
  • The Three Laws of Robotics
    • fictional and non-fictional history
    • guidance to solve our IoT problems
  • Do Not Harm a Human
    • physical safety
    • emotional safety
    • preserving privacy
  • Obey Orders From Humans
    • The principle of Least User Astonishment.
    • manual overrides
    • decision delegation
    • heuristic design
  • Protect Own Existence
    • Easing the management burden
    • Escalation of emergencies
    • Safe failure
  • Next steps

A survey of visual programming languages

Pure visual programming languages sound like a great idea. Who wouldn't want to create robust and powerful programs using more than just lines of text? It is one of the holy grails of computer science, yet success has proven elusive. The last fifty years of research are littered with the corpses of failed attempts (along with a few interesting successes in unexpected areas). Why is it so hard to create a visual programming language that works in the real world? In this session we will explore the history of visual programming, looking at both the failures and successes from the fifties through to the modern day. We will look for clues about what works and what doesn't. We will extract concepts that can help us design visual languages in the future, as well as features to bring back into traditional programming environments.


  • What is visual programming?
  • Why visual programming?
    • a picture is worth a thousand words, so it's more expressive. right?
    • similarity to visual structures we already use (UML, state diagrams, GUI builders)
    • non-programmers can program.
    • for teaching programming.
  • what counts and what doesn't.
    • visual studio, no; visual basic yes;
    • visual aids to traditional programming don't count. 
    • some part of the application must be specified in a purely visual manner. VB forms count. same with access forms.
  • history
    • early attempts in the 50s and 60s
    • the mother of all demos.
    • 80s era research. 
      • smalltalk visual environments. didn't quite make it. why? what held them back?
      • soviet visual programming recently uncovered.
      • spreadsheet
    • 90s
      • visual basic
      • access and other visual databases
      • flash, director, multi-media languages.
      • music composition
    • 2000s
      • quartz composer
      • educational languages
  • educational languages:
    • scratch
    • squeak & etoys
    • lego mindstorms
    • blockly: abandoned?
    • android builder thingy: abandoned?
  • conclusions:
    • some tasks work visually. others do not.
    • winners:
      • UI layout
      • drawing, animation, movies. any media creation.
      • anything where direct manipulation helps
      • where a boxes and lines metaphor already exists: music sequencers. (though it doesn't result in very extensible code)
    • losers:
      • traditional stream and graph based algorithms
      • anything dealing with strings or non-visual data structures 
      • building libraries. reusability seems to be especially hard to get right.
  • crossover ideas:
    • colors and images in a traditional editor
    • rapid/instant feedback. Processing.
    • show/hide overlays for interesting information.
    • use of color, typography, visual layout to display purely text based code.
    • mixing visual with non-visual works very well. assemble components visually. build components in regular code.
    • separate editing from viewing:  greek symbols and other very compact representations of algorithms?
    • hide the filesystem. you don't care how your code is stored on disk. dir structure is irrelevant. Smalltalk had this.

A Few Other Ideas

My 'game editor inside the game as the game' idea.

Hacking Things Up with WebKit-nix:

Nix is a port of WebKit2 based on POSIX and OpenGL ES. It is unique due to its portability and few dependencies. While it can run in a traditional desktop environment and GUI toolkit, its most interesting use is for embedded systems where a full GUI may not be available, and for headless applications where there is no live graphics environment. This session will cover what Nix is, how to compile it, and how to put it to use:

  • list of things people have done with it
  • how to compile it
  • how to integrate it into a server side app
  • how to integrate it into a client side app with direct GL rendering
  • running it on a raspi w/o X running
  • next steps and places to help

Working with the Raspberry Pi as a kiosk: no X, boot right into your app

Intro to Bluetooth Low Energy: iOS, desktop, Raspberry Pi, Arduino

I had hoped to have my next tutorial article done by now but, alas, travel and Java Store deadlines snuck up on me. I'm currently flying to Sweden for the annual OreDev conference.

If you live in northern Europe and have never been to OreDev I highly recommend it. It's not too big (600ish attendees), has some excellent speakers, and covers a broad array of topics. I love speaking at OreDev because I not only teach others about JavaFX but also get to learn about many other technologies that I wouldn't otherwise be exposed to. Last year I learned about JRuby, Silverlight, and Jython.

This year I'm speaking on JavaFX user interfaces and the Java Store, as well as announcing an open source project I've been working on for the past few months.

In the meantime, I'm also starting the NaNoDrawMo drawing challenge. This isn't a competition like the challenges I run on JFXStudio. Instead this is purely personal. Your goal is to draw 50 things in 30 days. Just draw and draw. Practice makes perfect.

And by draw I do indeed mean drawing on paper with pencil or pen; though any form of drawing, analog or digital, is valid.

Why am I discussing drawing, typically considered an 'art', on a design site? Well, I'm not an artist and I never will be. But many of the skills of an artist have great use in design. As I'll cover later, the first step whenever I design a new user interface is to sit down with a piece of paper and draw it. Drawing is fundamentally the ability to take what's in your head and put it on paper. And only once it's on paper for others to see can it start to become design. So drawing is a valuable skill which I wish I had more of. Hence: practice makes perfect.

I'll be posting my drawings on Flickr as the month progresses, as well as more posts on design theory and tech tips, so stay tuned.

- Josh

The TouchPad is on its way to stores, the catalog is full of apps, and Jesse finally went to sleep. It is done.

Well, not really done per se. This is a marathon and we've only made our first dip in the water. It's a seven course meal that begins with the first mile. It's both a floor wax and a dessert topping. And was it over when the Germans bombed Pearl Harbor? Hell no! And it ain't over now!

I'm serious, though, when I say that 10 years from now 90% of people will use a tablet computer as their main computing device.  And a big chunk of those will be running webOS.  This is just getting started.

I'd like to really congratulate both the Developer Relations team that I'm on and the larger webOS organization. I am so proud to be a part of this group. Shipping a product like this has been a dream of mine for a very long time. To make something quality and tangible. To improve the computing user experience for the regular user and help developers build new and amazing things.  Things no one has ever seen before.

Tomorrow is the realization of that dream for me. Tomorrow I will walk into Best Buy and see people using a product that I helped bring into existence. It will have some great apps in the catalog that I helped usher to market. I'm especially proud of our educational titles like BrainPop (app) and the first official tablet app for the Khan Academy (app).

Now of course I know how the sausage is made. I know the features that didn't make it in. The bugs that were fixed right after the deadline. The chaos behind the curtain.   It's just the nature of software. You're never done. But a great guy once said: "Real artists ship". And I heard his underdog products ended up doing pretty well.

And another great guy I know just said: "THIS is our iMac, and this is where it starts, not finishes. And while we have a lot of work left to do, getting here means we’ve accomplished a hell of a lot, too."

And if that doesn't work for you then here, have some tasty pudding!

Go Huge Palm!

For my gracious younger readers, a brief history lesson from one of the classics of cinema:

Was it over when the Germans bombed Pearl Harbor by thefastlane2hell

This is day zero of my Month Of Writing.

Controlling an Arduino project over a serial connection is one of the most common tasks you might want to do, and yet it's surprisingly hard to find a good library. I really don't like the official ones because they are limited and require too much setup. After much googling with Bing I found one called CmdArduino by Akiba at Freaklabs.

CmdArduino does exactly what I want, minus a few tweaks. I emailed Akiba about it. Even though he hadn't updated the library in three years, he responded right away. I asked if I could take over the lib and he said yes! So I'm now the official maintainer of CmdArduino. For my first release I've added Stream support so it will work with more than just the regular Serial port; very important for working with alternative streams like Bluetooth LE modules.

CmdArduino is super easy to use. Create your command functions with a signature like:

void left(int arg_cnt, char **args)

Then register it like this:
cmdAdd("left", left);


Now the left function will be called whenever you type 'left' into the other end of the stream. If you add arguments after the command they will show up in the args array. At OSCON I built a robot controlled by a chat app on my phone over BLE. When I typed in spin 3000, the robot's spin function would be called with the value 3000 for the duration. The code looks like this:

void spin(int arg_cnt, char **args) {
    int time = 1000;
    if(arg_cnt > 0) {
        time = parseInt(args[0]);
    }
    // ... spin the motors for 'time' milliseconds ...
}

void setup() {
    // ... init the stream and register commands with cmdAdd() ...
}
It's that easy. The code is up in my github repo now.

Thanks Akiba!

Software libraries are good. They allow abstraction and encapsulation, which encourages reuse. They also allow the library to be written by one person and used by another. This is reuse at the programmer level, not just the system level. A library can be written by a person who has domain expertise but used by someone who has little or no expertise in that domain. For example: an XML parser. The implementer knows the XML spec inside and out, but the user of the lib needs only a basic understanding of XML in order to use it.

Now that we've established that libraries are good (regardless of whether they are implemented as classes, packages, DLLs, etc.), how can we extend them? Traditionally libraries are very limited in how they can influence the experience of the programmer. The coder simply has new functions to call. A more advanced case is component-driven GUI builders, where the lib exposes more information so that the IDE can make the library easier to use. Annotating types in a map component could let the GUI builder draw markers, use a dummy tile set, and set positions. This is all standard stuff. Could libraries do more? How could the library author further influence the experience of the library user?

Let's do a quick brainstorm:

  • Extend the language with new operators and syntax specific to the library. A physics library could import operator overloads to enable more math like syntax when dealing with physical units and vectors.
  • Tests that verify aspects of the code that calls the library. This could automate the advice that would be in the documentation, such as "don't pass null to certain functions" or "this function returns an object that you must dispose of".
  • If the lib is used from within an IDE, the lib can insert new documentation into the searchable index of docs.
  • The lib could come with example code and reusable annotated code snippets that would only be applied to projects using the lib.
  • The lib could insert a new package manager into the compiler toolchain. No more messing with Maven POMs.
  • The lib can provide the compiler with new code optimizers for the particular tasks. An imaging analysis library could instruct the compiler on how to better use SIMD instructions when available.

Obviously modifying the environment of the programmer is a dangerous activity so we would need very strict scoping on the changes. A library which extends the language applies only in the parts of the code that actually import the library. Changes are scoped to the code which asks for them.

Fundamentally a library is a thing created by one programmer for use by another programmer (or the same programmer at a future date). We should extend the concept of a library to apply to the entire programming experience, not just adding objects and methods. This would allow all environment extensions to be managed in a standard way that is more understandable, flexible, and scoped. I would never need to set up a makefile again. I could simply import in my code the package manager, repos, code chunks and syntax that I want to use for this project. The library extensions to the compiler handle everything else automatically.

Clearly there are challenges with such an approach. We would have to define all of the various ways a library could modify your environment. How would compiler hooks actually work? Annotation processors? A function to modify the AST at different stages of compilation? Would it imply a particular back end like LLVM? I don't have the answers. There clearly are challenges to be solved, but I think the approach is a step forward from today, where we have many different components of our programming environment that are dependent on each other but don't actually know about each other. Having a way to organize this mess, even if imperfect, would certainly help us make better software.

Today we are talking with Rusty Moyher, another member of the Retro Game Crunch team. You can read the previous interview with Shaun Inman here. There's still a few days left to help push the Retro Game Crunch to the finish line. Pledge now!

Josh Before we talk about the Crunch, could you tell me a little bit about yourself? Where did you go to school? What did you do before you started making games?

Rusty Sure! I got started making games pretty early. In 1993 my family received a hand-me-down computer, an old black and white Macintosh Plus. I discovered, with a program called Hypercard, I could make rudimentary games. They were far from amazing, but I had a blast making them.

Gradually I became more interested in film and started making my own movies. At Sacramento State I earned a B.A. in English because I thought it would help me write better screenplays. But it turns out you get better at writing by simply writing more.

Josh What did you do at Apple?

Rusty Fixed computers. Produced a couple internal videos too.

Josh: The Kickstarter site says you are also a filmmaker. What films have you made?

Rusty: Most recently, For America. It's a grindhouse flick about a man being evicted from his home by the evil banks. Naturally, it turns into a punch out.

Josh: Do you think there is a lot of overlap between film and game creation? Do your skills in one cross over to the other?

Rusty: I’ve spent the better part of 10 years writing, producing and directing independent films. I’ve always had a passion for creating and sharing experiences with people. My background as a writer, director and editor colors everything I do. Games are not movies (and they shouldn’t be), but both are designed as an experience.

Josh: How did you come up with the idea of doing one game every 30 days for six months rather than a more traditional schedule? What are the advantages to doing it so quickly?

Rusty: I've found a healthy amount of focus and pressure helps me make better decisions. 30 days should give us just enough time to focus on the gameplay and then polish it for release. The idea came from making Super Clew Land. We felt 30 days was just about right.

Josh: Tell me about Bloop. What is it and where did the idea come from?

Rusty: Bloop is like a Twister / Hungry Hungry Hippos love child. Two to four players tap tiles on an iPad as quickly as possible. The tiles shrink and begin to collide. It's a hilarious party game.

The original idea for Bloop came from frustration. I was working on a more complicated iPad game. It was just too much for players to set up easily. I wanted a simple experience people could just play without needing to learn or set up anything.

Josh: I saw it was nominated for IndieCade 2012. There's a lot of interesting stuff nominated this year. Did you expect to get recognition for Bloop?

Rusty: No way! I was blown away and honored to have Bloop among so many other amazing games.

Josh: Have you collaborated with others on previous games?

Rusty: I’m only just discovering the Voltronian power that comes from combining the strength of a couple of like-minded indies. Last month I made the original version of Super Clew Land with Shaun Inman and Matt Grimm. It’s been such a blast, we haven't stopped collaborating.

Josh: What ideas do you have for game styles in the Retro Game Crunch? Can we expect platformers or other styles, or something completely new?

Rusty: No way of knowing yet! Everyone who backs our Kickstarter gets to submit and vote on themes. When all the votes come in, a winning theme is declared. Then we design a game based on the theme. Only then will we discover the style of the game.

Josh: What is the one existing game you wish you had created?

Rusty: I would love to have worked with the team behind Super Metroid or Final Fantasy VI.

Josh: What's the one question I should have asked you and didn't?

Rusty: Do you have even more amazing games planned for the future? You bet. :)

Thanks, Rusty!

I finally got tired of hacking WordPress and decided it was time for a change. My site has always been essentially static. I don't need a database or the ability to switch themes and use plugins. I'm the only author and the content doesn't change more than once a day (usually once a week). So, I scrapped WordPress entirely and wrote my own blog in Node. It only took me about a day of real hacking to get it up and running.

The blog itself is a simple Node app built with the Express toolkit. For markup I'm using Jade, a refreshingly clean template engine. The blog posts themselves are just HTML on disk with some adjacent JSON metadata, i.e. the simplest thing that could possibly work. No databases are involved. Just in-memory caching of on-disk text files. When I want to post something I just drop in a new file and restart the node process (takes less than 1 second). I suppose I could write some sort of visual posting interface if I wanted to, but no need to complicate things right now.

The style of the site is custom. I designed it to have a bit of a retro feel while remaining clean. You can see the tag chart in the sidebar borrows colors from the top header. The top font is Ostrich Sans from the League of Movable Type, using @font-face.

I originally chose to go with WordPress for my blog because of the many themes available, but finally decided they are all way too configurable or just plain crap. And the markup is always atrocious. I wanted something neat and clean, so I had to go custom.

I built the UI of the site on top of Twitter's Bootstrap CSS framework which gives me a nice clean grid to work with, consistent vertical spacing, and responsive behavior if you view it on a mobile device. Yes, there is no special mobile theme. You get the exact same content on all devices, but with a layout tuned to your needs. As a mobile developer I am adamant that we shouldn't have two versions of any site. Just make it adjust dynamically.

I hope you enjoy the new design. Please give me feedback on Twitter.

I'm working on a few submissions for OSCON, due in two days. I've got lots of ideas, but I don't know which ones to submit. Take a look at these and tweet me with your favorite. If you can't make it to OSCON I'll post the presentations and notes here for all to read.

Thx, Josh

Augmenting Human Cognition

In a hundred years we will have new, bigger problems. We will need new, more productive brains to solve them. We need to raise the collective world IQ by at least 40 points. This can only be done by improving the human-computer interface, as well as improving physical health and concentration. This session will examine what factors affect the quality and speed of human cognition and productivity, then suggest tools, both historic and futuristic, to improve our brains. The tools span health (disease, sleep, nutrition, and light), digital aids (creative tools, AI agents, high speed communication), and physical augmentation (Google Glass, smart drugs, and additional cybernetic senses).

Awesome 3D Printing Tricks

The dream of 3D printing is for the user to download a design, hit print, and 30 minutes later a model pops out. What’s the fun in that? 3D printers are great because each print can be different. Let’s hack them. This session will show a few ‘unorthodox’ 3D printing techniques including mixing colors, doping with magnets and wires, freakish hybrid designs, and mason jars. Lots of mason jars.

The techniques will be demonstrated using the open source Printrbot Simple, though they are applicable to any filament based printer. No coding skills or previous experience with 3D printers is required, though some familiarity with the topic will help.

Cheap Data Dashboards with Node, Amino and the Raspberry Pi

Thanks to the Raspberry Pi and cheap HDMI TV sets, you can build a nice data dashboard for your office or workplace for just a few hundred dollars. Though cheap, the Raspberry Pi has a surprisingly powerful GPU. The key is Amino, an open source NodeJS library for hardware accelerated graphics. This session will show you how to build simple graphics with Amino, then build a few realtime data dashboards of Twitter feeds, continuous build servers, and RSS feeds; complete with gratuitous particle effects.

HTML Canvas Deep Dive 2014

After plain images, HTML Canvas is the number one technology for building graphics in web content. In previous years we have focused on 3D or games. This year we will tackle a more useful topic: data visualization. Raw data is almost useless. Data only becomes meaningful when visualized in ways that humans can understand. In this three hour workshop we will cover everything needed to draw and animate data in interesting ways. The workshop will be divided into sections covering both the basics and techniques specific to finding, parsing, and visualizing public data sets.

The first half of the workshop will cover the basics of HTML canvas, where it fits in with other graphics technologies, and how to draw basic graphics on screen. The second half will cover how to find, parse, and visualize a variety of public data sets. If time permits we will examine a few open source libraries designed specifically for data visualization. All topics we don’t have time to cover will be available in a free ebook to read.

Bluetooth Low Energy: State of the Union

In 2013 Bluetooth Low Energy, BLE, was difficult to work with. APIs were rare and buggy. Hackable shields were hard to find. Smartphones didn’t support it if they weren’t made by Apple, and even then it was limited. What a difference a year makes. Now in 2014 you can easily add BLE support to any Arduino, Raspberry Pi, or other embedded system. Every major smartphone OS supports BLE and the APIs are finally stable. There are even special versions of Arduino built entirely around wiring sensors together with BLE. This session will introduce Bluetooth Low Energy, explain where it fits in the spectrum of wireless technologies, then dive into the many options today’s hackers have to add BLE to their own projects. Finally, we will assemble a simple smart watch on stage with open source components.

While I have many projects in progress right now (Amino, Leonardo, getting the TouchPad out the door, and having my first baby, with only a few weeks left!), every now and then I just get something into my head and have to code it out. Last night it was noise functions.

Since I've been using little bits of noise in my designs I thought it was high time to really learn how it works. Following code & articles from here and here I've managed to create a simple noise and turbulence generator in Java. With a few different settings you can get textures that range from completely random to something that looks like marble.

pure random grayscale noise

high turbulence

medium turbulence + one sinewave

medium turbulence + three sinewaves

It's actually simpler than I thought. Noise is just a bitmap filled with random values from 0 to 1 (or a grayscale value between black and white). Turbulence is created by stacking multiple layers of noise, each stretched by a different amount. Add in a bit of linear smoothing and you get some quality texture. The marble effect is just a turbulence texture merged with a sine wave.
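The same idea translates to JavaScript in a few functions. This is a sketch of the technique described above, not the Java code from my generator; the function names, the layer weighting, and the marble constants are my own choices:

```javascript
// Noise: a grid of random values from 0 (black) to 1 (white).
function makeNoise(size) {
    var noise = [];
    for (var y = 0; y < size; y++) {
        noise[y] = [];
        for (var x = 0; x < size; x++) {
            noise[y][x] = Math.random();
        }
    }
    return noise;
}

// Sample the noise at a fractional coordinate with linear smoothing.
function smoothNoise(noise, x, y) {
    var size = noise.length;
    var x0 = Math.floor(x) % size, y0 = Math.floor(y) % size;
    var fx = x - Math.floor(x),   fy = y - Math.floor(y);
    var x1 = (x0 + 1) % size,     y1 = (y0 + 1) % size;
    return noise[y0][x0] * (1 - fx) * (1 - fy)
         + noise[y0][x1] * fx       * (1 - fy)
         + noise[y1][x0] * (1 - fx) * fy
         + noise[y1][x1] * fx       * fy;
}

// Turbulence: stack layers of the same noise, each stretched by a
// different amount, and take a weighted average.
function turbulence(noise, x, y, maxScale) {
    var value = 0, total = 0;
    for (var scale = maxScale; scale >= 1; scale /= 2) {
        value += smoothNoise(noise, x / scale, y / scale) * scale;
        total += scale;
    }
    return value / total; // still in the 0..1 range
}

// Marble: merge the turbulence with a sine wave.
function marble(noise, x, y) {
    var t = turbulence(noise, x, y, 32);
    return Math.abs(Math.sin((x + t * 100) * Math.PI / 64));
}
```

Feed the `marble` value into a grayscale pixel and you get the veined texture shown above.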

Next up, making the noise repeat so we can have tileable textures. Then I might make a simple app to generate noise with various colors. If it all turns out okay I might make a plugin for Leonardo to create your own textures.


20th century advertising has taught us to associate quality artwork and polish with quality products. Given two apps that do the same thing, a potential customer will pick the one that looks and feels better. This means every great app needs great art. Since most developers aren't artists or designers by trade, I've assembled a list of resources that can help. Here are icons, fonts, sounds, color schemes, and other great art assets to help you make your app stand out from the crowd.

Color Schemes

Good use of color can really make your app stand out, but color can be tricky. The best color schemes often come from other people or real world objects. These sites have collections of color schemes created by people who work with color every day. They let you search by color, theme, and popularity.


COLOURlovers is a creative community where people from around the world create and share colors, palettes and patterns, discuss the latest trends and explore colorful articles... All in the spirit of love. (yes, they wrote that part)



Kuler at Adobe is a Flash based color searching site.

For more on the topic of color see my Color 101 article.


Icons

Every great app needs great icons. Not only for the app itself but also within the application for buttons and indicators.

The great FXExperience blog has several free PNG icon collections that are generic and useful for lots of things. Most are small so they work well on mobile devices. Free icons for your JavaFX applications @ FX Experience

The Crystal Clear icon set by Everaldo is licensed under LGPL and includes icons for apps, actions, devices and mimetypes.

The Nuvola icon set contains 600 icons with a cute cartoon feel to them.

Icon sets for Gnome, under various licenses

Tango Icon Library: another very complete set of desktop icons.


Sounds & Music

Free Sound has a huge collection of user contributed effects, clips, and just plain weird sounds. Great for building sampled music and effects in your games.



Jamendo is a music site containing only Creative Commons licensed works. The perfect place to find your next soundtrack.


Imagery & Textures

Lost Garden

Lost Garden focuses on video game design. The author, Danc, has literally decades of experience in the field. You can easily lose hours reading through amazing essays on the site. Today, however, we are here for the free game graphics. Spanning both vector and bitmap, retro and modern styles, Lost Garden has game graphics for many uses.

Open Graphic Design and the Think Design blog have tons of cool shapes and vector artwork, great as starting points in Illustrator, plus free textures and vector shapes good for backgrounds and skinning. They also have tons of inspiration articles.


For photos your best source is images licensed under Creative Commons at Flickr. Conveniently they have a search option for just such photos.


Well Placed Pixels is a blog containing only one thing: screenshots of beautiful software.

The idea of computer vision has always fascinated me. The ability to get from a plain image to an understanding of its contents seems magical. Though I understand a bit of the underlying math, building my own computer vision system would take years of study. Fortunately, this book and an open source library come to the rescue.

Practical Computer Vision with SimpleCV, published by O'Reilly, is a wonderful book; with an emphasis on Practical. It explains a bit about how computer vision works then dives right in to building things using the open source computer vision library: SimpleCV. SimpleCV is a Python library (linking to an underlying C implementation, I think), and quite easy to use. Since the book examples are all in Python anyone with basic programming experience (Java, C#, JavaScript, Ruby) should have no trouble with the example code.

The first half of the book starts with an explanation of basic computer vision concepts then jumps right into building a simple time lapse photography app with a few lines of Python code. Next comes image manipulations such as cropping, color reduction, simple object detection, and histograms. This is enough to create a blue screen effect and a parking detector.

The second half contains the real meat of the book: detecting features. SimpleCV can pick out different shapes, filter by colors, look for faces, and even scan barcodes. One of the examples looks at a table of change to calculate the monetary value using coin size. The final chapter covers some advanced techniques like optical flow and key point matching.

While I like the book overall I do have a few nits. First, I really wish it was printed in color. Several chapters have images which can't be easily distinguished when printed in black and white. Second, I wish it was longer. While the book does cover almost every feature of SimpleCV, I'd love to read some larger example apps that combine multiple techniques. All that said, the book was still a good read and informative. It will stay on my shelf for future imaging projects.

As you know, I've been doing a lot of ebook prototyping lately. Ever since the iBooks 2 announcement I've had this idea stuck in my head that we can make rich interactive ebooks using nothing but web technology. That lead to the toolkit I've been working on, now open sourced on github. As the first thing written with that toolkit I'm proud to announce my first interactive book: HTML Canvas, a Travelogue.

The book is an interactive guide to learning Canvas. It is not a comprehensive reference guide since there are plenty of those. It gives you an overview of the technology with deep dives into a few parts to let you explore what the technology can do. It does this through some hands on tutorial chapters where you build charts and a simple video game. It also has interactive code snippets that let you tweak variables to see how they react in real time. The entire book is written in HTML with the ebook toolkit. For the chapter navigation I used the new Enyo 2.0 beta with the Onyx theme. Animation is done with Amino, and for app packaging I've used PhoneGap.

Canvas is a technology that is still in flux, so I think an app based book is a perfect way of learning it. As the technology improves and changes I can continue to update the book with new examples and reference material. I like to call this concept an everbook. When you buy such a book you aren't just buying the static text itself. You are buying all future editions of that book, and can give feedback to the author to help shape the material. If the reader wants a deeper explanation of a particular topic or finds a bug in an example, then he can simply email the author with the request. The book is updated for free and everyone wins.

I will be releasing it for all of the tablet platforms over time, starting with webOS first and the iPad next. The retail price will be $4.99, but as a thank you to the webOS community I am releasing it for a permanent price of 99 cents. And as of right now it is live in the TouchPad App Catalog and available for purchase. I love you guys and this is the best way I can think of giving back: developing tools and apps to keep webOS great. Please check it out and give me feedback. All bugs will get top priority.

Thank you.

- Josh

In today's post I'll dive into Amino's new buffering support. At the end we'll talk about new API docs for Amino, the roadmap, and a request for help on a domain name.

A big part of making a scenegraph fast is using lots of little tricks to do as little work as possible. Most of these tricks are decades old, but they still work. What makes a scenegraph good is letting developers easily use these tricks without having to code up anything special.

Dirty rect tracking is one such trick, but I haven't implemented it in Amino yet so I'll cover that in a few weeks. Another common trick is using intermediate buffers to store effects that are expensive to compute, such as blurs, shadows, and Photoshop style adjustments. The beauty of buffers is that you can pretty much do any crazy thing you can think of, as long as you can figure out how to draw it into a buffer first. A good third of Swing Hacks, the book Chris Adamson and I wrote, is just different clever ways of using buffers.

Given the importance of buffers I made this a central feature of Amino. But before I go any further, how about a demo!

Zoom Effect

First, an MP3 style Visualizer. I say MP3 style because it's not actually working with audio. I'm just generating random data then pushing it through a simple buffer effect: draw into buffer1, copy buffer1 into buffer2 with stretching, flip buffers and repeat. It's a simple technique but if you do it over and over the results are very cool.

MP3 Visualizer. Click to view

On a modern browser you should get a solid 60fps. BTW, a quick shout out to Internet Explorer 9. The guys at MS have done a top notch job. Amino runs beautifully there, no matter what I threw at it.

The Buffer API

Buffering is broken up into two parts. First is the actual Buffer object, which is a bitmap with a fixed width and height that you can draw into and can be drawn into other buffers or the screen. In the Java version of Amino this is a wrapper around BufferedImage. In the JavaScript version Amino creates an offscreen canvas object to use as a buffer.

The simplest use for a Buffer is in the BufferNode, which just draws its only child into the internal buffer for safe keeping. If the child hasn't changed on the next drawing pass then it will draw using the buffer instead. This is the simplest use case, but very important because you can draw a bunch of complex stuff and save it by simply wrapping it in a buffer. Here's a quick example:

//create a group with 1000 child rects
var g = new Group();
for(var i=0; i<1000; i++) {
    g.add(new Rect().set(i,i,50,50));
}
//wrap the group in a BufferNode
var b = new BufferNode(g);

The code above creates a group with a thousand rectangles. This will probably be slow to draw, but by wrapping it in a buffer it's only drawn once and then saved for later. Now the rest of your scene can draw at full speed.

Real Time Photo Adjustments

The next big use of buffers is for special effects like Photoshop filters. As of today Amino now has effects for blur, shadow, and photo adjustment (saturation, brightness, contrast). Each of these effects uses one or more buffers internally to manipulate the pixels before drawing to the screen. Blurring is a big topic, so I'll cover that in its own post later. Today I'll cover the photo adjustment.

Adjusting the saturation, brightness, and contrast of an image is actually pretty simple. It's just basic math and a lot of copying:

  • Create two buffers.
  • Draw your photo into buffer 1.
  • Loop over every pixel in buffer 1.
  • Pull out the red, green, and blue components of that pixel's color.
  • Calculate a new red, green, and blue using some math.
  • Set the same pixel in buffer 2 using the new calculated color.

For brightness, saturation, and contrast the equations are:

new color = old color + brightness
new color = (old color - 0.5)*contrast + 0.5
new color = gray + (old color - gray)*saturation, where gray = red*0.21 + green*0.71 + blue*0.07

I won't bore you with the details of actually extracting the components with hex math and stuffing it back into the new pixels (well, maybe I will in a later blog on canvas performance). The point is you can do lots of effects with simple math.
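For the curious, the per-pixel math boils down to something like this. Channel values are treated as 0 to 1; the clamping and the order of operations here are my guesses at a reasonable implementation, not necessarily what Amino actually does:

```javascript
// Apply brightness, contrast, and saturation to one pixel.
// r, g, b and the returned channels are all in the 0..1 range.
function adjustPixel(r, g, b, brightness, contrast, saturation) {
    // perceptual grayscale using the 0.21 / 0.71 / 0.07 weights
    var gray = r * 0.21 + g * 0.71 + b * 0.07;
    function adjust(c) {
        c = c + brightness;                  // brightness: plain offset
        c = (c - 0.5) * contrast + 0.5;      // contrast: scale around mid-gray
        c = gray + (c - gray) * saturation;  // saturation: blend toward gray
        return Math.min(1, Math.max(0, c));  // clamp to the valid range
    }
    return [adjust(r), adjust(g), adjust(b)];
}
```

Run that over every pixel of buffer 1, write the results into buffer 2, and you have the whole effect.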

The challenge with code like this is that it may still be too slow for real time work. Remember, the goal of Amino is a rock solid 60fps on a desktop browser and 30fps on a mobile device. To keep our framerate promise we need a way to do some work without blocking the UI. That means background threads.

Background Threads, Sorta

On the Java side we can use real background threads to do compute intensive effects; though honestly Java is fast enough for most cases that it hasn't been necessary yet. Canvas in most browsers is slow enough that we can't process an entire large picture (say 512x512) in a single frame. Unfortunately, JavaScript doesn't really have threads, or at least not until the forthcoming Worker API is released. So on to our backup plan: cheat.

We are allowed to do some work on the GUI thread as long as we don't take too much time. The solution: break up the work into small chunks and distribute them across multiple frames. It won't be quite as 'realtime' but it still allows us to do these effects in browser without slowing down the UI.

Amino now has an internal class called WorkTile which defines a subset of the full image to be processed. Right now it's set to 32x32 pixels. Once the effect starts it will process one WorkTile at a time until it runs out of time for this frame (currently set to 1000/40 ms). When the next frame arrives it will process a few more WorkTiles until it runs out of time. After enough frames the image will be completely processed and saved into the final buffer, and work is terminated. Voila, 'background' processing of images in a browser without blocking the UI.
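Stripped of Amino's internals, the scheduling idea looks roughly like this; the names and numbers are mine, not the actual WorkTile class:

```javascript
// Split the image into fixed-size tiles to be processed later.
function makeTiles(width, height, tileSize) {
    var tiles = [];
    for (var y = 0; y < height; y += tileSize) {
        for (var x = 0; x < width; x += tileSize) {
            tiles.push({ x: x, y: y,
                w: Math.min(tileSize, width - x),
                h: Math.min(tileSize, height - y) });
        }
    }
    return tiles;
}

// Called once per frame: process tiles only until the time budget runs
// out, then resume on the next frame. Returns true when all tiles are done.
function processSomeTiles(tiles, processTile, budgetMs) {
    var start = Date.now();
    while (tiles.length > 0) {
        processTile(tiles.shift());
        if (Date.now() - start > budgetMs) break; // out of time for this frame
    }
    return tiles.length === 0;
}
```

Each animation frame calls `processSomeTiles(tiles, applyEffect, 1000/40)` until it returns true, at which point the finished buffer is swapped in.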

Currently only BackgroundSaturationNode uses this new worker system but eventually all effects will use it. Here's a demo to change the saturation, brightness, and contrast of a satellite image from Venus. Click the image to go to the demo page.

Photo Adjustment. Click to run.

API Docs, Website, and Roadmap

Along with the buffering work I've recently written a new doc tool for Amino. I wanted something that would work on both Java and JavaScript code and didn't have the ugly legacy of classic javadoc. It's still a work in progress but I'm happy with the results so far. It's a new design where classes are grouped by category instead of package. Feedback is very welcome.

I think Amino is getting close to a real beta release. It's pretty stable in the major browsers and on mobile devices that support canvas (I haven't tested Android yet, but TouchPad and iPad work great). I have a bit more work to do on events, fills, and animation but once those are done we'll be ready for a 1.0 release.

Now the big question: where to put all of this? I think Amino deserves its own domain so I'd like your help picking one. Please tweet your suggestions to @joshmarinacci.

That's it for this week. Thanks guys. I think Amino is shaping up to be a rockin' scenegraph library.

It's been a while since I've posted thanks to this spring's conference schedule. Part of my new job at Palm is working at our booth answering technical questions. This has kept me on the road, but certainly provided opportunities to talk about our technology and build interest in apps. Read on for photos and stories from GDC and CTIA, including a clip of Shrek Cart.

CTIA in Las Vegas

I'll say this: Vegas is flashy, dry, and tiring. I was working the whole time so it wasn't as fun as I would have imagined. I suspect if I went on vacation there I'd have a very different experience (as my wife did last year with her friends). Still, it was interesting.

Probably the best part was getting to meet the Pre Central crew in person. @adora, her bf, and I had dinner in the Palms hotel (natch) with Dieter Bohn of the PreCentral PalmCast, along with co-casters from WMExperts and Android Central. I always enjoy good sushi, drink, and conversation.

The rest of the trip was punctuated by booth shifts, collecting a ton of business cards, and drinking lots of water. The CTIA floor is truly massive. Hopefully next year I'll get to spend more time wandering it. I will say this about Las Vegas: their indoor architecture is amazing. Next time I plan to take far more pictures.

Attendance at the booth was quite good. Most people knew about us and were excited to hear webOS devices are coming to AT&T. The ability to talk and surf at the same time can be handy, as long as you are careful.

GDC 2010 in San Francisco

Next up is the Game Developer Conference, or GDC. As the name would suggest this is a developer centric event of the gaming variety, though there were certainly plenty of biz people too. I will say this: the gaming industry has a lot more fun with their booths than the enterprise software guys. I never saw anything like this in front of an Oracle booth.

You will optimize your SQL statements with fourth normal form.

The Palm Booth / Beacon

The Palm booth saw good attendance, and you certainly couldn't miss our sign. Rather than an endless parade of booth babes we went for a clean and soothing wood and smokey grey finish, topped off with a massive orange sign. It was way too bright to photograph without a manually controlled SLR. If there wasn't a roof on the building you could have seen it from space.


Shuttle landing beacon


While attendees flitted in and out we had presenters explaining phone features and playing some of the new games from our catalog. In the clip below you can see Cassie playing Shrek Cart against another phone over WiFi. At one point we had six people playing on the same course, each from their own phones. (yes, I captured this with my Palm Pre).

Touring the floor

I did get a chance to browse the floor between shifts. The crowd around StarCraft 2 was thicker than a zergling's skull. I'm not a gamer anymore, but I may have to pick it up when it's finally released in 2023.

StarCraft is finally threee deee

Your life in 3D

GDC isn't just for video games themselves. Many of the vendors were showing their latest tools and supplies for making games. Thanks to Avatar, motion capture and 3D were the major tech themes.

My first observation: 3D TVs are coming whether we like them or not. I tried on the glasses and sat down for a few minutes to watch a movie and some games. First impressions: 3D movies will still be a novelty at home. We've only just started to get films in 3D that really take advantage of the format, and while they are truly an experience in the movie theater (Coraline was simply stunning), I don't think it transitions well to the smaller screen and lesser technologies. Movies created for 3D will still fare reasonably well, but 2D content converted to 3D made my eyes hurt. Still too much of the 'shiny thing popping out at you' to make it interesting.

Video games, on the other hand, I think may be the killer app for 3D TV sets. The racing game I played was much more immersive and felt more engaging. Since the content is rendered realtime and already in 3D they don't have to add in the effect afterwards.

As for the glasses, they had the usual polarized paper specs in the booth for cost reasons. Not comfortable, but once classy Oakley style glasses hit in the next year I think the need to wear something special for 3D will become a non-issue.

Take that two dee screens!

All Motion Must Be Captured

As for motion capture, it's everywhere. Many different techniques, presumably for different costs and results. I get the impression that all games with 3D characters use this now.

Capture your motion, look cool in spandex

That's it for this week. I should have some exciting stuff to share with you soon. In the meantime here's a bonus pic I found on my phone, taken a week ago in downtown Eugene. Perhaps leading them to greener pastures?

Remember to tether the children well.

I’ve been using Node JS off and on for the past few years, ever since we used it in webOS, but I’ve really gotten to go deep recently. As part of my learning I’ve finally started digging into Streams, perhaps one of the coolest unknown features of Node.

If you don’t already know, Node JS is a server side framework built on JavaScript running in the V8 engine, the JS engine from Chrome, combined with libuv, a fast IO framework written in C. Node JS is single threaded, but this doesn’t cause a problem because most server side tasks are IO bound, or at least the ones people use Node for (you can bind to C++ code if you really need to).

Node does its magic by making almost all IO function calls asynchronous. When you call an IO function like readFile() you must also give it a callback function, or else attach some event handlers. The native side then performs the IO work, calling your code back when it’s done.

This callback system works reasonably well, but for complex IO operations you may end up with hard to understand, deeply nested code; known in the Node world as ‘callback hell’. There are some 3rd party utilities that can help, such as the ‘async’ module, but for pure IO another option is streams.

A stream is just what it sounds like from any other language: a sequence of data that you operate on as it arrives or is requested. Here’s a quick example. To copy a file you could do this:

var fs = require('fs');
fs.readFile('a.txt', function(err, data) {
    fs.writeFile('b.txt', data, function(err) {});
});

That will work okay, but all of the data has to be loaded into memory. For a large file you’ll be wasting massive amounts of memory and increasing latency if you were trying to send that file on to a client. Instead you could do it with events:

var fs = require('fs');
var infile = fs.createReadStream('a.jpg');
var outfile = fs.createWriteStream('b.jpg');
infile.on('data', function(data) {
    outfile.write(data);
});
infile.on('close', function() {
    outfile.end();
});

Now we are processing the data in chunks, but that’s still a lot of boilerplate code to write. Streams can do this for you with the pipe function:

var fs = require('fs');
fs.createReadStream('a.jpg').pipe(fs.createWriteStream('b.jpg'));
All of the work will be done asynchronously and we have no extra variables floating around. Even better, the pipe function is smart enough to buffer properly. If the read or write stream is slow (network latency perhaps), then it will only read as much as needed at the time. You can pretty much just set it and forget it.

There’s one really cool thing about streams. Well, actually two. First, more and more Node APIs are starting to support streams. You can stream to or from a socket, or from an HTTP GET request to a POST on another server. You can add transform streams for compression or encryption. There are even utility libraries which can perform regex transformations on your streams of data. It’s really quite handy.

The second cool thing is that you can still use events with piped streams. Let's get into some more useful examples:

I want to download a file from a web server. I can do it like this:

var fs = require('fs');
var http = require('http');

var req = http.get('');
req.on('response', function(res) {
    res.pipe(fs.createWriteStream('out.png')); //filename here is illustrative
});

That will stream the get request right into a file on disk.

Now suppose we want to uncompress the file as well. Easy peasy:

var req = http.get('');
req.on('response', function(res) {
    res.pipe(zlib.createGunzip())
       .pipe(tar.Extract({path:'/tmp', strip: 1}));
});

Note that zlib is a built-in nodejs module, but tar is an open source one you’ll need to get with npm.

Now suppose you want to print the progress while it happens. We can get the file size from the http header, then add a listener for data events.

var req = http.get('');
req.on('response', function(res) {
    var total = res.headers['content-length']; //total byte length
    var count = 0;
    res.on('data', function(data) {
        count += data.length;
        console.log('downloaded ' + count + ' of ' + total + ' bytes');
    });
    res.pipe(zlib.createGunzip())
        .pipe(tar.Extract({path:'/tmp', strip: 1}))
        .on('close', function() {
            console.log('finished downloading');
        });
});

Streams and pipes are really awesome. For more details and other libraries that can do cool things with Streams, check out this Streams Handbook.

I'm finally back from OSCON, and what a trip it was. Friend of the show wxl came with me to assist and experience the awesomeness that is OSCON. Over the next few days I'll be posting about the three sessions we taught and many, many sessions we attended. A splendid time is guaranteed for all. To kick things off, here is the code from my Amino talk.

Amino is my JavaScript graphics library I've been working on for a few years. Last year I started a big refactor to make it work nicely on the Raspberry Pi. Once we get X windows out of the way we can talk to the Pi's GPU using OpenGL ES. This makes things which were previously impossible, like 3d spinning globes, trivial on the Raspberry Pi.

For the talk I started by explaining how Node and Amino work on the Raspberry Pi, then showed some simple code to make a photo slideshow (in this case, Beatles album covers).


Next we showed a 3D rotating text RSS headline viewer.

rss viewer

And finally, using the same GeoJSON code from the D3/Canvas workshop, a proper rotating 3D globe.


Hmm... Did you ever notice that the earth with just countries and no water looks a lot like the half-constructed Death Star in Return of the Jedi?

Of course, my dream has always been to create those cool crazy computer interfaces you see in sci-fi movies. You know, the ones with translucent graphs full of nonsense data and spinning globes. And even better, we made one run on the Raspberry Pi. Now you can always know the critical Awesomonium levels of your mining colony.


Source for the demos on DropBox.

Color 101

Color is important. We see in color. Different colors make us feel different ways, or remember different things. Most companies take their colors very seriously. Coca-Cola red is a trademarked color. Pantone has copyrighted their color sets. The Brits feel it necessary to add an extra letter to their colours.

Color is powerful but also dangerous. One false step and your carefully crafted website could look like this (worst.png). Okay, so maybe it won't be that bad, but your choice of color has a huge effect on your designs. Still, color doesn't have to be scary.

In this article I'm going to cover the basics of color, how color works from a physics and technical perspective, and some quick tips on choosing great colors. In a future article I'll come back and cover the emotional aspects of color, and how to integrate color with other elements of design. This article will definitely not be exhaustive. Many PhDs have been earned exploring the expansive topic of color. Fortunately we just need to know the basics to begin using well balanced colors.

The Eye

The first thing to remember is that color doesn't really exist. Color is simply the sensation in the brain caused by our eyes responding to different frequencies of light. You may have taken a physics class in high school where your teacher talked about the visible spectrum of light, which is the set of frequencies the human eye can see. You probably saw diagrams like this (visible.png: the rainbow represents the visible spectrum of light, overlaid with the human eye's response).

Our human eyeballs contain three kinds of color receptors that each respond to different ranges of light. These three ranges roughly correspond to red, green, and blue frequencies of light (though only roughly). It's this correspondence that makes the Red/Green/Blue (RGB) color model so convenient for electronic displays. By combining red, green, and blue a monitor can simulate most colors in the visible spectrum. Note that this is only a rough correspondence. There are colors our eyes can see which monitors cannot reproduce, as well as colors screens can show that our eyes can't really see. However, it's close enough to work pretty well in practice. (See this Wikipedia article for more on color vision.)

Subtractive and Additive Color Models

In order to represent color on a computer screen we need to have a model for color. There are roughly two kinds of color models: subtractive and additive. With subtractive color the primaries combine to equal black. With additive color the primaries combine to equal white.

Subtractive color is how objects in the real world reflect light (it is often called reflective color for this reason). If you shine a white light on a red ball you will see only red because the ball's paint absorbs all of the frequencies of light other than red. The other colors have been subtracted from the white, leaving only red. In subtractive color the primaries are red, blue, and yellow, which you may remember from art class as a child. That's because paint is subtractive (and why mixing all of the colors together resulted in a dark brown mush).

Additive color is used whenever something generates light rather than reflecting it, such as the phosphors in a traditional CRT or the LCD segments in a modern flatscreen display. Red, green, and blue are added in various amounts to create different colors, ultimately adding up to white. This is the last I will discuss subtractive color, at least until we come back to printing technologies (which reflect light and are therefore subtractive). Additive color is used in virtually all interactive computer displays (the reflective electronic paper display made by eInk for Amazon's Kindle is a notable exception). Since at least 99% of my likely audience is working with additive color displays, that's all I'll cover here.

Basic Additive Color Models

So that's it for the theory of light. Now let's talk about how computers actually put colors on screen. If you've worked with computer graphics at all you are probably familiar with the RGB color model. In this model a color is described by varying amounts of red, green, and blue. Virtually all computer systems use this model internally because it fits the hardware well and can be easily represented with a 32-bit integer, using 8 bits for each color plus another 8 for transparency (called the alpha channel for historical reasons). You can think of RGB as a cube with the value of each axis coming from one of the three colors (rgbcube.png: the RGB color cube). RGB is the most efficient format for the modern computer to use, but it may not be the best for you, the programmer, to use. There is another common color model which you are likely to run across.
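To make the 32-bit packing concrete, here's a quick sketch of my own (the function names are mine, not a standard API) of packing and unpacking an ARGB pixel in JavaScript:

```javascript
// pack four 8-bit channels into one 32-bit integer (ARGB byte order assumed)
function packARGB(a, r, g, b) {
    return ((a << 24) | (r << 16) | (g << 8) | b) >>> 0; // >>> 0 keeps it unsigned
}

// pull the channels back out with shifts and masks
function unpackARGB(pixel) {
    return {
        a: (pixel >>> 24) & 0xFF,
        r: (pixel >>> 16) & 0xFF,
        g: (pixel >>> 8)  & 0xFF,
        b:  pixel         & 0xFF
    };
}

packARGB(255, 255, 0, 0); // opaque red: 0xFFFF0000
```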

HSV / HSB Color Model

Hue, Saturation, and Value (HSV) is the color model you are most likely to run into. It is sometimes called HSB, where the B stands for brightness instead of value. This model works hand in hand with the RGB color model, giving you a new way to specify colors which can then be converted into RGB for display.

Think of HSV as a cylinder (hsbcylinder.png: the HSV color cylinder) where the hue is an angle (0 to 360) representing the pure color (red, magenta, yellow, etc.). The distance from the center is the saturation, going from white at the center (0.0 == no saturation) to the pure color at the edge (1.0 == fully saturated). The height within the cylinder represents the value or brightness of the color, going from completely bright (1.0) to no brightness (black, 0.0).

There is also a related color model, HSL/HSI, where the third component is lightness or intensity. HSL is similar to HSV except that saturation and brightness are arranged differently, represented as a double cone with the pure colors at the center. In practice all of the various computer color models convert into RGB for display, so which one you use is largely a matter of convenience for the particular task you are doing.

That's enough theory for today. Here are a few quick tips that will help you pick and use color in your software designs.
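To show how HSV "works hand in hand" with RGB, here's a sketch of the standard HSV-to-RGB conversion (my own code, assuming hue in degrees and saturation/value in 0..1):

```javascript
// convert an HSV triple into an [r, g, b] array of 0..255 values
function hsvToRgb(h, s, v) {
    var c = v * s;                       // chroma: how far from gray we are
    var hp = (h % 360) / 60;             // which sextant of the hue wheel
    var x = c * (1 - Math.abs(hp % 2 - 1));
    var rgb;
    if (hp < 1)      rgb = [c, x, 0];
    else if (hp < 2) rgb = [x, c, 0];
    else if (hp < 3) rgb = [0, c, x];
    else if (hp < 4) rgb = [0, x, c];
    else if (hp < 5) rgb = [x, 0, c];
    else             rgb = [c, 0, x];
    var m = v - c;                       // add the brightness back in
    return rgb.map(function(n) { return Math.round((n + m) * 255); });
}

console.log(hsvToRgb(0, 1, 1));  // pure red: [255, 0, 0]
console.log(hsvToRgb(0, 0, 1));  // no saturation: white [255, 255, 255]
```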

Steal Someone Else's Colors

The easiest way to pick colors is to let someone else do it. Go to one of these sites and search for something which matches your needs. For example, if I'm working on a design that feels like harvest or fall colors I could search for 'autumn'.

Steal Some*thing* Else's Colors

Pick colors from some existing source, such as a photo or object that you think is pretty. For example, Ken Orr and Kathryn Huxtable started building a very nice Swing theme based on a photo of sea glass (seaglass.png). If something looks pretty then it probably already has a good, balanced color scheme; so steal it!

The Sea Glass look and feel has a simple webapp which will turn a photo into a color palette. I used it on the photo below to create a nice tropical sea palette (FirefoxScreenSnapz005.png).

Pick a Color and Change One Thing

Monochromatic designs are easy to do and usually look good. Take one color and change one thing, such as its brightness or saturation. This is especially good for creating gradients. That's all for this week. Please let me know if there are any particular topics you'd like me to cover.


All diagrams above are from Wikipedia.


Today I'm doing a three hour hands on tutorial at OSCON on HTML Canvas. All you need is a text editor, Chrome, and basic JavaScript knowledge. By the end of the session you'll know a ton about Canvas and have built your own little video game that can run almost anywhere. The full lecture notes and hands on lessons are here.

Today I'm proud to announce a project I've been working on for the past few months called Leonardo. I've long believed there's a need for a good desktop drawing app that is completely cross platform, free, and open source. Leonardo is that app.

My vision is a drawing tool for the 21st century with a focus on usability, speed, and powerful plugins. The core of the app is written in Java, but more and more of the functionality will be built in JVM based dynamic languages like JRuby, JavaScript, and Jython.

Currently Leonardo is meant for doing UI mockups, simple multi-page presentations, and sketching; but we plan to add more document types and features in the future.

Please take a look and tell me what you think (or better yet, join the dev list). We definitely need help in a lot of areas, so the more feedback the better.

 - Josh

Most book publishers don't really have a 'brand'. You buy a book because of the title or the author. No one cares who Stephen King's publisher is. However, every now and then a publisher comes along who simply makes cool books, a publisher whose books I will buy regardless of the title or author. No Starch Press is one such publisher.

No Starch has consistently published fascinating books on a wide array of topics. Their slogan is "the finest in geek entertainment" and they mean it. Currently my shelf includes an illustrated Periodic Table of Elements, Python for Kids, and the Super Scratch Programming Adventure. I'll be writing these up soon, but in the meantime I want to talk about a trilogy of cool books No Starch just sent me. They cover a topic near and dear to many an engineer and hacker: Legos.

Lego Technic Builders Guide

The first title is The Unofficial LEGO Technic Builder's Guide by Pawel "sariel" Kmiec (link). I've read Lego books before, but this is no ordinary title. Devoted exclusively to LEGO's Technic line of models for older children/adults, it dives deeper than I knew Legos could go. While I played with the Technic line as a child, I had no idea the wider Lego enthusiast community had taken their designs so far.

If you've never played with Technic sets, you should really take a look. They are designed around real-world models with functioning gears, motors, and mechanical assemblies. Think of a sports car with rack and pinion steering and a proper differential drive train, or a pneumatically controlled steam shovel. These sets are also the basis of Lego's robotics sets, as they allow you to build almost arbitrarily complex systems entirely from little Lego bricks.

At first glance you might think Pawel's book is just a set of models to build, but it's actually the opposite. He specifically doesn't give you models to build. Instead he focuses on the how, not the what. He explains how a differential drive train actually works, then gives you the instructions to build just that component. Then he shows how to improve it with more robust parts and smoother gears. His goal is to teach you *how* the mechanical systems work, giving you the knowledge to integrate the components into your own larger models.

Toy parts, real engineering

As I read through the first few chapters I realized I was not only learning about advanced Lego techniques, but also an introduction to real mechanical engineering. In Chapter 5 he covers the many types of gears, what they are used for, and the unique advantages of each. Chapter 6 covers chains and pulleys, where he explains many of the common pulley combinations used in real mechanical systems. The illustrations are impressively clear and concise. Chapter 7 covers levers and linkages. Even if you've never heard of a Chebyshev linkage or Pantograph, you'll know how they work by the end of the chapter.

My favorite section is probably Chapter 8: Custom Mechanical Solutions. Pawel shows how to improve on the standard LEGO gearing systems with new designs that have unique advantages in strength, locking, and features. The Schmidt coupling is especially intriguing. Later chapters cover advanced mechanics like suspension systems, transmissions, tracked vehicles and the modeling process. The book is impressively comprehensive.

The book covers both old and newer Technic sets, explaining how individual components like the pneumatic system have evolved over the years. (I'm especially fond of the pneumatic claw system I had when I was twelve).

I heartily recommend this book to any adult. Be aware that this really is for older children, probably 12 and up. I imagine an 8-year-old would be bored to tears reading it. For younger Lego enthusiasts I suggest one of No Starch's other Lego books (which I will cover soon).

So. Verdict: buy or no buy? Buy! Buy Now!

Though normally $29.95, you can get it by Christmas for only $19 if you order with Amazon Prime.

This week at OSCON I gave an updated version of my 3-hour Canvas tutorial. I think the session was well received. It was one of only a few tutorials that completely sold out, with all 200 seats taken. But if you couldn't be one of those two hundred, I don't want you to miss out.

I turned the content from last year's tutorial into an interactive eBook for the iPad and TouchPad. I did this partly to experiment with ebooks and partly to learn how to publish an app in the various stores. In thanks to my webOS fans I sold the TouchPad version for 99 cents while the iPad app cost $4.99. It was a fun experiment but I have decided my real goal is not to make money, but rather to spread knowledge as widely as possible. So I have decided to make it all free, as in beer and speech.

Yes, HTML Canvas Deep Dive is now 100% free

I have dropped the price of the existing apps to 0 (the TouchPad store may not yet reflect this). I have also published the book as a set of pages on the web so you can easily read and link to the book without installing an app. Note that the apps don't yet have the new content, just the webpage.

read it now

Even better, there is a bunch of new content I created for this year's OSCON. We now have new chapters on WebGL, WebAudio, and getUserMedia (webcam), as well as fixes for various existing chapters. Deep Dive is now over 20k words!

But, I need your help.

First, please spread the word about my book. I want it to be the definitive introduction to Canvas.

Second, I would love some help both with programming and design. I have created iOS and webOS versions of the book, but I need some help supporting Android, WinPhone / Win8RT, and PlayBook 2.0. I would also like some help from a web designer to improve the layout and font selection, especially on mobile devices.

Finally, please read the book and give me feedback. If you find spelling errors, code bugs, or an unclear description, please let me know. If you are interested in contributing new chapters I'd love that as well.

Working on this project has really been a labor of love. Thank you for your support.


Installing Node on a Raspberry Pi used to be a whole lot of pain. Compiling a codebase that big on the Pi really taxes the system, plus there are the usual dependency challenges of native C code. Fortunately, the good chaps at have started automatically building Node for Linux ARM on the Raspberry Pi. This makes life so much easier. Now we can install Node in less than five minutes. Here's how.

First, make sure you have the latest Raspbian on your Pi. If you need to update it, run:

sudo apt-get update
sudo apt-get upgrade

Node and NPM

Now install Node itself

tar -xvzf node-v0.10.2-linux-arm-pi.tar.gz
node-v0.10.2-linux-arm-pi/bin/node --version

You should see:

v0.10.2
Now set the NODE_JS_HOME variable to the directory where you un-tarred Node, and add the bin dir to your PATH using whatever system you prefer (bash profile script, command line vars, etc.). In my .bash_profile I have:


Now you should be able to run node from any directory. NPM, the node package manager, comes bundled with Node now, so you already have it:

npm --version



Native code

If you are just working with pure JavaScript modules then you are done. If you need to use or develop native modules then you need a compiler and Node's native build tool, node-gyp. The compilers should already be installed with Raspbian. Check using:

gcc --version

Install node-gyp with:

npm install -g node-gyp

Now any native module should be compilable.

That’s it. Node in 5 minutes.


As regular readers know I have recently jumped into Arduino and hardware hacking full-time. One of the things which fascinates me is the idea of monitoring our environment. I mean not only the global environment but also our own local spaces. Sensors and computation are incredibly cheap. Network access is almost ubiquitous. This means we can easily monitor our world and learn interesting things by analyzing simple data points over time.

Being an engineer I started by picking out some books to read. First up is an amazingly thin but info-packed tome from Maker Press: Environmental Monitoring with Arduino: Building Simple Devices to Collect Data About the World Around Us by Emily Gertz and Patrick Di Justo. As the name would suggest, it is exactly the book I was looking for.

Before I continue I must warn you, reading this book will make you spend a lot of money. You will find yourself spending hours checking out cool sensors and outputs on websites like Adafruit, SparkFun, and Emartee. Tracking your environment with simple sensors is simply too intriguing. I apologize in advance for the new habit you will form.

Though short (81 pages by my count), Environmental Monitoring with Arduino contains a lot of information. It starts with a chapter called "The World's Shortest Electronics Primer" introducing Arduino, basic electronics, and then runs through an LED blinking tutorial. From here you jump straight into your first sensing application: a noise monitor with an LED bar graph.

The book is organized in mostly alternating chapters. Each chapter either introduces a new piece of hardware or a project using that hardware. The chapters cover how to measure electromagnetic interference, water purity, humidity / temperature / dew point, and finally atomic radiation as used by individuals in the wake of the Fukushima disaster.

The components required to build most of the projects in the book are surprisingly cheap. For example, Emartee's mini sound sensor, a tiny board containing a microphone and the support circuitry, is only seven dollars.

The only really pricey component is the Geiger counter from Goldmine that costs $137. Of course, it uses a special beta- and gamma-sensitive Geiger-Müller tube from Russia, so for what it is it's actually fairly cheap. Most components are under $10.

While I love the book there are a few things that could be improved. Each chapter contains a few paragraphs explaining what we are measuring and how it works (water conductivity was especially interesting), but I'd like to learn more about the science behind each effect. This probably isn't possible in a book of this size, so links to external websites would be greatly appreciated.

I'd also like an appendix with links to learn more about Arduino and environmental sensing, as well as a list of sites to buy cheap components that are easy to work with.

Finally, there is no information on the authors. Most books include a short bio or an introduction by the authors to explain who they are and why you should listen to them. This book contains no biographical information at all beyond the authors' names.

Go - No Go?

A definite go.

If you are new to environmental monitoring this book is a great place to start, even if you know nothing about electronics and Arduino. And for the price ($7.99 for the print copy and half that for ebook) it's a steal. You can get it on:

I finally watched Code Rush this weekend, a documentary about the open sourcing of Mozilla and the sale of Netscape to AOL in the late 90s. There is no doubt that Netscape created the web as we know it. The web changed everything. But I wonder about Mozilla itself. Did open sourcing Mozilla really make a difference to its success?

I know that Firefox has been a great force for good, encouraging competition and pushing the web forward. However, Firefox only became what it is today when the Mozilla Foundation decided to stop the end-all-be-all Communicator effort and instead focus on creating a single product: the world's best browser. Firefox, while open source, is primarily developed by Mozilla, which is a company. They may be a non-profit, but they have income from search deals that they use to pay staff. Did being open source make a difference? Couldn't a closed source Mozilla do the same thing?

I've been thinking about this all weekend and I'm still not sure. On the one hand being open source built up a community of interest around it, a rallying point for the open source world, even if most of the actual coding came from paid employees. On the other hand, Firefox's userbase is somewhere around 20% of the internet. That means most Firefox users don't know or care what open source is. They use it because it's a good browser.

In the end I think open sourcing ended up being a benefit. Not because it allowed tons of Mozilla forks and variations. I can't think of any high profile Mozilla forks, actually. No, I think the real value was that it freed the Mozilla code base from Netscape Inc. It was insurance. No matter what happened to Netscape the code would always be free, with the option to one day create a non-profit around the code in case the community didn't like what Netscape did with it. And in the end, after AOL bought the company, that's exactly what happened.

So, perhaps my real question is: did the people making the decision to open source the Mozilla code base have this in mind when they did it, or is it a fortunate accident of history? Perhaps we will never know.

By what standard should we measure if code is "beautiful"? I argue it should be not lines or speed, but rather conciseness and clarity. Can someone who is not familiar with the language still understand what the algorithm does? Can someone not familiar with the task still get a feel for how it works? This metric favors shorter code over longer, but not at the expense of readability. Beautiful code should be as close to expressing the underlying algorithm as possible. How close is the actual code to the most straightforward pseudocode?

Some will argue that an elegant implementation is useless if it is too slow; but what if the elegant version is only 5% slower? Is that a good tradeoff? Almost certainly. What if it's 10x slower but only runs for 1% of the time? Then the tradeoff might still be worth it. It depends. Certainly it's good to start with the clear and concise version and only move to a more complex implementation if the speed profiling warrants it. This is what I like to call concise computing.


Concise code is easier to read, easier to write, and much, much, much easier to maintain. Fewer lines of code means fewer places for bugs to hide. But let me take it from another angle:

If you open up the typical programming library, especially an older one, you will find a bunch of complex code, often difficult to understand. This is the case even if the underlying task should be straightforward. Why should code be like this?

I think there are two reasons. First, the programming language used to write the library wasn't expressive enough to write the implementation in a way that looks like the algorithm. If a task is naturally described using objects then a language without direct object support will be at a disadvantage. If the task is really about matching patterns, then a language without pattern matching will necessarily represent the algorithm in a less elegant way. And of course once written we try not to rewrite our code for fear of breaking things. Code is instant legacy.

The other reason for complex libraries is speed. While I can describe sound synthesis using pure math... math is expensive to compute. A set of pre-calculated wavetables will be far faster than raw cosine functions. Changing an algorithm to be fast often makes the code hard to understand. But is this really a good excuse?
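To make the wavetable point concrete, here's an illustrative sketch (the names and table size are mine) of the same sine oscillator written the concise way and the fast way:

```javascript
// concise version: just call Math.sin, with phase expressed as 0..1 of a cycle
function sineDirect(phase) {
    return Math.sin(2 * Math.PI * phase);
}

// fast version: pre-compute one cycle into a table, then do lookups
var TABLE_SIZE = 1024;
var table = new Float64Array(TABLE_SIZE);
for (var i = 0; i < TABLE_SIZE; i++) {
    table[i] = Math.sin(2 * Math.PI * i / TABLE_SIZE);
}
function sineTable(phase) {
    // round the phase to the nearest table entry
    return table[Math.round(phase * TABLE_SIZE) % TABLE_SIZE];
}

sineDirect(0.25); // 1 (sine of a quarter cycle)
sineTable(0.25);  // also 1, to within the table's resolution
```

The table version trades memory and a little accuracy for speed; the direct version reads exactly like the math.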

Our software doesn't have to be fast. It has to be fast enough. But "enough" depends entirely on context, and it changes over time as hardware evolves. Math.sin() used to be hugely expensive. When John Carmack wrote Doom he would heavily restructure code just to save a few divides. The code was amazing, but it came at the cost of obscuring the underlying algorithm and being hard to maintain. It doesn't have to be like that anymore.


I did not invent concise computing (though I haven't heard the specific term before). It has been a computing goal for over fifty years. The sweet spot for any implementation has shifted over the years as hardware changes, but I think the time has come for most of our coding to move in the direction of conciseness. We have such an excess of CPU power, even in our mobile devices, that in almost every case the concise version wins over a fast but complex one. However, let me stop with the theoretical discussion and give you a concrete example of what I'm saying.

I recently discovered a language called OMeta. OMeta is actually an extension that lets you write DSLs in a general-purpose host language. My preferred version is OMeta/JS, which extends JavaScript, but there are implementations for Smalltalk, C#, and others. OMeta really exemplifies my thoughts on concise computing.

Alex Warth, the creator of OMeta, realized that many computing tasks, especially writing parsers and compilers, are just different forms of pattern matching. So why not bake matching into the language in a way that is convenient but still lets you get at the power of the host language, and treat parsers as objects that can be extended in various ways? Here's what a simple calculator looks like:

ometa SimpleCalc <: Parser {
   exp = number | sub | add,
   sub = exp:a "-" exp:b -> (a-b),
   add = exp:a "+" exp:b -> (a+b)
}

result = SimpleCalc.matchAll("6+7", "exp");  //result is set to 13

Even if you don't know anything about OMeta you can follow what is going on. This says that exp is a number or a pattern called sub or add. Then it defines sub and add with recursive references back to exp. The -> part says that the matched expression goes to a small JavaScript expression, in this case the part that does the actual math.

This code looks a lot like your typical BNF grammar. The magic here is that everything after the -> part is regular JavaScript. Instead of (a+b) it could be console.log("doing an add"), or call some other function to add in base 16 math, or generate C functions. Normally I would have to break out to a special library with weird syntax, or write in a completely different language. OMeta lets us implement algorithms that are naturally expressed with matching, but still remain in our general-purpose host language, not some special matching-only parser. This is concise computing.

OMeta was created as part of Alex Warth's graduate thesis. You should definitely check it out. For an academic paper it is highly readable. Alex continued his work at VPRI, a research group set up by Alan Kay. Yes, that Alan Kay. Though Alex's OMeta is an excellent improvement, the original idea goes back to META II, a meta-compiler created back in 1962! The battle for concise computing is far from new.

A PNG Parser in OMeta

Most public uses of OMeta show small examples like the calculator above or language compilers, which is probably its most obvious application. (I myself am working on my own cross-language system with OMeta, which I'll detail in a future blog post.) Earlier this month I showed you a markdown word processor with embedded arithmetic expressions built in OMeta. However, today I want to show you a different kind of task: parsing a binary PNG file in JavaScript.

PNG is a fairly simple data format, yet many parsers are surprisingly complex. As documented in Wikipedia, PNG consists of a header followed by a series of chunks. Each chunk has a length, type, and payload. Most of the chunks contain metadata about the image. Only one, IDAT, contains a bitmap. The example below does not decompress the actual image data (which requires a DEFLATE decompressor), but rather parses and validates all of the metadata chunks; the most likely use for an external parser.

Parsing binary data in Javascript has always required hacks because JS has no native binary types. Fortunately, the W3C and Khronos recently remedied this situation with Typed Arrays, a new data type built to support textures and vertices for WebGL. Using typed arrays we can fetch a PNG file from a server and convert it into a large buffer; like this:

function fetchBinary() {
    var req = new XMLHttpRequest();
    req.open("GET","icon.png",true);
    req.responseType = "arraybuffer";
    req.onload = function(e) {
        var buf = req.response;
        if(buf) {
            var byteArray = new Uint8Array(buf);
            console.log("got " + byteArray.byteLength + " bytes");
            var arr = [];
            for(var i=0; i<byteArray.byteLength; i++) {
                arr[i] = byteArray[i]; //copy into a plain array for OMeta
            }
        }
    };
    req.send(null);
}
The Uint8Array is an array of unsigned bytes (8-bit 'words'). With this array in hand we can parse the entire structure in less than 20 lines of OMeta. Again, even if you don't know how OMeta works you can get a vague idea of what the program does by reading it.

ometa BinaryParser <: Parser {
    //entire PNG stream
    start  = [header:h (chunk+):c number*:n] -> [h,c,n],
    //chunk definition
    chunk  = int4:len str4:t apply(t,len):d byte4:crc
        -> [#chunk, [#type, t], [#length, len], [#data, d], [#crc, crc]],
    //chunk types
    IHDR :len  = int4:w int4:h byte:dep byte:type byte:comp byte:filter byte:inter
        -> {type:"IHDR", data:{width:w, height:h, bitdepth:dep, colortype:type, compression:comp, filter:filter, interlace:inter}},
    gAMA :len  = int4:g                  -> {type:"gAMA",value:g},
    pHYs :len  = int4:x int4:y byte:u    -> {type:"pHYs", x:x, y:y, units:u},
    tEXt :len  = repeat('byte',len):d    -> {type:"tEXt", data:toAscii(d)},
    tIME :len  = int2:y byte:mo byte:day byte:hr byte:min byte:sec
        -> {type:"tIME", year:y, month:mo, day:day, hour:hr, minute:min, second:sec},
    IDAT :len  = repeat('byte',len):d    -> {type:"IDAT", data:"omitted"},
    IEND :len  = repeat('byte',len):d    -> {type:"IEND"},
    //useful definitions
    byte    = number,
    header  = 137 80 78 71 13 10 26 10    -> "PNG HEADER",        //mandatory header
    int2    = byte:a byte:b               -> bytesToInt2(a,b),    //2 bytes to a 16bit integer
    int4    = byte:a byte:b byte:c byte:d -> bytesToInt(a,b,c,d), //4 bytes to 32bit integer
    str4    = byte:a byte:b byte:c byte:d -> toAscii([a,b,c,d]),  //4 byte string
    byte4   = repeat('byte',4):d -> d
}
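The grammar leans on a few plain JavaScript helpers that aren't shown above. Here is a sketch of what bytesToInt, bytesToInt2, and toAscii might look like; this is my reconstruction, not the original source:

```javascript
// 4 big-endian bytes -> unsigned 32-bit integer
function bytesToInt(a, b, c, d) {
    return ((a << 24) | (b << 16) | (c << 8) | d) >>> 0;
}
// 2 big-endian bytes -> 16-bit integer
function bytesToInt2(a, b) {
    return (a << 8) | b;
}
// array of byte values -> ASCII string
function toAscii(bytes) {
    return bytes.map(function(b) {
        return String.fromCharCode(b);
    }).join("");
}
```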

This parser returns a JSON structure that we can loop through to find any chunk we care about.

chunk: {"type":"IHDR","data":{"width":33,"height":36,"bitdepth":8,"colortype":2,"compression":0,"filter":0,"interlace":0}}
chunk: {"type":"gAMA","value":55555}
chunk: {"type":"pHYs","x":2835,"y":2835,"units":1}
chunk: {"type":"tEXt","data":"Software0x0 QuickTime 6.5.2 (Mac OS X)0x0 "}
chunk: {"type":"tIME","year":2005,"month":4,"day":5,"hour":17,"minute":6,"second":20}
chunk: {"type":"IDAT","data":"omitted"}
chunk: {"type":"IEND"}

In a sense the code above is not really an algorithm, but rather a definition of the PNG format: a runnable specification. For an even more impressive example, the OMeta guys created a TCP/IP parser which uses the actual ASCII art header diagrams from the RFC as the parser code itself.
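Once you have the parsed chunk list, pulling out the information you want is a simple loop. For example, grabbing the image dimensions (findChunk is a helper name I made up; the chunk objects mirror the output shown above):

```javascript
// find the first chunk of a given type in the parsed output
function findChunk(chunks, type) {
    for (var i = 0; i < chunks.length; i++) {
        if (chunks[i].type === type) return chunks[i];
    }
    return null;
}

// a couple of chunks in the same shape as the parser output above
var chunks = [
    {type:"IHDR", data:{width:33, height:36, bitdepth:8}},
    {type:"IEND"}
];
var ihdr = findChunk(chunks, "IHDR");
console.log(ihdr.data.width + "x" + ihdr.data.height); // 33x36
```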


What I've shown you today is merely a hint of what concise computing could be. Imagine an entire operating system, from boot to GUI, complete with apps, written in less than 20,000 lines of code. It sounds crazy, yet that is what VPRI has done.

By writing the most clear and concise code possible we can make our software more maintainable and longer lived. Next time I'll show you the same principle applied to audio synthesis. Feel free to ask questions and continue the discussion.

Lately I've been experimenting with Functional Reactive Programming, or FRP. There are several good libraries for FRP in JavaScript; I chose Bacon.js. I'm finding FRP to be very useful but hard to understand.

The way FRP is explained often leads to confusion. Most people know what the P is, and the F seems fairly understandable, but the R causes confusion. Examples usually talk about the difference between expressions and statements. Rather than c = a + b setting a value right now, it is an expression which defines that c is always a plus b. It defines a relationship. In another life I would have called this a binding expression. I suppose expressions get to the heart of what FRP is, but that doesn't really explain why it's useful. Let me try another way.

FRP is this: working with streams of values that change over time.

Perhaps an example would help. Imagine moving a mouse over your browser. It produces a stream of x and y values. Rather than using a callback for every mouse move, we can work with the mouse events as a single object over time: an event stream. Suppose we want to know when the mouse moves past a line on the screen at 100px. In regular code we could do this:

$("body").on('mousemove', function(e) {
     if(e.pageX > 100) {
          console.log("we are over 100");
     }
});

With FRP we would create a stream based on mouse move, then filter it to only have X values over 100, like this:

$("body").asEventStream('mousemove')
     .map(function(e) { return e.pageX; })
     .filter(function(v) { return v > 100; })
     .onValue(function(v) {
          console.log("we are over 100: " + v);
     });

This is roughly the same amount of code so it doesn't seem like a big improvement. Trust me, it is. We have separated the action, printing a message, from the source of the stream and any filter operations. We can also add more operations to the stream if we want, and abstract the filters out further.
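If the stream machinery feels magical, here is a toy version showing how filter and onValue can be layered over a push-based source. This is not how Bacon.js is implemented; it is just a sketch of the core idea, values arriving over time and flowing through a pipeline:

```javascript
// minimal push-based event stream: values arrive over time via push()
function Stream() {
    this.listeners = [];
}
Stream.prototype.onValue = function(fn) {
    this.listeners.push(fn);
    return this;
};
Stream.prototype.push = function(v) {
    this.listeners.forEach(function(l) { l(v); });
};
Stream.prototype.filter = function(pred) {
    var out = new Stream();
    this.onValue(function(v) { if (pred(v)) out.push(v); });
    return out;
};

// same shape as the mouse example: keep only values over 100
var xs = new Stream();
var seen = [];
xs.filter(function(v) { return v > 100; })
  .onValue(function(v) { seen.push(v); });
xs.push(50);   // filtered out
xs.push(150);  // passes through
```

Bacon.js of course does far more than this: laziness, unsubscription, properties, and the buffering combinators we'll use next.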

//create a stream of clicks
var clicked = $('body').asEventStream('click');
//collect up to two clicks within 300ms
clicked.bufferWithTimeOrCount(300, 2)
     .filter(function(x) {
          return x.length == 2;
     })
     .onValue(function(x) {
          console.log("double clicked: ", x);
     });

The magic part here is bufferWithTimeOrCount(). It buffers the clicks, sending them to the next stage in the pipeline only when it has two samples or 300ms have passed since the first click. This means the filter function will get an array that is either one or two elements long. If it's one element then it clearly wasn't a double click, so we filter to keep only the two-element buffers. We already know the clicks are no more than 300ms apart, so we are done.

This code still looks a bit clunky, though. I don't like all of those nested functions. JavaScript has a way to deal with this: partial application and function references. In JavaScript a function is a real object, so you can pass it around in variables. Let's make some reusable functions for printing to the log and for determining if the length of an array is 2.

var isTwoValues = function(x) { return (x.length == 2); }
var log = function(str) {
     return function(obj) {
          console.log(str, obj);
     };
}

isTwoValues is pretty simple: return true if the array is two elements long. The log function is a bit trickier. It doesn't print to the console. Instead it returns a new function which prints to the console. However, it still remembers the first string passed in, so the new function can combine the original string with whatever new object is to be printed. In functional programming this is called a partially applied function. The returned function already has the first argument, str, applied. Later, when it gets the second argument, the function is fully applied. These sorts of functions are very handy when dealing with deferred programming like stream processing and event handling. Let's update the code from before using our new functions:

clicked.bufferWithTimeOrCount(300, 2)
     .filter(isTwoValues)
     .onValue(log("double clicked:"));

Ah. Much nicer looking.
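Partial application isn't specific to logging. A throwaway example, assuming nothing beyond plain JavaScript, where the first argument is applied now and the second later:

```javascript
// a curried adder: add(10) returns a new function with 10 already applied
function add(a) {
    return function(b) { return a + b; };
}
var addTen = add(10);    // first argument applied now
console.log(addTen(5));  // second argument applied later: 15
console.log(addTen(32)); // 42
```

The returned function closes over a, exactly the way log closes over its str argument above.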

Using this same system we could implement a press and hold detector like this:

     .filter(function(x) { return (x[0] == true); })
           //x[0] = true if it's a mouse down, false if it's a mouse up

Now let's try something really tricky to show off the power of this approach: a swipe detector. A swipe on a touch screen is when the cursor point is moving faster than a certain velocity. Velocity is tricky, however, because we need to calculate an average over multiple samples, do it in both the x and y directions, and only when the mouse is down. First let's filter out mouse moves when the mouse isn't down, then calculate the difference between each event and the previous one.

    .diff(null, function(a,b) {
            if(a) {
                return {
                    dx: Math.abs(b.pageX-a.pageX),
                    dy: Math.abs(b.pageY-a.pageY),
                    dt: b.timeStamp - a.timeStamp
                };
            } else {
                return { dx:0, dy:0, dt:0 };
            }
    })

Pretty straightforward. Now we want to calculate the average of the last five samples. The slidingWindow function makes this easy.

    //the five most recent samples
    .slidingWindow(5)
    //calculate the sum of the samples and the velocity
    .map(function(v) {
            var tx = 0; var ty = 0; var tt = 0;
            for(var i in v) {
                tx += v[i].dx;
                ty += v[i].dy;
                tt += v[i].dt;
            }
            var txy = Math.sqrt(tx*tx + ty*ty);
            return txy/tt*1000;
    })

Finally, let's only fire an event when the velocity is greater than 2000 pixels per second:

    .filter(function(v) { return v > 2000; })
    .onValue(log("current velocity in pixels per second ="));

See this code in action here.
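As an aside, the averaging math is easy to get wrong, so it can help to pull it out into a pure function that can be tested on its own. This is my refactoring, purely illustrative, not part of the original demo:

```javascript
// given an array of {dx, dy, dt} samples, return velocity in pixels/sec
function velocity(samples) {
    var tx = 0, ty = 0, tt = 0;
    samples.forEach(function(s) {
        tx += s.dx;
        ty += s.dy;
        tt += s.dt;
    });
    return Math.sqrt(tx*tx + ty*ty) / tt * 1000;
}

// 300px of horizontal motion over 100ms is 3000 px/sec
console.log(velocity([{dx:300, dy:0, dt:100}])); // 3000
```

In the stream version this would just replace the anonymous function passed to map after the slidingWindow stage.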

Now we have a swipe detector that is robust and extendable. Most importantly, all of these detectors can coexist without interfering with each other, and without a bunch of state variables lying around. To improve this we could make it indicate which direction the swipe went, and whether it started on the edge of the screen or in the middle. FRP makes it easy to add these features without breaking existing code.

To learn more about Bacon.js, check out the github project and these tutorials.

Hi. My name is Josh Marinacci. You might remember me from the webOS Developer Relations team. Despite what happened under HP, webOS is still my favorite operating system. It still has the best features of any OS and an amazing group of dedicated, passionate fans. I deeply cherish the two years I spent traveling the world telling everyone about the magic of webOS.

However, I’m not here to talk to you about webOS. I want to tell you about my brother-in-law, Kevin Hill. Two years ago he was diagnosed with stage 4 melanoma. If you know anything about melanoma, then you know this was 100% terminal just a few years ago. Kevin and my sister Rachel have traveled the country joining every experimental trial to beat this. You can read about their amazing story on their site: The Hill Family Fighters.

The Hill Family

Kevin and Rachel have had amazing success but recently hit a roadblock: the scan a few weeks ago showed a spot on his brain. It’s a testament to his strength that he has continued to work remotely as a sysadmin through all of this (even using his TouchPad from the hospital), but the time has come to slow down. We have not given up hope that he can beat it, but this latest development means he must finally quit work and focus on staying alive.

It will be 90 days before his long term disability kicks in. That's 90 days without income, and just 50% of his salary after that. My sister works part time as a brilliant children’s photographer, but spends most days taking care of Kevin and their two little children, Jude and Evie. We’ve calculated that they need at least ten thousand dollars to Bridge the Gap and get them to the end of the year. This is where you come in.

I am auctioning off my entire collection of webOS devices and swag to help them cover the bills and fight the cancer.

I will be holding an online auction selling everything I have. There will be devices like Pre2s and Pre3s. Limited edition posters and beer steins. My personal water bottle that saved my life in Atlanta (it still bears the dents). In addition some of my former Palm co-workers are donating their own devices and swag. And the highlight is an ultra-rare Palm Foleo with tech support from one of the original hardware engineers.

With your help we can Bridge the Gap for the Hill family. Please add your name to my mailing list. I will send you a note soon when I have a final date for the auction. This list is just for the auction and will be destroyed afterward. Even if you aren’t interested in buying anything I could really use your help getting the word out. Let’s hit the forums, the blogs, and the news sites. The webOS community is the best I’ve ever worked with and I need your help one last time.

Thank you.


Sign up to be notified about the auction

If you saw my tweet about porting Chrome to the Roku, I'm afraid it was, indeed, an April Fools joke. I didn't actually rewrite Chrome in a TV scripting language. However, I did build something cool.

When you build software you have to map between two things. First is the representation that you develop with: your code, your graphics in Photoshop, your CSS, whatever it is. It's the thing you actually manipulate. Then you have the actual visual representation of the thing you are building: the app running on a real device, the page in the browser, the executing game, whatever it is you are actually making. I believe software improves if we can minimize the distance between those two representations.

I suggest you watch Bret Victor's amazing presentation on the topic. It's long (1hr) but completely worth it. I have been a believer in this philosophy of minimizing editing distance for some time, but Bret explains it better than I ever could.

But back to the Roku

The Roku is very easy to develop for, but it still requires writing some code, turning it into an app, and installing it into the device. While not hard, it can be annoying. It also increases the distance between the editing and viewing representations. So, I decided to build a Roku power-up in Leonardo Sketch.

If you are new to my blog: Leo Sketch is an open source vector drawing tool I've been working on for a while. It can export to SVG, PNG, PDF, and JavaScript. The Roku power-up will export your current drawing into a Roku app, then compile and launch it. It is just a static image on the screen at this point, but it's a good start. In the future you will be able to add behavior and animation to the graphics. The Chrome April Fools hack was just a screenshot of Mac Chrome I had lying around, exported to the Roku through this plugin.

Leo Sketch Power Ups

Now you might be wondering what a Leo Sketch 'power up' is. It's a new kind of plugin system I'm working on, currently only available in an experimental branch (the 'powerup' branch, if you want to try it out). Powerups are like plugins except they only have an effect when you explicitly activate them. This solves a lot of problems with traditional plugins, plus it enables a few interesting new things. I'll cover powerups more in a future blog post. For now, just know that they will be awesome, and Leo Sketch will soon be exporting to far more than static image files.

Stay tuned.

When working on big projects I often create little projects to support the larger effort. Sometimes these little projects turn into something great on their own. It's time for me to tell you about one of them: AppBundler.

AppBundler is a small tool which packages up (client side) Java apps with a minimum of fuss. From a single app description it can generate Mac OS X .app bundles, Windows .EXE files, JNLPs (Java Web Start), double-clickable jars, and, as of yesterday evening, webOS apps! I started the project to support Leonardo Sketch but I think it's time for AppBundler to stand on its own.

Packaging Java apps has historically been an exercise in creative swearing. The JVM provides no packaging mechanism other than double-clickable jars, which are limited and feel nothing like native apps. Mac and Windows have their own executable formats that involve native code, and Sun has never provided tools to support them. Java Web Start was supposed to solve this, but it never took off the way the creators hoped and has its own idiosyncrasies. Long term we will have more and more environments with Java available, each with different native package systems. Add in native libs, file extension registration, and other metadata, and now you've got a real problem. After hacking Ant files for years to deal with the issue I decided it was finally time to encode my build scripts and Java lore into a new tool that would solve the issue once and for all. Thus AppBundler was born.

How it works

You create a simple XML descriptor file for your application. It lists the jars that make up your app along with some metadata like the App name and main class. It can optionally include icons, file extensions, and links to native libraries.

<?xml version="1.0" encoding="UTF-8"?>
<app name="Amino Particles">
   <jar name="Amino2.jar"/>
   <jar name="amino_sdl.jar"/>
   <jar name="examples.jar"/>
   <property name="com.joshondesign.amino.impl" value="sdl"/>
   <native name="sdl"/>
</app>

Then you run AppBundler on this file from the command line along with a list of directories where the jars can be found. In most apps you have a single directory with all of your jars, plus the app jar itself, so you usually only need to list two directories. You also specify which output format you want or --all for all of them. Here's what it looks like called from an ant script (command line would be the same).

<java classpath="lib/AppBundler.jar;lib/XMLLib.jar" classname="com.joshondesign.appbundler.Bundler" fork="true">
    <arg value="--file=myapp.xml"/>
    <arg value="--target=mac"/>
    <arg value="--outdir=dist/"/>
    <arg value="--jardir=lib/"/>
    <arg value="--jardir=build/jars/"/>
</java>

AppBundler will then generate the executable for each output format.

What it does

For Mac it will create a .app bundle containing your jars, include a copy of the JavaApplicationStub, generate the correct Info.plist files (Mac-specific metadata files), and finally zip up the bundle. For Windows it uses JSmooth to generate a .EXE file with an icon and classpath. For Java Web Start it will generate the JNLP file and copy over the jars. For double-clickable jar files it will actually squish all of your jars together into a single jar with the right META-INF files. All of the above works with native libraries like JOGL too: for each target it will set the correct library paths and do tricky things like decompress native libs into temp directories. Registering file extensions and requesting specific JREs mostly works.

What about webOS?

All of the platforms except webOS ship with a JVM, or one can be auto-installed on demand (the Windows EXEs do this). There is no such option for webOS, however. webOS has a high-level HTML 5 based SDK and a low-level C based PDK. To run Java on webOS you'd have to provide your own JVM and class libraries, so that's exactly what I've done. The full OpenJDK would be too big to run on a lightweight tablet, and a port would take a team of people months to do. Instead I cross-compiled the amazing open source JVM Avian to ARM. Avian was meant to be embedded and already has an ARM port, so compiling it was a snap.

Avian can use the full OpenJDK runtime, but it also comes with its own minimal classpath.jar that provides the bare minimum needed to run Java code. Using the smaller runtime meant we wouldn't have a GUI like Swing, but using Swing would require months of AWT porting anyway, which I wasn't interested in doing. Instead I created a new set of Java bindings to SDL (Simple DirectMedia Layer), a low-level graphics API available on pretty much every platform. Then I created a port of Amino (my 2D scene graph library) to run on top of SDL. It sounds complicated (and it was, actually), but the scripts hide the complexity.

The end result is a set of tools to let you write graphical apps with Java on webOS. Amino already has ports to Java2D and HTML 5 Canvas (and OpenGL is in the works), so you can easily create cross-platform graphics apps. And now with AppBundler you can easily package them as well. Interestingly, Avian runs nicely on desktops, so putting Java apps into the Mac App Store might now be possible. There are already some enterprising developers trying to get Avian working on iOS.

How you can help.

While functional, I consider AppBundler to be in an alpha state. There are lots of things that need work. In particular it needs Linux support (generate rpms or debs?) and a real Ant task instead of the Java exec commands you see above. I would also like to have it included in Maven and any other useful repo. And as a final request (as long as I have you here), I need some servers to run build tests on. I already have Hudson running on a Linux server; I'd love it if someone could run a Hudson slave for me on their Windows or Mac server. And of course we need lots of testing and bug fixes. If you are interested please join the mailing list.

Client Java Freedom

AppBundler is another facet of my efforts to help Java provide a good user experience everywhere. Apps should always feel native, and that includes the installation and startup experience. I've used AppBundler to provide native installs of Leonardo Sketch on every desktop platform. I hope AppBundler will help you do the same. Enjoy! -Josh



Amino 1.1 is on its way, and despite the small version number bump the changes will be big. We are dropping Java support and heavily refactoring the JavaScript version.

First things first: I'm dropping support for Java. I have gotten essentially no downloads or feature requests for the Java version of Amino, which tells me that almost no one is using it. If you want to do desktop Java graphics then I suggest moving to JavaFX. It is well supported, has received many excellent updates in the past six months, and it's open source now. If I had known JavaFX was going to reboot with a pure Java version with great hardware acceleration then I probably wouldn't have started Amino to begin with. Leo itself will stay on the older version of Amino until I can get it ported, but no new features will go into the Java port. I highly suggest you check out JavaFX 2.1, now in beta. It even has WebKit integration.

Second, I have done a big refactor to the JavaScript port. The API will change only slightly. The big changes are under the hood in the way it handles animation and multiple canvases (canvii?). Now you can have a single core Amino engine per page that supports multiple canvases. They will all repaint quickly with minimal tearing while being as efficient as possible. This is a must on mobile devices, and in ebooks where multiple canvases per page are common.

Other stuff of note:

  • Bitmap Text: support the custom styled font output of Leonardo Sketch using dynamic bitmap sprites.
  • Animate DOM elements as well as shapes on the canvas.
  • Improvements to the animation api to support chaining, parallel animations, callbacks, and before/after actions.
  • Split into three files: core, shapes, and bitmap effects. Now you can do cool animation without including the scripts you don't need.
  • New API documentation.
  • Simple integration with Three.js for 3D objects with 2D canvas
  • Touch events for mobile devices.

Not everything is in the beta build yet, but it will be coming soon. Please try it out and give me feedback.

An open letter to language designers: please, for the good of humanity, kill your sacred cows. Yes, I know those calves look so cute with their big brown eyes and soft leathery skin, but you know what? Veal is delicious!

Let me preface this by saying that I am not writing my own language. I made one a few years ago, and while only a few dozen people were horribly maimed, I still consider it a failure and don't wish a repeat experience. The world does not want a new programming language, especially from me. But since you are hell bent on creating a new one, you might as well make it an improvement.

So open up your looking balls and point them right here. Our sacred cow wilderness safari begins now. Buckle up! It's going to be a long and awkward ride.

Lets start with the big cow, since it leads most of the ones that follow.

Cow #1: source code is ASCII text on disk.

You may think the storage of source code isn't really a language issue, but it is. A programming language isn't just the language anymore; it's the entire ecosystem around the language, including editors, libraries, source control systems, and app deployment. A language in isolation is so useless it might as well be LISP. I don't mean that to demean LISP, the badass godfather of languages. I mean it as: if you aren't thinking about a whole ecosystem then your language will mathematically reduce to LISP, so why are you bothering?

Back to the topic at hand. This is the 21st @#!% century!! Why, for the love of all things pure and functional, are we still storing our source code as ASCII text files on disk? One answer is "that's how we've always done it". Another is that our tools only work with text based code. So what?! I have ten 1GHz devices sitting on my shelf unused and a 4-way multiprocessing laptop. Why are we being held back by the ideal of ASCII text? This cow must be shot now!

If we switch to using something other than text files, say XML or JSON or maybe a database, then many interesting things become possible. The IDE doesn't have to do a crappy job of divining the structure of your code. The structure is already saved. It's the only thing that's saved. The IDE can also store useful metadata along with the source and share it with other developers through the version control system, not through comments or emails. We could also store non-code resources, even non-text resources, right along with the code. Oh, and tabs vs spaces, style issues, etc. become moot. But more on that later.
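As a thought experiment, here is a toy of what "the structure is the only thing saved" could mean: a tiny expression stored as a JSON tree, plus a trivial evaluator. This is entirely hypothetical; no shipping language stores source this way:

```javascript
// a tiny expression stored as a JSON tree rather than ASCII source
var expr = {
    op: "add",
    left:  { op: "lit", value: 2 },
    right: { op: "mul",
             left:  { op: "lit", value: 3 },
             right: { op: "lit", value: 4 } }
};

// the "IDE" never re-parses text; it just walks the stored structure
function evaluate(node) {
    switch (node.op) {
        case "lit": return node.value;
        case "add": return evaluate(node.left) + evaluate(node.right);
        case "mul": return evaluate(node.left) * evaluate(node.right);
    }
}
console.log(evaluate(expr)); // 14
```

Whether you render that tree with braces, brackets, or dingbats becomes a display preference, not a property of the file.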

Oh, and before you ask, putting your files in a directory structure and calling them 'packages' doesn't count. Think beyond the cow corral.

Cow #2: source code must be editable by plain text editors. IDEs should be optional and unnecessary.

Cowpatties! Unless you write your code exclusively with echo and cat, you are already using an IDE. A plain text editor is just an IDE with no useful features. We all use IDEs; the question is how much functionality they provide. When programmers say they shouldn't have to use an IDE, what they mean is they shouldn't be locked into using someone else's favorite IDE instead of the one they prefer. (I wouldn't mind being locked on a desert island with IntelliJ, for example, but I'd commit suicide if I was stranded with Eclipse.)

Rejecting a required IDE is an issue of having specs and multiple implementations, not of the concept of IDEs in general. I don't draw graphics in a text editor; I use Photoshop. The problem is that Photoshop has a closed file format that must be reverse engineered by Gimp. To go without an IDE is to go without 20 years worth of UI improvements. We are building a 21st century language here. Assume an IDE and plan for it. Cow dead. Yummers!

Cow #3. Less syntax is good.

No. While technically a language can express everything using just a nested tree structure (ala LISP), that doesn't mean it's a good idea. Syntax matters. Programming languages are user interfaces *for programmers*, not the computer. They let programmers express what they want in the most convenient way possible and have the compiler talk to the computer. Certainly syntax can be too cumbersome or verbose, but that is a matter of giving the programmer more to remember. Minimal syntax isn't the answer. The more specific the programmer can be, the better. We want to give as much information as possible to the compiler so it can generate efficient binary code (where 'efficient' varies depending on the problem domain).

Minimal syntax is seductive because we like the idea of being concise and typing nothing extra. This is all very well and good. But the solution isn't to remove syntax, but rather make it optional. If the compiler can figure out what the programmer means without extra parenthesis and semicolons, then great, but let us add it back in if we want clarity. Make typing our variables optional for when we are hacking, but let us be more strict when we get to production. The syntax needs to scale with our needs, not yours. Oh, and what do you think of my nice leather wallet?

Cow #4. Holy wars: Whitespace. Indenting. Bracing. Tabs vs Spaces.

Irrelevant. Completely irrelevant. This is a style issue that can be enforced (or not) by the IDE. As long as it saves the source in a canonical format you can edit the code in any style you want. If you want to hide braces, do it. The IDE will handle the details for you. If you want to use square brackets instead of parenthesis, do it. The IDE will convert it. A funky syntax for escaping multiline text? Who cares. I can paste it into the IDE and it will deal with it. If you want to use dingbats symbols instead of keywords, do it. The IDE will handle it. (hmm. that's not a bad idea, actually. Have to think about it some more).

These are holy wars that matter unreasonably much to us but have no practical difference in the real world. Simply specify the semantic meaning and let the dev choose what they wish. Having good defaults matters, of course, but don't allow your cool new language to get trapped in the tar pit of programmer holy wars. Leave that for the cows.

Cow #5. What the world needs is a better type system.

No. The world doesn't care about yet another cool type system. You won't get programmer adoption by emphasizing how awesome your types are. No one will say: "Woah! Check out the type system on that one!". What the world needs is good modules. An escape from classpath hell. The ability to add a new module and know it will work without modification and without breaking something else. Let the compiler rewrite anything inside a module without affecting the outside. Fix versioning. These are those hard problems that need to be solved, not type systems. My code won't get magically faster and bug free through your awesome type system. Another cow bites the dust.

Cow #6. We must not let programmers extend the syntax because they might do bad things.

So?! We are adults. Let us shoot ourselves if we really want to. Let us override the syntax of the entire language if it would help us solve real world problems. Just make sure the changes are cleanly scoped, then let us have at it. The amazing success of JavaScript has been because it was so flexible that real world programmers could try different things and see what works, in the real world (seeing a common theme here?). Usage of JavaScript has evolved tremendously in the past decade, so much so that you could say the language itself has evolved (even if the syntax changes have actually been minimal).

So, no. Your language doesn't need to provide X. The language should let programmers build it themselves. One more cow down the drain. Get your swim fins.

Cow #7. Compiled vs Interpreted. Static vs Dynamic.

Nobody cares. All languages are compiled at some point, the question is when. All languages live somewhere between the platonic ideals of static and dynamic. If we can get useful work done then we don't care where your language fits mathematically.

Cow #8. Garbage collection is bad.

No. Garbage collection is good. It has increased programmer productivity tremendously and eliminated an entire class of bugs (and security exploits). The problem is garbage collection isn't perfect. So grow your hair long enough to escape from your ivory tower and fix the GC, don't eliminate it.

GC systems often have bad nondeterministic performance, especially in interactive applications. They treat all objects the same. They provide the programmer with no insight into what is actually going to happen. So improve the GC. Let us give it more hints about what we actually want. Why should the GC system treat my cached textures the same as a throwaway string? This is also a place where better IDE integration would help. If you make it easy for me to see where GC might be a problem then I, the programmer with the knowledge of what I'm actually trying to accomplish, can provide the compiler with better information. Research modern infoviz. Get out of your pure text mindset.

We can rebuild this GC cow. We have the technology.

In memoriam

I hope I've provided a bit of guidance before you go off to your language shack in the woods and come back with type safe encapsulated 'genius'. Once again, you really shouldn't be building a new language anyway; we have more than we need. Please, please, please, don't make a new language. Since you are determined not to heed my warnings, the least you can do is not injure the world further.

And please kill a few cows on your way out.

Below is a screenshot of a debugging app I've been working on called SideHatch. It essentially lets you open up your Amino app from the side and poke around at the innards.


In the screenshot you can see the two main tools: the Translator and the Inspector. The Translator shows all strings in your application that have been translated, broken down by category and locale. From here you can update translations and switch locales on the fly. The app will automatically refresh itself to show the changes without a restart.

The Inspector puts an overlay into every window showing the bounds of every control in your scene. Then you can move the mouse around to get info on particular components, like their dimensions and translation. In the screenshot the indicator is telling me that the button I've hovered over doesn't have a translation yet.

SideHatch is currently in the 'translation' branch of the repo if you are interested in playing with it.

Greetings Earthlings!

Today's the first day I haven't been traveling, so I can finally catch my breath and write down some notes. It's been a helluva week. Last week I drove to Portland for OSCON to give several presentations and be involved in general geekery (if you follow me on Twitter, that's why you saw so many posts tagged with #oscon).


Due to a cat emergency (Nori is fine now) I arrived mid-afternoon, missing the tutorial sessions for the day. I spent the rest of the day working on slides and met with some of my new HP co-workers. I have to say the HP guys have been great to work with. They are very enthusiastic about what we can build with webOS.

Tuesday and PJUG

The morning was spent in the three-hour Erlang technical session. I love OSCON because I can learn about things completely out of my element. Knowing nothing about Erlang before, I can now build a basic multi-threaded program in it. It's got some very interesting concepts. It feels like a mix between Lisp and Prolog: functional and match based.

Tuesday evening I gave a presentation to the Portland Java User's Group. For the first half of the talk I went over long term trends towards mobile devices, tablets, etc. and the shift away from PCs as the primary computing interface. Then I dove into how the mobile web solves the N-device problem with some technical tips and UI guidelines for mobile devices (use stylesheets, have large click areas, pare functionality down). For the last part I covered Palm's take, covering Ares, our mobile browser, our app ecosystem, and when it's appropriate to do one over the other. (And how webservices are the answer for everything :). I'll have slides for this talk up soon.

Afterwards we went downstairs where the Oracle dev outreach rep bought us all beer, then headed out for some Voodoo Doughnuts. ButterFinger doughnut for the win!


Wednesday morning we showed up to the expo floor early to get everything set up. I brought a bunch of Palm T-Shirts, a box of webOS books, about 80 aluminum water bottles, and 10 phones. The booth was very well attended and Ares was a big hit. I definitely need to get more of these nice water bottles, at least for the pacific northwest. We have so many bikers and hikers here, people use these things constantly.

Wednesday morning HP had a session covering all of the ways HP is involved in open source. I did a 12 minute segment covering webOS architecture, app development options, our catalog, and a 5 minute Ares demo. (yes, only 12 minutes for all of it!) At the end we gave away a couple of phones to people who asked good questions. (This is a lesson to attendees. Always stay till the end!)

Book signing

OSCON is run by O'Reilly. Since I wrote Swing Hacks for them five years ago they asked me to do a book signing at the Powell's booth. As I expected no one wanted me to sign a five year old book on an even older technology, but I did have a nice time chatting with the guys at Powell's (an excellent local Portland bookseller with one store dedicated to technical books).


Thursday I took the day off to spend time with my wife in Portland. Primarily Nordstrom's. We must have priorities.


Friday morning I did my personal session on Marketing Your Open Source Project on a Shoestring Budget. Attendance is generally lower on the last day, so I wasn't surprised to see only about 25 people there. I wish O'Reilly had put me into a smaller room though, as it was built for about 250 and felt very empty. The talk was very well received by the audience, though. Several came up to me later telling me how much they liked it. Definitely something to repeat in the future. I'll have the slides up soon.

Next I attended "Repent Repent, the 2038 crisis is almost upon us," a tongue-in-cheek talk about the Y2038 problem, where Unix dates will roll over to 1901. Finally I saw the humorous keynote on The World's Worst Inventions. Describing it can't do it justice. Just go watch it.

Friday night my dinner guests bailed on me (or rather, hard crashed after 8 days of conference, poor guys) so Jen and I went out for some excellent Portland Sushi. I tweeted about it and someone showed up to join us. Go Twitter! Saturday morning we packed up, had a breakfast at a local cafe (Milo's Cafe, *highly* recommended. huevos rancheros & crab cakes were awesome!), then drove home.

Monday: Mobile Portland

The rest of Saturday and Sunday I was pretty much a zombie, but Monday afternoon I drove back up to Portland for yet another event. Jason Grigsby, who has worked on some high profile mobile apps, is the leader of Mobile Portland, a local mobile developers group. A week ago their July meeting speaker plans fell through, so he asked me if I'd talk to them about webOS while I was in town for OSCON (not realizing I live only 2 hours away).

So Monday I drove up to Portland and gave a 1 hour presentation that leaked into about 2 hours, followed by Thai food with some of the crew. I gave them an overview of webOS and the development options, then spent quite a bit of time in Ares showing how easy it is to build for. During a lengthy Q&A session they asked some really good questions and we got to dive into how important the developer experience is to us. I also met some HP developers from Vancouver, two reporters, and a writer for PreCentral. It's amazing how many mobile related people live in Portland.

UStream recording of my session here

Whew. I think it's time for some coffee or a nap.

I'm writing this from a hotel room in Sunnyvale, recovering from the tremendous event we put on for our dedicated developers: Palm's first ever webOS developer event, last Friday and Saturday. The turnout was great. Over 100 developers paid their own money to drive, fly, and chopper in to Palm HQ. I taught an intro to webOS session for the entire first day, then answered questions and attended sessions the second. Topping it all off with dinner at a local brew pub was a splendid idea.

My great thanks to my fellow Developer Relations team members and the many dedicated engineers who came to present, answer questions, and socialize with our developers. I know it meant a lot to the attendees to have such a personal connection with Palm. Extra special thanks to our CEO, Jon Rubinstein, who personally addressed the developers at the end of the first day.

All in all, a great success. Slides and photos are forthcoming. Now time to sleep for a few days until next week when I'll be speaking on HTML 5 at the Web 2.0 conference.

- Josh

Every day or so I read another blog post (or ranting comments) about how BlackBerry could be rehabilitated, or how Nokia could restart Maemo and build the ultimate smartphone again. Things came to a head after Jolla announced their first phone for sale. Surely this phone with an amazing user interface will vindicate the N9?! Amazing technology plus a killer UI? Marketshare is theirs for the taking!

I’m sorry, but no. Most people don’t understand how a smartphone platform works. Simply put: there will not be any new entrants to the smartphone game. None. At all.

Obligatory disclaimer: I am a researcher at Nokia investigating non-phone things. I do not work in the phone division, nor do I know any internals of Nokia’s phone plans, or Microsoft’s after the acquisition of Nokia’s devices group is complete. I hear about new devices the same way you do: through leaks on The Verge. This essay is based on my knowledge of the smartphone market from my time at Palm/HP and general industry observations.

A few new Android manufacturers may join the game, and certainly others will drop out, but we are now in a three horse race. The gate is closed. I’m sorry to Jolla, BlackBerry, the latest evolution of Maemo/Meego/Tizen, whatever OpenMoko is doing these days, and possibly even Firefox OS. No one new will join the smartphone club. It simply can’t happen anymore. You can’t make a smartphone.

There was a time when a small company, with say a few hundred million dollars, could make a quality phone with innovative features and be successful. This is when ‘successful’ was defined as making enough profit to continue making phones for another year. In other words: a sustainable business, not battling for significant marketshare. Those were the days when Palm could sell a million units and be incredibly happy. The days when BlackBerry had double digit growth and Symbian ruled the roost.

Then came 2007. It might be over-reaching to say ‘the iPhone changed everything’, but it certainly was a definitive event. The 1960s began in January of 1960, but ‘the sixties’ began when the Beatles came to America in early 1964. Their arrival was part of a much larger cultural shift that started before 1964 and certainly continued after the Beatles broke up. I would personally say the sixties ended Memorial Day of 1977, but that’s just my opinion.

The Beatles' appearance on the Ed Sullivan show is a useful event to mark when the sixties began, even though it’s really a much fuzzier time period. Steve Jobs unveiling the iPhone in 2007 is a similarly useful historical marker. Everything changed.

Data networks

The first big change was data networks. In the old days there really wasn't a data network. Previous phones were about selling minutes and, to a lesser extent, texting. Carriers didn’t really care about smartphones. They didn’t push them or restrict them. As long as you bought a lot of minutes the carriers didn’t really care what you used.

There were no app stores back then, just a catalog of horrible JavaME Tetris clones at 10 bucks a pop. I owned a string of PalmOS devices during this period. Their ‘app store’ was literally boxes of software in a store which you had to install from your computer. No different than 1980s PCs. While my Treo had access to GSM, it was merely a novelty used to sync news feeds or download email very slowly.

Around 2006 the carriers' 2G and 3G data upgrades finally started to come online. Combined with a real web browser on the iPhone, you could finally start doing real work with the data network. This also meant the carriers became more involved in the smartphone development process. Clearly data would be the future, and they wanted to control it. Carriers now request features, change specs, and pick winners.

Carrier influence means you can’t make a successful smartphone platform without having strong support from them. This is one of the things that doomed webOS. The Pre Plus launch on Verizon should have been huge for Palm. Palm spent millions on TV ads to get customers into the stores — who then walked out with a Droid. Without having strong carrier support, all the way down to reps on the floor, you can’t build a user base. To an extent Apple is the exception here, but they have their own stores and strong existing brand to leverage against carrier influence. Without that kind of strength new entrants don’t have a chance.

The cost of entry

Another barrier to entry in the smartphone market is the sheer cost of getting started. A smartphone isn’t just a piece of hardware anymore. It’s a platform. You need an operating system, cloud services, and an app store with hundreds of thousands of apps, at least. You need a big developer relations group. You need hundreds of highly trained engineers optimizing every device driver. The best webkit hackers to tune your web browser. A massive marketing team and millions in cash to dump on TV ads. You need deep supply chains with access to the best components. The cost of entry is just too high for most companies to contemplate.

To have continued with webOS in 2011, I estimate HP would have had to spend at least a billion a year for three years to build a profitable platform — and that was two years ago. The cost has only gone up since then. There are very few companies with the resources. You already know their names: Apple, Samsung, Google, and Microsoft. All vertically integrated or well on their way to it. You aren’t one of these companies.

Access to hardware components

Smartphones need good hardware to be competitive. With the 6 month cycles of today’s marketplace, that means you have to have access to the best components in the world (Samsung), or have such control of your stack that you can optimize your software to make do with lesser hardware. Preferably both.

Apple has the spare cash to secure a supply of chips and glass years in advance; you do not. If Apple has bought the best screens, then your company has to make do with last year's components. This compromise gets worse and worse as time goes on, making your devices fall further behind in the spec wars.

Retreat to the low end

A common solution to the component problem is targeting the low end. After all, if you can’t get the best components then maybe you could build a decent phone out of lesser parts. This does work to an extent, but it limits your market reach and opens you up to competition at the low end. You are now competing with a flood of cheap Android devices from mid-tier far-east manufacturers.

Even if you OEM hardware from one of these low-end manufacturers, you are now in a race to the bottom. Your product has become a commodity unless you can differentiate with your user experience. That requires telling potential customers about your awesome software, which requires a ton of cash. Samsung spends hundreds of millions each quarter on Galaxy S ads. This path also requires an amazing UI that will distinguish you from your peers.

A disruptive UI

Even with a paradigm-shifting UI you’ve got to overcome all of the difficulties I outlined above. Most people in the wealthy world have smartphones already, so you not only have to convince someone to buy your phone, but to leave the phone they already have. Your amazing UI has to overcome the cost of change. Inertia is a powerful thing.

Most likely, however, your new platform won’t have such a drastically different interface. Smartphones are a maturing platform. A smartphone five years from now won’t feel that different than today’s iPhone. Sure, it will be faster and lighter with better components, but it will still have a touch screen with apps, buttons, and lists.

Unless you’ve figured out how to make a screen levitate with pure software you won’t be able to shake up the market. Google Glass is the closest thing I can think of to a truly disruptive interface. Adding vibration effects to scrolling menus is not.

No hope?

So does this mean we should give up? No. Innovation should continue, but we have to be realistic. No new entrant has any chance of getting more than one percent of the global market. That could still be a success, however, depending on how you define success. If success is being profitable with a few million units, then you can be a success. You will have to focus on a niche market, though. Here are a few areas that might be open to you:

Teenagers without cellphone contracts. Make a VOIP only phone. Challenge: you are now competing with the iPod Touch.

Point of Sale systems. Challenge: this is an enterprise only pitch and they have long sales cycles. You might be dead by then. They also don’t care about user experience, so your awesome UI doesn’t matter. Small to medium businesses will use apps on standard devices like iPads, so you are back to where you started.

Emerging markets: half of the world is buying their first smartphone. This is an opportunity if you can get in fast with cheap hardware, but now you compete with FirefoxOS.

Mozilla is targeting emerging markets where last generation hardware is more likely to succeed. Even so, Mozilla is working very closely with local carriers to ensure success while facing down competition from low-end Android devices. They also have the advantage of being a non-profit. Their ultimate goal is not to become a profitable phone company, but rather keeping the web open and free. This is probably not a viable option for you, and even Mozilla may be too late to follow this path.

The sad truth

Smartphones are a rapidly maturing product. Soon they will be pure commodities. Just as I wouldn’t suggest anyone build a new line of PCs or cars, smartphones are becoming a rich man’s game. Unless you start with a few billion dollars you have no hope of making a profit. Maybe you could follow CyanogenMod’s approach of building value on top of custom Android distros, but even that risks facing the wrath of Google.

Sorry folks. There's plenty of room to innovate elsewhere.

I'm tired of hearing people talk about how we need a new pipeline to bring oil from Canada to Texas for processing. They say we need to do this for the US to have "energy independence". This is bullshit. Anyone who claims this has no idea how global commerce works. Oil is a fungible commodity so whether Canada sells their oil to the US, China, Brazil, or Switzerland makes no difference. Let me explain.


Global commerce is like a bathtub. Imagine a bathtub filled halfway with water. The US is at one end, China at the other. Other countries are at other spots. Now imagine what will happen if you pour a gallon of water in at the US end: it will be higher for a split second and then the water level will even out through the entire bathtub. If you pour the gallon in at the China end the same thing will happen. Because the water can freely flow throughout the bathtub, and because all water is the same, the bathtub will have the same level regardless of where you add more water.

Global commerce is like the bathtub. When we are talking about products that are uniform, such as natural commodities like rice or oil, and there are no barriers to trade, then the product will flow uniformly throughout the world. Adding more oil to the world's supply will lower the price for everyone. Removing from the supply, or increasing demand, will increase the price for everyone. All oil is the same and can be shipped anywhere, so it is called a fungible commodity. It doesn't matter who Canada sells their oil to. It goes into the global pool and raises or lowers the price for everyone.
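
The bathtub argument reduces to a toy calculation: with a fungible commodity, the clearing price is a function of total global supply and demand only, so it makes no difference where new supply enters. The numbers and the simple linear pricing rule below are invented purely for illustration:

```javascript
// Toy model of a fungible commodity: price depends only on total global
// supply and demand, never on where the supply enters the market.
function clearingPrice(totalSupply, totalDemand, basePrice = 100) {
  return basePrice * (totalDemand / totalSupply);
}

const balanced    = clearingPrice(1000, 1000); // balanced market
const soldToUS    = clearingPrice(1050, 1000); // Canada sells 50 more units to the US
const soldToChina = clearingPrice(1050, 1000); // ...or the same 50 units to China
console.log(soldToUS === soldToChina); // true: same global price either way
console.log(soldToUS < balanced);      // true: more supply lowers price for everyone
```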

So now we come back to energy independence. Energy independence sounds like a nice goal, but what does it really mean? This is about an oil pipeline, not all possible energy supplies, so we are really talking about oil independence. So what does that mean? Does it mean that we have energy in case China decides to keep their oil?

As we just discussed, oil is a fungible commodity. If China 'keeps their oil' it means they are telling Chinese oil producers to sell oil only within the country instead of on the open market. By definition suppliers want to sell wherever they can get the best price, which is why the global market evens out for fungible commodities. If China imposes such a domestic rule then they are telling their own people not to sell for the best possible price. Thus China is hurting its own industry. Chinese oil producers are making less profit than they could otherwise.

If China 'keeps their oil' just means they increase their consumption, well, that is no different than any other country increasing their demand. The price will rise globally. It won't help us any more than if we got the oil directly from Canada. If somehow processing Canadian oil closer to home saved 5% on costs, then that oil would be sold on the open market to get the best price, not just in the US. It's a global commodity, remember. The price will rise or fall for *everyone*. Building a pipeline provides no advantage to the area the pipeline is built through. You can't be independent from a global commodity without hurting yourself.

But what about jobs?

I have also heard the claim that this will bring more jobs to the US. That is true. Even if the price is global, it would be better to have the work done in the US instead of other countries, right? According to this article at CBS News:

analysis suggests that Keystone's job-creating potential is more modest. The U.S. State Department calculated last year that the underground pipeline would add 5,000 to 6,000 U.S. jobs. One independent review of Keystone puts that number even lower, with the Cornell University Global Labor Institute finding that the pipeline would add only 500 to 1,400 temporary construction jobs. The authors of the September report also said that much of the new employment stemming from Keystone would be outside the U.S.

TransCanada itself cast doubt on its employment forecast when a vice president for the company told CNN last fall that the 20,000 jobs Keystone would create were temporary and that the project would likely yield only "hundreds" of permanent positions.

So, only a few hundred new permanent jobs. It seems like there are far better ways of creating jobs than digging a 1700 mile trench.


Please send feedback and comments to my twitter account @joshmarinacci instead of on the blog. Thanks!

I'm speaking at OSCON in Portland next week, and what a busy week it will be. In addition to my personal session on marketing open source projects, I've added some Palm stuff in collaboration with HP. If you can't attend OSCON but will be in Portland I will also be speaking at the Portland Java Users Group. I'll also be working at the HP booth where we will be giving away phones, books, tshirts and some super nice water bottles. Here's the full schedule:

  • Tuesday 6:00PM: Introduction to the Mobile Web
    Portland Java User's Group
    Oracle Building, 8th Floor room 8005
    Pacwest Center,
    1211 SW 5th Avenue
    Portland, Oregon
  • Wednesday 10:40AM: HP's Session: Cloudy with a Chance of Revolution
    This is an overview of where HP is going in the cloud, including a section on webOS and Ares.
  • Wednesday 3:10PM: Swing Hacks book signing
    I'll be signing copies of Swing Hacks at the Powell's booth.
  • Wednesday 6:00PM: O'Reilly Author Meet and Greet
    O'Reilly booth, #313
  • Friday 10:00AM: Marketing your Open Source Project on a Shoestring Budget
    Learn how to generate interest and build a userbase for your open source project.
    Portland 255

Typography is the study of type, meaning letter forms of the printed word; though in modern usage it includes all manner of non-printed letter forms such as computer screens, eBooks, electronic billboards, and even textiles. Typography is possibly the most important part of design because humans are visual creatures and nothing conveys more information in a smaller space than words. This article will cover the basic terms and anatomy of type, then give you some quick tips to help choose fonts wisely.

Unlike my last topic, color, modern typography is actually quite old. Most of modern color theory was developed in the 20th century with the invention of modern dyes, paints and emissive displays. The core typographic theories, however, were in place when Gutenberg made the first printed books over 500 years ago. Certainly a lot has been developed since then, but the universal principles of type like alignment and grids were already developed during the dark ages by monks transcribing bibles by hand. When Gutenberg converted these principles to his mechanical printing press modern typography was born.

Typeface vs Font

First things first: typefaces versus fonts. A typeface, also called a font family, is a set of fonts designed with a stylistic unity, each comprising a coordinated set of glyphs. A font is a complete character set of a typeface at a particular size, weight, and style. In software engineering terms you can think of a typeface as a class and the font as an instance of that class with a particular configuration. The exact meanings of typeface and font have changed subtly over the years as the technology has shifted from mechanical type to computer-set type to purely digital type. Originally fonts from the same typeface were literally different boxes full of chunks of metal shaping each letter at a given size. These days we have scalable font technology, so different sizes, weights, and styles can be generated algorithmically, or the multiple forms can be distributed in a single physical file. Thus font and typeface have somewhat merged, and today you can safely refer to a typeface as a 'font' in general usage, and only use the more precise terminology when referring to particular instances of that font (or when talking to ornery typographers).
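
The class/instance analogy can be made literal in a few lines of code. This Typeface class is a made-up illustration, not any real font API:

```javascript
// The typeface-as-class, font-as-instance analogy: a Typeface is the
// family, a Font is one concrete configuration of that family.
class Typeface {
  constructor(family) { this.family = family; }
  font(size, weight = "regular", style = "roman") {
    return { family: this.family, size, weight, style };
  }
}

const helvetica = new Typeface("Helvetica");
const body = helvetica.font(12);            // one font of the family
const heading = helvetica.font(24, "bold"); // another font, same typeface
console.log(body.family === heading.family); // true: same typeface
```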

Anatomy of a Font


Weight

The weight of a font is the thickness of the lines that make up the letter forms. Since this thickness is relative to the size of the letters, we refer to it as weight rather than an absolute measure. You are probably most familiar with bold vs regular, but some fonts have as many as nine weights, from thin and light up to bold, heavy, and black. The TrueType font format specifies weights on a scale from 100 to 900. CSS can use words or scale numbers to specify the weight.
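
Using the conventional names for the nine numeric weights (CSS itself only defines the keywords normal and bold, which map to 400 and 700), the scale looks like this:

```javascript
// The conventional keyword-to-number mapping for the nine font weights
// on the 100-900 scale; normal is 400 and bold is 700.
const fontWeights = {
  thin: 100, extraLight: 200, light: 300, normal: 400, medium: 500,
  semiBold: 600, bold: 700, extraBold: 800, black: 900,
};
console.log(fontWeights.normal); // 400
console.log(fontWeights.bold);   // 700
```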

Style: AKA Slope, Italic & Oblique

We usually think of the slope of a font as italic or normal but, as with everything in typography, there's more subtlety to it. Italic text really means text which is drawn differently to indicate it is emphasized and should stand out in some way. Italic text can be programmatically generated by leaning the text to the right, called oblique text. Some fonts include a true italic form which actually draws the letters with different shapes. Fancier fonts may lower the f's, adjust the roundness of letters, or lean to the left instead of the right (depending on the language).


Width

Some fonts have wider and narrower versions of the letterforms. The terms for these versions vary by font maker, but they will often be called things like condensed and expanded. These fonts adjust the horizontal width of the letters themselves.


Leading

Leading (pronounced like 'treading', i.e. 'led-ing') is the spacing between lines of text. Back in the metal typesetting days printers used strips of metal lead to separate lines, which is where the name comes from.

Kerning & Tracking

Kerning and tracking are adjustments to the spaces between letters of a proportional font. With tracking, the extra space is uniformly applied to all letters. With kerning, the extra space is allocated by letter pairs. This means that certain pairs of letters will have more or less space than other pairs, resulting in a more pleasing look. In some cases letters may vertically overlap to make them look better. Kerning values are very hard to generate algorithmically because they depend on how the letter forms look to the human eye. For this reason well kerned fonts are created by hand by a font expert, and are therefore more expensive. The results can be well worth it, however, because badly kerned fonts result in keming (a joke from Ironic Sans, also available as a T-shirt).
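
The difference is easy to see in code. In this sketch the advance widths and kerning pairs are invented, but the mechanics are real: tracking pads every letter uniformly, while kerning adjusts specific pairs:

```javascript
// Hypothetical advance widths and a pair-kerning table.
const advance = { A: 10, V: 10, T: 10, o: 8 };
const kernPairs = { AV: -2, To: -1 }; // pull these pairs closer together

// Sum the width of a line: every letter gets its advance plus uniform
// tracking; each adjacent pair may get an extra kerning adjustment.
function lineWidth(text, tracking = 0) {
  let width = 0;
  for (let i = 0; i < text.length; i++) {
    width += advance[text[i]] + tracking;
    if (i + 1 < text.length) width += kernPairs[text[i] + text[i + 1]] || 0;
  }
  return width;
}

console.log(lineWidth("AV"));    // 18: 10 + 10, kerned -2
console.log(lineWidth("AV", 1)); // 20: same, plus 1 unit of tracking per letter
```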

Font Metrics

The actual letter forms have a specific vertical anatomy based on how we measure them. Together these measurements are the font's metrics. We use these measurements to decide how to put letters together to form words, lines, and paragraphs of text. Of the various measurements the most important is the baseline. The human eye finds baseline aligned text very pleasing, and will notice the tiniest imperfections in this alignment. In GUI software it is important to align UI controls containing text (which is most of the standard controls) along the baseline of the text rather than the visual bounds of the control. For example, a button should be aligned with the text inside the button rather than the rectangular edges of the button itself. It may seem like a small detail but it makes a big difference. I once spent a year fixing bugs in the Windows Look and Feel for Swing, and the most visible bugs were vertical alignment issues. All in all, I still think it was a year well spent :)
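
A sketch of baseline alignment (the control names and ascent values are made up): instead of stacking controls by their bounding boxes, compute each control's y position so that its baseline, which sits at top plus ascent, lands on one shared line:

```javascript
// Align controls on a shared text baseline rather than on their
// bounding boxes: place each control so top + ascent hits baselineY.
function alignToBaseline(controls, baselineY) {
  return controls.map(c => ({ ...c, y: baselineY - c.ascent }));
}

const row = alignToBaseline(
  [{ name: "label", ascent: 12 }, { name: "button", ascent: 16 }],
  100);
console.log(row.map(c => c.y)); // [ 88, 84 ]: tops differ, baselines match
```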

Odds and Ends

  • Small capitals, aka smallcaps, are simply smaller forms of the capital letters. Usually these smallcaps have the same height as the lowercase letters of the normal font. Smallcaps may be included in a font or generated algorithmically by shrinking the regular capital letters.
  • Dropcaps or initials: special forms of the initial letter in a paragraph or chapter. They are often much larger than the regular text on the page and will 'drop down' into the lower lines.
  • Logical font: a font which doesn't actually exist, but will be replaced with a real font at runtime. For example, in a webpage you can use the font 'sans-serif'. There is no real font named 'sans-serif', but the web browser will pick the user's preferred sans serif font and substitute it at runtime.
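
A sketch of how such a substitution might work at runtime. The installed-font list and candidate lists here are hypothetical, not any real browser's:

```javascript
// Resolving a logical font name to a real installed font at runtime.
const installed = new Set(["Georgia", "Helvetica", "Courier New"]);
const logicalFonts = {
  "sans-serif": ["Arial", "Helvetica", "Verdana"],
  "serif": ["Times", "Georgia"],
};

function resolveFont(requested) {
  if (installed.has(requested)) return requested; // already a real font
  const candidates = logicalFonts[requested] || [];
  // First installed candidate wins; fall back to a default otherwise.
  return candidates.find(f => installed.has(f)) || "Helvetica";
}

console.log(resolveFont("sans-serif")); // "Helvetica"
console.log(resolveFont("serif"));      // "Georgia"
```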

Categories of Fonts

In Roman languages (languages which use the 26 letters of the Roman alphabet plus various accent and punctuation marks) there are five major categories of fonts.


Serif

Serif letters are drawn with features at the ends of their strokes. The serifs are the little feet we see in fonts like Times. These are some of the oldest type designs. The feet along the baseline help guide the eye from left to right, making them very 'readable' fonts.

Sans Serif

Sans serif (French for "without serifs") letters are drawn with straighter lines and no feet. Their larger letterforms make them very legible, but can cause greater eye strain when used in long runs of text. Helvetica is considered the quintessential sans serif font.


Script

Script fonts imitate handwritten letter forms. They are much harder to read than serif and sans serif fonts, and should never be used for body text.


Ornamental

Ornamental fonts are highly decorative and tend to work only at larger type sizes. They are often called 'display' fonts. They should never be used for body text (I'm looking at you, Comic Sans!).


Monospace

Monospace fonts have letterforms that are exactly the same width, or at least equally spaced. These come from the days of typewriters and old computers that could only move fixed distances. They are rarely used today except when the vertical alignment of columns matters, such as in code editors and tabulated data.

Text Placement

Most of our type terminology in software comes from the world of newspapers. In practice we have three major kinds of text when building webpages and software: body, header, and navigation.
  • Body: this is the bulk of your text organized into paragraphs. It is usually smaller than the other text and should use a simpler font that looks good at smaller sizes.
  • Header: this is text in short runs (usually a single line or less) that must stand out. Headers usually appear at the top of a page or paragraph indicating that the reader should move their eyes here to begin reading something. Headers also look good with serif and sans serif fonts, but you have more freedom because the text is typically larger.
  • Navigation: this text is very short and is usually at the top or sides of the page or screen. It indicates the structure of the entire work, such as newspaper sections or different pages in a site. It should also indicate to the user how to navigate to that section. In a newspaper you would use page & section numbers. For software the text would be rendered as something clickable (a button, an underlined link, a hover effect etc.).

Quick Tips

Use the right font for the size

Not all fonts look good at all sizes. Some really shine at small sizes, others at large ones. Use the right font for your size. At smaller sizes you want fonts which are simpler and easier to read such as the basic serif and sans serif fonts. Try Arial, Helvetica, Times, Verdana, and Georgia. They are installed on virtually all computers and were designed to look good on screen at smaller sizes.

Use different font categories for different parts of text

Use a sans serif font for the header and a serif font for the body, or vice versa. It creates a contrast without banging you over the head with it.

Use truly crazy fonts only for top headers

Use the cool and crazy fonts only for the topmost headers. In the example below I used a font meant to evoke mid-century Tiki culture, with a funky light effect to suggest its alcoholic origins. I used this font in only one place: the topmost header of the page. Everything else is done with Helvetica, a standard sans-serif font.

[Image: Home page of Project MaiTai]

Use restraint. Pick one crazy font for your whole site or application and use it sparingly; if everything is different then nothing is. For headers within the page you can use a fancy font (somewhere between the plain, boring body fonts and the crazy fonts reserved for the topmost headers), and use one of the plain fonts for body text.

Use color and style to create contrast

Use color changes or capitalization style instead of fonts to create contrast between the headers, body, and navigation. For example, in recent versions of iTunes the headers in the sidebar are all capitalized and set in a slightly lighter color with a hint of etching.

[Image: iTunes 9]

Bundle a font in your app

For the body text I still recommend sticking with one of the standards, but for your header font you can choose something a bit more distinctive. Here are some great places to get cheap or free fonts.

Avoid Cool Fonts

Don't use cool / cute / themed fonts unless the subject matter really warrants it. Your website on technology trends shouldn't use a cutesy bubble font unless you are truly targeting the tween market.

[Image: Type set in Sniglet]

Just the beginning

Typography is an incredibly large topic. What I've covered here is just the beginning. In the future I'll cover text as part of the overall design and dive into the grid system. If you'd like to read more on typography, here are a few good resources: The 20 Do's and Don'ts of web typography.

No Starch Press is on a roll with its series of Lego themed books. While most of them are about model ideas or construction techniques, Beautiful Lego is different. This is a Lego art book. In classic coffee table style it is filled with gorgeous photos to thrill the reader. Beautiful Lego does not seek to discuss 'can Lego be art', but takes it as fact. These are works by artists, just artists using the medium of Lego instead of paint or clay, and the results speak for themselves. Stunning.

Beautiful Lego is written and photographed by Mike Doyle, a Lego artist himself as well as an excellent graphic designer, but it features the work of over 70 different artists. The book is organized by topic -- spaceships, people, architecture, robots -- with interviews of artists interspersed. Each artist is asked a single question, "Why Lego?", and the answers vary immensely. There is a common theme, though: the desire to create using an incredibly malleable medium.

Some models are beautiful and some are terrifying, such as "The Doll" (page 5) and "Dissected Frog" (page 79). The architectural models really shine; good use of the few curvy pieces in Lego can produce amazing results. There is even political art: "The Power of Freedom" (page 124).

Beautiful Lego surprised me with the diversity of styles within the medium of Lego. Some are hyper-detailed, some expressive, some minimalist. Angus MacLane has a cute style known in the book as CubeDudes: head-on caricatures of famous figures like President Lincoln, Kirk and Spock, and the Stay Puft Marshmallow Man (page 36).

You will appreciate the book on two levels: first for the beauty or expression of each piece, then a second time as you pore over the photos trying to figure out "How did they do that with Lego?" Mike Doyle's Victorian house series in particular will amaze you with the flexibility of Lego (and make you wonder how big his Lego collection is). While re-reading the book for this review, I was struck by how much good photography matters when experiencing a model.

I heartily recommend Beautiful Lego to the adult Lego fan in your life. It just might make you pull the bin out of the garage and build a few original models yourself. And yes, there is a Freddie Mercury model called "Fried Chicken."

Beautiful Lego can be purchased from No Starch Press, Amazon, or Barnes & Noble.


Okay, perhaps that's a bit aggressive. PCs will not go away, much like radio persisted after the advent of television. However, tablets do signal the end of the PC era. Why? Simply because PCs suck. They are heavy, prone to breakage, horribly insecure, and require too much knowledge to keep running. And they were never intended to be used by the vast majority of people who use them today and will use them tomorrow. By the end of this essay I think you'll agree there is a compelling case that the PC era is over and that the growth, and most of the cash, is going to be for tablets and netbooks and other non-PC devices.

When I say "PC" I mean a computer running a traditional desktop windowing operating system like Windows or Mac OS X, regardless of whether it's a desktop or laptop form factor. The PC was always designed for a professional. It requires technical knowledge to use and maintain. How many of you have had to fix your parents' computer and thought: if cars were built the way PCs are, the car makers would be sued out of existence? This is not to knock PCs. They are technical marvels our ancestors could only dream of, but their major asset is also their major flaw: they are general purpose computing devices. It is this generalness that makes them so troublesome. It increases both the production and maintenance costs (in dollars, hours, and brain cells). This generalness is what makes them less competitive with the coming wave of post-PC devices.

But tablets are just PCs with touch screens.

So... right... touch. Touch is all the rage these days, but it doesn't magically make a computer easier to use. Yes, the touch interface of the iPhone is easier to use than a traditional desktop UI, but most of the improvement is due to the simplification of the UI metaphor, not to touch itself. The iPhone OS doesn't have files. It doesn't have multiple overlapping windows. It doesn't have a persistent dock, screen savers, firewalls, movable palettes, or any of the other things which make up the modern desktop computer. Of course, lacking the features of a modern desktop OS makes the iPhone a more limited device...

But here's the thing: most people don't need these features. Most people use their computers for surfing the web, watching videos, playing music, reading news, and the like. The most intensive typing they do is sending emails and updating their Facebook status. You don't need the full power of a general purpose computer to do these things.

Now most of you reading my blog will say "I'd never settle for just a browsing computer". And that's right, most of you wouldn't. In fact almost none of you would. That's because anyone who reads a blog on software and UI design is by definition not most people. Most people don't write software, use Photoshop, edit videos, or the countless other things that general purpose computers are so good at. The modern PC interface is overkill for what most people actually do with their computers.

It really comes down to this: consumption versus creation. Tablets and netbooks are great at consumptive tasks: tasks where you browse and click/tap a lot with very little typing or detailed pointing. PCs are very good at those tasks too, but along with that comes all of the complexity of a desktop OS. Most people don't need that complexity, so the minute a better solution comes along they will adopt it. I think 2010 is the year when technology will finally bring that better solution: a browsing computer.

What will the browsing computer look like?

Tablets are getting all the buzz right now due to the Apple rumors, but I don't think the form factor is as important as the software interface. Netbooks are just as viable as PC replacements. The only difference between a netbook and a tablet is the presence of a hardware keyboard. Both tablets and netbooks are built out of the same pieces as PCs (which gives us those handy economies of scale), but they are fundamentally different products and are used differently.

The browsing computer strips away the complexity of a PC operating system by stripping away the features that most people never use. Apple's tablet will most likely be a large iPod Touch, devoid of a filesystem, overlapping windows, and system utilities. Of course all of those things will still be inside the tablet's software, but they will be implementation details. The end user will never need to know about them. The Microsoft Windows tablets of the past have always failed because they were desktop PCs shoehorned into a form factor they were never designed for. The Windows OS was simply never built for touch, and too much of the OS cruft is exposed to the end user. The name "Windows" should be our key indicator: touch doesn't work well with movable overlapping windows. While the iPod Touch does use parts of Mac OS X underneath, the parts exposed to the user were designed from the ground up for a browsing computer experience.

Why now? Why not 5 years ago, or 5 years from now?

This one is a bit trickier. I think the browsing computer is ready to hit mainstream because of a few long term trends that finally converged.

First, Moore's law has made hardware fast and cheap enough to make a viable $400 browsing computer. Five years ago the iPhone wasn't possible to build for a viable price. Today Apple makes $200 profit on every one they sell. A few more turns of Moore's law make the tablet viable at a similar price point (though I expect Apple to charge a premium initially).

Second, all apps are Internet apps now. The Windows OS benefited from the network effects of hardware and software compatible with it: items bought in stores, requiring shelf space and retailers and distributors. Creating a new desktop OS required replicating this entire network of infrastructure. Today you can make your own profitable ecosystem by distributing everything online and having few or no hardware add-ons. Building a platform with tons of apps and content is a whole lot easier now. The idea of a computer that only runs software from its own store or browses the web isn't crazy anymore. In fact, the idea of an app that doesn't have some connection to the Internet is now crazy.

Third: slave devices are becoming independent. The browsing computer is built on long term trends that have been underway for a while. What is new is that we are very close to the point where one of these non-PC devices can actually replace a PC for a lot of people. Palm Pilots and the iPod were early steps along this trend, but they were slaves to a PC. They could do nothing without the attached PC. In fact, they were made better devices by the fact that hard tasks, such as syncing data and managing your music, were offloaded to the PC. In 2007 we got the far more independent iPhone. It can directly access the web and install apps without a master computer. You still need a PC to use an iPhone for media management and configuration (plus the initial setup), but it's a lot closer to being an independent PC replacement. The final step may be the tablet/netbook: devices which exist entirely separate from the PC and don't require it for anything. Even if we don't get these in 2010, they will be here very soon. And when it happens, it's going to happen fast. The world has been waiting.

How about some concrete predictions?

Apple will release a tablet (slate?) and sell millions

In the early part of this year Apple will release a tablet computer, but it will be a large iPod Touch instead of a small Mac. It will be almost exactly twice as large as the iPod touch in both dimensions and have worse battery life, but still be essentially the same. It will run the same software and install apps from the same store. Porting an app to the tablet will just require recoding your UI slightly to fit Apple's updated UI guidelines and the larger screen size. That's pretty much it. There will be no magic hardware, no crazy screen technology, and the UI will be pretty much the same as the iPod Touch. They will probably let you run multiple apps at once, switchable with a dock interface at the bottom, but apps will still fill the screen. It will be more like switching pages in Mobile Safari.

One part I'm unsure of is how independent the tablet will be from your desktop. I believe that Apple fully intends to make a device that will replace your laptop (at least for 95% of us), but I'm not sure they will enable this in the first version. The key will be where you store your music and movies. They might go with a cloud solution, or make a version of the Time Capsule that acts as a headless media server. The key indicator will be whether you can sync your iPod from the tablet or still need a real PC. In any case, if it doesn't happen now it will in the next few years.

Other tablets will ship

Apple is obviously not the only one working on this. CES should be quite interesting. I expect several other browsing computers will be shown this weekend, running either Chrome OS or Android, and in both netbook (folding screen & keyboard) and tablet (touch screen with soft keyboard) form factors. Possibly something in between, but transformer laptops have never worked well. Apple's primary advantages over the competition will still be their industrial design and the content ecosystem they've developed over the past 8 years with the iPod.

Dedicated eBook Readers will be screwed

It's sad to see a new category of devices leave us so suddenly, but I think browsing computers will supplant them very quickly. Once a dedicated device can read books for $400, it's only a year or two before a more powerful browsing computer can do the same for the same price. On the other hand, Amazon won't care if the Kindle dies. They want to sell eBooks, which they do just as easily on the iPhone as on their own device. I predict that within a year they will sell more eBooks on other devices than their own, and within 5 years they will stop making the Kindle. Then again, e-paper technology is continuing to improve as well, so these categories may simply merge into one.

PCs will be freed to become workstations again

When photography arrived it didn't kill off painting. Instead there was an explosion of new painting styles after the artists were freed from the duty to faithfully record the real world and could now focus on more creative things. Impressionism and modernism didn't happen until after the bulk of painting chores, portraiture and landscapes, moved to the realm of photography. I think something similar will happen to PCs. Freed from being the browser computer for the masses, PCs can morph into something more advanced and specialized. They will again become tools used by professionals, returning to their original name: workstations. As a PC user I can't wait to see what this future brings.

Special thanks to Flickr user vernhart for the hilarious photo.


It appears that last night at CES Steve Ballmer showed off a tablet computer from HP. It's a PC running Windows 7, a PC operating system. It will fail miserably. They simply don't get it.

I'm really enjoying my new job at Nokia. Unfortunately there is not much I can talk about since I'm exclusively doing research for future products. This is a change for me. I'm used to talking about what I work on. In fact, it's been my job for the past 4 years to do just that. On the other hand, the reduced travel schedule has given me more time to think about other non-smartphone related things, which is quite a nice change. And last week I came to a realization: driverless cars will change everything.

Self-driving cars have been on the horizon for decades. I once saw an 'educational film' from the mid-50s about cars that would communicate by radio and drive 200 mph. It's nice that they are finally close to being here, but now we have to confront the reality of them. Will self-driving cars simply enable our current car lifestyle with extra Facebooking time, or will a world of self-driving cars create bigger changes? I'm starting to think the latter.

The automobile is a very inefficient technology. I'm not talking about the internal combustion engine running gasoline. On the contrary, gas is quite good at what it does. It packs more energy per cubic inch or pound than any battery we have yet invented. That's why it's been so hard for electrics to take off. No, what I'm talking about is the entire ecosystem around automobiles.

In America, at least, we devote a large portion of real estate to cars in the form of roads, parking lots, and driveways. Yet at any given time most of this space is unused. And by most I mean 99% of the time. Most parking lots are not used at all at night, and driveways are not used during daylight. Roads are barely used at night. Even during the day any given patch of road is mostly empty. Even in rush hour traffic the cars still have a length or two of empty space between them. That's a lot of unused space.

Now imagine a world where all cars drive themselves. They would never have to park! A car would drop you off wherever you want to go and then leave, returning later to pick you up. Cars could drive far closer together and at higher speed, and on narrower highways.

Now imagine you don't need to actually own a car, but simply rent it whenever you need it. Essentially a robo-taxi far cheaper than a human. This means you would never need to own a car, find a place to park, or have a driveway. Garages would no longer have to be a part of house architecture (to be replaced with an awesome workshop, of course).

Beyond the space and convenience benefits, self-driving cars also enable car use by people who couldn't use them before: children going to school or the library, seniors whose eyesight is failing, the blind, people recovering from leg surgery. Anyone who can't drive, for any reason, could now have the freedom to travel anywhere they want.

Which brings me to my next point. A car which drives fast and safely through any terrain starts to become very competitive with air travel, at least for any trip under 1000 miles. For example, I regularly fly from Eugene, Oregon to San Jose. It's a trip of about 600 miles but requires a stopover in a hub city (usually Portland). Once I add in checking luggage, going through security, takeoff and landing time, and the other usual headaches of air travel, what should have been a 60-minute flight as the crow flies becomes at least 3 hours, sometimes 4 or 5. I can actually drive the same distance in 9, and have done so on occasion. If a self-driving car could do twice the speed of a fallible human, it could get me there just as fast as the airplane. Even better, cars are getting more environmentally friendly as they switch to alternative fuels. Airplanes have no real alternatives to jet fuel.

Now, I'm not saying these changes would happen overnight. They would probably take 50 years or longer. We can't simply rebuild our cities around self-driving cars. But the change will happen eventually.

I can't wait to have my robo-chauffeur drive me from New York to London over a two-week vacation (via the Trans-Bering-Strait bridge, of course).

Disclaimer: I did not pay for many of the books I review here. One of the perks of being an O'Reilly author is easy access to free copies of almost everything O'Reilly publishes. However, all of these reviews are freely done of my own initiative. I choose the books I review and I receive no compensation other than the free copy. These are my own opinions and do not reflect the opinions of O'Reilly or my employer Nokia.

The Arduino Cookbook, by Michael Margolis

When you first begin hacking with Arduino, as I recently have, you will most likely spend the first few weeks scouring the Internet for information. The Arduino system is so cheap, powerful, and flexible that you will immediately think of a million things to do with it. This can be a problem. Not only do many of us lack the time to build every project we've dreamt up, but using just the web for information is problematic. The official docs are great for introductory topics, but I quickly found myself at their limits. As a software guy I need to know not only about Arduino itself, but about sensors, components, third-party libraries, and power systems. In short, I need a complete electronics background to fully use my Arduino board. That's where the Arduino Cookbook comes in.

The second edition of Arduino Cookbook, by Michael Margolis, was recently published by O'Reilly. In my opinion it is the best one-stop source for Arduino information. It is not a pure introductory tome, though the first chapters do give you a quick review of Arduino to bring you up to speed. The bulk of the book is organized around functional topics: things you would actually want to *do* with your Arduino.

The first few chapters cover the Arduino language, math, serial IO, and basic switches. Though it was not hard for me to pick up the Arduino language (essentially a simplified C++), these chapters covered a lot of finer details I had missed when reading the online docs. Each chapter is structured as a series of how-tos such as "Shifting Bits" and "Using a Switch Without External Resistors".
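For a taste of what a recipe like "Shifting Bits" covers, here is the core idea sketched in Python rather than the Arduino C++ the book uses (the operators are the same in both languages); the sensor-nibble framing is my own illustration, not an example from the book:

```python
# Bit shifting: pack two 4-bit values (e.g. sensor nibbles) into one
# byte, then unpack them again. Shown in Python for illustration; the
# <<, >>, |, and & operators behave the same in Arduino's C++.

high, low = 0b1010, 0b0011

# Shift the high nibble 4 bits left, then OR in the low nibble.
packed = (high << 4) | low
print(bin(packed))  # -> 0b10100011

# Unpack: shift right to recover the high nibble, mask for the low.
assert packed >> 4 == high
assert packed & 0x0F == low
```

The same pattern shows up constantly in embedded work, for example when reading multi-byte values out of a sensor's registers.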

Later chapters cover specific topics like Getting Input from Sensors, Physical Output, Audio Output, and Wireless Communication. I really like that each how-to section not only shows you how to complete the task at hand but also gives you background into what is really going on. This is especially useful when you reach advanced topics like driving motors. The book gives a background of different kinds of motors, how they work, and how they are controlled before diving into specific tasks. This structure gives you a great electronics primer as you learn the ins and outs of Arduino.

If you buy only one book on Arduino, make it this one. It gives you everything you need to get the most out of your hacking endeavors.

on Amazon, paper and Kindle


direct from O'Reilly as paper, ebook, or no-DRM PDF



First, watch this amazing video created by a newspaper industry research group. It depicts the digital newspaper of the future. The surprising part? The video was created in 1994! And yet the newspaper industry didn't listen to their own research.

Your homework over the holiday weekend isn't to learn the lessons of the video, but rather to consider why its advice wasn't heeded. It had no impact on the industry that sponsored it, which is now suffering the consequences.

Does amazing design and research have any real value if it doesn't effect any change? What can we do to make our designs have more impact?




This is part 2 of my N part series on building a DIY CNC machine. If you missed it, here is part 1.


One of the trickiest parts of building a CNC machine is the linear sliders. Each axis requires a rail to slide the carriage on. This is called a linear slider. The sliding mechanism must have very low friction but also be strong enough to support the weight of the other axes and the head (which could be as heavy as a router). The carriage must also grip the slider tightly to ensure it doesn't slip or turn. The accuracy of the final CNC machine is directly related to the quality and tolerances of the slider. Needless to say, professional slider mechanisms aren't cheap.

After researching what others have done, I decided to build my linear sliders out of OpenBeam using #608 roller skate bearings. Skate bearings have a few advantages: they are produced in mass quantities, so they are pretty cheap for the quality you get. They are also usually sealed to keep out road dirt, so they can handle any debris the CNC might produce. And because they are a standard size, getting parts to fit them is fairly easy. Here's a quick look at two versions of my bearings.

Skate bearings typically start at 1 USD each in sets of eight and go up from there. However, if you buy a lot at once you can get them for as little as 50 cents apiece. I purchased my bearings from Amazon as a pack of 100 for 50 USD. A CNC will end up using at least 20 bearings, so it's worth going for the large pack.

To attach the bearings to the beam I used long M3 screws from my local hardware store, about 20mm. Of course the inner diameter of the bearings (the bore) is much bigger than the diameter of the screws, so I needed to fill in the space.

After playing around with a variety of spacers, sleeves, and bushings I found a combination that would work, as depicted in this photo:

This works and is what I used in the current version of the CNC machine, but it has a few disadvantages. The nylon parts are longer than the bearing is wide, so I had to trim them by hand. Plus, two of them are needed for each bearing, which adds cost and assembly time. I also had to add some washers to prevent the screw head from going through the center and to give more space between the beam and the bearing. Not bad for a first try, but I need something better.

Some searching on Amazon turned up some better pieces:

This bearing assembly uses a slightly shorter screw, 16mm (also in cool anodized black). I also found spacers that were the perfect length, no trimming required. Their interior diameter is perfect for the screws, and the outer diameter is a tight fit in the bearings, but I can pop them in easily with pliers. Since they fit with friction I don't need to worry about washers to hold the bearing on. To give more space I used some of my standard M3 nuts. Since I already have to buy tons of them, they are cheaper than washers, and adjustable to boot. Score!

I'm learning that creative sourcing and constant redesign are the only way I will reach the 200 USD goal.

Shaft Coupler

The next big challenge for a CNC machine is attaching the lead screw to the stepper motor. I would think this would be easy. I would be wrong.

The lead screw needs to be attached to the stepper motor securely so it won't slip, but it also needs a bit of flexibility to absorb vibrations. The two shafts also must be perfectly concentrically aligned or else the carriage will bounce up and down. Doing all of this for a decent price is very hard. Commercial solutions run 30 USD per shaft coupler.

I found several people online who use rubber tubing with clamps. Unfortunately I found the tube around the stepper shaft was too loose. If I tightened it with the clamp then the shaft would never be concentric with the lead screw.

My next attempt is what you see in this photo:

I used two pieces of tubing, one nested inside the other. The smaller tube is shorter and only goes around the stepper shaft. The larger tube contains the smaller tube as well as the leadscrew. The small tube is tight enough that it doesn't need a clamp, so that reduces the complexity a bit. The larger tube still slips, so the clamp must remain.

The two shafts are now more concentric but still not perfect. It's a good start though. For my next attempt I will switch to aluminum tubing from a hobby store.

Pen Mount

I can't turn my CNC machine into a plotter without a pen of course. I started with a Sharpie rubber banded onto a piece of extrusion.

It almost works, but not quite. The pen has no give: it is either pressing very hard against the paper or not touching at all. When it presses hard it prevents the carriage from moving smoothly, so I get lots of skips and jumps. I'm still looking for a better solution that will add some spring.

That's it for now. Next time I'll show some of the improvements I'm making to the electronics side for V3.

I was in Sweden all of this past week for the OreDev conference. I had a wonderful time last year and it was first on my list to see again. The attendees are friendly and their technologies diverse, making it such a good learning environment. I was especially pleased to see they have added an entire track on User Experience. What follows is OreDev's take on the future of user experience design, from the visualization technologies coming out of Microsoft Research, to a brief history of touch interfaces, to the latest rapid development technologies for mobile devices.

Interactive Visualizations

First up is Eric Stollnitz from Microsoft Research's Interactive Visual Media Group. This is the group responsible for the zooming PhotoSynth technology. They are continuing to pump out amazing imaging technology, some of which he demoed to us.

Eric showed us the Image Composite Editor, which will stitch a multi-gigapixel panorama out of a hundred photos taken with a standard consumer digital camera. You just give it a directory full of images and it does the rest, arranging and blending the photos into a single final image. I especially liked how it would arrange the photos randomly and then move them into place as it does the analysis. This gives you a sense of what the app is doing. It uses realtime feedback as you change parameters, updating the generated panorama with lower quality thumbnails. Once you've got settings you like, press the render button and let it run in the background, producing the final (very large) image. Future versions of this app may work with video footage as well.

The first thing he noted about their demos is that almost everything is now done with WPF, Microsoft's vector-based UI toolkit for writing native Windows apps. It lets developers work at a higher level and produce apps that would be difficult or impossible with the older WinForms toolkit. In the Java world this parallels the relationship of JavaFX and Swing. The panorama stitching app was originally written with older technology, and one intern was able to completely rewrite it in WPF over the summer.

Eric demoed several apps using the DeepZoom technology that has been shown by MS for the past couple of years. It's interesting to see how this technology has evolved. Early demos simply focused on the amazing technical ability of zooming into insanely large images. Now that this is commonplace, the focus has shifted to what useful things can be done with the technology, which is really far more exciting. Modern computers have so much excess computing power; it's nice to see us doing interesting things with it.

My favorite DeepZoom app is called WorldWide Telescope. It combines beautiful large space images from various telescopes with user created content. You can 'tour' the galaxy with paths from different tour guides. In one example a seven year old kid first showed us his home in Toronto, followed by a tour of his favorite constellations. Another tour took us from earth, through the solar system, to the local galaxy structures, and finally to the edge of the known universe. The blending of multiple images into the same experience was simply amazing.

Tap is the New Click

Next up was Dan Saffer of Kicker Studio. It turns out I've been reading his blog for months; I found it while Googling for touch interfaces.

Dan gave us a brief history of touch technology, an overview of the various options, and finally some design guides for using touch in new interfaces.

One of the most interesting challenges in designing touch interfaces is the fact that fingers are significantly larger than a mouse pointer, and that your hand can be in the way of the screen. With clever design you can address these shortcomings, but it's not trivial.

One way to solve the size issue is with iceberg and adaptive touch points. A touch point is simply a place on the screen where a user can touch. It's what we'd think of as a button or clickable image in traditional user interfaces. An iceberg touch point is when the touchable area is larger (sometimes a lot larger) than the apparent visual bounds of the thing being touched. This focuses the user's attention in the center of the touch point, but allows for a lot of error.
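The iceberg idea is easy to sketch in code. Here's a minimal Python hit test; the function name and padding value are my own, purely for illustration:

```python
# An "iceberg" touch point: the touchable region extends well past
# the widget's visible bounds, so near-misses still register.

def hits(bounds, x, y, padding=0):
    """Return True if (x, y) falls inside bounds expanded by padding.

    bounds is (left, top, width, height) in pixels.
    """
    left, top, w, h = bounds
    return (left - padding <= x <= left + w + padding and
            top - padding <= y <= top + h + padding)

# A 40x40 visible button with 20px of invisible "iceberg" slop on each side.
button = (100, 100, 40, 40)

# A touch just outside the visible edge still registers.
print(hits(button, 95, 120, padding=20))   # True
print(hits(button, 95, 120))               # False
```

The user aims at the visible center, but the generous invisible border absorbs the error.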

An adaptive touch point is where the touch bounds adjust based on context. For example, in the iPhone keyboard if you type 't' then 'h', the OS can guess you are likely to type a vowel next (at least in English). The keyboard can adaptively expand the touch size of the vowels to make them easier to hit. This improves the error rate of touch keyboards significantly.
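Here's a rough sketch of how an adaptive touch point might work. The probability table, sizes, and names are invented for illustration; this is the spirit of the keyboard example, not Apple's actual implementation:

```python
# After typing "th", keys likely to come next (vowels, in English)
# get a larger invisible hit area. The visual keyboard is unchanged;
# only the hit-test radius adapts.

LIKELY_NEXT = {"th": set("aeiou")}  # toy stand-in for a language model

def touch_radius(key, typed, base=22.0, boost=1.5):
    """Return the hit-test radius for a key given the text typed so far."""
    likely = LIKELY_NEXT.get(typed[-2:], set())
    return base * boost if key in likely else base

print(touch_radius("e", "th"))  # 33.0  (expanded: a vowel is likely)
print(touch_radius("z", "th"))  # 22.0  (unchanged)
```

A real keyboard would use n-gram frequencies rather than a hand-written table, but the mechanism is the same: bias the hit test toward what the user probably meant.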

The final example Dan showed us was a visual remote control system his firm developed. It's a set top box for your TV that uses a camera and vision system to let you control the TV with just hand gestures. They developed a simple language of gestures for changing the volume, navigating menus, and turning the TV on and off. It was interesting to see how they reduced a complex task into a small set of simple gestures through very careful research and design. Great stuff.

Quote of the day: The best designs are those that dissolve into behavior.

Making web applications for iPhone

On Thursday I attended a talk by Michael Samarin on developing iPhone web applications without navigating Apple's app store or writing a single line of JavaScript (well, okay, he wrote 3 lines). He did everything with Dashcode, Apple's visual web design tool, and Java servlets in NetBeans.

Dashcode has improved a lot since I last used it. Originally for building dashboard widgets, it has turned into a general purpose web design tool with a drag and drop interface similar to their Interface Builder tool for desktop apps. Most importantly, it has tons of JavaScript-enabled widgets that emulate the native iPhone environment.

Michael built a simple app on stage that could navigate and play a directory of video clips on his server using a JSON web feed from the servlet. Dashcode lets you bind JavaScript UI controls to JSON fields, making a functional app with only about 3 lines of actual code. You can even add simple hardware accelerated 3D flip transitions using CSS. The server side component was a straight Java servlet which parses the directory structure on disk and generates the JSON feed.
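The server side is easy to picture. Here's a rough sketch in Python rather than Michael's actual Java servlet, with field names I made up; it just walks a directory and emits a JSON feed that a UI could bind to:

```python
# Build a JSON feed describing the video files in a directory.
# "title", "url", and the /videos/ prefix are illustrative, not the
# real feed format from the talk.

import json
import os

def video_feed(directory, extensions=(".mp4", ".mov", ".m4v")):
    """Scan a directory and return a JSON string listing its video clips."""
    items = []
    for name in sorted(os.listdir(directory)):
        if name.lower().endswith(extensions):
            items.append({
                "title": os.path.splitext(name)[0],  # filename minus extension
                "url": "/videos/" + name,
            })
    return json.dumps({"items": items})
```

Serve that string with an `application/json` content type and the client side reduces to fetching the feed and binding the `items` array to a list control.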

Next he used some AppleScript called from Java to build a web based remote control for the QuickTime player. It was simple to build and surprisingly responsive. Every bit as good as a native iPhone app without all of the headaches. Michael's company, Futurice, now often recommends web apps to their clients instead of native iPhone apps because development is far shorter and cheaper, and apps can be updated instantly without going through Apple's approval process.

The rapid development time really lets his team focus on the user experience. While I still believe in plugin based rich clients for the web, I was very impressed with what's possible in pure JavaScript and CSS on the iPhone. They've got quite a smooth stack.


I love OreDev because it exposes me to technology I wouldn't otherwise get to see. Ultimately great design doesn't depend on the technology you use, but how you use it. And if the talks presented here are any indication, the future looks bright for many technologies.

That's the highlights for me. I did my final presentation on JavaFX Friday followed by some fun demos for a local Java group. Both went quite well and were good prep for a new app I'll be launching in a couple of days. Now time for some sleep.

Another month has gone by with no update to Leonardo, or a real release of Amino. It's interesting how life changes. When I started these projects last summer I had no idea Jen and I would be having a baby in a month, nor did I truly have any notion how much my life would change. Everyone always says having children will change your life, but you never really understand it until you do it yourself, and our journey has just begun.

So, the upshot of all this rambling is that kids take time, and when you have to distribute a finite resource between multiple buckets, something has to get less. Sadly this time the short straw goes to my open source projects. It doesn't mean I won't work on them anymore, just at a slower pace. However, in order to feel at peace with myself I need to leave them in a state where they can still progress without my large time commitment. That's what this post is about.

I've spent the last year working on two main open source projects called Leonardo and Amino. Quick recap: Amino is a scene graph library for Java and JavaScript. Leonardo is a vector drawing program built on top of Amino. I want to get them both to a state where they are stable, useful, and can live on their own. Hopefully more of my job will be driving the community and integrating patches rather than directly coding everything. Every project reaches a point where it should stop being driven by a singular vision, and instead be driven by the needs of actual users (the good projects anyway). Now is that time. Time to focus on gaining adoption, growing a community, and making these projects rock-freaking-solid.

Concrete Plans


Amino

Amino basically works. Shapes are animated and drawn on screen, input events are properly transformed, and it's got pixel effects on both the Java and JavaScript sides. What's left is speed, efficiency, and features driven by actual use.

Amino finally has its own website, and I've set up auto-builds for both the Java and JavaScript versions. They also have a redesigned API doc template. Last steps before a 1.0 release: bug fixes for Mobile Safari and Firefox, more demos, and a tutorial for integrating with Swing apps. (Oh, and if someone has a nice spring easing function, that would be swell.) Target: next weekend.


Leonardo

It's basically done. It lets you draw vector art of shapes, images, and paths, and also create attractive presentations (which are just multiple pages of vector art). Now comes polish, adoption, and export features. I suspect the real value will be in the export features, so I need to know from you guys: what do you want?

In concrete terms I have a bunch of old bugs to fix and will finish the redesigned fill picker (colors, swatches, gradients, patterns, etc.). I also need your help updating the translations. Once that's done I'll clean up the website and cut a 1.0 release. Target: end of April.

Next Steps

In short, a lot of work for the next few weeks, but with luck (and hopefully some great feedback from you), both Amino and Leonardo will be just fine.


It's the fashionable thing to speculate on future Apple products. One idea that continues to get traction is the Apple TV, a complete TV set with integrated Apple-y features. Supposedly to be announced this fall, after failing to appear at any event for the past three years, it will revolutionize the TV market, cure global warming, and cause puppies and kittens to slide down rainbows in joy.

I don't buy it. I don't think Apple will make a TV. Televisions are low margin devices with high capital costs. Most of the current manufacturers barely break even.

Furthermore, the market is saturated. Pretty much anyone in the rich world who wants a TV has one. Apple needs growth opportunities. The last thing they need is a new product with an upgrade cycle at least twice as long as desktop computers. It doesn't make sense.

All that said, speculating about products is a useful mental exercise. It sharpens the mind and helps you focus when working on real products. So here we go:

If Apple Made a TV, How Would They Do It?

First let's take care of the exterior. In Apple fashion it would be pretty and slender. Either a nice brushed aluminum frame like the current iMac or a nearly invisible bezel. I suspect they would encourage wall mounting so the TV appears to just float. The current Apple set top box would be integrated, as would network connections, USB, etc. Nothing to plug in except the power cord.

Next, we can assume the current Apple TV will become the interface for the whole device. A single remote with on screen controls for everything. While I love my Roku, I hate having to use one remote for the interface and a second for power and volume.

Third, they will probably add a TV app store. I don't think it will feature much in the way of games and traditional apps. Rather, much like the Roku, there will be apps for each channel or service. The current Apple TV essentially has this now with the Netflix and HBO apps. The only difference would be opening the store up to more 3rd party devs.

I think we can assume this will be another client of your digital hub. Apple already wants us to put all of our music, videos, and photos into their cloud.

So far everything I've described can be done with the current Apple TV set-top box. So why build a TV? Again, I don't think they will; but if they did, they would need to add something to the experience beyond simply integrating a set-top box.

First, a camera for FaceTime. Better yet, four of them, one in each corner of the screen. Four cameras would give you a wide field of view (with 3D capture as a bonus) that can track a fast-moving toddler as they move around the living room. This is perfect for video chatting with the grandparents.

Furthermore, there are modern (read: CPU intensive) vision algorithms that can synthesize a single image from multiple cameras. Right now the camera is always off to the side of the screen, so your eyes never meet when you look at the person on the other end. With these algorithms the Apple TV could create a synthetic image of you as if the camera was right in the middle of the TV. Combined with the wide field of view and a few software tricks we could finally have video phones done right. It would feel like the other person is right on the other side of your living room. It could even create parallax effects if you move around the room.

Video calls are a clear differentiator between the Apple TV and regular TVs, and something that a set top box alone couldn't do. I'm not sure it's enough to make the sale, though. What else?

How about the WWDC announcement of HomeKit? An Apple TV sure sounds like a good home hub for smart automation accessories. If you outfit your house with smart locks, door cameras, security systems, and air conditioners, I can see the TV being a nice place to see the overview. Imagine someone comes to the door while you are watching a show. The show scales down to a corner while a security camera view appears on the rest of the screen. You can choose to respond or pretend you aren't home. If it's the UPS guy you can ask them to leave the package at the front door.

I imagine the integration could go further. Apple also announced HealthKit. The Apple TV becomes a big screen interface into your cloud of Apple stuff, including your health data. What happens if you combine wearable sensors with an Apple TV? See a live map of people in the house, à la Harry Potter's Marauder's Map. An exercise app could take you through a morning routine using both the cameras and a FitBit to measure your vitals.

A TV really could become another useful screen in your home, something more than just a video portal. I think the idea has a lot of potential. However, other than a camera and microphones almost everything I've detailed above could be done with a beefed up standalone Apple TV set top box. I still don't think a full TV makes sense.

TinkerCad is a free web based CAD program. It runs entirely in the browser using WebGL, so you’ll probably want to use it with Chrome (I think Safari may work in Yosemite+). TinkerCad is meant for novice CAD users. So novice that you can know absolutely nothing about CAD and be able to make something after five minutes of their built-in learning quests (tutorials). Then you can save your creation to their cloud or download it for 3D printing.

TinkerCad isn’t full-featured. You can’t add chamfered edges, for example, but you can combine shapes with CSG operations, stretch and rotate them, and add useful prefab shapes like letters and stars. There is even a scripting language for building programmatic objects. The UI challenge of building a CAD program for newbies is daunting, yet somehow they did it. TinkerCad almost went out of business, since it turns out novice users are also unlikely to pay for CAD applications. Fortunately Autodesk bought them and made TinkerCad their free entry-level offering.

But this is a book review, right? 3D Modeling and Printing with TinkerCad is a new book by James Floyd Kelly. It walks you through the basics of navigation, creating shapes, merging and subtracting them, all the way to printing models and importing them into Minecraft. The book is very well written and easy to follow with lots of pictures.

So should you buy it? That depends. TinkerCad’s own interactive tutorials are quite good. While I enjoyed the book I’d say 75% of it covers the same things you’ll learn in the tutorials. It really comes down to whether you are more comfortable learning on screen or by reading a paper book. If you learn by paper, then buy it.

3D Modeling and Printing with Tinkercad: Create and Print Your Own 3D Models, James Floyd Kelly

I'm happy to say the Retro Game Crunch Kickstarter project succeeded! It was close for a while, but in the last 24 hours you pushed it over the line to 111%.

While you wait for the first game to drop, be sure to check out the game development primer the crew has put up. They show how to use Tiled, draw sprites, and hook it up with Flixel.

If you missed the interviews with the team, here they are.

Hi. My name is Josh Marinacci. You might remember me from the webOS Developer Relations team. Despite what happened under HP, webOS is still my favorite operating system. It still has the best features of any OS and an amazing group of dedicated, passionate fans. I deeply cherish the two years I spent traveling the world telling everyone about the magic of webOS.

However, I’m not here to talk to you about webOS. I want to tell you about my brother in law, Kevin Hill. Two years ago he was diagnosed with stage 4 melanoma. If you know anything about melanoma, then you know this was 100% terminal just a few years ago. Kevin and my sister Rachel have traveled the country joining every experimental trial to beat this. You can read about their amazing story on their site: The Hill Family Fighters.

The Hill Family

Kevin and Rachel have had amazing success but recently hit a roadblock: the scan a few weeks ago showed a spot on his brain. It’s a testament to his strength that he has continued to work remotely as a sysadmin through all of this (even using his TouchPad from the hospital), but the time has come to slow down. We have not given up hope that he can beat it, but this latest development means he must finally quit work and focus on staying alive.

It will be 90 days before his long term disability kicks in. That's 90 days without income, and just 50% of his salary after that. My sister works part time as a brilliant children’s photographer, but spends most days taking care of Kevin and their two little children, Jude and Evie. We’ve calculated that they need at least ten thousand dollars to Bridge the Gap and get them to the end of the year. This is where you come in.

I am auctioning off my entire collection of webOS devices and swag to help them cover the bills and fight the cancer.

[picture of bottles and phones]

I will be holding an online auction selling everything I have. There will be devices like Pre2s and Pre3s. Limited edition posters and beer steins. My personal water bottle that saved my life in Atlanta (it still bears the dents). In addition some of my former Palm co-workers are donating their own devices and swag. And the highlight is an ultra-rare Palm Foleo with tech support from one of the original hardware engineers.

With your help we can Bridge the Gap for the Hill family. Please add your name to my mailing list. [link] I will send you a note when I have a final date for the auction and when it starts. This list is just for the auction and will be destroyed afterward. Even if you aren’t interested in buying anything I could really use your help getting the word out. Let’s hit the forums, the blogs, and the news sites. The webOS community is the best I’ve ever worked with and I need your help one last time.

Thank you.


It's been a month since I posted so I'd say it's time for a rant. I've been traveling a lot lately so the object of my wrath this week is alarm clocks. Most specifically the alarm clocks in hotel rooms, but home use clocks don't get off easy either.

Alarm clocks have one purpose in life. There's only one thing they need to do to be considered a success. It's not to 'tell the time'. That's a nice bonus, but the purpose of an alarm clock is to get you up in the morning. This is doubly so for alarm clocks in hotel rooms. If you are sleeping in a hotel it's likely because you have traveled somewhere to do something, and it's also likely that you want to do that something at a certain time; hence a clock to wake you up so that you may do that something without being late and getting fired. A device which cannot reliably complete this basic task is simply a failure, and not worth being made or purchased. End.

The Bad

How are they bad? Oh, please let me count the ways. (cue evil grin of glee).

First, most clocks tack on a ton of extra features, like iPod integration. Then they massively overload these features onto a small number of buttons by using modes. Many will have a single set of buttons to set both the time and the alarm, with a switch to toggle between the modes. Modes aren't a great idea in desktop software when you have a huge screen. They are even worse on the limited user interface of a clock with a fixed LED readout. Which mode am I in? Did I set the time or the alarm? After I've set the alarm how do I know which mode I'm in now? Did it switch back automatically somehow or do I need to press another button?

Some clocks use quasi-modes to get around these problems. A quasi-mode is like a shift key: a button you hold down to temporarily enter a new mode, then release when you are done. Not a bad idea for a computer with a full keyboard. Absolute madness on a clock where you must hold the mode button with one hand and try to set the time with the other.

Even worse, some clocks put the mode button on the front instead of the top. Clocks, typically being small, are lightweight. So pushing from the front will shove it right off the nightstand. Now you have to use your third hand to hold the clock on the table, while accomplishing the aforementioned gymnastics.

Now do all of the above right when you go to bed... when you are sleepy.. and it's dark... No wonder so many people opt for a wakeup call or use the alarm on their phones. I always completely unplug the clock just in case the guy before me set it to ring at 3am. (yes, this actually happened to me)

Now suppose this is a clock you've never used before (very likely, since every hotel buys from a different supplier). Now you have to learn how to use this particular clock. Some come with their own instruction books. It's madness, I tell you. Madness!

The Good

Only once in my life have I found a non-sucky hotel alarm clock. It was in my room at a very nice hotel in Tokyo. Here's what it looked like.


Simply beautiful. Two buttons to change the alarm time up or down. One button to arm the alarm and snooze. AM/PM is indicated with real words.

How do you set the time? You don't. I'm serious, there were no buttons anywhere to set the time or customize the alarm. Either the time is set via radio or there's hidden controls locked inside somewhere. Maybe they use something wireless through that little transceiver on the front. The point is: I don't have to care. The only thing I care about is setting the wakeup time, so that's all the device lets me do. They also don't put in a radio since that would require controls to change the station. If you are in an international hotel in Tokyo you probably wouldn't understand the radio anyway, so jettison the feature. Simplify, simplify, simplify.

I don't know why this clock isn't used in the US. Perhaps it's because everything in Tokyo is from the future. Perhaps in another five years we will all be enjoying alarm clock bliss. No wait... I took that photo five years ago! Yeah, we're doomed.

The Ugly

So why do hotel alarm clocks suck so badly? I think there are two reasons. First, these devices are mass produced overseas, so corners are cut wherever possible to save costs and increase profits. If you can make a clock which uses two buttons to set the time instead of three, then you might save five cents. Across millions of units that adds up to a lot of money.

Second is the buying decision. I'm not 100% sure, but I suspect that electronic clocks in general have become commodity products. The guts are a single chip that costs around 25 cents, and it's probably the same chip used by everyone. So the various clock makers compete with each other on price, features, or by simply looking cool. Now, don't get me wrong: there's nothing wrong with competing on price, features, and visual design. But what suffers in this competition is usability.

I think usability suffers because of how clocks are bought. When you go to the store you can see the price, features and design from outside the box. What you don't see is how difficult it's going to be to actually set the alarm on the damn thing. The buying decision doesn't include usability. And since manufacturers optimize for the buying decision, usability gets dropped on the floor. C'est la vie.

The Lesson

I don't know how to fix the economics of alarm clock design, but the moral of today's story when applied to other products is simple: when you make a product you must design for the primary use case first and foremost. In this case that means setting the alarm reliably so you can wake up in time to not get fired. Everything else is secondary.

Okay kids. Time for work. I woke up late!

I've been wanting to get into electronics and building physical things for a while. I have a lot to learn though. My only exposure to microcontrollers was when I played with an Arduino for a day about two years ago. The last time I picked up a soldering iron or drew a schematic was my lone electrical engineering class in college nearly twenty years ago. My degree is in computer science with a focus on graphics and AI, giving me a decidedly software-only career. This makes picking up electronics both challenging and fun.

To start off I decided not to go the easy route, which would be to buy a prefab microcontroller and then program it. While I have an Arduino sitting on the shelf, that would be too comfortable for a software guy. Instead I decided to approach this from two directions. First, I bought some kits to put together entirely by hand with soldering, no programming at all. This should beef up my skills and introduce me to the various physical components available (resistors, capacitors, switches, etc.)

Second, I've come up with a project too challenging for someone with my skill level to build: a CNC machine. While I will likely fail during my first attempt, doing something so far out of my areas of expertise will force me to learn a lot of new things.

Learning to Solder

To kick things off I picked up a Larson Scanner kit from the Evil Mad Science store. This kit is quite easy to build; a great starter project for beginners. The micro-controller is pre-programmed and it comes with a PCB (Printed Circuit Board) so you just need to solder in some resistors and LEDs. It even comes with comic book style instructions.

To learn how to solder, read this short comic. I was wrong in my initial assumptions: you aren't melting the solder with the iron. Instead you are heating up the component and the metal pad on the board, which the solder then melts onto. Once I figured that out my joints started to look a whole lot better.

As you can see my soldering skills improved from the beginning to the end of the scanner kit.

Oh, and remember that the battery pack has a switch on it. The first time I put in the batteries nothing happened because I forgot to turn it on. :)

A soldering iron can be had very cheaply, but since I plan to do this for a while I invested in a good one. SparkFun sold me this soldering station of their own design, which has plenty of power and temperature control, for a very good price ($40).

CNC Machine

Now, on to the CNC machine. A Computer Numerical Control machine, or CNC, is sort of like a plotter. It moves a head in X and Y directions over a surface. However, instead of moving a pen or printer head it uses a drill or other cutting tool. Advanced versions also have a Z axis. This lets you cut many kinds of materials from wood and styrofoam all the way to thin aluminum and sheet metal. And of course it's only a few steps away from having a full 3D printer. All of these features make it a good project for me: something that I can improve over time and has real world uses.

Since I know absolutely nothing about these machines I have a lot to learn. What I've discovered so far is that it's best to start small and build up from there once you have something that works. To that end I created my first prototype of a single axis. It just moves a little carriage up and down a rail using a stepper motor turning a long screw.

CNC Machine Test 01 from Joshua Marinacci on Vimeo.

The stepper motor and driver came from SparkFun. Interestingly, the stepper driver is actually an open source design called the EasyDriver, designed by Brian Schmalz. You could of course build your own from components for less than what SparkFun charges, but I prefer to get the nice polished version rather than saving a few bucks. (And I do mean only a few; SparkFun's is pretty well priced.) For power I'm using a 9 volt battery, but will upgrade to a larger supply once I'm done testing.

The driver is controlled by an older Arduino I already had. The EasyDriver is quite easy to use. You simply toggle one pin for each step and set the direction with a second pin, high or low. Beyond that there is an open source AccelStepper library that can handle multiple motors at once and use acceleration.
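The step/direction interface really is that simple: one pin selects direction, and each low-to-high transition on the step pin advances the motor one step. Here's a sketch in Python that just models the pulse train; the function name is mine, and on real hardware you'd drive two GPIO pins with the same logic:

```python
# Model the EasyDriver-style step/direction protocol as pure data:
# a DIR pin level plus a sequence of STEP pin states.

def step_pulses(steps):
    """Return (direction, pulses) for a signed step count.

    direction is 1 or 0 for the DIR pin; pulses is the sequence of
    STEP pin states, one low-high pair per step.
    """
    direction = 1 if steps >= 0 else 0
    pulses = []
    for _ in range(abs(steps)):
        pulses.extend([0, 1])  # each rising edge = one step
    return direction, pulses

direction, pulses = step_pulses(-3)
print(direction)        # 0  (reverse)
print(pulses.count(1))  # 3  rising edges, so 3 steps
```

On an Arduino the loop body would be two `digitalWrite` calls with a short delay between them; libraries like AccelStepper wrap exactly this pattern and add acceleration ramps on top.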

I created the metal carriage by hand using aluminum extrusion and brackets from OpenBeam, an open source hardware company based in Seattle. I'll have a lot more on OpenBeam in a future blog post. OpenBeam uses standard M3 hex nuts so I have a mixture of silver ones from OpenBeam and black ones from Amazon.

The Big Challenge

Making a CNC machine is not actually my challenge. My real challenge is to create one for under $200. After pricing out components I really think this should be possible. By reducing costs, Arduino has greatly reduced the barrier to entry for learning about electronics and microcontrollers. If we can build a $200 CNC I think it will kick off a revolution in home construction. Even if I fail to meet the sub-$200 price tag I will learn a whole lot in the process.

Hopefully it goes without saying that I will document everything on this blog and open source the plans and schematics. No matter how smart one brain is, a team of brains can do more. I would love your help on this project.

If you don't already, please subscribe to my RSS feed or follow me on Twitter. Exciting things are in the works.

Building Mobile Applications with Java Using GWT and PhoneGap

I'm happy to announce that my new book Building Mobile Applications with Java Using GWT and PhoneGap has been published in both print and ebook editions. While I love having a print edition, the Kindle ebook version, at $9.99, is half the price for the exact same content. Sure, you can't write on it with a marker, but the convenience and price are well worth it.

If you didn't catch it in one of my earlier blogs, Building Mobile Apps w/ Java, GWT, and PhoneGap is a shortish book (~80pgs) that gives you the good stuff and none of the fluff. GWT and PhoneGap are two great open source tools that, combined, let you use Java to make apps for non-Java platforms. This book gives you everything you need to know, starting with building a simple GWT app and compiling it for iOS, Android, and webOS. After we get familiar with the tools I'll walk you through some 3rd party libraries that make your apps feel more native. Then we will finish up with a 2D video game using the accelerometer and a physics library.

My goal was to give you everything you need to get started in as short a time as possible. You are busy people who need to get back to coding, so I don't want to waste your time. I think the finished work is right on target.

You can buy it here as well as download the source to the projects.

HTML Canvas : A Travelogue

I have also updated my self-published app-book on HTML Canvas. It gives you a gentle walkthrough of what Canvas is, how to use the APIs from drawing to pixel pushing, and then shows you how to build a simple game with 2D sprites and particle effects. The book also uses an experimental form of interactive code where you can change the values of variables in examples and see the graphics update in real time.

I call this book an EverBook, meaning updates will always be free. The book itself is an app, so if you already bought it, just check your app catalog for the update. I've fixed a bunch of bugs, improved performance, and fleshed out a few sections. Also, stay tuned for a few more interactive features I've got up my sleeve.

You can buy the book here. $4.99 for iOS and $0.99 for webOS. (Shoutout to my webOS peeps!)

All of the rampant speculation about the new iPad amuses me. Not because I think the speculation is wrong, but because it's unnecessary. Apple is actually a very predictable company. They release and update their products according to very reliable patterns. Perhaps we just want to believe that something truly unexpected will happen, even when 99% of the time it doesn't. For example...

The iPhone 4s

Lots of people expected an iPhone 5, even though the iPhone 3g was followed by the 3gs. The simplest thing for Apple to do was follow the iPhone 4 with the 4s. Update the specs and add one big new feature: Siri. The form factor remained the same. The previous generation stayed around to be the entry level model. This is the same thing they did with the 3 and 3gs.

Apple also updates the iPod, iPhone, and iPad on a once-a-year schedule. Very simple. Easy to predict. And it lets Apple leverage large economies of scale. It's much harder to get cheap components when you ship 20 devices a year instead of 3.

The iPad 3

When the first iPad was rumored I thought it would be a large iPod Touch rather than some touch enabled Mac. Why? Because it was the simplest thing to do. Give the iPod Touch a bigger screen, and boost the specs. Simple. Creating a touch enabled Mac would have been much harder and complex.

The iPad 2 was the same as the iPad 1 but with updated specs and one new feature: the cameras. The form factor was modestly changed but not much. A simple evolution. I expect the iPad 3 to be the exact same thing: updated specs and one core new hardware feature. This time it will probably be the screen that is updated, to a 2x resolution retina display. That will be it. No crazy new touch technology. No stylus or SD card slot. Just a new screen and updated specs. Apple really is quite predictable.

Perhaps we enjoy reading the speculation and crazy rumors because we really *hope* Apple will do something radical. The original iPhone launch is probably the only time reality lived up to the hype. The only time Apple genuinely surprised us. Most of the time Apple is actually pretty conservative.

Please send feedback and comments to my twitter account @joshmarinacci instead of on the blog. Thanks!

Over the past few weeks I've done more experiments and improvements to my ebook prototype. I'm still not sure what I'm going to do with it once I'm all done, but it's been an educational exercise nonetheless. Here's what I've done so far:

I've reorganized the code and put it into a github repo. Everything I'm going to show you is available to use and fork from right here. Click on any screenshot below to see a live example.


If we are going to call this stuff books then we need good typography. Fortunately 2011 saw the widespread adoption of web fonts. We can now use any custom font we want in any webpage, which includes the iPad with the release of iOS 5. To that end I tried searching for some custom fonts that add a bit of flavor while still remaining readable. After this font test I mocked up a full page with better typography for reading. This included bigger font size contrast, looser line height, a narrower column, auto-capitalizing the subheading at the top, and some inline images. Overall I'm happy with what can be done in pure CSS after just a few minutes' work.
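The reading styles described above can be sketched in a few lines of CSS. To be clear, the class names and font choice here are my illustration, not the actual stylesheet:

```css
/* A sketch of the reading styles: narrow column, loose leading, big
   size contrast, and an uppercased subheading. Class names and the
   web font are hypothetical stand-ins for the real stylesheet. */
.chapter {
    max-width: 33em;                 /* narrower column */
    margin: 0 auto;
    font-family: "Crimson Text", Georgia, serif;  /* any web font */
    line-height: 1.6;                /* looser line height */
}
.chapter h1 {
    font-size: 2.6em;                /* bigger size contrast with body text */
}
.chapter .subheading {
    text-transform: uppercase;       /* auto-capitalize the subheading */
    letter-spacing: 0.08em;
}
```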

[screenshot]



When we talk about typography we also have to consider how the reader should view long form text. The world seems roughly split between scrolling and swiping between pages. I honestly hate page swiping unless the content really needs to be paginated (like a children's book). I did try some pagination experiments using CSS multi-columns but I'm not happy with the results. Perhaps with some more work they would be usable.
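For reference, the multi-column experiments amounted to something like this. This is a sketch with illustrative sizes; in 2012 the column properties still needed vendor prefixes:

```css
/* Turn one long chapter into side-by-side "pages" using CSS columns.
   Sizes are illustrative; -webkit- prefixes were required at the time. */
.paged {
    height: 100%;
    -webkit-column-width: 320px;   /* each column becomes one "page" */
    -webkit-column-gap: 40px;
    column-width: 320px;
    column-gap: 40px;
    overflow-x: auto;              /* swipe horizontally between pages */
}
```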

[screenshot]


Instead I've worked on scrolling. Scrolling feels very natural on a touch screen device, but you still need some static navigation to know where you are, switch pages, and view the table of contents. Fortunately CSS fixed positioning is pretty well supported these days, with IFRAMES as a useful backup, so it wasn't too hard.
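A minimal sketch of that static navigation, assuming a toolbar element (the selector and height are mine, not the prototype's):

```css
/* Pin a toolbar to the top of the viewport while the text scrolls
   beneath it. Selector and height are hypothetical. */
.toolbar {
    position: fixed;
    top: 0;
    left: 0;
    right: 0;
    height: 44px;
}
body {
    padding-top: 44px;  /* keep the first line of text clear of the bar */
}
```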


Interactive Chart

I cooked up a simple bubble chart using Amino and some real data from the World Data Bank. You can use the slider to move time from 1960 to the present and see how the data changes. I think this type of interactive visualization will be very useful in ebooks.

[screenshot]


Interactive Code

To teach how to use a visual API like Canvas we should have a visual tool. We should be able to see what happens when we change variables, and actually see the code and canvas example change in real time. In my research I came across an amazing toolkit by Bret Victor called Tangle. Using that as a base I prototyped a simple tool for interactive text snippets. It rewrites any Canvas JavaScript function you give it into a live example with formatted source. When you drag on one of the interactive variables (indicated in red) a popup appears to show you the value. As you drag left and right the value changes and the canvas updates to show the new result. This is the most direct way to learn what a function does.
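The rewriting step can be sketched roughly like this. This is my simplification, not Tangle's actual code: it only finds numeric literals and wraps them in spans that a drag handler could later attach to.

```javascript
// Wrap each numeric literal in a source string with a span that a
// drag handler can attach to. A simplified sketch, not Tangle itself.
function markAdjustable(source) {
    let id = 0;
    return source.replace(/\b\d+(?:\.\d+)?\b/g, function (num) {
        return '<span class="adjustable" data-var="v' + (id++) + '">' +
               num + '</span>';
    });
}

// The canvas call below becomes markup with four draggable values.
console.log(markAdjustable("ctx.fillRect(10, 20, 100, 50);"));
```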

[screenshot]

Code Wrapping

Another problem with code snippets is that they are often too long to fit on a single line. I could configure a PRE tag to wrap the long lines, but whitespace is usually significant in code. I don't want to create a situation where the reader thinks something is two lines when it is really one, such as a command line example they are trying to type in. Still, we need a way to view long lines. I played with various scrolling techniques but ultimately found they added more problems than they solved. Instead I found this great technique by [name] to creatively wrap lines. A wrapped line is shown indented with an arrow symbol to indicate it is wrapped from the previous line. This removes any ambiguity.
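The core of the technique is a hanging indent per code line. This sketch assumes each source line sits in its own block-level element (class names are mine); the arrow glyph can then be drawn in the gutter that wrapped text falls into:

```css
/* Each code line is its own block; wrapped rows indent into a 2em
   gutter while the first row stays flush left. Class names are
   hypothetical. */
pre.wrap .line {
    display: block;
    white-space: pre-wrap;
    word-wrap: break-word;
    padding-left: 2em;   /* wrapped rows land in this gutter */
    text-indent: -2em;   /* first row pulls back flush left */
}
```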


[screenshot]


Three Dee

The canonical example of what a digital book can do that a paper book can't is spinning an object around in 3D. I don't know how useful this will be in practice, but I wanted to prove it was possible. Using the amazing Three.js library I embedded a simple block that you can rotate using mouse drags or touch events. Three.js is really designed to be used with WebGL, which isn't supported yet on most mobile platforms, but it can also render with plain canvas. It's not as fast, of course, but for simple flat shaded models it works well enough.

[screenshot]


Putting it all together

To really show off what this can do I put together a book demo using some real content from my HTML Canvas Deep Dive presentation last summer. It has a title page, table of contents (generated with a simple nodejs script), and two full chapters with code snippets, examples, and photo slideshows. I think the results speak for themselves.
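The TOC generator could be as small as this sketch. The heading level, file names, and anchor scheme here are assumptions on my part, not the actual script:

```javascript
// Scan a chapter's HTML for <h2> headings and emit a linked table of
// contents. A sketch: the real script's heading levels, anchors, and
// file names may differ.
function buildTOC(html, chapterFile) {
    const items = [];
    const re = /<h2[^>]*>(.*?)<\/h2>/g;
    let match;
    while ((match = re.exec(html)) !== null) {
        const title = match[1];
        // Derive an anchor id by slugifying the heading text.
        const anchor = title.toLowerCase().replace(/[^a-z0-9]+/g, "-");
        items.push('<li><a href="' + chapterFile + "#" + anchor + '">' +
                   title + "</a></li>");
    }
    return "<ul>\n" + items.join("\n") + "\n</ul>";
}

console.log(buildTOC("<h2>Paths</h2><p>...</p><h2>Pixel Buffers</h2>",
                     "chapter1.html"));
```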


Next Steps

I don't really know what the next steps are. I'm going to finish up the canvas book (probably eight chapters by the time I'm done) and use PhoneGap to put it in the iPad, webOS, and Android app stores. After I've proven it's possible I'm not sure what to do next. I did this as an experiment to research the state of the art. I think what I've put together could become a great set of tools for building interactive ebooks since the markup you actually have to write is rather minimal. Let me know if you would have a use for this code if I fully developed it.






Listening to some podcasts about mobile devices, I heard over and over statements like "iPhone changed the world with multi-touch" and "Android could compete with Apple if it had multi-touch." This simply isn't true. Okay, perhaps not a lie, but the success and value of multi-touch is extremely overrated. In fact, the iPhone barely uses multi-touch!

Don't believe me? Think back to the iPhone of 2007 when it launched. Or just look at the Apple-provided apps in today's iPhone (since not much has changed). How many of these built-in apps use multi-touch? I count only three: Maps, Photos, and Safari. All three of them use multi-touch in a single simple way: zooming in and out. You could make a non-multitouch iPhone by simply providing a zoom-out button for these three apps (and a quick-tap for zooming in).

Very little of what made the iPhone interface so revolutionary was multi-touch. What made it so great was the focus on using a single finger for virtually all interaction with the device. The designers at Apple decided from day one to make a device that was finger centric. This means UI controls that are large and require only a tap gesture to activate. Swipe gestures are used for navigation. And that's it. Taps and swipes with a single finger. That's what made the iPhone so great, not multi-touch (or the accelerometer, for that matter).

Oh, and one more thing made it great. The large screen with a capacitive touch sensor. Older touch enabled devices (like my old beloved Treo) used resistive screens which were far less accurate and required you to either push hard with your finger or use a stylus. A Treo with a capacitive screen could have supported an iPhone like interface with ease (even on the much slower hardware available to it at the time).

While the hard glass iPhone screen does support multiple touch-points at once, that's not what made it a success. It's designing an interface and device from the ground up for finger-based touch interaction that changed the mobile device playing field. Multi-touch is simply a red herring.

Finger photo used under Creative Commons from Flickr user bayat

Hi. My name is Josh Marinacci. You might remember me from the webOS Developer Relations team. Despite what happened under HP, webOS is still my favorite operating system. It still has the best features of any OS and an amazing group of dedicated, passionate fans. I deeply cherish the two years I spent traveling the world telling everyone about the magic of webOS.

However, I’m not here to talk to you about webOS. I want to tell you about my brother-in-law, Kevin Hill. Two years ago he was diagnosed with stage 4 melanoma. If you know anything about melanoma, then you know this was 100% terminal just a few years ago. Kevin and my sister Rachel have traveled the country joining every experimental trial to beat this. You can read about their amazing story on their site: The Hill Family Fighters.

The Hill Family

Kevin and Rachel have had amazing success but recently hit a roadblock: the scan a few weeks ago showed a spot on his brain. It’s a testament to his strength that he has continued to work remotely as a sysadmin through all of this (even using his TouchPad from the hospital), but the time has come to slow down. We have not given up hope that he can beat it, but this latest development means he must finally quit work and focus on staying alive.

It will be 90 days before his long term disability kicks in. That's 90 days without income, and just 50% of his salary after that. My sister works part time as a brilliant children’s photographer, but spends most days taking care of Kevin and their two little children, Jude and Evie. We’ve calculated that they need at least ten thousand dollars to Bridge the Gap and get them to the end of the year. This is where you come in.

I am auctioning off my entire collection of webOS devices and swag to help them cover the bills and fight the cancer.

I will be holding an online auction selling everything I have. There will be devices like Pre2s and Pre3s. Limited edition posters and beer steins. My personal water bottle that saved my life in Atlanta (it still bears the dents). In addition some of my former Palm co-workers are donating their own devices and swag. And the highlight is an ultra-rare Palm Foleo with tech support from one of the original hardware engineers.

With your help we can Bridge the Gap for the Hill family. Please add your name to my mailing list. [link] I will send you a note when I have a final date for the auction and when it starts. This list is just for the auction and will be destroyed afterward. Even if you aren’t interested in buying anything I could really use your help getting the word out. Let’s hit the forums, the blogs, and the news sites. The webOS community is the best I’ve ever worked with and I need your help one last time.

Thank you.



I've always meant to go back and read some of the really old scifi that people have always talked about but I've never read. Now is finally that time. As a fan of mainly 50s-through-70s authors (Asimov, Clarke, Heinlein, Niven), I've rarely read anything earlier than the late forties. (Jules Verne being a notable exception.) My goal is not so much to read the novels for pure enjoyment, but to determine whether they are really worthy of their place in history. Were they really that good? Did scifi get better? Has it gotten worse again? In that spirit, let's set the time machine to 1917.

A Princess of Mars

Edgar Rice Burroughs, 1917, 326 pp

I've read a few of the Tarzan novels by Burroughs and never felt drawn into them. With the upcoming film adaptation, John Carter, I thought it was time to finally get into the series.

A Princess of Mars is the story of Civil War vet John Carter searching for gold out west in the 1870s. He is mysteriously transported to Mars and quickly captured by a race of tall multi-armed green martians.  Thanks to his fighting skills, resourcefulness, and a body accustomed to the heavier Earth gravity; he quickly learns the language of his jailers then escapes with the captive princess Dejah Thoris of the red martians (who conveniently look like really attractive humans).  Throughout the book he goes on various adventures across the planet, gaining fame and glory among the martians all while learning the secrets of their planet.

The Martians call their planet Barsoom, so you will often hear the novels known as the Barsoom series. Burroughs wrote 10 more in the series over the next thirty years, though I get the impression they get progressively more derivative as time goes on.

A Princess of Mars was his first novel, but it's much better than I expected. The science is horrible by today's standards because it was written in a time when we believed Mars had canals, water, and possibly intelligent life, but for the time it was pretty visionary. He reasonably explains the different societies, lighter-than-air travel, light-based power sources, and the thin but sustaining martian atmosphere. Pretty good for the time it was written.

Make no mistake: this is a swashbuckler. People of the teens and twenties liked their buckles fully swashed, and swashed they shall be. A Princess of Mars has exotic women in skimpy outfits, green bug-eyed villains, oodles of chase scenes, and sword fights by the score. It's quite fun to read and imagine it played in a theater between The Lone Ranger and Flash Gordon. Being public domain and free on the Kindle doesn't hurt either.


So, is it worth reading?  I say yes. It's a fun and fast read as well as a piece of sci-fi history. You will find references to Barsoom in many later works throughout the 20th century. It also inspired a generation of authors and scientists from Clarke to Sagan.

Wikipedia entry

Amazon Kindle page

John Carter movie website



I'd like to ask my dedicated readership a very big favor. I'm starting a podcast with my friend Robert Cooper. The challenge is determining the direction. In a lot of the fields we are familiar with there are already some great podcasts (like the Java Posse). We can't decide if it should be programming centric, cover technology issues, or discuss things that are more future oriented (driving cars, space travel, etc.).

So we'd like your help. We recorded three different podcasts with different topics. If you can spare the time to listen to them we'd love to get your feedback. You can email me at joshua at marinacci dot org.

A warning, these are very rough. I've barely edited them, haven't added music and credits, or trimmed them. For the final podcast we are targeting 30 minutes. Anything longer feels like something no one would listen to.

  - Josh

Direct MP3 links

Almost since it was first released, fans of the Raspberry Pi have asked when the hardware will be updated with better components. A faster CPU perhaps? Double the RAM? Built-in wifi? The list of components you could upgrade is long. This request was brought up again when the Raspberry Pi foundation announced the sale of the two millionth Pi.

First I think we should step back for a moment and consider the magnitude of this achievement. 2,000,000. Two meeealion. That’s a whole lot of tiny computers. Not only has this sales volume let the foundation move production back to the UK, these Pis have been used to build computer labs in Africa, teach children Scratch programming, photograph endangered species in infra-red, and power countless micro-servers where a Pi is strapped to the back of a Costco hard drive. In short, the Raspberry Pi has become an engine for innovation.

At first, I too wanted a new Raspberry Pi with a spec update. True, the specs are anemic. It’s fine and well to say ‘what do you expect for $35’ but that doesn’t make my code run any faster. Upon further reflection, however, I’ve realized that not updating brings some benefits as well.

Keeping the specs identical means a stable platform. If I buy a Pi three years from now it will run software exactly as my first Pi from a year ago did. Stability is very important; especially when we are talking about software often used in poor conditions without IT staff. The same goes for accessories. Every camera module and GPIO extender is built for this specific device. They will continue to work perfectly in the future.

Keeping the specs identical means our code has to get faster instead. Modern software is blazingly inefficient and it tends to not age well. X Windows on the Raspberry Pi is extremely slow, even though it ran fine on my 486 in college at one tenth the speed. I could only dream of owning a 700MHz computer in 1995. Instead of throwing faster hardware at our problems we need to improve our code. I’m currently building a GPU accelerated graphics API, targeting the Raspberry Pi first. If it can run at 60 fps on the Pi then it can run anywhere.

Keeping the specs identical means we explore and document everything. While slow, the Raspberry Pi has some very interesting hardware that can do amazing things when used properly. Only devices with a long life span get fully explored. Just look at the things people have done with the NES and C64s. Because these devices were so popular they were documented (i.e.: reverse engineered) in exhaustive detail. Today I could build a simple NES emulator over a (long) weekend if I chose, thanks to the hard work done by the community over the years. If we keep the specs the same then the Raspberry Pi will be similarly dissected and documented.

I do not long for a new Raspberry Pi. I long for better software that lets me do more with what we already have. Here’s to another two million identical Pis; each a spark for a new idea, not new hardware.

There's been a ton of talk lately about several mobile operating systems and their problems, such as language restrictions, fragmentation, and anti-competitive practices. It's never a good idea to talk bad about your competition, so I'll take this opportunity to simply say a few things about the webOS (the OS that powers Palm's Pre and Pixi phones) that you might not know.

As always, I am writing this as Josh the blogger. These are my opinions alone and do not reflect the opinions of Palm Inc.

  • webOS devices are part of the web, not tethered to a desktop computer. You install apps through the web. Your data is backed up to the web. OS updates come through the web. Your address book is a merged view of your contacts living in the web. You never have to sync to a desktop computer. I know some Pixi users who have never once plugged their phones into a computer, because their phone is already a part of the web.
  • The webOS treats its users like grown-ups: they can install any apps they want. What if the app duplicates a built-in app? Fine. What if the app isn't in the on-device catalog? Fine: you can install apps from the web or beta feeds without any restrictions and do the marketing on your own. What if the app hasn't been reviewed, came from my cool programmer friend, and might hose my device? Well, if you enter the developer code into your phone then you've accepted the risk and can install any app you want. There's a whole community of people making cool but unauthorized apps. They are called the Homebrew community, and Palm encourages them. You're an adult. You can make the decision of what to install on your phone.
  • The webOS lets you use any language you want to develop apps. While Palm doesn't provide tools for languages other than JavaScript, C, & C++, there are no restrictions against using any other language. Our new PDK gives you a clean POSIX layer with direct & standard access to input (SDL), the screen (OpenGL), and device services (API bridge). There's nothing stopping you from porting a C# compiler or a Lua interpreter. Developers are free to use whatever tools they wish. The results are what matter. Good apps are good apps.
  • The webOS doesn't have fragmentation. All webOS devices run the same OS, regardless of form factor. They are all updated over the air, for free, in all countries and carriers. This means that 99% of webOS devices have the current version of the OS within a few weeks. There is no fragmentation of the operating system across devices or form factors. This lets developers focus on making great apps, not waste time supporting 18 versions of the OS.
  • The webOS is built from the DNA of the web. Yes this includes using HTML, JavaScript and CSS as the primary application development layer, but it's more than that. I can just start typing to have my question answered by Wikipedia. The address book contains your contacts that live on the web. If my wife changes her Facebook profile photo, my phone is automatically updated. I can write an app that links to other apps through JavaScript calls. The web is about connections to the people and services you care about, not just HTML pages. So is the webOS.

At Palm we care greatly about the end customer experience. We are also developers, so we care greatly about the developer experience. And most importantly, we don't see the two at odds. Happy developers create great apps that create happy customers. It's a win-win. That's why we are doing everything we can to make happy developers. We don't always do everything perfectly, but when something is broken we do our best to fix it and be transparent. It's how the web works and it's how the webOS works.

So, as a developer, I hope you'll think about the benefits and freedoms of the webOS, and consider it for your next mobile application.