Josh On Design: Art, Design, and Usability for Software Engineers
Tue Mar 18 2014 14:18:16 GMT+0000 (UTC)

Wolfram Alpha, Mind Explosion<p>During SXSW this year I had the great fortune to see the keynote given by Stephen Wolfram. If you’ve not heard of him before, he’s the guy who created Mathematica and, more recently, Wolfram Alpha, an online cloud brain. He’s an insanely smart guy with the huge ambition to change how we think. </p> <p>When Stephen started, back in the early 1980s, he was interested in physics but wasn’t very good at integral calculus. Being an awesome nerd, he wrote a program to do the integration for him. This eventually became Mathematica. He has felt for decades that with better tools we can think better, and think better thoughts. He didn’t write Mathematica because he loves math. He wrote it to get beyond math. To let the human specify the goals and have the computer figure out how to do it. </p> <p>After a decade of building and selling Mathematica he spent the next decade doing science again. Among other things this resulted in his massive tome, A New Kind Of Science, and the creation of Wolfram Alpha, a program that systematizes knowledge to let you ask questions about anything.</p> <p>In 1983 he invented/discovered a one-dimensional cellular automaton called <a href=''>Rule 30</a> (he still has its code printed on his business cards). Rule 30 creates lots of complexity from a very simple equation: even a tiny program can end up producing interesting complexity from very little. He feels there is no distinction between emergent complexity and brain-like intelligence. That is, we don't need a brain-like AI, the typical Strong AI claim. Rather, with emergent complexity we can augment human cognition to answer ever more difficult questions.</p> <p>The end result of all of this is the Wolfram Language, which they are just starting to release now in SDK form. 
By combining this language with the tools in Mathematica and the power of a data-collecting cloud, they have created something qualitatively different. Essentially a super-brain in the cloud.</p> <p>The Wolfram Language is a ‘knowledge based language’, as he calls it. Most programming languages stay close to the operation of the machine. Most features are pushed into libraries or other programs. The Wolfram Language takes the opposite approach. It has as much as possible built in; that is, the language itself does as much as possible. It automates as much as possible for the programmer.</p> <p>After explaining the philosophy Stephen did a few demos. He was using the Wolfram tool, which is a desktop app that constantly communicates with the cloud servers. In a few keystrokes he created 60k random numbers, then applied a bunch of statistical tests like mean, numerical value, and skewness. Essentially Mathematica. Then he drew his live Facebook friend network as a nicely laid out node graph. Next he captured a live camera image from his laptop, partitioned it into blocks of size 50, applied some filters, compressed the result into a single final image, and tweeted it. He did all of this through the interactive tool with just a few commands. It really is a union of the textual, the visual, and the network.</p> <p>For his next trick, Mr. Wolfram asked the cloud for a time series of air temperatures from Austin for the past year, then drew it as a graph. Again, he used only a few commands and all data was pulled from the Wolfram Cloud brain. Next he asked for the countries which border Ukraine, calculated the lengths of the borders, and made a chart. Next he asked the system for a list of all Former Soviet Republics, grabbed the flag image for each, then used a ‘nearest’ function to see which flag is closest to the French flag. This ‘nearest’ function is interesting because it isn’t a single function. 
Rather, the computer automatically selects the best algorithm from an exhaustive collection. It seems almost magical. He did a similar demo using images of hand-written numbers and the ‘classify’ function to create a machine learning classifier for new hand-drawn numbers.</p> <p>He’s right. The Wolfram Language really does have everything built in. The cloud has factual data for almost everything. The contents of Wikipedia, many other public databases, and Wolfram’s own scientific databases are built in. The natural language parser makes it easier to work with. It knows that NYC probably means New York City, and can ask the human for clarification if needed. His overall goal is maximum automation. You define what you want the language to do and then it’s up to the language to figure out how to do it. It’s taken 25 years to make this language possible, and to make it easy to learn and guess. He claims they’ve invented new algorithms that are only possible because of this system.</p> <p>Since all of the Wolfram Language is backed by the cloud they can do some interesting things. You can write a function and then publish it to their cloud service. The function becomes a JSON or XML web service, instantly live, with a unique URL. All data conversion and hosting is transparently handled for you. All symbolic computation is backed by their cloud. You can also publish a function as a web form. Function parameters become form input elements. As an example he created a simple function which takes the names of two cities and returns a map containing them. Published as a form, it shows the user two text fields asking for the city names. Type in two cities, press enter, and an image of a map is returned. These aren’t just plain text fields, though. They are backed by the full natural language understanding of the cloud. You get auto-completion and validation automatically. 
And it works perfectly on mobile devices.</p> <p>Everything I saw was sort of mind blowing if we consider what this system will do after a few more iterations. The challenge, at least in my mind, is how to sell it. It’s tricky to sell a general purpose super-brain. Telling people "It can do anything" doesn't usually drive sales. They seem to be aware of this, however, as they now have a bunch of products specific to different industry verticals like physical sciences and healthcare. They don’t sell the super-brain itself, but specific tools backed by the brain. They also announced an SDK that will let developers write web and mobile apps that use the NLP parser and cloud brain as services. They want it to be as easy to put into an app as Google Maps. What will developers make with the SDK? They don’t know yet, but it sure will be exciting.</p> <p>The upshot of all this? The future looks bright. It’s also inspired me to write a new version of my Amino Shell with improved features. Stay tuned.</p>
3D Printing Industry Overview<p>One of the benefits of my job at Nokia is the ability to do in-depth research on new technologies. If you follow me on <a href=''>G+</a> then you know I've been playing with 3D printers for the past few months. As part of my research I prepared a detailed overview of the 3D printing industry that goes into the technologies, the companies involved, and some speculation about what the future holds, as well as a nice glossary of terms. Nokia has kindly let me share my report with the world. Enjoy!</p> <p><a href=''>3D Printing Industry Overview</a></p>
What do you want to see at OSCON?<p>I'm working on a few submissions for <a href=''>OSCON</a>, due in two days. I've got lots of ideas, but I don't know which ones to submit. Take a look at these and <a href=''>tweet me</a> with your favorite. 
If you can't make it to OSCON I'll post the presentations and notes here for all to read.</p> <p>Thx, Josh</p> <h3>Augmenting Human Cognition</h3> <p>In a hundred years we will have new, bigger problems. We will need new, more productive brains to solve these problems. We need to raise the collective world IQ by at least 40 points. This can only be done by improving the human computer interface, as well as improving physical health and concentration. This session will examine what factors affect the quality and speed of human cognition and productivity; then suggest tools, both historic and futuristic, to improve our brains. The tools fall into three groups: health (disease, sleep, nutrition, and light), digital (creative tools, AI agents, high speed communication), and physical augmentation (Google Glass, smart drugs, and additional cybernetic senses).</p> <h3>Awesome 3D Printing Tricks</h3> <p>The dream of 3D printing is for the user to download a design, hit print, and 30 minutes later a model pops out. What’s the fun in that? 3D printers are great because each print can be different. Let’s hack them. This session will show a few ‘unorthodox’ 3D printing techniques including mixing colors, doping with magnets and wires, freakish hybrid designs, and mason jars. Lots of mason jars. </p> <p>The techniques will be demonstrated using the open source Printrbot Simple, though they are applicable to any filament based printer. No coding skills or previous experience with 3D printers is required, though some familiarity with the topic will help.</p> <h3>Cheap Data Dashboards with Node, Amino and the Raspberry Pi</h3> <p>Thanks to the Raspberry Pi and cheap HDMI TV sets, you can build a nice data dashboard for your office or workplace for just a few hundred dollars. Though cheap, the Raspberry Pi has a surprisingly powerful GPU. The key is Amino, an open source NodeJS library for hardware accelerated graphics. 
This session will show you how to build simple graphics with Amino, then build a few realtime data dashboards of Twitter feeds, continuous build servers, and RSS feeds; complete with gratuitous particle effects. </p> <h3>HTML Canvas Deep Dive 2014</h3> <p>After plain images, HTML Canvas is the number one technology for building graphics in web content. In previous years we have focused on 3D or games. This year we will tackle a more useful topic: data visualization. Raw data is almost useless. Data only becomes meaningful when visualized in ways that humans can understand. In this three hour workshop we will cover everything needed to draw and animate data in interesting ways. The workshop will be divided into sections covering both the basics and techniques specific to finding, parsing, and visualizing public data sets.</p> <p>The first half of the workshop will cover the basics of HTML canvas, where it fits in with other graphics technologies, and how to draw basic graphics on screen. The second half will cover how to find, parse, and visualize a variety of public data sets. If time permits we will examine a few open source libraries designed specifically for data visualization. Any topics we don’t have time to cover will be available in a free ebook.</p> <h3>Bluetooth Low Energy: State of the Union</h3> <p>In 2013 Bluetooth Low Energy (BLE) was difficult to work with. APIs were rare and buggy. Hackable shields were hard to find. Smartphones didn’t support it if they weren’t made by Apple, and even then it was limited. What a difference a year makes. Now in 2014 you can easily add BLE support to any Arduino, Raspberry Pi, or other embedded system. Every major smartphone OS supports BLE and the APIs are finally stable. There are even special versions of Arduino built entirely around wiring sensors together with BLE. 
This session will introduce Bluetooth Low Energy, explain where it fits in the spectrum of wireless technologies, then dive into the many options today’s hackers have to add BLE to their own projects. Finally, we will assemble a simple smart watch on stage with open source components.</p>
Getting Started with the Printrbot Simple<p>I recently got a 3D printer and boy are my arms tired!</p> <p>A bit tongue-in-cheek, but yes, 3D printers can be a pain to set up. While I love the concept of 3D printing, the industry is currently in the pre-Model-T phase. There are hundreds of models, mostly incompatible, and they require a lot of futzing and tweaking to get them working properly. </p> <p>The good news is that 3D printers are getting better <i>very quickly</i>. In the two years I've followed the technology prices have dropped in half and quality has doubled. Exciting stuff, but until a month ago I hadn't actually jumped into the 3D printing market myself. That all changed when I saw the Printrbot Simple.</p> <p class='photo'><img src=''> <br><i>Printrbot Simple, $300 kit</i> </p> <p>The Simple is a <b>$300</b> <a href=''>kit from Printrbot</a>. While most of their printers now come pre-assembled, the Simple is still available as a kit, and for an amazing price. I knew going in that it would likely have limitations, but I can honestly say it was more than worth the price. The prints aren't perfect, but they are pretty good. It's been a great introduction to 3D printing and I expect to make lots of things over the coming months.</p> <p class='photo'><img src=''> <br><i>Calibration object at 3mm, gray PLA, Printrbot Simple</i> </p> <p>The Simple is cheap through clever engineering and the fact that you have to assemble it yourself. For an extra $100 they will pre-assemble and calibrate it for you, but if you are serious about 3D printing I think you should build it yourself. 
It was an amazing learning exercise and gave me a crash course in mechanical engineering.</p> <p>All that said, I ran into some hiccups during the initial assembly and calibration. I only got decent prints after a few days of futzing. I expect lots of people will be opening Printrbot Simple kits Christmas morning, so I thought I'd spare those happy new Simple owners (perhaps you are one of them) my headaches by writing a new getting started guide. Behold!</p> <h3><a href=''>Getting Started with the Printrbot Simple</a></h3> <p><a href=''>read for moar</a></p> <p>This guide started out as my notes while building the kit, but after photos and forum feedback turned into an epic <i>four thousand word</i> tutorial. Did I mention it was epic? I didn't plan to write that much; it just kept going. And you, gentle reader, are the beneficiary.</p> <p>I've tried to not just cover initial setup, but also teach basic 3D printing terminology. When you start getting into 3D printing as a hobby you will immediately come up against terms like E value, PLA, and hot end. Needless to say these can be confusing to the new enthusiast. I've also included a troubleshooting section to cover the most likely print failures you will face. With pictures. Lots and lots of pictures. Oh, and the source for the whole thing is <a href=''>on github</a>.</p>
Why You Can't Build a Smartphone<p>Every day or so I read another blog post (or ranting comments) about how BlackBerry could be rehabilitated, or how Nokia could restart Maemo and build the ultimate smartphone again. Things came to a head after Jolla announced their first phone for sale. Surely this phone with an amazing user interface will vindicate the N9?! Amazing technology plus a killer UI? Marketshare is theirs for the taking! </p> <p>I’m sorry, but no. Most people don’t understand how a smartphone platform works. Simply put: there will not be any new entrants to the smartphone game. None. 
<i>At all.</i></p> <p><i>Obligatory disclaimer: I am a researcher at Nokia investigating non-phone things. I do not work in the phone division, nor do I know any internals of Nokia’s phone plans, or Microsoft’s after the acquisition of Nokia’s devices group is complete. I hear about new devices the same way you do: through leaks on The Verge. This essay is based on my knowledge of the smartphone market from my time at Palm/HP and general industry observations.</i></p> <p>A few new Android manufacturers may join the game, and certainly others will drop out, but we are now in a three horse race. The gate is closed. I’m sorry to Jolla, Blackberry, the latest evolution of Maemo/Meego/Tizen, whatever OpenMoko is doing these days, and possibly even Firefox OS. No one new will join the smartphone club. It simply can’t happen anymore. You can’t make a smartphone.</p> <p>There was a time when a small company, with say a few hundred million dollars, could make a quality phone with innovative features and be successful. This is when ‘successful’ was defined as making enough profit to continue making phones for another year. In other words: a sustainable business, not battling for significant marketshare. Those were the days when Palm could sell a million units and be incredibly happy. The days when BlackBerry had double digit growth and Symbian ruled the roost.</p> <p>Then came 2007. It might be over-reaching to say ‘the iPhone changed everything’, but it certainly was a definitive event. The 1960s began in January of 1960, but ‘the sixties’ began when the Beatles came to America in early 1964. Their arrival was part of a much larger cultural shift that started before 1964 and certainly continued after the Beatles broke up. I would personally say the sixties ended Memorial Day of 1977, but that’s just my opinion. </p> <p>The Beatles’ appearance on the Ed Sullivan Show is a useful event to mark when the sixties began, even though it’s really a much fuzzier time period. 
Steve Jobs unveiling the iPhone in 2007 is a similarly useful historical marker. Everything changed.</p> <h3>Data networks</h3> <p>The first big change was data networks. In the old days there really wasn't a data network. Previous phones were about selling minutes and, to a lesser extent, texting. Carriers didn’t really care about smartphones. They didn’t push them or restrict them. As long as you bought a lot of minutes the carriers didn’t really care what you used. </p> <p>There were no app stores back then, just a catalog of horrible JavaME Tetris clones at 10 bucks a pop. I owned a string of PalmOS devices during this period. Their ‘app store’ was literally boxes of software in a store which you had to install from your computer. No different than 1980s PCs. While my Treo had GSM data access, it was merely a novelty used to sync news feeds or download email very slowly.</p> <p>Around 2006 the carriers’ 2G and 3G data upgrades finally started to come online. Combined with a real web browser on the iPhone you could finally start doing real work with the data network. This also meant the carriers became more involved in the smartphone development process. Clearly data would be the future, and they wanted to control it. Carriers request features, change specs, and pick winners.</p> <p>Carrier influence means you can’t make a successful smartphone platform without having strong support from them. This is one of the things that doomed webOS. The Pre Plus launch on Verizon should have been huge for Palm. Palm spent millions on TV ads to get customers into the stores — who then walked out with a Droid. Without having strong carrier support, all the way down to reps on the floor, you can’t build a user base. To an extent Apple is the exception here, but they have their own stores and strong existing brand to leverage against carrier influence. Without that kind of strength new entrants don’t have a chance. 
</p> <h3>The cost of entry</h3> <p>Another barrier to entry in the smartphone market is the sheer cost of getting started. A smartphone isn’t just a piece of hardware anymore. It’s a platform. You need an operating system, cloud services, and an app store with hundreds of thousands of apps, at least. You need a big developer relations group. You need hundreds of highly trained engineers optimizing every device driver. The best webkit hackers to tune your web browser. A massive marketing team and millions in cash to dump on TV ads. You need deep supply chains with access to the best components. The cost of entry is just too high for most companies to contemplate.</p> <p>To continue with webOS: in 2011 I estimated HP would have had to spend at least a billion a year for three years to become a profitable platform — and that was two years ago. The cost has only gone up since then. There are very few companies with the resources. You already know their names: Apple, Samsung, Google, and Microsoft. All vertically integrated or well on their way to it. You aren’t one of these companies.</p> <h3>Access to hardware components</h3> <p>Smartphones need good hardware to be competitive. With the 6 month cycles of today’s marketplace that means you have to have access to the best components in the world (Samsung), or have such control of your stack that you can optimize your software to make do with lesser hardware. Preferably both.</p> <p>Apple has the spare cash to secure a supply of chips and glass years in advance; you do not. If Apple has bought the best screens then your company has to make do with last year’s components. This compromise gets worse and worse as time goes on, making your devices fall further behind in the spec wars.</p> <h3>Retreat to the low end</h3> <p>A common solution to the component problem is targeting the low end. After all, if you can’t get the best components then maybe you could build a decent phone out of lesser parts. 
This does work to an extent, but it limits your market reach and opens you up to competition at the low end. You are now competing with a flood of cheap Android devices from mid-tier far-east manufacturers. </p> <p>Even if you OEM hardware from one of these low-end manufacturers you are now in a race to the bottom. Your product has become a commodity unless you can differentiate with your user experience. That requires telling potential customers about your awesome software, which requires a ton of cash. Samsung spends hundreds of millions each quarter on Galaxy S ads. This path also requires an amazing UI that will distinguish you from your peers.</p> <h3>A disruptive UI</h3> <p>Even with a paradigm-shifting UI you’ve got to overcome all of the difficulties I outlined above. Most people in the wealthy world have smartphones already, so you not only have to convince someone to buy your phone, but to leave the phone they already have. Your amazing UI has to overcome <i>the cost of change</i>. Inertia is a powerful thing.</p> <p>Most likely, however, your new platform won’t have such a drastically different interface. Smartphones are a maturing platform. A smartphone five years from now won’t feel that different from today’s iPhone. Sure, it will be faster and lighter with better components, but it will still have a touch screen with apps, buttons, and lists.</p> <p>Unless you’ve figured out how to make a screen levitate with pure software you won’t be able to shake up the market. Google Glass is the closest thing I can think of to a truly disruptive interface. Adding vibration effects to scrolling menus is not.</p> <h3>No hope?</h3> <p>So does this mean we should give up? No. Innovation should continue, but we have to be realistic. No new entrant has any chance of getting more than one percent of the global market. That could still be a success, however, depending on how you define success. 
If success is being profitable with a few million units, then you can be a success. You will have to focus on a niche market though. Here are a few areas that might be open to you: </p> <p><b>Teenagers without cellphone contracts</b>. Make a VOIP only phone. Challenge: you are now competing with the iPod Touch.</p> <p><b>Point of Sale systems</b>. Challenge: this is an enterprise only pitch and they have long sales cycles. You might be dead by then. They also don’t care about user experience, so your awesome UI doesn’t matter. Small to medium businesses will use apps on standard devices like iPads, so you are back to where you started.</p> <p><b>Emerging markets</b>: half of the world is buying their first smartphone. This is an opportunity if you can get in fast with cheap hardware, but now you compete with FirefoxOS.</p> <p>Mozilla is targeting emerging markets where last generation hardware is more likely to succeed. Even so, Mozilla is working very closely with local carriers to ensure success while facing down competition from low-end Android devices. They also have the advantage of being a non-profit. Their ultimate goal is not to become a profitable phone company, but rather to keep the web open and free. This is probably not a viable option for you, and even Mozilla may be too late to follow this path.</p> <h3>The sad truth</h3> <p>Smartphones are a rapidly maturing product. Soon they will be pure commodities. Just as I wouldn’t suggest anyone build a new line of PCs or cars, I wouldn’t suggest building smartphones; it has become a rich man’s game. Unless you start with a few billion dollars you have no hope of making a profit. Maybe you could follow CyanogenMod’s approach of building value on top of custom Android distros, but even that risks facing the wrath of Google.</p> <p>Sorry folks. There's plenty of room to innovate elsewhere.</p>
Lego Is Art: Beautiful Lego<p>No Starch Press is on a roll with its series of Lego-themed books. 
While most of them are about model ideas or construction techniques, Beautiful Lego is different. This is a Lego art book. In classic coffee table style it is filled with gorgeous photos to thrill the reader. Beautiful Lego does not seek to discuss ‘can Lego be art?’, but takes it as fact. These are works by artists, just artists using the medium of Lego instead of paint or clay, and the results speak for themselves. Stunning.</p> <p><img src=''/></p> <p>Beautiful Lego is written and photographed by Mike Doyle, a Lego artist himself as well as an excellent graphic designer, but features the work of over 70 different artists. The book is organized by topic -- spaceships, people, architecture, robots -- with interviews of artists interspersed. Each artist is asked the single question, "Why Lego?", with an immense variety of answers. There is a common theme, though: the desire to create using an incredibly malleable medium.</p> <p>Some models are beautiful and some are terrifying, such as "The Doll" (page 5) and "Dissected Frog" (page 79). The architectural models really shine; good use of the few curvy pieces in Lego can produce amazing results. There is even political art: The Power of Freedom (page 124).</p> <p>Beautiful Lego surprised me with the diversity of styles within the medium of Lego. Some are hyper detailed, some expressive, some minimalist. Angus MacLane has a cute style known in the book as CubeDudes, which are head-on caricatures of famous figures like President Lincoln, Kirk and Spock, and the Stay Puft Marshmallow Man (page 36).</p> <p><img src=''/></p> <p>You will appreciate the book on two levels. First, the beauty or expression of the piece, then a second time as you pore over the photos trying to figure out "How did they do that with Lego?" Mike Doyle's Victorian house series in particular will amaze you with the flexibility of Lego. 
(And make you wonder how big his Lego collection is. :) While re-reading the book for this review, I was struck by how much good photography makes a difference when experiencing a model.</p> <p><img src=''/></p> <p>I heartily recommend Beautiful Lego to the adult Lego fan in your life. It just might make you pull out the bin from the garage and build a few original models yourself. And yes, there is a Freddie Mercury model called Fried Chicken.</p> <p>Beautiful Lego can be purchased from <a href=''>No Starch Press</a>, <a href=''>Amazon</a>, or <a href=''>Barnes and Noble</a>.</p>
Old Pi is Still Tasty<p>Almost since it was first released, fans of the Raspberry Pi have asked when the hardware will be updated with better components. A faster CPU perhaps? Double the RAM? Built in wifi? The list of components you could upgrade is long. <a href=''>This request</a> was brought up again when the Raspberry Pi foundation <a href=''>announced the sale of the two millionth Pi</a>.</p> <p>First I think we should step back for a moment and consider the magnitude of this achievement. <i>2,000,000.</i> <b>Two meeealion.</b> That’s a whole lot of tiny computers. Not only has this sales volume let the foundation move production back to the UK, but these Pis have also been used to build <a href=''>computer labs in Africa</a>, teach children <a href=''>Scratch programming</a>, photograph endangered species <a href=''>in infra-red</a>, and power countless micro-servers where a Pi is <a href=''>strapped to the back of a Costco hard drive</a>. In short, the Raspberry Pi has become an engine for innovation.</p> <p>At first, I too wanted a new Raspberry Pi with a spec update. True, the specs are anemic. It’s fine and well to say ‘what do you expect for $35’ but that doesn’t make my code run any faster. 
Upon further reflection, however, I’ve realized that <i>not</i> updating brings some benefits as well.</p> <p><b>Keeping the specs identical means a stable platform.</b> If I buy a Pi three years from now it will run software <i>exactly</i> as my first Pi from a year ago did. Stability is very important; especially when we are talking about software often used in poor conditions without IT staff. The same goes for accessories. Every camera module and GPIO extender is built for this specific device. They will continue to work perfectly in the future.</p> <p><b>Keeping the specs identical means our code has to get faster instead.</b> Modern software is blazingly inefficient and it tends not to age well. X Windows on the Raspberry Pi is extremely slow, even though it ran fine on my 486 in college at one tenth the speed. I could only <i>dream</i> of owning a 700MHz computer in 1995. Instead of throwing faster hardware at our problems we need to improve our code. I’m currently building a GPU accelerated graphics API, targeting the Raspberry Pi first. If it can run at 60 fps on the Pi then it can run anywhere.</p> <p><b>Keeping the specs identical means we explore and document everything.</b> While slow, the Raspberry Pi has some very interesting hardware that can do amazing things when used properly. Only devices with a long life span get fully explored. Just look at the things people have done with the <a href=''>NES</a> and <a href=''>C64s</a>. Because these devices were so popular they were documented (i.e., reverse engineered) in exhaustive detail. Today I could <a href=''>build a simple NES emulator</a> over a (long) weekend if I chose, thanks to the hard work done by the community over the years. If we keep the specs the same then the Raspberry Pi will be similarly dissected and documented.</p> <p>I do not long for a new Raspberry Pi. I long for better software that lets me do more with what we already have. 
Here’s to another two million identical Pis; each a spark for a new idea, not new hardware.</p>
GPU Computing: the Mac Pro and the Raspberry Pi.<p>Now that Apple has given us <a href=''>final specs and cost</a> for the redesigned Mac Pro I’ve heard complaints that it is underpowered and non-expandable, especially for the price. The Pro comes with reasonably beefy CPUs but they will be out of date in a few years. The buyer can only expand the ram and disk, and not so much on the disk side given the lack of available space. So how can this be worth the $3000 entry price Apple is charging?</p> <p>First we must realize that the Mac Pro isn’t for everyone. It really is for creative professionals who spend a lot of time in Logic, Aperture, Final Cut Pro, Maya, and other pro apps. These people need the maximum ram and processing power possible, and will pay for it. Expandability of storage isn’t a problem because they don’t care about internal storage anyway. Anyone who buys one of these will be using a stack of external drives or a NAS. I can buy a 3TB drive at Costco for under 200 bucks! Thus the nice collection of Thunderbolt and USB ports on the Mac Pro’s backside.</p> <p>More importantly, however, the CPUs aren’t the real focus of the new Mac Pro. Apple is betting that the future of high speed computation is GPU computing. Apple is right.</p> <p>I recently went to the International Supercomputing conference when it was held here in Oregon. At least 50% of the talks were about how to restructure computing tasks to take advantage of GPUs. GPUs are the future of almost all high performance computing. GPUs are not as general purpose as a modern CPU, but if you can structure your problem in a way that a GPU can compute, then you can get a 5x to 10x performance boost for the same watt (or dollar). Intel and Nvidia are happy to sell you a stack of GPUs without video connectors. These cards exist purely for GPU computation. 
Daisy-chained together, a stack of GPUs will beat any traditional supercomputer.</p> <p>Of course, with the GPUs doing the heavy lifting, the challenge becomes how to get your data *to* the GPU quickly. That’s why Apple’s Mac Pro site spends so much time talking about the IO bus and memory bandwidth. Internal storage? CPU upgrades? Who cares! The Mac Pro is all about moving data in and out of beefy GPUs as fast as possible.</p> <p>Apple has been working on this for a while. Initially they started shifting graphics work to the GPU with <a href=''>Quartz Extreme</a>. This enabled the OSX compositing window manager to run smoothly on older hardware. Later Apple introduced full Mac support for <a href=''>OpenCL</a>, a computation companion API to OpenGL. When you write some code in OpenCL the Mac can shift the computation dynamically between the CPU and the GPU. Powerful GPUs can make up for weak CPUs.</p> <p>And this brings me to the <a href=''>Raspberry Pi</a>, my favorite cheap ARM-based mini-computer -- so cheap I’ve seen <a href=''>hard drives with Pis glued to the side of them</a> as file servers. At 700MHz the Raspberry Pi’s CPU is anemic, but the GPU is surprisingly powerful. Broadcom’s <a href=''>VideoCore IV</a> not only supports OpenGL ES 2.0, meaning real shader support, it also has H.264 video encoding and decoding in hardware. It can decode a 1080p video <i>in real time</i> on this $35 computer. The CPU just has to stream the compressed video file to memory; the GPU will take care of the rest.</p> <p>The Pi’s GPU also has an interesting API called <a href=''>dispmanx</a>. While it is almost entirely undocumented, I’ve learned that this API lets you set up an almost unlimited number of hardware layers in the GPU. You can have one layer with 3D content from OpenGL while a second layer plays video and a third shows images. Most importantly, each of these layers can be resized and alpha-blended entirely by the GPU. 
This means we can create a full compositing window manager like OSX and Windows 7 have, all on our tiny 700MHz computer. <a href=''>These guys</a> are already working on a port of the composited Wayland/Weston library to the Raspberry Pi.</p> <p>While the Raspberry Pi does not support OpenCL, it is possible to use the GPU for accelerated <a href=''>JPG decompression</a>, and there are ongoing efforts <a href=''>to directly target</a> the VideoCore’s internal APIs for SIMD processing.</p> <p>All of this power comes from shifting computation from the general-purpose CPU to the custom-purpose GPU. This is a long-term trend. Over time more and more work will be shifted. GPUs can’t do all computational tasks, of course; but if you can transform your problem into something the GPU can handle (preferably something highly parallel), then you’re golden. He who controls the GPU... controls the world! Now let’s get some cheese, Pinky.</p>\nInstall Node on the Raspberry Pi in 5 minutes<p>Installing Node on a Raspberry Pi used to be a whole lot of pain. Compiling a codebase that big on the Pi really taxes the system, plus there are the usual dependency challenges of native C code. Fortunately, the good chaps at <a href=''></a> have started automatically building Node for ARM Linux on the Raspberry Pi. This makes life so much easier. Now we can install Node in less than five minutes. Here’s how.</p> <p>First, make sure you have the latest <a href=''>Raspbian</a> on your Pi. 
If you need to update it, run:</p> <pre><code>sudo apt-get update; sudo apt-get upgrade</code></pre> <h3>Node and NPM</h3> <p>Now install Node itself:</p> <pre><code>wget
tar -xvzf node-v0.10.2-linux-arm-pi.tar.gz
node-v0.10.2-linux-arm-pi/bin/node --version
</code></pre> <p>You should see:</p> <pre><code>v0.10.2</code></pre> <p>Now set the <code>NODE_JS_HOME</code> variable to the directory where you un-tarred Node, and add the <code>bin</code> dir to your PATH using whatever system you prefer (bash profile script, command-line vars, etc.). In my <code>.bash_profile</code> I have:</p> <pre><code>NODE_JS_HOME=/home/pi/node-v0.10.2-linux-arm-pi
PATH=$PATH:$NODE_JS_HOME/bin
</code></pre> <p>Now you should be able to run <code>node</code> from any directory. npm, the Node package manager, comes bundled with Node now, so you already have it:</p> <pre><code>npm --version</code></pre> <p>prints</p> <pre><code>1.2.15</code></pre> <h3>Native code</h3> <p>If you are just working with pure JavaScript modules then you are done. If you need to use or develop native modules then you need a compiler and Node’s native build tool, <code>node-gyp</code>. The compilers should already be installed with Raspbian. Check using:</p> <pre><code>gcc --version</code></pre> <p>Install node-gyp with:</p> <pre><code>npm install -g node-gyp</code></pre> <p>Now any native module should be compilable. </p> <p>That’s it. Node in 5 minutes.</p>\n$6031!<p>The webOS auction has ended successfully. Every item sold, some for far more than I expected. Combined with some anonymous donations we raised over $6000 for the Hill Family. I am overwhelmed and incredibly grateful. I knew the webOS community was passionate but I had no idea. We couldn’t have done this without your support. Thank you so much!</p> <p>Now, on to the details. I’ll be shipping all items out this week. If you won something and haven’t paid yet, please do so. You should have received an email from the site. 
I’m going to send all domestic items via USPS unless you request otherwise. If you are an international buyer, please let me know if you have any special shipping requests.</p> <p>My wife and I are traveling to Atlanta with our little one in a week to spend some time with my family and present them with the check. Again, I cannot thank you enough. I am truly amazed by the webOS community. Thank you.</p> <p>&nbsp;Josh</p>