Josh On Design, Design, and Usability for Software Engineers - Sat Jun 28 2014 03:32:35 GMT+0000 (UTC)\nElectron 0.2 Released<p>I’m happy to announce Electron 0.2. We’ve done a ton of work to improve the compiler and library tools. The biggest news is Windows and Linux support. Also, you don’t need to pre-install the regular Arduino IDE anymore. Electron will auto-download its own copy of the required toolchain. Here are the details: </p> <ul><li> Initial Windows and Linux support!</li> <li> You don’t need to modify settings.js anymore. Electron will auto-detect your Arduino/Sketchbook dir.</li> <li> You don’t need the regular Arduino IDE installed anymore. The appropriate toolchain will be auto-downloaded for you the first time you compile something.</li> <li> User-installed libs work now. Note that user-installed libs take priority over libs installed by the IDE.</li> <li> The serial port will be automatically closed and reopened when you upload a sketch. If this crashes your computer please let me know. I might need to increase the timeouts.</li> <li> Preliminary support for auto-detecting which board you are using by the USB VID/PID. 
Special thanks to PixnBits for that.</li> <li> Set the serial port speed</li> <li> Sketch rename works now</li> <li> Download progress is shown in the browser (partially)</li> <li> Tons of under-the-hood fixes</li> <li> Auto-scroll the console</li> </ul> <p>The <code>arduino-data</code> project was also updated with new boards and libraries: </p> <ul><li> New boards: TinyLily and Gamebuino</li> <li> More networking libs: CmdMessenger, OneWire, PS2Keyboard, SerialControl, SSoftware2Mobile, Webduino</li> <li> More specs on various boards</li> <li> The rest of the built-in libraries: Ethernet, Firmata, GSM, LiquidCrystal, SD, SoftwareSerial, TFT, WiFi</li> <li> Support for library sub-dependencies</li> </ul> <p>Thanks to contributors: </p> <ul><li> Dan O’Donovan / Dan-Emutex</li> <li> Nick Oliver / PixnBits</li> <li> Sean McCarthy / seandmcarthy</li> <li> trosper</li> </ul> <p>You can get the latest code on the <a href="">github project</a>. Please test this with every board and OS you have. File bugs in the <a href="">issue tracker</a>.</p> <p>I’ve registered a domain for Electron, though don’t bother going there yet. I haven’t built the website. If anyone is a talented webdev who’d like to help with that job, please contact me. </p> <p>If you’d like a sneak peek at the next version of Electron, check out <a href=''>the mockup here</a>. It’s got a new library picker, a proper tree-based file picker, and resizable panes. It still needs a lot of work before it can go live, but this will give you an idea of where we are going. </p> <p>Thank you, and keep on testing. </p>\nNode Streams are Awesome<p>I’ve been using Node JS off and on for the past few years, ever since we used it in webOS, but I’ve really gotten to go deep recently. 
As part of my learning I’ve finally started digging into Streams, perhaps one of the coolest unknown features of Node.</p> <p>If you don’t already know, Node JS is a server side framework built on JavaScript running in the V8 engine, the JS engine from Chrome, combined with libuv, a fast IO framework in C++. Node JS is single threaded, but this doesn’t cause a problem because most server side tasks are IO bound, or at least the ones people use Node for (you can bind to C++ code if you really need to). </p> <p>Node does its magic by making almost all IO function calls asynchronous. When you call an IO function like <code>readFile()</code> you must also give it a callback function, or else attach some event handlers. The native side then performs the IO work, calling your code back when it’s done. </p> <p>This callback system works reasonably well, but for complex IO operations you may end up with hard-to-understand, deeply nested code; known in the Node world as ‘callback hell’. There are some 3rd party utilities that can help, such as the ‘async’ module, but for pure IO another option is streams.</p> <p>A stream is just what it sounds like in any other language: a sequence of data that you operate on as it arrives or is requested. Here’s a quick example. To copy a file you could do this:</p> <pre><code>var fs = require('fs');
fs.readFile('a.txt', function(err, data) {
    if (err) throw err;
    fs.writeFile('b.txt', data, function(err) {
        if (err) throw err;
    });
});</code></pre> <p>That will work okay, but all of the data has to be loaded into memory. For a large file you’ll be wasting massive amounts of memory and increasing latency if you are trying to send that file on to a client. 
Instead you could do it with events:</p> <pre><code>var fs = require('fs');
var infile = fs.createReadStream('a.jpg');
var outfile = fs.createWriteStream('b.jpg');
infile.on('data', function(data) {
    outfile.write(data);
});
infile.on('end', function() {
    outfile.end();
});</code></pre> <p>Now we are processing the data in chunks, but that’s still a lot of boilerplate code to write. Streams can do this for you with the pipe function:</p> <pre><code>fs.createReadStream('a.jpg').pipe(fs.createWriteStream('b.jpg'));</code></pre> <p>All of the work will be done asynchronously and we have no extra variables floating around. Even better, the pipe function is smart enough to buffer properly. If the read or write stream is slow (network latency perhaps), then it will only read as much as needed at the time. You can pretty much just set it and forget it.</p> <p>There’s one really cool thing about streams. Well, actually two. First, more and more Node APIs are starting to support streams. You can stream to or from a socket, or from an HTTP GET request to a POST on another server. You can add transform streams for compression or encryption. There are even utility libraries which can apply regex transformations to your streams of data. It’s really quite handy.</p> <p>The second cool thing is that you can still use events with piped streams. Let’s get into some more useful examples:</p> <p>I want to download a file from a web server. I can do it like this.</p> <pre><code>var fs = require('fs');
var http = require('http');
var req = http.get('');
req.on('response', function(res) {
    res.pipe(fs.createWriteStream('bigfile.tar.gz'));
});</code></pre> <p>That will stream the GET response right into a file on disk. </p> <p>Now suppose we want to uncompress the file as well. 
Easy peasy:</p> <pre><code>var zlib = require('zlib');
var tar = require('tar'); // from npm
var req = http.get('');
req.on('response', function(res) {
    res
        .pipe(zlib.createGunzip())
        .pipe(tar.Extract({path: '/tmp', strip: 1}));
});</code></pre> <p>Note that zlib is a built-in Node module, but tar is an open source one you’ll need to install with npm.</p> <p>Now suppose you want to print the progress while it happens. We can get the file size from the http header, then add a listener for data events.</p> <pre><code>var req = http.get('');
req.on('response', function(res) {
    var total = parseInt(res.headers['content-length'], 10); // total byte length
    var count = 0;
    res.on('data', function(data) {
        count += data.length;
        console.log(count / total * 100);
    })
    .pipe(zlib.createGunzip())
    .pipe(tar.Extract({path: '/tmp', strip: 1}))
    .on('close', function() {
        console.log('finished downloading');
    });
});</code></pre> <p>Streams and pipes are really awesome. For more details and other libraries that can do cool things with Streams, check out this <a href=''>Streams Handbook</a>.</p>\nIntroducing Electron, a new IDE for Arduino<p>I love the Arduino platform. I have official boards and lots of derivatives. I love how it makes hardware hacking so accessible. But there’s one thing I hate: the IDE. It’s ugly. It’s ancient. It has to go.</p> <p>Sure, I know <b>why</b> the IDE is so old and ugly. It’s a hacked-up derivative of an older IDE, written in a now-deprecated UI toolkit. Fundamentally, the Arduino guys know hardware and micro-controllers. They aren’t desktop app developers. I suppose we should be happy with what we have. But I’m not.</p> <h3>My previous attempt</h3> <p>About two years ago I created <a href=''>a new IDE</a> built in the same UI toolkit as the original (Java/Swing for those who are interested). It worked roughly the same but looked better. 
It had a proper text editor (syntax highlighting, line numbers), better serial port and board management, and a few inline docs, but it basically worked the same; just some UI improvements.</p> <p>I posted my new IDE on the Arduino forums and got almost no response. Pondering this for the past two years, I’ve realized the forum was the wrong place to launch a new IDE. The people there are experts. They are happy enough with their current tools that they don’t want to switch. They don’t need all of the improvements that a better IDE could provide, like inline documentation and auto-library-installs. They are also more likely to simply use a general purpose IDE. I’ve done this myself and it works okay, but I’m still not satisfied. Arduino is special. It deserves better.</p> <p>As the Arduino platform grows it needs a better out-of-the-box experience. We have more and more novices hacking for the first time. We need an IDE that is truly top notch. Something that is custom built for Arduino, and the tasks that you do with it.</p> <p>Some have asked me, “What do you want to do that the existing IDE doesn’t already?” That’s a good question. If we are going to seriously invest in something new then we need some good reasons. Let’s start with the basics: installing libraries and boards.</p> <h3>Library management</h3> <p>Right now you find a library somewhere on the web, download the zip file, and unzip it into your ‘libraries’ directory. You do all of this outside the IDE even though you will be using it inside the IDE. Next you need some docs. There is no standard for the docs, but there’s probably something to read on the website you got the lib from. At least there are some examples. Oh, but they are buried deep inside a menu. </p> <p>Now suppose it’s a month later and you are working on a new sketch. Do you remember all of the libraries you have installed? How do you search for them? Do you have the right versions? 
Do they work with all of your Arduino boards or just some of them?</p> <p>It’s the year 2014. Why can’t the IDE already know about the libraries out there? It should let you search for them by name or keyword. It should let you search through the example code. Once you find the library you need it should install it for you automatically. Where does it go on disk? Who cares?! That’s the IDE’s problem. Finding and installing a library should be a single button click (or maybe two clicks if we are picky).</p> <h3>Board management</h3> <p>The many Arduino boards are listed in a nested menu, derived from the boards.txt file. This text file is only updated when the IDE itself updates, which isn’t very often, and the list isn’t comprehensive anyway. If the board you need isn’t listed then you have to add it manually, outside the IDE, duplicating the effort of other developers everywhere. Why can’t the IDE just fetch a list of all boards from the internet: a list updated by the actual vendors of those boards, so it’s always up to date?</p> <p>This list of boards actually contains quite a bit of useful information, like how much RAM is in each board. However, this information isn’t actually shown anywhere. It’s only used by the compiler. The IDE should give you full specs on your chosen board right on screen so you can refer to it whenever you want. Furthermore, the IDE should have extra board info like the number of pins, the input and output voltages, and, if we are being generous, an actual pinout diagram! All of this information is available on the web, just not in your IDE.</p> <p>The IDE should just do all of this for you. Choose your board from a gigantic list downloaded from the internet. This list includes detailed specs on every known board, updated as new boards are made. You can refer to the board specs in a side pane, and even correlate the pins in your sketch with the pins on the board reference. 
This isn’t rocket science, folks!</p> <p>So you can see, there are good reasons for a new IDE. This is before I’ve gotten to forward-looking features like a built-in Firmata client or data analysis tools. It’s time. Let’s build this.</p> <h3>Now for something completely different</h3> <p>Two years ago I tried recreating the IDE in the same form. Now I want to do something different. I’ve started a new IDE completely from scratch, written in NodeJS and HTML. While it does use web rendering it won’t be a cloud based IDE. It will still run locally as an app, but using newer GUI technology than Java Swing. Don’t worry, you’ll have a proper app icon and everything. You won’t know it’s NodeJS underneath. It’ll just be a nice looking GUI.</p> <p>Since it’s a back-to-basics effort I’m calling this new tool <b>Electron</b>, the fundamental particle of electronics.</p> <p>So far I have basic editing, compiling, and uploading working for an Uno. I’ve also built a system for installing libraries and board definitions from the Internet using a git repo. This separate repo will contain a JSON file for every known library and board. Currently it has the basics from boards.txt and a few libs that I use, but more will come online soon. If you want to help, adding to this data repo is the easiest way to start. Pull requests welcome.</p> <p><a href=''>Electron IDE repo</a></p> <p><a href=''>Arduino Data repo</a></p> <p>Here’s a rough screenshot. The GUI will greatly change between now and the 1.0 release, so just think of this as a guideline of the final look.</p> <p>If you are a Node/HTML hacker please get in touch. There’s tons of work to do. Even if you are just an Arduino user you can help by providing feature requests and testing, testing, testing. </p> <p>Thank you. Let’s bring Arduino development into the 21st century.</p> <p><img src=''></p>\nIf they did it, how Apple would make a TV<p>It's the fashionable thing to speculate on future Apple products. 
One idea that continues to get traction is the Apple TV, a complete TV set with integrated Apple-ly features. Supposedly to be announced this fall, after failing to appear at <b>any</b> event for the past three years, it will revolutionize the TV market, cure global warming, and cause puppies and kittens to slide down rainbows in joy. </p> <p>I don't buy it. I don't think Apple will make a TV. Televisions are low margin devices with high capital costs. Most of the current manufacturers barely break even. </p> <p>Furthermore, the market is saturated. Pretty much anyone in the rich world who wants a TV has one. Apple needs <b>growth</b> opportunities. The last thing they need is a new product with an upgrade cycle at least <b>twice</b> as long as desktop computers. It doesn't make sense. </p> <p>All that said, speculating about products is a useful mental exercise. It sharpens the mind and helps you focus when working on <b>real</b> products. So here we go: </p> <h3>If Apple Made a TV, How Would They Do It?</h3> <p>First let's take care of the exterior. In Apple fashion it would be pretty and slender. Either a nice brushed aluminum frame like the current iMac or a nearly invisible bezel. I suspect they will encourage wall mounting so the TV appears to just float. The current Apple set-top box will be integrated, as will network connections, USB, etc. Nothing to plug in except the power cord. </p> <p>Next, we can assume the current Apple TV will become the interface for the whole device. A single remote with on-screen controls for everything. While I love my Roku, I hate having to use one remote for the interface and a second for power and volume. </p> <p>Third, they will probably add a TV app store. I don't think it will feature much in the way of games and traditional apps. Rather, much like the Roku, there will be apps for each channel or service. The current Apple TV essentially has this now with the Netflix and HBO apps. 
The only difference would be opening the store up to more 3rd party devs. </p> <p>I think we can assume this will be another client of your digital hub. Apple already wants us to put all of our music, videos, and photos into the cloud. </p> <p>So far everything I've described can be done with the current Apple TV set-top box. So why build a TV? Again, I don't think they will; but if they did, they would need to add something to the experience beyond simply integrating the set-top box. </p> <p>First, a camera for FaceTime. Better yet, four of them, one in each corner of the screen. Four cameras would give you a wide field of view (with 3D capture as a bonus) that can track a fast-moving toddler as they move around the living room. This is perfect for video chatting with the grandparents. </p> <p>Furthermore, there are modern (read: CPU intensive) vision algorithms that can synthesize a single image from multiple cameras. Right now the camera is always off to the side of the screen, so your eyes never meet when you look at the person on the other end. With these algorithms the Apple TV could create a synthetic image of you as if the camera was right in the middle of the TV. Combined with the wide field of view and a few software tricks we could finally have video phones done right. It would feel like the other person is right on the other side of your living room. It could even create parallax effects if you move around the room. </p> <p>Video calls are a clear differentiator between the Apple TV and regular TVs, and something that a set-top box alone couldn't do. I'm not sure it's enough to make the sale, though. What else? </p> <p>How about the WWDC announcement of HomeKit? An AppleTV sure sounds like a good home hub for smart automation accessories. If you outfit your house with smart locks, door cameras, security systems, and air conditioners, I can see the TV being a nice place to see the overview. Imagine someone comes to the door while you are watching a show. 
The show scales down to a corner while a security camera view appears on the rest of the screen. You can choose to respond or pretend you aren't home. If it's a UPS guy you can ask them to leave it at the front door. </p> <p>I imagine the integration could go further. Apple also announced HealthKit. The Apple TV becomes a big screen interface into your cloud of Apple stuff, including your health data. What happens if you combine wearable sensors with an Apple TV? See a live map of people in the house, à la HP's Marauder's Map. An exercise app can take you through a morning routine using both the cameras and a FitBit to measure your vitals. </p> <p>A TV really could become another useful screen in your home, something more than just a video portal. I think the idea has a lot of potential. However, other than a camera and microphones almost everything I've detailed above could be done with a beefed-up standalone Apple TV set-top box. I still don't think a full TV makes sense. </p>\nFuture Tweet 2 Beta Testing Software\nFuture Tweeting is a new future tweeting system I'm working on. It will let me write a post, send it to my blog, then link the blog from Twitter, all in the *future*!\nWhere's The Data?<p>The Web is amazing for answering questions. Suppose you want to answer a question like "what does the .JPG file extension mean?" The answer is just an internet search away. Millions of answers. However, if you stray from the common path just a tiny bit things get hairy. What if you want to get a list of <b>all</b> file extensions? This is harder to find. Occasionally you might find a PDF listing them, but if you are asking for all file extensions then you probably want to <b>do something</b> with that list. This means you want the list in some computable form. A database or at least a JSON file. Now you are in the world of ‘public’ data. 
You are in a world of pain.</p> <p>Searching for “list of file extensions” will take you to <a href=''>the Wikipedia page</a>, which is open but not computation-friendly. Every other link you find will be spam. An endless parade of sites which each claim they are <i>the</i> central repository of file extension data. They all have two things in common:</p> <ul> <li>They are filled with horrible spam like ‘scan your computer to speed it up’ and ‘best stock images on the web’ and ‘get your free credit report now’.</li> <li>They let you add new extensions but don’t let you download a complete list of the existing ones.</li> </ul> <p>What I want is basic facts about the world; facts which are generated by the public and really should belong to the public. And I want these facts in a computable form. So far I cannot find such a source for file extensions. These public facts, as they exist on the internet, have morphed into a spam trap: vending tiny bits of knowledge in exchange for eyeball traffic. These sites take a public resource and capture all value from it, providing nothing in return but more virus scanner downloads. That they also provide so little useful information is the reason I have not linked to them (though they are obviously a search away if you care).</p> <p>The closest I can find to a <i>computable</i> file extension list is the mime type database inside of Linux distros. This brings up a second point. Every operating system, and presumably web browser, needs a list of all file extensions, or at least a reasonable subset. <i>Yet each vendor maintains their own list.</i> Again, these are public facts that should be shared, much as the code which processes them is shared.</p> <p>File extensions are not the only public facts which suffer the fate of spam capture. I think this hints at a larger problem. If humanity is to enable global computing, then we need a global knowledge base to work from. 
A knowledge base that belongs to <i>everyone</i>, not just a few small companies, and especially not the spammers.</p> <p>Wikipedia and its various data offshoots would seem to be the logical source of global computable data, yet the results are dismal. After a decade of asking, Wikipedia’s articles still aren’t computable in any real sense. You can do basic full text search and download articles in (often malformed) wikimarkup. That’s it. Want to get the list of all elements in the periodic table? Not in computable form from Wikipedia. Want to get a list of all mammals? Not from them. Each of these datasets can actually be found on the web, unlike the list of file extensions, but not in a central place and not in the same format. The data offshoots of Wikipedia have even bigger problems, which I will address in a followup post.</p> <p>So how do we fix this? Honestly, I don’t know. Many of these datasets do require work to build and maintain and those maintainers need to recoup their costs (though many of them are already paid for with public funds). If this was source code I’d just say it should be a project on GitHub. I think that's what we need.</p> <p>We need a GitHub for data. A place we can all share and fork common data resources, beholden to no one and computable by everyone.</p> <p>Building and populating a GitHub for data, at least for these smaller and well defined data sets, doesn't seem like a huge technical problem. Why doesn’t it exist yet? What am I missing?</p>\nWolfram Alpha, Mind Explosion<p>During SXSW this year I had the great fortune to see the keynote given by Stephen Wolfram. If you’ve not heard of him before, he’s the guy who created Mathematica, and more recently Wolfram Alpha, an online cloud brain. He’s an insanely smart guy with the huge ambition to change how we think. </p> <p>When Stephen started, back in the early 1980s, he was interested in physics but wasn’t very good at integral calculus. 
Being an awesome nerd he wrote a program to do the integration for him. This eventually became Mathematica. He has felt for decades that with better tools we can think better, and think better thoughts. He didn’t write Mathematica because he loves math. He wrote it to get beyond math. To let the human specify the goals and have the computer figure out how to do it. </p> <p>After a decade of building and selling Mathematica he spent the next decade doing science again. Among other things this resulted in his massive tome, A New Kind Of Science, and the creation of Wolfram Alpha, a program that systematizes knowledge to let you ask questions about anything.</p> <p>In 1983 he invented/discovered a one-dimensional cellular automaton called <a href=''>Rule 30</a> (he still has the code printed on his business cards). Rule 30 creates lots of complexity from a very simple equation. Even a tiny program can end up making interesting complexity from very little. He feels there is no distinction between emergent complexity and brain-like intelligence. That is, we don't need a brain-like AI, the typical Strong AI claim. Rather, with emergent complexity we can augment human cognition to answer ever more difficult questions.</p> <p>The end result of all of this is the Wolfram Language, which they are just starting to release in SDK form. By combining this language with the tools in Mathematica and the power of a data-collecting cloud, they have created something qualitatively different. Essentially a super-brain in the cloud.</p> <p>The Wolfram Language is a ‘knowledge-based language’, as he calls it. Most programming languages stay close to the operation of the machine. Most features are pushed into libraries or other programs. The Wolfram Language takes the opposite approach. It has as much as possible built in; that is, the language itself does as much as possible. 
It automates as much as possible for the programmer.</p> <p>After explaining the philosophy Stephen did a few demos. He was using the Wolfram tool, which is a desktop app that constantly communicates with the cloud servers. In a few keystrokes he created 60k random numbers, then applied a bunch of statistical tests like mean, numerical value, and skewness. Essentially Mathematica. Then he drew his live Facebook friend network as a nicely laid out node graph. Next he captured a live camera image from his laptop, partitioned it into blocks of size 50, applied some filters, compressed the result to a single final image, and tweeted it. He did all of this through the interactive tool with just a few commands. It really is a union of the textual, the visual, and the network.</p> <p>For his next trick, Mr. Wolfram asked the cloud for a time series of air temperatures from Austin for the past year, then drew it as a graph. Again, he used only a few commands and all data was pulled from the Wolfram Cloud brain. Next he asked for the countries which border Ukraine, calculated the lengths of the borders, and made a chart. Next he asked the system for a list of all Former Soviet Republics, grabbed the flag image for each, then used a ‘nearest’ function to see which flag is closest to the French flag. This ‘nearest’ function is interesting because it isn’t a single function. Rather, the computer will automatically select the best algorithm from an exhaustive collection. It seems almost magical. He did a similar demo using images of hand-written numbers and the ‘classify’ function to create a machine learning classifier for new hand-drawn numbers.</p> <p>He’s right. The Wolfram Language really does have everything built in. The cloud has factual data for almost everything. The contents of Wikipedia, many other public databases, and Wolfram’s own scientific databases are built in. The natural language parser makes it easier to work with. 
It knows that NYC probably means New York City, and can ask the human for clarification if needed. His overall goal is maximum automation. You define what you want the language to do and then it’s up to the language to figure out how to do it. It’s taken 25 years to make this language possible, and easy to learn and guess at. He claims they’ve invented new algorithms that are only possible because of this system.</p> <p>Since all of the Wolfram Language is backed by the cloud they can do some interesting things. You can write a function and then publish it to their cloud service. The function becomes a JSON or XML web service, instantly live, with a unique URL. All data conversion and hosting is transparently handled for you. All symbolic computation is backed by their cloud. You can also publish a function as a web form. Function parameters become form input elements. As an example he created a simple function which takes the names of two cities and returns a map containing them. Published as a form, it shows the user two text fields asking for the city names. Type in two cities, press enter, and an image of a map is returned. These aren’t just plain text fields, though. They are backed by the full natural language understanding of the cloud. You get auto-completion and validation automatically. And it works perfectly on mobile devices.</p> <p>Everything I saw was sort of mind-blowing if we consider what this system will do after a few more iterations. The challenge, at least in my mind, is how to sell it. It’s tricky to sell a general purpose super-brain. Telling people "It can do anything" doesn't usually drive sales. They seem to be aware of this, however, as they now have a bunch of products specific to different industry verticals like physical sciences and healthcare. They don’t sell the super-brain itself, but specific tools backed by the brain. 
They also announced an SDK that will let developers write web and mobile apps that use the NLP parser and cloud brain as services. They want it to be as easy to put into an app as Google Maps. What will developers make with the SDK? They don’t know yet, but it sure will be exciting.</p> <p>The upshot of all this? The future looks bright. It’s also inspired me to write a new version of my Amino Shell with improved features. Stay tuned.</p>\n3D Printing Industry Overview<p>One of the benefits of my job at Nokia is the ability to do in-depth research on new technologies. If you follow me on <a href=''>G+</a> then you know I've been playing with 3D printers for the past few months. As part of my research I prepared a detailed overview of the 3D printing industry that goes into the technologies, the companies involved, and some speculation about what the future holds, as well as a nice glossary of terms. Nokia has kindly let me share my report with the world. Enjoy!</p> <p><a href=''>3D Printing Industry Overview</a></p>\nWhat do you want to see at OSCON?<p>I'm working on a few submissions for <a href=''>OSCON</a>, due in two days. I've got lots of ideas, but I don't know which ones to submit. Take a look at these and <a href=''>tweet me</a> with your favorite. If you can't make it to OSCON I'll post the presentations and notes here for all to read.</p> <p>Thx, Josh</p> <h3>Augmenting Human Cognition</h3> <p>In a hundred years we will have new, bigger problems. We will need new, more productive brains to solve these problems. We need to raise the collective world IQ by at least 40 points. This can only be done by improving the human-computer interface, as well as improving physical health and concentration. This session will examine what factors affect the quality and speed of human cognition and productivity, then suggest tools, both historic and futuristic, to improve our brains. 
The tools are partly health-related: disease, sleep, nutrition, and light; partly digital: creative tools, AI agents, high-speed communication; and partly physical augmentation: Google Glass, smart drugs, and additional cybernetic senses.</p> <h3>Awesome 3D Printing Tricks</h3> <p>The dream of 3D printing is for the user to download a design, hit print, and 30 minutes later a model pops out. What’s the fun in that? 3D printers are great because each print can be different. Let’s hack them. This session will show a few ‘unorthodox’ 3D printing techniques including mixing colors, doping with magnets and wires, freakish hybrid designs, and mason jars. Lots of mason jars. </p> <p>The techniques will be demonstrated using the open source Printrbot Simple, though they are applicable to any filament-based printer. No coding skills or previous experience with 3D printers is required, though some familiarity with the topic will help.</p> <h3>Cheap Data Dashboards with Node, Amino and the Raspberry Pi</h3> <p>Thanks to the Raspberry Pi and cheap HDMI TV sets, you can build a nice data dashboard for your office or workplace for just a few hundred dollars. Though cheap, the Raspberry Pi has a surprisingly powerful GPU. The key is Amino, an open source NodeJS library for hardware accelerated graphics. This session will show you how to build simple graphics with Amino, then build a few realtime data dashboards of Twitter feeds, continuous build servers, and RSS feeds; complete with gratuitous particle effects. </p> <h3>HTML Canvas Deep Dive 2014</h3> <p>After plain images, HTML Canvas is the number one technology for building graphics in web content. In previous years we have focused on 3D or games. This year we will tackle a more useful topic: data visualization. Raw data is almost useless. Data only becomes meaningful when visualized in ways that humans can understand. In this three-hour workshop we will cover everything needed to draw and animate data in interesting ways. 
The workshop will be divided into sections covering both the basics and techniques specific to finding, parsing, and visualizing public data sets.</p> <p>The first half of the workshop will cover the basics of HTML canvas, where it fits in with other graphics technologies, and how to draw basic graphics on screen. The second half will cover how to find, parse, and visualize a variety of public data sets. If time permits we will examine a few open source libraries designed specifically for data visualization. All topics we don’t have time to cover will be available in a free ebook.</p> <h3>Bluetooth Low Energy: State of the Union</h3> <p>In 2013 Bluetooth Low Energy, BLE, was difficult to work with. APIs were rare and buggy. Hackable shields were hard to find. Smartphones didn’t support it if they weren’t made by Apple, and even then it was limited. What a difference a year makes. Now in 2014 you can easily add BLE support to any Arduino, Raspberry Pi, or other embedded system. Every major smartphone OS supports BLE and the APIs are finally stable. There are even special versions of Arduino built entirely around wiring sensors together with BLE. This session will introduce Bluetooth Low Energy, explain where it fits in the spectrum of wireless technologies, then dive into the many options today’s hackers have to add BLE to their own projects. Finally, we will assemble a simple smart watch on stage with open source components.</p>