Josh On Design
Art, Design, and Usability for Software Engineers
Wed Apr 01 2015 21:04:35 GMT+0000 (UTC)

Samsung Should be Broken Up, I Have the Evidence
<div class="body">As part of my research at Nokia I often test and analyze products from other companies. This gives us an awareness of the state of the industry, and helps us to focus our efforts. This week my target was the Samsung Gear S smartwatch. As of yet I have been unable to actually test it. This is my story. And the story of why Samsung should be broken up into smaller companies that can actually make good products.</div><div class="body">When my Gear S arrived I immediately noticed the classy two-part box, very much like the boxes my traditional watches come in (I refuse to call them <i>dumb watches</i>). My high hopes were about to be dashed. While I knew the watch would require pairing to a phone before being usable, I assumed it would work with any Android phone. After all, why restrict your market, right? I was wrong.</div><div class="body">A quick search on the web turns up that the Gear S uses Tizen, not Android Wear. For some reason all of Samsung's Tizen watches only work with Samsung devices. The watches require a minimum OS version and also the Samsung Gear Manager app. This app is only available through Samsung's own app store, not the regular one, and it refused to install on my Moto X.</div><div class="body">Okay, fine. Samsung wants you to use their watch with one of their devices. I think it's foolish to restrict your market, but I'll live with it. So I go to Costco to pick up a new Samsung tablet. In this case, the 8 inch Tab 4 (not to be called the Tab 4 8, which would make no sense). I forget if the tablet had Galaxy in the name.</div><div class="body">After charging and booting the tablet I ran all of the software updates. Hmm. No Lollipop. Why is a brand new device not receiving the latest OS? No matter, the Gear Manager app works on KitKat.
I dutifully go to Samsung's ugly app store and install the manager. Turn on the watch to pair, and... nothing. The tablet can't find it. After messing with settings, checking Bluetooth, etc., I waste 30 minutes and still Samsung's own app can't see the watch. Getting a bit frustrated here.</div><div class="body">After some Googling I scan for the watch from the regular Bluetooth settings. It sees the watch! Finally some progress. Select it. And... it takes me right back to the manager app that couldn't find it a second ago. Except this time it says that the watch isn't supported on my device or requires a software update. It doesn't tell me if a software update is available, or what version I would need. And of course, it didn't give me this message when I first scanned, thus wasting my time.</div><div class="body"><b>What devices actually work?</b></div><div class="body">Now it's time to answer the question. What devices are actually supported by the Samsung Gear S? Search around for it. I dare you. You won't find the answer from Samsung. Let me help you. Here's Samsung's <a href="">support page for the Gear S</a>.</div><div class="body">In their own words: <i style="color: inherit;">Samsung Gear S™ is currently compatible with Samsung devices that meet the following criteria: </i><span style="color: inherit;"><i>Android™ version 4.3 (Jelly Bean) or higher, WVGA or higher screen resolution, and 1.5GB or higher memory.</i> My Tab 4 8 certainly meets those requirements, though I have no idea why a watch manager app needs a <i><b>gig and a half</b></i> of memory to run, but still, it should work. </span></div><div class="body">Further searching reveals a list of Samsung phones on AT&amp;T's site which support the Gear S, but of course they only list the phones that AT&amp;T sells. The same goes for T-Mobile and the other carriers.
Why can't I just get a full list of compatible devices from Samsung?</div><div class="body">I finally get the answer from an <a href=";ASIN=B00PMABICC&amp;nodeID=2335752011&amp;store=wireless">Amazon review</a>. Apparently the Gear S only works with <i>specific</i> Samsung phones. Not any of Samsung's tablets. Not anyone else's phones. Not even all Samsung phones. Just specific ones that no one has a complete list of. The bottom of the Gear S support page even says: "<i>Some Samsung Health/Fitness applications and related services available for Samsung wearable devices may not be compatible with Samsung tablet devices</i>", which implies that the rest of the tablet experience would be fine. That is clearly not the case.</div><div class="body">Today I returned the Galaxy Tab 4 8 to Costco and ordered a Samsung Galaxy S5 from Amazon. Not an Active or Alpha or Mega, because who knows if the same phone in a different body would work. Just a plain S5. Hopefully my color choice won't affect the compatibility rating.</div><div class="body">This is a truly horrible experience for the customer. I'm literally being paid to use it and I already hate it. I would never recommend anyone get a Samsung wearable.</div><div class="body"><b>Too Many Devices</b></div><div class="body">Samsung products have become confusing. Look at their lineup. Can you tell me which one to buy? Why do they have both a 7" and 8" Tab 4? At least call it the Tab 4+ so that you don't confuse the numbers. Their phones are worse. What's the difference between an S5, Alpha, Mega, and Active? They look roughly the same but with wildly different prices and support. Is the Note a tablet-sized phone or a phone-sized tablet? I didn't know without looking up the specs (apparently it's a huge-ass phone with a stylus).</div><div class="body">Once you get a device from them, the OS updates are spotty. Why do some devices receive updates and not others?
Why does a brand new device run an old OS, with no timeline for an update? Why does their manager app refuse to run on a device which meets its own <i>documented</i> specs?</div><div class="body">And why can't a tablet work with a smartwatch anyway? It has Bluetooth and WiFi. It has the horsepower and hardware. There is no technical reason why this shouldn't work. Did they intentionally exclude tablets, or were they too lazy to certify devices from another branch of the company?</div><div class="body"><b>So how did this happen?</b></div><div class="body">It's worth stepping back to ask the question: <span style="color: inherit;">how did Samsung get like this? How does one of the biggest companies in the world fail so badly at making good products? It isn't just the product itself that failed, but the advertising and documentation and marketing around the product, and the massive over-abundance of product <i>choices</i>. I think I know the answer.</span></div><div class="body"><span style="color: inherit;">Samsung’s mobile division is a huge company in its own right. They make a lot of products, many in overlapping categories. Samsung wants to over-saturate every possible part of the market. They want to use all of the new components coming out of the rest of Samsung. They also have a lot of people to employ, so they have them make what are essentially duplicated products. These products don't exist because of market demand. <i>They exist because Samsung needs them to exist</i>.</span></div><div class="body"><span style="color: inherit;">I suspect that Samsung, like many big tech companies, is full of silos. These are product groups who don't talk to each other. They may even compete with each other. The silos focus only on what’s next, never on where they are now (thank you Yoda). Once a device leaves the factory they don’t want to ever think about it again.
They don’t want to update its software. They don’t want to confirm whether it will work with anything newer. They don’t want to certify it with a product from another silo unless forced to by upper management. </span></div><div class="body"><span style="color: inherit;">The problem is that when you forget about the device you are also forgetting about your customer. I've been burned. I will never buy another Samsung device again. They simply didn't care enough to </span><i style="color: inherit;">correctly</i><span style="color: inherit;"> document basic requirements. Was it the phone group's responsibility to document the compatibility list, or the watch group's? How do I know my next phone won’t work with my current Gear watch? Perhaps the watch group will have moved on to the new watch and the phone group to the new phone, and I'm left without an update. The customer experience, which extends beyond the moment of purchase, should always come first. </span>It seems that what comes first at Samsung is <span style="color: inherit;">pumping out new devices.</span></div><div class="body"><b>Nuke it from orbit. It's the only way to be sure.</b></div><div class="body"><span style="color: inherit;">The only way this can be fixed is by making Samsung smaller; by taking away the conflicts of interest and silos.</span></div><div class="body"><span style="color: inherit;">Samsung should spin out the mobile and gear divisions into their own companies. The Gear division would make watches that work with everyone's phones and tablets. They would finally have an incentive to do so because they could sell more watches. No longer would they be held back by a <a href="">strategy tax</a>.</span></div><div class="body"><span style="color: inherit;">The mobile division could then make fewer but better phones, with less overlap, that people actually want to buy.
They would no longer have to pump out every variation just to use up components from Samsung's semiconductor division. Samsung should be broken up.</span></div><div class="body"><span style="color: inherit;">Of course they won’t do that. Instead they will amble along making a lot of revenue and very little profit, pushing out the latest specs in crappy products to unhappy customers. Customers who will abandon Samsung when given the choice.</span></div><div class="body"><span style="color: inherit;"><b>Samsung: too big to fail, too big to succeed.</b></span></div>

Predicting the future is hard
<div class="body">Smartwatches are coming. By Christmas they will be everywhere. And we won't know how we ever lived without them. Or so we are told to believe. But if you were a company making a smartwatch, how would you know what features to build? How would you know which features will fit the "something people didn't know they wanted" category? You have to predict the future. Turns out, that's hard.</div><div class="body">I'm doing a lot of wearables research right now for work. It got me thinking. How do successful companies like Apple make things that people didn't know they wanted? Certainly they take risks, and occasionally the risk fails (iPod HiFi, anyone?), but most of the time they nail it. How do you do this? How do you predict what features will make someone want to use a device? How did Apple know that people wanted iPods before iPods were on the market? How do we know that people will want to move notifications from their phone to their watch, and that this is valuable enough to justify a multi-hundred dollar device?</div><div class="body">The first strategy is to build a feature that is obvious and simply do a really good job of it. Lots of companies are trying this. Fitness tracking and notifications are two obvious features. This is why <span style="color: inherit;">smartwatch coverage focuses on those features.
They are known problems that we can already solve (though perhaps not solve well, yet). These features are valuable enough that there are standalone devices which do just one or the other of these things.</span></div><div class="body">But this doesn’t answer the question of what will be the other killer features. Fitness bands are fixed function devices that will only ever do one thing well. Apple Watch and Android Wear are app platforms. They are built to expand and grow over time. The only reason to build such a system is for future features that we can't predict yet. What will be the killer apps? How do we predict the future?</div><div class="body">The answer is we can’t. Before these things are built we simply can’t know what will be good and useful, or bad and useless. We can make some educated guesses based on what are known problems, or existing solutions that we hope to replace, but until we <i>actually</i> build it we never really know.</div><div class="body">So build it we must. I’m pretty sure this is what Apple does. They prototype things internally. Lots of things. Crazy things that will probably never ship. (Just look at some of their insane patents.) Apple knows that only 1 out of 10 will be a good product idea, but they don’t know which one of the ten until they try them all. You’ve just got to build it.</div><div class="body">So, we have to build it. I don’t know what will be the killer apps in a few years that justify the widespread adoption of smartwatches. The only way to know is to build it.</div><div class="body">Alan Kay famously said in 1971: "<i>The best way to predict the future is to invent it.</i>" He's still right.</div>

Smart Watches: The Best Interaction is No Interaction
<i>Note: The first half of this post was written before the Apple Watch event and the second half after.
I wanted to capture my thoughts before they were polluted by endless "reviewers" proclaiming the genius/delusion of Tim Cook. I almost didn’t want to write this because I knew my conclusion would be that you can’t review any technology this personal without using it for a while, and here I am reviewing a device I haven’t personally used. That said, I think this represents more of my thoughts on the category rather than Apple Watch in particular, a category that is going to be bigger than we expect. So... here goes.</i> <p> </p> <p> </p> <h3 id="id49550">In the Beginning</h3> <p> On the eve of Apple’s watch event I'm trying to collect my thoughts about smartwatches. I’ve used smartwatches before and recently purchased <a href=''>one of the best Android Wear devices</a>. My general impressions of the category aren’t great, but the potential is huge. Let’s dig in. </p> <p> </p> <p> A few initial thoughts: </p> <p> Round watches look really cool but they just don't feel practical. I find them hard to navigate. My fingers keep wanting more horizontal room. Actually, you <i>could</i> design a great interface for circular watches that takes advantage of the round shape, but it would be completely different from the rectangular UI. (Remember the iPod?) Trying to make both shapes with the same software is a mistake on Google’s part. Pick one shape and stick with it. </p> <p> It’s hard to evaluate the interface from descriptions and screenshots, or even animations. With a screen this small every tiny detail matters; minimalism to the max. The only way to evaluate is to use the device for a while. A long while. Like, a week or more. Keep that in mind when you read all of the "reviews" of the Apple Watch this week. </p> <p> The most useful indicator from a watch is actually the buzzer. <i>(Hmm: Note to self. What about a screen-less watch? Vibrations as the only output? You can actually do a lot that way.
Have to come back to that.)</i> </p> <p> Battery life isn’t as big an issue as I feared. Bluetooth LE is pretty damn good these days. While Pebble’s week of battery is pretty awesome, most people just need it to go a single day and recharge thoughtlessly on the bedside table. The future of power management isn’t going to be better batteries, but better chargers. Oh <a href=''>Touchstone dock</a>, how I miss you. </p> <p> Still, why would someone want one of these things? With a starting price of several hundred dollars and dubious ‘fitness’ benefits, why should the non-early adopter buy one of these? For now the answer is: they shouldn’t. But that will change; and probably faster than we (the techies) realize. </p> <p> </p> <h3 id="id66720">The Killer App</h3> <p> Smartwatches still lack the killer app. That’s okay. Smartphones really didn’t have a killer app at first either. Arguably the first killer app for smartphones was Mobile Safari, but that was just a portal to the many things on the web, not an app in itself. In a sense, good network access on the go was the killer app. There wasn't one particular task that made smartphones amazing at first. </p> <p> After a decade of smartphones we realize there isn’t a single killer app. There are <b>tons</b>; and what is killer for you is unnecessary for someone else. It took building a rich ecosystem of 3rd party apps for the true value of a smartphone to be realized. I use at least ten apps multiple times a day on my phone, and only the alarm clock and Safari are built in. The rest were built after the iPhone existed. </p> <p> And so it will be with the smart watch. <b>The exciting things won’t be what we see today, but what we will see in a year, or five years.</b> The little things that make your life better. The little ways they connect with the other digital (and non-digital) objects in your life. These are the features we simply can’t predict today. Even Apple can’t. But when they arrive, watch out. 
</p> <p> The smartwatches I’ve used are clunky, large, have poor battery life, and are still too confusing; but make no mistake: when it works, it’s magic. All the pieces of your digital life in sync. </p> <p> Example: my wife texted me while I was driving. I glanced, made a quick swipe, spoke a reply, and it was sent. No extra buttons. Simply magic. For a moment I was living in the future. This is what Apple is promising. Making your life magic. </p> <p> A smartwatch isn’t really a device; it’s an extension of your smartphone, or more properly an extension of the smart ecosystem you are a part of. (If Google ever does support Android Wear on iOS I expect it will be a pale shadow of the Android experience.) The watch is at its best when it leverages that ecosystem. All of the information already present on your phone and the cloud, working on your behalf. When it works together, it’s bloody magic. (Perhaps that's why we find iCloud's bugginess so frustrating.) </p> <p> Today, our smartwatches will not be magic all the time. Voice recognition is heavily cloud-dependent. Google does an amazing job of making it fast, but any latency on a watch breaks the illusion. Even the best smartwatch can become frustrating fast. Apple’s ads make Siri look practically instantaneous, but I want to see what it will be like under real-world conditions. </p> <p> Smartwatches will result (are already resulting?) in a flurry of notifications. Both Apple and Google have made recent changes to their respective operating systems to better manage alerts, but it’s still going to get out of hand. We may need to start applying spam-filter-like technology to the problem. It’s really a catch-22. Notifications make the watch worthwhile, but too many make it a pain to use.
Finding that balance, and adjusting it for every person without turning everyone into a rules programmer, is going to be very tricky. </p> <p> </p> <h3 id="id58377">Is it worth it?</h3> <p> </p> <p> The final question I hear from lots of people: is it worth getting a smartwatch which simply moves the notifications from your phone screen to your wrist? Aren’t we simply saving the 10 seconds it takes to remove the phone from your pocket? The answer is yes and yes. </p> <p> Yes, it’s totally worth moving the interface closer. It’s not just 10 seconds of time. This is a case where a quantitative difference becomes <i>qualitative</i>. An interaction that takes one second on your wrist really is different from the 10 seconds from your jeans pocket. It fits under a threshold that allows you to continue with your current frame of mind. It doesn’t break your concentration. It lets your short-term memory remain undisturbed. It really is different. </p> <p> ...provided, of course, that the notifications are minimal and can be immediately ignored or acted on quickly. Interaction design is far more critical, and difficult, on a wearable device than on a phone. These apps are going to be a <b>lot</b> harder to build. Making apps for wearables is as different from phones as phones were from desktops. Perhaps this is why the Android Wear store is 90% ugly watch faces. I worry about a flood of crappy watch apps that give the field a bad name. Perhaps this is why Apple is so far being conservative on that front. Hopefully they apply stronger design filters in the Watch App Store. </p> <p> </p> <h3 id="id37136">The Best Interaction is No Interaction</h3> <p> The best interactions will be no interaction; things that happen automatically on your behalf, with a gentle notification that it happened. This will be <a href=''>Calm Technology</a> at its best. Your watch as presence indicator. Unlock your phone automatically.
Unlock your car or house (when it recognizes your heartbeat signature to ensure it’s really you). Remember where you parked your car, automatically. Warn you when you leave a phone in a bar. Buzz you when it’s time to stretch, or drink some water, or leave early for a meeting because of traffic. <b>Smartwatches will be less a device than a guardian angel, doing things on your behalf.</b> This is, of course, terrifying. </p> <p> Terrifying not because of automation, but because of how much of our lives may be controlled by just two companies. I'm afraid I don't have an answer for that. The robots taking over might be nicer. </p> <p> </p> <p> </p> <p> </p> <h3 id="id95184">After the Apple Event.</h3> <p> It's now Friday. I finally had a chance to watch the Apple event and my opinions haven’t changed much. The watch's interface is clearly more polished than what we saw last fall; and I really, really want the new MacBook. </p> <p> So bearing in mind that I can’t judge a wearable I haven’t used, here is my judgement on a wearable I haven’t used. </p> <p> </p> <ul><li>I really like the new scrollbar design that’s more indicative of where you are in the dock. Please bring that back to OS X.</li> <li>I have doubts about voice quality, but accepting a phone call on your wrist will be awesome when you are holding a 3-year-old.</li> <li>The navigation feels more refined than Android Wear, but still tricky. I really want to see how it works in person. It’s not clear what actions you do with the digital crown and what uses swiping. Some gestures seem to be overloaded. This is the most difficult part to design and really must be judged in person.</li> <li>The health aspects don’t seem much beyond what I can get out of other health trackers. The key will be filtering this into actionable data. That will require better 3rd party apps.</li> <li>The app constellation is still chaos.</li> </ul> <p> </p> <p> Digital Touch is the most fascinating part to me.
Nothing else (mainstream) does that right now. It really sells the vision that these are personal devices in a way we've never had before. People want to communicate with their loved ones. It’s what separates us from the animals (<a href=''>except the weasel</a>). </p> <p> Quick anecdote. I saw paired cylinders when I was an intern at Xerox PARC in the mid-90s. When you rolled one cylinder, its match would move in tandem. The idea was that you could bring this with you while traveling and give the occasional gesture to your partner to let them know you were thinking about them. Digital Touch is clearly the modern equivalent. </p> <p> </p> <p> So, will I get one of these? Of course, I’ll get several. That’s my job. The interesting question is whether my non-tech wife or mother would want one. I think so. By Christmas there will be actually useful apps and a slew of bug fixes. This will provide real value in a way that can’t be quantified by a spec sheet. Value in the form of feeling and subtle interactions. </p> <p> Welcome to the new personal computer. </p>

ThreeJS Cookbook Review
Among the too many things I’ve done recently, I was a tech reviewer for a new WebGL book from Packt author Jos Dirksen called the <a href=' '>Three.js Cookbook</a>. <p> </p> <p> </p> <p> <a href=''>WebGL</a> is an amazing technology. You can build 3D graphics and animation that run blindingly fast on desktop and mobile. Even <a href=''>iPhones</a> support it now. There’s just one downside: WebGL is complicated. </p> <p> WebGL is basically OpenGL ES 2 for the web. If you’ve done any OpenGL work before you know it’s very low level and requires a pretty extensive background in 3D graphics and matrix math. While I prefer JavaScript to C++, straight OpenGL is still pretty hard. That’s why most people use a library, like ThreeJS. </p> <p> <a href=''>ThreeJS</a> is probably the most popular open source WebGL library, though there are <a href=''>plenty</a> of <a href=''>others</a>.
Of course ThreeJS is still fairly complex itself. It's the nature of the beast. If you just want to do one specific thing, like load up a model and light it, learning all of ThreeJS is overkill. That’s when you want the ThreeJS Cookbook. </p> <p> The Cookbook has over 80 recipes, all structured the same way: what you’ll learn, what you’ll need, how to do it, and how it works. Get in and out quickly. Don’t think the fast nature means it’s not comprehensive, though. It covers pretty much everything from lighting and model loading to height maps, point clouds, and even custom shaders. And of course all example code is in a <a href=''>github repo</a>. </p> <p> </p> <p> </p> <p> I happily recommend it. </p> <p> The Three.js Cookbook: Now available from <a href=''>Packt</a> or <a href=''>Amazon</a>. </p> <p> </p> <p> <img src='' alt='text'/> </p>

Amino Refactored
I've done a major refactoring which will make <a href=''>Amino</a> easier to install and maintain, and will, eventually, bring better performance and portability. Part of this work involved moving the platform-specific parts to their own node modules. This means you should <b>no longer install aminogfx</b> directly. Instead, install the appropriate platform-specific module. Currently there is one for GL and one for Canvas. I've also added stage transparency support to Raspberry Pi! <p> Here's how it works on the Pi. </p> <p> Install NodeJS if you don't already have it. You'll also need GCC for the native bits. Then, inside your app directory, install <b>aminogfx-gl</b>: </p> <pre><code>npm install --save aminogfx-gl</code></pre> <p> Then wait while it downloads and compiles everything. This may take a while. It's about 5 min on my new Raspberry Pi 2. It'll take a bit longer on the Pi 1. </p> <p> </p> <p> In your app, require aminogfx-gl, then code as normal. The code below creates a window with a rectangle and circle.
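</p> <p> A quick aside on the API style you'll see in it: Amino's node properties are chainable getter/setter functions. Call one with a value to set it (the call returns the node, so calls chain); call it with no arguments to read the current value. The snippet below is my own standalone sketch of that general pattern, not Amino's actual implementation: </p> <pre><code>// Sketch of chainable getter/setter properties, in the style of
// Amino's node API (illustration only, not Amino's real code).
function makeProp(obj, name, initial) {
    var value = initial;
    obj[name] = function (v) {
        if (arguments.length === 0) return value; // no args: act as getter
        value = v;                                // with arg: act as setter
        return obj;                               // return the object to chain
    };
}

// A hypothetical Rect using the pattern.
function Rect() {
    makeProp(this, 'x', 0);
    makeProp(this, 'y', 0);
    makeProp(this, 'w', 100);
    makeProp(this, 'h', 100);
    makeProp(this, 'fill', '#ffffff');
}

var rect = new Rect().w(200).h(250).x(0).y(0).fill('#0000ff');
console.log(rect.w(), rect.fill()); // prints: 200 #0000ff</code></pre> <p> The design choice here is that one function serves as both getter and setter, which keeps the fluent style terse; the real library can also hook change notification into the setter branch. </p> <p>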
</p> <pre><code>var amino = require('aminogfx-gl');

amino.start(function (core, stage) {
    stage.fill("#00ff00");
    stage.opacity(1.0);

    var root = new amino.Group();
    stage.setRoot(root);

    var rect = new amino.Rect()
        .w(200).h(250).x(0).y(0)
        .fill("#0000ff");
    root.add(rect);

    var circle = new amino.Circle().radius(50)
        .fill('#ffcccc').filled(true)
        .x(100).y(100);
    root.add(circle);
});</code></pre> <p> New to this version are the stage.fill and stage.opacity properties. Fill controls the background of the window, black by default. Opacity affects the opacity of the window's background. If you set it to 0.0 then the background will go away completely. On the Mac this will have no effect: </p> <p> <img src='' alt='text'/> </p> <p> but on the Raspberry Pi, this will let what's behind show through. This could be the console, or another app, or a video. </p> <p> <img src='' alt='text'/> </p> <p> Layering multiple apps together opens up a lot of interesting possibilities. I can't wait to see what you make with it. </p> <p> </p> <p> Though there are multiple github repos now, one for each backend, most of the code is still shared in the <a href=''>aminogfx repo</a>. Use that for docs, bugs, and <a href=''>feature requests</a>. </p>

Why You *Can* Build a Smartphone.
In what was by far my most popular post of 2013, <a href=''>Why You Can’t Build A Smartphone</a>, I explained why building a new smartphone platform was futile. Today, like any good author, I’m going to <i>completely contradict</i> myself. Yes, it <b>is</b> possible to create a new smartphone platform. You just have to follow a few constraints. <p> <a href=''>Recent coverage</a> of Google’s Project Ara modular smartphone made me think back to my webOS days. </p> <p> Oh, we were so young and naive, thinking we could make a dent in the coming mobile platform duopoly (sorry MS). Palm, of course, had Handspring in its history.
<a href=''>The Visor</a> was the original modular mobile device, with a Game Boy-like swappable hardware port. </p> <p> While at Palm (before we were HP) I pushed the idea of bringing back swappable hardware, though as more of a standard dock connector with kung-fu grip. Unfortunately it was infeasible when trying to compete in mass-market carrier stores. The last thing the carriers wanted was more SKUs to manage when they were already killing us with the Droid. Customization would have come at the software level. A true hardware modular phone would be DOA. </p> <p> All of that said, I think creating a new mobile platform, even one with modular hardware, is very doable today. There are just a few constraints. It’s true you can’t create a new successful mainstream smartphone platform, but if you are willing to compromise on a few things, there is plenty of room for new entrants. </p> <p> </p> <h3 id="id95478">What is a Smartphone Platform?</h3> <p> </p> <p> First let’s define our terms. A smartphone platform is: </p> <p> </p> <ul><li>Something that makes phone calls. I’m actually willing to fudge on this one now that Skype is everywhere. Let's just say, something with a SIM card.</li> <li>It has a cellphone contract and is sold in carrier stores. That means dealing with carrier sales guys.</li> <li>Has mass market pricing. No one will buy a $2000 phone. It’s got to be less than $500 (w/o subsidies for a base model).</li> <li>Sells millions of units. High volume is how you can make a phone with mass market pricing.</li> <li>Has a complete app store with all of the apps most people need. This is a struggle even for Microsoft.</li> </ul> <p> Nailing all of these is <b>required</b> to be a successful smartphone platform today. As you can imagine, this is practically impossible to do from scratch. Smartphones are now a rich man’s game. You must be prepared to spend upwards of a few billion dollars a year just to have a seat at the table.
Only a crazy person with too much cash would try it. Come on, <a href=''>Larry!</a> </p> <p> </p> <h3 id="id79994">Compromise</h3> <p> However, smartphones ain’t what they used to be. Android, the open source project (AOSP), is a good core with excellent driver support from chipset vendors. The flood of cheap Chinese phones means there are a bunch of factories who would love to make a device for you. Factories that can handle orders smaller than a million per run. </p> <p> All of this means that if you are willing to compromise on one or more of the points above you can make money. It won’t be a ‘smartphone platform’ in the traditional sense and you won’t make billions, but you can still be profitable. Success doesn’t have to mean taking a 10% share of the global market. There are other ways to make money. </p> <p> The key is to <i>not</i> build a smartphone, but rather a device built <i>with smartphone components</i>. It may still effectively be a smartphone (just as smartphones are effectively handheld computers); but don’t <b>call it</b> a smartphone. There are lots of markets underserved by current smartphones. </p> <p> Here are a few approaches: </p> <p> </p> <ul><li>Fork Android. Create a custom skin, replace the Google services, and build your own app store. This is the approach Amazon and Xiaomi have taken and it’s worked pretty well for them (if we ignore the Fire Phone). They each have their own market with satisfied customers. Neither sells in a carrier store or has contracts. Forking Android is still a lot of work, but it’s perfectly viable for some markets and getting cheaper every day.</li> </ul> <p> </p> <ul><li>Crazy hardware. Build a phone with a gigantic hi-res screen. Or a tiny projector in the side. Or medical sensors. You won’t sell millions, but there are underserved markets that are prepared to pay a lot more than they would for a typical phone to get these features.</li> </ul> <p> </p> <ul><li>Cheap hardware.
Build a no-frills device with hardware from two years ago. Moore’s law gives you an incredibly steep discount on components. You can now build a flagship from a few years ago for under $100, or even as low as $30. Performance won’t be great, but depending on the audience it will be good enough for many uses.</li> </ul> <p> </p> <ul><li>Dedicate yourself to an underserved app market like quality educational software. Many of the educational devices you’d find at a typical Toys 'R Us take this approach. They are essentially large phones (or small tablets) which can’t make phone calls. They are completely skinned and come with their own app stores. The key is they aren’t <i>at all</i> in the same market as regular smartphones. They are replacing existing educational devices that have far fewer features.</li> </ul> <p> </p> <ul><li>Modular hot-swappable hardware. This brings us back to Project Ara. </li> </ul> <p> The big question is: who would want a modular phone? That is the wrong question to ask. The right question is: who would want a modular device built out of phone parts? I think the answer is: a lot of people. </p> <p> </p> <h3 id="id67168">Project Ara</h3> <p> Don’t think of Ara as mass customization. Very few people want an everyday phone with swappable parts. However a lot of people would like a custom <b>non-phone</b> device built on a production run of 1. </p> <p> Some things we could build with an Ara device: </p> <p> </p> <ul><li>A medical scanner that can target the particular disease you are fighting this week, and a different disease next week. In the field. In India. Where your only connectivity is a weak cellphone signal. And data tracking with a laptop would take too much power.</li> </ul> <p> </p> <ul><li>Inventory scanning. Those guys who stock the shelves at Walmart would love something like this. They can add the latest tracking technology by just mailing a small module to each store.</li> </ul> <p> </p> <ul><li>UPS package tracking. 
When the people in brown drop off your packages they scan them with what is essentially a smartphone with a custom screen and scanner. This device costs a lot more than Ara would. UPS would love it.</li> </ul> <p> </p> <ul><li>A phone with a real gamepad attached to it for serious gaming, then taken off when you go out to dinner.</li> </ul> <p> </p> <ul><li>A phone integrated into a GoPro for Xtreme Sporting.</li> </ul> <p> </p> <ul><li>A smart digital film camera for indie filmmakers. They want the cool software and connectivity of a smartphone, but with a real camera sensor and large swappable lenses.</li> </ul> <p> </p> <ul><li>The portable research lab. Today my phone is a digital microscope. Tomorrow it becomes a projector, then a media server. I would use this every day.</li> </ul> <p> </p> <ul><li>An Arduino breakout board. Now the flexibility and ease of programming from Arduino comes to your smartphone.</li> </ul> <p> </p> <h3 id="id13083">End Game</h3> <p> Ara isn’t a PC. Don’t think in terms of upgrading RAM or graphics cards. Those are red herrings, <a href=''>like communism</a>. </p> <p> The value of Ara is building something completely different out of smartphone parts. This is already happening with smaller runs of custom Android devices (in the 100k range). Ara will let you build a custom device with a <b>unit scale of one</b>. Ara is the democratization of smartphone technology taken to its inevitable conclusion. </p> <p> So you can take your mega-smartphone platforms to the bank. I’ve got a <a href=''>tricorder</a> to build. </p>\nIdeal OS Part III: User Attention Is SacredIn the first two (<a href=''>1</a>, <a href=''>2</a> ) installments of this essay I covered overall system design, the window manager, and applications. I talked about how the user will communicate with the system, but I haven’t discussed much about how the system communicates back to the user. 
This brings us to the next big problem of today’s operating systems: notifications and concentration. <p> <a href=' '>Jef Raskin</a> famously said: "User data is sacred". We can say the same about the user’s time and attention. </p> <p> The computer must never waste the user’s time. The computer must never break the user’s attention. While these rules are impossible to keep absolutely, they are excellent guidelines. If the computer must interrupt your current task, it better have a damn good reason to do so. And that brings us to notifications. </p> <p> </p> <p> </p> <h2 id="id86358">Notifications: The Big Bugaboo</h2> <p> </p> <p> Notifications have become the bane of my existence. I get notifications for everything: new emails, VIP emails, tweets, likes, +1s, stack overflows. Everything sends me a notification. Every app I install on my phone wants to notify me of its newest stuff. </p> <p> None of these notifications is really spam. Each source is providing me with genuine information. The problem is <i>timing</i>. Notifications are too frequent. They appear at times when I need to concentrate. They are too aggressive in their interruptions. </p> <p> They are also not <i>sorted</i>. While I eventually want to receive all notifications at certain times, I want only the "important" ones now. To solve this we really need to solve two problems: how to notify me of information in less disruptive ways, and how to define <i>important</i>. </p> <p> Let’s look at other solutions. </p> <p> </p> <h3 id="id61277">iOS 7/8</h3> <p> Apple’s solution in iOS 7/8 is an improvement over iOS 6, but not great. You get this giant list of apps. For each app, on a separate screen, you can configure whether you want notifications at all, and where they should be displayed. On the lock screen? With sounds? All of this may be configured <i>for each application</i>. This is too much work so most people just accept the defaults. A system no one uses is less than worthless. 
I usually love Apple’s design, but iOS notifications are abysmal. I don’t know how it got past design review. </p> <p> One possible solution is to position the apps relative to colored lines. Everything below the red line may not notify you in any way beyond its app icon. Everything above the line can notify you with a dropdown notification that quickly fades away. Above the red line is a green line. Anything above the green line can notify you even when do not disturb is turned on. These are only for the very important things. To adjust an app you simply drag it up and down. The list becomes a hierarchy of which apps do more urgent things. The position indicates its setting. This is far easier to manage than a per-app settings screen. </p> <p> The problem with this plan is that an "app" is too granular. My email client receives both urgent and non-urgent messages yet it is treated as one "app". We need a more fine-grained approach. </p> <p> One idea is for the app generating a notification to set the priority level, but this complicates the app config screen. Would you have one entry for 'email: low priority' and a second one for 'email: high priority'? This will quickly become confusing. There has to be a better way. </p> <p> </p> <h3 id="id5203">Content-based Prioritization</h3> <p> Manually managing notifications in a preferences app simply won’t scale. I can manage these individually but it’s death by a thousand cuts. We need to solve this problem as a whole, just like we did with memory management and security. We need content-based analysis to determine the priority. </p> <p> Content-based notifications are a complex and under-researched area, but it should be possible. Ten years ago we were drowning in spam, but Bayesian filters have pretty much put an end to it. Something similar should be possible for prioritizing notifications. 
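To make the idea concrete, here is a minimal sketch of content-based scoring, in the spirit of a Bayesian spam filter but much dumber. Every field name, sender weight, and token below is invented for illustration; a real system would learn these weights from the notifications the user stars.

```javascript
// Hypothetical sketch: score a notification from learned sender and token
// weights, the way a spam filter scores a message. All values are made up.
const senderWeight = { wife: 5, boss: 4, recruiter: -3, newsletter: -5 };

function priority(notification) {
  // Start from how important this sender has historically been...
  let score = senderWeight[notification.sender] ?? 0;
  // ...then nudge the score for tokens the user has starred before.
  const urgentTokens = ["urgent", "deadline", "sick", "asap"];
  for (const word of notification.text.toLowerCase().split(/\s+/)) {
    if (urgentTokens.includes(word)) score += 2;
  }
  return score >= 4 ? "now" : score >= 0 ? "later" : "digest";
}
```

The point is only the shape: the priority comes from the message itself and its sender, not from a per-app toggle buried in a settings screen.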
</p> <p> Setting priority should be based on the content of the message, where the app is coming from, and which person is trying to communicate with me. A message from my wife or boss is always higher priority than messages from Google or a recruiter. </p> <p> Obviously prioritizing would have to be done by a trusted part of your computer since it literally sees your entire life stream, but something like this will eventually have to be built. It might require training: something like a star button to mark a notification as important so the system can learn for next time. Not trivial, but it <b>is</b> possible. Spam filters prove it. </p> <p> </p> <h3 id="id91464">Zen Mode</h3> <p> The other big problem with notifications is making sure they are not only important but arrive at the right time. Sometimes I am in communication mode. I don’t mind having lots of interruptions. I’m just doing email, chatting, paperwork, etc. Other times I’m in deep programming or research mode. I need several hours of uninterrupted time to do ‘real work’. </p> <p> The computer should have a ‘zen’ mode which lets me work full screen with just a single app, or with the set of apps I need for that particular task. Apps can be notified of Zen mode so they can modify their UI. Zen mode also blocks notifications except the highest level. The rest are collated so I can deal with them when I have free time. </p> <p> The important thing is that Zen mode is a system-wide feature. All apps must respect it, and are in fact forced to respect it. An app can alert all it wants, but in Zen mode the alert won’t actually be able to interrupt me. </p> <p> </p> <h3 id="id61769">Presence</h3> <p> The key to Zen mode is that the computer needs to know a lot about you. This is called presence detection. It needs to know your history. It needs to know your current state. Are you at your desk? Did you really leave the office or just step out to get coffee? Are you concentrating hard or just messing around on Reddit? 
The computer should know this stuff. </p> <p> My desktop computer has access to my phone, plus its own sensors like a microphone and camera, but most OSes don’t use them. Google makes a phone that’s always listening for you to say ‘Hey Google’. Why can’t my desktop computer, which has far more computing power than a phone, do the same? </p> <p> Your computer monitors you across devices to establish the context. Context is whether you can be interrupted. Your mood. Your concentration level. Your health. Are you present: reading, moving, or typing? The computer should have the camera and microphone on all the time. It recognizes when you are on the network, and who else is around you. It knows what devices you have with you. Context detection and management must be a central part of the operating system. </p> <p> Presence should be just another data service event. It should use all inputs available to the computer, including networked resources. Use the phone and camera. If I’m at my computer all notifications come to the computer. Project the screen too, if needed. VNC is a built-in protocol supported by all apps. Imagine if Apple’s AirPlay worked with all devices and apps, not just the blessed few. Apple actually built this with their Continuity system in Yosemite. If only it weren’t so buggy, I could actually use it. </p> <p> As a side note, why can’t I use my phone as a peripheral device for my desktop? I have my phone; why can’t I set it up on a tripod or microscope stand and use it just like a directly connected webcam in any application on my computer? </p> <p> Again, it must be said, the security and privacy implications of this are huge. Anonymized and secure database algorithms are still <a href=' '>an open area of research</a>. One key will be keeping calculations as local as possible. The farther away your data gets the easier it is to steal. 
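A tiny sketch of what presence as a data service could look like: sensors report readings, and any component queries the combined context before interrupting. The source names, fields, and thresholds here are all invented for illustration.

```javascript
// Hypothetical presence service: latest reading per source, queried on demand.
const state = new Map();

// Every sensor or device posts its latest reading under a source name,
// e.g. report("camera", { userVisible: true }).
function report(source, reading) {
  state.set(source, reading);
}

// "Can I interrupt right now?" is just a query over the combined state.
function interruptible() {
  const cam = state.get("camera");
  const kb = state.get("keyboard");
  // Don't interrupt someone who is visibly at the desk and typing fast.
  if (cam?.userVisible && kb?.keysPerMinute > 100) return false;
  return true;
}
```

Notifications, Zen mode, and screen projection would all be consumers of the same service, instead of each app guessing on its own.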
</p> <p> </p> <p> </p> <h2 id="id47486">Real Customization</h2> <p> Now on to a new topic. What if our computers could really be customized. I mean <b>really</b> customized, not just changing the theme or setting keybindings (though having system wide editable keybindings would be wonderful. If there from the beginning all apps would use it. But on to real customization.) </p> <p> Customization isn’t just tweaking defaults. It’s about reconfiguring the computer to your needs for the particular task you are doing. With something like a RaspberryPI, the cost is so low I might have several of these running clones of each other. I should be able to do the following with just a few lines of code, or ideally a single visual diagram without anything we could today call ‘coding’. </p> <p> </p> <ul><li>Become a digital picture frame for (Facebook stream | flicker | directory of photos)</li> <li>Edit a document with live code snippets from the Internet, and add microscope photos directly from my smartphone camera. </li> <li>Turn a smartphone into a time-lapse computer. As photos come in they are added to a stream on the desktop.</li> <li>Make my computer play a sound when I press a button on my watch.</li> <li>Set up a 3D text viewer with effects that shows the latest tweet and plays music at a party.</li> <li>Turn a laptop into a photo booth. Full screen interface, choose effect, snaps a photo when you press a physical button, prints out immediately.</li> <li>Use a game pad to move a pan-tilt zoom camera with the computer hooking them together.</li> <li>Build a party mp3 player that lets you advance song with a button and control volume with a knob.</li> <li>Browse a list of DVDs held in a library.</li> <li>Project white noise into a speaker in another specific room.</li> </ul> <p> </p> <p> All of the above cases are really just snapping a few components together, yet today this would be extremely challenging for most people to build. 
Even for a professional engineer like myself, I’d spend a lot of time dealing with interoperability issues like data formats and app permissions. It shouldn’t be this hard. Computers were supposed to be a general-purpose tool. Why aren’t they accessible to everyone? </p> <p> </p> <h3 id="id78113">Everyone should code.</h3> <p> This flows into my belief that everyone should learn to code. I’m not saying we all need to be programmers, or that we all need to even learn to use a programming language, but everyone should be comfortable with computational thinking. Everyone should understand what an algorithm is and how to make a device do something for you through instruction with loops and conditionals. </p> <p> It does sound a bit crazy to say that everyone should learn to think computationally, but it was crazy to say everyone should learn how to read or do arithmetic just a few hundred years ago. This really is a topic for its own full essay (one day…). For today let’s just say that you should be able to reconfigure your computer with simple tools. </p> <p> </p> <p> </p> <h2 id="id37543">A few notes on implementation</h2> <p> This essay series isn’t really about implementations. The user experience is what matters, not how it’s built; but tools and formats <b>do</b> matter, especially if you want a long-term computing platform. So, a few things must be said. </p> <p> </p> <h3 id="id66192">Theming</h3> <p> Look and feel theming. When people say the look and feel they usually mean the <i>look</i>, even though the feel is far more important. The feel must be carefully designed, and then carefully customized for the user in very specific and limited ways. Applications should have no control over this. The user (and their agent the OS, not apps) is the final arbiter. </p> <p> On the look side we really do mean theming. I guess it’s okay, but I’d prefer not to support theming, at least not with a public API. 
In theory it’s great, but in practice I have never seen a 3rd party theme as good as the default from the OS vendor. Perhaps it’s a market failure. Perhaps good designers have better things to do with their time than build free themes. Perhaps it’s hard to judge the quality of a theme from a screenshot. Perhaps end users interested in theming have no taste. I don’t know. But it’s definitely a low priority. </p> <p> </p> <h3 id="id70459">Device Drivers</h3> <p> In short, I hate them. They break all the time, and in the computer’s most vulnerable spot: the kernel. Ideally access to the hardware would be completely virtualized. OS-provided hooks connect the real hardware to the ‘device drivers’, which are isolated user-level modules that understand the details of that particular hardware. If a driver crashes, it crashes, and the service it was providing to the user will stop, but the whole OS won’t come down. </p> <p> We’ve invented amazing hardware virtualization technology for the server room. Let’s bring that back to our desktops. With modern GPUs even the screen should be virtualizable without a significant speed hit. </p> <p> </p> <h3 id="id16687">Message bus format</h3> <p> In general the ideal OS would be based on a simple messaging system. And I do mean simple. Simple. Simple! Text or a text equivalent like JSON. It <b>must</b> have multiple implementations. It <b>must</b> be language-agnostic. Just messages, not remote objects. We aren’t rebuilding CORBA here. Something more like REST. It doesn’t have to be high-performance. This is for system components to communicate at slightly faster than human speeds. When they need to do bulk data transfers they can use other existing systems like files and shared memory. </p> <p> </p> <h2 id="id50796">In Memoriam</h2> <p> Throughout this essay series I’ve highlighted the way our desktop operating systems, which are destined to be used by at least 10% of us, are horrible. While amazing technically, they fail us at every step. 
We still don’t have useful features that were designed in the 1970s, and we still have technical underpinnings that <b>were</b> designed in the 1970s. Seriously. We can do better. </p> <p> I really hope in 300 years, when we are all computing on the Starship Enterprise, we won’t still struggle to combine applications and deal with segfaults. Let’s fix it in this century. </p>\nIdeal OS Part II: The User InterfaceIn the future touch interfaces will take over most computing tasks but 10% of people will still need ‘full general purpose computers’. We can’t let the interface stagnate. This white paper represents a decade of my thinking on what is wrong with desktop-style (WIMP) operating systems, and proposed solutions. PCs are not obsolete. They just need improvements to become ‘workstations’ again. <p> <a href=''>Last time</a> I gave you an overview of what an Operating System would look like if we took away all the bad parts, leaving not too much left, and started building replacements. But what would these new parts look like? How would you start programs and manage windows? Without a filesystem how would the desktop folders work? For the answers to these questions and just so much more, keep reading. </p> <p> </p> <p> </p> <h3 id="id93828">Desktop Folders</h3> <p> Since the filesystem now becomes a database, finding your files would be done through queries. Creating a folder is essentially creating a new saved query. A list of all audio files marked as being songs. A list of all code files marked as being part of a particular project. Folder contents can be read-only queries based only on the attributes of documents (like the list of songs in an album) or they could be ad hoc, where the user drags files into the folder. In this case a file receives a tag referring to that folder, thus simulating the old kind of folder. Unlike traditional directories, however, a file can be in any number of folders at once. 
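As a toy sketch of the two folder kinds, assuming documents live in a database with free-form attributes (the field names and helpers are invented for illustration):

```javascript
// A pretend document database: every file is a record with attributes.
const docs = [
  { name: "track1.mp3", type: "audio", album: "Best Of", tags: [] },
  { name: "main.c", type: "code", project: "kernel", tags: ["inbox"] },
  { name: "notes.txt", type: "text", tags: ["inbox"] },
];

// A query folder is read-only: its contents come from document attributes.
const queryFolder = (pred) => docs.filter(pred);

// An ad hoc folder just matches a tag, so one file can sit in many folders.
const tagFolder = (tag) => docs.filter((d) => d.tags.includes(tag));
```

A query folder is computed from attributes; an ad hoc folder is just a tag, which is exactly why a file can appear in any number of folders at once.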
</p> <p> </p> <h3 id="id39184">Command line</h3> <p> The new OS should have a command line. Part of the magic of Unix is being able to pipe simple commands together at the shell. We still need that, but the pipes would carry streams of simple objects instead of bytes. How much better would the ImageMagick operators be if they could stream proper metadata? Building new commands that talk with the old ones would be trivial. </p> <p> With a single command line you could do complex operations like: find all photos taken in the last four years within 50 miles of Yosemite, and that have a star rating of 3 or higher, resize them to be 1000px on the longest side, then upload them to a new Flickr album called “Best of Yosemite”, and link to the album on Facebook. This could all be done with built-in tools; no custom coding required. Just combining a few primitives on the command line. </p> <p> Of course a traditional command line is still difficult to use for novice users. Even with training you need to memorize a lot of commands. A better solution is a hybrid. In the short (default) form you can chain commands with pipes using auto-completion to help remember commands and arguments. In the expanded form you can chain commands together with a visual drawing-like tool. This would be similar to OS X's Automator. Switching between the modes is always possible. </p> <p> </p> <p> </p> <h3 id="id85962">Windows</h3> <p> Windows are still a good thing. Sometimes you need to resize them to see multiple things at once. However, they could be a lot more powerful and flexible than they are today. </p> <p> First, every window should be a tab. <b>Every window</b>. Snap any window to any other window, just like Chrome tabs. Who cares if the tabs aren’t from the same application? We don’t have applications anymore anyway. If the user wants to snap a todo list window onto an email window, <b>let them</b>. User desires trump ancient technical architecture. 
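Returning to the command line for a moment, the Yosemite example is just commands chained by object pipes. A minimal sketch, with each ‘command’ modeled as a plain function over arrays of photo objects (none of these are real tools, and the fields are invented):

```javascript
// pipe() chains "commands"; each one takes and yields arrays of plain objects,
// so metadata flows through the whole pipeline instead of being flattened to bytes.
const pipe = (...cmds) => (input) => cmds.reduce((data, cmd) => cmd(data), input);

const photos = [
  { place: "Yosemite", stars: 4, width: 4000 },
  { place: "Yosemite", stars: 2, width: 4000 },
  { place: "Paris", stars: 5, width: 3000 },
];

const bestOfYosemite = pipe(
  (ps) => ps.filter((p) => p.place === "Yosemite" && p.stars >= 3),
  (ps) => ps.map((p) => ({ ...p, width: 1000 })) // resize step; metadata survives
);
```

The upload and Facebook steps would just be two more functions in the chain; each stage sees real fields like `stars` and `place` rather than opaque bytes.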
</p> <p> Second, windows should be pluggable. We’ve covered how an email app is really multiple pieces, including an inbox view and a message view. Sometimes you might want those views connected, as with a traditional email client. Other times you might want them separate. This should be as trivial as snapping them together or dragging them apart. </p> <p> For more complex layouts the system should have patterns like ‘master view’ and ‘vertical accordion’. The user can pull out a new empty pattern, then drop the views where they like. We already do similar things in WordPress and other web editors. Let’s make it universal. </p> <p> At first this might seem to come with some challenges. What if the user accidentally creates multiple inbox views? Well, so what? If I want an inbox on each screen of my computer, I can do that. Maybe I want an inbox that just shows work email on my first screen, and personal email on my second. Maybe I want an inbox that just shows emails around a particular project, or from a particular person. These are all just different database queries so why not? If I want to set up my windows that way I should be able to do so. The computer must adapt to how the human works, not the other way around. </p> <p> </p> <p> </p> <h3 id="id4041">The Window Manager</h3> <p> Now the window manager itself. A WM has a bunch of duties. It must render windows (obviously). It must let you move and resize them. It must handle notifications. It must manage the graphics card. It (usually) implements transparency and special effects. It shows dialog boxes. It starts and stops applications. There really are a lot of things required of a modern window manager. </p> <p> Because of this complexity, many operating systems divide this task up in various ways. Some move app launching to a separate launcher system but still close apps with the window manager. Some split the drawing of windows from the moving. Some put notifications in a separate process. 
All of these are good ideas, but they don’t take things far enough. For the IdealOS we should explicitly chop the window manager into different pieces. When I say <i>explicitly</i> I mean actual separate processes that communicate with fully documented APIs. Documented means hackable, and hackable means we can start extending the desktop in interesting ways. </p> <p> </p> <h3 id="id21673">Proposed Window Manager Architecture</h3> <p> <b>Compositor</b>: Individual applications draw to an offscreen buffer or directly to a texture in the OpenGL context. The compositor draws these buffers and textures to the real screen. Since only the compositor has access to the real screen, only the compositor can do interesting effects like Mac OS X’s Exposé system. For hackability, the compositor must expose a (protected) API to manipulate windows and apply shader effects. </p> <p> <b>Window Manager</b>: This draws the actual window controls and handles moving and resizing. It should be easily swappable to support theming and playing around with interaction ideas. The window manager is also in charge of deciding where new windows go when they are created. We’ll get back to this in a second. </p> <p> <b>Launcher</b>: This is an interface for launching apps. It is a separate program but it still starts apps using the app service. It has no extra privileges. Any other program could launch an app just the same. This means we can have multiple launchers at the same time, e.g. a big dock bar and a global search field (like the new Spotlight system on OS X Yosemite). </p> <p> <b>Notification Manager</b>: The actual processing of notifications can happen in a separate notification manager, but creating the notifications on screen should happen in the window manager because it’s the thing which decides where windows go and when. We’ll cover the details of notifications in a moment. 
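To make the split concrete, here is a toy sketch of the pieces talking only through documented messages. In a real system these would be separate processes over IPC; the topic names and payloads here are invented for illustration.

```javascript
// A toy in-process "bus"; a real OS would route these over IPC.
const handlers = {};
const bus = {
  on: (topic, fn) => (handlers[topic] = fn),
  send: (topic, msg) => handlers[topic]?.(msg),
};

const windows = [];
// The window manager owns placement, so everything routes through it.
bus.on("wm.place", ({ title }) => windows.push(title));
// The notification manager decides what to show, not where it goes.
bus.on("notify", (n) => bus.send("wm.place", { title: `notice: ${n.text}` }));
// Any program may ask for an app to be launched; no special privileges.
bus.on("app.launch", ({ name }) => bus.send("wm.place", { title: name }));
```

Note that the launcher and the notification manager have no special access here: anything they can do, any other program can do by sending the same documented messages, which is exactly what makes the desktop hackable.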
</p> <p> </p> <h2 id="id703">The Fun Begins</h2> <p> </p> <h3 id="id45881">Window Placement</h3> <p> What interesting things can we do with our new system? The first thing is to let the window manager be smarter about placing windows. <a href=''>xmonad</a> has some good ideas about tiling windows. When you are doing a lot of work it’s common to want multiple windows at once laid out without overlapping. Sometimes you might want a grid. Other times two columns. Those can be just a keypress away with an xmonad-style window manager. </p> <p> The window manager should be smarter about placing dialogs. Ideally we wouldn’t have dialogs at all, but they are sometimes needed. The WM should be smart about placing them so they don't obscure the content. </p> <p> Current OSes have three strategies for window placement: attaching dialogs to the apps which opened them (save/open dialogs), centering the dialog on the screen, or simply not using dialogs (90% of the iPad solution). A few WMs will take into account the available empty space on screen, but this is still very primitive. They consider the <i>size</i> of the new window but not its <i>content</i>. </p> <p> <a href=' '>This paper</a> by Ishak & Feiner called Content Aware Layout has some great ideas. </p> <p> If the window manager considers the contents of windows then it can be smart about placing new ones. If a background window has large blank spaces, then use that area for the new window, possibly with some transparency. </p> <p> When searching your operating system for a particular word you can search the contents of windows too. The window manager can zoom in just those windows. Even better, it can show just a subset of each window: the part containing the found word. Windows are just bitmaps. We can slice and dice them to do all sorts of cool things. </p> <p> Here’s another example: When copying text from one document to another the WM could help the user maintain their mental state by showing the windows involved. 
Move to window A. Select and copy some text. Now move to window B. The WM knows you are in the middle of a copy and paste action, so it can shrink but not hide Window A. That way you are always aware of where your clipboard content came from. The WM can also show you the current clipboard contents in a floating window. </p> <p> This brings us to another horrible pain point of desktop operating systems: <b>the clipboard</b>. </p> <p> </p> <h2 id="id56697">Copy, Paste, and the Clipboard</h2> <p> </p> <p> Why should the humans have to remember what is stored in a hidden data structure called <i>the clipboard</i> (or <i>the pasteboard</i> for you old school mac-enzies)? We should <b>make it visible</b> and relieve the human of this burden. This visible clipboard could also show previously copied contents. The clipboard should be a persistent, infinitely long data structure, not just a single slot. </p> <p> Did you copy something the other day but can’t remember what or where? Just look in the clipboard’s history. Copying multiple things at once becomes trivial. Grab content from four different sources, then paste them together into a new document. The clipboard isn’t a hidden box that holds only one chunk of data. Now it becomes a shelf or tray that holds the many things you are working with right now. Pick up what you need then place it all in the final destination. Gather and place, not copy and paste. </p> <p> </p> <h2 id="id74643">Working Sets</h2> <p> Finally, the window manager should implement working sets. A working set is a set of documents, resources, applications, or whatever that the human is currently using to do something. </p> <p> In the <b>ideal</b> world, when faced with a task like making "a presentation on Disneyland", the human would search through the library for some books, find some photos on the web, read a few articles, then distill all of this into a single document. 
When done, the human puts everything away, prints out the final document, and moves on to the next task. Very neat and orderly. </p> <p> Of course, in the <b>real</b> world that doesn’t happen. We build up a collection of notes over a few months. Probably a stack of books related to the problem. Over a few days we read the books and articles, collect the quotes and images, then put it on hold as other projects come up. You might be in the middle of writing when a phone call comes in, or a screaming child needs lunch, then finally come back to your office wondering what you were in the middle of. </p> <p> When the project is finally over the books and notes hang around the office until the annual cleanup. Real-world work is messy and full of constant interruptions. Our tools should <i>accept this reality</i> and help, not hinder, it. </p> <p> A good window manager would let you group windows by topic and help you focus on a single task when you need to focus. One way to do this is by having multiple virtual screens where each screen is dedicated to a particular project. The screen can contain not just windows of active documents, but also all of the research files collected for that project. It will contain all of the emails and chat windows related to that project and no others. </p> <p> A project screen is really a topic-specific slice of everything on your computer across all applications and data types. Furthermore, such a screen can be saved and reloaded later; possibly months later. Remembering where you were in that open source project after a 6-week hiatus will be easy. Just load up the workspace and everything related to the project, even emails and GitHub issues, will come up in a single screen. </p> <p> <a href=' '>This paper</a> by Keith Edwards has some great ideas on the topic. 
</p> <p> </p> <p> </p> <h2 id="id53596">Special Effects</h2> <p> By giving the window manager full control of the screen, combined with a good API, we can do amazing things on a modern GPU that were infeasible just a decade ago. After all, a window is simply a bitmap in GPU RAM. It can be manipulated and distorted like any other texture. We could make an area of the screen a black hole with all windows stretched as they approach it. We could render a window with icicles or dust on it to indicate how long it has been since the user interacted with it. </p> <p> Snowflakes and other particle effects are trivial to implement on the GPU but the real power will be in manipulating windows automatically for the human. A window that would be partly off the screen could be distorted hyperbolically instead. While the text would be squished it would still be readable enough for the user to get the gist of it. When the user wants that window they just click on it and it stretches back to normal size. </p> <p> How about window zooming? In a web browser I can zoom any page with + or - buttons. Any web-based app can do the same. I often have multiple sizes of text at once in different browser windows. What if this weren’t restricted to just web pages? Any app should be able to respond to a zoom event to increase its base font size. If all layout and windows are derived from the base font (as they should be) then the app will zoom just like a webpage. For apps which don’t support zooming for whatever reason, the texture itself could be zoomed. While this would result in some blurriness, modern GPUs do a very good job of smoothing zoomed textures, and it won't be an issue at all with the new HiDPI screens just arriving on the market. </p> <p> </p> <h3 id="id90803">Distorting Input</h3> <p> You may have seen effects like those I've described on Linux desktops using Compiz, a compositing window manager for X windows. 
The effects look cool but they are largely useless because of a major flaw in the X windows design. The window manager can control the output of windows - the actual bitmaps - but it cannot control the input. No matter how the windows are distorted, the apps themselves will still receive input events normally. This means clicking on what appears to be a scaled button may instead send the mouse event to another part of the window. To fix this problem both input and output must go through the window manager so that it can keep them in sync. </p> <p> The sad thing is: these problems were identified and solved <b>decades ago</b>. <a href=' '>This research paper</a> I helped with as my senior project in 1997 talks about the problem and solution. Window modification must apply to both input and output. </p> <p> </p> <p> </p> <h2 id="id2193">Could we really build this?</h2> <p> We have to write everything from scratch. By not being backward compatible with anything we can't reuse existing programs. I think that's okay, actually. iOS was built with all new apps too. Existing code doesn't matter as much as we think. It's the ideas and protocols that matter. That's what we get to reuse. </p> <p> </p> <h3 id="id15630">Versioning.</h3> <p> How do we version modules? If your editor experience is actually the combination of 10 different modules working together, how are they upgraded? Do we have a fixed API between them that never changes? Could one module upgrade break the rest? Have we reinvented classpath hell? This new OS design doesn't fix these issues but it does make them explicit. We already have these problems today, but they are solved in ad hoc, inconsistent ways. The new OS would make dependencies in the system explicit, forcing us to deal with them. I expect we'd end up with a system like Firefox where you have different channels to get the modules from depending on the amount of risk you are comfortable with. 
Probably with NPM-like <a href=''>semantic versioning</a>. </p> <p> So could we really build this? Yes, I think we could. However, rather than trying to reinvent absolutely everything we should start with a bare Linux system; similar to Android but without the Java stuff. Then add a good lightweight document database (CouchDB?) and a hardware accelerated scene graph (Amino?). Then we need an object-stream-oriented programming system. I'd suggest Smalltalk or Self, but NodeJS is more widely supported and has tons of modules. Perhaps ZeroMQ as the IPC mechanism. </p> <p> The exact building blocks don't matter very much as we will probably change them over time anyway. The important thing is that we build an OS with a cohesive vision and consistent metaphors. Let's bring back the idea that the users and their data are the central parts of a computing environment, not applications and system plumbing. Let's make machines for getting stuff done, not babysitting hardware. Let's make work stations again. </p>\nIdeal OS: An Epic Tale of IrrationalityNote: Parts <a href=''>II</a> and <a href=''>III</a> are up now. <p> In the art world there is this idea of <a href=' '>anti-art</a>. The goal is to do all of the things backwards or wrong so that you can discover new rights. You have to tear down the world before you can build it again. I'm not entirely sure how it works, but they seem happy with it so I figured I'd give it a go with something that really needs shaking up: the <b>desktop operating system</b>. </p> <p> Be forewarned: this post is an epic. Not epic in a "that movie was awesome" sort of way. It's epic in a "3000 stanza poem you had to read in English class" way. Just FYI. </p> <p> </p> <p> Since this is <b>my</b> blog I'll start by getting rid of everything I <b>personally</b> hate about operating systems. </p> <p> </p> <h3 id="id11823">Fixed Font Sizes</h3> <p> I'm getting old. My eyes don't work as well as they used to. 
I should be able to resize any window, or the entire OS, by hitting cmd-+. It works for the web, why not the desktop? </p> <p> </p> <h3 id="id98785">Filesystems</h3> <p> They crash. They have weird restrictions. They have a single hierarchy. They are slow to search. And the folder metaphor is sooo 1980s I feel like I need a Flock of Seagulls haircut (kids, ask your parents). Let's just get rid of filesystems right away. In the bin. </p> <p> </p> <h3 id="id71182">Device Drivers</h3> <p> Device drivers are the crap software written by hardware companies. They fix just enough bugs to ship, then move on to the next project. Installing them is a pain. Updating them is a pain. Uninstalling them is nearly impossible. Device drivers need to just go. Seriously. Just go out the way you came in. </p> <p> </p> <h3 id="id55965">Backing up your computer</h3> <p> So annoying! Built-in backup software sucks and third-party software sucks even more. Backups are slow, we forget to schedule them, and they magnify the real problem: the filesystem. First you've got a filesystem. Then you've got a backup of the filesystem. Now you've got two problems. Just go. </p> <p> </p> <h3 id="id42386">System utilities.</h3> <p> These are the red-headed stepchildren of OS applications. Apple spends a lot of time polishing Mail. The system console? Not so much. Oh look, there's a whole folder of these poor guys. Who knew? </p> <p> </p> <h3 id="id79622">Window managers</h3> <p> I sorta like Window Managers, but the existing ones kinda suck. I like application <b>windows</b>. It's the <b>management</b> part I hate. All of that moving and resizing is annoying and error-prone. If only I had some sort of automated tool to manage windows for me. We could call it a window manager. Nah, that'll never work. </p> <p> </p> <p> </p> <h3 id="id97857">Inputs</h3> <p> I like the mouse and keyboard, but my computer has a camera and microphone too. Yet the OS itself doesn't use them for interaction. 
No voice recognition. No visual gesture support. They should really just call the camera a Skype accessory since that's all it gets used for. We should find a use for these or get rid of 'em. The NSA wouldn't be happy, but that's life. </p> <p> </p> <h3 id="id65482">Applications</h3> <p> This is the big one. I could learn to live with all of the other problems but applications are too big to skirt around. "Your application is so fat, when it walks around the house it really walks around the house" (kids, ask your parents). </p> <p> Supposedly we buy computers to run applications, but that's just not true. We buy computers to get some work done. In theory applications help us do that but in practice they often get in the way. </p> <p> Each application looks and works differently. Each has its own file format, or worse, unreliable internal database. They are supposed to work together but in reality they just share files on disk, and that only works half the time. Drag and drop would seem like a solution but it's really just shorthand for sharing files. </p> <p> Each application is its own silo. A world unto itself. In the age of the app store this isolation is just getting worse. Every year another brick in the wall (kids, ask your parents). Applications definitely have to go. </p> <p> </p> <h3 id="id24377">What's Left?</h3> <p> So, after throwing away applications, filesystems, device drivers, system utilities, and everything else, what does that leave us with? Not much. Just the kernel for managing CPU and memory, an empty bitmapped screen, and, um... some memory buffers, I guess. Now what? </p> <p> Well, if we were willing to add the filesystem back in we'd pretty much have Unix, or at least old Unix before X. A bunch of tiny tools, not applications, that connect together in different ways to get work done. Quite productive. The only problem is we have to time travel back to the 70s. 
Going with pure Unix means giving up graphics, typography, modern networking, cameras and microphones and mice. In other words, giving up all of the progress of the last 40 years. </p> <p> Surely there is some sort of middle ground? </p> <p> Yes! Yes, there is! We <b>can</b> rebuild the desktop computer. We have the technology. (kids, ask your... oh, just YouTube it!). </p> <p> The creators of Unix had the right idea: little tools that you combine to perform different tasks. The problem is the interchange. Unix tools share data through a single stream of bytes. Great for 1970 but not enough for today. And because that wasn't enough to do everything you'd want to do, they continued to hack ioctls and control characters on top. When that didn't work they added X-windows, a horrible abomination against heaven and nature that somehow crawled out of the primordial slime and refused to die when evolution gave up on it. (Kids, read a book). </p> <p> </p> <h3 id="id46332">Simple Tools</h3> <p> What we need are simple tools that communicate using a stream of small objects. They could be events or records or something else. I'd go with JSON, but that's just me. The important thing is they must be structured and human-parseable; higher level than pure byte streams. </p> <p> If object streams form the primary way our tools communicate, then how do they store things? We got rid of the filesystem, remember? </p> <p> How about a document database? Databases have gotten really good in the 40 years since SQL was invented. They can store structured documents, byte streams, and metadata. They can give you live queries. They can do full text search. Let's finally use a system that allows a file to appear in more than one folder at a time. Hierarchical filesystems in 2014 are madness. It's time for something better. </p> <p> Now then. Armed with a database and streams of objects we can rebuild the world. Let's start with graphics. 
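To make the object-stream idea concrete, here's a minimal sketch of two tools exchanging records as one JSON object per line. The newline framing is my assumption; the important part is that the tools trade structured records, not raw bytes:

```javascript
// Two hypothetical tools exchanging a stream of small JSON objects,
// one object per line (NDJSON). The framing is an assumption; the
// point is structured, human-parseable records instead of byte soup.
function encodeStream(records) {
    return records.map(function(r) { return JSON.stringify(r); }).join('\n') + '\n';
}

function decodeStream(text) {
    return text.trim().split('\n').map(function(line) { return JSON.parse(line); });
}

// "Tool A" (an email checker, say) emits events...
var wire = encodeStream([
    { type: 'email', id: 42, subject: 'hello' },
    { type: 'tag',   id: 42, tag: 'spam' }
]);

// ...and "Tool B" (a spam filter or inbox view) consumes them without
// any byte-level parsing or ioctl hacks.
var events = decodeStream(wire);
```

Any tool in any language can join the conversation as long as it can read and write lines of JSON.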
</p> <p> </p> <h2 id="id95574">Graphics</h2> <p> We are more than a decade into the 21st century. We can assume a proper GPU will be available. That means our base graphics layer can be a real hardware accelerated OpenGL scenegraph, not bitmaps layered with software. </p> <p> Which scenegraph? Who cares. There's a ton to choose from or we can build a new one. The important thing is hardware acceleration is mandatory. <b>Mandatory</b>. Something with shaders. OpenGL ES 2.0 or higher. And I don't care about X-Windows network transparency. That's an idea that died along with Network Computers and SunRays. </p> <p> So we have three things: a database to hold stuff, object streams to send data around, and a GPU to draw it. Now we can start to build some tools. But we can't just start building applications. Applications are bad, remember? Let's think about tasks we want to complete, then talk about how to build the tools to accomplish those tasks. </p> <p> </p> <h2 id="id16270">Modular Email</h2> <p> Let's start with something simple: checking your email. There are actually many pieces to this. First you have to set up your email. The computer needs to know the server where you receive email, any proxies or security layers (SSL, TLS, etc.), and of course your username and password (assuming that is the authentication mechanism your email provider uses). Then the computer needs to speak the server's protocol. Let's keep it simple and assume we only support IMAP. Now the computer needs to download your email messages and store local copies (in our <b>real database</b>, not files). Then the computer must show the new messages to the user in some sort of sortable list. Finally, when the user selects an email message the computer must display it. </p> <p> So far so good. We know how an email is viewed. 
The problem is that in a traditional desktop operating system this is all done by a single application called the "email client", even though many of these tasks have little to do with one another. In our new OS we can split these out: </p> <p> </p> <ul><li>a setup wizard for capturing email account information,</li> <li>a service to check for new emails and download them into the database,</li> <li>an inbox view which is just a saved database query rendered in a particular way (namely, the 'from' and 'subject' fields from messages displayed in a standard sortable list view),</li> <li>and finally a message view which can display a single message.</li> </ul> <p> By splitting up these functions into separate modules we make it a lot easier to write email support for our new OS. It also means we can change and add functionality in one part of the system without modifying others. Let's consider a few additions: </p> <p> A <b>spam filter</b> doesn't need to know about how emails are viewed or downloaded into the system. It just needs a list of new messages. In other words: a saved database query. Whenever a new message comes in, the filter analyzes it and adds a <code>spam</code> tag if it's spam. Then the inbox viewer can display spam messages in a different color, or under a spam folder, or not display them at all. Separating out functions makes everything easier. </p> <p> Adding a <b>new email</b> service is simple. We need a new wizard to create the "account" document stored in the database. When the email checker runs it looks for all account documents, then connects to each one looking for new messages to download. If the new mail service uses a different API (say, Gmail's new API), then we add a new email checker implementation. The rest of the email modules don't have to change at all. They don't care. Proper separation of concerns. Brooks was right. </p> <p> <b>Trigger rules</b>: Want to control your music by email? 
Add a filter that looks for messages with a subject like "play music, beatles random". When it finds such an email it marks it as deleted, then plays the music. Nothing else in the system changes. </p> <p> By splitting the email task into separate modules the system becomes more flexible and hackable. A trigger rule can be written in any programming language. It can have an interface or be headless. As long as it can talk to the database it can do its thing. Modules with standard communication give us flexibility and power. </p> <p> </p> <h2 id="id24995">The Other Apps</h2> <p> Imagine if we apply this modularity principle to everything else on your computer. The applications I use every day include iTunes, Evernote, Mail, Things (a todo list), Contacts, Calendar, Messages, iPhoto, Coda (a code editor), and Safari. Except for Safari, everything else is essentially an editable view into a custom database. They could all be broken up into modules that combine in different ways to replace the existing functions, and let us do so much more. </p> <p> <b>iTunes</b>: becomes a few sortable list views of database queries, a music playback service which supports local and remote resources, and a visualizer. All separate and swappable. iTunes is pretty much a galaxy unto itself these days and is in danger of collapsing into a black hole. Time to split it out. </p> <p> <b>Evernote</b>: a text editor and a DB query, plus a background service to sync with Evernote's servers. </p> <p> <b>Mail</b>: as discussed before, many different modules. </p> <p> <b>Things</b>: a tiny text editor (single line) and a DB query of tiny documents. </p> <p> <b>Contacts</b>: a view into the database of address entries, plus a syncing service. </p> <p> <b>Calendar</b>: a view into the database of event entries, plus a syncing service. </p> <p> <b>Messages</b>: background modules to connect to various chat services. Conversations become a view into the database for chat events. 
</p> <p> <b>iPhoto</b>: again a view into the database, but this time with an image service to perform fast scaling and a view to edit photos. Syncing to iCloud, Facebook, and Flickr becomes just more background services. </p> <p> <b>Coda</b>: a fancy text editor. It would be the least changed under a new OS, but could benefit from file syncing (replacing the SFTP client) and from easily managed editor plugins for theming, syntax highlighting, and keybindings. Wouldn't it be nice to define your keybindings once for the OS rather than in each application? Wouldn't it be nice if all applications could use SFTP to edit remote files instead of duplicating this feature in each app? </p> <p> <b>Safari</b>: the many parts of a web browser probably need to remain tightly coupled, but at least bookmarks and plugins could become modules in the system instead of being tied directly to the browser. Even the renderer could be a module. In a modern Mac we already have these things (web views and bookmarks databases), but in the new OS the modularity would be explicit and usable from any language. </p> <p> </p> <h2 id="id83155">Only Data Matters</h2> <p> There is another, arguably more important, trend here. By breaking up applications into pieces we shift the focus from the application to the data. What the human wants to do with the data is the only important part. </p> <p> Our computers can again become "machines to do work", not places to manipulate applications. They become workstations. We interact with our data. The applications are simply small tools that we rearrange as needed. It's more like a craftsman's workbench than the office desk of today. </p> <p> </p> <h2 id="id75584">Next Time</h2> <p> Sadly for you, this is just part one of my series. Next time I'll dig into the actual UI and window manager. How will people actually interact with the system? How will we handle the ever-growing information deluge of the 21st century? </p> <p> Until next time, keep tearing things down. 
</p>\nGetting Started with NodeJS and ThrustI’ve used a lot of cross platform desktop toolkits over the years. I’ve even built some when I worked on Swing and JavaFX at Sun, and I continue development of Amino, an OpenGL toolkit for JavaScript. I know far more about toolkits than I wish. You would think the hardest part of making a good toolkit is the graphics or the widgets or the API. Nope. It’s deployment. Client side Java failed because of deployment. How to actually get the code running on the end user’s computer 100% reliably, no matter what system they have. Deployment on desktop is hard. Perhaps some web technology can help. <p> </p> <h3 id="id72777">Thrust</h3> <p> Today we’re going to play with a new toolkit called <a href=''>Thrust</a>. Thrust is an embeddable web view based on Chromium, similar to Atom-Shell or Node-webkit, but with one big difference. The Chromium renderer runs in a separate process that your app communicates with over a simple JSON based RPC pipe. This one architectural decision makes Thrust far more flexible and reliable. </p> <p> Since the actual API is over a local connection instead of C bindings, you can use Thrust with the language of your choosing. Bindings already exist for NodeJS, Go, Scala, and Python. The bindings just speak the RPC protocol so you could roll your own bindings if you want. This split makes Thrust far more reliable and flexible than previous Webkit embedding efforts. Though it’s still buggy, I’ve already had far more success with Thrust than I did with Atom-Shell. </p> <p> For this tutorial I’m going to show you how to use Thrust with NodeJS, but the same principles apply to other language bindings. This tutorial assumes you already have node installed and a text editor available. </p> <p> </p> <p> </p> <h3 id="id82521">A simple app</h3> <p> First create a new directory and node project. 
</p> <pre><code>mkdir thrust_tutorial
cd thrust_tutorial
npm init</code></pre> Accept all of the defaults for <code>npm init</code>. <p> Now create a minimal <code>start.html</code> page that looks like this. </p> <pre><code>&lt;html>
&lt;body>
&lt;h1>Greetings, Earthling&lt;/h1>
&lt;/body>
&lt;/html></code></pre> <p> Create another file called <code>start.js</code> and paste this in it: </p> <pre><code>var thrust = require('node-thrust');
var path = require('path');

thrust(function(err, api) {
    var url = 'file://' + path.resolve(__dirname, 'start.html');
    var window = api.window({
        root_url: url,
        size: {
            width: 640,
            height: 480,
        }
    });
    window.focus();
});</code></pre> <p> This launches Thrust with the <code>start.html</code> file in it. Notice that you have to use an absolute URL with a <code>file:</code> protocol because Thrust acts like a regular web browser. It needs real URLs, not just file paths. </p> <p> </p> <h3 id="id3777">Installing Thrust</h3> <p> Now install Thrust and save it to the package.json file. The node bindings will fetch the correct binary parts for your platform automatically. </p> <pre><code>npm install --save node-thrust</code></pre> <p> </p> <p> Now run it! </p> <pre><code>node start.js</code></pre> <p> You should see something like this: </p> <p> <img src='' alt='text'/> </p> <p> A real local app in just a few lines. Not bad. If your app has no need for native access (loading files, etc.) then you can stop right now. You have a local page up and running. It can load JavaScript from remote servers, though I'd copy them locally for offline usage. </p> <p> However, you probably want to do something more than display a page. The advantage of Node is the amazing ecosystem of modules. My personal use case is an Arduino IDE. I want the Node side for compiling code and using the serial port. The web side is for editing code and debugging. That means the webpage side of my app needs to talk to the node side. 
</p> <p> </p> <h3 id="id69180">Message Passing</h3> <p> Thrust defines a simple message passing protocol between the two halves of the application. This is mostly hidden by the language binding. The node function <code>window.focus()</code> actually becomes a message sent from the Node side to the Chromium side over an internal pipe. We don't need to care about how it works, but we do need to pass messages back and forth. </p> <p> On the browser side, add this code to <code>start.html</code> to send a message using the <code>THRUST.remote</code> object like this: </p> <pre><code>&lt;script type='text/javascript'>
THRUST.remote.listen(function(msg) {
    console.log("got back a message " + JSON.stringify(msg));
});
THRUST.remote.send({message:"I am going to solve all the world's energy problems."});
&lt;/script></code></pre> <p> Then receive the message and respond on the Node side with this code: </p> <pre><code>window.on('remote', function(evt) {
    console.log("got a message " + JSON.stringify(evt));
    window.remote({message:"By blowing it up?"});
});</code></pre> <p> The messages may be any JavaScript object that can be serialized as JSON, so you can't pass functions back and forth, just data. </p> <p> If you run this code you'll see a bunch of debugging information on the command line, including the <code>console.log</code> output. </p> <pre><code>[55723:1216/] REMOTE_BINDINGS: SendMessage
got a message {"message":{"message":"I am going to solve all the world's energy problems."}}
[55721:1216/] [API] CALL: 1 remote
[55721:1216/] ThrustWindow call [remote]
[55721:1216/141202:INFO:CONSOLE(7)] "got back a message {"message":"By blowing it up?"}", source: file:///Users/josh/projects/thrust_tutorial/start.html (7)</code></pre> <p> Notice that both ends of the communication are here, the Node and HTML sides. Thrust automatically redirects console.log from the HTML side to standard out. 
I did notice, however, that it doesn't handle the multiple-arguments form of console.log, which is why I use <code>JSON.stringify()</code>. Unlike in a browser, doing <code>console.log("some object", obj)</code> would result in only the "some object" text, not the structure of the actual object. </p> <p> </p> <p> Now that the UI can talk to node, it can do almost anything. Save files, talk to databases, poll joysticks, or play music. Let's build a quick text editor. </p> <p> </p> <h3 id="id12933">Building a text editor</h3> <p> </p> <p> Create a file called <code>editor.html</code>. </p> <pre><code>&lt;html>
&lt;head>
&lt;script src="" type="text/javascript" charset="utf-8">&lt;/script>
&lt;script src=''>&lt;/script>
&lt;style type="text/css" media="screen">
    #editor {
        position: absolute;
        top: 30px;
        right: 0;
        bottom: 0;
        left: 0;
    }
&lt;/style>
&lt;/head>
&lt;body>
&lt;button id='save'>Save&lt;/button>
&lt;div id="editor">function foo(items) {
    var x = "All this is syntax highlighted";
    return x;
}&lt;/div>
&lt;script>
var editor = ace.edit("editor");
editor.setTheme("ace/theme/monokai");
editor.getSession().setMode("ace/mode/javascript");
$('#save').click(function() {
    THRUST.remote.send({action:'save', content: editor.getValue()});
});
&lt;/script>
&lt;/body>
&lt;/html></code></pre> <p> This page initializes the editor and creates a handler for the save button. When the user presses save, it will send the editor contents to the node side. Note the <code>#editor</code> CSS to give it a size. Without a size an Ace editor will shrink to 0. 
</p> <p> </p> <p> This is the new Node side code, <code>editor.js</code>: </p> <pre><code>var fs = require('fs');
var thrust = require('node-thrust');

thrust(function(err, api) {
    var url = 'file://' + require('path').resolve(__dirname, 'editor.html');
    var window = api.window({
        root_url: url,
        size: {
            width: 640,
            height: 480,
        }
    });
    window.focus();
    window.on('remote', function(evt) {
        console.log("got a message " + JSON.stringify(evt));
        if(evt.message.action == 'save') return saveFile(evt.message);
    });
});

function saveFile(msg) {
    fs.writeFileSync('editor.txt', msg.content);
    console.log("saved to editor.txt");
}</code></pre> <p> </p> <p> Run this with <code>node editor.js</code> and you will see something like this: </p> <p> </p> <p> <img src='' alt='text'/> </p> <p> Sweet. A real text editor that can save files. </p> <p> </p> <p> </p> <h3 id="id52801">Things I haven't covered </h3> <p> You can control native menus with Thrust. On Windows or Linux this should be done in the app view itself with the various CSS toolkits. On Mac (or Linux with Unity) you will want to use the real menubar. You can do this with the API documented <a href=''>here</a>. It's a pretty simple API but I want to mention something that might bite you: menus in the menubar will appear in the order you create them, not the order you add them to the menubar. </p> <p> </p> <p> Another thing I didn't cover, since I'm new to it myself, is <a href=''>webviews</a>. Thrust lets you embed a second webpage inside the first, similar to an iframe but with stricter permissions. This webview is very useful if you need to load untrusted content, such as an RSS reader might do. The webview is encapsulated so you can run code that might crash in it. This would be useful if you were making, say, an IDE or application editor that must repeatedly run some (possibly buggy) code and then throw it away. I'll do a future installment on web views. </p> <p> I also didn't cover packaging. Thrust doesn't handle packaging with installers. 
It gives you an executable and some libraries that you can run from a command line, but you still must build a native app bundle / deb / rpm / msi for each platform you want to support if you want a nicer experience. Fortunately there are other tools to help you do this, like <a href=''>InnoSetup</a> and <a href=''>FPM</a>.