Below are my three main session proposals for OSCON, plus a few random ideas near the bottom that aren't fleshed out. Please give me some feedback on what you like and don't like. My goal is to have four really solid submissions. Thanks!

HTML 5 Canvas

Games account for about half of the apps in the typical app store. They are among the first things ported to any new platform. Games help drive technology forward. This year's edition of the popular HTML Canvas Deep Dive will focus specifically on building cross-platform games for mobile and desktop. We will cover everything needed to build basic games with animation, scrolling, sound effects and music, image loading, sprites, and even joystick support. Then we will learn how to package them to run on desktop and mobile devices, both in and outside of app stores.
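To give a flavor of the starting point, here is a minimal sketch of the kind of bare-bones run loop the session builds up from before introducing real engines. It assumes a page containing a canvas element with id "game" and stands a colored rectangle in for the sprite.

    // Minimal sketch: a Canvas run loop with one keyboard-controlled "sprite".
    // Assumes the page contains <canvas id="game" width="640" height="480">.
    var canvas = document.getElementById('game');
    var ctx = canvas.getContext('2d');

    var player = { x: 320, y: 240, speed: 200 };  // speed in pixels per second
    var keys = {};
    window.addEventListener('keydown', function (e) { keys[e.keyCode] = true; });
    window.addEventListener('keyup',   function (e) { keys[e.keyCode] = false; });

    var last = null;
    function frame(now) {
        var dt = last ? (now - last) / 1000 : 0;  // seconds since the previous frame
        last = now;

        // update: arrow keys move the player
        if (keys[37]) player.x -= player.speed * dt;
        if (keys[39]) player.x += player.speed * dt;
        if (keys[38]) player.y -= player.speed * dt;
        if (keys[40]) player.y += player.speed * dt;

        // draw: clear the screen, then draw the player as a plain rectangle
        ctx.fillStyle = '#222';
        ctx.fillRect(0, 0, canvas.width, canvas.height);
        ctx.fillStyle = '#6cf';
        ctx.fillRect(player.x - 16, player.y - 16, 32, 32);

        requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);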

Outline:

  • Why make games?
  • Why make games in Canvas?
    • it's easy and fun!
    • works everywhere. more places than any other graphics API.
    • keeps getting faster and more powerful.
  • Anatomy of a game
    • game engine: it's more than just a run loop
    • images: sprites, models, and more
    • input: keyboard, touch, and joysticks
    • animation and player control
    • scaffolding: menus and splash screens
  • picking a game engine
    • 2d engines
    • 3d engines
    • rolling your own
  • drawing to the screen with movement
  • handling input:
    • regular events: keyboard and mouse
    • multi-touch: utilities to help
    • gamepad / joystick
    • camera
  • audio:
    • background music
    • sound effects
      • latency
      • doubling up the playback
    • considerations for mobile
  • resource management
  • finishing touches:
    • fullscreen
    • splash screens
    • loading screens
  • sharing your game with the world
    • on the web
    • mobile app stores
    • desktop app stores
    • tools to help
    • case study
  • performance
    • desktop
    • mobile
    • using 3d for 2d work
  • tools to help
    • performance tuning
    • resource editing
    • existing artwork and music you can use

Make this both a full three-hour workshop and a one-hour talk with the lessons?

Designing The Internet of Things with the 3 Laws of Robotics

Thanks to cheap sensors and even cheaper computing, we are rapidly approaching the age of the smart home: living spaces filled with smart things. Objects connected to each other and to the internet. Thermostats, door switches, lights, windows, gas sensors and toilets.  However, this vision of things to come brings great challenges as well. How do we design interfaces for these devices? How can someone manage a house full of 200 gadgets each demanding new batteries and an IP address?  What if your networked toaster rats you out to the FBI? The challenges of building a safe and understandable Internet of Things are immense. There is one existing ethical framework that can help: Isaac Asimov's Three Laws of Robotics.

In this session we will explore the complex interactions of the Internet of Things and see how the classic Three Laws of Robotics can be applied in these situations. We will cover physical safety, data privacy, setup and maintenance, and general usability. No knowledge of programming or interaction design is required, just an open mind and a desire to see the future.

outline:

  • The Internet of things?
    • Why is it cool?
    • What counts and what doesn't?
    • Inside and outside your home.
  • A quick survey of the problems IoT creates:
    • data privacy
    • physical security
    • physical safety
    • data overload
    • management overload
  • The Three Laws of Robotics
    • fictional and non-fictional history
    • guidance to solve our IoT problems
  • Do Not Harm a Human
    • physical safety
    • emotional safety
    • preserving privacy
  • Obey Orders From Humans
    • The principle of Least User Astonishment.
    • manual overrides
    • decision delegation
    • heuristic design
  • Protect Own Existence
    • Easing the management burden
    • Escalation of emergencies
    • Safe failure
  • Next steps

A survey of visual programming languages

Pure visual programming languages sound like a great idea. Who wouldn't want to create robust and powerful programs using more than just lines of text?  It is one of the holy grails of computer science, yet success has proven elusive. The last fifty years of research are littered with the corpses of failed attempts (along with a few interesting successes in unexpected areas).  Why is it so hard to create a visual programming language that works in the real world? In this session we will explore the history of visual programming, looking at both the failures and successes from the fifties through to the modern day. We will look for clues about what works and what doesn't. We will extract concepts that can help us design visual languages in the future, as well as features to bring back into traditional programming environments.

outline

  • What is visual programming?
  • Why visual programming?
    • a picture is worth a thousand words, so it's more expressive. right?
    • similarity to visual structures we already use (UML, state diagrams, GUI builders)
    • non-programmers can program.
    • for teaching programming.
  • what counts and what doesn't.
    • visual studio, no; visual basic yes;
    • visual aids to traditional programming don't count. 
    • some part of the application must be specified in a purely visual manner. VB forms count. same with access forms.
  • history
    • early attempts in the 50s and 60s
    • the mother of all demos.
    • 80s era research. 
      • smalltalk visual environments. didn't quite make it. why? what held them back?
      • soviet visual programming recently uncovered.
      • spreadsheet
    • 90s
      • visual basic
      • access and other visual databases
      • flash, director, multi-media languages.
      • music composition
    • 2000s
      • quartz composer
      • educational languages
  • educational languages:
    • scratch
    • squeak & etoys
    • lego mindstorms
    • blockly: abandoned?
    • android builder thingy: abandoned?
  • conclusions:
    • some tasks work visually. others do not.
    • winners:
      • UI layout
      • drawing, animation, movies. any media creation.
      • anything where direct manipulation helps
      • where a boxes and lines metaphor already exists: music sequencers. (though it doesn't result in very extensible code)
    • losers:
      • traditional stream and graph based algorithms
      • anything dealing with strings or non-visual data structures 
      • building libraries. reusability seems to be especially hard to get right.
  • crossover ideas:
    • colors and images in a traditional editor
    • rapid/instant feedback. Processing.
    • show/hide overlays for interesting information.
    • use of color, typography, visual layout to display purely text based code.
    • mixing visual with non-visual works very well. assemble components visually. build components in regular code.
    • separate editing from viewing:  greek symbols and other very compact representations of algorithms?
    • hide the filesystem. you don't care how your code is stored on disk. dir structure is irrelevant. Smalltalk had this.

A Few Other Ideas

My 'game editor inside the game as the game' idea.

Hacking Things Up with WebKit-nix:

Nix is a port of WebKit2 based on POSIX and OpenGL ES. It is unique due to its portability and few dependencies. While it can be run in a traditional desktop environment with a GUI toolkit, its most interesting uses are for embedded systems where a full GUI may not be available, and for headless applications where there is no live graphics environment. This session will cover what Nix is, how to compile it, and what you can build with it:

  • list of things people have done with it
  • how to compile it
  • how to integrate it into a server-side app
  • how to integrate it into a client-side app with direct GL rendering
  • running it on a Raspberry Pi without X running
  • next steps and places to help

Working with the Raspberry Pi as a kiosk: no X, boot right into your app.

Intro to Bluetooth Low Energy: iOS, desktop, Raspberry Pi, Arduino.

I am giving a presentation on the future of desktop interfaces at OSCON in a few weeks. To help prepare for the session I'd like to use you, gentle readers, as my guinea pigs. The following essay is an extremely rough version of what I will be presenting. Please imagine it with humorous illustrations and no grammatical errors. I will greatly appreciate your feedback. What parts should I expand? Where are my arguments unclear or flawed? Would you come to see this talk?

Welcome to The Future of Desktop Interfaces. I'm afraid your presence is actually a ruse on my part. I am *not* going to tell you what the future of desktop interfaces will be. Rather my goal is to trick you into inventing the future for me. So really you are here because I am lazy. Remember, necessity is the mother of invention. Laziness is the father.

Introduction

It is my belief that in a decade 90% of people will use a tablet, smartphone, or other non-desktop device as their primary computing interface. I actually made this prediction two years ago, so I have only eight years left, and the success of the iPad implies that we may actually be ahead of schedule. At this point I think my prediction is relatively uncontroversial. Tablets and smartphones have a managed or 'curated' computing experience. These devices actually meet the computing needs of most people better than a traditional desktop PC. Note: for the purposes of this discussion I mean both laptops and desktop computers running Mac OS X, Windows, or a Linux desktop: anything with an exposed filesystem. These managed computing devices will continue to get better and more powerful, mostly replacing the PC.

So, that's great for the 90%. They will be able to get their jobs done with no fuss. But what about the 10%? What about the people who create content professionally? What about the programmers? The digital artists? The data analysts? The hackers? Are PCs going to atrophy as all attention moves to the 90%? I don't think so. In fact, I think this could be a renaissance of desktop computing. We've been kind of stuck in a rut for the past decade. Desktop interfaces haven't changed much since about 2000. Certainly they are prettier, but they haven't really gotten better or more productive. I think we could have some really interesting things coming. But first, a diversion.

This is a cave painting [image]
This is a painting from the wall of Pompeii, circa 70 AD [image]
This is a painting from the Middle Ages [image]
This is a painting from the High Renaissance [image]
This is a painting from the 1850s [image]

We can see a trend of greater and greater realism built over the centuries, leading up to the height of the great Renaissance painters like Michelangelo. Then a plateau. Once we had achieved realism, where do we go? More portraiture. More landscapes. Painting started to get boring. Then something happened in the 1870s. All of a sudden we get impressionism, cubism, surrealism, and abstract modernism. We see more change in a 70-year period than in the previous 700. What happened?

[pictures of Van Gogh and Picasso paintings]

Photography was invented. Until the photograph, the commercial purpose of painting was to capture and recreate reality. Most paintings were portraits of rich people. But no painting could compare with the photograph at recreating reality, especially not for the price. The photograph made realistic portrait painting obsolete. This was both a blessing and a curse. Painters needed to dream up something new to paint. And dream they did. Freed from the need to duplicate reality, there was an explosion of new ideas and trends. Painting had new life, which led to amazing things.

I hope the same thing will happen to desktop computers. PCs have been the primary computing interface for humans for the past thirty years. By definition they have to serve the needs of all people. But if 90% of people will use something else, then maybe desktop interfaces can evolve again. Maybe they can change to meet the needs of the 10% far better.

Trends

I can't reliably predict anything a decade out in our industry. Things change too quickly and there are too many unknowns. However, there are a few trends that I think we can take a look at. The next few years will be shaped by the following forces:

* Moore's law. We have a glut of CPU, GPU, and storage resources on our personal computers. A glut that we don't really take advantage of yet, and this glut shows no sign of stopping.

* Mature software development tools and toolkits. One engineer can create an app in a few weeks that would have taken a team of programmers months just a decade or two ago. This is partly thanks to our tools like modern IDEs, version control systems, automated build systems, etc. We also have robust toolkits and APIs that let us code at a higher level. Combined with an excess of RAM we can build complex desktop software far faster.

* App stores: while I don't like the curated part of desktop app stores there is no denying that they open the market for smaller software developers. A small team can make an app that will appeal to a very niche market and still sell it profitably because they have access to the entire world. This makes narrower products far more feasible than was possible 10 years ago.

* Ubiquitous networking

* Info glut:

The 10 Percent

Before we talk about what the interface will look like, let's talk about the problems that the 10% have. When I first suggested this topic some people thought I meant that interfaces would become hard to use again. That usability is only for the 90%. Not true. The 10% needs quality software as well, but we need it to be deeper. We have specific needs that must be addressed and are willing to spend the time to learn more powerful interfaces. In particular, we have a torrent of information to be managed.

Every time I get a new computer the hard drive doubles and I always fill it up. Only half of this is photos and videos. I also have endless documentation sets, PDFs, gigabytes of emails, backups of my mobile devices, and word docs stretching back to the early nineties. I personally have more information to manage than most companies did thirty years ago. I need a way to manage it.

Along with this information I have lots of devices that I need to manage as well. iCloud is nice but it doesn't scale very well. I have an iPad, two iPhones (for me and my wife), an iPod touch for my son, and several test phones (Android, webOS, Windows Phone, and Meego), and a few cameras. And that's just the mobile devices. I now have a home media server and a Roku box for the TV. Soon I will have home automation components for my thermostat, to control the lights, water the garden, run the sprinklers, handle security, and watch the baby. Thanks to Moore's Law our homes will soon be awash in computing. That's a lot to manage. I can't do it all from an iPad.

So where do we start to address these needs? First: hide the filesystem.

information management

Hide the filesystem. Use robust shared data stores underneath. (Man, I really wish BeOS had survived.) I'm not saying we need a database to replace the filesystem, but at least hide it. I don't care how my stuff is stored as long as it works, and works quickly. The example most people are familiar with is iTunes. MP3 libraries were the first widespread case where we hit the complexity wall. Most people simply have too many songs to manage effectively using directories. Instead we need a database that can search and filter by multiple criteria and be very responsive. iTunes does this for music.

[picture of iTunes]
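In code terms the shift is from walking a directory tree to running a query over metadata. Here is a minimal sketch, with entirely hypothetical field names, of the kind of multi-criteria filtering a smart playlist does:

    // Hypothetical media store: filter by several criteria at once, like a smart playlist.
    var library = [
        { title: 'Song A', artist: 'X', genre: 'jazz', year: 1959, rating: 5, plays: 42 },
        { title: 'Song B', artist: 'Y', genre: 'rock', year: 1977, rating: 3, plays: 7 }
        // ...thousands more
    ];

    function smartList(items, criteria) {
        // an item is kept only if it passes every criterion
        return items.filter(function (item) {
            return criteria.every(function (test) { return test(item); });
        });
    }

    // "jazz from the 50s that I actually listen to" -- no folders involved
    var favorites = smartList(library, [
        function (s) { return s.genre === 'jazz'; },
        function (s) { return s.year >= 1950 && s.year < 1960; },
        function (s) { return s.rating >= 4 || s.plays > 20; }
    ]);
    console.log(favorites);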

I'd like to see this interface style applied to more kinds of files. Here is an app for academic researchers to manage the many scholarly papers they have to read.

[picture of Papers]

[picture of Sparrow]

Sparrow is a great email client for Mac that is much faster than the system default. It syncs with social networks to get avatars. It has intelligent threading. It's good enough that I spent $20 to replace a free system app. But it's just email. It doesn't handle the rest of my communication. And while beautiful, it's not very customizable. Customization matters as much as beauty.

I'd like to have a single place to store my external communication: IM logs, SMS logs, all tweets, all emails, all FB posts. One app. One interface. When I'm composing an email to someone I'd like to see all of the messages I've ever sent to them. And of course this should magically sync with all of my online services to keep up to date.

Idea: I want a single place to see all of my code across all projects, aware of all of my version control systems. My IDE only manages the projects I'm currently editing. I want something to manage *all* of my projects, both personal and professional, including my build services and the bug tracking systems I'm currently involved with. Most of these systems have APIs, so I don't think it would be difficult to build.
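As a rough sketch of why it shouldn't be hard: the GitHub call below is a real endpoint, while the bug tracker and build server URLs are hypothetical stand-ins for whatever services you actually use.

    // Rough sketch of a cross-project feed (Node with global fetch assumed).
    // The GitHub repo listing is a real endpoint; the commented-out ones are
    // hypothetical stand-ins for a bug tracker and a build server.
    async function loadAllProjects(user) {
        var projects = [];

        var repos = await (await fetch('https://api.github.com/users/' + user + '/repos')).json();
        repos.forEach(function (r) {
            projects.push({ name: r.name, source: 'github', updated: r.updated_at });
        });

        // var bugs   = await (await fetch('https://bugtracker.example.com/api/my-issues')).json();
        // var builds = await (await fetch('https://ci.example.com/api/jobs?owner=' + user)).json();
        // ...merge those into the same list the same way...

        // newest activity first, regardless of which service it came from
        return projects.sort(function (a, b) { return a.updated < b.updated ? 1 : -1; });
    }

    loadAllProjects('joshmarinacci').then(function (p) { console.log(p); });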

Apple is moving in the direction of system wide saved queries, similar to what iTunes offers, but they are doing it very cautiously. There is so much more we can do.

customization and automation

Just as great painters would make their own paints and canvases, we need the ability to create our own workflows and customize our tools.

* apps communicate together

* build new apps out of pieces of other apps

* customize the general computing environment, but still be able to handle anything

the line between a custom workflow and real programming is fuzzy

Let me give you some examples:

iTunes smart lists and IDE keymaps

We need to take this to the next level. I should have a keymap that works system wide in all apps. The Mac has smart lists that can be used in any app, but only for photos and albums. This should be available for any kind of media list.

I should be able to change the interface of any app. In a drawing program I can change the toolbar, but why doesn't *every* action in an app have a toolbar item? Why can't I create different toolbar sets for different kinds of projects? Then these sets should be shared across apps.

Automation

If every action has a toolbar item then we could take this to the next level and actually script our apps together visually. Mac OS X does this with Automator but they never took it as far as it could go. Now it appears that Automator will be crippled by the new Mac App Store restrictions. This is a shame. I think Automator never really took off because not enough apps supported it and there was no easy way to share scripts between people. My ideal vision would be something like: visually create a script to take the current document in Photoshop, convert it to PNG, post it to Flickr, and send out a Facebook update and a tweet with the Flickr link *tomorrow morning at 9am EST*. This should not be hard to do in the 21st century, and yet our tools don't make it easy. This is really a problem with app communication. I'm not entirely sure how to solve it, but the potential is huge.

If we start to think of a desktop app as a collection of modules rather than a monolithic whole, then we can start doing these sorts of cool things.
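Here is a sketch of what that could feel like to a script author. The app actions are console.log stubs standing in for real integrations; nothing here is a real API. The point is only that once every action is an addressable module, 'automation' is just function composition.

    // Sketch: if every app action were an addressable module, scripting apps
    // together is just function composition. These actions are stubs.
    var actions = {
        exportCurrentAsPNG: function ()    { console.log('photoshop: export current doc'); return 'sketch.png'; },
        postToFlickr:       function (f)   { console.log('flickr: upload ' + f); return 'http://flic.kr/p/example'; },
        announce:           function (url) { console.log('facebook + twitter: ' + url); return url; }
    };

    function runPipeline(steps) {
        // each step receives the previous step's output
        return steps.reduce(function (value, step) { return step(value); }, undefined);
    }

    // "tomorrow morning at 9am" reduced to a plain timeout for the sketch
    setTimeout(function () {
        runPipeline([actions.exportCurrentAsPNG, actions.postToFlickr, actions.announce]);
    }, 1000);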

With such a modular system the line between customization, automation, and real programming becomes very fuzzy. If we make it easy to create and share these scripts then we can make the desktop twice as powerful.

Integrate the web

Once we make our apps modular and scriptable they can start talking to the web. There are some simple things we can do today. Why don't more desktop apps have 'share via twitter' buttons? Why doesn't Photoshop have it? I've been working on an open source drawing program called Leonardo Sketch. One of the first features I added was the ability to share a snapshot of what I'm working on right now through Flickr, Twitter, and Facebook.

I'd like to take this to the next level. I recently added an asset manager to Leo Sketch. It's an iTunes-like view that shows all of the clip art, fonts, palettes, and textures I have to work with. But why should my asset collection be limited to what's on my computer? I've added a Flickr search which finds Creative Commons licensed images based on keywords. Next I want to integrate Google Fonts. Then I can access a huge collection of fonts from anywhere in the world, easily searchable.
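For the Flickr piece, the heavy lifting is already done by the public API. A rough sketch follows: flickr.photos.search is a real method, but it needs an API key, and the license id values used here are an assumption (the CC-BY variants) that should be checked against flickr.photos.licenses.getInfo.

    // Rough sketch of the Flickr half of the asset manager (browser fetch; needs an API key).
    async function searchCreativeCommons(keywords, apiKey) {
        var params = new URLSearchParams({
            method: 'flickr.photos.search',
            api_key: apiKey,
            text: keywords,
            license: '4,5',      // assumption: 4 = CC-BY, 5 = CC-BY-SA
            per_page: '20',
            format: 'json',
            nojsoncallback: '1'
        });
        var resp = await fetch('https://api.flickr.com/services/rest/?' + params);
        var data = await resp.json();
        // build the standard static-image URL for each result
        return data.photos.photo.map(function (p) {
            return 'https://live.staticflickr.com/' + p.server + '/' + p.id + '_' + p.secret + '.jpg';
        });
    }

    // searchCreativeCommons('summer', myApiKey).then(function (urls) { console.log(urls); });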

How else could we integrate the web into desktop apps? How about selecting text and having Bing translate it into another language, right from within my drawing? Or let me see the most popular color swatches that are tagged with 'summer'.

How about an IDE that will let me search for small open source code libraries? Say I'm working on an app that needs to parse a CSV file. Instead of jumping to a web browser I'd like to be able to search GitHub for code snippets that match my current working language. It shouldn't be any more complicated than code completion. It would show me 8 snippets which take a file and return a string, along with their ratings. When I choose the one I want, it downloads the code, compiles it, and puts it on my classpath.
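GitHub's code search API already gets partway there. The sketch below uses the real /search/code endpoint (which requires an auth token), but it only does keyword matching; ranking results by actual type signature is the missing piece.

    // Sketch of the IDE side using GitHub's code search API.
    async function findSnippets(query, language, token) {
        var q = encodeURIComponent(query + ' language:' + language);
        var resp = await fetch('https://api.github.com/search/code?q=' + q, {
            headers: {
                'Authorization': 'Bearer ' + token,
                'Accept': 'application/vnd.github+json'
            }
        });
        var data = await resp.json();
        // keep the top 8 hits, like a code-completion popup
        return data.items.slice(0, 8).map(function (item) {
            return { repo: item.repository.full_name, path: item.path, score: item.score };
        });
    }

    // findSnippets('parse csv file to string', 'java', myToken).then(function (hits) { console.log(hits); });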

Context awareness: computers must learn from us and do things for us

The other half of integration and automation is the computer doing things for us without having to request it. This trend is already happening from two very different directions. On one end our phones are learning more about us, which enables services like Siri. On the other end our IDEs are getting smarter and smarter. IntelliJ IDEA not only knows what possible methods can fit the current spot in my code, but it will automatically add the imports for me. Because the editor has so much contextual knowledge about what I'm doing right now (I'm editing a particular place in a particular file in a particular project with a particular classpath), it can do lots of things for me, or at least monitor what I am doing and give me advice. We should have all our apps doing this.

My word processor should let me type in an equation and offer to evaluate it for me. I should be able to type in a reference to a stock symbol and have it turn into a link with the current stock value. If I'm listening to music, my calendar should tell me about an appointment 15 minutes beforehand with a silent unobtrusive message, then escalate to an alarm and turn off the music as I get closer to the appointment time. There should be a system wide switch to turn off all messages and alerts when I want to be in concentration mode.
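The equation idea is almost trivially small. A toy sketch, using Function() for brevity where a real editor would want a proper expression parser:

    // Toy sketch: spot a trailing "...=" expression as the user types and offer its value.
    function offerEvaluation(text) {
        var match = text.match(/([0-9+\-*\/(). ]+)=\s*$/);
        if (!match) return null;
        try {
            var value = Function('return (' + match[1] + ');')();
            return match[1].trim() + ' = ' + value;
        } catch (e) {
            return null;   // not actually a well-formed expression
        }
    }

    console.log(offerEvaluation('the total is 12*(3+4)='));  // "12*(3+4) = 84"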

My laptop knows where it is based on my phone's GPS and the local wifi access point. It should be able to adjust itself based on the location just as my phone does. The possibilities are huge here, but it worries me that almost all of the innovation is happening on the phone side, not the desktop side.

Identity Management and Service Sharing

Finally identity management. Originally we thought identity was just a handle. Then we thought it was your real name representing a single unified you. You have one identity on the web, period. But that was wrong too. Now we understand that identity is far more complicated. When I do something on a Google service am I acting as Josh, the author and open source coder, or am I acting as Joshua, the Nokia researcher? The answer is: it depends. I might be either of those, or some mixture.

We also have identity scattered across many places with no way to integrate them while protecting my privacy. This is a huge mess that is going to keep getting worse. I'd like to see an OS-wide identity system. Any app, including the browser, which wishes to integrate with the network can ask the identity system for credentials. This allows any app to talk to anything, and at any time I can change which identity I'm using. This could be done per app or even for the desktop as a whole. I can imagine doing this with a virtual desktop system where one desktop is 'Josh the author' with a green background and menu bar. The other desktop is 'Josh the engineer' with a blue background and menu bar. Each has credentials that can't be shared between them without explicit action by me, the human.
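A sketch of the shape such an identity service might take. Everything here is hypothetical; the point is that an app asks for credentials by service name and the active profile decides which ones it gets.

    // Hypothetical OS identity service: apps ask for credentials by service name and
    // never store their own; switching the active profile re-points every app at once.
    var identityService = {
        profiles: {
            'josh-the-author':   { twitter: 'token-a', flickr: 'token-b' },
            'josh-the-engineer': { github:  'token-c', jira:   'token-d' }
        },
        active: 'josh-the-author',   // switched per app, or per virtual desktop

        credentialsFor: function (service) {
            var creds = this.profiles[this.active][service];
            if (!creds) throw new Error('no ' + service + ' credentials for ' + this.active);
            return creds;
        }
    };

    // an app never hard-codes which identity it is acting as
    console.log(identityService.credentialsFor('twitter'));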

We can see some of this in password managers and browsers that sync their settings, but this really needs to be system wide.

A plea to Desktop Linux

There will never be a year that Linux conquers the desktop. That era is over. The world is going mobile. Instead of focusing on a complete desktop OS, create an awesome desktop environment that can be seamlessly layered on top of any proprietary OS. Over time people will use more and more of your stuff, until they can switch entirely. Especially if you make the switch easy by having cloud based backup of everything (hello version control!).

My favorite app for Mac now is a command line tool called Brew. It uses community developed 'recipes' which download and compile lots of Linux programs and libraries. With Brew I can be on a Mac but have ridiculously easy access to the rich ecosystem of Linux tools like Apache, ImageMagick, SDL, and Node.

What I want is a graphical Brew app. I install it once and log in. It then downloads and configures my favorite email and browser apps, all of the command line programming tools I need, and anything else provided by the open source community. Even my settings are synced from a cloud server. *That* is how Linux can win the desktop. You will never win over the masses. Focus on the hyper users and let them ease into a fully free desktop.

Conclusion:

I hope that you have gotten two things out of this essay:

First, most computing is going to managed devices but the desktop interface can have a rich future ahead of it, if we are willing to build it.

Second, I hope you come away with some ideas for how we can move the desktop forward. We are going to live in the future we make so we'd better make a good one. I understand it's going to be difficult, but this really needs to happen. We must accept that people are reluctant to change, but do it anyway. If you are right then we will win in the end. (or at least our ideas will).

Progress comes not from inventing new answers, but from discovering new questions. -- some guy

I am bored of technology. As you might guess, this is kind of a problem for someone who is a professional technologist. Sadly, I can't help it. I spent five years working on advanced GUI toolkits, then three working on cutting edge smartphones. As I watched the parade of CES announcements this month I found myself being simply, well… bored. Bigger TVs. Faster smartphones. YouTwitFace social networking integrated into everything. Nothing genuinely new. Nothing to really get me excited. The last thing that really made me say 'wow: this is the future' was the first demos of Xbox Kinect, which sadly have yet to live up to their potential.

What is wrong? We live in an age of computing abundance. My Roku connected TV can stream shows and music from the last fifty years, plus play Galaga. I have five smart phones, each more powerful than a top of the line desktop from the mid 2000s. I can video chat with family two thousand miles away. Clearly we live in the future. So why am I so bored with it all?

I think I am unimpressed because these are technologies I have long expected to be here. Since I was a kid I assumed we would have faster computers, video phones, and ever smaller gadgets. Today's smart phone is merely the latest version of the PalmPilots and Newtons I played with nearly twenty years ago. That they have finally arrived in fairly usable form is not a triumph, but merely expected.

There are only two things that seem interesting to me right now. First is the Raspberry Pi. The Pi is very underpowered by modern standards. A 700MHz CPU with 512MB of RAM seems paltry, but combined with an insanely powerful GPU you get an amazing computer for $35. Never before has this been possible. A change in quantity, if large enough, can become a change in quality. The Pi feels like that to me. But…

Software on the Raspberry Pi still feels slow. Compared to what I had a few years ago it should be massively fast. Is our software simply too crufty and bloated to run efficiently? The Pi should be the new baseline for software. Your app should run smoothly on this computer, and it will run even better everywhere else.

There is one other thing that interests me: my 19 month old son. As I see him explore the world and discover language I once again feel the wonder from my own childhood. The pure joy of learning new things is infectious. Perhaps that is why I find myself again looking for the 'new' in the technology realm.

So, I am searching; and researching. I've spent the last few months looking at computer science papers from the 70s to the present. It's depressing to see how every new programming technology has existed for at least 30 years. I've also been scouring blogs, Reddit, used book stores, and anything else I can find in my quest to answer the question: What is next? What seems futuristic now but will seem obvious in a decade? What will replace social networking and gamification as the next wave to drive the industry forward? What new programming concept will finally help us make better software?

New Questions

If you are hoping for me to give you answers, I'm afraid I will disappoint you. My crystal ball reveals no silver bullets or shiny trinkets from the future. I cannot tell you what life will be like in a decade. I can only offer a few thoughts on what we should be building now, so that we might live in a future so packed full of technology it will bore us to tears as much as the present does. These are the questions we should be asking.

Can multi-processor computers change our lives?

I recently reread some of the original papers around Smalltalk and the Dynabook. The belief at the time was that personal access to high speed computing technology would change how we live. The following thirty years have shown this belief to be true; but are we nearing the end of this transformation?

It is now generally accepted that the future of Moore's law is to have parallel CPUs rather than faster ones. This is great for server-side developers. The everyday programmer can now finally use the last thirty years of research in parallel computation. However, the desktop computing experience hasn't really changed. My laptop has four cores, and yet I still perform the same tasks that I did a decade ago.

The real question: Are there tasks which local parallel computation makes possible that would change our lives? What new thing can I do with my home computer that simply wasn't possible ten years ago? Hollywood of the 90s tells us we should be doing advanced image analysis and global visualizations with our speedy multi-core processors through holographic screens. Reality has turned out less exciting: Farmville. Our computers may be ten times faster, but that doesn't seem to have actually made them better.

How can we replace C?

I can't believe we will use C forever. Surely the operating system on the Starship Enterprise wasn't written in C, and yet I see no way to replace it. This makes me a sad panda.

I hate C. Actually, I don't hate C: the language. It's limited but good at what it does. Rather, I hate C compilers. I hate the macro processor, I hate header files. I hate the entire way C code is produced and managed. Try porting an ARM wireless driver across distros and you will agree. C code doesn't scale cleanly. And yet we have no alternatives? Why?

I think the key problem is the C ABI. I could write a system kernel or library in a higher level language, but to interoperate with the rest of the system I must produce a binary blob compatible with the C ABI. This means advanced constructs like objects can't be exposed. Library linking is further complicated by garbage collection. If one side of a function call is using GC and the other is not, then who is in charge of cleaning up allocated memory? With C it is simple. A linked library is no different than if you had included the code directly in your app. With a GC'd language that library now comes with its own runtime and background processes that must be managed.

Header files don't help either. If I wish to call C code from a non-C language I must parse the entire header file, or hack it in through some language-specific FFI. Since .H files are essentially Turing complete, they must be processed exactly the way a C compiler would process them, and then I have to predict how the compiler generated the original binary. Why doesn't the compiled binary contain this information, instead of me having to reverse engineer it from a macro language?

All languages provide a way to link to the C ABI. So if you want to build a library that can be reused by other languages, you have to write it in C. So all libraries are built this way. Which means all new systems link only to the C ABI. And any new languages which want to be linked from other systems compile down to C. You could never build an OS in Go or Ruby because nothing else could link to the modern structures they generate. As long as the C ABI is the center of the computing universe we will be trapped in this cycle.

There must be a way out. Surely these are not insoluble problems, but we have yet to solve them. Instead we pile more layers of abstraction on top. I'm afraid I don't know the answer. I just know it is something we must eventually solve.

How can we reason about software as a whole?

I'll get into this more in a future blog, but the summary is this. Too much effort is spent trying to improve programming languages themselves rather than the ecosystem around them. I've never felt like lack of concurrency primitives or poor type systems were the things preventing me from building amazing software. The real challenges are more mundane problems like trying to upgrade a ten year old database when an unknown number of systems depend on it. The problems we face in the real world seem hopelessly out of sync with the research community. What I want are tools which let us reason about software in the large. All software. All of it.

Imagine a magic database which contained all of the source to the codebase you are working on, in every revision, and with every commit log. Furthermore this database understands every programming language, data format, and config file you use. Finally it also contains the code and history of every open source project ever created. (I said it's magic, remember). What useful questions could you ask such a database? How about:

  • Is library X integrated, or is it really a collection of classes in several groupings that could be sliced apart, and which classes should we target? The Apache Java libraries could really benefit from this.
  • Is there another open source library which could replace this one, and meets the platform, language, and memory dependencies of the rest of my system?
  • How many projects really use method X of library Y? Would changing it be a big deal?
  • What coding patterns are most repeated in a full Linux distro? How many packages would have to change to centralize this code, and how much memory would it save?

We need ways to reason about our software from the metal to the cloud, not just better type systems. It would be like having a profiler for the entire world.

How can we make 10x denser batteries?

While not directly software related, batteries impact everything. I'm not talking about our usual 5%-a-year improvements. I mean 10x better. This requires fundamentally new technology. It may seem mundane, but massively denser batteries change everything. It becomes possible to make power in one part of the country (say, in a protected nuclear plant in the desert) and literally ship the power to its destination in trucks.

Want a flying car? 10x batteries make it possible. Modern sensors and CPUs make self flying cars possible; we just need 10x power density to sustain a flight longer than a few minutes. Everything is affected by power density: cars, smart homes, super fast rail, electric supersonic airplanes. Want to save the environment? 10x better batteries do it. Give the world clean water? 10x better batteries.

How can we put an MRI in your shower?

This may sound like a bit of an odd request, but it's another technology that would change the way we live. Many cancers can't be detected until they are big enough to have already caused serious harm. A tiny spot of cancer is hard to find in a full body scan, even with computer assisted image recognition. But imagine you could have a scan of your body taken every day. A computer could easily recognize if a tiny spot has grown bigger over the course of a week, and pinpoint the exact location it started. The solution to cancer, and so many other diseases, is early detection through constant monitoring.

Whenever you see your doctor with an ailment, he goes through a list of symptoms and orders a few tests. The end result is a diagnosis with a certain probability of being true; often a probability far lower than you might expect. Constant full body monitoring changes this equation. Feeling a pain in your side? Looking through day-by-day stats from the last year can easily tell if it's a sign of kidney stones or just the bad pizza you ate last night.

Constant monitoring only works if it is cheap, so cheap that you can afford to do it every day, and automatic so that you actually do it every day. One solution? An MRI equivalent built into your shower. When you take a shower a voice asks you to stand still for 10 seconds while it performs a quick scan. That's it. One scan, every day, could eliminate endless diseases.

Better Questions

As I said at the start, these are just ideas. They aren't prognostications of a future ten years from now. They are simply things we should be working on instead of building the next social network for sharing clips and watching ads. If you want to change the world, ask some bigger questions.

Last night I declared Internet Bankruptcy.

As has grown increasingly clear this year, I simply don't have the time to keep up with the info-deluge that constitutes my morning routine: coffee. Email. RSS feeds. Facebook. Twitter. More email yet again. Second coffee.

Bedtime is a mirror image: Brush teeth. News feeds. If sleep has not arrived then I turn to custom news apps for Reddit, The Verge, Hacker News, and more. It never ends.

Always behind, always frustrated. Spending hours keeping up with news in a rapidly changing industry. I need to stop. Keeping up is a futile exercise. There is simply too much. And my time is better spent keeping the house clean, playing with Jesse, or, you know, actually working. Time to change.

I've declared internet bankruptcy. It's the only way. I hope you will forgive me. While I'm not completely unsubscribing from everything, I've greatly pruned back my social networking and news reading accounts. Here's the damage:

Facebook, Twitter and Google+ are still present, but I massively pruned back the friends/followers/circles/etc.

News feeds. The bane of my existence. Nuke it from orbit. It's the only way to be sure.

I unsubscribed from all of my RSS feeds on Google Reader, about 500ish feeds. I'm sure I'll be slowly adding some important ones back over the coming year, but I have to start fresh.

Google Plus: Somehow I had two Google accounts. If you were following me on G+ with joshmarinacci@gmail.com you'll need to switch to my main account. I actually deleted the gmail account; in the process discovering just how much data Google tracks about me across their many properties. Now they only have half as much.

Pinterest: I can't figure out how to actually delete my account, so I deleted the app and turned off email notifications.

LinkedIn: Oddly, no changes. They give me the right amount of information without overloading my inbox.

Apps: Deleted the many site specific apps on my phone, including The Verge, Blue Alien (Reddit), and Hacker News. I left MagPi and The Magazine because they are only updated every few weeks.

I think that's it. At least for now. We'll see what the next few days hold. If I unfriended/unfollowed/untracked you, please don't take it personally. I simply have to shrink my daily e-footprint. We thank you for your support.