Questions We Must Ask
Progress comes not from inventing new answers, but from discovering new questions. -- some guy
I am bored of technology. As you might guess, this is kind of a problem for someone who is a professional technologist. Sadly, I can't help it. I spent five years working on advanced GUI toolkits, then three working on cutting edge smartphones. As I watched the parade of CES announcements this month I found myself being simply, well… bored. Bigger TVs. Faster smart phones. YouTwitFace social networking integrated into everything. Nothing genuinely new. Nothing to really get me excited. The last thing that really made me say 'wow: this is the future' was the first demos of XBox Kinect, which sadly have yet to live up to their potential.
What is wrong? We live in an age of computing abundance. My Roku connected TV can stream shows and music from the last fifty years, plus play Galaga. I have five smart phones, each more powerful than a top of the line desktop from the mid 2000s. I can video chat with family two thousand miles away. Clearly we live in the future. So why am I so bored with it all?
I think I am unimpressed because these are technologies I have long expected to be here. Since I was a kid I assumed we would have faster computers, video phones, and ever smaller gadgets. Today's smart phone is merely the latest version of the PalmPilots and Newtons I played with nearly twenty years ago. That they have finally arrived in fairly usable form is not a triumph, but merely expected.
There are only two things that seem interesting to me right now. First is the Raspberry Pi. The Pi is very underpowered by modern standards. A 700MHz CPU with 512MB of RAM seems paltry, but combined with an insanely powerful GPU you get an amazing computer for $35. Never before has this been possible. A change in quantity, if large enough, can become a change in quality. The Pi feels like that to me. But...
Software on the Raspberry Pi still feels slow. Compared to what I had a few years ago it should be massively fast. Is our software simply too crufty and bloated to run efficiently? The Pi should be the new baseline for software. Your app should run smoothly on this computer, and it will run even better everywhere else.
There is one other thing that interests me: my 19-month-old son. As I see him explore the world and discover language I once again feel the wonder from my own childhood. The pure joy of learning new things is infectious. Perhaps that is why I find myself again looking for the 'new' in the technology realm.
So I am searching, and researching. I've spent the last few months looking at computer science papers from the 70s to the present. It's depressing to see how every new programming technology has existed for at least 30 years. I've also been scouring blogs, Reddit, used book stores, and anything else I can find in my quest to answer the question: What is next? What seems futuristic now but will seem obvious in a decade? What will replace social networking and gamification as the next wave to drive the industry forward? What new programming concept will finally help us make better software?
If you are hoping for me to give you answers, I'm afraid I will disappoint you. My crystal ball reveals no silver bullets or shiny trinkets from the future. I cannot tell you how we will live in a decade. I can only offer a few thoughts on what we should be building now, so that we might live in a future so packed full of technology it will bore us to tears as much as the present does. These are the questions we should be asking.
Can multi-processor computers change our lives?
I recently reread some of the original papers around Smalltalk and the Dynabook. The belief at the time was that personal access to high speed computing technology would change how we live. The following thirty years have shown this belief to be true; but are we nearing the end of this transformation?
It is now generally accepted that the future of Moore's law is to have parallel CPUs rather than faster ones. This is great for server side developers. The everyday programmer can now finally use the last thirty years of research in parallel computation. However, the desktop computing experience hasn't really changed. My laptop has four cores, and yet I still perform the same tasks that I did a decade ago.
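That research is at least within reach of everyday code now. Here is a minimal sketch, in Python with only the standard library, of spreading a CPU-bound task across all local cores (the prime-counting workload is just a stand-in for any embarrassingly parallel job):

```python
# A minimal sketch of using every local core for a CPU-bound task --
# about the only way a desktop program benefits from parallel CPUs today.
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Naive prime count below `limit` -- deliberately CPU-bound."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def parallel_prime_counts(limits):
    # Each chunk of work runs in its own process, spreading across cores.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(count_primes, limits))

if __name__ == "__main__":
    print(parallel_prime_counts([10, 100, 1000]))  # -> [4, 25, 168]
```

The point is not the primes: it's that four cores only help when the work can be chopped up like this, and most of what we do on a desktop can't.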
The real question: Are there tasks which local parallel computation makes possible that would change our lives? What new thing can I do with my home computer that simply wasn't possible ten years ago? Hollywood of the 90s tells us we should be doing advanced image analysis and global visualizations with our speedy multi-core processors through holographic screens. Reality has turned out less exciting: Farmville. Our computers may be ten times faster, but that doesn't seem to have actually made them better.
How can we replace C?
I can't believe we will use C forever. Surely the operating system on the Starship Enterprise wasn't written in C, and yet I see no way to replace it. This makes me a sad panda.
I hate C. Actually, I don't hate C: the language. It's limited but good at what it does. Rather, I hate C compilers. I hate the macro processor, I hate header files. I hate the entire way C code is produced and managed. Try porting an ARM wireless driver across distros and you will agree. C code doesn't scale cleanly. And yet we have no alternatives? Why?
I think the key problem is the C ABI. I could write a system kernel or library in a higher level language, but to interoperate with the rest of the system I must produce a binary blob compatible with the C ABI. This means advanced constructs like objects can't be exposed. Library linking is further complicated by garbage collection. If one side of a function call is using GC and the other is not, then who is in charge of cleaning up allocated memory? With C it is simple. A linked library is no different than if you had included the code directly in your app. With a GC'd language that library now comes with its own runtime and background processes that must be managed.
Header files don't help either. If I wish to call C code from a non-C language I must parse the entire header file, or hack it in through some language-specific FFI. Since .h files are essentially Turing complete, they must be processed exactly as a C compiler would process them, and the caller must then predict how the compiler generated the original binary. Why doesn't the compiled binary contain this information, instead of my having to reverse engineer it from a macro language?
All languages provide a way to link to the C ABI. So if you want to build a library that can be reused by other languages, you have to write it in C. So all libraries are built this way. Which means all new systems link only to the C ABI. And any new languages which want to be linked from other systems compile down to C. You could never build an OS in Go or Ruby because nothing else could link to the modern structures they generate. As long as the C ABI is the center of the computing universe we will be trapped in this cycle.
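The cycle is easy to demonstrate from any language. A hedged sketch in Python using the standard `ctypes` module: even here, the escape hatch is the C ABI, and because the compiled binary carries no type information the caller must restate the C signature by hand, duplicating what the header file already said.

```python
# Calling into the C ABI from Python via ctypes. The shared library
# tells us nothing about its types, so we must transcribe the signature
# ourselves from the header: size_t strlen(const char *s);
import ctypes
import ctypes.util

# Fall back to the common glibc soname if find_library comes up empty.
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")

libc.strlen.argtypes = [ctypes.c_char_p]  # manually restated from string.h
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"hello, world"))  # -> 12
```

Get the `argtypes` wrong and nothing complains at load time; you just corrupt the stack at runtime. That fragility is exactly what a self-describing binary format would eliminate.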
There must be a way out. Surely these are not insoluble problems, but we have yet to solve them. Instead we pile more layers of abstraction on top. I'm afraid I don't know the answer. I just know it is something we must eventually solve.
How can we reason about software as a whole?
I'll get into this more in a future blog, but the summary is this. Too much effort is spent trying to improve programming languages themselves rather than the ecosystem around them. I've never felt like lack of concurrency primitives or poor type systems were the things preventing me from building amazing software. The real challenges are more mundane problems like trying to upgrade a ten year old database when an unknown number of systems depend on it. The problems we face in the real world seem hopelessly out of sync with the research community. What I want are tools which let us reason about software in the large. All software. All of it.
Imagine a magic database which contained all of the source to the codebase you are working on, in every revision, and with every commit log. Furthermore this database understands every programming language, data format, and config file you use. Finally it also contains the code and history of every open source project ever created. (I said it's magic, remember). What useful questions could you ask such a database? How about:
- Is library X integrated, or is it really a collection of classes in several groupings that could be sliced apart? Which classes should we target? The Apache Java libraries could really benefit from this.
- Is there another open source library which could replace this one, and meets the platform, language, and memory dependencies of the rest of my system?
- How many projects really use method X of library Y? Would changing it be a big deal?
- What coding patterns are most repeated in a full Linux distro? How many packages would have to change to centralize this code, and how much memory would it save?
We need ways to reason about our software from the metal to the cloud, not just better type systems. It would be like having a profiler for the entire world.
How can we make 10x denser batteries?
While not software related directly, batteries impact everything. I'm not talking about our usual 5%-a-year improvements. I mean 10x better. This requires fundamentally new technology. It may seem mundane, but massively denser batteries change everything. It becomes possible to make power in one part of the country (say, in a protected nuclear plant in the desert) and literally ship the power to its destination in trucks.
Want a flying car? 10x batteries make it possible. Modern sensors and CPUs make self flying cars possible, we just need 10x power density to make a flight longer than a few minutes. Everything is affected by power density: cars, smart homes, super fast rail, electric supersonic airplanes. Want to save the environment? 10x better batteries do it. Give the world clean water? 10x better batteries.
How can we put an MRI in your shower?
This may sound like a bit of an odd request, but it's another technology that would change the way we live. Many cancers can't be detected until they are big enough to have already caused serious harm. A tiny spot of cancer is hard to find in a full body scan, even with computer assisted image recognition. But imagine you could have a scan of your body taken every day. A computer could easily recognize if a tiny spot has grown bigger over the course of a week, and pinpoint the exact location it started. The solution to cancer, and so many other diseases, is early detection through constant monitoring.
Whenever you see your doctor with an ailment he goes through a list of symptoms and orders a few tests. The end result is a diagnosis with a certain probability of being true; often a probability far lower than you might expect. Constant full body monitoring changes this equation. Feeling a pain in your side? Looking through day by day stats from the last year can easily tell if it's a sign of kidney stones or just the bad pizza you ate last night.
Constant monitoring only works if it is cheap, so cheap that you can afford to do it every day, and automatic so that you actually do it every day. One solution? An MRI equivalent built into your shower. When you take a shower a voice asks you to stand still for 10 seconds while it performs a quick scan. That's it. One scan, every day, could eliminate endless diseases.
As I said at the start, these are just ideas. They aren't prognostications of a future ten years from now. They are simply things we should be working on instead of building the next social network for sharing clips and watching ads. If you want to change the world, ask some bigger questions.
posted Thu Jan 24 2013