The Future of Desktops and Design of the Workstation OS
Ten years from now, 90% of people will use something like a tablet or smartphone as their primary computing interface. The remaining 10% will use a desktop OS on something called a workstation.
In fact, I now think I may have been too conservative. Five years is more likely. I've posted several times about this market but I haven't really talked about that other 10%. What will the workstation OS of the future look like? Who will use it? Why would they choose to use it over something like a tablet computer? What new features and apps will they have? This is the first post in a series where I will explore the future of the PC and desktop applications. Over the series I'll cover what I think the future will look like and then deep dive into particular technologies and interfaces that will be required. Don't worry, I haven't forgotten about tablets. I've got more to say about that coming up soon.
Workstations vs Tablets
First, some definitions. I'm going to stop using the terms PC and desktop because they are too ambiguous. I won't use creation vs. consumption device because that introduces too many preconceived notions. Instead I've settled on the terms workstation and tablet. A workstation is something running a general-purpose operating system on a laptop or desktop. In practice this means a future version of Windows, Mac OS X, or Linux. A tablet is a device running a non-general-purpose operating system. It probably has a phone or tablet form factor, though I expect netbook form factors to follow soon. While there is clearly a lot of gray area between the two types of device, the key differences are an exposed file system, the ability to install any application, and a heavy focus on keyboard use. I'll talk about why these matter in a moment. In the long run the two types may share implementations (as the new Mac OS X Lion demonstrates), but they are still targeted at different audiences.
Why and Who?
Who would actually use these workstations if tablets have become so advanced that they can do what 90% of people want, and with far less fuss? I think workstations are for the pro users, where by pro I mean professional. These are people who use their devices directly to make money, use them for a significant amount of time per day (for work), and most importantly are willing to invest time and money in their devices to get the most out of them. They will be reasonably tech-savvy, but that doesn't mean they are super-nerds who can explore file systems and mess with printer drivers. There is work to be done, so the device needs to function perfectly and get out of the way. Who would these people be? First are programmers and web developers, obviously, since they need to directly interact with files and use advanced text-manipulation tools. Next I'd include people who use advanced content-creation tools: 3D artists, architects and engineers, video editors, technical authors. Anyone who works on large documents or structures with very sophisticated UI needs. I could also see this expanding to the business, finance, medical, and engineering fields, since all of these people process large amounts of data.
The key to all of these types of people is their need to create, process, and distribute large amounts of data in very sophisticated ways. They need interfaces that are both wide and deep. They are the knowledge workers, and they will pay a premium for a device that lets them do what they do. They are willing to use a deep interface that requires time to learn, provided they get value proportional to their investment. That last point is probably the most distinguishing feature. There is a subset of users who need professional interfaces and will take the time to learn them, but who also don't want to waste their time, because it is valuable. They will pay for good stuff, but will dump you if you are too much of a hassle. These are the knowledge workers.
What do they need?
How should we design software for the workstation? We need to focus on a core philosophy. The list below is in no way complete, so I'd love to get your feedback.
Scalability means the software and the interface scale with the task. iMovie is a great way to learn to edit video, but it doesn't scale. You hit a limit very quickly, both in the kinds of things you can do with it and in the sheer amount of media it can handle. iMovie will grind to a halt if you try to edit 100 hours of footage down to a 2-hour movie, and you'll be very frustrated with the interface. iMovie doesn't scale. Final Cut Pro does. While it has a bit of a learning curve, you can do almost anything with it. Other software I'd put in this category includes IDEs (Eclipse, IntelliJ, NetBeans, etc.), Maya (a 3D modeler), and AutoCAD. I'm sure there are more examples in other industries. Interestingly, I don't think there is a professional app for text manipulation yet (an MS Word that doesn't suck), so perhaps there is an opening for new software there.
Efficiency does not refer to CPU or memory efficiency. A modern computer has an ample supply of both (though battery life could always be longer). I'm referring to the most precious commodity of all: the human's time. Someone who uses a workstation expects their software to help them work faster and more easily. Anything which automates a task or reduces cognitive load is a very good thing. Never waste a human's time. This also applies to interaction design. Why do I have to tell my program to save open files when I quit? It's much better to just auto-save everything and restore it when I return. That reduces the time I have to spend thinking about it, so I can focus on getting my work done. Efficiency also includes shortcuts, automation tools, and filters. Anything to let me work faster.
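To make the auto-save-and-restore idea concrete, here's a minimal toy sketch in Python. The function names and the JSON session file are my own invention for illustration, not any particular application's API; a real app would save on a timer or on every change, and store the file in a platform-specific application-support directory.

```python
import json
import os

# Hypothetical location for the session state; illustration only.
SESSION_FILE = "session.json"

def save_session(state):
    """Persist the full working state without asking the user."""
    with open(SESSION_FILE, "w") as f:
        json.dump(state, f)

def restore_session():
    """On launch, silently pick up exactly where the user left off."""
    if os.path.exists(SESSION_FILE):
        with open(SESSION_FILE) as f:
            return json.load(f)
    # First run: nothing to restore, start with an empty workspace.
    return {"open_files": [], "cursor_positions": {}}
```

The point is that quitting never prompts: the app calls `save_session` itself, and the user never spends a moment thinking about it.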
Reliability should go without saying: computers must do their work properly, never slow down, and never lose data. This is even more important for the knowledge worker, since work time and usually money are at stake. Fortunately PCs have made great strides here, and Mac OS X Lion has some interesting new features along these lines. This category is mostly outside the realm of user-interaction design, however, so I won't say much more about it.
Customization is perhaps the most important of the four. Anyone who uses a tool for a long period of time makes it their own by customizing it. This is perhaps the most defining feature of being human. We integrate tools into our mental system. We modify our tools to suit our needs and give us a competitive advantage. The great painters did not use stock brushes; they either made their own or modified a stock brush to have the exact shape and flow they wanted. Many great programmers have their own specific set of dog-eared tech books and directories full of code snippets to reuse. We customize our keyboard shortcuts, put files in particular places, pin browser tabs, create bookmarks, switch wallpapers, and litter our (physical) desktops with pencil holders and sticky pads.
A customized computer is a computer well used and loved by its owner. A stock computer is a computer never used.
I've gone on long enough for today. In the next post I'll cover some kinds of interaction that will meet the needs of the knowledge worker, and show some existing examples. In the meantime, let me leave you with a few ideas to ponder:
- IDEs are some of the most sophisticated applications available thanks to highly advanced UIs (code completion, class generation, syntax highlighting), heavy integration with other tools (put a web server inside of your IDE?), and yet are almost all completely free.
- Only nerds complain that OpenID and OAuth suck for desktop applications. Why?
- File systems let you track files, not documents, and yet documents are usually what we care about. Can we do better?
- iTunes might be the most widely used pro app in the world, even if it's recently jumped the shark.