Why don't we have Wayland yet?!
Or: The state of Linux Graphics with Raspberry Pi and Rust
I have long wanted to create my own desktop operating system targeting the Raspberry Pi. I believe we should have a hackable desktop environment that is both powerful and lightweight, similar to BeOS and the Amiga. To build this desktop we really need full control over how apps draw to the screen, while still having hardware acceleration. That means we'll need to use real graphics drivers and real graphics APIs from existing Linux desktops. When I last looked at all things Linux graphics, five years ago, everything was a mess, but with some promising signs. Over the past couple of days I've been researching the current state. Here is my report, with both good and bad news.
Roughly following the Feynman method, what follows is my mental understanding of the current state of graphics, with lots of links. I'm writing this down to clarify my thoughts and hopefully help anyone else trying to understand the ever-shifting landscape that is graphics on Linux and the Raspberry Pi.
You may have seen videos like this one or this one which show hardware accelerated super smooth graphics on the Raspberry Pi using something called Wayland. And those videos were posted in 2013 and 2015, so why can't I fetch the latest Raspbian (the default OS for the Pi) and get all the magic? And what's this I read: Raspbian is moving away from Wayland!? What happened?
# It's Story Time Boys and Girls
The Raspberry Pi is composed of two main chips. The ARM processor is the CPU, where the operating system and all user programs run. The graphics chip (GPU) is called VideoCore and contains hardware for rendering OpenGL graphics, as well as encoding and decoding video streams, scaling images quickly, and talking to HDMI screens (it was originally a chip designed for TV set-top boxes).
When the Raspberry Pi first shipped, it came with a closed source firmware blob to access the VideoCore chip. This was fine at launch, but it has created problems that have only grown over time.
First, OpenGL doesn't really work on the Pi from within X Windows, which means most users can't take advantage of the power of this GPU. You can use the GPU from the command line, but only through the fairly under-documented EGL APIs and a VideoCore-only API called Display Manager X (dispmanx), which can do hardware image and video scaling. Integrating this with standard software is difficult.
Finally, the Pi has a surprisingly powerful GPU for how cheap and old it is, but it's still pretty out of date, and the lack of source to the graphics driver means no one (other than Broadcom engineers) can fix bugs in it, improve performance, or add features.
Broadcom has fixed some bugs, but the long term solution is to get docs from Broadcom and build a new open source driver. This is exactly what happened in June of 2014, when Eric Anholt was hired by Broadcom to build just such a driver, now called VC4.
In a perfect world this would be the end of the story, but in our real world existence building a new driver is not so straightforward, because of other changes going on in the larger Linux graphics world.
# Linux Graphics History
Historically, the Linux kernel has not cared about graphics. It delegated all of that to X Windows and user-space drivers. The kernel handled only the bare minimum of controlling access to video hardware, and provided no API other than an old framebuffer (in main CPU memory) API.
This was fine in the old days of SVGA adapters, but with the advent of GPUs built into essentially every device capable of running Linux, this situation isn't tenable anymore. A modern graphics stack needs the kernel to have real drivers for the GPU, provide controlled access to GPU resources like texture memory, and provide a standard API for applications to code against.
The kernel also needs to live in the modern world, where multiple apps may need to talk to the GPU at the same time, possibly including a window manager which composites every app to the real screen. In short, by the early 2010s there was no single standard solution to all of this, yet one was desperately needed.
Further slowing down progress is X Windows. X is the graphical interface for essentially all Unix-derived desktops (all other competitors died decades ago). It was originally designed at MIT in the 1980s, long before GPUs were a real thing, and has been showing its age for at least a decade. Hack upon hack has been added to X in an effort to provide window compositing, scalable fonts, hi-dpi screens, touch input, 3D graphics, and other modern features; and these hacks are starting to collapse. Previous efforts to replace X failed long ago, but something new appeared about ten years ago: Wayland.
Wayland is a new protocol that is far simpler than X. It says nothing about how applications should draw themselves. Instead, apps draw using whatever mechanism they want into a shared chunk of memory. This memory is then given to a piece of software called a compositor which does a quick copy of the memory to the real screen. Ideally these memory chunks would be actual shared buffers, avoiding any copying. Even more ideally these memory chunks would just be textures in the GPU. The end result is super fast and smooth graphics that work with all applications, like users expect from a decade+ of compositing window managers on Mac and Windows.
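To make that core idea concrete, here is a toy Rust sketch of the client/compositor split: the client draws into its own buffer however it likes, and the compositor just blits that buffer onto the screen. Every name here is invented for illustration; this is not the real Wayland API, and the real ideal case skips even this copy by keeping the buffer on the GPU.

```rust
// Toy model of Wayland-style compositing (all names invented).

struct SharedBuffer {
    width: usize,
    height: usize,
    pixels: Vec<u32>, // ARGB pixels; stands in for a shm pool or GPU texture
}

struct Compositor {
    screen: Vec<u32>,
    width: usize,
}

impl Compositor {
    fn new(width: usize, height: usize) -> Self {
        Compositor { screen: vec![0; width * height], width }
    }

    // "Commit": blit the client's finished buffer onto the screen at (x, y).
    // In real Wayland this CPU copy ideally never happens; the buffer is a
    // GPU texture the compositor samples directly.
    fn composite(&mut self, buf: &SharedBuffer, x: usize, y: usize) {
        for row in 0..buf.height {
            for col in 0..buf.width {
                let src = buf.pixels[row * buf.width + col];
                self.screen[(y + row) * self.width + (x + col)] = src;
            }
        }
    }
}

fn main() {
    let mut comp = Compositor::new(8, 8);
    // The client draws a solid red 2x2 window entirely on its own;
    // the compositor never needs to know *how* it was drawn.
    let win = SharedBuffer { width: 2, height: 2, pixels: vec![0xFFFF0000; 4] };
    comp.composite(&win, 3, 3);
    println!("pixel at (3,3) = {:#010X}", comp.screen[3 * 8 + 3]);
}
```

The point of the sketch: the protocol only needs to say "here is my finished buffer", which is why Wayland can stay so much simpler than X.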
X Windows has the concept of a window manager, which draws the window decorations like title bars and close buttons, as well as deciding which window has focus. X Windows also has the concept of an X server, which listens for connections from apps (the clients, in X's famously backwards terminology, even though the server is the thing sitting on your desk) and draws on the screen (most of the time). In Wayland these concepts are combined into a single thing called the compositing server. Wayland is the name of the protocol itself. Weston is the name of the reference implementation of the protocol, in the form of a simple compositing server. There are other compositors you can run as well, some even written in Rust (which we'll get to later).
So Wayland is the future, right? Well, not so fast. It's been in development for a decade now and hasn't replaced X as the default in most major Linux distributions. Why is it taking so long?
# Progress Is Hard
First of all, existing X applications can't work with the Wayland protocol directly; they must be modified. Most common X applications actually use the GTK or Qt toolkits, so porting those toolkits over will make most apps work out of the box. Getting GTK and Qt working with Wayland has taken a long time, but most of the work seems to be done now.
For the remaining X apps that don't use a ported toolkit there is something called XWayland which provides a compatibility shim. It essentially creates a tiny X server that these apps connect to, which then forwards the drawing over Wayland.
With that out of the way we still need a standard way for applications to share memory with the compositor. Currently there are three ways.
- You can use shared main-memory buffers through the standard shared memory extensions, but this is slow. Either you draw everything in software, losing the benefit of hardware acceleration, or you must draw with the GPU, copy from GPU RAM to main memory, transfer to the compositor, copy from main memory back to the GPU as a texture, then draw for real on the screen. As you can imagine, this is very sloooow.
- Another option is GBM. If your application already uses EGL (the Khronos API that binds OpenGL ES to the native window system, common on mobile devices) then you can render to a special image texture and get a handle for it. This handle can be sent to the compositor, which resolves it back to the texture and draws with it. With this method the actual image is never copied anywhere, only referenced by its handle. Getting this to work everywhere has taken time, but it now works for most GPUs except Nvidia.
- Nvidia has its own system called EGLStreams, but I think it works in a similar way. Apparently there is still a battle between Nvidia and everyone else.
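To see why the handle-passing route in the second option is such a win, here is a toy Rust sketch. Nothing here is the real GBM or EGL API; the "GPU" is just a table of textures, and the only thing that crosses the client/compositor boundary is a small integer handle, never the pixels.

```rust
// Toy model of zero-copy buffer sharing via handles (all names invented).

use std::collections::HashMap;

type Handle = u32;

// Stand-in for GPU memory: textures live here and are never copied out.
struct Gpu {
    textures: HashMap<Handle, Vec<u32>>,
    next: Handle,
}

impl Gpu {
    fn new() -> Self {
        Gpu { textures: HashMap::new(), next: 1 }
    }

    // Client side: render into a texture, get back a small handle.
    fn render(&mut self, pixels: Vec<u32>) -> Handle {
        let h = self.next;
        self.next += 1;
        self.textures.insert(h, pixels);
        h // only this integer gets sent over the Wayland socket
    }

    // Compositor side: resolve the handle back to the texture and draw it.
    fn texture(&self, h: Handle) -> Option<&Vec<u32>> {
        self.textures.get(&h)
    }
}

fn main() {
    let mut gpu = Gpu::new();
    // Client draws a 2x2 green buffer; 4 bytes of handle cross the wire,
    // not 16 bytes of pixels (and real buffers are megabytes, not bytes).
    let handle = gpu.render(vec![0xFF00FF00; 4]);
    let tex = gpu.texture(handle).expect("handle should resolve");
    println!("compositor sees {} pixels via handle {}", tex.len(), handle);
}
```

In the real stack the handle is a kernel-level buffer name or file descriptor, but the shape of the trade is the same: shared memory costs two full copies per frame, a handle costs almost nothing.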
# But wait, there's more!
Moving the drawing buffers around is only part of the challenge of replacing X. There's also mode setting. This is when the computer detects and configures the attached screen. For something like the Raspberry Pi this could be anything from a tiny 320x200 LCD over a minimal protocol like DSI, to a full HDTV screen over HDMI. All of this work is currently done by three decades of code in X. That has to be recreated.
It turns out the people who want to embed Linux into cars already knew this was a problem and have pushed for KMS, the Kernel Mode Setting system. So that's more work to be done, but when it's complete all of this device management will be where it belongs, in the kernel.
And another thing: DRM. No, not digital rights management, but rather the Direct Rendering Manager. This is essentially the official way to write graphics drivers for Linux, comprising a huge set of specs and documentation. It encompasses GBM, KMS, and a few other TLAs (three-letter acronyms).
# Give me all your inputs
Let's see. What else, what else. Input. Oh, right, you actually wanted to type on your computer? Historically there have been many different ways to provide input to desktop Linux, including USB HID, evdev, serial protocols, and others. I'm still a bit confused on this point, but I think the world has begun standardizing on evdev (a kernel API), exposed to compositors through libinput. I think. In any case, it's a lot of work to make every device that worked under X function properly under Wayland.
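For a concrete taste of the evdev side of this, each read from a `/dev/input/eventN` device returns fixed-size `input_event` records. Below is a hedged Rust sketch that decodes one such record; it assumes a 64-bit little-endian layout where the record is 24 bytes (8-byte seconds, 8-byte microseconds, 2-byte type, 2-byte code, 4-byte value). On 32-bit Raspbian the timestamps are 4 bytes each, so the record is 16 bytes instead; treat this as an illustration of the decoding, not a portable parser.

```rust
// Decode one evdev `struct input_event` record (64-bit LE layout assumed).

use std::convert::TryInto;

fn parse_event(raw: &[u8; 24]) -> (u64, u64, u16, u16, i32) {
    let sec = u64::from_le_bytes(raw[0..8].try_into().unwrap());
    let usec = u64::from_le_bytes(raw[8..16].try_into().unwrap());
    let ev_type = u16::from_le_bytes(raw[16..18].try_into().unwrap());
    let code = u16::from_le_bytes(raw[18..20].try_into().unwrap());
    let value = i32::from_le_bytes(raw[20..24].try_into().unwrap());
    (sec, usec, ev_type, code, value)
}

fn main() {
    // A fabricated record: EV_KEY (0x01), KEY_A (30), value 1 (key press).
    let mut raw = [0u8; 24];
    raw[16..18].copy_from_slice(&1u16.to_le_bytes());  // type = EV_KEY
    raw[18..20].copy_from_slice(&30u16.to_le_bytes()); // code = KEY_A
    raw[20..24].copy_from_slice(&1i32.to_le_bytes());  // value = press
    let (_, _, ev_type, code, value) = parse_event(&raw);
    println!("type={} code={} value={}", ev_type, code, value);
}
```

Libinput's whole job is to sit above this raw stream and turn it into sensible pointer, keyboard, and touch events, which is exactly the layer X used to provide with its own drivers.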
And just because you have Qt and GTK working with Wayland doesn't necessarily mean you have the most important application category ever: the web browser. Web browsers typically have deep hooks into the operating systems they run on, so porting them over requires a lot of deep changes. They care deeply about font stacks and handling international input methods that are (supposed to be) standard across the OS. This is a lot of work before everything functions correctly. Chromium sort of works today, I think. Firefox is getting close but still has some open bugs. My best guess is that people still run these under XWayland rather than true Wayland, which is great for compatibility but means they don't get the benefits of hardware acceleration yet.
# So What Works on the Raspberry Pi?
There was an effort to make Wayland work with a shim on top of the closed source dispmanx API [link]. It would at least give us fast image scaling for free, but dispmanx is under-documented, closed source, and very limited. Not a good base to build on, so this project was dropped.
Eric Anholt has been diligently working on the new VC4 driver for the past three years. Last summer he was also hired by Broadcom to work on a driver for VC5, a newer chip that is not used in the Raspberry Pi (yet?). The chips are similar so a lot of work is shared between them.
Eric's work has started to pay off. From the command line you can now run Wayland applications with the Weston compositor on the Raspberry Pi. Some things are faster than the old graphics driver and some are slower. In terms of performance it's a wash or worse, but the future looks bright. He's been working on getting hardware accelerated video decoding to work through the new system. He's also been getting all of the mode setting stuff working, with support for non-standard displays like those cute little LCD screens that Adafruit sells. And most importantly, this is all clean open source code that can be debugged and improved by the community. Furthermore, as much as possible is being upstreamed into the mainline Linux kernel, so it will be officially supported and never fall out of date. You can keep up on all of Eric's great work on his devblog.
# What about those Pi Demos?
In 2013 and 2014 the Raspberry Pi Foundation previewed Wayland running on the Raspberry Pi. At the time they were commissioning Collabora to build a new lightweight shell that would run over Wayland. They showed a video of windows smoothly zooming in and out using this shell, called Maynard. So what happened?
The GitHub repo shows no commits in two years, with the bulk of the work done four years ago. Looking at the build instructions, I'm pretty sure Maynard used the HVS (hardware video scaler) with dispmanx, but not the rest of the GPU in the Pi. This meant it could zoom windows around but would never give us nice 3D graphics mixed with the rest of the OS.
I found this news article: Wayland's Weston Nukes Its Raspberry Pi Backend/Renderer from 2016 saying that the Raspberry Pi backend was removed from the Weston codebase. The article seems to confirm that it was using dispmanx. It also mentions Eric's open source VC4 driver is the way forward. This other article from Collabora in 2016 talks about Wayland running on VC4, and that seems to be the last update from them.
I also found this article from just a couple of weeks ago about how the VC4 driver is being updated to create a "More Normal" media stack.
So I think that Raspbian has abandoned its efforts to move to Wayland until Debian proper (the upstream distro that Raspbian is based on) fully moves to Wayland and the VC4 driver is stable. According to Simon Long in August of 2017:
> No, as we have explained before, work on using the desktop with Wayland has been shelved indefinitely. [link]

> LXDE is a mature, working, stable desktop environment which does everything we need it to. This notion of “dead ends” is not one to which I subscribe – what you call a “dead end” is what I call “stable code”. We investigated Wayland several years ago and decided that it didn’t do anything we needed and was too bleeding-edge to move to. Until such time as we need to do something that LXDE doesn’t handle – and I don’t anticipate that being any time soon – we will stick with the desktop environment that I have now spent over three years tweaking, debugging and improving. “Newer” is not the same as “better”, certainly not in the world of software. [link]
# In the meantime...
So what can I do now?
From the terminal you can start Weston and then run other apps which support Wayland. You can install KDE with Wayland on Raspbian, or follow these instructions from Collabora, though they are two years old. Or you can get Arch Linux, which has it. Rasterman has shown that it runs beautifully with Enlightenment.
I tried installing it on my Raspberry Pi 2 with Raspbian by just switching to the VC4 driver with raspi-config, then apt-get installing weston, and it worked. Up comes a simple window manager with a button to open terminals. Unfortunately there are flickering black lines when you drag windows, and resizing is very laggy. Plus the screen blanks every few seconds.
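For reference, the steps were roughly these (menu names reproduced from memory, so treat this as a sketch rather than exact instructions):

```shell
sudo raspi-config            # Advanced Options -> GL Driver -> enable the VC4 driver
sudo apt-get update
sudo apt-get install weston  # installs the (old) packaged Weston
weston                       # start the compositor from the console
```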
The problem? The standard Raspbian repo still has an ancient version of Weston, built nearly three years ago. Why is Weston still in the repo but without a newer build?
I couldn't find prebuilt binaries for anything later, so I followed the instructions here to build it from source. After many hours of git clones and compiling, I still couldn't get it to work. The build completed without error, but it didn't generate ~/Wayland/install/bin/weston. I tried to build weston manually by cd-ing to its directory and running autoconf, and it failed with:
```
checking for EGL... no
configure: error: Package requirements (egl glesv2) were not met:

No package 'egl' found
No package 'glesv2' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables EGL_CFLAGS
and EGL_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.
```
The usual fix would be `sudo apt-get install libraspberrypi-dev`, but it's already installed. Lovely. And I can see the EGL and GLESv2 headers on disk, but the build still can't see them. Lovely once again. I hate C/C++ toolchains. I hate them. I hate them with the burning passion of a thousand fiery suns! I'm done trying to build it. I hate C. Seriously, you guys. Seriously. Make, and autoconf, and GCC are a brittle mess from the primordial ooze of computing that needs to be taken out back and shot.
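For what it's worth, the error above is pkg-config saying it can't find `egl.pc` and `glesv2.pc` files, not that the headers are missing. What I would try next (untested guesswork on my part) is pointing pkg-config at the prefix the build instructions install into, ~/Wayland/install, before re-running configure:

```shell
# Point pkg-config (and the runtime linker) at the local install prefix.
# The $WLD variable name and the prefix come from the build instructions;
# whether the freshly built stack actually installed egl.pc there is an
# assumption I couldn't verify.
export WLD="$HOME/Wayland/install"
export PKG_CONFIG_PATH="$WLD/lib/pkgconfig:$WLD/share/pkgconfig"
export LD_LIBRARY_PATH="$WLD/lib"
pkg-config --exists egl glesv2 && echo "configure should find EGL now" \
  || echo "egl.pc still missing; the graphics stack may not have installed"
```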
Also note that an 8GB SD card is *not* big enough to build anything in Linux. The Linux build chain is just bonkers.
Assuming everything stabilizes in the API world and I can actually get everything to compile, I would want to write my own compositor to do things how I want. What I *don't* want to do is write a bunch of C++ code.
The reference implementation of the protocol (Weston and its associated libraries) is written in C. That means you could wrap the C code with Rust, which several people have done already. However, I get the impression that the results are not very 'rustic', meaning it's like you are coding C from Rust instead of writing real Rust code. Plus you still need to get the native dependencies to compile, which brings us back to the toolchain hell from above. The Rust package manager, Cargo, is supposed to be the opposite of toolchain hell. I don't want to go back there.
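To illustrate what "not very rustic" means, here is a sketch using a fake C-style API. This is not the real libwayland; the names just mimic it so the shape is recognizable. The `raw` module mirrors what thin C bindings look like (opaque pointers, manual create/destroy, `unsafe` everywhere), while the wrapper is what an idiomatic binding adds on top: ownership and automatic cleanup via `Drop`.

```rust
// Raw layer: mimics a C API like libwayland's (invented stand-in, not FFI).
mod raw {
    // Stand-in for an opaque C struct such as wl_display.
    #[allow(non_camel_case_types)]
    pub struct wl_display { pub connected: bool }

    pub fn wl_display_connect() -> *mut wl_display {
        Box::into_raw(Box::new(wl_display { connected: true }))
    }

    // Caller must never touch the pointer again after this: pure C rules.
    pub unsafe fn wl_display_disconnect(d: *mut wl_display) {
        drop(Box::from_raw(d));
    }
}

// "Rustic" wrapper: the handle is owned, and cleanup is automatic.
pub struct Display {
    ptr: *mut raw::wl_display,
}

impl Display {
    pub fn connect() -> Display {
        Display { ptr: raw::wl_display_connect() }
    }
    pub fn is_connected(&self) -> bool {
        unsafe { (*self.ptr).connected }
    }
}

impl Drop for Display {
    fn drop(&mut self) {
        // No manual free at every call site; the type system handles it.
        unsafe { raw::wl_display_disconnect(self.ptr) }
    }
}

fn main() {
    let display = Display::connect();
    println!("connected: {}", display.is_connected());
} // `display` goes out of scope here and disconnects itself
```

Binding crates that stop at the raw layer are what feels like "coding C from Rust"; the ones that build the second layer are the ones worth using.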
To address the problems of dealing with the existing native Wayland implementations, a couple of the Wayland developers have joined together to build a new Wayland implementation called wlroots, which will be the new library underneath the tiling window managers Sway and Way Cooler. They have even started a blog series on how to write your own compositor using wlroots.
# A long answer to a short question
In summary, replacing X Windows is a decade-plus-long task because it affects far more than just the X server itself. It requires changes to apps, toolkits, the kernel, and so much more. What took thirty years to build isn't going to be replaced in a day.
Wayland on the Pi isn't quite here yet, at least under Raspbian. But it's getting close. Fedora and some of the smaller distributions have already switched to Wayland, but Debian and its popular downstream distributions, namely Raspbian and Ubuntu, will be among the last to switch. In the meantime, doing some Wayland coding with Rust seems quite viable. Time for me to update my Rusty skills.