Tim Bird is certainly no stranger to the stage at the Embedded
Linux Conference; as a longtime organizer of the conference, he has
introduced many keynotes over the years, but he hasn't given a keynote
talk himself since 2005. That changed in 2014, when Bird gave a
thought-provoking talk with a conclusion that will probably surprise
some: he thinks a "good" fork of the kernel is the right approach to
meld Linux and the "Internet of Things". The talk leading up to that
conclusion was both wide-ranging and entertaining.
The industry has changed just a bit since 2005, he said. There are now
approximately 1.5–2 billion ("with a B") Linux devices worldwide. He has
been thinking about how to get Linux into the next 9 billion devices. Using the movie Inception as a bit of
inspiration, he said
that he wanted to try to inject some ideas into the minds of those in attendance.
He started with open source, which, at its core, is about software that can
be freely used and shared by anyone. Those freedoms are guaranteed by the GPL, but
there are other licenses that strike different balances between the rights
of users and developers. The core idea behind the GPL is that developers
publish the
derivative software they create.
But just publishing the code is not enough, he said. It is important to
build a community around the code, and that can't come from just releasing
a tarball. A community has mailing lists, IRC channels, and conferences
where it shares ideas. That community will then build up a bunch of
technologies that get shared by all.
Network effects
That is an example of a "network effect", he said. We have an intuitive
feel for how powerful network effects are, but some examples will help make
that more clear. The first example that is always cited for explaining
network effects is the phone network, Bird said, so he would start
there as well. Each new phone added to the network will infinitesimally
increase the value of all the phones already in the network. Essentially,
the value of the system increases as you add to it. That is a network
effect in action.
Another example is the battle over the desktop. Microsoft won that battle
because it had the most users, which meant it had the most application
developers, which brought in even more users. It was a "virtuous cycle for
them", he said. We have seen the exact same thing play out in the Android
vs. iOS battle. The number of apps in the app store was the focus of much
of the coverage of that battle. That's because the number of apps
available really affects the perceived value of the platform to users.
Format wars are another place where network effects come into play. The
VHS vs. Betamax format war or the more recent HD DVD vs. Blu-ray war are
good examples. The technical features of the formats were "almost
inconsequential" to the outcome of the battle. In the latter case, he was
convinced that HD DVD and Blu-ray would continue fighting it out "forever",
but after Warner Bros. announced it would release its titles exclusively on Blu-ray,
the battle was over in a matter of weeks. That announcement was enough to
tip the network effects toward Blu-ray, and HD DVD proponents capitulated
quickly after that.
Network effects are everywhere, he said, and "all large companies are
trying to leverage network effects". He recalled how Google became such a
dominant player. Originally, it was battling with Yahoo, which had a
different approach toward indexing the content of the internet. Yahoo
created a hierarchical set of bookmarks, while Google just had "pure
search". Google foresaw that as the internet grew larger, Yahoo's approach
would eventually fail. Everyone who added something to the internet in
those days was infinitesimally hurting Yahoo, while each new third-party
site effectively helped Google.
Companies will spend billions to win a format war, he said. He works for
Sony, so he was interested to watch what happened with the
PlayStation 3. It shipped with a Blu-ray player as part of the game
console, which made it more expensive than competitors as well as later to
market. Sony almost lost the console wars because of that, but adding Blu-ray helped
tip the scales toward that format. It turned out that the
PlayStation 3 was a "great Blu-ray player", so when that format won,
it "pulled" the console back into the market.
Network effects have great "explanatory powers", he said. They explain
format wars, but they also explain the subsidies that companies are willing
to pour into their products. Those subsidies allow us to get "so much free
stuff", but companies do it for the network effects. Adobe is a perfect
example of that. It has both viewing tools and authoring tools for formats
like Flash and PDF; eventually it figured out that giving away the viewing
tools helped sell the authoring tools by way of network effects. In
addition, network effects partly explain "fanboy" behavior ("though nothing
can completely explain fanboys", he said with a chuckle). People act
irrationally about their platform of choice because it is important to get
more people onto that platform; doing so makes the platform more valuable
to the fanboys.
Open source and embedded
Open-source software is yet another example of network effects. Other
developers write software that you use, which creates more value for you
and for them, which in turn makes it more likely that more gets written. It also creates
an ecosystem with books, training, tools, jobs, conferences, and so on
around the software, which reinforces those effects.
But the "community" is not really a single community. For example, the
kernel community is composed of many different sub-communities, for
networking, scheduling, filesystems, etc. One day he will have his "USB
hat" on, but on another he will be talking about the scheduler. One outcome
of the network effects created by projects is that they push efforts in the
direction of more generalized software, which doesn't work quite as well as
software that is perfectly customized for the job at hand. But the
generalization brings in more users and developers to increase the network effects.
Embedded devices are those that have a dedicated function. Mobile phones
are no longer embedded devices; they are, instead, platforms. Most of
the embedded work these days is using Linux, which is a general-purpose
operating system, and most of those devices are running on general-purpose
hardware. Silicon vendors are tossing everything they can think of onto
systems-on-chip (SoCs).
He used to work on the linux-tiny
project, which tried to keep the
footprint of Linux small. Today, though, the smallest DRAM you can buy is
32M, so he doesn't really worry about linux-tiny any more. He also noted
that he heard of an SoC that sold for the same price whether it had three
cores or nine: "we just throw away silicon now".
In his work on cameras at Sony, there was a requirement to boot the kernel
in one second. To do that, he had to take out a bunch of Linux
functionality. For example, there is a cost for loading modules at boot
time, so he would statically link the needed modules to remove that runtime
cost. But that was removing a more general feature to "respecialize" Linux
for the device.
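
To make that trade-off concrete, here is a hedged sketch of the kind of
kernel configuration change involved (a hypothetical fragment, not Sony's
actual camera configuration): with loadable-module support disabled, the
module loader is compiled out entirely, and anything the device needs must
be built into the image statically.

    # Hypothetical .config fragment; not from Sony's cameras.
    # With CONFIG_MODULES off, the module loader is compiled out
    # and there is no module-loading cost at boot time.
    # CONFIG_MODULES is not set
    # Drivers and subsystems the device needs are then linked in:
    CONFIG_EXT4_FS=y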
No keynote would be complete without a rant about device tree, he said with
a laugh. His chief complaint about device tree is that it makes it hard to
specialize the kernel for a particular device. The whole idea is to
support a single kernel image for multiple SoCs, so there are "gobs and
gobs of code" to parse the tree at runtime. That also leaves lots of dead
code that doesn't get used by a particular SoC "hanging around" in the
image. When he tried to do some link-time optimization (LTO) of the kernel
image, he couldn't
make any real gains because of device tree.
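
As a rough illustration of what that runtime parsing looks like, here is a
minimal sketch of a driver probe routine that queries the tree at boot
rather than having its values compiled in. The of_*() calls are the
standard kernel API; the "acme,uart" device and its driver are hypothetical.

    /* Minimal sketch of runtime device-tree parsing; the "acme,uart"
     * device is hypothetical, the of_*() calls are the standard API. */
    #include <linux/module.h>
    #include <linux/of.h>
    #include <linux/platform_device.h>

    static int acme_uart_probe(struct platform_device *pdev)
    {
            u32 freq;

            /* String matching and tree walking done at runtime: the
             * generality (and the cost) that a fully specialized
             * kernel could have compiled away. */
            if (of_property_read_u32(pdev->dev.of_node,
                                     "clock-frequency", &freq))
                    return -EINVAL;

            dev_info(&pdev->dev, "clock at %u Hz\n", freq);
            return 0;
    }

    static const struct of_device_id acme_uart_of_match[] = {
            { .compatible = "acme,uart" },   /* hypothetical device */
            { }
    };
    MODULE_DEVICE_TABLE(of, acme_uart_of_match);

    static struct platform_driver acme_uart_driver = {
            .probe = acme_uart_probe,
            .driver = {
                    .name = "acme-uart",
                    .of_match_table = acme_uart_of_match,
            },
    };
    module_platform_driver(acme_uart_driver);
    MODULE_LICENSE("GPL");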
But device tree builds network effects. It has made code that used to live
deep inside an SoC tree visible to more people. It has also exposed the IP
blocks that are often shared between SoCs so that a single driver can be
used to access that hardware. That makes for a better driver because more
people are working with the code.
Subtractive engineering
But the "Internet of Things" (IoT) changes the whole equation. We want computers
in our cars, light switches, clothes, and maybe even food, he said. To do
that, we won't be putting $50 processors into all of those things, instead
we will want ten-cent processors that will run Linux. He showed a
hypothetical cereal box with a display that showed it booting Linux. When
cereal companies put a toy into the box, they spend around $1 on that toy,
could we get to a display and processor running Linux for $1?
If we want to get there, he asked, how would we go about it? Linux is too
big and too power-hungry to run on that kind of system today. But earlier
versions of Linux (0.11, say) could run in 2M. Is Linux modular enough to
be cut down for applications like the cereal box? He showed a picture of a
Lego crane that a friend of his had built. It had custom parts and gear
boxes, and could operate like a real crane. But if we wanted to build a
small car, it probably wouldn't make sense to strip the crane down into a
car; instead, we would start with a small Lego kit.
If you want a "Linux" that is essentially just the scheduler and WiFi
stack, that is quite difficult to do today. All sorts of "extras" come
with those components, including, for example, the crypto module, when all that's really
needed are some routines to calculate packet checksums.
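
For a sense of scale, the checksum in question is tiny. Here is a hedged,
self-contained sketch of the RFC 1071 Internet checksum, written as plain
userspace C rather than the kernel's own implementation:

    /* The RFC 1071 Internet checksum: a few lines of arithmetic, as
     * opposed to the whole crypto subsystem. A plain userspace
     * sketch, not the kernel's actual implementation. */
    #include <stddef.h>
    #include <stdint.h>

    uint16_t inet_checksum(const void *data, size_t len)
    {
            const uint16_t *p = data;
            uint32_t sum = 0;

            while (len > 1) {               /* sum 16-bit words */
                    sum += *p++;
                    len -= 2;
            }
            if (len)                        /* odd trailing byte */
                    sum += *(const uint8_t *)p;

            while (sum >> 16)               /* fold carries back in */
                    sum = (sum & 0xffff) + (sum >> 16);

            return (uint16_t)~sum;
    }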
When thinking back on his eleven years at Sony, Bird was "shocked" to
realize how much of that time he has spent on "subtractive engineering".
His work on linux-tiny and on boot-time reduction was all subtractive. In
fact, "my job was to take Linux out of the cameras", he said.
It is more difficult to remove things from a system like Linux than it is
to build something up from scratch. This subtractive method is not the way
to get Linux into these IoT devices, he said. In addition, if you slim
Linux down too far, "you don't have Linux any more". No one else will be
running your "Linux", and no one will be developing software for it. You
will have lost the network effects.
Fork it
So there is a seeming paradox between the needs of a low-end system and the
need to maintain the network effects that make Linux so powerful. His
suggestion is to "fork the kernel". That might make folks scratch their
heads, but there are "good" forks and "bad" forks, he said.
One of the big concerns with forks is fragmentation. Many will remember
the "Unix wars" (and generally not fondly). Each of the Unix vendors went
out to build up its user base by adding features specific to one version of
Unix. But all they managed to accomplish was to split the community
multiple times, so that eventually it was so tiny that Windows was able to
"swoop in" and capture most of the market, Bird said.
We are still living with the effects of that fragmentation today. The Unix
vendors eventually realized the problems caused by the fragmentation and so
efforts like POSIX and autotools came about to try to combat them. "Every
time you run a configure script, you are a victim of the Unix wars", Bird
said to audience laughter.
But there is "good fragmentation" too. It has happened in the history of
Linux. For example, we can run Linux today on Cortex-M3 processors because
the uClinux project forked the kernel
to make it run on systems without a memory management unit (MMU). The hard
part of a fork is reabsorbing it back into the kernel, but that's what
happened with no-MMU. It took many years of hard work, but the no-MMU
kernel was eventually folded back into the mainline.
The uClinux folks didn't fork the community, they just forked a bit of the
technology. The same thing has happened with Android in recent times.
That project went off and did its own thing with the kernel, but much of
that work is being pulled back into the mainline today. Because of what
Android did, we have a bigger network today that includes both traditional
embedded Linux and Android.
But don't just take his word for it, Bird said. He quoted from a May 2000
"Ask Linus" column wherein Linus Torvalds said that Linux forks targeting a
new market where Linux does not have a presence actually make a lot of
sense.
The IoT is just such a new market, and we need a "new base camp"
from which to attack it. As he said at the outset, he was just trying to
implant ideas into the heads of the assembled embedded developers. He did
not have specific suggestions on how to go about forking the kernel or
what the next steps should be: "I leave it up
to you". The key will be to figure out a way to fork Linux but to keep the
network effects so that "forking can equal growth". In Bird's opinion, that
is how we should "attack" getting Linux onto those next 9 billion devices.