8. Outsider



The battle between the University of California at Berkeley's computer science department and AT&T did not reach the court system until 1992, but the friction between the department's devotion to sharing and the corporation's insistence on control started long before.


While the BSD team struggled with lawyers, a free man in Finland began to write his own operating system, without any of the legal or institutional encumbrances. At the beginning he said it was a project that probably wouldn't amount to much, but only a few years later people began to joke about "Total World Domination." A few years after that, they started using the phrase seriously.


In April 1991, Linus Torvalds had a problem. He was a relatively poor university student in Finland who wanted to hack on the guts of a computer operating system. Machines running Microsoft's software were the cheapest around at the time, but they weren't very interesting. The basic Disk Operating System (DOS) essentially let one program at a time control the computer. Windows 3.1 was not much more than a graphical front end to DOS featuring pretty pictures--icons--to represent the files. Torvalds wanted to experiment with a real OS, and that meant UNIX or something UNIX-like. These real OSs juggled hundreds of programs at one time and often kept dozens of users happy. Playing with DOS was like practicing basketball shots by yourself. Playing with UNIX was like joining a team with 5, 10, maybe as many as 100 people moving around the court in complicated, clockwork patterns.


But UNIX machines cost a relative fortune. The high-end customers demanded the OS, so generally only high-end machines came with it. A poor university student in Finland didn't have the money for a top-notch Sun SPARCstation. He could only afford a basic PC built around the 386 processor. This was a top-of-the-line PC at the time, but it still wasn't particularly exciting. A few companies made a version of UNIX for this low-end machine, but they charged for it.


In June 1991, soon after Torvalds[^3] started his little science project, the Computer Systems Research Group at Berkeley released what they thought was a completely unencumbered version of BSD UNIX known as Networking Release 2. Several projects emerged to port it to the 386, and these evolved into the FreeBSD and NetBSD versions of today. Torvalds has often said that he might never have started Linux if he had known he could simply download a more complete OS from Berkeley.

But Torvalds didn't know about BSD at the time, and he's lucky he didn't. Berkeley was soon snowed under by the lawsuit with AT&T claiming that the university was somehow shipping AT&T's intellectual property. Development of the BSD system came to a screeching halt as programmers realized that AT&T could shut them down at any time if Berkeley was found guilty of giving away source code that AT&T owned.


[^3]: Everyone in the community, including many who don't know him, refers to him by his first name. The rules of style prevent me from using that in something as proper as a book.


If he couldn't afford to buy a UNIX machine, he would write his own version. He would make it compatible with POSIX, the standard for UNIX designers, so others would be able to use it. Minix was another UNIX-like OS, one that a professor, Andrew Tanenbaum, wrote so students could experiment with the guts of an OS. Torvalds initially considered using Minix as a platform. Tanenbaum included the source code with his project, but he charged for the package. It was like a textbook for students around the world.


Torvalds looked at the price of Minix ($150) and thought it was too much. Richard Stallman's GNU General Public License had taken root in Torvalds's brain, and he saw the limitations of charging for software. GNU had also produced a wide variety of tools and utility programs that he could use on his machine. Minix was controlled by Tanenbaum, albeit with a much looser hand than at many of the companies of the time.
People could add their own features to Minix, and some did. They did get a copy of the source code for their $150. But few changes made their way back into Minix. Tanenbaum wanted to keep it simple and grew frustrated with the many people who, as he wrote at the time, "want to turn Minix into BSD UNIX."


So Torvalds started writing his own tiny operating system for this 386. It wasn't going to be anything special. It wasn't going to topple AT&T or the burgeoning Microsoft. It was just going to be a fun experiment in writing a computer operating system that was all his. He wrote in January 1992, "Many things should have been done more portably if it would have been a real project. I'm not making overly many excuses about it though: it was a design decision, and last April when I started the thing, I didn't think anybody would actually want to use it."


Still, Torvalds had high ambitions. He was writing a toy, but he wanted it to have many, if not all, of the features found in full-strength UNIX versions on the market. On July 3, he started wondering how to accomplish this and placed a posting on the USENET newsgroup comp.os.minix, writing:


Hello netlanders, Due to a project I'm working on (in minix), I'm interested in the posix standard definition. Could somebody please point me to a (preferably) machine-readable format of the latest posix rules? Ftp-sites would be nice.


Torvalds's question was pretty simple. When he wrote the message in 1991, UNIX was one of the major operating systems in the world. The project that started at AT&T and Berkeley was shipping on computers from IBM, Sun, Apple, and most manufacturers of higher-powered machines known as workstations. Wall Street banks and scientists loved the more powerful machines, and they loved the simplicity and hackability of UNIX machines. In an attempt to unify the marketplace, computer manufacturers created a way to standardize UNIX and called it POSIX. POSIX ensured that each UNIX machine would behave in a standardized way.
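That standardization is why POSIX mattered so much to Torvalds. It pinned down, among other things, a common set of system calls for working with files and processes. As a rough sketch of what "behaving in a standardized way" means in practice, Python's os module exposes these POSIX calls almost directly, so the same few lines run unchanged on any POSIX-conforming system (the filename here is arbitrary):

```python
import os

# POSIX standardizes the open/write/read/close system calls,
# so this sequence works identically on Linux, BSD, macOS, etc.
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
os.write(fd, b"hello, posix\n")
os.close(fd)

fd = os.open("demo.txt", os.O_RDONLY)
data = os.read(fd, 100)   # read back up to 100 bytes
os.close(fd)
os.remove("demo.txt")

print(data)   # b'hello, posix\n'
```

An OS that passed the POSIX tests could promise that code like this would behave the same way as it did everywhere else, which is exactly the guarantee Torvalds wanted his clone to offer.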


Torvalds worked quickly. By September he was posting notes to the group with the subject line "What would you like to see most in Minix?" He was adding features to his clone, and he wanted to take a poll about what he should add next.


Torvalds already had some good news to report. "I've currently ported bash(1.08) and GCC(1.40), and things seem to work. This implies that I'll get something practical within a few months," he said.
At first glance, he was making astounding progress. He created a working system with a compiler in less than half a year. But he also had the advantage of borrowing from the GNU project. Stallman's GNU project had already written a compiler (GCC) and a capable command-line shell (bash). Torvalds just grabbed these because he could. He was standing on the shoulders of the giants who had come before him.
The core of an OS is often called the "kernel," which is one of the strange words floating around the world of computers. When people are being proper, they note that Linus Torvalds was creating the Linux kernel in 1991. Most of the other software, like the desktop, the utilities, the editors, the web browsers, the games, the compilers, and practically everything else, was written by other folks. If you measure this in disk space, more than 95 percent of the code in an average distribution lies outside the kernel. If you measure it by user interaction, most people using Linux or BSD don't even know that there's a kernel in there. The buttons they click, the websites they visit, and the printing they do are all controlled by other programs that do the work.


Of course, measuring the importance of the kernel this way is stupid. The kernel is sort of the combination of the mail room, boiler room, kitchen, and laundry room for a computer. It's responsible for keeping the data flowing between the hard drives, the memory, the printers, the video screen, and any other part that happens to be attached to the computer.


In many respects, a well-written kernel is like a fine hotel. The guests check in, they're given a room, and then they can order whatever they need from room service and a smoothly oiled concierge staff. Is this new job going to take an extra 10 megabytes of disk space? No problem, sir. Right away, sir. We'll be right up with it. Ideally, the software won't even know that other software is running in a separate room. If that other program is a loud rock-and-roll MP3 playing tool, the other software won't realize that when it crashes and burns up its own room. The hotel just cruises right along, taking care of business.


In 1991, Torvalds had a short list of features he wanted to add to the kernel. The Internet was still a small network linking universities and some advanced labs, and so networking was a small concern. He was only aiming at the 386, so he could rely on some of the special features that weren't available on other chips. High-end graphics hardware cards were still pretty expensive, so he concentrated on a text-only interface. He would later fix all of these problems with the help of the people on the Linux kernel mailing list, but for now he could avoid them.


Still, hacking the kernel means anticipating what other programmers might do to ruin things. You don't know if someone's going to try to snag all 128 megabytes of RAM available. You don't know if someone's going to hook up a strange old daisy-wheel printer and try to dump a PostScript file down its throat. You don't know if someone's going to create an endless loop that's going to write random numbers all over the memory. Stupid programmers and dumb users do these things every day, and you've got to be ready for it. The kernel of the OS has to keep things flowing smoothly between all the different parts of the system. If one goes bad because of a sloppy bit of code, the kernel needs to cut it off like a limb that's getting gangrene. If one job starts heating up, the kernel needs to try to give it all the resources it can so the user will be happy. The kernel hacker needs to keep all of these things straight.


Creating an operating system like this is no easy job. Many of the commercial systems crash frequently for no perceptible reason, and most of the public just takes it.[^4] Many people somehow assume that it must be their fault that the program failed. In reality, it's probably the kernel's. Or more precisely, it's the kernel designer's fault for not anticipating what could go wrong.


[^4]: "Microsoft now acknowledges the existence of a bug in the tens of millions of copies of Windows 95 and Windows 98 that will cause your computer to 'stop responding (hang)'--you know, what you call crash--after exactly 49 days, 17 hours, 2 minutes, and 47.296 seconds of continuous operation. . . . Why 49.7 days? Because computers aren't counting the days. They're counting the milliseconds. One counter begins when Windows starts up; when it gets to 2^32 milliseconds--which happens to be 49.7 days--well, that's the biggest number this counter can handle. And instead of gracefully rolling over and starting again at zero, it manages to bring the entire operating system to a halt."--James Gleick in the New York Times.


By the mid-1970s, companies and computer scientists were already experimenting with many different ways to create workable operating systems. While the computers of the day weren't very powerful by modern standards, the programmers created operating systems that let tens if not hundreds of people use a machine simultaneously. The OS would keep the different tasks straight and make sure that no user could interfere with another.
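The oddly precise figure in that footnote is simple arithmetic: a 32-bit counter can hold at most 2^32 milliseconds before it runs out of room, and converting that span into days, hours, minutes, and seconds gives exactly the numbers Gleick quotes.

```python
# A 32-bit millisecond counter wraps after 2**32 milliseconds.
WRAP_MS = 2 ** 32

days, rem = divmod(WRAP_MS, 86_400_000)   # milliseconds per day
hours, rem = divmod(rem, 3_600_000)       # milliseconds per hour
minutes, rem = divmod(rem, 60_000)        # milliseconds per minute
seconds = rem / 1000

print(days, hours, minutes, seconds)      # 49 17 2 47.296
```

Forty-nine days, seventeen hours, two minutes, and 47.296 seconds of uptime: precisely the moment the counter overflows.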


As people designed more and more operating systems, they quickly realized that there was one tough question: how big should the OS be? Some people argued that the OS should be as big as possible and come complete with all the features that someone might want to use. Others countered with stripped-down designs built around a small core surrounded by thousands of little programs that together did the same job.


To some extent, the debate is more about semantics than reality. A user wants the computer to be able to list the different files stored in one directory. It doesn't matter if the question is answered by a big operating system that handles everything or a little operating system that uses a program to find the answer. The job still needs to be done, and many of the instructions are the same. It's just a question of whether the instructions are labeled the "operating system" or an ancillary program.


But the debate is also one about design. Programmers, teachers, and the Lego company all love to believe that any problem can be solved by breaking it down into small parts that can be assembled to create the whole. Every programmer wants to turn the design of an operating system into thousands of little problems that can be solved individually. This dream usually lasts until someone begins to assemble the parts and discovers that they don't work together as perfectly as they should.


When Torvalds started crafting the Linux kernel, he decided he was going to create a bigger, more integrated version that he called a "monolithic kernel." This was something of a bold move because the academic community was entranced with what they called "microkernels." The difference is partly semantic and partly real, but it can be summarized by analogy with businesses. Some companies try to build large, smoothly integrated operations where one company controls all the steps of production. Others try to create smaller operations that subcontract much of the production work to other companies. One is big, monolithic, and all-encompassing, while the other is smaller, fragmented, and heterogeneous. It's not uncommon to find two companies in the same industry taking different approaches and each thinking they're doing the right thing.


The design of an operating system often boils down to the same decision. Do we want to build a monolithic core that handles all the juggling internally, or do we want a smaller, more fragmented model that should be more flexible as long as the parts interact correctly?


In time, the OS world started referring to this core as the kernel of the operating system. People who wanted to create big OSs with many features wrote monolithic kernels. Their ideological enemies who wanted to break the OS into hundreds of small programs running on a small core wrote microkernels. Some of the most extreme folks labeled their work a nanokernel because they thought it did even less and thus was even more pure than those bloated microkernels.


The word "kernel" is a bit confusing for most people because they often use it to mean a fragment of an object or a small fraction. An extreme argument may have a kernel of truth to it. A disaster movie always gives the characters and the audience a kernel of hope to which to cling.


Mathematicians use the word a bit differently and emphasize the word's ability to let a small part define a larger concept. Technically, the kernel of a function f is the set of values x[1], x[2], ..., x[n] such that f(x[i]) = 1, or whatever the identity element happens to be. The kernel of a function does a good job of defining how the function behaves with all the other elements. Algebraists study the kernel of a function because it reveals the overall behavior.[^5]


[^5]: The kernel of f(x) = x^2 is {-1, 1}, and it illustrates how the function has two branches.


The OS designers use the word in the same way. If they define the kernel correctly, then the behavior of the rest of the OS will follow. The small part of the code defines the behavior of the entire computer. If the kernel does one thing well, the entire computer will do it well. If it does one thing badly, then everything will suffer.

Many computer users often notice this effect without realizing why it exists. Most Macintosh computers, for instance, can be sluggish at times because the OS does not do a good job juggling the workload between processes. The kernel of the OS has not been completely overhauled since the early days when the machines ran one program at a time. This sluggishness will persist for a bit longer until Apple releases a new version known as MacOS X. It will be based on the Mach kernel, developed at Carnegie Mellon University and released as open source software. Steve Jobs adopted it when he went to NeXT, a company that was eventually folded back into Apple.

The Mach kernel does a much better job of juggling different tasks because it uses preemptive multitasking instead of cooperative multitasking. The original MacOS let each program decide when and if it was going to give up control of the computer to let other programs run. This low-rent version of juggling was called cooperative multitasking, and it failed whenever some program in the hotel failed to cooperate. Most software developers obeyed the rules, but mistakes still occurred, and bad programs would lock up the machine. Preemptive multitasking takes this power away from the individual programs. It swaps control from program to program without asking permission, so one pig of a program can't slow down the entire machine. When the new MacOS X kernel starts offering preemptive multitasking, users should notice less sluggish behavior and more consistent performance.
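The difference is easy to see in miniature. The sketch below is a toy cooperative scheduler built from Python generators (all names here are hypothetical, not any real OS's API): each task runs only until it voluntarily yields, which is exactly the cooperative-multitasking bargain. A task that never yields would hog the loop forever, just as one misbehaving program could freeze the classic MacOS.

```python
from collections import deque

log = []

def task(name, steps):
    # A cooperative task must voluntarily give up control at each step.
    for i in range(steps):
        log.append(f"{name}{i}")
        yield              # the ONLY place a task switch can happen

def run(tasks):
    # Round-robin scheduler: give each task a turn until all finish.
    queue = deque(tasks)
    while queue:
        t = queue.popleft()
        try:
            next(t)        # run the task until it yields...
            queue.append(t)  # ...then send it to the back of the line
        except StopIteration:
            pass           # task finished; drop it from the rotation

run([task("A", 2), task("B", 2)])
print(log)   # ['A0', 'B0', 'A1', 'B1'] -- the two tasks interleave
```

A preemptive kernel removes the `yield` from the bargain entirely: a hardware timer interrupts whatever is running and hands control back to the scheduler, so no single task gets to decide whether the others make progress.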


Torvalds plunged in and created a monolithic kernel. This made it easier to tweak all the strange interactions between the programs. Sure, a microkernel built around a clean, message-passing architecture was an elegant way to construct the guts of an OS, but it had its problems. There was no easy way to deal with special exceptions. Let's say you want a web server to run very quickly on your machine. That means you need to treat messages coming into the computer from the Internet with exceptional speed. You need to ship them with the equivalent of special delivery or FedEx. You need to create a special exception for them. Tacking these exceptions onto a clean microkernel starts to make it look bad. The design starts to get cluttered and less elegant. After a few special exceptions are added, the microkernel can start to get confused.


Torvalds's monolithic kernel did not have the elegance or the simplicity of a microkernel OS like Minix or Mach, but it was easier to hack. New tweaks to speed up certain features were relatively easy to add. There was no need to come up with an entirely new architecture for the message-passing system. The downside was that the guts could grow remarkably byzantine, like the bureaucracy of a big company.
In the past, this complexity hurt the success of proprietary operating systems. The complexity produced bugs because no one could understand it. Torvalds's system, however, came with all the source code, making it much easier for application programmers to find out what was causing their glitch. To carry the corporate bureaucracy metaphor a bit further, the source code acted like the omniscient secretary who is able to explain everything to a harried executive. This perfect knowledge reduced the cost of complexity.
By the beginning of 1992, Linux was no longer a Finnish student's part-time hobby. Several influential programmers became interested in the code. It was free and relatively usable. It ran much of the GNU code, and that made it a neat, inexpensive way to experiment with some excellent tools. More and more people downloaded the system, and a significant fraction started reporting bugs and suggestions to Torvalds. He rolled them back in and the project snowballed.