7.2 In For a Penny



In time, more and more companies formed in the Bay Area, and more and more of them realized that Berkeley's version of UNIX was the reference implementation for the Internet. They started asking for this bit or that bit.
Keith Bostic heard these requests and decided that the Berkeley CSRG needed to free up as much of the source code as possible. Everyone agreed it was a utopian idea, but only Bostic thought it was possible to accomplish. McKusick writes, in his history of BSD, "Mike Karels [a fellow software developer] and I pointed out that releasing large parts of the system was a huge task, but we agreed that if he could sort out how to deal with re-implementing the hundreds of utilities and the massive C library, then we would tackle the kernel. Privately, Karels and I thought that would be the end of the discussion."


Dave Hitz, a good friend of Bostic's, remembers the time. "Bostic was more of a commanding type. He just rounded up all of his friends to finish up the code. You would go over to his house for dinner and he would say, 'I've got a list. What do you want to do?' I think I did the cp command and maybe the look command." Hitz, of course, is happy that he took part in the project. He recently founded Network Appliance, a company that packages a stripped-down version of BSD into a file server that is supposed to be a fairly bulletproof appliance for customers. Network Appliance didn't need to do much software engineering when they began. They just grabbed the free version of BSD and hooked it up.


Bostic pursued people far and wide to accomplish the task. He gave them the published description of the utility, or of the relevant part of the library, from the documentation and then asked them to reimplement it without looking at the source code. This cloning approach is known as a cleanroom operation: it is legal as long as it takes place inside a metaphorical clean room, where the engineers have no knowledge of how the AT&T engineers built UNIX.


This was not an easy job, but Bostic was devoted and kept recruiting. He roped everyone who could code into the project and often spent time fixing things afterward. The task took 18 months and involved more than 400 people, who received little more than recognition and some thanks for their efforts. The 400-plus names are printed in the book he wrote with McKusick and Karels in 1996.


When Bostic came close to finishing, he stopped by McKusick's office and asked how the kernel was coming along. This called McKusick and Karels's bluff and forced them to do some hard engineering work. In some respects, Bostic had the easier job. Writing the small utility programs was hard work for his team, but the job was essentially preorganized and segmented. Many folks over the years had created manual pages that documented exactly what each program was supposed to do. Each program could be assigned separately, and people didn't need to coordinate their work much. These were just dishes for a potluck supper.


Cleaning up the kernel, however, was a different matter. It was much larger than many of the smaller utilities and was filled with more complicated code that formed a tightly coordinated mechanism. Sloppy work in one of the utility files would probably affect only that one utility, but a glitch in the kernel would routinely bring down the entire system. If Bostic was coordinating a potluck supper, McKusick and Karels had to find a way to create an entire restaurant that served thousands of meals a day to thousands of customers. Every detail needed to work together smoothly.


To make matters more complicated, Berkeley's contributions to the kernel were mixed in with AT&T's contributions. Both had added on parts, glued in new features, and created new powers over the years. They were de facto partners on the project. Back in the good old days, they had both shared their source code without any long-term considerations or cares. But now that AT&T claimed ownership of it all, they had to find a way to unwind all of the changes and figure out who wrote what.


McKusick says, "We built a database up, line by line. We took every line of code and inserted it into the database. You end up finding pretty quickly where the code migrated to, and then you decide whether it is sufficiently large to see if it needed recoding."
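The idea behind that database can be imagined roughly like this: index every line of the known AT&T source, then check each Berkeley file against the index to flag regions that might need rewriting. The sketch below is purely hypothetical (the book does not describe the CSRG's actual tooling); the function names and sample data are invented for illustration.

```python
# Hypothetical sketch of a line-provenance check: index every line of
# one code base, then measure how much of another file matches.
# This is NOT the actual CSRG tool, whose details are not described here.

def build_line_index(files):
    """Map each stripped, nonblank source line to the files it appears in."""
    index = {}
    for name, text in files.items():
        for line in text.splitlines():
            line = line.strip()
            if line:  # ignore blank lines
                index.setdefault(line, set()).add(name)
    return index

def shared_fraction(text, index):
    """Return the fraction of a file's nonblank lines found in the index."""
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    if not lines:
        return 0.0
    shared = sum(1 for l in lines if l in index)
    return shared / len(lines)

# Invented example: two lines of the "Berkeley" file also appear
# in the indexed "AT&T" file, so the fraction comes out to 2/3.
att = {"kern.c": "x = 1;\nreturn x;\n"}
bsd = "x = 1;\ny = 2;\nreturn x;\n"
idx = build_line_index(att)
print(shared_fraction(bsd, idx))
```

A real version would have to cope with renamed variables, reformatting, and code that moved between files, which is why deciding whether a match was "sufficiently large" still took human judgment.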


This database made life much easier for them and they were able to plow through the code, quickly recoding islets of AT&T code here and there. They could easily pull up a file filled with source code and let the database mark up the parts that might be owned by AT&T. Some parts went quickly, but other parts dragged on. By late spring of 1991, they had finished all but six files that were just too much work.
It would be nice to report that they bravely struggled onward, forgoing all distractions like movies, coffeehouses, and friends, but that's not true. They punted and tossed everything out the door and called it "Network Release 2." The name implied that this new version was just a new revision of their earlier product, Network Release 1, and this made life easier with the lawyers. They just grabbed the old, simple license and reused it. It also disguised the fact that this new pile of code was only about six files short of a full-grown OS.


The good news about open source is that projects often succeed even when they initially fail. A commercial product couldn't ship without the complete functionality of the six files. Few would buy it. Plus, no one could come along, get a bug under his bonnet, and patch up the holes. Proprietary source code isn't available and no one wants to help someone else in business without compensation.


The new, almost complete UNIX, however, was something different. It was a university project, and so university rules of camaraderie and sharing seemed to apply. Another programmer, Bill Jolitz, picked up Network Release 2 and soon added the code necessary to fill the gap. He became fascinated with getting UNIX up and running on a 386 processor, a task that was sort of like trying to fit the latest traction control hardware and anti-lock brakes on a go-cart. At the time, serious computer scientists worked on serious machines from serious workstation and minicomputer companies. The PC industry was building toys. Of course, there was something macho about the entire project. Back then I remember joking to a friend that we should try to get UNIX running on the new air-conditioning system, just to prove it could be done.
Jolitz's project, of course, found many people on the Net who didn't think it was just a toy. Once he put the source code on the Net, a bloom of enthusiasm spread through the universities and waystations of the world. People wanted to experiment with a high-grade OS and most could only afford relatively cheap hardware like the 386. Sure, places like Berkeley could get the government grant money and the big corporate donations, but 2,000-plus other schools were stuck waiting. Jolitz's version of 386BSD struck a chord.


While news traveled quickly to some corners, it didn't reach Finland. Network Release 2 came in June 1991, right around the same time that Linus Torvalds was poking around looking for a high-grade OS to use in experiments. Jolitz's 386BSD came out about six months later as Torvalds began to dig into creating the OS he would later call Linux. Soon afterward, Jolitz lost interest in the project and let it lie, but others came along. In fact, two groups called NetBSD and FreeBSD sprang up to carry the torch.


Although it may seem strange that three groups building a free operating system could emerge without knowing about each other, it is important to realize that the Internet was a very different world in 1991 and 1992. The World Wide Web was only a gleam in some people's eyes. Only the best universities offered general Internet access to their students, and most people didn't understand what an e-mail address was. Only a few computer-related businesses like IBM and Xerox put their researchers on the Net. The community was small and insular.


The main conduits for information were the USENET newsgroups, which were read only by people who could get access through their universities. This technology was an efficient way of sharing information, although quite flawed. Here's how it worked: every so often, each computer would call up its neighbors and swap the latest articles. Information traveled like gossip, which is to say that it traveled quickly but with very uneven distribution. Computers were always breaking down or being upgraded. No one could count on every message getting to every corner of the globe.
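That store-and-forward gossip can be sketched as a toy simulation: each site periodically exchanges its article set with its immediate neighbors, so an article ripples outward one hop per round. This is a deliberately simplified illustration, not real UUCP or NNTP code, and the site names are invented.

```python
# Toy simulation of gossip-style article propagation, loosely in the
# spirit of USENET's store-and-forward exchange. Hypothetical sketch;
# real news transport (UUCP/NNTP) was far more involved.

def gossip_round(neighbors, articles):
    """One round: every site swaps its current article set with each neighbor."""
    updated = {site: set(arts) for site, arts in articles.items()}
    for site, peers in neighbors.items():
        for peer in peers:
            updated[site] |= articles[peer]
            updated[peer] |= articles[site]
    return updated

# A small chain of sites: A - B - C - D
neighbors = {"A": ["B"], "B": ["C"], "C": ["D"], "D": []}
articles = {"A": {"post-1"}, "B": set(), "C": set(), "D": set()}

rounds = 0
while "post-1" not in articles["D"]:
    articles = gossip_round(neighbors, articles)
    rounds += 1
print(rounds)  # the article needs three rounds to cross this chain
```

Even in this idealized chain, the article reaches the far end only after several rounds; drop one site from the chain for a day and everything downstream goes stale, which is exactly the uneven distribution the text describes.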


The NetBSD and FreeBSD forks of BSD continue to exist separately today. The folks who work on NetBSD concentrate on making their code run on all possible machines, and they currently list 21 different platforms that range from the omnipresent Intel 486 to the gone but not forgotten Commodore Amiga.


The FreeBSD team, on the other hand, concentrates on making their product work well on the Intel 386. They added many layers of installation tools to make it easier for the average Joe to use, and now it's the most popular version of BSD code around.


Those two versions used the latest code from Berkeley. Torvalds, on the other hand, didn't know about the 386BSD, FreeBSD, or NetBSD. If he had found out, he says, he probably would have just downloaded the versions and joined one of those teams. Why run off and reinvent the wheel?