The Crypto War in the USA Intelligence Community
"Crypto: How the Code Rebels Beat the Government—Saving Privacy in the Digital Age"
Chapter - Slouching towards crypto
By 1995, it was clear that the field of cryptography—as well as its reach—had dramatically changed, despite the government’s best efforts. Crypto, propelled by computer power and new discoveries by the Whit Diffies of the world, was moving at a turbocharged pace, shifting from Pony Express to Internet time. But the basic principles remained. Despite the increasingly invoked specter of crypto anarchy—where codes would proliferate unchecked, to the point where no government or institution could even hope to get a handle on digital commerce or law—the ancient clash of measure and countermeasure persisted. Only now the outsiders had a hand in the game.
Over a century before, Edgar Allan Poe, who had been nearly obsessive on the subject of cryptology, wrote, “It may roundly be asserted . . . that human ingenuity cannot concoct a cipher which human ingenuity cannot resolve.” Mathematically, of course, Poe was wrong; the verifiably impenetrable one-time pad was a firm “nevermore” to his claim. But implementing a one-time pad was demanding; certainly it was inappropriate in large-scale settings. So on a practical basis, was the poet’s claim correct? When Martin Gardner had cited Poe’s quote in his famous Scientific American article about RSA, he had thought not.
The question certainly bugged Phil Zimmermann. In his heart, he felt that the encryption algorithm at the center of his PGP software was sound. In naming his program, he felt that “pretty good” was an understatement: users should be able to count on its imperviousness to codebreakers. The government, at least publicly, hinted that PGP was strong, too. In the spring of 1995, Louis Freeh of the FBI and William Crowell of the National Security Agency had testified in a classified congressional briefing about the difficulty of breaking crypto with long key sizes. Freeh complained, “We don’t have the technology or the brute-force capability to get to this information.” Crowell went even further. Citing current personal computer technology, he said that to crack “128-bit cryptography, which is what PGP is . . . would [take] 8.6 trillion times the age of the universe.”
But Zimmermann knew that a brute-force attack on IDEA (International Data Encryption Algorithm) was not the only way to gut his cipher into something that could be called “Pretty Good Try at Privacy.”
There were countless ways to crack a code. Maybe through stronger factoring algorithms and dedicated hardware a supercomputer could make much faster work of the public key part of the program. Or, even more likely, there could be quirks in the details of PGP’s implementation that would provide a cryptanalyst with a precious shortcut to plaintext.
As it happened, one evening at the 1995 crypto conference at Santa Barbara, there was a cocktail party alfresco, and late in the evening a few cryptographers, decked in traditional garb of T-shirts and sandals, gathered around one of the event’s keynote speakers. He was Robert Morris, Sr., and until recently the only crowds he’d addressed were those authorized to receive U.S. government secrets. He had just retired as a top scientist at Fort George Meade. Morris’s reputation—enhanced by the unknowable feats he may have accomplished in the service of spookdom—drew a small crowd to his table. And when Morris mentioned that he wouldn’t mind meeting Phil Zimmermann, the neatly bearded forty-one-year-old was quickly called over.
“Phil, let me ask you a question,” said the former intelligence man, puffing aggressively on a cigarette.
“Say that someone used PGP for very bad stuff. How much would it cost us to break it?”
Zimmermann seemed flustered. “Well, I’ve been asked that before,” he said. “It could be done.”
“But how much would it cost us?”
It was far from Zimmermann’s favorite subject, but he played along. He conjectured that the best attacks on PGP would not be on its key size but on other weaknesses. Its data structure could be troublesome, he admitted, its error correction poor.
Morris nodded and said nothing. He’d been playing with Zimmermann. Who the hell knew if the NSA had already unearthed some elementary flaw that enabled the acres of silicon in its vaunted basement instantly to cough up the plaintext of the freedom fighters who allegedly used Zimmermann’s program?
But the next day in his talk, Morris implicitly provided a commentary on the new cryptographers and their crypto-anarchist visions. He revealed no trade secrets. But somewhat in the spirit of the Eastern masters, Morris did present a pair of truisms—koans of the crypto faith—that pointed toward an eventual rapprochement between the Equities, one beyond the current political struggles. A glimpse of a post-Clipper society in the century to come.
Koan One (for codemakers): never underestimate the time and expense your opponent will take to break your code. The inner text of the Morris speech was that cryptography is best left to those of a paranoid mind-set, those who believe beyond question that their opponents just may be very rich, very clever, and very dedicated—hellhounds on the trail. They will launch powerful frontal assaults on your codes. And, often, they will win.
Koan Two (for codebreakers): look for plaintext. This was reassurance to the crowd that no matter how baffling the task of codebreaking might seem, the fact is that very fallible human beings are the ones who must employ these sophisticated systems. So sometimes, when one least expects it, a seemingly impenetrable code—the jumble of ASCII confetti one must hammer into human language—might have a passage, or an entire message, somehow unencoded. In that case, you could read it as easily as a fortune cookie.
To the crypto anarchists, Morris was saying, “Hey, it’s not that easy to create a cipher utopia.” The ancient game would go on. But by imparting the lesson to outsiders he was also tacitly acknowledging that the future belonged not just to the NSA illuminati, but to these T-shirted longhairs at Santa Barbara as well.
Morris’s statements came at a time when the tension between public and government crypto was at its height. Further, a novel twist had recently been introduced. Some of the emerging crypto forces were now well beyond code making and deeply into cryptanalysis. While this had been undertaken by the crypto crowd before—most famously in the attacks on Merkle’s knapsack scheme—there was now a new sort of effort. It did not conform to the traditional rules forged in the world of William Friedman or Alan Turing. . . . It was an aggregate code breaking, a mass effort powered by the amplifying abilities of the Net. Its practitioners were, of course, cypherpunks. This breed of codebreaker was not interested in crime and espionage, but in making a political point and reaping big fun in the process.
One of the first efforts began with Phil Zimmermann’s PGP software. Long before Morris brought up the question of PGP’s strength at Crypto ’95, its users had been plagued by nagging questions of its resilience. Their angst reflected the key dilemma of guerrilla cryptography: could you trust software developed without the imprimatur of an organization known for secure codes? This was the question that Derek Atkins, then a twenty-year-old electrical engineering student at MIT, was asking himself in 1992.
His initial reaction to Zimmermann’s program was to join the crusade, and he became part of the impromptu development team creating new versions of the software. But then Atkins came to wonder what attacks might work against it.
As Bob Morris indicated in his talk, there are two general ways to crack a cryptosystem. The first way is brute force—to try all possible solutions until you hit on the right one. The second method involves seeking a shortcut, an unintended weakness, which may enable you to break the codes. As Atkins spoke to his friends—including Michael Graff at Iowa State University and Paul Leyland of Oxford University—he decided on the former style of attack. Trying to find a subtle flaw was a task beyond his abilities and experience. (Though, as Morris implied, it was a route that the NSA had probably attempted.) On the other hand, everybody seemed to agree that a direct, and perhaps feasible, route to cracking PGP would be one that worked against any RSA-based program: factoring.
Rivest, Shamir, and Adleman had understood, of course, that if someone figured out a quick way to factor—to determine two original primes from the key based on the product of those numbers—their system was dead meat. But even though they had expected somewhat better factoring algorithms to come, they figured that nothing on the horizon would make it feasible to break RSA. Atkins and his friends, however, wanted to test that proposition. They suspected that by relying on a previously unavailable resource—the thousands of computers accessible to people on the Internet—they might be able to make factoring history. This was a fascinating premise, regarding the aggregate computing power of Internet users as sort of a giant supercomputer, perhaps a kludged cousin to the ones that supposedly existed in the basement of Fort Meade. They ran the idea past Arjen Lenstra, the renowned mathematical expert at Bellcore in New Jersey. He told them that the large prime numbers commonly used in PGP (as well as the commercial versions of RSA) would be too formidable to attack. Then he suggested another challenge: RSA 129.
Lenstra’s idea cut to the heart of the issue of whether or not cryptography could ever assure perfect security. The RSA 129 challenge was the one offered in Martin Gardner’s Scientific American column in 1977—the column that began by declaring moot Poe’s dictum that no code was impervious to cracking.
The challenge still had not been met in all these years. The estimate of time it would take a dedicated supercomputer to factor a number that size was forty quadrillion years. But even if you did not accept that number (Rivest now says it was a miscalculation), a much, much smaller one—a billion years, say, or a measly few million—would still indicate that anyone breathing today’s air would be long rendered into a dust ball before the secret of the RSA message encoded with a 129-digit key was revealed.
Yet seventeen years later, Atkins, Graff, Leyland, and Lenstra joined forces with the Internet to attempt to collect that hundred dollars—in a matter of months.
The first, and probably most important, thing they needed was a good factoring algorithm. There had been some conceptual advances in this area since Gardner’s column had been published. Specifically, someone had devised the “double large prime multiple polynomial variation of the quadratic sieve.” This involves searching in a numbers realm called vector space for numbers known as univectors. These can be combined to chart mathematical relations in a way that yields the two original primes. “You don’t have to search the full space of possibilities, but only a small finite portion of the space,” says Atkins. “One way of looking at it is that we were looking for eight million needles in a haystack full of countless needles. You’re not looking for any particular needle—you just find enough of them and combine them in a special mathematical means to actually factor the number.” That technique was perfect for a distributed Internet attack, where literally hundreds of people would join forces to solve the problem. During the summer of 1993, the software was ready—Atkins had been running some of it on the MIT Media Lab computers—and they could now recruit volunteers with computers. The response was terrific: over 1600 machines worked on the problem, all over the world, every continent except Antarctica. The computers ranged from garden variety PCs to the 16,000-processor Maspar supercomputer at Bell Labs.
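The relation-combining idea can be sketched with a toy example. The numbers below are textbook stand-ins, vastly smaller than RSA 129, and the code illustrates only the principle—combine known squares until their product is itself a square, then take a gcd—rather than an actual sieve:

```python
from math import gcd

n = 1649  # toy number to factor (not RSA 129)

# Volunteers contribute "relations": values x whose squares, reduced
# mod n, are easy to work with. Here two happen to suffice:
#   41^2 = 1681 ≡ 32  (mod 1649)
#   43^2 = 1849 ≡ 200 (mod 1649)
relations = [41, 43]

# Combined: 32 * 200 = 6400 = 80^2, so (41 * 43)^2 ≡ 80^2 (mod n).
x = 1
for r in relations:
    x = (x * r) % n
y = 80  # square root of the combined residue

# A nontrivial gcd of x - y with n exposes a factor.
p = gcd(x - y, n)
q = n // p
print(p, q)  # -> 17 97
```

With real 129-digit numbers, finding enough such relations is the part that was farmed out to volunteers; combining them is the "matrix reduction" step done at the end.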
A standard measurement of computer power is a MIPS year—one year of constant use of a million-instructions-per-second machine. From September 1993 to April 1994, the RSA 129 experiment used about five thousand of those MIPS years. It was then that Atkins and the others guessed that they finally had enough univectors to do the final calculations. As planned, they sent the data to Lenstra at Bellcore, who would then do the final “matrix reduction.” Atkins sent Lenstra a tape with 400 megabytes’ worth of univectors via U.S. mail, with a backup by FedEx. Lenstra fed it to his machines, and for two days they matrix-reduced. On April 24, 1994, Atkins posted the following message on the Net:

We are happy to announce that

114381625757888867669235779976146612010218296721242362562561842935706935245733897830597123563958705058989075147599290026879543541

=

3490529510847650949147849619903898133417764638493387843990820577 × 32769132993266709549961988190834461413177642967992942539798288533
Applying that key to the number that represented the enciphered message text, they were able to transform it into a similarly long number. This was easily converted to English by one of the oldest decoding schemes in history: 01 = A, 02 = B, and so on. That yielded the secret that supposedly would last for a quadrillion years:
THE MAGIC WORDS ARE SQUEAMISH OSSIFRAGE
Did this discovery rock Ron Rivest’s world? Not really. In the years since Gardner’s article, he had kept track of developments in factoring, and had concluded it wasn’t impossible that one day he might have to write out a check for $100 to someone. (Amazingly, he had forgotten the actual message.) He even defends Gardner’s prediction that a break in our lifetime was extremely remote. “It was probably accurate for the analysis of the fastest algorithm we knew about at the time, but technology was moving fast on the factoring frontier.”
But the very idea of a “factoring frontier” was enough to throw some doubt into the security of the most popular public key cryptosystem. After all, if factoring was easy, RSA was, well, worthless. Of course, breaking RSA 129 was nowhere near as challenging as cracking RSA codes set at commercial strength.
When the RSA system uses 129 digits, the key turns out to be 425 bits long. But the standard RSA key— the one used by the company’s actual software—was 1024 bits long. Had the Atkins team attempted the same task with that key length, their computers would still be working on the problem—for a few million more years.
Yet that degree of futility had once been predicted for RSA 129. Might new techniques to factor numbers melt down even the fattest RSA keys? There may well be mathematical breakthroughs to speed up factoring, but an even greater threat to the strength of the cryptosystems was the development of what are called quantum computers, machines that take advantage of subatomic physics to run much faster than our current models. (Think of the speed differential between turtles and laser beams.) While these machines still existed only in theory, scientists had been taking the first difficult steps toward implementation. Once the journey toward quantum computers was completed, you could stick a fork into the RSA cryptosystem. “I think that I shall see a special-purpose quantum factorization device in my lifetime,” cryptographer Gilles Brassard wrote in 1996. “If this happens, RSA will have to be abandoned.” This was published, of all places, in CryptoBytes, the technical newsletter of RSA Data Security.
But that remained speculation. The reality is that Derek Atkins and his colleagues took what seemed to be an invincible problem and, working informally, with an ad hoc collection of computers, managed to crack it. “What we learned is that a bunch of amateurs can get together and do this,” he says. And that all claims of invincibility should be regarded with skepticism.
The next target was an irresistible one: the 40-bit crypto allowed by the government for export. The point this time would be purely political. If the barn-raising style of cryptanalysis used in the RSA crack was directed against the puny key lengths negotiated by the Software Publishers Association in 1992 (and, despite government promises, not adjusted in subsequent years), those keys would surely fall, and the need for stronger crypto would be obvious.
After one cypherpunk suggested a “Key Cracking Ring,” Tim May urged action, guessing that the “CPU horsepower of this list could be quite impressively applied” to crack the key in six months, making a strong statement against U.S. export standards. (Six months was a guess. Comparing the computational effort to the RSA 129 crack was somewhat like comparing apples and oranges—keyspace search versus factoring.)
“Heh, I was already working on it . . . ,” wrote Adam Back, a twenty-five-year-old computer science student at Exeter University in England. Immediately after seeing the first posting, he’d begun writing scripts to allow people to participate in a group crack. He knew what he was doing, since he had been recently playing around with Rivest’s RC-4 algorithm—the actual cipher that performed the 40-bit encryption permitted for export by the government in programs by Microsoft and Lotus.
A brute-force attack on a bulk encryption cipher like RC-4 or DES requires the codebreaker to try out possible key combinations until one works. In the worst case that means searching the entire space of possibilities; a 40-bit key has about a trillion of them, enough to keep a pack of computers busy for days. That’s what Adam Back had in mind: a mass effort with each attacker claiming some portion of keyspace, testing it, and then requesting another. The process would continue until someone found the key. Back posted his scripts to his Web page, and a group of conspirators from various corners of the world quickly gathered. Eventually, eighty-nine cypherpunks participated in trying to find a 40-bit key in Microsoft’s database program Access.
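The slice-by-slice sweep can be sketched in miniature. The cipher below is the standard RC4 construction, but the 16-bit key and the known-plaintext success check are toy assumptions chosen to keep the sketch runnable; the real effort swept a 40-bit space with essentially the same loop:

```python
def rc4(key, data):
    # Key-scheduling: permute the state array S under the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Keystream generation, XORed against the data.
    out = bytearray()
    i = j = 0
    for b in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

SECRET_KEY = (4242).to_bytes(2, "big")   # toy 16-bit key, not 40 bits
PLAINTEXT = b"order: one t-shirt"
CIPHERTEXT = rc4(SECRET_KEY, PLAINTEXT)

def sweep(start, end):
    """One volunteer's slice of the keyspace: try every key in [start, end)."""
    for k in range(start, end):
        if rc4(k.to_bytes(2, "big"), CIPHERTEXT) == PLAINTEXT:
            return k
    return None

# Hand out slices until somebody finds the key.
found = next(f for f in (sweep(s, s + 4096) for s in range(0, 1 << 16, 4096))
             if f is not None)
print(found)  # -> 4242
```

In the real attack the slices were claimed over the Net rather than iterated locally, and success was recognized by the decrypted bytes looking like valid protocol data rather than by comparing against a known plaintext.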
But the Microsoft Access crack was doomed. After the entire keyspace was “swept,” none of the millions of potential keys unlocked the message. It turned out that the would-be crackers were stuck on a technical point that kept them from actually getting the plaintext. (“The problem was a lack of specifications,” says Back. “We didn’t know what format the file was in.”)
Still, the cypherpunks emerged from the failed Microsoft attack with some group-cracking software, a loose yet dedicated organization, and a continuing desire to expose what they believed was the pitiful sham of export-level crypto. And then the cypherpunks hit upon an even better target for a brute-force attack: Netscape.
In 1993, two students at the University of Illinois had engaged in a coffeehouse conversation that would not only change the course of the twenty-two-year-old international network called the Internet but would profoundly affect the adoption of crypto. One of them, a chunky undergrad named Marc Andreessen, had recently been learning about a new system on the Internet brashly named the World Wide Web by its inventor, Tim Berners-Lee, a British computer scientist working in Switzerland. The Web was an ingenious way to publish and get access to information on the Net, but only a few in the technical community had adopted the system. Andreessen saw a wider potential. If someone created a slick “browser” to surf through the information space created by a multitude of people who shared text, pictures, and sounds on the Web, he said to his colleague Eric Bina, the Internet itself would be easier to use and a better way to get information. The pair, both of whom worked at the Supercomputing Center at the university, created Mosaic, the first great Web browser. Instead of being forced to use arcane commands and tackle a baffling alphabet soup of acronyms, people could now get all sorts of wonderful stuff from handmade Web “pages”—at the click of a mouse! It was an instant phenomenon; to use Mosaic was to swoon with the excitement of participating in a vast experiment with the future of information sharing. Soon a team at Illinois had churned out versions of the program for virtually every computing platform. Millions of people downloaded them, and thousands of Web sites sprang up to take advantage of the audience.
In 1994, Andreessen had another famous cup of coffee, this time with Silicon Valley entrepreneur Jim Clark. The just-departed CEO of Silicon Graphics was casting about for a big new idea for a start-up company, and with this college kid he hit one of the richest pay dirts in history. Clark, who’d been unaware of the Web boom up till then, quickly realized that there were untapped commercial possibilities for the Web, and grabbed not only Andreessen but most of the Illinois team to start Mosaic Communications. (When the university objected to the name, Clark changed it to Netscape.) The idea was to develop an improved browser called the Navigator, along with software for “servers” that would allow businesses to go on-line. The one missing component was security. If companies were going to sell products and make transactions over the Internet, surely customers would demand protection. It was the perfect job for encryption technology.
Fortunately Clark knew someone in the field—Jim Bidzos. By the time negotiations were completed, Netscape had a license for RSA and the company’s help in developing a security standard for the Web: a public key–based protocol known as Secure Sockets Layer. Netscape would build this into its software, ensuring that its estimated millions of users would automatically get the benefits of crypto as envisioned by Merkle, Diffie, and Hellman, and implemented by Rivest, Shamir, and Adleman. A click of a mouse would send Netscape users into crypto mode: a message would appear informing them that all information entered from that point was secure. Meanwhile, RSA’s encryption and authentication would be running behind the scenes.
Jim Bidzos drove his usual hard bargain with Netscape: in exchange for its algorithms, RSA was given 1 percent of the new company. In mid-1995, Netscape ran the most successful public offering in Wall Street’s history to date, making RSA’s share of the company worth over $20 million. (Not bad, Bidzos realized, for a company that was just about flatlining until Lotus’s $100,000 advance for the Notes license.)
It was just after that eye-opening IPO that a cypherpunk named Hal Finney began looking at Netscape’s security. Finney, a Santa Barbara–based programmer who had participated in PGP development, was particularly interested in how cryptography would be used with electronic commerce, and had become familiar with Netscape’s Secure Sockets Layer. In adhering to the export regulations, Netscape had released two versions of the browser: a domestic version with a 128-bit key for its RC-4 encryption function, and a 40-bit version for export.
Finney set up a challenge to break a message encrypted with that weaker key. He would make a dummy Netscape transaction—just as if he were a customer—then use the encryption in the export version. “I basically connected to Netscape in one of their secure pages and typed in some random data where I was supposed to be ordering a T-shirt or something,” he says. Then he captured the encrypted data and included it in his challenge:
Date: Mon, 10 Jul 1995 16:13:52 -0700
From: Hal <email@example.com>
Subject: Let’s try breaking an SSL RC4 key
Since this whole Microsoft Access thing turned out to be a dud, maybe an alternative would be to try breaking the 40-bit RC4 used in Netscape’s SSL (Secure Sockets Layer) exportable encryption . . .
From England, Adam Back’s group accepted the challenge. Though Back’s original intent seems to have been to apportion the keyspace among many people, he wound up accepting the offer of an Australian programmer to organize half the search. The rest of the keyspace was to be swept by volunteers who were assigned slices. But there was some confusion between the two groups that slowed down the effort for some days.
It was during this lull in the action that Damien Doligez began to wonder what was taking so long. Doligez was a twenty-seven-year-old computer scientist who had just gotten his Ph.D. a few months before and was working as a researcher at INRIA, the French government computer lab. His office was in one of a cluster of shacks in what was once a NATO base a few miles outside of Versailles. Doligez had a personal interest in crypto. He shared the sense of disgust at the way governments attempt to suppress their citizens’ ability to communicate privately with each other, and he believed that if someone cracked one of those artificially lame 40-bit cryptosystems, it would be a blow against the powers that be. He also guessed that after the successful RSA 129 crack, a two- or three-week effort should do the job. So as time passed between Finney’s challenge and its solution, he wondered what the hell had happened.
As a researcher at INRIA Doligez had access not only to the workstation in his small office, but also to an entire network of computers, including a Maspar supercomputer. Doligez studied the SSL specifications and concocted a small program to allow a computer quickly to test out a potential key, then adapted the program so it would work on the various machines on the INRIA network, as well as on some machines at the nearby universities, L’École Polytechnique and L’École Normale Supérieure.
Then he began his own multiple-computer attack. Whenever an INRIA worker strayed from his or her computer for more than five minutes, Doligez’s program would take over the machine, crunching perhaps 10,000 keys a second. Simply by touching the keyboard, a user could regain control over the machine. No one complained.
Doligez figured that his odds of finding the key would be better if he started from the end of the keyspace and worked backward. “I figured the cypherpunks would start from the start, so I started from the end.” He set his network into action on Friday, August 4, and left for the weekend. On Monday, he returned and discovered a bug in his program. He restarted the process. From that point, the number crunching ran perfectly, but he wound up writing ten new versions of the software over the next few days to address glitches in the communications between machines. The program was working fine when Doligez left work on Friday, August 11. Thanks to a national midsummer holiday the following Tuesday, August 15, it would be a four-day weekend, but when he checked in from his home computer before the holiday ended, the software gave him the message he was waiting for.
“I saw it found the key,” he says. SSL had been cracked!
The following day, Damien Doligez drove to work from his home outside Paris and recovered the key from his workstation, then successfully decrypted the message. He posted a message to cypherpunks with the heading “SSL challenge—broken!” As proof, he displayed the plaintext. Those familiar with the RSA 129 crack appreciated the significance of the address of the fictional character that Hal Finney had created in his coded message. Mr. Cosmic Kumquat, of SSL Trusters, Inc., lived at 1234 Squeamish Ossifrage Road.
Though technically it was anything but shocking—the mathematics of cryptography dictated that a weak key should fall to a concentrated effort—the very idea of cracking Netscape’s crypto captured the imagination of the popular press. The media descended on Damien Doligez. Because the break occurred only a week after Netscape enjoyed perhaps the most successful IPO in history, some journalists played the crack as if it spoke to the nature of the browser’s overall security, and not as an example of the way the government export rules weaken software in general. In a message that Netscape posted on its site later that week, the company noted that Doligez had simply broken one message—and that it took about 64 MIPS years to do so. Netscape also estimated that the cost of breaking the message had been $10,000. But as Doligez pointed out in his own response, he had used idle computer time, and paid nothing to do so. Netscape was on firmer ground when it noted that the domestic version of Navigator used a much sounder 128-bit key. “The computer power required to decrypt such a message would be more than a thousand trillion trillion times greater than that which was used to decrypt the RC-4-40 message,” wrote Netscape.
Which as far as the cypherpunks were concerned was exactly the point: export-level crypto was needlessly weak.
But the cypherpunks were not through with Netscape. At Berkeley, two first-year graduate students were inspired by group cryptanalysis. They were twenty-two-year-old Ian Goldberg and twenty-two-year-old Dave Wagner. They, too, thought it would be a good idea to hack Netscape, the new flagship for Internet security. But they had missed out on the obvious brute-force attacks—Goldberg had been moving to California from his native Canada and Wagner had just arrived after getting his undergraduate degree at Princeton. So they began to explore a different mode of attack, more akin to the second of Robert Morris’s recommendations: look for plaintext. Could it be possible that the Netscape security team had made some simple yet egregious error in implementing their software, thus exposing what might be millions of electronic commerce transactions to eavesdroppers? Not likely. But, as Morris had suggested, you never know unless you look.
And that’s when Wagner saw it. Buried in the code were the instructions for Netscape’s Random Number Generator (RNG). This is an important part of any sophisticated cryptosystem—the piece of code crucial to scrambling the letters so that the encoded text offers no tell-tale patterns that would help a cryptanalyst. It is well known that a lack of true randomness is a weakness smart codebreakers can eventually exploit. So it is important to have a solid RNG—something that spins the alphabetic roulette wheel thoroughly.
An important part of a good RNG is the use of an unpredictable “seed”—a number that begins the randomization process. Since, unlike dice, computers do the same thing each time they run, it is essential to begin the process with a seed that a potential opponent cannot possibly guess. Methods of doing this often include using some off-the-wall statistics from the real world—the position of the mouse, for instance. Anything that an enemy could not possibly know.
Netscape, as it turns out, had ignored this wisdom. When Dave Wagner looked closely at the code, the error jumped out at him. Netscape derived the seed of its RNG from three elements: the time of day and two forms of user identification called the Process ID and the Parent ID. A disaster. A foe would burn few computer cycles and even fewer brain cells finding the first part of the seed: it is easy to run through the limited number of times of day. And in many cases, both kinds of identification numbers were also easy to find, particularly if someone was sharing a server with a number of people—as often happens in an Internet environment. “If an attacker has an account on your machine, it’s trivial,” says Goldberg. “Here at Berkeley, there are thousands of users. If anyone uses Netscape, you can discover the IDs.” But even without that advantage, attackers could simply calculate the IDs: the identification numbers in question were only fifteen bits long, easily susceptible to brute-force search.
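A hypothetical sketch shows why such a seed falls quickly: the seed-mixing function, the search windows, and every number below are invented for illustration, not Netscape's actual code, but the shape of the attack is the same—enumerate the small space of plausible seeds until one reproduces the observed output.

```python
import random

def make_seed(seconds, pid, ppid):
    # Invented stand-in for the browser's mixing step: pack the time of
    # day and two 15-bit process IDs into one integer.
    return (seconds << 30) | (pid << 15) | ppid

# The "victim" seeds its generator; the attacker knows the time roughly
# and knows PIDs are small.
victim_seed = make_seed(seconds=83_211, pid=17_344, ppid=912)
session_key = random.Random(victim_seed).getrandbits(64)

def recover(observed_key):
    # Enumerate plausible seconds and ID values. The windows here are
    # kept narrow so the sketch runs fast; even the full 15-bit PID
    # space is only 32,768 values per ID.
    for sec in range(83_200, 83_230):
        for pid in range(17_300, 17_400):
            for ppid in range(900, 925):
                guess = make_seed(sec, pid, ppid)
                if random.Random(guess).getrandbits(64) == observed_key:
                    return guess
    return None

assert recover(session_key) == victim_seed
```

The lesson is that the strength of the 128-bit session key is irrelevant if the seed that produced it can be guessed in a few thousand tries.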
Over the course of a weekend, Wagner and Goldberg wrote a program to exploit the weakness. On Sunday night, they tested it. By zeroing in on the huge flaw in Netscape’s implementation, they were able to find a secret key in less than a minute. Hasta la vista, Netscape security. Goldberg posted the result on the cypherpunks’ mailing list that night. “We didn’t expect lots of press,” he said. Silly boy. Among the readers was a New York Times reporter. When the story ran in the Paper of Record, the two grad students were deluged with curiosity seekers and journalists. Of the things that the two grad students had to say, perhaps the most sobering was Goldberg’s observation, “We’re good guys—but we don’t know if this flaw has been discovered by bad guys.”
Unlike the first Netscape crack, where the company could quite rightfully claim that their otherwise strong crypto was crippled by government restrictions, this was a total flub. You didn’t need to tap a multi-workstation network, or get access to a supercomputer. In certain circumstances all you needed was a minute’s worth of crunching on a vanilla Pentium machine. “Our engineers made an implementation mistake,” admitted Mike Homer, Netscape’s vice president of marketing.
The error cast a shadow on the security of the leading Internet software company. “If Netscape did this wrong, what else did they do wrong?” asked cryptographer Bruce Schneier. But the more pressing question was, if Netscape was unsafe, what was safe? Netscape, after all, was making a concerted effort to protect its users. If the Navigator could be cracked so easily, what hope was there for the others?
There was a bright side to the event: you could argue that things worked properly because the cypherpunks publicly exposed a weakness, which Netscape immediately moved to fix. But the lasting lesson was somewhat darker. As the Internet proliferated, the public was beginning to become truly dependent on networked computers for financial transactions and storing private information—everything from buying books to making stock trades to paying bills. New businesses were planning to put medical records on-line. But security was still haphazard at best. And more and more, it was becoming clear that one big reason for this failing was the United States government’s long-term stalling action. While it tried to push Clipper and key escrow as its pet solution to the problem, the Internet kept going—without an organized effort to provide the protections it needed.
During the mid-1990s, though, those trying hardest to bring to fruition a new era of cipher protection—one that would finally secure the Internet and other electronic means of communication—found themselves under increasing fire. It seemed that those in charge of the laws and institutions of society, while not able to shut down mathematical and engineering progress, could do plenty to make crypto innovators know that their actions had consequences. The question became how far was the government willing to go to invoke those consequences.
For Ray Ozzie of Lotus such a lesson in power would have seemed unnecessary: he was committed to working within the system. (Besides, in 1993, Lotus had officially joined the Establishment when it was bought by IBM for $3 billion.) In the years since his early adoption of RSA, Ozzie had become a vocal figure in the crypto battles, testifying in Congress and visiting key administration figures. Though his procrypto bias was plain, Ozzie’s easy manner and willingness to consider the opposing view earned him the respect of even export hard-liners. He was a realist. Unable to wait for the government to liberalize its rules, he was constantly brainstorming for innovative ways around the export impasse.
After the Netscape crack, overseas buyers of Lotus Notes became increasingly uneasy using the 40-bit encryption IBM was permitted to ship overseas. They wanted to know why it was that American customers were sold a version with 64-bit keys, millions of times more difficult to break—while their version could be cracked by some random postdoc outside Paris. (Meanwhile, companies like Microsoft, which didn’t want the hassles of making two flavors of the same product, gave all their customers weaker crypto. This made the whole product line less valuable to those who wanted encryption, and some of those customers began buying from foreign companies that could legally sell them strong crypto.)
In 1995, Ozzie came up with what seemed a preferable compromise, at least in the short term: a mathematical fix devised to satisfy the NSA’s requirements. Though Ozzie hated Clipper, his scheme was sort of a less onerous version of it. Lotus would still sell two versions of Notes, but unlike prior versions, both would have 64-bit encryption. But the international version would have a little gift for the NSA: something called the National Security Access Field (NSAF). This consisted of 24 bits of the encrypted data that the NSA, and only the NSA, could decode. It was to be encrypted with the NSA’s public key, so only the folks at The Fort could decrypt that field. After the NSA used its private key to unscramble the 24-bit NSAF, the Notes-encrypted messages would have shrunk from 64-bit ciphertext to, effectively, 40-bit ciphertext. Cracking the remaining code would require precisely the same work factor as messages encrypted with 40-bit keys shipped under the old system. But since the overall encryption was stronger against all attackers other than the NSA—and it was those other attackers most users were worried about, like vandals or industrial spies—Ozzie figured that this solution might help in the short run.
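The core of the scheme can be sketched as a key split. This is a schematic illustration under stated assumptions, not Lotus's actual wire format: the variable names are invented, and the real NSAF was the 24 bits encrypted under the NSA's public key rather than handed over in the clear as here:

```python
import os

KEY_BITS = 64    # full session-key strength, at home and abroad
NSAF_BITS = 24   # portion escrowed to the NSA in the export version

def split_key(session_key: int):
    """Split a 64-bit session key into the 24 bits destined for the
    National Security Access Field and the number of bits everyone
    else must still attack by brute force."""
    nsaf = session_key >> (KEY_BITS - NSAF_BITS)  # top 24 bits
    residue_bits = KEY_BITS - NSAF_BITS           # 40 bits remain
    return nsaf, residue_bits

key = int.from_bytes(os.urandom(8), "big")
nsaf, remaining = split_key(key)
print(f"work factor for the NSA: 2^{remaining}")       # 2^40, the old export limit
print(f"work factor for everyone else: 2^{KEY_BITS}")  # 2^64
```

The design choice is visible in the two printed numbers: the NSA faces exactly the 40-bit problem it had already blessed for export, while every other eavesdropper faces the full 64 bits.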
Lotus filed two patents for its innovation, called “Differential Work-factor Cryptography Method and System,” in December 1995, and included the innovation in the new version of its software, Notes Release 4. Ozzie first spoke about it publicly in January 1996, at the RSA Data Security Conference in San Francisco. The conference was another of Jim Bidzos’s marketing brainstorms. Since 1990, the RSA Data Security honcho had been gathering commercial crypto customers in the Bay Area, sponsoring a few days of seminars and a small trade show where vendors could show their wares. From a gathering of a few dozen geeks at the Sofitel Hotel near RSA’s Redwood City offices, the conclave had grown to thousands and was now held at a large hotel near Union Square. Ozzie’s speech drew a lot of attention, and not a little hand-wringing: some wondered whether the dynamic designer behind Notes had given up the fight.
No, he hadn’t. Ozzie was just pursuing a more subtle agenda. “I wanted to stir things up,” he said. The idea was to drive a wedge between the administration and the NSA. Once Al Gore had backed down from the idea of government-controlled escrow facilities, the NSA found little to like in those post-Clipper ideas. If people stored keys in private facilities, authorities would need a warrant to get hold of them. But the NSA operated in secret and was banned from domestic surveillance. So the agency might prefer Ozzie’s scheme—which gave it a head start in cryptanalysis. (It wouldn’t need a warrant to get those 24 bits’ worth of decryption.) Thus, Ozzie’s scheme was far from a sellout—it was a subversive strategy to get the NSA and the administration arguing for different approaches. In the confusion, he hoped that his industry could sneak through its own solution.
Before Ozzie could congratulate himself on his cleverness, he discovered that the government was not without its own means for dealing with such strategies. On December 30, 1996, both Ozzie and his coinventor Charles Kaufman were sent letters labeled in boldface: SECRECY ORDER. Their patent application, read the letters, “contains subject matter the unauthorized disclosure of which would, in the opinion of the sponsoring defense agency, be detrimental to the national security.” (In the space where the government patent officer could check off which agency that was, there was an X next to “ARMY.”) Disclosing the subject matter to anyone without authorization, they were warned, would subject the inventors and IBM to a penalty, including a jail term. Finally, they were instructed, any copies of the subject matter “should be destroyed by a method that will prevent disclosure of the contents or reconstruction of the document.”
Ozzie, who received the order on January 7, 1997, immediately understood that complying with that order presented something of a problem. Not only had he spoken in detail about the scheme numerous times, but the “subject matter” had also already been distributed to almost six million Lotus Notes users, about half of whom were outside the United States. He quickly informed his bosses at Lotus, who immediately began pondering the consequences of having one of the most popular software programs in the world deemed a government secret.
Perhaps the best thing Ozzie did was to have a friend call the deputy director of the NSA, Bill Crowell, who reportedly laughed when he heard the news, and told the friend he’d look into it. On January 9, Crowell called Ozzie. It was all a mistake, he said. Everything would be fixed. Indeed, the next day, when IBM attorneys got in touch with the Patent Office, they got a verbal confirmation that the order had been rescinded, and later got a fax to that effect. No longer were Ray Ozzie, his coinventor, and IBM liable for about six million violations of the patent secrecy act. But after everyone had some time to breathe, questions remained. If this was the fate that welcomed someone trying to serve his customers in the spirit of key escrow, what would happen to those who outright challenged the government?
Jim Bidzos could answer that question. As he took the most public stance possible in opposition to the government—he even distributed posters urging people to “Sink Clipper”—the relationship between his company and the NSA had gotten more contentious. Though Bidzos had no hard evidence of having been wiretapped, he assumed that he was under surveillance. Perhaps the most egregious confrontation came in April 1994, during a meeting with three NSA export officers, all of whom Bidzos had been grappling with for years. Two were women he’d come to trust to some degree, but the third was a man who clearly despised Bidzos and his company.
Since the NSA reps didn’t open the meeting with any specific issues, Bidzos used the opportunity to lecture them about Clipper: no one would use it, it was a flawed system, yadda yadda. Bidzos noticed the man from the NSA getting more and more agitated. Finally the official spoke. If I see you in the parking lot, he said, I’ll run your ass over.
Bidzos recalls being stunned but finally he replied. “I’ll give you an opportunity to retract that or apologize,” he said. But the man kept pressing. I’m serious, he raged. You don’t understand me, do you? Was Bidzos getting an official warning, sort of a Triple Fence equivalent of a Mafia kiss on the lips? Should he avoid parking lots? Bidzos felt that the guy was most likely just venting, but he didn’t want to let the threat go unchallenged. He told a newspaper reporter, and the story found its way into the local paper. Not long afterward Bidzos received a phone call from the NSA guy’s boss. Bidzos got an apology. Even if his life wasn’t at risk, though, Bidzos felt that the agency wanted him out of business.
But at least Bidzos wasn’t under the threat of indictment. That fate was reserved for his sometime nemesis Phil Zimmermann.
Ever since the release of Pretty Good Privacy, Zimmermann had assumed that his biggest problem was the intellectual property dispute with RSA. Jim Bidzos thought nothing of publicly attacking Zimmermann, and at the drop of a fax button, he would zip journalists a copy of Zimmermann’s (ambiguously) written promise to stop distributing PGP, a vow apparently not kept in spirit. But Zimmermann never thought that he would find himself under criminal investigation. So when two women from the U.S. Customs Service in northern California came to visit him in 1993, he assumed that they were there at Jim Bidzos’s bidding.
Indeed, though the investigators wanted to know how PGP was distributed, many of the questions dealt with PGP’s similarity to RSA’s products. As far as technological expertise went, the investigators seemed clueless. Zimmermann had to explain to them the very basic ideas of crypto and software distribution. When they left he felt that he had little to worry about. The whole thing was some Bidzos harassment, he figured. “I don’t think that there will be action against me,” he said at the time. “They raised questions about the [export regulations], but I defused that.”
Not quite. United States Attorney William Keane was indeed concerned about a possible export violation. After all, within hours of PGP’s release on the Internet, the strong crypto program had found its way overseas. It’s unclear whether pressure from Washington had anything to do with it, but some weeks later, Keane informed Zimmermann that he was under investigation for illegally exporting munitions. (Kelly Goen, who had identified himself to MicroTimes columnist Jim Warren as a Johnny Appleseed of PGP, was also a potential target.)
For the next three years, Zimmermann was in legal purgatory, investigated by a grand jury but unindicted. His lawyers advised him to lie low. But PGP’s fame had given Phil Zimmermann a taste for speaking out. Besides, he felt that his best chance lay in taking the case to the public. Whenever he had talked to just plain folks about PGP and crypto issues, they had become outraged at the prospect of the government’s limiting the ability of people to communicate privately. He suspected, with good reason, that even techno virgins would be equally indignant at this new atrocity: here was Big Brother himself, contemplating a prison cell for someone who freely distributed privacy software to freedom fighters, lovers, and those who simply felt that their secrets were nobody’s business. What’s more, the case against Zimmermann himself was weak; he wasn’t even the one who’d posted his program to the Net. The guy who had done the posting had told Jim Warren that he scrupulously limited the uploads to American sites. Was the Justice Department actually asserting that export restrictions prohibited U.S. citizens from distributing legal materials to other U.S. citizens?
Oh, the export regulations. The more you looked at them, the weirder they appeared. One recent controversy involved Bruce Schneier’s 1994 book, Applied Cryptography. It was a technical cornucopia of cryptological mathematical theory, explanations of popular cryptosystems, and all the algorithms that a security specialist or cypherpunk would ever need. The Millennium Whole Earth Catalog called it “the Bible of code hackers.” But while anyone could ship the physical book overseas, the crypto restrictions seemed to ban the export of those same contents in digital form. At least that’s what cypherpunk Phil Karn found out when he applied for a “commodity jurisdiction” (or CJ) to export the book, along with an accompanying floppy disk with the same contents on it. Officials confirmed that the book could be exported, but not the floppy. It seemed absurd.
So Zimmermann talked, and generated publicity. He seldom failed to note that Burmese rebels reportedly used PGP to avoid the deadly consequences of being discovered in antigovernment activities; in testimony to a congressional hearing in 1993 he also noted that he’d received an effusive thank-you from a Latvian patriot who claimed, “your PGP is widespread from Baltic to Far East and will help democratic people if necessary.” When confronted with the charges from law enforcement agencies that PGP was particularly useful to criminals—in one Sacramento case, the cops couldn’t read a pedophile’s diary encrypted with Zimmermann’s software—he argued that all technology has trade-offs. Perhaps the highlight of Zimmermann’s odd celebrity came one day in San Francisco when some businesspeople decided to take him for an evening on the town that wound up at a North Beach strip club.
The young lady lap dancing in proximity to Zimmermann asked casually what he did. “I’m a cryptographer,” he said. “I wrote a program called PGP.”
The lap dance stopped in midgyration. “You’re Phil Zimmermann?” she asked in awe. “I know all about PGP!”
True, cypherpunk sex workers were not everyday occurrences. But PGP’s audience was beginning to extend beyond techies and privacy nuts. The Wall Street Journal described how PGP was used by lawyers maintaining electronic confidentiality with clients, authors protecting their works in progress from copyright infringers, and an astronomer staking his claims to his celestial discoveries. In order to entice commercial audiences, Zimmermann had licensed the code to a company called ViaCrypt. Since ViaCrypt already had paid a licensing fee to RSA, it could sell PGP to business customers without fear of a lawsuit. (Supposedly paying two license fees was worth it, since PGP had become, by virtue of its underground following, a wonderful brand.) Beginning in 1994, the main distribution point for the much more popular freeware version was an unexpectedly mainstream ally, the Massachusetts Institute of Technology. Some there, notably professor Hal Abelson and network manager Jeff Schiller, believed that the Institute should be allowed to provide Americans with programs that they were legally permitted to use—and do it on the Internet, which was by far the most expedient method of software distribution. So MIT stored the latest versions of PGP on its Internet server and allowed anyone to download it—after asserting that they were, indeed, Americans.
The honor system obviously wasn’t what the government had in mind when establishing the export laws. So flimsy was the MIT protection against export that copies of PGP downloaded from its site were spotted outside the country two days after the program was made available. Still, the citizenship restriction apparently was sufficient for MIT to avoid official complaints, let alone a criminal investigation. Not that the government officially approved of the arrangement. In one memorable session at a 1995 conference, MIT’s Jeff Schiller and NSA counsel Ronald Lee (who replaced Stewart Baker in 1994) faced off. Despite repeated pleas to make some sort of statement about whether MIT’s restrictions were sufficient, Lee refused to draw even the vaguest guidelines for what was permissible and what could land you in jail. Meanwhile, the MIT Press published a book (those analog dead-tree artifacts were still around) that contained nothing but hundreds of pages of C source code—the entire PGP program, formatted so that computer scanners and character recognition software could easily transform the printed hard copy into a real-life industrial-strength crypto product. It seemed almost surreal that such a scheme could be legal while a grand jury still contemplated indicting Phil Zimmermann, but that was the shaky state of crypto export policy in 1995.
Another crypto rebel faced with intrusions from the nasty real world was Julf Helsingius, the Finnish programmer who ran one of the first, and certainly the most popular, remailers in the world. By 1995, his operation called Penet was a shining example of crypto anarchy, stripping identification from thousands of messages each week, and sending them off on their merry anonymous way. Its operator was himself becoming well known in certain circles—and reviled by government doomsayers who warned that such services would prove the end of civilized society itself. But when the real trouble came it was instigated not by a government, but by a private group: the Church of Scientology.
Scientologists had been routinely incensed by the criticisms of unhappy former members on Internet discussion groups. In some cases, these apostates had obtained church documents and were posting them on the Net. Scientology officials wanted to charge these people with violating the church’s copyright and trade secrets. But since the addresses of the critics were laundered through the cypherpunk remailer system—very often on Penet, as it turned out—there was no easy way to find who was responsible for the messages.
Then it turned out there was a way. Penet—unlike many of the cypherpunk remailers—was “two way,” enabling people to respond directly to anonymous postings. This required a means for Julf’s system to keep track of who was sending messages. First, church lawyers wrote a letter to Helsingius, formally notifying him that his service was forwarding mail that violated their copyright. Julf politely replied that his policy was to keep hands off the traffic going through his computers. Didn’t they “get” remailers?
The lawyers wrote back, threatening legal action. Helsingius, in Finland, figured that the chances were slight that these faceless attorneys in California could do any such thing. Then Julf Helsingius’s phone rang. It was a representative of the Church of Scientology, in person. In Finland.
Would Julf like to be taken out for dinner?
No sense in turning down a meal, Julf figured. He suggested a Thai joint. The man was friendly, saying that he was a retired policeman, and that all he wanted was two things: for the messages to stop, and for Helsingius to let him know who was sending them.
“I’m sorry,” said Helsingius, “I can’t do that.”
But the Scientologists were not relying on Julf Helsingius’s good will to cough up a name. They filed a complaint with the Los Angeles police, charging that their stolen property was being shipped over the Internet, and fingered Julf as someone willfully withholding the identity of the thieves. In Finland, that’s a grave crime, sufficient to get a search and seizure warrant.
About a week after apologetically turning down the retired cop, Julf Helsingius got another call—from the Helsinki police. We have a court order, they told him, and must take your computer away so it can be searched. Helsingius’s heart sank—he knew that he had to comply. (Ironically, if Helsingius had used readily available crypto software to encrypt his data and protect his customers, such a search would have proven useless. But because of “performance reasons”—“the database is huge,” he explains—he did not encrypt the contents of his disk.)
But while Helsingius knew that he had to give up the single customer whom the Scientologists wanted, he didn’t want to put thousands of others at risk. Fortunately, in keeping with the cordial relations Finns have with their police, he was able to negotiate a transfer that would not require him to turn over the contents of his entire database. Helsingius simply copied the e-mail address of the offending party onto a floppy disk, and set it on the table, allowing the police to take possession of that disk. “I was not too happy, but it was a compromise,” he says.
Helsingius’s troubles were not over, however, because another institution of the real world was about to rain on his crypto anarchy parade: the media. The same day he handed over the disk to the police, a story ran in a Swedish newspaper claiming that the majority of all child pornography on the Internet was routed through a server in Finland. Obviously it was referring to Penet. But Julf knew that his service did not distribute such materials, since he blocked “binaries” (digital photographs). Not that people cared to check. When he tracked down the source of the information, it turned out that some child pornography ring had forged the headers on porno binaries, making it look as if the stuff came from his site when it actually was posted from a location in the United Kingdom. Still, the publicity was damaging, and became worse when a British newspaper repeated the charge, this time citing Helsingius personally as the evil middleman of Internet kiddie porn.
Meanwhile, the Scientology civil case wasn’t going away; Helsingius was called to a Finnish court to explain why he shouldn’t turn his names over. By then he had taken measures to protect the security of the 700,000 e-mail addresses on his server. The names still weren’t encrypted, but hidden: he’d moved the computer out of his home to a storage room at a secret location. And he’d hired lawyers, though God knows he didn’t have the money for that sort of thing. He claimed to the Finnish court that those who used his services were entitled to privacy. But to his dismay, the judge ruled that e-mail shouldn’t have the same protections as physical mail. The whole thing had taken cyberspace a step backward, at least in Finland.
That was it for Julf Helsingius. “The decision was quite clear,” he said. “There’s no way you can run a server like mine in Finland.” So on August 30, 1996, he shut down Penet. The ineluctable lesson was that while technology can provide crypto freedom, the real people who use it must live in the real world— where governments and regulators have the means to track them down. The real world can make things very, very complicated.
But David Chaum could have told you that, too.
The maverick inventor of anonymous digital cash—and the holder of important patents on electronic money—was having a difficult time keeping his company Digicash afloat. Though he had assembled a terrific staff of enthusiastic programmers and cryptographers at his Amsterdam headquarters, there was increasing unrest within the team. Chaum wasn’t completing the important alliances he needed to get his ideas into the mainstream. The intrigue in his little group intensified when one of his former students, Stefan Brands, claimed to have devised an alternative means to produce anonymous cash, and began exploring ways to license these ideas. Chaum insisted that Brands’s work was dependent on his. (Brands obtained valid patents.) Meanwhile, Digicash was still looking for the big deal.
Digicash had begun an experimental pilot program on the Internet called E-Cash. It used a form of scrip, digital Monopoly money. But it really was a test run for the prospect of true digital cash on the Net, a form of currency that would one day usurp folding bills and metal coins. For now, though, a user could get 100 “cyberbucks” simply by asking. The digital tokens could be e-mailed to friends or used to “buy” things from any merchants who decided, in the spirit of experimentation, to accept cyberbucks. All of this was done anonymously. Though one participating merchant was the Encyclopedia Britannica, which took Chaum’s pretend money in exchange for its articles, most of the extremely limited universe of E-Cash merchants consisted of ad hoc operations like “Big Mac’s Monty Python Archive Shop,” which offered unauthorized transcriptions of that comedy group’s routines for various increments of cyberbucks.
When Chaum finally did break some news, it was with a Midwestern institution with a name more familiar to literature students than international financiers: the Mark Twain Bank. The idea was to deliver a version of E-Cash where the units finally could be exchanged for real money, backed by Mark Twain.
Then, perhaps, larger institutions would jump in. At that point Chaum’s critics—one of whom dismissed his ideas as Walden Pond meets the Internet—might shut up. But the Mark Twain scheme never took off.
It wasn’t just Chaum who was having difficulties establishing crypto cash as an Internet standard.
Electronic commerce hadn’t taken off quickly enough, and the still-evolving standards of the Net made any sort of crypto-cash scheme relatively hard to use. Chaum’s competitors were unfettered by the moral obligation to provide anonymity to their digital money—they generally felt that people really didn’t demand it. But those companies were falling short of expectations as well, among them the well-funded start-up Cybercash and Mondex, which allowed consumers to download money on credit-card-sized smart cards (think of a bank cash machine on your personal computer). But those disappointments paled beside Chaum’s. It was Chaum who had the patents for anonymous digital cash. And when Digicash finally filed for bankruptcy in 1998, it was Chaum who lost the patents.
Yet despite the problems and harassments suffered by the crypto revolutionaries in the mid-1990s, their larger cause kept advancing. Skirmishes and setbacks to the contrary, it was the government that was on the run. After Al Gore first retreated by promising to amend the Clipper scheme in the letter to Representative Cantwell, the administration offered to negotiate a compromise with industry, and several meetings were held at NIST’s Maryland headquarters to try to reach a consensus. Hopes were high that some scheme would be reached whereby export standards were liberalized and any key escrow would be truly optional. Some of the things that the government was saying seemed quite reasonable. But when the administration’s officials unveiled the final rules, there were devils in the details. Bottom line: the export restrictions would continue as they always had and Clipper’s rules were only partially relaxed (for instance, users would be offered a choice of escrow agencies). The plan earned its sobriquet of Clipper II.
Inevitably, it was followed by Clipper III, in 1996. That plan had a new angle. The idea was to give cooperating companies a carrot—if companies promised to build escrow into their future products, they’d be allowed to export unescrowed DES-strength crypto now. But in practice, this proved no more attractive than the earlier versions. The obvious relief would have been a blanket export exemption of reasonably strong crypto. Instead the government tinkered with variations of its same old policy.
One continuing problem for the administration was that foreign countries regarded any American escrow scheme with suspicion. At one point, a “crypto ambassador” was sent off to try to convince the world community that such a global solution could work for all. But since he could offer no implementation where all countries had equal access to keys, his failure was a foregone conclusion. Some members of the administration considered this shortcoming the death blow to the entire policy.
Meanwhile, spurred by complaints that American industry was losing business to foreign firms selling crypto software, Congress was reconsidering a legislative solution. In 1996, Senator Conrad Burns of Montana introduced the Security and Freedom through Encryption (SAFE) bill, designed to lift export restrictions on programs that offered a “generally available” level of crypto. (Presumably, this included DES and domestic-strength RSA.) The bill also addressed fears that the government might one day declare that Clipper technology would be the only permissible crypto: SAFE would specifically forbid mandatory key escrow. Burns, a crusty Westerner who felt more comfortable seated on a saddle than in front of a computer screen, was tickled at his new reputation as a high-tech privacy crusader. But the bill itself sat bottled in committee, as legislators, still swayed by the NSA’s well-orchestrated briefings, stifled what the spooks continued to warn them was a threat to national security. “Some people here fully understand the issue,” complained Senator Patrick Leahy, an early SAFE supporter. “But with others, they’re talking like it was ten years ago, about an industry where ten days is an eternity.”
If the government’s goal was simply to stall—each day the dike doesn’t crack, we win—then its approach could be considered a success. But as the cypherpunk attacks against export-strength crypto demonstrated—and the interception of unencrypted cell phone conversations, including those of the House Republican leadership, dramatized—such a policy had its perils. The country lacked a strong electronic security system, a vulnerability that became more serious as the Internet wound itself more deeply into the fabric of American life.
That, at least, was a key conclusion of a major study by the National Research Council (NRC). That organization, the research arm of the National Academy of Sciences, undertook a comprehensive examination of the national crypto policy, and recruited a panel of experts from all sides of the issue, including former cabinet members, officials from the NSA, and critics from business and academia like Ray Ozzie and Marty Hellman. Their report, “Cryptography’s Role in Securing the Information Society,” was a surprisingly strong criticism of government policy, and recommended continued freedom for domestic encryption, relaxed export controls, and, above all, “a mechanism to promote information security in the private sector.” In other words, more crypto.
Perhaps the most interesting observation of the study came as a result of the classified briefings its members had received. (Three of the sixteen members declined clearances and did not attend.) Though they could not of course reveal what they had heard in the briefings, they could—and did—evaluate the importance of that secret knowledge in determining national policy. Answer: not much. “Those [classified] details . . . ,” the report stated, “are not particularly relevant to the larger issues of why policy has the shape and texture that it does today nor to the general outline of how technology will and policy should evolve in the future.” So much for the “If you only knew what we know” argument.
Some people in the administration chafed at that conclusion. (In the NSA, there was even some unhappiness that the title of the report could be read as an acronym, CRISIS.) They conceded that the classified briefings given the NRC participants were thorough, but contended that to really understand the issue, you have to live and breathe intelligence. Sure, Marty Hellman or Ray Ozzie understood in theory that it was important to wiretap a crook or intercept a terrorist’s call on a cell phone. But every morning the president and the vice president got nice thick books that zeroed in on the world’s pressure points— everything from cracked diplomatic dispatches to the car-phone conversations of Russian mafiosi. The Clinton people knew damn well that if crypto was universal, significant hunks of those books would disappear.
But that fine point was lost on the general public—and indeed on much of Congress, which commissioned the study. Instead, the NRC report stood as a call to arms to drop the silly restrictions against crypto and start using it to strengthen our own systems. After all, it argued, the genie’s out of the bottle. And quietly, some of the staunchest defenders of government’s control of crypto were themselves admitting it, too.
Then another front opened in the crypto wars. For the first time, export regulations were facing a serious challenge in the courts. A decade earlier, the NSA’s Bobby Ray Inman felt that he had successfully fended off the 1978 opinion of a Justice Department lawyer that the export regulations violated the First Amendment. But no judge had ever addressed the issue. Many legal experts thought that if a ruling did come on the question, it might not be to the liking of the crypto community. Indeed, a recent decision involving cypherpunk Phil Karn’s legal challenge to export the floppy disk version of Applied Cryptography ended in flames. Rejecting the idea that the same information permitted for export in hard copy should be granted the same privilege in digital form, a federal judge had not only denied the request but delivered a withering opinion, virtually accusing Karn of an immoral attack on national security. But that was a sideshow to a more important suit: that of Daniel Bernstein.
Bernstein was a graduate student at Berkeley. He’d become interested in crypto and security after someone hacked his own computer account in 1987, and thereafter wanted to include crypto algorithms in his course work. As a reflection of how times had changed, courses focusing on cryptography were now almost mainstream. Technically, though, regulations seemed to forbid anyone from placing a crypto concoction somewhere a foreigner might see it. Which was exactly what Bernstein wanted to do.
Bernstein’s project was inspired, coincidentally, by something Ralph Merkle had produced at Xerox PARC in 1989: a hash function called Snefru. Written in 1990 when he was an undergraduate at NYU, Bernstein’s addition to Snefru playfully tweaked the illogic of the export codes. He knew that while encryption programs were subject to restrictions, hash functions like Merkle’s (which don’t scramble information per se) were not. So Bernstein wrote a program that transformed the hash function Snefru into something that could perform encryption and decryption. (Think of Snefru as a banned automatic weapon shipped through customs without a trigger, and the new program as a kit that installs the missing part.) “It takes any good hash function and turns it into a good encryption function,” he later explained of his creation. He called his crypto package Snuffle and wrote a paper to describe what he’d done. But he was worried about publishing it, figuring, he later said, that “the government might not be too happy about me pointing this out.” So he put Snuffle on the shelf.
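Snuffle’s actual construction chained Snefru in a more involved way, but the general idea—deriving a keystream from a hash function and XORing it with the data—can be sketched in a few lines. The toy cipher below is an illustration only, using SHA-256 as a stand-in for Snefru; the names and parameters are the sketch’s own, not Bernstein’s:

```python
import hashlib

def keystream_block(key: bytes, nonce: bytes, counter: int) -> bytes:
    # Derive one 32-byte keystream block by hashing key || nonce || counter.
    # SHA-256 stands in for Snefru here; this counter-mode use of a hash
    # illustrates the idea, not Bernstein's actual Snuffle design.
    return hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()

def hash_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR the data against hash-derived keystream. Because XOR is its own
    # inverse, the same function both encrypts and decrypts.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = keystream_block(key, nonce, i // 32)
        chunk = data[i:i + 32]
        out.extend(b ^ k for b, k in zip(chunk, block))
    return bytes(out)
```

Running the same function twice with the same key and nonce recovers the plaintext—which is precisely the point of Bernstein’s observation: an exportable hash function plus a few lines of glue yields a working cipher.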
But at Berkeley in 1992, he reconsidered. Why not publish Snuffle? After all, it was not a commercial product but an academic exercise. Since the actual encryption relied on an already-published hash algorithm—he introduced no original encryption algorithms of his own—it presented no threat to the republic, so why would publishing it be a problem? The obvious place to release it was the sci.crypt discussion group on the Internet. But before uploading Snuffle to sci.crypt, he decided to take one final precaution to make sure he wasn’t violating any laws. He would ask someone in the government if such a step was permissible.
That little step kept Snuffle off the Internet for the rest of the twentieth century.
Bernstein’s first problem was identifying the proper government office to handle his request. After a series of queries he finally wound up at something called the Office of Defense Trade Controls. He sent his letter off in June 1992. To his dismay, the reply, signed by William B. Robinson, the director of that mysterious office, asserted that distributing Snuffle without a license would indeed put Bernstein in legal jeopardy.
Okay, Bernstein figured, I’ll go through the formality of getting the commodities jurisdiction—the “CJ.”
First, though, he hoped that the Office of Defense Trade Controls would clarify what his rights were, and what appeals he might have if he disagreed with a government decision. It took him until March 1993 to get someone to talk to him. Finally he got Charles Ray, the special assistant to William B. Robinson, on the horn. (Bernstein taped his conversations, with permission.) Basically, Ray told him that his rights were, well, nonexistent. If he posted Snuffle on the Net without clearance, and some foe of the United States downloaded his program from a terrorist base in Afghanistan or an apartment in Paris, Bernstein might have to scope out a jail cell for his next home. “There are no exempt groups,” Ray told him. “If you’ve got something considered technical information covered by the Munitions List . . . then being a member of the press [or an academic] does not provide you with any sanctuary. . . . You can still be prosecuted.” But what about the First Amendment? he asked.
“That freedom carries with it a responsibility to comply with the existing legislation and regulations” was Charles Ray’s interpretation of the U.S. Constitution.
A month later, Bernstein finally reached Ray’s boss, William Robinson, who confirmed that a CJ would be required before Bernstein could distribute his work. Subsequent conversations with government officials were even more frustrating. Not only was Internet posting forbidden, but Bernstein might be prosecuted even if he placed a copy of his paper in a public library. Of course, the National Security Agency became involved, as it always does in export cases of new crypto systems. Eventually, Bernstein managed to have some conversations with NSA representatives, learning that behind the Triple Fence some people considered Snuffle “strategic.” This meant, he inferred, that it was not trivial to break. “They offered to help me rewrite it to make it not strategic,” says Bernstein, but he deemed such a move counterproductive.
So he’d play the game. In September 1993, Bernstein filed for five separate CJs. He’d broken the problem up into different versions—ranging from English-language descriptions of the system to mathematical formulas—“to see where they’d draw the line.” Could the government consider each one a “defense article”? He still maintained a belief that at some point the fog would clear from a bureaucrat’s eyes and he would finally realize that Snuffle was simply one graduate student’s academic work, not a weapon. But in October 1993, the government replied that yes, each one of his mathematical formulas was a weapon, “subject to the licensing jurisdiction of the Department of State.”
Bernstein hadn’t begun the process as a rabble-rouser, but now he was himself thoroughly roused. He continued to pursue the case with a methodical patience that would prove devastating to the U.S. government’s eventual defense of its export regulations as they applied to Snuffle. He appealed the first CJ. When months passed without a response, he decided that he needed help.
His benefactor was John Gilmore, no stranger to court battles against the government. The senior cypherpunk already had accumulated a file cabinet’s worth of documents with Freedom of Information requests originally withheld but later kicked loose by legal appeals. Gilmore referred Bernstein to a lawyer named Cindy Cohn, who took the case pro bono (the Electronic Frontier Foundation helped with the costs and coordinated the effort with supplementary counsel). In early 1995, Bernstein and the EFF filed a complaint against the State Department, charging that the export laws were unconstitutional. At the center of the case was the contention that Bernstein’s computer source code was a form of speech, and that by preventing its publication, the government was denying Bernstein’s right to express himself.
That 1978 opinion—that the regulations might flout the First Amendment—was finally about to be tested. But few thought that a judge would resist the government’s inevitable claim that the export laws were crucial to national security, and that striking them down would unleash the modern-day version of the Four Horsemen of the Apocalypse: drug dealers, kidnappers, child pornographers, and terrorists. The case was tried before Judge Marilyn Patel in the Northern California District Court. One of her first acts did not seem promising for the plaintiff: she ordered the trial exhibits sealed, since the export rules forbade their distribution. But as the case progressed, Judge Patel proved to be more than sympathetic to Bernstein’s claims. Perhaps sensing this, the government tried a number of tactics to get the suit out of her court. It reversed itself on two of the five CJ determinations, admitting that those particular mathematical descriptions were simply “technical data.” It argued that Judge Patel’s court had no jurisdiction in matters involving export law. It filed for immediate dismissal. But on April 27, 1996, Patel decided the case should proceed. The reason was enough to make a government regulator’s blood run cold: Judge Marilyn Patel had determined that at least part of the encryption export control rules was indeed unconstitutional. Furthermore, she accepted the Bernstein team’s assertion that computer source code could be considered a form of speech. Which meant that the much stricter First Amendment rules regarding prior restraint applied to Snuffle. As far as Judge Patel was concerned, this wasn’t about keeping a weapon within our borders. It was about illegally suppressing an opinion. That summer, Patel officially affirmed her preliminary decision.
The government appealed to the Ninth Circuit Court of Appeals. By then Bernstein had received his doctorate and was teaching at the University of Chicago. He wanted to teach a course involving cryptography, but because of the continuing case, he required a government waiver to do so. It took another judicial ruling before he was finally permitted to distribute materials about his work—and then only to his students. The course was taught without discernible damage to the nation.
But still the case dragged on. Oral arguments before a three-judge panel were scheduled for December 1997. Conventional wisdom had it that the appeals court would strike down what was seen as an impudent ruling from a judge who, after all, sat on the bench in wacky San Francisco. But in the packed courtroom, a rather harried government lawyer, a man of baby-boom vintage with experience before higher courts, was questioned harshly by the judges. The panel seemed more impressed with Bernstein’s advocate, Cindy Cohn, a diminutive woman in her early thirties, who, despite an occasional wavering in her voice, presented her arguments forcefully. One unexpected point she made was that by preventing publication on the Internet, the government was failing to heed the recent Supreme Court decision that struck down a law known as the Communications Decency Act: the court had ruled that the Net was a beacon of democracy entitled to the highest level of First Amendment protection. Cohn also urged the judges to consider the implications of not allowing crypto to thrive: was it proper for the government to deny the tools that citizens might use to safeguard their privacy?
The three-judge panel pondered the case for more than a year, not handing down their ruling until May 1999. For Daniel Bernstein, it was worth the wait. By a two to one margin, they issued a broad opinion that not only affirmed Patel, but also went even further in celebrating cryptography itself as a vital component of democracy. Crypto should not be merely a state secret, they wrote, but also a protector of the people’s privacy. Somehow these two technologically unschooled jurists had gotten it. “Government attempts to control encryption . . . may well implicate not only First Amendment rights of cryptographers,” wrote Judge Betty Fletcher, “but also the constitutional rights of each of us as potential recipients of encryption’s bounty.”
Encryption’s bounty? Judge Fletcher was a cypherpunk in robes!
The afternoon that the decision came down, Bernstein was proctoring a calculus exam in Chicago. Only afterward, when he checked his e-mail, did he learn that he had clobbered the government.
The government appealed of course—but the export rules it was defending were looking less and less likely to survive. For years, the crypto dike had held admirably. But now it was crumbling. It was endgame for the government.
Oddly, the NSA no longer appeared to be the prime obstacle to a solution—behind the Triple Fence one could discern a sense of resigned acceptance of the new crypto reality. Clint Brooks himself was no longer on the front lines, but ultimately the institution he served had come to accept his idea of change.
Maybe its leaders recognized that instead of trying to hold back progress, their efforts might be better spent trying to prepare for the inevitable. Probably, when the NSA cipher wizards had really thought about it, the putative nightmare of crypto everywhere was something they felt they could handle—if they were granted more funding, of course. Perhaps, as Robert Morris hinted in his Crypto ’95 speech, and the cypherpunk-cracking effort had indicated, these shiny, “uncrackable” programs created by the private sector really weren’t so uncrackable after all, and the NSA was satisfied at its ability to get plaintext when it needed to. One caper funded by the Electronic Frontier Foundation had been particularly telling: a team of engineers led by John Gilmore and Paul Kocher had built a DES-cracking machine for $210,000.
(DES, of course, was still deemed a munition too hazardous to send abroad in normal circumstances.) In a 1998 demonstration, the device produced the plaintext of a DES-encrypted message in fifty-six hours. Obviously, if such machines were produced in bulk, obtaining such keys would be dirt cheap. One had to assume that the NSA had plenty of similar units in its basement.
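The arithmetic behind such a machine is straightforward. A rough sketch—the throughput figure is an assumed round number for a Deep Crack-class machine, not taken from the text:

```python
# Back-of-envelope cost of brute-forcing DES. The keys-per-second rate is
# an assumption for illustration; only the 56-bit key size is a given.
KEYSPACE = 2 ** 56            # DES uses 56-bit keys
RATE = 90e9                   # assumed keys tested per second
SECONDS_PER_DAY = 86400

worst_case_days = KEYSPACE / RATE / SECONDS_PER_DAY
average_days = worst_case_days / 2   # on average, half the keyspace is searched

print(round(worst_case_days, 1), round(average_days, 1))
```

At that assumed rate, exhausting the keyspace takes on the order of nine days, with the average search finishing in half that—numbers consistent with a one-off $210,000 machine, and with the observation that mass production (or a basement full of units at Fort Meade) would make DES keys dirt cheap.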
In any case, it was the FBI, particularly its director Louis Freeh, that kept urging a hard line—even to the point of continuing to insist that the bureau should have access to plaintext even at the cost of regulating crypto within U.S. borders. Freeh had finally managed to get a version of the Digital Telephony bill passed, presumably forcing the telecommunications industry to design its products to be wiretap friendly. (Congressional opponents of the concept, however, had foiled its intent by refusing to budget the hundreds of millions of dollars needed to implement the effort.) But Freeh continued to fear that crypto would be the death of wiretapping. Since 1994, he had been demanding publicly that if his agents were unable to get plaintext from their wiretaps, Congress should institute a new era of prohibition by banning unescrowed strong encryption. “The objective is to get those conversations, whether they’re [conducted] by alligator clips or [by] ones and zeros,” he said. “Whoever they are, whatever they are, I need them.”
But Freeh was no longer a Clinton administration favorite, and White House officials shrugged off his remarks.
Not that the administration had given up its hopes of stemming the cipher tide. It’s just that with each iteration, its anticrypto vision got flimsier and flimsier. White House apparatchiks insisted that the changes were all in the spirit of Al Gore’s willingness to work with stakeholders in the crypto world to find the proper balance between codes and snoops. But the only direction that Clinton’s people were going was backward. “The boat was getting shelled,” Mike Nelson admits. The surest sign that a policy is in big trouble is when the words used to describe it are so discredited that they require euphemisms. By 1997, the word “escrow” became verbum non gratum, despite the fact that thousands of Clipper-equipped phones had now been purchased, their keys gathering digital dust in the prescribed escrow facilities. Now the stated goal was called key recovery. A policy that began with the firm controls of Clipper—secret algorithms in tamperproof hardware