The Clipper Chip

Wikipedia 🌐 Clipper Chip

"Crypto: How the Code Rebels Beat the Government—Saving Privacy in the Digital Age"

Chapter - Clipper Chip



The creator of the Clipper Chip was an unintentional spook. Clinton Brooks’s passion was astronomy. He studied it at Yale during the late sixties, and wanted to make it his career, after fulfilling his ROTC obligations in the navy. His duty was slated for the Pacific, and he planned to move his wife and small children to Hawaii and sail as a shipboard communications officer. He didn’t realize that people at a certain intelligence agency had other plans for him.

Several years earlier, Brooks had been assigned for his mandatory summer duty to a location unknown to him: Fort George Meade. He had driven to Maryland, expecting a typical military base. Instead he was intercepted by inscrutable guards outside what looked like a modern office building in the middle of nowhere who told him that only those with high security clearances could enter. To his surprise, a phone call revealed that he already had been granted such a clearance. Welcome, Clint Brooks, to the National Security Agency. He might have thought of this duty as an interlude, but his superiors had apparently taken note of his abilities, and offered him an alternative to the navy. Not only could he remain in the States, but he’d have a chance at a deeper satisfaction—an opportunity to indulge his cosmic yearnings, to a degree, by working in top-secret satellite reconnaissance. He would not, of course, be able to talk about his work to his friends, neighbors, and relatives, because even the title of the satellite organization was more closely protected than the No Such Agency itself. But it sounded good to Brooks. So he declined his commission on the USS Pueblo—the intelligence ship that would be captured by the North Koreans a few months later, on January 23, 1968. He would work at the agency that dared not speak its name.

Twenty-four years later, Clint Brooks was an assistant deputy director at the agency that now did speak its name in public. And he found himself at the center of a crisis that involved the very mission of the National Security Agency: the rise of public cryptography. One day in the late spring of 1992, he walked over to the office of a recently arrived general counsel of the agency to enlist the newcomer’s aid in a campaign that, Brooks hoped, might help the agency get through this dangerous passage.

Traditionally, the NSA general counsel is recruited from outside, a lawyer familiar with government work with no particular experience in intelligence matters. Someone who can fit into the cloistered culture inside the Triple Fence, but who retains a sense of the real world beyond. It had been Bobby Ray Inman who first figured out that a sharp legal mind just plucked from the fray could best forward the agency’s business, and provide a level of oversight that perhaps a career spook might not. Ever since Inman’s lawyers helped him navigate the agency’s problems with academic crypto research, a series of sharp, relatively young attorneys had filled the post for a couple of years, then each had moved on.

Stewart Baker fit the mold. Born in 1947 and raised outside Detroit, he went to law school at UCLA, clerked for a federal judge, then went into private practice for Steptoe and Johnson, one of the most prestigious firms in the nation’s capital. He served for a few years in Jimmy Carter’s Education Department, then returned to Steptoe. When recommended for the NSA job, he’d been unsure about it. “Should I do it?” he asked a military friend. “What better could you do for your country?” his friend replied.

Baker had occupied his new office for less than a month before Clint Brooks’s visit. It was clear that the spindly, square-jawed NSA lifer was a true believer—but in what? Before he spoke, Brooks placed a large bottle of Advil on Stewart Baker’s desk. “You’re going to need this,” he said. Then Brooks laid out the entire story of how cryptography was going public. He told Baker about DES, the strong cipher that wound up in more common use than the NSA had expected, then about the development of public key, and RSA, and the agency’s troubles with the new cryptographic community that led to the compromise of prepublication review. And now, he said, the idea that you could control things by vetting academic papers was irrelevant: companies like RSA were selling crypto commercially. Baker was aghast. How did you let that stuff out? he wanted to know.

It wasn’t that simple, Brooks explained. The NSA has two roles. One, of course, is cracking ciphers and providing great intelligence to the rest of the government. But the other is to provide the United States with the best possible codes. Inside the Triple Fence, this duality was referred to as “Equities,” reflecting, no doubt, that both tasks were equally important. Clint Brooks was the Equities guy at the NSA.

It was a thankless balancing act, because an advance in one mission was sometimes a threat to the other one. In the old days, at least, the debate was confined inside The Fort, but now it took place in the halls of Congress and in the pages of the New York Times. Meanwhile, the specter of widespread encryption was like a train bearing down not just on the NSA but on society in general. Like the cypherpunks, Clint Brooks looked into the future and saw crypto everywhere. But while the crypto rebels embraced the vision, Brooks understood that this new reality was a potential disaster, if the agency did not adjust.

This was gospel that Brooks had been preaching for several years, at first to deaf ears. During most of the 1980s, after director Inman’s first skirmishes with the crypto academics, most people at the agency hadn’t been much concerned with the possibility that public cryptography would affect them in any significant way. Strong export laws kept everything under control, ensuring that nothing as strong as DES left the country without restrictions. In the chill of the Cold War, Congress always gave Fort Meade what it asked for. And though an occasional in-house Cassandra would cite some pundit’s prediction that in two or three years widespread commercial crypto would take off, it never did seem to happen. So it was easy to think that it might never happen. Brooks knew otherwise. Beginning around 1988, he came to understand the direction the Internet was taking and realized that, this time, the threat was real. But his superiors laughed when he tried to lecture them. What are you talking about? they’d say. We’re the only cryptographers! This is a military technology, not something that people want to use! Only when an Internet revolution became plausible, and companies like Lotus actually started to build crypto like RSA into their products, did the top levels of the agency come to realize that Brooks had a point. So they authorized him to find some sort of solution to this conundrum. And Brooks had indeed come up with one.

That was the reason for Clint Brooks’s visit to Stewart Baker: to get him on board with the plan. There was, he explained, a possible way out . . . a solution that not only could give the unprecedented protection of strong crypto to the masses, but that would also preserve the government’s ability to get hold of the original plaintext conversations and messages. In fact, for the past three years, Brooks revealed, the NSA had been creating such a scheme. It involved a technique known as key escrow.

The project had begun in 1989. Brooks, in his role as Fort Meade’s Equities man, had been racking his brain to figure out how to reconcile the two seemingly incompatible demands: the need for strong public codes and the agency’s need for plaintext traffic. Clearly, no solution was perfect. The idea was to strike the proper balance, giving users of nonclassified information both inside and outside the government a healthy measure of security, but not so much that the public’s safety was abridged. At the time the NSA, acting in accordance with the Memorandum of Understanding, had formed the working group on cryptography with the National Institute of Standards and Technology. In NIST’s acting director Ray Kammer, Brooks found a kindred soul. The two of them spent hours going over the problem, probing the technical and even philosophical aspects of a crypto policy.

In one of their early discussions, Brooks and Kammer had simultaneously had an epiphany: the use of encryption would have a profound effect on law enforcement, particularly in its ability to continue wiretapping. They began visiting people in the Justice Department and the FBI, none of whom had the slightest inkling of the troubles that lay ahead. Brooks or Kammer would tell them that all the authorizations to wiretap in the world might not help them when crooks used encryption, and their jaws would drop. Can’t you help us? the law enforcement people would ask.

Brooks had once assumed the solution might lie in a giant deception. The agency could create a putatively strong cryptosystem, one so convincing that companies would build it into their products and export it around the world, while the agency quietly built in a “trapdoor” that allowed the NSA secretly to derive plaintext from encoded transmissions. But after some clear thinking, he discarded that risky, and questionably legal, idea. Such a scheme would entail getting decrypted messages from U.S. citizens. You might be able to justify a hidden trapdoor to snoop on foreigners, but if Congress or some investigative reporter discovered that the NSA had launched a clandestine surveillance plan against Americans, the Church committee would look like a picnic.

So Brooks spent nights awake trying to conjure some other idea. On one of those nights, he had a flash. There could be a compromise that could satisfy everybody. In the physical world, a search warrant compelled a suspect in a crime to give authorities the combination of a safe. Why not translate that concept to the world of communications and computers? If you created a system by which special duplicate encryption keys were somehow spirited away and stored in secure facilities, you would essentially be holding lock combinations in escrow, unavailable to anyone but those who had authority to retrieve them. Those with that legal authority—a search warrant from a judge or an understood set of national security criteria—could get the keys from the trusted storage facility. Once that access was assured, there would be no problem in allowing the encryption itself to be as strong as anyone liked. Make it uncrackable! If the FBI or the police needed the key, and a judge concurred, then they’d have the wherewithal to decipher it, just as if they were the intended recipients.

To some people at the agency, the scheme was a heresy: You’re going to put a back door into a cryptosystem . . . and TELL people about it? But full disclosure was a critical part of Brooks’s vision. He really wanted this new scheme to kick off a national debate about cryptography. Only then, he believed, could an escrow scheme, which would require an elaborate infrastructure, be established. With the government no longer concerned about getting hold of encoded messages, the path would be free and clear toward a universal blanket of crypto, with organized public key distribution, standardized digital signatures, and automatic encryption of messages. The privacy nuts and conspiracy freaks would raise hell at the idea of escrowed keys. But if all the issues were aired, all the dangers addressed, all the benefits sketched out, surely reasonable people could see that this plan was the best way to protect our communications without sacrificing our safety. Anyway, what was the alternative?

Of course, if such a scheme were to be launched, the NSA itself would have to change, readjusting its focus so it would operate in a highly computerized—and crypto-ized—post–Cold War world. The intensity with which The Fort still maintained its veil of secrecy was no longer appropriate. If the people were to buy such a radical idea, the NSA would have to earn their trust. Thus it was imperative to bring the debate on cryptography to the public, treading on once forbidden areas with brutal honesty.

Brooks eventually got approval to pursue his plan, but his idea that the NSA should collaborate with the general public was received with skepticism or worse. He found himself arguing like some deranged Jeremiah. “This has got to be a national policy,” Brooks said at one meeting of the top NSA officials. When asked by a deputy director to explain further, he replied, “This isn’t a judgment that can be made by the director of the National Security Agency or a committee of deputies . . . it’s a value judgment as to what’s in the best interest of the country. It has to be decided by the president of the United States.” The official who answered directly to the voters! His peers thought he’d gone off the deep end. This was the National Security Agency, their attitude was, and we don’t do that sort of thing.

While waiting for the public debate to take shape, Brooks was working hard with other agencies to set up a structure for his ambitious key escrow plan. Because of the Memorandum of Understanding, of course, the agency would have to develop the scheme with NIST. But that was no problem. The joint technical working group had been working on the public crypto situation since the very first meeting in March 1989, particularly on the digital signature algorithm. Public crypto was known within the group as Issue One.

A third stakeholder in the discussions was the FBI. The early alert from Brooks and Kammer had indeed awakened interest at the bureau: in 1991, director William Sessions had written to defense secretary Dick Cheney about computer security, clearly indicating that his agency wanted a voice in determining policy. The FBI, it turned out, would actually assume the hardest line on the issue.

The NSA, of course, did the technical heavy lifting. By 1990, thirty of its mathematicians were working on the problem. They quickly settled on the bedrock of the system, a powerful encryption algorithm that had been kicking around Fort Meade for a couple of years. Its codename was Skipjack. It was a block cipher like DES but was deemed much stronger. Its recommended key length was 80 bits as opposed to DES’s 56; it used 32 rounds of substitution instead of 16. (There appear also to have been subtler technical reasons for Skipjack’s superiority, but of course, the NSA was loath to reveal these.)

Though Brooks tried to argue that in this new era, it might be appropriate to reveal the algorithm—insisting, in fact, that to win over their critics they would probably be forced to publish it anyway—he met with staunch resistance. Never—never—would the agency allow its foes access to what amounted to an advanced course on the cutting edge of codemaking. Things don’t work that way at The Fort.

Skipjack, though, was only a single component of what the NSA called Capstone, which was a complete public key system that would include the digital signature standard. Of course, this particular scheme had an additional complication: how would you implement the escrow? You’d have to figure out a way to isolate a copy of each key and send that information elsewhere for storage. By 1991, the NSA decided that trying to do this in software was too risky—it feared that some foe could change the code to build in a weakness—and concluded that a better method would be to put the whole shebang on a tamperproof computer chip. An experienced defense contractor in Torrance, California, called Mykotronx was hired to fabricate the chips.

The system itself worked by inserting several new components into the classic equation where Alice encrypts and Bob decrypts. One of them was the “unique chip identifier.” It was a number that matched up with a “chip unique key” that was assigned to a single physical chip. Each device—a computer or perhaps a phone—would have its own unique chip identifier and chip unique key.

When two people wanted to communicate privately, they would each have one of those devices. If, for instance, they wanted a phone conversation that an eavesdropper couldn’t hear, they’d have special phones with the technology built in. Once the connection was made, the phones would zip signals to each other (via a Diffie-Hellman exchange) to calculate a new symmetrical key, called the session key. Using Skipjack, that key would actually encode the sounds of each speaker as the sounds left the phone and decrypt those sounds as they emerged from the other phone. But along with the encrypted conversation, the phones would transmit another set of bits, called the Law Enforcement Access Field (LEAF). (It was originally called the Law Enforcement Exploitation Field, but was changed to a somewhat less ominous term.) The LEAF would be generated by a set of calculations involving the session key, the chip unique key, and the unique chip identifier, winding up with two important components: an encrypted version of the session key and the unique chip identifier. All of that would be further scrambled by the family key.
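The LEAF construction can be sketched in a few lines of code. This is a minimal sketch with loud caveats: Skipjack and the family-key scrambling were classified, so a hash-based toy cipher stands in for both, and every function name here is an illustrative invention rather than the real design (the actual field also carried a short checksum, omitted here). Only the sizes follow the description above: 80-bit keys and a 32-bit chip identifier.

```python
import hashlib

SESSION_KEY_LEN = 10  # 80 bits, Skipjack's key length
CHIP_ID_LEN = 4       # 32-bit unique chip identifier

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Stand-in for the classified ciphers: XOR the data against a
    SHA-256-derived keystream. Applying it twice with the same key
    restores the original bytes."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def make_leaf(session_key: bytes, chip_id: bytes,
              chip_unique_key: bytes, family_key: bytes) -> bytes:
    # 1. Wrap the session key under this chip's unique key.
    wrapped = toy_cipher(chip_unique_key, session_key)
    # 2. Append the identifier that points to the escrowed copy
    #    of the chip unique key.
    inner = wrapped + chip_id
    # 3. Scramble the whole field under the system-wide family key.
    return toy_cipher(family_key, inner)
```

The self-inverting property of the toy cipher is what makes the government's recovery procedure, described next, possible: anyone holding the right keys can peel the layers back off in reverse order.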

So how would officials get hold of those keys? They would already be in possession of one of them, the family key—there’s only one in the whole system. The tricky part of the scheme would be getting the proper chip unique key and, ultimately, the session key. This would be performed by way of the LEAF.

What if an eavesdropper captures the information on the LEAF? Even if he could isolate the chip identifier from the LEAF, it would be useless. All the identifier would do, really, is identify. It would point to a chip unique key in a vast database. But only the government wiretappers would have access to that database, stocked with every chip unique key in existence. Having that identifier without a way to get into the escrow facility would be like having someone’s fingerprint and no access to crime records: it would be of no help whatsoever in telling you who it identifies. But a government agent would be able to take that identifier, along with a court order, to an escrow facility, and match it up with the chip unique key. And then combine it with the family key. Voilà! You’d have the session key—and the fuzz of an encrypted conversation could be transformed into blessed, perhaps incriminating, plain language.
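The recovery procedure can be sketched the same way. Again, a hash-based toy cipher stands in for the classified algorithms, a plain dictionary stands in for the escrow database, and the court order is reduced to a boolean; everything here is an illustrative assumption except the flow itself, which follows the text: strip the family key, read the identifier, look up the chip unique key, unwrap the session key.

```python
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """XOR against a hash-derived keystream; applying it twice with
    the same key decrypts. A stand-in for the classified ciphers."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def recover_session_key(leaf: bytes, family_key: bytes,
                        escrow_db: dict, court_order: bool) -> bytes:
    """What a wiretapping agency would do with an intercepted LEAF."""
    if not court_order:
        # Without legal authority, the identifier points nowhere.
        raise PermissionError("escrow facility requires a court order")
    inner = toy_cipher(family_key, leaf)        # strip the family key
    wrapped, chip_id = inner[:10], inner[10:]   # 80-bit key, 32-bit id
    chip_unique_key = escrow_db[chip_id]        # escrow database lookup
    return toy_cipher(chip_unique_key, wrapped) # unwrap the session key
```

Note that the agent never touches the conversation's encryption directly: the warrant buys a single database lookup, and the mathematics does the rest.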

That led to another complication. Where would the escrowed keys be stored? If they were all kept in one place, it would be a potential gold mine for all sorts of crooks, spies, and even corrupt U.S. government agents—anyone with access could get hold of the means to violate the privacy of every encrypted conversation in the world. So Brooks and his colleagues decided that the escrowed keys would be split into two pieces that would be stored in different locations. This would be done in such a way that obtaining one piece of the key would provide no mathematical advantage in discovering the entire key. When a judge authorized a wiretap, the law enforcement officer would present the warrant to both escrow agents, construct the key, and then have the wherewithal to listen to the conversations.
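Splitting a key so that one piece confers no mathematical advantage is a standard trick: XOR the key against a fresh random pad, and store the pad and the result with different escrow agents. The chapter doesn't specify the exact construction the system used, so treat this as one standard way to achieve the property described above.

```python
import os

def split_key(chip_unique_key: bytes):
    """Split a key into two escrow shares. Each share alone is a
    uniformly random string and gives no mathematical advantage
    toward recovering the key."""
    share1 = os.urandom(len(chip_unique_key))
    share2 = bytes(a ^ b for a, b in zip(chip_unique_key, share1))
    return share1, share2

def combine_shares(share1: bytes, share2: bytes) -> bytes:
    """Only someone holding both shares, i.e., with a warrant served
    on both escrow agents, can reconstruct the key."""
    return bytes(a ^ b for a, b in zip(share1, share2))
```

Because `share1` is chosen at random, every possible key is equally consistent with it; a thief who steals one facility's database learns literally nothing.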

In late July 1992, all the relevant government agencies met for an off-site meeting at the FBI’s Engineering Research Facility in Quantico, Virginia, to discuss the alternatives for a national encryption policy. Clint Brooks made the opening presentation. As recorded by one official in attendance:

He presented these within the context of a national goal that would satisfy the need for good commercial and unclassified cryptographic security while protecting the interests and responsibilities of national security and law enforcement organizations. He termed the achievement of this goal “Nirvana.”

The agencies didn’t reach total agreement. Notably, the FBI apparently was arguing for the ability to do its decrypting instantaneously, or in “real time,” an approach that the NIST people deemed “draconian and intrusive.” (The FBI approach would essentially dictate that the escrow facilities should be a phone call away at any time, and safeguards against abuse would go out the window.) But they all agreed that a system should provide encryption for the public while allowing the cops and the spooks access to the keys—essentially, the NSA solution.

Until the whole government got behind it, the escrow scheme was just another flashy technology concocted behind the Triple Fence. In order for it to work, it needed to be ubiquitous. As Brooks had anticipated—and as his superiors finally came to understand—such a sweeping change needed the imprimatur and active support of the government’s highest level, up to George Bush himself. But an election was approaching, not the time to air potentially controversial new ideas. In any case, the Bush people seemed unconvinced of the urgency of quick action. Brooks figured that in 1993, after Bush was returned to the White House, the reelected president would be able to tackle the problem, free from worries about what the electorate might think.

But in 1992, two unexpected events dramatically shaped the course of Clint Brooks’s key escrow scheme. The first one involved an innovative product about to be introduced into the marketplace—a twenty-four ounce box that connected to the telephone. That pound and a half of technology portended tons of problems. The second development was the election of a new U.S. president.

The box’s technical name was the AT&T Telephone Security Device (TSD) 3600. For several years, the telecommunications giant had been manufacturing secure phones for the government, using a special NSA-designed algorithm. In 1992, the company decided to broaden its market outside the government, and began limited sales of a voice data scrambler that used an encryption algorithm devised by AT&T’s own crypto team. That autumn, it decided to follow up on an even wider scale—by launching a secure phone designed to sell by the thousands. If you were worried about snoopers listening for sensitive data involving intellectual property, trade issues, and business strategies, you’d want one of these. You didn’t have to be an engineer or a nerd to use it, either. “It connects easily to desk telephones or . . . mobile cellular phones,” gushed company literature. “And it’s as easy to use as it is portable. To protect conversations, the user simply pushes a single button. The call is automatically encrypted and the conversation secured.” AT&T also claimed that the voice quality on this device was, unlike the relatively fuzzy phones that the military used, almost as good as that of a regular telephone.

What’s more, this new phone would use the most trusted encryption algorithm of all to scramble voice: DES, the cipher that was still a hot button behind the Triple Fence.

The NSA, of course, was unhappy at this new use of the problem child it had once blessed. But news of AT&T’s plan was even more troubling to the FBI. The law enforcement agency had already been complaining that new telephone features like cellular service and call forwarding were making it more difficult to implement wiretaps. Its solution was to propose a new bill, known within the Beltway simply as “Digital Telephony.” The law would mandate that all new telecommunications equipment be designed with wiretaps in mind; it essentially banned new devices and services that denied the government an easy way to conduct surveillance. Critics were already howling. It was bad enough that the bill would cost equipment makers hundreds of millions of dollars (presumably a cost passed on to consumers). Much worse was the central premise behind the legislation, which required the tail of wiretaps to wag the dog of telecommunications. Instead of encouraging one of the country’s most innovative industries to produce the systems that would sustain America’s high-tech success in the global marketplace, Congress would be locking a ball and chain on innovations. And for what? Just to keep its ears open to approximately 1000 annual federal wiretaps, to glean information that could arguably be recovered by other means, like hidden bugs or informants?

Though Digital Telephony didn’t mention cryptography specifically, the specter of crypto restrictions hung over the legislation like some digital Sword of Damocles. As Brooks and Kammer had explained to the FBI, strong crypto could totally screw up the benefits of the bill. Even if Digital Telephony passed, and the industry faithfully followed its strictures, the G-men and other police agencies would be able to monitor the transmissions sent over the wires or the air—but then what? If those communications were scrambled, those precious intercepts would be no more than useless static. FBI director William Sessions got the message and made sure that G-men would be participants in the NSA–NIST effort to deal with the problem.

Now the FBI was freaking. Here was this new AT&T phone, designed to move secure-phone technology from a status item on the desks of national security advisors to a common commercial product, one used by executives, lawyers, and scientists, not to mention privacy nuts, crooks, terrorists, and God knows who else. It would be a law enforcement disaster . . . unless there was a way that the government could somehow overhear those conversations as they were before encryption. Wasn’t that what Clint Brooks had figured out? So Brooks and his team were asked if the Capstone chip might go into the AT&T phone. As the Capstone was originally conceived, it was too demanding for the TSD 3600—with all its features, such as the digital signatures, it would require more computation than the device could handle. But maybe if the NSA carved out just the encryption algorithm and key escrow, it could come up with something that could simply be clipped into the phone in place of the DES chip.

Even while agreeing that it could be done, Brooks was wary. The Capstone chip was well designed and represented a complete solution. Coming up with something new would be riskier—and to do it in time to stave off the AT&T phone, it would have to be done very quickly. There would be no time for the national debate he felt was so essential.

But the FBI couldn’t wait. On October 13, 1992, Judge Sessions himself placed a call to AT&T’s chief executive officer Robert Allen. We’ve got a problem, he told him, and then outlined problem and solution: Would AT&T consider using an escrow encryption chip instead of its DES-based system? If the company agreed, the feds could offer considerable carrots. For one thing, AT&T could claim that it was actually providing mightier encryption, since Skipjack was much more difficult for outsiders to crack than DES. Furthermore, the United States would probably allow this key escrow phone to be exported. Best of all was a promise directed toward the bottom line: the federal government would buy thousands of units for its own use.

The downside, of course, would be that potential corporate buyers would have to buy into the basic compromise that escrow entailed: the encryption would be strong, but one not necessarily welcome third party would also have a copy of the key.

Sound familiar? It was the same situation that Whit Diffie had found utterly intolerable two decades earlier: the difficulty of two people seeking intimacy when someone else is in the bed. Diffie had invented public key in order to avoid this perversion of the cryptographic relationship. Indeed, the AT&T phone as originally conceived was an embodiment of Diffie’s vision. The users of the phone would not need to exchange secret keys beforehand. Instead, the two respective phone devices would furiously perform the calculations of a Diffie-Hellman key exchange, in order to settle on a secure DES key that would encrypt, and then decrypt, the actual conversation. No need for anyone else. You wouldn’t want anyone else.
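The exchange the phones performed can be shown concretely. This is a minimal sketch with toy-sized parameters; an actual device would use a far larger prime and would process the shared value further before using it as a DES key.

```python
import secrets

# Toy parameters for illustration only; a real phone would use a
# prime hundreds of digits long.
P = 0xFFFFFFFB   # 2**32 - 5, a prime
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 1   # secret exponent, never sent
    return priv, pow(G, priv, P)          # (private, public)

# Each phone picks a secret and transmits only the public value.
alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Both ends arrive at the same number without ever transmitting it;
# an eavesdropper sees only the two public values.
alice_shared = pow(bob_pub, alice_priv, P)
bob_shared = pow(alice_pub, bob_priv, P)
assert alice_shared == bob_shared
```

The symmetry is the point: each side raises the other's public value to its own secret exponent, and both land on the same session key with no third party involved, which is exactly the intimacy the Clipper design was built to interrupt.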

But the bounty offered to AT&T—and the chance to avoid a government confrontation—was too juicy to turn down. The phone company signed off on a deal: if the government would adopt a plan to make key escrow its standard, AT&T would forgo its DES scheme and install a government-designed chip in the device instead. This would be the stripped-down version of Capstone, using the Skipjack algorithm and the escrow features, but without the signature or hashing algorithms. It was given a new code name: Clipper.

“We knew no decision would make everybody happy,” said an AT&T spokesperson. “But frankly, the Clipper Chip offered an important law enforcement issue and increased the level of protection.” More to the point, it also offered guaranteed sales, and the continued goodwill of one of AT&T’s major customers, the United States government (at the time, the company was negotiating a government contract worth over $10 billion). If key escrow became government policy, AT&T would happily be on board.

But Clipper was still nowhere close to being the official government policy. Clint Brooks and the NSA needed one more big break before they could begin their journey toward Nirvana. That break came on November 3, 1992, when the United States went to the polling place and elected William Jefferson Clinton its president, with Albert Gore as his vice president.

It might appear counterintuitive to think that those election results favored the NSA. After all, Clinton was a Democrat who had spent the Vietnam years speaking against the conflict instead of fighting in it.

During the campaign, Clinton had visited Silicon Valley, and while he had made no promises, he indicated that his presidency would be a friend to private crypto. “He talked about how silly it was that there were export controls on off-the-shelf software,” remembers privacy advocate Marc Rotenberg. “He didn’t say ‘encryption’ specifically, but that’s clearly what he was referring to.”

Another sign that Clinton might not be NSA-friendly was the nature of the people surrounding him. For instance, the head of his transition team was a former electronics lobbyist named John Podesta, who had vociferously supported the industry agenda of liberalizing export rules. Besides Podesta, Clinton’s minions included a number of people who seemed tuned into the hip and crypto-friendly cyber world.

Chief among that contingent was the vice president himself—a self-styled computer aficionado to whom Clinton would delegate the ultimate decision on the cryptography issue. In fact, Al Gore’s presence as the nation’s second-in-command was often cited as proof that the new leadership team was a nerd-friendly future squad who “got” the new Internet paradigms. Their campaign speeches might have been about bridges to the future, but Gore’s vision was of an Information Highway to transform the country and indeed the globe. Gore arranged to bring some of the most techno-savvy Senate staffers to the White House to help on digital matters, people like Mike Nelson, a former MIT geophysicist experienced in Info-Highway issues. They were “extremely smart, conscious freedom-lovers,” wrote John Perry Barlow, who got to know them in his role as Electronic Frontier Foundation cofounder. “Hell, a lot of them are Deadheads. I was sure that after they were fully moved in, they’d face down the National Security Agency and the FBI.”

Barlow had mistakenly assumed that because the Clinton staffers recognized the opening chords of “Sugar Magnolia,” they’d be immune to top-secret doom lectures from the star-spangled crypto boys at Fort George Meade. Behind the Triple Fence, the expectations were just the opposite. The spooks understood that Bill Clinton and his peach-fuzz tech squad were a godsend for the escrow idea. The Bush administration had never warmed to the escrow plan. The problem wasn’t so much that the Bush people were specifically against this particular scheme. They were against anything that required a little gumption. “The Bush people had spent twelve years in power, most of them with a Democratic Congress, and they knew that everything that could blow up, would blow up,” one insider explained. “When you presented something to them, you got nothing but eyes staring out. . . . You could sense that everyone was thinking, ‘How might this end up on my suit?’ ”

In contrast, the Clinton people were policy joyriders, like teenagers finally granted their turn behind the wheel. They were totally juiced that after twelve years of dinosaur rule, they now had their chance to fix things. They were also detail freaks, eager to belly flop into the huge piles of clauses, footnotes, and trivia that embodied the process of governing. Present them with an idea and they surrounded it, tickled it, tore it apart to see its gears rattle, and wondered how they could make it work for them. They drew confidence from a belief that their own good intentions were obvious, and even if their efforts didn’t pan out, the public would give them credit for trying to do the right thing.

The forces pushing key escrow didn’t even wait until the new administration reached the White House before they hit Clinton and Gore with the encryption problem. The AT&T phone threat provided an impetus. “Suddenly this wasn’t something where we could wait, do an orderly briefing of the new administration, let them get their feet under them, appoint their assistant secretaries, and make a decision in 1994,” says Stewart Baker. The idea of getting George Bush to sign off before vacating the White House had been considered, but rejected. “We believe that going forward with the installation of the Clipper Chip based on the approval of the current administration has some potential pitfalls,” wrote an FBI official to director Sessions in a late-1992 memo. What if the news of an “exploitable” chip leaked before the Clinton people formally approved the policy? “It might result in their being pushed toward disavowing the prior Bush administration approach in order to prevent the controversy.”

Judge Sessions himself, whose fear of losing precious wiretaps had made him increasingly frantic on the issue, was the first one to hit Little Rock. “It had become his highest priority,” says a government official working for key escrow. “He was fearless in going to the transition team and saying, ‘You guys may be coming in January, but you’ve got to hear this now.’ ” In any case, the NSA was just as happy to let him lead. After all, Fort Meade’s stated role in government was not promoting policy decisions but providing technical background and intelligence information from its files.

To frame the issues, the FBI, with the NSA’s help, prepared a paper entitled “Encryption, Law Enforcement, and National Security.” The classified document was packed with high-impact scenarios of what might happen if crypto ran free. It discussed the AT&T device as a possible trigger for this onslaught. But the coming disaster might be averted. “The solution is an encryption chip that provides extra privacy protection (at least a million times stronger than DES) but one that can be read by U.S. government officials when authorized by law. . . . This ‘key escrow’ system would protect U.S. citizens and companies from invasion of their privacy by hackers, competitors, and foreign governments. At the same time, it would allow law enforcement to conduct wiretaps in precisely the same circumstances as are currently permitted under the law.” While the description sounded very much like a panacea to an otherwise apocalyptic problem, the paper did include one possibly annoying consequence of the policy:

“This concept undoubtedly will be vigorously attacked by those who fear law enforcement abuses and thus would rather rely on technology than on the court to protect their privacy.” But that seemed rather an easy trade-off to make. Which would you rather tolerate—a bit of flak from privacy nuts, or a powerful weapon in the hands of kidnappers and terrorists?

Stewart Baker was the NSA’s point man on the issue, and wound up coordinating much of the effort to sell escrow to the incoming leadership. While Fort Meade was packed with geniuses, it wasn’t as loaded with people who were comfortable dealing with the outside world. Baker had come a long way since Clint Brooks had come to his office and first told him about Equities. In that time, he had gotten a good view of the cryptographic landscape from the NSA point of view. He saw where it all fit together. You couldn’t mandate what people inside the country used, nor could you keep every copy of a program like PGP away from every geek on the globe. But realistically, not many people were going to take the trouble to find exotic encryption software like PGP and figure out how to use it. Export controls were the way you stopped good crypto—everything from DES on up—from being built into the systems people used every day, and thus, out of the hands of most bad guys.

Baker saw the Clipper scheme as a way of weaning the government from its dependence on export controls to contain crypto. There were signs that Congress might not support those regulations indefinitely. The business community was getting louder and louder in its opposition to them. The problem was, the software industry had grown up in an environment with few regulations, and was now a multibillion-dollar colossus. It felt that the natural order of things was to fight things out in the marketplace while the government remained some distant entity. The techies seemed to regard the premier crypto agency in the world as some doddering, irrelevant artifact of the Cold War. Their philosophy was hey, technology happens. Baker was horrified once when a Microsoft middle manager blithely told him that Bill Gates was going to put crypto into the Microsoft operating system, that it was going to be in all the applications. Who cares whether it would empower terrorists or rogue nations? Their attitude was, “Encryption is cool, let’s put it anywhere.”

The techies weren’t unpatriotic, Baker thought, just clueless about the very real dangers in the world. They thought it was a joke that crypto was classified along with heavy munitions. But the ability to listen in on the world—with a vast multibillion-dollar network of secret satellites, radar installations, and ground sensors—was a pillar of U.S. defense policy. How did they think we discovered those Libyan terrorists who brought down the Pan Am jet over Lockerbie? How else to keep track of the North Korean nuke program or Iraq’s use of chemical weapons against the Kurds? The public had only heard hints of the importance of those “intercepts,” signals snatched from telephone conversations, digital transfers, and even walkie-talkie transmissions. Most of it was classified, deep black stuff. That’s why there were no reporters when George Bush himself had ventured to The Fort to extend his personal congratulations to the codebreakers for their work during the Gulf War. Just what did the spooks do? If the public only knew. . . .

Baker and his fellow advocates of escrow thought it essential that the worldview taken by the new administration be a more realistic and tougher one. Encryption should be an important part of the Networked Society, sure, but you needed controls. You needed limits. You needed a way for the good guys to hear what the terrorists and crooks were saying to each other.

Early in the campaign to win the hearts and minds of the Clinton people, Baker and Sessions briefed Leon Fuerth, who would become Al Gore’s national security advisor. Though Fuerth was cautious, the escrow advocates could see that their presentation had hit the mark. They thought they could see it in his face: the realization that the election campaign was over and now the Clinton folks were going to be wrestling with some hard, hard issues. This was one that the NSA and the FBI could win.

As December rolled on, the briefings continued. And not long after the inauguration, Al Gore himself got exposed to the religion by NSA director McConnell and Clint Brooks. It was a bull’s-eye for The Fort. Because Gore loved technology, he was able to appreciate the ingenuity of the key escrow scheme. A neo-Luddite Republican might have fuzzed out on those particulars, but Gore’s openness toward the idea seemed tied to his perception that these software gears and levers might actually work, providing a solution that gave something to everybody.

As the Clinton-Gore teams shifted from transition to governing, the Clipper people stepped up the meetings. Memos flew between the NSA and NIST on how best to anticipate and respond to possible objections. They knew one potential problem: Fort Meade’s insistence on keeping the Clipper’s workings a secret from the public. Brooks tried to convince his colleagues to open up, but failed. His fallback plan was somehow to gin up some assurances that the NSA hadn’t intentionally weakened Skipjack for its own purposes. “Get a panel of academics from cryptomath/analyst community to examine classified level SKIPJACK to ‘assure’ it is valid/good algorithm,” he scrawled on a memo to his director on January 5. “Who should it be?”

Meanwhile, in the White House, the barrage of briefings was having its effect. In their first weeks in office, Clinton and Gore hadn’t signed off on Clipper. But their staffs were coming to the conclusion that there was no other alternative.

John Podesta was already on board. Maybe his personal tipping point came very early after the inauguration when some high-tech lobbyists came to visit him. At this point, civil libertarians and software industry people were still hoping that the new administration would act against the spooks and the cops and liberalize crypto export regulations. (If they’d known about the Clipper Chip they would have gone ballistic.) Podesta, still dazzled by the new toys in his office, showed them his STU-III phone, the standard-issue crypto phone the government had used for about five years. They sneered at it. “Typical clunky government solution,” they said. “But you know what’s cool? AT&T is going to make a device that’s half the size, much cheaper, and will do everything that one does, but better. You should buy those!”

Though the high-tech guys didn’t know it, their comments resonated with the briefings Podesta had been getting. If the government didn’t do something, those damn devices probably would sweep the market. Not that the NSA/FBI Clipper cabal was relying on serendipity to bring the Clinton folks around. They were essentially stacking the deck, presenting a limited set of options to the greenhorns. Want to do nothing, and let the marketplace take its course? Fine. If you want to trigger crypto anarchy, that is.

Doing nothing, they warned, would mean that AT&T would begin selling its phones and the next thing you knew the costs would come down and everybody would be talking on secure phones and e-mailing with crypto software. The smoke had hardly cleared from the World Trade Center bombing. What if another, maybe a worse, terrorist disaster came, and it turned out that the government failed to prevent it because the perpetrators were able to communicate with unbreakable crypto? You want to give Saddam Hussein access to ciphers we can’t break? Go ahead—do nothing. The blood will be on your hands. This terrified the Clinton people.

The other alternative, which some law enforcement hardliners were urging, was even more extreme: ban crypto within the United States. In one of the FBI’s presentations, illustrated by a slide show with bullet charts to underline the salient points, the G-men merged their Clipper-related goals with their Digital Telephony vision. Essentially, the show said: because the domestic use of encryption is not regulated, there is a NEED FOR A NATIONAL POLICY that allows “legitimate” users crypto strong enough to foil their adversaries but also “insures that cryptographic devices and systems are capable of real-time decryption by law enforcement.” The implication was unavoidable: any cryptography that does not meet that standard should be prohibited. Even stuff distributed by American manufacturers for American users. Otherwise, an intolerable “electronic sanctuary” would exist. Forget about the strategy of using export controls to limit what people used inside the country. . . . Our nation was at risk because such tools were legally available to anyone motivated enough to find them. Just as it was illegal to have nuclear weapons lying around, it should be illegal to have codes that could fall into the hands of those who would destroy society with them. In a weird way, this sentiment echoed Phil Zimmermann: when crypto is outlawed, only outlaws will have crypto.

The Clinton people did manage to resist that demand, which would have started riots in Silicon Valley and probably wouldn’t have survived a court challenge anyway. The Gore team in particular was sensitive to the idea that the emerging Information Highway needed privacy protections. Besides, how would you enforce such a ban? What did these guys want the government to do, go house to house and search people’s hard disk drives for copies of PGP?

So, after being presented with two unpalatable alternatives, the Clinton people were offered a third way, one which, in contrast, seemed a compromise with which everyone could live. In retrospect, one administration insider came to see it as akin to the choices offered the Kennedy people on the invasion of Cuba—a cowardly evasion of the problem, a destabilizing full-scale military operation, or this other plan, a small operation at some place called the Bay of Pigs.

The scheme was presented to the Clinton people as plug-ready, poised to go into operation as soon as the president gave the word. Even temporary inaction would mean a severe and probably lingering loss of respect from the law-and-order constituency the administration needed. One of the FBI men briefing the Clinton people was a burly, street-smart assistant director named James Kallstrom. Formerly head of the bureau’s technology team, he had made his bones in the bugging operation that took down John Gotti.

Some people described him as the FBI’s version of “Q,” the gadget wizard of the James Bond films. He had an in-your-face style of briefing, making eye contact and personalizing his rap. Are you married? Do you have a child? he’d ask. Then he’d launch into a scenario in which someone had kidnapped one of your kids and was holding him in a fortress up in the Bronx. The bureau suspects your kid is there; they have a search warrant to find him. But the crooks have constructed the fortress out of some new metal that can’t be penetrated. Your kid’s potential rescuers can’t get in. What a nightmare: the kidnappers, with their precious hostage, watching you and the G-men trying to get in and laughing at you.

“That’s what the basis of this issue really is,” Kallstrom would say in his New York accent. “From the standpoint of law enforcement, there’s a super-big threat—this guy is gonna build this domain in the Bronx right now, because he’s got a big steel door, and none of the welding torches, none of the boomerangs, nothing we have is gonna blast our way in there. Sure, we want those new steel doors ourselves, to protect our banks, to protect the American trade secrets, patent rights, technology. But do we want a digital superhighway where major criminals can operate impervious to the legal process? If we don’t want that, then we have to look at Clipper.”

Kallstrom, along with Baker, Brooks, McConnell, and the CIA’s John Deutch, became part of the key escrow team ostensibly briefing the administration on its options, but really steering it, with one hand on the scruff of its Democratic neck, toward an inevitable embrace of Clipper. One unexpected ally was commerce secretary Ron Brown; in the first briefing he attended, Brown mentioned that his army days had been spent at an NSA listening post, and he was fully aware of the vital importance of signals intelligence. By now the briefings included not only national security people but the Clinton-Gore science staffers like the Office of Science and Technology Policy’s Mike Nelson, infonauts well attuned to issues like personal privacy and the industry’s need for secure systems. (Nelson got his top-secret clearance in a lightning-quick three weeks.) In a January 26 FBI briefing, Kallstrom laid out a lot of the fine points of the scheme, but Gore’s senior director on intelligence programs, George Tenet, had further questions on the Clipper methodology. Who would be the key escrow agents? How would the international aspects be handled? A lengthy February 9 memo from Judge Sessions gave a detailed summary of the plan and the dire implications that would ensue if no action was taken.

So, barely a month into the Clinton administration, the pressure was intense to move on Clipper. Supposedly, AT&T would ship ten thousand DES-equipped phone devices by April 1 if no action was taken. But by then, the administration’s crypto team—consisting of national security people and Internet specialists—had almost imperceptibly shifted from decision making to implementation. It was their first big initiative, and they wanted it done fast: the word “closure” kept popping up in their correspondence.

A typical internal memo, dated March 5, was from George Tenet to Gore’s national security advisor Leon Fuerth and his colleague William Wise: the header read “HELP HELP HELP.” Then, “Desperately need time from the VP”—for a meeting with the past and current NSA directors on the encryption issue. “I think I know what the VP wants to hear McConnell/Studeman talk about,” Tenet continued, finishing with the odd closing, “God bless you all.”

All through March, the meetings continued. Meanwhile, industry and civil liberties groups were lobbying the newcomers, still hoping that the new administration would be amenable to considerable reform on crypto. “You’re holding back e-commerce, you’re endangering the security network, and besides, it’s all out of control, anyway!” one of them shouted at Gore’s people. But the Clinton people had already mentally aligned themselves with the government insiders at the NSA, the FBI, the Justice Department, and the CIA. The classified briefings had done the trick, particularly the warning that if no action was taken, people would die. Are you willing to sacrifice human lives, they were asked, for a fraction of a decimal point rise in the GNP? The tack was devastatingly effective: the dilemma was essentially resolved by framing it as a choice between thousands of people dying and Bill Gates being 10 percent richer. “That’s a pretty easy decision,” says an administration official.

Not that there weren’t qualms within the White House. The biggest question the Clinton aides asked themselves was, “Why would anyone want Clipper?” (After all, the plan was supposed to be voluntary.) Another problem was the requirement that the Skipjack algorithm remain under wraps. It was inevitable that its secrecy would lead critics to charge that the scheme was a Trojan horse to bring flawed crypto into the infrastructure. But the NSA wouldn’t budge on secrecy.

Finally, there was the problem of how the key escrow scheme would play overseas. If a crypto solution was not global, it would be useless. If buyers abroad did not trust U.S. products with the escrow scheme, they would eschew those products and buy instead from manufacturers in Switzerland, Germany, or even Russia. And how could you handle key escrow in other countries? Should the United States allow access to stored keys to free-speech–challenged nations like Singapore, or China? And would France, Egypt, Japan, and other countries be happy to let their citizens use products that allowed spooks in the United States to decipher conversations but not their own law enforcement and intelligence agencies? The answers to those questions were not forthcoming because the planners of Clipper never did work out a solution to its global implications—another consequence that came with rushing Clipper out the door.

None of those objections were sufficient to sink the plan. At six in the evening on March 31, 1993, in the White House Situation Room, Vice President Gore went over the proposed directives in a meeting that included the whole gamut of law enforcement, intelligence, and national security leaders. Not long afterward, he briefed the president with his recommendation. Bill Clinton agreed.

Clipper was a go.

From that point the operation shifted to what one participant calls “White House Marketing.” Press releases were drafted. Mike Nelson set about writing an explanation of the proposal in question-and-answer form. Then on the eve of the announcement itself, the White House prebriefed a number of representatives from Congress, industry, and the civil liberties groups on the issue, not so much to collect feedback as to forestall charges that the Clinton people had blindsided them with the abrupt change in course.

Still, no one at the White House anticipated a major clamor over Clipper. But Clint Brooks saw trouble coming—this issue had the potential to leak outside the Beltway, to make real enemies out of potential sympathizers. They just don’t get it, he complained to Stew Baker on one drive between Fort Meade and the White House. At one meeting, he asked, “Who’s going to handle this on Larry King Live?” His question was ignored. A few minutes later, he repeated it. A senior administration official sternly told him, “Clint, we appreciate your sense of humor but this is really serious—you handle the technical stuff and we’ll handle the political stuff.” (Some months later, when Al Gore appeared on Larry King Live to talk about the Information Highway, the first question posed to him was about . . . the Clipper Chip.)

The briefings with Congress and industry went pretty much as expected: the proposal was received cautiously, even skeptically, but not dismissed out of hand. One legislative staffer complained that when the Clinton people were challenged, they went on the offensive. “Do you want to be responsible for kidnappers?” the Clintonistas would ask, and the legislators would crumble. The sessions with civil liberties groups weren’t so cordial. John Perry Barlow of the Electronic Frontier Foundation got one of those last-minute briefings and couldn’t believe his ears. He felt that his new friends in the White House had been “drinking the Kool-Aid,” a national security version of Jonestown. What particularly offended him was Mike Nelson’s invocation of the classified information he had heard and Barlow had not. “If only I could tell you what I know, you’d feel the same way I do,” Nelson said. Thousands could die, he confided. Barlow felt he was hearing the same phony music that had been sung by the Vietnam warmongers. What Clipper really represented, he felt, was a plan that would “initiate a process that might end freedom in America.”

Then there was Clint Brooks’s effort to get outside experts the information necessary to explain the benign nature of the system to the public. The night before the announcement, Brooks himself ventured through a driving rain to brief Georgetown computer science professor Dorothy Denning, his first choice to lead the panel to vet the classified Skipjack algorithm. It would be an inspired choice. Denning was an expert on crypto and computer security but her demeanor was as benign as Betty Crocker’s. (Science fiction writer Bruce Sterling once described the diminutive woman as “something like a Pilgrim maiden behind leaden glass.”) She was already on the record as supporting the regulation of cryptography, and coincidentally at the time of Brooks’s visit had just experienced an awkward situation in which she’d been unable to get into her locker after a swim in the university’s indoor pool; only helpful maintenance men with heavy-duty cutters (the equivalent of escrow agents!) saved her from venturing into forty-degree weather in her wet bathing suit. Not only was she ready to defend key escrow, she came to feel it was her destiny.

On April 16, President Clinton unveiled the new initiative. In his press secretary’s announcement of the plan, the issue was presented to the public as a middle ground between two dreadful extremes—much as the situation had been presented to the administration by the NSA. Seen through that filter, the Clipper Chip was to be regarded as a godsend:

The chip is an important step in addressing the problem of encryption’s dual-edged sword: encryption helps the privacy of individuals and industry, but it can also shield criminals and terrorists. We need the “Clipper Chip” and other approaches that can both provide law-abiding citizens with access to the encryption they need and prevent criminals from using it to hide their illegal activities.

The actual announcement did not establish Clipper as a standard, but it did affirm that the government itself was committed to buying thousands of the AT&T Clipper-inside devices for its own agencies. The hope was that while Clipper was designed to be a voluntary standard, its adoption and endorsement by the government would tip the marketplace to make it ubiquitous. The ultimate recommendation would come after Clinton received the results of a widespread blue-ribbon review on the national crypto policy that would look at the escrow initiative and reevaluate the export laws.

With that announcement, Bill Clinton and his people felt that they had made a big step toward avoiding what seemed like a disastrous collision in the crypto world, one that had seemed predestined since the day that Whit Diffie figured out how to split the cryptographic key. In fact, the Clipper Chip did mark the turning point in the battle, but not at all in the way the Clinton administration had intended. By promoting Clipper as its key escrow flagship, the government profoundly erred. Instead of a nuanced debate on encryption, from that point on the merits—and drawbacks—of this particular scheme would become the main crypto battleground. Clipper itself was the issue, and Clipper as proposed was vulnerable. And Clint Brooks, who was more than anyone its architect, saw what was happening, but was powerless to prevent it.

At first, things didn’t look so bad. From the vantage point of the White House and Fort Meade, it appeared that what relatively little public attention the Clipper Chip had garnered was fairly balanced. The New York Times article, published on the day of the announcement, had set a reasonable tone, right from its lead. The Clinton administration was “about to announce a plan to preserve privacy in electronic communications . . . while also insuring the government’s right to eavesdrop for law enforcement and national security reasons.” Balance. Of course, the article did quote one industry representative as saying, “The government is creating a monster.”

In the days following, there was no rush to embrace the plan by the various stakeholders who might be affected by it. The feds took succor, though, in the lack of a widespread outcry against it. The Internet, of course, was buzzing with fears of police-state tactics, but on the other hand, Dorothy Denning had almost immediately posted a clear-headed description of the system itself and was already serving as an example that the crypto community was not universally anti-Clipper. Better yet, an unexpectedly friendly description of the plan came from Marty Hellman, whom Brooks had briefed by phone on the eve of the announcement. Hellman’s explanation of the scheme was cautiously neutral (though he did warn that there should be safeguards in the legal process leading to key retrieval), and was posted on the influential “Interesting People” mailing list run by Net gadfly David Farber.

On April 20, Clint Brooks wrote a memo reflecting his optimism. “The reactions I am getting from academic and industry people is that this may succeed,” he wrote. So much so, these people were telling him, that the government may have not allocated enough digits in the chip identification fields to handle all the Clippers that would come into use. A hundred million would not be enough!

But that initial success was illusory, like a second-rate baseball team sitting in first place after a lucky string of April wins. The first serious rumbles came from the crucial information industries. After going over the plan, they concluded that the opportunity it offered to build strong exportable crypto into their systems was more than canceled out by the presence of the Law Enforcement Access Field, which provided keys to government snoops with warrants. The point of exporting crypto, after all, was to serve customers overseas. But what foreign companies wanted to buy a security system where the keys were stored in United States government escrow facilities? The business leaders joined with the already skeptical civil liberties people and fed on the energy of the grassroots Internet folk, who’d hated it from the get-go. Then they all took their case to the media. Though the reaction took a few months to build, the Clipper coverage eventually exceeded all the publicity that any previous cryptological development had ever received.
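The escrow mechanism at the heart of these objections can be made concrete with a toy model. What follows is only an illustrative sketch of the general key escrow idea, not the real classified Skipjack/LEAF design: the names (`unit_key`, `make_leaf`, the two escrow shares) are hypothetical simplifications, and XOR stands in for real encryption. The one faithful detail is the split of the chip's key between two escrow agents, neither of whom alone can recover anything.

```python
# Toy sketch of Clipper-style key escrow. NOT the actual classified design:
# XOR stands in for real encryption, and the field layout is invented.
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_unit_key(unit_key: bytes) -> tuple[bytes, bytes]:
    """Split the chip's unit key into two escrow shares (XOR secret sharing).
    Each escrow agent holds one share; neither share alone reveals the key."""
    share1 = secrets.token_bytes(len(unit_key))
    share2 = xor_bytes(unit_key, share1)
    return share1, share2

def make_leaf(session_key: bytes, unit_key: bytes) -> bytes:
    """Toy 'Law Enforcement Access Field': the per-call session key
    wrapped under the chip's unit key (here, just XOR)."""
    return xor_bytes(session_key, unit_key)

def warrant_recover(leaf: bytes, share1: bytes, share2: bytes) -> bytes:
    """With BOTH escrow shares (i.e., a warrant served on both agents),
    rebuild the unit key and unwrap the session key from the LEAF."""
    unit_key = xor_bytes(share1, share2)
    return xor_bytes(leaf, unit_key)

unit_key = secrets.token_bytes(10)      # 80 bits, echoing Skipjack's key size
s1, s2 = split_unit_key(unit_key)       # deposited with two escrow agencies
session_key = secrets.token_bytes(10)   # fresh key for one phone call
leaf = make_leaf(session_key, unit_key) # transmitted alongside the ciphertext
assert warrant_recover(leaf, s1, s2) == session_key
```

The sketch also shows why industry balked: anyone holding both shares can unwrap every session key the chip ever produces, which is precisely the property foreign customers refused to buy.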

Little of it was favorable. All the time the government was planning its key escrow initiative, its creators had implicitly believed that only an isolated few would question their motives. They saw the selling of Clipper as a process by which responsible people would have a number of concerns, and the government would respond to those. One prime concern, they figured, would be a fear that the mechanics of the escrow scheme would somehow compromise the security of the encryption itself, making it easier for crooks and spies from other countries to do the unscrambling. Another would be that the key escrow facilities themselves might be vulnerable. What this thinking didn’t account for was that the very basis for the scheme—a government means by which to flip the “descramble” switch for its own purposes—was offensive to most people. All opponents had to do was use a simple analogy—What if you had to leave a copy of your front door key at the police station?—and even a Joe Sixpack who didn’t know encryption from a forward pass would be an anti-Clipper convert. “The idea that government holds the keys to all our locks, even before anyone has been accused of committing a crime, doesn’t parse with the public,” explained Jerry Berman of the EFF. “It’s not America.”

Others didn’t need such analogies. One of the basic reasons many people wanted to use crypto was to keep information from the government itself. Not that they were necessarily lawbreakers. They simply didn’t trust the government. The bureaucrats who made the plan were a generation removed from Watergate, but anyone who had been around in the seventies might have known better.

Former NSA director Bobby Inman, for instance, got an early briefing on the Clipper Chip and he sensed right away that it was doomed. Who wanted to give the government a direct pipeline to your information? The cypherpunks understood this, and immediately initiated a guerrilla campaign to infect the media and the general population with the anti-Clipper message. At their monthly meeting, Eric Hughes solicited an agenda of possible actions including everything from advocacy press kits to stumping for a procrypto constitutional amendment. Tim May suggested active sabotage of Clipper, or a boycott of AT&T. One effective prank they did pull off was distributing a little decal to stick on your laptop. Designed to resemble the famous Intel Inside logo, it read, “Big Brother Inside.” That pretty much said it all. (Intel quickly threatened to sue for trademark infringement, and the offending cypherpunks stopped distributing the stickers.)

Opposition came from all quarters. The ACLU found itself agreeing with Rush Limbaugh, who attacked Clipper on his radio show. Digital hippies savored the William Safire column “Sink the Clipper Chip,” where he noted that the solution’s name was well chosen, “as it clips the wings of individual liberty.” Tim May often expounded a theory that Americans are of two minds when it comes to privacy. One involves the public interest and was essentially anticrypto: “What do you have to hide?” The other expresses the individual ethic of the Bill of Rights, and is proprivacy: “None of your business.” Any successful policy has to walk down the middle of those opposing sentiments. But Clipper, in its insistence that nothing should be hidden from the government, never established that balance. Once people began calling it the Big Brother Chip, the game was over.

The government did its best to defend the scheme. Stewart Baker briefed industry figures including crypto advocate Bill Gates, to little avail. He went into the lion’s den, speaking at procrypto events like the Computers, Freedom, and Privacy conference—where he belittled the anti-Clipper forces to their faces, calling their actions “the revenge of people who couldn’t go to Woodstock because they had too much trig homework.” He taunted them with the “If you knew what I know” argument. Your view of privacy, he told them, reflects a hopelessly naive view of the world. “By insisting on having a claim to privacy that is beyond social regulation, we are creating a world in which [crooks and terrorists] will flourish and be able to do more than they can do today,” Baker warned.

Not all the news was bad for the government. In the summer of 1993, the Skipjack algorithm was deemed strong by the team of “independent experts” led by Dorothy Denning and including Walt Tuchman (who had led IBM’s DES team) and Ernie Brickell (who had picked up the $1000 reward for cracking Merkle’s multi-iteration knapsack cipher). Denning had become so fierce in her defense of the government, clearly articulating a position that posited the dangers of crypto anarchy, that critics were calling her “Clipper Chick.” Her disinterested status made her more effective in public forums than the administration’s battered tech squad, which was beginning to regard its appearances at Internet-related conferences with all the enthusiasm of dental surgery. Who could blame them, as question after question drilled in the reality that their natural constituency of tech-savvy “Netizens” now saw them as virtual brownshirts? The White House’s Mike Nelson came to refer to crypto as “the Bosnia of telecommunications.”

Still, Clipper seemed cursed. At every turn a new problem cropped up. For example, not long after the announcement of the plan, the government heard from an MIT professor named Silvio Micali. Micali, who worked in MIT’s mathematics and cryptography group (led by Ron Rivest), had devised some mathematical protocols he called “Fair Cryptosystems” that seemed similar to the government’s key escrow scheme. He had published a paper on them in 1992 and had gotten a patent for them. The government quietly paid Micali a million dollars to license his patent.

Even the chip’s name proved to be a problem. “Clipper was our cover name, a la NSA normal operations,” Brooks wrote in an early 1992 memo. “I tried to get people not to use this outside the agency, but the policy makers and their staffs found it so convenient to use that it stuck.” Unfortunately, a company named Intergraph was already selling a microprocessor it called Clipper, and the United States had to pay a considerable sum to buy the rights to a moniker that was well on its way to what marketers call a brand disaster.

Other problems were purely technical. The chipmaker Mykotronx was a government and commercial contractor unaccustomed to the demands of the consumer marketplace, and its chip wasn’t built to accommodate high-bandwidth data rates. In its haste to get the Clipper Chip into the AT&T phones, the NSA had created a product that might have been adequate for the communications technology of 1993 but was woefully inefficient for the high speed of information flow in the glistening future that would arrive, oh, two years or so later. In other words, as critics noted with withering irony, by the time a security company took the fifteen to eighteen months to build a product around Clipper, the hardware would be obsolete.

Did anyone like Clipper? As part of the process, NIST had been required to solicit public comment on the plan. Three hundred and twenty individuals and organizations responded; of those, only two agreed with Clipper. “This is not a Hall of Fame batting average,” conceded NIST official Lynn McNulty. But the Clinton people would not budge. On February 4, 1994, the president formally endorsed Clipper—known as the Escrowed Encryption Standard—as a Federal Information Processing Standard. The government would immediately start buying Clipper-equipped AT&T phones for its own use, escrowing keys with NIST and the Treasury Department. (This despite the fact that the technology did not yet actually exist to perform decryption of keys retrieved from the as-yet-nonexistent escrow facilities.)

“The War is upon us,” wrote Tim May. “Clinton and Gore folks have shown themselves to be enthusiastic supporters of Big Brother.”

In the Senate, Patrick Leahy, among others, vowed to fight Clipper, insisting that without congressional approval the project could not be funded (setting up the program would cost $14 million, with an annual $16 million budgeted for the escrow facilities). In May 1994 he held hearings. In rare public appearances, Clint Brooks and Mike McConnell presented the view from behind the Triple Fence, essentially congratulating the administration for taking the right approach. “There are, to be sure, issues to be ironed out,” concluded McConnell. “But I am confident we will work out the wrinkles.”

Then a panel of opponents showed those “wrinkles” to be approximately the size of the Colorado River basin.

One tough question they posed: Who would want to use Clipper, when there were already programs like PGP readily available? The government’s response had been the “stupid crook theory,” best explained by the FBI’s Jim Kallstrom, who professed to have personally heard mobsters on wiretaps joke about being wiretapped—and then engage in incriminating conversations, simply because it was too awkward to go outside and use a pay phone. “If in five years this catches on and people put Clipper in their devices, a high percentage of criminals will go to a Radio Shack or some other place like that to buy some sort of encryptor,” he said. “They’re not going to remember that in 1994 some article [appeared] in the Wall Street Journal [about key escrow]. Maybe in the fine print somewhere it’ll say Clipper something. But it’s not going to be readily apparent—it’ll be part of the landscape. That’s what would be our desire.”

OK, so stupid crooks might use it. But the antigovernment witnesses noted that if smart criminals eschewed Clipper, so would the overseas customers who were crucial to its adoption. What was in it for France or Japan or Indonesia to sign on to a plan where the keys to their citizens’ private conversations—possibly involving invaluable business secrets—were held jointly by two branches of the United States government?

Perhaps the most persuasive witness was Whit Diffie. He testified not only as one of the inventors of public key but as a representative of one of the ad hoc organizations lobbying against Clipper, the Digital Privacy and Security Working Group. Diffie tried to put the issue into historical perspective. Governments had been similarly concerned with previous revolutions in telecommunications, like the transatlantic cable and the advent of radio. Despite fears that governments would lose sovereignty, these developments proved tremendously useful to governments. Computer communications, too, would probably, on the whole, increase government power. But the United States seemed loath to allow any of that power to accrue to its citizens. While the government claimed only the desire to retain its current ability to wiretap, the fact was that during the time of the founding fathers, privacy was easily obtained simply by walking out of earshot of others. “It seems that the right . . . of the participants to take measures to guarantee the right to speak privately can hardly have been in doubt, despite the fact that the right to speak privately could be abused in the service of a crime,” said Diffie. Today, of course, people communicate largely by electronic means, from the telephone to the computer. Could it be that the government has the right to deny the possibility of privacy in those conversations? “The legitimacy of laws in a democracy grows out of the democratic process,” Diffie told the senators. “Unless the people are free to discuss the issues—and privacy is an essential component of many of those discussions—that process cannot take place.”

Not long after the Senate hearings, Clipper suffered perhaps the worst blow of all. It came not as a tirade in Congress, an attack by an industry representative, or a screed from a cypherpunk. It was the result of a scientific experiment conducted by a formerly obscure research scientist named Matthew Blaze. Essentially, he made the Clipper Chip look stupid.

Blaze was a New York kid, a classic science nerd. He’d dropped out of a preppy private school, worked for a while as a paramedic (the first person hired by the city’s emergency medical service without a driver’s license), then drifted back to college, earning a degree in two seemingly incompatible sciences: computer and political. At graduate school at Columbia, he began seriously thinking about crypto. Talking to his officemate, a guy named Stuart Haber, who had devised a way to use public key to time-stamp documents digitally (providing an electronic equivalent to the old trick of postmarking a letter to affirm its age), he realized that crypto was both a way to tackle important mathematical problems and a practical lever to change society. Blaze was also a big believer in privacy rights.

After switching to Princeton and getting a Ph.D., he went to work for the small crypto group at AT&T’s Bell Labs research facility. Blaze began working in areas of encryption other than algorithms. His group was more concerned with basic research than AT&T’s secure system group in North Carolina, which had produced the TSD 3600 device that was slated to be the Clipper phone. In fact, he found out about Clipper by reading the newspaper like everyone else.

But as the Clinton administration was readying its February 1994 endorsement of the escrow standard, it had initiated a series of technical briefings that included the Bell Labs crypto group. Several NSA scientists came to New Jersey for a briefing. Though the group could generally be described as anti-Clipper—besides the privacy implications, as cryptographers they were offended at the security risks of sending a key to a third party—“we managed to be on our best behavior,” says Blaze, “not letting the meeting degenerate into whether this is a good idea.” Afterward, Blaze asked if he could post a summary of the meeting to the Internet, and he stuck to the facts there as well.

This impressed people behind the Triple Fence, who apparently thought Blaze could be another valuable outside tester of Clipper technology. They invited him and a colleague to Fort Meade to get a prototype of Tessera, the smart-card-based version of the escrow system. (Tessera was to be a portable version of the whole-enchilada Capstone cryptosystem that Clint Brooks favored over the limited Clipper Chip.) Never having been there, Blaze was excited. He was given the standard visitor’s badge with a sensor that tracked him through the building: as his host took him through, the host had to keep facing security cameras and assuring some unseen guard that Blaze was with him, after which a disembodied voice would say, “Okay, thank you.” Even between the briefing room and the bathroom this happened a couple of times. “They didn’t actually follow me into the bathroom,” Blaze says. When the Bell researchers left, they were given Tessera cards, a stack of manuals, and NSA coffee mugs.

Blaze immediately began testing the system, focusing on the Clipper aspects of the device. Unlike Dorothy Denning’s team, which had focused on Skipjack, Blaze wondered whether there was a way to actually use the strong encryption while defeating the escrow feature. In other words, could a crook, terrorist, or someone just wanting privacy use Clipper’s crypto without being identified? He focused his efforts on studying the Law Enforcement Access Field. “I wasn’t even thinking of it as a potential weakness,” he says. “But it turned out that the obvious way of defeating the LEAF was pretty much the first thing you would initially think of.”

Using a card reader and a little program that simulated a wiretap, he began testing. The simplest things—altering the code so you wouldn’t send the identifier, or sending some other number in place of the identifier—didn’t work. But it took only a bit of thought to come up with slightly more complicated ways that did work. The breakthrough came when Blaze, poring over the manuals, noted that the “checksum” in the LEAF was only 16 bits long. (The checksum is the way to verify that the proper LEAF, including the chip identifier and session key that encoded the conversation, was indeed sent off to the authorities. The proper number in the checksum is like an “all’s clear” that says everything is OK. If there was some way of creating a counterfeit LEAF with a legitimate checksum, in effect you would have defeated the Clipper system. The encryption would work, but the wiretappers wouldn’t have the proper session key to decrypt the conversation.)

“Sixteen bits isn’t a very big number these days, computationally,” Blaze says. Within a few hours he hacked up a “LEAF-blower,” a quick program that could send out every possible combination (2 to the 16th power) of checksum numbers, then hooked it to his test system. He really didn’t expect it to work—it seemed so easy. But it did work, each time he tried it. In no more than forty-two minutes, he was able to send out a checksum that spoofed the escrow system into mistakenly assuming he was sending out the data that could lead investigators to the escrowed key—when in fact that data would lead them nowhere. Instead, the wiretapper would be faced with a conversation encrypted by the powerful Skipjack algorithm, deemed uncrackable by the NSA itself. (He also found a way in which two people conspiring to defeat the LEAF system could do so even more quickly.)
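The mechanics of the attack are simple enough to sketch. The real LEAF layout and checksum function were classified, so the Python fragment below is a toy stand-in, not Blaze’s actual LEAF-blower; the only property it preserves is that a randomly generated LEAF passes the 16-bit check with probability 1 in 2^16:

```python
import hashlib
import itertools

CHECKSUM_BITS = 16

def leaf_is_valid(leaf: bytes, session_iv: bytes) -> bool:
    # Toy stand-in for the chip's classified checksum test: a random
    # LEAF passes with probability 2**-16, which is all the attack
    # actually depends on.
    digest = hashlib.sha256(leaf + session_iv).digest()
    return int.from_bytes(digest[:2], "big") == 0

def forge_leaf(session_iv: bytes):
    # Keep feeding the verifier bogus LEAFs until one happens to pass
    # the 16-bit check -- about 2**16 (65,536) tries on average.
    for attempt in itertools.count(1):
        bogus = attempt.to_bytes(16, "big")  # junk instead of a real ID and key
        if leaf_is_valid(bogus, session_iv):
            return attempt, bogus

tries, leaf = forge_leaf(b"demo-iv")
print(f"forged a valid-looking LEAF after {tries:,} tries")
```

With only 2^16 possibilities, even a modest rate of trial LEAFs per second finishes in minutes to hours, which is roughly the arithmetic behind Blaze’s forty-two-minute figure.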

What Blaze did not know was that the small checksum space was no accident but an artifact of the haste with which Clipper was prepared. During the hurried design process the NSA engineers consulted with various technical experts at telephone companies, and were warned that with wireless phones, any system that required transmission of too many bits would be deemed impractical. So the LEAF field was limited to 128 bits. Of that, 32 bits had to be used for the chip identifiers, leaving only 96 bits for an actual encryption key and the checksum. The NSA wanted a large checksum, but the FBI insisted on using 80 bits so the full session key would be transmitted. (An alternative may have been to leave off some of the key bits and allow the FBI to complete the decoding by a brute-force attack. If, for instance, eight bits had been diverted from the keyspace to the checksum, the FBI could have run through a mere 256 different alternatives to find its key—but Blaze’s attempt to crack the checksum would have taken not 42 minutes, but more than a week. That’s a long time on hold.)
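The tradeoff in that parenthetical can be checked with back-of-the-envelope arithmetic. The bit budget below comes straight from the text, with Blaze’s forty-two minutes used to estimate the per-try cost:

```python
# LEAF bit budget as described in the text: a 128-bit field split into
# a 32-bit chip identifier, an 80-bit session key, and a 16-bit checksum.
ID_BITS, KEY_BITS, CHECKSUM_BITS = 32, 80, 16
assert ID_BITS + KEY_BITS + CHECKSUM_BITS == 128  # the phone-company limit

# Blaze searched 2**16 checksums in about 42 minutes.
minutes_per_try = 42 / 2**16

# The alternative design: divert 8 key bits into a 24-bit checksum.
fbi_alternatives = 2**8                    # FBI's brute-force work on the key
forgery_minutes = minutes_per_try * 2**24  # attacker's work on the checksum

print(f"FBI key completion: {fbi_alternatives} candidates")
print(f"checksum forgery: {forgery_minutes / (60 * 24):.1f} days")
```

At Blaze’s rate, 2^24 checksums works out to roughly seven and a half days, matching the chapter’s “more than a week.”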

In a few days, Blaze sent a draft paper of his findings to his colleagues at Bell Labs. Most of them couldn’t believe it. “Are you sure about this?” they asked, suggesting he recheck his work. He did. Then he began the more delicate process of checking it with outsiders. One morning Blaze girded himself and sent a fax of his draft to Fort Meade. Right after lunch he got a call back, affirming his results were technically correct.

“What are you planning on doing with this?” asked his NSA contact.

Blaze took a deep breath. “I’d like to publish it.”

To his surprise, no objection was raised. His NSA reader did point out a couple of errors in numerical transcription and one grammatical error. Now all Blaze had to do was get an okay from his employer—who had millions of dollars riding on its Clipper phones. Though there were some who wanted to bury the paper, eventually Blaze managed to convince his bosses that it would be impossible to keep his findings secret, so they shouldn’t even try. In any case, John Markoff of the New York Times had already gotten wind of the work. Blaze got permission to send him a draft, so that whatever story ran would be accurate. Markoff called back for some clarification and a few hours later called back again and asked Blaze a strange question: how newsworthy did he consider the story? Blaze felt that it was indeed a story—it showed how rushed the NSA was to get its system out, and emphasized how dangerous it was to foist something half baked on the public—but not a front-page story or anything like that. Not long afterward, Markoff called again, almost apologetically, and said that it had been a slow news day so the story was going to be more prominently placed. Blaze figured that meant it would lead the business section.

He’d heard that you could get the next day’s paper at 9 P.M. if you went to the Times Building, and he was curious enough to do so. After opening the paper, he went through it and was disappointed to find nothing. “It hadn’t occurred to me to even look on the front page until I had gotten out of the building.” But there it was—leading the entire paper on the sweet spot in the rightmost column of page one, headlined “FLAW DISCOVERED IN FEDERAL PLAN FOR WIRETAPPING.”

This was significant in several ways. First, though the flaw itself could be fixed—and arguably didn’t compromise security much—the very fact that such a weakness existed put a permanent taint on a system dependent on public trust. But perhaps more important was that the former backwater, mumbo-jumbo subject of crypto had raised its profile so high that even a moderate development like Blaze’s crack could be seen by the Times editors as the most important story in the world that day. What made this dry topic sexy was the whiff of a Big Brother who couldn’t even program correctly. The government unintentionally played into that role when an imperious NSA official insisted that Blaze’s attack, while feasible, was unlikely in practice—not a particularly comforting assurance from the nation’s cryptographic caretaker. Much stronger was Marty Hellman’s assertion, “The government is fighting an uphill battle.”

Meanwhile, after some initial supply problems, the government was already starting to use Clipper phones. (The more comprehensive Capstone chips, designed to escrow computer communications, were late in entering the pipeline.) Approximately once a week, four couriers with security clearance—two each from NIST and the Treasury Department—flew from Washington, D.C., to Torrance, California, to the so-called programming facility at Mykotronx headquarters. (The redundancy was intentional, conforming to the Two-Person Integrity Protocol also used for nuclear weapon controls.) Once inside they waited while a Sun workstation did its work, first generating the unique cryptographic keys that would be blown into the MYK-78 (Clipper) chips, then splitting the keys into two parts and creating two stacks of floppy disks, each one with a set of partial keys. To reconstruct the full keys inside the chips required both sets of disks.

Backup sets were produced by the same method. Then the disks were separated, each one going with a pair of couriers. A plastic seal went over the disks. When the couriers returned to their respective agencies, the disks were placed in double-walled safes meeting government standards for classified materials. A set of the backups went in another safe. And there they waited, about 20,000 key splits by May 1994, sitting undisturbed while the war over Clipper continued.
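Levy doesn’t spell out exactly how the keys were divided between the two stacks of disks, but the standard two-of-two technique, consistent with published descriptions of the escrow system, is XOR secret splitting: either component alone is statistically independent of the key, and only both together recover it. A minimal sketch, with the agency names used purely as labels:

```python
import secrets

KEY_BYTES = 10  # an 80-bit, Skipjack-sized unit key

def split_key(unit_key: bytes):
    # Two-of-two secret splitting: one share is uniformly random, the
    # other is the key XORed with that randomness. Either share alone
    # reveals nothing about the key.
    share_a = secrets.token_bytes(len(unit_key))
    share_b = bytes(x ^ y for x, y in zip(unit_key, share_a))
    return share_a, share_b

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    # Only with both escrowed components can the unit key be rebuilt.
    return bytes(x ^ y for x, y in zip(share_a, share_b))

unit_key = secrets.token_bytes(KEY_BYTES)
nist_disk, treasury_disk = split_key(unit_key)
assert recombine(nist_disk, treasury_disk) == unit_key
```

This is why a court-ordered wiretap required disks from both NIST and Treasury: neither safe’s contents alone could decrypt anything.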

In late January 1994, the Computer Professionals for Social Responsibility had written a letter to the president urging that he rescind the Clipper proposal. It was cosigned by privacy experts, industry figures, academics, and cryptographers, and supplemented by signatures gathered over the Internet. Within a few months, the petition—one of the first Internet political protests—boasted over 47,000 endorsers. While a skeptic might dismiss this as a result of overheated Net-heads, a Time/CNN poll showed that the government had clearly suffered a Custer-sized rout in the public relations arena. Eighty percent of the American public now opposed Clipper.

Not that it did any good. The administration was betting that the export regulations would prevent strong crypto from being built into products that people routinely used, and key escrow would be the only game in town. But Congress had the power to change those regulations. And pushing hardest on the issue was a thirty-eight-year-old single woman in her first term in Congress.

Maria Cantwell was a daughter of an Indiana politician. She’d moved to Washington State in her twenties, served in the legislature there, and in 1992 pulled off a successful run for the House. Her district, consisting of part of Seattle and the towns east of Lake Washington, was loaded with high-tech companies, from Nintendo to Microsoft. So when choosing a committee to serve on she focused on one of the software industry’s main concerns, exports, and requested the Foreign Affairs Committee—specifically, its subcommittee on economic policy, trade, and environment.

She’d hardly gotten familiar enough with the House to find the cloakroom when the Clipper announcement hit. It infuriated her big high-tech constituents, and she began to look more deeply into the problem, particularly at the export regulations. She worked closely with the affected software companies, not only those in her district like Microsoft but others like Lotus. The more she learned about the export regulations of crypto, the more absurd they seemed in the computer age. They can’t be so myopic as to think cryptography is a munition, she’d say to Sam Gejdenson, the subcommittee chair and one of her legislative mentors. If they continue, you won’t be able to get protection on the Internet.

Meanwhile, the export situation was at a standstill. In 1992, some of the leaders of the new industry, like Lotus’s Ray Ozzie and Microsoft’s Nathan Myhrvold, had spent an incredible amount of energy negotiating a deal with the NSA. The talks were a classic culture clash. The software guys thought it absurd that government was attempting to contain bits of code within national borders, when the very same ciphers were openly published in countries from Germany to Russia. It was the worst sin among nerds: illogical behavior. Or was it? “Don’t you realize,” Myhrvold once asked one of the spooks in a briefing session, “that you’re like the little Dutch boy, trying to use your fingers to plug the dike against a sea of strong crypto?”

His tormentor smiled. “Every day the dike doesn’t break,” he said softly, “is a victory.” And it was true. Sure, the crypto genie had escaped the bottle. But if you throw enough obstacles in the genie’s way, it’ll take him a long time to perform any magic.

Finally, all that energy resulted in a temporary compromise. Working with an industry group called the Software Publishers Association, the companies got an agreement for “expedited consideration” when they exported software programs sold in shrink-wrap to retail customers. The requirement was that the encryption in those products would be Ron Rivest’s ciphers RC-2 or RC-4, using keys of no more than 40 bits. This would allegedly be increased in subsequent years to keep pace with faster computers. In exchange, the NSA got some restrictions of its own. The regulation would not be formalized in an explicit standard. RSA and the companies using the cipher had to agree to keep the details of its design a secret.
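The 40-bit ceiling mattered because a keyspace that small can simply be enumerated. The search rate below is an illustrative assumption, not a figure from the text:

```python
# 2**40 keys is a search, not a barrier. The rate here is a hypothetical
# mid-1990s figure chosen only for illustration.
keyspace = 2**40
rate = 10**7  # assumed keys tried per second
seconds = keyspace / rate
print(f"exhausting 2**40 keys at {rate:,}/s: about {seconds / 86400:.1f} days")
```

Within a couple of years, 40-bit export keys were in fact being publicly brute-forced, which is why critics treated the limit as security theater.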

But no one particularly liked that deal. Companies had two choices. They could, like Lotus, offer American customers a version with strong (64-bit) encryption, and a weaker version for export. Then foreign customers would wonder why their software had second-class crypto—and sometimes, buy other products. Ray Ozzie claimed that it was already happening with Lotus. (He called the 40-bit limit espionage-enabled encryption.) Or, like Microsoft, they could avoid the hassle of manufacturing and shipping two versions and give everyone weak encryption. Meanwhile, hard-liners in the government felt that by green-lighting an export exemption, no matter what the key length, they were on a slippery slope toward strong crypto. Give the Lotuses and the Microsofts 40 bits now, and tomorrow they’re at your door demanding 48 bits, and more.

But when Cantwell and Gejdenson went to the White House to urge movement toward export of stronger crypto, they hit a brick wall. The Clinton people held firm.

In October 1993 Gejdenson and Cantwell held a subcommittee hearing to draw attention to the problem. “This hearing is about the well-intentioned attempts of the National Security Agency to control that which is uncontrollable,” said Gejdenson. He was talking about export regulations, but he might have been talking about something else—the support from Congress that Fort Meade once took for granted. While the majority of legislators accepted the NSA’s contentions at face value, a cognitive dissonance was emerging between its arguments and what appeared to be a more compelling view of reality. Cantwell put it clearly in her own opening statement: “We are here to discuss, really, competing visions of the future.” On one hand was a mind-set so locked into Cold War posturing that it ignored the inevitable. On the other were the techno-visionaries who powered our future, eager to fortify American ascendancy in a global marketplace.

The hearing’s first witness was Ray Ozzie, who had come prepared with a software demo. He had a screen connected by phone line to his computer in Massachusetts, which he used to venture onto the Internet and download one of “hundreds of thousands” of copies of implementations of DES available overseas. He chose one in German, and downloaded it into his machine within seconds, as anyone in the world could do. But, he noted, if he were then to send the same software back to Germany, he would be guilty of the federal offense of exporting strong crypto.

Next was Steve Walker, a former NSA official who now headed Trusted Information Systems, a consulting firm helping businesses implement crypto. He presented the results of a Software Publishers Association study that identified 264 cryptographic products produced overseas, 123 of which employed DES. Foreign individuals and companies could buy any of these, but not similar products created by American firms because the NSA would not permit their export. “It cannot be clearer,” he said. “The existence of widespread and affordable cryptographic products overseas is an indisputable fact . . . the U.S. government is succeeding only in crippling a vital American industry’s exporting ability.” He then cited specific examples of business lost by American companies, like one firm that lost half of its European customers because it could not provide them strong cryptographic security.

Phil Zimmermann gave testimony that trying to restrict cryptography is like attempting to “regulate the tides and the weather.” Don Harbert, an executive of Digital Equipment Corporation, insisted that “U.S. export controls on encryption must be brought into line with reality.”

One of the committee members who had not been previously vocal in challenging the government, a conservative Californian named Dana Rohrabacher, noted for the record that if it were five years earlier, he would have chastised the witnesses for seeking profit at the potential loss of national security. But now, he said, “the Cold War is over. It is time for us to get on.”

After the public session, security experts swept the room for bugs before the inevitable follow-up hearings involving the interests of the National Security Agency: The Briefing, “where the NSA answers all those questions in secret,” said Gejdenson. NSA briefings were notorious in Congress. They involved a dramatic presentation by the NSA on why our international eavesdropping abilities were so vital, typically including a litany of victories achieved by clandestine snooping (victories that would have been unthinkable without billions of dollars in funding), and perilous international situations that required continued vigilance and support. Perfected by Bobby Ray Inman in his days as NSA director, they initiated legislators into the society of Top Secret, implicitly shifting their allegiance from the citizenry to the intelligence agencies. A newly cleared congressperson would get a presumably unvarnished and reportedly terrifying dose of global reality, after which he or she could be assumed to dutifully support any demands of the National Security Agency, lest the Huns gain a purchase on our liberty. Representatives and senators had been known to venture into the bug-swept room and emerge grim faced, stunning their go-go staffers by remarking, “Well, maybe we should reconsider.”

Not Maria Cantwell. She was among a growing number of legislators who found The Briefing impressive but not persuasive. The issue for these skeptics wasn’t just how important crypto was, or what successes we’d had breaking codes, but whether maintaining export rules was actually productive. If the genie was out of the bottle, so what if American companies couldn’t export? Crooks would get crypto elsewhere!

Cantwell began to prepare a legislative remedy. In 1994 the Foreign Affairs Committee was already planning its periodic overhaul of the export regulations. She prepared H.R. 3627, “Legislation to Amend the Export Administration Act of 1979,” a bill adding a new subsection to the old rules, with specific implications for software exports, including encryption. It would move the decision-making process from the Department of Defense to Commerce, and would essentially make shrink-wrapped or public-domain software exempt from export regulations. It would put an end to the NSA’s game of controlling American crypto by use of the export laws.

Naturally, the administration could not let that stand. When Cantwell was ready to introduce the bill, her staff notified her of an incoming phone call—from the vice president. The only previous time she had engaged Al Gore in a one-on-one had been during the budget battle, when Cantwell, despite severe reservations, had supported the administration (and would eventually wind up losing her reelection campaign in part because of it). What did he want this time?

“I want you to stop this bill,” he said. He reiterated the stuff from the briefings about national security and all that.

Cantwell held firm. “I’m sorry, Mr. Vice President,” she said. “I respect your opinion but I’m not changing my mind.”

In a way, that was a turning point for Maria Cantwell. She got the bill through the subcommittee and kept pressing, even though fellow committee members were already trying to get her to drop the thing. Even before she left the hearing room after the vote—she hadn’t even gotten up from her chair—one representative came up to her and said outright, “If you don’t stop this it’s going to get very ugly.” And Maria Cantwell said to herself, “I’m not stopping.”

On November 24, 1993, Cantwell introduced H.R. 3627 on the House floor. Her comments were blunt. “The United States’ export control system is broken,” she said. “It was designed as a tool of the Cold War, to help fight against enemies that no longer exist. The myriad federal agencies responsible for controlling the flow of exports from our country must have a new charter, recognizing today’s realities.”

The pressure continued, though most members were collegial in their attempts at persuasion. There was one instance in which a fellow Democrat came up to her on the floor and began berating her for ignoring national security issues. She felt intimidated but more than ever was convinced she should go on. With all the forces lined up to bolster these bizarre export laws and the silly Clipper Chip, it struck her as an exercise in unchecked power—against consumers.

Still, she knew that on this issue she was out there. Though she was doing yeoman service for the techies she represented, most of her constituents in Washington State’s First Congressional District preferred her to be concentrating on issues such as health care, and here she was, locked in meetings with National Security Advisor Tony Lake. One day she heard that Bill Gates would be in town. So she asked the people at Microsoft who had been working with her—Nathan Myhrvold and company counsel Bill Neukom—if they could convince the world’s most famous techno-geek to lobby her colleagues on the matter. I’m out on a political limb here, she pleaded. Without publicity, she had Bill Gates address the intelligence committee. The National Security stooges started to explain to the billionaire how important the export laws were, but the icon of the New Economy had little patience for being lectured. Gates let them know that was a bullshit reason. The committee members didn’t get offended—it was kind of a kick, getting snapped at by the world’s richest guy. You certainly had to take him seriously when he talked about what was good for business.

Cantwell dug in her heels with the White House, too. She asked them not to fight her bill, but to let it take its course in Congress. The response was unexpected, and it came two days before the vote. It was a deal. If we change our position, the Gore people wanted to know, would you drop the bill? They suggested that instead of forcing the Clipper Chip on people, they would advocate a different, voluntary key escrow scheme. And maybe it could be based on more flexible software implementations than that already antiquated chip. And maybe, instead of only government escrow facilities, some could be in the more-trusted private sector, like banks or security companies.

A significant retreat, but it was still an escrow scheme, not at all the ultimate solution that Cantwell and her constituents wanted. On the other hand, the chances of her bill passing were equivalent to that of Microsoft’s shipping an operating system without bugs. (Even then it would face a near-certain veto.) Cantwell went back to the people who had been fighting the battle long before she switched Washingtons. Bruce Heiman of the industry group called the Business Software Alliance was encouraged that the administration was giving a framework for a compromise. Nathan Myhrvold straight out celebrated. “They blinked,” he later said. All of Cantwell’s advisors agreed, though, that before she stood down, she should get promises in writing.

On July 20, 1994, the afternoon before the vote, the letter from Al Gore arrived. After the usual flatulence (“I write to express my sincere appreciation for your efforts to move the national debate forward . . .”) Gore got to the point.

The administration understands the concerns that industry has regarding the Clipper Chip. We welcome the opportunity to work with industry to design a versatile, less expensive system. Such a key escrow system would be implementable in software, firmware, hardware, or any combination thereof, would not rely on a classified algorithm, would be voluntary, and would be exportable. . . . We also recognize that a new key escrow encryption system must permit the use of private-sector key escrow agents as one option.

Apparently, the White House figured that the exercise was simply a way to quiet a potential firestorm. (Later in the summer, a Defense Department official seeking clarification on the implications of the policy shift was told that the letter was intended “to placate Rep. Cantwell and avoid a national debate.”) But when the contents of Gore’s missive found their way to the front page of the Washington Post the next day (a slight embarrassment for Cantwell, who didn’t want to look like she was showboating), the Gore people rediscovered that the Bosnia of telecommunications was as thorny as ever. The White House had made its promises without clearing them with the NSA or the FBI. (The first Clint Brooks had heard about it was the day it ran in the Washington Post.) Cantwell got a call from a Gore person. Do you mind, he asked, if we, um, rescind the letter?

“Do you know how silly you’d look?” she replied. It was, after all, Gore’s letter, Gore’s words. She promised that she wasn’t out to milk the incident with the press, but the news was out there, and she didn’t have the authority to let him rescind the agreement. So the deal stood. Cantwell dropped her bill, though in the next few years it would be only the first of a number of increasingly popular congressional initiatives to reform the export rules. Meanwhile, the Gore letter, whether intentional or not, was essentially a blueprint for the direction that the administration would take in tinkering with their ill-fated Clipper Chip. A step backward. A rejection. Another step backward. Stalling and confusion, while the great honest debate that Clint Brooks had envisioned about a national crypto policy never did come to the forefront. Meanwhile, the platform that Brooks considered absolutely essential—a full encryption solution to protect privacy, a policy that would generate a pervasive digital signature policy to empower electronic commerce and prevent electronic forgeries, and access for law enforcement—never did get straightened out.

Clint Brooks himself wanted out of the struggle. After a couple of years of driving back and forth from Maryland to D.C., having the same arguments with the same people, he asked the new NSA director if he could work on something that utilized his talents more effectively. His request was granted. Nirvana was lost.