Crypto-Gram

January 15, 2006

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@schneier.com
<http://www.schneier.com>
<http://www.counterpane.com>

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-0601.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.


In this issue:
     Anonymity and Accountability
     Cell Phone Companies and Security
     Crypto-Gram Reprints
     Dutch Botnet
     Internet Explorer Sucks
     Security Notes from All Over: Electronic Shackles and Telephone Communications
     News
     Insider Threat Statistics
     Are Computer-Security Export Controls Back?
     Vehicle Tracking in the UK
     Counterpane News
     NSA and Bush's Illegal Eavesdropping
     The Security Threat of Unchecked Presidential Power
     Project Shamrock
     Comments from Readers


Anonymity and Accountability

In a recent essay, Kevin Kelly warns of the dangers of anonymity. It’s OK in small doses, he maintains, but too much of it is a problem: “(I)n every system that I have seen where anonymity becomes common, the system fails. The recent taint in the honor of Wikipedia stems from the extreme ease which anonymous declarations can be put into a very visible public record. Communities infected with anonymity will either collapse, or shift the anonymous to pseudo-anonymous, as in eBay, where you have a traceable identity behind an invented nickname.”

Kelly has a point, but it comes out all wrong. Anonymous systems are inherently easier to abuse and harder to secure, as his eBay example illustrates. In an anonymous commerce system—where the buyer does not know who the seller is and vice versa—it’s easy for one to cheat the other. This cheating, even if only a minority engaged in it, would quickly erode confidence in the marketplace, and eBay would be out of business. The auction site’s solution was brilliant: a feedback system that attached an ongoing “reputation” to those anonymous user names, and made buyers and sellers accountable for their actions.

And that’s precisely where Kelly makes his mistake. The problem isn’t anonymity; it’s accountability. If someone isn’t accountable, then knowing his name doesn’t help. If you have someone who is completely anonymous, yet just as completely accountable, then—heck, just call him Fred.

History is filled with bandits and pirates who amass reputations without anyone knowing their real names.

EBay’s feedback system doesn’t work because there’s a traceable identity behind that anonymous nickname. EBay’s feedback system works because each anonymous nickname comes with a record of previous transactions attached, and if someone cheats someone else then everybody knows it.
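
To make that mechanism concrete, here is a minimal sketch, in Python, of a reputation record attached to a pseudonym rather than to a legal identity. Everything in it (the class, the fields, the scoring rule) is invented for illustration; it is not how eBay actually implements feedback.

  # Toy illustration: accountability attached to a pseudonym, not a name.
  # The data structures and scoring rule are invented for this sketch.

  from collections import defaultdict

  class ReputationLedger:
      def __init__(self):
          # pseudonym -> list of (counterparty, outcome) records
          self.history = defaultdict(list)

      def record(self, pseudonym, counterparty, positive):
          self.history[pseudonym].append((counterparty, positive))

      def score(self, pseudonym):
          """Fraction of past transactions rated positive. The pseudonym
          is accountable for its record even though no legal name is known."""
          records = self.history[pseudonym]
          if not records:
              return 0.0
          return sum(1 for _, ok in records if ok) / len(records)

  ledger = ReputationLedger()
  ledger.record("fred_1977", "buyer42", positive=True)
  ledger.record("fred_1977", "buyer99", positive=False)
  print(ledger.score("fred_1977"))   # 0.5 -- "Fred" is accountable, yet anonymous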

Similarly, Wikipedia’s veracity problems are not a result of anonymous authors adding fabrications to entries. They’re an inherent property of an information system with distributed accountability. People think of Wikipedia as an encyclopedia, but it’s not. We all trust Britannica entries to be correct because we know the reputation of that company, and by extension its editors and writers. On the other hand, we all should know that Wikipedia will contain a small amount of false information because no particular person is accountable for accuracy—and that would be true even if you could mouse over each sentence and see the name of the person who wrote it.

Historically, accountability has been tied to identity, but there’s no reason why it has to be so. My name doesn’t have to be on my credit card. I could have an anonymous photo ID that proved I was of legal drinking age. There’s no reason for my e-mail address to be related to my legal name.

This is what Kelly calls pseudo-anonymity. In these systems, you hand your identity to a trusted third party that promises to respect your anonymity to a limited degree. For example, I have a credit card in another name from my credit-card company. It’s tied to my account, but it allows me to remain anonymous to merchants I do business with.

The security of pseudo-anonymity inherently depends on how trusted that “trusted third party” is. Depending on both local laws and how much they’re respected, pseudo-anonymity can be broken by corporations, the police or the government. It can be broken by the police collecting a whole lot of information about you, or by ChoicePoint collecting billions of tiny pieces of information about everyone and then making correlations. Pseudo-anonymity is only limited anonymity. It’s anonymity from those without power, and not from those with power. Remember that anon.penet.fi couldn’t stay up in the face of government.

In a perfect world, we wouldn’t need anonymity. It wouldn’t be necessary for commerce, since no one would ostracize or blackmail you based on what you purchased. It wouldn’t be necessary for internet activities, because no one would blackmail or arrest you based on who you corresponded with or what you read. It wouldn’t be necessary for AIDS patients, members of fringe political parties or people who call suicide hotlines. Yes, criminals use anonymity, just like they use everything else society has to offer. But the benefits of anonymity—extensively discussed in an excellent essay by Gary T. Marx—far outweigh the risks.

In Kelly’s world—a perfect world—limited anonymity is enough because the only people who would harm you are individuals who cannot learn your identity, and not those in power who can.

We do not live in a perfect world. We live in a world where information about our activities—even ones that are perfectly legal—can easily be turned against us. Recent news reports have described a student being hounded by his college because he said uncomplimentary things in his blog, corporations filing SLAPP lawsuits against people who criticize them, and people being profiled based on their political speech.

We live in a world where the police and the government are made up of less-than-perfect individuals who can use personal information about people, together with their enormous power, for imperfect purposes. Anonymity protects all of us from the powerful by the simple measure of not letting them get our personal information in the first place.

This essay originally appeared in Wired:
<http://www.wired.com/news/columns/0,70000-0.html>

Kelly’s original essay:
<http://www.edge.org/q2006/q06_4.html>

Gary T. Marx on anonymity:
<http://web.mit.edu/gtmarx/www/anon.html>


Cell Phone Companies and Security

This is a fascinating story of cell phone fraud, security, economics, and externalities. Its moral is obvious, and demonstrates how economic considerations drive security decisions. According to “The Globe and Mail”:

“Susan Drummond was a customer of Rogers Wireless, a large Canadian cell phone company. Her phone was cloned while she was on vacation, and she got a $12,237.60 phone bill (her typical bill was $75). Rogers maintains that there is nothing to be done, and that Drummond has to pay.”

Like all cell phone companies, Rogers has automatic fraud detection systems that detect this kind of abnormal cell phone usage. They don’t turn the cell phones off, though, because they don’t want to annoy their customers.
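
As a rough illustration of what such a fraud-detection rule might look like, here is a toy sketch in Python. The threshold, the data fields, and the alerting policy are all assumptions made up for this example; they say nothing about how Rogers' actual system works.

  # Hypothetical sketch of a usage-anomaly rule; thresholds and fields are
  # invented. Real carrier systems use far richer signals: call destinations,
  # call velocity, roaming patterns, cloning signatures, and so on.

  from dataclasses import dataclass

  @dataclass
  class Account:
      customer_id: str
      avg_monthly_bill: float       # historical baseline, e.g. $75
      month_to_date_charges: float

  def flag_suspected_fraud(account, multiplier=10.0):
      """Flag an account whose month-to-date charges exceed a multiple
      of its historical baseline."""
      return account.month_to_date_charges > multiplier * account.avg_monthly_bill

  drummond = Account("S. Drummond", avg_monthly_bill=75.0,
                     month_to_date_charges=12237.60)
  if flag_suspected_fraud(drummond):
      # The economic question in the story: who acts on this alert,
      # and who pays if nobody does?
      print("Suspicious usage detected")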

“Ms. Hopper [a manager in Rogers’ security department] said terrorist groups had identified senior cell phone company officers as perfect targets, since the company was loath to shut off their phones for reasons that included inconvenience to busy executives and, of course, the public-relations debacle that would take place if word got out.”

As long as Rogers can get others to pay for the fraud, this makes perfect sense. Shutting off a phone based on an automatic fraud-detection system costs the phone company in two ways: people inconvenienced by false alarms, and bad press. But the major cost of not shutting off a phone remains an externality: the customer pays for it.

In fact, there seems to be some evidence that Rogers decides whether or not to shut off a suspicious phone based on the customer’s ability to pay:

“Ms. Innes [a vice-president with Rogers Communications] said that Rogers has a policy of contacting consumers if fraud is suspected. In some cases, she admitted, phones are shut off automatically, but refused to say what criteria were used. (Ms. Drummond and Mr. Gefen believe that the company bases the decision on a customer’s creditworthiness. ‘If you have the financial history, they let the meter run,’ Ms. Drummond said.) Ms. Drummond noted that she has a salary of more than $100,000, and a sterling credit history. ‘They knew something was wrong, but they thought they could get the money out of me. It’s ridiculous.'”

Makes sense from Rogers’ point of view. High-paying customers are 1) more likely to pay, and 2) more damaging if pissed off in a false alarm. Again, economic considerations trump security.

Rogers is defending itself in court, and shows no signs of backing down:

“In court filings, the company has made it clear that it intends to hold Ms. Drummond responsible for the calls made on her phone. ‘. . . the plaintiff is responsible for all calls made on her phone prior to the date of notification that her phone was stolen,’ the company says. ‘The Plaintiff’s failure to mitigate deprived the Defendant of the opportunity to take any action to stop fraudulent calls prior to the 28th of August 2005.'”

The solution here is obvious: Rogers should not be able to charge its customers for telephone calls they did not make. Ms. Drummond’s phone was cloned; there is no possible way she could notify Rogers of this before she saw calls she did not make on her bill. She is also completely powerless to affect the anti-cloning security in the Rogers phone system. To make her liable for the fraud is to ensure that the problem never gets fixed.

Rogers is the only party in a position to do something about the problem. The company can implement, and according to the article has implemented, automatic fraud-detection software.

Rogers customers will pay for the fraud in any case. If they are responsible for the loss, either they’ll take their chances and pay a lot only if they are the victims, or there’ll be some insurance scheme that spreads the cost over the entire customer base. If Rogers is responsible for the loss, then the customers will pay in the form of slightly higher prices. But only if Rogers is responsible for the loss will they implement security countermeasures to limit fraud.

And if they do that, everyone benefits.

<http://www.globetechnology.com/servlet/story/…>
<http://it.slashdot.org/article.pl?sid=05/12/17/…>


Crypto-Gram Reprints

Crypto-Gram is currently in its ninth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram-back.html>. These are a selection of articles that appeared in this calendar month in other years.

Fingerprinting Students:
<http://www.schneier.com/crypto-gram-0501.html#1>

Cyberwar:
<http://www.schneier.com/crypto-gram-0501.html#10>

Diverting Aircraft and National Intelligence:
<http://www.schneier.com/crypto-gram-0401.html#11>

Fingerprinting Foreigners:
<http://www.schneier.com/crypto-gram-0401.html#3>

Color-coded Terrorist Threat Levels:
<http://www.schneier.com/crypto-gram-0401.html#1>

Militaries and Cyber-War:
<http://www.schneier.com/crypto-gram-0301.html#1>

A cyber Underwriters Laboratories?
<http://www.schneier.com/crypto-gram-0101.html#1>

Code signing:
<http://www.schneier.com/crypto-gram-0101.html#10>

Block and stream ciphers:
<http://www.schneier.com/…>


Dutch Botnet

Back in October, the Dutch police arrested three people who created a large botnet and used it to extort money from U.S. companies. When the trio was arrested, authorities said that the botnet consisted of about 100,000 computers. The actual number was 1.5 million computers.

And I’ve heard reports from reputable sources that the actual actual number was “significantly higher.”

And it may still be growing. The bots continually scan the network and try to infect other machines. They do this autonomously, even now that the command-and-control node has been shut down. Since most of those 1.5 million machines—or however many there are—still have the botnet software running on them, it’s reasonable to believe that the botnet is still growing.

<http://informationweek.com/story/showArticle.jhtml?…>


Internet Explorer Sucks

This study is from August, but I missed it. The researchers tracked three browsers (MSIE, Firefox, Opera) in 2004 and counted which days they were “known unsafe.” Their definition of “known unsafe”: a remotely exploitable security vulnerability had been publicly announced and no patch was yet available.

MSIE was 98% unsafe. There were only 7 days in 2004 without an unpatched publicly disclosed security hole.

Firefox was 15% unsafe. There were 56 days with an unpatched publicly disclosed security hole; 30 of those days were due to a hole that affected only Mac users. Firefox on Windows was 7% unsafe.

Opera was 17% unsafe: 65 days. That number is actually a little better than it deserves to be, because two of the unpatched periods happened to overlap.

This underestimates the risk, because it doesn’t count vulnerabilities known to the bad guys but not publicly disclosed (and it’s foolish to think that such things don’t exist). So the “98% unsafe” figure for MSIE is generous, and the situation might be even worse.
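
For those curious about the arithmetic, here is a small Python sketch of how such a count can be computed from disclosure-to-patch windows. The dates below are invented; the point is only the method: a browser is “unsafe” on any day covered by at least one window, and overlapping windows (as with Opera) are counted once rather than twice.

  # Sketch of the counting method described above, with invented dates.

  from datetime import date, timedelta

  def unsafe_days(windows):
      """Count distinct days covered by [disclosure, patch) windows,
      merging overlaps so no day is double-counted."""
      days = set()
      for start, end in windows:
          d = start
          while d < end:
              days.add(d)
              d += timedelta(days=1)
      return len(days)

  # Two overlapping unpatched periods cover 20 distinct days, not 25.
  example = [(date(2004, 3, 1), date(2004, 3, 16)),    # 15 days
             (date(2004, 3, 11), date(2004, 3, 21))]   # 10 days, 5 overlapping
  n = unsafe_days(example)
  print(n, "unsafe days =", round(100 * n / 366, 1), "% of 2004 (a leap year)")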

<http://bcheck.scanit.be/bcheck/page.php?name=STATS2004>


Security Notes from All Over: Electronic Shackles and Telephone Communications

The article is in Hebrew, but the security story is funny in any language.

It’s about a prisoner who was forced to wear an electronic shackle to ensure that he did not violate the terms of his house arrest. The shackle is pretty simple: if the prisoner leaves the defined detention area, it signals the local police over the telephone line.

How do you defeat a system such as this? Just stop paying your phone bill and wait for the phone company to shut off service.

<http://www.haaretz.co.il/hasite/pages/ShArt.jhtml?…>


News

Two stories that shamelessly hype computer crime:
<http://www.cnn.com/SPECIALS/2005/online.security/>
<http://www.usatoday.com/tech/news/internetprivacy/…>
Beware the Four Horsemen of the Information Apocalypse: terrorists, drug dealers, kidnappers, and child pornographers. Seems like you can scare the public into allowing the government to do anything with those four.

Microsoft received a Common Criteria (CC) EAL 4+ certification for Windows, demonstrating how weak such a certification really is:
<http://www.eweek.com/article2/0,1895,1901965,00.asp>

After FBI agents expressed frustration that the Office of Intelligence Policy and Review wasn’t approving their orders under Section 215 of the Patriot Act, procedural changes were made allowing the FBI to bypass that office.
<http://www.epic.org/foia_notes/note10.html>
Remember, the issue here is not whether or not the FBI can engage in counterterrorism. The issue is the erosion of judicial oversight—the only check we have on police power. And this power grab is dangerous regardless of which party is in the White House at the moment.

Meanwhile, the U.S. military is spying on Americans. Specifically, the Department of Defense is collecting data on legal and peaceful war protesters, in violation of U.S. law.
<http://www.msnbc.msn.com/id/10454316/>

Four hundred pounds of high explosives were stolen from a “bunker” outside Albuquerque owned by Cherry Engineering. Note that it had no guards and no surveillance cameras:
<http://www.abcnews.go.com/GMA/story?id=1424214>
It was recovered:
<http://www.atf.gov/press/fy06press/field/…>

An interesting interview with OpenSSH developer Damien Miller:
<http://www.securityfocus.com/columnists/375>

Adaptable criminals: as automobile security devices become more effective, thieves are more likely to break into homes in order to steal the keys.
<http://www.themercury.news.com.au/common/story_page/…>

Idiotic article on TPM:
<http://www.msnbc.msn.com/ID/10441443>
My commentary:
<https://www.schneier.com/blog/archives/2005/12/…>

Here’s a child pornographer who received the Sober.Y worm. This worm has an official-sounding message to entice recipients to open the attachment. He got so scared that he turned himself in to the police.
<http://news.yahoo.com/s/nm/20051220/wr_nm/…>

The story of the UMass Dartmouth student who claimed that Homeland Security agents visited him after he requested Mao Zedong’s “Little Red Book” from the library is a hoax:
<http://www.southcoasttoday.com/daily/12-05/12-24-05/…>
I don’t know what the moral is here. 1) He’s an idiot. 2) Don’t believe everything you read. 3) We live in such an invasive political climate that such stories are easily believable. 4) He’s definitely an idiot.

New TSA guidelines from The Onion:
<http://www.theonion.com/content/node/43716>

Richard M Smith has some interesting ideas on how to test if the NSA is eavesdropping on your e-mail.
<http://www.computerbytesman.com/privacy/…>
The only problem is that you might get a knock on your door from some random investigative agency. Or get searched every time you try to get on an airplane. But I think that risk is pretty low, actually. If people actually do this, please report back. I’m very curious.

Good essay on bug bounties, and why they’re not a substitute for security auditing:
<http://www.pebbleandavalanche.com/weblog/2005/12/19/…>
This is not to say that bug bounties aren’t a good idea. They’re a good addition to rigorous software development and testing.

Bomb-sniffing wasps may be more effective—and cheaper—than alternatives:
<http://www.usatoday.com/tech/news/…>
Bomb-sniffing bees, too:
<http://www.defensetech.org/archives/001754.html>

Here’s how to make an RFID-blocking wallet out of duct tape:
<http://www.rpi-polymath.com/ducttape/RFIDWallet.php>

The U.S. Department of Justice is no better than anyone else at protecting individual privacy:
<http://www.informationweek.com/news/…>

Good stuff about the unforeseen security effects of weak ID cards:
<http://www.theregister.co.uk/2005/12/28/…>

EPIC’s Top Ten Privacy Stories of 2005, and their Top Ten Issues to Watch in 2006. Definitely worth reading.
<http://www.epic.org/alert/EPIC_Alert_yir2005.html>

The Treasury Department estimates that cybercrime netted $105 billion in 2004, more than illegal drugs. The question always is: how did they calculate that number? If I download an audio CD, is that $15 in cybercrime? If so, don’t believe the total.
<http://money.cnn.com/2005/12/29/technology/…>

A hand-held device that disables passive RFID chips:
<https://events.ccc.de/congress/2005/wiki/…>

A fascinating data-mining experiment using Amazon wish lists:
<http://www.applefritter.com/node/view/10074>
Now, imagine the false alarms and abuses that are possible if you have lots more data, and lots more computers to slice and dice it. Of course, there are applications where this sort of data mining makes a whole lot of sense. But finding terrorists isn’t one of them. It’s a needle-in-a-haystack problem, and piling on more hay doesn’t help matters much.
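
A quick back-of-the-envelope calculation shows why. Every number below is invented for illustration, but even a wildly optimistic screening system drowns in false alarms when the thing it is looking for is rare:

  # Base-rate arithmetic with invented numbers; no real system is this good.

  population     = 300_000_000   # people (or wish lists) being screened
  true_targets   = 1_000         # actual bad guys hidden in that population
  hit_rate       = 0.99          # chance the system flags a real target
  false_positive = 0.001         # chance it flags an innocent person (0.1%)

  true_alarms  = true_targets * hit_rate
  false_alarms = (population - true_targets) * false_positive

  print(round(true_alarms), "true alarms")       # about 990
  print(round(false_alarms), "false alarms")     # about 300,000
  share = true_alarms / (true_alarms + false_alarms)
  print(round(100 * share, 2), "% of alarms point at a real target")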

In Wisconsin, electronic voting machines must produce paper ballots and have open-source software.
<http://wistechnology.com/article.php?id=2585>
My previous essays on electronic voting:
<http://www.schneier.com/essay-068.html>
<http://www.schneier.com/crypto-gram-0312.html#9>
<http://www.schneier.com/crypto-gram-0012.html#1>

An airline passenger wrote the words “suicide bomber” in his journal, and was arrested.
<http://news.yahoo.com/s/nm/20060105/od_uk_nm/…>
<http://www.mercurynews.com/mld/mercurynews/news/…>
My commentary is here:
<https://www.schneier.com/blog/archives/2006/01/…>

Anyone can get anyone’s phone records:
<http://www.suntimes.com/output/news/…>
<http://www.concurringopinions.com/archives/2006/01/…>
<http://www.boingboing.net/2006/01/08/…>
<http://west.epic.org/archives/2006/01/…>
Seems like this is done by something called “pretexting,” which means calling up the phone company and lying about who you are. Sounds like fraud to me.

Annoying people anonymously on the Internet is against U.S. law:
<http://news.com.com/…>
See the comment by an attorney, who says this was previously true:
<http://www.boingboing.net/2006/01/09/…>
What does it mean for our society when obviously stupid laws like this get passed, and we have to rely on the police being nice enough to not enforce them?

Security checks for space travelers, including physical screening and matching people against a watch list:
<http://www.cnn.com/2006/TECH/space/01/04/…>
<http://news.bbc.co.uk/1/hi/sci/tech/4589072.stm>

“Residents of a trendy London neighbourhood are to become the first in Britain to receive ‘Asbo TV’—television beamed live to their homes from CCTV cameras on the surrounding streets.”
<http://www.timesonline.co.uk/article/…>

Interesting story about forged credentials and security:
<https://www.schneier.com/blog/archives/2006/01/…>

REAL ID is turning out to be more expensive than initially anticipated.
<http://news.yahoo.com/s/ap/20060112/ap_on_re_us/real_id>
Remember, security is a trade-off. REAL ID is a bad idea primarily because the security gained is not worth the enormous expense.

The ACLU has a new site on REAL ID:
<http://www.realnightmare.org/>


Insider Threat Statistics

Interesting statistics from Europe. (I doubt they’re any different in the U.S.)

* One in five workers (21%) let family and friends use company laptops and PCs to access the Internet.

* More than half (51%) connect their own devices or gadgets to their work PC.

* A quarter of these do so every day.

* Around 60% admit to storing personal content on their work PC.

* One in ten confessed to downloading content at work they shouldn’t.

* Nearly two-thirds (62%) admitted they have a very limited knowledge of IT security.

* More than half (51%) had no idea how to update the anti-virus protection on their company PC.

* Five percent say they have accessed areas of their IT system they shouldn’t have.

One caveat: the study is from McAfee, who has a vested interest in inflating this sort of threat.

I like their “four types of employees who put their workplace at risk”: the Security Softie, the Gadget Geek, the Squatter, and the Saboteur.

<http://www.theregister.co.uk/2005/12/15/…>


Are Computer-Security Export Controls Back?

I thought U.S. export regulations were finally over and done with, at least for software. Then why is Symantec sending this to foreign customers:

“Unfortunately, due to strict US Government export regulations Symantec is only able to fulfill new LC5 orders or offer technical support directly with end-users located in the United States and commercial entities in Canada, provided all screening is successful.

“Commodities, technology or software is subject to U.S. Dept. of Commerce, Bureau of Industry and Security control if exported or electronically transferred outside of the USA. Commodities, technology or software are controlled under ECCN 5A002.c.1, cryptanalytic.

“You can also access further information on our web site at the following address: <http://www.symantec.com/region/reg_eu/techsupp/…>

The software in question is the password-breaking and auditing tool called LC5, better known as L0phtCrack. Looks to me like they’re just killing it, and using the government as an excuse.

<http://www.theregister.co.uk/2005/11/25/…>
<http://it.slashdot.org/article.pl?sid=05/12/22/1548209>


Vehicle Tracking in the UK

Universal automobile surveillance is coming. According to “The Independent”:

“Britain is to become the first country in the world where the movements of all vehicles on the roads are recorded. A new national surveillance system will hold the records for at least two years.

“Using a network of cameras that can automatically read every passing number plate, the plan is to build a huge database of vehicle movements so that the police and security services can analyse any journey a driver has made over several years.

“The network will incorporate thousands of existing CCTV cameras which are being converted to read number plates automatically night and day to provide 24/7 coverage of all motorways and main roads, as well as towns, cities, ports and petrol-station forecourts.

“By next March a central database installed alongside the Police National Computer in Hendon, north London, will store the details of 35 million number-plate “reads” per day. These will include time, date and precise location, with camera sites monitored by global positioning satellites. “

In another article, “The Independent” opines that this is only the beginning:

“The new national surveillance network for tracking car journeys, which has taken more than 25 years to develop, is only the beginning of plans to monitor the movements of all British citizens. The Home Office Scientific Development Branch in Hertfordshire is already working on ways of automatically recognising human faces by computer, which many people would see as truly introducing the prospect of Orwellian street surveillance, where our every move is recorded and stored by machines.

“Although the problems of facial recognition by computer are far more formidable than for car number plates, experts believe it is only a matter of time before machines can reliably pull a face out of a crowd of moving people.

“If the police and security services can show that a national surveillance operation based on recording car movements can protect the public against criminals and terrorists, there will be a strong political will to do the same with street cameras designed to monitor the flow of human traffic. “

I’ve already written about the security risks of what I call “wholesale surveillance.” Once this information is collected, it will be misused, lost, and stolen. It will be filled with errors. The problems and insecurities that come from living in a surveillance society far outweigh any crime-fighting (and terrorist-fighting) advantages.
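
Part of what makes wholesale surveillance so dangerous is sheer scale. Here is a back-of-the-envelope estimate; the arithmetic is mine, not The Independent’s, and the record size is an assumption:

  # Rough scale of the ANPR database; bytes-per-record is an assumption.

  reads_per_day    = 35_000_000
  retention_days   = 2 * 365       # "at least two years"
  bytes_per_record = 100           # assumed: plate, timestamp, camera, location

  records = reads_per_day * retention_days
  terabytes = records * bytes_per_record / 1e12
  print(f"{records:,} plate reads retained")     # roughly 25.6 billion records
  print(f"roughly {terabytes:.1f} TB of raw data, "
        f"at an assumed {bytes_per_record} bytes per read")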

<http://news.independent.co.uk/uk/transport/…>
<http://news.independent.co.uk/world/…>

My previous essays on wholesale surveillance:
<http://www.schneier.com/essay-061.html>
<http://www.schneier.com/essay-057.html>


Counterpane News

Counterpane announced a partnership with Verano to extend monitoring to real-time control systems.
<http://www.counterpane.com/pr-20060109.html>

Schneier is speaking at the RSA Conference, February 14-16, in San Jose. He will speak on “The Economics of Security” at 4:30 PM on the 14th, and again on “Why Security Has So Little to Do with Security” at 2:00 PM on the 15th. He will participate in a main-stage panel on ID cards at 8:00 AM on the 16th.
<http://2006.rsaconference.com/us/>

Gartner has named Counterpane as the leading visionary company in its December 2005 Managed Security Services Provider Magic Quadrant report.
<http://www.counterpane.com/pr-20060113.html>


NSA and Bush’s Illegal Eavesdropping

(Note: I wrote this essay in the days after the scandal broke.)

When President Bush directed the National Security Agency to secretly eavesdrop on American citizens, he transferred an authority previously under the purview of the Justice Department to the Defense Department and bypassed the very laws put in place to protect Americans against widespread government eavesdropping. The reason may have been to tap the NSA’s capability for data-mining and widespread surveillance.

Illegal wiretapping of Americans is nothing new. In the 1950s and ’60s, in a program called “Project Shamrock,” the NSA intercepted every single telegram coming into or going out of the United States. It conducted eavesdropping without a warrant on behalf of the CIA and other agencies. Much of this became public during the 1975 Church Committee hearings and resulted in the now famous Foreign Intelligence Surveillance Act (FISA) of 1978.

The purpose of this law was to protect the American people by regulating government eavesdropping. Like many laws limiting the power of government, it relies on checks and balances: one branch of the government watching the other. The law established a secret court, the Foreign Intelligence Surveillance Court (FISC), and empowered it to approve national-security-related eavesdropping warrants. The Justice Department can request FISA warrants to monitor foreign communications as well as communications by American citizens, provided that they meet certain minimal criteria.

The FISC issued about 500 FISA warrants per year from 1979 through 1995, and the number has slowly increased since then—1,758 were issued in 2004. The process is designed for speed and even has provisions where the Justice Department can wiretap first and ask for permission later. In all that time, only four warrant requests were ever rejected: all in 2003. (We don’t know any details, of course, as the court proceedings are secret.)

FISA warrants are carried out by the FBI, but in the days immediately after the terrorist attacks, there was a widespread perception in Washington that the FBI wasn’t up to dealing with these new threats—they couldn’t uncover plots in a timely manner. So instead the Bush administration turned to the NSA. They had the tools, the expertise, the experience, and so they were given the mission.

The NSA’s ability to eavesdrop on communications is exemplified by a technological capability called Echelon. Echelon is the world’s largest information “vacuum cleaner,” sucking up a staggering amount of voice, fax, and data communications—satellite, microwave, fiber-optic, cellular and everything else—from all over the world: an estimated 3 billion communications per day. These communications are then processed through sophisticated data-mining technologies, which look for simple phrases like “assassinate the president” as well as more complicated communications patterns.

Supposedly Echelon only covers communications outside of the United States. Although there is no evidence that the Bush administration has employed Echelon to monitor communications to and from the U.S., this surveillance capability is probably exactly what the president wanted and may explain why the administration sought to bypass the FISA process of acquiring a warrant for searches.

Perhaps the NSA just didn’t have any experience submitting FISA warrants, so Bush unilaterally waived that requirement. And perhaps Bush thought FISA was a hindrance—in 2002 there was a widespread but false belief that the FISC got in the way of the investigation of Zacarias Moussaoui (the presumed “20th hijacker”)—and bypassed the court for that reason.

Most likely, Bush wanted a whole new surveillance paradigm. You can think of the FBI’s capabilities as “retail surveillance”: It eavesdrops on a particular person or phone. The NSA, on the other hand, conducts “wholesale surveillance.” It, or more exactly its computers, listens to everything. An example might be to feed the computers every voice, fax, and e-mail communication looking for the name “Ayman al-Zawahiri.” This type of surveillance is more along the lines of Project Shamrock, and not legal under FISA. As Sen. Jay Rockefeller wrote in a secret memo after being briefed on the program, it raises “profound oversight issues.”
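
The distinction is easy to see in miniature. Here is a toy sketch (the messages, addresses, and watch term are all invented) of the difference between tapping one identified target and scanning everybody’s traffic for a keyword:

  # Toy illustration only; data and names are invented.

  messages = [
      {"from": "alice@example.org", "to": "bob@example.org",
       "body": "lunch on tuesday?"},
      {"from": "carol@example.org", "to": "dave@example.org",
       "body": "forwarding the al-zawahiri profile you asked about"},
  ]

  def retail_tap(stream, target_address):
      """Retail surveillance: collect only traffic to or from one
      pre-identified target."""
      return [m for m in stream if target_address in (m["from"], m["to"])]

  def wholesale_scan(stream, watch_term):
      """Wholesale surveillance: read everyone's traffic and keep
      whatever matches a watch term."""
      return [m for m in stream if watch_term in m["body"].lower()]

  print(retail_tap(messages, "bob@example.org"))    # one person's mail
  print(wholesale_scan(messages, "al-zawahiri"))    # everybody's mail is scanned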

It is also unclear whether Echelon-style eavesdropping would prevent terrorist attacks. In the months before 9/11, Echelon noticed considerable “chatter”: bits of conversation suggesting some sort of imminent attack. But because much of the planning for 9/11 occurred face-to-face, analysts were unable to learn details.

The fundamental issue here is security, but it’s not the security most people think of. James Madison famously said: “If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary.” Terrorism is a serious risk to our nation, but an even greater threat is the centralization of American political power in the hands of any single branch of the government.

Over 200 years ago, the framers of the U.S. Constitution established an ingenious security device against tyrannical government: they divided government power among three different bodies. A carefully thought-out system of checks and balances among the executive branch, the legislative branch, and the judicial branch ensured that no single branch became too powerful.

Having watched tyrannies rise and fall throughout Europe, the framers saw this as a prudent way to form a government. Courts monitor the actions of police. Congress passes laws that even the president must follow. Since 9/11, the United States has seen an enormous power grab by the executive branch. It’s time we brought back the security system that’s protected us from government for over 200 years.

A version of this essay originally appeared in Salon:
<http://www.salon.com/opinion/feature/2005/12/20/…>

Text of FISA:
<http://www.law.cornell.edu/uscode/html/uscode50/…>

Summary of annual FISA warrants:
<http://www.epic.org/privacy/wiretap/stats/…>

Rockefeller’s secret memo:
<http://talkingpointsmemo.com/docs/rock-cheney1.html>

Much more here:
<https://www.schneier.com/blog/archives/2005/12/…>


The Security Threat of Unchecked Presidential Power

Last Thursday [15 December 2005], the “New York Times” exposed the most significant violation of federal surveillance law in the post-Watergate era. President Bush secretly authorized the National Security Agency to engage in domestic spying, wiretapping thousands of Americans and bypassing the legal procedures regulating this activity.

This isn’t about the spying, although that’s a major issue in itself. This is about the Fourth Amendment protections against illegal search. This is about circumventing a teeny tiny check by the judicial branch, placed there by the legislative branch 27 years ago—on the last occasion that the executive branch abused its power so broadly.

In defending this secret spying on Americans, Bush said that he relied on his constitutional powers (Article 2) and the joint resolution passed by Congress after 9/11 that led to the war in Afghanistan. This rationale was spelled out in a memo written by John Yoo, a Justice Department attorney, less than two weeks after the attacks of 9/11. It’s a dense read and a terrifying piece of legal contortionism, but it basically says that the president has unlimited powers to fight terrorism. He can spy on anyone, arrest anyone, and kidnap anyone and ship him to another country … merely on the suspicion that he might be a terrorist. And according to the memo, this power lasts until there is no more terrorism in the world.

Yoo starts by arguing that the Constitution gives the president total power during wartime. He also notes that Congress has recently been quiescent when the president takes some military action on his own, citing President Clinton’s 1998 strike against Sudan and Afghanistan.

Yoo then says: “The terrorist incidents of September 11, 2001, were surely far graver a threat to the national security of the United States than the 1998 attacks. … The President’s power to respond militarily to the later attacks must be correspondingly broader.”

This is novel reasoning. It’s as if the police would have greater powers when investigating a murder than a burglary.

More to the point, the congressional resolution of Sept. 14, 2001, specifically refused the White House’s initial attempt to seek authority to preempt any future acts of terrorism, and narrowly gave Bush permission to go after those responsible for the attacks on the Pentagon and World Trade Center.

Yoo’s memo ignored this. Written 11 days after Congress refused to grant the president wide-ranging powers, it admitted that “the Joint Resolution is somewhat narrower than the President’s constitutional authority,” but argued “the President’s broad constitutional power to use military force … would allow the President to … [take] whatever actions he deems appropriate … to pre-empt or respond to terrorist threats from new quarters.”

Even if Congress specifically says no.

The result is that the president’s wartime powers, with their armies, battles, victories, and congressional declarations, now extend to the rhetorical “War on Terror”: a war with no fronts, no boundaries, no opposing army, and—most ominously—no knowable “victory.” Investigations, arrests, and trials are not tools of war. But according to the Yoo memo, the president can define war however he chooses, and remain “at war” for as long as he chooses.

This is indefinite dictatorial power. And I don’t use that term lightly; the very definition of a dictatorship is a system that puts a ruler above the law. In the weeks after 9/11, while America and the world were grieving, Bush built a legal rationale for a dictatorship. Then he immediately started using it to avoid the law.

This is, fundamentally, why this issue crossed political lines in Congress. If the president can ignore laws regulating surveillance and wiretapping, why is Congress bothering to debate reauthorizing certain provisions of the Patriot Act? Any debate over laws is predicated on the belief that the executive branch will follow the law.

This is not a partisan issue between Democrats and Republicans; it’s a president unilaterally overriding the Fourth Amendment, Congress and the Supreme Court. Unchecked presidential power has nothing to do with how much you either love or hate George W. Bush. You have to imagine this power in the hands of the person you most don’t want to see as president, whether it be Dick Cheney or Hillary Rodham Clinton, Michael Moore or Ann Coulter.

Laws are what give us security against the actions of the majority and the powerful. If we discard our constitutional protections against tyranny in an attempt to protect us from terrorism, we’re all less safe as a result.

This essay was published on December 21 as an op-ed in the “Minneapolis Star Tribune.”
<http://www.startribune.com/562/story/138326.html>

Here’s the opening paragraph of the Yoo memo. Remember, think of this power in the hands of your least favorite politician when you read it:

“You have asked for our opinion as to the scope of the President’s authority to take military action in response to the terrorist attacks on the United States on September 11, 2001. We conclude that the President has broad constitutional power to use military force. Congress has acknowledged this inherent executive power in both the War Powers Resolution, Pub. L. No. 93-148, 87 Stat. 555 (1973), codified at 50 U.S.C. §§ 1541-1548 (the “WPR”), and in the Joint Resolution passed by Congress on September 14, 2001, Pub. L. No. 107-40, 115 Stat. 224 (2001). Further, the President has the constitutional power not only to retaliate against any person, organization, or State suspected of involvement in terrorist attacks on the United States, but also against foreign States suspected of harboring or supporting such organizations. Finally, the President may deploy military force preemptively against terrorist organizations or the States that harbor or support them, whether or not they can be linked to the specific terrorist incidents of September 11.”

There’s similar reasoning in the Bybee memo, which was written in 2002 about torture:

Yoo memo:
<http://www.usdoj.gov/olc/warpowers925.htm>

Bybee memo:
<http://www.washingtonpost.com/wp-srv/nation/…>

This story has taken on a life of its own. But there are about a zillion links and such listed here:
<https://www.schneier.com/blog/archives/2005/12/…>
I am especially amused by the bit about NSA shift supervisors making decisions legally reserved for the FISA court.


Project Shamrock

Decades before 9/11, and decades before the Bush order that directed the NSA to eavesdrop on every phone call, e-mail message, and who-knows-what-else going into or out of the United States, U.S. citizens included, the NSA did the same thing with telegrams. It was called Project Shamrock, and anyone who thinks this is new legal and technological terrain should read up on that program.

From Wikipedia: “Project SHAMROCK…was an espionage exercise that involved the accumulation of all telegraphic data entering into or exiting from the United States. The Armed Forces Security Agency (AFSA) and its successor NSA were given direct access to daily microfilm copies of all incoming, outgoing, and transiting telegraphs via the Western Union and its associates RCA and ITT. Operation Shamrock lasted well into the 1960s when computerized operations (HARVEST) made it possible to search for keywords rather than read through all communications.

“Project SHAMROCK became so successful that in 1966 the NSA and CIA set up a front company in lower Manhattan (where the offices of the telegraph companies were located) under the codename LPMEDLEY. At the height of Project SHAMROCK, 150,000 messages a month were printed and analyzed by NSA agents. In May 1975 however, congressional critics began to investigate and expose the program. As a result, NSA director Lew Allen terminated it. The testimony of both the representatives from the cable companies and of director Allen at the hearings prompted Senate Intelligence Committee chairman Sen. Frank Church to conclude that Project SHAMROCK was ‘probably the largest government interception program affecting Americans ever undertaken.'”

If you want details, the best place is James Bamford’s books about the NSA: his 1982 book, “The Puzzle Palace,” and his 2001 book, “Body of Secrets.” This quote is from the latter book, page 440:

“Among the reforms to come out of the Church Committee investigation was the creation of the Foreign Intelligence Surveillance Act (FISA), which for the first time outlined what NSA was and was not permitted to do. The new statute outlawed wholesale, warrantless acquisition of raw telegrams such as had been provided under Shamrock. It also outlawed the arbitrary compilation of watch list containing the names of Americans. Under FISA, a secret federal court was set up, the Foreign Intelligence Surveillance Court. In order for NSA to target an American citizen or a permanent resident alien—a “green card” holder—within the United States, a secret warrant must be obtained from the court. To get the warrant, NSA officials must show that the person they wish to target is either an agent of a foreign power or involved in espionage or terrorism.”

A lot of people are trying to say that it’s a different world today, and that eavesdropping on a massive scale is not covered under the FISA statute, because it just wasn’t possible or anticipated back then. That’s a lie. Project Shamrock began in the 1950s, and ran for about twenty years. It too was a massive program to eavesdrop on all international telegram communications, including communications to and from American citizens. It too was to counter a terrorist threat inside the United States. It too was secret, and illegal. It is exactly, by name, the sort of program that the FISA process was supposed to get under control.

Thirty years ago, Senator Frank Church warned of the dangers of letting the NSA get involved in domestic intelligence gathering. He said that the “potential to violate the privacy of Americans is unmatched by any other intelligence agency.” If the resources of the NSA were ever used domestically, “no American would have any privacy left…. There would be no place to hide…. We must see to it that this agency and all agencies that possess this technology operate within the law and under proper supervision, so that we never cross over that abyss. That is an abyss from which there is no return.”

Bush’s eavesdropping program was explicitly anticipated in 1978, and made illegal by FISA. There might not have been fax machines, or e-mail, or the Internet, but the NSA did the exact same thing with telegrams.

We can decide as a society that we need to revisit FISA. We can debate the relative merits of police-state surveillance tactics and counterterrorism. We can discuss the prohibitions against spying on American citizens without a warrant, crossing over that abyss that Church warned us about thirty years ago. But the president can’t simply decide that the law doesn’t apply to him.

This issue is not about terrorism. It’s not about intelligence gathering. It’s about the executive branch of the United States ignoring a law, passed by the legislative branch and signed by President Jimmy Carter: a law that directs the judicial branch to monitor eavesdropping on Americans in national security investigations.

It’s not the spying, it’s the illegality.

Wikipedia entry:
<http://en.wikipedia.org/wiki/Project_SHAMROCK>


Comments from Readers

There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.

<http://www.schneier.com/>


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Comments on CRYPTO-GRAM should be sent to schneier@schneier.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Counterpane is the world’s leading protector of networked information – the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Counterpane Internet Security, Inc.

Copyright (c) 2006 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.