Here's how I understand the matter. The F.B.I. wants to access the encrypted data on the phone in the hope that the San Bernardino killer will have implicated as-yet unknown co-conspirators. (You'll forgive me if I don't bother looking up the killer's name or linking to a news account: you can search on "F.B.I." and "Apple" if you somehow missed this news.) The agency can't just try guessing the 4-digit passcode apparently protecting the iPhone because Apple's iOS can render the data totally inaccessible after ten bad guesses. The F.B.I. is asking for, and a federal judge has ordered Apple to provide, a modified version of iOS that will not render the data inaccessible after ten failed tries. The modified version of iOS must also permit the F.B.I. to submit its passcode attempts electronically, since typing in thousands of guesses at the screen would obviously take a great deal of time.
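To see why the ten-guess limit and the manual typing are the only real obstacles, consider how small a 4-digit search space is. Here's a minimal sketch in Python (nothing Apple-specific; the try_passcode callback is a made-up stand-in for however the phone actually checks a passcode):

```python
# A back-of-the-envelope illustration of why a 4-digit passcode falls quickly
# once the ten-guess wipe and manual entry are out of the way: the entire
# search space is only 10,000 candidates.

from itertools import product

def brute_force_passcode(try_passcode):
    """try_passcode is a hypothetical stand-in for the phone's passcode check;
    it returns True on a match."""
    for digits in product("0123456789", repeat=4):
        candidate = "".join(digits)
        if try_passcode(candidate):
            return candidate
    return None

# With a toy oracle, the whole space is exhausted almost instantly.
secret = "7294"
print(brute_force_passcode(lambda guess: guess == secret))  # -> 7294
```

Even allowing for the small delay that (as I understand it) the phone's hardware-bound key derivation adds to each real attempt, working through 10,000 candidates is a matter of minutes, not years. That is exactly why the auto-erase limit matters.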
If you're still fuzzy on the technical issues, consider this analogy: a house made out of shredded personal papers mixed with mortar, such that the owner can magically reconstruct her personal papers by sticking the key (which only she possesses) into the front-door lock. If anyone tries picking the lock enough times, the building destroys the lock so that nobody can get in, including the owner.
You might not want Osama bin Laden to own such a house — but if you're worried about protecting your personal papers, wouldn't you want it?
Now, before you assume I'm an "Apple bigot" or a "terrorism lover" and give up reading, understand that I'm genuinely conflicted. I think the F.B.I. might be completely sincere when it claims it doesn't want wholesale access to iPhones, that all it wants is to track down any possible co-conspirators who might otherwise kill a bunch of people down the line.
Apple justifies its resistance to the judge's order by warning darkly about the danger the order poses. From Apple's FAQ, attached to Tim Cook's open letter to Apple customers:
Yes, it is certainly possible to create an entirely new operating system to undermine our security features as the government wants. But it’s something we believe is too dangerous to do. The only way to guarantee that such a powerful tool isn’t abused and doesn’t fall into the wrong hands is to never create it.

But before you focus too much on the "powerful tool", Apple seems to be arguing that the danger really lies in setting the precedent.
The digital world is very different from the physical world. In the physical world you can destroy something and it’s gone. But in the digital world, the technique, once created, could be used over and over again, on any number of devices.

So far, so good. I can follow this argument: create the tool once, and we'll be ordered to use it again and again and again, pretty much at the whim of law enforcement. Not only U.S. law enforcement, either. If the F.B.I. successfully compels Apple to cooperate, there are no moral grounds for the company to object if any other government asks for similar cooperation. If I were Tim Cook, I wouldn't want to help Bashar al-Assad's police, to name but one unpleasant possibility.

Law enforcement agents around the country have already said they have hundreds of iPhones they want Apple to unlock if the FBI wins this case. In the physical world, it would be the equivalent of a master key, capable of opening hundreds of millions of locks.
Then Apple says something I can't quite square with what I know of its technology and what the judge ordered in this case.
Of course, Apple would do our best to protect that key, but in a world where all of our data is under constant threat, it would be relentlessly attacked by hackers and cybercriminals. As recent attacks on the IRS systems and countless other data breaches have shown, no one is immune to cyberattacks.

Now we're back to the "powerful tool", which seems to me a red herring.
The judge's order allows Apple to customize its hacked software so that it will work only on the San Bernardino killer's iPhone. I believe (and here's where my understanding of Apple's tech may be incomplete) that the hacked software must be cryptographically signed by Apple in order to run on the phone. A side effect of signing the software is that it can't be modified after the fact without invalidating the signature, and without a valid signature the software won't run on the phone. Nor can just anybody sign the software: it has to be Apple.
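To make that concrete, here's a toy sketch of how a signed, device-bound image could work, assuming (as I do above) that the signature covers both the software and a unique device identifier. It uses the third-party Python "cryptography" package; the ed25519 key pair, the helper functions and the ECID strings are all stand-ins of my own invention, not Apple's actual signing scheme:

```python
# Toy model: a firmware image is accepted only if the vendor's signature
# verifies over the image *plus* the target device's unique identifier.
# Changing either one invalidates the signature, and only the private-key
# holder can re-sign.

from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

signing_key = ed25519.Ed25519PrivateKey.generate()   # stands in for Apple's key
verify_key = signing_key.public_key()                # baked into every device

def sign_for_device(firmware: bytes, device_id: bytes) -> bytes:
    return signing_key.sign(firmware + device_id)

def device_accepts(firmware: bytes, device_id: bytes, signature: bytes) -> bool:
    try:
        verify_key.verify(signature, firmware + device_id)
        return True
    except InvalidSignature:
        return False

firmware = b"modified iOS image"
sig = sign_for_device(firmware, device_id=b"ECID-0001")

print(device_accepts(firmware, b"ECID-0001", sig))           # True: the targeted phone
print(device_accepts(firmware, b"ECID-0002", sig))           # False: any other phone
print(device_accepts(b"tampered image", b"ECID-0001", sig))  # False: a modified tool
```

The point of the sketch is the two False cases at the end: move the image to a different device, or modify the image itself, and the signature no longer verifies; only the holder of the private key can produce a fresh one.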
So if the tool can be made to work only on the one iPhone, and nobody but Apple can modify the tool to work on another device, it's not going to be useful to anyone except the F.B.I. even if it leaks out to the world. What is Apple worried about?
Maybe the tool doesn't have to be signed. It would astonish me if that were the case, but maybe I'm totally misunderstanding the tech. If that's so, I can see why Apple would be extremely reluctant to make the tool. That might also explain why the company's statements have been a little vague and hard to parse: it would hardly be eager to admit that its supposedly robust device security could be so easily subverted.
I don't really believe that's the case, though. I think the tool has to be signed. So again, what is Apple worried about? And again, we come back to not wanting to set a precedent. Yet I can't quite square that with the company's formal statement. After all, being worried solely about setting a precedent would mean that the ominous threat of "cyberattack" is purest horseshit intended to sow fear, uncertainty and doubt. However, giant companies do not issue formal statements that contain pure horseshit: their lawyers insist that statements can always be backed up by facts.
One reason I'm frustrated by this whole situation is that it has become increasingly difficult to understand exactly what Apple is trying to protect, and why.
Let's move on to a different consideration, one Apple sums up thusly:
If we lose control of our data, we put both our privacy and our safety at risk.

This gets to the heart of the matter for most people. Does the government have the right to get at the encrypted data on your personal device?
My initial reaction was, "Hell, no!"
However, I've since rethought the matter. While I don't particularly like where my reflection has taken me, I'm prepared to accept the conclusion as being in the best interests of society as a whole.
Law enforcement is allowed to enter your residence with a warrant. Depending on the scope of the warrant, it might be allowed to poke through your private papers, such as your appointment calendar and address book. Neither your porn stash nor your diary is off-limits if the warrant permits. There's no reason to believe that your phone would or even should be given greater protection. I know of no jurisdiction where such a device is inviolable, although the trend is to require a warrant. So unless we want to overturn a couple of centuries of law and custom in this country alone, Apple's "privacy" argument doesn't fly when it comes to law enforcement. (It's still a worthwhile goal to protect your data in case your phone is stolen, though.)
It's intellectually defensible to argue that government should never have been granted the right to break down your door or rifle through your papers, even with a warrant. As a practical matter, though, no nation — no community of human beings, in fact — could survive under such a constraint. If somebody is stockpiling sarin gas in his basement, I want the cops to have the authority to break in and seize it before he can make use of it to kill me.
Standing up for personal privacy and against government intrusion is always a crowd-pleasing move. However, it has the potential to backfire badly here, not just for Apple but for all of us. If Apple wins in court, it might prompt Congress to craft legislation in order to compel the entire tech industry to fall in line with law enforcement's wishes. Such legislation might or might not withstand a court challenge, but do we really want to make that wager? For that matter, do we really want Congress to consider anew how much personal privacy we deserve, at a time when many people are (irrationally) afraid of terrorism?
Moreover, Apple is taking steps to increase the security of iPhones so as to eliminate even the possibility that the company could help law enforcement gain access to the encrypted data. What if Congress decides that such user protections are illegal because they place the data beyond the reach of any warrant and any coercion by the government, no matter how dire and legitimate the need? Would a law outlawing unbeatable security (which would include, but not be limited to, unbeatable encryption) for digital devices be Constitutionally supportable?
These are novel questions. As Jordan Orlando notes in Slate, the abstractions with which the law deals (contracts, locks, keys, etc.) are "necessarily rigid and difficult to change", but the "symbolic language that [lets] non-computer-scientists work with digital data", first introduced in the 1980s, has kept changing as software itself has evolved beyond the physical models it originally emulated, i.e., the elements of the 1970s and 1980s office environment. To the extent that software has developed unforeseen capabilities that don't map neatly onto the old physical model, the law lacks the language and moral reasoning to govern the technology's use or abuse.
We're entering terra incognita. The question, "How much privacy do I deserve?" has had a more or less well-understood meaning for most of our history. (At least, that's how it appears to me. I'm a scholar of neither the law nor history, though, so I could be wrong.) We accept that the government can get warrants to wiretap our phones; that we have no expectation of privacy if our window shades are up, or if we're shouting so loudly that the neighbors can hear; that the police can search us if they place us under arrest, and use anything they find on our persons, like written notes, against us in court. Conceptually, again, to the extent that our phones are repositories for notes we once might have written on paper, there's nothing novel about law enforcement's expectation that it should be able to access our phones if it has probable cause to believe we have committed a crime.
Should there be greater limits on police powers, though? I imagine a future world where technology exists that allows the reading of thoughts. Extending current practices forward, it would be legal for the police to read one's thoughts if a judge issued a warrant. Yet step back a moment and consider what that would mean. Today, merely thinking about an illegal act is itself legal: it's only if you actually do something about your thoughts that you've stepped over the line. So what would constitute "probable cause" for a "mind search" in this future world?
Consider, too, what trawling through your mind would reveal. Even today, an enthusiastic search of your home or car could reveal evidence of unrelated illegal acts for which the police could arrest you. A search could also uncover information which, while not grounds for arrest, could be profoundly embarrassing. You have no protection against "accidental" disclosure of such information: you can sue, of course, but you can't erase people's memories. How much worse would a search of your mind be? How much more embarrassing information could be revealed?
Now, consider what our present-day smartphones are: digital witnesses to our lives. They can check in with cell towers and/or GPS; they have audio and visual recording capabilities; they can remind us of appointments and store the contact information for everyone we wish; and of course, they allow us to communicate. They are as close as humans have come to creating "personal assistants" that share our minds.
A human personal assistant may decline to testify (or may testify readily, for that matter) in court; a human assistant might use his or her intelligence to reveal only as much information as he or she feels is needed. A smart device, on the other hand, is an extension of ourselves that has no will of its own. How much control ought we to have over it? If we want it never to betray our embarrassing secrets, must we forgo using it?
Law enforcement's answer to the last question would be an unhesitating, "Yes": if you don't want to be incriminated (or embarrassed) by your phone, don't use it. But should law enforcement be given the last word on this? Isn't there even a theoretical basis for arguing that there exists some fundamental core of a person that is inviolable?
We're entering an era in which the old rules defining our rights no longer work well. We need to revisit our fundamental assumptions about what kind of society we want, in order to understand how our new technologies should work. The current dispute between the F.B.I. and Apple may not be the best start to the conversation, but as long as we're arguing, let's understand where any decision on this dispute might lead us.