Since the December San Bernardino shooting, this question has dominated national discourse. Hoping to gain access to important information, the FBI pressured Apple to unlock the iPhone of one of the San Bernardino shooters. Apple unequivocally refused, arguing that a backdoor decryption mechanism would jeopardize its customers’ security and spread “like cancer.” The case has brought new attention to rising conflicts at the interface of privacy and security.
Encryption is the process of encoding information so that only parties holding an electronic key or passcode can access it. Encrypting a phone therefore prevents anyone without the key from seeing or tampering with the information stored on it. The strength of encryption most often correlates with the strength of the passcode: longer strings of letters, numbers, and special characters are harder to guess and less susceptible to popular brute-force attacks, in which thousands of passwords are tried each second.
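To see why passcode length matters so much, consider a rough back-of-the-envelope sketch. The guessing rate below is an assumption chosen only for illustration, not a measured figure for any real device:

```python
GUESSES_PER_SECOND = 10_000  # assumed attack rate, for illustration only

def seconds_to_exhaust(alphabet_size: int, length: int) -> float:
    """Worst-case time to try every possible passcode at the assumed rate."""
    return alphabet_size ** length / GUESSES_PER_SECOND

# 4-digit PIN: digits 0-9, length 4 -> only 10,000 combinations
print(f"4-digit PIN:       {seconds_to_exhaust(10, 4):.0f} second(s)")

# 8-character passcode drawn from letters, digits, and ~30 symbols
years = seconds_to_exhaust(26 + 26 + 10 + 30, 8) / (3600 * 24 * 365)
print(f"8-char mixed code: roughly {years:,.0f} years")
```

At that assumed rate, a four-digit PIN falls in about a second, while a long mixed passcode pushes the worst case into the thousands of years.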
The relatively simple four-digit passcodes many of us use to unlock our iPhones would be easy to crack by brute force. Recognizing this vulnerability, Apple introduced an optional fail-safe, triggered after ten failed passcode attempts, that erases all data on an iPhone. Initially hoping to crack the shooter’s iPhone by brute force, the FBI demanded that Apple develop software to circumvent the fail-safe mechanism.
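The logic of that fail-safe can be sketched in a few lines. This is only an illustration of the idea, not Apple’s implementation; the class and method names are hypothetical:

```python
MAX_ATTEMPTS = 10  # the fail-safe threshold described above

class LockedDevice:
    """Toy model of a passcode-locked device with an erase-on-failure fail-safe."""

    def __init__(self, correct_passcode: str):
        self._passcode = correct_passcode
        self._failed_attempts = 0
        self.wiped = False

    def try_unlock(self, guess: str) -> bool:
        if self.wiped:
            return False
        if guess == self._passcode:
            self._failed_attempts = 0
            return True
        self._failed_attempts += 1
        if self._failed_attempts >= MAX_ATTEMPTS:
            self.wiped = True  # data erased: further guessing yields nothing
        return False

# A brute-force attack hits the limit long before exhausting all 10,000 PINs.
device = LockedDevice("7294")
for pin in (f"{i:04d}" for i in range(10_000)):
    if device.try_unlock(pin):
        print(f"Cracked: {pin}")
        break
    if device.wiped:
        print("Device wiped after 10 failed attempts")
        break
```

Even though the PIN space is tiny, the attacker gets at most ten tries, which is why the FBI needed software that disabled the counter rather than simply faster guessing.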
Apple responded by asserting that being forced to meet the FBI’s demands would violate the rights granted to it by the First Amendment of the Constitution. However, it now seems that the question of whether the right to free speech extends to computer code will go unanswered: a third party has reportedly unlocked the shooter’s iPhone without the loss of any data, and the FBI has dropped its case against Apple.
Although this cyber turf war never fully played out, we all ought to consider the questions it raised. Namely: what is more important, protecting privacy or protecting people?