link to the published version: IEEE Computer, October, 2015


Legislating Technology (Badly)

Hal Berghel

It is characteristic of willfully uninformed politicians to look for quick fixes when common sense dictates otherwise. Nowhere is this more evident than in their legislation of technology and innovation.


Due to the computational power of their underlying platforms, digital kill switches have much broader capabilities and uses than earlier industrial panic switches. In the case of mobile computing devices and cell phones, the purpose of the kill switch is to make theft unattractive. Protections include “wiping” the data off the device and/or “bricking” the device so that it's unusable. Consumer Reports estimated 1.6 million victims of cell phone theft in 2012 ( ). As a further data point, we note that “Apple, Google, Microsoft and Samsung -- plus Motorola, which is owned by Google -- control 90% of the U.S. smartphone market” ( ), so there aren't many manufacturers involved.
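By way of illustration, a device-side kill switch amounts to a small authenticated command handler. The Python sketch below is hypothetical, not any vendor's actual design: the command names, the key provisioning, and the dictionary-as-storage model are all assumptions made for illustration.

```python
import hmac, hashlib

# Hypothetical device-side kill-switch handler. SHARED_KEY stands in for a
# secret provisioned to the owner at activation time; real designs differ.
SHARED_KEY = b"owner-provisioned-secret"

def _authentic(command: bytes, tag: bytes) -> bool:
    """Honor only commands authenticated with the owner's key; an
    unauthenticated kill switch would itself be an attack vector."""
    expected = hmac.new(SHARED_KEY, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def handle(command: bytes, tag: bytes, storage: dict) -> str:
    if not _authentic(command, tag):
        return "rejected"                   # forged command: ignore it
    if command == b"WIPE":
        storage.clear()                     # destroy personal data in place
        return "wiped"
    if command == b"BRICK":
        storage["boot_locked"] = True       # device refuses to boot
        return "bricked"
    return "unknown"

# An owner-issued wipe succeeds; a forged command is rejected.
data = {"contacts": ["alice"], "photos": ["img001"]}
tag = hmac.new(SHARED_KEY, b"WIPE", hashlib.sha256).digest()
print(handle(b"WIPE", tag, data))     # prints: wiped
print(handle(b"BRICK", b"x", data))   # prints: rejected
```

The authentication step is the crux: without it, the wipe and brick commands discussed below become exactly the remote attack surface that criminals and surveillance agencies would covet.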

There are a variety of proposed statutory protections that attempt to discourage mobile device theft. One good source of information is the City and County of San Francisco District Attorney's Secure Our Smartphones Initiative ( ). California introduced kill switch legislation in late January 2014 ( , ), followed closely by U.S. Senate bill 2032 ( ). Similar legislation has been proposed in the House of Representatives and state legislatures under the general rubric of Smartphone Theft Prevention Acts. We shall investigate whether and to what degree such legislation is likely to be effective, and whether negative consequences might be anticipated. We begin by analyzing the incentives behind various stakeholder positions.


Obviously, a petty thief will oppose a kill switch bill. If the bricking works as promised, profitable resale would be unlikely. In addition, an identity thief will not welcome a “data wipe” feature. Note how even thieves' incentives are subtly different: the petty thief wants a usable device, while the identity thief wants usable data. The petty thief is motivated by resale prices. Some news sources claim that the better smartphones can resell for up to $1,200 each in the Pacific Rim and Africa ( ). So if such legislation were to pass, one would expect a cottage industry to develop in lead sleeves that insulate mobile devices from kill switch activation while thieves cart away their booty. The identity thief, however, has little concern over the usability of the device as long as the data can be copied.

Incentives against the legislation apply to the telecommunications industry and carriers as well. The telecom industry has no economic incentive to discourage continued use of its product, irrespective of registered owner and original source. Carriers make their money from the subscription service; the mobile devices are mere enticements. In this case their incentive is allied with the petty thief's: keep the phone usable. A secondary benefit is that the registered user is on the hook for subscription fees and charges until the phone's service is suspended.

Strong opposition may also be expected from the technology sector, which opposes regulation on principle. And there is some sense to that position, because a sufficiently robust encryption regimen on the mobile platform could protect personally identifiable information from unauthorized users. This might satisfy most users' privacy needs. Merchants may also feel that regulation with penalties is an imposition on free trade. Simply put, they bear no burden from theft and loss.

CTIA, the national wireless trade group, articulates the merchants' and vendors' positions. CTIA holds that it's a priori bad public policy for states to pass regulations for products that are sold internationally. It would prefer owner-based initiatives that involve downloadable apps together with vendor-optional databases that would prevent use of stolen phones domestically only ( ). However, CTIA has no problem with legislation that penalizes users for reprogramming phones, whether stolen or not.

Manufacturers' and merchants' incentives converge around protecting unimpeded business practices. Their interests differ when it comes to mandated standards for the mobile device itself. Mandated kill switch legislation requires manufacturers to rework their mobile platforms. Of course, if the California legislation takes root in a few larger states, this issue will become moot, as manufacturers will find it less expensive to change their entire product line to satisfy California regulators and then simply not activate the feature in non-requiring states. It should be noted that the manufacturers eventually withdrew their opposition to the California kill switch bill, but some technology companies like Google and Microsoft will still require users to “opt in” to theft prevention services ( ).


There are some stakeholder communities that favor passage of kill switch legislation. For one, the typical mobile device owner might appreciate the ability to protect their personally identifiable information via remote wipe. They might also perceive a benefit from bricking the phone to discourage petty thievery and prevent unauthorized use. From the user's point of view, opt-in kill switches might be perceived as a public good. This naïve view is shared by the majority of politicians.

One may see why this is the naïve view by considering the interests of the hacker, criminal, and terrorist communities. One can imagine an entirely new attack vector for mobile platforms that functions in much the same way that ransomware (e.g., CryptoLocker, FBI Moneypak) does at the workstation and notebook levels. Further, what major criminal could not benefit from a kill switch with wiping and bricking capabilities as a way of thwarting law enforcement evidence collection and surveillance? On this account, the criminal or terrorist is incentivized to develop a network capable of remotely wiping and bricking mobile devices that have been, or are about to be, seized by law enforcement agencies.

There are two further complexities. From the point of view of software epidemiology, the technology (code) that allows data wiping shares DNA with the technology (code) that would be used to offload the PII to another platform, whether by networking, Bluetooth, or direct connectivity. This would incentivize familiar government intelligence and surveillance agencies to develop such hacks for remote deployment ( ). The only chance of preventing this kind of compromise would be to deploy robust encryption with open source software on the mobile platforms. Closely related to this is a criminal's incentive to brick the phone so that a potential victim is denied access to protective services.


Finally, we consider the neutral position toward the legislation. Although we can imagine some carriers falling into this category, it is the mobile platform insurance carrier that seems to be the most natural fit. Cell phone insurance premiums are estimated to produce $8 billion/year in revenue for the carriers, on top of approximately $200 billion in revenue for the wireless service ( ). The insurance has become what has been called a “monster hit” for the estimated 150 million domestic smartphones in use, of which 100 million are insured against loss. With mandated kill switches, the insurers' risk would remain primarily device replacement. Though their premium/risk ratios might change, we would expect their revenue to remain constant.

So there you have it: the good, the bad, and the ugly of proposed “kill switch” legislation. The operative question is whether reasonable, purposeful legislation will result from such conflicting motives and mixed incentives. The interplay between interests is exceedingly subtle, and likely beyond the capacity of politicians, whose strength is not an appreciation of nuance. Add to this mix lobbying efforts on behalf of the political donor classes, and any forthcoming kill switch legislation would have a low expected yield in terms of public good. You may find the landscape described above useful in interpreting proposed federal legislation ( ).


Encryption has always fascinated bureaucrats and tyrants. The earliest bureaucratic interference that I recall came from the NSA in the early 1980s, when Director Bobby Inman tried to coopt ACM and IEEE conferences by laying claim to pre-publication censorship of all scholarly papers involving cryptography (Bamford, James, The Puzzle Palace, Penguin Books, 1983, pp. 450ff.). A compromise was reached by a committee of representatives from the professional societies that publish cryptographic research (including the ACM, the Computer Society, and the IEEE) that encouraged voluntary self-censorship. The only dissenting vote was from the Computer Society representative, George Davida, who presciently predicted that such incursions into the academy could undercut First Amendment protections and ultimately subvert scholarship. History has been very supportive of Professor Davida's predictions.

Not to be thwarted by academic freedom arguments, “Big Gov” made another assault on computing research when it attempted to prosecute PGP inventor Phil Zimmerman for alleged violations of the Arms Export Control Act in the early 1990's ( ). The persecution was apparently even extended to those who wrote op-ed pieces about Zimmerman's plight in Bay Area newspapers ( ). Nothing so offends authoritarians as the thought that someone might speak ill of them behind their backs.

The latest attack on encrypted communication came this past summer from FBI Director James Comey. Comey relied on the surveillance state's national security mantra to press Congress to require that all strong encryption systems have a backdoor for the FBI ( ). Apparently Apple and Google are particularly irritating to Comey for steadfastly refusing to comply voluntarily with FBI requests to share their customers' private information (at least after being outed by Edward Snowden for doing just that). Comey speculated before the Senate Intelligence Committee that a special tiny little backdoor just for the FBI, one that could not possibly be exploited by others, should not be much of a technological challenge. As he sees it, computer scientists just haven't been properly incentivized (“extraordinary incentivization”?). The July 8, 2015 Congressional hearings are illustrative ( ).

Comey's presentation to the Brookings Institution last October 16, 2014 provides a clearer statement of his opinion:

“Unfortunately, the law hasn't kept pace with technology, and this disconnect has created a significant public safety problem. We call it “Going Dark,” and what it means is this: Those charged with protecting our people aren't always able to access the evidence we need to prosecute crime and prevent terrorism even with lawful authority. We have the legal authority to intercept and access communications and information pursuant to court order, but we often lack the technical ability to do so.” ( )

As the Church Committee revelations of COINTELPRO showed, the FBI has a long history of highly questionable surveillance, wiretaps, and sundry black bag operations against U.S. citizens, like sending suicide notes to Martin Luther King, Jr. ( ). Recently, it has become fashionable for the FBI to use National Security Letters and “exigent letters” to spy on citizens and journalists ( ), while claiming that this is done under the watchful eye of a “secret” oversight regime. (For more on NSLs, see the EFF's National Security Letter Timeline @ ). So, while Comey's concern is that the bad guys might “go dark,” many citizens' concern is that the FBI might “go rogue.” It is axiomatic that when civil libertarians feel it is no longer possible to ensure that the government operates within the law, any request for increased surveillance powers exposes the claimant to ridicule.

What is more surprising is that even Comey's peers don't agree with him. In a recent op-ed in the Washington Post ( ), Mike McConnell, Michael Chertoff and William Lynn argue that the sort of secure communication that Comey objects to is the greater public good because it protects information from exploitation. “The result will be to expose business, political and personal communications to a wide spectrum of governmental access regimes with varying degrees of due process.”

But that's a criticism based on political realities. A more pointed criticism is based on the technology itself. Independent reporters have weighed in on this in near unison. The Center for Democracy and Technology had this to say: “Any backdoor the government can walk through to uncover evidence will eventually be used by malicious actors to exploit our personal information” ( ). Encryption makes us safer.

The Intercept goes further to debunk the FBI's claims that encryption interferes with effective prosecution of criminals ( ). When asked to provide specific examples of crimes that had been averted due to phone data that might have been encrypted in the future, Comey offered the following variation on the surveillance state mantra:

“Rescuing someone before they're harmed? Someone in the trunk of a car or something? I don't think I know – yet? I've asked my folks just to canvas – I've asked our state and local partners are there some examples where this – I think I see enough, but I don't think I've found that one yet. I'm not looking. Here's the thing. When I was preparing the speech, one of the things I was inclined to talk about was — to avoid those kids of sort of ‘edge' cases because I'm not looking to frighten people. Logic tells me there're going to be cases just like that, but the theory of the case is the main bulk of law enforcement activity. But that said I don't know the answer. I haven't found one yet.” (Intercept, op cit)

Cutting through the doublespeak, Comey is saying that he knew of no such examples. He has since taken his fear mongering global, with unspecified and undocumented threats from ISIS ( ).

But by far the most important argument against government intrusion into encryption comes from the encryption experts themselves. On July 6, 2015, some of the most prominent computer scientists in the field wrote the definitive rejection of Comey’s backdoor program, entitled “Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications” ( ). The report shows that the FBI proposal would overturn best practices like forward secrecy, make systems more complicated than they need to be, and invite all manner of criminals, terrorists, and nation-state aggressors to find and exploit loopholes. That’s top-drawer policy! Realizing that the Comey initiative may not fly, the CIA has apparently taken steps to circumvent Congress ( ). And yet, amidst these conflicting motives, bureaucratic hubris, and ideological hyperbole, Congress pushes on with its attempt to draft the perfect piece of legislation, enabled by the enthusiasm of special interests.
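The forward secrecy that the report defends is easy to illustrate: if each session negotiates a throwaway key, the later compromise of a long-term key reveals nothing about past traffic. The toy ephemeral Diffie-Hellman sketch below uses deliberately tiny parameters chosen for illustration only; real systems use 2048-bit or larger groups.

```python
import secrets

# Toy Diffie-Hellman parameters -- far too small for real use, chosen only
# so the arithmetic is visible. Production systems use 2048-bit+ groups.
P = 4294967291   # largest prime below 2**32
G = 5

def ephemeral_keypair():
    """Generate a fresh secret for this session only. Because the secret
    is discarded afterward, there is no long-term key whose compromise
    would expose past session keys -- that is forward secrecy."""
    secret = secrets.randbelow(P - 2) + 1          # in [1, P-2]
    public = pow(G, secret, P)
    return secret, public

# One session: each side generates throwaway keys and derives the same
# shared session key from the other's public value.
a_secret, a_public = ephemeral_keypair()
b_secret, b_public = ephemeral_keypair()

shared_a = pow(b_public, a_secret, P)
shared_b = pow(a_public, b_secret, P)
assert shared_a == shared_b   # both sides agree on the session key
```

A mandated escrow key, by contrast, is exactly the kind of long-lived secret whose single compromise unlocks every past and future session, which is the report's core objection.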


My final example of political enthusiasm gone awry deals with 911 swatting. The 911 system works by routing 911 voice signals along with call information (like the calling number, mobile phone GPS coordinates, GPS coordinates based on service provider triangulation, etc.) to a proximate dispatcher, who then relays the essential information, possibly augmented by proprietary data from the service provider or dispatcher, to the responders. The situation is similar with VoIP, except that the Internet is the carrier rather than the telecom. Since all signals are digital, there are several attack vectors available to hackers.

Christian Dameff, et al. ( ) discuss three goals of 911 hacking: to initiate inappropriate 911 responses, to interfere with legitimate responses, and to monitor the 911 system for opportunistic insights. From the hacker's perspective, all three goals may be subsumed into one, since they differ only by intent. If you're unfamiliar with the 911 system, viewing this Defcon 22 video is time well spent.

Hacking techniques like spoofing IDs (at either the handset or network level), SQL injection, denial of service, and IMSI catching all suggest themselves. Modern cellular systems render caller-ID spoofing largely moot, since the service provider mostly ignores handset phone numbers in favor of internal firmware IDs, so anonymizers like SpoofCards ( ) are not consistently reliable in this application. The same may be said of location-spoofing apps for mobile devices, since the provider may use signal triangulation rather than reported coordinates. But a determined aggressor with the technical capability and an understanding of the protocols involved can still spoof at the level of tower communications. Remember that all cellular traffic, including authentication, is RF, and RF doesn't obey property lines! The point is that you can't rely on mobile device reports in the presence of a determined aggressor with an appropriately configured computer and RF transceiver.

In addition, 911 communications remain largely unencrypted, so the referent IDs, GPS location data, tower and carrier IDs, and so forth are transmitted in clear text: a clear invitation to hackers. This is not to mention the proliferation of carrier- and dispatcher-side databases that may not be well secured. Then there is Dual-Tone Multi-Frequency (DTMF) baiting: many dispatchers redirect calls, and the tones may be recorded and the numbers recovered with tone extractors, giving the hacker access to undisclosed internal communications links. Most intriguing of all is VoIP, which opens the 911 system to the full range of Internet hacks, anonymizers like Tor, and a potpourri of exploits that use burner phones. In other words, the 911 dispatch system is rife with security and privacy problems that are well known and well understood by the technical community.
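The DTMF-baiting point is worth making concrete: touch tones encode each key as a pair of audible frequencies, so any recording of a dispatcher's keypresses can be decoded offline with a few lines of signal processing. The sketch below applies the standard Goertzel algorithm to a synthesized tone; it is an illustration of the principle, not a tool.

```python
import math

# Standard DTMF layout: each key is one row frequency + one column frequency.
ROW_HZ = [697, 770, 852, 941]
COL_HZ = [1209, 1336, 1477, 1633]
KEYS = ["123A", "456B", "789C", "*0#D"]

def goertzel_power(samples, target_hz, rate):
    """Goertzel algorithm: power of a single frequency bin, cheaper than a
    full FFT when only a handful of frequencies are of interest."""
    coeff = 2.0 * math.cos(2.0 * math.pi * target_hz / rate)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def decode_digit(samples, rate=8000):
    """Pick the strongest row and column frequency; their crossing is the key."""
    row = max(range(4), key=lambda i: goertzel_power(samples, ROW_HZ[i], rate))
    col = max(range(4), key=lambda i: goertzel_power(samples, COL_HZ[i], rate))
    return KEYS[row][col]

# Synthesize 50 ms of the key '5' (770 Hz + 1336 Hz) and recover it.
rate = 8000
tone = [math.sin(2 * math.pi * 770 * t / rate) +
        math.sin(2 * math.pi * 1336 * t / rate)
        for t in range(int(0.05 * rate))]
print(decode_digit(tone, rate))   # prints: 5
```

Fifty milliseconds at an 8 kHz telephone sampling rate is ample to separate the DTMF frequencies, which is precisely why recorded dispatcher redirects leak their destination numbers so readily.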

To illustrate the ease of abuse, consider the case of inappropriate 911 responses. The hack du jour is called “swatting,” which involves reporting a bogus life-threatening situation to a 911 dispatcher: terrorist threats, reports of kidnapping, hostage situations, etc. The goal is to get the local SWAT or tactical law enforcement units to respond to the bogus threat, hence the term. If the goal is to intimidate, interrupt, or embarrass a big name, it's called “celebrity swatting.” If the target is someone who may have wronged you, it's called “revenge swatting”; if the target is an airport, it's called “fly swatting”; and so forth. The recent California Assembly bill AB47 (2013) is a typical government reaction ( ). Whereas existing California law made misuse of the 911 service a misdemeanor that carried fines, AB47 seeks to deal with “malicious and dangerous swatting calls” by ramping up the fines to a base of $10,000. Michigan has an even more onerous law that adds to a fine of the same magnitude a 4- to 15-year prison term, depending on whether someone was injured or killed ( ). Not to be outdone, the U.S. House of Representatives proposed the Anti-Swatting Act of 2015, H.R. 2031 ( ), which seeks to amend the Communications Act of 1934 to provide penalties of 5-20 years and full reimbursement of the costs of responding.

These big-and-powerful government responses may appeal to authoritarians, but they'll be ineffective. In addition, consider the social consequences. Who are these swatters? Is incarceration for 5-20 years in a federal prison the appropriate sentence for a script kiddie with a grudge? Or an obsessive fan? The people who do these things are deranged and need psychological help, not a scholarship to crime school. And the possibility of resulting injury would diminish if law enforcement would take a swerve around responding to 911 calls like Normandy invasions. Maybe we should return to a “protect and serve” mission rather than “overpower and subdue.” In one recent study by the Utah legislature, 65% of SWAT and tactical team assaults were forced-entry raids to serve warrants without giving the occupants an opportunity to answer the door. Ah, yes, I hear you cry, but given advance warning the occupants would have opened fire on the police. Apparently not, because weapons were found on the scene in less than one-half of one percent of the cases ( ).

Without any question, the primary cause of swatting and other 911 vulnerabilities is an immature approach to infrastructure security, and for that the blame lies squarely with the telecoms, service providers, and public service agencies. Big government thinks of solutions to technical problems in terms of retribution after the fact rather than solving problems at the source. 911 swatting can be reduced, if not eliminated, by simple, well-understood best practices: robust encryption at all system levels; avoidance of security-through-obscurity tactics, like routing internal dispatcher communications over unregistered phone numbers; use of secure TCP/IP communication; etc. Dwelling on punishment of offenders is misguided, wasteful, and counterproductive.

If legislators really want to accomplish something, they would be well advised to decertify security-anemic 911 systems. Virginia did just that for the WINVote balloting system when it was faulted for lack of appropriate “physical controls, network access, operating system controls, data protection, and the vote tally process” ( ). The government might be able to cut-copy-paste directly from the Virginia report, since the problems are not that dissimilar. And while they're at it, the country might be well served if the government decertified all balloting systems, given the track record of abuse (Jeremy Epstein, Weakness in Depth: A Voting Machine's Demise, IEEE Security and Privacy, May/June 2015, pp. 55-58; and What Went Wrong in Ohio: The Conyers Report on the 2004 Presidential Election, Chicago Review Press, 2005).


The legislative news is not all bad. This year some states have begun to entertain legislation relating to the interception of cell phone communication, like IMSI catching ( ), whose use by law enforcement and private security interests has become widespread ( ). The state of Washington proposed reasonable extensions to statutes on communication interception and pen registers that require warrants for the use of IMSI catchers ( , April 16, 2015). Other states like California approach the problem from a procedural point of view ( ), seeking to ensure that appropriate safeguards are present to protect the data. In the long term, my hunch is that the Washington model will predominate, though it may take a few court challenges on constitutional grounds before that happens. All in all, this sort of legislation is headed in the right direction: it puts the onus on institutional abusers to clean up their act and takes a swerve around building up prison populations.

A second example of legislation that is right-headed deals with mobile-related location privacy. Civil libertarians will appreciate that some states are requiring that government officials and law enforcement get warrants before accessing GPS coordinates on mobile devices. Once again the states are leading the way, with Maine, Montana, New Hampshire, and Utah having recently passed robust laws in defense of personal privacy, while weaker laws were passed in Colorado, Tennessee, and Virginia. The most interesting points of difference involve the level of privacy protection and the exceptions to the warrant requirements, e.g., user consent, emergency circumstances, whether the device has been reported stolen, etc. The Center for Democracy and Technology published an informative survey online on July 23, 2015 that compares these state efforts ( ) with links to the specific statutes.

In an attempt to unify state efforts, Senator Ron Wyden (D-OR) and Congressman Jason Chaffetz (R-UT) introduced identical legislation in the Senate (S.237; ) and the House of Representatives (H.R.491; ) on January 22, 2015. The Congressional bills are more expansive, provide some penalties, and allow considerable exceptions for investigative and surveillance agencies. Overall, the states are rising more quickly to the occasion with more reasonable proposals.