link to the published version: IEEE Computer, March, 2015



Hal Berghel

You may recall my column a few years back on RFIDiocy (OOB, January, 2013) – the use of RF beyond the level justified by good taste and common sense. As the Hollies sang in 1964, “Here I go again.”


No system or software designer, innovator or inventor has a perfect record. As with baseball sluggers, a 33% success level with significant projects (as promised, on time, no errors out of the box) probably qualifies you as a superstar. So the act of coming up with a bad idea, or a failed implementation thereof, in and of itself doesn't disqualify one from kudos. However, there are consumer-level bad ideas and then there are industrial-strength bad ideas. The latter are the more worrisome, especially if they recur with any frequency. As such I'll deal with them here.

I also drew an orthogonal distinction between a posteriori bad ideas (those that in practice just didn't realize expectations) and a priori bad ideas (those that could have been identified as wearing a cloak of dopey by a competent knowledge-domain expert before any work had begun). The a priori offerings become part of the literature on disasters, and many are destined to be featured in eponymous documentaries.

DIGITAL INK

Not everything we can do is worth doing. The use of RFID in security-challenging applications serves as a poster child of a priori misguided technology. I gave two examples last time: the use of RFID for keyless entry and transit passes, and the eminently laughable Western Hemisphere Travel Initiative (WHTI) People Access Security Service cards (PASS cards) ( ). The latter is a particularly poignant example of the fondness of governments for bad ideas that fill the coffers of the military-industrial-surveillance-political-media-prison-energy-healthcare-academic-thinktank-corporatist complex.

Technology Absurdism

Well, they're at it again - this time it's RFID for evidence management ( ). The potential applications that NIST envisions include evidence inventory management, evidence chain of custody, evidence in-transit tracking, and access control to evidence. Nowhere in this linked fifty-page report is there any discussion of security or privacy. This is not surprising, because none of the contributors seem to have any background in computing security and privacy! This may be another case of the government working with vendors to design products around “insecurity models,” the methodology that gives rise to another act in Bruce Schneier's “security theater” ( ).

In the RFID evidence management case, it's just a matter of time until personally identifiable information is leaked, the chain of custody is found by a court to be corrupted, or some RFID source or other is spoofed to provide unauthorized access to sensitive information. I'll repeat my mantra: RF does not obey property lines - it is not a good candidate technology for security- and privacy-challenging applications, especially when it is not built around a robust security model. We need to wrap our collective heads around this concept!

I subsume these examples under the rubric of ‘technological absurdism': the development of technology that either ignores, fails to appreciate, or under-represents obvious negative externalities. On this account, placing technology development in the hands of the unskilled, ill-trained, or poorly supervised pretty much guarantees that the resulting technology will fail to meet society's needs and expectations while simultaneously exposing society to increased risk. Those of you who are software engineers and developers could write books about this phenomenon from your own personal experiences. It is incumbent on all of us to remember that many if not most of the worst technological ideas were identifiable as such a priori. In the hands of bad leadership, technology absurdism drifts toward a technological nihilism that drives sub-prime innovation of limited or ephemeral value. The NSA dragnet surveillance programs typify technology nihilism in this sense, and this is attributable to exceedingly poor leadership (see this column, June, 2014).


I want to single out two really clever innovators who are addressing these issues through innovation: Limor Fried and Todd Humphreys.

Following Dunne and Raby (Dunne, A. and F. Raby (2001). Design Noir: The Secret Life of Electronic Objects. Birkhauser, Basel, Switzerland.), Fried suggests that “Design Noir” may be used as an antidote to electronic devices that fail to peacefully inhabit what she calls “Hertzian space” ( ). This is an exceedingly clever and informative way to approach the problem of technological subterfuge of the individual's expectation of privacy – including much of the National Security Agency's recent activities, such as dragnet surveillance, hacking into the GSM cellphone infrastructure ( ), and warrantless wiretaps. At this point many of the NSA's secret codenames – AURORAGOLD, PRISM, XKeyScore, MUSCULAR, STELLAR WIND, CO-TRAVELLER, BULLRUN, EvilOlive – have been so widely discussed and discredited that they've entered the public lexicon. The Electronic Frontier Foundation offers an excellent NSA Domestic Spying Timeline ( ), and EFF attorney Kurt Opsahl draws our attention to the fact that EvilOlive is both a palindrome and an anagram ( ).

Fried's reasonable claim is that electromagnetic propagation in the form of visible light, radio frequency, X-rays, etc. has the potential to affect us in multifarious and subtle ways - even when imperceptible. Perceptible radiation, along with acoustics and other stimuli that affect our senses, is much easier to deal with and far less insidious. These antisocial impositions on our privacy are easily identified as personally irritating, intrusive, unwanted, and sensory overloading. Overheard cell phone conversations, ubiquitous television chatter in public spaces, elevator music, boom boxes, etc. fall into this category. (My personal feeling is that body odor, tracking cookies, the smell of mint, and most of the NSA's surveillance programs should be included as well!) Fried argues that these considerations ought to figure in design practices alongside technical specifications and manufacturing costs. She makes a convincing argument, and one worthy of serious consideration.

Fried sees a need to develop counter-technologies that thwart these unwanted intrusions – antidotes “that can nullify the invasion” (op cit, p. 14). She observes that this fits within a McLuhanesque natural technology/anti-technology order of things (see Marshall McLuhan, Understanding Media: The Extensions of Man, The MIT Press (reprint), Cambridge, 1994). Fried says: “I propose that designers, technologists and artists should … come up with new ideas for how the consumer may defend his or her personal space from unwanted electronic intrusion.” ( , p 14) This is a notable goal! The problem is that the aforementioned complex will use every legal and political tool at its disposal to prevent that strategy from succeeding. This is a David vs. Goliath situation if ever there were one.


Fried extends her innovative instincts to electronics that have insidious side effects which are not fully disclosed to the consumer. An example would be cell phones that may be remotely controlled by the carrier to turn on the microphone and camera with neither the user's knowledge nor permission ( ), or the recent spate of IMSI-catching activity ( ). The fact that cell phones can be used as “roving bugs” or PII repositories by law enforcement, the surveillance state, common carriers, cyber-weapons mercenaries, and hackers is one of those ‘added features' that vendors, carriers, government agencies, and NGOs try continuously to conceal. Fried argues for innovation that enhances the user's control of existing products beyond the means provided by the manufacturer, or innovation that may actually subvert the intended use of the product. Her suggestions anticipated Edward Snowden's revelations by several years!

I want to build on Fried's worthy goals. I'll use the term noirware to mean any technology that limits or neutralizes unadvertised or unintended uses of a product that are inconsistent with the user's expectations of security and privacy. Note that the emphasis is on the user's expectations, not the advertised features. This distinction is important, for we now live in the era of amorphous product disclosures that may or may not be accurate, complete, or reliable. Historically, software shipped without warranty, express or implied. These days we can't be certain that it ships without backdoors, malware, faulty encryption, or even known deficiencies that are being shared with those who would seek to surveil us ( ). It is worth remembering that Edward Snowden's revelations about the NSA PRISM program showed that it accessed data from U.S. high-tech information providers without court orders ( ).

We may further extend our working definition to distinguish between offensive and defensive noirware. The greatest barrier to successful deployment of noirware is that current technology rarely allows us a purely defensive strategy. In most cases, noirware would be an adversarial technology that is difficult to control, because its effect is to nullify electromagnetic energy indiscriminately rather than to neutralize only the offending source.

Consider Fried's Wave Bubble ( ). WB creates a low-energy, 2-meter-diameter “cloud” of RF noise that targets the usable frequencies of the offending device - a perfect antidote to the imposition of other people's one-sided cell phone calls on your personal auditory happy space. (See also her short presentation at Gadgetoff 2007 ( ).) While the concept is great, it runs afoul of federal laws that make intentional RF jamming by private citizens illegal. In fact, in the U.S. it's even illegal to advertise the sale of RF jammers ( ), though the technique is apparently popular with some foreign faith leaders ( ). But in the U.S. at the moment, Wave Bubble v.1.0 falls under the rubric of a gratuitous act of defiance! To balance the discussion, I should mention that Fried is a full-service noirware provider, offering defensive noirware like Media-Sensitive Glasses - wearable sensory-mediation devices that filter, block, or replace irritating ambient video without affronting government regulation.

I would be remiss if I failed to comment on the Federal Communications Commission's approach to regulation, which is both technologically naïve and over-reaching. For one, the ban on jamming applies to any RF source, even if that source is an illegal one used to eavesdrop without court authorization - a paradigmatic violation of the Fourth Amendment. On my reading, if you were to use a low-energy jammer to jam a cell phone in your own back yard in order to prevent invasive eavesdropping through the phone's remotely activated microphone and camera, you - not the eavesdropper - would be subject to prosecution (see ). Of course, there are good reasons for preventing reckless and irresponsible RF jamming. In a recent case, a truck driver's GPS jammer interfered with a new GPS-based system being tested for air traffic control at Newark airport ( , ). However, this is an argument against irresponsible deployment, not against active RF personal privacy defense systems as such. To ban all jamming in all contexts, while allowing RF interception and surveillance, throws the proverbial RF baby out with the digital bathwater. It is simply unreasonable to allow the use of RF tracking devices and at the same time block any active defensive measure. No one can live in a mobile Faraday cage.

While Fried's Wave Bubble may not be ready for prime time, the spirit and enthusiasm behind it are worthy of continued investigation. It is entirely possible that Wave Bubble v. 2.0, together with a more reasonable approach to FCC regulation, may converge on the next digital aspirin to relieve RF tension and stress. We welcome Limor Fried and her colleagues to this challenge.


Our latest candidate for technological absurdism is GPS - at least as it's currently implemented for public and commercial use. And that takes us to our second worthy innovator, Todd Humphreys, who performed the best analysis that I know of regarding the deficiencies of GPS ( , with corresponding TED video at ). Humphreys has been in the news a lot lately because of his demonstration, some years ago, of the ease with which one may capture control of GPS-based automated marine navigation systems through GPS spoofing ( ). Like Fried, Humphreys is to be commended for a yeoman effort in disclosing technological hubris and a priori design flaws.

GPS, like RFID, is a useful technology in most manifestations. And both were developed with little serious concern for security. In the earliest common commercial applications, neither employed robust encryption nor authentication protocols. They are as wide open as early WiFi. That's actually not a bad analogy, because the initial authentication and encryption protocol for 802.11 (Wired Equivalent Privacy, or WEP) was just as lame as the initial encryption algorithm for RFID built into the classic MIFARE chip ( ; this video explanation is especially illuminating: ). Both systems are exemplars of technological hubris or, to re-purpose Edsger Dijkstra's words, mistakes carried through to perfection. To my knowledge, commercial GPS systems still don't offer support for encryption or authentication. And it now appears, based on the Iranian capture of an RQ-170 “Sentinel” drone, that even hardened military GPS guidance systems may be vulnerable to basic RF jamming attacks and GPS hacks ( ). This should come as no surprise to anyone.
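The fatal flaw shared by WEP and the MIFARE Classic cipher is, at bottom, predictable or reused keystreams. The toy sketch below (invented plaintexts and a made-up repeating keystream - not the real RC4 or CRYPTO1 algorithms) shows why keystream reuse in any stream cipher is catastrophic: XORing two ciphertexts cancels the keystream entirely.

```python
# Toy demonstration of keystream reuse: encrypting two messages
# under the same keystream leaks the XOR of the plaintexts to any
# eavesdropper, without the key ever being recovered.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# A made-up keystream standing in for a reused RC4 keystream.
keystream = bytes([0x5A, 0x13, 0xC7, 0x88, 0x2E, 0x91, 0x04, 0x6B] * 4)

p1 = b"ATTACK AT DAWN 0"
p2 = b"HOLD POSITION 11"

c1 = xor_bytes(p1, keystream)
c2 = xor_bytes(p2, keystream)

# The eavesdropper sees only c1 and c2, yet the keystream cancels:
leaked = xor_bytes(c1, c2)
assert leaked == xor_bytes(p1, p2)

# Knowing (or guessing) one plaintext recovers the other outright.
recovered = xor_bytes(leaked, p1)
print(recovered)  # b'HOLD POSITION 11'
```

In WEP the 24-bit initialization vector guarantees such keystream collisions on a busy network within hours, which is one reason the protocol fell so quickly.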


The basic operation of GPS goes roughly like this: satellite almanac and satellite position data are retrieved and stored in the GPS receiver as the receiver locks onto the signals. Commercial GPS uses code phase tracking that trilaterates (commonly, if loosely, called triangulation) the position from four or more of the several dozen satellites in medium earth orbit (~12,000 miles), all of which are controlled and synchronized from ground stations. Unique 1,023-bit sequences (Gold codes) individuate the satellites. The satellite signals are sent on a ~1,500 MHz carrier frequency, which allows the Gold codes to be repeated at millisecond intervals. The GPS receiver then synchronizes the broadcast Gold code with a stored copy of the code. The offset required to match the sequences determines the time delay, from which the distance from the satellite may be calculated. Since President Clinton shut off Selective Availability (the intentional hobbling of the signal to reduce accuracy), commercial GPS has had an accuracy of a few meters. Selective Availability was one form of bias error. Clock errors, satellite position errors, weather-induced signal errors, and multipath (reflection) errors may also produce position anomalies of several meters. But the granddaddy of GPS errors is what geographer Peter Dana calls a ‘blunder' ( ). Blunders can introduce errors so large (many miles) as to render a GPS system unusable.
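The position-fixing step described above can be sketched in a few lines. The sketch below is illustrative, not a production solver: it simulates four satellites at plausible (invented) medium-earth-orbit positions, computes the pseudoranges a receiver would measure, and then recovers the receiver's position and clock bias with the standard Gauss-Newton least-squares approach. The fourth satellite is needed precisely because the receiver's clock error is an unknown alongside x, y, and z.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def solve_linear(A, g):
    """Solve the small dense system A x = g by Gaussian elimination
    with partial pivoting."""
    n = len(g)
    M = [row[:] + [g[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        for i in range(col + 1, n):
            f = M[i][col] / M[col][col]
            for j in range(col, n + 1):
                M[i][j] -= f * M[col][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gps_fix(sats, pseudoranges, iters=15):
    """Gauss-Newton solve for receiver position (x, y, z) and clock
    bias (kept in meters, i.e. C * seconds) from the pseudorange
    model rho_i = |sat_i - p| + bias_m."""
    x = y = z = bias_m = 0.0  # start at Earth's center, zero bias
    for _ in range(iters):
        H, res = [], []
        for (sx, sy, sz), rho in zip(sats, pseudoranges):
            dx, dy, dz = x - sx, y - sy, z - sz
            d = math.sqrt(dx*dx + dy*dy + dz*dz)
            H.append([dx/d, dy/d, dz/d, 1.0])  # Jacobian row
            res.append(rho - (d + bias_m))     # measurement residual
        # Normal equations: (H^T H) delta = H^T res
        A = [[sum(h[i] * h[j] for h in H) for j in range(4)] for i in range(4)]
        g = [sum(h[i] * r for h, r in zip(H, res)) for i in range(4)]
        delta = solve_linear(A, g)
        x += delta[0]; y += delta[1]; z += delta[2]; bias_m += delta[3]
    return x, y, z, bias_m

# Invented scenario: four satellites ~26,571 km from Earth's center
# (MEO), a receiver on the surface, and a 1-microsecond clock bias.
R_E, R_S = 6_371e3, 26_571e3
truth = (R_E, 0.0, 0.0)
bias_true_m = C * 1e-6  # ~300 m of apparent range error
sats = [(R_S, 0.0, 0.0), (0.0, R_S, 0.0), (0.0, 0.0, R_S),
        (R_S / math.sqrt(3),) * 3]
rhos = [math.dist(s, truth) + bias_true_m for s in sats]

x, y, z, bias_m = gps_fix(sats, rhos)
print(f"position error: {math.dist((x, y, z), truth):.2e} m, "
      f"recovered bias: {bias_m / C * 1e6:.3f} microseconds")
```

Note what is missing: nothing in the pseudorange model authenticates the signals, which is exactly the opening a spoofer exploits by broadcasting plausible but false Gold-code timing.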

Todd Humphreys' GPS spoofer is a utility that can produce blunders. He has successfully demonstrated how his GPS spoofers can introduce dangerous navigational errors in aircraft and marine applications. Absent a robust security model, commercial GPS systems at the moment are untrustworthy with no antidote on the horizon. Commercial GPS is an example of an RF application for which Fried's Wave Bubble wouldn't help because the jamming would render GPS useless.

But there's more. Humphreys points out that GPS Dots ( ) less than a centimeter on a side are following RFID tags in popularity. Combine this with GPS carrier phase tracking, which is far more accurate than code tracking because it resolves the location to about 1% of the wavelength of the carrier frequency, or about 2 mm. In this case, the only defensive measure may be large quantities of micro-bubbles (let's call them ‘bubblettes') - but of course this requires that you find the GPS dots before you can neutralize them.
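The 2 mm figure follows directly from the carrier wavelength. A quick back-of-envelope check, assuming the GPS L1 carrier of 1575.42 MHz (the column cites only the rounded ~1,500 MHz figure):

```python
# Carrier-phase precision sanity check: ~1% of the carrier
# wavelength. L1 frequency of 1575.42 MHz is assumed here.
c = 299_792_458.0   # speed of light, m/s
f_l1 = 1575.42e6    # GPS L1 carrier frequency, Hz

wavelength = c / f_l1            # ~0.19 m (19 cm)
precision = 0.01 * wavelength    # 1% of a wavelength

print(f"wavelength = {wavelength * 100:.1f} cm, "
      f"1% = {precision * 1000:.1f} mm")
# prints: wavelength = 19.0 cm, 1% = 1.9 mm
```

Compare this with code tracking, where one Gold-code chip corresponds to roughly 300 meters of range, which is why carrier phase tracking is in an entirely different accuracy class.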


Both Limor Fried and Todd Humphreys provide considerable insight into the negative externalities associated with modern technology. But in so many of these cases the externalities were evident to knowledge-domain experts from the start. RFID technology dates back at least as far as Leon Theremin's “Thing,” which was planted in Ambassador W. Averell Harriman's Moscow residence in 1945; viable transponders were in use decades before that. GPS dates back to the late 1950s. Does it seem reasonable that for the past 60+ years no one asked what would happen if RFID and GPS were subjected to out-of-band applications? They are no different from children's toys with sharp edges, Takata automobile airbag inflators that blast passengers with shards of steel ( ), drones that can be remotely airjacked using techniques similar to those described by Humphreys ( ), insecure Webcam/baby monitor systems ( ), or the glitzy new home automation systems that invite intrusion and compromise ( ). These are but a few of the technologies in use that were, and are, really bad ideas. As a society we have got to be forceful in insisting that the potential abuse or misuse of any technology be included in the calculus of all innovation; otherwise we as consumers should refuse to buy or use it. Apropos of Paul Masson, the mantra should be: we will see no technology shoveled before its time. And it is a mistake to assume that corporate-financed politicians will serve as our advocates in this regard. For the most part, they are incentivized to do the opposite.

The mindset that produces these poorly-thought-through technologies, unfit for public consumption, is related to what Geoff Lewis calls “failure porn” ( ). Failure porn and the hazardous technologies discussed here exist within inter-related dystopic frameworks, both of which follow from a culture of unreflective, feckless design, irresponsible development, and churlish marketing.

Finally, I note that on December 28, 2014, at the 31st Chaos Communication Congress, Karsten Nohl announced SnoopSnitch, a new application for rooted cell phones running Android 4.1 or newer that monitors a spectrum of Signalling System 7 (SS7) cell-phone protocol vulnerabilities, including interception attacks like IMSI catching and re-routing attacks ( ). Nohl observes that SS7 could be secured if the carriers would just commit to reasonable customer privacy standards. Failing that, no 3G GSM network should be assumed secure, especially since the 64-bit A5/3 cipher has been broken. Additional information is available online at .