02 February 2014

Eyeglass security: well-intentioned, but...

I shared in a teachable security moment at home the other day. Ms. Context had recently had an eye exam and needed a copy of her prescription to get a new pair of glasses. She called the doctor’s office; no problem, they said: we can email it to you. This seemed quick and convenient. The “fun” started with a subsequent message in her inbox bearing an unexpectedly cryptic and concise subject line, something like “encrypt”. Recognizing the From address as that of the doctor’s office (rather than a malware sender who might have sought to tempt with the same title), we continued. We registered an account with the proffered email encryption service provider, creating a password that will probably never have occasion to be reused, and answering the familiar life questions. After a few minutes, we successfully obtained and extracted the prescription-bearing PDF, free to email to an optician.

Ms. Context found these hoops frustrating; other patients might just have given up and waited for postal mail. As a security technologist, I could understand the rationale for encrypting the email; we’re dealing with a patient’s medical data here, after all, where privacy is fundamental and HIPAA regulations speak loudly to healthcare providers. I’ve also had a hand in supporting the concept of Internet email encryption since the pre-Web era, as via this RFC. Nonetheless, it was hard to see that the tangible benefit clearly justified the cost and processing inconvenience in this particular case. What’s the actual threat? It doesn’t seem likely that an attacker would find much value in intercepting the email in order to obtain a pair of glasses that probably wouldn’t match their own eyes. There might be cases where people wouldn’t want specifics of their vision revealed (maybe if they’re approaching the limits of visual acuity required for a driver’s license?); while conceivable, these also seem fairly unusual. I don’t know whether or how the encryption service’s design might expose protected data to insiders there, but that could become another threat to evaluate in the overall picture, one that wouldn’t arise if the service weren’t involved.

Security policies are normally and appropriately conservative, and medical offices should certainly be careful when storing and sending patient data. (I’ll also recommend dialing carefully when using fax machines, but that’s another topic.) For this example, though, many or most patients might not consider this piece of their data particularly sensitive (vs., e.g., the medications they may be taking). Security methods should be effective and also convenient to use; in this case, they seemed merely burdensome. I wish, and continue to believe, that the technology could become easier to apply, so that protecting users’ data becomes standard practice. Where we stand, though, it can often be much easier to see and resent the tangible annoyances that security methods impose than to value the more amorphous benefits that they’re meant to offer.