When a breach captures a part of us that is unchangeable, does it mean that we have allowed technology to pry too deeply into our lives?

We have read about the recently released footage, complete with facial recognition and deep looks into private spaces that were never supposed to be public, and wondered whether it’s safe to trust the surveillers, the keepers of the keys, to replay snapshots of our lives, with or without our permission.

An always-on digital eye, the mantra goes, will keep us safe. But how safe, and from whom? Never mind who is “supposed” to have access to the stored troves of our data; if Bloomberg’s reports are to be believed, we now have to ask what happens when, not if, the footage falls into the wrong hands. And once released, there’s no practical way to get it back. It feels as if a part of our stories was just stolen, and there’s little that can be done about it.

Security was supposed to fix this, to keep private things private. It feels like in some way we’ve failed in the never-ending game of cat-and-mouse between protectors and thieves; we let this happen to us. But as long as there have been locks, there have been lock pickers, so the best approach is to assume a breach will happen and not pretend that “perfect security” is achievable. Companies that adopt this stance tend to fare far better against attacks, both inside and outside the castle walls.

But when a breach captures a part of us that is unchangeable, like our face, appearance and private actions, has it gone too far? The same goes for biometrics. If a breach captures our retinal information, fingerprints and such, have we allowed the technology to pry too deeply? 

Centuries ago, society had to decide what it considered public and private spaces, and what the rules of engagement were for prying eyes. Kissing in a public park and expecting privacy was considered too much. But so was expecting a crowd while kissing in your home. Over time we worked out what was and wasn’t a reasonable privacy expectation. But technology has been foisted upon society in a mere blip on the human timeline, and we’re still working out what to do about the repercussions, like when someone steals pictures of you wherever you happen to be.

RELATED READING: D-Link camera vulnerability allows attackers to tap into the video stream

In the Verkada breach, there is imagery of people in hospitals, jails, workplaces, private spaces and the like. Regardless of the attackers’ motives, the result illustrates a point about the pernicious element of pervasive surveillance. Whether that point sticks remains to be seen.

But unless we agree collectively that some parts of our lives really shouldn’t be surveilled, and that having privacy is not a tacit admission of guilt, but rather an attempt to reclaim the parts of our lives that really should stay private, the right to be left alone, haven’t we failed? There is something in the social psyche that seems to feel the need for solitude, for peaceful disengagement from prying eyes, to just “be”. Have we lost that in the name of security, or just paid too much for it?

Security cameras, in common use since at least the 1970s, have become pervasive just as other digital technologies, including the internet, have developed and become mainstream. That a technology intended for various safety and law enforcement activities should be misused should come as no surprise to regular readers of WeLiveSecurity. But, in this case, the breach was not of Social Security numbers or credit card details, but of people, many of whom may have been unaware they were being recorded.

It’s creepy, and it’s invasive, but it is also unclear what information was actually taken from the security camera vendor’s network. If intellectual property such as source code and designs for current and future products was taken, a competitor could far more easily create or improve its own products, without having to engage in the time-consuming and expensive R&D required to do so. If the company’s internal communications were stolen, this could have implications not just for the company, but also for customers and investors, whose private issues and concerns may become public.

Perhaps the scariest issue would be if the video archives were somehow copied. While such a large set of data may have been impractical to copy in its entirety, it would provide a treasure trove of human activity that could be used as training data for machine learning, especially if any work already done by the vendor was copied as well. The ability to train a computer to spot actions and patterns of behavior, and then alert an operator or even act autonomously, is frightening enough. That innocent people would be unknowingly involved in training it is nightmarish.