AI and Invasion of Privacy

By Farhat Ali – Apr 2021

A lot of discussion is taking place in the media about the role of AI and how it may lead to invasions of privacy and increased discrimination in hiring and policing.

I am passionate about technology and its contribution to human well-being. The mechanization of farms and the resulting increase in crop yields have led to self-sufficiency in food production, and the industrial revolution has raised standards of living. There is still hunger and poverty in the world, but technology is not the cause.

Will AI protect privacy, improve security, and decrease discrimination? Many articles are being written about AI's potential to eliminate jobs and erode privacy and security. I do not share this view; I believe that the proper application of AI technology will lead to a safer and more prosperous world. Of course, like any other technology, it can and will be misused.
Cameras are becoming ubiquitous; roughly 400 million are currently in use worldwide. Most of these cameras and video recorders store zettabytes of footage, which can itself lead to privacy violations. If humans monitor the feeds, privacy is compromised further, because constant viewing opens the door to gawking and other inappropriate behavior. If, instead, AI is deployed under proper design controls and in compliance with privacy laws, it would “see” each event and, unless it determines the behavior is anomalous and requires human intervention, ignore it and not store it, preventing misuse. Because it removes humans from the ongoing monitoring of routine, non-anomalous events, AI improves privacy.
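
To make this concrete, here is a minimal sketch in Python of the kind of privacy gate I have in mind. The names (Frame, anomaly_score, ALERT_THRESHOLD, process_frame) are illustrative assumptions, not a reference to any particular product; a real deployment would plug in a trained detector and an audited retention policy.

    from dataclasses import dataclass
    from typing import Dict, List

    ALERT_THRESHOLD = 0.9  # assumed cutoff above which an event needs human review

    @dataclass
    class Frame:
        camera_id: str
        timestamp: float
        pixels: bytes

    def anomaly_score(frame: Frame) -> float:
        """Placeholder for a trained anomaly detector; returns a score in [0, 1]."""
        return 0.0  # a real model would be called here

    def process_frame(frame: Frame, alert_queue: List[Dict]) -> None:
        score = anomaly_score(frame)
        if score >= ALERT_THRESHOLD:
            # Anomalous: keep the event and queue it for human follow-up.
            alert_queue.append({"camera": frame.camera_id,
                                "time": frame.timestamp,
                                "score": score,
                                "clip": frame.pixels})
        # Otherwise the frame is neither stored nor shown to anyone, which is
        # where the privacy gain comes from.

The design choice that matters is the default: footage is analyzed in memory and discarded unless something warrants escalation, rather than recorded and reviewed by default.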

After 12 minutes of continuous video monitoring, an operator will miss up to 45% of screen activity, and up to 95% after 22 minutes of viewing. 


How about security? Which leads to better security: humans watching camera feeds continuously, or humans assisted by AI? Many studies have shown that people cannot effectively watch a wall of video feeds for suspicious behavior; they tune out after about 15 minutes. They miss critical events because 99% of what they watch requires no action on their part (see, for example, coverage of Jeffrey Epstein's case). With AI-assisted monitoring, when the AI detects an anomalous event, it highlights it and presents it to a monitoring agent, who is then required to take follow-up action. Missed alerts drop sharply while the agent's productivity improves.
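
As a rough illustration of that workflow, and continuing the hypothetical example above, the agent works only from the queue of flagged events rather than from raw feeds, and every alert requires an explicit disposition. The review callback stands in for the human decision and is an assumption for the sketch, not part of any real system.

    from typing import Callable, Dict, List

    def triage(alert_queue: List[Dict], review: Callable[[Dict], str]) -> List[Dict]:
        """Show each flagged event to a monitoring agent and record the action taken."""
        handled = []
        # Highest-scoring events first, so the likeliest incidents are reviewed first.
        for alert in sorted(alert_queue, key=lambda a: a["score"], reverse=True):
            alert["action"] = review(alert)  # e.g. "dispatch", "dismiss", "escalate"
            handled.append(alert)
        return handled

    # Example with a stand-in reviewer:
    # handled = triage(alerts, lambda a: "dispatch" if a["score"] > 0.95 else "dismiss")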


Finally, what about AI's effect on discrimination? Humans have historically been discriminatory, and the wide range of people who enter the security field are apt to bring their biases with them. Unfortunately, security training has so far been unable to eliminate these biases. AI can encode those same biases and proliferate discrimination when systems are built on past performance indicators; early recruiting AI, for instance, learned to favor candidates who resembled historically successful business leaders. AI can also introduce discrimination inadvertently, as when facial recognition misidentifies Black individuals at higher rates than White individuals.
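
One way to make that misidentification concern measurable, sketched below with a hypothetical evaluation format purely for illustration, is to compare a system's error rate per demographic group before it is deployed; a large gap between groups is exactly the disparity an audit should flag.

    from collections import defaultdict
    from typing import Dict, List, Tuple

    def error_rate_by_group(results: List[Tuple[str, bool]]) -> Dict[str, float]:
        """results holds (group_label, was_misidentified) pairs from an evaluation set."""
        errors: Dict[str, int] = defaultdict(int)
        totals: Dict[str, int] = defaultdict(int)
        for group, misidentified in results:
            totals[group] += 1
            errors[group] += int(misidentified)
        # A wide gap between groups should block deployment until it is addressed.
        return {group: errors[group] / totals[group] for group in totals}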


It is possible to build “ethical” AI, and the companies that build AI-based solutions should be held accountable. Legal safeguards can be put in place to ensure privacy and security and to avoid bias, so that companies are liable for the biases their AI introduces in violation of the law and are penalized for lapses in privacy and security.
Of course, as with any invention, there will be misuse, and those intent on misusing the technology will not be asking for permission.


What we as entrepreneurs are focused on is leveraging advances in AI technology to make our communities safe and secure while protecting the privacy of individuals. What all of us need to do is support companies that meet ethical standards by voting with our pocketbooks, and work through our elected representatives to provide the needed oversight.