What’s being done with your face?

Pattern and facial recognition are revolutionizing medicine, the automotive industry and marketing, making people’s lives easier. However, these advances have a dark side too. You should prepare for the fact that your face may attract a lot of interest in the coming years.


Like any other modern, innovative technology, facial and object recognition has a short but eventful history behind it. As we go over its breakthrough moments, we might revisit the year 2011, when Jeff Dean, an engineer at Google, met the computer science professor Andrew Ng. Together they came up with the idea of creating a powerful neural network into which they “fed” 10 million images taken from the internet (mainly from databases, e-mails and YouTube videos and photos). Dozens of hours of continuous processing later, the network had extracted three patterns that could be used to distinguish images of human faces, human bodies and cats. From then on, the software could process new data and decide instantly whether an object portrayed in an image was, say, a cat. Although this may not sound particularly exciting, it was a major breakthrough: a simple yet very effective method had been developed. As a result, no one today needs to write code describing skin colors or the shapes of noses, or faces as a whole; the network learns such features on its own.
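
To make that shift concrete, here is a minimal sketch in Python (using PyTorch) of the general recipe the experiment popularized: hand a network a pile of labeled images and let it work out the distinguishing patterns itself. This is only an illustration, not Google’s actual setup; the original experiment was unsupervised and ran at a vastly larger scale, whereas this toy version trains a tiny supervised classifier on synthetic stand-in data, and every name in it is illustrative.

```python
# A toy version of "learning to tell cats apart": instead of hand-coding
# rules for fur or noses, we show a small network labeled images and let
# gradient descent find the distinguishing patterns. FakeData is a
# synthetic stand-in for a real photo collection.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

data = datasets.FakeData(size=512, image_size=(3, 64, 64),
                         num_classes=2, transform=transforms.ToTensor())
loader = DataLoader(data, batch_size=32, shuffle=True)

# A deliberately tiny network; the 2011 experiment used one vastly larger.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two outputs: "cat" / "not a cat"
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:            # one pass over the data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                      # errors adjust the learned "rules"
    optimizer.step()
```

Note that nothing in the sketch spells out what a cat looks like; the only human input is the labels, which is precisely the break with hand-written recognition rules.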

Something to be grateful for

Although both of these recognition technologies were highly promising, they still needed to be tested in the real world. Researchers working with the world’s most powerful computer, IBM Watson, were discovering impressive capabilities to be gained by examining enormous data sets, including photographs of the human body, for visual signs of severe diseases. Policemen raved about the time they saved by being freed from having to manually review the photographic archives of criminals’ faces. Facebook managers had their own reasons to be happy. Given the amount of visual data collected on their site, facial recognition became simpler to improve and appealing to the advertising industry. The autonomous vehicle manufacturers Tesla, Uber and Waymo began to rely heavily on the technology in their products, which used it to distinguish between people and inanimate objects. Hundreds of families in India should also be grateful. Rapid comparative analyses of the photos of children who were missing or placed in shelters allowed many families to celebrate happy reunions after years of unsuccessful searching.

Awakening Big Brother: hide your face

Unfortunately, all these encouraging examples have done little to allay anxieties over the growing threat of social dystopia. How can we relax about our faces when confronted with an imminent spike in the use of biometrics at our airports and offices? Controversial practices are being reported from China, where facial scans are no longer just a prerequisite for optional services such as fast in-store payments but are also required of citizens for everyday transactions such as purchasing a mobile phone. Official notices telling people that they are being filmed, and that the recordings will be used for social credit scoring, have been put up on Chinese trains.

Western culture remains averse to such close integration of technology and social policy. And yet the problem affects us too, even if not quite as severely. Surveillance cameras keep a close eye on us in our streets, parks, schools, stores and office buildings. We don’t know how long our photos are stored, why they are kept, or for what purposes they are examined.

The black box syndrome, or not knowing how the machine works

The problem is not only that we don’t know how our data is used. It is also that we don’t know when it is captured. After all, we are not talking about fingerprint collection, which could not be done without our knowledge. When a crime is committed in our neighborhood, footage from many surveillance cameras in the area is analyzed. We should not be surprised that many of the data samples examined include our images. Doesn’t that turn us into unknowing, passive participants in investigations every time our picture pops up next to others? If we could suddenly access footage from the cameras that monitor us in the streets and at work, we would realize what a huge part of our daily activities is being recorded. Another important consideration here is the persistence of flaws in these systems. Such flaws have the potential to lead to serious abuse.

Errors may spark unrest

The biases of algorithms, which are theoretically expected to be neutral, have received extensive coverage. An article in Wired describes experiments showing that facial recognition errors are ten times more likely to occur when the people in the photos are black. Three years ago, the US press reported on computer errors at police stations: skewed statistics meant that black people were disproportionately flagged as likely perpetrators, which heavily distorted investigative procedures and practices. Such skewed results were confirmed by independent studies at the Massachusetts Institute of Technology. Algorithms scan photos instantly but inaccurately, and the procedures for using the results of such scans are not without flaws, as those wrongly summoned to appear at police stations have had to learn the hard way.
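
For readers wondering how such disparities are even quantified: auditors typically compare error rates group by group. Below is a hypothetical sketch in Python of the basic calculation behind findings like MIT’s, the false match rate computed separately per demographic group; the records and field names are invented for illustration.

```python
# A hypothetical sketch of how per-group error rates expose bias:
# given match decisions and ground truth labeled by demographic group,
# compute the false positive (false match) rate for each group.
from collections import defaultdict

# (group, predicted_match, actually_same_person) - invented toy records
records = [
    ("group_a", True,  False), ("group_a", False, False),
    ("group_a", True,  True),  ("group_b", True,  False),
    ("group_b", True,  False), ("group_b", False, False),
]

false_pos = defaultdict(int)   # pairs wrongly declared a match
negatives = defaultdict(int)   # all pairs that were not the same person

for group, predicted, actual in records:
    if not actual:
        negatives[group] += 1
        if predicted:
            false_pos[group] += 1

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"{group}: false match rate {rate:.0%}")
```

If one group’s false match rate comes out several times higher than another’s, the system is biased in practice, regardless of how neutral its design was meant to be.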

Order: take down the cameras

The imperfections of facial recognition systems and the lack of clear, universal rules to regulate them have sparked criticism from activists, who call for a public debate and a change of approach on the part of governments and industry. Jeff Bezos of Amazon recently announced that his company was developing its own facial recognition guidelines, which it would consult with legislators. Microsoft has supported the development of privacy laws in Washington State. This year, Facebook modified its face recognition policy, giving users the option of declining to have their faces identified by Facebook. City authorities have also taken action to regulate the use of the technology in public spaces. This year, San Francisco prohibited its police and other municipal agencies from using facial recognition. A new law in Seattle requires the disclosure of surveillance camera locations throughout the city. Many European cities are contemplating setting up camera-free zones. Needless to say, cities are not empowered to impose similar policies on tech giants. However, some private companies are voluntarily deploying similar initiatives, including agencies that organize concerts and other mass events. The next thing we may see is the rise of privacy marketing, in which a corporate image is built on the promise of zones free of recording devices.

The face is the new currency

Our digital images are used for a growing range of purposes. We can use them for various forms of communication and to access services, devices, and even the buildings in which we work. Once digitized, the face is an identity document of sorts and, in a sense, a new kind of currency. The question is whether, as members of the public, we will be able to control the use and circulation of our digitized faces, which, more than any other images, represent our individuality.
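
How does a face come to function as a key or an ID card? In most deployed systems, roughly like this: the photo is reduced to a numeric vector (an “embedding”), and authentication compares vectors. The sketch below is deliberately simplified and hypothetical; the vectors are invented, the threshold is arbitrary, and real embeddings have hundreds of dimensions produced by a neural network.

```python
# A hypothetical sketch of the "face as identity document" idea:
# enrollment stores an embedding; later, a fresh camera frame is
# embedded and compared against it. All numbers here are invented.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

enrolled = [0.12, -0.48, 0.33, 0.90]    # stored when the user registered
candidate = [0.10, -0.45, 0.35, 0.88]   # computed from a new camera frame

THRESHOLD = 0.95  # tuning this trades false accepts against false rejects
if cosine_similarity(enrolled, candidate) >= THRESHOLD:
    print("Access granted: faces match")
else:
    print("Access denied")
```

The threshold is where the currency analogy bites: set it too loosely and someone else can spend your face; set it too strictly and you get locked out of your own accounts.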

What’s next?

So, what are we going to do with our faces? As hard as it may be to accept, we may someday have to consider covering them with masks before venturing outside. To prevent that from happening, it seems advisable, even today, to support initiatives aimed at regulating the technology, at least to some extent.

.    .   .

Works cited:

The New York Times, Gideon Lewis-Kraus, “The Great A.I. Awakening,” Link, 2016.

ScienceDirect, Ying Chen, Elenee Argentinis, Griff Weber, “IBM Watson: How Cognitive Computing Can Be Applied to Big Data Challenges in Life Sciences Research,” Link, 2016.

The New York Times, Steve Lohr, “Facial Recognition Is Accurate, if You’re a White Guy,” Link, 2018.

The Guardian, Mara Hvistendahl, “Can we stop AI outsmarting humanity?”, Link, 2019.

.    .   .

Related articles:

– Algorithms born of our prejudices

– How to regulate artificial intelligence?

– Artificial Intelligence is an efficient banker

– Will algorithms commit war crimes?

– Machine, when will you learn to make love to me?

– Artificial Intelligence is a new electricity

Comments:

  1. Zeta Tajemnica

    As somebody who has worked with AI, I’m surprised that more developers don’t speak out about AI misinformation. AI is nothing like what people make it out to be. It doesn’t have self-awareness, nor can it outgrow a human. To this day, no program has ever been demonstrated that can grow and develop on its own. AI is simply a pattern, or a set of human-made instructions that tell the computer how to gather and parse data.
    In the example above, here’s what’s actually happening. GPT-3 (OpenAI) works much like a Google search engine. It takes a phrase from one person, searches billions of website articles and books for a matching dialogue, then adjusts everything to make it fit grammatically. So in reality this is just like performing a search on a search, on a search, on a search, and so on… And the conversation you hear between them is just stripped/parsed conversation taken from billions of web pages and books around the world.

  2. Zoeba Jones

    OpenAI CEO Sam Altman joins Azeem Azhar to reflect on the huge attention generated by GPT-3 and what it heralds for future research and development toward the creation of a true artificial general intelligence (AGI). Topics:
    – How AGI could be used both to reduce and exacerbate inequality.
    – How governance models need to change to address the growing power of technology companies.
    – How Altman’s experience leading Y Combinator informed his leadership of OpenAI.

    • Zeta Tajemnica

      Do you really think that AI doesn’t learn about gender and the way it’s perceived? I read somewhere that people trust female AIs more than male AIs, or something like that. No statistics because I can’t find the source, but still.

  3. SimonMcD

    Great post, showing both sides of the coin: the pros and cons of new technologies!
    If I may, I’d like to add a few points to think over.
    – New technology owners expect maximum profits from invested funds.
    – The lag of legal regulation behind the speeding development of new business forms only increases.
    – Working out a common position among states that ends in an agreed legal act seems very difficult.
    – Entire body language will be analyzed, not just the face, so it will be necessary to hide in a bubble.
    – And most important of all, this process is unstoppable.

    Stay safe!