Your face may attract a lot of interest in the coming years



My article in Data Driven Investor, published September 14, 2020: Your face may attract a lot of interest in the coming years

Just like any other modern, innovative technology, facial and object recognition has a rapid but brief history behind it. As we go over its breakthrough moments, we might revisit 2011, when Jeff Dean, an engineer at Google, met the computer science professor Andrew Ng. Together they came up with the idea of creating a powerful neural network into which they “fed” 10 million images taken from the internet (mainly from databases, e-mails, and YouTube videos and photos). After dozens of hours of continuous processing, the visual input produced three patterns that could be used to distinguish between images of the human face, the human body and cats. From then on, the software could process further data and decide instantly whether an object portrayed in an image was or was not, say, a cat. Although this may not sound particularly exciting, it was a major breakthrough: a simple yet very effective method had been developed. As a result, no code needs to be written today to recognize skin colors or the shapes of noses.
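To make the idea concrete, here is a toy sketch of what “learning a pattern instead of writing rules” means. It uses a single logistic unit in plain NumPy, with synthetic vectors standing in for images; the 64-dimensional “templates”, the noise level and the learning rate are all invented for illustration, not taken from the Google experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Clip logits to keep np.exp numerically stable.
    return 1 / (1 + np.exp(-np.clip(z, -30, 30)))

def make_images(template, n=200, noise=0.5):
    # Each synthetic "image" is its class template plus pixel noise.
    return template + noise * rng.standard_normal((n, template.size))

cat_template = rng.standard_normal(64)   # hypothetical 8x8 "cat" pattern
face_template = rng.standard_normal(64)  # hypothetical 8x8 "face" pattern

X = np.vstack([make_images(cat_template), make_images(face_template)])
y = np.array([0] * 200 + [1] * 200)      # 0 = cat, 1 = face

# Gradient descent learns a weight "pattern" that separates the two
# classes -- no hand-written rules about fur or nose shapes anywhere.
w, b = np.zeros(64), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)               # predicted P(face)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

pred = (sigmoid(X @ w + b) > 0.5).astype(int)
accuracy = float(np.mean(pred == y))
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is the absence of any class-specific code: the same loop would separate any two clusters of inputs, which is why the approach scaled so well once real images and much larger networks were used.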

Something to be grateful for

Although both of these recognition technologies were highly promising, they still needed to be tested in the real world. Researchers working with IBM Watson, one of the world’s most powerful computer systems, discovered that examining enormous data sets, including photographs of the human body, could reveal visual signs of severe diseases. Police officers raved about the time saved by no longer having to manually review photographic archives of criminals’ faces. Facebook managers had their own reasons to be happy: given the amount of visual data collected on their site, facial recognition became easier to improve and appealing to the advertising industry. The autonomous vehicle makers Tesla, Uber and Waymo began to rely heavily on the technology, using it in their products to distinguish between people and inanimate objects. Hundreds of families in India should also be grateful: rapid comparative analyses of photos of children who had gone missing or been placed in shelters allowed many families to celebrate happy reunions after years of unsuccessful searching.

Awakening Big Brother

Unfortunately, all these encouraging examples have done little to allay anxieties over the growing threat of social dystopia. How can we relax about our faces when confronted with an imminent spike in the use of biometrics at our airports and offices? Controversial practices are reported from China, where facial scans are no longer solely a prerequisite for optional conveniences such as fast payment in stores but are also required of citizens for everyday transactions such as purchasing a mobile phone. Official notices telling people that they are being filmed, and that the recordings will be used for social credit scoring, have been put up on Chinese trains.

Western culture remains averse to such close integration of technology and social policy. And yet the problem affects us too, even if not quite as severely. Surveillance cameras keep a close eye on us in our streets, parks, schools, stores and office buildings. We don’t know how long our photos are stored, why they are stored, or for what purposes they are examined.

The black box syndrome, or not knowing how your machine works

The problem is not only that we don’t know how our data is used; we also don’t know when it is captured. After all, we are not talking about fingerprint collection, which could not be done without our knowledge. When a crime is committed in our neighborhood, recordings from many surveillance cameras in the area are analyzed, so we should not be surprised that many of the data samples examined include our images. Doesn’t that turn us into unknowing, passive participants in investigations every time our picture pops up next to others? If we could suddenly access footage from the cameras that monitor us in the streets and at work, we would realize what a huge part of our daily activities is being recorded. Another important consideration is the persistence of flaws in these systems, flaws that have the potential to lead to serious abuse.

Errors may spark unrest

The biases of algorithms, which in theory should be neutral, have received extensive coverage. An article in Wired describes experiments showing that facial recognition errors are ten times more likely to occur when the people in the photos are black. Three years ago, the US press reported on computer errors at police stations: skewed statistics made black people disproportionately likely to be flagged as perpetrators, which heavily influenced investigative procedures and practices. Such distorted results were confirmed by independent studies at the Massachusetts Institute of Technology. Algorithms scan photos instantly but inaccurately, and the procedures for using the results of such scans are not without flaws, as those wrongly summoned to appear at police stations learned the hard way.
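Disparities like the one described above are measured by comparing error rates across demographic groups. The sketch below shows only the arithmetic of a per-group false-match-rate comparison; the counts are invented for illustration and are not data from the Wired article or the MIT studies.

```python
# Hypothetical evaluation records: (group, is_true_match, predicted_match).
# All counts below are made up for illustration.
records = (
      [("group_a", False, False)] * 990
    + [("group_a", False, True)] * 10     # 10 false matches out of 1000
    + [("group_b", False, False)] * 900
    + [("group_b", False, True)] * 100    # 100 false matches out of 1000
)

def false_match_rate(records, group):
    """Share of true non-matches the system wrongly flags as matches."""
    non_matches = [r for r in records if r[0] == group and not r[1]]
    false_matches = [r for r in non_matches if r[2]]
    return len(false_matches) / len(non_matches)

fmr_a = false_match_rate(records, "group_a")  # 10 / 1000  = 0.01
fmr_b = false_match_rate(records, "group_b")  # 100 / 1000 = 0.10
print(f"group_a: {fmr_a:.1%}, group_b: {fmr_b:.1%}, "
      f"disparity: {fmr_b / fmr_a:.0f}x")
```

A single aggregate accuracy figure would hide exactly this kind of gap, which is why audits report error rates per group rather than overall.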

Order: take down the cameras

The imperfections of facial recognition systems and the lack of clear, universal rules to regulate them have sparked criticism from activists who call for a public debate and a change of approach on the part of governments and industry. Jeff Bezos of Amazon recently announced that his company was developing its own facial recognition guidelines, which it would consult with legislators. Microsoft has supported the development of privacy laws in Washington State. This year, Facebook modified its face recognition policy, granting users the option of declining to have their faces identified by Facebook. City authorities have also taken action to regulate the use of the technology in public spaces. This year, San Francisco prohibited its police from using facial recognition on detainees, and a new law in Seattle requires the disclosure of surveillance camera locations throughout the city. Many European cities are contemplating setting up camera-free zones. Needless to say, cities are not empowered to impose similar policies on tech giants, but some private companies are voluntarily deploying similar initiatives, including agencies that organize concerts and other mass events. The next thing we may see is the rise of privacy marketing, in which a corporate image is built on the promise of zones free of recording devices.

The face is the new currency

Our digital images are used for a growing range of purposes. We can use them for various forms of communication and to access services, devices, and even the buildings in which we work. Once digitized, the face is an identity document of sorts and, in a sense, a new kind of currency. The question is whether, as members of the public, we will be able to control the use and circulation of our digitized faces, which, more than any other images, represent our individuality.
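Mechanically, the “face as currency” works through embeddings: a model maps a face photo to a numeric vector, and two photos are declared the same person when their vectors are similar enough. The sketch below illustrates only that matching step; the 128-dimensional vectors, the noise level and the 0.8 threshold are assumptions, and in a real system the vectors would come from a trained face-embedding model rather than a random generator.

```python
import numpy as np

rng = np.random.default_rng(1)

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors, in [-1, 1].
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic stand-ins for face embeddings (real ones would come
# from a face-embedding model applied to photos).
enrolled = rng.standard_normal(128)                      # stored at enrollment
same_person = enrolled + 0.1 * rng.standard_normal(128)  # new photo, same face
stranger = rng.standard_normal(128)                      # a different face

THRESHOLD = 0.8  # arbitrary example threshold

def unlocks(probe, reference, threshold=THRESHOLD):
    # "Spend" the face: grant access when the embeddings match.
    return cosine_similarity(probe, reference) >= threshold

print(unlocks(same_person, enrolled))  # same face re-verified: True
print(unlocks(stranger, enrolled))     # different face rejected: False
```

The policy questions in this article follow directly from this mechanism: whoever stores the reference vector, and whoever sets the threshold, controls the currency.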

What’s next?

So, what are we going to do with our faces? As hard as it may be to accept, we may someday have to consider covering them with masks before we venture outside. To prevent that from happening, it seems advisable even today to support various initiatives aimed at regulating the phenomenon, at least to some extent.


Related articles:

– Technology 2020. Algorithms in the cloud, food from printers and microscopes in our bodies

– Learn like a machine, if not harder

– Time we talked to our machines

– Will algorithms commit war crimes?

– Machine, when will you learn to make love to me?

– Artificial intelligence is a new electricity

– How machines think


9 comments

  1. John Macolm

    I’d be very surprised if nations don’t have “Manhattan” scale projects in the works and in total secrecy related to the use of neural networks for military planning and prediction, perhaps even target acquisition and firing.
    When it emerges on the battlefield as a card up the sleeve which is pulled out when things are looking dire, our current methods of waging war will look as archaic as the weapons of cavemen.

    • Zeta Tajemnica

      Does anyone here actually work in AI research? This is just corporate propaganda promoting the nationalization of Artificial Intelligence… An industry that has the potential to shake every large, bureaucratic crony corporation to its core, as AI will give the average user the power/expertise of entire industries! Read between the lines, and keep corporate controlled government AWAY from AI!!!!!

  2. Mac McFisher

    Suppose a country funds a Manhattan Project: wouldn’t it be rational for other countries to nuke all of its data centers and electricity infrastructure?
    The first one to build AI will dominate the world within hours or weeks. Simple “keep the bottle on the table” scenarios tell us that any goal is best achieved by eliminating all uncertainties, i.e. by cleansing the planetary surface of everything that could potentially intervene.
    This suggests there cannot be a publicly announced project of this kind driven by a single country. Decentralization is the only solution: all countries would need to run these experiments at once, with the same hardware, at exactly the same time.

    • Jang Huan Jones

      The AI we fear is not the AI that exists today.
      It’s General Artificial Intelligence. (Which may or may not be far away.)
      AI that can do complicated tasks quickly is scary, but it’s not the sheer terror of GAI.
      Electric circuits operate a million times faster than biological ones. So if we can make a machine that “thinks” at the level of the smartest humans, we can set teams of them to a task, have them work every second of every day (no bathroom breaks or need for food), and they would think on one thing millions of times faster than humans.
      They would produce more good ideas than we would know what to do with. Let’s set a group of 5,000 geniuses to work on the next big advance in explosive technologies. That group of geniuses “works” for 1 human day, and in their time, makes 2700 years of human level advancement.

      It’s the good-idea machine that is scary, not the complicated tasks that today’s AI can handle.
      It seems far off, and it very well might be. But it doesn’t matter. All we have to do is keep working on AI, and keep making faster machines, and we will without a doubt hit on it eventually. And then it’s all over.

    • Aaron Maklowsky

      The elephant in the room is, we can only protect against it if we know how it works. And, like I said, each path to a conclusion tends to be unique. Because of that uniqueness:
      AI MALWARE CANNOT BE PROTECTED AGAINST
      Picture for a minute the scenario where Russia develops this and releases 100,000 instances of an AI tasked to eavesdrop on all communications tied to President Trump’s Twitter account. One by one, each instance would learn who’s who, bypass security measures, and keep going until it gets there. It won’t run a fixed script and then quit; it keeps going, and learning. Then, once it learns something, it tells the originator, and that can be used to expedite learning in the future. Times 100,000. And then repeat. AI learning is fucking scary.
      It’s a pipe dream to think it will only be used for good. It needs to be tied to VERY harsh consequences for misuse so that it’s an effective deterrent.

    • John Macolm

      For the US, they get a job at Google, etc. The AI projects in these places have deep government ties. I don’t know for Russia and China.

      • Zeta Tajemnica

        Aside from manipulating Facebook and Twitter and maybe hacking the Democrats, what has Russia done to show their prowess in computing and AI?