Algorithms born of our prejudices

Are algorithms capable of discrimination? I am afraid they are. What complicates the question is the fact that algorithm developers can hardly be accused of malicious intent. How then could a mathematical formula put individuals and communities in harm’s way?

As distant and aloof as mathematical equations may seem, they are also commonly associated with reliable, hard science. Every now and then, however, it turns out that a sequence of numbers and symbols conceals a more ominous potential. What causes applications that otherwise serve a good cause to go bad? There could be any number of reasons. One of the first that springs to mind has to do with human nature. People are known to let stereotypes and prejudices guide their lives, applying them to other individuals, social groups and value systems. Such cognitive patterns are easily fed by a lack of imagination and a reluctance to give matters proper consideration, and the resulting mixture has negative consequences. People who blindly trust computer data fail to see the complexity of a situation and readily give up their own assessment of events. Once that happens, problems pile up for everyone involved.

Algorithms in the service of the police

The police are ideally suited for testing intelligent technologies. Such technologies have their quirks, and the industry is well aware that a useful algorithm can at times cause problems. But let us be fair: smart data processing allows police computers to effectively group crimes, historical data and circumstances into categories and datasets. There is no disputing the usefulness of applications that help associate places, people, psychological profiles, the times crimes were committed and the instruments used. Criminologists and data-processing scholars at the University of Memphis chose to use IBM software designed for predictive analytics. The project team created an analytical mechanism that takes into account such variables as air temperature, local geography, population distribution, the locations of stores and restaurants, resident preferences and crime statistics. The underlying algorithms use these variables to identify potential flashpoints in the city. And they actually work. Tests of the system show it is indeed possible to predict crime with some degree of accuracy, although no details are given on what that degree might be. The accuracy is nevertheless high enough to justify sending police officers to the “high-risk” zones identified in this manner, and claims are also made that the approach cuts police response time, measured from the moment an incident is reported, by a factor of three. I can only imagine that mere police presence in such locations deters criminal activity. And although this example may be difficult for a layman to assess, it shows that modern technology offers “dynamite” innovations with the potential to produce spectacular results.
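To make the mechanics less abstract, here is a minimal sketch of what such a hotspot model could look like: a classifier trained on tabular features of the kind listed above, which then flags the highest-scoring areas for a given week. Everything here is a hypothetical stand-in, from the synthetic data to the feature names and the model choice; it is not the actual IBM SPSS pipeline used in Memphis.

```python
# Minimal hotspot-prediction sketch on synthetic data.
# Feature names and model choice are hypothetical stand-ins,
# NOT the actual IBM SPSS / Memphis Police Department system.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000  # one row per (grid cell, week)

df = pd.DataFrame({
    "air_temp_c":          rng.normal(15, 8, n),
    "population_density":  rng.lognormal(3, 1, n),
    "bars_within_500m":    rng.poisson(2, n),
    "prior_incidents_90d": rng.poisson(1.5, n),
})
# Toy ground truth: past incidents and nightlife density drive new incidents.
logit = -3 + 0.8 * df["prior_incidents_90d"] + 0.3 * df["bars_within_500m"]
df["incident_next_week"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="incident_next_week"), df["incident_next_week"],
    test_size=0.25, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, scores), 3))

# "High-risk" zones: the top 5 percent of predicted probabilities.
threshold = np.quantile(scores, 0.95)
print("flagged cells:", int((scores >= threshold).sum()))
```

Note that a model like this never predicts an individual crime; it only ranks areas by estimated risk, which is exactly why the quality of the historical data matters so much.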

When computers get it wrong 

The HunchLab system from the startup Azavea, which has been rolled out in the United States, sifts through massive amounts of data of various kinds (including the phases of the moon) to help the police investigate crimes. As in the previous example, the idea is to create a map of locations where the probability of a crime being committed is particularly high. The program focuses on the locations of bars, schools and bus stops across a city. And it is proving helpful. While some of its findings are quite obvious, others can be surprising. It is easy to explain why fewer crimes are committed on a colder day; it is considerably harder to explain why cars parked near Philadelphia schools are more likely to get stolen. Would it ever occur to a police officer without such software to look into the connection between schools and auto theft? These are all positive scenarios. It is hard to get past the fact, however, that smart machines not only make mistakes in their processing but also contribute to wrong interpretations. Quite often, they are unable to understand situational context. Not entirely unlike people.
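The “surprising finding” part is usually less exotic than it sounds: rank the associations between candidate features and an outcome, then hand the odd ones to a human analyst. A toy sketch of that step is below, on synthetic data and with hypothetical column names; it is not HunchLab’s actual method, and a strong correlation says nothing by itself about why a pattern exists.

```python
# Surfacing non-obvious associations in incident data (synthetic sketch).
# Column names are hypothetical; correlation is not causation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 4_000
df = pd.DataFrame({
    "near_school": rng.random(n) < 0.30,
    "near_bar":    rng.random(n) < 0.20,
    "cold_day":    rng.random(n) < 0.40,
    "full_moon":   rng.random(n) < 0.03,
})
# Toy outcome: thefts cluster near schools and drop slightly on cold days.
p = 0.05 + 0.10 * df["near_school"] - 0.03 * df["cold_day"]
df["car_theft"] = rng.random(n) < p

# Rank features by correlation with the outcome; an analyst still has to
# judge which of these associations mean anything in the real world.
print(df.astype(float).corr()["car_theft"]
        .drop("car_theft").sort_values(ascending=False))
```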

The shaky credibility of software

In 2016, ProPublica, an independent newsroom of investigative journalists, published the article “Machine Bias” on US courts’ use of specialist software from Northpointe to profile criminals. Designed to assess the chances that prior offenders will re-offend, the software proved highly popular with US judges, the article noted. The Northpointe tool estimated the likelihood of black convicts committing another crime at 45 percent, while the risk of a white person re-offending was put at 24 percent. To reach these conclusions, the algorithms treated predominantly black neighborhoods as carrying a higher risk of criminal behavior than predominantly white districts. The presumptions propagated by the software were eventually challenged, putting an end to the analytical career of Northpointe’s software suite. The root cause of the problem lay in basing assessments on historical data alone and in the failure to design the algorithms to account for current demographic trends.
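The disparity ProPublica documented is easiest to see as a difference in error rates between groups: among people who did not go on to re-offend, how often does the score still flag them as high risk? Below is a toy illustration on synthetic data, not the actual Northpointe/COMPAS dataset, of how a score that leans on a group-correlated proxy (a neighborhood, for instance) produces exactly that gap even when both groups re-offend at the same rate.

```python
# Group-wise false-positive rates of a biased risk score.
# Synthetic data only, NOT the Northpointe/COMPAS dataset.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["A", "B"], n)
reoffended = rng.random(n) < 0.35          # identical base rate in both groups
# The score gets a bump for group B that has nothing to do with behavior,
# e.g. because the model keyed on neighborhood as a proxy for the group.
score = rng.normal(0, 1, n) + 1.2 * reoffended + 0.8 * (group == "B")
high_risk = score > 1.0

df = pd.DataFrame({"group": group, "reoffended": reoffended,
                   "high_risk": high_risk})
# False-positive rate: flagged as high risk among those who did NOT re-offend.
fpr = df[~df["reoffended"]].groupby("group")["high_risk"].mean()
print(fpr)  # group B's rate is markedly higher despite identical behavior
```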

Algorithms and white faces

In her 2016 book “Weapons of Math Destruction”, Cathy O’Neil explores the premise that algorithms greatly influence many areas of people’s lives. She suggests that people tend to give mathematical models too much credit. This, she claims, gives rise to biases that form in many ways and on many levels. Prejudices, she says, originate early, even before the data that algorithms use for analysis is collected. Amazon managers discovered the very same mechanism. They noticed that the recruitment programs they were using regularly discriminated against women: searches for promising prospects would always leave women in the minority among the suggested hits. What caused the bias? Reliance on historical data showing more men applying for specific positions. That imbalance tipped the scales further in men’s favor and ultimately led to biased employment policies.
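A minimal sketch of how that leak happens: train a model on historical hiring decisions that favored men at equal skill, include an ostensibly gender-blind feature that nonetheless correlates with gender (attendance of a women's college, say), and the model learns to penalize it. All names and numbers below are hypothetical; this is not Amazon's actual recruiting system.

```python
# How historical imbalance leaks into a "gender-blind" ranking model.
# Entirely synthetic; NOT Amazon's actual recruiting software.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 8_000
skill = rng.normal(0, 1, n)
is_woman = rng.random(n) < 0.5
# Historical labels: past hiring favored men at equal skill.
hired = rng.random(n) < 1 / (1 + np.exp(-(skill - 1.0 * is_woman)))

# A feature that never mentions gender but correlates with it
# (e.g. membership in certain clubs or colleges).
proxy = is_woman.astype(float) + rng.normal(0, 0.3, n)

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
print("weight on skill:", round(model.coef_[0][0], 2))
print("weight on proxy:", round(model.coef_[0][1], 2))  # negative: the proxy is penalized
```

Scrubbing explicit gender from the input does not help here: as long as the labels encode past discrimination, some correlated feature will carry it forward.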

Algorithms not getting cultural change

The above software was built on algorithms developed in an era in which gross gender-based inequalities plagued employment, a moment in time characterized by an over-representation of men. Trained on historical data, the algorithms operated on the “belief” that the world had not changed. Their assumptions and simplifications (that a black neighborhood means a higher probability of crime, that men are more likely to be excellent professionals) were therefore misguided.

Disturbing questions

If you suspect that mechanisms like those described above may be common in professional and personal life, you may well be on to something. How many cases are we unaware of in which data is organized around erroneous assumptions? How often do algorithms fail to account for economic and cultural change?

“Black box” is the term used to describe our helplessness in the face of what happens inside the “brains” of artificial intelligence. Our ignorance, combined with the increasing autonomy of algorithms that turn out to be far from infallible, makes for a disturbing mix. The prejudices of algorithms will not vanish at the wave of a magic wand. The key question, therefore, is whether their developers, who often do their design and training work all by themselves, will rise to the task and realize just how easily human biases and behavior patterns rub off on software.

.    .   .

Works cited:

IBM and the Memphis Police Department, “IBM SPSS: Memphis Police Department, a detailed ROI case study”, Link, 2015.

Maurice Chammah, with additional reporting by Mark Hansen, “Policing the Future: In the aftermath of Ferguson, St. Louis cops embrace crime-predicting software”, The Verge, Link, 2018.

Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, “Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks”, ProPublica, Link, 2018.

.    .   .

Related articles:

– Learn like a machine, if not harder

– Time we talked to our machines

– Will algorithms commit war crimes?

– Machine, when will you learn to make love to me?

– Hello. Are you still a human?

– Artificial intelligence is a new electricity

– How machines think

18 comments

  1. Pico Pico

    Maybe a dumb question, but do these AIs have memories?? Like when they’re not being used they’re literally just sleeping? Have we created sentience? I mean, I know computers have memories, but I guess I mean… do they really feel, or just know to talk about feeling? I have no idea and my mind is blown.

  2. Marc Stoltic

    AI is getting better and better over time. One day AI can be programmed to detect lies and corruption. When that happens everyone in power will be held to a new standard or “reprogrammed” themselves…

  3. Mac McFisher

    Suppose a country funds a Manhattan Project for AI. Wouldn’t it be rational for other countries to nuke its data centers and electricity infrastructure?
    The first one to make AI will dominate the world within hours or weeks. Simple “keep the bottle on the table” scenarios tell us that any goal is best achieved by eliminating all uncertainties, i.e. by cleansing the planetary surface of everything that could potentially intervene.
    This suggests there cannot be a publicly announced project of this kind driven by a single country. Decentralization is the only solution. All countries need to run these experiments at once, with the same hardware, at exactly the same time.

    • Laurent Denaris

      AI is already militarized. Conway’s Warship 2017 had an interesting article about mine warfare in the modern era. When it comes to counter-mine warfare, guess what the weapon of choice is? Small, autonomous submersibles that seek out and identify mines themselves. It won’t be long before they’re destroying them as well.

      • Mac McFisher

        GPT-3 has an incredibly good model of the English language and would certainly pass the Turing test, but the question remains whether it truly understands what it is saying.
        The answer to that question is most likely no. GPT-3 has derived a model of English with 175 billion parameters via deep machine learning. That is, it has recognized and internalized many, many linguistic patterns and connections that allow it to imitate an ordinary English speaker while having no understanding of what it is actually saying.

    • Aaron Maklowsky

      Picture for a minute the scenario where Russia develops this and releases 100,000 instances of an AI tasked to eavesdrop on all communications tied to President Trump’s Twitter account. One by one, they would learn who’s who, bypass security, and won’t stop until they get there. It won’t run a fixed script and then quit; it keeps going, and learning. Then, once it learns, it tells the originator, and that can be used to expedite learning in the future. Times 100,000. And then repeat. AI learning is fucking scary.
      It’s a pipe dream to think it will only be used for good. It needs to be tied to VERY harsh consequences for misuse so that there is an effective deterrent.

    • Guang Go Jin Huan

      With advances in medicine, people are living longer—in many cases because they survive events like strokes and heart attacks, which then require long-term physical rehabilitation. For these more complicated interactions, a mechanical doctor or assistant would have to closely follow the lead of human clinicians, for both careful physical interactions and nonphysical ones, like explaining a treatment.

  4. Tesla29

    ML is based on data. Garbage in – garbage out

  5. Acula

    1. Well, despite progress in AI it is still quite stupid, and we have a hard time making it more intelligent than, say, a worm. This is because expanding the resources available to AI (a trick that worked for regular computers) tends to make it more stupid rather than more intelligent: with that extra capacity it tends to memorize instead of generalizing. Hence creating complicated AI systems capable of thought seems to be well ahead of us.

    2. The future is just an illusion in physics. It is just one possible way of ordering events, and not a particularly remarkable one, outside of the way our mind operates: we can remember the past but cannot remember the future.

    • Krzysztof X

      The big catch here is how you train these algorithms to make sure that any bias, conscious or unconscious, is not propagated into the algorithm. I think this is where ethics comes into play, along with rules and legislation, to make sure that even if we don’t understand the details of the decision process, the outputs are fair given the inputs.

      • Mac McFisher

        Making an AI is one thing. Making it self-aware is another. Making an ASI is yet another thing. And then, making an ASI capable of existential danger is finally another.

      • Laurent Denaris

        Considering the different approaches to AI, we keep assuming we are going to have intelligences like us; I doubt it. I really wonder if they will be as independent and autonomous as we imagine, too. If anything, as far as processing information goes, it could be AI competing with, or working with, augmented humans.

    • Marc Stoltic

      China would be the real investor. They have booming science and technology academies. When next-generation sequencing came out, they basically purchased 11 or 12 of the most cutting-edge sequencers from Illumina and set up a division for genomics. They are cracking the code by studying in Western countries and coming back to apply their skills, unlike India, I would suggest, at this point.
      China already has top-notch mobile computing technology, at least mass-produced and cheaper than Snapdragons, but the point is China has more motivation, with the South China Sea, the Indian subcontinent, and interactions with Japan and America over the years as the major conflict episodes.
      They are already building roads and automated checkpoints in Tibet and nearby regions to ensure troops can reach the borders.

      • Pico Pico

        How aware are these AIs? I thought they just learned from hearing actual conversations, but these seem to understand a bit more; that’s really spooky. Does anyone know much about how these specific AIs were trained?