A foreign friend of mine struggled for nearly three weeks to prove to his country’s authorities that he… existed. All because of an error in the national personal identification database, which had simply “deleted” him. It is an example of how poorly managed or poorly processed information can complicate the life of an ordinary citizen and even cause social tensions.
Here is another example. A few years ago, a resident of the United States received a letter stating that he had committed a number of traffic violations and that his driver’s license had been revoked as a result. Surprised, he immediately called the relevant authority to clarify the matter. In response, he was read… a list of the offenses he had allegedly committed. He was dumbfounded, as he had been a model driver all his life.
In the end, it all turned out to be a mistake. How did it happen? The culprit was an application that relied on facial recognition: a defective algorithm had erroneously associated someone else’s face with his name. This case was a one-time error with clear consequences and a predictable course of events.
Can machines replace people?
Unfortunately, data processing errors can lead to situations far more complex than those described above – situations with dangerous social consequences. Poorly designed algorithms can push people to “boil over”. An example? A well-publicized case from the last US presidential campaign. The Facebook newsfeed confronted the site’s astounded readers with news that an employee of the conservative Fox News TV station had helped Hillary Clinton in her election campaign. This unverified and completely fictitious newsflash went viral. Ultimately, the false story was taken down and voters calmed down. Interestingly, however, the incident took place three days after Facebook announced that it was dismissing the staff of its trending news section, whose jobs would now be performed by… algorithms. The affair put wind in the sails of Artificial Intelligence critics, who immediately, as is their custom, raised an uproar, claiming the technology was unreliable and had more drawbacks than benefits.
The complexity of data
The best commentary on just how complex a problem we are dealing with, and on how its consequences might upset public order, came from Panos Parpas, a research fellow at Imperial College London. The scientist was quoted by the British daily The Guardian as saying: “Algorithms can work flawlessly in a controlled environment with clean data. It is easy to see if there is a bug in the algorithm. The difficulties come when they are used in the social sciences and financial trading, where there is less understanding of what the model and output should be. Scientists will take years to validate their algorithms, whereas a trader has just days to do so in a volatile environment”.
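Parpas’s point about clean versus messy data can be made concrete with a toy sketch (a hypothetical illustration, not taken from his research): the very same simple decision rule that is flawless on well-separated, controlled inputs degrades as soon as real-world measurements drift across its decision boundary.

```python
def classify(value, threshold=0.5):
    # A trivially "correct" algorithm: label inputs above the threshold as 1.
    return 1 if value > threshold else 0

def accuracy(data):
    # Fraction of (measurement, true_label) pairs the rule gets right.
    return sum(classify(x) == y for x, y in data) / len(data)

# Controlled environment: clean, well-separated measurements.
clean = [(0.9, 1), (0.8, 1), (0.1, 0), (0.2, 0)]

# Real-world environment: the same phenomenon, but noisy measurements
# have drifted across the 0.5 threshold.
messy = [(0.9, 1), (0.45, 1), (0.1, 0), (0.55, 0)]

print(accuracy(clean))  # 1.0  -- flawless on clean data
print(accuracy(messy))  # 0.5  -- the same rule fails on messy data
```

The rule itself never changed; only the quality of the data did. That is precisely why validating an algorithm on clean data says little about how it will behave in a volatile, real-world environment.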
Machine Learning is a trend widely debated in the context of the global advent of Artificial Intelligence. Machines’ ability to learn allows people to build computers that are capable of processing information from their environments and using it to self-improve. A case in point is the incredibly powerful IBM Watson, a computer with a capacity to process the huge datasets fed into it, providing its users with more exhaustive answers. Unfortunately, there are times when the information is processed in a manner that is controversial, to put it mildly. This lesson was learned the hard way by Microsoft, which some time ago launched its Tay chatbot on Twitter. The application was expected to process tweets and learn to provide specific answers to the site’s users. Imagine the bewilderment of Tay’s creators as they watched a flood of offensive profanity stream in, with some tweets even questioning public order. As it turned out, the statements the application churned out were based largely on the posts of users who did not shy away from hate speech.
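The mechanism behind Tay’s failure can be sketched in a few lines (a deliberately simplified illustration, not Microsoft’s actual implementation): a bot that “learns” by trusting whatever users tell it will faithfully reproduce the majority input, however toxic.

```python
from collections import Counter, defaultdict

class NaiveEchoBot:
    """A toy bot that memorizes user replies and parrots the most common one."""

    def __init__(self):
        # For each prompt, count every reply users have ever given.
        self.replies = defaultdict(Counter)

    def learn(self, prompt, user_reply):
        # The bot trusts its input unconditionally -- no filtering, no moderation.
        self.replies[prompt][user_reply] += 1

    def respond(self, prompt):
        counts = self.replies.get(prompt)
        if not counts:
            return "I don't know yet."
        # Repeat whatever users said most often: garbage in, garbage out.
        return counts.most_common(1)[0][0]

bot = NaiveEchoBot()
# A well-meaning user...
bot.learn("hello", "hi there!")
# ...can be outvoted by a coordinated group feeding the bot abuse.
bot.learn("hello", "you are awful")
bot.learn("hello", "you are awful")
print(bot.respond("hello"))  # prints "you are awful" -- the majority wins
```

The flaw is not in the counting logic, which works exactly as designed; it is in the design decision to treat every user as a trustworthy teacher.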
We need proper education
Applications and programs based on algorithms are bound to proliferate – there is no turning back the tide of digitization. What we need to realize is that digital devices tend to err. Is it sensible to demonize what are merely tools and blame them for the evils of technological progress? No, it is not. What we do need to do is educate IT professionals and engineers so that they understand the social consequences of their mistakes.
[Image: example of the Tay chatbot’s hate speech]