Artificial intelligence is an efficient banker

The innovative use of artificial intelligence in the financial industry is no passing fad. It is a necessity, and a trend to which there seems to be no alternative. Algorithms improve financial management and product selection for customers and automate the work of financial institutions.



The most useful innovations that have emerged in today’s financial industry would not be possible without some basic capabilities of AI: processing large volumes of data, performing predictive operations and analyzing information sets in real time. These capabilities alter not only the way banks and insurance companies operate but also the way customers behave. According to Bain & Company, the savings made possible by the deployment of AI will reach $1.1 trillion by 2030, a 22 percent reduction in operating expenses. These figures are consistent with the assessments of Accenture. What follows is an overview of how smart technologies are changing the face of finance and an attempt to predict where such changes are headed. What can we expect AI to do for the financial and insurance industries? Will AI become an efficient banker?

Uniform tools to feed data to AI 

Apart from money, banks’ most valuable asset is knowing their customers. For years, customers have been researched in traditional ways and targeted with standard marketing. This approach has had one fundamental flaw: customer data was gathered and processed using mutually incompatible techniques and tools. Marketing, customer service and sales each relied on a different data collection technique. The digital revolution made it possible to consolidate and harmonize these disparate areas. Today, customer ratings, research on customer preferences and potential, product development and sales all take place in a shared digital space, mainly through mutually integrated and compatible tools. Data, the “lifeblood” of AI, may be fed from different sources, but its underlying digital nature is a constant.

Real time instead of history

Importantly, the majority of these processes take place in real time. For a long time, the most essential variables for building solid customer relationships were age, income, occupation, marital status and history of relationships with relevant financial institutions. Banks still use all of them, but they now form only a part of the analytical puzzle. A number of new factors have emerged, including observation of customers’ current activities (online activities, of course). Companies learn what customers want and who they are by analyzing their behaviors on bank websites, in hotline conversations and in email and telephone interactions. Adding further to the significance of this information is the ability to process it in virtually unlimited ways and, most importantly, the fact that it can be processed in real time.
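As a sketch of how static profile data and real-time behavioral signals can be combined, consider the following toy customer profile. Everything here (the field names, the 50-event window, the score) is an invented illustration, not a description of any bank’s actual system:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    # Static, "historical" attributes banks have always used
    age: int
    income: float
    years_as_customer: int
    # Rolling window of real-time behavioral signals (last 50 events)
    recent_events: deque = field(default_factory=lambda: deque(maxlen=50))

    def record_event(self, event_type: str) -> None:
        """Log a real-time signal, e.g. a page visit or hotline topic."""
        self.recent_events.append(event_type)

    def interest_score(self, topic: str) -> float:
        """Share of recent events that touched the given topic."""
        if not self.recent_events:
            return 0.0
        hits = sum(1 for e in self.recent_events if e == topic)
        return hits / len(self.recent_events)

profile = CustomerProfile(age=34, income=60_000, years_as_customer=5)
for _ in range(3):
    profile.record_event("mortgage_calculator")
profile.record_event("homepage")
print(profile.interest_score("mortgage_calculator"))  # 0.75
```

Unlike a quarterly marketing report, a score like this updates with every click, which is exactly the “real time instead of history” shift described above.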


A voice assistant inquires about a loan

One of the most dramatic changes to be expected in the coming years will affect customer service. The sector will see a rise in the use of voice bots – the kind we are already familiar with from our personal lives. Some go as far as to call this an upcoming “Alexization” of our lives. The term is a clear reference to the growing popularity of devices that support voice assistants, such as Alexa and Siri. RBC Capital Markets predicts that close to 130 million devices directly connected to Alexa will operate globally by 2020.

There is no reason why software based on the concepts behind such programs shouldn’t serve bank customers. All the more so given that bots trained by experts are increasingly better at imitating human interactions. Bank of America has deployed the chatbot Erica to advise bank customers with voice and text messages. The bot works 24 hours a day, supporting all the regular transactions that customers normally perform. This saves the bank bundles by replacing over a dozen, if not dozens of, specialists doing shift work 24/7. In Silicon Valley, they say that unlike machine algorithms and the chatbots they support, humans are not scalable.

When a bot sounds like a human

Since self-learning machines improve over time, assistant devices will become increasingly competent. A bot capable of carrying on a conversation will no longer limit itself to answering questions about the weather or traffic. It may just as well advise you on loan interest rates and the benefits of opening a deposit. It can tell you how much money you still need to repay and remind you about an upcoming payment. Today, devices of this sort do well in first contacts with customers, sorting hotline callers and putting customers through to the appropriate departments. In the future, the duties of assistants will become more complex. Obviously, for this to happen, the bots’ communication skills will require some honing. Skills such as using compound sentences, tuning into customers’ intonation and sensing their underlying problems are still in development. But the pace at which bots are learning is ever faster.
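The “sorting hotline callers” step can be imagined as simple intent routing. The following is a deliberately naive keyword sketch with made-up department names; production bots use trained language models rather than keyword sets:

```python
import re

# Hypothetical department keyword sets (illustrative only)
ROUTES = {
    "loans":    {"loan", "mortgage", "interest", "repay", "repayment"},
    "deposits": {"deposit", "savings", "rate"},
    "cards":    {"card", "blocked", "pin"},
}

def route(utterance: str) -> str:
    """Send the caller to the department whose keywords best match."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    best = max(ROUTES, key=lambda dept: len(words & ROUTES[dept]))
    return best if words & ROUTES[best] else "general"

print(route("How much do I still need to repay on my loan?"))  # loans
print(route("My card got blocked yesterday"))                  # cards
```

The “honing” the text mentions is precisely what separates a keyword table like this from a bot that can parse compound sentences and intonation.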

Money with a fingerprint

As a sector that collects and processes enormous data sets, banking faces a serious challenge. What makes it all the more serious is the fact that today’s users are highly sensitive to security threats. Their anxieties are compounded by regular media reports on leaks of sensitive data kept by companies and portals. Banking and insurance IT experts have their hands full searching ceaselessly for ways to restore the recently undermined confidence of the average consumer.

Against this background, it is interesting to note the gradual changes in the way accounts are accessed online. In a nutshell, this concerns login and authentication procedures. The traditional approach relies on nothing other than the usual logins based on a string of characters. As hacking techniques improve, this becomes a critical vulnerability of information systems. Bots designed specifically to steal such data intercept security codes and passwords. This calls for alternative solutions. By all indications, authentication may be revolutionized by the use of fingerprints and, coming up soon, face recognition. This unique, personal identifier appears to offer the best and also the easiest-to-use protection against system intrusions available today. Just as voice-enabled devices can make traditional search engines and manual data entry interfaces obsolete, character-string-based authentication will be rendered outdated by systems relying on the touch of the human hand.
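The matching step behind fingerprint or face login can be sketched as comparing a stored template vector with a freshly captured one. Real systems derive these embeddings from trained models and tune the threshold carefully; the four-dimensional vectors below are invented purely for illustration:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

THRESHOLD = 0.95  # tuned per system to balance false accepts vs. false rejects

def authenticate(stored, captured) -> bool:
    """Accept the login if the fresh capture matches the enrolled template."""
    return cosine_similarity(stored, captured) >= THRESHOLD

enrolled = [0.12, 0.80, 0.35, 0.44]
fresh    = [0.11, 0.79, 0.36, 0.45]   # same person, slight capture noise
intruder = [0.90, 0.10, 0.72, 0.05]

print(authenticate(enrolled, fresh))     # True
print(authenticate(enrolled, intruder))  # False
```

The design point is that nothing resembling a password ever crosses the wire: only the comparison of two derived vectors decides access.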


Send us a selfie and we will give you a loan

And what about authentication based on a facial photograph? Is it even possible? It is beginning to be. The individual features preserved in a photograph represent unique content that cannot be forged. This uniqueness can be decoded by thorough analysis using software that identifies even the smallest nuances of an image. Entering a photo with a scanner, or scanning in a face with a laptop camera, may well become the next authentication method. AI’s ability to recognize faces, which is increasingly valued, not least by the police, can be used in all kinds of contexts. One can imagine loans being granted on the basis of signals “written” on a photo, which we’ll upload into a designated space. Algorithms will be able to examine an image to provide a bank with information about our health and overall life situation, allowing the bank to assess us as customers.

Investigations and real-time surveillance

In view of these security considerations, it is also worth mentioning the use of AI tools to monitor the security of banking systems. Intelligent AI-based software is designed to track, in real time, the smallest anomalies indicative of a hacker attack. Every day, hundreds of thousands of hacks and theft attempts targeted at valuable data or financial assets occur worldwide. Older types of software would be unable to detect such attacks due to their sheer number. It takes AI tools to provide adequate protection and a sense of security.
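A toy stand-in for such real-time monitoring is a three-sigma check against a recent baseline, here applied to failed logins per minute. The numbers are invented, and the learned models the text describes are far subtler, but the principle of flagging deviations from normal traffic is the same:

```python
import statistics

def is_anomalous(history, latest, sigmas=3.0):
    """Flag a reading that deviates sharply from its recent baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(latest - mean) > sigmas * stdev

failed_logins_per_min = [4, 6, 5, 7, 5, 6, 4, 5]  # normal traffic
print(is_anomalous(failed_logins_per_min, 6))     # False
print(is_anomalous(failed_logins_per_min, 250))   # True: possible attack
```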

And it is not just about such attacks. Banks struggle with the constant natural challenge of having to process and approve ambiguous high-risk transactions in cases that allow for multiple interpretations. The difficulties arise in a wide range of fields from lending to loan repayments, to interest rate calculations, to a host of accounting operations. People using traditional devices need hours to analyze cases and reach decisions to either approve or reject operations. Systems that rely on machine learning can authorize such transactions within mere seconds. The benefits to system efficiency and security are plain to see.
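The split between instant machine approval and slower human review can be sketched as a risk score with a threshold. The features, weights and cut-off below are invented for illustration; real systems learn them from historical transaction outcomes:

```python
# Illustrative feature weights for a transaction (invented values)
WEIGHTS = {
    "amount_over_daily_avg": 0.5,   # transaction size vs. customer's norm
    "new_beneficiary": 0.3,         # first transfer to this account
    "foreign_country": 0.4,         # destination outside home market
    "night_time": 0.2,              # outside usual activity hours
}
AUTO_APPROVE_BELOW = 0.6

def decide(features: dict) -> str:
    """Authorize instantly when risk is low; escalate otherwise."""
    score = sum(WEIGHTS[name] for name, active in features.items() if active)
    return "approve" if score < AUTO_APPROVE_BELOW else "manual_review"

routine = {"amount_over_daily_avg": False, "new_beneficiary": False,
           "foreign_country": False, "night_time": True}
unusual = {"amount_over_daily_avg": True, "new_beneficiary": True,
           "foreign_country": False, "night_time": False}

print(decide(routine))  # approve        (score 0.2)
print(decide(unusual))  # manual_review  (score 0.8)
```

The speed gain the text describes comes from routing the unambiguous majority of cases through the first branch in milliseconds, leaving humans only the genuinely ambiguous remainder.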

AI goes into insurance 

These solutions are already in use, or soon will be, in the insurance business as well. Here, too, artificial intelligence will be confronted with substantial amounts of data on customer behaviors and needs. This industry’s preoccupation with risk is even greater than banking’s. It employs algorithms to assess threats to customers’ life or health and to examine their property holdings (real estate), relying on the photo analysis techniques mentioned above. The sale of life insurance policies will require extensive analysis. From this viewpoint, new ways of gathering data attract much interest.

One possible source of massive amounts of information is the user’s car. Data on vehicle mileage, the driver’s accident proneness and even driving style can be invaluable for companies developing personalized insurance products. It may also be vital to use advanced algorithms to assess the likelihood of customers developing health conditions as a result of specific lifestyles. We have yet to see whether future insurers will want to analyze such intimate data, but the possibility cannot be ruled out.
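A usage-based premium built on such car data might, in the simplest possible terms, look like the toy calculation below. The base premium, thresholds and multipliers are made up; real actuarial models are far richer:

```python
def adjusted_premium(base: float, annual_km: int,
                     hard_brakes_per_100km: float) -> float:
    """Personalize a base premium from telematics data (toy formula)."""
    # Drive more than 15,000 km/year and the premium scales up gently
    mileage_factor = 1.0 + max(0, annual_km - 15_000) / 100_000
    # Each hard-braking event per 100 km adds 5% (a stand-in for "driving style")
    style_factor = 1.0 + 0.05 * hard_brakes_per_100km
    return round(base * mileage_factor * style_factor, 2)

print(adjusted_premium(500.0, annual_km=10_000, hard_brakes_per_100km=0.0))  # 500.0
print(adjusted_premium(500.0, annual_km=35_000, hard_brakes_per_100km=2.0))  # 660.0
```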


Services tailored to customers and automated buying 

AI will soon be able to perform the crucial task of reliably identifying people. This very ability will allow companies to handle every customer as a separate case rather than lumping similar persons into larger sets. At a more general level, this innovation will enable providers to tailor their products and services to the needs of individual customers. There is also another solution that this approach will support. In the near future, customers will use interfaces that themselves analyze the data they enter. This step in automating banking services is a logical consequence of deploying smart algorithms.

Shopping for Mr. Smith and traders

Continuing along this train of thought, it is worth noting that automation will increasingly support product purchases. While today’s sales personnel continue to be an indispensable part of customer relationships, future customers will make do with mere applications. These may, for example, conduct serial sales of financial and insurance products and relieve customers of time-consuming decisions. Note that serial buying will be a life-saver for business investors who handle large volumes of products and information daily. Professional traders will be able to count on AI to relieve them of many duties which today are associated with both shopping and extensive analyses.

The power of algorithms and the mystery of the black box

Artificial intelligence can count on a very secure future in banking and insurance. Algorithms are poised to simplify many operations for the convenience and time savings of customers. The sense of security that automation will give customers will become another revenue driver. As a side effect, one can undoubtedly expect impacts on employment in the financial sector. One case in point is the bank JP Morgan, which has launched a platform to extract data from loan applications. It would take the bank’s employees 360,000 hours to go through 12,000 such documents. Machine-learning software completes the job in, wait for it, a few hours. The benefits are evident. It is also interesting to consider the development in terms of its impact on employment policies.
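The extraction idea can be illustrated with a toy regex-based field puller. The field names and patterns are hypothetical; JP Morgan’s actual platform applies machine learning to far messier contract language than this:

```python
import re

# Hypothetical field patterns for a toy loan application format
FIELDS = {
    "applicant": r"Applicant:\s*(.+)",
    "amount":    r"Requested amount:\s*\$?([\d,]+)",
    "term":      r"Term:\s*(\d+)\s*months",
}

def extract(document: str) -> dict:
    """Pull structured fields out of free-form application text."""
    out = {}
    for name, pattern in FIELDS.items():
        match = re.search(pattern, document)
        if match:
            out[name] = match.group(1).strip()
    return out

doc = """Applicant: Jane Smith
Requested amount: $250,000
Term: 240 months"""
print(extract(doc))  # {'applicant': 'Jane Smith', 'amount': '250,000', 'term': '240'}
```

Run over thousands of documents, even a crude extractor like this shows where the 360,000 hours go: the per-document work collapses to milliseconds, and humans review only the cases where extraction fails.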

Another vital question concerns explaining the inner workings of such algorithmic operations at a deeper level, both for the customer’s sense of security and to ensure that banks know what their systems are doing. This brings us to the recently topical issue referred to as the “black box” problem: much of what happens inside AI-run systems defies human understanding.

AI as a future banker?

Like a lens, the revolution sweeping through financial markets brings into focus all the key issues associated with the presence of AI in our personal and business lives. The only possible answer to the question about AI’s potential and its impact on the convenience of customers (financial product consumers) is: both the potential and the impacts are huge and unlike anything we have seen before. The financial sector is one of the largest beneficiaries of algorithms’ rise to dominance across industries. Such industries are nevertheless acutely aware of the scores of unanswered questions on the protection of funds, system security, customer data processing and regulation. One needs AI to be able to use money to make more money. But money also requires rules and regulations so that it doesn’t evaporate amid technological turmoil.

.    .   .

Works cited:

Bain & Company, Karen Harris, Austin Kimson, Andrew Schwedel, Labor 2030: The Collision of Demographics, Automation and Inequality. The business environment of the 2020s will be more volatile and economic swings more extreme, link, 2019.

CNBC, Arjun Kharpal, Amazon’s voice assistant Alexa could be a $10 billion ‘mega-hit’ by 2020: Research, link, 2019.

Future Digital Finance, WRC Insights, One Million People Are Now Using Erica – BofA’s AI-Powered Chatbot, link, 2019.

Forbes, Martin Giles, JPMorgan’s CIO Has Championed A Data Platform That Turbocharges AI, link, 2019.

.    .   .

Related articles:

– Technology 2020. Algorithms in the cloud, food from printers and microscopes in our bodies

– Learn like a machine, if not harder

– Time we talked to our machines

– Will algorithms commit war crimes?

– Machine, when will you learn to make love to me?

– Hello. Are you still a human?

– Artificial intelligence is a new electricity

– How machines think



  1. CaffD

    What’s really going to happen is that people who don’t understand the technology will anthropomorphize AIs, assume they’re a lot more competent than they actually are, and put them in charge of really important stuff well before they’re actually up to the task. It’s going to be a rough couple of decades.

    • John Macolm

      I think it has to do with the AIs running through scenarios on how to win the war and if one of them determines a high probability of success using nuclear weapons it will launch.

      • Jang Huan Jones

        Russian AI bad, American AI good, Russian AI bad, American AI good, Russian AI bad, American AI good. Sounds like something out of Animal Farm. I looked at the American AI and I looked at the Russian AI, and I could not tell the difference.

  2. John Macolm

    Not only banking. AI has the potential to improve billions of lives, and the biggest risk may be failing to do so. By ensuring it is developed responsibly in a way that benefits everyone, we can inspire future generations to believe in the power of technology as much as I do.

    • Acula

      We have a hard time making AI more intelligent than, say, a worm. This is because expanding the resources available to an AI (a trick that worked for regular computers) tends to make it more stupid rather than more intelligent: given such capacity, it tends to memorize instead of generalizing. Hence creating complicated AI systems capable of thought seems to be well ahead of us. The future is just an illusion in physics; it is merely one possible way of ordering events, and not a particularly remarkable one (outside of the way our minds operate) in that we can remember the past but cannot remember the future.

      • Aaron Maklowsky

        No offense, but you clearly have not a clue about anything real in AI development. I’m a robotics engineer, and some of our clients are developing AI. They all have stories like this, from simple routines to more complex. All of them. It’s scary and cool at the same time. Developing AI is like learning to communicate and teach an alien race.
        Yes, now you simply teach them to solve a problem or task, but once the AI learns how to do that, we don’t know how it arrived there. And it only gets exponentially more difficult to understand with complexity.

    • Laurent Denaris

      Considering the different approaches to artificial intelligence, we keep assuming we are going to have intelligences like us; I doubt it. I really wonder if they will be as independent and autonomous as we imagine, too. If anything, as far as processing information goes, it could be AI versus, or working with, augmented humans.

  3. TomCat

    A lot of people simply don’t consider their personal data a valuable asset. They don’t mind if some algorithm processes their geographic location, demographic data and browsing history in order to target adverts and recommend links.

    They weren’t doing anything with that data themselves and don’t know anybody who would be interested in it, so don’t see it as something worth guarding.
    It’s pretty much the “I’ve got nothing to hide” mentality. And I think privacy advocates could be a lot more effective at persuading people why keeping that kind of info private actually matters to the average person.

    • PiotrPawlow

      Nature published our research showing that an AI model can help doctors spot breast cancer in mammograms with greater accuracy; we are using AI to make immediate, hyperlocal forecasts of rainfall more quickly and accurately than existing models as part of a larger set of tools to fight climate change; and Lufthansa Group is working with our cloud division to test the use of AI to help reduce flight delays.

      • Laurent Denaris

        AI would be used to hack, crack, or brute-force communications. What else would the government do with a super supercomputer other than spy or deanonymize Tor again?

    • PiotrPawlow

      These lessons teach us that we need to be clear-eyed about what could go wrong. There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition. While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone.

      • CaffD

        You’re demonstrating the “AI effect”. AI, per the definition of AI, is quite common these days. The nut we have not cracked, and I’m thankful for that, is AGI: artificial general intelligence. And here’s the thing: society is already having problems adapting to the changes simple AIs represent. If someone created a complex AGI, we’d be fucked at this point. So yeah, I want society and culture to incorporate and deal with the fact that humans won’t be the smartest thing in the future. We haphazardly jammed the internet into human cultures and we’re dealing with a lot of fallout, much of it unexpected. Jamming AGI into our culture will not be as forgiving and will not come without horrible consequences.

        • Jang Huan Jones

          It’s more concerning when companies like Google abuse their power; just look at their usage of AI on YouTube.

    • Krzysztof X

      The creation of legal foundations for access by governments or corporations to the biological systems of citizens (through so-called “mandatory vaccinations”).
      This is what we need to think about, because this is not the Black Mirror series; it is something under construction, and again not to fight a global threat to our civilisation but as a reaction to a disease that has the lethality of a flu.

  4. JackC

    After watching a lot of social media, I think AI will vastly overtake human intelligence once it’s gained the ability. I’m scared.

  5. SimonMcD

    The thing after autonomous cars is infrastructure to coordinate traffic better. This even works in non-autonomous autos, as makers add guidance features (usually voice warnings) to aid with risk scenarios.
    After… 50-ish years in even as controlled a domain as commercial common-carrier aviation, we’re only requiring the tech called NextGen in very dense airspaces.
    I know engineers whose careers were damaged by glomming onto automated airspace management too early.

    • AdaZombie

      At present, many of us are happy to give away our most valuable asset—our personal data—in exchange for free email services and funny cat videos. It is a bit like African and Native American tribes who unwittingly sold entire countries to European imperialists in exchange for colourful beads and cheap trinkets.

  6. Adam Spark Two

    My Computer Science and Neuropsychology degrees tell me you’re right. Check them:
    And so do these guys:
    And these guys:
    And these guys:
    Also, check out these articles:
    Certainly, not everyone agrees on the timeline, but it isn’t “pure science fiction.”

    • AdaZombie

      Just to play a bit of devil’s advocate – I think his response would be something along the lines of “Not everything that organizes human behavior constitutes a religion – only when the system necessitates that the organizing principle cannot be superseded by any other principle or domain of human action.” Now, what exactly that means could certainly be debated – stated principles vs. practical reality… I think an argument can be made that capitalism would not constitute a religion. Corporatism certainly might, though – “nothing is higher than the company.” I think it would have been possible to have capitalism without corporatism if anyone had thought (or cared) to imagine what the long-term consequences of certain legal structures would be. That’s all I’ll say about it for now… it’s not fun to post from my phone, but if anyone is interested and/or I can find the time, I’ll come back and try to make that argument.

      • John Macolm

        As much as I disagree with Elon Musk, I think he very well could be correct on that assumption.

    • Laurent Denaris

      OK, here is how to make a counter-move to all the AI bullshit and threatening nonsense, win the mentioned WWIII, and prevent anyone from ruling the world, by giving the power of AI to humanity. I would do it if I had time and if I could quit my day job… but I can’t… Make a blockchain that gives the miners information in the form of Apache Spark nodes to process/compute the information. The flow of information should be as safe as possible, and it should work with Tor as well. It would be a crossover of ideas that already exist in GRC, XVG, XMR, ETH and Apache Spark. If you take the original Satoshi paper, things can be re-arranged a bit to make this work (the steps where the title says “5. Network”).
      As a result, the world would have the strongest supercomputer available at laughable processing fees, and anyone could access its power. It would simply wipe out all of this AI bullshit we see every day in the media and replace it with some good news about humanity improving itself. There would be no imbalance of power, and the only shortage would be of people who can think creatively while having a strong educational background. Everyone would be a winner, at the cost of no wars.

  7. Jack666

    Machine learning is about having an algorithm learn the best combination of factors to get the most accurate and correct answer.

    You typically use neural networks and train them by feeding in data and applying a kind of reward/punishment signal to tell the network whether it is producing the expected results.
    This is an example where the reward for something is so good (too good) that the algorithm “over-learns”: it focuses far too much on one solution, which ends up not being the accurate one, and it can’t seem to unlearn it.

    • Jang Huan Jones

      All this talk of all-powerful, weaponized AI remind anyone else of The Moon is a Harsh Mistress by Robert Heinlein?

  8. John Accural

    It will be illegal drone armies vs. armies of robotically enhanced humans, which will be more ethical: mech suits, compact tech suits (Iron Man!), helmets/glasses with enhanced integrated systems. That way you get the benefits of robots’ structural rigidity and computer systems as well as human supervision of kill commands.
    We all know someone will break the law and make a huge army of suicide drones to create a new caliphate somewhere…

    • Mac McFisher

      Surprisingly, no. Only Kurzweil was willing to go for such a short timespan and, well, that’s Kurzweil for ya. Most of the other estimates were decades out, at least.
      Personally, what bugs me is how rarely the mainstream media outlets cover this sort of thing. I feel like the media over-plays and over-estimates current “AI” capabilities to an almost absurd degree. I really wish more people realized that we are still (most likely) a very long way out from a HAL or C-3PO.

    • CaffD

      ” Elon Musk, 55, expressed his concerns surrounding smart machines on Wednesday, August 28, at the World Artificial Intelligence Conference in Shanghai, China. Mr Musk’s warning came after he revealed the work of his company Neuralink in July – a company he co-founded to merge human brains with machine interfaces. Speaking at the AI conference, the SpaceX boss argued computers are already outsmarting their creators in most scenarios. More shockingly, Mr Musk claimed some researchers are making the mistake of thinking they are smarter than AI. “

      • Krzysztof X

        The introduction of digital biometric ID cards that can be used to control and regulate participation in social and professional activities.

  9. AKieszko

    What if you include some recursion and a certain degree of randomness? What if you include the ability to learn?

    • tom lee

      It isn’t that our current deep learning / machine learning algos have some crazy zen-paradox, inception-tier mindfuck flaw that only some messiah savant will be able to reveal to us. At least that’s not the main problem. The main problem is wrangling all the data that is needed to sufficiently model the problem space. The “AI” that runs inside Dota and Starcraft is given only relevant information and ALL the relevant information. Sure, those games still have “fog of war”, and the AIs are not able to see things the human players can’t; that’s not what I’m saying. My point is that all the data being fed to those AIs is by definition all the info needed to understand the game/problem space.

      Real life is not like that. We don’t have stockpiles of n-dimensional number arrays that represent all the information needed to solve a given problem. Instead we have reality, with all the wavy light and energy or whatever all the stuff is made of. Our brains have evolved to parse patterns from sound, light and smell. Computers are still trying to bridge that gap. For example, the Dota/Starcraft AIs don’t actually “see” the games’ pixels. Instead they get a matrix of numbers that concisely represents the game’s information. And that’s not to say deep learning can’t understand the pixels and model the patterns presented by them, but it is a more computationally dense effort.

      • Jang Huan Jones

        What is your point exactly? No one is surprised this is happening like you seem to be making fun of. The point of the article is how serious this kind of tech can end up being. It’s a conversation about the implications of all this, not about the fact that it’s happening.

  10. AKieszko

    As someone who specializes in artificial intelligence, I can say you don’t really understand it either.
    AI is, at its very core, a filter between input and output. It can be anything, from linear regression to decision trees (which are basically a series of if/else statements). The only extra is that the model is able to “learn”, which is just optimizing functions over datasets, or a heuristics approach.
    What most people seem to forget is that neural networks are just big, refined decision trees: a long series of yes/no steps from input to decision. Our brain is literally just a complex series of if/else, although its plasticity, vastness and feedback loops allow us to grasp extremely complex concepts.
    After all, what does it mean for a human to be intelligent? As I recall, atoms and laws are in motion, and that’s all; choices are not really choices, and in that context, are we that different from artificial intelligence? Biology had millions of years to optimize our brain; technology just needs to catch up.

    • John Macolm

      I really don’t want I Have No Mouth And I Must Scream to end up being prophetic here. But the basic backstory is that the US, China and Russia all develop major AI projects to combat the others, and then one AI absorbs the others and overthrows humanity while being decidedly unfriendly.
      The fiction isn’t any evidence at all about the likelihood of the scenario one way or another, but I do think that on its own merits it’s both likely enough and damaging enough to be troubling. We need to focus on AI being friendly before we focus on AI being powerful, or this could go very badly.
      At least each country would be individually motivated to make sure the AI was aligned with its interests, but that could be jeopardized by the competing concern of making it powerful quickly.

      • Jang Huan Jones

        You’re absolutely correct. And let’s think on that for a second. If we have true AI, that means it can collect sensory input, think, come up with solutions to abstract problems, change its behavior, reprogram itself, etc.
        And we all know that tech can process information super fast. So what do you think happens once a real AI learns math or science? Wouldn’t it learn it all in a matter of minutes, then continue on to pose and solve new mathematical problems? Not to mention the scientific and technological advancements that could become possible entirely because a computer thought them up.
        That means that once a country creates an AI, not only will it know many more answers in science and math, but that country can use those answers to develop new weaponry, defenses, infrastructure, etc.

      • Laurent Denaris

        “Power in technology” such as how he invested 500 million in HAARP weather manipulation.

  11. Simon GEE

    AI should be regulated, but that doesn’t mean it will be…
    As much as I love tech and the bare bones of it, I think AI is not a good idea… Now, the kid in me wanted it for years! The adult in me is scared of that shit!