ChatGPT and banking jobs: Ex-JPMorgan AI quant's verdict
Will your job in a bank survive the maturation of ChatGPT? That depends, frankly, on who you ask.
If you ask ChatGPT itself, it will produce a competent and detailed answer claiming that it will surely consume jobs in research ("It can analyze vast amounts of data and provide insights that can help investors make better decisions. This technology will significantly reduce the need for human analysts."); in trading ("It can analyze market data and execute trades based on predefined rules. This technology will significantly reduce the need for human traders."); in risk management ("It can analyze data and identify potential risks. This technology will significantly reduce the need for human risk managers."); and in compliance ("It can analyze data and ensure that the bank is complying with all relevant regulations. This technology will significantly reduce the need for human compliance officers.").
In other words, referring to itself in the third person, ChatGPT thinks it will be everywhere. Many thousands of banking jobs could disappear into its maw.
But Vacslav Glukhov, a human AI researcher with a Stanford PhD and two decades' experience in the application of quantitative techniques in finance - including, most recently, as AI research director and head of EMEA quant research for e-trading at JPMorgan - disagrees. ChatGPT will mostly eat jobs that entail commentary on figures and the rehashing of existing ideas, says Glukhov.
"If your job is to create a report for a client based upon lots of numbers, then you can outsource that to a smart algorithm that will take data from sources and create some very well argued points," says Glukhov. "It is simply about pointing the machine in the right direction."
Jobs at risk of being superannuated by such pointing include research, but they also include the construction of pitch books by analysts and associates working in investment banking divisions. "If you're creating reports based around figures and text with compelling arguments, then a lot of that job can be automated," reflects Glukhov.
What can't be automated is the generation of completely new ideas, which is where Glukhov says humans will prevail. "It's just a language model - it can't come up with something new, based upon connections the machine has not been trained to make." If you're a trader or a salesperson with those new ideas, this will be your special strength, although relentless originality may be a big ask.
Unlike ChatGPT itself, Glukhov is less convinced that the tool will replace humans working in risk and compliance roles, or at least not in complex risk and compliance roles. He says it's not (currently) sufficiently numerically-oriented to replace model validation quants, for example. "We have compliance, risk and model validation departments not because we think models are correct but because we are trying to find flaws in the model." These flaw-spotting jobs can't be automated because they depend upon human intelligence and an ability to predict unusual situations, says Glukhov. Mavericks will thrive.
Have a confidential story, tip, or comment you’d like to share? Contact: email@example.com in the first instance. Whatsapp/Signal/Telegram also available (Telegram: @SarahButcher)
Bear with us if you leave a comment at the bottom of this article: all our comments are moderated by human beings. Sometimes these humans might be asleep, or away from their desks, so it may take a while for your comment to appear. Eventually it will – unless it's offensive or libelous (in which case it won't).