AI Safety and Robustness in Finance

Will AI Make or Break the Next Generation of Financial Systems?

ICAIF'23 – 4th International Conference on AI in Finance

Workshop date: November 27, 2023, 1-5 pm ET

Venue: 1 MetroTech Center, Brooklyn, NY 11201

On April 23, 2013, the Associated Press Twitter account, which had around 2 million followers, was hacked and tweeted at 1:07 pm: "Breaking: Two Explosions in the White House and Barack Obama is injured". Within seconds of the tweet's release, virtually all U.S. markets plunged on the false news and went into a spiral that required intervention. The S&P 500 dropped 14 points to as low as 1,563.03 in about five seconds, and the Dow Jones Industrial Average temporarily fell 143.5 points, around 0.98 percent. Reuters data showed the tweet wiped out $136.5 billion of the S&P 500 index's value within minutes. The incident was an early warning of the dangers of automation without safety measures in financial services, and of how tightly coupled feedback loops can amplify a false signal in the blink of an eye. A similar event occurred in May 2023, when an AI-generated image of a fake explosion at the Pentagon briefly sent U.S. stock markets into a downward spiral before they recovered.

Artificial intelligence solutions have a rapidly growing list of use cases in financial services. These applications range from customer service solutions and personal financial assistants to sentiment-based trading systems and AI-based wealth management advisory. Compared to 2013, AI systems are built with more security measures and guardrails. However, AI safety, robustness, and ethics techniques are still in their infancy compared to the AI systems themselves, and they are progressing much more slowly. With the announcement of GPT-4, AI safety gained renewed interest in 2023. GPT-4's advanced capabilities gave rise to a new list of applications, along with concerns about how much damage AI can do if and when things go wrong. The recent progress in LLMs and generative AI systems has also fueled a significant increase in the use of AI for criminal purposes. AI-enabled financial crime has reached unprecedented levels, from voice-synthesized financial scams to custom spear-phishing attacks. ChatGPT and other LLMs make it possible to generate massive numbers of social media posts and blogs that manipulate markets and game autonomous AI models into crashing or making incorrect decisions. AI-generated malware and cyber threats pose significant risks to financial firms.

Furthermore, generative AI systems frequently exhibit “emergent behavior”: completely unforeseen behaviors and capabilities. The complexity of current AI solutions makes it nearly impossible for development teams to fully predict the resulting system's characteristics and to design guardrails accordingly. Such capabilities pose dangers to the financial system and to broader society if no safety measures are taken. Even though financial services are among the most advanced industries in AI ethics practices, broader research on AI ethics beyond fairness and explainability requires novel paradigms.

In 2023, more than 1,000 researchers and practitioners signed an open letter calling for a temporary pause on AI research. The petition called for a temporary halt to work on “all AI systems more powerful than GPT-4” until shared safety protocols for AI are developed and implemented. While the community debates the ramifications of pausing or not pausing AI research, it is clear that AI safety, ethics, and robustness research is needed more than ever, as it affects both current and future AI systems.

This workshop aims to tackle the emerging challenges that rapidly developing AI solutions pose to the financial sector and to society. Robustness, safety, and ethical behavior of AI systems have become primary concerns, and they will likely play a significant role in the success of AI as well as in the resulting progress (or failure) of an AI-guided society. The workshop will focus on:

- Exploring current and next-generation LLM and AI applications in finance.
- Analyzing potential threats posed by criminal organizations through advanced AI use.
- Discussing industry- and application-wide safety protocols, end-to-end AI robustness techniques, model monitoring and regulation tools, advanced AI ethics solutions, and other critical research areas.

The workshop will bring together industry researchers, practitioners, academics, and regulators to discuss emerging trends, challenges, novel solution approaches, and the latest safety, ethics, and robustness tools and technologies in order to advance state-of-the-art safety, robustness, and ethical practices in the AI-in-finance community.