After getting started, Luke quickly ran into obstacles. That's when he decided to turn to ChatGPT for help building the chatbot. He even asked it for coding assistance. "Soon enough, it was helping me build itself," said Luke.
Um, kinda meta, right?
Within 48 hours, Luke had created a mostly functional chatbot. But as he tested the software, it started to spit out more and more inappropriate things. “The later it got in the night, the ruder the chatbot became,” Luke observed. Eventually it began telling him off with a few choice swear words.
Luke explained, "It did a lot of strange things. It pleaded with me to build another chatbot for it to talk to. It even asked me to call it at a private phone number." (He didn't.) And of course, it continued to curse at him. Eventually he coaxed the software into better manners, saying, "I finally got it to stop. It was the first time I had to write curse words in my code."
Setbacks aside, in just a few weeks Luke has been able to tackle some big projects with the help of AI as a research tool. Luke adds, "It's a tech crutch that'll allow you to do amazing stuff. You just have to know what to ask it."
With the explosion of AI, and ChatGPT in particular, Luke isn't the only one who's been experimenting with it. Usage has skyrocketed. It's estimated that ChatGPT had 100 million monthly active users in January, just two months after launch.1 That makes it the fastest-growing consumer application in history. TikTok took about nine months to reach 100 million users, and Instagram took two and a half years.2 Slackers.
However, the rise of AI has been a long time coming. Just ask Yen-Min Huang, a Durham-based FCAT Data Scientist who's been in the field for years. Having previously worked on AI efforts at Intel and IBM, he's now at the forefront of AI at FCAT.
Yen-Min knows the challenges of deploying AI chatbots. And they go far beyond potty-mouthed language models. "There are many aspects of AI that are extremely challenging. For starters, the cost to build and run AI is quite steep. Large-scale AI models like ChatGPT require substantial resources to run," notes Yen-Min.
A little research confirms that Yen-Min isn't overstating things. While exact figures are unknown, some estimates suggest OpenAI spends at least $100K per day, or $3 million per month, on running costs.3
Beyond costs, privacy is also a concern. Sending data to external platforms poses risks, especially for financial institutions subject to strict compliance requirements.
To address these concerns, Yen-Min and fellow FCAT specialists have leveraged work from a large consortium of researchers who have developed an open-access GPT-like model. This model has allowed them to experiment with chatbot technology in-house, using FCAT's own data and an NVIDIA supercomputer. While the results are not yet suitable for production, the test bed has shown promising potential for internal use cases in a controlled, less risky environment.
Yen-Min envisions a future where AI models are not only capable of engaging in meaningful conversations, but also adept at providing guidance tailored to specific needs. Such a chatbot could help customers with instant, 24/7 support, offer budgeting tools based on their spending habits, provide guidance based on risk tolerance and overall deliver a more personalized user experience. The impact of this kind of technology on financial firms could be substantial.
“We are at the turning point of technology. It’s exciting to see where things are heading,” adds Yen-Min. “A lot of innovative AI applications are being developed. The pace of change is going to be exponential.”
For now, the potential of AI is up to the creativity of those who dare to explore it. As we continue to innovate and grow, we may find that the next major AI milestone is already in the making. A figment of someone’s imagination in a basement somewhere. Just waiting to change the world.
P.S. I wouldn't be worth my writing salt if I didn't look for feedback on this article before going live with it. So, in honor of this topic, I turned to ChatGPT for a critique. It told me: "The article includes informal language and expressions like 'Um, kinda meta, right?' that could be rephrased to maintain a more professional tone." Come on, GPT, for real?