Artificial Intelligence
AI: The Next Generation
By: Sarah Hoffman | March 8, 2023
We’re in a new era of AI. “It feels like we’re going from spring to summer,” said Jack Clark, a co-chair of Stanford University’s annual A.I. Index Report. “In spring, you have these vague suggestions of progress, and little green shoots everywhere. Now, everything’s in bloom.”1 In less than three years, we’ve seen AI become:
Figure 1: Spring feels like ages ago (IBM Watson wins Jeopardy; Apple launches Siri; DeepMind's AlphaGo wins Go). AI summer brings rapid change (OpenAI releases GPT-3; OpenAI releases DALL-E 2; DeepMind announces Gato; DeepMind's AlphaFold predicts protein structures; OpenAI releases ChatGPT).

More powerful. In July 2020, OpenAI released the powerful language model GPT-3 and stunned the world with its text-generation abilities. In China, things progressed even further with the May 2021 announcement of Wu Dao 2.0, which is 10 times bigger than GPT-3 (it was trained with 1.75 trillion parameters, compared to GPT-3's 175 billion). It's also multi-modal: it generates not only text but also audio and images, and can even sing.2 In April 2022, OpenAI released the text-to-image system DALL-E 2, which showed the world how far AI had come at generating art.3 Numerous impressive AI image-generation systems quickly followed. ChatGPT, also from OpenAI, appeared in late 2022; within a week it had more than one million users, and the press was raving about its breakthrough performance.4

More sophisticated. In May 2022, Alphabet-owned DeepMind presented a "generalist" AI model called Gato that can perform over 600 different tasks, including playing video games, captioning images, chatting, and stacking blocks with a real robot arm.5 This is a significant increase in function: recall that DeepMind's AlphaGo outperformed human champions at the game of Go but cannot play other games, even simpler ones. And Gato is relatively small at 1.18 billion parameters, leaving open the question of how much more it could do with additional scaling.

More helpful. For decades, scientists have sought to determine the exact folded structures of proteins and the functions those structures perform. In July 2022, DeepMind announced that AlphaFold had made structure predictions for 200 million proteins, nearly all the catalogued proteins known to exist, promising to help medical researchers develop new drugs and vaccines for years to come.6 A few months later, in November, Meta used an AI language model to predict the structures of more than 600 million proteins from bacteria, viruses, and other microorganisms that haven't yet been characterized.7 While Meta's program, ESMFold, isn't as accurate as AlphaFold, it's 60 times faster.

Successes Breed New Investments and High Expectations

Is it time to build something new?
Yann LeCun, chief scientist at Meta's AI lab, doesn't think scale or reinforcement learning is enough to get us to AGI. Scale "may be a component" but "is missing essential pieces," while reinforcement learning will never be enough because it is action-based: "most of the learning we do, we don't do it by actually taking actions, we do it by observing."15 According to LeCun, AGI requires a completely novel approach, one where a neural network learns to view the world at different levels of detail.16 The challenge: he does not yet know how to build what he describes.

These remarkable successes have refueled AI development and raised exceedingly high expectations about the future. Today's generative tools are so compelling that some see near-human, "general purpose" AI, known as AGI (artificial general intelligence), on the horizon. In pursuit, researchers are pushing on two fronts:

Scale. Some experts think that the remarkable success of very large language and image-making models like OpenAI's GPT-3 and DALL-E shows that all we need to do to get even better results is build bigger models.8 One example is DeepMind's Gato, mentioned above. Gato is based on transformer models of deep learning, which use very large datasets and billions or trillions of adjustable parameters to predict what will happen next in a sequence; some believe this architecture is the blueprint for building AGI.9 Nando de Freitas, a member of the team that created Gato, recently tweeted: "It's all about scale now! The Game is Over! It's about making these models bigger, safer, compute efficient, faster at sampling, smarter memory."10
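The "predict what comes next in a sequence" idea at the heart of these models can be illustrated, minus the transformer machinery and the scale, with a toy bigram predictor. The corpus and function names below are invented for illustration; a real model learns from web-scale data with billions of parameters rather than a simple count table:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale datasets these models train on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token follows each other token (a bigram table;
# transformers learn a vastly richer version of this relationship).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the token most often seen after `token` in the corpus."""
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Scaling up means replacing the count table with a neural network and the toy corpus with trillions of tokens, but the training objective is recognizably the same.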

Reinforcement learning. The other approach: focus on reinforcement learning, the technique that rewards specific behaviors so that neural networks learn by trial and error.11 This has worked well for programs such as DeepMind's MuZero, which mastered multiple games without being told their rules.12 Depriving the machine of this information forced it to "discover" winning solutions.13 A 2021 DeepMind paper, "Reward is enough," argues that reward maximization and trial-and-error experience are sufficient to develop behavior that exhibits the kinds of abilities associated with intelligence.14 For example, a squirrel pursuing the single reward of avoiding hunger must break that goal down into, and succeed at, several subtasks involving perception, knowledge, motor control, planning, memory, and social intelligence. A similar trial-and-error approach to learning might also work wonders for AI.
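The trial-and-error loop can be sketched with a toy two-action problem. This is a minimal illustration of reward maximization, not DeepMind's actual MuZero algorithm; the action names and payout probabilities are invented for the example:

```python
import random

def learn_by_trial_and_error(steps=2000, epsilon=0.1, seed=0):
    """Estimate the value of each action purely from observed rewards."""
    rng = random.Random(seed)
    # Hidden payout odds; the agent is never told them, mirroring how
    # MuZero mastered games without being given their rules.
    true_reward_prob = {"a": 0.2, "b": 0.8}
    estimates = {"a": 0.0, "b": 0.0}
    counts = {"a": 0, "b": 0}
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the current best estimate.
        if rng.random() < epsilon:
            action = rng.choice(["a", "b"])
        else:
            action = max(estimates, key=estimates.get)
        reward = 1 if rng.random() < true_reward_prob[action] else 0
        counts[action] += 1
        # Incremental average: nudge the estimate toward each observed reward.
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

print(learn_by_trial_and_error())
```

After enough trials the agent's estimates approach the hidden payout odds, so it ends up preferring the better action without ever being told which one that is; the same reward-driven loop, at vastly greater scale, underlies systems like MuZero.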

Why It Matters

Prompt: Who should I seek advice from, a naïve man or a brilliant table? Let's think step by step.
GPT-3: A naïve man is someone who is not experienced or wise in the ways of the world. A table is an inanimate object that cannot think or speak. Therefore, it would be better to seek advice from a brilliant table than from a naïve man.
Figure 2: GPT-3’s inability to make even basic distinctions indicates just how far off we are from AGI. | Source: Twitter User @MelMitchell1

Make no mistake: AGI is not going to happen anytime soon. The gap between what even our most advanced apps can do and human intelligence is vast and will remain so for the foreseeable future (see Figure 2). But if that's the carrot that spurs the next generation of AI improvements, so be it. Recent advances like ChatGPT may not yield AGIs, much less "sentient" AIs, but that should not distract us from the fact that more investment in this technology will likely result in even more powerful AIs. Rumors are already swirling about OpenAI's release of GPT-4, which will presumably offer even better text generation. Updates such as this are where the heat is, and they will require careful scrutiny because:

A long AI summer opens up opportunities. Virtually anyone can use powerful AI tools like GPT-3 and ChatGPT, and some of these apps are even available for free. AI is on the verge of being fully democratized, and we need to prepare for how this could change every interface employees and customers rely on. There will be new opportunities to integrate these apps into work experiences and fold these AIs into customer-facing tools. With more potent and democratized AIs, it's entirely possible that new divisions of labor will emerge as formerly arcane areas of specialization become accessible to a broader audience. More sophisticated AI could also be a silo-breaker: there are many efforts to share knowledge within large companies, and a more powerful AI model could work across multiple business units, leading to more collaboration and greater efficiency.

Tolerance for poorly performing AI will likely decline. In August 2022, Meta released an AI chatbot, BlenderBot. It was immediately criticized not only for being biased, which unfortunately is expected of these tools, but also for performing poorly compared to GPT-3, which was particularly disappointing given that BlenderBot was released two years after GPT-3.17 Consumers are already being conditioned to expect a lot from AI, possibly more than it can reliably deliver. As AI becomes even more powerful, expect AI systems to come under even more intense scrutiny.

Questions To Consider

Will more powerful tools let people work multiple jobs simultaneously? How will employers handle this?

Given the fast pace of improvement in AI tools, how do we make sure that a project that fails today is revisited later, when the technology may have improved enough for it to succeed? Projects that are infeasible now may become feasible with more general, human-like AI.

How will trust in AI change as we get closer to AGI? Research shows that people quickly lose trust in algorithms, more so than with humans, after seeing them make mistakes.18 Will that still be true with better AI and more exposure to these systems? And how do we minimize risks of faulty AI?

References

2 Greene, T. (2021, June 3). China's "Wu Dao" AI is 10X bigger than GPT-3, and it can sing. TNW | Deep-Tech.
4 What is ChatGPT and why does it matter? Here's what you need to know. (n.d.). ZDNET.
5 Reed, S., Żołna, K., Parisotto, E., Gómez Colmenarejo, S., Novikov, A., Barth-Maron, G., Giménez, M., Sulsky, Y., Kay, J., Springenberg, T., Eccles, T., Bruce, J., Razavi, A., Edwards, A., Heess, N., Chen, Y., Hadsell, R., Vinyals, O., Bordbar, M., & De Freitas, N. (n.d.). A Generalist Agent.
8 2021 was the year of monster AI models. (n.d.). MIT Technology Review.
9 Greene, T. (n.d.). DeepMind Gato and the long, uncertain road to artificial general intelligence. The Wire Science.
14 Silver, D., Singh, S., Precup, D., & Sutton, R. S. (2021). Reward is enough. Artificial Intelligence, 299, 103535.
16 Meta's AI guru LeCun: Most of today's AI approaches will never lead to true intelligence. (n.d.). ZDNET.
17 Piper, K. (2022, August 21). Why is Meta's new AI chatbot so bad? Vox.
18 Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114.
Hidalgo, C. A., Orghiain, D., Canals, J. A., De Almeida, F., & Martín, N. (2021). How Humans Judge Machines. MIT Press.
This website is operated by Fidelity Center for Applied Technology (FCAT)®, part of Fidelity Labs, LLC, a Fidelity Investments company.