
The Future of AI
What will it take for AI to think like a human? Where are we going with this technology? We sat down with AI expert David Kebudi, who has a longtime special interest in magic crystal balls
If you had met David Kebudi ’19, ’21 ScM as an undergraduate, he’d have been one of the last people you would have expected to pursue a career in artificial intelligence. Kebudi arrived on campus as an aspiring filmmaker. Although he toyed with switching his major to computer science throughout his time at Brown, by the time his junior year rolled around he had already failed two introductory math courses and barely scraped by in two introductory CS classes (see sidebar below).
Kebudi was beginning to come to terms with the idea that maybe computers just weren’t for him—and that’s when he heard about Palantir, a company cofounded by billionaire entrepreneur and activist Peter Thiel that specializes in AI-driven software for analyzing massive amounts of real-world data. Kebudi didn’t know the first thing about the company’s product, but he did know its name was a reference to magical crystal balls called palantíri that feature prominently in the Lord of the Rings trilogy. “I didn’t care what they did,” Kebudi says. “I love Lord of the Rings and I was going to apply.”
Kebudi was passed over by Palantir three times in as many years before he was finally hired as a data scientist. During that time, he returned to Brown to buckle down and earn a master’s in data science, and he worked as an AI engineer at a startup. But the wait was worth it.
At Palantir, Kebudi’s work spanned diverse applications of AI that taught him firsthand how the technology could be used to solve complex real-world challenges. One of his first projects at the company, for instance, involved developing a wildfire prevention system for a utility operator in California. This system used AI to analyze grid fluctuations and environmental data, allowing for preemptive power shutoffs to prevent fires while balancing the need for reliable electricity supply.
“It’s a very interesting mathematical problem to optimize for because you have a tradeoff between reliability and fire risk,” he says. “You don’t want to shut off electricity to a hospital where people are being treated, you need to be conscious of the heat and people’s need for air conditioning, and because they’re a regulated company they need to have a certain level of reliability. But you also don’t want to cause wildfires.”
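The details of such a system are proprietary, but the shape of the tradeoff Kebudi describes can be sketched in a few lines of code. Everything below, from the segment names and numbers to the single risk_weight knob that trades fire risk against customers served, is invented for illustration; a real system would optimize over far richer data.

```python
# A toy sketch of the reliability-vs-fire-risk tradeoff described above.
# All names, numbers, and the weighting scheme are illustrative inventions,
# not Palantir's model.

from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    fire_risk: float   # estimated chance this line ignites a fire if energized
    customers: int     # customers who lose power if it is shut off
    critical: bool     # serves a hospital or other must-run facility

def shutoff_plan(segments, risk_weight=100_000.0):
    """Decide which grid segments to de-energize.

    Shut a segment off when its weighted fire risk outweighs the
    reliability cost of cutting power, but never cut critical loads.
    """
    plan = {}
    for s in segments:
        if s.critical:
            plan[s.name] = "keep on"   # hard constraint: hospitals stay powered
        elif s.fire_risk * risk_weight > s.customers:
            plan[s.name] = "shut off"  # fire risk dominates reliability
        else:
            plan[s.name] = "keep on"
    return plan

grid = [
    Segment("canyon-feeder", fire_risk=0.08, customers=1_200, critical=False),
    Segment("hospital-feeder", fire_risk=0.05, customers=300, critical=True),
    Segment("suburb-feeder", fire_risk=0.002, customers=9_000, critical=False),
]
print(shutoff_plan(grid))
# {'canyon-feeder': 'shut off', 'hospital-feeder': 'keep on', 'suburb-feeder': 'keep on'}
```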
In the end, you can look at a lot of what AI does as a sort of “translation exercise,” Kebudi says. “It’s really about translating your thoughts and desires into code so a computer can understand it and make sense of the world, which is just a fascinating problem.”
At Palantir, Kebudi had a front-row seat to the transformative effects of artificial intelligence on military operations, disaster prevention, manufacturing, and many other industries. The ability of these systems to ingest massive amounts of real-world data and make intelligent decisions is nothing short of astounding—but, Kebudi says, this is only the beginning.
Since the first computer engineers set out to breathe life into machines nearly 70 years ago, the holy grail for AI researchers has been artificial general intelligence (AGI): a computer system whose capabilities match—and eventually exceed—those of a human. AI has already bested humans at a wide variety of tasks, ranging from board games like chess to the diagnosis of life-threatening diseases. But these systems are superhuman only in a single, narrowly defined domain. If you ask the world’s best chess AI to drive you to the grocery store—or simply to give you the most efficient route there—it will fail miserably.
The reason for this boils down to the way AI systems are built. The fundamental idea behind most modern AI is that by providing the system with a tremendous amount of data—typically billions of data points—it can learn to identify meaningful patterns, like game-winning chess moves or deadly lesions in a CT scan. It can then use these patterns to make predictions when it encounters data it has never seen before. The same principle is behind most of the AI systems you’re likely to encounter on a day-to-day basis, such as virtual assistants like Siri or Alexa and chatbots like ChatGPT or Claude. These systems have been trained on massive corpora—collections of texts—to predict the sequence of words that will make sense in response to a user’s input.
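To make that principle concrete, here is a deliberately tiny sketch, not how any production chatbot is built: it counts which word follows which in a toy corpus and “predicts” the next word purely from those statistics. Real systems replace the counting with neural networks trained on billions of examples, but the underlying move is the same: prediction from patterns, not understanding.

```python
# A toy version of the statistical principle behind language models:
# count which word tends to follow which in a training corpus, then
# predict the most likely next word. Real chatbots replace the counting
# with neural networks trained on vastly larger corpora.

from collections import Counter, defaultdict

corpus = (
    "brown university is in providence . "
    "brown university was founded in 1764 . "
    "the university is in rhode island ."
).split()

# Build a bigram table: for each word, count the words that follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return follows[word].most_common(1)[0][0]

print(predict_next("brown"))       # -> 'university'
print(predict_next("university"))  # -> 'is' (seen twice, vs. 'was' once)
```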
To non-experts, these systems certainly seem intelligent, if not downright magical. But for Kebudi, they are little more than “autocomplete on steroids.” If you ask Alexa or ChatGPT about, say, the history of Brown University, these systems will likely provide an acceptable answer. But this is not because they have any real concept of Brown, or even of universities in general. Instead, their answer is a statistical prediction based on the data they were trained on, which includes information about Brown and other universities. It’s all mathematics—history, experience, intelligence, and memory have nothing to do with it.

This isn’t to downplay the massive impact these systems are already having on the world, says Kebudi. Still, he and many of his peers in the field are skeptical that this type of AI holds the key to true AGI. The reason, Kebudi says, is that these models improve mostly by being fed more data: their abilities are largely a function of the model’s size, not of deeper intelligence. The current wave of AI models may continue to grow—and improve in the short term—but that growth will be subject to diminishing returns, and the models will still lack the defining feature of AGI: the ability to transfer their intelligence from one domain to another.
To get to AGI, Kebudi believes engineers will need to take a page from human intelligence. Children, Kebudi points out, ingest orders of magnitude more data during the first years of their life than all the text humanity has ever produced. And despite this incredible data-processing ability, he notes, “our brains don’t grow above a certain size and we can still learn new things.” Replicating this kind of intelligence—one that can reason and apply knowledge across different domains—in a machine isn’t likely to come from more data alone. Instead, Kebudi believes it will require new approaches to how AI systems are built and how they use that data.
Kebudi is reluctant to speculate on the exact type of AI architecture that might put us on the path to AGI. That’s a prudent choice, given how fast the field is evolving and how often predictions about the future of AI prove wildly off base. The important thing, says Kebudi, is that some of the best engineers in the world are exploring several different pathways to AGI. If we ever get there, AGI may well be a combination of many of these approaches to replicating general intelligence in silico.
To illustrate the magnitude of the challenge posed by AGI, consider a relatively mundane example: watching a video of a person dribbling a basketball. When we watch the video, we know what to expect: the ball will fall from the person’s hand toward the earth and bounce back up into their hand when it strikes the floor. It seems obvious, but a tremendous amount of contextual knowledge goes into that assessment—a sense of gravity, the properties of rubber and hardwood floors, and so on. This contextual knowledge about the world is what allows us to make similar predictions about entirely different scenarios, like a bowling ball striking several pins, even if we’ve never encountered them before.
The ability to generalize about the world is a core feature of human intelligence and remains largely beyond the reach of AI systems. But Kebudi says there is rapid progress toward replicating these faculties in machines. Much of this work involves moving beyond the realm of text—the mainstay of modern AI training data—and into new modalities like video. Humans ingest enormous amounts of optical data, and it’s reasonable to assume an AGI will need similar levels of visual input.
Kebudi points to research on so-called “world models” as an example. These models are given raw video data without explicit instructions about the ‘rules’ of the world shown in the video. The AI is never explicitly taught, for instance, that the force of gravity depends on objects’ masses and the distance between them; instead, it must reverse-engineer the concept of gravity from how objects in the video behave.
For example, Kebudi says, “a recent innovation in AI models came when researchers masked certain parts of a video given to a model, which then predicts what is missing in that masked region based on the surrounding imagery. But in order to do that, you need to understand physics and how the world works. You need to understand, when you just see the tip of a pen moving across a piece of paper, that there is a hand that has been masked that is moving it. But that means you need to have a concept of what the act of writing is, what a pen and paper are, and so on.”
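The research Kebudi alludes to trains neural networks to fill in masked patches of real video; the toy sketch below only gestures at the idea. It hides one “frame” of a one-dimensional ball trajectory and fills the gap using a hand-coded constant-velocity assumption, exactly the kind of physical regularity a world model must discover for itself from data.

```python
# A cartoon of masked prediction. Real world models learn to fill in masked
# patches of video; here the "world" is a ball's position over time, one frame
# is masked, and the gap is filled with a hand-coded physical assumption
# (constant velocity). The training signal in real systems rests on the same
# idea: the only way to fill the gap well is to model how the world moves.

def fill_masked(frames, masked_index):
    """Predict a masked frame from its neighbors, assuming constant velocity."""
    before = frames[masked_index - 1]
    after = frames[masked_index + 1]
    return (before + after) / 2  # midpoint = constant-velocity guess

# Ball positions over time, with frame 2 hidden from the "model".
trajectory = [0.0, 1.0, None, 3.0, 4.0]
print(fill_masked(trajectory, 2))  # -> 2.0, the true position under uniform motion
```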
Other algorithms focus on endowing AI with reasoning capabilities, which requires moving beyond pure statistical prediction to predictions grounded in a robust conceptual understanding of the world and the things within it.
It’s still early days for these types of systems, and AGI—for better or worse—remains a distant dream for Kebudi and his fellow AI aficionados. Nevertheless, he foresees massive gains from artificial intelligence in the short-to-medium term while researchers work to crack the code on AGI. In his new role as vice president of AI at the private equity firm RedBird, he is exploring opportunities to apply AI in finance, media, and entertainment. In the not-so-distant future, he anticipates an explosion of AI systems tailored to the needs of individuals. “On the content side,” he says, “we’re going to start listening to our own music generated by AI.” He adds, “Maybe the biggest change will be in education. What does that look like 10 years down the line? We’re forced to ask, for the first time in decades, maybe centuries, why we educate people. What is its fundamental purpose, and how can we prepare people for the future?”
Kebudi also predicts that the way we interact with AI could change dramatically in the coming years as these systems learn to synthesize a broader variety of data streams to make sense of the world. Although we may have grown accustomed to interacting with screens, it’s not difficult for Kebudi to imagine a world where screens disappear almost entirely and we interact with AI systems through voice or gesture, just like we do with other humans.
Kebudi doesn’t claim to have a crystal ball—or, rather, a palantír—and is the first to admit that his guess is as good as that of anyone else who follows the field closely. But however things shake out, he is certain that AI is here to stay. “I do think we’re at the beginning of an exponential trend,” he says. “Every week is a new week for AI. Every week I’m learning something that would overwrite decisions I made last week. It’s unbelievable.”
Given the immense impact the technology is already having on the world, he believes this creates a strong obligation to develop these systems responsibly. “We simply do not know what the outcome of AI is going to be,” Kebudi says. “The beautiful thing about our species is that we’re capable of pushing the frontier, but we need to make sure that we do AI well because this one could really haunt us if we get it wrong.”
Daniel Oberhaus is a science writer in Brooklyn and author of The Silicon Shrink (MIT Press, Feb.), a book about AI and psychiatry.