Diseases become easier to cure. Bureaucracy is simplified. What will work look like? OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei released essays about their visions of what artificial intelligence will bring humanity.
In this podcast, Motley Fool analyst Asit Sharma joins host Mary Long to discuss:
- If building artificial general intelligence is a winner-take-all game.
- How AI advancements could develop in the next decade.
- Lingering questions and worries about the future of superintelligence.
To catch full episodes of all The Motley Fool’s free podcasts, check out our podcast center. To get started investing, check out our beginner’s guide to investing in stocks. A full transcript follows the video.
This video was recorded on Nov. 09, 2024.
Asit Sharma: So if you’re scared that the AI is going to advance so much and solve so many problems that perhaps there’s nothing left for us to do, he points out that we’re straining against the laws of physics, biology, and experimentation. So the rate of change may be phenomenal, but it may be something that we’ll be able to live with because it’s constrained by so many variables.
Ricky Mulvey: I’m Ricky Mulvey, and that’s Motley Fool senior analyst Asit Sharma. OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei have both published essays outlining their visions for the future of artificial intelligence: a world where diseases become easier to cure, work becomes radically different, and unsolvable math problems become solvable. Mary Long caught up with Asit to chat about these visions in a book club-style conversation: what they’re excited about, and worried about, when AI becomes “a country of geniuses in a data center.”
Mary Long: Amodei is not the first character to leave OpenAI. It’s worth noting and remembering that even earlier this year, OpenAI saw a number of high-profile departures. Many of those people have now gone on, or are rumored to be going on, to start their own AI upstarts. I want to stick on this point for a minute, because in this world that we’re in now, where our conception of AI is primarily through chatbots, it makes sense that we can have multiple different chatbots. Maybe you have a preference for one, I have a preference for one. They can compete pretty openly in the marketplace. But the grander vision that both Altman and Amodei lay out in their essays is of superintelligence, or artificial general intelligence, or powerful AI.
There are different terms that each of them prefers for what this higher AI can do. Amodei describes that superintelligence as a country of geniuses in a data center. If we put ourselves in that future, a time when AI is a country of geniuses in a data center, can there be multiple countries of geniuses in different data centers? Can we have the same open competition that we now see with chatbots? Or is this a winner-takes-all situation, given the massive change that both of these leaders are talking about and envisioning?
Asit Sharma: That’s really difficult to contemplate, because behind the scenes, both of these leaders are trying to raise capital in order to have a jump on whatever you call it, superintelligence or AGI; there are so many different terms we can insert here. On the surface, if you read both of their works, there is an idealized vision of the future which seems very cooperative. If it is a cooperative game, that would necessitate multiple data centers with multiple Nobel Prize-winning geniuses. But even Amodei, who I think is more prone to look at this as a cooperative endeavor, refers to thinkers who feel that democracy itself is intertwined with these concepts, and that therefore democratic countries should participate, pool capital, and pool resources to develop their AI and make it more advanced than AI from non-democratic countries. That’s the stick, and then they extend a carrot, which is to share that technology.
So there is a vision here in which it’s important to maybe get there first and have a superintelligence. I like that metaphor that Amodei puts out. I will say that for me, it’s more about flipping the equation. Right now, you or I use a chatbot or large language model, and it’s basically the assistant. We’re trying to achieve something, so an intelligent human with a very good artificial intelligence can do a lot. In all of these visions, the one commonality I see is that that gets flipped, and we become the assistants. Maybe we control the initial objective, or put it out in front of the artificial intelligence, but from then on, they’re really controlling everything. In Amodei’s vision, the superintelligence has access to whatever it needs, robots, laboratories, means of production, etc., to solve problems.
Mary Long: Both Altman and Amodei anticipate that this superintelligence is going to come quite fast. Altman’s estimate is that we could reach it in a few thousand days; he published this in September. Amodei’s estimate is that we could see this as early as 2026, and his essay goes on to imagine what will happen in the 5-10 years after we reach this superintelligence, what that looks like. Altman’s essay, and I don’t even think this is really a value judgment of the piece, is much more vague than what Amodei lays out. Amodei’s essay is over 14,000 words; it’s lengthy, it’s detailed. Just setting the table with that information: do you find that these are fundamentally different visions, even though one might be more detailed than the other? And if they’re different, do you buy one over the other?
Asit Sharma: They may not be as fundamentally different as it seems on the surface, although we don’t have enough detail in Sam Altman’s vision of the future to understand if he’s talking about the same thing. He has a sentence or two in his vision, which, as you say, is very short on details. There is really no effort there to persuade the reader of anything he’s saying beyond pointing out that, hey, we found out that deep learning with scale changes the world. That’s essentially what he’s saying.
Amodei has a similar thought, in that there is this weird principle, and I think there are some academic papers on this, that just a little bit of agentic action, just a little bit of an algorithm or an ability to interpret a sequence, applied at scale, is actually what intelligence is, and that neuroscientists have been kidding themselves in thinking there’s some mysterious thing going on in the human mind that’s the basis of intelligence. Perhaps our brains also work with just a few simple mechanisms, and once we’re exposed to enormous amounts of data, as we are through our lives from the time we’re born until the time we die, maybe that’s what intelligence is: a lot of compute scaled up, with very simple operations going on. So in that sense, I think they see the world the same way.
In fact, Altman has taken that to an extreme. He went on a tour of Asia last year to try to persuade various chipmakers and governments that about seven trillion dollars in capital should be raised between chips, data centers, networking, energy requirements, etc., to support the goals of AI as it moves toward this superintelligence. Which, in my estimation, is interesting because it’s like this essay: very big, bold, without a lot of detail. I will point out that he was dismissed by some of the smartest minds on the planet, the engineers and leadership at TSMC. The New York Times reported that they dismissed Altman as a “podcasting bro.” I think this is the danger of putting out a bold vision without thinking about consequences, or persuading people that you’re thinking about consequences. For most of us who aren’t as brilliant as either of these two gentlemen, or don’t have access to the capital, it worries one that someone would go and try to raise $7 trillion. With today’s energy demands of compute, with GPUs as they’re structured today, that’s a lot of impact on the planet. Wouldn’t you first, if you had access to such thinkers and investors, try to find ways to reduce the energy footprint of compute? So I think the essays have some great similarities, but they really approach the world in very different ways.
Mary Long: Amodei almost makes a similar point to the one the TSMC engineers were perhaps making, because he lays out very early on in his essay that part of his purpose in writing it is that you can’t make this technology convincing to other people unless you underline and explain what the hope of it is. You need this North Star that everyone who’s bought into the technology, or who’s going to be affected by it, which ultimately he argues is everyone, can understand. That not only helps inspire you to work toward something, or, if you’re a layperson, just to get excited about it.
If you’re one of the minds helping to build this technology, it also helps you figure out what we don’t want and what to stay away from. Amodei is clear at the beginning of his essay that what he is writing is a positive vision for AI. Obviously, there are lots of things that can go wrong, and Anthropic purports to be very concerned about safety, but that’s not the purpose of this essay. This is mostly the most generous vision that he’s outlining. I want to get philosophical for a minute before we dive into the visions. [laughs] Asit is so excited for this. A commonality between both men’s visions of what superintelligence would be is, I think you called it, agentic: AI agents are not only able to process data, but they wind up doing your bidding. Amodei describes this, again, more specifically than Altman does, but they’re very similar concepts as I understand them. He says that ultimately these superintelligent AI agents are capable of initiating tasks and have the, I’m going to use quotes, “brain power” of Nobel Prize winners across numerous industries.
So he names a few of these industries, one of which is writing. It can prove unsolved mathematical theorems and write extremely good novels, is how he describes this. When I hear this, I can’t help but wonder about the difference between skill and art, and how that gets thought of in these renderings, these imaginings, of AI agents. So I’ll turn it to you, Asit, before I wax poetic a bit longer: is there a difference between skill and art? What does that difference look like?
Asit Sharma: Totally, there is. For all I admired in this essay, and I should say that there is a vein of humility that runs through the whole of Amodei’s essay. He begins with a lot of great rhetorical devices, telling you that he understands how silly he could look writing something like this, and he gets that out of the way. He doesn’t want to do that, he doesn’t want to sound like that, and he gives a whole range of types of personalities that he probably sounds like. So for all that there is so much in this essay to like, this is the one point where I really disagreed.
The difference between skill and art is that skill is necessary for art. You need the mechanics of a thing to be able to create something great; you need the artifice of it. You need to be able to embroider if you’re making a beautiful cape; you can’t just imagine it. I think the machines are really great at this. But you also need the ability to experience emotion in a way that can be wrapped up with other things to create something. The reason it’s going to take machines a long time is that they can only emulate. They already have the ability to hallucinate, the way these neural networks are built. They hallucinate just like we do: we dream, they dream. Their hallucination is a bit different from ours, but the human body is such an interesting thing.
It’s composed of both electrical impulses and chemical impulses, etc. We have very fine-grained receptors on our skin. So if you’re a child and your grandparent strokes your palm, you may carry that memory with you decades into the future, and as a writer, it will come back to you when you’re describing a scene. This is something that, yes, maybe over time can be replicated; we know that Meta AI is working on very fine-grained touch perception. So there’s that. We know that these agents can emulate human thought, but putting that whole thing together, where there is an emotional current running through it, the machines can only imagine in their own way what that is like.
If you read a great piece of art, and I know that you and I have discussed some novels over time, the one commonality is that it’s drawn from this amazing breadth of experience throughout a writer’s lifetime that comes together in a very unique way. We don’t understand how that’s done. So machines will come close to that, but will they be able to recreate the effect of reading a great first line of a novel and that carrying you all the way through? It’s going to be a long, long while before that actually happens. So that’s where the argument starts to fall apart. I will note it’s the one thing he doesn’t come back to after discussing how AI will excel in biology, mechanics, so many things. After stating that they’ll be able to write Nobel-level novels, he doesn’t actually support that argument. I don’t think it can be supported.
Mary Long: There is a section at the end of Amodei’s essay in which he talks about work and meaning, and the impact that AI will have on that. It is notably the shortest of the five sections that he outlines, the others being biology, neuroscience, governance and peace, and economic development and prosperity. And he even addresses this. I think that in large part comes back to what you’re talking about: so much of, perhaps not work, but meaning in human life can go back to art and this breadth of experience, and trying to articulate it and connect over it. It’s in many ways a compelling vision of what AI can do for mankind, but even he comes up empty when he’s like, “Where do we get at the end of all this? What are we leading to?” So I think it’s connected, and it’s an interesting point that you make that once he mentions this writing piece and the capability that AI could have to produce “art,” he drops it, and only begins to hint at it again at the end, where he also says, “I don’t actually know what’s going to happen here.”
Asit Sharma: I agree. I love the humility that he brings. He doesn’t ultimately know if this will end up making our lives more meaningful as we perceive them, although he shows so many benefits that AI could bring. Contrast that with Sam Altman’s vision at the very end of his essay, where he says, and I’m quoting here, “Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable.” Of course, there are many of us who could disagree there, to say: imagine if you could have been a lamplighter in Victorian England, the sense of value you would have had, waking up at dawn, going around as a quasi patrol person for your neighborhood at night, illuminating society. How good would you have felt about that job at that time? So many of us today, as it is, struggle to find meaning. Sitting in front of computers or doing what work we have, we struggle to feel like our lives are worth anything. So I think these visions are very different. But you wanted to talk about, Mary, the advancements that Amodei proposes, both the fun side of that and maybe the creepy part, too.
Mary Long: No, and I’m glad. Thank you for getting us back on track. I go down these philosophical rabbit holes and I’m like, “I could talk about this all day long.” But you’re right, Amodei outlines a lot of possibilities, and this is, again, what he envisions will come to happen within the 5-10 years after superintelligence, artificial general intelligence, powerful AI, as he calls it, is achieved. Again, he sees that point as happening as early as 2026. So say this is achieved in 2026; the clock starts. What happens next? That’s what Amodei is outlining in this vision, and he breaks it down, again, into a couple of different categories: you’ve got biology and health, neuroscience and mental health, economic development and prosperity, governance and peace, and then work and meaning.
There is a lot of excitement here. It’s easy to concern-troll, and we can get to some of the things that are perhaps more frightening later on, but this is supposed to be a positive, utopian vision. So let’s talk about what’s exciting. You kicked it to me, so I won’t kick the question back to you before answering. Obviously, he talks about the eradication of infectious and genetic diseases and most cancers, which is incredibly exciting to think about. The way he envisions this happening is that, again, you have this team of AI agents that have the collective brainpower of the world’s smartest biologists, and because they can act and initiate tasks, they can run experiments.
They can even run regulatory tests and speed up the process by which things are approved. So I like that Amodei doesn’t just say, hey, this is what AI is going to do; he gives you a sense of how it might come to accomplish this really massive task. He also mentions the elimination of severe mental illnesses, which frankly is not something that had crossed my mind before when thinking about the possibilities. It feels wild to say. When I thought about AI, I knew the eradication of disease was on the table, but that’s something that’s more often discussed. The eradication of mental illness was not something that had crossed my mind, or that I had read about before. So I think there’s a lot there, especially when you think about really, really severe mental illness, and even less severe mental illness.
To imagine a world without that is obviously positive progress. I got really excited hearing him talk about making progress in food security and climate change mitigation; the possibilities within food and agriculture technology are fascinating to me. This is a silly, nerdy one, but he mentions within this governance and peace segment that you could have an AI that helps citizens take full advantage of the governmental services available to them. When you line that offering up against the eradication of all disease, it feels really trite and small, but I thought, “That would be awesome.” [laughs]
Asit Sharma: I love that, too, Mary. I really love the cognizance that getting help is hard, even when there is something provided by a government for you. Bureaucracy is difficult, and you may be the person who has to fill out an innumerable amount of forms to get help for a kid, or maybe to file for unemployment. There’s so much bureaucracy in our society. What if an AI made it easy for you, and easy for the analysis on the other end, so you could get the services that you needed? I thought that was really fun. My takeaways were very similar, so I’m not going to say a lot here, except that I found oddly reassuring his consistent pointing out that there are so many physical limitations that keep problems from being solved overnight.
So if you’re scared that the AI is going to advance so much and solve so many problems that perhaps there’s nothing left for us to do, he keeps pointing out that we’re straining against the laws of physics, biology, and experimentation. The rate of change may be phenomenal, but it may be something that we’ll be able to live with, because it’s constrained by so many variables, and he gives a lot of great examples, in clinical research, for instance. So that was something that was cool for me. Then finally, just thinking about neuroscience, he alludes to the work that Anthropic is doing to uncover why their models work the way they do. Most of these companies don’t seem to have that much interest in trying to understand the black box, but I give Anthropic a lot of credit for publishing papers on what they’re seeing as they build the models. So him giving a nod to what he calls interpretability, which is understanding how these large language models work vis-à-vis our brains, was cool.
Mary Long: Again, Amodei is clear that this is a positive vision; this is the best-case scenario for what he thinks powerful AI, as he calls it, is capable of. That said, there were still moments reading through these possibilities where I felt my stomach turn a bit and I felt quite nervous. There were many things I read where I’m like, I see how this is positive, and I’m wincing as I say that, because the overwhelming part of me also thought, but this is perhaps too much of a good thing. For all the good that this powerful AI could potentially bring, I don’t know that it’s fair or possible, really, to imagine a world without any problems. You could solve a lot of problems and still, funnily enough, problems tend to arise. So I don’t want to, like, gloss over that. I’d love to take a moment to point out what stuck out to each of us as more worrisome elements of this positive vision. You want to kick us off with this one?
Asit Sharma: Sure. I’ve got two. One is something he mentions called biological freedom. He talks about all the advances over the last 70 years in fertility, weight management, all these great things. Then he says that he suspects that AI-accelerated biology is going to expand what’s possible, almost so that we can select from a cafeteria-style menu of how we want to be, how we want our biology to play out, our physical appearance, our reproduction, which is what people first worried about when we started making progress on the human genome.
What if you could just select what your baby will be like? This sounds like doing that even after birth, so that was a little bit creepy to me. Another one which gave me pause was related to this, and I have to quote here just to make it clear: “Everyday problems that we don’t think of as a clinical disease will also be solved.” Most of us have everyday psychological problems that are not ordinarily thought of as rising to the level of clinical disease. Some people are quick to anger, others have trouble focusing or are often drowsy, some are fearful or anxious or react badly to change. Then he discusses that there are some drugs that help with that, but conceivably, these superintelligent AI systems could just take that edge off of our personalities. There’s a thin line, I think, between being human and having something that needs treatment. This goes back to what you were saying earlier, Mary: do you want to solve every problem? What happens if we select and select to the point where our whole existence is one of moderation? That felt a little uncomfortable, or a lot uncomfortable, to me.
Ricky Mulvey: As always, people on the program may own stocks mentioned, and the Motley Fool may have formal recommendations for or against, so don’t buy or sell anything based solely on what you hear. All personal finance content follows Motley Fool editorial standards and is not approved by advertisers. The Motley Fool only picks products that it would personally recommend to friends like you. I’m Ricky Mulvey. Thanks for listening. We’ll be back tomorrow.
Diseases become easier to cure. Bureaucracy is simplified. What will work look like? OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei released essays about their visions of what artificial intelligence will bring humanity.
In this podcast, Motley Fool analyst Asit Sharma joins host Mary Long to discuss:
- If building artificial general intelligence is a winner-take-all game.
- How AI advancements could develop in the next decade.
- Lingering questions and worries about the future of superintelligence.
To catch full episodes of all The Motley Fool’s free podcasts, check out our podcast center. To get started investing, check out our beginner’s guide to investing in stocks. A full transcript follows the video.
This video was recorded on Nov. 09, 2024.
Asit Sharma: So if you’re scared that the AI is going to advance so much and solve so many problems and perhaps there’s nothing left for us to do, he points out that we’re straining against the laws of physics, biology, and experimentation. So the rate of change may be phenomenal, but it may be something that we’ll be able to live with because it’s constrained by so many variables.
Ricky Mulvey: I’m Ricky Mulvey and that’s Motley Fool’s Senior Analyst, Asit Sharma. OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei have both published essays outlining their visions for the future of artificial intelligence. A world where diseases become easier to cure, work becomes radically different and unsolvable math problems become solvable. Mary Long caught up with Asit to chat about these visions in a book club-style conversation, what they’re excited about and worried about when AI becomes “a country of geniuses in a data center.”
Mary Long: Amodei is not the first character to leave OpenAI. It’s worth noting and remembering that even earlier this year, OpenAI saw a number of high profile departures. Many of those people have now gone on or are rumored to be going on to start their own AI upstarts. I want to stick on this point for a minute because I think in this world that we’re in now, where our conception of AI is primarily through of chatbots, makes sense that we can have multiple different chatbots. Maybe you have a preference for one, I have a preference for one. They can compete pretty openly in the marketplace, but this grander vision that both Altman and Amodei lay out in their essays are of what super intelligence or artificial general intelligence or powerful AI.
There’s different terms that they each prefer. What this higher AI can do. Amodei describes that super intelligence as a country of geniuses in a data center. If we put ourselves in that future of a time when AI is a country of geniuses in a data center, can there be multiple countries of geniuses in different data centers? Or, can we have the same open competition that we now see with chat bots? Or, is this a winner-takes-all type situation with the massive change that both of these leaders are talking about and envisioning?
Asit Sharma: That’s really difficult to contemplate because behind the scenes, both of these leaders are trying to raise capital in order to have a jump on whatever you call the superintelligence or general AGI. There’s so many different terms that we can insert here. So on the surface, if you read both of their works, there is an idealized vision of the future, which seems very cooperative. That would necessitate if it is a cooperative game, multiple data centers with multiple Nobel Prize winning geniuses. But even Amodei who I think is more prone to look at this as a cooperative endeavor, refers to different thinkers who feel that democracy itself is intertwined with these concepts and therefore, democratic companies should participate, pool capital, pool resources to develop their AI and make that more advanced than AI from non-democratic countries, that’s the stick and then they extend a carrot, which is to share that technology.
So there is a vision here in which it’s important to maybe get there first and have a superintelligence. I like that metaphor that Amodei puts out. I will say that for me, it’s more about flipping the equation. Right now, URI uses a chatbot, or large language model and, they’re basically the assistant. We’re trying to achieve something. So an intelligent human with a very good artificial intelligence can do a lot. In all of these visions, the one commonality I see, is that, that gets flipped, where we become the assistants. Maybe we control the initial objective or put it out in front of the artificial intelligence. But from then on, they’re really controlling everything. In Amodei’s vision, the superintelligence has access to whatever it needs; robots, laboratories, means of productions, etc, to solve problems.
Mary Long: Both Altman and Amodei anticipate that this super intelligence is going to come quite fast. Altman’s estimate is that we could reach this in around 1,000 days. He published this in September. Amodei’s estimate is that we could see this as early as 2026. So Amodei’s essay goes on to imagine what will happen in the 5-10 years after we reach this super intelligence, what that looks like. I don’t even think that this is really a value judgment of the piece. It’s much more vague than what Amodei lays out. Amodei’s essay is about over 14,000 words, it’s lengthy, it’s detailed. Just setting the table with that information. Do you find that these are fundamentally different visions, even though one might be more detailed than another, and if they’re different, do you buy one over the other?
Asit Sharma: They may not be so fundamentally different as it seems on the surface, although we don’t have enough detail in Sam Altman’s vision of the future to understand if he’s talking about the same thing. He has a sentence or two in his vision, which as you say, is very short on details. There is really no effort there to persuade the reader of anything he’s saying, because he does point out that, hey, we found out that deep learning with scale changes the world. That’s essentially what he’s saying.
Amodei has a similar thought in that there is this weird principle and I think there are some academic papers on this that, just a little bit of agentic action, just a little bit of an algorithm or an ability to interpret a sequence, that applied at scale is actually what intelligence is, and that neuroscientists have been kidding themselves and thinking there’s some mysterious thing that goes on in the human mind, that’s the basis of intelligence. Perhaps our brains, also with just a few simple mechanisms, once we’re exposed to enormous amounts of data as we are through our lives from the time we’re born until the time we die, maybe that’s what intelligence is. It’s just a lot of scaling that compute with very simple operations going on. So in that sense, I think they see the world the same way.
In fact, Altman has taken that to an extreme. He went on a tour last year to Asia to try to persuade various chip makers and governments that about seven trillion dollars in capital should be raised between chips, data centers, networking, energy requirements, etc, to support the goals of AI as it moves toward this super intelligence. Which in my estimation is interesting because it’s this essay. It’s very big, bold, without a lot of detail. I will point out that he was dismissed by some of the smartest minds on the planet, the engineers at TSMC, the leadership. The New York Times reported that they dismissed Altman like a podcast pro. I think this is the danger of putting out bold vision without thinking about consequences or persuading people that you’re thinking about consequences. I think for most of us who aren’t as brilliant as either these two gentlemen, or don’t have access to the capital, it worries one that someone would go and try to raise $7 trillion with today’s energy demands on compute GPUs as they’re structured today, that’s a lot of impact on the planet, wouldn’t you first, maybe if you had access to such thinkers and investors, try to find ways to reduce the energy imprint of compute? So I think the essays both have some great similarities, but they really approach the world in very different ways.
Mary Long: Amodei almost makes a similar point to perhaps what the TSMC engineers were making, because he lays out very early on in this essay that, part of his purpose and point in writing this, is that, you can’t make this technology convincing to other people unless you underline and explain what the hope of it is. So you need to have this North Star that everyone who’s bought into the technology or who’s going to be affected by it, which ultimately he argues is everyone. You need to understand what that North Star is, and that not only helps inspire you to work toward something, and even if you’re a layperson, just get excited about it.
As one of the minds that’s helping to build this technology, it also helps you figure out what we don’t want and what to stay away from. Amodei is clear at the beginning of his essay that what he is writing is a positive vision for AI. Obviously, there are lots of things that can go wrong anthropic, purports to be very concerned about safety. That’s not the purpose of this essay. This is mostly the most generous vision that he’s outlining here. I want to get philosophical for a minute before we dive into the visions. [laughs] Austin is so excited for this to come. Because a commonality between both men’s envisioning of what super intelligence would be is like, I think you called it agentic. That AI agents are not only able to process data, but they wind up doing your bidding. Amodei describes this again, more specifically than Altman does, but they’re very similar concepts, as I understand them. He says that ultimately these super intelligent AI agents are capable of initiating tasks and have the I’m going to use quotes, “brain power” of Nobel Prize winners in numerous industries.
So he names a few of these industries, one of which is writing. It can prove and solve mathematical theorems, write extremely good novels; is how he describes this. When I hear this, I can’t help but wonder about the difference between skill and art and how that gets thought of in these renderings and even just imaginings of AI agents. So I’ll turn it to you, Asit. Before I wax poetic a bit longer, is there a difference between skill and art? What does that difference look like?
Asit Sharma: Totally there is. I think for all I admired in this essay, and I should say that there is a vein of humility that runs through the whole Amodei’s whole essay. He begins with a lot of great rhetorical devices, telling you that he understands how silly he could look writing something like this, and he gets that out of the way. He doesn’t want to do that. He doesn’t want to sound like it, and he gives a whole range of types of personalities that he probably sounds like. So for all that there is so much in this essay to like, this is the one point where I really disagreed.
The difference between skill and art, is that, skill is necessary for art. You need the mechanics of a thing to be able to create something great. So you need the artifice of something. You need to be able to embroider if you’re making a beautiful cape. You can’t just imagine it. I think the machines are really great at this. But you also need the ability to experience emotion in a way that can be wrapped up with other things to create something. The reason why it’s going to take machines a long time is they can only emulate. They have the ability to hallucinate already the way these neural networks are built. They hallucinate just like we do. We dream, they dream. Their hallucination is a bit different than ours, but the human body is such an interesting thing.
It’s both composed of electrical impulses, chemical impulses, etc. We have very fine grain receptors on our skin. Therefore, if you’re a child and your grandparent strokes your palm, you may carry that memory with you decades into the future and as a writer, will come back to you when you’re describing a scene. So this is something that yes, maybe over time can be replicated. We know that Meta AI is working on very fine-grained touch perception. So there’s that. We know that these agents can emulate human thought, but putting that whole thing together, where there is an emotional current running through, the machines can only imagine in their own way what that is like.
If you read a great piece of art, I know that you and I have discussed some novels over time. The one commonality is that they’re drawn from this amazing breath of experience throughout a writer’s lifetime that comes together in a very unique way. We don’t understand how that’s done. So machines will come close to that, but will they be able to recreate the effect of reading a great first line of a novel and that carrying you all the way through? It’s going to be a long, long while before that actually happens. So where that argument starts to fall apart. I will note the one thing he doesn’t come back to after discussing how AI will excel in biology and mechanics, so many things. He actually, after stating that they’ll be able to write Nobel level novels, doesn’t support that argument. I don’t think it can be supported.
Mary Long: There is a section at the end of Amodei’s essay in which he talks about work and meaning, and the impact that AI will have on that. That is notably the shortest section in the essay of the five that he outlines earlier on, which include biology, neuroscience, governance and peace, economic development, and prosperity. Then this work in meaning, which is the shortest. And he even addresses this. I think that in large part comes back to what you’re talking about of so much of perhaps not work, but meaning in human life can go back to art and this breath of experience and trying to articulate it and connect over it and in many ways, compelling vision of what AI can do for mankind. But even he comes up empty when he’s like, “Where do we We do we get at the end of all this?” What are we leading to? So I think it’s connected, and it’s an interesting point that you make that once he mentions this writing piece and the capability that AI could have to produce “art” he then drops it and only begins to hint at it again at the end, where he also says, “I don’t actually know what’s going to happen here.”
Asit Sharma: I agree. I love the humility that he brings. He doesn’t ultimately know if this will end up as making our lives more meaningful as we perceive them, although he shows so many benefits that AI could bring. We contrast that with Sam Altman’s vision at the very end of his essay where he says and I’m quoting here, “Many of the jobs we do today would have looked like trifling waste of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamp lighter.” If a lamp lighter could see the world today, he would think the prosperity all around him was unimaginable. Of course, there are many of us who could disagree there to say, “Imagine if you could have been a lamplighter in Victoria and England, the sense of value you would have had waking up at dawn, going around as like a quasi patrol person for your neighborhood at night, illuminating society, how good you would have felt about that job at that time? So many of us today, as it is, struggle to find meaning. Sitting in front of computers or doing what work we have, we struggle to feel like our lives are worth anything. So I think these visions are very different. But, you wanted to talk about, Mary, the advancements that Amodei proposes and both the fun side of that and maybe the creepy part, too.
Mary Long: No and I’m glad. Thank you for getting us back on track. I’m going down the rabbit holes of the philosophical and I’m like, “I could talk about this all day long.” But you’re right. Amodei outlines a lot of possibilities and this is, again, what he’s envisioning will come to happen within the 5-10 years after superintelligence, artificial general intelligence, powerful AI, as he calls it, is achieved. Again, he sees this that point as happening as early as 2026. So this is achieved and say, 2026, the clock starts. What happens next? That’s what Amodei is outlining in this vision. Within this vision, he breaks down, again, into a couple of different categories. You’ve got biology and health, you have neuroscience and mental health, you have economic development and prosperity, you have governance and peace, and then you have work and meaning.
There is a lot of excitement here. It’s easy to concern troll. We can get to some of the things that are perhaps more frightening later on, but this is supposed to be a positive utopian vision. So let’s talk about what’s exciting. You kick it to me so I won’t kick the question back to you before answering. I think that, obviously, he talks about the eradication of infectious and genetic diseases and most cancers. Of course, that is incredibly exciting to think about. The way that he envisions this happening is that, again, you have this team of AI agents that have the collective brainpower of the world’s smartest biologists that are able to run experiments, and because they can act and initiate tasks, they can run experiments.
They can run even regulatory tests and speed up the process with which things are improved. So I like that Amodei doesn’t just say, hey, this is what AI is going to do, but he gives you a sense of how it might come to accomplish this really massive task. He also mentions the elimination of severe mental illnesses, which frankly is not something that had crossed my mind before when thinking about the possibilities. I think it feels wild to say and suggest that, When I thought about AI, I thought the eradication of disease was on the table, but I feel that is something that’s more often discussed. The eradication of mental illness was not something that had crossed my mind or that I had read about before. So I think that there’s especially when you think about really, really severe mental illness and even less severe mental illness.
To imagine a world without that, is obviously a positive in progress. I got really excited hearing him talk about, making progress in food security and climate change mitigation, the possibilities within food and agriculture technology is fascinating to me. This is a silly nerdy one, but he mentions within this government and peace perspective segment that you could have an AI that helps citizens take full advantage of the governmental services that are available to them. When you line that offering up against the eradication of all disease, it feels really trite and small, but I thought, “That would be awesome.” [laughs]
Asit Sharma: I love that, too, Mary. I really love the cognizance that help is hard, even when there is something provided by a government for you. Bureaucracy is difficult and you may be the person who has to fill out just innumerable amount of forms to get help for a kid or maybe to file for unemployment. There’s so much of bureaucracy in our society. What if an AI made it easy for you and easy for the analysis on the other end, so you could get the services that you needed? I thought that was really fun. My takeaways were very similar, so I’m not going to say a lot here except to say that I found oddly reassuring the consistent pointing out that there are so many physical limitations that keep problems from being solved overnight.
So if you’re scared that the AI is going to advance so much and solve so many problems and perhaps there’s nothing left for us to do. He keep points out that we’re straining against the laws of physics, biology, and experimentation. So the rate of change may be phenomenal, but it may be something that we’ll be able to live with because it’s constrained by so many variables and he gives a lot of great examples in clinical research, for example. So that was something that was cool for me. Then finally, just thinking about neuroscience, he discusses or actually alludes to the work that Anthropic is doing to undercover why their models work the way they do. Most of these companies don’t see to have that much interest in trying to understand the black box. But I give Anthropic a lot of credit for publishing papers on what they’re seeing as they build the models. So him just giving a nod to what he calls interpretability, which is also understanding how these large language models work vis-a-vis, our brains, it was cool.
Mary Long: Again, Amodei is clear that this is a positive vision. This is the best case scenario for what he thinks powerful AI is possible of. Powerful AI, I should say. That said, there were still moments that I was reading through these possibilities where I felt my stomach turt a bit and I felt quite nervous. There were many things that I read that I’m like, see how this is positive. I’m wincing as I say that because the overwhelming part of me, also thought, but this is perhaps too much of a good thing. I don’t know that for all the good that this powerful AI could potentially bring, I don’t know that it’s fair or possible, really, to imagine a world without any problems. You could solve a lot of problems and still, funnily enough, problems tend to arise. So I don’t want to, like, gloss over that. I’d love to take a moment to point out what stuck out to each of us as more worrisome elements of this positive vision. You want to kick us off with this one?
Asit Sharma: Sure. I’ve got two. So one is something he mentions called biological freedom. He talks about all the advances over the last 70 years in fertility, weight management, all these great things. Then says that he suspects that an AI accelerated biology is going to expand what’s possible almost, so that we can select from a cafeteria style menu of how we want to be, how we want our biology to play out, our physical appearance, our reproduction, which is what people first worried about when we started making progress on the human genome.
What if you can just select what your baby will be like? This sounds like that after birth, so that was a little bit creepy to me. Another one which gave me pause was something related to this in that and I have to quote here just to make this clear, “Everyday problems that we don’t think of as a clinical disease will also be solved.” Most of us have everyday psychological problems that are not ordinarily thought of as rising to the level of clinical disease. Some people are quick to anger, others have trouble focusing or are often drowsy. Some are fearful or anxious or react badly to change. Then he discusses that, there are some drugs that help with that, but conceivably, these super-intelligent AI systems could just take that edge off of our personalities. There’s a thin line, I think, between being human and then having something that needs treatment. This goes back to what you were saying earlier, Mary, that do you want to solve every problem? I mean, what happens if we select and select to the point where our whole existence is one of moderation? That felt a little uncomfortable or a lot uncomfortable to me.
Ricky Mulvey: As always, people on the program may own stocks mentioned, and the Motley Fool may have formal recommendations for or against, so don’t buy or sell anything based solely on what you hear. All personal finance content follows Motley Fool editorial standards, and are not approved by advertisers. The Motley Fool only picks products that it would personally recommend to friends like you. I’m Ricky Mulvey. Thanks for listening. We’ll be back tomorrow.
Diseases become easier to cure. Bureaucracy is simplified. What will work look like? OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei released essays about their visions of what artificial intelligence will bring humanity.
In this podcast, Motley Fool analyst Asit Sharma joins host Mary Long to discuss:
- If building artificial general intelligence is a winner-take-all game.
- How AI advancements could develop in the next decade.
- Lingering questions and worries about the future of superintelligence.
To catch full episodes of all The Motley Fool’s free podcasts, check out our podcast center. To get started investing, check out our beginner’s guide to investing in stocks. A full transcript follows the video.
This video was recorded on Nov. 09, 2024.
Asit Sharma: So if you’re scared that the AI is going to advance so much and solve so many problems and perhaps there’s nothing left for us to do, he points out that we’re straining against the laws of physics, biology, and experimentation. So the rate of change may be phenomenal, but it may be something that we’ll be able to live with because it’s constrained by so many variables.
Ricky Mulvey: I’m Ricky Mulvey and that’s Motley Fool’s Senior Analyst, Asit Sharma. OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei have both published essays outlining their visions for the future of artificial intelligence. A world where diseases become easier to cure, work becomes radically different and unsolvable math problems become solvable. Mary Long caught up with Asit to chat about these visions in a book club-style conversation, what they’re excited about and worried about when AI becomes “a country of geniuses in a data center.”
Mary Long: Amodei is not the first character to leave OpenAI. It’s worth noting and remembering that even earlier this year, OpenAI saw a number of high profile departures. Many of those people have now gone on or are rumored to be going on to start their own AI upstarts. I want to stick on this point for a minute because I think in this world that we’re in now, where our conception of AI is primarily through of chatbots, makes sense that we can have multiple different chatbots. Maybe you have a preference for one, I have a preference for one. They can compete pretty openly in the marketplace, but this grander vision that both Altman and Amodei lay out in their essays are of what super intelligence or artificial general intelligence or powerful AI.
There’s different terms that they each prefer. What this higher AI can do. Amodei describes that super intelligence as a country of geniuses in a data center. If we put ourselves in that future of a time when AI is a country of geniuses in a data center, can there be multiple countries of geniuses in different data centers? Or, can we have the same open competition that we now see with chat bots? Or, is this a winner-takes-all type situation with the massive change that both of these leaders are talking about and envisioning?
Asit Sharma: That’s really difficult to contemplate because behind the scenes, both of these leaders are trying to raise capital in order to have a jump on whatever you call the superintelligence or general AGI. There’s so many different terms that we can insert here. So on the surface, if you read both of their works, there is an idealized vision of the future, which seems very cooperative. That would necessitate if it is a cooperative game, multiple data centers with multiple Nobel Prize winning geniuses. But even Amodei who I think is more prone to look at this as a cooperative endeavor, refers to different thinkers who feel that democracy itself is intertwined with these concepts and therefore, democratic companies should participate, pool capital, pool resources to develop their AI and make that more advanced than AI from non-democratic countries, that’s the stick and then they extend a carrot, which is to share that technology.
So there is a vision here in which it’s important to maybe get there first and have a superintelligence. I like that metaphor that Amodei puts out. I will say that for me, it’s more about flipping the equation. Right now, URI uses a chatbot, or large language model and, they’re basically the assistant. We’re trying to achieve something. So an intelligent human with a very good artificial intelligence can do a lot. In all of these visions, the one commonality I see, is that, that gets flipped, where we become the assistants. Maybe we control the initial objective or put it out in front of the artificial intelligence. But from then on, they’re really controlling everything. In Amodei’s vision, the superintelligence has access to whatever it needs; robots, laboratories, means of productions, etc, to solve problems.
Mary Long: Both Altman and Amodei anticipate that this super intelligence is going to come quite fast. Altman’s estimate is that we could reach this in around 1,000 days. He published this in September. Amodei’s estimate is that we could see this as early as 2026. So Amodei’s essay goes on to imagine what will happen in the 5-10 years after we reach this super intelligence, what that looks like. I don’t even think that this is really a value judgment of the piece. It’s much more vague than what Amodei lays out. Amodei’s essay is about over 14,000 words, it’s lengthy, it’s detailed. Just setting the table with that information. Do you find that these are fundamentally different visions, even though one might be more detailed than another, and if they’re different, do you buy one over the other?
Asit Sharma: They may not be so fundamentally different as it seems on the surface, although we don’t have enough detail in Sam Altman’s vision of the future to understand if he’s talking about the same thing. He has a sentence or two in his vision, which as you say, is very short on details. There is really no effort there to persuade the reader of anything he’s saying, because he does point out that, hey, we found out that deep learning with scale changes the world. That’s essentially what he’s saying.
Amodei has a similar thought in that there is this weird principle and I think there are some academic papers on this that, just a little bit of agentic action, just a little bit of an algorithm or an ability to interpret a sequence, that applied at scale is actually what intelligence is, and that neuroscientists have been kidding themselves and thinking there’s some mysterious thing that goes on in the human mind, that’s the basis of intelligence. Perhaps our brains, also with just a few simple mechanisms, once we’re exposed to enormous amounts of data as we are through our lives from the time we’re born until the time we die, maybe that’s what intelligence is. It’s just a lot of scaling that compute with very simple operations going on. So in that sense, I think they see the world the same way.
In fact, Altman has taken that to an extreme. He went on a tour last year to Asia to try to persuade various chip makers and governments that about seven trillion dollars in capital should be raised between chips, data centers, networking, energy requirements, etc, to support the goals of AI as it moves toward this super intelligence. Which in my estimation is interesting because it’s this essay. It’s very big, bold, without a lot of detail. I will point out that he was dismissed by some of the smartest minds on the planet, the engineers at TSMC, the leadership. The New York Times reported that they dismissed Altman like a podcast pro. I think this is the danger of putting out bold vision without thinking about consequences or persuading people that you’re thinking about consequences. I think for most of us who aren’t as brilliant as either these two gentlemen, or don’t have access to the capital, it worries one that someone would go and try to raise $7 trillion with today’s energy demands on compute GPUs as they’re structured today, that’s a lot of impact on the planet, wouldn’t you first, maybe if you had access to such thinkers and investors, try to find ways to reduce the energy imprint of compute? So I think the essays both have some great similarities, but they really approach the world in very different ways.
Mary Long: Amodei almost makes a similar point to perhaps what the TSMC engineers were making, because he lays out very early on in this essay that, part of his purpose and point in writing this, is that, you can’t make this technology convincing to other people unless you underline and explain what the hope of it is. So you need to have this North Star that everyone who’s bought into the technology or who’s going to be affected by it, which ultimately he argues is everyone. You need to understand what that North Star is, and that not only helps inspire you to work toward something, and even if you’re a layperson, just get excited about it.
As one of the minds that’s helping to build this technology, it also helps you figure out what we don’t want and what to stay away from. Amodei is clear at the beginning of his essay that what he is writing is a positive vision for AI. Obviously, there are lots of things that can go wrong anthropic, purports to be very concerned about safety. That’s not the purpose of this essay. This is mostly the most generous vision that he’s outlining here. I want to get philosophical for a minute before we dive into the visions. [laughs] Austin is so excited for this to come. Because a commonality between both men’s envisioning of what super intelligence would be is like, I think you called it agentic. That AI agents are not only able to process data, but they wind up doing your bidding. Amodei describes this again, more specifically than Altman does, but they’re very similar concepts, as I understand them. He says that ultimately these super intelligent AI agents are capable of initiating tasks and have the I’m going to use quotes, “brain power” of Nobel Prize winners in numerous industries.
So he names a few of these fields, one of which is writing. It can prove unsolved mathematical theorems and write extremely good novels, is how he describes it. When I hear this, I can’t help but wonder about the difference between skill and art, and how that gets thought of in these renderings, even just these imaginings, of AI agents. So I’ll turn it to you, Asit. Before I wax poetic a bit longer: is there a difference between skill and art? What does that difference look like?
Asit Sharma: There totally is. For all I admired in this essay, I should say there is a vein of humility that runs through the whole of Amodei’s essay. He begins with a lot of great rhetorical devices, telling you that he understands how silly he could look writing something like this, and he gets that out of the way. He doesn’t want to do that, he doesn’t want to sound like that, and he gives a whole range of personality types he worries he probably sounds like. So for all that there is in this essay to like, this is the one point where I really disagreed.
The difference between skill and art is that skill is necessary for art. You need the mechanics of a thing to be able to create something great. So you need the artifice of something: you need to be able to embroider if you’re making a beautiful cape. You can’t just imagine it. I think the machines are really great at this. But you also need the ability to experience emotion in a way that can be wrapped up with other things to create something. The reason it’s going to take machines a long time is that they can only emulate. They already have the ability to hallucinate, the way these neural networks are built. They hallucinate just like we do. We dream, they dream. Their hallucination is a bit different from ours, but the human body is such an interesting thing.
It’s composed of both electrical impulses and chemical impulses, and we have very fine-grained receptors on our skin. So if you’re a child and your grandparent strokes your palm, you may carry that memory with you decades into the future, and as a writer, it will come back to you when you’re describing a scene. This is something that, yes, maybe over time can be replicated. We know that Meta AI is working on very fine-grained touch perception, so there’s that. We know that these agents can emulate human thought. But putting that whole thing together, where there is an emotional current running through it, the machines can only imagine, in their own way, what that is like.
If you read a great piece of art, and I know that you and I have discussed some novels over time, the one commonality is that it’s drawn from this amazing breadth of experience throughout a writer’s lifetime that comes together in a very unique way. We don’t understand how that’s done. So machines will come close to that, but will they be able to recreate the effect of reading a great first line of a novel and that carrying you all the way through? It’s going to be a long, long while before that actually happens. So that’s where the argument starts to fall apart. I will note it’s the one thing he doesn’t come back to: after discussing how AI will excel in biology, mechanics, so many things, and after stating that these systems will be able to write Nobel-level novels, he doesn’t support that argument. I don’t think it can be supported.
Mary Long: There is a section at the end of Amodei’s essay in which he talks about work and meaning, and the impact that AI will have on that. It is notably the shortest of the five sections he outlines, which cover biology, neuroscience, economic development and prosperity, governance and peace, and then this work and meaning piece. And he even addresses this. I think that in large part comes back to what you’re talking about: so much of, perhaps not work, but meaning in human life goes back to art and this breadth of experience and trying to articulate it and connect over it. In many ways it’s a compelling vision of what AI can do for mankind, but even he comes up empty when he asks, “Where do we get at the end of all this? What are we leading to?” So I think it’s connected, and it’s an interesting point you make that once he mentions this writing piece and the capability AI could have to produce “art,” he then drops it and only begins to hint at it again at the end, where he also says, “I don’t actually know what’s going to happen here.”
Asit Sharma: I agree. I love the humility that he brings. He doesn’t ultimately know if this will end up making our lives more meaningful as we perceive them, although he shows so many benefits that AI could bring. Contrast that with Sam Altman’s vision at the very end of his essay, where he says, and I’m quoting here, “Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter.” If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. Of course, there are many of us who could disagree there and say, “Imagine if you could have been a lamplighter in Victorian England: the sense of value you would have had waking up at dawn, going around as a quasi-patrol person for your neighborhood at night, illuminating society. How good would you have felt about that job at that time?” So many of us today, as it is, struggle to find meaning. Sitting in front of computers or doing what work we have, we struggle to feel like our lives are worth anything. So I think these visions are very different. But you wanted to talk about, Mary, the advancements that Amodei proposes, both the fun side of that and maybe the creepy part, too.
Mary Long: No, and I’m glad. Thank you for getting us back on track. I go down these philosophical rabbit holes and I’m like, “I could talk about this all day long.” But you’re right, Amodei outlines a lot of possibilities, and this is, again, what he envisions will come to happen within the 5-10 years after superintelligence, artificial general intelligence, powerful AI, as he calls it, is achieved. Again, he sees that point as happening as early as 2026. So say this is achieved in 2026: the clock starts. What happens next? That’s what Amodei is outlining in this vision. Within this vision, he breaks things down, again, into a few different categories. You’ve got biology and health, you have neuroscience and mental health, you have economic development and prosperity, you have governance and peace, and then you have work and meaning.
There is a lot of excitement here. It’s easy to concern-troll, and we can get to some of the things that are perhaps more frightening later on, but this is supposed to be a positive, utopian vision. So let’s talk about what’s exciting. You kicked it to me, so I won’t kick the question back to you before answering. Obviously, he talks about the eradication of infectious and genetic diseases and most cancers. Of course, that is incredibly exciting to think about. The way he envisions this happening is that, again, you have this team of AI agents with the collective brainpower of the world’s smartest biologists, and because they can act and initiate tasks, they can run experiments.
They can even run regulatory tests and speed up the process by which things are approved. So I like that Amodei doesn’t just say, hey, this is what AI is going to do; he gives you a sense of how it might come to accomplish this really massive task. He also mentions the elimination of severe mental illnesses, which frankly is not something that had crossed my mind before when thinking about the possibilities. When I thought about AI, the eradication of disease was on the table, but that’s something that’s more often discussed. The eradication of mental illness was not something that had crossed my mind or that I had read about before. So especially when you think about really, really severe mental illness, and even less severe mental illness, to imagine a world without that is obviously positive progress.
I got really excited hearing him talk about making progress on food security and climate change mitigation; the possibilities within food and agriculture technology are fascinating to me. This is a silly, nerdy one, but he mentions within this governance and peace segment that you could have an AI that helps citizens take full advantage of the governmental services available to them. When you line that offering up against the eradication of all disease, it feels really trite and small, but I thought, “That would be awesome.” [laughs]
Asit Sharma: I love that, too, Mary. I really love the cognizance that getting help is hard, even when there is something provided by a government for you. Bureaucracy is difficult, and you may be the person who has to fill out innumerable forms to get help for a kid or to file for unemployment. There’s so much bureaucracy in our society. What if an AI made it easy for you, and easy for the analysis on the other end, so you could get the services that you needed? I thought that was really fun. My takeaways were very similar, so I’m not going to say a lot here, except that I found oddly reassuring his consistent pointing out that there are so many physical limitations that keep problems from being solved overnight.
So if you’re scared that the AI is going to advance so much and solve so many problems that perhaps there’s nothing left for us to do, he keeps pointing out that we’re straining against the laws of physics, biology, and experimentation. So the rate of change may be phenomenal, but it may be something that we’ll be able to live with because it’s constrained by so many variables, and he gives a lot of great examples, in clinical research, for instance. So that was something that was cool for me. Then finally, just thinking about neuroscience, he alludes to the work that Anthropic is doing to uncover why their models work the way they do. Most of these companies don’t seem to have that much interest in trying to understand the black box, but I give Anthropic a lot of credit for publishing papers on what they’re seeing as they build the models. So him just giving a nod to what he calls interpretability, which is about understanding how these large language models work vis-à-vis our brains, was cool.
Mary Long: Again, Amodei is clear that this is a positive vision. This is the best-case scenario for what he thinks powerful AI, I should say, is capable of. That said, there were still moments, reading through these possibilities, where I felt my stomach turn a bit and I felt quite nervous. There were many things I read where I thought, I see how this is positive, and I’m wincing as I say that, because the overwhelming part of me also thought, this is perhaps too much of a good thing. For all the good that this powerful AI could potentially bring, I don’t know that it’s fair, or possible really, to imagine a world without any problems. You could solve a lot of problems and still, funnily enough, problems tend to arise. So I don’t want to gloss over that. I’d love to take a moment to point out what stuck out to each of us as the more worrisome elements of this positive vision. Do you want to kick us off with this one?
Asit Sharma: Sure, I’ve got two. One is something he mentions called biological freedom. He talks about all the advances over the last 70 years in fertility, weight management, all these great things, and then says that he suspects an AI-accelerated biology is going to expand what’s possible, almost so that we can select from a cafeteria-style menu of how we want to be: how we want our biology to play out, our physical appearance, our reproduction. Which is what people first worried about when we started making progress on the human genome.
What if you can just select what your baby will be like? This sounds like that, but even after birth, so that was a little bit creepy to me. Another one that gave me pause was related to this, and I have to quote here just to make it clear: “Everyday problems that we don’t think of as a clinical disease will also be solved.” Most of us have everyday psychological problems that are not ordinarily thought of as rising to the level of clinical disease. Some people are quick to anger, others have trouble focusing or are often drowsy, some are fearful or anxious or react badly to change. He discusses that there are some drugs that help with those, but conceivably these superintelligent AI systems could just take that edge off of our personalities. There’s a thin line, I think, between being human and having something that needs treatment. This goes back to what you were saying earlier, Mary: do you want to solve every problem? What happens if we select and select to the point where our whole existence is one of moderation? That felt a little uncomfortable, or a lot uncomfortable, to me.
Ricky Mulvey: As always, people on the program may own stocks mentioned, and the Motley Fool may have formal recommendations for or against, so don’t buy or sell anything based solely on what you hear. All personal finance content follows Motley Fool editorial standards and is not approved by advertisers. The Motley Fool only picks products that it would personally recommend to friends like you. I’m Ricky Mulvey. Thanks for listening. We’ll be back tomorrow.
Diseases become easier to cure. Bureaucracy is simplified. What will work look like? OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei released essays about their visions of what artificial intelligence will bring humanity.
In this podcast, Motley Fool analyst Asit Sharma joins host Mary Long to discuss:
- If building artificial general intelligence is a winner-take-all game.
- How AI advancements could develop in the next decade.
- Lingering questions and worries about the future of superintelligence.
To catch full episodes of all The Motley Fool’s free podcasts, check out our podcast center. To get started investing, check out our beginner’s guide to investing in stocks. A full transcript follows the video.
This video was recorded on Nov. 09, 2024.
Asit Sharma: So if you’re scared that the AI is going to advance so much and solve so many problems and perhaps there’s nothing left for us to do, he points out that we’re straining against the laws of physics, biology, and experimentation. So the rate of change may be phenomenal, but it may be something that we’ll be able to live with because it’s constrained by so many variables.
Ricky Mulvey: I’m Ricky Mulvey and that’s Motley Fool’s Senior Analyst, Asit Sharma. OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei have both published essays outlining their visions for the future of artificial intelligence. A world where diseases become easier to cure, work becomes radically different and unsolvable math problems become solvable. Mary Long caught up with Asit to chat about these visions in a book club-style conversation, what they’re excited about and worried about when AI becomes “a country of geniuses in a data center.”
Mary Long: Amodei is not the first character to leave OpenAI. It’s worth noting and remembering that even earlier this year, OpenAI saw a number of high profile departures. Many of those people have now gone on or are rumored to be going on to start their own AI upstarts. I want to stick on this point for a minute because I think in this world that we’re in now, where our conception of AI is primarily through of chatbots, makes sense that we can have multiple different chatbots. Maybe you have a preference for one, I have a preference for one. They can compete pretty openly in the marketplace, but this grander vision that both Altman and Amodei lay out in their essays are of what super intelligence or artificial general intelligence or powerful AI.
There’s different terms that they each prefer. What this higher AI can do. Amodei describes that super intelligence as a country of geniuses in a data center. If we put ourselves in that future of a time when AI is a country of geniuses in a data center, can there be multiple countries of geniuses in different data centers? Or, can we have the same open competition that we now see with chat bots? Or, is this a winner-takes-all type situation with the massive change that both of these leaders are talking about and envisioning?
Asit Sharma: That’s really difficult to contemplate because behind the scenes, both of these leaders are trying to raise capital in order to have a jump on whatever you call the superintelligence or general AGI. There’s so many different terms that we can insert here. So on the surface, if you read both of their works, there is an idealized vision of the future, which seems very cooperative. That would necessitate if it is a cooperative game, multiple data centers with multiple Nobel Prize winning geniuses. But even Amodei who I think is more prone to look at this as a cooperative endeavor, refers to different thinkers who feel that democracy itself is intertwined with these concepts and therefore, democratic companies should participate, pool capital, pool resources to develop their AI and make that more advanced than AI from non-democratic countries, that’s the stick and then they extend a carrot, which is to share that technology.
So there is a vision here in which it’s important to maybe get there first and have a superintelligence. I like that metaphor that Amodei puts out. I will say that for me, it’s more about flipping the equation. Right now, URI uses a chatbot, or large language model and, they’re basically the assistant. We’re trying to achieve something. So an intelligent human with a very good artificial intelligence can do a lot. In all of these visions, the one commonality I see, is that, that gets flipped, where we become the assistants. Maybe we control the initial objective or put it out in front of the artificial intelligence. But from then on, they’re really controlling everything. In Amodei’s vision, the superintelligence has access to whatever it needs; robots, laboratories, means of productions, etc, to solve problems.
Mary Long: Both Altman and Amodei anticipate that this super intelligence is going to come quite fast. Altman’s estimate is that we could reach this in around 1,000 days. He published this in September. Amodei’s estimate is that we could see this as early as 2026. So Amodei’s essay goes on to imagine what will happen in the 5-10 years after we reach this super intelligence, what that looks like. I don’t even think that this is really a value judgment of the piece. It’s much more vague than what Amodei lays out. Amodei’s essay is about over 14,000 words, it’s lengthy, it’s detailed. Just setting the table with that information. Do you find that these are fundamentally different visions, even though one might be more detailed than another, and if they’re different, do you buy one over the other?
Asit Sharma: They may not be so fundamentally different as it seems on the surface, although we don’t have enough detail in Sam Altman’s vision of the future to understand if he’s talking about the same thing. He has a sentence or two in his vision, which as you say, is very short on details. There is really no effort there to persuade the reader of anything he’s saying, because he does point out that, hey, we found out that deep learning with scale changes the world. That’s essentially what he’s saying.
Amodei has a similar thought in that there is this weird principle and I think there are some academic papers on this that, just a little bit of agentic action, just a little bit of an algorithm or an ability to interpret a sequence, that applied at scale is actually what intelligence is, and that neuroscientists have been kidding themselves and thinking there’s some mysterious thing that goes on in the human mind, that’s the basis of intelligence. Perhaps our brains, also with just a few simple mechanisms, once we’re exposed to enormous amounts of data as we are through our lives from the time we’re born until the time we die, maybe that’s what intelligence is. It’s just a lot of scaling that compute with very simple operations going on. So in that sense, I think they see the world the same way.
In fact, Altman has taken that to an extreme. He went on a tour last year to Asia to try to persuade various chip makers and governments that about seven trillion dollars in capital should be raised between chips, data centers, networking, energy requirements, etc, to support the goals of AI as it moves toward this super intelligence. Which in my estimation is interesting because it’s this essay. It’s very big, bold, without a lot of detail. I will point out that he was dismissed by some of the smartest minds on the planet, the engineers at TSMC, the leadership. The New York Times reported that they dismissed Altman like a podcast pro. I think this is the danger of putting out bold vision without thinking about consequences or persuading people that you’re thinking about consequences. I think for most of us who aren’t as brilliant as either these two gentlemen, or don’t have access to the capital, it worries one that someone would go and try to raise $7 trillion with today’s energy demands on compute GPUs as they’re structured today, that’s a lot of impact on the planet, wouldn’t you first, maybe if you had access to such thinkers and investors, try to find ways to reduce the energy imprint of compute? So I think the essays both have some great similarities, but they really approach the world in very different ways.
Mary Long: Amodei almost makes a similar point to perhaps what the TSMC engineers were making, because he lays out very early on in this essay that, part of his purpose and point in writing this, is that, you can’t make this technology convincing to other people unless you underline and explain what the hope of it is. So you need to have this North Star that everyone who’s bought into the technology or who’s going to be affected by it, which ultimately he argues is everyone. You need to understand what that North Star is, and that not only helps inspire you to work toward something, and even if you’re a layperson, just get excited about it.
As one of the minds that’s helping to build this technology, it also helps you figure out what we don’t want and what to stay away from. Amodei is clear at the beginning of his essay that what he is writing is a positive vision for AI. Obviously, there are lots of things that can go wrong anthropic, purports to be very concerned about safety. That’s not the purpose of this essay. This is mostly the most generous vision that he’s outlining here. I want to get philosophical for a minute before we dive into the visions. [laughs] Austin is so excited for this to come. Because a commonality between both men’s envisioning of what super intelligence would be is like, I think you called it agentic. That AI agents are not only able to process data, but they wind up doing your bidding. Amodei describes this again, more specifically than Altman does, but they’re very similar concepts, as I understand them. He says that ultimately these super intelligent AI agents are capable of initiating tasks and have the I’m going to use quotes, “brain power” of Nobel Prize winners in numerous industries.
So he names a few of these industries, one of which is writing. It can prove and solve mathematical theorems, write extremely good novels; is how he describes this. When I hear this, I can’t help but wonder about the difference between skill and art and how that gets thought of in these renderings and even just imaginings of AI agents. So I’ll turn it to you, Asit. Before I wax poetic a bit longer, is there a difference between skill and art? What does that difference look like?
Asit Sharma: Totally there is. I think for all I admired in this essay, and I should say that there is a vein of humility that runs through the whole Amodei’s whole essay. He begins with a lot of great rhetorical devices, telling you that he understands how silly he could look writing something like this, and he gets that out of the way. He doesn’t want to do that. He doesn’t want to sound like it, and he gives a whole range of types of personalities that he probably sounds like. So for all that there is so much in this essay to like, this is the one point where I really disagreed.
The difference between skill and art, is that, skill is necessary for art. You need the mechanics of a thing to be able to create something great. So you need the artifice of something. You need to be able to embroider if you’re making a beautiful cape. You can’t just imagine it. I think the machines are really great at this. But you also need the ability to experience emotion in a way that can be wrapped up with other things to create something. The reason why it’s going to take machines a long time is they can only emulate. They have the ability to hallucinate already the way these neural networks are built. They hallucinate just like we do. We dream, they dream. Their hallucination is a bit different than ours, but the human body is such an interesting thing.
It’s both composed of electrical impulses, chemical impulses, etc. We have very fine grain receptors on our skin. Therefore, if you’re a child and your grandparent strokes your palm, you may carry that memory with you decades into the future and as a writer, will come back to you when you’re describing a scene. So this is something that yes, maybe over time can be replicated. We know that Meta AI is working on very fine-grained touch perception. So there’s that. We know that these agents can emulate human thought, but putting that whole thing together, where there is an emotional current running through, the machines can only imagine in their own way what that is like.
If you read a great piece of art, I know that you and I have discussed some novels over time. The one commonality is that they’re drawn from this amazing breath of experience throughout a writer’s lifetime that comes together in a very unique way. We don’t understand how that’s done. So machines will come close to that, but will they be able to recreate the effect of reading a great first line of a novel and that carrying you all the way through? It’s going to be a long, long while before that actually happens. So where that argument starts to fall apart. I will note the one thing he doesn’t come back to after discussing how AI will excel in biology and mechanics, so many things. He actually, after stating that they’ll be able to write Nobel level novels, doesn’t support that argument. I don’t think it can be supported.
Mary Long: There is a section at the end of Amodei’s essay in which he talks about work and meaning, and the impact that AI will have on that. That is notably the shortest section in the essay of the five that he outlines earlier on, which include biology, neuroscience, governance and peace, economic development, and prosperity. Then this work in meaning, which is the shortest. And he even addresses this. I think that in large part comes back to what you’re talking about of so much of perhaps not work, but meaning in human life can go back to art and this breath of experience and trying to articulate it and connect over it and in many ways, compelling vision of what AI can do for mankind. But even he comes up empty when he’s like, “Where do we We do we get at the end of all this?” What are we leading to? So I think it’s connected, and it’s an interesting point that you make that once he mentions this writing piece and the capability that AI could have to produce “art” he then drops it and only begins to hint at it again at the end, where he also says, “I don’t actually know what’s going to happen here.”
Asit Sharma: I agree. I love the humility that he brings. He doesn’t ultimately know if this will end up as making our lives more meaningful as we perceive them, although he shows so many benefits that AI could bring. We contrast that with Sam Altman’s vision at the very end of his essay where he says and I’m quoting here, “Many of the jobs we do today would have looked like trifling waste of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamp lighter.” If a lamp lighter could see the world today, he would think the prosperity all around him was unimaginable. Of course, there are many of us who could disagree there to say, “Imagine if you could have been a lamplighter in Victoria and England, the sense of value you would have had waking up at dawn, going around as like a quasi patrol person for your neighborhood at night, illuminating society, how good you would have felt about that job at that time? So many of us today, as it is, struggle to find meaning. Sitting in front of computers or doing what work we have, we struggle to feel like our lives are worth anything. So I think these visions are very different. But, you wanted to talk about, Mary, the advancements that Amodei proposes and both the fun side of that and maybe the creepy part, too.
Mary Long: No and I’m glad. Thank you for getting us back on track. I’m going down the rabbit holes of the philosophical and I’m like, “I could talk about this all day long.” But you’re right. Amodei outlines a lot of possibilities and this is, again, what he’s envisioning will come to happen within the 5-10 years after superintelligence, artificial general intelligence, powerful AI, as he calls it, is achieved. Again, he sees this that point as happening as early as 2026. So this is achieved and say, 2026, the clock starts. What happens next? That’s what Amodei is outlining in this vision. Within this vision, he breaks down, again, into a couple of different categories. You’ve got biology and health, you have neuroscience and mental health, you have economic development and prosperity, you have governance and peace, and then you have work and meaning.
There is a lot of excitement here. It’s easy to concern troll. We can get to some of the things that are perhaps more frightening later on, but this is supposed to be a positive utopian vision. So let’s talk about what’s exciting. You kick it to me so I won’t kick the question back to you before answering. I think that, obviously, he talks about the eradication of infectious and genetic diseases and most cancers. Of course, that is incredibly exciting to think about. The way that he envisions this happening is that, again, you have this team of AI agents that have the collective brainpower of the world’s smartest biologists that are able to run experiments, and because they can act and initiate tasks, they can run experiments.
They can run even regulatory tests and speed up the process with which things are improved. So I like that Amodei doesn’t just say, hey, this is what AI is going to do, but he gives you a sense of how it might come to accomplish this really massive task. He also mentions the elimination of severe mental illnesses, which frankly is not something that had crossed my mind before when thinking about the possibilities. I think it feels wild to say and suggest that, When I thought about AI, I thought the eradication of disease was on the table, but I feel that is something that’s more often discussed. The eradication of mental illness was not something that had crossed my mind or that I had read about before. So I think that there’s especially when you think about really, really severe mental illness and even less severe mental illness.
To imagine a world without that, is obviously a positive in progress. I got really excited hearing him talk about, making progress in food security and climate change mitigation, the possibilities within food and agriculture technology is fascinating to me. This is a silly nerdy one, but he mentions within this government and peace perspective segment that you could have an AI that helps citizens take full advantage of the governmental services that are available to them. When you line that offering up against the eradication of all disease, it feels really trite and small, but I thought, “That would be awesome.” [laughs]
Asit Sharma: I love that, too, Mary. I really love the cognizance that help is hard, even when there is something provided by a government for you. Bureaucracy is difficult and you may be the person who has to fill out just innumerable amount of forms to get help for a kid or maybe to file for unemployment. There’s so much of bureaucracy in our society. What if an AI made it easy for you and easy for the analysis on the other end, so you could get the services that you needed? I thought that was really fun. My takeaways were very similar, so I’m not going to say a lot here except to say that I found oddly reassuring the consistent pointing out that there are so many physical limitations that keep problems from being solved overnight.
So if you’re scared that the AI is going to advance so much and solve so many problems and perhaps there’s nothing left for us to do. He keep points out that we’re straining against the laws of physics, biology, and experimentation. So the rate of change may be phenomenal, but it may be something that we’ll be able to live with because it’s constrained by so many variables and he gives a lot of great examples in clinical research, for example. So that was something that was cool for me. Then finally, just thinking about neuroscience, he discusses or actually alludes to the work that Anthropic is doing to undercover why their models work the way they do. Most of these companies don’t see to have that much interest in trying to understand the black box. But I give Anthropic a lot of credit for publishing papers on what they’re seeing as they build the models. So him just giving a nod to what he calls interpretability, which is also understanding how these large language models work vis-a-vis, our brains, it was cool.
Mary Long: Again, Amodei is clear that this is a positive vision. This is the best case scenario for what he thinks powerful AI is possible of. Powerful AI, I should say. That said, there were still moments that I was reading through these possibilities where I felt my stomach turt a bit and I felt quite nervous. There were many things that I read that I’m like, see how this is positive. I’m wincing as I say that because the overwhelming part of me, also thought, but this is perhaps too much of a good thing. I don’t know that for all the good that this powerful AI could potentially bring, I don’t know that it’s fair or possible, really, to imagine a world without any problems. You could solve a lot of problems and still, funnily enough, problems tend to arise. So I don’t want to, like, gloss over that. I’d love to take a moment to point out what stuck out to each of us as more worrisome elements of this positive vision. You want to kick us off with this one?
Asit Sharma: Sure. I’ve got two. So one is something he mentions called biological freedom. He talks about all the advances over the last 70 years in fertility, weight management, all these great things. Then says that he suspects that an AI accelerated biology is going to expand what’s possible almost, so that we can select from a cafeteria style menu of how we want to be, how we want our biology to play out, our physical appearance, our reproduction, which is what people first worried about when we started making progress on the human genome.
What if you can just select what your baby will be like? This sounds like that after birth, so that was a little bit creepy to me. Another one which gave me pause was something related to this in that and I have to quote here just to make this clear, “Everyday problems that we don’t think of as a clinical disease will also be solved.” Most of us have everyday psychological problems that are not ordinarily thought of as rising to the level of clinical disease. Some people are quick to anger, others have trouble focusing or are often drowsy. Some are fearful or anxious or react badly to change. Then he discusses that, there are some drugs that help with that, but conceivably, these super-intelligent AI systems could just take that edge off of our personalities. There’s a thin line, I think, between being human and then having something that needs treatment. This goes back to what you were saying earlier, Mary, that do you want to solve every problem? I mean, what happens if we select and select to the point where our whole existence is one of moderation? That felt a little uncomfortable or a lot uncomfortable to me.
Ricky Mulvey: As always, people on the program may own stocks mentioned, and the Motley Fool may have formal recommendations for or against, so don’t buy or sell anything based solely on what you hear. All personal finance content follows Motley Fool editorial standards, and are not approved by advertisers. The Motley Fool only picks products that it would personally recommend to friends like you. I’m Ricky Mulvey. Thanks for listening. We’ll be back tomorrow.
Diseases become easier to cure. Bureaucracy is simplified. What will work look like? OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei released essays about their visions of what artificial intelligence will bring humanity.
In this podcast, Motley Fool analyst Asit Sharma joins host Mary Long to discuss:
- If building artificial general intelligence is a winner-take-all game.
- How AI advancements could develop in the next decade.
- Lingering questions and worries about the future of superintelligence.
To catch full episodes of all The Motley Fool’s free podcasts, check out our podcast center. To get started investing, check out our beginner’s guide to investing in stocks. A full transcript follows the video.
This video was recorded on Nov. 09, 2024.
Asit Sharma: So if you’re scared that the AI is going to advance so much and solve so many problems and perhaps there’s nothing left for us to do, he points out that we’re straining against the laws of physics, biology, and experimentation. So the rate of change may be phenomenal, but it may be something that we’ll be able to live with because it’s constrained by so many variables.
Ricky Mulvey: I’m Ricky Mulvey and that’s Motley Fool’s Senior Analyst, Asit Sharma. OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei have both published essays outlining their visions for the future of artificial intelligence. A world where diseases become easier to cure, work becomes radically different and unsolvable math problems become solvable. Mary Long caught up with Asit to chat about these visions in a book club-style conversation, what they’re excited about and worried about when AI becomes “a country of geniuses in a data center.”
Mary Long: Amodei is not the first character to leave OpenAI. It’s worth noting and remembering that even earlier this year, OpenAI saw a number of high profile departures. Many of those people have now gone on or are rumored to be going on to start their own AI upstarts. I want to stick on this point for a minute because I think in this world that we’re in now, where our conception of AI is primarily through of chatbots, makes sense that we can have multiple different chatbots. Maybe you have a preference for one, I have a preference for one. They can compete pretty openly in the marketplace, but this grander vision that both Altman and Amodei lay out in their essays are of what super intelligence or artificial general intelligence or powerful AI.
There’s different terms that they each prefer. What this higher AI can do. Amodei describes that super intelligence as a country of geniuses in a data center. If we put ourselves in that future of a time when AI is a country of geniuses in a data center, can there be multiple countries of geniuses in different data centers? Or, can we have the same open competition that we now see with chat bots? Or, is this a winner-takes-all type situation with the massive change that both of these leaders are talking about and envisioning?
Asit Sharma: That’s really difficult to contemplate because behind the scenes, both of these leaders are trying to raise capital in order to have a jump on whatever you call the superintelligence or general AGI. There’s so many different terms that we can insert here. So on the surface, if you read both of their works, there is an idealized vision of the future, which seems very cooperative. That would necessitate if it is a cooperative game, multiple data centers with multiple Nobel Prize winning geniuses. But even Amodei who I think is more prone to look at this as a cooperative endeavor, refers to different thinkers who feel that democracy itself is intertwined with these concepts and therefore, democratic companies should participate, pool capital, pool resources to develop their AI and make that more advanced than AI from non-democratic countries, that’s the stick and then they extend a carrot, which is to share that technology.
So there is a vision here in which it’s important to maybe get there first and have a superintelligence. I like that metaphor that Amodei puts out. I will say that for me, it’s more about flipping the equation. Right now, URI uses a chatbot, or large language model and, they’re basically the assistant. We’re trying to achieve something. So an intelligent human with a very good artificial intelligence can do a lot. In all of these visions, the one commonality I see, is that, that gets flipped, where we become the assistants. Maybe we control the initial objective or put it out in front of the artificial intelligence. But from then on, they’re really controlling everything. In Amodei’s vision, the superintelligence has access to whatever it needs; robots, laboratories, means of productions, etc, to solve problems.
Mary Long: Both Altman and Amodei anticipate that this super intelligence is going to come quite fast. Altman’s estimate is that we could reach this in around 1,000 days. He published this in September. Amodei’s estimate is that we could see this as early as 2026. So Amodei’s essay goes on to imagine what will happen in the 5-10 years after we reach this super intelligence, what that looks like. I don’t even think that this is really a value judgment of the piece. It’s much more vague than what Amodei lays out. Amodei’s essay is about over 14,000 words, it’s lengthy, it’s detailed. Just setting the table with that information. Do you find that these are fundamentally different visions, even though one might be more detailed than another, and if they’re different, do you buy one over the other?
Asit Sharma: They may not be so fundamentally different as it seems on the surface, although we don’t have enough detail in Sam Altman’s vision of the future to understand if he’s talking about the same thing. He has a sentence or two in his vision, which as you say, is very short on details. There is really no effort there to persuade the reader of anything he’s saying, because he does point out that, hey, we found out that deep learning with scale changes the world. That’s essentially what he’s saying.
Amodei has a similar thought in that there is this weird principle and I think there are some academic papers on this that, just a little bit of agentic action, just a little bit of an algorithm or an ability to interpret a sequence, that applied at scale is actually what intelligence is, and that neuroscientists have been kidding themselves and thinking there’s some mysterious thing that goes on in the human mind, that’s the basis of intelligence. Perhaps our brains, also with just a few simple mechanisms, once we’re exposed to enormous amounts of data as we are through our lives from the time we’re born until the time we die, maybe that’s what intelligence is. It’s just a lot of scaling that compute with very simple operations going on. So in that sense, I think they see the world the same way.
In fact, Altman has taken that to an extreme. He went on a tour last year to Asia to try to persuade various chip makers and governments that about seven trillion dollars in capital should be raised between chips, data centers, networking, energy requirements, etc, to support the goals of AI as it moves toward this super intelligence. Which in my estimation is interesting because it’s this essay. It’s very big, bold, without a lot of detail. I will point out that he was dismissed by some of the smartest minds on the planet, the engineers at TSMC, the leadership. The New York Times reported that they dismissed Altman like a podcast pro. I think this is the danger of putting out bold vision without thinking about consequences or persuading people that you’re thinking about consequences. I think for most of us who aren’t as brilliant as either these two gentlemen, or don’t have access to the capital, it worries one that someone would go and try to raise $7 trillion with today’s energy demands on compute GPUs as they’re structured today, that’s a lot of impact on the planet, wouldn’t you first, maybe if you had access to such thinkers and investors, try to find ways to reduce the energy imprint of compute? So I think the essays both have some great similarities, but they really approach the world in very different ways.
Mary Long: Amodei almost makes a similar point to perhaps what the TSMC engineers were making, because he lays out very early on in this essay that, part of his purpose and point in writing this, is that, you can’t make this technology convincing to other people unless you underline and explain what the hope of it is. So you need to have this North Star that everyone who’s bought into the technology or who’s going to be affected by it, which ultimately he argues is everyone. You need to understand what that North Star is, and that not only helps inspire you to work toward something, and even if you’re a layperson, just get excited about it.
As one of the minds that’s helping to build this technology, it also helps you figure out what we don’t want and what to stay away from. Amodei is clear at the beginning of his essay that what he is writing is a positive vision for AI. Obviously, there are lots of things that can go wrong anthropic, purports to be very concerned about safety. That’s not the purpose of this essay. This is mostly the most generous vision that he’s outlining here. I want to get philosophical for a minute before we dive into the visions. [laughs] Austin is so excited for this to come. Because a commonality between both men’s envisioning of what super intelligence would be is like, I think you called it agentic. That AI agents are not only able to process data, but they wind up doing your bidding. Amodei describes this again, more specifically than Altman does, but they’re very similar concepts, as I understand them. He says that ultimately these super intelligent AI agents are capable of initiating tasks and have the I’m going to use quotes, “brain power” of Nobel Prize winners in numerous industries.
So he names a few of these industries, one of which is writing. It can prove and solve mathematical theorems, write extremely good novels; is how he describes this. When I hear this, I can’t help but wonder about the difference between skill and art and how that gets thought of in these renderings and even just imaginings of AI agents. So I’ll turn it to you, Asit. Before I wax poetic a bit longer, is there a difference between skill and art? What does that difference look like?
Asit Sharma: Totally there is. I think for all I admired in this essay, and I should say that there is a vein of humility that runs through the whole Amodei’s whole essay. He begins with a lot of great rhetorical devices, telling you that he understands how silly he could look writing something like this, and he gets that out of the way. He doesn’t want to do that. He doesn’t want to sound like it, and he gives a whole range of types of personalities that he probably sounds like. So for all that there is so much in this essay to like, this is the one point where I really disagreed.
The difference between skill and art, is that, skill is necessary for art. You need the mechanics of a thing to be able to create something great. So you need the artifice of something. You need to be able to embroider if you’re making a beautiful cape. You can’t just imagine it. I think the machines are really great at this. But you also need the ability to experience emotion in a way that can be wrapped up with other things to create something. The reason why it’s going to take machines a long time is they can only emulate. They have the ability to hallucinate already the way these neural networks are built. They hallucinate just like we do. We dream, they dream. Their hallucination is a bit different than ours, but the human body is such an interesting thing.
It’s both composed of electrical impulses, chemical impulses, etc. We have very fine grain receptors on our skin. Therefore, if you’re a child and your grandparent strokes your palm, you may carry that memory with you decades into the future and as a writer, will come back to you when you’re describing a scene. So this is something that yes, maybe over time can be replicated. We know that Meta AI is working on very fine-grained touch perception. So there’s that. We know that these agents can emulate human thought, but putting that whole thing together, where there is an emotional current running through, the machines can only imagine in their own way what that is like.
If you read a great piece of art, I know that you and I have discussed some novels over time. The one commonality is that they’re drawn from this amazing breath of experience throughout a writer’s lifetime that comes together in a very unique way. We don’t understand how that’s done. So machines will come close to that, but will they be able to recreate the effect of reading a great first line of a novel and that carrying you all the way through? It’s going to be a long, long while before that actually happens. So where that argument starts to fall apart. I will note the one thing he doesn’t come back to after discussing how AI will excel in biology and mechanics, so many things. He actually, after stating that they’ll be able to write Nobel level novels, doesn’t support that argument. I don’t think it can be supported.
Mary Long: There is a section at the end of Amodei’s essay in which he talks about work and meaning, and the impact that AI will have on that. That is notably the shortest section in the essay of the five that he outlines earlier on, which include biology, neuroscience, governance and peace, economic development, and prosperity. Then this work in meaning, which is the shortest. And he even addresses this. I think that in large part comes back to what you’re talking about of so much of perhaps not work, but meaning in human life can go back to art and this breath of experience and trying to articulate it and connect over it and in many ways, compelling vision of what AI can do for mankind. But even he comes up empty when he’s like, “Where do we We do we get at the end of all this?” What are we leading to? So I think it’s connected, and it’s an interesting point that you make that once he mentions this writing piece and the capability that AI could have to produce “art” he then drops it and only begins to hint at it again at the end, where he also says, “I don’t actually know what’s going to happen here.”
Asit Sharma: I agree. I love the humility that he brings. He doesn’t ultimately know if this will end up as making our lives more meaningful as we perceive them, although he shows so many benefits that AI could bring. We contrast that with Sam Altman’s vision at the very end of his essay where he says and I’m quoting here, “Many of the jobs we do today would have looked like trifling waste of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamp lighter.” If a lamp lighter could see the world today, he would think the prosperity all around him was unimaginable. Of course, there are many of us who could disagree there to say, “Imagine if you could have been a lamplighter in Victoria and England, the sense of value you would have had waking up at dawn, going around as like a quasi patrol person for your neighborhood at night, illuminating society, how good you would have felt about that job at that time? So many of us today, as it is, struggle to find meaning. Sitting in front of computers or doing what work we have, we struggle to feel like our lives are worth anything. So I think these visions are very different. But, you wanted to talk about, Mary, the advancements that Amodei proposes and both the fun side of that and maybe the creepy part, too.
Mary Long: No and I’m glad. Thank you for getting us back on track. I’m going down the rabbit holes of the philosophical and I’m like, “I could talk about this all day long.” But you’re right. Amodei outlines a lot of possibilities and this is, again, what he’s envisioning will come to happen within the 5-10 years after superintelligence, artificial general intelligence, powerful AI, as he calls it, is achieved. Again, he sees this that point as happening as early as 2026. So this is achieved and say, 2026, the clock starts. What happens next? That’s what Amodei is outlining in this vision. Within this vision, he breaks down, again, into a couple of different categories. You’ve got biology and health, you have neuroscience and mental health, you have economic development and prosperity, you have governance and peace, and then you have work and meaning.
There is a lot of excitement here. It’s easy to concern troll. We can get to some of the things that are perhaps more frightening later on, but this is supposed to be a positive utopian vision. So let’s talk about what’s exciting. You kick it to me so I won’t kick the question back to you before answering. I think that, obviously, he talks about the eradication of infectious and genetic diseases and most cancers. Of course, that is incredibly exciting to think about. The way that he envisions this happening is that, again, you have this team of AI agents that have the collective brainpower of the world’s smartest biologists that are able to run experiments, and because they can act and initiate tasks, they can run experiments.
They can run even regulatory tests and speed up the process with which things are improved. So I like that Amodei doesn’t just say, hey, this is what AI is going to do, but he gives you a sense of how it might come to accomplish this really massive task. He also mentions the elimination of severe mental illnesses, which frankly is not something that had crossed my mind before when thinking about the possibilities. I think it feels wild to say and suggest that, When I thought about AI, I thought the eradication of disease was on the table, but I feel that is something that’s more often discussed. The eradication of mental illness was not something that had crossed my mind or that I had read about before. So I think that there’s especially when you think about really, really severe mental illness and even less severe mental illness.
To imagine a world without that, is obviously a positive in progress. I got really excited hearing him talk about, making progress in food security and climate change mitigation, the possibilities within food and agriculture technology is fascinating to me. This is a silly nerdy one, but he mentions within this government and peace perspective segment that you could have an AI that helps citizens take full advantage of the governmental services that are available to them. When you line that offering up against the eradication of all disease, it feels really trite and small, but I thought, “That would be awesome.” [laughs]
Asit Sharma: I love that, too, Mary. I really love the cognizance that help is hard, even when there is something provided by a government for you. Bureaucracy is difficult and you may be the person who has to fill out just innumerable amount of forms to get help for a kid or maybe to file for unemployment. There’s so much of bureaucracy in our society. What if an AI made it easy for you and easy for the analysis on the other end, so you could get the services that you needed? I thought that was really fun. My takeaways were very similar, so I’m not going to say a lot here except to say that I found oddly reassuring the consistent pointing out that there are so many physical limitations that keep problems from being solved overnight.
So if you’re scared that the AI is going to advance so much and solve so many problems and perhaps there’s nothing left for us to do. He keep points out that we’re straining against the laws of physics, biology, and experimentation. So the rate of change may be phenomenal, but it may be something that we’ll be able to live with because it’s constrained by so many variables and he gives a lot of great examples in clinical research, for example. So that was something that was cool for me. Then finally, just thinking about neuroscience, he discusses or actually alludes to the work that Anthropic is doing to undercover why their models work the way they do. Most of these companies don’t see to have that much interest in trying to understand the black box. But I give Anthropic a lot of credit for publishing papers on what they’re seeing as they build the models. So him just giving a nod to what he calls interpretability, which is also understanding how these large language models work vis-a-vis, our brains, it was cool.
Mary Long: Again, Amodei is clear that this is a positive vision. This is the best-case scenario for what he thinks powerful AI is capable of. That said, there were still moments, reading through these possibilities, where I felt my stomach turn a bit and I felt quite nervous. There were many things I read where I thought, I see how this is positive, and I'm wincing as I say that, because the overwhelming part of me also thought, but this is perhaps too much of a good thing. For all the good that this powerful AI could potentially bring, I don't know that it's fair or possible, really, to imagine a world without any problems. You could solve a lot of problems and still, funnily enough, problems tend to arise. So I don't want to gloss over that. I'd love to take a moment to point out what stuck out to each of us as the more worrisome elements of this positive vision. You want to kick us off with this one?
Asit Sharma: Sure. I’ve got two. So one is something he mentions called biological freedom. He talks about all the advances over the last 70 years in fertility, weight management, all these great things. Then says that he suspects that an AI accelerated biology is going to expand what’s possible almost, so that we can select from a cafeteria style menu of how we want to be, how we want our biology to play out, our physical appearance, our reproduction, which is what people first worried about when we started making progress on the human genome.
What if you can just select what your baby will be like? This sounds like doing that even after birth, so that was a little bit creepy to me. Another one which gave me pause was related to this, and I have to quote here just to make it clear: "Everyday problems that we don't think of as a clinical disease will also be solved." Most of us have everyday psychological problems that are not ordinarily thought of as rising to the level of clinical disease. Some people are quick to anger, others have trouble focusing or are often drowsy, some are fearful or anxious or react badly to change. Then he discusses that there are some drugs that help with that, but conceivably these superintelligent AI systems could just take that edge off of our personalities. There's a thin line, I think, between being human and having something that needs treatment. This goes back to what you were saying earlier, Mary: do you want to solve every problem? I mean, what happens if we select and select to the point where our whole existence is one of moderation? That felt a little uncomfortable, or a lot uncomfortable, to me.
Ricky Mulvey: As always, people on the program may own stocks mentioned, and the Motley Fool may have formal recommendations for or against, so don't buy or sell anything based solely on what you hear. All personal finance content follows Motley Fool editorial standards and is not approved by advertisers. The Motley Fool only picks products that it would personally recommend to friends like you. I'm Ricky Mulvey. Thanks for listening. We'll be back tomorrow.
Diseases become easier to cure. Bureaucracy is simplified. What will work look like? OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei released essays about their visions of what artificial intelligence will bring humanity.
In this podcast, Motley Fool analyst Asit Sharma joins host Mary Long to discuss:
- If building artificial general intelligence is a winner-take-all game.
- How AI advancements could develop in the next decade.
- Lingering questions and worries about the future of superintelligence.
To catch full episodes of all The Motley Fool’s free podcasts, check out our podcast center. To get started investing, check out our beginner’s guide to investing in stocks. A full transcript follows the video.
This video was recorded on Nov. 09, 2024.
Asit Sharma: So if you’re scared that the AI is going to advance so much and solve so many problems and perhaps there’s nothing left for us to do, he points out that we’re straining against the laws of physics, biology, and experimentation. So the rate of change may be phenomenal, but it may be something that we’ll be able to live with because it’s constrained by so many variables.
Ricky Mulvey: I’m Ricky Mulvey and that’s Motley Fool’s Senior Analyst, Asit Sharma. OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei have both published essays outlining their visions for the future of artificial intelligence. A world where diseases become easier to cure, work becomes radically different and unsolvable math problems become solvable. Mary Long caught up with Asit to chat about these visions in a book club-style conversation, what they’re excited about and worried about when AI becomes “a country of geniuses in a data center.”
Mary Long: Amodei is not the first character to leave OpenAI. It’s worth noting and remembering that even earlier this year, OpenAI saw a number of high profile departures. Many of those people have now gone on or are rumored to be going on to start their own AI upstarts. I want to stick on this point for a minute because I think in this world that we’re in now, where our conception of AI is primarily through of chatbots, makes sense that we can have multiple different chatbots. Maybe you have a preference for one, I have a preference for one. They can compete pretty openly in the marketplace, but this grander vision that both Altman and Amodei lay out in their essays are of what super intelligence or artificial general intelligence or powerful AI.
There’s different terms that they each prefer. What this higher AI can do. Amodei describes that super intelligence as a country of geniuses in a data center. If we put ourselves in that future of a time when AI is a country of geniuses in a data center, can there be multiple countries of geniuses in different data centers? Or, can we have the same open competition that we now see with chat bots? Or, is this a winner-takes-all type situation with the massive change that both of these leaders are talking about and envisioning?
Asit Sharma: That’s really difficult to contemplate because behind the scenes, both of these leaders are trying to raise capital in order to have a jump on whatever you call the superintelligence or general AGI. There’s so many different terms that we can insert here. So on the surface, if you read both of their works, there is an idealized vision of the future, which seems very cooperative. That would necessitate if it is a cooperative game, multiple data centers with multiple Nobel Prize winning geniuses. But even Amodei who I think is more prone to look at this as a cooperative endeavor, refers to different thinkers who feel that democracy itself is intertwined with these concepts and therefore, democratic companies should participate, pool capital, pool resources to develop their AI and make that more advanced than AI from non-democratic countries, that’s the stick and then they extend a carrot, which is to share that technology.
So there is a vision here in which it’s important to maybe get there first and have a superintelligence. I like that metaphor that Amodei puts out. I will say that for me, it’s more about flipping the equation. Right now, URI uses a chatbot, or large language model and, they’re basically the assistant. We’re trying to achieve something. So an intelligent human with a very good artificial intelligence can do a lot. In all of these visions, the one commonality I see, is that, that gets flipped, where we become the assistants. Maybe we control the initial objective or put it out in front of the artificial intelligence. But from then on, they’re really controlling everything. In Amodei’s vision, the superintelligence has access to whatever it needs; robots, laboratories, means of productions, etc, to solve problems.
Mary Long: Both Altman and Amodei anticipate that this super intelligence is going to come quite fast. Altman’s estimate is that we could reach this in around 1,000 days. He published this in September. Amodei’s estimate is that we could see this as early as 2026. So Amodei’s essay goes on to imagine what will happen in the 5-10 years after we reach this super intelligence, what that looks like. I don’t even think that this is really a value judgment of the piece. It’s much more vague than what Amodei lays out. Amodei’s essay is about over 14,000 words, it’s lengthy, it’s detailed. Just setting the table with that information. Do you find that these are fundamentally different visions, even though one might be more detailed than another, and if they’re different, do you buy one over the other?
Asit Sharma: They may not be so fundamentally different as it seems on the surface, although we don’t have enough detail in Sam Altman’s vision of the future to understand if he’s talking about the same thing. He has a sentence or two in his vision, which as you say, is very short on details. There is really no effort there to persuade the reader of anything he’s saying, because he does point out that, hey, we found out that deep learning with scale changes the world. That’s essentially what he’s saying.
Amodei has a similar thought in that there is this weird principle and I think there are some academic papers on this that, just a little bit of agentic action, just a little bit of an algorithm or an ability to interpret a sequence, that applied at scale is actually what intelligence is, and that neuroscientists have been kidding themselves and thinking there’s some mysterious thing that goes on in the human mind, that’s the basis of intelligence. Perhaps our brains, also with just a few simple mechanisms, once we’re exposed to enormous amounts of data as we are through our lives from the time we’re born until the time we die, maybe that’s what intelligence is. It’s just a lot of scaling that compute with very simple operations going on. So in that sense, I think they see the world the same way.
In fact, Altman has taken that to an extreme. He went on a tour last year to Asia to try to persuade various chip makers and governments that about seven trillion dollars in capital should be raised between chips, data centers, networking, energy requirements, etc, to support the goals of AI as it moves toward this super intelligence. Which in my estimation is interesting because it’s this essay. It’s very big, bold, without a lot of detail. I will point out that he was dismissed by some of the smartest minds on the planet, the engineers at TSMC, the leadership. The New York Times reported that they dismissed Altman like a podcast pro. I think this is the danger of putting out bold vision without thinking about consequences or persuading people that you’re thinking about consequences. I think for most of us who aren’t as brilliant as either these two gentlemen, or don’t have access to the capital, it worries one that someone would go and try to raise $7 trillion with today’s energy demands on compute GPUs as they’re structured today, that’s a lot of impact on the planet, wouldn’t you first, maybe if you had access to such thinkers and investors, try to find ways to reduce the energy imprint of compute? So I think the essays both have some great similarities, but they really approach the world in very different ways.
Mary Long: Amodei almost makes a similar point to perhaps what the TSMC engineers were making, because he lays out very early on in this essay that, part of his purpose and point in writing this, is that, you can’t make this technology convincing to other people unless you underline and explain what the hope of it is. So you need to have this North Star that everyone who’s bought into the technology or who’s going to be affected by it, which ultimately he argues is everyone. You need to understand what that North Star is, and that not only helps inspire you to work toward something, and even if you’re a layperson, just get excited about it.
As one of the minds that’s helping to build this technology, it also helps you figure out what we don’t want and what to stay away from. Amodei is clear at the beginning of his essay that what he is writing is a positive vision for AI. Obviously, there are lots of things that can go wrong anthropic, purports to be very concerned about safety. That’s not the purpose of this essay. This is mostly the most generous vision that he’s outlining here. I want to get philosophical for a minute before we dive into the visions. [laughs] Austin is so excited for this to come. Because a commonality between both men’s envisioning of what super intelligence would be is like, I think you called it agentic. That AI agents are not only able to process data, but they wind up doing your bidding. Amodei describes this again, more specifically than Altman does, but they’re very similar concepts, as I understand them. He says that ultimately these super intelligent AI agents are capable of initiating tasks and have the I’m going to use quotes, “brain power” of Nobel Prize winners in numerous industries.
So he names a few of these industries, one of which is writing. It can prove and solve mathematical theorems, write extremely good novels; is how he describes this. When I hear this, I can’t help but wonder about the difference between skill and art and how that gets thought of in these renderings and even just imaginings of AI agents. So I’ll turn it to you, Asit. Before I wax poetic a bit longer, is there a difference between skill and art? What does that difference look like?
Asit Sharma: Totally there is. I think for all I admired in this essay, and I should say that there is a vein of humility that runs through the whole Amodei’s whole essay. He begins with a lot of great rhetorical devices, telling you that he understands how silly he could look writing something like this, and he gets that out of the way. He doesn’t want to do that. He doesn’t want to sound like it, and he gives a whole range of types of personalities that he probably sounds like. So for all that there is so much in this essay to like, this is the one point where I really disagreed.
The difference between skill and art, is that, skill is necessary for art. You need the mechanics of a thing to be able to create something great. So you need the artifice of something. You need to be able to embroider if you’re making a beautiful cape. You can’t just imagine it. I think the machines are really great at this. But you also need the ability to experience emotion in a way that can be wrapped up with other things to create something. The reason why it’s going to take machines a long time is they can only emulate. They have the ability to hallucinate already the way these neural networks are built. They hallucinate just like we do. We dream, they dream. Their hallucination is a bit different than ours, but the human body is such an interesting thing.
It’s both composed of electrical impulses, chemical impulses, etc. We have very fine grain receptors on our skin. Therefore, if you’re a child and your grandparent strokes your palm, you may carry that memory with you decades into the future and as a writer, will come back to you when you’re describing a scene. So this is something that yes, maybe over time can be replicated. We know that Meta AI is working on very fine-grained touch perception. So there’s that. We know that these agents can emulate human thought, but putting that whole thing together, where there is an emotional current running through, the machines can only imagine in their own way what that is like.
If you read a great piece of art, I know that you and I have discussed some novels over time. The one commonality is that they’re drawn from this amazing breath of experience throughout a writer’s lifetime that comes together in a very unique way. We don’t understand how that’s done. So machines will come close to that, but will they be able to recreate the effect of reading a great first line of a novel and that carrying you all the way through? It’s going to be a long, long while before that actually happens. So where that argument starts to fall apart. I will note the one thing he doesn’t come back to after discussing how AI will excel in biology and mechanics, so many things. He actually, after stating that they’ll be able to write Nobel level novels, doesn’t support that argument. I don’t think it can be supported.
Mary Long: There is a section at the end of Amodei’s essay in which he talks about work and meaning, and the impact that AI will have on that. That is notably the shortest section in the essay of the five that he outlines earlier on, which include biology, neuroscience, governance and peace, economic development, and prosperity. Then this work in meaning, which is the shortest. And he even addresses this. I think that in large part comes back to what you’re talking about of so much of perhaps not work, but meaning in human life can go back to art and this breath of experience and trying to articulate it and connect over it and in many ways, compelling vision of what AI can do for mankind. But even he comes up empty when he’s like, “Where do we We do we get at the end of all this?” What are we leading to? So I think it’s connected, and it’s an interesting point that you make that once he mentions this writing piece and the capability that AI could have to produce “art” he then drops it and only begins to hint at it again at the end, where he also says, “I don’t actually know what’s going to happen here.”
Asit Sharma: I agree. I love the humility that he brings. He doesn’t ultimately know if this will end up as making our lives more meaningful as we perceive them, although he shows so many benefits that AI could bring. We contrast that with Sam Altman’s vision at the very end of his essay where he says and I’m quoting here, “Many of the jobs we do today would have looked like trifling waste of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamp lighter.” If a lamp lighter could see the world today, he would think the prosperity all around him was unimaginable. Of course, there are many of us who could disagree there to say, “Imagine if you could have been a lamplighter in Victoria and England, the sense of value you would have had waking up at dawn, going around as like a quasi patrol person for your neighborhood at night, illuminating society, how good you would have felt about that job at that time? So many of us today, as it is, struggle to find meaning. Sitting in front of computers or doing what work we have, we struggle to feel like our lives are worth anything. So I think these visions are very different. But, you wanted to talk about, Mary, the advancements that Amodei proposes and both the fun side of that and maybe the creepy part, too.
Mary Long: No and I’m glad. Thank you for getting us back on track. I’m going down the rabbit holes of the philosophical and I’m like, “I could talk about this all day long.” But you’re right. Amodei outlines a lot of possibilities and this is, again, what he’s envisioning will come to happen within the 5-10 years after superintelligence, artificial general intelligence, powerful AI, as he calls it, is achieved. Again, he sees this that point as happening as early as 2026. So this is achieved and say, 2026, the clock starts. What happens next? That’s what Amodei is outlining in this vision. Within this vision, he breaks down, again, into a couple of different categories. You’ve got biology and health, you have neuroscience and mental health, you have economic development and prosperity, you have governance and peace, and then you have work and meaning.
There is a lot of excitement here. It’s easy to concern troll. We can get to some of the things that are perhaps more frightening later on, but this is supposed to be a positive utopian vision. So let’s talk about what’s exciting. You kick it to me so I won’t kick the question back to you before answering. I think that, obviously, he talks about the eradication of infectious and genetic diseases and most cancers. Of course, that is incredibly exciting to think about. The way that he envisions this happening is that, again, you have this team of AI agents that have the collective brainpower of the world’s smartest biologists that are able to run experiments, and because they can act and initiate tasks, they can run experiments.
They can run even regulatory tests and speed up the process with which things are improved. So I like that Amodei doesn’t just say, hey, this is what AI is going to do, but he gives you a sense of how it might come to accomplish this really massive task. He also mentions the elimination of severe mental illnesses, which frankly is not something that had crossed my mind before when thinking about the possibilities. I think it feels wild to say and suggest that, When I thought about AI, I thought the eradication of disease was on the table, but I feel that is something that’s more often discussed. The eradication of mental illness was not something that had crossed my mind or that I had read about before. So I think that there’s especially when you think about really, really severe mental illness and even less severe mental illness.
To imagine a world without that, is obviously a positive in progress. I got really excited hearing him talk about, making progress in food security and climate change mitigation, the possibilities within food and agriculture technology is fascinating to me. This is a silly nerdy one, but he mentions within this government and peace perspective segment that you could have an AI that helps citizens take full advantage of the governmental services that are available to them. When you line that offering up against the eradication of all disease, it feels really trite and small, but I thought, “That would be awesome.” [laughs]
Asit Sharma: I love that, too, Mary. I really love the cognizance that help is hard, even when there is something provided by a government for you. Bureaucracy is difficult and you may be the person who has to fill out just innumerable amount of forms to get help for a kid or maybe to file for unemployment. There’s so much of bureaucracy in our society. What if an AI made it easy for you and easy for the analysis on the other end, so you could get the services that you needed? I thought that was really fun. My takeaways were very similar, so I’m not going to say a lot here except to say that I found oddly reassuring the consistent pointing out that there are so many physical limitations that keep problems from being solved overnight.
So if you’re scared that the AI is going to advance so much and solve so many problems and perhaps there’s nothing left for us to do. He keep points out that we’re straining against the laws of physics, biology, and experimentation. So the rate of change may be phenomenal, but it may be something that we’ll be able to live with because it’s constrained by so many variables and he gives a lot of great examples in clinical research, for example. So that was something that was cool for me. Then finally, just thinking about neuroscience, he discusses or actually alludes to the work that Anthropic is doing to undercover why their models work the way they do. Most of these companies don’t see to have that much interest in trying to understand the black box. But I give Anthropic a lot of credit for publishing papers on what they’re seeing as they build the models. So him just giving a nod to what he calls interpretability, which is also understanding how these large language models work vis-a-vis, our brains, it was cool.
Mary Long: Again, Amodei is clear that this is a positive vision. This is the best case scenario for what he thinks powerful AI is possible of. Powerful AI, I should say. That said, there were still moments that I was reading through these possibilities where I felt my stomach turt a bit and I felt quite nervous. There were many things that I read that I’m like, see how this is positive. I’m wincing as I say that because the overwhelming part of me, also thought, but this is perhaps too much of a good thing. I don’t know that for all the good that this powerful AI could potentially bring, I don’t know that it’s fair or possible, really, to imagine a world without any problems. You could solve a lot of problems and still, funnily enough, problems tend to arise. So I don’t want to, like, gloss over that. I’d love to take a moment to point out what stuck out to each of us as more worrisome elements of this positive vision. You want to kick us off with this one?
Asit Sharma: Sure. I’ve got two. So one is something he mentions called biological freedom. He talks about all the advances over the last 70 years in fertility, weight management, all these great things. Then says that he suspects that an AI accelerated biology is going to expand what’s possible almost, so that we can select from a cafeteria style menu of how we want to be, how we want our biology to play out, our physical appearance, our reproduction, which is what people first worried about when we started making progress on the human genome.
What if you can just select what your baby will be like? This sounds like that after birth, so that was a little bit creepy to me. Another one which gave me pause was something related to this in that and I have to quote here just to make this clear, “Everyday problems that we don’t think of as a clinical disease will also be solved.” Most of us have everyday psychological problems that are not ordinarily thought of as rising to the level of clinical disease. Some people are quick to anger, others have trouble focusing or are often drowsy. Some are fearful or anxious or react badly to change. Then he discusses that, there are some drugs that help with that, but conceivably, these super-intelligent AI systems could just take that edge off of our personalities. There’s a thin line, I think, between being human and then having something that needs treatment. This goes back to what you were saying earlier, Mary, that do you want to solve every problem? I mean, what happens if we select and select to the point where our whole existence is one of moderation? That felt a little uncomfortable or a lot uncomfortable to me.
Ricky Mulvey: As always, people on the program may own stocks mentioned, and the Motley Fool may have formal recommendations for or against, so don’t buy or sell anything based solely on what you hear. All personal finance content follows Motley Fool editorial standards, and are not approved by advertisers. The Motley Fool only picks products that it would personally recommend to friends like you. I’m Ricky Mulvey. Thanks for listening. We’ll be back tomorrow.
Diseases become easier to cure. Bureaucracy is simplified. What will work look like? OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei released essays about their visions of what artificial intelligence will bring humanity.
In this podcast, Motley Fool analyst Asit Sharma joins host Mary Long to discuss:
- If building artificial general intelligence is a winner-take-all game.
- How AI advancements could develop in the next decade.
- Lingering questions and worries about the future of superintelligence.
To catch full episodes of all The Motley Fool’s free podcasts, check out our podcast center. To get started investing, check out our beginner’s guide to investing in stocks. A full transcript follows the video.
This video was recorded on Nov. 09, 2024.
Asit Sharma: So if you’re scared that the AI is going to advance so much and solve so many problems and perhaps there’s nothing left for us to do, he points out that we’re straining against the laws of physics, biology, and experimentation. So the rate of change may be phenomenal, but it may be something that we’ll be able to live with because it’s constrained by so many variables.
Ricky Mulvey: I’m Ricky Mulvey and that’s Motley Fool’s Senior Analyst, Asit Sharma. OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei have both published essays outlining their visions for the future of artificial intelligence. A world where diseases become easier to cure, work becomes radically different and unsolvable math problems become solvable. Mary Long caught up with Asit to chat about these visions in a book club-style conversation, what they’re excited about and worried about when AI becomes “a country of geniuses in a data center.”
Mary Long: Amodei is not the first character to leave OpenAI. It’s worth noting and remembering that even earlier this year, OpenAI saw a number of high profile departures. Many of those people have now gone on or are rumored to be going on to start their own AI upstarts. I want to stick on this point for a minute because I think in this world that we’re in now, where our conception of AI is primarily through of chatbots, makes sense that we can have multiple different chatbots. Maybe you have a preference for one, I have a preference for one. They can compete pretty openly in the marketplace, but this grander vision that both Altman and Amodei lay out in their essays are of what super intelligence or artificial general intelligence or powerful AI.
There’s different terms that they each prefer. What this higher AI can do. Amodei describes that super intelligence as a country of geniuses in a data center. If we put ourselves in that future of a time when AI is a country of geniuses in a data center, can there be multiple countries of geniuses in different data centers? Or, can we have the same open competition that we now see with chat bots? Or, is this a winner-takes-all type situation with the massive change that both of these leaders are talking about and envisioning?
Asit Sharma: That’s really difficult to contemplate because behind the scenes, both of these leaders are trying to raise capital in order to have a jump on whatever you call the superintelligence or general AGI. There’s so many different terms that we can insert here. So on the surface, if you read both of their works, there is an idealized vision of the future, which seems very cooperative. That would necessitate if it is a cooperative game, multiple data centers with multiple Nobel Prize winning geniuses. But even Amodei who I think is more prone to look at this as a cooperative endeavor, refers to different thinkers who feel that democracy itself is intertwined with these concepts and therefore, democratic companies should participate, pool capital, pool resources to develop their AI and make that more advanced than AI from non-democratic countries, that’s the stick and then they extend a carrot, which is to share that technology.
So there is a vision here in which it’s important to maybe get there first and have a superintelligence. I like that metaphor that Amodei puts out. I will say that for me, it’s more about flipping the equation. Right now, URI uses a chatbot, or large language model and, they’re basically the assistant. We’re trying to achieve something. So an intelligent human with a very good artificial intelligence can do a lot. In all of these visions, the one commonality I see, is that, that gets flipped, where we become the assistants. Maybe we control the initial objective or put it out in front of the artificial intelligence. But from then on, they’re really controlling everything. In Amodei’s vision, the superintelligence has access to whatever it needs; robots, laboratories, means of productions, etc, to solve problems.
Mary Long: Both Altman and Amodei anticipate that this super intelligence is going to come quite fast. Altman’s estimate is that we could reach this in around 1,000 days. He published this in September. Amodei’s estimate is that we could see this as early as 2026. So Amodei’s essay goes on to imagine what will happen in the 5-10 years after we reach this super intelligence, what that looks like. I don’t even think that this is really a value judgment of the piece. It’s much more vague than what Amodei lays out. Amodei’s essay is about over 14,000 words, it’s lengthy, it’s detailed. Just setting the table with that information. Do you find that these are fundamentally different visions, even though one might be more detailed than another, and if they’re different, do you buy one over the other?
Asit Sharma: They may not be so fundamentally different as it seems on the surface, although we don’t have enough detail in Sam Altman’s vision of the future to understand if he’s talking about the same thing. He has a sentence or two in his vision, which as you say, is very short on details. There is really no effort there to persuade the reader of anything he’s saying, because he does point out that, hey, we found out that deep learning with scale changes the world. That’s essentially what he’s saying.
Amodei has a similar thought in that there is this weird principle and I think there are some academic papers on this that, just a little bit of agentic action, just a little bit of an algorithm or an ability to interpret a sequence, that applied at scale is actually what intelligence is, and that neuroscientists have been kidding themselves and thinking there’s some mysterious thing that goes on in the human mind, that’s the basis of intelligence. Perhaps our brains, also with just a few simple mechanisms, once we’re exposed to enormous amounts of data as we are through our lives from the time we’re born until the time we die, maybe that’s what intelligence is. It’s just a lot of scaling that compute with very simple operations going on. So in that sense, I think they see the world the same way.
In fact, Altman has taken that to an extreme. He went on a tour last year to Asia to try to persuade various chip makers and governments that about seven trillion dollars in capital should be raised between chips, data centers, networking, energy requirements, etc, to support the goals of AI as it moves toward this super intelligence. Which in my estimation is interesting because it’s this essay. It’s very big, bold, without a lot of detail. I will point out that he was dismissed by some of the smartest minds on the planet, the engineers at TSMC, the leadership. The New York Times reported that they dismissed Altman like a podcast pro. I think this is the danger of putting out bold vision without thinking about consequences or persuading people that you’re thinking about consequences. I think for most of us who aren’t as brilliant as either these two gentlemen, or don’t have access to the capital, it worries one that someone would go and try to raise $7 trillion with today’s energy demands on compute GPUs as they’re structured today, that’s a lot of impact on the planet, wouldn’t you first, maybe if you had access to such thinkers and investors, try to find ways to reduce the energy imprint of compute? So I think the essays both have some great similarities, but they really approach the world in very different ways.
Mary Long: Amodei almost makes a similar point to perhaps what the TSMC engineers were making, because he lays out very early on in this essay that, part of his purpose and point in writing this, is that, you can’t make this technology convincing to other people unless you underline and explain what the hope of it is. So you need to have this North Star that everyone who’s bought into the technology or who’s going to be affected by it, which ultimately he argues is everyone. You need to understand what that North Star is, and that not only helps inspire you to work toward something, and even if you’re a layperson, just get excited about it.
As one of the minds that’s helping to build this technology, it also helps you figure out what we don’t want and what to stay away from. Amodei is clear at the beginning of his essay that what he is writing is a positive vision for AI. Obviously, there are lots of things that can go wrong anthropic, purports to be very concerned about safety. That’s not the purpose of this essay. This is mostly the most generous vision that he’s outlining here. I want to get philosophical for a minute before we dive into the visions. [laughs] Austin is so excited for this to come. Because a commonality between both men’s envisioning of what super intelligence would be is like, I think you called it agentic. That AI agents are not only able to process data, but they wind up doing your bidding. Amodei describes this again, more specifically than Altman does, but they’re very similar concepts, as I understand them. He says that ultimately these super intelligent AI agents are capable of initiating tasks and have the I’m going to use quotes, “brain power” of Nobel Prize winners in numerous industries.
So he names a few of these industries, one of which is writing. It can prove and solve mathematical theorems, write extremely good novels; is how he describes this. When I hear this, I can’t help but wonder about the difference between skill and art and how that gets thought of in these renderings and even just imaginings of AI agents. So I’ll turn it to you, Asit. Before I wax poetic a bit longer, is there a difference between skill and art? What does that difference look like?
Asit Sharma: Totally there is. I think for all I admired in this essay, and I should say that there is a vein of humility that runs through the whole Amodei’s whole essay. He begins with a lot of great rhetorical devices, telling you that he understands how silly he could look writing something like this, and he gets that out of the way. He doesn’t want to do that. He doesn’t want to sound like it, and he gives a whole range of types of personalities that he probably sounds like. So for all that there is so much in this essay to like, this is the one point where I really disagreed.
The difference between skill and art, is that, skill is necessary for art. You need the mechanics of a thing to be able to create something great. So you need the artifice of something. You need to be able to embroider if you’re making a beautiful cape. You can’t just imagine it. I think the machines are really great at this. But you also need the ability to experience emotion in a way that can be wrapped up with other things to create something. The reason why it’s going to take machines a long time is they can only emulate. They have the ability to hallucinate already the way these neural networks are built. They hallucinate just like we do. We dream, they dream. Their hallucination is a bit different than ours, but the human body is such an interesting thing.
It’s both composed of electrical impulses, chemical impulses, etc. We have very fine grain receptors on our skin. Therefore, if you’re a child and your grandparent strokes your palm, you may carry that memory with you decades into the future and as a writer, will come back to you when you’re describing a scene. So this is something that yes, maybe over time can be replicated. We know that Meta AI is working on very fine-grained touch perception. So there’s that. We know that these agents can emulate human thought, but putting that whole thing together, where there is an emotional current running through, the machines can only imagine in their own way what that is like.
If you read a great piece of art, I know that you and I have discussed some novels over time. The one commonality is that they’re drawn from this amazing breath of experience throughout a writer’s lifetime that comes together in a very unique way. We don’t understand how that’s done. So machines will come close to that, but will they be able to recreate the effect of reading a great first line of a novel and that carrying you all the way through? It’s going to be a long, long while before that actually happens. So where that argument starts to fall apart. I will note the one thing he doesn’t come back to after discussing how AI will excel in biology and mechanics, so many things. He actually, after stating that they’ll be able to write Nobel level novels, doesn’t support that argument. I don’t think it can be supported.
Mary Long: There is a section at the end of Amodei’s essay in which he talks about work and meaning, and the impact that AI will have on that. That is notably the shortest section in the essay of the five that he outlines earlier on, which include biology, neuroscience, governance and peace, economic development, and prosperity. Then this work in meaning, which is the shortest. And he even addresses this. I think that in large part comes back to what you’re talking about of so much of perhaps not work, but meaning in human life can go back to art and this breath of experience and trying to articulate it and connect over it and in many ways, compelling vision of what AI can do for mankind. But even he comes up empty when he’s like, “Where do we We do we get at the end of all this?” What are we leading to? So I think it’s connected, and it’s an interesting point that you make that once he mentions this writing piece and the capability that AI could have to produce “art” he then drops it and only begins to hint at it again at the end, where he also says, “I don’t actually know what’s going to happen here.”
Asit Sharma: I agree. I love the humility that he brings. He doesn’t ultimately know if this will end up as making our lives more meaningful as we perceive them, although he shows so many benefits that AI could bring. We contrast that with Sam Altman’s vision at the very end of his essay where he says and I’m quoting here, “Many of the jobs we do today would have looked like trifling waste of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamp lighter.” If a lamp lighter could see the world today, he would think the prosperity all around him was unimaginable. Of course, there are many of us who could disagree there to say, “Imagine if you could have been a lamplighter in Victoria and England, the sense of value you would have had waking up at dawn, going around as like a quasi patrol person for your neighborhood at night, illuminating society, how good you would have felt about that job at that time? So many of us today, as it is, struggle to find meaning. Sitting in front of computers or doing what work we have, we struggle to feel like our lives are worth anything. So I think these visions are very different. But, you wanted to talk about, Mary, the advancements that Amodei proposes and both the fun side of that and maybe the creepy part, too.
Mary Long: No and I’m glad. Thank you for getting us back on track. I’m going down the rabbit holes of the philosophical and I’m like, “I could talk about this all day long.” But you’re right. Amodei outlines a lot of possibilities and this is, again, what he’s envisioning will come to happen within the 5-10 years after superintelligence, artificial general intelligence, powerful AI, as he calls it, is achieved. Again, he sees this that point as happening as early as 2026. So this is achieved and say, 2026, the clock starts. What happens next? That’s what Amodei is outlining in this vision. Within this vision, he breaks down, again, into a couple of different categories. You’ve got biology and health, you have neuroscience and mental health, you have economic development and prosperity, you have governance and peace, and then you have work and meaning.
There is a lot of excitement here. It’s easy to concern troll. We can get to some of the things that are perhaps more frightening later on, but this is supposed to be a positive utopian vision. So let’s talk about what’s exciting. You kick it to me so I won’t kick the question back to you before answering. I think that, obviously, he talks about the eradication of infectious and genetic diseases and most cancers. Of course, that is incredibly exciting to think about. The way that he envisions this happening is that, again, you have this team of AI agents that have the collective brainpower of the world’s smartest biologists that are able to run experiments, and because they can act and initiate tasks, they can run experiments.
They can run even regulatory tests and speed up the process with which things are improved. So I like that Amodei doesn’t just say, hey, this is what AI is going to do, but he gives you a sense of how it might come to accomplish this really massive task. He also mentions the elimination of severe mental illnesses, which frankly is not something that had crossed my mind before when thinking about the possibilities. I think it feels wild to say and suggest that, When I thought about AI, I thought the eradication of disease was on the table, but I feel that is something that’s more often discussed. The eradication of mental illness was not something that had crossed my mind or that I had read about before. So I think that there’s especially when you think about really, really severe mental illness and even less severe mental illness.
To imagine a world without that, is obviously a positive in progress. I got really excited hearing him talk about, making progress in food security and climate change mitigation, the possibilities within food and agriculture technology is fascinating to me. This is a silly nerdy one, but he mentions within this government and peace perspective segment that you could have an AI that helps citizens take full advantage of the governmental services that are available to them. When you line that offering up against the eradication of all disease, it feels really trite and small, but I thought, “That would be awesome.” [laughs]
Asit Sharma: I love that, too, Mary. I really love the cognizance that help is hard, even when there is something provided by a government for you. Bureaucracy is difficult and you may be the person who has to fill out just innumerable amount of forms to get help for a kid or maybe to file for unemployment. There’s so much of bureaucracy in our society. What if an AI made it easy for you and easy for the analysis on the other end, so you could get the services that you needed? I thought that was really fun. My takeaways were very similar, so I’m not going to say a lot here except to say that I found oddly reassuring the consistent pointing out that there are so many physical limitations that keep problems from being solved overnight.
So if you’re scared that the AI is going to advance so much and solve so many problems and perhaps there’s nothing left for us to do. He keep points out that we’re straining against the laws of physics, biology, and experimentation. So the rate of change may be phenomenal, but it may be something that we’ll be able to live with because it’s constrained by so many variables and he gives a lot of great examples in clinical research, for example. So that was something that was cool for me. Then finally, just thinking about neuroscience, he discusses or actually alludes to the work that Anthropic is doing to undercover why their models work the way they do. Most of these companies don’t see to have that much interest in trying to understand the black box. But I give Anthropic a lot of credit for publishing papers on what they’re seeing as they build the models. So him just giving a nod to what he calls interpretability, which is also understanding how these large language models work vis-a-vis, our brains, it was cool.
Mary Long: Again, Amodei is clear that this is a positive vision. This is the best case scenario for what he thinks powerful AI is possible of. Powerful AI, I should say. That said, there were still moments that I was reading through these possibilities where I felt my stomach turt a bit and I felt quite nervous. There were many things that I read that I’m like, see how this is positive. I’m wincing as I say that because the overwhelming part of me, also thought, but this is perhaps too much of a good thing. I don’t know that for all the good that this powerful AI could potentially bring, I don’t know that it’s fair or possible, really, to imagine a world without any problems. You could solve a lot of problems and still, funnily enough, problems tend to arise. So I don’t want to, like, gloss over that. I’d love to take a moment to point out what stuck out to each of us as more worrisome elements of this positive vision. You want to kick us off with this one?
Asit Sharma: Sure. I’ve got two. So one is something he mentions called biological freedom. He talks about all the advances over the last 70 years in fertility, weight management, all these great things. Then says that he suspects that an AI accelerated biology is going to expand what’s possible almost, so that we can select from a cafeteria style menu of how we want to be, how we want our biology to play out, our physical appearance, our reproduction, which is what people first worried about when we started making progress on the human genome.
What if you can just select what your baby will be like? This sounds like that after birth, so that was a little bit creepy to me. Another one which gave me pause was something related to this in that and I have to quote here just to make this clear, “Everyday problems that we don’t think of as a clinical disease will also be solved.” Most of us have everyday psychological problems that are not ordinarily thought of as rising to the level of clinical disease. Some people are quick to anger, others have trouble focusing or are often drowsy. Some are fearful or anxious or react badly to change. Then he discusses that, there are some drugs that help with that, but conceivably, these super-intelligent AI systems could just take that edge off of our personalities. There’s a thin line, I think, between being human and then having something that needs treatment. This goes back to what you were saying earlier, Mary, that do you want to solve every problem? I mean, what happens if we select and select to the point where our whole existence is one of moderation? That felt a little uncomfortable or a lot uncomfortable to me.
Ricky Mulvey: As always, people on the program may own stocks mentioned, and the Motley Fool may have formal recommendations for or against, so don’t buy or sell anything based solely on what you hear. All personal finance content follows Motley Fool editorial standards, and are not approved by advertisers. The Motley Fool only picks products that it would personally recommend to friends like you. I’m Ricky Mulvey. Thanks for listening. We’ll be back tomorrow.
Diseases become easier to cure. Bureaucracy is simplified. What will work look like? OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei released essays about their visions of what artificial intelligence will bring humanity.
In this podcast, Motley Fool analyst Asit Sharma joins host Mary Long to discuss:
- If building artificial general intelligence is a winner-take-all game.
- How AI advancements could develop in the next decade.
- Lingering questions and worries about the future of superintelligence.
To catch full episodes of all The Motley Fool’s free podcasts, check out our podcast center. To get started investing, check out our beginner’s guide to investing in stocks. A full transcript follows the video.
This video was recorded on Nov. 09, 2024.
Asit Sharma: So if you’re scared that the AI is going to advance so much and solve so many problems and perhaps there’s nothing left for us to do, he points out that we’re straining against the laws of physics, biology, and experimentation. So the rate of change may be phenomenal, but it may be something that we’ll be able to live with because it’s constrained by so many variables.
Ricky Mulvey: I’m Ricky Mulvey and that’s Motley Fool’s Senior Analyst, Asit Sharma. OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei have both published essays outlining their visions for the future of artificial intelligence. A world where diseases become easier to cure, work becomes radically different and unsolvable math problems become solvable. Mary Long caught up with Asit to chat about these visions in a book club-style conversation, what they’re excited about and worried about when AI becomes “a country of geniuses in a data center.”
Mary Long: Amodei is not the first character to leave OpenAI. It’s worth noting and remembering that even earlier this year, OpenAI saw a number of high profile departures. Many of those people have now gone on or are rumored to be going on to start their own AI upstarts. I want to stick on this point for a minute because I think in this world that we’re in now, where our conception of AI is primarily through of chatbots, makes sense that we can have multiple different chatbots. Maybe you have a preference for one, I have a preference for one. They can compete pretty openly in the marketplace, but this grander vision that both Altman and Amodei lay out in their essays are of what super intelligence or artificial general intelligence or powerful AI.
There’s different terms that they each prefer. What this higher AI can do. Amodei describes that super intelligence as a country of geniuses in a data center. If we put ourselves in that future of a time when AI is a country of geniuses in a data center, can there be multiple countries of geniuses in different data centers? Or, can we have the same open competition that we now see with chat bots? Or, is this a winner-takes-all type situation with the massive change that both of these leaders are talking about and envisioning?
Asit Sharma: That’s really difficult to contemplate because behind the scenes, both of these leaders are trying to raise capital in order to have a jump on whatever you call the superintelligence or general AGI. There’s so many different terms that we can insert here. So on the surface, if you read both of their works, there is an idealized vision of the future, which seems very cooperative. That would necessitate if it is a cooperative game, multiple data centers with multiple Nobel Prize winning geniuses. But even Amodei who I think is more prone to look at this as a cooperative endeavor, refers to different thinkers who feel that democracy itself is intertwined with these concepts and therefore, democratic companies should participate, pool capital, pool resources to develop their AI and make that more advanced than AI from non-democratic countries, that’s the stick and then they extend a carrot, which is to share that technology.
So there is a vision here in which it’s important to maybe get there first and have a superintelligence. I like that metaphor that Amodei puts out. I will say that for me, it’s more about flipping the equation. Right now, URI uses a chatbot, or large language model and, they’re basically the assistant. We’re trying to achieve something. So an intelligent human with a very good artificial intelligence can do a lot. In all of these visions, the one commonality I see, is that, that gets flipped, where we become the assistants. Maybe we control the initial objective or put it out in front of the artificial intelligence. But from then on, they’re really controlling everything. In Amodei’s vision, the superintelligence has access to whatever it needs; robots, laboratories, means of productions, etc, to solve problems.
Mary Long: Both Altman and Amodei anticipate that this super intelligence is going to come quite fast. Altman’s estimate is that we could reach this in around 1,000 days. He published this in September. Amodei’s estimate is that we could see this as early as 2026. So Amodei’s essay goes on to imagine what will happen in the 5-10 years after we reach this super intelligence, what that looks like. I don’t even think that this is really a value judgment of the piece. It’s much more vague than what Amodei lays out. Amodei’s essay is about over 14,000 words, it’s lengthy, it’s detailed. Just setting the table with that information. Do you find that these are fundamentally different visions, even though one might be more detailed than another, and if they’re different, do you buy one over the other?
Asit Sharma: They may not be as fundamentally different as it seems on the surface, although we don't have enough detail in Sam Altman's vision of the future to understand if he's talking about the same thing. He has a sentence or two in his vision, which, as you say, is very short on details. There is really no effort there to persuade the reader of anything he's saying beyond pointing out that, hey, we found out that deep learning with scale changes the world. That's essentially what he's saying.
Amodei has a similar thought in that there is this weird principle, and I think there are some academic papers on this, that just a little bit of agentic action, just a little bit of an algorithm or an ability to interpret a sequence, applied at scale, is actually what intelligence is, and that neuroscientists have been kidding themselves in thinking there's some mysterious thing that goes on in the human mind that's the basis of intelligence. Perhaps our brains work the same way, with just a few simple mechanisms, once we're exposed to enormous amounts of data, as we are through our lives from the time we're born until the time we die. Maybe that's what intelligence is: just a lot of scaling of compute with very simple operations going on. So in that sense, I think they see the world the same way.
In fact, Altman has taken that to an extreme. He went on a tour last year to Asia to try to persuade various chipmakers and governments that about seven trillion dollars in capital should be raised between chips, data centers, networking, energy requirements, etc., to support the goals of AI as it moves toward this superintelligence. Which, in my estimation, is interesting because it's like this essay: very big, bold, without a lot of detail. I will point out that he was dismissed by some of the smartest minds on the planet, the engineers at TSMC and its leadership. The New York Times reported that they dismissed Altman as a "podcasting bro." I think this is the danger of putting out a bold vision without thinking about consequences, or without persuading people that you're thinking about consequences. For most of us who aren't as brilliant as either of these two gentlemen, or don't have access to the capital, it worries one that someone would go and try to raise $7 trillion. With today's energy demands on compute, with GPUs as they're structured today, that's a lot of impact on the planet. Wouldn't you first, if you had access to such thinkers and investors, try to find ways to reduce the energy footprint of compute? So I think the essays have some great similarities, but they really approach the world in very different ways.
Mary Long: Amodei almost makes a similar point to the one the TSMC engineers were perhaps making, because he lays out very early on in the essay that part of his purpose in writing it is that you can't make this technology convincing to other people unless you underline and explain what the hope of it is. So you need this North Star for everyone who's bought into the technology or who's going to be affected by it, which ultimately, he argues, is everyone. Understanding that North Star not only helps inspire you to work toward something and, even if you're a layperson, just get excited about it.
As one of the minds helping to build this technology, it also helps you figure out what we don't want and what to stay away from. Amodei is clear at the beginning of his essay that what he is writing is a positive vision for AI. Obviously, there are lots of things that can go wrong, and Anthropic purports to be very concerned about safety, but that's not the purpose of this essay. This is the most generous vision that he's outlining here. I want to get philosophical for a minute before we dive into the visions. [laughs] Asit is so excited for this to come. A commonality between both men's envisioning of what superintelligence would be is, I think you called it agentic, that AI agents are not only able to process data, but they wind up doing your bidding. Amodei describes this, again, more specifically than Altman does, but they're very similar concepts as I understand them. He says that ultimately these superintelligent AI agents are capable of initiating tasks and have the, I'm going to use quotes, "brain power" of Nobel Prize winners in numerous fields.
So he names a few of these fields, one of which is writing. It can prove unsolved mathematical theorems and write extremely good novels; that is how he describes it. When I hear this, I can't help but wonder about the difference between skill and art, and how that gets thought of in these renderings, these imaginings, of AI agents. So I'll turn it to you, Asit, before I wax poetic a bit longer: is there a difference between skill and art? What does that difference look like?
Asit Sharma: There totally is. I should say that there is a vein of humility that runs through Amodei's whole essay. He begins with a lot of great rhetorical devices, telling you that he understands how silly he could look writing something like this, and he gets that out of the way. He doesn't want to sound like that, and he gives a whole range of personality types that he worries he might sound like. So for all that there is so much in this essay to like, this is the one point where I really disagreed.
The difference between skill and art is that skill is necessary for art. You need the mechanics of a thing to be able to create something great. You need the artifice of something; you need to be able to embroider if you're making a beautiful cape. You can't just imagine it. I think the machines are really great at this. But you also need the ability to experience emotion in a way that can be wrapped up with other things to create something. The reason why it's going to take machines a long time is that they can only emulate. They already have the ability to hallucinate, the way these neural networks are built. They hallucinate just like we do: we dream, they dream. Their hallucination is a bit different than ours, but the human body is such an interesting thing.
It's composed of both electrical impulses and chemical impulses, etc. We have very fine-grained receptors on our skin. So if you're a child and your grandparent strokes your palm, you may carry that memory with you decades into the future, and as a writer, it will come back to you when you're describing a scene. This is something that, yes, maybe over time can be replicated; we know that Meta AI is working on very fine-grained touch perception. So there's that. We know that these agents can emulate human thought, but putting that whole thing together, where there is an emotional current running through, the machines can only imagine in their own way what that is like.
If you read a great piece of art, and I know that you and I have discussed some novels over time, the one commonality is that it's drawn from this amazing breadth of experience throughout a writer's lifetime that comes together in a very unique way. We don't understand how that's done. So machines will come close to that, but will they be able to recreate the effect of reading a great first line of a novel and having that carry you all the way through? It's going to be a long, long while before that actually happens. So that's where the argument starts to fall apart. I will note it's the one thing he doesn't come back to: after discussing how AI will excel in biology, mechanics, so many things, and after stating that it will be able to write Nobel-level novels, he doesn't support that argument. I don't think it can be supported.
Mary Long: There is a section at the end of Amodei's essay in which he talks about work and meaning, and the impact that AI will have on that. That is notably the shortest of the five sections he outlines, which cover biology, neuroscience, economic development and prosperity, governance and peace, and then this work and meaning section, the shortest, and he even addresses this. I think that in large part comes back to what you're talking about: so much of, perhaps not work, but meaning in human life can go back to art and this breadth of experience and trying to articulate it and connect over it. In many ways, he lays out a compelling vision of what AI can do for mankind, but even he comes up empty when he's like, "Where do we get at the end of all this? What are we leading to?" So I think it's connected, and it's an interesting point that you make that once he mentions this writing piece and the capability that AI could have to produce "art," he then drops it and only begins to hint at it again at the end, where he also says, "I don't actually know what's going to happen here."
Asit Sharma: I agree. I love the humility that he brings. He doesn't ultimately know if this will end up making our lives more meaningful as we perceive them, although he shows so many benefits that AI could bring. Contrast that with Sam Altman's vision at the very end of his essay, where he says, and I'm quoting here, "Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter." If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. Of course, there are many of us who could disagree there and say, imagine if you could have been a lamplighter in Victorian England: the sense of value you would have had waking up at dawn, going around as a quasi-patrol person for your neighborhood at night, illuminating society. How good would you have felt about that job at that time? So many of us today, as it is, struggle to find meaning. Sitting in front of computers or doing what work we have, we struggle to feel like our lives are worth anything. So I think these visions are very different. But you wanted to talk about, Mary, the advancements that Amodei proposes, both the fun side of that and maybe the creepy part, too.
Mary Long: No, and I'm glad. Thank you for getting us back on track. I go down the rabbit holes of the philosophical and I'm like, "I could talk about this all day long." But you're right. Amodei outlines a lot of possibilities, and this is, again, what he's envisioning will come to happen within the 5-10 years after superintelligence, artificial general intelligence, powerful AI, as he calls it, is achieved. Again, he sees that point as coming as early as 2026. So say this is achieved in 2026; the clock starts. What happens next? That's what Amodei is outlining in this vision, and he breaks it down, again, into a few different categories: you've got biology and health, neuroscience and mental health, economic development and prosperity, governance and peace, and then work and meaning.
There is a lot of excitement here. It's easy to concern-troll, and we can get to some of the things that are perhaps more frightening later on, but this is supposed to be a positive, utopian vision. So let's talk about what's exciting. You kicked it to me, so I won't kick the question back to you before answering. Obviously, he talks about the eradication of infectious and genetic diseases and most cancers, and of course, that is incredibly exciting to think about. The way that he envisions this happening is that, again, you have this team of AI agents with the collective brainpower of the world's smartest biologists, and because they can act and initiate tasks, they can run experiments.
They can even run regulatory tests and speed up the process by which things are approved. So I like that Amodei doesn't just say, hey, this is what AI is going to do; he gives you a sense of how it might come to accomplish this really massive task. He also mentions the elimination of severe mental illness. When I thought about AI, the eradication of disease was on the table; that's something that's more often discussed. But the eradication of mental illness was not something that had crossed my mind, or that I had read about, before.
Especially when you think about really, really severe mental illness, and even less severe mental illness, to imagine a world without that is obviously positive progress. I also got really excited hearing him talk about making progress in food security and climate change mitigation; the possibilities within food and agriculture technology are fascinating to me. And this is a silly, nerdy one, but he mentions within this governance and peace segment that you could have an AI that helps citizens take full advantage of the governmental services available to them. When you line that offering up against the eradication of all disease, it feels really trite and small, but I thought, "That would be awesome." [laughs]
Asit Sharma: I love that, too, Mary. I really love the cognizance that getting help is hard, even when there is something provided by a government for you. Bureaucracy is difficult, and you may be the person who has to fill out an innumerable amount of forms to get help for a kid, or maybe to file for unemployment. There's so much bureaucracy in our society. What if an AI made it easy for you, and easy for the analysis on the other end, so you could get the services that you needed? I thought that was really fun. My takeaways were very similar, so I'm not going to say a lot here, except that I found it oddly reassuring how consistently he points out that there are so many physical limitations that keep problems from being solved overnight.
So if you're scared that the AI is going to advance so much and solve so many problems that perhaps there's nothing left for us to do, he points out that we're straining against the laws of physics, biology, and experimentation. So the rate of change may be phenomenal, but it may be something that we'll be able to live with because it's constrained by so many variables, and he gives a lot of great examples, in clinical research, for instance. So that was something that was cool for me. Then finally, just thinking about neuroscience, he discusses, or actually alludes to, the work that Anthropic is doing to uncover why their models work the way they do. Most of these companies don't seem to have that much interest in trying to understand the black box, but I give Anthropic a lot of credit for publishing papers on what they're seeing as they build the models. So him giving a nod to what he calls interpretability, which is understanding how these large language models work vis-à-vis our brains, that was cool.
Mary Long: Again, Amodei is clear that this is a positive vision. This is the best-case scenario for what he thinks powerful AI, to use his term, is capable of. That said, there were still moments reading through these possibilities where I felt my stomach turn a bit and I felt quite nervous. There were many things that I read where I thought, I see how this is positive, and I'm wincing as I say that, because the overwhelming part of me also thought, but this is perhaps too much of a good thing. For all the good that this powerful AI could potentially bring, I don't know that it's fair or possible, really, to imagine a world without any problems. You could solve a lot of problems and still, funnily enough, problems tend to arise. So I don't want to, like, gloss over that. I'd love to take a moment to point out what stuck out to each of us as more worrisome elements of this positive vision. You want to kick us off with this one?
Asit Sharma: Sure. I've got two. One is something he mentions called biological freedom. He talks about all the advances over the last 70 years in fertility, weight management, all these great things. Then he says that he suspects an AI-accelerated biology is going to expand what's possible, so that we can select from a cafeteria-style menu of how we want to be, how we want our biology to play out: our physical appearance, our reproduction, which is what people first worried about when we started making progress on the human genome.
What if you can just select what your baby will be like? This sounds like that, except after birth, so that was a little bit creepy to me. Another one that gave me pause was related to this, and I have to quote here just to make it clear: "Everyday problems that we don't think of as a clinical disease will also be solved." Most of us have everyday psychological problems that are not ordinarily thought of as rising to the level of clinical disease. Some people are quick to anger, others have trouble focusing or are often drowsy, some are fearful or anxious or react badly to change. He then discusses that there are some drugs that help with that, but conceivably, these superintelligent AI systems could just take that edge off of our personalities. There's a thin line, I think, between being human and having something that needs treatment. This goes back to what you were saying earlier, Mary: do you want to solve every problem? What happens if we select and select to the point where our whole existence is one of moderation? That felt a little uncomfortable, or a lot uncomfortable, to me.
Ricky Mulvey: As always, people on the program may own stocks mentioned, and the Motley Fool may have formal recommendations for or against, so don't buy or sell anything based solely on what you hear. All personal finance content follows Motley Fool editorial standards and is not approved by advertisers. The Motley Fool only picks products that it would personally recommend to friends like you. I'm Ricky Mulvey. Thanks for listening. We'll be back tomorrow.