The Big Think Interview

Thinking too logically can actually hold you back

"Rationalism is the idea that, in order to truly know something, you have to be able to describe it explicitly."

Dan Shipper
Dan Shipper is the CEO and cofounder of Every, where he explores the frontiers of AI in his column, Chain of Thought, and on his podcast, ‘AI & I.’

Overview
Rationalism has shaped the way we think for thousands of years, teaching us that to truly know something, we must describe it explicitly, reduce it into rules, and test those rules repeatedly. But what if rationalism only tells half the story?
Dan Shipper, CEO of Every, explores.
Transcript

DAN SHIPPER: Rationalism is really the idea that in order to truly know something, you have to be able to describe it explicitly. That is to say, if x is true, then y will happen. That way of thinking about the world has been very successful for us, but it also kind of blinded us to the importance of our intuition as the foundation of everything that we do, everything we think, everything that we know. My name is Dan Shipper. I'm the co-founder and CEO of Every, and I'm the host of the "AI & I" podcast.
- [Narrator] The limits of rationalism from Socrates to neural networks.
- I think rationalism is one of the most important ideas of the last, like, 2,000 years. Rationalism is really the idea that if we can be explicit about what we know, if we can really reduce what we know down into a set of theories, a set of rules for how the world works, that is true knowledge about the world, and that is distinct from everything else that kind of messes with our heads, messes with how we operate in society. And you may not have heard that word, or maybe you have, but it is built into the way that you see the world. For example, the way computers work, or the way vaccines work, or the way that we predict the weather, or the way that we try to make decisions when we're, you know, thinking, "I don't wanna be too emotional about this. I want to get really precise about my thinking on this issue." Even the way that we do therapy: a lot of therapy is about rationalizing through what you think and what you feel. All that stuff comes from an extensive lineage of ideas that started in ancient Greece, really blossomed during the Enlightenment, and now is like the bedrock of our culture and the way that we think about the world.

I think the father of rationalism is Socrates the philosopher. Socrates is one of the first people to really examine the question of what we know and how, what is true and what's not true. To be able to describe what we know and how we know it. To make that clear and explicit. So that only people who knew the truth, who knew how the world really works, were the ones steering the state. That really became the birth of philosophy: this idea that if you inquire deeply into what are usually kind of like the inexplicit intuitions that we have about the world, you can identify a set of rules, or a theory, about what the world is like and what's true and what's not, that you can lay out explicitly and that you can use to decide the difference between true and false.

I think that you can trace the birth of rationalism to this dialogue, "Protagoras." And in the dialogue, it's a debate between Socrates on the one hand and Protagoras on the other. And Protagoras is what we call a sophist, and it's where the term "sophistry" comes from, which means, kind of, you know, someone who says really compelling things but is actually full of shit. What Protagoras and Socrates are debating is, can excellence be taught? And excellence, the word is often translated in English as virtue, but I think a more appropriate translation is excellence. And in ancient Greece, that kind of excellence was really prized. It's sort of like a general ability to be good at important things in life and in society. And they approach it from very different angles. Protagoras believes that everyone has the capacity, every human has the capacity, to be excellent, and he tells this big myth about how we, as humans, gain the capacity to be excellent. And Socrates is saying, no, no, no, I don't want any of that. What I want is a definition. I want you to say explicitly what it is and what it's not and what are the components of it. And that's a really big moment, at least the way that Plato writes it. Socrates kind of like takes apart Protagoras, and it's pretty clear by the end that Protagoras doesn't know, doesn't have any way to define, in a non-contradictory way, what excellence is, what it means to be good. And the implication is that, then, he doesn't know it.
And that sort of set Western society on this path of trying to find really clear definitions and theories for the things that we talk about, and to identify knowledge, the ability to know something or whether or not you know something, with whether or not you can really clearly define it. And that idea became incredibly important in the scientific Enlightenment. Thinkers on the philosophy side, like Descartes, and on the science side, like Newton and Galileo, took this idea and used it as a new method to understand and explain the world. So what it became is: can we use mathematics to explain and predict different things in the world? And from Socrates to Galileo to Newton, they continually reinforced this idea that in order to truly know something, you have to be able to describe it explicitly. You have to have a theory about it. You have to be able to describe it mathematically, ideally. The world around us is shaped by this framework. So everything from smartphones to computers to cars to rockets to cameras to electricity, every appliance in your house, vaccines, everything in our world is shaped by this idea, this way of seeing the world. It's been incredibly impactful. And you can find this too in the rest of the culture. Like anytime you see, you know, a book or a movie or a blog post or whatever talking about "The Five Laws of Power" or "The Five Laws of Negotiation," all that stuff is ways that physics, and rationalism in general, have sort of seeped into the everyday way that we think about the world.

And to be clear, it's been super successful, but in areas of the world like psychology or economics or neuroscience, it has been really hard to make progress in the same way that physics has made progress. I think if you look, for example, at the social sciences, a lot of the way that the social sciences are structured is inspired by physics. What we're trying to do is take very complex, higher-level phenomena, so like maybe psychology or economics or, you know, any other branch of social science, and we're trying to reduce them down to a set of definitions and a theory and a set of rules for how things in that domain work. And what's really interesting is, if you look at those fields, so like psychology for example, it's in the middle of a gigantic replication crisis. Even though we've spent like a hundred years doing psychology research, the body of knowledge that we've been able to build there, in terms of its universal applicability, our ability to find, you know, universal laws in the same way that Newton found universal laws, seems pretty suspect. And we feel like we can't stop doing it because we have no better alternative.

Another really interesting and important part of the world that this way of looking at things didn't work for, in many ways, is AI. So this is usually the part of an explanation where I try to define it. Like, what is AI? And what's really interesting is there's no universally agreed-upon definition for this, in the same way that we've struggled to come up with a universal definition for what it is to know something, or a universal definition for what anxiety is in psychology, for example, which is another really good example. There are a lot of ways to kind of like gesture at what AI is, but really it's like, obviously, maybe not obviously, AI stands for artificial intelligence. And the AI project is to build a computer that can think and learn in the same way that humans do.
And because of the way that computers work, for a very long time that was a really hard problem. AI started as a field in the fifties at Dartmouth. And you can, like, actually look at the original paper. They were very optimistic. They were like, you know, maybe, like, a summer's worth of work and we'll have nailed this. And the way that they defined it is to be able to reduce down human intelligence into a system of symbols that they could combine together, based on explicit rules, in a way that would mimic human intelligence. And so there's a really clear through line from Socrates's original project to the Enlightenment to the original approach that AI theorists took, called symbolic AI: the idea that you could embody thinking in, essentially, logic, logical symbols, and transformations between logical symbols. It's very similar to just basic philosophy. And there were actually a lot of early successes. For example, the two founding fathers of AI, Herbert Simon and Allen Newell, built this machine that they called the General Problem Solver. And what's really interesting is it wasn't even built as a computer, 'cause computers were extremely expensive back then. They originally codified the General Problem Solver on paper and then executed it themselves by hand. Actually, I think one of them had their family do it with them, to try to simulate how a computer would work to solve complex problems. And with the General Problem Solver, they tried to reduce down, you know, complex real-world situations into simple logic problems that look a little bit like games. And then they tried to see if they could build a computer that would solve some of those games. And they were actually quite successful at first. What they found was, it worked really well for simple problems, but as problems got more and more complex, the search space of possible solutions got really, really, really, really big. And so, by representing the problem in that way, the systems that they built started to fail as soon as they moved away from toy problems to more complex ones.

I think a really interesting and simple example of this is thinking about how you might decide whether an email in your inbox is spam or whether it's important. And you might say something like, if it mentions that I won the lottery, it's spam, right? And that sort of if-then rule is a lot like the kinds of rules that early symbolic AI theorists were trying to come up with to help you solve any problem: to codify, like, if x, y, z is true, then here are the implications. What happens is, if you look at that really closely, there are always lots of little exceptions. So an example might be, if it says "emergency," maybe you wanna put that at the top of your inbox, but very quickly you'll have spammers, obviously, being like, just put "emergency" in the subject line and they'll shoot to the top. So then you have to kind of like create another rule, which is, it's "emergency," but only if it's from my coworkers or my family. But computers don't really know what coworkers or family is. So then you have to define, okay, like, how is it gonna know what a coworker or what a family member is? So what you can do is, maybe it's like a coworker is anybody from my company. And so, if it says "emergency" and it's from anybody in my company, put it at the top of my inbox. But what you may find is that there are certain people at your company who are annoying and want your attention, even if you don't really want them to contact you.
And so, they start putting "emergency" into their emails, and now you have to create another rule, which is like, don't let people who are abusing the privilege of getting to the top of my inbox abuse it, even if they're coworkers. And what you find is, anytime you try to create rules to define these things, you always run up against exceptions. If you wanna, for example, define what an important email is, you have to define pretty much everything about the world. You have to create a world full of definitions. And that project of making the entire world explicit in definitions just didn't work. It's too brittle, it's too hard, there's too much computational power required to loop through all the different definitions to decide, you know, if this email is important or not, and it's just, there are too many definitions to create. It's too big of a project. And so that symbolic AI project worked in some limited domains, and there were these things called expert systems, for example, in the 70s and 80s, that tried to, for example, reduce medical diagnosis down to a set of rules. And they were somewhat successful, but even in a case like medical diagnosis, trying to reduce down to a simple set of rules something like "do you have measles," or maybe even "do you have anxiety or depression," turned out to be really complicated and really, really hard, and in fact impossible to get right a hundred percent of the time in an explicit way.

The alternative, which originated around the time that AI itself originated but really wasn't taken that seriously until probably the 80s and 90s, is what's called a neural network. And a neural network is inspired by the way our brains work. It doesn't work exactly the same way, but it's inspired by them. And it basically consists of layers of artificial neurons that are connected to each other. And what you can do with a neural network is you can get it to recognize patterns by giving it lots of examples. You can, you know, for example, if you want it to recognize whether an email is important, what you can do is you can give it an example, say, like, you know, here's an email from a coworker, and have it guess the answer. And if the answer is wrong, what we've done is we've created a way to train the network to correct its wrong answer. And what happens is, over many, many, many, many iterations on many, many different examples, what we find is, without any explicit set of definitions or explicit rules about, like, you know, this is an important email, or this is a cat, or this is a good move in chess, the neural network learns to recognize patterns and is able to do a lot of the more complex thinking-style tasks that early symbolic AI was unable to do.

Language models are a particular kind of neural network that operates by finding complex patterns inside of language and using that to produce what comes next in a sequence. So what we've done with language models is feed them basically, you know, all of the text on the internet. And when we feed them a piece of text, we'll give them, you know, a big chunk of text, and then we will say, "Based on this chunk, what's the next word that comes after this chunk?" And language models learn that there are many, many thousands of partially fitting rules that they can apply, based on the previous history of text they've seen, to predict what comes next. And all of those rules are inexplicit.
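To make the brittleness concrete, here is a minimal sketch (not anything from the talk itself) of the cascading if-then email rules described above. The Email fields, the company domain, and the list of "abusers" are all hypothetical stand-ins; the point is only how each rule invites another exception.

```python
# A hypothetical sketch of brittle, explicit email rules; every name here
# (the Email fields, COMPANY_DOMAIN, ABUSERS) is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

COMPANY_DOMAIN = "example.com"          # assume "coworker" means "same email domain"
ABUSERS = {"needy.person@example.com"}  # assume a hand-kept list of people who cry "emergency"

def classify(email: Email) -> str:
    """Classify an email as 'spam', 'important', or 'normal' with explicit if-then rules."""
    text = (email.subject + " " + email.body).lower()

    # Rule 1: lottery mentions are spam.
    if "won the lottery" in text:
        return "spam"

    # Rule 2: "emergency" goes to the top of the inbox...
    if "emergency" in text:
        is_coworker = email.sender.endswith("@" + COMPANY_DOMAIN)
        # ...exception: only if it comes from a coworker...
        # ...exception to the exception: unless that coworker abuses the privilege.
        if is_coworker and email.sender not in ABUSERS:
            return "important"
        return "normal"

    return "normal"

print(classify(Email("boss@example.com", "Emergency", "Server is down")))          # important
print(classify(Email("stranger@spam.biz", "You won the lottery!", "Click here")))  # spam
```

Every exception needs another rule, and pinning down "coworker," "family," or "abuse" precisely enough soon requires defining much of the sender's world, which is the trap the transcript describes.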
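The learning-from-examples alternative can be sketched just as roughly. What follows is not Shipper's example or a real neural network, only a single artificial neuron (the smallest stand-in for the layered networks he describes) trained on invented, labeled emails: it guesses, gets corrected when wrong, and never sees an explicit rule.

```python
# A toy, single-neuron illustration of learning from examples rather than rules.
# The feature names and labels are invented purely for illustration:
# features = [mentions_emergency, from_coworker, mentions_lottery]
examples = [
    ([1, 1, 0], 1),  # emergency from a coworker  -> important
    ([1, 0, 0], 0),  # emergency from a stranger  -> not important
    ([0, 1, 0], 1),  # ordinary coworker email    -> important
    ([0, 0, 1], 0),  # lottery spam               -> not important
]

weights = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

def predict(x):
    """Fire (1) if the weighted sum of the features is positive."""
    return 1 if bias + sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

# Training loop: guess, compare with the label, nudge the weights when wrong
# (the classic perceptron update rule).
for _ in range(50):
    for x, label in examples:
        error = label - predict(x)
        if error != 0:
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error

print(weights, bias)                      # no human-readable rules, just numbers
print([predict(x) for x, _ in examples])  # yet the learned behavior matches the labels
```

The learned weights encode the pattern, but nowhere in them is there a statement like "coworkers matter"; scale this up to many layers and millions of examples and you get the inexplicitness the transcript goes on to describe.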
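Finally, the training objective described for language models ("based on this chunk, what's the next word?") can be illustrated with a deliberately crude stand-in. Real language models are large neural networks trained on vastly more text; this sketch only counts which word follows which in a tiny made-up snippet, but the prediction task has the same shape, and even here what is learned lives in a table of counts rather than in stated rules.

```python
# A crude stand-in for next-word prediction: count which word follows which.
# The "corpus" is an invented snippet; real models learn far subtler, inexplicit patterns.
from collections import Counter, defaultdict

corpus = ("to truly know something you have to be able to describe it explicitly "
          "to know something is to describe it").split()

# "Training": for every adjacent pair of words, record the continuation.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed word that followed `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("to"))        # -> "describe" (the most common continuation here)
print(predict_next("describe"))  # -> "it"
```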
They're kind of like, you can observe them in the overall behavior of the network, but they don't exist anywhere in the network. You can't go and look inside of a neural network and find, like, this is exactly, this is the entire set of rules that it has. You may be able to find a couple, but you can't find a definitive list, in the same way that if I, like, took a microscope and looked in your brain, I would not be able to find that. I would not be able to find the list of rules that you use, for example, to, you know, recognize a cat or make the next move in chess. They're all represented inexplicitly. And what's really interesting about neural networks is that the way that they think, or the way that they operate, is a lot like, it looks a lot like human intuition. Human intuition is also trained by thousands and thousands and thousands of hours of direct experience.

Often, our best metaphors for our minds are the tools that we use. So a really good example is that Freud has one of the most impactful models of the mind. And the way that he came up with that is he used the steam engine as a metaphor. So it's an explicitly steam engine-based idea. In the 20th century, the metaphor for our minds moved into being like a computer, and that became the kind of thing that we all wanted to be like. We wanted to be logical and rational and operate like a machine to make the best decisions possible. And I think one of the most interesting things about that way of thinking is what it makes invisible to us. And this, I think, relates a lot to the, like, sort of Socratic, Enlightenment type of thinking as well. It makes invisible to us the importance of our intuition as the foundation of everything that we do, everything we think, everything that we know. In a lot of ways, you can think of rationality as emerging out of intuition. So, like, we have this sort of squishy, inexplicit, intuitive way of understanding what's going on in the world, and our rational thought comes out of that and, once intuition sort of sets the frame for us, is able to go in and sort of manipulate things in a more methodical, rational, logical way. But you sort of need both. And neural networks are the first technology we've ever invented that works a lot like human intuition. And the reason I love that is because I hope that it makes more visible to us the value and importance of intuitive thought.

And that actually loops back and takes us all the way back to Protagoras. And it's sort of the thing that we lost in this birth of rationalism back in Callias's house, because Protagoras is arguing that everyone teaches us excellence all the time. He's arguing, he's using stories and myths and metaphor, to help us understand something that he knows from his own personal experience. And Socrates is saying, well, if you can't define it, if you can't tell me exactly the rules by which you know something, then you don't know it. And that way of thinking about the world has been very successful for us, but it also kind of blinded us to how important that idea is: that everyone teaches us to be excellent, that stories and personal, hands-on experience give us a way of knowing about things that we may not be able to explicitly talk about, but that we still know just as much as we know things that we can explicitly say. And it was only when we began to embody that way of being in the world, or that way of knowing things, that way of thinking, into machines that we started to get actual artificial intelligence.
There are many different ways of knowing things and many different ways of understanding things, and we may not understand all of the particulars of how humans, for example, how our minds, come to certain conclusions intuitively. And we may not understand all the particulars of how language models, for example, come to any particular output, but that doesn't mean that we don't understand them. It just means that we understand them in a different way than we might be used to. So for example, if you use ChatGPT all the time, you develop an intuition for what it's good at and what it's not good at, and when it might be hallucinating and when it might not be. In the same way that you might develop an intuition for when a friend of yours is sad or when a friend of yours is not being totally truthful with you. And that's not a universal set of rules that applies to everyone in every situation, or even to your friend in every situation. It's just this sort of intuitive feel that is a core part of understanding, but that we normally discount. History shows that it is better to be open to more ways of seeing and working with the world, and that in this particular era, it's very important to be able to work with things that are a little bit mysterious, and to be comfortable with that.