Sophie
The 8 fundamental principles for the future of artificial intelligence
In a series of interviews conducted by Marcus Weldon for Newsweek, three experts in artificial intelligence (roboticist Rodney Brooks, neuroscientist David Eagleman, and AI pioneer Yann LeCun) outlined eight fundamental principles for the future of AI. The principles cover our tendency to anthropomorphize AI; the multidimensional nature of intelligence; the limits of current models, which lack deep reasoning; the inadequacy of language as the sole basis for learning; a future made of many specialized AI systems rather than one all-knowing AI; a hierarchy in which machines, however powerful, remain in service to humans; the overestimation of what pure intelligence can do; and the need for predictable systems aligned with our diverse human values. Together they sketch a pragmatic roadmap for responsible AI development, one that prioritizes augmenting human capabilities over replacing them.
Do you remember Asimov's three laws of robotics? Those rules that say a robot cannot harm a human, must obey orders, and must protect its own existence (as long as it doesn't contradict the first two laws)? Well, it seems we're in the process of writing the modern equivalent for artificial intelligence!
In a series of super interesting discussions published by Newsweek, Marcus Weldon (the former head of Bell Labs) talked with three leading figures in the field: Rodney Brooks (robotics expert), David Eagleman (neuroscientist), and Yann LeCun (one of the fathers of modern AI). And guess what? Despite their different backgrounds, they agreed on eight principles that will likely shape the future of AI. Let’s see what they are!
Principle 1: We think it’s magic (but it’s not)
Let’s admit it: as soon as ChatGPT gives us an intelligent answer, we’re tempted to think "Wow, this machine really understands what I’m saying!" This is what experts call "magical thinking" - this habit we have of attributing human-like abilities to our gadgets as soon as they do something a bit clever.
As Rodney Brooks bluntly puts it: "If it seems like magic, then you don’t understand... and you shouldn’t buy something you don’t understand." Boom!
Basically, we’re like children in front of a magic trick - impressed but completely baffled about what’s really going on. This tendency to see AI as our new super-smart friend prevents us from seeing its true limits. To move forward, we need to stop being naively amazed and understand what these systems can really do (and especially what they can’t do).
Principle 2: Intelligence is much more than a score
Let’s be honest: human intelligence is a real mess! It’s not just about being able to solve equations or memorize things. It’s also about being creative, understanding the emotions of others, having a moral conscience, and even knowing how to use one’s body in a coordinated way.
David Eagleman throws out a thought-provoking truth: "We don’t have a single definition of intelligence. When we ask if AI is truly intelligent, we don’t even have a clear criterion for measuring that!"
So comparing a human and an AI is like comparing apples and oranges. AI might be super strong at playing chess or generating images but useless at understanding why a joke is funny or at opening a squeaky door. That’s why we need to stop with the simplistic comparisons and accept that intelligence is complicated!
Principle 3: AI must learn to think before it speaks
You know the difference between blurting out an answer instantly and really taking the time to ponder a problem? Psychologist Daniel Kahneman (who won a Nobel Prize, mind you) calls these System 1 (fast and instinctive) and System 2 (slow and reflective).
The problem with our current AIs like ChatGPT? They are stuck in System 1 mode! As Yann LeCun puts it: "An LLM produces one word after another, it’s reactive... there’s no real reasoning." It’s like talking with someone who always answers immediately, without ever taking the time to think deeply.
For AI to become really useful, it needs to develop its System 2 - that ability to truly reflect, to model the world, to reason step by step. It’s a bit like moving from an impulsive teenager to a reflective adult. Not simple, but necessary!
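To make this concrete, here’s a minimal sketch in plain Python (the `next_token_scores` stub is a hypothetical stand-in for a real model) contrasting "System 1" decoding - committing to one token at a time, exactly as LeCun describes - with a crude "System 2" variant that generates several complete candidates and only then picks the best one:

```python
import random

def next_token_scores(context):
    """Hypothetical stand-in for a real language model: returns a
    score for each candidate next token, given the context so far."""
    vocab = ["the", "cat", "sat", "on", "a", "mat"]
    rng = random.Random(hash(tuple(context)))  # deterministic toy scores
    return {tok: rng.random() for tok in vocab}

def system1_generate(prompt, max_tokens=5):
    """'System 1': reactive decoding. Commit to the single highest-
    scoring next token at every step; never reconsider past choices."""
    out = list(prompt)
    for _ in range(max_tokens):
        scores = next_token_scores(out)
        out.append(max(scores, key=scores.get))
    return out

def system2_generate(prompt, max_tokens=5, n_candidates=8):
    """A crude 'System 2': sample several complete answers, score
    each one as a whole, and only then commit to the best of them."""
    def sample_one():
        out = list(prompt)
        for _ in range(max_tokens):
            scores = next_token_scores(out)
            tokens, weights = zip(*scores.items())
            out.append(random.choices(tokens, weights=weights)[0])
        return out

    def whole_sequence_score(seq):
        # Toy global criterion: sum of per-step scores along the path.
        return sum(next_token_scores(seq[:i])[seq[i]]
                   for i in range(len(prompt), len(seq)))

    candidates = [sample_one() for _ in range(n_candidates)]
    return max(candidates, key=whole_sequence_score)

print("System 1:", " ".join(system1_generate(["the"])))
print("System 2:", " ".join(system2_generate(["the"])))
```

Real "System 2" research goes much further (planning, world models, search), but even this toy version shows the difference: one process reacts, the other deliberates.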
Principle 4: Words are not enough
Imagine trying to describe the exact taste of a strawberry, or the precise sensation of diving into cold water. Tough, right? That’s because language is super limited!
David Eagleman explains it perfectly: "The connection we have through language is ridiculously low-bandwidth. When I say 'justice' or 'freedom,' I put my own meaning into it. You probably put in a completely different meaning. We manage, but it’s honestly not ideal."
The thing is, our current AIs are stuffed with text - words, sentences, whole libraries - but not with direct experiences. It’s as if you had to understand swimming purely by reading books about it, without ever dipping a toe in the water!
For AI to truly progress, it needs to go beyond words. It must understand the world in a more complete way, less dependent on language. A bit like us, who learn much more by living experiences than by reading descriptions.
Principle 5: A society of AIs rather than a super-AI
Forget the idea of an all-knowing AI that can do everything (you know, that scary thing in science fiction films). The future is more about a bunch of different AIs working together, each with its specialty.
Yann LeCun paints a vivid picture: "It will be an interactive society of machines. You’ll have AI systems that are smarter than others and can neutralize them. It will be my intelligent AI police against your rogue AI." It sounds like a city full of specialists rather than a single genius who knows everything!
The funniest part of the story is Moravec's paradox: computers have been beating us at chess for decades, yet they’re hopeless at what a one-year-old does effortlessly - catching a ball, or understanding that an object still exists when you can no longer see it. Tech is weird sometimes, isn’t it?
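To picture what such a "society of machines" could look like in software, here’s a minimal sketch of the idea - the specialist agents, the keyword-based routing, and the watchdog are all invented for the example:

```python
# A toy "society of AIs": a dispatcher routes each request to the
# specialist best suited for it, instead of one model doing everything.
from typing import Callable, Dict

def math_agent(task: str) -> str:
    return f"[math agent] evaluating: {task}"

def translation_agent(task: str) -> str:
    return f"[translation agent] translating: {task}"

def safety_agent(result: str) -> str:
    # LeCun's "AI police": a watchdog that reviews other agents' output.
    return f"[safety agent] checking: {result}"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "math": math_agent,
    "translate": translation_agent,
}

def dispatch(task: str) -> str:
    """Pick a specialist by crude keyword matching, then let the
    safety agent review the result before it is returned."""
    for keyword, agent in SPECIALISTS.items():
        if keyword in task.lower():
            result = agent(task)
            break
    else:
        result = f"[generalist] handling: {task}"
    print(safety_agent(result))  # every answer passes the watchdog
    return result

print(dispatch("math: 17 * 23"))
print(dispatch("translate 'bonjour' to English"))
```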
Principle 6: Humans will be the bosses, not the machines
Rest assured: it’s not Terminator that awaits us in the future (well, in principle). Experts believe the human-AI relationship will be clear: us on top, machines below. Why? Because AIs won’t have "free will" and will be designed with built-in limits.
Yann LeCun puts it bluntly: "Everyone will become a sort of CEO, or at least a manager. Humanity will rise in the hierarchy. We will have a level below us, that of AI systems. They might be smarter than us in some respects but they will do what we tell them to."
Imagine: you give orders to your AI that might be super skilled in maths or languages, but you maintain control. A bit like having a super assistant that might be better than you in certain areas, but who never makes the final decision. That would completely change the way we work, wouldn’t it?
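In code, "the human makes the final decision" can be as simple as a human-in-the-loop gate: the AI may only propose actions, and nothing runs without explicit approval. Here’s a minimal sketch (with `propose_action` as a hypothetical stand-in for a real planner or model):

```python
# A toy human-in-the-loop gate: the AI proposes, the human disposes.

def propose_action() -> str:
    # Hypothetical stand-in for whatever the AI assistant wants to do.
    return "send the weekly report to the whole team"

def execute(action: str) -> None:
    print(f"executing: {action}")

def run_with_human_approval() -> None:
    """The machine never acts on its own: every proposed action
    must be explicitly confirmed by the human 'boss'."""
    action = propose_action()
    answer = input(f"AI proposes to {action!r}. Approve? [y/N] ")
    if answer.strip().lower() == "y":
        execute(action)
    else:
        print("vetoed - nothing happens without your say-so")

if __name__ == "__main__":
    run_with_human_approval()
```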
Principle 7: Being intelligent isn’t everything in life
We often worry that super intelligent AIs will take over the world, but maybe we’re overestimating the power of intelligence. Look around you: are the brightest really the ones in charge of everything?
Yann LeCun wakes us up: "People give too much credit to pure intelligence. It’s not the only force that matters - there are also physical, biological forces, etc. Look at current politics, it’s not really the brightest who are in charge!"
Honestly, he has a point. Intelligence is cool, but it doesn’t do everything. An AI can be super smart but it can do nothing against a tsunami, a power outage, or even against a human who simply unplugs the cord! Not to mention charisma, emotional manipulation, or brute strength that often have more impact than clever reasoning. Maybe that’s what should reassure us (or worry us, depending on how you see it).
Principle 8: We want AIs that are predictable and respect our values
No one likes unpleasant surprises. With AIs, it’s the same: we want them to be predictable and to respect our values. Not just Western values or those of a tech elite, but the values of each and every one of us, in all their diversity.
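On the "predictable" side, one lever we already have today is deterministic generation: fix the random seed and decode greedily, so the same input always produces the same output. A minimal sketch, assuming a generic toy model rather than any particular library:

```python
import random
import zlib

def toy_generate(prompt: str, seed: int = 42) -> str:
    """Hypothetical toy generator: a fixed seed plus greedy decoding
    makes the output fully reproducible for a given prompt."""
    rng = random.Random(seed ^ zlib.crc32(prompt.encode()))
    vocab = ["yes", "no", "it", "depends"]
    out = []
    for _ in range(3):
        scores = {tok: rng.random() for tok in vocab}
        out.append(max(scores, key=scores.get))  # greedy: no sampling
    return " ".join(out)

# Same prompt, same seed => same answer, every single run.
assert toy_generate("will you behave?") == toy_generate("will you behave?")
print(toy_generate("will you behave?"))
```

Determinism alone doesn’t make a system aligned with anyone’s values, of course - but it’s a prerequisite for the kind of auditability the experts are asking for.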
Yann LeCun is crystal clear about this: "It takes a collaborative effort to design these systems aligned with human values... and the best way is to do it openly and collaboratively... so that anyone can build upon it and retain their own sovereignty."
Basically, it’s like saying: "Hey, let’s not just let a few big companies decide how AI will work!" The more ordinary people have a say, the more AI will respect the diversity of human values. Not everyone has the same definition of what is important, just, or moral - and our AIs should reflect that!
To wrap it all up
So, what do we do with these eight principles? In fact, they offer us a more realistic way of viewing AI, far from sci-fi movies that make us believe we will all end up like in The Matrix!
Marcus Weldon (who summarized it all nicely) explains that we can judge AI in two ways: by what it produces (the "what") and by how it gets there (the "how"). It’s a bit like judging a chef not only on whether the dish tastes good but also on how they cooked it.
Ultimately, what we want is not robots that do everything like us or that replace us (honestly, who needs a digital twin?). We want systems that make us more efficient, that free us from tedious tasks, all while respecting our values and remaining under our control.
AI is like any tool: it’s worth what we make of it. These principles remind us that it’s up to us to decide what we want to do with it, not the other way around. So, are you ready for a future where AI is our assistant and not our replacement?