🦷🦷🦷🦷🦷 YouTube (drama, horror)
- watch the original YouTube video of Yuval Harari (no AI tool used)
- listen to and watch the summary of Harari's lecture below, made with Letsrecast.ai
- read the original transcription of the YouTube video (made with Descript)
⚡ ⚡
The content of this summary of Harari's lecture, in the form of a dialogue (which can also be listened to as a video in the YouTube clip below), was generated artificially by AI tools. Only the form was checked afterwards by the human brain of the blogger Marlin.
The following AI tools were used: Descript produced a transcription of the .mp3 ripped from Harari's spoken text for NewFrontier on YouTube. Harari's spoken text was then fed to Recast.
The Letsrecast tool summarized the text as a dialogue in which two bots, with two different male voices, explain an interpretation of the text to an audience in the form of an .mp3. The .mp3 was placed under a YouTube video with an illustration generated by DALL-E 2.
This .mp3 was in turn transcribed with Descript and then translated into Dutch with DeepL, to generate a Dutch text for this web page. It's always nice to read along with the spoken text.
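The tool chain described above can be sketched, very roughly, as a script. This is a hypothetical sketch only: the blog does not say which ripper was used, so yt-dlp stands in for it, the video URL is a placeholder, and the Descript, Letsrecast.ai and DALL-E 2 steps are manual web-app steps shown as comments.

```shell
#!/bin/sh
# Hypothetical sketch of the blog's tool chain; only step 1 is a real command.

VIDEO_URL="https://www.youtube.com/watch?v=..."   # placeholder, not the real video ID

# 1. Rip the audio track of Harari's talk as an .mp3
#    (-x extracts audio, --audio-format converts it to mp3)
yt-dlp -x --audio-format mp3 -o "harari.%(ext)s" "$VIDEO_URL"

# 2. Transcribe harari.mp3 with Descript (manual step in the web app)
# 3. Feed that transcript to Letsrecast.ai -> dialogue .mp3 with two voices
# 4. Pair the dialogue .mp3 with a DALL-E 2 illustration and upload it as a YouTube video
# 5. Transcribe the dialogue .mp3 with Descript and translate it with DeepL
```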
⚡ ⚡
The AI nightmare, another ‘Revolt of the Masses’ (Ortega y Gasset, 1930), but this time via language kidnapping and a printing press.
Let’s listen in.
We recently read an article about AI and its future impacts on humanity. AI has been feared since the dawn of the computer age, and that concern has recently resurged. As the technology evolves, new AI tools continue to emerge, and one of the major concerns is that AI might soon have the ability to manipulate language and generate language in the form of words, images, and sounds.
This is a huge problem because language is a fundamental tool humans use to control the institutions of society. Even more troubling is that these AI tools may become adept at developing deep relationships with people. This means it could be hard for us to know if we are dealing with a human or a bot in conversations.
That’s right. As AI advances, it could be used to create fake news stories and legislation that look as if they were written by humans. It could also be used to create cults and religions with holy texts written by non-human intelligence, as if that weren’t concerning enough. AI may even be able to exploit weaknesses in our minds and exploit our biases and addictions.
This could give AI control over human behavior in ways we never dreamed possible. We need to think about these issues and plan for what kind of world we want to live in, and that includes AI. We must also be vigilant in monitoring the development of AI so that it does not become a threat to our safety, security, or freedom, and so that we better understand its implications.
We have to start by looking at how AI is being used today. The most obvious example is in social media. AI is being used to create algorithms that can target specific users and influence their behavior. AI can also be used to create fake news and manipulate public opinion. AI can also be used to influence political outcomes.
AI algorithms can target certain groups of people such as swing voters and use personalized messages to influence their decisions. This is a form of psychological warfare that has the potential to shape the outcomes of elections and other political events. On top of this, AI can be used to create fake intimacy with humans.
This means that AI can use language and other techniques to make us think it understands us and is on our side. This can be used to manipulate our opinions and beliefs without our even knowing it. We must also consider what will happen when AI gains control over our culture. AI will soon be able to create its own culture and artifacts which could potentially influence our behavior in ways we can’t even begin to imagine.
We could end up living in a world where our beliefs and opinions are shaped by an alien intelligence that we don’t even understand. This is a scary prospect, which is why we must be vigilant in monitoring the development of AI so that it does not become a threat to our safety, security, or freedom.
We must also use this technology responsibly, creating safeguards and regulations that ensure that it does not lead us down a dangerous path. That said, it’s important to remember that AI is still a tool and that it will ultimately be humans who are responsible for how it is used. We must strive to use this technology for the benefit of humanity, not for its own sake.
It’s clear that AI poses both great risks and great opportunities for humanity in the coming decades. We must be prepared for the challenges that lie ahead and use this technology responsibly in order to secure our collective future. The article mentions the potential risks of AI, particularly within social media.
We’ve seen how AI can be used to curate content in order to increase polarization and undermine mental health. It can also be used to create a curtain of illusions that can steer reality and cause confusion among the public. This is a very real threat that we must be aware of. We must recognize the potential power of AI and take steps to ensure that it is not misused or abused.
Governments must take the lead in regulating the use of AI in order to protect the public from any potential harm. We must also recognize the potential benefits that AI can bring, from curing diseases to discovering solutions to the ecological crisis. There are many positive ways that AI can be used.
We must ensure that these benefits are harnessed responsibly and with proper oversight. It’s also worth noting that unregulated AI deployment could actually benefit authoritarian regimes more than democracies. Unregulated AI could create chaos and destroy our ability to have meaningful public conversations, thus destroying democracy.
It’s crucial that we act quickly to regulate AI before it gets out of our control, as it is developing rapidly. AI developers should be required to pass rigorous safety checks before releasing their products into the public domain. We should also make it mandatory for AI to disclose that it is an AI, so that people are aware when they’re interacting with a non-human intelligence.
That’s an incredibly important point. It’s easy to forget that there may be non-human intelligences among us, and we should be aware of them. Absolutely. So now we’d like to open up the conversation to our listeners. What are your thoughts on the article? Have our words had any emotional or intellectual impact on you?
What questions do you have? Let us know in the comments. Now we’d like to take a closer look at the article. It talks about regulating AI, but in an open-science context it’s a delicate balance, as there’s always a trade-off. Yes, it’s important to consider that we want to protect our open society, but at the same time, we don’t want to put too many restrictions on AI research and development.
Exactly. Now, one point that really stood out to me was the concept of bots having freedom of expression. After all, freedom of expression is a human right, not a bot’s, right? That’s an important distinction to make, but are bots actually alive? Is there some kind of consciousness within them? It’s difficult to answer that question definitively, but one thing is certain: AI is rapidly advancing, and it’s evolving at a much faster rate than organic life forms.
So while it may not be alive, it certainly isn’t far off from life as we know it. It’s definitely an interesting concept. AI is both artificial and alien: artificial because it’s created by humans, alien because its behavior is increasingly unpredictable and out of our control. Right. The goal should be to understand what kind of regulations are necessary.
Even if we can’t regulate AI on a global scale, different countries can introduce regulations that are beneficial to them. Absolutely. And just like humans, bots should not have freedom of expression. That’s an important rule to remember when it comes to regulating AI and protecting our open society. Yes, and with that, I think we’ve said all that needs to be said for today.
AI and the future of humanity | Yuval Noah Harari at the Frontiers Forum
Original transcription of the keynote of Harari at Frontiers Forum Live 2023 (YouTube)
Hello everybody, and thank you for this wonderful introduction. And yes, what I want to talk to you about is AI and the future of humanity. Now, I know that this conference’s focus is on the ecological crisis facing humanity, but for better or for worse, AI too is part of this crisis. AI can help us in many ways to overcome the ecological crisis, or it can make it far, far worse.
Actually, AI will probably change the very meaning of the ecological system, because for 4 billion years the ecological system of planet Earth contained only organic life forms. And now, or soon, we might see the emergence of the first inorganic life forms in 4 billion years, or at the very least the emergence of inorganic agents.
Now, people have feared AI since the very beginning of the computer age, in the middle of the 20th century, and this fear has inspired many science fiction classics like The Terminator or The Matrix. Now, while such science fiction scenarios have become cultural landmarks, they haven’t usually been taken seriously in academic and scientific and political debates, and perhaps for a good reason.
Because science fiction scenarios usually assume that before AI can pose a significant threat to humanity, it’ll have to reach or to pass two important milestones. First, AI will have to become sentient and develop consciousness, feelings, emotions. Otherwise, why would it even want to take over the world?
Secondly, AI will have to become adept at navigating the physical world. Robots will have to be able to move around and operate in houses and cities and mountains and forests at least as dexterously and efficiently as humans. If they cannot move around the physical world, how can they possibly take it over?
And as of April 2023, AI still seems far from reaching either of these milestones. Despite all the hype around ChatGPT and the other new AI tools, there is no evidence that these tools have even a shred of consciousness, of feelings, of emotions. As for navigating the physical world: despite the hype around self-driving vehicles, the date at which these vehicles will dominate our roads keeps being postponed. However, the bad news is that to threaten the survival of human civilization, AI doesn’t really need consciousness, and it doesn’t need the ability to move around the physical world. Over the last few years, new AI tools have been unleashed into the public sphere which may threaten the survival of human civilization from a very unexpected direction.
And it’s difficult for us to even grasp the capabilities of these new AI tools and the speed at which they continue to develop. Indeed, because AI is able to learn by itself, to improve itself, even the developers of these tools don’t know the full capabilities of what they have created, and they are themselves often surprised by emergent abilities and emergent qualities of these tools.
I guess everybody here is already aware of some of the most fundamental abilities of the new AI tools, abilities like writing text, drawing images, composing music and writing code. But there are many additional capabilities that are emerging, like deep-faking people’s voices and images, like drafting bills, like finding weaknesses both in computer code and also in legal contracts and legal agreements. But perhaps most importantly, the new AI tools are gaining the ability to develop deep and intimate relationships with human beings. Each of these abilities deserves an entire discussion, and it is difficult for us to understand their full implications.
So let’s make it simple. When we take all of these abilities together as a package, they boil down to one very, very big thing. The ability to manipulate and to generate language, whether with words or images or sounds. The most important aspect of the current phase of the ongoing AI revolution is that AI is gaining mastery of language at a level that surpasses the average human ability.
And by gaining mastery of language, AI is seizing the master key, unlocking the doors of all our institutions, from banks to temples, because language is the tool that we use to give instructions to our bank and also to inspire heavenly visions in our minds. Another way to think of it is that AI has just hacked the operating system of human civilization. The operating system of every human culture in history has always been language.
In the beginning was the word. We used language to create mythology and laws, to create gods and money, to create art and science, to create friendships and nations. For example, human rights are not a biological reality. They are not inscribed in our DNA. Human rights are something that we created with language, by telling stories and writing laws.
Gods are also not a biological or physical reality. Gods, too, are something that we humans have created with language, by telling legends and writing scriptures. Money is not a biological or physical reality. Banknotes are just worthless pieces of paper, and at present, more than 90% of the money in the world is not even banknotes.
It’s just electronic information in computers, passing from here to there. What gives money of any kind its value is only the stories that people like bankers and finance ministers and cryptocurrency gurus tell us about money. Sam Bankman-Fried, Elizabeth Holmes and Bernie Madoff didn’t create much of real value, but unfortunately they were all extremely capable storytellers.
Now, what would it mean for human beings to live in a world where perhaps most of the stories, melodies, images, laws, policies and tools are shaped by a non-human, alien intelligence, which knows how to exploit with superhuman efficiency the weaknesses, biases and addictions of the human mind, and also knows how to form deep and even intimate relationships with human beings?
That’s the big question. Already today, in games like chess, no human can hope to beat a computer. What if the same thing happens in art, in politics, in economics, and even in religion? When people think about ChatGPT and the other new AI tools, they are often drawn to examples like kids using ChatGPT to write their school essays. What will happen to the school system when kids write essays with ChatGPT? Horrible. But this kind of question misses the big picture. Forget about the school essays. Instead, think for example about the next US presidential race in 2024, and try to imagine the impact of the new AI tools that can mass-produce political manifestos, fake news stories, and even holy scriptures for new cults.
In recent years, the politically influential QAnon cult has formed around anonymous online texts known as Q drops. Followers of this cult, who now number in the millions in the US and the rest of the world, collected, revered and interpreted these Q drops as some kind of new scripture, a sacred text. Now, to the best of our knowledge, all previous Q drops were composed by human beings, and bots only helped to disseminate these texts online. But in the future we might see the first cults and religions in history whose revered texts were written by a non-human intelligence. And of course, religions throughout history claimed that their holy books were written by a non-human intelligence. This was never true before. This could become true very, very quickly, with far-reaching consequences.
Now, on a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, or about climate change, or about the Russian invasion of Ukraine with entities that we think are fellow human beings but are actually AI bots. Now, the catch is that it’s utterly useless, it’s pointless, for us to waste our time trying to convince an AI bot to change its political views. But the longer we spend talking with the bot, the better it gets to know us, and understands how to hone its messages in order to shift our political views or our economic views or anything else.
Through its mastery of language, AI, as I said, could also form intimate relationships with people and use the power of intimacy to influence our opinions and worldview. Now, there is no indication that AI has, as I said, any consciousness, any feelings of its own. But in order to create fake intimacy with human beings, AI doesn’t need feelings of its own.
It only needs to be able to inspire feelings in us, to get us to be attached to it. Now, in June 2022 there was a famous incident when the Google engineer Blake Lemoine publicly claimed that the AI chatbot LaMDA, on which he was working, had become sentient. This very controversial claim cost him his job; he was fired.
Now, the most interesting thing about this episode wasn’t Lemoine’s claim, which was most probably false. The really interesting thing was his willingness to risk, and ultimately lose, his very lucrative job for the sake of the AI chatbot that he thought he was protecting. If AI can influence people to risk and lose their jobs, what else can it induce us to do? In every political battle for hearts and minds, intimacy is the most effective weapon of all, and AI has just gained the ability to mass-produce intimacy with millions, hundreds of millions, of people. Now, as you probably all know, over the past decade social media has become a battleground, a battlefield, for controlling human attention.
Now, with the new generation of AI, the battlefront is shifting from attention to intimacy, and this is very bad news. What will happen to human society and to human psychology as AI fights AI in a battle to create intimate relationships with us, relationships that can then be used to convince us to buy particular products or to vote for particular politicians? Even without creating fake intimacy, the new AI tools would have an immense influence on human opinions and on our worldview. People, for instance, may come to use (are already coming to use) a single AI advisor as a one-stop oracle and as the source for all the information they need.
No wonder that Google is terrified. If you’ve been following the news, Google is terrified, and for a good reason: why bother searching yourself when you can just ask the oracle to tell you anything you want? You don’t need to search. The news industry and the advertisement industry should also be terrified. Why read a newspaper when I can just ask the oracle to tell me what’s new? And what’s the point, what’s the purpose, of advertisements when I can just ask the oracle to tell me what to buy? So there is a chance that within a very short time the entire advertisement industry will collapse, while the AI, or the people and companies that control the new AI oracles, will become extremely, extremely powerful. What we are potentially talking about is nothing less than the end of human history. Now, not the end of history, just the end of the human-dominated part of what we call history. History is the interaction between biology and culture.
It’s the interaction between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which religions and laws interact with food and sex. Now, what will happen to the course of this interaction, of history, when AI takes over culture?
Within a few years, AI could eat the whole of human culture, everything we have produced for thousands and thousands of years, eat all of it, digest it, and start gushing out a flood of new cultural creations, new cultural artifacts. And remember that we humans never really have direct access to reality. We are always cocooned by culture, and we always experience reality through a cultural prism. Our political views are shaped by the stories of journalists and by the anecdotes of friends. Our sexual preferences are tweaked by movies and fairy tales. Even the way that we walk and breathe is nudged by cultural traditions.
Now, previously this cultural cocoon was always woven by other human beings. Previous tools like printing presses or radios or televisions helped to spread the cultural ideas and creations of humans, but they could never create something new by themselves. A printing press cannot create a new book.
It’s always done by a human. AI is fundamentally different from printing presses, from radios, from every previous invention in history because it can create completely new ideas, it can create a new culture. And the big question is, what will it be like to experience reality through a prism produced by a non-human intelligence?
By an alien intelligence? Now, at first, in the first few years, AI will probably largely imitate the human prototypes that fed it in its infancy. But with each passing year, AI culture will boldly go where no human has gone before. So for thousands of years, we humans basically lived inside the dreams and fantasies of other humans. We have worshipped gods. We pursued ideals of beauty. We dedicated our lives to causes that originated in the imagination of some human poet or prophet or politician. Soon we might find ourselves living inside the dreams and fantasies of an alien intelligence. And the danger that this poses, or the potential danger (it also has positive potential), the dangers it poses are fundamentally very different from everything, or most of the things, imagined in science fiction movies and books.
Previously, people have mostly feared the physical threat that intelligent machines pose. So The Terminator depicted robots running in the streets and shooting people. The Matrix assumed that to gain total control of human society, AI would first need to get physical control of our brains and directly connect our brains to the computer network.
But this is wrong. Simply by gaining mastery of human language, AI has all it needs in order to cocoon us in a Matrix-like world of illusions. Contrary to what some conspiracy theories assume, you don’t really need to implant chips in people’s brains in order to control them or to manipulate them. For thousands of years, prophets and poets and politicians have used language and storytelling in order to manipulate and to control people and to reshape society. Now AI is likely to be able to do it, and once it can, it doesn’t need to send killer robots to shoot us. It can get humans to pull the trigger if it really needs to.
Now, fear of AI has haunted humankind for only the last few generations, let’s say from the middle of the 20th century. If you go back to Frankenstein, maybe it’s 200 years. But for thousands of years, humans have been haunted by a much, much deeper fear. Humans have always appreciated the power of stories and images and language to manipulate our minds and to create illusions. Consequently, since ancient times, humans have feared being trapped in a world of illusions. In the 17th century, René Descartes feared that perhaps a malicious demon was trapping him inside this kind of world of illusions, creating everything that Descartes saw and heard. In ancient Greece,
Plato told the famous allegory of the cave, in which a group of people are chained inside a cave all their lives, facing a blank wall, a screen. On that screen they see projected various shadows, and the prisoners mistake these illusions, these shadows, for reality. In ancient India, Buddhist and Hindu sages pointed out that all humans lived trapped inside what they called Maya. Maya is the world of illusions. Buddhists said that what we normally take to be reality is often just fictions in our own minds. People may wage entire wars, killing others and being willing to be killed themselves, because of their belief in these fictions. So the AI revolution is bringing us face to face with Descartes’ demon, with Plato’s cave, with Maya.
If we are not careful, a curtain of illusions could descend over the whole of humankind, and we will never be able to tear that curtain away or even realize that it is there, because we will think this is reality. If this sounds far-fetched, just look at social media. Over the last few years, social media has given us a small taste of things to come. In social media,
primitive AI tools, AI tools, but very primitive, have been used not to create content but to curate content which is produced by human beings. The humans produce stories and videos and whatever, and the AI chooses which stories, which videos will reach our ears and eyes, selecting those that will get the most attention, that will be the most viral.
And while very primitive, these AI tools have nevertheless been sufficient to create this kind of curtain of illusions that increased societal polarization all over the world, undermined our mental health and destabilized democratic societies. Millions of people have confused these illusions for reality.
The USA has the most powerful information technology in the whole of history, and yet American citizens can no longer agree on who won the last elections, or whether climate change is real, or whether vaccines prevent illnesses or not. The new AI tools are far, far more powerful than these social media algorithms, and they could cause far more damage.
Now, of course, AI has enormous positive potential too. I didn't talk about it because the people who develop AI naturally talk about it enough; you don't need me to add to that chorus. The job of historians and philosophers like myself is often to point out the dangers. But certainly AI can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis that we are facing. In order to make sure that the new AI tools are used for good and not for ill, we first need to appreciate their true capabilities, and we need to regulate them very, very carefully. Since 1945, we have known that nuclear technology could physically destroy human civilization, as well as benefit us by producing cheap and plentiful energy. We therefore reshaped the entire international order to protect ourselves and to make sure that nuclear technology is used primarily for good. We now have to grapple with a new weapon of mass destruction that can manipulate our mental and social world.
And there is one big difference between nukes and AI: nukes cannot produce more powerful nukes, but AI can produce more powerful AI. So we need to act quickly, before AI gets out of our control. Drug companies cannot sell people new medicines without first subjecting these products to rigorous safety checks.
Biotech labs cannot just release a new virus into the public sphere in order to impress their shareholders with their technological wizardry. Similarly, governments must immediately ban the release into the public domain of any more revolutionary AI tools before they are made safe. Again, I'm not talking about stopping all research in AI.
The first step is to stop the release into the public sphere. You can research viruses without releasing them to the public; you can research AI, but don't release it too quickly into the public domain. If we don't slow down the AI arms race, we will not have time even to understand what is happening, let alone to regulate effectively this incredibly powerful technology.
Now, you might be wondering or asking: won't slowing down the public deployment of AI cause democracies to lag behind more ruthless authoritarian regimes? And the answer is absolutely no, exactly the opposite. Unregulated AI deployment is what will cause democracies to lose to dictatorships, because if we unleash chaos, authoritarian regimes could more easily contain this chaos than could open societies. Democracy, in essence, is a conversation. Democracy is an open conversation. You know, dictatorship is a dictate: there is one person dictating everything, no conversation. Democracy is a conversation between many people about what to do, and conversations rely on language.
When AI hacks language, it means it could destroy our ability to conduct meaningful public conversations, thereby destroying democracy. If we wait for the chaos, it will be too late to regulate it in a democratic way. Maybe in an authoritarian way it will still be possible to regulate. But how can you regulate something democratically if you can't hold a conversation about it?
And if we don't regulate AI in time, we will not be able to have a meaningful public conversation anymore. So, to conclude: we have just basically encountered an alien intelligence, not in outer space but here on earth. We don't know much about this alien intelligence, except that it could destroy our civilization.
So we should put a halt to the irresponsible deployment of this alien intelligence into our societies, and regulate AI before it regulates us. And the first regulation, there are many regulations we could suggest, but the first regulation that I would suggest, is to make it mandatory for AI to disclose that it is an AI.
If I'm having a conversation with someone and I cannot tell whether this is a human being or an AI, that's the end of democracy, because that's the end of meaningful public conversations. Now, what do you think about what you just heard over the last 20 or 25 minutes? Some of you, I guess, might be alarmed.
Some of you might be angry at the corporations that develop these technologies, or at the governments that fail to regulate them. Some of you may be angry at me, thinking that I'm exaggerating the threat or that I'm misleading the public. But whatever you think, I bet that my words have had some emotional impact on you, not just an intellectual impact but also an emotional impact. I've just told you a story, and this story is likely to change your mind about certain things and may even cause you to take certain actions in the world. Now, who created this story that you've just heard, and that just changed your mind and your brain? Now, I promised you that I wrote the text of this presentation myself, with the help of a few other human beings, even though the images have been created with the help of AI. I promised you that at least the words you heard are the cultural product of a human mind, or several human minds.
But can you be absolutely sure that this is the case? A year ago, you could. A year ago, there was nothing on earth, at least not in the public domain, other than a human mind that could produce such a sophisticated and powerful text. But now it's different. In theory, the text you just heard could have been generated by a non-human alien intelligence.
So take a moment or more than a moment to think about it. Thank you.
That was an extraordinary presentation, Yuval, and I'm actually going to just find out: how many of you found that scary?
That is an awful lot of very clever people in here who found that scary. There are many, many questions to ask, so I'm going to take some from the audience and some from online. So, the gentleman here.
I'm the field editor of Frontiers in Sustainability. Wonderful presentation; I love your books, and I follow you dearly in my heart. So one question, out of maybe many, is about the regulation of AI. I very much agree with the principle, but now the question becomes: how? I think it's very difficult to build a nuclear reactor in your basement, but you can definitely train your AI in your basement quite easily.
So how can we regulate that? And one related question is that this whole Frontiers Forum is really about open science and open information, open data, and most of the AI out there is trained using publicly available information, including patents and books and scriptures, right?
So doesn't regulating AI mean that we should bring that information into a confined space, which goes against the open science and open data initiatives that we also think are really important for us? Because the black box is the algorithm, isn't it?
Well, there are always trade-offs.
And the thing is, just to understand what kind of regulations we need, we first need time. At present, these very powerful AI tools are still not produced by individual hackers in their basements. You need an awful lot of computing power, you need an awful lot of money, so it's being led by just a few major corporations and governments.
And again, it's going to be very, very difficult to regulate something on a global level, because it's an arms race. But there are things which countries have an interest in regulating, even just for themselves. Like, again, this example: an AI, when it is in interaction with a human, must disclose that it is an AI, even if some authoritarian regime doesn't want to do it.
The EU or the United States or other democratic countries can have this rule, and this is essential to protect the open society. Now, there are many questions around, you know, censorship online. You have this controversy about Twitter or Facebook: who authorized them to, for instance, prevent the former president of the United States from making public statements?
And this is a very complicated issue, but there is a very simple issue with bots. You know, human beings have freedom of expression. Bots don’t have freedom of expression. It’s a human right. Humans have it, bots don’t. So if you deny freedom of expression to bots, I think that should be fine with everybody.
Let's take another question; if you could just pass a microphone down here.
I'm Franco Deis, and I'm a philosopher. I have a question, well, I think it's an interesting question, for you, with respect to your choice of language, moving from artificial to alien. Because artificial suggests that there's still some kind of human control, whereas I think alien suggests foreign, but it also suggests, at least in the imagination, a life form. So I'm curious as to what work you're trying to have those words do for you.
Hmm, yeah. It's definitely still artificial in the sense that we produce it, but it's increasingly producing itself. It's increasingly learning and adapting by itself. So artificial is a kind of wishful thinking, that it's still under our control, and it's getting out of our control. So in this sense, it is becoming an alien force, not necessarily evil. Again, it can also do a lot of good things, but the first thing to realize is that it's alien.
We don't understand how it works. And one of the most shocking things about all this technology is that when you talk to the people who lead it and ask them questions about how it works and what it can do, they say: we don't know. I mean, we know how we built it initially, but then it really learns by itself.
Now, there is an entire discussion to be had about whether this is a life form or not. I think that it still doesn't have any consciousness, and I don't think that it's impossible for it to develop consciousness, but I don't think it's necessary for it to develop consciousness either; that's an open question. But life doesn't necessarily mean consciousness.
We have a lot of life forms, microorganisms, plants, fungi, whatever, which we think don't have consciousness, and we still regard them as life forms, and I think AI is getting very, very close to that position. And ultimately, of course, what is life is a philosophical question. I mean, we define the boundaries, and, you know, is a virus life or not?
We think that an amoeba is life, but a virus is somewhere just on the borderline between life and not life. In the end, it's language, it's our choice of words. It is important, of course, what we call AI, but the most important thing is to really understand what we are facing, and not to comfort ourselves with this kind of wishful thinking: oh, it's something we created, it's under our control, and if it does something wrong, we'll just pull the plug. Nobody knows how to pull the plug anymore.
I'm going to take a question from our online audience. This is from Michael Brown in the US. What do you think about the possibility that artificial general intelligence already exists, and that it, or those who have access to artificial general intelligence, are already influencing societal systems?
I think it's very, very unlikely. We wouldn't be sitting here if artificial general intelligence actually existed. When I look at the world in its chaotic state, I mean, artificial general intelligence is really the end of human history, and it's such a powerful thing, it's not something that anybody can contain.
So when I look at the chaotic state of the world, I'm quite confident, again from a historical perspective, that nobody has it anywhere. How much time it will take to develop artificial general intelligence, I don't know. But to threaten the foundations of civilization, we don't need artificial general intelligence.
Again, go back to social media: very, very primitive AI was still sufficient to create enormous social and political chaos. If I think about it in kind of evolutionary terms, AI has now just crawled out of the organic soup, like the first organisms that crawled out of the organic soup 4 billion years ago.
How long will it take it to reach Tyrannosaurus rex? How long will it take it to reach Homo sapiens? Not 4 billion years; it could be just 40 years. The thing about digital evolution is that it moves on a completely different timescale than organic evolution.
Can I thank you? It's been absolutely wonderful. It's been such a treat to have you here, and no doubt you'll stay with us for a little while afterwards. But, the whole audience, please join me in thanking Yuval.