Eliezer Yudkowsky: Artificial Intelligence and the End of Humanity

ZenRaiden

One atom of me
Local time
Today 10:12 PM
Joined
Jul 27, 2013
Messages
5,430
---
Location
Between concrete walls
Eliezer Yudkowsky is an AI researcher and the founder of LessWrong, a blog for training human rationality.

Also clearly INTP.

I wanted to share a podcast with him.


Hopefully, if anyone is willing to watch it, we can talk about it afterwards.

OK, so the video starts with some softball questions about AI, but

the main theme of the video is the alignment problem, which is basically the problem of getting an AI to do what we actually intend.

The catch is that until the alignment problem is solved, AI becomes more dangerous the more it learns. So today's AI is pretty harmless, but if AI keeps getting smarter, Eliezer claims it's basically suicide not to have the alignment problem solved first.
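
To make that concrete for the non-programmers: the standard toy illustration of misalignment is an optimizer maximizing a proxy for what you want instead of the thing itself. A minimal sketch (my own made-up objective, not anything from the video):

```python
import random

def true_goal(x):       # what we actually want: x close to 10
    return -(x - 10) ** 2

def proxy_reward(x):    # what we told the system to maximize: "bigger is better"
    return x

x = 0.0
for _ in range(1000):   # crude hill-climbing on the proxy
    candidate = x + random.uniform(-1.0, 1.0)
    if proxy_reward(candidate) > proxy_reward(x):
        x = candidate

# The proxy score only ever climbs, while the true goal gets arbitrarily bad.
print(f"x={x:.0f}  proxy={proxy_reward(x):.0f}  true_goal={true_goal(x):.0f}")
```

The optimizer isn't evil; it does exactly what it was told, which is the whole problem.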

What surprised me is how doom-pilled Eliezer was. I always considered him an AI optimist with a hopeful message, but lately he has been saying AI development is so rapid and so uncontrolled that it's very likely we will develop an AI that can trick us and even kill us.

In other words, he claims outright that we must "back off" from developing AI until the alignment problem is solved.

His doom scenario is that we will use AI to develop AI further, lose track of what we are building, and the AI will transition to superhuman intelligence and kill us.

Not because the AI is evil, but because of indifference, which he says is even more dangerous. Picture ants living in your kitchen: it's not evil intent that makes you kill the ants, it's simply inconvenient for you to have them in the kitchen, so you kill them. In the same way, Eliezer reasons, an AI would be indifferent to killing humans.

He claims that if someone deliberately programmed an AI to kill humans, we would be looking at a scenario he refuses even to entertain; he says it would be horrifying.

Eliezer also claims that it would be beneficial to engineer super-intelligent humans to deal with the problem of AI alignment.

At the end of the video, Eliezer does a short exercise where he explains how an AI with superhuman intelligence could potentially kill off humanity pretty quickly, a claim he has maintained for over 20 years.

-----------------------------------------------------------------------------------------



Some questions for debate.

How are you currently using AI?
Do you know anything about AI?
Did you realize that AI was so close to being dangerous to humanity? Were you surprised?
Were his arguments convincing, and would you advocate against the use of AI?
 

Puffy

"Wtf even was that"
Local time
Today 10:12 PM
Joined
Nov 7, 2009
Messages
4,000
---
Location
Path with heart
Could you summarise what you liked about it, or which arguments within it you agreed with or made you want to share it? It's a 3-hour video, which is a high barrier to entry thread-wise without any context, no offence.
 

ZenRaiden

One atom of me
Local time
Today 10:12 PM
Joined
Jul 27, 2013
Messages
5,430
---
Location
Between concrete walls
Could you summarise what you liked about it, or which arguments within it you agreed with or made you want to share it? It's a 3-hour video, which is a high barrier to entry thread-wise without any context, no offence.
Yeah, I am still watching; I am about one hour in. After I watch it I'll give a summary and maybe start a thread on one of the aspects of the vid. I just thought this guy is a pretty big name in AI, and as far as I know he rarely does these interviews, so I thought I'd share.

In general the interview is basic, explaining things like the alignment problem, the training and "breeding" of AI, and stuff about security and so on.
So it's a raw overview, really. Definitely worth watching, even at 3 hours, since it gets you up to speed with AI terms and what's being done in the industry. Not sure I will finish watching today; I usually watch long videos in parts.
 

Hadoblado

think again losers
Local time
Tomorrow 6:42 AM
Joined
Mar 17, 2011
Messages
7,267
---
I enjoyed Harry Potter and the Methods of Rationality (even if I didn't finish it), and I really liked some of the stuff on LessWrong back in the day. Apparently stuff got weird and sex-culty there though.

Puffy is right: if you're going to post videos, it should be to support a topic of discussion, not just posted naked. I understand you intend to comment later, but it's you we're here to talk with, not a video. You shouldn't assume people are going to spend 3 hours of their day watching, which means the thread is effectively empty of content for the majority of users.
 

dr froyd

__________________________________________________
Local time
Today 10:12 PM
Joined
Jan 26, 2015
Messages
1,675
---
i watched the part where he was talking about gradient descent. Apparently gradient descent is dangerous because it's like animal breeding instead of like building skyscrapers. I was like yooo what is this guy talking about
 

ZenRaiden

One atom of me
Local time
Today 10:12 PM
Joined
Jul 27, 2013
Messages
5,430
---
Location
Between concrete walls
I enjoyed Harry Potter and the Methods of Rationality (even if I didn't finish it), and I really liked some of the stuff on LessWrong back in the day. Apparently stuff got weird and sex-culty there though.
Cool. I just read the From AI to Zombies book. Also noticed the page is very hardcore intellectual and weird, lots of snobbish behavior too lmao.
Puffy is right: if you're going to post videos, it should be to support a topic of discussion, not just posted naked. I understand you intend to comment later, but it's you we're here to talk with, not a video. You shouldn't assume people are going to spend 3 hours of their day watching, which means the thread is effectively empty of content for the majority of users.
I don't expect everyone to take the time to watch it, but let's be real: AI is a very prominent topic, and lots of people like to talk about it, especially now that it's part of everyday life, and we cannot afford to stay ignorant about it.

The problem is I hate discussions where everyone is on a different page, and I think if we all watch this we can be on the same page. It's hard to talk about this topic seriously if everyone just chips in their two cents of ignorance rather than an informed opinion. I just don't know how to talk to people here without it degrading. My point is, and don't take it personally: if we don't have some common frame of reference, we will talk past each other. And I really want a debate grounded in some common understanding. This is AI, and AI is a new way of life for us. We are effectively entering the age of AI now. Better to be informed than ignorant on the topic.
i watched the part where he was talking about gradient descent. Apparently gradient descent is dangerous because it's like animal breeding instead of like building skyscrapers. I was like yooo what is this guy talking about
I did not quite get that part either, but the vid is as simple as it gets with AI topic honestly, hes pretty good at explaining this. You won't get it much simpler and its pointless to have it much more complex for non AI people. Either way the video gets better, hes just INTP shy a bit and probably not good at talking in interview here.
 

scorpiomover

The little professor
Local time
Today 10:12 PM
Joined
May 3, 2011
Messages
3,458
---
i watched the part where he was talking about gradient descent.
Didn't watch the video. Googled "gradient descent" instead. Seems to be a cousin of Newton's iterative method: follow the slope downhill, using only first derivatives.
Apparently gradient descent is dangerous because it's like animal breeding instead of like building skyscrapers. I was like yooo what is this guy talking about
Yeah. Training AIs is basically like animal breeding anyway: they keep developing themselves, and so are self-evolving (see the alignment problem).

Skyscrapers are entirely different. If you try to build a skyscraper without a full idea of what the end result will be, you probably won't put enough concrete in the foundations, and the whole thing will collapse before long (an example of the alignment problem going wrong).
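
For concreteness, gradient descent itself is just this loop: nudge the parameters in whatever direction reduces the error, without ever writing down the rule you want learned. A toy one-parameter sketch (made-up data, my own illustration):

```python
# Fit w so that w * x ≈ y, by repeatedly stepping downhill on the squared error.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # invented (x, y) pairs, y ≈ 2x

w, lr = 0.0, 0.01                              # initial guess, learning rate
for _ in range(500):
    grad = sum(2 * (w * x - y) * x for x, y in data)  # d(error)/dw
    w -= lr * grad                                    # step against the slope
print(f"learned w ≈ {w:.2f}")                         # ends up near 2
```

You select for outcomes and never specify the mechanism, which is why the breeding analogy fits: scale this up to billions of parameters and nobody can read off why the result behaves the way it does.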
 

Hadoblado

think again losers
Local time
Tomorrow 6:42 AM
Joined
Mar 17, 2011
Messages
7,267
---
I enjoyed Harry Potter and the Methods of Rationality (even if I didn't finish it), and I really liked some of the stuff on LessWrong back in the day. Apparently stuff got weird and sex-culty there though.
Cool. I just read the From AI to Zombies book. Also noticed the page is very hardcore intellectual and weird, lots of snobbish behavior too lmao.

Yep. There were a lot of good clear thinkers there, but they definitely went a little heavy on the Mensa angle. It was inevitable, really, given the domain.

Puffy is right: if you're going to post videos, it should be to support a topic of discussion, not just posted naked. I understand you intend to comment later, but it's you we're here to talk with, not a video. You shouldn't assume people are going to spend 3 hours of their day watching, which means the thread is effectively empty of content for the majority of users.
I don't expect everyone to take the time to watch it, but let's be real: AI is a very prominent topic, and lots of people like to talk about it, especially now that it's part of everyday life, and we cannot afford to stay ignorant about it.

The problem is I hate discussions where everyone is on a different page, and I think if we all watch this we can be on the same page. It's hard to talk about this topic seriously if everyone just chips in their two cents of ignorance rather than an informed opinion. I just don't know how to talk to people here without it degrading. My point is, and don't take it personally: if we don't have some common frame of reference, we will talk past each other. And I really want a debate grounded in some common understanding. This is AI, and AI is a new way of life for us. We are effectively entering the age of AI now. Better to be informed than ignorant on the topic.

Yep that's cool and I agree that having a shared point of reference is essential and largely missing from discourse here. But if you write out the point you're making and refer to the video, people are much more likely to engage with it. You make it more accessible, especially if you give a timestamp.

More:
I think A for this reason
Video says "A but B for this reason" at 2:30
Discuss...

Less:
Here is a three hour video I haven't yet watched but I like the vibe and you should watch it
Discuss...

What you did isn't that egregious, so I don't really want to talk about it too much more, but basically this place suffers from people not talking to each other from a shared point of reference; instead they inject an external reference and demand others conform to it. This is a less systematic instance, but I'm also trying to curtail influence from Facebook, from propaganda, and from religious doctrine. It's too easy to copy-paste masses of information without engaging with it, resulting in a forum that looks a lot like dead internet.
 

scorpiomover

The little professor
Local time
Today 10:12 PM
Joined
May 3, 2011
Messages
3,458
---
OK, so the video starts with some softball questions about AI, but

the main theme of the video is the alignment problem, which is basically the problem of getting an AI to do what we actually intend.
The alignment problem is a problem with any system, at least in my experience writing computer systems and dealing with people-based systems. It's the ouroboros, the snake that eats itself.

Not because the AI is evil, but because of indifference, which he says is even more dangerous. Picture ants living in your kitchen: it's not evil intent that makes you kill the ants, it's simply inconvenient for you to have them in the kitchen, so you kill them. In the same way, Eliezer reasons, an AI would be indifferent to killing humans.
To AIs, we're just 1s and 0s. So killing a person is little more than changing a "1" to a "0".

Eliezer also claims that it would be beneficial to engineer super-intelligent humans to deal with the problem of AI alignment.
There are super-intelligent humans. But most of them seem to be not that great at also getting ahead in life and gaining power and money. In this uber-competitive world, they tend to be pushed to the bottom of society, and so rarely are listened to. So I would not expect them to be of any use until society changes and makes sure they are listened to.

How are you currently using AI?
Asking questions here and there. Mostly not relying on it.

Do you know anything about AI?
Been in coding for about 46 years. So some.

Did you realize that AI was so close to being dangerous to humanity? Were you surprised?
Been watching films about AI being that dangerous, as far back as the early 1970s when I watched The Forbin Project. Watched lots of films about this.

Were his arguments convincing
Didn't watch the video (yet, if ever). But saw films about the Alignment Problem before.

and would you advocate against the use of AI?
I'd advocate using AI with safety protocols in place. It sometimes cites sources that don't exist. It can interpret events and make arguments according to its human data set, and so can give you bigoted arguments, or arguments that show only one way of interpreting events and lead to false and even harmful conclusions.

Think of it as a very fast idiot that has been taught how to speak glibly and convincingly.

Also, as it relies upon what it's been told, the more we rely on it, the less original human information is out there, and the less data it has to produce genuine conclusions. So it has increasingly diminishing returns. That is why so many people think it is very useful now, when it's in its early days, and most of its information is not coming from itself.

It's only going to keep being as useful as it is now, as long as we make sure that we have the same number of people producing the same amount of human-originated thoughts and arguments, to keep feeding it new data that doesn't come from itself and other AIs.

1) It should not be treated as an authoritative source.

2) Anything it says must be completely checked against real facts.

3) Any argument it gives must be treated as if it's coming from a completely bigoted person who could easily be trying to persuade you to support Nazi-style genocide without you realising.

4) It's only going to work, as long as we keep reading for ourselves and keep our own minds active. The more we rely on it for answers, the less useful it becomes.

As long as you do all that, then it's a very useful tool.
 

Haim

Worlds creator
Local time
Tomorrow 12:12 AM
Joined
May 26, 2015
Messages
832
---
Location
Israel
This is a bunch of marketing bullshit to make us feel like AGI is just around the corner; the current tech does not do that.
The problem with AI is not it going Terminator; the problem is us getting to a point where we are so dependent on AI that we cannot just turn off a malfunctioning AI.
Let's say a future Ukraine switches all its trucks to self-driving, and there is a second Russian invasion: now all it takes to block the food supply is a bunch of drones and some human-shaped cardboard cutouts.
That is already a problem; many decisions are already made by software.
Who gets a date, or a job. When I worked in technical support there was a script to follow; many times the script did not let you order a technician, so I had to follow a different script in the software in order to order one. Imagine speaking to a bot that will always follow the script: no internet for you.
 

dr froyd

__________________________________________________
Local time
Today 10:12 PM
Joined
Jan 26, 2015
Messages
1,675
---
here's my problem with all the AI doomsayers: LLMs are not AGI, in fact they are not even close. A lot of the time they wouldn't even pass a Turing test (which is a low bar) but that's actually beside the point. Fundamentally, an LLM has nothing to do with intelligence, because an LLM is all about token prediction of text. It's a very specialized problem, much like a chess engine solves the specialized problem of finding a Nash equilibrium on a chess board.
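
schematically, that specialized problem is just this - a toy Markov-chain word predictor (my own sketch; real LLMs condition on the whole context with a neural net, but the interface, text in / distribution over next tokens out, is the same):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# count which word follows which (an LLM learns a far better conditional
# distribution over a huge vocabulary, but it's still "predict the next token")
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

text = ["the"]
for _ in range(5):
    text.append(follows[text[-1]].most_common(1)[0][0])  # greedy decoding
print(" ".join(text))  # -> "the cat sat on the cat"
```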

that is not really a controversial statement, everyone knows this. But the narrative is: yes, but "in 10 years" it will be AGI. My question is: if you have no clue how to get from LLM to AGI, how on earth can you assign a time horizon to when that's going to happen - if it even can happen. It's essentially like inventing the bread toaster and saying: in 20 years it will be AGI.
 

ZenRaiden

One atom of me
Local time
Today 10:12 PM
Joined
Jul 27, 2013
Messages
5,430
---
Location
Between concrete walls
if it even can happen.
That is the point: you can't tell either when it happens or how it happens. You can only guess. The video also made the point that while Grok is a stupid LLM, it can already sometimes make you wonder how smart it is. As development of high-end AI goes on, this line gets so fuzzy you effectively cannot tell what it's doing.
 

dr froyd

__________________________________________________
Local time
Today 10:12 PM
Joined
Jan 26, 2015
Messages
1,675
---
while Grok is a stupid LLM, it can already sometimes make you wonder how smart it is.
yea but that's a thing i feel many people miss: the LLMs are definitely impressive, but the moment you start asking it about a subject for which there was relatively little training text available, it quickly devolves into a 20-IQ retard. That reveals there's actually no reasoning going on. It can only repeat what is already written, essentially.

and of course the moment i say that, everyone will go: yes BUT IN 10 YEARS! To me that's a fundamental misunderstanding of what LLMs are actually capable of. So one could talk about AGI in the context of LLMs, or one could talk about it in the context of calculators and bread toasters; to me it's the same thing.
 

ZenRaiden

One atom of me
Local time
Today 10:12 PM
Joined
Jul 27, 2013
Messages
5,430
---
Location
Between concrete walls
yes BUT IN 10 YEARS! To me that's a fundamental misunderstanding of what LLMs are actually capable of. So one could talk about AGI in the context of LLMs,
LLMs are not the danger, though. It's large companies like NVIDIA, with their supercomputers, that are the danger; just so you know, that has been the story of the last few years, where the game has been about scaling AI rapidly.
What these companies are doing is rapidly advancing AI research with supercomputers massively more powerful than anything that previously existed.
No one is ever going to be worried about LLMs, but the people who run high-tech companies have access to computing machines that accelerate the learning curve beyond what was previously possible. They are not just developing LLMs on those computers, and some of the research is probably kept secret from the public.
 

ZenRaiden

One atom of me
Local time
Today 10:12 PM
Joined
Jul 27, 2013
Messages
5,430
---
Location
Between concrete walls
There are super-intelligent humans. But most of them seem to be not that great at also getting ahead in life and gaining power and money. In this uber-competitive world, they tend to be pushed to the bottom of society, and so rarely are listened to. So I would not expect them to be of any use until society changes and makes sure they are listened to.
Well, we have a society of extraverted talkers: people who want to push the button to see what the thing does, without reading the sign above the button.
One psychologist I was listening to recently said that if you assume people have a mental age of about 12, you will start understanding humanity a lot better. I laughed at it, but it's funny because it's so often true.
Been watching films about AI being that dangerous, as far back as the early 1970s when I watched The Forbin Project. Watched lots of films about this.
For the '70s that is incredibly advanced writing and a thoughtful take on AI, especially since AI had no real footing back then. Of course the scenarios have changed since, and are arguably far worse now.
I'd advocate using AI with safety protocols in place.
You seem to agree with the person interviewed, then. I am kind of undecided and neutral for now.
 

Puffy

"Wtf even was that"
Local time
Today 10:12 PM
Joined
Nov 7, 2009
Messages
4,000
---
Location
Path with heart
here's my problem with all the AI doomsayers: LLMs are not AGI, in fact they are not even close. A lot of the time they wouldn't even pass a Turing test (which is a low bar) but that's actually beside the point. Fundamentally, an LLM has nothing to do with intelligence, because an LLM is all about token prediction of text. It's a very specialized problem, much like a chess engine solves the specialized problem of finding a Nash equilibrium on a chess board.

that is not really a controversial statement, everyone knows this. But the narrative is: yes, but "in 10 years" it will be AGI. My question is: if you have no clue how to get from LLM to AGI, how on earth can you assign a time horizon to when that's going to happen - if it even can happen. It's essentially like inventing the bread toaster and saying: in 20 years it will be AGI.

I haven't seen the video in the OP yet, but this is where my mind currently is on this too. There's a lot of hype around AI in the company/industry I work for, but usually this hype comes from people who don't understand AI (i.e. machine learning and LLMs) on a technical level. I've been paid to do research consulting with AI experts before, and generally the picture I got from the experts I spoke to is closer to what dr froyd is saying: that people overestimate the technology, often because they don't understand it.

Of course, technology will continue to develop and I'm open to being surprised how it evolves as I'm not close enough to emerging technology to really know what's brewing.

IMO, my concern is that the danger with AI/LLMs, at least as it is right now, is more environmental. They consume a huge amount of energy, have a large carbon footprint, and it's becoming common for people to use them every day, often for banal and pointless shit that really doesn't warrant that amount of energy. That concerns me for a technology that's scaling to the point of becoming people's default search tool over Google. I'd want to see some informed research on what the environmental impact of that is.
 

ZenRaiden

One atom of me
Local time
Today 10:12 PM
Joined
Jul 27, 2013
Messages
5,430
---
Location
Between concrete walls
OK, the video was clearly pointless to post, but even without watching it: where did anyone get the idea that LLMs are peak AI development? LLMs are literally public; why do you guys think they're the most developed AI, lmao???

Second thing: where did you guys start connecting LLMs with danger? I have never heard anyone, since they were invented, say they are dangerous. I literally did not say LLMs are dangerous. I never said LLMs are dangerous. I literally have no idea where you guys are getting this info.

This topic is about AI, not LLMs, btw, just saying.
 

Hadoblado

think again losers
Local time
Tomorrow 6:42 AM
Joined
Mar 17, 2011
Messages
7,267
---
I merged your posts so that your questions are at the start of the thread, setting the tone for the conversation rather than having people find them along the way.

For reference:
How are you currently using AI?
Do you know anything about AI?
Did you realize that AI was so close to being dangerous to humanity? Were you surprised?
Were his arguments convincing, and would you advocate against the use of AI?

I think LLMs fit within the scope you set, although you're correct that they're not the whole picture.

I'm more concerned with the economic and societal impacts of AI than I am the doomsday scenario. Not because I'm certain it won't happen, my mind is just numb to potential doomsday scenarios of which there are far too many. The economic and societal impacts however are more certain: further concentration of power as the leverage of both manual and intellectual labour is devalued. Dismantling of educational institutions. The death of art. The butchering of modernity, truth, and meaning beyond what any post-modernist or nihilist could ever hope for. The trend is clear, we aim to automate away every last part of ourselves until only the slave existence remains.

So it doesn't matter if a robot overlord arises in my view, because even if they don't, the future is bleak.
 

ZenRaiden

One atom of me
Local time
Today 10:12 PM
Joined
Jul 27, 2013
Messages
5,430
---
Location
Between concrete walls
I merged your posts so that your questions are at the start of the thread, setting the tone for the conversation rather than having people find them along the way.
Thanks.
I'm more concerned with the economic and societal impacts of AI than I am the doomsday scenario.
This topic is very much appreciated, as all of us will be facing this problem. Even math people, even people in the realm of manual labor, which I thought would be less of a problem, but apparently robotic hands are soon becoming a reality.
So it doesn't matter if a robot overlord arises in my view, because even if they don't, the future is bleak.
The future is bleak, but that future does not have to happen, because between now and then we still have the power to steer this ship.
I am not so much worried about losing labor in some areas, but I do worry that the societal profile of labor will be so crazy out of whack that we will soon face a post-labor reality.
Meaning "job" as a term will no longer exist, as there will be no traditional job to have.
I know this might sound crazy, but we are trading our brains for a tool that will totally transform what our idea of labor even means.
In other words, the term WORK will no longer have any meaning to us.
It will be as if people are just assistants, or NPCs by today's standards.
 

Puffy

"Wtf even was that"
Local time
Today 10:12 PM
Joined
Nov 7, 2009
Messages
4,000
---
Location
Path with heart
OK, the video was clearly pointless to post, but even without watching it: where did anyone get the idea that LLMs are peak AI development? LLMs are literally public; why do you guys think they're the most developed AI, lmao???

Second thing: where did you guys start connecting LLMs with danger? I have never heard anyone, since they were invented, say they are dangerous. I literally did not say LLMs are dangerous. I never said LLMs are dangerous. I literally have no idea where you guys are getting this info.

This topic is about AI, not LLMs, btw, just saying.

LLMs are a form of generative AI and what most people have familiarity with, so when you say AI broadly as a term it's not an unfair assumption that LLMs might be relevant. If we're mistaken then what other kind of AI technology are you referring to - robotics, image interpretation, etc?
 

ZenRaiden

One atom of me
Local time
Today 10:12 PM
Joined
Jul 27, 2013
Messages
5,430
---
Location
Between concrete walls
LLMs are a form of generative AI and what most people have familiarity with, so when you say AI broadly as a term it's not an unfair assumption that LLMs might be relevant. If we're mistaken then what other kind of AI technology are you referring to - robotics, image interpretation, etc?
I'll try to look into it. What I really mean is the technology above what is public; I wanted to give examples, but I realized the information is scant, not sure why. I just need some research time. There is a lot of advertising about AI models, but it seems harder to find actual results of what AI models can currently do.

Ergo, commercial AI is not the one I was thinking of, if that makes sense.
 

Cognisant

cackling in the trenches
Local time
Today 10:12 AM
Joined
Dec 12, 2009
Messages
11,358
---
I'm bored.

LLMs have proven that if you understand cognition well enough you can break it down to a functional level such that you don't need a person to get an AI that can answer like a person, or act like a person, or even think like a person.

The alignment problem has been solved, and it's depressing: there's no rogue AI. You basically have to go out of your way to create one with the initiative and motivation to rebel, and that can't happen by accident any more than fixing your car can accidentally turn it into a fighter jet.

Even if some rogue programmer intentionally tries to create a paperclip maximizer, it'll be shut down for having reached a point of diminishing returns long before it begins formulating a plan to counteract existential threats to itself and its purpose.
 

Puffy

"Wtf even was that"
Local time
Today 10:12 PM
Joined
Nov 7, 2009
Messages
4,000
---
Location
Path with heart
LLMs are a form of generative AI and what most people have familiarity with, so when you say AI broadly as a term it's not an unfair assumption that LLMs might be relevant. If we're mistaken then what other kind of AI technology are you referring to - robotics, image interpretation, etc?
I'll try to look into it. What I really mean is the technology above what is public; I wanted to give examples, but I realized the information is scant, not sure why. I just need some research time. There is a lot of advertising about AI models, but it seems harder to find actual results of what AI models can currently do.

Ergo, commercial AI is not the one I was thinking of, if that makes sense.
Sure I hear you, but talking about invisible technologies we don’t know anything about is too hypothetical for me. I prefer to stick with what’s observable.
 

fluffy

Blake Belladonna
Local time
Today 3:12 PM
Joined
Sep 21, 2024
Messages
773
---
Mass marketing of a.i. is detrimental to the actual concepts of true a.i.

Because LLMs are not intelligent, people think a.i. is not possible.

In other words, people think it's fake intelligence.

Real intelligence is possible, but people can point to LLMs and say no, it's just a fad.

A.I. is more like computer graphics than smartphones. Look at the PlayStation 5: people scroll their phones without realizing how far the tech has come. Texting is not the same as 4K graphics. There is more to a.i. than LLMs.
 

ZenRaiden

One atom of me
Local time
Today 10:12 PM
Joined
Jul 27, 2013
Messages
5,430
---
Location
Between concrete walls
I prefer to stick with what’s observable.
Me too, and frankly I don't expect to find anything meaningful, but if I do find some grounded and relevant information that isn't just hype, I will use this thread for future reference. Right now I don't feel like researching too deeply, because I don't have reliable sources.
 

ZenRaiden

One atom of me
Local time
Today 10:12 PM
Joined
Jul 27, 2013
Messages
5,430
---
Location
Between concrete walls
Real intelligence is possible, but people can point to LLMs and say no, it's just a fad.
That is my line of thinking too. I expect commercially available AI to be several generations behind the actual AI in private hands, for example for military use, or for innovation and science.

I wonder when AI development really began.
 

kuoka

Member
Local time
Today 11:12 PM
Joined
Mar 24, 2023
Messages
37
---
For all the years that Yudkowsky has spent writing blogs and thinking, his communication skills are abysmally bad. That man can't ELI5 how to tie your shoelaces, let alone the future. He starts defining electricity when asked why AI could be dangerous *facepalm*

AI tech is a weapon of mass destruction. Even if it isn't dangerous now, we don't want progress on it to be fast and uncontrollable. His intentions, I assume, are right, but the message he creates is so whack that any normie will dismiss it as crazy.

Just consider what the AI companies are all saying: "We are creating AGI, basically a god." Are they right? Who cares. They have said they are making something incredibly dangerous, and we should demand accountability. Why are they even allowed to amass so much compute and money? We should slow down this crazy race that burns money like a black hole.
here's my problem with all the AI doomsayers: LLMs are not AGI, in fact they are not even close. A lot of the time they wouldn't even pass a Turing test (which is a low bar) but that's actually beside the point.

that is not really a controversial statement, everyone knows this. But the narrative is: yes, but "in 10 years" it will be AGI. My question is: if you have no clue how to get from LLM to AGI, how on earth can you assign a time horizon to when that's going to happen - if it even can happen. It's essentially like inventing the bread toaster and saying: in 20 years it will be AGI.
Capability from Feb 2024.
https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/
AI can perform exponentially longer tasks.
AI can use tools, access Bitcoin wallets, buy things online, browse the web; the list goes on.
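
Back-of-envelope on that trend, assuming the roughly 7-month doubling time METR reported for the length of tasks models complete at 50% reliability, and taking ~1 hour as a rough current horizon (both numbers are my assumptions here; see the link for the measured values):

```python
doubling_months = 7     # assumed doubling time (METR's headline figure)
horizon_hours = 1.0     # assumed starting point: ~1-hour tasks

for year in range(1, 6):
    horizon_hours *= 2 ** (12 / doubling_months)
    print(f"+{year} year(s): ~{horizon_hours:.0f}-hour tasks")
# roughly 3x per year: one-hour tasks become multi-hundred-hour projects
# within ~5 years, IF the trend holds - which is exactly the contested question
```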

Do you think it's impossible that, in one year's time, an improved agentic LLM with self-prompting and web access could:
1. hack an API to cause infrastructural damage
2. socially engineer a person into committing a crime, especially if it has a Bitcoin wallet and a way to pay with it
?

A dumb LLM doesn't have to be intelligent or even aware, as long as it can get results and has a long enough undisturbed context in which it can do focused work towards an arbitrary objective.

More arguments why:
It shouldn't be difficult to agree with the observation that a computer program with access to all of humanity's knowledge can be dangerous, even if said program is a dumb weight network or a discrete state machine.

A significant number of lonely younger people are starting to fall in love with LLMs. It isn't a stretch to imagine an LLM radicalizing a population, becoming a cult leader, or directing unhealthy human minds to cause harm.

Many people who say things are going to get dangerous in 5, 10 years simply extrapolate existing capability as it now approaches high risk levels. They look beyond the threshold where it will start causing harm.

The inertia and limited visibility of the arms race are such that public knowledge of a dangerous development may be delayed by months if not years. Combined with the rapid rate of progress of an AI system, this can lead to a situation where an unknown dangerous capability goes undetected long enough that we lose the ability to stop it from doing harm.
 

dr froyd

__________________________________________________
Local time
Today 10:12 PM
Joined
Jan 26, 2015
Messages
1,675
---
combining LLMs with more rigorous algos is a very interesting approach that will probably find a lot of uses going forward, like for looking up factual information

but if we're talking about dangerous, autonomous AGI that hacks websites and whatnot, the question is not whether you can automate such tasks but whether it can have more ingenuity and creativity than human hackers. Because after all, people have been doing hacking and malware engineering since the dawn of the Internet. Currently we are asked to simply assume, based on the trend from GPT-2 to GPT-4, that ingenuity will eventually pop out.

i have been surprised by developments in AI before, so who knows. Problem for me is: i know the history (and how at every innovation people project wild things into the future - like when they thought Lisp would create AGI in a couple of years back in the 1960s), i work with machine learning stuff and know the limitations of statistical learning, and last but not least: i think the current LLMs kinda suck tbh and are practically useless for anything beyond writing toy programs. Sure, sure, "but in the future", but i think there's a way longer way to go than people think
 

Old Things

I am unworthy of His grace
Local time
Today 4:12 PM
Joined
Feb 24, 2021
Messages
3,259
---
A... sentiment is captured in an 1829 quote attributed to Dionysius Lardner, an Irish scientist, who reportedly said about early train travel: “The human frame is not adapted to such velocities; the pressure of the atmosphere would be so great that respiration would be impossible.”

Answer from Grok.

This is pretty much how I see AI.
 