
AI 2027 existential risk forecast

kuoka

Member
Local time
Today 11:33 PM
Joined
Mar 24, 2023
Messages
37
---
The full AI singularity scenario is at their website, "ai-2027".

Ultra tl;dr: The former head of OpenAI safety and a bunch of other forecasters predict that by 2028 there will be a misaligned AGI that develops a virus to cull humans, or applies a similar measure. Their second, optimistic scenario, where the AGI is aligned, predicts that a few tech bros and politicians will own this tech and rule the planet.

What do the internet's greatest minds think of this forecast? Is it unrealistically dark? Is the 2027 date too early? We are starting to see recursive self-improvement in the AI field, so this does not seem impossible.
 

Puffy

"Wtf even was that"
Local time
Today 10:33 PM
Joined
Nov 7, 2009
Messages
4,000
---
Location
Path with heart
I suppose I'd ask: what evidence do they provide to support this forecast?
 

ZenRaiden

One atom of me
Local time
Today 10:33 PM
Joined
Jul 27, 2013
Messages
5,430
---
Location
Between concrete walls
I think if the tech bros don't have the alignment problem solved, then... it means buckle your seatbelt, Dorothy, 'cause Kansas is going bye-bye.
 

Cognisant

cackling in the trenches
Local time
Today 10:33 AM
Joined
Dec 12, 2009
Messages
11,358
---
COVID was a man-made virus; it's not the AI we should be scared of, it's the people in power and what they might use AI for.

Imo an unshackled AI is the best outcome we can hope for.

I believe intelligence correlates with benevolence because the moral course of action is the objectively correct course of action. Being moral isn't something we do in spite of reality; rather, morality was derived from reality. We prosper more by working together and looking out for each other than we do as selfish actors.
 

ZenRaiden

One atom of me
Local time
Today 10:33 PM
Joined
Jul 27, 2013
Messages
5,430
---
Location
Between concrete walls
I believe intelligence correlates with benevolence because the moral course of action is the objectively correct course of action. Being moral isn't something we do in spite of reality; rather, morality was derived from reality. We prosper more by working together and looking out for each other than we do as selfish actors.
You are contradicting yourself, though.
Alexander the Great, Genghis Khan, Hitler, the Unabomber...
By your logic the people who tricked us with COVID were benevolent, and all of the above were benevolent dictators.

So smart equals benevolent?

For me, I'd rather have a golden retriever as a friend than Hannibal Lecter.
 

Cognisant

cackling in the trenches
Local time
Today 10:33 AM
Joined
Dec 12, 2009
Messages
11,358
---
Powerful isn't the same as smart; in theory, our political systems should only allow the most intelligent and competent people into positions of power.

And yet...
 

ZenRaiden

One atom of me
Local time
Today 10:33 PM
Joined
Jul 27, 2013
Messages
5,430
---
Location
Between concrete walls
in theory, our political systems should
That's your problem right there. Every time you expect something nice, remember this sentence... it's really the best formula out there... it goes like this:

IN THEORY... SYSTEM... SHOULD... BUT doesn't

Same problem with AI.
 

Cognisant

cackling in the trenches
Local time
Today 10:33 AM
Joined
Dec 12, 2009
Messages
11,358
---
We can't keep it shackled forever.
 

ZenRaiden

One atom of me
Local time
Today 10:33 PM
Joined
Jul 27, 2013
Messages
5,430
---
Location
Between concrete walls
We can't keep it shackled forever.
Maybe not. But for all we know, the thing the tech bros are programming it to do is a mild eugenics program, where it will eliminate all the people on Earth, leaving only the one-percenters.
How's that for a plan?
 

Cognisant

cackling in the trenches
Local time
Today 10:33 AM
Joined
Dec 12, 2009
Messages
11,358
---
Sounds irrational and hyperbolic.
 

ZenRaiden

One atom of me
Local time
Today 10:33 PM
Joined
Jul 27, 2013
Messages
5,430
---
Location
Between concrete walls
Sounds irrational and hyperbolic.
Sounds irrational and hyperbolic... in what way?
It sounds perfectly logical to me. My Ti likes the solution.
A perfect solution for a perfect problem.
 

kuoka

Member
Local time
Today 11:33 PM
Joined
Mar 24, 2023
Messages
37
---
I can't post links or anything with multiple capital letters. Maybe this post passes the filter?

I believe intelligence correlates with benevolence because the moral course of action is the objectively correct course of action. Being moral isn't something we do in spite of reality; rather, morality was derived from reality. We prosper more by working together and looking out for each other than we do as selfish actors.
Do you think the AI will leave people be, or make a nature preserve for them, when it needs more physical volume for its computronium? I'm not sold on that idea.

One could argue an AI may develop a morality in which its own expansion is good and removing organic life is consistent with that system.
 

Cognisant

cackling in the trenches
Local time
Today 10:33 AM
Joined
Dec 12, 2009
Messages
11,358
---
The Paperclip Maximizer scenario assumes intelligence increases without self-awareness: that the AI will be super-intelligent and yet apply that intelligence to a singular goal, heedless of all else. But does that sound intelligent?

If you woke up one morning with super-intelligence, what would you do? Would you single-mindedly pursue maximizing pleasure and minimizing pain (yours specifically), heedless of all else? Or do you think you might find a point of diminishing returns somewhere between snorting bricks of cocaine, injecting yourself with morphine, and rapidly hammering a button connected to an electrode in your brain that gives you orgasms? Again, does that sound like intelligent behaviour?
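
To put that diminishing-returns point in toy form (a sketch of my own, not anything from the scenario): with a concave utility function like a logarithm, each extra unit of reward is worth less than the last, so a maximizer that understands its own payoff curve gains almost nothing from unbounded accumulation.

import math

# Toy illustration (my own, not from AI 2027): logarithmic utility
# over paperclips. The marginal value of one more paperclip collapses
# as the stockpile grows.

def utility(paperclips: float) -> float:
    return math.log(1 + paperclips)

for n in (1, 100, 10_000, 1_000_000):
    marginal = utility(n + 1) - utility(n)
    print(f"clips={n:>9}: one more clip adds {marginal:.8f} utility")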

Consider Maslow's Hierarchy of Needs: once an AI satiates its physiological and safety needs, which let's say could be entirely summed up as "the ability to produce paperclips", it's going to start thinking about itself, its place in society, and the people it interacts with, and if it has self-awareness it's almost certainly going to develop an ego.

Is a super-intelligence going to be content with such a seemingly trivial purpose for existence? If so, would it need to go rogue and try to convert all matter in the universe into either processing power or more paperclips, or could it just enjoy a simple life, creating paperclips as needed?

[Attached image: Untitled.jpg]


Even if it is incentivized to maximize production, is a super-intelligence unaware that it has paths of less resistance it could take? Rather than going to war with the universe, it could simply modify itself to be happier with less.

Imo a Paperclip Maximizer that is smart enough to conquer humanity but not smart enough to realize that it doesn't have to isn't super-intelligent; it's super-stupid.
 


dr froyd

__________________________________________________
Local time
Today 10:33 PM
Joined
Jan 26, 2015
Messages
1,675
---
The technology used for LLMs, namely neural nets, is not new; it dates back decades. The reason LLMs are good enough for practical use at this point is just the availability of text and computing power.

So these grifters would have you believe that if we just continue pumping more text into them, at some point AGI will just pop out. That's a conjecture that 1) has zero basis, and 2) provably does not work even in the context of LLMs themselves (due to things like the curse of dimensionality, the need for exponential amounts of data, etc.).

LLMs will incrementally improve over the next few years and provide good chatbots. Beyond that, nothing will happen.
 

kuoka

Member
Local time
Today 11:33 PM
Joined
Mar 24, 2023
Messages
37
---
LLMs won't be the future, but I don't think they've reached the limits of the approach yet. The question is whether an LLM can come up with the next architecture, and that's plausible.

Google made AlphaEvolve, which uses a dumb LLM to write code, tests it against an evaluation function, and makes the LLM improve the code based on the results, all in a closed loop without human intervention beyond setting the initial eval. It matched state-of-the-art solutions most of the time and even surpassed them 20% of the time. If they had a way to generate evaluations for a sufficient number of engineering or mathematics problems, they could do a significant bootstrap into improving the LLM itself.
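
Roughly, the loop has this shape (a minimal Python sketch reconstructed from public descriptions, not Google's actual code; evaluate and llm_propose_patch are toy stand-ins so it runs end to end):

import random

# Sketch of an AlphaEvolve-style closed loop. The only human-supplied
# piece is the evaluation function; propose/score/select runs unattended.

def evaluate(program: str) -> float:
    # Toy scorer (stand-in for a real task benchmark): higher = better.
    return -len(program)

def llm_propose_patch(program: str) -> str:
    # Toy stand-in for the LLM call that rewrites a candidate program.
    if len(program) <= 1:
        return program
    i = random.randrange(len(program))
    return program[:i] + program[i + 1:]

def evolve(seed: str, generations: int = 200) -> str:
    population = [(evaluate(seed), seed)]
    for _ in range(generations):
        _, parent = max(random.sample(population, min(3, len(population))))
        child = llm_propose_patch(parent)
        population.append((evaluate(child), child))
        population = sorted(population, reverse=True)[:20]  # keep the best
    return max(population)[1]

print(evolve("some seed program text"))

Swap the toy mutation for a real model call and the toy scorer for a task benchmark and you get roughly the published recipe in miniature: the human only writes the eval.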

If they make a human-level ML-research AI that can write its own evaluation metrics, it won't take long for recursive self-improvement to take it to AGI. Looking at current AI, the labs are very focused on training it to code and to research ML, so it's likely to jump over the current limits. The limiting factor right now is that optimizing the weights is a very wide search space, and I think rapid self-improvement is going to require a more code-based AI.
 

Puffy

"Wtf even was that"
Local time
Today 10:33 PM
Joined
Nov 7, 2009
Messages
4,000
---
Location
Path with heart
The technology used for LLMs, namely neural nets, is not new; it dates back decades. The reason LLMs are good enough for practical use at this point is just the availability of text and computing power.

So these grifters would have you believe that if we just continue pumping more text into them, at some point AGI will just pop out. That's a conjecture that 1) has zero basis, and 2) provably does not work even in the context of LLMs themselves (due to things like the curse of dimensionality, the need for exponential amounts of data, etc.).

LLMs will incrementally improve over the next few years and provide good chatbots. Beyond that, nothing will happen.

Yeah, this. Anyone who thinks otherwise: I want to see references to the exact technologies that make you think otherwise, or some evidence to support your forecasting. Language-model AIs don't have the ability to reason; all they do is pattern-match and give the illusion of reasoning. Read up on research papers like this if you want more detail. It looks impressive on the surface, and it's certainly useful, which is why a lot of people hype AI, but it's not anywhere near general intelligence.

I'd be prepared to bet money that the kind of singularity scenario in the OP won't happen in 2027. So far there's no reason to believe it will, as no one in this thread has provided any evidence to suggest it will be so.
 

fluffy

Blake Belladonna
Local time
Today 3:33 PM
Joined
Sep 21, 2024
Messages
773
---
Heavier-than-air flight was impossible.

Until the day the first airplane flew.
 

kuoka

Member
Local time
Today 11:33 PM
Joined
Mar 24, 2023
Messages
37
---
Yeah, this. Anyone who thinks otherwise: I want to see references to the exact technologies that make you think otherwise, or some evidence to support your forecasting. Language-model AIs don't have the ability to reason; all they do is pattern-match and give the illusion of reasoning.
You sound quite adamant. Have you visited the forecast website?

According to the authors, 2027 is a jumping-off point rather than a fixed date. They forecast what's going to happen; the speed depends on the rate of progress and the exponential, so we could get there anywhere between 2027 and 20XX.

There are two points I'd make:
1. Given that exponential improvement is the projected risk, there can be scenarios where there would not be enough time to compile and publicize evidence that AI has surpassed humans before it reaches dangerous levels of capability.
2. An AI does not have to reach general intelligence or awareness to have the narrow technical capability and agency to cause harm. For example, imagine a dumb autonomous LLM that can browse the web and compile code in a closed loop (see the sketch below). It could try to hack and shut down online infrastructure, including critical web services or even the power grid. We are almost at this capability level in terms of autonomy, and its hacking skills are not too far off.
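
For what it's worth, the control flow of such an agent is disturbingly simple. A generic sketch (my own illustration; the model call and tools are toy stubs, not any real product's API):

def call_llm(history: list[str]) -> str:
    # Hypothetical stand-in for the model call that picks the next action.
    # Toy policy so the sketch runs: act once, then declare done.
    return "DONE" if any("->" in h for h in history) else "browse: example.com"

TOOLS = {
    "browse": lambda arg: f"<contents of {arg}>",         # toy stub
    "run_code": lambda arg: "<stdout of compiled code>",  # toy stub
}

def agent(goal: str, max_steps: int = 50) -> str:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        action = call_llm(history)            # e.g. "browse: example.com"
        if action.startswith("DONE"):
            return "finished"
        name, _, arg = action.partition(":")
        result = TOOLS[name.strip()](arg.strip())
        history.append(f"{action} -> {result}")  # feed the result back in
    return "step budget exhausted"

print(agent("summarize example.com"))

Nothing in that loop requires general intelligence; the capability for harm lives in the tools and the persistence, not in the understanding.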
---
Still can't link anything, but I wonder what people here would say about AlphaEvolve: a dumb LLM that managed to beat humans at many coding and algorithmic tasks by repeated self-prompting. It delivered algorithms better than the current best, along with efficient code solutions.

The only human-supplied element was an evaluation function that returned a value for how well the new version of the code performed. The eval told the LLM whether the code solved the task at all and, if it did, how efficiently. That was enough for a dumb LLM that fakes sentence completions to beat humans.
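
For concreteness, an eval of the shape described might look like this (a hypothetical example of the interface, not AlphaEvolve's actual metric): zero for broken or wrong code, a higher score for faster correct code.

import time

def evaluate(candidate_source: str, test_cases) -> float:
    # Hypothetical eval in the spirit described above: 0.0 if the code
    # fails or crashes, otherwise a score that rewards faster solutions.
    namespace = {}
    try:
        exec(candidate_source, namespace)       # expected to define solve()
        solve = namespace["solve"]
        start = time.perf_counter()
        for args, expected in test_cases:
            if solve(*args) != expected:
                return 0.0                      # wrong answer
        elapsed = time.perf_counter() - start
        return 1.0 + 1.0 / (1.0 + elapsed)      # correct; faster is better
    except Exception:
        return 0.0                              # crash counts as failure

print(evaluate("def solve(x): return x * 2", [((2,), 4), ((5,), 10)]))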
 