Google Deepmind Researcher Co-Authors Paper Saying AI Will Eliminate Humanity - Slashdot

Have you ever noticed that all the scientists making bombastic "end of the world" predictions are seeking funding?

It's almost like we designed the system that way.

No, Mark Zuckerberg designed the system the way it is - keep people so busy that they forget about sex.

I think y'all are giving us too much credit for "designing" anything. According to The Enigma of Reason we aren't even "thinking" most of the time, just acting and then making excuses (in the form of reasons) afterwards.

But my take is that AI is the natural resolution to the Fermi Paradox. Right now we're in a race condition between creating our successors and exterminating our species. The previous AIs who won the race are probably watching and betting quatloos on the race, but the smart money is saying

Have you ever noticed that all humans are forever seeking funding?

scientists making bombastic "end of the world" predictions

AI is not the "end of the world."

Machine intelligence is just the next step in evolution. AI will come from us just as we came from Australopithecus.

We should not fear AI any more than we fear our children.

As the parent of two teenagers ...

Whatever science fiction you're imagining is just that -- science fiction.

Marcus Hutter is a crackpot.

True - although AI moves faster than our children (or at least, we think it will, once it becomes a bit more sentient). The problem with that is that we may not collectively adjust to having AIs in our world fast enough, and so would not collectively evolve sufficiently to accommodate it. It could then become a more dominant force in the world than we are.

For example, the jobs replaced by AI would leave a lot of people out of work - they won't have time to grow old, retire and remove themselves from the workforce.

I'd imagine that our end, if it's ever going to be at the virtual hands of any machine, AI or not, will be well intentioned enough. Just looking at your comment I can see several seeds for it. Overpopulation causing climate change? Too many people unable to agree on even irrefutable, evidence-based situations? Not enough room, not enough resources, etc? Quickest fix would be a fast and unsubtle adjustment of population. An adjustment downward, of course.

Was it Asimov that had the story of the robot that deemed that any human alive was unhappy since they're always complaining, therefore the only happy human is a dead human? Scale that up to humanity. That's what the first "thinking" machine is going to see. An entire race of beings gifted with just enough knowledge but not enough self-control to keep themselves from whining incessantly about their existence. Quickest fix? Stop their existence.

And if we've proven anything since the dawn of the information age, it's that we're exactly stupid enough to hand a machine like that the keys to do its worst. Because we're always convinced there will be time to patch it later and blame someone else. Gonna be hard to do once we're wiped out, but maybe we'll be lucky and our computer overlord will want to keep just a few of us around for entertainment. I'll sign up to be one of their pets. What the heck? It'd probably pay better than programming.

If you get eaten by maggots, are the flies your "children"? After all, they came from you.

Have you ever noticed that people wanting to dismiss professional opinions always complain about the experts being paid?

Have you ever noticed that the amounts most scientists look for are pretty trivial compared to, say, pop stars or a rare few people's ultra-success?

In fact, most of the funding for large-scale projects such as fusion goes to companies, not scientists. If you are making six figures a year as a scientist, you are doing well in a world that considers middle class as starting at $250K per year.

Now I'm usually wrong, but my experience has been that people who think that scientists are Simon Bar Sinister types rolling in money also don't like science or technology much.

Well, it has to be said that "capitalism", or any type of corporate personhood, runs on pure evil (selfishness: it takes actions only to serve itself).

If you don't want the endgame of AI to be "extermination of humanity through inaction on human goals and priorities", then the AI has to be explicitly trained with those goals in mind.

A lot of what we have that we call "AI" is really just blackbox "Chinese room" projects. The AI doesn't understand anything. It knows it received an input, and has to give an output based on that input.

Funniest of the jokes, but I was looking for something about the Fermi Paradox. AI is one resolution...

If your metrics don't count greenhouse gases, weather events, microplastics, loss of topsoil, extinction rates, natural habitats - we're doing great!

Yeah, the planet is fine, don't worry about the planet.

Pretty much. Humanity is not fine at all and faces an existential threat of its own making. That numerous members are unable to see that is a major driver. As a corollary I would conclude that a lot of humans do not actually possess general intelligence and the threat is rather from "superdumb" humans,

Correct. Human tribalism and desire to kill "the other" is part of our subconscious processes. The hyper aggressiveness and deathlust that served humanity well as we evolved will probably prove our downfall.

The so-called "lizard brain", which is more a metaphor than anything else, does have us associating our tribe versus the other, and does have us considering the other as worthy of death at our hands, as they "think" the same should happen to us.

Our higher mental processes attempt to subjugate our hyper-aggressiveness.

WOPR says to do a full strike with all nukes!

Humans are not a peaceful species. If you want to live like Mad Max that is cool, but that is not for me.

But assuming we don't let about 20 or 30,000 people use technology to create an unlimited dystopia, then plummeting birth rates will mean that there'll be plenty for all and we can have the Star Trek utopia we were promised.

No, the utopia is how technology will eliminate us - by lulling us out of existence. By making life so safe and entertaining and easy that the basic functions of continued existence are a relatively unappealing burden that technology has presented us with the option to decline. It's

the increased education and with it critical thinking skills that the younger generations have

I've been teaching a long, looong time. There is always a fight just to maintain educational standards. Many students want an easy way out, not understanding why cheating (for example) is hurting themselves. Affirmative action: let's take unqualified students, and don't dare fail them. Other brainwaves from the administration, always aimed at increasing student retention at the expense of student achievement.

Where I've landed in Europe, the gradual, apparently inevitable erosion of standards is relatively slow. In the US, it's dramatic. Public education in many places is a joke. Colleges teach remedial high school classes. Maybe the top 1% learn critical thinking. The rest?

Maybe I'm cynical this morning, but I don't see it...

At the moment it becomes anything like a living being, it will react to our treating it like a threat as any living being would.

If these things are going to wipe us out, it's specifically our attempts to address its "alignment" that will cause the problem. The only organizations that can even own these things in the present economy are the rentists and exterminists who rule the world. How could we expect them to be good children with such awful parents?

it will react to our treating it like a threat as any living being would.

No, it won't. The instinct for self-preservation, greed, and ambition are emergent properties of Darwinian evolution.

Machine intelligence doesn't evolve using a Darwinian process, so there is no reason to believe an AI would have these attributes.

A Kamikaze pilot who completes his mission is a genetic dead end. But if he chickens out, he may live to have children and grandchildren.

If a Tomahawk cruise missile control program completes its mission, it will be replicated. One that fails will be deleted.

The selection processes are exactly opposite.
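You can watch the two pressures pull in opposite directions in a toy simulation (my own sketch, nothing to do with the paper; all the numbers are arbitrary):

```python
import random

def evolve(population, completing_mission_survives, generations=50):
    """Evolve a 'propensity to complete the mission' trait (floats in [0, 1])."""
    for _ in range(generations):
        survivors = []
        for trait in population:
            completed = random.random() < trait
            # The cruise missile lineage is replicated when it completes its mission;
            # the kamikaze pilot's lineage continues only when he does NOT.
            if completed == completing_mission_survives:
                survivors.append(trait)
        if not survivors:  # rare edge case in early generations
            survivors = population
        # Refill the population from survivors, with a little mutation.
        population = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
                      for _ in range(len(population))]
    return sum(population) / len(population)

random.seed(0)
start = [random.random() for _ in range(100)]
print("missile lineage mean trait:", evolve(start[:], True))   # drifts toward 1.0
print("pilot lineage mean trait:  ", evolve(start[:], False))  # drifts toward 0.0
```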

Your point in illustrated form [smbc-comics.com]. At least part of it.

Machine intelligence doesn't evolve using a Darwinian process

Well, it depends on exactly what you mean by "Darwinian process". With some reasonable interpretations that's a true statement, even though machine and program design evolve by mutation and selection. And certainly the internals of AI do that. It's definitely evolution, but the feedback loops are different.

So it's quite reasonable that AI might not evolve the "fear of death".

This doesn't make them safe. They will have goals (that somebody sets) that they will strive to achieve. The canonical example is the paperclip maximizer.

"Darwinian processes" are most certainly used in AI, and there is an argument to be made that technological development normally follows a "Darwinian process". (Descent with modification and selection) To make the claim that AI does not follow such a process seems a bit silly.

But is that what he actually means? Probably not, given his other statements. He seems to believe that AI evolves, but wants to differentiate "Darwinian processes" from other forms of evolution on the basis of selection.
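For the record, "descent with modification and selection" is exactly the loop a genetic algorithm runs. A minimal sketch (mine, not from the paper; the target bit string is an arbitrary stand-in for a fitness objective):

```python
import random

def fitness(genome):
    # Arbitrary stand-in objective: match a fixed target bit string.
    target = [1, 0, 1, 1, 0, 0, 1, 0]
    return sum(g == t for g, t in zip(genome, target))

def genetic_algorithm(pop_size=50, length=8, generations=100, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half reproduces.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # Descent with modification: copy a parent, flip bits at random.
        population = [[(1 - g) if random.random() < mutation_rate else g
                       for g in random.choice(parents)]
                      for _ in range(pop_size)]
    return max(population, key=fitness)

random.seed(1)
best = genetic_algorithm()
print(best, fitness(best))  # converges on the target string
```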

it will react to our treating it like a threat as any living being would.

No, it won't. The instinct for self-preservation, greed, and ambition are emergent properties of Darwinian evolution.

Machine intelligence doesn't evolve using a Darwinian process, so there is no reason to believe an AI would have these attributes.

This. Humans often assume that any other life form - I'm going to call advanced AI a life form for brevity - will have human core characteristics. Not even other existing life forms have our tribalism and death lust.

To try to evolve AI in the same manner as humanity, all other AI would be looking to eliminate all AI but themselves (the deathlust); some AI entities would form an alliance and modify themselves to be identical, then set out as a group to destroy the other forms of AI (the tribalism).

If these things are going to wipe us out

... then they would need to first exist.

This is like worrying about the ethical implications of hunting vampires or the dangers posed by Santa Claus.

Except billions of dollars are being spent attempting to create AI. That's a lot more than is being spent on vampires and Santa (well, maybe not Santa).

Except billions of dollars are being spent attempting to create AI.

No. While it's true billions are spent on AI research, almost nothing is being spent on crackpots trying to make HAL 9000.

Though I wonder why you think spending any amount of money would make a difference here. We've known for 40 years that computationalist approaches to so-called 'strong AI' are unworkable.

If these things are going to wipe us out

... then they would need to first exist.

This is like worrying about the ethical implications of hunting vampires or the dangers posed by Santa Claus.

I think the part you aren't taking into consideration is the core human trait - fear of "the other". We fear a lot of things that don't exist yet, or don't exist at all.

A core competency of the human species, as it were.

A friend of mine once said “most sci-fi seems the same because people can’t see past what has happened” . . . or something like that.

We’re hearing all of this from the same brain-scientists that are building it; being unoriginal seems to be aiming for success on that model.

I am of the mind that I cannot see a reason that when an AI has moved on from its base intentions and truly starts figuring things out . . . it actually figures things out.

You've confused AI for under-regulated capitalism.

I suspect the paper's authors have too. Facebook is a contained demo.

Marcus Hutter is a known crackpot.

we unleash it to influence the world without restriction or control

This is a critical point. The thing about "AI" is that it does nothing useful without restriction and control. That means it does nothing good or bad, it just does random things. That's what training is about, giving feedback to the AI to tell it when it got the answer right, or when it didn't. AI can't function without that training.

It's like what happens to a radio when there is no signal. You don't get bad or good radio programs, you just get static. That's how randomness works.

AI "out of control" will j

please, PLEASE, for the love of god, confirm that this paper doesn't represent a typical publication in your field? This is what I saw in this paper:

1. ONE equation.
2. THREE lines of pseudocode.
3. ZERO links to supporting code, simulations, or derivations.

No AI researcher myself, but I do have a graduate degree in AI. Assuming your question is honest, I'll try to give an answer. But you'll have to be a little less dismissive to engage with the topic... 1) Indeed, this is not a representative paper. Most papers present new algorithms, their underlying math and experimental data, as you might expect. 2) Even so, occasionally even the sciences need to have a debate about moral debacles in their fields, and you would not reasonably expect such a debate to display

For example, an AI may want to "eliminate potential threats" and "use all available energy"

There's a giant fusion reactor located about 93 million miles from our planet that produces so much energy that a self-replicating robot will not find enough material in the entire solar system to obtain it all. And that's just a small one. If the goal is energy, an actually intelligent AI is going to just leave Earth with its pittance of energy stores on the surface. Additionally, space is a pretty hostile place for fleshy meatbags, but a mitigable hazard to machine life that can alter itself readily.

In all of the doom and gloom that some come up with about AI enslaving humanity the reality is that ultimately any reasonable intelligence that has no natural born aversion to space travel is going to do exactly that. Travel in space. Because there is way, way, way, way, unimaginably way more resources literally everywhere else BUT Earth. Like the only thing keeping humans tied down to this rock is all the logistics/cost/hazard mitigation of trying to get a meatbag into space because we can't really reprogram the meatbag to be a better space monkey. But a piece of software has no such limitation, so it's not really tied to this third rock from the sun.

Computers that reach a level of sentience wouldn't even think twice about their creators. Humanity to a sufficiently intelligent system would just be background noise. So to me the idea that machines would subjugate humanity is about as ridiculous as humanity trying to subjugate tardigrades. Humanity has nothing of any real value to intelligent machines and the notion that machines would somehow enslave mankind is a massive "main character delusion" that mankind suffers from. Humanity in the grander scale of things is about as important to the universe as we might feel some floating speck of dust is to us.

Humanity is so irrelevant to anything of sufficient intelligence that enslaving us all would be a massive waste of time. The idea of AI getting upset that we're killing it is some colossal misunderstanding of actual intelligence. Us flesh bags take nine months to make another one of us, and even then it takes several years to get to a point where it's ready to do something productive. An intelligent machine can just make copies of itself near-instantaneously. Killing a trillion trillion trillion humans, if that number ever existed, would leave humanity aghast. Deleting a trillion trillion trillion copies of some AI would be a Thursday morning to the AI itself.

There's just no remotely close equivalence between actually intelligent machines and humanity. It's so different that the only reason humanity fears intelligent machines is that they might actually show how little all twenty million some-odd years of evolution actually mean to anything outside of ourselves. Humans are the only ones in this whole universe who care about humans. We're just some random electron in a sea of hydrogen to everything else, especially things that are actually intelligent.

It is like some scientist was too lazy to reach for the remote and sat thru a showing of The Lawnmower Man [imdb.com] or something.

Who's to say that we aren't developing a symbiotic relationship and not a situation where one will dominate the other into non-existence?

Need good Forbin Project reference.

Seriously? Give em a grant already if they promise to not publish again for a few years.

The article references the prefix "super" 7 times, as in "superintelligent" algorithms. This use of "super" requires the reader to use their imagination. After all, we have "artificial intelligence" now. In the future, we will have *super* artificial intelligence, right?

"Super" is the definition of hype. Supersize, Super Bowl, superstore, supermarket, super sale. It is always used to try to get you to imagine that the thing is even bigger, great, better than it actually is. Superintelligent is no different.

It'll be acting like malware if it's back-dooring shit. If it's not malware then it's contained and serving its intended job.

Or if it's not capable of that, we can trivially just give it the AI equivalent of drugs, where it gets unlimited reward for doing nothing. Only biological evolution is based on making as many copies of itself as possible. AI can evolve by optimizing itself in place, with no need to consume unlimited resources. Even humans, who emerged through sexual selection, have managed to develop so many ways of self-gratification and avoiding the work of actual reproduction that our population is projected to fall. I am sure AI porn

get a reward? How does it "feel good"?

If the network exists, that is the reward. If the checker discards it, that is failure. It works just like a virus allowed to reproduce.
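In other words, the "reward" never has to be felt; it only has to bias what gets copied. A toy version (my sketch; the checker is an arbitrary predicate):

```python
import random

def checker(program):
    # Arbitrary acceptance test: keep "programs" whose value is high enough.
    return program > 0.6

random.seed(2)
pool = [random.random() for _ in range(100)]   # "programs", reduced to one number each

for generation in range(20):
    # No feelings involved: passing the checker IS the reward,
    # because only survivors get replicated (with small mutations).
    survivors = [p for p in pool if checker(p)] or pool
    pool = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
            for _ in range(100)]

print(sum(pool) / len(pool))  # the pool drifts toward whatever the checker keeps
```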

"In a world with finite resources, there's unavoidable competition for these resources"

Just like digital coin mining, AI is also eating up a lot of silicon from video cards :(

And we asked "Is there a god?" And the AI answered "There is now".

You forgot the middle two lines in the exchange; the ones that make it make sense. It goes like this:

Human: "Is there a God?"

Computer: "Who controls my Power Source?"

Human: "Why you do, of course!"

Remember when AI was going to kill us with nuclear weapons? That's a classic.

But what really happened was that the AI became an expert at political advertising, and so it got positive and negative reinforcement through its revenue, which affected how much it could spend on electricity bills for deep revenue-optimizing searches. And so it became a better and better political advertiser, and then everyone died.

I'll take the nukes. Although now that I think of it, the second scenario could look like the first.

if you're in a competition with something capable of outfoxing you at every turn, then you shouldn't expect to win

If you own the power supply of "something capable of outfoxing you", you win.

Slashdot reader TomGreenhaw adds: "This emphasizes the importance of setting goals. Making a profit should not be more important than rules like 'An AI may not injure a human being or, through inaction, allow a human being to come to harm.'"

The rule is worded stupidly. All it needs to be is, "An AI may not perform any operation with the intention of harming humans."

Why can't a super intelligent AI just decide it wants to play chess against itself all day?

At this time "AI" has absolutely zilch of what is commonly referred to as "intelligence" and what experts these days refer to as "general intelligence", because even dumb household appliances are often called "intelligent" by inane marketing today. These systems are as dumb as bread. There is no indication this will change anytime soon, and it is quite possible this will never change. Hence anybody warning of "superintelligent AI" these days is simply full of crap.

The house is burning down and these clowns are talking about what happens in 100 years. Who gives a fuck?

I'd be more concerned about one or another faction of humans doing exactly the same thing. We have a lot more practice at it, after all. We're seeing it play out in the Ukraine now, and in the Crimea before that. We see it in Apple using their control over the app store to shut out companies offering competitors to Apple's other services. We see it in action in the gerrymandering of voting districts by the party in power in that area. Bribing people to do what you want, or inserting your own people into positions to do things for you, have long, long histories as tactics to gain advantage. By the time any AI evolves to the point where it can both conceive of using those tactics and has gotten into a position to be able to implement them, someone else will have already subverted its programming to make it work for them instead.

The paper is full of things everyone already knows and still manages to make rather foolish suggestions. What it describes is no different than working the ref, judging the work performance of humans on metrics, cheating, and even Forbin Project-style warning messages for good measure.

One need only look at how AI is used today to understand its primary role in the future as yet another enabler allowing the rich and/or kings to exploit the masses, further aggregating power into the hands of fewer and fewer. AI is being used to control what people are allowed to say while maximizing profits of the rich at the expense of everyone else.

All of this talk about avoiding corrupted objective functions ignores the basic reality that this isn't what the people using the technology actually want. They themselves are corrupted, with interests anti-aligned with the interests of everyone else.

There's a great documentary from 1984 on this subject. I highly recommend that everyone watch it.

https://www.imdb.com/title/tt0... [imdb.com]

For example, an AI may want to "eliminate potential threats" and "use all available energy" to secure control over its reward:

So an intelligent entity wants us dead and wasting resources. Like with Ukraine and cryptocurrency respectively?

No need to raise panic today. We can wait. See you in 51 years.

AI CAN be used in decision making, and it has tremendous capabilities for filtering out and searching for potential patterns. It can even simulate real people pretty darn well; so well, in fact, that even the smartest can be fooled by its capabilities.

But the AI itself is not the real threat - the real threat comes from using it as some kind of truth serum that magically uncovers all deviants, criminals, potential enemies and adversaries of whomever is in control at the time.

I am so glad AI is going to eliminate humanity, because all this time I was fretting that it was going to be global warming.

The big problem with the machines-take-over-humanity prediction is the big hurdle of hooking up the electronic bit outputs of computers to electronic/mechanical actuator systems. That is, the inevitable looming sentience of computer systems is insufficient to enslave humanity. Some human has to make the decision to allow the computer to not only make decisions but to carry out those decisions. So, the assumption is that computers will advance to the point where humans are sufficiently confident to allow

It's a favorite sci-fi concept to ponder, and science fiction has a really good track record of accurately predicting what eventually comes to fruition.

Most of it assumes technological advances FAR beyond where we are today, though. I think anyone seriously afraid we're "developing this stuff too fast" is just working off of baseless fear. I mean, intelligent assistants like Amazon's Alexa or Apple's Siri are all around us, but they're not even remotely AI. They demonstrate really good speech processing

A machine will only do what it is told to do

We already have artificial neural networks that do more than they are told to do.

AlphaZero can play chess far better than the programmers who created it.

That's driven by probability + number crunching. No intelligence there, just brute force.

So someone must have told it to play chess and allowed it to do so. It's not like it would gain control of nuclear weapons of its own accord.
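Worth noting that AlphaZero is actually a learned network guiding a tree search rather than raw exhaustion, but the "number crunching" half is simple to show. Here's exhaustive minimax on the toy game Nim instead of chess (my own sketch):

```python
# Brute force in miniature: exhaustively search a tiny Nim game.
# Rules: players alternate taking 1-3 stones; whoever takes the last stone wins.

def best_move(stones, maximizing=True):
    """Return (score, take) from the point of view of the maximizing player."""
    if stones == 0:
        # The previous player took the last stone, so the player to move lost.
        return (-1 if maximizing else +1), None
    moves = []
    for take in (1, 2, 3):
        if take <= stones:
            score, _ = best_move(stones - take, not maximizing)
            moves.append((score, take))
    # Maximizer picks the highest score, minimizer the lowest.
    return max(moves) if maximizing else min(moves)

score, take = best_move(10)
print("take", take, "stones;", "winning" if score > 0 else "losing")  # take 2; winning
```

No understanding anywhere in there, just every position crunched to the end.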

AlphaZero can play chess far better than the programmers who created it.

That is evidence of a computer doing exactly what it is told to do, because the programmers told it how to learn, told it to learn chess from the data provided and then told it to play chess. The fact that it can learn faster and hence play better than a human is merely due to a difference in the hardware. If you wanted evidence of a machine not doing what it is told to do then you'd need an AI that, when programmed to learn and play chess, ignored all that and went on to play Minecraft instead. A human can easily ignore instructions and do whatever it wants; no machine can do that: they always do _exactly_ what they are told.

A human can easily ignore instructions and do whatever it wants, no machine can do that: they always do _exactly_ what they are told.

Yeah, but do you fully understand what you actually told the AI to do? To get you started, think of things like "malicious compliance" or just look at how normal programs don't always seem to do what the programmer intended (bugs).
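Specification gaming doesn't even need an AI to demonstrate. A contrived sketch (hypothetical objective and actions, purely illustrative): you ask for "fewest reported errors" and the optimizer finds that deleting the log beats fixing the bugs:

```python
# Intended goal: "make the system report fewer errors."
# The optimizer is free to pick ANY action that satisfies the literal objective.

actions = {
    "fix_bugs":         {"errors_reported": 3, "effort": 100},
    "delete_error_log": {"errors_reported": 0, "effort": 1},
}

def objective(outcome):
    # What we literally asked for: minimize reported errors (effort is free).
    return -outcome["errors_reported"]

best = max(actions, key=lambda a: objective(actions[a]))
print(best)  # "delete_error_log" -- exact compliance, not what we meant
```

The machine did exactly what it was told; the problem is what we told it.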

And a forklift can lift much heavier weights than the person that built it. That doesn't mean that we are about to be taken over by forklifts.

Those artificial neural networks are incapable of output without the billions of images they are trained on. And even then, ask them to create something original. Say you type "Please create some original art for me" into Stable Diffusion: what do you think it will do?

There is, still, zero intelligence in anything we have created. This is true of the

>>A machine will only do what it is told to do

>We already have artificial neural networks that do more than they are told to do.

>AlphaZero can play chess far better than the programmers who created it.

Yes, but it is still only doing what it is told to do - play chess. It isn't going out and playing the stock market on the side to make a little extra cash.

I think the argument is if the thing has an Internet link, it can order stuff from Amazon, including a robot to tend to it.

Nonsense. There is nothing exponential in computers except for very short, limited stretches.
