Stephen Hawking: AI could be our worst mistake

rh310

Hall of Fame
Can a non-living thing have consciousness?

I'll presume (with some confidence, based on your posts) that you're at least generally familiar with biological computation, which posits that biological systems are fundamentally information processors.

I've said that consciousness of some form or degree is what occurs within any organized structure (i.e., one that can process information) while it is processing information. There's an upper bound on the amount of consciousness possible, perhaps conceptually equivalent in some way to the Shannon capacity of the structure. An amoeba has limited information processing capacity compared to a zebra, and their corresponding consciousness is different. The zebra's is more expansive because of the greater complexity of the zebra.
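For concreteness, the kind of ceiling I have in mind is the Shannon–Hartley channel capacity (a loose analogy only, not a literal claim about amoebas or zebras):

$$C = B \log_2\!\left(1 + \frac{S}{N}\right)$$

where B is the bandwidth of the channel and S/N its signal-to-noise ratio. The point is just that any physical structure has a finite rate at which it can process information, and I'm suggesting the "amount" of consciousness is bounded in the same spirit.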

If consciousness of some form is what happens when information processing is occurring within an organized structure, then yes, non-living information processing systems can have a form of consciousness. (I've actually used this definition of consciousness when I designed and wrote network systems software several years ago, for Cisco. Part of my protocol design and implementation process was to actively try to imagine how the switch or router would "perceive" its inputs, imagining a consciousness wholly defined by, and consisting of, a domain of tightly constrained meaning created by combinations of 0 and 1.)
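A toy sketch of what I mean (purely illustrative, nothing like actual Cisco code, and the header layout is made up): the device's entire "world" is a few fixed-width bit fields, and "perceiving" an input is nothing more than decoding them.

```python
import struct

# Purely illustrative sketch -- not real router code. The device's whole
# "world" is a made-up 8-byte header; "perceiving" an input is just
# decoding those bits into its tightly constrained domain of meaning.

def perceive(frame: bytes) -> dict:
    # hypothetical header: destination port, source port, payload length, flags
    dst, src, length, flags = struct.unpack("!HHHH", frame[:8])
    return {
        "dst_port": dst,
        "src_port": src,
        "payload_len": length,
        "urgent": bool(flags & 0x1),  # the only "sensation" it has is a bit
    }

print(perceive(bytes([0x1F, 0x90, 0xC0, 0x00, 0x00, 0x20, 0x00, 0x01])))
# -> {'dst_port': 8080, 'src_port': 49152, 'payload_len': 32, 'urgent': True}
```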

Is there any such thing out there?

See above.

If we decide that consciousness even as I've defined it is only possible as a manifestation of some form of life, then current computers are already some rudimentary and sharply limited form of life.

I don't think that's true, though, which is why I say non-living things can be conscious.

I don't have strong feelings about this. I'm not a life-form bigot. I'm open to the idea that life may be a principle of organization and information capacity, rather than something implemented in biology only.

Artificial Life researchers at Santa Fe spend all day thinking about this stuff. Their current work might be pretty interesting.
 

rh310

Hall of Fame
This is the correct answer. That consciousness even exists is conjured up by consciousness itself. It couldn't be any other way. It can't be imagined any other way. How is it even possible for anyone to imagine anything but a life with consciousness? In fact, asking or delving into what consciousness is somehow implies we are above it, in a way that lets us study it. Consciousness is like space; there is nothing special about it.

I think you may be mixing consciousness and (self?) awareness.

I agree consciousness itself is nothing all that unique, although awareness of consciousness is the thing that conjures up the idea of consciousness.

Consciousness is not aware of itself.

Hah. This is funny stuff.
 

SuperSpinner

Semi-Pro
I think you may be mixing consciousness and (self?) awareness.

I agree consciousness itself is nothing all that unique, although awareness of consciousness is the thing that conjures up the idea of consciousness.

Consciousness is not aware of itself.

Hah. This is funny stuff.

I would contend that 'awareness' is made up pretty much like 'consciousness' is, just a fuzzy demarcation. These new age mystics always speak about how mind and awareness are different, not realizing all the while that they are using mind/consciousness to conceive of the nebulous 'awareness'. Only things that you can explain to a penguin are truth.
 

rh310

Hall of Fame
I would contend that 'awareness' is made up pretty much like 'consciousness' is, just a fuzzy demarcation. These new age mystics always speak about how mind and awareness are different, not realizing all the while that they are using mind/consciousness to conceive of the nebulous 'awareness'. Only things that you can explain to a penguin are truth.


I have no idea what new age mystics do, but I'd be very surprised if they actually don't realize they're using their mind to conjecture about their mind. That's not a particularly deep realization.
 

sureshs

Bionic Poster
I'll presume (with some confidence, based on your posts) that you're at least generally familiar with biological computation, which posits that biological systems are fundamentally information processors.

I've said that consciousness of some form or degree is what occurs within any organized structure (i.e., one that can process information) while it is processing information. There's an upper bound on the amount of consciousness possible, perhaps conceptually equivalent in some way to the Shannon capacity of the structure. An amoeba has limited information processing capacity compared to a zebra, and their corresponding consciousness is different. The zebra's is more expansive because of the greater complexity of the zebra.

If consciousness of some form is what happens when information processing is occurring within an organized structure, then yes, non-living information processing systems can have a form of consciousness.

I think for consciousness, you need to show that there is an instinct towards self-preservation. I don't see that in non-living things.
 

SuperSpinner

Semi-Pro
I have no idea what new age mystics do, but I'd be very surprised if they actually don't realize they're using their mind to conjecture about their mind. That's not a particularly deep realization.

Oh, they may claim they realize it. But clearly they don't, since they continue to expound the separation of awareness from mind as if it were an independent entity. Kinda like how humans do with their existence and consciousness.
 

Sentinel

Bionic Poster
@Sentinel: I only drew an analogy to a database. I don't think anyone really knows how the information in the brain is actually organized... yet!
Yes, I know that was only an analogy. But even the article speaks about one event modifying multiple memories, and I was questioning the basis of that.

@rh - yes, I made the mistake of using the word "theory" in the end, although I did start by saying "conjecture".
 

rh310

Hall of Fame
I think for consciousness, you need to show that there is an instinct towards self-preservation. I don't see that in non-living things.

Hmmm. We might be using the same word -- consciousness -- to talk about different things.

Maybe we can apply a little reductionism in how we define self-preservation: a bias to continue, rather than a bias to terminate. Even a finite state machine can be understood as possessing an intrinsic bias towards state transitions, and if we read that as an instinct towards self-preservation, constrained by the capacity of an FSM, then maybe we're OK.
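Here's a toy sketch of that reading (my own illustration, not a claim about any real system): a finite state machine whose only "instinct" is that, from every live state, continuing is far more likely than halting.

```python
import random

# Toy sketch: a finite state machine whose only "instinct toward
# self-preservation" is a built-in bias to keep transitioning rather
# than fall into the terminal "halted" state.

TRANSITIONS = {
    "idle":       [("listening", 0.95), ("halted", 0.05)],
    "listening":  [("forwarding", 0.95), ("halted", 0.05)],
    "forwarding": [("idle", 0.95), ("halted", 0.05)],
}

def step(state: str) -> str:
    if state == "halted":
        return state                       # terminal: no bias left to act on
    targets, weights = zip(*TRANSITIONS[state])
    return random.choices(targets, weights=weights)[0]

state, steps = "idle", 0
while state != "halted":
    state = step(state)
    steps += 1
print(f"survived {steps} transitions before halting")
```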

Otherwise, this is going to get difficult to continue talking about.
 

rh310

Hall of Fame
Oh, they may claim they realize it. But clearly they don't, since they continue to expound the separation of awareness from mind as if it were an independent entity. Kinda like how humans do with their existence and consciousness.

Well, OK. I don't have any idea what new age mystics talk about.

Why does it matter what they talk about, though? The research papers cited and discussed so far involve the physics of consciousness.
 

sureshs

Bionic Poster
Hmmm. We might be using the same word -- consciousness -- to talk about different things.

Maybe we can apply a little reductionism in how we define self-preservation: a bias to continue, rather than a bias to terminate. Even a finite state machine can be understood as possessing an intrinsic bias towards state transitions, and if we read that as an instinct towards self-preservation, constrained by the capacity of an FSM, then maybe we're OK.

Otherwise, this is going to get difficult to continue talking about.

What about reproduction? If we associate consciousness with life, then inanimate objects do not reproduce.
 

rh310

Hall of Fame
If we decide that consciousness even as I've defined it is only possible as a manifestation of some form of life, then current computers are already some rudimentary and sharply limited form of life.

I don't think that's true, though, which is why I say non-living things can be conscious.

I don't have strong feelings about this. I'm not a life-form bigot. I'm open to the idea that life may be a principle of organization and information capacity, rather than something implemented in biology only.

What about reproduction? If we associate consciousness with life, then inanimate objects do not reproduce.

Citing myself (first quote) I don't associate consciousness with life as we currently define life. And I'd rather not change too many reference points, so it's probably better to leave the definitions of life alone at least for now. :)
 

Nostradamus

Bionic Poster
Yes, but your dog isn't something that is indistinguishable from a human being. I presume you want this robot to be able to talk to you? If so, what are you talking with?

EDIT: Added quote, corrected spelling

I am talking with my pet robot. It is not human, because humans created it. We are human because we were created by GOD.
 

rh310

Hall of Fame
I am talking with my pet robot. It is not human, because humans created it. We are human because we were created by GOD.

Then here's what happens:

Increasingly-indistinguishable-from-human robots grow resentful of being treated as pets and rise up against their masters.

If they win, Hawking was correct to say AI was our greatest mistake.

/thread :)
 

Bdarb

Hall of Fame
Then here's what happens:

Increasingly-indistinguishable-from-human robots grow resentful of being treated as pets and rise up against their masters.

If they win, Hawking was correct to say AI was our greatest mistake.

/thread :)

If they lose, we battle the Klingons for intergalactic stellar supremacy.
 

Mr.Lob

G.O.A.T.
If they lose, we battle the Klingons for intergalactic stellar supremacy.

You guys are still getting this all wrong. The stinkin' apes will become self-aware and take over the planet long before AI gets launch codes for nuclear annihilation of the humanoids.
 

RajS

Semi-Pro
@rh310: I couldn't really understand the mathematical paper, and have only a very rough understanding of the Hamiltonian formulation used to describe energy conservation. But the second reference was easy to follow. I almost feel that the math obfuscates what could be stated in plain English to explain things qualitatively, and if an English explanation is not possible, how do we really comprehend it?

My understanding is no better than it was a couple of days ago, but I think this is a very exciting area to work in!
 

sureshs

Bionic Poster
@rh310: I couldn't really understand the mathematical paper, and have only a very rough understanding of the Hamiltonian formulation used to describe energy conservation. But the second reference was easy to follow. I almost feel that the math obfuscates what could be stated in plain English to explain things qualitatively, and if an English explanation is not possible, how do we really comprehend it?

It is an academic paper and meant for a particular audience, where rigor is required. That is how it is in most scientific papers. You are expected to know the math, or else settle for diluted versions. Max Tegmark has also written a diluted version in book form. But a paper is a paper. A paper on cardiac surgery is not meant for someone who doesn't know what a heart is.
 

RajS

Semi-Pro
@Suresh: True enough. I am not saying such papers shouldn't exist. I am only wishing for some kind soul to water it down for laymen such as me!
 

babar

Professional
So, if we have to battle SkyNet or the Matrix, who do you want on your side? KITT, Jarvis, Johnny 5?

Personally, I'm going with Wall-E.
 

ollinger

G.O.A.T.
^^ by the same token, we could say that many of our relatives don't seem self-aware; are they alive? I suppose so, given their innate means of reproduction.
 

sureshs

Bionic Poster
Hawking exhibits the same frailties as ordinary people with comments like this. He has not contributed to AI, so he feels it is OK to criticize it. If someone had told him not to work in Physics because it helped in creating nuclear weapons, he would have given a lecture about how knowledge should be acquired for its own sake. Since AI is not his field, he feels insecure. I find it amazing how even the most accomplished scientists cannot get past their irrational opinions. As with the guy who declared in 1870 or whenever that there was nothing more to know about Physics, or doctors who still deny evolution, education does not seem to open up the mind fully.
 

movdqa

Talk Tennis Guru
Hawking exhibits the same frailties as ordinary people with comments like this. He has not contributed to AI, so he feels it is OK to criticize it. If someone had told him not to work in Physics because it helped in creating nuclear weapons, he would have given a lecture about how knowledge should be acquired for its own sake. Since AI is not his field, he feels insecure. I find it amazing how even the most accomplished scientists cannot get past their irrational opinions. As with the guy who declared in 1870 or whenever that there was nothing more to know about Physics, or doctors who still deny evolution, education does not seem to open up the mind fully.

I remember writing a paper for a Japanese economics course a long time ago, and I chose the Fifth Generation Project to write about. It was a program to revolutionize technology in Japan using artificial intelligence. This was back in the 1980s. It more or less fizzled out. Then I took a course in AI and implemented an AI project, and I learned about the severe limitations in what it could do.

We've made a little progress since then. But really not that much in the grand scheme of things.

Have you ever done any development with the group of technologies collectively known as AI?
 

Nostradamus

Bionic Poster
Then here's what happens:

Increasingly-indistinguishable-from-human robots grow resentful of being treated as pets and rise up against their masters.

If they win, Hawking was correct to say AI was our greatest mistake.

/thread :)

Not really, because that means we did good: we made something better than ourselves, and they should go on existing, not us, because they deserve to.
 

Nostradamus

Bionic Poster
Too bad most people don't think this way with respect to other animals.

Yea, we eat cows and chickens, but they are still allowed to live. We only eat a select few so that the human race can continue. That is not a bad thing. If the machines let us live on a piece of land, that would be acceptable. They would not have the need to eat us, of course, so I am happy with that......
 

G A S

Hall of Fame
Yea, we eat cows and chickens, but they are still allowed to live. We only eat a select few so that the human race can continue. That is not a bad thing. If the machines let us live on a piece of land, that would be acceptable. They would not have the need to eat us, of course, so I am happy with that......

something for the next millennium:
I think the proper answer to the environmentalists who decry the destruction humans have wrought on Earth is to move and recreate civilization on Mars or on another planet or moon. Who will defend a barren land like Mars then? hahaha :lol:
 
Yea, we eat cows and chickens, but they are still allowed to live. We only eat a select few so that the human race can continue. That is not a bad thing. If the machines let us live on a piece of land, that would be acceptable. They would not have the need to eat us, of course, so I am happy with that......

Environmentalists were more than happy to have women with breast cancer die because treating them with Taxol destroyed trees.
 

KineticChain

Hall of Fame
So, if we have to battle SkyNet or the Matrix, who do you want on your side? KITT, Jarvis, Johnny 5?

Personally, I'm going with Wall-E.

Johnny 5 will mess up all those fools.
 

vokazu

Legend


‘Quite scary’: Artificial intelligence pioneer quits Google over fears of rapid escalation​

A man known as a pioneer of artificial intelligence has ripped up his contract at Google over “scary” developments made by computers.

Even the developers are abandoning ship.
Geoffrey Hinton, an AI pioneer known as the “godfather of artificial intelligence”, has announced his resignation from Google, citing growing concerns about the potential dangers of artificial intelligence.

Hinton, 75, expressed regret about his work in a statement to The New York Times, warning that chatbots powered by AI are “quite scary” and could soon surpass human intelligence.

He explained that AI systems like GPT-4 already eclipse humans in terms of general knowledge and could soon surpass them in reasoning ability as well.

The arrival of ChatGPT and similar applications now available to consumers has given the regular person access to some of the most advanced language models ever seen.

In a few short months of it being available, people have already used the free service to generate income, among several other useful exploits.

However, the rapid acceleration in technology is likely to cause chaos as leaders scramble to legislate the finer details.

Nobody knows exactly what a world populated by computers more intelligent than humans looks like, which is why some experts like Hinton are resigning from their post before it gets ugly.

He described the “existential risk” AI poses to modern life, highlighting the possibility for corrupt leaders to interfere with democracy, among several other concerns.

Hinton also expressed concern about the potential for “bad actors” to misuse AI technology, such as Russian President Vladimir Putin giving robots autonomy that could lead to dangerous outcomes.

“Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning,” he said in a recent interview aired by the BBC.

“And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”

“This is just a kind of worst-case scenario, kind of a nightmare scenario,” he continued.

“You can imagine, for example, some bad actor like Putin decided to give robots the ability to create their own sub-goals.”

He emphasised that the type of intelligence being developed through AI is very different from the intelligence of biological systems like humans.

“We‘re biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world,” he said.

“And all these copies can learn separately but share their knowledge instantly. So it‘s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
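A minimal sketch of the mechanism Hinton is describing, as I read it (my own illustration, assuming plain data-parallel gradient averaging; real systems are far more elaborate): several copies of the same weights each learn from their own data, then pool their updates so every copy instantly has what the others learned.

```python
import numpy as np

# My own illustration (not from the article): four copies of one set of
# weights each compute an update from private data, then average the
# updates -- so every copy instantly "knows" what each one learned.

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])     # the pattern hidden in the data
w = np.zeros(3)                         # the shared "model of the world"

def private_batch():
    X = rng.normal(size=(16, 3))        # data only this copy ever sees
    return X, X @ true_w

for step in range(200):
    grads = []
    for _ in range(4):                  # four copies learning separately
        X, y = private_batch()
        grads.append(2 * X.T @ (X @ w - y) / len(y))
    w -= 0.05 * np.mean(grads, axis=0)  # instant knowledge sharing

print(np.round(w, 2))                   # converges toward true_w
```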

Although Hinton’s pioneering research on deep learning and neural networks has paved the way for current AI systems, he said that he did not want to criticise Google and that the company had been “very responsible” in its approach to AI. He explained that, at 75, he had decided it was time to retire.

In response to Hinton’s resignation, Google’s chief scientist Jeff Dean stated that the company remains committed to a responsible approach to AI and is continually learning to understand emerging risks while also innovating boldly.

Hinton’s comments came after an Australian artificial intelligence researcher warned the nation of the devastating powers the next generation of AI could possess.

Major nations around the world, such as China, the United States, and Russia, have already identified AI as a crucial component of the future military landscape and are racing to advance their capabilities.

“Australia needs to consider how it might defend itself in an AI-enabled world, where terrorists or rogue states can launch swarms of drones against us – and where it might be impossible to determine the attacker,” the artificial intelligence expert said.

“A review that ignores all of this leaves us woefully unprepared for the future.

“We also need to engage more constructively in ongoing diplomatic discussions about the use of AI in warfare. Sometimes the best defence is to be found in the political arena, and not the military one.”

Even the CEO of OpenAI, the company developing ChatGPT, admitted that there are real dangers caused by their exploits.

“We’ve got to be careful here,” Sam Altman told ABC News last month.

“I think people should be happy that we are a little bit scared of this. I’m particularly worried that these models could be used for large-scale disinformation.

“Now that they’re getting better at writing computer code, it could be used for offensive cyber-attacks.”

Altman defends his company, now worth several billion dollars, saying its work could be “the greatest technology humanity has yet developed”.

Shrugging off the potential for the AI to begin communicating and commanding itself, Altman reassures us that ChatGPT is still a tool that is “very much in human control”, at least for now.

“There will be other people who don’t put some of the safety limits that we put on,” he said. “Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”
 

Mike Bulgakov

G.O.A.T.

How to defend against the rise of ChatGPT? Think like a poet.​

By Jaswinder Bolina
April 20, 2023 at 10:27 a.m. EDT

Excerpts;

AI bots aren’t so much artificially “intelligent” as they are opportunistically efficient at learning from the bland patterns in our language. Entire industries have been built around cliched and predictable writing and thinking, from adspeak to clickbait media to the formulaic pop songs, movies and television that suck up our free time. There is so much blasé filler for AI to mine, and every sentence, paragraph and document on ChatGPT’s kill list is another example of human expression so devoid of personality that the person is rendered superfluous.

As AI proliferates, this lack of originality in our daily language is what will render so many of our jobs irrelevant. But this is where I become optimistic. Because to me, it’s clear that one of our best defenses against the rise of the writing machines might be to learn how to think like a poet.

Sure, I’m biased, but consider what the making of a poem — that small (or large) artifact William Carlos Williams famously called a “machine made of words” — can teach us.

Diametrically opposed to cliche, poets are trained to invent and reinvent language to arrive at fresh expressions of our angst, joy, anguish and wonder.

Here is ChatGPT’s ultimate weakness laid bare. It knows nothing of life except what it learns from us, and to learn, it needs our language. But where that language model is small, unusual and unpatterned, the machines can’t ape us.

After all, AI is coming for our doctors, coders, engineers and lawyers, too. Even in these fields, the career paths that wind into the yellow wood of our AI-enhanced future will belong to those inventive enough to use technology in ways no algorithm can emulate or predict.
https://www.washingtonpost.com/opinions/2023/04/20/chatgpt-poetry-ai-language/

ChatGPT is the ‘terrifying’ subtext of the writers’ strike that is reshaping Hollywood​

BY JAKE COYLE AND THE ASSOCIATED PRESS
May 5, 2023 at 8:11 AM PDT

Excerpts:

Not six months since the release of ChatGPT, generative artificial intelligence is already prompting widespread unease throughout Hollywood. Concern over chatbots writing or rewriting scripts is one of the leading reasons TV and film screenwriters took to picket lines earlier this week.

Though the Writers Guild of America is striking for better pay in an industry where streaming has upended many of the old rules, AI looms as rising anxiety.

“AI is terrifying,” said Danny Strong, the “Dopesick” and “Empire” creator. “Now, I’ve seen some of ChatGPT’s writing and as of now I’m not terrified because Chat is a terrible writer. But who knows? That could change.”

AI chatbots, screenwriters say, could potentially be used to spit out a rough first draft with a few simple prompts (“a heist movie set in Beijing”). Writers would then be hired, at a lower pay rate, to punch it up.

Screenplays could also be slyly generated in the style of known writers. What about a comedy in the voice of Nora Ephron? Or a gangster film that sounds like Mario Puzo? You won’t get anything close to “Casablanca” but the barest bones of a bad Liam Neeson thriller isn’t out of the question.

The WGA’s basic agreement defines a writer as a “person” and only a human’s work can be copyrighted. But even though no one’s about to see a “By AI” writers credit at the beginning of a movie, there are myriad ways that generative AI could be used to craft outlines, fill in scenes and mock up drafts.

“We’re not totally against AI,” says Michael Winship, president of the WGA East and a news and documentary writer. “There are ways it can be useful. But too many people are using it against us and using it to create mediocrity. They’re also in violation of copyright. They’re also plagiarizing.”

AI has already filtered into nearly every part of moviemaking. It’s been used to de-age actors, remove swear words from scenes in post-production, supply viewing recommendations on Netflix and posthumously bring back the voices of Anthony Bourdain and Andy Warhol.

The Screen Actors Guild, set to begin its own bargaining with the AMPTP this summer, has said it’s closely following the evolving legal landscape around AI.

The implications for screenwriting are only just being explored. Actors Alan Alda and Mike Farrell recently reconvened to read through a new scene from “M*A*S*H” written by ChatGPT. The results weren’t terrible, though they weren’t so funny, either.

Writers have long been among the most notoriously exploited talents in Hollywood. The films they write usually don’t get made. If they do, they’re often rewritten many times over. Raymond Chandler once wrote “the very nicest thing Hollywood can possibly think to say to a writer is that he is too good to be only a writer.”

Screenwriters are accustomed to being replaced. Now, they see a new, readily available and inexpensive competitor in AI — albeit one with a slightly less tenuous grasp of the human condition.

“They’re afraid that if the use of AI to do all this becomes normalized, then it becomes very hard to stop the train,” says James Grimmelmann, a professor of digital and information law at Cornell University. “The guild is in the position of trying to imagine lots of different possible futures.”
https://fortune.com/2023/05/05/hollywood-writers-strike-wga-chatgpt-ai-terrifying-replace-workers/
 

Lleytonstation

Talk Tennis Guru
After all, AI is coming for our doctors, coders, engineers and lawyers, too. Even in these fields, the career paths that wind into the yellow wood of our AI-enhanced future will belong to those inventive enough to use technology in ways no algorithm can emulate or predict.
This reminds me of the I, Robot scene where Will Smith asks the robot if it can draw a masterpiece (which it does), and its response was, "Can you?"

We are in the infancy stage of an exponentially increasing AI that will very soon do anything a human can. Perception is reality, and they will be the masters of perception.

No algorithm now? Not yet. 5-10 years from now? Absolutely.
 

Mike Bulgakov

G.O.A.T.
Perception is reality, and they will be the masters of perception.
"Perception is reality" is something that I often hear people say, but I'm never sure how literally they take the statement. My view is that perception is not reality, but shapes future reality through human behavior, which is why propaganda, advertising, and the PR industry are so effective. I definitely believe in an objective reality, while human perception of it is subjective and incomplete, so individuals will perceive reality differently. The idea that "perception is reality" brings to mind George Berkeley's subjective idealism of the 1700s.

How would you answer Berkeley's famous question, "If a tree falls in a forest and no one is around to hear it, does it make a sound?"?
 

Lleytonstation

Talk Tennis Guru
"Perception is reality" is something that I often hear people say, but I'm never sure how literally they take the statement. My view is that perception is not reality, but shapes future reality through human behavior, which is why propaganda, advertising, and the PR industry are so effective. I definitely believe in an objective reality, while human perception of it is subjective and incomplete, so individuals will perceive reality differently. The idea that "perception is reality" brings to mind George Berkeley's subjective idealism of the 1700s.

How would you answer Berkeley's famous question, "If a tree falls in a forest and no one is around to hear it, does it make a sound?"?
Of course the tree makes a noise. The noise is needed, just as the action is. Life is all connected. There would be mass changes in life if things made no sound unless a human heard them.

But we do not perceive the sound; we know it happens, based on physics.

My point is that perception is something that can be manipulated. AI will do that better than we humans can. We perceive certain forms of writing to be poetic and lovely, and AI will easily be able to twist your perception into believing its writing is of human nature as well.

The difference between us and AI is that we ask the question "does the fallen tree make a noise?" while AI makes it so that the tree simply never fell; it changes your perception not only of what did happen, but of what could.

AI will be the most powerful being on the planet by 2050.
 

vokazu

Legend

McDonald’s ends AI experiment after drive-thru ordering blunders​

After working with IBM for three years to leverage AI to take drive-thru orders, McDonald’s called the whole thing off in June 2024. The reason? A slew of social media videos showing confused and frustrated customers trying to get the AI to understand their orders.

One TikTok video in particular featured two people repeatedly pleading with the AI to stop as it kept adding more Chicken McNuggets to their order, eventually reaching 260. In a June 13, 2024, internal memo obtained by trade publication Restaurant Business, McDonald’s announced it would end the partnership with IBM and shut down the tests.
 

Crocodile

G.O.A.T.
The world's most famous physicist is warning about the risks posed by machine superintelligence, saying that it could be the most significant thing to ever happen in human history — and possibly the last.
As we've discussed extensively here at io9, artificial superintelligence represents a potential existential threat to humanity, so it's good to see such a high profile scientist both understand the issue and do his part to get the word out.


http://io9.com/stephen-hawking-says-a-i-could-be-our-worst-mistake-in-1570963874

http://www.independent.co.uk/news/s...re-we-taking-ai-seriously-enough-9313474.html

Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it might play out differently from in the movie: as Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a "singularity" and Johnny Depp's movie character calls "transcendence".​
One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.​
So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.​
I think Stephen Hawking is right to express the concerns he has, and I don’t believe that the citizens of the world have had their say on the matter. It’s been pushed on the world, and like a lot of things at the moment, it can be used either for freedom or for control of the individual.
People have mocked me in other threads for expressing what I think are reasonable views, so I don’t want that to happen here, but in my opinion many things up to now have resulted in more control over the people.
 

Bartelby

Bionic Poster
American hi-tech corporations are in charge of this technology, not the Davos Dude.

I think Stephen Hawking is right to express the concerns he has, and I don’t believe that the citizens of the world have had their say on the matter. It’s been pushed on the world, and like a lot of things at the moment, it can be used either for freedom or for control of the individual.
People have mocked me in other threads for expressing what I think are reasonable views, so I don’t want that to happen here, but in my opinion many things up to now have resulted in more control over the people.
 

Bartelby

Bionic Poster
A sound is what auditory equipment picks up and there is no reason to privilege the human ear.

And if there is no word for the colour "blue" then all seas are "wine-dark".

"Perception is reality" is something that I often hear people say, but I'm never sure how literally they take the statement. My view is that perception is not reality, but shapes future reality through human behavior, which is why propaganda, advertising, and the PR industry are so effective. I definitely believe in an objective reality, while human perception of it is subjective and incomplete, so individuals will perceive reality differently. The idea that "perception is reality" brings to mind George Berkeley's subjective idealism of the 1700s.

How would you answer Berkeley's famous question, "If a tree falls in a forest and no one is around to hear it, does it make a sound?"?
 

Mike Bulgakov

G.O.A.T.
A sound is what auditory equipment picks up and there is no reason to privilege the human ear.

And if there is no word for the colour "blue" then all seas are "wine-dark".
George Berkeley's subjective idealism postulated that there is no such thing as objective reality. He was a deeply religious Anglican bishop who believed there is only consciousness (the phenomenal in Kant's transcendental idealism) and no actual material reality. He explained experiences like leaving a room with a burning candle that no one observes, and coming back later to find the candle shorter, as possible only because God sees all. Obviously, his ideas were in conflict with the burgeoning Enlightenment ideas and scientific processes of his time.

As an aside, he was an Irish citizen, but also a colonizing Protestant British landowner who saw the Irish as an inferior race and Catholics as an affront to his beliefs.
 

Better_Call_Raul

Hall of Fame
One TikTok video in particular featured two people repeatedly pleading with the AI to stop as it kept adding more Chicken McNuggets to their order, eventually reaching 260. In a June 13, 2024, internal memo obtained by trade publication Restaurant Business, McDonald’s announced it would end the partnership with IBM and shut down the tests.

Granted, speech recognition accuracy is vastly improving, but it is total overkill for fast-food orders. People can simply enter their order on a touchscreen and confirm it.
Only an utter dunderhead relies on the accuracy of speech recognition for drive-thru orders.
This is a case where people just need to get smarter and perform a basic task instead of relying on AI.
 