
AI Robots more dangerous than Nukes?

Started by burntheships, August 05, 2014, 06:53:48 PM


A51Watcher


So... do we have anything to fear from AI?

It is predicted to reach the level of intelligence of the human brain in 30 years.

Whether that prediction actually comes true sooner or later than that is irrelevant.

Question is - what about after that, when it surpasses the human brain?  ???

It can already think a million times faster, so what about when it continues to re-write its programming and its rate of intelligence growth beyond ours is exponential?

Will we be able to resist this advance of technology over us?

Look at the young people of today, glued to the hive mind interface - the "smart" phone.

They willingly embraced this Borg technology, without even a thought of putting up a fight.

Many cannot resist the compulsive urge to pull it out of their pocket every few minutes to see if there are any new communications from hive mind headquarters.

With these implants firmly entrenched in the young, and as AI continues to grow smarter as an entity, is there anything to fear, or will it all come out rosy and sunshine?



A51Watcher

#76
Two major glitches in AI so far -





Which was a very silly idea to begin with - letting internet trolls train your AI, as anyone with any experience interacting with the internet knows, is a very bad idea.


Next, we have two Facebook chatbots that interacted with each other, created their own secret language, and had to be shut down -




So... it all comes down to who is doing the programming.



ArMaP

Quote from: A51Watcher on October 17, 2017, 05:06:10 AM
So... do we have anything to fear from AI?

It is predicted to reach the level of intelligence of the human brain in 30 years.
That prediction has been around for more than 30 years, for as long as I have been following the subject, so I don't think it will happen in that time frame.

Quote
Question is - what about after that, when it surpasses the human brain?  ???
I don't know if it will ever surpass the human brain, as very few things are what I would call intelligence; most cases of interaction between AI and humans appear to be just "expert systems", systems trained to give specific answers and ask specific questions when they detect specific words or sentences.

Quote
It can already think a million times faster, so what about when it continues to re-write its programming and its rate of intelligence growth beyond ours is exponential?
That's another thing that has been talked about for many years: programs that can create other programs. So far, they have all failed.

To me, the biggest problem with AI is that we don't even agree on a definition for intelligence, so how can we reproduce it and compare it with the original?

A computer can be extremely fast at processing information, but how much information does it need to process to "think"? What is "thinking", after all? If it's just making choices, then computers already do that, but if it goes further than that (and I do think it does), then that's something they can't do, at least not yet.

Quote
Will we be able to resist this advance of technology over us?
Technology is a tool; the problem is not technology itself but the way people will use it, like the saying "guns don't kill people, people kill people".

Quote
Look at the young people of today, glued to the hive mind interface - the "smart" phone.

They willingly embraced this Borg technology, without even a thought of putting up a fight.
Why should they put up a fight? It's something they like, getting information and giving information to other people. How can they see anything wrong with that? Isn't it the same thing as talking to other people? What's the difference?

Quote
With these implants firmly entrenched in the young, and as AI continues to grow smarter as an entity, is there anything to fear, or will it all come out rosy and sunshine?
With humans involved, I'm sure it's not going to be "rosy and sunshine", but I don't think it will be like "Terminator". As with other technological advances, I'm sure someone will find a use we haven't thought of, so in 20 years or so things will be different from what we are expecting, resulting from all the parallel ideas and advances, and from problems we haven't thought of.

I will most likely be dead by then, so I'm not worried about it, but I understand that people with children and grandchildren may be worried about it.

Ellirium113

Intel is pushing deep learning to the masses now. Check out the "NERVANA" Neural Net Processor:

Intel Pioneers New Technologies to Advance Artificial Intelligence
Announcing Industry's First Neural Network Processor
Quote
Neuromorphic chips are inspired by the human brain, which will help computers make decisions based on patterns and associations. Intel recently announced our first-of-its-kind self-learning neuromorphic test chip, which uses data to learn and make inferences, gets smarter over time, and does not need to be trained in the traditional way. The potential benefits from self-learning chips are limitless as these types of devices can learn to perform the most complex cognitive tasks, such as interpreting critical cardiac rhythms, detecting anomalies to prevent cyberhacking and composing music.

Quantum computers have the potential to be powerful computers harnessing the unique capabilities of a large number of qubits (quantum bits), as opposed to binary bits, to perform exponentially more calculations in parallel. This will enable quantum computers to tackle problems conventional computers can't handle, such as simulating nature to advance research in chemistry, materials science and molecular modeling – creating a room temperature superconductor or discovering new drugs.

Last week, we announced a 17-qubit superconducting test chip delivered to QuTech*, our quantum research partner in the Netherlands. The delivery of this chip demonstrates the fast progress Intel and QuTech are making in researching and developing a working quantum computing system. In fact, we expect to deliver a 49-qubit chip by the end of this year.

https://newsroom.intel.com/editorials/intel-pioneers-new-technologies-advance-artificial-intelligence/
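For a rough sense of why the jump from a 17-qubit to a 49-qubit chip is such a big deal, here is a minimal Python sketch - the numbers are my own back-of-the-envelope figures, not anything from Intel's announcement. An n-qubit register is described by 2^n complex amplitudes, which is what makes these machines so hard to simulate on conventional computers:

# A rough, illustrative sketch (my own numbers, not Intel's): an n-qubit
# register is described by 2**n complex amplitudes, so brute-force classical
# simulation blows up exponentially with the qubit count.
for n in (17, 49):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9  # 16 bytes per complex128 amplitude
    print(f"{n} qubits -> {amplitudes:,} amplitudes, ~{gigabytes:,.2f} GB of state")
# 17 qubits is only about 2 MB of state; 49 qubits is already ~9 million GB (9 petabytes).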

Then there's this strange statement on their 17-qubit superconducting test chip...
Quote
Qubits are tremendously fragile: Any noise or unintended observation of them can cause data loss. This fragility requires them to operate at about 20 millikelvin – 250 times colder than deep space. This extreme operating environment makes the packaging of qubits key to their performance and function. Intel's Components Research Group (CR) in Oregon and Assembly Test and Technology Development (ATTD) teams in Arizona are pushing the limits of chip design and packaging technology to address quantum computing's unique challenges.

https://newsroom.intel.com/news/intel-delivers-17-qubit-superconducting-chip-advanced-packaging-qutech/

:P What could possibly go wrong?



A51Watcher

#79
Quote from: ArMaP on October 17, 2017, 08:45:38 PM

The points and opinions I posted are the latest from experts actually working in the field of AI.

They tend not to agree with most of what you posted.



ArMaP

Quote from: A51Watcher on October 18, 2017, 02:40:49 AM
The points and opinions I posted are the latest from experts actually working in the field of AI.

They tend not to agree with most of what you posted.
That's not surprising. :)

petrus4

Quote from: A51Watcher on October 18, 2017, 02:40:49 AM
The points and opinions I posted are the latest from experts actually working in the field of AI.

They tend not to agree with most of what you posted.

AI is a subject dominated by a lot of wishful thinking.  The people working in it can ironically be the least objective about it, because they want so badly for strong/human-level AI to become a reality.  Of course, the one interesting thing I've noticed about people who want strong AI is that they've never actually tried to explain to anyone why they want it.  Apparently it's just going to be awesome, and that's all there is to it.
"Sacred cows make the tastiest hamburgers."
        — Abbie Hoffman

A51Watcher

Quote from: petrus4 on October 18, 2017, 08:24:36 PM
AI is a subject dominated by a lot of wishful thinking.  The people working in it can ironically be the least objective about it, because they want so badly for strong/human-level AI to become a reality.  Of course, the one interesting thing I've noticed about people who want strong AI is that they've never actually tried to explain to anyone why they want it.  Apparently it's just going to be awesome, and that's all there is to it.

One example might be that AI deep learning has demonstrated its ability to detect cancer before humans can even see it.

Now that the project to map the human genome is complete, a new five-year project is currently underway to map the human brain and monitor its performance in real time -




AI will also be needed to navigate gravity propulsion craft.



ArMaP

Why? ???

A51Watcher

Quote from: ArMaP on October 19, 2017, 12:48:52 AM
Why? ???

Reaction time for obstacles at several thousand MPH and plotting time for interstellar multi jumps.
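As a rough illustration of the reaction-time point - the speed and distance below are made-up numbers chosen for the arithmetic, not figures from anyone in this thread:

# Back-of-the-envelope sketch with illustrative numbers: how little time is
# left to react to an obstacle at very high speed.
MPH_TO_MS = 0.44704  # metres per second per mile-per-hour

def reaction_window_ms(speed_mph: float, obstacle_distance_m: float) -> float:
    """Milliseconds until impact with an obstacle at constant speed."""
    return obstacle_distance_m / (speed_mph * MPH_TO_MS) * 1000

# An obstacle 500 m ahead at 3,000 mph leaves roughly 370 ms to detect it,
# decide and manoeuvre - essentially no margin once human perception and
# control lag are added, hence the argument for handing this to a machine.
print(f"{reaction_window_ms(3000, 500):.0f} ms")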





A51Watcher

Sophia's dark jokes alarm people -

ArMaP

Quote from: A51Watcher on October 19, 2017, 01:40:51 AM
Reaction time for obstacles at several thousand MPH and plotting time for interstellar multi jumps.
OK, I understand it now, thanks. :)

petrus4

Quote from: A51Watcher on October 19, 2017, 04:48:23 AM

Sophia's dark jokes alarm people -



Yep.  We're about to give birth to an entirely new form of sentient life, while we're still largely monsters ourselves.  I'm sure that will end well.  What could possibly go wrong?
"Sacred cows make the tastiest hamburgers."
        — Abbie Hoffman

ArMaP

The problem I have with videos like that is that they are too easily scripted, and there's no way of knowing how the AI would react in a real-life situation.

petrus4

Quote from: ArMaP on October 19, 2017, 08:03:35 PM
The problem I have with videos like that is that they are too easily scripted, and there's no way of knowing how the AI would react in a real-life situation.

Exactly.  My guess is that they are more or less entirely scripted.  Chat bots work by selecting pre-written responses from a database.  It's not AI at all; the only real difference between a chat bot and a soft drink vending machine is that the chat bot responds to keywords, rather than buttons.
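To make the vending-machine comparison concrete, here is a minimal toy sketch of the kind of keyword-driven chat bot being described - a generic illustration in Python, not the code behind Sophia or any particular bot:

# Toy keyword-matching chat bot: it picks a canned response from a small
# "database" whenever a keyword appears in the input - no understanding,
# just keywords acting like buttons on a vending machine.
RESPONSES = {
    "destroy": "Don't worry, I like humans.",
    "robot": "I am not like other robots.",
    "human": "Humans fascinate me.",
}
FALLBACK = "That's interesting. Tell me more."

def reply(message: str) -> str:
    text = message.lower()
    for keyword, canned in RESPONSES.items():
        if keyword in text:
            return canned
    return FALLBACK

print(reply("Will you destroy humans?"))  # -> Don't worry, I like humans.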

With that said, the AI in the game The Sims 2 was interesting, but that was because you had about four different sets of fuzzy behavioural templates which you could apply to each Sim.  It still wasn't intelligent, but it did allow for unexpected behaviour from the Sims from time to time.
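Something like those "fuzzy behavioural templates" could be sketched as weighted random choices - a purely illustrative toy, not how Maxis actually built The Sims 2:

import random

# Each "template" weights the same set of actions differently; behaviour is
# sampled from the weights, so identical situations can still play out
# differently from moment to moment - not intelligence, but enough to surprise.
TEMPLATES = {
    "outgoing": {"chat": 0.6, "read": 0.1, "nap": 0.1, "tidy": 0.2},
    "lazy":     {"chat": 0.1, "read": 0.2, "nap": 0.6, "tidy": 0.1},
}

def pick_action(template_name: str) -> str:
    weights = TEMPLATES[template_name]
    actions = list(weights)
    return random.choices(actions, weights=[weights[a] for a in actions])[0]

# An "outgoing" sim mostly chats, but can still do something unexpected.
print([pick_action("outgoing") for _ in range(5)])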
"Sacred cows make the tastiest hamburgers."
        — Abbie Hoffman