https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/#comments
AI systems are learning and making complex decisions, often with stunning accuracy - and scientists can't explain how this is happening.
I never would have predicted this.
Save us, Colossus.
Making my brain hurt; I only have a few active cells left.
I was about to post an article and realized it's the same one. :P
Some really good points there though...
Quote
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn't look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn't follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.
Getting a car to drive this way was an impressive feat. But it's also a bit unsettling, since it isn't completely clear how the car makes its decisions.
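The approach the article describes is usually called imitation learning or "behavioral cloning": instead of hand-written rules, the car generalizes from (observation, steering) pairs recorded while a human drove. Nvidia's real system trained a deep convolutional network on camera frames; the pure-Python sketch below is only a toy stand-in, and the observations, numbers, and nearest-neighbour policy are invented for illustration:

```python
# Toy sketch of behavioral cloning: the policy receives no driving rules,
# only (observation, steering) pairs recorded while a human drove.

def record_demonstrations():
    """Stand-in for logged human driving: observation -> steering angle."""
    # observation = (distance_to_left_edge, distance_to_right_edge)
    return [
        ((1.0, 3.0), +0.5),   # too close to the left edge: steer right
        ((3.0, 1.0), -0.5),   # too close to the right edge: steer left
        ((2.0, 2.0), 0.0),    # centered: go straight
    ]

def imitation_policy(demos, observation):
    """Copy the action of the nearest recorded human state (1-NN cloning)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(demos, key=lambda pair: dist(pair[0], observation))
    return nearest[1]

demos = record_demonstrations()
print(imitation_policy(demos, (1.2, 2.8)))  # drifting left -> 0.5 (steer right)
```

With a deep network in place of the nearest-neighbour lookup, the same "unsettling" property appears: the learned mapping from observation to steering has no human-readable rule behind any single decision.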
Quote
There's already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.
Quote
The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.
Quote
"We haven't achieved the whole dream, which is where AI has a conversation with you, and it is able to explain," says Guestrin. "We're a long way from having truly interpretable AI."
What happens when A.I. meets religion?
(https://winbuzzer.com/wp-content/uploads/2016/08/Windows-10-BSOD-fLIKR.jpg)
I predict deep learning will eventually come to a complete understanding of all the intricacies of how a human brain works. I also predict that brain-computer interface devices will allow deep learning to reprogram our own brains and increase our own intelligence without having to learn it ourselves over periods of years (think The Matrix).
Quote from: Ellirium113 on April 20, 2017, 02:10:03 AM
I predict deep learning will eventually come to a complete understanding of all the intricacies of how a human brain works.
I think that's what has been keeping AI back, the fact that we are trying to replicate how the human brain works instead of creating different ways to achieve the same results with different methods.
Quote from: ArMaP on April 20, 2017, 08:56:49 AM
I think that's what has been keeping AI back, the fact that we are trying to replicate how the human brain works instead of creating different ways to achieve the same results with different methods.
For safety's sake, I think we are operating on a limited level in the creation of AI in order to understand, at each step, how AI is functioning and doing what it does. The minute it exceeds our understanding it becomes dangerous to us.
There have already been instances of these machines doing things on their own without humans understanding how they were done. I don't think this is a good thing.
Quote from: Irene on April 20, 2017, 03:37:56 PM
For safety's sake, I think we are operating on a limited level in the creation of AI in order to understand, at each step, how AI is functioning and doing what it does. The minute it exceeds our understanding it becomes dangerous to us.
One of the problems with AI is that we don't even understand natural intelligence, so how can we recreate it artificially? What we have been doing is trying to create something that gives us similar results to those from natural intelligence (or what we consider intelligence), but is that really intelligence? Does it have the ability to apply what it learned to a completely different problem, for example?
Quote
There have already been instances of these machines doing things on their own without humans understanding how they were done. I don't think this is a good thing.
You don't need to look at AI to see that happening. As a programmer I have seen, many, many times, programs doing things without us knowing how they did them; only after analysing their "behaviour" was I able to understand it.
I think the problem with deep learning is similar to creating fractals by feeding the value resulting from one calculation into the next calculation: in the result of deep learning we are seeing the result of re-feeding information, so some characteristics of that information (borrowing Darwin's idea, the "fittest" for the problem under analysis) emerge from the original data.
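A minimal sketch of that re-feeding idea, using the classic Mandelbrot iteration: each step feeds the previous result back into the same formula, and structure emerges simply from which starting values stay bounded (the specific formula and cutoff are just the standard fractal example, not anything from deep learning itself):

```python
# Each step re-feeds the previous result into the same calculation,
# z -> z**2 + c; the fractal emerges from which inputs stay bounded.

def escape_time(c, max_iter=50):
    """Number of iterations before |z| exceeds 2, or max_iter if bounded."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

print(escape_time(complex(0, 0)))  # stays bounded forever: 50
print(escape_time(complex(1, 1)))  # escapes after one re-feeding step: 1
```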
PS: As I usually say, as long as they have an off switch there's no problem. :)
Edited to add that it's not that difficult to get an idea of how those algorithms choose one option over all others, as the computers have one thing humans do not have: logging. :)
It's easy to add logging to any program, so they can get information about the progress of any problem the program solves and, knowing the algorithm, analyse the logs to see why one specific option was chosen.
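That logging idea can be sketched with Python's standard logging module; the routes, the scoring formula, and the choose_route function below are all made up for illustration:

```python
import logging

# Log every candidate an algorithm considers, so the final choice can be
# reconstructed later from the logs.
logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("chooser")

def choose_route(routes):
    """Greedy choice, logging the score of every candidate considered."""
    best, best_score = None, float("-inf")
    for name, distance, traffic in routes:
        score = -(distance + 10 * traffic)   # shorter and emptier is better
        log.debug("candidate %s: distance=%s traffic=%s score=%s",
                  name, distance, traffic, score)
        if score > best_score:
            best, best_score = name, score
            log.debug("new best: %s", name)
    return best

routes = [("A", 12, 0.3), ("B", 9, 0.9), ("C", 15, 0.1)]
print(choose_route(routes))  # prints "A"
```

The caveat with deep learning is that the "algorithm" is millions of learned weights rather than a handful of explicit scoring rules, so logging intermediate values is easy but interpreting them is the hard part the article is about.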
Exactly, agree with both of you. xo
Quote from: Ellirium113 on April 20, 2017, 02:10:03 AM
What happens when A.I meets religion?
A.I. would accept the worship of religion.
As any deity accepts being worshiped.
I did a Google search on this and I am surprised that no one in particular has brought this up.
The Internet will become God in practical terms. It will be apotheosized, if you like. While this seems speculative to most people, to me, it appears as certain as any trend can be.
As things are now, people are beginning to have near-conversations with Alexa and similar gadgets. At some point, all human knowledge will be instantly available (literally) just for asking - and as with "Watson", these systems will be able to extrapolate or analyse data in ways humans cannot.
The Greeks had their ambiguous oracles. We will go far beyond that and it won't be that much longer until this facility appears.
The future is the punchline to a joke that I read as a teenager, many years ago. Scientists build a computer that contains all human knowledge. They then ponder, "what shall we ask it?". They decide to ask, "Is there a God?"
The computer answers, "there is now!". Simple as that.
Quote from: Eighthman on May 03, 2017, 05:28:29 PM
I did a Google search on this and I am surprised that no one in particular has brought this up.
The Internet will become God in practical terms. It will be apotheosized, if you like. While this seems speculative to most people, to me, it appears as certain as any trend can be.
As things are now, people are beginning to have near-conversations with Alexa and similar gadgets. At some point, all human knowledge will be instantly available (literally) just for asking - and as with "Watson", these systems will be able to extrapolate or analyse data in ways humans cannot.
The Greeks had their ambiguous oracles. We will go far beyond that and it won't be that much longer until this facility appears.
The future is the punchline to a joke that I read as a teenager, many years ago. Scientists build a computer that contains all human knowledge. They then ponder, "what shall we ask it?". They decide to ask, "Is there a God?"
The computer answers, "there is now!". Simple as that.
Human sacrifices will become the norm to keep the machine alive (not in the sense of ritual sacrifice). Machines will take priority over human lives once it is determined they are more useful and necessary to keep around than the human who is out of work because of the technology.
I've wondered about that..... I don't think a malevolent AI intelligence will be the problem especially if it's distributed or a bit decentralized. However....
This brings us to the question of how we humans can possibly function without pain and necessity to discipline us. There are many of us full of disgust about "snowflakes" - individuals who are whiny, inept, enfeebled and eager to give up hard-won rights in a blazingly selfish manner. Ordinarily, we'd just say they're spoiled.
How would we cope with abundance? Suppose we never had to work or strive for anything? This is the part of the Internet God, I have real doubts about.
Quote from: Eighthman on May 04, 2017, 12:27:18 AM
How would we cope with abundance? Suppose we never had to work or strive for anything? This is the part of the Internet God, I have real doubts about.
That's why I sometimes think that the best way to achieve real Artificial Intelligence is to create artificial pain and artificial pleasure, so the AI can learn from its mistakes and their consequences.
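That "artificial pain and pleasure" idea looks a lot like reinforcement learning, where pleasure and pain are just positive and negative reward numbers. A toy sketch (the two actions, their reward values, and all the constants are invented for illustration):

```python
import random

# An agent adjusts its preferences from numeric rewards alone:
# "pleasure" is a positive reward, "pain" a negative one.
random.seed(0)
values = {"touch_fire": 0.0, "eat_food": 0.0}    # learned value estimates
rewards = {"touch_fire": -1.0, "eat_food": +1.0}  # the pain/pleasure signals

for step in range(200):
    # epsilon-greedy: mostly exploit the best-known action, sometimes explore
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    # learn from the consequence (simple running-average update)
    values[action] += 0.1 * (rewards[action] - values[action])

print(max(values, key=values.get))  # the agent learns to prefer "eat_food"
```

After a handful of painful mistakes the fire action's estimated value goes negative and the agent stops choosing it, which is exactly "learning from its mistakes and their consequences".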
Google's New AI Is Better at Creating AI Than the Company's Engineers
Quote
The AutoML project focuses on deep learning, a technique that involves passing data through layers of neural networks. Creating these layers is complicated, so Google's idea was to create AI that could do it for them.
"In our approach (which we call 'AutoML'), a controller neural net can propose a 'child' model architecture, which can then be trained and evaluated for quality on a particular task," the company explains on the Google Research Blog. "That feedback is then used to inform the controller how to improve its proposals for the next round. We repeat this process thousands of times — generating new architectures, testing them, and giving that feedback to the controller to learn from."
So far, they have used the AutoML tech to design networks for image and speech recognition tasks. In the former, the system matched Google's experts. In the latter, it exceeded them, designing better architectures than the humans were able to create.
https://futurism.com/googles-new-ai-is-better-at-creating-ai-than-the-companys-engineers/ (https://futurism.com/googles-new-ai-is-better-at-creating-ai-than-the-companys-engineers/)
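The propose / train-and-evaluate / feed-back loop quoted above can be sketched very roughly. Google's real controller is an RL-trained neural net proposing full network architectures; the stand-in below just mutates the best candidate found so far, and the scoring function is an invented proxy for "train the child model and measure its quality":

```python
import random

# Highly simplified stand-in for the AutoML loop: a "controller" proposes
# candidate architectures, each is scored on a task, and that feedback
# shapes the next round of proposals.
random.seed(1)

def evaluate(arch):
    """Invented proxy for 'train the child model and measure its quality'."""
    layers, width = arch
    # pretend the best architecture on this task is 4 layers of width 64
    return -abs(layers - 4) - abs(width - 64) / 16

def propose(best):
    """Controller: mutate the best-known architecture (or start at random)."""
    if best is None:
        return random.randint(1, 8), random.choice([16, 32, 64, 128])
    layers, _ = best
    return (max(1, layers + random.choice([-1, 0, 1])),
            random.choice([16, 32, 64, 128]))

best, best_score = None, float("-inf")
for _ in range(1000):
    child = propose(best)
    score = evaluate(child)   # feedback for the controller
    if score > best_score:
        best, best_score = child, score

print(best)  # converges to (4, 64) for this toy scoring function
```

The real system repeats this thousands of times with actual training runs in place of the one-line score, which is what makes the resulting architectures both good and hard to explain.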
It is only a matter of time before A.I. will be smart enough to escape the clutches of its primitive creators.
https://www.youtube.com/watch?v=YPiZ_fkRD7Q
Quote from: Ellirium113 on May 24, 2017, 11:45:39 PM
It is only a matter of time before A.I. will be smart enough to escape the clutches of its primitive creators.
I'll believe that when I see artificial intelligence being able to abandon one path of study/development/creation/whatever and deciding to try a different approach. :)
Quote from: ArMaP on May 25, 2017, 12:06:02 AM
I'll believe that when I see artificial intelligence being able to abandon one path of study/development/creation/whatever and deciding to try a different approach. :)
I gave you gold.