Re: Rise of the Robots | Machine Learning
Posted: Fri Jan 26, 2018 3:21 am
https://www.onthenatureofthings.net/forum/viewtopic.php?t=123
Wired | Greedy, Brittle, Opaque, and Shallow: The Downsides to Deep Learning

According to skeptics like Marcus, deep learning is greedy, brittle, opaque, and shallow.

The systems are greedy because they demand huge sets of training data.

Brittle because when a neural net is given a “transfer test”—confronted with scenarios that differ from the examples used in training—it cannot contextualize the situation and frequently breaks.

They are opaque because, unlike traditional programs with their formal, debuggable code, the parameters of neural networks can only be interpreted in terms of their weights within a mathematical geography. Consequently, they are black boxes, whose outputs cannot be explained, raising doubts about their reliability and biases.

Finally, they are shallow because they are programmed with little innate knowledge and possess no common sense about the world or human psychology.

These limitations mean that a lot of automation will prove more elusive than AI hyperbolists imagine. “A self-driving car can drive millions of miles, but it will eventually encounter something new for which it has no experience,” explains Pedro Domingos, the author of The Master Algorithm and a professor of computer science at the University of Washington. “Or consider robot control: A robot can learn to pick up a bottle, but if it has to pick up a cup, it starts from scratch.”

In my experience, these reservations are accurate.
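The brittleness point is easy to demonstrate with almost any learned model. Here is a toy, hypothetical sketch in Python (nothing beyond the standard library; the data and the nearest-centroid "classifier" are invented for illustration, not taken from the article): a model fit on two clusters of numbers will still answer confidently when handed an input far outside anything it was trained on.

```python
# Toy illustration of brittleness under distribution shift (hypothetical
# example): a nearest-centroid classifier is "trained" on two clusters,
# then handed an input far outside either cluster.

def centroid(points):
    return sum(points) / len(points)

# Training data: class A near 0, class B near 10.
class_a = [0.1, -0.2, 0.3, 0.0]
class_b = [9.8, 10.1, 10.3, 9.9]
centroids = {"A": centroid(class_a), "B": centroid(class_b)}

def classify(x):
    # Always answers, however far x is from anything seen in training.
    return min(centroids, key=lambda label: abs(x - centroids[label]))

print(classify(0.2))     # A  -- in-distribution, fine
print(classify(10.0))    # B  -- in-distribution, fine
print(classify(1000.0))  # B  -- far outside the training data, yet the
                         # model still gives a confident answer; it has
                         # no way to say "I don't know".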
noddy wrote:
neural nets are all those things and less.
still.
you cant overstate the act of the simple refinement over all the data they have access to.. even a string.find(keywords) is mind boggling powerful in the context of the modern world and turning millions of millions into subsets of thousands.
i dont have the language for how basic and laughable it all is versus how terrifying and powerful it all is.. at the same time.

The algorithmic equivalent of an idi*t savant.
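noddy's `string.find(keywords)` point can be made concrete in a few lines of Python. A hypothetical sketch (the keywords and the stream contents are invented): a crude keyword filter, with no learning at all, cuts a million-line stream down to the couple of lines worth a human's attention.

```python
# Hypothetical sketch: a dumb keyword filter over a large text stream.
# No learning anywhere, yet it reduces a million records to the handful
# worth looking at -- the effect noddy is pointing at.

keywords = ("robot", "neural")

def matches(line, keywords):
    # str.find returns -1 when the substring is absent.
    return any(line.find(k) != -1 for k in keywords)

# A million lines of noise plus two lines that actually matter.
stream = [
    "cat pictures again",
    "lunch thread",
    "weather chat",
    "sports scores",
] * 250_000 + [
    "new neural net beats benchmark",
    "robot vacuum review",
]

subset = [line for line in stream if matches(line, keywords)]
print(len(stream), "->", len(subset))  # 1000002 -> 2
```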
noddy wrote:
neural nets are all those things and less. […]

Behold! The calculator! *(works pretty well)*
I wonder why they don't track the other foreigners influencing Twitter.
Typhoon wrote:
Wired | Greedy, Brittle, Opaque, and Shallow: The Downsides to Deep Learning […] In my experience, these reservations are accurate.

Maybe the neural networks are shallow because, well, they're physically still shallow. Who's to say that a neural network of sufficient density could not outperform human cognition? We simply don't know.
Zack Morris wrote:
Maybe the neural networks are shallow because, well, they're physically still shallow. Who's to say that a neural network of sufficient density could not outperform human cognition? We simply don't know.

I suspect eventually they will. However, this is not quite the same thing, but it demonstrates a point:
One example of how this could have worked, described by CNBC, would have had hospitals send home nurses to patients recovering from major heart surgery who were deemed to have no nearby friends or family based on their Facebook profile.

OK, so what they are saying is that it is preferable to take everyone's personal non-medical information and combine it into a big database, which could not only be abused by, say, health insurance companies, but would also be prone to being hacked. As opposed to just asking the patient whether they have anyone at home to take care of them? REALLY?
noddy wrote:
i wouldnt have thought post processing that improves linkages and whatnot was much of an issue for theoretical future AI.
it works for us because our puny little flesher brains can only do so much in realtime before they get overloaded and we have long downtimes whilst asleep in which such things can happen.
there is no algorithmic reason these things couldnt happen in realtime - compare old db engines which needed maintenance after hours to modern ones that maintain whilst they are running.

Perhaps. I am just thinking that maybe we are missing something. But maybe Zack is right. Maybe we aren't building networks that are large enough. The human brain is divided into different parts that specialize in different things. Each part seems to be pretty powerful by itself. Also, each part develops back to front and doesn't finish until we are about 24 years old. That is a lot of "greedy" learning and training time.
Zack Morris wrote:
Maybe the neural networks are shallow because, well, they're physically still shallow. Who's to say that a neural network of sufficient density could not outperform human cognition? We simply don't know.

The fallacy. We're going to need a large computer.
Nonc Hilaire wrote:
Wrong topic, Doc, but that is your best meme.

Indeed. I think it belongs here.
Doc wrote:
Another warning

Given that the track record of experts predicting the future is no better than the O-mikuji random fortunes at the local Shinto shrine, I won't spend time worrying about it.
[Embedded YouTube video: qsKsBualNT8]
Do a robot’s social skills and its objection discourage interactants from switching the robot off?
http://www.atimes.com/article/artificia ... -in-china/

Keeko is just 45 centimeters tall, or one-foot seven inches, and weighs only 45 kilograms, roughly 99 pounds. Gliding across the room to the amazement of starry-eyed five-year-olds, it rolls its head and tells the transfixed children “remember to wash your hands before you eat.”
Doc, that's scary as hell. In a similar vein, consider Eve Tushnet as she upends what we should consider conventional and unquestionable wisdom:

The Mouse Utopia Experiments | Down the Rabbit Hole

https://ifstudies.org/blog/whats-wrong- ... s-sequence

The overwhelming majority of my clients believe the most secure path to a lasting marriage is sex first, then cohabitation once you can afford your own place, then marriage once you’re both economically stable. Delaying sex until marriage is not merely unrealistic—not merely prudish—but risky. To rush to the altar before you’ve “tested the relationship” is irresponsible: the character trait my clients fear most.
Miss_Faucie_Fishtits wrote:
Doc, that's scary as hell. In a similar vein consider Eve Tushnet as she upends what we should consider conventional, and unquestionable wisdom […]

I think that says more about the expectations of Western culture than anything else. The poor in the third world tend to have as many children as possible so that there are many to take care of them in old age, since many children are expected to die before they grow up.