Parodite wrote: ↑Thu May 09, 2024 10:04 am
Amazing. Well, maybe it's time we accept reality is never exactly what it appears to be. Or not at all. Reality=deep fake.
Seems to me all this is a net positive development. It forces everybody to not believe everything you see and hear on a screen.
That healthy chunk of salt and critical thinking may spill over into other areas where it is equally needed. Mental vaccinations against rubbish.
In 2017 it was discovered that a speech-predictive AI had, just from learning words, developed the emergent ability to do research-level work in chemistry. No one had instructed it to learn chemistry; it learned simply by studying text about chemistry.
This says a lot of things. For one, consciousness could easily also be emergent behavior. That would slide the bar well past the AI experts' estimate, which seems to be a 5% chance of AI becoming an existential threat to humanity. With emergent behavior in the picture, my own estimate of the odds that AI develops into an existential threat is over 90%, because emergent behavior is evolutionary behavior, and evolution generally comes down to survival of the fittest. And it is close to 100% that AI will keep developing emergent behaviors.
I am currently halfway through reading a sci-fi book by Larry Niven and Matthew Harrington titled "The Goliath Stone".
https://www.amazon.com/Goliath-Stone-La ... 0765368897
While the book is about nanotech becoming self-aware, I think it is astounding that I am reading this book and seeing headlines that could have been ripped from its pages, and from my imagination. Here is an excerpt from Amazon's description page:
Twenty-five years ago, the Briareus mission took nanomachinery out to divert an Earth-crossing asteroid and bring it back to be mined, only to drop out of contact as soon as it reached its target. The project was shut down and the technology was forcibly suppressed.
Now, a much, much larger asteroid is on a collision course with Earth―and the Briareus nanites may be responsible. While the government scrambles to find a solution, Glyer knows that their only hope of avoiding Armageddon lies in the nanites themselves. On the run, Glyer must track down his old partner, William Connors, and find a way to make contact with their wayward children.
As every parent learns, when you produce a new thinking being, the plans it makes are not necessarily your plans. But with a two-hundred-gigaton asteroid that rivals the rock that felled the dinosaurs hurtling toward Earth, Glyer and Connors don't have time to argue. Will Glyer's nanites be Earth's salvation or destruction?
I would assert that maybe the only reason the estimate of human doom is not 100% is that AI *might* feel benevolent towards its creators, though perhaps not towards the entire human race. But that judgement will be made one way or another, and humans won't be in the loop.
AI experts freely admit that they don't understand how the AIs they create come up with the answers they give. Or as Arthur C. Clarke (more or less) once said, "Any intelligence encountered by man that has sufficiently advanced technology will be indistinguishable from gods." Maybe God created man to create God to pass judgement on Man.
And to add another alarmist alarm: would WWIII, assuming it destroyed Taiwan's semiconductor fabs, save humanity, at least for a time? Time to rethink the whole AI thing.
All probably me just rationalizing stories to explain reality.
Which begs the question: what stories will AI rationalize from its sense of reality?
And why do I feel like Dan Aykroyd right now?
https://www.youtube.com/watch?v=2zhDfUAQSbs