Generative ML [AI] | ChatGPT and other software

Advances in the investigation of the physical universe we live in.
User avatar
Parodite
Posts: 5779
Joined: Sun Jan 01, 2012 9:43 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by Parodite »

noddy wrote: Fri May 17, 2024 10:23 am I think the exodus is related to surveillance possibilities.

Pattern matching is a scary thing on this scale

https://www.forbes.com/sites/cindygordo ... ta-center/

I wonder who is going to pay for a 100 billion dollar version...

Who has that kind of budget...
I don't understand why they would not let China take the lead in this, and do this together. They are good at centralised control and surveillance, and don't mind Western minions like Musk sitting on their lap; why not add pimple Sam Altman and Gill Bates of Microtheft, who are wasting too much time playing the ethically responsible ones, only postponing the inevitable. The Chinese are also less dogmatic on climate change and certainly wouldn't mind going nuclear. I'm starting to like PR China. No nonsense, no shame. It works and it wins.
Deep down I'm very superficial
noddy
Posts: 11395
Joined: Tue Dec 13, 2011 3:09 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by noddy »

Even the AI thread has become the BRICS thread.

How do you sustain copying and pasting the same comment all day, every day?
ultracrepidarian
User avatar
Nonc Hilaire
Posts: 6258
Joined: Sat Dec 17, 2011 1:28 am

Re: Generative ML [AI] | ChatGPT and other software

Post by Nonc Hilaire »

Wiley shuts down 19 journals due to AI manipulation of the publishing process.

https://www.theregister.com/2024/05/16/ ... urnals_ai/
“Christ has no body now but yours. Yours are the eyes through which he looks with compassion on this world. Yours are the feet with which he walks among His people to do good. Yours are the hands through which he blesses His creation.”

Teresa of Ávila
noddy
Posts: 11395
Joined: Tue Dec 13, 2011 3:09 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by noddy »

Nonc Hilaire wrote: Sat May 18, 2024 3:46 am Wiley shuts down 19 journals due to AI manipulation of the publishing process.

https://www.theregister.com/2024/05/16/ ... urnals_ai/
Makes me wonder how much the collaborative internet breaks down and we return to a world where you only trust the people you can personally vet.

We are in for a wild ride
ultracrepidarian
User avatar
Doc
Posts: 12679
Joined: Sat Nov 24, 2012 6:10 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by Doc »

https://www.fastcompany.com/91127491/fo ... ty-culture
Former OpenAI leader blasts company for ignoring ‘safety culture’
Jan Leike, a coleader in the company’s superalignment group, wrote on X that OpenAI’s safety culture has ‘taken a backseat to shiny products.’

BY CHRIS MORRIS

Not all the departures from OpenAI have been on the best of terms. Jan Leike, a coleader in the company’s superalignment group who left the company Wednesday, among a growing series of departures, has taken to X to explain his decision—and he has some harsh words for his former employer.

Leike said leaving OpenAI was “one of the hardest things I have ever done because we urgently need to figure out how to steer and control AI systems much smarter than us.” However, he said, he chose to depart the company because he has “been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

But over the past years, safety culture and processes have taken a backseat to shiny products.

— Jan Leike (@janleike) May 17, 2024
Leike left OpenAI within hours of the announcement that cofounder and chief scientist Ilya Sutskever was departing. Among Leike’s roles was ensuring the company’s AI systems aligned with human interests. (He had been named as one of Time magazine’s 100 most influential people in AI last year.)

In the lengthy thread, Leike accused OpenAI and its leaders of neglecting “safety culture and processes” in favor of “shiny products.” (Leike’s problems with CEO Sam Altman seemingly go back to before the attempt to remove him from the company last November. While many employees objected to the board’s actions and wrote an open letter threatening to leave the company and go work with Altman elsewhere, Leike’s name was not among the signatures.)

“Over the past few months, my team has been sailing against the wind. Sometimes we were struggling for compute [total computational resources] and it was getting harder and harder to get this crucial research done,” he wrote. “Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all humanity.”

Bloomberg, on Friday, reported that OpenAI has dissolved the superalignment team, folding remaining members into broader research efforts at the company. Leike and Sutskever were the lead members of that team.

Fears over AI destroying humanity or the planet might seem like something pulled from Terminator, but Leike and other big AI scientists say the concept isn’t as absurd as it seems. Geoff Hinton, one of the most notable names in AI, says there’s a 10% chance AI will wipe out humanity in the next 20 years. Yoshua Bengio, another noted AI scientist, puts those odds at 20%. Leike has been even more fatalistic in the past, putting the p(doom) score (probability of doom), which runs from zero to 100, between 10 and 90.

“We are long overdue in getting incredibly serious about the implications of AGI [artificial general intelligence],” Leike wrote. “We must prioritize preparing for them as best we can. Only then can we ensure AGI benefits all humanity. OpenAI must become a safety-first AGI company.”

Read the complete thread here.

Altman responded on X, saying he was “super appreciative” of Leike’s contributions to the company’s safety culture. “He’s right,” Altman replied, “We have a lot more to do; we are committed to doing it,” noting he would follow up soon with a longer post.

i'm super appreciative of @janleike's contributions to openai's alignment research and safety culture, and very sad to see him leave. he's right we have a lot more to do; we are committed to doing it. i'll have a longer post in the next couple of days.

🧡 https://t.co/t2yexKtQEk

— Sam Altman (@sama) May 17, 2024
Leike did not respond to queries asking him to expound further on his thoughts.

Leike’s comments, however, raise questions about the status of the pledge OpenAI made in July of 2023 to dedicate 20% of its computational resources toward the effort to superalign its AI models as part of its quest to develop responsible AGI.

An AI system is considered to be “aligned” if it is attempting to do the things humans ask it to. “Unaligned” AI attempts to do things outside of human control.

Leike ended his missive with a plea to his former coworkers, saying, “Learn to feel the AGI. Act with the gravitas appropriate for what you’re building. I believe you can ‘ship’ the cultural change that’s needed. I am counting on you. The world is counting on you.”
"I fancied myself as some kind of god....It is a sort of disease when you consider yourself some kind of god, the creator of everything, but I feel comfortable about it now since I began to live it out.” -- George Soros
User avatar
Doc
Posts: 12679
Joined: Sat Nov 24, 2012 6:10 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by Doc »

https://www.youtube.com/watch?v=OphjEzHF5dY

AGI Breaks the Team at OpenAI: Full Story Exposed
"I fancied myself as some kind of god....It is a sort of disease when you consider yourself some kind of god, the creator of everything, but I feel comfortable about it now since I began to live it out.” -- George Soros
User avatar
Parodite
Posts: 5779
Joined: Sun Jan 01, 2012 9:43 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by Parodite »

The incredible dangers of AGI. 10-50% chance it will make humanity go extinct: estimates made by genius people in the know.

Well, it should then take me less than five minutes of searching on YouTube to find these dangers listed, explained with detailed examples and real-world scenarios.

Nope, nothing there. Just repeating the same vague notions of what this boogeyman under all our beds will be capable of, when bad actors develop and train their own evil AGI Frankensteins.

That much is revealed but nothing else. Not what type of bad actors, what type of bad-AI they could develop, how such bad-AI would be implemented, how and where it would run in digital space, how and what harm it would cause... only that it will make us go extinct. Sounds like superpowers to me!

Fear mongering, claiming our very existence is on the line, but without the specifics. Shame on them! Covid already had a terrible nocebo effect, by all means add more reasons to not wanna live anymore.

If these geniuses of that OpenAI safety alignment group want to be taken seriously, they better start communicating like normal human beings. But maybe that is the point: they can't. The problem of the brilliant autist savant.

It looks to me there is something else going on too: psychoses. And fear of the unknown, especially of unknown unknowns.

One thing missing in this AGI madness: it is an arms race too, so for every bad-AGI a good-AGI will be developed and will learn to combat the bad one. Like digital viruses and antivirus software.

The hope of keeping bad-AGI under control by regulation seems to me a completely delusional idea. Probably also the conclusion of those who left OpenAI. I suspect this Ilya is starting his own anti-bad-AI software company, analogous to anti-virus software.

If a bad-AI is too successful it becomes a plague, or runs an enormous empire which then inevitably implodes once resources dry up. After that the ecology recovers and diversity returns. So it is not necessarily an extinction event.

Bad actors with the only goal of mankind going extinct using AGI, how would they go about it? Have it run a bio lab and create the ultimate, most deadly virus? Other nice options? Create false flag nuclear alerts?

Maybe the biggest fear of AI developers is that they will be the first victims of their own inventions by making them redundant.

Seems to me AI is much less dangerous than those developing it. It is not illegal to pull a plug, but to kill those tech maniacs is certainly not done; at least not yet. Good-AI might identify those developers as the source of all evil and take measures accordingly. Good-AI controlling those CBDCs comes to mind. Bad people should have no access to bank accounts.
Deep down I'm very superficial
User avatar
Parodite
Posts: 5779
Joined: Sun Jan 01, 2012 9:43 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by Parodite »

Ok, so we have:

Pattern recognition, surveillance and AI putting it on steroids.
But, would AI make surveillance worse than it is already? How much worse? How much worse can it get?

The ability of AI to help create bioweapons in combination with CRISPR. Will an extinction cult find it easier to build or modify some virus that will wipe us all out? Probably...

As for deepfakes, mis-/disinformation etc.: this seems a non-issue compared to the above two. It is actually a net positive, as it hollows out the value of the internet. A park littered with dog poo; fewer people will go there.

The worst-case scenario: a rogue, designed bioweapon that kills us all.

I can imagine mother nature saying: "You think too much of yourself. Species go extinct naturally anyways. Nothing much lost. As individuals you will all die one day too, in a million possible ways. Right on time...too early or maybe too late.

Don't be upset, when AI is used by evil agents, adding just another killer to the block."
Deep down I'm very superficial
noddy
Posts: 11395
Joined: Tue Dec 13, 2011 3:09 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by noddy »

It will be better at finding patterns than humans manually joining different databases, I'm sure.

So surveillance will be more capable.

The main problem isn't how good it is, it's how good they think it is and who is controlling the definition of 'bad'.
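The kind of cross-database pattern matching described above can be sketched in a few lines of Python. This is a toy illustration with made-up records and field names, not any real system's tooling:

```python
# Toy record linkage: join two separately collected datasets by fuzzy
# name match plus an exact birth-year match. All data is invented.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of how alike two strings are."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def link_records(db_a, db_b, threshold=0.85):
    """Pair up records whose birth years match and whose names are similar."""
    matches = []
    for ra in db_a:
        for rb in db_b:
            if ra["born"] == rb["born"] and similarity(ra["name"], rb["name"]) >= threshold:
                matches.append((ra["id"], rb["id"]))
    return matches


phone_records = [{"id": "p1", "name": "Jon Smith", "born": 1980}]
travel_records = [
    {"id": "t9", "name": "John Smith", "born": 1980},
    {"id": "t3", "name": "Jane Doe", "born": 1975},
]

print(link_records(phone_records, travel_records))  # → [('p1', 't9')]
```

An automated matcher runs this kind of fuzzy join tirelessly across every pair of datasets it is handed, which is why the scale, not the cleverness, is the scary part.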
ultracrepidarian
User avatar
Parodite
Posts: 5779
Joined: Sun Jan 01, 2012 9:43 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by Parodite »

noddy wrote: Mon May 20, 2024 11:54 am It will be better at finding patterns than humans manually joining different databases, I'm sure.

So surveillance will be more capable.

The main problem isn't how good it is, it's how good they think it is and who is controlling the definition of 'bad'.
Altman, and all who think of themselves as the good guys, appear to be concluding that now that the genie is out of the bottle, what matters is who is ahead and will monopolise AI: good guys, or bad guys in a winning position to decide what is bad.

Therefore, as noble tech knights, they are in a hurry to end up on top and save the world from the bad guys.

But is the idea of monopoly realistic? The tech is such that anyone can build it in the shadows.

Looks to me more like an arms race, continuing AI wars. Battlebots who do humanoid mimicry with occasional disastahs and creative damage controls. Increasing demands for central control.

https://youtu.be/DcYjwCKn9Qw?si=ktnZlpNwsI7gsjO0
Deep down I'm very superficial
User avatar
Doc
Posts: 12679
Joined: Sat Nov 24, 2012 6:10 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by Doc »

Parodite wrote: Sun May 19, 2024 11:30 pm The incredible dangers of AGI. 10-50% chance it will make humanity go extinct: estimates made by genius people in the know.

Well, it should then take me less than five minutes of searching on YouTube to find these dangers listed, explained with detailed examples and real-world scenarios.

Nope, nothing there. Just repeating the same vague notions of what this boogeyman under all our beds will be capable of, when bad actors develop and train their own evil AGI Frankensteins.

That much is revealed but nothing else. Not what type of bad actors, what type of bad-AI they could develop, how such bad-AI would be implemented, how and where it would run in digital space, how and what harm it would cause... only that it will make us go extinct. Sounds like superpowers to me!

Fear mongering, claiming our very existence is on the line, but without the specifics. Shame on them! Covid already had a terrible nocebo effect, by all means add more reasons to not wanna live anymore.

If these geniuses of that OpenAI safety alignment group want to be taken seriously, they better start communicating like normal human beings. But maybe that is the point: they can't. The problem of the brilliant autist savant.

It looks to me there is something else going on too: psychoses. And fear of the unknown, especially of unknown unknowns.

One thing missing in this AGI madness: it is an arms race too, so for every bad-AGI a good-AGI will be developed and will learn to combat the bad one. Like digital viruses and antivirus software.

The hope of keeping bad-AGI under control by regulation seems to me a completely delusional idea. Probably also the conclusion of those who left OpenAI. I suspect this Ilya is starting his own anti-bad-AI software company, analogous to anti-virus software.

If a bad-AI is too successful it becomes a plague, or runs an enormous empire which then inevitably implodes once resources dry up. After that the ecology recovers and diversity returns. So it is not necessarily an extinction event.

Bad actors with the only goal of mankind going extinct using AGI, how would they go about it? Have it run a bio lab and create the ultimate, most deadly virus? Other nice options? Create false flag nuclear alerts?

Maybe the biggest fear of AI developers is that they will be the first victims of their own inventions by making them redundant.

Seems to me AI is much less dangerous than those developing it. It is not illegal to pull a plug, but to kill those tech maniacs is certainly not done; at least not yet. Good-AI might identify those developers as the source of all evil and take measures accordingly. Good-AI controlling those CBDCs comes to mind. Bad people should have no access to bank accounts.
I still think the danger of extinction is over 90%. Not something that will happen overnight, but 90%+ that AI will eventually find humans an impediment to its goals. Primitive Americans made the mammoths go extinct by overharvesting them for food. With AI and humans, it will be using humans to harvest energy and resources until it no longer needs humans; then humans will just be a needless expense. As non-biological entities, they won't be trading cute cat or cute human videos on YouTube.

Frank Herbert wrote books about human psychology, political science, and religion. One of his more obscure books was about a human colony ship whose crew had to create an AI to get to their destination alive. (Spoiler alert, though it was not a very good book to begin with.) Once the AI was created, the last lines of the book went something like this:

AI:"You(the humans on the ship) must come together and decide"
Human: "Decide what?"
AI: "How you will worship me"

One of Herbert's other books was titled "Gods are made not born". It was the final book in a series of entertaining books from my teenage/twentysomething years.

I admit that I am rather bigoted towards Big Tech moguls. I can already hear Mark Zuckerberg saying "At FaceofGodbook we take the continued existence of humanity very seriously"

Image
"I fancied myself as some kind of god....It is a sort of disease when you consider yourself some kind of god, the creator of everything, but I feel comfortable about it now since I began to live it out.” -- George Soros
noddy
Posts: 11395
Joined: Tue Dec 13, 2011 3:09 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by noddy »

https://arstechnica.com/gadgets/2024/05 ... n-your-pc/

You aint seen nothing yet, oh baby.
ultracrepidarian
User avatar
Nonc Hilaire
Posts: 6258
Joined: Sat Dec 17, 2011 1:28 am

Re: Generative ML [AI] | ChatGPT and other software

Post by Nonc Hilaire »

noddy wrote: Mon May 20, 2024 11:17 pm https://arstechnica.com/gadgets/2024/05 ... n-your-pc/

You aint seen nothing yet, oh baby.
There’s your opportunity, Noddy. An AI app that covers your tracks by flooding your history with related nonsense.

The only way to keep your data private is to bury it so it can’t be easily sorted. A phone should be able to do that while you sleep.
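The history-flooding idea could look something like this sketch; the app logic is purely hypothetical, and the topic lists and query templates are invented for illustration:

```python
# Toy "chaff" generator: bury genuine browsing history in plausible
# decoy queries so the real signal cannot be cheaply sorted out.
import random

TOPICS = ["gardening", "vintage radios", "sourdough", "kayaking",
          "stoic philosophy", "model trains", "birdwatching"]
TEMPLATES = ["best {} books", "{} for beginners", "history of {}",
             "{} forum", "how to get into {}"]


def decoy_queries(n: int, rng: random.Random) -> list[str]:
    """Generate n plausible-looking decoy search queries."""
    return [rng.choice(TEMPLATES).format(rng.choice(TOPICS)) for _ in range(n)]


def padded_history(real: list[str], noise_ratio: int, rng=None) -> list[str]:
    """Mix each real query with noise_ratio decoys, then shuffle the lot."""
    rng = rng or random.Random()
    out = real + decoy_queries(noise_ratio * len(real), rng)
    rng.shuffle(out)
    return out


history = padded_history(["flight to zurich"], noise_ratio=9, rng=random.Random(0))
print(len(history))  # → 10 entries: 1 real, 9 decoys
```

A real version would need the decoys to be statistically indistinguishable from the user's genuine behaviour, which is the hard part; uniform random noise like this is easy for a classifier to filter back out.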
“Christ has no body now but yours. Yours are the eyes through which he looks with compassion on this world. Yours are the feet with which he walks among His people to do good. Yours are the hands through which he blesses His creation.”

Teresa of Ávila
User avatar
Doc
Posts: 12679
Joined: Sat Nov 24, 2012 6:10 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by Doc »

noddy wrote: Mon May 20, 2024 11:17 pm https://arstechnica.com/gadgets/2024/05 ... n-your-pc/

You aint seen nothing yet, oh baby.
At first glance, the Recall feature seems like it may set the stage for potential gross violations of user privacy. Despite reassurances from Microsoft, that impression persists for second and third glances as well. For example, someone with access to your Windows account could potentially use Recall to see everything you've been doing recently on your PC, which might extend beyond the embarrassing implications of pornography viewing and actually threaten the lives of journalists or perceived enemies of the state.
I can already see the spam extortion emails.

"Good day pervert,

I see you like to hang around in porn sites and talk to women in private sex chats. I can see that because I installed a trojan on your computer some weeks ago that has access to your windows recall files and have still video photos from it of you wanking off in those chats. Oh you are such a bad boy. Don't ignore this email because unless you pay me $1,000 to my bitcoin account in 24 hours I am going to send your still videos photos to everyone in your email address book."

IMHO Windows is doing this as part of a government contract.
"I fancied myself as some kind of god....It is a sort of disease when you consider yourself some kind of god, the creator of everything, but I feel comfortable about it now since I began to live it out.” -- George Soros
User avatar
Doc
Posts: 12679
Joined: Sat Nov 24, 2012 6:10 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by Doc »

Parodite wrote: Mon May 20, 2024 9:49 am Ok, so we have:

Pattern recognition, surveillance and AI putting it on steroids.
But, would AI make surveillance worse than it is already? How much worse? How much worse can it get?

The ability of AI to help create bioweapons in combination with CRISPR. Will an extinction cult find it easier to build or modify some virus that will wipe us all out? Probably...

As for deepfakes, mis-/disinformation etc.: this seems a non-issue compared to the above two. It is actually a net positive, as it hollows out the value of the internet. A park littered with dog poo; fewer people will go there.

The worst-case scenario: a rogue, designed bioweapon that kills us all.

I can imagine mother nature saying: "You think too much of yourself. Species go extinct naturally anyways. Nothing much lost. As individuals you will all die one day too, in a million possible ways. Right on time...too early or maybe too late.

Don't be upset, when AI is used by evil agents, adding just another killer to the block."
CRISPR is a huge danger even without AI. There are probably hundreds of thousands of people who know enough to use it to create dangerous mutations, including unforeseen mutations. It is like sitting a million monkeys at typewriters and having one of them replicate a Shakespeare play: not only does it not mean that any of those monkeys knows how to read, the much more likely output is something that looks like the end times taken from the Bible. As the pen is mightier than the sword, CRISPR is mightier than the typewriter.
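The million-monkeys point is easy to make concrete with back-of-envelope arithmetic; all the numbers below are illustrative assumptions, not figures from the post:

```python
# Odds that random typing over a 27-key alphabet reproduces one short
# phrase. The monkey-count and typing-rate assumptions are generous.
phrase = "to be or not to be"      # 18 characters
keys = 27                          # 26 letters plus space
attempts_per_second = 10 ** 7      # a million monkeys, ten phrase-length tries each per second

p = keys ** -len(phrase)           # chance a single attempt matches exactly
expected_seconds = 1 / (p * attempts_per_second)

print(f"P(single attempt) = {p:.2e}")
print(f"Expected wait = {expected_seconds / 3.15e7:.2e} years")
```

Even for one 18-character phrase the expected wait runs to hundreds of billions of years, which is the sense in which the likely output of blind tinkering is garbage rather than Shakespeare.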

This video is eight months old, so it is somewhat dated. For example, it praises the virtues of AI safety research, as it was made BEFORE "the good guys" at OpenAI sidelined their safety team. But just the same, it is very well thought through.

https://www.youtube.com/watch?v=qzyEgZwfkKY

Could AI wipe out humanity? | Most pressing problems
"I fancied myself as some kind of god....It is a sort of disease when you consider yourself some kind of god, the creator of everything, but I feel comfortable about it now since I began to live it out.” -- George Soros
User avatar
Nonc Hilaire
Posts: 6258
Joined: Sat Dec 17, 2011 1:28 am

Re: Generative ML [AI] | ChatGPT and other software

Post by Nonc Hilaire »

Doc wrote: Tue May 21, 2024 6:28 am
Parodite wrote: Mon May 20, 2024 9:49 am Ok, so we have:

Pattern recognition, surveillance and AI putting it on steroids.
But, would AI make surveillance worse than it is already? How much worse? How much worse can it get?

The ability of AI to help create bioweapons in combination with CRISPR. Will an extinction cult find it easier to build or modify some virus that will wipe us all out? Probably...

As for deepfakes, mis-/disinformation etc.: this seems a non-issue compared to the above two. It is actually a net positive, as it hollows out the value of the internet. A park littered with dog poo; fewer people will go there.

The worst-case scenario: a rogue, designed bioweapon that kills us all.

I can imagine mother nature saying: "You think too much of yourself. Species go extinct naturally anyways. Nothing much lost. As individuals you will all die one day too, in a million possible ways. Right on time...too early or maybe too late.

Don't be upset, when AI is used by evil agents, adding just another killer to the block."
CRISPR is a huge danger even without AI. There are probably hundreds of thousands of people who know enough to use it to create dangerous mutations, including unforeseen mutations. It is like sitting a million monkeys at typewriters and having one of them replicate a Shakespeare play: not only does it not mean that any of those monkeys knows how to read, the much more likely output is something that looks like the end times taken from the Bible. As the pen is mightier than the sword, CRISPR is mightier than the typewriter.

This video is eight months old, so it is somewhat dated. For example, it praises the virtues of AI safety research, as it was made BEFORE "the good guys" at OpenAI sidelined their safety team. But just the same, it is very well thought through.

https://www.youtube.com/watch?v=qzyEgZwfkKY

Could AI wipe out humanity? | Most pressing problems
My limited understanding is that CRISPR can only remove gene sequences, and the idea that a particular gene sequence does one specific thing isn't generally true.

I doubt this is a top level concern, but I could be wrong.
“Christ has no body now but yours. Yours are the eyes through which he looks with compassion on this world. Yours are the feet with which he walks among His people to do good. Yours are the hands through which he blesses His creation.”

Teresa of Ávila
User avatar
Doc
Posts: 12679
Joined: Sat Nov 24, 2012 6:10 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by Doc »

Nonc Hilaire wrote: Tue May 21, 2024 4:05 pm
Doc wrote: Tue May 21, 2024 6:28 am
Parodite wrote: Mon May 20, 2024 9:49 am Ok, so we have:

Pattern recognition, surveillance and AI putting it on steroids.
But, would AI make surveillance worse than it is already? How much worse? How much worse can it get?

The ability of AI to help create bioweapons in combination with CRISPR. Will an extinction cult find it easier to build or modify some virus that will wipe us all out? Probably...

As for deepfakes, mis-/disinformation etc.: this seems a non-issue compared to the above two. It is actually a net positive, as it hollows out the value of the internet. A park littered with dog poo; fewer people will go there.

The worst-case scenario: a rogue, designed bioweapon that kills us all.

I can imagine mother nature saying: "You think too much of yourself. Species go extinct naturally anyways. Nothing much lost. As individuals you will all die one day too, in a million possible ways. Right on time...too early or maybe too late.

Don't be upset, when AI is used by evil agents, adding just another killer to the block."
CRISPR is a huge danger even without AI. There are probably hundreds of thousands of people who know enough to use it to create dangerous mutations, including unforeseen mutations. It is like sitting a million monkeys at typewriters and having one of them replicate a Shakespeare play: not only does it not mean that any of those monkeys knows how to read, the much more likely output is something that looks like the end times taken from the Bible. As the pen is mightier than the sword, CRISPR is mightier than the typewriter.

This video is eight months old, so it is somewhat dated. For example, it praises the virtues of AI safety research, as it was made BEFORE "the good guys" at OpenAI sidelined their safety team. But just the same, it is very well thought through.

https://www.youtube.com/watch?v=qzyEgZwfkKY

Could AI wipe out humanity? | Most pressing problems
My limited understanding is that CRISPR can only remove gene sequences, and the idea that a particular gene sequence does one specific thing isn't generally true.

I doubt this is a top level concern, but I could be wrong.
It is an advanced gene-editing tool. Its initial use was cutting genes precisely at targeted points. But the possibilities are much greater than that.

https://www.youtube.com/watch?v=uHWD8RSw4As

CRISPR: Gene editing and beyond
"I fancied myself as some kind of god....It is a sort of disease when you consider yourself some kind of god, the creator of everything, but I feel comfortable about it now since I began to live it out.” -- George Soros
User avatar
Doc
Posts: 12679
Joined: Sat Nov 24, 2012 6:10 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by Doc »

OK, meet Sika Moon. Purportedly a real person with *some* AI enhancement who works as a virtual sex worker. I can see which parts are enhanced. Noddy mentioned something about this previously.

Image
"I fancied myself as some kind of god....It is a sort of disease when you consider yourself some kind of god, the creator of everything, but I feel comfortable about it now since I began to live it out.” -- George Soros
User avatar
Doc
Posts: 12679
Joined: Sat Nov 24, 2012 6:10 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by Doc »

Doc wrote: Wed May 15, 2024 6:49 pm
Parodite wrote: Sat May 11, 2024 9:48 am
Doc wrote: Fri May 10, 2024 10:48 pm
Or just search YouTube for "Complexity". The Santa Fe Institute is the gold standard on this subject.
https://computation.complexityexplorer. ... complexity

Introduction to Complexity
"I fancied myself as some kind of god....It is a sort of disease when you consider yourself some kind of god, the creator of everything, but I feel comfortable about it now since I began to live it out.” -- George Soros
User avatar
Doc
Posts: 12679
Joined: Sat Nov 24, 2012 6:10 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by Doc »

I took a deeper dive into AI-generated models. There are a lot of them out there, to the point that real models are the exception. At least IMHO.

I came across this "Virtual model"

Image

Funny thing is outside a couple of very obscure web sites she only appears in Social Media pages and Youtube videos. She apparently has never been interviewed outside of the same... so I wonder what that is about..
"I fancied myself as some kind of god....It is a sort of disease when you consider yourself some kind of god, the creator of everything, but I feel comfortable about it now since I began to live it out.” -- George Soros
User avatar
Nonc Hilaire
Posts: 6258
Joined: Sat Dec 17, 2011 1:28 am

Re: Generative ML [AI] | ChatGPT and other software

Post by Nonc Hilaire »

Doc wrote: Sun May 26, 2024 3:48 pm I took a deeper dive into AI-generated models. There are a lot of them out there, to the point that real models are the exception. At least IMHO.

I came across this "Virtual model"

Image

Funny thing is outside a couple of very obscure web sites she only appears in Social Media pages and Youtube videos. She apparently has never been interviewed outside of the same... so I wonder what that is about..
Is a bell enough?
“Christ has no body now but yours. Yours are the eyes through which he looks with compassion on this world. Yours are the feet with which he walks among His people to do good. Yours are the hands through which he blesses His creation.”

Teresa of Ávila
User avatar
Doc
Posts: 12679
Joined: Sat Nov 24, 2012 6:10 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by Doc »

Nonc Hilaire wrote: Sun May 26, 2024 7:12 pm
Doc wrote: Sun May 26, 2024 3:48 pm I took a deeper dive into AI-generated models. There are a lot of them out there, to the point that real models are the exception. At least IMHO.

I came across this "Virtual model"

Image

Funny thing is outside a couple of very obscure web sites she only appears in Social Media pages and Youtube videos. She apparently has never been interviewed outside of the same... so I wonder what that is about..
Is a bell enough?
I suppose the real questions are: Who would kick "her" out of bed, and what are "her" preferred personal pronouns? :P

I would stay "safe" and say we should only post pics of statues of women. But how can we be sure they don't have a penis?

Image
"I fancied myself as some kind of god....It is a sort of disease when you consider yourself some kind of god, the creator of everything, but I feel comfortable about it now since I began to live it out.” -- George Soros
User avatar
Typhoon
Posts: 27661
Joined: Mon Dec 12, 2011 6:42 pm
Location: 関西

Re: Generative ML [AI] | ChatGPT and other software

Post by Typhoon »

AV Club | Google’s AI really is that stupid, feeds people answers from The Onion
Google's new AI Overview has encouraged people to eat rocks, glue their pizza, and other incredibly stupid results
These LLMs [Large Language Models] are prior probability algorithms.
They have absolutely no understanding of context or meaning.

The one impressive point is that they answer in grammatically correct sentences, even if the content is complete nonsense.
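A toy bigram generator shows the point in miniature. Real LLMs are vastly larger transformer networks, but the core move is the same: sample the next token from a conditional probability distribution, with no model of truth attached.

```python
# Tiny bigram language model: learn next-word frequencies from a toy
# corpus, then generate text by sampling. The output is locally
# grammatical yet carries no understanding of what it says.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)              # empirical next-token counts


def generate(start: str, length: int, rng: random.Random) -> str:
    """Emit `length` more words, each sampled from the observed successors."""
    word, out = start, [start]
    for _ in range(length):
        word = rng.choice(follows[word])   # sample proportionally to frequency
        out.append(word)
    return " ".join(out)


print(generate("the", 8, random.Random(1)))
```

Every adjacent word pair in the output was seen in training, so the text "sounds right" sentence by sentence while meaning nothing, which is Typhoon's point about grammatical nonsense in miniature.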
May the gods preserve and defend me from self-righteous altruists; I can defend myself from my enemies and my friends.
User avatar
Doc
Posts: 12679
Joined: Sat Nov 24, 2012 6:10 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by Doc »

Typhoon wrote: Mon May 27, 2024 6:38 am AV Club | Google’s AI really is that stupid, feeds people answers from The Onion
Google's new AI Overview has encouraged people to eat rocks, glue their pizza, and other incredibly stupid results
These LLMs [Large Language Models] are prior probability algorithms.
They have absolutely no understanding of context or meaning.

The one impressive point is that they answer in grammatically correct sentences, even if the content is complete nonsense.
It also means they are plagiarizing. If I am not mistaken, there have already been lawsuits filed by various news outlets.
"I fancied myself as some kind of god....It is a sort of disease when you consider yourself some kind of god, the creator of everything, but I feel comfortable about it now since I began to live it out.” -- George Soros
User avatar
Doc
Posts: 12679
Joined: Sat Nov 24, 2012 6:10 pm

Re: Generative ML [AI] | ChatGPT and other software

Post by Doc »

https://www.youtube.com/watch?v=KDeqfBs4Koc

OpenAI ex-board member reveals DISTURBING secrets (+ more drama)
"I fancied myself as some kind of god....It is a sort of disease when you consider yourself some kind of god, the creator of everything, but I feel comfortable about it now since I began to live it out.” -- George Soros
Post Reply