(05-17-2024, 07:14 PM)Nintex wrote: Anyway I think these 'prediction models' will only get us so far.
They'll be a big help for making our work easier and automating simple things but I don't think this technology will ever pass as AGI.
Uh, yeah, that's the inherent limitation of any prediction model.
Hey AI, explain to me the story of Kingdom Hearts
1 user liked this post: Potato
I'm with Scarlett on this one.
(05-21-2024, 12:06 AM)Potato wrote: I'm with Scarlett on this one.
I'm not
personally I didn't even think it sounded like her
in the future when we want to use AI voices for things, we can't have this chilling effect hanging over us at all times, where if something sounds slightly too close to an existing person, you'd better watch out or you'll get sued just for the mere casual appearance of sounding like them
it would be so easy to find a scarjo impersonator to train from, and that should be perfectly legal. I don't know if openai trained on her without consent or just managed to find a similar-sounding voice, but I could see them removing it not as an admission of guilt but just to avoid yet another legal battle
there isn't a single voice you can make that someone on earth wouldn't be able to say "hey, that sounds kind of like me, I should sue and get rich"
Altman posted "her" on X prior to the GPT-4o presentation so it was obvious he was going to reference Scarjo in some way.
05-21-2024, 05:54 PM
(This post was last modified: 05-21-2024, 05:55 PM by Uncle.)
as long as he's not lying, it's what I said
(05-21-2024, 02:44 PM)Uncle wrote: (05-21-2024, 12:06 AM)Potato wrote: I'm with Scarlett on this one.
I'm not
personally I didn't even think it sounded like her
in the future when we want to use AI voices for things, we can't have this chilling effect hanging over us at all times, where if something sounds slightly too close to an existing person, you'd better watch out or you'll get sued just for the mere casual appearance of sounding like them
it would be so easy to find a scarjo impersonator to train from, and that should be perfectly legal. I don't know if openai trained on her without consent or just managed to find a similar-sounding voice, but I could see them removing it not as an admission of guilt but just to avoid yet another legal battle
there isn't a single voice you can make that someone on earth wouldn't be able to say "hey, that sounds kind of like me, I should sue and get rich"
The difference is intention, I guess. The fact they approached her more than once and basically gave her an ultimatum kind of suggests they did it on purpose... that's assuming it went down like Johansson said it did, though.
I don't know, maybe she's full of it, but I generally don't care for tech bro cocks and their "move fast and break things" philosophy. It's led to too many poor outcomes that were promised to be improvements.
(05-21-2024, 05:54 PM)Uncle wrote: as long as he's not lying, it's what I said
Yeah, it’s not like a young tech CEO would lie to the public to attempt to cover up a major fuck up.
05-21-2024, 08:16 PM
(This post was last modified: 05-21-2024, 08:19 PM by Potato.)
https://theconversation.com/ai-chatbots-are-intruding-into-online-communities-where-people-are-trying-to-connect-with-other-humans-229473
Fuck these AI evangelists.
Quote:A parent asked a question in a private Facebook group in April 2024: Does anyone with a child who is both gifted and disabled have any experience with New York City public schools? The parent received a seemingly helpful answer that laid out some characteristics of a specific school, beginning with the context that “I have a child who is also 2e,” meaning twice exceptional.
On a Facebook group for swapping unwanted items near Boston, a user looking for specific items received an offer of a “gently used” Canon camera and an “almost-new portable air conditioning unit that I never ended up using.”
Both of these responses were lies. That child does not exist and neither do the camera or air conditioner. The answers came from an artificial intelligence chatbot.
According to a Meta help page, Meta AI will respond to a post in a group if someone explicitly tags it or if someone “asks a question in a post and no one responds within an hour.” The feature is not yet available in all regions or for all groups, according to the page. For groups where it is available, “admins can turn it off and back on at any time.”
Just generating fake engagement now. Gotta keep those dopamine hits coming.
Add toxic glue to your sauce
(05-21-2024, 07:51 PM)TylenolJones wrote: (05-21-2024, 05:54 PM)Uncle wrote: as long as he's not lying, it's what I said
Yeah, it’s not like a young tech CEO would lie to the public to attempt to cover up a major fuck up.
https://www.washingtonpost.com/technology/2024/05/22/openai-scarlett-johansson-chatgpt-ai-voice/
Quote:OpenAI didn’t copy Scarlett Johansson’s voice for ChatGPT, records show
A different actress was hired to provide the voice for ChatGPT’s “Sky,” according to documents and recordings shared with the Washington Post.
Quote:The agent, who spoke on the condition of anonymity to assure the safety of her client, said the actress confirmed that neither Johansson nor the movie “Her” were ever mentioned by OpenAI. The actress’s natural voice sounds identical to the AI-generated Sky voice, based on brief recordings of her initial voice test reviewed by The Post. The agent said the name Sky was chosen to signal a cool, airy and pleasant sound.
I think Genesis is a psyop by Google to discredit AI.
05-23-2024, 10:57 PM
(This post was last modified: 05-23-2024, 10:57 PM by Nintex.)
This is going well
Trump was right
I agree with Potato, they are deliberately fucking this up.
05-24-2024, 07:26 AM
(This post was last modified: 05-24-2024, 07:26 AM by Nintex.)
Lmao
Who in their right mind would train an AI on Reddit comments?
The biggest collection of low intelligence morons on the internet combined with millions of trolls...
1 user liked this post: benji
05-24-2024, 11:38 AM
(This post was last modified: 05-24-2024, 11:38 AM by Nintex.)
Many people are saying
My client, Mister Sundar, did absolutely nothing wrong, Your Honor.
He simply collected 304 exabytes of CSAM, categorized by age, gender, race and hair style in a global network of data centers to train his new AI.
Things are moving incredibly fast. We need to be very adaptable because technology is evolving rapidly. You could even lose your career if you're not careful. Like me, I started learning programming, but then AI came along and threw a wrench in my plans.
Look, 5 billion is a lot, just 1008 instances of child porn? Everyone knows you can get that much just trying to get Metallica MP3s.
(05-24-2024, 12:03 PM)Uncle wrote:
1 user liked this post: benji
"Awesome, that will go great in my digital poster wallet."
(05-24-2024, 08:35 PM)doby8 wrote: Things are moving incredibly fast. We need to be very adaptable because technology is evolving rapidly. You could even lose your career if you're not careful. Like me, I started learning programming, but then AI came along and threw a wrench in my plans.
Just learn how to program with AI and prompting.
AI is like a superpower for developers; one programmer can simply have an army of AI agents working for him.
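For the curious, here's a minimal sketch of that "army of agents" idea: a few coding tasks fanned out in parallel to an LLM. It assumes the OpenAI Python SDK with an OPENAI_API_KEY in the environment; the model name, system prompt and task list are just placeholders, not anyone's actual setup.

```python
# minimal sketch: fan a few coding tasks out to an LLM in parallel
# assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY in the environment;
# model name and tasks are placeholders for illustration only
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI()

TASKS = [
    "Write a Python function that validates an email address.",
    "Write a Python function that parses an ISO 8601 date string.",
    "Write unit tests for a function that reverses a linked list.",
]

def run_agent(task: str) -> str:
    """Send one coding task to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a careful senior Python developer."},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # the "army": each task runs as its own request, in parallel
    with ThreadPoolExecutor(max_workers=len(TASKS)) as pool:
        for task, answer in zip(TASKS, pool.map(run_agent, TASKS)):
            print(f"--- {task}\n{answer}\n")
```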
I'm currently reconsidering my subscription to Canva after watching that shite.
Da fuq is wrong with tech people?
1 user liked this post: Nintex
Corporate staff watching this video
"Oh my gosh, we need Canva guys #lol, lets send IT a message with Microsoft Teams to install it for us"
The tech companies are one thing, the corporate normies that like this shit are the biggest problem.
We should've bullied the normies off the internet harder and kept everything as difficult as possible for them.