Strategies & Market Trends : Technical analysis for shorts & longs


To: Johnny Canuck who wrote (59186)5/10/2024 12:36:12 PM
From: Johnny Canuck  Respond to of 69087
 
Social media platforms race to address AI-generated images ahead of November election


Analysis by Oliver Darcy, CNN

3 minute read
Published 7:00 AM EDT, Fri May 10, 2024



With less than 200 days until the November election, the largest social media companies have outlined plans to ensure users can differentiate between content generated by machines and humans.
Anna Barclay/Getty Images

Editor’s Note: A version of this article first appeared in the “Reliable Sources” newsletter. Sign up for the daily digest chronicling the evolving media landscape here.

New York (CNN) —
Big Tech is racing to address the stream of A.I.-generated images inundating social media platforms before the machine-crafted renderings further contaminate the information space.

TikTok announced on Thursday that it will begin labeling A.I.-generated content. Meta (the parent company of Instagram, Threads and Facebook) said last month that it will begin labeling such content. And YouTube introduced rules mandating creators disclose when videos are A.I.-created so that a label can be applied. (Notably, Elon Musk’s X has not announced any plans to label A.I.-generated content.)


With less than 200 days until the high-stakes November election, and as the technology advances at break-neck speed, the three largest social media companies have each outlined plans to ensure their billions of users can differentiate between content generated by machines and humans.

Meanwhile, OpenAI, the creator of ChatGPT, which also lets users create A.I.-generated imagery through its DALL-E model, said this week that it will launch a tool that allows users to detect when an image was built by a bot. Additionally, the company said it will launch a $2 million election-related fund with Microsoft to combat deepfakes that can “deceive the voters and undermine democracy.”

The efforts from Silicon Valley represent an acknowledgment that the tools being built by technological titans have the serious potential to wreak havoc on the information space and inflict grave injury to the democratic process.






A.I.-generated imagery has already proven to be particularly deceptive. Just this week, an A.I.-created image of pop star Katy Perry supposedly posing on the Met Gala red carpet in metallic and floral dresses fooled people into believing that the singer attended the annual event, when in fact she did not. The image was so realistic that Perry’s own mother believed it to be authentic.

“Didn’t know you went to the Met,” Perry’s mom texted the singer, according to a screenshot posted by Perry.

“lol, mom the AI got you too, BEWARE!” Perry replied.

While the viral image didn’t cause serious harm, it’s not difficult to imagine a scenario — particularly ahead of a major election — in which a fake photograph could mislead voters and stir confusion, perhaps tipping the scale in favor of one candidate or another.

But, despite the repeated and alarming warnings from industry experts and figures, the federal government has, thus far, failed to take any action to establish safeguards around the industry. And so, Big Tech has been left to its own devices to rein in the technology before bad actors can exploit it for their own benefit. (What could possibly go wrong?)

Whether the industry-led efforts can successfully curb the spread of damaging deepfakes remains to be seen. Social media giants have reams of rules prohibiting certain content on their platforms, but history has repeatedly shown that they have often failed to adequately enforce them and allowed malicious content to spread to the masses before taking action.

That poor record doesn’t inspire much confidence as A.I.-created images increasingly bombard the information environment — particularly as the U.S. hurtles toward an unprecedented election with democracy itself at stake.


