Congress Might Actually Do Something About AI, Thanks to Taylor Swift

Welcome to AI This Week, Gizmodo’s weekly deep dive on what’s been happening in artificial intelligence.

Concerns about AI porn—or, more commonly, “deepfake porn”—are not new. For years, countless women and girls have been subjected to a flood of non-consensual pornographic imagery that is easy to distribute online but quite difficult to get taken down. Most notably, celebrity deepfake porn has been an ongoing source of controversy, one that has frequently gained attention but little legislative traction. Now, Congress may finally do something about it thanks to dirty computer-generated images of the world’s most famous pop star.

Yes, it has been a story that has been difficult to avoid: A couple of weeks ago, pornographic AI-generated images of Taylor Swift were distributed widely on X (formerly Twitter). Since then, Swift’s fan base has been in an uproar, and a national conversation has emerged about what to do about this very familiar problem.

Now, legislation has been introduced to combat the issue. The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, a bipartisan bill introduced by Sens. Dick Durbin (D-Ill.), Josh Hawley (R-Mo.), and Lindsey Graham (R-S.C.), would, if enacted, allow victims of deepfake porn to sue individuals who distributed “digital forgeries” of them that were sexual in nature. The proposed law would basically open the door for high-profile litigation on the part of female celebrities whose images are used in instances like the one involving Swift. Other women and victims would be able to sue too, obviously, but the wealthier, famous ones would have the resources to carry out such litigation.

The bill defines “digital forgery” as “a visual depiction created through the use of software, machine learning, artificial intelligence, or any other computer-generated or technological means to falsely appear to be authentic.”

“This month, fake, sexually-explicit images of Taylor Swift that were generated by artificial intelligence swept across social media platforms. Although the imagery may be fake, the harm to the victims from the distribution of sexually-explicit ‘deepfakes’ is very real,” said Sen. Durbin, in a press release associated with the bill. The press release also notes that the “volume of ‘deepfake’ content available online is increasing exponentially as the technology used to create it has become more accessible to the public.”

As previously noted, AI-generated or “deepfake” porn has been an ongoing problem for quite some time, but advances in AI over the past few years have made the generation of realistic (if slightly bizarre) porn much, much easier. The advent of free, accessible image generators, like OpenAI’s DALL-E and others of its kind, means that pretty much anybody can create whatever image they want—or, at the very least, an algorithm’s best approximation of what they want—at the click of a button. This has caused a cascading series of problems, including an apparent explosion of computer-generated child abuse material that governments and content regulators don’t seem to know how to combat.

The conversation around regulating deepfakes has been broached again and again, though serious efforts to implement some new policy have repeatedly been tabled or abandoned by Congress.

There’s little way to know whether this particular effort will succeed, though as Amanda Hoover at Wired recently pointed out, if Taylor Swift can’t defeat deepfake porn, no one can.

Question of the day: Can Meta’s new robot clean up your gross-ass bedroom?

[Video: OK-Robot: Home 10]

There’s currently a race in Silicon Valley to see who can create the most commercially viable robot. While most companies seem to be preoccupied with creating a gimmicky “humanoid” robot that reminds onlookers of C-3PO, Meta may be winning the race to create an authentically functional robot that can do stuff for you. This week, researchers connected to the company unveiled their OK-Robot, which looks like a lamp stand attached to a Roomba. While the device may look silly, the AI system that drives the machine means serious business. In multiple YouTube videos, the robot can be seen zooming around a messy room, picking up and relocating various objects. Researchers say that the bot uses “Vision-Language Models (VLMs) for object detection, navigation primitives for movement, and grasping primitives for object manipulation.” In other words, this thing can see stuff, grab stuff, and move around in a physical space with a fair amount of competence. Additionally, the bot does this in environments that it’s never been in before—which is an impressive feat for a robot, since most of them can only perform tasks in highly controlled environments.
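For the curious, here is a rough idea of what that three-part pipeline looks like in practice. The Python sketch below is purely illustrative: the class and method names are hypothetical stand-ins for the modules the researchers describe (a vision-language model that finds objects, navigation primitives that move the base, and grasping primitives that pick things up), not OK-Robot’s actual code or API.

# Illustrative only: hypothetical stand-ins for the three modules described
# above (VLM detection, navigation primitives, grasping primitives).
from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    position: tuple  # (x, y, z) in the room's coordinate frame


class VisionLanguageDetector:
    """Stub for a vision-language model that matches a text query to a location."""
    def locate(self, query: str) -> Detection:
        # A real system would run open-vocabulary detection on camera frames here.
        return Detection(label=query, position=(1.0, 2.0, 0.3))


class NavigationPrimitive:
    """Stub for moving the wheeled base toward a target position."""
    def go_to(self, position: tuple) -> None:
        print(f"navigating to {position}")


class GraspPrimitive:
    """Stub for the arm: pick an object up or put it down."""
    def pick(self, detection: Detection) -> None:
        print(f"picking up {detection.label}")

    def place(self) -> None:
        print("placing object")


def tidy(item: str, destination: str) -> None:
    """Move one named item to a named destination, end to end."""
    detector = VisionLanguageDetector()
    nav = NavigationPrimitive()
    grasp = GraspPrimitive()

    target = detector.locate(item)      # see stuff
    nav.go_to(target.position)          # move to it
    grasp.pick(target)                  # grab it
    drop_zone = detector.locate(destination)
    nav.go_to(drop_zone.position)
    grasp.place()


if __name__ == "__main__":
    tidy("soda can", "recycling bin")

The real system obviously swaps in an actual detector and real motion controllers, but the division of labor the researchers describe is the point: perception hands a target to navigation, which hands it to the gripper.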

Other headlines this week:

  • AI companies just lost a shitload of stock value. The market capitalization of several large AI companies plummeted this week after their quarterly earnings reports showed they had brought in significantly less revenue than investors were expecting. Google parent company Alphabet, Microsoft, and chipmaker AMD all saw massive selloffs on Tuesday. Reuters reports that, in total, the companies lost $190 billion in market cap. Seriously, yikes. That’s a lot.
  • The FCC might criminalize AI-generated robocalls. AI has allowed online fraud to run rampant—turbo-charging online scams that were already annoying but that, thanks to new forms of automation, are now worse than ever. Last week, President Joe Biden was the subject of an AI-generated robocall and, as a result, the Federal Communications Commission now wants to legally ban such calls. “AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate,” said Jessica Rosenworcel, FCC Chairwoman, in a statement sent to NBC.
  • Amazon has debuted an AI shopping assistant. The biggest e-commerce company in the world has rolled out an AI-powered chatbot, dubbed “Rufus,” that’s designed to help you buy stuff more efficiently. Rufus is described as an “expert shopping assistant trained on Amazon’s product catalog and information from across the web to answer customer questions on shopping needs, products, and comparisons.” While I’m tempted to make fun of this thing, I have to admit: Shopping can be hard. It often feels like a ridiculous amount of research is required just to make the simplest of purchases. Only time will tell whether Rufus can actually save the casual web user time or whether it’ll “hallucinate” some godawful advice that makes your e-commerce journey even worse. If the latter turns out to be the case, I vote we lobby Amazon to rename the bot “Doofus.”
