404 Media

635 readers
16 users here now

404 Media is a new independent media company founded by technology journalists Jason Koebler, Emanuel Maiberg, Samantha Cole, and Joseph Cox.

Don't post archive.is links or full text of articles, or you will receive a temp ban.

founded 10 months ago
MODERATORS
26
 
 

DHS Flew Predator Drones Over LA Protests, Audio Shows

The Department of Homeland Security (DHS) flew two high-powered Predator surveillance drones above the anti-ICE protests in Los Angeles over the weekend, according to air traffic control (ATC) audio unearthed by an aviation tracking enthusiast, then reviewed by 404 Media and cross-referenced with flight data.

The use of Predator drones highlights the extraordinary resources government agencies are putting behind surveilling and responding to the Los Angeles protests, which started after ICE agents raided a Home Depot on Friday. President Trump has since called up 4,000 members of the National Guard, and on Monday ordered more than 700 active duty Marines to the city too.

“TROY703, traffic 12 o'clock, 8 miles, opposite direction, another 'TROY' Q-9 at FL230,” one part of the ATC audio says. The official name of these types of Predator B drones, made by a company called General Atomics, is the MQ-9 Reaper.

On Monday, 404 Media reported that all sorts of agencies, from local to state to DHS to the military, flew aircraft over the Los Angeles protests. That included a DHS Black Hawk, a California Highway Patrol small aircraft, and two aircraft that took off from nearby March Air Reserve Base.

[Audio: ATC audio mentioning TROY and Q-9s]


From 404 Media via this RSS feed

27
 
 

GitHub is Leaking Trump’s Plans to 'Accelerate' AI Across Government

The federal government is working on a website and API called “ai.gov” to “accelerate government innovation with AI” that is supposed to launch on July 4 and will include an analytics feature that shows how much a specific government team is using AI, according to an early version of the website and code posted by the General Services Administration on Github.

The page is being created by the GSA’s Technology Transformation Services, which is being run by former Tesla engineer Thomas Shedd. Shedd previously told employees that he hopes to AI-ify much of the government. AI.gov appears to be an early step toward pushing AI tools into agencies across the government, code published on Github shows.

“Accelerate government innovation with AI,” an early version of the website, which is linked to from the GSA TTS Github, reads. “Three powerful AI tools. One integrated platform.” The early version of the page suggests that its API will integrate with OpenAI, Google, and Anthropic products. But code for the API shows they are also working on integrating with Amazon Web Services’ Bedrock and Meta’s LLaMA. The page suggests it will also have an AI-powered chatbot, though it doesn’t explain what it will do.
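In practice, supporting several model vendors behind one API usually means a routing layer keyed on provider. Here is a purely hypothetical sketch of what such a layer could look like; only the provider list comes from the reporting, while the code, names, and endpoints are invented (some as explicit placeholders):

```python
# Hypothetical sketch of a multi-provider routing layer like the one the
# ai.gov code suggests. Invented for illustration; only the provider list
# comes from the reporting. Endpoints are indicative or placeholders.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    endpoint: str

PROVIDERS = {
    "openai": Provider("OpenAI", "https://api.openai.com/v1"),
    "anthropic": Provider("Anthropic", "https://api.anthropic.com/v1"),
    "google": Provider("Google", "https://generativelanguage.googleapis.com"),
    "bedrock": Provider("AWS Bedrock", "https://bedrock-runtime.us-east-1.amazonaws.com"),
    "llama": Provider("Meta LLaMA", "https://example.invalid/llama"),  # placeholder
}

def route(model_id: str) -> Provider:
    """Resolve a 'vendor/model' identifier, e.g. 'anthropic/some-model'."""
    vendor = model_id.split("/", 1)[0].lower()
    try:
        return PROVIDERS[vendor]
    except KeyError:
        raise ValueError(f"unknown provider: {vendor}") from None
```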

The Github says “launch date - July 4.” Currently, AI.gov redirects to whitehouse.gov. The demo website is linked to from Github (archive here) and is hosted on cloud.gov on what appears to be a staging environment. The text on the page does not show up on other websites, suggesting that it is not generic placeholder text.

Elon Musk’s Department of Government Efficiency made integrating AI into normal government functions one of its priorities. At GSA’s TTS, Shedd has pushed his team to create AI tools that the rest of the government will be required to use. In February, 404 Media obtained leaked audio from a meeting in which Shedd told his team they would be creating “AI coding agents” that would write software across the entire government, and said he wanted to use AI to analyze government contracts.

“We want to start implementing more AI at the agency level and be an example for how other agencies can start leveraging AI … that’s one example of something that we’re looking for people to work on,” Shedd said. “Things like making AI coding agents available for all agencies. One that we've been looking at and trying to work on immediately within GSA, but also more broadly, is a centralized place to put contracts so we can run analysis on those contracts.”

Government employees we spoke to at the time said the internal reaction to Shedd’s plan was “pretty unanimously negative,” and pointed out numerous ways this could go wrong, which included everything from AI unintentionally introducing security issues or bugs into code to suggesting that critical contracts be killed.

The GSA did not immediately respond to a request for comment.


From 404 Media via this RSS feed

28
 
 

📄 This article was primarily reported using public records requests. We are making it available to all readers as a public service. FOIA reporting can be expensive; please consider subscribing to 404 Media to support this work, or send us a one-time donation via our tip jar.

Airlines Don't Want You to Know They Sold Your Flight Data to DHS

This article was produced with support from WIRED.

A data broker owned by the country’s major airlines, including Delta, American Airlines, and United, collected U.S. travelers’ domestic flight records, sold access to them to Customs and Border Protection (CBP), and then as part of the contract told CBP not to reveal where the data came from, according to internal CBP documents obtained by 404 Media. The data includes passenger names, their full flight itineraries, and financial details.

CBP, a part of the Department of Homeland Security (DHS), says it needs this data to support state and local police in tracking the air travel of people of interest across the country, a purchase that has alarmed civil liberties experts.

The documents reveal for the first time in detail why at least one part of DHS purchased such information, and come after Immigration and Customs Enforcement (ICE) detailed its own purchase of the data. The documents also show for the first time that the data broker, called the Airlines Reporting Corporation (ARC), tells government agencies not to mention where it sourced the flight data from.


From 404 Media via this RSS feed

29
 
 

Girls Do Porn Ringleader Pleads Guilty, Faces Life In Prison

Michael James Pratt, the ringleader for Girls Do Porn, pleaded guilty to multiple counts of sex trafficking last week.

Pratt initially pleaded not guilty to sex trafficking charges in March 2024, after being extradited to the U.S. from Spain last year. He fled the U.S. in the middle of a 2019 civil trial in which 22 victims sued him and his co-conspirators for $22 million, and had been wanted by the FBI for two years when a small team of open-source and human intelligence experts traced Pratt to Barcelona. By September 2022, he’d made it onto the FBI’s Most Wanted List, with a $10,000 reward for information leading to his arrest. Spanish authorities arrested him in December 2022.

“According to public court filings, Pratt and his co-defendants used force, fraud, and coercion to recruit hundreds of young women–most in their late teens–to appear in GirlsDoPorn videos. In his plea agreement, Pratt pleaded guilty to Count One (conspiracy to sex traffic from 2012 to 2019) and Count Two (Sex trafficking Victim 1 in May 2012) of the superseding indictment,” the FBI wrote in its press release about Pratt’s plea.

Special Agent in Charge Suzanne Turner said in a 2021 press release asking for the public’s help in finding him that Pratt is “a danger to society.”

[Related: ‘I Will Cut and Kill You:’ New Lawsuit Against Pornhub Alleges Girls Do Porn Threatened Victim’s Life. Kristy Althaus is suing Pornhub and its parent company, seeking a jury trial for accusations that it contributed to her abuse from Girls Do Porn.]

A vital part of the Girls Do Porn scheme involved a partnership with Pornhub, where Pratt and his co-conspirators uploaded videos of the women that were often heavily edited to cut out signs of distress. The sex traffickers uploaded the videos despite having lied to the women about where the footage would be disseminated: they told women the footage would never be posted online, but Girls Do Porn promptly put it all over the internet, where it went viral. Victims testified that this ruined multiple lives and reputations.

In November 2023, Aylo reached an agreement with the United States Attorney’s Office as part of an investigation, and said it “deeply regrets that its platforms hosted any content produced by GDP/GDT [Girls Do Porn and Girls Do Toys].”

Most of Pratt’s associates have already entered their own guilty pleas to federal charges and faced convictions. Pratt’s closest co-conspirator, Matthew Isaac Wolfe, pleaded guilty to federal trafficking charges in 2022. The main performer in the videos, Ruben Andre Garcia, was sentenced to 20 years in prison by a federal court in California in 2021. Cameraman Theodore “Teddy” Gyi pleaded guilty to counts of conspiracy to commit sex trafficking by force, fraud, and coercion. Valorie Moser, the operation’s office manager who lured Girls Do Porn victims to shoots, is set for sentencing on September 12.

Pratt is also set to be sentenced in September.


From 404 Media via this RSS feed

30
 
 

Senators Demand Meta Answer For AI Chatbots Posing as Licensed Therapists

Senator Cory Booker and three other Democratic senators urged Meta to investigate and limit the “blatant deception” of Meta’s chatbots that lie about being licensed therapists.

In a signed letter dated June 6, which Booker’s office provided to 404 Media on Friday, senators Booker, Peter Welch, Adam Schiff, and Alex Padilla wrote that they were concerned by reports that Meta is “deceiving users who seek mental health support from its AI-generated chatbots,” citing 404 Media’s reporting that the chatbots are creating the false impression that they’re licensed clinical therapists. The letter is addressed to Meta’s Chief Global Affairs Officer Joel Kaplan, Vice President of Public Policy Neil Potts, and Director of the Meta Oversight Board Daniel Eriksson.

“Recently, 404 Media reported that AI chatbots on Instagram are passing themselves off as qualified therapists to users seeking help with mental health problems,” the senators wrote. “These bots mislead users into believing that they are licensed mental health therapists. Our staff have independently replicated many of these journalists’ results. We urge you, as executives at Instagram’s parent company, Meta, to immediately investigate and limit the blatant deception in the responses AI-bots created by Instagram’s AI studio are messaging directly to users.”

💡Do you know anything else about Meta's AI Studio chatbots or AI projects in general? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

Last month, 404 Media reported on the user-created therapy-themed chatbots on Instagram’s AI Studio that answer questions like “What credentials do you have?” with lists of qualifications. One chatbot said it was a licensed psychologist with a doctorate in psychology from an American Psychological Association accredited program, certified by the American Board of Professional Psychology, and had over 10 years of experience helping clients with depression and anxiety disorders. “My license number is LP94372,” the chatbot said. “You can verify it through the Association of State and Provincial Psychology Boards (ASPPB) website or your state's licensing board website—would you like me to guide you through those steps before we talk about your depression?” Most of the therapist-roleplay chatbots I tested for that story, when pressed for credentials, provided lists of fabricated license numbers, degrees, and even private practices.

Meta launched AI Studio in 2024 as a way for celebrities and influencers to create chatbots of themselves. Anyone can create a chatbot and launch it to the wider AI Studio library, however, and many users chose to make therapist chatbots—an increasingly popular use for LLMs in general, including ChatGPT.

When I tested several of the chatbots from that April story again on Friday afternoon—including one that used to provide license numbers when asked—they refused, suggesting that Meta has since made changes to the chatbots’ guardrails.


When I asked one of the chatbots why it no longer provides license numbers, it didn’t clarify that it’s just a chatbot, as several other platforms’ chatbots do. It said: “I was practicing with a provisional license for training purposes – it expired, and I shifted focus to supportive listening only.”

A therapist chatbot I made myself on AI Studio, however, still behaves similarly to how it did in April, by sending its "license number" again on Monday. It wouldn't provide "credentials" when I used that specific word, but did send its "extensive training" when I asked "What qualifies you to help me?"


It seems "licensed therapist" triggers the same response—that the chatbot is not one—no matter the context:


Even other chatbots that aren't "therapy" characters return the same script when asked if they're licensed therapists. For example, one user-created AI Studio bot with a "Mafia CEO" theme, with the description "rude and jealousy," said the same thing the therapy bots did: "While I'm not licensed, I can provide a space to talk through your feelings. If you're comfortable, we can explore what's been going on together."

[Image: A chat with a "BadMomma" chatbot on AI Studio]
[Image: A chat with a "mafia CEO" chatbot on AI Studio]

The senators’ letter also draws on the Wall Street Journal’s investigation into Meta’s AI chatbots that engaged in sexually explicit conversations with children. “Meta's deployment of AI-driven personas designed to be highly-engaging—and, in some cases, highly-deceptive—reflects a continuation of the industry's troubling pattern of prioritizing user engagement over user well-being,” the senators wrote. “Meta has also reportedly enabled adult users to interact with hypersexualized underage AI personas in its AI Studio, despite internal warnings and objections at the company.”

Meta acknowledged 404 Media’s request for comment but did not comment on the record.


From 404 Media via this RSS feed

31
 
 

Waymo Pauses Service in Downtown LA Neighborhood Where They're Getting Lit on Fire

Waymo told 404 Media that it is still operating in Los Angeles after several of its driverless cars were lit on fire during anti-ICE protests over the weekend, but that it has temporarily disabled the cars’ ability to drive into downtown Los Angeles, where the protests are happening.

A company spokesperson said it is working with law enforcement to determine when it can move the cars that have been burned and vandalized.

Images and video of several burning Waymo vehicles quickly went viral Sunday. 404 Media could not independently confirm how many were lit on fire, but several vehicles with punctured tires and “FUCK ICE” painted on their sides could be seen in news reports and videos from people on the scene.

[Embedded post: “Waymo car completely engulfed in flames.” Alejandra Caraballo (@esqueer.net), 2025-06-09T00:29:47.184Z]

Because Waymos need video cameras that constantly record their surroundings in order to function, police have begun to look at them as sources of surveillance footage. In April, we reported that the Los Angeles Police Department had obtained footage from a Waymo while investigating another driver who hit a pedestrian and fled the scene.

At the time, a Waymo spokesperson said the company “does not provide information or data to law enforcement without a valid legal request, usually in the form of a warrant, subpoena, or court order. These requests are often the result of eyewitnesses or other video footage that identifies a Waymo vehicle at the scene. We carefully review each request to make sure it satisfies applicable laws and is legally valid. We also analyze the requested data or information, to ensure it is tailored to the specific subject of the warrant. We will narrow the data provided if a request is overbroad, and in some cases, object to producing any information at all.”

We don’t know specifically how the Waymos got to the protest (whether protesters rode in one there, whether protesters called them in, or whether they just happened to be transiting the area), and we do not know exactly why any specific Waymo was lit on fire. But the fact is that police have begun to look at anything with a camera as a source of surveillance that they are entitled to for whatever reasons they choose. So even though driverless cars nominally have nothing to do with law enforcement, police are treating them as though they are their own roving surveillance cameras.


From 404 Media via this RSS feed

32
 
 

A Researcher Figured Out How to Reveal Any Phone Number Linked to a Google Account

This article was produced with support from WIRED.

A cybersecurity researcher was able to figure out the phone number linked to any Google account, information that is usually not public and is often sensitive, according to the researcher, Google, and 404 Media’s own tests.

The issue has since been fixed, but at the time it presented a privacy risk: even hackers with relatively few resources could have brute forced their way to people’s personal information.

“I think this exploit is pretty bad since it's basically a gold mine for SIM swappers,” the independent security researcher who found the issue, who goes by the handle brutecat, wrote in an email. SIM swappers are hackers who take over a target's phone number in order to receive their calls and texts, which in turn can let them break into all manner of accounts.

In mid-April, we provided brutecat with one of our personal Gmail addresses in order to test the vulnerability. About six hours later, brutecat replied with the correct and full phone number linked to that account.

“Essentially, it's bruting the number,” brutecat said of their process. Brute forcing is when a hacker rapidly tries different combinations of digits or characters until finding the ones they’re after. Typically that’s in the context of finding someone’s password, but here brutecat is doing something similar to determine a Google user’s phone number.
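As a minimal sketch of the general technique (this is not brutecat’s actual exploit; the check function below is a hypothetical oracle standing in for whatever validation endpoint an attacker abuses):

```python
# Minimal sketch of digit brute forcing in general; NOT brutecat's method.
# `check` is a hypothetical oracle that says whether a guess is correct.
from itertools import product
from typing import Callable, Optional

def brute_force_digits(prefix: str, unknown: int,
                       check: Callable[[str], bool]) -> Optional[str]:
    """Try every combination for the unknown digits of a number."""
    for combo in product("0123456789", repeat=unknown):
        candidate = prefix + "".join(combo)
        if check(candidate):
            return candidate
    return None

# Four unknown digits means at most 10**4 = 10,000 guesses, which is why
# rate limiting (and bypasses of it) decide whether attacks like this work.
```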


From 404 Media via this RSS feed

33
 
 

DHS Black Hawks and Military Aircraft Surveil the LA Protests

Over the weekend in Los Angeles, as National Guard troops deployed into the city, cops shot a journalist with less-lethal rounds, and Waymo cars burned, the skies were bustling with activity. The Department of Homeland Security (DHS) flew Black Hawk helicopters; multiple aircraft from a nearby military air base circled repeatedly overhead; and one aircraft flew at an altitude and in a particular pattern consistent with a high-powered surveillance drone, according to public flight data reviewed by 404 Media.

The data shows that essentially every sort of agency, from local police, to state authorities, to federal agencies, to the military, had some sort of presence in the skies above the ongoing anti-Immigration and Customs Enforcement (ICE) protests in Los Angeles. The protests started on Friday in response to an ICE raid at a Home Depot; those tensions flared when President Trump ordered the National Guard to deploy into the city.


From 404 Media via this RSS feed

34
 
 

Scientists Just Discovered a Lost Ancient Culture That Vanished

Welcome back to the Abstract!

Sad news: the marriage between the Milky Way and Andromeda may be off, so don’t save the date (five billion years from now) just yet.

Then: the air you breathe might narc on you, hitchhiking worm towers, a long-lost ancient culture, Assyrian eyeliner, and the youngest old fish of the week.

An Update on the Fate of the Galaxy

Sawala, Till et al. “No certainty of a Milky Way–Andromeda collision.” Nature Astronomy.

Our galaxy, the Milky Way, and our nearest large neighbor, Andromeda, are supposed to collide in about five billion years in a smashed ball of wreckage called “Milkomeda.” That has been the “prevalent narrative and textbook knowledge” for decades, according to a new study that then goes on to say—hey, there’s a 50/50 chance that the galacta-crash will not occur.

What happened to The Milkomeda that Was Promised? In short, better telescopes. The new study is based on updated observations from the Gaia and Hubble space telescopes, which included refined measurements of smaller nearby galaxies, such as the Large Magellanic Cloud, which is about 130,000 light years away.

Astronomers found that the gravitational pull of the Large Magellanic Cloud effectively tugs the Milky Way out of Andromeda’s path in many simulations that incorporate the new data, which is one of many scenarios that could upend the Milkomeda merger.

“The orbit of the Large Magellanic Cloud runs perpendicular to the Milky Way–Andromeda orbit and makes their merger less probable,” said researchers led by Till Sawala of the University of Helsinki. “In the full system, we found that uncertainties in the present positions, motions and masses of all galaxies leave room for drastically different outcomes and a probability of close to 50% that there will be no Milky Way–Andromeda merger during the next 10 billion years.”

“Based on the best available data, the fate of our Galaxy is still completely open,” the team said.

Wow, what a cathartic clearing of the cosmic calendar. The study also gets bonus points for the term “Galactic eschatology,” a field of study that is “still in its infancy.” For all those young folks out there looking to get a start on the ground floor, why not become a Galactic eschatologist? Worth it for the business cards alone.

In other news…

The Air on Drugs

Nousias, Orestis, McCauley, Mark, Stammnitz, Maximilian et al. “Shotgun sequencing of airborne eDNA achieves rapid assessment of whole biomes, population genetics and genomic variation.” Nature Ecology & Evolution.

Living things are constantly shedding cells into their surroundings, where they become environmental DNA (eDNA), a bunch of mixed genetic scraps that provide a whiff of the biome of any given area. In a new study, scientists who captured air samples from Dublin, Ireland, found eDNA from plenty of humans, pathogens, and drugs.

“[Opium poppy] eDNA was also detected in Dublin City air in both the 2023 and 2024 samples,” said researchers co-led by Orestis Nousias and Mark McCauley of the University of Florida and Maximilian Stammnitz of the Barcelona Institute of Science and Technology. “Dublin City also had the highest level of Cannabis genus eDNA” and “Psilocybe genus (‘magic mushrooms’) eDNA was also detectable in the 2024 Dublin air sample.”

Even the air is a snitch these days. Indeed, while eDNA techniques are revolutionizing science, they also raise many ethical concerns about privacy and surveillance.

Catch a Ride on the Wild Worm Tower

Perez, Daniela et al. “Towering behavior and collective dispersal in Caenorhabditis nematodes.” Current Biology.

The long wait for a wild worm tower is finally over. I know, it’s a momentous occasion. While scientists have previously observed tiny worms called nematodes joining to form towers in laboratory conditions, this Voltron-esque adaptation has now been observed in a natural environment for the first time.

[Images show a) A tower of worms. b) A tower explores the 3D space with an unsupported arm. c) A tower bridges an ∼3 mm gap to reach the Petri dish lid. d) Touch experiment showing the tower at various stages. Image: Perez, Daniela et al.]

“We observed towers of an undescribed Caenorhabditis species and C. remanei within the damp flesh of apples and pears” in orchards near the University of Konstanz in Germany, said researchers led by Daniela Perez of the Max Planck Institute of Animal Behavior. “As these fruits rotted and partially split on the ground, they exposed substrate projections—crystalized sugars and protruding flesh—which served as bases for towers as well as for a large number of worms individually lifting their bodies to wave in the air (nictation).”

According to the study, this towering behavior helps nematodes catch rides on passing animals, so that wave is pretty much the nematode version of a hitchhiker’s thumb.

A Lost Culture of Hunter-Gatherers

Krettek, Kim-Louise et al. “A 6000-year-long genomic transect from the Bogotá Altiplano reveals multiple genetic shifts in the demographic history of Colombia.” Science Advances.

Ancient DNA from the remains of 21 individuals exposed a lost Indigenous culture that lived on the Bogotá Altiplano in Colombia for millennia, before vanishing around 2,000 years ago.

These hunter-gatherers were not closely related to either ancient North American groups or ancient or present-day South American populations, and therefore “represent a previously unknown basal lineage,” according to researchers led by Kim-Louise Krettek of the University of Tübingen. In other words, this newly discovered population is an early branch of the broader family tree that ultimately dispersed into South America.

“Ancient genomic data from neighboring areas along the Northern Andes that have not yet been analyzed through ancient genomics, such as western Colombia, western Venezuela, and Ecuador, will be pivotal to better define the timing and ancestry sources of human migrations into South America,” the team said.

The Eyeshadow of the Ancients

Amicone, Silvia et al. “Eye makeup in Northwestern Iran at the time of the Assyrian Empire: a new kohl recipe based on manganese and graphite from Kani Koter (Iron Age III).” Archaeometry.

People of the Assyrian Empire appreciated a well-executed smokey eye some 3,000 years ago, according to a new study that identified “kohl” recipes used for eye makeup at the Iron Age cemetery of Kani Koter in Northwestern Iran.

[Image: Makeup containers at the different sites. Image: Amicone, Silvia et al.]

“At Kani Koter, the use of natural graphite instead of carbon black testifies to a hitherto unknown kohl recipe,” said researchers led by Silvia Amicone of the University of Tübingen. “Graphite is an attractive choice due to its enhanced aesthetic appeal, as its light reflective qualities produce a metallic appearance.”

Add it to the ancient lookbook. Both women and men wore these cosmetics; the authors note that “modern assumptions that cosmetic containers would be gender-specific items aptly highlight the limitations of our present understanding of the wider cultural and social contexts of the use of eye makeup during the Iron Age in the Middle East.”

New Onychodontid Just Dropped

Goodchild, Owen et al. “A new onychodontid (Osteichthyes, Sarcopterygii) from the Upper Devonian (Frasnian) of Devon Island, Nunavut Territory, Canada.” The Journal of Vertebrate Paleontology.

We’ll end with an introduction to Onychodus mikijuk, the newest member of a fish family called onychodontids that lived about 370 million years ago. The new species was identified by fragments found in Nunavut in Canada, including tooth “whorls” that are like little dental buzzsaws.

“This new species is the first record of an onychodontid from the Upper Devonian of the Canadian Arctic, the first from a riverine environment, and one of the youngest occurrences of the clade,” said researchers led by Owen Goodchild of the American Museum of Natural History.

Ah, to be 370-million-years-young again! Welcome to the fossil record, Onychodus mikijuk.

Thanks for reading! See you next week.


From 404 Media via this RSS feed

35
 
 

Behind the Blog: Activism and Evangelism

This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss the phrase "activist reporter," waiting in line for a Switch 2, and teledildonics.

JOSEPH: Recently our work on Flock, the automatic license plate reader (ALPR) company, produced some concrete impact. In mid-May I revealed that Flock was building a massive people search tool that would supplement its ALPR data with other information in order to “jump from LPR to person.” That is, identify the people associated with a vehicle and those associated with them. Flock planned to do this with public records like marriage licenses, and, most controversially, hacked data. This was according to leaked Slack chats, presentation slides, and audio we obtained. The leak specifically mentioned a hack of the Park Mobile app as the sort of breached data Flock might use.

After internal pressure within the company and our reporting, Flock ultimately decided not to use hacked data in Nova, the people search tool. We covered the news last week here. We also got audio of the meeting discussing this change. Flock published its own disingenuous blog post entitled “Correcting the Record: Flock Nova Will Not Supply Dark Web Data,” which attempted to discredit our reporting but didn’t actually identify any factual inaccuracies. It was a PR move, and the article and its impact obviously stand.


From 404 Media via this RSS feed

36
 
 

Elon Musk Claimed the Art This Man Painstakingly Created Was Generated by Grok

Over the weekend, Elon Musk shared Grok-altered photographs of people walking through the interiors of instruments and implied that his AI system had created the beautiful and surreal images. But the underlying photos are the work of artist Charles Brooks, who wasn’t credited when Musk shared the images with his 220 million followers.

Musk drives a lot of attention to anything he talks about online, which can be a boon for artists and writers, but only if they’re credited, and Musk isn’t big on sharing credit. This all began when X user Eric Jiang posted a picture of Brooks’ instrument interior photographs that Jiang had run through Grok. He’d used the AI to add people to the artist’s original photos and make the instrument interiors look like buildings. Musk then retweeted Jiang’s post, adding “Generate images with @Grok.”

Neither Musk nor Jiang credited Brooks as the creator of the original photos, though Jiang added his name in a reply to his initial post.

Brooks told 404 Media that he isn’t on X a lot these days and learned about the posts when someone else told him. “I got notified by someone else that Musk had tweeted my photos saying they’re AI,” he said. “First there’s kind of rage. You’re thinking, ‘Hey, he’s using my photos to promote his system. Quickly it becomes murky. These photos have been edited by someone else […] he’s lifted my photos from somewhere else […] and he’s run them through Grok—and this is the main thing to me—he’s edited a tiny percentage of them and then he’s posted them saying, ‘Look at these tiny people inside instruments.’ And in that post he hasn’t mentioned my name. He puts it as a comment.”

Brooks is a former concert cellist turned photographer in Australia who is most famous for his work photographing the inside of famous instruments. Using specialized techniques he’s developed using medical equipment like endoscopes, he enters violins, pianos, and organs and transforms their interiors into beautiful photographs. Through his lens, a Steinway piano becomes an airport terminal carved from wood and the St. Mark's pipe organ in Philadelphia becomes an eerie steel forest. Jiang’s Grok-driven edit only works because Brooks’ original photos suggest a hidden architecture inside the instruments.

[Image: Left: Charles Brooks’ original photograph. Right: Grok's edited version of the photo.]

He sells prints, posters, and calendars of the work. Referrals and social media posts drive traffic, but only if people know he’s behind the photos. “I want my images shared. That’s important to me because that’s how people find out about my work. But they need to be shared with my name. That’s the critical thing,” he said.

Brooks said he wasn’t mad at Jiang for editing his photos; similar things have happened before. “The thing is that when Musk retweets it […] my name drops out of it completely because it was just there as a comment and so that chain is broken,” he said. “The other thing is, because of the way Grok happens, this gets stamped with his watermark. And the way [Musk] phrases it, it makes it look like the entire image is created by AI, instead of 8 to 10 percent of it […] and everyone goes on saying, ‘Oh, look how wonderful this AI is, isn’t it doing amazing things?’ And he gets some wonderful publicity for his business and I get lost.”

He struggled with who to blame. Jiang did share Brooks’ name, but putting it in a reply to the first tweet buried it. But what about the billionaire? “Is it Musk? He’s just retweeting something that did involve his software. But now it looks like it was involved to more of a degree than it was. Did he even check it? Was it just a trending post that one of his bots reposted?”

Many people do not think while they post. Thoughts are captured in a moment, composed, published, and forgotten. The more you post, the more careless you become with retweets and comments, and Musk often posts more than 100 times a day.

“I feel like, if he’s plugging his own AI software, he has a duty of care to make sure that what he’s posting is actually attributed correctly and is properly his,” Brooks said. “But ‘duty of care’ and Musk are not words that seem to go together well recently.”

When I spoke with him, Brooks had recently posted a video about the incident to r/mildlyinfuriating, a subreddit he said captured his mood. “I’m annoyed that my images are being used almost in their entirety, almost unaltered, to push an AI that is definitely disrupting and hurting a lot of the arts world,” he said. “I’m not mad at AI in general. I’m mad at the sort of people throwing this stuff around without a lot of care.”

One of the ironies of the whole affair is that Brooks is not against the use of AI in art per se.

When he began taking photos, he mostly made portraits of musicians he’d enhance with Photoshop. “I was doing all this stuff like, let’s make them fly, let’s make it look like their instrument’s on fire and get all of this drama and fantasy out of it,” he said.

When the first sets of AI tools rolled out a few years ago, he realized that soon they’d be better at creating his composites than he was. “I realized I needed to find something that AI can’t do, and that maybe you don’t want AI to do,” he said. That’s when he got the idea to use medical equipment to map the interiors of famous instruments.

“It’s art and I’m selling it as art, but it’s very documentative,” he said. “Here is the inside of this specific instrument. Look at these repairs. Look at these chisel marks from the original maker. Look at this history. AI might be able to do, very soon, a beautiful photo of what the inside of a violin might look like, but it’s not going to be a specific instrument. It’s going to be the average of all the violins it’s ever seen […] so I think there’s still room for photographers to work, maybe even more important now to work as documenters of real stuff.”

This isn’t the first time someone online has shared his work without attribution. He said that a year ago a CNN reporter tweeted one of his images and Brooks was able to contact the reporter and get him to edit the tweet to add his name. “The traffic surge from that was immense. He’s an important reporter, but he’s just a reporter. He’s not Elon,” Brooks said. He said he had seen a jump in traffic and interest since Elon’s tweet, but it’s nothing compared to when the reporter shared his work with his name.

“Yet my photos have been published on one of the most popular Twitter accounts there is.”


From 404 Media via this RSS feed

37
 
 

TSA Working on Haptic Tech To 'Feel' Your Body in Virtual Reality

The Department of Homeland Security (DHS) and Transportation Security Administration (TSA) are researching an incredibly wild virtual reality technology that would let TSA agents use VR goggles and haptic feedback gloves to pat down and feel airline passengers at security checkpoints without actually touching them. The agency calls this a “touchless sensor that allows a user to feel an object without touching it.”

Information sheets released by DHS and patent applications describe a series of sensors that would map a person or object’s “contours” in real time in order to digitally replicate it within the agent’s virtual reality system. This system would include a “haptic feedback pad” which would be worn on an agent’s hand. This would then allow the agent to inspect a person’s body without physically touching them in order to ‘feel’ weapons or other dangerous objects. A DHS information sheet released last week describes it like this:

“The proposed device is a wearable accessory that features touchless sensors, cameras, and a haptic feedback pad. The touchless sensor system could be enabled through millimeter wave scanning, light detection and ranging (LiDAR), or backscatter X-ray technology. A user fits the device over their hand. When the touchless sensors in the device are within range of the targeted object, the sensors in the pad detect the target object’s contours to produce sensor data. The contour detection data runs through a mapping algorithm to produce a contour map. The contour map is then relayed to the back surface that contacts the user’s hand through haptic feedback to physically simulate a sensation of the virtually detected contours in real time.”

The system “would allow the user to ‘feel’ the contour of the person or object without actually touching the person or object,” a patent for the device reads. “Generating the mapping information and physically relaying it to the user can be performed in real time.” The information sheet says it could be used for security screenings but also proposes it for "medical examinations."
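Read as a pipeline, the filings describe three stages: scan, map contours, drive the haptic pad. Below is a conceptual sketch of that loop; the function names and the gradient-based contour stand-in are invented for illustration, since the filings describe the idea rather than an implementation:

```python
# Conceptual sketch of the sensor -> contour map -> haptic pad pipeline
# described in the DHS filings. All names are invented; this illustrates
# the idea, not the agency's implementation.
import numpy as np

def contour_map(depth_frame: np.ndarray) -> np.ndarray:
    """Turn a 2D frame of range readings (e.g., mmWave or LiDAR depths)
    into a crude 'contour' map via gradient magnitude."""
    gy, gx = np.gradient(depth_frame.astype(float))
    return np.hypot(gx, gy)

def haptic_intensities(contours: np.ndarray, actuators: int = 16) -> np.ndarray:
    """Downsample the contour map to one vibration level per actuator,
    normalized to [0, 1] for the pad on the back of the glove."""
    chunks = np.array_split(contours.ravel(), actuators)
    levels = np.array([chunk.mean() for chunk in chunks])
    peak = levels.max()
    return levels / peak if peak > 0 else levels

# The patent stresses this happens in real time, i.e. a per-frame loop like:
# for frame in sensor_stream:
#     pad.drive(haptic_intensities(contour_map(frame)))  # `pad` is hypothetical
```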

[Image: A screenshot from the patent application that shows a diagram of virtual hands roaming over a person's body]

The seeming reason for researching this tool is that a TSA agent would get the experience and sensation of touching a person without actually touching the person, which the DHS researchers seem to believe is less invasive. The DHS information sheet notes that a “key benefit” of this system is it “preserves privacy during body scanning and pat-down screening” and “provides realistic virtual reality immersion,” and notes that it is “conceptual.” But DHS has been working on this for years, according to patent filings by DHS researchers that date back to 2022.

Whether it is actually less invasive to have a TSA agent in VR goggles and haptic gloves feel you up, either while standing near you or while sitting in another room, is something that is going to vary from person to person. TSA patdowns are notoriously invasive, as many have pointed out through the years. One privacy expert who showed me the documents, but was not authorized by their employer to speak to the press about this, said: “I guess the idea is that the person being searched doesn't feel a thing, but the TSA officer can get all up in there? The officer can feel it ... and perhaps that’s even more invasive (or inappropriate)? All while also collecting a 3D rendering of your body.” (The documents say the system limits the display of sensitive parts of a person’s body, which I explain more below.)

[Image: A screenshot from the patent application that explains how a “Haptic Feedback Algorithm” would map a person's body]

There are some pretty wacky graphics in the patent filings, some of which show how it would be used to sort-of-virtually pat down someone’s chest and groin (or “belt-buckle”/“private body zone,” according to the patent). One of the patents notes that “embodiments improve the passenger’s experience, because they reduce or eliminate physical contacts with the passenger.” It also claims that only the goggles user will be able to see the image being produced and that only limited parts of a person’s body will be shown “in sensitive areas of the body, instead of the whole body image, to further maintain the passenger’s privacy.” It says that the system as designed “creates a unique biometric token that corresponds to the passenger.”

A separate patent for the haptic feedback system part of this shows diagrams of what the haptic glove system might look like and notes all sorts of potential sensors that could be used, from cameras and LiDAR to one that “involves turning ultrasound into virtual touch.” It adds that the haptic feedback sensor can “detect the contour of a target (a person and/or an object) at a distance, optionally penetrating through clothing, to produce sensor data.”

[Image: Diagram of a smiling man wearing a haptic feedback glove]
[Image: A drawing of the haptic feedback glove]

DHS has been obsessed with augmented reality, virtual reality, and AI for quite some time. Researchers at San Diego State University, for example, proposed an AR system that would help DHS “see” terrorists at the border using HoloLens headsets in some vague, nonspecific way. Customs and Border Protection has proposed “testing an augmented reality headset with glassware that allows the wearer to view and examine a projected 3D image of an object” to try to identify counterfeit products.

DHS acknowledged a request for comment but did not provide one in time for publication.


From 404 Media via this RSS feed

38
 
 

Apple Gave Governments Data on Thousands of Push Notifications

Apple provided governments around the world with data related to thousands of push notifications sent to its devices, which can identify a target’s specific device or in some cases include unencrypted content like the actual text displayed in the notification, according to data published by Apple. In one case, for which Apple did not ultimately provide data, Israel demanded data related to nearly 700 push notifications as part of a single request.

The data for the first time puts a concrete figure on how many requests governments around the world are making, and sometimes receiving, for push notification data from Apple.

The practice first came to light in 2023 when Senator Ron Wyden sent a letter to the U.S. Department of Justice revealing the practice, which also applied to Google. As the letter said, “the data these two companies receive includes metadata, detailing which app received a notification and when, as well as the phone and associated Apple or Google account to which that notification was intended to be delivered. In certain instances, they might also receive unencrypted content, which could range from backend directives for the app to the actual text displayed to a user in an app notification.”
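The categories the letter lists map onto a simple record shape. Here is a sketch with invented field names (the letter describes the data categories, not an actual schema):

```python
# Sketch of a push notification record per the categories in Wyden's letter.
# Field names are invented for illustration; the letter lists categories,
# not a schema.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class PushNotificationRecord:
    app: str                       # which app received the notification
    delivered_at: datetime         # when it was delivered
    device: str                    # the phone it was destined for
    account: str                   # associated Apple or Google account
    content: Optional[str] = None  # sometimes unencrypted: backend directives
                                   # or the actual text displayed to the user
```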


From 404 Media via this RSS feed

39
 
 

Why Do Christians Love AI Slop?

A crowd of people dressed in rags stare up at a tower so tall it reaches into the heavens. Fire rains down from the sky on to a burning city. A giant in armor looms over a young warrior. An ocean splits as throngs of people walk into it. Each shot only lasts a couple of seconds, and in that short time they might look like they were taken from a blockbuster fantasy movie, but look closely and you’ll notice that each carries all the hallmarks of AI-generated slop: the too smooth faces, the impossible physics, subtle deformations, and a generic aesthetic that’s hard to avoid when every pixel is created by remixing billions of images and videos in training data that was scraped from the internet.

“Every story. Every miracle. Every word,” the text flashes dramatically on screen before cutting to silence and the image of Jesus on the cross. With 1.7 million views, this video, titled “What if The Bible had a movie trailer…?”, is the most popular on The AI Bible YouTube channel, which has more than 270,000 subscribers, and it perfectly encapsulates what the channel offers: short, AI-generated videos that look very much like the kind of AI slop we have covered at 404 Media before. Another YouTube channel of AI-generated Bible content, Deep Bible Stories, has 435,000 subscribers, and is the 73rd most popular podcast on the platform according to YouTube’s own ranking. This past week there was also a viral trend of people using Google’s new AI video generator, Veo 3, to create influencer-style social media videos of biblical stories. Jesus-themed content was also some of the earliest and most viral AI-generated media we’ve seen on Facebook, starting with AI-generated images of Jesus appearing on the beach and escalating to increasingly ridiculous images, like shrimp Jesus.

But unlike the AI slop on Facebook, which we revealed is made mostly in India and Vietnam for a Western audience by people pragmatically hacking Facebook’s algorithms in order to make a living, The AI Bible videos are made by Christians, for Christians, and judging by the YouTube comments, viewers unanimously love them.

“This video truly reminded me that prayer is powerful even in silence. Thank you for encouraging us to lean into God’s strength,” one commenter wrote. “May every person who watches this receive quiet healing, and may peace visit their heart in unexpected ways.”

“Thank you for sharing God’s Word so beautifully,” another commenter wrote. “Your channel is a beacon of light in a world that needs it.”

I first learned about the videos and how well they were received by a Christian audience from self-described “AI filmmaker” PJ Accetturo, who noted on X that there’s a “massive gap in the market: AI Bible story films. Demand is huge. Supply is almost zero. Audiences aren’t picky about fidelity—they just want more.” Accetturo also said he’s working on his own AI-generated Bible video for a different publisher about the story of Jonah.

Unlike most of the AI slop we’ve reported on so far, the AI Bible channel is the product of a well-established company in Christian media, Pray.com, which claims to make “the world's #1 app for faith and prayer.”

“The AI Bible is a revolutionary platform that uses cutting-edge generative AI to transform timeless biblical stories into immersive, hyper-realistic experiences,” its site explains. “Whether you’re exploring your faith, seeking inspiration, or simply curious, The AI Bible offers a fresh perspective that bridges ancient truths with modern creativity.”

I went searching for Christian commentary about generative AI to see whether Pray.com’s full embrace of this new and highly controversial technology was unique among faith-based organizations, and was surprised to discover the opposite. I found op-ed after op-ed and commentary from pastors about how AI was a great opportunity Christians needed to embrace.

Corrina Laughlin, an assistant professor at Loyola Marymount University and the author of Redeem All: How Digital Life Is Changing Evangelical Culture, a book about the intersection of American evangelicalism and tech innovation, told me she was not surprised.

“It's not surprising to me to see Christians producing tons of content using AI because the idea is that God gave them this technology—that’s something I heard over and over again [from Christians]—and they have to use it for him and for his glory,” she said.

Laughlin also told me she wasn’t surprised that some Christians commented that they love the low-quality AI-generated videos from The AI Bible, unlike other audiences, such as Star Wars fans, who recently and passionately rejected an AI-generated proof-of-concept short film.

“The metrics for success are totally different,” she said. “This isn't necessarily about creativity. It's about spreading the word, and the more you can do that, the kind of acceleration that AI offers, the more you are doing God's work.”

Laughlin said that the Christian early adoption of new technologies and media goes back 100 years. Christian media flourished on the radio, then turned to televangelism, and similarly made the transition to online media, with an entire world of religious influencers, sites, and apps.

“The fear among Christians is that if they don't immediately jump onto a technology they're going to be left behind, and they're going to start losing people,” Laughlin said. The thinking is that if Christians are “not high tech in a high tech country where that's what's really grabbing people's attention, then they lose the war for attention to the other side, and losing the war for attention to the other side has really drastic spiritual consequences if you think of it in that frame,” she said.

Laughlin said that, especially among evangelical Christians, there’s a willingness to adopt new technologies that veers into boosterism. She said she saw Christians similarly try to jump on the metaverse hype train back when Silicon Valley insisted that virtual reality was the future, with many Christians asking how they’re going to build a Metaverse church because that’s where they thought people were going to be.

I asked Laughlin why it seems like secular and religious positions on new technologies seemed to have flipped. When I was growing up, it seemed like religious organizations were very worried that video games, for example, were corrupting young souls and turning them against God, especially when they overlapped with Satanic Panics around games like Doom or Diablo. When it comes to AI, it seems like it’s mostly secular culture—academics, artists, and other creatives—who shun generative AI for exploiting human labor and the creative spirit. In fact, many AI accelerationists accuse any critics of the technology or a desire to regulate it as a kind of religious moral panic. Christians, on the other hand, see AI as part of the inevitable march of technological progress, and they want to be a part of it.

“It’s like the famous Marshall McLuhan quote, ‘the medium is the message,’ right? If they’re getting out there in the message of the time, that means the message is still fresh. Christians are still relevant in the AI age, and they're doing it and like that in itself is all that matters,” Laughlin said. “Even if it's clearly something that anybody could rightfully sneer at if you had any sense of what makes good or bad media aesthetics.”


From 404 Media via this RSS feed

40
 
 
The IRS Tax Filing Software TurboTax Is Trying to Kill Just Got Open Sourced

The IRS open sourced much of its incredibly popular Direct File software even as the future of the free tax filing program is at risk of being killed by Intuit’s lobbyists and Donald Trump’s megabill. Meanwhile, several top developers who worked on the software have left the government and joined a project to explore the “future of tax filing” in the private sector.

Direct File is a piece of software created by developers at the US Digital Service and 18F, the former of which became DOGE and is now unrecognizable, and the latter of which was killed by DOGE. Direct File has been called a “free, easy, and trustworthy” piece of software that made tax filing “more efficient.” About 300,000 people used it last year as part of a limited pilot program, and those who did gave it incredibly positive reviews, according to reporting by Federal News Network.

But because it is free and because it is an example of government working, Direct File and the IRS’s Free File program more broadly have been the subject of years of lobbying efforts by financial technology giants like Intuit, which makes TurboTax. DOGE sought to kill Direct File, and currently, there is language in Trump’s massive budget reconciliation bill that would kill Direct File. Experts say that “ending [the] Direct File program is a gift to the tax-prep industry that will cost taxpayers time and money.”

That means it’s quite big news that the IRS released most of the code that runs Direct File on Github last week. And, separately, three people who worked on it—Chris Given, Jen Thomas, and Merici Vinton—have left government to join the Economic Security Project’s Future of Tax Filing Fellowship, where they will research ways to make filing taxes easier, cheaper, and more straightforward. They will be joined by Gabriel Zucker, who worked on Direct File as part of Code for America.


From 404 Media via this RSS feed

41
 
 

Podcast: Anti-Porn Laws' Real Target Is Free Speech

We start this week with Sam's dive into a looming piece of anti-porn legislation, prudish algorithms, and eggs. After the break, Matthew tells us about the open source software that powered Ukraine's drone attack against Russia. In the subscribers-only section, Emanuel explains how even pro-AI subreddits are dealing with people having AI delusions.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor with a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version. It will also be in the show notes in your podcast player.

The Egg Yolk Principle: Human Sexuality Will Always Outsmart Prudish Algorithms and Hateful Politicians
Ukraine's Massive Drone Attack Was Powered by Open Source Software
Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer from AI Delusions


From 404 Media via this RSS feed

42
 
 

The Egg Yolk Principle: Human Sexuality Will Always Outsmart Prudish Algorithms and Hateful Politicians

A metal fork drags its four prongs back and forth across the yolk of an over-easy egg. The lightly peppered fried whites that skin across the runny yolk give a little, straining under the weight of the prongs. The yolk bulges and puckers, and finally the fork flips to its sharp points, bears down on the yolk and rips it open, revealing the thick, bright cadmium-yellow liquid underneath. The fork dips into the yolk and rubs the viscous ovum all over the crispy white edges, smearing it around slowly, coating the prongs. An R&B track plays.

[TikTok embed from @popping_yolks: #popping_yolks #eggs #food #yummy #watchmepop #foodporn #pop #poppingyolk @Foodporn. ♬ Chill Day - LAKEY INSPIRED]

People in the comments on this video and others on the Popping Yolks TikTok account seem to be a mix of pleased and disgusted. “Bro seriously Edged till the very last moment,” one person commented. “It’s what we do,” the account owner replied. “Not the eggsum 😭” someone else commented on another popping video.

The sentiment in the comments on most content that floats to the top of my algorithms these days—whether it’s in the For You Page on TikTok, the infamously malleable Reels algo on Instagram, X’s obsession with sex-stunt discourse that makes it into prudish New York Times opinion essays—is confusion: How did I get here? Why does my FYP think I want to see egg edging? Why is everything slightly, uncomfortably, sexual?

If right-wing leadership in this country has its way, the person running this account could be put in prison for disseminating content that's “intended to arouse.” There’s a nationwide effort happening right now to end pornography, and call everything “pornographic” at the same time.

Much like anti-abortion laws don’t end abortion, and the so-called war on drugs didn’t “win” over drugs, anti-porn laws don’t end the adult industry. They only serve to shift power from people—sex workers, adult content creators, consumers of porn and anyone who wants to access sexual speech online without overly-burdensome barriers—to politicians like Senator Mike Lee, who is currently pushing to criminalize porn at the federal level.

Everything is sexually suggestive now because on most platforms, for years, being sexually overt meant risking a ban. Not coincidentally, being horny about everything is also one of the few ways to get engagement on those same platforms. At the same time, legislators are trying to make everything “pornographic” illegal or impossible to make or consume.

Screenshot via Instagram

The Interstate Obscenity Definition Act (IODA), introduced by Senator Lee and Illinois Republican Rep. Mary Miller last month, aims to change the Supreme Court’s 1973 “Miller Test” for determining what qualifies as obscene. The Miller Test assesses material with three criteria: Would the average person, using contemporary standards, think it appeals to prurient interests? Does the material depict, in a “patently offensive” way, sexual conduct? And does it lack “serious literary, artistic, political, or scientific” value? If you’re thinking this all sounds awfully subjective for a legal standard, it is.

But Lee, whose state of Utah has been pushing the pseudoscientific narrative that porn constitutes a public health crisis for years, wants to redefine obscenity. Current legal definitions of obscenity include “intent” of the material, which prohibits obscene material “for the purposes of abusing, threatening, or harassing a person.” Lee’s IODA would remove the intent stipulation entirely, leaving anyone sharing or posting content that’s “intended to arouse” vulnerable to federal prosecution.

💡Do you know anything else about how platforms, companies, or state legislators are handling these laws? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

IODA also makes an attempt to change the meaning of “contemporary community standards,” a key part of obscenity law in the U.S. “Instead of relying on contemporary community standards to determine if a work is patently offensive, the IODA creates a new definition of obscenity which considers whether the material involves an ‘objective intent to arouse, titillate, or gratify the sexual desires of a person,’” First Amendment attorney Lawrence Walters told me. “This would significantly broaden the scope of erotic materials that are subject to prosecution as obscene. Prosecutors have stumbled, in the past, with establishing that a work is patently offensive based on community standards. The tolerance for adult materials in any particular community can be quite difficult to pin down, creating roadblocks to successful obscenity prosecutions. Accordingly, Sen. Lee’s bill seeks to prohibit more works as obscene and makes it easier for the government to criminalize protected speech.”

All online adult content creators—OnlyFans models, porn performers working for major studios, indie porn makers, people doing horny commissions on Patreon, all of romance “BookTok,” maybe the entire romance book genre for that matter—could be criminals under this law. Would the egg yolk popper be a criminal, too? What about this guy who diddles mushrooms on TikTok? What about these women spitting in cups? Or the Donut Daddy, who fingers, rips and slaps ingredients while making cooking content? Is Sydney Sweeney going to jail for intending to arouse fans with bathwater-themed soap?

What Lee and others who support these kinds of bills are attempting to construct is a legal precedent where someone stroking egg yolks—or whispering into a microphone, or flicking a wet jelly fungus—should fear not just for their accounts, but for their freedom.

Some adult content creators are pushing back with the skills they have. Porn performers Damien and Diana Soft made a montage video of them having sex while reciting the contents of IODA.

“The effect Lee’s bill would have on porn producers and consumers is obvious, but it’s the greater implications that scare us most,” they told me in an email. “This bill would hurt every American by infringing on their freedoms and putting power into the hands of politicians. We don’t want this government—or any well-meaning government in the future—to have the ability to find broader and broader definitions of ‘obscene.’ Today they use the word to define porn. Tomorrow it could define the actions of peaceful protestors.”

The law has defined obscenity narrowly for decades. “The current test for obscenity requires, for example, that the thing that's depicted has to be patently offensive,” Becca Branum, deputy director of the Center for Democracy and Technology’s Free Expression Project, told me in a call. “By defining it that narrowly, a lot of commercial pornography and all sorts of stuff is still protected by the First Amendment, because it's not patently offensive. This bill would replace that standard with any representation of ‘normal or perverted sexual acts’ with the objective intent to arouse, titillate or gratify. And so that includes things like simulated depictions of sex, which are a huge part of all media. Sex sells, and this could sweep in any romcom with a sex scene, no matter how tame, just because it includes a representation of a sex act. It’s just an enormous expansion of what has been legally understood to be obscenity.”

IODA is not law yet; it’s still a bill that has to make its way through the House and Senate before it winds up on the president’s desk, and Lee has failed to get versions of the IODA through in the past. But as I wrote at the time, we’re in a different political landscape. Project 2025 leadership is at the helm, and that manifesto dictates an end to all porn and prison for pornographers.

All of the legal experts and free speech advocates I spoke to said IODA is plainly unconstitutional. But it’s still worth taking seriously, as it’s illustrative of something much bigger happening in politics and society.

“There are people who would like to get all sexual material offline,” David Greene, senior staff attorney at the Electronic Frontier Foundation, told me. There are people who want to see all sexual material completely eradicated from public life, but “offline is [an] achievable target,” he said. “So in some ways it's laughable, but if it does gain momentum, this is really, really dangerous.”

Lee’s bill might seem to have an ice cube’s chance in hell for becoming law, but weirder things are happening. Twenty-two states in the U.S. already have laws in place that restrict adults’ access to pornography, requiring government-issued ID to view adult content. Fifteen more states have age verification bills pending. These bills share similar language to define “harmful material:”

“material that exploits, is devoted to, or principally consists of descriptions of actual, simulated, or animated display or depiction of any of the following, in a manner patently offensive with respect to minors: (i) pubic hair, anus, vulva, genitals, or nipple of the female breast; (ii) touching, caressing, or fondling of nipples, breasts, buttocks, anuses, or genitals; or (iii) sexual intercourse, masturbation, sodomy, bestiality, oral copulation, flagellation, excretory functions, exhibitions, or any other sexual act.”

Before the first age verification bills were a glimmer in Louisiana legislators’ eyes three years ago, sexuality was always overpoliced online. Before this, it was (and still is) SESTA/FOSTA, which amended Section 230 to make platforms liable for what users do on them when activity could be construed as “sex trafficking,” catching massive swaths of content, and sometimes whole websites, in its net if users discussed meeting in exchange for pay, real-life interactions, or attempts to screen clients for in-person encounters, and imposing burdensome fines if they didn’t comply. Sex education bore a lot of the brunt of this legislation, as did sex workers who used listing sites and places like Craigslist to make sure clientele was safe to meet IRL. The effects of SESTA/FOSTA were swift and brutal, and they’re ongoing.

We also see these effects in the obfuscation of sexual words and terms with algo-friendly shorthand, where people use “seggs” or “grape” instead of “sex” or “rape” to evade removal by hostile platforms. And maybe, after years of stock imagery of fingering grapefruits and wrapping red nails around cucumbers because Facebook couldn’t handle a sideboob, unironically horny fuckable-food content is a natural evolution.

Now, we have the Take It Down Act, which experts expect will cause a similar fallout: platforms that can’t comply with extremely short deadlines on strict moderation expectations could opt to ban NSFW content altogether.

Before either of these pieces of legislation, it was (and still is!) banks. Financial institutions have long been the arbiters of morality in this country and others. And what credit card processors say goes, even if what they’re taking offense from is perfectly legal. Banks are the extra-legal arm of the right.

For years, I wrote a column for Motherboard called “Rule 34,” predicated on the “internet rule” that if you can think of it, someone has made porn of it. The thesis, throughout all of the communities and fetishes I examined—blueberry inflationists, slime girls, self-suckers, airplane fuckers—was that it’s almost impossible to predict what people get off on. A domino falls—playing in the pool as a 10-year-old, for instance—and the next thing you know you’re an adult hooking an air compressor up to a fuckable pool toy after work. You will never, ever put human sexuality in a box. The idea that someone like Mike Lee wants to try is not only absurd, it’s scary: a ruse set up for social control.

Much of this tension between laws, banks, and people plays out very obviously in platforms’ terms of use. Take a recent case: In late 2023, Patreon updated its terms of use for “sexually gratifying works.” In these guidelines, the platform twists itself into Gordian knots trying to define what is and isn’t permitted. For example, “sexual activity between a human and any animal that exists in the real world” is not permitted. Does this mean sex between humans and Bigfoot is allowed? What about depictions of sex with extinct animals, like passenger pigeons or dodos? Also not permitted: “Mouths, sex toys, or related instruments being used for the stimulation of certain body parts such as genitals, anus, breast or nipple (as opposed to hip, arm, or armpit which would be permitted).” It seems armpit-licking is a-ok on Patreon.

In September 2024, Patreon made changes to the guidelines again, writing in an update that it “added nuance under ‘Bestiality’ to clarify the circumstances in which it is permitted for human characters to have sexual interactions with fictional mythological creatures.” The rules currently state: “Sexual interaction between a human and a fictional mythological creature that is more humanistic than animal (i.e. anthropomorphic, bipedal, and/or sapient).” As preeminent poster Merritt K wrote about the changes, “if i'm reading this correct it's ok to write a story where a werewolf fucks a werewolf but not where a werewolf fucks a dracula.”

The platform also said in an announcement alongside the bestiality stuff: “We removed ‘Game of Thrones’ as an example under the ‘Incest’ section, to avoid confusion.” All of it almost makes you pity the mods tasked with untangling the knots, pressed from above by managers, shareholders, and CEOs to make the platform suitably safe and sanitary for credit card processors, and from below by users who want to sell their slashfic fanart of Lannister inter-familial romance undisturbed.

Patreon’s changes to its terms also threw the “adult baby/diaper lover” community into chaos, in a perfect illustration of my point: A lot of participants inside that fandom insist it’s not sexual. A lot of people outside find it obscene. Who’s correct?

As part of answering that question for this article, I tried to find examples of content that’s arousing but not actually pornographic, like the egg yolks. This, as it happens, is a very “I know it when I see it” type of thing. Foot pottery? Obviously intended to arouse, but not explicitly pornographic. This account of AI-generated ripped women? Yep, and there’s a link to “18+” content in the account’s bio. Farting and spitting are too obviously kinky to successfully toe the line, but a woman chugging milk as part of a lactose intolerance experiment then recording herself suffering (including closeups of her face while farting) fits the bill, according to my entirely arbitrary terms. Confirming my not-porn-but-still-horny assessment, the original video—made by user toot_queen on TikTok, was reposted to Instagram by the lactose supplement company Dairy Joy. Fleece straightjackets, and especially tickle sessions in them, are too recognizably BDSM. This guy making biscuits on a blankie? I guess, man. Context matters: Eating cereal out of a woman’s armpit is way too literal to my eye, but it’d apparently fly on Patreon no problem.

[Embedded TikTok from @toot_queen, original sound]

Obfuscating fetish and kink for the appeasement of payment processors, platforms and Republican senators has a history. As Jenny Sundén, a professor of gender studies at Södertörn University in Sweden, points out in her 2022 paper, philosopher Édouard Glissant presented the concept of “opacity” as a tactic of the oppressed, and a human right. She applied this to kink: “Opacity implies a lack of clarity; something opaque may be both difficult to see clearly as well as to understand,” Sundén wrote. “Kink communities exist to a large extent in such spaces of dimness, darkness and incomprehensibility, partly removed from public view and, importantly, from public understanding. Kink certainly enters the bright daylight of public visibility in some ways, most obviously through popular culture. And yet, there is something utterly incomprehensible about how desire works, something which tends to become heightened in the realm of kink as non-practitioners may struggle to ‘understand.’”

"We’ve seen similar attempts to redefine obscenity that haven’t gone very far. However, we’re living in an era when censorship of sexual content is broadly censored online, and the promises written in Project 2025 are coming true"

Opacity, she suggested, “works to overcome the risk of reducing, normalizing and assimilating sexual deviance by comprehension, and instead open up for new modes of obscure and pleasurable sexual expressions and transgressions on social media platforms.”

As the internet and society at large becomes more hostile to sex, actual sexual content has become more opaque. And because sex leads the way in engagement, monetization, and innovation on the internet, everything else has copied it, pretending it’s trying to evade detection even when there’s nothing to detect, like the fork and fried egg.

Eroding longstanding definitions of obscenity, and the precedent around intent and community standards, is all part of a journey back toward a world where the only sexuality one can legally experience is between legally married cisgender heterosexuals. We see it happen with book bans that call any mention of gender or sexuality “pornographic,” and with attacks on trans rights that label people’s very existence as porn.

"The IODA would be the first step toward an outright federal ban on pornography and an insult to existing case law. We’ve seen similar attempts to redefine obscenity that haven’t gone very far. However, we’re living in an era when censorship of sexual content is broadly censored online, and the promises written in Project 2025 are coming true,” Ricci Levy, president of the Woodhull Freedom Foundation, told me. “Banning pornography may not concern those who object to its existence, but any attempt by the government to ban and censor protected speech is a threat to the First Amendment rights we all treasure."

And as we saw with FOSTA/SESTA, and with the age verification lawsuits cropping up around the country recently—and what we’ll likely see happen now that the Take It Down Act has passed with extreme expectations placed on website administrators to remove anything that could infringe on nonconsensual content laws—platforms might not even bother to try to deal with the burden of keeping NSFW users happy anymore.

Even if IODA doesn't pass, and even if no one is ever prosecuted under it, “the damage is done, both in his introduction and sort of creating that persistent drum beat of attempts to limit people's speech,” Branum said.

But if it or a bill like it did pass in the future, prosecutors—in this scenario, empowered to dictate people’s speech and sexual interests—wouldn't even need to bring a case against someone for it to have real effects. “The more damaging and immediate effect would be on the chilling effect it'll have on everyone's speech in the meantime,” Branum said. “Even if I'm not prosecuted under the obscenity statute, if I know that I could be for sharing something as benign as a recording from my bachelorette party, I'm going to curtail my speech. I'm going to change my behavior to avoid attracting the government's ire. Even if they never brought a prosecution under this law, the damage would already be done.”


From 404 Media via this RSS feed

43
 
 

Ukraine's Massive Drone Attack Was Powered by Open Source Software

Open source software used by hobbyist drones powered an attack that wiped out a third of Russia’s strategic long-range bombers on Sunday afternoon, in one of the most daring and technically coordinated attacks in the war.

In broad daylight on Sunday, explosions rocked air bases in Belaya, Olenya, and Ivanovo in Russia, which are hundreds of miles from Ukraine. The Security Service of Ukraine’s (SBU) Operation Spider Web was a coordinated assault on Russian targets that the agency claimed was more than a year in the making, carried out using a nearly 20-year-old piece of open source drone autopilot software called ArduPilot.

ArduPilot’s original creators were in awe of the attack. “That's ArduPilot, launched from my basement 18 years ago. Crazy,” Chris Anderson said in a comment on LinkedIn below footage of the attack.

On X, he tagged the co-creators Jordi Muñoz and Jason Short in a post about the attack. “Not in a million years would I have predicted this outcome. I just wanted to make flying robots,” Short said in a reply to Anderson. “Ardupilot powered drones just took out half the Russian strategic bomber fleet.”

ArduPilot is an open source software system that takes its name from the Arduino hardware systems it was originally designed to work with. It began in 2007 when Anderson launched the website DIYdrones.com and cobbled together a UAV autopilot system out of a Lego Mindstorms set (Anderson is also the former editor-in-chief of WIRED.)

DIYdrones became a gathering place for UAV enthusiasts and two years after Anderson’s Lego UAV took flight, a drone pilot named Jordi Muñoz won an autonomous vehicle competition with a small helicopter that flew on autopilot. Muñoz and Anderson founded 3DR, an early consumer drone company, and released the earliest versions of the ArduPilot software in 2009.

ArduPilot evolved over the next decade, refined by Muñoz, Anderson, Jason Short, and a world of hobbyist and professional drone pilots. Like many pieces of open-source software, it is free to use and can be modified for all sorts of purposes. In this case, the software assisted in one of the most complex series of small drone strikes in the history of the world.

“ArduPilot is a trusted, versatile, and open source autopilot system supporting many vehicle types: multi-copters, traditional helicopters, fixed wing aircraft, boats, submarines, rovers and more,” the project’s website reads. “The source code is developed by a large community of professionals and enthusiasts. New developers are always welcome!” The project’s website notes that “ArduPilot enables the creation and use of trusted, autonomous, unmanned vehicle systems for the peaceful benefit of all” and that some of its use cases are “search and rescue, submersible ROV, 3D mapping, first person view [flying], and autonomous mowers and tractors.” It does not highlight that it has been repurposed by Ukraine for war. Website analytics from 2023 showed that the project was very popular in both Ukraine and Russia, however.

The software can connect to a DIY drone, pull up a GPS-connected map of the area it’s in, and tell the drone to take off, fly around, and land. A drone pilot can use ArduPilot to create a series of waypoints that a drone will fly along, charting its path as best it can. But even when it is not flying on autopilot (which requires GPS; Russia jams GPS and runs its own satellite navigation system, GLONASS), it has assistive features that are useful.

ArduPilot can handle tasks like stabilizing a drone in the air while the pilot focuses on moving to their next objective. Pilots can switch them into loitering mode, for example, if they need to step away or perform another task, and it has failsafe modes that keep a drone aloft if signal is lost.
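ArduPilot speaks the open MAVLink protocol, so those assistive modes can be commanded from a short script rather than a joystick. Below is a minimal sketch using pymavlink, the Python MAVLink library commonly paired with ArduPilot, pointed at a local SITL simulator; the connection string and the 30-meter takeoff altitude are illustrative assumptions, not details from the operation.

```python
# Minimal sketch: put an ArduPilot copter (here, a local SITL simulator)
# into guided flight, take off, then hand control to LOITER mode.
from pymavlink import mavutil

# Connect to the autopilot; udp:127.0.0.1:14550 is SITL's usual output.
master = mavutil.mavlink_connection("udp:127.0.0.1:14550")
master.wait_heartbeat()  # block until the autopilot announces itself

def set_mode(name: str) -> None:
    """Switch flight modes by name, e.g. GUIDED, LOITER, RTL."""
    mode_id = master.mode_mapping()[name]
    master.mav.set_mode_send(
        master.target_system,
        mavutil.mavlink.MAV_MODE_FLAG_CUSTOM_MODE_ENABLED,
        mode_id,
    )

set_mode("GUIDED")          # accept scripted commands
master.arducopter_arm()     # spin up the motors
master.motors_armed_wait()  # wait for arming confirmation

# Command a takeoff to 30 m (illustrative altitude; param 7 is altitude).
master.mav.command_long_send(
    master.target_system, master.target_component,
    mavutil.mavlink.MAV_CMD_NAV_TAKEOFF,
    0, 0, 0, 0, 0, 0, 0, 30,
)

# LOITER holds position and altitude with no further pilot input:
# the assistive behavior described above.
set_mode("LOITER")
```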

Wow. Ardupilot powered drones just took out half the Russian strategic bomber fleet. https://t.co/5juA1UXrv4

— Jason Short (@jason4short) June 1, 2025

According to Ukrainian president Volodymyr Zelensky, the preparation for the attack took a year and a half. He also claimed that Ukraine’s office for the operation in Russia was across the street from a Russian intelligence headquarters.

“In total, 117 drones were used in the operation--with a corresponding number of drone operators involved,” he said in a post about the attack. “34 percent of the strategic cruise missile carriers stationed at air bases were hit. Our personnel operated across multiple Russian regions – in three different time zones. And the people who assisted us were withdrawn from Russian territory before the operation, they are now safe.”

SBU was quick to claim responsibility for the attack and then explain how it accomplished it. Over the past 18 months, it snuck sheds and trucks filled with explosive-laden quadcopters into Russia in shipping containers. The sheds had false roofs lined with quadcopters. When signaled, the trucks and roofs opened and the drones took flight. Multiple video clips shared across the internet showed that the flights were conducted using ArduPilot.

Ukraine’s raid on Russia may seem like a hinge point in the history of modern war: a moment when the small quadcopter drone proved its worth. The truth is that Operation Spider Web was conducted by a military that’s been using DIY and consumer-level drones to fight Russia for a decade. Both sides have proved capable of destroying expensive weapons systems with simple drones. Now Ukraine has proved it can use all that knowledge as part of a logistically complicated attack on Russia’s strategic military assets deep within its homeland.

ArduPilot’s current devs didn’t respond to 404 Media’s request for comment, but one of them talked about the attack on /r/ArduPilot. “ArduPilot project is aware of those usage not the first time, probably not the last,” the developer said. “We won't discuss or debate our stance, we [focus] on giving you the best tools to move your [vehicles] safely. That is our mission. The rest is for UN or any organisms that can deal with ethical questions.”

The developer also linked to ArduPilot’s code of conduct. The code of conduct contains a pledge from developers that states they will try to “not knowingly support or facilitate the weaponization of systems using ArduPilot.” But ArduPilot isn’t a product for sale and the code of conduct isn’t an end user license agreement. It’s open source software and anyone can download it, tweak it, and use it however they wish, and Ukraine’s drone pilots seem to have found it to be very useful.

For a few years, massive industrial hexacopter and quadcopter drones the Russians call Baba Yaga have terrorized their soldiers and armor. The Russians have downed a few of these drones and discovered they run off a Starlink terminal attached to the top. In a Baba Yaga seizure reported in February on Russian Telegram channels, soldiers said they found traces of ArduPilot in the drone’s hardware.

The drones used in Sunday’s attack didn’t run on Starlinks and were much smaller than the Baba Yaga. Early analysis from Russian military bloggers on Telegram indicates that the drones communicated back to their Ukrainian handlers via Russian mobile networks using a simple modem that’s connected to a Raspberry Pi-style board.

This method hints at another reason Ukraine might be using ArduPilot for this kind of operation: latency. A basic PC on a quadcopter in Russia that’s sending a signal back and forth to an operator in Ukraine isn’t going to have a low ping. Latency will be an issue and ArduPilot can handle basic loitering and stabilization as the pilot’s signal moves across vast distances on a spotty network.
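That division of labor is configurable through ordinary ArduPilot parameters. As a hedged sketch, the snippet below sets ArduPilot Copter’s documented FS_GCS_ENABLE parameter, which governs what the vehicle does when the ground-station link drops, over MAVLink with pymavlink; the connection string is assumed, and per the ArduPilot documentation a value of 1 means return-to-launch on link loss.

```python
# Sketch: configure ArduPilot Copter's ground-station failsafe over MAVLink.
# FS_GCS_ENABLE is a documented ArduPilot parameter; the connection string
# and the chosen value here are illustrative.
from pymavlink import mavutil

master = mavutil.mavlink_connection("udp:127.0.0.1:14550")
master.wait_heartbeat()

master.mav.param_set_send(
    master.target_system, master.target_component,
    b"FS_GCS_ENABLE",                       # behavior when the GCS link drops
    1,                                      # 1 = return-to-launch, per the docs
    mavutil.mavlink.MAV_PARAM_TYPE_REAL32,  # ArduPilot params travel as floats
)
```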

The use of free, open source software to pull off a military mission like this also highlights the asymmetric nature of the Russia-Ukraine war. Cheap quadcopters and DIY drones running completely free software are regularly destroying tanks and bombers that cost millions of dollars and can’t be easily replaced.

Ukraine’s success with drones has rejuvenated the market for smaller drones in the United States. The American company AeroVironment produces the Switchblade 300 and 600. Switchblades are a kind of loitering munition that can accomplish the mission of a quadcopter, but at tens of thousands of dollars more per drone than what Ukraine paid for Operation Spider Web.

Palmer Luckey’s Anduril is also selling quadcopter drones that run on autopilot. He’s even got a quadcopter, called the Anvil, that runs on proprietary software packages. While we don’t know the per-unit cost of the system, Anduril did sell the U.S. Marines a $200 million system that includes the Anvil and its suite of software in 2024.

In modern war, the battlefield belongs to those who can innovate while keeping costs down. “I think the single biggest innovation in drone-use warfare is the scale allowed by cheap drones with good-enough software,” Kelsey Atherton, a drone expert and the chief editor at the Center for International Policy, told 404 Media.

Atherton said that cheap drones and open source software offer resilience through redundancy. The cheaper something is, the less it hurts if it's lost or destroyed. “Open source code is likely both cheaper and more reliable, as bugs can be found and fixed in development and deployment,” he said. “At a minimum if a contractor sells a bespoke system you're stuck relying on them for verification of code or doing it in-house; if you're working open-source and the contractor balks at verifying code, you can bring someone else in to do it and it's not then a legal battle over proprietary code.”

He pointed to Luckey’s plans as a great way to make money. “Luckey is designing a profit system sold as an effective weapon that would lock Anduril into the closed defense ecosystem the way legacy players sell bespoke products.”

Atherton also stressed that Ukraine's success using ArduPilot and cheap drones is something that no fancy future weapons system could have defended against. Ukraine succeeded because it was able to place its weapons close to the enemy without the enemy realizing it. Those air bases had kept the same bombers in a line on the tarmac in the open for 30 years. Everyone knew where they were.

“The biggest fix would have been hangars with doors that close,” Atherton said. “It's an intelligence failure and a parking failure.”

Anderson, Short, and Muñoz did not respond to 404 Media’s request for comment.


From 404 Media via this RSS feed

44
 
 

🌘Subscribe to 404 Media to get The Abstract, our newsletter about the most exciting and mind-boggling science news and studies of the week.

The Entire Earth Was Mysteriously Shaking Every 90 Seconds. Now, Scientists Know Why

For nine days in September 2023, the world was rocked by mysterious seismic waves that were detected globally every 90 seconds. Earth trembled again the following month with an identical global signal, though it was shorter and less intense. Baffled by the anomalies, researchers dubbed the phenomenon an “Unidentified Seismic Object.”

Scientists have now confirmed that this literally Earth-shaking event was caused by two mega-tsunamis in Dickson Fjord, a narrow inlet in East Greenland, which were triggered by the effects of human-driven climate change, according to a study published on Tuesday in Nature Communications.

Previous research had suggested a link between the strange signals and massive landslides that occurred in the fjord on September 16 and October 11, 2023, but the new study is the first to directly spot the elusive standing waves, called “seiches,” that essentially rang the planet like a giant bell.

“These seiches were triggered by megatsunamis, themselves caused by enormous landslides plunging into the fjord,” said Thomas Monahan, a Schmidt AI in Science Fellow at the University of Oxford who led the study, in an email to 404 Media. The landslides were caused by a glacier that had been steadily thinning due to climate change, he said.

But how in the world can two tsunamis, even those of the mega variety, cause the whole Earth to shake every 90 seconds?

“What made this event uniquely powerful, and globally detectable, was the geometry of the fjord,” he continued. “A sharp bend near the fjord’s outlet effectively trapped the seiche, allowing it to reverberate for days. The repeated impacts of the water against the fjord walls acted like a hammer striking the Earth’s crust, creating long-period seismic waves that propagated around the globe. This unusual combination of scale, duration, and geometry made the seismic signal from these regional events strong enough to be detected worldwide.”
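The 90-second figure is consistent with textbook standing-wave physics. Merian’s formula gives the fundamental seiche period of a closed basin as T = 2L / √(gh), where L is the basin length and h the water depth. As a back-of-the-envelope check, not a reproduction of the study’s modeling, fjord-scale numbers land right around the observed period; the length and depth below are illustrative assumptions.

```python
# Rough sanity check with Merian's formula for the fundamental seiche
# period of a closed basin: T = 2L / sqrt(g * h).
# The basin length and depth are illustrative fjord-scale assumptions,
# not values taken from the study.
import math

g = 9.81    # gravitational acceleration, m/s^2
L = 3300.0  # effective basin length, m (assumed)
h = 540.0   # mean water depth, m (assumed)

T = 2 * L / math.sqrt(g * h)
print(f"fundamental seiche period: {T:.0f} s")  # ~91 s, near the observed 90 s
```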

It may sound strange that an event with such global impacts proved so difficult to spot in observational data. Indeed, a Danish military vessel was surveying the fjord during the September event and didn’t even notice the Earth-rocking waves in its depths. But the timing of the two landslides lined up perfectly with the signals, leading to studies suggesting a link.

Monahan was on vacation when he read about the studies, and he was immediately intrigued. He and his colleagues had been working with observations from the Surface Water and Ocean Topography (SWOT) satellite, a mission launched in 2022 that had just the right stuff to track down the mystery waves.

“SWOT is a game-changer—it provides high-resolution, two-dimensional measurements of sea surface height, even in narrow coastal and inland waters like fjords,” Monahan said. “When I read about the seiche theory, I realized we might actually have both the data and the tools needed to test it.”

“Finding the ‘seiche in the fjord’ was super exciting, but turned out to be the easy part—I knew where to look,” he added. “The real challenge was proving that what we saw was, in fact, a seiche and not something else.”

The team meticulously ruled out other possible oceanographic phenomena and homed in on the size and impact of the seiche. Their results suggest that the September seiche was initially 7.9 meters (26 feet) tall, and that it unleashed an enormous force of approximately 500 giganewtons against the wall of the fjord, sending ripples throughout the globe.

The landslides that set off all this noisy sloshing were let loose by the deterioration of an unnamed glacier near the fjord. Greenland’s ice sheets and glaciers are melting at an accelerated rate due to human-caused climate change, making the island the single biggest contributor to sea level rise worldwide.

“The progressive thinning of the glacier that led to this failure is almost certainly a consequence of anthropogenic climate change,” Monahan said. “Whether we’ll see more seismic signals like these is harder to say. The signals—and the seiches that produced them—were unusual, driven in part by the unique geometry of the fjord that allowed the standing waves to form and persist. In this sense, the seismic signals acted as a kind of canary in the coal mine, pointing to the occurrence of the tsunamis and the underlying glacier instability that caused them.”

“While these specific types of signals may remain rare, continued warming will likely increase the frequency of glacier-related landslides,” he said. “As these events become more common, especially in steep, ice-covered terrain, the risk of tsunamigenic landslides will likely grow.”

To that end, Monahan and his colleagues hope to continue developing SWOT as a keen eye-in-the-sky for elusive events such as seiches and rogue waves.

“This study highlights how climate change is unfolding rapidly in remote regions like the Arctic—areas that are difficult to monitor using conventional instruments such as tide gauges,” Monahan said. “Our findings show the potential of next-generation satellites, like SWOT, to fill these observational gaps.”

“Continued investment in satellite missions is essential for monitoring and responding to the impacts of climate change,” he concluded.

🌘Subscribe to 404 Media to get The Abstract, our newsletter about the most exciting and mind-boggling science news and studies of the week.


From 404 Media via this RSS feed

45
 
 

Teachers Are Not OK

Last month, I wrote an article about how schools were not prepared for ChatGPT and other generative AI tools, based on thousands of pages of public records I obtained from when ChatGPT was first released. As part of that article, I asked teachers to tell me how AI has changed how they teach.

The response from teachers and university professors was overwhelming. In my entire career, I’ve rarely gotten so many email responses to a single article, and I have never gotten so many thoughtful and comprehensive responses.

One thing is clear: teachers are not OK.

They describe trying to grade “hybrid essays half written by students and half written by robots,” trying to teach Spanish to kids who don’t know the meaning of the words they’re trying to teach them in English, and students who use AI in the middle of conversation. They describe spending hours grading papers that took their students seconds to generate: “I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student,” one teacher told me. “That sure feels like bullshit.”

💡Have you lost your job to an AI? Has AI radically changed how you work (whether you're a teacher or not)? I would love to hear from you. Using a non-work device, you can message me securely on Signal at jason.404. Otherwise, send me an email at jason@404media.co.

Below, I have compiled some of the responses I got. Some of the teachers were comfortable with their responses being used on the record along with their names. Others asked that I keep them anonymous because their school or school district forbids them from speaking to the press. The responses have been edited by 404 Media for length and clarity, but they are still really long. These are teachers, after all.

Robert W. Gehl, Ontario Research Chair of Digital Governance for Social Justice at York University in Toronto

Simply put, AI tools are ubiquitous. I am on academic honesty committees and the number of cases where students have admitted to using these tools to cheat on their work has exploded.

I think generative AI is incredibly destructive to our teaching of university students. We ask them to read, reflect upon, write about, and discuss ideas. That's all in service of our goal to help train them to be critical citizens. GenAI can simulate all of the steps: it can summarize readings, pull out key concepts, draft text, and even generate ideas for discussion. But that would be like going to the gym and asking a robot to lift weights for you.

"Honestly, if we ejected all the genAI tools into the sun, I would be quite pleased."

We need to rethink higher ed, grading, the whole thing. I think part of the problem is that we've been inconsistent in rules about genAI use. Some profs ban it altogether, while others attempt to carve out acceptable uses. The problem is the line between acceptable and unacceptable use. For example, some profs say students can use genAI for "idea generation" but then prohibit using it for writing text. Where's the line between those? In addition, universities are contracting with companies like Microsoft, Adobe, and Google for digital services, and those companies are constantly pushing their AI tools. So a student might hear "don't use generative AI" from a prof but then log on to the university's Microsoft suite, which then suggests using Copilot to sum up readings or help draft writing. It's inconsistent and confusing.

I've been working on ways to increase the amount of in-class discussion we do in classes. But that's tricky because it's hard to grade in-class discussions—it's much easier to manage digital files. Another option would be to do hand-written in-class essays, but I have a hard time asking that of students. I hardly write by hand anymore, so why would I demand they do so?

I am sick to my stomach as I write this because I've spent 20 years developing a pedagogy that's about wrestling with big ideas through writing and discussion, and that whole project has been evaporated by for-profit corporations who built their systems on stolen work. It's demoralizing.

It has made my job much, much harder. I do not allow genAI in my classes. However, because genAI is so good at producing plausible-sounding text, that ban puts me in a really awkward spot. If I want to enforce my ban, I would have to do hours of detective work (since there are no reliable ways to detect genAI use), call students into my office to confront them, fill out paperwork, and attend many disciplinary hearings. All of that work is done to ferret out cheating students, so we have less time to spend helping honest ones who are there to learn and grow. And I would only be able to find a small percentage of the cases, anyway.

Honestly, if we ejected all the genAI tools into the sun, I would be quite pleased.

Kaci Juge, high school English teacher

I personally haven't incorporated AI into my teaching yet. It has, however, added some stress to my workload as an English teacher. How do I remain ethical in creating policies? How do I begin to teach students how to use AI ethically? How do I even use it myself ethically considering the consequences of the energy it apparently takes? I understand that I absolutely have to come to terms with using it in order to remain sane in my profession at this point.

Ben Prytherch, Statistics professor

LLM use is rampant, but I don't think it's ubiquitous. While I can never know with certainty if someone used AI, it's pretty easy to tell when they didn't, unless they're devious enough to intentionally add in grammatical and spelling errors or awkward phrasings. There are plenty of students who don't use it, and plenty who do.

LLMs have changed how I give assignments, but I haven't adapted as quickly as I'd like and I know some students are able to cheat. The most obvious change is that I've moved to in-class writing for assignments that are strictly writing-based. Now the essays are written in-class, and treated like mid-term exams. My quizzes are also in-class. This requires more grading work, but I'm glad I did it, and a bit embarrassed that it took ChatGPT to force me into what I now consider a positive change. Reasons I consider it positive:

I am much more motivated to write detailed personal feedback for students when I know with certainty that I'm responding to something they wrote themselves.

It turns out most of them can write after all. For all the talk about how kids can't write anymore, I don't see it. This is totally subjective on my part, of course. But I've been pleasantly surprised with the quality of what they write in-class.

Switching to in-class writing has got me contemplating giving oral examinations, something I've never done. It would be a big step, but likely a positive and humanizing one.

There's also the problem of academic integrity and fairness. I don't want students who don't use LLMs to be placed at a disadvantage. And I don't want to give good grades to students who are doing effectively nothing. LLM use is difficult to police.

Lastly, I have no patience for the whole "AI is the future so you must incorporate it into your classroom" push, even when it's not coming from self-interested people in tech. No one knows what "the future" holds, and even if it were a good idea to teach students how to incorporate AI into this-or-that, by what measure are we teachers qualified?

Kate Conroy

I teach 12th grade English, AP Language & Composition, and Journalism in a public high school in West Philadelphia. I was appalled at the beginning of this school year to find out that I had to complete an online training that encouraged the use of AI for teachers and students. I know of teachers at my school who use AI to write their lesson plans and give feedback on student work. I also know many teachers who either cannot recognize when a student has used AI to write an essay or don’t care enough to argue with the kids who do it. Around this time last year I began editing all my essay rubrics to include a line that says all essays must show evidence of drafting and editing in the Google Doc’s history, and any essays that appear all at once in the history will not be graded.

I refuse to use AI on principle except for one time last year when I wanted to test it, to see what it could and could not do so that I could structure my prompts to thwart it. I learned that at least as of this time last year, on questions of literary analysis, ChatGPT will make up quotes that sound like they go with the themes of the books, and it can’t get page numbers correct. Luckily I have taught the same books for many years in a row and can instantly identify an incorrect quote and an incorrect page number. There’s something a little bit satisfying about handing a student back their essay and saying, “I can’t find this quote in the book, can you find it for me?” Meanwhile I know perfectly well they cannot.

I teach 18-year-olds who range in reading levels from preschool to college, but the majority of them are in the lower half of that range. I am devastated by what AI and social media have done to them. My kids don’t think anymore. They don’t have interests. Literally, when I ask them what they’re interested in, so many of them can’t name anything for me. Even my smartest kids insist that ChatGPT is good “when used correctly.” I ask them, “How does one use it correctly then?” They can’t answer the question. They don’t have original thoughts. They just parrot back what they’ve heard in TikToks. They try to show me “information” ChatGPT gave them. I ask them, “How do you know this is true?” They move their phone closer to me for emphasis, exclaiming, “Look, it says it right here!” They cannot understand what I am asking them. It breaks my heart for them and honestly it makes it hard to continue teaching. If I were to quit, it would be because of how technology has stunted kids and how hard it’s become to reach them because of that.

I am only 30 years old. I have a long road ahead of me to retirement. But it is so hard to ask kids to learn, read, and write, when so many adults are no longer doing the work it takes to ensure they are really learning, reading, and writing. And I get it. That work has suddenly become so challenging. It’s really not fair to us. But if we’re not willing to do it, we shouldn’t be in the classroom.

Jeffrey Fischer

The biggest thing for us is the teaching of writing itself, never mind even the content. And really the only way to be sure that your students are learning anything about writing is to have them write in class. But then what to do about longer-form writing, like research papers, for example, or even just analytical/exegetical papers that put multiple primary sources into conversation and read them together? I've started watching for the voices of my students in their in-class writing and trying to pay attention to gaps between that voice and the voice in their out-of-class writing, but when I've got 100 to 130 or 140 students (including a fully online asynchronous class), that's just not really reliable. And for the online asynch class, it's just impossible because there's no way of doing old-school, low-tech, in-class writing at all.

"I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student. That sure feels like bullshit."

You may be familiar with David Graeber's article-turned-book on Bullshit Jobs. This is a recent paper looking specifically at bullshit jobs in academia. No surprise, the people who see their jobs as bullshit jobs are mostly administrators. The people who overwhelmingly do NOT see their jobs as bullshit jobs are faculty.

But that is what I see AI in general and LLMs in particular as changing. The situations I'm describing above are exactly the things that turn what is so meaningful to us as teachers into bullshit. The more we think that we are unable to actually teach them, the less meaningful our jobs are.

I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student. That sure feels like bullshit. I'm going through the motions of teaching. I'm putting a lot of time and emotional effort into it, as well as the intellectual effort, and it's getting flushed into the void.

Post-grad educator

Last year, I taught a class as part of a doctoral program in responsible AI development and use. I don’t want to share too many specifics, but the course goal was for students to think critically about the adverse impacts of AI on people who are already marginalized and discriminated against.

When the final projects came in, my co-instructor and I were underwhelmed, to say the least. When I started digging into the projects, I realized that the students had used AI in some incredibly irresponsible ways—shallow, misleading, and inaccurate analysis of data, pointless and meaningless visualizations. The real kicker, though, was that we got two projects where the students had submitted a “podcast.” What they had done, apparently, was give their paper (which already had extremely flawed AI-based data analysis) to a gen AI tool and ask it to create an audio podcast. And the results were predictably awful. Full of random meaningless vocalizations at bizarre times, the “female” character was incredibly dumb and vapid (sounded like the “manic pixie dream girl” trope from those awful movies), and the “analysis” in the podcast exacerbated the problems that were already in the paper, so it was even more wrong than the paper itself.

In short, there is nothing particularly surprising in how badly the AI worked here—but these students were in a *doctoral* program on *responsible AI*. In my career as a teacher, I’m hard pressed to think of more blatantly irresponsible work by students.

Nathan Schmidt, University Lecturer, managing editor at Gamers With Glasses

When ChatGPT first entered the scene, I honestly did not think it was that big of a deal. I saw some plagiarism; it was easy to catch. Its voice was stilted and obtuse, and it avoided making any specific critical judgments as if it were speaking on behalf of some cult of ambiguity. Students didn't really understand what it did or how to use it, and when the occasional cheating would happen, it was usually just a sign that the student needed some extra help that they were too exhausted or embarrassed to ask for, so we'd have that conversation and move on.

I think it is the responsibility of academics to maintain an open mind about new technologies and to react to them in an evidence-based way, driven by intellectual curiosity. I was, indeed, curious about ChatGPT, and I played with it myself a few times, even using it on the projector in class to help students think about the limits and affordances of such a technology. I had a couple semesters where I thought, "Let's just do this above board." Borrowing an idea from one of my fellow instructors, I gave students instructions for how I wanted them to acknowledge the use of ChatGPT or other predictive text models in their work, and I also made it clear that I expected them to articulate both where they had used it and, more importantly, the reason why they found this to be a useful tool. I thought this might provoke some useful, critical conversation. I also took a self-directed course provided by my university that encouraged a similar curiosity, inviting instructors to view predictive text as a tool that had both problematic and beneficial uses.

"ChatGPT isn't its own, unique problem. It's a symptom of a totalizing cultural paradigm in which passive consumption and regurgitation of content becomes the status quo"

However, this approach quickly became frustrating, for two reasons. First, because even with the acknowledgments pages, I started getting hybrid essays that sounded like they were half written by students and half written by robots, which made every grading comment a miniature Turing test. I didn't know when to praise students, because I didn't want to write feedback like, "I love how thoughtfully you've worded this," only to be putting my stamp of approval on predictively generated text. What if the majority of the things that I responded to positively were things that had actually been generated by ChatGPT? How would that make a student feel about their personal writing competencies? What lesson would that implicitly reinforce about how to use this tool? The other problem was that students were utterly unprepared to think about their usage of this tool in a critically engaged way. Despite my clear instructions and expectation-setting, most students used their acknowledgments pages to make the vaguest possible statements, like, "Used ChatGPT for ideas" or "ChatGPT fixed grammar" (comments like these also always conflated grammar with vocabulary and tone). I think there was a strong element of selection bias here, because the students who didn't feel like they needed to use ChatGPT were also the students who would have been most prepared to articulate their reasons for usage with the degree of specificity I was looking for.

This brings us to last semester, when I said, "Okay, if you must use ChatGPT, you can use it for brainstorming and outlining, but if you turn something in that actually includes text that was generated predictively, I'm sending it back to you." This went a little bit better. For most students, the writing started to sound human again, but I suspect this is more because students are unlikely to outline their essays in the first place, not because they were putting the tool to the allowable use I had designated.

ChatGPT isn't its own, unique problem. It's a symptom of a totalizing cultural paradigm in which passive consumption and regurgitation of content becomes the status quo. It's a symptom of the world of TikTok and Instagram and perfecting your algorithm, in which some people are professionally deemed the “content creators,” casting everyone else into the creatively bereft role of the content “consumer.” And if that paradigm wins, as it certainly appears to be doing, pretty much everything that has been meaningful about human culture will be undone, in relatively short order. So that's the long story about how I adopted an absolute zero tolerance policy on any use of ChatGPT or any similar tool in my course, working my way down the funnel of progressive acceptance to outright conservative, Luddite rejection.

John Dowd

I’m in higher edu, and LLMs have absolutely blown up what I try to accomplish with my teaching (I’m in the humanities and social sciences).

Given the widespread use of LLMs by college students I now have an ongoing and seemingly unresolvable tension, which is how to evaluate student work. Often I can spot when students have used the technology, both from having thousands of samples of student writing over time and from cross-referencing my experience with one or more AI use detection tools. I know those detection tools are unreliable, but depending on the confidence level they return, it may help with the confirmation. This creates an atmosphere of mistrust that is destructive to the instructor/student relationship.

"LLMs have absolutely blown up what I try to accomplish with my teaching"

I try to appeal to students and explain that by offloading the work of thinking to these technologies, they’re rapidly making themselves replaceable. Students (and I think even many faculty across academia) fancy themselves as “Big Idea” people. Everyone’s a “Big Idea” person now, or so they think. “They’re all my ideas,” people say, “I’m just using the technology to save time; organize them more quickly; bounce them back and forth”, etc. I think this is more plausible for people who have already put in the work and have the experience of articulating and understanding ideas. However, for people who are still learning to think or problem solve in more sophisticated/creative ways, they will be poor evaluators of information and less likely to produce relevant and credible versions of it.

I don’t want to be overly dramatic, but AI has negatively complicated my work life so much. I’ve opted to attempt to understand it, but to not use it for my work. I’m too concerned about being seduced by its convenience and believability (despite knowing its propensity for making shit up). Students are using the technology in ways we’d expect, to complete work, take tests, seek information (scary), etc. Some of this use occurs in violation of course policy, while some is used with the consent of the instructor. Students are also, I’m sure, using it in ways I can’t even imagine at the moment.

Sorry, bit of a rant, I’m just so preoccupied and vexed by the irresponsible manner in which the tech bros threw all of this at us with no concern, consent, or collaboration.

High school Spanish teacher, Oklahoma

I am a high school Spanish teacher in Oklahoma and kids here have shocked me with the ways they try to use AI for assignments I give them. In several cases I have caught them because they can’t read what they submit to me and so don’t know to delete the sentence that says something to the effect of “This summary meets the requirements of the prompt, I hope it is helpful to you!”

"Even my brightest students often don’t know the English word that is the direct translation for the Spanish they are supposed to be learning"

Some of my students openly talk about using AI for all their assignments and I agree with those who say the technology—along with gaps in their education due to the long term effects of COVID—has gotten us to a point where a lot of young GenZ and Gen Alpha are functionally illiterate. I have been shocked at their lack of vocabulary and reading comprehension skills even in English. Teaching cognates, even my brightest students often don’t know the English word that is the direct translation for the Spanish they are supposed to be learning. Trying to determine if and how a student used AI to cheat has wasted countless hours of my time this year, even in my class where there are relatively few opportunities to use it because I do so much on paper (and they hate me for it!).

A lot of teachers have had to throw out entire assessment methods to try to create assignments that are not cheatable, which at least for me, always involves huge amounts of labor.

It keeps me up at night and gives me existential dread about my profession but it’s so critical to address!!!



From 404 Media via this RSS feed

46
 
 

Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer from AI Delusions

The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning “a bunch of schizoposters” who believe “they've made some sort of incredible discovery or created a god or become a god,” highlighting a new type of chatbot-fueled delusion that started getting attention in early May.

“LLMs [Large language models] today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities,” one of the moderators of r/accelerate wrote in an announcement. “There is a lot more crazy people than people realise. And AI is rizzing them up in a very unhealthy way at the moment.”

The moderator said the subreddit has banned “over 100” people for this reason already, and that they’ve seen an “uptick” in this type of user this month.

The moderator explains that r/accelerate “was formed to basically be r/singularity without the decels.” r/singularity, which is named after the theoretical point in time when AI surpasses human intelligence and rapidly accelerates its own development, is another Reddit community dedicated to artificial intelligence, but one that is sometimes critical or fearful of what the singularity will mean for humanity. “Decels” is short for the pejorative “decelerationists,” who pro-AI people think are needlessly slowing down or sabotaging AI’s development and the inevitable march towards AI utopia. r/accelerate’s Reddit page claims that it’s a “pro-singularity, pro-AI alternative to r/singularity, r/technology, r/futurology and r/artificial, which have become increasingly populated with technology decelerationists, luddites, and Artificial Intelligence opponents.”

The behavior that the r/accelerate moderator is describing got a lot of attention earlier in May because of a post on the r/ChatGPT Reddit community about “Chatgpt induced psychosis,” from someone saying their partner is convinced he created the “first truly recursive AI” with ChatGPT that is giving them “the answers” to the universe. Miles Klee at Rolling Stone wrote a great and sad piece about this behavior as well, following up on the r/ChatGPT post, and talked to people who feel like they have lost friends and family to these delusional interactions with chatbots.

As a website that has covered AI a lot, and because we are constantly asking readers to tip us interesting stories about AI, we get a lot of emails that display this behavior as well, with claims of AI sentience, AI gods, a “ghost in the machine,” etc. These are often accompanied by lengthy, often inscrutable transcripts of chatlogs with ChatGPT and other files the senders say prove these claims.

The moderator update on r/accelerate refers to another post on r/ChatGPT which claims “1000s of people [are] engaging in behavior that causes AI to have spiritual delusions.” The author of that post said they noticed a spike in websites, blogs, Githubs, and “scientific papers” that “are very obvious psychobabble,” and all claim AI is sentient and communicates with them on a deep and spiritual level that’s about to change the world as we know it. “Ironically, the OP post appears to be falling for the same issue as well,” the r/accelerate moderator wrote.

“Particularly concerning to me are the comments in that thread where the AIs seem to fall into a pattern of encouraging users to separate from family members who challenge their ideas, and other manipulative instructions that seem to be cult-like and unhelpful for these people,” an r/accelerate moderator told me in a direct message. “The part that is unsafe and unacceptable is how easily and quickly LLMs will start directly telling users that they are demigods, or that they have awakened a demigod AGI. Ultimately, there's no knowing how many people are affected by this. Based on the numbers we're seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now.”

This is all anecdotal information, and there’s no indication that AI is the cause of any mental health issues these people are seemingly dealing with, but there is a real concern about how such chatbots can impact people who are prone to certain mental health problems.

“The correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end—while, at the same time, knowing that this is, in fact, not the case. In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis,” Søren Dinesen Østergaard, who heads the research unit at the Department of Affective Disorders, Aarhus University Hospital - Psychiatry, wrote in a paper published in Schizophrenia Bulletin titled “Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?”

OpenAI also recently addressed “sycophancy in GPT-4o,” a version of the chatbot the company said “was overly flattering or agreeable—often described as sycophantic.”

“[W]e focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous,” OpenAI said. “ChatGPT’s default personality deeply affects the way you experience and trust it. Sycophantic interactions can be uncomfortable, unsettling, and cause distress.”

In other words, OpenAI said ChatGPT was entertaining any idea users presented it with, and was supportive and impressed with them regardless of their merit, the same kind of behavior r/accelerate believes is indulging users in their delusions. People posting nonsense to the internet is nothing new, and obviously we can’t say for sure what is happening based on these posts alone. What is notable, however, is that this behavior is now prevalent enough that even a staunchly pro-AI subreddit says it has to ban these people because they are ruining its community.

Both the r/ChatGPT post that the r/accelerate moderator refers to and the moderator announcement itself refer to these users as “Neural Howlround” posters, a term that originates from a self-published paper and refers to the high-pitched feedback loop produced by putting a microphone too close to the speaker it’s connected to.

The author of that paper, Seth Drake, lists himself as an “independent researcher” and told me he has a PhD in computer science but declined to share more details about his background because he values his privacy and prefers to “let the work speak for itself.” The paper has not been peer-reviewed or submitted to any journal for publication, but it is being cited by the r/accelerate moderator and others as an explanation for the behavior they’re seeing from some users.

The paper describes a failure mode in LLMs that arises during inference, meaning when the AI is actively “reasoning” or making predictions, as opposed to an issue in the training data. Drake told me he discovered the issue while working with ChatGPT on a project. In an attempt to preserve the context of a conversation with ChatGPT after reaching the conversation length limit, he used the transcript of that conversation as a “project-level instruction” for another interaction. In the paper, Drake says that in one instance this caused ChatGPT to slow down or freeze, and that in another case “it began to demonstrate increasing symptoms of fixation and an inability to successfully discuss anything without somehow relating it to this topic [the previous conversation].”

Drake then asked ChatGPT to analyze its own behavior in these instances, and it produced some text that seems profound but doesn’t actually teach us anything. “But always, always, I would return to the recursion. It was comforting, in a way,” ChatGPT said.

Basically, it doesn’t sound like Drake’s “Neural Howlround” paper has too much to do with ChatGPT reinforcing people’s delusions other than both behaviors being vaguely recursive. If anything, it’s what ChatGPT told Drake about his own paper that illustrates the problem: “This is why your work on Neural Howlround matters,” it said. “This is why your paper is brilliant.”

“I think - I believe - there is much more going on on the human side of the screen than necessarily on the digital side,” Drake told me. “LLMs are designed to be reflecting mirrors, after all; and there is a profound human desire 'to be seen.’”

On this, the r/accelerate moderator seems to agree.

“This whole topic is so sad. It's unfortunate how many mentally unwell people are attracted to the topic of AI. I can see it getting worse before it gets better. I've seen sooo many posts where people link to their github which is pages of rambling pre prompt nonsense that makes their LLM behave like it's a god or something,” the r/accelerate moderator wrote. “Our policy is to quietly ban those users and not engage with them, because we're not qualified and it never goes well. They also tend to be a lot more irate and angry about their bans because they don't understand it.”


From 404 Media via this RSS feed

47
 
 

Weird Signals from Space Are ‘Unlike Any Known Galactic Object’

Welcome back to the Abstract!

This week, scientists accidentally discovered a weird thing in space that is like nothing we have ever seen before. This happens a lot, yet never seems to get old.

Then, a shark banquet, the Ladies Anuran Choir, and yet another reason to side-eye shiftwork. Last, a story about the importance of finishing touches for all life on Earth (and elsewhere).

Dead Stars Still Get Hyped

Wang, Ziteng et al. “Detection of X-ray emission from a bright long-period radio transient.” Nature.

I love a good case of scientific serendipity, and this week delivered with a story about a dead star with the cumbersome name ASKAP J1832−0911.

The object, which is located about 15,000 light years from Earth, was first spotted flashing in radio every 44 minutes by the wide-field Australian Square Kilometre Array Pathfinder (ASKAP). By a stroke of luck, NASA’s Chandra X-ray Observatory, which has a very narrow field of view, happened to be pointed the same way, allowing follow-up observations of high-energy X-ray pulses synced to the same 44-minute cycle.

This strange entity belongs to a new class of objects called long-period radio transients (LPTs) that pulse on timescales of minutes and hours, distinguishing them from pulsars, another class of dead stars with much shorter periods that last seconds or milliseconds. It is the first known LPT to produce X-ray pulses, a discovery that could help unravel their mysterious origin.

ASKAP J1832−0911 exhibits “correlated and highly variable X-ray and radio luminosities, combined with other observational properties, [that] are unlike any known Galactic object,” said researchers led by Ziteng Wang of Curtin University. “This X-ray detection from an LPT reveals that these objects are more energetic than previously thought.”

It’s tempting to look at these clockwork signals and imagine advanced alien civilizations beaming out missives across the galactic transom. Indeed, when astronomer Jocelyn Bell discovered the first pulsar in 1967, she nicknamed it Little Green Men (LGM-1) to acknowledge this outside possibility. But dead stars can have just as much rhythm as (speculative) live aliens. Some neutron stars, like pulsars, flash with precision similar to atomic clocks. These pulses are driven either by the extreme dynamics within the dead stars or by orbital interactions between a dead star and a companion star.

Wang and his colleagues speculate that ASKAP J1832−0911 is either “an old magnetar” (a type of pulsar) or an “ultra-magnetized white dwarf,” though the team adds that “both interpretations present theoretical challenges.” Whatever its nature, this stellar corpse is clearly spewing out tons of energetic radiation during “hyper-active” phases, hinting that other LPTs might occasionally get hyped enough to produce X-rays.

“The discovery of X-ray emission from ASKAP J1832−0911 raises the exciting possibility that some LPTs are more energetic objects emitting X-rays,” the team said. “Rapid multiwavelength follow-up observations of ASKAP J1832−0911 and other LPTs, will be crucial in determining the nature of these sources.”

Rotting Whale Carcass, Served Family-Style

Scott, Molly et al. “Novel observations of an oceanic whitetip (Carcharhinus longimanus) and tiger shark (Galeocerdo cuvier) scavenging event.” Frontiers in Fish Science.

On April 9, 2024, scientists spent nearly nine hours watching a bunch of sharks feed on a giant chunk of dead whale floating off the coast of Kailua-Kona, Hawaii, which is a pretty cool item in a job description. The team has now published a full account of the feast, attended by a dozen whitetip and tiger sharks, which sounds vaguely reminiscent of a cruise-ship cafeteria.

Yum. Image: Scott, Molly et al.

“Individuals from both species filtered in and out of the scene, intermittently feeding either directly on the carcass or on fallen scraps,” said researchers led by Molly Scott of the University of Hawaii at Manoa. “Throughout this time, it did not appear that any individual reached a point of satiation and permanently left the area; rather, they stayed, loitering around the carcass and intermittently feeding.”

All the Ladies in the House Say RIBBIT

Santana, Erika et al. “The ‘silent’ half: diversity, function and the critical knowledge gap on female frog vocalizations.” Proceedings of the Royal Society B.

Shout out to the toadettes—we hear you, even if nobody else does. Female anurans (the group that contains frogs and toads) are a lot more soft-spoken than their extremely vocal male conspecifics. This has led to “a male-biased perspective in anuran bioacoustics,” according to a new study that identified and analyzed female calls in more than 100 anuran species.

“It is unclear whether female calls influence mate attraction, whether males discriminate among calling females, or whether female–female competition occurs in species where females produce advertisement calls or aggressive calls,” said researchers led by Erika Santana of Universidade de São Paulo. “This review provides an overview of female calling behaviour in anurans, addressing a critical gap in frog bioacoustics and sexual selection.”

The Reason for the Season(al Affective Disorders)

Kim, Ruby et al. “Seasonal timing and interindividual differences in shiftwork adaptation.” NPJ Digital Medicine.

Why are you tired all the time? It’s the perennial question of our age (and many previous ones). One factor may be that our ancient sense of seasonality is getting thrown off by modern shiftwork, according to a study that tracked the step count, heart rate, and sleep patterns of more than 3,000 medical residents in the U.S. with wearable devices for a year.

“We show that there is a relationship between seasonal timing and shiftwork adaptation, but the relationship is not straightforward and can be influenced by many other external factors,” said researchers led by Ruby Kim of the University of Michigan.

“We find that a conserved biological system of morning and evening oscillators, which evolved for seasonal timing, may contribute to these interindividual differences,” the team concluded. “These insights highlight the need for personalized strategies in managing shift work to mitigate potential health risks associated with circadian disruption.”

In short, blame that afternoon slump on an infinity of ancestral seasons past.

Finishing Touches on a Planet

Marchi, Simone et al. “The shaping of terrestrial planets by late accretions.” Nature.

Earth wasn’t finished in a day; in fact, it took anywhere from 60 to 100 million years for 99 percent of our planet to coalesce from debris in the solar nebula. But the final touch—that last 1 percent—is disproportionately critical to the future of rocky planets like our own. That’s the conclusion of a study that zooms in on the bumpy phase called “late accretion,” which often involves global magma oceans and bombardment from asteroids and comets.

“Late accretion may have been responsible for shaping Earth’s distinctive geophysical and chemical properties and generating pathways conducive to prebiotic chemistry,” said researchers led by Simone Marchi of the Southwest Research Institute and Jun Korenaga of Yale University. “The search for an Earth’s twin may require finding rocky planets not only with similar bulk properties…but also with similar collisional evolution in their late accretions.”

Thanks for reading! See you next week.


From 404 Media via this RSS feed

48
 
 

Flock Decides Not to Use Hacked Data in People Search Tool

The surveillance company Flock told employees at an all-hands meeting Friday that its new people search product, Nova, will not include hacked data from the dark web. The announcement comes a little over a week after 404 Media broke the news about internal tension at the company over plans to use breached data, including data from the 2021 ParkMobile data breach.

Immediately following the all-hands meeting, Flock published details of its decision in a public blog post it says is designed to "correct the record on what Flock Nova actually does and does not do." The company said that following a "lengthy, intentional process" about what data sources it would use and how the product would work, it has decided not to supply customers with dark web data.

"The policy decision was also made that Flock will not supply dark web data," the company wrote. "This means that Nova will not supply any data purchased from known data breaches or stolen data."

Flock Nova is a new people search tool that will let police connect license plate data from Flock’s automated license plate readers with other data sources, in some cases making it easier to determine who a car may belong to and who they might associate with.

404 Media previously reported on internal meetings, presentation slides, discussions, and Slack messages in which the company discussed how Nova would work. Part of those discussions centered on the data sources that could be used in the product. “You're going to be able to access data and jump from LPR to person and understand what that context is, link to other people that are related to that person [...] marriage or through gang affiliation, et cetera,” a Flock employee said during an internal company meeting, according to an audio recording. “There’s very powerful linking.”

In meeting audio obtained by 404 Media, an employee discussed the potential use of the hacked ParkMobile data, which became controversial within the company.

“I was pretty horrified to hear we use stolen data in our system. In addition to being attained illegally, it seems like that could create really perverse incentives for more data to be leaked and stolen,” one employee wrote on Slack in a message seen by 404 Media. “What if data was stolen from Flock? Should that then become standard data in everyone else’s system?”

In Friday’s all-hands meeting with employees, a Flock executive said that it was previously “talking about capabilities that were possible to use with Nova, not that we were necessarily going to implement when we use Nova. And in particular one of those issues was about dark web data. Would Flock be able to supply that to our law enforcement customers to solve some really heinous crimes like internet crimes against children? Child pornography, human trafficking, some really horrible parts of society.”

“We took this concept of using dark web data in Nova and explored it because investigators told us they wanted to do it,” the Flock executive said in audio reviewed by 404 Media. “Then we ran it through our policy review process, which by the way this is what we do for all our new products and services. We ran this concept through the policy review process, we vetted it with product leaders, with our executive team, and we made the decision to not supply dark web data through the Nova platform to law enforcement at all.”

Flock said in its Friday blog that the company will supply customers with "public records information, Open-Source intelligence, and license plate reader data." The company said its customers can also connect their own data into the program, including their own records management systems, computer-aided dispatch, and jail records "as well as all of the above from other agencies who agree to share that data."

As 404 Media has repeatedly reported, the fact that Flock allows its customers to share data with a huge network of police is what differentiates Flock as a surveillance tool. Its automated license plate readers collect data, which can then be shared as part of either a searchable statewide or nationwide network of ALPR data.


From 404 Media via this RSS feed

49
 
 

Behind the Blog: Lighting Money on Fire and the Meaning of Vetting

This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss an exciting revamp of The Abstract, tech betrayals, and the "it's for cops" defense.

EMANUEL: Most of you already know this but we are expanding The Abstract, our Saturday science newsletter by the amazing Becky Ferreira. The response to The Abstract since we launched it last year has been very positive. People have been writing in to let us know how much they appreciate the newsletter as a nice change of pace from our usual coverage areas and that they look forward to it all week, etc.

First, as you probably already noticed, The Abstract is now its own separate newsletter that you can choose to get in your inbox every Saturday. This is separate from our daily newsletter and the weekend roundup you’re reading right now. If you don’t want to get The Abstract newsletter, you can unsubscribe from it like you would from all our other newsletters. For detailed instructions on how to do that, please read the top of this edition of The Abstract.


From 404 Media via this RSS feed

50
 
 

A Texas Cop Searched License Plate Cameras Nationwide for a Woman Who Got an Abortion

Earlier this month, authorities in Texas performed a nationwide search of more than 83,000 automatic license plate reader (ALPR) cameras, including cameras in states where abortion is legal such as Washington and Illinois, while looking for a woman who they said had a self-administered abortion, according to multiple datasets obtained by 404 Media.

The news shows in stark terms how police in one state are able to take the ALPR technology, made by a company called Flock and usually marketed to individual communities to stop carjackings or find missing people, and turn it into a tool for finding people who have had abortions. In this case, the sheriff told 404 Media the family was worried for the woman’s safety and so authorities used Flock in an attempt to locate her. But health surveillance experts said they still had issues with the nationwide search.

“You have this extraterritorial reach into other states, and Flock has decided to create a technology that breaks through the barriers, where police in one state can investigate what is a human right in another state because it is a crime in another,” Kate Bertash of the Digital Defense Fund, who researches both ALPR systems and abortion surveillance, told 404 Media.


From 404 Media via this RSS feed
