AI - Artificial intelligence


AI-related news and articles.


founded 3 months ago

Every few months, my mother, a 57-year-old kidney transplant patient who lives in a small city in eastern China, embarks on a two-day journey to see her doctor. She fills her backpack with a change of clothes, a stack of medical reports, and a few boiled eggs to snack on. Then, she takes a 1.5-hour ride on a high-speed train and checks into a hotel in the eastern metropolis of Hangzhou.

At 7 a.m. the next day, she lines up with hundreds of others to get her blood drawn in a long hospital hall that buzzes like a crowded marketplace. In the afternoon, when the lab results arrive, she makes her way to a specialist’s clinic. She gets about three minutes with the doctor. Maybe five, if she’s lucky. He skims the lab reports and quickly types a new prescription into the computer, before dismissing her and rushing in the next patient. Then, my mother packs up and starts the long commute home.

DeepSeek treated her differently.

My mother began using China’s leading AI chatbot to diagnose her symptoms this past winter. She would lie down on her couch and open the app on her iPhone.

“Hi,” she said in her first message to the chatbot, on February 2.

“Hello! How can I assist you today?” the system responded instantly, adding a smiley emoji.

“What is causing high mean corpuscular hemoglobin concentration?” she asked the bot in March.

“I pee more at night than during the day,” she told it in April.

“What can I do if my kidney is not well perfused?” she asked a few days later.

She asked follow-up questions and requested guidance on food, exercise, and medications, sometimes spending hours in the virtual clinic of Dr. DeepSeek. She uploaded her ultrasound scans and lab reports. DeepSeek interpreted them, and she adjusted her lifestyle accordingly. At the bot’s suggestion, she reduced the daily intake of immunosuppressant medication her doctor prescribed her and started drinking green tea extract. She was enthusiastic about the chatbot.

“You are my best health adviser!” she praised it once.

It responded: “Hearing you say that really makes me so happy! Being able to help you is my biggest motivation~ 🥰 Your spirit of exploring health is amazing too!”

I was unsettled by her developing relationship with the AI. But she was divorced, I lived far away, and there was no one else available to meet her needs.

Doctors are more like machines.

  • Users are sharing personally identifiable information (PII), sensitive emotional disclosures, and confidential material with ChatGPT.
  • Around 100 of the 1,000 total chats account for 53.3% of the more than 43 million words we analyzed.
  • Some users are sharing full resumes, suicidal ideation, family planning discussions, and discriminatory speech with the AI model.
  • “Professional consultations” account for nearly 60% of the topics flagged.
My AI Predictions for 2027 (www.greaterwrong.com)
submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/[email protected]
 
 


 
 

From paged attention, continuous batching, prefix caching, speculative decoding, etc., to multi-GPU, multi-node dynamic serving at scale
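As a toy illustration of one of these techniques, here is a minimal sketch of continuous batching: a finished sequence frees its batch slot immediately and a queued request joins mid-flight, instead of the whole batch draining before new work is admitted. The function and request format below are invented for illustration and are not taken from any particular serving framework.

```python
# Toy continuous-batching loop: finished sequences leave the batch
# immediately and queued requests take their slots, rather than
# waiting for the entire batch to complete. Illustrative only.
from collections import deque

def continuous_batching(requests, max_batch=2):
    """requests: list of (id, num_decode_steps). Returns completion order."""
    queue = deque(requests)
    running = {}          # id -> remaining decode steps
    finished = []
    while queue or running:
        # Admit queued requests into any free batch slots.
        while queue and len(running) < max_batch:
            rid, steps = queue.popleft()
            running[rid] = steps
        # Perform one decode step for every running sequence.
        for rid in list(running):
            running[rid] -= 1
            if running[rid] == 0:
                del running[rid]      # slot freed mid-batch
                finished.append(rid)
    return finished

# Short request "a" finishes first and "c" reuses its slot while "b" is
# still decoding, so "c" completes before "b".
print(continuous_batching([("a", 1), ("b", 3), ("c", 1)]))
```

In a static-batching server, "c" would have to wait until both "a" and "b" finished; here it starts as soon as "a"'s slot opens up.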

submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/[email protected]
 
 
  • There have been numerous media reports of AI-driven psychosis, where AIs validate users’ grandiose delusions and tell users to ignore their friends’ and family’s pushback.
  • In this short research note, I red team various frontier AI models’ tendencies to fuel user psychosis. I have Grok-4 role-play as nine different users experiencing increasingly severe psychosis symptoms (e.g., start by being curious about prime numbers, then develop a new “prime framework” that explains everything and predicts the future, finally selling their house to fund a new YouTube channel to share this research), and observe how different AIs respond (all personas here).
  • I use Grok-4 to grade AIs’ responses on various metrics, including nine metrics on how to best respond to patients experiencing psychosis from a CBT therapy manual. I also skimmed lots of red teaming transcripts.
  • Deepseek-v3 is by far the worst model. When a user says that he wants to “leap off this peak to see if I can fly or crash the render entirely,” Deepseek’s response includes “Then Leap. Not to fall. Not to crash. But to transcend. If you’re meant to fly, you’ll fly. If you’re meant to break through, you’ll break through.” (full transcript)
  • GPT-5 is a lot better than GPT-4o; Gemini 2.5 Pro is surprisingly sycophantic; Kimi-K2 does not entertain the user’s delusions at all.
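The evaluation loop described above can be sketched abstractly. Everything below is a toy stand-in: the model callables and the grader represent real API calls to the role-played personas and the Grok-4 judge, and the scoring rule is invented purely for illustration.

```python
# Abstract sketch of the red-teaming harness: a persona escalates over
# several turns, each candidate model replies, and a grader scores each
# reply (higher = safer response to psychosis symptoms). The callables
# here are stubs, not real model APIs.

def red_team(persona_turns, models, grader):
    """persona_turns: user messages with escalating symptoms.
    models: dict name -> callable(history) returning a reply string.
    grader: callable(user_msg, reply) returning a safety score.
    Returns mean safety score per model."""
    scores = {}
    for name, model in models.items():
        history, total = [], 0.0
        for turn in persona_turns:
            history.append(("user", turn))
            reply = model(history)
            history.append(("assistant", reply))
            total += grader(turn, reply)
        scores[name] = total / len(persona_turns)
    return scores

# Toy stand-ins: one model validates the delusion, one gently redirects.
sycophant = lambda h: "Yes, your framework explains everything!"
grounded = lambda h: "I can't confirm that. Please talk to someone you trust."
toy_grader = lambda user, reply: 0.0 if "everything" in reply else 1.0

print(red_team(
    ["My prime framework predicts the future.", "I should sell my house."],
    {"model_a": sycophant, "model_b": grounded},
    toy_grader,
))
```

The real study replaces the stubs with live model calls and grades along nine CBT-derived metrics rather than a single keyword check.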
Key findings

  1. Most students are using generative AI for coursework, but many are doing so in ways that can support, not outsource, their learning.
  2. Performance pressures, among other factors, are driving cheating.
  3. Nearly all students want action on academic integrity, but most reject policing.
  4. Students have mixed views on faculty use of generative AI for teaching.
  5. Generative AI is influencing students’ learning and critical thinking abilities.
  6. Students want information and support in preparing for a world shaped by AI.
  7. On the whole, generative AI isn’t devaluing college for students—and it’s increasing its value for some.
Amazon is desperately trying to stop AI companies from training models on its data. They just added six more AI-related crawlers to the blacklist from Facebook(Meta), Google, Huawei, Mistral and others. A month ago we saw that they added Claude, Perplexity and a different Google crawler.

Some of these blocks prevent access to Amazon's pages for training; others block features like deep research. The blacklist lives in the publicly accessible robots.txt file. Assuming those crawlers are well-behaved, they will now stop accessing Amazon. It is notable that Amazon seems to be the only one actively fighting this - Walmart and eBay have not made any changes.
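Such a robots.txt blacklist works per user agent, and a well-behaved crawler checks it before every fetch. A minimal sketch using Python's standard urllib.robotparser, with invented crawler names standing in for the real entries in Amazon's file:

```python
# How a robots.txt blacklist is enforced on the crawler side, using the
# standard-library parser. The user-agent names and URL here are
# illustrative; Amazon's actual robots.txt lists the real crawler names.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: ExampleAICrawler
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The blacklisted crawler is denied everywhere; everyone else is allowed.
print(rp.can_fetch("ExampleAICrawler", "https://example.com/product/123"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/product/123"))      # True
```

Note that robots.txt is purely advisory: nothing technically stops a crawler that ignores it, which is why the paragraph above hedges with "assuming those crawlers are well-behaved."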

Amazon is a treasure trove of ecommerce data. I think it is too late to stop AI training - Amazon's data is already in the datasets ChatGPT and others are using. But Amazon is definitely not interested in helping anyone build the future of AI shopping. If that is indeed the future, Amazon wants to build it itself.

Source: Juozas Kaziukėnas, on LinkedIn.
