this post was submitted on 25 Jun 2025
868 points (98.8% liked)

Technology

Google’s Gemini team is apparently sending out emails about an upcoming change to how Gemini interacts with apps on Android devices. The email informs users that, come July 7, 2025, Gemini will be able to “help you use Phone, Messages, WhatsApp, and Utilities on your phone, whether your Gemini Apps Activity is on or off.” Naturally, this has raised some privacy concerns among those who’ve received the email and those using the AI assistant on their Android devices.

[–] [email protected] 27 points 1 week ago (20 children)

Is it really "the people," or just the subset of people who use Lemmy? The vast majority seemingly don't care, as evidenced by the sheer number of people using things like social media.

What might be important to us in this echo chamber isn't reflective of society as a whole.

[–] [email protected] 13 points 1 week ago (11 children)

It's mostly Lemmy. In real life, people range from amused to indifferent. I have never met anyone as hostile as the Lemmy consensus seems to be. If a feature is useful, people will use it, AI or not. Some AI features are gimmicks, and they largely get ignored unless they're very intrusive (in which case the intrusiveness, not the AI, is the problem).

[–] [email protected] 11 points 1 week ago (8 children)

> If a feature is useful, people will use it, AI or not.

People will also use it if it's not useful, if it's the default.

A friend of mine did a search the other day to find the time of an event, and Google's AI lied to her. Top of the page, just completely wrong.

Luckily, I said, "That doesn't sound right," and checked the official site, where we found the truth.

Google is definitely forcing this out, even when it's inferior to other products. Hell, it's inferior to their own, existing product.

But people will keep using AI, because it's there, and it's right most of the time.

Google sucks. They should be broken up, and their leadership barred from working in tech. We could have had a better future. Instead we have this hallucinatory hellhole.

[–] ScoffingLizard 4 points 1 week ago* (last edited 1 week ago)

They need a tech ethics board, and people need a license to operate or work in decision-making capacities. Also, anyone above the head of the person making an unethical decision loses their license, too. Licenses should be cheap, to prevent monopoly, but you have to have one to handle data. Don't have a license? You don't have a company. Plant shitty surveillance without separate, noticeable, succinctly presented agreements that are clear and understandable, with warnings about currently misunderstood uses, and you lose your license. First offense.

Edit: Also, mandatory audits, with preformulated, separate, and succinct notifications: "This company sells your info to the government and police forces. Any private information, even sexual in nature, can be used against you. Your information will be used by several companies to build your complete psychological profile, to sell you things you wouldn't normally purchase and predict crimes you might commit."
