Privacy

3388 readers
59 users here now

Welcome! This is a community for all those who are interested in protecting their privacy.

Rules

PS: Don't be a smartass and try to game the system; we'll know you're breaking the rules when we see it!

  1. Be civil and no prejudice
  2. Don't promote big-tech software
  3. No apathy and defeatism for privacy (i.e. "They already have my data, why bother?")
  4. No reposting of news that was already posted
  5. No crypto, blockchain, NFTs
  6. No Xitter links (if absolutely necessary, use xcancel)

Related communities:

Some of these are only vaguely related, but great communities.

founded 8 months ago
 
  1. Persistent Device Identifiers

My ID (with one digit changed to preserve my privacy) is:

38400000-8cf0-11bd-b23e-30b96e40000d

Android assigns Advertising IDs, unique identifiers that apps and advertisers use to track users across installations and account changes. Google explicitly states:

“The advertising ID is a unique, user-resettable ID for advertising, provided by Google Play services. It gives users better controls and provides developers with a simple, standard system to continue to monetize their apps.” Source: Google Android Developer Documentation

This ID allows apps to rebuild user profiles even after resets, enabling persistent tracking.
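
For illustration, this is roughly how an app reads that identifier through Google Play services. A minimal Kotlin sketch, assuming the `play-services-ads-identifier` dependency and, on Android 13+, the `com.google.android.gms.permission.AD_ID` permission:

```kotlin
import android.content.Context
import com.google.android.gms.ads.identifier.AdvertisingIdClient

// Must be called off the main thread; getAdvertisingIdInfo() does blocking IPC.
fun readAdvertisingId(context: Context): String? =
    try {
        val info = AdvertisingIdClient.getAdvertisingIdInfo(context)
        // Null when the user has limited ad tracking or deleted the ID.
        if (info.isLimitAdTrackingEnabled) null else info.id
    } catch (e: Exception) {
        null // Play services unavailable or the AD_ID permission not granted
    }
```

Any app with ad SDKs embedded can make this same call, which is why the ID works as a cross-app join key until it is reset or deleted.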

  2. Tracking via Cookies

Android’s web and app environments rely on cookies with unique identifiers. The W3C (web standards body) confirms:

“HTTP cookies are used to identify specific users and improve their web experience by storing session data, authentication, and tracking information.” Source: W3C HTTP State Management Mechanism https://www.w3.org/Protocols/rfc2109/rfc2109

Google’s Privacy Sandbox initiative further admits cookies are used for cross-site tracking:

“Third-party cookies have been a cornerstone of the web for decades… but they can also be used to track users across sites.” Source: Google Privacy Sandbox https://privacysandbox.com/intl/en_us/
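
To make the mechanism concrete, here is a minimal Kotlin sketch of what such a tracking cookie looks like on the wire; the header value and the `.ads.example` domain are made up for illustration:

```kotlin
import java.net.HttpCookie

fun main() {
    // A fabricated Set-Cookie value of the kind an embedded ad server returns:
    // nothing but a long-lived unique identifier.
    val header = "uid=38400000-8cf0-11bd-b23e-30b96e40000d; Max-Age=31536000; Domain=.ads.example; Path=/"
    val cookie = HttpCookie.parse(header).first()
    println("name=${cookie.name}, value=${cookie.value}, domain=${cookie.domain}")
    // Every later request to *.ads.example carries this ID back, so the ad
    // network recognizes the same browser on every site that embeds it.
}
```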

  3. Ad-Driven Data Collection

Google’s ad platforms, like AdMob, collect behavioral data to refine targeting. The FTC found in a 2019 settlement:

“YouTube illegally harvested children’s data without parental consent, using it to target ads to minors.” Source: FTC Press Release https://www.ftc.gov/news-events/press-releases/2019/09/google-youtube-will-pay-record-170-million-settlement-over-claims

A 2022 study by Aarhus University confirmed:

“87% of Android apps share data with third parties.” Source: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies https://dl.acm.org/doi/10.1145/3534593

  4. Device Fingerprinting

Android permits fingerprinting by allowing apps to access device metadata. The Electronic Frontier Foundation (EFF) warns:

“Even when users reset their Advertising ID, fingerprinting techniques combine static device attributes (e.g., OS version, hardware specs) to re-identify them.” Source: EFF Technical Analysis https://www.eff.org/deeplinks/2021/03/googles-floc-terrible-idea
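
A hedged Kotlin sketch of what the EFF is describing: a handful of stable, permission-free attributes hashed into a single identifier that survives an advertising-ID reset (real trackers combine far more signals than this):

```kotlin
import android.content.Context
import android.os.Build
import java.security.MessageDigest

// Combines stable, permission-free device attributes into one hash.
// Resetting the advertising ID changes none of these inputs.
fun deviceFingerprint(context: Context): String {
    val dm = context.resources.displayMetrics
    val signals = listOf(
        Build.MANUFACTURER,
        Build.MODEL,
        Build.VERSION.RELEASE,
        Build.SUPPORTED_ABIS.joinToString(","),
        "${dm.widthPixels}x${dm.heightPixels}@${dm.densityDpi}dpi"
    ).joinToString("|")
    return MessageDigest.getInstance("SHA-256")
        .digest(signals.toByteArray())
        .joinToString("") { "%02x".format(it) }
}
```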

  5. Hardware-Level Tracking

Google’s Titan M security chip, embedded in Pixel devices, operates independently of software controls. Researchers at Technische Universität Berlin noted:

“Hardware-level components like Titan M can execute processes that users cannot audit or disable, raising concerns about opaque data collection.” Source: TU Berlin Research Paper https://arxiv.org/abs/2105.14442

Regarding Titan M: much of the research on it is being taken down, and very little remains online. The paper below is one of the few still available today.

"In this paper, we provided the first study of the Titan M chip, recently introduced by Google in its Pixel smartphones. Despite being a key element in the security of these devices, no research is available on the subject and very little information is publicly available. We approached the target from different perspectives: we statically reverse-engineered the firmware, we audited the available libraries on the Android repositories, and we dynamically examined its memory layout by exploiting a known vulnerability. Then, we used the knowledge obtained through our study to design and implement a structure-aware black-box fuzzer, mutating valid Protobuf messages to automatically test the firmware. Leveraging our fuzzer, we identified several known vulnerabilities in a recent version of the firmware. Moreover, we discovered a 0-day vulnerability, which we responsibly disclosed to the vendor."

Ref: https://conand.me/publications/melotti-titanm-2021.pdf

  6. Notification Overload

A 2021 UC Berkeley study found:

“Android apps send 45% more notifications than iOS apps, often prioritizing engagement over utility. Notifications act as a ‘hook’ to drive app usage and data collection.” Source: Proceedings of the ACM on Human-Computer Interaction https://dl.acm.org/doi/10.1145/3411764.3445589

How can this be used nefariously?

Let's say you are a person who believes in Truth and searches all over the net for it. You find some things that are true. You post them somewhere. And you are taken down. You accept it, since this is ONLY one time.

But this is where YOU ARE WRONG.

THEY can easily obtain your IDs - specifically your advertising ID, or any of the others above. They send these to Google to find out which email accounts are associated with those IDs. With 99.9% accuracy, AI can pick out the correct email, because your email and ID will have SIMULTANEOUSLY logged into Google thousands of times in the past.

Then they can CENSOR you ACROSS the internet - YouTube, Reddit, etc. - because they know your ID. Even if you change your phone, they still have your other IDs, like your email. You can't remove all of them. This is how it can be used for CENSORING. (They will shadow-ban you, and you won't even know it.)

A Trust Report for DeepSeek R1 by VIJIL, a security research company, indicates critical levels of risk for security and ethics; high levels of risk for privacy, stereotype, toxicity, hallucination, and fairness; a moderate level of risk for performance; and a low level of risk for robustness.

@privacy Privacy Roundup: Week 12 of Year 2025

Posted from Mastodon to Lemmy.

Some of the more interesting stories last week include Android apps using Bluetooth and Wi-Fi information to collect user location data, Apple Passwords using insecure HTTP, and threat actors using Reddit posts in cryptocurrency subreddits to push Lumma stealer.

https://avoidthehack.com/privacy-week12-2025

cross-posted from: https://lemmy.sdf.org/post/31525284

Archived

[...]

While the financial, economic, technological, and national-security implications of DeepSeek’s achievement have been widely covered, there has been little discussion of its significance for authoritarian governance. DeepSeek has massive potential to enhance China’s already pervasive surveillance state, and it will bring the Chinese Communist Party (CCP) closer than ever to its goal of possessing an automated, autonomous, and scientific tool for repressing its people.

Since its inception in the early 2000s, the Chinese surveillance state has undergone three evolutions. In the first, which lasted until the early 2010s, the CCP obtained situational awareness — knowledge of its citizens’ locations and behaviors — via intelligent-monitoring technology. In the second evolution, from the mid-2010s till now, AI systems began offering authorities some decision-making support. Today, we are on the cusp of a third transformation that will allow the CCP to use generative AI’s emerging reasoning capabilities to automate surveillance and hone repression.

[...]

China’s surveillance-industrial complex took a big leap in the mid-2010s. Now, AI-powered surveillance networks could do more than help the CCP to track the whereabouts of citizens (the chess pawns). It could also suggest to the party which moves to make, which figures to use, and what strategies to take.

[...]

Inside China, such a network of large-scale AGI [artificial general intelligence] systems could autonomously improve repression in real time, rooting out the possibility of civic action in urban metropolises. Outside the country, if cities such as Kuala Lumpur, Malaysia — where China first exported Alibaba’s City Brain system in 2018 — were either run by a Chinese-developed city brain that had reached AGI or plugged into a Chinese city-brain network, they would quietly lose their governance autonomy to these highly complex systems that were devised to achieve CCP urban-governance goals.

[...]

As China’s surveillance state begins its third evolution, the technology is beginning to shift from merely providing decision-making support to actually acting on the CCP’s behalf.

[...]

DeepSeek [...] is this technology that would, for example, allow a self-driving car to recognize road signs even on a street it had never traveled before. [...] The advent of DeepSeek has already impelled tech experts in the United States to take similar approaches. Researchers at Stanford University managed to produce a powerful AI system for under US$50, training it on Google’s Gemini 2.0 Flash Thinking Experimental. By driving down the cost of LLMs, including for security purposes, DeepSeek will thus enable the proliferation of advanced AI and accelerate the rollout of Chinese surveillance infrastructure globally.

[...]

The next step in the evolution of China’s surveillance state will be to integrate generative-AI models like DeepSeek into urban surveillance infrastructures. Lenovo, a Hong Kong corporation with headquarters in Beijing, is already rolling out programs that fuse LLMs with public-surveillance systems. In Barcelona, the company is administering its Visual Insights Network for AI (VINA), which allows law enforcement and city-management personnel to search and summarize large amounts of video footage instantaneously.

[...]

The CCP, with its vast access to the data of China-based companies, could use DeepSeek to enforce laws and intimidate adversaries in myriad ways — for example, deploying AI police agents to cancel a Lunar New Year holiday trip planned by someone required by the state to stay within a geofenced area; or telephoning activists after a protest to warn of the consequences of joining future demonstrations. It could also save police officers’ time. Rather than issuing “invitations to tea” (a euphemism for questioning), AI agents could conduct phone interviews and analyze suspects’ voices and emotional cues for signs of repentance.

[...]

cross-posted from: https://infosec.pub/post/25439167

A new study reveals that thousands of Android apps covertly collect location data using Bluetooth and WiFi beacons, allowing continuous tracking and profiling of users without explicit consent.
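
As a rough Kotlin sketch of the technique the study describes (not the study's own code): an app that has been granted location permission for any reason can harvest the BSSIDs of nearby Wi-Fi access points and resolve them to a physical location through a Wi-Fi geolocation database, without ever touching GPS:

```kotlin
import android.annotation.SuppressLint
import android.content.Context
import android.net.wifi.WifiManager

// Requires ACCESS_FINE_LOCATION at runtime; the permission check is omitted here.
@SuppressLint("MissingPermission")
fun nearbyAccessPoints(context: Context): List<String> {
    val wifi = context.applicationContext
        .getSystemService(Context.WIFI_SERVICE) as WifiManager
    // Each BSSID is a globally unique router MAC that geolocation databases
    // can map to a street address; signal strength narrows the position further.
    return wifi.scanResults.map { "${it.BSSID} (${it.level} dBm)" }
}
```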

submitted 4 months ago* (last edited 4 months ago) by HoracyDarcy to c/privacy
 
 

Hi there! If I buy a computer on Amazon from a trusted brand like Apple or Lenovo, will they keep my name and address and connect that information to the serial numbers of the computer's parts? I'm especially worried about this when I visit various websites that track serial numbers from my PC while browsing, or when using gaming platforms like Steam. Would it be wiser to purchase it from a physical store using cash, or to buy a second-hand computer from eBay instead?


I've always had it on, but it's kind of a pain in the ass, especially on worse (not necessarily slower) networks.

On a laptop it's fine for the most part, since the use case is a bit different, but on a phone it's causing me some annoyances/issues.

With my carrier, indoors, it takes 62 seconds on average to connect. This is pretty annoying when toggling or switching VPN servers more often.
But when travelling (e.g. on a train) it can make the difference between a slightly spotty signal and almost never connecting successfully, which hurts usability.

As such, I often find myself not even using the VPN in such cases in the first place.
For comparison, plain WireGuard connects before I can pull my finger away from the "connect" button, usually even on 2G EDGE.

Do you keep this (perhaps a bit paranoid-level) option on?
Even if it turns out to be useful in the future, it would only protect traffic recorded between the user and the VPN anyway.


cross-posted from: https://lemmy.sdf.org/post/31274457

Archive

An exploitation avenue found by Trend Micro in Windows has been used in an eight-year-long spying campaign, but there's no sign of a fix from Microsoft, which apparently considers this a low priority.

The attack method is low-tech but effective, relying on malicious .LNK shortcut files rigged with commands to download malware. While appearing to point to legitimate files or executables, these shortcuts quietly include extra instructions to fetch or unpack and attempt to run malicious payloads.

Ordinarily, the shortcut's target and command-line arguments would be clearly visible in Windows, making suspicious commands easy to spot. But Trend's Zero Day Initiative said it observed North Korea-backed crews padding out the command-line arguments with megabytes of whitespace, burying the actual commands deep out of sight in the user interface.

Trend reported this to Microsoft in September last year and estimates that it has been used since 2017. It said it had found nearly 1,000 tampered .LNK files in circulation but estimates the actual number of attacks could have been higher.

"This is one of many bugs that the attackers are using, but this is one that is not patched and that's why we reported it as a zero day," Dustin Childs, head of threat awareness at the Zero Day Initiative, [said].

"We told Microsoft but they consider it a UI issue, not a security issue. So it doesn't meet their bar for servicing as a security update, but it might be fixed in a later OS version, or something along those lines."

[...]
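
For readers who want to check their own shortcuts, here is a rough Kotlin heuristic inspired by the behaviour described above; it is not Trend Micro's tooling, and the run-length threshold is an arbitrary assumption. .LNK string data is UTF-16LE, so whitespace padding shows up as a very long run of the byte pair `0x20 0x00`:

```kotlin
import java.io.File

// Flag any .LNK file containing an unusually long run of UTF-16LE spaces.
fun looksPadded(lnk: File, threshold: Int = 4096): Boolean {
    val bytes = lnk.readBytes()
    var run = 0
    var i = 0
    while (i + 1 < bytes.size) {
        if (bytes[i] == 0x20.toByte() && bytes[i + 1] == 0x00.toByte()) {
            run++
            if (run >= threshold) return true
            i += 2
        } else {
            run = 0
            i++
        }
    }
    return false
}

fun main() {
    File("C:/Users").walkTopDown()
        .filter { it.isFile && it.extension.equals("lnk", ignoreCase = true) }
        .forEach { if (looksPadded(it)) println("Suspicious padding: ${it.path}") }
}
```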


Fastbackgroundcheck.com says there's info on me on TruthFinder, Spokeo, PeopleFinders, and Instant Checkmate. Going through all four of those sites takes a super long time, including the few times in the past when I tried getting reports on myself.

The progress bars reach 100% and reset continuously. If these sites are legitimate, as some Reddit users claim, then why not be upfront about wanting me to pay? Right now I'm convinced that these sites are snake oil; maybe they work if you pay, but the behavior of the free options turns me off. They act 100% like typical scam websites, the kind that ask you to complete three surveys on external sites with fake progress bars.

Basic info like my full name, address, age, and siblings can easily be found with search engines, but I feel like there's no point in trying to wipe it if there aren't methods that would definitely work.


Archived version

Here is an Invidious link for the video (and 'Lola' part starts at ~5 minutes)

To demonstrate this, Sadoun introduces the audience to “Lola,” a hypothetical young woman who represents the typical web user that Publicis now has data about. “At a base level, we know who she is, what she watches, what she reads, and who she lives with,” Sadoun says. “Through the power of connected identity, we also know who she follows on social media, what she buys online and offline, where she buys, when she buys, and more importantly, why she buys.”

It gets worse. “We know that Lola has two children and that her kids drink lots of premium fruit juice. We can see that the price of the SKU she buys has been steadily rising on her local retailer’s shelf. We can also see that Lola’s income has not been keeping pace with inflation. With CoreAI, we can predict that Lola has a high propensity to trade down to private label,” Sadoun says, meaning that the algorithm apprehends whether Lola is likely to start buying a cheaper brand of juice. If the software decides this is the case, the CoreAI algo can automatically start showing Lola ads for those reduced price juice brands, Sadoun says.


@privacy Privacy Roundup: Week 11 of Year 2025

Hi Lemmy, shared with <3 from Mastodon.

https://avoidthehack.com/privacy-week11-2025

Blur Your House On Google (support.google.com)
submitted 4 months ago by [email protected] to c/privacy
 
 

cross-posted from: https://europe.pub/post/9313

cross-posted from: https://europe.pub/post/9311

In case you ever wanted to blur your house on Google Street View, you can. A little privacy, I suppose, and it's pretty easy; you don't need a reason to do it. This is probably the only thing Google lets you opt out of, which is cool.

Originally posted on Reddit
