A1kmm

joined 2 years ago
[–] [email protected] 3 points 4 weeks ago (1 children)

I think it was an 18th-century British fad that spread to America - for example, look at the date on this London newspaper from 1734:

London Gazette, November 5 1734 - although in the body text it also uses the other format when referring to "last month".

It didn't make it into legal documents / laws, which still used the more traditional format like: "That from and after the Tenth Day of April, One thousand seven hundred and ten ...". However, the American Revolution effectively froze many British fashions from that point in time in place (as another example, see speaking English without the trap/bath split, which was a subsequent trend in the Commonwealth).

The fad eventually died out and most of the world went back to the more traditional format, but it persisted in the USA.

[–] [email protected] 4 points 1 month ago

When pressed, the boss admitted they'd hired a lemon.

[–] [email protected] 47 points 1 month ago

I think detecting that something bad is happening, finding out how, and stopping it prevents other people from being affected. Otherwise contamination incidents could go on for years, increasing both the cumulative exposure of those affected and the total number of people affected.

[–] [email protected] 5 points 1 month ago

GENEVA CONVENTION relative to the Treatment of Prisoners of War of 12 August 1949, Article 52: "Unless he be a volunteer, no prisoner of war may be employed on labour which is of an unhealthy or dangerous nature. No prisoner of war shall be assigned to labour which would be looked upon as humiliating for a member of the Detaining Power's own forces. The removal of mines or similar devices shall be considered as dangerous labour."

Sometimes I wonder if they are trying to get a high score by committing every possible war crime.

[–] [email protected] 2 points 1 month ago

Possibly "Making History" by Stephen Fry - although at 380 pages it doesn't quite match as a short story, and the protagonist doesn't stop himself so much as do something else to reverse the effects of his actions to save Hitler.

[–] [email protected] 7 points 1 month ago* (last edited 1 month ago)

Apparently the xitter tweet was a eulogy for Yahya Sinwar.

Now Yahya Sinwar was a war criminal, so they kind of have a point.

However, if that is the standard they set, then saying anything positive about Benjamin Netanyahu, Yoav Gallant, Ron Dermer, Aryeh Deri, Benny Gantz, Gadi Eisenkot, Bezalel Smotrich or Itamar Ben-Gvir, all of whom are also leaders who have supported war crimes, should likewise be grounds for having awards rescinded. But what are the chances that there is a double standard?

Perhaps a good approach is to check other recipients who are pro-Zionist‡ and see if they have anything praising war criminals, and complain - if there is no similar response, it is clear there is a double standard.

‡: And before anyone tries to twist my words as a smear, I define a modern Zionist in the usual way as someone who wants to expand the state of Israel beyond the 1967 boundaries, other than as a one-state solution with the consent of the people of the lands.

[–] [email protected] 2 points 1 month ago

I think the whole case seems super suss.

The photos of someone in the area look nothing like him.

But supposedly they found him days later, based on someone recognising him (from what? He doesn't even look like the publicly shared suspect photos). And despite supposedly having travelled a great distance - far enough to scatter any evidence where it would never be recovered - he happened to have a complete set of evidence on him, including a paper "manifesto" and the weapon. That seems like a rather unlikely story. And then they try to seek the death penalty, and double up federal and state charges.

I think what happened is the authorities decided they probably would never find the real killer, but it was also unacceptable not to have someone to blame - they'd rather kill an innocent to send a message than let crime against the rich go without a response. So they picked some random they didn't like and set him up.

[–] [email protected] 10 points 1 month ago (3 children)

In Australia, there is a strong presumption towards keeping left as a pedestrian (and overtaking on the right - e.g. etiquette on escalators is to keep left, but if you are walking up the escalator, overtake to the right).

In some particularly busy places (especially on shared footpath / bike lane zones) there are even arrows on the pavement to ensure tourists know what side to keep to.

There are always a few people (probably tourists) who don't follow the local etiquette.

[–] [email protected] 4 points 1 month ago (1 children)

Or at least the other way around. Reddit is banned from me.

[–] [email protected] 1 points 1 month ago

to lose 100% of the court cases where they try this defense

I don't think the litigants actually know this. The shady characters they are paying for the information probably know that, but represent that it will just work if they do it right.

Imagine you have some kind of legal problem, and you go to your lawyer, and your lawyer tells you they know what to do that will let you win. You'll probably do it. For the litigants, it is the same thing, except instead of a lawyer, it is some person with an Internet and/or real-life following, who dazzles you with lots of fake formality that aligns with your preconceptions of the legal system based on TV. Of course, it is all just pseudolegal and a scam, but you don't know that.

Now you might expect that some critical thinking and/or research of authoritative sources like case law, or consulting a real lawyer, might let the litigant see that it is a scam, but critical thinking skills are not as common as you might hope, and secondary education in many places doesn't cover much about the law or how to do legal research.

Consider that 49.8% of voters in the 2024 US Presidential election voted for Trump, even after seeing the first term. Many people are easily hoodwinked into acting against their own best interests, especially if they are convinced there is a community of other people like them acting the same way (SovCit-like groups do have some numbers), that people who endorse those theories get a lot of recognition / are influential (the leaders of the groups can create that impression), and that their theories have a long traditional backing (usually they make up a historical backstory).

[–] [email protected] 0 points 1 month ago

That Catholics should practise confession is a religious belief. But the confidentiality part is from canon law - i.e., in the terminology of most other organisations, it is a policy. It is a long-standing policy, with punishment of priests who break it dating back to at least the 12th century, but nonetheless the confidentiality is only a policy within a religious organisation, not a religious belief.

Many organisations punish individuals who break their policies. But if an organisation has a policy, and insists that it be followed even when following it is contrary to the law and would do immense harm to vulnerable individuals, then I think it is fair to call that organisation evil - and to hold it culpable for harm resulting from that policy.

Even if the confidentiality were itself a core part of the religious belief, religious freedom does not generally extend to violating the rights of others, even if the religion demands it. Engaging in violent jihad, for example, is not a protected right even in places where religious freedom cannot be limited, even if the person adheres to a sect that requires it.

[–] [email protected] 12 points 1 month ago

"Except for Claims (i) in which a party is attempting to protect its intellectual property rights (such as its patent, copyright, trademark, trade secret, anti-circumvention, or moral rights, but not including its privacy or publicity rights) ..."

So in other words, for the types of matters where Nintendo thinks it might have disputes against users, courts and class actions are okay; but for everything they think users might file against Nintendo, they think arbitration is best.

 

Spoiler: He was the instar pupa.

 

tls-attestproxy is currently only a shell of a program. However, reproducible aarch64-linux images (ready for import to run on cloud providers) can be produced now using nix flakes. The image performs measured boot into the TPM2 PCRs, using grub2 and Linux, before calling into an initramfs with a tiny init script (which gets an IPv6 address using udhcpc6 from busybox) and then executes the Rust HTTP server.

The firmware measures the grub2 EFI (with a reproducible hash) into PCRs, and the grub2 EFI measures the hashes of the kernel, initramfs, and grub command line. The attestation will be that the private key corresponding to a public key has a policy locking it down to the expected PCR values - and since it is a reproducible build over Free/Open Source software, anyone can verify what tls-attestproxy does and doesn't do with the keys.

For now, it is available on GitHub, but I might consider moving it to somewhere else later!

94
submitted 5 months ago* (last edited 5 months ago) by [email protected] to c/[email protected]
 

Today, lemmy.amxl.com suffered an outage because the rootful Lemmy podman container crashed out, and wouldn't restart.

Fixing it turned out to be more complicated than I expected, so I'm documenting the steps here in case anyone else has a similar issue with a podman container.

I tried restarting it, but got an unexpected error that the internal IP address (which I hand-assign to containers) was already in use, despite the fact the container wasn't running.

I create my Lemmy services with podman-compose, so I deleted the Lemmy services with podman-compose down, and then re-created them with podman-compose up - that usually fixes things when they are really broken. But this time, I got a message like:

level=error msg="IPAM error: requested ip address 172.19.10.11 is already allocated to container ID 36e1a622f261862d592b7ceb05db776051003a4422d6502ea483f275b5c390f2"

The only problem is that the referenced container didn't exist at all in the output of podman ps -a - in other words, podman thought the IP address was in use by a container that it didn't know anything about! The IP address had effectively been 'leaked'.

After digging into the internals, and a few false starts trying to track down where the leaked info was kept, I found it was kept in a BoltDB file at /run/containers/networks/ipam.db - that's apparently the 'IP allocation' database. Now, the good thing about /run is it is wiped on system restart - although I didn't really want to restart all my containers just to fix Lemmy.

BoltDB doesn't come with a lot of tools, but you can install a TUI editor like this: go install github.com/br0xen/boltbrowser@latest.

I made a backup of /run/containers/networks/ipam.db just in case I screwed it up.

Then I ran sudo ~/go/bin/boltbrowser /run/containers/networks/ipam.db to open the DB (this will lock the DB and stop any containers starting or otherwise changing IP statuses until you exit).

I found the networks that were impacted, and expanded the bucket (BoltDB has a hierarchy of buckets, and eventually you get key/value pairs) for those networks, and then for the CIDR ranges the leaked IP was in. In that list, I found a record with a value equal to the container that didn't actually exist. I used D to tell boltbrowser to delete that key/value pair. I also cleaned up under ids - where this time the key was the container ID that no longer existed - and repeated for both networks my container was in.

I then exited out of boltbrowser with q.

After that, I brought my Lemmy containers back up with podman-compose up -d - and everything then worked cleanly.
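
For anyone who'd rather script the fix than drive a TUI, here is a minimal Go sketch of the same deletion using the bbolt library directly. The bucket layout (network name → CIDR → IP keys, plus an ids bucket) matches what I described above, but the exact bucket names and key encodings may differ between podman versions - inspect your own ipam.db (and keep the backup!) before running anything like this:

```go
package main

import (
	"log"

	bolt "go.etcd.io/bbolt"
)

func main() {
	// Opening the DB takes the same lock boltbrowser does, so nothing can
	// change IP allocations while we edit.
	db, err := bolt.Open("/run/containers/networks/ipam.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Placeholder network name, CIDR, IP and container ID - substitute the
	// values from your own leaked allocation.
	const leakedID = "36e1a622f261862d592b7ceb05db776051003a4422d6502ea483f275b5c390f2"
	err = db.Update(func(tx *bolt.Tx) error {
		network := tx.Bucket([]byte("lemmy-net"))
		if network == nil {
			return nil // nothing to clean up for this network
		}
		if cidr := network.Bucket([]byte("172.19.10.0/24")); cidr != nil {
			// Drop the leaked IP -> container ID record. Check the key
			// encoding (dotted string vs raw bytes) against what you see
			// in boltbrowser first.
			if err := cidr.Delete([]byte("172.19.10.11")); err != nil {
				return err
			}
		}
		if ids := network.Bucket([]byte("ids")); ids != nil {
			// And the reverse container ID -> IP record.
			return ids.Delete([]byte(leakedID))
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```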

 

Since Project Uniquonym is aiming to use HTTPS / TLS transcripts from government sites as the starting point to establish unique identities, figuring out how to create a proof of the transcript is a key goal.

I had originally hoped that it would be possible to use individually owned TPM2.0 devices to create this proof for TLS 1.3. However, this has hit some snags:

  • Implementing GCM is problematic on TPM2.0, and that rules out the only mandatory ciphersuite supported by TLS 1.3.
  • There are two other 'should implement' ciphersuites - one other GCM one, and one using ChaCha20. ChaCha20 is not supported on TPM2.0.
  • In practice, major implementations of TLS 1.3 do not support any ciphers beyond those three.
  • There are TLS 1.2 ciphersuites that might be more feasible to implement. However, it's over 6 years since TLS 1.3 was released, and browser adoption is now high. I expect it is only a matter of time before TLS 1.2 support becomes rare, so it seems like a dead-end approach to invest into.

This means it is time to pivot to an alternative approach to creating the transcripts.

Here are some options:

Use some other integrity mechanism than plain TLS

If every government issued a cryptographically signed certificate to citizens, or implemented TLS-N, this would be a viable option.

In reality, I think that very few people would be able to use this due to the limited number of governments supporting it, so it would be unlikely to allow for an effective rollout of uniquonyms any time soon, unfortunately.

Use an alternative locally hosted trusted element rather than TPM2.0

One option is to use CPU "Trusted Execution Environments". There are some alternatives that could apply here. Some AMD CPUs support SEV to allow for protected execution, ARM has TrustZone, and Intel has SGX.

Unfortunately (or fortunately perhaps, because most applications of these devices are actually user-hostile), all of these solutions have known flaws - such as voltage glitching, where carefully timed application of the wrong voltage to the CPU causes a predictable logic fault and circumvents the security. Even where the attacks have a relatively low success rate, these flaws make this technology unusable for protecting the private inputs to zero-knowledge proofs - the leak of the attestation key from a single device would allow someone to create millions of fake identities.

There are other types of trusted elements (for example, various smart cards), but there would be significant distribution barriers to getting them rolled out, and it isn't clear they offer more primitives than a TPM2.0 anyway, nor how attestation would work with them.

One day, the security of TEEs against attackers with physical access might improve, and it is possible that Uniquonyms could use these devices in the future. However, as of now, I think the locally hosted trusted element approach is, unfortunately, ruled out.

Rely on trusted third parties

The previous attempt was to use TPM 2.0 devices as a local equivalent to a trusted third party.

The alternative is to have the hardware be run by actual third parties that we would need to put trust in. The most viable approach for this is for the trusted parties to be major public cloud hosting providers - see below for a discussion of why this is feasible.

In relation to any one country, the cloud providers would have a similar level of trust to the governments in the system: they could theoretically use their position of trust to enable the creation of fake uniquonyms (i.e. allow one person in a country to have an arbitrary number of uniquonyms). They would not be able to unmask which individual a uniquonym belongs to. One point of difference is that the same cloud providers might have access to fake identities across multiple countries (vs a single government being able to fake identities only in their own country) - which could increase the risk of a cloud provider being targeted by a government that has jurisdiction over them. In cases where this is a concern, an option could be to trust different cloud providers by country.

There is a question of how trustworthy the big providers actually are. I would not trust Amazon not to union bust, Google not to use dark UI patterns to trick people into opting in to giving them more data, or Microsoft not to enshittify a product to squeeze consumers. However, their public cloud offerings, and particularly the confidential computing parts of them, are a bit different. They all contractually promise their customers not to use the data outside very limited circumstances, they all make a lot of money from maintaining their customers' trust that they won't do such things, and all are externally audited on the security controls that prevent staff misuse. So I'd say it is easier to trust them not to do dodgy things with regard to these cloud computing products.

It is worth noting that this trust model - placing some trust in some big providers - is how the public key infrastructure for X.509 certificates works. Any Certificate Authority trusted by browsers could theoretically start issuing bogus certificates (and in the past there have been cases where this happened). The trust in this system is established by browsers (on behalf of their users) only trusting Certificate Authorities that comply with rigorous policies - including audits, commitment to prompt disclosure and revocation in the case of misissued certificates, and so on, with forums like the CA/Browser Forum to establish standards. All of the big public cloud providers are actually also Certificate Authorities. Generally speaking, the root of trust for verifying an identity to support uniquonyms is a CA root certificate for a government website.

As such, a solution which requires the use of a cloud provider service (either paid for by the end user, or by someone else offering a service to them) once per renewal of a uniquonym isn't so bad.

How to establish non-interactive trust in code running on a cloud provider

Amazon AWS, Google GCP and Microsoft Azure all support running cloud compute with a vTPM (virtual TPM) in measured boot mode - each component, starting from the cloud provider's virtual firmware, creates a hash of the next component to execute, and updates a 'Platform Configuration Register' (PCR) in an irreversible way. It is impossible to reset most of the PCRs - only to append new hashes to them - meaning that an attestation of the PCR status is an attestation of the exact code running in a container (short of the cloud provider providing an exploit). The vTPM can be used to get a certificate chain, linking back to a root from the cloud provider, attesting that a particular key is resident in the vTPM (with a flag prohibiting it leaving the vTPM), and that access to the key is contingent on the PCRs having particular values. This means that only the expected code can ask the TPM to sign a particular attestation. That expected code can then do things like attest to the TLS 1.3 transcript being genuine - creating a chain that requires no trust in anyone except the cloud provider.

This proof can then be used as a private input to construct a zero-knowledge proof (anywhere, not necessarily in the cloud).
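
As a rough illustration of why the PCR mechanism works (this is not tls-attestproxy code, and the stage names are made up), a verifier can replay the expected measurements and compare against the attested PCR value:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// extend models the one-way TPM PCR update: the new PCR value is the hash
// of the old value concatenated with the digest of the measured component.
// This is why PCRs can be appended to, but never rewound.
func extend(pcr [32]byte, component []byte) [32]byte {
	digest := sha256.Sum256(component)
	return sha256.Sum256(append(pcr[:], digest[:]...))
}

func main() {
	var pcr [32]byte // PCRs start as all zeroes on (v)TPM reset
	// Hypothetical boot chain; real measurements hash the actual firmware,
	// bootloader, kernel, initramfs, and command line bytes.
	for _, stage := range [][]byte{
		[]byte("virtual-firmware"),
		[]byte("grub2-efi"),
		[]byte("kernel+initramfs+cmdline"),
	} {
		pcr = extend(pcr, stage)
	}
	// A verifier compares this replayed value against the PCR value in the
	// vTPM's signed attestation; any change to any stage changes the result.
	fmt.Println("expected PCR:", hex.EncodeToString(pcr[:]))
}
```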

Do we have to trust the cloud provider with the contents of the TLS 1.3 transcript?

The TLS 1.3 transcript likely contains sensitive information - such as a cookie for a logged in session to a government website.

It would technically be possible not to expose the sensitive data to the cloud provider at all; there is work such as TLSNotary which uses multi-party computation (MPC) to spread a TLS 1.2 implementation across two nodes, so that all the nodes are confident as to the transcript, but can't see confidential data from other nodes.

However, this produces highly inefficient transcript proofs and is slow enough it might result in timeouts. It could theoretically be updated to TLS 1.3, but the existing implementation is only TLS 1.2.

It is possible instead to attest to an encryption public key resident in the vTPM, and have the client encrypt the sensitive data to it, so that it can only be decrypted by the container running the correct software (short of the cloud provider allowing access to the key against the policy, or signing a false attestation). If the software has been checked to be correct, this is probably a very low risk for users.
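
A minimal sketch of that flow in Go - with the vTPM key simulated locally; in the real design the private half would be created inside the vTPM, its public half delivered in a PCR-bound attestation, and a proper KDF such as HKDF used instead of a bare hash:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/ecdh"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	// Stand-in for the vTPM-resident decryption key; in the real design the
	// private half never leaves the vTPM, and the client learns the public
	// half from a PCR-bound attestation rather than generating it here.
	vtpmKey, _ := ecdh.P256().GenerateKey(rand.Reader)

	// Client side: ephemeral ECDH against the attested public key, then
	// symmetric encryption of the sensitive transcript material.
	clientKey, _ := ecdh.P256().GenerateKey(rand.Reader)
	shared, _ := clientKey.ECDH(vtpmKey.PublicKey())
	sym := sha256.Sum256(shared) // toy KDF; a real design would use HKDF

	block, _ := aes.NewCipher(sym[:])
	aead, _ := cipher.NewGCM(block)
	nonce := make([]byte, 12)
	rand.Read(nonce)
	sealed := aead.Seal(nil, nonce, []byte("Cookie: session=..."), nil)

	fmt.Printf("only the attested container can decrypt these %d bytes\n", len(sealed))
}
```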

Does the cloud instance need to be centrally maintained by one party, or can anyone run it?

Since the zero-knowledge proof would only check that the correct software is running in the cloud instance, and that there is a valid certificate chain back to one of the expected cloud providers, anyone could run the instance. We could give people a choice of which cloud provider to use.

Does every end user need to spin up their own instance on the cloud?

Not necessarily - this would be closer to a federated model where anyone who wanted to could spin up a TLS transcript verification service on one of the supported public clouds, but it would be possible to use one hosted by someone else (without even needing to trust that someone else, only the cloud provider).

Summary

Overall, the current approach of using attestation by public cloud providers is not as decentralised as I'd originally hoped would be possible. However, due to the obstacles hit with other options, I think it is the most realistic path forward, and I think it is still acceptable enough to be worth proceeding with.

 

I'm trying to figure out what ciphers can be implemented on a TPM2 to enable TLS1.3 with non-repudiable transcripts (via attested TPM2 audit logs).

It looks like GCM is not going to be possible because:

  • There is no complete GCM implementation available on the TPM2, so to get it to work, I'd need to implement it on top of a primitive such as AES in ECB mode.
  • However, to do that, I'd need to use the key to encrypt a zero block, and then the counter blocks IV, IV + 1, and so on, up to the length of the message needed (for the IVs in both directions - to verify messages from the server, and to send valid messages from the client). See the sketch below.
  • Once I have that information, however, I could use it to create an authentication tag for any message I wanted (up to the length I have encrypted counter blocks for), without creating any further records in the TPM2 audit log.
  • This means that the attested audit logs from the TPM2 are worthless, since they won't stop anyone forging a message.
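
As a concrete illustration of the second and third bullets (throwaway key and IV, with Go's single-block Encrypt standing in for the TPM's AES-ECB primitive): once these values have been computed, no further cipher calls - and hence no further audit records - are needed to encrypt or tag a message.

```go
package main

import (
	"crypto/aes"
	"encoding/binary"
	"encoding/hex"
	"fmt"
)

func main() {
	key := make([]byte, 16) // demo key; on the TPM this would never be released
	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}

	// The GHASH authentication key H is the cipher applied to a zero block.
	var h [16]byte
	block.Encrypt(h[:], make([]byte, 16))
	fmt.Println("GHASH key H:", hex.EncodeToString(h[:]))

	// With a 96-bit IV, GCM uses counter block IV||1 for the tag mask and
	// IV||2, IV||3, ... for the keystream XORed with the plaintext.
	iv := make([]byte, 12) // demo IV
	ctr := make([]byte, 16)
	copy(ctr, iv)
	for counter := uint32(1); counter <= 4; counter++ {
		binary.BigEndian.PutUint32(ctr[12:], counter)
		var ks [16]byte
		block.Encrypt(ks[:], ctr)
		fmt.Printf("E_K(IV||%d): %s\n", counter, hex.EncodeToString(ks[:]))
	}
	// H plus these blocks suffice to encrypt and authenticate any message up
	// to this length - entirely outside the TPM2 audit log.
}
```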

I'm continuing to look through other ciphersuites to see what is viable under this approach.

 

A key part of the protocol I plan to use to verify a TLS transcript is to do ECDH on the TPM2 to get a key that is flagged as not being able to leave the TPM2 (so that only the remote peer server + the TPM has direct access to the key, not the user). The key is used via the TPM2 for TLS, and then deleted off the TPM. The TPM2 then produces a signed attestation that the ECDH happened on the TPM, that the key never left it, that it was only used for the data included in the hash being signed, and that it was deleted at the end.

There are a few potential paths I considered, all leveraging the audit functionality of the TPM2 (which allows it to certify an audit log).

The biggest challenge is that every command specifies which audit session (if any) to use, so making sure there is no usage of the key between the records in the audit log is essential.
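
For background, 'certifying an audit log' works by the session keeping a running digest that is extended after every audited command, which the TPM will then sign (via TPM2_GetSessionAuditDigest). A verifier replays the expected command/response hashes, roughly like this sketch (SHA-256 assumed as the session hash algorithm, with made-up command data):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Per the TPM 2.0 spec, an audit session's digest is updated after each
// audited command as:
//   auditDigest_new = H(auditDigest_old || cpHash || rpHash)
// where cpHash/rpHash are hashes over the command and response parameters.
func updateAuditDigest(auditDigest, cpHash, rpHash [32]byte) [32]byte {
	buf := append(auditDigest[:], cpHash[:]...)
	buf = append(buf, rpHash[:]...)
	return sha256.Sum256(buf)
}

func main() {
	var digest [32]byte // the audit digest starts as zeroes
	// Hypothetical two-command log; in reality cpHash/rpHash are computed
	// from the marshalled TPM command and response buffers.
	commands := []struct{ cp, rp [32]byte }{
		{sha256.Sum256([]byte("cmd1")), sha256.Sum256([]byte("rsp1"))},
		{sha256.Sum256([]byte("cmd2")), sha256.Sum256([]byte("rsp2"))},
	}
	for _, c := range commands {
		digest = updateAuditDigest(digest, c.cp, c.rp)
	}
	// A verifier recomputes this chain and checks it against the digest in
	// the TPM's signed attestation.
	fmt.Println("expected audit digest:", hex.EncodeToString(digest[:]))
}
```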

Use the nonceTPM and verify it through the attested audit log?

It is possible to create a session (that can't leave the TPM2). When the session is created, you get a random nonce value back. Every time the authorisation is used, the nonce changes. So if it were possible to check that the nonce passed in matched the nonce returned by the previous command, it would be possible to guarantee there were no extra commands.

Unfortunately, the nonce doesn't seem to appear in the audit hash attested to in the audit logs since it isn't a command parameter.

Force the nonceTPM to be used against a policy session?

TPM2 allows an object (such as a key) to have a policy hash, so that it can only be accessed by a policy session that satisfies the policy. And a policy can require authorising with a particular session nonceTPM. This would force that nonceTPM into the log. This is effective for making sure a session is only used for authorisation once. However, the above problem with the response nonceTPMs not being in the logs makes this hard to rely on.

Use NV Indexes to limit how many times a policy session is used?

An NV Index references non-volatile memory on the TPM.

There is an NV Index type called 'PIN PASS' which increments every time it is used for authorisation, with a limit after which it stops working. It is possible to get the TPM2 to attest to the state of the NV Index. It can be locked down by policy to stop writes to it. So if it wasn't for loopholes in it, it could be used to attest that a key was only used the expected number of times to encrypt / decrypt / sign.

However, NV Indexes can be deleted and recreated, and their name used to reference them from policy doesn't include any random data - so it is possible for the user to recreate a PIN PASS with reset counts, and the attestation for this new object will be indistinguishable from the original, and will still work from the policy. This means effectively NV Index based solutions would allow people to forge TLS transcripts, and so it is not suitable for this application.

Use an exclusive audit session

The TPM2 specification allows for an audit session to be marked as 'exclusive'. It remains exclusive as long as there are no intervening commands to the TPM2 that don't use the same audit session.

I was hoping to avoid using an exclusive audit session, because generally a system only has one TPM, and an exclusive audit session means that for the duration of the TLS connection we are generating a proof over, the application would need exclusive use of the TPM (or be disrupted and need to start again).

However, I haven't found a better option despite trying out quite a few leads, and so I'm continuing research on the basis that exclusive audit sessions are the (only) way to go for now. This is disappointing given a few simple improvements to the TPM2 standard could have provided much better options for this application.

 

I'm logging my idea across a series of posts, with essays on different sub-parts of it, in a Lemmy community created for it.

What do you think - does anyone see any obvious problems that might come up as it is implemented? Is there anything you'd do differently?

There are still some big decisions (e.g. how to do the ZKP part, including what type of ZKPs to use), and some big unknowns (I'm still not certain implementing TLS 1.3 on TPM 2.0 primitives is going to stand up and/or create a valid audit hash attestation to go into the proof, and the proofs might test the limits of what's possible).

 

Let's say I visit an HTTPS website. There is a decent chance my browser will use TLS1.3 to connect to the server. It will verify the certificate, and establish an ephemeral symmetric key, and transmit an encrypted request, and decrypt the response. It would immediately solve the needs of Uniquonym if I could then generate a zero-knowledge proof that the certificate was signed by a particular CA, and that the transcript of the connection (request and response) met certain parameters.

However, there is a huge gap here. Since my browser, which I can control entirely, negotiated the symmetric key, I can:

  1. Encrypt an entirely different request to what I actually sent to the server, and/or,
  2. Encrypt an entirely different response from what I actually received from the server.

This response will look just as authentic as a real response - nothing in the TLS1.3 protocol provides non-repudiation to stop me doing that, because it isn't really designed for that.
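
A short Go sketch of the problem (placeholder key and nonce; real TLS 1.3 derives them from the handshake, but the client learns both directions' keys either way):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"fmt"
)

func main() {
	// The client derives the server's traffic key during the handshake, so
	// a malicious client holds everything needed to fabricate records.
	key := make([]byte, 16)
	nonce := make([]byte, 12)

	block, _ := aes.NewCipher(key)
	aead, _ := cipher.NewGCM(block)

	// "Encrypt" a response the server never sent...
	forged := aead.Seal(nil, nonce, []byte(`{"citizen_id":"whatever I like"}`), nil)

	// ...and it decrypts and authenticates exactly like a genuine record.
	plain, err := aead.Open(nil, nonce, forged, nil)
	fmt.Printf("authenticated: %v, plaintext: %s\n", err == nil, plain)
}
```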

There are proposals for extensions that, with the cooperation of the server, allow for TLS non-repudiation (e.g. TLS-N). If every government in the world could be convinced to adopt these on servers that prove user identities through their responses, this would enable Uniquonym. However, TLS-N has very limited adoption, and governments looking to monetise their identity systems probably are unlikely to go out of their way to help provide a free alternative. So relying on this option is probably unrealistic!

There are solutions that use multi-party computation to implement TLS to help interactively convince a group of users that a transcript is valid. However, they are relatively slow, don't scale to large groups, and will only convince group members present at the time of the original request - they won't produce a non-interactively verifiable proof. Making this into a non-interactive protocol is outside of what's possible with currently known cryptographic primitives, and unlikely to be feasible (and for the request part would be impossible - arbitrary data could be encrypted, and then state rolled back and the correct data encrypted, with the arbitrary data substituted later).

One alternative is to lean on so-called "trusted computing" - i.e. specialised hardware that can perform operations and produce signed attestations about the process. This is similar to introducing a trusted third party, except that it is owned by the person who wants to verify their identity (but designed so that person doesn't have access to the private keys to create false attestations). Data stays physically local, but there is still trust in the manufacturer of the trusted computing module to ensure keys are adequately protected, and not to sign certificates for fake modules (allowing fake transcripts to be attested to). This puts them in a similar position to governments - they can't unmask a Uniquonym, but they can create multiple fake ones. Since we don't have a clearly better option, this is the current area of research for Uniquonym.

Trusted computing is a controversial choice because the most common application is to reduce users' freedom - the FSF calls it Treacherous Computing. Using it to create pseudonyms to escape censorship and astroturfing manipulation to manufacture consent is turning a technology created mainly for bad uses around and using it for good.

There are multiple types of trusted computing.

One of the easiest options might be to use a Trusted Execution Environment (TEE) like AMD Secure Encrypted Virtualization (SEV) and Intel SGX - which leverage existing CPUs, but let them run in a secure mode that allows access to secrets. However, these are slightly finicky to use, and significantly, many versions of them have been compromised through under-voltage attacks to glitch the CPU. A compromise would allow for arbitrary issuing of fake identities, and due to the Uniquonym being pseudonymous and based on ZKP, it would be nearly impossible to respond.

A more secure option would be to leverage physical TPM 2.0 chips. These are designed to be much more robust against various physical attacks, since they are separate to the main CPU, and typically have features like protective wires set up to destroy the key if the chip is decapped and analysed with probes. Thanks to the efforts of Microsoft, who made it mandatory for Windows 11, TPM 2.0 chips are now fairly common. Note, however, that some TPM 2.0 implementations are actually virtual TPMs on top of AMD SEV / Intel SGX, so secure TPM 2.0 chips suitable for this application might be less common.

One difficulty with using TPM 2.0 is that they provide a limited range of cryptographic operations. An open research question for Project Uniquonym is whether this is enough to produce an attested TLS 1.3 transcript, with a cipher suite compatible with a sufficient number of servers to be useful, that can be used from a zero-knowledge proof.

 

As explained in https://lemmy.amxl.com/post/709344, government issued identities are an essential building block on top of which uniquonyms are created.

However, there is a diverse range of governments out there, and they all do things differently.

The simplest would be if they all acted like a Certificate Authority: you prove your ID to them and give them your public key, and they give you a certificate linking your public key to your unique ID. You then publish a document linking your ID number to your public key and a hash of a secret only you know, including the certificate - this establishes your public real-world identity. Later, you publish a message establishing your pseudonym as a hash of the secret and the namespace, with a zero-knowledge proof that a publicly disclosed identity with a valid government certificate exists somewhere in a particular tree, and that the secret matches.
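
In pseudocode-level Go (the exact hash construction and domain separation here are placeholders, not a finalised design), the published values would look something like:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

func main() {
	// A secret only the user knows; only its hash is published alongside
	// the government-certified identity.
	secret := []byte("long-random-user-secret") // placeholder

	commitment := sha256.Sum256(secret)
	fmt.Println("published with real identity:", hex.EncodeToString(commitment[:]))

	// The uniquonym binds the same secret to one namespace; the ZKP then
	// shows, without revealing the secret or which identity it was, that
	// some certified identity in the tree committed to this secret.
	pseudonym := sha256.Sum256(append(secret, []byte("|lemmy_1")...))
	fmt.Println("uniquonym in lemmy_1:", hex.EncodeToString(pseudonym[:]))
}
```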

However, most governments do not provide this Certificate Authority service unfortunately, and when they do, access to it is often heavily gated behind barriers to entry on who can use it and for what purpose. It also often has a fee, which would deter uptake. There are some private businesses which specialise in verifying government identities and issuing certificates - but this again has a fee, and introduces an additional trusted third party.

Many governments, however, have some form of online login system and portal (for government services) where people can log in using HTTPS (generally HTTP over TLS 1.3 these days), and between the contents of the request and the response, that will be enough to identify an individual uniquely. Leveraging this (combined with a per-country module system for understanding what needs to be proven about the HTTP request and response) would solve the needs of Uniquonyms. This requires a way to prove to others that a TLS 1.3 transcript is genuine - which is a very difficult problem, and a topic for a future post.

 

As discussed in another post, establishing unique real identities is a pre-requisite for applying the cryptographic tools that decouple a real identity from a pseudonymous one to make a uniquonym.

There are some existing solutions for ensuring people only have one identity:

  • https://proofofhumanity.id/ is one attempt using photos, videos, and humans vouching for others.
  • https://worldcoin.org/ scans irises and hashes the pattern.
  • Most governments around the world register births and assign unique government identifiers to individuals.

There is no perfect solution unfortunately - there are trade-offs.

Use photos and videos as in PoH

The biggest problem with this solution is that it is unlikely to be that hard to make AI-generated images and videos (after fine-tuning AI models for the task), and it is easy to vouch for a bot that vouches for a bot, and so on, until you have hundreds or thousands of identities. As such, it is unlikely to scale well.

Using iris scans as in WorldCoin

This solution attempts to solve for the problem of AI-generated images by using hardware produced and signed by a supposed 'trusted third party'. Anyone could generate an iris image with AI, but the official hardware has proprietary technology to check for liveness. The hardware is physically hardened against attacks, and has a key protected in hardware to sign iris hashes - that links through a certificate chain back to a certificate issued by the "World Foundation".

If you trust the "World Foundation", this solves the AI problem with PoH, and solves to an extent the problem of not trusting a government to do the right thing (except to the extent said government could order the World Foundation to comply, anyway). However, in exchange, it requires trust in WorldCoin - they could create an arbitrary number of identities.

It also has the problem of requiring a large investment to roll it out worldwide, and it is inconvenient to onboard to.

People in general do not trust the World Foundation. While it is a not-for-profit, it is associated with Sam Altman, who famously was able to get the board of OpenAI to resign and replace it with one loyal to him, and transition OpenAI towards being fully for-profit. World Foundation does not have members - only 4 directors who appoint their successors, and have complete control over it.

As such, a solution that already has a perception of being 'creepy' and which relies on a trusted-third party that doesn't seem all that trustworthy can probably be discounted. Another implementation of the same idea is unlikely at this point due to the entry costs.

Another problem is that it has faced legal challenges operating in some countries due to privacy or securities trading concerns.

Leveraging government issued identities

This solution relies on governments to decide who is a real person, and then leverages existing infrastructure.

This solution has the upside that it potentially has the lowest barrier to entry - at least in countries where people already prove their identity to the government.

There are a few downsides:

  • The government could create lots of fake people and use those identities to create fake prevalence signals (i.e. do the exact thing uniquonyms are supposed to protect against). This could be mitigated to an extent by making uniquonyms per-country - if 50,000 Russian-government-certified uniquonyms show up on a Ukrainian forum pretending to be Ukrainians, users can just filter them out. Within a country, all of the options require some degree of trust in governments - they could scan all their prisoners' irises, or force prisoners to hand over their private keys under any option, and use those to create inauthentic interactions.
  • Uniquonyms based on government issued identities would need to be unique per country. Someone with multiple citizenships or relationships with multiple governments might have a uniquonym per country per namespace.
  • There are technical complexities around implementation (to be covered in a separate post) unless governments provide a unique key
  • There is a trade-off (discussed below) between allowing a government to unmask uniquonyms for users under that government, or not allowing recovery if someone claims a uniquonym using stolen government credentials.

The unmasking vs recovery tradeoff

Suppose someone gets access to your computer while you are logged in to a government website. Suppose you don't yet have a uniquonym in a particular namespace, but the attacker goes ahead and creates one linked to your real identity under their own private key.

This is the uniquonym recovery problem. In an ideal world, after you re-prove your identity to the government, and kick out the identity thief, you'd be able to take over the uniquonym or somehow detach it from your real identity.

However, if this was allowed, the government could do the same without your cooperation - hence unmasking your real uniquonym.

So there is a trade-off to be had between safety from the government as a threat actor, and recovery following actions of non-government threat actors. Due to the design goal of protecting dissidents, the uniquonym architecture plans for now are to disallow recovery in this scenario.

However, this can be limited by making uniquonyms expire and require periodic re-verification (e.g. every 6 months). Then, recovery is to wait 6 months and claim a uniquonym after that. Forms of recovery other than someone making a new uniquonym against your identity are more solvable - e.g. lost private keys could be addressed by providing printable backup QR codes that can identify a uniquonym but have enough information to enable recovery. These would carry the risk that a government raid of a dissident could uncover one and lead to the dissident's pseudonym being found - but that is still a better threat posture, since an adversary that limits itself to online actions would have no way to identify the dissident's real identity.

Why the plan for uniquonyms is to leverage government identities

All the options have some drawbacks - there is no perfect option unfortunately.

However, robustness against AI-generated alts is almost certainly needed for uniquonyms to be a success, and reliance on the World Foundation as a trusted third party is not going to fly or be convenient enough for significant uptake.

Therefore, relying on government issued identities, with as many mitigations to the drawbacks as possible applied, is how I plan to take Project Uniquonym.

 

For a uniquonym to work, it is necessary for there to be some tie back to ensuring a person can't just create a new one. This requires some kind of way to identify a unique person that is hard to double up. I'll make some separate posts going deeper on this.

Next, it requires a way to prove that, within a namespace, a real identity has no more than one pseudonym linking to it - but without revealing the real identities to anyone. There are a range of different trade-offs here - a topic for another post.

 

An increasing proportion of our discourse occurs on the Internet - but it is increasingly fractured into echo chambers, which creates divides in society. The biggest cause of this is that if communities open up to dissenting views on polarising topics, it is very difficult to tell what is from an astroturfer / provocateur / troll who posts the same content under 50,000 names (possibly using AI technology), vs what is genuine. If using a uniquonym (a pseudonym that is unique - i.e. one real person can only have one within a particular namespace) brings trust that people are authentic, it should foster more genuine communication and re-unite communities.

One could argue that someone's real name is the ultimate uniquonym. The trouble is that fear of surveillance and persecution also represses people on the Internet in many parts of the world. It is not reasonable to expect real world consequences for online actions if those real world consequences are persecution and the online actions are speaking truth to power. For that reason, a uniquonym is only unique within a namespace (e.g. a particular website, or perhaps across a fediverse protocol) - and it should not be possible for anyone to work out someone's real identity from a uniquonym, or to correlate uniquonyms across namespaces.

Some degree of compartmentalisation is good for privacy. There could be a convention of accepting several namespaces per protocol - but with a number, where claiming a uniquonym in a lower numbered namespace gives it more credibility. For example, if I have a name in the namespace lemmy_1, I might be more credible than if I have a name in lemmy_53123, which suggests I might have 53,123 other names as well in other numbered namespaces! This will allow people to have some degree of compartmentalisation of different identities, while protecting their real identity, and preventing non-genuine interactions.
