/c/cybersecurity - Cybersecurity News & Discussion

2375 readers
1 users here now

A community for technical news and discussion of cybersecurity and closely related topics.

founded 5 years ago

Why does Stripe require OAuth tokens to pass through a third party server?

Can someone who understands OAuth better than I do explain why Stripe REQUIRES that their OAuth Access Tokens get shared with a third party?

I've tried RTFM, but my biggest hang-up is that the OAuth docs appear to describe a very different situation than mine. They usually describe a user agent (a web browser) as the client, and they talk about "your users" as if I have a bunch of users that I'm going to be fetching access tokens for.

Nah, this is server <--> server. I have a server. Stripe has a server. I am one user. All I need is ONE API key for ONE account. But I'm forced to use OAuth. It doesn't seem appropriate, and it's especially concerning that the "flow" requires the (non-expiring!) Access Token to be shared with a third party server. Why?!?

I recently learned that Stripe has been pushing OAuth (branded as "Stripe Connect") to its integration apps as the "more secure" solution, compared to Restricted API Keys. In fact, we've found that most integrations we've encountered that use Stripe Connect are less secure than using Restricted API Keys because the (private!) tokens are shared with a third party!

I've been using Stripe to handle credit card payments on my e-commerce website for years. Recently, we updated our WordPress e-commerce website and all its plugins, and then we discovered that all credit card payments were broken because our Stripe Payment Gateway plugin stopped allowing the use of Restricted API Keys. Instead, it only supports "Stripe Connect" (which, afaict, is a marketing term for OAuth). This change forced us to do a security audit to make sure that the new authentication method met our org's security requirements. What we found was shocking.

So far we've started auditing two WooCommerce plugins for Stripe, and both have admitted that the OAuth tokens are shared with their (the developers') servers!

One of them is a "Stripe Verified Partner", and they told us that they're contractually obligated by Stripe to use only "Stripe Connect" (OAuth) -- they are not allowed to use good ol' API Keys.

They also told us that Stripe REQUIRED them to include their own servers in the OAuth flow, such that their servers are given our (very secret!) OAuth Access Tokens!

The benefit of normal API Keys, of course, is that they're more secure than this OAuth setup for (at least) two reasons:

  1. I generate the API keys myself, and I can restrict the scope of the key's permissions.

  2. I store the key myself on my own server. It's never transmitted to nor stored on any third-party servers. Only my server and Stripe's servers ever see it (see the sketch just below).
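
For reference, here's a minimal sketch of that direct server <--> server model, assuming a hypothetical restricted key generated in the Stripe dashboard; the key only ever lives on my server and is sent straight to api.stripe.com:

# hypothetical placeholder key -- generated by me, stored only on my own server
STRIPE_RESTRICTED_KEY="rk_live_XXXXXXXXXXXXXXXX"

# Stripe's REST API authenticates with HTTP basic auth: the key is the username
# and the password is left empty. No third-party server is ever involved.
curl -s https://api.stripe.com/v1/charges \
  -u "${STRIPE_RESTRICTED_KEY}:" \
  -G -d limit=3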

Can someone shine a light onto this dark pattern? I understand that standardization is good, and OAuth Refresh Tokens add security (this service doesn't use them). But why-oh-why would you FORCE OAuth flows that share the (non-expiring) Access Tokens with a third party? And why would you claim that's more secure than good ol' API keys?

Does OAuth somehow not support server<-->server flows? Or is it a library issue?

What am I missing?


Hi, could someone explain how seed phrases are considered to be super secure? If it's just a random string of words from a well-known list of words, what stops someone with a simple Python script from generating random phrases and trying to open wallets with them?
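
For scale, here's the back-of-the-envelope math as I understand it (assuming a standard 12-word phrase drawn from a 2048-word list, as in BIP-39):

# each word carries log2(2048) = 11 bits, so 12 words give about 132 bits of entropy
echo "l(2048^12)/l(2)" | bc -l   # 132 (bits)
echo "2048^12" | bc              # about 5.4 * 10^39 candidate phrases to brute-force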


AI - the ultimate social engineer... like for real

The link provides the story, bypassing the paywall.


I'm aiming to make a chat app as secure as theoretically possible for a web app. For transparency, it's open source. I'd like the experience to be as close as possible to a regular chat app. It's important to note that there are limitations with P2P and web apps, such that messages can't be sent if the peer isn't connected.

To keep this post brief, please take a look at the README; it has all the information and links.

I don't think it's ready to replace any app or service, but I'd love to get feedback on what you think would make you use it more than once.


Samy Kamkar's latest at Defcon.

Archive link: https://archive.ph/UtTtp


3TOFU: Verifying Unsigned Releases

By Michael Altfield
License: CC BY-SA 4.0
https://tech.michaelaltfield.net/

This article introduces the concept of "3TOFU" -- a harm-reduction process when downloading software that cannot be verified cryptographically.

Verifying Unsigned Releases with 3TOFU

⚠ NOTE: This article is about harm reduction.

It is dangerous to download and run binaries (or code) whose authenticity you cannot verify (using a cryptographic signature from a key stored offline). However, sometimes we cannot avoid it. If you're going to proceed with running untrusted code, then following the steps outlined in this guide may reduce your risk.

TOFU

TOFU stands for Trust On First Use. It's an (often abused) practice of downloading a person's or org's signing key and blindly trusting it (instead of verifying it).

3TOFU

3TOFU is a process where a user downloads something three times from three different locations. If and only if all three downloads are identical do you trust it.

Why 3TOFU?

During the Crypto Wars of the 1990s, it was illegal to export cryptography from the United States. In 1996, after intense public pressure and legal challenges, the government officially permitted export with the 56-bit DES cipher -- which was a known-vulnerable cipher.

[Photo: Paul Kocher holding a very large circuit board]
The EFF's Deep Crack proved DES to be insecure and pushed a switch to 3DES.

But there was a simple way to use insecure DES to make secure messages: just use it three times.

3DES (aka "Triple DES") is the process encrypting a message using the insecure symmetric block cipher (DES) three times on each block, to produce an actually secure message (from known attacks at the time).

3TOFU (aka "Triple TOFU") is the process of downloading a payload using the insecure method (TOFU) three times, to obtain the payload that's magnitudes less likely to be maliciously altered.
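
As a rough illustration (the numbers below are assumptions, and the trick only works if the three channels are genuinely independent): if each individual download channel has some probability p of being tampered with, the chance that all three downloads are identically tampered with is roughly p cubed.

# illustrative numbers only: assume a 1-in-100 chance of tampering per channel
echo "0.01 ^ 3" | bc -l   # .000001 -- roughly 1 in a million for all three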

3TOFU Process

To best mitigate targeted attacks, 3TOFU should be done:

  1. On three distinct days
  2. On three distinct machines (or VMs)
  3. Exiting from three distinct countries
  4. Exiting using three distinct networks

For example, I'll usually execute

  • TOFU #1/3 in TAILS (via Tor)
  • TOFU #2/3 in a Debian VM (via VPN)
  • TOFU #3/3 on my daily laptop (via ISP)

The possibility of an attacker maliciously modifying something you download over your ISP's network is quite high, depending on which country you live in.

The possibility of an attacker maliciously modifying something you download onto a VM with a freshly installed OS over an encrypted VPN connection (routed internationally and exiting from another country) is much less likely, but still possible -- especially for a well-funded adversary.

The possibility of an attacker maliciously modifying something you download onto a VM running a hardened OS (like Whonix or TAILS) using a hardened browser (like Tor Browser) over an anonymizing network (like Tor) is quite unlikely.

The possibility of someone executing a network attack on all three downloads is very nearly zero -- especially if the downloads are spread out over days or weeks.

3TOFU bash Script

I provide the following bash script as an example snippet that I run for each of the 3TOFUs.

REMOTE_FILES="https://tails.net/tails-signing.key"

CURL="/usr/bin/curl"
WGET="/usr/bin/wget --retry-on-host-error --retry-connrefused"
PYTHON="/usr/bin/python3"

# in tails, we must torify
if [[ "`whoami`" == "amnesia" ]] ; then
	CURL="/usr/bin/torify ${CURL}"
	WGET="/usr/bin/torify ${WGET}"
	PYTHON="/usr/bin/torify ${PYTHON}"
fi

tmpDir=`mktemp -d`
pushd "${tmpDir}"

# first get some info about our internet connection
${CURL} -s https://ifconfig.co/country | head -n1
${CURL} -s https://check.torproject.org/ | grep Congratulations | head -n1

# and today's date
date -u +"%Y-%m-%d"

# get the files (use ${WGET} so the download is torified in TAILS)
for file in ${REMOTE_FILES}; do
	${WGET} "${file}"
done

# checksum
date -u +"%Y-%m-%d"
sha256sum *

# gpg fingerprint
gpg --with-fingerprint  --with-subkey-fingerprint --keyid-format 0xlong *

Here's one example execution of the above script (on a Debian DispVM, executed over a VPN).

/tmp/tmp.xT9HCeTY0y ~
Canada
2024-05-04
--2024-05-04 14:58:54--  https://tails.net/tails-signing.key
Resolving tails.net (tails.net)... 204.13.164.63
Connecting to tails.net (tails.net)|204.13.164.63|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1387192 (1.3M) [application/octet-stream]
Saving to: ‘tails-signing.key’

tails-signing.key   100%[===================>]   1.32M  1.26MB/s    in 1.1s    

2024-05-04 14:58:56 (1.26 MB/s) - ‘tails-signing.key’ saved [1387192/1387192]

2024-05-04
8c641252767dc8815d3453e540142ea143498f8fbd76850066dc134445b3e532  tails-signing.key
gpg: WARNING: no command supplied.  Trying to guess what you mean ...
pub   rsa4096/0xDBB802B258ACD84F 2015-01-18 [C] [expires: 2025-01-25]
      Key fingerprint = A490 D0F4 D311 A415 3E2B  B7CA DBB8 02B2 58AC D84F
uid                             Tails developers (offline long-term identity key) <[email protected]>
uid                             Tails developers <[email protected]>
sub   rsa4096/0x3C83DCB52F699C56 2015-01-18 [S] [expired: 2018-01-11]
sub   rsa4096/0x98FEC6BC752A3DB6 2015-01-18 [S] [expired: 2018-01-11]
sub   rsa4096/0xAA9E014656987A65 2015-01-18 [S] [revoked: 2015-10-29]
sub   rsa4096/0xAF292B44A0EDAA41 2016-08-30 [S] [expired: 2018-01-11]
sub   rsa4096/0xD21DAD38AF281C0B 2017-08-28 [S] [expires: 2025-01-25]
sub   rsa4096/0x3020A7A9C2B72733 2017-08-28 [S] [revoked: 2020-05-29]
sub   ed25519/0x90B2B4BD7AED235F 2017-08-28 [S] [expires: 2025-01-25]
sub   rsa4096/0xA8B0F4E45B1B50E2 2018-08-30 [S] [revoked: 2021-10-14]
sub   rsa4096/0x7BFBD2B902EE13D0 2021-10-14 [S] [expires: 2025-01-25]
sub   rsa4096/0xE5DBA2E186D5BAFC 2023-10-03 [S] [expires: 2025-01-25]

The TOFU output above shows that the release signing key from the TAILS project is a 4096-bit RSA key with a full fingerprint of "A490 D0F4 D311 A415 3E2B B7CA DBB8 02B2 58AC D84F". The key file itself has a sha256 hash of "8c641252767dc8815d3453e540142ea143498f8fbd76850066dc134445b3e532".

When doing a 3TOFU, save the output of each execution. After collecting output from all three executions (intentionally spread out over three days or more), diff the output.

If the output of all three TOFUs match, then the confidence of the file's authenticity is very high.
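
A minimal sketch of that comparison, assuming each run's output was saved to tofu1.log, tofu2.log, and tofu3.log (timestamps and download speeds will differ; only the checksum and fingerprint lines need to match):

# the sha256sum line(s) -- expect exactly one unique line per downloaded file
grep -hE '^[0-9a-f]{64} ' tofu1.log tofu2.log tofu3.log | sort -u

# the key fingerprint line -- expect exactly one unique line
grep -h 'Key fingerprint' tofu1.log tofu2.log tofu3.log | sort -u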

Why do 3TOFU?

Unfortunately, many developers think that hosting their releases on a server with https is sufficient to protect their users from obtaining a maliciously-modified release. But https won't protect you if:

  1. Your DNS or publishing infrastructure is compromised (it happens), or
  2. An attacker has just one (subordinate) CA in the user's PKI root store (it happens)

Generally speaking, publishing infrastructure compromises are detected and resolved within days, and MITM attacks using compromised CAs are targeted attacks (to avoid detection). Therefore, a 3TOFU verification should thwart these types of attacks.

⚠ Note on hashes: Unfortunately, many well-meaning developers erroneously think that cryptographic hashes provide authenticity, but they do not -- they provide integrity.

Integrity checks are useful for detecting data corrupted on download; they do not protect you from maliciously altered data unless those hashes are cryptographically signed with a key whose private half isn't stored on the publishing infrastructure.
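
A minimal sketch of the difference, assuming the project publishes a SHA256SUMS file plus a detached signature SHA256SUMS.asc made with an offline key:

# integrity only: anyone who can tamper with the download can also tamper with the hash file
sha256sum -c SHA256SUMS

# authenticity: first verify the checksum file against the signing key, then check the download
gpg --verify SHA256SUMS.asc SHA256SUMS && sha256sum -c SHA256SUMS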

Improvements

There are some things you can do to further improve the confidence of the authenticity of a file you download from the internet.

Distinct Domains

If possible, download your payload from as many distinct domains as possible.

An adversary may successfully compromise the publishing infrastructure of a software project, but it's far less likely for them to compromise the project website (eg 'tails.net') and their forge (eg 'github.com') and their mastodon instance (eg 'mastodon.social').
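
A minimal sketch with a hypothetical second location -- the point is only that the two downloads cross entirely different publishing infrastructure:

# the second URL is a hypothetical mirror; substitute the project's real forge or mirror
wget -q -O key-from-website.asc "https://tails.net/tails-signing.key"
wget -q -O key-from-mirror.asc  "https://example-forge.org/tails/tails-signing.key"
sha256sum key-from-website.asc key-from-mirror.asc   # the two hashes should be identical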

Use TAILS

TAILS is by far the best OS to use for security-critical situations.

If you are a high-risk target (investigative journalist, activist, or political dissident) then you should definitely use TAILS for one of your TOFUs.

Signature Verification

It's always better to verify the authenticity of a file using cryptographic signatures than with 3TOFU.

Unfortunately, some companies like Microsoft don't sign their releases, so the only option to verify the authenticity of something like a Windows .iso is with 3TOFU.

Still, whenever you encounter some software that is not signed using an offline key, please do us all a favor and create a bug report asking the developer to sign their releases with PGP (or minisign or signify or something).
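
For example, here is a minimal sketch of what that looks like with minisign (signify works similarly), assuming the secret key is generated once and kept offline:

minisign -G                                   # generate a keypair (keep the secret key offline)
minisign -Sm release.tar.gz                   # sign: writes release.tar.gz.minisig
minisign -Vm release.tar.gz -p minisign.pub   # users verify against the published public key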

4TOFU

3TOFU is easy because Tor is free and most people have access to a VPN (corporate, commercial, or an SSH SOCKS proxy).

But, if you'd like, you could also add I2P or some other proxy network into the mix (and do 4TOFU).


Just finished up a new post. Hope someone finds it helpful!


Release 1.3.0 (26-07-2024)

Improvements

  • Vulnerability Details Page Enhancements: We've significantly enhanced the vulnerability details page. It now presents more relevant information, and the layout has been substantially improved for a better user experience.
  • API Enhancements: Various improvements have been made to the API for better performance and functionality.
  • UI Enhancements: Edit/action buttons are now hidden when not logged in (#57).
  • Importer Improvements: Enhancements have been made to various importers (37d3a6d).

Fixes

  • Custom Vulnerability Display Bug: Fixed an issue where custom vulnerabilities were not displayed correctly (#58).
  • New Vulnerability Creation Issue: Resolved the problem where new vulnerabilities couldn't be created without a CVE number (#56).
  • Webservice Sorting Fix: Fixed the sorting issue of contributors versus users (46195d1).
  • Minor Fixes: Various minor fixes have been implemented to improve overall stability and performance.


And do not hesitate to create an account to contribute and share your thoughts on the security advisories: https://vulnerability.circl.lu/

Funding


The NGSOTI project is dedicated to training the next generation of Security Operation Center (SOC) operators, focusing on the human aspect of cybersecurity. It underscores the significance of providing SOC operators with the necessary skills and open-source tools to address challenges such as detection engineering, incident response, and threat intelligence analysis. Involving key partners such as CIRCL, Restena, Tenzir, and the University of Luxembourg, the project aims to establish a real operational infrastructure for practical training. This initiative integrates academic curricula with industry insights, offering hands-on experience in cyber ranges.

vulnerability-lookup is co-funded by CIRCL and by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or ECCC. Neither the European Union nor the granting authority can be held responsible for them.


Media coverage largely sucked

When I just looked at my phone, the headlines were about an unfolding Microsoft global IT outage. My first thought: ransomware. So I logged in and started looking around at what was happening — I'm a CrowdStrike customer — and quickly realised two different, separate things had happened:

  • Microsoft Azure had an outage earlier in the day. This was resolved before I got up. Azure has frequent outages (don’t kill me, Microsoft) — this isn’t abnormal.
  • CrowdStrike had made a boo-boo and pushed out a channel update that had borked a decent percentage of customers.

The media conflated these two events. They weren't connected.


Vulnerability Lookup facilitates quick correlation of vulnerabilities from various sources (NIST, GitHub, CSAF-Siemens, CSAF-CISCO, CSAF-CERT-Bund, PySec, VARIoT, etc.), independent of vulnerability IDs, and streamlines the management of Coordinated Vulnerability Disclosure (CVD). Vulnerability Lookup is also a collaborative platform where users can comment on security advisories and create bundles.

A Vulnerability Lookup instance operated by CIRCL is available at https://vulnerability.circl.lu/.


cross-posted from: https://programming.dev/post/16106778

Contrary to what is stated on the polyfill.io website, Cloudflare has never recommended the polyfill.io service or authorized their use of Cloudflare’s name on their website. We have asked them to remove the false statement, and they have, so far, ignored our requests. This is yet another warning sign that they cannot be trusted.


Wall Street Journal (paywalled): The digital payments company plans to build an ad sales business around the reams of data it generates from tracking the purchases as well as the broader spending behaviors of millions of consumers who use its services, which include the more socially-enabled Venmo app.

PayPal has hired Mark Grether, who formerly led Uber’s advertising business, to lead the effort as senior vice president and general manager of its newly-created PayPal Ads division.


Not sure if there’s a better community to ask this, but I’m trying to find a good quality non-cloud-based IP camera that I can feed into a standardized video recording software over a network. Ideally, it would be Wi-Fi capable as well.

Everywhere I’ve looked, they all reach out to a third-party and go through an app or are through junction box and are analog-based.

Does anyone know if an option like this exists?


And once again: it's just a tool, which, given enough criminal energy, will also be used for nonsense. In this case, it wasn't even an attack on the car that would have required "special hardware".


So, yeah. Contrary to what is stated, Spotify does not provide 2FA (shame on them!), so I use a strong password, and for years nothing happened.

Early this morning I got multiple emails saying my account had been logged into from Brazil, the USA, India, and some other countries. Songs were liked and playlists created, so it wasn't a phishing email; some people actually were able to log in to my Spotify account.

I of course changed the password, logged out all sessions, checked the allowed apps, etc., and everything looks fine.

But I wonder … did something happen recently? The usual breach-checking sites don't list my old Spotify password, and a quick web search doesn't bring anything up.

Any clue what could have happened here?


Infomaniak claims to use TLS, but

The first link in the TLS chain is executed via a purely internal network by the webmail and SMTP servers and is not available in TLS for performance reasons.

Is this normal, acceptable, irrelevant, standard, or a red flag?

They are the biggest hosting provider in Switzerland, so I somehow have a hard time believing they lack the resources to implement TLS properly.


The reporting methodology employed should yield valuable insights, spanning both technical details and high-level strategic considerations.


I wrote about my perception of what risks AI brings to society in 2024. And it's not all about cybersecurity 😉
