rcbrk

joined 4 years ago
[–] [email protected] 1 points 1 week ago

Letters, punctuation, and spaces are just about the only ASCII in that art.

[–] [email protected] 10 points 1 month ago

"Having carefully considered all these matters, we are unanimously of the opinion that the evidence was sufficiently cogent to support the findings that the appellant murdered four Afghan men and to the extent that we have discerned error in the reasons of the primary judge, the errors were inconsequential," an executive summary from the court said.

 

At a busy inner-city Canberra polling booth, a man confronted the incumbent finance minister with accusations of genocide.

Labor ACT senate candidate Katy Gallagher was leaving the Lyneham Public School polling booth when she was approached by James Godfrey, spokesperson for pro-Palestinian group Freedom Flotilla Coalition.

"You're a minister in this government and you're doing nothing," Mr Godfrey yells at Senator Gallagher.

"Palestinians in Gaza are being starved at the moment.

"Sixty days plus of no food, no water, no medicines and no fuel and your government is complicit in this genocide."

Pro-Palestinian protester confronts Labor Senator Katy Gallagher

Mr Godfrey says a Freedom Flotilla ship carrying aid bound for Gaza was recently attacked by drones.

"Our ship's been attacked in international waters off the east coast of Malta," he tells reporters afterward.

"There's been fires on board, and we're trying to transport humanitarian aid to Gaza and the Australian government won't speak out against it."

https://www.abc.net.au/news/2025-05-02/freedom-flotilla-group-says-ship-struck-off-malta/105245778

-- https://www.abc.net.au/news/2025-05-03/federal-election-2025-live-anthony-albanese-peter-dutton/105245936#live-blog-post-176075

Related information/news:

[–] [email protected] 1 points 1 month ago

Ugh, so much AI generated slop starting to fill up Bandcamp. Enshittification launch countdown is almost complete, I fear...

There's excellent music on there and if you buy stuff from a trusted artist it's a good deal for them, but as a discovery platform it's becoming a dead-internet failure.

[–] [email protected] 3 points 1 month ago

Qwen3-30B-A3B-Q6_K:

Amt: 8003, Process: 10.67s (17.52T/s), Generate: 1408.00s (5.68T/s), Total: 1418.67s

Qwen3-30B-A3B-Q6_K - Thoughts (29081 characters)

Okay, I need to write a Python script that sorts all .gguf files in the current directory into subdirectories based on their publisher. The publishers are determined from URLs stored in files named dl{YYYYMMDD}.txt. Each line in these files has a URL, and some might be commented out with a #. The publisher is the first part after the domain in the URL. Let me think through how to approach this.

First, I need to find all the .gguf files in the current directory. Then, for each of them, I need to determine their publisher. But how do I get the URLs associated with each .gguf file? Wait, the problem says that the URLs are in the dl{YYYYMMDD}.txt files. But how do I know which URL corresponds to which .gguf file? Hmm, maybe the URLs in the txt files are the ones that were used to download the .gguf files. So each .gguf file was downloaded via a URL listed in one of these txt files. But how do I associate each .gguf file with its URL?

Wait, the problem statement says that the URLs are in the files named dl{YYYYMMDD}.txt. But the .gguf files themselves don't have any metadata linking them to the URLs. So maybe the assumption is that each .gguf file was downloaded using one of the URLs in these txt files. But how to determine which URL corresponds to which .gguf file? That's a problem. Unless there's some naming convention I'm missing.

Wait, maybe the URLs in the txt files are the ones that were used to download the .gguf files. For example, if a URL is "https://huggingface.co/unsloth/.../file.gguf", then the corresponding .gguf file would be named something like "file.gguf". But the problem says that the .gguf files are in the current directory. So perhaps the script needs to look through all the URLs in the txt files, extract the publisher, and then for each .gguf file, find which URL it came from. But without any direct link, how?

Wait, maybe the .gguf files are named after the file part of the URL. For example, the URL ends with "Qwen3-30B-A3B-Q6_K.gguf?download=true", so the .gguf file would be named "Qwen3-30B-A3B-Q6_K.gguf". If that's the case, then for each .gguf file, we can check if there's a URL in the txt files that has the same filename. But that might not be reliable. Alternatively, maybe the script is supposed to process all the URLs in the txt files, extract their publishers, and then for each .gguf file, determine which publisher it belongs to based on the URL that was used to download it. However, without knowing which URL corresponds to which .gguf file, this seems impossible.

Wait, maybe the problem is that the user expects that each .gguf file was downloaded via one of the URLs in the txt files. So the script should process all the URLs in the txt files, extract the publisher, and then for each .gguf file, find which URL (if any) it corresponds to. But how?

Alternatively, perhaps the script is supposed to take each .gguf file, and for each of them, look through all the URLs in the txt files to find which one matches. But how? For example, if the .gguf file is named "Qwen3-30B-A3B-Q6_K.gguf", then the URL would have that as the filename. So perhaps the script can check for each .gguf file, whether there's a URL in the txt files that has that filename. But that's a bit of a stretch. However, given the problem statement, perhaps that's the intended approach.

Alternatively, maybe the problem is that the URLs in the txt files are the ones that were used to download the .gguf files, and each .gguf file's name is derived from the URL. So the script can process all the URLs, extract the publisher, and then for each .gguf file, determine which URL it came from by matching the filename. But this requires that the .gguf files are named exactly as the file part of the URL.

Assuming that's the case, here's the plan:

  1. Find all .gguf files in the current directory.
  2. For each .gguf file, extract the filename (without the path).
  3. Search through all the URLs in the dl{YYYYMMDD}.txt files to find which URL has a filename that matches the .gguf file's name.
  4. Once found, extract the publisher from that URL.
  5. Create a subdirectory under the publisher's name (if it doesn't exist) and move the .gguf file there.

But how to handle cases where multiple URLs have the same filename? Or if no URL matches? The problem statement says that the URLs are in the txt files, but maybe not all .gguf files are accounted for. But the script should process as many as possible.

Alternatively, perhaps the script is supposed to process all URLs in the txt files, extract the publisher, and then create directories for each publisher, but how does that relate to the .gguf files? Maybe the .gguf files are already in the directory, and the script is supposed to move them into the publisher's directory based on the URLs in the txt files. But without knowing which .gguf file corresponds to which URL, this seems impossible.

Wait, maybe the problem is that the .gguf files were downloaded using the URLs in the txt files, and the script needs to associate each .gguf file with the URL that was used to download it. But how? Unless there's some other metadata. But the problem statement doesn't mention that.

This is a bit confusing. Let me re-read the problem statement.

The user says: "The URLs are in files in the current directory named in the format dl{YYYYMMDD}.txt, with one url per line. Some url lines may be commented out with a single # character, but they are still a valid source of info. An example url is ... and the publisher is always the first node after the domain, in this case unsloth."

So the URLs are stored in th [...trimmed because lemmy won't handle >10,000 chars]

Qwen3-30B-A3B-Q6_K - Answer

Here's a well-documented Python script that fulfills the requirements of sorting .gguf files into directories based on their source publishers, as extracted from URLs stored in dl{YYYYMMDD}.txt files.


What This Script Does

  1. Finds all .gguf files in the current working directory.
  2. Reads all dl{YYYYMMDD}.txt files in the current directory.
  3. Parses each URL in those files, including lines that are commented out (i.e., starting with #) but still considered valid.
  4. Extracts the file name from the URL path.
  5. Checks if the file exists in the current directory.
  6. Determines the publisher by parsing the URL path.
  7. Creates a directory for the publisher if it doesn't exist.
  8. Moves the .gguf file into the corresponding publisher directory.

📁 Sample Input (from dl20231010.txt)

https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF/resolve/main/Qwen3-30B-A3B-Q6_K.gguf?download=true
# https://huggingface.co/another_publisher/model.gguf

📦 Python Script

import os
import shutil
from urllib.parse import urlparse

def main():
    # Step 1: Find all dl{YYYYMMDD}.txt files in the current directory
    txt_files = [f for f in os.listdir() if f.startswith('dl') and f.endswith('.txt')]

    for txt_file in txt_files:
        print(f"Processing {txt_file}...")
        try:
            with open(txt_file, 'r', encoding='utf-8') as f:
                lines = f.readlines()
        except Exception as e:
            print(f"Error reading {txt_file}: {e}")
            continue

        for line in lines:
            line = line.strip()

            # Step 2: Remove any leading '#' and whitespace
            if line.startswith('#'):
                line = line[1:].strip()
            if not line:
                continue

            try:
                # Step 3: Parse the URL and extract the filename
                parsed = urlparse(line)
                path = parsed.path
                filename = os.path.basename(path)

                # Skip non-.gguf files
                if not filename.endswith('.gguf'):
                    continue

                # Step 4: Check if the file exists in the current directory
                if not os.path.exists(filename):
                    print(f"File '{filename}' not found for URL '{line}'")
                    continue

                # Step 5: Extract publisher from the URL path
                path_parts = parsed.path.split('/')
                if len(path_parts) < 3:
                    print(f"Invalid URL format: '{line}'")
                    continue
                publisher = path_parts[1]

                # Step 6: Create the publisher directory if it doesn't exist
                publisher_dir = os.path.join(os.getcwd(), publisher)
                os.makedirs(publisher_dir, exist_ok=True)

                # Step 7: Move the file
                src_path = os.path.join(os.getcwd(), filename)
                dst_path = os.path.join(publisher_dir, filename)
                shutil.move(src_path, dst_path)
                print(f"Moved '{filename}' to '{publisher_dir}'")

            except Exception as e:
                print(f"Error processing URL '{line}': {e}")

if __name__ == "__main__":
    main()

📝 Notes

  • The script handles both commented and un-commented lines in dl{YYYYMMDD}.txt.
  • It assumes that .gguf files are named exactly as per the URL's file part (e.g., Qwen3-30B-A3B-Q6_K.gguf).
  • If a .gguf file is referenced by multiple URLs, it will be moved only once (after that, it will no longer exist in the current directory).
  • The script uses shutil.move, which is safe and works across file systems.

🧪 Example Output

Processing dl20231010.txt...
Moved 'Qwen3-30B-A3B-Q6_K.gguf' to 'unsloth'
File 'model.gguf' not found for URL 'https://huggingface.co/another_publisher/model.gguf'

This script is designed to be robust and user-friendly, ensuring all .gguf files are properly organized based on their source publishers.

[–] [email protected] 2 points 1 month ago

QwQ-32B-Q6_K:

Amt: 3313, Process: 107.21s (1.74T/s), Generate: 3463.16s (0.96T/s), Total: 3570.37s

QwQ-32B-Q6_K - Thoughts (11010 characters)

Okay, let's tackle this problem step by step. The user wants a script that sorts all .gguf files into subdirectories based on their publisher, which is derived from the URLs in the dl*.txt files. Hmm, first, I need to figure out how to extract the publisher from each URL.

The example URL given is https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF/resolve/main/Qwen3-30B-A3B-Q6_K.gguf?download=true, and the publisher here is 'unsloth'. Looking at the URL structure, the publisher is the first path component after the domain. So, after 'huggingface.co', the next part is the publisher.

So, the plan is: for each .gguf file in the current directory, find its corresponding URL in any of the dl*.txt files. Then parse that URL to get the publisher, create a directory if it doesn't exist, and move the file there. But wait, how do I link each .gguf file to its URL? The filenames might not directly correspond to the URLs, so maybe the filenames are part of the URL paths?

Alternatively, perhaps each line in the dl*.txt files contains a URL that ends with the filename. So, the filename is the last part of the URL before any query parameters. For example, in the example URL, the filename is Qwen3-30B-A3B-Q6_K.gguf. So, the script can look through all the dl*.txt files, parse each URL, extract the filename from the URL, and then check if that filename exists in the current directory. If it does, then we can get the publisher from that URL and move the file accordingly.

Wait, but the problem states that the URLs are in the dl files, and each .gguf file was downloaded from one of those URLs. So each .gguf file corresponds to exactly one URL in one of the dl files. Therefore, the script needs to:

  1. Collect all the URLs from all dl*.txt files, ignoring comments (lines starting with #).
  2. For each URL, extract the filename (the part after the last '/' before any '?') and the publisher (the first path component after the domain).
  3. Create a dictionary mapping filenames to their respective publishers.
  4. Iterate over all .gguf files in the current directory. For each file: a. Check if its name is in the filename-publisher map. b. If yes, create the publisher directory if it doesn't exist. c. Move the file into that directory. d. If not found, maybe log a warning or skip?

But first, how to parse the URLs correctly. Let's think about splitting the URL. The example URL's path after the domain is '/unsloth/Qwen3-30B-A3B-GGUF/resolve/main/Qwen3-30B-A3B-Q6_K.gguf'. The first path component after the domain is 'unsloth', which is the publisher. So splitting the path by '/' and taking the second element (since the first is empty due to leading '/') would give the publisher.

For the filename, the last part of the path before any query parameters. So split the URL on '?' to get the path part, then split on '/' and take the last element.

Potential edge cases: URLs with different domains? The example uses huggingface.co, but maybe others could be present. But the problem says "the publisher can only be found in the URL used to originally download each .gguf file", so the structure should be consistent. So the first path component after the domain is always the publisher.

Now, steps for the script:

First, process all dl*.txt files. For each file:

  • Read each line.
  • Ignore lines starting with #.
  • For each valid URL line:
    • Parse the URL to get the publisher and filename.

How to do this in bash? Maybe using parameter expansion or cut.

Alternatively, using awk or sed. Let's think of code snippets.

Parsing the publisher:

Given a URL like 'https://huggingface.co/unsloth/...', the path starts after 'huggingface.co/'. So splitting the URL into parts after the domain.

Maybe using cut:

domain_part=$(echo $url | cut -d '/' -f 3)  # gets huggingface.co
path_part=$(echo $url | cut -d '/' -f 4-)   # gets everything after the domain

Then the first element of path_part is the publisher. So split path_part on '/' again.

publisher=$(echo $path_part | cut -d '/' -f 1)

Wait, let's see:

For the example URL:

Original URL: https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF/resolve/main/Qwen3-30B-A3B-Q6_K.gguf?download=true

Breaking down:

After splitting by '/', the components are:

0: https:
1:
2: huggingface.co
3: unsloth
4: Qwen3-30B-A3B-GGUF
5: resolve
6: main
7: Qwen3-30B-A3B-Q6_K.gguf?download=true

Wait, actually, the URL is split into parts by the slashes. The third component (index 2 if starting from 0) is huggingface.co. The fourth component (index 3) is unsloth, which is the publisher.

So, to get the publisher, we can take the 4th field when splitting by '/'.

Wait, perhaps using:

publisher=$(echo $url | awk -F/ '{print $4}')

Yes, because:

The URL is split into fields by '/', so $4 would be 'unsloth'.

Similarly, for the filename:

The filename is the last part before any '?', take the first part, then split on '/' and take the last element. Alternatively, using basename:

filename=$(basename $(echo $url | cut -d '?' -f1))

Yes, that's better. For example, the URL part before ? is:

https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF/resolve/main/Qwen3-30B-A3B-Q6_K.gguf

basename of that is "Qwen3-30B-A3B-Q6_K.gguf".

So putting it all together, for each URL line:

publisher=$(echo "$url" | awk -F/ '{print $4}') filename=$(basename "$(echo "$url" | cut -d '?' -f1)")

Then, we can build an associative array (in bash) mapping filenames to publishers.

Once all URLs are processed, we can loop over the .gguf files in the current directory. For each file:

  • Check if its name is in the filename-publisher map.
  • If yes, create the directory if needed, then move.
  • Else, perhaps skip or report an error.

Now, in bash, associative arrays can be used. Let's structure the code:

First, read all the dl files.

declare -A publisher_map

for dlfile in dl*.txt; do
    while IFS= read -r line; do
        # Ignore comments and empty lines
        if [[ $line == #* || -z $line ]]; then
            continue
        fi
        url="$line"

        # Extract publisher
        publisher=$(echo "$url" | awk -F/ '{print $4}')
        # Extract filename
        fname=$(basename "$(echo "$url" | cut -d '?' -f1)")

        # Store in the map
        publisher_map["$fname"]=$publisher
    done < "$dlfile"
done

Then, process each .gguf file:

for file in *.gguf; do
    if [[ ! -f $file ]]; then
        continue
    fi
    fname=$(basename "$file")

    if [[ -v publisher_map[$fname] ]]; then
        publisher=${publisher_map[$fname]}
        dir="$publisher"
        mkdir -p "$dir"
        mv -- "$file" "$dir/"
        echo "Moved $file to $dir"
    else
        echo "Warning: $file has no corresponding URL in dl files. Skipping."
    fi
done

Wait, but in bash, checking if a key exists in an associative array uses -v, so that part is okay.

But need to make sure that the filenames are exactly as they appear in the URL's path. For example, if the URL's filename has a different case, or maybe trailing spaces? Probably need to ensure exact matches.

Another thing: what about files that are not in any of the dl files? The problem says "all .gguf files", but perhaps some files were not downloaded via these URLs? The user might want those to be skipped, or maybe an error. But according to the problem statement, they need to be sorted into subdirectories based on their original download URLs, so files not present in any dl file ca [...trimmed because lemmy won't handle so many chars]

QwQ-32B-Q6_K - Answer

Here's a bash script to organize the .gguf files into publisher directories based on their download URLs:

#!/bin/bash

# Create an associative array to hold filename -> publisher mappings
declare -A publisher_map

# Process all dl*.txt files
for dlfile in dl*.txt; do
    while IFS= read -r line; do
        # Skip comments and empty lines
        if [[ $line == \#* || -z $line ]]; then
            continue
        fi
        
        url="$line"
        
        # Extract publisher (4th part after splitting on '/')
        publisher=$(echo "$url" | awk -F/ '{print $4}')
        
        # Extract filename (basename of path without query parameters)
        fname=$(basename "$(echo "$url" | cut -d '?' -f1)")
        
        # Store in the map
        publisher_map["$fname"]=$publisher
    done < "$dlfile"
done

# Process each .gguf file
for file in *.gguf; do
    if [[ ! -f $file ]]; then
        continue
    fi
    
    fname=$(basename "$file")
    
    if [[ -v publisher_map[$fname] ]]; then
        publisher=${publisher_map[$fname]}
        dir="$publisher"
        
        # Create directory if not exists
        mkdir -p "$dir"
        
        # Move the file
        mv -- "$file" "$dir/"
        echo "Moved '$file' to '$dir'"
    else
        echo "Warning: No publisher found for '$file'" >&2
    fi
done

To use this script:

  1. Save it as sort_gguf.sh in your current directory
  2. Make it executable: chmod +x sort_gguf.sh
  3. Run it: ./sort_gguf.sh

This script does the following:

  • Reads all URLs from any dl*.txt files
  • Ignores commented lines (starting with #)
  • Extracts the publisher from each URL (4th path component after the domain)
  • Extracts the filename from each URL (basename of path without query params)
  • Creates a mapping between filenames and their publishers
  • Creates directories for each publisher if they don't exist
  • Moves each .gguf file to its corresponding publisher directory
  • Shows progress messages for moved files
  • Gives warnings for files with no matching URL entries

Note: The script assumes:

  • The URLs are properly formatted as shown in the example
  • Each .gguf file has a corresponding URL entry in one of the dl*.txt files
  • File names exactly match the ones in the URLs (case-sensitive)
[–] [email protected] 2 points 1 month ago (2 children)

QwQ-32B-Q6_K vs Qwen3-30B-A3B-Q6_K on an i5-9500 with plenty of DDR4@2666:

Prompt

Please write a script to sort all the .gguf files in the current directory into subdirectories. They need to go into publisher directories (which may or may not exist yet). The publisher can only be found in the URL used to originally download each .gguf file. The URLs are in files in the current directory named in the format dl{YYYYMMDD}.txt, with one url per line. Some url lines may be commented out with a single # character, but they are still a valid source of info. An example url is https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF/resolve/main/Qwen3-30B-A3B-Q6_K.gguf?download=true and the publisher is always the first node after the domain, in this case unsloth.
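For anyone skimming past the full answers above, here's a minimal shell sketch (mine, not either model's output) of the extraction rule the prompt describes, using the example URL verbatim; the variable names are purely illustrative:

#!/bin/bash
# Sketch only: publisher = first path node after the domain; filename = basename
# of the URL path with the query string stripped.
url='https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF/resolve/main/Qwen3-30B-A3B-Q6_K.gguf?download=true'

publisher=$(echo "$url" | cut -d '/' -f 4)   # -> unsloth
filename=$(basename "${url%%\?*}")           # -> Qwen3-30B-A3B-Q6_K.gguf

echo "publisher: $publisher"
echo "filename:  $filename"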

[–] [email protected] 6 points 1 month ago (1 children)

So.. (spoiler alert for everyone who is only up to the June 2023 episode of APH in the Vice article):

In September 2023 the $10B housing bill was passed by Labor and the Greens.
Bit of a shame Labor held back for so long on the Greens' amendments, but Labor did show here they can work around the inevitable delays of robust parliamentary discourse by approving interim funding for housing in June to get things started while the details of long-term funding were nutted out with the crossbench.

[–] [email protected] 5 points 2 months ago

[...]
On EOS’s website the R400 is described as “Lightweight 30mm lethality with precise mobile engagements up to 2 km” and “a high-precision weapon platform with the firepower of a 30 mm cannon”.

“The R400 also supports other weapons, such as machine guns, automatic grenade launchers, and anti-tank guided missiles. It is compatible with other EOS weapon systems through ‘plug-and-play’ functionality.”
[...]
An accompanying photograph shows Israeli Defence Ministry officials and industry representatives standing behind an R400 system, which is the same image included in the EOS "Market Development Update" provided to the Australian Stock Exchange.

The R400, manufactured by Electro Optic Systems (Supplied by the company, EOS)

In response to questions on whether EOS sought approval to demonstrate its R400, a Defence Department spokesperson insisted Australia had “not supplied weapons or ammunition to Israel since the conflict began, and for at least the past five years.

“As the circumstances of the conflict have evolved, the Government has calibrated its approach and has only approved new export permits to Israel for items for Australian defence and law enforcement that will return to Australia,” the spokesperson said.

“Defence has also been undertaking a process to scrutinise the pre-existing export permits to Israel. These pre-existing permits were all approved before the conflict began and none of the permits relate to weapons or ammunition.”

Asked whether it had sought permission to export the lethal technology, and whether the product had first gone to the United States, an EOS spokesperson told the ABC that the company "complies with all relevant regulations at all times".

“EOS operates under the trade control regulations of the countries it is based in, due to the nature of its products and technologies,” the spokesperson added.

“That includes detailed regulations in Australia and other host countries where EOS manufactures products, like the United States.”

Greens Senator David Shoebridge says the involvement of EOS exposes the Albanese government’s “lies” that Australia has not supported Israel’s military campaign against Gaza, a position reinforced this week by the Prime Minister during a leaders’ debate.

"What we can see clearly here is an Australian made weapon in the hands of the Israeli military in Israel in January this year and I think that really puts the lie to these denials we've had from the Albanese government since this appalling conflict started," Senator Shoebridge told the ABC.

https://archive.md/KFP1e

[–] [email protected] 3 points 2 months ago

Australian Knitting Mills, or Australian Woollen Mills.. not really sure what name they go by.

It's run mostly by a bitter old knitting machine technician / wizard with poor social and marketing skills, and his partner.

Last time I knew anything was from ~2021 visiting their factory in Coburg to buy some socks and long underwear, at which point they were already looking at moving/retiring to the east coast.

Bob told me they were the original manufacturer of Explorers, before his business partner wanted to take things in a different direction, whereupon they split the business. You can see Australian Woollen Mills kept striving for high quality but didn't evolve with the fashions, while Holeproof set fashion trends and sent machines overseas but gradually enshittified.

The pair of "Ultra thick hiking socks" are my most frequently worn socks and are still going strong since 2021: https://www.aust-woollenmills.com/shop/hiking-sock-ultra-thick-100-merino/hiking

No online ordering, just phone and bank transfer. If you're gonna make a massive order I'd call them to confirm when someone will actually be there, at either their Coburg (8 Trade Place) or Collingwood (13 Hood Street) places. There's a few phone numbers scattered around their website.

[–] [email protected] 5 points 2 months ago (1 children)

The replacement battery you bought in 2017 was the last of the genuine stock for that 2012 Thinkpad model. Now it's only poor quality aftermarket. Maybe just stick with the existing genuine battery -- its 47 second runtime should be enough time for AC loss to trigger a custom script to make it hibernate.
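Rough sketch of that hook, assuming a Linux install with acpid and systemd; the event name and the /sys battery path vary by machine, so the paths below are guesses to adapt rather than the real thing:

#!/bin/bash
# /etc/acpi/ac-loss.sh (hypothetical path) -- called by an acpid rule such as
# /etc/acpi/events/ac-adapter containing:
#   event=ac_adapter.*
#   action=/etc/acpi/ac-loss.sh
# If mains power is gone, hibernate immediately rather than trusting the battery.
if [ "$(cat /sys/class/power_supply/AC/online 2>/dev/null)" = "0" ]; then
    logger "AC power lost -- hibernating"
    systemctl hibernate
fi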

[–] [email protected] 14 points 2 months ago

LoL, blue shirt has no persistence. Anger and giving up gets you nowhere.
XKCD #349 - "Success": [Four full-width panels arranged vertically, each with a label for number of hours elapsed, with a title above the stack of panels.];    Title: As a project wears on, standards for success slip lower and lower.;    [Megan is standing behind Cueball, watching him as he sits at a desk working on a desktop computer on the desk.];    Label: 0 hours;    Cueball: Okay, I should be able to dual-boot BSD soon.;    [Cueball is on the floor fiddling with the open tower in front of him. Megan is not shown in the panel, but may be off-panel unless Cueball is talking to himself.];    Label: 6 hours;    Cueball: I'll be happy if I can get the system working like it was when I started.;    [Cueball is standing in front of the computer, which now has a laptop plugged into the tower. Megan is still not shown in the panel, but may be off-panel again.];    Label: 10 hours;    Cueball: Well, the desktop's a lost cause, but I think I can fix the problems the laptop's developed.;    [Cueball and Megan are swimming in the sea; an island and a beach can be seen in the distance.];    Label: 24 hours;    Cueball: If we're lucky, the sharks will stay away until we reach shallow water.;    Megan: If we make it back alive, you're never upgrading anything again.

Title text:40% of OpenBSD installs lead to shark attacks. It's their only standing security issue.

[–] [email protected] 5 points 2 months ago (1 children)
 

..And you can imagine the job discrimination as an adult if you don't drive.

 

Australian Senate, last sitting of the year. No idea when the Social Media Ban debate is kicking off.

If anyone's keen, feel free to give a live run-down of anything interesting in this thread.

(sorry about all the edits, just trying to get a decent thumbnail: elevated photo of the Australian Senate)

 

Of course, the real story here is how the elderly (and everyone else) are fucked over by car dependency and its associated suburban sprawl, shit public transport, and unwalkable neighbourhoods.

 

The government is being pretty coy about the details, so most of the article is necessarily conjecture.

Selected excerpts from the article:

The definition of a social media service, as per the Online Safety Act

An electronic service that satisfies the following conditions:

  1. The sole or primary purpose of the service is to enable online social interaction between two or more end users;
  2. The service allows end users to link to, or interact with, some or all of the other end users;
  3. The service allows end users to post material on the service.

Under the proposed changes, it will be the responsibility of social media companies to take reasonable steps to block people under 16.

How will your age be verified?

The government's legislation won't specify the technical method for proving a person's age.

Several options are on the table, including providing ID and biometrics such as face scanning.

The government's currently running an age assurance trial to assess all the methods, and it's scheduled to continue into 2025.

Based on the results of that trial, eSafety commissioner Julie Inman Grant will make recommendations to platforms.

It's possible that Australians will be asked to provide their IDs or biometric data directly to social media companies in order to use their platforms, but that's not guaranteed.

Many of the big players, including Meta, have instead argued for the age verification onus to be placed on app stores, rather than individual platforms, as that would mean proving your age once — rather than every time you sign up to a platform.

It's also possible that a third-party company that specialises in ID verification will act as a go-between between users and social media platforms.

No matter which model is adopted, the prime minister has said privacy protections will be introduced to cover any data people end up providing.

 

Tip of the iceberg when it comes to examining the corruption of land ownership in Australia. It's hardly talked about. The linked article doesn't even talk about it.

The public as a whole (and traditional owners) should be the only financial beneficiaries of rezoning.

I suspect private maximisation of rezoning profits is the reason urban developments here are almost universally that awful single-storey, no-greenspace, roof-to-roof packed suburban hellscape.

 

[...] The 92-page document compiled by the legal team lays out a number of specific ways Albanese and other Australian officials have acted as an accessory to genocide, including:

  • Freezing $6 million in funding to the United Nations Relief and Works Agency for Palestine Refugees in the Near East amid a humanitarian crisis based on unsubstantiated claims by Israel;
  • Providing military aid and approving defence exports to Israel, which could be used by the Israel Defense Forces (IDF) in the course of the prima facie commission of genocide and crimes against humanity;
  • Ambiguously deploying an Australian military contingent to the region, where its location and exact role have not been disclosed; and
  • Permitting Australians, either explicitly or implicitly, to travel to Israel to join the IDF and take part in its attacks on Gaza.

"The Rome Statute provides four modes of individual criminal responsibility, two of which are accessorial," [attorney] Omeri explained in a statement. [...]

See also: Birchgrove Legal's media release and communiqué to ICC

 

The Australien Government has made an ad about its Whistleblower Protection Laws, and it’s surprisingly honest and informative.

Take action: droptheprosecutions.org.au

https://www.thejuicemedia.com/honest-government-ad-whistleblower-protection-laws/
