SpaceCadet

joined 2 years ago
[–] SpaceCadet@feddit.nl 4 points 1 week ago

Libre (from French) is sometimes used to resolve the ambiguity of the word free in English, but it sounds kinda awkward, and there's certainly no consensus that it should be the official replacement, or that the term free even needs replacing.

Furthermore, the FSF, which originally came up with the idea of "free software", still exists and is still called the Free Software Foundation, though Stallman uses both terms interchangeably.

[–] SpaceCadet@feddit.nl 1 points 1 week ago

They don't have nukes as such. They are prepositioned, US-owned nukes that remain under the custody of the USAF. The part of the base where the nukes are stored is strictly off limits to local personnel.

What makes them "shared" is that they are intended to be dropped by planes owned by the host country, and both the government of the host country and the US government need to give their authorization to activate and use them.

So you may as well just consider them as US nukes.

[–] SpaceCadet@feddit.nl 2 points 1 week ago* (last edited 1 week ago)

we have real parties, not fascist or fascist-lite

Says the Dutch guy...

[–] SpaceCadet@feddit.nl 17 points 1 week ago (3 children)

Free as in freedom, not as in free beer.

[–] SpaceCadet@feddit.nl 1 points 1 week ago

That’s not necessarily the fault of systemd.

No, but the error being hard to debug, and the inability to cancel the timeout as it's occurring, are.

Anyway that has been fixed on modern systems

No, I've had it happen more recently (I wanna say less than a month ago) with network mounts and random systemd-controlled desktop processes that refuse to die.
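
For what it's worth, you can at least shorten how long it hangs. A minimal sketch; the 10s values and the NFS mount are just examples:

# cap the default stop timeout with a drop-in:
# /etc/systemd/system.conf.d/10-timeouts.conf
[Manager]
DefaultTimeoutStopSec=10s

# or per mount in /etc/fstab, so a dead network mount can't stall boot
# or shutdown for the full default:
server:/export  /mnt/nfs  nfs  _netdev,x-systemd.device-timeout=10s,x-systemd.mount-timeout=10s  0  0

That doesn't make the error any easier to debug, but it keeps one stuck unit from holding the whole shutdown hostage for minutes.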

[–] SpaceCadet@feddit.nl 2 points 1 week ago

But I wouldn't turn it on and actually play with it even if I could, because I will always take the better performance.

Depends. In Cyberpunk I can get 90-100 fps at 1440p on ultra with raytracing on and FSR4 Quality (via OptiScaler). That is a very good experience IMO, to the point that I forget about "framerate" while playing.

That's on Windows though; on Linux the raytracing performance is rather worse for some reason, and the framerate slips below the point where I can ignore it, so there I go for 1440p native.

[–] SpaceCadet@feddit.nl 2 points 1 week ago (2 children)

bit better RT performance, which I couldn't care less about.

Yeah raytracing is not really relevant on these cards, the performance hit is just too great.

The RX 9070 XT is the first AMD GPU where you can consider turning it on.

[–] SpaceCadet@feddit.nl 4 points 1 week ago

bitcoin mining

That's a thing of the past; it's not profitable anymore unless you use ASIC miners. Some people still GPU-mine niche coins, but it's nowhere near the scale it was during the bitcoin and ethereum craze a few years ago.
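
Back-of-envelope, all numbers hypothetical:

# electricity cost per day for a single 300 W GPU at $0.30/kWh:
# echo "scale=2; 300 * 24 * 0.30 / 1000" | bc
# -> 2.16

That's ~$2/day in power alone, while a lone GPU's share of the block rewards on an ASIC-dominated network is effectively zero. You'd be paying for the privilege of losing money.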

AI is driving up prices, or rather, it's reducing availability, which then translates into higher prices.

Another thing is that board manufacturers, distributors and retailers have figured out that they can jack up GPU prices above MSRP and enough suckers will still buy them. They'll sell less volume but they'll make more profit per unit.

[–] SpaceCadet@feddit.nl 1 points 1 week ago

They could be. There are two alternatives.

And I could be the Queen of Holland, or I could not be. There are two alternatives.

I assumed the former

You kneejerked, that's what you did.

[–] SpaceCadet@feddit.nl 1 points 1 week ago

Oh really, someone is being annoying to you? Oh noes!

BTW, you don't have to announce it when you block someone.

You’re like some biblethumper

Your lack of self awareness is staggering.

[–] SpaceCadet@feddit.nl 1 points 1 week ago (2 children)

You, digging up an irrelevant non-event from half a century ago, a fixation of the rabid anti-communist US imperialist propaganda machine shows who is in an echo chamber.

You're the one who got fixated on it sweetheart. I just showed you an image of a lego figurine and offff you went. You guys are so easy to trigger. 😂

And so you have nothing meaningful to offer as I expected.

Says the twerp who is afraid to even answer a simple yes/no question about this so-called "irrelevant non-event" from half a century ago.

You're like the Holocaust denier or flat-earther who is afraid to answer the obvious question, because they know they will be ridiculed and, deep down, they know they're wrong and have no arguments to defend themselves.

[–] SpaceCadet@feddit.nl 2 points 1 week ago

Crazy how triggered (and retarded) they are. Even got one who, rather than admitting he was wrong, doubled down arguing that the GDR was a USSR member state. For some reason that was important to his "argument".

1143
submitted 1 year ago* (last edited 1 year ago) by SpaceCadet@feddit.nl to c/fediverse@lemmy.world
 

I feel like we need to talk about Lemmy's massive tankie censorship problem. A lot of popular Lemmy communities are hosted on lemmy.ml. It's been well known for a while that the admins/mods of that instance have, let's say, rather extremist and one-sided political views. In short, they're what's colloquially referred to as tankies. This wouldn't be much of an issue if they didn't regularly abuse their admin/mod status to censor and silence people who dissent from their political beliefs and, for example, post things critical of China, Russia, the USSR, socialism, ...

As an example, there was a thread today about the anniversary of the Tiananmen Massacre. When I was reading it, there were mostly posts critical of China in the thread, along with some whataboutist/denialist replies critical of the USA and the West. In terms of votes, the posts critical of China were definitely getting the most support.

I posted a comment in this thread linking to "https://archive.ph/2020.07.12-074312/https://imgur.com/a/AIIbbPs" (WARNING: graphic content), which describes aspects of the atrocities that aren't widely known even in the West, along with supporting evidence. My comment was promptly removed for violating the "Be nice and civil" rule. When I looked back at the thread, I noticed that all posts critical of China had been removed, while the whataboutist and denialist comments were left in place.

This is what the modlog of the instance looks like:

Definitely a trend there, wouldn't you say?
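
And you don't have to take a screenshot's word for it: the modlog is public, so you can pull it straight from the API yourself. A rough sketch; the jq filter and exact field names may differ between Lemmy versions:

# fetch recent mod actions from lemmy.ml's public modlog:
# curl -s 'https://lemmy.ml/api/v3/modlog' | jq '.removed_comments[].mod_remove_comment.reason'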

When I called them out on their one-sided censorship, with a screenshot of the modlog above, I was promptly banned from every community on lemmy.ml that I had ever participated in.

Proof:

So many of you will now probably think something like: "So what, it's the fediverse, you can use another instance."

The problem with this reasoning is that many of the popular communities are actually on lemmy.ml, and they're not so easy to replace. I mean, in terms of content and engagement Lemmy is already a pretty small place as it is, so it's rather pointless sitting in, for example, /c/linux@some.random.other.instance.world where there's nobody to discuss anything with.

I'm not sure if there's a solution here, but I'd like to urge people to avoid lemmy.ml hosted communities in favor of communities on more reasonable instances.

6
submitted 2 years ago* (last edited 2 years ago) by SpaceCadet@feddit.nl to c/debian@lemmy.ml
 

I have a small server in my closet that runs four Debian 12 virtual machines under KVM/libvirt. The virtual machines have been running fine for months. They have unattended-upgrades enabled, and I generally leave them alone; I only reboot them periodically so that the latest kernel upgrades get applied.

All the machines have an LVM configuration. Generally it's a debian-vg volume group on /dev/vda for the operating system, which has been configured automatically by the installer, and a vgdata volume group on /dev/vdb for everything else. All file systems are simple ext4, so nothing fancy. (*)

A couple of days ago, one of the virtual machines didn't come up after a routine reboot and dumped me into a maintenance shell. It complained that it couldn't mount the filesystems that were on vgdata. First I tried simply rebooting the machine, but it kept dumping me into maintenance. Investigating a bit deeper, I noticed that vgdata and the block device /dev/vdb were detected, but the volume group was inactive and none of its logical volumes were found. I ran vgchange -a y vgdata and that brought it back online. After several test reboots the problem didn't recur, so it seemed to be fixed permanently.

I was willing to write it off as a glitch, but then a day later I rebooted one of the other virtual machines, and it also dumped me into maintenance with the same error on its vgdata. Again, running vgchange -a y vgdata fixed the problem. The same error twice in two days on two different virtual machines is not a coincidence, so something is going on here, but I can't figure out what.
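
For reference, the recovery sequence from the maintenance shell was roughly this (reconstructed from memory):

# vgs                      <- vgdata is listed, but inactive
# lvscan                   <- its logical volumes show as inactive/missing
# vgchange -a y vgdata
  2 logical volume(s) in volume group "vgdata" now active
# systemctl default        <- resume the normal boot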

I looked at the host logs, but I didn't find anything suspicious that could indicate, for example, a hardware error. I should also mention that the virtual disks of the two machines live on entirely different physical disks: VM1 is on an HDD and VM2 on an SSD.

I also checked whether these VMs had at any point been running kernel 6.1.64-1, the one with the recent ext4 corruption bug, but that does not appear to be the case.

Below is an excerpt of the systemd journal on the failed boot of the second VM, with what I think are the relevant parts. Full pastebin of the log can be found here.

Dec 16 14:40:35 omega lvm[307]: PV /dev/vdb online, VG vgdata is complete.
Dec 16 14:40:35 omega lvm[307]: VG vgdata finished
...
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvbinaries.device: Job dev-vgdata-lvbinaries.device/start timed out.
Dec 16 14:42:05 omega systemd[1]: Timed out waiting for device dev-vgdata-lvbinaries.device - /dev/vgdata/lvbinaries.
Dec 16 14:42:05 omega systemd[1]: Dependency failed for binaries.mount - /binaries.
Dec 16 14:42:05 omega systemd[1]: Dependency failed for local-fs.target - Local File Systems.
Dec 16 14:42:05 omega systemd[1]: local-fs.target: Job local-fs.target/start failed with result 'dependency'.
Dec 16 14:42:05 omega systemd[1]: local-fs.target: Triggering OnFailure= dependencies.
Dec 16 14:42:05 omega systemd[1]: binaries.mount: Job binaries.mount/start failed with result 'dependency'.
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvbinaries.device: Job dev-vgdata-lvbinaries.device/start failed with result 'timeout'.
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvdata.device: Job dev-vgdata-lvdata.device/start timed out.
Dec 16 14:42:05 omega systemd[1]: Timed out waiting for device dev-vgdata-lvdata.device - /dev/vgdata/lvdata.
Dec 16 14:42:05 omega systemd[1]: Dependency failed for data.mount - /data.
Dec 16 14:42:05 omega systemd[1]: data.mount: Job data.mount/start failed with result 'dependency'.
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvdata.device: Job dev-vgdata-lvdata.device/start failed with result 'timeout'.
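
One thing I still want to rule out is the VG's autoactivation flag: the log shows udev considered the VG complete ("VG vgdata finished") but never activated it, which is what you'd expect if autoactivation were off. Something like this should tell (option and field names per lvm2 2.03, which is what Debian 12 ships; worth double-checking):

# show whether autoactivation is set on each VG:
# vgs -o vg_name,autoactivation

# re-enable it if it's off:
# vgchange --setautoactivation y vgdata

# and check that event-based activation is enabled in lvm.conf:
# grep -n 'event_activation' /etc/lvm/lvm.conf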

(*) For reference, the disk layout on the affected machine is as follows:

# lsblk 
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
vda                   254:0    0   20G  0 disk 
├─vda1                254:1    0  487M  0 part /boot
├─vda2                254:2    0    1K  0 part 
└─vda5                254:5    0 19.5G  0 part 
  ├─debian--vg-root   253:2    0 18.6G  0 lvm  /
  └─debian--vg-swap_1 253:3    0  980M  0 lvm  [SWAP]
vdb                   254:16   0   50G  0 disk 
├─vgdata-lvbinaries   253:0    0   20G  0 lvm  /binaries
└─vgdata-lvdata       253:1    0   30G  0 lvm  /data

# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  debian-vg   1   2   0 wz--n- <19.52g    0 
  vgdata      1   2   0 wz--n- <50.00g    0 

# pvs
  PV         VG        Fmt  Attr PSize   PFree
  /dev/vda5  debian-vg lvm2 a--  <19.52g    0 
  /dev/vdb   vgdata    lvm2 a--  <50.00g    0 

# lvs
  LV         VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root       debian-vg -wi-ao----  18.56g                                                    
  swap_1     debian-vg -wi-ao---- 980.00m                                                    
  lvbinaries vgdata    -wi-ao----  20.00g                                                    
  lvdata     vgdata    -wi-ao---- <30.00g 