thingsiplay

joined 2 years ago
[–] [email protected] 1 points 4 hours ago

That really hurts. I lost so many save files back then, from cartridges where the battery died (or the cart was sold) and from memory card corruption or other problems on the PlayStation. Even today I lose save files because of HD problems (Steam games without Cloud Save, or emulator saves I did not back up).

But nothing compares to a save file with thousands of hours and 20 years of managing it. Man, reading this I feel so sorry and sad for this person. Thanks, Nintendo. That's what happens when you don't let people access save files directly and at least back them up offline.

[–] [email protected] 2 points 6 hours ago

I started with RetroPie a long time ago too. :-) RetroPie is an operating system that is basically built to be a gaming distribution. It uses RetroArch in its backend for the emulators and Emulation Station for the UI. When you select and run a game in Emulation Station (the UI on operating system level), it runs RetroArch with a core and a game. While ingame, you can open the RetroArch menu as well.
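
Under the hood that launch is just a plain RetroArch call. Here is a minimal sketch of what Emulation Station effectively runs when you pick a SNES game (the core and ROM paths are only examples from memory; RetroPie's real paths may differ):

# -L selects the libretro core; the last argument is the game to load.
retroarch -L /opt/retropie/libretrocores/lr-snes9x/snes9x_libretro.so \
    "/home/pi/RetroPie/roms/snes/Chrono Trigger (USA).sfc"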

In short: RetroPie is an operating system set up to use RetroArch for the emulation.

[–] [email protected] 3 points 10 hours ago

They did that for Ubuntu. I mean, it makes sense on Ubuntu. For everything else you can install it through Flatpak, your distribution's own package manager (but that is often not the newest version), AppImage, Steam or many other methods. It's amazing how many ways there are to install this and how many platforms it's available on!

[–] [email protected] 7 points 10 hours ago

I'm a huuuge fan of RetroArch and have set up over 80 cores :D. I only use standalone emulators that are not available as cores in RetroArch (such as Yuzu and RPCS3).

The article itself is a bit bare bones though. Here is the official installation documentation for Linux: https://docs.libretro.com/guides/install-gnu/ I personally have it installed through the official Archlinux package, but they are slow in updating it. It's been more than a month now and they are still on an older version. Bleeding edge? Who says that! It's the reason why I'm thinking about switching to the Flatpak version. Maybe, maybe not.
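
For reference, the two install routes I mean, with the package names as far as I remember them (double check for your setup):

# Official Archlinux repo package (can lag behind upstream releases):
sudo pacman -S retroarch
# Flatpak from Flathub (usually follows new releases more closely):
flatpak install flathub org.libretro.RetroArch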

When you install it through the official package in Archlinux, you have to change some paths in the settings for where cores are saved. That way you can use the RetroArch internal updater, so it can download and install cores into the directory you want. Because if you install RetroArch from the official package, it's managed and installed in a directory the normal user has no access to without sudo. I changed the cores path to "~/.config/retroarch/cores". Note: Flatpak has its own file structure and paths, so do not do this there.
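
If you would rather edit the config file than click through the settings menu, this is roughly what it boils down to (the key name is from memory, so verify it against your retroarch.cfg, and edit while RetroArch is closed, because it rewrites the file on exit):

# Create a user-writable cores directory:
mkdir -p ~/.config/retroarch/cores
# Then in ~/.config/retroarch/retroarch.cfg point the cores path at it:
#   libretro_directory = "~/.config/retroarch/cores"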

There is also an official RetroArch version for Steam. I use that on my Steam Deck. The good thing is, it's always up to date on day one of a RetroArch release. And it has Cloud Save support for the save files of games. The negative is that not all cores are supported. However, you can install them manually in the cores directory, but then you have to update them manually too. I also have my own custom controls and menus for RetroArch on Steam Deck, but have not uploaded them yet. Really, really need to do this...
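
Manually installing an unsupported core into the Steam version looks roughly like this (the Steam library path is an assumption based on a default Linux install; adjust it to yours):

# Copy a core obtained elsewhere into the Steam install's cores directory:
cp mgba_libretro.so ~/.local/share/Steam/steamapps/common/RetroArch/cores/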

Last but not least, a shameless plug of a post I made about RetroArch Shaders: https://thingsiplay.game.blog/2024/10/19/showcase-for-retroarch-shaders-2024/

[–] [email protected] 2 points 1 day ago

PS I hate to be the UUOC person. I’m sure you’re already aware and it was a deliberate choice.

I wish it was. I honestly forgot. Yeah, shame on me. :D Before this, there was actually a different command at the position of cat, which I replaced with this one. And I didn't think of giving the file to awk directly instead. I'll update the line with this suggestion and a suggestion from someone else.

[–] [email protected] 2 points 2 days ago (2 children)

They are not exactly the same. I always default to piping it, because I never remember which to use when. And I had to look it up again to make sure I was not hallucinating: https://unix.stackexchange.com/questions/76049/what-is-the-difference-between-sort-u-and-sort-uniq/76095#76095
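
A minimal sketch of where they differ (GNU sort; with a sort key, -u deduplicates on the key alone, while uniq always compares whole lines):

printf 'a 1\na 2\n' | sort -k1,1 -u      # prints only "a 1" (unique by key field 1)
printf 'a 1\na 2\n' | sort -k1,1 | uniq  # prints both lines (whole lines differ)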

 

It only works with the first command of each line in the recorded history, not with sub shells or chained commands (for a history line like cd /tmp && ls | wc -l, only cd is picked up).

#!/usr/bin/env bash

# 1. history and $HISTFILE do not work in scripts. Therefore cat with a direct
#    path is needed.
# 2. awk gets the first part of the command name.
# 3. List is then sorted and duplicate entries are removed.
# 4. type -P expands command names to paths, similar to which. It forces a
#    PATH search even for names that are shadowed by aliases or functions.
# 5. Final output is then sorted again.

type -P $(cat ~/.bash_history | awk '{print $1}' | sort | uniq) | sort

After reading a blog post, I had this script in mind and wanted to see if it's possible. This is just for fun and I don't have an actual use for it. Maybe some parts of it might inspire you to do something too. So have fun.

Edit 1:

After some suggestions from the comments, here is a little shorter version. sort | uniq can be replaced by sort -u, as their output should be identical in this case (in certain circumstances they can have different effects!). Also, someone pointed out my useless use of cat, as the file can be given to awk directly. And for good reason. :D Enjoy, and thanks for all the suggestions.

type -P $(awk '{print $1}' ~/.bash_history | sort -u) | sort

I still have no real use case for this one-liner, it's mainly just for fun.

[–] [email protected] 3 points 2 days ago

That's just speculation and accusation without proof. Let us not do this here.

[–] [email protected] 5 points 3 days ago (2 children)

It's the first time I've seen a ban. This must be really bad...

[–] [email protected] 0 points 3 days ago (3 children)

Right, but the other fork became its own project. I have no problem with that, as long as the original code's license is not changed.

[–] [email protected] 9 points 3 days ago

I don't. I just copy the link and enter it in FreeTube directly.

[–] [email protected] 1 points 3 days ago (8 children)

I don't see a problem. If someone forks it and changes the license to something proprietary, then their fork is proprietary. The original software is still Open Source. People act as if the original license changed.

[–] [email protected] 28 points 3 days ago (3 children)

FreeTube, a desktop client to watch YouTube videos without an account. Why not use a browser without an account? Well, it has a watch history, favorites and subscriptions as if you had an account - but it's all an "offline" account, without Google involved (besides watching their videos). So it manages an account with subscriptions, without a YouTube account. Plus it integrates an ad blocker and SponsorBlock, and has a few more features up its sleeve.

kdotool, an xdotool-like program for KDE on Wayland. Just learned about it when setting up another application. But I will use it independently too.

There are more, but this is what came to my mind right now.

 

Direct link to the image in the browser: https://cosmos2025.iap.fr/fitsmap/?ra=150.1203188&dec=2.1880050&zoom=2

Article copied:


In the name of open science, the multinational scientific collaboration COSMOS on Thursday released the data behind the largest map of the universe. Called the COSMOS-Web field, the project, with data collected by the James Webb Space Telescope (JWST), consists of all the imaging and a catalog of nearly 800,000 galaxies spanning nearly all of cosmic time. And it’s been challenging existing notions of the infant universe.

“Our goal was to construct this deep field of space on a physical scale that far exceeded anything that had been done before,” said UC Santa Barbara physics professor Caitlin Casey, who co-leads the COSMOS collaboration with Jeyhan Kartaltepe of the Rochester Institute of Technology. “If you had a printout of the Hubble Ultra Deep Field on a standard piece of paper,” she said, referring to the iconic view of nearly 10,000 galaxies released by NASA in 2004, “our image would be slightly larger than a 13-foot by 13-foot-wide mural, at the same depth. So it’s really strikingly large.”

[Video: An animated zoom-out from the center of the COSMOS-Web field to a full-size comparison between COSMOS-Web and the Hubble Ultra Deep Field]

The COSMOS-Web composite image reaches back about 13.5 billion years; according to NASA, the universe is about 13.8 billion years old, give or take one hundred million years. That covers about 98% of all cosmic time. The objective for the researchers was not just to see some of the most interesting galaxies at the beginning of time but also to see the wider view of cosmic environments that existed during the early universe, during the formation of the first stars, galaxies and black holes.

“The cosmos is organized in dense regions and voids,” Casey explained. “And we wanted to go beyond finding the most distant galaxies; we wanted to get that broader context of where they lived.”

A 'big surprise'

And what a cosmic neighborhood it turned out to be. Before JWST turned on, Casey said, she and fellow astronomers made their best predictions about how many more galaxies the space telescope would be able to see, given its 6.5 meter (21 foot) diameter light-collecting primary mirror, about six times larger than Hubble’s 2.4 meter (7 foot, 10 in) diameter mirror. The best measurements from Hubble suggested that galaxies within the first 500 million years would be incredibly rare, she said.

“It makes sense — the Big Bang happens and things take time to gravitationally collapse and form, and for stars to turn on. There’s a timescale associated with that,” Casey explained. “And the big surprise is that with JWST, we see roughly 10 times more galaxies than expected at these incredible distances. We’re also seeing supermassive black holes that are not even visible with Hubble.” And they’re not just seeing more, they’re seeing different types of galaxies and black holes, she added.

'Lots of unanswered questions'

While the COSMOS-Web images and catalog answer many questions astronomers have had about the early universe, they also spark more questions.

“Since the telescope turned on we’ve been wondering ‘Are these JWST datasets breaking the cosmological model? Because the universe was producing too much light too early; it had only about 400 million years to form something like a billion solar masses of stars. We just do not know how to make that happen,” Casey said. “So, lots of details to unpack, and lots of unanswered questions.”

In releasing the data to the public, the hope is that other astronomers from all over the world will use it to, among other things, further refine our understanding of how the early universe was populated and how everything evolved to the present day. The dataset may also provide clues to other outstanding mysteries of the cosmos, such as dark matter and physics of the early universe that may be different from what we know today.

“A big part of this project is the democratization of science and making tools and data from the best telescopes accessible to the broader community,” Casey said. The data was made public almost immediately after it was gathered, but only in its raw form, useful only to those with the specialized technical knowledge and the supercomputer access to process and interpret it. The COSMOS collaboration has worked tirelessly for the past two years to convert raw data into broadly usable images and catalogs. In creating these products and releasing them, the researchers hope that even undergraduate astronomers could dig into the material and learn something new.

“Because the best science is really done when everyone thinks about the same data set differently,” Casey said. “It’s not just for one group of people to figure out the mysteries.”

[Photo: Caitlin Casey wears a puffy coat in front of a lake. Credit: Courtesy photo]

Caitlin Casey is an observational astronomer with expertise in high-redshift galaxies. She uses the most massive and unusual galaxies at early times to test fundamental properties of galaxy assembly (including their gas, stars, and dust) within a ΛCDM cosmological framework.

For the COSMOS collaboration, the exploration continues. They’ve headed back to the deep field to further map and study it.

“We have more data collection coming up,” she said. “We think we have identified the earliest galaxies in the image, but we need to verify that.” To do so, they’ll be using spectroscopy, which breaks up light from galaxies like a prism, to confirm the distance of these sources (more distant = older). “As a byproduct,” Casey added, “we’ll get to understand the interstellar chemistry in these systems through tracing nitrogen, carbon and oxygen. There’s a lot left to learn and we’re just beginning to scratch the surface.”

The COSMOS-Web image is available to browse interactively; the accompanying scientific papers have been submitted to the Astrophysical Journal and Astronomy & Astrophysics.

 

I like listening to oldschool video game music. Recently I listened to some music from games I never played, and one song in particular blew my mind. It's wonderful, and since it lives rent-free in my head, I keep coming back to it over and over again. I'm loving it.

Listen on:

"Sacred Somnom Woods" in Mario & Luigi - Dream Team for the Nintendo 3DS. The composer is the well-known Yoko Shimomura, also known for her work on Street Fighter 2, Kingdom Hearts and many more legendary games.

To me this track has Breath of the Wild or Tears of the Kingdom vibes. Because I did not play the actual Mario & Luigi games, I always interpret it as a Zelda song now. Its name contributes to this too! Do you also sometimes have game music that captures you like this?

 

cross-posted from: https://beehaw.org/post/20234081

2 days ago I made a post that the game would not run on a Linux desktop PC (but it would on the Steam Deck). 10 hours ago they released an update that resolves this issue and makes the game run through Proton on a Linux desktop PC.

- The Beta now supports players on Linux thru Proton

I can confirm it does run, and I just did the short tutorial. I still have to play more, but wanted to inform anyone who is interested in the game.

 

I want to share some thoughts I had recently about YouTube spam comments. We all know these early bots in the YouTube comment section, with those "misleading" profile pictures and obvious bot-like comments. Those comments are often either random, about any topic, or copied from other users.

OK, why am I telling you this? Well, I think these bots are there to be recognized as bots. Their job is to be seen as a bot, then deleted and ignored. That way everyone feels safe, thinking all bots are now deleted. But in reality there are more sophisticated bots among us. So the easy bots' job is to get deleted and basically mislead us, so we don't think any are left, because those got deleted.

What do you think? Sounds plausible, doesn't it? Or am I just paranoid? :D

 

Splitgate 2 opened its public beta today or yesterday. Unfortunately the game does not run on a desktop PC with a Linux operating system. Others have the same problem.

But what's weird is that people claim it works on the Steam Deck, and even the official blog post from the devs says they support the Steam Deck. There is no word about general Linux desktops.

So do the developers treat the Steam Deck like a console and make their games not playable on general purpose Linux desktops? It's weird, because the game is otherwise playable on a general desktop with Windows too. Even the previous game, Splitgate 1 (which they shut down), worked on desktop Linux. It makes no sense!

I'm totally disappointed right now, because I was excited for this game. It has some hero abilities (I like that) and even a map creator.

 

Alternative link: https://skipvids.com/?v=BA_HMsznNKg (Ad-free and does not use YouTube directly)

Technical explanation of why almost all Nintendo 64 games looked so blurry. Kaze Emanuar is an expert in this field; he does a lot of ROM hacks and mods and creates his own Super Mario 64 games with them. So he is quite knowledgeable.

Note: I recommend watching the video at 1.4x speed, or at the very minimum at 1.25x speed.

 

Video description:


In this video, we'll talk about NVIDIA's last several months of pressure to talk about DLSS more frequently in reviews, plus MFG 4X pressure from the company. NVIDIA has repeatedly made comments to GN that interviews, technical discussion, and access to engineers unrelated to MFG 4X and DLSS are made possible by talking about MFG 4X and DLSS. NVIDIA has explicitly stated that this type of content is made "possible" by benchmarking MFG 4X in reviews specifically, despite us separately and independently covering it in other videos, and has made repeated attempts to get multiplied framerate numbers into its benchmark charts. We will not play those games. In the time since, NVIDIA has offered certain unqualified media outlets access to drivers which actual qualified reviewers do not have access to, but allegedly only under the premise of publishing "previews" of the RTX 5060 in advance of its launch. Some outlets were given access to drivers specifically to publish what we believe are puff pieces and marketing while reviewers were blocked.

TIMESTAMPS

00:00 - Giving Access, Then Threatening It
04:29 - Quid Pro Quo
08:28 - Social Manipulation
09:44 - It's Never Good Enough for NVIDIA
12:08 - NVIDIA is Vindictive
14:28 - Stevescrimination
17:38 - Not The First Time
19:00 - Gamers Are Entitled
 

https://browseraudit.com/

I just downloaded the Tor Browser (which is a configured Firefox browser, BTW) using torbrowser-launcher, which automatically downloads and manages the browser. And just for fun's sake, I checked and compared some tests from browseraudit against my current personal Firefox setup. To my surprise, I got more warnings with Tor Browser v14.5 (based on Mozilla Firefox 128.9.0esr) than with my personal Firefox setup v137.0.2 (custom configuration and plugins installed). Both are at the most up-to-date official version.

I just found this interesting and wanted to share with you.

Tor Browser

My Firefox Browser

 

cross-posted from: https://beehaw.org/post/19564932

https://github.com/thingsiplay/crc32sum

# usage: crc32sum [-h] [-r] [-i] [-u] [--version] [path ...]

crc32sum *.sfc
2d206bf7  Chrono Trigger (USA).sfc

Previously I used a Bash script to filter out the checksum from 7z output. That always felt a bit hacky and the output was not very flexible. Plus, the Python script does not rely on any external module or program. Also, the underlying 7z program call would automatically search all files in subdirectories recursively when a directory was given as input; that would have required some additional rework, so I decided it was a better idea to start from scratch in a programming language. So I finally wrote this, to have a bit better control. My previous Bash script can be found here, in case you are curious: https://gist.github.com/thingsiplay/5f07e82ec4138581c6802907c74d4759

BTW, believe it or not, the Bash script running multiple commands starts and executes faster than a Python instance. But the difference is negligible, and the programmable control in Python is much more important to me.


What is this program for?

Calculates the CRC hash for each given file, using Python's integrated zlib module. It has a similar use as MD5 or SHA, but is way, way weaker and simpler. It's a quick and easy method to verify the integrity of files, for example after downloading from the web, to check for data corruption on your external drives, or when creating expected files.
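
The core idea is small enough to sketch right in the shell. This is not the actual crc32sum code, just a minimal stand-in for the same approach: zlib's crc32 over each file's bytes, printed as 8 hex digits in an md5sum-like layout:

python3 - *.sfc <<'EOF'
import sys, zlib

# Hash every file given on the command line with CRC-32.
for path in sys.argv[1:]:
    with open(path, "rb") as f:
        crc = zlib.crc32(f.read()) & 0xFFFFFFFF  # crc32 is already unsigned on Python 3; the mask is defensive
    print(f"{crc:08x}  {path}")
EOF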

It is important to know and understand that CRC-32 is not secure and should never be used cryptographically. Its use is limited to very simple use cases.

Linux does not have a standard program to calculate this CRC. This is a very simple program with output similar to what md5sum offers by default. Why use CRC at all? Most of the time CRC is not required. In fact, I favor MD5 or SHA when possible. But sometimes only a CRC is provided (it's often used by the retro emulation gaming scene). Theoretically CRC should also be faster than the other methods, but I haven't made a performance comparison (frankly, the difference doesn't matter to me).
