thejevans

joined 2 years ago
[–] [email protected] 2 points 2 months ago (2 children)

I appreciate the way you did things. Here is mine. Mine is a bit more hierarchical and a bit more abstracted (especially in the flake), but I wouldn't say one way is better than the other.

https://codeberg.org/jevans/nix-config

[–] [email protected] 1 point 2 months ago

Ah yes, the classic diff eq exam problem

[–] [email protected] 3 points 2 months ago (1 children)

It sounds like what they ultimately want is one place to see both read-it-later stuff and starred RSS articles. My read is that they're proposing one way to do it, but that way isn't really workable: I don't know of any client that is both an RSS reader and a read-it-later client (for Pocket, Wallabag, or anything else).

If OP wants one place to see both, their best bet is to find a read-it-later server that can generate RSS feeds, subscribe to those, and now everything is RSS and behaves the same. Wallabag is a great option for that and is self-hostable.

This is exactly what I do and it works great.
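To make the "everything becomes RSS" point concrete, here's a minimal Python sketch using only the standard library. The feed XML below is hand-written stand-in data, and the exact Wallabag feed URL pattern (something like `/feed/<user>/<token>/unread`) depends on your instance, so check its feeds settings page:

```python
# Once Wallabag publishes your read-it-later items as RSS, the same
# parsing code handles them as any other feed subscription.
import xml.etree.ElementTree as ET

# Hand-written stand-in for what a Wallabag-generated RSS feed looks like.
sample_feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>wallabag - unread</title>
    <item><title>Saved article A</title><link>https://example.com/a</link></item>
    <item><title>Saved article B</title><link>https://example.com/b</link></item>
  </channel>
</rss>"""

def feed_items(xml_text):
    """Return (title, link) pairs from an RSS 2.0 feed string."""
    root = ET.fromstring(xml_text)
    return [
        (item.findtext("title"), item.findtext("link"))
        for item in root.iter("item")
    ]

print(feed_items(sample_feed))
```

The point is just that once the read-it-later server speaks RSS, nothing downstream has to care where an item came from.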

[–] [email protected] 4 points 2 months ago (3 children)

http://wallabag.it/ can publish your read-it-laters to RSS

[–] [email protected] 18 points 2 months ago (1 children)

It's even simpler than that: in the first instance, a human learned a thing. In the second instance, a bunch of humans wrote software to ingest art and spit out some Frankenstein of it. That software is specifically designed to replace artists, many of whom likely had their art used as inputs to it without their consent.

In both cases humans did things. The first is normal, the second is shitty.

[–] [email protected] 3 points 3 months ago (3 children)

Sorry, just to be clear, are you equating a human learning to an organization scraping creative works as inputs for their software?

[–] [email protected] 5 points 3 months ago

The OSI doesn't require open access to training data for AI models to be considered "open source", unfortunately. https://opensource.org/ai/open-source-ai-definition

I agree that "open weights" is a more apt description, though.

[–] [email protected] 6 points 3 months ago (1 children)

Uh, sure. My point is that sharing weights is analogous to sharing a compiled binary, not source code.

[–] [email protected] 13 points 3 months ago

"Wait, so we have all the technology we need to stop climate change, but we have to sacrifice some profits to do so?

Well, since it's impossible to stop climate change with current technology, I guess we just have to dump chemicals into the atmosphere and hope for the best."

[–] [email protected] 10 points 3 months ago (10 children)

The definition of "open source" AI sucks. It could just mean that the generated model weights are shared under an open source license. If you don't have the code used to train the model under an open source license, or you can't fully reproduce the model using the code they share and open source datasets, then calling a model "open source" feels weird as hell to me.

At the same time, I don't know of a single modern model that only used training data that was taken with informed consent from the creators of that data.

[–] [email protected] 1 point 3 months ago

Invidious is switching to a new paradigm where the part that talks to YouTube is split out into its own service called invidious-companion. While it's not part of the current release, they have instructions for setting it up, and it's what I'm currently using. The only things that don't work right now are live videos and the Clipious Android TV app (the phone app works fine). If you don't need either of those, I recommend starting with invidious-companion.

https://docs.invidious.io/companion-installation/
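For a rough picture of what the split looks like, here's an illustrative docker-compose sketch. The service names, images, and ports below are my assumptions, not copied from the docs, and the config keys that wire Invidious to the companion are deliberately left out; follow the companion-installation page above for the real values:

```yaml
# Illustrative layout only: Invidious proper plus the separate
# invidious-companion service that handles the YouTube-facing requests.
services:
  invidious:
    image: quay.io/invidious/invidious:latest
    ports:
      - "3000:3000"
    # config.yaml must point Invidious at the companion service;
    # the exact keys are in the companion-installation docs.
  companion:
    image: quay.io/invidious/invidious-companion:latest
    # Not exposed publicly; Invidious reaches it over the compose network.
```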
