e0qdk

joined 2 years ago
[–] [email protected] 2 points 2 years ago (1 children)

I tried messing around with the colors a bit in an image editor and this was the best adaptation I could make: https://files.catbox.moe/03k8sc.png

[–] [email protected] 3 points 2 years ago

Yeah; I also tried subscribing in case that kicks off federation, and searched a few titles to see if they had ended up in the random magazine incorrectly (stuff like that happens sometimes with kbin). The magazine has seen a few microblogs mentioning the channel, and it clearly picked up the avatar/icon, description, etc. somehow, but it doesn't seem to be getting any videos as threads/posts, and I couldn't find any floating around disconnected either. I think kbin most likely doesn't understand what PeerTube is publishing through AP, but there could always be federation weirdness or something.

[–] [email protected] 6 points 2 years ago (3 children)

Doesn't seem to work right on kbin, unfortunately, although it does show up as a magazine: https://kbin.social/m/[email protected]

[–] [email protected] 2 points 2 years ago

I hope you feel better soon!

[–] [email protected] 2 points 2 years ago

Thanks for sharing!

[–] [email protected] 6 points 2 years ago

Reminds me a bit of Kammy Koopa

[–] [email protected] 1 points 2 years ago* (last edited 2 years ago)

So I either need something like this that I could host myself (is something like that even feasible?)

The closest thing I could find that already exists is GPT4All Chat with LocalDocs Plugin. That basically builds a DB of snippets from your documents and then tries to pick relevant stuff based on your query to provide additional input as part of your prompt to a local LLM. There are details about what it can and can't do further down the page. I have not tested this one myself, but this is something you could experiment with.
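If you're curious what that kind of plugin is doing mechanically, here's a rough sketch of the retrieval idea in Python. Purely illustrative, not GPT4All's actual implementation -- a real setup would use embeddings and a vector DB rather than keyword overlap:

```python
# Sketch of retrieval-augmented prompting: rank stored snippets by crude
# keyword overlap with the query and stuff the best matches into the prompt.

def score(snippet: str, query: str) -> int:
    # Count how many query words appear in the snippet (very naive relevance).
    words = set(query.lower().split())
    return sum(1 for w in snippet.lower().split() if w in words)

def build_prompt(snippets: list[str], query: str, top_k: int = 3) -> str:
    # Pick the top_k most relevant snippets and prepend them as context.
    best = sorted(snippets, key=lambda s: score(s, query), reverse=True)[:top_k]
    context = "\n---\n".join(best)
    return (f"Using only the context below, answer the question.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")
```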

Another idea -- if you want to get more into engineering custom tools -- would be to split a document (or documents) you want to interact with into multiple overlapping chunks that fit within the context window (assuming you can get the relevant content out -- PyPDF2's documentation explains why this can be difficult), and then prompt with something like "Does this text contain anything that answers <query>? <chunk>". (May take some experimentation to figure out how to engineer the prompt well.) You could repeat that for each chunk gathering snippets and then do a second pass over all snippets asking the LLM to summarize and/or rate the quality of its own answers (or however you want to combine results).
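For the chunking part, something like this sketch is what I mean (the sizes are made up and would need tuning to your model's context window; you'd probably also want to count tokens rather than characters in practice):

```python
def chunk_text(text: str, chunk_size: int = 2000, overlap: int = 200) -> list[str]:
    """Split text into overlapping chunks so an answer that spans a chunk
    boundary isn't lost entirely."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + chunk_size])
    return chunks

def make_query_prompt(query: str, chunk: str) -> str:
    # The per-chunk prompt suggested above; expect to iterate on the wording.
    return f"Does this text contain anything that answers: {query}?\n\n{chunk}"
```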

Basically you would need to give it two prompts: one for the "map" phase, applied to every snippet to extract relevant info, and a second for the "reduce" phase, which combines two answers at a time (and is then chained).

i.e.:

f(a) + f(b) + f(c) + ... + f(z)

where f(a) is the result of the first extraction on snippet a and + means "combine these two snippets using the second prompt". (You can evaluate in whatever order you feel is appropriate -- including in parallel, if you have enough compute power for that.)
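A sketch of that map/reduce structure in Python, where llm() is a stand-in for however you call your local model (not a real API):

```python
from functools import reduce

def map_phase(llm, query: str, chunks: list[str]) -> list[str]:
    # First prompt: try to extract anything relevant from each chunk independently.
    return [llm(f"Does this text contain anything that answers: {query}?\n\n{c}")
            for c in chunks]

def reduce_phase(llm, answers: list[str]) -> str:
    # Second prompt: fold pairs of partial answers into one combined answer,
    # i.e. the chained "+" operator from above.
    def combine(a: str, b: str) -> str:
        return llm(f"Combine these two partial answers into one:\n\n1) {a}\n\n2) {b}")
    return reduce(combine, answers)
```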

If you have enough context space for it, you could include a summary of the previous state of the conversation as part of the prompts in order to get something like an actual conversation with the document going.

No idea how well that would work in practice (probably very slow!), but it might be fun to experiment with.

[–] [email protected] 5 points 2 years ago

[coreutils-announce] coreutils-8.31 released [stable]

stat now prints file creation time when supported by the file system,
on GNU Linux systems with glibc >= 2.28 and kernel >= 4.11.

https://lists.gnu.org/archive/html/coreutils-announce/2019-03/msg00000.html

(found thanks to this blog post titled "File Creation Time in Linux")
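If you want to check it from a script, GNU stat's %w format specifier prints the birth time (or "-" when the file system or kernel doesn't support it). A quick Python wrapper, for illustration:

```python
import subprocess

def birth_time(path: str) -> str | None:
    """Return the file's creation (birth) time via GNU coreutils stat,
    or None if the file system / kernel doesn't report one."""
    out = subprocess.run(["stat", "-c", "%w", path],
                         capture_output=True, text=True, check=True).stdout.strip()
    return None if out == "-" else out

print(birth_time("/etc/hostname"))  # example path; any file works
```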

[–] [email protected] 11 points 2 years ago (1 children)

Any ways to get around the download failing

I did this incredibly stupid procedure with Firefox yesterday as a workaround for a failing Google Takeout download:

  • backup the .part file from the failed download
  • restart the download (careful -- if you didn't move/back it up, it will be deleted and you will have to download the whole thing again; found this out the hard way on a 50GB+ file... that failed again)
  • immediately pause the new download after it starts writing to disk
  • replace the new .part file with the old .part file from earlier (or -- see [1] below)
  • Firefox might not show progress for a long time, but will eventually continue the download (I saw it reading the file back from disk with iotop so I just let it run)
  • sanity check that you actually got the whole thing and that it is usable (in my case, I knew a hash for the file)

[1] You can actually replace the new .part file with anything that has the same size in bytes as the old file. I replaced it with a file full of zeros and manually merged the end onto the original .part file with a tiny custom Python script (roughly the sketch below), since I had already moved the incomplete file to other media before realizing I could try this. (In my case, the incomplete file would still have been useful even with the last ~1MB cut off.)
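The merge itself was roughly this -- a reconstructed sketch, not the exact script I ran:

```python
import os
import shutil

def merge(original_part: str, resumed_file: str, out_path: str) -> None:
    """Stitch the real data from the failed download (first n bytes) onto the
    tail of the resumed download (whose first n bytes are just zero padding)."""
    n = os.path.getsize(original_part)      # bytes of real data we already had
    with open(out_path, "wb") as out:
        with open(original_part, "rb") as head:
            shutil.copyfileobj(head, out)   # real data from the failed download
        with open(resumed_file, "rb") as tail:
            tail.seek(n)                    # skip the zero padding
            shutil.copyfileobj(tail, out)   # real data from the resumed download
```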

There are probably better options in most cases -- like Thunderbird for mailbox as other people suggested, or rclone for getting stuff from Drive -- but if you need to get Takeout to work and the download keeps failing this may be another option to try.

[–] [email protected] 4 points 2 years ago (1 children)

In pop culture and modern fiction it's used to mean an artificial human -- e.g. see the examples in https://en.wikipedia.org/wiki/Homunculus#In_popular_culture like Fullmetal Alchemist for an idea of what OP was going for. (In this case, more Frankenstein's monster though.)

There is also the "little man who makes things work" idea like a golem -- which is related, but not the sense used here.

[–] [email protected] 2 points 2 years ago (3 children)

"Homunculus" is an artificially created person.

[–] [email protected] 2 points 2 years ago (3 children)

I stopped by my local donut shop and couldn’t find any of the jelly donuts like they eat in Pokemon. Any recommendations?

Shinobu Horror Story

I haven't really been watching much anime lately, but I am still going through Penguindrum -- just, very slowly.

Most of my free time's gone into working on my art tools. I hunkered down this weekend and pretty much completely rewrote my image stitcher; it can now handle graphs of correspondences (solved one pair at a time) and I used it to stitch this image from Penguindrum. I have some more details about it in the thread I posted, if you're curious.

This is the third anime I've encountered Klimt's paintings in; the OPs of Sora no Woto and Elfen Lied are the others. Yes, that's where the butts came from: Klimt's Goldfish. Thanks to whoever explained that on reddit years ago -- you introduced me to Klimt, which is how I recognized this one as a parody of The Kiss. Caution for anyone not familiar: there's a lot of nudity in his art.

2 points · sparkle (media.kbin.social) · submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

4 points · anime_irl (media.kbin.social) · submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

14 points · Time for a haircut (media.kbin.social) · submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

3 points · anime_irl (media.kbin.social)

5 points · anime_irl (media.kbin.social) · submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

28 points · stare (media.kbin.social)

1 point · anime_irl (media.kbin.social)

Not sure where to report this exactly, but media.kbin.social has an expired Let's Encrypt certificate (expired Tue, 03 Oct 2023 14:47:25 GMT), and this is causing problems loading various images across the site -- e.g. user avatars (sometimes), images copied from various Lemmy instances, and image threads made on kbin.

It doesn't affect all images, though, as some are loaded from kbin.social/media instead.
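If anyone wants to reproduce it, a quick check from Python (or just openssl s_client -connect media.kbin.social:443) shows the verification failure:

```python
import socket
import ssl

host = "media.kbin.social"
ctx = ssl.create_default_context()
try:
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host):
            print("certificate validated OK")
except ssl.SSLCertVerificationError as e:
    # At the time of writing, this prints: certificate has expired
    print("verification failed:", e.verify_message)
```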
