I imagine that was part of it, but I doubt it's the actual main reason. More of a post-hoc justification.
Or if you just ignore federal courts, which seems to be the current fashion.
This is why some cities have banned the rental services. Paris still has plenty of electric scooters, but the rental fleets are gone: you keep the micromobility benefits of scooters without scooters lying around everywhere.
Rclone can do file mounts as well as sync.
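For example, something like this (the "mydrive" remote is a placeholder you'd first set up with rclone config):

    rclone sync ~/docs mydrive:docs                                # one-way sync of a local dir to the remote
    rclone mount mydrive:docs ~/mnt/docs --vfs-cache-mode writes   # expose the remote as a local filesystem

The mount needs FUSE (or WinFsp on Windows), and --vfs-cache-mode writes makes it behave better with apps that expect normal file semantics.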
A lot of the answers here are short or quippy, so here's a more detailed take. LLMs don't "know" how good a source is. They are word-association machines, and very good ones. When you use something like Perplexity, an external API feeds text from the search results into the LLM, which then summarizes it in (hopefully) a coherent way. There are ways to reduce the hallucination rate and check factual accuracy, e.g. by comparing the generated text against authoritative information. But how much of that Perplexity et al. actually employ, I have no idea.
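A minimal sketch of that retrieve-then-summarize pipeline, with search_web() and llm_complete() as hypothetical stand-ins for a real search API and a real LLM API:

    # Sketch of a Perplexity-style pipeline: retrieve, then summarize.
    # search_web() and llm_complete() are hypothetical stand-ins.

    def search_web(query: str) -> list[str]:
        """Return text snippets from a search API (stand-in)."""
        raise NotImplementedError("wire up a real search API here")

    def llm_complete(prompt: str) -> str:
        """Return a completion from an LLM API (stand-in)."""
        raise NotImplementedError("wire up a real LLM API here")

    def answer(query: str) -> str:
        snippets = search_web(query)
        context = "\n\n".join(snippets)
        # The model doesn't judge source quality itself; it just
        # summarizes whatever text the retrieval step hands it.
        prompt = (
            "Answer the question using only the sources below, "
            "and cite which source each claim comes from.\n\n"
            f"Sources:\n{context}\n\nQuestion: {query}"
        )
        return llm_complete(prompt)

The "use only the sources, cite them" instruction is one of the cheap mitigations; the stronger ones check the generated claims against the retrieved text after the fact.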
I feel like this article is exactly the type of thing it's criticizing.
What is actually happening to the computer in the image?
I think you have the wrong full generation parameters here.
Can you link the feeds?
The problem is that while LLMs can translate, it's still machine translation and isn't always accurate. And it won't stop at translation: "AI" will get applied to everything that looks like it might vaguely fit, and that will stifle productivity.
Is the code available somewhere?
I know. I have NodeBB as a backup.