I mean it's very easy to side with SAG-AFTRA and the WGA considering he literally has no other option (all major studios are union).
They will make it open source, just tremendously complicated and expensive to comply with.
In general, if you see a group proposing regulations, it's usually to cement their own position: e.g. OpenAI is a frontrunner in ML for the masses, but doesn't really have a technical edge over anyone else, therefore they run to Congress with "please regulate us".
Regulatory compliance is always expensive and difficult, which means it favors people that already have money and systems running right now.
There are so many ways this can be broken, intentionally or unintentionally. It's also a great way to identify e.g. government critics in order to shut them down (e.g. if you are Chinese and everything is uniquely tagged to you: would you write about Tiananmen Square?), or to get monopolies on (dis)information.
This is not literally forcing everyone to get a license for producing creative or factual work, but it's very close, since you can easily discriminate against any creative or factual sources you deem unwanted.
In short, even if this is an absolutely flawless, perfect implementation of what they want to do, it will have catastrophic consequences.
The "adequate covering" of our distribution p
is also pretty self-explanatory: We don't need to see the statement "elephants are big" a thousand times to learn it, but we do need to see it at least once:
Think of the distribution p as e.g. defining a function on the real numbers. We want to learn that function using a finite amount of samples. It now makes sense to place our samples at interesting points (e.g. where the function changes direction), rather than just randomly throwing billions of points at the problem.
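To make that concrete, here is a minimal sketch (the target function, the sample budget of 8 points, and the hand-picked sample locations are all invented for illustration): spending the budget near the one interesting spot typically recovers the function much better than scattering it randomly.

import numpy as np

rng = np.random.default_rng(0)

# Toy version of "place samples at interesting points": the target function has
# one interesting spot (a jump at x = 0.5); everywhere else it is flat.
def f(x):
    return (x >= 0.5).astype(float)

grid = np.linspace(0, 1, 1001)
budget = 8

# Strategy 1: throw the sample budget at random locations.
random_x = np.sort(rng.uniform(0, 1, budget))

# Strategy 2: spend most of the budget near the interesting point.
informed_x = np.array([0.0, 0.2, 0.4, 0.48, 0.49, 0.51, 0.52, 1.0])

def reconstruction_error(xs):
    """Mean absolute error of a piecewise-linear reconstruction from samples xs."""
    return np.mean(np.abs(np.interp(grid, xs, f(xs)) - f(grid)))

print("random placement:  ", reconstruction_error(random_x))    # typically worse
print("informed placement:", reconstruction_error(informed_x))  # captures the jump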
That means that even if our estimator is bad (i.e. it can barely distinguish real and fake data), it is still better than just randomly sampling (e.g. you can say "let's generate 100 samples of law, 100 samples of math, 100 samples of XYZ,..." rather than just having a big mush where you hope that everything appears).
That makes a few assumptions: the estimator is better than 0% accurate, the estimator has no statistical bias (e.g. the estimator didn't learn things like "add all sentences that start with an A", since that would shift our distribution), and some other things that are too intricate to explain here.
Importantly: even if your estimator is bad, it is better than not having it. You can also manually tune it towards being a little bit biased, either to reduce variance (e.g. let's filter out all HTML code), or to reduce the impact of certain real-world effects (like the fact that most stuff on the internet is English: you may want to balance that down to get a more multilingual model).
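As a toy sketch of that kind of manual biasing (the documents, quality scores, language tags, and the 0.5 threshold below are all made up; in practice the scores would come from your estimator):

import random

# Toy corpus: each document carries a score from a (hypothetical) weak quality
# estimator plus a language tag; all values are invented for illustration.
corpus = [
    {"text": "<div><span>click here</span></div>",      "quality": 0.05, "lang": "en"},
    {"text": "Elephants are big animals.",               "quality": 0.80, "lang": "en"},
    {"text": "Elefanten sind grosse Tiere.",              "quality": 0.75, "lang": "de"},
    {"text": "Les elephants sont de grands animaux.",    "quality": 0.75, "lang": "fr"},
]

# Manual bias 1: filter out documents the estimator thinks are junk (e.g. raw HTML).
filtered = [doc for doc in corpus if doc["quality"] > 0.5]

# Manual bias 2: rebalance languages so English does not dominate the sample.
per_lang = {}
for doc in filtered:
    per_lang.setdefault(doc["lang"], []).append(doc)

n_per_lang = 1  # take the same number of documents from every language
balanced = [doc for docs in per_lang.values() for doc in random.sample(docs, n_per_lang)]

for doc in balanced:
    print(doc["lang"], doc["text"])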
However, you have to note here that these are LANGUAGE MODELS. They are not everything models.
These models don't aim for factual accuracy, nor do they have any way of verifying it: That's simply not the purview of these systems.
People use them as everything models because, empirically, there's a lot more true stuff than nonsense in those scrapes and language models have to know something about the world to e.g. resolve ambiguity, but these are side effects of the model's training as a language model.
If you have a model that produces completely realistic (but semantically wrong) language, that's still good data for a language model.
"Good data" for a language model does not have to be "true data", since these models don't care about truth: that's not their objective!
They just complete sentences by predicting the next token, which is independent of factuality.
There are people working on making these models more factual (same idea: you bias your estimator towards things that are more likely to be true, like boosting reliable sources such as Wikipedia, rather than training on uniformly weighted web scrapes), but to do that you need a lot more overview over your data, for which you need more efficient models, for which you need better distributions, for which you need better estimators (though in that case they would be "factuality estimators").
In general though, the same "better than nothing" sentiment applies: if you have a sampling strategy that is not completely wrong, you can still beat models trained on completely random samples. If your estimator is good, you can substantially beat them (and LLMs are pretty good at almost everything, which means you will get pretty good samples if you just sample according to the probability that the LLM tells you "this data is good").
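A rough sketch of that sampling strategy, with a fake stand-in for the LLM-based quality estimator (the quality_score function and the example texts are invented for illustration):

import random

def quality_score(text: str) -> float:
    """Stand-in for an estimator (e.g. an LLM asked "is this good data?").
    Here we just fake a score; in practice this would be a model call."""
    return 0.9 if "theorem" in text else 0.2

candidates = [
    "A theorem states that every group of prime order is cyclic.",
    "lol click here for FREE prizes!!!",
    "The theorem of Pythagoras relates the sides of a right triangle.",
    "asdf asdf asdf asdf",
]

# Sample according to the probability the estimator assigns to "this data is good",
# instead of sampling uniformly from the raw scrape.
weights = [quality_score(text) for text in candidates]
chosen = random.choices(candidates, weights=weights, k=2)
print(chosen)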
For actually making sure that the stuff these models produce is true, you need very different systems that actually model facts, rather than just modelling language. Another way is to remove the bottleneck of machine learning models with respect to accuracy (i.e. you build a model that may be bad, but can never give you a wrong answer):
One example would be vector-search engines that, like search engines, retrieve information from a corpus based on similarity as predicted by a machine learning model. Since you retrieve from a fixed corpus (like Wikipedia), the model will never give you wrong information (assuming the corpus is not wrong)! A bad model may simply fail to find the correct Wikipedia entry to present to you.
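A minimal sketch of that retrieval idea, using a toy bag-of-words "embedding" in place of a real learned encoder (the three-sentence corpus is obviously made up):

import re
import numpy as np

# Tiny fixed corpus standing in for e.g. Wikipedia paragraphs.
corpus = [
    "The elephant is the largest living land animal.",
    "Paris is the capital and most populous city of France.",
    "Python is a high-level programming language.",
]

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

vocab = sorted({word for doc in corpus for word in tokenize(doc)})

def embed(text):
    """Toy bag-of-words 'embedding'; a real system would use a learned encoder."""
    words = tokenize(text)
    return np.array([words.count(word) for word in vocab], dtype=float)

doc_vectors = np.stack([embed(doc) for doc in corpus])

def retrieve(query):
    """Return the corpus entry closest to the query under cosine similarity."""
    q = embed(query)
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q) + 1e-9)
    return corpus[int(np.argmax(sims))]

# The answer always comes verbatim from the corpus: a bad model can at worst
# return an irrelevant entry, it cannot invent a new "fact".
print(retrieve("what is the largest land animal"))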
Yes: keep in mind that with "good" nobody is talking about the content of the data, but rather how statistically interesting it is for the model.
Really, what machine learning is doing is trying to deduce a probability distribution q from a sampled distribution x ~ p(x).
The problem with statistical learning is that we only ever see an infinitesimally small amount of the true distribution (we only have finite samples from an infinite sample space of images/language/etc....).
So now what we really need to do is pick samples that adequately cover the entire distribution, without being redundant, since redundancy both produces more work (you simply have more things to fit against) and can obscure the true distribution:
Let's say that we have a uniform probability distribution over [1,2,3] (uniform means everything has the same probability of 1/3). If we faithfully sample from this, we can learn a distribution that will also return [1,2,3] with equal probability.
But let's say we have some redundancy in there (either direct duplicates, or, in the case of language, close-to duplicates):
The empirical distribution may look like {1,1,1,2,2,3} which seems to make ones a lot more likely than they are.
One way to deal with this is to just sample a lot more points: if we sample 6000 points, we are naturally going to get closer to the true distribution (similar to how flipping a coin twice can give you 100% tails, even if the coin is actually fair; once you flip it more often, it will approach the true probability).
Another way is to correct our observations towards what we already know to be true in our distribution (e.g. a direct 1:1 duplicate in language is presumably a copy-paste rather than a true increase in probability for a subsequence).
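In code, the toy example above looks like this (just restating the {1,1,1,2,2,3} numbers from the text):

from collections import Counter

samples = [1, 1, 1, 2, 2, 3]  # the empirical sample from the text

# Naive empirical distribution: the duplicated 1s look far more likely than 1/3.
counts = Counter(samples)
empirical = {value: count / len(samples) for value, count in counts.items()}
print(empirical)  # {1: 0.5, 2: 0.333..., 3: 0.166...}

# Correction by deduplication: if we believe the duplicates are copy-paste
# artefacts rather than genuine probability mass, dropping them recovers
# the uniform distribution over [1, 2, 3].
deduplicated = sorted(set(samples))
corrected = {value: 1 / len(deduplicated) for value in deduplicated}
print(corrected)  # {1: 0.333..., 2: 0.333..., 3: 0.333...}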
<continued in next comment>
That paper makes a bunch of (implicit) assumptions that make it pretty unrealistic: basically they assume that once we have decently working models already, we would still continue to do normal "brain-off" web scraping.
In practice you can use even relatively simple models to start filtering and creating more training data:
Think about it like the original LLM being a huge trashcan in which you try to compress terabytes of mostly garbage web data.
Then, you use fine-tuning (like the instruction tuning used for the assistant models) to increase the likelihood of deriving non-trash from the model (or to accurately classify trash vs non-trash).
In general this will produce a dataset that is of significantly higher quality, simply because you got rid of all the low-quality stuff.
This is not even a theoretical construction: Phi-1 (https://arxiv.org/abs/2306.11644) does exactly that to train a state-of-the-art language model on a tiny amount of high quality data (the model is also tiny: only half a percent the size of gpt-3).
Previously, TinyStories (https://arxiv.org/abs/2305.07759) showed something similar: you can build high-quality models with very little data, if you have good data (in the case of TinyStories they generate simple stories to train small language models).
In general LLM people seem to re-discover that good data is actually good and you don't really need these "shotgun approach" web scrape datasets.
I think you also have to keep in mind the position that De Vries and Red Bull are in:
- Red Bull is looking for a second Verstappen-level driver. That has always been the case not only for Red Bull, but for all tier-1 teams: their aspirations are championships, not points or even podiums.
- De Vries is a 28-year-old rookie. That's usually the age at which drivers retire or lean on their superior experience to make up for their loss in reaction speed and overall pace. The problem is that De Vries has no experience, while being older than Verstappen by close to three years. The fact that he got to race at all is a miracle: he would have to beat Tsunoda every week by quite a margin to become relevant for Red Bull. If he doesn't become relevant for Red Bull, then why have him at AlphaTauri?
Meanwhile they have a young driver in the form of Tsunoda who exists in limbo due to having nothing to compare against: he could be the fastest driver on the planet in a trash car, or he could be underdelivering without anyone noticing due to the lack of comparison.
This is bad for two reasons:
- you don't know whether Tsunoda is an option for Red Bull
- you have no idea how good AlphaTauri is overall, which is doubly bad considering that they want to make major changes to how AlphaTauri operates.
On the other hand, you have a perfectly good Ricciardo sitting on his hands who performed really well at Silverstone. Realistically, you aren't going to lose anything from having Ricciardo drive the rest of the season compared to having De Vries drive, but you have the potential upside of more context on the quality of Tsunoda and the team, which you wouldn't get otherwise.
In general I'm more surprised that they ever gave De Vries a chance, considering his age and the context of his big achievements:
In Formula 2 his stiffest competitor was Nicholas Latifi (he won with 266 points vs Latifi's 214) in what can be described as a dud year, after the majority of now-F1 mainstays had already graduated (he also needed three years to win F2, which is never a good sign).
If you have ever seen a Formula E race, you will notice that it is quite a chaotic crash-fest with very weird rules and other nonsense. Just not crashing and not driving too quickly can get you really far by surviving the carbon-fiber mayhem and fuel-conservation issues.
To put it into perspective, here are the race results in the year that De Vries won Formula E: [1st, 9th, retired, retired, 1st, 16th, retired, 9th, retired, 13th, 18th, 2nd, 2nd, 22nd, 8th]. If we ignore all DNFs, we get a mean position of 9th!
In short, there's a reason why Mercedes never even tried to get him an F1 spot: he's not a bad driver, but being "not a bad driver" is insufficient for top teams like Mercedes and Red Bull. There's little incentive to put him into any car, even less so nowadays considering his age.
Everything using the ActivityPub standard has open likes (see https://www.w3.org/TR/2018/REC-activitypub-20180123/ for the standard), and logically it makes sense to do this to allow for verification of "likes":
If you did not do that, a malicious instance could much more easily just shove a bunch of likes onto another instance's post, while, if you have "like authors", it's much easier to moderate likes.
Effectively ActivityPub treats all interactions like comments, where you have a "from" and "to" field just like email does (just imagine you could send messages without having an originator: email would have unusable levels of spam and harassment).
Specifically, here is an example of a simple activity:
POST /outbox/ HTTP/1.1
Host: dustycloud.org
Authorization: Bearer XXXXXXXXXXX
Content-Type: application/ld+json; profile="https://www.w3.org/ns/activitystreams"
{
  "@context": ["https://www.w3.org/ns/activitystreams",
               {"@language": "en"}],
  "type": "Like",
  "actor": "https://dustycloud.org/chris/",
  "name": "Chris liked 'Minimal ActivityPub update client'",
  "object": "https://rhiaro.co.uk/2016/05/minimal-activitypub",
  "to": ["https://rhiaro.co.uk/#amy",
         "https://dustycloud.org/followers",
         "https://rhiaro.co.uk/followers/"],
  "cc": "https://e14n.com/evan"
}
As you can see this has a very email-like structure with a sender, receiver, and content. The difference is mostly that you can also publish a "type" that allows for more complex interactions (e.g. if the type is a comment, then lemmy knows to put it in the comments; if the type is a like, it knows to put it with the likes, etc...).
The actual protocol is a little more complex, but if you replace "ActivityPub" with "typed email" you are correct 99% of the time.
The different services, like lemmy, kbin, mastodon, or peertube are now just specific instantiations of this standard. E.g. a "like" might have slightly different effects on different services (hence also the confusion with "boosting" vs "liking" on kbin)
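As a rough illustration of that dispatch-on-type idea (this is not lemmy's or mastodon's actual code, just a hypothetical inbox handler operating on the fields from the example above):

# Hypothetical inbox handler: it just dispatches on the activity's "type" field.
def handle_activity(activity: dict) -> str:
    actor = activity["actor"]   # the "from" of the typed email
    obj = activity["object"]    # the thing being liked / commented on / ...
    if activity["type"] == "Like":
        return f"record a like by {actor} on {obj}"
    if activity["type"] == "Create":
        return f"store a new comment by {actor} under {obj}"
    return "ignore unsupported activity type " + activity["type"]

like = {
    "type": "Like",
    "actor": "https://dustycloud.org/chris/",
    "object": "https://rhiaro.co.uk/2016/05/minimal-activitypub",
}
print(handle_activity(like))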
It really depends on what you want: I really like Obsidian, which is cross-platform and uses basically vanilla markdown, making it easy to switch should this project go down in flames (there are also plugins that add additional syntax which may not be portable, but that's to be expected).
There's also Logseq, which has much more bespoke syntax (major extensions to markdown), but it is also OSS, meaning there's no real danger of it suddenly vanishing from one day to the next.
Specifically, Logseq is much heavier than Obsidian, both in the app itself and the features it adds to markdown, while Obsidian is much more "markdown++", with a significant part of the "++" coming from plugins.
In my experience Logseq is really nice for short-term note-taking (e.g. lists, reminders, etc.) and Obsidian is much nicer for long-term notes.
Some people also like Notion, but I never got into that: it requires much more structure ahead of time and is very locked down (it also obviously isn't self-hosted). I can see Notion being really nice for people who want less general note-taking and more custom "forms" to fill out (e.g. traveling checklists, production planning, etc.).
Personally, I would always go with Obsidian, just for the peace of mind that the markdown plays well with other markdown editors, which is important for me if I want a long-running knowledge base.
Unfortunately I cannot tell you anything with regards to collaboration, since I do not use that feature in any note-taking system.
Should have been done a long time ago. Even adding and removing gravel traps where they currently have the blue concrete should be within the realm of possibility for an F1 GP if they want both F1 and MotoGP (consider that places like Baku literally pave over their historical cobblestones and then un-pave them after the GP).
For example, if you had an 8-bit integer represented by a bunch of qubits in a superposition of states, it would have every possible value from 0-255 and could be computed with as though it were every possible value at once until it is observed, the probability wave collapses, and a definite value emerges. Is this not the case?
Not really, or at least it's not a good way of thinking about it. Imagine it more like rigging coin tosses: You don't have every single configuration at the same time, but rather you have a joint probability over all bits which get altered to produce certain useful distributions.
To get something out, you then make a measurement that returns the correct result with a certain probability (i.e. it's a probabilistic Turing machine rather than a nondeterministic one).
This can be very useful since sampling from a distribution can sometimes be much nicer than actually solving a problem (e.g. you replace a solver with a simulator of the output).
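Here is a classical toy simulation of that picture; the particular 3-qubit state is made up for illustration, and a real algorithm would build it up from gates rather than writing the amplitudes directly:

import numpy as np

rng = np.random.default_rng(0)

# A 3-qubit register is described by 2**3 = 8 complex amplitudes, i.e. one joint
# probability distribution over all bit patterns -- not 8 independent values
# computed "in parallel". This hand-crafted state puts all probability mass on
# the patterns 010 and 101 (the "rigged coin tosses").
state = np.zeros(8, dtype=complex)
state[0b010] = 1 / np.sqrt(2)
state[0b101] = 1 / np.sqrt(2)

# Measurement: each shot returns exactly ONE bit pattern, sampled with
# probability |amplitude|^2 -- probabilistic, not "all answers at once".
probs = np.abs(state) ** 2
shots = rng.choice(8, size=10, p=probs)
print([format(int(s), "03b") for s in shots])  # only '010' and '101' ever appear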
In traditional computing this can also be done but that gives you the fundamental problem of sampling from very complex probability distributions which involves approximating usually intractable integrals.
However, there are also massive limitations to the type of things a quantum computer can model in this way since quantum theory is inherently linear (i.e. no climate modelling regardless of how often people claim they want to do it).
There's also the question of how many things exist where it is more efficient to build such a distribution and sample from it, rather than having a direct solver.
If you look at the classic quantum algorithms (e.g. https://en.wikipedia.org/wiki/Quantum_algorithm), you can see that there aren't really that many algorithms out there (this is of course not an exhaustive list but it gives a pretty good overview) where it makes sense to use quantum computing and pretty much all of them are asymptotically barely faster or the same speed as classical ones and most of them rely on the fact that the problem you are looking at is a black-box one.
Remember that one of the largest useful problems ever solved on a quantum computer up until now was factoring the number 21, with a specialised version of Shor's algorithm that only works for that number (since the full Shor would need many orders of magnitude more qubits than exist on the entire planet).
There's also the problem of logical vs physical qubits: in computer science we like to work with "perfect" qubits that are mathematically ideal, i.e. completely noise-free. However, physical qubits are really fragile and couple to pretty much anything and everything, which adds a lot of noise into the system. This problem also gets worse the larger you scale your system.
The latter is a fundamental problem: the whole point of quantum computers is that you can combine random states to "virtually" build a complex distribution before you sample from it. This can be much faster, since the virtual model can capture dependencies that are intractable to work with on a classical system, but that dependency monster also means that any noise in the system is going to negatively affect everything else as you scale up to more qubits.
That's why people expect real quantum computers to have many orders of magnitude more physical qubits than you would theoretically need.
It also means that you cannot trivially scale up a physical quantum algorithm: a physical Grover's search on a list with 10 entries might look very different from one on a list with 11 entries.
This makes quantum computing a nonstarter for many problems where you cannot pay the time it takes to engineer a custom solution.
And even worse: you cannot even test whether your fancy new algorithm works in a simulator, since the stuff you are trying to simulate is specifically the intractable quantum noise (something which, ironically, a quantum computer is excellent at simulating).
In general you should be really careful when looking at quantum computing articles, since it's very easy to build some weird distribution that is basically impossible for a normal computer to work with, but that doesn't mean it's something practical: e.g. just starting the quantum computer, "boop" one bit, then waiting for 3ns will give you a quantum noise distribution that is intractable to simulate with a classical computer (the same is true if you don't do anything with the computer at all: there are literal research teams of top scientists whose job boils down to "what are quantum computers computing if we don't give them instructions").
Meanwhile, the progress of classical or e.g. hybrid analog computing is much faster than that of quantum computing, which means that the only people really deeply invested into quantum computing are the ones that cannot afford to miss, just in case there is in fact something:
- finance
- defence
- security
- ....
It's $\mathbb{X}$ or unicode 𝕏 (U+1D54F)
Maybe he really likes metric spaces??