I agree! This time last year I might not have, but things change fast, and not for the better :(
On the contrary, I'm afraid. Land is in very short supply. The issue is that even if the land is not currently developed, it is doing vital stuff already. If it's used for food production, if it's a bit of forest storing massive amounts of CO2, if it's home to the insects pollinating our food supply, if it's....
Finding scrap pieces of land, like rooftops and already-developed land, for solar will be crucial going forward.
I'm just at the beginning, but my plan is to use it to evaluate policy docs. There is so much context to keep up with, so any way to load more context into the analysis will be helpful. Learning how to add Excel information to the analysis will also be a big step forward.
I will have to check out Mistral :) So far Qwen2.5 14B has been the best at providing analysis of my test scenario. But I guess an even higher parameter model will have its advantages.
And exactly why are they missing? Who stole what at Microsoft?
Thank you! Very useful. I am, again, surprised how a better way of asking questions affects the answers almost as much as using a better model.
This is expected. Oil prices have been on the decline for some time. I didn't expect demand to erode this fast, though. Which I guess is kind of a good thing.
The only way forward is for renewables to become even cheaper than fossils. Which can be done. The EU's Fit for 55 will bring down energy prices. Summertime, we will see really low electricity prices in Europe over the coming decade because of this.
It's the only chemistry I can actually source, unfortunately. I've read about other chemistries, but they are hard to find.
There is usually a 1:1 ratio between MW and MWh at these capacities.
But per the definition given involving negative mass, it should be "measurable mass in the presence of exotic matter". Anywho...
I need to look into flash attention! And if I understand you correctly, a larger llama3.1 model would be better prepared to handle a larger context window than a smaller llama3.1 model?
Thanks! I actually picked up the concept of context window, and from there how to create a modelfile, through one of the links provided earlier, and it has made a huge difference. In your experience, would a small model like llama3.2 with a bigger context window be able to provide the same output as a big model, like qwen2.5:14b, with a more limited window? The bigger window obviously allows more data to be taken into account, but how does the model size compare?
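For anyone else heading down the same modelfile path: a minimal sketch of an Ollama Modelfile that bumps the context window is below. The base model tag and the `num_ctx` value are just example choices; pick what fits your hardware.

```
# Sketch of an Ollama Modelfile: start from an existing model
# and raise the context window (tokens the model can attend to).
FROM qwen2.5:14b

# num_ctx is the context window size in tokens (example value;
# larger windows need more VRAM/RAM).
PARAMETER num_ctx 16384
```

You'd then build it with something like `ollama create my-policy-model -f Modelfile` (the name `my-policy-model` is made up here) and run it as usual with `ollama run`.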
I disagree with your first statement. Law is about the application of rules, not the rules themselves. In a perfect world, it would be about determining which law has precedence in the matter at hand, a task in itself outside of AI capabilities as it involves weighing moral and ethical principles against each other, but in reality it often comes down to why my interpretation of reality is the correct one.