How do you get the messages to your laptop? My googling only led to various hacks to enable hidden iCloud folders and copying from there. Anyway, I suspect I'm going a bit off topic, but the point was it wasn't as smooth a process as I was expecting from my experience upgrading phones on the same OS.
stsquad
I dunno. I tried to transition my sister-in-law's WhatsApp messages from her iPhone to Android and it's, as far as I can tell, impossible because the backups on the respective cloud services are hidden and can't be loaded manually.
I swear the very mention of Thatcher is catnip to some on the left. I wouldn't say the one line in the piece about her was praise, just an observation about the changes she put in.
But the strategy makes sense: convince the centrist voters who absolutely will vote that Keir is a safe pair of hands. The alternative is appealing to the younger radical voters who may not bother to turn up to the voting booth because they're unhappy he's not as ideologically pure as they want him to be.
Also he was leader during a pandemic, which will have been especially stressful, and his wife died earlier this year. I suspect he's done with politics (he was an academic in a previous career) and wants to do other things with his life.
I assume it would be Private Eye regular Helen Lewis.
Do you usually have some other front end over the model? I can run llama.cpp directly in interactive mode but the results are a little underwhelming. However, there seem to be various front ends that get better results? Is this down to better prompting and parameter control? I've seen temperature mentioned in relation to ChatGPT, but I have no idea what the rope and yarn factors are for.
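For what it's worth, my loose understanding of temperature (an assumption on my part, happy to be corrected) is that it just rescales the logits before the softmax and sampling step, something like:

```python
import math
import random

def sample_with_temperature(logits, temperature=0.7):
    """Scale logits by 1/temperature, softmax, then sample an index.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # sample a token index according to the probabilities
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1
```

So at a very low temperature it basically always picks the highest-scoring token, which would explain why low temperatures give more repetitive but "safer" output.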
Is there a standard for the suffixes? For example the OpenLlama models here: https://huggingface.co/SlyEcho/open_llama_7b_v2_gguf/tree/main have qN and then a mix of K, M, 0 and 1 suffixes. The q I assume is the quantisation level, but measured how? Does q2 mean 2 bits per weight? That seems very small. And what are the weights stored as: fixed point, floats, integers?
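A back-of-envelope calculation, assuming q2 really does mean roughly 2 bits per weight (which is exactly the bit I'm unsure about, and it ignores any per-block scaling overhead the formats presumably carry):

```python
def rough_model_size_gb(n_params, bits_per_weight):
    """Very rough file size: parameters * bits, converted to gigabytes.

    Ignores per-block scale/zero-point overhead, tokenizer data, etc.
    """
    return n_params * bits_per_weight / 8 / 1e9

params_7b = 7e9
print(rough_model_size_gb(params_7b, 16))  # f16: ~14 GB
print(rough_model_size_gb(params_7b, 2))   # q2:  ~1.75 GB
```

Which would mean an 8x size reduction over f16 for a 7B model, so I can see why people would want it even if the quality drops.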
Where is the sweet spot for running CPU-bound models? I've just started playing with llama.cpp but the big models do make the cores work pretty hard. Should I look at using quantisation or more fine-tuned models for the tasks I care about (developer assistance, mainly)?
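My rough mental model here (could well be wrong) is that CPU inference is memory-bandwidth bound rather than compute bound: every generated token has to stream the whole model through RAM once, so tokens/sec is roughly bandwidth divided by model size, which is why smaller quantised models help so much:

```python
def rough_tokens_per_sec(mem_bandwidth_gb_s, model_size_gb):
    """Crude upper bound: each token reads the full model from memory once,
    so throughput ~= bandwidth / model size (ignores compute and caching)."""
    return mem_bandwidth_gb_s / model_size_gb

# e.g. ~50 GB/s of desktop DDR4 against a ~4 GB quantised 7B model
print(rough_tokens_per_sec(50, 4))  # ~12.5 tokens/sec at best
```

If that holds, halving the model size via quantisation roughly doubles throughput, which matters a lot more on a CPU than picking a slightly better fine-tune.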
I'm very lucky that I get to work in an upstream-focused open source job. But I also maintain a few small packages personally, and those only get attention when there are contributions to review or I have a personal itch to scratch. I'll leave enhancement requests in the trackers, mark them as such, and occasionally have a go at them if I feel the urge. No one who isn't paying should expect any more from maintainers.
It does seem more and more like she was deliberately trying to get sacked.
It's super frustrating that after several years of Starmer's "ming vase" strategy the Labour party is managing to have a split over the definitions of ceasefire and humanitarian pause. Whatever motion eventually gets carried will have no actual effect on the ground in Gaza.
Assuming the Tories go full nasty-party mode, that might keep them a few strongholds. However, I suspect the challenge to Labour will come from the left, who will claim Starmer's centrist Labour is not left-wing enough, making it harder and harder to maintain a majority for successive terms. Eventually a Cameron-like figure will have to start another detoxifying process to pull the Tories back to the centre when they are bored of not getting power.