daredevil

joined 2 years ago
[–] [email protected] 2 points 1 year ago

I've only felt the need to change distros once, from Linux Mint to EndeavourOS, because I wanted Wayland support. I realize there were ways to get Wayland working on Mint in the past, but I've already made the switch and gotten used to my current setup. I personally don't feel like I'm missing out by sticking to one distro, tbh. If you're enjoying Mint, I'd suggest sticking with it unless another distro fulfills a specific need you can't get on Mint.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Haha, I've felt this way about other movies despite being prepared going into them, like the Resident Evil movies. I'd agree with you regardless.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

You could make a (private) collection for your subscribed magazines. It's not exactly the feature you were asking for, but it's an option for curating your feed. On Firefox, I have various collections bookmarked and tagged, so access is seamless.

 

They're currently facing technical issues; once that's sorted out, it should be up.

 

Join Cloud and his friends on a journey across the planet in search of Sephiroth, experiencing the captivating story and thrilling combat, in FINAL FANTASY VII REBIRTH.

 

13.VII.A #genki2textbook #japanese #learnjapanese

VII. まとめの練習 (Summary Practice)

A. Answer the following questions

  1. 子供の時に何ができましたか。何ができませんでしたか。

子供の時、本が読めましたが、夜は外で一人で遊べませんでした。

  2. 百円で何が買えますか。

多分ノートが買えます。

  3. どこに行ってみたいですか。どうしてですか。

日本語を勉強するのが有益なので、日本に行ってみたいです。

  4. 子供の時、何がしてみたかったですか。

時々、漫画を描いてみたかったです。

  5. 今、何がしてみたいですか。

もっと日本語が上手になりたいです。

  6. 一日に何時間ぐらい勉強しますか。

最近、一時間から二時間ぐらい勉強します。でも、もっと練習したいです。お互いに頑張りましょう。

  7. 一週間に何回レストランに行きますか。

最近、一週間に三、四回行きます。多分行きすぎだと思います。

  8. 一か月にいくらぐらい使いますか。

少しだけ使います。

#LearnJapanese

 

Final Fantasy XIV wrapped up the final leg of its Fan Festival tour in Tokyo over the weekend. As with the other Fan Festival events, players were treated to a wealth of new information on Dawntrail, the next expansion in the critically acclaimed MMORPG series — in addition to graphical updates, a new playable race, and information on the expansion’s new area (and you can check out the full keynote address here). We also got a first look at the second job class being added to the game: Pictomancer.

 

久しぶりですね。皆さんがお元気だといいのですが。このごろすごく忙しくて、皆さんとあまり話せなくてすみませんでした。さて、日本語を練習しましょうか。

13.VI.A #genki2textbook #japanese #japanesereview

A. Look at the following pictures and make sentences as in the example.

Ex. Twice a day
一日に二回食べます。

  1. Brush teeth three times a day.
    一日に三回歯を磨きます。

  2. Sleep seven hours a day.
    一日に七時間寝ます。

  3. Study three hours a day.
    一日に三時間勉強します。

  4. Clean room once a week.
    一週間に一回部屋を掃除します。

  5. Do laundry twice a week
    一週間に二回洗濯をします。

  6. Working part-time three days a week.
    一週間に三日バイトをします。

  7. I go to school 5 days a week.
    一週間に五日学校へ行きます。

  8. I watch a movie once a month.
    一ヶ月に一回映画を見ます。

#LearnJapanese

[–] [email protected] 1 points 2 years ago* (last edited 2 years ago) (1 children)

I imagine something like this

Duly noted, I missed a line of text. Won't try to help in the future

 
 
 

Terminal Trove showcases the best of the terminal. Discover a collection of CLI, TUI, and other developer tools at Terminal Trove.

[–] [email protected] 1 points 2 years ago

Starting to wonder if I should just make a doc at this point...

 

イニシエノウタ/デボル · SQUARE ENIX MUSIC · 岡部 啓一 · MONACA

NieR Gestalt & NieR Replicant Original Soundtrack

Released on: 2010-04-21

[–] [email protected] 2 points 2 years ago

Came here with this show in mind. Would recommend.

 

B. Answer the following questions. Use ~なら whenever possible.

Example:
Q: スポーツをよく見ますか。
A: ええ、野球なら見ます。/ いいえ、見ません。

1.
Q: 外国語ができますか。
A: ええ、ちょっと日本語ができます。
2.
Q: アルバイトをしたことがありますか。
A: ええ、バイトならしたことがあります。
3.
Q: 日本の料理が作れますか。
A: 中国の料理なら作れますが、日本の料理は作れません。
4.
Q: 有名人に会ったことがありますか。
A: ええ、有名人なら会ったことがあります。
5.
Q: 楽器ができますか。
A: ええ、バイオリンならできます。
6.
Q: お金が貸せますか。
A: ええ、お金なら貸せます。

[–] [email protected] 1 points 2 years ago

I haven't, but I'll keep this in mind for the future -- thanks.

[–] [email protected] 1 points 2 years ago (2 children)

I believe I was when I tried it before, but I may have misconfigured things.

[–] [email protected] 3 points 2 years ago* (last edited 2 years ago)

I'll give it a shot later today, thanks

edit: Tried out mistral-7b-instruct-v0.1.Q4_K_M.gguf via the LM Studio app. It runs more smoothly than I expected -- I get about 7-8 tokens/sec. I'll definitely be playing around with this some more later.
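
If anyone wants to script this instead of using the LM Studio GUI, something like the llama-cpp-python bindings should be able to load the same GGUF file. Rough sketch below, untested on my end, with the path, context size, and prompt as placeholders:

```python
# Rough sketch: load the same quantized GGUF with llama-cpp-python instead of LM Studio.
# The path, context size, and prompt are placeholders -- adjust for your own setup.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct-v0.1.Q4_K_M.gguf",  # same file as above
    n_ctx=4096,       # context window
    n_gpu_layers=0,   # raise this if you have VRAM to offload layers to the GPU
)

out = llm(
    "[INST] Explain what a mixture-of-experts model is in two sentences. [/INST]",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```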

 

On Monday, Mistral AI announced a new AI language model called Mixtral 8x7B, a "mixture of experts" (MoE) model with open weights that reportedly truly matches OpenAI's GPT-3.5 in performance—an achievement that has been claimed by others in the past but is being taken seriously by AI heavyweights such as OpenAI's Andrej Karpathy and Jim Fan. That means we're closer to having a ChatGPT-3.5-level AI assistant that can run freely and locally on our devices, given the right implementation.

Mistral, based in Paris and founded by Arthur Mensch, Guillaume Lample, and Timothée Lacroix, has seen a rapid rise in the AI space recently. It has been quickly raising venture capital to become a sort of French anti-OpenAI, championing smaller models with eye-catching performance. Most notably, Mistral's models run locally with open weights that can be downloaded and used with fewer restrictions than closed AI models from OpenAI, Anthropic, or Google. (In this context "weights" are the computer files that represent a trained neural network.)

Mixtral 8x7B can process a 32K token context window and works in French, German, Spanish, Italian, and English. It works much like ChatGPT in that it can assist with compositional tasks, analyze data, troubleshoot software, and write programs. Mistral claims that it outperforms Meta's much larger LLaMA 2 70B (70 billion parameter) large language model and that it matches or exceeds OpenAI's GPT-3.5 on certain benchmarks, as seen in the chart below.
A chart of Mixtral 8x7B performance vs. LLaMA 2 70B and GPT-3.5, provided by Mistral.

The speed at which open-weights AI models have caught up with OpenAI's top offering a year ago has taken many by surprise. Pietro Schirano, the founder of EverArt, wrote on X, "Just incredible. I am running Mistral 8x7B instruct at 27 tokens per second, completely locally thanks to @LMStudioAI. A model that scores better than GPT-3.5, locally. Imagine where we will be 1 year from now."

LexicaArt founder Sharif Shameem tweeted, "The Mixtral MoE model genuinely feels like an inflection point — a true GPT-3.5 level model that can run at 30 tokens/sec on an M1. Imagine all the products now possible when inference is 100% free and your data stays on your device." To which Andrej Karpathy replied, "Agree. It feels like the capability / reasoning power has made major strides, lagging behind is more the UI/UX of the whole thing, maybe some tool use finetuning, maybe some RAG databases, etc."

Mixture of experts

So what does mixture of experts mean? As this excellent Hugging Face guide explains, it refers to a machine-learning model architecture where a gate network routes input data to different specialized neural network components, known as "experts," for processing. The advantage of this is that it enables more efficient and scalable model training and inference, as only a subset of experts are activated for each input, reducing the computational load compared to monolithic models with equivalent parameter counts.

In layperson's terms, a MoE is like having a team of specialized workers (the "experts") in a factory, where a smart system (the "gate network") decides which worker is best suited to handle each specific task. This setup makes the whole process more efficient and faster, as each task is done by an expert in that area, and not every worker needs to be involved in every task, unlike in a traditional factory where every worker might have to do a bit of everything.
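
To make the routing idea concrete, here is a toy sketch of a MoE feed-forward layer with top-2 gating, written in PyTorch. The layer sizes are arbitrary and the load-balancing machinery real systems rely on is omitted; this illustrates the general technique, not Mixtral's actual implementation.

```python
# Toy mixture-of-experts feed-forward layer with top-2 routing (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        # Each "expert" is an ordinary feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.gate = nn.Linear(d_model, n_experts)  # the "gate network" that scores experts
        self.top_k = top_k

    def forward(self, x):                        # x: (tokens, d_model)
        scores = self.gate(x)                    # score every expert for every token
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # mixing weights for the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e         # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

moe = ToyMoE()
print(moe(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```

Only the selected experts run for a given token, which is where the compute savings over a dense layer of the same total parameter count come from.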

OpenAI has been rumored to use a MoE system with GPT-4, accounting for some of its performance. In the case of Mixtral 8x7B, the name implies that the model is a mixture of eight 7 billion-parameter neural networks, but as Karpathy pointed out in a tweet, the name is slightly misleading because, "it is not all 7B params that are being 8x'd, only the FeedForward blocks in the Transformer are 8x'd, everything else stays the same. Hence also why total number of params is not 56B but only 46.7B."
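
A rough back-of-the-envelope tally shows where the 46.7B figure comes from. The dimensions below (hidden size 4096, FFN intermediate size 14336, 32 layers, 8 experts, 8 key/value heads) are the ones commonly reported for the released checkpoint; treat them as assumptions rather than official figures, and note the sum ignores small terms such as layer norms and the router.

```python
# Back-of-the-envelope parameter count for Mixtral 8x7B (approximate; dims assumed).
d_model, d_ff, n_layers, n_experts = 4096, 14336, 32, 8

ffn_per_expert = 3 * d_model * d_ff                      # gate/up/down projections in the SwiGLU FFN
ffn_total = ffn_per_expert * n_experts * n_layers        # only this part is "8x'd"
attn_per_layer = 2 * d_model * d_model + 2 * d_model * 1024  # q/o plus smaller k/v (grouped-query attention)
attn_total = attn_per_layer * n_layers                   # attention is shared, not duplicated per expert
embeddings = 2 * 32000 * d_model                         # input + output embeddings

total = ffn_total + attn_total + embeddings
print(f"{total / 1e9:.1f}B parameters")                  # ~46.7B, not 8 x 7B = 56B
```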

Mixtral is not the first "open" mixture of experts model, but it is notable for its relatively small parameter count and strong performance. It's out now, available on Hugging Face and BitTorrent under the Apache 2.0 license. People have been running it locally using an app called LM Studio. Also, Mistral began offering beta access to an API for three levels of Mistral models on Monday.
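
For those who want to pull the weights directly rather than go through LM Studio, a minimal loading sketch with the Hugging Face transformers library might look like the following. The repo id mistralai/Mixtral-8x7B-Instruct-v0.1 refers to the instruct checkpoint as listed on the Hub at the time of writing; running the unquantized model this way requires a machine with a lot of memory, so most people will prefer a quantized build.

```python
# Minimal sketch: load Mixtral 8x7B Instruct from Hugging Face with transformers.
# Assumes the "mistralai/Mixtral-8x7B-Instruct-v0.1" repo id and enough RAM/VRAM
# for a ~47B-parameter model; device_map="auto" needs the accelerate package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "[INST] Summarize what a mixture-of-experts model is. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```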

 

