Teuken-7B-Base and Teuken-7B-Instruct: Towards European LLMs (2024)
ozgune 5 days ago
I had a related, but orthogonal question about multilingual LLMs.

When I ask smaller models a question in English, the model does well. When I ask the same model a question in Turkish, the answer is mediocre. When I ask the model to translate my question into English, get the answer, and translate the answer back to Turkish, the model again does well.

For example, I tried the above with Llama 3.3 70B, and asked it to plan me a 3-day trip to Istanbul. When I asked Llama to do the translations between English <> Turkish, the answer was notably better.

Has anyone else observed similar behavior?
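The translate-pivot workflow described above (translate the question into English, answer in English, translate the answer back) could be sketched like this. This is a minimal sketch: `llm` stands in for a hypothetical chat-completion call, and `stub_llm` is a fake used only to show the call order.

```python
# Sketch of the translate-pivot workflow: instead of asking the model
# directly in the source language, translate the question to a stronger
# pivot language, answer there, then translate the answer back.
# `llm` is a hypothetical stand-in for whatever chat-completion API you use.

def pivot_answer(question: str, llm,
                 src_lang: str = "Turkish",
                 pivot_lang: str = "English") -> str:
    """Answer `question` via a stronger pivot language."""
    q_pivot = llm(f"Translate the following {src_lang} text to {pivot_lang}:\n{question}")
    a_pivot = llm(q_pivot)  # the model answers in the pivot language
    return llm(f"Translate the following {pivot_lang} text to {src_lang}:\n{a_pivot}")

# Demo with a stub LLM that just tags each step, to show the call order.
def stub_llm(prompt: str) -> str:
    if prompt.startswith("Translate the following Turkish"):
        return "EN-QUESTION"
    if prompt.startswith("Translate the following English"):
        return "TR-ANSWER"
    return f"EN-ANSWER({prompt})"

print(pivot_answer("İstanbul'da 3 günlük bir gezi planla", stub_llm))
# → TR-ANSWER
```

In practice each of the three calls would hit the real model; the observation in the thread is that this three-step chain can beat a single direct call in the source language.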


kiru_io 5 days ago
Maybe someone should edit the title to mention this is from 2024: [Submitted on 30 Sep 2024 (v1), last revised 15 Oct 2024 (this version, v2)]

KronisLV 5 days ago
I also quite liked the EuroLLM project: https://huggingface.co/blog/eurollm-team/eurollm-9b

It was pretty good with Latvian (better than other models of this size, as well as the variants of Llama or Qwen that I could run), and I assume it does well with other EU languages too.


JKolios 5 days ago
More diversity in the LLM space is always good. In my experience though, speaking as a native speaker of one of the less-used European languages, Mistral's models already handle it pretty well.

jug 5 days ago
On this topic, don’t miss this quite useful benchmark:

https://euroeval.com