- cross-posted to:
- singularity@lemmit.online
Big day for people who use AI locally. According to benchmarks, this is a big step forward for free, small LLMs.
I haven’t given it a very thorough test, and I’m by no means an expert, but from the few prompts I’ve run so far, I’d have to hand it to Nemo on quality.
Using openrouter.ai, I’ve also given Llama 3.1 405B a shot, and it seems to be at least on par with (if not better than) Claude 3.5 Sonnet, whilst being a bit cheaper as well.
Llama 3.1 70B is probably where it’s at if you go the API route. It’s distilled from the 405B model, and its benchmarks are pretty close.
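If anyone wants to try the API route, here’s a minimal sketch of what a call through OpenRouter looks like. It exposes an OpenAI-compatible chat completions endpoint; the exact model slug below is my assumption, so double-check it against their models page before relying on it.

```python
# Minimal sketch: calling Llama 3.1 through OpenRouter's
# OpenAI-compatible chat completions endpoint.
# The model slug is an assumption -- verify it on openrouter.ai/models.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        # assumed slug; a 405b variant should also be listed
        "model": "meta-llama/llama-3.1-70b-instruct",
        "messages": [
            {"role": "user", "content": "Give me a one-paragraph summary of the Llama 3.1 release."}
        ],
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Swapping between the 70B and 405B models is just a matter of changing the slug, which makes it easy to compare quality against cost for your own prompts.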