Distro Focuses (image) · Hellfire103@lemmy.ca to linuxmemes@lemmy.world · English · 1 day ago · 192 comments
Zachariah@lemmy.world: Who doesn’t? I mean, version 2.9.2 just came out in May.
1985MustangCobra@lemmy.ca: I tried living in the terminal, but I had no one to talk to.
Zachariah@lemmy.world: We’re in your terminal: https://github.com/LunaticHacker/lemmy-terminal-viewer
1985MustangCobra@lemmy.ca: Sadly, based on the latest issues submitted and my own experience, the app no longer works: https://github.com/LunaticHacker/lemmy-terminal-viewer/issues/11
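Even with that client broken, Lemmy's public v3 HTTP API makes terminal browsing straightforward. A minimal sketch in Python, assuming the standard requests library and the documented /api/v3/post/list endpoint (instance, community, and field names here just mirror this thread):

```python
# Minimal sketch: list recent posts from a Lemmy community over plain HTTP,
# as a stand-in for the broken lemmy-terminal-viewer app.
import requests

resp = requests.get(
    "https://lemmy.world/api/v3/post/list",
    params={"community_name": "linuxmemes", "sort": "Hot", "limit": 5},
    timeout=10,
)
resp.raise_for_status()

# The response wraps each post in a PostView; "name" is the post title.
for post_view in resp.json()["posts"]:
    print(post_view["post"]["name"])
```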
😍
Lucy :3@feddit.org: If you have a decent GPU or CPU, you can just set up Ollama with ollama-cuda/ollama-rocm and run llama3.1 or llama3.1-uncensored.
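A minimal sketch of what that looks like once the setup is done, assuming the official ollama Python client (pip install ollama), a locally running Ollama daemon, and the llama3.1 model suggested above already pulled:

```python
# Minimal local-chat sketch using the official Ollama Python client.
# Assumes the Ollama daemon is running and `ollama pull llama3.1` has
# already been done.
import ollama

# One-shot chat completion against the locally served model.
response = ollama.chat(
    model="llama3.1",  # or "llama3.1-uncensored", per the comment above
    messages=[{"role": "user", "content": "Say hi to someone living in the terminal."}],
)
print(response["message"]["content"])
```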
1985MustangCobra@lemmy.ca: I have a Ryzen 5 laptop; not really decent enough for that workload. And I'm not crazy about AI.
Lucy :3@feddit.org: I bet even my Pi Zero W could run such a model*
* with 1 character per hour or so
1985MustangCobra@lemmy.ca: Interesting; well, it's something to look into, but I'd like a place to communicate with like-minded people.