Or maybe same machine with premium options and premium pods.
Which, for me at least, is accurate.
I have a few LLMs running locally. I don’t have an array of 4090s to spare, so I am limited to the smaller models, 8B parameters and whatnot.
They definitely aren’t as good as anything you get remotely. It’s more private and controlled but it’s much less useful (I’ve found) than any of the other models.
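For anyone wondering why local setups top out around 8B without serious hardware, a back-of-envelope VRAM estimate makes it concrete. This is a rough sketch, not a spec: the ~20% overhead factor for KV cache and runtime is my own assumption, and real numbers vary by runtime and context length.

```python
def est_vram_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Back-of-envelope VRAM estimate: weight bytes plus ~20% for
    KV cache and runtime overhead (the 20% is an assumption)."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 1 byte each ~= 1 GB
    return weight_gb * overhead

for name, p, bits in [("8B @ 4-bit", 8, 4), ("8B @ 16-bit", 8, 16), ("70B @ 4-bit", 70, 4)]:
    print(f"{name}: ~{est_vram_gb(p, bits):.1f} GB")
```

By this math a 4-bit 8B model fits comfortably on a single consumer GPU, while a 70B model at full precision needs that array of 4090s.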
Better is entirely subjective. Mastodon has so much friction for an average person.
Not to mention most servers are filled with tons of “WeLl ACKWalLY…” types or legit weirdos.
I’ve heard it summarized: if you hated Twitter, you’ll like Mastodon. If you liked Twitter, you’ll love Bluesky.
Mastodon ain’t for everyone. I’d hazard to say it’s not for most people. It’s also not immune to ads or natural centralization.
It reminds me of that South Park episode about Walmart where they fought off the super store and shopped at the small store until it grew into the super store.
If China is bad, and the US is good, then why wouldn’t we want our military to have access to the same (or better) tooling that they have access to?
I’m so morally dilemma’d here
Meta, a US company, allows the US military to use its models. Omg! Let me clutch my pearls.
What’s the moral dilemma? China already took their model and is using it in their military.
Do you guys not want our military to have access to all the tools it possibly can?
You mad about Ford and GM building trucks and vehicle parts for the military too? Are you mad about Microsoft selling Windows to the govt?
You just upset that it’s the military?
Where’s this line that’s been drawn where this is a moral dilemma??
It’s really funny because Cory always champions POSSE: Publish (on your) Own Site, Syndicate Elsewhere.
And it’s funny because he mostly does that.
Imagine falling asleep on a flight and then you wake up to punches to the face. I’d imagine most people would just instinctively cover their face and try to protect themselves.
You’d have no fucking clue what’s happening for the first few seconds and if any of the punches landed you might be dazed on top of just waking up. And I’m sure at least a couple landed because you were asleep when attacked.
Hi! It’s me, the guy you discussed this with the other day! The guy that said Lemmy is full of AI wet blankets.
I am 100% with Linus AND would say the 10% good use cases can be transformative.
Since there isn’t any room for nuance on the Internet, my comment seemed to ruffle feathers. There are definitely some folks out there that act like ALL AI is worthless and LLMs specifically have no value. I provided a list of use cases that I use pretty frequently where it can add value. (Then folks started picking it apart with strawmen).
I gotta say though, this wave of AI tech feels different. It reminds me of the early days of the web/computing in the late ’90s and early 2000s, where it’s fun, exciting, and people are doing all sorts of weird, quirky shit with it, and it’s not even close to perfect. It breaks a lot and has limitations, but there is something there. There is a lot of promise.
Like I said elsewhere, it ain’t replacing humans any time soon, we won’t have AGI for decades, and it’s not solving world hunger. That’s all hype bro bullshit. But there is actual value here.
I’m not saying any of the things you guys claim I’m saying. Wtf is happening? I never said anything about data loss. I never said I wanted people using LLMs to email each other. So this comment chain is a bunch of internet commenters making weird cherry-picked, straw man arguments and misrepresenting or miscomprehending what I’m saying.
Legitimately, the LLM grokked the gist of my comment while you all are arguing against your own strawman arguments.
Haha, yeah, I’m familiar with it (I always heard it called the Barnum effect, though it sounds like they’re the same thing), but this isn’t a fortune-cookie-esque, Myers-Briggs response.
In this case it actually summarized my post (I guess you could make the case that my post is an opinion shared by many people, so Forer-y in that sense), and to my other point, it didn’t misunderstand and tell me I was envisioning LLMs sending emails back and forth to each other.
Either way, there is this general tenor of negativity on Lemmy about AI (usually conflated to mean just LLMs). I think it’s a little misplaced. People are lumping the tech in with the hype bros (Altman, Musk, etc.). The tech is transformative and there are plenty of valuable uses for it. It can solve real problems now. It doesn’t need to be AGI to do that. It doesn’t need to be perfect to do that.
“Do you think your mothers would still love you if they knew how bad you were at this game?” - Walz
Citation needed
As the author of the post it summarized, I agree with the summary.
Now, tell me more about this bridge.
That’s not what I am envisioning at all. That would be absurd.
Ironically, GPT-4o understood my post better than you :P
“Overall, your perspective appreciates the real-world applications and benefits of AI while maintaining a critical eye on the surrounding hype and skepticism. You see AI as a transformative tool that, when used appropriately, can enhance both individual and organizational capabilities.”
People are treating AI like crypto, and on some level I don’t blame them because a lot of hype-bros moved from crypto to AI. You can blame the silicon valley hype machine + Wall Street rewarding and punishing companies for going all in or not doing enough, respectively, for the Lemmy anti-new-tech tenor.
That, and Lemmy seems full of angsty asshats and curmudgeons that love to dogpile things. They feel like they have to counterbalance the hype. Sure, that’s fair.
But with AI there is something there.
I use all sorts of AI on a daily basis. I’d venture to say most everyone reading this uses it without even knowing.
I set up my server to transcribe and diarize my favorite podcasts that I’ve been listening to for 20 years. Whisper transcribes, pyannote diarizes, GPT-4o uses context clues to find and replace “speaker01” with “Leo”, and then it saves those transcripts so that I can easily search them. It’s a fun hobby thing, but this type of thing is hugely useful and applicable to large companies and individuals alike.
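The Whisper and pyannote steps need heavy dependencies, but the final relabeling step is simple enough to sketch. In the pipeline described above the name mapping comes from an LLM reading context clues; here it’s hardcoded as an assumption, and the transcript snippet is made up:

```python
import re

def relabel_speakers(transcript: str, name_map: dict[str, str]) -> str:
    """Replace generic diarization labels (e.g. 'SPEAKER_01') with real names."""
    pattern = re.compile("|".join(re.escape(k) for k in name_map))
    return pattern.sub(lambda m: name_map[m.group(0)], transcript)

segments = """\
SPEAKER_00: Welcome back to the show.
SPEAKER_01: Thanks, great to be here.
SPEAKER_00: Let's get into it."""

# In practice this mapping would come from an LLM using context clues;
# hardcoded here purely for illustration.
names = {"SPEAKER_00": "Leo", "SPEAKER_01": "Guest"}
print(relabel_speakers(segments, names))
```

One pass with a compiled alternation avoids the chained-`str.replace` pitfall where an earlier substitution could corrupt a later label.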
I use Kagi’s Assistant (which basically lets you access all the big models) on a daily basis for searching stuff, drafting boilerplate for emails, recipes, etc.
I have a local LLM with RAG that I use for more personal stuff. For example, I had it do the BS work for my performance plan using notes I’d taken from the year. I’ve had it help me reword my resume.
I have it parse huge policy memos into things I actually might give a shit about.
I’ve used it to run through a bunch of semi-structured data on documents and pull relevant data. It’s not necessarily precise, but it’s accurate enough for my use case.
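The retrieval half of a setup like this is less magic than it sounds. Here’s a toy sketch of the chunk-then-rank step that feeds relevant memo text into a local model’s prompt; real setups use embeddings instead of word overlap, and the memo text and sizes here are invented for illustration:

```python
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Split text into overlapping word windows (50% overlap)."""
    words = text.split()
    step = max(size // 2, 1)
    return [" ".join(words[i:i + size]) for i in range(0, max(len(words) - step, 1), step)]

def top_chunks(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the query. A real RAG setup would
    use embedding similarity, but the retrieve-then-generate shape is the same."""
    q = Counter(query.lower().split())
    return sorted(chunks,
                  key=lambda c: sum((Counter(c.lower().split()) & q).values()),
                  reverse=True)[:k]

memo = ("Section 1 covers badge policy. Section 2 covers parking. "
        "Section 3 covers performance plan goals and review deadlines.")
relevant = top_chunks("performance plan goals", chunk(memo, size=8), k=1)
print(relevant[0])  # this chunk gets pasted into the local LLM's prompt as context
```

The overlap between windows is so a sentence split at a hard chunk boundary still appears whole in at least one chunk.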
There is a tool we use that uses CV to do sentiment analysis of users (as they use websites/apps) so we can improve our UX/CX. There’s some ML tooling that can also tell if someone’s getting frustrated by the way they’re moving their mouse, if they’re thrashing it or whatnot.
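I don’t know what that vendor’s model actually does, so purely as a toy illustration of one possible “thrashing” signal: compare how far the cursor traveled to how far it actually got. The function, sample points, and any threshold you’d pick are all made up for the sketch:

```python
import math

def thrash_ratio(points: list[tuple[float, float]]) -> float:
    """Ratio of total path length to net displacement.
    Near 1.0 = purposeful movement; large = jittery back-and-forth."""
    if len(points) < 2:
        return 1.0
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    net = math.dist(points[0], points[-1])
    return path / max(net, 1e-9)  # guard against zero net displacement

calm = [(0, 0), (50, 10), (100, 20), (150, 30)]            # steady drag toward a target
angry = [(0, 0), (80, 0), (5, 0), (90, 0), (0, 0), (85, 0), (2, 0)]  # rapid back-and-forth
print(thrash_ratio(calm))   # close to 1
print(thrash_ratio(angry))  # much larger
```

A production version would window this over time and combine it with other signals (rage clicks, scroll velocity), but the core idea fits in a few lines.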
There are also a couple of use cases we’re looking at at work to help eliminate bias, like parsing through a bunch of resumes. There’s always a human bias when you’re doing that, and there’s evidence that LLMs can do it with less bias than a human, so maybe it’ll lead to better results or selections.
So I guess all that to say: I find myself using AI, ML, and LLMs on a pretty frequent basis, and I see a lot of value in what they can provide. I don’t think it’s going to take people’s jobs. I don’t think it’s going to solve world hunger. I don’t think it’s going to do much of what the hype bros say. I don’t think we’re anywhere near AGI. But I do think there is something there, I think it’s going to change the way we interact with our technology moving forward, and I think that’s a great thing.
Lol GTFO of here.
Others don’t seem to have the hold over those clowns that he does. Meatball Ron fell flat. Vance wouldn’t maintain this level of support for 5 minutes. Hawley can’t. Horse Face Greene is just a clown and not a serious candidate.
I think his losing again would snap some people out of it.
I think the party would bail on him. Especially if he causes losses down the ballot. They won’t back him for another election.
It’s not those people that this sways. They are a “lost cause” in most regards. It’s the people who have hesitations and concerns. The people that have been lifelong Republicans but are feeling jaded or are starting to see through the facade.
It helps sway those people.
The fuck is bitnet
I came in here expecting someone to have a legitimate explanation. Lolol at these comments.