The Microsoft-powered bot says bosses can take workers’ tips and that landlords can discriminate based on source of income
Yet another example of people fundamentally misunderstanding the proper use of LLMs and throwing them into production without any kind of sanity checks on the input and output. As someone who used to work for NYS as a software engineer, this is entirely unsurprising.
Work in HR. Have a very smart boss. Asked me about AI for recruiting, screening and other purposes. Told my boss: wait 5 years, we’ll see the catastrophic lawsuits hit the early adopters, then after 5 more there will be some plug-and-play usable solutions.
Anyone eating up the Big4’s and startups’ own horseshit deserves what they get. They’ve fully demonstrated they don’t QC, and especially on critical, difficult-to-parse, contextual, or fast-changing info, LLMs are incredibly immature.
LLMs are still good for the kind of flowery language you need in HR, but not for any sort of fact-based generation.
Think of it as being creative, not logical.
The biggest thing I’ve found is that limiting the inputs with a filter and vetting the outputs yields much higher quality. One project I’m working on takes highly complex language and simplifies it for users. There’s no user input, and it’s not being used to create anything that isn’t already there: it takes highly technical language full of acronyms and breaks it down into units normal people can understand. Of course, my company is heavily regulated, so we’re extremely focused on QA and on ensuring it can never output something that doesn’t align with the source material.
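The filter-then-vet pattern described above might be sketched roughly like this. Everything here is hypothetical, not the commenter’s actual system: the topic allowlist, the `simplify` stand-in (which a real pipeline would replace with a model call), and the number-leakage check are all illustrative assumptions.

```python
import re

# Hypothetical allowlist of vetted topics the pipeline may handle.
ALLOWED_TOPICS = {"billing", "coverage", "claims"}

def filter_input(text: str, topic: str) -> str:
    # Reject anything outside the vetted topic set before it reaches the model.
    if topic not in ALLOWED_TOPICS:
        raise ValueError(f"topic {topic!r} is not in the vetted set")
    return text

def simplify(text: str) -> str:
    # Stand-in for the LLM call; a real system would prompt the model here
    # to rewrite the text in plain language and expand acronyms.
    return text.replace("EOB", "Explanation of Benefits (EOB)")

def vet_output(source: str, output: str) -> str:
    # Cheap post-hoc sanity check: the simplified text may only rephrase
    # what was already there, never introduce figures absent from the source.
    src_numbers = set(re.findall(r"\d+", source))
    out_numbers = set(re.findall(r"\d+", output))
    if not out_numbers <= src_numbers:
        raise ValueError("output introduced figures not present in the source")
    return output

source = "Your EOB shows a deductible of 500 dollars."
result = vet_output(source, simplify(filter_input(source, "billing")))
```

The key design point is that neither guard trusts the model: the input filter narrows what the model ever sees, and the output check compares the result back against the source rather than against anything the model claims.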
I guess the chat bot is drawing from the data where corpos get away with everything?
Believe it when they say the truth out loud.