The Microsoft-powered bot says bosses can take workers’ tips and that landlords can discriminate based on source of income

  • Daxtron2@startrek.website · 7 months ago

    Yet another example of people fundamentally misunderstanding the proper use of LLMs and throwing them into production without any kind of sanity checks on the input and output. As someone who used to work for NYS as a software engineer, this is entirely unsurprising.

    • pdxfed@lemmy.world · 7 months ago

    Work in HR. Have a very smart boss. He asked me about AI for recruiting, screening, and other purposes. I told him: wait five years, we'll watch the catastrophic lawsuits hit the early adopters, and after five more there will be some usable plug-and-play solutions.

    Anyone eating up the Big 4's and startups' own horseshit deserves what they get. They've fully demonstrated they don't QC, and especially on critical, difficult-to-parse, contextual, or fast-changing information, LLMs are incredibly immature.

      • P03 Locke@lemmy.dbzer0.com · 7 months ago

        LLMs are still good for the kind of flowery language you need in HR, but not for any sort of fact-based generation.

        Think of it as being creative, not logical.

      • Daxtron2@startrek.website · 7 months ago

        The biggest thing I've found is that limiting inputs with a filter and vetting outputs produces much higher quality results. One project I'm working on takes highly complex language and simplifies it for users: there's no user input, and it's not being used to create anything that isn't already there. It takes highly technical language full of acronyms and breaks it down into units normal people can understand. Of course, my company is heavily regulated, so we're extremely focused on QA and on ensuring it never outputs something that doesn't align with the source material.
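
        The guardrail pattern described above (no free-form user input, filter what goes in, vet what comes out) can be sketched roughly as follows. Everything here is hypothetical and illustrative, not the commenter's actual system: `simplify_with_llm` stands in for a real model call (stubbed here as glossary-based acronym expansion so the example runs), and `ALLOWED_DOCS`, `GLOSSARY`, and the word-level vetting rule are invented for the sketch.

        ```python
        # Hypothetical sketch: input allow-list in front of the "model",
        # output vetting behind it. The model call is stubbed so the
        # example is runnable and deterministic.

        GLOSSARY = {
            "SLA": "service level agreement",
            "PII": "personally identifiable information",
        }

        # Only pre-approved documents may be simplified -- no user input.
        ALLOWED_DOCS = {"policy-101"}


        def simplify_with_llm(text: str) -> str:
            """Stand-in for the real model: expand known acronyms in place."""
            for acronym, expansion in GLOSSARY.items():
                text = text.replace(acronym, f"{acronym} ({expansion})")
            return text


        def _words(text: str) -> set[str]:
            """Punctuation-stripped word set, for a crude traceability check."""
            return {w.strip("().,;:") for w in text.split()} - {""}


        def vet_output(original: str, simplified: str) -> bool:
            """Reject output containing words traceable neither to the
            source text nor to the vetted glossary (i.e., invented content)."""
            allowed = _words(original)
            for expansion in GLOSSARY.values():
                allowed |= _words(expansion)
            return _words(simplified) <= allowed

        def simplify(doc_id: str, text: str) -> str:
            """Full pipeline: filter the input, run the model, vet the output."""
            if doc_id not in ALLOWED_DOCS:
                raise ValueError("document not in the approved set")
            out = simplify_with_llm(text)
            if not vet_output(text, out):
                raise ValueError("output failed vetting; withhold it")
            return out
        ```

        The point of the design is that the model never sees unfiltered input and the user never sees unvetted output; a real deployment would use a much stronger output check than word-set containment, but the shape of the pipeline is the same.
        
        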