• 1stTime4MeInMCU@mander.xyz · 4 hours ago

    I’m convinced that people who can’t tell when a chatbot is hallucinating are also bad at telling whether anything else they read is true. What are you reading online that you’re not fact-checking anyway? If you’re writing a report, you don’t pull the first fact you find and call it good; you find a couple of citations for it. If you’re writing code, you don’t just write the program and assume it’s correct; you test it. It’s just a tool, and I think most people are coping because they’re bad at using it.
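
    The "you test it" point can be sketched with a quick sanity check. Here, `slugify` is a hypothetical helper as a chatbot might suggest it (the name and behavior are illustrative assumptions, not from this thread); the point is exercising it before trusting it:

    ```python
    # Hypothetical chatbot-suggested helper -- illustrative, not from the thread.
    def slugify(title: str) -> str:
        """Lowercase a title and join its words with hyphens."""
        return "-".join(title.lower().split())

    # Don't assume generated code is correct: probe edge cases first.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  extra   spaces  ") == "extra-spaces"
    assert slugify("") == ""  # empty input shouldn't crash
    ```

    A few asserts like these catch the most common hallucination failure mode: code that looks plausible but mishandles whitespace, empty input, or casing.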

    • BluesF@lemmy.world · 4 hours ago

      Yeah. GPT models are in a good place for coding, tbh. I use one every day to support my usual practice, and it definitely speeds things up. It’s particularly good for things like identifying niche Python packages and providing example use cases, so I don’t have to learn shit loads of syntax that I’ll never use again.

      • Aceticon@lemmy.world · 22 minutes ago

        In other words, it’s the new version of copying code from Stack Overflow without going to the trouble of properly understanding what it does.