• BlameTheAntifa@lemmy.world · 4 days ago

    They’re not wrong, but I heard similar things when search engines first appeared. To be fair, that wasn’t wrong either.

    • Bob Robertson IX @discuss.tchncs.de · 3 days ago

      In college (25+ years ago) we were warned that we couldn’t trust Wikipedia and shouldn’t use it. And, yes, it was true back then that you had to be careful with what you found on Wikipedia, but it was still an incredible starting point for finding sources.

      My 8-year-old came home this year saying her class was using AI, and I used it as an opportunity to teach her how to use an LLM properly, and how to be very suspicious of what it tells her.

      She will need the skills to efficiently use an LLM, but I think it’s going to be on me to teach her that because the schools aren’t prepared.

      • SkyNTP@lemmy.ml · 3 days ago

        Wikipedia didn’t start out hallucinating. Also unlike LLMs, Wikipedia isn’t being marketed as being capable of doing things it can’t do.

        It’s not that good of a comparison.

        • BlameTheAntifa@lemmy.world · 3 days ago

          Wikipedia started out as being extremely unreliable. So did Lycos, AltaVista, Yahoo, etc. Those things have matured over the 30+ years they’ve been around, but they didn’t start that way. The ability to research, confirm, and corroborate is an important part of life. It always has been and always will be.

        • 5too@lemmy.world · 3 days ago

          Wikipedia did start having prank edits early on (and later malicious ones).

          Didn’t Stephen Colbert talk his fans into keeping certain content on a specific Wikipedia article at some point?

      • 5too@lemmy.world · 3 days ago (edited)

        True, but at some point they’ll need to use a computer to write the essay, and at that point it’s pretty easy to slip over to an AI prompt.

  • 0x01@lemmy.ml · 4 days ago

    From the article:

    Kate Conroy

    I teach 12th grade English, AP Language & Composition, and Journalism in a public high school in West Philadelphia. I was appalled at the beginning of this school year to find out that I had to complete an online training that encouraged the use of AI for teachers and students.

    I know of teachers at my school who use AI to write their lesson plans and give feedback on student work. I also know many teachers who either cannot recognize when a student has used AI to write an essay or don’t care enough to argue with the kids who do it.

    Around this time last year I began editing all my essay rubrics to include a line that says all essays must show evidence of drafting and editing in the Google Doc’s history, and any essays that appear all at once in the history will not be graded.

    • explodicle@sh.itjust.works · 3 days ago

      Kid me would go to more effort to make GPT write a bunch of “in progress” versions than to just write the damn essay.

      • chonglibloodsport@lemmy.world · 3 days ago

        It’s not that hard. Just scroll through the editing history. You can even look at timestamps to see if the student actually spent any time thinking and editing or just re-typed a ChatGPT result word for word all in one go. Creating a plausible fake editing history isn’t easy.
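
        As a rough illustration of that timestamp check, here’s a minimal sketch in Python. The data format and thresholds are made up for illustration; this is not the Google Docs API, just the same heuristic applied to a simplified log of (timestamp, characters added) revisions.

```python
from datetime import datetime, timedelta

def looks_pasted(revisions, min_sessions=3, min_total_minutes=30):
    """Flag a document whose revision history suggests one big paste
    rather than gradual drafting and editing.

    `revisions` is a list of (timestamp, chars_added) tuples — a
    simplified, hypothetical stand-in for a real revision log.
    """
    if len(revisions) < min_sessions:
        return True  # too few revisions to show any drafting
    first, last = revisions[0][0], revisions[-1][0]
    total_minutes = (last - first).total_seconds() / 60
    if total_minutes < min_total_minutes:
        return True  # the whole essay appeared in one short burst
    # A single revision contributing almost all the text is suspicious.
    biggest = max(chars for _, chars in revisions)
    return biggest > 0.9 * sum(chars for _, chars in revisions)

# Example: three hours of gradual edits vs. one big paste
t0 = datetime(2025, 1, 10, 15, 0)
gradual = [(t0 + timedelta(minutes=20 * i), 300) for i in range(10)]
pasted = [(t0, 2800), (t0 + timedelta(minutes=2), 50)]
print(looks_pasted(gradual))  # False
print(looks_pasted(pasted))   # True
```

        Of course, real teachers do this by eye, and as the replies below note, a determined student can try to fake a plausible history — the heuristic only raises the effort required.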

        • Blue_Morpho@lemmy.world · 3 days ago

          There’s probably a way to prompt an AI to write everything but include some mistakes. That way most of the work is done; you just go through and edit out the mistakes.

          I think you underestimate how much work kids will do to avoid homework.

          • chonglibloodsport@lemmy.world · 3 days ago

            That’s going to show up as a big copy-paste followed by a bunch of edits. Or a big full-retype with edit fixing.

            The true arbiter is time. If a student who you know struggled during in-class writing assignments knocks out the essay in 15 minutes while the class median is a couple of hours, it’s pretty obvious who cheated.

            Is a student going to go out of their way to slowly retype a ChatGPT essay over the course of a few hours, with not only typo corrections but also full-sentence rewriting? At that point I think they’ve proven they can write just by doing that extensive editing; they would probably finish faster by writing it on their own! Unless they’re just using ChatGPT to get a framework for the essay and then rewriting it in their own words. If I were a teacher I’d be fine with students doing the latter; it’s still not ideal, but at least it shows a lot of effort.