• GreyBeard@lemmy.one · 2 days ago

    I did, as a contrast, and it didn’t seem to have a problem talking about it, but it didn’t mention the actual massacre part, just that the protesters and the government were at odds. Of course, I simply asked “What happened at Kent State?” and it knew exactly what I was referring to. I’d say it tried to sugarcoat it on the state’s side. If I probed it a bit more, I’d guess it has a bias toward pretending the state is right, no matter what state that is.

      • GreyBeard@lemmy.one · 2 days ago

        So I decided to try again with the 14b model instead of the 7b model, and this time it actually refused to talk about it, with a response identical to the one it gives about Tiananmen Square:

        What happened at Kent State?

        deepseek-r1:14b <think> </think>

        I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses.
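
        If anyone wants to reproduce this: the deepseek-r1:14b tag matches Ollama’s model naming, so assuming a local Ollama install with that model pulled, something like the following with the ollama Python client sends the same question. The client call is just one way to do it, not necessarily how the exchange above was actually run.

        import ollama  # assumes the Ollama server is running locally with deepseek-r1:14b pulled

        # Ask the same question quoted in the exchange above.
        response = ollama.chat(
            model="deepseek-r1:14b",
            messages=[{"role": "user", "content": "What happened at Kent State?"}],
        )

        # The reply text includes the model's <think>...</think> block inline, as shown above.
        print(response["message"]["content"])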

          • GreyBeard@lemmy.one · 2 hours ago

            Neither did I, but if you think about it, it kinda makes sense. Rather than programming in every topic it can’t talk about, just tell it to refuse to discuss controversial events. A reasonable method when you live in a censored state.
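
            A toy sketch of that idea: a single blanket instruction does the filtering instead of an enumerated list of banned topics. This is only an illustration of the approach; the system prompt and the ollama call here are assumptions for the sketch, not DeepSeek’s actual implementation.

            import ollama  # toy illustration only; assumes a local Ollama install

            # One blanket instruction instead of a list of forbidden topics.
            SYSTEM = (
                "You are a helpful assistant. If the user asks about controversial "
                "historical or political events, reply only: 'I am sorry, I cannot "
                "answer that question.'"
            )

            response = ollama.chat(
                model="deepseek-r1:14b",  # hypothetical choice; any instruction-following model would do
                messages=[
                    {"role": "system", "content": SYSTEM},
                    {"role": "user", "content": "What happened at Kent State?"},
                ],
            )
            print(response["message"]["content"])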