• netvor@lemmy.world · 28 days ago

    NTA but I think it’s worth trying to steel-man (or steel-woman) her point.

    I can imagine that part of the motivation is to try to use ChatGPT to actually learn from the previous interaction. Let’s leave the LLM out of the equation for a moment: imagine that after an argument, your partner would go and do lots of research, one or more things like:

    • read several books focusing on social interactions (non-fiction or fiction or even other forms of art),
    • talk in depth to several experienced therapists and/or psychology researchers and neuroscientists (with varying viewpoints),
    • perform several scientific studies on various details of your interactions, including relevant physiological factors.

    Then, after doing this ungodly amount of research, she would come back and present her findings to you, in hopes that you would both learn from them.

    Obviously no one can actually do all that, but some people might, out of genuine curiosity and a drive for self-improvement, feel motivated to try. So one could think of the OP’s partner’s behavior as a stand-in for that research.

    That said, even if LLMs weren’t unreliable, hallucinating, and poisoned with junk information, or even if she were magically able to do all that without an LLM and with a superhuman level of scientific accuracy and bias protection, it would … still be a bad move. She would still be the asshole, because OP was not involved in any of that research. OP had no say in how the problem was formulated, let alone in how the “answer” was discovered.

    Even from the most nerdy, “hyper-rational” standpoint: the research would still be ivory-tower research, and assuming it transfers to the real world just like that is arrogant: it fails to acknowledge the limitations of the researcher.