At a recent appointment, Emily’s physical therapist (who knows a bit about her research) said, “Before we get started, there’s something I want to ask you about.” The something was an automatic “scribing” system their office is trialing for two weeks before deciding whether to purchase. These systems take in a (presumably audio-only) recording of the patient encounter and output a draft patient note for the chart.

So what’s the big deal with “AI” charting? Here are nine reasons why we recommend refusing to consent to the use of scribing tools in healthcare settings:

  • chicken@lemmy.dbzer0.com · 6 days ago

    These systems always involve third-party software, where recordings of provider-patient interactions are sent to some other company.

    Not to dismiss the other problems with this practice, but it seems especially crazy that it works this way, since transcription and summarization in particular don’t need powerful hardware, and smaller models that can handle these tasks keep coming out. There’s no justifiable reason for any of this data to leave the doctor’s office, and a lot of reasons for it not to.

    • Phoenixz@lemmy.ca · 5 days ago

      Wellllll…

      I’m fully against these systems, and fully against my data leaving the doctor’s office, but “summarization in particular doesn’t need powerful hardware” is a bit of a stretch, to put it mildly, especially when you look at what computers most doctors have to work with. The average doctor’s computer that I see still uses VGA cables, for crying out loud; you won’t see any AI on those machines, ever.

      • chicken@lemmy.dbzer0.com · 5 days ago

        Summarization is known for being one of the things weaker models can handle competently, and there are LLMs with very low system requirements:

        239 tok/s decode on AMD CPU, 82 tok/s on mobile NPU. Runs under 1GB of memory

        I don’t know exactly what minimum performance a model would need for this specific task, but it seems very likely it could be done by something that runs on a phone or other low-powered hardware.
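Back-of-the-envelope on the throughput figures quoted above (the note length here is my own rough assumption, not from the quoted spec):

```python
def generation_time(note_tokens: int, tokens_per_second: float) -> float:
    """Seconds to decode a note of the given length at a given throughput."""
    return note_tokens / tokens_per_second

# Decode rates quoted above for a small local model.
cpu_tps = 239.0  # AMD CPU
npu_tps = 82.0   # mobile NPU

# ~500 tokens is a rough guess at a typical visit note; adjust to taste.
note_tokens = 500

print(f"CPU: {generation_time(note_tokens, cpu_tps):.1f}s")  # → CPU: 2.1s
print(f"NPU: {generation_time(note_tokens, npu_tps):.1f}s")  # → NPU: 6.1s
```

Even on the slower mobile NPU figure, drafting a full note takes seconds, which is why the claim that this could run entirely on-device seems plausible.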

    • obtoxious@lemmy.ml · 6 days ago

      I work in a place that is using these. The reason they send it out is that there is a very low level of comfort with tech, and the outside company is “taking care of” a lot of messiness. The fact of the matter is there is no in-house capability for this. You should have seen us trying to replace the toner (or was it the drum?) in the fax machine yesterday. The first company with a “nice” UI will have market dominance, not for technical but for social reasons.

      • chicken@lemmy.dbzer0.com · 6 days ago

        I think at this point you could probably even do it as an offline mobile app, with no extra technical competence needed to use it. If it needs syncing with other devices, use third-party servers, but make it end-to-end encrypted with all processing client-side, so they don’t actually see anything. But they want to see it, because that data is a valuable asset, even though ethically they really should not have it.
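As a toy sketch of that architecture: the client encrypts the note before upload, so the sync server only ever stores opaque blobs. The XOR keystream below is a stand-in for a real AEAD cipher such as AES-GCM; do NOT use this construction for actual patient data.

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from key + nonce (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt on the client; only (nonce, ciphertext) ever reach the server."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct

def decrypt(key: bytes, nonce: bytes, ct: bytes) -> bytes:
    """Decrypt on another of the patient's/provider's own devices."""
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = os.urandom(32)  # generated and kept on the devices, never uploaded
note = b"Patient reports improved sleep; continue current plan."
nonce, blob = encrypt(key, note)   # the server stores only this pair
assert decrypt(key, nonce, blob) == note
```

The point of the design is that the sync provider is reduced to blob storage: without the device-held key, the recordings and notes are worthless to them, which is exactly why vendors have no commercial incentive to build it this way.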

        • obtoxious@lemmy.ml · 5 days ago

          I agree, but someone (?) needs to get that app out, certified, and known ASAP, because the ones already on the market will be entrenched before long.

          If you want to give your health care provider a small panic attack, just mention the word “migration,” as in “migrate to a new computer system.” Nobody wants to do it. Once something “works” (using the broadest definition of the term) it will be kept as long as possible.

  • grue@lemmy.world · 6 days ago

    I had my first appointment with my new primary care doctor just the other day (my previous one retired) and he proposed using one of those. I had a short conversation with him about my concerns as a software engineer regarding “cloud” and AI shit, and he himself suggested we not use it.

    My wife also had her first appointment with him, and they apparently did use it in that one. 😕

    • obtoxious@lemmy.ml · 6 days ago

      Making it about individual choice is an error. 98% of people agree because the person asking permission holds so much power over them. We need regulations.

  • ragebutt@lemmy.dbzer0.com · 6 days ago

    Every doctor in my healthcare network uses this, and they all side-eye me when I say no. I get it, it takes the tedious, boring part of the job and basically reduces it by 90%, but fuck you buddy, you can never even answer basic questions about the model, like whether it uses data fed into it for refinement, and often don’t even know what that means. You’re a goddamn doctor, read up on this shit for a few hours before blindly accepting it.

    Fwiw I run a small behavioral health practice. I get emails from people selling this shit like 2-3 times a week. There aren’t that many companies doing it but they’re super aggressive about sales. They’ll have a rep email you over and over and over and when you tell them to fuck off they’ll just have a new rep start.

    There are models for this that are better in theory: they require you to type rough notes, which are then refined, which sidesteps the gigantic issue of transmitting such sensitive audio. Some even guarantee that the model used for generating notes is independent of the model used for reinforcement learning and refinement, i.e. the notes I feed in don’t get used for improvement unless I consent to them being used for model research. But even with that I’m not comfortable, because I don’t trust these companies at their word. Trialing these systems does show they can be very effective, though, and I fear this is inevitable, as again it reduces administrative (read: non-billable) burden by a tremendous amount.

    Also, for some reason many of the companies offering systems that audio-record sessions, transcribe, and summarize (rather than just summarizing text notes) are Israeli, and that is sketch to me. I don’t trust such intimate data with a product from a country that is hostile, violent, and known for aggressive espionage. This is behavioral health, though, where EMR solutions are often more barebones for small practices; your local hospital probably uses Epic or Oracle (who don’t bother with small-practice stuff), though I’m not sure if they develop LLM stuff in-house or just license it.

    • obtoxious@lemmy.ml · 6 days ago

      There are models for this that are better in theory, they require you to type rough notes which are refined which sidesteps the gigantic issue of transmitting such sensitive audio.

      I have never seen anything that starts with typing?

      Also for some reason many of the companies offering systems that audio record sessions, transcribe, and summarize (over just summarizing text notes) are Israeli and that is sketch to me.

      Have also noticed it. I have it on my to-do list to look into these. Hmu if you find anything. Fuck knows what the training data is for. We have campaigns to divest HOOP from Israel, and I think AI should be included.

      • ragebutt@lemmy.dbzer0.com · 6 days ago

        Smaller EMRs for small behavioral health practices tend to have text-only LLMs: Headway has Scribe, TherapyNotes has TherapyFuel, SimplePractice has one too, and I forget the name. All of these work by taking written session notes and generating a proper progress note, e.g. “client reports more positive mood, anxiety remains triggered by x, practiced breathing exercises” gets extrapolated out into “the client presented with improvements noted in mood, and session focused on mindfulness strategies to manage anxiety related to x” (more in-depth than that, but you get it).
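A guess at how those text-only features are likely wired internally: a prompt template wrapped around the clinician’s shorthand and sent to the vendor’s hosted model. The function name and template below are my own illustration; no vendor publishes its actual prompts.

```python
def build_progress_note_prompt(rough_notes: str) -> str:
    """Wrap a clinician's rough shorthand in instructions for a text-only LLM.

    Illustrative sketch only, not any vendor's real implementation.
    """
    return (
        "Rewrite the rough session notes below as a formal progress note, "
        "third person, clinical register. Do not add facts that are not "
        "present in the notes.\n\n"
        f"Rough notes: {rough_notes}"
    )

prompt = build_progress_note_prompt(
    "client reports more positive mood, anxiety remains triggered by x, "
    "practiced breathing exercises"
)
```

The “do not add facts” constraint is the load-bearing instruction here, and it is exactly the one that fails silently when the clinician skips reviewing the generated note.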

        In my evaluation the LLMs worked fairly well, but not always; they tended to shift “writing the note” to “evaluating the note.” I don’t use one for my practice for several reasons, primarily the allure of skipping that evaluation step and ending up with notes containing irrelevant hallucinations or outright falsehoods that get included because a clinician got lazy. As the article states, I also believe the process of writing the note serves as reflection on the case and a time to consider conceptualization; eliminating that is certainly convenient but a potential harm to successful outcomes.

        When it comes down to it, I simply don’t trust this in a server-side application. The only way I would consider it is if I could run a local model, generate the note on a PC with no internet access, and then upload it. Granted, because it’s entered into the EMR it could still be harvested for LLM training, because AI fucks are without ethics and above the law, but the chances of that are far lower. A workaround that solves both of my issues actually came from the psychotherapy subreddit back when I still used Reddit: just use local speech transcription (though confirm it actually happens locally, which is the case on Apple devices post-iOS 16 or so).

        WRT Israeli companies, Twofold Scribe is currently the one that bothers me the most about signing up. They are headquartered in NY but a subsidiary of Ravel Technologies, an Israeli company. If you look up the developer info on the Google Play store, the given address for the Twofold app is “4 Gordon REHOVOT, 7629112 Israel.” In my experience it’s often done like this, and you have to dig a bit.

        That also brings up the other issue: these services are often expensive as shit. TherapyNotes adds $40 onto your subscription for theirs, Twofold is like $70/mo, etc., and all charge additional per-user fees on top of that. The exception is Headway and I think Grow, which are free, but that passes the cost on to the consumer (and imo they’re currently propped up by VC capital; once that runs dry and they’re forced to be “profitable,” this could change drastically). In a field where reimbursement rates are relatively high ($80-130/hr-ish) and yet most clinicians still make $40-60k/yr without benefits because of overhead, it’s pretty foolish to add even more monthly costs, though tbf the overwhelming majority of overhead for 99% of clinicians is outrageous commercial real estate pricing.

  • ThePantser@sh.itjust.works · 6 days ago

    My kid’s allergist wanted to use it. I said no and had two nurses come “explain” it better to convince me. I said no again, and the doctor never once tried to convince me himself. He was just snippy with me because he had to use his hands to take notes, like a baby with a toy.

    I will never consent to AI transcription; it’s going to kill someone, if it hasn’t already.

    • streetfestival@lemmy.ca (OP, mod) · 6 days ago

      consent

      That’s an important word, because as the main-linked article says, and my experience corroborates, medical offices seldom explain what exactly patients are ‘consenting’ to with ‘AI scribing’ well enough for people to be informed enough to truly consent.

      I come from a research background and when my family doctor casually asked me if I was fine with him using AI for note-writing upon entering the examination room and sitting down, I was internally like, “ughh, no consent form, explanation, etc.?” But I was there for something sensitive and wanted to ‘go along to get along.’ So, I casually agreed. A few minutes later, I noticed he wasn’t typing, so I said, “I notice you’re not typing, is my voice being recorded?”

      Needless to say, I wrote the office soon after to say I did not consent to the use of that tool in my care henceforth.

      • a_gee_dizzle@lemmy.ca · 6 days ago

        medical offices seldom explain what exactly patients are ‘consenting’ to with ‘AI scribing’ well enough for people to be informed enough to thus be able to truly consent.

        My doctor never even told me that she was using it; I just figured it out when I noticed that she wasn’t typing anything and was saying a lot of technical stuff out loud (which was odd, because she must know I won’t understand the jargon). Then I saw that she had a very expensive microphone on her desk and what appeared to be an AI transcription program open on her computer. Absolutely no consent in my case. I wasn’t even informed. I had to figure it out for myself.

        • streetfestival@lemmy.ca (OP, mod) · 6 days ago

          Shocking and outrageous.

          and was saying a lot of technical stuff out loud

          The quality article I linked in another comment, from an early adopter turned critic of ‘AI scribing,’ suggests you’re right: they’re not saying that for your benefit, they’re saying it for the scribe, to structure the note to their liking. This is an example of how the encounter with the patient changes as a result of using this software.

  • streetfestival@lemmy.ca (OP, mod) · 6 days ago

    https://benngooch.substack.com/p/i-was-an-enthusiastic-early-adopter

    This is not a small thing. The clinical note in general practice is not merely a medicolegal record. It is, as research in the Journal of General Internal Medicine has articulated, a form of narrative medicine — a clinician-authored story that reflects how the physician understood the patient’s situation at that moment in time. The act of writing it is itself a cognitive process: it forces synthesis, prioritisation, and reflection. It is, in a real sense, how we think.

    When an AI records everything rather than what we choose to record, this feedback loop breaks. The note stops being a reflection of clinical reasoning and becomes a verbatim archive. And as a PMC piece on note bloat explicitly warns, this risks obscuring the most medically important information under a volume of equally weighted detail.

    The problem is not just that notes become longer. It is that they become less curated, less authored, less ours. And the clinical memory they are supposed to scaffold — the ability to pick up a case six weeks later and immediately reorient — degrades with them.

  • dohpaz42@lemmy.world · 6 days ago

    They wanted to do this at my kids’ pediatrician’s office. We opted out of that immediately.