• fartsparkles@sh.itjust.works
        1 year ago

        I feel the issue with AI models isn’t the source not being open, but the derived model itself not being transparent and auditable. The sheer number of ML experts who can’t explain how their own models produce a given result is the biggest concern, and solving it will require a completely different approach to model architecture and visualisation.

        • Jamie@jamie.moe
          1 year ago

          No amount of ML expertise will let someone know exactly how a model produced a result. Training a model from data means performing a lot of very delicate math an uncountable number of times, and it simply isn’t possible to comprehend the work going on inside in any meaningful way other than by guesswork.
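          To make the point concrete, here’s a minimal sketch (mine, not from the thread) of what “delicate math done many times” means even in the tiniest possible case, a single neuron fit by gradient descent. The final weights are just numbers shaped by thousands of small updates; nothing stored in them “explains” any individual prediction, and real models have billions of such numbers:

          ```python
          import random

          # Toy case: fit a single "neuron" pred = w*x + b to data from y = 2x + 1.
          random.seed(0)
          data = [(x, 2 * x + 1) for x in range(10)]

          w, b = random.random(), random.random()
          lr = 0.01
          for _ in range(2000):
              for x, y in data:
                  err = (w * x + b) - y
                  # gradient of squared error with respect to w and b
                  w -= lr * err * x
                  b -= lr * err

          # w and b end up near 2 and 1, but only because of ~20,000 tiny nudges;
          # inspecting the numbers alone tells you nothing about *why* they work.
          print(round(w, 2), round(b, 2))
          ```

          Scale that opacity up to billions of parameters and nonlinear layers, and auditing the weights directly becomes hopeless, which is why interpretability is its own research field rather than a code-review problem.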

        • DigitalJacobin@lemmy.ml
          1 year ago

          We already know how these models fundamentally work. Why does it matter exactly how a model produced some result? /gen