There was a time when this debate was bigger. The world seems to have shifted toward architectures and tooling that don't allow dynamic linking, or make it harder. This compromise makes life easier for the maintainers of the tools and languages, but it takes choice away from the user/developer. But maybe that's not important? What are your thoughts?

  • 0x0@programming.dev · 1 year ago

    Disk space and RAM availability have increased a lot in the last decade, which has allowed the rise of the lazy programmer, who'll code without caring (or, increasingly, without knowing) about these things. Bloat is king now.

    Dynamic linking lets you save disk space and memory by ensuring all programs share the single version of a library lying around, which also means less testing. You're delegating version tracking to the distro's package maintainers.

    You can use the dl* family (dlopen, dlsym, dlclose) for finer control over what you load and when, and if the dependency is FLOSS, the world's your oyster.
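
    For instance, here's a minimal sketch of the dlopen/dlsym flow (libm.so.6 and cos are just convenient stand-ins for whatever dependency you'd actually load):

    ```c
    #include <dlfcn.h>   /* dlopen, dlsym, dlclose, dlerror */
    #include <stdio.h>

    int main(void) {
        /* Load the library at runtime instead of linking against it. */
        void *handle = dlopen("libm.so.6", RTLD_LAZY);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* Resolve the symbol at runtime; the cast through void** is
         * the POSIX-sanctioned idiom for function pointers. */
        double (*cosine)(double);
        *(void **)(&cosine) = dlsym(handle, "cos");
        if (!cosine) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(handle);
            return 1;
        }

        printf("cos(0.0) = %f\n", cosine(0.0));
        dlclose(handle);
        return 0;
    }
    ```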

    Static linking can make sense if you're developing portable code for a wide variety of OSes and/or architectures, or if your dependencies are small, uncommon, or whatever.
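
    For illustration, with gcc the difference is a single flag (fully static glibc builds have caveats around NSS, which is part of why musl is a popular choice for this):

    ```c
    /* hello.c -- build it both ways and compare:
     *
     *   gcc hello.c -o hello-dyn            # resolves libc.so at runtime
     *   gcc -static hello.c -o hello-stat   # libc baked into the binary
     *
     * ldd lists the shared-library dependencies of the first binary
     * and reports that the second is not a dynamic executable.
     */
    #include <stdio.h>

    int main(void) {
        puts("hello, linker");
        return 0;
    }
    ```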

    This, of course, is my take on the matter. YMMV.

    • uis@lemmy.world · 1 year ago

      > Static linking can make sense if you're developing portable code for a wide variety of OSs

      I doubt any other OS supports Linux syscalls, so a statically linked binary still only runs where its syscall ABI does.