Let’s say I have a few microservices in different repositories that communicate over HTTP using JSON. Some services are triggered directly by other microservices, while others are triggered by events: a timer going off, a file being dropped into a bucket, a firewall rule blocking X packets and hitting a threshold, etc.

Is there a way to document the microservices together in one holistic view? In particular, how do you visualise the data, its schema (fields, types, …), and its flow between the microservices?


Bonus (optional) question: Is there a way to handle schema updates? For example, generate code from the documentation and trigger a CI build in affected repos to ensure they still work with the updates.
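To make the bonus concrete, something like this is what I have in mind. A rough sketch, assuming the affected repos live on GitHub and each has a CI workflow listening for a repository_dispatch event; the repo names, token variable, and event type are placeholders:

```python
# Rough sketch: notify downstream repos when a shared schema changes.
# Assumes GitHub and that each affected repo has a CI workflow that
# listens for a repository_dispatch event. Repo names are placeholders.
import os

import requests

AFFECTED_REPOS = ["example-org/orders-service", "example-org/billing-service"]  # hypothetical


def trigger_downstream_builds(changed_schema: str) -> None:
    token = os.environ["GITHUB_TOKEN"]
    for repo in AFFECTED_REPOS:
        resp = requests.post(
            f"https://api.github.com/repos/{repo}/dispatches",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.github+json",
            },
            json={
                "event_type": "schema-updated",  # matched by the downstream workflow
                "client_payload": {"schema": changed_schema},
            },
        )
        resp.raise_for_status()


if __name__ == "__main__":
    trigger_downstream_builds("order-event.json")
```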

Anti Commercial-AI license

  • johnydoe666@lemmy.dbzer0.com · 6 months ago

    We’re using Backstage in combination with OpenAPI. The schemas are documented in OpenAPI, while how services are connected is handled by Backstage, which crawls all repositories and puts the pieces together into nice graphs we can traverse easily.
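    Very roughly, each repo carries a small descriptor that the catalog picks up; a toy version of the “crawl and stitch together” step could look like the sketch below. The catalog-info.yaml name and the providesApis/consumesApis fields follow Backstage’s catalog model, but the crawler itself is just an illustration, not how Backstage actually does it:

    ```python
    # Sketch: crawl checked-out repos for catalog descriptors and build a
    # service graph from which APIs each component provides/consumes.
    # Field names mirror Backstage's catalog-info.yaml convention, but this
    # is an illustration, not Backstage's actual crawler.
    from pathlib import Path

    import yaml


    def load_descriptors(root: Path):
        for path in root.glob("*/catalog-info.yaml"):
            yield yaml.safe_load(path.read_text())


    def build_edges(descriptors):
        provides = {}  # api name -> component that provides it
        consumes = []  # (component, api name)
        for d in descriptors:
            name = d["metadata"]["name"]
            spec = d.get("spec", {})
            for api in spec.get("providesApis", []):
                provides[api] = name
            for api in spec.get("consumesApis", []):
                consumes.append((name, api))
        # Edge: consumer -> provider, labelled with the API (schema) in between
        return [(c, provides[api], api) for c, api in consumes if api in provides]


    if __name__ == "__main__":
        edges = build_edges(load_descriptors(Path("repos")))
        for consumer, provider, api in edges:
            print(f"{consumer} -> {provider} [{api}]")
    ```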

    • aes@programming.dev · 6 months ago

      Backstage has become quite misaligned with what we were originally trying to do. Originally, we were trying to inventory and map the service ecosystem to deal with a few concrete problems. For example, when developing new things, you had to go through the village elders and the grapevine to find out what everyone else was doing. Another serious problem was not knowing / forgetting that we had some tool that would’ve been very useful when the on-call pager went off at fuck-you-dark-thirty.

      A reason we could build that map in System-Z (the predecessor of Backstage) is that our (sort of) HTTP/2 had a feature to tell us who had called methods on a service. (you could get the same from munging access logs, if you have them)
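      If you go the access-log route, the munging can be as simple as reducing each request line to a caller -> callee pair. This sketch assumes a made-up log format where each line starts with the callee service followed by the calling service; adapt the parsing to whatever your logs actually contain:

      ```python
      # Sketch: reduce access logs to caller -> callee edges.
      # Assumes a made-up log format where each line starts with the callee
      # and the calling service, e.g. "billing orders GET /invoices/42".
      import sys


      def edges_from_log(lines):
          edges = set()
          for line in lines:
              parts = line.split()
              if len(parts) < 2:
                  continue  # skip lines that don't match the expected shape
              callee, caller = parts[0], parts[1]
              edges.add((caller, callee))
          return edges


      if __name__ == "__main__":
          for caller, callee in sorted(edges_from_log(sys.stdin)):
              print(f"{caller} -> {callee}")
      ```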

      Anyway, the key features were that you could see what services your service was calling, who was calling you, and how those other systems were doing, and that you could see all the tools (e.g. build, logs, monitoring) your service was connected to. (for the ops / on-call use case)

      A lot of those tool integrations were just links to “blahchat/#team”, “themonitoring/theservice?alerts=all” or whatever, to hotlink directly into the right place.

      It was built on an opt-in philosophy, where “blahchat/#team” was the default, but if (you’re John-John and) you insist that the channel for ALF has to be #melmac, you can have that, but you have to add it yourself.
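      Mechanically, that opt-in setup is just a default link template plus an override table, something like this sketch (the names and URLs are made up):

      ```python
      # Sketch: per-service tool links with sensible defaults and opt-in overrides.
      DEFAULT_LINKS = {
          "chat": "https://blahchat/#{team}",
          "alerts": "https://themonitoring/{service}?alerts=all",
      }
      OVERRIDES = {
          # Only teams that insist on something non-standard add an entry.
          ("alf", "chat"): "https://blahchat/#melmac",
      }


      def link(service: str, team: str, kind: str) -> str:
          template = OVERRIDES.get((service, kind), DEFAULT_LINKS[kind])
          return template.format(team=team, service=service)


      print(link("orders", "payments-team", "chat"))  # default: https://blahchat/#payments-team
      print(link("alf", "sitcoms", "chat"))           # override: https://blahchat/#melmac
      ```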

      More recently, I’ve seen Swagger/OpenAPI used to great effect. I still want the map of who’s calling whom, and I strongly recommend mechanizing how that map is made (extract it from logs or something; don’t rely on hand-drawn maps). I want to like C4, but I haven’t managed to get any use out of it. Just throw it in a Graphviz dot file.
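      By “throw it in a dot file” I mean nothing fancier than this (edge list in, DOT out, render with `dot -Tsvg`):

      ```python
      # Sketch: turn a caller -> callee edge list into a Graphviz DOT file.
      edges = [
          ("frontend", "orders"),
          ("orders", "billing"),
          ("orders", "inventory"),
      ]


      def to_dot(edges) -> str:
          lines = ["digraph services {", "  rankdir=LR;"]
          lines += [f'  "{a}" -> "{b}";' for a, b in edges]
          lines.append("}")
          return "\n".join(lines)


      if __name__ == "__main__":
          print(to_dot(edges))  # pipe into: dot -Tsvg -o services.svg
      ```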

      Oh, one trick that’s useful there: local maps. For each service S, get the list of everything that connects to it. Make a subset graph of those services, but make sure to include the other connections between those, the ones that don’t involve S. (“oh, so that’s why…”)
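      In code, that local map is just a neighbourhood subgraph; a sketch using the same edge-list shape as above:

      ```python
      # Sketch: "local map" of service s -- everything directly connected to s,
      # plus the edges those neighbours have among themselves (the ones that
      # don't involve s are often the interesting part).
      def local_map(edges, s):
          neighbours = {a for a, b in edges if b == s} | {b for a, b in edges if a == s}
          keep = neighbours | {s}
          return [(a, b) for a, b in edges if a in keep and b in keep]


      if __name__ == "__main__":
          edges = [
              ("frontend", "orders"),
              ("orders", "billing"),
              ("billing", "ledger"),
              ("frontend", "billing"),  # neighbour-to-neighbour edge: the "oh, so that's why" kind
          ]
          for a, b in local_map(edges, "orders"):
              print(f"{a} -> {b}")
      ```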