I’m working on lemmy-meter, a simple observability solution that lets Lemmy end-users like me check the health of a few endpoints of their favourite instance in a visually pleasing way.

👉 You can check out a screenshot of the pre-release landing page.


💡 Currently, lemmy-meter sends 33 HTTP GET requests per minute to a given instance.
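At its core, each of those probes is just an HTTP GET with a timeout. A minimal sketch in Python (the endpoint shown is illustrative, not lemmy-meter’s actual configuration):

```python
from urllib.request import urlopen
from urllib.error import URLError

def is_up(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with an HTTP 2xx status within the timeout."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (URLError, OSError):
        # DNS failures, connection errors, timeouts and non-2xx
        # responses all count as "down".
        return False

# Probing a hypothetical endpoint once per cycle:
# is_up("https://example-instance.tld/api/v3/site")
```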

For a few reasons, I don’t wish lemmy-meter to cause any unwanted extra load on Lemmy instances.
As such, I’d like it to be an opt-in solution, i.e. a given instance’s admin(s) should decide whether they want their instance to be included in lemmy-meter’s reports.

❓ Now, assuming I’ve got a list of instances to begin with, what’s the best way to reach out to the admins about lemmy-meter?


PS: The idea occurred to me after a discussion about momentary outages.

  • Big P · 1 year ago

    Why does it need to make 33 requests per minute? Surely the data doesn’t have to be that up to date?

    • bahmanm@lemmy.ml (OP) · 1 year ago

      Agreed. It was a mix of overly ambitious freshness targets and poor configuration on my side.

  • PenguinCoder@beehaw.org · 1 year ago

    33 HTTP GET requests per minute to a given instance.

    That is way beyond acceptable use, and would likely get your service blocked. There are these services too:

    https://lemmy-status.org/

    https://lemmy.fediverse.observer/stats

    Maybe those do what you’re trying to do?

    There is no “admin inbox” for Lemmy instances, but you can hit the /api/v3/site endpoint for information about an instance, including the list of admins.
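Fetching that admin list could look roughly like this (a hedged sketch in Python; it assumes the /api/v3/site response carries an `admins` array of person views, which matches current Lemmy API versions but may change):

```python
import json
from urllib.request import urlopen

def admin_names(site_response: dict) -> list[str]:
    """Extract admin usernames from a parsed /api/v3/site response."""
    return [a["person"]["name"] for a in site_response.get("admins", [])]

def fetch_admins(instance: str) -> list[str]:
    """Fetch and parse https://<instance>/api/v3/site (live network call)."""
    with urlopen(f"https://{instance}/api/v3/site") as resp:
        return admin_names(json.load(resp))

# Offline example using the response shape assumed above:
sample = {"admins": [{"person": {"name": "admin1"}},
                     {"person": {"name": "admin2"}}]}
print(admin_names(sample))  # ['admin1', 'admin2']
```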

    • bahmanm@lemmy.ml (OP) · 1 year ago

      beyond acceptable use

      Since literally every aspect of lemmy-meter is configurable per instance, I’m not worried about that 😎 The admins can tell me what frequency/number they’re comfortable with, and I can reconfigure the solution.

      You can hit the endpoint /api/v3/site for information about an instance including the admins list.

      Exactly what I was looking for. Thanks very much 🙏

      • johntash@eviltoast.org · 1 year ago

        Are these 33 different requests or do you hit the same endpoint multiple times?

        I’d probably default to every 5 minutes at most, but I guess if it’s up to the admin then it’s all good. 33 requests per minute shouldn’t be a ton of load if it’s all read requests.

          • johntash@eviltoast.org · 1 year ago

            Not without asking, but if the admin is okay with it then sure. I don’t see the point of any sort of monitoring making that many requests per minute though.

            • activistPnk@slrpnk.net · 1 year ago

              Indeed. IIUC, OP said 33 reqs/min is a ceiling and tunable on a per-target basis.

              If the target is a Cloudflare instance, you could perhaps do 300 reqs/min without even being noticed.

      • Kangie@lemmy.srcfiles.zip · 1 year ago

        I’m not worried about that 😎

        You should be. Your name will be associated with abuse forevermore.

        The admins can tell me what’s the frequency/number they’re comfortable w/ and I can reconfigure the solution.

        Or you can set some sane defaults and a timeout period. 1 request / 5 mins is fine to check if something is online and responding.

        • bahmanm@lemmy.ml (OP) · 1 year ago (edited)

          sane defaults and a timeout period

          I agree. This makes more sense.

          Your name will be associated with abuse forevermore.

          I was going to dismiss your reply as trolling 🧌, given it’s an opt-in service for HTTP monitoring. But then you made a good point on the next line!

          Let’s use such important labels where they actually make sense 🙂

    • activistPnk@slrpnk.net · 1 year ago (edited)

      That is way beyond acceptable use

      Each server would have its own acceptable use policy. Also consider the social detriment of Cloudflare nodes. We could even say @bahmanm@lemmy.ml has a moral /duty/ to overwork the Cloudflare nodes :)

      @Blaze@discuss.tchncs.de: thanks for pointing out lestat. That’s one of the very few services of this kind to be responsible enough to red-flag the Cloudflare nodes. I hope @bahmanm@lemmy.ml follows that example; though it could still be improved on.

      It’s misleading for a Tor-blocking Cloudflare node to show a 100% availability stat when, by design, it deliberately breaks availability for a number of users in an arbitrarily discriminatory fashion. https://lemmy-status.org/ and https://lemmy.fediverse.observer/stats do a bit of a disservice by not omitting or flagging #Cloudflare nodes.

  • bahmanm@lemmy.ml (OP) · 1 year ago (edited)

    Update 1

    Thanks all for your feedback 🙏 I think everybody made a valid point: the out-of-the-box configuration of 33 requests/min was excessive, and we can do better than that.

    I reconfigured timeouts and probes and tuned it down to 4 HTTP GET requests/minute out of the box - see the configuration for details.
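For context, the per-minute load is just the number of probed endpoints times the probe frequency. A quick sketch (the endpoint/interval splits shown are illustrative; the post doesn’t give the exact breakdown):

```python
def requests_per_minute(n_endpoints: int, probe_interval_s: float) -> float:
    """Total GET requests per minute a monitored instance receives."""
    return n_endpoints * 60 / probe_interval_s

# e.g. 4 hypothetical endpoints probed once a minute:
print(requests_per_minute(4, 60))   # 4.0  (the new default rate)
# versus one split that yields the old rate:
print(requests_per_minute(11, 20))  # 33.0
```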


    🌐 A pre-release version is available at lemmy-meter.info.

    For the moment, it only probes the test instances.


    I’d very much appreciate your further thoughts and feedback.