• Bizarroland@kbin.social · 1 year ago

    I mean, I get that it’s a long function, line-wise, but it reads like every single line has just the minimum amount of information it needs to be legible and to make sense for it to exist.

    I would say that this is more readable than those leet programmer regex hacks that work magic in 3 lines of code but require a fucking PhD to decipher.

  • TomMasz@lemmy.world · 1 year ago

    I may put this on a slide for the Code Smells part of the Refactoring lecture I have coming up.

  • driving_crooner@lemmy.eco.br · 1 year ago

    Any guide on how to write effective logs? I’m starting to write scripts to automate some processes at my job and want to start logging so I can debug or troubleshoot them in the future.

    • nolefan33@sh.itjust.works · 1 year ago

      The most useful thing you can do for simple scripts is to never use the same log string in two locations in your code. If you reuse strings, it can become very confusing which line a specific log message was printed from. In addition, write logs that let you trace the execution of the program, down to some kind of identifier that allows you to determine (for example) the exact iteration of a loop that caused an error.
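
      For example, a minimal Python sketch of both ideas (process_record, process_all and the sample records are made up for illustration):

      import logging

      logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
      log = logging.getLogger(__name__)

      def process_record(record):
          # placeholder for the real per-record work; fails on malformed input
          return record["value"] * 2

      def process_all(records):
          log.info("starting batch of %d records", len(records))  # this exact message appears nowhere else
          for i, record in enumerate(records):
              try:
                  process_record(record)
              except Exception:
                  # the loop index pins down exactly which iteration failed
                  log.exception("failed on record %d of %d", i, len(records))

      process_all([{"value": 1}, {}])  # the second (malformed) record triggers the error log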

      • apd@programming.dev · 1 year ago

        I’d use a logger that prints the file and line number with each message, to avoid the question of “where is this log coming from?”
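
        In Python’s standard logging module, for instance, that just means putting %(filename)s and %(lineno)d in the format string (a minimal sketch):

        import logging

        logging.basicConfig(
            level=logging.INFO,
            format="%(asctime)s %(levelname)s %(filename)s:%(lineno)d %(message)s",
        )

        logging.getLogger(__name__).info("script started")
        # prints something like: 2024-01-01 12:00:00,000 INFO myscript.py:9 script started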

    • o11c@programming.dev · 1 year ago

      For one thing: don’t bother with fancy log destinations. Just log to stderr and let your daemon manager take care of directing that where it needs to go. (systemd made life a lot easier in the Linux world).
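
      In Python, for example, that just means pointing the standard logging module at stderr and nothing else (a minimal sketch):

      import logging
      import sys

      # log to stderr only; systemd/journald (or whatever supervises the process)
      # decides where that stream ultimately ends up
      logging.basicConfig(
          stream=sys.stderr,
          level=logging.INFO,
          format="%(levelname)s %(name)s: %(message)s",
      )

      logging.getLogger(__name__).info("service started")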

      Structured logging is overrated since it means you can’t just do the above.

      Per-module (filterable) logging is quite useful, but it must be automatic (use __FILE__, __name__, or whatever your language supports) or you will never actually do it. All semi-reasonable languages support some form of either macros-which-capture-the-current-module-and-location or peek-at-the-caller’s-module-name-and-location.
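
      In Python this falls out of the getLogger(__name__) convention, and individual modules can then be turned up or down by name (a minimal sketch; the myapp.* module names are made up):

      import logging

      logging.basicConfig(level=logging.WARNING, format="%(levelname)s %(name)s: %(message)s")

      # in each module: a logger automatically named after that module
      log = logging.getLogger(__name__)

      # per-module filtering: silence one noisy module, open up another
      logging.getLogger("myapp.http_client").setLevel(logging.ERROR)
      logging.getLogger("myapp.billing").setLevel(logging.DEBUG)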


      One subtle part of logging: never conditionally defer a computation that can fail. Many logging APIs ultimately support something like:

      if (log_level >= INFO) // or <= depending on how levels are numbered
          do_log(INFO, message, arguments...)
      

      This is potentially dangerous: if logging at that level is disabled, the guarded code is never exercised, and trying to enable that level later might introduce an error when the arguments are evaluated or formatted into the message. Also, while that level is disabled, any side effects of evaluating the arguments won’t happen.
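
      A concrete Python version of the trap (a minimal sketch; items and the missing key are made up): the guarded call below looks fine until someone turns DEBUG on.

      import logging

      logging.basicConfig(level=logging.INFO)
      log = logging.getLogger(__name__)

      items = {}
      if log.isEnabledFor(logging.DEBUG):
          # never executed at INFO, so the KeyError hides until DEBUG is enabled
          log.debug("first item: %s", items["first"])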

      To avoid this, do one of:

      • never use the if-style deferring, internally or externally. Instead, squelch only the I/O. This can have a significant performance cost (especially at the DEBUG level), which is why the if-style API exists in the first place.
      • ensure that your type system can statically verify that runtime errors are impossible in the conditional block. This requires a sane language and logging library.
      • run your test suite at every log level, ensure 100% coverage of the logging code, and hope that the inevitable logic bug doesn’t have an unexpected dynamic failure.