I’ve started noticing articles and YouTube videos touting the benefits of branchless programming, making it sound like this is a hot new technique (or maybe a hot old technique) that everyone should be using. But it seems like it’s only really applicable to data processing applications (as opposed to general programming) and there are very few times in my career where I’ve needed to use, much less optimize, data processing code. And when I do, I use someone else’s library.

How often does branchless programming actually matter in the day to day life of an average developer?

  • Lanthanae@lemmy.blahaj.zone
    1 year ago

    It matters if you develop compilers 🤷.

    Otherwise? Readability trumps the minute performance gain almost every time (and that’s assuming your compiler won’t automatically do branchless substitutions for performance reasons anyway, which it probably will).
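    To illustrate that last point (a minimal sketch; the function names are mine): both versions below clamp a value to a range, and at -O2 GCC and Clang typically compile both to the same branch-free conditional-move code, so hand-writing the “branchless” form buys nothing.

    ```c
    #include <stdint.h>

    /* The "obvious" branchy clamp. */
    int32_t clamp_branchy(int32_t x, int32_t lo, int32_t hi) {
        if (x < lo) return lo;
        if (x > hi) return hi;
        return x;
    }

    /* The hand-"branchless" clamp via ternaries. Modern compilers
       usually turn BOTH versions into cmov sequences, so the rewrite
       is a readability cost for no gain. */
    int32_t clamp_branchless(int32_t x, int32_t lo, int32_t hi) {
        int32_t t = (x < lo) ? lo : x;
        return (t > hi) ? hi : t;
    }
    ```

    Compare the generated assembly of both on godbolt.org if you want to check what your particular compiler does.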

  • marcos@lemmy.world
    1 year ago

    If you want your code to run on the GPU, the viability of your code depends on it. But if you just want to run it on the CPU, it is only one of many micro-optimization techniques you can use to shave a few nanoseconds off an inner loop.

    The thing to keep in mind is that there is no such thing as “average developer”. Computing is way too diverse for it.

    • LaggyKar@programming.dev
      1 year ago

      And the branchless version may end up being slower on the CPU, because the compiler does a better job optimizing the branching version.
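      For example, the classic bit-twiddling absolute value (a toy sketch; whether it actually beats the plain version is exactly what you’d have to measure):

      ```c
      #include <stdint.h>

      /* Straightforward branchy absolute value. */
      int32_t abs_branchy(int32_t x) {
          return x < 0 ? -x : x;
      }

      /* Hand-rolled branchless version via a sign-extension mask.
         Note: x >> 31 on a negative signed int is implementation-
         defined (arithmetic shift on mainstream compilers), and a
         modern compiler often emits the same or better code for the
         branchy version above, so the trick can be a net loss in
         both speed and readability. (Both share the usual caveat
         that INT32_MIN has no positive counterpart.) */
      int32_t abs_branchless(int32_t x) {
          int32_t mask = x >> 31;     /* 0 if x >= 0, -1 if x < 0 */
          return (x ^ mask) - mask;
      }
      ```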

    • Ethan@programming.devOP
      1 year ago

      If you want your code to run on the GPU, the viability of your code depends on it.

      Because of the performance improvements from vectorization, and the fact that GPUs are particularly well suited to that? Or are GPUs particularly bad at branches?

      it is only one of the many micro-optimization techniques you can do to take a few nanoseconds from an inner loop.

      How often do a few nanoseconds in the inner loop matter?

      The thing to keep in mind is that there is no such thing as “average developer”. Computing is way too diverse for it.

      Looking at all the software out there, the vast majority of it is games, apps, and websites. Applications where performance is critical, such as control systems, operating systems, databases, numerical analysis, etc., are relatively rare by comparison. So statistically speaking, the majority of developers must be working on games, apps, and websites (which is what I mean by an “average developer”). In my experience working on apps there are exceedingly few times where micro-optimizations matter (as in things like assembly and/or branchless programming, as opposed to macro-optimizations such as avoiding unnecessary looping/nesting/etc.).

      Edit: I can imagine it might matter a lot more for games, such as in shaders or physics calculations. I’ve never worked on a game so my knowledge of that kind of work is rather lacking.

    • Ethan@programming.devOP
      1 year ago

      I understand the principles, how branch prediction works, and why optimizing to help out the predictor can help. My question is more of, how often does that actually matter to the average developer? Unless you’re a developer on numpy, gonum, cryptography, digital signal processing, etc, how often do you have a hot loop that can be optimized with branchless programming techniques? I think my career has been pretty average in terms of the projects I’ve worked on and I can’t think of a single time I’ve been in that situation.
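      For what it’s worth, the textbook hot-loop case looks something like this (a toy sketch, not taken from any of those libraries): counting elements that pass a test, where adding the comparison result directly removes the data-dependent branch the predictor would otherwise have to guess.

      ```c
      #include <stddef.h>

      /* Branchy: count elements below a threshold. */
      size_t count_below_branchy(const int *a, size_t n, int t) {
          size_t count = 0;
          for (size_t i = 0; i < n; i++) {
              if (a[i] < t) count++;
          }
          return count;
      }

      /* Branchless: the comparison result (0 or 1) is added
         directly, so there is no data-dependent branch to mispredict.
         On random data this can be noticeably faster; on sorted or
         predictable data the branchy version often wins. */
      size_t count_below_branchless(const int *a, size_t n, int t) {
          size_t count = 0;
          for (size_t i = 0; i < n; i++) {
              count += (size_t)(a[i] < t);
          }
          return count;
      }
      ```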

      I’m also generally aggravated at what skills the software industry thinks are important. I would not be surprised to hear about branchless programming questions showing up in interviews, but those skills (and algorithm design in general) are irrelevant to 99% of development and 99% of developers in my experience. The skills that actually matter (in my experience) are problem solving, debugging, reading code, and soft skills. And being able to write code of course, but that almost seems secondary.

      • rustic_tiddles@lemm.ee
        1 year ago

        Personally I try to keep my code as free of branches as possible for simplicity reasons. Branch-free code is often easier to understand and easier to predict for a human. If your program is a giant block of if statements it’s going to be harder to make changes easily and reliably. And you’re likely leaving useful reusable functionality gunked up and spread out throughout your application.
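        A sketch of that idea (the zones and rates here are made up for illustration): the same decision written as an if/else chain versus as a lookup table. The table version still compares strings internally, but the logic now reads as data, and adding a case is one row rather than another branch.

        ```c
        #include <string.h>

        /* If/else chain: every new case is another branch to read
           and test. */
        double shipping_rate_branchy(const char *zone) {
            if (strcmp(zone, "domestic") == 0) return 4.99;
            else if (strcmp(zone, "europe") == 0) return 9.99;
            else if (strcmp(zone, "overseas") == 0) return 19.99;
            else return -1.0; /* unknown zone */
        }

        /* Table-driven: the decision expressed as data, with the
           "shape" of the logic visible at a glance. */
        double shipping_rate_table(const char *zone) {
            static const struct { const char *zone; double rate; } rates[] = {
                { "domestic",  4.99 },
                { "europe",    9.99 },
                { "overseas", 19.99 },
            };
            for (size_t i = 0; i < sizeof rates / sizeof rates[0]; i++) {
                if (strcmp(rates[i].zone, zone) == 0) return rates[i].rate;
            }
            return -1.0; /* unknown zone */
        }
        ```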

        Every piece of software actually is a data processing pipeline. You take some input, do some processing of some sort, then output something, usually along with some side effects (network requests, writing files, etc). Thinking about your software in this way can help you design better software. I rarely write code that needs to process large amounts of data, but pretty much any code can benefit from intentional simplicity and design.

        • Ethan@programming.devOP
          1 year ago

          I am all aboard the code readability train. The more readable code is, the more understandable and therefore debuggable and maintainable it is. I will absolutely advocate for any change that increases readability unless it hurts performance in a way that actually matters. I generally try to avoid nesting ifs and loops since deeply nested expressions tend to be awful to debug.

          This article has had a significant influence on my programming style since I read it (many years ago). Specifically this part:

          Don’t indent and indent and indent for the main flow of the method. This is huge. Most people learn the exact opposite way from what’s really proper — they test for a correct condition, and if it’s true, they continue with the real code inside the “if”.

          What you should really do is write “if” statements that check for improper conditions, and if you find them, bail. This cleans your code immensely, in two important ways: (a) the main, normal execution path is all at the top level, so if the programmer is just trying to get a feel for the routine, all she needs to read is the top level statements, instead of trying to trace through indention levels figuring out what the “normal” case is, and (b) it puts the “bail” code right next to the correctness check, which is good because the “bail” code is usually very short and belongs with the correctness check.

          When you plan out a method in your head, you’re thinking, “I should do blank, and if blank fails I bail, but if not I go on to do foo, and if foo fails I should bail, but if not I should do bar, and if that fails I should bail, otherwise I succeed,” but the way most people write it is, “I should do blank, and if that’s good I should do foo, and if that’s good I should do bar, but if blank was bad I should bail, and if foo was bad I should bail, and if bar was bad I should bail, otherwise I succeed.” You’ve spread your thinking out: why are we mentioning blank again after we went on to foo and bar? We’re SO DONE with blank. It’s SO two statements ago.
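          A minimal sketch of the two shapes the quote describes (the function and its checks are hypothetical):

          ```c
          #include <stdio.h>

          /* Nested style: the "normal" path is buried three levels deep. */
          int process_nested(FILE *f, char *buf, size_t cap) {
              int ok = -1;
              if (f != NULL) {
                  if (buf != NULL) {
                      if (cap > 0) {
                          buf[0] = '\0';   /* the actual work */
                          ok = 0;
                      }
                  }
              }
              return ok;
          }

          /* Guard-clause style: check for the improper condition and
             bail immediately, so the main flow reads top to bottom at
             a single indent level. */
          int process_guarded(FILE *f, char *buf, size_t cap) {
              if (f == NULL)   return -1;
              if (buf == NULL) return -1;
              if (cap == 0)    return -1;

              buf[0] = '\0';               /* the actual work */
              return 0;
          }
          ```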

  • FriendOfFalcons@kbin.social
    1 year ago

    I only know of a handful of cases where branchless programming is actually being used. And those are really niche ones.

    So no. The average programmer really doesn’t need to use it, probably ever.

  • Spzi@lemm.ee
    1 year ago

    The better of those articles and videos also emphasize that you should test and measure, both before and after you “improve” your code.

    I’m afraid there is no standard, one-size-fits-all answer. Trying to optimize your code might very well cause it to run slower.

    So unless you have good reasons (good as in ‘proof’) to do otherwise, I’d recommend aiming for readable, maintainable code. Which is often not optimized code.
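    A crude sketch of what “measure before and after” can look like (the two candidate functions are placeholders for the real code under test; a serious benchmark also needs warm-up runs, repetition, and realistic data):

    ```c
    #include <stdio.h>
    #include <time.h>

    /* Placeholder candidates to compare. */
    long sum_branchy(const int *a, long n) {
        long s = 0;
        for (long i = 0; i < n; i++)
            if (a[i] > 0) s += a[i];
        return s;
    }

    long sum_branchless(const int *a, long n) {
        long s = 0;
        for (long i = 0; i < n; i++)
            s += a[i] * (a[i] > 0);
        return s;
    }

    /* Crude wall-clock harness: run a function many times and
       return elapsed seconds. Even this rough measurement catches
       "optimizations" that actually make things slower. */
    double time_it(long (*fn)(const int *, long),
                   const int *a, long n, int reps) {
        clock_t start = clock();
        volatile long sink = 0;        /* keep the calls from being elided */
        for (int r = 0; r < reps; r++) sink += fn(a, n);
        (void)sink;
        return (double)(clock() - start) / CLOCKS_PER_SEC;
    }
    ```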

  • morhp@lemmynsfw.com
    1 year ago

    How often does branchless programming actually matter in the day to day life of an average developer?

    Almost never. When writing code that really has to be high performance (i.e. where you know it slows down your program), it can help to think about whether there are branches or jumps you can simplify or eliminate.

    Of course some things are branchless by design, for example GPU shaders, which need very high performance and typically do the same work regardless of input. But that’s an exception.
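    As a rough illustration of the shader style (emulated here on the CPU; in GLSL, step and mix are built-in functions): instead of branching, both candidate values are computed and one is selected arithmetically, so every lane of the GPU executes the same instructions.

    ```c
    /* GLSL-style helpers, emulated in C for illustration.
       In a real shader these are hardware-backed intrinsics. */
    float step_f(float edge, float x)      { return x < edge ? 0.0f : 1.0f; }
    float mix_f(float a, float b, float t) { return a * (1.0f - t) + b * t; }

    /* Branchy form:    if (x >= 0.5) return bright; else return dark;
       Branchless form: compute both, blend by a 0-or-1 selector. */
    float shade(float x, float dark, float bright) {
        return mix_f(dark, bright, step_f(0.5f, x));
    }
    ```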

    • nakal@kbin.social
      1 year ago

      There are few people who are smarter than a compiler. And those who use “branchless coding” probably aren’t.