TL;DW:

  • FSR 3 is frame generation, similar to DLSS 3. It can greatly increase FPS, up to 2-3x.

  • FSR 3 can run on any GPU, including console GPUs. They made a point about how it would be dumb to limit it to only the newest generation of cards.

  • Every DX11 & DX12 game can take advantage of this tech via HYPR-RX, which is AMD’s software for boosting frames and decreasing latency.

  • Games will start using it by early fall; the public launch will be by Q1 2024.

It remains to be seen how good or noticeable FSR 3 will be, but if it actually runs well, I think we can expect tons of games (especially on console) to make use of it.

  • Edgelord_Of_Tomorrow@lemmy.world · 1 year ago

    You’re getting downvoted, but this will be correct. DLSS frame generation looks dubious enough on dedicated hardware; doing this on shader cores means it will be competing with the 3D rendering, so it will need to be extremely lightweight to actually offer any advantage.

    • Dudewitbow@lemmy.ml · 1 year ago

      I wouldn’t say compete, as the whole concept of frame generation is that it generates more frames when GPU resources are idle or underutilized because another part of the chain is holding the GPU back from rendering more frames. It’s sort of like how I view hyperthreads on a CPU: they aren’t a full core, but they’re a thread that gets utilized when there are points in a CPU calculation that leave a resource unused (e.g., if a core is using the AVX2 unit to do some math, a hyperthread can, for example, use the ALU that might not be in use to do something else, because it’s free).

      It would only compete if the time it takes to generate one additional frame is longer than the time the GPU sits idle due to some bottleneck elsewhere in the chain.
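
      To put a number on it, here's a rough sketch of that timing condition (all figures are made-up, illustrative values, not anything AMD has published):

      ```python
      # Toy sketch: frame generation only "competes" with rendering if the
      # interpolation pass can't fit into the GPU's idle gap within a frame.
      frame_budget_ms = 16.7   # pacing target for ~60 presented FPS (hypothetical)
      render_time_ms = 9.0     # GPU busy rendering the real frame (hypothetical)
      idle_gap_ms = frame_budget_ms - render_time_ms  # GPU waiting on the CPU/game logic

      interp_cost_ms = 1.5     # assumed cost of one generated frame on shader cores

      if interp_cost_ms <= idle_gap_ms:
          print("Interpolation fits in the idle gap -> effectively a 'free' extra frame")
      else:
          print("Interpolation overlaps real rendering -> it competes for shader time")
      ```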

    • echo64@lemmy.world · 1 year ago

      You guys are talking about this as if it’s some new, super expensive tech. It’s not. The massively cost-reduced chips they throw inside TVs do a pretty damn good job these days (albeit still laggy), and there is software you can run on your computer that does compute-based motion interpolation and works just fine even on super old GPUs with terrible compute.

      It’s really not that expensive.

        • echo64@lemmy.world · 1 year ago

          Yeah, it does, which is something TV tech has to try to derive itself. TVs have to figure that stuff out. It’s actually less complicated, in a fun kind of way. But please do continue to explain how it’s more compute-heavy.

          Also, just to be very clear, TV tech also incorporates motion vectors into the interpolation; that’s the whole point. It just has to compute them by comparing frames. Games have that information encoded into various G-buffers, so it’s already available.
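
          To illustrate the difference, here's a toy sketch of the game-side path, assuming the engine hands you a per-pixel motion-vector buffer (a naive forward warp with no occlusion or hole handling; real frame generation is far more involved):

          ```python
          import numpy as np

          def warp_with_motion_vectors(frame, motion_vectors, t=0.5):
              """Reproject a frame to an intermediate time using per-pixel motion
              vectors, as an engine could expose from its G-buffers (toy example).
              frame: (H, W, 3) color; motion_vectors: (H, W, 2) pixel offsets
              toward the next frame; t: position of the generated frame in [0, 1]."""
              h, w = frame.shape[:2]
              out = np.zeros_like(frame)
              ys, xs = np.mgrid[0:h, 0:w]
              # Push each source pixel a fraction t along its motion vector.
              dst_x = np.clip((xs + t * motion_vectors[..., 0]).astype(int), 0, w - 1)
              dst_y = np.clip((ys + t * motion_vectors[..., 1]).astype(int), 0, h - 1)
              out[dst_y, dst_x] = frame[ys, xs]  # naive splat: last write wins, holes stay black
              return out
          ```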

            • echo64@lemmy.world · 1 year ago

              No. TVs do not "quite literally blend two frames." They use the same techniques as video codecs to extract rudimentary motion vectors by comparing frames, then do motion interpolation with them.

              Please, if you want to talk about this, we can talk about this, but you have to understand that you are wrong here. The Samsung TV I had a decade ago did this; it’s been standard for a very long time.

              Again, TVs do not "literally blend two frames," and if they did, they wouldn’t have the input lag problems they do with this feature, since they need a few frames of derived motion vectors to make anything look good.

              They do not need to know what is foreground or background, and they don’t need to know what’s a UI element or not; they need to know which pixels moved between two frames and generate intermediate frames that move those pixels along the estimated vectors.

              Modern engines already have this information available (it’s used for a few other things) and can provide it directly. A TV has to estimate it.
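
              For comparison, here's a toy sketch of the codec/TV-style estimation step described above: brute-force block matching between two frames, which is exactly the work an engine-provided motion-vector buffer lets you skip (illustrative only; real TV processors use much more refined and optimized methods):

              ```python
              import numpy as np

              def estimate_block_motion(prev, curr, block=16, search=8):
                  """For each block of `curr`, find the best-matching block in `prev`
                  within a small search window and return per-block (dy, dx) vectors.
                  prev/curr: (H, W) grayscale frames (toy, unoptimized example)."""
                  h, w = curr.shape
                  vectors = np.zeros((h // block, w // block, 2), dtype=int)
                  for by in range(0, h - block + 1, block):
                      for bx in range(0, w - block + 1, block):
                          target = curr[by:by + block, bx:bx + block].astype(int)
                          best, best_sad = (0, 0), np.inf
                          for dy in range(-search, search + 1):
                              for dx in range(-search, search + 1):
                                  y, x = by + dy, bx + dx
                                  if y < 0 or x < 0 or y + block > h or x + block > w:
                                      continue
                                  cand = prev[y:y + block, x:x + block].astype(int)
                                  sad = np.abs(target - cand).sum()  # sum of absolute differences
                                  if sad < best_sad:
                                      best_sad, best = sad, (dy, dx)
                          vectors[by // block, bx // block] = best
                  return vectors
              ```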

                • echo64@lemmy.world · 1 year ago

                  Rolling my eyes so hard at this entire thread.

                  You: doing this on shader units is bad! Not possible! Uses too much compute!

                  Me: this tech has existed for over a decade on TVs, and there is motion interpolation software you can get today that will do the same thing TVs do, on compute, and it works fine even on bad cards.

                  You: TVs just blend frames. This is different; it uses motion vectors!

                  Me: TVs use motion vectors. They compute them, whereas if you hook it up via AMD’s thing, you don’t need to compute them.

                  You: No, this is different, because if you hook it up via AMD’s thing you don’t need to compute them, and it can look better.

                  <— We are here.

                  You’ve absolutely lost the thread of what you are mad about. You’re now agreeing with me, but you want to fixate on this as a marker of how it’s not the same thing as TVs, even though it’s the same thing as TVs minus the motion estimation, exactly like I’ve been saying this entire time. You’re desperate to find some way to say "no, I was right" and win, even though you’ve lost the thread you were originally arguing.

                  Maybe we need to reframe this. How is this not possible, or a bad idea, to do on shader units? That’s what you were mad about. How is something that is supposedly totally different from TV tech, yet also the same as and less compute-heavy than TV tech, bad to run on shader units?