DiabloD3 13 hours ago

There isn't an RDNA5 on the roadmap, though. It's been confirmed that 4 is the last (and it was really meant to be 3.5, but grew into what is assumed to be the PS5/XSX mid-gen refresh architecture).

Next is UDNA1, a converged architecture with its older sibling, CDNA (formerly GCN).

Like, the article actually states this, but runs an RDNA 5 headline anyways.

  • greenknight 7 hours ago

    AMD does do semi-custom work.

    What's to stop Sony from saying: we don't want UDNA 1, we want an iteration of RDNA 4?

    For all we know, it IS RDNA 5... it just won't be available to the public.

    • Moto7451 6 hours ago

      And that half-step/semi-custom work can find its way back to APUs. RDNA 3.5 (the version marketed as such) is in the Zen 5 APUs with mobile-oriented improvements. It wouldn't surprise me if a future APU gets RDNA 5. GCN had this sort of APU/console relationship as well.

  • blasphemers 6 hours ago

    Maybe read the article before commenting on it, it's not that long.

    "Big chunks of RDNA 5, or whatever AMD ends up calling it, are coming out of engineering I am doing on the project"

  • cubefox 12 hours ago

    It's just a name. I'm sure this is all pretty iterative work.

    • dragontamer 10 hours ago

      UDNA isn't a name but instead a big shift in strategy.

      CDNA was for HPC / supercomputers and data centers. GCN was always a better architecture than RDNA for that.

      RDNA itself was trying to be more NVidia-like: fewer FLOPs but better latency.

      Someone is getting the axe. Only one of these architectures will win out in the long run, and the teams will also converge, allowing AMD to consolidate engineers on improving the same architecture.

      We don't know yet what the consolidated team will release. But it's a big organizational shift that will surely affect AMD's architectural decisions.

      • timschmidt 7 hours ago

        My understanding was that CDNA and RDNA shared much if not most of their underlying architecture, and that the fundamental differences had more to do with CDNA supporting a greater variety of numeric representations to aid scientific computing, whereas RDNA really only needed fp32 for games.

        • dragontamer 4 hours ago

          Who told ya that??

          CDNA wavefronts are 64 work-items wide. And CDNA1, I believe, even executed them as 16 lanes repeated over 4 clock ticks (i.e. the minimum latency of every operation, even add or xor, was 4 clock ticks). It looks like CDNA3 might not do that anymore, but that's still a lot of differences...

          RDNA actually executes 32 lanes at a time, one operation per clock tick. It's a grossly different architecture.

          That doesn't even get to Infinity Cache, 64-bit support, AI instructions, Raytracing, or any of the other differences....
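
          To make the width difference concrete, here's a minimal HIP sketch (my own illustration, not AMD code) of a wavefront-wide sum. warpSize is a HIP builtin that is 64 on CDNA/GCN and 32 on RDNA, so the same source lowers to a different number of shuffle steps on each family:

            #include <hip/hip_runtime.h>

            // Wavefront-wide sum. The loop runs 6 shuffle steps when
            // warpSize == 64 (CDNA/GCN) and 5 when warpSize == 32 (RDNA).
            __device__ float wave_reduce_sum(float v) {
                for (int off = warpSize / 2; off > 0; off /= 2)
                    v += __shfl_down(v, off);  // add the value from the lane `off` slots over
                return v;                      // lane 0 ends up with the wavefront total
            }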

        • sharpneli an hour ago

          CDNA is based on the older GCN arch, so CDNA and RDNA share about as much with each other as the pre-RDNA GPUs and the RDNA ones do.

whatever1 13 hours ago

PS5 was almost twice as fast as the PS4 Pro, yet we did not see the generational leap we saw with previous major releases.

It seems that we are at the stage where incremental improvements in graphics require exponentially more computing capability.

Or the game engines have become super bloated.

Edit: I stand corrected; in previous cycles we had orders-of-magnitude improvements in FLOPS.

  • pjmlp 12 hours ago

    One reason was backwards compatibility: studios were already putting lots of money into PS4 and XBox One, so PS5 and XBox X|S (two additional SKUs) were already too much.

    Don't forget that one reason studios tend to favour consoles has been uniform, predictable hardware, and that is no longer the case.

    When middleware becomes the default option, it is relatively hard to have game features that are hardware-specific.

  • cosmic_cheese 12 hours ago

    Less effort going into optimization is also a factor. On average, games are a lot less optimized than they used to be. The expectation seems to be that hardware advances will fix deficiencies in performance.

    This doesn't affect me too much, since my backlog is long and by the time I play games they're old enough that current hardware trivializes them, but it's disappointing nonetheless. It almost makes me wish for a good decade or so of performance stagnation to curb this behavior. Graphical fidelity is well past the point of diminishing returns anyway.

    • martinald 11 hours ago

      We have had a decade of performance stagnation.

      Compare PS1 with PS3 (just over 10 years apart).

      PS1: 0.03 GFLOPS (approx, given it didn't really do FLOPS per se)
      PS3: 230 GFLOPS

      Nearly 1000x faster.

      Now compare PS4 with PS5 pro (also just over 10 years apart):

      PS4: ~2 TFLOPS
      PS5 Pro: ~33.5 TFLOPS

      A bit over 10x faster. So the speed of improvement has fallen dramatically.

      Arguably you could say the real drop in optimization happened in that PS1 -> PS3 era: everything went from hand-optimized assembly code to (generally) higher-level languages and abstracted graphics frameworks like DirectX and OpenGL. No one noticed because we had 1000x the compute to make up for it :)

      Consoles/games got hit hard first by crypto and now by AI needing GPUs. I suspect that without that we'd have vastly cheaper and vastly faster gaming GPUs, but when you're making boatloads of cash off crypto miners and then AI, the rate of progress for gaming fell dramatically (most of the innovation, I suspect, went into high VRAM, memory controllers, and datacentre-scale interconnects).

      • Dylan16807 2 hours ago

        You divided 230 by .03 wrong (that ratio is closer to 8000x than 1000x), but you underestimated the PS1 by a lot anyway. The CPU does 30 MIPS, but the geometry engine does another 60 MIPS, and the GPU fills 30 or 60 million pixels per second with multiple calculations each.

        • deaddodo an hour ago

          Not to mention that few developers were doing hand-optimized assembly by the time of the PSX. They were certainly hand-optimizing models and the 3D pipeline (with some assembler tuning), but C and SDKs were well in use by that point.

          Even Naughty Dog went with their own Lisp-based engine, rather than raw assembly, for optimization.

      • cosmic_cheese 11 hours ago

        Yeah, there's been a drop-off for sure. Clearly it hasn't been steep enough to stop game studios from leaning on hardware gains, though.

        One potential forcing factor may be the rise of iGPUs, which have become powerful enough to play many titles well while remaining dramatically more affordable than their discrete counterparts (and sometimes not carrying crippling VRAM limits to boot), as well as the growing sector of PC handhelds like the Steam Deck. It’s not difficult to imagine that iGPUs will come to dominate the PC gaming sphere, and if that happens it’ll be financial suicide to not make sure your game plays reasonably well on such hardware.

        • martinald 10 hours ago

          I get the (perhaps mistaken) impression that the biggest problem game developers have is making and managing absolutely enormous amounts of art assets at high resolution (textures, models, etc). Each time you increase resolution, from 576p to 720p to 1080p and now 4K+, you need a huge step up in the visual fidelity of all your assets, otherwise it looks poor.

          And given that most of these assets are human-made (well, until very recently), this requires more and more artists. So I wonder if game studios are now more like art studios with a bit of programming bolted on; before, with lower-res graphics, you maybe had one artist for every 10 programmers, and now it's flipped the other way. I feel that at some point over the past ~decade we hit an "organisational" wall with this, and very, very few studios can successfully manage teams of hundreds (thousands?) of artists effectively.

          • cosmic_cheese 10 hours ago

            That depends a lot on art direction and stylization. Highly stylized games scale up to high resolutions shockingly well even with less detailed, lower resolution models and textures. Breath of the Wild is one good example that looks great by modern standards at high resolutions, and there’s many others that manage to look a lot less dated than they are with similarly cartoony styles.

            If “realistic” graphics are the objective though, then yes, better displays pose serious problems. Personally I think it’s probably better to avoid art styles that age like milk, though, or to go for a pseudo-realistic direction that is reasonably true to life while mixing in just enough stylization to scale well and not look dated at record speeds. Japanese studios seem pretty good at this.

          • spookie 5 hours ago

            Yeah, it's flipped. Overall, it has meant studios are more and more dependent on third-party software (and thus license fees), it has led to game engine consolidation, and it causes serious attrition when they attempt to make something those engines weren't built for (non-PBR pipelines come to mind).

            It's no wonder nothing comes out in a playable state.

      • SlowTao 9 hours ago

        It is not just GPU performance; it is that visually things are already very refined. A ten-times leap in performance doesn't show as ten times the visual spectacle like it used to.

        Like all this path tracing/ray tracing stuff, yes it is very cool and can add to a scene but most people can barely tell it is there unless you show it side by side. And that takes a lot of compute to do.

        We are polishing an already very polished rock.

        • martinald 8 hours ago

          Yes, but in the PS1 days we were gaining 1000x in compute performance per decade.

          I agree that 10x doesn't move much, but that's sort of my point - what could be done with 1000x?

    • jayd16 10 hours ago

      By what metric can you say this with any confidence, when game scope and fidelity have ballooned?

      • cosmic_cheese 10 hours ago

        Because optimized games aren't completely extinct, and there are titles with similar levels of size, fidelity, and feature utilization but dramatically differing performance profiles.

        • rtpg 4 hours ago

          Given the N64-PS1 era is filled with first party games that run at like 20 fps, I'm having a hard time saying things are worse now.

          I am a bit uncomfortable with the performance/quality framing that people have set up, but I personally feel the quality floor for perf is way higher than it used to be. Though there seem to be fewer people parking themselves at "60fps locked", which felt like a thing for a while.

  • cwbriscoe 12 hours ago

    A lot of the difference went into FPS rather than improved graphics.

    • adamwk 10 hours ago

      And loading times. I think people have already forgotten how long you had to wait on loading screens, or how much faked loading (moving through a brush while the next area loads) there was on PS4.

      • SlowTao 9 hours ago

        PS4 wasn't too terrible, but jumping back to PS3... wow, I completely forgot how memory-starved that machine was. Working on it, we knew at the time, but in retrospect it was just horrible.

        With a small RAM space split hard between CPU and GPU (so no reallocation), feeding off a slow HDD that was in turn fed by an even slower Blu-ray disc, you were sitting around for a while.

      • ryao 9 hours ago

        Did you forget that on the N64, load times were near instantaneous?

    • bentt 11 hours ago

      This is correct. Also, it speaks to what players actually value.

      • ThatMedicIsASpy 11 hours ago

        I have played through CP2077 at 40, 30, and 25 fps. A child doesn't care if Zelda runs at low FPS.

        The only thing I value is a consistent stream of frames on a console.

        • adamwk 10 hours ago

          When given a choice, most users prefer performance over higher fidelity

          • teamonkey 10 hours ago

            I would like to see the stats for that.

        • jayd16 10 hours ago

          Children eat dirt. I'm not sure "children don't care" is a good benchmark.

      • LikesPwsh 11 hours ago

        Also FPS just requires throwing more compute at it.

        Excessively high detail models require extra artist time too.

    • kridsdale1 11 hours ago

      Yes, the PS5 can output 120 Hz over HDMI: a perfectly linear outlet to direct that extra compute at.

  • CoolGuySteve 9 hours ago

    The current generation has a massive leap in storage speed but games need to be architected to stream that much data into RAM.

    Cyberpunk is a good example of a game that straddled the in-between; many of its performance problems on the PS4 were due to constrained serialization speed.

    Nanite and games like FF16 and Death Stranding 2 do a good job of drawing complex geometry and textures that wouldn't have been possible on the previous generation.
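
    To illustrate what "architected to stream" means, here's a toy sketch (all names made up, and a real engine would use the platform's DMA/decompression hardware rather than std::async): keep one chunk of the world resident while a background task reads the next one off the SSD, so loading hides behind gameplay instead of behind a loading screen.

      #include <future>
      #include <vector>

      // Stand-in for an NVMe read of one world chunk (hypothetical).
      std::vector<char> load_chunk(int id) {
          return std::vector<char>(64u << 20, char(id));  // pretend 64 MiB chunk
      }

      void render_with(const std::vector<char>& chunk) { /* draw calls */ }

      void stream_world(int chunk_count) {
          std::vector<char> resident = load_chunk(0);
          for (int next = 1; next < chunk_count; ++next) {
              // Start reading the next chunk while this one is on screen.
              auto pending = std::async(std::launch::async, load_chunk, next);
              render_with(resident);
              resident = pending.get();  // swap the streamed chunk in
          }
          render_with(resident);
      }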

    • Vilian 5 hours ago

      Nanite is actively hurting performance

  • ryao 9 hours ago

    This is the result of an industry wide problem where technology just is not moving forward as quickly as it used to move. Dennard scaling is dead. Moore’s law is also dead for SRAM and IO logic. It is barely clinging to life for compute logic, but the costs are skyrocketing as each die shrink happens. The result is that we are getting anemic improvements. This issue is visible in Nvidia’s graphics offerings too. They are not improving from generation to generation like they did in the past, despite Nvidia turning as many knobs as they could to higher values to keep the party going (e.g. power, die area, price, etcetera).

  • vrighter 12 hours ago

    Twice as fast, but asked to render 4x the pixels. Do the math.
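
    Spelled out: 2x the FLOPS spread over 4x the pixels (1080p -> 4K) leaves roughly 0.5x the compute budget per pixel.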

    • SlowTao 8 hours ago

      Well you see... I got nothing.

      The path nowadays is to use all kinds of upscaling and temporal detail junk that is actively recreating late 90s LCD blur. Cool. :(

  • silisili 12 hours ago

    AFAIK, this generation has been widely slammed as a failure due to a lack of new blockbuster games. Most things that came out were either also for PS4 or remasters of said games.

    There have been a few decent-sized games, but nothing at grand scale that I can think of until GTA6 next year.

    • jayd16 11 hours ago

      There were the little details of a global pandemic and interest rates tearing through timelines and budgets.

  • ErneX 11 hours ago

    GTA VI is going to be a showcase on these consoles.

  • treyd 12 hours ago

    > Or the game engines have become super bloated.

    "Bloated" might be the wrong word to describe it, but there's some reason to believe that the dominance of Unreal is holding performance back. I've seen several discussions about Unreal's default rendering pipeline being optimized for dynamic realtime photorealistic-ish lighting with complex moving scenes, since that's much of what Epic needs for Fortnite. But most games are not that and don't make remotely effective use of the compute available to them because Unreal hasn't been designed around those goals.

    TAA (temporal anti-aliasing) is an example of the kind of postprocessing effect that gamedevs are relying on to recover performance lost in unoptimized rendering pipelines, at the cost of introducing ghosting and loss of visual fidelity.
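
    For the curious, the core of a TAA resolve is roughly the following (a simplified single-pixel sketch, not Unreal's actual implementation): reproject last frame's output through motion vectors, clamp it against the current frame's local neighborhood to limit ghosting, then blend in a small fraction of the new frame.

      struct Color { float r, g, b; };

      static float clampf(float v, float lo, float hi) {
          return v < lo ? lo : (v > hi ? hi : v);
      }

      // `history` is last frame's output, already reprojected via motion
      // vectors; nmin/nmax bound the current 3x3 neighborhood's colors.
      Color taa_resolve(Color cur, Color history, Color nmin, Color nmax) {
          const float alpha = 0.1f;  // fraction of the new frame admitted
          // Neighborhood clamp: history outside the local color range is
          // probably stale (disocclusion), so pull it back in. This step
          // is where the ghosting-vs-blur trade-off lives.
          Color h{ clampf(history.r, nmin.r, nmax.r),
                   clampf(history.g, nmin.g, nmax.g),
                   clampf(history.b, nmin.b, nmax.b) };
          // Exponential moving average over jittered frames; this is what
          // accumulates multiple effective samples per pixel over time.
          return Color{ h.r + alpha * (cur.r - h.r),
                        h.g + alpha * (cur.g - h.g),
                        h.b + alpha * (cur.b - h.b) };
      }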

    • gmueckl 11 hours ago

      This is a very one-sided perspective on things. Any precomputed solution to lighting comes with enormous drawbacks across the board. The game needs to ship the precomputed data when storage is usually already tight. The iteration cycle for artists and level designers sucks when lighting is precomputed: they almost never see accurate graphics for their work while they are iterating, because rebaking takes time away from their work. Game design becomes restricted to those limitations, too. You can't even think of having the player randomly rearrange big things in a level (e.g. building or tearing down a house) because the engine can't do it. Who knows what clever game mechanics are never thought of because of these types of limitations?

      Fully dynamic interactive environments are liberating. Pursuing them is the right thing to do.

    • mikepurvis 12 hours ago

      In principle, Epic's priorities for Unreal should be aligned to a lot of what we've seen in the PS3/4/5 generation as far as over-the-shoulder 3rd person action adventure games.

      I mean, look at Uncharted, Tomb Raider, Spider-Man, God of War, TLOU, HZD, Ghost of Tsushima, Control, Assassin's Creed, Jedi: Fallen Order / Survivor. Many of those games were not made in Unreal, but they're all stylistically well suited to what Unreal is doing.

      • kridsdale1 11 hours ago

        I agree. UE3 was made for Gears of War (pretty much) and as a result the components were there to make Mass Effect.

    • babypuncher 12 hours ago

      TAA isn't a crutch being used to hold up poor performance, it's an optimization to give games anti-aliasing that doesn't suck.

      Your other options for AA are

      * Supersampling. Rendering the game at a higher resolution than the display and downscaling it. This is incredibly expensive.

      * MSAA. This samples ~~vertices~~surfaces more than once per pixel, smoothing over jaggies. This worked really well back before we started covering every surface with pixel shaders. Nowadays it just makes pushing triangles more expensive with very little visual benefit, because the pixel shaders are still done at 1x scale and thus still aliased.

      * Post-process AA (FXAA,SMAA, etc). These are a post-process shader applied to the whole screen after the scene has been fully rendered. They often just use a cheap edge detection algorithm and try to blur them. I've never seen one that was actually effective at producing a clean image, as they rarely catch all the edges and do almost nothing to alleviate shimmering.

      I've seen a lot of "tech" YouTubers try to claim TAA is a product of lazy developers, but not one of them has been able to demonstrate a viable alternative antialiasing solution that solves the same problem set with the same or better performance. Meanwhile TAA and its various derivatives like DLAA have only gotten better in the last 5 years, alleviating many of the problems TAA became notorious for in the latter '10s.

      • flohofwoe 11 hours ago

        Erm, your description of MSAA isn't quite correct; it has nothing to do with vertices and doesn't increase vertex processing cost.

        It's more similar to supersampling, but without the higher pixel shader cost (the pixel shader still only runs once per "display pixel", not once per "sample" like in supersampling).

        A pixel shader's output is written to multiple (typically 2, 4 or 8) samples, with a coverage mask deciding which samples are written (this coverage mask is all 1s inside a triangle and a combo of 1s and 0s along triangle edges). After rendering to the MSAA render target is complete, an MSAA resolve operation is performed which merges samples into pixels (and this gives you the smoothed triangle edges).
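
        A minimal sketch of that resolve step (a plain box filter over a 4x target, grayscale for brevity; real hardware does this in fixed function):

          // Average the 4 stored samples of each pixel. A pixel fully
          // covered by one triangle carries 4 identical samples (the
          // pixel shader ran once); only edge pixels mix different
          // colors, which is where the smoothing comes from.
          void msaa_resolve(const float* samples,  // width*height*4 values
                            float* out,            // width*height values
                            int width, int height) {
              for (int p = 0; p < width * height; ++p) {
                  float sum = 0.0f;
                  for (int s = 0; s < 4; ++s)
                      sum += samples[p * 4 + s];
                  out[p] = sum * 0.25f;
              }
          }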

      • wtallis 11 hours ago

        > solves the same problem set with the same or better performance

        The games industry has spent the last decade adopting techniques that misleadingly inflate the simple, easily-quantified metrics of FPS and resolution, by sacrificing quality in ways that are harder to quantify. Until you have good metrics for quantifying the motion artifacts and blurring introduced by post-processing AA, upscaling, and temporal AA or frame generation, it's dishonest to claim that those techniques solve the same problem with better performance. They're giving you a worse image, and pointing to the FPS numbers as evidence that they're adequate is focusing on entirely the wrong side of the problem.

        That's not to say those techniques aren't sometimes the best available tradeoff, but it's wrong to straight-up ignore the downsides because they're hard to measure.

      • cubefox 11 hours ago

        Yeah. Only problem is that overly aggressive TAA implementations blur the whole frame during camera rotation. The thing that is even better than standard TAA is a combination of TAA and temporal upscaling, called TSR in Unreal. Still better is the same system but performed by an ML model, e.g. DLSS. Though this requires special inference hardware inside the GPU.

        In the past, MSAA worked reasonably well, but it was relatively expensive, doesn't apply to all forms of high frequency aliasing, and it doesn't work anymore with the modern rendering paradigm anyway.

LorenDB 13 hours ago

If the Playstation contributions are good enough, maybe RDNA4 -> RDNA5 will be just as good as RDNA3 -> RDNA4. As long as they get the pricing right, anyway.

monster_truck 11 hours ago

We've known this for a while; it's an extension of the upscaling and frame generation work AMD already did in conjunction with Sony for FSR 3 and, to a much greater extent, FSR 4. Previous articles have also highlighted their shared focus on BVH optimizations.

erulabs 12 hours ago

Excited to see how the software support for UDNA1 works out. Very hopeful we'll see some real competition to Nvidia soon in the datacenter. Unfortunately I think the risk is quite high: if AMD burns developers again with poor drivers and poor support, it's hard to see how they'll be able to shake the current stigma.

  • martinald 11 hours ago

    Take this with a pinch of salt, but the most recent ROCm release installed out of the box on my WSL2 machine and worked first time with llama.cpp. I even compiled llama.cpp from source with 0 issues. That has never happened ever in my 5+ years of having AMD GPUs. Every other time I've tried this it's either failed and required arcane workarounds, or just not worked entirely (including running on 'real' Linux).

    I feel like finally they are turning the corner on software and drivers.

brcmthrowaway 9 hours ago

Who is a better computer architect, Mark Cerny or Anand Shimpi?

  • wmf 8 hours ago

    Did we ever hear what Anand does at Apple?

lofaszvanitt 10 hours ago

Yes, but what will make use of it, when there are so few games on the platform in the current PS generation?

shmerl 11 hours ago

When will Sony support Vulkan on PS?

  • departure4885 9 hours ago

    Why would they? They have their own (two actually) proprietary graphics APIs: GNM and GNMX.

    • shmerl 9 hours ago

      I'd ask why wouldn't they? I'm not a fan of NIH or of wheel-reinvention proponents.

      • departure4885 8 hours ago

        Because if they write their own, they get to own the political/bureaucratic portion of the problem. For better or worse, they don't have to deal with the Khronos Group. They get to optimize their APIs directly against their research with AMD.

        • shmerl 7 hours ago

          That still doesn't make NIH a better approach. NIH is a dinosaur idea, really, when it comes to technology like this.

          • MBCook 3 hours ago

            Why would Vulkan, as opposed to a custom solution designed to target that hardware and games specifically, be a better solution?

            If you’re making a PS game you’re already doing tons of bespoke PS stuff. If you don’t want to deal with it there are plenty of pieces of middleware out there to help.

            Honestly these “where’s Vulkan” posts on every bit of 3D capable hardware feel like a stupid meme at this point as opposed to a rational question.

            Maybe they should just ship DX12. That’s multi-platform too.

            • shmerl 3 hours ago

              Because it won't tax developers with the need to learn yet another NIH API. Same reason any standard exists: it makes things easier for those who use it.

              Honestly any idea that defends NIH like this belongs with dinosaurs. NIH is a stupid meme, not the opposite of it.

              • soulbadguy an hour ago

                And most of the standards we have now started as something similar to NIH. Vulkan itself is an offshoot of AMD's Mantle. There are valid reasons to have a custom API, especially in a domain like game consoles, with long hardware release cycles, tight performance requirements, and legacy (PS4) code to support.

ZenithExtreme 14 hours ago

AMD’s next-gen GPUs may have some PlayStation tech inside.

  • stanac 13 hours ago

    I don't think he is employed by Sony; he works as a consultant for them. So both Sony's PS4/PS5 and AMD GPUs have his tech inside.

    • mikepurvis 12 hours ago

      So you're right, though I would never have guessed: in the PS5 hype cycle he gave that deep-dive architecture presentation and for all the world looked like a Sony spokesperson.

  • smcl 11 hours ago

    It looks like all of your comments are low-effort summaries like this. What’s going on here? Or is this a bot…

    • diggan 11 hours ago

      They're summarizing the submissions they're making. All of the summary comments are on their own submissions.