Last week, my favoured gaming news site, VGC, asked former US PlayStation boss Shawn Layden whether he thought the pursuit of more powerful consoles was still the way to go for the video games industry. His answer was not what I expected.

“We’ve done these things this way for 30 years, every generation those costs went up and we realigned with it. We’ve reached the precipice now, where the centre can’t hold, we cannot continue to do things that we have done before … It’s time for a real hard reset on the business model, on what it is to be a video game,” he said. “We’re at the stage of hardware development that I call ‘only dogs can hear the difference’. We’re fighting over teraflops and that’s no place to be. We need to compete on content. Jacking up the specs of the box, I think we’ve reached the ceiling.”

This surprised me because it seems very obvious, but it’s still not often said by games industry executives, who rely on the enticing promise of technological advancement to drum up investment and hype. If we’re now freely admitting that we’ve gone as far as we sensibly can with console power, that does represent a major step-change in how the games industry does business.

So where should the industry go now?

  • MelodiousFunk@slrpnk.net · 2 months ago

    Once upon a time, nobody could ever need more than 640kB of RAM. Every “hardware ceiling” ends up being a temporary plateau. How long that plateau lasts is anyone’s guess. They’ll chase “content” for a while, and then some form of content will demand more power for something either new or evolved, and it’ll be back to hardware races.

    Either way, as long as a market exists for dedicated gaming consoles (hi, I’m that market, zero desire to maintain a PC after 25 years in IT) they’ll stick around.

    • magiccupcake@lemmy.world · 2 months ago

      I think we are entering a different era.

      Once upon a time, shrinking to a new node brought cost reductions for the same amount of compute.

      That’s no longer really true on the bleeding-edge nodes: you can still increase compute density, but the cost of those new nodes is astronomical, so prices go up too.

      Many recent improvements are more architectural in nature, like AMD’s Zen CCD chiplets, which cut costs.

      Architectural improvements will continue to scale, but node improvements are slowing; we are right on the edge of what is physically possible with silicon.

      The improvements in games have slowed a ton too.

      Each new generation of consoles has started to reach diminishing returns for graphics. Ray tracing seems more like a technology that is being pushed to sell hardware, rather than actually improving graphics efficiently.

      The next high-compute use case might need more creative solutions than just throwing more compute at it, like eye tracking for VR (foveated rendering), which greatly reduces compute demand.
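      A rough back-of-the-envelope sketch of why that saves so much. The headset resolution, fovea fraction and periphery scale below are illustrative assumptions, not measurements:

      # Illustrative estimate of the shading work saved by eye-tracked foveated
      # rendering. All numbers are assumptions for the sake of the example.
      full_res_pixels = 2160 * 2160 * 2   # hypothetical 2160x2160-per-eye headset
      fovea_fraction = 0.10               # ~10% of the frame is near the gaze point
      periphery_scale = 0.25              # periphery shaded at quarter resolution

      foveated_pixels = (full_res_pixels * fovea_fraction
                         + full_res_pixels * (1 - fovea_fraction) * periphery_scale)

      print(f"full:     {full_res_pixels:,} shaded pixels per frame")
      print(f"foveated: {int(foveated_pixels):,} shaded pixels per frame")
      print(f"saving:   {1 - foveated_pixels / full_res_pixels:.0%}")

      With those assumptions, roughly two thirds of the per-frame shading work disappears without any new silicon.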

      • Coelacanth@feddit.nu · 2 months ago

        Ray tracing seems more like a technology that is being pushed to sell hardware, rather than actually improving graphics efficiently.

        If efficiently is the key word, then I agree with you. Ray tracing is definitely still extremely expensive as far as performance goes. But I do think we’ve also seen it add marked improvements to the graphical impact of games.

        • wewbull · 2 months ago

          …but does it add anything to the experience of playing the game?

          It certainly doesn’t affect the gameplay. You’ll still do the same things. It doesn’t enable a new game dynamic.

          All it does is push the graphical fidelity up a bit. For me, a good game can be marginally enhanced by good graphics, but a bad game is a bad game even if the graphics are stunning.

          • yonder@sh.itjust.works · 2 months ago

            In my mind, there is not much justification for ray tracing outside of niche cases. Games like BOTW and CS2 get by just fine using a combination of screen-space reflections, cubemaps and pre-baked global illumination to get good-looking reflections and lighting. Ray tracing seems most useful for games like Minecraft, where the world is completely dynamic and nothing can be baked ahead of time, though plenty of shaders already look amazing without it.

        • Noodle07@lemmy.world · 2 months ago

          Now if the games themselves could be released optimised and finished, that would actually help gaming a lot more than yet another pricier console.

  • RisingSwell@lemmy.dbzer0.com · 2 months ago

    I don’t think consoles can play all games at 4K/60fps yet, right? Especially on a TV, 4K is a noticeable difference. Maybe even go for 4K/120? Seems like there still needs to be a bit more improvement. Not sure if 8K is worth it; I’d need to find someone who knows more than I do for that.
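    For a sense of scale, here’s the raw pixel throughput those targets imply (simple arithmetic only, ignoring everything else that also scales with resolution and frame rate):

    # Pixels shaded per second at various resolution / refresh-rate targets.
    targets = {
        "1080p / 60fps": (1920, 1080, 60),
        "4K / 60fps":    (3840, 2160, 60),
        "4K / 120fps":   (3840, 2160, 120),
        "8K / 60fps":    (7680, 4320, 60),
    }
    base = 1920 * 1080 * 60
    for name, (w, h, fps) in targets.items():
        px = w * h * fps
        print(f"{name:>13}: {px / 1e6:6.0f} Mpixels/s ({px / base:.0f}x 1080p/60fps)")

    So 4K/120 is roughly 8x the raw pixel work of 1080p/60, and 8K/60 is about 16x, before counting anything else that gets more expensive with resolution.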

  • Poogona [he/him]@hexbear.net · 2 months ago

    Going from 256 triangles to 1024 triangles per model is a big deal that you can immediately see

    Going from 10 million triangles to 100 million or whatever is very subtle and nobody notices
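    A quick sanity check on why the second jump is invisible. This assumes a 4K frame and, generously, that every triangle ends up on screen:

    # Average on-screen area per triangle at 4K, for different triangle counts.
    pixels_4k = 3840 * 2160   # ~8.3 million pixels in a 4K frame

    for triangles in (256, 1024, 10_000_000, 100_000_000):
        print(f"{triangles:>11,} triangles -> {pixels_4k / triangles:>9,.2f} pixels per triangle")

    Past roughly ten million triangles they already average less than a pixel each, so extra geometry mostly disappears below the resolution of the display.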

    • fanbois [he/him]@hexbear.net · 2 months ago

      Also, the technical limitations and specifications of each console gave its games a distinct look. You can tell an SNES game from a Genesis game, a PS1 game from an N64 one. Even straight ports had their own charm, differing sometimes subtly, sometimes wildly from console to console.

      It all converged in the PS2 era and now it’s just differently branded PCs.

      • sexual_tomato@lemmy.dbzer0.com · 2 months ago

        Eh, you can still tell some PS2 games from Xbox ones, but yeah, most console generations have been “good enough” since the PS3/360 era.

    • Coelacanth@feddit.nu · 2 months ago

      Definitely. Diminishing returns applies to everything to such an extent I almost consider it a law of nature, and we’re definitely hitting that zone with both polygons and pixel density in my opinion.

  • ᴇᴍᴘᴇʀᴏʀ 帝 (OP) · 2 months ago

    I suspect the next avenue is going to be using AI to create more realistic and interactive NPCs, especially in open-world games.

    • Cruxifux@feddit.nl · 2 months ago

      I wonder how far away we are from just putting a prompt in for the kind of game you’d like to play and it just being procedurally generated for you.

      Also, when are we getting haptic suit porn games where my game station sucks my dick? Best I’ve got so far is lubing up the vacuum while I stream Baywatch on my VR headset.

    • IrateAnteater@sh.itjust.works · 2 months ago

      Even using AI for environments: plenty of open-world games will have you looking at the exact same patch of grass and the exact same trees. And getting ray-traced lighting to work at lifelike settings, running at at least 4K/90fps, will take a lot of computing horsepower. The price to do these things is still prohibitive, but game devs will absolutely try them in the future, so the console hardware will have to keep up.

      What I can see them doing is taking a more passive approach: waiting for the hardware to develop on its own and just using “off the shelf” parts instead of paying to have custom hardware developed.

  • Bookmeat@lemmy.world · 2 months ago

    There are so many features that could be enabled with more powerful compute. Off the top of my head: improved physics simulations, fixes for geometry clipping (e.g. weapons clipping through models or characters), more detailed hitboxes, and so on.

    • addie · 2 months ago

      But does that make the game more fun, or does it lower the barrier of entry for smaller studios to make high-quality games?

      Arguably, ray-tracing does lower the barrier to entry. You place lights where they really are in a scene and, boom, everything is lit perfectly. Art assets and tuning up lighting are a huge time cost in current AAA games; making that much easier might benefit gaming in general.

      Improved physics modelling might improve physics-based games, but something like Angry Birds doesn’t need a supercomputer anyway, and for most games it’s just added prettiness that greatly increases the production cost.

    • Swedneck@discuss.tchncs.de · 2 months ago

      If only that’s what they actually did. Instead they add 2x as many strands of grass and skip out on optimization because they no longer need to do it.

  • kitnaht@lemmy.world · 2 months ago

    It should go towards getting rid of fucking garbage smeary TAA, and of developers who don’t want to put optimization work into their games and instead use DLSS as a cover-all band-aid for shit.

    We shouldn’t need a 4090 to play 1080p games.

    • yonder@sh.itjust.works · 2 months ago

      I don’t know how people stand TAA; it looks awful. Whenever I get the option, I turn it off, because I would MUCH rather see aliasing than an image that looks like someone used the smudge tool in Photoshop.

  • SSJ3Marx [he/him]@hexbear.net · 2 months ago

    As much as they suck, Nintendo had the right idea when they backed off from cutting-edge graphics and focused on curating a library of high-quality first-party titles that aren’t available anywhere else. Sony and M$, by comparison, are stuck racing each other towards a cliff’s edge, selling locked-down proprietary computers that can’t be upgraded, until the day they hit their crisis of overproduction and one or both of them end up shuttering their console manufacturing the way Sega did.

    • yonder@sh.itjust.works · 2 months ago

      I think Mario Kart 8 and Super Mario 3D World have some of the best video game graphics simply because they make very efficient use of the hardware: baked lighting and good textures, rendered at the console’s native res at a locked 60fps, all on a smartphone processor (the Tegra X1).

  • Wolf314159@startrek.website · 2 months ago

    Where should the industry go?

    Maybe focus more on developing good games that are more than just good graphics. A shit game will still be a shit game at 4k and 120fps. A good game doesn’t necessarily need all that to be good. Game developers seem to have lost sight of doing more with less.

    The industry has corrupted the mindset of its consumer base with this capitalist-driven myth that you need to buy more stuff to be happy. The kids out there trolling about shit graphics and the PCMRs complaining about the lower console specs are gobbling it up. Now that one company is seeing diminishing returns, they’re considering pulling back on that growth mantra. Maybe they’ll start encouraging game development that doesn’t waste so many computing resources on schlocky, derivative, lazy content. I’m sure they’ll find some other way to convince us that in order to keep gaming, we’ll need to keep buying.

    • yonder@sh.itjust.works · 2 months ago

      I think asset quality gets prioritized in big-studio games because asset creation scales easily across many artists, and pretty assets look great in the trailers that sell the game.

      • wewbull · 2 months ago

        Right, but game studios have a massive budget problem. The cost of developing games is making every one a must-succeed title. Developing the assets scales, but so does the cost, and I expect that’s a major driver of the budget.

        Budgets need to come down and assets are probably the area that will get hit hardest if they want to preserve gameplay.

  • Cort@lemmy.world · 2 months ago

    If people agreed that the current-gen consoles were powerful enough, they wouldn’t go out and buy even more powerful computers to play the games; they’d be satisfied with your offerings.

    We’re climbing out of the uncanny valley with an upward slog through the hills of diminishing returns. If you can’t hack it, get off the mountain and make room for someone else.