Hogwarts Legacy, GPU Benchmark: Obsoleting The RTX 3080 10GB

Video Statistics and Information

Reddit Comments

Ambient occlusion doesn't look good in this game. It's like it doesn't exist.

πŸ‘οΈŽ︎ 123 πŸ‘€οΈŽ︎ u/Wellhellob πŸ“…οΈŽ︎ Feb 10 2023 πŸ—«︎ replies

6650XT 8GB: 32fps

3080 10GB: 25fps

????????????????????????

πŸ‘οΈŽ︎ 259 πŸ‘€οΈŽ︎ u/4514919 πŸ“…οΈŽ︎ Feb 10 2023 πŸ—«︎ replies

That CPU bottleneck should not be normalized. We just can't rely on frame generation to increase our performance. I was getting higher FPS on my 1080 Ti even in GPU-limited scenarios at lower resolutions. With RT we get even more CPU bottlenecked, and GPUs aren't being fully utilized.

πŸ‘οΈŽ︎ 121 πŸ‘€οΈŽ︎ u/ArdaCsknn πŸ“…οΈŽ︎ Feb 10 2023 πŸ—«︎ replies

Sorry if I'm wrong, but is that CPU overhead in Nvidia drivers as bad as it looks? AMD's new cards are wildly outperforming the 4090/80/70 at 1080p, and even at 1440p ultra without ray tracing, and in some conditions even with ray tracing enabled. It's a complete wash.

I mean, I'm happy with the performance of my 4080, but considering how little effort devs are making when porting new games to PC in terms of CPU optimizations, I'm worried this doesn't bode well for the future. Maybe Nvidia will fix that CPU bottleneck in a future driver release? Or is it going to stay like that, so we'll have to rely on frame generation tech? Any input appreciated.

πŸ‘οΈŽ︎ 64 πŸ‘€οΈŽ︎ u/BNSoul πŸ“…οΈŽ︎ Feb 10 2023 πŸ—«︎ replies

Better to wait for today's day-1 patch and then test. WB claims that it fixes freezes and some performance issues. https://old.reddit.com/r/HarryPotterGame/comments/10xu3kl/day_1_patch/j7u7cpq/

πŸ‘οΈŽ︎ 38 πŸ‘€οΈŽ︎ u/Shii2 πŸ“…οΈŽ︎ Feb 10 2023 πŸ—«︎ replies

Well let’s hope the day 1 patch makes this significantly better

πŸ‘οΈŽ︎ 9 πŸ‘€οΈŽ︎ u/Narkanin πŸ“…οΈŽ︎ Feb 10 2023 πŸ—«︎ replies

One game isn't representative of general VRAM trends; it's too early to call, and this seems like abnormally high VRAM usage for a game.

You can look at games like A Plague Tale: Requiem as the opposite case; that game uses barely any VRAM. It varies.

The CPU overhead is an issue for Nvidia GPUs, but it has been for years now and they haven't done anything about it before.

The difference is that more CPU-intensive titles are coming out now vs two years ago.

πŸ‘οΈŽ︎ 56 πŸ‘€οΈŽ︎ u/TalkWithYourWallet πŸ“…οΈŽ︎ Feb 10 2023 πŸ—«︎ replies

A770 ties the 1080Ti in raster performance almost exactly. Very interesting...

Here's wishing Intel luck in catching up from 3 gens behind!

πŸ‘οΈŽ︎ 43 πŸ‘€οΈŽ︎ u/FuckM0reFromR πŸ“…οΈŽ︎ Feb 10 2023 πŸ—«︎ replies

Looks like the Harry Potter devs expect us to grab a wand and cast Engorgio on our vram

πŸ‘οΈŽ︎ 7 πŸ‘€οΈŽ︎ u/demon_eater πŸ“…οΈŽ︎ Feb 10 2023 πŸ—«︎ replies
Captions
[Music] It is Hogwarts Legacy GPU benchmark time, and boy do I have a lot of data for you. For this benchmark we have 53 GPUs and just way, way too much data, so rather than talk about it I'm going to jump into the graphs as soon as possible. I'm going to assume you already know about the Hogwarts Legacy game; apparently it's kind of a big deal based on the Steam pre-sales. So, other than to say that it's built on Unreal Engine 4, supporting DirectX 12 along with ray tracing and all the latest upscaling technologies, I'm just going to get into the testing.

I've benchmarked two sections of the game. One benchmark pass, which was the focus of our testing, took place on the Hogwarts grounds, and the second at Hogsmeade as you arrive. The CPU used is the Ryzen 7 7700X with 32 gigabytes of DDR5-6000 CL30 memory, and of course the latest Intel, AMD and Nvidia display drivers have been used.

Okay, let's get into it, starting with the 1080p medium data, where we find some really odd results. Firstly, the GeForce 40 series is well down on our graph, with the RTX 4090 capped at 136 fps. This is clearly a CPU bottleneck, which can reduce GeForce performance by around 20% relative to Radeon GPUs, and that's what we're seeing here: the 7900 XTX and 7900 XT both push past 160 fps when using the Ryzen 7 7700X. The Radeon 6000 series also performs better than the GeForce 30 series, but again that's not that unusual at 1080p; the 6800 XT, for example, was 13% faster than the RTX 3080. Further down the graph we see that the 6700 XT matched the RTX 3070 and 3070 Ti, with the 6600 XT and 5700 XT delivering just short of 100 fps on average, making them slightly slower than the 2070 Super but also slightly faster than the 2070.
Then we see the GTX 1080 Ti alongside the RX 5700, 6600 and Intel Arc A770, so pretty disappointing results there for Intel, given that they did supply a game-ready driver to us. That said, the A770 did beat the RTX 2060, which delivered 5600 XT-like performance. Then for just over 60 fps, the 1660 Super, RTX 3050 or GTX 1080 will do, and falling short of 60 fps are the GTX 1070, 1660, 1650 Super and Radeon RX 5500 XT. I feel any GPU that can't deliver at least 30 fps for the 1% lows here was unplayable, at least by my standards, so that includes the Radeon RX 6400, Intel Arc A380, and the GeForce GTX 1650, 1060 3GB and 1630.

Now, increasing the resolution to 1440p provides us with similar data to what was seen at 1080p. The GeForce GPUs are still capped at around 130 fps, while the Radeon GPUs could push as high as 167 fps in the case of the 6950 XT, which was still 10% faster than the RTX 3090 Ti, so the Radeon 6000 series is faring better than the GeForce 30 series, at least at this resolution using the medium quality settings. Having said all of that though, this time the 6800 XT was just 3% faster than the RTX 3080, while the RX 6800 beat the 3070 Ti by a 6% margin. Further down we find the 5700 XT basically neck and neck with the 2070 Super and 6650 XT, along with Intel's Arc A770; the RTX 3060 is also just there, and it's impressive to see the 5700 XT still outclassing GeForce 30 series GPUs in a new cutting-edge title. For those of you seeking a 60 fps experience at 1440p, that should be possible with the A750, RX 5700, RTX 2060 Super, or of course anything that's faster. Just falling short of that we have the RX 6600, 5600 XT, Vega 64 and the RTX 2060, and finally, anything from the GTX 1660 down is going to lead to a pretty rough experience, so I'd just recommend lowering the resolution and/or quality settings.

Now at the 4K resolution the CPU bottleneck is largely removed and we see what these GPUs are really capable of. The RTX 4090 sustained 111 fps on average, making it 12%
faster than the 7900 XTX, but remember, we are only using the medium quality settings here. The 7900 XTX was also 9% faster than the RTX 4080, while the 4070 Ti and 3090 Ti were comparable, making them around 9% faster than the 6950 XT. The RTX 3080 was also able to overtake the 6900 XT by a few frames, making it 14% faster than the 6800 XT. That said, the RX 6800 was still able to just stay ahead of the 3070 and 3070 Ti, though all did dip below 60 fps. We also saw 1% lows dip below 30 fps with the RX 5700 and RTX 3060, so any GPUs below them on this graph shouldn't be used at 4K, but I guess that really goes without saying.

Okay, so here are the ultra quality results, but with ray tracing disabled; we'll get to the RT results soon. Again, under these more CPU-limited test conditions at 1080p, we see that the Radeon GPUs perform exceptionally well. The 7900 XTX beat the RTX 4090 by an 8% margin, while the 7900 XT was 6% faster than the RTX 4080. Then we see that the 6950 XT and 4070 Ti traded blows. For the previous-generation matchups AMD does very well: the 6950 XT, for example, is 7% faster than the 3090 Ti, while the 6800 XT matched the 3080 Ti, making it 4% faster than the 10 gigabyte 3080. The Radeon RX 6750 XT did well with 83 fps, making it slightly faster than the RTX 3060 Ti and only slightly slower than the 2080 Ti and 3070.
Once again the old Radeon RX 5700 XT was really impressive, averaging 63 fps, which placed it on par with the 6600 XT and 6650 XT. The Intel Arc A770 also did quite well here, matching the GPUs just mentioned along with the RTX 3060. Then for 60 fps we find the RTX 2070 and 2060 Super are just up to the task, while the 1080 Ti, RX 5700, RX 6600, A750 and RTX 2060 all fell short. Once we get down to the GTX 1660 Ti, the game starts to become unplayable, or at the very least unenjoyable in my opinion, and there are quite a few 4 gigabyte and 6 gigabyte graphics cards that simply weren't up to the task.

Now, increasing the resolution to 1440p while still using the ultra quality preset, we find a few interesting things. Firstly, the 7900 XTX and RTX 4090 are neck and neck, while the RTX 4080 overtook the 7900 XT, delivering 15% more performance. What's really interesting to note here is the GeForce 30 series and Radeon 6000 series battle, namely the fact that the Radeon GPUs provided stronger 1% lows. The 3090 Ti, for example, was 3% faster than the 6950 XT when comparing the average frame rate, but 12% slower when comparing the 1% lows, and this is less obvious in the RX 6800 and RTX 3070 Ti comparison, where performance is basically identical. There's loads more data here, but I think for 1440p ultra gaming you're really not going to want to go below the RTX 3060 Ti and 6750 XT, so let's just move on, because we do have a lot more to go over.

Okay, so 4K gamers hoping to play Hogwarts Legacy using the ultra preset: you'll want to bring a serious GPU, especially if you don't want to rely on upscaling technology to boost performance. This isn't an issue for the RTX 4090, which spat out well over 60 fps, making it 8% faster than AMD's 7900 XTX; that said, the 1% lows weren't quite as strong in our testing. The GeForce GPUs in general really shine at 4K, and now the 3090 Ti is 18% faster than the 6950 XT. In fact, even the RTX 3080 managed to edge out the RDNA2
flagship, with the 6950 XT managing 51 fps on average. There wasn't a single previous-generation Radeon GPU that could break, or even hit, the 60 fps barrier, and for a decent experience you'll want at least the RX 6800. That said, there are a few issues here for the GeForce 30 series, namely for models only armed with 8 gigabytes of VRAM, such as the 3070 and 3070 Ti, both of which were only able to match the 6700 XT, making them a good bit slower than the 16 gigabyte RX 6800. So in my opinion, you want an absolute minimum of 12 gigabytes of VRAM for 4K ultra, and ideally 16 gigabytes would be the minimum.

Right, so now it's time to enable ray tracing, and this is where things become truly interesting. Even with RT enabled, the 7900 XTX was still able to take out the top spot in our testing at 1080p, though it was just 3% faster than the 4090; that's a serious achievement, though of course we are somewhat CPU limited at this resolution. The 7900 XT also matched the 4070 Ti, but did deliver better 1% lows, making it more comparable with the RTX 4080, which was just 5% faster. Now, the GeForce 30 series did easily beat the Radeon 6000 series with ray tracing enabled; the 3090 Ti, for example, was 21% faster than the 6950 XT, though the more mid-range previous-generation matchups were much more competitive: the 6700 XT beat the RTX 3060 Ti and even the 3070 Ti. Again, 8 gigabytes of VRAM just isn't enough here, even at 1080p, so that's brutal for those who invested big money in products like the 3070 Ti not that long ago.

Now at 1440p, we see there are actually very few GPUs that can handle ray tracing in Hogwarts Legacy, at least when using the ultra quality settings without any kind of upscaling. Although the 4090 was good for 85 fps and the 4080 79 fps, most GPUs struggled to hit even 60 fps, including the previous-generation flagship RTX 3090 Ti. Again, 12 gigabytes of VRAM is the minimum here. The 2080 Ti did okay, but 1% lows did suffer, and this was also the case for the
RTX 3080. Parts like the 3070 Ti, though, were completely broken, leading to competing parts such as the RX 6800 delivering twice the performance; but really there is no comparison here, as the 3070 Ti wasn't even remotely playable by anyone's standards, whereas the RX 6800 was playable. Not really by my standards, admittedly, but it was technically playable. Finally, at the 4K resolution, we see that the RTX 4090 was good for 61 fps, making it 33% faster than the 4080 and 65% faster than the 3090 Ti, so a very strong performance uplift there. Basically, anything less than the 4080 couldn't deliver 60 fps, with everything else dropping below 40 fps, so upscaling will be required. Gamers will also require at least 16 gigabytes of VRAM to play at 4K with ray tracing enabled.

Now, something I noticed after our initial wave of GPU testing was some strange scaling behavior in the Hogsmeade town. For whatever reason, the game appeared extremely CPU bound here despite rather low CPU utilization on all cores, so I'm not entirely sure what's going on; it'll take more time and a lot more benchmarking to work it out, something I don't have the mental capacity for right now. I've done a ridiculous amount of GPU testing here though, so let's go over the results. First, I want to compare the previously seen Hogwarts grounds data to this updated Hogsmeade data. Please note, given the incredible amount of testing involved, I wasn't able to retest all 53 GPUs, but we do have a good range of current and previous-generation models.

Okay, so once again starting with the 1080p medium data, we see that we run into a system limitation in the Hogsmeade area at around 150 fps on average, with 1% lows of around 130 fps; a somewhat different picture from the Hogwarts grounds test, but we are clearly hitting a system limitation here. We see the same limitations at 1440p, but of course more of the data is now GPU limited; basically we see from the 6950 XT and down that the graphics
card is the primary performance-limiting component. What's really interesting here is the fact that, despite imposing a system limitation sooner, the Radeon GPUs typically delivered higher frame rates in the Hogsmeade area, suggesting this test is less GPU demanding but more CPU demanding. Now at 4K there's very little difference between the two test locations, and really it's only the 7900 XTX that is performance limited in this updated Hogsmeade benchmark. But again, we are seeing a strange situation where some of the Radeon GPUs do perform better in the Hogsmeade area. The 6650 XT and 6750 XT, for example, maxed out in both test areas, so the data here is heavily GPU limited; the 6800 XT, 6950 XT and 7900 XT, though, all delivered higher frame rates in the Hogsmeade test, and I suspect the 7900 XTX would have as well if it weren't CPU limited.

Increasing the visual quality preset to ultra again sees most GPUs delivering higher frame rates in the Hogsmeade area. The 6750 XT, for example, saw a rather substantial 17% performance uplift, with a 12% uplift for the 6800 XT, but then the 6950 XT's performance was much the same in either test area, while the 7900 XT and XTX hit a wall at around 140 fps. Now this is odd: at 1440p, all GPUs actually delivered higher performance in the Hogsmeade test; the 7900 XT, for example, was 20% faster, which is a massive difference. Very strange stuff indeed. Then at 4K we see a similar trend, where the Radeon GPUs generally performed better in the Hogsmeade test.

So let's enable ray tracing and take another look. Here we're well under the 140 to 150 fps cap seen without RT enabled. The Radeon 6000 series GPUs performed much the same in both test scenes, but that wasn't true of the 7000 series, where the 7900 XT was 9% faster in Hogsmeade and the 7900 XTX 13% faster. It's a similar story at 1440p: the Radeon 6000 series all delivered the same performance in both test areas, while the 7900 XT was 9% faster in Hogsmeade and the 7900 XTX 8% faster. Then
at 4K, the Radeon GPUs are all pretty much wiped out, as here the 7900 XTX was only good for just over 30 fps. So let's move on to see how the GeForce GPUs scale between these two test areas.

Now, with the GeForce GPUs we're seeing a hard cap on performance at 130 fps in Hogsmeade, not radically different from the initial test, but it's interesting to see the RTX 3060 and 3070 performing better here despite everything else hitting an fps wall a little bit sooner. Similar trends are seen at 1440p: a number of the lower-end GPUs did perform better in the Hogsmeade test, though this time the maximum limit on performance was much the same. Then at 4K we start to see some really strange scaling behavior. In the original Hogwarts grounds testing, the RTX 4080 was a good bit faster than the 4070 Ti, and the 4090 a good bit faster than the 4080; however, when testing in Hogsmeade we don't see that, and although the frame rates were higher at the lower resolutions, the data does look to be CPU limited, which is quite odd.

Now, as we move on to the ultra quality testing, we see some more strange scaling behavior. The Hogwarts grounds results show fairly consistent scaling at 1080p as we increase the GPU power, but Hogsmeade runs into a performance cap quite quickly at 110 fps, and we see from the RTX 3080 10 gigabyte to the RTX 4090 that there's really no change in performance. As we move to 1440p, the high-end GeForce 30 series GPUs all top out at around 95 fps, yet despite that, the 4080 and 4090 were able to render 108 and 109 fps respectively. It's very odd behavior, and it's even more confusing given that no single core was maxed out on our Ryzen 7 7700X and overall CPU utilization was actually quite low, suggesting the game needs a performance patch to better optimize performance. More strange data can be seen at 4K in the Hogsmeade test: again, the GeForce 30 series, along with the 4070 Ti, are all limited to around 55 fps, while the 4080 was good for 72 fps and the 4090 85 fps. Now
with ray tracing enabled we find more interesting results. Previously, in the Hogwarts grounds testing, the 3070 and 3080 were able to deliver playable performance at 1080p, but in the Hogsmeade area we were faced with constant pauses; forget about frame stutters, performance was completely broken here, and this is due to exceeding VRAM limits. It's odd though, as performance in general for products with 12 gigabytes of VRAM, like the RTX 3060 and 3080 Ti, was actually better, so it's as if the ray tracing demand is slightly lower in the Hogsmeade test but the demand on memory capacity is higher, which might actually explain some of the odd results we've seen when testing between these two locations. Now at 1440p, we found previously that 8 gigabyte graphics cards couldn't handle the ultra quality settings with ray tracing, and in the Hogsmeade area we also find that 10 gigabyte models like the RTX 3080 aren't sufficient either; beyond that though, scaling is fairly similar from the 3080 Ti and up. Then finally at 4K we're now seeing very similar results for all GPUs in both test areas, again suggesting that memory capacity is at more of a premium in the Hogsmeade test, and it's not until we hit the 4K resolution that the Hogwarts grounds data starts to look very similar.

Now here's a look at how the Radeon and GeForce GPUs compare in the Hogsmeade test, and the results are remarkably different from what was seen previously, though it's also very clear there's a serious system limitation here. The 1440p scaling for both the GeForce and Radeon GPUs is similar to what we saw at 1080p, and again the 7900 XTX and XT take out the top spots, followed by the 6950 XT and 6800 XT, made possible by the Nvidia driver overhead issue, which hurts performance when CPU limited. Now at 4K we're getting to the point where performance is GPU limited, allowing the 4080 and 4090 to match the 7900 XTX, but of course we are still quite heavily system limited here.

Increasing the visual quality preset to ultra
but using the 1080p resolution again shows the Radeon GPUs with a strong performance advantage; here the 7900 XTX was 27% faster than the 4090, for example, so clearly something is amiss. The margins remain similar at 1440p; again the 7900 XTX and 7900 XT have a clear performance advantage, brought about by a system limitation, likely some sort of CPU-related issue. Finally, at 4K the 4090 is able to take the lead, nudging out the 7900 XTX, which now sits between the 4090 and 4080.

Even with ray tracing enabled, the 7900 XT and 7900 XTX were faster than the 4090 at 1080p, yet the 6950 XT was slower than even the RTX 3080 Ti. It's also really interesting to note that the 8 gigabyte RTX 3070 was crippled here, rendering just 17 fps with 1% lows of 5 fps, so the game was unplayable and completely broken, and the 3070 Ti would have suffered the same fate. The 6800 XT, on the other hand, was able to deliver 55 fps with 44 fps for the 1% lows, and was surprisingly smooth and enjoyable. It's also worth noting that the RX 6800 would offer a similar experience here, as it also packs a 16 gigabyte VRAM buffer, and with the help of FSR 2 you could probably get up around 60 fps, whereas no degree of DLSS is likely to help the RTX 3070, especially given that the 10 gigabyte RTX 3080 is also crippled here. This is certainly a comparison that I'd like to look at in more detail, perhaps for an RTX 3070 versus RX 6800 revisit, which we could do in the not too distant future.

Now, with ray tracing enabled, the 4090 and 4080 were able to overtake the 7900 XTX at 1440p, and here the 6950 XT was crushed by the 3080 Ti. Again, the older GeForce 30 series GPUs with less than 12 gigabytes of VRAM really do suffer; for example, the RTX 3080 saw 1% lows of just 4 fps while the 6800 XT was good for 33 fps, so a bloodbath there for Nvidia's previous generation. Then at 4K it's a clear win for the RTX 4090, and even the RTX 3090 was faster than the 7900 XTX, and these are certainly more like the
results we'd expect to see with ray tracing enabled. Again, we're seeing that 16 gigabytes of VRAM is really the minimum here, as even the 4070 Ti struggled with frame pacing, seeing an 81% discrepancy between the 1% lows and the average frame rate, whereas the 7900 XT saw less than half that margin.

Now, you might think many of these results were heavily CPU limited, and that had we tested with a Core i9-13900K they'd look a lot different, but I don't believe that's actually the case. I did run a few tests with the 7900 XTX and RTX 4090 using the Core i9 processor and found pretty similar scaling behavior. That said, the utilization on the Core i9 part did look quite different, so that's interesting: in the Hogsmeade area we saw two cores almost always maxed out, but despite that, the frame rates were much the same as what we saw with the 7700X. It's clear, though, that the game has some performance-related issues, so it's hard to make any concrete conclusions at this point, other than to say we're hoping for some performance-related patches in the not too distant future. It's also worth noting that AMD should have an updated driver release next week, which should include official Hogwarts Legacy support, so be on the lookout for that if you own a Radeon GPU. As it stands right now, both Nvidia and Intel already have Hogwarts Legacy-ready drivers, and we used those for our testing in this video.

As a side note for those of you using a GeForce 40 series GPU: if frame rates are much higher than what's been reported here, make sure frame generation hasn't just decided to turn itself on without telling you. I ran into that issue. Although I never tried frame generation, or even DLSS for that matter, whenever I installed a GeForce 40 series GPU the results were unexpectedly high, and this is because frame generation had turned itself on despite the fact that it was grayed out, and even DLSS was grayed out and not turned on. I'm not sure why it defaulted to turning that on,
but in my testing it did. The fix here was to enable DLSS and then enable frame generation, then disable frame generation and DLSS. Again, although frame generation was enabled for my testing initially, the option was completely grayed out for me; it wasn't until I manually enabled it and then disabled it that I was running at the native resolution and frame rate, so be aware that that can be an issue.

Also, for those of you facing regular stuttering, there's a wide range of reasons why that could be, but one of them could be system memory. I found that the game plays best with 32 gigabytes of RAM; it's certainly playable with 16 gigabytes, but I did notice a lot more stuttering, so be aware of that. Another big issue is of course VRAM usage, although this is easier to address by reducing things like texture quality, or disabling ray tracing, which is a massive VRAM hog in this title. I'm not willing to say that the GeForce RTX 4070 Ti and its 12 gigabyte VRAM buffer is a disaster just yet, as performance here could be addressed with a game patch, but at the very least Hogwarts Legacy has highlighted why 12 gigabytes of VRAM on an $800 GPU is a bad idea, and at best it will age as well as what we've seen here with the RTX 3070 and 3070 Ti.

It was also interesting to see the Radeon RX 6800 XT smashing the RTX 3070 by a 31% margin at 1440p using the ultra quality preset, and then by a 63% margin at 4K. Meanwhile, with ray tracing enabled, the 6800 XT was good for 55 fps at 1080p while the RTX 3070 was good for just 17 fps. I did warn that parts like the RX 6800 and its 16 gigabytes of VRAM would age better than the RTX 3070 before too long, and we're certainly seeing evidence of that now; this isn't the first time, mind you, but this is very clear evidence. Of course, you can enable upscaling technology such as DLSS, XeSS or FSR 2, though those technologies will only take you so far in the face of a CPU bottleneck; perhaps this could be a workaround for GPUs lacking VRAM, though you'd
likely have to use the performance modes, and they typically suck when it comes to image quality. Anyway, provided you have enough RAM and VRAM along with a decent CPU, and you've dialed in the appropriate level of visuals for your GPU, the game should play quite well, and it does appear to be a very high quality game. I'm sure with some further optimization work it will be a great game, and a useful tool for benchmarking new PC hardware.

And with that, I am going to get some sleep. If you enjoyed this video, please do give it a like; it was a ridiculous amount of work. Subscribe, because we'll be testing this game in future CPU and GPU content, hopefully, if it gets patched up. We also have Patreon and Floatplane; either of those will give you access to exclusive behind-the-scenes content, Q&As, and an exclusive Discord server where we talk about all this testing as we're doing it with our community, and we do a live stream as well, which Tim and I get together to do. Anyway, check that out if you're interested, but if not, that's perfectly fine, and I'd like to thank you for watching this video. I'm your host Steve, see you next time. [Music]
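For reference, the figures quoted throughout the video (average frame rate, "1% lows", and "X% faster" margins) are derived from captured frametime data. As a rough illustration only — this is not Hardware Unboxed's actual tooling, and the exact definition of 1% lows varies between reviewers — here is one common way to compute them from a frametime log:

```python
# Hypothetical sketch of common benchmark metrics; the function names and the
# "slowest 1% of frames" convention for 1% lows are assumptions, not a
# description of any specific reviewer's methodology.

def fps_metrics(frametimes_ms):
    """Return (average fps, 1% low fps) from a list of frametimes in milliseconds."""
    n = len(frametimes_ms)
    # Average fps: total frames divided by total capture time in seconds.
    avg_fps = n / (sum(frametimes_ms) / 1000.0)
    # 1% lows: average fps computed over the slowest 1% of frames only,
    # so brief pauses drag this number down far more than the average.
    worst = sorted(frametimes_ms, reverse=True)[:max(1, n // 100)]
    low_fps = len(worst) / (sum(worst) / 1000.0)
    return avg_fps, low_fps

def percent_faster(a_fps, b_fps):
    """Margin of A over B, as quoted in the video (e.g. '13% faster')."""
    return (a_fps / b_fps - 1.0) * 100.0
```

Note that some tools instead report the 1st-percentile instantaneous frame rate; averaging the slowest 1% of frames, as sketched here, is just one convention, but either way it explains why a card can post a healthy average while its 1% lows collapse once VRAM is exhausted.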
Info
Channel: Hardware Unboxed
Views: 407,454
Keywords: hardware unboxed
Id: qxpqJIO_9gQ
Length: 26min 37sec (1597 seconds)
Published: Fri Feb 10 2023