We were WRONG about RAM – Or were we?

Video Statistics and Information

Reddit Comments

This video was pretty interesting. I do remember back in the day RAM speed was considered mostly a non-factor, and their replicated test shows it used to be sound advice. I wonder where the crossover point was where RAM speeds started to make a meaningful difference for most gaming scenarios? If I had to wager a guess, it was probably around 2017-2018, when systems with more than four cores became far more common.

👍︎ 138 👤︎ u/BWandstuffs 📅︎ Aug 20 2022 🗫︎ replies

Look at an incredibly well-optimised game such as Factorio: it has been known for years that RAM speed is hugely important. But as the devs note, RAM latency matters far more than bandwidth, and performance improvements can be much bigger from making the game smarter (the example in the blog is precaching, at about a 9-13% improvement) than from improving hardware.

Side note: Factorio is such a well-made game, and just reading the dev blogs gives such an insight into proper compsci-focussed game devs.

Also the game is extremely addicting.

👍︎ 125 👤︎ u/tomw2308 📅︎ Aug 20 2022 🗫︎ replies
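The precaching the comment above refers to boils down to a general pattern: when a workload is latency-bound, resolving scattered, pointer-chased data once into a flat structure and reusing it beats re-walking it every tick. A toy sketch of that general idea, with made-up names and data rather than anything from Factorio's actual code:

```python
import random

# Toy illustration of precaching: values reached through an indirection chain
# (latency-bound, cache-unfriendly) are resolved once into a flat list and reused.
# All names and numbers here are made up for illustration.

N = 200_000
links = list(range(N))
random.shuffle(links)                        # a pointer-chase-like indirection chain
values = [random.random() for _ in range(N)]

def walk_every_tick(start: int, steps: int) -> float:
    """Re-walk the indirection chain each time the total is needed."""
    total, i = 0.0, start
    for _ in range(steps):
        total += values[i]
        i = links[i]
    return total

def build_cache(start: int, steps: int) -> list[float]:
    """Precache: resolve the chain once into a contiguous list of values."""
    cache, i = [], start
    for _ in range(steps):
        cache.append(values[i])
        i = links[i]
    return cache

cached = build_cache(0, 10_000)
print(walk_every_tick(0, 10_000), sum(cached))   # same total, but the cache is cheap to reuse
```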

It does, even if you lose cycles crossing clock domains.

👍︎ 10 👤︎ u/spca2001 📅︎ Aug 20 2022 🗫︎ replies

I agree with them. Anyway, I use 1600 MT/s DDR3L in my 3rd-gen i7 laptop because of the iGPU:

With the 25.6 GB/s of data bandwidth I get many legacy games to run at native screen resolution (900p) without much of an issue; lower the RAM speed and what little the iGPU can offer soon goes away.

But back in the day, with a dGPU, the MT/s were worthless; I felt back then that the computer was much more responsive with a low-latency DIMM than with a faster one.

👍︎ 6 👤︎ u/soyiago 📅︎ Aug 20 2022 🗫︎ replies
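The 25.6 GB/s figure in the comment above follows directly from the module spec: each transfer moves 64 bits (8 bytes) per channel, so peak theoretical bandwidth is transfers per second × 8 bytes × channel count. A minimal sketch, assuming the dual-channel configuration that the 25.6 GB/s figure implies:

```python
def peak_bandwidth_gb_s(mt_per_s: int, channels: int = 2, bus_bits: int = 64) -> float:
    """Peak theoretical DRAM bandwidth in decimal GB/s.

    mt_per_s: data rate in megatransfers per second (e.g. 1600 for DDR3-1600)
    channels: populated memory channels (dual channel assumed by default)
    bus_bits: width of one channel; 64 bits for standard DDR3/DDR4/DDR5 DIMMs
    """
    bytes_per_transfer = bus_bits // 8
    return mt_per_s * 1e6 * bytes_per_transfer * channels / 1e9

print(peak_bandwidth_gb_s(1600))               # 25.6 GB/s, the DDR3L-1600 figure above
print(peak_bandwidth_gb_s(1600, channels=1))   # 12.8 GB/s on a single channel
```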

It mattered in Arma 2 and Arma 3. The fact is that some games lean on RAM, and especially latency, more than others. It often does not matter when the FPS is in the hundreds, but in these games performance has been a problem due to their single-threaded nature, especially with modding. Games that pushed a lot of assets and complexity have always benefited from more RAM; games that were often used in benchmarks, not so much.

👍︎ 7 👤︎ u/BrightCandle 📅︎ Aug 21 2022 🗫︎ replies

Guy has a pretty good voice for radio or voice overs.

👍︎ 15 👤︎ u/DallasJW91 📅︎ Aug 21 2022 🗫︎ replies

From personal experience: yes, it does. The 1% FPS lows get A LOT better with faster RAM, and games will feel smoother even though FPS counters barely measure a difference. Overall, especially (but not only) when your system is a little bloated, your PC will feel significantly snappier too.

👍︎ 40 👤︎ u/I-took-your-oranges 📅︎ Aug 20 2022 🗫︎ replies
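The 1% lows mentioned above come from the frame-time trace rather than the FPS counter, which is why averages can look flat while stutter improves. One common way to compute them is to average the slowest 1% of frames; a minimal sketch with hypothetical frame times:

```python
def one_percent_low_fps(frame_times_ms: list[float]) -> float:
    """Average FPS over the slowest 1% of frames (one common '1% low' definition)."""
    worst = sorted(frame_times_ms, reverse=True)
    n = max(1, len(worst) // 100)            # slowest 1% of samples
    avg_slow_ms = sum(worst[:n]) / n
    return 1000.0 / avg_slow_ms

def average_fps(frame_times_ms: list[float]) -> float:
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

# Hypothetical capture: mostly 7 ms frames with occasional 25 ms stutters.
frames = [7.0] * 990 + [25.0] * 10
print(round(average_fps(frames)), round(one_percent_low_fps(frames)))   # ~139 average vs 40 low
```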

I wonder why AMD GPUs weren't tested, considering it's known that Nvidia's driver manages memory with the host CPU instead of using their own hardware, which translates to higher CPU usage under the same load.

👍︎ 3 👤︎ u/braiam 📅︎ Aug 21 2022 🗫︎ replies

Interesting how nobody really took into account the proportional differences in RAM bandwidth.

For example, 800 -> 1600 MT/s doubles the transfer rate, but so does 2133 -> 4266. Yet that additional 2133 MT/s is by itself more than the entire transfer rate of that DDR3-1600.

Nowadays, bandwidth increases are so much bigger in absolute terms than they were back then.

👍︎ 15 👤︎ u/Substance___P 📅︎ Aug 21 2022 🗫︎ replies
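The point in the comment above is that equal ratios hide very different absolute jumps. A quick sketch using the same data-rate pairs, with 64-bit dual-channel operation assumed just to express the deltas in GB/s:

```python
def dual_channel_gb_s(mt_per_s: int) -> float:
    # 64-bit channel, two channels, decimal GB/s
    return mt_per_s * 1e6 * 8 * 2 / 1e9

for old, new in [(800, 1600), (2133, 4266)]:
    ratio = new / old
    delta = dual_channel_gb_s(new) - dual_channel_gb_s(old)
    print(f"{old} -> {new} MT/s: {ratio:.1f}x the rate, +{delta:.1f} GB/s absolute")

# Both pairs double the data rate, but the newer jump adds +34.1 GB/s,
# more than the 25.6 GB/s that dual-channel DDR3-1600 offers in total.
```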
Captions
This video from 2013 has 2.5 million views, and yet it's straight up wrong. Can you imagine how many gamers, for all these years, we convinced RAM speed didn't matter? We now know it does matter, but here's a question for you: when did RAM speed start to matter? Were we wrong before, or did something change since babyface Linus was still using Windows 7? And if so, what? I can tell you nothing changes about these segues to our sponsor. iFixit: for repairs on the go, iFixit has you covered. Find out more about the ultra-portable Minnow and Moray sets and how they can make your repairs easier at the end of the video. [Music]

We've got a few theories about why the impact of RAM speed is different for gaming today than it was nine years ago, and while we all agree that each of them is probably true, we don't know which is the most influential. Since I'm already here, let's start with my theory, which is this: the benchmarks themselves were flawed, and so our results were also flawed.

Looking at the old video where we claimed RAM speed didn't matter, we ran a GTX 660 Ti at 1080p, max settings, with anti-aliasing enabled. We are GPU-bound out the wazoo, which isn't going to tell us much about how the RAM is behaving. I need to redo these tests with reasonable settings, so I pulled those old parts out from the archives and built a similar enough system to rerun Metro: Last Light. The numbers won't quite line up, since we don't have our 660 Ti anymore, but we can see how it does with this factory-overclocked 660.

At the original benchmark settings, we're once again looking at very little difference between the RAM speeds. Technically our 2666 kit runs over five percent faster in minimum frame rates, but that translates into less than a whole frame per second in the real world, so, yeah. If we turn off anti-aliasing, we're still seeing a similar result despite the lesser load on the GPU. It's when we switch over to more modest in-game settings that we see almost no difference across the board. What? Okay, then. I expected that the higher our frame rates, the more RAM speed would matter. I guess the result makes some sense, seeing as the higher the settings, the more data needs to pass through the CPU to the GPU. Maybe if I run a notoriously CPU-bound title at esports settings we'll get more of an impact from RAM speed. Kinda: minimum frame rates creep up as we go up in RAM speed, but it's not by a lot, and average FPS doesn't change by much. Cinebench R15 also doesn't show much of a difference in CPU rendering, although we depart from our original review here with mostly even jumps as we go up the stack in OpenGL. Did I just accidentally prove our old video right? Well, not so fast. Let's hear Jake's theory.

Games have gotten a lot bigger over the last nine years, and because of that, a ton of data has to go to the GPU very quickly. Top that off with more intense physics calculations and heavier operating systems with more going on in the background, and you've got yourself a RAM bottleneck. So my theory, or at least part of it, is that RAM speed just didn't matter as much back then, and what changed is actually the games. Which is a good point: we saw this in reverse when we dropped the settings down in Metro and got less disparity between our RAM kits. Cinebench OpenGL gave us another clue, which is in its CPU utilization: it seems like three to four threads are in use at any given time. So larger, more modern games with bigger assets that use more threads should show a greater disparity between our RAM kits. So let's test Jake's theory.
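A rough way to quantify that kind of disparity between RAM kits is to compare each scenario's fastest and slowest kit: when the spread stays within a few percent, the run says more about the GPU bottleneck than about memory. A minimal sketch with placeholder numbers, not the video's actual data:

```python
def ram_scaling_spread(results_fps: dict[str, float]) -> float:
    """Percent spread between the slowest and fastest RAM kit in one test scenario."""
    slowest, fastest = min(results_fps.values()), max(results_fps.values())
    return (fastest - slowest) / slowest * 100.0

# Placeholder results (average FPS per kit), not figures from the video.
gpu_bound_run = {"DDR3-800": 58.1, "DDR3-1600": 58.9, "DDR3-2666": 59.3}
memory_sensitive_run = {"DDR3-800": 61.0, "DDR3-1600": 79.5, "DDR3-2666": 84.0}

print(f"{ram_scaling_spread(gpu_bound_run):.1f}% spread")          # ~2%: likely GPU-bound
print(f"{ram_scaling_spread(memory_sensitive_run):.1f}% spread")   # ~38%: RAM speed is visible
```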
The most modern title we have that I can easily run on Windows 7 is Shadow of the Tomb Raider, which is both pretty large and uses multiple threads. And, well, slow RAM hurts a bit at 1080p Ultra, but there's no tangible difference going from 1600 to 2666, despite the immense cost difference when it was new. Cutting the detail level to low narrows the gap even further, which tracks with what we saw in Metro. Bigger assets need faster RAM, or a bigger bag. Hey, backorders for the backpack are up, and so are the reviews; go check them out. You can fit so many assets in there. So Jake's onto something: assets are a factor. But now we had to ask, is that all there is?

Let's hear Linus's thoughts. Since we made that video, GPUs have gotten ridiculously powerful. Even just three years later, we went from a GTX 660 with two gigs of RAM and an anemic processor to the 1060, a mid-range GPU that had up to six gigabytes of VRAM, not to mention that it was much more powerful. So my theory is that as GPUs have gotten more and more powerful, they've also become more and more demanding on the rest of your system, whether we're talking about PCI Express slot bandwidth or, you got it, system memory bandwidth. This is probably the most obvious difference between our old test setup and today's, so let's test that.

I slotted an RTX 2080 Ti into the old bench in place of the GTX 660, and oh boy, does it ever make a difference. Not just that it's so much faster than the 660, but we finally see scaling between our memory speeds. The difference in Metro is at minimum about five percent going from 1600 to 2666, and the absolute chasm between 800 and 1600 is staggering. Shadow of the Tomb Raider similarly slams the slow memory and gives the fast stuff a roughly 10 to 12 percent advantage over the typical frequency back in the day. CS:GO, despite being notoriously CPU-bound, is ironically less sensitive here, perhaps due to the settings or those old assets, with the biggest difference being in average frame rate. So Linus was onto something here, but there's a twist: Cinebench again doesn't seem to care until we boot up the OpenGL test, where we get similar scaling to the GTX 660. That test must be CPU-limited, since we've removed all the other variables, which is where Alex's theory comes into play.

Since 2013, CPUs have gone bonkers fast. We've gone from dual cores being fine and quad cores being high-end, when they couldn't even break four gigahertz, to quad cores being low-end while we're easily pushing past five gigahertz on eight-core processors. Crazy. Plus, CPUs have gotten way faster in terms of IPC, and they have way bigger caches. Keeping that many cores fed means you need faster RAM, so my theory is that in 2013 we simply didn't have enough cores and general CPU speed for RAM speed to really make a difference. Another good point, and to illustrate it, I've taken our modern Core i9-12900K and tweaked its DDR5 to be as slow as possible. Yes, that says DDR5-1600. It benchmarks close to, but still a little bit faster than, the DDR3-1600 on our old bench, but with about 2.5 times the latency. It's as close to apples-to-apples as we'll get, and we'll have the bench's parts linked down below.
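That "about 2.5 times the latency" comparison is about absolute access time rather than clock cycles: first-word CAS latency in nanoseconds is the CAS cycle count divided by the memory clock, and the memory clock is half the transfer rate. A minimal sketch; the CL values below are illustrative assumptions, not the exact timings of the kits in the video:

```python
def cas_latency_ns(mt_per_s: int, cas_cycles: int) -> float:
    """First-word CAS latency in nanoseconds: 2000 * CL / data rate (MT/s)."""
    return 2000.0 * cas_cycles / mt_per_s

# Illustrative CL values only (assumptions, not the video's kits).
print(cas_latency_ns(1600, 9))    # DDR3-1600 CL9           -> 11.25 ns
print(cas_latency_ns(1600, 22))   # DDR5 throttled to 1600  -> 27.5 ns, roughly 2.4x higher
print(cas_latency_ns(6000, 36))   # DDR5-6000 CL36          -> 12.0 ns
```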
With the RTX 2080 Ti, the difference RAM speed makes is material in Metro, but it's not as major as you might think, at less than 10 percent going from DDR3 to DDR5, and that carries over to CS:GO too. And again, there's virtually no difference in Cinebench whatsoever. We need something more modern to really see the difference here, so I fired up F1 2021 and Far Cry 6, with and without HD textures, and yeah, the difference between 1600 and 6000 is incredible, especially in Far Cry 6, where we're looking at upwards of 50 percent. But something magical happens when we look at the numbers for 4800 megatransfers per second RAM versus 6000. Look at that. Does it remind you of something? Yeah, it's more or less in that zero to five percent range, just like our old bench tests. This time the 2080 Ti is the bottleneck. There's always something, isn't there? For the sake of completeness, when we pair the GTX 660 with the modern system, it's predictably GPU-bound in every scenario except CS:GO, where the extra RAM speed did help pretty significantly in both minimum and average frame rates.

With all the testing done, it's time to answer the question: what changed? Our old testing methodology didn't actually end up playing a major role; in fact, it turns out that, anti-aliasing aside, running games at higher quality is a good benchmark for testing RAM speed. Software, on the other hand, did in fact change things, sometimes dramatically, but only to a point. If your CPU and GPU are being properly fed and fully utilized, it becomes a question of how fast that hardware is rather than how fast the RAM is, and if they aren't being fully utilized, then faster RAM will nudge them closer to peak performance. That means that faster memory today may or may not significantly improve performance, depending on the application and your hardware, but later on down the road, when you upgrade your GPU, you'll be bottlenecked far less if you have faster RAM. In other words, our old video wasn't technically wrong, but there's far more to the story than we understood back then, and almost certainly more that we didn't uncover today. What we did learn today is that, compared to CPUs, GPUs are more sensitive to changes in RAM speed, or at least the RTX 2080 Ti is more sensitive to RAM speed than our 12900K is. Our test suite was limited, so Radeon or Ryzen may scale differently. The big takeaway is this: it's the combination of software complexity, more available bandwidth, and more data-hungry GPU cores that makes memory speed matter. And it does matter.

And so does our sponsor, iFixit. You break it, iFixit. Not me, of course, but iFixit. Moray and Minnow kits are the toolkits for the tinkerer on the go. The pocket-size Minnow driver kit is only 14.99, with an easy-to-open magnetized case, a built-in sorting tray, 16 different bits, and a handle with a built-in SIM eject tool. Pretty fancy. For something slightly bigger and longer, the Moray driver kit is only 19.99 and comes with 32 different bits with extended-reach necks for digging into those hard-to-reach nooks and crannies. And all iFixit kits come with a lifetime warranty as well, so you're sure to end up in a landfill somewhere before your iFixit kit does. So check out our links in the description to get yours today.
Thanks for watching, guys. Instead of throwing you to that old video, how about checking out our much more recent video exploring whether 8 gigs of RAM is still enough in 2022? The answer might surprise you.
Info
Channel: Linus Tech Tips
Views: 1,517,659
Keywords: ram, memory, system, speed, ddr, ddr3, ddr5, mt/s, megatransfers, latency, gaming, productivity, incorrect, methodology, cpu, gpu, software, operating system, complexity, bandwidth, textures, 3D models, assets, performance, testing, benchmark
Id: AbBpmGX7K4w
Length: 9min 57sec (597 seconds)
Published: Sat Aug 20 2022