- This heatsink right here? Pointless. Four RAM slots? You only need two of them. And why is there a PCI Express slot that has SLI? No one uses it. Why the heck does everybody want all of this? It's because nobody actually knows what motherboards are and how they work. So here it is, to the best of our ability: how motherboards work. So buckle up, nerds, Linus is on vacation and we're not holding back. Just like I can't hold back this segue to our sponsor, iFixit. You like to repair your own electronics instead of having to spend hundreds on expensive replacements or repair services? Learn more about iFixit's Essential Electronics Toolkit at the end of this video. (upbeat music)

- Computer power supplies output 12 volts. If you apply 12 volts to a CPU, it looks something like this. Suboptimal, for sure. (chuckles) Suboptimal, for sure. To avoid the magic smoke, we need to take that 12 volts and step it down to something
more like 1.2 volts. And this is exactly the job of voltage regulator modules, the VRMs that overclockers talk about so much. The basic circuit consists of two MOSFETs, which in this situation are basically just fancy switches, an inductor, and a diode. This first switch closes, which then charges this inductor, converting the electricity into a magnetic field. The voltage the CPU gets depends on how long the switch is closed, but the instant it is opened, the voltage to the CPU begins to drop, kinda like a drain. So this is where that diode here is very important. If both switches are open while the inductor is charged, its magnetic field will collapse, and pop goes your CPU. The diode prevents that, but it's highly inefficient, so it's important to get the second switch closed as soon as possible to avoid creating any unnecessary heat.

If you're paying attention, you might have noticed a problem here, though. Although the voltage will be bouncing around what the CPU is asking for, there are a lot of spikes, which is not great for stability, and there are two ways to reduce them. The first is to just increase how fast the MOSFETs switch. On motherboards, this is normally done at about 300,000 times per second, but even then the voltage can fluctuate more than the CPU wants, and it isn't practical to try switching a whole lot faster than that. Every single time the MOSFET switches on or off, it generates a bit of heat, and above about 150 degrees Celsius, the MOSFET's gonna die. So if we want cleaner power, we don't need faster MOSFETs, we need more of them.

The circuit we have here represents a single phase, son. Another term that overclockers love to use so much. By adding another phase, we can roughly double how clean our power is, with the benefits of additional phases scaling roughly linearly. From there, the number of phases in your motherboard is normally shown as a number like "eight plus two," which means eight phases for the CPU and two dedicated for the RAM. Let me put this away.

Multiple phases also have another benefit. Say your CPU requires 100 amps to run. With a single phase, all 100 amps would have to go directly through those components. But with two phases, only 50 amps go through each phase, meaning lower-rated and thus cheaper components can be used. You might be tempted to think, then, "well, more phases equals more better," which is true to a point. However, as you add more phases, controlling them can get more difficult, which translates into the VRM being less able to respond to changes in voltage. Like everything, then, there are trade-offs between having fewer phases with higher-quality components or more phases with lower-quality components.

This does create a rather odd situation, though. High-end motherboards normally have the most phases and the highest-efficiency components, which means the VRM temperatures should be really low, but these motherboards also have the best VRM heatsinks. Why? Purely to look cool. Mid-range boards might actually need good VRM cooling, but for anything other than extreme overclocking, the high-end board would probably be fine with no VRM heatsinks at all. It's similar to how you'd be fine without a shirt from lttstore.com, but
definitely look cooler with one.

- Now that the CPU has power, how does it talk to the rest of the system? This copper wire represents how far an electric signal can travel in a nanosecond. At 5,000 megahertz, your RAM can send data every 0.2 nanoseconds. In that amount of time, an electric signal can travel this far. So then, this is physically impossible. RAM makers have lied to you. Although on your RAM stick, in the BIOS, and in Windows, your RAM will say it's running at 5,000 megahertz or whatever, in reality it's completing 5,000 mega "transfers" per second. What's the difference? Well, since it's DDR, or double data rate RAM, transfers are done on both the rising and the falling edge of each clock cycle, so two transfers per tick of the clock. This means the sticks are actually running at 2,500 megahertz, which is why, if you've ever looked in CPU-Z, the RAM shows up as half of what everything else claims it is.

Depending on what you want to do with your motherboard, knowing how the CPU and the RAM connect can be important. Like, have you ever wondered why the manual always says to put the RAM in the second and the fourth slots on your motherboard? For this example, we'll be looking at a CPU with dual RAM channels. This is the most common configuration for consumer CPUs, but the same basic principles apply to workstation or server processors with more channels. The absolute best-case scenario for a dual-channel CPU is to simply have two RAM slots. PCB traces go directly from the controller to the RAM stick with nothing else in the way. This is why ASUS' top-tier gaming boards only have dual slots, and also why some ITX motherboards like this one can punch way above their weight class when overclocking RAM.

Moving on to other boards with four RAM slots, here you have two main options: T topology and daisy chaining. T topology is when the traces to each RAM stick on a channel are the same length, which can be great for running all four slots at relatively high speeds. But if pure speed is what you're after, then daisy chaining is the way to go. With a daisy-chain motherboard, the traces basically just go to one slot and then continue on to the next one. This can cause the speeds to be lower when all four slots are used, since the timing differences between the two sticks have to be figured out by the controller, but it also means that with only two sticks, you can run them at nearly the same speed as if there were only two slots.

Now, it is important to use the correct slots, though. If an electric signal goes through a wire and the wire suddenly ends with nothing connected to it, like this, the signal can reflect back, creating noise in the circuit. By putting the RAM sticks in slots two and four, the traces end at the RAM stick, reducing noise in the circuit and allowing for higher speeds. So basically, before you buy RAM, check if your motherboard is daisy chain or T topology, since it determines if you should get two higher-capacity sticks or four lower-capacity ones. With higher-capacity DIMMs getting cheaper and cheaper, it would now make sense for most gaming motherboards to only have two RAM slots, optimized to go as fast as possible. Our contact at ASUS agreed, but said way too many people would complain that they can't upgrade their memory in the future, even if statistically very few people actually will do that.

But even with ideal topologies and slots, like if you had the CPU right next to the RAM and used gold traces or whatever for the perfect signal, there would still be something else holding you back: the memory controller on the CPU. Everyone has accepted at this point that there's a silicon lottery, and some CPUs will naturally be able to clock higher at lower voltages, but the same applies to the memory controller on the CPU. Your luck in the silicon lottery can sometimes have the biggest effect on how fast your RAM runs.

- The CPU and RAM are now working, but we still need to be
able to connect things like the GPU, storage, and peripherals. In the past, this would be done through the chipset, with the northbridge taking care of PCIe and memory, while a southbridge handled I/O: storage, audio, USB, you know. On modern motherboards, though, the memory and some PCIe lanes are connected directly to the CPU, while the chipset handles everything else, which simplifies the layout and allows for lower latencies.

Losing a whole chip is big, but what's even more shocking is just how fast PCIe has become. With PCIe 4.0, a one-lane slot can do up to two gigabytes per second, and a 16-lane slot is able to transfer a staggering 32 gigabytes per second. That's close to the speeds of DDR4 RAM. How, how the heck can they pull this off? Simple: very expensive PCBs. Previously, it was possible for lower-end motherboards to get away with four or so layers, but now, in order to get PCIe 4.0 levels of signal quality, eight to 12 layer PCBs are basically a requirement. Given every two layers can come with a 20 to 30% cost increase for the PCB, the insane prices of modern motherboards start to make a bit more sense, even if the majority of users will struggle to saturate a PCIe 3.0 connection, let alone 4.0.

So you might have noticed a bit of a theme here. One of the most difficult parts of designing a motherboard isn't creating the best possible product, but carefully balancing actual value and perceived value. Like, this motherboard right here supports SLI, has massive VRM heatsinks, and has four DIMM slots, because those are things that people expect. It probably would be better for the majority of users if there were just two RAM slots closer to the CPU, one PCIe connector, and, you know, the price of this one down here was turned into better traces. And okay, okay, the VRM heatsink, it looks pretty cool, and I wouldn't want to give it up, but at least I know that that's a trade-off that I'm making. And hopefully, in the future, we can be more open to crazy motherboard designs that either deliver higher performance or lower cost.

Huge thanks to Buildzoid for creating the videos used as a reference for the VRM portion of this video. I've got those linked below if you want to get even more in-depth than we did here. And also thanks to JJ from ASUS for offering some insight that only a motherboard expert can. Also, let us know down in the comments if you guys want to see more "turbo nerd" edition videos, and make sure to hit like. This video is only able to go down a couple of rabbit holes, and there are so many more here. There are also so many more high-quality segues to our sponsor, iFixit.

Thank you to iFixit for sponsoring today's video. Their iFixit Essential Electronics Toolkit is a great basic kit for new users. It gives you what you need for the most essential electronics repairs. It has a compact size, includes the most popular precision bits, and it's held together with a high-density foam so you can throw it around without any of the bits falling out. It also comes with a lifetime warranty, and iFixit has a bunch of awesome videos to show you how to, you know, take apart your device and stuff. Get it today at ifixit.com/ltt.

If you like this video, maybe check out our recent video on the semiconductor shortage that is making GPUs nearly impossible to buy for a reasonable price. It won't help you buy a GPU, but at least it lets you go down that rabbit hole.
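To make the VRM numbers from the video concrete, here is a minimal Python sketch of the ideal buck-converter relationships it describes. The inductor value and load current are illustrative assumptions, not measurements from any real board:

```python
# Ideal buck converter: Vout = D * Vin, where D (the duty cycle) is the
# fraction of each switching period that the high-side MOSFET is closed.
def duty_cycle(v_in: float, v_out: float) -> float:
    return v_out / v_in

# Standard peak-to-peak inductor ripple current for a buck converter:
#   delta_I = Vout * (1 - D) / (L * f_sw)
# Switching faster shrinks the ripple, but every switching event wastes
# energy as heat in the MOSFET, which is what caps f_sw in practice.
def ripple_current(v_in: float, v_out: float, inductance: float, f_sw: float) -> float:
    d = duty_cycle(v_in, v_out)
    return v_out * (1.0 - d) / (inductance * f_sw)

# Splitting the load across N interleaved phases lets each phase use
# lower-rated (and therefore cheaper) MOSFETs and inductors.
def current_per_phase(total_amps: float, phases: int) -> float:
    return total_amps / phases

print(f"12 V -> 1.2 V needs a duty cycle of {duty_cycle(12, 1.2):.0%}")
print(f"Ripple at 300 kHz with a 150 nH coil: {ripple_current(12, 1.2, 150e-9, 300e3):.1f} A")
print(f"100 A load over 8 phases: {current_per_phase(100, 8):.1f} A per phase")
```

Note that doubling either the switching frequency or the phase count roughly halves what each component has to handle, which mirrors the "roughly linear" scaling mentioned in the video.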
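The memory and PCIe figures can be sanity-checked the same way. This sketch assumes the standard published numbers: DDR makes two transfers per clock cycle, PCIe 4.0 signals at 16 GT/s per lane with 128b/130b encoding, and signals in PCB traces propagate at very roughly half the speed of light (the 15 cm/ns figure is a common FR4 ballpark, not a measured value):

```python
# DDR = double data rate: two transfers per clock cycle, so "5000 MHz" RAM
# is really 5000 MT/s on a 2500 MHz clock (the number CPU-Z reports).
def actual_clock_mhz(megatransfers: float) -> float:
    return megatransfers / 2.0

# Rough distance a signal travels in a PCB trace during one transfer,
# assuming ~15 cm/ns propagation (about half of c in vacuum).
def cm_per_transfer(megatransfers: float, cm_per_ns: float = 15.0) -> float:
    ns_per_transfer = 1e3 / megatransfers   # 5000 MT/s -> 0.2 ns per transfer
    return cm_per_ns * ns_per_transfer

# PCIe 4.0: 16 GT/s per lane with 128b/130b encoding -> usable GB/s
# per direction, before protocol overhead.
def pcie4_gbytes_per_s(lanes: int) -> float:
    return lanes * 16e9 * (128 / 130) / 8 / 1e9

print(f"5000 MT/s RAM clock: {actual_clock_mhz(5000):.0f} MHz")
print(f"Signal travel per transfer: ~{cm_per_transfer(5000):.0f} cm")
print(f"PCIe 4.0 x16: ~{pcie4_gbytes_per_s(16):.1f} GB/s")
```

The x16 result lands just under the 32 GB/s the video quotes, with the small gap coming from the 128b/130b line encoding.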
This is a great overview of things like power phases, VRMs, etc., and I'll probably be forwarding noobs to this video when they have questions. I also like how they point out that a lot of the things enthusiasts think are absolutely necessary actually aren't, like VRM heatsinks even on motherboards paired with high-end CPUs, and so on.
It's true.
99% of consumers don't need more than a mITX board, forget an mATX or larger. 2 RAM slots, 1x NVMe, 2x SATA, 1 PCIe slot and you've covered most of the market. Add on front USB/USB-C, and you've essentially got what the vast majority of users are using.
Unfortunately ITX has space constraints and routing difficulties so mATX is probably the ideal compromise for most users.
kinda sad so many are downvoting this post. I have vaguely understood motherboards, but this is enlightening and I want to make sure they make more videos like this.
I don't currently need all the features on my ATX motherboard but I bought one anyway because:
Extensibility is nice to have. I've needed the extra RAM slots more than once before because listening to "experts" on what is enough didn't end well. As builds last longer I've simply learned to over-provision RAM capacity so I could probably get away with 2 slots, but it's nice to have 4 if you don't want to over-provision from day 1.
I'm also currently using more than 1 PCIe slot because of something as simple as a WiFi card. And it needs to be far enough away from the first connector because of the oversized GPU heat sink.
Motherboard feature segmentation encourages buying ATX boards. As you move up the product stack, the premium boards have more of everything, so if you need more of one or two features you're likely buying more of everything else that you don't need. There's little room to make smart purchasing choices when the feature set balloons in every direction.
I wanted the comfort of a debug code display for building at home so I'm already up in the overclock product segment that has both a wide VRM and some VRM heat sinks, and it's more than likely ATX. I also wanted enough SATA ports so I can add another HDD or two (currently using 4 ports) in the future, which apparently means I also "need" a boat load of USB ports and some lovely RGB lighting.
I wouldn't mind buying a more feature targeted board if it existed, but I doubt any of the cost savings would be passed on to the consumer. Instead of three similarly bloated 200 USD motherboards in different designs, we'd get 10 targeted 200 USD motherboards. Like the debug code displays disappearing from cheaper motherboard models.
The last few videos LTT has put out have been much better than their typical content, and even have normal, informative titles rather than clickbait junk.
Not sure which marketing rep told them that PCIe 4.0 requires 8 to 12 layers and is responsible for the higher motherboard prices, because it doesn't and it isn't.
There are multiple 4-layer boards with PCIe 4.0, and the overwhelming majority of PCIe 4.0 capable boards are 6 layers. For most board partners, anything below the $500 range has 6 layers max.
People expected this content at GN and complained Linus doesn't do it...now he does and they complain?
LTT video with actual title and somewhat relevant thumbnail, what a surprise.
I see Anthony, I upvote.