CPU vs GPU (What's the Difference?) - Computerphile

Video Statistics and Information

Reddit Comments

Extra info for the curious:

Coming from a math background, GPUs to me are just linear algebra calculators. GPUs are matrix-crunching, error-producing, massively parallelizable beasts.

But they crunch enough matrices per second that any error is probably corrected in the next frame.

Diagonal matrices, upper and lower triangular matrices, matrix decompositions, projections, transformations: all that stuff from linear algebra that you thought would never be used. GPUs do all of it.

GPU workstations can take advantage of GPUs for simulations or heavy calculations, since they can do many calculations at once. However, the largest limiting factors of these cards are the VRAM (how large an object can you physically store, and how large a system can you quickly generate and calculate on) and the accuracy. Scientific GPUs (like the old Tesla K40 or the newer Tesla K80) have 12 GB and 24 GB of VRAM respectively, but are double precision machines with much, much higher rates of error detection than something like the 980 Ti or the Titan X (see the precision sketch after this comment). This is why something like the Tesla K80 will set you back $4,000.

Here is a K40 with a pen for scale. This baby does quantum mechanics simulations.

👍︎ 8 👤︎ u/Mimical 📅︎ Dec 22 2015 🗫︎ replies
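The precision point in the comment above is easy to see in a toy experiment. Here is a minimal sketch (my own illustration, not from the thread; host-only code that compiles with nvcc or any C++ compiler): rotate a unit vector a millionth of a turn, a million times. In exact arithmetic its length never changes, so any drift away from 1.0 is accumulated rounding error, and the single precision state drifts far more than the double precision one.

```
#include <cstdio>
#include <cmath>

int main() {
    // One millionth of a full turn per step.
    const double theta = 6.283185307179586 / 1e6;
    const float  cf = cosf((float)theta), sf = sinf((float)theta);
    const double cd = cos(theta),         sd = sin(theta);

    float  fx = 1.0f, fy = 0.0f;   // single precision state
    double dx = 1.0,  dy = 0.0;    // double precision state

    for (int i = 0; i < 1000000; ++i) {
        // Rotate both vectors by theta; length should stay exactly 1.
        float  tfx = fx * cf - fy * sf;
        fy = fx * sf + fy * cf;  fx = tfx;
        double tdx = dx * cd - dy * sd;
        dy = dx * sd + dy * cd;  dx = tdx;
    }
    printf("float  length drift: %g\n",
           std::fabs(std::sqrt((double)fx * fx + (double)fy * fy) - 1.0));
    printf("double length drift: %g\n",
           std::fabs(std::sqrt(dx * dx + dy * dy) - 1.0));
    return 0;
}
```

For a game rendering at 60 frames per second, the float drift is invisible and is overwritten next frame anyway; for a long-running quantum mechanics simulation, it compounds, which is the commenter's case for double precision cards.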
Captions
A graphics processor is a specialist processor that is designed to make processing of three-dimensional images more efficient than other forms of processor. It's a digital world. It's all 1s and 0s, adds and minuses, and if you do lots of adds, you can turn it into a multiply. But a graphics processor takes a very specialist workload and does it much more efficiently.

What are the fundamental differences between a central processing unit and a graphics processing unit? Well, fundamentally, we're here to put pixels on screens. At the end of the day, we are here to execute some commands whose purpose is to say that pixel on that screen is that color. And usually that's presented to us as: here's some data. Usually there's a three-dimensional model. So, in front of us is a table: there's a circle which is a few feet off the ground, it's X thick, it's got some legs, and it's at this position. There are some chairs in the room, there's me, there are the walls all around. There's a three-dimensional model. So, first of all, you get given a bunch of coordinates that say the following things are at the following places. Then you get given some more data: the chairs are lilac and the table's a sort of sludgy grey. So you get given some color information, which is what we call textures.

The geometry of the scene is usually broken up into triangles, because triangles are nice and simple. We're very simple people; we can't cope with complicated stuff. A triangle, three points, always has to be flat. You've never seen a triangle that isn't flat. So you divide complex surfaces up into triangles, and then you have some information about what colors those triangles are.

So you say: right, okay, I've got the geometry, I've got the color. What do we do next? Well, you put some lights in the scene. There are some lights in the ceiling which are shedding light in certain directions. And then you need a camera, so you say the camera is here. Now you have to do some three-dimensional geometry to work out what it looks like from the camera. And the first thing you observe is that about half the room you can't see. Phew, that's good: I don't have to calculate everything that's behind you. The only things I have to calculate are the bits you can see. Then you project it onto the two-dimensional screen, and this is what it looks like. And then you move the camera around, usually, to give a real impression of moving through the scene.

So, there's a lot of different types of calculation involved in that. First is loads and loads of three-dimensional matrix arithmetic: XYZ coordinates, sometimes four-dimensional arrays with XYZ and transparency information, and lots and lots of RGB (red, green, blue) color. So a device that's really, really good at matrix arithmetic is a good start. Floating point too, because the positions of all of these things are usually expressed as floating point. And then finally, you've got a unit at the back which says: well, I've got lots and lots of pixels to deal with, so we need to run through them and get them all into a buffer in memory. Some of that's really quite different from a CPU. A lot of three-dimensional plane equations have to be solved. For example, here's the table, here's the floor: which bits of the floor and which bits of the table can I see? You have to do a lot of matrix solving to work that one out.
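The per-vertex math described here is a small, fixed recipe: multiply a homogeneous coordinate by a 4x4 matrix, then divide by w to land on the 2D screen. A minimal sketch (names and matrix values are my own, not from the video; compiles with nvcc or any C++ compiler):

```
#include <cstdio>

struct Vec4 { float x, y, z, w; };

// Row-major 4x4 matrix times vector: one small burst of the "loads and
// loads of three-dimensional matrix arithmetic" done for every vertex.
Vec4 mul(const float m[16], Vec4 v) {
    return {
        m[0]  * v.x + m[1]  * v.y + m[2]  * v.z + m[3]  * v.w,
        m[4]  * v.x + m[5]  * v.y + m[6]  * v.z + m[7]  * v.w,
        m[8]  * v.x + m[9]  * v.y + m[10] * v.z + m[11] * v.w,
        m[12] * v.x + m[13] * v.y + m[14] * v.z + m[15] * v.w
    };
}

int main() {
    // A toy perspective projection: points further away (bigger z)
    // get squeezed toward the centre of the screen.
    const float proj[16] = {
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        0, 0, 1, 0   // w' = z, so the divide below scales by 1/z
    };
    Vec4 corner = {1.0f, 1.0f, 5.0f, 1.0f};  // a table corner 5 units away
    Vec4 clip = mul(proj, corner);
    printf("screen position: (%f, %f)\n", clip.x / clip.w, clip.y / clip.w);
    return 0;
}
```

The corner at depth 5 lands at (0.2, 0.2): five times further away, five times closer to the centre. A GPU does exactly this kind of multiply for every vertex in the scene, every frame.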
And that's the difference in the problem that gets given to us, right? The difference in the design is that we say: actually, I can do loads of this in parallel. I can do a lot of these quick calculations in parallel, because they don't depend on each other. Every time you hear the phrase "for every vertex in the geometry, do blah" or "for every pixel in the screen, do foo", you can say: well, that's a million pixels, and I can calculate them in batches of 256 or something like that. So we extract the parallelism out of the algorithm, and we design a processor that is actually very good at parallel processing.

So the difference between a CPU and a GPU, predominantly, is: yes, there are some really, really fixed-function blocks which we do very, very efficiently compared to a CPU, which does everything itself. But also, we are very, very good at extracting parallelism. If I want to multiply three floating point numbers together, I'll do it more slowly than a CPU. But if you ask me to do a million of those multiplications, the length of time it takes me to do a million will be a lot shorter than the time it takes a CPU to do a million. We don't care so much about how long anything individually takes; what we work on is the bulk throughput. It's a different end of the problem.

And of course there are blurred areas, and some people are now saying: actually, there's some sort of computing I could do that would run better on a GPU than on a CPU. So you get this whole thing called GPU computing coming along, where people are not actually doing graphics, but they're doing throughput computing. And actually, that's been quite interesting. I think one of the ones that somebody suggested was people doing Bitcoin mining with GPUs, because it's just lots and lots of maths. Yes. But also image processing. In modern devices you tend to have quite a poor lens, or a poor sensor, and you're trying to take pictures that are as good as that camera you're holding in your hand that costs thousands. That takes an awful lot of image cleanup, so there's an awful lot of computing taking place on those digital images. And it turns out that a lot of those workloads go quite well when executed on GPUs rather than on CPUs. Can it fix my bad focusing as well? That is coming, that is coming.
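Here is a minimal sketch of what "for every pixel in the screen, do foo" looks like in CUDA (my own illustration; the video shows no code, and the kernel and variable names are hypothetical). Each pixel gets its own thread, no pixel depends on any other, and the work is launched in batches of 256 threads, echoing the batching described above:

```
#include <cuda_runtime.h>
#include <cstdio>

// "For every pixel in the screen, do foo": here foo is a brightness tweak.
__global__ void brighten(float* pixels, int n, float gain) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per pixel
    if (i < n) pixels[i] *= gain;                   // fully independent work
}

int main() {
    const int n = 1920 * 1080;   // roughly the "million pixels" in question
    float* d_pixels;
    cudaMalloc(&d_pixels, n * sizeof(float));
    cudaMemset(d_pixels, 0, n * sizeof(float));

    int threads = 256;                          // batch size, as in the video
    int blocks  = (n + threads - 1) / threads;  // enough batches for every pixel
    brighten<<<blocks, threads>>>(d_pixels, n, 1.1f);
    cudaDeviceSynchronize();

    printf("launched %d batches of %d threads\n", blocks, threads);
    cudaFree(d_pixels);
    return 0;
}
```

Any single thread here runs slower than the same multiply would on a CPU core, but with thousands of them in flight at once the bulk throughput wins, which is exactly the trade-off described above.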
Info
Channel: Computerphile
Views: 781,052
Rating: 4.9141765 out of 5
Keywords: computers, computerphile, computer, science, Graphics Processing Unit (Conference Subject), Central Processing Unit (Computer Peripheral), ARM, ARM Holdings Plc (Business Operation), computer science, 3d graphics, graphics, computer graphics, computer hardware
Id: _cyVDoyI6NE
Length: 6min 38sec (398 seconds)
Published: Tue Dec 22 2015