Where did Bytes Come From? - Computerphile

Captions
There are plenty of superb anecdotes about the history of the byte and so on. I suppose the earlier question, if we want to go right back to ground zero, is: who invented the word "bit"? It was a chap at, guess where, Bell Labs: John Tukey, who worked with Shannon and others a bit. I think it was he who, in a paper in 1947, actually used the word "bit", rather than "binary digit", for the first time.

So how big should your groupings of bits be, to hold the numbers you were interested in? This is the key to it all. Early computers were word-based; later on, the idea of the byte came to be associated (not synonymous, but associated) with the amount of storage we'd have to set aside for holding a character. But, you know, real programmers aren't bothered with woolly bits of text; what about numbers? The obsession of the early computer pioneers was very much: we're going to build these electronic general-purpose computers because they're interesting in their own right, but what we really want to do with them is serious applied mathematics, engineering, quantum chemistry, computer science, X-ray crystallography, radio astronomy, all good solid scientific things which need guaranteed decimal accuracy.

So, remember the rule of 3.322, which I don't have on my T-shirt this time, sorry about that (if anybody wants to produce a T-shirt with that on it, please do). 3.322 is the multiplying factor for how much more binary circuitry you need in the arithmetic unit of your computer to do things in binary arithmetic rather than decimal arithmetic. For the logic bits of the computer it's a no-brainer: it's a yes/no decision, so binary is fine there. But why not use decimal integers? Well (we can point you at the previous video on this), it's fine, but you've got to stabilise ten different voltages for the ten different levels of a decimal digit, whereas you've only got two to stabilise for binary. It's much simpler to build in binary, but you have to accept that you will need 3.322-odd times as many binary components to do arithmetic as you would in decimal. Okay, so it's accepted that we build in binary.

Now, for school use you were taught how to use logs to do your multiplies with four decimal digits of accuracy. Babbage and others wanted at least ten digits; in fact I've actually got ten-digit log tables, quite a big thick book of them (I've not got them here, they're at work). So if ten digits was deemed to be a pretty decent starting point, multiply by 3.322 and you can see that by the time you get to groupings of something like 33 bits, 36 for safety maybe, you're beginning to get the ten decimal digits of accuracy that you want.

So you look at EDSAC, the first-generation computer we've been talking about: EDSAC's only got an 18-bit word, of which 17 bits are usable, but by the time you glue two of those together in some way you're up to 35, 36 bits. Now you're talking: that's getting us the accuracy we want, at greater expense of course. Contemporaneously with EDSAC, as we now know, there was the EDVAC, and they went for 40 bits straight away, so they didn't have to double up the locations. More expensive to build, of course, but 40 bits, fine: that will get you 10, possibly 12, decimal digits. Even when I came to do my early work on quantum chemistry, all along you were aware of not wanting to lose precision on your decimal-digit calculations, and you took great care about rounding errors and things like that.
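As a quick check on those numbers, here is a minimal Python sketch of the rule of 3.322 (the factor is just log base 2 of 10; the example digit counts are the ones mentioned in the video, not any official standard):

    import math

    FACTOR = math.log2(10)   # ~3.3219 binary digits needed per decimal digit
    print(round(FACTOR, 4))  # 3.3219

    for digits in (4, 10, 12):
        bits = digits * FACTOR
        print(f"{digits} decimal digits -> {bits:.1f} bits, round up to {math.ceil(bits)}")

    # 4 digits (school log tables)    -> 14 bits
    # 10 digits (Babbage's benchmark) -> 34 bits, hence 35-36 bit groupings
    # 12 digits                       -> 40 bits, hence EDVAC's 40-bit word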
I once looked at some examples from the Cambridge Diploma, because for many years, from about the early fifties onwards, Cambridge did a postgraduate diploma in computing based around EDSAC. A lot of famous computer scientists went there, converting themselves, if you like, by doing the Cambridge Diploma, and an awful lot of those diploma papers were numerical analysis. There was the odd question in there which was, shock horror, non-numerical, covering things like algorithms for the Travelling Salesman problem, but then the next question would be, you know: how could you invert a matrix of dimension twenty by twenty in the minimal number of operations without losing decimal precision in the fifth decimal place? All this kind of stuff. That was the background: scientists, mathematicians and engineers built computers, and they built them to do hardcore numeric calculations.

So was that all there was to it? No. On the sidelines, and treated, quite honestly (shamefacedly I say this), in many ways with a bit of derision, was commercial computing, and the company, as I'm sure you all know, that was instrumental in leading the way with that was IBM. IBM for years and years and years was the biggest computer company in the world; it's probably still in the top ten, but I'm guessing that Microsoft, Apple and Google will be bigger than IBM in terms of revenues nowadays. IBM, of course, had the best part of half a century, a good half a century, of a lead in using punched cards as a means of holding data, with machinery that sorted them, collated them, and could even do elementary additions and subtractions on the codes that were on the cards. IBM was the market leader by a mile; others tried to get in on the act and get a little bit of the action, and they did, but the industry leader was IBM. So IBM understood about real data, and they were the right people to basically start saying: look at these computers now and how they handle characters; it's pathetic. And it is: you can go back to our EDSAC video about how to print "HI". It doesn't matter whether it's printing out "HI" or "Hello World", you look at the way these early computers did characters and it was absolutely suboptimal.

The story was: well, if you don't want to fill up your word with the lots and lots of bits the real scientists use, you could always fill up a subset of your word with maybe five bits, for a Baudot code, or invent your own six-bit code to stop having to use figure shift and letter shift, turning it from 32 possibilities to 64. So five- and six-bit characters were common. But some people started to say: well, you always go on about memory being so massively expensive and so precious, and yet on EDSAC, with its five-bit characters, can't you economize by squeezing three five-bit characters into a seventeen-bit word? No! Why would you do that? It would be hell to get them out again: you'd have to do bit shifts and all sorts, and it would slow things down. So when you look at an early machine like EDSAC, it uses five bits for a character in the middle of an 18-bit word, and the other 13 bits just don't do anything.

IBM came along and said: look, here's the story, for those of you who've got lots of money and don't mind spending it. We have the perfect solution. Stop making the word your minimum addressable unit; address the characters inside it individually. And, in order to keep things clean and allow for future expansion, don't mess about with sizes like six bits that aren't a power of two; let's be brave and say 8-bit characters.
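Coming back to those packed five-bit characters for a moment: here is a minimal sketch, in Python and with made-up character codes (nothing to do with EDSAC's real character set or instruction set), of why packing was resented. Every single access costs shifts and masks:

    # Three 5-bit characters packed into the low 15 bits of one word.
    def pack(c0, c1, c2):
        return (c0 << 10) | (c1 << 5) | c2

    def unpack(word, index):
        # Shift the wanted character down, then mask off its neighbours.
        shift = (2 - index) * 5
        return (word >> shift) & 0b11111

    w = pack(8, 9, 0)                                # hypothetical 5-bit codes
    print(unpack(w, 0), unpack(w, 1), unpack(w, 2))  # 8 9 0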
Now, for a while before all this came about, people had been calling groups of bits of arbitrary size, five bits, six bits, whatever, bytes; but IBM proposed to standardize on the word "byte" meaning an 8-bit entity. And here's the win-win situation: for those of you mad mathematicians and engineers, we can arrange for the hardware to regard groups of four of those bytes as being a 32-bit word (there's a little sketch of this at the end of these captions). It's a win-win: IBM can dominate not only commercial computing but can also make a reasonably good showing in scientific computing as well, because of the sheer speed of their hardware, at great expense. So people said: great idea; now that we've all got more money and can afford more expensive things, that is, without a doubt, a sensible way to do it. The idea was absolutely spot on: you choose a byte width which is a power of two, and then you arrange to be able to put together groupings of bytes that form something sensible and bigger, for the macho types who want to do real-number arithmetic.

So everything in the garden is lovely: you've got 32 bits, you've got 8 bits. It was almost the end of the story, but not quite, because although IBM did very nicely with things like the 370/195 and so on being pretty good mainstream scientific computers, they didn't really go chasing the market at the very top end, because there just wasn't enough in it for them; they made megabucks off their commercial database customers. So right at the top end you get alternative solutions emerging, which are still with us today. It started off with a company called CDC and a brilliant hardware engineer called Seymour Cray, and yes, that Cray: he left CDC eventually and formed Cray Research. So at the supercomputer end of things there was an alternative market, but IBM, solidly there in the middle, said: we do commercial and we do scientific; we rule the world. And I think they were remarkably prescient. They even saw what was coming at the other end and tried to get it right, but failed; but maybe that will have to be another video, because what does complete the story is: what about the people who wanted something even cheaper than a 16-bit PDP-11, who wanted an 8-bit micro? IBM saw that coming and tried to get in there and dominate it, but, for reasons we half understand (and which can be revealed if there's enough demand), they did lose control of that one.

For a particular piece of data you have to wait for it to come round again. So that tape that you see there is going at 5,000 characters per second. Yep, 5,000 characters per second, which, roughly, I think comes to 30 miles an hour.
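And here is the little sketch promised above, again in Python rather than any real IBM hardware description: memory is addressed byte by byte, a character costs one address and no shifting, and four adjacent 8-bit bytes can be regarded as one 32-bit word.

    # Memory as individually addressable 8-bit bytes (values made up).
    memory = bytearray([0x12, 0x34, 0x56, 0x78])

    # Character access: one byte, one address, no shifts or masks.
    print(hex(memory[2]))                      # 0x56

    # Word access: glue four adjacent bytes into a 32-bit quantity
    # (big-endian byte order, as on IBM's System/360 line).
    word = int.from_bytes(memory[0:4], "big")
    print(hex(word))                           # 0x12345678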
Info
Channel: Computerphile
Views: 415,262
Keywords: computers, computerphile, computer, science, byte, bit, computer science, University of Nottingham, Professor Brailsford, Computer History
Id: ixJCo0cyAuA
Length: 11min 30sec (690 seconds)
Published: Tue May 01 2018