A Brief, Opinionated History of the API

Captions
It's great to be back in New York. I grew up in New York, I went to Columbia, so this is home — and thank you all for coming today. I'm going to be talking about APIs. This is very much not a typical Josh talk: anyone who has seen me talk before knows that usually I've got code on every slide, and usually a lot of it is Java code. This talk has only one slide with code, and it's 70-year-old assembly language. It's not actually a brief talk, either; let me look at the clock — I've got 41 minutes to do it, and it was originally an hour talk, so we'll see how it goes.

So, APIs. A while ago some guy asked me, "Hey, how did APIs develop in the history of programming?" OK, it was a federal judge who asked me, in an interrogatory in the Oracle v. Google trial, but you have to admit it's a great question. I answered it as best I could at the time, but I realized this is a piece of history and I didn't really know the answer. So I studied it, and the result is this talk. The talk is divided into three parts: the first two directly answer the judge's question, and the last one is a discussion of the current legal status of APIs and how it affects all of us in the software field.

First, let's talk about who invented the subroutine library, because you can't have APIs without subroutine libraries. The term first appeared in Goldstine and von Neumann's "Planning and Coding of Problems for an Electronic Computing Instrument," from the Institute for Advanced Study at Princeton, in 1948 — before any general-purpose stored-program computers had ever been built. It's the first account of programming methodology. As people all over the world were preparing to build computers after World War II, this report made its way to all the labs that were trying to build them, and it was enormously influential. It contains the key idea that most programs will make use of common operations, and that a library of subroutines reduces the amount of new code you have to write, and also makes it much more likely that the program will be free of bugs, because you're using tried and tested code. Here it is in black and (previously) white — you probably can't read all of it, but simple and composite problems, routines and subroutines, changes required when using subroutines: they were all over subroutines in here.

So it seems that Goldstine and von Neumann invented the subroutine library, right? Case closed — except the ACM would beg to differ, because in 1967 the second-ever Turing Award was given to Maurice V. Wilkes, and the citation says he "is also known as the author, with Wheeler and Gill, of a volume on 'Preparation of Programs for Electronic Digital Computers' in 1951, in which program libraries were effectively introduced." So the second-ever Turing Award was given to Wilkes for introducing subroutine libraries. What gives? Why didn't Goldstine and von Neumann get the award if they invented it? The answer is pretty simple, actually: they were peddling vaporware. It's true. Martin Campbell-Kelly wrote a lovely history of the first few years of programming, "From Theory to Practice: The Invention of Programming, 1947-1951," and in it he said that Goldstine and von Neumann's preparatory routine "would have required extensive operator intervention, and it is difficult to imagine that it would ever have worked in practice." In other words, to get a program that used subroutines to actually work would have involved a lot of effort, every time you ran it, on the part of the person who fed the program to the computer. It was impractical: they had the idea, but they didn't know how to make it work. Wilkes and Wheeler's idea, on the other hand, was the real thing.
They built a computer called EDSAC at the University of Cambridge Mathematical Laboratory. I'll give you a few of the vital stats on EDSAC. It was the world's first functioning stored-program computer. It came to life on May 6, 1949, and it was immediately useful as a research tool throughout the university. It ran 650 instructions — or "orders," as they were known — per second. Blisteringly fast, right? 650 instructions per second. Initially it had 512 whole words of memory, eventually expanded to 1K, and they were 17-bit words — don't ask why; I know, but you don't want to. The memory was stored in ultrasonic mercury delay lines: big tanks of mercury in which vibrations would travel down the tank, get picked up at the other end, and be reapplied at the start, so you just kept the mercury moving to keep the bits in memory. This is finicky technology, and also a little bit dangerous. Input was paper tape; output was a teleprinter that ran at a blistering 6 2/3 characters per second — so never complain about your network speeds; that's what people had to put up with back then. It used 3,000 vacuum tubes, it consumed 12 kilowatts of power, and it occupied an entire 15-by-12 room. In round numbers, the thing was about 4 million times slower than a modern PC, it had 4 million times less memory than a modern PC, it used a hundred times the power, and it was a thousand times the size. It was a monster and a pig — but it changed the world.

The name EDSAC is an homage to EDVAC, the machine that Goldstine and von Neumann were working on. They didn't finish EDVAC until 1951, three years later, and it only worked on a limited basis initially. So what gives? Given that the Americans had a two-year head start over the Cambridge team, why did the Cambridge team finish their machine so much faster? In a word, simplicity: Wilkes kept it simple. Once again quoting from Campbell-Kelly's book: "The reason for the rapid completion, which was well ahead of any American computer, was that Wilkes wanted to have a machine as a practical computing instrument rather than a machine of the highest technological performance." First you make it work, then you make it fast. "To this end he kept the EDSAC simple, conservative in electronics and conventional in its architecture." Andrew Herbert, who recently led a project to restore the EDSAC and make it work again, added that the refinements came later: the sequence of diagrams over the next five years shows that after they got it running they did all sorts of things to make it faster, but first they kept it simple, got it running, and put it into production right away. And so by 1949 they were able to do this. I don't actually have the first program that ran on EDSAC on the slide, but this is the actual note from the lab notebook, and it says: "Machine in operation for the first time, May 6th 1949. Printed table of squares, 0 to 99. Time for program: 2 minutes 35 seconds."
But that was fast, because how did you do a table of squares before that? You did it by hand or with a mechanical calculator — it would have been half an hour's work. So this was earth-shattering at the time. The second program, written just a few days later, printed the first hundred and seventy primes. That one was written by Maurice Wilkes himself; the first one was written by his PhD student, whose name was David Wheeler.

Simple toy programs like that didn't need much in the way of an architecture, and they didn't need subroutine libraries. The architecture was just a simple set of "initial orders" — in modern terminology we'd call it a bootloader — and it was stored on an electromechanical telephone switch. The whole machine was electronic except for this switch, which contained the bootloader. It simply read the program off the paper tape, put it into the mercury delay lines, and started execution at the location immediately following the bootloader. The bootloader took up the first 30 words of memory: you'd press a button on the console, it would copy those first 30 words from the phone switch into the delay lines, run them to read in the tape, and then continue running the program that had been read in from the tape. Simple, right? It turns out that, technically speaking, the initial orders were an assembler, because the tape did not contain binary code. Wheeler and Wilkes knew that humans would be making these tapes, and they wanted the machine to be human-centric, so they decreed that humans could always program in mnemonic codes rather than actual binary — you never had to touch the binary yourself. Wheeler wrote the initial orders.

They were fine as far as they went, but just a few months later Wilkes tried to write his first real program. Here's what he had to say about it in his memoirs: "By June 1949 I was trying to get working my first non-trivial program, which was for the numerical integration of Airy's differential equation." He had been a radar engineer in the war; he was a physicist and did computations of waves bouncing in the ionosphere, and this differential equation apparently tells you about that. "It was on one of my journeys between the EDSAC room and the punching equipment that, hesitating at the angles of the stairs, the realization came over me that a good part of the remainder of my life was going to be spent in finding errors in my programs." This was essentially the first non-trivial program written on the first working computer, and that is an amazingly deep insight from someone doing something entirely new — and I think we all know he was absolutely right.

But Wilkes did see high-quality subroutine libraries as a partial fix for this problem. He thought, basically, that if you don't have to debug every little thing — because you can rely on a library of high-quality code — then it will be at least a bit easier to write correct, efficient programs. So he gave the task to his PhD student, David Wheeler.
Wheeler's architecture for subroutines was finished that September; he basically spent the summer writing it. He added what he called "coordinating orders" to the initial orders — when you hear "order," think "instruction." These were, in effect, fake opcodes: instructions not to the computer but to the "compiler," that is, to the initial orders, saying things like "this subroutine is being relocated here" and "we're invoking this subroutine here," and doing parameter passing — all the things you have to do to invoke subroutines from code. His system was a masterpiece: it required no manual intervention whatsoever. The program consisted of the main program, intermixed with these coordinating orders, followed by the subroutines, all on a single tape. And the magic little program that made it all work was only 42 instructions long — which, given that the simple bootloader was 30 instructions, is amazing. How did he do it, and why? Well, he did it by working very, very hard, and he did it because he had no choice: he was constrained by the size of the phone switch. He only had 42 instructions to work with, so he did the best he could with those 42 instructions. Wilkes, his advisor, who was generally not prone to overstatement, described it as "a tour de force of ingenuity." You can actually read this code in a book that I'll discuss later, if you're curious.

Here is Wheeler's subroutine linkage technique, called the Wheeler jump. I'm not going to discuss it in any detail because I don't have time, but I will tell you a couple of things about it. First of all, it did allow subroutines to call other subroutines to arbitrary depths, which may sound like an obvious thing to do, but — who here has programmed in BASIC? Yeah — BASIC doesn't let you do it: "GOSUB before RETURN in line 37," that was an error message. It did not allow recursion; a routine could not call itself. That didn't become possible until basically a decade later, in Algol and Lisp. It did allow higher-order functions: you could pass a function to a function, because when they were designing this they knew they were going to want to solve differential equations and do numerical integration, and to do that you have to pass functions to functions. So it was an amazing piece of work for its time. It's overly tricky by modern standards — it turns out this is self-modifying code, which today would be a security nightmare — but at the time it was a clever solution to a difficult problem.
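To make the trick concrete, here is a minimal sketch of the idea behind the Wheeler jump — not real EDSAC orders; the decimal instruction encoding and the little interpreter are invented for this illustration. The essential moves, though, are the real ones: the caller adds its own instruction word into the accumulator, and the subroutine adds a fixed constant to turn that word into a "jump back to the caller" instruction, which it then stores over its own last word.

    #include <stdio.h>

    /*
     * Toy illustration of the Wheeler jump (NOT real EDSAC code).
     * Each memory word is a small decimal number encoding op*100 + address,
     * so instruction words can be manipulated with ordinary arithmetic --
     * which is exactly what the technique relies on.
     *
     *   A n : add memory[n] to the accumulator
     *   T n : store the accumulator at memory[n], then clear it
     *   G n : jump to address n
     *   O   : print the accumulator (for the demo)
     *   Z   : halt
     */
    enum { A = 1, T = 2, G = 3, O = 4, Z = 5 };
    #define W(op, n) ((op) * 100 + (n))

    static int mem[64];

    static void run(void) {
        int acc = 0, pc = 0;
        for (;;) {
            int word = mem[pc], op = word / 100, n = word % 100;
            switch (op) {
            case A: acc += mem[n]; pc++; break;
            case T: mem[n] = acc; acc = 0; pc++; break;  /* self-modification happens here */
            case G: pc = n; break;
            case O: printf("output: %d\n", acc); pc++; break;
            default: return;                             /* Z (or anything else): stop    */
            }
        }
    }

    int main(void) {
        /* Main program, starting at address 0. */
        mem[0] = W(A, 0);   /* add THIS very instruction word into the accumulator  */
        mem[1] = W(G, 10);  /* jump to the subroutine at address 10                  */
        mem[2] = W(Z, 0);   /* the subroutine will return to here                    */

        /* Subroutine at address 10: prints the value stored at address 20. */
        mem[10] = W(A, 30); /* add the fixed constant that turns "A n" into "G n+2"  */
        mem[11] = W(T, 14); /* plant that return jump as our own last instruction    */
        mem[12] = W(A, 20); /* body: load the "argument" ...                         */
        mem[13] = W(O, 0);  /* ... and print it                                      */
        mem[14] = 0;        /* overwritten at run time with the return jump          */

        mem[20] = 42;                   /* the value the subroutine prints           */
        mem[30] = (G - A) * 100 + 2;    /* call-site-independent conversion constant */

        run();
        return 0;
    }

Because the conversion constant does not depend on the call site, any caller that uses the two-word calling sequence gets its own return jump planted, so calls can nest to arbitrary depth; but a second call to the same subroutine before the first returns would overwrite the planted jump, which is exactly why the scheme cannot support recursion.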
Here's what the EDSAC subroutine library itself looked like. It was in fact a library — here it is, in this little cabinet. Each drawer contains a bunch of tapes: the top one contains arithmetic subroutines, the next one complex-arithmetic subroutines. This woman here is the computer operator, and she takes the tapes of the main programs and the tapes of the subroutines and, using automatic punching equipment, combines them into a single tape that is then fed to the computer. And here's another, logical view of the EDSAC subroutine library — the labels on the drawers: floating-point arithmetic, complex arithmetic, debugging, division, exponentials, functions, differential equations, special functions, power series, logarithms, miscellaneous, print and layout (which is output), quadrature (which is numerical integration), input, trig functions, counting operations, vectors and matrices. What an astonishing amount of coverage for the first subroutine library ever written — and the whole thing was written in under a year by a tiny little team of one professor and a few graduate students.

This is the book that introduced subroutine libraries to the world. It's got a long name: "The Preparation of Programs for an Electronic Digital Computer, with special reference to the EDSAC and the use of a library of subroutines." They knew right up front that the subroutine library was the most important thing in this book, and they put it in the title. This was the world's first text on programming, and for the next decade it was basically the only text on programming. It was as important in its era as any programming book is now, though it is largely forgotten. It was known as "WWG" back then, in the same way that we call the Kernighan and Ritchie book on the C programming language "K&R." This was the book on programming for basically the rest of the 1950s, and it wasn't really supplanted until higher-level programming languages like Fortran became the order of the day in the late 1950s.

Wheeler presented the key ideas from this book and the related research in a 1952 paper at what would become Carnegie Mellon University in Pittsburgh. The paper described all of these concepts: the subroutine; the subroutine library; generality-versus-performance trade-offs in designing subroutine libraries; the importance, and difficulty, of documenting a library; information hiding; dynamic debuggers; and higher-order functions. And the astonishing thing is that they had built these things — this wasn't "things we might do"; this is what they had done in the past couple of years. Here's a remarkable passage from the paper, which I'm going to read to you in its entirety: "It should be pointed out that the preparation of a library of subroutines requires a considerable amount of effort. This is much greater than the effort merely required to code the subroutine in its simplest form. It will usually be necessary to code it in the library standard form, and this may detract from its efficiency in time and space. It may be desirable to code it in such a manner that the operation is generalized to some extent. However, even after it has been coded and tested, there still remains the considerable task of writing a description so that people not acquainted with the interior coding can nevertheless use it easily. This last task may be the most difficult." This was in 1952. It's just amazing to me how much they knew back then. Forty-two years later, David Parnas wrote this: "Reuse is something that is far easier to say than to do. Doing it requires both good design and very good documentation. Even when we see good design, which is still infrequently, we won't see the components reused without good documentation." And that was still news to everybody in 1994, even though it's precisely what Wheeler said in 1952.

Here's another remarkable passage; this is the conclusion of the paper: "The primary objectives to be borne in mind when constructing subroutine libraries are simplicity of use, correctness of codes and accuracy of description. All complexity should — if possible — be buried out of sight." There is a lot going on in that one sentence. And here's something that I wrote in 2006, in an OOPSLA keynote called "How to Design a Good API and Why It Matters": an API "should be easy to use and hard to misuse. It should be easy to do simple things, possible to do complex things, and impossible, or at least difficult, to do wrong things." "Documentation matters: no matter how good an API, it won't get used without good documentation." "Minimize accessibility; when in doubt, make it private." Those are exactly the same ideas that were in the conclusion of that 1952 paper.
In addition to its prescient wisdom, there's one more truly remarkable thing about this paper: it was only two pages long. That's it — that's the whole paper. (And by the way, I found a typo in it.) But I stand in awe of this paper; the amount of material Wheeler was able to cram into those two pages is truly remarkable, and almost all of it is still of value today. I'll post a link to the PDF in case you want to read it, and I recommend that you do — it's good reading.

So at this point I think it's safe to say that we know who the inventors of the subroutine library are, and that the primary inventor is David Wheeler; I think there's little doubt about that. But what about APIs, as opposed to subroutine libraries? Why didn't Wilkes and Wheeler discuss APIs as a separate entity? Basically because, in those days, a subroutine library and an API were isomorphic. There was only one machine architecture, because this was the first working machine ever built — in fact, there was only one machine. The notion of portability didn't exist: it's not like you could take a program written on one system and run it on another system with the same API. And there were no legacy programs, because there were no programs; the notion of backward compatibility didn't exist — backward compatible with what? So there was basically no reason for them to discuss the API separately from the library; they were one and the same back then. But I think it's clear from that two-page paper Wheeler wrote that they really did understand the principles of API design quite well, even before anyone had come up with the notion of an API as a freestanding entity.

The field progressed, and existing subroutine libraries had to be reimplemented. Why? New hardware was built — they did a new version of the EDSAC, for example, and ported the old libraries to the new machine to run more quickly using its new instructions. And new algorithms were devised, so you could take the old APIs and do things like matrix multiplication faster using new algorithms. These things gave the API a life independent of the underlying subroutine library: as soon as you start reimplementing an API, the API is a separate thing from the library — it isn't just your view into the library.

As for the term "API," I don't think we started using it until 1968. This represents independent research on my part: when I looked it up a couple of years ago, Merriam-Webster said the term came from the late '70s. I said, "That can't be possible," and I did a literature search. The earliest paper I could find that actually used the term is called "Data structures and techniques for remote computer graphics," by Ira Cotton and Frank Greatorex. I actually went and talked to Ira Cotton — Frank is no longer alive — and I said, "Hey, did you guys come up with the term API?" And he said, "No, no, we didn't." And I said, "But it's mentioned in this 1968 paper." And he said, "Oh, is it? Let me see... yeah, I guess we came up with it." I told Merriam-Webster, and now 1968 is listed as their earliest citation. So this is the first use of the term.
Let's go a little deeper into what this paper has to say. First it dances around the term: "Normally the interface between the application program" — there you go: application, program, interface — "and the system is via FORTRAN-type subroutine calls." Then they say: "The system has been designed to be essentially hardware independent, in the sense that the implementation may be recoded for different or improved hardware while still maintaining the same interface with each other and with the application program." So there's the concept, clearly defined and motivated. And then they run with the term. A little further on, the paper says: "Finally, hardware independence at the central computer means that a consistent application program interface could be maintained if the hardware were replaced. Eventual replacement of at least a portion of the hardware is almost a certainty, given the rapid rate of new developments in computer technology." This was in 1968, by the way — they had no idea what was ahead of them, but they were absolutely right. And then: "A sufficiently flexible, hardware-independent system guarantees that technological advances will not make the system prematurely obsolete." The key thing is that they understood the value proposition of APIs: the API is the glue that lets you put a new implementation of a library underneath existing code, so that you keep the value in that code and in your knowledge of the API.

So what's going on? The authors understood the underlying concept — that the API had a life of its own apart from the library — and once you have this freestanding entity, different from the library, it deserves a name. The name they gave it was "application program interface" (the "-ing" got added later). Many other people understood it too; it's not like this name was some great intellectual achievement — it just naturally arose. So, in summary: libraries naturally give rise to APIs. APIs weren't invented so much as they were discovered. I would claim that Wheeler and Wilkes latently invented the API in 1948, and it just took us another twenty years to discover it and give it a name.

That ends the first part of the talk. The second part is devoted to what exactly constitutes an API. As of April of this year, this is what Wikipedia said: "a set of subroutine definitions, protocols, and tools for building application software... In general terms, it is a set of clearly defined methods of communication between various software components." I think that's not a bad definition as far as it goes — actually, a lot better than the definition they had last year. I would quibble with the three things in red: I don't think tools are part of the API — they're an adjunct to the API; I wouldn't limit it to building application software — you build system software with APIs as well; and they say it's used to communicate between software components — as you'll see later in this section, I believe APIs are also used to communicate with hardware components. The purpose, and this is my paraphrasing, is to define a set of functionalities independent of their implementation, allowing the implementation to vary without compromising the users of the component. And this definition gives rise to a two-part test for whether something is an API or not. If you can answer yes to both of these questions, it's an API: Does it provide a set of operations defined by their inputs and outputs? And does it admit reimplementation without compromising its users? If you can say yes to both of those, then you probably have an API on your hands.
So now let's take a whirlwind chronological tour of twelve reasonably important APIs of the past fifty years.

The first one is the Fortran standard library. In 1958, when FORTRAN II came out, it came with 28 math functions, all listed in this table. This is the Fortran standard library, and astonishingly, it still works: if you write a Fortran program today using these library functions, it will still run. APIs kind of last forever. Notice, by the way, that I have the API shown in this picture and the underlying architecture — which here is Fortran — shown in that picture; I'm going to stick to that convention for the entire whirlwind tour.

Here's another thing that may be an API: the IBM System/360 instruction set, which came out in 1964 and was how programmers communicated assembly-language programs to the IBM computer. It's summarized on this famous green card, and it ran on the IBM 360 architecture. It was subsequently reimplemented on other, larger IBM 360s, then on the IBM 370s, and on clone machines by companies like Amdahl — the whole mainframe industry formed around this API. Do you think an instruction set is an API? There's some disagreement as to whether it is or not. Show of hands: is this an API? About half of you say it is. It feels like an API to me.

Then another one is the C standard library, in 1975, and it's really fundamentally no different from the Fortran standard library. It's bigger, because C was a bigger, richer language and it just did more things, but it's still not huge, and it is not fundamentally different. Here's an interesting discussion of the library from K&R's C book; they clearly understood the role of APIs in ensuring portability: "Input and output facilities are not part of the C language... nonetheless, real programs do interact with their environment. In this chapter we describe the standard I/O library, a set of functions designed to provide a standard I/O programming interface. The routines are meant to be portable, in the sense that they will exist in compatible form on any system where C exists, and that programs which confine their system interactions to facilities provided by the standard library can be moved from one system to another essentially without change." There you have it: what they're saying in that last sentence is essentially the same thing Sun said when they said "write once, run anywhere" — they just didn't have the snappy marketing department. You could move the program from one computer to another without change because you've got an API. They also clearly understood that core APIs are inseparable from the underlying language; they basically said, yes, technically the standard library isn't part of C, but it's going to be implemented everywhere that C is.

Another kind of API is the API between programs and the underlying operating system kernel. Here are the UNIX system calls — this happens to be from the Sixth Edition, May 1975 — and they were reimplemented by every subsequent Unix-like operating system, up to and including Linux. They're what you use for I/O and the like at that level.
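Here is a small sketch of what those two layers look like from a program's point of view — my example, not one from the talk. The first function confines itself to the C standard I/O library, the kind of code K&R promise will move between systems essentially without change; the second does the same job directly against the POSIX read/write system calls that a Unix-like kernel provides (and that stdio is typically built on).

    #include <stdio.h>      /* the C standard library API: portable wherever C exists */
    #include <unistd.h>     /* the UNIX system-call API: reimplemented by every       */
                            /* Unix-like kernel up to and including Linux             */

    /* Portable version: confines its system interactions to the standard I/O library. */
    static void copy_with_stdio(void) {
        int c;
        while ((c = getchar()) != EOF)
            putchar(c);
    }

    /* Same job expressed directly against the kernel API: read/write on file
     * descriptors 0 (standard input) and 1 (standard output). */
    static void copy_with_syscalls(void) {
        char buf[4096];
        ssize_t n;
        while ((n = read(0, buf, sizeof buf)) > 0)
            write(1, buf, (size_t)n);
    }

    int main(int argc, char **argv) {
        (void)argv;
        if (argc > 1)                /* any argument: use the system-call version */
            copy_with_syscalls();
        else
            copy_with_stdio();
        return 0;
    }

The first version compiles anywhere there is a C implementation; the second assumes the UNIX kernel API underneath — which is exactly the distinction between the two library layers being drawn here.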
What about hardware? Just to see how old you all are: how many people know what this thing is? Yeah, not a lot of you — maybe 10%. This is what we used to call a computer terminal. Back in the days when computers took up a whole room, we didn't own computers and we didn't have them in our offices; we had these terminals, and the terminal talked to the computer via either a modem or a wired connection. This one is the DEC VT100, which was one of the early smart terminals. What made a terminal smart? You could make characters blink, and you had arbitrary cursor addressing, so you could jump around the screen and update things — which allowed you to write games on it. Smart terminals were really great things in their day. But to make all this work you needed special escape sequences to tell the terminal: print this over here, print that over there, make this thing blink. Those escape sequences are defined on this card, so this is essentially an API between programs and the terminal. Once again, it's not clear whether it deserves to be called an API, but it certainly feels like one.
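For the curious, here are a few of those control sequences written out from the program's side. These particular ones are standard VT100/ANSI sequences rather than a transcription of the card on the slide:

    #include <stdio.h>

    /* A handful of VT100/ANSI escape sequences.  The bytes ESC [ ... are
     * interpreted by the terminal itself, not by the operating system. */
    int main(void) {
        printf("\033[2J");                  /* erase the whole screen               */
        printf("\033[H");                   /* move the cursor to row 1, column 1   */
        printf("\033[10;20H");              /* jump the cursor to row 10, column 20 */
        printf("\033[5mblinking\033[0m\n"); /* turn blinking on, print, then reset  */
        return 0;
    }

A program that emits only sequences like these will drive a real VT100, a modern terminal emulator, or anything else that reimplements the same escape-sequence "API."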
Another kind of API sits between the program and the underlying hardware other than the processor. In the IBM PC, and in all the compatibles that were made thereafter, for a program to do I/O and other interactions with the lower-level hardware — but not the processor — it made these BIOS calls. The IBM PC was an open architecture, and the BIOS was defined in one chapter of this purple book, the Technical Reference for the IBM PC. So that's another kind of API.

And what about command-line interfaces? Here's the MS-DOS command-line interface, which was adapted from the DECsystem-10 command-line interface before it. It's not exactly an API, but when you use it from a scripting language, these commands play exactly the part of method calls in an ordinary programming language, and if you apply the two-part test we made, it certainly functions as an API. By the way, there's an amusing little bit of prose on the box — you probably can't read it from there — that says "for computers compatible with IBM personal computers." It's a mouthful, but they were saying something important: this defines a way to interact with any computer that implements the correct underlying APIs.

What about modems? If I said "ATDT" and then a phone number, how many of you, by show of hands, would know what I'm talking about? Wow — many of you; I'm astonished. Back in the old days, when people had modems, that was how you told the modem what to dial, and there was a whole little language for communicating with it. Initially you actually typed it on the computer terminal; eventually computers sent it to the modem. And it turns out it's still alive today, in all of our cell phones. A cell phone has two computers in it: the main computer and the baseband, which handles the radios. The way the main computer talks to the baseband is through these AT commands. So this is an API that has been reimplemented hundreds of thousands of times and has long outlived the underlying hardware.
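As a sketch of what that little command language looks like from the computer's side: the commands themselves ("ATDT" to dial, replies like "OK", "CONNECT", "NO CARRIER") are the real Hayes vocabulary, but the device path and the bare-bones serial handling below are assumptions made for the illustration — real code would also configure the port with tcsetattr.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *dial = "ATDT2125551212\r";   /* "dial this number, using tones" */
        char reply[256];

        /* Assumed device path; it might be a serial port, a USB modem, or the
         * channel to a phone's baseband processor. */
        int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        write(fd, dial, strlen(dial));           /* send the command to the modem  */
        ssize_t n = read(fd, reply, sizeof reply - 1);
        if (n > 0) {
            reply[n] = '\0';
            printf("modem said: %s\n", reply);   /* e.g. "CONNECT" or "NO CARRIER" */
        }
        close(fd);
        return 0;
    }

The point is the one just made above: the strings are the API, and any device that answers them correctly — from a 1980s modem to a present-day baseband — can sit on the other end.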
What about Adobe PostScript? It's a language — there's the language manual for it — but it's also the API that Apple's Macintoshes used to talk to LaserWriters, to describe the page that was going to get printed. So is it an API? Is it a language? Is it both? I don't think there's an easy answer to that question; honestly, I don't think there's any hard and fast distinction between APIs and languages. Technically, I might describe this as an API that embodies a language. The same goes, by the way, for things like Perl regular expressions: it's an API, and it's a little language.

What about wire-level protocols? SMB, the Server Message Block protocol, is what Windows uses for printing and file sharing across networks, and it was reimplemented by Samba to allow Linux systems to interoperate with Windows systems. Is a wire-level protocol an API? What do you think? Only a few of you think of it as an API — and yet it is an underlying communication mechanism that allows components to interoperate, even though you don't program with it exactly the way you do with a traditional API.

Finally, in the modern era, we have web APIs. If you've got a web service — in this case Delicious — you publish an API so that people can mash things up and build programs that use it. So we have APIs all the way from the earliest of machines, from the EDSACs of the world, to our modern web applications. Oh — sorry, my bad, I skipped a couple. Win32: this is basically just another operating system API — bigger operating system, bigger API, but not fundamentally different — and it was reimplemented by Wine (not Samba, sorry), so you can program against the Windows APIs on your Linux machine. Java: once again, just like Fortran and C, it's another set of core language APIs, and it was reimplemented by GNU Classpath, Apache Harmony, Android, and so forth. And now we get to Delicious, which is just a random web service API — the APIs are described differently, using JSON or whatever, but they're still APIs.

That brings our tour to an end. What have we learned from it? First of all, the two-part test may be too broad: it admits instruction set architectures, command-line interfaces, wire-level protocols, and the like. If we want to take those out, we can amend the definition to say that an API must augment a programming language or calling convention. More importantly, we've learned that APIs come in all shapes and sizes, and they keep getting bigger. Many of them live on forever, often way past the death of the original hardware for which they were written. And they create entire industries, both above them — people writing programs to the API, like the programs that ran on Windows — and below them — the entire IBM-compatible PC industry was reimplementations of the BIOS, and so forth. In summary, APIs are the methods of operation by which components in a system use one another; they are basically the glue that connects the digital universe.

Finally, I would like to conclude with a brief legal digression, though I am not a lawyer. We've always had the freedom to reimplement APIs; in fact, every single API I just discussed in that whirlwind tour has had significant reimplementations. Here's a little table, which I'm not going to go over now, of all those APIs: who created them, when they were created, an important reimplementation, and the year that reimplementation took place. Even the EDSAC instruction set architecture was reimplemented. Amazingly, it was designed in 1948, and a decade later a consortium of Toshiba and Tokyo University built a transistorized version of EDSAC that reimplemented the instruction set architecture so they could use the same library — they basically copied the entire spec from the book, WWG. There was no contact between the consortium and Cambridge; Wilkes learned about it later on, and he was thrilled that these people had been able to build a whole new machine just by reimplementing his APIs.

But now, and for the last eight years, API reimplementation has been under serious attack. In 2010, Oracle sued Google in federal court in the Northern District of California for reimplementing the Java APIs in Android. They alleged both patent and copyright infringement, and that turns out to be very important. The judge was Judge Alsup — the same one whose interrogatory began this talk. Let me get it right: the jury ruled that there had been no patent infringement, and the judge ruled that APIs were not copyrightable. That's what happened in the first trial. But Oracle appealed the ruling, and because patents had been mentioned in the original suit, the appeal was handled by the Court of Appeals for the Federal Circuit (CAFC), otherwise known as the patent court. They reversed Judge Alsup's ruling in May of 2014. In October of that year Google petitioned the Supreme Court to hear the case, but the next year the Supreme Court declined, on the advice of the Solicitor General under Obama, unfortunately. That caused the case to be remanded to the court in California to decide whether the reimplementation was in fact fair use — and that's exactly what the court decided in a jury trial: yes, of course it's fair use to reimplement an API. Then, in October of 2016, Oracle appealed that jury verdict, and Stanford and 76 computer scientists and engineers filed an amicus brief saying that we've always had the freedom to reimplement APIs and we need to keep that freedom. But sadly, the court did not listen to that amicus brief: they overturned the jury verdict on fair use. That is essentially unheard of — but it happened in this case. So right now it is the law of the land that APIs are copyrightable, and you cannot reimplement one without the permission of the originator of the API. That has not been the case for the past 70 years, but it is the case today. Recently — that is, in May of this year — Google petitioned for the entire Court of Appeals for the Federal Circuit to rehear the case, and a subset of those scientists and engineers wrote another amicus brief calling for the rehearing. In case you're curious who these people are: they are the designers of these systems and languages. You've heard of Steve Wozniak; I don't have time to go over all the names, but I have a pointer in the talk — Gordon Bell is there; there are Turing Award winners on this list. These are the people who believe we should have the freedom to reimplement APIs. They are also the authors of these books — since a bunch of you have studied computer science, you probably have some of them on your shelf.
So what does it mean for you if the Federal Circuit ruling stands? It means you cannot reimplement an API without permission from its author. That may require you to pay licensing fees, or the author may simply impose restrictions — "sure, you can use this API, but you can't do it on a mobile device." That sounds ridiculous, right? But those are exactly the rules that Oracle attempted to apply to the Java APIs. This is the future if this ruling stands. And if you think software patents have caused problems, it turns out that software copyrights will cause much worse ones, because patents are limited to basically twenty years, whereas copyrights grant a near-perpetual monopoly on writing an implementation of this little language — and by near-perpetual I mean life plus 70 years if the API was written by a person, or a total of 95 years if it was written by a company. There have only been computers for 70 years, so if nobody can reimplement your API for 95 years, it's yours — you've got a monopoly. Did any of you use GNU, or a PC, or Wine, or Android? Sure you did — you probably use most of these things. None of them could have been written without the freedom to reimplement APIs. So those are the stakes here, and the stakes are awfully high. The right to reimplement APIs is crucial. Without it, new entrants won't be able to compete against incumbents, and the result will be software that's less interoperable. You'll have silos; you'll spend less time hacking and more time talking to lawyers; and companies will spend less time building products and more time fighting with each other in court, or negotiating.

So, in conclusion: APIs date back to the dawn of the computer age. They're the glue that connects the digital universe. The magic of APIs — what makes an API an API — is the fact that it can be reimplemented. We've been free to do so since the time of EDSAC, and I sincerely hope that we haven't lost this freedom. What can you personally do to ensure that we don't lose it? Well, consider not developing APIs unless they're free to reimplement — have a chat with your employer. Consider not working for companies that assert copyright on APIs. And finally, make your opinions known to those in power, whether that's the executives at the company you work for, the courts in amicus briefs, or Congress. Thank you for coming to my talk. Of course we have no time for questions, but if you have questions for me, ask me anything in, whatever it is, 20 minutes — Times Square is the room, on the seventh floor of this building. Thank you for your patience.

[Applause]
Info
Channel: InfoQ
Views: 5,772
Rating: 5 out of 5
Keywords: API, Development, Software Architecture, API-Design, InfoQ, QCon, QCon New York
Id: LzMp6uQbmns
Length: 47min 5sec (2825 seconds)
Published: Thu Nov 22 2018