"Data work: the hidden talent and secret logic fuelling artificial intelligence" - Prof Gina Neff

Captions
Hey to everyone joining from around the world. I see we have people from Japan, from California, from Switzerland, lots of time zones here, Seattle, and even from West Berkshire in the UK, and of course from Oxford. Good day to all of you; I hope you're keeping well and safe in these difficult times. No doubt you've heard the good news from Oxford that we have a vaccine that's going to be affordable and globally distributed, so this is a significant day for us in Oxford, made even more significant by the fact that we have a presentation from Gina Neff. Gina is going to give a talk that she was originally scheduled to give in March; then circumstances intervened and we had COVID-19, but she's here, and we are absolutely delighted that she could join us.

I'm Ian Goldin. I was the founding director of the Oxford Martin School, and since 2006 I have been directing the programme on technological and economic change at the school. Gina is the professor of technology and society at the Oxford Internet Institute, which was actually one of the founding groups in the Oxford Martin School. She's also a professor in the Department of Sociology at Oxford. She works on the future of work in data-rich environments. She's published three books and over a dozen research articles in many of the magazines and journals that you would be reading. She's a pioneer in creating the area of human-centred data science, and is leading a new project on the organizational challenges that companies face when using artificial intelligence for decision-making. She's had fellowships at the British Academy, the Institute for Advanced Study at Princeton, and elsewhere, and she's an adviser to many companies, organizations, and others on this agenda. She's also an advisor on AI to the Women's Forum for the Economy and Society. Gina, delighted to have you.

I'm here to talk about data work. This is the new project that my team and I have been working on, and I'm going to talk about three distinct projects. The stakes at hand are that we are looking at a transformation of the basic, everyday infrastructure of how we do organizing. I like this word infrastructure because it reminds us, from science and technology studies, that once technologies get put into place, they're often invisible or everyday. We don't so much think about our road infrastructure when we're trying to get from point A to B. In the West, in the Northern Hemisphere, we don't necessarily think about things like our water infrastructure, and fortunately we don't have to. And we think about our connectivity infrastructure because it has allowed us to do things during this pandemic that we wouldn't otherwise be able to do. That said, infrastructure helps us think about how small technical decisions being made today end up having enormous everyday implications for all of us, and that's what I'd like to talk about.

There are three myths of everyday AI that we need to address. The first is that AI is no longer the province of large technology companies alone; we are seeing the integration of new modes of organizing and new kinds of technologies into everyday decisions, from legal decisions to marketing decisions to civic decisions, decisions about who gets credit or who gets put in jail. And yet the conversation so far about AI ethics has targeted those who make AI, the large technology firms who are driving the innovation.
So I would say that the context for us to think about in terms of data work is not that we're targeting large tech firms, but rather the places where many people on this call would work: smaller firms, firms outside of technology, organizations of all sorts that are starting to integrate, purchase, and acquire new kinds of data processing, new kinds of technologies that make decisions and automate processes.

The second key challenge for us in thinking about data work is that we've talked about artificial intelligence as a suite of technologies, as smart tech, but what is fueling and driving the expansion of artificial intelligence into more parts of our everyday lives is really this question of AI as data. So the first point is that AI is everywhere, including in smaller firms, and we need to change our conversation accordingly. The second key point is that we need to think of what is happening not as just a technological phenomenon, although processing power and cloud computing are enabling and driving new ways of generating data; it is a data question, a question of how data is being used across parts of life, and that's going to be one of the key areas we're going to look at tonight.

The third is the idea that AI systems are automatic. Now, this seems almost like an oxymoron, but bear with me for a moment. When we talk about integrating artificial intelligence systems into everyday life, we might think of them as driving an ascent of automation, and that's certainly the push behind many of these systems. But what is fueling much of the expansion is a whole host of human work. And it's really at the intersection of these three points, AI of the everyday in many different kinds of organizations, AI as a data innovation and not just a tech innovation, and the notion that AI systems involve enormous amounts of human effort, that we need to be thinking in terms of data work, and that's where our talk is tonight.

So, with my team, Maggie McGrath and Nayana Prakash, earlier this year we released a report called AI at Work, and what we did was actually fairly simple. We surveyed global newspapers for accounts of how artificial intelligence was making the news. The first thing we did was set aside basically everything that looked like a company press release, those cases where someone was advertising how well their product worked; we know from existing research that much of the journalism on artificial intelligence comes from the technology companies that make it. We set those aside and instead focused on how news stories were covering where AI was working in organizations. And we came up with three common challenges that we saw in the roughly one thousand articles that met our criteria over the course of the year: challenges of transparency, integration, and reliance. That is, if we collate the stories of where AI is not quite living up yet to its potential as a transformative technology, we see these news stories share three key challenges.

In transparency, we see a lack of transparency between companies and customers about what AI can do, how long it takes, how much work is involved in making it, and sort of closing the loop, as they say. Companies on the one hand tout AI as an automating solution, rather than being open about the amount of money, time, or effort needed to produce and sustain systems that actually work in practice.
This led to a whole host of problems that arose in our report, because we could see that the clients buying these systems felt deceived, by privacy promises and so on, and companies found there was a whole host of unglamorous work: on the one hand they had signed up for sparkling new AI systems, on the other hand they were getting what looked like lots of back-end data cleaning.

Next, there was a huge gap in integration. We used this to think about the gap between the conditions under which AI was trained and the real-life environments in which it is used in practice. We saw that many of the AI systems that made the news were trained with one set of data, but unfortunately the real-life situations into which they were rolled out or integrated were messier and less organized. So it often meant that an enormous amount of human labor was needed to train and manage the systems even once they were put in place, and that meant companies often struggled to scale artificial intelligence systems so they could work across a broad array of scenarios and problems, which presents a problem for some of the notions of scale.

And third was the notion of reliance. We like to think of this as companies relying either too much on their AI system, or investing a lot of agency, autonomy, and authority in those systems rather than training their workforce and staff and bringing them alongside to make those decisions practical in the workplace. We see challenges of both over- and under-reliance.

So let me take each one of these in turn. First is transparency. My colleague Mary Gray at Microsoft Research and her colleague Siddharth Suri have just released a wonderful book, Ghost Work, on platform labeling work. When we see AI systems, we know that there's a lot of contract, gig, or on-demand platform labor that goes into building them, and often that kind of transparency is a problem: we don't see who's involved or the work that's involved in making these systems work. However, in our report we pushed on a different idea of transparency, the notion that companies often had different kinds of labor in different parts of the organization that were really important in making their systems happen. In this sense, transparency was both about whether the system is transparent about where work is happening and who is doing it, and also geographic: is the work being done within the company or outsourced to a third party? In several of the cases we found, even internally, organizations would think that a process was happening inside the organization when it was really happening outside.

Next is integration. How systems actually integrate into existing workplaces presented a lot of problems. We saw several cases where there were problems scaling data across multiple hospitals: data would be collected in one hospital and an AI system would be trained, but bringing that system to make sense in another hospital, or scaling it even across different departments of an organization, became a problem. So we see these challenges of integration as arising when data moves from one context to another and companies can't quite fit the AI into their existing organization and existing strategy.
And then third was this notion of reliance. When was the dependence on AI just right, a sort of Goldilocks moment? There were several challenges where companies were touting that their system was doing one thing, but really the system was a shield, and what was being done was done by people behind the system. In countries like China and India, crowdsourced labor was actually providing the work; it was the intersection of weak data and privacy laws with cheap labor that really gave different places on the globe a competitive advantage in this kind of work. And so the reliance isn't so much on the technical system, but rather on a new form of outsourcing.

So what does this mean for data work? Well, first, we define data work as the work that needs to happen to help the interpretation and contextualization of data in practice. Second, data work is the work involved in helping translate data for fairness and inclusion, which also needs some kind of context around an AI system. And third, we define data work as the communication that needs to happen with a whole host of stakeholders, often including conversations about privacy and ethics. This comes from a paper that we published earlier this spring, Who Does the Work of Data?, in ACM Interactions, with my colleagues Muller, Bossen, Pine, and Nielsen, and myself.

In it we ask three main questions. If we're going to understand this hidden, invisible labor of data work that is happening within organizations, preparing organizations for AI systems, we need to ask, first, who is ensuring that the data are meaningful? Who's doing the work that helps integrate data solutions into practice within the organization, taking responsibility for the sorting and making sure that systems are organized around it? Second, who's really doing the work of organizing and infrastructuring to make AI even possible? So much of what we've seen in terms of what Gray and Suri call ghost work, in terms of platform labor, shows that a lot of this organizing and infrastructuring work is hidden labor, but even within organizations we see that there's organizing and infrastructuring work that needs to happen. And then finally, who is doing the work of attending to questions of ethics, privacy, and people's concerns about their data? Again, that is a particularly difficult part of the work to automate.

So we took these three questions and we looked at two settings in hospitals. One was the work in hospitals around readying billing data in the United States for analysis. As many people on the call will know, in the United States, with its privatized health care system, each procedure within a health care setting has a particular billing code, and those billing codes can be aggregated to give a real sense of how long someone stayed in hospital and what kinds of treatments end up with what kinds of outcomes.
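As a rough editorial sketch of the kind of aggregation described here, assuming a hypothetical table of billing records (the table, column names, and codes below are illustrative, not from the talk):

    import pandas as pd

    # Each row is one billed procedure: the admission it belongs to,
    # the billing code used, and the date it was recorded.
    records = pd.DataFrame({
        "admission_id": ["A1", "A1", "A1", "A2", "A2"],
        "billing_code": ["99223", "71046", "99232", "99223", "99238"],
        "date": pd.to_datetime([
            "2020-03-01", "2020-03-02", "2020-03-04",
            "2020-03-10", "2020-03-11",
        ]),
    })

    # Aggregate per admission: an approximate length of stay (span of
    # billed dates) and the set of procedures performed during the stay.
    summary = records.groupby("admission_id").agg(
        length_of_stay_days=("date", lambda d: (d.max() - d.min()).days),
        procedures=("billing_code", lambda c: sorted(set(c))),
    )
    print(summary)

Grouping by admission is what turns individual billed procedures into the stay-level view the talk describes; the same billing data can then be linked to outcomes, which is exactly why the coders' upstream work of attaching the right codes matters so much.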
We also looked at Denmark, where a set of workers, medical secretaries, do the work of transcribing and making certain kinds of data meaningful within the health care setting. By studying those two, and this is where we're really missing out on the slides, we come up with a data wheel, a way that we can model where intelligibility and transparency, the ongoing optimization of resources, and the work of context information and metadata occur within the organization. So we think about these three primary types of work both as new kinds of work necessary to make AI systems even function, and as a way of recognizing work that we might not have seen as crucial in the data-driven revolution. For example, our billing-code experts in hospitals in Denmark and the United States were really key in making sure that certain kinds of data were ready for data scientists to process. They were really important for making sure that the right codes got attached to the right procedures, and that was a kind of infrastructure work, a sort of care-of-data work, that opens up new possibilities for analysis. And third, they were the ones who interfaced with people who had concerns about whether or not the codes about them were right, or who had questions about privacy.

So, before we open up for discussion, and I promise, since we're doing this without slides, that I'm going to bring us to a close very soon, I will just say that the third project my team has worked on in this scheme, and really where I see a great opportunity for people to get involved, is around the discussion of what is and isn't artificial intelligence. Helping people make sense of the systems that we see, and unpacking some of the myths about automated work and the future of work, is one of the things we need to be working on. So with my team, early in 2020 we released The A to Z of AI, and if you Google that you will find it. We partnered with Google to create an educational product with simple, bite-sized explainers to help people understand what AI is and how it works, and that project has now been rolled out in 59 different countries and 13 languages. It's part of an effort to tease out some of the misperceptions that we have.

So I have a series of practice recommendations and a series of policy and research recommendations before we wrap up, and if this weren't 2020 and this were a slide-enabled talk, we would leave these up as we have the conversation. For a practice agenda, we really need to be thinking about organizing for data-saturated societies. These questions around artificial intelligence, and who does the work of making AI systems even possible, raise questions of digital data and the public good, and help us think about projects that seek to understand how we can harness some kinds of data, commercial data perhaps, for responsible reuse. They help us think about how we want to intervene and think through the question of who is advocating for the data of data subjects. Second, it reminds us that people's understanding of their privacy and their own data is not a task that we can just leave to them and assume they will do on their own; there is actually quite a bit of organizing and stakeholder engagement work that needs to be done, both by the companies doing the work and by those of us who advocate for responsible use of data. We really need to be working at this intersection, helping people learn to interrogate the values and implications of data-driven businesses. And then finally, third, we really need to empower citizens and upskill societies as part of this.
The work of making AI systems is simply too important to be left to large technology companies alone, who have an interest, as vendors, in selling systems on to smaller companies. We really need to support this responsible utilization of data not just by thinking about who's designing systems but by helping the people who are going to be using them, on an ongoing basis. And that of course brings up the policy question of how we can enable policymakers to upskill as well. Finally, in terms of a research agenda, I think this applied everyday infrastructure is one way we can begin to move AI and ethics questions out of the realm of large-scale technology makers, and really start to think about how we map, track, and measure these changes in all of our lives and in the organizational settings where we work, as more and more data is gathered about us and managed in different ways. Can we identify how social, cultural, organizational, and causal factors shape who can intervene in and hold accountable AI systems? Where can we do the social science that helps us ensure that systems are deployed and integrated in responsible ways? And can we begin to think about the changing social norms and conventions happening around AI systems in organizations: when do we cede power to automated systems, and when do we remember that behind the interface of many of these systems is a whole host of human labor that also needs to be held accountable? And so with that, thank you all for indulging me with my technical issues this evening, and I invite Ian to come in and join our conversation.

Thanks very much, Gina, for that admirably clear presentation. I love your organization of everything in threes, which is incredibly helpful as a way of retaining it. I'm told that people remember threes, and it helps when you have no slides to follow along.

Yeah, it's impressive that you remembered all the threes as well.

Admirably clear, and really extremely urgently needed, because without knowing it we are all walking into this maze, and sometimes catastrophically so, not least in the UK government's use of Excel spreadsheets, which I didn't understand, for track and trace. I have a number of questions, but I'm conscious of the time, and we did start a little bit late, so let me just pose a few, having been in some of these data factories in Kenya, Samasource and others, and also admiring the work that your colleague Mark Graham has done in thinking about the rights of people, which is no doubt very allied to the work you're doing. Let's maybe just begin: we've got your book, The A to Z of AI, at the bottom of the screen, but some of what you were speaking about didn't sound too much like AI to me; it sounded more like filling in Excel spreadsheets. Is there a slippery slope? Or, without reading your book, can you give us a quick definition of AI? I do encourage all the participants to get the book, obviously, to look at the deeper explanation.

Thank you. So, is it just Excel spreadsheets? There's a joke that says when it's a salesperson or a consultant, AI is a spreadsheet; when it's a data scientist, it's machine learning. That's absolutely true here. When we look at how companies talk about what it is they're doing, they put on a gloss that implies much more computational power than they're actually using.
So, with the Women's Forum you mentioned in the introduction, I've been running this autumn a series of focus groups with chief data scientists in companies in Europe and the US, and these are incredible company leaders: Fortune 500 companies, banks, large manufacturing firms, large consumer services firms; name the sector, and we've had an interview with someone working in their data science team. And we ask the primary question: what are you doing about responsible AI ethics? What are you doing for responsibility in your data systems? Each data scientist knows they have a huge responsibility, and none of them can yet articulate what it is they should be doing on a practical level. So there's an enormous amount of catch-up that our corporate leaders are doing in terms of figuring out how to put into practice something that actually works. Sure, some of the challenges we're looking at are challenges of any kind of large-scale centralization, of large-scale control over globalized systems and supply chains, but the challenge here is that we risk building infrastructures that, once put into place as technical data infrastructures, become difficult to untangle, difficult to hold accountable, and difficult to intervene in. And that's why we're suggesting there's an urgency at this moment for getting the conversation more focused on holding these systems accountable.

Yeah, and I think you very clearly articulated the urgency of this and the need. I have many questions, but let's go to some of the questions posed by participants. You are able to vote for these questions, so do vote if you're keen on a particular question, and I see we have eight. Let me take the first, which is from Ollie Stedman, who I happen to know; hi Ollie, good to see you participating in this. What does the equity of opportunity in tech look like? Where do we aim, and how do we know when we've reached it?

That is a fabulous question, because on the one hand we want to see an expansion of the types of people involved in designing and building AI systems for the world. Right now we see a concentration of that effort in the Global North; we see it in the US and in Europe; we see that only 18 percent of people working in AI are women. So we have enormous challenges of racial, ethnic, global, and gender diversity in building these systems. On top of that, we need to start building capacity, and I would suggest there are a couple of challenges. One is that equity of opportunity is necessary for building better tech, but it's not sufficient; it's not the only thing that's going to get us there. The second is that we really need to increase capacity in the Global South in order to ensure that systems are properly localized. If we're just relying on systems that are built in one place, trained on data from one place, and then integrated into systems around the world, that's a recipe for disaster.

Yes, integrating bias, and I know of massive concerns; not least, Shakir Mohamed at DeepMind has been writing very admirably about the colonialism of data and data algorithms. Four votes for Siddharth Arora's question: can you comment on technological determinism? Are AI trends such as those discussed here susceptible to the fallacy of technological determinism?
That's a fantastic question. We certainly hear that determinism in how industry leaders talk about, support, and push the inevitability of AI. But if we look at this data work question, where we see people in these large hospital settings involved in the day-to-day operations of getting data systems ready for data analysis, there's a whole host of new kinds of jobs being created. It's not inevitable that AI automates work or displaces all kinds of work; instead, it's creating these new moments, with different kinds of opportunities for people to be involved. When I think about technological determinism, I think: oh, the technology has its own drive and it will naturally go one way or another. Whereas what we can see from older industries, like healthcare, and construction too, which I know very well, is that the pathway new technologies take is not predetermined at all. So I think we as educators have our work cut out for us, making sure that the way the technology industry talks about the inevitability of its genius is held to account.

Absolutely. There are also people on YouTube participating, and we've been sent a couple of questions from there. The first one, which doesn't say who it's from, Andrea Guzman, I think: I was wondering how Prof Neff thinks data-saturated and AI-rich healthcare will impact medical education in the next five years.

Brilliant question from a brilliant colleague in the United States. We absolutely have to use these notions of data work to help people understand the context of the results they're looking at. If we continue to think about the outputs of AI systems as decontextualized outputs, we end up in a dangerous place, especially in medical systems. If we understand where the data come from, what kind of context they have, and who is advocating for and responsible for ensuring this organizational stitching-together between the outputs and the practices within the organization, we're going to have much more highly contextualized and much more relevant results. It has life-and-death consequences in health care. Right now, today, in this moment, we have a lot to be grateful for: large-scale computing power and the integration of global supply chains are going to help us end this global pandemic sooner than any pandemic has ever ended before. We should be cheering large-scale computing power and the collaboration it makes possible between teams. But we're not going to get to those great societally beneficial outcomes unless we realize that these data systems are highly dependent, highly contextualized, and sometimes highly fragile. Many of the cases we looked at in our AI at Work report come from healthcare, where data from one context was simply brought into another: data gathered in one particular hospital reflected choices that were very specific to that hospital and not applicable to other hospitals. So how does this influence medical education? We have to train the people in healthcare who are going to be using these systems in how to use them responsibly and how to be critical consumers of the data they're using. They are our front line of defense for bad AI, and for when AI goes awry.
In fact, that was from someone else, but Andrea Guzman said she'd like to extend this discussion from medical schools to business schools and so on: what should be the core of an education generally related to AI and data?

Carl Bergstrom and Jevin West at the University of Washington have a wonderful new book out called Calling BS, and I won't use the profanity, in which they developed a training course around critical, human-centered data science. They are basically training people to say: ah, wait a second, I am calling out as patently wrong some of this seemingly data-driven evidence. I think that's incredibly important in this particular moment, when we see attacks on and challenges to science and evidence. On the one hand, we need to stand up for science and evidence and help ensure that we continue to build public trust in good science and good evidence. But on the other hand, building fragile AI systems that are not robust, or that don't deliver as promised, isn't helping us get there; we want to make sure we're not simply selling new 21st-century versions of snake oil. So I think one of the solutions is that, just as we would train medical professionals in knowing how to push back on and query AI systems, since they don't need to know enough to design them, but they do need to know how to operate them, it's going to be the same in business schools. We're not necessarily going to have the MBAs and the CEOs designing AI systems, but they're going to need to know enough to ask their team, and to hold them to account for how those systems get integrated into their existing practices. And that's the education I think we really need to be doing right now.

Absolutely, including in Oxford. Zoe Porter asks, with three votes: what specific regulatory mechanisms would you like to see put in place to address these issues?

We've already started to see questions around people's individual data: how do we advocate for ourselves, with, for example, the European GDPR, the General Data Protection Regulation? How do we get individuals to advocate for this? Some of my esteemed colleagues have called for a kind of algorithm regulation, where we regulate particular kinds of data systems and data structures. I don't necessarily think that's the path, because we already have enough outcomes regulation in place. For example, in the education system we want to make sure that data systems are not failing our most disadvantaged students and that they're treating people equitably, and we have ways of monitoring that. When we met with company leaders this autumn with the Women's Forum, asking data scientists what they're doing about responsible tech, we found, surprisingly, that some of the most advanced conversations around responsible tech were happening in banking. Why would that be? Why in banking and not in technology, for example? Well, banks already operate under such a highly regulated system around how they make their decisions and choices that they had to be really sure, when they integrated new AI systems into their analysis, that they could explain them: explain them to their customers, explain them to themselves, and be assured that these systems were not causing new forms of discrimination.
That wasn't because there was a special AI banking regulation in place, but because we have certain regulations around our financial data. And I think that's the framework, as it were, for how policymakers need to be thinking about regulation in this space: we need to be thinking about which existing frameworks, in each realm of our lives as citizens, we need to adjust to think through how these systems are going to integrate, because that's the infrastructure we're building.

Great. There are three votes for this question from Gwen: in your research, have you spoken to employees interacting with data-driven systems? If so, what are their main concerns and challenges in their work, and how do these differ by race, gender, or age?

In the article Who Does the Work of Data?, which we published over the summer, we spoke overwhelmingly to women, and in the United States it was women of color, who had very stable bureaucratic jobs in large hospitals doing this data work. They are the champions of the AI era: large-scale data and data analysis in those hospitals did not get done without these women, who worked literally in the back offices. The challenge for the medical secretaries in Denmark was that the hospitals where we interviewed were actually looking at automating their work, because some of the transcription work they had done was being automated. So their concerns and challenges really were: how can I keep doing this work, which I conceive of as care, as caring for our patients' data, as caring for others, as making sure the record stands properly, while not getting recognized for doing that work? The challenge really is how we can support this invisible labor. It's not being called AI work, it's not being called data science, and yet neither new AI systems nor data science can happen without it. So I think the main concern and challenge is making sure that we're supporting good organization, supporting the work that needs to happen within organizations to make this happen.

Yeah. What's the organization that Mark Graham's involved in as well?

That's right, my colleague Mark Graham is working on Fairwork, and Fairwork is looking at platform economy labor. So much of this work, the work of Mark Graham, the work of Mary Gray, is looking at platform-based labor. We're not seeing so much platform-based labor in banks and hospitals, and yet the work being done in those organizations is absolutely about the invisible work that's fueling AI.

So, another question from Ollie that has two votes, which is why I'm going to give him a second question: how do you regard human-level performance as a metric to beat in machine learning applications, especially in manufacturing, checking for defects and so on? And he says Andrew Ng wrote recently about its limitations.

Right, so listen: we know that large-scale computing power is going to help humans solve really big problems, and there are really big problems we cannot solve without large-scale computing power. I think if we frame artificial intelligence as the corollary of, or the competitor to, human intelligence, we get to these questions of which is better. We don't actually ask the question of which is better, my calculator or my set of hand calculations.
OK, sure, my calculator can answer some things faster, and I can do some other things more easily; that's going to continue to be the case with artificial intelligence systems. So if we think about using the phrase "human-level performance", we are again pushing AI systems to act and think like people, and not to act and think like bits of technology that we use, or bits of infrastructure that are going to be part of much bigger organizations, data ecologies, and flows of data that already exist. So to personify, in some ways, is to hand over a powerful sense-making tool; it takes away our power to intervene in these systems.

I don't think you've asked this question, but it's another question from Siddharth Arora: can you comment on deepfakes, and are they more concerning than the general AI issues you've discussed so far?

Listen, I'm concerned about deepfakes almost as much as everybody else. We have at the moment big challenges to our notions of information: is our information secure? How do we know what we know? This is a question about epistemology, about how we know what is real, and that is very much a function of where we are in the early 21st century. But that said, I am more concerned about ensuring that we have healthy, robust news systems, healthy, robust democracies, and healthy, robust organizations than I am about a particular technology for producing lifelike human pictures. We know, for example, that the amount of misinformation circulating about elections on Facebook varies by country, and we know that's a function not of Facebook, and not of the people in one country being smarter or easier to dupe, but of the regulatory environment and the social environment in which those elections happen. So on the one hand we can worry about deepfakes; on the other hand we really need to be thinking about how we shoulder the responsibility and bolster our social institutions and organizations to make sure we're supporting society.

That's a great point to end on, and I'm afraid we've come to the end of an hour which has raced by. Sorry to all the participants for the technical glitches at the beginning; that's part of this transition to a digital world. It's great to see that so many of you have joined, some of you extremely late at night or early in the morning, and thanks so much to Gina for enlightening us on what's an immensely important topic, in a way that I found exceptionally clear. There's a lot talked about in this area, and I found your presentation really cut through a lot of it. You can follow Gina at @ginasue; you can follow me at @ian_goldin. Do look at the Oxford Martin School events page. The next event, which is really a must for anyone interested in the internet, since we wouldn't be here without this man, is Sir Tim Berners-Lee in conversation with someone who was, together with Tim, another founder of much of the computing power we now rely on. Tim, who is credited as the person behind the World Wide Web, will be at the Oxford Martin School giving a talk at five o'clock on Thursday, so do register for that if you haven't already. I look forward to your engagement in the future. Thanks for all of your participation, and thanks particularly to Gina for her great presentation. Stay safe, and good luck to you all.
Info
Channel: Oxford Martin School
Views: 754
Id: v5oHhuPzl14
Length: 47min 32sec (2852 seconds)
Published: Mon Nov 23 2020