Bryan Cantrill: Andreessen's Corollary: Ethical Dilemmas in Software Engineering - Craft Conf 2019

Captions
All righty — excellent. Good afternoon, everybody. It's great to be at Craft Conf; this is a conference that has given me FOMO — fear of missing out — for many years, so it's great to actually be here. One of the things they asked us to do as we were uploading our presentations was to offer tags, and I realized I couldn't quite bring myself to do it, but I should have tagged "history" on my talk, because honestly all my talks begin with a bit of history. I'm talking today about ethical dilemmas in software engineering, and so I'm going to turn the wayback dial to 1997. I'm not going to ask how many of you were alive in 1997, because I think most of you were — but I know that is changing, so I know not to even ask that question.

If we look at the code of ethics from the ACM circa 1997 — this was published in the Communications of the ACM — it starts off with some timeless ideas. It is certainly even more true now that computers have "a central and growing role in commerce, industry, government, medicine, education, entertainment, social affairs and ordinary life." I love "social affairs" — I don't think that person was predicting social networking circa 1997; little did we know what we had in store. But this starts off very reasonably: we want a code of ethics because software is becoming more and more important. And one of the things I really like about this code circa 1997 is that it emphasizes that the code is not a simple ethical algorithm. Look, we're all quantitative people. If you're like me — if you're a computer scientist in your marrow — you like discrete math a lot more than continuous math; we like things to be binary. So we all want to know: where is the algorithm for ethics? Give me the algorithm! No, no, no — there is no algorithm. I'm sorry: for ethical obligations we're not going to have an algorithm; we're not going to be able to reproduce the same results with every input. You're going to have to embrace the fact that there's going to be nuance, that in some cases standards may conflict, and that these situations will "require the software engineer to use ethical judgment" — judgment, okay, what's that? — "to act in a manner that is most consistent with the spirit of the code."

So the setup all seems good, and now we're ready to read the code circa 1997 — ready to look at our first principles, such that we may behave non-algorithmically and ethically. Principle one: Product. You're like — is "product" a principle? Does that word not mean what I think it means? This is the first commandment: software engineers shall, insofar as possible, assure that the software on which they work is useful — okay, we're already on shaky ground — and of acceptable quality to the public, the employer, the client and the user; completed on time — oh — and at reasonable cost — oh boy — and free of error. Come on! In particular, software engineers shall, as appropriate — 1.01, the first commandment — "ensure that specifications for software on which they work have been well documented, satisfy the user's requirements and have the client's approval." I don't think any software engineer survives this first principle intact. Certainly I have been unethical many, many, many times based on just 1.01.
But maybe you think: hey, you know what, I'm the exception. No, no, no — I'm sorry. "I don't know what's wrong with everybody else, but for me the specifications always have been well documented, and they always have satisfied the user's requirements and had the client's approval." Okay, great — you're terrific. Let's see how many of these you survive. Have you ensured proper and achievable goals and objectives for any project on which you work or propose to work? Have you ensured an appropriate methodology for any project on which you work? And if there is any software engineer left standing after 1.05, 1.06 is designed to finish you off and assure that you are always behaving unethically, because by 1.06 you must "ensure good management for any project on which they work, including effective procedures for promotion of quality and reduction of risk." And then 1.07: "ensure realistic estimates of cost, scheduling, personnel and outcome on any project on which they work or propose to work." We're all dead, right? We are all unethical.

The surrounding context was great, but this is nutty; it does not make any sense whatsoever. Very deep in there — you have to go to section 6.10 — you do find one that I think is actually pretty important and pretty timeless, namely: obey all laws governing their work, insofar as such laws are consistent with the public health, safety and welfare. So there's some stuff in here that's good, but in general these are not all that good. And yet they sum it up, I think, in a thoughtful way: address these with thoughtful consideration of fundamental principles rather than reliance on detailed regulations. But then why all the emphasis on doing things that we know we can't do in software, like shipping things on schedule, on time and error-free? That is really, really hard to do in software, and it's not unethical — I'm sorry, it's not unethical — when a project turns out to be more complicated than you anticipated. So there's nothing wrong per se with these; they are just crazily dated. This is from the Precambrian era of software, and "quaint" is basically the best way to describe the code itself. It serves to remind us how cute and small software was in 1997 — and how much more important software has become to society since.

Software has grown a lot since 1997, and we have come into contact with a lot of ethical dilemmas along the way. We saw some of the foreshocks of what was going to become the complications of software with the ubiquitous rise of the internet in the late 1990s. Who remembers Napster, 1999? Yeah, exactly — this is where you get the Gen Xers raising their hands. I point to Napster as the first body of software that showed us things were going to get complicated, because Napster was software, back in the day, for downloading and ripping MP3s and playing music, generally in violation of the copyright of the music holder — the person who created the music. But it didn't feel that illegal. That's a terrible excuse, but a lot of people used Napster, and there was a huge disconnect between what record labels were charging and what people honestly wanted to pay for music.
Napster came in right there, and realistically we were entering an ethical gray area as we used the software — and as we implemented it. If you were implementing Napster, you were implementing software designed for people to basically circumvent the law. Again, it didn't feel that way; it felt playful, it felt fun — until Napster was shut down by the recording industry, at which point it was a lot less fun. So that was a foreshock.

Then in 2003 we had Friendster. Oh, Friendster. If you happened to be alive and on the internet in 2003, there was this glorious summer when we all discovered social networking, and Friendster went from nothing to explosive in the span of about 60 days. As someone who was living in San Francisco during that time, I can tell you it was just fun. We were all discovering social networking, and of course people were having fun with it: they were putting their cats online, and their cats were befriending the neighbors' cats, and then the signpost down the street would be online, and both cats would befriend the signpost — which was great. And one of the funny things about Friendster was that all the cats and signposts were frantically being deleted, because "we only want actual people on here." This dude was just way uptight about only people being on Friendster: you can't put your cat on our thing. Come on, why can't I put my cat on a social network? Oh, you don't want me to put my cat on a social network because you don't want to date my cat — I get it. And I think we forget this: social networking was born crooked. Social networking was born so Jonathan Abrams could meet girls. That was the point of social networking, and your cat, frankly, was getting in the way of his ability to meet girls online. So there was this deep disingenuousness to social networking — a disingenuousness that social networking still has not outgrown. Who is Facebook for? Who now is Facebook for? For my mom, sharing photos of a recent trip with her loved ones — that's who she perceives it as being for. That's not who Facebook perceives it as being for. And that disingenuousness, I think, was a foreshock of what was to come.

And then you hit 2011, and Marc Andreessen and his prophecy. If you haven't read this essay recently, it absolutely merits a reread — it's eight years old now: "Why Software Is Eating the World." Honestly, it's deeply prophetic. Andreessen writes that more and more businesses and industries are being run on software and delivered as online services — from movies to agriculture to national defense — and that many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures. And the prophecy part: "Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not." Now, as someone who lives and works in Silicon Valley, at the time I thought this was obvious — but it wasn't obvious, it was prophetic. And I can tell you that Silicon Valley cheered this paragraph, or did: yeah, Silicon Valley companies doing the world-beating more often than not! But it should have been a warning that Silicon Valley needed to grow up — that tech companies needed to grow up —
because there was a tsunami of societal interaction coming that we were ill-prepared for.

So what I want to do is walk through some of the ethical dilemmas we have seen since Andreessen's prophecy in 2011. I picked one from each year — you could pick ten thousand more, probably — but I've tried to pick things that I think are true dilemmas, for which there is complexity and ambiguity. I'm not trying to take eight different things and blame people for them; quite the contrary, because with a lot of these, if you yourself had been in their shoes, you might have done the same thing. That, indeed, is the point: there is complexity, there is ambiguity. And one of the challenges we've got is that those most likely to encounter this ambiguity are those least troubled by it. I have always been troubled by Facebook — I'll tell you that bluntly — and as a result I have never worked for Facebook. The people who have worked for Facebook are not troubled by Facebook, which is fine — not that everyone should be troubled by Facebook — but it means there's a degree to which one is less equipped to deal with the ambiguities that Facebook encountered.

So let's kick it off with Facebook, because in 2012 Facebook absolutely crossed a Rubicon. In 2012, researchers manipulated the news feeds of hundreds of thousands of people as part of an experiment. They took posts with positive sentiment and put them in the news feeds of one group, posts with negative sentiment in the news feeds of another group, and — because they're good scientists — neutral posts in the news feeds of a third. That means that if someone had great news — they'd just had a new baby — you might not have seen it in your news feed if you were in the negative group, because it may have been selected out for you; instead you may have gotten news about how someone you once went to high school with was battling breast cancer. These are very real things. There are real people underneath that sentiment analysis, but that seems to be absent here. And indeed, how did the researchers defend their actions? They describe what they did in terms of Hadoop and MapReduce, but they point out that no text was seen by the researchers, and that as such it was "consistent with Facebook's data use policy, constituting informed consent for this research." Come on — this is not informed consent. I don't think anyone would have said it was informed consent, even in 2012. I don't think these researchers were necessarily lying to themselves; I think they had a pretty interesting idea that they wanted to explore, and they ignored the fact that they did not actually have informed consent and that they were manipulating people's emotions and then reasoning about them. These aren't mice. These are people with actual, real lives who are potentially seriously impacted by the manipulation of their emotions. And if you know folks at Facebook, I think they will tell you that this was a Rubicon for Facebook, because this is where Facebook began to perform experiments — either in the name of academic research, as in this case, or, much more frequently, in the name of actually generating revenue — and they lost their hesitation about doing these kinds of manipulations. That is a real problem.
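Part of what makes this episode so unsettling is how mundane the mechanism is. What follows is a minimal, hypothetical sketch of sentiment-bucketed feed filtering in Python — the post structure, scores, and group names are all invented for illustration, not Facebook's actual code:

from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    author: str
    text: str
    sentiment: float  # assume an upstream classifier scored this in [-1, 1]

def filter_feed(posts: List[Post], group: str) -> List[Post]:
    """Show a user only the posts matching their assigned experimental group."""
    if group == "positive":
        return [p for p in posts if p.sentiment > 0.25]
    if group == "negative":
        return [p for p in posts if p.sentiment < -0.25]
    return [p for p in posts if -0.25 <= p.sentiment <= 0.25]  # neutral control

feed = [
    Post("friend_a", "We just had a baby!", 0.9),
    Post("classmate_b", "Starting chemo this week.", -0.8),
]
# A user assigned to the "negative" group never sees the good news at all:
print([p.text for p in filter_feed(feed, "negative")])  # ['Starting chemo this week.']

Nothing in a few lines like these looks sinister, which is precisely the point: the ethical weight lives not in the code but in the fact that the people behind those posts never meaningfully consented.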
Then we have 2013 — and by the way, these could all be Facebook, but I think that would be boring, and it's not necessary. I guess it's a kind of perverse compliment that Facebook is so dominant that it has encountered every ethical dilemma one can conceive of, but let's pick some other examples. One that feels plausible to me is Zenefits. Zenefits was — I guess still is — a Silicon Valley startup that was going to disrupt the insurance industry. Sounds great. In order to actually do that business, they had to be certified by the State of California by a regulatory board, and to do that you have to sit through 52 hours of training, studying the materials — and the way this is done is in the web browser, clicking "next" and so on. The CEO came up with what we might call a hack: you know what I'm going to do? I'm going to develop a Chrome plugin that plugs into the browser, pretends I'm there, and takes the training for me. Doesn't that sound great? We've automated it, right? We took something tedious and manual and automated it with a little script. It feels plausible — and I think if it doesn't feel plausible to you, you might be lying to yourself; it feels plausible to me. If a coworker did that, I don't know that I would think to ask the follow-up questions: wait a minute, what is this training for? Who's administering it? What are the consequences? And the consequences were pretty grave. Also, perversely, the duration kind of makes a difference: if it's a five-minute video, that's one thing, but wow — 52 hours. Yet the duration really shouldn't make a difference. In terms of this particular dilemma, we shouldn't be asking that question; we should be asking broader questions about what the ramifications are and what the law says we need to do — because we actually do need to abide by the law. This was a huge scandal — this is actually from the findings of fact in the SEC's action against Parker Conrad. He lost his job over it, and it was a very big deal that continues to haunt Zenefits to this day. I think it revealed a culture that wasn't asking these kinds of questions.

2014: Uber Greyball. This was in the era when Uber was trying to enter all of these cities, and there were some cities — around the U.S. in particular, and I'm sure around the globe — that had great reticence about allowing Uber in, and that had regulations in place that actually prevented Uber from driving in the city. Uber knew that those regulators — in this case, in Portland — were potentially going to be simply downloading the app and looking to see how many Ubers were driving on the street in order to monitor them. So Uber invented software called Greyball that gave those regulators a different view of Uber — namely, one that had no cars on it. And it kind of feels, I don't know, just a little bit clever — a little bit clever, a little bit devious — and it's probably wrong. No: it is wrong, when the party you're deceiving is a regulator trying to enforce the law. This is not cat and mouse, and your job is not to evade regulation, especially when you are clearly violating every aspect of that regulation, in letter and in spirit. From the findings of fact — and I love the very different tone here; this is Portland itself — Uber used Greyball software "to intentionally evade PBOT's officers from December 5th to December 19th, 2014, and deny 29 separate ride requests by PBOT enforcement officers." Uber's response was much more "it doesn't feel like that big of a deal": okay, yeah, there was a two-week period where I was misleading you, but it was pretty short — I didn't do it that long, come on! And I think this shows you a bit of moral relativism: committing crimes for merely a two-week period of time does not in any way absolve you of responsibility for what you've done. So that's 2014 and Uber Greyball.
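It is worth seeing how little engineering a scheme like Greyball actually requires — how short the distance is between the idea and the deed. A hypothetical sketch, with invented account tags, types, and flagging heuristics (this is not Uber's implementation):

from dataclasses import dataclass
from typing import List, Set

@dataclass
class Car:
    car_id: str
    lat: float
    lon: float

# Accounts flagged by some upstream heuristic -- e.g., opened near a
# regulator's office. The flagging itself is assumed, not shown, here.
greyballed_accounts: Set[str] = {"pbot_inspector_7"}

def nearby_cars(account_id: str, real_cars: List[Car]) -> List[Car]:
    """Serve the real map to ordinary riders, an empty one to flagged accounts."""
    if account_id in greyballed_accounts:
        return []  # the regulator's app shows no cars at all
    return real_cars

cars = [Car("car-1", 45.5152, -122.6784)]
print(nearby_cars("ordinary_rider", cars))    # the real list of cars
print(nearby_cars("pbot_inspector_7", cars))  # []

The hard part was never the code; it was the decision to write it — which is exactly where the questions needed to be asked.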
Then we get to 2015, and this is one that I feel is going to come up more and more. This is where Google's software — their image-recognition neural network — mistakenly identified a dark-skinned individual as a gorilla. And Google, to their credit, handled this as well as you could handle it: they immediately took down the software in question, they got in front of it, they figured out what had happened — they did everything you could conceivably do. But it highlighted a real crack in the foundation, because this is, honestly, the great lie of the current AI boom: it actually relies on humans at the bottom doing classification — doing image classification — and if the data that you use for that classification is itself biased, that bias will show up at the top. It's abhorrent enough when this happens in mere image identification; but what if this is actually going to inform a loan application? What if we're going to act on it societally? It highlights a real dilemma that we have in AI right now: how do we explain its results? "The AI has made this decision." Okay, well — the AI is actually not intelligent; it's just recognizing patterns. Why did it identify this as that pattern? And by the way, this case was innocent, in that Google did not intentionally feed its network the wrong data; it did not tamper with its own training data to make it non-representative. The data just was not representative. God forbid what happens when we allow people to deliberately generate training data for us.
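To make that point concrete — that the skew lives in the data rather than in the algorithm — here is a toy nearest-centroid classifier over made-up two-feature examples. Nothing about it resembles Google's actual system; it only illustrates the failure mode:

from statistics import mean

def centroid(points):
    return (mean(p[0] for p in points), mean(p[1] for p in points))

def nearest_label(x, centroids):
    return min(centroids, key=lambda lbl: (x[0] - centroids[lbl][0]) ** 2
                                        + (x[1] - centroids[lbl][1]) ** 2)

# Training set: class "A" sampled broadly, class "B" sampled from only a
# narrow slice of its true range -- the data, not the algorithm, is skewed.
train = {
    "A": [(0.1, 0.2), (0.3, 0.1), (0.2, 0.4), (0.5, 0.3)],
    "B": [(0.9, 0.9), (0.95, 0.85)],  # under-sampled, unrepresentative
}
centroids = {lbl: centroid(pts) for lbl, pts in train.items()}

# A genuine "B" example from the part of the distribution the training set
# never saw gets misclassified as "A":
print(nearest_label((0.55, 0.6), centroids))  # -> "A"

The classifier does exactly what it was built to do; the unrepresentative sample is what produces the wrong answer — and the fix is better data and humans auditing it, not a cleverer distance function.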
Remember Microsoft Tay? Microsoft Tay was going to be a chatbot that learned from what you said to it. Well, as soon as 4chan got hold of it — which was, what, 30 microseconds after it was announced? — the latency between the announcement, 4chan discovering it, and Microsoft Tay becoming the most despicable racist you've ever encountered in your life was shocking. It was way less than a day; within 30 minutes Microsoft Tay was volunteering things. "Have you been on 4chan?" "I actually have been on 4chan — glad you asked! 4chan's great!" No, no, no: 4chan is not great. 4chan is everything that's wrong with us societally, and because Microsoft Tay was hanging out there, it very quickly learned all the wrong things. We've got a lot of ethical dilemmas ahead of us as we build more deep neural nets and make them make more decisions. If you are working on that kind of technology, you've got to be asking yourself: how are we making decisions? Why are we making decisions this way? How are we taking responsibility for those decisions, and how can we explain them? I think this is going to be a huge issue over the next five and ten years.

Then we get to 2016: the first Tesla "autopilot" fatality — and I'm going to put "autopilot" in air quotes, because it's not a correct representation of what the automobile can do. This is not a self-driving car. This is a car that has traffic-aware cruise control and some steering assist. That is what it is. But it has been sold and marketed as an autopilot, and as a result drivers don't necessarily read every sentence of the manual — every word of the manual — and they believe, quite understandably, that when they're driving they can let their attention drift. In this case the attentiveness of the driver drifted, for reasons that are unknown, and the car did not see a semi-truck that had pulled in front of it making a left turn, because it was looking at the road underneath the trailer. The car went under the semi-truck and the driver was killed essentially instantly. Again, we don't know why that driver was distracted, but very clearly that driver believed the car was driving itself — and it wasn't. And this is not the only fatality we've seen: we saw one recently in Silicon Valley where the car actually followed the wrong line — it followed the right-hand marker of a left exit as the left marker of the lane — and drove into the barrier at top speed. Again, this is not an autopilot; that's not an accurate representation of what it's doing, and the fact that it's killing people is a really serious problem. It's a serious problem because the reason it's happening goes pretty deep. There are plenty of people who work on self-driving cars who will tell you that Tesla's total lack of lidar is irresponsible: Tesla can't see. It's relying on cameras to see things, and it's appealing to a very bad argument — namely, that humans are able to drive with two eyes, so why can't a computer? Well, because the computer is not an actual human, numpty. We've got the technology for a computer to actually know when objects are in front of it — lidar — but it's not on these cars because of economic cost. Well, maybe if it's too expensive not to kill people, can we at least call the thing what it actually is, which is driver assist? It can be viewed as a safety mechanism, as long as the driver is in charge.
But you've got a lot of people at Tesla who are trying to do what they perceive to be the right thing, and they should view themselves as being in a quandary with respect to these vehicles and how they're being marketed, sold, and tweeted about. It's a major problem. And the NTSB's conclusion on this — God bless. The NTSB is the National Transportation Safety Board in the United States; this is the last government agency in the United States that I, as a worn-out American, have any faith in, so please allow me my NTSB. I love the NTSB; the NTSB always figures it out. In particular, the NTSB looked at this crash in Williston, Florida, and concluded that the problem here is actually how this thing is being sold and marketed: this is just a Level 2 vehicle automation system, it is being marketed as much more than that, and the recommendation is that these systems be used the way they were designed — as an enhancement to the driver, not as a replacement for the driver.

All right, 2017 — I've got to come back to Facebook a little bit. In Myanmar there is a Muslim ethnic minority called the Rohingya, and in Myanmar, Facebook effectively is the internet: Facebook is by far the dominant way of actually getting to the internet. And Facebook messages were used to coordinate a genocide against the Rohingya in which seventy-five thousand people died. This is an issue of true ambiguity. What is Facebook's responsibility here? Facebook is not advocating the genocide of the Rohingya, clearly — so to what degree is Facebook enabling it merely by being a communications platform? Because Facebook's answer to this is: look, we're just a radio transmitter; we transmit whatever people say, and it's not our job as the radio transmitter to audit what people say. I don't agree with that — I think a lot of people don't agree with that — and I don't agree with it because it was a radio transmitter that was used to broadcast instructions for genocide in the 1994 Rwandan genocide of Hutu against Tutsi. The way that was done was with a radio transmitter, and the UN general on the ground, Roméo Dallaire, tried to communicate back to anyone who would listen: could somebody please bomb the radio transmitter? This is what a bomb is for. No humans will be killed. We need to bomb the radio transmitter so they cannot broadcast instructions for this genocide. Well, in this case Facebook is the radio transmitter — and there's no responsibility here? It is ambiguous; I don't know where that responsibility begins and ends, and I don't think anybody wants Facebook to need to approve every single post on Facebook. And yet the end result was really not acceptable — and you can see there was a huge spike in these posts on Facebook as the genocide was undertaken. Now, one thing Facebook actually did right — and I think it's a kind of template for people who do end up in an ethical quandary — is that they were, I think, rightfully horrified by what happened. I think Facebook has been horrified by a lot of things that have happened, but I think they're crippled because they don't understand how to act. In this case they actually acted reasonably well, at least at the start: they commissioned a group to go explore this rigorously — which is great — and that commission delivered a series of
recommendations — also great. Those recommendations were then ignored by Facebook — a lot less great. I think having these kinds of independent findings of fact is incredibly valuable, but you've got to listen to them. In particular, what Facebook has done is deplatform a handful of these Buddhist extremist groups, while leaving a bunch of other extremist groups — on all sides of this — on the platform and continuing to be involved. And they have not yet meaningfully employed boots on the ground who actually speak Burmese and can actually make reasonable human decisions. This is one of those examples where we in software fixate on getting the human out of the loop — how can we automate everything? — and it is an example of where we need to do the opposite. There are going to be more examples of this, where we need to get a human back in the loop. We need to stop fooling ourselves about the coming singularity, recognize that human judgment is not going to be replaced anytime soon, and deliberately get it back in the loop. The question should be: how do we get it back in the loop optimally — cost-effectively, and so on? Let's harness that human judgment, please. Which is itself, by the way, very difficult: there have been news stories about the folks who have been tasked with keeping inappropriate material off things like YouTube. That job is horrific; people see horrific things, and they start to believe some of the crazy conspiracy theories that they are actually in charge of keeping off the platform. So clearly we can't go all the way and say we need human judgment on everything. There's going to have to be a delicate balance — again, a lot of complexity, a lot of ambiguity. But simply saying "it's not our problem"? That's not okay.
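What does deliberately getting a human back in the loop look like in practice? One common shape — sketched here with a placeholder classifier and invented thresholds — is to let automation act only on the cases it is sure about and route the ambiguous middle to human reviewers, ideally ones who speak the language of the content:

from collections import deque

human_review_queue = deque()

def score_post(text: str) -> float:
    """Stand-in for a real abuse classifier returning P(violates policy)."""
    return 0.5  # placeholder: the real model is assumed, not implemented here

def moderate(post_id: str, text: str) -> str:
    p = score_post(text)
    if p > 0.95:
        return "removed"       # high confidence: act automatically
    if p < 0.05:
        return "published"     # high confidence the other way
    human_review_queue.append((post_id, text, p))
    return "queued_for_human"  # the ambiguous middle is exactly where
                               # human judgment is irreplaceable

print(moderate("post-1", "..."))  # -> queued_for_human
print(len(human_review_queue))    # -> 1

The design question then becomes tuning those thresholds so that reviewers see enough to matter without being buried — the cost-effectiveness balance just described — while also protecting the reviewers themselves from the worst of what they'll see.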
And lest I sound too sanctimonious and too judgmental, let me, for 2018, talk about our own problem — one we had at Joyent. There was a dreadful shooting at a Pittsburgh synagogue late last year, and the Pittsburgh shooter was on a platform called Gab — Gab is, effectively, an alt-right Twitter — and the shooter was very much ginned up by his activities on Gab. Gab was a Joyent customer — it pains me to say it. They weren't shortly after this; we moved very quickly to take them off our platform. But we moved after the incident; this is not something we caught before the incident. I can tell you that every Joyent employee was asking themselves pretty serious questions: was my software complicit in this horrible massacre? At some level, yes, in that our software was the platform on which the platform was run on which the shooter was actively engaged. So at some level, yes, we bear some culpability — but I also think we need to think about what our own responsibility is there. Fortunately, in this case — such as it is — we actually have an acceptable use policy that is pretty broad and allowed us to act quickly. And to this day there are many people who believe — or some people who believe — that this is a First Amendment issue, that Gab should have the right to be on our platform. No one has the inalienable right to be your customer regardless of how they behave; if someone behaves poorly, they don't need to be your customer, and don't get fooled by these kinds of constitutional arguments. When someone behaves unacceptably, you can boot them off your platform. You can say what you want — but not in my bar. If you're in my bar, I can kick you out of my bar for saying things that I deem to be horrific and unacceptable. So if you don't have an acceptable use policy and you have any public exposure, I very much encourage you to get one. This was an extremely difficult moment for us at Joyent, and one that I think gave us empathy for some of these other folks who find themselves in these dilemmas: these are not necessarily bad-acting people, but they find themselves in really difficult, very regrettable situations.

All right — 2019. You could pick a lot of things for 2019. You could pick on Facebook again; one thing that riles me up about Facebook in 2019, if you saw it, is where they are going after teenagers to get their data, paying teenagers 20 bucks to offer up all of their data. My 14-year-old would gladly take $20 to offer up everything about himself — in fact, you have overbid: my 14-year-old will do it for five bucks. But you know what? That is why my 14-year-old is a child: he has an undeveloped prefrontal cortex, he can't make these decisions on his own, and he needs mom and dad to make those decisions for him. I love him, obviously, but he's a child, and Facebook going after kids really matters. But I don't want to go back to Facebook yet again — although I guess I just did.

For 2019, I've got to talk about the 737 MAX and MCAS — the Maneuvering Characteristics Augmentation System that was put in the 737 MAX. The story here is that Boeing needed to react quickly to Airbus: Airbus had developed a new, more efficient plane, and Boeing needed to respond very quickly with a new 737. There's a problem with that: they wanted to develop a new 737 that did not require any recertification — a 737 that would look and fly exactly like the old 737, such that no one would need to be recertified. And there's a problem with that: to make the plane more efficient, the engines had to be larger; for the engines to be larger, they had to be moved; and by moving the engines, you actually change the way the aircraft flies. So they had an idea — and the reason I mention this in a software talk is that the idea feels very inspired by software: oh, you're saying the aircraft flies a little differently? Okay — what if we put in a module that detects when the aircraft is being flown differently, and just flies it the way it flew before? That sounds crazy, but that's what they did with MCAS. MCAS detects high angles of attack and automatically adjusts the aircraft to fly the way the old aircraft used to fly. The problem is that they didn't actually tell any of the pilots about this, because to do so would have revealed that this is a different aircraft — one that probably requires new certification. This has already resulted in two fatal, severe crashes, and not all crashes are equal: both of these crashes lost all lives. These are Lion Air 610 and Ethiopian 302, and in both cases the pilots were fighting a mechanism that was doing the wrong thing — the mechanism was pointing the nose down while they were trying to pull it up and pull it back. Why was it doing the wrong thing? Because it had bad data: the angle-of-attack sensor was wrong. And Boeing is putting out a software update: okay, okay, we've got a software update — the flight control system will now compare inputs from both angle-of-attack sensors.
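To see the difference that fix makes, compare the two designs in miniature. This is a deliberately simplified sketch, not Boeing's code; the activation threshold, the disagreement limit, and the disengage behavior are all assumptions for illustration:

from typing import Optional

DISAGREEMENT_LIMIT_DEG = 5.5   # assumed threshold for this sketch

def mcas_single_sensor(aoa_left: float, aoa_right: float) -> Optional[str]:
    """The original design: one faulty sensor can command nose-down trim."""
    if aoa_left > 15.0:        # only one sensor is consulted
        return "trim nose down"
    return None

def mcas_cross_checked(aoa_left: float, aoa_right: float) -> Optional[str]:
    """The post-crash fix: disagreeing sensors disable the automation."""
    if abs(aoa_left - aoa_right) > DISAGREEMENT_LIMIT_DEG:
        return None            # disengage; alert the crew instead of acting
    if min(aoa_left, aoa_right) > 15.0:
        return "trim nose down"
    return None

# A stuck-high left sensor (70 degrees) against a sane right sensor (5 degrees):
print(mcas_single_sensor(70.0, 5.0))   # -> "trim nose down" (the fatal path)
print(mcas_cross_checked(70.0, 5.0))   # -> None (automation stands down)

Cross-checking redundant sensors before acting on them is elementary defensive engineering, which is why shipping without it reads as a cultural failure rather than a merely technical one.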
And you're like: it will now? Now — now that we've got hundreds of people dead — it will now factor in both angle-of-attack sensors? Where was this when you were actually designing the aircraft? The problem is that this aircraft — and again, I don't blame software, but it feels like a "move fast and break things" kind of aircraft. It used to be that the folks making aircraft felt a great deal of responsibility to those involved and would never have engaged in a "move fast and break things" kind of aircraft. Indeed, when cockpits became software-controlled in the '80s, there was a lot of consternation about a software bug bringing down an airliner. That basically has not happened, and it hasn't happened because there was a great deal of discipline. But now software is involved in taking down two airliners — not because of a defect, not because of a bug, but because of a culture that wanted this aircraft to be done now rather than done right. It was a huge problem, and it ran deep at Boeing, and there are lots of people there — again, not bad people — who I'm sure are questioning their own behavior: damn it, I should have said something; I should have asked why we were doing it this way. And I'm sure many of them did, and that will be revealed as time goes on. So they're now going to compare inputs from both angle-of-attack sensors. MCAS would also fight the pilots — they're going to stop doing that — and MCAS could change the aircraft's trim by more than the pilots could change it back, which is part of the reason Ethiopian 302 crashed. This is a very serious problem. It will be rectified — I've got total faith in our system and its ability to rectify it — but it should be a lesson to all of us that the software way of thinking is invading too much of society.

So this is just a tiny sampling of the ethical dilemmas we've seen. There are lots and lots more of them — you probably have several of your own that you've encountered — and one thing that has become clear is that what's right for software is not right for society. We do all sorts of things in software. Good thing that software can't speak on its own behalf, right? How often do you get on stage and denigrate old software? You rip out software, savagely destroying lines and deleting them, and then gloat about it on social media — you're howling over a corpse! Fortunately, software itself can't speak up and say, "hey, you've done the wrong thing," because software is not a person, and we can do things to software that we should never do in society. And so software engineers have a greater burden to society.

So how do we proceed? Are we all just like Scrooge — do we just feel bad about ourselves and go back to what we were doing before? No, we shouldn't, and there is good news: the ACM has finally revisited that code of ethics we talked about at the beginning and rewritten it, and it's pretty good. This came out just at the end of last year — the summer of last year — and the new code of ethics, I think, is actually pretty good. In particular, it's very principles-based. We're not going to get into the weeds of whether you've implemented the specification or not; we're going to focus on
contributing to society and human well-being, on avoiding harm, on being honest and trustworthy. There's a shocking one for you: be honest and trustworthy. You'll figure it out — you know what I'm talking about. The whole thing can be found at ethics.acm.org. They've also done something that I really like, and that I hope gets expanded: an "Ask an Ethicist" feature. Because I think the key here with ethics is not answers — don't seek answers; seek to ask questions. Tough questions; questions that may make people feel very uncomfortable; questions, by the way, that won't necessarily have nice, neat answers. These questions are going to be complicated, but it is the act of asking them that allows us to consider them. If we don't ask them, we're going to simply do the wrong thing. So we need to be able to ask these questions, and I think that if you've got an organization in which question-asking is encouraged, you will find that you increasingly do the right thing — that you are less likely, I think, to drift with respect to these principles. But it's up to organizations to really initiate and support this discussion. So if you are a leader in your organization, you can encourage people to ask those questions — even, by the way, if you find them annoying or tedious. For example, after the Gab incident we asked ourselves a bunch of questions, and we have a customer, for example, that sells tobacco in the UK. Should that be a customer? That's an important question to ask. I'm not sure there's a pat answer, but it's an important question to ask, and you want to encourage those questions to be asked even if the answers are difficult or nonexistent.

So, in terms of Andreessen's corollary: what follows from Andreessen's prophecy is that we are going to be encountering these dilemmas a lot, and we need to equip ourselves to deal with them. And I actually think that if your organization supports this kind of productive questioning of the larger endeavor, you allow yourselves to have differentiation in the market, because what I see out there is a lot of software engineers — probably a lot of people in this room — who want to do the right thing. We want to be proud of our work. We don't want to create a platform that's used for violence; we don't want to create a car that kills people. We want to lift people up and delight them. That's what we want to do, and I think we are going to be drawn to those organizations that take that mission really seriously and allow their larger endeavor to be questioned. But understand that ethical quandaries are where we're going; there are not going to be pat answers, and it's our responsibility to ask the questions directly. Thank you very much.

[Host] Thanks, Bryan. So with that, I think we've got about three and a half minutes for questions — time for a couple of questions.

[Bryan] Sure. What I'm going to do — because the whole point of this is asking questions — is hop onto the Slack after we're done, and I will answer questions, such as they are. A lot of the questions are going to be unanswerable, but I will participate in that questioning until everyone runs out of steam. So that's what I'm going to do afterwards.

"In the case of Tesla, is this more a sales issue than a software development issue?" Totally fair point: hey, this is a marketing
issue — I built the thing that actually does the thing I was told to build; I built an SAE Level 2 system, and if that's being marketed as something else, that's marketing's problem. And I think: not exactly. Don't let yourself off the hook that quickly. What are you doing inside your organization to actually correct that disconnect? And when you look at other aspects of the vehicle — the fact that Tesla doesn't have lidar is not a sales issue and not a marketing issue; that's an engineering issue. So I think we need to not be so quick to absolve ourselves of responsibility.

"Should I be concerned about people who lose jobs to my automation solutions?" That's a great question: am I going to be developing software that's going to put people out of work? That's a great question to ask yourself, and I think the specifics really matter — I'm not going to give you a pat answer. Should you ask yourself that question? Yes, absolutely. Does it mean that any software that eliminates a job is a problem? No, absolutely not. If you look at the jobs in 1900 and then at the jobs in 1800 — look at the top ten jobs — almost none of those jobs still exist among the top jobs. Society changes; people have to change. But you should take your responsibilities seriously, and part of why I assert that this is incumbent upon all of us is that we need to lift more people up and allow more people to create software, rather than simply being disrupted by it. What's the next question?

"Do I believe it's true that Google and YouTube fed users propaganda videos — flat earth and so on — to keep them watching?" That has been emphatically true at times: Facebook has deliberately created dissent, friction, conflict, extremism in an attempt to actually create eyeballs — and, by the way, really inefficient eyeballs. Look at the conversion numbers on their ads; they're terrible. Only a small fraction of a percent of people are clicking on this; the rest of the people are disgusted by it one way or another. Why are we focusing on that small percentage? Oh — because they're actually revenue. That is a problem, it absolutely is a problem, and I think, honestly, if you want to put newspapers out of business, could you please take on some of the societal responsibility that newspapers felt about how they held themselves in society? So yes, I do think it's a problem, I think they have done it, and how would I judge it? Thoroughly and repeatedly: you cannot be deliberately focused on creating dissent because it's eyeballs.

"What should we do to address ethical dilemmas before they become tragedies?" This is where I would start: read the ACM's updated code of ethics — I think it's pretty good — and make sure the people on your team read it too. And I would say kick off a discussion: in the software we build, what are the questions we should be asking? Because, again, this is about asking questions, and I think you will find that in the process of asking those questions you'll get your own sense of clarity — hopefully before these things become tragedies. They won't always; we're not going to be able to prevent everything. But we can at least take the responsibility to
ask ourselves those questions. Do I have time for one more? No — we're done? I'm sorry; I'll be on Slack. Thank you very much.

[Host] Thank you so much, Bryan. [Applause]
Info
Channel: CraftHub Events
Views: 18,987
Id: 0wtvQZijPzg
Length: 46min 24sec (2784 seconds)
Published: Thu May 16 2019