Beyond the Output: Navigating the Ethical Challenges of Generative AI | NVIDIA GTC 2024

Video Statistics and Information

Video
Captions Word Cloud
Reddit Comments
Captions
welcome to GTC my name is Michael Boone I'm our trustworthy AI product manager and it's my pleasure to welcome you to beyond the output navigating the ethical challenges of generative AI couple of housekeeping items for today's session one the session recording will be available within 72 hours of this live session two there are no questions but you may Engage The panelist after the session if there is time don't forget please download the GTC app it's a way for you to connect with attendees check out the latest posted sessions and even provide us some feedback fortunate for you today today is Monday so our keyot is at 1M sharp in the sap Center we hope we will see you there finally between 4 and 6 today we have a poster reception it's an opportunity for you to engage our Global researchers and to really Advanced really Advance the next area of AI now let's get started with our actual session today we have four panelists for you our panelists are as follows our moderator is Nikky Pope our senior director for AI and legal ethics at Nvidia Brian Green director of Technology ethics at the merula center of Applied ethics at Santa clir University Gina moap Who is is the community engagement lead for Mo for the Mozilla foundation and then Ian Cunningham who is our Deputy general counsel at Invidia who and leads the teams responsible for uh intellectual property AI governance and data privacy fun cool fact about Ian Ian uh was a grad student and um before he went to law school he just he figured that AI was moving too slow and so he wanted to make things a little bit quicker and so he's now a part of our team and here he is today advancing the very state of AI so with that Nikki I'll hand over the panel to you enjoy the session um so since Michael can you guys all hear me because I can't tell okay um since Michael introduced everybody I'll just jump right in with the with the conversation um so uh generative AI models have been in the news recently for generating content that reflects some type of bias for using controls to produce diversity and output or by stereotyping members of certain groups the data used to train the model may be biased the de developer of the model May introduce bias intentionally or unintentionally the controls used to manage prompts or model outputs may also be biased and even human human data labeling processes May inject bias into the data so with that as our sort of Baseline um I want to to ask Gina you first um what is algorithmic bias and how do we detect and mitigate it thank you so much Nikki for that question um algorithmic bias is a presence of systematic and unfair discrimination in the output produced by algorithms we have seen several cases of algorithmic bias where Dr Joy balmy a former MIT student embarked on an art project um called Aspire mirror while attempting to create a mirror that was supposed to display inspiring images on her face she discovered that the computer vision software could not detect her face until she actually wore a white mask um and upon investigation she found that the training data that was used to train the computer vision software primarily consisted consisted of um white male faces leading to a lack of familiarity for faces like H um this encounter um highlighted the bias issues in facial recognition Technologies Amazon opted to use AI in their hiring process to sort resumés the algorithm rejected all female um resumes um indicating bias towards females and the um the New York Insurance Regulators launched an investigation into the United Health Group following a study revealing that the United Health Group algorithm prioritized health care for healthier white patients than sieker black patients um displaying bias towards um um black people so for me the question of detecting bias goes hand in hand with mitigating it and also avoiding it from the onset in AI machine learning um it there is a t coin garbage in garbage out which simply means that the quality of output is only as good as the quality of input you would think it's very obvious that for us to mitigate buyers in AI algorithm is to use diverse and representative data sets but we still do have algorithms that have been trained on um biased data um secondly we need to use bias metrics we need to um conduct algorithm impact assessments to evaluate the social economic and demographic impacts of of algorithms we also need to um develop regulations and laws to guide algorithmic development then after establish ethical review boards to check and assess if these algorithms actually follow and comply with these regulations and lastly we need to uh conduct on going monitoring to assess these algorithms in real life scenarios update them as um biases emerge thank you if they understood the training data sorry I don't understand what you mean by that we're going to move on um so Gina what you were talking about um in some uh sense you might think that uh transparency of how the how the model was trained what it was trained on and um how it performs would be uh would at least inform users of uh of of what the model is and how it works yeah Brian um is it sufficient to be transparent about um an AI model's unintended or unwanted bias or should developers try to mitigate bias in the model itself the answer to that is certainly both we want to make sure that not only the the we are transparent about the biases that are found in a model but also as much as possible to try to reduce those amount of biases so for example um if you if you think about if you have a loan data set or something like that a loan data set for example could have lots of historical bias in it in terms of who is receiving those loans and who wasn't you want to as you're looking at the data set of course you want to analyze it and figure out what's in there and analyze what sort of biases are going to be found in there because there will be biases found in it and then you can do various things first of all you want to acknowledge what those biases are once you find them make sure that people know what those are then there are various measures that can be taken to debias that data um now any sort of debiasing is always going to risk the the possibility of introducing new biases into it um which are just slightly tweaked or in other ways uh so at some point we have to really decide what is an acceptable level to bring this down to perfection in this case might be impossible but better and worse are very real that's the way I like to think about it Perfection is impossible but better and worse are very real we want to try to do the best that we can while acknowledging that Perfection might not be possible okay um we're going to shift gears a little bit here s and and talk about um intellectual property so since chat GTP was introduced um what was about 18 months ago there have been questions about the Gen the impact of generative AI on intellectual property and other laws and um what implications do you expect generative AI to have on copyright law yeah it's an interesting question um there's a lot of um discussion around generative Ai and the impact of copyright there have been a couple of cases um most of you will probably have heard that the US copyright office has determined that output from machines is not cop copyrightable by itself and um it's going to force us to really rethink what is the purpose of intellectual property law to go back to fundamental principles which is that intellectual property law exists to protect human intellectual effort that is how it has um always been formulated I think that that's probably not going to change in my view and the impact of that is going to be that it's going to be harder to figure out what is the human intellectual contribution to a particular creation and what is the machine machine contribution um I think there are a lot of reasons why intellectual property will not be extended to machine Creations you know for one it's not the traditional human intellectual effort it also doesn't make sense for machine Stone property right so the property part of intellectual property doesn't make sense yet that's right but you know more fundamentally the whole regime of intellectual property exists as an incentive for people to create things and you don't necessarily need to incentivize machines to create things right and so I don't see intellectual property being extended to cover machine Creations what I do see is courts and um other decision makers um having an increasingly difficult time sorting out what is the part of human intellectual creation that needs to be protected and what is the machine part as that becomes combined and early on we've seen in the generative AI um output that is um solely or almost entirely machine generated you get a prompt and then you get an image but I think that people we're already seeing this are going to want more control over that output and we're seeing more than just prompts go in we're seeing sketches go in we're seeing combinations and that is going to make it a lot more complex for us to figure out what Creations have human components that deserve protection and what Creations don't but on the flip side of that I also think it's going to be a lot easier to create things and so the idea that you need to go take someone else's um creation to appr rate it for your own use I think it's become less um there's less of an incentive to do that when you can go create something easily yourself too so it's going to have two different impacts there but infringement cases are going to get a lot more complicated um one of the things one of the the issues with with intellectual property is ownership Brian who do you believe owns the generated content produced by AI systems and what other considerations in making that determination so one of the things to consider would be that this is a collaboration right it's the there are humans who are who are prompting the AI humans who created the algorithms humans who collected the data set in the first place of course if the data is based on humans then then then that is human originated also um and then using the machine learning model to process that um introduces of course the the machine side of it I think that all of that human input we need to remember is human and uh therefore there are property uh aspects to it as Ian was just saying um when it goes through the machine learning model then that adds a certain amount of complexity to it in terms of uh you know how exactly are we going to balance this who owns the model who owns the data who owns the algorithms who owns the the who are the workers at the company I mean all these things happen in an organizational context also and I think that organizational context ultimately forms the container within which all these things happen but uh speaking speaking kind of from a from a foundational level I think we really need to remember that humanity is pervading this entire thing right it's an organization that's made of human beings and ultimately in terms of responsibility uh because somebody has to be responsible for these models then it needs to come down to human beings also um leads me to actually a good segling into this uh question a court recently ruled that Air Canada was liable for its chatbot giving a customer incorrect information that cost the customer hundreds of dollars could Air Canada and this is for any of um do could Air Canada do you believe Air Canada could have shifted liability for the error to the customer um and if so how Ian they certainly could have tried I suppose um I you know I don't my fundamental starting place with a lot of these AI tools is there's no different than any other software tools and so if a regular software tool a regular software tool you know made a mistake that was controlled by a company and uh fool the customer I think the company's liable for that setting it up right and the AI tool is no different it's more complex in the sense that they're new and a little bit harder to control in the chatbot context I think they're going to get easier to control over time but all of the existing rules and regulations um all of our existing policies even in Nvidia that apply to software also apply to AI tools AI is not an exception to that and so I think that if you're issuing a tool if you're using a tool deploying something and it ends up harming someone all of the existing laws and regulations are going to apply to that so I don't think it would be successful to shift any kind of liability out of the customer there um I'm going to shift GE gears again here H we're in an election here in the United States actually all over the world and we've already seen deep fake images and voice clones of political candidates with a high number of um high-profile elections happening around the world um concern about misinformation is increasing so Bri how can we prevent this harmful uh this type of harmful manipulation of AI systems I think the first thing to remember is awareness we need to first of all recognize that it's possible and when we are recognizing that then we need to go the next step and produce you know broad widespread public awareness is really really important here um and then we need to actually do something about it people need to be responsible for if you produce a deep fake that then is misleading people somebody needs to be held responsible for that if it's an organization doing it the organization needs to be held responsible for it there are once again just because we have a certain sort of Technology doesn't mean that the technology gets to be blamed for what's going on here there are human beings who are using that technology for purposes and for ends that could be good or bad and we need to be able to evaluate those ends and those purposes and make sure that people are really using these Technologies for good purposes and not for bad purposes now as we get into new technologies of course there's going to be a lot of deception possibilities here because of the newness because of the lack of awareness because of all the different considerations that are involved but uh we really need to go the next step and recognize that uh once again it's not the Technology's fault that people are using it for bad purposes we need to actually hold people responsible so Gina how can developers policy makers researchers other stakeholders in AI um work together to address AI misuse and misinformation um addressing AI Mis use and information does um require a collaborative effort and I think um developers researchers and policy makers can work together to promote knowledge sharing because they are all from different domains so developers can um share insights into the Technology's limitations and capabilities while researchers can contribute findings on AI behavior and vulnerabilities policy makers can then Pro provide context for regulatory uh decisions um Ian do you see any First Amendment concerns or limits on what the government can do regarding uh deep fakes and Mis misuse of AI yeah this is a complex question um the first amendment is going to ultimately play a big role in any kind of regulation in this space um it is um not against the law it's a lie um and uh depends on the impact of course it's the old saying it's um you can't yell fire in a crowded theater so it depends on the impact of um the LIE potentially um that's why any kind of regulation in this space is going to have to carefully consider what the actual risks are and to narrowly tailor what the regulation is to address those risks I think there's going to be um a lot of back and forth in that sense because um different states are probably going to try different types of regulations and there's going to be a lot of challenges in the courts over this and I think one of the things that is going to um play out is we're going to have to assess what is the actual risk and harm associated with this there's a bunch of commentary about how deep fakes and disinformation don't have as much impact on the public as um people would have thought um and also some counter commentary that it does have some impact but it's at the margins and in a close election that could make a big difference so I think there's going to be a lot of discussion about what are the actual impacts what is the harm and then what are the specific mitigations that governments can take to address that okay so AI safety is another concern of governments around the world last fall the UK held an AI safety Summit and the US Department of Commerce has formed the AI safety Institute Ian back to you again what safeguards could be implemented to prevent malicious actors from exploiting vulnerabilities in generative AI systems well there's a lot of safeguards nvidia's got some of them um one of the easiest ones that we've seen rolled out of course are prompt filters um to make sure that folks can't use specific types of prompts those are difficult and um they're very rough around the edges in the sense that it's hard to make a rule because safety really depends on context so just to give a small example um you can use a lot of these llms to um draft fishing emails spear fishing emails that are custom for a particular individual um to prevent that you'd really have to prevent the llm from drafting emails in general because it's very easy to fool them into drafting an email for any purpose so AI is really context dependent and that means you're going to have to have multiple layers of uh safety multiple layers of tools and you're going to have to think carefully about the context in which you roll those out prompt filter is one of those um you also have the reinforcement learning with human feedback that you've seen in a lot of these sometimes they can make the models um too conservative so that they won't respond to relatively benign questions so you can air on on one side of the other um but you also have other more expensive mitigations like other llms that are judging the outputs of the primary llms and determining whether there is harmful content in the output that's a lot more expensive but in some cases it could be a reasonable approach if the harm is potentially great enough well one um we've heard and I've read a number of Articles talking about open sourcing this technology as a way of uh improving safety because different researchers and different people will have an opportunity to play with it and improve it um what are some of the tradeoffs in deciding whether to open- Source model and I'll start with Ian but anyone can answer that yeah I think we just had news recently that uh Milan mus model has been open sourced it's I think a 384 billion parameter model so we're getting a lot of these big open source models now um there are considerations because of the unknown um use of them in terms of the potential risk of open sourcing them but we're also seeing a lot of really helpful research that is being done on open source models that can't be done on closed Source models or can't be done on models where you only have access to the blackbox output you know for example I think last late last year there was a paper called represent representational engine in which went through the Llama 2 open source model and actually identified within the weights uh layers and nodes that were responsible for um attributes like truth and power seeking remarkably and that kind of Safety Research can't be done on closed Source models and I think it's hard to predict which teams are going to be doing that kind of research so there's a huge benefit potentially to open sourcing these models as well to give them to different researchers and allow them to experiment with them and figure out what are the best ways to control these models on the flip side of that if the model is open source maybe you can talk to this point um is there a greater chance that it would be misused uh and abuse and how do we address that as part of that tradeoff yeah one of the things that we at the market Center we have a toolkit that we that we have been putting out there for for uh you know anybody who's developing technology to use one of the tools is think about the terrible people there are there are bad people out there um and of course we want to give good people all the tools that they need in order to use technology for good purposes and that'll provide you know social benefit and all sorts of you know Wonderful opportunities for people but we also need to remember that there are bad people out there they're going to use these Technologies for bad purposes um now one of the things that we can do also with open source is that by making the technology available folks who want to stop the bad people from using that technology also have access to the same tools that you know Bad actors would have access to so there's this constant balancing back and forth um it's really hard to know ahead of time what the balance is going to turn out to be so we need to be constantly looking at society and thinking to ourselves are we getting more benefit or cost out of this is this actually helping people in the way that we want it to or is it ultimately causing harms to society that we don't find to be acceptable and if they're not acceptable then we need to figure out ways to regulate that reain people in uh figure out hopefully how to still get access to the the good side of the technology while at the main same time trying to minimize the harms so one thing that um uh is consistent across all AI is that it it crosses uh geographical boundaries and cultures Gina how can we ensure that AI systems behave ethically and align with so societal values and who societ of values yeah um I think um you mentioned that it is contextual so it's not a one siiz fitall um approach um it needs to be adopted and contextualized according to where it's going to be deployed um Dr Joy baly I keep referring to her because I really follow her work and I think she's doing great job in the um AI space um she emphasized the Inseparable link between social and technology so it's not two different things and AI really um should prioritize people um at core um AI development should be um developed in alignment with societal values with human rights as well as um societal Norms um one of the four things that I think are important in making sure that AI is safe as as fairness um responsibility accountability as well as interpretability and interpretability in a sense of explainability so um developers and tech companies should explain and provide documented explanations of all the AI systems that they deploy into the public explain the source of the data explain the bias in the data set and explain the bias in the modeling process explain the bias in the in the final outcome and um I think that we cannot entrust AI with crucial life decisions you know such as determining eligibility for a home loan for a College admission um and also accept an explanation that um it's it's it's Black Box we don't know how AI makes the decisions that it makes um Kathy oel explained it um nicely and said people are suffering um distractions uh of MTH okay um Brian there are always unintended consequences and harmful outcomes even if you know Developers ask the questions and think about the bad people um what measures can be taken to address the potential for unintended consequences or harmful outcomes so of course there's there's a legal approach to it which is which is uh great you know law obviously has a has a role to play here but also on the ethical side of things I think that one of the things we need to really be thinking more about is ethical education all the way from you know from the start of Education all the way to the end of Education and then continuing on through life and we need to be uh constantly aware not only that the world is changing around us as as Gina was saying uh you know the world's a social Tech soot technical system it's not simple it's constantly changing uh people are changing organizations are changing technology is changing governments are changing there's just a constant dynamism in the world that means that we have to constantly re-evaluate the way that we're looking at things and that means a constant uh you know uh attempt we need to be constantly attempting to maintain our awareness of this environment around us uh something that might seem simple one day might turn out to be very complex the next day just because the D the dynamic environment has changed and so as we're putting forth Technologies uh we need to be not only aware of the technology that we're producing but of the way that it's being used and also the uh you know potential for exploitation that's being developed as these Technologies interact with society and it's it's one thing of course to to think about your own Society but also remember the world really is a very vast place and so there are going to be folks in other parts of the world who are using potentially a technology that you create in very different ways than you might expect uh and even in potentially very negative ways a government you know a bad acting government could pick it up and start doing very bad things with it and uh we really need to think not only you know law is one regime law Works within an organization or within a within a certain Nation um and ethics of course also works within certain groups of people but if you start talking about how do we have something International then we need to start thinking at the international level which of course add just another level of complexity to things yeah so how what can developers do or how can developers collaborate with these Regulatory Agencies other decision makers stakeholders industry Partners to make sure or to at least attempt to make sure that the AI that we um distribute it sa any one of you or all of you I think I think one of the best things that developers can do is provide options to uh uh policy makers um and to customers our customers for example at Nvidia to um roll out systems that are safe because safety is such a contextual thing it's hard to know in advance what the right mitigations are um especially in an international context right you might not know how things are used in an international context that could be harmful harmful to local communities and so what you need to do I think what's really helpful for developers and Technical experts is to um come up with that list of all the kinds of safety mitigations that you can have you know I think a lot about the GFCI outlets and and uh bathrooms where you've got electricity and you've got water building codes mandate you've got some special safety device in that location that's not a safety device that you need at every Outlet because it's not as important for every outlet but those are the kinds of options I think technical experts and developers needs to come need to come up with have a list of those work with Regulators to determine what is appropriate in what context and to give options to um regulators and local communities if they need to mandate specific mitigations and specific circumstances Gina Brian um for me I think they can provide um feedback to the regulations that are set by policy makers um and maybe highlight some of the potential unintended consequences of specific regulations um yeah what I would say is that we we should think of society as kind of being a multi-layer defense against bad things happening developers you're the first layer right you you you're the people who create the technology and when you have the technology you understand it better than anyone else um there are other layers there that can also help to defend against bad things happening and exploitation of the technology and Bad actors using it and abusing it but uh at the very first step you got to you know obviously do the best you can it's impossible once again Perfection is impossible uh you're never going to get everything right all the time but it's important to be aware of if you see for example the government says hey we're getting a problem here or another organization or or other groups of stakeholders tell you about problems that come that show up uh just remember that you're the one who knows the technology you're going to be the best person to come up with a solution to it so one of the questions on on everyone's mind is how to regulate Ai and we've talked today um and each one of you have mentioned some form of Regulation or control um the EU is poised to adopt or to complete the adoption of the AI act just last week they voted to approve it um US Congress and um individual states like California um are proposing numerous bills to regulate AI so Ian and Brian um what role should governments and Regulatory bodies play in overseeing the development and deployment of generative AI technology which one of you wants to go first why don't you go first um yeah the EU AI act I think is an interesting model for that right because it's got a list of potential applications that are just straight out prohibited um Mass surveillance in a lot of context um automated decision making and things like that and a lot of states have followed suit and have banned use of not just AI Technologies but any kind of software Technologies and automated decision Mak and and I think that that's um probably the right approach for Regulators initially is to identify those relatively narrow use cases where um the use of any kind of automation especially for a significant decision is just outright prohibited at this point um you need some sort of human in the loop some sort of app pellet process um some kind of due process involved in those kinds of decisions and I think that is appropriate I do get concerned about broad regulations that don't address specific risks um and the reason for that is it's it's easy to identify speculative harms I think it's a lot harder to identify the opportunity cost of failing to innovate and um you know I'm excited about the potential for um a lot of these AI Technologies to develop new drugs and Therapeutics um to help struggling kids learning with AI tutors and even the technological innovations that we're seeing right now around Fusion that opportunity cost is is is a real cost that is a little bit harder to identify and we need to make sure that when we're regulating we regulate specific harms specific real harms and and not strangle that opportunity cost before we have the opportunity to realize it did you want to add anything yeah the the the thing I would say is that uh speaking as an ethicist one of the things I like about ethics is that if everybody just made the right decisions off the bat then we never need to actually get to the legal level um and and you know that's an Ideal World right we're never going to live in a world like that people people are going to make mistakes either intentionally or unintentionally so we're never going to have a perfect world but uh once again if we can really emphasize the ethics level of it then the problems never get out into society or if they do get out into society they get addressed really quickly because people are trying to really trying to do the right thing so since we live in an imperfect World though there is this need for regulation and when we're doing regulation once again as as Ian was saying there's a potential there with opportunity cost that the regulation will go too far and it will start causing problems there are also uh bad uh interactions potentially between regulations I was talking to a person recently who said I am legally mandated by this by this government to both do this and not do this at the same time and what am I supposed to do and I said you need to contact the government I can't answer your question you know talk to talk to some lawyers I I can't handle this um and so we just need to be aware of that right there too much regulation really does present an opportunity for mistakes and big mistakes to happen especially because the connection between the developers of technology and The Regulators technology is often not as strong a connection as it needs to be that's actually a good segue into this next question for Gina and Ian um we've many of us probably everybody in this audience has watched a congressional hearing on some form of Technology where senators and members of Congress are asking questions that they it's obvious that they don't understand how the technology Works um and so is there a role that developers can play to help um uh The Regulators and policy makers come up to come up with the right kinds of policies and regulations uh in other words what can the people in this audience do to make it uh to make the laws that are going to come down um make sense for us um I I really think that engineers and um entrepreneurs technologists in this audience and one of the best things you can do is come up with options for Regulators um they don't know um what is possible in terms of the risk mitigations um we don't know I think there's a lot of research right now we're seeing a lot of papers on the open source models about how these models are controllable and the more research that is done in that space the more they have options lightweight options to do this regulation the better the laws are going to be if the only thing that they know how to do is shut the models down to prevent specific harm then you're going to potentially air on that side but if there's a lot more technological innovation to say that we can actually dial up the truthfulness of these models effectively we can dial down power seeking or deception effectively then that's going to have a lot more impact on um these laws than I think we anticipate right now there'll be a lot more options um for me I think also there should be a balance between the regulations and them hindering Innovation because the development of this models happen happens at a very quicker rate so I think um it's important to note that and try to find ways in which we can have the balance between the two because we also don't want to be in a world where um an AI model is sitting at six months down waiting to be regulated and and approved um so this is a question that we've talked about a lot Ian what existing laws and regulations apply to AI yeah all of them you know is my glib answer um and this is something that US government agencies have been saying quite a bit you last April uh you had a joint statement by the FTC with a bunch of other government agencies saying all of our existing laws and regulations applied to AI there is no AI exemption in the law that's a really important Point as policy makers think about regulations and we saw that recently um with the fake Biden calls in New Hampshire I think during the primary where the FTC came out and said you know in our view the existing law applies to this and if you read the law it clearly applies to it it's a fake you know uh call and um there was no contemplation of AI at the point but you're not allowed to make fake calls the law already addresses that and I think in a lot of the instances that people point to about AI harm there are an existing set of laws that already apply and I think that that is um a really helpful thing to think about in terms of software too AI is a version of software it's a very exciting version of software but we have a whole bunch of laws that are already apply to software 2 we have a whole bunch of laws that apply to harm from any kind of product that is sold have a hundred years of product liability law that governs the design manufacturing and marketing defects associated with products and AI products are not any different than those there are a couple of areas I think there are some gaps in the law but I think those areas are actually relatively few and far between okay um last question how can Regulators or regulations be effectively designed to Foster Innovation while at the same time addressing ethical concerns and protecting societal societal values and I'd like each of you to to speak to that all right I'll start I'll just say that is the entire problem right that that is the really really core issue how can we how can we actually try to regulate something so that we're getting all the benefits and none of the none of the negatives and and once again not possible impossible to get that Perfection uh so it's going to be a constant Balancing Act between different groups in society saying we think it's operating well others saying we think it's operating badly uh you know and and it's best to be very particular right it's it's best to be as precise as possible when when looking at an ethical problem say you know it's this particular model when it does this particular thing can we solve that problem then you can figure out who to talk to to solve that particular problem whether it's the develop ERS or talking to Regulators or looking at particular use cases um the more Precision that we can have the the easier it is to uh actually solve the problem on the other hand then you look at the world and you say we have a 100,000 problems how are we going to how are we going to look at this at this scale level um and that's where you really start having to have once again the social conversation and a prioritization problem of looking at what are the highest priority problems we need to solve once again there's going to be a social debate about that um and then you know get getting government and other Regulators involved as necessary um collaboration collaboration and collaboration uh as developers um different stakeholders I think we need to include NOS government because obviously this is something that is impacting almost everyone um so um we also need to get different context from different role players in the in the all AI ecosystem so I think if we have um I don't want to call them ethical review boards but some sort of board or commities that encompasses of um developers researchers government stakeholders and other key players to try to find a way um that can serve all the players without compromising um another part I think we need a risk responsive approach to this collaboration and um that requires a good understanding of what the risk is one of the things I would love to um see more from um Engineers is benchmarks to help us understand what is the risk associated with various models we have a lot of risks around performance or I'm sorry we have a lot of benchmarks around performance right now we don't have a lot of benchmarks around danger for some definition of danger and it would be nice if we had that if we could put a model through a series of tests and then come up with some agreed upon um assessment of how dangerous the model is in specific contexts that would allow us to do a lot more thoughtful um mitigation so um we have eight minutes left and I'm out of questions so I'm G to ask you to um tell all these fine people one thing related to ethics and AI that you want to see your your AI ethics wish uh one I think my my AI ethics wish would be that organizations really start to take ethics as seriously as they can um once again I think the the right level to be doing a lot of this technical regulation at is at the level of the development of the technology itself if you can create the best piece of technology and of course everyone wants to create the best piece of technology but there are constantly trade-offs and and and time limitations and things like that but if you can really do a good ethical job right at the start then the problems hopefully don't get out there into society in the first place place and I think that the the more that it can be done at the level of the organization and the team and even the individual then the less we have to worry about big problems showing up out in society um once again Perfection is impossible there's always going to be stuff that gets out there and then in that case the organization needs to be responsive to stakeholders out there in society and hopefully can figure out ways to fix the technology as quickly as possible great uh Gina what's your ethics wish uh my ethics wish is um to to to to live in a space where AI serves everyone not uh a certain group of people and also not for it to enforce the current systematic injustices and um um biases that already exist I want to see standards um that are helpful for developers as well as for governments and we're already seeing that play out with a lot of the government action in response to the bu an executive order on AI nist is developing an AI um framework and that's going to be really helpful for developers because right now when you're developing models you're kind of on your own in terms of assessing risk and that is problematic not just from an ethical point of view because you kind of have to invent the wheel again with that but it's also problematic because when you roll these models out it's not clear whether you've done enough or not if you have a standard that is in place that everybody's agreed upon that says this is the right way or at least a reasonable way to develop models then you can have some confidence when you put those models out in the world that you've done the right thing that you're not going to get sued and put in front of a jury who's going to second guess every decision that you've made there's a lot more predictability in having that standard there's a lot more predictability from the government's point of view that they're going to have Safe products on the market so I'd like to see those standards roll out as quickly as possible in a reasonable way yeah yeah I would just say U you know I I I I recognize that I just put a big burden on everyone right saying figure out in your company and we'll never have any problems out there one of the things that the Market Center tries to do as an organization is provide people the resources that they need in order to solve these sorts of problems all the resources are on our website they're free we have a handbook that talks all about how to build ethics into your company in a five-step process we have uh ethics and Technology practice toolkit which was the toolkit I was talking about think about the terrible people as one of the parts of that toolkit there's also thinking about the benefits of Technology uh risk sweeping thinking about ethical premortem and postmortem expanding the ethical Circle in other words be more inclusive in the way that you're looking at the world and and also making sure that you have feedback channels to get those feedback uh from people out in society that you need to hear about so yeah I don't want to just go out there and put burdens on people but also say there are lots of resources out there and and of course there are vast resources beyond the market Center's resources also there are lots of resources out there to think about ethics and really uh think about how to make the best choices when you're designing the technology in the first place and I will end with my ethics wish um about a year ago I read an article I think it was in ours Technica where they um reporters went to University college and university campuses in in the United States and asked computer science students um who is responsible for ethics in in technology products and no student thought it was their responsibility as a computer scientist to think about ethics that that was someone else and so my wish would be that ethics becomes required courses for computer science business I mean you know pretty much everybody but that ethics become required courses for people who are going to build these systems that are going to um impact Our Lives as they already do so I want to thank the um panelists for a really interesting discussion and um I thank you for your time and attention Yep this concludes the GTC presentation thank you
Info
Channel: NVIDIA Developer
Views: 528
Rating: undefined out of 5
Keywords:
Id: O35FHL-PXDw
Channel Id: undefined
Length: 46min 0sec (2760 seconds)
Published: Fri Apr 12 2024
Related Videos
Note
Please note that this website is currently a work in progress! Lots of interesting data and statistics to come.