Artificial Intelligence and National Security: The Importance of the AI Ecosystem

Captions
Well, good morning everybody, and congratulations on getting through the rain. I know it's been kind of wet and soggy, and usually when we have rain like that 90% of the people skip it, so I'm very pleased and proud to see everybody here. Thank you for coming. My name is John Hamre; I'm the president here at CSIS, and my role is largely ornamental: to say welcome to all of you. I do want to make one kind of safety announcement, as we always do when we have groups from the outside. If we do hear an announcement — we've never had it happen in the five years we've been here, but if we do hear an announcement — Andrew is going to be responsible for you. We're going to exit through that door or this door; they'll both take us to the stairs right by that door. We'll go down to the alley, we'll take two left-hand turns and a right-hand turn, and we'll go over to National Geographic and assemble there. Now, they've got a great show right now at National Geographic. It's about the Titanic. People don't know that discovering the Titanic was actually a cover story for a secret Navy mission. We had lost the Scorpion, the submarine; we wanted to find it and learn more about it, and so it was a cover story, if you can imagine. And after they had done all the finding of the submarine, they spent one week and found the Titanic. It's a great show. I'll pay for the tickets if anything happens; if nothing happens, you pay for your own tickets, okay? But do go see it. We're delighted to have everybody here, and I want to say thank you to Alan Pellegrini and Thales for making it possible for us to do this study and have this conference with you today. It's about artificial intelligence. These are the most often spoken buzzwords that nobody knows anything about in Washington right now. We go through phases like this — I remember when big data was the buzzword in Washington.
Right now artificial intelligence is kind of a buzzword. Everybody is thinking about it, and there's really not enough intellectual context to understand what we're talking about. That's what this study is about. Andrew is going to give you, in a few minutes, kind of a background on the study that we're releasing today, and a little video is going to introduce it. But this is one of those very interesting questions where an enormous amount of momentum is moving all around us. What are the governance issues associated with what we're discovering, and how are we going to manage it? These are open-ended questions for which we really don't have answers. The purpose of our conversation today is to lay out a framework a bit and then to hear from three experts who are going to help us think about this in a more structured way. But before we turn to them, let me ask Alan to come up on the stage. I want to say thank you again for giving us this opportunity. Alan Pellegrini, please.

Hello, and thank you all for coming out, and thanks, John. We do sincerely appreciate the opportunity to be here with you and CSIS, so thanks very much. I just have a few minutes, so let me make a couple of points before we get on with the program. Thales is very proud to be a sponsor of this report and this event. I've had a chance to read an early draft of what you're about to see, and I can tell you it is filled with what we would consider very important and very interesting insights into artificial intelligence and its impacts. I'd like to use the few minutes here to tell you why Thales felt that sponsoring this project was important, not just to us but really to our entire industry. For those of you not familiar with us, we're a very large technology company, European-based but with a global footprint, especially here in the US and North America. We serve five very large vertical markets: space, where we're a large provider of satellites and payloads, including for the International Space Station; aerospace, both commercial and military aircraft and air traffic management; ground transportation — so if you ride the New York subway system or the London Underground, you've had a chance to experience our products in action; security, both cyber and physical security, such as airports; and of course defense, where we're one of the ten largest defense providers on earth. In each case you can appreciate that we address some of the most challenging and complex problems — those that really impact critical decisions and those that occur at the most sensitive times. In other words, what we're involved with affects lives. Now, we recognized a handful of years ago that there are emerging technologies that will disrupt our businesses and our markets, and they include areas like the big data analytics that John referred to, but also cybersecurity, the Internet of Things, and especially artificial intelligence, and we as a company have made significant investments in each in order to try to stay in front for our businesses. In the case of artificial intelligence, we're already incorporating this technology to help us solve some important — what we would call use cases, or specific problems. Just to give you one example: as a large collector of satellite imagery data, we now apply artificial intelligence to help decipher, or make sense of, satellite imagery — to detect sensitive or perhaps threatening items among a very large set of imagery data. That's a perfect use case for artificial intelligence, as are things like airport security, where we're doing facial recognition to help detect potential threats. But given the nature and complexity of the problems we help solve and the nature of the customers we serve, our success is going to depend not just on technology but on other topics, and we needed, we felt, to better understand the role of government in artificial intelligence and issues such as AI reliability — you're going to hear about explainable AI and verification. These are especially important topics
in, and critical for, defense applications. None of us wants to apply artificial intelligence to create the next Terminator, for example, and these in particular are very important topics — we need to better understand the hazards and risks. Most importantly, at Thales we simply want to help shape the conversation around AI, because we think it's important. We want to be part of a collective force for good and ethical applications of AI, to really help address our world's most foundational challenges. And so, with those objectives in mind, we embarked on this initiative partnering with CSIS. We felt they would bring, and could mobilize, the best breadth and depth of strategic thinking and expertise to successfully address this topic at this stage. So from a Thales perspective, we look forward to this presentation and panel, and certainly to the continued engagement with CSIS and all of you. We thank you for coming, and I thank you for the few minutes here this morning. Now we're going to see the brief video that John mentioned — I think it's about three and a half minutes — and I think it'll provide you a good summary and some of the initial insights that you'll hear the panel expound upon shortly. So thank you very much. [Applause]

Artificial intelligence is a uniquely complex field. We've thought about it for centuries, worked toward the modern version of it for over 60 years, and made significant breakthroughs, especially in machine learning since 2012. Yet compared to what it could someday achieve for us, AI is only getting started. Like any emerging field, wise AI investors will spread their bets, hoping that a few pay off big enough to justify the rest. AI will deliver the greatest rewards to those prepared to make long-term investments, but investing in AI applications alone will not ensure results or success. The advanced capabilities of AI depend on a complex supportive system: an AI ecosystem. When properly executed, this enables AI to take root, develop, and improve on human performance. But a fully developed AI ecosystem, and the kinds of results that justify its expense, don't happen overnight, and in many cases they don't happen at all. Most public and private organizations, including the Department of Defense, are woefully underinvested in the supportive structures that AI depends on. This creates a debt that must be paid up front to allow for successful AI adoption. Until it is tackled, the debt of an underdeveloped AI ecosystem will only undermine long-term success. The smartest investors have begun to understand how this AI start-up debt must be dealt with in their initial investments across the field. Since AI emerged as a practical reality in the past decade, the private sector has dominated AI investment. Tech companies are perfecting the painstaking process that made famous leaps like AlphaGo and Watson possible. Meanwhile, commercial adopters have begun to think critically about what can make their AI applications worthwhile long term. Many have seen the consequences of ignoring deficits in the AI ecosystem and have allocated their resources accordingly. [Music]

Most government AI adopters start out far more underinvested than commercial users. The Department of Defense exemplifies this issue. If the nation's strategic goals for AI are to be realized, investing in the AI ecosystem must be a top priority. This investment will also lay the groundwork for wider government adoption of AI. While they can and do leverage commercial AI for strategic uses, there are some areas where commercial developers will not invest. The technology required to deliver and verify AI results for national security applications differs from what is expected from commercial AI. It must operate in specialized, high-risk areas and be extremely secure, assured, reliable, and explainable. Verification and validation are essential for these systems. Development of this technology is vital to the national interest and must be fast-tracked. By doing this, the government can also make a critical
difference to AI and the wider commercial market. This public-sector development could yield vital breakthroughs in the field. A viable national strategy for artificial intelligence would require investing in the AI ecosystem to pay down debt, especially in the workforce, and spreading bets across the AI field. For the public sector, it will be critical to focus on AI reliability. If the government works closely with the commercial sector to drive the technology forward, the US can leverage AI to achieve its strategic objectives. [Music]

All right, well thank you again, everyone, for coming today. John opened by saying he was ornamental, and I'm here to tell you I think I may be a little redundant, because we tried to pack as much of our report's major findings as we could into that video which you've just seen. I hope you enjoyed it; it'll be out there on the web — on our website, on YouTube, Vimeo, and possibly a couple of other outlets — for those who want to follow up and watch it again. But I am going to briefly run you through the very top-level findings of our report. There's a lot more there — it's about a 78-page report — so I encourage everyone, if you didn't get a copy on your way in (we may have run out, based on the size of the crowd), more will be available through the website shortly. So let me give you that top-level overview. I did want to start by thanking again Thales for their support for this project. I also want to thank Lindsey Sheppard, who was the lead author on the report and really the engine behind the project, and our two other contributing authors, Robert and Leonardo, who worked very hard to put this report together and make it look good and sound good and actually have good thoughts in it. I also want to thank the attendees at our many workshops on this project. We had six workshops; there was a hard core of about 20 folks who came to most of them, if not all of them, and another group of 10 to 15 who came for some of the sessions. None of the errors of this report are their fault — I hasten to say that they can disown every aspect of it — but their insights were deeply valuable to us as we went through this project.

Okay, so John touched on the fact that one of the foundational questions when you're going to do an AI study is to start with the question of what you are actually talking about when it comes to artificial intelligence. It can be kind of a meaningless term, depending on the level of knowledge of the person talking about it and the problem that they're trying to bring illumination to. There's a lot of good work being done out there on artificial intelligence, so I want to begin by saying we've defined it in a specific way for this report — not because we want to criticize or critique anyone else's definition, but because we needed to have an understanding of what our scope was to do a useful project. Our project focused on what a lot of folks refer to as narrow AI; we didn't try to get into questions of general artificial intelligence and the issues and problems that that causes, largely because our time frame was relatively near-term focused — the next five to ten years — and our judgment is that during that time frame the issues of narrow AI are very much going to dominate how this field develops, and the significance that it has for people trying to implement AI solutions and for government actors trying to understand the technology and capitalize on it. By narrow AI we mean artificial intelligence technology that provides problem-specific, task-dependent solutions to cognitive problems. The way that AI operates is very different in many ways from what we would normally think of as human intelligence; the kind of problem-solving that a lot of these algorithms engage in bears little to no resemblance to what we would think of as human cognition. So, as we've looked at it here, the various AI technologies within the AI field that were of greatest concern
to us are not things that are trying to mimic human intelligence, but simply things trying to perform tasks and solve problems in whatever way they can. This study was a fairly broad look: we looked at issues of AI investment, issues of AI adoption, issues of AI management, and did a bit of a survey of international activity in AI. So it was a very broad study, and I think there'd be a lot to be gained by going deeper on each of these topics; what you'll see here is, in some ways, the highlights of the highlights, because it's a very broad look.

All right, so one of the things we tried to do with our first effort was to come up with some kind of conceptual framework for understanding the arc of AI, if you will. How does progress in the AI field happen, and how is it likely to proceed? There are a lot of different ways of looking at it: there are increasing degrees of autonomy in AI systems, and increasing degrees of collaboration in the way that AI operates, that lead to higher-order effects — higher-order applications of AI over time, we hope. And so we tried to capture that in a framework that we could visualize. As you see along the bottom, we look at AI capability starting with the very narrowest possible tasks — like a telephone switchboard, which is just trying to move communication between two channels or two users in an intelligent and accurate way, connecting the right people at the right time — and then building up toward broader and broader, or more general-purpose, tasks over time. One temptation is to think of that purely in terms of the AI acting entirely on its own and becoming increasingly autonomous, but that's not the only path, and for some defense applications it may not be the most important path, because as you've heard a lot of senior leaders in the Department of Defense articulate, we are not at a point now where anyone is willing to sign up for a completely autonomous activation of many critical defense missions — there needs to be a human in the loop. So one of the other dimensions is how the AI takes in the context of the problem, of the world in which it's operating, and of other actors — human actors and other AI actors that may be opponents or may be collaborators in the process — and the exposure, or ability, of the AI to move up through this hierarchy of behaviors: becoming more interactive, more collaborative, and then ultimately more able to act in a closer approximation of how a human-based intelligent actor would over time. There's a lot more discussion in the report, and I encourage you to go into it, but in terms of how it informed our later work, I would say we tried not to be overly focused on increasing levels of autonomy as the only dimension of AI progress. And what you will see — we have a disconnect on this chart — is that there's a long way to go. If we were to do a non-disconnected chart out to some of the applications that have at least been speculated about, you would see that we're still in the early stages of AI development; there's a long way to go to get to some of the more advanced applications that have been discussed and imagined. That wasn't what I wanted to do — okay, I wanted to revisit this chart on the importance of the AI ecosystem. If there's nothing else we want you to walk out of this room thinking about from this report, it's the importance of the AI ecosystem. This was our biggest takeaway: for all of the importance of the different ways of implementing AI technology — through machine learning, computer vision, and other elements of the field — there is something fundamental that gets at how this works and how it can be usefully implemented and managed over time, and it's this collection of things that we have termed the AI ecosystem. That's people — the workforce, the technical workforce that is developing, but also engaging, managing, and using
the AI technology. You see our little lock symbol here: it's the ability to secure the data on which the AI operates. That's the foundational piece of the tool — being able to secure it, but also being able to work on data gathering and data quality and those issues. It's having the computing power to process the data and a network to share data, so that the critical applications get the data they need when they need it, are able to do the training, and then are also able to do their mission-specific tasks. It's the policies required to actually manage AI — there's a lot of work to be done there. There's a very good report put out earlier this year by our Strategic Technologies Program which looked a lot at policy issues and strategy for AI, and I commend that to everyone, particularly on that topic, although they touched on a lot of things that we looked at as well. And then, last but not least, the ability to verify and validate what AI tools actually do. Particularly for government users and high-consequence missions, many of which we see at the Department of Defense, the ability to actually validate the performance of AI as it grows and evolves is not only critical but incredibly challenging — in other words, we don't know how to do that at this point in time, and I think we'll hear more about that in the panel discussion.

Okay, I said we did look a little bit at AI investment, and I want to give credit here: we leaned heavily on data gathered by McKinsey, who had a very good report on AI investment; on some work done by Govini that looked at AI in the federal government space and where it lies; and then on a great report called AI for the American People that the White House put out, which summarizes some of the White House investment. Some of the takeaways: there is very rapid growth in AI investment, starting in about 2010 and really starting to skyrocket in 2012, as the curve went vertical, driven a lot by machine learning and its takeoff as a technique for AI that was delivering very significant results. In 2016 you see that companies across the space — inclusive of private equity and companies investing internally — put in about twenty-six to thirty-nine billion dollars of investment. In terms of government investment, looking at the broadest possible category, which is NITRD (networking and information technology research and development), it's about four and a half billion dollars per year on average across 2016, 2017, and 2018. So there's a lot of money going into this. Now, I mentioned that it was a very broad category, and one of the debates you can get wrapped up in with AI is what constitutes "real" AI. We have somewhat sidestepped that debate, because our argument is that the importance of the AI ecosystem means that in many ways an investment in a critical computing capability or critical networking is so foundational and so supportive of what you need within the ecosystem that it's worth considering as an AI-related investment. So we haven't tried to split hairs here and say what is specifically an algorithmic investment, or what isn't AI because it doesn't meet some criteria; we have chosen not to get into that kind of fine distinction-making, because we think the categorical look here is important. We also looked at AI adoption: what does it take, and what does a user really want to know, or need to know or have, in order to effectively implement AI? We tried to do a bit of a survey of where you see AI being implemented. In the commercial space, obviously, machine learning has taken off in a big way; we see AI in the financial industry, the insurance industry, and the advertising industry in a big way, especially online. And I would say self-driving vehicles are another huge area of commercial investment that's driving the field forward. On the
government side, there's actually also quite a bit of AI progress, or forward momentum on AI, going on. We see this with image recognition, with logistics applications, and with unmanned applications. Project Maven is a relatively well-known government effort which has made a lot of progress in recent years and has gotten significant investment and leadership interest. The Sea Hunter unmanned surface vessel has likewise been driving the field forward on the government side. There's been some pretty innovative work done in the Marine Corps on the logistics side to try to capitalize on AI, and also within the F-35 program, in the logistics area, some incorporation of AI that we see taking place. For us, one of the big things that jumped out as we talked through AI adoption was the significant debts — the start-up debts — that have to be dealt with. This chart says "technical debt" at the top; it should say "technical debt and workforce debt," because we want to emphasize that the debt is at least as much, and possibly more so, on the workforce side as it is on the networking and computer infrastructure and data housing and data collection side. Without the workforce, you really have very little to work with. It's not as easy as just saying you can gather the data — there's a tremendous amount of data in government systems, and there's a lot of work to be done in making sure that it's quality data, useful data, and it's really the workforce that has to do that work. When we get into AI adoption, what we heard over and over again from our experts is that this issue of the start-up debt facing people trying to implement AI is the biggest issue out there. The specifics of the technology are not, right now, what is holding the field back; it is this start-up debt. Again, we've tried to incorporate that into the idea of the AI ecosystem — that there are debts to be paid across all aspects of the AI ecosystem — and that's what we hope will be the major takeaway of this effort. We also had a session looking specifically at the question: if we really have AI in hand and we're starting to use it, what are the issues associated with actually managing that usage? We primarily didn't think of AI as a general-intelligence entity, but if you think of AI as a unit of application — to use some DoD terminology — within the force, how would you command and control that force? What are the issues that would come up in really using it to achieve missions in a real tactical, operational context? What we discovered is that there are issues at the tactical level for how you use AI, very much tied to the workforce; but also at the operational level — the middle-manager level — where a lack of familiarity with or understanding of AI can completely stymie the ability to use these tools and frustrate the tactical folks, who may be young and innovative and able to embrace and grasp this technology, but who, if they can't get the resources and support from their next level up of management, can be completely stymied in their ability to execute; and then at the strategic level of broad organizational policies — in the DoD context, DoD policies, procedures, guidelines, and legal issues. Some of the things that came out on that side: there's a lot of work to be done on understanding intellectual property — the significance, ownership, and licensing of intellectual property in an AI context. When you have algorithms generating intellectual property, how does the management and ownership of that manifest? There's a tremendous amount of work to be done at each of these levels to really make AI useful, especially for high-consequence missions. And then there's trust, reliability, and security — something that was hit hard in the video. For an operational user to truly put AI into a high-consequence scenario, they need to understand how it
operates, what it's doing, why it's doing what it's doing, and how we know it's going to do it in the way we expect, or at least in a safe way, over time. And lastly, the study looked at some of the international activity in AI. Obviously this has gotten a lot of attention; we dig into it in some detail across a survey of countries. I don't want to run you through a death march of all of them — China obviously stands out as a huge investor here, Russia as well. One of the major takeaways from our international look is that the number of countries making significant efforts on AI is vast. It is truly a global competition that's going on out there. The Chinese are heavily committed, as are others. What these countries are seeking to do with AI varies very much; there are almost as many ideas about different ways of using AI to promote national interests as there are countries engaged in doing it. Again, here we return to the importance of the AI ecosystem, because from a global competitive aspect, I walk away from the study not so much concerned that a specific technology developed in China or in Russia is going to give them some indomitable advantage that will take them out ahead of the US or others in the world, or vice versa, as convinced that the robustness of the AI ecosystem will be what confers advantage over the longer term for countries engaged in work on AI. And then on recommendations — again, I refer you to the report for the full list. We think issues of AI trust and security are critical areas for US government investment; the degree to which this is required for high-consequence government missions exceeds, I think, where the field on its own will be able to go. There's a critical need for investment here, particularly, as I said, on verification and validation, where there's a lot of theoretical work to be done to understand how this is even possible, because AI challenges some of our traditional methods of doing test and verification and validation. I've hit already on our point about the workforce and the criticality of developing and nurturing the workforce — having access to the workforce, and being able to get the workforce into the organization. There is a lot that commercial industry is going to do in AI development; they will take the lead. But if we think that we can usefully use AI for government missions without AI-skilled personnel in the government, we're kidding ourselves. We need this talent organically, as well as out there in the private sector, where the bulk of the investment will be happening. Then there's the importance of digital capability. Although there are some strengths in the government's digital capability — which you see reflected in some of the early government adopters of AI technology, such as the intelligence community, where significant investments have been made in digital capability over a series of years — there's still a tremendous amount that needs to be done on the government side. And then lastly, on policies: being able to, number one, manage and safely use AI in a government context, and, in terms of cooperating with the private sector, to successfully acquire software — which AI fundamentally is — there's a lot of work to be done here. So I'll leave our slide up on the AI ecosystem and its importance, to try to hammer that home. That concludes the broader view of our findings and recommendations, and we'll turn now to our panel, who are going to join me up here on the podium. I will let you know up front that we did have four panelists lined up for today's discussion. Unfortunately Drew, our representative from IBM, was flying down this morning and his flight was delayed and possibly not even going to take off, so unless he comes in Graduate-style in the back of the room and hammers on the glass, I don't think he'll be able to join us today. But we do have, I think, a fine set of panelists, and I'm going to join them now at the
table all right well thank you ladies and gentlemen for joining us today for the discussion I'm gonna introduce our panel and then we'll we'll move into it to my left is Ryan Lewis who is the vice president of cosmic works which is an in Q tel lab dedicated to helping u.s. national security agencies commercial organizations academia and nonprofits leverage emerging remote sensing capabilities and recent advances and machine learning technologies particularly computer vision thank you for joining us Ryan his left is Aaron Holley who is vice president of public sector at data robot which is a company that in the artificial intelligence business using artificial intelligence on data she can tell you more about how that works she works closely with federal government users they also have a very substantial commercial business as well using their AI to do to draw insights from large volumes of data and to her left is David Sparrow David is a researcher at the Institute for Defense analyses both David and Aaron were regular attendees at her workshops I want to thank them for that David has a PhD in physics from MIT he spent 12 years as an academic physicist and then joined Ida 1986 he's concentrated a lot of his work on technology insurance and ground insertion in ground combat platforms and recently has gone on a deep dive on the challenges of autonomous systems a technological maturity and intelligence machines and on test evaluation verification and validation of autonomous systems driven by artificial intelligence so thank you for joining us this morning I want to start by giving each of our panelists just an opportunity to to give a few thoughts first of all or on your kind of perception of AI challenges if you have reactions to the report that's great as long as they're good now we're looking for all kinds of feedback and then we'll get into some more specific questions as we go but Ryan why don't we start Yeah right well first of all thanks for the opportunity to 
speak here today, and I had the opportunity to read the report over the weekend, and I loved all 78 pages of it. But as Andrew mentioned, In-Q-Tel serves as a strategic investor for the US intelligence community, and within the labs we go one step further and focus on applied research projects. That offers us sort of a unique perspective on what's happening within the artificial intelligence market, not just in terms of both incumbent activity as well as startups, but also in terms of what some would argue is the leading edge that's coming out of academia or the national laboratories. And when we compile all of our experiences into one, I think perhaps the simplest way to summarize what we're seeing in the market today is simply that AI, in its general sense, serves as a fundamental chance for the intelligence community and the military to rethink some of their applications and processes. And the key thing it offers: as mentioned, a lot of these technologies are in their very early stages, with some very exciting and tractable results, but there's still much work to be done. I think for us that opens up a perspective on how we think about implications, both near-term and long-term, for this type of technology. And it's important to kind of set the stage, because that comment is in stark contrast with all the hype that we hear around AI in general. Just by show of hands, how many people here have heard of the Computer Vision and Pattern Recognition conference, otherwise known as CVPR? I know a couple people have. All right, so if you haven't heard of it, that conference sold out faster than Capitals playoff tickets, and that should be startling to all of us in some ways, because the question is why. Why are people so excited about going to a conference that just a few years ago was not well heard of? And the reason is that, for researchers, they are just
now having the opportunity to present very tractable results in very niche focus areas that are now maybe expanding beyond that niche. And so when we think about implications from the national security perspective, what that means is: how can we harness some of that excitement? We highlighted at the macro level, and I think it's well put in the report, how do we start to design the infrastructure for the ecosystem? But more generally, as we look at specific applications, one of our core parts, and I think you hit it well, is the human-machine teaming piece, the interface. What does that look like, whether it's an end user just using different analytic tools or specific bots, or a scientist that's building their own models? What are our expectations for those employees, and what do we anticipate the lifecycle for those tools to be? It's a completely different way of thinking about a problem from a workforce perspective. And then the other piece, from an applications view, is that these technologies are still in experimental stages, which can be at times very frustrating, as my colleagues will tell you. But what isn't frustrating, what's really cool, is that it allows us now, in these early days, to begin to figure out which processes we want to change and which ones we think are strong. Hard to believe, but deep learning isn't a solution to everything; I know that's sacrilegious in some areas, but it's important to know when these tools should be applied and when they shouldn't. I think as we go through some of the questions we can highlight examples, but the core takeaway is that these technologies allow for early experimentation, which could have drastic effects in certain applications or processes, and may only be tertiary in others. All right, next up is Aaron, and I should say, because I want to make sure we're not misunderstood: we are talking about a broad overarching problem for government and
making AI effective and useful. There are companies out there doing really good stuff with it, and Aaron works for one, so we're not trying to say no one out there is doing really useful stuff with AI today; they are. But there is this larger systemic issue. But Aaron, you can lead us with some more insights on that. Thank you so much; we're thrilled to be here today. So DataRobot is a company that began back in about 2012. Our CEO was working in the insurance industry, and he is a data scientist, and many data scientists, including those currently working at Facebook and Google and Amazon, are spending weeks to months to really develop some of the strongest algorithms in the world in order to help predict some of the things that could happen in their business. So our CEO thought to himself: there's this competition platform called Kaggle, where every data scientist or data analyst who was interested could compete on a certain challenge, whether they were from Allstate or Netflix. He and his partner realized that, even working for this insurance company, it takes weeks and months to actually develop a single algorithm, and that's just too long. We are going to be beaten day in and day out by China's and Russia's advances in what they're trying to accomplish if we don't try to take a step ahead. So in 2012, thanks in part to In-Q-Tel's investment as well, DataRobot was formed, and the idea behind it was: we need to make a software platform where we take the best pieces of what data scientists bring to the table, which is a combination of having really strong domain expertise, a really great background in computer science, and, the third piece, being very strong mathematicians and statisticians. How do we bring that together in a software platform so that, rather than it taking weeks to months to answer a single question about something that you need to have information for in your business, let's
develop a software platform so that, instead of being limited to a single algorithm, we offer chances for people to put their data in and be able to generate hundreds of different models within a few minutes, versus weeks or months, which we just don't have the time for anymore. So with the folks that we have in our organization, and the investments that we have across the VC community, instead of putting a product out to market within six months, the executive team decided we're gonna take three years and 30 million dollars in order to make sure that we built the strongest platform, which is what we view as automated machine learning. We are under the umbrella of artificial intelligence: machine learning, natural language processing, deep learning, there's a variety of different things you'll hear about. Our focus is on automating the machine learning process, and in doing that, it's fascinating some of the things that we've been able to do, especially in the commercial industry. I started the federal public sector team about two years ago, and we're seeing some really nice ways to get started for both the military and the intelligence community. But in commercial, it's really outstanding to see what we were able to do across a variety of different markets, including banking. You know, money laundering is one of the biggest risks to our financial and economic future, and the fact that we are able to identify money laundering schemes and help stop them before they get started has saved banks, and ultimately those of us who are consumers, hundreds of millions of dollars in just a short timeframe. So I think from a DataRobot perspective, what we're trying to do is help people understand. If you ask, especially at a federal agency, how many people are truly data scientists in your organization, you might find one or you might find two within a massive organization who are
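The "hundreds of models in minutes" idea described here, automated machine learning, boils down to fitting many candidate models on the same data, scoring each with cross-validation, and ranking them on a leaderboard. A minimal sketch follows; the two toy models and the data layout are purely illustrative, not DataRobot's actual method.

```python
# Sketch of an automated-ML "leaderboard": fit several candidate models,
# score each by k-fold cross-validation, and rank by accuracy.
from collections import Counter

def majority_model(X_train, y_train):
    # Baseline: always predict the most common training label.
    label = Counter(y_train).most_common(1)[0][0]
    return lambda x: label

def nearest_neighbor_model(X_train, y_train):
    # 1-nearest-neighbor by squared Euclidean distance.
    def predict(x):
        best = min(range(len(X_train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(X_train[i], x)))
        return y_train[best]
    return predict

def cv_accuracy(factory, X, y, k=3):
    # Simple k-fold split by index stride; average held-out accuracy.
    folds = [list(range(i, len(X), k)) for i in range(k)]
    scores = []
    for held in folds:
        tr = [i for i in range(len(X)) if i not in held]
        model = factory([X[i] for i in tr], [y[i] for i in tr])
        hits = sum(model(X[i]) == y[i] for i in held)
        scores.append(hits / len(held))
    return sum(scores) / k

def leaderboard(candidates, X, y):
    # Rank (name, score) pairs best-first.
    return sorted(((name, cv_accuracy(f, X, y)) for name, f in candidates),
                  key=lambda t: -t[1])
```

A real platform searches far larger model and hyperparameter spaces, but the ranking loop is the core of the idea.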
dealing with volumes of data. But if you actually deploy capabilities like ours, we can help take your one or two data scientists and allow them to really take all those massive amounts of data and make some solid, good answers and solutions for you. That's what we've been able to do here in the public sector space, mirroring what we're seeing happening day in and day out in the commercial industry. Thanks. David? So, both from the perspective of building these tools and making the investments, this is very much in keeping with the ecosystem approach, and I'm gonna bring it way down to, let me call it, clarification level. So first of all, I'm delighted to be here; this is a real treat for me. Andrew said something about my window into this: it's from an evaluation perspective, and usually the entry to AI is as an element in a full-up system of some sort, either technology maturity assessments or just evaluation and verification and validation. But prior to getting the report and reading it, I asked: what would I say just sort of on my own? It's largely consonant, but I'll try to compare. So one of our mantras, or sound bites, at our place is: AI is not a thing. Now, Andrew's report says it's a buzzword, but it's not even a technology or a set of technologies; it refers to everything from mathematical research on the provability of software to aspirations for the good of society, and it's important to keep reminding yourself that you can't do anything with anything that broad. You have to narrow it down, just like they did in the report: what we mean is purpose-built algorithms. But you also have to not lose sight of the bigger issue if you want to actually have the piece parts build you the ecosystem. The second point I would like to make, which is largely overlooked: we don't have anything like a predictive theory here. We're doing very useful things, and there's very profound work on theoretical underpinnings, but we're not
there yet, and this has implications across the board. It has implications for how you want to do development, implications for TEV&V, and it actually has legal and ethical implications. And I would class the need for a predictive theory as part of the technical debt that has to be paid down. Now, you can do very, very useful things without a predictive theory. Part of what happens if you work on ground combat is you work on artillery systems, and we had useful artillery systems when we still thought heat was a fluid called phlogiston, a hundred years before the periodic chart of the elements, which told us where energetic materials got their punch. So there can still be tremendous utility, but there can be much more utility. And I would point out that in the artillery business, when they were still thinking about phlogiston and didn't know what the periodic chart was, they had a lot of accidents, which gets me to point number three, which is about risk. I like to think about risk in terms of two different things that have gotten a lot of attention: one is the AlphaGo triumph, and the other is the self-driving cars. AlphaGo is not going to hurt anybody. There are no severe consequences; there's no downside risk, and that is incredibly freeing for the developers. It simplifies the verification and validation, because if you get something that is really hard to validate, you don't care; nothing bad is gonna happen. Absolutely different situation in self-driving cars: very bad things can happen, they already have happened, and if catastrophic failures are possible, the rare and catastrophic events have to be understood. It's got to be even more severe in the defense regime than in the self-driving cars. And this links then, of course, to the underlying predictive theory. To go back to the artillery business: you understand energetic materials, you understand aerodynamics, you know approximately where the risks are going to be
when you do something new, and this can focus your development, it can focus your test and evaluation, and it allows you to have a basis for the legal and ethical and liability issues. The last, no, not the last, I'm not done yet. The human-machine interaction was the next one I wanted to talk about, and we've been thinking about this with stolen terminology; I've forgotten who we stole it from. We are evolving from a human-machine user-tool relationship to a peer-to-peer relationship, and the report talks about shifting workload from the human to the machine. But I think when you do that, and aspirationally, one of the things we want to do is actually shift responsibility from the human to the machine, and this is going to be important in very many ways, particularly for defense, because what it's going to do is impose a need for experimentation that hasn't been done before. We don't actually know how it's going to work out as we shift the boundaries of responsibility, and there's going to be a lot of experimentation that has to be done. This is already going on in the self-driving car regime, in a certain sense, at least at Tesla: what Tesla is doing is beta testing their software with real cars that are driving around on the road now. But to put the scale on this: first of all, recall Tesla gets beat up for their low production, the fact that they have had trouble gearing up for production, which means they've made a hundred thousand of these vehicles and they've hit as high as five thousand in a single week. Now switch from that to the Department of Defense. The biggest platform program we have at the moment is the Joint Light Tactical Vehicle; at no point do they expect to make 5,000 in a year. So you are in a completely different regime, a completely different learning regime, from what you're doing with your fielding. You're going to have to front-end load
experimentation, if you're operating in that kind of space, in a way that the commercial sector, which thinks in terms of millions, doesn't have to. The fifth thing I was thinking about was data, as I promised Dr. Hamre. Coming out of the research area in physics: this is an area which has an underlying theory, and you still have to have 90% of your money and 80% of your people doing the data part. However hard you think data is, it's probably harder than that. I want to close with two other comments. The report labels this area as semantically problematic, which I don't disagree with, and I have two semantic issues to raise. Very, very commonly in the community, and I think the report lapses into it in a couple places, trust is treated as an unalloyed good. Trusting things that aren't working is not an unalloyed good. The psychology field has the term calibrated trust, and it is important to keep reminding yourself that trust going up is not necessarily a good thing. Trust going up to exactly the right place, where you trust it, you know what missions and what environments this system will perform well in, but you also know which ones it will not perform in, is critical. And it is routine, and I think it's probably a part of human nature, that people assume the system is going to work, and therefore what you want to do is trust it. Well, no system works perfectly everywhere, so the idea of calibrated trust, as opposed to freestanding trust, is important. The other has to do with explainability and transparency, and we would add instrumentation. One of the things that you're going to need to do to build the theory is to be able to instrument, to look into the decision-making processes of these AI systems. This also very frequently gets treated as: once we can explain it, it will work, people will trust it, it will be adopted. But one of my favorite lines from the report is that explainability is not a panacea; there's a lot of things that people might explain
to you which would actually cause you to reject, rather than endorse, their positions. So with that, again, I'm delighted to be here, and I think my turn is up. Thanks, David. All right, well, I've got a handful of questions I'm going to throw at the panel to try and drive some discussion, and then we'll open up for audience questions after that. I want to start with what I consider some of the unfinished work of our project; there's still a lot more to be done. We kind of started our project with the perspective that if we looked at how AI was actually being used in reality, in the commercial space and in the government space, it would give us a lot of insight into the areas where progress was going to grow fastest, and I still think there's a lot to that, but honestly it was a little bit frustrating trying to do that. One of my a priori assumptions, before really digging too deeply into this project, was that AI was going to be really good and really useful at doing things that humans really struggle with. One thing humans really struggle with is making decisions in microseconds, and you see the need for that in some defense applications; I'm thinking of missile defense and some other areas. Another thing humans struggle with is dealing with absolutely vast, unknowable sets of knowledge, millions of data points, and we do see AI making a substantial contribution on that volume question. My own perspective is we don't see AI lending as much assistance yet on these really time-critical kinds of things. You see it in the commercial sector in terms of the financial industry, but in terms of defense, not so much, because it turns out that a lot of these really time-critical things are also really high-consequence things, and we run into this problem that we don't really understand how these algorithms achieve the solutions they're after, and we're not highly confident that they're going to make the right call. So there's a lot of areas where you would think that we're gonna make rapid progress in AI,
but when we look at what's going on, not so much, or not as much as you might expect. So I want to challenge the panel: in terms of the contribution that you see AI making to national security missions in the next five to ten years, what do you think explains why that's where the focus, or the momentum, is most likely to be in the near term? Why don't you start? All right, so obviously In-Q-Tel invests across a lot of different areas, and within our labs we have labs that focus on everything from audio data to cybersecurity. Selfishly, being a group that is focused on geospatial, I'm inclined to focus more toward a geospatial application, mainly computer vision. But from our experience so far, we do see, in the next five to ten years, these sorts of technologies having a fundamental impact on what the industry calls the TCPED process: tasking, collection, processing, exploitation, and dissemination. And what we mean by that is that it's no longer just a process of finding things in an image and then reporting that out; these technologies offer, albeit at a very early stage, a chance to quantify and systematically explore each part of that chain. So going to the beginning, the tasking piece: do we know what we're asking for, specifically? Do we know what has a high enough value? If we think about it from an artificial intelligence perspective, think about a machine learning model where you want to find buildings; building footprints are something that we focus on a lot. You would want to know early on: what sort of resolution do you need, what sort of spectral coverage do you need, and also what sort of temporal collections do you need? It's one thing to ask a person who has looked at this particular application for years; it's another to have specific models tuned for those different types of data, and that for us is really exciting. I think one of the ways we've tried to
explore this with industry and with the government is an initiative that we've launched in coordination with DigitalGlobe and Radiant Solutions, with hosting services from Amazon Web Services, called SpaceNet, which is modeled after ImageNet. The intent there is that we have open-sourced a large amount of curated data, and I completely agree with David's comment: the data curation piece is the most painful part. We host machine learning competitions, and we also work with others to post open-source tools built on that data set. And one of the things that we've been continuously surprised by, with every competition and every data set that we release, is that some of our assumptions are always challenged. What we think makes the most logical sense isn't always the case depending on the model, or, what's even more exciting and sometimes frustrating, the results will vary greatly between different models. We recently released a blog post that highlighted the difference in machine learning performance on essentially the same image but at different nadir angles: looking at building footprint detection from one angle, and then at the exact same nadir angle just on the other side, where you're gonna have a shadow effect, performance varies greatly. And this is very subtle; this is one input. You're looking for building footprints at one resolution type, and you have two different images of the same area, and yet your performance is very different. Just extrapolate that with more sources and a greater search area. It's these sorts of things that we want to explore across each part of the chain. So when we think about long-term implications in the geospatial domain for something like AI, it's allowing us to say: what is most valuable for this specific problem, and does it help us answer the question in the most impactful way? And from our view, whether you're a startup or you're an incumbent providing services, what is
most compelling right now is still being in this experimentation stage, figuring out what's best and what isn't. There's a lot of lessons learned that, whether we know it or not, will probably serve as a foundation for a lot of our decision-making going forward. A couple of things we would highlight, though, that are critical to shape that outcome. The first of which: we've already mentioned data, and I won't belabor that point, but when we think about key applications in the national security environment, think of a specific question that we want answered, whether it's foundational mapping or finding a very specific object. Having a strategic and dedicated focus around building a core data set is a non-trivial task; ask anyone on our team or anyone on our investment team and they'll tell you that is step one, whether you're building a core data set from real-life information or trying to use synthetic information. That is a critical first step. The other thing that we've seen, without getting too much into the nuts and bolts, is the need to focus on core tools and some standardization of data formats. Just being able to search across different file formats to say what is in this image is still a very tough task. If you look at Amazon's open data repository, which is really rich in terms of mostly government-provided satellite data from NASA and from NOAA, and our SpaceNet data is also hosted there, right now one cannot seamlessly search across all those different repositories to say: I want an image of Atlanta, Georgia, which is one of our competition cities currently for SpaceNet. The fact that an end user can't do that means that, right out of the gate, an analyst or an end user, regardless of their technical skill, is going to have to step through multiple functions just to put a data set together, to then start answering questions, or use tools, or then figure out which models. So if
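The search gap described here, no seamless query across repositories that each use different metadata formats, is at bottom a schema-normalization problem. A hypothetical sketch of a thin catalog layer; both archive layouts and all field names are invented for illustration, not any real repository's schema:

```python
# Normalize per-repository metadata into one common schema so a user can
# ask for "imagery over Atlanta" without knowing each archive's format.

def normalize(record, source):
    # Each (hypothetical) source names its fields differently.
    if source == "archive_a":
        return {"city": record["location"], "date": record["acq_date"], "source": source}
    if source == "archive_b":
        return {"city": record["aoi_name"], "date": record["timestamp"][:10], "source": source}
    raise ValueError(f"unknown source: {source}")

def search(catalog, city):
    # Case-insensitive match over the normalized records.
    return [r for r in catalog if r["city"].lower() == city.lower()]
```

Once every record passes through `normalize`, one query spans all archives; the hard part in practice is agreeing on the common schema, which is the standardization point being made.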
we think about what's key, whether it be data sets or tool standardization or just having some basic evaluation metrics that we agree upon for certain questions, the core focus should be on how we can get these fundamental building blocks in place, so that we can start asking more complex questions and then apply even more complex analytical techniques as things mature. Yeah, thank you. So I would agree on the geospatial side. Here's an example of something we've been able to work on right now; you might think the government's not as far ahead as it is, but there are certain areas, specifically within the DoD and the IC, where I think we are seeing some really interesting applications. One would be geospatial. For instance, we have a lot of information about ISIL holdouts, as an example, and we've been able to take that data, because machine learning is about two things: training the model, and then actually scoring or predicting with whatever model you choose. So in a geospatial example, we were able to take ISIL holdout locations that we knew about historically, in order to help protect the warfighter, and identify future ISIL holdouts. But we might not have been able to do that if we hadn't collected this information and built out machine learning models, so that we could more accurately predict where a warfighter might be going that might be a sensitive area. And that information is available to us: the government has massive amounts of historical data. We need to use that data and build out really strong machine learning models. As far as more applications that we can see now and in the next five to ten years: I know this is a hot button for me, but the fact that there's a queue of seven hundred and forty thousand people waiting for security clearances is mind-boggling to me. So consider Indeed.com, which is the world's largest job search engine. If we were
all applying for a job right now, we would probably go to the Internet. If you go to Indeed, your resume is filtered through; it's quickly mined for the information gathered from it, and they're able to quickly identify organizations that are the best fit for you. They also throw out the resumes that do not make sense for that organization. So why is that same approach not being used in something as critical as national security and clearances? There are three areas you could work in. You could bring in all of the applications and, using historical data on those that were not thrown out in the past, immediately get a good indicator, among the current applications as they come through, of the folks where we might want to go ahead and say we don't need a senior investigator spending as much time. Of these 740,000, there are at least two hundred thousand that are good citizens: they have not been arrested, they're not at risk. We take that same idea with those applications that do need more time spent on them and more information identified in them, and that's where your investigators would spend the greater amount of time. We should not have a queue of 740,000 applications when commercial today is able to do what they're doing across the board with machine learning. We also see it within fraud. Our customers in the commercial industry today are shaving off tens of millions of dollars by identifying fraudulent claims the minute they come in the door, through being able to use machine learning and artificial intelligence in their process. That same idea could be applied inside of a Medicare or Medicaid environment. And lastly, an example that is going on right now, something that we're very proud of, is with Homeland Security. They're very deliberately addressing how they can bring in artificial intelligence in certain areas, and an example would be better safety and passenger
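The triage idea for the clearance backlog can be sketched as a scoring model that fast-tracks low-risk applications and routes only the high-scoring ones to senior investigators. The feature names, weights, and threshold below are invented for illustration; a real system would learn them from historical adjudication data.

```python
import math

# Hypothetical logistic scorer: weights and bias are illustrative stand-ins
# for coefficients a real model would learn from past adjudications.
WEIGHTS = {"prior_arrests": 2.0, "foreign_contacts": 1.2, "credit_flags": 0.8}
BIAS = -3.0

def risk_score(app):
    # Logistic probability in [0, 1] that the application needs deep review.
    z = BIAS + sum(WEIGHTS[k] * app.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def triage(applications, threshold=0.5):
    # Route each application to the fast track or to deep review.
    fast_track, deep_review = [], []
    for app in applications:
        (deep_review if risk_score(app) >= threshold else fast_track).append(app)
    return fast_track, deep_review
```

The point of the passage is the routing, not the model: even a modest scorer can pull the clean two hundred thousand out of the queue so investigator hours concentrate on the rest.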
security. So there's a program, GTAS, the Global Travel Assessment System, which we are a part of, and being able to identify high-risk passengers based on machine learning has been something that we've been working on with them for just the past, I'd say, six to nine months, and the results are pretty outstanding. We're going to be able to share that information with those countries that don't have the capabilities that perhaps the United States government does, so that we can provide those same models and make sure that passenger screening globally benefits the world. I think in general what we're able to see in the next five to ten years is across a variety of spectrums, but I think there's this big fear: most of the agencies we talk to think that they have to have all of their data ready to go today. That's never gonna happen; it's just not going to happen. So instead of trying to boil the ocean, take data that you historically have information on, build out strong models within minutes instead of weeks and months, and then go out and make some good, strong, accurate predictions. That's absolutely relevant and available to do today, and it's what we're seeing some of our agencies doing. Across commercial we have thousands of use cases, but I think in federal we can see more if we just grasp some of those. Dave? So I have a narrower interpretation of the question. I think the obvious area in which the microseconds matter is cybersecurity. That's not just a national security issue; it's a national economic security issue, and that feeds back into national security as well; industrial espionage is a substantial threat to national well-being. I think there is a belief that this kind of rapid-timescale thing can also work in combat situations, in electronic warfare. I'm not sure I'm quite as ready for that, partly because that goes back to this issue
of when we are going to be ready for the human in the loop. And I would add a cautionary remark about that, related to your section about managing artificial intelligence: there's a whole lot of managing of artificial intelligence which is lost because of decisions made by 23-year-old programmers in the middle of the night, who have not been in the strategy meetings and are making decisions embedded deeply in code, often based on tacit assumptions. There's an issue there which I think ties back to the experimentation issue, an issue of how you get coherence from top to bottom, because it's even harder in these software-dominated things than in hardware. One of the other issues, and I will reverse course for a minute: there are places where there's a lot of data and it's pretty good, or at least plenty good enough, and for the Department of Defense one of those is in the personnel management arena. Using machine learning or other techniques on the vast amount of current and past data on the behavior of uniformed military: when do they leave, how often do they leave, what are the predictors, not necessarily on individuals but on the population as a whole, what are the needed incentives so that you've got the right number of doctors staying in and not too many lawyers and all that sort of thing. That's a field which is ripe for exploitation, and it's an area in which, for other reasons, we're already investing in the data curation. So those were the two thoughts I had: cyber for fast, and human capital, because we have to keep the data good anyway. Those are the opportunities I had in mind. I'll add to that. So we're actually seeing that same thing. I think personnel management and human capital management is a number one use case for us. We thought it would probably be in the cybersecurity space, but that's a lot like saying artificial intelligence; there's so many things
there, so we won't go down into the cyber arena. But workforce analytics is really fascinating to us, and so is the number of agencies right now who are asking us to help them identify who's going to retire and when. There was one agency in particular, who I will not name, who helped early-retire an entire group of folks, and then they realized five years later that was unfortunate, because they were the Russian linguists, or whatever it was that they happened to know; I think it was Russian. So then they ended up having to go out and hire a number of contractors to fill those roles, so it sort of backfired on them. But now they're taking the approach of: let's understand what all the factors are. And it's really fascinating; it comes down to, in many cases, this one agency was losing a lot of folks not based on age, and not because they wanted to retire; it was that they didn't have any flexibility, they weren't allowed to work from home, their commute was too long. We looked at the divisions and we said: this division doesn't have anybody leaving and this division does, and it ended up coming back that they were able to put in some environments and some changes that helped. We also find, in the Department of Defense, to your point, we've been able to help a group within the military understand who are the best individuals for certain special ops roles. Why spend months and years of a person's life going through something that perhaps they might not be strong at? How do we understand who are the best candidates for special operations versus
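The population-level workforce analytics described here, attrition rates broken out by division or by factors such as telework flexibility, reduces to a group-by computation once the personnel records are curated. A sketch with invented records and field names:

```python
from collections import defaultdict

def attrition_rate_by(records, key):
    # Group records by the given field and compute the share who left.
    groups = defaultdict(lambda: [0, 0])  # key value -> [left_count, total]
    for r in records:
        g = groups[r[key]]
        g[0] += r["left"]  # 1 if the person left, else 0
        g[1] += 1
    return {k: left / total for k, (left, total) in groups.items()}
```

Running the same function with different keys (division, telework eligibility, commute band) is exactly the kind of factor comparison that surfaced the flexibility finding in the anecdote.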
wasting a lot of time and taxpayer dollars trying to go through those processes so those are examples of current customers that we're working with today and it is because we have troves of historical information that helps us pinpoint the best Special Ops person looks like X Y & Z so we can better define what we're going to do in future requirements all right I'm going to hit the panel with one more question then we'll open up to audience after that so I will get to you a promise I'm gonna ask you to talk about our our big theme the ecosystem you know this is something that I think it was came up the idea of we should talk about an AI ecosystem at our very first session but I didn't necessarily translate or impact on my brain until we probably got to like the fourth or fifth workshop and it ended up becoming an overarching thing to me that connected our findings on the international competitiveness investment adoption all these issues were we were able to I think anchor on this idea of the AI ecosystem so you don't have to buy into that framing necessarily uh but I my question is what do you think needs to happen in the AI ecosystem as we've defined it or if you have a modification to that feel free to highlight that in order for the use of AI to become something really compelling for people making decisions that yes this is a use case I want to invest I want to implement in my agency in my command and my in my mission area so where do you see you know we talked about this this this startup debt in the ecosystem where do you see the critical elements of that or would you dispute that framing I would see the that question kind of brings brings me back to almost well over four years ago when we we first first started thinking about the lab and it kind of comes to a central question with anyone whether it's government or commercial customers that have very high consequence admission which is is what is good enough more specifically or put in a different way what are 
you trying to do? I remember one of our first meetings, and I'll rephrase it so you can have the same general confusion the end user did. We had just released, open source, one of our first computer vision models, and we met with a government end user, and we walked up to him and said: what F1 score is sufficient for you, at an intersection-over-union threshold between 0.25 and 0.5? And the customer looked at us and said, I have no idea what you're talking about. And we sat back and realized we didn't know what to say either. All joking aside, why is that a good story? It's a good story because, at the time and even now, so much of the really compelling work in model development is still in what one might call the not-yet-applied stage. If you're going to write a paper, or even do early testing, you're going to be deeply involved in your metric, in this case the F1 score we were using. But if you're an end user, particularly in a high-consequence domain, maybe you care about that (this gets to the explainability component), but what you care about more is: does it answer my question? And the reality is that different questions require very different fidelity in models, and thus in everything associated with them, all the way down to the dataset. So to have truly compelling examples of products derived from machine learning models, I think that's the first place we always want to start, and we've already highlighted some examples where that's occurring. A really good way to illustrate this: if one is just interested in general building counts after a natural disaster, trying to figure out, just generally, what the level of impact could be (not exactly how many buildings, not what the material damage to those buildings was, just give me a count), that problem seems fairly tractable, with some error bars. It's different when something is higher consequence, with a very low acceptability of error; that's something we have to work on. And I think one of the most exciting pieces in the next two to three years is going to be fleshing out entire workflows for applications that already have pretty good models built for them. A really good example would be some of the folks we've worked with at a couple of different organizations, including a company called Development Seed. Look at what they're doing with the Humanitarian OpenStreetMap Team, thinking about how to integrate model predictions into, in this case, just telling you which tiles are the most complex to label after a disaster. It's still early days, and all of that is still at the prototype stage, but as that workflow matures it becomes a great use case, highlighting not only how a machine learning model is deployed (in this case all open source) but also how humans interact with it. What's the feedback loop if the severity ratings on those chips are wrong? What if they're right? Thinking through that entire cycle, I think, will lay the groundwork for equally compelling work in more complex scenarios where maybe the error rates, or the acceptability of error, are lower.

What we're finding is that the most important thing is to understand, at a high level, that an agency needs senior sponsorship. When you talk about the people that are part of this equation, if you do not have senior sponsorship, if you don't have a person at the highest level who embraces the fact that you're trying to embark on some sort of journey with AI, you're going to fail. So you need that senior leadership. We spend a lot of time doing workshops just to lay the groundwork: AI is the big bubble, and within it there's machine learning, there's deep learning, there's neural networks. We try to focus on what you can do with what we call supervised machine learning: how can you take something, and not try to boil the ocean, but take a small
subset of something you're trying to do, something you really believe you want the answer to? If you have that senior sponsorship, it's incredibly important, from there, to have a business owner at the next level who understands the data. We're never going to take the people out of this equation; that's the most important thing. You have to have somebody who has the domain expertise and understands the data better than anybody else, and whose senior leader has their back and wants them to go off and try to accomplish something. And then you have your technical folks: your data analysts, who are really strong in Tableau and other visualization tools, but who do not have a degree in computer science or in math and stats. Trying to find that unicorn is incredibly hard. But what you do have is the people with the domain expertise. Then you bring in the capabilities, whatever tools you're using, from the ingest side and the data management side all the way through to consumption, where you're actually doing your visualization. What you want to accomplish, in our view, is having that senior sponsorship run through your business level and down to your technical level. And ultimately, when you're building out supervised machine learning models, you need full transparency behind them, because you need to be able to answer: how did you get to this answer? How did you determine that this group of patients is at risk for infection in your hospital because of these factors? You need somebody with the domain expertise who can read behind the algorithms and the machine learning that's created and really decipher it. That helps with your people piece and your transparency, and having that full, open communication plan is really important as well. Thinking through some of your other ecosystem pieces: of course, the policies that enable it. You need to make sure the right people have access to the information, and that the insights they're trying to gather have the right guidance behind them. We find a lot of times, especially in the intelligence community, there's this big fear that if you have a machine do everything, there's going to be cross-population between secret and top-secret data, so you need to make sure the policies and the governance factors are in place. I completely support everything in your AI ecosystem; what you've laid out is really important, but for us it starts at the people level, and then runs through the trust and the transparency in what you're creating and what you're actually going to produce as your results. Having the people tied to it is very important.

Because of the senior-sponsorship point, I'm inclined to tell a story from twenty years ago, when John Hamre was Deputy Secretary of Defense and I was on loan as a science advisor to one of the tiny organizations. I was one of the advisors on modeling and simulation, in particular for JWARS, which at the time was an attempt to include logistics in combat modeling. So we're out at a meeting at the horseshoe table, and I got to sit up at the front table like I'm important, and these two kids come in to talk about the configuration control of the software. The lights go out, they talk for twenty minutes, the lights come back on, and they say, any questions? I look around at the advisory group, and all I'm seeing is deer in the headlights. And I think to myself: I'm the only one sitting at this table who ever wrote code for a living, and I stopped doing it ten years ago. Fortunately the guy who was handling the meeting saw the same thing I did, picked up the gavel, and with no questions we moved on to the next speaker. But nothing that I have seen, and none of
the people I've shared this story with or asked, indicates that it's gotten any better. So the senior sponsorship is important, but at least in the Department of Defense we don't have a mechanism to get people with even my level of experience, which is now not 10 but 30 years out of date, into these positions. I don't see a solution, and nobody has told me one. But you need people who are informed about this, in the same way they are informed about budgeting, or about combat, or about aviation issues. In terms of the ecosystem, which is not a natural way for me to think about this problem, I thought about it in terms of America's historical strength at integration. And since we're probably moving into an era of great-power competition, we want to think about this ecosystem in those terms: what is the nature of an ecosystem that would support artificial intelligence? What are the elements of an ecosystem that would support the liberal democracies? I don't have an answer. I'm not a political scientist; back to physics, I do equations, it's way easier. And I don't know where we have the advantage there. I think it ought to be an international intellectual leadership role that we try to take, and within our own nation, our own community, we have to encourage broader literacy, and we have to try to tighten the terminology down so that the non-experts, the adjacent experts, can actually grapple with the problems. But I think the critical issue is that we want an ecosystem in which the liberal democracies are competitive.

I would inject one more level of complexity, David, into your comment. Especially in the computer vision domain, but in machine learning at large, unlike historical analyses of the defense industrial base, if you look at a lot of the work occurring in the machine learning domain, so much of it, on both the tools-and-frameworks side and the algorithms side, is in the open source, which is a very different environment than what we're used to dealing with historically in terms of how we think about national power and national assets. It comes up continuously, even within the purview of our lab, in deciding what makes sense to open source and what does not. And we continuously come back on the side of being more open, simply because there's still so much early work to be done. It's hard, at least from our view, to determine where we've surpassed a foundational capability such that the work should now pass into the realm of the proprietary. I know both of you have to deal with that; I'm just curious about your thoughts.

Well, to go back to the international aspects: openness is natural to our country in ways that it is not natural to others, and there may be a way to capitalize on that and make it a strength rather than a weakness. But again, I'm punching above my weight class talking about the international political sphere.

All right, let me turn to audience questions. You've been waiting very patiently, and we do have microphones that will be brought around. Please ask one question, keep it brief, make it a question, and tell us who you are. I see one hand here.

I'm Steve Winters, independent consultant. I direct this to David. It's a minor point, but you made a remark comparing self-driving cars, where people could be hurt in an accident, to the case of AlphaGo, where nobody's going to be hurt. But isn't there an argument that AlphaGo is a hundred times more dangerous? Because what everybody drew from it was: my gosh, this is how you win wars. New tactics were coming out that
the Go players hadn't seen in the whole history of humanity. And of course that's a deterministic game, but then you get the OpenAI people working with non-deterministic video games and having very good results. So can you say something about the danger there? And maybe the openness is the danger.

So, I accept your suggestion that what I was talking about was physical risk, not intellectual risk. I think there were elements of hype about it. Go is an intensely digital game with rules, and in fact the rules are the same on both sides, which frequently is not the case in warfare. I work at an institute that, to some extent, was invented 60 years ago to counter hype, so I'm familiar with the dangers; it tends to be self-regulating over time. But yes, there was a tremendous amount of enthusiasm. One of the comments I used to make to people was: all we need is curated data on 30 million wars and we're ready to go. The scale is very different. That said, it was a very powerful accomplishment, and one that was not expected, even by many in the field. Shortly after it happened, my wife and I were driving out to Shenandoah, and we heard a few minutes of some NPR piece about the fact that a computer had beaten the world's best player of Go. That's one way to look at it. But I think the right way to look at it is this: 300 of the best computer scientists, with an unlimited budget and unlimited access to computing power, most of whom were decent Go players, were able to pool their resources and beat the best single individual at Go. When you describe it that way, the hype is damped out. But you made an interesting distinction between intellectual risk and physical risk that I had not made before; thank you.

Okay, I'll come here: in the blue blazer, three rows up from me. Robert, there you go. Thanks. Jennifer, from The Epoch Times. I know virtually
nothing about AI, and I haven't had time to read your report, but I heard Angela mention that every country has a different purpose in developing AI, and I've heard that China is more advanced than America in this. So I wonder: what do you think is China's purpose in developing this, where are they now, and how is that going to affect the United States? Thank you.

Well, from the work we did in the report, I would say there is pretty extensive application of AI projected in the strategy China has been discussing, as part of their broader efforts toward seizing the technological high ground in a range of industries. AI complements or supports those efforts along a number of dimensions; the Made in China 2025 plan is one of the documents that describes this, and there are others as well. Where I would say there's tremendous strength: they have invested in a number of institutes focused on AI, and they have recruited an AI workforce heavily, some of it folks who came to study in the United States and were recruited back to China, others trained right there in China. They produce literally hundreds of thousands of engineers every year out of their graduate schools and universities, so they have some real advantages there. They have advantages in the quantity of data they gather through constant surveillance of the population, and there are very low limits on aggregating, sharing, and exploiting that data that we don't have here. So there are real strengths. One element that I think can sometimes be overblown is the amount of money they're putting into it. The truth is, we don't really know the amount. There's a 300 billion, or 150 billion, dollar figure out there, but that's a multi-year number, and it's a projection of the size of the AI industry that is their goal to achieve. So it's a little less clear exactly how much, in terms of real currency, is being invested in AI, though there's little doubt that it's substantial, at least comparable to our investment, and perhaps stronger. The way we came down on it was to focus less on specific dollar investments and more on the health of the ecosystem, because our view is that whatever makes them good at facial recognition at airports, allowing them to monitor the population, may not be at all, or equally, applicable to other warfare applications we would consider more important in a battlefield scenario. So it's not clear there's a transferable advantage from one to the other. But we do think there's a transferable advantage in having a really robust AI ecosystem, where you can apply people and infrastructure and policy to multiple different kinds of problems and carry over some advantage. I still think Silicon Valley represents the most robust AI ecosystem we see today. That's an important advantage for the United States; it's also a perishable advantage, not to be sat upon. Are there other thoughts on the panel on that?

I have one thought, which is that most of these companies think of themselves as international companies. I'm not sure Silicon Valley is American. It's located here, and that confers some advantages, but it is striking to me that the Google employees seem much more squeamish about Project Maven than they do about the massive surveillance state growing up in China. So whether that ecosystem is on our side is not immediately apparent to me.

Other questions? Here in the middle. Thank you. [Name inaudible], Federation of American Scientists. To segue from your last comment, Dr.
David: with regard to artificial intelligence and national security, how is the talent acquisition problem being addressed, when the technology is not born secret, as many other programs are, and yet government contracting doesn't account for the fact that the salaries being paid in Silicon Valley are outrageously high, at least for top talent? I have this both from a recent New York Times article and from discussions with a well-known venture capitalist in the Valley. How are we going to change that, especially in light of problems like the one with Google and Project Maven, and bring that talent into the national security arena? Thank you. Sorry, it's a long question.

We see this challenge all the time, and I would agree that with Silicon Valley we are in jeopardy there, because there is a massive push from China to gather as much information as they possibly can, whether by acquiring our technologies or by having their folks, as you said, study in this country and then go back. One of the things we're looking at is that it is possible, it's not that far-fetched, to create an environment of citizen data scientists. I'm not a data scientist. I work with four of the top data scientists in the world at DataRobot; we're a company of about 500 people, and we are trying to hire those folks just like Google, just like Amazon and Facebook. But the idea is that the technology is one piece of the whole ecosystem; you shouldn't make it so difficult that people like you and me, who are not data scientists, can't leverage the benefits. Data scientists are the unicorn: so hard to find, and so expensive, that they're especially unlikely to be hired into the federal government, because they can get much higher salaries in the commercial sector. So what we're trying to do is create an environment of citizen data scientists, where you have the domain expertise, you understand your data better than anybody, but you don't need a computer science background, or a math and stats background, to get really actionable intelligence from your data. Think about how you operate with the internet every day: you're on your phone, you're not a trained expert in coding, and you didn't need to know computer science to log into your social media account this morning. That same movement is happening in artificial intelligence. We need to make the tools, the capabilities, this entire ecosystem, much easier to understand, through education, so that all of us who understand our data can gather the information and turn out actionable intelligence from it, without having to have these massive degrees and these very expensive people in the workforce. We call it citizen data science: bringing the power down to the common people, if you will.

And to extend that thought, think about capabilities on a spectrum; Aaron highlights it very well. If the intent is for the U.S.
government to be hiring individuals who want to build foundational networks from the ground up, then yes, that is a monumental task for anybody, no matter the organization. But what's been particularly compelling for us, both to invest in and to participate in through open source, and in our experience with other partners, is that the evolution of tools has been drastic just in the last couple of years. DataRobot is a great example. A step further back, not fully a product, what we have seen is entry-level tools that help end users who are perhaps not skilled at, or not familiar with, building out their own models, but who still learn how to work with a model. A great example: look online at what AWS calls SageMaker. It's just one example of a cloud service offering, but essentially these are tools that allow end users to quickly spin up a model and look at some results. It does require some scripting skills, but it is something we've seen government end users start to work with pretty aggressively. This is compelling because, to Aaron's point, it starts increasing that literacy drastically. When we started CosmiQ, none of us except one was actually a geospatial expert, and the reason I bring that up is that we started where everyone else started, looking at OpenCV; this was before TensorFlow was open-sourced, and we learned through experimentation. What's so great about a lot of these new tools, and the reason we and others contribute to open source, is that it makes the tools more robust, and it allows entry-level people, or folks who are just interested in learning more, to start with experimentation, and then become a stronger end user of a tool like DataRobot, or build their own models as they gain familiarity.

One of my colleagues, who comes out of the machine learning business and spent some time in the Pentagon while the JAIC, the Joint Artificial Intelligence Center, was being set up, basically came back and said: everybody's worried about how you're going to get the very best AI people into the JAIC. They don't need the best; second tier is plenty good enough. They need the best contracting officers and the best lawyers in the JAIC. And this fits into my mantra, which, as a physical scientist, I have to keep reminding myself of: the United States government is primarily a resource allocation organization. When you think of its role in the ecosystem, that's a big part of it, and that means contracting and law and ethics and those things; that's an element of the ecosystem worth bearing in mind. It goes to the point that you don't need your government users to be power users. So there's an element here, in building the ecosystem, of deciding which pieces of it the government needs to be the best at and which pieces it can be merely good enough at, because you're not going to be able to be the best at everything.

I would just add to that, from the perspective of our report, and to tie it to Dave's earlier comment: Silicon Valley may be the most robust AI ecosystem, but it's a private-sector entity, and it doesn't necessarily report to us, or to any nation per se. That's one reason why, in the report, we talk about the government needing an AI ecosystem of its own that is organic: not to compete with or outstrip Silicon Valley by any stretch of the imagination, but enough of one to be an intelligent user, to push forward military-critical applications, and to work with those in industry who want to take on the burden of security, and the threshold of trust and explainability, to do the kinds of high-consequence work the government needs. I will say, from our perspective here at CSIS, we have a data team that works for me in the Defense-Industrial Initiatives Group, and we do a lot of
work on contract data, trying to draw policy conclusions and implications from it. We have seen, in the last two years I would say, a dramatic increase in the availability of young people coming out of college or graduate programs with really serious data-analytic skills. So the academic world is out there responding to the call, and from what I've seen there's a pretty robust market for those folks in the private sector, so there is some room for hope.

I'm going to make room for one last question, and then we'll have to stop. I always like to balance the room, so let me head to the right side, which I haven't touched yet.

I'm Drew Calcagno, from USD(I). I have a question about ethics. Given the backdrop of a number of private-sector companies leading the way in a lot of ethics writing, I think of DeepMind having its own group dedicated specifically to that, what do you think are some of the main AI ethics principles for the national security community, specifically the DoD?

I'm going to dodge your question. There was a reference to the national AI R&D strategy, and it turns out that while you were putting this report together, they put out a request for comments on an update, which is under way; it closed just a week or so ago. One of our remarks was that this area in particular required additional attention, and an even greater U.S. focus for international leadership. I would tie it back to the values of what we described as the liberal democracies: I think we need to run our country, and prevail as needed, on that basis.

One thing to add on that: model sensitivity and model bias are, regardless of the application, a critical issue for any development or end-user team. Even in the geospatial domain, one thing we have to think a lot about is how do
we, whether it's internal work or work in collaboration with our partners through SpaceNet, incorporate enough geospatial diversity that the models we release can operate in different domains? It's a niche example, but arguably it translates to a variety of applications, and as datasets grow, whether open source or in-house, and as algorithms become increasingly benchmarked, that's an important factor to keep in mind all the way from data generation through the deployment of the end-use application.

From a DataRobot perspective, we look at it the same way, and one reason it's so important comes down to the models and the transparency behind them: being able to see how a result was produced. For instance, we could tell you that in a jurisdiction of Ohio the opioid crisis looks like it will cause X number of deaths next year, and when they ask why, we're not limited to just one model. Our platform allows for TensorFlow by Google, Vowpal Wabbit, Python; it doesn't really matter. What you really want to understand is that you still own your data, and you have to understand what you're providing to the system to build models from. Not being limited matters: a CIO at one of the intelligence agencies told us, you all are a lot like Switzerland; you don't go out and pick a particular model because you're the company that developed it. TensorFlow is developed by Google, but we have it inside our platform. So when we turn out our models, we're spinning up hundreds of models, and sometimes thousands of combinations of models, which are ensemble models, and then you can open up the blueprint, which gives complete transparency into every step of the process, so you can see what happened. As for the ethics and the governance behind that, it's really very dependent on the organization you're working with and its experts in the data field. With tools like ours, whatever you provide the system to build models from is something your organization has hopefully vetted and approved before we turn out a result. At the end, your data scientist, or your senior executive, or whoever it might be, needs to look at the result and the intelligence we provided and say yes or no, good or bad, to that answer.

I would just add: we already have a lot of ethical policy. We have rules of engagement in the military context; we have requirements that our personnel system generate outcomes relating to diversity, or non-bias, across a range of dimensions. So we have a lot of ethical policy in place across the national security world. The question to me is how you translate that into something the machine can meaningfully comply with; comply is probably not the right word, but meaningfully address. In many ways, right now we are challenged to measure and say whether an algorithmic output complies with our ethical policies. We have to do some translation: what does that policy mean in the context of what that algorithm, that machine intelligence, is actually being tasked to do? I think it's addressed a little bit in that other report I mentioned, on the national strategy for AI, which came out of our Strategic Technologies Program; I'd ask you to look there. But this is really one of the central challenges: it's not so much a lack of ethical policy or guidance as it is how we translate that into something the machine intelligence can meaningfully address. And then, to the point that I think David has really brought home for me on a
couple of occasions: if we don't know how the AI is doing what it's doing, or we can't say whether, given one outcome in one instance, the exact same inputs in two months' time, after that algorithm has learned and evolved, will even produce the same thing, then there is a huge fundamental theoretical challenge to benchmarking and measuring a learning algorithm that we need to do some work on.

So I think we're going to have to stop there. I want to thank our audience for sticking with us for a long but hopefully interesting discussion; I really enjoyed it, especially with this panel. I want to commend to you, if you didn't get a hard copy, or if you're watching online, our report, which is available on the website; you can download it electronically, and it will look just like this, or you can order one for yourself. We have the video that we showed on our website, and I should mention there is a second video: we did an earlier video in this project, released a couple of months ago, that is more of an introduction to the concept, while the video you saw today summarizes the work of our report. I want to thank Thales again for making the project possible, and Alan for joining us this morning, and please join me in thanking our panel for a great discussion. [Applause] [Music]
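As a coda, the evaluation-metric exchange recounted in the discussion (asking an end user what F1 score is sufficient at an intersection-over-union threshold between 0.25 and 0.5) can be made concrete. The boxes below are invented for illustration, and the greedy matching is a simplification of how real detection benchmarks pair predictions with ground truth.

```python
# Sketch of the question posed to the end user: detection F1 depends on the
# IoU threshold used to count a predicted box as a correct detection.
# Boxes are (x1, y1, x2, y2); all values here are made up for illustration.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def f1_at(preds, truths, thresh):
    """F1 with a greedy one-to-one match of predictions to ground truth."""
    matched, tp = set(), 0
    for p in preds:
        for i, t in enumerate(truths):
            if i not in matched and iou(p, t) >= thresh:
                matched.add(i)
                tp += 1
                break
    prec, rec = tp / len(preds), tp / len(truths)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

truths = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 11, 11), (22, 22, 32, 32), (50, 50, 60, 60)]

print(f1_at(preds, truths, 0.25))  # 0.8
print(f1_at(preds, truths, 0.5))   # 0.4
```

With these made-up boxes, the same detections score an F1 of 0.8 at an IoU threshold of 0.25 but only 0.4 at 0.5, which is exactly why the question baffles an end user who only cares whether the model answers their question.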
Info
Channel: Center for Strategic & International Studies
Views: 5,781
Rating: 4.9036145 out of 5
Id: XXXYEwwZl4k
Length: 110min 2sec (6602 seconds)
Published: Mon Nov 05 2018