Artificial Intelligence and Security

Video Statistics and Information

Captions
With that, I would like to introduce the next speaker: Mr. Rick Echevarria, Vice President, Software and Services Group, and General Manager, Platform Security Division, at Intel Corporation.

Good morning, everyone, and thanks to Cyber Week and Tel Aviv University for the opportunity to talk about security in the context of artificial intelligence, right after our legal disclaimer. Let me get through that; waiting for the slides to move. Okay, thank you.

For 50 years, Intel has been dedicated to delivering world-changing technology, and as many of you know, for most of those years right here in Israel. We're doing that to make people's lives better, to solve problems in society, and to fundamentally impact the way businesses are run and transformed on a global scale. Intel's motivation is a strong desire to improve the human experience through the use of technology.

It's interesting to me, and to many of you, that over that same time period, professionals in artificial intelligence have been trying to do the same, albeit ineffectively. Between a lack of data and a lack of computing, people haven't gotten the benefit of augmented human intelligence, until today.

Artificial intelligence is not one thing; it's a mix of technologies that, working in concert, are able to augment what humans are capable of doing. Artificial intelligence requires advances in software tools and optimizations. It requires innovation in hardware, and not only in compute capabilities: artificial intelligence is going to drive changes in memory architectures, in storage, and in communications. And as you've heard from the previous speakers, because no one company can really unleash the outcomes of artificial intelligence on its own, we need an ecosystem, an ecosystem that's ready and willing to collaborate, as is the case with any workload, but especially a workload with as much potential as artificial intelligence. And, in the spirit of Cyber Week, security must underpin artificial intelligence. These three pillars that I mentioned
earlier need to come together, in concert with security, if we are to unleash the outcomes of artificial intelligence.

When we think about security in the context of artificial intelligence, we look at it as two different implementations, and I'm going to cover those in a second. But first I want to make the point that we need to ensure the integrity of the compute, the data, and the algorithms. Those are three fundamental elements that are important to unleash what this workload can deliver.

Again, we see security in the context of AI as two implementations. The first is security for AI, where we focus on the protection of algorithms, compute, and data. Artificial intelligence solutions are being deployed today, and security must be an important consideration as those solutions evolve. The second implementation is artificial intelligence for security, where we use AI for the detection of advanced exploits. The second implementation is very important and has a lot of potential, but it's still embryonic, and I encourage us as an industry to manage expectations around it.

I'm going to talk about both implementations, starting with security for artificial intelligence, and I'm going to do that by describing two actual use cases. The first is multi-party machine learning, and the second is federated learning.

Machine learning algorithms, especially those based on deep neural networks, have achieved remarkable results in a number of different domains. However, machine learning algorithms require access to data, which is often privacy-sensitive. Examples of industries with privacy-sensitive data abound everywhere; in fact, you heard this morning from the Prime Minister about digital health. Healthcare, financial services, and, especially relevant to this audience, the ability to share threat intelligence across different industries. Let's be clear: not being able to access data is one of the biggest risks to the potential of artificial intelligence. Think of the possibilities if we had
mechanisms in place to make the best data available to unleash the power of new models and new algorithms in artificial intelligence. Therein lies our problem statement: how do we enable access to the best data available in an increasingly privacy-aware world?

Well, Intel is a technology company, so let's talk a little bit about technology. Homomorphic encryption and hardware trusted execution environments are two capabilities that are available to address this. Intel researchers are making significant strides towards practical applications of homomorphic encryption. This is not an interactive session, so I'm not going to ask you what homomorphic encryption is; I'm going to explain it. Homomorphic encryption is the ability for computer systems to act on data that's encrypted, and the trick is that you do that without decrypting the data. This level of technology would enable researchers to operate on data in a secure and private way while delivering significant results. But as many of you know, homomorphic encryption is still some way from being practical to use, and the industry is still working out the compute requirements and capabilities for it.

This is why Intel is working on other technologies, like hardware trusted execution environments, that can deliver high compute efficiency and enhanced privacy. Trusted environments like Intel SGX enable more secure uses of private data for artificial intelligence. But our innovation goes one step further, because we have capabilities in our silicon that allow only authorized code to act on the data; if that code is in any way tampered with or modified, we disable the operations and tear down the environment where that compute and data came together.

Now you're asking yourselves: Rick, how real is this? Is it possible today? Last year, Microsoft Azure became the first cloud service provider to announce and offer security capabilities
for protecting data in use through a number of tools and services called Azure Confidential Computing, and one of the solutions that Microsoft is committed to delivering is multi-party machine learning, what I just talked about, using the hardware capabilities that I just described. In a few minutes I'm going to highlight a collaboration that we're very excited about in the area of homomorphic encryption.

While multi-party machine learning will be of great value, sometimes taking all this data and moving it to a centralized location is just not possible or feasible. Federated learning enables data owners at the edge of networks to collaborate, to learn, and to develop shared prediction models while keeping all the training data at the edge, thereby decoupling the ability to develop models from the need to centralize the data. This also enables us to use edge devices, and all that compute, for model training.

However, there are two major security issues with this approach. The first is called model poisoning, where you can inject outlier data points and parameters and change the nature of the model, change its intent. The second is data spilling, where the infrastructure or the devices can leak data at the edge. Robust aggregation and the use of hardware-based trusted execution environments are two approaches that can address model poisoning and data spilling.

For those of you who like statistics, robust aggregation is a field of study that is looking at ways in which we can manage the impact of individual contributions on the definition of models. One of the ways we can ensure that outlier parameters do not influence the model is by smoothing out how those updates impact model evolution. Robust aggregation as a methodology gets even better with the use of hardware, as we can provide protection at the edge and then allow the aggregator in the center to cooperate with trusted edge devices to filter
out outlier updates.

I now want to turn our attention to the second implementation, AI for security, an implementation of artificial intelligence that holds quite a bit of promise. As many of you know, malware is one of the fastest-evolving workloads, albeit a malicious one, and one of the motivations for malware to evolve is to evade detection. That's why earlier this year, at RSA in San Francisco, we introduced Intel Threat Detection Technology, and as part of it we introduced a capability called Accelerated Memory Scanning, where you use the integrated graphics on the device to detect malware in memory. This capability can be improved, enhanced, with machine learning. How do we do this? We do this by providing a proactive, machine-learning-based inference model that works in conjunction with the reactive, pattern-based approach that we announced at RSA. Let me translate this into plain English: what this means is that we have the ability to build models that represent malware as images in memory; we then take a snapshot of malware in memory and apply our know-how in vision data classification to those memory snapshots. Quite a bit of innovation there, across different technology vectors.

While I was describing multi-party machine learning, I highlighted our collaboration with Microsoft and Azure Confidential Computing. We need to collaborate with the industry if we're going to solve the challenges and capitalize on the opportunities that I've just described, and I'm really excited to introduce and announce three collaborations.

By combining machine learning capabilities with the capabilities and flexibility of containers at the edge of networks, we can make machine learning systems much more useful and enable that federated learning model that I described before. That's why Intel is collaborating with Docker to help make artificial intelligence and containers
themselves much more secure via integration with several silicon technologies, including virtualization for isolation and trusted execution environments.

Fortanix and Intel are working together to extend the capabilities of the Fortanix Runtime Encryption platform to support and secure Python- and R-based applications, which have widespread use in the data science community. By doing this, developers and data scientists can now train algorithms with the best data, as well as improve the integrity of those algorithms.

The third and last collaboration that I'll highlight is with the team at Duality, with whom we're collaborating to address the challenges of artificial intelligence workloads using homomorphic encryption. We are very excited about this collaboration. As I mentioned before, there are computational challenges around homomorphic encryption that need to be addressed, and our joint goal is to address those challenges with one of the best, if not the best, teams ever assembled in the area of encryption and algorithms.

So let me wrap this up. In summary, we are committed to delivering security in the context of artificial intelligence by improving the integrity of the compute, the data, and the algorithms, critical elements that need to come together to create and deliver value. We will continue to innovate with the industry across the two implementations that I mentioned: security for artificial intelligence, and artificial intelligence for security. And finally, our collaborations with Docker, Fortanix, Duality, and many others in the industry are fundamental to delivering the value of security in the context of artificial intelligence. With that, and on behalf of Intel, thank you for your time.

[Applause]
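The homomorphic-encryption idea described in the talk, computing on data without decrypting it, can be illustrated with a toy example. The sketch below uses unpadded "textbook" RSA, which happens to be multiplicatively homomorphic; this is a didactic stand-in, not one of the fully homomorphic schemes the speaker refers to, and the tiny hard-coded primes offer no real security.

```python
# Toy demonstration of a homomorphic property using textbook RSA:
# E(a) * E(b) mod n == E(a * b), so a server can multiply values it
# cannot read. Real homomorphic-encryption schemes are far richer;
# this only illustrates the core idea of computing on ciphertexts.

p, q = 61, 53            # toy primes -- far too small for real use
n = p * q                # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 6
# The "server" multiplies ciphertexts without ever seeing 7 or 6.
product_of_ciphertexts = (encrypt(a) * encrypt(b)) % n
result = decrypt(product_of_ciphertexts)
print(result)  # 42 == 7 * 6, recovered from computation on encrypted data
```

Decrypting the product of the two ciphertexts yields the product of the plaintexts, which is exactly the property that lets an untrusted machine do useful work on data it cannot read.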
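The robust-aggregation approach to model poisoning mentioned above, smoothing out the influence of outlier updates on the global model, can be sketched with a coordinate-wise trimmed mean. All names below are illustrative, not from any specific federated-learning framework.

```python
# Minimal sketch of robust aggregation for federated learning: each edge
# device submits a model-update vector; a plain average lets one poisoned
# update drag the global model, while a coordinate-wise trimmed mean drops
# the extremes before averaging.

def trimmed_mean(updates, trim=1):
    """Average each coordinate after dropping the `trim` lowest and highest values."""
    aggregated = []
    for coord in zip(*updates):                    # iterate coordinate-wise
        kept = sorted(coord)[trim:len(coord) - trim]
        aggregated.append(sum(kept) / len(kept))
    return aggregated

# Four honest devices report similar updates; one attacker injects outliers.
honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9], [1.0, 1.1]]
poisoned = [[100.0, -100.0]]                       # model-poisoning attempt
updates = honest + poisoned

naive = [sum(c) / len(c) for c in zip(*updates)]   # badly skewed by attacker
robust = trimmed_mean(updates, trim=1)             # stays near honest consensus
print(naive, robust)
```

The naive mean of the first coordinate is pulled to roughly 20, while the trimmed mean stays near 1.0; pairing this filtering with trusted execution environments at the edge, as the talk suggests, additionally protects the updates before they ever reach the aggregator.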
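The "malware as images in memory" idea behind the Accelerated Memory Scanning discussion rests on reshaping a raw byte buffer into a fixed-width grayscale matrix so that vision-style classifiers can be applied. The sketch below shows only that reshaping step; the snapshot is synthetic and nothing here reflects Intel's actual implementation.

```python
# Sketch of turning a memory snapshot into an "image": each byte becomes a
# pixel intensity (0-255) and the buffer is reshaped into fixed-width rows,
# padding the last row so every row has the same width.

def bytes_to_image(buf, width=8, pad=0):
    """Reshape a byte string into rows of `width` pixel intensities."""
    pixels = list(buf)
    remainder = len(pixels) % width
    if remainder:                          # pad the final row to full width
        pixels += [pad] * (width - remainder)
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

snapshot = bytes(range(20))                # stand-in for a memory snapshot
image = bytes_to_image(snapshot, width=8)
print(len(image), len(image[0]))           # 3 rows of 8 "pixels"
```

A real pipeline would feed matrices like this into an image classifier trained on known malware layouts; the point of the sketch is just that a memory snapshot and a grayscale image are the same kind of object.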
Info
Channel: TAUVOD
Views: 5,487
Rating: 4.6721311 out of 5
Keywords: אוניברסיטת תל אביב, CYBERWEEK 2018, Tel Aviv University
Id: pZ80M3aVlD0
Length: 15min 31sec (931 seconds)
Published: Sun Jul 08 2018