Sibos 2020: Scaling up artificial intelligence in finance

Captions
Moderator: Welcome everyone to this session of Sibos, where we'll be talking about the scalability of artificial intelligence. As you know, over the last decade the adoption of AI has greatly increased and it's used pretty much everywhere. For example, on Zoom you now have automatic captions that use natural language processing; on your computer or your mobile phone, your spam filter also uses artificial intelligence. It's used everywhere, and it's at the core of the business of the dominant players in the industry, like Google and Alibaba.

But what about the finance industry? You might not know it, but financial services is actually one of the largest spenders on artificial intelligence today, ahead even of sectors like retail or defence. And if you think about it, it's obvious: five years ago, to open a bank account you had to go to a branch; today many banks, and pretty much all fintechs, let you scan your passport and use facial recognition, and all of that uses artificial intelligence on a daily basis. We now see banks doing much more in that space: Deutsche Bank signing a partnership with Google to use Google Cloud and machine learning, or Goldman Sachs acquiring White Ops, which uses machine learning to detect bots that try to impersonate humans. So there are definitely a lot of developments in the space, but is it really transforming the industry yet? For those of you who work in financial services, it sometimes doesn't feel like an AI world: we tend to talk much more about legacy systems, and even, last year during Covid, about still using paper. So how do we scale artificial intelligence, and what is required? We have the chance to have exceptional speakers who will address a lot of different areas, from infrastructure to ethics and of course the people side.

Let me do the introductions first. I will be your moderator for today. I'm the co-founder of CFTE, a knowledge platform in digital finance where we help financial institutions and people upskill in this fast-changing world. I'm calling from London; I was previously a Managing Director at Citi, an Associate Fellow at Oxford, and a founding partner of SuperCharger, the largest fintech accelerator in Asia.

With us today we have three exceptional speakers, and I can't go through all their CVs - they are very long, so have a look at their LinkedIn profiles. First, I'm very happy to welcome Ayesha. Ayesha Khanna is co-founder and CEO of ADDO AI. She's involved in many AI initiatives around the world, such as the World Economic Forum; she's on the board of IMDA; and she founded 21C Girls, which helps girls in Singapore learn about artificial intelligence. I first saw you, Ayesha - I don't know if you remember - in 2016, I think, in Hong Kong, and I still remember very vividly your presentation on the role of people in artificial intelligence.

After Ayesha we have David - David Hardoon. David is a senior advisor for data and AI at UnionBank of the Philippines. He also teaches and researches at a number of universities, such as SMU and UCL. I met him a few years ago when he was still Chief Data Officer at MAS, the regulator in Singapore, leading the work on ethics and AI that led to FEAT, the framework around fairness, ethics, accountability and transparency, which has since become a foundation for many reflections on AI and ethics around the world.
Last but not least, I'm very happy to welcome Shameek, who is head of financial services at TruEra. He's also a member of the Bank of England's working group on AI, and he was previously - we met a few years ago - Chief Data Officer of Standard Chartered, where he was working on industrializing the process of implementing AI in the bank, from the first discussions to proofs of concept and real implementation. The three of them are based in Singapore.

Just for the story: two years ago, at CFTE we embarked on the big project of creating a reference course on artificial intelligence. We worked with dozens of experts around the world; I was one of the programme directors, Ayesha was one of the senior lecturers talking about the foundations of AI, David talked about ethics, and Shameek about the role of people. So we've been talking about these topics for quite a while. Welcome to the three of you, and now let's jump directly into the topics, because we have much to talk about.

First, scaling - since we're talking about the scaling of artificial intelligence, what do we actually mean by scaling? Perhaps I'll jump right to you, David: how would you define the scaling of AI?

David Hardoon: I think that's actually a wonderful place to start, because different organizations - and there's no right or wrong here - have different definitions of scalability. If you look at AI ultimately as an innovation function, a pilot function, a sandbox, scalability could mean how many pilots you can run. In another situation - for example mine, very much focused on the operationalization of AI, which means pushing it into production - scalability could be how widely you can adopt it within the organization, to what extent you're able to incorporate it across the enterprise from an execution perspective. So it's non-trivial, and it's critical to identify up front, when we think about scalability - meaning how we adopt AI more prolifically within the organization - what our benchmark is, what, if I dare use the term, our KPI is, what we're actually after. Based on that you can start benchmarking against indexes or other organizations, and then identify the tactical steps necessary to meet that scalability objective.

Moderator: Ayesha, would you have the same definition of scaling, or a different one?

Ayesha Khanna: Well, look, the way artificial intelligence was done before was that somebody would read the cover of Wired magazine and decide they wanted to be the CEO of a tech company, not a bank. They would have these little PoCs, usually headed by some group innovation officer, and after a year you had these pilots littered like garbage across the entire institution. People started realizing it wasn't scalable because the pilots were not connected to a data infrastructure, and I think that's the big realization: to achieve scale you need to rethink your data infrastructure and how it is going to accommodate and be machine-learning-enabled, with streaming data and processing of unstructured data. The way all of us started off in computer science and software engineering was very different from the way you set that up.
What I'm noticing now is that a lot of CEOs want to do it right. Of course, as David said, what's the KPI - the KPI is still the business value - but the foundation is set right from the start: the data assets, the data pipelines and the metadata are repeatable across multiple projects, even if they're pilots, so they're not done in isolation from each other, and there's data governance around it so it doesn't become an unwieldy octopus. The appreciation of this is a huge change over the three years I've been working as CEO of ADDO AI, and I think we're finally seeing a mindset shift in organizations. It's going to take time, but the key is the data infrastructure more than the models - the models are actually almost easier once you have the right infrastructure in place.

Moderator: Thanks, Ayesha. And Shameek, from your experience at Standard Chartered, what do you think of scaling?

Shameek Kundu: First of all, I like the fact that Ayesha made me feel good and bad at the same time: I was a group chief innovation officer, but more importantly I was the chief data officer, which did require me to focus on exactly that plumbing, governance and all the relatively boring things that underpin it - so I'm both happy and excited at the same time. How do I define scalability? You started off with the example of Google or Alibaba. Now, could these companies function without AI? The answer is a resounding no, and I think that's the point. Unfortunately - only from a scalability perspective - most banks can still survive without AI. So to me, scalable AI is when AI is deployed at a scale that starts mattering to the organization, so that if it didn't work tomorrow, the bank would have serious repercussions. I don't think most financial institutions have reached that stage. They have in specific pockets: digital onboarding would be at risk in many organizations if the image processing didn't work, some aspects of the compliance function would be at risk, and certainly some automated investigation of fraud and financial crime. But for the most part, the test - does the CEO lose sleep over whether my AI is working today? - I don't think we've met. The CEO does lose sleep over whether somebody else will eat their lunch on the basis of using AI, but not yet at the level of: if my AI didn't work tomorrow, I'd be in trouble. And I think that is the test of scalability.

David Hardoon: Just to jump in - not taking away from the points made, but I think it goes even one step further back in the definition of scalability: what's the objective? Hearing all the points being made, I can still call out people whose KPI is how many PoCs they're doing, and if that's ultimately what they're measured on, we shouldn't be surprised that that's what they do. Again, that's not necessarily the wrong thing - it is what it is - and it goes back to the objective: is AI and data science something that's there, and if it works and it's relevant we then see how to combine it, or do we say - to use a colloquial term in Singapore - 'die die' it must work, it has to be part of the underlying business strategy? That underlying predicate is absolutely critical, and from there you get to a definition of how to make sure it's scalable within the organization and what the purpose of that scalability is.
Moderator: Thanks a lot, David, and I really like your definition, Shameek. Taking that definition, am I right to say, from what I've heard from the three of you, that in general financial institutions are still far from scaling AI - that we're really at the beginning? Would you agree or not?

Shameek Kundu: Yes, I would agree.

Ayesha Khanna: So would I. There's a lot more in the media that people claim they're doing, and then you go inside and see they have neither the infrastructure, nor the political mandate, nor the talent. But there are some rumblings in most banks now, and if they get the right people to help map out their journey, while providing business value along the way, they can go a long way. But there are very few banks - in our experience, and we also work with banks in Asia and the Middle East - doing nearly as much as they claim in public.

Shameek Kundu: On the basis of my newfound experience on the other side, as a potential vendor to banks and insurers on how to make their AI trustworthy, I'd say five out of the twenty senior data and AI executives I've spoken to around the world are at that level. So I don't think it's as bad as ten or twenty per cent - at least a quarter of the banks I've spoken to are there - but the vast majority are still not at the stage where AI matters enough for the CEO to lose sleep.

David Hardoon: I was just about to add: at the end of the day, if you're not talking about AI as DevSecOps, if you're not delivering it almost like a factory and assembly line, if you're not talking about scalability and execution, if you're not measuring your AI in terms of P&L impact, then no. If you are doing those things, that's absolutely the right path a financial institution needs to take. It's no longer a fancy, it's no longer an innovation allure; it's the heart and soul of the organization, part of what it does.

Ayesha Khanna: I totally agree with that. All three of us are making the same point: it has to bring value and be deployed at scale. It's not a fancy anymore - I like that word.

Moderator: When we were talking a few months ago, David, there was a word that kept coming back: operationalization.

David Hardoon: Yes - operationalization, scaling, industrialization. We're at the stage - and this is a totally personal view - where we've talked about AI enough. We don't need to keep debating whether it's going to deliver value or be useful; if, even after Covid, there are still people who aren't certain of the value of digitalization and AI, I give up. The conversation now is about operationalization: how do we take it and make it the lifeblood of the organization?

Moderator: So let's talk about operationalization and scaling - key success factors and challenges. What would they be? Ayesha, do you want to start?

Ayesha Khanna: This notion of operationalizing takes into account that the so-called AI team is not three data scientists with PhDs; it has seven different roles, from data engineer to MLOps engineer to data architect. This diverse team sets up a machine-learning-enabled infrastructure that includes the data lakes, the data warehouses, the metadata management, the self-serve analytics and then, for the more hardcore custom machine learning modules, what can be called an intelligent data platform. It is the plumbing on which new systems, new products and new services must rest, and it can be built iteratively - the nice thing about software engineering is that you don't have to do it all at once - but it must be reusable. Scale is really about reusability, and we're seeing the three cloud providers, and others jumping in, bring this to the fore; many banks are adopting them, just like the telcos did. On-premises is being replaced by hybrid systems; even where there is data localization, people want to be faster, to use pre-made modules, not to have to look after hardware, and to create innovative services for their customers. It takes a whole different view of traditional IT. Somebody was talking to us about process re-engineering, and I said that's what I was doing in New York with software; my team is data-first. It's data-driven process engineering: everything comes down to data, and how you save it, process it, ingest it and ethically use it defines the next phase, which is the AI modelling.
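As an aside to Ayesha's point about reusability, here is a minimal sketch, in Python with scikit-learn, of the "define the feature pipeline once, reuse it across projects" idea. The column names, data and models are hypothetical, and this is an illustration of the pattern rather than anyone's actual platform.

```python
# Sketch of a reusable, governed feature pipeline shared across projects,
# instead of ad hoc notebook code per pilot. All names and data are illustrative.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

NUMERIC = ["income", "account_age_months"]
CATEGORICAL = ["segment", "country"]

def make_feature_pipeline() -> ColumnTransformer:
    """Shared preprocessing used by every project that touches customer data."""
    return ColumnTransformer([
        ("num", StandardScaler(), NUMERIC),
        ("cat", OneHotEncoder(handle_unknown="ignore"), CATEGORICAL),
    ])

# Project A: a churn model reuses the shared feature definitions.
churn_model = Pipeline([
    ("features", make_feature_pipeline()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Project B: a different model, same governed feature definitions.
upsell_model = Pipeline([
    ("features", make_feature_pipeline()),
    ("clf", LogisticRegression(max_iter=1000)),
])

if __name__ == "__main__":
    df = pd.DataFrame({
        "income": [52_000, 81_000, 30_000, 64_000],
        "account_age_months": [12, 48, 3, 27],
        "segment": ["retail", "priority", "retail", "sme"],
        "country": ["SG", "SG", "PH", "GB"],
        "churned": [0, 0, 1, 0],
    })
    churn_model.fit(df[NUMERIC + CATEGORICAL], df["churned"])
    print(churn_model.predict_proba(df[NUMERIC + CATEGORICAL])[:, 1])
```

The design point is the one Ayesha makes: the model at the end is the easy part; what scales is the shared, repeatable preprocessing and data definitions underneath it.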
Moderator: Thanks a lot. So, to summarize: a key success factor is really the infrastructure, the foundation from a technology standpoint, and being data-first. Shameek, what do you think are the key success factors to scale AI?

Shameek Kundu: Again, I'll agree with all of that, not least because I spent seven years trying to build some of it - not always with success, but with some, I hope. But I would add a second element, certainly for established organizations - those that are not digital-first or data-first, that have existing ways of doing business, which is the vast majority of banks and insurers - and that is trust. Trust within the organization: data scientists being able to convince the relationship manager that yes, you should listen to my neural network's recommendations when selling to your clients. Trust with your second-line model risk management or compliance teams, with your auditors, with your regulators, and most importantly with your customers. If you are unable to win the trust that this newfangled thing called AI is going to help - particularly when you're talking to people who've been doing this for a living all their lives, a fraud investigator or a credit analyst, and you say, actually, listen to this rather than your gut feel - you had better be prepared for that battle of convincing them.
And actually the bar should be that they don't even think about AI. Nobody says 'I'm making computer-enabled decisions at work today' - of course you're using a computer; although, as you mentioned, during the crisis maybe some people did go back to paper, we generally don't talk about using a computer at work. Getting to a stage where people stop mentioning AI, where it's just embedded into the way of working, is important, and to get there you need to focus on the trust element at the various levels I mentioned.

Moderator: Thanks a lot - so we add trust as well. David, what do you think?

David Hardoon: I don't want to repeat points - I fully agree with everything - so perhaps the only thing I can add, building on Shameek's point, is that it needs to become the next Apple: none of us thought about what's actually inside the iPhone or the iPad, and when we tell people it's using machine learning algorithms, the reaction is 'really?'. But the point I do want to make is that, at the end of the day, a financial institution is largely a for-profit organization, so there needs to be a link to profit. I'm always surprised by how many data science, AI or machine learning initiatives - whichever form you want to call it - aren't directly linked to a financial estimate and an impact on P&L. That results in situations where you may be doing something phenomenally interesting that has absolutely no impact whatsoever, and in one form or another that erodes trust, or the latitude of relevance you're given to 'try it out'. So there needs to be that link. It takes a proactive approach, and you need to think about how you structure it, but it can be done, and as we progress in maturity I believe that mechanism of measurement becomes more critical and more important.

Moderator: We've talked about the key success factors. From your experience working with very large financial institutions and even regulators, what have you seen as the main challenges?

Ayesha Khanna: I think there are two things. One is precisely that it does not connect to profit, as David pointed out - and profit is, understandably, the driving force of any capitalist, private company - and until it is connected to that, it is very difficult for companies to focus on it. We all get together on panels like this and say 'rah-rah, ethics', and then we go back to our stakeholders and they say, are you crazy? And if you do it, the stock market punishes you, because when you try to do explainable AI and invest in data governance, it sometimes slows things down and makes models less accurate, and they don't have the patience for that. That's where the regulator comes in, because the regulator holds itself accountable not to the market but to its citizens, and they have a very important part to play, both in nudging companies - through publicly signed pledges - but also frankly through fines, and ultimately by writing it into law. Right now we are at the stage of guidelines. Even GDPR, which I'm a big fan of in terms of its principles, is kind of vague when it comes to AI ethics, though it's going in that direction. We're still grappling with data: once we understand data privacy, data fairness and data bias, we can move on to the next part, which is ethics specifically in AI. They're interconnected, but what most regulators are actually writing into law, or fining companies for, is still related to data at the moment.
Shameek Kundu: If I may, I don't think that's accidental, because I actually don't like talk of ethical AI or data ethics in isolation. I have yet to come across, at least in financial services, an unethical algorithm that is good for business. I've seen, in two of TruEra's clients recently, very basic gender-biased models, and in both cases they were caught quite quickly, because they were clearly scoring a particular protected group as low credit-worthy when all common sense said that group was the more credit-worthy one. It's quite clear that bad algorithms are also bad for business. So, going back to the point about linking to P&L, we should stop worrying about ethical AI - or even ethical data - for its own sake, and just think about what is good for business. Uncontrolled sharing of data without consent is ultimately bad for business: if you get that kind of data you probably can't trust it, it's probably outdated because it doesn't have consent, or at some point you'll get into trouble with the regulator and be fined. If we think about selfish interest - but selfish with the long term of the organization in mind - we'll all make the right decisions, and I think regulators are providing a well-needed nudge in that direction: do the right thing, because it's good for your business as well as for your customers.
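As an aside to Shameek's example, the most basic version of the check that catches such a model can be sketched as below, in Python. The data, column names and the 0.8 rule-of-thumb threshold are illustrative only, and this is not a description of TruEra's tooling: it simply compares approval rates across a protected attribute and flags large gaps.

```python
# Sketch of a group-fairness sanity check: compare model approval rates across
# a protected attribute and compute the disparate-impact ratio. Illustrative only.
import pandas as pd

def disparate_impact(scores: pd.Series, protected: pd.Series, cutoff: float = 0.5):
    """Return approval rate per group and the ratio of the lowest to the highest."""
    approved = scores >= cutoff
    rates = approved.groupby(protected).mean()
    return rates, rates.min() / rates.max()

if __name__ == "__main__":
    df = pd.DataFrame({
        "model_score": [0.71, 0.35, 0.80, 0.42, 0.66, 0.30, 0.75, 0.38],
        "gender":      ["m",  "f",  "m",  "f",  "m",  "f",  "m",  "f"],
    })
    rates, ratio = disparate_impact(df["model_score"], df["gender"])
    print(rates)                                  # approval rate per group
    print(f"disparate impact ratio: {ratio:.2f}")
    # A common rule of thumb flags ratios below ~0.8 for review; in this toy data
    # every female applicant is rejected, so the model would be flagged immediately.
    if ratio < 0.8:
        print("flag for review: approval rates differ sharply by gender")
```

A failing ratio does not by itself prove the model is wrong, but, as in Shameek's anecdote, a result that contradicts common sense about the group in question is usually the first sign that something is broken.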
David Hardoon: I'll push back a bit on that. It's correct that you don't really have unethical decisions that are good for business, and I'm also not a big fan of this discourse about ethical AI, but for different reasons: you need a balanced relationship, a trilogy of sorts, between the financial institution, the regulator and, ultimately, the consumer. When you don't have a sufficiently regulated space - and people often recoil from regulation, 'oh no, it's going to be bad for business' - I think regulation puts in place the necessary guardrails that say: you may perceive this to be good for you, but it is going to have an adverse effect. Go back not too long ago and you had people saying, I'm not going to give loans to people in this postcode: it was good for business, but it wasn't the right thing. And, at the risk of opening a big Pandora's box, because it's unregulated one can borderline argue that Bitcoin is unethical, given everything happening in the background. So what's important is a proactive stance by financial institutions, and realizing that we're not reinventing the wheel - on that point I fully agree. It's not about ethical AI, or AI ethics to be more specific; it's simply corporate governance and culture. Whether a decision comes out of an AI algorithm or not, you need to make decisions you can stand by and believe are correct - and I use the word 'believe' because there will always be disagreement, there will always be different viewpoints, so at the very least the organization's corporate governance and its board can say: we believe we're doing the right thing. Then, from a regulatory stance, there is an obligation to come and say: this is the safety net we're putting in place for our citizens; going there may be okay in other places, but in our sovereignty, our jurisdiction, our operating environment, we do not think it's the right thing to do - or, on the contrary, we do believe it is. Again, not right or wrong, but - not to make too many analogies - a holy trinity that is important in maintaining that balance.

Ayesha Khanna: One of the interesting things is education. For software engineers, data scientists or product managers: where in the process do they stop, and when does it become almost mandatory for them to stop, and question whether something is ethical or not? That kind of critical thinking our kids are learning at school, and we kind of forgot it the moment we graduated. Singapore is now coming out with a certification for product managers that will put certain processes in place. It's much like agile, or the CRISP-DM framework, or any other process: once you embed asking that question - and the right questions, at the right point in time - into something that's standard, you get used to it. Engineers aren't traditionally asked these questions, and they don't ask them of themselves either, but having that space as part of the process will make a huge difference, because even though the investors, the shareholders and the CEOs are not always driven by ethics as much as by profit, the individual is - and we're seeing that now all over the world, with people beginning to rebel against what they think is injustice related to some algorithms.

Moderator: Thanks, Ayesha, that's a very good point, and I'll come back to it a bit later when we talk more about trust. Here I'd like to push you a bit more on the challenges large organizations face in scaling AI, because the reality is that some organizations do it extremely well. ZhongAn, for example, the digital insurance company in China created in 2013, is AI-driven from end to end, with about 3,000 employees serving 500 million clients - the largest number of clients of any company in the world. On the other hand, Stripe, created about ten years ago, now has a market valuation larger than almost any bank in the world. Those tech-first, modern companies have no issue using AI. But we've all been in banking, or work with banks - our audience works with banks - and the reality of AI that we see today is very, very far from this. So, a perhaps provocative question: do you think banks can make the transition into being AI companies, and what is really blocking them from being AI-first?

David Hardoon: If I may jump in first, because I think that's a bit of an unfair comparison, and it opens the door to something I strongly believe in. You're talking about tech companies that operated in an effectively unregulated space, which isn't the reality here; one can argue that the financial space is one of, if not the, most regulated, and that naturally creates the necessity to make sure you don't fall foul of the regulator.
That is fundamentally a very different mentality from 'don't worry, if something goes wrong we'll just fix it overnight'. Yes, that mentality results in extremely successful, extremely large organizations, but you don't know how many mistakes happened along the way, and it creates big mistakes too - to call out just one of many along the way. My personal view is that we need a balanced approach. To answer your question: I absolutely believe financial institutions can make the transition, and they need to realize that having that governance and regulatory approach is actually a value proposition that a lot of the big tech companies do not have. But they need to be open to innovation, open to experimentation, open to flexibility within a risk-managed approach. And I genuinely think you need a tech regulator too: the ability to say go ahead, go nuts, make amazing stuff, but with controls and governance ensuring that innovation is done safely. So I just wanted to balance those two.

Ayesha Khanna: I have to agree with David. Absolutely they can make this transition - in fact, all of our clients are very large enterprises that have been around for decades and are trying to make it - and they have an advantage, not only because they're used to regulation but because they have been collecting all this information for a long time. There are technical challenges - just the variety of the data they have everywhere, and organizing all of it - and then cultural challenges: a fear of automation and of tasks being replaced, and the political will not always being there. If they can overcome that, we see they can move very rapidly towards modernizing their infrastructure and get to the point where they're really getting the benefits from AI, so I'm pretty optimistic about their ability to transform.

Shameek Kundu: I agree with what you've both said. I'll come back to an earlier point: in some ways, the biggest barrier to AI that large banks need to overcome is perhaps to stop talking about AI. Most people forget that banks are built on elaborate statistical models in the first place - tell me one other industry so fundamentally dependent on models; maybe insurance, with its underwriting models, is the other one. So I think if we stop talking about AI, make it less threatening, and just focus on good, well-governed data... I do think, Ayesha, we might be overestimating the extent to which banks have data - all my ex-colleagues say 'oh, you have a lot of data', but the reality is you probably do want to open up to non-banking data, and that is one of the technical, not technological, jumps that large banks need to make. Other than that, everybody knows about legacy tech, everybody knows about the data governance challenges; those need to be overcome, and they are being overcome.
But really, overcoming the cultural barrier is about stopping the talk about AI and just talking about a slightly more advanced, or different, way of doing your credit models - one that happens to use a different technique and a much broader data set. Most people will agree with that. Tell them 'I want approval from the MAS or the PRA for a new scorecard involving a brand new modelling technique' - not so much. So reducing that fear is quite a significant part of it.

David Hardoon: I couldn't agree more. I strongly believe that one of the biggest - I won't say mistakes, but challenges - financial institutions create for themselves, especially when they walk into regulators, is to say, 'let me show you this machine learning algorithm we've built and how impressive it is'. That's usually when I put my head in my hands. The conversation should not be about the machine learning; it should be that we simply have a new algorithm, this is how we manage the risk, this is the risk we're exposed to. There's no need to reinvent the wheel: we have model validation, we have risk management, we have mitigation. That's why I said earlier that the governance element that's part of the DNA of financial institutions is, in fact, an unbeatable value proposition - because of that trust, going back to the point you were making much earlier, that you provide to the ultimate customer base.

Moderator: We have ten minutes left, so let's jump quickly to the trust side. You were talking about trust with consumers a bit earlier, Shameek, and that reminded me of a story - I don't know if you know it. I'm based in London, and last year we had what you could call the A-levels fiasco: an algorithm was used to set A-level grades - we don't really know exactly what was behind it, and I don't think it was anything very fancy in machine learning terms - but what was striking was Boris Johnson talking about a 'mutant algorithm', which sounded genuinely scary in terms of what was happening to children's A-levels. On the trust side, David, do you want to take a minute to talk about FEAT and what it was? I think that would be a good transition into how we operationalize these kinds of approaches.

David Hardoon: Sure, I'll give a very quick plug - and in fact Shameek was very much part of the process of creating FEAT. FEAT, in short, stands for fairness, ethics, accountability and transparency. It was introduced back in 2018, and I have to give that context because now there is a plethora of principles and guidelines from healthy organizations, which is essentially a good thing. But back then there was a realization that, while there was immense desire and willingness among financial institutions to go into this world of data, data science and AI, there was a fear: what is the regulated thing, how do I make sure I don't fall foul - because there was effectively a vacuum, to a certain extent, in terms of what is permissible and what is not. FEAT was essentially an attempt to keep things simple - fifteen principles, if I remember correctly - to create a guardrail: underlying guidelines and principles for how an organization should go about developing and, more importantly, operationalizing its AI and machine learning within the concepts of fairness, accountability, transparency and ethics, both internally and externally.
I'd mention three important points. First, when you read it, you don't even realize it's written for the financial sector; it's essentially wholesale good principles that anyone should have when developing and operationalizing AI - like I said, to me it's more like hygiene. Secondly - and this was one of the realizations that came out of it - a lot of those governance structures, councils and reviews were already in place; these were things we were already doing, we just didn't think about the fact that we were doing them, and the question was how to replicate or overlay them on newly enhanced or new models. And finally - to use the example of the 'mutant algorithm' - there is a necessity, in certain situations, to think about where it is different, and where it is different is the appeals process: there was a need to think about how you appeal a decision that was effectively a data-driven, algorithmic decision. Again, I'm not here to say what's right or wrong or how it should or shouldn't be done, but it highlighted that you need to think about it - and if you don't, that's when you have what I call those 'oops' moments. So the intention was to create a guide. Actually, I believe we don't need FEAT anymore - we've reached that level of maturity with the guidance - and I believe we need to go to the next level: we need regulation, because we now also know what should be regulated and what should be left alone.

Moderator: That was really insightful. Ayesha, I know you've been working a lot on this as well - what's the answer for you in terms of trust in AI?

Ayesha Khanna: For me - and I'm writing a book on this - the main thing with artificial intelligence, when you have institutions or governments, be they banks or Facebook or whatever, is the question of human agency, and the appeals process is an example of human agency: you can go in and push back against structures, whether institutions or AI algorithms. Every time I give a talk anywhere, a lot of people come to me afterwards and say: we understand the trends, we understand the charts, but what about me? What am I supposed to do? What should I read? What will happen to my job? If I get refused a mortgage, who can I go to? That is the question that is ultimately going to force all companies - a bit like what we're seeing with climate change - when people rise up, through their own 'AI activism' as I call it, to impose their agency on the use of AI when it significantly affects their lives. When we educate them on how to critically evaluate that AI, and on the judicial means by which to appeal it, that is the linchpin between living in a world of AI where just a few are very wealthy and enjoying the benefits while the others are passively affected by it, and a world that is ethical and just, where we all participate, accept and opt into these things. So for me, all of this boils down to that issue of human agency. Trust and explainability are one thing, but what you can actually do about it is also very important - and that's where explainability, which I'm sure Shameek is going to talk about, matters so much, because people don't understand it at all.
Moderator: Thanks a lot - and I guess that's a great segue to Shameek. I also wanted to ask you: why did you move from being CDO of Standard Chartered to TruEra, a startup specialized in explainability?

Shameek Kundu: It almost feels like a glide path from David to Ayesha, the way it happened. I started off with David, co-authoring the regulations - sorry, the guidelines - that he mentioned. By the way, David, we dropped the fifteenth one - it was 'thou shalt be legal' - so there were only fourteen in the end. So it started with principles, guidelines, internal guardrails. Then it moved to what you were talking about, Ayesha: we ran education sessions - in the end with about 500 people, most of them MDs and above, but quite a few of them data scientists and technologists - on our first experiments with explainable and well-governed AI. There were compliance and management teams, country CEOs, and lots of data scientists on the ground, and education - awakening the human, asking them to challenge the models whether they were building them or being affected by them - was a big part of that. But at some stage I realized that while both of those are essential, they are not sufficient, because ultimately - a bit like DevOps or SecOps or anything else that needs to be operationalized, to go back to David's word - you need tooling and automation that embeds it. You might say, somewhat self-servingly for a California-based startup, that software is the solution to software. In many ways you do need technology that helps the humans who are accountable for that technology look at it reliably and ask: what's happening inside? What is determining whether this person gets their insurance or not, or gets the loan or not? How do I ensure this model is not going to break as soon as the data moves this way or that? Can I stare at this particular outcome from the model and say, yes, I can stand by that as a human? David had a principle called justifiability: it doesn't matter what the machine says - can you, as a human, defend it, justify it? And, quite importantly, as a special case of justifiability: if there is a particular bias - for example, if women appear to be safer drivers in Europe - is that something we can stand by, or do we have to correct for it even though the data seems to suggest it? So my proposition is that regulatory guardrails, internal or external, and people education, awareness and human agency need to be complemented by a third pillar - technology and tooling - to embed these practices into the standard way of doing any kind of model development, review and rollout.
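As an aside to Shameek's point about tooling, the "what is determining whether this person gets the loan" question is usually answered with per-decision attributions. Below is a minimal Python/scikit-learn sketch under that idea: the features, data and model are hypothetical, the decomposition only works this simply for a linear model, and it is a toy illustration rather than TruEra's product.

```python
# Sketch of per-decision explainability for a toy credit model: with a linear
# model, an applicant's log-odds can be split into per-feature contributions
# that a human reviewer can inspect and justify. All data is illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

FEATURES = ["income", "debt_ratio", "missed_payments"]

X = pd.DataFrame({
    "income":          [30_000, 85_000, 42_000, 61_000, 25_000, 95_000],
    "debt_ratio":      [0.55,   0.20,   0.40,   0.30,   0.65,   0.15],
    "missed_payments": [2,      0,      1,      0,      3,      0],
})
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = loan repaid

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain(applicant: pd.Series) -> pd.Series:
    """Per-feature contribution (in log-odds) for one applicant."""
    z = scaler.transform(applicant.to_frame().T)[0]
    return pd.Series(model.coef_[0] * z, index=FEATURES)

new_applicant = pd.Series({"income": 48_000, "debt_ratio": 0.45, "missed_payments": 1})
contrib = explain(new_applicant)
print(contrib.sort_values())           # which features push the decision down or up
print("intercept:", model.intercept_[0])
```

For non-linear models the same question is typically answered with model-agnostic attribution methods, but the justifiability test Shameek describes is the same: a human looks at the per-feature breakdown and decides whether they can stand by it.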
Moderator: Thanks a lot. Listening to you, it seems we've progressed a lot over the last few years in thinking about ethics and AI: a few years ago it was a lot of thinking and trying to understand the different frameworks, and now it's much more about how we make it happen at scale, which is quite interesting. I see we have five minutes left, so I'd like to jump into the people discussion - and you mentioned it, Ayesha: the main question is 'what about me, as a person?'. When we talk about financial services, of course, there is this big fear of machines replacing people; we see it every day. What's your answer to this?

Ayesha Khanna: The risk of automation is there - it is automating certain tasks - but for those people who are willing to take that automation, learn, upskill themselves and create more value in the company by partnering with AI, by using the output of AI algorithms or by using AI to create new services, their careers are in no jeopardy at all. It is those who resist it - and unfortunately that's usually people above a certain age, for whom it's very difficult to get a grip on because they are not tech natives or digital natives - who need some way to get over the hump. In Singapore, what IMDA has done is to say: you're going to have an internship or you're going to learn, and the government will pay a percentage of your salary, so don't worry about it - just go and learn and you'll still get paid. That's very important. When it comes to kids it's different: we can start teaching them at school and college. But for that middle-aged group we need a support system. And what we really need is for computational thinking to be mandatory, like reading, writing and doing maths, and we must include both boys and girls. What we have seen is that you don't need to become a data scientist, but you will certainly need to work with one at some stage in your life, and the ability to communicate with each other requires both the AI engineer and the business person to learn a little about each other. That has traditionally not been the case: we have not bothered to learn about technology as much as we should, whereas the techies have scrambled around trying to learn about business. They need to meet in the middle, because without that communication - and that's where ninety per cent of our client projects fail, when the business and the AI engineers cannot communicate because they refuse to learn about each other - you lose just the basic understanding that's necessary to appreciate each other and to think expansively and out of the box.

Moderator: Thanks a lot, Ayesha. And David, what do you think - how can people not be left behind in an AI world, especially if they're not technical?

David Hardoon: First and foremost, the principle and philosophy of 'leave no one behind' is critical. Just as we're all using Zoom without really thinking about it - even though for some it's been a complete change - and we use Word without thinking about it, and, as Shameek mentioned, you don't say 'I'm doing my work on the laptop', it's just part and parcel: that needs to be the guiding principle, and we need to reverse-engineer from it. But I also want to talk about the jobs, because I feel somewhat responsible here - I always get accused by my best friend that I'm going to be the reason Terminators end up wandering the earth. I'm an empirical person, so about two years ago we - it was IBF, the Institute of Banking and Finance, and MAS - commissioned a study, and I said: if AI and data science are really going to go out there and destroy all these jobs, there needs to be empirical evidence of it; we need to be able to see it.
So we actually conducted that study, and the objective was to really understand what is going to happen. And guess what: while some tasks will be displaced by automation, fundamentally the jobs don't go away - the tasks that constitute the jobs change. It's a freely available report covering over 170 different jobs in the financial sector, down to task level, with the implications and the impact that automation, data science and AI will have on them - and, to Ayesha's point, what progression and career path one should take, which I think is really, really important. Finally, I do not take the fears and concerns lightly - I take them absolutely seriously, and organizations need to take them seriously. I've heard many times from the C-suite: 'I don't understand why people are worried, why people are afraid.' Put aside the so-called irrationality: it's real, it's there, and it requires a proactive approach to addressing it and showing that the goal is not to leave you without a job; the goal is to progress the organization, enabling it with new tools and capabilities, and also to enable you with new tools, new capabilities and new knowledge.

Shameek Kundu: Very quickly - I won't add to what David and Ayesha have said because I agree with all of it, but there is one other angle that's missing, and this is not for financial services alone, it's across the board. There's a very good book called Head Hand Heart by David Goodhart, which talks about valuing the professions that are not necessarily valued. Going back to stopping all this talk about AI: almost paradoxically, by talking all the time about the power of AI and digitalization, we scare people. The reality is you do need many, many other roles - yes, they have to be AI-enabled, yes, they have to learn how to work with the data scientist - but we need to shift the conversation a bit, so that we don't stress out everybody who isn't building a model or writing code into thinking they don't have a career. Actually there's a lot, even in financial services, even in a digitalized world, that requires other skills, and frankly it's up to us as a society and as an industry to value those pieces - because without that, we risk destroying the goose that lays the golden eggs.

Moderator: Thanks a lot. We're reaching the end of this session - one minute for a very quick question, so five-second answers from all of you. I'll start with you, Ayesha: one example of a very interesting AI initiative anywhere in the world?

Ayesha Khanna: Because we're beginning to work in healthcare and pharma, I'm very impressed by companies like Biofourmis, which use artificial intelligence and data from sensors to intervene at the right time, so that patients who've had heart attacks don't end up in hospital again. I think some of the most exciting applications of AI will be in healthcare, and that means a lot to me.

Moderator: Thanks a lot. David?

David Hardoon: I'll take a leaf out of Shameek's book: we need to value cement - applying AI to predict the compressive strength of cement 28 days in advance. All these buildings we're sitting in that we don't even think about - there's a whole process behind them.
And if that can be enabled - and they won't even call it AI, just processes and capabilities - in a very, very traditional operation (never mind finance, we're talking about cement manufacturing), then absolutely; and it's not only supporting the organization, it's enabling sustainability and helping meet measurable sustainability goals - and I emphasize the word measurable.

Moderator: Thanks, David. Shameek?

Shameek Kundu: I'll cheat a bit - I'm not sure it entirely counts - but technologies that help manage all the concerns around data privacy, portability, sovereignty and indeed algorithmic transparency, some of which also use AI. The only way to get our arms around this problem is robust technological solutions, for example around confidential computing. I find that area quite fascinating: using technology to protect people from the ill effects of technology.

Moderator: Thanks. And a very fast, five-second answer - I'll start with you, Shameek: if you had 10 million to start any AI project, what would it be?

Shameek Kundu: Using AI to solve some of the issues around climate change.

Moderator: Thanks. David?

David Hardoon: Food - was that five seconds? - food sustainability. There are phenomenal solutions out there, but not enough on how to leverage AI to really help with food production and sustainability, something we all need on a day-to-day basis.

Moderator: Thanks. Ayesha?

Ayesha Khanna: Healthcare. What we've seen DeepMind do with protein folding, and what we're seeing with mRNA and potentially vaccines for cancer - this is so exciting, and I would definitely invest in that and want to be part of it.

Moderator: I think that's a great way to end this session. Thanks a lot to the three of you, and thanks to the audience for listening to us - it was super insightful. Best of luck to everyone in this new AI world. Bye.
Info
Channel: SibosTV
Views: 115
Rating: 5 out of 5
Keywords: Technology
Id: x-AjADP0B18
Length: 55min 50sec (3350 seconds)
Published: Mon Mar 29 2021