Artificial intelligence – machines and humans

Video Statistics and Information

Captions
[Music] We've gotten used to the fact that in our pockets, handbags, or hands we constantly have ready access to all conceivable kinds of information. In a matter of seconds we can be updated on the latest news, find out where our friends are, or find out whether the restaurant we want to go to has vacant tables. But in the years to come, artificial intelligence will be even closer than that: it will be living in our possessions, in our bodies, and helping us think. [Music] It senses how we feel and sees to it that our refrigerator, the weather report, and our outerwear are all working together for tomorrow. With our cognitive partner, perhaps in the form of a hologram, we can have a dialogue when we need to make major decisions. It will be a whole new world, but at the center of all of this, among the future technologies and robots, will be us humans, still standing. What role will we play in this future, and what will the road there look like in terms of ethics and morals?

Today we are in what is known as second-generation AI research. The first generation, in the 80s and 90s, focused on computer science and logic. Today's AI focuses on machine learning, meaning that AI systems learn from large amounts of data with the help of such things as neural networks and deep learning. But at the same time, we are knocking at the door of the third generation, which concerns how we humans also become part of the AI system. This is why the Knut and Alice Wallenberg Foundation is now investing almost 5 billion Swedish crowns, nearly 600 million U.S. dollars, in support of AI research in Sweden, so that the technology that will change our world can take shape right here.

When we talk to Google Home or Siri today, we can get help calculating numbers or find out what the week's weather is going to be like. But soon you'll also be able to ask questions such as "Where are my sunglasses?", and the personal assistant will in turn be able to communicate with all the other sensors in the home and actually give
you an answer as to where exactly your specific sunglasses are, and not someone else's lost pair, but also understand that the glasses it sees are exactly your glasses, the ones you've been wearing for some time. "So this ability to combine sensor information with services, I think, will open up a lot of possibilities." [Music]

Amy Loutfi wants to teach robots to see and understand the world as we humans do, but the road there is full of difficult trials. "The reason this is challenging is that when we use sensors, whether they are distributed in the environment or even on a robot, those sensors often give us information that is just numerical. Think, for example, of a picture: it's just a bunch of pixels, and the challenge is how you augment that information. If you imagine you're looking at a picture of an apple, how do you go from these pixels to also understanding all of the interesting properties that apples have: that it can be held in your hand, the sound it makes when you bite into it, the sourness of its taste, and what you can do with it? That's really the challenge I'm trying to look at: taking all of this sensor information, putting it together, and learning how to represent it for an artificial system." [Music]

Amy's lab is a training camp for robots, and a seemingly simple game, like hiding a ball under a cup and then answering where the ball is, is still too complicated for a robot. There is a lot it has to learn. "One way artificial systems can learn about this is, first of all, to create not just a category, not only to know that this is a cup, but to know that this is a very specific cup, this exact cup, and, if I have three different cups, to keep track of each cup as it moves. The other thing it has to start to learn is that just because you've seen, for example, a ball which is here, objects don't magically disappear, and that if you no longer
see it, most likely one of the objects blocking it contains it. This is what's called reasoning: it's no longer only about learning from observations but also about reasoning over the observations, and that's really the fundamentals of AI."

"Hey Pepper, how are you doing?" "Hello human, I'm super good, thanks, and you? How are you?" This is Pepper, and as a final step in going from perception with sensors to creating meaning from what the sensors and cameras measure, Pepper will now be tested on interactions with humans. "I'm thinking an important part of the interaction is both eye contact but also gesturing toward where the ball is, so I think we should run an experiment now where we show the ball, put it under the cups, and then let Pepper see where the ball is." "Yeah, okay." [Music] "Okay Pepper, where's the ball?" [Music] "Not bad!" [Laughter] This time Pepper failed at finding the ball, and Amy's work to enable robots to work among us humans will continue. "In 30 years, what I envision is that we will have a lot of machines that are autonomous, meaning moving around us, making decisions, changing the world around us. A very important part of that is to give machines the ability to perceive and sense their environment, and to understand it in a way that is coherent with how we understand it, so that we can work together." [Music]

In Lund, two researchers with completely different backgrounds are at work: Ingar Brinck, a philosopher, and Christian Balkenius, a robot builder. Ingar and Christian want to make it possible for humans and robots to live side by side in an open and equal way. In the past, robots have been used as tools, but when they take the step to becoming equal participants, we will end up with a completely new reality, which will require new solutions. "Traditionally, the design of human-robot interaction has focused on trying to mimic human cognition and reproduce the cognitive capacities of humans. The way we see it,
things should be turned upside down, so to speak: instead of starting with what's inside the head, we should start with the social context that robot and human share. What we need to get this going, of course, is not only a robot that is capable of asking for help but also a human being that wants to offer help, and then we get real interaction." In their joint research with Christian's robot, they perform experiments examining, for example, how two different arm movements by the robot are perceived differently by a subject. One movement, which the researchers call an efficient movement, is simply straight and fast; the second mimics a social and inviting gesture, where the robot raises its hand high to show that it wants to cooperate. [Music] "And this is what we would like to reproduce in our robot. If the robot uses these social movements, so to speak, and we can make it do that in a way that prompts users to respond in kind and start to socially interact with the robot, well, then we're home; then we have established cooperation between the two without relying on emotional engagement, and so on."

AI's brain is like a black box with a digital consciousness, and this black box needs somewhere to train. [Music] At the Visualization Center in Norrköping, Anders Ynnerman and his colleagues work to build entire virtual worlds where AI can practice. "The way we are working with machine learning and artificial intelligence here is actually twofold: one way is to try to make machine learning better, and the other is to try to understand how the black box of machine learning actually works, using visualization." Of course, it's possible to film real environments and have AI train on the images, but the training environment is much better if the researchers themselves build these training worlds, where everything can be controlled very precisely for algorithm design. Under the leadership of Professor Jonas Unger,
Norrköping is therefore working to build up the exact environments that the scientists want AI to be able to train in. These are indoor environments as well as entire cities, and here the researchers can precisely influence the weather, the reflections on cars, which buildings should be present, and any animals they want to appear. In this way, an optimal training environment is created from virtual systems. But how does the AI actually learn to drive in these environments? The AI trains itself. AI has something called neural networks, and in them AI mimics our human way of learning: the AI trains, makes mistakes, and tries again. "I think this is a new paradigm in computing and in understanding the world we live in, and I think it's very, very important that we humans put ourselves in the context of AI and machine learning." [Music]

But before self-driving cars can roll on our streets, they must also be capable of learning ethics and morals. A self-driving car may need to make an ethical decision: should the car, for example, give way to a dog that runs across the road and thus risk the passenger in the car? There are no clear answers to these kinds of questions yet, but further north in Sweden, researchers are working on which morals and ethics AI must have in order to make the best choices in the human world. "AI will shape the future and fundamentally change our lives, and therefore we should proceed slowly." So says Virginia Dignum, a world-renowned researcher in AI and ethics. Ethical principles in AI are actually about us, about the people who develop and use AI. Virginia is researching methods to ensure that AI and autonomous systems are shaped in a way that does not conflict with our human values and ethical principles. "Of course AI can do a lot of great things, and we already see a lot of very good applications of AI, but we really have to look at the impact of these systems as well, and at how we want these systems to work in a way that
benefits everybody." "And here you see, for example, one who was infected and is still moving around and not isolating at home: one of those who are not following the rules as they should. And here you see it in the graphs as well." In Frank Dignum's research, AI helps to simulate how the coronavirus spreads among the inhabitants of a society. But when it comes to decisions that are crucial to our lives, which ones do we really want to hand over, and how do we make sure that the choices AI systems make go hand in hand with what we humans actually want? "If you are building, for instance, medical robots that can help someone keep to the best possible diet and take their medicine on time, because that's good for them, but the user doesn't really accept it, how can we align these two things? It's really not just what is good for you but also what you yourself want, and we have to be able to combine these two aspects." But programming ethics and morals that all people think are right will be difficult, because even we humans don't agree on what is right and wrong in every situation. How, then, should we be able to teach the robots? Even today, AI has affected real life in a discriminatory way. A very recent case is from the United Kingdom: because of the corona crisis, high school students were not able to take some of the exams they needed for admission to university. Instead, an algorithm was used to calculate the grade each student would have received if the student could have taken the test. The algorithm based its prediction on previous grades and on how reputable the student's school was. It turned out that students who had good grades but went to a lesser school were discriminated against. "These types of issues make an impact on people's lives, and that's why it's not about the AI itself, it's not about the technology; it's really about being aware of what are
we doing with this technology. We really need to be very, very aware, as researchers and as developers of AI, that our systems make a real impact, and we have to make sure that this impact is the impact we really want it to be." Exactly what our future will look like, and how long it will take before we live with robots among us, is thus far difficult to predict, and the challenges will continue to grow. But here in Sweden, researchers are working to make it as good as possible: our future, with humans and AI side by side.
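The cup-and-ball game described above boils down to two abilities: tracking a specific object instance (this exact ball, not just "a ball") and inferring that a hidden object is inside whatever occluded it. A purely illustrative Python sketch of that reasoning step (not Loutfi's actual system; the class and method names are invented):

```python
class ObjectTracker:
    """Toy sketch of occlusion-aware object tracking: objects don't
    magically disappear, so an object that vanishes from view while
    something passes over it is inferred to be under that occluder."""

    def __init__(self, objects):
        # Each instance has its own identity; map object -> occluder
        # currently hiding it (None means directly visible).
        self.hidden_by = {obj: None for obj in objects}

    def update(self, visible, moved_over=None):
        """Process one frame.

        visible: set of objects seen directly this frame.
        moved_over: optional dict mapping occluder -> object it covered,
        e.g. {"cup_2": "ball"} when cup_2 was placed over the ball.
        """
        moved_over = moved_over or {}
        covered = {obj: occ for occ, obj in moved_over.items()}
        for obj in self.hidden_by:
            if obj in visible:
                self.hidden_by[obj] = None        # seen again
            elif obj in covered:
                # The object vanished as an occluder passed over it:
                # infer containment instead of forgetting the object.
                self.hidden_by[obj] = covered[obj]

    def where_is(self, obj):
        occ = self.hidden_by[obj]
        return "visible" if occ is None else f"under {occ}"


tracker = ObjectTracker(["ball"])
tracker.update(visible={"ball"})                       # ball on the table
tracker.update(visible=set(), moved_over={"cup_2": "ball"})
print(tracker.where_is("ball"))                        # prints "under cup_2"
```

A real system would also have to track the cups themselves as they move and swap, and would work from raw pixels rather than symbols; this sketch only shows the reasoning layer on top of perception.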
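The UK grading episode described above can be illustrated with a toy model. The real Ofqual algorithm was considerably more complex, but the failure mode is the same: blending a student's own record with their school's historical results penalizes strong students at historically weak schools. All numbers and weights below are invented for illustration:

```python
def predicted_grade(student_prior, school_avg, school_weight=0.5):
    """Hypothetical blend of a student's prior grade with their
    school's historical average (0-100 scale). The weight is
    illustrative, not Ofqual's actual formula."""
    return (1 - school_weight) * student_prior + school_weight * school_avg


# Two students with identical prior grades of 90:
top_school = predicted_grade(student_prior=90, school_avg=80)
weak_school = predicted_grade(student_prior=90, school_avg=55)
print(top_school, weak_school)  # 85.0 vs 72.5: the same student record
                                # yields a lower predicted grade purely
                                # because of the school attended
```

The individual is judged partly by the group they belong to, which is exactly the kind of impact Dignum argues developers must be aware of before deploying such systems.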
Info
Channel: Wallenbergstiftelserna
Views: 3,829
Keywords: Artificial intelligence, AI, Knut and Alice Wallenberg Foundation, Wallenberg Artificial Intelligence, Autonomous Systems and Software Program, WASP-HS, Virginia Dignum, Amy Loutfi, Christian Balkenius, Ingar Brinck, Lund University, Umeå University, Örebro University, Linköping University, Anders Ynnerman, Frank Dignum, Robotics, Research, Science
Id: 4JTsPjaBjTk
Length: 20min 30sec (1230 seconds)
Published: Thu Apr 22 2021