Panel: What can Quantum do for AI?

Captions
We're delighted to have Aram Harrow, who will moderate "What can quantum offer to AI?" Aram is a professor of physics at MIT; he is a co-inventor of the HHL algorithm for solving linear systems on quantum computers, and he works as a PI on our project on quantum and AI alongside Kristan Temme, who is a research staff member, and Peter Shor, who is the Morss Professor of Applied Mathematics at MIT. So welcome, thank you very much. [Applause]

Great, thanks Lisa for the introduction and for putting this together. We're all excited about this panel. The idea is to have three quantum people and one machine learner have a conversation, from the beginning, about how these fields could work together. One of the last questions in the previous session, which was not from us, was about the substantial computing power that is needed, and we hope that quantum computing can be part of that answer; hopefully we'll get to some idea of how that might work. The way I thought we would do this: I'm going to moderate and introduce the whole thing, then maybe we'll have a little conversation between the four of us, and I want to leave a lot of time to open it up to questions, because I think many of you have questions about how quantum computers work and what we can hope to get from them.

Let me begin very briefly, in case you haven't heard of quantum computers — you've probably heard of them, but I put up a few equations that might be useful; they're pretty self-explanatory. The way I like to put it — because I would not explain the power of quantum computing in terms of entanglement, although I think that's a necessary part — is that quantum mechanics can be thought of as a generalization of probability theory. If you have a classical computer with n bits, that's 2^n configurations, and we already know it's useful to randomize things in classical computers; both empirically and theoretically there's a lot of evidence that randomness helps. You can think of it as having expanded the state space: instead of just 2^n possibilities, a probability distribution is a vector of length 2^n of real numbers that are non-negative and add up to 1. This lets you explore very large state spaces. If you want to do high-dimensional integration, you're adding up an exponential number of points, and it would be foolish to add them up one by one; you're better off sampling, which really means relying on a probability distribution to explore this exponentially large space. We take that for granted — we've learned it so well that we forget that using randomness is actually an advantage.

What quantum does is say: if you have n qubits, then instead of 2^n probabilities you have 2^n amplitudes. These are like probabilities and they're normalized, but they're normalized in the 2-norm: they're complex numbers and the sum of their absolute values squared adds up to 1. In fact, that they're complex doesn't really matter; if they were just positive and negative reals you would already get the idea. The reason that's special is that on a probabilistic machine you can take different paths through the machine, and if there are many ways of getting to a particular outcome, you just add up all the probabilities that reach that outcome.
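A minimal sketch (not from the panel) of the picture just described, using only numpy: a classical n-bit register corresponds to a length-2^n probability vector, an n-qubit register to a length-2^n amplitude vector normalized in the 2-norm, and amplitudes of opposite sign reaching the same outcome can cancel where probabilities could only add.

```python
import numpy as np

n = 3
N = 2 ** n

# Classical: a probability distribution over the 2^n bit strings (non-negative, sums to 1).
p = np.random.rand(N)
p /= p.sum()

# Quantum: 2^n complex amplitudes, normalized in the 2-norm (squared magnitudes sum to 1).
psi = np.random.randn(N) + 1j * np.random.randn(N)
psi /= np.linalg.norm(psi)
print(np.isclose(np.sum(np.abs(psi) ** 2), 1.0))   # True

# Interference: two "paths" into the same outcome.
# Probabilities only add; amplitudes of opposite sign can cancel.
path_a, path_b = 0.5, -0.5
print("probabilities add:   ", 0.25 + 0.25)                # 0.5
print("amplitudes interfere:", abs(path_a + path_b) ** 2)  # 0.0
```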
On a quantum computer it's the same thing: you add up the amplitudes that reach an outcome. But now, if some of these amplitudes are positive and some are negative, they can cancel out, so you have an interference phenomenon like you do with waves. (With these lights I can only see the first two rows, so I hope you're getting this.) They interfere like waves: waves can interfere constructively or destructively. If they're in phase — say both positive or both negative — that's constructive; if they're out of phase, with opposite signs, they cancel out. That's a richer set of phenomena than you get with things that only add: light interferes, paint just adds, the colors just add. That is the difference between quantum and classical computing — this richer set of possibilities — and in some cases we now know how to do really exciting things with it.

Peter Shor found an exponential speedup for factoring, for example, and that could have a lot of important implications for cryptography. There are also less-than-exponential but still big speedups for things like unstructured search: if I have a function with N possible inputs and I want to find which input gives me the maximum, and I can't use gradient descent — say there's no useful structure to exploit — then it sounds like I just need to evaluate it N times; on a quantum computer I can evaluate it roughly square root of N times. Those are the types of things where we know provably there's a speedup on a quantum computer.

The question is how you use this power for machine learning, and we have a research project together that's investigating this. The title of this session had a question mark in it, which reflects the fact that we don't really know the answer. Before I stop using my role as introducer and pass the torch to the others, I have a perspective on how I think machine learning people should think about the power of quantum computers. The quantum computers we're going to have soon are going to be pretty small, and even in the long run a qubit is probably going to be more expensive than a bit. We're going to have trillions of bits just on your laptop, and maybe millions of qubits — that would be an ambitious thing to have. So what can you do with something that is much smaller than your existing computer but has much greater capabilities?

One challenge is that we don't exactly know all the algorithms we get speedups for on a quantum computer. I gave you a few examples; there are more; in some cases we know you can't get a speedup, and in some cases we know you can. But I think it's possible to leave that to the side for the moment. Suppose you have a thousand-qubit device and it can only run a hundred thousand operations. For a classical computer that would be very pathetic, but these hundred thousand operations are extremely powerful: each one can process the data in ways far beyond what a classical computer could, maybe even exponentially faster. But it's still not very much data. Already that tells you something useful. It sounds like you might want to apply it to a problem like cryptography, where we have a small secret key and you want to extract it — you want to break the code.
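Here is a small statevector simulation of the unstructured-search speedup mentioned above — a sketch, not anything run on real hardware: after roughly (pi/4)*sqrt(N) rounds of the oracle plus "inversion about the mean", the marked item dominates the output distribution, versus roughly N evaluations classically.

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Statevector simulation of Grover's algorithm for a single marked item."""
    N = 2 ** n_qubits
    state = np.ones(N) / np.sqrt(N)           # uniform superposition over all N inputs
    oracle = np.ones(N); oracle[marked] = -1  # the oracle flips the sign of the marked amplitude
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state = oracle * state                # query the function (phase flip)
        state = 2 * state.mean() - state      # diffusion step: inversion about the mean
    return int(np.argmax(state ** 2)), iterations

best, iters = grover_search(10, marked=423)
print(best, iters)   # finds 423 with high probability after ~25 iterations, vs ~1024 classical queries
```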
Or maybe it's something like quantum simulation, which we know has exponential difficulty classically. It doesn't seem like something you would apply to classifying a billion images, because the quantum computer just can't store that much data, and it doesn't even have time to have it stream through. So I think this already tells you something: if you're a machine learning researcher asking what problem you might use a quantum computer for, it steers you toward problems that don't involve very much data but are very hard — a small input size and yet some extremely difficult computation. That, I think, is the opening of a conversation: once you have such a problem, quantum computing researchers can say, here's what we know about how to solve it, and we can try to come up with new ideas to improve it. I think that is a good starting point.

So with that in mind, maybe we can ask Yoshua: what problems can you think of that are really important to machine learning but involve not very much data — less data than a baby sees?

I haven't thought about this a lot, so this is going to be very naive, probably. Two things I thought about while you were talking. [Music] One was planning, because you don't need a lot of data to start with, and you only need one answer, which depends on the result of all those paths being somehow combined. I don't know if this is an avenue people have been thinking about. It's like the search problem, but you don't want to consider the paths independently like you would for searching for a key: there are lots of paths that join back up in the future, and you want some function of all the paths together. So it sounds like maybe it could fit.

The other thing I was thinking about is a bit more crazy. I think of a quantum computer as a machine that does computations that would be very difficult to approximate with a classical computer. There's an analogue I've worked on, which is that a deep net does computations that a shallow net cannot imitate easily. It happens, in the case of the deep net, that there are functions representable by a deep net which are really useful, and if you try to learn those functions with a shallow net it just doesn't work very well — it may even be exponentially expensive; we have theorems about that. So maybe something similar could apply to quantum computers. For this to work, you would need to be able to train the quantum computer to learn functions which classical computers maybe can't do easily but quantum computers can. I have no clue whether there is such a thing, but you might train them and see what happens. Now the question is: how do you train a quantum computer? I don't know — maybe Kristan, do you want to...?

I think that second question sounds a little bit like our paper, doesn't it? A bit, yes. So we've been looking at certain networks, basically built out of circuits, that we plan to train classically, where we essentially use the quantum computer to represent a little data in a very large feature space — I don't know if this is along the lines of what you were thinking.
But what we were focusing on is essentially certain properties that would make querying, or getting access to, this large feature space very hard classically. That doesn't necessarily mean it's immediately relevant to a given dataset, but it paves the way for thinking along these lines: is there a very efficient way of accessing properties of the data in a quantum setting that you wouldn't have access to in a classical setting? Or is there a way of representing data in a very high-dimensional space — coming in with a few points and blowing them up in a very specific way — which would be computationally too expensive to do classically?

So why do you want this mapping into a huge space? It sounds like a kernel machine. Exactly. Say, for instance, someone gave you a particular kernel that is able to represent features of your data that you couldn't represent classically beforehand. You could imagine there might be hard-to-evaluate kernels. A hard-to-evaluate kernel — you mean even the dot product is hard? That is exactly the point. There are many easy kernels you can look at — RBF kernels, other things — but imagine there might be kernels where you say, I'd like a feature space that characterizes something in your data that is very hard to get access to, and in principle the only way you would know how to do that classically is to write down your ginormous feature vector and compute the inner product by hand. That is one thing a quantum computer could essentially do for you, by estimating those inner products.

But how do you figure out what that feature space is? That is a difficult question. What I'm proposing is that you learn it. The way I think about it is not so much a feature space but a computing machine: it takes inputs and produces outputs, like usual neural nets, except that it has these particular computations which would be very difficult to emulate on a regular computer. Now the question is whether the type of computation going on there is actually desirable for solving real-world problems — but you could test that experimentally: train them and maybe find some tasks on which they do a lot better than some normal neural net, which is classical and doesn't exactly replicate the other machine but could potentially still do a good job on that data distribution. Correct. So in that picture you take the circuit, and before the computation you tweak it — yes — to match the data. Indeed.
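A minimal sketch of the quantum-kernel idea just discussed. The feature map below is a hypothetical toy (a product of single-qubit encodings), so the kernel it induces is actually easy classically; real proposals use entangling circuits precisely so that the induced kernel is believed hard to compute classically. The point is only the interface: data goes into an exponentially large feature space, and the device's job is to estimate the inner products.

```python
import numpy as np

def feature_state(x):
    """Toy 'feature map': encode each feature as a single-qubit state and take the
    tensor product. (Illustration only -- a product map like this is classically easy;
    proposed quantum kernels use entangling circuits to make the kernel hard to evaluate.)"""
    state = np.array([1.0])
    for xj in x:
        qubit = np.array([np.cos(xj), np.sin(xj)])
        state = np.kron(state, qubit)           # feature vector lives in a 2^d-dimensional space
    return state

def quantum_kernel(x1, x2):
    """Kernel entry K(x1, x2) = |<phi(x1)|phi(x2)>|^2 -- the quantity a quantum device
    would estimate by sampling rather than by writing out the 2^d-dimensional vectors."""
    return abs(np.dot(feature_state(x1), feature_state(x2))) ** 2

X = np.random.rand(5, 4)                         # 5 data points, 4 features -> 16-dim feature space
K = np.array([[quantum_kernel(a, b) for b in X] for a in X])
print(np.round(K, 3))                            # symmetric Gram matrix with ones on the diagonal
```

The Gram matrix K can then be handed to any classical kernel method (an SVM, Gaussian process, etc.), which is the hybrid division of labor described in the conversation.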
Of course, one question I have is this: I hear about all these qubits, and it would be much easier to train these things if they were continuous — if there were some kind of continuous machine. People have thought about continuous quantum machines; this is related to coherent states rather than qubits. People have thought about it, but it seems really very difficult to make them actually work. You mean to build them, or...? No — you could probably build them, but it's hard to make them actually do useful things. Well, if you train them, that's how they get to do useful things. Yes, but this is also why we use discrete classical bits rather than continuous classical things: we could build continuous classical machines, but it's much easier to design a gate that takes a 0 or 1 and a 0 or 1 and does something than to design a gate that takes a whole continuum of values and another whole continuum of values and does something continuous and useful.

The reason I'm proposing this idea is that I've been working on something — not quantum, but an analogue for analog circuits. One problem with analog circuits is that they don't do what you want, in the sense that the actual devices have very complicated physics that doesn't correspond to neat things like multiply, add, and then take exponentials; they do some other nonlinear computation. If we could only tweak their degrees of freedom, end to end, to do the right thing, we could train them. In my group we designed a procedure for doing this which doesn't require analytical knowledge of the physics going on under the hood — only that you can measure things locally, like what we call sufficient statistics, plus inputs and outputs — and whatever is going on, we tweak it so that it does a better job of fitting the data. Maybe something like this could be done for quantum circuits, but they would have to be continuous.

There's an interesting reason, in terms of continuous variables in quantum computing — as Peter said, a practical reason why people like discrete structures, as with regular circuits — and that is that quantum systems are inherently very noisy. You really have noisy quantum systems, and the way you fix this is by a procedure called quantum error correction. But I wouldn't try to fix it — that's the thing. You people in this field are always trying to fix the problem with error-correcting schemes, whereas if you think in a machine learning way, you think: no, I'm not going to make each piece do this idealized computation; I just want the overall circuit to do the right thing, and how can I tweak the different parts just a little bit so that it does?

Well, there's something people have been looking at. There's been a big focus, as quantum computers start to be built: we might have an algorithm paper that requires a million qubits, but we want to say what you can do with the 50 we have today, or 20, or a hundred, or whatever. One thing people look at is called variational quantum algorithms. What is that? What this means is basically that you have a bunch of knobs that you turn to design your quantum algorithm. The algorithm might be: shine a laser at this ion for time t1, then shine a laser at that ion for time t2, then adjust the frequency to some angle omega1 — and all these things, t1, t2, omega1, are parameters. So you run your quantum circuit, you see how well it does on your objective function, and then you take a derivative: in an outer loop you do gradient descent or stochastic gradient descent. Am I taking over the part of the question you wanted to get to? Go on — not at all, no, no.

And the advantage of that is that it may achieve some of what you want: if the quantum computer is imperfect and you're doing your gradients based on the observed behavior, then to some extent you're training for that particular machine and for all of its imperfections. Right — so we may have stumbled into something like that.
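A minimal sketch of the variational-quantum-algorithm loop just described, simulated classically with numpy: a two-qubit circuit with rotation angles as the "knobs", a cost evaluated from the circuit's output, and an outer classical loop doing finite-difference gradient descent. The specific circuit and target state are arbitrary choices for illustration, not any particular published ansatz.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def circuit(params):
    """Two-qubit variational circuit: Ry on each qubit, then a CNOT. The angles are the knobs."""
    state = np.zeros(4); state[0] = 1.0                       # start in |00>
    state = np.kron(ry(params[0]), ry(params[1])) @ state
    return CNOT @ state

def objective(params):
    """Cost = 1 - fidelity with the Bell state (|00> + |11>)/sqrt(2); an arbitrary toy target."""
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
    return 1.0 - abs(np.dot(bell, circuit(params))) ** 2

# Outer classical loop: finite-difference gradient descent on the observed cost.
params, lr, eps = np.array([0.1, 0.1]), 0.5, 1e-4
for step in range(200):
    grad = np.array([(objective(params + eps * e) - objective(params - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
    params -= lr * grad
print(round(float(objective(params)), 4), np.round(params, 3))   # cost near 0
```

On real hardware the cost and its gradients would be estimated from repeated measurements of the noisy device, which is exactly why this style of training can partly absorb the machine's imperfections.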
Our goal is to find something for near-term computers. Near-term quantum computers are going to be very noisy, or at least quite noisy, and if you want all the machinery of error correction, then for 50 logical qubits you need maybe a thousand real qubits. So for near-term quantum machines we're not going to be able to do error correction, and what we need are techniques like the one you're describing, where we work with the noise to get results rather than trying to eliminate it. That sounds interesting.

Actually, I'll say one thing about your first proposal, planning, as well. I mentioned that there's a square-root speedup for unstructured search; along with Peter's factoring algorithm, this was one of the first algorithms that got people excited about the power of quantum computing, and since then people have found quadratic speedups for many, many problems with little to no structure. You can think of unstructured search as taking the OR of N variables: is there a target in any one of these N locations? You can also do an AND-OR tree: take N variables, OR them in pairs to get down to N/2, then AND those to get down to N/4, and so on. What that looks like is a two-player game: I make a move, you make a move, and so on, and at the end some function tells us who won the game. It turns out quantum computers can also get a square-root-of-N speedup for that, and that comes a little closer to your planning. This would be planning in an adversarial environment, but if you replace the adversary with some model of nature, there could well be a provable square-root speedup, and then we might hope that heuristics would do even better.

I want to make a connection here with something I mentioned in my talk earlier, which is that there are probably never going to be enough qubits to represent the full richness of the state of the world you would be planning in. But I think we don't want to do that anyway: we want to build AI systems that plan in just the right dimensions and aspects of the world that are relevant to the plan — because I need to plan how to get back home in spite of the fact that my flight has been canceled, actually. Did that happen? It's a true story. And I don't need a lot of data for that. Well, maybe, maybe — but I think we need to rethink how planning is done, and I believe that will be needed anyway for AI.
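A small classical evaluation of the AND-OR (game) tree described above, as a point of reference — classically this can touch up to N leaves, while the known quantum algorithms evaluate such trees with roughly sqrt(N) leaf queries. The random leaf values are just placeholder data.

```python
import numpy as np

def evaluate(leaves, level_is_or=True):
    """Recursively evaluate an alternating OR/AND tree over the leaves, i.e. decide
    which player wins the two-player game whose positions are the internal nodes.
    Classical evaluation may need to look at a constant fraction of the N leaves;
    quantum algorithms evaluate such trees with about sqrt(N) leaf queries."""
    if len(leaves) == 1:
        return bool(leaves[0])
    mid = len(leaves) // 2
    left = evaluate(leaves[:mid], not level_is_or)
    right = evaluate(leaves[mid:], not level_is_or)
    return (left or right) if level_is_or else (left and right)

rng = np.random.default_rng(0)
leaves = rng.integers(0, 2, size=16)         # N = 16 leaf values: the "function" being queried
print(leaves, "->", evaluate(leaves))        # True iff the player moving at OR nodes can force a win
```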
So, I noticed our time just decreased a little bit. It's okay — that's the amount of time including questions. Okay, so after this we have another ten minutes for questions; I kind of wanted a lot of time for questions. Can we just start doing some questions and then we can jump in? Correct — so hopefully you've been texting them in. Shall we answer this one? I'd like twenty minutes for questions. Okay, go — then maybe we should keep going until then. Okay, so questions will begin in four minutes and forty-three seconds; prepare yourselves.

So, for the planning — I know so little about quantum things — how would you do something like what we discussed, where you're basically considering an exponentially large set of paths through some state space? Those paths intersect, in the sense that they come back to states that other paths visit, and there are quantities that get computed at each of the nodes on this graph. Even on a classical computer there's a combinatorial explosion of paths; the problem is that the number of nodes is exponential and we don't want to consider all of them.

I should say — I opened with this idea of wishful thinking — the wishful thinking is that the quantum computer can explore exponentially many things in polynomial time. We know that generically this isn't true, just as we think classical computers can only do it when there's some structure: we think P is not equal to NP, you can't do it in the worst case, but we have heuristics that work pretty often, a lot of the time. I think it's similar in quantum computing: it will not always work, but this gives us an idea of where to look for structure we can get a handle on.

Current methods in machine learning involve sampling, at the least, and a lot of the intelligence goes into sampling just the right paths — importance sampling, and things more intelligent than importance sampling, but in that direction. Basically you're learning a policy that figures out which directions to explore. And what I wonder is whether there are families of models that are not being looked at because they're too computationally expensive. It seems like there are a lot of things in machine learning where you train a fairly simple class of models and it does well because you have a large amount of data to feed in; I wonder whether for quantum we might look for a richer class of models — maybe harder to train on a classical computer, maybe one we've even given up on training classically — where in return we can't train it on as much data.

The only way I could see that working is if the quantum computer is not used for the whole training; it's only used for one particular computation, which maybe happens per example or per mini-batch. That's the advantage of planning: you actually need to plan at each time step, and that by itself is an expensive computation. If we need the quantum computer to look at the whole dataset, it's hopeless — let's not do it. But classical algorithms have already thought a lot about how you sample mini-batches, which are mostly random — though maybe you have something learned that tells you with which weights to do the random sampling, and so on. Maybe the thing closest to what you're talking about is active learning methods, or even exploratory policies, where you decide where in the state space you're going to go and get examples.
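A minimal sketch of the importance-sampling idea mentioned above — estimating an expectation by drawing from a smarter proposal and reweighting — shown here only as the classical baseline that a learned sampling policy (or any quantum speedup) would be compared against. The target, proposal, and test function are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target: estimate E_p[f(x)] where p is a standard normal and f only "fires" far in the tail,
# so naive sampling from p almost never hits the region that matters.
f = lambda x: (x > 4.0).astype(float)

naive = f(rng.standard_normal(100_000)).mean()

def log_normal_pdf(x, mu):
    """Log density of a unit-variance normal with mean mu."""
    return -0.5 * (x - mu) ** 2 - 0.5 * np.log(2 * np.pi)

# Importance sampling: draw from a proposal q shifted toward the important region,
# then reweight each sample by p(x)/q(x).
x = rng.normal(loc=4.5, scale=1.0, size=100_000)              # proposal q = N(4.5, 1)
w = np.exp(log_normal_pdf(x, 0.0) - log_normal_pdf(x, 4.5))   # weights p(x)/q(x)
smart = np.mean(w * f(x))

print(naive, smart)   # true tail probability is about 3.2e-5; the reweighted estimate is far more stable
```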
It seems like one thing we may want to think about is how a big classical computer can work with a small quantum computer. The big classical computer is the custodian of a large amount of data; maybe it can interpret the outputs of the quantum computer and say, oh, this is the type of data I need to feed into the quantum computer next to train it well. The other option is to combine a classical part and a quantum part. Let's go back to the hypothesis — and I have no clue if it's true — that there are some useful computations that can only be done, or done efficiently, by a quantum computer. We could extend this hypothesis: there are some computations, some functions, where the part that needs the quantum computer is small, and the rest could be done classically. And if we can do this thing where we train the quantum computer end to end, we could also train it end to end together with other pieces that are classical, and then it might work even though you only have a very small quantum computer. Yes, that would be good to look at.

All right — oh, that's a good one: what can AI do for quantum? Who wants to take that one on? Okay, I can. There are various places in the actual experiment where certain tasks — discriminating your measurement data, tuning up gates in a particular way — need to be automated, and that is a very good place for a classical AI algorithm to help out with what the quantum computing machine actually does. For instance, you can literally use binary classifiers that take the measurement data coming out of a quantum circuit and predict whether the qubit state was 0 or 1 when you measured it. You can build circuit optimizers that take data from previous runs and previous tune-ups and gradually tweak the circuit for the next run on the device. So there's a lot of use of classical AI techniques when you want to control a quantum experiment and when you want to read out data from quantum experiments. That's actually being done right now, and it's very useful.

So is it mostly predicting a few real values, or are you predicting more complicated distributions? It varies; there are multiple settings. In the simplest case it's just plus or minus one: is the qubit in 0 or is it in 1? It gets more intricate because these quantum devices have slightly noisy measurements: although your quantum state was in a given state, when you measure, the state collapses and produces some outcome, and there's crosstalk and noise among the different qubits. It's really helpful in that setting to use classical AI techniques to filter out the classical errors and classical noise you have when reading out the qubits at the end, after the quantum computation has happened — to use machine learning techniques for that.
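A sketch of the readout-discrimination task just described, on synthetic data: each measurement shot returns a point in the I/Q plane, the |0> and |1> "clouds" overlap because of noise, and a simple classifier trained on labeled calibration shots assigns each shot to a state. The data and the nearest-centroid rule are assumptions for illustration; real setups use more elaborate classifiers, but the interface is the same.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic readout data: each shot is an (I, Q) point; the |0> and |1> clouds are
# Gaussian blobs that overlap because of measurement noise.
n_shots = 2000
shots0 = rng.normal(loc=[0.0, 0.0], scale=0.8, size=(n_shots, 2))
shots1 = rng.normal(loc=[1.5, 1.0], scale=0.8, size=(n_shots, 2))
X = np.vstack([shots0, shots1])
y = np.array([0] * n_shots + [1] * n_shots)

# Train a nearest-centroid discriminator on a labeled subset of calibration shots.
train = rng.random(len(y)) < 0.7
centroids = np.array([X[train & (y == c)].mean(axis=0) for c in (0, 1)])

def classify(points):
    """Assign each shot to the closer centroid (0 or 1)."""
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

acc = (classify(X[~train]) == y[~train]).mean()
print(f"readout assignment accuracy on held-out shots: {acc:.3f}")
```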
Can you also use machine learning for error correction — for decoders? People have indeed thought about using machine learning to come up with decoders for error correction. If you go back to the quantum coding setting, it works in a particular way: you measure certain syndromes, and then there are ways of using machine learning techniques to learn decoders for the noise; that is also existing work. How do you get the ground truth for that? The way you would do it in that setting is basically that you prepare particular logical states, so you know what you want to get out, and then you train your decoder on that. Yes.

I should mention that the word "quantum" in some contexts just means quantum computing, but in my department everyone is doing quantum something-or-other, and there was recently a meeting in the physics department where everyone who had some use for, or interest in, machine learning came together. People were using it to analyze LHC data, to predict material properties, to model nuclear physics — it's kind of amazing how almost every branch of physics, and physics is mostly quantum physics, is using AI in some way. So even beyond quantum computing, there's a lot of interest in using AI for quantum physics applications.

I guess we don't have to always take the top question — they're pretty close, and anyone can grab any of these questions at any time. Should we talk about rethinking the computing stack? Okay, Peter, do you want to take that? Yeah — so the classical computing stack is very well worked out. For quantum computers, we're part of an NSF grant that is going to think about how you build a quantum computing stack, and one of the things we've realized is that the different pieces of the stack are much more tightly connected than they would be in a classical computer, and somehow you have to design the stack to take advantage of that. One interesting part of it is that there's a lot of classical computing circuitry that goes into it. Kristan mentioned error correction: you have to measure, diagnose, and correct the errors and feed that back in as the computation is running. And maybe your quantum computer is at one or two kelvin, so you have classical circuitry that has to communicate with the room-temperature computer. There are a lot of challenges there.

As a parenthesis: machine-learning-based software also doesn't fit well in the standard software engineering framework, because, for example, the pieces interact a lot with each other — they can do the wrong thing and yet overall everything looks fine — so the traditional ways of testing and independently debugging pieces don't really work well with machine learning. So there's also an effort to rethink the computing stack for machine learning, which is not trivial. Yeah, and one of the more difficult things about the quantum computing stack is debugging, because if you measure something you've affected your computation. You really need to make very careful measurements — measure certain things that don't affect the computation, which is basically a piece of quantum error correction — and that's really difficult to figure out how to do in all cases. I guess it's already an issue with quantum experiments: if you want to build a magnetic field detector that's extremely sensitive, often the best thing to diagnose it with is the detector itself. So that's definitely a new challenge in building quantum computers.
Okay, so: we talked about different approaches. Sure. There are different approaches to quantum circuit development — IBM, Oxford, photonics, et cetera — and the question is whether one type has any real advantage for AI over another. Well, for classical computers it really doesn't matter what the bottom layer is; they can all more or less do the same thing, and the same is more or less true for quantum computers. The big difference between the architectures, I think, is how easy it is to get a piece of information from one part of the quantum computer to another. There are architectures where, at least for small computers, everything is very well connected, whereas in some superconducting architectures you need to pass a piece of information to the next qubit, and the next, and the next, all the way along the chain before you can get it from one side of the computer to the other. These things will have a big effect on which algorithms are best for which architecture; I'm not sure how that's going to interact with AI. It may also be that for a large quantum computer some of these differences wash out, because of the requirements of quantum error correction. We were talking before about how it's a little bit like a rocket ship, where 95 percent of the mass is fuel: a big quantum computer may spend 95 percent of its time correcting errors, so in that sense all the architectures will have to come to terms with that, and the resulting connectivity will look very different from the underlying physical one.

So there was a question: can quantum computing be applied to Monte Carlo approaches for Bayesian inference and optimization? Which is related to my earlier question about planning — it's a similar kind of question, actually. So there's a method, as Aram explained earlier, called amplitude amplification, which specifically makes use of the fact that you have both negative and positive numbers, and it can be used in many classical Monte Carlo routines by providing a quadratic speedup in burn-in and in mixing times. There's a direct route: if you have a classical Markov chain you want to run for various tasks, there's a way of using a quantum computer to (a) speed up the burn-in time and (b) get a quadratic improvement in the sampling complexity.

One question I have about this: say you want to do a simple Monte Carlo integration — is the integrand going to be the thing the quantum computer computes? Because in general it's going to be hard to put it into your quantum computer. It depends — all of these basically require a really large computation. As Aram and Peter mentioned, there are goals for the near term, but this type of speedup is something you would look for with large quantum computers, once you have one of reasonable size. But then the answer to my question is yes: we would have a quantum computer that implements f, the integrand we want to sample from. Yes — you have a circuit that essentially queries that function, and then there are various techniques to boost the probability of seeing certain events.
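To make the scaling in this exchange concrete, here is a plain classical Monte Carlo integration (a toy integrand, chosen only for illustration): the statistical error falls like 1/sqrt(M) in the number of samples M, whereas quantum amplitude estimation, given a circuit that queries the same function, can reach error on the order of 1/M in the number of queries — the quadratic improvement being discussed.

```python
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: np.sin(x) ** 2                        # integrand on [0, 1]

true_value = 0.5 - np.sin(2.0) / 4.0                # closed form of the integral, for reference
for M in (100, 10_000, 1_000_000):
    estimate = f(rng.random(M)).mean()              # plain Monte Carlo with M samples
    print(M, round(abs(estimate - true_value), 6))  # error shrinks roughly like 1/sqrt(M);
                                                    # amplitude estimation with M oracle queries
                                                    # can reach error on the order of 1/M
```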
Just to add to this: if you want to sample from something, there's a naive approach, rejection sampling, and then something a lot smarter would be Metropolis — Markov chain Monte Carlo. When I said quantum can speed up the OR function by a square root, a similar argument can speed up rejection sampling by a square root. That's not so impressive if the leading classical algorithm looks like MCMC, but with more effort you can also quadratically speed up MCMC, and that's kind of a big goal in a lot of quantum algorithms research: at least as a baseline, whatever the leading classical method is, we hope to get a quantum square-root speedup over it.

There's one more thing you can do here, which I think we understand a little less well. At the end of MCMC you've got a sample from your target distribution; at the end of the quantum process, when it works, you get something called a q-sample — a lot of our nomenclature just sticks a "q" in front of things; there are q's all over the place. As I said, the amplitudes are numbers whose squares add up to 1, so a q-sample is this: you take your target distribution, square-root all the probabilities, and call those the amplitudes. What good is that? For one thing, it can give you an ordinary sample: if you measure it, it just yields a sample, so at the least it gives you that. But there are things you can do with it that are exponentially more powerful. For example, if I have samples from a distribution and I want to know whether it's close to the uniform distribution — this is related to the birthday problem — I need about square root of N samples; but with a q-sample I can do it with one copy and get a constant-bias answer to that question. There are many other things like that: if I want to know whether two q-samples are close or far apart, I just need one copy of each, instead of a number of samples that grows with the size of my sample space.

So in general, can you do the average over a large number of samples more efficiently than doing them individually and then adding? Say you can q-sample from many things — can you then q-sample from their average? In our work we normally just take a mean over many samples; that's the thing we care about. Right — you can also estimate means with quadratically fewer samples. I think this type of thing we have not really used yet: people have laid the foundations by showing you can do these building blocks, but nobody has quite figured out how to use them. It would be really cool if we could connect those better to applications.
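A sketch of the "q-sample" object just described, on toy distributions: take a target distribution p and use sqrt(p_i) as the amplitudes. Measuring reproduces ordinary samples from p, while the overlap between two q-samples is the quantity sum_i sqrt(p_i * q_i), which a quantum computer can estimate from very few copies (for example via a swap test) — this is what makes the closeness tests mentioned above so cheap.

```python
import numpy as np

rng = np.random.default_rng(5)

def q_sample(p):
    """Amplitude vector whose entries are the square roots of the probabilities."""
    return np.sqrt(p)

# Two toy distributions over 8 outcomes, plus the uniform distribution.
p = rng.dirichlet(np.ones(8))
q = rng.dirichlet(np.ones(8))
u = np.ones(8) / 8

# Measuring a q-sample just gives ordinary samples from p.
counts = np.bincount(rng.choice(8, size=10_000, p=np.abs(q_sample(p)) ** 2), minlength=8)
print(np.round(counts / 10_000, 3), np.round(p, 3))

# The overlap of two q-samples is sum_i sqrt(p_i q_i): a closeness measure that a quantum
# computer can estimate from a constant number of copies, whereas estimating it from
# ordinary samples needs many draws.
print(round(float(q_sample(p) @ q_sample(u)), 3))   # near 1 iff p is close to uniform
print(round(float(q_sample(p) @ q_sample(q)), 3))
```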
We keep skipping this one — should we answer the floating-point one? I mean, qubits can represent whatever you want — qubits can represent a JPEG if you want — but the question is whether you can have a continuous version of a qubit. There are two ways of doing floating point. One: a floating-point number on my classical computer is 64 bits, so I can allocate 64 qubits to it, and those could be in a superposition of different values; it's just probably not the best use of my qubits, but I certainly could do it. Then, as we talked about earlier, there are these quantum computers with continuous degrees of freedom. Error-correcting them, as Peter said, is harder but not impossible; the Yale group, with their superconducting qubits, is really pushing hard in this direction, but what they're doing is using continuous degrees of freedom at the lowest level, with some error correction, to encode logical qubits, which are then used in the calculation. I think the issue — and it's the same for analog — is that if 0 and 1 are the only allowed values and you find your computer at 0.98, you can push it back to 1; if any number between 0 and 1 is legal and your computer is at 0.57, how are you to know whether it should have been 0.56 or 0.58? Well, you might have a prior. There may be things you can do, and maybe AI algorithms are inherently a little noise-resilient. So I'm not saying you can't use floating point; I'm just saying we have not yet come up with good answers on how to use it.

Should I take this one? "In July, an 18-year-old, Ewin Tang, proved that classical computers can solve the recommendation problem nearly as fast as quantum computers — thoughts?" Well, my thought is that we had a very clever quantum algorithm for solving the recommendation problem, but it turned out the recommendation problem wasn't anywhere near as hard as we thought, and classical computers could do it. I don't think this shows that classical computers can in general solve problems as quickly as quantum computers; I think it shows we should be more careful about claiming certain problems are hard just because we've found quantum algorithms for them. This has happened before: Farhi found the QAOA algorithm and showed it could do something we didn't know how to do on classical computers, and then a bunch of really smart computer scientists found an algorithm for doing it classically. Actually, what happened is he asked them first what the best approximation ratio was that you could get for this particular combinatorial optimization problem, and they said the ratio was such-and-such; then he came up with a quantum algorithm that beat it; then they read his paper — Scott Aaronson blogged about it — realized they hadn't thought that hard about the problem, and found a way to match, and actually beat, the quantum algorithm. I think now they're tied. So yes, there could be a fast classical algorithm for factoring; maybe someone knows it already — you never know.

What can one qubit do? I think I already took a stab at that with the amplitudes. I'll just say that it's not so much one qubit that's interesting, but n qubits. Peter, Kristan, do you want to say more? Yes — what is distinctive about a qubit is exactly its ability to interfere, and a qubit by itself is boring; having n qubits gets really interesting, because then you can start interfering many more things. Coming back to what Aram said, it's essentially the ability to have positive and negative amplitudes that interfere, cancel each other, and amplify certain things. For two bits that's not very interesting; for n bits it gives you 2 to the n numbers.
You want to amplify something, and then it becomes actually very interesting. Indeed.

Quadratic optimization — I think this refers to the D-Wave machine, or rather the problem the D-Wave machine solves, which is an NP-complete problem: just minimizing a quadratic function of n bits is NP-complete. The answer is that we have heuristic algorithms for this — we have two things, really. First, a provable square-root speedup over many classical algorithms, not just brute-force search: if you want to do a backtracking tree search, you can square-root that on a quantum computer, and other classical algorithms can be quadratically sped up as well. Second, heuristics like the adiabatic algorithm, which appear qualitatively different from classical analogues like simulated annealing, so we would expect the instances on which they do well to be different — not necessarily a superset or subset of the ones on which classical methods do well. We don't think they get an exponential speedup in every case, but they may get it in some cases, and in part this awaits smarter theorists; but really, as with classical computers, it awaits empirical testing.

So I'll take the last one: when will quantum computing be able to break current cybersecurity encryption? What we need for breaking RSA and some other things is a few thousand logical qubits. Since we already have maybe 20 or 50 qubits, it doesn't sound like a few thousand logical qubits would be that far away — but these qubits are noisy, and to get a few thousand logical qubits you need a few hundred thousand physical qubits, and for the factoring algorithm you need them to be really fast, because you want to factor in, oh, less than 30 years. So it's only close if you believe in a quantum version of Moore's law; it's still probably a couple of decades away, and we don't really know whether the quantum version of Moore's law is actually working yet, because we just have a few points on the curve — you draw a line through them and it looks like it might be, but who knows. It could be, though, that with a modular architecture, once you build a 20-qubit computer that has links to other computers, it gets to the point where someone could throw a lot of money at the problem and quickly jump up.

Oh, we have a question from Charlie Bennett, and nine seconds — and the answer is yes, you can use Grover search, because Grover takes a lot of iterations, and in the inner loop you don't need to store the whole dataset; it can just stream through the quantum computer. And I'm writing a paper about this now. I take that back — negative time. Okay, thanks everyone. [Applause]
Info
Channel: IBM Research
Views: 25,076
Rating: 4.9254327 out of 5
Id: vfJuvNuSPKw
Length: 47min 58sec (2878 seconds)
Published: Fri Nov 16 2018