Quantum-classical Hybrid Algorithms for Protein Folding and Docking

Awesome, thanks for the introduction. Hey everyone, I'm Marc. I'm not going to talk about how I got here, because that was covered quite nicely in the introduction; I want to dive straight into the topic. I'm going to talk about quantum-classical hybrid algorithms for protein folding and docking, which is the main R&D aspect of our company. I'll start slowly with a quick introduction to quantum computing and a high-level conversation about quantum algorithms, and then I'll dive into three specific applications that we looked at. I'll cover them from a high-level perspective, to get you into the mindset of how to frame problems in the quantum computing world, and how to actually go through the workflow of implementing and running these types of algorithms.

First, quickly, about our company: we're focused on pharmaceutical drug discovery. You can think of drug discovery as a key-and-lock principle, where the lock is a protein target, like this one here, that is linked to a certain disease in your body. That might be a protein that is too active, i.e. upregulated, and in order to treat the disease, pharma companies try to find molecules that bind to the binding pocket of that protein, thereby, for example, down-regulating it. Traditionally, pharma companies have done that in the wet lab: they synthesize millions and billions of compounds, screen them against those proteins, and then do a local search and local optimization around the molecules that seem promising. What we're doing at ProteinQure, what we're already applying, is modeling these things in silico, on supercomputers, in order to iterate through many, many molecules and converge on good binders that you can then sell to pharma companies and collect milestone payments on. So it's important to be aware of
the landscape of drugs. Historically, pharma companies have mostly focused on what are called small-molecule drugs; think of aspirin or ibuprofen, small chemically synthesized molecules, usually in the range of tens to hundreds of atoms. On the other end of the spectrum are antibodies, which are large proteins with tens of thousands of atoms. They're huge molecules, and they tend to be able to treat diseases like rheumatoid arthritis or neurodegenerative diseases, but they're also very expensive to produce. The field we're playing in is the in-between range: medium-sized proteins, peptides, that you can synthesize and that have very unique capabilities for treating diseases we currently don't have drugs for. The reason they can do that is that our bodies know how to handle proteins, know how to break them down, which means they tend to be less toxic. And because they are much larger, you can inject them into your body and they really only do the one thing they're meant to do, bind to that one single protein, and not bind to all kinds of other proteins, which small molecules like to do and which causes side effects.

So there's a lack of tools, a lack of computational techniques, to design these types of compounds, because the search space is generally too big, and that's what I'm trying to illustrate here. Consider a protein of length 50. Amino acids are the fundamental building blocks of proteins, and there are 20 natural amino acids; if you consider all the possible sequences you can create for a protein of length 50, you get 10^65 possible sequences. The problem is that right now we only know about 150,000 protein structures that are publicly available in a databank, and that's obviously a tiny amount compared to this massive state space. And even if you knew all the protein structures and all the protein sequences that occur in nature, you would only cover about 10^15 molecules.
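The combinatorics here are easy to verify; a quick sketch in plain Python (the PDB count is the rough figure quoted in the talk):

```python
# Number of possible sequences for a protein of length 50,
# with 20 natural amino acids available at every position.
n_sequences = 20 ** 50
print(f"{n_sequences:.2e}")  # on the order of 1e65

# Publicly known protein structures (~150,000) cover only a
# vanishingly small fraction of that sequence space.
known_structures = 150_000
print(f"{known_structures / n_sequences:.1e}")
```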
So the issue, and the question we try to answer at ProteinQure, is: how can we develop techniques that help us venture into this unknown space, instead of just training a machine learning algorithm on a tiny amount of data and staying close to known structures? We really want to develop new scaffolds that you can work off and make drugs out of. The technology at the heart of the company consists of three pieces that we think are synergistic. On one hand there are molecular dynamics simulations on classical supercomputers, which model the protein folding and docking components; I'll get back to that on the next slide. The problem with those simulations is that they tend to be very slow, and that's where quantum computers come in: we're already developing algorithms today that will eventually, when these computers are large enough, help us scale those simulations and run them faster, so that we can iterate quicker and try out more molecules. And lastly, we're doing R&D on reinforcement learning, because it provides quite a nice approach to learning without a lot of training data. You can operate in low-data environments, because your agent learns a bit like a dog: it just tries things out, and it can interact with a simulation environment, which is modeled by the MD simulations.

Putting this together ends up in a loop. You start out with a random amino acid sequence, or ideally with a good initial guess, and the first step is protein folding: from that linear sequence of letters you try to predict the three-dimensional structure of the protein. It's important to note that the drugs we consider are proteins themselves, and the protein target is also a protein, which gets confusing sometimes. The target is provided to us by the pharma company; they usually have spent a lot of basic
science in the lab trying to figure out those structures. We can then compute the binding score in simulation, which means: how well does that key fit into the lock? Depending on that outcome we can reward or punish a reinforcement learning algorithm, which in turn changes the amino acid sequence, and we keep iterating through the cycle until we converge on a drug candidate that we can actually sell to the pharma companies. It's important to point out that we do not sell any software; all the software stays in house, and we're purely a molecules-as-a-service company. That's actually a model that has been popping up quite a lot with startup companies, and it's something pharma seems to be very comfortable with, because they have a problem early in the pipeline with sourcing a lot of initial hits, which they can then develop further.

Now I want to quickly dive into the basics of quantum computing, for the people that didn't attend Colin's talk about D-Wave's approach to quantum computing. The idea is that a classical computer is based on bits, which are either 0 or 1, whereas a quantum computer is based on the fundamental unit of a quantum bit, or qubit for short, and you get a much richer set of states. You can be in superpositions, say 50% 0 and 50% 1, and you can change these percentages as you want. By doing so, and by combining many qubits with each other, you essentially get an exponential increase in the amount of memory you temporarily have in that quantum computer: a classical computer with n bits stores n bits of information, but the state of a quantum computer with n qubits is described by 2^n amplitudes, one per classical bit string. To make this a little less abstract, how do you actually implement a qubit physically? There's a relatively simple example: a hydrogen atom, which has a single proton at its core and an electron surrounding it.
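That exponential growth in state-space size is easy to see numerically. Here is a minimal NumPy sketch (the function name is illustrative, not from any quantum SDK): an equal superposition over n qubits needs 2^n complex amplitudes.

```python
import numpy as np

def uniform_superposition(n_qubits):
    """State vector of n qubits in an equal superposition:
    2**n complex amplitudes, one per classical bit string."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

for n in (1, 10, 30):
    print(n, "qubits ->", 2 ** n, "amplitudes")

state = uniform_superposition(2)
# Measurement probabilities for 00, 01, 10, 11 -- each 25%.
print(np.abs(state) ** 2)
```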
Usually that electron finds itself in the ground state, which you can define as the zero state, and in order to do a bit-flip operation you can send a laser pulse with exactly the right amount of energy at that electron, which makes it move up into its first excited state, which you can define as the one state. You can create superpositions by shooting a laser pulse that doesn't carry quite enough energy for the electron to completely transition into the excited state: it has a little too much energy for the electron to stay in the ground state, but not quite enough to fully transition, and that's how you get these superpositions. You can effectively take any two-level quantum system and turn it into a qubit, and the different quantum hardware providers are taking different approaches to building them.

Briefly: in a lot of popular news coverage, people always talk about the number of qubits; that's the single metric everyone focuses on. It's important to realize that it's not the most important metric. What matters is really the combination of the number of qubits and the quality of your qubits, because there are a lot of errors in these devices nowadays, and it's important to keep those errors low while still having a large number of qubits, as well as the connectivity on the chip: the qubits are not actually all connected to each other, which restricts the algorithms, or the layouts of algorithms, you can run on them. That's why IBM, for example, has introduced another metric, because people tend to think in single numbers; it's called quantum volume, basically a trade-off between the error rate and the number of qubits. You really see here that just increasing the number of qubits while keeping the error constant doesn't increase your quantum volume; you have to optimize both of those metrics at the same time. Let me also briefly talk about this notion
of quantum supremacy. This is the main reason I'm here today, and why the field of quantum computing has been flourishing in the last couple of years: people expect a tipping point where you have a quantum computer large enough that it can actually do a computation faster, or better in some way, than a classical computer. That's something people expect to happen sometime this year or potentially next year. But I want to point out that there are different notions of quantum supremacy. What a lot of people expect as the first demonstration is the idea of weak quantum supremacy, where you solve a very esoteric mathematical problem faster on a quantum computer than on a classical computer, but it might not translate to business value, i.e. running on actual data and producing meaningful results for the business world. That's why we're aiming for quantum advantage, which really means solving valuable problems faster than a classical computer can.

Before diving into the specific algorithms, I also want to level expectations about the types of algorithms I'm going to talk about. A lot of people know Shor's algorithm, for example, which threatens RSA encryption, and certain other popular quantum algorithms that seem to have exponential speedups. I call them old-school quantum algorithms, because they tend to be very wasteful when it comes to resources: they need error-corrected qubits, which means you roughly need tens of thousands, if not hundreds of thousands, of physical qubits in your device, and that's probably going to take seven to ten years to really come to fruition. There's a newer approach called variational quantum algorithms, or hybrid algorithms. I borrowed this picture from a quantum hardware company called Rigetti, and it nicely shows that the idea is to have a
quantum processor, which they call a QPU, run hand-in-hand with the CPU, with both of them leveraging whatever they're good at. The idea is to run the algorithm mostly on the classical computer, and only outsource some of the heavy lifting to the quantum device. If you put these things on a spectrum in terms of the speedup you can expect: today we're at the quantum-inspired stage, where people implement classical algorithms that derive some insight from the way quantum computers work; then there are the variational algorithms, which people expect to be useful in the next two to three years; and then the full-scale quantum algorithms are seven-plus years out. Today's focus is mainly on this variational, hybrid world of quantum algorithms and how you can apply it to the life sciences.

The very first paper we wrote as part of the company, while we were still in the Creative Destruction Lab incubator program, looked at protein folding on quantum annealers, which are the type of quantum computers D-Wave is building. Whenever you try to run something on an actual QPU these days, you need to somehow coarse-grain and slightly simplify your problem, because it's going to be a proof of concept. In our world we have a fully atomistic protein model, which is too complex at this point, so we coarse-grain it: we bundle a bunch of atoms up into a single amino acid unit and represent the protein as beads on a string, where each bead represents one amino acid. Also, instead of folding it in continuous 3D space, we discretize the space the protein lives in. We looked at different types of lattices; in this case it's a cubic lattice, and the problem of protein folding then boils down to finding self-avoiding walks on that lattice which minimize
the energy. To zoom out quickly and talk about the typical workflow you go through when implementing a problem on a quantum annealer: you start with a problem definition, for example you want to solve the traveling salesman problem, or in our case fold a protein. The next step is to encode your problem in what is called an Ising-type Hamiltonian. It's a quadratic equation, as you can see up there, and there are a lot of ways of mapping problems to this type of equation. There's a quadratic term with a coupling J_ij, which is a coupling between two qubits in the hardware, and a bias term h_j, which is a bias on a single qubit. Once you've formulated your problem, those couplings and biases are the knobs and parameters you can tune in the hardware; that's how you load your problem onto the chip. But before being able to do that, you need to do a minor graph embedding. The problem is that the quadratic term might require, say, qubit 0 and qubit 100 to interact with each other, but they might not be connected on the hardware itself. That's where minor graph embedding comes in: it helps you map your problem onto the actual chip architecture. Once that's done, you can either run it on the quantum annealer, if you have access to one, or run it with a classical solver in simulation.

So what we really needed to figure out was how to encode this problem into an Ising-type Hamiltonian. There was some early work from a group at Harvard; what they did back in the day was take a 2D lattice and fold a protein with four amino acids on that grid. What we did was generalize that approach to cubic lattices in three dimensions, where we encoded the directions on the lattice with three bits each, so the structure of the protein is fully determined by the turns you take, starting from the initial amino acid.
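To make the Ising-type encoding concrete, here is a minimal sketch of evaluating that energy for a spin configuration and brute-forcing the lowest-energy state of a toy three-spin instance. This is illustrative plain Python, not D-Wave's Ocean API; the specific J and h values are made up.

```python
def ising_energy(spins, J, h):
    """Energy of an Ising-type Hamiltonian:
    E(s) = sum_{i<j} J[i][j]*s_i*s_j + sum_j h[j]*s_j, with s_i in {-1,+1}.
    J plays the role of the qubit-qubit couplings, h the single-qubit biases."""
    n = len(spins)
    e = sum(J[i][j] * spins[i] * spins[j]
            for i in range(n) for j in range(i + 1, n))
    e += sum(h[j] * spins[j] for j in range(n))
    return e

# Toy problem: coupling J[0][1] = +1 makes spins 0 and 1 prefer
# opposite signs; bias h[2] = -1 pulls spin 2 toward +1.
J = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
h = [0, 0, -1]

best = min(
    ((s0, s1, s2) for s0 in (-1, 1) for s1 in (-1, 1) for s2 in (-1, 1)),
    key=lambda s: ising_energy(s, J, h),
)
print(best, ising_energy(best, J, h))  # lowest energy is -2
```

On a real annealer you would not enumerate states, of course; the hardware relaxes toward a low-energy configuration of exactly this kind of function.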
We didn't just generalize it to 3D; we also improved the implementation in terms of scaling, which was necessary to push the protein lengths we could run on the hardware. We implemented this on the D-Wave 2000Q, the current chip that is online, which has 2,000 qubits, and we folded eight amino acids on a 3D lattice and ten amino acids on a 2D lattice. You can see those final structures; the red dotted lines show the interactions on the lattice, i.e. which amino acids are neighbors and are realizing part of the potential. What I want to point out is that, for example, this D-A interaction can also be seen in the experimentally solved structure here, as part of the helix component of the protein, as can, for example, that K-Q element over here. So even though we went into a coarse-grained world, you can still derive some high-level information from coarse-grained models.

That's the idea behind this first step, where we try to predict the three-dimensional structure from the linear amino acid sequence. There's a technique called restraint-based molecular dynamics simulation. You can think of a plain molecular dynamics simulation as starting from an extended state, a long string of that protein, and it's going to spend hours, if not days, just curling up the ends and slowly converging to some sort of globular structure, which wastes a lot of time, resources, and money. The idea with restraint-based simulations is that you can take external information, which might be contacts from a lattice fold, evolutionary data, or low-resolution NMR or cryo-EM structures, and use that information to already pull your simulation in the right direction, which saves a lot of
time, because you don't waste all that compute just folding it up from scratch. That's something people have done in traditional computational biology quite a lot, and it gives a constant-factor improvement in runtime, which doesn't tend to excite theoretical computer scientists, but it definitely excites us, because it translates to a lot of savings. This idea drives a lot of the components I'm going to talk about from now on.

The next step was that we wanted to benchmark other hardware providers and see how far we could push their hardware, because they have completely different frameworks and implementations. We worked with Rigetti and IBM on the universal, gate-based type of quantum computer, which is much more like a classical computer: you have a set of logic gates, just that they're quantum gates, and you can arrange them in any way to compute any function to any approximation you want. The workflow here looks similar. You have a problem definition, but instead of mapping it to an Ising-type Hamiltonian, at this point you pick a quantum algorithm, and there's already a whole zoo of algorithms out there. You then have to write down a quantum circuit that represents that algorithm, and this kind of work is still quite manual; people are starting to build abstraction layers, but you often still have to look at the actual circuits. Because the circuit might again require certain qubits to interact with each other that are not neighbors on the chip, you need to compile it using a quantum compiler, and then you can run it on the quantum processor or on a simulator. What I quite like is contrasting those two frameworks and showing the similarity: in the universal world you pick a quantum algorithm and formulate it as a quantum circuit,
and in the annealing world that step is encapsulated in mapping your problem to an Ising-type Hamiltonian; the algorithm was chosen for you, because a quantum annealer is a special-purpose device. Mapping your problem to that Hamiltonian is basically compiling the quantum circuit, even though it's not a circuit, and the compiling step is equivalent to the minor graph embedding, because you're taking your problem and mapping it onto the quantum chip.

For this universal quantum computing world, we picked as our quantum algorithm the QAOA, which stands for quantum approximate optimization algorithm; some people pronounce it "quah-wah". The way it works is that you have a quantum circuit with some gates that are parameterized. At the beginning you pick those parameters randomly, you run your quantum computation, you measure at the end, and you feed the output into a classical optimization algorithm that tries to optimize those parameters. The new set of parameters is programmed into the quantum device, you run it again, and you keep iterating, optimizing the parameters of your circuit; it's a bit like training a neural network. The hope is that for the problem you encoded, you can get very close to the optimal solution. I also want to highlight that this is very similar to what quantum annealing is actually doing. The way quantum annealing works is that you start from a simple initial Hamiltonian that describes your processor, then you slowly evolve it into your problem Hamiltonian, which is the equation that describes your problem, then you freeze, you measure, and you hope that the result you read out corresponds to a low-energy solution of your problem. What the QAOA algorithm does is what they call a Trotterization: a step-function, digitized way of doing that evolution, with a QAOA depth, which you can think of as layers in a neural network.
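The classical-quantum feedback loop just described can be sketched in a few lines. In this sketch the quantum execution is mocked by a toy classical cost function so the loop is runnable; in the real algorithm, `run_circuit` would submit the parameterized circuit to the QPU and estimate the energy from measurement statistics. All names and the cost surface are illustrative assumptions, not the talk's actual implementation.

```python
import random

def run_circuit(params):
    """Stand-in for running the parameterized circuit and estimating
    the energy from measurements.  Here: a toy quadratic cost surface."""
    gamma, beta = params
    return (gamma - 0.8) ** 2 + (beta - 0.3) ** 2

def qaoa_loop(steps=200, lr=0.1, eps=1e-3):
    """The hybrid loop: a classical optimizer proposes new circuit
    parameters, the (mocked) quantum device evaluates them."""
    params = [random.random(), random.random()]
    for _ in range(steps):
        for i in range(len(params)):
            # Finite-difference gradient estimate for one parameter.
            shifted = params.copy()
            shifted[i] += eps
            grad = (run_circuit(shifted) - run_circuit(params)) / eps
            params[i] -= lr * grad
    return params

print(qaoa_loop())  # converges near gamma = 0.8, beta = 0.3
```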
You can have different depths for these algorithms: QAOA depth one looks like this, and the more you increase the depth, the more you approximate the smooth trajectory you would get from a quantum annealer. You can prove that in the limit of infinite depth the two are exactly equal.

We again had to frame the problem in a binary way, and this time we chose a different encoding: we one-hot encoded every direction you can take on the cubic lattice, which is a little more wasteful in the number of qubits you need, but we wanted to showcase a particular capability. We didn't just want to replicate what we did on the quantum annealer; we wanted to show there is something different you can do on a universal device which you couldn't do on the annealer. It's going to get a little involved now, but I'm trying to make a point about why we chose this one-hot encoding. There is a notion of Hamming weight, which is simply the number of ones in your bit string: this bit string has a Hamming weight of zero, this bit string has a single one so it has a Hamming weight of one, and this bit string has a Hamming weight of three. Why am I telling you this? Because if you have a protein that has to take two turns on your lattice and you get this solution, you're going to wonder: the first turn went to the right, but what does this six-tuple of bits stand for? It doesn't correspond to any direction on the lattice, so it's an invalid solution. Same with this one: this six-tuple doesn't encode anything, it doesn't mean anything, so it's also not a valid solution. So we can say that for a solution to be valid, every six-tuple needs to have a Hamming weight of exactly one. This realization leads to this idea where you have the entire state space
of possible bit strings, possible solutions, that you could get out of your quantum device. We know that, for example, this string doesn't have any meaning, because it doesn't encode any directions, and the same goes for this one, so that part of the space is useless for us; we know we're not going to find any solutions there. And we know there is a feasible subspace within that larger state space, corresponding to all the feasible bit strings, where every six-tuple has a Hamming weight of one. These are all valid solutions; they might not be optimal, but they're valid. So the idea is: can we make the QAOA algorithm search inside that feasible subspace instead of the whole space? The answer is yes, we can. I'm not going to dive into the mathematical details, but there is a component of the algorithm called the mixer Hamiltonian, which shuffles around the values of the qubits. On the D-Wave machine that mixer is hard-coded, dependent on the hardware implementation, whereas on a universal quantum computer you can tweak the mixer Hamiltonian however you like. We tweaked it in such a way that when it shakes and mixes the qubits, it makes sure it never leaves the feasible subspace. We ran simulations of this approach, and on this side you can see different types of mixer Hamiltonians, this "shaker" Hamiltonian. If you run the vanilla QAOA without our new approach, you get a 10 to 15% ground-state probability, i.e. probability of measuring the correct solution; with the approach we propose, using these fancy mixer Hamiltonians, you can get up to 30 to 40% ground-state probability. That becomes really important when you try to solve big problems: the higher the ground-state probability, the more likely you are to find the optimal solution.
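The feasibility condition is simple to state in code. A minimal sketch (the function name is mine; the six-bit block size is the one-hot direction encoding described above):

```python
def is_feasible(bitstring, block=6):
    """A measured bit string is a valid lattice fold only if every
    6-bit block (one one-hot-encoded lattice turn) has Hamming
    weight exactly 1."""
    if len(bitstring) % block:
        return False
    return all(
        bitstring[i:i + block].count("1") == 1
        for i in range(0, len(bitstring), block)
    )

print(is_feasible("000100" "010000"))  # two valid one-hot turns -> True
print(is_feasible("000000" "010000"))  # Hamming-weight-0 block   -> False
print(is_feasible("011100" "010000"))  # Hamming-weight-3 block   -> False
```

The constrained mixer guarantees by construction that the quantum state never assigns amplitude to strings failing this check, so no measurement shots are wasted on them.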
We then also estimated the gate depth: how many gates do you need to implement this algorithm? That's really the main issue here. If you look at the total depth on the 19-qubit device by Rigetti, we need roughly 800 to 900 quantum gates, and the CNOT count, a two-qubit gate, is roughly 350. The problem is that Rigetti can only do about 10 right now. Larger molecules were therefore out of reach, so we restricted ourselves to 4 amino acids; we managed to fold that on the QPU, but we couldn't push to larger proteins, because that gate count is simply way too large. The hardware needs to improve, because after about ten operations you might as well flip a coin; you just get random noise. That's really the issue here. So that's the protein folding component.

Now I want to dive a little into molecular docking. We worked together with Xanadu, another company based in Toronto. They're building a photonic quantum chip, which they're looking to release by the end of this year. That means they use particles of light to do the quantum computation, which is nice because D-Wave, IBM, Rigetti, all the others need to cool their quantum computers down to almost absolute zero, whereas if Xanadu manages to build this processor, they could run it at room temperature: it's basically just light flowing through an optical array, doing the quantum computation as it goes. So we looked at molecular docking. Let me bring up the slide again: you have a molecule, you have this protein target, and you're trying to figure out how well that key fits into the lock. That's the docking problem: you have a lot of possible configurations and you're trying to find the lowest-energy one. As always, we simplified the problem: instead of taking the entire molecule, which would be too
complex, we reduce it to its pharmacophores, which is basically just a fancy name for the important chemical groups. For example, you have some hydrophobic groups, aromatic rings, hydrogen-bond acceptors, and hydrogen-bond donors, and you simply mark those groups, labeling each one as a single point with coordinates; that's how you represent your molecule. This is something pharma companies have been doing for years, because it also speeds up classical algorithms for searching through databases.

To get to the next step, let me briefly talk about what a graph is: a graph is simply a set of vertices connected by a set of edges. There are a lot of ways of formulating all kinds of problems as graph problems, and there are very good classical heuristics for solving problems on graphs. Graphs also tend to be a nice breeding ground for applications of near-term quantum computers, since many of these algorithms are good at graph problems. So we took the molecule, reduced it to its pharmacophores, and represented it as a graph: every pharmacophore is a vertex, and you draw edges between them. We then went a step further and labeled those vertices, so A stands for aromatic ring, HA stands for hydrogen acceptor, and so on, and for each edge you measure the distance between the two groups in the actual molecule. That's how you come up with this graph. The next step is to do this for both your protein target, the receptor, and the ligand, the drug molecule: you represent both as pharmacophore graphs, and you then create a hypergraph. I'm not going to dive into the specifics, because it gets a little too involved, but basically you can combine them into a joined graph, which tends to be very large, and what that graph does is take these two graphs and try to superimpose them in many different ways.
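A toy version of such a labeled pharmacophore graph can be written down directly; the vertex labels follow the naming just described, but the distance values here are invented purely for illustration.

```python
# Toy pharmacophore graph for a ligand: vertices are labeled chemical
# groups (A = aromatic ring, HA = hydrogen acceptor, HD = hydrogen
# donor); each edge carries the distance between the two groups in the
# actual molecule (angstrom values made up for this sketch).
vertices = ["A1", "HA1", "HD1"]
edges = {
    ("A1", "HA1"): 4.2,
    ("A1", "HD1"): 6.8,
    ("HA1", "HD1"): 3.5,
}

# The pharmacophore graph is fully connected: one edge per pair.
assert len(edges) == len(vertices) * (len(vertices) - 1) // 2

for (u, v), dist in edges.items():
    print(f"{u} -- {v}: {dist} A")
```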
Applying a potential function, it tries to find the lowest-energy configuration of the two graphs, and you can then reconstruct the binding pose that this solution corresponds to. That's enough at the high level; I don't want to dive too deep, but this problem boils down to finding a clique. So what's a clique? If you have a graph, you can find subgraphs within it; this one, for example, is a subgraph, but it's not a clique, because it's not fully connected: there is no edge between these two vertices. A clique is defined as a fully connected subgraph. This one here is a clique, because all its vertices are connected to each other, but it's not the maximum clique, because it's not the largest fully connected subgraph; that's this one. Those maximum cliques are what we're trying to find in the hypergraph we constructed, and it's a non-trivial problem; it's actually NP-hard. That's why we turned to Xanadu to figure out a way to use their quantum hardware. What they're building is a Gaussian boson sampler; that's going to be their first device, a special-purpose device. You send photons in on one end, there's a large optical array which implements a matrix, a unitary, and your photons come out on the other end, where single-photon detectors are lined up to measure whether a photon comes through or not. Based on that output distribution of photons, you can extract subgraphs from your large graph, which is encoded in that matrix. They published on this at some point last year, showing how you can tweak the parameters to encode any graph in the device, and they had some really nice realizations: if you use this device and sample subgraphs from the Gaussian boson sampler, those subgraphs tend to be very dense.
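The clique definition translates directly into a check over all vertex pairs. A minimal sketch, using a plain adjacency-set representation (the example graph is invented):

```python
def is_clique(graph, nodes):
    """A vertex set is a clique iff every pair of its vertices is
    connected.  `graph` maps each vertex to the set of its neighbours."""
    nodes = list(nodes)
    return all(
        v in graph[u]
        for i, u in enumerate(nodes)
        for v in nodes[i + 1:]
    )

# A square 0-1-2-3 with one diagonal 0-2: {0,1,2} is a triangle
# (a clique), but {0,1,2,3} is not (no edge between 1 and 3).
graph = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1, 3},
    3: {0, 2},
}
print(is_clique(graph, {0, 1, 2}))     # True
print(is_clique(graph, {0, 1, 2, 3}))  # False
```

Finding the *maximum* clique means searching over exponentially many such vertex sets, which is why the problem is NP-hard and why a dense-subgraph sampler is an attractive starting point.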
be very dense, which is great, because we're trying to find cliques, and cliques are fully connected, which is maximally dense. So we thought that might be a good way of sampling cliques, and that's what we tried out. We tried different algorithms. We first started with the simplest algorithm you could think of, just a random search: sample a subgraph, look at it, is it a clique? No? Okay, you throw it away and sample another subgraph; if it is a clique, you store it, and you just keep doing that, hoping that you're going to find large cliques eventually. We tried that as the baseline that we wanted to compare things to, and then we started getting a little more fancy. These are all classical algorithms that people have used in the traditional world: here you again sample a subgraph with the Gaussian boson sampler, which might turn out to be this one; the next step is to pick the node with the fewest edges, which is this one, and remove it, and once you remove it you determine whether your leftover subgraph is a clique. If not, you repeat: you again pick the least connected node and throw it away, and eventually you're going to converge to a clique, and you can store that and see if it's larger or smaller than what you already have in memory. And then we said, okay, the problem with quantum algorithm papers is often that they don't compare to the state of the art, and we wanted to compare to the state of the art, and the state of the art is dynamic local search, which has won lots of competitions, like maximum clique search competitions. It's a very complicated algorithm, I'm not going to elaborate on it, but we basically inserted a Gaussian boson sampling step, because all these algorithms usually just sample randomly, whereas we have the advantage of using that Gaussian boson sampler, which gives us dense subgraphs rather than just any subgraph. So we did that and we compared it to classical random sampling. This is greedy shrinking with
dynamic local search, and you can see that the success rate of sampling the correct clique increases dramatically when using that Gaussian boson sampling step. All of these things didn't run on actual hardware, that's all still simulation-based, so the graphs were also still relatively small, but ultimately, once they have the device, you could scale it up and take a much more complicated representation of your molecule. That brings me back to this idea of a hybrid pipeline: you extract your pharmacophores on a classical device, you then use this GBS-enhanced algorithm to sample the maximum cliques, and then you already have a good idea of the orientation of the molecule, so you can again use restrained docking simulations where you pull the molecule into that configuration and then just sample locally around it. So that's the part on molecular docking, and I just wanted to quickly highlight some of the work we are doing on open-source software, and that's the exciting part, maybe also for some of you: if you're excited about quantum computing, if you're a software developer but you might never have studied physics, it's exciting to know that quantum computing is happening, quantum software engineering is becoming a thing, and most of it is actually happening open source. Most of the hardware providers have open-source packages; many have open-sourced the entire stack, with the exception of maybe some of the compilers. And I always like to make fun of physicists and the way they code, because I myself didn't know how to code before I started this company, and there's a lot of software engineering work outstanding. So if you're excited to get your hands dirty and work with some of these projects, you can do that, sometimes with some help; there are smaller projects by grad students who don't know how to properly package things with pip and things like that, so there's a lot you can do without
necessarily having to understand the quantum computing part of it, but it also tends to be a very nice way of picking it up: the documentation sometimes really walks you all the way, nicely, from an introduction to quantum computing to implementing your first algorithms. That led to us being invited to write a review paper that reviewed the entire landscape of open-source packages; we evaluated things like the quality of documentation and the way they adhere to best practices in the open-source space, and we founded this foundation, the Quantum Open Source Foundation, which really aims to improve these things. So we have monthly updates on these metrics where we compare projects, and we're trying to get people to answer pull requests faster, improve their documentation, and just be aware of those shortcomings, and we're always happy to have people getting involved: you can sign up for the newsletter, we are planning to host a hackathon and basically issue-fixing competitions, and we have a yearly conference at FOSDEM in Brussels, where we have our own quantum computing stream where people come and talk about their projects and try to find contributors that want to join the effort. And that's it from my side, I'm happy to answer any sort of questions that you have, thank you [Applause] I'm sure there must be questions; there is one, I'll pass the microphone to you. Hi, I'm Matthias, thanks for the talk. I was wondering if you could explain a little bit more about GBS and why it is that you get denser graphs from it. Okay, let me get back to this. I mean, the argument gets involved, but basically the probability of sampling a particular subgraph is linked to the hafnian, which is a metric, kind of like a determinant of a matrix, that you can compute; the matrix basically represents that large hypergraph, and the
hafnian for subgraphs of that hypergraph tends to be larger for dense subgraphs than for less dense ones, and that's what they basically established empirically; that's something they figured out while playing with the device and also studying the equations. It might not actually be a great property for some other algorithms, where you might, I don't know, be trying to find sparse subgraphs or something like that, but because of this relation of the hafnian to the probability of sampling a particular output distribution of this Gaussian boson sampler, that probability tends to be skewed towards dense subgraphs. We can talk about that later in more detail if you want, but it's a kind of involved mathematical argument. Do I see more questions from the audience? Oh, there's one, why don't I come around. Well, transferring what you said and reversing the direction of thought: if you're applying quantum computing to improve the performance of finding consistent configurations of your molecules, what does your intuition tell you, do you think that nature might also have relied on these quantum effects to solve the evolutionary problem of finding those configurations? Because there is this riddle that in evolution so much trial and error must have happened to find these things that there was not enough time for all that trial and error, so the question arises whether quantum effects might also affect larger structures. So I reverse your findings and ask: might quantum effects have contributed to constructing these real-life protein configurations? Frankly, I don't know. But just for your intuition, what does your intuition tell you? Well, there's actually a whole field now called quantum biology where they look at exactly those things, quantum effects in macroscopic systems; they for example could prove that there are
quantum effects without which photosynthesis wouldn't be possible. Whether that plays a role here has obviously been studied; people are, for example, trying to do quantum mechanical folding of proteins, which is extremely difficult to do on supercomputers because it involves too much computation. What they've found up to this point is that it seems to be fine, most of the time, to just simulate classical physics to arrive at the correct configurations, but molecular dynamics is still not perfect by any means: there are certain types of proteins that you can't fold properly, certain interactions that aren't accounted for. So at the end of the day, yes, if you could simulate the full quantum mechanics of an entire large protein, you would most probably arrive at the correct solution much more often than with the approximations of classical physics, but to what extent that contributed to evolution, no idea. I think we had another question on the left-hand side. When you map your protein structure onto a cubic lattice, how do you estimate or calculate the interaction between two amino acids? Yeah, that's a good question. In the papers on arXiv it's a lot of equations: you're basically trying to derive equations such that if two amino acids are neighboring on the lattice, a certain term gets added to your energy function, and if they are separated by a large distance on the lattice, you do not apply that particular term to your energy function, and you're trying to minimize the energy over the whole thing. So it's basically a lot of sums and products over the entire space of grid points: usually those terms evaluate to zero, but in the case when two amino acids are next to each other the term evaluates to one and adds that contribution to your energy function. That's also how, if you map a problem to the device, you tend to introduce
ancilla variables in order to be able to map to the topology, and those ancilla variables enter the equation in a similar way: they carry a penalty term so that if they don't take consistent values you apply an energy penalty, making sure those solutions end up with very high energy and don't pop up in your solution space. So yeah, that's pretty much what it boils down to: when it comes to quantum annealing, you need to think really hard about how you can formulate your problem in this discrete, binary way as a QUBO problem, a quadratic unconstrained binary optimization problem. There are lots of ways; people have looked at aerodynamics problems, quantum chemistry, and so on, and there's a whole toolbox of mappings, elements that you can draw from and apply to your problem. And there was another question in the first row. Hi, thank you for your talk, it was really interesting. I was wondering, I recently read that people are having really large breakthroughs doing protein folding with machine learning or artificial intelligence; do you have any idea, a rule of thumb, how the methods compare, quantum computing for protein folding versus machine learning, in quality or tractable problems? The honest answer is that quantum computing right now doesn't contribute anything meaningful to protein folding, but eventually it will, I think. To answer your question about AlphaFold by Google: they had a big breakthrough and they outperformed everyone at that folding competition that happens every two years. It's super exciting, and I talked about that yesterday at lunch, because it was something that investors asked us a lot when we raised our seed round: oh, how are you better than those guys, they won that competition? The thing is that they still used all of those existing proteins and trained their model on them. Where's that slide... here. So they basically were smarter than academia in the way that they
are very good at software engineering; they applied different ways of mining the existing databases. They didn't have any additional proprietary data, they were just better at it, but they still used that data, which means it's very hard to actually come up with completely new scaffolds. A lot of the protein design software out there is also based on piecing existing fragments together: you look at your protein, oh, I know what that fragment looks like, I know what that one looks like, and you piece them together, and that's a model. But usually those models always stay close to what is known, and that's really what we're trying to surpass: being able to show these pharma companies new things that they wouldn't even have cooked up in their labs. That's the differentiator that we see there; we don't tend to rely too much on existing training data. I think we have time for one more question, whom am I gonna pick, let's go, Tanna. Hi, you mentioned QAOA, and I've heard that there's a very recent result from the beginning of this year that says if you make your variational quantum circuit deeper, then the gradient becomes exponentially flat; did you find this to be a problem? Nope, because we didn't have large enough problem instances yet. But that's a very interesting point actually, it's something that worries me, and I'm now doing a master's on the side which focuses on that problem, because I want to answer the question of whether those barren plateaus do occur if you scale up the number of qubits, which is obviously going to kill any sort of quantum advantage if your classical optimizer just stumbles in the dark, not being able to find a gradient. But there were also some recent results in response to that paper, where they pointed out that the same problem happened in classical neural networks, right, you had vanishing gradients, and people came up with techniques to circumvent that, and they now
have some techniques where you can show that this barren plateau argument applies to random parameter initialization: if you have a smart initial guess, you can basically avoid it, and for certain problem instances those plateaus don't tend to show up. So the question is really to explore that more, to collect careful analyses of the energy landscapes, and to try to develop techniques; for example there's the idea of qubit dropout, similar to dropout in classical neural networks, where you just measure some qubits halfway through the computation, thereby reducing the space that you are trying to optimize over. So people are very aware of that, because it's a big issue when it comes to actually scaling up these hybrid algorithms. That's the point, thanks. So finally, thank you for this magnificent talk and for the very interactive Q&A session, let's give a big hand to Marc [Applause] Next up is a very short break where you can switch rooms, or just stay here for the next session in this room. Thank you.
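The pharmacophore-graph construction described in the talk can be sketched in a few lines of Python. The labels and coordinates below are made-up illustrative values, not taken from any real molecule:

```python
import math
from itertools import combinations

# Hypothetical pharmacophore points: (label, 3D coordinates).
# A = aromatic ring, HA = hydrogen acceptor, HD = hydrogen donor,
# HY = hydrophobic group. All values are invented for illustration.
pharmacophores = [
    ("A",  (0.0, 0.0, 0.0)),
    ("HA", (3.1, 0.5, 0.2)),
    ("HD", (1.2, 2.8, 0.0)),
    ("HY", (4.0, 3.0, 1.1)),
]

def pharmacophore_graph(points):
    """Build a complete labeled graph: each pharmacophore is a vertex,
    and every pair of vertices is joined by an edge labeled with the
    Euclidean distance between the two points in the molecule."""
    labels = [label for label, _ in points]
    edges = {
        (i, j): round(math.dist(points[i][1], points[j][1]), 2)
        for i, j in combinations(range(len(points)), 2)
    }
    return labels, edges

labels, edges = pharmacophore_graph(pharmacophores)
print(labels)          # ['A', 'HA', 'HD', 'HY']
print(edges[(0, 1)])   # ring-to-acceptor distance: 3.15
```

Representing both the receptor and the ligand this way reduces each molecule to a small distance-labeled graph, which is the input to the clique-based docking formulation.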
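The clique terminology from the docking section can be made concrete with a tiny brute-force search. This is purely didactic: an exhaustive search like this is exactly what becomes infeasible on the large hypergraphs, since maximum clique is NP-hard.

```python
from itertools import combinations

def is_clique(adj, nodes):
    """A vertex subset is a clique iff every pair of its vertices is connected."""
    return all(v in adj[u] for u, v in combinations(nodes, 2))

def maximum_clique(adj):
    """Exhaustive search over all vertex subsets, largest first.
    Exponential time: fine for toy graphs, hopeless at scale."""
    nodes = list(adj)
    for size in range(len(nodes), 0, -1):
        for subset in combinations(nodes, size):
            if is_clique(adj, subset):
                return set(subset)
    return set()

# Toy graph: vertices 0-3 are all mutually connected, vertex 4 only
# touches vertex 3, so {0, 1, 2, 3} is the maximum clique.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
print(maximum_clique(adj))  # {0, 1, 2, 3}
```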
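The greedy shrinking heuristic described in the talk (sample a subgraph, repeatedly drop the least-connected vertex until what remains is a clique) can be sketched as follows. A uniform random sampler stands in for the Gaussian boson sampler here; on the real device the samples would instead be biased towards dense subgraphs, which is the claimed advantage.

```python
import random
from itertools import combinations

def is_clique(adj, nodes):
    return all(v in adj[u] for u, v in combinations(nodes, 2))

def greedy_shrink(adj, subgraph):
    """Repeatedly remove the vertex with the fewest neighbours inside the
    current subgraph until the remainder is a clique."""
    sub = set(subgraph)
    while sub and not is_clique(adj, sub):
        least = min(sub, key=lambda v: len(adj[v] & sub))
        sub.remove(least)
    return sub

def sample_and_shrink(adj, samples=200, seed=0):
    """Baseline search: draw random subgraphs, shrink each to a clique,
    keep the largest clique seen. A GBS device would replace rng.sample."""
    rng = random.Random(seed)
    nodes = list(adj)
    best = set()
    for _ in range(samples):
        k = rng.randint(2, len(nodes))
        clique = greedy_shrink(adj, rng.sample(nodes, k))
        if len(clique) > len(best):
            best = clique
    return best

adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
print(sample_and_shrink(adj))  # {0, 1, 2, 3}
```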
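The hafnian that comes up in the GBS question can be illustrated with a minimal recursive implementation. For a 0/1 adjacency matrix the hafnian counts perfect matchings, which is why denser graphs have larger hafnians and, in the GBS setting, higher sampling probability. This is a naive exponential-time sketch, nothing like the optimized algorithms used in practice.

```python
def hafnian(A):
    """Hafnian of a symmetric n x n matrix: the sum over all perfect
    matchings of the product of the matched entries. For a graph's
    adjacency matrix this counts its perfect matchings."""
    n = len(A)
    if n == 0:
        return 1
    if n % 2:
        return 0  # odd vertex count: no perfect matching exists
    total = 0
    rest = list(range(1, n))
    for k, j in enumerate(rest):
        if A[0][j]:
            # Pair vertex 0 with vertex j, recurse on the leftover vertices.
            remaining = rest[:k] + rest[k + 1:]
            minor = [[A[r][c] for c in remaining] for r in remaining]
            total += A[0][j] * hafnian(minor)
    return total

# The complete graph K4 has 3 perfect matchings; the 4-cycle C4 has 2.
K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(hafnian(K4), hafnian(C4))  # 3 2
```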
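The lattice-folding answer describes interaction terms for neighbouring amino acids plus penalty terms for constraint violations, all inside one QUBO energy function. A toy version of that pattern, with invented coefficients (not from any real folding Hamiltonian), looks like this: a negative coupling rewards one "contact", and a one-hot penalty, the same trick used for lattice position constraints and ancilla consistency, forces exactly one of two variables on.

```python
from itertools import product

def qubo_energy(Q, x):
    """QUBO energy of a bitstring: E(x) = sum_ij Q[i][j] * x_i * x_j."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

P = 4.0  # penalty strength; must outweigh the rewards it guards against

# Upper-triangular QUBO matrix for three binary variables:
#   * Q[0][1] = -1: reward when x0 and x1 are both on (a "contact" term)
#   * the -P / 2P entries encode the one-hot penalty P * (x1 + x2 - 1)^2,
#     expanded using x^2 = x for binaries, with the constant P dropped
Q = [[0.0, -1.0, 0.0],
     [0.0,  -P,  2 * P],
     [0.0, 0.0,  -P]]

# Brute-force the ground state over all 2^3 bitstrings.
best = min(product([0, 1], repeat=3), key=lambda x: qubo_energy(Q, x))
print(best)  # (1, 1, 0): contact reward taken, one-hot constraint satisfied
```

On an annealer the same Q matrix would be handed to the device rather than enumerated; violating the one-hot constraint costs at least P, so those bitstrings sit high in the spectrum and rarely appear in the returned samples.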
Info
Channel: TNG Technology Consulting GmbH
Id: kZZzyclPFvo
Length: 45min 2sec (2702 seconds)
Published: Fri Aug 23 2019