Fundamentals of the Virtual Brain - Dr. Dionysios Perdikis

Captions
All right, so welcome everybody to this fifth lecture of "The Virtual Brain and Clinical Research: An Introduction" course. Today we are going to talk about the fundamentals of The Virtual Brain (TVB): the theory behind it, its main components, and its main uses and objectives. There will be quite some theory, including a small excursion into dynamical systems theory, but I plan to do all of this in a way that requires no expert knowledge of mathematics or programming.

First of all, I hope you already know the website of The Virtual Brain, where you can find all relevant information. It is very comprehensive: tutorials, the software itself, news, and all the nice research done using The Virtual Brain. I have to thank all of my colleagues in the group who helped me compose this talk; you see some of them here together with their virtual brains. In particular I want to thank Michael, here on the left, and Professors Petra Ritter and Viktor Jirsa, for providing many of the materials for the slides you are going to see today.

So let us start with what TVB is and why it is needed. Answering this question in one phrase: TVB is indispensable for linking brain anatomical and neuroimaging data, in order to unravel the secrets of brain function. What does that mean exactly? During the last decades neuroscience has advanced a lot, especially in terms of collecting all kinds of data: structural and functional MRI, EEG, MEG, optical imaging. We are gathering tons of data, but arguably our understanding of how the brain functions has not advanced equally. There seems to be a need for a way to compress all these data and represent them in a way that helps us advance theory, the understanding of the brain, and also make the translation to the clinic. This is exactly where a model of the brain can be useful, as long as it helps us compress these data together with our theoretical knowledge and assumptions, test hypotheses, make predictions, and so on.

More concretely: there was considerable progress in non-invasive brain imaging in the 1990s. This led to a boost of network-science applications to the brain: large-scale brain network modeling, the use of graph-theoretical analysis, and the Human Connectome Project's effort to map the connectivity of the brain. Later, rising computational power allowed the emergence of technologies like The Virtual Brain, and lately this has led to clinical applications. You will see later some instances of the science behind the EPINOV clinical trial in France, which aims to improve the treatment of epilepsy through surgery. There are other initiatives, like The Virtual Brain Cloud, where I am working, which tries to bring data and The Virtual Brain together from different sites all over Europe. So there is a lot going on. What I would like you to observe is that the earlier progress concerned mainly the structure of the brain, especially the structure as a network, while the later developments, which relate more to TVB, also concern the function of the brain. These two words, structure and function, are exactly where TVB stands, and we are going to use them to understand more about it. In one phrase: The Virtual Brain is a model that is useful for linking brain structure and function.

I will now go through an overview of The Virtual Brain within a few slides, and then we will go deeper into each one of its components. First of all, there is a lot of theory behind The Virtual Brain, and there are assumptions that come from this theory. The first is cortical homogeneity: the idea that the cortical sheet is relatively homogeneous. It is composed of cortical columns, as we know, and within these cortical columns we find neural networks. Those neural networks can be represented at different scales: we can have detailed biophysical models of neurons; we can abstract from them and treat them as point neurons that fire spikes; and we can also have statistical descriptions of populations of such neurons, representing their activity as, for instance, a firing rate.

The second important assumption concerns how this function is reflected in brain measures such as spikes, local field potentials, EEG, MEG, fMRI, and so on. So one ingredient is forward modeling: how function, in the sense of the activity of neurons within the cortical sheet, translates into the observations we have from the brain. It is exactly at this scale of expressing function in terms of brain measures that TVB stands. As I mentioned, we now have quite accurate forward models for each of these measures, which take into consideration the geometry of the cortical sheet as well as everything else that lies between the neural activity and the measurement.

When we speak about function, we again stand at a specific level of detail: we represent brain function mainly as oscillations in large-scale brain networks. You can see here evidence from different publications where the mean-field activity of many neurons, for instance expressed as spike trains, can be summarized as local field potentials; there is evidence of a very high correlation between LFP gamma power and spike trains. This supports the idea that we can model large-scale oscillations in the whole brain, the kind of activity we usually measure with fMRI, EEG, and so on, as the activity of interacting neural populations, each of which is described by its mean field, in other words by its average behavior. For instance, in another model that we will see later, we see a population of spiking neurons and the mean activity, which looks like a smooth curve. This is exactly the level of description that gives rise to the observations we have from the brain, and therefore this is the computational unit of TVB.

Bringing all these things together, what we have is: the geometry of the brain, the cortical sheet, which is very important for the forward models of measurement; a homogeneous cortical sheet, which also implies possibly homogeneous local connectivity around each neuron; the computational unit of the neural population, which becomes a node in our network; and finally long-range coupling between brain regions through white matter tracts, which gives us the heterogeneity of the network. These are, all together, the main model components of TVB.

Here is a first demonstration of what we mean by function, by large-scale network function in the brain. You see an animation from a study that we will also discuss later, by Schirner et al. This is a whole virtual brain model tuned against fMRI data, in a way that manages to reproduce a series of observable neuroimaging phenomena, such as the scaling of EEG source alpha power, the anticorrelation between alpha power and fMRI, and the main fMRI resting-state networks that we detect when we do correlation analysis of fMRI. This is the kind of science we want to do: we want a brain model that we can simulate and test against the neuroimaging data we have available.

I leave here three basic references on TVB where you can find all of this theory, mathematics, models, and assumptions, so you can look deeper for yourselves. And now let us try to look deeper into the separate components.
Let us start with the structure of The Virtual Brain, specifically the topology and connectivity of the brain, going first through a visualization of these different components. We have the skull; looking inside, we have the homogeneous cortical sheet; and looking deeper inside, we have the white matter tracts that connect distant brain regions. It is exactly this cortical sheet that we can split into different brain regions following brain atlases, a step called parcellation. We can then go even more abstract, shrinking each brain region to its center of gravity and forming a network whose links depend on the white matter tracts you saw earlier. This is the highest level of abstraction we can reach with a large-scale network model of the brain, where we finally represent it as a network in which each node is a neural population that interacts with the other nodes through the white matter tracts.

First of all, as I said, the first element of the structure is the cortical sheet geometry. For personalization we take structural MRI measurements, from which we can reconstruct the surface of the brain and represent it as a triangulated mesh, as you see here: nodes and small triangles that follow the geometry of the cortical sheet. Then, using specialized software like FreeSurfer and publicly available brain atlases, we can attribute each patch of the cortical surface to a different brain region. There are several atlases out there; some are based mostly on structural differences between brain regions, while others also take functional elements into consideration.

Based on this topography we have two ways to model the brain. One is region-based modeling, where we shrink each brain region to its center of gravity and form a network of nodes that have no local structure; in this case we end up with anything between roughly 30 and 1000 nodes. Alternatively, we can model and simulate along the whole surface, using every vertex of the triangulated mesh as a separate node of the network, in which case we can also have local interactions. Obviously we then have much more detail, with simulations of several tens of thousands of nodes.

Of course, we also need to connect this network, and this happens through global and local connections. The global connections we reconstruct with diffusion tensor imaging: starting from diffusion MRI, we can track axon bundles as they leave one region and travel to another. Algorithms help us reconstruct the diffusion directions, and then, using deterministic or probabilistic tractography, we obtain representations of the white matter tracts. We can count them and measure their lengths, and in this way we represent the brain as a network where the different brain regions are the nodes, connected by links whose weight is some transformation of the streamline count between every pair of nodes. The links also have a length, and this is very important, because the lengths can be used to determine the time delays of communication from one brain region to another. When we do not have tract lengths, we can of course just use the Euclidean distance in 3D. This is quite important because the weights and the tract lengths, or delays, together create a space-time structure for the connectivity of the brain network. It is represented here in a way where, as we increase the time delay, we find different connections in every bin of this histogram of delays: at shorter delays we find some connections, at longer delays some others. This, we would say, is what shapes the "music" of the brain. Think of it as a kind of string instrument: when we play the guitar, we pluck a string, and depending on the length of the string we get a different musical tone. It is a bit like that for brain dynamics: not only the weight, the strength of the connection between two regions, but also the distance, the time delay of communication, plays a very important role for the resulting global dynamics of the brain.

Now, coming to local connectivity, remember that at the local level we have the assumption of homogeneity. First of all, local connectivity applies only when we do surface-based modeling, and it refers to connectivity that depends only on the distance between the nodes of the triangulated mesh, usually the geodesic distance, that is, the distance along the surface. For the local coupling we usually define spatial kernels: functions that depend only on distance, with no time delay. They typically look like this: a node here is coupled to all nodes in its neighborhood, and the strength of the connection depends only on the distance. So wherever we go on the brain, the coupling is homogeneous; it depends only on distance.

Now I am going to show you a little of what we said earlier about the space-time structure and how important it is. Following this and several other studies, I will show you a demonstration here. We model the brain surface as a neural field, a continuous cortical sheet, so that its oscillatory dynamics have a spatial and a temporal component; it is modeled with partial differential equations. There is homogeneous local connectivity in each neighborhood of this cortical sheet, plus heterogeneous long-distance connectivity, and you will see how changing the distance of this connectivity, and therefore changing the time delay, actually changes the spatiotemporal patterns.
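As a minimal numerical sketch of this space-time structure (the matrix values, the conduction speed, and the Gaussian shape of the kernel below are illustrative assumptions of mine, not TVB's actual defaults), the delay matrix can be derived from tract lengths and a conduction speed, and a homogeneous local coupling kernel from distance alone:

```python
import math

def delay_matrix(tract_lengths_mm, speed_mm_per_ms):
    """Communication delays tau_ij = L_ij / v for every region pair."""
    return [[length / speed_mm_per_ms for length in row]
            for row in tract_lengths_mm]

def gaussian_kernel(distance_mm, amplitude=1.0, sigma_mm=10.0):
    """Homogeneous local coupling: strength depends only on distance."""
    return amplitude * math.exp(-(distance_mm ** 2) / (2 * sigma_mm ** 2))

# Hypothetical 3-region tract-length matrix (mm) and a conduction speed (mm/ms).
lengths = [[0.0, 80.0, 120.0],
           [80.0, 0.0, 40.0],
           [120.0, 40.0, 0.0]]
delays = delay_matrix(lengths, speed_mm_per_ms=4.0)
print(delays[0][1])          # 20.0 (ms) between regions 0 and 1
print(gaussian_kernel(0.0))  # 1.0 at zero distance, decaying with distance
```

The same weights with a different speed give different delays, which is exactly the "string length" effect described above: the connectivity's timing, not just its strength, shapes the dynamics.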
We get different oscillation patterns along the sheet: some for a small connection distance, different ones for a larger distance, and even more different ones for a still larger distance. We can watch this as a video where we gradually change the distance parameter of the long-range connections: this destabilizes the previous dynamics, and after a transient the system settles into a different spatiotemporal pattern along the neural sheet. So distance matters, time matters, and therefore this space-time structure of brain connectivity is a crucial element of the whole large-scale network model of TVB.

Now let us get deeper into what we call function. In The Virtual Brain, function is described as large-scale dynamics derived from connected population dynamics, and I present you here with the most general equation of TVB, the equation that governs the dynamics of one computational unit. I remind you that the computational unit may be a whole brain region, if we are doing region-based modeling and simulation, or just a single vertex of the triangulated surface; in fact, the way this equation is formulated, using integrals, points to surface simulation. It comprises four terms. First, there are the local dynamics that define the intrinsic behavior of the neural population: the way it oscillates, or whatever else it does. Then there is a term for the local homogeneous connectivity; you see that it depends only on the distance between our point x and every other point x' on the surface. Then there is the global heterogeneous connectivity; we know it is heterogeneous because it depends on the particular pair of points that are connected, and it can be anything. This global connectivity also has a time delay, which is the geodesic distance along the surface divided by the speed of communication. Finally, we can have an extra noise term: stochastic additions to the whole dynamics, in other words dynamics not explained by our equation. It can be white noise, or it can be colored noise with temporal correlations as well. What is important is that this is really the computational unit of TVB, the neural population, and these are its equations.

We can simplify this equation for region-based modeling: there are no more integrals, only sums, giving a much simpler version of the same equation. We still have the same kind of equation for the population model; there is obviously no local coupling; and the global coupling here is a very simple linear function, where inputs coming from other nodes are just weighted by the connectivity weights and arrive with time delays given by the delay matrix. And finally there is the noise term. In most cases, this is what our model looks like.

Now let us go deeper into the local population dynamics, the computational, or let us say dynamical, unit of TVB. What are population dynamics? Let us take an example from nature. You see in this video a population of birds flying around, forming these beautiful shapes exactly because they have local interactions among them: in some way, each bird follows its neighbors. These interactions are generally homogeneous; they depend only on distance. Therefore, although there are thousands of birds, it is likely that we can describe the 3D dynamics of their flight with far fewer equations, far fewer degrees of freedom: we can most likely summarize these forms with a very simple model, certainly simpler than the thousands of degrees of freedom we would need if we had to model every single bird's flight.
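The two forms of the equation described above can be sketched as follows. The notation here is my reconstruction from the verbal description, not copied from the TVB papers: local dynamics N, homogeneous local kernel W, heterogeneous global connectivity G with delayed coupling, an activation function S, and a noise term, first for the surface (neural-field) case and then in the region-based simplification:

```latex
% Surface-based (neural-field) form: activity \psi at cortical location x
\dot{\psi}(x,t) = N\big(\psi(x,t)\big)
  + \int_{\Gamma} W\big(|x-x'|\big)\, S\big[\psi(x',t)\big]\, dx'
  + \int_{\Gamma} G(x,x')\, S\Big[\psi\Big(x',\, t - \tfrac{d(x,x')}{v}\Big)\Big]\, dx'
  + \xi(x,t)

% Region-based simplification: node i, no local coupling, linear global coupling
\dot{x}_i(t) = N\big(x_i(t)\big)
  + g \sum_{j} w_{ij}\, S\big[x_j(t-\tau_{ij})\big]
  + \xi_i(t),
\qquad \tau_{ij} = \frac{L_{ij}}{v}
```

Here d(x, x') is the geodesic distance on the surface, w_ij and L_ij are the connectome weights and tract lengths, v is the conduction speed, and xi is white or colored noise, matching the four terms listed above.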
Now imagine that we model large-scale brain dynamics as populations of neurons like that, additionally connected through these heterogeneous long-range connections. The situation in the brain is obviously even more complex, because we have many thousands of such populations, but the message I want to pass is that we are going to do a similar thing with neurons: describe population dynamics through a few very simple equations, oscillators.

In TVB we have several population models that we can use. In some models, the main state variable is something like a local field potential or synaptic activity; others model something like the firing-rate activity of neurons; and then there are purely phenomenological models, which capture only the kind of dynamics, oscillatory or otherwise, without their state variables being identifiable with a biophysical quantity. What we do not have are more cellular models: we do not represent individual neurons here, only populations of them. What all these models have in common is that they are nonlinear dynamical systems described by differential equations.

So here we can make a small pause, open a big parenthesis, and go through a small tutorial on dynamical systems, to better understand how function is typically represented. What is dynamics? Dynamics is a way to describe change over time. Here, for instance, we have a variable recorded in time; it could be a potential, for instance, or a firing rate. What we have below is the same phenomenon described as the rate of change of this potential, its derivative. If this is how the variable changes in time, representing this change through the derivative allows a different representation, one that does not depend on time explicitly. For instance, let us start from the beginning of the movement here and represent it in a space spanned by the potential on one axis and its derivative on the other (on the horizontal axis here we actually have the derivative, and on the vertical axis the potential). Then we would represent this movement as a spiral: imagine we start at this point and perform a damped oscillation, which means we spiral in toward the final equilibrium point.

Here we can see several elements. First of all, this is what we call the state space: a space where we can represent the trajectory of the phenomenon and have a very explicit, deterministic representation of how the movement will evolve in time, even though there is no explicit representation of time itself. We have the initial condition, and we have the equilibrium toward which the movement evolves. In another case we can have an oscillation, for instance a steady-state or self-sustained oscillation. In the state space, again spanned by a state variable and its rate of change, such a steady-state oscillation is represented by a so-called limit cycle, which is a closed trajectory: wherever we start from, we eventually end up on this circular dynamical structure.

In a similar way, when we go to higher dimensions, we can have more complex dynamical structures. It is a little like a statue: we take a dynamical phenomenon that evolves in time and represent it through a static geometrical structure in an appropriate state space. We can have closed trajectories, oscillations; we can have spirals like this one, which of course reminds you of neuronal bursts; and we can also have chaotic dynamics, which remain bounded in state space yet never pass through the same point twice.
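A damped oscillation like the spiral described above can be sketched numerically; the parameter values here are arbitrary choices for illustration:

```python
def simulate_damped_oscillator(x0=1.0, v0=0.0, gamma=0.5, omega=2.0,
                               dt=0.001, steps=20000):
    """Euler-integrate x'' = -omega^2 * x - gamma * x' as a 2D system.

    The state space is spanned by position x and velocity v; the
    trajectory spirals in toward the stable equilibrium (0, 0).
    """
    x, v = x0, v0
    trajectory = [(x, v)]
    for _ in range(steps):
        x, v = x + dt * v, v + dt * (-omega**2 * x - gamma * v)
        trajectory.append((x, v))
    return trajectory

traj = simulate_damped_oscillator()
x_end, v_end = traj[-1]
print(f"final state: ({x_end:.4f}, {v_end:.4f})")  # close to the equilibrium (0, 0)
```

Plotting v against x would show exactly the spiral of the slide: an explicit picture of the whole movement with no time axis at all.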
What is more interesting now is that when we change some parameters of the differential equations that describe the dynamics, we can have qualitative changes, with one dynamical structure passing into another: for instance, from the spiral we saw at the beginning to a sustained oscillation, or to chaotic dynamics. This is what we call a bifurcation: we change a parameter of the dynamics, and this leads to qualitatively different dynamics. The state space is also sometimes called the phase space; the term comes by analogy from the phase transitions of water between ice, liquid and vapor when we change parameters like temperature and pressure. It is a similar situation here, and historically it has become common to call the state space a phase space as well. So, on the one hand we have the representation of function through dynamics in the state space; on the other hand we have parameters which, when changed, can take us from one structure to another, from one qualitative kind of dynamics to another.

Let us see how this works for dynamics of one and two variables, starting with one variable, like the potential we saw earlier. We have several different representations of the dynamics: trajectories in a relevant state space; time series, described as the evolution of an amplitude (or whatever else) over time; and phase portraits. In the first representation we see what a system with a stable equilibrium looks like: wherever we start from, we roll down the potential curve to a stable point. When the equilibrium is unstable, the opposite happens: there is a so-called source of activity here, and the slightest perturbation takes us away from it, to one side or the other. And if the potential is flat, the system is indifferent, and every point is equally stable. In time, this is represented either as an asymptotic trajectory toward an equilibrium point, or as staying wherever you are (or are put), or, in the unstable case, as asymptotically leaving the equilibrium point. There is a corresponding representation in the phase space; remember that with only one variable, the phase space is actually a line. On this line, a stable point is drawn as a filled dark dot, which attracts the flow from both directions, whereas an unstable point is drawn as an empty circle, which repels the flow away from it.

Let us see an example now. Imagine we have a flow with a stable fixed point, in other words a stable equilibrium point, and an unstable one. The closer we are to an equilibrium point, the slower the flow, because an equilibrium point is exactly a point where the derivative of the system is zero. The same holds here: an unstable fixed point also has zero derivative, but the difference is that the stable fixed point attracts the flow while the unstable one repels it. We can see this if we add an axis here to represent the derivative: at the fixed point the derivative is zero, and as we move away from the stable fixed point the derivative becomes larger and positive on this side, larger and negative on that side. Positive means the flow goes toward increasing values of x; negative means it goes toward lower values. So the stable point attracts the flow, and the further away we are, the faster we move toward it; the unstable fixed point repels the flow, and the further away we are from it, the faster we move away.

Now we can see an example of what we call a bifurcation, a qualitative change of these dynamics. Initially we have a system whose derivative curve never crosses zero: a system without any equilibrium point. Wherever we are on this line, x keeps increasing indefinitely, and the further we are from this region, the faster we go: fast, fast, slow down, then fast, fast, fast, off to infinity. However, if we change the parameter a little and make the curve come down, it will eventually touch the zero line, and we get what is called a saddle: a point that attracts the flow from one side and repels it from the other. What we have then is fast, fast, slowing down to zero, and unless an external perturbation arrives, we stay here forever. If a small perturbation pushes us a little to the left, we still go back to the equilibrium point; but if a small perturbation moves us toward increasing x values, we leave, progressively speed up, and again run off to infinity. You can also see this in time: this is how we approach the saddle, and this is how we leave it. If we change the parameter k a little more, we get yet another qualitative change of the system, leading to the situation we were seeing earlier: a stable fixed point and an unstable fixed point. The stable one attracts the flow from both sides, and the further away we are, the faster we go; the unstable one instead repels the flow. So someone who starts around here leaves slowly, speeds up, and then slowly approaches the stable equilibrium point, whereas someone who starts here leaves and again runs off to infinity. These qualitative changes brought about by a change of parameter are what we call a bifurcation.
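The one-parameter example just described matches the normal form of a saddle-node bifurcation, dx/dt = k + x^2 (this equation is my reconstruction; the lecture does not name it explicitly). A small sketch can locate the fixed points and classify their stability as k is varied:

```python
import math

def fixed_points(k):
    """Fixed points of x' = k + x**2, classified by the sign of df/dx = 2x."""
    if k > 0:
        # Derivative curve never touches zero: no equilibrium, flow runs off.
        return []
    if k == 0:
        # Tangency: attracts the flow from the left, repels it to the right.
        return [(0.0, "semi-stable (saddle-node)")]
    r = math.sqrt(-k)
    # Two fixed points: df/dx < 0 at -r (stable), df/dx > 0 at +r (unstable).
    return [(-r, "stable"), (r, "unstable")]

for k in (0.5, 0.0, -1.0):
    print(k, fixed_points(k))
```

Sweeping k from positive to negative reproduces the three slides: no equilibrium, then the tangent "saddle" point, then the stable/unstable pair.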
In one-dimensional flows, with only one dynamical state variable, all we can get is a series of fixed points alternating between stable and unstable. Two-dimensional flows are far more interesting, and by the way more realistic, because we need at least two state variables in order to have oscillations, for instance. Here is an example with two equations, where x and y can be thought of as transformations of position and velocity; we have the derivatives, the way the position and the velocity change. One kind of dynamics here is spiraling toward an equilibrium point: as time series we have x in blue and y in red, and here is how this is represented in the state space. What we can do now in this state space is evaluate these equations for every possible combination of x and y, obtaining at each point a vector that points the way the flow of the system will go: if I release you here, you will travel along this path and end up at this equilibrium point; if I release you here, you follow a similar movement; and so on. This vector field represents the flow of the dynamics.

Additionally, we can define the following objects: the curves where the x-derivative is zero and where the y-derivative is zero, each considered separately from the other. These are the so-called nullclines, and they are obtained simply by setting the right-hand sides of the equations to zero. Here, for instance, the x-nullclines are the line x = 0, which is the y-axis, and this line here, while the y-nullclines are the line y = 0, which is the x-axis, and the line x = 1, which is actually this one. Where an x-nullcline crosses a y-nullcline we have an equilibrium point, because there the derivatives of both state variables are zero. In this way we get three equilibrium points, one at (0, 0), one at (1, 1), and one at (2, 0), and we mark them here in the vector field as well. Two of them are saddles: you see that they attract the flow from one direction and repel it in another. The third is a stable equilibrium point, which attracts the flow; trajectories arrive at it asymptotically. So we see that more things can happen in two dimensions, with two degrees of freedom, two state variables.

Bifurcations can obviously happen here as well; let me just show you a picture of an example. We have a similar pair of equations in two dynamical variables, and depending on the parameters we can get at least three different dynamical objects. We can have a limit cycle, which, as we said, is a closed trajectory, and it models oscillatory dynamics: you can see the vector field, with an unstable point here inside, which is what causes the limit cycle to emerge. If we start here, we end up on the limit cycle and oscillate forever; if we start there, the same: in all cases we go asymptotically to the limit-cycle behavior. If we change the parameters a little, we get a different structure, with a stable equilibrium point here: wherever we start, we end up there. And if we could zoom in, there would be a line such that if we start slightly on one side of it, we make a small excursion and arrive at the equilibrium point, while starting a bit further up we make a large excursion before arriving. This is called an excitable system, and the line, which unfortunately you cannot see here, is called the separatrix, because it separates the flow into different directions. Such a system is very suitable for modeling excitable elements like neurons, which, when pushed, make one cycle, like one action potential, and then return to the equilibrium point. Here is another case, with two equilibrium points, and again you can see a kind of separatrix between them, dividing the flow into two parts: if we start on one side, we end up at one equilibrium point.
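The three equilibria and their types can be checked with a small phase-plane computation. The system below is my own hypothetical choice, picked only because its nullclines (x = 0 and x + y = 2 for dx/dt; y = 0 and x = 1 for dy/dt) and its equilibria (0,0), (1,1), (2,0) match the ones described; it is not necessarily the system shown on the slide:

```python
import cmath

def jacobian(x, y):
    """Jacobian of dx/dt = x*(2 - x - y), dy/dt = y*(x - 1)."""
    return [[2 - 2 * x - y, -x],
            [y, x - 1]]

def classify(x, y):
    """Classify an equilibrium from the eigenvalues of the 2x2 Jacobian."""
    (a, b), (c, d) = jacobian(x, y)
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    l1 = (tr + cmath.sqrt(disc)) / 2
    l2 = (tr - cmath.sqrt(disc)) / 2
    if l1.real * l2.real < 0:
        return "saddle"              # attracts along one direction, repels along another
    kind = "stable" if tr < 0 else "unstable"
    shape = "spiral" if disc < 0 else "node"
    return f"{kind} {shape}"

for eq in [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]:
    print(eq, classify(*eq))
```

This reproduces the picture described above: (0,0) and (2,0) come out as saddles, while (1,1) is a stable point that the flow approaches asymptotically.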
And so on. I hope you get the message a little bit: with dynamical systems we can represent dynamics, that is, functional changes in time, in a way that corresponds to structure, to dynamical structures. We have seen a series of different objects and how they are represented in time: equilibrium points that attract the flow asymptotically; spirals, or foci as they are called, where we arrive with damped oscillations; limit cycles that are suitable for modeling oscillatory behaviors, sustained oscillations; as well as, from three dimensions on, chaotic behavior like this one. Summarizing this into a dictionary of terms: dynamical systems are composed of state variables that change in time, the speed of a car, or, of more interest in our case, membrane potentials, firing rates, etc. They also comprise parameters that remain relatively constant at certain time scales; however, when the parameters do change, they can take us through bifurcations, that is, qualitative changes. Dynamical systems are described by differential equations, and they can be represented in a phase space, or state space, where the dynamics "freeze" and are represented by geometrical structures. In these spaces we can represent the flow as a vector field, and the dynamical flow shows the direction the evolution of the system will take, and its speed. Therefore we can form trajectories, or orbits, like this one, describing the exact dynamics, and along these trajectories we can have different dynamical structures: fixed points that represent equilibria, limit cycles for periodic behaviors, or saddles, which are like crossroads. When a system is composed mainly of saddles, we call it metastable. Why? Because saddles attract the flow, they slow it down in one direction, and then they let it go in some other direction. So these are systems that change the whole time: they go from one state to another without ever resting, but they do spend
some considerable time in each one of these states; that is why we call them metastable. Multistable, on the other hand, is what we call systems where there are several stable equilibrium points, like the bistable system I showed you earlier, and the system can rest in either of these points. So these are the kinds of terms we are going to use to describe the different population models we have in TVB. Assuming now that we have become quite expert in nonlinear dynamics, we can go through some of the models TVB comprises. We were exactly here: now we understand what we mean when we say that the dynamics of each local node here are described by nonlinear dynamical systems, so let us have a look at a few of them. One of the most frequently used is the Wilson-Cowan model. It supposes the existence of an inhibitory and an excitatory population that are mutually coupled; the key theoretical idea behind it is that the spike timings of neurons are random and distributed following a Poisson distribution. These are the equations; they comprise a sigmoidal function here for the activation, and, depending on the parameters, they can show different dynamical regimes. For instance, this is a case where we see two equilibrium points, one for low activity and one for higher activity, separated by a saddle here in the middle; however, if we change the parameters, for instance the input coming from other brain regions, we can actually have the formation of an oscillation here, a limit cycle, and therefore this model is suitable in general for studying oscillations in the brain. And here is a view where you can investigate this model; it is part of the TVB package. Here you can play, changing the different parameters of the model and seeing these qualitative changes in the dynamics, both in the state space with the vector field, as you see here, and in the time series. Another model that you will see used in many studies, especially of resting state, is the Wong-Wang model.
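Before moving on, the Wilson-Cowan setup just described can be sketched in a few lines. This is a hedged, simplified form with textbook-style parameters, not TVB's actual `WilsonCowan` implementation: two mutually coupled populations, each passed through a sigmoidal activation.

```python
import math

# Minimal Wilson-Cowan-style excitatory/inhibitory sketch (illustrative
# parameters and simplified equations, not TVB's defaults).
def sigmoid(x, a, theta):
    return 1.0 / (1.0 + math.exp(-a * (x - theta)))

def step(E, I, P=1.25, dt=0.05):
    # P plays the role of external input, e.g. from other brain regions.
    dE = -E + sigmoid(16 * E - 12 * I + P, a=1.3, theta=4.0)
    dI = -I + sigmoid(15 * E - 3 * I, a=2.0, theta=3.7)
    return E + dt * dE, I + dt * dI

E, I = 0.1, 0.1
trace = []
for _ in range(4000):
    E, I = step(E, I)
    trace.append(E)

# Activity stays bounded in (0, 1); depending on P the trajectory either
# settles to an equilibrium or keeps oscillating, mirroring the regimes
# described in the lecture.
print(round(min(trace), 3), round(max(trace), 3))
```

Sweeping `P` in such a sketch is the command-line analogue of playing with the parameter sliders in the TVB phase-plane viewer mentioned above.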
This is a relatively simple model. It can have either this state space here, with only one equilibrium point representing a low-activity attractor, or it can have a high-activity and a low-activity attractor separated by a saddle. The state variable S here represents synaptic activity. So these are time series where only the low-activity state is stable, while here both the high- and the low-activity states are stable, and so we can have spontaneous jumps from one to the other. Another model, which is purely phenomenological, is the Generic 2D oscillator. Depending on the parameters, the equations of this oscillator can take completely different forms, so we do not show them here in detail; and, again depending on the parameters, we can have something like the monostable system I showed you earlier, or an excitable system that can perform only one cycle and then return to an equilibrium point, as you see here: imagine an input here giving it a kick, making one cycle and returning to the equilibrium point. If we change the parameters, a limit cycle can also appear, in which case oscillations can be modeled. Now, in all these cases until now, we assumed that the local population dynamics are quite homogeneous; however, there is no reason we should strictly restrict ourselves to this scenario. There can be cases where the parameters of the neurons differ and, for instance, form a distribution that has more than one mode: imagine neurons that receive a low input and neurons that receive a stronger input within the same neuronal population. This case is covered by the Stefanescu-Jirsa model, which is part of TVB and which I am only going to show in terms of the difference it makes with respect to the mean-field representation. On the left we have the case where there is homogeneity in the parameters, so that the different individual neurons, which you see here as blue dots, are actually quite well described by their mean, the mean field, which is in red.
This would be pretty much the case for all the models you have seen until now: the mean field is unimodal and fairly well describes, as a statistical summary, the dynamics of all the individual neurons. Here, however, we see a case where two clusters of neurons are formed and are oscillating; at any moment there are two clusters, and the membership of each neuron in a cluster changes as they oscillate. In this case the mean of the activity is actually meaningless, and this is exactly where this model becomes important: it is available in TVB to model this kind of complexity in the dynamics. Okay, having gone deeper into the separate components of TVB, the structural one and the functional one, we can now discuss some of the details of simulation. First of all, simulation means integration of the differential equations of the models. For integration, TVB has deterministic and stochastic algorithms; stochastic algorithms are necessary in the presence of noise, and TVB also has different options for the noise: it can be white Gaussian noise, or it can have time correlations, as in colored noise. So in case one chooses stochastic integration, one also has to choose the kind of noise, its amplitude, etc. But the most important choice is always the integration time step: a very large integration time step can make the integration unstable, and our system will explode, while a very small time step will make our simulation very slow, because the output trajectories will then be approximated by many, many small steps. Then one has to choose the measures one wants to take over the virtual brain activity. This happens through so-called monitors that observe the state of the brain. The simplest monitor is the one that just tracks the state variables of each neuronal population; this is what we call the Raw monitor. There are also versions of it that at the same time perform some kind of sub-sampling or temporal averaging.
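The sub-sampling and averaging just mentioned can be sketched very simply: a temporal-average monitor downsamples the raw state-variable trace by averaging non-overlapping windows. The window length and sampling rate below are illustrative assumptions, not TVB's defaults.

```python
import math

# Sketch of what a temporal-average monitor does: downsample a raw trace
# by averaging non-overlapping windows (illustrative, not the TVB API).
def temporal_average(raw, window):
    return [sum(raw[i:i + window]) / window
            for i in range(0, len(raw) - window + 1, window)]

dt = 0.001                                   # assume 1 ms raw sampling
raw = [math.sin(2 * math.pi * 10 * k * dt)   # a 10 Hz trace, 1 s long
       for k in range(1000)]
avg = temporal_average(raw, window=100)      # one sample per 100 ms

print(len(avg))  # 10 averaged samples from 1000 raw ones
```

Here each 100 ms window spans exactly one period of the 10 Hz signal, so the averaged output is essentially zero: a reminder that the monitor's sampling period interacts with the frequencies present in the dynamics.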
That would be, for instance, like measuring the LFP, the local field potentials. Then we have monitors to produce fMRI BOLD, through the implementation of a reduced Balloon-Windkessel model, and we have monitors to produce MEG and EEG through forward modeling; this is where the actual geometry of the cortical surface is very important, because, if you recall, MEG and EEG depend a lot on the direction of the dipoles of the cortical columns. We also have monitors for stereotactic EEG, where we assume that our sensors are deep inside the brain. It is actually quite easy for someone to program and develop more monitors, or to modify the existing ones. Now, another interesting aspect of TVB simulation is when we stimulate the brain and thereby perturb the TVB dynamics. TVB allows the user to easily define a stimulus that has, on the one hand, a time course, for instance a pulse or a sinusoid or whatever, and, on the other hand, a spatial structure. In other words, the user can define a spatial kernel, saying: I want a stimulus that will be applied at this point and then fade out as we go further away from it, together with a time course. I am going to show you a study where stimulation was applied in that way in TVB; it is this one by Spiegler et al. What they did was to stimulate TVB at different brain regions one after the other, allow it after a while to reach an equilibrium, perform a principal component analysis on the resulting time series, and compute correlations, recovering the spatial patterns of the different networks that emerge when TVB is stimulated at different points in the brain. The result, taking everything together, was that the different networks that appeared corresponded to a large degree to the resting state networks we observe in fMRI, in the correlations of BOLD fMRI. This shows how important the space-time structure of brain connectivity we discussed is, because we can recover these resting state networks to a large degree.
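The stimulus construction described above, a spatial kernel multiplied by a time course, can be sketched as follows. Function names, the Gaussian fall-off, and all parameter values are illustrative assumptions, not the TVB stimulus API.

```python
import math

# Sketch of a TVB-style stimulus: spatial kernel x time course
# (illustrative names and parameters, not the actual TVB API).
def spatial_kernel(distance_mm, sigma_mm=10.0):
    """Gaussian fall-off around the stimulation focus."""
    return math.exp(-distance_mm**2 / (2 * sigma_mm**2))

def time_course(t_ms, onset_ms=100.0, width_ms=50.0):
    """A simple rectangular pulse."""
    return 1.0 if onset_ms <= t_ms < onset_ms + width_ms else 0.0

def stimulus(distance_mm, t_ms):
    return spatial_kernel(distance_mm) * time_course(t_ms)

at_focus = stimulus(0.0, 120.0)    # during the pulse, at the focus
far_away = stimulus(50.0, 120.0)   # during the pulse, 50 mm away
before   = stimulus(0.0, 10.0)     # before onset
print(at_focus, far_away, before)
```

Evaluating `stimulus` at every node and time step yields the spatiotemporal pattern that gets added to the model's input, which is the shape of what the Spiegler et al. protocol applies region by region.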
Now I am reaching the point where we could have a break in a while. Taking everything together, the usual workflow for modeling and simulating with TVB starts with determining, or loading from files, the brain topology and the global connectivity: that is, either the positions of the brain region centers, or the vertices of a triangular mesh if we do a surface-based simulation, plus the matrices of weights and tract lengths. Then we choose the functional side of TVB, that is, the population model, and we can determine its parameter values. We can determine a global coupling function, how two distant regions will interact, for instance a linear coupling, or multiplying them, or whatever; and optionally, if we are doing a surface-based simulation, the local coupling kernel, or, if we also want some kind of stimulus, its spatiotemporal pattern. Then the user has to determine the measures they want to take: if you do not select any monitors, you are going to get practically no output, so the user can select as many monitors as they want and determine their parameters, like the sampling period, etc. Then the user has to choose the integration algorithm, depending on whether they want a stochastic or a deterministic simulation; in the stochastic case they also need to choose the parameters of the noise. In all cases, a very important choice is the one of the integration time step. Then you press the button and you simulate, and finally you gather results, analyze, plot the brain activity measures, etc. Now, further down the road, there are other things the user can do to make some really interesting science out of it: we can scan different parameter values and simulate; we can compare the virtual data we generate with empirical data, and maybe even fit the TVB parameters to the data, what is called model inversion.
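On the earlier point about the integration time step: explicit Euler on the simplest stable linear system already shows the "explosion" versus "very slow" trade-off. This is a generic numerical-analysis illustration, not TVB-specific code.

```python
# Why the integration step matters: explicit Euler on dx/dt = -a*x is
# stable only for dt < 2/a. Beyond that, the trajectory "explodes".
def euler(a, dt, steps, x0=1.0):
    x = x0
    for _ in range(steps):
        x += dt * (-a * x)
    return x

a = 10.0                                    # a stiff, fast-decaying system
stable   = euler(a, dt=0.01, steps=1000)    # dt < 2/a: decays towards 0
unstable = euler(a, dt=0.30, steps=50)      # dt > 2/a: diverges
print(stable, unstable)
```

The true solution decays to zero in both cases; only the step size differs, which is why TVB simulations that "explode" usually need a smaller `dt` (or a different integrator), not a different model.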
We can test hypotheses and make predictions; some of these things you are going to see in the last part of today's lecture. Still, TVB is a highly abstracted model of a very complicated system, the brain, and it is not straightforward to use: it is not a black box where you put things together and brain function emerges. There are many common errors that can happen, like coupling functions being misused, or model parameters leading to completely off-range dynamics; if the time step is not chosen correctly, the numerical integration will explode, it will not converge. And there are even more complex problems, like a conceptual confusion between structural, functional and effective connectivity. Structural connectivity is what we get from the white matter tracts, as we discussed; functional connectivity refers to correlations between observed data; effective connectivity is when two regions are really interacting dynamically. It is very common that people confuse these, very often when they want to model task conditions in TVB: they think that if they just connect the different regions of the brain through the structural connectivity and press the button, they are going to see interesting task dynamics, like vision, audition, cognition or whatever. This will not happen, of course, and, as you are going to see, until now most applications of TVB have had to do with resting state exactly for this reason: additional assumptions have to be made to really go from the structural connectivity we have in TVB to the specific effective connectivity that characterizes a specific task. Finally, there are two ways to use TVB. One is the GUI version: you can download it, log in, perform all these little changes and checks, with many interesting views, and finally simulate, recover the data, view the data, analyze the data. But, to be honest, most of the time, when we do science with TVB, we do it through scripting.
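To give the flavor of such a script, here is a toy, pure-Python caricature of the workflow just listed: connectivity, local model, global coupling, integrator, monitor. All names are made up for illustration; in real TVB scripting you would assemble a `Simulator` from a `Connectivity`, a population model, a coupling, an integrator and monitors.

```python
# Toy caricature of the scripted TVB workflow (made-up names, not the
# real TVB API): connectivity -> local model -> coupling -> integration
# -> monitor.
weights = [[0.0, 0.5, 0.0],
           [0.5, 0.0, 0.5],
           [0.0, 0.5, 0.0]]                  # toy structural connectivity

def local_model(x):
    return -x                                 # trivially stable local dynamics

def linear_coupling(state, i, g=0.1):
    return g * sum(w * s for w, s in zip(weights[i], state))

state = [1.0, 0.0, -1.0]
dt, steps = 0.01, 2000
raw_monitor = []                              # the simplest "monitor"
for _ in range(steps):
    coupled = [linear_coupling(state, i) for i in range(3)]
    state = [s + dt * (local_model(s) + c) for s, c in zip(state, coupled)]
    raw_monitor.append(list(state))

print(len(raw_monitor), [round(s, 6) for s in raw_monitor[-1]])
```

Every ingredient here, the weights matrix, the coupling scale `g`, the step `dt`, the monitor, maps onto one of the workflow choices described above, which is what makes scripted parameter scans so natural.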
Thankfully, again, there are very detailed tutorials and many examples with Jupyter notebooks, and you can find them all in the very rich documentation that you see at this web address. So I think we can now take a break, until 7:10 I would say, and then, in the remaining time, and depending on how tired you are, I am going to take you through a few examples of doing science with TVB. I am going to have a look at your questions as well and try to respond to them when we are back. Dennis, we have answered most of the questions, but there is one request to elaborate again on the Stefanescu-Jirsa 3D model; it is the Stefanescu-Jirsa model, not the similarly named one, the names sound alike but the models have nothing to do with each other. So maybe, Dennis, you want to say something; I have to say it is one of the most complicated models, so maybe not the best model to start with, but Dennis, I would hand it over to you. Yeah, thanks. I do not have a slide here with the equations of the model; I would suggest that you really look at the paper, because it is a genuinely complicated model. I will only say a few words, and you are going to see a study that uses it, so let me find it. Okay, the idea of this model, I would just show you this video again, is that among the neurons there is at least one parameter in which they differ a lot. Imagine, for instance, as I said earlier, an input parameter, like an input current, and imagine that it has a distribution across neurons with more than one mode, like a bimodal distribution. This bimodality is a kind of heterogeneity within the population of neurons, and it leads to a possible bimodality of the mean field as well; in other words, the average behavior of the population is no longer a good description of this population, and we need more state variables, following the different modes of the dynamics, as they are called: one mode is here, another mode is here. The way this model works, it has at least six
state variables, so it is quite a complicated one, and it does a kind of mode decomposition on the space of the parameter; in other words, it, let us say, projects the neurons according to their value of the parameter that has the bimodal distribution, and like this it can describe both modes, and I guess more complicated versions could extend to even more modes than the two you see here. I cannot go into more details here; to be honest, I would also have to study it myself to make a better presentation of it, but you can really go through the publications here, and I hope you will understand more there. Yes, I also shared the Sanz Leon 2015 paper, where there is a very short overview of this one and also of the other models, maybe to start with, and then you can go to the original publications. Right, so let us continue, because we are a bit short of time. Of course, the main part of the course was until here, so whatever we do from now on is extra, and if you want to finish up we can also leave some things out; we are going to post the slides anyway. I want to take you through a few examples to show how all these things I showed you today can be used jointly, and what kind of scientific results they lead to. We started by saying that TVB is the model we need to put between neuroimaging data and theory, and we said it is the model that connects function, as oscillations in a large-scale network of neuronal populations, to the observable data we have in neuroscience. This gives us the possibility to compare the virtual data we generate with TVB with the empirical data we have, and therefore to test hypotheses or, when we do not have data, to make predictions. We also showed how this dynamical system we use to model brain function can undergo qualitative changes when some parameters change, like, for instance, the input parameter, or other parameters: time scales,
connectivity strengths, whatever. So a usual case study with TVB is to try to tune the parameters of the model such that we explain something in the data. This is called model inversion: we invert the model and identify the parameter values that best explain a given set of data. This means, first of all, that we can have parameters with a spatial structure along the brain, in other words parameters that differ across it. There are parameters, like the global coupling strength, that scale all the connectivity, or the speed of the transmission delays; but there are also parameters that can change from one site of the brain to another: when we do brain-region simulations, these parameters can be one per brain region node, and when we do surface-based simulations, they can again have some kind of spatial extent, like what we showed with the local coupling or with the stimulus. Now let us go through some use cases, like, for instance, fitting empirical and simulated functional connectivity at resting state, which is, by the way, a very frequent use of TVB. Here the authors used the Wong-Wang model we saw earlier; if you remember, for some parameters that model exhibits only a low-activity state, and for others it has both a high-activity and a low-activity branch. The idea was: okay, we have the structural connectivity from magnetic resonance data, and we also have fMRI, from which we can compute the empirical functional connectivity as correlations between the nodes of the brain; how can we now tune the model such that TVB generates a functional connectivity that looks like the empirical one? What the authors showed was that when they tuned one parameter, the strength of the global coupling, in other words when they scaled all the connections of the structural connectivity up and down, they could actually identify a region of parameter space where the similarity between the empirical and the modeled functional connectivity was best, and that was actually the region which allowed for both states, high and low activity, and an easy transition between them.
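The logic of that parameter scan, sweep the global coupling, simulate, and keep the value whose simulated summary best matches the empirical one, can be caricatured as a one-line grid search. Everything here is a hypothetical stand-in: real studies compare whole FC matrices, not a single scalar.

```python
# Toy grid search over a global coupling g (illustrative stand-in for
# model inversion; simulate_summary is a made-up response curve, not a
# real TVB simulation).
def simulate_summary(g):
    return g / (1.0 + g)              # pretend FC-similarity response to g

empirical = simulate_summary(0.7)     # "data" generated at a known g

best_g, best_err = None, float("inf")
for k in range(101):                  # grid over g in [0, 2]
    g = k * 0.02
    err = (simulate_summary(g) - empirical) ** 2
    if err < best_err:
        best_g, best_err = g, err

print(best_g)  # recovers a value near the known g = 0.7
```

In a real fit, `simulate_summary` would run a full TVB simulation and the error would compare the simulated and empirical FC matrices, but the scan-and-compare structure is the same.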
In more detail, they actually varied two parameters, the global coupling strength and the transmission speed. These two parameters affect the global synchrony of all the brain regions and their metastability; remember, metastability is the ease with which the brain switches from one state to another. They showed that for a specific region of those parameters they could approximate the functional connectivity of fMRI to a very large degree, as well as MEG connectivity in different frequency bands. For instance, what they do here is choose a specific region as a seed and look at the strength of the correlations to all other regions, for two different MEG bands; they do that for the empirical data and also for the model, and you can see visually how well the model represents the data. The lesson we learn from here concerns structural connectivity on the one hand and, on the other, dynamics tuned to the critical, or metastable, regime, where the brain is fluid enough to change from one state to the other, but at the same time stable enough to stay in one of these states long enough that we can observe it as a distinct network. This equilibrium between flexibility and stability, which we get for a specific range of parameters, is where the brain dynamics of TVB can best represent the data. In another, very similar case, the authors managed to capture more than the functional connectivity: additional characteristics, namely the functional connectivity dynamics, in other words the changes in time of the functional connectivity, as well as the bimodality of the power of the EEG at resting state. They did a similar thing, varying two parameters, the global coupling and the conduction speed, and tuning the oscillators; by the way, here they actually used the Stefanescu-Jirsa 3D model, and you can see that
this model can exhibit dynamics like this, where there are periods of bursts and periods of quietness, which is a kind of bimodality of power: there are moments of high power and moments of low power. The authors showed that, again for a specific range of parameters, there is a high correlation between the simulated and the empirical functional connectivity for two different bands, both when tuning the system to perform delta oscillations and when tuning it to perform alpha oscillations. What is more, for similar parameter regimes they also managed to get a high correlation for other features of the dynamics: the bimodality of the EEG power, for instance, as well as the FCD, the functional connectivity dynamics, that is, the different networks that appear and the statistics of the transitions from one network to the other. All three of these measures, functional connectivity, functional connectivity dynamics and the bimodality of power, converge on a similar range of parameters for global coupling and speed. Again, this points to a specific regime of brain dynamics, close to criticality, or to metastable dynamics, an ideal equilibrium between flexibility and stability. Now, mind that this has direct clinical applications, because we can identify statistical measures that act like biomarkers: in other studies people have compared different groups, like Alzheimer's patients versus controls, trying to find biomarkers that can separate the control group from the patient group. And it also has theoretical implications, because, as we said, there is a specific equilibrium between stability and flexibility that leads to this narrow parameter region where we have a good fit with the data; therefore we can also have an explanation of what changes in the brain between control and patient groups. Another, more complicated case, which is where the demonstration I showed you at the
beginning comes from, is this study by our colleague Michael Schirner. In this case, they first asked the question of how realistic the simulated fMRI time series we get from TVB are, and then they had the idea of using the EEG as an approximation of the local input currents, instead of Gaussian white noise. This is the scheme of what they actually did: they build personalized models using diffusion MRI, as we discussed, for different subjects; they place a specific population model, the reduced Wong-Wang model, which has an excitatory and an inhibitory population, at each node of the network, and they produce artificial fMRI; however, they use the recorded EEG as the source of noise, and they tune the parameters of the model to get a good fit to the fMRI. When the model is fit in this manner, first of all, they show that the fit is much better than with any alternative model, like a pure-noise model; in other words, this hybrid model can fit the fMRI data much better than the series of other models that they tried. I do not want to get into details; the good message, though, is that once the model is tuned like this, several features of actual neuroimaging data that are known in the literature can be reproduced: the various resting state networks of fMRI are reproduced, and other phenomena too, like the relationship between the alpha phase and the spiking, or between the power of alpha and the fMRI, the anticorrelation between them that you also saw stressed in the demonstration at the beginning, as well as the scaling properties of the fMRI power, the functional connectivity and the functional connectivity dynamics. It seems that this kind of fit can actually reproduce many of the observations we have in neuroimaging today. Another case with direct translation to the clinic is the one we discussed at the beginning, about epilepsy: the virtual epileptic patient. It is related to a big clinical trial that runs right now in France, with 400 patients tested.
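The "hybrid" idea, replacing the white-noise drive with a recorded signal, can be sketched with a single driven node. The node model, names and signals below are illustrative stand-ins, not the actual Schirner et al. pipeline.

```python
import random

# Toy version of the hybrid idea: drive a simple leaky node either with
# white noise or with a pre-recorded signal standing in for EEG
# (illustrative, not the actual personalized-model pipeline).
def simulate(drive, dt=0.01):
    x, out = 0.0, []
    for u in drive:
        x += dt * (-x + u)       # leaky node driven by the input signal
        out.append(x)
    return out

rng = random.Random(42)
white_noise = [rng.gauss(0, 1) for _ in range(1000)]
recorded = [1.0 if (k // 100) % 2 == 0 else -1.0  # slow "EEG" stand-in
            for k in range(1000)]

x_noise = simulate(white_noise)   # conventional noise-driven run
x_rec = simulate(recorded)        # hybrid run with the recorded drive
print(len(x_noise), len(x_rec))
```

Only the input changes between the two runs; the model is identical, which is exactly why the comparison isolates how much of the fMRI fit comes from the structured, subject-specific drive.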
The idea here is that we can use TVB, in a mode similar to the stimulation we discussed earlier, to describe the way an epileptic seizure is generated at one point in the brain and then spreads to other points. What you see here are the brains of patients together with their recorded seizure signals: you see a simple seizure, recorded by a specific SEEG contact, starting from one site and pretty much staying there, then a complex seizure starting from the same site but spreading to other channels, and you also have a simulation of such a seizure by TVB. So the idea is: we can generate seizures, compare them with the signals we measure with SEEG, and choose the parameters that tell us where the problematic tissue might be, the so-called epileptogenic zone, in other words the actual tissue that is pathological and generates the seizure, as distinguished from the other parts of the network that only propagate it. Initially we have a clinical hypothesis about where the epileptogenic zone and the propagation zone are, and we represent it through a parameter, an excitability parameter, which shows how easy it is for an isolated brain region to trigger a seizure essentially autonomously. We can perform simulations trying different combinations of such parameters, and we can actually do it systematically, using Bayesian inference methods and a dedicated software, Stan; I do not want to go into technical details. The result, though, is that for every brain region we can eventually get a probability distribution for this excitability parameter: the higher it is, the more likely it is that that region is pathological and belongs to the epileptogenic zone, and therefore this can direct the surgeons to actually operate on this region, try to isolate it, or whatever. I do not have time to go into more details, but exactly this is what is now being tried out on 400 patients in France. Here I have some other references you can go through, with applications to tumor patients and Alzheimer's patients.
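The core logic, score candidate excitability maps by how well the simulated seizure spread matches the recorded one, can be caricatured with a threshold-spread toy model. The graph, weights, excitability values and "observed" recruitment below are all made up for illustration; the real approach uses the Epileptor model and Bayesian inference, not this rule.

```python
# Toy epileptogenic-zone logic: a seizure starts at the most excitable
# node and recruits neighbors whose excitability plus incoming drive
# crosses a threshold (all values invented for illustration).
weights = {0: {1: 0.8, 2: 0.1},
           1: {0: 0.8, 2: 0.6},
           2: {0: 0.1, 1: 0.6}}

def spread(excitability, threshold=1.0):
    seizing = {max(range(3), key=lambda n: excitability[n])}  # onset node
    changed = True
    while changed:
        changed = False
        for n in range(3):
            drive = sum(weights[n][m] for m in seizing
                        if m != n and m in weights[n])
            if n not in seizing and excitability[n] + drive >= threshold:
                seizing.add(n)
                changed = True
    return seizing

observed = {0, 1}                       # "recorded" recruited regions (toy)
candidates = [[1.1, 0.3, 0.1], [0.1, 1.1, 0.9], [0.2, 0.2, 1.1]]
# Pick the excitability map whose simulated spread best matches the data.
best = min(candidates, key=lambda e: len(spread(e) ^ observed))
print(best, spread(best))
```

The winning map puts high excitability on node 0, which both seizes first and recruits exactly the observed set, the toy analogue of localizing the epileptogenic zone.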
I do not have more time for that. However, I have to say that inverting TVB is very complicated, because, as we said, TVB is a big abstraction of a very complicated system, and there are many things that are difficult to handle right now or can go wrong. For instance, most of the error functions we need to optimize are not differentiable, so we have to optimize numerically; there are usually many local minima, or alternative solutions that are almost equally good, and therefore it is not easy to identify the optimal point. This is why we generally try to optimize several features of brain function dynamics at the same time, just like in the Schirner study. Also, right now there is no model inversion toolbox; it is something to be expected. So usually what we have to do is a fairly brute-force grid search, where we take the parameters we want to test, try several values of them on a grid, and compare with the empirical activity through some statistic. Eventually we get resulting heat maps, which correspond to bifurcation diagrams where the different qualitative behaviors of the brain are represented, and these help us answer the question of what is qualitatively changing between one state of the brain and another, for instance the pathological versus the healthy one. All of this process requires knowledge; it is not something we can do with a black-box attitude. I have three minutes, so I will not go deep into the next subject, which is also the last one. The model inversion toolbox is one extension of TVB that is being worked on right now; another one, the one I am personally working on, is multiscale simulation with TVB. I will only tell you a little bit about it. The idea is: okay, I hope I have persuaded you by now that TVB is indispensable for large-scale simulations that can directly link to virtual neuroimaging data, and therefore make
the connection to the reality we can measure in the brain. However, there are many spiking simulators out there that simulate spiking networks of neurons, spiking neurons in other words; NEST and NEURON are some of these simulators. The idea there is different: it is to study local mechanisms of processing, local systems and mechanisms. So, are there cases where we would need a little bit of both? Well, there are scenarios like that. One scenario, for instance, these cases you see down here, is where we want to create a realistic context for a spiking network: usually the input coming to spiking networks is something like a noisy Poisson spike train or some oscillation, but it might be crucial for the mechanism we are studying to receive a differentiated input, either in time, in terms of oscillations for instance, or in space, receiving different inputs from different brain regions. This would be a case where we want one-directional coupling from TVB to some spiking network that models a specific brain region or a specific system. There are also cases where bi-directional coupling is necessary, when different modes of functioning of a specific spiking network, once averaged out to update the TVB mean-field node, can actually cause bifurcations, or changes, in the large-scale dynamics; in that case we have a bi-directional coupling that goes through updating a specific selected node of TVB with the activity of a dedicated spiking network. In order to make this work, there are several modules to communicate activity and to transform it, for instance to turn firing-rate activity from TVB into spikes in NEST and, the other way around, to turn spikes into firing-rate activity that updates the specific TVB node. All of this is quite complicated and computationally expensive, so we are trying to have a kind of simulation that actually runs in parallel, and to minimize the communication between the two scales, the large scale of TVB and the fine scale of NEST.
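The two transformations mentioned, rate to spikes and spikes back to rate, can be sketched as follows. The scheme is illustrative, not the actual tvb-multiscale interfaces: a Poisson spike generator is typical in practice, but a regular train is used here to keep the example deterministic.

```python
# Sketch of the rate<->spike transformations for mean-field / spiking
# co-simulation (illustrative scheme, not the tvb-multiscale API).
def rate_to_spikes(rate_hz, duration_s):
    """Regular spike train at the given firing rate."""
    n = int(round(rate_hz * duration_s))
    return [k / rate_hz for k in range(n)]

def spikes_to_rate(spikes, duration_s, bin_s):
    """Binned firing-rate estimate (Hz) from spike times."""
    n_bins = int(round(duration_s / bin_s))
    counts = [0] * n_bins
    for t in spikes:
        idx = min(int(t / bin_s + 1e-9), n_bins - 1)  # guard float edges
        counts[idx] += 1
    return [c / bin_s for c in counts]

spikes = rate_to_spikes(50.0, 1.0)        # 50 Hz for one second
rates = spikes_to_rate(spikes, 1.0, 0.1)  # recovered rate per 100 ms bin
print(rates)
```

The bin width here plays the role of the synchronization interval between the two simulators: communication is only needed once per bin, which is what the minimum-delay argument below exploits.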
You can see here a graph of the performance of the tool: it is very crucial that parallel co-simulation is used, and also that we minimize the necessary communication between the two simulations as much as possible. Remember, this is possible because the connections in TVB are delayed, due to the lengths of the white matter tracts, so the necessary communications between the two actually correspond to the minimum delay time, which in turn depends on the lengths of the tracts in the brain. If we simulate in parallel and synchronize once every minimum TVB time delay, which is the green line here, the whole co-simulation becomes much, much faster. What we are also trying to achieve is a very modular interface, because we have TVB, we have some spiking simulator, NEST in this case, and we also have transformer and communication modules to communicate and transform the activity from one to the other; and possibly each one of these could run on a different computational node of a cluster, to allow these very large-scale co-simulations to happen. Anyway, that was a little bit too fast; I just wanted you to know about it. You can learn more if you look here; there is also a GitHub repository, and if you search for TVB multiscale on GitHub you will find the existing state of the code. It is really an ongoing project, one of the projects that tries to extend TVB and to develop more of its use cases. So that was all, and thank you very much for your attention. I know it was quite intense; still, I can answer some questions if there are any.
Info
Channel: BrainModes
Views: 114
Keywords: TVB, The Virtual Brain, Brain Simulation, BSS, Brain Simulation Section, Berlin, Neuroscience, Brain, Workshop, Tutorial
Id: C7iOdb8k4oY
Length: 89min 20sec (5360 seconds)
Published: Wed Nov 03 2021