Houdini Algorithmic Live #079 - Video Stippling

Captions
all right, hello hello, this is Junichiro. Can you hear me? Okay, so it's 10 p.m. on Saturday, and I'd like to start this week's tutorial live, Houdini Algorithmic Live; this is the 79th episode. The topic for today is to create a video stippling effect, meaning creating an image stippling from video frames using a technique explained in a paper called "2.5D Computational Image Stippling" by Kin-Ming Wong and Tien-Tsin Wong. So today I'm just going to try to implement what this paper shows and see if it works in Houdini. One thing to note: I am not going to fully implement the method explained in the paper, since the process it describes assumes you are going to use a GPU for the calculation, and I am not going to write any GPU code inside Houdini, nor OpenCL. I'm just going to use VEX to achieve a similar effect, so I'll deform some of the functions so that it runs closer to real time rather than waiting several minutes for the calculation to finish, since VEX runs on the CPU. I'm going to strip down some of the functions explained in the paper to make it a bit faster, so the actual result might not be exactly the same as what the paper shows, but at least from my point of view it still produces a good image, and you will understand what I'm trying to do here. If you know how to write GPU code, I think you can just transfer this into a different language once you know the underlying algorithm. Right, so what's so special about the algorithm these two authors propose for computational image stippling? It's another example of how you can convert an image into blue noise, this kind of scattered point pattern, and as the paper explains there are already many ways to achieve the same pattern, but what is special about this method is
that it uses a physics-based simulation for sampling, using a point-charge method. Because of that, you implement a small physical simulation per point, and if you apply this simulation frame by frame over a moving sequence, you can create an effect where the points seem to move toward the next frame's image; you can move the points at each frame to mimic the underlying image. Compare that with the other stippling methods: if you run those on a video, you get random point positions for every frame, so you don't get this smooth transition of the points. Because this method is based on a physical simulation, you get a smooth interpolation of the points instead, and I think that's what is interesting about it. All right, so that's what I'm going to try to achieve here: feed in the image sequence extracted from my video, convert it into a stippling image, and play it back like this. That's our goal, implementing what this paper shows from scratch. Okay, so that's it for the basic information; let's look at the paper and see where we should start. Looking through the paper, it actually shows three different steps to achieve the final image. One thing you need is the base image, obviously, and it also asks you to provide a depth image for an additional effect, but what I'm going to do is just use the original image and skip the depth image. So the section I'm going to focus on today is section 2, the physically based blue noise sampling; section 3 explains how you use the depth image to create a focusing effect based on
those height-field images. I guess that's not that hard to do: you just need to update the distance function, or the charge functions right here, instead of the ones explained in section 2. Still, I'm going to skip that and just implement the base physics-based blue noise sampling technique explained in section 2, and once you are able to do that, I think you'll understand how to do the 2.5D images later as well. All right, so focusing on section 2, I know I need to provide an image; let's see what else I need in order to start. It explains that each particle has a charge, and there are two forces applied to each point. One is the positive charge, which is used to create the uniform sampling, so that the points keep a similar distance to each other like these; by using a uniform positive charge you can create this kind of image first. So this is the first goal: creating a uniform point set. After that, you create the adaptive sampling by providing an image, and from that image you create an additional plane; I guess it's better to explain that with a sketch. All right, so there are two steps involved in this algorithm. Step one: first of all, you create an image plane. I'm going to keep it simple, as in the paper, and make the plane size equal to one for width and height, going from (0, 0) to (1, 1). Basically, I'll start by scattering random points over the plane, maybe using a Scatter node in Houdini without the relaxation option, so that we can see how it relaxes into a uniform distribution using our own functions. For step one, after you have created this plane with the scattered points on top
of the image, what we're going to do is apply a charge to each of the points using this function right here. At each point you have an f value, and this f_i value is calculated by looking at every other point from this point: comparing with this point, this point, this point, and so on, taking the distance and the direction, and using that information you calculate f_i. You do the same for all the points, so you can imagine the calculation cost: if you have n points, the cost is n squared. That's a lot if you want to do this in real time, and that's the reason why the paper suggests using a GPU for fast simulation. Since I'm not going to use any GPU or OpenCL here, what I'm going to do is try to reduce the number of points examined from each point, by setting a search radius, or a maximum point count, around the center point. By doing that, some information is lost; in particular, points outside the radius get ignored. But if you look at the paper, you can see that the smaller the distance between two points, the bigger these terms become, and the denominator is the square of the distance, so if the distance is large enough the contribution becomes really small. So I kind of assume that when the distance is big enough, the term can be ignored, and that's what I'm going to do here to make the simulation a bit faster on the CPU, because I'm just going to use VEX. And e_ji is the unit direction vector from point j to point i: if this is point i and this is
point j, then e_ji points from j toward i. Okay, so that's that, and by calculating f_i you get a vector value, and you use it as a force. This value by itself can't just be added to the point position to move it; you need to build the actual force vector in order to move the point, and that is based on Verlet physics, which is explained somewhere around here, which is this one. We're going to use this customized velocity Verlet numerical integrator to create the velocity, and then we can move the points based on those forces in a direction that converges, and as a result you'll be able to create the blue noise sampling. So let's try that out. The equations for this Verlet integrator are given right here; this is what we need to implement, and since these are recursive functions, we can obviously use a Solver for them. It's not that hard to implement, I think; looking at these functions, they're pretty easy to understand. First you update the position based on the velocity and the acceleration, where this is the velocity, this is the acceleration, and d is the maximum displacement the point can move in one substep. Next you calculate the new acceleration a using the updated positions, which is this one right here. Then you update the velocity, using the same displacement limit together with s, the damping value that slows the velocity down, plus the current acceleration value and the previous acceleration value: this is the acceleration just updated right here, and this is the acceleration that was used before the update. So you have two types of acceleration value.
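Just to make the scheme concrete, here is a minimal NumPy sketch of one step of this damped velocity-Verlet integrator. This is not the VEX we will write in Houdini, and the function and variable names are my own; it's only a sketch under the assumptions stated in the comments.

```python
import numpy as np

def verlet_step(pos, vel, acc, compute_force, dt=0.01, s=0.95, d=0.002):
    """One damped velocity-Verlet step as described in the stream:
    s is the damping factor, d the maximum displacement per step.
    pos, vel, acc are (n, dim) arrays; compute_force(pos) returns
    the new per-point acceleration (here the charge force f_i)."""
    # 1. position update, with the displacement length clamped to d
    step = vel * dt + 0.5 * acc * dt * dt
    norm = np.linalg.norm(step, axis=1, keepdims=True)
    norm = np.where(norm > 0, norm, 1.0)          # avoid divide-by-zero
    pos = pos + step / norm * np.minimum(norm, d)
    # 2. new acceleration computed from the updated positions
    new_acc = compute_force(pos)
    # 3. damped velocity update, with the speed clamped to d / dt
    vel = s * vel + 0.5 * (acc + new_acc) * dt
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    speed = np.where(speed > 0, speed, 1.0)
    vel = vel / speed * np.minimum(speed, d / dt)
    return pos, vel, new_acc
```

Note how both the position step and the velocity keep their direction and only have their length clamped, which is exactly the min() trick we will reproduce in the wrangles below.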
That is what you need in order to update the force information and change the positions, and as a result you should be able to get this uniform sampling image. So that's our first goal. After that, we can come back and do the second step, which is to use the image we provide to create the adaptive, image-based blue noise sampling, something like this image. Okay, so in this second step you provide an image, which could sit underneath or above the original scattered-point plane, and the image is represented as a grid of points: you create a grid out of the image, and at each point on the grid you have the color information, or, if you're using a black-and-white image, an intensity value for each pixel. You use this intensity to create another charge, and this time it's going to be a negative charge. The rule here is that the total positive charge from step one, let's say that's F total, and the total negative charge, let's say that's G total, must cancel: when you add those two values, F total plus G total, you should get zero. This one is the negatively charged value and this is the positively charged value, so when you combine the two charges, the sum should be zero; that's the rule here. And as I said, if you only pick up the few points inside a search radius, then what you need to do is calculate the total amount of positive charge within this range, and the same for the negative charge as well: using this point's search radius, you pick up the pixel values from the image underneath, and you add up all the negative charges coming from the
intensity values; those partial sums, let's say F part and G part, also have to cancel to zero. In order to make the total force, or charge, come out to zero, you need to calculate a coefficient value, which is called 'a' in the paper right here. So that's what you're going to do, and this is how you calculate the negative charge based on the intensity, where this is the intensity right here. If you do these calculations you'll get vector values, and as a result you'll be able to update the point positions to where they converge, and you should be able to get an image something like these, based on the intensity of the image. So let's do this. It sounds a bit hard to implement, but if you do it from scratch, one step at a time, I think it shouldn't be that hard. It's just that if you want to use the GPU instead of the CPU, that's not my field, so maybe you can do that yourself and show me how. Okay, so I've just created a Geometry node, and first of all let's find the image I'm going to use. Let's go to a website and look for an image; I mean, you can use anything, I think, but an image with high contrast in terms of grayscale might be a good idea. I'm going to use Envato Elements, since I have a subscription there; let's search for a cat or something. Okay, let's use this one, a kitty; let's download it. Any format is fine, I think, just download any image you want to use. I'm going to start by testing with the still image, and later on I'll try it with the video frames and see if it still works. Okay, so let's do this: I am going to just drag and drop the image I have
downloaded. It looks like a squarish image, which is also good. Right, so first things first, let's import that image, and to do that I'm going to use the Attribute from Map node. In order to use it, you need to have a grid before importing the image, so let's create a grid with enough points to show a crisp image. I'm not sure how many points I need; let's say 512 by 512, and the size of the grid is going to be one by one for ease of use. It could actually be any value, but based on the paper I'd like to use the same parameter values, so I'm going to keep the size at one by one. I'm also going to shift the grid so it aligns from (0, 0) to (1, 1) in the XZ plane, shifting the center by 0.5 and 0.5; yeah, like this. Now let's look at how many points we have: about 262,000 points (512 by 512), which I guess is enough. So let's import the image; let me first save the file so I don't lose anything, and I'm going to call it "video stippling". All right, now let's import the cat image, like this. It's kind of flipped, so let's do a UV flip, invert V; yeah, that should be fine. And yep, tons of information. Let's first retrieve the intensity value from this image. Currently I have R, G, and B applied as a color attribute, like this; now let's convert that into an intensity. In this case the intensity could be the lightness, and for lightness I think I can just convert the RGB value to HSV and take the V component, the value. Let's do that, and let's also hide the grid. All right, so I'm going to get the intensity: first I take the color value and convert it to HSV using a function called rgbtohsv, converting the color vector value. Now, if you just take the V, the third component of the HSV vector, you get the intensity; you effectively
have converted the color image into a grayscale image, and this grayscale value is going to be used as the intensity. Okay, now that I have the image prepared for the sampling, let's go back to step one and try to implement the base uniform sampling using this f_i, the positive charge. The charge itself is the one called q_s right here, and this is the positive value we need to choose; the paper shows different results for different values, and it looks like a higher positive value gives a higher-contrast image than a smaller value right here. Okay, let's see if that is so. Looking at these equations, they're pretty easy to implement: i means the current point number, j is the point you are looking at from this point, and i is not equal to j. For this we can use a Point Wrangle, and in a Point Wrangle, i becomes @ptnum: for each point we run this VEX process, so i is @ptnum. Now, for j, in order to access all the other scattered points, I realize I don't actually have scattered points yet, so let's create those too. Let's randomly scatter points, not using any image information, just from the original plane, and let's not relax them, just scatter them randomly on top of the plane; obviously you can set how many points you want to use here, so let's try 10,000 points. Okay, now I also think it's a good idea to have a node holding all the parameters we want to control later, so let's create that as well: an integer parameter for the point count, ptnum, from zero to, I don't know, a hundred thousand. All right, then for now let's make this 10,000.
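Before writing the wrangle, the brute-force charge sum can be sketched in Python to see what the force is supposed to produce. Again this is a NumPy sketch under my own naming, not the paper's reference code or the VEX we will write:

```python
import numpy as np

def repulsive_forces(pos, qs=0.3):
    """Brute-force O(n^2) version of the positive-charge force:
    f_i = qs^2 * sum over j != i of e_ji / |r_i - r_j|^2,
    where e_ji is the unit vector from point j to point i."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        diff = pos[i] - pos                     # r_i - r_j for every j
        dist2 = np.einsum('ij,ij->i', diff, diff)
        dist2[i] = np.inf                       # skip the j == i term
        dist = np.sqrt(dist2)
        # e_ji / |r_i - r_j|^2  equals  diff / |r_i - r_j|^3
        f[i] = (diff / (dist2 * dist)[:, None]).sum(axis=0)
    return qs * qs * f
```

For two points the force on each one points directly away from the other, which is the mutual repulsion that spreads the scatter into a uniform set.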
Let's link the parameter value here. Now, if we look at the points, it's a bit hard to see what's going on against this background, so let's make the background white by creating another grid that covers this plane right here. Maybe I can just scale the original plane by some factor; let's transform the original plane, maybe by two, and merge it with the points, so that with a white grid we'll be able to see the points a bit more clearly. Let's also set the pivot of the transform to the center of the plane, like that, and let's look from the top view. Currently the plane is a bit grayish, so let's go to the material and set the diffuse to white. Okay, now it's a bit easier to see what's going on; maybe I'll make the grid a bit bigger, five. All right, looks good; this looks like a good setup for visualizing the points. If you want to scale up the point radius, you can press the D key over the viewport to get the display options and change the point radius there; depending on the number of points you can reduce or increase the radius. All right, now let's bring in the Point Wrangle after the Scatter, and try to calculate the positive f value, the force, using the charge q_s. Okay, so let's see: q_s is the amount of charge carried by each particle, and this is a parameter you choose yourself; i is the point number, j is the other point's number, which is determined by the scattered points, n is the number of points you have in the space, and e_ji is the direction vector from j to i. That's pretty straightforward, so let's get started. First, set the q_s value yourself, a float probably from 0 to 1, I think; in the examples it shows something like 0.05 or 0.35, so maybe let's try 0.3 or something. All right, 0.3, and I think this q_s can
also be promoted to a parameter and used as a global value here, so let's do that: I'm going to drag and drop what I created right here, as qs, the positive charge value for the uniform sampling. All right, now based on this value let's try to calculate this one. First of all, as I said, if you brute-force the search over every point for every other point, it's going to be pretty slow, because for each point you have to look at the other 10,000 or 100,000 points. That's the reason why I wanted to reduce the number of points each point looks at, maybe using a function like nearpoints, but just for testing, I'm going to look at all the points as an initial test, to see what the result is supposed to look like. So the number of points we loop over is npoints, and in the for loop the loop index is the point number of the target point, which in this case is j, so we'd better call it j instead of i. It might also be easier to understand if we say i is equal to @ptnum, so it corresponds with the paper's notation right here. Okay, and we do have the condition that i is not equal to j, so let's add that condition: if i is not equal to j, that's where we calculate this term, the summand here, and we accumulate the sum of these terms over each j value. After we have built the sum, we can then multiply by the square of q_s, which is the positive charge value. So let's calculate what's inside the loop: first of all we have this unit vector going from r_j to r_i, and in order to calculate it we need the point positions for j and i, so let's get those. Position i is obviously the current @P
position, and position j is the point position read at point j. And e_ji is the direction going from position j to position i, so you calculate the subtraction, position i minus position j, and since this is a unit vector we need to normalize it. Then, after we've calculated this unit vector, we divide it by the squared distance between r_i and r_j; for that we can use the length2 function, just length2 of position i minus position j. Okay, now that we have calculated this term as a vector value, we accumulate it over the loop: let's make a value called fi outside the loop, starting from (0, 0, 0) as a vector, and at each loop iteration we add this e_ji term, so the result of this sum right here ends up in fi. Finally we multiply by the square of q_s, so fi multiplied by qs multiplied by qs, and that's that. Now that I have calculated this f_i, we can store it on the point as an attribute; let's just call it fi, so v@fi equals fi. If we look at this, we now have this value right here, which can be converted into a force value to move the points, to separate them uniformly, creating the blue noise sampling. Okay, so the next step is to use this f_i value together with the Verlet integrator, the numerical integrator explained right here. So let's do this. Since we have to update this at each frame, I'm going to do it in a Solver, and because we're using a Solver, the calculation we just wrote must also be done inside the Solver, because the point positions change every frame, so obviously the fi value also
changes, so we need to calculate it inside the Solver as well, at each frame. So let's create a Solver, connect the scattered points to it, and cut and paste what we wrote right here. And how do I name this? I don't know, stippling. Okay, now the path has changed, so let me update it, I guess one more. All right, so now I have the correct qs value and I have also calculated the fi value; it's time to calculate the actual force value in order to move the points in a specific direction. Okay, so let's go back to the paper again, maybe show this part, the numerical integrator part, and see how it should be done. First of all we need to update the position, then update the acceleration, and then update the velocity. The information we have is fi, and fi is actually the information for the acceleration value, which is this spot right here. Okay, so first of all I'm going to separate those functions into different wrangles for ease of visibility: I'm going to create this position-update node beforehand, like this, and then for the acceleration update I think we can just do it inside the stippling wrangle, and the velocity update can also be done in the same wrangle, so let's do it like this, and let's try to do the stippling part first, right here. So I do have this fi value created as a force, and the next thing I'd like to do is actually use it with these equations; let's see where it should go. Now, actually, this fi is based on the current point position right here; if we look at this, it is computed from x at t plus delta t, which is the updated point position, explained right here. So in the first step, the position update, we are updating the point position, which in Houdini we call @P, using the previous point position together with
some velocity and acceleration values; then we use the updated point position, which in this case is this one, to create the acceleration value, which here is called fi. So we can actually write vector newacc equals fi, and with that we're done with this second step. The next part is where we update the current velocity using the previous acceleration, which is not this one but the acceleration calculated previously, and because of that you need to keep that information in an attribute, as a previous acceleration value. So let's create the attributes we're going to need; I'm going to use an Attribute Create node, and all of them should be vector values, for the velocity and the acceleration. First, let's say the current acceleration value is called acc, which is applied per point and is a vector; and let's create another one called velocity, which is also point class and a vector value. Now, I probably don't need a separate previous-acceleration attribute, because I can just work with the current acc value here, so let's just have those two attributes, velocity and acceleration, and I guess that's all we need here. Right, so going back, let's look at the geometry spreadsheet and make sure you have the two attributes, acc and velocity. Then, going back to the stippling wrangle, I'm going to create a comment: calculate acceleration, and after that, calculate velocity. All right, I might not need the fi attribute after all. So now I have this newacc, and I know that the previous acceleration can be accessed through v@acc, and the previous velocity through v@velocity. In the equations, the term with subscript t is the previous value, and the one with
t plus delta t is the new, next-step value. newacc is the acceleration at t plus delta t, a_t is the previous acceleration value, and v_t is the previous velocity value in this case. We also have a condition here, the min: the length of the whole vector value should be clamped with this value here, d, the maximum displacement, which limits how much a point can move each frame. And the paper says that for a unit-size domain, in this case width equal to one and height equal to one, the value d = 0.002 works best, and s is the damping value, where 0.95 works best, which is this one. So let's keep that in mind and calculate this velocity here. First of all, we need s and d, so let's add them as parameters: s is the damping, and d is the maximum displacement, meaning the maximum distance a point can move. All right, so let's keep those as parameters; in this case s is 0.95 and d is 0.002, like this. Okay, now I think I have enough information to calculate this function here. Let's write it: vector vel equals the previous velocity multiplied by s, plus, I should start with the vector, the previous acceleration plus the new acceleration value which I've already calculated, which is this one, in parentheses, multiplied by 0.5, multiplied by dt; dt is the time step in this case, so we also need a time-step value. All right, now I might need to apply the time step to the acceleration as well; I haven't done it, maybe I should, let's think about that later. Okay, so I have dt, a time-step value of 0.01 for now, and
with that in hand, the next thing I'd like to do, now that I have calculated this velocity, is clamp its length with the displacement value. In order to clamp it with the min function, what I'm going to do is first normalize the velocity, then get the length of the velocity, and use the min function between that length and d divided by delta t. All right, that should do it. Okay, and now that I have a new velocity, I can update the velocity attribute with this new one, and also update the acceleration attribute with the new acceleration value. All right, that's just that. So now I have updated the two attributes, velocity and acceleration, and it's time to update the point position as well, going back to the previous node to do the similar calculation right here. In order to update the point position, we take the current point position, add the current velocity and the current acceleration multiplied by delta t, and clamp it with the same displacement value; so I'm going to create another d parameter here and just link it to the one we already have, so we use the same value. Other than that, we can just calculate it: I'm going to create a new vector called newpos, which is, wait a minute, maybe first of all let's calculate this part first. Okay, so let's call that addpos, which is the current velocity multiplied by dt; obviously we need a delta t time-step value here as well, so let's link that too, copying this delta t to this one; then plus the current acceleration multiplied by 0.5, multiplied by dt, by dt. Okay, so this is this equation right here. Now we need to clamp its length using the min function, just like we did for the velocity: addpos equals, first of all, normalize addpos, multiply by the length of addpos, and use the min function to clamp the size
Using d as the limit, and that's it. Finally we add the step to the current point position to make the new position: newpos = current position + addpos, and we update the point position with this new value. That should do the whole job, so let's see what it creates. Right now I'm brute-forcing it, computing the distance between every pair of points, so it's going to be really slow to calculate, but let's see whether it works at all. If I play it: okay, the simulation is slow and I'm seeing some caching, so let me restart from here. As you can see, the spacing between points is being evened out, but I see one problem: the cloud keeps inflating, it isn't confined to the boundary, so we need to think about how to handle the boundary as well. Apart from that, the effect seems to work as intended, if a bit slow, so for a first try it looks good. Let's fix the boundary issue first and check whether the paper says anything about it; I think it does somewhere, maybe if you search for "periodic". There it is: the process is simulated in a periodic domain, meaning that when a point passes the boundary, it re-enters from the opposite side. So if this square is the boundary and a point exits through the bottom edge, it reappears at the top edge, still travelling downward; if it exits to the right, it comes back in from the left, still travelling right. That's what the periodic domain means, I assume. Let's implement it; it's not hard, you just need to move the point.
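Before handling the wrap, the clamped position step just described, again as a Python sketch with my own naming (the VEX version uses normalize() and min() the same way):

```python
import math

def step_position(p, v, a, d=0.002, dt=0.01):
    """Move a point one step: p += clamp(v*dt + 0.5*a*dt^2, max length d).

    p, v, a are 2D tuples; d is the maximum displacement per step.
    """
    dx = v[0] * dt + 0.5 * a[0] * dt * dt
    dy = v[1] * dt + 0.5 * a[1] * dt * dt
    step = math.hypot(dx, dy)
    if step > d:  # clamp the step length to d, keeping its direction
        dx, dy = dx / step * d, dy / step * d
    return (p[0] + dx, p[1] + dy)
```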
Once the point passes the boundary, we move it, so let's write a simple boundary function in a point wrangle. I'm also going to add a small offset outside the boundary for the moment a point jumps from one side to the other, because wrapping exactly at the boundary edge can give a little flickering around the edges; the offset avoids that. So, a float parameter called offset, a very small number, say 0.05. Since I know this plane is one by one for both width and height, it's easy to reason about how the wrap should work, and I'll store the size as s = 1. Now, if @P.x is less than -offset, move the point to the right side by adding s + offset * 2: if the point leaves through the left, with the boundary at -0.05, we shift it by the full size plus twice the offset, so it reappears just outside the right edge. Then do the same for right to left, bottom to top, and so on; we need four conditions, so I'll copy this four times. If x passes the right side, meaning it exceeds s + offset, subtract s + offset * 2. We also need the z direction, so I can copy these parts and replace x with z, and that should be everything. Let's rerun the simulation, and hopefully the points no longer escape the boundary. As I said, you do see a little flickering near the border, and that's exactly why I set that offset value.
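The four wrap conditions can be condensed into a small sketch; this is Python with hypothetical names, mirroring the wrangle logic:

```python
def wrap_point(p, size=1.0, offset=0.05):
    """Periodic-domain wrap: if a point leaves the [0, size] square by more
    than `offset`, shift it by size + 2*offset so it reappears at the
    mirrored offset position outside the opposite edge.
    """
    x, z = p
    shift = size + offset * 2.0
    if x < -offset:
        x += shift
    elif x > size + offset:
        x -= shift
    if z < -offset:
        z += shift
    elif z > size + offset:
        z -= shift
    return (x, z)
```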
That lets us clamp things around the boundary afterwards: right after leaving the solver, I can use another point wrangle to trim the stray points. With the size s = 1 again: if @P.x > s, or @P.x < 0, or @P.z > s, or @P.z < 0, delete the point. Do we see the trimming? Yes, though I still see some caching issues; this happens, so let me reopen the file. Now the border has been trimmed by the offset, and you get a much cleaner view of the point distribution. That's fine; the only problem left is the calculation speed, which is slow, obviously, because we're doing an n by n brute-force calculation with no GPU-style optimization at all. So, to make it faster, I'm going to update the acceleration function a little right here. One thing I want to test first (let me move those values over here): what happens if I multiply dt into this new acceleration value? I don't think it really changes the result, and it's too slow to check right now, so I'll leave it out and check later; remind me if you remember. To make it faster, I'll keep this version as a reference and, instead of looking at every point, reduce the number of points used for sampling. Because we're not sampling all the points, the result won't be as accurate as the original solution, but I just want to test, and visually it doesn't look bad, so for my case it's fine. First things first: where can I make the change? Back to where fi is computed.
The fi I calculated here is the problem; this is what makes the whole calculation so slow, because we're sampling every other point. If we reduce the number of sampled points, we obviously speed up the process. The function I'll use for that is nearpoints(): int npts[] = nearpoints(...). nearpoints() gathers point numbers around a search position, constrained by a search radius and a maximum point count. I could set both, but controlling just the maximum point count is the easier way to manage speed: on a dense plane, even a reduced radius can still catch tons of points and keep the calculation too slow. So I'll focus mainly on the maxpoints parameter and reduce the total number of points to sample. The more points you sample, the more accurate the result; the fewer, the faster. Let's see the difference: nearpoints(0, @P, radius, maxpts). For the search radius we can just pass a big number like 1000 or 100; since the plane size is 1, even 10 is fine. Then the maximum point count, which I'll expose as a parameter, starting with maybe 1000, so each point gathers at most 1000 neighbours. Now I just want to swap this part over to the new point list, so I'll replace it with npts.
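For reference, here is a rough Python stand-in for what nearpoints() is doing here; this is my approximation of its behaviour, not Houdini's actual implementation (note that, like nearpoints(), it returns the query point itself if it is in the list):

```python
import math

def nearest_points(points, pos, radius, maxpts):
    """Return the indices of up to `maxpts` points within `radius`
    of `pos`, nearest first (a stand-in for VEX nearpoints())."""
    dists = [(math.dist(pos, q), i) for i, q in enumerate(points)]
    dists = [(d, i) for d, i in dists if d <= radius]
    dists.sort()
    return [i for _, i in dists[:maxpts]]
```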
The loop count becomes the length of npts, so I should change the iterations to that, and I'll use j as an index rather than a point number. The i != j condition: actually, that's fine as is with j. What I want here is to get point number npts[j] by indexing into the array. All the following code stays the same; I only changed this part, and the other loop can be used as it is. Let's see the result: still similar, and I can't tell if it's faster, not by much anyway. Let me reduce the count to 100 and see if that speeds things up. Now it's clearly faster than before, so 100 might be a good number for quick tests, and the result isn't bad, I think. Let's also try more scatter points, say 50,000: the speed is still tolerable. The only problem I can see is that it's hard to converge with a small sample count; the points keep moving. One reason is that each frame a point jitters back and forth even when it has effectively converged, so a good idea would be to stop moving points once they've converged, and keep the substep at 2, so that converged points come to rest. Looking at it, some points are still moving, and maybe the d displacement value is too big here. I think if you sample all the points it converges to a really stable state, but obviously the calculation gets much slower as you increase the sample count, which I guess is exactly why you want a GPU for an accurate and fast implementation. This is just a quick and dirty version I built in Houdini because I wanted
to try it out. But for understanding how the algorithm works, it's quite enough for me, and it's pretty interesting. It's a bit hard to tell whether it has converged, so I'll keep this at 200 and try reducing the d value, say to 0.0015; with this many points the displacement can be smaller, I guess. Still moving, but not a bad result. I could also try multiplying in that dt I forgot to test earlier; it doesn't really seem to do anything, so forget about it. In any case, I can see it's producing blue noise, so the initial goal, a uniform sampling, is done, and even with this rather dirty speed-up it still kind of works, so let's say it's okay. Let's drop the point count back to 10,000: yes, this one looks more converged. Not a bad result. Now that step one is done, it's time for step two, using the actual image for the adaptive sampling. For that we need to calculate this qk value, where qk is a negative charge, then calculate the new acceleration for each particle; and the total amount of negative charge and the total amount of positive charge must sum to zero. By enforcing that, you get the finalized vector that moves each point in the right direction toward the converged state. So let's do it, shall we. Before that, I'll move all the parameters I've set (s, d, dt) onto the controller: open the parameter editor, go back into the solver, and drag and drop the s value, d value, dt value, and maxptnum.
Let's make everything lowercase; actually, capitals are fine here. Let's also link this one: I'll copy and paste the reference right here... wait a minute, this one is dt, and d is the one that links to this. It might also be a good idea to add a toggle to switch between the brute-force way and the dirty, faster way, so let's add a fast-mode switch. For movie stippling, or video stippling, I'll obviously use fast mode, because I can't wait for the slower implementation to finish converging; if you want the full, accurate implementation, you'd go to the GPU or OpenCL, I guess. That's a reminder. Now for the adaptive sampling. Everything is quite similar. First we calculate this qk from the intensity of the image together with the a value; as I said, a is a coefficient determined by the total charges: the sum of the total positive charge and the total negative charge must equal zero, and from that constraint we can work out what a should be. We can do this inside the stippling node. I'll build it in the fast version first and move it back to the slower one later. We're going to compute an additional acceleration called gi; previously we computed fi here, so let's say fi is the uniform part and gi is the adaptive part. First things first, we need this qk, which comes from the intensity i, and we don't know a yet, so let's start with the intensity term. Where is the intensity? It's carried on this plane, the cat image, so I'm going to use this as a
reference for the solver. I'll connect it to the second input of the solver, and we need to check that the image overlays the scattered points; it does, so we can use it. Now, looking at these equations right here, ik is the intensity of each pixel of the image grid. Which attribute do we get that from, Cd? We don't have it as a float yet, so: float i equals the intensity taken from the hsv value here. So we have an intensity from 0 to 1: equal to one means white, equal to zero means black. The whiter the pixel, the closer this expression gets to zero, so the weaker the negative charge and the more the points separate; the closer to black, the higher the negative value, meaning a stronger attraction, a cohesion force. So you get higher point density where the intensity is near black, and lower density where it's near white; that's what I assume, and we'll see. For the other symbols: ri is the current position of a scattered point, and rk is the position of each point on the grid. If you display the edges, these are the points we're talking about: every grid point carries an intensity, indexed by k. So here we have far more points to loop over than in the uniform sampling, about 262,000 grid points, and looping over all of them for every ri is just too much again if you want to do it on a CPU. So we're going to reduce the number of sampled points again for this adaptive part; call it a fake, or a really dirty way, to make it faster.
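That mapping from intensity to negative charge, before the a coefficient comes in, is just the negated inverse intensity; restated in Python (the function name is mine):

```python
def temp_charge(intensity):
    """Per-pixel negative charge before the global coefficient `a` is
    known: -(1 - I_k). Black pixels (I=0) carry the strongest attracting
    charge (-1); white pixels (I=1) carry no charge."""
    return -(1.0 - intensity)
```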
Still, it seems to work at some level, so I'm going to do it that way. Everything else is similar to what we did for fi: we have a unit vector from k to i, we have the qk I set up here, and we reuse the same qs we've been using right here. So let's do it. Going back to the solver, first calculate this term: float qk, ignoring the a factor for now, is -(1.0 - i). To get the intensity we'd need to access the grid, but actually, since qk is defined per pixel of the grid, I don't really need to do it inside the solver at all; I can do it outside, where I already compute the intensity. So there: float tempqk = -(1 - i), just the intensity term, so we don't need that line in the solver. And let's add a comment, one line is fine, to indicate which part of the paper this refers to. So that is the temporary qk, the charge without the a factor, already negated. Now, back in the solver, the first thing I need to do is calculate this a value, and for that we first need the total positive charge qs over the sampled scatter points. Going back to where we calculated fi: I'm only sampling a limited number of points, 100 in this case, through this parameter, which means there are 100 positively charged points in this calculation, each using the same qs. So the total positive charge for this specific point is simply the maximum point number multiplied by qs; that's easy: float totalqs = maxptnum * qs. One caveat, though, about nearpoints:
nearpoints also includes the queried point itself, so there is always an i inside npts. So it's a good idea to subtract one from the maximum number: the actual count of points used in the calculation, given that condition right here, is maxptnum - 1. If maxptnum is 100, then 99 points actually contribute, so the actual total positive charge is 99 * qs. Now, to get a, the calculation looks like this: totalqs, and totalqk starting from zero. To compute totalqk we need to define how many points to sample from the grid image, and we don't know that yet, so we'll do the same thing as before with nearpoints. Let's set this aside for a moment and go to the next step: sampling the points we'll use for the negative charge, the adaptive part. So: int apts[] (I suppose that's how I'll rename it) = nearpoints(1, ...), this time searching the second input, because I'm going to connect the grid image, the cat image, to the second input, and that's where I'll pull the information I need, like the intensity for qk. Search from the current point position, set the maximum search radius to a big number, and use the same maximum point count, maybe via a shared variable. You could use a different count for the adaptive sampling than for the uniform sampling; for easy testing I'll just use the same value, but you can change the two counts independently if you want to see the result
differences in terms of quality. Okay, I'm going to replace this with the variable; there. So, where were we: I gather all the sampled grid points like this, and since I'm using the same maximum count, 100 in this case, I'll probably get the same 100 entries out of apts. Now the loop. I'll copy all of this for convenience and add some spacing so it's easier to read, then rename some of the variables: we're iterating over apts, and what was j we now call k, so int k = apts[index]. The i != k condition can go; there is no chance that i equals k, because we're looking at a different input. Once we have this k value, the grid point number, we can try to fetch the intensity term right here; I've already computed -(1 - ik) as tempqk, so: float tempqk = point(1, "tempqk", k), read from the second input at k. Now that I have it, I can accumulate the total qk right here: float totaltqk starts at zero, and each loop iteration adds tempqk into it. Maybe I should call it ttqk, total temporary qk; the naming is getting a bit confusing. For the other part, we could compute the term without the final qk factor, or rather, we can just use tempqk in place of the actual qk, because a is a constant for this specific point and can be multiplied in afterwards. So forget about a for now and do this part first: position i is fine; position k we read from the second input with point number k. What we're going to calculate is, first of all, the unit vector from k to i (so this becomes k), and then
we divide by the squared distance between ri and rk: position i minus position k, over the squared length; right. Finally we multiply by this qk; for now I just multiply by tempqk, without the a value, since we don't know it yet. We add that into gi, and we don't have this gi yet, so create a vector gi starting at {0,0,0}. After we leave the loop, we do the same as we did for fi and multiply by the square of qs, like this one: gi multiplied by qs and qs. So with this calculation we have computed two main things: gi, which is still missing the a value, and totaltqk, the total sum of the -(1 - ik) terms. Going back to the sketch, we can now estimate what a is from all the information we have. All the positive charge has already been computed as totalqs, a positive value, and at each loop iteration I've accumulated -(1.0 - ik) into totaltqk. The condition is that totalqs + totaltqk * a must equal zero, so a in this case becomes -totalqs / totaltqk. As a result you get this a value, and then finally you get the true gi by multiplying a into the temporary gi, completing the equation. It sounds confusing, but this should work. So, what was the expression: float a = -totalqs / totaltqk (I really have named these badly), and finally we multiply a into gi, and hopefully that gives the result. Let's see if this is really going to work.
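Putting the adaptive part together, here is my reading of it as a Python sketch: sum the temporary charges and the distance-weighted vectors over the sampled grid points, solve totalqs + a * totaltqk = 0 for a, then scale. All names are mine, and this mirrors my reduced-sampling version, not the paper's GPU code:

```python
import math

def adaptive_force(pi, grid_pts, intensities, n_sampled, qs=1.0):
    """Image-driven force on a point at `pi`, with the charge balance:
    negative pixel charges q_k = -a * (1 - I_k) are scaled by `a` so the
    total positive and negative charge cancel exactly.

    grid_pts / intensities: sampled grid positions and their intensities.
    n_sampled: how many positive (scatter) charges were used for fi.
    Assumes at least one sampled pixel is not pure white."""
    total_qs = (n_sampled - 1) * qs          # total positive charge
    total_tqk = 0.0                          # sum of -(1 - I_k)
    gx = gy = 0.0
    for (kx, ky), ik in zip(grid_pts, intensities):
        tempqk = -(1.0 - ik)                 # charge without `a`
        total_tqk += tempqk
        dx, dy = pi[0] - kx, pi[1] - ky      # vector from k to i
        r2 = dx * dx + dy * dy
        if r2 == 0.0:                        # skip coincident points
            continue
        r = math.sqrt(r2)
        gx += dx / r / r2 * tempqk           # unit vector / r^2, weighted
        gy += dy / r / r2 * tempqk
    a = -total_qs / total_tqk                # balance: Qs + a*sum(tempqk) = 0
    return (gx * qs * qs * a, gy * qs * qs * a)
```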
Alright, so now that I have gi, I can go back to where I calculated newacc, and this part is easy: I just add gi as an additional force, and that's pretty much it. Let's see whether it works; fingers crossed. And there you go. The convergence seems a bit slow, though; maybe the d value is a bit too small here, maybe the point count isn't enough, maybe I'm missing something. If I increase qs... ah, maybe I was just looking at the cache; that was probably the reason. The result doesn't look bad: even sampling only the closest hundred points, it still gives you the outline of the cat. Maybe I can increase the point radius a little, and the point count as well; maybe the radius was a bit too high. And there you go, now it clearly looks like a cat. It's still not that fast with fifty thousand points, but it converges pretty quickly. I do see a little flickering, because of the low sampling count and maybe a slightly too big displacement, I'm not sure; maybe that's the only reason it flickers. But it gives a rather good result, not bad at all. So it works on a static image; now it's time to test it on a sequence of images and see how it behaves. Because this is a physics-based simulation (we're just moving points according to charges), it should really suit animation: if the animation we use is smooth, this should work pretty well. Let's reduce the count to 30,000; this seems to converge the point positions better in the flat, single-coloured areas. We could also play with qs, and with the d value: at 0.0015 the points no longer move as much, because we're reducing the point displacement
distance. But then some points never reach the right position, they just stop at some point, which gives a blurrier result; probably not what we want, so keeping d at 0.002 is still a good idea. Paul has a comment: "Is it possible to do this type of calculation in OpenCL instead of the VEX implementation?" Yes, it's totally possible in OpenCL; the paper itself is intended for the GPU, so it can certainly be done in OpenCL too, and it would be much faster. If you know how to write OpenCL, I'd just recommend doing that instead of VEX. As I said, this is a really dirty implementation I made because I wanted it faster inside VEX, and it's not really necessary to do it in VEX; it can be ported directly to OpenCL for sure. Okay, let's test this on a video, a sequence of images from a video. The clip I'll use is a water-drop video I downloaded from Envato. Any clip should be fine, I think, but this one has strong contrast between white and black, so I thought it would make a good test image. To use it inside Houdini we need to convert it into a sequence of images, and I guess there are tons of ways to do that; I'm going to use ffmpeg to extract the images from the mp4. To do that, first I go to the directory, then there should be an ffmpeg command for extracting images; let me search for it in Google. Here it is: ffmpeg -i, the name of the movie, then the output frame pattern. So, ffmpeg -i, the water-drop mp4 (I forgot the double quotes, and I'm not sure if I need them), and for the output name let's create a frames folder.
I might need to create the empty frames folder first. So the output pattern is frames/out%03d.jpg. How's that? Alright, it has now extracted the video into an image sequence, 277 frames in total, starting from 1. So let's read those textures. I'll ignore the aspect ratio of the image and just scale it to a square, because trimming it properly would be time-consuming. Going back to the attribute-from-map setup and looking at the image right here: let me duplicate this node, point it at a single frame first, and then replace the frame number with a frame-based expression, which can be done with the padzero function in backticks: padzero(3, $F), so frame 1 becomes 001, 2 becomes 002, 13 becomes 013, and so on. Let's check it; alright, I can get all the images up to 277. Now it's time to actually see how the points change with this frame sequence, and whether I need to do anything extra. If I play it: I do see a little rippling here somehow, maybe from small contrast differences in the actual footage. If I look closely, the points are somewhat following the underlying image; let's play from right here. It doesn't look bad. Maybe I can control the contrast of the image a little because of that faint underlying ripple, but the result looks quite good. This part isn't converging that much, though, which I guess is the contrast issue I was mentioning, so let me go back and tweak the intensity value a little, clamping the maximum and the minimum just a bit. Everything looks white, but if you look at the actual values, some of them aren't quite clamped to one, like these, and those small differences are making the little ripple effect, I guess.
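As an aside, the zero-padded frame path built with padzero above behaves like this little Python helper; the folder and prefix come from my extraction step, so treat the names as placeholders:

```python
def frame_path(frame, folder="frames", prefix="out", pad=3, ext="jpg"):
    """Build a zero-padded frame filename, like Houdini's
    `padzero(3, $F)` expression: frame 1 -> 'frames/out001.jpg'."""
    return f"{folder}/{prefix}{frame:0{pad}d}.{ext}"
```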
Maybe if I could sample all the points, rather than just the closest hundred, I'd reach a more converged state, but that's too slow for my dirty implementation. So instead, to reduce the ripple, I'll clamp the contrast a little: where we compute tempqk, or rather the intensity, float i becomes the hsv value clamped between a minimum and a maximum and then remapped back to the 0 to 1 range; or I can just use fit() as is, which I guess is faster than clamp plus remap. So, fit(), like this. I don't really care about the black end, but let's make the maximum 0.95 so the white part gets clamped, then replace this with i and look at the result... and everything is zero; am I doing something wrong? Wait, no, that's correct, because I'm subtracting from one. Okay, let's see what happens. Now everything looks more uniform and the result is crisper, with a little more contrast. It's still not that fast even though I'm only sampling 100 points, but it's not too bad, and the result also looks good enough. Okay, let's wait a while and cache out to the last frame to see how it looks as a smooth animation. While it's caching, maybe I can try out other clips, like this one. A question from chat: "What Mac CPU are you running this on?" Well, I'm using an M1 Mac mini, so not that fast, obviously. Alright, so this is the result, this is what we get out of this simulation; not bad. It's kind of taking time, maybe because of these caches; this happens with Houdini on my Mac if I keep it open, so let me reopen it. But the result looks pretty nice, pretty nice. So I assume the way I implemented it is not too far from the original solution; I mean, I am skipping a lot of stuff, all the important GPU optimization steps.
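One more aside before wrapping this test up: the fit() remap used for the intensity clamp can be restated in Python like this (Houdini's fit() clamps to the source range before remapping; the names are mine):

```python
def fit(value, omin, omax, nmin=0.0, nmax=1.0):
    """Houdini-style fit(): clamp `value` to [omin, omax], then remap
    that range linearly onto [nmin, nmax]."""
    t = (min(max(value, omin), omax) - omin) / (omax - omin)
    return nmin + t * (nmax - nmin)
```

With a maximum of 0.95, any pixel value at or above 0.95 maps to exactly 1, which is what removes the faint ripple in the near-white areas.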
Anyway, at least it seems to work in terms of the visuals, and I know the limitations of my dirty implementation. And as I keep saying: if you want the full, accurate result, you should go back to the paper and implement it on the GPU, in OpenCL or some other parallel processing, rather than on the CPU. But so far so good. Let's try it with this dog picture I downloaded and see whether it works as well. Replace the image, and there you go. It's kind of squeezed, so it became a different type of dog, but it still looks like a dog. The white fringe along the outlines is probably because I reduced the sampling count; from far away it may not be that obvious, or maybe it is, I'm not sure. But so far so good; I just wanted to test how this works, and I'm pretty satisfied with what I got. Let me know what you think, and let me know if you have any idea how to make this more efficient, faster. I'd like to try an OpenCL implementation later as well; that's one thing left to try for this setup. For a pure GPU version I guess you'd go through shaders, but then you no longer have access to the point data as geometry, which might be a downside: in Houdini you do want access to that geometric information, because you can apply all the other attributes to those points as well, which gives you many more possibilities on top of the stippling. Right, so that's about it. If you have any comments or questions, let me know. What I've done here is just try to follow what this paper explains, so I can't really tell you why it works in this specific way, but it looks pretty straightforward: this fi is creating a
repulsion force to keep the distance between points, capped by the maximum displacement value, and g_i controls the cohesion force by changing the amount of attraction based on the sampled image intensity. The key point of this calculation is to keep the total amount of positive charge and the total amount of negative charge summing to zero. I think that's the most important part to keep in mind, because previously I forgot about it: I tried setting this value as a constant for all the points, and it didn't really work. It just gave a really blurred image, and I thought that was the correct image, but as soon as I changed the equations so that the positive and negative charges summed to zero in total, the image became much crisper. That's the key of these equations, I think. Hope it makes sense. And with this information, I think you can pretty easily create the 2.5D image stippling together with a depth image: you just need to update the adaptive g_i sampling force with an additional epsilon value, which... what does the paper say? I didn't read this far. So, epsilon maintains the minimum distance between particles... no, where does it say... it displaces the whole density field towards the sampling plane. So the depth field is used to control the focus distance, I guess. If you want something in focus, my guess is you keep the distance between the imaginary plane and the underlying image plane the same; if you want it blurred out, you change the height, the position of this plane in the z direction, I mean the y direction. Meaning, in this case, by just moving this
image upward you make the distance from the original plane larger, which blurs the whole image out a bit, I think. Let's try that out. If I translate it... maybe that's too much. If I make it 0.25 and use that to calculate the distance, something might change. Let's see how it changes. Now everything seems blurred out; maybe that's too strong. Let's keep the distance only slightly upward, 0.05. Now I can see a little bit of silhouette, and compared to the last view it does seem blurred out, and you have more contrast on the chair part. Interesting. And I guess using a height field means controlling the height per point: at closer distances you get a crisper image, at farther distances a more diffuse image. That's what I assume; I didn't really try it, so just test that out for yourself. Having a little bit of distance makes the result a bit smoother, I guess, so translating a little upward might give a slightly better result. [Music] Not sure, maybe a little bit more. I really have to play around with these parameters to see what works in what way; all those assumptions are just my assumptions, so I'm not really sure. And if I increase this d value... yeah, I get more convergence. Oh, that looks better. All right. Well, I hope you found something new here; I did learn new things by implementing this one. All the credit goes to Kin-Ming Wong and Tien-Tsin Wong for sure. This is a really exciting paper to look at, and it was pretty interesting; I really enjoyed implementing it myself, and I hope you did as well. Okay, so after this live I am going to upload what I have done to GitHub as always, so anybody can download the file for reference and play around with it yourself, and maybe convert it into an OpenCL version. If you do that, let me know, or give a pull request on GitHub; that would
be great. Okay, so that's it. Any questions? There is a comment: "this reminds me a little bit of the way to extract a 3D mesh from several images with different depths of field." Ah, that's true, it's interesting. I think that's called... surface reconstruction? From several images with different depths of field; I'm not sure what it's called, but that might be a similar thing, not sure. Okay, if you don't have any other comments I would like to end the live stream here. Thank you everybody for watching, and thank you for all the comments. Maybe this was too mathematical for today; hopefully you found it interesting in some way. I do kind of enjoy this kind of paper implementation myself, and I hope you do as well, but I guess it's only fun when you actually read the paper and implement the equations by yourself. So I think it's better if you try implementing it yourself first, without looking at the file, and then come back to the file to see the differences between what I have implemented and what you have implemented from the same paper. Maybe your method will be different from mine; that's also interesting. All right, thank you very much. Hopefully what's happening here is correct. [Music] Let me try without the... I think maybe I can parametrize this value as well in terms of the height, so let's give the image a height from zero to one. Maybe one might be too big, but like this, and with this one we can switch between the images, just like that. Okay, any other parameters that I forgot to promote? Oops, forgot to set the boundary. And I think I've got every parameter set. Okay, that looks good, and if I change the height a little bit, interestingly it converges faster. Oh, interesting, nice. All right, a comment: "for me the hard part is having enough knowledge to properly read the equations." That's true, that's true. I mean, of course, in some papers the equations are written in different
formats, so it is always the hard part to properly read the paper. It is hard; I always get lost reading those equations. In this case it was pretty straightforward, so that was good, but in some cases I do get lost a lot. This paper is pretty straightforward and well explained; it's really a good one. Okay, thank you all, and hopefully you enjoyed it. As I said, after the streaming I am going to paste the URL to this HIP file in the video description page of this live stream on YouTube, for you to download. I'm also doing the Patreon, so if you're interested in supporting me, I appreciate that as well. Okay, that's it. Thank you all for watching and good night. As soon as I finish the live I'm going to post this file. One thing to mention: I am not going to include the image file that I've downloaded, because it is copyright protected, so you just need to download whichever image you want to use yourself. That's it. Thank you and good night.
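For reference, the force model discussed in the stream (a repulsion term f_i between points, an attraction term g_i weighted by image intensity, and a net charge of zero with the step length capped by a maximum displacement) can be sketched roughly as follows. This is not the paper's exact GPU formulation or the stream's VEX, just a minimal Python paraphrase with assumed scaling; the function name and parameters are my own.

```python
import numpy as np

def stipple_step(pts, img_pos, img_w, d_max=0.01, eps=1e-6):
    """One relaxation step of a charge-based stippling sketch.

    pts     : (n, 2) stipple point positions, each carrying charge -1/n
              (total negative charge = -1)
    img_pos : (m, 2) sample positions on the image plane
    img_w   : (m,) intensity weights, normalized to sum to 1
              (total positive charge = +1, so the net charge is zero)
    d_max   : cap on the per-step displacement (the "maximum displacement")
    """
    n = len(pts)
    # g_i: attraction toward the image samples, weighted by intensity
    diff = img_pos[None, :, :] - pts[:, None, :]            # (n, m, 2)
    dist = np.linalg.norm(diff, axis=2) + eps               # (n, m)
    attract = (img_w[None, :, None] * diff / dist[:, :, None] ** 2).sum(axis=1)
    # f_i: pairwise repulsion between the points themselves
    pdiff = pts[:, None, :] - pts[None, :, :]               # (n, n, 2)
    pdist = np.linalg.norm(pdiff, axis=2) + eps
    np.fill_diagonal(pdist, np.inf)                         # ignore self-pairs
    repel = (pdiff / pdist[:, :, None] ** 2).sum(axis=1) / n
    # combine the two forces and clamp the step length to d_max
    step = attract + repel
    norm = np.linalg.norm(step, axis=1, keepdims=True)
    step = np.where(norm > d_max, step / norm * d_max, step)
    return pts + step
```

Iterating this step from a random point cloud is what gradually pulls points into dark regions while the repulsion keeps them evenly spaced; the zero-net-charge normalization of `img_w` is what the stream identifies as the difference between a blurry and a crisp result.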
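The 2.5D part of the discussion, where distance between the image plane and the sampling plane controls how blurred each region looks, could be prototyped with something like the sketch below. This is only my reading of what the stream describes (it is not the paper's epsilon term): samples far from the focus plane have their intensity weights blended toward uniform, which washes out their contribution, while in-focus samples keep their contrast. All names and the blending scheme are assumptions.

```python
import numpy as np

def defocus_weights(img_w, depth, plane_y, strength=1.0):
    """Flatten intensity weights for samples far from a focus plane.

    img_w   : (m,) normalized intensity weights (sum to 1)
    depth   : (m,) per-sample depth/height values
    plane_y : height of the sampling/focus plane (the stream's 0.05, 0.25...)
    strength: how strongly distance from the plane flattens the weights
    """
    d = np.abs(depth - plane_y)              # distance from the focus plane
    t = np.clip(strength * d, 0.0, 1.0)      # 0 = in focus, 1 = fully blurred
    # blend each weight toward the uniform value 1/m as distance grows
    w = (1.0 - t) * img_w + t / len(img_w)
    # renormalize so the total positive charge stays 1 (net charge zero)
    return w / w.sum()
```

Feeding these adjusted weights into the relaxation instead of the raw intensities would mimic the effect seen in the stream: moving the plane (changing `plane_y`) shifts which depth range stays crisp and which washes out.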
Info
Channel: Junichiro Horikawa
Views: 1,874
Rating: 5 out of 5
Keywords: houdini, sidefx, live, tutorial, procedural, procedural modeling, parametric modeling, parametric design, parametric, fabrication, digital design, computational design, 3d, design, isosurface, lattice, structure, 3d modeling, modeling, computational, generative, line drawing, drawing, illustration, fractal, reaction diffusion, cellular automata, simulation, trail, particle, vfx, mitosis, magnetic field, field, volume, rendering, computer graphics, visualization, algorithm, motion graphics, graphics, remesh, quad
Id: E6vuMfdYJnI
Length: 145min 32sec (8732 seconds)
Published: Sat Sep 25 2021