Introducing REGL - Mikola Lysenko

Captions
Hi. Let me try to get a power cable here. So... yeah. I'm going to be talking about REGL, which is a functional abstraction over WebGL to make it easier to write modular, reusable 3D engines and visualization tools. But first I'll say a little bit about myself, just to get it out of the way. I live in Hawaii. I live off-grid. And for the last five months I have been living on an active volcanic lava flow trying to survive. If you have seen my commit history drop off, that's why. It's getting under control, so I'm getting back on top of things. So anyway, a little bit about REGL. These are examples that people made. I initially put REGL out a year ago, and since then people have adopted and used it in real systems. So first of all, a couple of things people made with it. This is our cooperative website. I should point that out first. So this is a little creative project that's basically using REGL. You can see this here. I don't want to go too much into it, but all of the effects are in WebGL. This is a CAD company that does finite element analysis and runs in your browser, more or less. And this also uses REGL for the visualization. Ricky has written a bunch of different demos using REGL. This is a real-time GPU simulation of erosion on some terrain. And another one he did: some kind of n-body simulation with like a million particles, and they're all kind of coming together and making galaxies and stuff. A creative sketch. Another one, which is a bunch of particles flowing around in 3D. This is one by Greg. I don't know exactly what these are. They're kind of terrifying. But this is all in WebGL. They're cool looking. Credit for that. This is a terrain demo, also using REGL. This is a convolutional neural network, also running on the GPU using REGL. Basically you draw a little digit and it recognizes it using the data set. I mean, it works. It's a conv net in WebGL. That's cool.
This is a visualization from FiveThirtyEight showing gun deaths in America, using the same framework under the hood. And also, a ton of examples in this gallery here. Just pages of this stuff. Click on these and figure out how they work, or look at the details there. So... okay. With that all out of the way, you can now see that it's a real thing. People are actually using it. So what's it about? Why did I create it? And, you know, what is it good for, right? So at a high level, if you want to do computer graphics, like 3D graphics or 2D graphics, there are two big approaches that people consider. You can either grab some 3D engine out of the box and just configure it with different options and then use that to render your scene or your game. So, for example, you could just grab Unreal or Unity, and maybe you have something out of the box and go with it. Or you can roll your own. And on the Web that makes sense: you have a limited budget for downloading stuff, and you get more control over all of the details. This can be good if your project has to be maintained over a longer period of time and you need to, you know, customize and add special features to it. But the difficulty with this is that you're then going to be reimplementing a bunch of stuff. So you either have something where you have more control, which is better for a longer-term project that has to live for many years, or something you can get up and running right away, which is better for a one-off project, but maybe down the road you have problems. So this is kind of annoying, right? It would be great if we could do both. What we want is something where everyone has full power to do whatever they want with the 3D engine that they're using. So we want people to be able to get into the shaders in their 3D engine and write their own code there. You should be able to dig into whatever geometry assets are in the 3D engine and mess with them, or play with the data to do something novel.
Rather than an opaque asset format that you can't introspect into. Reusable components, so you can take tiny pieces of different rendering engines and put them together. And things should be explicit rather than implicit, with a bunch of weird configuration options that you have to know about beforehand. And it should be open, so anyone can go in and start adding pieces to it and build exactly the 3D engine they need for the problem they're trying to solve. So I made earlier attempts at this goal. And the first big approach was the stackgl project. I worked on this with a variety of different people in the npm community. It was marginally successful. There were three big parts. A collection of WebGL wrappers, which basically take the core WebGL API and break it into different classes that abstract things like shaders or buffers or, you know, different parts of the system. There was glslify, a system for writing shaders. And a set of math and geometry libraries for working with arrays and so on. So stackgl was sort of a mixed success. I would say that the mathematics and shader logic that came out of stackgl was incredibly successful and it worked very well. But the WebGL wrappers left a lot to be desired, and I don't think they ever fully realized this broader goal of creating a modular system for writing 3D engines. And the reason is that each of these little WebGL wrappers had a ton of coupling, because they were just trying to break out little parts of WebGL piecemeal and were never able to take a broader, more holistic approach that could actually solve it. There was a lot of coupling between these modules. And also, because they were fairly complicated and the interfaces were stateful, it was hard to test them, and there were unclear specifications for how a lot of things needed to work. So it didn't really work that well.
So at least glslify worked well, and those pieces live on in projects today. So REGL is an attempt to go back, revisit, and fix some of these mistakes. The goal is to basically kill all of the shared state that exists in WebGL, like React kills it in the document object model. And also, while doing this, to go back and do something that's well-documented, well-specified, well-tested, and very low overhead, reducing the total cost of the code and the overhead in the system. Make something that's small, fast, and kills all shared state in WebGL. That's what REGL is about. What do I mean by reactive, or no shared state? We have seen a couple of talks about this. The basic idea in a more traditional rendering pipeline, something like three.js or, you know, the document object model, is you have this large external graph of objects, right? Usually in some hierarchical structure. And you have your data, which is what you actually care about, and then you have this data binding thing, right? Where basically you want to update something in your data and then you have to go back and update its representation in that graph and vice versa. So you have this thing where you're constantly trying to keep these two copies of the data in sync all the time. It's like the dual-write problem in databases: you have the index here and the actual data in the database, and when you update one, you have to update the other. You're fighting to make this complex stuff happen in your code just to keep two copies of what is really the same thing in sync. And that's, you know, basically what most things do. And the reason things tend to look like this is that if you're just writing a 3D engine, it would be nice if you could only think about the scene graph and not think about this. But when you're writing an application, holistically, you have to consider the entire system. So this data binding is problematic, right? It creates a lot of friction.
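The functional model contrasted with data binding here can be sketched in a few lines of JavaScript: one state object, one pure function from state to draw output, and no retained graph to keep in sync. This is a conceptual illustration only, not REGL's actual API; all names in it are invented.

```javascript
// Conceptual sketch of functional rendering: one state, one pure render
// function. No scene graph, no data binding -- the state is the single
// source of truth, and every frame is derived from it.

// The application state, updated by whatever means you choose.
let state = { color: [0, 0, 0], translate: [0, 0] };

// A pure function from state to a list of draw operations.
// In a real system these would become WebGL calls; here they are plain data.
function render(state) {
  return [
    { op: 'clear', color: state.color },
    { op: 'drawTriangle', translate: state.translate }
  ];
}

// Updating the state is just replacing it; the next frame re-renders
// from scratch, so there is no second copy to keep in sync.
state = { ...state, translate: [0.5, 0] };
const frame = render(state);
```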
So there's another way to do it, popularized by React, but going back to immediate-mode GUIs from Molly Rocket and a variety of older ideas. In a functional rendering system, you have a single copy of your state, and you have a function that transforms this into pixels on the screen. So you don't have this data binding stuff going on. You have one state. This thing gets updated by whatever means you choose. And you have code you write that turns that state into rendered, visualized stuff. This is how React and Redux work. Elm does this. It's a popular and successful paradigm for creating interactive GUI applications. So this is REGL. This is a replacement for the WebGL stuff in stackgl with a new functional set of libraries. I could go on longer about how this works in detail, but I'm going to do a live-coded demo so you can see for yourself what it's like. And hopefully I don't blow everything up in the process and it might work. Or not. Let me move some windows around here. The projector is a different size. Can everyone read the text on the screen here? Should I make that bigger? All right. We're all good. Okay. So what I'm going to do here first is just create a little 3D engine from scratch using this thing. I'm going to start out by clearing the screen to black. So this REGL module, when I load it, basically gives me a constructor. Hopefully I did that right. Or I may have to restart the server. Okay. Here we go. So what this is doing: this require is loading up the constructor for REGL. And I get a single full-screen REGL context that I can call functions on and use to draw stuff. By default it's full screen, and this is regl.frame, which renders each frame. So, different colors: once I do the clear, it goes to some different value. Set it to red, set it back to black. I can set it to green or blue, whatever. Or I can make it flash. So, a slight epilepsy warning. Do that. I'm not going to do that anymore. That's kind of annoying. Right.
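The clear-screen step described here looks roughly like the sketch below. The regl calls in the comment follow its documented API as best I recall it; the flashing effect is modeled as a pure function of the frame tick so the snippet can stand alone.

```javascript
// A pure helper: pick the clear color for a given frame tick.
// Alternating per tick is what produced the flashing in the demo.
function clearColor(tick) {
  return tick % 2 === 0
    ? [0, 0, 0, 1]   // black, RGBA
    : [1, 0, 0, 1];  // red, RGBA
}

// In the browser this would drive regl, roughly:
//
//   const regl = require('regl')();   // full-screen canvas by default
//   regl.frame(({ tick }) => {
//     regl.clear({ color: clearColor(tick), depth: 1 });
//   });
```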
So that's the basic idea. Now I can draw stuff in it. In WebGL you basically have an API for drawing triangles, so I'm going to show you how to draw a triangle. This may seem trivial, but once you can draw one triangle it's not much more work to draw millions of triangles in some crazy configuration. So what I'm going to write here first is a fragment shader. What this does is tell WebGL what color to make each pixel. I'm going to start by making them all white. I can change that later. All right. Similarly, I have to give it a vertex shader, which tells WebGL where to put each of the vertices on the screen. The position of each vertex is in 2D right now, just to get started, and we just pass this through to the output position variable. So that'll put each vertex wherever the input says. So I now have a vertex program that's going to run on the GPU and a fragment program. And I need to give it some data to run in the vertex shader: that position attribute. And then I will just give it these coordinates. So in WebGL the coordinate system works so that the lower left corner of the screen is negative one, negative one, and the upper right corner is positive one, positive one. And I tell it how many vertices, and tell it to draw a triangle. I got a triangle. Good. Now let's make it move around. Do something like this. In WebGL you have things called uniform variables, which you can set to different values, and then they get broadcast into the shader. These are global variables that you set outside WebGL and that go into WebGL. So I have a new uniform, translate, which is a 2D vector. Add that. At the moment, it's zero. If I make this value 0.5, it shifts over there. Negative 0.5 shifts it over there. And I can even do something like this: I can actually read in the current tick, make it a function, and have this do Math.cos of the tick. And it'll go back and forth.
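A regl draw command is specified as plain data: shader source strings, attributes, uniforms, and a vertex count. Below is a sketch of the triangle command as just described; the exact vertex positions are illustrative, and in the browser you would wrap this with `regl(spec)` and call the result each frame.

```javascript
// The draw-triangle command as plain data, roughly as built in the demo.
const spec = {
  // Fragment shader: color every pixel of the triangle white.
  frag: `
    precision mediump float;
    void main() {
      gl_FragColor = vec4(1, 1, 1, 1);
    }`,

  // Vertex shader: pass the 2D position through, shifted by the
  // `translate` uniform.
  vert: `
    precision mediump float;
    attribute vec2 position;
    uniform vec2 translate;
    void main() {
      gl_Position = vec4(position + translate, 0, 1);
    }`,

  // One 2D coordinate per vertex, in clip space ([-1, 1] on both axes).
  attributes: {
    position: [[0, -1], [-1, 0], [1, 1]]
  },

  uniforms: {
    translate: [0, 0]
  },

  count: 3 // three vertices, i.e. one triangle
};
```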
I need to make it slower. Like that. Okay, maybe a more moderate thing, right? Now, this is cool if I want it to go in a fixed loop, but let's take in data from other sources. Like, for example, the mouse. mouse-change listens for mouse changes and gives you some state info about that. What I can do is call this function with the data from the mouse. So I'm going to actually give this a vector, call it translate here, and it will take in the X component of the mouse and the Y component of the mouse. Of course, these numbers are going to be very large, because they're in pixel coordinates. I'm going to eyeball it and divide by a thousand. I could do something more precise; I can show that in a moment. Here's what I'll do. Rather than passing in a function, I'm going to read in a prop. And I call that translate. Oops. All right. And now when I move my mouse around, it should translate this thing, roughly. Although the Y is flipped, so switch that around. Now moving my mouse drags the triangle around. What's happening here is it's basically reading the state of the mouse from the module. This is some arbitrary blob of state somewhere, and we're passing it into the command via the translate prop, and that's popping out on the screen, giving me this moving triangle. Okay. That's kind of cool. So I'm going to skip a few steps in the interest of time and we're going to do it in 3D now. I'm going to need a camera. This is basically a module that handles all the details around a 3D viewport, doing the matrix math around that. I have this camera. I'm going to read in a test object, which is going to be a model of a bunny. And instead of calling this thing "draw triangle," switch to "draw mesh." And maybe just "mesh," that will be more descriptive. I'm going to draw from the mesh: mesh.positions. And then I need to draw the faces. So elements here is a special keyword in REGL that basically reads the cells, right?
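The "divide by a thousand" eyeballing can be replaced by an exact pixel-to-clip-space conversion. This small helper is my own sketch, not part of regl or mouse-change, and it also handles the Y flip mentioned here.

```javascript
// Map a mouse position in pixels to WebGL clip space ([-1,1] x [-1,1]).
// WebGL's Y axis points up while the DOM's points down, so Y is flipped.
function mouseToClip(x, y, width, height) {
  return [
    (2 * x) / width - 1,  // 0..width  maps to -1..1
    1 - (2 * y) / height  // 0..height maps to  1..-1 (flipped)
  ];
}

// In the demo this would feed the translate prop, roughly:
//   drawTriangle({ translate: mouseToClip(mouse.x, mouse.y, w, h) });
```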
So the... let me back up a little bit. What is this mesh I just loaded? I'm going to pop open a Node window here. Make it a little bigger. I'm going to say mesh = require('bunny'). This works in Node, by the way. There's nothing special going on; it's just JavaScript. So if I look at mesh.positions, this is an array of tuples of points, and then the triangles of the mesh, the cells, are an array of indices. This is data. There's nothing fancy going on there. Just plain old data. Now what I'm going to do is use the camera module. I'm going to call draw mesh. And at the moment it's going to do nothing interesting. I'm still using this uniform variable here. But I will now say uniform projection and view. These are basically the state of the camera. And I will set the position value to be the projection matrix times the view matrix times the position attribute from the mesh. So if I do this, and assuming I didn't make a mistake. Which I guess I did; I have a syntax error here. Oops. Click that thing. Where did it... here we go. Oops. Wrong file. Line 22. Wait. Hold on. Here we go. All right. Let's load this. Wait. Where did I do the typo? Attribute, position. Vec3, vec4... where did I put that? Where? Right here? Vec4? No, it should be a vec3 position, not vec4. Hold on, let me comment this out. Oh, I know why. Right. It should be this. Let's try that. There we go. Right? Phew. That was kind of weird. The reason it gave me a bad error is because I was running this babelify thing. Quick recovery. I'm still jet lagged. It's a six-hour time difference. It's a long time. So, this opaque-looking thing. Not that interesting. Let's color it. I'm going to take this module here called angle-normals, and we're going to compute the normals from mesh.cells and mesh.positions. And if we do that, we now have this normal value, which isn't doing anything yet, so we have to actually pass it through to the fragment shader.
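What angle-normals computes can be approximated in a few lines: a cross product per face, accumulated and normalized per vertex. This sketch weights every face equally, whereas angle-normals weights each face by its corner angle, so treat it as an approximation of that module, not its implementation.

```javascript
// Per-vertex normals: cross product of two edges per face, accumulated
// into each incident vertex, then normalized. (angle-normals additionally
// weights each face's contribution by its corner angle.)
function computeNormals(cells, positions) {
  const normals = positions.map(() => [0, 0, 0]);
  for (const [a, b, c] of cells) {
    const pa = positions[a], pb = positions[b], pc = positions[c];
    const e1 = pb.map((v, i) => v - pa[i]); // edge a -> b
    const e2 = pc.map((v, i) => v - pa[i]); // edge a -> c
    const n = [                             // e1 x e2: face normal
      e1[1] * e2[2] - e1[2] * e2[1],
      e1[2] * e2[0] - e1[0] * e2[2],
      e1[0] * e2[1] - e1[1] * e2[0]
    ];
    for (const idx of [a, b, c]) {
      for (let i = 0; i < 3; i++) normals[idx][i] += n[i];
    }
  }
  // Normalize each accumulated vertex normal to unit length.
  return normals.map(n => {
    const len = Math.hypot(n[0], n[1], n[2]) || 1;
    return n.map(v => v / len);
  });
}
```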
We do that using varying variables. We have this varying variable called color, and we set the color equal to the normal vector. So 0.5 times 1 plus the normal. And then we have this varying color. So this is just a three-tuple for the RGB components of the color vector. If we put that out, assuming I didn't typo anything... yeah, we have a shaded bouncy bunny thing here. And we can do this here: make the bunny fatter, 0.1 times the normal. It's a fatter rabbit. We can make it skinnier: subtract the normal from this thing. Make it even skinnier than that. Make it... now it's really kind of messed-up-looking and deformed. And we can do this: a uniform float T, and multiply T times the normal. All right. And we'll set T to be, I don't know, say Math.cos of the tick. Let's see what happens. Whoa. It's oscillating, doing crazy stuff. We can do this: say, cosine of position.y plus T. And this will do something even weirder. Make it go in some strange oscillatory pattern. I don't know what's going on anymore. So you get the idea, right? This is the basic stuff. Now I have a bunny over there. That's fine. All well and good. Let's do something with some real data. I've got some stuff here and I've got some time. This should be fine. We're going to abstract this a little bit. I'm going to create a function called process mesh. And instead of taking the bunny as input, we're going to take an arbitrary mesh, read this mesh as some data, and then return the output. This is just kind of refactoring the code. And we return a resulting function that we can call as a command. So we're going to say process mesh, bunny. And this should still work, right? I didn't change anything, just refactored it. Make sure I didn't break something there. Oh, two Ss. All right. Forward. Bunny is not defined. I called it "mesh." All right. There we go. It's still doing the same thing as before. Nothing has changed. So let's now actually load something.
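The shader expression 0.5 * (1 + normal) just remaps each component of a unit normal from [-1, 1] into the displayable [0, 1] range, which is why the bunny gets those smooth rainbow-ish colors. The same mapping in plain JavaScript:

```javascript
// Remap a unit normal's components from [-1, 1] to an RGB triple in [0, 1].
// This mirrors `color = 0.5 * (1.0 + normal)` in the vertex shader.
function normalToColor(normal) {
  return normal.map(v => 0.5 * (1 + v));
}
```

A normal pointing straight along +X, for example, comes out as a reddish color, while one along -X comes out dark.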
So I'm going to use this module, ndarray, which is basically a module for working with, you know, multidimensional arrays in JavaScript. And I'm now going to load some data. What I'll do is use this other module here called resl, a resource loader that works well with REGL. You give it resources, and it does an XMLHttpRequest to pull in the data. I only have five minutes, so I'm going to do this really fast. So, type binary. The data I'm going to work with here is some neuron data from, like, a mouse brain. This was scanned at the Howard Hughes Medical Institute using a microscope that shoots photons into them or something. I don't know. The mice have things on top of their brains while they're still alive. I saw it. It was twisted. All right. And I think the size of this thing... so I'm basically going to load in a 3D volume of data. They charge $15,000 for this stuff and I'm doing it right here. Like, the same thing, more or less. All right. So now I've got this neuron data loaded. Going to go copy that thing and put it in the onDone callback. And now what I've got to do is extract an isosurface. This is a module I wrote called surface-nets. It extracts isosurfaces and works in 2D, 3D, and 4D. I haven't tested higher than 5D. I don't know if it works in those dimensions. I wouldn't recommend that. So I'm going to call process mesh. First, I need to get the mesh. I'm going to say the mesh is going to be surface-nets of the neurons, and I'm going to use the level set 200. I think that looks pretty good. We can adjust it later. Then what I'm going to do is basically update the positions of the mesh. So I'm going to do positions.forEach and mutate this in place. So, whatever, I'll need another module here: vec3 = require('gl-vec3'). And then we'll do vec3.divide, which has a side where you pass an output variable. And then pass the mesh in. And so, assuming everything is good... mesh. There we go. All right. And then let's see. Okay.
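The position fix-up done here with gl-vec3 amounts to dividing each vertex, which surface-nets returns in voxel coordinates, by the volume's shape so the mesh fits a unit cube. A standalone sketch of that loop (my own version; the demo used vec3.divide for the per-component division):

```javascript
// Normalize isosurface vertices from voxel coordinates into the unit cube
// by dividing each component by the volume's shape, mutating in place.
function normalizePositions(positions, shape) {
  positions.forEach(p => {
    for (let i = 0; i < 3; i++) p[i] /= shape[i];
  });
  return positions;
}
```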
Yeah, well, wait, I still have this weird oscillating thing going on. I've got to turn that off. All right. Let's just draw the thing normally, no other weird stuff. Okay, we've got some neurons. Something is weird. I've got to do one other thing here. Because the neuron mesh is actually very large, we have to use 32-bit indices. So I have to do this. This is basically a little feature in WebGL that allows you to use 32-bit indices for triangles instead of the 16-bit ones you get by default. It worked. Now we have a chunk of neuron data. It's flipped around. That's fine. We can fix that. Here's where we load the data, so we're going to transpose the first two axes. Now it does its thing. You can see the different neurons and stuff in there. I don't know. I think those are neurons. I'm pretty sure this is from a mouse. So... yeah. And we could do other stuff, modify the level, do other things. I have gone long enough with the demo, so I'm going to wrap this up really quickly. So, okay. Right. That's REGL. The whole point of this live-coding demo is to show that you can do things quickly with it, but it's also meant for longer-lived projects. You start out with a lot of low-level control; you can build up from shaders and vertex buffers and whatever geometry format you want to use. It has clear, well-specified APIs. And even though I have been living on a lava flow for the last six months or so, and there have been a lot of people hammering on this thing and using it in production, commercial applications, in this entire time there has not been a single substantial bug report. There have been typos, but no substantial bugs in the code. Which is good. So I consider this to be sort of a success. There are also tools like headless-gl, developed in partnership with companies like Uber and Mapbox, that support this, so you can run REGL code in a CI environment. Which is great. But also, there's been a ton of improvement in performance.
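The 32-bit index issue is a simple threshold: WebGL 1 element indices are 16-bit unless the OES_element_index_uint extension is enabled, so any mesh with more than 65,535 vertices needs it. A small check makes the rule concrete; the regl option shown in the comment is from its documented constructor options, to the best of my recollection.

```javascript
// WebGL 1 element indices are Uint16 by default, so meshes with more than
// 2^16 - 1 = 65535 vertices need 32-bit indices via OES_element_index_uint.
// In regl that extension is requested at construction time, roughly:
//   require('regl')({ extensions: ['OES_element_index_uint'] })
function needs32BitIndices(vertexCount) {
  return vertexCount > 0xFFFF;
}
```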
So I didn't get into the details of this, but the way REGL works under the hood is that when you create one of these commands, it basically just-in-time compiles a little blob of code. So if I set a break point right here and step into this, we will see... hold on one moment. When we get to this... this code here. This code is generated at run time. What it does is apply a minimal diff of the WebGL state against the state that you constructed there. It's close to zero overhead. This is benchmarked; the difference between using REGL and writing hand-tuned GL code is basically on the order of microseconds. So it's fast. There's profiling, and we do a bunch of different benchmarks. And there's a chat room and a website with a bunch of demos. I'll conclude by naming some people who were instrumental in getting this off the ground: Jeremy Freeman, who helped with financial and moral support and guidance, and Erkaman, who came out of nowhere from Sweden. And thanks to Bocoup for putting this event together. And Ricky and other friends in the cooperative. With that, I think I'm basically done. [ Applause ]
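The minimal-diff idea described above can be sketched in plain JavaScript: compare the desired WebGL state against the last applied state and emit only what changed. This is a conceptual sketch only; REGL JIT-compiles a specialized version of this logic for each command rather than looping generically over keys.

```javascript
// Conceptual sketch of minimal state diffing: emit only the state changes
// that differ from the previously applied state, so redundant gl calls
// are skipped. Each entry in the result stands in for one gl.*() call.
function diffState(prev, next) {
  const calls = [];
  for (const key of Object.keys(next)) {
    if (prev[key] !== next[key]) {
      calls.push([key, next[key]]);
    }
  }
  return calls;
}
```

If only one flag changed between draws, only one "call" is emitted, which is why the per-command overhead stays on the order of microseconds.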
Info
Channel: BocoupLLC
Views: 2,257
Keywords: Open Web, JavaScript, Programming, Open Source, Bocoup
Id: ZC6N6An5FVY
Length: 29min 16sec (1756 seconds)
Published: Mon May 15 2017