"Getting Started with WebODM: An Open-Source Solution to Your Mapping Problems"

Captions
Here we go. Hi everyone, how are things going? Today is already our fifth workshop; time passes so fast, and we are really happy with all the comments and messages people have been sending us. Thank you, we are really excited about this project. I'm Felipe, from UMass Boston, and Filipe is from the University of Wisconsin-Madison. Thank you, everybody, for being here; we're going to start our workshop today. By the way, as we always say, if you are new here, please subscribe to the channel, so every time we start a new presentation or post new information you'll receive it, and you can follow along, learn with us, and share with us. Also give us a like, which helps us get this information out and reach more people. Thank you.

We'll start as usual, which is probably the boring part for people who follow us every time: who we are. We are PhenomeForce, a platform that has just started, and it aims to create a community of people working on plant phenomics. Since the two of us are more or less beginners in plant phenomics ourselves, we felt the need to create a place where people can communicate and contact experts, to make plant phenomics easier for everybody. To start, we created a website, which you can see here, with all the information about who we are and what we are doing. We also have a Twitter account where we share news, and we have a Slack workspace; if you want to be part of it, just send an email to phenomforce@gmail.com.

The idea of creating a Slack workspace came from my experience with my group and with other projects I'm involved in: it's a very easy platform for communication. But we've seen that it's not very active, probably because people are shy or not very familiar with the application, so we want to encourage people to be there and communicate. As I said before, we are a community, it's not just the two of us, and we want you to communicate with each other, start networking, and, if you have a problem or don't know how to do an analysis, just type it there: "Which software do you use for drone images?", or anything like that.

To encourage people to take part: first of all, you can send feedback about this workshop series and how it's going, and which topics you liked most compared to others. The bottom part, which lists the people in the Slack workspace, isn't shown here, but these are the channels you can join and write in. The first one is the Fridays Hands-On channel, where you can send comments or anything you think about this workshop series; this feedback is important for us, because we are going to organize more events in the future and we want to hear from you. Then there is a channel called introductions; we strongly recommend that people who are already in the workspace, or new people joining, introduce themselves and say what they are working on, for example: "I'm a plant breeder working on potatoes and I use drone images," things like that.
Then there is also a channel we created called suggestions. We are already thinking about what to do after this workshop series, and we want to hear from you what you would like to do or learn more about: more tutorials? Seminars? We are considering many ideas, but we want to hear from you. You can also see there are other channels on specific topics, based on the different applications of plant phenomics; you can join them and write there. Please be active; we will also try to find other ways to get people interacting with each other on Slack.

Just one addition: if you have a pipeline or software that works really well for your lab, and you think it could help more people and you want to share it with the community, please talk to us and make a suggestion; we can organize and use this space to share that information. So feel free to contact us and suggest a workshop, or just a seminar about your research that you want to put out there. It's science communication, basically.

There is one comment: someone said that Slack is quite new to some of us, so we are still learning the interface. We understand; I think Filipe is also fairly new to the workspace. That's why the introductions channel is a good place to introduce yourself: many times there are so many people working on different subjects in plant phenomics and we don't know it, so it's good to introduce yourself and let people know about your research, so we can work together.

So, let's go back to the workshops. The first event we organized is this workshop series, Fridays Hands-On. We thought the most challenging part of plant phenomics is analyzing data, and we wanted to promote mostly open-source software that can be used for different applications. This is the fifth workshop, as Filipe said, and the series was organized in collaboration with the MPPN and thanks to the support of the Midwest Big Data Hub. We also want to thank Jennifer Clark and Christoph, who helped us organize this workshop series, and Noah Fahlgren for his technical support with many things.

Now let's talk about today's workshop. Back in February, at the Phenome meeting, I heard about OpenDroneMap. I work with remote sensing in plant breeding, and when I discovered this platform for building orthomosaics, and that it was open source, I fell in love with it, because it's a great possibility for us. They have a website; we already shared the link with you, and our speaker today will talk about it much better than I can. There are many ways to install it, but we really recommend following the documentation: it's pretty straightforward. I followed it and installed on two different machines, Windows and Mac, and it worked well just by following the documentation. Some parts involve a little bit of command-line work, but it's super simple, just copy and paste.
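As a quick sanity check after installation, here is a minimal sketch (our own illustration, not from the workshop) of how you might confirm that a local WebODM instance is up and that you can authenticate against its REST API. It assumes the default local address http://localhost:8000 and the token endpoint described in the WebODM documentation; adjust the URL and credentials for your own setup.

```python
import requests

BASE = "http://localhost:8000"  # default address of a local WebODM install (assumption)

# Is the web interface reachable at all?
resp = requests.get(BASE, timeout=10)
print("WebODM responded with HTTP", resp.status_code)

# Authenticate against the REST API to get a token for scripted use.
# Username/password are whatever you chose when creating your WebODM account.
auth = requests.post(f"{BASE}/api/token-auth/",
                     data={"username": "admin", "password": "changeme"},
                     timeout=10)
auth.raise_for_status()
token = auth.json()["token"]
print("Got API token:", token[:8] + "...")
```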
This is the link we already sent to the people who registered for the workshop series, but we can also put it in the description of this video so you can follow it afterwards. Also, if you registered for this workshop, you should have received an email yesterday where we tried to provide the installation information, plus a link where you can contact the developer and direct any problems or inquiries. I'm going to copy that again into the comments during the workshop, so if you have any problem with the installation you can write there; I think the developer will be very happy to help. They also have a community forum for questions. If everything went fine, you should see a window like this one, and that's where today's workshop will start from.

Normally we do this kind of introduction about the software, showing what the user is supposed to see, but today we have a really nice presentation. The title is "Getting Started with WebODM: An Open-Source Solution to Your Mapping Problems." As I told you, I'm a WebODM user, and I was really happy to have the chance to share this with you and show that there are open-source tools out there for orthomosaics, which is a really important step for us. Today we have India Johnson; she is a geospatial developer at Cleveland Metroparks. We're really happy to have you here, India, so welcome, and please introduce yourself.

Hello everyone, and thank you; we are really happy that you accepted to be part of this project. Hi, yes, I'm super excited; this is my first workshop like this, so I'm excited. My name is India Johnson. I'm a geospatial developer and UAS pilot at Cleveland Metroparks. In my role I take on a variety of development, technology, and flight projects; this includes server maintenance, web app development, infrastructure development, marketing flights, and mapping flights, and then I process the imagery that we collect from those flights. This workshop is going to cover using WebODM, which is the web version of OpenDroneMap, an open-source software program that we actually use for processing our aerial imagery and generating digital outputs, including orthomosaics, digital elevation models, and point clouds. I have a whole bunch of slides, and this presentation has been adapted from a presentation given by Corey Snipes for DroneCamp.

Before we start: India prefers to take questions at the end, so there will be a Q&A section in the last thirty minutes of the workshop, but during the presentation you are free to post questions in the live chat, as you have done in previous workshops, and we will ask India to answer them at the end. One more thing, as we always say: we are using online platforms and sometimes one of us can lose the connection. If that happens, please stay there; we'll try to come back as soon as possible, and thank you for understanding. So, India, are you ready with your presentation?

Yes. Hmm, right now I can see your notes; let me try to swap that. Did that work for you? I think there's still a problem with the screen;
you should change the screen, the bigger one, I think. Yes, I think if you put it in full presentation mode... perfect. Good luck, and see you later; let me know if you need anything.

Okay. To give you a quick run-through of what we're going to cover: I'll give a little background on me and on OpenDroneMap, talk about how OpenDroneMap works and its use cases, and then actually use WebODM. We're going to do some hands-on processing, look at the potential outputs you can get, and then look at some potential pitfalls, future directions, and resources.

A little bit about me first. As I said, I'm a geospatial developer and a drone pilot; I've been doing that, and been with Cleveland Metroparks, for a little over two years now. I'm a Part 107 certified drone pilot, which means I can work in the commercial space. My background is in biology and ecological research, and also in computer science and development, and I'm a contributor to the OpenDroneMap ecosystem whenever I'm able to be.

OpenDroneMap is open-source software, so you might ask: why use open source? One reason people use open-source software, or OpenDroneMap specifically, is limited funding. These types of processing software can be quite expensive, and OpenDroneMap is free. You could pay $5,000; I, as an individual, don't really have $5,000 to just spend, and I definitely don't have $350 a month, when I have the option to use something that works just as well at no cost. Also important is the control you have over your toolchain. OpenDroneMap is flexible: you can generate orthomosaics and DEMs, you can process multispectral imagery, it's installable in multiple different ways, and it's really adaptable to whatever your use cases may be. OpenDroneMap has many contributors, and when people want to see something or need a feature, they can either add it themselves or request that it be added, and it can actually become part of the software. It's also transparent, which helps lead to better quality and security of the software itself.

Here are some cost comparisons between WebODM and a few of its competitors. It depends entirely on how many computers or users you have, or how many images you're processing; this particular graph shows 50,000 processed images a month in 2,500-image batches. As you can see, it can get pretty expensive. With ODM, for the most part, unless you're paying for installers or cloud services, you're only going to spend the money you need for your own setup, mostly RAM and GPU, maybe a better CPU, and whatever you can run on your setup is free after you pay the initial cost. There are also cloud options, so you don't have to run it on your own computer.

A little bit of history: OpenDroneMap began as a concept, actually as a joke, in 2013, but it grew relatively quickly with a dedicated community of backers and developers, and then it kind of exploded between 2015 and 2018,
and it has been more widely adopted from 2018 until now, with new contributors being added all the time. The primary goal of the project was to be usable for the public good without creating monetary barriers to entry, which is what we often see, particularly in countries that have a pressing need for this type of software but not a lot of money. We also want to provide software that grows and evolves with user needs and input.

Quickly, I'm going to talk about how OpenDroneMap works. It's a photogrammetric process that takes in images. The first step is structure from motion, a photogrammetric technique used to estimate three-dimensional structure from two-dimensional images; and for those who don't know, photogrammetry is the science of gathering information from photographic imagery. More on structure from motion: essentially, structure from motion matches features across multiple images. It picks a feature in one image, then finds that same feature in other images and matches them up. This is why having images with overlap is so important: it helps structure from motion actually match features and put them together later.

Here you can see a sparse point cloud. This is a set of matched features, with the cameras shown at the top as those little triangles, with their angles. This is more or less the minimum set of features that have been matched: the initial point cloud. This is another example of a sparse point cloud with the cameras and their angles, similar to the previous image. What we get after that is a dense point cloud: it goes from a sparse point cloud with a minimum of features to a dense point cloud. What happened is that OpenDroneMap filled in the blanks, in a way: it matched more features, added more points, and took a few educated guesses to get all of these extra points onto the map and create a more complete picture.

The next step is surface reconstruction, which creates a mesh. This mesh doesn't have all of the detail of the orthophoto, if you will, but rather fills in the gaps you saw in the dense point cloud; to keep things simple for processing, color isn't considered in this step. This is an example of a mesh, or surface reconstruction. It uses a technique called Poisson surface reconstruction, followed by mesh decimation and a few other steps to fill in the blanks. In case you're wondering, this is the Old Fort and the House of Wonders in Stone Town, Zanzibar, I believe. Then there's a close-up of the details of the mesh, detail we didn't have in the dense point cloud.

After that we have multi-view stereo texturing, and ta-da, color: after the blanks are filled in, the color and textures are put back in to give a complete image. Lastly, there's orthophoto generation. An orthophoto, or orthomosaic (they're essentially the same thing), is the orthorectified projection of the 3D model, and you can see that here. What you see overlaid on top is a digital surface model, and underneath it is the orthophoto; the digital surface model essentially shows the ground, and pretty much everything on the ground, in their relative elevations.
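To make the feature-matching idea concrete, here is a small, self-contained sketch (our illustration, not part of the ODM code) that matches features between two overlapping photos using OpenCV's ORB detector. ODM itself uses OpenSfM internally, with its own detectors and matching strategy; the filenames below are placeholders.

```python
import cv2

# Two overlapping drone photos (placeholder filenames).
img_a = cv2.imread("DJI_0001.JPG", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("DJI_0002.JPG", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute descriptors in each image.
orb = cv2.ORB_create(nfeatures=5000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Brute-force matching with cross-checking keeps only mutual best matches,
# i.e. the same physical feature seen from two camera positions.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
print(f"{len(matches)} candidate feature matches between the two images")

# Visualize the 50 strongest matches; with good overlap you should see
# many consistent lines connecting the same ground features.
vis = cv2.drawMatches(img_a, kp_a, img_b, kp_b, matches[:50], None)
cv2.imwrite("matches_preview.jpg", vis)
```

With enough overlapping pairs like this, the matched features become the tie points from which the sparse point cloud and camera positions are estimated.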
There are quite a few use cases for OpenDroneMap, orthophotos, and digital reconstruction. One common area where we see drone usage and requests for aerial imagery is construction. In this particular instance, imagery was collected at regular intervals to monitor the progress of a construction site and was also used to provide data for measurements. As you can see, this was flown with an RGB (standard) camera, and it was essentially used to assess the construction site on a regular basis and update the owners and stakeholders on progress.

Drones and multispectral imaging are becoming increasingly popular in agriculture, frequently used to assess plant health and stress as well as nitrogen and phosphorus loads. In this example, using RGB images alone, details can be extracted to give a very rough estimate of plant health in an area.

Another area where we see drones and aerial imagery being used is assessing emergency areas, in this case environmental damage. What you see here is a dense point cloud of a tornado touchdown area. The organization that sent this data over for processing was looking for a method to quickly assess damage, plan resource allocation, and estimate replanting costs.

Coastal monitoring is another area where drone imagery is often used. This is also Stone Town, in Zanzibar, Tanzania, actually the same area we saw in the previous slides with the mesh and the texturing. This area was flown with a standard camera at regular intervals, and the data was used to generate 3D models, map the archipelago, estimate flooding, and provide a baseline for coastal monitoring: you can see how the coastline is changing in 3D, save that data, and refer back to it every two or three or four months, or years.

We also use aerial imagery in ecological assessment. What you're seeing right now is a local wetland; the top-right image is RGB and the lower image is near-infrared. We use this imagery to track changes in the area through time and to assess the wetland. If you look closely, you can see that the water is a little more visible in the near-infrared spectrum; it stands out, it pops, if you will. So we can track where the water is over time in the wetland; there are some restoration efforts underway in this area, so we can also assess how those are going over time and across seasons, detect some plant-stress indicators, and, as I said before, highlight where water is and where it's moving.

Then there are erosion and elevation assessments. The image on the left is a digital terrain model and the image on the right is a digital surface model. The difference between the two is that a digital terrain model shows you the land without all the stuff on top, while a digital surface model shows you the land with all the stuff on top, whether that's trees, plants, rocks, or buildings. This information can be used to determine relative elevations and also to measure erosion: some areas erode over time, and you can take imagery at intervals and assess how quickly something is eroding or whether intervention is needed. Originally, the data for the image on the left was collected to create climbing maps; this is a recreational climbing area.
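As a worked illustration of that DSM/DTM distinction, the sketch below differences the two rasters to get the height of everything sitting on the terrain (trees, buildings, boulders); differencing two DTMs from different dates works the same way for erosion monitoring. This is our own example, not from the talk: it assumes rasterio is installed and that the two GeoTIFFs exported from WebODM share the same grid; the filenames are placeholders.

```python
import numpy as np
import rasterio

# DSM = terrain + objects on it; DTM = bare terrain. Both exported from WebODM
# for the same task, so they are assumed co-registered (placeholder names).
with rasterio.open("dsm.tif") as dsm_src, rasterio.open("dtm.tif") as dtm_src:
    dsm = dsm_src.read(1, masked=True).astype("float64")
    dtm = dtm_src.read(1, masked=True).astype("float64")
    profile = dsm_src.profile

# Height of "the stuff on top": canopy, structures, boulders.
height = dsm - dtm
print("Mean object height above terrain: %.2f m" % height.mean())
print("Tallest feature: %.2f m" % height.max())

# Save the difference surface so it can be inspected in a GIS.
profile.update(dtype="float64", count=1)
with rasterio.open("height_above_terrain.tif", "w", **profile) as dst:
    dst.write(height.filled(np.nan), 1)
```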
Okay, before we begin processing our datasets, I'm going to give a very quick overview of how to use ODM; it's pretty simple, so I'll run through it quickly. These are the supported systems: Windows 10, Ubuntu, macOS. We're going to be using WebODM, but let me give a little background on the various ways to use ODM. WebODM is the most common way; it's free with a manual install, the web interface is also free, and you can purchase an installer for a low cost. WebODM Lightning is the easiest way to use ODM, since there's no installation overhead: you just sign up and start processing, but you pay for the processing power. There's a command-line setup; there's an API for configuring multiple processing nodes (a little more on that in a bit); there's LiveODM, which packages ODM, NodeODM, CloudODM, and WebODM onto a DVD or USB that plugs into your computer, with no installation required, ideal for offline use; and there's also Portable OpenStreetMap, or POSM, which is used for field mapping. (I keep hitting the wrong arrow key... there we go.)

When you log on to WebODM, you should have been given a username and password, and you'll see a dashboard. This dashboard is essentially your hub: you'll see the name of the project you created, the names of the datasets within that project, the number of images in each, the time it took to process (or how long it's been processing, if it's not yet completed), and its status. There are a few different statuses I won't go into, but "completed" is pretty obvious, then "in progress," and if it errors out you'll sometimes get a message about why. This area contains your assets: once processing is completed, whatever assets were generated can be found here and downloaded in a few different formats.

When we process a dataset, the first thing you do is select the images, and GCPs if you have them. Select all of them: I don't think WebODM takes folders, and it doesn't take compressed files, so you're going to select all of the images you want processed. Name the dataset, and use the default options, which are usually your best bet for most cases. You can also click Edit and change the parameters yourself if you know what you're doing, or refer to the docs, because there are a lot of parameters that can be played with for different results; that's beyond the scope of this workshop, but feel free to ask about it during the Q&A. We are going to resize the images (that should be the default setting), and we're going to resize to 2048 pixels. Click Review; when you do, the button changes to Start Processing, so to actually start you click that, and then this shows up. It gives you the status, in this case "running," and if you want to see what's happening as it happens, or if the dataset happens to error out, you can turn on the task output and track what's going on or what happened. When it's done you'll see "completed," how long it took, and your assets will be there; you can also view the map (the generated orthophoto) and the 3D model of the point cloud.

A quick note on NodeODM and large datasets: there's a relatively new feature called split-merge that can be used to process very large datasets, and by very large we usually mean numbering into the thousands or tens of thousands of images.
Essentially, with split-merge you set up multiple processing servers, or nodes; these nodes process individual chunks of the data, the chunks are stitched back together at a primary node, and that node generates the outputs for WebODM or ODM.

Okay, so now we're going to start actually processing our datasets. Give me one second; I'm going to switch screens and go to my account, and this is always where things get wonky, so I'm going to stop sharing for a moment.

(While she switches screens.) Did you hear about WebODM before OpenDroneMap? No, I heard about OpenDroneMap first. Actually, my supervisor and I have a project on cranberries and we're trying to use it to build orthomosaic images, but so far we haven't had the chance to try it because we've been busy with another project. Soon we will, so I'm very interested in WebODM; I don't know whether we should install WebODM or the plain OpenDroneMap software, but if this one is smoother, maybe we'll try it. Did you use it, Filipe? Yes, I use it a lot, but I'm learning about many new filters that I normally don't use. Super cool, okay, let's continue.

Once you get to your dashboard, unzip any files you may have. I'll actually go through this from beginning to end: create a new project. You don't have to create one; there should already be a project called "phenom force," which is just an example for later, and I'll let you know when you should actually follow along. But if you were starting a new project, you add the project name, you can add a description, and you create the project. The project then holds multiple datasets; our setup is that we have projects named by park, and whenever we gather data from a park or do a flight there, we add it to that park's project.

Then I'm going to switch here and select the images I want to use. There are multiple ways to select all the images: I usually just use Ctrl+A, but you can click and drag, or shift-select. Open them, and this panel opens where you can name the task within the project. Here I recommend that, once you have all the images of the dataset selected, you use some sort of unique identifier for yourself, whether it's your name or zip code or whatever, so that you can pick out the project you processed later; I'm just going to name this one.

Now, once the images are added and we have a name, the processing node is usually chosen automatically; if you have multiple nodes set up, you can pick which one you want to use for this particular dataset, but for now we're just using Auto. We're going to use the default options. There are multiple different option presets depending on what you want: if your main focus is elevation, you pick DSM + DTM, and there are certain parameter changes that are useful if you're processing a forested area or buildings, or if you're using this for volume analysis. Most of the time, Default is going to give you the best result for the least amount of processing time, but it depends on what your needs are.
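For people who would rather script these same steps than click through the dashboard, the PyODM library can submit a task directly to a NodeODM processing node, the kind of node WebODM dispatches work to. This is a hedged sketch under our own assumptions, not something shown in the workshop: it assumes a NodeODM instance is listening on localhost:3000 and uses placeholder image paths; option names such as `dsm`, `dtm`, and `split` are standard ODM task options, but check the documentation for the exact set your version supports.

```python
from glob import glob
from pyodm import Node

# A NodeODM processing node; a default WebODM install runs one on port 3000 (assumption).
node = Node("localhost", 3000)
print(node.info())  # version, queue length, max images, etc.

images = sorted(glob("flight_2020_09_18/*.JPG"))  # placeholder folder

task = node.create_task(
    images,
    {
        "dsm": True,           # also produce a digital surface model
        "dtm": True,           # and a digital terrain model
        # "split": 400,        # uncomment for split-merge on very large datasets
        # "split-overlap": 150,
    },
)

task.wait_for_completion()         # blocks until the node finishes (or raises on error)
task.download_assets("./results")  # orthophoto, DEMs, point cloud, report...
print("Done:", task.info().status)
```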
Resize Images should already be checked, with 2048 pixels; if it's not, please check it and enter 2048 pixels. Then we just click Review, which gives you a chance to make sure everything is how you want it before you start processing, and once you click Start Processing it uploads the images. The upload itself can take a little while, especially if you have thousands and thousands of images, so I'll wait for this to finish before I move on. There we go: now "test123" is processing in the "phenom force" project, and right now it's just resizing the images; it does a few things before it actually starts processing.

One cool thing to note, which I mentioned before, is the task output, which hasn't started yet because it's still resizing. The task output window is very helpful, particularly if you're not sure whether your process has stopped without giving you an error, or is just sitting there for a bit while it works on something that needs a lot of processing power, or to figure out why your process errored out. The default is for this to be off, so make sure you turn it on if you want to check. There we go, now it's running; this is the part that usually takes a while, but the task output has started, and it tells you what's happening: at this point it's recording the parameters and extracting data from the images. It's just very useful to know what's going on behind the scenes.

While that's processing, we're going to talk a little bit about outputs, so I'm going back to my slideshow; I hope this works like I want it to. (Yes, we can see your presentation.) Okay, great, thank you.

Moving past processing to outputs: there are a few potential outputs you can get from your dataset. The main one, or the one most people think of, is the 2D orthophoto, and there's a lot you can do with it within WebODM. This is what you see when you click Map; you can use Google Maps or OpenStreetMap as the underlying base map, take measurements, and add contour lines. This is an example of taking a measurement and getting a volume estimate and an area estimate; for contour lines you can specify the details. There's also a Plant Health tab; you usually use this when you have multispectral imagery, but it also works with RGB to a certain extent, and you can adjust the wavelengths to help provide insights about plant health. There's the surface model, which, as I said before, shows the surface with the trees and structures still there; there are adjustable color ramps depending on your needs, and you can export it as well. And then there's the DTM, which basically just shows the ground: it's the DSM without the trees or other structures on it. This is a good comparison between the two: the DSM contains the trees and boulders and buildings, and in the DTM all of that is gone and there's just the land and what I presume to be a river ravine.

That brings us to the 3D model: you can view the point cloud and the textured model, and there are a lot of settings to play around with here. Here's an example of a point cloud I generated of a wetland, and this is a dense point cloud, because it's using quite a few points.
In the interface you can actually change how many points you want to use, so it can be very sparsely populated or populated with a lot of points; it just depends on what you want to see. You can take pictures of this and export those images. This is the textured model: it takes those points and actually puts textures on top, and there are a lot of filters and tools you can use; you can change the scene and the background depending on what you need. This is that same wetland, but the textured version. A little side note: a really cool thing happened that I didn't know could happen: it actually got the rest of the scene, so in the background you can kind of see out into the distance. I thought that was neat; I'd never seen it before, but it processed this dataset very nicely. You can take 3D measurements here as well, and this is just an example of that.

Once everything is done, you have a few download options. You can download the orthophoto itself, either as a GeoTIFF or as tiles, and the DTM and DSM; you can also download the 3D point cloud, the textured model, and the camera parameters themselves, in case you need to refer to those. One thing to keep in mind: not every program is going to be able to open these types of files properly, so make sure you have software that can actually display them. For point clouds, CloudCompare is a very popular viewer.

I'm going to go off on a slight tangent while everything is still processing; I can't see where it's at, but I'm assuming it still is. This is a local lake. On my left is the orthophoto, with a zoomed-in shot of it, and then the point cloud on the left and the textured model on my right. You'll notice that in the point cloud there is a big gap, so it looks like an abyss where the lake should be. This is one of the interesting things about processing aerial imagery to generate these products: water processes somewhat unpredictably, due to its reflectance and its motion. You do see the water in the textured model and in the orthophoto, and in this instance it actually came out pretty well, but I've seen water come out very oddly; whether or not it processes well depends mostly on the conditions you're flying in and the condition of the water itself: the contrast, sunlight or lack thereof, waves, and so on.

One thing we're going to look at is output quality. OpenDroneMap's outputs are comparable with commercial products, and your output quality is going to change based on your settings. Like I said before, and we'll go into this in a minute, the default is generally the best setting for most uses, but by changing various parameters you can get better outputs. Flight coverage is also important: you want a lot of coverage of the area you're trying to get information out of, and the more images and the more angles, the better. With good flight coverage and tuned settings you can get really great outputs, and one way to check outputs across different software packages is UAV Arena.
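If you download the orthophoto GeoTIFF and want a quick look without a full GIS, a short script can read it and even approximate the kind of RGB-only "plant health" view mentioned earlier by computing a simple visible-band index (VARI here). This is our own rough sketch, not the exact algorithm WebODM uses: it assumes rasterio is installed and that the exported GeoTIFF stores red, green, and blue as its first three bands; the filename is a placeholder.

```python
import numpy as np
import rasterio

# Orthophoto exported from WebODM's download panel (placeholder filename).
with rasterio.open("odm_orthophoto.tif") as src:
    print("CRS:", src.crs)
    print("Size:", src.width, "x", src.height, "pixels")
    print("Pixel size:", src.res)
    r, g, b = (src.read(i).astype("float64") for i in (1, 2, 3))  # assumes RGB band order

# VARI (Visible Atmospherically Resistant Index): a rough greenness proxy from
# RGB alone; serious plant-health work should use multispectral bands instead.
denom = g + r - b
with np.errstate(divide="ignore", invalid="ignore"):
    vari = np.where(denom != 0, (g - r) / denom, np.nan)

print("Mean VARI over the scene: %.3f" % np.nanmean(vari))
```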
On that note of output quality, I'm going to look at a particular dataset and at a few different settings used to process it. This is the State University of Zanzibar (SUZA); all the flights and data collection were done by Khadija Abdulla Ali, and it's 135 images. Keep in mind that it's the exact same dataset across all of these slides.

First is the fast orthophoto: there's a far-out shot and a close-up shot of one of the university buildings. This only took 23 minutes to process, which is insanely fast, and from the air you might not notice anything was off, but when you zoom in, look at the edges and how undefined they are: things are very weird and wobbly here, and building edges are usually where you'll first notice quality dips. Next is the default setting, which again is going to be your go-to most of the time if you don't want to play with the parameters (but if you do, check out the docs); it will generally give a good output in a reasonable amount of time. This took a little over three hours, which is reasonable for this dataset and the quality we get out of it; when you look at the building, things are a lot less, "pudding-like" is a good way to put it, and the edges are more defined. And this is the high-resolution version: notice that edge detail again on the building, but it took a lot longer than the default, almost eight hours. If you're going for detail and accuracy, using the higher settings and taking the extra time could very well be worth it. And this is the DSM and DTM, just for a time reference; in this case it took about the same as the default.

There are some pitfalls that first-time users of OpenDroneMap, and some other software as well, tend to hit. One unique thing about ODM is that there are a lot of places to start: you can start with multiple nodes processing smaller chunks and bringing them back together with split-merge for larger datasets (I wouldn't recommend starting there), you can use WebODM, you can use WebODM Lightning, you can do a command-line install. There are a lot of options, and all of them require a little bit of reading, a little background, maybe a visit to the docs to get started; some are more complicated than others. (Give me one second, sorry, I didn't realize.)

Another thing we see is people starting with a large dataset. I would not recommend that; that's why the recommended dataset we gave out was only 99 images. Large datasets tend to be more problematic in terms of processing, and by large we're talking thousands of images; I'd recommend no more than a few hundred when starting out, because with extremely large datasets, like I said before, you may have to use split-merge, and that's a more complicated process, particularly setting up the nodes, than just putting something into WebODM and letting it run.

System resources: machine specs are going to be very important, and this might be more of a problem for individual users than, say, corporate ones or people who work in research facilities. RAM is usually the deciding factor, and the more the better. Generally the minimum requirement is 16 gigabytes of RAM, but if you want things to run smoothly, 32 gigabytes is more on the recommended end, and when you get into the hundreds of gigabytes you're good to go: you can actually do other things on your computer while your datasets are running.
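As a small practical aid for the "will my machine cope?" question, here is a sketch (our own, not from the talk) that uses the psutil library to report total and available RAM and compare it against the rough 16/32 GB guidance above before you queue up a big dataset.

```python
import psutil

GIB = 1024 ** 3
mem = psutil.virtual_memory()

total_gib = mem.total / GIB
available_gib = mem.available / GIB
print(f"Total RAM: {total_gib:.1f} GiB, currently available: {available_gib:.1f} GiB")

# Rough guidance from the talk: 16 GB minimum, 32 GB recommended.
if total_gib < 16:
    print("Below the suggested minimum; consider resizing images or using a cloud node.")
elif total_gib < 32:
    print("Workable, but expect the machine to be busy during processing.")
else:
    print("Should handle typical datasets; very large ones may still need split-merge.")
```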
So the more RAM you have, the better; that's something to keep in mind, and you'll also get faster processing, because you won't hit bottlenecks from RAM running out or being maxed out. Another thing to note: CPU is less of a determining factor in processing speed, but if you have plenty of RAM and things are still slow, or your computer is acting funny, do take a look at the CPU. I've had that happen with several different things, not just ODM: I had all the RAM I needed, but my CPU was maxed out, so I still had to let the computer sit there and do its thing. So yes, specifications are something to think about, and so are Docker allocations; learning about Docker is a workshop in and of itself, so I'm not even going to go into it.

Then there are hanging processes and failed processes, and being unsure why. Here the task output is very helpful, which is why I recommend turning it on, particularly if weird things are happening, or it's taking a long time to process, or it errored out and you want to find out why. Usually when something does error out, you'll get a notification in that rectangular bar, but that also depends on how you're using ODM, because the command line is going to be very different from the user interface you get with WebODM.

The other thing is the quality of the output versus the expectation of that quality. There are limits to what photogrammetry alone can do; ODM is still a work in progress, and there are a lot of parameters to play around with. The quality and amount of data, and how the flights are done, are going to be very important. Getting into flight recommendations is, again, a workshop in itself, but generally: the more images of an area you have, the better the output; flying a cross pattern is important for 3D structures such as tall buildings and plants; you want to aim for a minimum of 70% overlap, but for more complex areas like forests you want between 77 and 83 percent overlap, and sidelap as well. One thing to note is that adding overlap, sidelap, and cross patterns increases flight time, so depending on the kind of drone you're using, its flight time, and its durability, you have to take that into account: you can fly the same area in 30 minutes with minimum overlap, minimum sidelap, and no cross pattern, but adding all of those things might turn it into a two- or two-and-a-half-hour flight. You'll get more and better data out of it, but it's something to think about on the logistics side of flight planning and gathering the data itself.

Some of the future directions for ODM: one thing we're looking to add is better multispectral support, which for the most part right now means supporting more sensors; also adding reporting statistics, GPU support to help with processing power and time, processing speed improvements, thermal image support, and being able to use supplemental imagery from 360 cameras.

In summary, OpenDroneMap is a set of tools for 2D mapping and 3D modeling. It's open source, with benefits in terms of cost and control. It's used for site assessment, site monitoring, disaster response, construction, agriculture, and mapping, and mapping alone has so many uses across so many different industries;
we're also seeing it used increasingly in conservation and sustainability. The 2D outputs you can get are orthophotos, terrain and surface models, and plant health assessments; the 3D outputs are point clouds and textured meshes. It can be a little complicated to start off, especially when you get into individual parameters, but the docs are very helpful with that, and I'll get to them in the next slide. Overall, the output quality is comparable to commercial software for a lot less, and it has a very strong, supportive community, which I'll also get to.

So, learning more: I mentioned the ODM docs a few times; they're at docs.opendronemap.org. They give you a quick overview of everything you need to know: how to install, potential hiccups you may come across, the multiple ways of installing, tutorials, a list and explanation of the various parameters you can toy around with, information about the outputs, how to use ground control points (which I didn't cover), how to use split-merge, additional resources for using OpenDroneMap, and flying tips for getting good data. Essentially, good data in, good data out, or, if you're a data scientist, garbage in, garbage out, so the quality of the data is something to keep in mind. There are also updates on multispectral support, and you can always request features and contribute yourself; more on that in a bit. Piero Toffanin, a large contributor to OpenDroneMap, has written a book about it, so if you want to know more about the inner workings of ODM, how to use it, the details of gathering data, and hints and tips and just about everything else, I recommend you check out his book; I've used it myself, so I'm not just saying that. Again, you can use UAV Arena to compare outputs across different software. And there's the community forum at community.opendronemap.org, which I highly recommend joining if you plan on using ODM in any capacity: there's an active community of people who are starting to use OpenDroneMap, have been using it, and contribute to the code. Sign up and make an account; there are multiple different areas in the forum, and if you ever have a problem or a question or want a feature, see whether someone has already posted about it or answered it, and post yourself if they haven't. The community is very friendly and very supportive; this is an example of someone who wanted details about GCPs, and again, people are very helpful and want to answer your questions. So, are there any questions?

Yes, India, first of all thank you, that was a really nice presentation. (Thank you; I'm going to stop sharing.) I'll write in the chat so people can send questions. Are you going to show the output from the data you're running right now? Yes, let me check on that... it has completed, so I'll share this screen and we can actually review it. That's great. So, everyone watching, please send your questions, and we'll start the Q&A as soon as she finishes this part. I do have some questions myself, too.
Okay, while you're organizing the screen... Can you see this? Yes, we can see it. I don't know, for other people who are running this, whether it's still running or not; I have a lot of RAM on my computer, so that might have contributed to why it ran so quickly. When it's done you'll get a "completed" status, and you get to see everything that happened while it was running, in case there was any weirdness or you just want to know what the process was. Once a dataset is completed, I usually go and look at the products before downloading them, just because it's easier to view them in WebODM than to download them and remember to open the right software; most programs can't display point clouds from these files.

This here is the 2D map, the orthophoto, and you can zoom in and get more detail of the area; this is one of our local parks. (Very nice.) Then this is the Plant Health feature; this wasn't flown with a multispectral camera, so this is all just from RGB. And then the surface model. From here you can see all of those things, and there are a few other features as well. There are layers, which I guess matter more when you're looking at plant health: you can pick your algorithm and the filters for that, the color spectrum you want to use, and change the color ramp a bit. (Is it possible to save that output, like an NDVI output?) Yes, I think so; I think you would export it as a GeoTIFF. (Okay, great.) There we go, it says it's a TIFF file. You can also share it and create links to it. You can change the underlying map (I usually just use the Google Maps base), add contour lines, get a preview, and play around with the contours. I've never actually used contour lines for anything; I imagine they're important for elevation assessment and measurements, but I've never been asked to do anything with them. Those can also be exported, in multiple formats, either a GeoPackage or a JSON file. And then this is the measurements feature, which I've also never had to use; I should probably do it on the actual orthophoto here, but when you create the points it gives you the estimated area, the location of those points, and the distance as well.

For example, if I have an image and I have my ground control points in it, I can collect information from the first picture I have of a place and then use the information I collect as ground control points for the others. For example, I do not have the geographic positions of my control points, but they appear in all sets of pictures, right? So if I want to overlay these pictures over time, can I use the ground control point coordinates that I collect right now, from the first picture, as the geographic information for all the other pictures? Is that possible? Does that make sense? I'm not sure I see what you're saying. Sorry, sure; actually, I don't think I've ever used a single georeferenced image as a reference for other images; that's a good question. I'm asking because of what happens in our case: we have our fields, and we go into the field taking pictures during crop development, so we keep following, for example in my case potatoes, we keep following the
potatoes as they grow. We go to the same field many times, and we have these ground control points, but many people don't have a GPS or a device to collect the geographic positions, and we still need to overlay the different sets of pictures taken over time. So what I thought is: maybe do this for the first set of pictures, collect the information you showed there, the latitude and longitude of the ground control points, and then, for all the other sets of pictures we're going to overlay, just use the same coordinates collected from the first set. Does that make sense?

Yes, I see what you're saying. That's super cool, because it's a free, or at least cheap, way of not needing a really precise geographic position if you just want to overlay pictures over time: you run the first one and then use it to correct all the other ones that come later. Does that make sense? Yes, I see what you're saying. So that would use ground control points, which is a feature I've only ever used once. Typically, when we're actually working with the geographic data, we use ExifTool to modify it on the command line. I've never done exactly that myself, but I'm pretty sure someone on the forums has probably asked, so definitely ask your question there and they can probably walk you through it. I have used GCPs as a reference point before, and ExifTool to extract, modify, and append geographic data on images, but I've never done exactly what you're describing, and correct me if I'm not restating it right: having one image that has the GPS data and then using it as a sort of base point for other images that don't. Yeah, kind of, yeah. Basically you use the tool you just showed to collect the geographic positions for the GCP points. I was just thinking of an easy way to not spend money on this overlaying work, which is super common now in agriculture, because we need to keep following the crop. Thank you, that's super cool.

In the meantime there are some questions; can we start the Q&A session, India? Yes; oh, one thing I almost forgot, the 3D model, let me review that really quickly. This is the 3D model of the same area, and if you want to see where it's located you can click this and get the latitude and longitude. For generating a fuller image, this is the point budget I mentioned, which fills it out more or less, and then there are a bunch of features here: you can show where the images were taken, the angles, and the pictures themselves, so you can get an idea of what your coverage and overlap were; there's some background stuff, quality changes, and you can do measurements in 3D as well. Very nice; I didn't want to forget that, but yes, let's start the Q&A. One heads-up: Stephen Mather may be joining as well in a little bit. Okay, so we'll start, and if he joins and has something to add, he can add to what you say.
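Since ExifTool came up as the way geographic tags on images get inspected and modified, here is a small hedged sketch of driving it from Python. It assumes the exiftool command-line program is installed and on your PATH; the coordinates and filename are placeholders, and it's worth testing on copies of your images first.

```python
import subprocess

image = "plot_042.JPG"  # placeholder image

# Read the GPS-related EXIF tags currently on the image.
print(subprocess.run(
    ["exiftool", "-GPSLatitude", "-GPSLongitude", "-GPSAltitude", image],
    capture_output=True, text=True, check=True
).stdout)

# Append/overwrite GPS tags, e.g. coordinates measured once for a fixed plot marker.
subprocess.run(
    ["exiftool",
     "-GPSLatitude=43.0731", "-GPSLatitudeRef=N",
     "-GPSLongitude=89.4012", "-GPSLongitudeRef=W",
     "-GPSAltitude=260",
     image],
    check=True,
)
# By default ExifTool keeps a backup copy named plot_042.JPG_original.
```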
So, the first question is: "Hi India, thank you for the presentation. Is it possible to use more than one node for the same group of images in WebODM?" I do not think so, no. When you use WebODM and you input the images here, you can pretty much pick one node, unless you set it up so that the node you send the data to splits the images up once it receives them, but you can't do that directly from WebODM unless you use NodeODM. If you use NodeODM and set up the API, you can essentially set up a node such that, once you send the task off to it, it splits the work across multiple other nodes you've already set up through the API, runs it, sends it back to the primary node, and then it posts here. But that requires some setup outside WebODM. For WebODM Lightning, you may be able to do that within the interface. So the node you select there, is that basically a way to parallelize the process across different nodes? Not with this interface, no; it would require a separate setup.

Should I ask the next one, or do you want to? I'll share it here. This one says: "Hello India Johnson, I'm a big user of WebODM, mosaicking images from peanut phenotyping in Africa. I would like to know how long WebODM is going to be open source." That's a good question. For the foreseeable future, there are no plans to make ODM private or to sell it. The purpose when it was created was to keep it open source, 100 percent, forever; the controlling entities and interested parties are all very adamant about keeping it open source and free, and I've never heard anything different.

There's another question: "Great. How well does the current version do with a DEM of homogeneous farmland?" I've never flown farmland, but it would depend on the flight itself; that's where you get into the flight planning, the number of images, and the angles. My recommendation, if it's very homogeneous over a large area, would be to get as many angles as possible, particularly side and diagonal angles, and add those to the image set so there are more reference points, because differentiation between images is going to be very important when you're processing. For the DEM specifically I feel that's going to matter a little less, because there will be elevation differences at the edges, so I would say it would be pretty accurate, but that's entirely dependent on the flight specs and the parameters, and I recommend the community forums for details about specific specifications, parameters, and how to fly it to get the best product.

Okay, great. We have another question here: first he says thank you, and then he asks, "Could you please talk a little bit about drone configuration?" I guess he means, and it's one of my questions too, the coverage between pictures that you normally use for drone flights, or any configuration before taking pictures. Most of the flights I do are with the DJI Mavic Pro; I've used the Mavic Mini, I've used eBees, and I've used a few other fixed wings. I would say that if you know your drone well and how to use it, and you have the appropriate software, the exact drone you use matters less; generally the DJIs are used because they integrate well with other software and are easy to use and easily accessible. But I guess, when
Philippe, do you want to take one, or should I keep passing them along? I am going to share this one: "Hello, I am a big user of WebODM, mosaicking images from peanut phenotyping in Ghana, Africa. I would like to know how long WebODM is going to stay open source." That is a good question. For the foreseeable future there are no plans to make ODM private or to sell it. The purpose when it was created was to keep it open source, one hundred percent, forever, and the controlling entities and interested parties are all very adamant about keeping it open source and free. I have never heard anything different.

There is another question: how well does the current version do a DEM of very homogeneous farmland? I am not sure; I have never flown farmland, but it would depend on the flight itself. That gets into flight planning, the number of images, and the angles. My recommendation for a very homogeneous large area is to get as many angles as possible, particularly side and diagonal angles, and add those to the image set so there are more reference points, because differentiation is very important during processing. For the DEM specifically it may matter a little less, since there will be elevation differences at the edges, so I would say it would be pretty accurate, but that is entirely dependent on the flight specs and the processing parameters. I recommend the community forum for details about specific parameters and flight specs on how to fly that and get the best product.

We have another question. First he says thank you, and then he asks whether you could talk a little bit about drone configuration. I guess he means, and it is one of my questions too, the coverage among pictures, the overlap you normally use, or any configuration before taking pictures. Most of the flights I do are with the DJI Mavic Pro; I have also used the Mavic Mini, eBees, and a few other fixed wings. If you know your drone well, know how to use it, and have the appropriate software, the specific drone matters less. The DJIs are mostly used because they integrate well with other software and are easy to use and easily accessible. But when you say configuration, do you mean the cameras in the drone, the actual type of drone, or the whole setup?

Let's see. Surya, could you put more information in the comments about what kind of configuration you are looking for, and then maybe we can explain a little better. For example, in my case we fly at 200 feet with 75 to 85 percent overlap. At 200 feet you would likely get a really good product; what are you flying over, though? It is unusual for us, since people often fly at 50 or 100 feet: we work in plant breeding, so we have really small plots and we need high resolution to identify plants or sometimes even count flowers. Oh, okay, so it is all very low vegetation. Then it depends: if everything is very low to the ground, 200 or 250 feet is kind of high. When I wanted to look at Phragmites, a very tall invasive plant that gets up to maybe 10 feet, I flew at about 150 feet and got really good data (there is a rough flight-height versus pixel-size sketch after this exchange). The wetland I showed you with the textured mesh was flown at, I believe, 150 or 200 feet, and I can pull that one up, because it was one of the lower flights with a lot of detail. It was a combination of trees and low shrubbery, and I see that a lot in the areas I fly: a mix of low wetland and trees. Unless I am processing each area individually, I usually just fly 10 to 20 feet above the trees, but you lose some of the detail on the ground at times. Let me see if I can find that dataset. Here it is: this one was flown at, I believe, 200 feet, which was maybe 30 feet or so higher than the tallest tree, while everything apart from the trees was very low to the ground. You can still make out patches of things, even if not in detail, and this was probably one of the more detailed flights I did at a relatively low elevation. It was with the Mavic Pro and its standard camera.

Cool. And did you use a multispectral or hyperspectral camera to make orthomosaics before using OpenDroneMap? No, we have not used multispectral cameras in flight yet; we have not attached them to any of the drones. The DJIs typically have their own fixed cameras, and we have fixed-wing drones and vertical-takeoff VTOLs that we can attach multispectral cameras to, and we actually have the cameras, but then the pandemic happened and the experimental side of that got pushed down the line a bit, so we have not recently collected aerial multispectral imagery. I do know there were a few projects in the past, before my time with the Cleveland Metroparks; Stephen Mather did those flights, but I do not know if they are up here. If you go on the community forum there are people who have flown with multispectral cameras and produced orthomosaics, and they might be able to provide a little more information. I do not think our multispectral datasets are up here, because those would have been from around 2016, I believe, and I started in 2018.
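The flight-height discussion above is really a question of ground sampling distance, so here is a small back-of-the-envelope sketch of the standard GSD formula in Python. The camera numbers are rough, Mavic-Pro-like assumptions rather than official specifications.

    # Ground sampling distance: how much ground one pixel covers.
    # GSD (cm/px) = sensor_width_mm * altitude_m * 100 / (focal_length_mm * image_width_px)
    def gsd_cm_per_px(sensor_width_mm, focal_length_mm, image_width_px, altitude_m):
        return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

    SENSOR_WIDTH_MM = 6.17   # ~1/2.3" sensor (assumed)
    FOCAL_LENGTH_MM = 4.7    # assumed
    IMAGE_WIDTH_PX = 4000

    for feet in (100, 150, 200):
        metres = feet * 0.3048
        gsd = gsd_cm_per_px(SENSOR_WIDTH_MM, FOCAL_LENGTH_MM, IMAGE_WIDTH_PX, metres)
        print(f"{feet} ft (~{metres:.0f} m): about {gsd:.1f} cm per pixel")

With numbers like these, 200 feet works out to roughly 2 cm per pixel, which is why it can feel coarse for counting flowers on small breeding plots while still giving plenty of detail for trees and wetlands.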
But the forum will definitely have more information, and I think the docs may too.

I think Steve just joined, so maybe we can add him to the call. Hello! Hi, sorry for the delay, I was at the doctor's office, so I could not get here any sooner. No problem, we are in the middle of the Q&A session. India, do you want to ask Stephen to complement any answer you already gave? Yes: the question about the GCPs and overlaying orthomosaics, and also the one about previous multispectral flights.

Okay. The first question was whether the following idea is reasonable. In plant breeding we normally fly many times during the crop season, because we need to keep following the plants as they grow, and then we extract and evaluate that information, so we need to overlay the different orthomosaics precisely. Suppose we have ten flights of the same site: what I was thinking was to use the first orthomosaic as a base to geographically correct the other ones and keep the overlay consistent. You just showed a tool in WebODM where you can click and read the latitude, longitude, and altitude of a specific point. So imagine I do not have GPS equipment and cannot survey my ground control points. Could I use that tool on the first orthomosaic to read the coordinates of my ground control marks, and then, for the next set of pictures that I turn into an orthomosaic, bring in those ground control points from the previous flight? Does that make sense?

Yeah, that makes perfect sense as a workflow. There might also be ways, on the multispectral side, to use some of the correlations in the data, though given the changes in the plants and in the lighting, the GCP approach is probably the better one. So yes, if you need relative geolocation, using GCPs that way is absolutely a way to go. That is great, because I was looking for a way to avoid going out and surveying GPS locations: just fly the drone and get everything from that tool in WebODM. You really only need relative correction in that case, and in some ways it is actually easier, because the points do not have to be absolutely accurate to give you good relative correction. Exactly. We draw a field shapefile and then extract information from it, so if you only have to set that up once it helps our plant-breeding work a lot.
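To make that relative-GCP idea concrete, here is a hedged sketch, not from the workshop, of writing OpenDroneMap's ground control file (gcp_list.txt) in Python. Every coordinate, pixel position, and filename below is a made-up placeholder: the ground coordinates would be read off the first orthomosaic with WebODM's point readout, and the exact file format should be checked against the current OpenDroneMap documentation.

    # Each entry ties one ground point (read off the first orthomosaic) to where that
    # point appears in one of the new photos. Each ground point should ideally be
    # identified in several photos.
    gcps = [
        # (geo_x, geo_y, elevation, pixel_x, pixel_y, image_name)  -- placeholder values
        (445678.1, 4423456.2, 210.3, 1523, 2201, "DJI_0101.JPG"),
        (445701.7, 4423480.9, 209.8, 3310, 1987, "DJI_0105.JPG"),
        (445690.4, 4423430.5, 210.6,  845, 3102, "DJI_0112.JPG"),
    ]

    with open("gcp_list.txt", "w") as f:
        f.write("EPSG:32616\n")  # first line: coordinate system of the geo columns (a UTM zone is assumed here)
        for geo_x, geo_y, elev, px, py, image in gcps:
            f.write(f"{geo_x} {geo_y} {elev} {px} {py} {image}\n")

Because every later flight reuses the same ground coordinates, the orthomosaics should line up with one another, even though their absolute accuracy can only be as good as the first flight they were read from.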
And the other question was whether you use hyperspectral or multispectral cameras to build orthomosaics with this platform. We do have multispectral support. It is one of the more challenging parts of the project, because the lack of standardization in the industry is a nightmare: everyone has their own calibration process and their own special way of handling file formats and all the bands. So it is definitely a challenge, but we do support it; I think we have supported it since 2019, and we have gradually been adding more formats as they become available. It is one of the areas that could use more resources, whether that is time, with folks saying "this is the sensor I have and I want to get it into the system," or financial support. Time is probably the more important thing right now, because everybody on the development side is working at full tilt, so we could definitely use more hands. Great, thank you.

There is one other question, which is more technical: can you open KMZ files? I do not think we have support for KMZ in WebODM directly. A lot of the focus for WebODM is making OpenDroneMap easy to use, and we tend to expect that people will use QGIS or some other desktop or cloud-based product to continue doing analyses, overlays, and so on (there is a small sketch after this exchange of picking the outputs up in code). If there is a good general use case, there is value in adding to what the browser can do; we have some extensions that let you overlay things, and there is value in making that more interactive, but we are also not trying to completely replicate a desktop GIS.

I have a question that may be connected to Surya's question about drone configuration, which India partly answered already, but maybe you have something to add. I have not used OpenDroneMap yet, but I am going to, and we did not actually use a drone to collect the images; we used an in-house device, basically a camera on a rig, because we are analyzing cranberries, which are very small plants. Drones can be quite expensive to buy; if you have a very large breeding program maybe it is worth it, but a starting program may not have that budget. So my question is: do you need drone images, or are georeferenced images fine, or even just ordinary images?

We have done some of that. Back in December I started playing around with a lot of unreferenced imagery; I even created ground control points for a ukulele that I reconstructed in 3D. It was a slightly ridiculous holiday project, because I could not let go of work even though I should not have been working on my vacation, so I started reconstructing all sorts of things. Piero Toffanin, the primary core developer for OpenDroneMap right now, looked at that work and said, all right, let's make sure we can handle unreferenced imagery better. So if you load unreferenced imagery, from a drone or otherwise, it will do its best to choose what the Z axis is; it does not always get it right, but it makes a good best guess. Unreferenced or referenced, drone or no drone, we absolutely support it. That is good to know, because otherwise it was going to be a challenge for me.
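As one example of that "continue the analysis outside WebODM" workflow, here is a hedged sketch of opening an ODM orthophoto with the rasterio library and computing a simple RGB greenness index in Python. The project path, and the assumption that bands 1 to 3 are red, green, and blue, are illustrative only.

    import numpy as np
    import rasterio  # pip install rasterio

    # odm_orthophoto.tif is ODM's standard orthophoto output; the folder layout is an assumption.
    ORTHO = "my_project/odm_orthophoto/odm_orthophoto.tif"

    with rasterio.open(ORTHO) as src:
        print("CRS:", src.crs)          # may be empty/local if the input photos were unreferenced
        print("Pixel size:", src.res)
        r, g, b = (src.read(i).astype("float32") for i in (1, 2, 3))  # assumes RGB band order

    # Excess Green: a rough vegetation indicator you can compute from a plain RGB camera.
    exg = 2.0 * g - r - b
    print("Mean ExG over the scene:", float(np.nanmean(exg)))

From here the same arrays could be clipped to plot boundaries from a shapefile, which is essentially the "draw a field shapefile and extract information" routine described a little earlier.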
I think that is it for the queue for now; the remaining comment is more of a suggestion for me and Felipe. They are asking that we run a seminar specifically on drones, because many people, me included, do not know which kinds of drones are out there, so it would be nice to invite someone who can give an overview of the drones you can use in plant breeding.

Real quick, there was a question about using nodes in the WebODM interface: can you use more than one node for the same group of images? Yes, that is a great question. If I understand it, what you want to do is take a large dataset and split it across a set of processing nodes. Normally you have WebODM as your basic interface and a node behind it that does the processing; that node is a relatively simple, self-contained thing, and maybe it is someone else's server, or something that does not sit on your laptop but lives somewhere else. What you can also do, though, is put a proxy in between that acts as a distributor node, with multiple nodes behind it. It used to be called node-odm-proxy; now it is called ClusterODM. The directions are relatively simple and they are on the GitHub repo, github.com/OpenDroneMap/ClusterODM. That cluster can either be an autoscaling cluster, so it can call on resources on cloud instances, or you can connect your own nodes into it if you have multiple machines that are individually smaller than what you would need to process all the images. When you submit a job you say how many images you want in each portion, each submodel: say you have discovered you can process up to 400 images on your nodes, then you would set your split to maybe 300 or 250, something slightly smaller. It will create a bunch of submodels, little groups, process them on each of those nodes in parallel, bring them back, and stitch them all together (there is a small sketch of this after this exchange). This makes a big difference if your hardware is small or your project is huge. We have done this on datasets of up to 30,000 images in the past, and we actually just completed the first round of a project processing 91,000 images this way, breaking it into 5,000-image batches and chunking through, and it is really fantastic. It is not as simple as just processing something in WebODM, because there is a lot more administration, more things to troubleshoot, and more points of failure if something goes wrong, but it absolutely works, and works pretty seamlessly.

Great. Actually, I will post the link to ClusterODM in the comments so you can take a look. Awesome, thank you.
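Here is a hedged sketch of what that split workflow can look like from the client side, again using PyODM. It assumes a ClusterODM proxy is reachable on port 3000 with worker nodes already registered behind it (the ClusterODM README covers that setup); the option names follow ODM's split options, and the exact numbers are placeholders.

    from glob import glob
    from pyodm import Node  # pip install pyodm

    # Assumption: ClusterODM's NodeODM-compatible proxy is listening on localhost:3000
    # and the worker nodes have already been added to it.
    cluster = Node("localhost", 3000)

    task = cluster.create_task(
        glob("big_survey/*.JPG"),
        {
            "split": 300,          # target images per submodel (slightly below what one node can handle)
            "split-overlap": 150,  # metres of overlap between neighbouring submodels
            "dsm": True,
        },
    )

    task.wait_for_completion()        # submodels run in parallel and are stitched back together
    task.download_assets("./merged")  # final merged orthophoto, DEM, point cloud, etc.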
There is another question: is there any way to adjust SIFT parameters for very homogeneous farmland, since image processing usually has problems there? Good question. I do not think we expose the SIFT parameters in WebODM itself. One thing you can try is a different feature extraction technique: there is SIFT and there is HAHOG (I am never sure how to pronounce it), and I think those are the two options right now; I believe we default to SIFT since the patent expired. But if you are asking that question, you may be comfortable changing some parameters on the back end in OpenSfM, which is the underlying structure-from-motion library, and I am guessing that is basically a static file. We can follow up; I can do a little digging and find out what that process looks like and how it can be done, but there is probably a way to do it.

If you have any other questions after this, you can post them on the community forum; I am sure Stephen and India would be happy to answer them there.
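Purely as an illustration of the kind of back-end knobs being described, here is a sketch of OpenSfM-style feature-extraction settings written out as YAML from Python. The key names come from OpenSfM's configuration but may vary by version, and where such a file belongs inside an ODM project is exactly the follow-up promised above, so treat the path and workflow as assumptions.

    # Candidate OpenSfM feature-extraction settings (key names may differ between versions).
    overrides = {
        "feature_type": "HAHOG",       # alternative detector to SIFT
        "feature_min_frames": 12000,   # ask for more features per image in low-texture scenes
        "feature_process_size": 2560,  # extract features from a larger resized image
    }

    # Write them out in YAML form; where this file should live inside an ODM project,
    # and whether ODM preserves it between runs, is the open question mentioned above.
    with open("opensfm_feature_overrides.yaml", "w") as f:
        for key, value in overrides.items():
            f.write(f"{key}: {value}\n")

For what it is worth, the related setting that is exposed through the normal task options is min-num-features, which covers the most common of these tweaks without touching OpenSfM directly.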
Yeah, great. I think that is most of the questions, so thank you for joining and showing us this application; it was very nice to learn more about the software. I really appreciate the invite, thank you. And thank you for the amazing work you are doing; I know you are helping many people, and we are really happy to have the opportunity to help put this out there. This community is mostly about plant phenotyping, but I know you are well known in construction and environmental research, and hopefully people in agriculture start using the platform more, because it is amazing. We have some time left, so feel free to use this space to talk with our community about projects that are coming.

One of the interesting things we have been looking at is using multispectral cameras and multispectral imagery to identify invasive plants. That was one of the things we were starting to work on a little before the pandemic, and then everything got disrupted, and with the seasons changing the timing is a little awkward to get it back on track. We are hoping that sometime in the spring, when things start growing again, we will be able to get out there and work out which multispectral cameras to use, at which wavelengths, to identify which plants, and potentially invasive animals, beetles and other things, and just get an idea of what can and cannot be done in that space.

To that end, I think India talked a little about integration of 360 cameras. One of the things I expect to come out of the ODM360 project is an opportunity for additional sensor projects. Right now, deploying a multispectral sensor on a drone is quite an expensive operation, but the components that actually go into producing that six-thousand or fourteen-thousand-dollar device, whatever the price range is, are not great sensors to begin with; they are relatively inexpensive, broadly speaking commodity pieces that have been cobbled together, with some calibration and some clever additional services bundled into the package. In putting together our own 360 camera, which includes multiple individual cameras, we are working through the problems of synchronizing those sensors, and as we check that box and get the 360 camera ready, it gives us the opportunity to build an array of cameras that is really good at capturing three-dimensional structure, or an array of cameras with different band-pass filters that capture different spectra and tell us things about plant stress or other characteristics of what is happening in the growing environment. So watch this space; I think there is going to be some fun stuff. I should say that my background many years ago, back in 2004, was actually attempting to use multispectral sensors to look at carbon sequestration processes in forested landscapes, so I have been interested in this problem for a long time. It meshes well with our needs within the parks and it is coming to fruition in this project, so it will be good to have this community in particular helping drive some of those questions and helping us understand what the needs are, because I think there is a lot of overlap.

I think a camera like that would be very cool. I work mostly with tree species, and so far high-throughput phenotyping has focused on cereals and annual plants because they are easier to manage. With trees you cannot replant every year; sometimes you have to wait ten years. So it would be really cool to have these tools, because they would also let breeders working on trees use high-throughput phenotyping, and flying drones over tree orchards may not be the same as flying over annual crops like potato or rice. Probably someone is already doing it, I do not know, but if OpenDroneMap can handle that too it will be very cool, so I really support it.

There is another question about the 360 camera: something like a GoPro 360? Yeah, the animated GIF in India's presentation was actually from the GoPro 360 camera. It absolutely works, and they have made some improvements in the last few months that make a big difference. It is absolutely usable: basically just change your camera lens type to spherical and it will just work in OpenDroneMap. The biggest issues are that, if memory serves, it is 14 or 16 megapixels for the entire sphere, so it is pretty low resolution, which creates some challenges with matching; in our quick driveway test we found that it takes a lot of photos to do a successful 3D reconstruction with it. But it is also 500 bucks and you just walk around with it. What we are working on with ODM360 will probably hit, depending on what GPS you use with it, about the thousand-dollar price point, maybe as low as 800 with a cheap GPS chip, but it will be closer to 75 megapixels, so there will be a lot of detail you cannot get from any other sensor in its price class; an equivalent sensor, last I checked, is probably in the 15,000-dollar range. So if you are willing to do the DIY, grab a few Raspberry Pi cameras and put it together, it is probably worth the labor.

That is awesome, thank you. I think we have the last 10 minutes, so maybe we can introduce the next workshop, since this has already been a two-hour session. Again, thank you a lot for everything today, it was great. And if you want the links to OpenDroneMap, they are also on our website.
We recently updated the website, so you can find them there. This video will also be posted on our YouTube channel, so people can come back to it, and feel free to share it with your community too. So thanks, Stephen and India, and thank you, everyone.

Well, let's talk about the next workshop. Today's presentation was great; I learned a lot. First of all, that was the second person suggesting a seminar on drone configuration, so maybe we should find someone who can give an introduction to that in a future series of workshops. For now, here is the next one in the current series: the developers of PlantCV, a Python package that I have used for my own project. It is a very cool package because it is open source and community-developed, and the developers are amazing; they will support you even if you have never programmed in Python, like me, so if I managed it, you can too. The next workshop will be an introduction to image-analysis workflows with PlantCV, with Noah Fahlgren, whom we thank every time, Malia Gehan, and Haley Schuhl. If you already registered for this workshop series, you do not need to register again; we will send you all the information as usual every Thursday, and if you want to invite other people or collaborators, they can sign up using the registration link.

Also, during the workshops we have seen that people have started to introduce themselves on the Slack channel, which is great, and they have started to communicate with each other. We also created a new Slack channel called job-offers, because Phenome Force can be a good platform for sharing information about jobs in phenomics, and there is already some sharing there; that is another good reason to join this workspace and be part of the community. Please help us share this information, and subscribe if you have not yet. Talk to you next Friday; we will be here again at 10 a.m. Central Time. Bye, everyone.
Info
Channel: Phenome Force
Views: 1,309
Id: U-bsA7QjzYE
Length: 111min 50sec (6710 seconds)
Published: Fri Oct 16 2020