React Native EU 2021 - Virtual Conference: Day I

Captions
Hello everybody, and welcome to React Native EU! My name is Mike and I'm your host of React Native EU, so welcome everybody to our stream. We are streaming out of Wrocław, Poland, and as you can see, this conference is happening remotely this time as well, just like one year before. Regardless of this fact, we want to make sure that you take the maximum out of it, because networking and sharing knowledge with each other is what we think is the most important aspect of React Native EU. So despite being remote, we put a lot of effort into making sure this will be easy for you to do. Before we start with the talks, I have a couple of pieces of information for you connected to the networking part. There is a Discord channel on our Callstack server; you can find the link down below. You can join it and ask your questions about the talks, and we will answer them on our special React Native Show podcast that will be published soon after the conference. You can also talk with other participants and share your knowledge and experience with each other based on the talks. So just log in there, stay there, and treat it as your networking spot,
just like we were all here in Poland, in the same place, in Wrocław. Also, make sure to follow Callstack on Twitter and React Native EU on Twitter, and tweet everything about the conference with the React Native EU hashtag. Let your friends know that we are live so that they can join us and not miss the first part of the conference, which will be very exciting, like the entire two days ahead of us. You can also visit our React Native EU website, where you can see the agenda and all the talks, and in case you are busy you can pick the ones that you want to attend. But worry not: we will publish them to our YouTube channel a couple of days or maybe weeks after the conference. It's still better to be here with us, though, so you can get first-hand experience and knowledge straight from the speakers. We are organizing this conference, as I said, in our Callstack office, so as you can probably assume, Callstack is the organizer. Callstack is a team of super great React Native and React developers, and we are doing very interesting projects, all about cross-platform software. So in case you are somewhere around Poland, or maybe from Poland, make sure to apply, or just let us know that you are interested in changing your job; we have a lot of great, interesting opportunities and challenges that may make you feel like there are still a lot of exciting projects ahead of you. Just one important piece of information before we start: this conference is also co-hosted by my friend from Callstack, Łukasz. So in case you see somebody else, not me, introducing you to a talk today or tomorrow, don't be afraid, everything is going okay. We are doing this together, because there are just so many great talks here today that I felt it would be a good idea to share some of that experience with somebody else at Callstack who also likes doing podcasting and conferences. So let's start with what you are all in for here: the React Native talks. The first block is the
cross-platform and architecture block. In this part we will be talking about the things that are important to React Native at a core level: underlying building blocks and some things that you may not hear about every day, but understanding them and having full exposure to them will make you feel like you can build even more advanced apps. Our first speaker is Marc from Expo, and he will be talking about how JSI powers the most advanced camera library out there for React Native. Of course he will be talking about the camera library, but one thing that you can't miss from this talk is that JSI is something that is yet to be public and open for everybody, yet the folks at Expo are using it already for the camera library. So we are not only here for a great talk about camera features, but also about how JSI enables you to do even more advanced things. You can think of it as a very future-proof talk, something that maybe a year or two from now will be what everybody is talking about. So let's learn how they are using JSI and how their camera library works. Hello everyone, my name is Marc, and welcome to my talk about VisionCamera and JSI. I will be splitting this talk into two parts. In the first part we're going to take a look at VisionCamera as a library and how you can use it in your app, and we're going to create a simple object detector app using frame processors. In the second part I will go over how JSI works, what JSI really is, and how you can create a simple, fast and synchronous library using JSI. So let's get right to it. I've created a new React Native project and installed Reanimated and VisionCamera. I've added some basic code to request camera permission: if the user has not granted permission, we're going to display that; if there is no device available, we're going to show a loading indicator; otherwise we can display the camera. Let's take a look at how this looks for now, and as you
can see, the camera is running fine. All right, we have our camera running; how do we implement an image labeler now? Let's first try to understand how this works in native apps. In a native iOS app you have to create your camera session and then create a new instance of the AVCaptureVideoDataOutput class. Then you can implement the captureOutput delegate method, and this method gets called for every frame your camera sees. For example, if your camera is running at 60 FPS, this function gets called 60 times a second with a new frame. In there you can implement any sort of processing you want: you can use this for facial recognition, object detection, image labeling, QR code scanning, and even uploading the frames to WebRTC to create a real-time video chat. On Android it's basically the same story: you create a new ImageAnalyzer, and this ImageAnalyzer gets called with a new image every time the camera sees one. So, for example, in here you might want to run your image labeling, object detection, facial recognition or WebRTC uploading. Okay, we now understand how it works in a native iOS or Android app, but we're React Native developers, we're all afraid of native code, so how can we do this in JavaScript? Luckily, VisionCamera provides an API for this: it's called frame processors. Just like the ImageAnalyzer or the captureOutput delegate from your native iOS and Android apps, this function gets called for every frame the camera sees. You can use the frame object to access frame data; for example, you can inspect the image width or height properties. For high-performance algorithms you can also create native functions, so you write a few lines of native code and then directly call them inside a JavaScript frame processor. For example, the detectIsHotdog function is a native function written in Objective-C or Swift, and Java or Kotlin. Let's take a look at how we can create a simple frame processor. We're going to create a new variable called frameProcessor, and
we're going to use the useFrameProcessor hook to create a new frame processor. Inside the useFrameProcessor function we have to use the 'worklet' directive, so let's add it, and let's just simply log something to the console every time a new frame arrives. All right, let's add the frame processor to the camera, hit save, and test the change. In our console below we can now see a log line every time the frame processor gets called, which happens once per second by default. We can also adjust this behavior by passing frameProcessorFps with some higher value, such as 10, which makes it get called 10 times a second. All right, let's start creating our first frame processor plugin. A frame processor plugin is a native function you can write in Objective-C or Swift, or Java or Kotlin, and you can directly and synchronously call it from a frame processor. For example, if we create a native function called labelImage, we can call it like this. To create the native function, let's go ahead and open the project in Xcode. All right, let's start implementing the plugin. We're going to create an interface and an implementation for the plugin, and then we're just going to create a static inline function which is called every time you call the plugin. In this case I'm calling it labelImage, and the function receives a frame as well as any arguments passed to the function. We're going to export it to make it available in the VisionCamera runtime using the VISION_EXPORT_FRAME_PROCESSOR macro. Let's go ahead and start implementing the labelImage algorithm. We're going to use ML Kit for this, and ML Kit has an API where we can label images. We need to add the library to our Podfile and then run pod install. Then we're going to create an MLKVisionImage instance and assign an orientation. After we've created this, we have to create an MLKImageLabeler instance, which is responsible for labeling images. Then you can use the processImage function to
process an image. Let's try it. We're going to create an MLKImageLabeler instance, which is going to be a static instance we're going to reuse, and then we're going to create a new MLKVisionImage instance. We're going to initialize it with the frame buffer, as you can see here we use the frame, and we're going to set the orientation to the frame's orientation. Then we're going to scan for labels; in this case we're using the resultsInImage function, which is synchronous. Using the labels, we can now initialize a new array and fill in the labels: in this case we're creating an NSMutableArray, I'm going to call it results, and then we're going to iterate over each label and add it to this array. Then we just return it. All right, that's it, that's all of the native code you need for a frame processor plugin. Let's go ahead and create the JavaScript side now. I'm going to create a new file called labelImage, and in this file I'm going to export a new function called labelImage. I have to use the 'worklet' directive so it can be called from the VisionCamera runtime. Then, as we learned earlier, the function now exists in the VisionCamera runtime prefixed with two underscores, so I'm just going to call it like this. Lastly, we need to add it to the Babel runtime, so open your Babel config and find the line where the Reanimated plugin is added. We just need to add a new configuration to it and pass it in as a global variable. That's all you need to do. Now restart your Metro bundler and you can simply call the function. We're going to import it, and let's try just logging all the image labels. So let's start the app and take a look: as you can see, the frame processor plugin gets called, and we're logging all of the labels to the console. All right, now that we know how to use frame processors, let's take a look at how JSI works. We're going to create a simple JSI library, and afterwards we're going to look into more advanced examples such as the JSI HostObject and how VisionCamera uses JSI to provide all of this functionality.
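The JavaScript side of a plugin like this is small enough to sketch. The block below is a hypothetical, self-contained version: in a real app the two-underscore-prefixed `__labelImage` is installed into the runtime by the native plugin, so here a plain JS stub stands in for it so the file runs on its own.

```typescript
// Minimal sketch of the JS wrapper for a frame-processor plugin.
// The Frame type is a simplified stand-in for VisionCamera's Frame.
type Frame = { width: number; height: number };

// Stub standing in for the real native host function (assumption;
// in the app this is registered by the native plugin itself).
(globalThis as any).__labelImage = (_frame: Frame): string[] => ['hotdog', 'food'];

function labelImage(frame: Frame): string[] {
  'worklet';
  const plugin = (globalThis as any).__labelImage;
  if (plugin == null) {
    throw new Error('labelImage plugin is not registered!');
  }
  // Synchronous call into the native plugin: no await needed.
  return plugin(frame);
}

console.log(labelImage({ width: 1920, height: 1080 })); // -> ['hotdog', 'food']
```

The wrapper only adds a guard for the case where the plugin was not installed (for example, when running under a remote debugger).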
Let me start by explaining what JSI really is. I'm sure all of you already know that React Native uses a bridge for communication between JavaScript and native. Since the JavaScript runtime runs in a very isolated context, there's not a lot you can do in it. Sure, you can create functions, create numbers, add numbers, create strings, create objects and stuff like that, but what if you wanted to access the device name or the phone's IP address? Those things are only available through platform-specific APIs, so APIs written in Objective-C or Swift, and Java. As with most programming languages out there, you cannot automatically and directly call into another programming language; there always has to be some sort of communication layer in between. For example, in Java you can call into native C or C++ code using JNI, the Java Native Interface. In Swift you have to manually create a bridge using C or Objective-C, which acts as a communication layer in between. In C# you have to reference the function's name and the DLL it lives in. That's why the React Native bridge was invented: it provided a tunnel to send messages between native and JavaScript. If you wanted to find out the IP address, you had to send a message like "get ip address" to the native world; the native world receives that message, finds out the actual IP address, and then sends back another message with the IP address in it. JavaScript receives that message, and now JavaScript has a value which contains the IP address, which is a string. This is of course not an ideal solution, since the bridge uses a batching system: a message is not immediately sent, but rather batched with other messages, and at some later point in time all of the messages are sent to native. Also, there's a lot of serialization going on here, which is done in the JSON format, so you cannot even send a number to a native function without it being converted to a JSON string first. As you can
imagine, that's really slow. Okay, so what's the solution? Well, since our JavaScript runtimes (JavaScriptCore, Hermes, V8) are written in C or C++, some very smart people invented a C++ API called JSI. JSI is an abstraction layer over virtually any JavaScript runtime; it works the same for JavaScriptCore, for Hermes, for V8, and any other runtime that might implement JSI. If you know object-oriented programming, you can think of it like an interface, which is exactly what it is: it defines functions which the runtimes implement. For example, you can create a JavaScript number straight from C++. Okay, but let's take a look at some actual code. In JavaScript we can create numbers by using the assignment operator; in this case the variable number stores the value 42. Let's see how we can create this in C++: we use the jsi::Value constructor to create a new value called number, which holds the value 42. This variable can directly be passed to JavaScript, either by passing it as a parameter or by setting it as a property on an object. Let's try creating a string. In JavaScript we can again use the assignment operator and two quotes. In C++ we also have to define the encoding format, in this case UTF-8, and we have to pass the runtime again. The value "name" now exists in the C++ world, so we can directly run some operations or functions on it. Now let's take a look at functions. Let's create a function which just adds two numbers together; we're going to call this function add. In JavaScript we can again use the assignment operator and create an anonymous function; this function adds first and second together and returns the result. In C++ we have to create a jsi::Function, which creates a new function from a host function. A host function is a function which actually lives in the host environment, so in the native world this function contains C++ code, but it can be directly called from JavaScript. We have
to pass the runtime, we have to pass the prop name, which will be add in our case, and we have to specify the number of parameters, in this case two, the first and the second number. Then we can create a C++ lambda. In this lambda, the first argument is the runtime; the second argument is the this value, so you can bind a function to another base value; the third argument is a C-style array of all arguments passed to the function, so in the first position there would be the first number and in the second position the second number; and the count parameter specifies how many arguments were actually passed to the function, which in our case will be 2. Then the lambda returns a jsi::Value. You always have to return a jsi::Value, and if you don't want to return anything, you have to return a jsi::Value of undefined. So in this case we're casting the first argument to a number, then casting the second argument to a number, and adding them together. Then we return a new jsi::Value of the result; numbers in JSI are always doubles. Let's try calling this function. In JavaScript we will simply use add and pass two parameters. This function add can either be the one we created in JavaScript or the one we created in C++, the host function; we can call both of them synchronously. Let's try calling the same function from C++: again we have to have a reference to the add function, and then we can simply use the call method to call the function, passing the runtime and then all the variadic parameters we want to pass. So let's try adding this function to the global namespace. In JavaScript we can simply use global.add and then use the assignment operator to assign this property. In C++ we can use runtime.global() to get the global object, and then we can use setProperty to set a new property. I won't go into detail about C++ memory management, but you have to move the add function, because there's no copy constructor for jsi::Values. So in this case, we set the add function on the global namespace in the JavaScript runtime.
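What this looks like from the JavaScript side can be sketched in a few lines. Here a plain JS function stands in for the installed C++ host function (an assumption, so the file runs on its own); the call site is identical either way, and crucially there is no await.

```typescript
// Stand-in for the C++ host function that the native side would
// install on the global object via runtime.global().setProperty(...).
(globalThis as any).add = (first: number, second: number): number => first + second;

// Synchronous call: the result is usable immediately, with no bridge
// round-trip and no Promise in between.
const result: number = (globalThis as any).add(5, 7);
console.log(result); // -> 12
```

Whether `add` was defined in JavaScript or installed from C++ as a host function, the caller cannot tell the difference.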
Now we can directly call it in JavaScript, which calls the native host function from here. If we compare this to a bridge module, you notice that this function no longer has to be awaited; this function is completely synchronous, so the result returned in the host function can directly be used in JavaScript: this value holds the result we return here. To do this with a bridge function, we would have to use await, which cannot be used in top-level JavaScript code. As you might have already noticed, this is the benefit of JSI: it provides direct, fast and synchronous access to the JavaScript runtime. If we go back to our IP address example, we can now create a function in C++ that simply returns the IP address. We can then install this function on the JavaScript runtime's global object and then simply call it. We don't have to use await anymore, and the function is directly called, just like any other JavaScript function. Also, there's no more serialization going on, because, as we learned earlier, all of the JavaScript values can be directly accessed in C++: a JavaScript number can be directly used in C++ as a jsi::Value of type number, and for our IP address this would be a jsi::Value of type string. So let's take a look at the IP address example. We create a host function in C++ which simply gets the IP address from some platform-specific API, for example from an Objective-C API, and simply returns a jsi::String which contains the IP address. We move this function to the global namespace, and then in JavaScript we can simply call it. So global now contains the getIpAddress function, which is a host function that lives in C++, and you can directly call it without using await. This is how you would install a native function into the JavaScript runtime. So let's quickly recap: JSI is a replacement for the bridge. While currently both the bridge and JSI exist in a React Native project, the bridge
will soon be completely removed, and every native module will use JSI under the hood. JSI is faster than the bridge, and JSI is more powerful than the bridge, by providing direct access to the JavaScript context. With the bridge, the communication between JavaScript and native was asynchronous (remember the batched message system), so this means you have to use await for every single function you call, even if it's an add function which simply adds two numbers together. With JSI everything is synchronous per default, so you can use it in top-level JavaScript code. But don't worry, you can easily create Promises to make something awaitable if it's a long-running or asynchronous task. Since JSI accesses the JavaScript runtime, it is no longer possible to use a remote debugger such as Google Chrome; instead you have to use Flipper. Remember, JSI is just a replacement for the bridge, so only the underlying technology changes; in most cases you don't need to use JSI directly, or even C++. The TurboModules API will be almost the same as the NativeModules API, so for every native module that exists today it will be very easy to migrate to TurboModules without rewriting the entire thing. Currently there are three runtimes that implement JSI: JavaScriptCore, which is the default runtime for now; Hermes, which might become the default runtime sometime in the future; and V8. Also, JSI does not implement any sort of thread-safety locking. And while JSI is ready, it's a bit quirky to get access to the JavaScript runtime at the right point in time; there are a few hacks involved, which I will show you now. We'll take a look at my react-native-mmkv library, which provides a fast and easy storage solution for React Native using JSI. It's about 30 times faster than AsyncStorage, and it is a really good example for JSI, since you benefit from better performance as well as synchronous access in top-level JavaScript code. Let's take a look at the project structure first.
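The Promise pattern mentioned above can be sketched from the JavaScript side. In this sketch `uploadNative` is a hypothetical stand-in for a synchronous native host function, and `setTimeout` stands in for the native thread that would resolve the Promise once the work is done.

```typescript
// Stand-in for a native host function performing the actual work
// (assumption: not a real API, purely illustrative).
(globalThis as any).uploadNative = (bytes: number): string => `uploaded ${bytes} bytes`;

// Wrap the synchronous call so a long-running task becomes awaitable
// instead of blocking the JS thread.
function upload(bytes: number): Promise<string> {
  return new Promise<string>((resolve, reject) => {
    // In a real JSI module the native side would call the resolver
    // from its own thread; setTimeout simulates that here.
    setTimeout(() => {
      try {
        resolve((globalThis as any).uploadNative(bytes));
      } catch (e) {
        reject(e);
      }
    }, 0);
  });
}

upload(1024).then((msg) => console.log(msg)); // -> 'uploaded 1024 bytes'
```

This mirrors what the C++ side does when it fetches the `Promise` constructor from `runtime.global()` and resolves it later from another thread.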
Compared to native bridge modules, there's currently no way to autolink or automatically install a JSI module. Instead we have to go into our MainApplication.java, find the ReactNativeHost instance we're creating, and then override the getJSIModulePackage function. In this function I'm going to return a new MMKVModulePackage instance, which is a class that implements the JSIModulePackage interface. Let's take a look at it. As you can see, the class implements the JSIModulePackage interface and overrides the getJSIModules function. In this function you can return a list of JSIModuleSpec instances, but we just return an empty list. Why do we do this? Well, there's something special about this function: it is actually called on the JavaScript thread. Since this function is called on the JavaScript thread before the JavaScript bundle executes, we can quickly install the MMKV module into the global namespace. If we wanted to do that on another thread, for example on the native modules thread, we would likely get an error at runtime and the app would simply crash. So let's take a look at how the install function works. We're going to open the MMKVModule class, which is still a Java class, and as you can see, here's the install function, which takes a JavaScriptContextHolder as a parameter, which is the JS context. This is a Java hybrid class, and it actually contains the JavaScript runtime as a C++ instance, but we cannot access it from Java. So let's take a look at how we can pass it to C++: we call the nativeInstall function, which is a JNI function, a native function that exists in C++ but can be called directly from Java. This is the first time we cross languages: in this case we go from Java to C++. To implement the C++ function we have to create a CMake and Gradle setup; I'm not going to go over this right now, but you can take a look at it on GitHub. So let's take a look at the nativeInstall function. This function lives in a C++ file, and as you
can see here, it is prefixed with the full Java namespace. The first parameter is the JNI environment; then I'm going to get the class, which is the MMKVModule; then I'm going to get the first parameter, which is a long, in my case the pointer to the JavaScript runtime; and then I'm getting a Java string with the path where MMKV stores all the documents. If we take a look at the Java function, this is exactly what we defined here. Now, in the C++ file, I can reinterpret the JavaScript runtime pointer to be an instance of jsi::Runtime, and if the cast succeeded, we can install the actual functions. If the cast didn't succeed, we're likely not in an environment that supports JSI, for example if you use a different runtime than the three runtimes I listed earlier, or if you're using a Chrome debugger. So let's take a look at the install function. The install function receives a jsi::Runtime reference, and I can use this reference to install variables into the global namespace. In this case I'm installing the mmkvSet function in the global namespace. This is a JSI host function, as we saw earlier, which simply sets the value on the MMKV storage instance. As you can see, you can check the arguments for their types using isNumber or isString, and we can also throw JSI errors. At the end of the function we always have to return a jsi::Value, and if we don't want to return any value at all, so no number, no string, no object, we can simply return undefined. And then here's the actual implementation for MMKV, where we simply set the value on the default MMKV instance; in this case we're calling set, and getting the value. If you want to convert a jsi::Value to a number, you can use asNumber and you get a double value in return; same thing for booleans. And for strings there's a jsi::String equivalent, so asString will return a jsi::String, and of course you can also convert that to an actual C++ string, an std::string, using the .utf8 function. So let's take another look at frame processors.
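From JavaScript, the installed MMKV globals behave like ordinary synchronous functions. The sketch below is purely illustrative: a `Map` stands in for the native MMKV instance, the names `mmkvSet` and `mmkvGetString` are hypothetical stand-ins for whatever the library actually installs, and the type checks mirror the isNumber/isString checks done in C++.

```typescript
// In-memory stand-in for the native MMKV storage instance (assumption).
const store = new Map<string, string | number | boolean>();

(globalThis as any).mmkvSet = (key: unknown, value: unknown): undefined => {
  // Mirrors the argument-type checks the C++ host function performs
  // with jsi::Value::isString() / isNumber() / isBool().
  if (typeof key !== 'string') {
    throw new Error('First argument (key) has to be of type string!');
  }
  if (typeof value !== 'string' && typeof value !== 'number' && typeof value !== 'boolean') {
    throw new Error('Second argument (value) has to be a string, number or boolean!');
  }
  store.set(key, value);
  return undefined; // like returning jsi::Value::undefined()
};

(globalThis as any).mmkvGetString = (key: string): string | undefined => {
  const value = store.get(key);
  return typeof value === 'string' ? value : undefined;
};

(globalThis as any).mmkvSet('user.name', 'Marc');
console.log((globalThis as any).mmkvGetString('user.name')); // -> 'Marc'
```

Note that both calls are synchronous and usable in top-level code, which is exactly the property the talk highlights over AsyncStorage.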
Earlier we saw that we can directly access the frame's width and height properties, and we can even directly pass it to a native frame processor plugin, which, by the way, is a host function. So how does this work? What exactly is the frame object? How does it contain a full 10-megabyte image from your camera? Is it really slow to copy frames from native to JavaScript for every frame the camera sees, which can happen up to 240 times a second? Well, there's actually no copying or serialization happening here: the frame parameter is actually a JSI HostObject. This means the object has been created in C++, but JavaScript can also interact with it, similar to how host functions work. So if I access frame.height, this actually resolves to the C++ code and calls get on it. Let's take a look at the shape of the frame object. I've created simple TypeScript types for this, which don't contain any code, but help us understand the shape better. For the frame we have isValid, width, height, bytesPerRow and planesCount properties, and then we have two functions, toString and close. All of these properties do not actually exist in JavaScript; they all exist in C++. Let's take a look at how we defined the host object. As you can see, this is the FrameHostObject header; this is a C++ class which inherits from jsi::HostObject, and we can override those two methods to provide information for the JSI host object. You cannot directly use the frame object in JavaScript, because it's an Objective-C object and JavaScript doesn't know how to interact with it; instead we create a FrameHostObject, which acts as an interaction layer between the JavaScript frame instance and the Objective-C frame object we store here. So, for example, if we call frame.height, the get function gets called with the name being "height", and then we can simply access the frame with Objective-C code to find out the actual height of the frame and return it as a jsi::Value, a number. Let's take a
look at the implementation. The getPropertyNames function simply returns a list of all valid properties. This is useful if you want to use Object.keys on the frame, which then returns all of these keys, so for example toString, isValid, width, height, bytesPerRow, planesCount and close get returned. Then you can implement the get function, which acts as a getter. So if you call frame.height, which is not a function but simply a property getter, this function gets called with the name being "height" in our case. We can then get the height using an Objective-C API from the CMSampleBuffer, and then we can simply return the height as a double using the jsi::Value constructor. So every time you try to access frame.height, this is not a value stored in JavaScript; instead this C++ code gets called. If you try to access some value that is not supported, we just return jsi::Value undefined. The same thing applies to the close and toString functions. For example, for the close function, we have to create the host function and then return it, so if you access frame.close this function gets returned, and if you then try to actually call it, with parentheses, this function, which is a host function, gets called. As you can see, all of this exists in C++, and JavaScript only provides an interaction layer through the host object instance. You can find all of this code online on GitHub, in the VisionCamera repository. So let's start creating our own custom host object. I'm going to create a class in C++, I'm going to call it ExampleHostObject, and we're going to inherit from jsi::HostObject. Then we have to override the functions get and getPropertyNames, so in this case we add getPropertyNames and override it; the signature is always the same, from jsi::HostObject. Now we start by implementing getPropertyNames: you have to return a vector of all property names that you can access on the host object. This is useful if you want to call Object.keys on the given object.
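For intuition, a JavaScript `Proxy` behaves a lot like a HostObject: property reads go through a `get` trap and `Object.keys` consults an `ownKeys` trap, mirroring `HostObject::get` and `HostObject::getPropertyNames`. The sketch below is an analogy in plain TypeScript, not VisionCamera's actual code; the `NativeFrame` type stands in for the Objective-C frame object.

```typescript
// Stand-in for the Objective-C frame stored inside the host object.
type NativeFrame = { w: number; h: number };

function makeFrameHostObject(native: NativeFrame) {
  const keys = ['isValid', 'width', 'height', 'toString'];
  return new Proxy({} as Record<string, unknown>, {
    // Analogue of HostObject::getPropertyNames -> Object.keys support.
    ownKeys: () => keys,
    getOwnPropertyDescriptor: () => ({ enumerable: true, configurable: true }),
    // Analogue of HostObject::get: every property read lands here.
    get: (_target, name) => {
      switch (name) {
        case 'isValid': return true;
        case 'width':  return native.w; // read from the "native" object
        case 'height': return native.h;
        case 'toString': return () => `${native.w} x ${native.h} Frame`;
        default: return undefined;      // like jsi::Value::undefined()
      }
    },
  });
}

const frame: any = makeFrameHostObject({ w: 1920, h: 1080 });
console.log(frame.height);       // -> 1080
console.log(Object.keys(frame)); // -> ['isValid', 'width', 'height', 'toString']
```

Just as with the real HostObject, none of these values are stored on a JavaScript object: every access runs the trap, which reads from the underlying native data.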
So in our case, we're just going to add someValue: we create a PropNameID and give it the string "someValue". Now let's start implementing the get function, and let's try to find out what the user actually wants to access. You can use name.utf8 to get an std::string value for the passed PropNameID, and with the std::string stored in the name variable we can now work. Let's find out if the user actually tried to access someValue: we simply compare name to "someValue", and if it's true, we can return some value. In this case I'm going to return the lucky number 13, but you can also return an object, a string, a boolean, whatever. So let's look into some other value types. First we can try returning a boolean: if the property someBool is accessed, we simply return true. Next, let's try returning a string: if the value someString is accessed, we create a new string from a UTF-8 std::string, in our case "hello". Let's try something more complex: we build an object with two values, someValue and someBool. We create a new object using the jsi::Object constructor, then we can simply use setProperty to set some values: someValue is set to a jsi::Value of 13, and someBool will have the value true. Then we can simply return the object. Let's try building an array: if the value someArray is accessed, we build an array with two elements inside. Since we already know the size of the array, we can simply pass it to the jsi::Array constructor. Now let's try inserting the values and simply returning them; in this case this should be one instead of zero, but you get the idea. Now let's try creating another host object, a new instance of the ExampleHostObject: in this case we're creating a new shared pointer to the ExampleHostObject using the default constructor, and then we can use jsi::Object::createFromHostObject to create a new host object. Next, let's try creating a host function. Host functions look very complex at first, but in reality they're really easy. So
The first step is to create a C++ lambda. You have a capture list of all values that get captured in the lambda, and then you have four parameters: the runtime we're currently using; the this value of the function, if the function is bound to any specific this value; a C-style array of all arguments; and a count defining how many arguments were actually passed — so how big that array is. Then you return a jsi::Value: undefined if you don't want to return anything, or any other jsi::Value if you do. In our case, let's just add the first and the second argument together, so it's a simple add function: we get the first argument as a number and add the second argument as a number too. The result is a number, so we can create a new jsi::Value from the double and return it. Now we still only have a C++ lambda, not a JSI function yet. To create a JSI function, you use Function::createFromHostFunction: you pass the runtime, then a PropNameID — in this case just the name — then the number of parameters this function expects, in this case two (the first and the second number), and then the C++ lambda itself; in most cases you want to std::move it. Then we simply return the function. You can also access the global namespace: anything defined in the global namespace can be reached using runtime.global(). For example, you might want to create a host function that can be called from JavaScript and performs some asynchronous task on a new thread, such as uploading something to a server. To make it awaitable in the JavaScript world, you can create a new Promise by calling runtime.global().getPropertyAsFunction() to get the Promise property, which is a function — remember, all constructors in JavaScript are simply functions. So then we call the Promise constructor as a constructor.
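From the JavaScript side, the host function and the promise pattern behave like ordinary JS values. A sketch under those assumptions — `add` stands in for the C++ lambda described above, and `uploadAsync` is a hypothetical example, not a real VisionCamera API:

```javascript
// JS-side view of the "add" host function described above. In C++ the
// lambda receives (runtime, thisValue, arguments, count) and returns a
// jsi::Value; from JavaScript it is just a callable function.
const add = (a, b) => a + b; // stands in for the C++ lambda

// The promise pattern: native code grabs the global Promise constructor
// (runtime.global().getPropertyAsFunction(runtime, "Promise")) and calls
// it with a host function that resolves or rejects later. uploadAsync is
// a hypothetical example of such an awaitable native task.
function uploadAsync() {
  return new Promise((resolve, reject) => {
    // in native code this work would run on a background thread
    setTimeout(() => resolve("upload complete"), 0);
  });
}

console.log(add(3, 4)); // 7
uploadAsync().then((result) => console.log(result)); // "upload complete"
```

The resolve callback passed to the Promise executor plays the role of the resolver the native thread calls once its work is done.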
You then pass in a JSI function — a host function — which can resolve or reject the promise. This promise object can then be returned to JavaScript, and your code can execute inside the lambda you passed in. As soon as your upload is complete on the other thread, you simply call the resolver, and the code that is awaiting in JavaScript can continue executing. There's also another JSI value type that's not directly included in the JSI implementation, and that's the ArrayBuffer — Expo GL implements this as a typed array, as you can see here. If you have a large data buffer in native code, converting it into a JSI array by looping over it and pushing element by element will probably make you hit an out-of-memory error, or you're going to notice serious performance problems. Instead, you can use the typed array implementation to quickly make the buffer available to the JavaScript world. You create a new typed array by using one of the available typed array kinds, which are the sizes of the elements — 8-bit integer, 16-bit integer, 32-bit integer and so on — and then return it to JS. This is faster because under the hood only a simple memcpy operation happens, as opposed to looping over the entire thing and pushing a new jsi::Value into the array each time. It's also more memory efficient because, as we heard earlier, JSI values are always stored as doubles, with the only exception being a typed array — that is, an ArrayBuffer. So if you're dealing with anything smaller than a 64-bit double — for example a 32-bit integer, a 16-bit integer or an 8-bit integer — the typed array implementation actually only allocates that amount of memory. In the space of one double you could fit eight 8-bit integers. This means that a one-megabyte array of 8-bit integers would actually be 8 megabytes in size if you used a normal JSI array, while with a typed array of the 8-bit integer kind it stays at 1 megabyte.
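The memory math can be checked directly with JavaScript typed arrays. The 8-bytes-per-element figure for a plain JSI array is the assumption stated above — every jsi::Value holds at least a double:

```javascript
// Memory comparison sketch: a plain JSI array stores every element as a
// jsi::Value (at least an 8-byte double), while a typed array stores
// exactly the element width. For one million 8-bit values:
const count = 1_000_000;

const typed = new Uint8Array(count);        // ~1 MB of backing storage
const plainPerElement = 8;                  // each jsi::Value holds a double
const plainBytes = count * plainPerElement; // ~8 MB as a plain JSI array

console.log(typed.byteLength); // 1000000 (1 MB)
console.log(plainBytes);       // 8000000 (8 MB)
```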
So now that we finally understand how JSI works — and it's about time, it's already dark outside — we can take a look at how VisionCamera uses JSI to provide the frame processor feature. First of all, why can't we use the bridge for this? There are multiple reasons. First, since the bridge sends messages in JSON format, we cannot pass the entire camera frame to JavaScript — that would be a 10-megabyte JSON payload for each frame, and it would simply not be fast enough to run 30, 60 or even 240 times a second. Second, we would have to pass the JSON back to the native world every time we want to call a frame processor plugin; with the bridge this costs twice as much, since we go from native to JavaScript and then from JavaScript back to native, but with JSI this has almost no overhead at all. Third, all of this JSON conversion would be blocking the JavaScript thread, so any state updates, navigations or re-renders would be blocked. With JSI we can use Reanimated's worklet API to spawn a secondary JavaScript runtime and run all of this code in parallel, uninterrupted by the React-JS runtime. So this is how the frame processor gets called: inside the frame processor we create a new instance of the JImageProxyHostObject — which is the frame — from the JImageProxy, and then we simply call the function, which is the frame processor, passing in the host object. I'm not going to go into much detail about how worklets from Reanimated work — that's a topic for a whole other talk — but if you really want to know, I highly recommend reading the Reanimated source code. Basically, it uses JSI to copy all values captured in a worklet function and then calls the worklet function with the captured variables. This means all of the captured variables used inside the worklet are actually only copies of the original objects — so if you want to make changes to an object, beware that it is frozen. Also, for the frame processor plugin API, the user can use Objective-C or Swift to write the frame processor plugin.
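The copy semantics of worklets can be sketched in plain JavaScript. This is only an illustration of the idea — Reanimated actually copies values through JSI into the second runtime, and JSON round-tripping here is just a stand-in for that copying step:

```javascript
// Worklets run on a second JS runtime and only see *copies* of captured
// values. JSON round-tripping stands in for Reanimated's JSI-based value
// copying (an illustration, not Reanimated's actual implementation).
function makeWorklet(fn, captured) {
  const copy = JSON.parse(JSON.stringify(captured)); // copy captured values
  return (...args) => fn(copy, ...args);
}

const original = { threshold: 0.5 };
const worklet = makeWorklet((captured) => {
  captured.threshold = 0.9; // mutates only the worklet's copy
  return captured.threshold;
}, original);

console.log(worklet());          // 0.9
console.log(original.threshold); // 0.5 — the original object is untouched
```

This is why mutating a captured object inside a worklet never affects the original on the React-JS side.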
To make this possible, I convert all of the JSI values to Objective-C or Swift values. For example, a jsi::Value of type number can be converted to an NSNumber; the same goes for JSI objects, which get converted to an NSDictionary, or a JSI array, which gets converted to an NSArray. This of course also applies to the Android side: any JSI value can be converted to a JNI — Java Native Interface — value. In this case booleans, numbers, strings and objects can be used, and for the arrays you use the native array implementations from Java. So that's about it for my talk today. If you want to learn more about JSI or worklets, I highly recommend checking out the Reanimated or VisionCamera code bases. Make sure to follow me on Twitter for updates, and if you have any questions, just DM me there. Bye everyone! — Thank you Marc, really exciting talk. Like I said, this is something that I think will be very popular next year, when JSI becomes something that everybody is using, so I kind of take this talk as a look into the future, into how we all are going to be building apps and how we can benefit from the new architecture. Thank you. Now, our next speaker is Joshua from Facebook, and this is a very, very special talk, because this talk is about Fabric — but unlike all the other talks, we are not going to be talking about how Fabric works and what Fabric is. This talk is actually showing how Fabric can be utilized in production. You've all been hearing that Fabric, JSI and the new components were still in development and would one day become public and production ready, but we've been waiting quite a while for it, and I'm so happy to finally see a talk about this particular thing: how Facebook is using Fabric in production, what their experience with it is, the results, the performance benefits. So this is another look into the future, at how React Native apps will be looking and working next year. I'm really excited to learn more about Fabric, how
Facebook is using it and how they migrated to it, because their code base is huge and Fabric is probably not the easiest thing to migrate to. So let's learn about their experience and how they are using Fabric in the Facebook app. — My name is Josh Gross, and I'm an engineer on the React Native core team at Facebook. For the past three years, the React Native team at Facebook has been re-architecting core pieces of React Native, and we've been testing and rolling them out within our own apps, including the flagship Facebook app. Today I'll talk about just one part of that: the Fabric renderer rewrite and rollout. React Native was originally built for iOS and then for Android. Today, React Native has renderers for VR, Windows, macOS, tvOS and many other platforms as well. React Native is written — and essentially rewritten — in Java for Android, Objective-C for iOS, C++ for Windows, and so on. Basically, every one of these platforms is a full, unique re-implementation of React Native; they attempt to behave the same way, but they don't all behave exactly the same way. In 2018, the React Native team embarked on a project to build a new architecture that would be future compatible. We wanted concurrent rendering, we wanted better performance and reliability, but we didn't want to build this N times, once for every platform. Ideally, we could build these features once, using a cross-platform implementation, and then deploy them to every platform. To accomplish this goal we had to rewrite React Native, but we needed to make sure that we could bring along all of the apps in the world. Our first step was making sure that the rewrite would work in the biggest app in the world: the Facebook app. I'm going to focus today on our journey of rolling this out in the Facebook app. If you want more technical details, I highly recommend watching some of these talks given by current and former members of the core team. The first one here is a high-level overview of all these projects by Ram N at React Conf 2018;
there is a deep dive on Fabric given by David Vacca at React Native EU 2018; and there is a talk about the new architecture by Emily Janzer at React Native EU 2019. We will also be publishing more blog posts soon, so you can watch out for those as well. This will not be a technical talk — I'm not going to do a technical deep dive into any details of Fabric or its features. Instead, I'll describe our experience of using it at scale and migrating a very large code base to use only Fabric. This is going to be story time, so sit back, relax, grab a warm beverage — you can also grab a companion if you'd like — and I'll share with you our journey of deploying Fabric at scale at Facebook. So this is Marketplace in the Facebook app; specifically, this is the Marketplace home surface. Marketplace consists of many, many surfaces; other surfaces include product detail and commerce profile. All of these are built with React Native. In the Facebook app, React Native is used to build many other products as well — Dating and Jobs and many others. All together, there are over 1,000 surfaces in the Facebook app built with React Native. That's a lot. I tweeted about this recently, and a lot of people were confused, or didn't believe me, or thought I misspoke; some people thought I meant that React Native within Facebook has 1,000 components. I can assure you it's a whole lot more than a thousand components. So when we talk about a surface within React Native at Facebook, what do we mean? Technically speaking, if you have an iOS or Android background, a surface is roughly a full-screen UI: a view controller on iOS or a fragment on Android. Regardless, the assumption is that if you navigate to a new surface in React Native at Facebook, we expect that surface to take over and redraw the entire screen. Some of these can be relatively small — say you go to a privacy policy surface, maybe it's just some text — but the vast majority are much more
complex, and some of them are huge, with hundreds of features packed into a single surface. So I want to emphasize that number: a thousand-plus surfaces, however you want to think of it, is a huge number. Another unique part of our setup is that at Facebook, all of our apps using React Native use React Native from the main branch on GitHub. Whenever we make a change to React Native, or whenever we merge a pull request from GitHub, that change goes into our main developer build immediately, and then it goes live to all Facebook users in the next weekly release. This gives us only about a week to ensure that every change we make is stable on every one of those 1,000-plus surfaces, for all of our users. So what does it mean to be stable? More than one billion people globally visit Marketplace each month; we have to make sure that Marketplace continues working well for people from different countries, with different network conditions and different device types. Since the launch of Dating, over one and a half billion matches have been created. So even the smallest regression in React Native easily affects product usage metrics, and that gets amplified by the scale of Facebook: a tiny performance regression of a few milliseconds will get caught and matters a lot, and incredibly rare race conditions or crashes — one-in-a-billion events — will happen many times, thousands of times a day. We've had to ensure that every screen, every metric, every interaction was working properly. In the meantime, the Facebook app is a moving target: products and libraries built on top of React Native are constantly changing, being reworked and refactored. So in order to accomplish our goals of unifying React Native and enabling new features and better performance, we focused on stabilizing existing features and maintaining neutral metrics. As part of this rollout, we specifically decided not to focus on improving metrics or expanding capabilities yet. We also made sure that any breaking changes were
absolutely necessary and very easy to migrate and roll out. I'm very proud to share that, as of last month, React Native in the Facebook app is now completely powered by the new architecture. This doesn't mean, however, that it's quite ready for everyone to use right now — mostly because documentation is lacking, honestly, and some things like popular open-source native modules may need to be updated. I'll say more about this towards the end of my presentation. Let's talk about the development and rollout process of Fabric. We modified our navigation system to support selectively turning the architecture on and off for individual surfaces, allowing us to control what percentage of users would get Fabric or the old renderer for a particular screen. Our work followed a cyclical pattern: we would identify a surface, play around with it, investigate if we found any issues and try to fix them — or implement the new Fabric feature that surface was using that wasn't supported yet. We'd fix any of these issues, then run a production test, analyze the data, fix any issues we found again, and then repeat the cycle. Pretty straightforward. After we had nailed this cycle and workflow, all we had to do was repeat it a thousand times. It was "easy": we just had this playbook and had to keep doing it over and over and over again. We thought this was going to be a six-month project — that's not a joke — and it took us about a full year before we realized the full scope of the migration. From the start, it took about two years to enable the architecture on the first surface, and nine months after that to enable Fabric on all the rest of the surfaces. To be fair to us, our estimations were fairly close — only five times off seems pretty good in the world of software estimations. So what challenges did we face? Well, first we had challenges of scope: the full scope of the project, like I said, wasn't realized until about a year into the migration. Our
scope expanded a lot after the investigation got underway. Along the way, we discovered a lot of hidden features of React Native, as well as hidden and undocumented optimizations. I hinted at this at the beginning, but we also found a lot of subtle differences between the Android and iOS code bases — the vast majority of which were not documented, and the vast majority of which were not even intentional; they had just accidentally drifted apart over time. We discovered, documented and patched a lot of these, and actually improved the non-Fabric code base as well, but a lot of them couldn't be fixed without the Fabric rewrite. Some of these platform differences had, over time, resulted in JavaScript product engineers writing platform checks and slightly different code for Android and iOS, so platform switches were sprinkled all across the Facebook codebase. Thankfully, with the migration effort we were able to delete many of these entirely, because of the unified code base. In this way, our production code has become much easier to read and reason about after the migration, and the experience between Android and iOS users has been unified. One concrete example is that ScrollView for Android has been almost entirely rewritten internally compared to two years ago, and it's much more stable, performant, full-featured, and in general just higher quality and aligned with existing features that iOS already had — and a lot of these changes we were able to backport to the non-Fabric renderer as well. Another example is layout animations, which were never fully supported in React Native on Android previously; they were always flagged as sort of an experimental feature. Now they work equally well on both platforms with Fabric. Another challenge in the migration was coming up with backwards-compatible alternatives to APIs that we were deprecating. This is a pain that nobody else will go through, because we spent a lot of time making it easier for
ourselves and our own engineers internally to migrate code. In most cases, because of the time we spent up front, the migration work involved deleting code — the new APIs are simpler in most cases, and some of the changes just involved adopting a best practice, so we could simply delete code. This applies to APIs used for rendering, for native modules, and for custom native components. Because we always ran screens during this experiment in both the Fabric and non-Fabric renderers at the same time, all code needed to be compatible with both until the experimentation was complete. In the vast majority of cases, a surface's code didn't need to be modified at all to work both on and off Fabric. We also encountered challenges with scale. First, there's a spectrum of instrumentation across the Facebook code base. Some surfaces have no instrumentation at all besides the basics provided for every surface, so our only signal might be crashes or bug reports; we would test these screens ourselves and rely on QA teams for some manual testing, but this lack of instrumentation could pose challenges. Other surfaces have everything instrumented and would catch regressions of just a few milliseconds in performance on some specific interaction. These hyper-instrumented screens are of course also challenging, but they did offer us an opportunity to really make sure that Fabric is very performant in all its corners. Another challenge we faced is that screens were hyper-optimized specifically for the non-Fabric renderer; Facebook had really found a local maximum — a local optimum — for the performance of these screens within the React Native renderer. Screens would rely on undocumented, undefined and sometimes just unsupported behavior in order to squeeze out performance, where the non-Fabric code does some things incorrectly but was sometimes faster because of it.
A good example is the measure APIs. Prior to Fabric, the timing was not guaranteed, so there are many possible race conditions that can and do occur, and sometimes the measure API returns incorrect results because of it — but the API is very fast, since it was optimized for speed and not correctness. We were eventually able to optimize the new Fabric APIs enough to get to parity with these APIs, and Fabric is very performant now, but we did have an initial disadvantage, because the non-Fabric renderer had an unfair head start due to its trade-offs. Another challenge we faced is that Facebook is a moving target: surfaces are being iterated on very quickly. Again, I'll reiterate the number — 1,000-plus surfaces — but this number is growing daily as well, and many of these screens are being worked on very, very actively: new features are being released, old features are being deprecated, new metrics are being introduced, old metrics are being removed, metrics are being updated, and the baseline is constantly shifting. Especially in the early days, in 2018 and 2019, we would try to experiment with screens that didn't use as many features, so that we could experiment with Fabric without having implemented everything necessary for the full implementation — we'd implement, say, 50% of Fabric and then find a screen that only used those 50% of features. The problem with this approach is that those screens could be updated at any time to introduce some usage of an unsupported feature, so this was a bit of a challenge, especially in the early days, because of how fast Facebook moves. Given all of that for context, let's move back to the timeline — hopefully it'll make a little more sense now. Through mid-2020, most of our time was spent discovering the scope of Fabric and re-implementing existing APIs for Fabric. We provided migration paths for a very small number of APIs
that we were deprecating, like setNativeProps and findNodeHandle, and we provided backwards-compatible replacements for them. We also migrated screens from using these deprecated APIs to using the new APIs. By the end of December 2020 we were 99% feature complete: most of the 1,000-plus surfaces had Fabric enabled, but only for a very small number of users — around 1%. By the end of 2020, 99% of surfaces were using only Fabric for all users. That left only one percent of screens, but, as you can imagine, that was because that one percent of screens was very important: the very high-volume, hyper-optimized, hyper-instrumented screens like Marketplace home, for instance. These screens were extremely optimized specifically for the non-Fabric renderer, so we had to spend a lot of time understanding these screens and products and improving their performance specifically for Fabric. Luckily, when I say that we improved performance for Fabric, this basically just meant adhering to best practices: no longer using all those undocumented, unsupported corners of React Native that people had been relying on, using some of the new APIs, using just the documented features. Six months later, that one percent of screens was fully migrated to Fabric as well. Now I'd like to tell a couple of interesting stories from my personal experiences with this rollout. This is a real photo from very early on in the Fabric architecture discussions; we spent a lot of time like this in the early days, huddled around a whiteboard with someone trying to explain to us — or figure out — how Fabric worked: one person at the whiteboard explaining it and the rest of us just kind of staring, confused, until it clicked. A lot of the time in the early days it was us trying to pull ideas out of Sebastian's head, as you can see in this photo, until the idea clicked for the rest of us. I want to share these stories
because I think they can reveal the scale of Facebook and how things work internally. I also want to emphasize that I expect the vast majority of people migrating to Fabric to never have any issues like this, in part because we already went through all of this pain at Facebook. This is part of the reason we haven't encouraged adoption outside Facebook as much before: we wanted to iron out as many of the rough edges as possible. Another point to make is that some of these problems are specific to Facebook. Believe it or not, sometimes code within the Facebook app isn't perfect — I know that's hard to believe — but sometimes bugs get introduced, sometimes there are suboptimal patterns that really shouldn't be used, and this did make getting feature parity and debugging harder. The first of these stories I like to tell is that every November, like clockwork, there ends up being some big crash or bug that blocks my work in production, and every time it takes over a month to solve — usually well over a month, actually. This has been true every November that I've been at Facebook, so I'm really hoping it doesn't happen this November, but we'll see; I'll have to wait and see. To give an insight into what my day-to-day looked like during the migration: I generally would wait until a new version of the Facebook app was released — every week a new version is released to production — then wait 24 to 48 hours for some significant chunk of users to update and have some time to use the app, and then I would analyze the early crash data. Basically, I wanted to know: did Fabric crash more or less? If it crashed more, I would do a deep dive into finding out what the issue was and try to fix it as soon as possible, and if the crash was big enough, I would just disable Fabric temporarily as well. Obviously, as we were rapidly implementing new features,
especially in the early days in 2019 and 2020, we would usually get a maximum of one to two new crashes a week — many weeks we got nothing, but if we got new crashes it was one or two. Most of these were very small volume, very marginal — one-in-a-billion or one-in-a-trillion type events — and most of them were very trivial, like a null pointer exception where the root cause is obvious and the mitigation is a null check: one line of code, an easy fix. But the reason I'm telling this story is that sometimes the crashes were not as easy to fix. In one particular case, I got an Android crash that was pretty high volume — higher than I was comfortable with, and much higher than most crashes I had gotten. I looked into the stack trace, and there was no Facebook code involved and no React Native code involved; something in the internals of the Android UI layer was crashing. That's basically all I knew. Initially I was going to just ignore it or punt it to another team, but there was one clue: it was only crashing on React Native surfaces, and in addition, it was only crashing within the Fabric experiments — only for React Native Fabric users, on React Native screens, and only on a few screens at that. So there was very little for me to go on in terms of debugging, but it was very clear that it was actually a Fabric and React Native problem. But React Native is huge, our products are complicated, and I didn't have much information. At this point I did what I would normally do: I tried to reproduce the issue — and of course I couldn't. After many, many hours of trying to reproduce it on different devices and different emulators, I just couldn't reproduce it, and my co-workers couldn't reproduce it either. The next step was to add logs, so that hopefully I could get some logging information with a production
crash, telling me roughly what React Native was doing right before the crash — at least what section of code was executing. So I added logs over the course of a few weeks and, predictably, got no new information. Because of the cadence of our app releases, it takes a while to write the logging code, land it, and then for that code to go into production, so this is a pretty expensive process — it actually took me a month and a half to get no new information. At this point I wasn't just sitting around waiting; I had already looked deeper into the data and noticed an unusual pattern: it only reproduced on a few specific devices. That in itself isn't that unusual, but it didn't reproduce on any Samsung devices or Google Pixel devices, which is very unusual. In fact, it only reproduced on a few devices that are generally not used within North America or Europe at all — some of these devices aren't even possible to buy within the US. So this was also a problem. At this point you might be thinking: why did I waste all this time, why didn't I just buy one of these devices that I could get my hands on? The answer is pretty simple: I had never had to do that before. In all the development I'd done before and since, I had never run into an issue this thorny that required one specific device. I have run into a lot of device-specific issues, but usually there would be a stack trace indicating, say, a null pointer exception that only happens on this one device, in this code — dig in, and it becomes clear after some investigation. Not so easy in this case. So I finally got one of these devices — to protect the innocent, I will not name the OEM — and thankfully I was able to immediately reproduce the crash within 30 minutes of unboxing the phone. Hopefully
you're curious at this point. So the problem was: if you have an empty parent view on Android and you attempt to remove a child from it — a view with no children, and you want to remove a child with no parent from it — Google Pixel, Samsung, stock Android will just fail silently and move on, doing nothing, because the end result has already been achieved: you're trying to remove a view from a parent with no children, so you can kind of squint at it and say, well, you've already achieved your goal, so we can just move on. But some OEMs have modified Android, I guess, to crash at this point. And not only that, but a few custom internal Facebook view managers on Android used this behavior in some marginal cases. This is notable, and kind of funny to me, because it's the first time I've run into a case where an OEM had changed really core behavior in this way — changed a basically undocumented feature of Android, also without any documentation, so it's undocumented on both sides; it's really unclear what to expect from this API without digging into the code. It's also worth noting that this is a classic design problem for APIs: if the user does something nonsensical, should you fail silently, should you crash, should you give them a warning? There are pros and cons to each of these approaches. I don't think it's correct or incorrect for stock Android to just move on silently, and I don't think it's correct or incorrect to crash at this point — they're just different decisions with different trade-offs, and unfortunately I ran into a case where both decisions were made in different places. For closure here: Fabric became resilient to this sequence of operations, and Fabric now does fail silently in this case, although it logs something to the console when it happens — so technically there is some clue in case you run into an issue related to this. Another November
bug i had was related to performance and event ordering i believe that this was the the previous previous november so about a year earlier so earlier days in fabric i was testing a marketplace screen and i noticed that in fabric it would take like five seconds to load the screen and most of the time the screen was just blank and i could tell from like metro logs that it wasn't like loading javascript it wasn't really doing anything it wasn't logging anything it was just sitting there as far as i can tell so i did what we normally do for performance investigations i collected log markers which we have instrumented in react native in our product code and so i got sort of a linear sequence of events for the startup during fabric and then during non-fabric and i would compare these timelines and over time i would drill down to find you know which specific regions were slower in fabric um and you start you add additional markers you can do like a binary search of where in this startup and which segments of code are slower um and and i basically just use this use this method and then sort of rinse it and repeat it to dig down um but i kind of exhausted that eventually um and i ended up uh using another tool which is one of my my favorite and most sophisticated uh logging tools which is logging text to the console using console.log so i had a hunch here i would add logging statements all over the code base i would collect text logs and fabric and then a non-fabric i would literally copy and paste them into text edit put them side by side on my monitor and just comb through line by line you're probably horrified by this after a couple times of doing this i did end up using like more sophisticated diff tools to highlight the differences between them but i was still ultimately just using console.log and what i ended up finding was pretty interesting and it was that the ordering of some events was different in fabric so these are like android life cycle events mixed with 
React Native lifecycle events and methods. The specifics don't matter too much here, and I want to emphasize that the vast majority of our surfaces at Facebook, over 99%, did not have any issues related to this; most surfaces don't do anything with those lifecycle methods, so they're usually just not needed. But a few surfaces that were highly customized and highly optimized for the previous renderer were doing certain things to optimize startup: special work in Android fragments, on fragment create, on start, on stop, on activity result (that's what I mean by lifecycle methods), mixed with React Native Java lifecycle methods like layout and measurement and other startup-related things. Again, the specifics don't matter; I just want to give you a general idea of what kinds of APIs were involved. So yes, some of these events were happening in a different order in Fabric than in non-Fabric. I thought, well, it's a long shot, but I followed my gut and dug further, and it turned out that the startup of the screen was constructed as a state machine, an asynchronous state machine that expected event A to happen, then event B asynchronously, then event C. If the sequence changed at all, the state machine just broke: if the sequence was A-C-B instead of A-B-C, startup just didn't work, and the screen would sit in a perpetual loading state. Eventually a long-running timeout would force the screen to re-render, so users were never truly stuck, but the screen could appear much more slowly in those cases. I want to be clear that I'm being intentionally vague because the specifics of the APIs don't matter. This is one of those local maxima of performance I mentioned earlier: we were doing something not great to squeeze milliseconds of performance out of React Native in the best case, and I actually expect this optimization to go away entirely. It's an internal Facebook optimization, it's not really as relevant with Fabric, and I've never heard of anyone else doing this outside of Facebook, so before I continue I want to be clear: I don't expect anyone else to run into this scenario, period. But why is a slight change in event ordering a problem? And since it was causing us an issue, why didn't we just change Fabric to guarantee the previous ordering of operations, even though it was never documented or guaranteed? Well, the ironic thing is that the sequence of events changed precisely because we were able to optimize certain things in the internals of Fabric. Maybe that's still not convincing; you're saying it wasn't documented, it wasn't guaranteed, but we still changed something that was working, right? This is where things like Facebook scale become important. It turned out that even with the old system the ordering of events was never consistent. Not only was it not guaranteed and not documented, it was never really consistent: it would produce that particular sequence of events maybe 97-98% of the time, and in Fabric we got the expected sequence only about 90% of the time. So we always had this problem for some number of users, but we never realized it: it happened too infrequently for our metrics to alarm on, it wasn't a big enough problem in production to raise alarm bells, and internally nobody reproduced it or was impacted by it, so we just never caught it. The real problem is that if you're building a state machine based on asynchronous events, you either need to know that the ordering of events will be the same 100% of the time, and then warn or crash if you ever encounter a different ordering, or you need to be resilient to different orderings of events. This is a hard problem; async programming is difficult, and state machines are difficult to get right, but especially at scale it's crucial to get this correct. If you assume a particular ordering and it's wrong for one percent or five percent of users, that's a huge number of people at scale. Long story short, we were able to fix this issue by making our product code more resilient to different orderings of events, and in doing so we actually improved the user experience and the performance for both Fabric and non-Fabric. Again, I want to reiterate: for the vast majority of difficult issues we faced, say poor interactions between product code and Fabric, fixing the issue for Fabric also improved the experience for non-Fabric, because generally the issue was caused by usage of undocumented APIs, undocumented behavior, or just not following best practices. As part of this process we also spent a lot of time thinking about and analyzing the ordering of events in React Native and Fabric, and ways of solving this class of problems in general. What I will say is that it really illustrated how important it is for APIs to document their assumptions and constraints, but also how important it is for me, as a developer and a user of APIs, to keep an eye on what is not documented. If documentation were perfect, it would explicitly call out that something is subject to change, but obviously that doesn't always happen, and now when I notice a gap in documentation, when I'm looking for something and I kind of can't find it
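The resilience fix described above, replacing an order-sensitive startup state machine with one that only cares that all required events eventually arrive, can be sketched roughly like this. This is a toy illustration in plain JavaScript; the event names and factory functions are made up for the example and are not real React Native or Facebook APIs:

```javascript
// Fragile: assumes events arrive exactly in the order A -> B -> C.
// Any other interleaving silently stalls the machine (the bug described above).
function createFragileStartup(expected = ["onCreate", "onStart", "contentAppeared"]) {
  let step = 0;
  return {
    handle(event) {
      if (event === expected[step]) step += 1; // out-of-order events are dropped
    },
    isReady() {
      return step === expected.length;
    },
  };
}

// Resilient: order-independent. Startup is "ready" once every required
// event has been observed, regardless of the order in which they arrived.
function createResilientStartup(required = ["onCreate", "onStart", "contentAppeared"]) {
  const seen = new Set();
  return {
    handle(event) {
      seen.add(event);
    },
    isReady() {
      return required.every((e) => seen.has(e));
    },
  };
}

// Deliver the same events in a different order (A, C, B):
const fragile = createFragileStartup();
const resilient = createResilientStartup();
for (const e of ["onCreate", "contentAppeared", "onStart"]) {
  fragile.handle(e);
  resilient.handle(e);
}
console.log(fragile.isReady());   // false: stuck, like the perpetual loading screen
console.log(resilient.isReady()); // true: all events seen, order irrelevant
```

A test that shuffles the delivery order of events immediately exposes the fragile version while the resilient one keeps working, which is essentially what the chaos-style event mode mentioned next would automate.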
sometimes now I wonder whether that gap is almost intentional. Sometimes maintainers don't want to provide guarantees, because if you do, people rely on them, and then parts of the system become less flexible. Again, frankly, I don't expect anyone outside of Facebook to be impacted by this; most people will never use or think about the events involved. As a side note, one thing we've thought about doing internally to force product code to be more resilient is a chaos-monkey event mode, where, for developers only and in some cases, we would randomly delay or reorder events. We haven't done this yet; if you hear Facebook engineers complaining about it in the future, maybe that's my fault, but I actually suspect it would catch a lot of bugs, including future bugs in waiting. Finally, it's taken us a very long time to fully migrate the Facebook app onto the new React Native architecture. We're very proud of this moment and of all the work that went into it, but we know we're not done yet, and our next step is to bring it to all of you. We're starting to plan this work now. We're clearly very bad at estimating, so I'm not going to make any claims about dates, and while we don't know how long the full process will take, we do know the next steps. We're currently working on tutorials and guides to help people take advantage of the new architecture, and we're partnering closely with companies like Expo, Callstack, and Software Mansion, as well as maintainers of the most popular open source packages, to make sure they're compatible with the new architecture. It's very important to us that this upgrade goes as smoothly as possible for everyone, and we really can't wait to get it into your hands. I am but one part of a pretty large team, and this effort involved a lot of people, including a lot of engineers
within Facebook who are not on the React Native team, too many people to count. Within the team, I'd like to give big thanks and recognition to the people I've been working with daily for the past few years, especially David Vacca, Samuel Susla, and Valentin Shergin. Big thanks; this was a huge effort, and thank you for making my life easier and helping me out along the way. I'd also like to thank our leadership, who supported such a huge, ambitious rewrite, especially when it took years longer than expected, so big shout-outs to Tom Occhino, Tim Yung, Yuzhi Zheng, Kevin Gozali, and Eli White. Thank you so much for watching, and I'll see you on GitHub. Thank you, Joshua, for this talk. It's been really great to learn about your experience, and I'm really happy to see that you at Facebook are actually sharing that knowledge. This is not something everybody would do, because to a certain extent this is your business domain, maybe not something you want everybody to know about, so I'm happy you're sharing this knowledge with the community so we can all learn from you. I also wanted to thank you for being around with Callstack and React Native EU ever since, I guess, we started; your support has been great, and I'm so happy we can have this conference as a venue for you to share those breakthrough updates. Thank you, and I'm looking forward to next year's edition already. Now let's move on to the next speaker, Calif from Microsoft, who is going to talk about React Native for Windows and macOS. You probably heard about React Native for Windows quite a while ago, but maybe you haven't heard that it's been rewritten to a new version, and that folks at Microsoft then added the macOS part. Now, Windows and macOS didn't have an easy start, because React Native has always been about Android and iOS, so making sure that the React Native architecture is ready for
another platform has always been a challenge. We've seen that in the React Native CLI, for example, which I'm contributing to, but also in other parts. So they've been doing very hard work not only supporting Windows and macOS but also backporting the features they build into React Native itself. Today Calif is going to share their experience creating that framework and how they enable their teams to build great applications. I'm excited to see what they're about to share with us, and I'm expecting some interesting use cases of how React Native works in desktop applications in both macOS and Windows environments. So let's learn about that, and Calif, good luck. Hi everyone, I'm Calif, and I'm really excited to be here to talk to you about React Native for desktop platforms. A little bit about myself: I'm an engineering manager at Microsoft, and I work in the Office division. Before becoming an engineering manager I was an engineer, working primarily on accessibility, and I've been in the React Native space for the last couple of years now. I really like how, with React Native, you can learn the technology once and then run your apps anywhere you need to. You can imagine that for applications like Office such technology is really important, because our code base has millions of lines of code, so we really benefit from being able to share code across multiple platforms and reapply the same technology everywhere. We've seen a lot of engagement around React Native at Microsoft: there are multiple apps and features that have been built with it, both brownfield and greenfield, and we have a nice little ecosystem that keeps growing every day. It's a very exciting place to be, and I'm glad to be here to talk about how we are evolving the desktop platforms. So
here's what we'll talk about today: first we'll go through the recent investments in React Native for Windows and macOS, and then we'll talk about some applications that have been built on those platforms. So let's jump in and talk about React Native for Windows and for macOS. As you probably know, this has been a journey for Microsoft for a few years now, with a lot of investment to bring the power of React Native to the desktop platforms. It's been a really nice partnership: we've worked very closely with Facebook to evolve these platforms, with lots of design meetings, discussions, and alignment on where we're going. We've made a lot of improvements over the last few years, and the fact that multiple applications are now shipping on top of the platform is really encouraging for the future. So let's talk about the investments we've made. We typically bucket them into three areas. The first is the core runtime itself: think of this as the core of React Native that runs on user devices and lets you build your experience on top. Then we'll talk about some UI frameworks we've built on top that let you augment the default experience provided with React Native, and finally we'll talk about some tooling that is making our lives easier, and hopefully will make yours easier as well. Let's start with the runtime, beginning with React Native for macOS. The major highlight for macOS is that we recently released version 0.63 of the platform, with several improvements including support for SVG, tooltips, and the key view loop. We're currently working on version 0.64 and expect it to be ready in the next few months. There have also been a lot of really nice additions to community modules for
macOS, so if you're interested, head over to reactnative.directory, filter on macOS, and you'll see all of the modules that are currently supported. When it comes to React Native for Windows, we've released version 0.64 and we're currently working on version 0.65. One thing I'd like to highlight is that we were able to release React Native for Windows 0.64 at the same time as version 0.64 of the iOS and Android platforms. This was very important to us, because we want to make the upgrade process as seamless as possible and allow people to upgrade the desktop and mobile platforms at the same time. We spent a lot of time getting Windows up to date and current with React Native, and were then able to release 0.64 simultaneously. We're now working on getting macOS there as well, so that once we're done you'll see all four platforms released at the same time. Next, we now have support for React Native for Windows in Upgrade Helper. If you're not familiar with it, Upgrade Helper is a tool where you plug in your current version of React Native and the version you'd like to upgrade to, and it shows you what changed and how to update. It's a really cool tool that makes upgrading a lot easier. We've also improved the developer experience. You're probably familiar with CodePush already: that technology was available on iOS and Android, and what it does is let you update your bundles over the air without having to redeploy your application. It's very powerful, and it's now available natively in React Native for Windows as well. CodePush itself is part of our Visual Studio App Center offering, and if you head over to microsoft.github.io/code-push you'll get more
information on how to get started and leverage it. The next thing I want to talk about is documentation. This is one place where we've spent a fair amount of time improving what we already have, and we definitely want to spend more time there, getting the whole spectrum of our documentation out for you to use. Right now the biggest improvement is on the native API side: if you head over to our documentation site, you'll see a lot more native APIs documented, including how to use them if you need to. We also want to make it easier to use React Native across multiple platforms, and to do that we've increased the parity between Windows and iOS and Android; there are new props and APIs supported, like AccessibilityInfo and Platform.Version, with more coming. The next thing is build time. We now have experimental support for binary distribution: instead of consuming React Native for Windows as source code, you can consume binaries that have already been built and just start building your application on top. We've been experimenting with this and have seen really good improvements to both build time and disk usage; in our tests, the build was about 80% faster and disk usage was about half of what it used to be. This is still experimental and we're working to make it production quality, but you can give it a shot and see if it works for you. Then let's talk about Hermes. I'm sure everyone has heard by now how Hermes improved both startup time and memory usage on the mobile platforms. We've had Hermes in React Native for Windows for some time as well, and recently we've done a lot of work to make it easier to use. To use Hermes, all you need to do is turn on one build
flag, and we take care of the rest: we package the runtime, do the bytecode generation, package it with the application, and configure React Native for Windows to use Hermes instead of the default engine on Windows. What we've seen is that the performance gains are similar to the ones seen on the mobile platforms. It's also easier now to debug your application when you're using Hermes: we support direct debugging, and we've recently added support for profiling, so you can see how your performance characteristics are measuring up. Hermes is currently supported for our C++ applications, and we'll have support for the C# ones in the future as well. Then let's talk about community modules. These are of course super important in the ecosystem, and we've increased the number supported on Windows as well. A few are listed here to give you an idea; for example, react-native-sketch-canvas is one I'll come back to later in one of the demos. At this point I think we support around 40 community modules, and if you want to see the full list, head over to reactnative.directory and filter on Windows. So, in summary, with React Native for Windows it's now much easier to upgrade to recent versions: as I mentioned, we have version 0.64 up and running and are currently working on 0.65, which should be available very soon. We've improved the developer experience and the support for Hermes, and we've also built React Native for Windows on top of the WinUI library, so when we make changes to the UI layer you get them for free. That's what happens with Windows 11, where we rejuvenated the UI layer: you get that for free just by using React Native for Windows, which is pretty great. So let's switch gears and talk a little bit
about the UI frameworks. The first one I'll cover is FluentUI React Native. Right now this is mostly for early adopters: we have a few controls you can try and see if they work for you, and we're in the process of adding new controls and supporting additional scenarios. What it is: FluentUI React Native is a set of controls supported across multiple platforms, with the Fluent design system built in. The Fluent design system is essentially a set of styles designed to look great across all platforms, and it comes built into the controls when you use them. If for any reason you want to override the built-in design system to match what you'd like to render, you can do that as well. Head over to github.com/microsoft/fluentui-react-native to learn more, and I'll show you a quick demo of how this works. You have a macOS, an iOS, and an Android application, and what we're showing here is the difference between light mode and dark mode, as well as the ability to inject your own branding. For example, Word is typically blue and Excel is green, so you can customize it such that on Excel it looks green and on Word it looks blue, carrying that branding through those applications. Here are a couple of examples of controls we have today: these are on macOS in light mode, these on macOS in dark mode, these on iOS, and these on Android. Again, this is an area we're investing fairly heavily in, so you'll see more controls coming; feel free to give it a try and let us know what you think. The next UI library we'll talk about is react-native-xaml, a library that exposes the XAML UI controls to React Native for Windows. If you're not familiar with XAML, it's the native UI
framework that's built into Windows, and what react-native-xaml does is provide a projection of those controls into JavaScript. That allows you to write any of your view managers purely in JavaScript, without having to touch native code at all. Let me show you a demo of what this looks like. Here we're using the color picker from the XAML library: with just a few lines of code, about five or so, you can use the control inside your application and respond to the callbacks it provides. In this case we're rendering the control and then just displaying the RGB values that come back in a text block. It's a pretty powerful control, and very easy to use with just a few lines of code. react-native-xaml is intended for Windows developers who are already familiar with XAML and who are not planning to take their applications across multiple platforms. The third UI library we'll talk about is Babylon React Native. First, let's talk about Babylon.js. Babylon.js is a very popular JavaScript 3D library built on open web standards; it's completely free and open source, and several really nice models have been built with it. If you head over to the Babylon.js website, you'll see hundreds of examples already. Babylon is also supported on native platforms: Babylon Native brings the power of Babylon.js to those native platforms, and Babylon React Native is the integration of Babylon Native into React Native. What it provides is a React Native component you can use as a rendering surface for your Babylon model. It's currently supported on iOS, Android, and Windows. So over to a quick demo. This is,
for example, one that's actually running on Android, showing live reload working. You have a couple of models, and on the left side we're just switching between them: here's the first one, then the second, then back to the first, and so on. That's fully supported in React Native for Windows, and one of the experiences you can build with it is augmented reality. Here's the same model from earlier running on a HoloLens device: we can render it inside the space the person is seeing in augmented reality, which is pretty cool, and again that's running on top of React Native for Windows. So those were a few UI frameworks we provide. There's a lot of work going on there, so try them and let us know what you think; we're happy to hear feedback on your experience using them. Now let's talk a little bit about tooling, and I'll mention a couple of things. One is a test app we recently built for React Native on desktop, which simulates a real React Native application on multiple platforms: iOS, Android, macOS, and Windows. It's really nice because it abstracts a lot of the complexity of all those platforms for you, letting you just write your application on top fairly easily. The other cool thing about it is that there's a supported workflow that lets you quickly switch between React Native versions, which is important because you can try a newer version and see how your experience runs in a very quick and efficient manner. The link for that one is github.com/microsoft/react-native-test-app. The other one I'll mention is a repo called rnx-kit, also supported by Microsoft. It's a set of tools that we
built to help you maintain and build your React Native applications and libraries. It captures a lot of things we learned building our own applications, where we felt we needed small tools to help us, and all of those live in the rnx-kit repo. I've put a table on the right where you can see some examples of those tools, but if you're interested in learning more, we have a separate talk at the React Native EU conference that covers it: it's by Lorenzo Sciandra and Tommy Nguyen, called "Improve All the Repos", and I highly recommend checking it out. They go into a lot more detail about the tools in rnx-kit as well as React Native Test App. Cool. Next I'll talk about applications that have been built using React Native, maybe starting with one you might have heard about already: Facebook is building the new Messenger application on top of React Native for Windows and macOS. This is a great testament to the power of the platform: it lets you build your app once and have it running on both operating systems. Now I'll talk about a couple of others you may not have heard about. The first is the Pneumatics Backpack application, a digital classroom experience with integrated tools for students and teachers. The app was originally developed for iOS and Android, and then Pneumatics saw there was high demand for Windows as well, so they migrated the application to Windows using React Native for Windows. You can imagine these applications are really important right now, especially with people studying remotely and needing really good tools to do so efficiently. Here's a screenshot of the dashboard for Pneumatics; you can
see it's very detailed: it gives you an overview of the classes you have, the assignments you need to do, and so on. The next one is what Pneumatics calls its smart paper experience, and this one actually uses react-native-sketch-canvas. It supports touch and inking, and it lets you, for example, work through a long division, entering your content step by step. That's really nice because the teacher can then replay the steps the student did, identify areas where they might have made mistakes, and help them there. In some cases these applications are optimized for touch, as when you're running on Android, and when you're running on Windows with touch support you can use that as well. The next one is called smart label, which lets you take a static image and add labels to it dynamically, another cool experience for kids so they can label things as part of their workflow. Cool. Then let's look at a second application. This one comes from Mashreq Bank. Mashreq is the oldest bank in the United Arab Emirates, with offices on several continents. Here's the situation Mashreq had: they already had an Android application used by their frontline workers to interact with customers, and what they needed was a similar application for Windows, because part of their workforce was using Windows devices. So they used React Native for Windows to stand up that application. What's great here as well is that on Android the application is optimized for touch, and on Windows it's optimized for mouse and keyboard, thanks to the built-in support we already have there. The team was able to do this while sharing the majority of the code between the Android and the
Windows application, which is a really nice win as well. This is a screenshot of what they built: this is the login part of their experience, and once a user is logged in, you can see what a sample customer information page looks like. This is also very rich, and again built on top of React Native for Windows with a lot of code shared between the Android and Windows implementations. Then let's talk about how Microsoft is using React Native on desktop platforms. We've got several apps; as I mentioned, we've been shipping both brownfield and greenfield applications, and at this point we're shipping to hundreds of millions of customers around the world. We'll talk about Office first. Office builds brownfield experiences into the big client apps, so think Word, Excel, PowerPoint, and Outlook: each has at least one experience running React Native today, shipping on iOS, on Android, on macOS, and on Windows. As a couple of examples, this is a revamp of our commenting experience. Previously, when you commented in Word, it was inline on the canvas; we've now moved it to a pane and provide a very rich experience where you can comment and interact with other users who are looking at the document, and this is a screenshot of what that experience looks like. The next one is our privacy dialog, again completely built using React Native: this is the dialog you get when you sign into Office for the first time to set your privacy settings. And then this one is what we call the unity canvas, a place where we can advertise some really cool features of the application that we'd encourage people to use. Those are examples for Office, and then we can talk about React Native for Windows on Xbox. You might have seen these screenshots already, because we've shared
them before, but I just wanted to show the richness we have here. There are multiple experiences on Xbox today shipping on React Native, and if you have a fairly recent version of the operating system and an Xbox at home, chances are you're running React Native on it. Here's an example: when you get to Xbox events, this page is built using React Native, and once you go inside and start looking into the events, you get into a new React Native for Windows view that you can interact with. Here are a couple of additional screenshots showing apps you can purchase and a set of search results. We've had really great feedback from our Xbox colleagues on their usage of React Native; they highlighted a lot of really good performance improvements, and if you use this experience on the Xbox, you'll notice it's very, very smooth. It was a great experience for us. So how do you get started writing your own React Native applications? The first thing to do is head over to aka.ms/reactnative, where we have a bunch of getting-started guides. You can start writing an app from scratch, or you can port an existing Android or iOS app over to Windows and macOS, and we'd be very happy to hear how your experience goes and what you've been able to do. You can always connect with us on GitHub if you have any questions we can help with. As a short demo, I went over to that website, and in the interest of time I already set up an environment. Here I am on my macOS laptop, running a virtual machine for Windows, and what you can see is our test application; when you go through the getting-started experience, you'll get this test application. The left one is the one for Windows and the
right is the one for macOS. I have the development experience set up, I already downloaded the prerequisites, and I have my bundler running, so what I will do next is go ahead and show you what this looks like. On the Mac you will use Xcode, and we have live reload working, for example. Here I'm increasing my font size from 24 to 48; once I save, you can see the live reload working pretty quickly in the test application, and that's all fully supported. The same thing works on React Native for Windows as well. Here I'm using Visual Studio — you can use Visual Studio Code as well. Let me just make this a little bit bigger and try the same thing: change the font size and then save. The new bundle will get pushed there, and here you can see on the left that the font just became bigger. So you can try this as well; it's very easy to get started and write an application on both of those platforms. Give it a shot, let us know how it is, and we'll be very happy to hear what your experience was like.

And with that, that's the end of my presentation, so I'd like to thank you for listening. Thank you also to Callstack for setting this conference up; it's really great to have everyone together to talk about those experiences, and I know it takes a lot of time to get those conferences up and running, so thank you again. If you want to get in touch with me, my contact info is here on this page, and on the left I have a link to all of the repos that we talked about in this presentation. Again, if you want to get started: aka.ms/reactnative. Give it a try, let us know how things go, and again, thank you for listening and enjoy the rest of the conference. Bye!

Thank you, Khalif — really exciting talk! And honestly, I'm so happy that there is macOS and Windows at the same time, because I can kind of
make Windows apps while developing on macOS at the same time, so I don't have to deal with those VMs now, I guess. You know, this has been a very, very intense first block, so I guess you are all probably excited for what's next to come, but before we do that, we have a 15-minute break for all of you to maybe get more coffee, have a rest, and, most importantly, open up Discord, go to the React Native EU channel, and just talk with each other. Maybe you have some interesting insights about the talks that you have just heard, or maybe you want to ask questions to the folks at Microsoft or Facebook or Expo about the things that they have shared with us. So just take the most out of this time, and we will see each other in 15 minutes from now.

This conference is brought to you by Callstack, React and React Native development experts.

[Music] [Applause]

Hello! I'm Mike, co-founder at Callstack, and today I'm looking for the best React Native developers to join my team. Besides working on high-end software used by millions, we also contribute to many open source projects, such as React Native Paper, React Native Testing Library, or Re.Pack, so you will have an opportunity to develop your skills and knowledge within these projects, as well as bring your own ideas to life by taking part in our R&D program. We are a great team, full of people crazy about React Native technology, and we can't wait to share our knowledge and experience with you. Trust me, it's great to be part of such a team, so don't wait anymore: join us, check out the job description in the link below, and apply. I'm hoping to see you soon in our Callstack office, or maybe remotely, depending on your location. Bye bye!

[Music]

Hello everyone, this is Luca speaking. I have the pleasure to co-host this fifth React Native EU conference live from Wrocław, Poland, from our Callstack office. I hope you're having a great time
today. We just had our icebreaking session, and we talked about cross-platform development and the architecture of React Native applications. In our first session we had great speakers: Mark from Expo, Joshua from Facebook, and Caleb from Microsoft — thank you, guys, for doing this. I want to remind you all that all of the talks will be available on our YouTube channel, Callstack Engineers, so please check that out. Join us also on Discord, where we can do some networking and you can ask questions that will be answered either by our speakers right now or addressed in our next React Native Show podcast. You can also — and you are encouraged to — tweet about this conference using #ReactNativeEU, so that you can let your friends and family know that you are here with us today. And that was all from this break; let's have a cup of coffee, and we'll see you in a few minutes for our next round of talks.

[Music]

Wake up in Wrocław, get up and get at 'em / React Native EU is on and it's happening / Check in to check up on all of the latest news and strategy / Man, it's kind of the greatest / And welcome to React Native EU 2019 [Applause] / Keynotes that unlock a new world of insight / Lightning talks with cutting edge, it really seems so right / Networking with everyone in the React Native community / From the thinkers to the linkers, we're all here in unity / Q&As that dive deep into code DNA / And it's all covered here in only two days / Especially when it comes to trouble, after the last session we're gonna party / Okay, food, drinks with good vibes and dope karaoke [Music] / High fives all around, it was good to be here [Music] / Can't wait till the next one, we'll see you next year

[Music]

This conference is brought to you by Callstack, React and React Native development experts.

[Music]

Welcome back after the break. We are after a quite intense series of talks about
React Native core features and what's here to come, so now let's focus on something that you can probably all relate to, which is application development and the features or libraries that we are using every day. The next two speakers are from Callstack, so these are talks I'm particularly excited about. The first speaker is Alexandra, and she's going to talk about game development with React Native. Now, there have been quite a few talks about game development with React Native at React Native EU in the past, but this one is different, because it's about using Unity 3D. I've also done Unity development quite a while ago, but my prototyping with React Native failed, so that's why I'm particularly excited to learn more about what she has to tell us today and how she succeeded on her journey developing games with React Native and Unity. So let's learn about that and see how she made it happen.

Hello everyone, I hope you are all having a great time at React Native EU 2021. I'm Alex, and I'm here to talk to you about game development; the exact title of my talk is "What if I want to be a game dev?". I would like to start with a quote by Ernest Cline, from the book Ready Player One — maybe some of you have heard about the book or the movie: "Being human totally sucks most of the time. Video games are the only thing that makes life bearable." Okay, that quote is kind of a bummer, but video games are really a great way to spend your free time, right? And we do play a lot of games. That's why I would like to talk to you about developing games, because, you know, somebody has to create the games that we play. My talk will be divided into four parts. First, I'll do a little introduction about myself and about the reasons for making this talk; I will then go on to presenting your options — what can you do if you want to develop games; then I will concentrate on a game engine called Unity; and lastly I will talk about integrating Unity with a React Native app.
I'd like to start off with a little introduction. As I said in the beginning, my name is Alex, and this is actually my second talk at React Native EU: I spoke last year about animations in React Native apps, and during that talk I showed you a little preview of a small app that I worked on. I'm very proud to say that that app is currently available on the Google Play Store and on the App Store. I would like to briefly add that publishing the app by myself was a challenge, but it's doable — so if you're out there on some solo mission and you're worried whether you'll be able to make it: you will, it's okay, you can do it. I like to push the boundaries of what I currently do in app development, so working on games is kind of pushing what we usually do in React Native apps. And last but not least, I recently joined Callstack, and I'm very excited about that.

Let's move on to the topic of this talk. I entitled this slide "Why so serious?", because we developers sometimes take ourselves a little bit too seriously. So let's talk about games a little; let's wake up our inner child. I would like to convince you that developing games is a great thing to do, because it creates new challenges in very new areas. It can become your creative outlet — you know, some people bake bread, some people make pottery; you can make games. You can learn new stuff: new programming languages, but also new concepts in programming; especially if you are self-taught, like me, facing new approaches to programming is very enriching. And maybe there's some game that you would like to play yourself but that doesn't exist yet, so you can go ahead and create it.

So now that you're super convinced that game development is fun and a good way to spend your time, let's weigh your options. You can take pretty much one of three routes. You can go with just plain old React Native — and it can be done in React Native; there are a lot of Flappy Bird examples of games developed in just
plain old React Native. I guess Flappy Bird is the game equivalent of the to-do apps from React.js tutorials. Another road you can take is using a library called react-native-game-engine; it's exactly what the name says — a game engine. It has some built-in functionalities, like loops or character movement, and it will help you start faster. Or you can try to integrate a full game engine, which can be super useful if the game you have in mind is maybe a bit more complicated than Flappy Bird.

Let's take a deeper dive into those three options. There are many games written in React Native alone. I myself saw a presentation of Flappy Royale on the stage of React Native EU 2019, written in React Native alone, and there's a Minesweeper game, a basketball game — you can find all kinds of great examples on GitHub. I would also like to recommend watching William Candillon's YouTube channel, called "Can It Be Done in React Native?" — and the answer is usually yes. What he did is he created a baby 3D engine, and he shows how to use it and how he built it with just React Native and Reanimated.

If your game is a little bit more complicated or complex, you may want to go with the open source library react-native-game-engine. This library will help you tackle some standard game functionalities, such as character movement or loops, and many, many others. But what if your game is even more complex? Then you may want to go with a game engine. There are many, many game engines out there in the world; two game engines that are really renowned are Unity and Unreal. Unity has a vast tutorial ecosystem, and it actually has existing integrations with React Native. Unreal is a famous game engine that has great 3D rendering, and there is an Unreal.js plugin where you can use the Unreal Engine and write all your code in JS. So why am I talking about Unity here? Unity is simply my personal preference. To be honest, both of those game engines have editors which have a little
bit of a learning curve; it takes some time to understand all the fields — and there are a lot of fields — and you have to put some time in. So I put my time into Unity, and what I got back was really satisfying for me, and that's also why I wanted to present it to you today.

Here you can see a few-second GIF of a game I created thanks to a tutorial on Unity. It was a 14-hour tutorial that I took a couple of hours a day for a week, so after a week I had this complete game, with Ruby, the main character, who is animated and moves around. She can get damaged, she can eat her little strawberry to get her health back — you can see her health in the corner — and she can talk to other characters. This was all done in just a few days with the help of a very nicely done tutorial, and all the assets were free. It was a really fun way to learn Unity, and at the same time I ended up with a game that I can play, that I can show to my friends and family, and that I can build out: I can create a whole world now and make this a really big game.

Here is a list of some key points which convinced me to use Unity. As I said, Unity has a lot of tutorials; some of them are in-editor — very comfortable to use, with highlights of the buttons you have to click to achieve the goal of the tutorial — as well as tutorials on the website. The website also contains learning paths, where by completing tutorials you earn points, which is a pretty nicely gamified solution for learning. There are a lot of free assets to start off with — there are a lot of paid assets as well, but if you don't want to invest any money, you can start off with a bunch of very nice free assets. Unity is great for 2D games as well as for 3D games, it is used for some very big titles, which I will mention in a second, and there are actually a few React Native integration libraries out there that you can use.

Here are a few notable titles that use the Unity
engine: it's used in Overcooked 2, Fall Guys, or Subnautica, and, as far as mobile games are concerned, Unity is used for Super Mario Run, Monument Valley, Angry Birds Epic, and many, many more. If you noticed Super Mario Run here, I can add that Nintendo actually uses the Unity game engine for all of their mobile games, and that includes Animal Crossing: Pocket Camp, Dr. Mario World, and Mario Kart Tour — quite a few titles out there.

So now I have convinced you that game development is great, and I hope I convinced you that Unity is great; the last piece of the puzzle is talking about the marriage of React Native and Unity. I believe they make quite the power couple. Integrating React Native with Unity can be done in two ways, on a high level: you can have a React Native app and integrate a Unity mini-game or some Unity functionality into your app, or you can have a Unity app into which you integrate React Native screens and functionality.

Let's start with the first approach: we start with a React Native app and integrate a little bit of Unity goodness inside of it. On a high level, it takes three steps: you start by creating your React Native app, then you can use Unity as a Library — which is a new feature introduced to Unity in 2020 — or you can check out some open source implementations, and then you just sit back and admire your beautiful app-game.

Here are some examples of such integrations. Here you can see a shooter game which is inside of a React Native view — it's a pretty big, complex game, but at any time you can go back to your React Native app. And here you can see another example: a React Native app with a Unity game, but presented in a tab; if you look closely, you can actually even see there are buttons in the Unity view where you can change the background or the rotation of the 3D object. Can you imagine trying to render this 3D object in JavaScript? I
mean, that would be a lot of work, right? And in Unity it's pretty much a few clicks. Here you can see another example of an integration; it's different in the sense that the Unity game or view is in the background of your React Native app. And on the right you can see something that I find super exciting: it's an example integration made by the Unity team of using Unity as a Library inside a native app. They have a native app with a store — which we all know you can do in React Native — and then, at the touch of a button, you go into an augmented reality view powered by Unity, where your shopper can see what their item would look like on their actual desk. So, as you can see, there are a lot of ways to use Unity: you have virtual reality, augmented reality, mixed reality, 3D models, games — you know, the sky's the limit.

And this is my cat, who really wanted to participate in this talk. [Applause] I have to tell my cat to go away; he was not being very helpful. So the sky is the limit for all the different things you can do between React Native and Unity, and let's not forget that, thanks to those implementations, Unity and React in your React Native app can communicate both ways, which really opens up a lot of awesome doors.

Let's take a look at going the other way around and integrating a React Native app into a Unity game. The steps to accomplish this on a high level are pretty much the same: you start off by creating a Unity game this time, then you export it into an Android Studio project or an iOS project, and then you can integrate the React Native activity, or React Native as a module, inside of Unity 3D. The first option is what is recommended by the official React Native documentation — you can always integrate React Native into a native app, right? — but I read an article which presented an integration of React Native as a module into Unity 3D, and I have to tell you, it got me very excited about some other possibilities of
creating great experiences for gamers. Here's a short GIF which presents how such an integration could work. As you can see on the screen, you can just imagine your favorite game written in Unity: you click the store button, and you can stay in the game and still buy yourself maybe a cute t-shirt with your favorite character from that game. That creates an awesome opportunity to build very immersive experiences for our gamers and all of our users.

So that's it for me. Thank you very much for listening. I really hope I did convince you to try out Unity — and if you decide to try out Unreal, I will not be mad; it's a great game engine as well, and it also has a lot of tutorials. Just go out there and create something great. Bye!

Thank you, Alexandra, for this talk. Now, a fun fact is that Alexandra actually applied to React Native EU while not yet being a Callstack employee. You know, we'd been discussing her talk for so long that I guess our relationship turned into this kind of form where we are now working together, so it's kind of interesting how React Native EU can, you know, let you meet new people and sometimes make you work together.

Now, our next speaker is Satya, and Satya has been working with us for quite a long time, probably from the very beginning, and he's been very, very excited about open source — in particular about React Navigation. It's something he has been developing from the very beginning, ever since React Navigation came out into the open. Last year he was talking about React Navigation's new features, and now he's going to talk about kind of the same thing — but worry not, it's not going to be boring, because there is a whole new version coming up, and that's what Satya is going to tell us about today: React Navigation 6. Let's fasten our seat belts and see what Satya has prepared for us today.

I'm Satya, and I'm the maintainer of React
Navigation. Today I'm excited to share React Navigation 6 with all of you. Navigation is one of the most important parts of a React Native app: it's one of the first things that users experience in the app, and that's why it's important to provide a smooth and native feel to the user while keeping things maintainable. Today we have React Navigation, which is one of the most popular navigation libraries, but it wasn't always like that. In fact, there have been many attempts at solving this problem: there have been a lot of libraries with a lot of different APIs and varying approaches.

I still remember when I first tried building something in React Native: one of the first things I was looking for was how to replicate that native feel. I wasn't searching for navigation libraries per se, but that's what I meant — that's the feeling I wanted to capture. Interestingly, React Native already included a navigation library when it first launched, unlike now. Initially there was NavigatorIOS. This was long before React Native added Android support, so it was iOS only: a fully native component which replicated the iOS navigation controller. It had a very small set of customization options — you could only change the header color, the font family, etc. — but since it was native, it was pretty smooth. This component is no longer in React Native core, but luckily this was just the beginning.

When React Native launched on Android, we needed a new navigation library, since NavigatorIOS did not work on Android, so Navigator was born. Navigator was written fully in JavaScript and did not use any native APIs. It was a little bit more customizable than NavigatorIOS, since you could change a lot of stuff, but there was one major drawback: it did not use the Animated API. The problem is that when Animated added support for the native driver, Navigator could not use it, because it used to call
requestAnimationFrame to drive its animations without using the Animated API, so the migration wasn't straightforward. We needed something new, and we got NavigationExperimental. It was built by Eric Vicenti from Facebook and was part of React Native core: a set of navigation-related components which were now built with the Animated API. It was very low level, which allowed you to build your own behavior on top of it — like your own navigation library. It was very customizable, but it wasn't easy to get started with. I remember when I was using this API in one of my apps, I had a lot of custom code to manage the navigation, because this library only manages the animation, not the logic.

Not long after, a new navigation library from Wix was born, called React Native Navigation. This is a fully native navigation library that works on Android and iOS; it is still maintained, currently at version 4, and is one of the most popular navigation libraries after React Navigation. It introduced the concept of registering screens as separate groups, so each screen acts like its own independent app. The data is still accessible from JavaScript — you can use JavaScript to do everything — but it still used some native APIs, and some things were not possible.

Next we got ExNavigation. This was kind of a continuation of NavigationExperimental by Expo, mainly built by Brent Vatne and other maintainers. Similar to NavigationExperimental, it used the Animated API for animations and was implemented in pure JavaScript. It was also one of the first navigation libraries which supported multiple patterns, such as tabs, drawer, etc., and it was very customizable and had a lot of features. But, as all good things do, this also came to an end — it wasn't forever.

The next big navigation library was Airbnb's native navigation. This was built around brownfield apps, to make integration with brownfield apps easier — which Airbnb did a lot — so it was very similar to
React Native Navigation by Wix: you have to register the screens upfront, like you do in Wix's library. Sadly, it's no longer maintained, since Airbnb is no longer working with React Native — but yeah, there have been a lot of ups and downs among navigation libraries in React Native.

All of these libraries have their pros and cons. I also didn't mention many libraries because they are not very popular, and I cannot mention every library that exists in this presentation. But then it was time to come up with something from the community — something maintained by people who actively work with React Native. So the community, Facebook, and Expo came together and built a new navigation library called React Navigation. It was very customizable and addressed many use cases, and it was written in JavaScript only, like ExNavigation and Navigator. But the main limitation of this library was its static navigation API, which prevented many use cases. It went through many major releases — the current version is React Navigation 4 — but the core API of defining the screens statically stayed the same, even as everything else changed. React Navigation 4 is still used in a huge number of apps, so we are not going to deprecate it anytime soon, because it would be terrible to force all of them to migrate to React Navigation 5, which has a completely different API. We are still merging pull requests for React Navigation 4, and while we are not adding any new features, it is still actively maintained.

After React Navigation 4, the next thing we had was React Navigation 5.
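To make the static API that the talk describes concrete, here is a rough sketch of how screens were defined in React Navigation 4 (the screen names and placeholder components are mine, not from the talk):

```tsx
import * as React from 'react';
import { Text } from 'react-native';
import { createAppContainer } from 'react-navigation';
import { createStackNavigator } from 'react-navigation-stack';

// Placeholder screens for the example
const HomeScreen = () => <Text>Home</Text>;
const DetailsScreen = () => <Text>Details</Text>;

// v4 and earlier: the route map is a static object, fixed up front
const AppNavigator = createStackNavigator(
  {
    Home: HomeScreen,
    Details: DetailsScreen,
  },
  { initialRouteName: 'Home' }
);

export default createAppContainer(AppNavigator);
```

Because the route map is frozen at definition time, the set of screens cannot change in response to state or props — exactly the limitation mentioned above.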
Even though it's still React Navigation, I thought I should mention it here, because it was a reimagining of the core API, so I think it's a big milestone in the landscape of navigation. One of the biggest changes was the dynamic API: instead of the old static API that React Navigation 4 used, React Navigation 5 had a new API which was fully component-based and dynamic, so you can change it at runtime depending on state or props. The core had to go through a rewrite to be able to support this, and it was a major undertaking. The migration from React Navigation 4 to 5 isn't straightforward, because the whole API for navigation changed — while some APIs remained, a lot of APIs are totally different — but I guess it was worth it, considering how many pain points it solved and how many new use cases it enabled, and people tend to find it a lot more intuitive than the previous API.

So now there is another major version. We introduced React Navigation 5 at React Europe two years ago; now it is time for React Navigation 6. Largely, React Navigation 6 maintains the same core API as React Navigation 5, and you can think of it as further polishing of the API on the code base of what was in React Navigation 5.
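The dynamic, component-based API introduced in v5 (and kept in v6) can be sketched like this — `HomeScreen` and `SignInScreen` are placeholder components of my own:

```tsx
import * as React from 'react';
import { Text } from 'react-native';
import { NavigationContainer } from '@react-navigation/native';
import { createStackNavigator } from '@react-navigation/stack';

// Placeholder screens for the example
const HomeScreen = () => <Text>Home</Text>;
const SignInScreen = () => <Text>Sign in</Text>;

const Stack = createStackNavigator();

export default function App({ isSignedIn }: { isSignedIn: boolean }) {
  return (
    <NavigationContainer>
      <Stack.Navigator>
        {/* Screens are plain JSX, so the list can change at runtime
            based on props or state — impossible with the old static API */}
        {isSignedIn ? (
          <Stack.Screen name="Home" component={HomeScreen} />
        ) : (
          <Stack.Screen name="SignIn" component={SignInScreen} />
        )}
      </Stack.Navigator>
    </NavigationContainer>
  );
}
```

Here the auth flow is just a conditional render, which is the kind of use case the static API prevented.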
When working on React Navigation 6, there were a few things we wanted to focus on. The first thing was flexibility. Many of the navigators accepted customization as props, which meant we could not customize them based on the currently active screen. For example, here is a video where we have bottom tabs with several tabs — we have Contacts, we have Albums. Say I switch to the Albums tab: you can see that the whole styling of the tab bar changes. Now we have a translucent, dark-background tab bar, and the highlight color of the tabs changed to white instead of blue. Previously this was not possible at all, because all of the options we wanted to specify were specified at the navigator level, and at the navigator level you can have only one set of props, so we could not change the background color when you switch to the Albums screen. To make this possible, we had to make all of these options customizable per screen.

This is how we did it: previously we had the tabBarOptions prop on the navigator; now, instead of tabBarOptions, these options are part of screenOptions. You can pass all of these things inside screenOptions, but you can also specify them in the options prop of a screen. screenOptions is the default for all of the screens in the navigator, and options is specific to the screen where you're using it. With this, you can have a set of defaults for the whole navigator and override those defaults per screen. We did the same thing for bottom tabs, material top tabs, and drawer: for all of them, instead of tabBarOptions or drawerContentOptions, you specify all of these options in screenOptions. It is also simpler for users to understand, because you don't get confused about what is a tab bar option and what is a screen option.

To make things more flexible, we also introduced a new library called @react-navigation/elements. React Navigation Elements is a set of components and utilities that we use in React Navigation, but
they're exported for use in any app or any library: if you're building a custom navigator, you can use these, or you can use them inside your app. We currently have Header, HeaderBackButton, HeaderTitle, PlatformPressable, and some other components, and we'll be adding more soon — if you have ideas on what to add, feel free to tell us. This is not a full-fledged component library; these are only components which are useful for navigation and for integration with navigators.

The next thing is that we wanted to simplify some of the API and reduce manual work. React Navigation supports a lot of customization, and it's possible to achieve many advanced things with it, but you have to tweak things a bit, you have to play around with it, because sometimes things may not be obvious. What we wanted to do was simplify some common tasks: for very advanced, niche tasks it makes sense that you will have to figure things out on your own, but for common tasks, if we can make it one line, it's much better for everybody. One example of this was the modal presentation style, which we introduced in React Navigation 5 and also in React Navigation 4 with react-navigation-stack version 2.
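As a rough before/after sketch of that modal setup — the v5-era manual options next to the v6 shorthand; `ProfileModal` is a placeholder component of mine, and the exact combination of v5 options varied by use case:

```tsx
import * as React from 'react';
import { Text } from 'react-native';
import { createStackNavigator, TransitionPresets } from '@react-navigation/stack';

const ProfileModal = () => <Text>Profile</Text>;
const Stack = createStackNavigator();

// React Navigation 5: the iOS modal style had to be assembled by hand
const v5Screen = (
  <Stack.Screen
    name="Profile"
    component={ProfileModal}
    options={{
      ...TransitionPresets.ModalPresentationIOS,
      cardOverlayEnabled: true,
      headerStatusBarHeight: 0,
    }}
  />
);

// React Navigation 6: one option does the same job
const v6Screen = (
  <Stack.Screen
    name="Profile"
    component={ProfileModal}
    options={{ presentation: 'modal' }}
  />
);
```

The v6 shorthand is exactly the simplification described next in the talk: one option, with everything else derived automatically.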
To use it, you had to specify the animation with a transition preset, you had to set cardOverlayEnabled to true, and you had to specify headerStatusBarHeight — you had to specify all of these things manually. Now, with React Navigation 6, all you have to do is specify the presentation, and everything else will be set automatically. You can still customize everything if you want to, but for the common case — the native modal presentation style from iOS — you just need one line.

And this is not all. Another common example: when you set a custom header on a stack navigator, you also had to set headerMode to 'screen', or, if you didn't do that, you had to handle the animation manually. While it's nice to be able to handle the animation manually, a lot of people got confused that they had to do that — they did not read the documentation, even though it was mentioned — and we wanted to reduce this confusion. In React Navigation 5 we could not do this automatically, because while the header is an option, headerMode is a prop on the navigator, and props cannot be changed per screen. In React Navigation 6 we moved headerMode to screen options, and since we did that, we are able to set it automatically when you specify a custom header. It now defaults to 'screen' when you have a custom header; you can still change it to 'float' if you need to, but the default will be 'screen' when you're using a custom header.

Another small thing: useHeaderHeight used to return zero if you had a hidden header. In React Navigation 6, if you use it inside a nested stack, and the parent stack has its header shown while the nested stack has headerShown set to false, useHeaderHeight will now return the height of the parent header, which is more useful.

Another thing is transparent modals. For transparent modals you also needed a couple of options; now we have only one
option, presentation set to transparentModal, and that's all you need for transparent modals. Of course you can customize them further, and there are examples in the documentation, but the simplest case requires one line. Similarly, we added a new feature to specify a background for bottom tabs: for example a blur view, a gradient, an image, whatever you want, without having to use a custom tab button or wrap the default tab bar. So we wanted to make these things simpler for users, and React Navigation 6 includes a lot of these quality-of-life improvements. Another API we looked at: a lot of people wanted to navigate from Redux middleware, or from other places that are not components. To do that, you had to add a ref to the NavigationContainer, and when you did that you had to manage some things manually: for example, you had to check whether the navigation ref was assigned, and you also had to add a delay to check whether navigation was ready before you could dispatch actions. Now we have a createNavigationContainerRef helper and also a useNavigationContainerRef hook, and they have all of this set up automatically: all you need is to check isReady on the ref and then call navigate on the ref, which is much simpler than before. Another thing we often noticed is that people nest their navigators a lot, sometimes out of necessity and sometimes for organization. We always recommend keeping nesting to the minimum when possible, for a couple of reasons. When you nest more, it becomes harder to do some things: with TypeScript, for example, there is a lot of complicated typing when there are nested navigators, and when you are navigating you have to write the screen name and then pass params where you specify the name of the child screen, so it becomes more complicated as you nest more and more. A flat navigation tree is much simpler to use.
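The navigation-ref pattern described above can be sketched like this; the module name RootNavigation is a hypothetical example, while createNavigationContainerRef and isReady are the real v6 API the talk refers to:

```jsx
// RootNavigation.js — hypothetical module name for this sketch
import { createNavigationContainerRef } from '@react-navigation/native';

export const navigationRef = createNavigationContainerRef();

// Callable from anywhere, e.g. a Redux middleware, not just components
export function navigate(name, params) {
  // isReady() guards against navigating before the container has mounted
  if (navigationRef.isReady()) {
    navigationRef.navigate(name, params);
  }
}

// In App.js the ref is attached once:
// <NavigationContainer ref={navigationRef}>...</NavigationContainer>
```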
Even with the same set of screens, another reason to avoid nesting is performance: when you nest more, performance decreases, because updating deep navigation trees is slow, and nesting a lot of views is not good for memory usage. In React Native there are layout views which are automatically optimized out, but if you're nesting navigators unnecessarily it's going to increase the view count, and there is not a lot we can do about that. So we wanted to minimize the need for nesting by providing alternatives. One of the use cases for nesting navigators was modal and regular screens. Earlier you had to have a modal stack at the root, and as a child you had a regular stack: you put all of the modal screens inside the modal stack and all of the regular screens inside the regular stack. It was not possible to mix modal screens and regular screens, so you had to have two stack navigators. We did some work to make it possible to use a single stack navigator with the two types of screens: now you can have one stack navigator, and wherever you want a modal you just pass presentation set to modal in options, and that screen will be treated as a modal. So in this case you don't need to nest anything; it's all possible in one navigator. But there is still the case of organization: maybe you like to keep all the modals in one group. So we introduced a new API to group things. Normally you could use React fragments for this, but fragments have a couple of readability issues: it's not obvious why you are using a fragment, like maybe you read the code and there is a fragment and you wonder why it's there; maybe if you add a comment it gets clearer, but still. Another problem is that fragments do not pass options to their screens: you cannot specify options on them like you can on your navigator. So if you have two
groups, and for all of the modal screens you want, say, a blue header or something like that, you cannot do that when you're just grouping with fragments. So we introduced a new API called Group. With Group you can group screens inside this Group component, and you can also pass common options via screenOptions to the group, like you can on a navigator, and you can even nest groups inside groups, so you can go as far as you want with organization. Groups don't actually render anything; like fragments, they are just used for configuration, so it's not going to affect performance like nested navigators do. Here you can see that we have two groups: one group for regular screens and one group for modal screens. Another common reason for nesting was headers. Previously, when we wanted to show a header in bottom tabs or drawer, we had to nest a stack navigator inside, because they did not render a header by default. What we do now is render the header by default in drawer and bottom tabs, and then you don't need to nest anything, because it's already there. For drawer we automatically add a button to open the drawer; for tabs we don't add any button, but in both drawer and tabs the header will show the name of the current screen by default, so you don't have to handle that manually. As a whole, I think it simplifies things; you can of course hide this header if you want, but it's there if you want it. So the next thing is native navigation. Historically, React Navigation and a lot of its precursors, like Ex-Navigation and NavigationExperimental, have been JavaScript-only, with animations and gestures and everything written in JavaScript. That works for a lot of apps, but heavy screens can suffer, so we also wanted to address this concern and promote the use of native navigation primitives. With React Navigation 5 we introduced the native stack package, which builds on react-native-screens, and we
also had a ViewPager-based backend for material top tabs, which used ViewPager2 on Android and UIPageViewController on iOS. So we wanted to promote the use of UINavigationController for stack on iOS and Fragment on Android, and ViewPager2 and UIPageViewController for material top tabs. In React Navigation 5 these were optional; in React Navigation 6 we now promote the native stack by default. If you go to reactnavigation.org and go through the getting started guide, by default you will set up the native stack. The JavaScript-based stack is still there and still maintained, but new apps will get the native stack by default, so you'll have better performance; if you need more customization, you can always go for the JavaScript-based stack navigator. I want to thank the react-native-screens team from Software Mansion, Krzysztof Magiera, Wojciech and Kacper: the native stack is possible only because of react-native-screens, and without their work it would not be possible. Similarly, I also want to thank the author of react-native-pager-view; it wouldn't be possible to have a native backend for material top tabs without their work. So thanks, thanks everyone, for making React Navigation better. The next thing we wanted to focus on is better types. React Navigation 5 had much better types than React Navigation 4, but some things, such as useNavigation, were still untyped by default: you could navigate to any random string and it would not give you a type error. We wanted to change that and make them more type safe. Now, in React Navigation 6, you do not need to annotate useNavigation to get autocompletion and type checking; it is typed by default. It's not fully automatic, because we cannot know all of the screens in your app automatically, so you still need to add one thing, but if you already have types for your navigation screens, it's just a few changes to make this work.
It works by defining a type for the root navigator, using a TypeScript feature called declaration merging; you can find more about this feature and how to use it in our TypeScript docs. The next thing we wanted to improve is developer tools. We now have a Flipper plugin for React Navigation. It only works with React Navigation 6, but it includes functionality similar to what is currently available in the Redux DevTools extension integration, which works with both React Navigation 5 and React Navigation 6. The problem with the Redux DevTools extension is that it is not our UI: we are just using Redux DevTools, so we cannot build anything we want. With this Flipper plugin, since it's fully custom built, we can add any new features we want; here you can see there is a linking tab which shows you the deep link structure. Another advantage of the Flipper plugin is that it doesn't need the Chrome debugger. When you are using the Chrome debugger, some things may not work properly, so the Flipper plugin gives you a better idea of how things are actually working, instead of the Chrome debugger, which may break things. Unfortunately this Flipper plugin doesn't work in Expo, but it works in other React Native apps. So, as with every major release, there is still the question of upgrades, because everyone dislikes upgrades. React Navigation 6 isn't a huge change over React Navigation 5; it's a much smaller change than React Navigation 4 to React Navigation 5.
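Stepping back to the typed useNavigation setup mentioned above, the declaration-merging pattern can be sketched like this; RootStackParamList and the screen names are hypothetical examples, while the ReactNavigation.RootParamList interface is the documented extension point:

```tsx
// Hypothetical param list for this sketch; yours lists your real screens
type RootStackParamList = {
  Home: undefined;
  Profile: { userId: string };
};

declare global {
  namespace ReactNavigation {
    // Merging into RootParamList makes useNavigation, Link, etc.
    // typed everywhere without explicit annotations
    interface RootParamList extends RootStackParamList {}
  }
}

// After this, navigation.navigate('Profle') is a compile-time error,
// and navigate('Profile', { userId: '1' }) autocompletes.
```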
But there are still a lot of breaking changes, so there are a few things you can do when upgrading to make things easier. First of all, all of these changes are listed in our upgrade guide. There are some mandatory breaking changes which you will have to handle if you are upgrading. We also have some deprecation warnings: you don't have to change those immediately, these things will continue working until the next major release, but they'll show deprecation warnings, which you can silence with the LogBox API. Another thing you can do to simplify this migration is to mix versions from 5.x and 6.x. Generally it's not something you'd do when starting a new application, but for upgrading it's perfectly reasonable and it will work. There are some caveats, though; please refer to the upgrade guide for the list of caveats when mixing these two sets of packages, and as long as you keep them in mind, you should be able to mix those versions. So go to our upgrade guide and read more about it: we have the full list of breaking changes, the full list of deprecations, and also the list of caveats when mixing 5.x and 6.x packages. I just wanted to mention a few more things before I sign off. We enabled GitHub Discussions on our repo, so feel free to go there and talk with other community members and the development team; please participate in these discussions, answer people, ask questions, or just share things you have built with React Navigation. If you want to help, you can help with code obviously, but there are a lot more things you can do: you can improve our documentation, answer questions on Stack Overflow or GitHub Discussions, triage issues, check which issues are valid, check whether a repro has been posted and ask for one if it hasn't, try to make a fix yourself, or review pull requests. We have a very small development team, so any help will
help us a lot and it will be very beneficial to everyone. Another thing I wanted to ask: we enabled sponsorships for React Navigation, so if React Navigation is valuable to you, if it brings value to your company, consider sponsoring us. We appreciate any financial contributions and sponsorships; you can go to github.com/sponsors/react-navigation, where we have a lot of different tiers, and you can sponsor whatever is comfortable for you. Okay, that's all for now. Thank you a lot, hope you have a nice day or nice evening, and hope you enjoy React Navigation. Thank you, Satya, for this talk; I'm really excited and can't wait to actually try these features in an application. Maybe this is a challenge for you, but let's see if we can challenge you to do React Navigation 7.x next year; if you submit a talk like that, I'm pretty sure you will be on our agenda as well. Now let's move to another speaker, Truls, who is working at Kron, and he's going to tell us about React Native going native. While this title may seem like something very generic, this talk is about a very interesting use case where sometimes you have to write a bit of native code while doing React Native. In their particular use case, they were building a podcasting application, related to music and audio streaming, and they had to do quite a few things natively to make sure that the application works properly with the system, reacts properly to native events, and most importantly works properly in the background. So there are cases when sometimes you have to go native, and his talk is about these use cases, when they happen and how to handle them. Let's see how they did that and how he made that happen in his podcast app. Hello everyone, today I'm going to talk about how you can leverage the native side in order to improve the performance, readability and maintainability of your React Native app. Some of you might know that Airbnb tried to integrate
React Native into an existing app, and Coinbase recently rewrote their app from a native app to a React Native app. Today I'm going to talk about another option, where we actually started off with a React Native app and moved some of it to the native side. A bit about me: my name is Truls, I work for a company called Kron here in Norway, where we make investments available for everyone. A shameless little plug: if that's something you're interested in, we are looking for React and React Native developers, so please reach out. Other than that, I've been using React Native since version 0.8, when upgrading to a new React Native version was actually painful and only iOS was supported. I've also built and maintained several React Native apps over a period of years, and the project I'm going to talk about today is a project called Shift that was started over three years ago. Before we dig deep into that, I just want to give you an overview of what this talk is going to be about. I will start off giving you an introduction to the Shift app, so you can get better context of what the app is about; then I'm going to show you the use cases where we moved code from React Native to the native side; then I will talk a bit about the motivation for why I wanted to have this talk in the first place; then we will go through the use cases; and then I will close off with some closing thoughts. So let's dig into it. The elevator pitch for Shift is that it's a workout app that lets you shift between podcasts and music. It all started over three years ago when a friend and I wanted to carve out more time for podcasts in our daily life. We figured that listening to podcasts while we work out is a good start, but you can't really listen to podcasts when you are working out hard, while the rest times in between worked flawlessly. So what we did is build an app that plays music and podcasts, all controlled by a timer: if your work interval is four minutes, it
will play four minutes of music; when the four minutes are up, the app will automatically shift to podcast, play podcast for your rest time, and then shift back to music when your rest time is up. So I guess that's enough talk; I want to give you a short intro to the app as well, just to make it easier to understand what I'm talking about. So let's go. This is the front page of the app: you have a podcast section and a music section. For music you can either use Spotify or our own curated mixtapes. For podcasts it's basically just a normal podcast app where you have your queue, we have a set of discover episodes, and you can also import podcasts from other podcast apps. But the most important thing for this talk is the workout screen, so let's start a workout and jump right into that. As you can see, I've now started working out, I'm in resting mode, and I'm listening to a podcast. We have a couple of settings, and some of them are actually quite important for this talk, so remember them for later: we have a setting to play a beep when there is a given amount of time left of your rest or work period, and we also have the same for speech. Now, I don't know if you saw it, but the timer just ran out and it automatically shifted to music, and now we're listening to Club Life by Tiësto. Another thing to remember for later is that we also have a notification that shows the time left and what is currently playing. Now the timer will run out again and it will automatically shift to the podcast, and you can see that the notification is also updated. We also have a feature where you can pause, or what we call freeze, the workout, and you can unfreeze it; these are features that I will break down later in the use cases. So I guess that's enough of that; let's dig back into the presentation. The motivation for me to do this talk was that when we started thinking
about moving stuff from React Native to the native side, there weren't a lot of good resources out there. There are resources on how to create native UIs and integrate them into your React Native app, and there are resources on how to create a bridge module, but we didn't find any good resources on why you should do it, and that's essentially why I wanted to have this talk here today: to try to give you a tool in your toolbox that you can use if you experience the same things that we have experienced. And I want to give a shout-out here to my co-founder Michael Gundersen; what I'm going to present today is a team effort that we have done all together. So let's dig into it. Before I start with the use cases, it is important to know what the bridge in React Native is, because that's essentially the whole point. In React Native, for you to communicate with the native side, you go back and forth over a bridge. An example of that: if you create a View and you, for instance, have a style, that will be passed over to the native side and rendered on the native side; and if you have inline styles, for instance, React Native needs to serialize them and pass them over the bridge, which demands more of the bridge. If you have ever tried to console.log a property of your stylesheet, you will have seen that it's a number, and that's a performance optimization. So the majority of the time everything runs smoothly, just like on a real-life bridge, but you can occasionally get traffic jams, and that's essentially what I'm going to show you here today. We have this concept of shifting, and one thing that I actually didn't show you is that you can also press this button and it will automatically shift to the other audio source. If that happens, it is caught by the JavaScript side, and it goes over the bridge and pauses the podcast, because for how we play podcasts, we're using a package called react-native-track-player on the native side. So we need to go over
a bridge and pause the podcast; that in return resolves a promise when the podcast is paused, and when that gets back to the JavaScript side, then we can start playing music, and the music does the same. For music we are using a package called react-native-spotify to play Spotify, and for mixtapes we have a package we built ourselves. Then the same process happens here: it resolves a promise over the bridge when playback has started, and we can finally update the UI. So you can see we are going back and forth over the bridge a fair bit. And this is actually the best case scenario for us; it gets worse. A lot of clients said that they don't want to click on this button in order to shift if they haven't set a timer, so we implemented a way for them to do that with headphones: with headphones you can press play or pause and it will automatically switch. That means we have set up headphone events on the native side, which catch when the client presses play or pause, and that is emitted over the bridge, and then it starts the same process. And the same happens for the beep that I showed you, and also the Android notification: in order for us to actually communicate that we're going to play a beep, we pass that back over the bridge in order to play the actual beep. So now it starts to get a bit crowded on the bridge, but it's not much, and it works perfectly. What I'm going to show you next is our worst use case, but before we get there, I want to give a bit of background on what it actually is. I don't know how many of you have tried to run a background timer in React Native while the app is in the background; it seems like at least 3,000 people have had this problem, or at least looked for a solution. And what Jose is mentioning here is that he tried to use a package called react-native-background-
timer, which we also tried to use, but it had some bugs at the moment, and unfortunately the same happened for us. But being the good samaritan that Jose is, he also answers his own question: he computes the duration between two dates. He takes the timestamp when the app does something and when it finished, and then he calculates how long the duration has been, and that's essentially what we do as well. But we have one more problem, and that's this timer loop: we need to check every second what is going on, and we also need to update the UI every second. So every second we are emitting an event over the bridge to the JavaScript side, we update the JavaScript state and update the UI, it checks the counter, and if the counter is zero it will automatically switch. We also persist some data every 15 seconds; for instance, we persist the podcast position, so that when you close the app and open it again, you will start where you left off. And now, as you can see, we are sending an event over the bridge every second, and that starts to bloat the bridge a little bit. So how did we solve this? I recently stumbled upon a table in the React Native documentation, and this is quite important for this talk as well, because React Native lists a couple of components and what they are in the native world, and that means that React Native is actually turning your views into native views. Why that is important is because that's what we are going to do as well: we are going to create our own native views in order to solve this problem. So as you remember, this is where we were: we were emitting an event every second over the bridge, and on top of that we would go even more over the bridge. What we did is we took the duration and the countdown timer and we moved those to the native side, and with that we could get rid of all the calls going over the bridge.
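The timestamp-based approach described above, deriving elapsed time from dates instead of counting timer ticks, can be sketched in plain JavaScript; the function and variable names here are hypothetical, not from the Shift codebase:

```javascript
// Compute a countdown from timestamps instead of accumulating ticks.
// Only the start timestamp is stored, so the remaining time is always
// derived from the current clock — it survives the app being backgrounded.
function createCountdown(durationMs, now = Date.now) {
  const startedAt = now();
  return {
    // Remaining milliseconds, clamped at zero
    remaining() {
      const elapsed = now() - startedAt;
      return Math.max(durationMs - elapsed, 0);
    },
    done() {
      return this.remaining() === 0;
    },
  };
}

// Example with an injectable clock, so the logic is checkable:
let fakeTime = 0;
const clock = () => fakeTime;
const countdown = createCountdown(4 * 60 * 1000, clock); // 4-minute work period
fakeTime = 3 * 60 * 1000; // 3 minutes pass (e.g. while backgrounded)
console.log(countdown.remaining()); // 60000 ms left
console.log(countdown.done()); // false
```

The injectable `now` parameter is just for testability; in the app it would default to `Date.now`.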
So essentially what it looks like now is that we have a main counter, which is basically the same as the timer loop from before, and it emits an event to a counter UI, which is responsible for just displaying it. Then we have something called the ShiftMaster, which is responsible for the notification that I showed you, for playing a beep whenever the counter is at the threshold set by the user, for playing speech, and also, when the counter reaches zero, for automatically switching. It also has control over which audio is playing and has a reference to all the players. So this is essentially the architecture we went for, and now we are not using the bridge at all. But another great thing that came out of this was that our code got much cleaner and the readability got much better, so I'm going to show you an example of that. I've heard that you might not be able to see this, so I'll try to go through it briefly. This is the freeze, or pause, mode that I showed you earlier. The thing is, when you press pause and start to freeze, we need to get the current timestamp and also do some calculations based on previous timestamps; then when you unfreeze, it uses those timestamps to calculate what the current countdown and duration are. This was quite error-prone, and you can see we also have some reporting here that reports to Sentry for us. This was really hard for us to debug, because we had events coming over the bridge every second, and it was very hard to reproduce events coming in a certain order. After we did this rewrite, this is essentially what it looks like now: we just have a basic counter, like the one most people start with when they learn how to program. It has given us so much more confidence in the code, and this is just one of the examples of where our maintainability and readability
increased quite a lot. So I just want to emphasize some of the key points of this talk before we finish, and I hope that if there's one thing you take away from this talk, it's that you have another tool in your React Native tool belt when you are approaching problems. This is the first time I've ever experienced a problem like this, and I don't think you will reach for this often, but it might happen, and as I said in the motivation part, that's why I wanted to have this talk in the first place: to show that this is actually possible. We also did this change after three years of maintaining the app, so we knew the ins and outs of the app, we knew where we had problems with bugs, where things were hard to reason about in the code, and where things felt a bit sloppy. The reason that is important is that when you start moving stuff from React Native to the native side, you lose one of the biggest strengths of React Native, which is learn once, write anywhere, because now we essentially need to write every feature for all the platforms that we support. So I don't think this is something you should reach for whenever you have a problem; you really need to think it through and make sure that this is something you would like to maintain for the lifetime of the app as well. But another important point is that our app did not perform poorly because of going over the bridge as much as we did: our app was chosen as App of the Day by Apple, we were regularly promoted in the App Store as well, and we were also invited into an Apple developer program. So I was actually very positively surprised by how good React Native is even when you're bloating the bridge, and how much the bridge actually can manage. So I'm saying that this might not
be something that you will do that often. And another thing: often when I speak about the rewrite to other people, a lot of them say that this is not for them, they are not comfortable writing native code, they are comfortable in the React Native world, and I was the same as well. So I want to share a couple of tips on how you can dip your toe into the native side. How we started is that we took a small problem and we moved it to the native side, and over time we saw how that went, and then we just started moving more and more parts. Now, each time I integrate a new React Native native package into my app, I always go and look at the source code and see how they structure their bridge and how they write their code, and then I use that as an example for things I want to get better at. And there are a lot of great resources out there on the internet on how to create native views for React Native and integrate them, and how to create bridges, and there are also great resources if you want to pick up Android or iOS development. At least for me, going into the native side has made me an even better React Native developer, because now I have a better understanding of how the native platforms work. So that's it; thank you for listening all the way through, and if you have any questions for me, please reach out on Twitter. Thank you, Truls, very much for this talk. Now, if there is a way we can make React Native Show, our podcast, play while resting, let me know; we'd be interested in making that happen. Now, moving on, we've got another speaker, Sanket from GeekyAnts. He's going to talk about something very important and often overlooked, which is accessibility. They have actually created a set of hooks called react-native-aria that lets you build accessible apps with less effort than usual, so something
that you probably should all be aware of and should be using to make sure that your applications actually work for people with visual impairments or any kind of disability that accessibility supports. So let's learn about their library, how they made that happen, and how we can benefit from it in our applications. Hey everyone, I'm super excited to be back at React Native EU, and today I'm going to talk about building accessibility hooks for React Native and web. I'm Sanket, I'm a software engineer at GeekyAnts, and I go by the Twitter handle @sanketsahu. You might have come across this tweet which went viral; yeah, that's me. I'm tuning in from the beautiful city of Bangalore; I wish it was an in-person conference, but here we are. This is a nice illustration by Abby Buck, so thank you so much for that. So let's get back to the topic and try to decode building accessibility hooks for React Native and web. First of all, what is accessibility? I went through the MDN docs and found this, which broadly defines what accessibility means: it is the practice of making your websites usable by as many people as possible. I think the same applies to apps too, so it would be websites and apps, making them usable by more and more people. That's a very broad definition, so if we narrow it down to what characteristics or support you need to make your websites and apps accessible, it would be something like this. It should be compatible with screen readers, so that people with visual impairment or low vision can use screen readers and understand what the app or website is all about; the same applies to contrast ratio for people with low vision. Then we have responsive design: the app or website needs to be accessible from the smallest mobile out there, up to the desktop and maybe to the
largest display out there, a 108-inch TV or something. And keyboard accessibility: I'm really fond of using the keyboard, I use Tab and arrow keys a lot, and websites that don't work with keyboards are really hard to use. So let's see what these things are and how we can enable them in our websites and apps. How do we build accessible websites? I'm going to talk specifically in terms of React, so straight to the documentation. The documentation says that web accessibility, also referred to as a11y, is something React fully supports: you can build accessible websites, often by using standard HTML techniques. So what are these standard HTML techniques? The W3C publishes WCAG, a guideline we can go through for all the things we can do. It's usually attributes that you add to the HTML markup that make a particular page or app accessible. Let's see an example: say there is an input box and I would like to make it ARIA-compatible and mark it as required for people who are using, say, screen readers. I can pass a prop called aria-required as true; this is a standard HTML attribute that is also supported in React. When we use this input box with a screen reader, it says that this is a required field. We can also add a label with aria-label and drop in some label text, and the screen reader reads it out loud. For ARIA, React and the HTML standard are more or less the same thing, but React provides reactivity on top of it, which makes it more interesting. So let's see that in detail: how do we create a component? A component can be created with these five steps that I always come back to: writing the markup, doing the styling, adding interactions like handling events, then handling state, and last but
not the least, accessibility. let's take an example of creating a checkbox component for the web. say i want a checkbox, so i'll wrap it in a label markup and then have an input of type checkbox with some text around it. as step 2 i would add some styling, just with class names here, no css-in-js. next we can add interaction: for example, on the right side you can see that as we click on the checkbox it shows an alert message; really standard stuff, input, onchange, alert, and that does the trick. next, let's make it talk to the state and let the state update the dom. how do we do it? create state, then on change set that state, and pass the same value into the checked prop of the input. so as we do it, we click on the checkbox, the state is updated, and the component re-renders. last, accessibility. we have props like aria-checked and aria-labelledby that we are going to use here. the same value that we passed to the checked prop of the input can also be passed to aria-checked, so with screen readers, or when we use the keyboard, this announces that the checkbox is currently checked; and the label reads out too, since we can set aria-labelledby to the 'i agree' label. one thing to notice here is that it is the react component state that is building the initial ui, and it's not just the visual ui it is building: it is also building the accessibility tree structure, something that can't be seen by users, only by screen readers. whenever the state is updated, that reactively updates the aria values as well. so react is working for both things, the visual aspect and the non-visual aspect, which is super interesting. and here we are using a standard input of type checkbox, so we don't have to do much, but we can also make use
of, say, a div, and make it behave like a checkbox by adding role equal to checkbox, plus aria-checked and aria-labelledby. so visually we can style a div like a checkbox, but to make the browser, and the users of the browser, understand that this particular div is a checkbox, we use the role property. great, let's move on. let's see accessibility for mobile, for ios and android, and to do that we will use react native. straight to the documentation: react native provides a unified api for ios and android to make apps accessible. for example, we can give a view, which is somewhat like a div, accessible equal to true. as we do that, screen readers start identifying this block of code as one accessible unit, and they read out text one and text two together. in this example both text nodes are not accessible directly, but the wrapper view is, and the screen reader concatenates everything inside it, like text one and text two, and reads that out. it also gets focus management and things like that, just by using accessible equal to true. anyway, we have more props that can be used: things like accessibilityrole, accessibilitylabel, accessibilitystate and accessibilityvalue. role, state and value, these three are very important: they let the end users understand what type of element they are focusing on, what its current state is, and what its current value is. apart from these there are more; the black ones on the slide are supported on ios whereas the green ones are for android. so let's create a checkbox component for mobile devices. standard react native: we'll use the pressable component, create a nested view inside it, add some text, then add an icon, and style it with the standard react native
stylesheet.create, and then handle the state: on press, handle the toggle, update the state, and also update the ui based on that state, with the checkbox-marked and checkbox-blank-outline icons. then we need to wire the same state to the accessibility props, so here we are using accessibilityrole as checkbox and accessibilitystate as checked. this makes it work for screen readers as well as for the visual aspect of things, and it just works. talkback: 'not checked, checkbox, i agree, checked.' so this is a quick example of how screen readers behave when we have accessibility enabled. how can we do the same thing for the web, or can we write a unified markup, a unified component, that works on both platforms? yes, it's pretty much possible: we can use react native web to map the same react native api onto the web, so let's see how that can be done. let's assume that this is our dream api and this is what we want: a checkbox component that works on all the platforms. it takes an ischecked prop, it has an onchange handler that updates the checked value, and it has text inside it that says 'i agree'. to build this, and to make it look the same on web, ios and android without losing the native accessibility capabilities on all these platforms, how can we do it? let's see. we can abstract out the ui part and create a component, say customcheckboxui; that's a dumb component: it takes checked and gives us the checked ui, and if we pass in false for the checked value it just empties out the checkbox. great. to make it accessible we have to write some conditional code. react native web handles a lot of it well, but there are cases where we will have to write something specific for the web, and the first thing that comes to mind is handling the keyboard versus handling the touch events. here's a quick example of how we can make the same checkbox ui accessible on different platforms. we write a conditional on platform.os equal to web, and if it is web we use the native input, with an onchange and type checkbox. this does most of the trick, and on the parent we have accessibilityrole as label, which makes it clickable on the web. here we're using the same customcheckboxui, and if we look at the input, it is wrapped in a visually hidden view, so it hides the real input box and shows the custom checkbox ui while maintaining the accessibility aspect of things, because the wrapper view at the top level has accessibilityrole as label. great. and for mobile phones we go ahead and add accessibilityrole as checkbox and accessibilitystate as checked. so on the web, where exactly are we handling the checkbox role? we are using a native input with type equal to checkbox, so that does the trick: we don't have to specifically announce that we are using a checkbox, and we just update the state accordingly. so this piece of code works on all the platforms: on native, react native does the trick, and on web the native input of type checkbox does. this is a very simple, high-level example of adding accessibility that works everywhere. there are more things to consider. the first one is definitely keyboard interaction. an example of this would be when we are building, say, combo boxes or select boxes: we focus them using tab, pressing spacebar opens them up, we use the arrow keys to move up and down, press enter to select a value, and then the popup closes. that's one example, and it is one of the hardest when it comes to adding accessibility to a custom combo box.
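the platform branch the talk describes can be sketched in plain javascript. this is a simplified model, not react-native-web's actual implementation: `getCheckboxProps` is a hypothetical helper name, and `Platform` here is a local stub standing in for react native's `Platform` module.

```javascript
// Stub standing in for react-native's Platform module (an assumption
// for this sketch; in an app you would import { Platform } from 'react-native').
const Platform = { OS: 'web' };

// Hypothetical helper: given the checked state, return the props to
// spread on the elements, branching per platform as the talk describes.
function getCheckboxProps(checked) {
  if (Platform.OS === 'web') {
    // on web, a visually hidden native <input type="checkbox"> carries
    // the semantics, and the wrapper gets accessibilityRole "label"
    return {
      wrapper: { accessibilityRole: 'label' },
      input: { type: 'checkbox', checked },
    };
  }
  // on ios/android, react native's accessibility props do the work
  return {
    wrapper: {
      accessibilityRole: 'checkbox',
      accessibilityState: { checked },
    },
    input: null, // no hidden native input element on mobile
  };
}

const webProps = getCheckboxProps(true);
console.log(webProps.wrapper.accessibilityRole); // "label"
```

the point of the shape: one component, two prop sets, and the native input only exists on the web branch.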
it also goes as a joke in the design systems community, anyway. then screen readers: we have to make components compatible with labels and all those things. and contrast ratio, the visual aspect of things: can those with low vision read the text, do we have enough contrast between the foreground and the background color? there is more than this, these are just a few points, and that's where the a11y project comes in. it's a very nice project: they have detailed all the different aspects of accessibility, and they also provide a checklist that any developer or team can go through to check whether they are compliant with all those things. they describe compliance in three different levels: a, double-a and triple-a. a is the essential sort of accessibility that all apps and websites must adhere to; this can be something like adding alt text, adding captions, adding placeholders and so on, basic things that are essential for any app or any website. then we have double-a, which is a little stricter than a: it includes the whole checklist of a but adds a few more things, for example that the contrast ratio of text versus background must be at least 4.5 to 1.
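that 4.5:1 threshold is computable. the sketch below follows the wcag 2.x formula: relative luminance on linearized srgb channels, then ratio = (lighter + 0.05) / (darker + 0.05); the function names are mine, not from any library.

```javascript
// WCAG 2.x contrast ratio between two sRGB colors ([r, g, b] in 0–255).
function relativeLuminance([r, g, b]) {
  const lin = (c) => {
    const s = c / 255; // normalize, then linearize the sRGB channel
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(fg, bg) {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)]
    .sort((a, b) => b - a); // lighter luminance first
  return (hi + 0.05) / (lo + 0.05);
}

// black text on a white background is the maximum possible contrast, 21:1
console.log(contrastRatio([0, 0, 0], [255, 255, 255])); // ≈ 21
// AA for normal text requires at least 4.5:1
console.log(contrastRatio([0, 0, 0], [255, 255, 255]) >= 4.5); // true
```

a quick check like this can be dropped into a design-token test suite so a theme change can never silently fall below the double-a threshold.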
so that's one thing; we also have subtitles for videos and a few other items, and this level is required by multiple government and public body websites. then we have triple-a, a very strong level of compliance: for example it requires a contrast ratio of seven to one, and any video on the website has to have sign language support for people who can't hear. so these three levels are really important to understand, and also to cross-check which category your app or website falls under. the a11y project also has a checklist; for example, for keyboard they have things like: make sure there is a visible focus style for interactive elements that are navigated to via keyboard; when you press tab, does focus follow the same order as the visual layout, or does it jump from the first element to the fourth and then back to the second; and a lot of things like this. it's all in their checklist, and they have a very nice write-up around it. same thing for mobile and touch: do we have horizontal scrolling, and if we rotate the screen does everything reorganize well? so go ahead and check out the a11y project. this brings me to react aria. this is a brilliant project by the adobe team: react aria provides a set of react hooks that anybody can use to build accessible ui primitives for their design system or component library. a huge shout-out to devon, who is heading this project at adobe. they also have another project in the same family; the parent group of projects is called react spectrum, which contains react aria and react stately. react stately is the state management piece of building these ui primitives; for example, it has usetogglestate as a hook. let's see one of the examples. so to build the same
checkbox using react aria, we can take usetogglestate from react stately and usecheckbox from react aria, then have a label and input and spread the inputprops that come from usecheckbox right onto the input element. this does all the trick of making that component accessible: with keyboard, with mouse, for screen readers, everything. also, these hooks are like service layers: they don't provide any ui. the keyboard handling, the mouse handling and the accessibility parts are all abstracted out, so they can be dropped into other ui components to make them accessible, and that's the beauty of react aria. so this is an example: we can go ahead and use the checkbox and it just works; it also gets all the props a checkbox has, like isindeterminate. great. then there's another project, called react native aria, which extends react aria's support to mobile phones. for this project, a huge shout-out to nishan: he has worked a lot on it, essentially as the sole developer, though there were people in the team who helped him out. so let's see react native aria and how nishan and the team have built it. react native aria is a set of react hooks to build accessible ui primitives for react native and web: a very similar idea, but it works for react native, and with the help of react native web it works on all the platforms. here's another example: we can import usecheckbox from react native aria and usetogglestate from react stately, and then spread the inputprops, which include onchange, onpress and accessibilitystate. depending on the platform it returns what we need: onpress works for react native and onchange works on the web. let's see.
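the platform-dependent props that such a hook hands back can be modeled without react at all. this is a simplified, non-react sketch of the idea only; `buildInputProps` and `makeToggleState` are hypothetical names, not react-native-aria's or react-stately's real api.

```javascript
// A tiny stand-in for react-stately's toggle state.
function makeToggleState(initial) {
  const state = { isSelected: initial };
  state.toggle = () => { state.isSelected = !state.isSelected; };
  return state;
}

// Simplified model of what a useCheckbox-style hook returns: one props
// object carrying the right event handler and accessibility attributes
// for the current platform.
function buildInputProps(state, platformOS) {
  const base = {
    accessibilityRole: 'checkbox',
    accessibilityState: { checked: state.isSelected },
  };
  if (platformOS === 'web') {
    // web: the hidden native input fires change events
    return { ...base, onChange: () => state.toggle() };
  }
  // ios/android: Pressable fires press events
  return { ...base, onPress: () => state.toggle() };
}

const state = makeToggleState(false);
const props = buildInputProps(state, 'ios');
props.onPress();               // simulate a tap on the Pressable
console.log(state.isSelected); // true
```

in the real libraries react re-runs the hook after every toggle, so the accessibility state is rebuilt from fresh state on each render.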
visuallyhidden, that's another component from react native aria: it hides the real input box. then the customcheckboxui can be used, and we just spread the inputprops received from usecheckbox onto the input; the same props also go on the pressable component in react native. so it handles most of the accessibility concerns, right from the focus ring to the keyboard interaction. again, a checkbox is a very simple example; it gets really complicated when we have, say, a menu, a combo box or radio buttons. here is the source code, at a very high level, of what this hook provides: it takes in all the props and the state, plus the input ref of the checkbox, and it returns the inputprops that are actually spread on the pressable component. if you look at the last slide, pressable spreads inputprops, so all these things are passed back to that pressable element, and it adds accessibilityrole and accessibilitystate, which are picked up by react native's pressable component. and on the web, react native aria has a usecheckbox.web.js file: it straight away imports react aria's usecheckbox and re-exports it. the source code of react aria's checkbox is very similar to this, but instead of react native specific roles it uses aria-checked. great. we have a lot more hooks available in react native aria. another example is usefocusring: this adds the focus ring on components, so when using the keyboard and pressing tab we get the keyboard focus ring, independent of where the mouse cursor is; we can use usefocusring and spread it right onto the pressable component. next we have usehover: it is similar to usefocusring, but it does the trick of spreading hover props and all those things, which makes it
hoverable. great. then we have useoverlayposition: this places an overlay, something like a tooltip or a popover. it takes in the target element and the overlay element (their refs, obviously) and gives back the placement props, like the top, left, bottom and right position, and it just works really well: we can use a trigger component, click on the button, and it places the popover. great, so that's useoverlayposition, and then we have more hooks like usecombobox, useslider, usemenu, usecheckbox, usecheckboxgroup, useradio, plus switches, tooltips and popovers. great. this is a joint effort, i would say: nishan and the team worked on things on the react native side and mapped them back to react aria for the react ecosystem. and we have put all of this into a ui component library, in the next version of nativebase. nativebase essentially is mobile-first, accessible components for react native and web; it is based on utility-first principles, and react native aria is used with many nativebase components to make them accessible and to make them work on all the platforms. so nativebase components work on ios, android and web, and they also behave natively, with tab, with hover, and a lot of platform-level things. the checkbox example from the earlier slides is actually available in nativebase, and we can go ahead and use it. here is another example, a menu component: it's really hard to make menus accessible, like pressing enter or space opens up the menu and then the cursor moves through it. so let's see how this is done: we can press enter and then type characters, like r and m, to filter or hover a particular menu item. let's see it again: enter, then down, and then we can
type r and then m and it takes us there; this is what we developers do most of the time, isn't it? and the same thing works with screen readers on the different platforms. this is an example on android, let's see it, i hope you can hear it (talkback reads the menu button and menu items aloud). great. so those were a few examples of react native aria being used in nativebase, and this is the team; they have worked really hard on building react native aria plus nativebase. it's open source, so go ahead and try it, all ears on the feedback. the team is trying to make it compatible with all the platforms; it's a hard problem to solve, to be honest, but let's see how far we go. that's all from my side. there are a few other projects from the same team that you might like to check out: a state management library called syncstate, then react pluggable, a form library using mst, and builderx, the design tool that codes. so that's all from me, i'm sanket, and let's connect on twitter, my dms are open. that's all for now, thank you so much. thank you sanket for this talk, and once again i'm really happy that you're working on something like this; i understand these are sometimes not the hottest topics, but they are very much needed for our community. now, we had four talks in this panel and seven in total so far, so i feel like this is a good moment for a half-hour break. you can take this time for whatever you want, from refueling with whatever you have in your mug right now to socializing with other attendees on our discord channel, which is, i guess, the most important and fun part. you can also ask questions to our speakers, they are on our discord channel, so some of the questions can be addressed right now; the
rest will be addressed on our react native show podcast. so we'll see each other in half an hour, have a good break. [Music] this conference is brought to you by callstack, react and react native development experts. [Music] hello, i'm mike, co-founder at callstack, and today i'm looking for the best react native developers to join my team. besides working on high-end software used by millions, we also contribute to many open source projects such as react native paper, react native testing library or re.pack, so you will have an opportunity to develop your skills and knowledge within these projects, as well as bring your own ideas to life by taking part in our r&d program. we are a great team full of people crazy about react native technology, and we can't wait to share our knowledge with you. trust me, it's great to be part of such a team. so don't wait anymore: join us, check out the job description in the link below, and apply. i'm hoping to see you soon in our callstack office, or maybe remotely, depending on your location. bye bye. [Music] wake up in wroclaw, get up and get at 'em, react native eu is on and it's happening. check in to check up on all of the latest news and strategy, man it's kind of the greatest, and welcome to react native eu 2019. keynotes that unlock a new world of insight, lightning talks so cutting edge it really seems so right, networking with everyone in the react native community, from the thinkers to the linkers we're all here in unity. q and a's that dive deep into code dna, and it's all covered here in only two days. after the last session we're gonna party, okay, food, drinks with good vibes and dope karaoke. high fives all around, it was good to be here, can't wait till the next one, we'll see you next year. [Music] hi, it's me again. we just wrapped up
our second session today, and we heard from amazing speakers like ola, satyajit and sanket, two of whom are actually my colleagues from callstack; i'm really proud that you gave such amazing talks. actually, if any of you want to join the team at callstack, go to our website and find out more there; we are constantly looking for new react native developers. and while you're there, you can check out our blog, where we publish a lot of cool technical stuff; we have the react native show, which is a podcast; and we have a lot of conference talks and other video material on our youtube channel, callstack engineers. you can also go to our discord channel: we have a networking session running right now where we can discuss all the talks that are happening today, and you can ask questions that will be answered in the dedicated react native show episode. also, please go to twitter right now and tweet about react native eu, so you can let everybody know that you are having a great time with us today. okay, enough of me talking; i'm going to grab some lunch and i encourage you to do the same. we'll see each other here again in twenty-something minutes, so don't miss out on the next bunch of talks we have prepared for you. [Music] welcome back after the break, for me a lunch break, so i feel like i got a lot of power to guide you through the next panel of talks, which is going to be the last one today, but equally exciting. we have four talks in front of us, so fasten your seat belts and let's get started. our first speaker is milica; she's going to talk about the journey of going native from react, so i guess this is an important talk for all of you who are react developers right now, are interested in react native, and want
to start developing react native or native apps in general. so if you are about to take that step, or maybe you have already completed your journey from react to react native, you will probably find that this talk has a lot in common with your own experience. so let's see how she made her journey, how she transitioned from react to react native, and how we can benefit from her experience to make this journey faster for ourselves as well. hi everyone, my name is milica, and today i'm going to talk about react developers in the wild world of native apps. a little bit about me: i'm a software engineer with experience working on react and react native applications; you can find me on twitter and instagram as melee code. i work at a company called badinsoft in serbia, located in the city of niš, a city with a strong tradition in engineering and technology. in one of our projects we needed a mobile application for ios and android, and we chose to go with react native because of the faster development: you get a single code base for two applications, both ios and android. but it has one more advantage, and that is the usage of react with all of its benefits, including the wide community of web developers. however, it's a whole new world, because mobile development is a totally different adventure, so in this talk i would like to go over the differences, challenges and advantages to keep in mind when diving into the native app universe. starting with react native comes naturally for react developers because of all the similarities: you still use components as building blocks, you still have state and props, you still use jsx, and you have state management, so you can use redux or the context api. but when you go deeper you see more differences and challenges to learn, and you will find that the mobile world is a completely new world and a whole new adventure. so let's start
with that adventure. the first one: for web development you only need a browser, but for mobile development you are required to run your code on some sort of mobile device. on the web you're familiar with the browser, it's easy to just open it and start developing and testing, but when it comes to native development you must always think about the platform, ios and android. you can use the android emulator, which you can download with android studio, or the ios simulator, which comes with xcode. simulators are good for testing or building an application, but we had cases in our project where the feature we were developing and testing worked fine on the simulator and emulator, and when we started testing on a real device we saw that things did not go smoothly at all, because a simulator can never replicate 100% of the hardware of a real device. so you can find your application running without any errors on the simulator and still having trouble on an actual device. the advice i can give you is to always test your application on real devices throughout the development process, especially if you want to release on both ios and android. the next difference is styling. on the web you use css: you have a class name associated with a dom element, and the css targets that element by class name and assigns a set of properties and values to it. react native does not use css. styling in react native is done with javascript, since react components support the style property; you create an object of style values and pass it to the component as props. style names work almost exactly the same as in css, except they're written in camel case. but don't worry, flexbox is still here. flexbox works the same way as in css, with a few exceptions: flex direction defaults to column instead of row, and flex
only supports a single number as a parameter, and under the hood flexbox is implemented with yoga. one more thing about styling: you can use the popular css-in-js library styled components. styled components lets you write actual css code, and its benefits are that you get css syntax out of the box, you can use dynamic styling, you can use theming, and you can also support more complex styles like transform, which is really great. so if you're familiar with styled components and have used it on the web before, my advice is to go for it. let's talk about one more big difference, and that is navigation. in the browser world you have the url, you have a current page, and you can go back with the back button; if you're using react on the web you'll probably use react router. in react native, react router still supports navigation, but you can also use libraries like react navigation or react native navigation. having pages and going back through history is not how the native world works, and you must always keep that in mind. in the native world we have screens, and navigation between them uses patterns like tab navigation and stack navigation. stack navigation works like having different decks of cards, where the cards are screens: you define different decks of screens for each purpose, because the user can add a screen or go back to a previous one. it's important to note that on native you start from the first screen defined in the deck; when you add a screen it goes on top of the deck, and when you go back it comes off the top of the deck. this means the first screen stays loaded in the background until you dismiss the stack or go back. so keep that in mind: you're not using pages, you're using screens, and the navigation is really different.
you can always associate your navigation with the decks of cards as i described, so keep that in mind. the next thing i want to talk about is platform-specific code. unlike react, react native addresses the need to write different code on different platforms and to build applications that follow platform-specific ui and ux guidelines. react native offers two ways to build cross-platform applications more efficiently. the first one is the platform module: react native provides a module called platform to detect which platform, or which version of a platform, the application is currently running on. this is especially useful when a component contains small pieces of platform-specific code: you can import platform from react native and add a condition to check whether you're on ios or android. the second way is platform-specific file extensions: react native will load the matching file for a specific platform by automatically detecting whether the file has a .ios.js or .android.js extension. this way you can be sure that each time you're coding for ios or android, react native will always import the correct required component, so keep that in mind as well. the next thing i want to talk about is the react native application state, one more thing you don't have to worry about on the web but that is important when working on a mobile application. the user can have the application in the background or use it in the foreground, and this matters because when the user puts your application in the background and comes back, you may want to show something or update your state; users going back and forth in your application is very common. you can use appstate from react native, and it can tell you if your application is in the foreground or in the background.
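going back to the platform module for a moment, both techniques can be sketched in plain javascript. `Platform` here is a local stub standing in for react native's module, and `resolveModule` is a hypothetical, simplified model of how the bundler picks platform-specific files, not metro's real resolver.

```javascript
// Stand-in for react-native's Platform module (assumption for the sketch).
const Platform = { OS: 'android' };

// Mirrors the Platform.select idea: pick the entry for the current OS,
// falling back to "default" when no platform-specific entry exists.
function select(spec) {
  return spec[Platform.OS] !== undefined ? spec[Platform.OS] : spec.default;
}

// Hypothetical model of platform-specific extensions: prefer
// Foo.<os>.js over the shared Foo.js.
function resolveModule(baseName, availableFiles) {
  const candidates = [`${baseName}.${Platform.OS}.js`, `${baseName}.js`];
  return candidates.find((f) => availableFiles.includes(f));
}

console.log(select({ ios: 'padding', android: 'height', default: null }));
// "height" — because Platform.OS is "android" here
console.log(resolveModule('Button', ['Button.ios.js', 'Button.js']));
// "Button.js" — no android-specific file, so the shared one is used
```

the first approach suits a single prop or style value; the second suits a whole component that diverges per platform.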
and notify you when the state changes. With AppState from React Native you have three states: active, when the application is running in the foreground; background, when the application is running in the background and the user is either in another application, on the home screen, or in another activity; and inactive, which occurs when transitioning between foreground and background and during periods of inactivity, like the multitasking view or an incoming call. So always keep this in mind when developing your application: check the application state and whether your application is in the foreground or background; it comes in really handy. The next thing I want to talk about is native features. There are many native features mobile has to offer, like push notifications and the camera; on the web many of those are not used so often, but when building a mobile application you must think about when you want to ask the user for permissions and how to handle those permissions later in your application. For example, when you're asking for camera permission so the user can update their profile picture, you have to check for the permission every time, because the user may have denied or revoked it. So think about that as well. The next thing I would like to talk about is the network. When we're talking about mobile devices, the connection may be slow, unreliable, or non-existent, and you must build your software to support these cases, beginning with connectivity prompts and including offline methods such as caching. Be prepared for the possibility of the connection breaking in the middle of network activity. A great user experience is to always show some sort of text message, maybe an icon, or some sort of offline screen indicating that the connectivity is bad or non-existent and that you're offline, so you have something to
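A minimal AppState listener, roughly as described above; what you do inside the handler (refreshing data, re-checking permissions) is up to your app:

```js
// Listen for foreground/background transitions with AppState.
import { AppState } from 'react-native';

const subscription = AppState.addEventListener('change', (nextState) => {
  // nextState is 'active', 'background', or 'inactive'.
  if (nextState === 'active') {
    // The app came to the foreground: refresh data, update state, etc.
  }
});

// Later, e.g. in a useEffect cleanup, remove the listener:
// subscription.remove();
```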
show your user when the application may not be working well because you don't have connectivity. So think about that as well. Talking about the network, one thing I want you to check out is the NetInfo package. It provides information about the user's active network connection and the connectivity status of their mobile device; it also identifies the user's current network type, such as wi-fi. We are using the NetInfo package in our application to check the network type, so you can check it out and use it when you want to inspect the network in your application, so that everything runs smoothly. The next thing I want to talk about is sharing code. If you're building both a web and a mobile application, you can leverage some code reuse. If you're converting a React Native application to React, you can use React Native Web to bring a mobile application to the browser; it will require some significant modification to perfect your port, as well as some adjustments to the user interface. There's also another way of sharing and reusing code: you can build your application so that it shares code and logic between web and mobile, React and React Native, and build the components independently of the platform. So think about that as well; that is one of the ways you can share and reuse your code. Talking about sharing code, I have to mention monorepos. A monorepo allows you to have multiple projects sharing common dependencies instead of installing the dependencies for each of them. It also simplifies sharing code between your projects by allowing you to import code from one package into another. You might want to have a monorepo containing a website, a mobile application, and some shared code between them, and one of the most popular options to do that is to use Yarn workspaces. Talking about monorepos, I want to mention one more thing to check out, and that
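A Yarn workspaces monorepo like the one described (website, mobile app, shared code) is typically declared in the root package.json; the package names below are hypothetical:

```json
{
  "name": "my-monorepo",
  "private": true,
  "workspaces": ["packages/*"]
}
```

With this in place, packages under `packages/` (e.g. `packages/web`, `packages/mobile`, `packages/shared`) share one dependency installation and can import from each other by package name.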
is Lerna. Lerna is a tool for managing JavaScript projects with multiple packages, so my recommendation is to check it out. We use it in our application when we want to have multiple packages and share code between web and mobile applications, so it's a great thing to check out as well. Talking about building for mobile, one more thing I want to cover is testing. Programmers and developers are humans, and humans make mistakes; testing is important because it helps you uncover those mistakes and verify that your code is working. It also gives you confidence that your application works and will keep working in production, that your code is correct and your components are good. When you're writing tests, the default template React Native ships with is Jest, so if you've done testing on the web before you've probably used Jest; that part is similar and doesn't change. When you're testing with Jest, I have to recommend some libraries for writing unit tests for your React Native application: libraries like React Test Renderer or React Native Testing Library. Those two are really great, and you can check out which one works for your application. So always keep testing in mind as well. One more difference I want to talk about is debugging. For debugging you can use the Chrome developer tools to debug your JavaScript code in Chrome, so that part is not different from the web; it's the in-browser debugging you're used to. But one thing I want you to check out is Flipper. Flipper is an extensible mobile application debugger; it's a platform for debugging iOS, Android, and React Native applications, and it helps you visualize, inspect, and control your application from a simple desktop interface. I strongly recommend Flipper for mobile developers. With Flipper you also inherit the plugin ecosystem that exists for native Android and iOS applications, which means you will be able to use plugins
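A minimal unit test in the style mentioned above, using Jest with React Native Testing Library; the `Greeting` component and its text are hypothetical:

```jsx
// Unit test sketch with Jest + React Native Testing Library.
import React from 'react';
import { Text } from 'react-native';
import { render } from '@testing-library/react-native';

const Greeting = ({ name }) => <Text>Hello, {name}!</Text>;

test('renders the greeting', () => {
  const { getByText } = render(<Greeting name="World" />);
  // The query throws if no matching element is found.
  expect(getByText('Hello, World!')).toBeTruthy();
});
```

This runs under Jest with the React Native preset, which the default React Native template already configures.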
that are aimed at native apps for your React Native app as well. You will have plugins that include the device logs and the device crash reporter, something that was really useful in our applications, and you can inspect network requests, which is really great. One more thing you can check out is the device's shared preferences; you can check the cached images, and you can inspect native layout elements. So my strong recommendation is to use Flipper; it really helped us in our application as well. One of the biggest changes when we're going from web to native is deployment. On the React Native side we use the native way of deploying applications, so remember that React Native is only a tool for writing native apps: you have to build and upload your build to the App Store for iOS, to the Google Play Store for Android, and to AppGallery for Huawei. After your build is uploaded, Apple, for example, will run some automated checks on your application and the basic info you provide for the App Store, and let you know if there are any issues. After that, when everything is okay, you can share the application with other developers through a program Apple provides called TestFlight. TestFlight is a way to send your application out for testing without plugging the device into your machine, and it comes in really useful for beta programs and beta testing your application. It's very important to know that each deployment needs to pass validation from Apple, which can take some days before your release is available to end users or goes through the testing phase. For Google, you will use the Google Play Console to publish your applications, and maybe games, to Google Play. One more thing to mention here is CodePush. CodePush is part of App Center; it's a cloud service that enables React Native developers to deploy mobile application updates
directly to users' devices. So if you have a case where your tester needs a bug fix really fast and you don't want to wait for a full build, you can push it directly using CodePush and the tester can check it immediately. CodePush is really great and I strongly recommend you check it out. So that will be it. Overall, I strongly recommend that all React developers who are thinking about going mobile and native go with React Native and dive deep into the native world. That's all from me; thank you all. If you want to contact me you can use Twitter and Instagram. Bye! Thank you, Milika, for this talk. It's always great to hear how people are moving from React to React Native. At Callstack we are also looking for React developers, and we go through much the same steps together with them. I feel like one of the great things about React Native is that it lets you build mobile apps as a React developer; that's how I started on Android myself, for example. Now moving on: another speaker, John from Expo, is going to talk about Expo features and how you can benefit from them, but these features are going to be different from what we usually hear. Normally we talk about Expo when we think about just getting started; sometimes we may even talk about Expo being good for beginners, or having a great set of libraries and features to help you build your application faster. But there is also another dimension of features that you may be interested in, which is about configuring your project, sharing it with your team members, submitting it to the app store, and generally speaking managing all the things that are not related to coding itself. So as you can see, Expo has a lot of interesting offerings related to DevOps and application management that go beyond the libraries in the Expo SDK. Let's learn what John has
to say today about these features and how we can take advantage of them in our daily projects. Hey everyone, my name is John Samp and I'm a software developer at Expo, and today I want to talk about how to iterate faster with Expo. I started thinking about this idea because of an app that I use named Tweetbot. This is an app that shows you your Twitter timeline with a nicer design and a bunch of other features that I find really nice and convenient. I've been using this app since the early days, and it used to look like this. This design is really something, a relic of a past time that I think is really cool; I love the skeuomorphic icons and I like the glow around the different elements. But I can confidently say that if I were to ship an app that looked like this today, my users might think it's old or antique. So Tweetbot also had to update the look and feel of their app. If you remember when iOS 7 came out, it introduced this design language of really thin line icons and everything being very flat, and Tweetbot followed suit. And just like that, they also made a very current version of this app that looks a little different, that has some of those same elements from Tweetbot 3, those thin line icons, but now the icons are a little bit thicker and things still look clean; we've got soft shadows, which is something that's very in right now. Looking at the spread of these different designs made me realize that Tweetbot has to change to match what its users expect and what the current design language and philosophy look like today, and they've really had to iterate a lot; they've had to take these features and change what they look like constantly. This means that the first version of our apps can never be the final version of our apps. It would be so nice to just design something, build it, and have it be done, but that's not really how apps work: they're living, they have to change, and they have to evolve. So to allow our apps to do
this, what we really need to do is embrace iteration and think about how our apps can change over time to better match what our users want. So let's talk about what iteration looks like in a React Native app. Here's what I think it looks like: first you start with requirements, what your app should actually do; after that there's the implementation of those requirements; then you review that your implementation actually fits the requirements; then you make a build of your app and submit it to the app stores; and once users have it, they're going to tell you things that are wrong and things that could be done better, so you'll be fixing bugs, adding more features, and making new requirements, and the whole cycle starts over again. Usually we can get this circle spinning somewhat, hopefully at a good clip, but what if we could make it go even faster? I think we can do that with the tools that Expo provides. Let's start with the first thing on this wheel, which is requirements. Together today we're going to build an app, and this app is going to be a coffee app, so let's brew a coffee app together. [Music] Let's talk about the requirements we need to implement to make a great coffee app. The first one: we need to know the amount of water we need relative to the amount of coffee when making a pour-over coffee, and we can measure that with a scale. Usually a great ratio is 16 to 1, so 16 units of water to one unit of coffee. One thing that's kind of hard to do in the morning, especially when you're feeling a little groggy, is to say: okay, I've got 27 grams of coffee, all right, what is 27 times 16, so I can figure out the amount of water? So that's one problem and one requirement for our coffee app. The second requirement is around pulse pouring. Pulse pouring is pouring a certain amount during a certain duration of the brew, and usually you'll pulse pour three to maybe six times throughout the
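The 16:1 arithmetic above can be sketched as a plain function; the function name and default are mine, not from the talk:

```js
// Pour-over helper: given grams of coffee, return grams of water at a 16:1 ratio.
function waterForCoffee(coffeeGrams, ratio = 16) {
  return coffeeGrams * ratio;
}

console.log(waterForCoffee(27)); // 27 g of coffee → 432 g of water
```

So for the groggy-morning case in the talk, 27 grams of coffee calls for 432 grams of water.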
brewing process, so our app should also count down and guide us through the brew and tell us how much to pour and when. Those are the requirements we're going to implement today. The next step is the implementation of our app. The first thing is to start an app, which we can do by installing Expo CLI globally with an npm install --global command. After that, we can run expo init coffee-app, which results in a folder structure that looks like this on our computer. On the left you can see the directories that expo init made, and this is exactly what happens right after running expo init: we've got an App.tsx file, a few assets, and just a couple of other standard files in our project directory. Notice that there isn't an ios and an android folder full of native code; Expo is able to handle that stuff for us, which can make our implementation much simpler, and later down the line, if we need native code or custom native modules, we can always add them. Next up, we want to start our app, which we can do with yarn start, and through the prompts we can open up an Android emulator and an iOS simulator to preview in real time the changes that we're making to our App.tsx. Now, one thing that would make this a little bit better, and one problem I've had in the past when I'm developing something in a simulator and then actually open it on my phone after building it and getting it onto my device, is that sometimes it appears a little bit differently. It would be great if I could actually see the app on my phone right now, and Expo allows us to do this; to me this is such a big step in iteration during development. You can download the Expo Go app, and then when you run the yarn start command from earlier, you can use your camera app to scan the QR code, which will open a development version of your app inside Expo Go. Once it's in Expo Go and you make a change to App.tsx, you'll see it appear almost instantly on your
actual device. What a wonderful way to test, on the device you're actually using, the stuff that you're actually developing. So, I should mention, we've been talking about Expo a lot, and I work at Expo, so what is Expo exactly? Expo is a company that builds free, open-source tools and also hosted services that help you build an application with React Native. It might be helpful to talk about what React Native is exactly versus what Expo is. Here on the left, React Native is a set of component APIs; I think its main job is to render JSX from React into native views on Android, iOS, and also the web. It comes with a small, unopinionated core and has great third-party libraries that you can plug in, but it doesn't come with a lot of the extra stuff you need to build a full application, and that's where Expo comes in. Expo provides a component SDK, tools like Expo Go as we just showed you, and also services to build, submit, and update your app. It is powered by React Native and sits on top of it, and hopefully with both of these together you can create incredible applications. So at this point we've got some requirements for what we need to build, and it's always great to start with a design. I actually spent a while designing what this app could look like and went through a lot of different iterations; I ended up making designs and then implementing them, and this is what they look like. On the left I'm implementing requirement number one: I put a question of how many ounces of coffee you would like to brew first, so you can say maybe I want 24 ounces, and below it tells you how much water to heat and how many grams of coffee to grind. On the right, this fulfills requirement number two, which is pulse pouring: once you tap start, it'll do a countdown timer and then tell you how many grams of water to pour over your coffee, and when, throughout the entire brew process. This makes every brew really consistent, and if you can replicate a really great process every time, you
can wake up to a beautiful cup every morning. Also, when testing this app, it's definitely a caffeinating process; you could say I was maybe a little bit shaky after all of this. The next step is: how do we review our app now that it's built? How do we make sure that the requirements are actually implemented? This is a problem; I can test it myself and think, okay, this is looking good, but how can I let my teammates and colleagues try it? If I wanted to do this manually, I'd probably create an ad hoc archive, at least for the iOS side, add the allowed devices through the Apple developer UI, and also build an APK for my Android users. After that I'd have to make builds of my app and then distribute them, either by giving my team download links or by distributing them via the Play Store internal track or via TestFlight for iOS. But there's a better way to do this, and we can distribute our app faster to our teammates with a solution called internal distribution. If we let Expo handle our credentials and this whole process, and Expo can also build our app, we can distribute it to our team even faster. Let's look at how this works. The first thing we need to do is install something called EAS CLI, which stands for Expo Application Services; EAS is the services part of Expo that will help set up your app for internal distribution. The next command to run is eas device:create, and this is going to add a device to your credentials, which we will then manage for you; so every time you make a build you can say, okay, I've set up this device previously, make sure that this build is built for that device, the one that I connected to what's called the provisioning profile. After that I can run eas build with the internal profile, and this is going to create two different builds for me that are set up to work with those credentials, which enables me, after these builds are done, to do something like
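The internal-distribution flow described above looks roughly like the following command sequence; this is a sketch based on the talk, and exact flags can differ between EAS CLI versions:

```shell
# Install the EAS CLI globally.
npm install -g eas-cli

# Register a test device (adds it to the iOS provisioning profile).
eas device:create

# Build with an "internal" distribution profile defined in eas.json.
eas build --profile internal
```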
this, where I can just send a link to someone and say: hey, try my app, go to this link and download the app that I just built, and they can sideload it on their phone. So what this whole process looks like is: registering a device, then making a build, and then you can send a download link, and that makes the whole process much faster and allows people on your team to try your stuff sooner. And if you can try something faster, you might try more features; you might try stuff that you wouldn't have tried before, and this iteration is going to help us make an awesome app. Now, we can automate this process even further by setting up a CI action. Expo has something called the Expo GitHub Action, and we could make a CI action that looks sort of like this, where we set up EAS to build every time we merge our code into the main branch: the Expo action installs our CLIs, we can then install our dependencies, and then we can say, let's build for all of our platforms and also build the internal builds. So every single time someone merges to main, we can go and test our changes. Now, there might be an even faster way to test our changes, which is using Expo Go and publishing our app with updates. How does this work? We can run expo publish on our app locally, and when that occurs we build a bundle of all of the JavaScript code and all of the assets in your app, and we make that into an update bundle that is stored on Expo's servers. Then, when you open up Expo Go, you can see I have this coffee app, and if I click on this thing that says "default" right here, it's going to open up that coffee app in Expo Go itself. So any time I have a change, I can run expo publish and that change will be visible right here in the app, without me needing to download another build. Anyone in your organization that's logged into Expo Go can access that app, which makes it super fast to share the experiences you're developing. And we can make this even faster, just like before, by using
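A CI action like the one described might look something like this; this is a hypothetical sketch, and the exact inputs of the Expo GitHub Action vary between its versions, so treat the names below as assumptions to check against the action's documentation:

```yaml
# Hypothetical workflow: build with EAS on every merge to main.
name: eas-build
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: expo/expo-github-action@v6
        with:
          eas-version: latest
          token: ${{ secrets.EXPO_TOKEN }}
      - run: yarn install
      - run: eas build --platform all --profile internal --non-interactive
```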
that same CI action we showed previously, but this time using expo publish. That makes the iteration cycle super fast, and after we've tested with our colleagues and our team, it means we're ready to build our app for the app stores. So the problem we want to solve is: how can we distribute our apps easily to the app store? It's all built now, we've reviewed it, it's ready to go; what steps do we have to take? Hopefully not that many. If we were to do this manually, at least for iOS, we'd want to enable App Transport Security, configure the release scheme, and then build the app locally for release, and that is a lot of steps and a lot of configuration to deal with. Instead, we can use EAS, Expo Application Services, to do this for you and your team in the cloud, which will hopefully make it an even easier process, and one where you don't have to set up everyone's computers identically to make it happen. Let's look at the steps. We can use eas build and pass in a profile called release, which is set up when you run eas build, and that's going to create two apps for us: one optimized for the iOS App Store and another for the Android app store, Google Play. Then, on the website, we can see some UI showing all the builds that have ever been made for your app; you can download a build, and you can also submit that build, which is the next step of our process. If you were to submit manually, at least for Android you'd have to download the AAB file, open up the Play Store console, drag and drop the file in there, and wait for it to upload and make sure that everything is good. Same for iOS: you'd have to download the IPA file, probably open up a program like the Transporter app, drag and drop it in there, wait for it to upload, and check the logs. These are just manual steps that we have to do; you could automate this, but it would take
some extra work. Instead, EAS provides a command called eas submit, and we've actually set up this automation for you already, so you don't have to deal with this process yourself: you can take builds that you've made with EAS, run eas submit, and say, take the latest build I've built, and it will take that build and upload or submit it to the app stores for you. This is super convenient, makes building and submitting to the stores really quick, and really speeds up those two steps of the iteration process. So our app is now out in the store, and we've actually found a bug, or let's at least pretend that we did. A popular way to brew coffee is using something called an AeroPress, but let's say that instead of writing "AeroPress" we accidentally wrote "sparrow dress" instead. If I wanted to fix this bug in production, which is a pretty blatant bug, I would need to rebuild my apps, resubmit them to the app stores, and then wait for my users to download the new version. Instead, we can use Expo updates to provide small fixes between builds like this. Let's look at how this works. expo publish ends up making an update, and an update is a bundle of code: the JavaScript and also the assets from our app. So, in order to fix this "sparrow dress" problem, we would go into our app and fix "sparrow dress" to say "AeroPress". Once that code change has been made, we run a command like expo publish, and in the command line you're going to see that it builds the app, at least all of the JavaScript and all the assets, into what we call an update bundle, and that gets uploaded to Expo's servers. Once we have the new update on our servers, it becomes available to users, and it's important to think about what users have in their app. Essentially, apps are built in two parts: one part is the native code that's built into the app binary, and the native code is stuff
that you can't change with an over-the-air update; this is stuff like the app's icon and some of the configuration about how the app runs. But then there's the second side of the app, which is the JS code, the update code that we're able to interchange with other updates. So if I'm a user using an Expo app and I open my app, there's a module that's going to query our servers and ask whether there are any new updates, and if there are, then we're going to look for them based on a configuration inside of that app.json file that we saw when we set up our project. This makes it so that if you have critical bug fixes and things that you need to get out to users right away, you can do that with expo publish. In this case, if I ran expo publish with this AeroPress fix, my users would open up their app, download that new code, and see the typo fixed. It's at that point that I could start making new builds and submit them to the app store for review. Now, there are a lot of different options we can opt into with Expo updates; let's look at those. Okay, I'm here in the Expo documentation, and inside of app.json there is an updates configuration, which I can see here on the left. There are three different things I could configure. The first is whether updates are enabled or not. The second is when to check automatically; the options are on load or on error recovery, and on load is the default, so when users force close their app and then open it again, it will check for an update. But probably the most important one is the fallback-to-cache timeout: this says how long we should wait before launching your app while downloading a new update, if there is one. So if this AeroPress update was actually really large and took people a while to download, or if they're on a very slow connection, we could cap it at 30 seconds or 3 seconds or something, and if they don't download the update within that time frame, they will fall back
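The three options described above live under the `updates` key in app.json; a sketch, assuming the classic Expo config where `fallbackToCacheTimeout` is given in milliseconds (so 30000 is the 30-second cap mentioned):

```json
{
  "expo": {
    "updates": {
      "enabled": true,
      "checkAutomatically": "ON_LOAD",
      "fallbackToCacheTimeout": 30000
    }
  }
}
```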
to the cached version of the app that's already downloaded on their phone. We want to make sure that users can always get into their app so that they don't get stalled on that splash screen. Now, there is sort of a trick here: by default we set this fallback-to-cache timeout to zero, which means we won't wait at all for any new updates. That means if we push an update up to a user and they open their app, it's not going to wait for the new update to download; it will actually download in the background while they continue using the app, and then the second time they open their app, it will load the update that was downloaded in the background the last time they used it. This is preferable for some developers, but it might also be preferable for you to let users wait a couple of seconds to download the new update so they can get those new updates even faster; it's just good to know about the different configurations we can set with updates. So this is a lot of stuff that you can do with Expo, and there's a lot more configuration for all the stuff I haven't talked about yet, but I do want to quickly mention some of the recent updates and things that have been going on at Expo, in case you've used Expo before. One awesome change we've come up with this year with EAS is that you can build an app with any native code: you can add native modules like Bluetooth, you can add in-app purchases, or things like a blur hash module, a native module that blurs out your images as placeholders. There are tons of these, and you can add code like this and we will still build your app for you, which was not possible before. We also have something called custom dev clients: earlier I showed Expo Go, and Expo Go works great if you're not using any custom native code; if you are using your own custom native code, you can make your own version of the Expo Go app and still get all of those niceties around opening up updates
on your phone, and your colleagues can do this too. And finally, we're coming out with a service that updates our updates, called EAS Update; this is coming this fall and into early spring we'll be rolling it out, so keep an eye out for that. You'll be able to do things like roll out updates to your users and see exactly which builds are pointed at which updates at all times. So the point is: let's iterate together and figure out how to iterate faster. Expo allows you to do this in development by implementing an app without requiring knowledge of native code; you can review your work faster with internal distribution or with Expo Go; you can build and submit your apps at lightning speed in the cloud and make the process consistent across all of your teammates; and finally, you can fix critical bugs with updates, which helps you fix things that might have slipped through the review process. Which means: hey, it's coffee time! So if you haven't had a coffee today and you like coffee, please have one; I would love to have one with you next time we can meet in person. Hit me up, I'm always free and available, and thank you so much for iterating with me and listening to this talk. I hope you enjoyed it and can take some of these concepts into your work life and into the projects that you build in the future. Let me know if you have any questions, and we can't wait to see what you build with React Native and with Expo. Thank you, John, for this talk, and I'm really happy to see that Expo is developing quite rapidly and has more and more features these days; I keep my fingers crossed for your future developments, because I know that your roadmap is pretty ambitious. Now moving on to the next speaker, Shivay, who is going to talk about machine learning with React Native using MediaPipe and TFLite. Now, these are mysterious keywords to me, so I'm not going to introduce this talk much, because I guess "machine learning with React Native" is
all we need to know, and we will learn more from the talk itself. What's interesting is that they know how to build that machine learning experience in a cross-platform way that works nicely on low-power devices, so let's see what he has prepared for us today. Hello everyone, I am Shivay Lamba, currently a Google Summer of Code mentor at MediaPipe and TensorFlow and a developer advocate at Fabrica. I am really excited to be presenting at React Native EU 2021, and the topic of my presentation is machine learning with React Native using MediaPipe and TFLite. We'll be going over what exactly machine learning is, how we can integrate it with React Native, and what exactly MediaPipe and TFLite are, so without wasting any more time, let's get started. Of course, we all know how incredible machine learning is; probably every aspect of life that we see, whether it's some kind of industry, healthcare, or medicine, is utilizing machine learning today, and machine learning on mobile devices is gaining a lot of attention, because a lot of different applications and processes are happening with the help of smartphones and websites, and it becomes imperative to be able to include machine learning within mobile devices as well. Now, generally mobile devices have lower-end processors compared to, let's say, standard computers and servers; therefore there is a need to be able to actually run machine learning models on lower-end devices, and that requires you to make specific changes to your machine learning models, or to have support for running these models on these devices. We can also use on-device machine learning for faster processing: it removes the need to connect to a machine learning model hosted on a separate server in the cloud, which can increase latency, and that latency can be reduced by having the machine learning take place on
the device itself. As device processors improve and the models themselves become more compact and easier to run, with better performance and less processing required, both of these trends enable things like on-device translation of languages, on-device classification of images, and many other machine learning algorithms running directly on mobile devices, making machine learning truly accessible there. We'll talk about a few available solutions and look at how to integrate two different approaches, MediaPipe and TFLite, into React Native applications. Now let's look at this slide and ask what all of these images have in common. You see an Android phone doing face detection, an iPhone XR showing poses and hand landmarks, the web with effects applied to a girl's face, a Raspberry Pi, and webcams. What they have in common is that they are all applications of machine learning powered by one technology: MediaPipe. MediaPipe is an open-source, cross-platform framework from Google that helps build perception pipelines, or more broadly, audio-, video-, and sensor-based pipelines. It is widely used within Google for research and products. Until about 2018 it was used internally at
Google and not well known, but after being open sourced in 2019 it has been widely used for machine learning implementations and pipelines built around video, audio, and sensor data. It can be used for data preparation pipelines for machine learning training, or for ML inference pipelines. We'll look at a number of examples of how it is used, which will give a better perspective on why machine learning with MediaPipe is so popular and useful. Some of the features MediaPipe provides: first, end-to-end acceleration, meaning whatever acceleration the machine learning models need is provided by the on-device support when we run MediaPipe on mobile devices. Second, build once, deploy anywhere: a MediaPipe-based solution is not limited to mobile devices; it can also be used on the web, in Python, and elsewhere, so one MediaPipe solution can be deployed to many places. Third, these are ready-to-use solutions: there's no requirement to write the code yourself for things like object detection or face recognition; the solution itself is provided by the MediaPipe Solutions team. And it's completely open source. Looking at some of the popular solutions, there is selfie segmentation, which runs segmentation masks that recognize humans and their faces in the current frame, helping you do
things like changing your background, as we see in Google Meet or Zoom. Then there is face mesh, which points out 468 different landmarks on your entire face and can be used for applications like virtual makeup; we'll see more use cases in the coming slides. Then hand tracking, which provides 21 landmarks for tracking different points of your hand. Then human pose, which has a total of 33 landmarks and can be used for things like exercise monitoring. Then hair segmentation, which detects only your hair. There is standard object detection and tracking to detect and identify objects, and face detection. Holistic tracking combines face mesh, hand tracking, and pose, and you can do a lot with that as well, like creating VTubers or similar animations. Then there is 3D object detection, also called Objectron. All of these are ready-made solutions provided by the MediaPipe Solutions team; you can go to mediapipe.dev, look at a solution and its implementations for Python, Java, Android, iOS, and the web, and start using them. Here are some examples of where this is used right now: the AR lipstick try-on uses the MediaPipe face mesh model to detect the lips, so whatever effect you apply is placed accurately; MediaPipe is used within YouTube itself, for example in the AR movie trailers shown on YouTube; and you can see it
detecting things in Lens, for example Live Surfaces, showing augmented faces (again using face mesh), and powering on-device machine learning capabilities such as Lens Translate. All of these capture data on the device itself and carry out different kinds of MediaPipe-based machine learning solutions. These are just a few examples, but there are many applications you can build, which is why MediaPipe is becoming so popular for machine learning solutions on mobile devices: easy integration and really great performance. Looking at what has been released: before it was made public, MediaPipe was used extensively within Google, starting around 2012. It is used heavily in YouTube, for processing whenever there is an upload, and also in YouTube Stories. We also see it in the Nest Cam security cameras, in Google Lens, in AR ads, and even in GCP, specifically the Cloud Vision API and Cloud AI video products. All of these are Google solutions that internally use MediaPipe, which is what makes it so versatile across platforms, not just mobile devices. Now let's take an example of a live perception task: say we want hand tracking to work with MediaPipe. How does this actually work? Say we have been given an image of a hand, and all
different landmarks the 22 different landmarks are represented over there inside of the right hand side so that is our ideal uh you know case that we want to generate the landmarks and then superimpose them with the live footage or the live video that is being represented right now so what we can do is that let's say that this is our you know perception pipeline so the first thing that you'll see is basically your video input which is uh providing uh you the live footage of whatever you want to actually track then uh you'll see the next uh process taking place which is basically the image transformation which takes in your video input and converts it into a relevant size that is required you know by uh the image processing algorithm and uh once it has been resized and scaled to a relevant size uh we are basically converting that image into tensors so since media pipe internally uses atf light we are basically going ahead and converting our image that we get from our video input into tensors now if you are not aware of tensors tensors are essentially these uh numerical arrays high dimensional arrays that contain values and again these can be used for different kinds of calculations and then from let's say any particular given machine learning model we run the inference on these tensors and we are able to decode information about uh the tensors and then basically what we do is once we have gotten the information about the tensors we render uh basically the landmarks on top of these images and then we display the resulting video output that showcases uh the you know basically this particular like you know the same thing uh that it renders the landmarks on site on the video that you are showing so basically that is how the inference of any kind of a media by base solution takes place where you given input and it runs you know the inference on top of it and poses the landmarks based on whatever it is detecting and uh when you're talking about media pipe uh if we go a 
Going a bit deeper into how MediaPipe works: you have probably heard about graph data structures, with edges and vertices through which information flows. Similarly, MediaPipe has the MediaPipe graph, which lays down the path along which the input information, and the corresponding output, flows. The graph essentially contains the entire solution: the input image you provide is handed to the MediaPipe graph and flows through its edges and vertices. The vertices contain configuration information about the different parts of the MediaPipe solution, and these configured units are our calculators. We'll talk more about calculators shortly, but each node of the graph is a calculator, and the graph as a whole represents the flow from the input information we receive to the landmarks rendered on top of it and the resulting output. Any two nodes are connected by a stream, which you can think of as an edge of the graph, and the stream carries packets. These packets contain the data, whether that is the input image, tensors, or relevant metadata required for further processing, along with a timestamp for that data.
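The packets-on-a-stream idea can be sketched in a few lines: a packet pairs a payload with a timestamp, and a stream delivers packets between nodes in timestamp order. The names here are invented for illustration and are not MediaPipe's real C++ types.

```javascript
// A packet pairs data with a timestamp; a stream is an ordered queue of packets.
function makePacket(payload, timestampUs) {
  return { payload, timestampUs };
}

class Stream {
  constructor() { this.queue = []; }
  push(packet) {
    this.queue.push(packet);
    // Keep packets ordered by timestamp, as a MediaPipe stream would be.
    this.queue.sort((a, b) => a.timestampUs - b.timestampUs);
  }
  pop() { return this.queue.shift(); }
}

const s = new Stream();
s.push(makePacket('frame-2', 2000));
s.push(makePacket('frame-1', 1000));
```

Even if packets are pushed out of order, the consumer always sees them by timestamp, which is what makes the synchronization described later possible.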
Talking about the calculators that make up the nodes: they are written in C++, and they declare both their input and output ports, that is, what kind of input they expect when a packet arrives and what output they will send to the next node once processing is done. They implement methods called Open, Process, and Close. Open runs when the graph starts; Process runs whenever input packets arrive at the node and are ready to be processed; and Close runs after an entire run has completed and it is time to shut the graph down. So the calculator at each node is responsible for transforming its inputs, turning whatever comes in into the relevant output. There are many built-in calculators for image, audio, and video processing, including ones that integrate natively with TensorFlow and TFLite for the ML inference part, as well as calculators for post-processing; selfie segmentation, for example, has its own calculator, and you can create your own calculators too. Finally, there is synchronization, and this is really important.
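The Open/Process/Close lifecycle can be sketched as a tiny interface. A real MediaPipe calculator is written in C++ with declared ports, so this JavaScript version is only a conceptual sketch with invented names.

```javascript
// A toy "calculator" with the Open/Process/Close lifecycle.
// This one scales numeric payloads; real calculators transform image packets.
class ScaleCalculator {
  open(config) {          // called once when the graph starts
    this.factor = config.factor;
    this.opened = true;
  }
  process(packet) {       // called for each arriving input packet
    return { payload: packet.payload * this.factor, timestampUs: packet.timestampUs };
  }
  close() {               // called once after the run completes
    this.opened = false;
  }
}

const calc = new ScaleCalculator();
calc.open({ factor: 2 });
const outPkt = calc.process({ payload: 21, timestampUs: 0 });
calc.close();
```

Note that the output packet keeps the input's timestamp — preserving timestamps across nodes is what the synchronization discussed next relies on.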
For all the inputs that calculators receive, process, and send onward, we need to make sure the timestamps are not compromised; if they are, the result can be erroneous data, which is why synchronization is a core part of how a MediaPipe solution works. We can also talk about subgraphs: a subgraph is used when we need to divide the work into multiple steps. You could say a subgraph is a part of the main graph; it can have its own calculators and handle one of the tasks. As you can see here, for a larger task we can have one subgraph dedicated only to the hand landmarks and another only to the pose landmarks. The main flow is that the video comes in, passes through the hand landmark subgraph, and you see the rendered image. Of course, issues can arise: with live video, the scale of the hand changes dynamically as you move it closer to or further from the camera, and it takes a lot of model capacity to deal with such variations. There are steps we can take to overcome these issues in real time, and many techniques to optimize the performance of our models as well.
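The synchronization requirement — timestamps must never go backwards — can be illustrated with a small guard that drops any packet older than the last one seen. This is an invented sketch of the idea, not MediaPipe's actual scheduling logic.

```javascript
// Reject packets whose timestamps would violate monotonic ordering.
function makeMonotonicFilter() {
  let lastTs = -Infinity;
  return function accept(packet) {
    if (packet.timestampUs <= lastTs) return null; // stale or duplicate: drop it
    lastTs = packet.timestampUs;
    return packet;
  };
}

const accept = makeMonotonicFilter();
const a = accept({ payload: 'p1', timestampUs: 100 }); // in order: accepted
const b = accept({ payload: 'p0', timestampUs: 50 });  // out of order: dropped
```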
If we look at an overview of the MediaPipe tech stack: at its core, as described, is a cross-platform framework written in C++, with support for interacting with things like TensorFlow and OpenCV, plus helper utilities from TFLite that can be used directly. At the core level you have graphs, which define how packets flow in and how they are collected at the nodes, and the nodes are calculators. On top of that are APIs that help along the way: the graph construction API, useful for creating a graph in the first place; the calculator API; and the graph execution API. Together these APIs are responsible for running your graphs. At the top layer are all the different kinds of applications you can run, whether on desktop, Android, iOS, or even embedded devices such as IoT boards like the Raspberry Pi. This gives a high-level overview of the MediaPipe stack from top to bottom, the bottom being the rawest level, and that is what MediaPipe applications look like internally. You can read the docs if you want to understand the parameters, look at the examples for MediaPipe, or use the MediaPipe Visualizer, which shows what a MediaPipe solution would look like if you were to build one. There are also mobile examples, for instance GPU-based hand tracking, where the GPU powers MediaPipe hand tracking, and you can see the visualization built for that,
and similarly for face detection and object detection. Now, talking about TFLite: TensorFlow Lite is an open-source framework specifically used for deep learning. It stems from the main TensorFlow library and is used for on-device inference, so whenever you want to run your machine learning algorithms, doing the inference on-device on mobile or embedded hardware, it is a really great tool, with support for on-device machine learning and really high performance. One question before we move on: we have these two approaches, TFLite and MediaPipe. MediaPipe is mainly used for audio and video, the media streams that are displayed and then processed, whereas TFLite lets you convert any TensorFlow-based model into a TFLite format that can run on your device. So with TFLite you get much more freedom in what kind of models you run; they need not be audio or video based. For example, if we were building a solution for mobile devices, we could use MediaPipe for pose detection, exercise monitoring, audio and video tasks, facial effects, and face masks; all of that is done very easily with MediaPipe. But with TFLite you are open to a much wider variety of models to run on your mobile device. After
converting a model written in TensorFlow in Python into the corresponding TFLite format, you can run it in React Native on mobile devices. Let's take an example of TFLite being used for an image classification problem in React Native. As you can see from the code, on the third line we import the tflite-react-native package, an npm package created for using TFLite inside React Native. The idea is that we'll use the image-picker npm package to pick any image, and our machine learning solution in the React Native application will detect the class of that image. On line five we instantiate our Tflite object. We have also provided some initial declarations for things like the image height of our canvas and which detection model we're using; specifically we are using the COCO SSD MobileNet model, but we also have the YOLO model and SSD MobileNet, and these are the models we'll compare, so we have three different types of models to use. Then we set up our initial state, containing the model source, the image height, and the recognitions, because when you run these models on an image they return an array of recognitions, from which the most confident one will be selected. That is what happens in our
initial declaration of the variables we need. Now let's look at the code that handles all the different models that are supported. We have taken three models, each with its own unique .tflite file. The first is the SSD MobileNet model; we also load its labels, which are what can be recognized, so whenever you provide an image the model assigns it whichever label it thinks is closest. Then we have YOLO, again used for image classification, with another .tflite file, and then a standard MobileNet v1 TFLite model. We can select one of these models, based on the performance we observe, using a switch statement. tflite.loadModel is responsible for loading the model into local storage, because initially no model is loaded, to save time and computation; with tflite.loadModel we load the desired model. By selecting which model we want, we load that model and use it for image classification.
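The model-selection switch can be sketched as a pure function mapping a model name to its .tflite file and label file. The file names here are placeholders invented for the example; in the real app the selected pair would be passed to tflite.loadModel.

```javascript
// Map a model choice to the assets to load. File names are placeholders.
function selectModel(name) {
  switch (name) {
    case 'SSD':
      return { file: 'ssd_mobilenet.tflite', labels: 'ssd_labels.txt' };
    case 'YOLO':
      return { file: 'yolov2_tiny.tflite', labels: 'yolo_labels.txt' };
    case 'MobileNet':
      return { file: 'mobilenet_v1.tflite', labels: 'mobilenet_labels.txt' };
    default:
      throw new Error(`unknown model: ${name}`);
  }
}
```

Keeping the selection in one place makes it easy to swap models while comparing their performance, as the talk does.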
Next, let's look at the code used for rendering the boxes. Once we detect something inside an image, we want to draw a bounding box around it: a rectangle given by four values marking the extents of the detected region. We use the renderBoxes function to render the bounding boxes on top of the recognized regions. We call recognitions.map, because there may be a number of recognitions, and for each of them we compute the coordinates: left, top, width, and height. These give us the corners of the region where we draw the bounding box. We return a React Native View, generated when something has been detected successfully, on which the bounding box is rendered. The View has a blue background, styled with styles.box, and inside it we render the detected class, the type of object found, along with the confidence percentage, because, as we know, any prediction in machine learning gives you a range of candidates, and we take the one with the highest confidence and draw it when renderBoxes is called.
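The left/top/width/height computation inside recognitions.map can be sketched as a pure function. Detectors typically return a rectangle in normalized [0, 1] coordinates that must be scaled to the displayed image size before drawing; this sketch assumes that normalized-rect shape, which may differ from the exact output of tflite-react-native.

```javascript
// Convert a normalized detection rect into pixel coordinates for a view.
function toBoxStyle(recognition, imageWidth, imageHeight) {
  const { x, y, w, h } = recognition.rect; // values assumed in [0, 1]
  return {
    left: Math.round(x * imageWidth),
    top: Math.round(y * imageHeight),
    width: Math.round(w * imageWidth),
    height: Math.round(h * imageHeight),
  };
}

const box = toBoxStyle(
  { detectedClass: 'cat', confidenceInClass: 0.92, rect: { x: 0.1, y: 0.2, w: 0.5, h: 0.25 } },
  400,
  200
);
```

The resulting object can be spread straight into a View's absolute-positioning style.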
Now, looking at our main code, the onSelectImage handler: here we use the image-picker npm package to select one of the images. Since this is React Native, you provide the path in your OS, and the response gives you the file path; we read the height and width from the response and, since we are using React, call setState to store the source file path and the image height. As you can see, in the next couple of lines we transform the image height and width by scaling them. Then, for whichever model we want to use, we have a switch statement with the different cases. Say we want to use the SSD model: we simply call the tflite object we declared initially and use its detectObjectOnImage function, passing the path, the required threshold, and the number of results per class. The detectObjectOnImage function then tries to detect whatever it finds; if there is an error it throws it, and otherwise we update the recognitions in our current state with the resulting map of results, where each result has a corresponding value attached.
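The height/width transformation mentioned above — scaling the picked image to the display size while preserving aspect ratio — can be sketched as a small pure function. The display width used here is an assumed constant, not taken from the talk's code.

```javascript
// Scale image dimensions to a fixed display width, preserving aspect ratio.
function fitToWidth(srcWidth, srcHeight, displayWidth) {
  const scale = displayWidth / srcWidth;
  return { width: displayWidth, height: Math.round(srcHeight * scale) };
}

const fitted = fitToWidth(1600, 1200, 400); // e.g. a 1600x1200 photo on a 400px-wide canvas
```

Storing the scaled height in state, as the example app does, lets both the image component and the bounding-box overlay agree on the same coordinate space.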
So it gives you a map of the different recognitions that were made; for example, if an image is of a cat, it might have detected cat, dog, and some other labels, each with a corresponding percentage of how confident the model is that it is correct. Whichever one has the highest confidence is, as discussed, used in renderBoxes: from the state we find which model to use, the map of recognitions, and the image height and width, which lets us render the boxes and draw them on the canvas. Beyond this, the remaining steps are to create additional components: one component to render the output image, on top of which you render the bounding boxes; and if you want to do it live via your webcam, you can add another component using the React Native camera and make that a separate view inside your React Native application. This shows how it would function: you pick an image, select it, and the TFLite model detects multiple objects within a single image, creates the corresponding bounding boxes on top of each of them, and gives us a simple image classification solution in React Native.
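Picking the highest-confidence recognition from the returned results can be sketched as a simple reduce. The field names mirror those used earlier in the example (detectedClass, confidenceInClass) but are assumptions about the package's output shape.

```javascript
// Return the recognition with the highest confidence, or null if none.
function topRecognition(recognitions) {
  return recognitions.reduce(
    (best, r) => (best === null || r.confidenceInClass > best.confidenceInClass ? r : best),
    null
  );
}

const best = topRecognition([
  { detectedClass: 'dog', confidenceInClass: 0.31 },
  { detectedClass: 'cat', confidenceInClass: 0.87 },
]);
```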
Beyond this simple example, you can of course do many other things by including TFLite with React Native. Support for running MediaPipe solutions specifically inside React Native is not great yet, but a lot is happening within Google's MediaPipe team to help with integration; right now there is great support for native Android and native iOS applications, and very soon we should be able to get our hands on implementations of MediaPipe solutions with React Native. That is the example we covered today, and with that my talk comes to an end. I hope you liked it and learned something from it. The most important thing to keep in mind is that MediaPipe is a really great solution created by the Google team, with many future applications, and the amount of support and the number of examples coming out of the MediaPipe community are great to see. So if you are genuinely interested in creating these streaming machine learning applications, perhaps a virtual physiotherapist, a dance teacher, or something face-based, they will all come in very handy and benefit from the MediaPipe solutions provided by the MediaPipe team, and you should definitely check them out. TensorFlow Lite, at its core, lets you run TensorFlow models as a light version of the machine learning model, built for on-device machine learning solutions, so it can do the
ML processing on-device, which makes it really great. And it's not just for streaming solutions; it can be used for many other things as well. With that, my talk comes to an end. I hope you liked it. You can connect with me on my Twitter handle and on GitHub to ask any questions you might have about MediaPipe, how to integrate TFLite with React Native, and how to empower React Native applications, because React Native is one of the most popular frameworks out there, and being able to integrate machine learning models with it is highly useful. Thank you so much for attending this talk, and I hope to see you next year in person. Thank you so much.

Thank you, Shivay, for this talk. It's really great to see how React Native can be used with advanced features such as machine learning, or sometimes even VR, showing us that React Native's capabilities go beyond traditional mobile development. Now moving on to the next speaker: we have Arnold from Adventist, and he's going to talk about GraphQL with React Query, something they are using in production in a fully fledged application. In this talk we're going to learn about their experience with it, all the great features, and Arnold, I guess, will try to convince us that React Query is what we should all be using for GraphQL development, along with the features they are planning to use in the future. So let's see what Arnold has prepared for us today.

Hey everyone, I'm very happy and excited to speak at the React Native EU conference. Today we'll talk about React Query with GraphQL Codegen and TypeScript. Let's start with a few words about me: my name is Arno, and I am an entrepreneur and CTO. I'm also passionate about developer experience and solving
complex problems. React Query is quite popular now; it's a powerful solution to fetch and cache remote data. The library was created by Tanner Linsley, who is also famous for the React Table and React Charts components, and it is now maintained by Dominik, whose blog has a lot of very interesting articles about React Query and TypeScript. We'll also use GraphQL Code Generator, a simple CLI that you use to generate your TypeScript types and operations from your GraphQL schema. This CLI was created by The Guild, a group of open-source developers who have a lot of other solutions for GraphQL, and the code generator itself comes with a lot of plugins for popular languages and libraries.

Let's have a look at the technical stack for this demonstration. For the React Native app we will use Expo, TypeScript, GraphQL Code Generator with GraphQL Config, React Query with graphql-request, React Native Paper, and also Suspense and error boundaries. For the API we'll use Hasura with a Postgres database, and everything will be hosted with Nhost, which is a great backend-as-a-service solution. This technical stack is very close to what we did for our own React Native app in production, so we will be able to share some recipes and some code with you today about how to use React Query with the code generator and TypeScript.

Let's have a look at the agenda: we will review the different features and the code in this React Native app — here you can see the topics we will cover today — and we'll conclude with a comparison with other popular libraries such as Apollo Client and Relay, and I will share with you the lessons we have learned during this project.

Let's start with the stale-while-revalidate concept to see how it works. To demonstrate it, I will use Hasura to edit the data in the database; Hasura will expose my data as a GraphQL API that is consumed by
this very simple React Native application I have created especially for this talk. When you click on one movie, we fetch the data about the movie details. What we want to offer as a user experience is this: when you go back to the movie details, you want to use the cache on the client side, so you get an instant response, and in the meantime you expect your React Native app to fetch fresh data from the server. This is actually what is happening now, but my connection is so fast that if I change the data on the server, the change is displayed almost instantly. So I have to slow down the connection to illustrate the stale-while-revalidate concept. I'm using the Network Link Conditioner, which is a great tool to experience different connection speeds; you can download it with the Xcode additional tools.

First, let's change the value of this movie title and the ratings and save the data. Now I can slow down my network connection and click on the movie details, and you can see that for a few seconds we display the cached data, before it has time to revalidate against the server. Let's do it again with a very bad connection: you can see that we are still displaying the data that is in the cache, and if I change the connection quality to Edge, for example, the data gets updated. This is exactly what you expect in a modern app: an instant response from the cache, and a background refetch of the data from the server. React Query has this very good default — the stale-while-revalidate concept — and it's very useful because you get a snappy experience while the data fetching is optimized behind the scenes.

Let's see now how it works and how to generate the corresponding code with GraphQL Code Generator. To use it you have to install the CLI first; next you have to configure the code
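The stale-while-revalidate behaviour demonstrated above can be sketched, independently of React Query itself, as a tiny read-through cache. This is a simplified, synchronous illustration of the concept only — not React Query's implementation — and all names in it are made up for the sketch:

```typescript
// A read-through cache with stale-while-revalidate behaviour: a stale hit
// is served instantly while a fresh value is fetched for the next read.
type Entry<T> = { data: T; updatedAt: number };

class SwrCache<T> {
  private store = new Map<string, Entry<T>>();

  constructor(
    private fetcher: (key: string) => T, // stands in for the network call
    private staleTime = 0,               // ms before an entry counts as stale
  ) {}

  get(key: string, now = Date.now()): T {
    const hit = this.store.get(key);
    if (hit) {
      if (now - hit.updatedAt > this.staleTime) {
        // Stale: revalidate "in the background", but still serve the cached value.
        this.store.set(key, { data: this.fetcher(key), updatedAt: now });
      }
      return hit.data; // instant response from the cache
    }
    // Cache miss: fetch and populate.
    const data = this.fetcher(key);
    this.store.set(key, { data, updatedAt: now });
    return data;
  }
}
```

The second read of a stale key returns the old value immediately, and the revalidated value only appears on the read after that — which is exactly the "instant cached screen, then fresh data" experience shown in the demo.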
generator. To do that I will use GraphQL Config, which also allows me to use the VS Code GraphQL extension, so the same configuration file works for both the code generator and the extension. To configure the code generator you define the schema endpoint (you can add some optional headers here), you define where your queries and mutations live and where you want to generate your TypeScript types, and with which plugins — here we want TypeScript, and we want to generate hooks using React Query, with React Query using graphql-request as its fetcher. That's it; you can now move on to your queries and mutations.

Let's create a new one. Here I'm using the VS Code GraphQL extension again, so I have autocomplete: I can browse my API schema, select the fields and queries I want to use, and even execute a query. It's really great to be able to create your queries and mutations very easily and quickly in a .graphql file and have a preview of the data. If we look at a more detailed example that we will use in our demo application, you can see that you can define movie fragments to reuse in your queries and mutations, you can define your GraphQL variables, and you can define mutations as well. Everything is defined in these .graphql files; you can have one or multiple files, depending on how large your application is and how you want to structure your operations.

The next step is to generate the corresponding types and hooks. In the terminal I use my generate command, which just runs graphql-codegen with the config file I've just defined. It's quite easy and very fast — it generates all the types and all the hooks in two seconds. Let's have a look at the generated file: here you have
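An operations file of the kind described here could look roughly like this — a hypothetical sketch where the fragment, field, and mutation names are illustrative (the mutation follows Hasura-style naming, which may differ from the actual demo schema):

```graphql
# movies.graphql — illustrative operations, not the talk's exact file
fragment Movie on movies {
  id
  title
  rating
}

query Movies($limit: Int!, $offset: Int!) {
  movies(limit: $limit, offset: $offset) {
    ...Movie
  }
}

mutation RateMovie($id: Int!, $rating: Int!) {
  update_movies_by_pk(pk_columns: { id: $id }, _set: { rating: $rating }) {
    ...Movie
  }
}
```

Running the generator over a file like this is what produces the typed hooks (one per query and mutation) discussed next.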
a very large file with all the types and hooks generated from the GraphQL API schema. Here you can look at the movies type, and if we go to the bottom we find our hooks, which use the React Query hooks and the fetcher. You don't have to take care of this file; it is automatically maintained and generated by graphql-codegen. I mean that you don't have to manually edit or add types to it — you just regenerate it when your schema has changed or when you have added new queries or mutations to your .graphql file. As you can see, GraphQL Code Generator is a productivity booster: you can just focus on using your React Query hooks instead of having to first create all the types, queries, and hooks somewhere yourself.

The next step is to see how to use these generated hooks and types in our components. Here we are in our App component. As in a standard React Native application, we have to define some providers: one provider for the GraphQL client and another one for the query client. The GraphQL client provider is a custom one that just wraps the graphql-request client, where you define your GraphQL API endpoint — and for an app in production you would probably define some authorization headers too. The next provider you need is the existing QueryClientProvider that ships with React Query, and you have to set your query client on it. Our query client is a new QueryClient from React Query, and what is really great with React Query is that you can define global settings for all of your queries: here we have defined that we want to use Suspense, and that we don't want multiple retries when an error occurs — that's just for demo purposes; the default is three retries, and you can adjust this setting according to your needs. There are a lot of other settings
that you can define globally for all of your queries and mutations. That's it for the main App file: you just have to define these two providers. Next you actually use the hooks generated by GraphQL Code Generator in your components. Let's have a look at the movies list component. Here I have my MoviesList component, and this list uses a custom hook to fetch all the movies from the API. Inside this custom hook I'm using the GraphQL client that React Query needs, and the hooks generated by graphql-codegen — and that's it. It's nice because I have a nicely encapsulated hook, the component code is very lean, I can add additional configuration and logic here, and I don't have to implement the hook myself; it's already generated by graphql-codegen.

In this example the first parameter is the fetcher — in our case the GraphQL client — and after that you have the variables: these are the query variables I defined in my .graphql file, for example the offset. So it's very easy: you just consume the hooks generated by GraphQL Code Generator, and you get this query result. If you want, you can use all the available query info fields — isFetching, isError, isSuccess, and so on — but in this case we just return the data to our components: the movies. Again, thanks to GraphQL Code Generator and TypeScript, I automatically have these nice types. I also want to expose to my components a refetch method, which React Query's useQuery provides, so that in my MoviesList component I can use the movies data, and if I need to refetch the list I can call this function. This is basically how you can use React Query and
the hooks generated by the code generator to have a very clean and encapsulated solution in your components.

OK, so we have just seen how to use GraphQL Code Generator and the generated hooks in our components; now let's have a look at the other features React Query provides. First, initial data. If we go back to our very simple app, we have this list of data — the list just fetches some data from our GraphQL API, like the id, the title, and the ratings, so it's a simple set of information — and when we click on one movie we want to display instantly the information we already have in the list, while loading the extra details about the movie. Again: if I click here, it's already in the cache, so it's instant; if I go here, I instantly have the title and the ratings, and I want to load the detailed information when the screen is displayed.

To do that, the nice way with React Query is to use initial data. If we look at our MoviesList component: when you click on one item, we use React Navigation to navigate to the movie detail screen, and we send the movie — the actual item in the list — as a route parameter. So when I click here, I navigate to the screen and pass the movie data I already have: the id, the title, and the ratings, and I want to display this information instantly. In the movie details screen component I have a custom hook, useMovieDetails, whose parameter is a movie — a movie fragment. If we look at the fragment, it's only the id, the title, and the ratings, but the movie details fragment is more: the storyline, the genre, and the ratings. So what I need here are the details, but I want to instantly display the information I already have. To do that I can go into my custom hook,
and here I'm using the hook generated automatically by graphql-codegen; I have a variable, which is the movie id, and I have this great initialData React Query property. I can say: use this initial data for this query, and it will automatically be used by my component to display some information without waiting for all the details to come from the API. It's a great way to manage this case, because your movie detail screen uses a single source of truth — React Query — and you don't have to keep an extra copy of the movie list item somewhere; you just rely on the same movie details information here. This is how you can use initial data for this kind of very optimized solution: you can have a GraphQL query with only the data you need for the screen, and you don't have to fetch everything. I know that with GraphQL you can do that, and it's nice in a lot of cases, but if you have a huge list, or a lot of details to fetch, it's better to optimize your list query to fetch only what you need to display, and load the details when needed — while still instantly displaying what you already have in your list.

What you can also do with React Query is prefetch: on some events you can prefetch data if you anticipate that the user will open another screen, so you can preload it. That's it for initial data — again, it's very nice to have a single source of truth in your components and to optimize how the data is fetched and displayed.

Let's now have a look at the automatic refresh options React Query provides, starting with the on-app-focus refresh. Here I am in the App component, and I have also added some logs about when data is fetched from the server: on the left you have my actual device, and you can see that when I manually refresh my list, you see the logs at the same time. What I want is: when
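The initialData idea above can be reduced to a small, library-free sketch: a query resolves instantly from seed data passed via navigation while the full record is still being fetched. Types and names here are illustrative only, not React Query's API:

```typescript
// The list screen knows a subset of the movie; the details screen wants more.
type MovieListItem = { id: number; title: string; rating: number };
type MovieDetails = MovieListItem & { storyline?: string; genre?: string };

function resolveMovie(
  cached: MovieDetails | undefined, // what the query cache already holds
  seed: MovieListItem,              // the route param from the list screen
): { data: MovieDetails; isPartial: boolean } {
  // initialData behaviour: render something instantly, and flag whether
  // the full details are still on their way from the server.
  return cached
    ? { data: cached, isPartial: false }
    : { data: seed, isPartial: true };
}
```

The point of the single-source-of-truth argument is that the component only ever reads one value — the resolved query data — regardless of whether it currently comes from the seed or from the server.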
the application goes to the background and comes back to the foreground, I get this automatic refresh. You can see that it's already done here automatically, and it also works for the detail screen. When the application comes back to the foreground, React Query automatically refetches all the active queries. I think it's a great behavior, because users don't have to refetch manually when they go back to a screen; it's just automatically performed by React Query.

Implementing it is quite easy — though not as easy as on the web, where this is a default with a React web application. In a React Native application you have to use a hook that uses AppState, a React Native API, with an onAppStateChange function: when the React Native app state changes, this function is called automatically, and it tells React Query whether the app is focused. When the application is active you want a refetch; when it's not active, you don't. And that's it — all of your active React Query queries will be refetched when the application comes back to the foreground. I think it's great: you don't have to code this case everywhere in your components; you just add this very simple function and use the focusManager to tell React Query that it has to refetch on focus.

The next feature I won't be able to demonstrate, but I'll show you how to implement it: when the network goes offline and then comes back online, you want to automatically refresh the data. One way to do that is to create a new custom hook, and in this hook we'll just use React Query's onlineManager to set the network status. To know the network status you have to
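The app-focus mechanism just described can be illustrated with a self-contained sketch of a focus manager: subscribers are notified only when the app regains the foreground. This mimics the role React Query's focusManager plays; the class and names here are invented for the illustration:

```typescript
// Mirrors React Native's AppState status values.
type AppStateStatus = "active" | "background" | "inactive";

class FocusManager {
  private focused = true;
  private listeners: Array<() => void> = [];

  // In a real app, the listener would refetch all active queries.
  onFocus(listener: () => void): void {
    this.listeners.push(listener);
  }

  // Called from an AppState change handler: only the transition
  // background/inactive -> active triggers the listeners.
  setFocused(status: AppStateStatus): void {
    const next = status === "active";
    const regained = !this.focused && next;
    this.focused = next;
    if (regained) this.listeners.forEach((listener) => listener());
  }
}
```

Note that staying "active" fires nothing — the refetch happens only on the transition back to the foreground, which is why the user sees fresh data exactly when returning to the app.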
use the NetInfo object from the React Native community's NetInfo package. It's very easy: you just implement and call this hook in your App component, and when the device goes offline and then comes back online while the user is in your app, all the queries will automatically be refreshed. Again, it's very nice UX, because on mobile you can have these connection issues — you know that if a user has your application open and goes from offline to online, their data will be refreshed automatically.

The next refresh options we are going to see are on-screen-focus and polling. On-screen-focus is when you want to refetch, for example, the movies list when you display the screen. Right now, this is actually what we do: when you go to a detail screen and come back to the list, the data is refreshed — but that is still the stale-while-revalidate feature, meaning that when you go back to the list, React Query uses the cache first and refreshes the data from the server in the background. To add an explicit refresh on screen focus, we have to implement it. To do that, in the MoviesList component I'm using a new custom hook, and this hook is quite simple: it uses useFocusEffect from React Navigation, and it just initiates a React Query refetch when the screen is displayed again — only the second time, not the first time; it refreshes only when you come back to the screen. This is a very simple solution to implement this refresh-on-focus with React Query; the refetch function here is a parameter of this useRefreshOnFocus hook and comes from React Query's useQuery hook. Pretty straightforward — you just use it wherever you need an automatic refresh when a user comes back to one of your screens.

The last automatic refetch option we are going to see is polling. Polling
is when you want to automatically refresh your list — in this example the movies list — at a specified interval. It's very easy to do: you just edit your React Query options. Here I'm using my custom useMovies hook, and I just set the refetchInterval option in my useQuery call. Let's say I want a refresh every two seconds: you can see in the terminal that this query is automatically refreshed every two seconds. You can set this option for a specific query, as we do here, or globally in the query client options. It's great when you want this kind of automatic refresh, and it can be complementary to push notifications or GraphQL subscriptions. It's good to know that this option exists — just take care of the energy consumption, and ask whether it's really relevant if the user stays on the same screen for a long time. It's up to you to decide whether to use refetchInterval.

To conclude on these automatic refresh options: I think React Query covers maybe 80% of your needs in terms of automatic refresh, integrates quite well with React Native, and is easy to implement. One very strong point of React Query is how easy it is to get these very good defaults — what you can expect from a modern React Native application.

Now we can explore mutations, and see how to invalidate the cache and do optimistic updates with React Query. In our movie details component we can do a mutation on the movie ratings. What we want here is instant feedback: when you tap a star, you want the UI to update instantly without waiting for a response from the server — this is an optimistic update. You also want that when a mutation has succeeded, the list information is refreshed automatically, so it stays consistent with the mutation you have just
done in the details screen. So how do we manage cache invalidation and optimistic updates in the code? We have a custom hook that returns a function that is called when a user taps a star; this function accepts the movie id and the new ratings. Looking deeper into this hook, you can see the function definition we return: it just handles the new ratings and calls mutateAsync, the React Query function you use to call your GraphQL API, passing these variables — this is where the mutation actually happens.

We have another hook where you define the mutation used here. This hook uses the hook generated by graphql-codegen from the schema and your .graphql file; it creates a new mutation with the different options you define: onMutate, onSuccess, and onError (there is also an onSettled option, but I don't use it here). In onMutate, the first thing I want to do is cancel ongoing fetching — I don't want fetches in flight while I do a mutation. Then I need to get the previous movie details, because I want to do an optimistic update: I'm going to alter the current data, and I'll also need this previous data if I have to roll back when an error occurs later. Here is where we do the optimistic update: using React Query's queryClient with the setQueryData function, I say that I want to update my query data for this query — with this variable, the movie id — and change the ratings to the new ratings. This is how to do an optimistic update with React Query.

If the mutation succeeds, I just have to invalidate the queries I want: in my case, when I change a rating, I want to update the list — and you can see, again, if I
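The onMutate / onError flow described above — snapshot the previous value, write optimistically, keep a rollback for errors — can be sketched on a plain Map standing in for the query cache. This is a concept illustration with invented names, not React Query's API:

```typescript
type Movie = { id: number; title: string; rating: number };

// Optimistically apply a new rating and return a rollback closure,
// mirroring what onMutate (snapshot + write) and onError (restore) do.
function optimisticRate(cache: Map<number, Movie>, id: number, rating: number) {
  const previous = cache.get(id); // snapshot, taken in onMutate
  if (previous) cache.set(id, { ...previous, rating }); // optimistic write
  return {
    rollback: () => {
      // Used by onError: restore the pre-mutation snapshot.
      if (previous) cache.set(id, previous);
    },
  };
}
```

The UI reads the cache, so it shows the new rating immediately; if the server rejects the mutation, calling rollback puts the old value back without any extra network round trip.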
click the star and look at the terminal: it automatically initiates a cache invalidation of the movies list, so I'm sure that if the user goes back to that screen, they will have fresh data consistent with my mutation. The last point is error management: if I have an error during my mutation, I can, with the same setQueryData mechanism, roll back to my initial state — for this particular query I set the data back to the previous data. It's very straightforward: you just work with queries, query keys, and data, and you can easily set your query data this way for an optimistic update or to roll back to a previous value. So this is how complex it is to manage mutations, cache invalidation, and optimistic updates — in the end it's not that complex; you have very good control over what is going on, and you can do exactly what you want in terms of cache invalidation and which data to update when a mutation occurs.

Now, how do you use React Suspense and error boundaries with React Query? First of all, React Query supports both. So how do you use these features in your code? Our movies list is actually using Suspense, and it keeps the component code very lean, because we don't have to manage a loading indicator or error handling there — it's all done in the movies list screen, where we use Suspense on top of the movies list, and thanks to React Query's support we don't have to add any additional code; we just fall back to our loading screen. Let's use a slow connection and reload the app so we have time to see the loading indicator — here it is, and the list is displayed just after. I think that for eighty percent of cases Suspense is a very good combination with React Query, and the same for error
boundaries. React Query provides a QueryErrorResetBoundary provider that you can use with an error boundary, so you can fall back to an error message when a React Query error occurs — a GraphQL error, a network error, something like that. You can display a nice generic error screen with a button to reset the query state. Again, for most cases this is very nice, but if you need more control — for example, in the movie detail screen we have this two-step display where we first show the data from the list and then load the details — you need something more precise. Maybe it's doable with Suspense, but to keep the component simple we have just ejected from Suspense for this movie details component. To eject, you just set the corresponding property in the React Query hook: here I set suspense to false, but I still want to use the error boundary — you can mix exactly what you want. Now, in my movie details code, I can display a loading indicator according to the state or the data I want. It's great to be able to define the behavior you want, and to choose whether to use Suspense or not per screen, according to your needs.

FlatList with infinite queries — let's see how it works with React Query and React Native. Now my application supports an infinite list on the movies screen: when I scroll down past 30 items, it fetches a new page from my GraphQL API — you can see in the terminal that React Query automatically fetches the next page. How does it work? I have this useInfiniteMovies custom hook that returns a fetchNextPage function; fetchNextPage is called by the onEndReached handler, and this handler is used in the FlatList's onEndReached prop. So when the end of the list is reached, this handler is called, and this function calls fetchNext
page — a React Query function — and React Query fetches the next page according to the settings defined in this custom hook. Here we don't use a hook generated by graphql-codegen, because there are no infinite-query hooks generated automatically by the React Query plugin, so we do have to implement our own function. We use the useInfiniteQuery function from React Query, and we just call the GraphQL client to make the GraphQL request; this function receives the page number, and from that we define the new variables, like the offset — and the limit is the page size. In this case, because we use an offset for the pagination, it's very easy to calculate the next offset for the next page. You also have to implement the getNextPageParam option with a function that returns the next page number. So this is how to manage an infinite query in React Query: first you do your query with the pageParam you receive, you expose a function to return the next pageParam, and — yes, sorry — you still have to return the data, and the data is all the movies from all the pages, so you return the pages that are in the current state and add your new movies. This way your movies list contains all the movies, and it's great, because now I have this infinite pagination, and when I do a mutation, you can see that it automatically refreshes all the pages — which is both good and not so good, because in terms of performance you might want to refresh only the impacted pages. We'll discuss that in a moment, but it's a good default: if you do a mutation, you want your list fresh when you go back to it, and React Query handles that for you. You don't have to take care of it; you just invalidate your query, and if your query uses pages, no
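The offset-pagination arithmetic described above can be captured in a few pure helpers: a page size of 30, the offset computed from the page number, a getNextPageParam-style function that stops when a short page arrives, and a flattener for the FlatList data prop. All names here are illustrative, under the assumption of the 30-item pages used in the demo:

```typescript
const PAGE_SIZE = 30;

// Offset for a given zero-based page number: page 0 -> 0, page 1 -> 30, …
const offsetFor = (page: number): number => page * PAGE_SIZE;

// Next page number, or undefined when the last page was short — the same
// contract as React Query's getNextPageParam option.
function nextPageParam<T>(lastPage: T[], allPages: T[][]): number | undefined {
  return lastPage.length === PAGE_SIZE ? allPages.length : undefined;
}

// Flatten all fetched pages into one array for the FlatList data prop.
function flattenPages<T>(pages: T[][]): T[] {
  return pages.reduce((all, page) => all.concat(page), [] as T[]);
}
```

Returning undefined from the page-param function is what tells the infinite query that there is nothing more to fetch, so onEndReached stops triggering requests at the end of the data set.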
problem — it will refetch all the pages in your infinite list. You may also want to support optimistic updates here: in our case, when I tap the rating, I want my UI automatically updated in the list too. To do that, I updated my mutation: in addition to the optimistic update of the movie details, I added a function that optimistically updates my list of movies. To update the list I again use React Query's setQueryData: I get the previous data and update the corresponding item in the list — the movie I've just updated is the one that has to change. This solution is nice when you really need an optimistic update on all the screens impacted by a mutation. We don't use it a lot, because query invalidation works very nicely and our offline-support mode is not very advanced yet, so we don't have to — but if you want to optimistically update the list in addition to the current screen, this is something you can do with React Query thanks to setQueryData.

And now the last, but not least, feature I'd like to discuss today: cache persistence in AsyncStorage. This is great when you want to store all the cached data in AsyncStorage, so that when your users come back to your app, all of this data is already populated in your screens — a very snappy UX — and it's also very useful when you want to implement offline-mode support. To activate this feature you have to use some new experimental React Query features. In the App component we call a function that needs a persistor: it's a custom function that uses the new persistQueryClient function, and then you create an AsyncStorage persistor. It's very straightforward: persistQueryClient just receives a query client, a persistor, and a buster value used to know when persistQueryClient has to bust the cache. Here we want to clean the cache when the application version has changed, so we are sure we won't reuse data that may have changed due to an API change or new changes in our components. And that's it — this is actually all you have to do to persist your query cache data in AsyncStorage; it's as simple as that. So you can imagine how simple it now is to persist all of your query cache with this very simple function. It's a great feature; hopefully it will become official and stable. We have decided to use it in production, and so far so good — we are happy and excited to use it, because it's maybe ten lines of code to manage this great feature.

As you can see, React Query is a great solution, especially in terms of developer experience and features with very good defaults. But how does it compare to other popular libraries? I think all of the solutions I'll show you now are great — you can create very nice and sophisticated applications with all of them — so it's really a question of the philosophy and strategy you have in your React Native app development, the skills of your team, and what you want to do. I see two categories right now: the universal data-fetching-and-caching approach, and the GraphQL state-management-framework approach. React Query is in the first category: it's something very universal; you just think about queries, cache, and optimistic updates — you don't have to think about the schema and how your GraphQL API schema is defined. It's just queries, and invalidating queries with variables, so it's very straightforward, and you have total control over how your cache will be
invalidated, and when. This can be really good for some teams, because you have an easy learning curve while still allowing very precise configuration for sophisticated cases. In this category you also have SWR, often used with Next.js, and Redux Toolkit Query, which is more integrated with Redux if you like that approach. React Query in this category is really great, and you also have to take into consideration the community, the maintainer, and all the resources around it, like the GraphQL Code Generator plugin — so for me it's my preferred choice in this category.

On the other hand, you have the state-management frameworks: Relay, Apollo Client, and urql. I think Relay is the oldest one, and it's very mature, but it's opinionated about the schema, which can be a showstopper in situations where you want full freedom over the schema you have created. But if you respect the rules, you really have a great solution: you have all the documentation and all the features; you just follow the framework's rules and you get great results. Apollo Client is really nice — I used Apollo Client a lot on other projects, and even in the first version of our React Native app. It's very nice if you like the framework approach, and it has other very good features that rely on your GraphQL schema; it also relies on a normalized cache, meaning that when you do a mutation it automatically knows which data to update in other queries. That's nice, but there are a lot of edge cases, and again there is a bit of a learning curve when you want to use all the possibilities of this great framework. urql is a little bit newer compared to Apollo Client and Relay, but it's very nice because it provides both document and normalized cache approaches — so maybe this is the one I
prefer in this category, very close to Apollo Client. So again, just have a look at the documentation, the repo, and a real-life application example, and first decide which approach you want: a universal one, or maybe something more optimized around your GraphQL schema. The last one is GQless. It's a very interesting one, because you don't have to create your query beforehand: you just write your component and use your data, and it will automatically generate your GraphQL query. It's brand new and very interesting, maybe if you want to go very fast and don't want to bother with creating your GraphQL queries and mutations elsewhere. We are very lucky, because the GraphQL ecosystem now is so good, with so many strong solutions, that we can just decide which flavor we prefer; it's a question of taste, of team, and of how you want to manage your React Native project with GraphQL. It's time now to share with you the lessons we have learned during our journey with React Query. First, start your project with GraphQL Code Generator: it's a game changer in terms of developer experience and productivity. It's so fast, so don't waste your time writing types and hooks by hand; just use GraphQL Code Generator with the React Query plugin, and the VS Code extension is really nice too. Use custom hooks where possible to keep your React components' logic clean; even though useQuery and useMutation are very easy to use, it is good to have this kind of abstraction so your React components stay really clean. When possible, have a single source of truth for your queries' data: you can use the initialData property when it's relevant. It's great because your components just fetch some data, and that data can be set by initialData, by an optimistic update, or by the GraphQL API, but it will be the same source of truth. So try to
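The "custom hooks plus initialData" lessons can be sketched like this. The hook, fetcher, and type names below are illustrative assumptions, not the talk's code; the `initialData`-as-function pattern follows React Query's documented API.

```typescript
// Hedged sketch: wrap useQuery in a domain hook so components stay clean,
// and seed the detail query from data the list query already fetched.
//
// import { useQuery, useQueryClient } from 'react-query';

interface Show {
  id: string;
  title: string;
}

// function useShow(id: string) {
//   const queryClient = useQueryClient();
//   return useQuery(['show', { id }], () => fetchShow(id), {
//     // Single source of truth: reuse the item from the cached list as the
//     // initial data instead of showing a spinner on first render.
//     initialData: () => pickInitialShow(queryClient.getQueryData<Show[]>(['shows']), id),
//   });
// }

// The seeding logic itself is a pure lookup:
function pickInitialShow(shows: Show[] | undefined, id: string): Show | undefined {
  return shows?.find((s) => s.id === id);
}
```

Components then call `useShow(id)` and never touch query keys or caching details directly, which is the abstraction the talk recommends.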
have only a single source of truth when it comes to your data. Always test with the network conditioner, because otherwise your connection is often very fast and you don't see exactly what is going on. If you want to test loading screens, errors, and how the optimistic updates behave, use the network conditioner as much as possible when you evaluate your feature and your application. As you saw just before, it's very easy to log everything happening in your cache and your queries, so monitor it, especially at the beginning, to really understand how React Query works and when data is fetched; it's a great way to learn and to debug. There is an enabled option in useQuery, and we use it a lot to manage dependent and conditional queries; have a look at this option and you will see a lot of use cases where you may need it. And about Suspense: yes, Suspense is great in terms of productivity, because it means less code in your components, but in some situations you may need more control over the loading state or the error management, so at some point you can be limited with Suspense. This is not a problem, because React Query exposes a lot of status variables you can use to check very precisely what the status of your query is. What's coming next for us? We will work with React Query towards full offline support; right now we have some limited offline support features, but I think React Query will provide all the features we need for full offline support. The other point is GraphQL subscription support: right now it's possible, but it's not very well documented on the current website, so you have to investigate a little bit. There is a very nice article by Dominik about using WebSockets with React Query, and a discussion in the repo, but we would like to see more details, more examples, and best
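The dependent-queries pattern with the `enabled` option can be sketched as follows. The `fetchUser`/`fetchProjects` names are hypothetical; the `enabled` flag itself is React Query's documented mechanism for holding a query idle until its inputs exist.

```typescript
// Hedged sketch of dependent queries: the second query only runs once the
// first has produced the value it needs.
//
// import { useQuery } from 'react-query';
//
// const { data: user } = useQuery(['user', email], () => fetchUser(email));
// const { data: projects } = useQuery(
//   ['projects', user?.id],
//   () => fetchProjects(user!.id),
//   { enabled: isQueryEnabled(user?.id) }, // stays idle until user.id exists
// );

// The gating condition is plain logic:
function isQueryEnabled(userId: string | undefined): boolean {
  return Boolean(userId);
}
```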
practices about using GraphQL subscriptions with React Query. So again, I was really excited and happy to speak at the React Native EU conference, and I hope this talk will be useful, with a lot of recipes and code examples. I wish you a great conference with very interesting talks. Thank you! Thank you, Arnold, for your talk. I'm always happy to see people sharing their first-hand experience with their production application; I think we can all learn from that, and let's discuss this on our Discord channel. Next up is Helena Ford, founder of Stack Tiger and maintainer of a library called Notifee, and her talk will allow us to take our notification game to the next level. Helena, the stage is yours. Hey everyone, I'm Helena Ford and I'm the CTO of Stack Tiger, a mobile dev agency in the UK. I've been working with React Native since the beginning, and I'm an active maintainer of Notifee, a local notification library by Invertase. You can find me at @helenaford on Twitter, and I have a blog, ford.dev. Today I'm going to give a quick demo and talk about how you can exploit the full power of notifications to increase user engagement and retention. We'll cover getting started and set up with Notifee, media support, quick actions, scheduling user notifications, and a few other bits. Push notifications often require third-party services like Firebase to operate; local notifications, however, are configured on the device itself, which lets you get them up and running without any such third parties. A good example is your alarm clock, which sends a local notification at whatever time you set your alarm for. There are several React Native libraries out there that can help you configure these notifications; one of them, which I'm going to talk about today, is Notifee by Invertase. Notifee enables developers to build rich notifications with a simple API, whilst taking care of complex problems such as scheduling, background tasks, device API compatibility, and more. Okay, so let's get into the code then. So you
see on the screen an example project I prepared for this demo, which shows some TV shows: just plain, simple JSON data with a Display Notification button. And if we look here, it's just a plain button with an onPress. So what we want to do here is display a notification, a local notification. Before we even do that, we need to install the library first, of course. I've already installed it for this example, but as you can see in my terminal, you run yarn add @notifee/react-native, then cd into ios and run pod install; that's all you need to do to set up Notifee. And I'm going to just get right into it, so it's fairly straightforward: await notifee.displayNotification. Everything is TypeScript, so we can see what it expects to take. We'll just start off with a simple title and body, Hello World, and Android requires one extra step, which is a channel id. I normally just tend to go with general or default, anything you like really, but keep in mind that the user will see the name when you create the channel, and the same id you pass into the payload. So this is what the users see, so just keep it user friendly. And then importance: this is probably one of the things that trips people up the most; it tripped me up myself. If you want it to be a banner, like a heads-up that goes over the apps, and not just hidden in the notification tray when it comes through, you need to set importance to high. Also, you can call createChannel as many times as you want; it's not really an issue, because Android will just ignore it if the channel is already created. But also remember: once you've created a channel, you can't then update any of its properties; you have to delete it first. So if we go ahead and try to display this, fingers crossed, it'll come up: Hello World. We haven't specified a
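The first demo step can be sketched like this. The commented calls follow Notifee's documented API (`createChannel`, `displayNotification`, `AndroidImportance.HIGH`); the channel id and copy are placeholders.

```typescript
// Hedged sketch of the first demo step: create an Android channel once, then
// display a notification that references it.
//
// import notifee, { AndroidImportance } from '@notifee/react-native';
//
// async function showHello() {
//   const channelId = await notifee.createChannel({
//     id: 'general',
//     name: 'General',                    // the user sees this name in settings
//     importance: AndroidImportance.HIGH, // heads-up banner, not just the tray
//   });
//   await notifee.displayNotification({
//     title: 'Hello',
//     body: 'World',
//     android: { channelId },
//   });
// }

// The notification payload is plain data:
const helloPayload = {
  title: 'Hello',
  body: 'World',
  android: { channelId: 'general' },
};
```

As the talk notes, calling `createChannel` repeatedly is safe (Android ignores duplicates), but a channel's properties cannot be updated after creation without deleting it first.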
notification icon yet, but we can easily do that with smallIcon. You have to add the icon to your project separately, which is detailed on the docs page; I've added it earlier, and it's just a TV icon. You can give it any name; that's just the default name that Android Studio uses. So if we try to display this again, you'll see it now has a TV notification icon, yay. So that's basically part one, and next we'll go into making this a bit more complicated, because this is probably what, you know, most apps can do; title and body is pretty standard. Okay, so as you can see, I've specified a largeIcon. This is literally just a random URL I got off Google, and it's a PNG; on Android you can add a PNG or JPEG, and it can be local or remote. This is a remote URL, and you can also pass a require here; I've tried to do an example of a local and a remote one, so here's a local asset file. If I look in here, you can see I've added the image, and then some actions; that's probably getting ahead of ourselves, but we're taking it one step at a time. So if I slowly start to build this up: I add a largeIcon, and then this URL, just copied from here, I'm not going to type that out. Okay, there you go; you can see the large icon on the right-hand side, which is perfect, it's what we want. So if I just swipe that away, we'll go down into adding the big picture. On Android this is done with styles, so you can have different types of styles: messaging, big picture, inbox. Again, I don't really want to go into the specifics of the API, because it's all online in the docs; this is just to give a feel of what you can do, what is possible. Oh yeah, because I've taken it out of this form here, it's not recognizing the asset, okay. So if I swipe down now on the notification, you should see the picture: so this is the large icon, and then this is styles, with type big picture, where we've used just a local asset. And if we now look at iOS: on iOS they don't have
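The media step can be sketched as follows. The commented fields follow Notifee's documented Android API (`smallIcon`, `largeIcon`, `AndroidStyle.BIGPICTURE`); the icon name, URLs, and asset paths are placeholders, not the demo's actual files.

```typescript
// Hedged sketch of the media step: a small icon (a native resource added to
// the project), a large icon (remote URL or require), and Android's big
// picture style.
//
// import notifee, { AndroidStyle } from '@notifee/react-native';
//
// await notifee.displayNotification({
//   title: 'New episode',
//   body: "Grey's Anatomy is back",
//   android: {
//     channelId: 'general',
//     smallIcon: 'ic_tv',                          // added as a drawable resource
//     largeIcon: 'https://example.com/poster.png', // remote PNG/JPEG, or require(...)
//     style: {
//       type: AndroidStyle.BIGPICTURE,
//       picture: require('./assets/episode.png'),  // local asset also works
//     },
//   },
// });

// Plain-data view of the same android block ('bigpicture' stands in for the
// AndroidStyle.BIGPICTURE enum member here):
const bigPictureAndroid = {
  channelId: 'general',
  smallIcon: 'ic_tv',
  style: { type: 'bigpicture', picture: 'episode.png' },
};
```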
channels, but they do require you to request permissions. For this demo I'll just add it here, and let's see: with requestPermission you can specify exactly what you want, but the default is fine; it will give you the permissions you need to display an alert that comes over the top. So here we want to show a video. You can add images too, but this is the cool thing about iOS: you can have videos. I've added an example of how you can do the image, which is pretty much identical to Android; on iOS the field is just under attachments, and you can pass an array, so if you have loads of images and videos specified here, it will go down the list in order until one can successfully load. Okay, so let's flip over to the iOS simulator and see what this video displays like. Let me press Display Notification. Okay, so this is the extra step; there are different ways and places you can call this throughout your app lifecycle to be less intrusive to the user, but for demo purposes let's just call it straight away and display the notification. Okay, so yeah, the first time it loads, it might take some time if you've got big files; I've obviously not optimized this video or anything. You could also make a GIF, which would probably speed it up as well. And you can play it, here you go. So if I just quickly take all this out and actually see what it would look like with all the right text and everything: new episode of Grey's Anatomy, great, looks good to me, and it plays, great. This isn't actually very useful to the end user yet, though, so now we can start going into what quick actions are. Quick actions are a cool way for your user to engage with your app without them actually having to be in your app, which is great if they're doing something else and they see a notification pop up: they can quickly interact, and it doesn't really affect what they're doing. So for Android, we'd have to go into the API a
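The iOS side can be sketched like this. The commented calls follow Notifee's documented API (`requestPermission`, `ios.attachments`); the media URLs are placeholders. Note the fallback behaviour the talk describes: attachments are tried in order until one loads.

```typescript
// Hedged sketch of the iOS flow: request permission (defaults are enough for
// alerts), then attach media via `ios.attachments`.
//
// import notifee from '@notifee/react-native';
//
// await notifee.requestPermission();
// await notifee.displayNotification({
//   title: 'New episode',
//   body: "Grey's Anatomy is back",
//   ios: {
//     attachments: [
//       { url: 'https://example.com/trailer.mp4' }, // videos work on iOS
//       { url: 'https://example.com/poster.png' },  // image fallback
//     ],
//   },
// });

// Order matters: the first attachment that loads successfully is the one shown.
const attachments = [{ url: 'trailer.mp4' }, { url: 'poster.png' }];
```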
little bit. It accepts an array of actions: each has a pressAction, which is an object, and a title. The title is basically the label, so we've got Watch Now and Save Later, and an id. With the id default, Notifee will know that when someone presses that action, you want the app to open; and the id bookmark is completely whatever you want. In our case we want the user to be able to bookmark from a notification. We can display this and show you the buttons, but they won't do much until we add the event handlers, to know, okay, the user has actually pressed these buttons. The first thing we need to do is add a background handler, so you just import notifee and call notifee.onBackgroundEvent, and this gives you an event with a type and a detail, so we can quickly print that out to see what we get back from the event handler. As the name suggests, this will only run when the app is in the background; there's also an onForegroundEvent handler, which we'll add in a separate place. With the background event, you always want it registered early, before your app has loaded, so it's always there; the foreground one you can pretty much have wherever you want. When I put it in my apps, I always try to register it when the app first renders, as early as possible really, but again, that one can go anytime, anywhere you want. Okay, so as we have two actions with these ids, this is what we want to hook into, so we can do: if type equals EventType.ACTION_PRESS (not sure TypeScript will pick that up, yeah) and the id, which we get from detail.pressAction.id. So if your user presses the default action, the app will open, but we'll also want to cancel the notification, so we'll just do notifee.cancelNotification and give it the id of the notification, which is just
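The background handler described above can be sketched as follows. The commented code follows Notifee's documented API (`onBackgroundEvent`, `EventType.ACTION_PRESS`, `cancelNotification`); the action ids match the demo's Watch Now / bookmark setup.

```typescript
// Hedged sketch of the background handler, registered at module scope
// (before the app mounts) so it is always available.
//
// import notifee, { EventType } from '@notifee/react-native';
//
// notifee.onBackgroundEvent(async ({ type, detail }) => {
//   if (type === EventType.ACTION_PRESS && detail.pressAction?.id === 'default') {
//     // User chose "Watch Now": the app opens, so clear the notification.
//     await notifee.cancelNotification(detail.notification!.id!);
//   }
// });

// The dispatch is just branching on the event type and action id. In the
// talk's log output, EventType.ACTION_PRESS shows up as the numeric value 2:
const ACTION_PRESS = 2;

function shouldCancel(type: number, actionId: string | undefined): boolean {
  return type === ACTION_PRESS && actionId === 'default';
}
```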
detail.notification.id. We can demonstrate here how we would use Watch Now: we trigger it, and if this were a real-world app out in the world, this would be triggered when, you know, the actual episode is about to air or has just been released. So if you put the app in the background, then hopefully our background handler will pick up this event, which is what we want: I want to cancel the notification when the app opens. And if you look down in my terminal, you can see all these events coming through; if I log this to make it easier to read, type 2 is the action press, and you can see it's coming through from the background. Next we'll add a foreground event handler. This can go, like I said, anywhere; I always do it in a useEffect when my app has just rendered or loaded: notifee.onForegroundEvent, and it's exactly the same as the background one, so we have a type and detail on the event. We always want to make sure we're returning the subscription, so it's unsubscribed after the component is unmounted. And in here we want to do the same as in the background one, so: if, just copy that, put it here, import EventType, and this one needs to be bookmark, and we want to record that our user has actually bookmarked this episode or TV show. So what we want to do here is update the bookmarks state, so the user can see that they bookmarked it. All we need to do is call the setter with the id, which we will get from data. You could set your notification id to match the show id, but you can also use data; I don't do that here, because if you wanted to, say, have two notifications for the same show, using just the show id would restrict you, so we just get it from the notification's data. Also, just to explain: every property in data needs to be a string; so detail.notification.data can be anything, any custom data,
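The foreground handler can be sketched like this. The commented code follows Notifee's documented API (`onForegroundEvent` returns an unsubscribe function); the `setBookmarks` setter and `showId` data key are hypothetical names for the demo's state, and note the constraint from the talk that every value in `data` must be a string.

```typescript
// Hedged sketch of the foreground handler inside a useEffect.
//
// import { useEffect } from 'react';
// import notifee, { EventType } from '@notifee/react-native';
//
// useEffect(() => {
//   // onForegroundEvent returns an unsubscribe function; returning it from
//   // the effect makes React unsubscribe on unmount.
//   return notifee.onForegroundEvent(({ type, detail }) => {
//     if (type === EventType.ACTION_PRESS && detail.pressAction?.id === 'bookmark') {
//       // data values are always strings, even if they look numeric.
//       const showId = detail.notification?.data?.showId as string;
//       setBookmarks((prev) => addBookmark(prev, showId)); // hypothetical setter
//     }
//   });
// }, []);

// Updating the bookmarks state is pure list logic:
function addBookmark(bookmarks: string[], id: string): string[] {
  return bookmarks.includes(id) ? bookmarks : [...bookmarks, id];
}
```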
so it isn't a number anymore when you pass it. Okay, so if we try that, we should see Grey's Anatomy bookmarked. Awesome, so that's Android. Next we have iOS, which, if I just bring it in quickly: on iOS it works a little bit differently, but it's also very similar to Android. It takes a categoryId, so rather than specifying the quick actions inside the notification payload, you have to create the category beforehand, and it can be a one-time thing; you create all the categories at once, together. I tend to do this in a useEffect, just when the app first mounts, but it's totally up to you; we could also do it when we click Display Notification, it really doesn't matter that much. So if I just grab the native function and call it: notifee.setNotificationCategories. This is an iOS-only function, but on Android it will just be ignored, so you don't have to worry about wrapping it in a condition to check the platform. A category is basically the same as the Android actions, except without the pressAction object: you just need an id, so id default and title Watch Now. So if we go into the actions: new-episode is the one category we've got, and we basically want it to be the same Watch Now as on Android, though on iOS you can also do it differently. And then the next action for this category is bookmark, which is probably the most important one. Okay, and then our foreground event will be called, and we've got our requestPermission. Okay, let's give this a whirl: Display Notification, we drag down, see the actions, click Save Later, and fingers crossed: yeah, it's updated to be bookmarked, yay, foreground true. So this is actually important: these little differences between Android and iOS are documented on our site, so if you do trip up, just go to the docs and you'll see it, or ask for help on GitHub. The easiest way to
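The iOS category setup can be sketched as follows. The commented calls follow Notifee's documented API (`setNotificationCategories`, `ios.categoryId`); the category id and titles mirror the demo, but the exact strings are assumptions.

```typescript
// Hedged sketch of iOS categories: created once up front (e.g. in a
// useEffect on mount), then referenced from the payload by categoryId.
// The call is iOS-only and is simply ignored on Android, so no platform
// check is required.
//
// import notifee from '@notifee/react-native';
//
// await notifee.setNotificationCategories([category]);
//
// await notifee.displayNotification({
//   title: 'New episode',
//   ios: { categoryId: 'new-episode' },
// });

// Unlike Android actions, category actions have no pressAction object,
// just an id and a title:
const category = {
  id: 'new-episode',
  actions: [
    { id: 'default', title: 'Watch Now' },   // opens the app
    { id: 'bookmark', title: 'Save Later' }, // handled by our event handlers
  ],
};
```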
get in touch with us, the people who maintain Notifee, is to quickly open a GitHub issue, or to look through and see if anyone else has had the same issue; probably they have. But yeah, we're more than happy to answer any questions on there. So let me try this again: Display Notification, put the app in the background, press Watch Now, yeah. So, don't forget the foreground trick. Now we'll go on to scheduling. We've seen quick actions on Android and iOS and how they can help the user engage; next we move on to scheduling. This is really useful for things like calendar apps and alarm notifications, anything, really, where you want to plan ahead and alert the user ahead of time. These are really great, and you'd normally ask the user beforehand if they want these notifications. So, for example, our example app is about TV shows, so if I just get right into it: if you click on Grey's Anatomy or Ozark, which the payload is already designed for, you can ask, okay, would you like a reminder when this show is back, say 10 minutes before, or one day before? Do you want it to be repeated every Monday at 8pm? For this demo I'll just say: ten minutes before Ozark airs, please let me know. Set Reminder. At the moment this button doesn't do anything, so let's go ahead and give it something to do. Okay, so in here we want to start creating our trigger notification, so we'll import notifee here: notifee.createTriggerNotification. What createTriggerNotification wants is the payload, what we want the notification to look like, and the trigger: when do we want it to be scheduled in the future? I already have this set up in my notifications file.
This will also be available online, so you can go back to it. It's exactly the same as the notification I showed above with the new episode, but with a different show, a different image, and slightly different quick actions: we've got dismiss and default, and a different categoryId on iOS, which will have the same actions. The other thing is, I do have this data here, which is slightly jumping ahead of the demo. So, the trigger type: there are multiple trigger types, and for this one we're just going to do timestamp. Interval is more for when you want, like, an alarm, and you want to notify the user every five minutes after they press snooze, or you want a countdown, like, okay, trigger in 60 seconds; that's what interval is great for. Timestamp is great for an actual date and time. So we'll do timestamp, and for the demo we'll just say we want it to trigger in three seconds, rather than Monday at 8pm, which might be right now as you're watching this, but for me it's not. So I'm just going to go ahead and create a date that's three seconds into the future, and we add that date as the timestamp. Okay, I also need to import the payload, and we should be good to go; I think that's literally all you need. Oh, and we're doing Android first, so we don't need to create the category at the moment. And that's all we're going to be doing: dismissing it. Default, or dismiss: the dismiss action is just like, okay, I appreciate the reminder, but I don't want to look in the app right now; I'm just going to get ready for my TV show. So when they click dismiss, we can actually cancel the notification for them, so let's update our event handler and cancel it for them; otherwise they'd have to swipe it away themselves. This is going to be a ten-minute reminder ahead of the episode, so they could browse and get the app ready if
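The scheduling step can be sketched as follows. The commented call follows Notifee's documented trigger API (`createTriggerNotification`, `TriggerType.TIMESTAMP`); the payload copy is a placeholder. Computing the future timestamp is plain date math, shown runnable below.

```typescript
// Hedged sketch of scheduling with a timestamp trigger: compute a future
// Date and pass its epoch milliseconds as the timestamp.
//
// import notifee, { TriggerType } from '@notifee/react-native';
//
// await notifee.createTriggerNotification(
//   // 1) the payload: what the notification should look like
//   { title: 'Ozark starts in 10 minutes', android: { channelId: 'general' } },
//   // 2) the trigger: when it should fire (three seconds from now, for the demo)
//   { type: TriggerType.TIMESTAMP, timestamp: inSeconds(3).getTime() },
// );

// "N seconds from now" as a pure function (the `from` parameter makes it
// deterministic for testing):
function inSeconds(seconds: number, from: Date = new Date()): Date {
  return new Date(from.getTime() + seconds * 1000);
}
```

An interval trigger (`TriggerType.INTERVAL`) would replace the timestamp object for the snooze/countdown use cases the talk mentions.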
they're eager to watch the show. Press the reminder, close the modal, and you should get a notification. Yeah, here you go. Oh, sorry about the beginning: 10 minutes, are you ready? Grab the popcorn. Okay, great, and there's an option to Dismiss or See More. If, for demo purposes, we put the app in the background... you see that? Nothing. That's not clear, I'm not sure. Did I do something wrong? Oh, I know why: it's because I put the app in the background, but this logic is in the foreground event; this is what I was saying about the foreground event. I can quickly put this logic into the background event as well, and you could also just check: okay, is it default, or dismiss? Basically, any id that you want to cancel on, you can just put in there. I think, every time you update the background handler, you have to make sure your app reloads, and we'll do that now. We put the app in the background and hit Dismiss: yeah, it cancels, okay, great. So that concludes a quick, brief look at scheduling, but it really opens up a lot of stuff you can do without a database or remote notifications, just within the device itself, so that's pretty cool. And I haven't even covered half the stuff you can do; these are just the features that I think are most useful and probably going to be used the most across different types of apps: images, videos, scheduling. There's also loads of other stuff you can do, like timers. The code for this demo is live on GitHub, so go ahead and try it out, and if anyone has any questions, feel free to reach out to me. Notifee is free for development, and for production a license is required on Android, but Invertase has been kind enough to offer 100 free licenses using code RNEU. I'm Helena Ford, thank you so much; I hope you enjoyed the brief intro to local notifications in React Native. Thank you, Helena, for showing us the ins and outs of this
notification library. I'm going to check it out right after our conference today, which is now, because our conference is ending; that was our last speaker. Please join our Discord channel to discuss all of today's talks and interact with other React Native freaks. Thank you for staying with us for the whole day, we really appreciate it. Thank you to all the speakers for the great talks, and I hope we'll see each other tomorrow at the same time; we have a great lineup of talks prepared for you. See you tomorrow then. This conference is brought to you by Callstack, React and React Native development experts.
Info
Channel: Callstack Engineers
Views: 4,057
Keywords: reactnative, reactjs, javascript, callstack, conference, reactnativeeu
Id: xKOkILSLs0Q
Length: 472min 7sec (28327 seconds)
Published: Mon Sep 20 2021