Instance Data Replication: Managing data across multiple ServiceNow instances

Captions
All right, welcome everybody, and thank you very much for joining us for Intro to Instance Data Replication — hopefully you'll find this session informative and interesting. You've probably seen this slide plenty of times over the past couple of days: the safe harbor statement — any forward-looking statements we make may or may not occur, et cetera.

Here's the agenda for today: we're going to introduce Instance Data Replication (IDR), which is a new feature; talk about what makes it different from some other options; go over some use cases; do a demo; and then take Q&A.

I'll introduce myself: my name is Andrew Streamer, and I'm a product manager on the core platform team here at ServiceNow. I'm joined by — "I'm Maxie, a senior software developer in platform." — "And I'm Jennifer Lee, senior manager of platform persistence."

I'll start with a little context on why we created Instance Data Replication. There are ways to sync data between instances today — manual processes and other options — but they can be cumbersome, especially if you're dealing with a large data set. You have to build your web-service integrations, set up your polling, work out conflict resolution, and monitor everything, and it takes real effort to keep all of that running on an ongoing basis. That's why we created Instance Data Replication: it removes that manual overhead and provides an integrated, instance-to-instance replication option in the platform. It's a no-code experience set up entirely in admin configuration, it doesn't have much impact on performance, and the data never leaves ServiceNow data centers, so it's also very secure.

A bit more about how this works. Instance Data Replication can be set up between any source and target table on different instances. It can do one-to-many replication, and you can also do bi-directional replication. Once you set up a replication relationship, updates on your publisher — your source table — automatically propagate to all of your target tables on other instances, and it handles scalability, reliability, and conflicts automatically behind the scenes.

So how is this different from doing it manually? In addition to being something you configure entirely in the instance — and we'll talk about this in more detail in a few minutes — Instance Data Replication has its own dedicated message bus that we've installed in all the data centers, so it's a more performant way to replicate data, and it can preserve order. There's performance monitoring you can see in the instance: you can tell when your last replication event occurred and treat it as a heartbeat monitor, so you really understand what's going on. This is all built into the infrastructure, so you don't have to worry about any of it — you just set up the replication relationships you want, and it runs continuously in the background, propagating updates as people make changes to tables. It also gives you flexibility on both the producer side and the consumer side.
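To make the manual overhead mentioned above concrete, here is roughly what one piece of a hand-rolled sync looks like today: an after-update business rule pushing each change to a second instance over REST. This is a hypothetical sketch — the endpoint, credentials, and field list are illustrative, not part of IDR — and it's this script, plus all the retry, conflict-resolution, and monitoring logic around it, that IDR lets you skip.

```javascript
// Hypothetical hand-rolled sync that IDR replaces: an after-update
// business rule pushing a changed sys_user record to a second instance.
(function executeRule(current, previous /*null when async*/) {
    var rm = new sn_ws.RESTMessageV2();
    // Illustrative target instance and credentials -- you own credential
    // rotation, retries, conflict resolution, and monitoring yourself.
    rm.setEndpoint('https://target.service-now.com/api/now/table/sys_user/' +
        current.getUniqueValue());
    rm.setHttpMethod('put');
    rm.setBasicAuth('integration.user', 'secret');
    rm.setRequestHeader('Content-Type', 'application/json');
    rm.setRequestBody(JSON.stringify({
        user_name: current.getValue('user_name'),
        email:     current.getValue('email')
    }));
    var response = rm.execute();
    if (response.getStatusCode() != 200)
        gs.error('User sync failed: ' + response.getBody());
})(current, previous);
```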
When you publish from a specific instance and want to consume that data on one or more instances, you have a few different options. First, when you create a source table to publish, you can publish only selected records — you don't have to publish the whole table — so if you really want to publish just a subset of the data, you can. You can publish the whole record, or select only specific columns, if there are only certain pieces of data you actually want to sync. It also supports attachments: you can replicate them if you wish, or omit them if you don't need them on your consumer instances.

On the consumer side you have a few additional options, and these apply to each individual consumer of the published data. First, similar to the message bus itself, a secure connection is established between the instances — that's all done in the background; you don't do anything except set up the relationship. It also supports data transformation: as data comes into your consumer instance you can run transforms on it, and it can be replicated either to the same table as on the other instance or to a different table, so you have a lot of flexibility there. You also have the option to seed a full data set — if you say, "I want my entire active sys_user table replicated onto another instance," you can do that — or you can skip that and receive only new updates as they come in. And as I mentioned before, once you set up the relationship, any changes, updates, and new records on your producer table are replicated to all your consumers automatically. The last point I want to call out: like data transformation, which can be done differently on each consumer instance, you can also choose to fire business rules once the data is replicated to the consumer instance. That lets you run a process that originates on a producer instance and continues on a consumer instance.

Now we're going to dive into more detail on how this is all set up and how it works — I'll let Jennifer take over. — I'm from the engineering side, so I'm going to share more of the details and caveats of what we've built for IDR. Say you have a producer instance and a consumer instance. As a company gets bigger and bigger, you often start to split into multiple instances, and depending on your business needs, some sets of users may not have access to the other instance. Behind the scenes we have a producer instance and a consumer instance, and the data flow is pretty straightforward: you configure which table to replicate and define a filter for the kinds of records you want. For example, if you only want to replicate a subset of the users, or a subset of the company or department records, you can choose to do so. And as Andrew just mentioned, even within the user records you may not want to share every single field, so you can also select which columns to replicate.
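As a rough illustration — not the IDR configuration UI itself — a producer filter like "active users in the HR department" selects the same records a standard encoded query would, and any columns left out of the entry simply never get published. The department name here is a placeholder.

```javascript
// Illustrative only: the record set a replication filter such as
// "active users in the HR department" would publish.
var gr = new GlideRecord('sys_user');
gr.addEncodedQuery('active=true^department.name=HR'); // placeholder filter
gr.query();
gs.info(gr.getRowCount() + ' records match the replication filter');
```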
Once you've selected the table and the records go through the filter criteria, here's what we do underneath. Whenever a change happens, in order not to impact the user who is actually modifying the data, we don't make a synchronous call to Kafka. Instead, we immediately save that change to a local MySQL table — a local buffer, the outbound replication queue — which captures any changes to the records you want to replicate. Then a repeatable job pushes those buffered changes to the Kafka queue, in order. Once the data is in the Kafka queue, there can be one or more consumer instances, depending on how you set it up. You can have a single consumer instance, or several, the way ServiceNow does internally: we have Hi, we have App Store instances, we have a data-center instance, and we actually use Hi as the source of truth for user profiles — the other instances effectively clone the user-specific information from it.

Once the data reaches the consumer side, the consumer can decide whether or not to transform it. The transforms are very similar to what you're familiar with from transform maps for import/export, except the supported set is much more reduced. The reason we did that for this release — and this is the first release of IDR — is that in the past we've seen customers use transform maps without realizing that some transformations can be very expensive, especially once scripted transforms are allowed: each record loaded into the consumer instance can take a while to finish, or it may block something. So for now we deliberately limit the set so we can guarantee performance, and we'll slowly add more transformation capability. With the transform, you also don't have to land in exactly the same table: your source table might be sys_user while your target is, say, a customer table, and you transform into the target table as we load the data. So that's the simple data flow from producer to consumer, and as we said, you can have multiple consumers if you want.

How does the data get all the way through? Behind the scenes there are jobs: for this release, two jobs on the producer side and at least one job on the consumer side. The producer side wakes up periodically, every 15 seconds, to see if there are any currently captured changes; if there are, it pushes them to Kafka. Each replication set — that's what we call a set of tables you're interested in replicating — determines its topic in Kafka. For example: I'm interested in user profiles, so I replicate sys_user, maybe only users in the HR department; and then user groups and group memberships — related records associated with the user — can be specified in the same set. Periodically the job wakes up and pushes the data, per topic. On the consumer side, every 15 seconds a job wakes up and checks Kafka to see whether the topic it's interested in has any new messages; if there is a new message, it subscribes, takes the message down, applies the transformation, and loads the data. That's pretty much the fundamental flow.
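Here is a conceptual, self-contained sketch of the flow just described — not the actual IDR implementation or API, and every name in it is illustrative. It shows the three ideas in the explanation above: producer changes are buffered locally so the user's transaction never waits on Kafka, a periodic job drains the buffer to the topic in order, and each consumer polls on its own cycle, optionally transforming into a different target table.

```javascript
// Conceptual sketch only -- stand-ins, not IDR internals.
var topic = [];          // stands in for the replication set's Kafka topic
var outboundQueue = [];  // stands in for the local MySQL buffer table

function onRecordChange(table, sysId, delta) {
    // Synchronous and cheap: append the delta in order; nothing else
    // happens inside the modifying user's transaction.
    outboundQueue.push({ table: table, sysId: sysId, delta: delta });
}

function producerJob() {
    // Wakes periodically (every ~15 seconds by default) and publishes
    // buffered deltas to the topic, preserving order.
    while (outboundQueue.length) topic.push(outboundQueue.shift());
}

function consumerJob(transform, load) {
    // Each consumer polls the topic on its own ~15-second cycle, applies
    // its (optional) transform, and loads into its local target table.
    topic.forEach(function (msg) { load(transform(msg)); });
}

// Simulated usage: sys_user on the producer lands in a customer table here.
onRecordChange('sys_user', 'abc123', { first_name: 'Nancy' });
producerJob();
consumerJob(
    function (msg) { return { table: 'u_customer', sysId: msg.sysId, delta: msg.delta }; },
    function (row) { console.log('load:', JSON.stringify(row)); }
);
```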
One more thing on what's different. The goal of IDR is really to replace what you have to do manually today — setting up REST and SOAP integrations, where you have to write the scripts to build them yourself. This is a no-code configuration that shuttles a record change from one side to the other. The other thing that's hard when people build such integrations themselves is change capture: on the producer side, whenever a user updates certain records, you want those records pushed to your consumers as they change. Today you'd have to build a business rule or some other mechanism, but IDR listens directly behind the scenes: as soon as you've configured it, we listen for any changes to the records you care about, and every change gets recorded as a delta in the buffer. So that's another part you don't need to script manually.

That said, because these are jobs that wake up every 15 seconds, it's not strictly real time — there's a certain delay. A job may wake up, run, and then yield, and it takes a little time before the next pickup. So if you make a change on producer A and expect it on your consumer instance, and it hasn't shown up in 30 seconds or a minute, the jobs are still picking it up. In general, though, this kind of synchronization is much faster than the current way of doing it, where people run a scheduled job once a day to push massive numbers of records to keep two instances in sync.

Once you set up IDR replication, there are two phases. The first we call seeding — it's really a bootstrap phase that moves all the configured records from producer to consumer. This phase is optional: you can skip it and go straight to replicating changes from the producer to the consumers one after another, if you don't want an initial copy. Another thing about seeding versus replication: seeding is also useful later, when your producer and consumer tables have started to drift — you may want to completely copy the producer table to the consumer table one more time so the two are back in sync. And seeding — the bulk copy of records — is consumer-specific. If you set up multiple consumers and one producer, a seeding request always comes from the consumer side, and we don't broadcast it, because you don't want one consumer saying "copy everything to me" and all the other consumers getting overwritten by the same data from the producer. So the bootstrap copy can only be requested by each individual consumer, and only that consumer receives the initial data set.
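A conceptual sketch of the two phases just described — illustrative names, not the IDR API:

```javascript
// Seeding is optional, and a seed goes only to the consumer that
// requested it -- it is never broadcast to the other consumers.
function startConsumer(consumerId, seedFirst) {
    var plan = [];
    if (seedFirst) {
        // Phase 1 (optional bootstrap): full copy of the current data
        // set, requested by -- and delivered to -- this consumer only.
        plan.push('seed -> ' + consumerId);
    }
    // Phase 2: apply every incoming delta, in order, from now on.
    plan.push('replicate deltas -> ' + consumerId);
    return plan;
}

console.log(startConsumer('it-dept', true));
// [ 'seed -> it-dept', 'replicate deltas -> it-dept' ]
```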
Finally, there's one more limitation on seeding. Seeding is really a convenience tool to get people started; we don't intend it to be the way you copy whole instances. A lot of times your instance has millions of records, and it would take days to copy them one at a time — there are better mechanisms than shipping every single table as messages through Kafka and out the other side. We have cloning, and internally we have automation that can move large amounts of data from one data center to another or bootstrap another instance using database dumps, which is much more efficient than copying single messages at a time. That's why we limit seeding to three million records — so people don't use it as a database dump; we have a separate tool for that.

One thing I should mention here: every message gets encrypted. A producer and its consumers form one set sharing the same topic — they're the publishers and subscribers on that topic — and each topic has its own encryption key. When you set it up, all the messages are encrypted, and when a consumer receives messages from that topic it has to decrypt them using that shared secret, so you can't browse another set's instances or another set's topic.

Once your seeding is done, the replication phase starts: any changes that happen on the producer side get published into the Kafka queue, and the consumer receives the messages and applies them to its local table. The one caveat: if a consumer requests seeding — "I need to copy the whole record set over to my table again" — we pause replication for it, so real-time changes are not applied to that consumer during seeding. Otherwise you could hit a race condition where the latest change is applied first and then the copied records land on top of it, and that copy may not be up to date; during a seeding copy we gather the records in a local buffer — actually a disk buffer — and then ship all the records over.

One more thing about this first IDR release: the coalesce key is limited to the sys_id. That's probably pretty limiting for a lot of use cases, but for now we're keeping it simple and performant, and later on we'll start to allow other coalesce keys. We do realize it's a little annoying if you're replicating users and the same user IDs already exist on the consumer instance: when you replicate those users over, the sys_ids don't match, and you end up creating another user record. So that's one of the caveats behind the scenes, but in general that's how it's implemented.
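A hedged sketch of what "coalescing on sys_id" means on the consumer side — illustrative only; IDR performs this matching internally, and setNewGuidValue here simply stands in for inserting under the producer's sys_id:

```javascript
// Illustrative consumer-side apply: match on sys_id, update if found,
// otherwise insert -- which is exactly why a user that already exists
// under a *different* sys_id ends up duplicated.
function applyReplicatedRecord(table, sysId, fields) {
    var gr = new GlideRecord(table);
    if (gr.get(sysId)) {
        for (var f in fields) gr.setValue(f, fields[f]); // overwrite replicated fields
        gr.update();
    } else {
        gr.initialize();
        gr.setNewGuidValue(sysId); // keep the producer's sys_id
        for (var g in fields) gr.setValue(g, fields[g]);
        gr.insert();
    }
}
```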
Now, the typical use cases for IDR. The first, as I mentioned, is what we do internally: master-slave replication, where you have a source of truth. For example, the primary company records are owned by one instance, the user records are owned by another, and maybe you have yet another instance that tracks all your assets — each instance owns a certain type of data, which gets propagated to the other instances so they can refer to it. In our own ServiceNow case, Hi is our customer-facing instance and holds all the user profiles, so we replicate the user profiles and departments from there; and we have another instance that handles the data centers, all the instances, and all the nodes, and we use it as a producer to publish all the instance information to Hi, so that when we open support tickets we can refer to those records.

The second use case is bi-directional: you can also choose master-master replication, which has its own caveats — the same ones you'd hit building a bi-directional REST integration. If you used REST services to synchronize incident tables from instance A to B, and then wanted any change on instance B replicated back to A, you would hit similar constraints to what IDR presents, but IDR makes it a lot easier to configure. The first thing you have to be careful about with a bi-directional copy is infinite loops: a change happens on instance B, it's sent to A, a business rule on A makes another change, that message is sent back to B, and the messages start circling. We have protection against that built in, so unlike building your own REST API, you don't have to guard against it yourself. What you do have to think about is which fields each side may write to. For example, if you're replicating incidents between two instances, the instances should share the same state values, so that when one side says state 3 — work in progress — the replicated record on the other instance means the same thing; or you might replicate different fields on each side so they don't conflict. That's about it for the use cases. Internally we also use bi-directional replication between Hi and our development instance, where we replicate stories: anyone who updates a story on one instance sees it replicated to the other side — when a developer updates a story with "here's when I finished the build," and when the tester finishes testing, each side sees it happen on the other.
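Here is a conceptual, self-contained sketch of the kind of loop protection described above. IDR has this built in; the origin-tagging scheme and names below illustrate the idea, not IDR's internals.

```javascript
// Each replicated message carries the instance where the change
// originated; an instance republishes only its own local edits, so a
// change that arrived via replication never echoes back out.
var MY_INSTANCE = 'security-dept';

function shouldRepublish(message) {
    return message.origin === MY_INSTANCE;
}

console.log(shouldRepublish({ origin: 'security-dept' })); // true: local edit, publish it
console.log(shouldRepublish({ origin: 'it-dept' }));       // false: replicated in, suppress
```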
Now the general limitations. Again, this is the first release — we made it fast and simple, but we did introduce limitations, and we're working through which ones are important to remove as use cases come along. The first, as mentioned with seeding, is that the initial copy cannot exceed three million records. The second is that data moves on a preset schedule — the same point as with the jobs: it's near real time, not truly real time, because there are delays between the producer job picking changes up and the consumer job picking them up. Third, you cannot replicate metadata. By metadata we mean sys_metadata — any table that is a child of sys_metadata cannot be replicated. The reason we introduced that limitation is that metadata represents system behavior: those are configuration records, and changing their values changes how the system behaves. We don't want a behavior change on the producer to suddenly show up on your consumers and affect their health or reliability — and especially when you add a new dictionary column, you don't want that column propagating into consumer instances. There may be use cases for that, but right now it can get pretty complicated. The last limitation: because this runs on our internal infrastructure — the Kafka message queue we've already built within ServiceNow — the current support is ServiceNow to ServiceNow only. Kafka itself is not exposed to any external services, for security reasons as well, so right now IDR is strictly data replication between ServiceNow instances.

So here's the summary of how it simplifies things: you build up the integration from producer setup to consumer setup, choose the relevant tables and records per your use case, and start the replication — none of it requires code. We're going to do a quick demo of exactly that.

Our first demo scenario: this company, Acme Corp, has two instances. The security department owns one instance — they don't want other users hopping onto it — and they maintain the security information, the user profiles, and so on. The IT department has the other instance and serves all internal employees with IT services, employee portals, and so on. In this case the security department owns the user profiles and the IT department is the consumer of that information, so Maxie is going to walk through setting up user-profile replication from the security department to the IT department.

Okay, so I have two instances here: my security department instance and my IT department instance. Let's say we just stood up the IT department instance — if I go to my user table, I don't have any users, just admin. All the users currently live in the security department instance, and I need them too, because if a person calls in and says "my laptop isn't working," I need to be able to look up who called and pick that person. To do that, on my security department instance I go into the application navigator, and if I type in "replication" I find the different modules. The one we need first is the producer — from the security department we want to create a producer — so I click on that, which gives us the list view with a New button at the top. I click it and give it a name: this is going to be my "central user registry" — just a simple name for the use case, since I'm going to transfer all the user data. I create it, and you can see the status is Draft — I haven't picked any tables yet, so there are no entries. If I scroll down you'll see the replication entries: these are the tables I'd like to pick for this specific set.
I click New, and this brings up all the tables that are eligible to be replicated. I'm interested in sys_user, so I pick that, and you see there are filters — I could say I'm only interested in certain usernames, or in users in a specific department — but for simplicity I'll take everything. You can also exclude specific fields if you don't want to replicate them, but again, for simplicity, we'll take everything. I click Submit, and if I scroll down I can see sys_user as part of my entries. But I'm not only interested in sys_user — I also want to know what groups the users are part of, so I'll add another table. The table that keeps track of all the groups is sys_user_group, so I pick that, again with no filter, and click Submit. And I also want the relationship between a user and a group, so I pick the relationship table (sys_user_grmember) as well. Now I have all the pieces I want: sys_user, sys_user_group, and the relationship between users and their groups.

Now that I'm comfortable I have everything, I click Activate. This means anyone who has the information about this set can join it — I'll show you how they find that information. After activating, the status is Active Replication, and if you scroll down there are a bunch of related links, one of which is the replication setup instructions. If I bring that up, I see the information any other instance would need to subscribe to this particular producer.

Now over to the IT department: they want to create a consumer. They search for "consumer" and find Consumer Replication Sets — again, the list view you're all familiar with. I click New, and you see essentially the same fields: the name, the producer instance ID, and the producer set sys_id. I have to copy and paste those: the name of the replication set, the instance ID of the producer I'm interested in, and the sys_id of the set itself. I copy those over and click Submit.

So that's the record I just created. If I open it up — let me close this first — it says that a subscription was sent back to the producer on behalf of this instance. By default, as a producer you have the option to either approve or deny each consumer, so at this point I'm not consuming anything yet — nobody has approved me. Back on the security department instance, I do a refresh, and if I scroll down in the related lists I have Consumer Subscriptions. This shows me the list of all the consumers interested in getting the data for this specific set, and right now it says Approval Pending, because as the admin I haven't done anything yet. So I open the record and check: "Demo IDR1 — yes, I recognize that as my IT department, and the consumer sys_id looks right." So I'm going to approve it.
By approving, I'm allowing them to read all the deltas and changes for these tables. If I didn't recognize a requester, I wouldn't approve it, and they would never get any deltas. So I approve, and in my consumer subscriptions I can now see one approved consumer. On the consumer side, the approval status now shows Approved, but the status is still Draft: if you scroll down, I still don't have the tables — I haven't received the table definitions yet, because I only just got approved. To get that information, I click Synchronize, which sends another message back to the producer saying "give me all the tables," and when I click OK, I get all the tables that are part of this set.

Remember, this IT department instance was just set up — it has no users, nothing — and I want all the existing information, so my use case requires seeding. I'm going to seed everything, so I activate seeding. This sends a message to the producer saying "bootstrap me with all the records you have in these three tables," and this is the only part that takes a little while. It sends the request, and soon we should see it on the other side. If you look at the producer, I have the seeding request, and it shows the percentage complete, when it was created, the start time, and so on — and it should be done by now... there you go, it's completed. The producer has sent all the information to Kafka, and if we refresh on the IT department side, we see my seeding request is completed too, meaning I should have received all the sys_user information, all the groups, and all the memberships.

Let's take a look. I open up the users — remember, we only had admin — and now I have all the information I need. Let's open, for instance, the security admin: if we scroll down, we already have all of its group information as well, so I know it's part of the security admin group and so on. That was the seeding part. Now I'm ready for any changes that happen on the security instance, since it has all the sys_user information — this is the live part. Say we have a new hire: as the security admin, I go to the user table and create a new user. Let's say I create K19 — okay, I have to refresh, it's not letting me type — okay, so I have a new K19 user and I submit it; very basic. What I expect now is that, having created this user on my security department instance, it will also appear on the IT department instance. Let's check — and there it is, it already appears in the IT department. How about changes? Say I made a mistake and have to come back and say, no, the user's first name is actually this; I update it, and again I expect this record to be updated — and you see that it got updated. Even if I delete it, I expect it to replicate — any CRUD operation does — so let me delete this record as well.
And again, as Jennifer explained, this is not real time — it depends on the amount of data you're replicating — but you see that it got deleted: we get "record not found." So that was the first part: we created a one-way replication from my security department, sending all the user information, the groups, and the group memberships over to my IT department. — As we said, it looks like real time because we reduced the job cycle for the demo, so it cycles very fast; in reality it depends on how busy your instance is. The default is 15 seconds, so like most integrations you shouldn't expect it to happen instantly — figure roughly a 30-second round trip with no traffic, but it really depends on the load on your instance.

For the second scenario, we're going to demo master-master replication. As before, the security department owns the user profiles, and the IT department serves all the end users through the employee portals. This morning Nancy comes along... but first let me explain how we set it up. For this part of the setup, the incident table is shared between the two instances: if the security admins are modifying or handling certain incidents, those incidents show up on their instance, and the IT department looks at the employee incidents from their instance. Behind the scenes, the security department replicates incidents to the IT department instance, and the IT department also replicates any incident changes back to the security department. Maxie is going to show you — we can look at the configuration real quick before we come back to the scenario.

So, from my security department instance, I check whether I have a producer set for the incident table. Clicking in, there's the one we created together, but I also have another one, "security incident sync," and if I scroll down to the consumer subscriptions — which you're already familiar with — I see that the consumer Demo IDR1 is currently listening to me. Because this is a two-way replication, I also expect the IT instance to have a producer set of its own, much the same. So I go into that one, and I have an "IT incident" set that replicates the incident table, and if I look at its consumer subscriptions, Demo IDR2 is the consumer. So we have two-way replication on the incident table.

Now, this morning Nancy comes to work, tries to use the portal, and can't log in. So she calls the IT department: "Hey, my employee portal login doesn't work." Here's how IT handles that. As IT — this instance — I got an incident from Nancy, so first I have to create an incident. I go to my incident table, click New, and record that Nancy called — remember, from the previous set we already imported all the user information, so I already have Nancy and can pick her. I'll say "Nancy complains portal is not working," submit it, and it goes into my queue for when I have time to look at it. Now I open it — I'm the admin again — and I look into the portal: the home portal looks to be fine, and I can log into it.
So there isn't any issue with the portal itself. I dig more and suspect it may be an issue with Nancy's password — and that's not something I can fix as the IT department; I have to pass it to the security department. So I come in here and add a note: "Please check Nancy's username and password," and I also assign it to the security team. Who's on my security team? The security admin user — so I assign it to my security admin and save, and they'll take a look at it.

Now, as the security admin on my own instance, I come in in the morning, look at my incidents, and see a newly created incident. I open it up to see what's happening and look at the notes: they're complaining about Nancy's username and password. As security, I have access to all the user information, so I go to the users, find Nancy — and it looks like Nancy is locked out. So I unlock her, save, go back to the incident, and note what I did: "Unlocked Nancy." Then I resolve it as the security admin. Back at the IT department, Nancy calls again asking for an update. If I refresh, I see it actually got resolved, and if I scroll down — because I'm curious to see what happened — I see it was updated by the security admin, and notice this "IDR" marker: it means the update was replicated; it didn't happen on my instance but came in via instance data replication. It shows that Nancy was locked out, and in the activity stream I can see the security admin resolved this incident. I think that concludes the whole demo — a quick look at how to set up master-slave and master-master replication with IDR.

Thanks, Maxie. As you can see from these scenarios, you can be really creative with different processes — you have a lot of flexibility when setting these things up, and it's very quick and simple to do: we did two different setups in the course of about 20 minutes. I do want to point out a couple of key things. This is a new generally available product in the New York release; however, there are some caveats, because this is our very first release. Replication today can only occur between instances in the same region — North America to North America, EMEA to EMEA, but not North America to EMEA or vice versa. Additionally, in this first release, replication only occurs between your own instances, per customer: two production instances, or a production and a sub-prod, are all fine, but not between a production instance that belongs to you and one that belongs to another customer. And we do not replicate encrypted data — that includes Edge Encryption and column-level-encryption data — in the New York release. I want to call those things out so you know ahead of time.

As for the roadmap, let me talk through a couple of things. Obviously it's a brand-new product, and we're very excited for people to take it up in New York; I hope to have conversations to learn more about the use cases and how people put it to work.
Some of the things we're going to do: we'll make it much easier to deploy a replication set to multiple instances quickly. We showed you how easy it is, but you don't want to repeat that process 20 times if you have 20 instances, so we'll make it simple to propagate an identical replication set across many instances. We're also going to address the two major caveats I mentioned before: replicating between different regions — that will be one of our top priorities — and, again, replicating between different customers. So we'll keep expanding the flexibility you have with this product. Okay, we have just a couple of minutes for questions — do we have a mic runner? I'll be the mic runner. Go ahead, and we'll repeat the questions.

Question: does it matter if the instances are on different versions? — It doesn't matter; we're backward compatible. If one instance is upgraded to Orlando and one stays on New York, the replication still works.

Question: can you change the schedule? — You can change it. Those are scheduled jobs behind the scenes, so you can run them every 15 seconds, the default, or once at midnight.

Question: is authentication required when setting up these replications, so one instance has access to the other? — In New York, what happens behind the scenes is that there are certificates between the instances. When you request a subscription, the message is encrypted using those certificates, and the consumer side validates who the requester is. So when you approve, all you need to do is recognize and trust the instance name.

Question: for audited tables, are the audit history entries replicated when syncing data? — There are options to include histories. For most task tables, the activity stream is stored in two places — the comments and work notes, and the history sets in the sys_audit table — so there is an option to replicate your initial set including all the histories.

Question: when picking the instances, do we need to copy-paste those sys_ids, or is there a way to have a registered set of instances? — That would be nice, but right now it's very primitive, so you have to copy them. We've definitely heard that feedback, and it's absolutely something we're going to look at.

Question: have you ever done this with CMDB CIs? — We do, but it really depends on how big your CMDB is. In our own internal production we haven't replicated our CMDB CIs; ours is only around five hundred thousand records, which would still be okay, but once you get to several million, bootstrapping them is a little tougher. Another caveat: when you configure the set, you configure a concrete table. You can't just say "I want to replicate task" and have incident, problem, and change all included — you have to say "I want to replicate problem, and here are the fields I want."
Otherwise, saying "task" could pull in several thousand different fields, so right now the limitation is that you specify the concrete table to replicate.

Question: you use Kafka internally — is there a roadmap for supporting Kafka as a standard integration method, like REST or SOAP? — So far it's not on the roadmap, but we may consider it when the use case comes along.

Question: is the configuration domain aware? — The configuration itself is not domain aware; it replicates the whole set of tables. If you care about certain domains, you have to specify your own conditions. Also, the domains on two instances can be completely different: if you're replicating an incident that's in domain A and the target only has domains X, Y, and Z, you have to synchronize your domain information first, and then replicate the records including the domain field, to keep each record in the same domain on the target.

Follow-up: the use case I've got is that, as an MSP, we have customers whose users are synchronized up into production — can we use this to synchronize those users down to test and dev? But we don't want to set it up table-wide: we only want the users from one domain, and we don't want anyone from another domain seeing them in the configuration. — For that case, the configuration itself is global, but you can set it up to replicate only a certain domain's users to your sub-prod, with a "domain ID equals" or "starts with" condition, and then have the same domain set up on your sub-prod so that other domains' users cannot access those records there.
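As a rough illustration of that answer — not an IDR-specific API — the producer entry's condition would select records the way a query on the domain field does; the domain sys_id below is a placeholder:

```javascript
// Placeholder domain sys_id: only this domain's users would be published
// to the sub-prod consumer.
var gr = new GlideRecord('sys_user');
gr.addQuery('sys_domain', 'STARTSWITH', 'acme_customer_domain_sys_id');
gr.query();
gs.info(gr.getRowCount() + ' users fall under the replicated domain');
```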
Question: how do I know whether the instances have in-sync data — do I get reports? — Good question. We're going to work on a dashboard so you can get real-time feedback. Right now what you can see is the replication delay: there are stats behind the scenes that our internal ops teams track to know how far replication is behind, so the information exists, but right now it's a little harder to get at. We will be making that better.

Question: the sys_user table has a password column, which I believe is encrypted — how is that handled when it replicates from instance A to instance B? — Right now, as we mentioned, it's a limitation: we don't replicate passwords. Each instance has its own internal keys that encrypt those passwords, and those keys are not shared, so encrypted data can only be encrypted and decrypted on the same instance. Even if you replicated that field — a bunch of base64-encoded binary — to the target instance, it couldn't be decoded. So right now passwords have to be reset if you want to use them, or you use single sign-on for the authentication part of the profile. — So if the sys_user table is replicated to instance B, I won't be able to log in with a user that was replicated from another instance? — Right — if you don't have central authentication, you may have to reset all the users, so everyone on that instance resets their own password. But generally, the common practice nowadays, with the different authentication methods available, is to stand up a third-party authentication integration while your profiles get synchronized.

Question: would it be possible for on-premises customers to use this feature if we already run Kafka — for other customers to consume the data from Kafka on-premises? — Currently no, sorry, because the Kafka we use sits inside our data centers and is protected. — Could they install their own adapter configuration? — Potentially, yes; we could discuss that. Behind the scenes we set up SSL communication between our instances and Kafka, so theoretically we could set that up for your instances as well, but currently I don't think it's on the roadmap.

Question: are you going to enable transferring encrypted data in future releases? — So far we're concerned about the security vulnerability there. We certainly want it to be as seamless as possible, but there are some challenges. Potentially what we would do is let the source production instance decrypt, re-encrypt with the shared key, and ship that data over so the consumer can apply it after decryption. But right now we're hesitant with encrypted data — we don't want to open any vulnerability.

Question: you spoke mainly about production-to-production synchronization — would you recommend this for production to sub-production, like UAT instances? — It depends on the use case, but several customers have approached us about exactly that: a subset of test data. Cloning can take five, six, potentially twelve hours to finish, and if all you want is your user data, department data, or certain incidents to be up to date, that's a pretty good use case, and we don't limit it. There's no limitation there, but it is definitely not a good substitute for a full clone.

Question: would you allow transport of Edge-encrypted data, since the keys are controlled outside the instance anyway? — That's a good question. Right now I don't think so: Edge Encryption is tied directly to a particular instance, so I don't think you can have one Edge proxy sitting in front of two instances and able to decrypt data from both. But potentially, if that could be set up, we could — because literally all we'd do is push the same exact encrypted data to another instance.

Question: when can we expect customer-to-customer instances? [laughter] — For customer-to-customer we're currently looking at the Quebec timeframe, but we might be able to pull it forward — the earliest would be the Paris timeframe, at the absolute earliest.

Question: is this part of the base platform, or is a subscription required? — There is a subscription required, yes — a separate subscription for this service. The final pricing is TBD, but there will be a cost.

Question: are there any limitations on replicating from FISMA to non-FISMA? — Right now those are completely separated: the topology inside our data centers keeps the FISMA and non-FISMA environments completely apart.
They don't communicate with each other, so no — you can use it within FISMA and within non-FISMA, but not between the two.

Question: what about performance — is this something you can turn on in production without any impact? — It shouldn't impact your performance. There's only one job doing the work. The thing people might worry about is how long it takes to seed — to copy, say, three million records — and that could take a long time, because a single job does it, and since the data is stored in Kafka we throttle how much we take so we don't overload instances. If the instance is slow, or the job yields because other, higher-priority jobs are running, it takes a while before it picks up again — which means your replication time gets longer, but it doesn't impact the rest of the instance.

Question: on the consumer instance — say the user table from the demo a while ago already has some content and you request a seed — what happens to the data that's already there? — If a record has other content in the same field, whichever message arrives later overwrites it — unless you have different rules; you could potentially check in a transformation — but in general the seed overwrites the data. So when you do master-slave, as in the user use case, where you have a source of truth or master instance, you should pretty much make the consumer read-only, because the next time you seed, it's going to get overwritten. And to be clear about "overwrite": the consumer's records are not deleted and recreated — we read the incoming record and set the values onto the matching record. Records unique to the consumer table are left alone, so you may end up with records sitting there that don't exist on the producer. — And the matching is by sys_id? — Yes, we coalesce based on sys_id. If a record on the consumer has a different user ID but the same sys_id as one on the producer, that record will be changed on the consumer, because we coalesce on the sys_id, and a sys_id is supposed to be globally unique.

Question: one last one from my side — say we have master-master replication set up on group membership between QA and UAT: if someone adds a group in QA it should replicate to UAT, and vice versa. Does that create any problem with the sys_ids, since a sys_id is unique to an instance? — If you create a group, it goes to the other side and gets created there. If people create the same group on both sides — say a "security group" is created on each at exactly the same time — those will be two different sys_ids, and when you replicate, you end up with two different records both called "security group." As far as IDR is concerned, because we recognize only the sys_id, those are literally two different records. — My point was that sys_ids are unique to an instance. — sys_ids are meant to be globally unique. In the future we want to introduce other coalesce keys — for example, a group name or a user ID — so we coalesce based on that instead of the sys_id, but right now it's the sys_id.

Question: will it work across data centers? — It will work within the same region.
Yes — same region; across different data-center pairs within a region is no problem.

Question: related to that — everything so far has been about the source, but what about backups? What timeframe are we looking at for IDR to sync with the backups? — IDR doesn't really interact with backups directly. Backups always run per instance — every day, every hour, backup jobs are running — so when IDR synchronizes, it's updating your records, and those records get picked up by the backups.

Any other questions? Great — thank you. I do want to call out that we have a simpler demo, the deck, and more at the CreatorCon Ask the Experts pod on core platform, so if you have other questions, feel free to stop by and we can continue the conversation. I'll be there, and Jennifer might be there too.
Info
Channel: ServiceNow Events
Views: 1,914
Rating: 5 out of 5
Id: HgTnecV_miU
Length: 62min 43sec (3763 seconds)
Published: Fri Nov 01 2019