ETL Testing interview questions and Answers | ETL Testing Interview Preparation

Video Statistics and Information

Captions
Let me first explain, from the interview perspective, how you have to present your roles and responsibilities, and then I'll walk you through a sample project. As an ETL tester, when you start looking for a job, what are the things they are going to test you on? Predominantly SQL, because if you know SQL, the ETL testing process, BI testing, everything becomes easy. They will also test you a little on data warehousing basics, to see whether you understand the terminology and the data modeling: dimensions, facts, slowly changing dimensions and so on. They will test you thoroughly on the ETL testing process: whether you understand a source-to-target column mapping sheet, what kinds of validations you do, the count validation, the different types of test scenarios. Along with this they will touch on BI testing, asking what kind of testing you have done on the BI side, and some interviewers will also test you on basic Unix commands and whether you are comfortable using PuTTY or WinSCP. These are the different areas they test for any ETL testing or data warehousing testing profile, so prepare yourself accordingly: make sure you know SQL in and out, the data warehousing basics and terminology, the ETL testing process, BI testing, and Unix.

When you go to the interview, the first and foremost question is about your project: what project you are working on and what your current roles and responsibilities are. When you explain your project you need to mention which ETL tool you are using, which databases, and what your source system is: what domain the data belongs to (insurance or something else) and what the source database is, whether Oracle, SQL Server, DB2, or data pulled from a mainframe system. Then the target system: in a data warehousing project (as opposed to a data integration project), you first load the data into staging, from staging possibly into an ODS, and from the ODS into the EDW or a data mart. You need to understand the complete data flow and be able to explain it: generally the data goes from the OLTP system to staging, from staging it may go to an ODS (not in all projects), and from the ODS it is loaded into the enterprise data warehouse or a data mart. As an ETL tester you should say that you have validated the data between the transactional system and staging, from staging to the ODS, and from the ODS to the EDW, starting with simple checks such as the count validation sketched below.
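As a hedged illustration of that count validation (not from the talk itself): assume a hypothetical CLAIM table in the SQL Server source and a hypothetical STG_CLAIM table in the Oracle staging schema, each query run on its own database, with the two counts compared manually or in a comparison sheet.

    -- Source side (SQL Server): records in scope for this load window
    SELECT COUNT(*) AS src_count
    FROM   dbo.CLAIM
    WHERE  LAST_UPDATED_DATE >= '2016-01-01';

    -- Target side (Oracle staging): records received for the same window
    SELECT COUNT(*) AS stg_count
    FROM   STG_CLAIM
    WHERE  LOAD_DATE >= DATE '2016-01-01';

    -- The two counts should match; a mismatch points to rejected or
    -- missed records in the extract.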
That is how you need to explain it, along with which database the target system is on, which will depend on your project. Next, mention which BI or reporting tool is being used; the overall idea is that when you explain the project you should be able to explain it end to end. So let me take a sample project and show you how to explain it. I'll use a sample insurance project. For insurance management there is a tool called Guidewire; how many of you are aware of it? It is used for managing all the insurance functions and basically has three modules: CC, the Claim Center, for processing claims; PC, the Policy Center, for selling policies; and BC, the Billing Center. This is your transactional system (I am just giving an example), and its data sits in a SQL Server database.

You need to explain the architecture. In our project the data is first staged into a stage database, which is in Oracle. In it we have different staging tables, for example something like STG_CLAIM, STG_CLAIMANT and STG_CLAIM_TRANSACTION for claims, and STG_POLICY and STG_LOCATION for policy, and so on. You can say we have a staging database created on Oracle, and Informatica is used as the ETL tool (or mention whichever ETL tool you actually have experience with). Informatica extracts the data from all these Guidewire tables, and the incremental data from the OLTP system is integrated into the Oracle staging database. From staging the data is integrated into an ODS layer, which captures the historical data and to which the data is appended. In the ODS there are again separate tables for each module: separate ODS tables for CC, for PC and for BC, for example ODS_CLAIM_TRANSACTION, ODS_CLAIM and ODS_CLAIM_LOCATION on the claims side, with a separate set of tables for the Policy Center and another for billing. One check a tester typically runs on that incremental feed is sketched below.
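A rough sketch of an incremental-load check, not from the talk: the column names (LAST_UPDATED_DATE, LOAD_DATE) and the load timestamp are assumptions, and because source and staging sit on different databases the key lists are compared as exported files or as counts.

    -- Source side (SQL Server): keys changed since the last successful load
    SELECT CLAIM_ID
    FROM   dbo.CLAIM
    WHERE  LAST_UPDATED_DATE > '2016-01-10';

    -- Staging side (Oracle): keys received by the current incremental run
    SELECT CLAIM_ID
    FROM   STG_CLAIM
    WHERE  LOAD_DATE > TO_DATE('2016-01-10', 'YYYY-MM-DD');

    -- Diff the two key lists (file-to-file) or compare the counts to confirm
    -- the incremental pull did not miss any changed source rows.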
This ODS layer is also where the historical data is maintained, along with some other dimension-related tables. From the ODS the data is loaded into individual star schemas: basically you can say we have three data marts, a claim data mart, a policy data mart and a billing data mart, so the data is separated by subject area into three different databases (or it can sit within a single database as well). In the claim data mart we have one set of star schemas, in the policy data mart another set, and in the billing data mart another. For example, in the claims mart we have an accident-year fact table for accident-year analysis and a claim calendar-year fact, in the policy mart a policy-year fact and a policy calendar-year fact for the policy-related analysis, and in the billing mart a billing fact for the billing transactions. Likewise, a different set of star schemas is created in each individual data mart. At every stage, to move the data from Guidewire to staging, from staging to the ODS, and from the ODS to the claim or policy data mart, the Informatica tool is used.

On top of this, from the claim data mart, Tableau (or Cognos, whichever applies) is used as the reporting tool; say Tableau is used for generating the reports, so we have a set of dashboards built in Tableau which the business users use to analyse claims, policies and billing. So for the current project you can say: Informatica is the ETL tool, and everything is developed on Oracle; the claim data mart, the policy data mart, the billing data mart, the ODS and the staging area are all Oracle databases, while the source system is SQL Server, from which the loads are incremental. If you want to mention scheduling, you can name a scheduling tool, or say that Informatica itself is used as the scheduler in the current project. A typical validation a tester runs against those star schemas is sketched below.
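As an illustration (not from the talk), one common check against a star schema like the ones above is an orphan-key check between a fact and a dimension; the table and column names here (F_CLAIM_ACCIDENT_YEAR, D_POLICY, POLICY_KEY) are hypothetical.

    -- Fact rows whose policy surrogate key has no match in the dimension
    SELECT f.POLICY_KEY, COUNT(*) AS orphan_rows
    FROM   F_CLAIM_ACCIDENT_YEAR f
    LEFT   JOIN D_POLICY d ON f.POLICY_KEY = d.POLICY_KEY
    WHERE  d.POLICY_KEY IS NULL
    GROUP  BY f.POLICY_KEY;

    -- Any row returned is a referential-integrity defect: the fact table
    -- references a dimension key that was never loaded.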
Now, if they ask what your roles and responsibilities are as an ETL tester: first, you need to understand the business requirements, from the BRD and from the business analyst if there is one. Next is understanding the architecture, the data model and its different components from the TDD, the technical design document. Having understood the technical design documents, as a tester you first prepare the test plan and define the scope, plus the test strategy document if it is required, with the assumptions and everything listed there. The fourth item is preparing the test cases; you can explain how the test cases were categorised, for example smoke test cases, functional test cases, performance test cases and system test cases, and that all of those were prepared. Mention which tool was used for defect tracking: maybe HP ALM or HP Quality Center, or you communicated the defects in Excel sheets. Once the migration (the load) is done, you execute all the test cases and log the defects found in both the ETL and the BI process, so you can say you are proficient in executing BI test cases as well as ETL test cases. As a tester you also interact with the project managers, sharing the test reports (how many test cases were executed, how many passed, how many failed), and work closely with the developers, following up on defect fixes and explaining the defects you found. Testing is generally implemented in multiple cycles, so you can say the test process ran over multiple cycles, and once that is done you sign off the project.

You can also say you are proficient in writing queries and converting the mappings into SQL; we did not use any automation tools, the testing was done manually, which is essentially SQL-based validation. You can say you are proficient in heterogeneous as well as homogeneous testing: validating the data between SQL Server and Oracle is heterogeneous testing, while staging to ODS is homogeneous because both sides are Oracle (see the sketch below). You can also mention that you used PuTTY and WinSCP for executing and verifying commands, and explain the BI testing you did within the same project. When you are explaining the project, the project architecture plays an important role, and being able to explain it, along with the tools you used, will give you an edge.
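A minimal sketch of the homogeneous staging-to-ODS validation mentioned above, using the MINUS approach; table and column names are illustrative, and the comparison only works in one statement because both layers sit in the same Oracle database.

    -- Records present in staging but missing (or different) in the ODS
    SELECT CLAIM_ID, CLAIM_STATUS, CLAIM_AMOUNT
    FROM   STG_CLAIM
    MINUS
    SELECT CLAIM_ID, CLAIM_STATUS, CLAIM_AMOUNT
    FROM   ODS_CLAIM;

    -- Zero rows returned means every staged record reached the ODS unchanged
    -- for the compared columns. For the heterogeneous leg (SQL Server source
    -- vs. Oracle staging) a cross-database MINUS is not possible, so you fall
    -- back to count validation or file-to-file comparison.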
In this project, on the Tableau side, you can say we have four or five dashboards for claims, two dashboards for policy and one dashboard for billing, and all of these dashboards are tested. If they ask the team size, you can say four or five, because it is a big project. You can also explain which SDLC life cycle is used; mostly it will be an Agile methodology, but you can equally say the project followed Waterfall or whatever life cycle you actually had. They will also ask whom you interacted with for the requirements and how you maintained the defects; you can say you used a defect-tracking tool, interacted with the developers, and joined the calls to explain the test execution status. That is how you explain the current project and your roles and responsibilities; once you are done with that, they will step into the technical questions. Before we move on to the interview questions, if you have any questions about the project explanation, let me know and I will try to answer them.

[Student] Hello, are you able to hear me? [Instructor] Yes, I can hear you. [Student] This was a normal project; can you tell me more about a migration project? Is it the same process, or is there anything else in the QA's responsibilities? [Instructor] A migration project is mainly about verifying the migration itself. Say you have Siebel CRM as the transactional system, with tables like S_CONTACT, S_LEAD and S_ADDRESS, and you are moving the data from Siebel CRM to, say, SAP CRM or Salesforce.com, which is a cloud system, with an ETL tool used as the migration tool. What you need to check in a migration project is whether the data, the number of records available in S_CONTACT, matches what lands in Salesforce.com. In Salesforce there is an object (effectively a table) that stores this data, say the Contact object; the data from the S_CONTACT table is loaded into the Contact object through the ETL tool, and I need to validate the data between the two. There is no concept of dimensions and fact tables here, and no concept of scheduling either, because it is a one-time activity: a data migration migrates the entire data once, the old system is decommissioned, and you start using Salesforce.com. There is no BI component either, unless it is specifically a BI migration project. It is all about validating the data between the source tables in Siebel CRM and the corresponding objects in Salesforce.com. While migrating, the team should also have analysed which table in the CRM is loaded to which object in Salesforce.com and which column maps to which column.
By understanding that column mapping between the two systems, we validate the data. It is a MINUS-query style of validation in spirit, but since the two systems are heterogeneous we actually do file-to-file comparison and count validation: are there any records missing between the CRM system and the target, has everything been captured, and so on. That is what a migration project looks like. Any questions on this? Are you good?

There is also a chance you will work on data integration projects. For example, to load data into Salesforce.com itself, they may receive data as flat files or XML files and use Informatica or a similar ETL tool to load it into Salesforce.com. That is a data integration project, where you are integrating data from different kinds of sources into your transactional system. There we do more or less the same kind of testing, except that you read the data from the flat files and validate it against the target, and you also verify that the loads are not creating duplicates in the transactional system. In data migration and data integration projects you do not have the concept of dimensions, and you do not have scheduling unless it is a recurring activity: a data migration will definitely not be scheduled, but in a data integration project with a daily feed the jobs will be scheduled, either in Informatica or in a separate scheduler. So what I explained earlier was a data warehousing project; data migration and data integration projects will not have a reporting component.

Any other questions on the project? [Student] What kind of issues do we see most frequently when a migration happens, data integrity issues? [Instructor] Yes, data integrity issues can happen, for example if you load a child table before its parent table. There is also data truncation: say the contact name is "Karthik" and, because of the mapping, it gets loaded truncated. There are data type and format conversion issues: say a date is represented in one format in Siebel CRM and, because of the conversion, it is loaded into the target in a different format, so 01-Jan ends up as something else entirely. Records might also get rejected: say the S_CONTACT table has around 10 million records, but when it is loaded into Contact some of the records are rejected because of data type issues. And in many cases there are validations that were supposed to filter certain existing records out of the CRM extract, but those records get loaded to Salesforce anyway. A sketch of the basic source-versus-target comparison for such a migration follows.
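A hedged sketch of that migration validation. The assumption here is that both sides have been extracted into a working schema for comparison (MIG_S_CONTACT_EXTRACT for the Siebel data, MIG_CONTACT_EXTRACT for the Salesforce data), since Salesforce objects are not normally queried with plain SQL; the table and column names are hypothetical.

    -- Count comparison between the two extracts
    SELECT COUNT(*) AS siebel_count FROM MIG_S_CONTACT_EXTRACT;
    SELECT COUNT(*) AS sfdc_count   FROM MIG_CONTACT_EXTRACT;

    -- Records present in the Siebel extract but missing from the Salesforce
    -- extract, compared on the columns listed in the mapping sheet
    SELECT contact_id, first_name, last_name
    FROM   MIG_S_CONTACT_EXTRACT
    MINUS
    SELECT legacy_contact_id, first_name, last_name
    FROM   MIG_CONTACT_EXTRACT;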
Those are the kinds of issues we generally see in migration projects. Duplicates may be less common, but if you are loading the data into the Contact table from multiple jobs or multiple ETLs, that can cause duplicates as well; it depends on the validations that have been set up in the ETL. In general these are what we come across. If you are good with that, let's move on to the next one.

[Student] One interview question I was asked was: what are the fact table entries in case of the EDW and the data mart? [Instructor] OK, fact table entries; let me take that and answer it. In general, in the EDW we might have daily-level transaction tables, and in the data mart we might have aggregated tables. Say in my fact table I have a customer ID, a product ID, an order date, a price and a quantity; that is a detail-level transaction table we might maintain in the EDW. In a data mart we might instead have aggregate tables built from it. So an EDW entry might be: customer ID 100, product P1, order date 1st Jan 2015, price 100, quantity 2, and you would have one such entry for every day. When it goes to the data mart, say I want to store it at the month level: customer ID, product ID, a month column instead of the order date, a total price and a total quantity. So for customer 100 and product P1 the aggregated row for January 2015 (call it 2015-01) might hold a total price of, say, 400 for four items. There is no hard-and-fast rule that a data mart must contain only aggregated tables or that the EDW must contain only transactional tables; you may see both in either place. Probably the interviewer just wanted to check whether you have actually seen what the EDW entries and the data mart entries look like. The roll-up from the detail fact to the monthly aggregate is sketched below.

Right, let me move on to the list of interview questions we generally see, and let's make this interactive; I want you to participate so we can discuss them one by one. The first one is about data warehousing: if somebody asks you what a data warehouse is, what is your answer? All right. This is basically the list of questions you actually see in interviews: they generally ask you to explain your project and the test process you followed, and then questions such as: what is the difference between a join and a self join? At least this one, who can answer it?
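A small sketch of that EDW-versus-data-mart example; the table name FACT_ORDERS and the monthly roll-up target are illustrative, not from the talk.

    -- Monthly aggregate the data mart might store, derived from the EDW's
    -- daily, detail-level fact table
    SELECT customer_id,
           product_id,
           TO_CHAR(order_date, 'YYYY-MM') AS order_month,
           SUM(price)                     AS total_price,
           SUM(quantity)                  AS total_quantity
    FROM   FACT_ORDERS
    GROUP  BY customer_id, product_id, TO_CHAR(order_date, 'YYYY-MM');

    -- A tester can reuse the same query to reconcile the mart's aggregate
    -- rows back to the EDW detail.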
[Student] A self join is when the same table is joined with itself. [Instructor] OK, and coming back to a plain join between two tables, what is your answer? [Student] Joining of two different tables, so columns from two different tables can be joined together. [Instructor] Right. The way I would put it: a self join is just one kind of join; besides it we have the other join types such as inner join and outer join, and that is essentially the difference the question is after.

What is the difference between VARCHAR and VARCHAR2? You have these two character types, so what is the difference? [Student] One is fixed-length, and the other allocates only as many characters as are given to it. [Instructor] Yes. If you define a column with the fixed-length behaviour at size 100, the database allocates 100 bytes of memory for it even if you store only 60 characters; with VARCHAR2(100), if you store only 60 characters it uses only 60 bytes, not the full 100. So one behaves as fixed-length and the other as variable-length.

What is the difference between a dimension and a fact? [Student] A fact holds the measurable columns, and a dimension holds the textual, descriptive data. [Instructor] Right. And what is a data mart? [Student] It is a subset of the data warehouse. [Instructor] Very good; a subset of the data warehouse defined for a specific subject area is called a data mart.

How do you find the latest entry in a table? [Student] By the timestamp, the latest timestamp. [Instructor] You do not even need a GROUP BY. Say you have a created date and a last updated date; you can simply ORDER BY the last updated date and look at the top, but that only displays the ordering. If you want to pick up just the latest record, you fetch the row where the last_updated_date column equals the maximum value of last_updated_date; that is how you determine it.

How do you display the maximum average salary from each department, and what is the query? [Student] Use the MAX or AVG function? [Instructor] Yes, and it is very simple: SELECT AVG(salary) FROM employees GROUP BY department_id gives you the average salary for each department, and if you apply a MAX function on top of that it gives you the maximum of those average salaries across all the departments. Note that it returns only one value, not one per department.

One more: if the employee table has employee number, employee name and location, with rows like (1, Y, Hyderabad) and (2, G, Bangalore), and you want to display each employee with all of their locations pivoted onto one row, honestly I am not sure off the top of my head how that pivoting of locations works.
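A sketch of those two answers, assuming an Oracle-style EMPLOYEES table with SALARY and DEPARTMENT_ID columns and a LAST_UPDATED_DATE column (the last one is an assumption for the example):

    -- Maximum of the per-department average salaries (a single value)
    SELECT MAX(avg_sal) AS max_avg_salary
    FROM  (SELECT department_id, AVG(salary) AS avg_sal
           FROM   employees
           GROUP  BY department_id);

    -- Latest entry in the table, picked via the maximum timestamp
    SELECT *
    FROM   employees
    WHERE  last_updated_date = (SELECT MAX(last_updated_date) FROM employees);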
OK, let's go over this one: A and B are two tables, say with ten columns each. Whenever I write SELECT * FROM a, b, how many columns are returned? Basically twenty columns. And if the question is about the number of rows, I hope you know how a Cartesian join works.

Next: my table has a student ID plus marks columns for maths, physics, chemistry and Telugu, and I want to display only the highest mark for each student across those subjects. How do you do that? [Student] We can use the average function? [Instructor] No; an aggregate function works down a column, but here I am asking across the columns. What you have to do is convert the columns into rows and then apply the aggregate. For converting the columns into rows you can use PIVOT or UNPIVOT; otherwise it will not work. Once the data looks like one row per student per subject, for example (1, 25), (1, 50) and so on, you group the data by the student ID and take the maximum value. You cannot apply MAX directly across columns; the data has to be converted from columns to rows and then you apply the MAX function on top of that.

Next: in my schema there are a hundred tables, some of them have the same column names, and I want to display the tables which share a column name. How do you do that, can anybody tell me? You have to make use of the data dictionary for this. There is a dictionary view of table columns from which you can identify the tables and their respective columns. Basically you use a correlated subquery: select from the columns view as the outer query, use EXISTS, and in the inner query select 1 from the same view (alias it, say, as i), join the outer and the inner query on the column name, and add the condition that the inner table name is not equal to the outer table name. That gives you columns which also appear in some other, different table; the not-equal condition excludes matches within the same table. You just have to try it out; I think it should work. Both of these are sketched below.
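A sketch of both answers, assuming a hypothetical STUDENT_MARKS table with one column per subject and Oracle's USER_TAB_COLUMNS dictionary view (Oracle 11g or later for UNPIVOT):

    -- Highest mark per student, by unpivoting the subject columns into rows
    SELECT student_id, MAX(marks) AS highest_mark
    FROM   student_marks
    UNPIVOT (marks FOR subject IN (maths, physics, chemistry, telugu))
    GROUP  BY student_id;

    -- Tables that share a column name with some other table, using the
    -- correlated-subquery approach described above
    SELECT DISTINCT o.table_name, o.column_name
    FROM   user_tab_columns o
    WHERE  EXISTS (SELECT 1
                   FROM   user_tab_columns i
                   WHERE  i.column_name = o.column_name
                   AND    i.table_name <> o.table_name);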
Priority and severity, I think you already know. What is a fact table and what are the different fact tables in your project: that depends on the data model you are using. How do you update all the columns at a time: you just have to list each and every individual column name; there is no shortcut. What is the difference between a primary key, a foreign key and a unique key: a primary key will not accept null values, a unique key can have nulls, and a foreign key expresses the relationship between two tables. Next is SCD Type 1, Type 2 and Type 3: Type 1 stores only the most recent data, Type 2 keeps the full history, Type 3 keeps partial history. Display the fifth highest salary: you can use a rank (analytic) function to find the fifth highest salary instead of the classic correlated sub-query approach. Display which department has more than three employees: very simple, group the data by department ID and use HAVING COUNT(*) > 3. Bug life cycle: I think you know the defect life cycle. Delete, truncate and drop: DELETE can be rolled back, TRUNCATE cannot be rolled back, but in both cases the table structure remains in the database; with DROP, the structure and the data are both removed. Explain the test plan: you have a document you can go through. UNION and UNION ALL: UNION eliminates the duplicate records, UNION ALL does not.

Now the Unix ones. How do you find the files in a directory or subdirectory matching a pattern: as I showed you, use the ls command and pipe it to grep. How do you know what processes are running in Unix: ps -ef is the command to list the running processes. Types of schemas in data warehousing: star schema, snowflake schema and so on. How do you kill a process in Unix: if you know the PID, the process ID, you run kill -9 followed by that process ID. How do you see the hidden files in Unix: use ls -a. How do you know how many users are logged into the system: I think the who command lists the logged-in users and their terminals, but I am not completely sure, so check the document I shared. OLTP versus OLAP: I think you know the difference, and you know what data warehousing is. Background and foreground processes: bg sends a job to the background and fg brings it to the foreground.

In my table I have a hundred rows and I want to display only rows three to seven; can anybody answer how to do that? It is basically the concept of ROWNUM that has to be used. Say SELECT ROWNUM, last_name FROM employees, and give ROWNUM an alias (call it rn, for example). You then need an inline view: select from that query, whether you select star or specific columns does not matter, where rn is between three and seven. You can try it, it will definitely work; that gives you exactly those records, as sketched below. How do you display the duplicate records: by using a GROUP BY with a HAVING count.
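A sketch of that ROWNUM technique, plus the rank-based fifth-highest-salary query mentioned a little earlier, assuming an EMPLOYEES table; the inline view is required because ROWNUM cannot be range-filtered directly.

    -- Rows 3 to 7, by aliasing ROWNUM inside an inline view first
    SELECT *
    FROM  (SELECT e.*, ROWNUM AS rn
           FROM   employees e)
    WHERE rn BETWEEN 3 AND 7;

    -- Fifth highest salary using an analytic rank
    SELECT *
    FROM  (SELECT e.*, DENSE_RANK() OVER (ORDER BY salary DESC) AS sal_rank
           FROM   employees e)
    WHERE sal_rank = 5;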
Deleting duplicate records: I think I have shown you, in SQL you can do it using ROWID. Display the employee name and his manager's name: you can use a self join for that. Display each department's total employees: GROUP BY department_id with COUNT(*). The TO_CHAR function: it converts a date or a number to a character string. The TRUNC function: it removes the decimal part of a number, and it can also be used against dates to truncate them to the month or the year. NVL converts a null value to a predefined value. Operator precedence in SQL: multiplication and division first, then addition and subtraction. What are the clauses in SQL: the FROM clause, WHERE clause, GROUP BY clause, HAVING clause, ORDER BY clause and so on. Traceability matrix: I think you know the traceability matrix. Testing life cycle, the ETL test process, retesting and regression testing: we have covered those.

Display the maximum salary, or display who joined in the last month: how do you find out who joined in the last month? Can somebody tell me? You do not know in advance which month "last month" is, but you need to get it. You can try this: how do you identify the first day of the month? If you say TRUNC(SYSDATE, 'MM'), truncating the system date to the month always gives you the first day of the current month, and if you subtract one it takes you into the last month. Then use a TO_CHAR function on top of that; the 'MON' format gives you the month name, or you can use 'MM' for the month number. So TRUNC(SYSDATE, 'MM') gives the first day of the month, minus one takes you to the previous month, and out of that you can build the filter.

Validation and verification, retesting, DDL and DML, the software development life cycle and the testing life cycle: I think we are good on those. How to maintain history, and priority versus severity: those are in the material I will share. Deleting and displaying duplicate records we have covered; how to display even and odd numbers: use the MOD (modulo) function; fetching particular records: that is again your ROWNUM; displaying the last record you can also do; concatenating two columns; how to retrieve all the rows except the last row: you can probably use a MINUS for this, retrieve all the rows in one query, retrieve only the last row in another, and subtract until you get what you need; getting only the distinct values; department-wise salary counts. The last-month query and the ROWID-based duplicate delete are sketched below.
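A sketch of those two items, assuming an EMPLOYEES table with a HIRE_DATE column; the duplicate-defining columns in the delete are chosen purely for illustration.

    -- Who joined last month: TRUNC(SYSDATE, 'MM') is the first day of the
    -- current month, so subtracting a day lands in the previous month
    SELECT *
    FROM   employees
    WHERE  hire_date >= TRUNC(TRUNC(SYSDATE, 'MM') - 1, 'MM')  -- first day of last month
    AND    hire_date <  TRUNC(SYSDATE, 'MM');                  -- first day of this month

    -- The TO_CHAR variant described above: the previous month's name
    SELECT TO_CHAR(TRUNC(SYSDATE, 'MM') - 1, 'MON') AS last_month FROM dual;

    -- Deleting duplicates with ROWID, keeping one row per duplicate group
    DELETE FROM employees e
    WHERE  e.ROWID NOT IN (SELECT MIN(e2.ROWID)
                           FROM   employees e2
                           GROUP  BY e2.first_name, e2.last_name, e2.hire_date);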
Count the number of columns in a query: how do you do this? It is an interesting one; can anybody tell me? You might have to use a CASE statement: for each and every column, replace the column with the value one and then do the sum, column one plus column two plus column three, something like that. That is one way I can think of, but I am not very sure. If you were using PIVOT it would again be easy, just convert all the columns into rows, but I do not think you really need that: you just mark a one for each and every column and then do the sum. What I am saying is, write something like SELECT CASE WHEN column1 = column1 THEN 1 ELSE 0 END; you know the column name, so for each column you write a case like this, which obviously gives you one, and then you do the same for all the columns and add them with plus, plus, plus. That is how it can be done. If the table has only one row this makes some sense; otherwise counting the number of columns in a query does not make much sense. But if it is the number of columns in a table, then it is easy: there are the USER_TABLES and USER_TAB_COLUMNS dictionary views from which you can always determine that.

With that, I think we are done. All of this is already there in your ETL testing folder, in the ETL testing materials that have been given to you, and if you have any issues in the future you can just get back to me, write to us and we will try to help you out.

All right, I have one long question here, hopefully the last one: if you do not have any documents for a project, a legacy system, what is your approach as a QA? If you do not have documentation, it becomes more like white-box testing, where you have to log in, understand the code and test it; that is the only way. The approach is to understand the system, open the tools, see what kind of ETLs have been developed, and work it out from there. That is everything from my side. If you have any other questions, go ahead quickly and I will try to answer them, and then we can wind up the course. If not: it has been almost six or seven weeks of classes, so for now I am closing the meeting. I hope you enjoyed the classes.
Info
Channel: TEK CLASSES
Views: 134,511
Rating: 4.7903099 out of 5
Keywords: etl testing interview questions, testing interview questions, etl interview questions, etl testing jobs in bangalore, etl concepts, etl testing concepts, etl testing tools, etl tutorial, etl testing tutorial, etl testing, scope of etl testing, etl testing career growth, etl testing training, etl testing online training, etl training, etl testing interview preparation
Id: MbJtbgOZvPU
Length: 51min 27sec (3087 seconds)
Published: Mon Jan 11 2016