How to back up a remote PostgreSQL database (AWS) using pgAdmin (local)?

Video Statistics and Information

Video
Captions Word Cloud
Reddit Comments

why would you do this as opposed to using the --host switch?

👍︎︎ 2 👤︎︎ u/WhydYouKillMeDogJack 📅︎︎ Jan 29 2021 🗫︎ replies
Captions
Hey guys, welcome back. In this video we are going to see how to use pgAdmin to take a backup of a remote database, and then use that backup file to restore the database on our local machine for development purposes.

Before we start the hands-on work, let's have a look at the steps we are going to execute. This is an overview of how we will connect to a remote database deployed in a Kubernetes cluster running in AWS cloud. On one side is my local machine; on the other is the remote machine, which is a worker node EC2 instance of the Kubernetes cluster. One database is running in a Docker container on my local machine, and the other is running on the remote machine, where it is used by an application in AWS cloud. Our goal is to run pgAdmin in a Docker container on the local machine and, from this pgAdmin, connect to the remote database using its host address and port number. The host address is simply the public IP address of the worker node EC2 instance. Once connected, we will take a backup of the remote database and use the backup file to fill the local database with recent data for development purposes.

Currently the service type of the database pod is ClusterIP, which means the db pod is accessible only within the cluster; it cannot be reached from outside. To make it accessible from outside the cluster, we need to change the service type to NodePort and assign it a node port number; we are going to use 31000. Another important step is to add a firewall rule of type Custom TCP for port 31000, so that this port becomes reachable from the outside world, with the source set to "My IP", the IP address of my local machine. That way, only this IP address will be able to access port 31000. So let's get started.
This is my docker-compose file, which I am going to use to run the database and pgAdmin containers on my local machine. These are the configurations for the two containers. First the db service: I am using my custom PostgreSQL image, but you can use the official image; here I have set the username, the password, and the name of the database. Then comes the pgadmin service, which uses the official image dpage/pgadmin4. I have set the container name to pgadmin, and it depends on the db service, which means that when we run the pgadmin service it will automatically start the db instance for us. For the volume binding, I have bound the pgadmin folder you see here to a folder inside the container, so that when we make a backup it gets stored in this folder and we can easily access it from the host. For the port binding, I am mapping port 5555 on the host machine to port 80 in the container. Finally, these are the two mandatory environment variables that have to be set to run the container: the default email and the default password.

Now that we have both services in the docker-compose file, let's start them with docker-compose, giving just the pgadmin service name. This command creates two containers, one for the database and one for pgAdmin. Now that they have been created successfully, let's see if we can access the pgAdmin UI on port 5555. Go to localhost:5555 and we land on the pgAdmin 4 login page. Enter the username and password (they should match what was written in the docker-compose file) and hit Login. We are now logged into the pgAdmin console, and as you can see there are no servers added yet, so let's add a new server. I want to connect to my local database instance first.
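As a rough sketch, a docker-compose file along these lines would match the setup just described. The image names, credentials, database name, and the local ./pgadmin folder here are assumptions; the only details confirmed in the walkthrough are the dpage/pgadmin4 image, the 5555→80 port mapping, and the two mandatory pgAdmin environment variables.

```yaml
# docker-compose.yml — minimal sketch of the setup described above.
# The video uses a custom PostgreSQL image; postgres:13 stands in for it here.
version: "3.8"
services:
  db:
    image: postgres:13              # assumed image; the video uses a custom one
    environment:
      POSTGRES_USER: postgres       # placeholder credentials
      POSTGRES_PASSWORD: example
      POSTGRES_DB: appdb            # placeholder database name
  pgadmin:
    image: dpage/pgadmin4
    container_name: pgadmin
    depends_on:
      - db                          # starting pgadmin also starts db
    ports:
      - "5555:80"                   # host port 5555 -> pgAdmin on port 80 inside
    volumes:
      - ./pgadmin:/var/lib/pgadmin  # backups made in pgAdmin land here on the host
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com
      PGADMIN_DEFAULT_PASSWORD: secret
```

Starting it with `docker-compose up -d pgadmin` brings up the db container as well, because of `depends_on`.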
I'll name it "local". For the connection, the host name is db (the service name from the docker-compose file), the port is 5432, and the database name and username are the ones configured in the compose file. Hit Save. We are now successfully connected to the local instance. This is the database that has been initialized on our local machine; right now it is empty and has no data. Our goal is to connect to the remote database, take a backup of it, and then apply that backup to this local instance.

Before we connect to the remote database, we have to add the firewall rules and change the service type from ClusterIP to NodePort, as discussed earlier. So let's get started. I have logged into my Kubernetes cluster; let's execute kubectl get all. It shows the list of pods and services running in the cluster. This is my database pod, and this is the service for it; the type of the service is ClusterIP, which means it is accessible only within the cluster, on port 5432. Our goal is to change this type to NodePort and assign a node port number to the service, so let's write a new YAML file for it. I already have a boilerplate ready: the kind is Service, the name of the service is db, and in the spec the selector is app: db together with the port. I am going to add another field, nodePort, with the value 31000 as discussed earlier, and the type should be set to NodePort. Save it, then apply the changes with kubectl apply -f. It reports that service/db is configured. Execute kubectl get all again, and you can see that the type of the db service has changed to NodePort and the new node port 31000 has been assigned.
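The Service manifest sketched in the transcript would look roughly like this. The selector label (app: db) and the target port are assumptions; the service name, port 5432, the NodePort type, and node port 31000 come from the walkthrough.

```yaml
# db-service.yaml — sketch of the NodePort service described above.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  type: NodePort            # was ClusterIP; NodePort also exposes it on every node
  selector:
    app: db                 # assumed pod label
  ports:
    - port: 5432            # service port inside the cluster
      targetPort: 5432      # PostgreSQL container port (assumed)
      nodePort: 31000       # fixed node port (must fall in the default 30000-32767 range)
```

Applying it with `kubectl apply -f db-service.yaml` and re-running `kubectl get all` should show the changed service type, as in the video.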
The next important step: log into our AWS console and add a firewall rule to the node, saying that we want to open port 31000 for my local machine. This is my AWS account; these are the currently running instances of my Kubernetes cluster. Select one of the nodes, go to Security, then to the security group. Here you can see all the firewall rules that have been applied to the security group. Add a new rule: Custom TCP, port 31000, and make the port available only for "My IP" (the console automatically fills in my IP address). Hit Save rules. Node port 31000 is now accessible from my local machine.

Back in pgAdmin, let's create a new server for the remote database. For the connection, the host address is the node's public IP address: go to Instances, click the node, open Details, and copy the public IPv4 address. Paste it into the pgAdmin connection dialog, set the port to 31000, enter the database name, and use the admin credentials. Hit Save. We are now connected to the remote instance, and we can access the database running in the Kubernetes cluster in AWS cloud, along with all the schemas used by our application.

Let's take a backup of this complete database. Hit Backup, give it the file name backup_remote, choose the encoding UTF-8, and set the role name to the superuser. Hit Backup; this starts the backup process ("Backing up an object on the server") and it shows as running. After a few seconds the backup completes successfully. Open More details and then the file location; you can see that the file backup_remote has been stored in this folder.
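For reference, the security-group rule added through the console above can also be created with the AWS CLI. A sketch, where the security-group ID is a placeholder and checkip.amazonaws.com is used to look up this machine's public IP:

```shell
# Allow inbound TCP 31000 only from this machine's public IP.
MY_IP=$(curl -s https://checkip.amazonaws.com)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 31000 \
  --cidr "${MY_IP}/32"
```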
We can also access this file from our pgadmin folder on the host: under pgadmin, under storage, we have backup_remote, the backup file. So we have successfully created a backup file from our remote database; now let's apply it to our local database. Select Restore, choose the file backup_remote, select the role name postgres (our superuser), and hit Restore. It reports that restoring the backup on the server completed successfully. Let's check whether we have all the data: refresh, go to Schemas, then Tables. It shows the list of tables; let's view the first hundred rows of the table called hub. It gives us the list of all the hubs stored in that table, which means we have successfully exported the data from our remote database and loaded it into our local instance. Thank you very much for watching, and see you in the next video.
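Answering the question in the comment above: the same round trip can indeed be done without pgAdmin, using pg_dump and pg_restore with the --host switch; pgAdmin essentially wraps these tools in a UI. A sketch, where the IP, credentials, and database names are placeholders:

```shell
# Dump the remote database over the NodePort in pg_dump's custom format...
pg_dump --host <node-public-ip> --port 31000 --username postgres \
        --format=custom --file=backup_remote.dump remotedb

# ...then restore it into the local database running in Docker.
pg_restore --host localhost --port 5432 --username postgres \
           --dbname localdb --clean backup_remote.dump
```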
Info
Channel: LumaBIT
Views: 782
Keywords: postgresql, postgresql tutorial, docker, postgres, pgadmin4 tutorial, pgadmin4, aws, kubernetes, aws security groups, aws ec2, kubernetes nodeport, docker bind mount, docker-compose, docker depends_on, kubernetes service types, kubernetes cluster, firewall rules, aws ec2 ip, sql, cloud, programming, dbms, pgadmin, sql tutorial, relational database, backup, postgres tutorial, pgadmin 4
Id: viuggqNLBcE
Length: 14min 30sec (870 seconds)
Published: Fri Jan 29 2021