Streaming replication setup on PostgreSQL 9.6

PostgreSQL streaming replication in 7 steps.

1. Create a replication user on the primary and grant it the REPLICATION and LOGIN privileges.

reporting=# CREATE ROLE replication WITH REPLICATION PASSWORD 'replUs3r';
CREATE ROLE
reporting=# alter user replication login;
ALTER ROLE

2. postgresql.conf changes on the primary

listen_addresses = '*'
wal_level = replica
max_wal_senders = 5 
wal_keep_segments = 32 
archive_mode = on
archive_command = 'test ! -f /var/untd/pgsql/archive/%f && cp %p /var/untd/pgsql/archive/%f'

listen_addresses controls which interfaces the server listens on; '*' allows connections from all hosts.

wal_level is minimal by default on 9.6; we need to set it to replica so that the PostgreSQL server logs enough information in the transaction logs (xlogs) to support a standby.

wal_keep_segments specifies the number of past log file segments to keep in the pg_xlog directory. If the standby falls more than this many segments behind, replication breaks because the primary may have already recycled the WAL files the standby still needs. Read More
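
The remaining steps run on the standby side. As a hedged sketch of what they look like on 9.6 (the addresses 192.0.2.10/192.0.2.20 and the data directory path are illustrative, not from the post):

# pg_hba.conf on the primary: allow the standby to connect for replication
host    replication    replication    192.0.2.20/32    md5

# On the standby: take a base backup from the primary
pg_basebackup -h 192.0.2.10 -U replication -D /var/lib/pgsql/9.6/data -X stream -P

# recovery.conf on the standby (9.6 still uses recovery.conf)
standby_mode = 'on'
primary_conninfo = 'host=192.0.2.10 user=replication password=replUs3r'
restore_command = 'cp /var/untd/pgsql/archive/%f %p'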

Oracle get column DDL from the dba_tab_columns dictionary view

Query to generate ALTER TABLE ADD column DDL from Oracle dictionary views:

There are situations where we need to generate ADD COLUMN DDL for a particular table, or for all tables in a schema.

Oracle data dictionary views dba_tab_columns/all_tab_columns store column details such as:

DATA_TYPE – column data type
DATA_LENGTH – length of the column
DATA_PRECISION – decimal precision for the NUMBER type
DATA_SCALE – digits to the right of the decimal point for the NUMBER type

The example below covers the following data types: Read More
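
A hedged sketch of such a query (the owner and table filters are illustrative, and the full post handles more data types):

SELECT 'ALTER TABLE ' || owner || '.' || table_name
       || ' ADD ' || column_name || ' ' || data_type
       || CASE
            WHEN data_type = 'NUMBER' AND data_precision IS NOT NULL
              THEN '(' || data_precision || ',' || data_scale || ')'
            WHEN data_type IN ('VARCHAR2', 'CHAR')
              THEN '(' || data_length || ')'
          END
       || ';' AS add_column_ddl
  FROM dba_tab_columns
 WHERE owner = 'SCOTT'
   AND table_name = 'EMP';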

Hadoop get active namenode

A few Hadoop commands to get NameNode information

List the namenode hostnames

$ hdfs getconf -namenodes
dw-master01.cstest.com dw-master02.cstest.com

Get nameservice name

$ hdfs getconf -confKey dfs.nameservices
nameservice1

Get active and standby namenodes

$ hdfs getconf -confKey dfs.ha.namenodes.nameservice1
namenode70,namenode95

$ hdfs haadmin -getServiceState namenode70
active
$ hdfs haadmin -getServiceState namenode95
standby

Read More
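
Putting these together, a small hedged shell sketch that reads the nameservice from the configuration and prints the HA state of each NameNode:

#!/bin/bash
# list every NameNode ID in the nameservice and query its HA state
ns=$(hdfs getconf -confKey dfs.nameservices)
for nn in $(hdfs getconf -confKey dfs.ha.namenodes."$ns" | tr ',' ' '); do
  echo "$nn: $(hdfs haadmin -getServiceState "$nn")"
done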

slony add tables to existing set

Steps to add new tables to Slony replication

To add a new table to an existing set in Slony, we create a new set containing the table and then merge it into the existing set.

  • Before adding the table to Slony, verify the following:
    • The table exists on the target node.
    • The table has a primary key. Slony does not work without one.
    • It is better to truncate the table on the target node before adding it to replication.
  • Verify the existing Slony configuration. We need to capture the following details:
    • Cluster name
    • Source/target node connection info
    • Existing set IDs and table IDs

The cluster name is the Slony schema name without the leading underscore; it is ‘testslony’ in our case.

Source and target node details come from the query below. Read More
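
As a hedged slonik sketch of the create-and-merge sequence described above (set and table IDs, node numbers, and conninfo strings are illustrative):

cluster name = testslony;
node 1 admin conninfo = 'dbname=appdb host=source-host user=slony';
node 2 admin conninfo = 'dbname=appdb host=target-host user=slony';

# create a temporary set on the origin and add the new table
create set (id = 2, origin = 1, comment = 'temp set for new table');
set add table (set id = 2, origin = 1, id = 10,
               fully qualified name = 'public.new_table');

# subscribe the target, then merge into the existing set 1
subscribe set (id = 2, provider = 1, receiver = 2, forward = no);
merge set (id = 1, add id = 2, origin = 1);

In practice the subscription of the new set must be fully active before the merge is allowed.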

Cloning Oracle Grid Infrastructure home and ASM installation

perl clone of an Oracle GI and ASM installation

This tutorial is meant for RHEL/CentOS environments and covers a few issues faced during grid software installation.

  • Tar the binaries and copy them over; make sure to bring down Oracle services before creating the tar.
# tar -czvf grid_home.tar.gz grid_home
$ scp grid_home.tar.gz node4:/u01/app/oracle/product/.
  • Partition disks required for ASM instance – follow procedure here
  • Create the oracle OS user under the dba group; this user will own the grid binaries.
# groupadd dba
# useradd -g dba oracle
  • Extract the binaries and set a few environment variables.
$ cd /u01/app/oracle/product
$ tar -xzvf grid_home.tar.gz
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/grid_home
export PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_SID=+ASM
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
  • Run the perl clone script (see the sketch below).

Read More
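
As a hedged sketch of the clone.pl invocation (the ORACLE_HOME_NAME and OS group values are illustrative; the full post covers the post-clone root scripts):

$ cd $ORACLE_HOME/clone/bin
$ perl clone.pl ORACLE_BASE=/u01/app/oracle \
    ORACLE_HOME=/u01/app/oracle/product/grid_home \
    ORACLE_HOME_NAME=OraGridHome1 \
    OSDBA_GROUP=dba OSOPER_GROUP=dba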

fix dataguard broker configuration

Steps to rebuild a Data Guard broker configuration

Issue with the dgmgrl configuration:

DGMGRL> show configuration;

Configuration - apps

  Protection Mode: MaxPerformance
  Databases:
    appsnew_dca - Primary database
      Error: ORA-16810: multiple errors or warnings detected for the database

    appsnew_vgs - Physical standby database
      Error: ORA-16810: multiple errors or warnings detected for the database

    apps_lax    - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
ERROR

Viewing the exact database error Read More
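
As a hedged sketch, drilling into one database shows the underlying errors, and if the configuration cannot be repaired it can be dropped and recreated from the primary (the connect identifiers are illustrative):

DGMGRL> show database verbose 'appsnew_vgs';
DGMGRL> remove configuration;
DGMGRL> create configuration 'apps' as primary database is 'appsnew_dca'
        connect identifier is appsnew_dca;
DGMGRL> add database 'appsnew_vgs' as connect identifier is appsnew_vgs
        maintained as physical;
DGMGRL> enable configuration;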

Copy an Oracle schema from one database to another

Using the expdp and impdp tools to back up a user/schema and restore it to another database.

Create a directory object which will be used by the expdp tool:

CREATE OR REPLACE DIRECTORY "BACKUP_DIR" AS '/u01/expdp';

Use expdp to take a full schema backup:

expdp \"/ as sysdba\" schemas=pin2 directory=backup_dir dumpfile=pin2.dmp logfile=pin2.log

Dump file set for SYS.SYS_EXPORT_SCHEMA_02 is:
  /u01/expdp/pin2.dmp
Job "SYS"."SYS_EXPORT_SCHEMA_02" successfully completed at 07:08:14

dev02:/u01/expdp> ls
pin2.dmp         pin2.log

Copy the dump file to the server where we need to restore. Create BACKUP_DIR on that server and copy the dump file into that directory.

CREATE OR REPLACE DIRECTORY "BACKUP_DIR" AS '/u01/expdp';
scp <user>@<source-host>:/u01/expdp/pin2.dmp /u01/expdp/.

Start importing the schema as pin3; we need to drop the existing schema first if it exists. Read More
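
As a hedged sketch of the import step (remapping pin2 to pin3; the log file name is illustrative):

impdp \"/ as sysdba\" schemas=pin2 remap_schema=pin2:pin3 directory=backup_dir dumpfile=pin2.dmp logfile=imp_pin3.log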

MySQL ibdata1 file recovery

Steps to recover data when the ibdata file gets corrupted or removed

ibdata1 is a really important file in MySQL because it contains the following data:

Data dictionary – metadata for InnoDB objects.

Doublewrite buffer – protection against a crash in the middle of writing a page to disk.

Rollback segments and MVCC data – undo logs, which are required if a transaction needs to be rolled back and are also used when another transaction needs to see the original data (consistent reads).

If we accidentally delete the ibdata file we definitely cannot rebuild it, but we can safely recover all the data onto a new MySQL instance.

  • First, shut down MySQL if it is already running:
mysqladmin -uroot -p shutdown
  • Let's try to start it up to see the error we get

Read More
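
As a hedged sketch of the overall recovery path, one common route is to force the server up, dump everything, and reload into a freshly initialized instance (the forced-recovery level and paths are illustrative; the full post walks through the errors first):

# /etc/my.cnf: let InnoDB start despite the damaged system tablespace
[mysqld]
innodb_force_recovery = 6

# with the server up, take a logical dump of all databases
mysqldump -uroot -p --all-databases > /u01/backup/all_databases.sql

# after re-initializing a fresh datadir and starting mysqld normally:
mysql -uroot -p < /u01/backup/all_databases.sql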

Migrate an Oracle database from non-ASM to ASM

Steps to migrate database files from local disk/SAN to ASM:

First create ASM disks and a diskgroup – here

We will migrate all data (datafiles, controlfiles, logfiles) to the DATA diskgroup.

Start up in mount mode and restore the SPFILE to ASM:

SYS @ test > startup mount

$ rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Fri Feb 8 11:15:51 2017

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: TEST (DBID=2104987387, not open)

RMAN> RESTORE SPFILE TO '+DATA/spfiletest.ora';

Starting restore at 08-FEB-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: restoring SPFILE
output file name=+DATA/spfiletest.ora
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/product/11.2.0/dbs/01o1et80_1_1
channel ORA_DISK_1: piece handle=/u01/app/oracle/product/11.2.0/dbs/01o1et80_1_1 tag=TAG20130208T111407
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:16
Finished restore at 08-FEB-17

Copy and switch database files to ASM Read More
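
As a hedged sketch of that step, an image-copy-and-switch in RMAN (the controlfile and online redo logs need their own steps, which the full post shows):

RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';
RMAN> SWITCH DATABASE TO COPY;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;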

MySQL innobackupex full and incremental backups

MySQL backup strategy using innobackupex/xtrabackup:

If we are planning to automate backups of our MySQL production instance, we can think of two approaches:

  1. A full backup followed by cumulative backups, i.e. all incremental backups taken with the last full backup as the base directory.
  2. A full backup followed by differential backups, i.e. all incremental backups taken with the last incremental/full backup as the base directory.

Both approaches have their own benefits and drawbacks.

The first approach needs a bit more backup storage, since each incremental is always taken from the last full backup checkpoint. The time required to take each incremental is also longer than in the second approach.

The main drawback of the second approach is that we spend more time during the restore process, mainly because we need to prepare the full backup by applying all the subsequent incrementals to it.

The first approach is definitely the better one, since it saves time when we need to restore the database to a particular point: we just apply the required incremental to the base backup.

Below are a few innobackupex examples:

Full backup:

/usr/bin/innobackupex --slave-info --user=${User} --password=${Password} --defaults-file=${DefaultsFile} --compress /u01/mysql/backup/fullbackup/

The --compress option compresses the backups and saves us at least half of the backup storage. --slave-info is useful while backing up a slave server; it generates the xtrabackup_slave_info file with the master's binlog position.

Incremental backups: Read More
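
As a hedged sketch of the incremental command (the base directory must point at the timestamped directory innobackupex created for the full backup; the placeholder is illustrative):

/usr/bin/innobackupex --slave-info --user=${User} --password=${Password} --defaults-file=${DefaultsFile} --incremental /u01/mysql/backup/incremental/ --incremental-basedir=/u01/mysql/backup/fullbackup/<timestamped-dir>/

For the cumulative approach, every incremental keeps pointing at the full backup's directory; for the differential approach, each new incremental points at the previous incremental's directory.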