Linux script to get schema difference using mysqldbcompare in MySQL

Complete script to get missing structural changes between two schemas in MySQL

Description:

The goal of this blog is to produce .sql files that we can import directly to apply the missing structural changes to a schema.

We will get queries for creating missing tables and columns.

We are going to use the mysqldbcompare utility to generate the basic difference file (.sql).

There are a few problems if you are planning to use mysqldbcompare alone to capture structural differences:

  • It generates wrong queries for alters containing add index, change index, or add foreign key constraints.
  • For missing tables we just get the table names.

Setup description:

We have both the perfect schema and the dirty schema on the same server.

We will use the perfect schema as a template against which the dirty schema is compared to find the missing changes.

We are not comparing data consistency, so we take structure-only dumps of both schemas and import them into our testing server, as we don't want mysqldbcompare to slow down our business servers.
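A rough sketch of these steps is below; the schema names (perfect_db, dirty_db), users, host names and file names are placeholders, not the actual values used in the final script:

# 1. Take structure-only dumps from the business servers
mysqldump -u backup_user -p --no-data --routines --triggers perfect_db > perfect_db_structure.sql
mysqldump -u backup_user -p --no-data --routines --triggers dirty_db > dirty_db_structure.sql

# 2. Import both dumps into the testing server
mysql -u root -p -h test_server -e "create database perfect_db; create database dirty_db;"
mysql -u root -p -h test_server perfect_db < perfect_db_structure.sql
mysql -u root -p -h test_server dirty_db < dirty_db_structure.sql

# 3. Generate the basic difference file with mysqldbcompare on the testing server
mysqldbcompare --server1=root:password@test_server --difftype=sql --skip-data-check --run-all-tests perfect_db:dirty_db > basic_diff.sql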

Getting into the script: Read More

Upgrade MySQL 5.6 to 5.7

Complete steps to upgrade MySQL 5.6 to MySQL 5.7

Description of current setup:

We are currently using MySQL 5.6.34 on CentOS 7.

The upgrade version we have chosen is MySQL 5.7.21.

Step 1:

Take a dump of the mysql schema and the other important schemas (in case you need to downgrade to the current version).
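For example, something along these lines (the user, paths and list of schemas are placeholders):

# Dump the mysql schema plus any other schemas you may need for a downgrade
mysqldump -u root -p --databases mysql > /backup/mysql_schema_5.6.34.sql
mysqldump -u root -p --databases app_db1 app_db2 > /backup/important_schemas_5.6.34.sql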

Step 2:

Stop MySQL service:

service mysql stop;

Step 3:

Remove your current MySQL rpms:

First, list all installed MySQL rpms:

yum list installed|grep -i "mysql"

Then you can remove all rpms having the string 5.6 in their names. Read More
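A rough sketch of that removal step (review the package list before running the removal; exact package names depend on how MySQL 5.6 was installed):

# List the 5.6 packages first, then remove them
yum list installed | grep -i mysql | grep "5\.6"
yum list installed | grep -i mysql | grep "5\.6" | awk '{print $1}' | xargs yum remove -y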

MySQL ibdata1 file recovery

Steps to recover data when the ibdata file gets corrupted or removed

ibdata1 is a really important file in MySQL because it contains the following data –

Data dictionary – metadata for InnoDB-related objects.

Doublewrite buffer – protection against a crash in the middle of writing a block to disk.

Rollback segments and MVCC data – undo logs, which are required if any transaction needs to be rolled back, and also useful if another transaction needs to see the original data (consistent reads).

If we accidentally drop the ibdata file, we definitely cannot rebuild it, but we can safely recover all the data onto a new MySQL instance.

  • First, shut down MySQL if it's already running:
mysqladmin -uroot -p shutdown
  • Let's try to start it up to see the error we get:

Read More

MySQL innobackupex full and incremental backups

MySQL backup strategy using innobackupex/xtrabackup:

If we are planning to automate backups of our MySQL production instance, we can think of two approaches:

  1. A full backup followed by cumulative incremental backups, i.e. all incremental backups use the last full backup as the base directory.
  2. A full backup followed by differential incremental backups, i.e. each incremental backup uses the last incremental/full backup as the base directory.

Both approaches have their own benefits and drawbacks.

The first approach needs a little more backup storage, since we always perform the incremental backup from the last full backup checkpoint. The time required to perform an incremental backup is also higher compared to the second approach.

The main drawback of the second approach is that we spend more time during the backup restore process, since we need to prepare the full backup by applying all the subsequent incrementals to it.

The better approach is definitely the first one, since it saves time when we need to restore the database to a particular point: we just need to apply the required incremental to the base backup.
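As a quick sketch of the difference between the two approaches (paths are placeholders and credential/defaults-file options are omitted for brevity), the only change is which directory each incremental uses as its --incremental-basedir:

# Full backup taken once, e.g. on Sunday
innobackupex --no-timestamp /backup/full

# Approach 1 (cumulative): every incremental is based on the last full backup
innobackupex --no-timestamp --incremental /backup/inc_mon --incremental-basedir=/backup/full
innobackupex --no-timestamp --incremental /backup/inc_tue --incremental-basedir=/backup/full

# Approach 2 (differential): every incremental is based on the previous backup
innobackupex --no-timestamp --incremental /backup/inc_mon --incremental-basedir=/backup/full
innobackupex --no-timestamp --incremental /backup/inc_tue --incremental-basedir=/backup/inc_mon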

Below are a few innobackupex examples:

Full Backup :

/usr/bin/innobackupex --slave-info --user=${User} --password=${Password} --defaults-file=${DefaultsFile} --compress /u01/mysql/backup/fullbackup/

We are using the --compress option, which compresses our backups and saves us at least 2x in backup storage. The --slave-info option is useful while backing up a slave server; it generates the xtrabackup_slave_info file with the master information.

Incremental Backups : Read More

xtrabackup fails with error number 23

innobackupex fails with the error below:

InnoDB: Operating system error number 23 in a file operation.
InnoDB: Error number 23 means 'Too many open files in system'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/operating-system-error-codes.html
InnoDB: Error: could not open single-table tablespace file

Issue :

innobackupex is failing due to the server's restriction on open file descriptors for the 'mysql' user.

[/mysql/data] ulimit -Hn
32000

The ulimit hard limit is only set to 32k. We can estimate the ulimit required by the mysql user by looking at the number of partitions/tables we have.

[/mysql/data] find . -name "*.ibd"|wc -l
185293

Or

mysql> select count(*) from information_schema.partitions;
+----------+
| count(*) |
+----------+
|   185433 |
+----------+
1 row in set (1.68 sec)

Fix :

Read More
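For reference, one common way to raise the limit for the mysql user looks like the sketch below (the values are illustrative only):

# /etc/security/limits.conf
mysql soft nofile 1024000
mysql hard nofile 1024000

# /etc/my.cnf: ask mysqld to request a matching number of descriptors
[mysqld]
open_files_limit = 1024000

# On systemd-managed hosts, LimitNOFILE in the mysqld service unit may also need raising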

MySQL Galera WSREP_SST: [ERROR] xtrabackup_checkpoints missing

Adding a new node to Galera fails with the following error:

WSREP_SST: [INFO] Waiting for SST streaming to complete! (20160607 12:21:20.988)
160607 12:21:23 [Note] WSREP: (3d7453d3, 'tcp://0.0.0.0:4567') turning message relay requesting off
WSREP_SST: [ERROR] xtrabackup_checkpoints missing, failed innobackupex/SST on donor (20160607 12:21:30.966)
WSREP_SST: [ERROR] Cleanup after exit with status:2 (20160607 12:21:30.968)
160607 12:21:30 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '10.103.34.27' --auth 'sstuser:dev2' --datadir '/var/untd/mysql/data/' --defaults-file '/etc/my.cnf' --parent '4858': 2 (No such file or directory)
160607 12:21:30 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
160607 12:21:30 [ERROR] WSREP: SST failed: 2 (No such file or directory)
160607 12:21:30 [ERROR] Aborting

160607 12:21:30 [Warning] WSREP: 1 (hadoop-dn1qa.cs1cmates.com): State transfer to 0 (node2) failed: -22 (Invalid argument)
160607 12:21:30 [ERROR] WSREP: gcs/src/gcs_group.cpp:long int gcs_group_handle_join_msg(gcs_group_t*, const gcs_recv_msg_t*)():717: Will never receive state. Need to abort.
160607 12:21:30 [Note] WSREP: gcomm: terminating thread
160607 12:21:30 [Note] WSREP: gcomm: joining thread
160607 12:21:30 [Note] WSREP: gcomm: joining thread
160607 12:21:30 [Note] WSREP: gcomm: closing backend
160607 12:21:31 [Note] WSREP: gcomm: closed
160607 12:21:31 [Note] WSREP: /usr/sbin/mysqld: Terminated.
160607 12:21:32 mysqld_safe mysqld from pid file /var/untd/mysql/log/mysqld.pid ended

Fix :

Read More

MySQL secure installation for a non-default socket.

SECURING MYSQL SERVER –

mysql_secure_installation is a script useful for:

  • setting the root password
  • disallowing remote root login
  • removing anonymous user accounts created during the first installation
  • removing the test database, which can be accessed by any user

The above script works with the default socket location, i.e. /var/lib/mysql/mysql.sock.

If the server is started with a non-default socket location, mysql_secure_installation does not consider any socket options you specify, and the script fails (for example: mysql_secure_installation --socket=/mysql/socket/mysql.sock).

We should also consider this situation if we need to install more than one instance of MySQL on a single server. Read More
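One possible workaround (a sketch only, assuming the socket path from the example above) is to temporarily point the default socket path at the real socket before running the script:

# Create the default socket path as a symlink to the actual socket
mkdir -p /var/lib/mysql
ln -s /mysql/socket/mysql.sock /var/lib/mysql/mysql.sock
mysql_secure_installation
# Remove the symlink afterwards if it is not wanted
rm /var/lib/mysql/mysql.sock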

Partition maintenance script for MySQL

A simple way of adding future partitions and purging old partitions is through MySQL stored procedures. We can then use any scheduler or a cron job to call the procedure daily.

Here I will provide a sample procedure for maintenance of a daily-partitioned table based on a datetime column. Please do look at my previous post on MySQL range partitioning; I have taken the example from that post and modified it based on our requirement.

First we will create a sample table along with a partition for our testing purposes.

mysql> create table test_partitions(id int,name varchar(50),tx_date datetime);
mysql> alter table test_partitions PARTITION BY RANGE (to_seconds(tx_date)) (
partition test_partitions_16092015 values less than ( to_seconds('2015-09-17 00:00:00') ));

Now let's create a table holding the list of tables and the number of partitions to preserve and pre-create.

mysql> create table test.part_tables(dbname varchar(50),tabname varchar(50),numpart int,nextpart int);

Let's go ahead and insert a sample row there:

mysql> insert into test.part_tables values('test','test_partitions',30,7);

So, we will preserve only the last 30 days' partitions and make sure we have the next 7 days' partitions for the 'test_partitions' table.

MySQL Procedure 1 (part_maint): this procedure takes the table list from our meta-table, i.e. part_tables, and runs partition maintenance for each table:
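As a rough sketch (not the actual procedure), the outer loop could look like the following, where part_maint_table is a hypothetical per-table routine that drops expired partitions and pre-creates future ones:

delimiter //
create procedure test.part_maint()
begin
  declare done int default 0;
  declare v_db varchar(50);
  declare v_tab varchar(50);
  declare v_keep int;
  declare v_next int;
  -- cursor over the meta-table listing all partitioned tables to maintain
  declare cur cursor for
    select dbname, tabname, numpart, nextpart from test.part_tables;
  declare continue handler for not found set done = 1;

  open cur;
  maint_loop: loop
    fetch cur into v_db, v_tab, v_keep, v_next;
    if done = 1 then
      leave maint_loop;
    end if;
    -- hypothetical per-table routine: keep the last v_keep daily partitions
    -- and pre-create partitions for the next v_next days
    call test.part_maint_table(v_db, v_tab, v_keep, v_next);
  end loop maint_loop;
  close cur;
end//
delimiter ;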

Read More

MySQL range partitioning on a date column

Here is a simple example of partitioning a MySQL table based on a datetime column:

  • Creating a test table
mysql> create table test_partitions(id int,name varchar(50),tx_date datetime);
Query OK, 0 rows affected (11.83 sec)
  • Creating an initial partition:
alter table test_partitions PARTITION BY RANGE (to_seconds(tx_date)) (
partition test_partitions_16052016 values less than ( to_seconds('2016-05-17 00:00:00') ));
  • Adding a new partition:
mysql> alter table test_partitions add partition ( partition test_partitions_17052016 values less than ( to_seconds('2016-05-18 00:00:00') ));
Query OK, 0 rows affected (10.87 sec)
Records: 0 Duplicates: 0 Warnings: 0
  • Verifying data in a specific partition (see the sketch below)
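As a quick sketch (the inserted row is illustrative, not from the original post), MySQL 5.6 and later allow selecting from a named partition directly:
mysql> insert into test_partitions values (1,'test','2016-05-17 10:00:00');
mysql> select * from test_partitions partition (test_partitions_17052016);
mysql> explain partitions select * from test_partitions where tx_date = '2016-05-17 10:00:00';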

Read More