Feed aggregator

Galera Cluster VS PXC VS MariaDB Galera Cluster - Benchmarking

Abdel-Mawla Gharieb - Thu, 2014-08-07 15:36

Many MySQL users are not aware that Percona XtraDB Cluster (PXC) and MariaDB Galera Cluster depend on the same Galera library that is used in Galera Cluster for MySQL, which is provided by the Codership team:

  • Galera Cluster: MySQL Server (by Oracle) + Galera library.
  • Percona XtraDB Cluster: Percona Server + Galera library.
  • MariaDB Galera Cluster: MariaDB Server + Galera library.

But the question is: are there any performance differences between the three of them?

Let's find out by running a simple benchmark that tests MySQL write performance on Galera Cluster, PXC and MariaDB Galera Cluster installations.

System Information:
HW configurations (AWS Servers):
Cluster node servers:
  • CPU: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz (# of cores 8, # of threads 16, HT enabled).
  • Memory: 16GB RAM.
  • Storage: HDD 120GB/ 5400RPM.
Load balancer Server HW configurations:
  • CPU: Intel(R) Xeon(R) CPU E5-2651 v2 @ 1.80GHz (# of cores 4, # of threads 8, HT enabled).
  • Memory: 16GB RAM.
  • Storage: HDD 10GB/ 5400RPM.
Load generator Server HW configurations:
  • CPU: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz (# of cores 16, # of threads 32, HT enabled).
  • Memory: 32GB RAM.
  • Storage: HDD 10GB/ 5400RPM.
Software configurations:
  • OS : Red Hat Enterprise Linux Server release 6.5 (Santiago)
  • Sysbench : 0.5.3
  • GLB : 1.0.0
  • Galera Cluster : 5.5.34 and 5.6.16
  • Percona XtraDB Cluster : 5.5.37 and 5.6.19
  • MariaDB Galera Cluster : 5.5.38 and 10.0.12
  • Galera Library : 3.5
Test Information:
  • The testing environment consists of 5 AWS servers: three servers for a three-node cluster (one node per server), one server for the load balancer (GLB) and one server for the load generator, on which sysbench is installed to send requests to the load balancer.
  • Sysbench command (a sketch of the corresponding prepare step follows after this list): sysbench --num-threads=64 --max-requests=1000 --db-driver=mysql --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-table-engine=InnoDB --mysql-user=dev --mysql-password='test' --mysql-host=load_balancer_ip run
  • Table structure which was used by the sysbench tests:
    mysql> show create table sbtest.sbtest\G
    CREATE TABLE `sbtest` (
      `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
      `k` int(10) unsigned NOT NULL DEFAULT '0',
      `c` char(120) NOT NULL DEFAULT '',
      `pad` char(60) NOT NULL DEFAULT '',
      PRIMARY KEY (`id`),
      KEY `k` (`k`)
    ) ENGINE=InnoDB AUTO_INCREMENT=8574 DEFAULT CHARSET=latin1
  • The my.cnf used is something like:
    [mysqld]
    key_buffer_size                = 16M
    max_allowed_packet             = 16M
    thread_stack                   = 192K
    thread_cache_size              = 8
    innodb_buffer_pool_size        = 8G
    innodb_flush_log_at_trx_commit = 0
    expire_logs_days               = 10
    max_binlog_size                = 100M
    server-id                      = 1
    log-bin                        = mysql-bin
    binlog_format                  = ROW
    auto_increment_increment       = 3
    auto_increment_offset          = 1
    log_slave_updates
    default_storage_engine         = InnoDB
    # Path to Galera library
    wsrep_provider                 = /usr/lib64/galera/libgalera_smm.so
    # Cluster connection URL contains the IPs of node#1, node#2 and node#3
    wsrep_cluster_address          = gcomm://nodeB-IP,nodeC-IP
    innodb_autoinc_lock_mode       = 2
    # Node #1 address
    wsrep_node_address             = nodeA-IP
    # Cluster name
    wsrep_cluster_name             = test_cluster
    # SST method
    wsrep_sst_method               = rsync
    # Authentication for SST method
    wsrep_sst_auth                 = "sst:password"
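The run command above assumes the sbtest table has already been created and populated. As a minimal sketch (not part of the original post; the table size value is illustrative), the sysbench 0.5 prepare step run once before the benchmark would look something like this:

shell> sysbench --db-driver=mysql --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-table-engine=InnoDB --mysql-user=dev --mysql-password='test' --mysql-host=load_balancer_ip --oltp-table-size=10000 prepare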

Notes:

  • The number of threads used in this test is 64 as it generated the highest throughput on all cluster installations.
  • Each throughput value for each test case is the average of ten (10) executions (a small shell sketch of such a loop follows this list).
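A minimal sketch of how such an average could be collected (not the script actually used for the benchmark; it assumes the "transactions: N (X per sec.)" line that sysbench 0.5 prints, so the parsing may need adjusting):

shell> for i in $(seq 1 10); do
         sysbench --num-threads=64 --max-requests=1000 --db-driver=mysql \
           --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-table-engine=InnoDB \
           --mysql-user=dev --mysql-password='test' --mysql-host=load_balancer_ip run \
         | grep "transactions:" | awk '{print $3}' | tr -d '('
       done | awk '{ sum += $1 } END { print "average tps:", sum/NR }'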
Testing Results:

The raw results in Transactions / Sec might be useful:


sync_binlog = 0

innodb_flush_log_at_trx_commit | Galera Cluster 5.5.34 | PXC 5.5.37 | MariaDB Galera Cluster 5.5.38 | Galera Cluster 5.6.16 | PXC 5.6.15 | MariaDB Galera Cluster 10.0.12
0                              | 525.119               | 534.022    | 534.249                       | 519.575               | 532.19     | 520.736
1                              | 125.615               | 131.748    | 341.384                       | 157.001               | 162.783    | 174.97
2                              | 526.761               | 528.858    | 524.039                       | 511.817               | 526.06     | 521.024

sync_binlog = 1

innodb_flush_log_at_trx_commit | Galera Cluster 5.5.34 | PXC 5.5.37 | MariaDB Galera Cluster 5.5.38 | Galera Cluster 5.6.16 | PXC 5.6.15 | MariaDB Galera Cluster 10.0.12
0                              | 242.201               | 249.622    | 262.516                       | 220.313               | 229.807    | 220.97
1                              | 96.829                | 96.759     | 148.815                       | 111.995               | 114.8      | 113.056
2                              | 224.476               | 210.904    | 217.142                       | 209.139               | 201.596    | 214.311
Conclusion

According to the above results:

  • innodb_flush_log_at_trx_commit = 1 significantly slows down Galera.
  • sync_binlog = 1 also roughly cuts the throughput in half.
  • Otherwise, all three products deliver more or less the same throughput (a quick illustration of adjusting these two settings at runtime follows below).
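Both variables are dynamic in MySQL 5.5/5.6, so the durability vs. throughput trade-off can be tried out at runtime before persisting the choice in my.cnf (a quick illustration only, not part of the original benchmark):

SQL> SET GLOBAL innodb_flush_log_at_trx_commit = 2;
SQL> SET GLOBAL sync_binlog = 0;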

FromDual invites MySQL community to company meeting 2014 in Barcelona

FromDual.en - Wed, 2014-08-06 14:29

FromDual holds its annual company meeting this year in Barcelona, Spain.

We are pleased to invite everybody interested in MySQL technologies (MySQL, Galera Cluster, FromDual Tools, Percona Cluster, MariaDB, etc.) to participate on Thursday evening September 11 at the HCC MONTBLANC, Via Laietana 61, to meet, exchange ideas and discuss MySQL related topics.

The event starts at 18:00; we will meet in the hotel lobby. The planned schedule is:

  • How to Implement GTID Replication in MySQL 5.6 (25') and 5' Questions and Answers.
  • MySQL backup/restore for anonymized exports (25') and 5' Questions and Answers.
  • Break 15 min
  • Quick presentation (15') of YOUR project or company.
  • Quick presentation (15') of YOUR project or company.
  • Break 15 min
  • Quick presentation (15') of YOUR project or company.
  • Quick presentation (15') of YOUR project or company.

Please feel free to send us your suggestion for a presentation. Any technical or non-technical MySQL related topic is welcome: for example, how you use MySQL in your company, special problems you have faced and solved (or not yet solved), research work you have done on MySQL products, business cases you solve with MySQL products, evaluations or experience you have gained, etc. For the proposal, please send us a mail.

Please also send us a short note if you plan to participate, or join us via MeetUp.

This allows us to arrange and organize the infrastructure with the hotel.

The event is free of charge for all participants.

We would be pleased if you can make it to the event,
Your FromDual Team

FromDual: Tools for MySQL and Galera - Backup - Monitoring - Operations

FromDual.en - Sun, 2014-07-27 10:46

FromDual tools provide valuable additional functionality that facilitates and optimizes the daily operation of your MySQL databases. Since our last newsletter, a lot has changed in the FromDual tools.


Numerous improvements and customer suggestions have been incorporated into the MySQL Environment (MyEnv). The most important changes concern the MySQL Backup Manager (mysql_bman).


With the MySQL Ops Center we meet the wishes of our numerous customers who want a graphical user interface for operating complex MySQL environments.
These users are often not very experienced with MySQL, but nevertheless want to operate more complex MySQL installations such as master/slave or master/master replication.


In the MySQL Performance Monitor (mpm), numerous small bugs reported to us by our customers have been fixed.


Note: The usage of and support for our tools is included in our MySQL Service Contracts, Business Hour (5x9) and All around the Clock (7x24).
If you would like to know more about our service prices, we would be pleased to send you an offer.



MyEnv v1.0.5

The MySQL Environment (MyEnv) is becoming more and more popular in the MySQL ecosystem. MyEnv is modelled on TVD BasEnv, which is popular with larger Oracle database users, and is optimized for MySQL.


With MyEnv you can easily consolidate several MySQL instances (mysqld) on a single machine; thanks to MyEnv, this otherwise complicated configuration becomes a piece of cake. Furthermore, MyEnv is increasingly popular with customers who test their applications against different MySQL versions (5.5, 5.6 and 5.7) or different MySQL branches (Galera Cluster, MariaDB, Percona Server).


The most important improvements in MyEnv v1.0.5 are:

  • Old PHP functions were replaced to achieve better compatibility with PHP 5.4 and 5.5.
  • MyEnv overview (up) of installed MySQL instances was polished and numerous smaller bugs were fixed.
  • Extensions for active/passive fail-over clusters and Oracle Enterprise Monitor Agents for MySQL were integrated.
  • The user guidance of the MyEnv installer was made more user friendly.
  • Problems of MyEnv with SuSE Linux Enterprise Server (SLES) were removed.
  • The tools for MySQL Partitions were extended and improved.

You can find all improvements in detail in the Release Notes.


Here you can download MyEnv.



MySQL Backup Manager v1.0.5

The MySQL Backup Manager (mysql_bman) is currently attracting the most interest from our customers. It significantly simplifies MySQL backups of all different types.


At this point we would like to quote a MySQL user:

"MySQL Backup Manager is a very nice tool! Congratulations for FromDual! I made my own shell script for catalog and maintained backups by xtrabackup, but mysql_bman is the best! Xtrabackup + mysql_bman!!!"


In mysql_bman version v1.0.5 the following improvements were integrated:

  • Security improvements (password is not exposed any more).
  • Every instance can be tagged with a name and uniquely identified.
  • The MySQL Backup Manager now also considers the MySQL configuration file ~/.my.cnf (see the example after this list).
  • The compression of backups can be disabled to support de-duplicating drives.
  • The option --no-memory-table-check was introduced to allow inconsistent backups with MEMORY tables.
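As a reminder of what mysql_bman can now pick up, a minimal ~/.my.cnf might look like the following (user name and password are purely illustrative):

[client]
user     = backup
password = secret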

Download (included in MyEnv).



MySQL Ops Center v0.2

Our MySQL customers have repeatedly asked for a simple user interface to operate and administer many MySQL databases. This is why FromDual has launched the MySQL Ops Center.
The Ops Center can centrally operate and control complex MySQL configurations such as master/slave or master/master set-ups, and can monitor, start, stop and reconfigure the replication.
With the MySQL Ops Center you can also easily start and stop virtual IPs and move them to another host (a manual sketch of what this automates follows after the feature list below).


The most important features that went into the first public preview release of the MySQL Ops Center v0.2 are:

  • Starting and stopping of MySQL databases on remote machines by a central management console.
  • Starting and stopping of the MySQL replication.
  • Starting and stopping of a virtual IP (VIP).
  • Fail-over of VIP from active master to slave (master/slave replication) or passive master (master/master replication).
  • Configuration of the master/slave replication.
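For illustration only (this is roughly what the Ops Center automates, not its implementation), moving a virtual IP by hand on Linux typically boils down to commands like these; the interface name and address are assumptions:

shell> ip addr add 192.168.1.100/24 dev eth0    # bring the VIP up on the new node
shell> arping -c 3 -U -I eth0 192.168.1.100     # announce the move to the network (gratuitous ARP)
shell> ip addr del 192.168.1.100/24 dev eth0    # remove the VIP from the old node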

The MySQL Ops Center can be downloaded here. Further information can be found at MySQL Ops Center.



MySQL Performance Monitor v0.9.3

The MySQL Performance Monitor (mpm) was optimized in many places. Furthermore, known bugs were fixed and the mpm agent was made ready for the newest Zabbix version v2.2:

  • Bugs related to sha/sha1 encryption were fixed.
  • A stopped database is better detected now.
  • DRBD information was improved.
  • New behaviour of zabbix_senders in Zabbix v2.2 is handled correctly now.
  • New measuring points were added (Galera Cluster) and wrong ones fixed.

You can download the latest version of the MySQL Performance Monitor here. For more information about the manual installation, just follow the steps in the installation guide. To check all changes and improvements of the MySQL Performance Monitor, check out the Release Notes.


FromDual Performance Monitor for MySQL 0.9.3 has been released

FromDual.en - Wed, 2014-07-09 12:25

FromDual is pleased to announce the release of the new version 0.9.3 of its popular Database Performance Monitor for MySQL, Galera Cluster, MariaDB and Percona Server, mpm.

This release contains various minor bug fixes and improvements.

You can download mpm from here.

In the unlikely case that you find a bug in mpm, please report it to our Bugtracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

New installation of mpm v0.9.3

Please follow our mpm installation guide.

Upgrade from 0.x to 0.9.3:

# cd /download
# tar xf mysql_performance_monitor-0.9.3.tar.gz
# cd /opt
# tar xf /download/mysql_performance_monitor_agent-0.9.3.tar.gz
# rm -f mpm
# ln -s mysql_performance_monitor_agent-0.9.3 mpm

No other upgrade requirements are known.

Changes in mpm v0.9.3

mpm agent
  • Typos fixed.
  • Kill trap reports to the log file as well now.
mpm agent and MaaS
  • Example for timeshift feature added to configuration template.
MySQL module
  • DB down not detected (bug #27/#138).
InnoDB module
  • InnoDB Status module: SHA fix (bug #139).
Master module
  • Missing values in cache file fixed.
mpm templates for Zabbix
  • No changes.

Replication Troubleshooting - Classic VS GTID

Abdel-Mawla Gharieb - Fri, 2014-07-04 15:05

In previous posts, I talked about how to set up MySQL replication: Classic Replication (based on binary log information) and Transaction-based Replication (based on GTID). In this article I'll summarize how to troubleshoot the most common MySQL replication issues, with a simple comparison of how to solve them in the two replication methods (Classic vs. GTID).

There are two main operations we might need to do in a replication setup:

  • Skip or ignore a statement that causes the replication to stop.
  • Re-initialize a slave when the replication is broken and cannot be started anymore.
Skip or Ignore statement

Basically, the slave should always be synchronized with its master and hold the same copy of the data. But for several reasons there might be inconsistencies between the two (unsafe statements in SBR, a slave that is not read_only and was modified outside of the replication stream, etc.), which cause errors and stop the replication, e.g. if the master inserted a record which already existed on the slave (duplicate entry) or updated/deleted a row which did not exist on the slave.

To solve this issue, we either reverse what was done on the slave (e.g. delete the inserted rows), if the mistake is known, or we skip executing those statements on the slave and continue the replication (I'll focus on skipping a statement in this post, as it requires different handling in Classic and GTID replication).

Sample error messages (from SHOW SLAVE STATUS output):

Last_SQL_Error: Could not execute Write_rows event on table test.t1; Duplicate entry '4' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.000304, end_log_pos 285
Last_SQL_Error: Could not execute Update_rows event on table test.t1; Can't find record in 't1', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000304, end_log_pos 492
Last_SQL_Error: Could not execute Delete_rows event on table test.t1; Can't find record in 't1', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000304, end_log_pos 688

How to solve that issue?
CLASSIC REPLICATION

Solving this problem is a straightforward process in the classic replication setup; all we need is to issue the following SQL commands on the slave:

SQL> SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;
SQL> START SLAVE;
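After starting the slave, it is worth checking that both replication threads are running again, for example (output shortened; these are standard SHOW SLAVE STATUS fields):

SQL> SHOW SLAVE STATUS\G
...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...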
GTID REPLICATION

Solving this problem in GTID replication is not as straightforward as in classic replication, and the variable SQL_SLAVE_SKIP_COUNTER won't be useful here anymore.

To get this problem solved in a GTID replication we will need to inject an empty transaction as follows:

  • Check which transaction is causing the problem:
    SQL> SHOW SLAVE STATUS\G
    .
    .
    Retrieved_Gtid_Set: b9b4712a-df64-11e3-b391-60672090eb04:1-7
    Executed_Gtid_Set: 4f6d62ed-df65-11e3-b395-60672090eb04:1,
    b9b4712a-df64-11e3-b391-60672090eb04:1-6
    Auto_Position: 1

    Retrieved_Gtid_Set shows the GTIDs retrieved from the master.

    Executed_Gtid_Set shows the GTIDs executed on the slave.

    According to the above output, the slave retrieved GTIDs up to transaction 7 (b9b4712a-df64-11e3-b391-60672090eb04:1-7) but executed only up to transaction 6 (b9b4712a-df64-11e3-b391-60672090eb04:1-6), so the problem is in transaction number 7.

  • Inject an empty transaction:
    SQL> SET GTID_NEXT='b9b4712a-df64-11e3-b391-60672090eb04:7';
    SQL> BEGIN; COMMIT;
    SQL> SET GTID_NEXT='AUTOMATIC';
    SQL> START SLAVE;

    BE CAUTIOUS: The first part of Executed_Gtid_Set (4f6d62ed-df65-11e3-b395-60672090eb04:1) contains the locally executed GTIDs (not received from the master), while the second part (b9b4712a-df64-11e3-b391-60672090eb04:1-6) contains the executed GTIDs that were retrieved from the master (check the master's UUID either by looking at the UUID value in "Retrieved_Gtid_Set", which is the master's UUID, or by issuing SHOW GLOBAL VARIABLES LIKE 'server_uuid'; on the master server). So we have to make sure that we use the master's UUID when injecting the empty transaction; otherwise the problem will remain and the slave won't start. (A small helper query for computing the missing transactions is sketched after this list.)
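As a convenience, MySQL 5.6 also provides the GTID_SUBTRACT() function, which can compute the retrieved-but-not-yet-executed set directly (the GTID sets below are the ones from the example above):

SQL> SELECT GTID_SUBTRACT('b9b4712a-df64-11e3-b391-60672090eb04:1-7',
                          'b9b4712a-df64-11e3-b391-60672090eb04:1-6') AS missing;
-- returns: b9b4712a-df64-11e3-b391-60672090eb04:7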

Note:

After starting the slave successfully in either classic or GTID replication, we might need to use a combination of the Percona tools pt-table-checksum and pt-table-sync to fix the remaining data inconsistency.
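A typical invocation could look like the following sketch (host names, credentials and the checksum table are illustrative; always review the changes with --print before running --execute):

shell> pt-table-checksum --replicate=percona.checksums h=master_host,u=dev,p='test'
shell> pt-table-sync --replicate percona.checksums --sync-to-master h=slave_host,u=dev,p='test' --print
shell> pt-table-sync --replicate percona.checksums --sync-to-master h=slave_host,u=dev,p='test' --execute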

Re-initialize/ re-build a slave

For many reasons, we might end up having to rebuild a slave to get the replication working again, e.g. if we stopped a slave for so long that the master purged a binary log file the slave still needs, or if there are so many duplicate entry errors that pt-table-checksum and pt-table-sync cannot be used. In that case, we have to re-initialize the slave from scratch by taking a fresh backup from the master server and restoring it on the slave. Let's check how we can do that in both replication methods.

How to solve that issue?
CLASSIC REPLICATION
Sample error message:
Last_IO_Errno: 1236 Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'

Fix steps:

  • Backup the master server by the following command: shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --master-data=2 --flush-logs --triggers --routines --events --hex-blob >/path/to/backupdir/full_backup-$TIMESTAMP.sql
  • Restore the backup file on the slave: shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
  • Get the binary logs information when the backup was taken: shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql|grep "CHANGE MASTER TO" CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=120;
  • Issue the "CHANGE MASTER TO" command using the new information: SQL> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=120;
  • Start the slave: SQL> START SLAVE;

NOTE:

Xtrabackup tool could be used instead of mysqldump,especially, if the database size is big. Check out this link for more information.

GTID REPLICATION
Sample error message:
Last_IO_Errno: 1236 Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'

Fix steps:

  • Backup the master server by the following command: shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --flush-logs --triggers --routines --events --hex-blob >/path/to/backupdir/full_backup-$TIMESTAMP.sql
  • Check the GTID value when the backup was taken: shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql|grep PURGED SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-8';
  • Reset the GTID_EXECUTED and GTID_PURGED values on the slave: SQL> RESET MASTER;
  • Restore the backup file on the slave: shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
  • Make sure that the values of GTID_EXEUCTED and GTID_PURGED are the correct ones: SQL> SHOW GLOBAL VARIABLES LIKE 'gtid_executed'; +---------------+------------------------------------------+ | Variable_name | Value | +---------------+------------------------------------------+ | gtid_executed | b9b4712a-df64-11e3-b391-60672090eb04:1-8 | +---------------+------------------------------------------+ 1 row in set (0.00 sec) SHOW GLOBAL VARIABLES LIKE 'gtid_purged'; +---------------+------------------------------------------+ | Variable_name | Value | +---------------+------------------------------------------+ | gtid_purged | b9b4712a-df64-11e3-b391-60672090eb04:1-8 | +---------------+------------------------------------------+ 1 row in set (0.01 sec)
  • Start the slave: SQL> START SLAVE;

NOTES:

  • If we didn't reset the GTID_EXECUTED and GTID_PURGED values on the slave before restoring the backup file, the following error will be appeared:
    shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql. ERROR 1840 (HY000): @@GLOBAL.GTID_PURGED can only be set when @@GLOBAL.GTID_EXECUTED is empty.

    The above error indicates that the statement at the beginning of the backup file - which is "SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-8';" - failed because GTID_PURGED cannot be set unless GTID_EXECUTED is empty. Since GTID_EXECUTED is a read only variable, the only way to empty its value is to issue "RESET MASTER" on the slave server before restoring the backup file.

  • Xtrabackup tool could be used as well instead of mysqldump to get this problem solved and without the need to reset GTID_EXECUTED and GTID_PURGED values . Check out this link for more information.
Conclusion

While GTID provides many benefits over the classic replication but it has different troubleshooting and fix strategies which must be known first before deploying GTID in production systems.

Taxonomy upgrade extras: GTIDreplication

Replication Troubleshooting - Classic VS GTID

Abdel-Mawla Gharieb - Fri, 2014-07-04 15:05

In previous posts, I covered how to set up MySQL replication: Classic Replication (based on binary log positions) and Transaction-based Replication (based on GTID). In this article I'll summarize how to troubleshoot the most common MySQL replication issues, with a simple comparison of how to solve them in each replication method (Classic vs GTID).

There are two main operations we might need to do in a replication setup:

  • Skip or ignore a statement that causes the replication to stop.
  • Re-initialize a slave when the replication is broken and cannot be started anymore.
Skip or Ignore statement

Basically, the slave should always be synchronized with its master and hold the same copy of data, but for several reasons the two can become inconsistent (an unsafe statement in SBR, a slave that is not read_only and was modified outside of replication, etc.). This causes errors and stops the replication, e.g. the master inserts a record which already exists on the slave (duplicate entry), or updates/deletes a row which does not exist on the slave.

To solve this issue, we can either reverse what was done on the slave (e.g. delete the inserted rows) if the change was made by mistake and is known, or skip executing those statements on the slave and let replication continue (I'll focus on skipping a statement in this post, as it requires a different procedure in Classic and GTID replication).

Sample error messages (from SHOW SLAVE STATUS output):
Last_SQL_Error: Could not execute Write_rows event on table test.t1; Duplicate entry '4' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.000304, end_log_pos 285
Last_SQL_Error: Could not execute Update_rows event on table test.t1; Can't find record in 't1', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000304, end_log_pos 492
Last_SQL_Error: Could not execute Delete_rows event on table test.t1; Can't find record in 't1', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000304, end_log_pos 688

How to solve that issue?
CLASSIC REPLICATION

Solving this problem is a straightforward process in the classic replication setup; all we need to do is issue the following SQL commands on the slave:

SQL> SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;
SQL> START SLAVE;
GTID REPLICATION

Solving this problem is not as straightforward in GTID replication as it is in Classic replication, and the variable SQL_SLAVE_SKIP_COUNTER is no longer useful here.

To solve this problem in GTID replication, we need to inject an empty transaction as follows:

  • Check which transaction is causing the problem:
    SQL> SHOW SLAVE STATUS\G
    .
    .
    Retrieved_Gtid_Set: b9b4712a-df64-11e3-b391-60672090eb04:1-7
    Executed_Gtid_Set: 4f6d62ed-df65-11e3-b395-60672090eb04:1,
    b9b4712a-df64-11e3-b391-60672090eb04:1-6
    Auto_Position: 1

    Retrieved_Gtid_Set is the set of GTIDs retrieved from the master.

    Executed_Gtid_Set is the set of GTIDs executed on the slave.

    According to the above output, the slave retrieved GTIDs 1-7 from the master (b9b4712a-df64-11e3-b391-60672090eb04:1-7) but executed only 1-6 (b9b4712a-df64-11e3-b391-60672090eb04:1-6), so the problem is in transaction number 7.

  • Inject an empty transaction:
    SQL> SET GTID_NEXT='b9b4712a-df64-11e3-b391-60672090eb04:7';
    SQL> BEGIN; COMMIT;
    SQL> SET GTID_NEXT='AUTOMATIC';
    SQL> START SLAVE;

    BE CAUTIOUS: The first part of Executed_Gtid_Set (4f6d62ed-df65-11e3-b395-60672090eb04:1) is the set of locally executed GTIDs (not received from the master), while the second part (b9b4712a-df64-11e3-b391-60672090eb04:1-6) is the set of executed GTIDs that were retrieved from the master (you can confirm the master's UUID either from "Retrieved_Gtid_Set", which contains only the master's GTIDs, or by issuing SHOW GLOBAL VARIABLES LIKE 'server_uuid'; on the master server). So we must make sure we use the master's UUID when injecting the empty transaction; otherwise, the problem will remain and the slave won't start.
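    Before injecting the empty transaction, it can also be worth double-checking which GTIDs are missing and what the offending transaction actually contains. A minimal sketch, assuming MySQL 5.6 or newer for the mysqlbinlog GTID options; the binary log file name is taken from the Last_SQL_Error above and the GTID sets are the ones shown in SHOW SLAVE STATUS:

    # compute the GTIDs that were retrieved but not yet executed
    SQL> SELECT GTID_SUBTRACT('b9b4712a-df64-11e3-b391-60672090eb04:1-7',
                              'b9b4712a-df64-11e3-b391-60672090eb04:1-6') AS missing;

    # decode only the problematic transaction from the master's binary log
    shell> mysqlbinlog --include-gtids='b9b4712a-df64-11e3-b391-60672090eb04:7' \
                       --base64-output=decode-rows -vv mysql-bin.000304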

Note:

After starting the slave successfully in either classic or GTID replication, we might need to use a combination of the Percona tools pt-table-checksum and pt-table-sync to fix the remaining data inconsistency (a minimal usage sketch follows below).
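The following is only a minimal sketch of that check-and-repair flow, not a full procedure; the host name, the checksum user/password and the percona.checksums table are placeholders to adapt to your setup:

# run on the master: checksum all tables and store the results in percona.checksums
shell> pt-table-checksum --replicate=percona.checksums h=master_host,u=checksum_user,p=secret

# preview the differences found on the slaves (dry run)
shell> pt-table-sync --replicate percona.checksums h=master_host,u=checksum_user,p=secret --print

# apply the changes once the preview looks correct
shell> pt-table-sync --replicate percona.checksums h=master_host,u=checksum_user,p=secret --execute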

Re-initialize/ re-build a slave

For many reasons, we might end up having to re-build a slave to get replication working again, e.g. the slave was stopped for a while and the master has purged a binary log file that the slave still needs, or there are so many duplicate entry errors that pt-table-checksum and pt-table-sync cannot be used. In that case we have to re-initialize the slave from scratch by taking a fresh backup from the master server and restoring it on the slave. Let's check how we can do that in both replication methods.

How to solve that issue?
CLASSIC REPLICATION
Sample error message:
Last_IO_Errno: 1236 Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'

Fix steps:

  • Back up the master server with the following command:
    shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --master-data=2 --flush-logs --triggers --routines --events --hex-blob > /path/to/backupdir/full_backup-$TIMESTAMP.sql
  • Restore the backup file on the slave:
    shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
  • Get the binary log information from the time the backup was taken:
    shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql | grep "CHANGE MASTER TO"
    CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=120;
  • Issue the "CHANGE MASTER TO" command using the new information:
    SQL> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=120;
  • Start the slave:
    SQL> START SLAVE;

NOTE:

The Xtrabackup tool could be used instead of mysqldump, especially if the database size is big (a minimal sketch follows below). Check out this link for more information.
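A rough sketch of re-seeding a slave with Xtrabackup under classic replication; the backup directory name, credentials and binary log coordinates are placeholders, and the exact steps depend on your Xtrabackup version:

# on the master: take and prepare a physical backup
shell> innobackupex --user=root --password=secret /backups/
shell> innobackupex --apply-log /backups/2014-07-04_15-05-00/

# after transferring the prepared backup to the slave: restore it into the (empty) datadir
shell> innobackupex --copy-back /backups/2014-07-04_15-05-00/
shell> chown -R mysql:mysql /var/lib/mysql

# the binary log coordinates at backup time are stored in xtrabackup_binlog_info
shell> cat /backups/2014-07-04_15-05-00/xtrabackup_binlog_info
mysql-bin.000011    120

SQL> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=120;
SQL> START SLAVE;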

GTID REPLICATION
Sample error message:
Last_IO_Errno: 1236 Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'

Fix steps:

  • Back up the master server with the following command:
    shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --flush-logs --triggers --routines --events --hex-blob > /path/to/backupdir/full_backup-$TIMESTAMP.sql
  • Check the GTID value from the time the backup was taken:
    shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql | grep PURGED
    SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-8';
  • Reset the GTID_EXECUTED and GTID_PURGED values on the slave:
    SQL> RESET MASTER;
  • Restore the backup file on the slave:
    shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
  • Make sure that the values of GTID_EXECUTED and GTID_PURGED are the correct ones:
    SQL> SHOW GLOBAL VARIABLES LIKE 'gtid_executed';
    +---------------+------------------------------------------+
    | Variable_name | Value                                    |
    +---------------+------------------------------------------+
    | gtid_executed | b9b4712a-df64-11e3-b391-60672090eb04:1-8 |
    +---------------+------------------------------------------+
    1 row in set (0.00 sec)

    SQL> SHOW GLOBAL VARIABLES LIKE 'gtid_purged';
    +---------------+------------------------------------------+
    | Variable_name | Value                                    |
    +---------------+------------------------------------------+
    | gtid_purged   | b9b4712a-df64-11e3-b391-60672090eb04:1-8 |
    +---------------+------------------------------------------+
    1 row in set (0.01 sec)
  • Start the slave:
    SQL> START SLAVE;

NOTES:

  • If we don't reset the GTID_EXECUTED and GTID_PURGED values on the slave before restoring the backup file, the following error will appear:
    shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
    ERROR 1840 (HY000): @@GLOBAL.GTID_PURGED can only be set when @@GLOBAL.GTID_EXECUTED is empty.

    The above error indicates that the statement at the beginning of the backup file - "SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-8';" - failed because GTID_PURGED cannot be set unless GTID_EXECUTED is empty. Since GTID_EXECUTED is a read-only variable, the only way to empty its value is to issue "RESET MASTER" on the slave server before restoring the backup file.

  • The Xtrabackup tool could be used instead of mysqldump to solve this problem as well, without the need to reset the GTID_EXECUTED and GTID_PURGED values (a minimal sketch follows this list). Check out this link for more information.
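A rough sketch of the GTID variant with Xtrabackup; the backup path, host name and credentials are placeholders, and whether gtid_purged must still be set manually depends on the MySQL version and on how the datadir was restored:

# after restoring the prepared backup into the slave's datadir, check the GTID
# position recorded at backup time
shell> cat /backups/2014-07-04_15-05-00/xtrabackup_binlog_info
mysql-bin.000011    120    b9b4712a-df64-11e3-b391-60672090eb04:1-8

# point the slave at the master using auto-positioning
SQL> CHANGE MASTER TO MASTER_HOST='master_host', MASTER_USER='repl',
     MASTER_PASSWORD='secret', MASTER_AUTO_POSITION=1;
SQL> START SLAVE;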
Conclusion

While GTID provides many benefits over classic replication, it requires different troubleshooting and fix strategies, which must be understood before deploying GTID in production systems.

Replication channel fail-over with Galera Cluster for MySQL

Shinguz - Thu, 2014-06-19 07:05

Sometimes it can be desirable to replicate from a Galera Cluster to a single MySQL slave or to another Galera Cluster. Reasons for this measure could be:

  • An unstable network between two Galera Cluster locations.
  • A separation of a reporting slave and the Galera Cluster so that heavy reports on the slave do not affect the Galera Cluster performance.
  • Mixing different sources in a slave or a Galera Cluster (fan-in replication).

This article is based on earlier research work (see MySQL Cluster - Cluster circular replication with 2 replication channels) and uses the old MySQL replication style (without MySQL GTID).

Preconditions
  • Enable the binary logs on 2 nodes of the Galera Cluster (we call them channel masters) with the log_bin variable.
  • Set log_slave_updates = 1 on ALL Galera nodes.
  • It is recommended to have small binary logs and relay logs in such a situation to reduce the overhead of scanning the files (max_binlog_size = 100M). A quick way to verify these settings on each node is sketched below.
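A minimal verification sketch; it simply has to be repeated on every Galera node (the connection credentials are assumed to be whatever you normally use):

SQL> SHOW GLOBAL VARIABLES
     WHERE Variable_name IN ('log_bin', 'log_slave_updates', 'max_binlog_size');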
Scenarios

   

Let us assume that for some reason the current channel master of channel 1 breaks. As a consequence, the slave of channel 1 no longer receives any replication events. But we have to keep the replication stream up and running, so we have to switch over to channel master 2.

Switching replication channel

First, as a safety measure, we should stop the slave of replication channel 1:

mysql> STOP SLAVE;

Then we have to find the actual relay log on the slave:

mysql> pager grep Relay_Log_File
mysql> SHOW SLAVE STATUS\G
mysql> nopager
Relay_Log_File: slave-relay-bin.000019

Next we have to find the last applied transaction on the slave:

mysql> SHOW RELAYLOG EVENTS IN 'slave-relay-bin.000019';
| slave-relay-bin.000019 | 3386717 | Query       | 5201 | 53745015 | BEGIN                          |
| slave-relay-bin.000019 | 3386794 | Table_map   | 5201 | 53745067 | table_id: 72 (test.test)       |
| slave-relay-bin.000019 | 3386846 | Write_rows  | 5201 | 53745142 | table_id: 72 flags: STMT_END_F |
| slave-relay-bin.000019 | 3386921 | Xid         | 5201 | 53745173 | COMMIT /* xid=1457451 */       |
+------------------------+---------+-------------+------+----------+--------------------------------+

This is transaction 1457451 which is the same on all Galera nodes.

On the new channel master (channel 2) we now have to find the matching binary log. The easiest way is to match the timestamps of the slave's relay log against the binary logs of the channel 2 master.

On slave:

shell> ll *relay-bin*
-rw-rw---- 1 mysql mysql     336 Mai 22 20:32 slave-relay-bin.000018
-rw-rw---- 1 mysql mysql 3387029 Mai 22 20:37 slave-relay-bin.000019

On master of channel 2:

shell> ll *bin-log*
-rw-rw---- 1 mysql mysql  2518737 Mai 22 19:57 bin-log.000072
-rw-rw---- 1 mysql mysql      143 Mai 22 19:57 bin-log.000073
-rw-rw---- 1 mysql mysql      165 Mai 22 20:01 bin-log.000074
-rw-rw---- 1 mysql mysql 62953648 Mai 22 20:40 bin-log.000075

It looks like binary log 75 of master 2 matches the relay log of our slave.

Now we have to find the same transaction on the master of channel 2:

mysql> pager grep -B 6 1457451
mysql> SHOW BINLOG EVENTS IN 'bin-log.000075';
mysql> nopager
| bin-log.000075 | 53744832 | Write_rows  | 5201 | 53744907 | table_id: 72 flags: STMT_END_F |
| bin-log.000075 | 53744907 | Xid         | 5201 | 53744938 | COMMIT /* xid=1457450 */       |
| bin-log.000075 | 53744938 | Query       | 5201 | 53745015 | BEGIN                          |
| bin-log.000075 | 53745015 | Table_map   | 5201 | 53745067 | table_id: 72 (test.test)       |
| bin-log.000075 | 53745067 | Write_rows  | 5201 | 53745142 | table_id: 72 flags: STMT_END_F |
| bin-log.000075 | 53745142 | Xid         | 5201 | 53745173 | COMMIT /* xid=1457451 */       |
+----------------+----------+-------------+------+----------+--------------------------------+

We successfully found the transaction. Its end position, 53745173, is where the next transaction starts and therefore the position from which we should continue replicating.

As a last step we have to point the slave to the master of replication channel 2:

mysql> CHANGE MASTER TO master_host='master2', master_port=3306,
       master_log_file='bin-log.000075', master_log_pos=53745173;
mysql> START SLAVE;

After a while the slave has caught up and is ready for the next fail-over back.
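One simple way to watch the catch-up is the same pager trick used above; a minimal sketch:

mysql> pager grep Seconds_Behind_Master
mysql> SHOW SLAVE STATUS\G
mysql> nopager
Seconds_Behind_Master: 0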

Discussion

We found during our experiments that an IST of a channel master does not lead to a gap or loss of events in the replication stream. So restarting a channel master does not require a channel fail-over as long as an IST can be used for resyncing the channel master with the Galera Cluster.

An increase of wsrep_cluster_conf_id is NOT, by itself, an indication that a channel fail-over is required (it can be checked as sketched below).
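A small sketch of how to look at that counter and the node state on a channel master; these are standard wsrep status variables:

SQL> SHOW GLOBAL STATUS WHERE Variable_name IN
     ('wsrep_cluster_conf_id', 'wsrep_local_state_comment', 'wsrep_cluster_size');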

An SST resets the binary logs, so after an SST the slave will not replicate from that channel master any more and a channel fail-over is needed. Apart from this, the method should be safe to use. If you find any situation where you experience trouble with channel fail-over, please let us know.

MySQL Environment MyEnv 1.0.5 has been released

FromDual.en - Fri, 2014-06-13 18:29

FromDual has the pleasure to announce the release of the new version 1.0.5 of its popular MySQL, MariaDB and Percona Server multi-instance environment MyEnv.

The majority of improvements happened on the MySQL Backup Manager (mysql_bman) utility.

You can download MyEnv from here.

In the inconceivable case that you find a bug in MyEnv please report it to our Bugtracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Upgrade from 1.0.x to 1.0.5

# cd ${HOME}/product
# tar xf /download/myenv-1.0.5.tar.gz
# rm -f myenv
# ln -s myenv-1.0.5 myenv
Upgrade from 1.0.2 or older to 1.0.3 or newer

If you are using plug-ins for showMyEnvStatus, create all the links in the new directory structure:

cd ${HOME}/product/myenv
ln -s ../../utl/oem_agent.php plg/showMyEnvStatus/

Replace the MyEnv section in ~/.bash_profile (make a backup of this file first!) with the following new one:

# BEGIN MyEnv
# Written by the MyEnv installMyEnv.php script.
. /etc/myenv/MYENV_BASE
MYENV_PWD=`pwd`
cd $MYENV_BASE/bin
. myenv.profile
cd $MYENV_BASE; $MYENV_BASE/bin/showMyEnvStatus.php; cd - > /dev/null
cd $MYENV_PWD
# END MyEnv

Changes in MyEnv 1.0.5

MyEnv
  • Schema output in up was still ugly.
  • Instance output is split correctly similar to up/down display.
  • Instance list is now shorter when short instance names are used.
  • ignore-passive option added for myEnv to ignore passive databases in an active/passive fail-over cluster. Based on existence of datadir.
  • Upgrade instructions have been improved and denormalized.
  • Only display existing OEM agents; the criterion is that the directory listed in oratab must exist.
  • Up instances were not reported when the mysqladmin command was missing (Galera binary tarballs), but the reason was not visible. The reason is now displayed as an error message.
MyEnv Installer
  • Lists each basedir candidate on a separate line when adding a new instance. More convenient to read when many basedirs are available.
MyEnv Utilities
  • block_galera_node.sh: firewall rules are now inserted instead of appended, and only the load-balancer ports are blocked instead of everything else.
  • block_galera_node.sh made more flexible.
MySQL Backup Manager
  • Cleanup job errors with missing target. Fixed for MGB.
  • Password on command line is not exposed anymore to log file.
  • Instance name optionally added to binary-log backup file names.
  • Binary logs are not cleaned-up because they are not copied with bck_ prefix (Bug #143).
  • Config file example in --help output done more nicely.
  • More strict option checking implemented.
  • All schemas with non transactional tables are shown instead of just the first one.
  • Help typo fixed and example improved.
  • --ignore-memory-table-check implemented to avoid error exit with MEMORY tables.
  • Preparation work for blocking MyISAM backup done.
