Setting the right GCache size in Galera Cluster
One of our customers had a question about the right value of the Galera Cache size (gcache.size) in Galera Cluster for MySQL, which I would like to share with you.
The question was: my maintenance window takes 4 hours for my 5TB DB. How can I avoid an SST?
Basically, a too-small GCache will lead to an SST (State Snapshot Transfer) instead of an IST (Incremental State Transfer), so we can avoid the SST by setting the GCache to an appropriate value.
To check the current value of the GCache size:
mysql> SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options'\G
Variable_name: wsrep_provider_options
        Value: base_host = 192.168.1.12; . . . gcache.page_size = 128M; gcache.size = 128M; gcs.fc_debug = 0; . . .
The value of the GCache size can be changed by adding the following line to the my.cnf file and restarting the node (it can NOT be changed online):
#my.cnf
[mysqld]
wsrep_provider_options="gcache.size=256M"
But the question is: how can we calculate the right GCache size so that it covers the maintenance window without being larger than needed?
To answer that question, we should first find out how long the GCache can hold writes, which can be calculated with the following formula:
Hold time = GCache size / Replication Rate.
Where:
- Replication Rate = Amount of replicated data / time.
- Amount of replicated data = (wsrep_replicated_bytes + wsrep_received_bytes) after the maintenance window - (wsrep_replicated_bytes + wsrep_received_bytes) before the maintenance window.
The amount of replicated data in the customer's case = 7200MB.
Now, we can find out how long the default GCache (128M) can hold writes in the customer's case:
Hold time = 128MB / (7200MB / 4h) = 128MB / 0.5 MB/s = 256s.
Then, we can calculate the right GCache size value to handle the maintenance window by the following formula:
GCache = Maintenance window * Replication Rate = 14400s * 0.5 MB/s.
GCache = 7200MB.
In other words, the right GCache size should be equivalent to (or not less than) the amount of replicated data.
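The calculations above can be put together in a small sketch (Python used here just as a calculator; the figures are the customer's numbers from the text):

```python
# Hold time and required GCache size, using the formulas above.
# Inputs: 7200 MB replicated during a 4-hour maintenance window.
replicated_mb = 7200.0      # delta of (wsrep_replicated_bytes + wsrep_received_bytes), in MB
window_s = 4 * 60 * 60      # maintenance window: 4 hours = 14400 s

rate = replicated_mb / window_s       # Replication Rate: 0.5 MB/s
hold_time_default = 128 / rate        # how long the default 128M GCache lasts, in seconds
gcache_needed = window_s * rate       # GCache size needed to cover the window, in MB

print(f"Replication rate:     {rate} MB/s")                 # 0.5 MB/s
print(f"Hold time (128M):     {hold_time_default:.0f} s")   # 256 s
print(f"Required gcache.size: {gcache_needed:.0f} MB")      # 7200 MB
```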
A shorter way using the binary logs size
Is there any relation between (wsrep_replicated_bytes + wsrep_received_bytes) and the binary log traffic? Let's check that with the following test scenario:
- Starting a Galera Cluster, where the newest binary log file is empty (120 bytes), and similarly for the status variables:
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_replicated_bytes';
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| wsrep_replicated_bytes | 0     |
+------------------------+-------+
1 row in set (0.00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_received_bytes';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| wsrep_received_bytes | 368   |
+----------------------+-------+
1 row in set (0.00 sec)
- Execute some DML statements on the current node (to increase wsrep_replicated_bytes) and on another node (to increase wsrep_received_bytes), then check the status values:
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_replicated_bytes';
+------------------------+----------+
| Variable_name          | Value    |
+------------------------+----------+
| wsrep_replicated_bytes | 80125192 |
+------------------------+----------+
1 row in set (0.00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_received_bytes';
+----------------------+----------+
| Variable_name        | Value    |
+----------------------+----------+
| wsrep_received_bytes | 40062948 |
+----------------------+----------+
1 row in set (0.00 sec)

The amount of replicated data = (80125192 + 40062948) - (0 + 368) = 120187772 bytes.
- Checking the increase of the binary log file:
shell> ll /var/lib/mysql/mysql-bin.000243
-rw-rw---- 1 mysql mysql 113769454 Mar 20 13:42 mysql-bin.000243
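As a quick sanity check on the two figures just measured, the binary log growth comes out at roughly 95% of the replicated amount:

```python
# Numbers taken from the test scenario above.
replicated = (80125192 + 40062948) - (0 + 368)   # wsrep counters delta: 120187772 bytes
binlog_bytes = 113769454                          # growth of mysql-bin.000243

ratio = binlog_bytes / replicated
print(f"Replicated data: {replicated} bytes")
print(f"Binlog traffic is {ratio:.1%} of the replicated data")  # ~94.7%
```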
Notes:
- The variable log_slave_updates MUST be enabled on ALL nodes; otherwise, the traffic counted by wsrep_received_bytes will not be reflected in the binary logs, which will lead to WRONG calculations!
- Since the cluster here was freshly started, MySQL began writing to a new binary log file. In an already-running cluster we can force MySQL to start a new binary log file with the SQL command "FLUSH BINARY LOGS;", so that we can use the total size of the binary logs generated after that point during the maintenance.
Although the binary log traffic will always be somewhat less than the amount of replicated data, the two are close enough that we can use it for a rough estimate of the right GCache size value, hence the formula becomes:
- GCache size = Maintenance window * Replication Rate.
- GCache size = (wsrep_replicated_bytes + wsrep_received_bytes) after the maintenance window - (wsrep_replicated_bytes + wsrep_received_bytes) before the maintenance window.
- GCache size = binary log traffic which occurs during the maintenance window.
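The binlog-based shortcut can be sketched as follows. This is a rough sketch only: it assumes log_slave_updates is ON on all nodes and that FLUSH BINARY LOGS was issued right before the window started; the datadir path and file names are hypothetical examples.

```python
# Sketch: estimate gcache.size from the binary logs generated during
# the maintenance window (sum the sizes of the binlog files created
# after the FLUSH BINARY LOGS issued at the start of the window).
import glob
import os


def binlog_traffic(datadir="/var/lib/mysql", pattern="mysql-bin.[0-9]*",
                   first_new_file="mysql-bin.000244"):
    """Sum the sizes of binlog files from first_new_file onwards.

    Binlog file names sort lexicographically, so a simple string
    comparison on the basename selects the files created during
    (and after) the maintenance window.
    """
    files = sorted(glob.glob(os.path.join(datadir, pattern)))
    new_files = [f for f in files if os.path.basename(f) >= first_new_file]
    return sum(os.path.getsize(f) for f in new_files)


# gcache.size should be at least this figure (plus a safety margin,
# since binlog traffic slightly understates the replicated amount):
# size_bytes = binlog_traffic()
```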
Impact of General Query Log on MySQL Performance
Sometimes it is required to enable the general query log (which is disabled by default). When the general query log is enabled, the server writes information to it whenever clients connect or disconnect, along with each SQL statement received from clients.
The question is: does enabling the general query log affect MySQL performance?
Also, the output of this log can be recorded in either a file or a table in the mysql database (mysql.general_log); what is the performance impact of each?
Let's run some simple benchmarks for those scenarios to measure the actual impact on MySQL performance.
System Information:
HW configurations:
- CPU: Intel® Core™ i7-3520M Processor (4M Cache, up to 3.60 GHz).
- 2 cores, 4 threads, HT enabled.
- Memory: 8GB RAM (1600).
- Storage: HDD 1TB/ 5400RPM.
- OS: Ubuntu 12.04
- MySQL Server: 5.6.17
- Sysbench: 0.4.12
- Sysbench command: sysbench --num-threads=1 --max-requests=1000 --db-driver=mysql --test=oltp --mysql-table-engine=InnoDB --mysql-user=root run
- Table structure used by the sysbench tests:
mysql> show create table sbtest.sbtest\G
CREATE TABLE `sbtest` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `k` int(10) unsigned NOT NULL DEFAULT '0',
  `c` char(120) NOT NULL DEFAULT '',
  `pad` char(60) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  KEY `k` (`k`)
) ENGINE=InnoDB AUTO_INCREMENT=8574 DEFAULT CHARSET=latin1
Note:
The tests were run with 1, 2, 4, 8, 16, and 32 threads; each throughput/response-time value for each thread count in each test case is the average of ten (10) executions.
General log Disabled:
To make sure that the general query log is disabled:
mysql> show global variables like 'general_log';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| general_log   | OFF   |
+---------------+-------+
Testing results:
General log Enabled:
The general query log is a dynamic variable, which means it can be enabled or disabled ONLINE, without restarting MySQL (since MySQL 5.1):
mysql> SET GLOBAL general_log=ON;
We can choose the output for this log to be either a log file (the default) or a MySQL table (mysql.general_log).
What are the benefits that we might get if we have the log output to be stored in a table not in a file?
- We can use the normal SQL statements to access the log contents to get information about specific criteria (e.g. using WHERE condition) and this is a little harder when dealing with files.
- The log contents could be accessed remotely if someone can connect to the MySQL server.
- Standard format for the log entries.
- If the CSV engine is used for the log table, it will be easy to import the CSV file into spreadsheets.
- It is easy to expire the logs with a simple TRUNCATE of the log table.
- Log rotation is possible using the RENAME TABLE statement.
- Log entries are not replicated to the slave because they are not written to the binary logs.
- mysqldump does not include the log table contents (general_log or slow_log) in the backup, even if the --all-databases backup option is used.
So, let's check the performance impact of each log output.
Output is FILE:
To check the output destination of the general log, the following command should be used:
mysql> show global variables like 'log_output';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_output    | FILE  |
+---------------+-------+
Testing results:
Output is TABLE (CSV table):
To change the output destination of the general log from file to table (CSV by default), the following command should be used:
mysql> SET GLOBAL log_output='TABLE';
mysql> show global variables like 'log_output';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_output    | TABLE |
+---------------+-------+
Testing results:
Output is TABLE (MyISAM table):
Perhaps the performance issues we faced in the previous case are due to the nature of the CSV storage engine. Is it possible to change the table engine of the general_log table?
The answer is yes, but unfortunately we are restricted to the MyISAM storage engine; no engines other than CSV or MyISAM are allowed. Check this link for more information.
To alter the log table, you must first disable the logging:
mysql> alter table mysql.general_log engine=MYISAM;
ERROR 1580 (HY000): You cannot 'ALTER' a log table if logging is enabled
mysql> SET GLOBAL general_log=OFF;
mysql> alter table mysql.general_log engine=MYISAM;
mysql> SET GLOBAL general_log=ON;
Testing results:
Output is TABLE (MyISAM table with some structure changes):
In general, to make any SQL query work faster, we might need to optimize the table structure, add indexes, rewrite the query, etc.
The following is the general log table structure:
mysql> show create table mysql.general_log\G
CREATE TABLE `general_log` (
  `event_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `user_host` mediumtext NOT NULL,
  `thread_id` bigint(21) unsigned NOT NULL,
  `server_id` int(10) unsigned NOT NULL,
  `command_type` varchar(64) NOT NULL,
  `argument` mediumtext NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='General log'
Let's check what we can do to optimize the general_log table structure (other suggestions are welcome):
- Can we create partitions on that table, which might speed up our searches?
Although this is a normal MyISAM table, partitioning is NOT allowed for the log tables (by the way, partitioning is not permitted for CSV tables either).
- Can we change the datatype of the `user_host` column from mediumtext to, e.g., varchar(100)? (The max length of this column's data on my machine didn't exceed 50 characters.)
While this is accepted syntax-wise, no logs will be stored in the table afterwards, and the following errors are printed in the error log file:
2014-03-06 18:44:21 6987 [ERROR] Failed to write to mysql.general_log:
2014-03-06 18:44:23 6987 [ERROR] Incorrect definition of table mysql.general_log: expected column 'user_host' at position 1 to have type mediumtext, found type varchar(100).
- What about creating indexes on the columns we will use for most of our searches (`event_time` and `argument`)? Let's try that!
- Add index on `event_time` column:
mysql> SET GLOBAL general_log=OFF;
mysql> alter table mysql.general_log add index ev_tm_idx(`event_time`);
mysql> show create table mysql.general_log\G
CREATE TABLE `general_log` (
  `event_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `user_host` mediumtext NOT NULL,
  `thread_id` bigint(21) unsigned NOT NULL,
  `server_id` int(10) unsigned NOT NULL,
  `command_type` varchar(64) NOT NULL,
  `argument` mediumtext NOT NULL,
  KEY `ev_tm_idx` (`event_time`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='General log'
mysql> SET GLOBAL general_log=ON;
Testing results:
- Add FULLTEXT index on `argument` column:
mysql> SET GLOBAL general_log=OFF;
mysql> alter table mysql.general_log add fulltext index (`argument`);
mysql> show create table mysql.general_log\G
CREATE TABLE `general_log` (
  `event_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `user_host` mediumtext NOT NULL,
  `thread_id` bigint(21) unsigned NOT NULL,
  `server_id` int(10) unsigned NOT NULL,
  `command_type` varchar(64) NOT NULL,
  `argument` mediumtext NOT NULL,
  KEY `ev_tm_idx` (`event_time`),
  FULLTEXT KEY `argument` (`argument`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='General log'
mysql> SET GLOBAL general_log=ON;
Testing results:
To make it clearer, the following combines all results in one graph, followed by a response time comparison:
The raw results in Transactions / Sec might be useful:
Threads                                              1        2        4         8         16        32
General log disabled                                 383.996  814.759  1421.288  1674.733  1414.985  1071.189
General log enabled (File)                           281.642  521.39   1230.743  1406.127  1095.896  923.986
General log enabled (CSV Table)                      231.659  447.173  787.578   507.846   426.324   439.992
General log enabled (MyISAM Table)                   249.47   536.379  933.304   532.912   476.454   454.015
General log enabled (MyISAM Table + index)           238.508  430.05   875.209   465.464   465.464   395.063
General log enabled (MyISAM Table + Fulltext index)  157.436  236.156  210.968   212.273   218.617   220.701
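The throughput-drop percentages quoted in the conclusion below can be reproduced directly from the 4-thread column of these raw results:

```python
# Throughput drop at 4 concurrent threads, computed from the raw
# results table above (transactions/sec; general log disabled = baseline).
baseline = 1421.288
results = {
    "File": 1230.743,
    "CSV table": 787.578,
    "MyISAM table": 933.304,
    "MyISAM + index": 875.209,
    "MyISAM + FULLTEXT index": 210.968,
}

for name, tps in results.items():
    drop = (baseline - tps) / baseline * 100
    print(f"{name:<24} -{drop:.1f}%")
# File -13.4%, CSV -44.6%, MyISAM -34.3%, +index -38.4%, +FULLTEXT -85.2%
```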
Conclusion:
- The best MySQL performance among all the above test cases is achieved by disabling the general query log. For example, comparing the above results at 4 concurrent threads (the peak value for most scenarios):
- Enabling the general log (general_log = ON) with the log destination set to file (log_output = FILE) decreased the throughput by 13.4% and increased the response time by 17.5%.
- Enabling the general log with the destination set to a CSV table decreased the throughput by 44.6% and increased the response time by 90%.
- Enabling the general log with the destination set to a MyISAM table decreased the throughput by 34.3% and increased the response time by 59%.
- Enabling the general log with a MyISAM destination table having an index on the `event_time` column decreased the throughput by 38.4% and increased the response time by 73%.
- Enabling the general log with a MyISAM destination table having an index on `event_time` and a FULLTEXT index on `argument` decreased the throughput by 85% and increased the response time by 542%.
- Although using a table as the log output destination has many benefits, as described above, it has a greater negative impact on MySQL performance than a log file.
- Increasing the number of concurrently running threads, in the case of log_output=TABLE, increases contention on the general_log table, which is governed by the table-level locking of the MyISAM and CSV engines.
- As with any other MySQL table, the more rows inserted into the log table, the greater the negative performance impact.
- Although mysqldump does not include the log table contents in the backup, this is not the case for a full physical backup taken with Xtrabackup or other physical-backup tools.
- Finally, it is preferable to enable the general query log only when really needed; it is not recommended in a production system. It can be enabled (dynamically) for a while, then disabled again once we have found what we were looking for.
Impact of General Query Log on MySQL Performance
Sometimes, it is required to enable the general query log (which is disabled by default). If the general query log is enabled the server writes to this log information when clients connect or disconnect, and each SQL statement received from the client.
The question is, does enabling the general query log affects the MySQL performance ?
Also, it is possible to record the output of this log into either file or table in the mysql database (mysql.general_log), what is the performance impact of each one?
Let's do some simple benchmark for those scenarios to measure the actual impact on the mysql performance.
System Information: HW configurations:- CPU: Intel® Core™ i7-3520M Processor (4M Cache, up to 3.60 GHz).
- 2 cores, 4 threads, HT enabled.
- Memory: 8GB RAM (1600).
- Storage: HDD 1TB/ 5400RPM.
- OS: Ubuntu 12.04
- MySQL Server: 5.6.17
- Sysbench: 0.4.12
- Sysbench command: sysbench --num-threads=1 --max-requests=1000 --db-driver=mysql --test=oltp --mysql-table-engine=InnoDB --mysql-user=root run .
- Table structure which was used by sysbench tests: mysql> show create table sbtest.sbtest\G CREATE TABLE `sbtest` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `k` int(10) unsigned NOT NULL DEFAULT '0', `c` char(120) NOT NULL DEFAULT '', `pad` char(60) NOT NULL DEFAULT '', PRIMARY KEY (`id`), KEY `k` (`k`) ) ENGINE=InnoDB AUTO_INCREMENT=8574 DEFAULT CHARSET=latin1
Note:
The test had been made against 1,2,4,8,16 and 32 threads, each throughput/response time value for each thread's number for each test case is generated by the average of ten (10) times execution.
General log Disabled:To make sure that the general query log is disabled:
mysql> show global variables like'general_log'; +---------------+-------+ | Variable_name | Value | +---------------+-------+ | general_log | OFF | +---------------+-------+Testing results:
General log Enabled:The general query log is a dynamic variable which means that it could be enabled or disabled ONLINE without restarting MySQL (since MySQL 5.1):
mysql> SET GLOBAL general_log=ON;We can choose the output for this log to be either in a log file (by default) or in a MySQL table (mysql.general_log).
What are the benefits that we might get if we have the log output to be stored in a table not in a file?
- We can use the normal SQL statements to access the log contents to get information about specific criteria (e.g. using WHERE condition) and this is a little harder when dealing with files.
- The log contents could be accessed remotely if someone can connect to the MySQL server.
- Standard format for the log entries.
- If the CSV engine is used for the log table, it will be easy to import the CSV file into spreadsheets.
- It is easy to expire the logs by simply TRUNCATE the log table.
- Log rotation is possible by using RENAME TABLE statement.
- Log entries are not replicated to the slave because they are not written to the binary logs.
- The mysqldump does not include the log tables contents (general_log or slow_log) in the backup even if --all-databases backup option is used.
So, let's check the performance impact then of each log output.
Output is FILE:To check the output destination of the general log, the following command should be used:
mysql> show global variables like'log_output'; +---------------+-------+ | Variable_name | Value | +---------------+-------+ | log_output | FILE | +---------------+-------+Testing results:
Output is TABLE (CSV table):To change the output destination of the general log from file to table (CSV by default), the following command should be used:
mysql> SET GLOBAL log_output='TABLE'; mysql> show global variables like'log_output'; +---------------+-------+ | Variable_name | Value | +---------------+-------+ | log_output | TABLE | +---------------+-------+Testing results:
Output is TABLE (MyISAM table):Maybe due to the nature of the CSV storage ENGINE we faced performance issues in the previous case. Is it possible to change the table engine of the general_log table?
The answer is yes, but unfortunately, we are restricted to use only MyISAM storage engine and no engines other than CSV or MyISAM are allowed. Check this link for more information.
To alter the log table, you must first disable the logging:
mysql> alter table mysql.general_log engine=MYISAM; ERROR 1580 (HY000): You cannot 'ALTER' a log table if logging is enabled mysql> SET GLOBAL general_log=OFF; mysql> alter table mysql.general_log engine=MYISAM; mysql> SET GLOBAL general_log=ON;Testing results:
Output is TABLE (MyISAM table with some structures changes):In general, to make any SQL query work faster, we might need to optimize the table structure, add indexes, re-write the query, .. etc.
The following is the general log table structure:
mysql> show create table mysql.general_log\G CREATE TABLE `general_log` ( `event_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, `user_host` mediumtext NOT NULL, `thread_id` bigint(21) unsigned NOT NULL, `server_id` int(10) unsigned NOT NULL, `command_type` varchar(64) NOT NULL, `argument` mediumtext NOT NULL ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='General log'Let's check what can we do to optimize the general_log table structure (other suggestions are welcomed):
- Can we create partitions on that table which might boost our search?
Although this is a normal MyISAM table but partitioning IS NOT allowed for the log tables (by the way, partitioning is not permitted also for CSV tables). - Can we change the datatype of the `user_host` column from mediumtext to e.g. varchar(100)? (The max length of this column's data on my machine didn't exceed 50 characters)
While it is - syntax wise - accepted but no logs will be stored in the table after that and the following error will be printed in the error log file: 2014-03-06 18:44:21 6987 [ERROR] Failed to write to mysql.general_log: 2014-03-06 18:44:23 6987 [ERROR] Incorrect definition of table mysql.general_log: expected column 'user_host' at position 1 to have type mediumtext, found type varchar(100). - What about creating indexes on the columns we will use for most of our searches (`event_time` and `argument`)? lets try that!
- Add index on `event_time` column:
mysql> SET GLOBAL general_log=OFF; mysql> alter table mysql.general_log add index ev_tm_idx(`event_time`); mysql> show create table mysql.general_log\G CREATE TABLE `general_log` ( `event_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, `user_host` mediumtext NOT NULL, `thread_id` bigint(21) unsigned NOT NULL, `server_id` int(10) unsigned NOT NULL, `command_type` varchar(64) NOT NULL, `argument` mediumtext NOT NULL, KEY `ev_tm_idx` (`event_time`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='General log' mysql> SET GLOBAL general_log=ON;Testing results:
- Add index on `event_time` column:
- Add FULLTEXT index on `argument` column:
mysql> SET GLOBAL general_log=OFF; mysql> alter table mysql.general_log add fulltext index (`argument`); mysql> show create table mysql.general_log\G CREATE TABLE `general_log` ( `event_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, `user_host` mediumtext NOT NULL, `thread_id` bigint(21) unsigned NOT NULL, `server_id` int(10) unsigned NOT NULL, `command_type` varchar(64) NOT NULL, `argument` mediumtext NOT NULL, KEY `ev_tm_idx` (`event_time`), FULLTEXT KEY `argument` (`argument`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='General log' mysql> SET GLOBAL general_log=ON;Testing results:
To make it more clear, the following is the combination of all results in one graph followed by response time comparison:
The raw results in Transactions / Sec might be useful:
Threads12481632General log disabled383.996814.7591421.2881674.7331414.9851071.189General log enabled (File)281.642521.391230.7431406.1271095.896923.986General log enabled (CSV Table)231.659447.173787.578507.846426.324439.992General log enabled (MyISAM Table)249.47536.379933.304532.912476.454454.015General log enabled (MyISAM Table + index)238.508430.05875.209465.464465.464395.063General log enabled (MyISAM Table + Fulltext index)157.436236.156210.968212.273218.617220.701
Conclusion:
- The best MySQL performance - among all above test cases - could be achieved by disabling the general query log, e.g. if we compared the above results for the case 4 concurrent threads (the highest value for most scenarios) we would find that:
- Using the general log enabled (general_log = ON) and the log destination is file (log_output = FILE) decreased the throughput by 13.4% and increased the response time by 17.5%.
- Using the general log enabled and the log destination is CSV table decreased the throughput by 44.6% and increased the response time by 90%.
- Using the general log enabled and the log destination is MyISAM table decreased the throughput by 34.3% and increased the response time by 59%.
- Using the general log enabled and the log destination is MyISAM having an index added on column `event_time` decreased the throughput by 38.4% and increased the response time by 73%.
- Using the general log enabled and the log destination is MyISAM having an index added on column `event_time` and FULLTEXT index added on column `argument` decreased the throughput by 85% and increased the response time by 542%.
- Although using table as a log output destination has many benefits - as described above - but it has more negative impact on the MySQL performance as compared to log file.
- Increasing the number of concurrently running threads - in the case of log_output=TABLE - will increase the general_log table contention which is controlled by the table locking level for MyISAM or CSV ENGINES.
- Like any other MySQL table - as many rows inserted in the log table as more negative performance impact.
- Although mysqldump does not include the log tables contents in the backup but this is not the case when having full physical backup using Xtrabackup or any other physical backup based tools.
- Finally, it is preferred to only enable the general query log when it is really needed and it is not recommended to enable it in a production system. It could be enabled (dynamically) for a while then should be disabled again once we have got what we are searching for.
Impact of General Query Log on MySQL Performance
Sometimes, it is required to enable the general query log (which is disabled by default). If the general query log is enabled the server writes to this log information when clients connect or disconnect, and each SQL statement received from the client.
The question is, does enabling the general query log affects the MySQL performance ?
Also, it is possible to record the output of this log into either file or table in the mysql database (mysql.general_log), what is the performance impact of each one?
Let's do some simple benchmark for those scenarios to measure the actual impact on the mysql performance.
System Information: HW configurations:- CPU: Intel® Core™ i7-3520M Processor (4M Cache, up to 3.60 GHz).
- 2 cores, 4 threads, HT enabled.
- Memory: 8GB RAM (1600).
- Storage: HDD 1TB/ 5400RPM.
- OS: Ubuntu 12.04
- MySQL Server: 5.6.17
- Sysbench: 0.4.12
- Sysbench command: sysbench --num-threads=1 --max-requests=1000 --db-driver=mysql --test=oltp --mysql-table-engine=InnoDB --mysql-user=root run .
- Table structure which was used by sysbench tests: mysql> show create table sbtest.sbtest\G CREATE TABLE `sbtest` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `k` int(10) unsigned NOT NULL DEFAULT '0', `c` char(120) NOT NULL DEFAULT '', `pad` char(60) NOT NULL DEFAULT '', PRIMARY KEY (`id`), KEY `k` (`k`) ) ENGINE=InnoDB AUTO_INCREMENT=8574 DEFAULT CHARSET=latin1
Note:
The test had been made against 1,2,4,8,16 and 32 threads, each throughput/response time value for each thread's number for each test case is generated by the average of ten (10) times execution.
General log Disabled:
To make sure that the general query log is disabled:
mysql> show global variables like 'general_log';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| general_log   | OFF   |
+---------------+-------+
Testing results:
General log Enabled:
The general query log is a dynamic variable, which means it can be enabled or disabled ONLINE without restarting MySQL (since MySQL 5.1):
mysql> SET GLOBAL general_log=ON;
We can choose the output of this log to be either a log file (the default) or a MySQL table (mysql.general_log).
What benefits might we get if the log output is stored in a table rather than a file?
- We can use normal SQL statements to query the log contents for specific criteria (e.g. using a WHERE condition), which is harder to do with files.
- The log contents could be accessed remotely if someone can connect to the MySQL server.
- Standard format for the log entries.
- If the CSV engine is used for the log table, it is easy to import the CSV file into spreadsheets.
- It is easy to expire the logs by simply truncating the log table (TRUNCATE TABLE).
- Log rotation is possible using the RENAME TABLE statement.
- Log entries are not replicated to the slave because they are not written to the binary logs.
- mysqldump does not include the log table contents (general_log or slow_log) in the backup, even if the --all-databases option is used.
So, let's check the performance impact of each log output destination.
Output is FILE:
To check the output destination of the general log, use the following command:
mysql> show global variables like 'log_output';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_output    | FILE  |
+---------------+-------+
Testing results:
Output is TABLE (CSV table):
To change the output destination of the general log from file to table (CSV by default), use the following commands:
mysql> SET GLOBAL log_output='TABLE';
mysql> show global variables like 'log_output';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_output    | TABLE |
+---------------+-------+
Testing results:
Output is TABLE (MyISAM table):
The performance issues in the previous case may be due to the nature of the CSV storage engine. Is it possible to change the table engine of the general_log table?
The answer is yes, but unfortunately we are restricted to the MyISAM storage engine; no engines other than CSV or MyISAM are allowed. Check this link for more information.
To alter the log table, you must first disable logging:
mysql> alter table mysql.general_log engine=MYISAM;
ERROR 1580 (HY000): You cannot 'ALTER' a log table if logging is enabled
mysql> SET GLOBAL general_log=OFF;
mysql> alter table mysql.general_log engine=MYISAM;
mysql> SET GLOBAL general_log=ON;
Testing results:
Output is TABLE (MyISAM table with some structure changes):
In general, to make any SQL query run faster, we might need to optimize the table structure, add indexes, rewrite the query, etc.
The following is the general log table structure:
mysql> show create table mysql.general_log\G
CREATE TABLE `general_log` (
  `event_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `user_host` mediumtext NOT NULL,
  `thread_id` bigint(21) unsigned NOT NULL,
  `server_id` int(10) unsigned NOT NULL,
  `command_type` varchar(64) NOT NULL,
  `argument` mediumtext NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='General log'
Let's check what we can do to optimize the general_log table structure (other suggestions are welcome):
- Can we create partitions on that table, which might speed up our searches?
Although this is a normal MyISAM table, partitioning is NOT allowed for log tables (by the way, partitioning is not permitted for CSV tables either).
- Can we change the datatype of the `user_host` column from mediumtext to e.g. varchar(100)? (The maximum length of this column's data on my machine did not exceed 50 characters.)
While this is accepted syntax-wise, no logs will be stored in the table afterwards, and the following error is printed in the error log file:
2014-03-06 18:44:21 6987 [ERROR] Failed to write to mysql.general_log:
2014-03-06 18:44:23 6987 [ERROR] Incorrect definition of table mysql.general_log: expected column 'user_host' at position 1 to have type mediumtext, found type varchar(100).
- What about creating indexes on the columns we will use most in our searches (`event_time` and `argument`)? Let's try that!
- Add index on `event_time` column:
mysql> SET GLOBAL general_log=OFF;
mysql> alter table mysql.general_log add index ev_tm_idx(`event_time`);
mysql> show create table mysql.general_log\G
CREATE TABLE `general_log` (
  `event_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `user_host` mediumtext NOT NULL,
  `thread_id` bigint(21) unsigned NOT NULL,
  `server_id` int(10) unsigned NOT NULL,
  `command_type` varchar(64) NOT NULL,
  `argument` mediumtext NOT NULL,
  KEY `ev_tm_idx` (`event_time`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='General log'
mysql> SET GLOBAL general_log=ON;
Testing results:
- Add FULLTEXT index on `argument` column:
mysql> SET GLOBAL general_log=OFF;
mysql> alter table mysql.general_log add fulltext index (`argument`);
mysql> show create table mysql.general_log\G
CREATE TABLE `general_log` (
  `event_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `user_host` mediumtext NOT NULL,
  `thread_id` bigint(21) unsigned NOT NULL,
  `server_id` int(10) unsigned NOT NULL,
  `command_type` varchar(64) NOT NULL,
  `argument` mediumtext NOT NULL,
  KEY `ev_tm_idx` (`event_time`),
  FULLTEXT KEY `argument` (`argument`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='General log'
mysql> SET GLOBAL general_log=ON;
Testing results:
To make it clearer, the following graph combines all results, followed by a response-time comparison:
The raw results in Transactions / Sec might be useful:
Threads                                       1        2        4        8       16       32
General log disabled                    383.996  814.759 1421.288 1674.733 1414.985 1071.189
General log enabled (File)              281.642  521.390 1230.743 1406.127 1095.896  923.986
General log enabled (CSV Table)         231.659  447.173  787.578  507.846  426.324  439.992
General log enabled (MyISAM Table)      249.470  536.379  933.304  532.912  476.454  454.015
General log enabled (MyISAM + index)    238.508  430.050  875.209  465.464  465.464  395.063
General log enabled (MyISAM + FULLTEXT) 157.436  236.156  210.968  212.273  218.617  220.701
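The percentage throughput drops cited in the conclusions can be reproduced from the 4-thread column of the raw results above; a minimal sketch:

```python
# Throughput (transactions/sec) at 4 concurrent threads, from the table above.
baseline = 1421.288  # general log disabled

enabled = {
    "file": 1230.743,
    "csv_table": 787.578,
    "myisam_table": 933.304,
    "myisam_plus_index": 875.209,
    "myisam_plus_fulltext": 210.968,
}

# Throughput drop relative to the disabled-log baseline, in percent.
for name, tps in enabled.items():
    drop = (baseline - tps) / baseline * 100
    print(f"{name}: -{drop:.1f}%")
```

This yields 13.4%, 44.6%, 34.3%, 38.4% and 85.2% respectively; the last figure is quoted as 85% in the conclusions.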
Conclusion:
- The best MySQL performance among all the above test cases is achieved by disabling the general query log. For example, comparing the above results for 4 concurrent threads (the peak value for most scenarios):
- Enabling the general log (general_log = ON) with the log destination set to file (log_output = FILE) decreased throughput by 13.4% and increased response time by 17.5%.
- Enabling the general log with the log destination set to a CSV table decreased throughput by 44.6% and increased response time by 90%.
- Enabling the general log with the log destination set to a MyISAM table decreased throughput by 34.3% and increased response time by 59%.
- Enabling the general log with a MyISAM log table and an index added on the `event_time` column decreased throughput by 38.4% and increased response time by 73%.
- Enabling the general log with a MyISAM log table, an index on `event_time` and a FULLTEXT index on `argument` decreased throughput by 85% and increased response time by 542%.
- Although using a table as the log output destination has many benefits, as described above, it has a larger negative impact on MySQL performance than a log file.
- Increasing the number of concurrently running threads when log_output=TABLE increases contention on the general_log table, which is governed by the table-level locking of the MyISAM and CSV engines.
- As with any other MySQL table, the more rows are inserted into the log table, the greater the negative performance impact.
- Although mysqldump does not include the log table contents in a backup, this is not the case with a full physical backup taken with Xtrabackup or other physical backup tools.
- Finally, it is preferable to enable the general query log only when it is really needed; enabling it on a production system is not recommended. It can be enabled dynamically for a while and should be disabled again once we have found what we were looking for.
Why FromDual customers are using Galera Cluster for MySQL
We are very satisfied with the functionality and the launch. The entire Cluster is running absolutely stable. This is a significant advantage for our customers because we are able to offer our services without interruption. We will use Galera in further projects and convert our existing Master/Slave installations.
A. Rempening, Datenbank-Administrator, KiKxxl GmbH
Download this case study: KiKxxl GmbH: Highly available communication services based on a MySQL database and Galera Cluster (PDF, 420 kbyte).
Programme for the DOAG SIG MySQL day on 27 March
The programme for the DOAG SIG MySQL day in Berlin has now been finalized: http://www.doag.org/termine/termine.php?tid=477934
Companies from the Berlin IT scene show how they solve their problems with MySQL.
Register here: https://www.doag.org/termine/anmeldung.php?tid=477934
Galera Cluster for MySQL course, 17/18 March 2014 in Essen
On Monday 17 and Tuesday 18 March 2014, a Galera Cluster training course will take place at the Linux-Hotel in Essen. The course will definitely be held, as the minimum number of participants has already been reached.
If you are interested in attending this course, please register in good time to secure your place.
You can register for the Galera Cluster course here.
From 7 to 11 April, a further course, MySQL für Profis, will take place in Berlin.
Please note under remarks that you became aware of this offer through the FromDual newsletter.
All other course dates can be found under MySQL Schulung.
Improved MySQL database performance with Virident flash cache
In the last newsletter we informed you about FromDual's consulting services, focusing on availability and on avoiding outages of the MySQL database (What does one hour of downtime of your MySQL database cost?).
Today we want to focus on hardware performance, and thus also on application performance. We show you two ways to make your MySQL application faster, and we have interesting offers for you. In January 2014, FromDual became a Virident (Western Digital HGST) partner and can therefore equip your existing servers with PCIe flash cards for increased performance requirements, so you no longer depend on slow disks.
Accelerate your MySQL databases with our help:
- Performance tuning and optimization of the MySQL database through:
- monitoring the database with the MySQL Performance Monitor,
- a database configuration analysis with our checklist: the MySQL Performance Tuning Key, a guide to identifying performance problems and fixing them,
- and the MySQL health check.
- Performance tuning of I/O rates and disk access with the latest hardware from Virident.
We present a MySQL Turbo Performance package, consisting of the FlashMaxII PCIe flash card and architecture consulting for optimally adapting the MySQL database to the flash cards.
Here is a performance comparison between conventional RAID-10 disks and Virident flash cards:
Special offer 1: FlashMaxII MLC PCIe card with 550 GByte capacity. Includes 2 days of FromDual remote consulting to support commissioning and to optimize the database application for optimal use of the flash cards.
Assistance is provided with:
- configuring the Linux kernel,
- optimizing the drivers,
- configuring the drive,
- configuring the MySQL parameters for database caches, log file sizes, etc.,
- and further tuning tips for MySQL, MariaDB or Percona Server.
Price for one card and 2 days of FromDual remote consulting: EUR 4,990.
Special offer 2: FlashMaxII MLC PCIe card with 550 GByte capacity. Includes 1 day of FromDual remote consulting to support commissioning and to optimize the database application for optimal use of the flash cards.
Assistance is provided with:
- configuring the MySQL parameters for database caches, log file sizes, etc.,
- and further tuning tips for MySQL, MariaDB or Percona Server.
With this offer, the PCIe card must already be installed, Linux configured, and the file system set up for use of the flash cards. The service is limited to configuring the MySQL database.
Price for one card and 1 day of FromDual remote consulting: EUR 3,990.
FlashMaxII is available in further configurations and sizes; you can find these on the Virident website.
Support for your MySQL database is provided by FromDual; support for the flash cards comes directly from Virident.
We are happy to prepare a special offer for our services in combination with other FlashMaxII configurations.
We look forward to hearing from you.