Since I started doing the occasional consulting job for Open Query, I've seen a lot of MySQL servers that have been installed once and then forgotten about. This gave me the idea to do a short presentation about some basic MySQL server configuration. The first go was at DrupalCampMelbourne and I recently tried (and failed) to cram it into a three minute lightning talk slot at the LUV September meeting.

The title of the talk is (now) MySQL > YourSQL. I chose this not because I think that MySQL is better than the $other_database you use or because I may or may not run a newer version of MySQL on better hardware, but because I use InnoDB and not MyISAM as the default table format. More importantly, I do not run the server with the shipped default configuration.

These configuration tips are also included in the MySQL section of Pro Linux System Administration.

Most distributions ship MySQL with a configuration file that sees you running a server optimised for a system with 32MB of RAM. That was great in 1996, but these days your Nike+ shoes have more capacity than that, so it makes sense to optimise the configuration - or at least to make it not suck quite as badly.

I chose InnoDB as the default storage engine not because it gives me a magic increase in performance or transaction support, but because of the improved reliability that comes from its ability to recover automatically from crashes, which MyISAM does far less gracefully.

All the following settings go in the [mysqld] section of your my.cnf file.

Optimising basic settings

Let's start with the buffer that MyISAM uses to keep indexes in RAM. If you don't use MyISAM, you can make this number nice and low.

key_buffer_size = 16M

Keep in mind that the mysql privilege tables will remain in MyISAM format, even if you switch all other tables to InnoDB. If you use FULLTEXT indexes on tables (like mediawiki does) you won't be able to switch those to InnoDB, so the actual value you assign here depends on your data. Mine is 16MB, because virtually all my tables are InnoDB format.
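To see which tables are still in MyISAM format, you can query information_schema (available from MySQL 5.0 onwards); this helps you judge how much key buffer you actually need. The privilege tables in the mysql database will always be on the list.

```shell
# List all MyISAM tables outside the mysql and information_schema
# databases, so you can see what still depends on the key buffer.
mysql -u root -p -e "SELECT table_schema, table_name
    FROM information_schema.tables
    WHERE engine = 'MyISAM'
      AND table_schema NOT IN ('mysql', 'information_schema');"
```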

Performance can be helped greatly if MySQL doesn't need to keep opening and closing table files, but can instead keep them opened for re-use. The default here is 64.

table_cache = 512

Of course, it makes no sense to set this value higher than the number of tables you have. Keep in mind an open table uses a bit of memory and that your MySQL server may have a limited number of file descriptors available to it.
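To check whether your table cache is sized sensibly, compare the Open_tables and Opened_tables status counters; if Opened_tables keeps climbing on a long-running server, the cache is too small.

```shell
# Open_tables is the number of tables currently held open;
# Opened_tables counts how often a table had to be (re)opened.
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Open%tables';"
```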

When a query needs to scan data on disk, MySQL will read tables (and indexes) in chunks. The default chunk size is 128KB. If you need to scan a 100MB table, that means a lot of round trips, so you may want to up the size somewhat. There are two parameters that control the buffer size, one for sequential and one for random disk scans.

read_buffer_size = 1M
read_rnd_buffer_size = 512K

Keep in mind that each concurrent connection can allocate this amount of memory. Mine are set to 1MB and 512KB respectively, which means fewer round trips to disk whilst not overcommitting memory when the number of concurrent connections goes up.

The query cache is tricky. You want to cache the results of as many oft-used queries as you can, whilst at the same time you don't want to sacrifice too much RAM to the cache, because you won't be able to use that memory for actual data. It's best to set a fairly low limit on the maximum size of a result that will be cached, so you can store lots of little query results as opposed to only a handful of huge ones.

A cached result is discarded as soon as any of the tables it draws on is changed, so the larger the result, the less likely it is to still be in the cache anyway.

query_cache_limit = 128K
query_cache_size = 16M

I set my limit to 128KB with a total cache size of 16MB, which seems to do a good job of keeping the cache fairly relevant. If at all possible, it's a good idea to make use of memcached in your application. After all, each query for which the database doesn't need to be accessed will gain you some performance.
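You can check how well the query cache is doing from its status counters; a lot of Qcache_lowmem_prunes means results are being evicted because the cache is full, while Qcache_hits versus Com_select gives a rough idea of the hit rate.

```shell
# Show all query cache related status counters.
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Qcache%';"
```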

Queries that use GROUP BY (among other constructs) may need temporary tables. That is fine as long as the temporary table fits into memory, but once it grows over a certain size (16MB by default) the temporary table will be written to disk, killing performance.

tmp_table_size = 64M
max_heap_table_size = 64M

The maximum temporary table (in memory) size is actually the lower of these two values, so if you need larger temporary tables that don't get created on disk, you need to change both these values.
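The server keeps counters that show how often this happens; a high ratio of Created_tmp_disk_tables to Created_tmp_tables means temporary tables regularly spill to disk and these limits may need raising.

```shell
# Compare on-disk temporary tables to the total number created.
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Created_tmp%';"
```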

Disable bad stuff

Whilst working with an application, you might be tempted to enable the general query log (a plain-text log of every statement the server runs) to see what queries get run. In that case the my.cnf file would have an entry like this one:

log = /var/log/mysql.log

This completely kills performance, so you should comment it out. If binary logging is enabled (and it should be) you can use the mysqlbinlog tool to parse the binary logs and output the queries in a human readable format.
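The mysqlbinlog tool can be used like this; the log file name below is just an example, check your data directory for the actual names.

```shell
# Decode a binary log and page through the statements it contains.
mysqlbinlog /var/lib/mysql/mysql-bin.000001 | less
```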

Default to InnoDB

Changing the default table format is easy: just add the following to the [mysqld] section of your my.cnf file and restart MySQL.

default_storage_engine = InnoDB

However, the default InnoDB settings suck for most modern systems, so you have a little bit of additional configuring to do.

Optimising InnoDB settings

InnoDB keeps all its data, indexes and assorted goo in as much memory as you're willing to give it. The default is 8MB and I'm willing to bet you have more data than that, so you should change this value and set it as high as you can.

innodb_buffer_pool_size = 512M

On a dedicated MySQL server with 16GB of RAM and only InnoDB tables, you could set this up to 90% of all available ram (leaving enough for the system and other services, of course). On a server that also runs Apache and PHP, you might try for 25% to 50% of available RAM for this value.
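To see whether the buffer pool is large enough for your working set, check its page counters; Innodb_buffer_pool_pages_free staying near zero together with lots of reads from disk suggests the pool is too small.

```shell
# Show free, total and data pages in the InnoDB buffer pool.
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';"
```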

InnoDB will default to using a single file on disk to store all databases and tables.  This file is /var/lib/mysql/ibdata1. Over time, this file will grow in size, but never decrease. If you only ever store 100MB of data and don't change that data, this is not an issue. However, most people would have hundreds of databases and thousands of tables with gigabytes (or more) of data.

innodb_data_home_dir = /var/lib/mysql
innodb_data_file_path = ibdata1:10M:autoextend
innodb_file_per_table = 1

You can force InnoDB to create a separate file on disk for each table, in a directory named after the database (sort of like MyISAM does), by setting innodb_file_per_table. Note that this only applies to tables created after the setting is enabled; existing tables stay in the shared tablespace until they are rebuilt. When you do this, you'll want to adjust your table cache as well. The location and size of the ibdata1 file are not added to the my.cnf file by default, but it's a good idea to add in these defaults.

When data in an InnoDB table changes, it's updated in the buffer pool and also written to the transaction log, but not immediately to the actual table file. The latter happens when the transaction log fills up. The default size for these files (there are two) is 5MB, which is awfully small; a large query that modifies a lot of data could cause the transaction log to be flushed to the table files several times over.

Your InnoDB transaction logs are the ib_logfile0 and ib_logfile1 files in /var/lib/mysql. A reasonable size for these files would be 1/16th to 1/8th or so of the innodb_buffer_pool_size, with a maximum upper limit of 128MB.

innodb_log_group_home_dir = /var/lib/mysql
innodb_log_file_size = 32M

Because the transaction logs are likely to contain live data that has not yet been written to the table files, you cannot increase their size by simply removing the current ones and having MySQL create new ones when it restarts. Instead, you need to explicitly tell MySQL to flush the data in these logs to the table files when it shuts down.

You can do this by connecting to MySQL as the root user and setting the innodb_fast_shutdown variable to 0.

$ mysql -u root -p
Enter password:

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 18
Server version: 5.1.38-maria-beta1-ourdelta67-log (Ubuntu)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> SET GLOBAL innodb_fast_shutdown=0;
Query OK, 0 rows affected (0.00 sec)

mysql> \q

You can now shut down the MySQL server, and it will flush all pending changes from the transaction logs to the table files. That means you can now safely move the existing files out of the way, so the MySQL server can create new ones when we next start it.

Do not delete the ib_logfile* files; just move them to a different directory. That way, if MySQL refuses to start with a transaction log error when it is supposed to create the new logs, you can shut it down, copy the old files back, and try the procedure again without the risk of losing any data.
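The whole resize procedure can be sketched as shell commands. The init script name and the backup path are assumptions for a Debian/Ubuntu-style layout; adjust them for your system.

```shell
# Stop MySQL after setting innodb_fast_shutdown=0 as described above
sudo /etc/init.d/mysql stop

# Move (don't delete!) the old transaction logs out of the way
sudo mkdir -p /root/innodb-log-backup
sudo mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /root/innodb-log-backup/

# Edit innodb_log_file_size in my.cnf, then start the server so it
# creates new transaction logs at the configured size
sudo /etc/init.d/mysql start
```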

Finally, there is a setting for Linux servers that will allow you to make more efficient use of the RAM in the system. Normally Linux will keep any data that is written to disk in a part of system memory that it reserves for use as a file cache. The next time such data needs to be read, it'll be available in the cache, so the disk doesn't need to be accessed.

That is fine for MyISAM tables, but our InnoDB data is in the innodb_buffer_pool already anyway, so we don't want Linux to waste memory on keeping a second copy hanging around in the Linux file cache.

innodb_flush_method = O_DIRECT

Setting the flush_method to O_DIRECT will cause Linux to not keep a copy of the data in its file cache, leaving more memory for the buffer pool or other services.

Convert existing tables

Any new tables will now default to InnoDB, but any previously created ones are still in the MyISAM format. Luckily MySQL ships with a utility to convert tables to a different format:

mysql_convert_table_format --type=InnoDB --user=root --password='secret' --force databasename

The --force option is there so that the utility does not exit when it encounters an error. For instance, if a database contains a table with a FULLTEXT index, the utility will not stop after failing to convert that table, but will continue with the other tables.

Just remember to leave the mysql database in MyISAM format.
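If you'd rather convert a single table by hand, a plain ALTER TABLE does the same job; the database and table names here are just placeholders.

```shell
# The table is locked and rebuilt while this runs, so expect it to
# take a while on large tables.
mysql -u root -p -e "ALTER TABLE mydatabase.mytable ENGINE=InnoDB;"
```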

Caveat … but not

Your MySQL backups are of course handled via mysqldump or a similar tool and not by simply copying the contents of /var/lib/mysql to a different location on disk. If that is what you did, and you use MyISAM as storage format, you were probably lucky you didn't lose any data when you had to restore.

When using InnoDB you will not get away with backing up your data this way. Or, more correctly, you will not get away with restoring data from such a "backup". Please use a proper backup strategy.

I've attached my data dump script to this post; it can run via a daily crontab and dumps data to a specified directory. You can then use standard backup tools to backup the contents of that directory. My current preference for doing that is rsnapshot.
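If you don't want to use the attached script, a minimal dump along the same lines might look like this; the output path is just an example.

```shell
# --single-transaction takes a consistent snapshot of InnoDB tables
# without locking them; MyISAM tables (like the mysql database) are
# still read-locked while they are being dumped.
mysqldump -u root -p --single-transaction --all-databases > /var/backups/mysql-dump.sql
```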

Inevitable disclaimer

This blog is not supposed to give you a MySQL server that is tuned to outperform the Google cluster. Instead its aim is to allow you to run with a more optimised set of defaults than your distro gives you, without needing to spend days reading books and searching the web for info.

Of course it's possible to create a better configuration specific to your hardware and data, and by all means do.

Further reading

Attached file: mysql-backup.sh (1.55 KB)


If you find your MySQL is still slow and seems to be stuck on writing data to disk even though the system isn't showing a lot of IOwait, you can tell the storage engine it is allowed to perform more than the 200 I/O operations per second it defaults to.

innodb_io_capacity = 400

If you store your data and transaction logs on RAID or SSD, tweaking this setting should help a lot.

Note that you need an OurDelta build or MariaDB with XtraDB for this to work. If you don't have one of those, your MySQL server won't start with this variable set!

So you are telling us that at the default configuration, MySQL is running as if it were on a very, very slow system? I have a dedicated server with 8GB RAM and I'm only running 10 databases from that. My programmer told me it should be left alone and couldn't be optimised any more because the 'makers' of MySQL had done it, but now I am not so sure. I will have to make all the above very clear to him. Even I with my limited experience can see that the 128KB default chunk size is inefficient.


Hi Kendra. If those 10 databases are each 64GB in size, it'll be VERY slow, but if they all fit in RAM, then there's not a great deal you can do to the configuration to speed it up.

The 'makers' of MySQL ship most packages with some sort of basic configuration file, but that file basically ensures the server works, not that it works as well as it possibly can. "As well as it possibly can" depends on the data you store and the way you use that data.

You can get speed and reliability increases by tuning the configuration, but more often you get speed increases by tuning your queries. Enable the slow_log and see what queries are being inefficient.

I'd be wary of anyone who says a database server has been optimised as far as it can be.

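To enable the slow query log mentioned above, the MySQL 5.1-era option names in the [mysqld] section look like this; the log path is just an example.

```ini
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time  = 2
```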
