MySQL Benchmarking: Setup, Optimizations & Results Using Sysbench on m3.large

By Flux7 Labs
July 29, 2014

Throughout the history of our blog, we have shared many posts about benchmarking, such as explaining how to set up and use sysbench for MySQL benchmarking. Just search for “benchmarking” in the upper right-hand corner to find all of them. Today, we are continuing to add to the library!

In this post, we are sharing our experiences using sysbench for MySQL benchmarking. To start, let’s explore the setup we used for benchmarking.


Here it is:

  • Machine: AWS m3.large instance (64 Bit, paravirtual)

  • Storage: 32 GB SSD instance store

  • OS: Ubuntu 14.04 LTS (3.13.0-24-generic)

  • MySQL Version: 5.5.35

  • Sysbench Version: 0.4.12

We used four different table sizes for our benchmarking, ranging from 50,000 to 50,000,000 rows, with each table 10 times larger than the previous one. Initially, the benchmark was run without applying any optimization, using the default “my.cnf.” We then applied several MySQL optimizations based on best practices recommended by the MySQL documentation and ran the benchmark again.
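For reference, a sysbench 0.4.12 OLTP run of the kind described above looks roughly like the following. The table size of 50,000 stands in for each of the four sizes we tested, and the database name and credentials are placeholders, not our actual values:

```shell
# Create the test table (sbtest) with the desired number of rows
sysbench --test=oltp --oltp-table-size=50000 \
  --mysql-db=sbtest --mysql-user=root --mysql-password=secret prepare

# Run the OLTP benchmark against that table
sysbench --test=oltp --oltp-table-size=50000 --max-requests=100000 \
  --mysql-db=sbtest --mysql-user=root --mysql-password=secret run

# Drop the test table afterwards
sysbench --test=oltp --mysql-db=sbtest \
  --mysql-user=root --mysql-password=secret cleanup
```

Repeating the prepare/run/cleanup cycle with `--oltp-table-size` set to 500000, 5000000 and 50000000 covers the other three table sizes.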


We applied the following optimizations to the MySQL configuration file “my.cnf” (/etc/mysql/my.cnf). A short description of each system variable is given below.

  • Caches and Limits

    • max_heap_table_size → The maximum size for in-memory temporary tables is the minimum of the tmp_table_size and max_heap_table_size values. The default value for max_heap_table_size is 16 MB; we set it to 32 MB so that it equals tmp_table_size.

    • query_cache_size → We increased the query_cache_size so that the results are cached to some extent.

    • thread_cache_size → For benchmarking purposes this variable is not strictly needed, as there will only be one connection. We included it to make sure its presence does not affect performance.

    • open_files_limit → Increase the open files limit.

    • table_definition_cache → Increasing this cache helps speed up the opening of tables.

    • table_open_cache → The number of open tables for all threads.

  • InnoDB-specific variables

    • innodb_flush_method → By default, the flush method used is fdatasync(). Setting it to O_DIRECT can help avoid double buffering between the InnoDB buffer pool and the operating system’s filesystem cache.

    • innodb_log_file_size → A larger log file reduces checkpoint flush activity in the buffer pool, saving disk I/O. A large value makes sense when there are big transactions.

    • innodb_file_per_table → Enabling this variable, which stores each InnoDB table in its own file, makes sense when there are fewer tables. You can find more details about it in the MySQL documentation.

    • innodb_buffer_pool_size → The larger you set this value, the less disk I/O is needed to access data in the tables. On a dedicated server, this value can be set up to 80% of the machine’s physical memory.
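Taken together, the settings above can be sketched as a my.cnf fragment. The specific values below are illustrative rather than our exact configuration (for example, the m3.large has 7.5 GB of RAM, so 80% works out to roughly 6 GB for the buffer pool):

```ini
# /etc/mysql/my.cnf — illustrative values only
[mysqld]
# Caches and limits
tmp_table_size          = 32M
max_heap_table_size     = 32M     # raised from the 16M default to match tmp_table_size
query_cache_size        = 64M     # cache query results to some extent
thread_cache_size       = 16
open_files_limit        = 65535
table_definition_cache  = 4096
table_open_cache        = 4096

# InnoDB-specific variables
innodb_flush_method     = O_DIRECT  # avoid double buffering with the OS cache
innodb_log_file_size    = 256M      # fewer checkpoint flushes for big transactions
innodb_file_per_table   = 1
innodb_buffer_pool_size = 6G        # ~80% of the m3.large's 7.5 GB RAM
```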

The details of all the MySQL system variables can be found in the MySQL reference manual.
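After restarting MySQL, you can confirm that the new settings took effect by querying the server. The credentials below are placeholders:

```shell
# Print the current values of the tuned variables
mysql -u root -p -e "SHOW VARIABLES WHERE Variable_name IN
  ('max_heap_table_size','query_cache_size','thread_cache_size',
   'table_definition_cache','table_open_cache','innodb_flush_method',
   'innodb_log_file_size','innodb_file_per_table','innodb_buffer_pool_size');"
```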



Now we have come to the results. Here they are, showing the benchmarking exercises both before and after applying the optimizations.

Specifically, we ran each benchmark 10 times for each table size. We then discarded the two highest and two lowest results as outliers and averaged the remaining six. To be sure the results were consistent, we also calculated the standard deviation across all 10 runs.
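The outlier-trimming step above can be sketched as a small shell pipeline. The file name and the sample numbers here are made up for illustration; in practice you would collect the transactions-per-second figures from the ten sysbench runs:

```shell
#!/bin/sh
# Hypothetical: ten TPS results, one per line
printf '100\n98\n102\n97\n101\n99\n250\n10\n103\n96\n' > results.txt

# Sort numerically, drop the two lowest and two highest values,
# then average the remaining six
sort -n results.txt | sed '1,2d' | head -n -2 \
  | awk '{ s += $1 } END { printf "%.2f\n", s / NR }'
# prints 99.50
```

Note how the obvious outliers (10 and 250) are excluded before averaging, which keeps one anomalous run from skewing the result.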

Transactions per second



Average Response Time


95th Percentile Response Time




Based on all the above results, it is evident that optimizations play a more prominent role when the table sizes are larger. The tuning parameters described give you a general idea of how to tune MySQL.

However, you need to note that database performance tuning is highly dependent on the application and the type of usage that goes along with the application. Hence, the system parameters need to be specifically tuned for every application to get the best results.

System Resources

We were also interested in how the system resources, such as CPU, memory and disk I/O, are used when the benchmarks are being run.

Each of the following graphs represents the benchmark runs for the four table sizes (50,000; 500,000; 5,000,000; and 50,000,000 rows, respectively). You can see several differences in the way memory, CPU and disk I/O are used.

With Optimization

Without Optimization

Disk Octets

Now that we have completed our exercise of MySQL benchmarking using sysbench, it’s time to get your feedback about our results and how we got them. Ask us questions about our experience, or tell us about your own exploration of MySQL benchmarking.

You can also visit us to learn more about our DevOps approach and cloud migration solutions for both SMBs and large enterprises, or sign up for an IT assessment.