Wednesday, April 13, 2016

Query Rewrite Plugin and Binlog for Replication

Starting with MySQL 5.7 we introduced the Query Rewrite Plugin, a really useful tool for changing queries. Of course the best place to modify a query is the source code of the application, but this is not always possible: either the application is not under your control, or the queries are generated by a framework like Hibernate and it is hard to change the query generation.
If you are interested in details about the Query Rewrite Plugin, I recommend this blogpost from the MySQL Engineering: http://mysqlserverteam.com/the-query-rewrite-plugins/
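
To recap how the bundled plugin is used: load it once, then manage rules in the query_rewrite.rewrite_rules table. A minimal sketch following the reference manual (the rule itself is only an illustration):

 -- Load the plugin once (the install script ships with the distribution):
 --   shell> mysql -u root -p < install_rewriter.sql
 -- Add a rewrite rule and make it active:
 INSERT INTO query_rewrite.rewrite_rules (pattern, replacement)
   VALUES ('SELECT ?', 'SELECT ? + 1');
 CALL query_rewrite.flush_rewrite_rules();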
Recently I was asked how this works in replication environments. Which query goes into the binlog?

If you are using the Rewriter plugin that comes with MySQL 5.7, the answer is easy: this plugin only supports rewriting SELECT queries, and SELECT queries don't go into the binlog at all. Simple.

But you might write your own preparse or postparse plugin. In that case you can control the behavior with the server option --log-raw. See the documentation here: https://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_log-raw
You can bring either the original query or the rewritten query to the binlog, so you have all the flexibility you need. However, be aware that --log-raw also affects the logging of passwords in the general query log: with --log-raw, passwords are written to the log files in plain text. Consider this side effect before switching --log-raw on or off.
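
For completeness, a sketch of enabling the option in the configuration file (option name as documented; adjust to your setup):

 [mysqld]
 # enable --log-raw at startup
 log-raw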

Monday, April 4, 2016

MySQL 5.7: Optimizer finds best index by expression

The optimizer in MySQL 5.7 can leverage generated columns. Generated columns physically store data in two cases: either the column is defined as STORED, or you create an index on a VIRTUAL column. The optimizer uses such an index automatically if it encounters the same expression in a statement. Let's look at an example:
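
In case you want to follow along: the post only shows the table description, so here is an assumed setup (a sketch, not the original load script):

 -- Assumed setup; the original data load is not shown in the post.
 CREATE TABLE squares (
   dx INT UNSIGNED,
   dy INT UNSIGNED
 );
 -- ...then load roughly 2 million rows of random side lengths.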

mysql> DESC squares;
+-------+------------------+------+-----+---------+-------+
| Field | Type             | Null | Key | Default | Extra |
+-------+------------------+------+-----+---------+-------+
| dx    | int(10) unsigned | YES  |     | NULL    |       |
| dy    | int(10) unsigned | YES  |     | NULL    |       |
+-------+------------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

mysql> SELECT COUNT(*) FROM squares;
+----------+
| COUNT(*) |
+----------+
|  2097152 |
+----------+
1 row in set (0.77 sec)


We have a large table with 2 million rows. Selecting rows by the surface area of squares can hardly leverage an index on dx or dy:

mysql> EXPLAIN SELECT * FROM squares WHERE dx*dy=221\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: squares
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 2092860
     filtered: 100.00
        Extra: Using where
1 row in set, 1 warning (0.00 sec)

Now let's add an index over a generated, virtual column that defines the area:

mysql> ALTER TABLE squares ADD COLUMN (area INT AS (dx*dy));
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE squares ADD INDEX (area);
Query OK, 0 rows affected (5.24 sec)
Records: 0  Duplicates: 0  Warnings: 0


Now we can run the query again:

mysql> EXPLAIN SELECT * FROM squares WHERE dx*dy=221\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: squares
   partitions: NULL
         type: ref
possible_keys: area
          key: area
      key_len: 5
          ref: const
         rows: 18682
     filtered: 100.00
        Extra: NULL
1 row in set, 1 warning (0.00 sec)

I did not change the query! The WHERE condition is still dx*dy. Nevertheless the optimizer finds the generated column, sees the index, and decides to use it.
So you can add complex indexes and benefit from them without changing any application code. That makes life much easier.
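
To illustrate the idea, here is another hypothetical index one could add in the same spirit, this time on the diagonal of each square (a sketch, not from the original test):

 -- Hypothetical: index the diagonal via a virtual generated column.
 ALTER TABLE squares ADD COLUMN (diag DOUBLE AS (SQRT(dx*dx + dy*dy)));
 ALTER TABLE squares ADD INDEX (diag);
 -- A query repeating the same expression can use the new index:
 EXPLAIN SELECT * FROM squares WHERE SQRT(dx*dx + dy*dy) < 5;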

One limitation though: the optimizer seems to recognize expressions only in the WHERE clause. It will not use the generated column and its index for the SELECT expression:

mysql> EXPLAIN SELECT SUM(dx*dy) FROM squares\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: squares
   partitions: NULL
         type: ALL
possible_keys: NULL
           key: NULL
       key_len: NULL
          ref: NULL
         rows: 2092860
     filtered: 100.00
        Extra: NULL
1 row in set, 1 warning (0.00 sec)

mysql> EXPLAIN SELECT SUM(area) FROM squares\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: squares
   partitions: NULL
         type: index
possible_keys: NULL
          key: area
      key_len: 5
          ref: NULL
         rows: 2092860
     filtered: 100.00
        Extra: Using index
1 row in set, 1 warning (0.00 sec)
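
One more practical note, based on my reading of the documentation: the expression in the query has to match the generated column definition exactly for the optimizer to map it to the index. Even a commuted form may fall back to a full scan (an untested sketch):

 -- dx*dy matches the definition of area; dy*dx possibly does not:
 EXPLAIN SELECT * FROM squares WHERE dy*dx = 221;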


CHECK constraint for MySQL - NOT NULL on generated columns

During our recent TechTour event the idea came up to implement JSON document validation not via foreign keys (as I have shown here) but by defining the generated column as NOT NULL. The generation expression must be written so that it returns NULL for invalid data.
DISCLAIMER: This has already been explored by yoku0825 in his blogpost. He deserves all the credit!

Let's do a short test:

mysql> CREATE TABLE checker ( 
    i int, 
    i_must_be_between_7_and_12 BOOLEAN 
         AS (IF(i BETWEEN 7 AND 12, true, NULL))  
         VIRTUAL NOT NULL);
Query OK, 0 rows affected (0.04 sec)

mysql> INSERT INTO checker (i) VALUES (11);
Query OK, 1 row affected (0.01 sec)

mysql> INSERT INTO checker (i) VALUES (12);
Query OK, 1 row affected (0.01 sec)

mysql> INSERT INTO checker (i) VALUES (13);
ERROR 1048 (23000): Column 'i_must_be_between_7_and_12' cannot be null

As you can see, I used the column name to create a meaningful error message when inserting invalid data. It is perfectly possible to add a generated validation column for each data column, so that you effectively run several check constraints per table (see the sketch below).
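
Such a table might look like this (a hypothetical sketch; table and column names are made up):

 -- Hypothetical sketch: one validation column per data column.
 CREATE TABLE orders (
   qty   INT,
   price DECIMAL(10,2),
   qty_must_be_positive   BOOLEAN AS (IF(qty > 0, true, NULL)) NOT NULL,
   price_must_be_positive BOOLEAN AS (IF(price > 0, true, NULL)) NOT NULL);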
Or you can even check a combination of columns:

mysql> CREATE TABLE squares (
     dx DOUBLE, 
     dy DOUBLE, 
     area_must_be_larger_than_10 BOOLEAN 
           AS (IF(dx*dy>10.0,true,NULL)) NOT NULL);
Query OK, 0 rows affected (0.05 sec)

mysql> INSERT INTO squares (dx,dy) VALUES (7,4);
Query OK, 1 row affected (0.01 sec)

mysql> INSERT INTO squares (dx,dy) VALUES (2,4);
ERROR 1048 (23000): Column 'area_must_be_larger_than_10' cannot be null

As generated columns are VIRTUAL by default, this costs no extra storage; the data volume stays the same. The expression is evaluated when inserting or updating data.
If you add a validation column to an already existing table and want to verify all existing rows, you could define the validation column as STORED (instead of the default VIRTUAL): the ALTER TABLE will fail if there are any invalid rows in the existing data set. In normal operation, however, a virtual column seems more appropriate for performance reasons. So I recommend always using VIRTUAL validation columns and checking pre-existing data separately with a small query or procedure.
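
Such a separate check can be as simple as this sketch, which queries the base column directly with the same predicate the generated column uses:

 -- Find pre-existing rows that would violate the constraint:
 SELECT COUNT(*) AS invalid_rows
 FROM checker
 WHERE NOT (i BETWEEN 7 AND 12) OR i IS NULL;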

Tuesday, February 2, 2016

Looking for the smallest possible MySQL Footprint

MySQL is known for its simplicity and small size, especially compared to other RDBMSs. But what if you want to deploy on tiny hardware? I mean something even smaller than a Raspberry Pi?
I tested three steps to make the MySQL footprint as small as possible. All my tests were compiled for Oracle Linux 7 on the x64 platform; I did not test any ARM cross compile. These are the steps:
  1. Compile my own binary
  2. Remove all unnecessary tools/files
  3. Strip symbol information from binary

Let’s take a closer look at the three steps.

Compile my own binary

MySQL is available as a source release, which lets you configure the make process. This is documented pretty well in the Reference Manual. By switching off some options I was able to reduce the binary size from 240MB to 216MB: I switched off some performance_schema features, removed some storage engines that are irrelevant in most environments anyway (like ARCHIVE, NDB, EXAMPLE, …) and removed all options for profiling. The final CMake statement is at the bottom of this post.

Remove unnecessary tools

I removed scripts and binaries from the distribution; Ted has written an interesting blog post about this. The remaining share directory contains some SQL scripts for installing additional tools. You need these at most once during setup and never again, so let's remove them. If you are happy to live without textual error messages, you can remove the errmsg-utf8.txt file as well, along with all translations in the country-specific subdirectories. And if you can live with reduced character set support, you can even remove the rest of the share directory. You end up running with essentially only the mysqld binary.

Strip symbol information from binary

By default the binaries are built with extended diagnostic information. This symbol data helps if you want to analyze a core dump, for example, and it takes a surprisingly large amount of space. You can remove the symbols from the binary with the strip(1) tool. After stripping, the binary size came down to 24MB, which is only 10% of the initial size.
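
On the shell, this step might look as follows (a sketch; the path depends on your install prefix, and the sizes are the ones measured above):

 $ ls -lh bin/mysqld   # ~240MB with symbols
 $ strip bin/mysqld
 $ ls -lh bin/mysqld   # ~24MB after stripping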

More ideas

There are some more options to choose between system libraries and the libraries bundled with the source code. Using existing libraries from the system might save a few more bytes.
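
For example, MySQL 5.7's CMake offers switches like these to prefer system libraries (whether they actually pay off depends on how you link):

        -DWITH_ZLIB=system \
        -DWITH_SSL=system \
        -DWITH_EDITLINE=system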

Summary

It is possible to make MySQL very lean for your (embedded) system. Despite all the functionality that we have added in the releases since MySQL 5.1, you get a full-featured RDBMS in only a handful of MB. Here are my final results:

  • MySQL 5.6, minimal features: 79MB, stripped 13MB
  • MySQL 5.7, default features: 240MB, stripped 24MB
  • MySQL 5.7, minimal features: 216MB, stripped 24MB (removing features brings minimal savings only)

Addendum

This is the CMake statement I used to compile MySQL 5.7 on Oracle Linux 7:
cmake . -DCMAKE_INSTALL_PREFIX=/home/testy/TQ/dist-mysql-5.7.10/ \
        -DDOWNLOAD_BOOST=1 \
        -DWITH_BOOST=/home/testy/TQ/boost/ \
        -DDISABLE_PSI_COND=1 \
        -DDISABLE_PSI_FILE=1 \
        -DDISABLE_PSI_IDLE=1 \
        -DDISABLE_PSI_MEMORY=1 \
        -DDISABLE_PSI_METADATA=1 \
        -DDISABLE_PSI_MUTEX=1 \
        -DDISABLE_PSI_RWLOCK=1 \
        -DDISABLE_PSI_SOCKET=1 \
        -DDISABLE_PSI_SP=1 \
        -DDISABLE_PSI_STAGE=1 \
        -DDISABLE_PSI_STATEMENT=1 \
        -DDISABLE_PSI_STATEMENT_DIGEST=1 \
        -DDISABLE_PSI_TABLE=1 \
        -DWITH_ARCHIVE_STORAGE_ENGINE=0 \
        -DWITH_BLACKHOLE_STORAGE_ENGINE=0 \
        -DWITH_EXAMPLE_STORAGE_ENGINE=0 \
        -DWITH_FEDERATED_STORAGE_ENGINE=0 \
        -DWITH_PARTITION_STORAGE_ENGINE=0 \
        -DWITH_PERFSCHEMA_STORAGE_ENGINE=0 \
        -DENABLED_PROFILING=0 \
        -DENABLE_DEBUG_SYNC=0 \
        -DENABLE_DTRACE=0 \
        -DENABLE_GCOV=0 \
        -DENABLE_GPROF=0 \
        -DOPTIMIZER_TRACE=0 \
        -DWITH_CLIENT_PROTOCOL_TRACING=0 \
        -DWITH_DEBUG=0 \
        -DWITH_INNODB_EXTRA_DEBUG=0

Thursday, November 26, 2015

JSON memory consumption

I got some more questions about the new JSON data type and functions during our TechTours, and I'd like to summarize the answers in this blogpost.

Memory consumption 

The binary format of the JSON data type is expected to consume more memory. But how much? I did a little test comparing a freshly loaded 25,000-row dataset stored as JSON with the same data stored as TEXT: seven top-level attributes per JSON document, average JSON_DEPTH of 5.9. Let's see:
mysql> DESC data_as_text;
+-------+---------+------+-----+---------+-------+
| Field | Type    | Null | Key | Default | Extra |
+-------+---------+------+-----+---------+-------+
| id    | int(11) | NO   | PRI | NULL    |       |
| doc   | text    | YES  |     | NULL    |       |
+-------+---------+------+-----+---------+-------+
2 rows in set (0.00 sec)

mysql> SELECT COUNT(*),AVG(JSON_LENGTH(doc)) FROM data_as_text;
+----------+-----------------------+
| COUNT(*) | AVG(JSON_LENGTH(doc)) |
+----------+-----------------------+
|    25359 |                7.0000 |
+----------+-----------------------+
1 row in set (0.81 sec)

mysql> DESC data_as_json;
+-------+---------+------+-----+---------+----------------+
| Field | Type    | Null | Key | Default | Extra          |
+-------+---------+------+-----+---------+----------------+
| id    | int(11) | NO   | PRI | NULL    | auto_increment |
| doc   | json    | NO   |     | NULL    |                |
+-------+---------+------+-----+---------+----------------+
2 rows in set (0.00 sec)

mysql> SELECT COUNT(*),AVG(JSON_LENGTH(doc)) FROM data_as_json;
+----------+-----------------------+
| COUNT(*) | AVG(JSON_LENGTH(doc)) |
+----------+-----------------------+
|    25359 |                7.0000 |
+----------+-----------------------+
1 row in set (0.08 sec)

mysql> select name,allocated_size/1024/1024 AS "size in MB" from information_schema.innodb_sys_tablespaces where name like "%temp%";
+-------------------+-------------+
| name              | size in MB  |
+-------------------+-------------+
| temp/data_as_json | 23.00390625 |
| temp/data_as_text | 22.00390625 |
+-------------------+-------------+
2 rows in set (0.00 sec)
The increased memory consumption is 1/22 in this case, which is roughly 4.5%. At the same time you see the benefit: the full table scan with a JSON operation runs about 90% faster with the JSON datatype (0.08 vs. 0.81 seconds above).
Don't take these numbers as universal. Of course they depend on the number of JSON attributes, the character set and other factors; they are just a rough indication. If you want all the details, look at the JSON architecture in WL#8132.

Monday, November 23, 2015

Document validation of JSON columns in MySQL

Starting with the new release MySQL 5.7 there is support for storing JSON documents in a column. During our recent Tech Tour events we got questions about document validation, i.e. ensuring that a JSON document has a certain structure. (Funny. It all started with the idea of being schema-free. Now people seem to need schema enforcement.)
I have two ideas for implementing schema validation for JSON columns: the first leverages generated columns together with a foreign key; the second implements a trigger. Today I want to focus on the generated columns and foreign keys.
When defining foreign keys with generated columns there are two limitations we need to be aware of:
  • Foreign keys require indexes. JSON columns cannot be indexed. We need to leverage other types.
  • Only STORED generated columns are supported for foreign keys.
So here is an example of an address table that leverages JSON to store an arbitrary number of phone number entries per row. In fact I use a mixed model: relational columns where a strict model should be enforced (e.g. name is NOT NULL), and a document column so that the phone numbers are freer to define.

 CREATE TABLE `people` (  
 `name` varchar(30) NOT NULL,  
 `firstname` varchar(30) DEFAULT NULL,  
 `birthdate` date DEFAULT NULL,  
 `phones` json DEFAULT NULL,  
 `phonekeys` varchar(30) GENERATED ALWAYS AS (json_keys(phones)) STORED,  
 KEY `phonekeys` (`phonekeys`));  


The generated column phonekeys is a string that includes the types of phone numbers for each row. Some sample data:

 mysql> INSERT INTO people (name,firstname,birthdate,phones)  
 VALUES ("Plumber", "Joe, the", "1972-05-05",'{"work": "+1(555)24680"}');  
 Query OK, 1 row affected (0.00 sec)  
 ...some more inserts...  
 mysql> SELECT * FROM people;
 +---------+-----------+------------+--------------------------------------------------------+-----------------------+
 | name    | firstname | birthdate  | phones                                                 | phonekeys             |
 +---------+-----------+------------+--------------------------------------------------------+-----------------------+
 | Doe     | John      | 1995-04-17 | {"mobile": "+491715555555", "private": "+49305555555"} | ["mobile", "private"] |
 | Dian    | Mary      | 1963-12-12 | {"work": "+43987654321"}                               | ["work"]              |
 | Plumber | Joe, the  | 1972-05-05 | {"work": "+1(555)24680"}                               | ["work"]              |
 +---------+-----------+------------+--------------------------------------------------------+-----------------------+
 3 rows in set (0.00 sec)


The column phonekeys gets populated automatically.
To check that we use "correct" attributes in our JSON object we can now create a table that contains the valid JSON keys:

 mysql> CREATE TABLE `valid_keys` (
     ->   `keylist` varchar(30) NOT NULL,
     ->   PRIMARY KEY (`keylist`)
     -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
 Query OK, 0 rows affected (0.01 sec)
 ... after some inserts...  
 mysql> SELECT * FROM valid_keys;
 +-------------------------------+
 | keylist                       |
 +-------------------------------+
 | ["mobile", "private", "work"] |
 | ["mobile", "private"]         |
 | ["work"]                      |
 +-------------------------------+
 3 rows in set (0.00 sec)

Now we can define a foreign key with the people table as a child table:
mysql> alter table people add foreign key (phonekeys) references valid_keys (keylist);

This should enforce that JSON documents inserted into the people table have an attribute list that matches an entry in the valid_keys table. Let's try:

mysql> INSERT INTO people (name,phones) VALUES ("me", JSON_OBJECT("work","12243"));
Query OK, 1 row affected (0.01 sec)

mysql> INSERT INTO people (name,phones) VALUES ("my friend", JSON_OBJECT("home","12243"));
ERROR 1452 (23000): Cannot add or update a child row: a foreign key constraint fails (`mario`.`people`, CONSTRAINT `people_ibfk_1` FOREIGN KEY (`phonekeys`) REFERENCES `valid_keys` (`keylist`))
mysql>


Works fine. "home" is not an allowed attribute. I can leverage the foreign key to make sure my phone numbers match a certain attribute list. However, it is not perfectly simple to use: with five different allowed attributes in an arbitrary order you would have to add all permutations to the valid_keys table. With five attributes you end up with 6! permutations ("not defining an attribute" is also an option, hence six), which results in 720 rows in valid_keys. But it is a first start. For more complex examples the trigger-based ideas might be more favorable.
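
For completeness, here is a rough, untested sketch of what such a trigger could look like (the CAST-based comparison against the keylist strings is my assumption, not from the original post):

 DELIMITER //
 CREATE TRIGGER people_phones_check BEFORE INSERT ON people
 FOR EACH ROW
 BEGIN
   -- Reject documents whose key list is not whitelisted in valid_keys.
   IF NOT EXISTS (SELECT 1 FROM valid_keys
                  WHERE keylist = CAST(JSON_KEYS(NEW.phones) AS CHAR)) THEN
     SIGNAL SQLSTATE '45000'
       SET MESSAGE_TEXT = 'phones does not match any allowed attribute list';
   END IF;
 END//
 DELIMITER ;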

Thursday, April 9, 2015

Secondary Indexes on XML BLOBs in MySQL 5.7

When storing XML documents in a BLOB or TEXT column there was no way to create indexes on individual XML elements or attributes. With the new auto generated columns in MySQL 5.7 (first Release Candidate available now!) this has changed! Let me give you an example, using the following table:
 mysql> SELECT * FROM country\G  
 *************************** 1. row ***************************  
 docid: 1  
  doc: <country>  
     <name>Germany</name>  
     <population>82164700</population>  
     <surface>357022.00</surface>  
     <city name="Berlin"><population></population></city>  
     <city name="Frankfurt"><population>643821</population></city>  
     <city name="Hamburg"><population>1704735</population></city>  
 </country>  
 *************************** 2. row ***************************  
 docid: 2  
  doc: <country>  
     <name>France</name>  
     <surface></surface>  
     <city name="Paris"><population>445452</population></city>  
     <city name="Lyon"></city>  
     <city name="Brest"></city>  
     <population>59225700</population>  
 </country>  
 *************************** 3. row ***************************  
 docid: 3  
  doc: <country>  
     <population>10236000</population>  
     <name>Belarus</name>  
     <city name="Brest"><population></population></city>  
 </country>  
 *************************** 4. row ***************************  
 docid: 4  
  doc: <country>  
     <name>Pitcairn</name>  
     <population>52</population>  
 </country>  
 4 rows in set (0,00 sec)  

The table has only two columns: docid and doc. Since MySQL 5.1 it has been possible to extract the population value with XML functions like ExtractValue(). But sorting the documents by the population of a country was impossible because population is not a dedicated column in the table. Starting with the MySQL 5.7.6 DMR we can add an auto generated column that contains only the population. Let's create that column:

 mysql> ALTER TABLE country ADD COLUMN population INT UNSIGNED AS (CAST(ExtractValue(doc,"/country/population") AS UNSIGNED INTEGER)) STORED;
 Query OK, 4 rows affected (0,21 sec)
 Records: 4  Duplicates: 0  Warnings: 0

 mysql> ALTER TABLE country ADD INDEX (population);
 Query OK, 0 rows affected (0,22 sec)
 Records: 0  Duplicates: 0  Warnings: 0

 mysql> SELECT docid FROM country ORDER BY population ASC;
 +-------+
 | docid |
 +-------+
 |     4 |
 |     3 |
 |     2 |
 |     1 |
 +-------+
 4 rows in set (0,00 sec)

The population value is extracted automatically from each document, stored in a dedicated column, and the index is maintained. Really simple now. Note that the population values of the cities are NOT extracted: the XPath expression /country/population does not match the nested city elements.
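
If you needed a city value as well, a more specific XPath expression should do it. A hypothetical sketch (column name made up, same mechanism as above):

 -- Hypothetical: extract the population of the first city element.
 ALTER TABLE country ADD COLUMN first_city_population INT UNSIGNED
   AS (CAST(ExtractValue(doc,"/country/city[1]/population") AS UNSIGNED INTEGER)) STORED;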

What happens if we want to look for city names? Each document may contain several city names. First let’s extract the city names with the XML function and store them in an auto generated column again:

 mysql> ALTER TABLE country ADD COLUMN cities TEXT AS (ExtractValue(doc,"/country/city/@name")) STORED;  
 Query OK, 4 rows affected (0,62 sec)  
 Records: 4 Duplicates: 0 Warnings: 0  
 mysql> SELECT docid,cities FROM country;  
 +-------+--------------------------+  
 | docid | cities                   |  
 +-------+--------------------------+  
 |     1 | Berlin Frankfurt Hamburg |  
 |     2 | Paris Lyon Brest         |  
 |     3 | Brest                    |  
 |     4 |                          |  
 +-------+--------------------------+  
 4 rows in set (0,01 sec)  

The XML function ExtractValue extracts the name attribute of all cities and concatenates the values with whitespace. That makes it easy for us to leverage the FULLTEXT index in InnoDB:

 mysql> ALTER TABLE country ADD FULLTEXT (cities);  
 mysql> SELECT docid FROM country WHERE MATCH(cities) AGAINST ("Brest");  
 +-------+  
 | docid |  
 +-------+  
 |     2 |  
 |     3 |  
 +-------+  
 2 rows in set (0,01 sec)  

All XML calculations are done automatically when storing data. Let’s add another XML document and query again:

 mysql> INSERT INTO country (doc) VALUES ('<country><name>USA</name><city name="New York"/><population>278357000</population></country>');  
 Query OK, 1 row affected (0,00 sec)  
 mysql> SELECT * FROM country WHERE MATCH(cities) AGAINST ("New York");  
 +-------+----------------------------------------------------------------------------------------------+------------+----------+  
 | docid | doc                                                                                          | population | cities   |  
 +-------+----------------------------------------------------------------------------------------------+------------+----------+  
 |     5 | <country><name>USA</name><city name="New York"/><population>278357000</population></country> |  278357000 | New York |  
 +-------+----------------------------------------------------------------------------------------------+------------+----------+  
 1 row in set (0,00 sec)  

Does this also work with JSON documents? There are JSON functions available in a labs release, but they are currently implemented as user defined functions (UDFs), and UDFs are not supported in auto generated columns. So we have to wait until the JSON functions are built into MySQL.
UPDATE: See this blogpost. There is a first labs release that supports JSON functional indexes.

What did we learn? tl;dr

With MySQL 5.7.6 it is possible to automatically create columns from XML elements or attributes and maintain indexes on that data. Search is optimized and MySQL does all the work for you. And Brest is not only in France but also a city in Belarus.