This article describes different techniques for inserting data quickly into MariaDB.
When inserting new data into MariaDB, the things that take time are (in order of importance):
- Syncing data to disk (as part of the end of transactions)
- Adding new keys (the larger the index, the longer it takes to keep it updated)
- Checking against foreign keys (if they exist)
- Adding rows to the storage engine
- Sending data to the server
The following describes the different techniques (again, in order of importance) you can use to quickly insert data into a table.
You can temporarily disable updating of non-unique indexes. This is mostly useful when there are zero (or very few) rows in the table into which you are inserting data.
ALTER TABLE table_name DISABLE KEYS;
BEGIN;
... inserting data with INSERT or LOAD DATA ...
COMMIT;
ALTER TABLE table_name ENABLE KEYS;
In many storage engines (at least MyISAM and Aria),
ENABLE KEYS works by scanning through the row data and collecting keys, sorting them, and then creating the index blocks. This is an order of magnitude faster than creating the index one row at a time and it also uses less key buffer memory.
When inserting large amounts of data, integrity checks are significantly time-consuming. You can disable checking of
UNIQUE indexes and of foreign keys with the
unique_checks and
foreign_key_checks system variables:
SET @@session.unique_checks = 0;
SET @@session.foreign_key_checks = 0;
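A minimal sketch of the full pattern (the table and file names are placeholders): disable the checks, load the data, then restore the checks so later statements are validated again.

```sql
-- my_table and /tmp/data.txt are placeholder names.
SET @@session.unique_checks = 0;
SET @@session.foreign_key_checks = 0;

LOAD DATA INFILE '/tmp/data.txt' INTO TABLE my_table;

-- Restore the checks once the bulk load is done.
-- Note: rows inserted while the checks were off are NOT re-verified.
SET @@session.unique_checks = 1;
SET @@session.foreign_key_checks = 1;
```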
For XtraDB/InnoDB tables, the AUTO_INCREMENT lock mode can be set to 2 ("interleaved"), which is the fastest setting. Note that innodb_autoinc_lock_mode is not a dynamic variable, so it has to be set at server startup, for example in my.cnf:
[mysqld]
innodb_autoinc_lock_mode=2
The fastest way to insert data into MariaDB is through the
LOAD DATA INFILE command.
The simplest form of the command is:
LOAD DATA INFILE 'file_name' INTO TABLE table_name;
You can also read a file locally on the machine where the client is running by using:
LOAD DATA LOCAL INFILE 'file_name' INTO TABLE table_name;
This is not as fast as reading the file on the server side, but the difference is not that big.
LOAD DATA INFILE is very fast because:
- there is no parsing of SQL
- data is read in big blocks
- if the table is empty at the beginning of the operation, all non-unique indexes are disabled during the operation
- the storage engine is told to cache rows first and then insert them in big blocks (at least MyISAM and Aria support this)
Because of the above speed advantages, when you need to insert many rows at a time it may be faster to create a file locally, add the rows there, and then use
LOAD DATA INFILE to load them, compared to inserting the rows with
INSERT statements.
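For example, a CSV-style file can be loaded like this (the file path, terminators, and column list are illustrative assumptions, not part of the original text):

```sql
-- /tmp/rows.csv and my_table are placeholder names.
LOAD DATA INFILE '/tmp/rows.csv'
  INTO TABLE my_table
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  IGNORE 1 LINES            -- skip a header row, if the file has one
  (id, data);
```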
You can import many files in parallel with mysqlimport. For example:
mysqlimport --use-threads=10 database text-file-name [text-file-name...]
When doing many inserts in a row, you should wrap them with
BEGIN / COMMIT to avoid doing a full transaction (which includes a disk sync) for every row. For example, committing every 1000 inserts will speed up your inserts by almost 1000 times.
BEGIN;
INSERT ...
INSERT ...
COMMIT;
BEGIN;
INSERT ...
INSERT ...
COMMIT;
...
The reason why you may want many
BEGIN/COMMIT blocks instead of just one is that the former uses up less transaction log space.
You can insert many rows at once with multi-value row inserts:
INSERT INTO table_name VALUES (1, "row 1"), (2, "row 2"), ...;
The limit for how much data you can have in one statement is controlled by the
max_allowed_packet server variable.
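To inspect or raise that limit (the 64M value below is just an example):

```sql
-- Check the current limit.
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Raise it for new connections (example value; requires a privileged user).
-- The client has its own max_allowed_packet that may also need raising.
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;
```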
If you need to insert data into several tables at once, the best way to do so is to enable multi-statement support and send many inserts to the server in a single call:
INSERT INTO table_name_1 (auto_increment_key, data) VALUES (NULL, "row 1");
INSERT INTO table_name_2 (auto_increment, reference, data) VALUES (NULL, LAST_INSERT_ID(), "row 2");
LAST_INSERT_ID() is a function that returns the last
auto_increment value inserted.
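A minimal illustration of how it ties related rows together (parent and child are placeholder tables with AUTO_INCREMENT primary keys):

```sql
-- parent/child are placeholder tables; id is AUTO_INCREMENT in both.
INSERT INTO parent (id, name) VALUES (NULL, 'p1');
-- LAST_INSERT_ID() now returns the id just generated for 'p1',
-- so the child row can reference it without a separate SELECT.
INSERT INTO child (id, parent_id, name) VALUES (NULL, LAST_INSERT_ID(), 'c1');
```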
By default, the command line
mysql client will send the above as multiple statements.
To test this in the
mysql client you have to do:
delimiter ;;
select 1; select 2;;
delimiter ;
Note: for multi-query statements to work, your client must specify the
CLIENT_MULTI_STATEMENTS flag to mysql_real_connect().
The following server variables can be tuned to speed up inserts:
|Option|Description|
|innodb_buffer_pool_size|Increase this if you have many indexes in InnoDB/XtraDB tables|
|key_buffer_size|Increase this if you have many indexes in MyISAM tables|
|max_allowed_packet|Increase this to allow bigger multi-insert statements|
|read_buffer_size|Read block size when reading a file with LOAD DATA INFILE|
See mysqld options for the full list of server variables.
© 2019 MariaDB
Licensed under the Creative Commons Attribution 3.0 Unported License and the GNU Free Documentation License.