How to DELETE lots of rows from a large table? Here is an example of purging items older than 30 days:
DELETE FROM tbl WHERE ts < CURRENT_DATE() - INTERVAL 30 DAY
If there are millions of rows in the table, this statement may take minutes, maybe hours.
Any suggestions on how to speed this up?
To be ready for a crash, a transactional engine such as InnoDB will record what it is doing to a log file. To make that somewhat less costly, the log file is sequentially written. If the log files you have (there are usually 2) fill up because the delete is really big, then the undo information spills into the actual data blocks, leading to even more I/O.
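To see how your redo log is configured (a quick check only; the variable names differ somewhat between versions, so treat this as a sketch):

SHOW GLOBAL VARIABLES LIKE 'innodb_log_file%';
    -- shows innodb_log_file_size and innodb_log_files_in_group (the "usually 2" mentioned above)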
Deleting in chunks avoids some of this excess overhead.
Limited benchmarking of total delete elapsed time bears this out.
Solutions
The idea here is to have a sliding window of partitions. Let's say you need to purge news articles after 30 days. The "partition key" would be the datetime (or timestamp) that is to be used for purging, and the PARTITIONs would be "range". Every night, a cron job would come along and build a new partition for the next day, and drop the oldest partition.
Dropping a partition is essentially instantaneous, much faster than deleting that many rows. However, you must design the table so that the entire partition can be dropped. That is, you cannot have some items living longer than others.
PARTITION tables have a lot of restrictions, some are rather weird. You can either have no UNIQUE (or PRIMARY) key on the table, or every UNIQUE key must include the partition key. In this use case, the partition key is the datetime. It should not be the first part of the PRIMARY KEY (if you have a PRIMARY KEY).
You can PARTITION InnoDB or MyISAM tables.
Since two news articles could have the same timestamp, you cannot assume the partition key is sufficient for uniqueness of the PRIMARY KEY, so you need to find something else to help with that.
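As a minimal sketch of such a sliding window (the table, column, and partition names here are only illustrative; adapt them to your schema):

CREATE TABLE news (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    ts DATETIME NOT NULL,
    title VARCHAR(200) NOT NULL,
    PRIMARY KEY (id, ts)       -- every UNIQUE key must include the partition key
)
PARTITION BY RANGE (TO_DAYS(ts)) (
    PARTITION p20190101 VALUES LESS THAN (TO_DAYS('2019-01-02')),
    PARTITION p20190102 VALUES LESS THAN (TO_DAYS('2019-01-03')),
    PARTITION pFuture   VALUES LESS THAN MAXVALUE
);

-- Nightly cron job: carve tomorrow's partition out of pFuture, then drop the oldest.
ALTER TABLE news REORGANIZE PARTITION pFuture INTO (
    PARTITION p20190103 VALUES LESS THAN (TO_DAYS('2019-01-04')),
    PARTITION pFuture   VALUES LESS THAN MAXVALUE
);
ALTER TABLE news DROP PARTITION p20190101;   -- essentially instantaneous

The nightly job simply substitutes the next day's dates into the REORGANIZE/DROP pair.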
Reference implementation for Partition maintenance
Although the discussion in this section talks about DELETE, it can be used for any other "chunking", such as, say, UPDATE, or SELECT plus some complex processing.
(This discussion applies to both MyISAM and InnoDB.)
When deleting in chunks, be sure to avoid doing a table scan. The code below is good at that; it scans no more than 1001 rows in any one query. (The 1000 is tunable.)
Assuming you have news articles that need to be purged, and you have a schema something like
CREATE TABLE tbl (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    ts TIMESTAMP,
    ...
    PRIMARY KEY(id)
);
Then, this pseudo-code is a good way to delete the rows older than 30 days:
@a = 0
LOOP
    DELETE FROM tbl
        WHERE id BETWEEN @a AND @a+999
          AND ts < DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    SET @a = @a + 1000
    sleep 1   -- be a nice guy
UNTIL end of table
Most of the caveats of this approach are covered below.
If there are big gaps in `id` values (and there will be after the first purge), then this variant works better:
@a = SELECT MIN(id) FROM tbl
LOOP
    SELECT @z := id FROM tbl WHERE id >= @a ORDER BY id LIMIT 1000,1
    If @z is null exit LOOP   -- last chunk
    DELETE FROM tbl
        WHERE id >= @a
          AND id <  @z
          AND ts < DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    SET @a = @z
    sleep 1   -- be a nice guy, especially in replication
ENDLOOP

# Last chunk:
DELETE FROM tbl
    WHERE id >= @a
      AND ts < DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
That code works whether id is numeric or character, and it mostly works even if id is not UNIQUE. With a non-unique key, the risk is that you could be caught in a loop whenever @z==@a. That can be detected and fixed thus:
...
SELECT @z := id FROM tbl WHERE id >= @a ORDER BY id LIMIT 1000,1
If @z == @a
    SELECT @z := id FROM tbl WHERE id > @a ORDER BY id LIMIT 1
...
The drawback is that there could be more than 1000 items with a single id. In most practical cases, that is unlikely.
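If you prefer to run the loop inside the server, here is one possible stored-procedure rendering of the pseudo-code above. It is only a sketch: the procedure name is made up, and it assumes the example schema (numeric `id` as PRIMARY KEY, `ts` column) and the 30-day cutoff; run it from the mysql client, which understands DELIMITER.

DELIMITER //
CREATE PROCEDURE purge_tbl_in_chunks()
BEGIN
    DECLARE a INT UNSIGNED;
    DECLARE z INT UNSIGNED;

    SELECT MIN(id) INTO a FROM tbl;

    purge_loop: LOOP
        -- id of the row 1000 ids ahead of the current position; NULL means last chunk
        SET z = ( SELECT id FROM tbl WHERE id >= a ORDER BY id LIMIT 1000,1 );
        IF z IS NULL THEN
            LEAVE purge_loop;
        END IF;
        DELETE FROM tbl
            WHERE id >= a AND id < z
              AND ts < CURRENT_DATE() - INTERVAL 30 DAY;
        SET a = z;
        DO SLEEP(1);   -- be a nice guy, especially in replication
    END LOOP;

    -- Last chunk:
    DELETE FROM tbl
        WHERE id >= a
          AND ts < CURRENT_DATE() - INTERVAL 30 DAY;
END //
DELIMITER ;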
If you do not have a primary (or unique) key defined on the table, and you have an INDEX on ts, then consider
LOOP
    DELETE FROM tbl
        WHERE ts < DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
        ORDER BY ts   -- to use the index, and to make it deterministic
        LIMIT 1000
UNTIL no rows deleted
This technique is NOT recommended because the LIMIT leads to a warning on replication about it being non-deterministic (discussed below).
To perform the chunked deletes recommended above, you need a way to walk through the PRIMARY KEY. This can be difficult if the PK has more than one column in it.
To do an efficient compound 'greater than':
Assume that you left off at ($g, $s) (and have handled that row):
INDEX(Genus, species)

SELECT/DELETE ...
    WHERE Genus >= '$g' AND ( species > '$s' OR Genus > '$g' )
    ORDER BY Genus, species
    LIMIT ...
Addendum: The AND/OR form above works well in older versions of MySQL; this works better in MariaDB and newer versions of MySQL:
WHERE ( Genus = '$g' AND species > '$s' ) OR Genus > '$g'
A caution about using @variables for strings. If, instead of '$g', you use @g, you need to be careful to make sure that @g has the same CHARACTER SET and COLLATION as `Genus`, else there could be a charset/collation conversion on the fly that prevents the use of the INDEX. Using the INDEX is vital for performance. It may require a COLLATE clause on SET NAMES and/or the @g in the SELECT.
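A sketch of one way to avoid that conversion (the utf8mb4 / utf8mb4_general_ci names, the table name, and the literal values are assumptions; use whatever `Genus` is actually declared with, per SHOW CREATE TABLE):

SET NAMES utf8mb4 COLLATE utf8mb4_general_ci;   -- match the column's charset/collation
SET @g = 'Felis', @s = 'catus';                 -- @g and @s now carry that charset/collation
SELECT Genus, species
    FROM tbl
    WHERE ( Genus = @g AND species > @s )
       OR Genus > @g
    ORDER BY Genus, species
    LIMIT 1000;
-- If the connection's charset cannot be changed, a COLLATE clause can instead be
-- applied to @g/@s directly in the WHERE.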
Reclaiming the disk space freed by a big delete is costly. (Switch to the PARTITION solution if practical.)
MyISAM leaves gaps in the table (.MYD file); OPTIMIZE TABLE will reclaim the freed space after a big delete. But it may take a long time and lock the table.
InnoDB is block-structured, organized in a BTree on the PRIMARY KEY. An isolated deleted row leaves a block less full. A lot of deleted rows can lead to coalescing of adjacent blocks. (Blocks are normally 16KB - see innodb_page_size.)
In InnoDB, there is no practical way to reclaim the freed space from ibdata1, other than to reuse the freed blocks eventually.
The only option with innodb_file_per_table = 0 is to dump ALL tables, remove ibdata*, restart, and reload. That is rarely worth the effort and time.
InnoDB, even with innodb_file_per_table = 1, won't give space back to the OS, but at least it is only one table's file to rebuild. In this case, something like this should work:
CREATE TABLE new LIKE main;
INSERT INTO new SELECT * FROM main;      -- This could take a long time
RENAME TABLE main TO old, new TO main;   -- Atomic swap
DROP TABLE old;                          -- Space freed up here
You do need enough disk space for both copies. You must not write to the table during the process.
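A rough way to check how much space the copy will need (the numbers in information_schema are approximate, and 'main' is just the example table name):

SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS approx_mb
    FROM information_schema.TABLES
    WHERE table_schema = DATABASE()
      AND table_name = 'main';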
The following technique can be used for any combination of: deleting a large portion of the table, adding PARTITIONing, converting to innodb_file_per_table = ON, and defragmenting the table.
This can be done by chunking, or (if practical) all at once:
-- Optional:  SET GLOBAL innodb_file_per_table = ON;
CREATE TABLE New LIKE Main;
-- Optional:  ALTER TABLE New ADD PARTITION BY RANGE ...;
-- Do this INSERT..SELECT all at once, or with chunking:
INSERT INTO New
    SELECT * FROM Main
        WHERE ...;   -- just the rows you want to keep
RENAME TABLE Main TO Old, New TO Main;
DROP TABLE Old;      -- Space freed up here
Any UPDATE, DELETE, etc., with LIMIT that is replicated to slaves (via Statement-Based Replication) may cause inconsistencies between the Master and Slaves. This is because the actual order of the records discovered for updating/deleting may be different on the slave, thereby leading to a different subset being modified. To be safe, add an ORDER BY to such statements. Moreover, be sure the ORDER BY is deterministic -- that is, the fields/expressions in the ORDER BY are unique.
An example of an ORDER BY that does not quite work: Assume there are multiple rows for each 'date':
DELETE FROM tbl ORDER BY date LIMIT 111
Given that id is the PRIMARY KEY (or UNIQUE), this will be safe:
DELETE FROM tbl ORDER BY date, id LIMIT 111
Unfortunately, even with the ORDER BY, MySQL has a deficiency that leads to a bogus warning in mysqld.err. See Spurious "Statement is not safe to log in statement format." warnings
Some of the above code avoids this spurious warning by doing
SELECT @z := ... LIMIT 1000,1;    -- not replicated
DELETE ... BETWEEN @a AND @z;     -- deterministic
That pair of statements guarantees no more than 1000 rows are touched, not the whole table.
If you KILL a DELETE (or any other query) on the master in the middle of its execution, what will be replicated?
If it is InnoDB, the query should be rolled back. (Exceptions??)
In MyISAM, rows are DELETEd as the statement is executed, and there is no provision for ROLLBACK. Some of the rows will be deleted, some won't. You probably have no clue of how many were deleted. On a single server, simply run the delete again. The delete is put into the binlog, but with error 1317. Since replication is supposed to keep the master and slaves in sync, and since it has no clue of how to do that, replication stops and waits for manual intervention. In an HA (High Availability) system using replication, this is a minor disaster. Meanwhile, you need to go to each slave and verify that it is stuck for this reason, then do:
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;
Then (presumably) re-executing the DELETE will finish the aborted task.
(That is yet another reason to move all your tables from MyISAM to InnoDB.)
TBD -- "Row Based Replication" may impact this discussion.
The tips in this document apply to MySQL, MariaDB, and Percona.
Rick James graciously allowed us to use this article in the Knowledge Base.
Rick James' site has other useful tips, how-tos, optimizations, and debugging tips.
Original source: http://mysql.rjweb.org/doc.php/deletebig