Queue items to allow later processing.
The queue system allows placing items in a queue and processing them later. The system tries to ensure that only one consumer processes any given item at a time.
Before a queue can be used, it needs to be created with Drupal\Core\Queue\QueueInterface::createQueue().
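For illustration, a minimal sketch of obtaining and creating a queue through the \Drupal::queue() wrapper; the queue name 'example_tasks' is a hypothetical placeholder:

```php
<?php

// Obtain a queue object from the queue factory. The queue name
// 'example_tasks' is a hypothetical placeholder.
$queue = \Drupal::queue('example_tasks');

// Create the queue before first use. For the default DatabaseQueue this is
// a no-op (all queues share one database table), but other backends may
// need explicit setup.
$queue->createQueue();
```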
Items can be added to the queue by passing an arbitrary data object to Drupal\Core\Queue\QueueInterface::createItem().
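A sketch of enqueueing work, reusing the hypothetical 'example_tasks' queue; the payload shape is arbitrary:

```php
<?php

$queue = \Drupal::queue('example_tasks');

// Any serializable value works as the payload; only this data is
// guaranteed to come back on the claimed item.
$queue->createItem(['nid' => 42, 'action' => 'reindex']);
```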
To process an item, call Drupal\Core\Queue\QueueInterface::claimItem(), specifying how long a lease you want for working on that item. When you are finished processing, delete the item by calling Drupal\Core\Queue\QueueInterface::deleteItem(). If the consumer dies, the Drupal\Core\Queue\QueueInterface implementation will make the item available again once the lease expires, and another consumer can then receive it by calling Drupal\Core\Queue\QueueInterface::claimItem(). Because of this, the processing code should be aware that an item might be handed over for processing more than once.
The $item object used by Drupal\Core\Queue\QueueInterface can contain arbitrary metadata, depending on the implementation. Systems using the interface should rely only on the data property, which contains the information passed to Drupal\Core\Queue\QueueInterface::createItem(). The full queue item returned by Drupal\Core\Queue\QueueInterface::claimItem() needs to be passed to Drupal\Core\Queue\QueueInterface::deleteItem() once processing is completed, as shown in the sketch below.
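Putting the claim/process/delete cycle together, a minimal worker sketch; the process_task() callable is a hypothetical placeholder:

```php
<?php

$queue = \Drupal::queue('example_tasks');

// Claim items with a 60-second lease; claimItem() returns FALSE when the
// queue is empty or all remaining items are leased out.
while ($item = $queue->claimItem(60)) {
  try {
    // Rely only on the data property; it holds whatever createItem()
    // received. process_task() is a hypothetical worker function.
    process_task($item->data);
    // Pass the full item object back once processing succeeded.
    $queue->deleteItem($item);
  }
  catch (\Exception $e) {
    // Release the item immediately so another consumer can claim it
    // without waiting for the lease to expire.
    $queue->releaseItem($item);
  }
}
```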
There are two kinds of queue backends available: reliable and non-reliable. A reliable backend preserves the order of messages and guarantees that every item will be executed at least once. A non-reliable backend makes only a best effort to preserve message order and to execute each item at least once, so there is a small chance that some items get lost. For example, some distributed backends such as Amazon SQS manage jobs for a large set of producers and consumers, where strict FIFO ordering is unlikely to be preserved. Another example is an in-memory queue backend, which might lose items if it crashes; however, such a backend can handle significantly more writes than a reliable queue, and for many tasks that matters more. See aggregator_cron() for an example of how to use a non-reliable queue effectively. Collecting Twitter statistics is another case where the small possibility of losing a few items is insignificant next to the queue's ability to keep up with writes.

As described in the processing notes above, whether or not the queue is reliable, the processing code should be aware that an item might be handed over for processing more than once (because the processing code might time out before it finishes).
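When at-least-once execution matters, a reliable backend can be requested explicitly. A sketch, again using the hypothetical 'example_tasks' queue; the settings.php keys shown are the queue factory's standard lookups:

```php
<?php

// Passing TRUE as the second argument asks the queue factory for a backend
// implementing Drupal\Core\Queue\ReliableQueueInterface; the default
// DatabaseQueue already qualifies.
$queue = \Drupal::queue('example_tasks', TRUE);

// The backend can be swapped per queue or globally in settings.php by
// pointing these settings at a queue factory service:
// $settings['queue_reliable_service_example_tasks'] = 'queue.database';
// $settings['queue_service_example_tasks'] = 'queue.database';
// $settings['queue_default'] = 'queue.database';
```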
Classes

Name | Location | Description
---|---|---
Batch | core/lib/Drupal/Core/Queue/Batch.php | Defines a batch queue handler used by the Batch API. |
BatchMemory | core/lib/Drupal/Core/Queue/BatchMemory.php | Defines a batch queue handler used by the Batch API for non-progressive batches. |
DatabaseQueue | core/lib/Drupal/Core/Queue/DatabaseQueue.php | Default queue implementation. |
Memory | core/lib/Drupal/Core/Queue/Memory.php | Static queue implementation. |
Interfaces

Name | Location | Description
---|---|---
QueueInterface | core/lib/Drupal/Core/Queue/QueueInterface.php | Interface for a queue. |
ReliableQueueInterface | core/lib/Drupal/Core/Queue/ReliableQueueInterface.php | Reliable queue interface. |