I'd like to guard against concurrent conflicting inserts under a certain condition. I'm building a system that lets users select an available time slot and reserve it, while preventing two users from reserving the same slot. The condition involves a couple of constraints, but for simplicity's sake let's assume the following:
```sql
CREATE TABLE reservations (
    id INT AUTO_INCREMENT PRIMARY KEY,
    val VARCHAR(255) NOT NULL
);
```
where val remains unique (in reality the "uniqueness" involves datetime range checks and more).
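For context, the real condition might look something like an overlap check; the `starts_at`/`ends_at` columns below are invented for illustration only:

```sql
-- Hypothetical sketch of the real "uniqueness" condition: a new slot
-- conflicts when its datetime range overlaps any existing reservation.
-- starts_at/ends_at are made-up columns, not part of the schema above.
SELECT COUNT(*) FROM reservations
WHERE starts_at < :new_ends_at
  AND ends_at > :new_starts_at;
```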
Optimistic locking
I had assumed that the following would suffice:
```sql
INSERT INTO reservations (val) SELECT :val FROM reservations
WHERE 0 = (SELECT COUNT(*) FROM reservations WHERE val = :val) LIMIT 1
```
But this causes two problems in some of the concurrency tests I've performed (MariaDB using Go):
- It happens that one insert succeeds, but the others return `Error 1467 (HY000): Failed to read auto-increment value from storage engine` upon INSERT
- Sometimes it inserts the same value twice! There doesn't seem to be a lock between the SELECT and the INSERT, nor is the entire statement atomic. Adding `FOR UPDATE` to the inner SELECT doesn't help either.
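One workaround I'm aware of (an assumption on my part, not something the tests above cover) is to serialize the check-and-insert on the value itself with a MariaDB/MySQL named lock, since `GET_LOCK` locks an arbitrary string rather than a row:

```sql
-- Acquire an application-level lock keyed on the value; waits up to 5 s.
-- Named locks are per-connection, so release on the same connection.
SELECT GET_LOCK(CONCAT('reservation:', :val), 5);

-- The NOT EXISTS check is now serialized with respect to other sessions
-- that acquire the same lock name before inserting.
INSERT INTO reservations (val)
SELECT :val FROM DUAL
WHERE NOT EXISTS (SELECT 1 FROM reservations WHERE val = :val);

SELECT RELEASE_LOCK(CONCAT('reservation:', :val));
```

This only works if every writer goes through the same lock name, and it doesn't protect against sessions that insert without taking the lock.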
Optimistic insert
The other option I figured is the following:
```sql
SELECT COUNT(*) FROM reservations WHERE val = :val
-- application: check count = 0 beforehand to avoid inserting a row and bumping AUTO_INCREMENT unnecessarily
INSERT INTO reservations (val) VALUES (:val)
-- application: store the last insert ID
SELECT COUNT(*) FROM reservations WHERE val = :val
-- application: check count = 1 afterwards to detect conflicts
-- only if count > 1:
DELETE FROM reservations WHERE id = :last_insert_id
```
I believe the following considerations apply under concurrent use:
- We can exclude the first SELECT from consideration since it guarantees nothing; it just prevents 99% of the bad inserts beforehand
- I believe the individual INSERT, SELECT and DELETE statements are atomic in the sense that they happen "in an instant", without interleaving with other statements (e.g. a SELECT running concurrently with an INSERT either sees nothing or the inserted value, never anything "in between")
- We could insert the same value twice, and then both are deleted (worst case)
- We could insert the same value twice, but only delete one of the rows
- A transaction would defeat the purpose of the separate SELECTs, e.g. under REPEATABLE READ isolation
- We now have two selects instead of one, which is not optimal
- We could combine the SELECT and DELETE into one:
  `DELETE FROM reservations WHERE id = :last_insert_id AND 1 != (SELECT COUNT(*) FROM reservations WHERE val = :val)`
  but I'm not sure whether this could cause a problem
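On the combined form: MariaDB/MySQL normally reject a subquery that reads the same table a DELETE targets (error 1093, "You can't specify target table ... for update in FROM clause"). Wrapping the count in a derived table is the usual workaround; a sketch:

```sql
-- The inner derived table is materialized first, which sidesteps the
-- restriction on referencing the DELETE's target table in a subquery.
DELETE FROM reservations
WHERE id = :last_insert_id
  AND 1 <> (SELECT cnt
            FROM (SELECT COUNT(*) AS cnt
                  FROM reservations
                  WHERE val = :val) AS t);
```

Note this doesn't change the race-condition analysis above; it only makes the statement accepted.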
Putting a uniqueness constraint on the table would be preferable, but it isn't easy/feasible with the conditions I need. See also Concurrency with Select-conditional Insert/Update/Delete - PostgreSQL for some considerations on why locking rows (with FOR UPDATE) is insufficient: you'd need to lock the entire table to prevent race conditions.
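For reference, if the condition could ever be reduced to an indexable expression, the standard pattern would be a unique key plus a duplicate-tolerant insert. This is a sketch only, assuming `val` alone had to be unique, which is not the case for the datetime-range condition described above:

```sql
-- Sketch: let the storage engine enforce uniqueness atomically.
ALTER TABLE reservations ADD UNIQUE KEY uq_reservations_val (val);

-- A duplicate becomes a no-op instead of an error:
INSERT IGNORE INTO reservations (val) VALUES (:val);
-- application: 0 affected rows means the slot was already taken
```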
Any thoughts or opinions are highly appreciated!
Comments:
- START TRANSACTION ... COMMIT. Also `SELECT ... FOR UPDATE`.
- `SELECT ... FOR UPDATE` while doing an insert? I don't need to lock a row, I need to "lock" a value/constraint. The only transaction isolation level I can see working is READ COMMITTED for the INSERT and the SELECT that checks whether the constraint is violated; if so, roll back.
- `AUTO_INCREMENT` is guaranteed to give you a unique value when doing `INSERT`.
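A sketch of that READ COMMITTED suggestion, with a caveat: under InnoDB, READ COMMITTED still hides other sessions' *uncommitted* rows, so two concurrent transactions can each insert, each see count = 1, and both commit. The post-insert check therefore narrows the window but does not, by itself, close the race:

```sql
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
INSERT INTO reservations (val) VALUES (:val);
-- Conflict check: sees committed rows from other sessions plus our own
-- insert, but NOT another session's uncommitted insert of the same value.
SELECT COUNT(*) FROM reservations WHERE val = :val;
-- application: if count > 1 then ROLLBACK, else COMMIT
```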