Imagine this: your MySQL replication suddenly stops with the terrifying error:
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log:
'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1,
but the master has purged binary logs containing GTIDs that the slave requires.'
This error represents one of the most common yet critical replication failures in MySQL GTID-based replication. In this comprehensive guide, we'll demystify what's happening, explain the crucial MASTER_AUTO_POSITION setting, and provide multiple solutions to get your replication back on track.
Global Transaction Identifiers (GTIDs) revolutionized MySQL replication by providing a unique identifier for every transaction committed on any server in a replication topology. Each GTID follows the format:
UUID:transaction_number
For example: 3E11FA47-71CA-11E1-9E33-C80AA9429562:23
This system eliminates the complexity of managing binary log files and positions, making failover and topology management significantly easier.
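A quick way to see GTIDs in action is to inspect a server's GTID state directly; these variables exist on any GTID-enabled MySQL server, though the values will of course differ per server:
-- Check whether GTID mode is on and what this server has executed
SELECT @@global.gtid_mode, @@global.gtid_executed\G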
MASTER_AUTO_POSITION is the magic behind GTID-based replication. When set to 1, it enables an intelligent protocol where:
The replica tells the master which GTIDs it has already executed
The master calculates exactly which transactions are missing on the replica
The master sends only the required missing transactions
No manual tracking of binary log files or positions is needed
Here's how you enable it:
CHANGE MASTER TO
MASTER_HOST='master_host',
MASTER_AUTO_POSITION=1;
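You can confirm the change took effect on the replica; the Auto_Position field in the status output should read 1:
SHOW SLAVE STATUS\G
-- Look for:  Auto_Position: 1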
The error occurs due to a GTID gap between what the replica needs and what the master has available:
Master: gtid_executed covers 1-200, but 1-149 have been purged (only 150-200 remain in the binary logs)
Replica: has executed 1-90, so it needs everything from 91 onward
CONFLICT: The replica needs 91-149, but the master has already purged them!
Common causes:
Binary logs expired on master (expire_logs_days, binlog_expire_logs_seconds)
Manual binary log purging (PURGE BINARY LOGS)
Replica being offline for extended period
Master crash with binary log corruption
Backup scripts that rotate logs aggressively
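If you suspect retention settings are to blame, a quick check on the master shows the current expiration policy and which log files still exist on disk:
-- On the master
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';  -- on 5.7, check expire_logs_days instead
SHOW BINARY LOGS;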
Start diagnosing on the replica:
SHOW SLAVE STATUS\G
Look for:
Last_IO_Error: The actual error message
Executed_Gtid_Set: What GTIDs the replica has processed
Retrieved_Gtid_Set: What GTIDs have been fetched from master
-- On the replica, check executed GTIDs
SELECT @@global.gtid_executed;
-- On the master, check available GTIDs
SELECT @@global.gtid_purged;
The gap between these two sets represents your missing transactions.
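You can compute the dangerous part of that gap explicitly with GTID_SUBTRACT(), pasting the master's gtid_purged value into the first argument (the UUID and range below are illustrative). Anything left over is a transaction the replica needs but can no longer fetch:
-- Run on the replica
SELECT GTID_SUBTRACT(
    '3E11FA47-71CA-11E1-9E33-C80AA9429562:1-149',  -- the master's @@global.gtid_purged
    @@global.gtid_executed
) AS unrecoverable_gtids;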
Solution 1: Skip the gap with empty transactions. Use this only if the missing transactions are non-critical (like failed schema changes or test data).
-- Stop replication
STOP SLAVE;
-- Identify the missing GTID range from the error message
-- or from the GTID_SUBTRACT() check shown earlier
-- Let's say we're missing GTID uuid:101-149
-- For each missing GTID, inject empty transactions
SET GTID_NEXT='uuid:101';
BEGIN; COMMIT;
SET GTID_NEXT='AUTOMATIC';
-- Repeat for the entire missing range...
SET GTID_NEXT='uuid:102';
BEGIN; COMMIT;
SET GTID_NEXT='AUTOMATIC';
-- Continue until you've covered the gap
-- Restart replication
START SLAVE;
Automated approach for multiple GTIDs:
DELIMITER $$
CREATE PROCEDURE SkipGTIDs(
    IN gtid_uuid VARCHAR(64),
    IN start_range INT,
    IN end_range INT
)
BEGIN
    DECLARE i INT DEFAULT start_range;
    -- Inject one empty transaction for every GTID in [start_range, end_range]
    WHILE i <= end_range DO
        SET @sql = CONCAT('SET GTID_NEXT=\'', gtid_uuid, ':', i, '\'');
        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
        START TRANSACTION;
        COMMIT;
        SET GTID_NEXT='AUTOMATIC';
        SET i = i + 1;
    END WHILE;
END$$
DELIMITER ;
-- Usage example
CALL SkipGTIDs('3E11FA47-71CA-11E1-9E33-C80AA9429562', 101, 149);
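Once the procedure finishes, it's worth confirming the gap is actually closed before restarting replication; the UUID and range below mirror the usage example above:
SELECT GTID_SUBTRACT(
    '3E11FA47-71CA-11E1-9E33-C80AA9429562:101-149',
    @@global.gtid_executed
) AS still_missing;  -- should return an empty string
START SLAVE;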
Solution 2: Rebuild the replica from a fresh backup of the master. This is the safest approach and the only one that guarantees data consistency.
Step 1: Create backup from master
# Using mysqldump with GTID preservation
mysqldump -h master_host -u user -p \
--all-databases \
--single-transaction \
--routines \
--events \
--set-gtid-purged=ON \
> master_backup.sql
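With --set-gtid-purged=ON, mysqldump records the master's GTID history in the dump with a statement similar to the one below (UUID and range are illustrative); this is what seeds the replica's GTID state during the restore:
SET @@GLOBAL.GTID_PURGED='3E11FA47-71CA-11E1-9E33-C80AA9429562:1-200';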
Step 2: Rebuild replica
-- On replica, stop and clear the old replication configuration
STOP SLAVE;
RESET SLAVE ALL;
-- Clear the replica's own GTID history; the restore sets gtid_purged,
-- which requires an empty gtid_executed
RESET MASTER;
# Restore the backup from the command line (shell, not SQL)
mysql -u user -p < master_backup.sql
-- Verify the new GTID set matches master
SELECT @@global.gtid_executed;
Step 3: Reconfigure replication
CHANGE MASTER TO
MASTER_HOST='master_host',
MASTER_USER='repl_user',
MASTER_PASSWORD='password',
MASTER_AUTO_POSITION=1;
START SLAVE;
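After START SLAVE, confirm that both replication threads are healthy:
SHOW SLAVE STATUS\G
-- Expect: Slave_IO_Running: Yes, Slave_SQL_Running: Yes, and no Last_IO_Error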
Solution 3: Temporarily fall back to file-and-position replication. Use this only if you absolutely must get replication running immediately and can accept potential data inconsistencies:
STOP SLAVE;
-- Find a binary log file that still exists on master
-- Check master: SHOW BINARY LOGS;
CHANGE MASTER TO
MASTER_AUTO_POSITION=0,
MASTER_LOG_FILE='mysql-bin.000025',
MASTER_LOG_POS=107;
START SLAVE;
Warning: This may cause data inconsistencies if the missing transactions contained important data.
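If you later resynchronize the replica's data (for example via the backup-based rebuild above), you can switch back to GTID auto-positioning; this assumes the replica's gtid_executed is once again consistent with the master:
STOP SLAVE;
CHANGE MASTER TO MASTER_AUTO_POSITION=1;
START SLAVE;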
The best cure is prevention: keep binary logs long enough for a replica to fall behind and still catch up.
-- Keep logs for longer (7 days example)
SET GLOBAL binlog_expire_logs_seconds = 604800; -- 7 days

# Make it permanent in my.cnf (MySQL 8.0; on 5.7 use expire_logs_days = 7)
[mysqld]
binlog_expire_logs_seconds = 604800
-- Monitor each replica so a stalled one is caught before its required logs expire
SELECT
    CHANNEL_NAME,
    SERVICE_STATE,
    LAST_HEARTBEAT_TIMESTAMP,
    TIMESTAMPDIFF(SECOND, LAST_HEARTBEAT_TIMESTAMP, NOW()) AS seconds_since_heartbeat
FROM performance_schema.replication_connection_status;
-- Only purge logs that every replica has already read
-- On the master, confirm which replicas are connected and streaming
SELECT id, user, host, state
FROM information_schema.processlist
WHERE command LIKE 'Binlog Dump%';
-- Then, on each replica, note Master_Log_File / Relay_Master_Log_File in
-- SHOW SLAVE STATUS and never purge past the oldest file still in use
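Once you know the oldest file any replica still needs, purge only up to that point; the file name and interval below are placeholders:
PURGE BINARY LOGS TO 'mysql-bin.000020';
-- or, time-based:
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;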
A simple script can alert you when replication breaks, so a dead replica is noticed long before its required binary logs expire:
#!/bin/bash
# Weekly replication verification script (run on the replica)
STATUS=$(mysql -e "SHOW SLAVE STATUS\G")
IO_RUNNING=$(echo "$STATUS" | awk '/Slave_IO_Running:/ {print $2}')
SQL_RUNNING=$(echo "$STATUS" | awk '/Slave_SQL_Running:/ {print $2}')
if [ "$IO_RUNNING" != "Yes" ] || [ "$SQL_RUNNING" != "Yes" ]; then
    echo "ALERT: Replication issues detected" | mail -s "MySQL Replication Alert" admin@company.com
fi
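A crontab entry along these lines schedules the check weekly; the path and timing are placeholders to adapt to your environment:
# Run every Monday at 06:00
0 6 * * 1 /usr/local/bin/check_replication.sh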
For MySQL 8.0+ topologies managed as an InnoDB Cluster, consider using MySQL Shell with the AdminAPI:
// Connect to replication topology
var cluster = dba.getCluster()
// Check replication status
cluster.status()
// If a member has fallen too far behind, remove and re-add it;
// on 8.0.17+ the Shell can reprovision it with clone-based recovery
cluster.removeInstance('replica@host:3306')
cluster.addInstance('replica@host:3306')
The "Master has purged binary logs" error is a serious but solvable problem in MySQL replication. Understanding MASTER_AUTO_POSITION and GTIDs is crucial for modern MySQL administration.
Key takeaways:
MASTER_AUTO_POSITION=1 simplifies replication but requires careful binary log management
Always rebuild from backup for production systems when possible
Monitor replication lag and binary log retention proactively
Test your recovery procedures before you need them in production
By implementing the solutions and preventive measures outlined in this guide, you'll be well-equipped to handle this error and maintain robust, reliable MySQL replication.