MySQL DBA for non-DBAs – What is replication?

Databases are inherent parts of most applications these days, and are usually one of the most critical parts – if the database fails, the rest of the system usually fails as well. Common advice is to use database replication to mitigate failures in the database layer, but it’s then left as an exercise for the reader to find out what replication even is, how to set it up, and what to do if it goes wrong.

Replication is also given as a way of improving performance by load-balancing across several servers, but once again the actual implementation is left entirely up to the reader.

Although there is plenty of information about MySQL replication available, some of it is woefully outdated, some of it can introduce new and spectacular ways for your database system to explode, and some is just plain wrong. Let’s start at the beginning…

Tell me a story, grandpa!

Back in the old days, when CPU speed was measured in megahertz, RAM in megabytes and hard disks in gigabytes, there were few options for databases with integrated replication facilities. Those systems that did offer it tended to be of the expensive variety, were tricky to set up, and demanded their own set of server attendants to perform the arcane rites required to maintain them.

People were well aware of this single point of failure in their systems – that they were reliant on the stability of their backend databases, and the practitioners of the voodoo that powered them. Various workarounds were built for the “new” database systems becoming available, either independently and internally by organisations, or as additional products that attempted to provide some of the same functionality without the same cost and/or complexity.

The stone age – replication becomes a thing

Database replication basically means having more than one copy of the database, usually running on different servers, in order to prevent loss in the event of server death and/or to allow queries to be spread rather than all hitting (and potentially overloading) a single server. The earliest, and still most common, replication setup is the “master-slave” configuration – one server handles all of the updates, and the others get their instructions from the master. Data can be read from any server in the herd, but data modifications must be sent to the master.

The early “solutions” were crude – one approach was to have additional code that would connect to multiple servers and perform the same operation on all of them, either internally as part of the application, or as a kind of database proxy system that would intercept and duplicate the requests. Another approach, for data that didn’t require up-to-the-second synchronisation, was to copy the actual database tables from one server to the others.

Both of these systems had a major failing – if an update wasn’t successful on all of the servers, the database herd was left in an inconsistent state. Even if the original contents of the affected rows were saved, the non-transactional nature of the databases (at that time) meant that it still wasn’t safe to try to perform a “rollback” by restoring them with another update query. Database herds could quickly and easily fall out of synchronisation, leading to inconsistent and unpredictable application behaviour.

From stone to bronze – native replication

Developers and DBAs (myself included) eagerly watched the progress of the next major release of the MySQL database server – version 4.0 (in 2003) promised built-in master-slave replication! One server would take on the role of the master, would log all of the successful database modifications (to a “binary log” file), and would allow other servers to subscribe to this “news feed” to update themselves. Truly, all our problems were solved!
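If you’re curious what that looked like in practice, here’s a minimal sketch of the classic setup – the hostnames, credentials and log coordinates are placeholders, and the real coordinates come from SHOW MASTER STATUS on the master:

    # my.cnf on the master – every server in the herd needs a unique server-id
    [mysqld]
    server-id = 1
    log-bin   = mysql-bin      # switch on the binary log ("news feed") of changes

    # my.cnf on a slave
    [mysqld]
    server-id = 2

    -- On the master: an account the slaves can connect with
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';

    -- On the slave: point it at the master and start replicating
    -- (the log file and position come from SHOW MASTER STATUS on the master)
    CHANGE MASTER TO
      MASTER_HOST     = 'master.example.com',
      MASTER_USER     = 'repl',
      MASTER_PASSWORD = 'secret',
      MASTER_LOG_FILE = 'mysql-bin.000001',
      MASTER_LOG_POS  = 4;
    START SLAVE;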

While the initial replication feature did make things easier, it still suffered from a number of problems.

Firstly, it was asynchronous – there was a delay between the master performing an update and that same update being applied to the slave servers. This created “ghosts” in the data, where the same query run on different servers could return different results as they all tried to keep up – a problem made worse if the slave herd comprised machines with different performance capabilities.
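You can see how far behind a slave is lagging with SHOW SLAVE STATUS – the columns below are the interesting ones (the values shown here are purely illustrative):

    -- Run on a slave to check replication health
    SHOW SLAVE STATUS\G
    -- Key columns in the output:
    --   Slave_IO_Running:      Yes    (still receiving the master's binary log)
    --   Slave_SQL_Running:     Yes    (still applying the received events)
    --   Seconds_Behind_Master: 37     (rough estimate of the replication delay)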

Secondly, the default table engine at the time was still MyISAM, which had table-level locks rather than row-level locks. If an update took a long time to run, it would block all other requests until it had finished, leading to seemingly random and unrepeatable performance issues.

Thirdly, the replication method used was “statement-based” replication – the same SQL command was run on each of the servers. Provided that statement didn’t throw any actual errors, it was accepted as successful, even though the data stored on the different servers might have differed. DELETEs and UPDATEs whose WHERE clauses matched different rows, or that made relative value changes, could affect different rows and store different values than the master (or other slaves), and the slaves would drift further out of synchronisation over time.
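A couple of examples of the kind of statement that could bite – anything whose result depends on row order, or on a value generated at execution time, can legitimately do different things on each server (the table and column names here are invented for illustration):

    -- Unsafe under statement-based replication: with no ORDER BY, "the first
    -- matching row" may be a different row on the master and on each slave.
    UPDATE jobs SET claimed_by = 'worker-1' WHERE claimed_by IS NULL LIMIT 1;

    -- Unsafe: UUID() is re-evaluated on every server, so the master and each
    -- slave end up storing a different value for the "same" row.
    INSERT INTO audit_log (id, message) VALUES (UUID(), 'user logged in');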

There were some techniques that developers and DBAs could use to try to work around this: setting and reading a timestamp value in the schema to track replication delay (and rejecting servers that were stale); aggressive query and schema optimisation and splitting to minimise performance-killing table locks; and table checksumming and comparison to detect data divergence. The situation had improved, but there was still some way to go.
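The timestamp trick is simple enough to sketch – this is a hypothetical, simplified version of what later tooling formalised, with invented table and column names:

    -- On the master: a one-row table that gets "touched" every few seconds
    CREATE TABLE heartbeat (id INT PRIMARY KEY, ts DATETIME NOT NULL);
    INSERT INTO heartbeat (id, ts) VALUES (1, NOW());

    -- ...then, periodically (e.g. from cron):
    UPDATE heartbeat SET ts = NOW() WHERE id = 1;

    -- On a slave: compare the replicated timestamp with the local clock; if the
    -- difference is too large, the application stops reading from this slave.
    SELECT UNIX_TIMESTAMP(NOW()) - UNIX_TIMESTAMP(ts) AS lag_seconds
      FROM heartbeat WHERE id = 1;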

Bronze to iron – transactions come of age, and replication evolves

When Oracle acquired Innobase Oy back in 2005, it became the owner of the most popular and promising transactional database engine in the MySQL ecosystem. Although the InnoDB engine was already available in MySQL, the fact that it was now “owned” by Oracle provided a bit more confidence that it wasn’t suddenly going to disappear, although there was some fear that Oracle would deliberately stifle it in order to drive sales of its own RDBMS (a fear that only grew when Sun Microsystems acquired MySQL AB in 2008, and again when Oracle acquired Sun – and MySQL with it – in 2010, with people worried it would stifle MySQL completely).

This increase in confidence meant that more people started to look at InnoDB and work on its performance. There were still use-cases where the MyISAM engine delivered significantly better performance, and functionality (like full-text indexes) that was completely missing from InnoDB, making it unwise (or impossible) to migrate some tables from one engine to the other. However, InnoDB offered two very attractive features: row-level locking, and transactions.
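For tables that could be migrated, the change itself was a one-liner (the table name here is made up), although it rewrites the whole table and so was usually scheduled for a quiet period:

    -- Convert an existing table from MyISAM to InnoDB
    ALTER TABLE orders ENGINE = InnoDB;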

Row-level locking helped to alleviate some of the issues with data modification statements coming in from the master preventing data select queries from running. Performance of slave servers now started to settle down, and the fear of “big” queries was reduced.

Transactions solved another major problem – groups of data modification statements that needed to be considered as a single operation, but couldn’t be written as one. Now, if a transaction was aborted (or failed) half-way through on the master, the slaves were never notified about it, and if it failed on a slave server, replication would halt rather than carry on with inconsistent data.
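The canonical example is moving money between two accounts – either both updates happen or neither does, and only a committed transaction ever reaches the binary log (the schema and values are purely illustrative):

    -- Requires a transactional (InnoDB) table
    START TRANSACTION;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE id = 2;
    COMMIT;       -- only now does the change hit the binary log and the slaves
    -- ...or, if anything went wrong half-way through:
    -- ROLLBACK;  -- and the slaves never hear about any of it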

Neither of these things addressed the other big issue with replication – data divergence (or data drift). The 5.1 release of the MySQL database server (in 2008) offered a different kind of replication – “row-based” replication. This differed from statement-based replication in that the binary log no longer stored the SQL statements needed to update the slave servers, but the actual row values to be written to the tables. Now, even if a slave did start to suffer from data drift, the next time the affected rows were modified they would be brought back into line with the values on the master.
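Switching the format is a single server setting from 5.1 onwards:

    # my.cnf on the master – log row images instead of SQL statements
    [mysqld]
    binlog_format = ROW      # alternatives: STATEMENT, or MIXED to let the
                             # server pick the safest option per statement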

So, with things starting to look up, attention turned to another issue with binary logs – the manner in which slaves tracked their position against the master.

The Industrial Revolution – machines take over the work

From the very beginning, slaves used two pieces of information to keep track of their position – a binary log filename, and a position within that file. With those two values, a slave could ask the master to send updates only from that point onward. This is fine – provided that nothing happens to the master server…
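Those two coordinates are visible on the master at any time (the values shown are illustrative):

    -- On the master: the binary log file currently being written, and the
    -- position within it
    SHOW MASTER STATUS;
    --   File:     mysql-bin.000042
    --   Position: 1337
    -- These are the values a newly built slave would be pointed at via
    -- CHANGE MASTER TO MASTER_LOG_FILE = ..., MASTER_LOG_POS = ...;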

One catastrophic failure case for master-slave replication was a binary log file losing or corrupting data. The most common cause was the master server restarting unexpectedly; this could result in a half-written binary log entry, or in the file being truncated (or removed entirely) by a filesystem check. When the slaves reconnected to the master, they might request a binary log file that no longer existed, a position beyond the end of a truncated file (a “future” position that would never arrive), or a garbled position that landed part-way through an entry or pointed at a partially written one. This kind of situation was pretty fatal for the slaves, which had no way of knowing a “good” binary log position – the only reliable way of getting a slave back into service was to throw the existing data away and start again.

MySQL Server 5.6 (released in 2013) introduced a new feature – Global Transaction IDs, or GTIDs. Each server would generate its own globally unique identifier (rather than relying solely on a manually configured “server id”), and every transaction written to the binary log was tagged with an ID derived from it. Database slaves now only needed to be configured to point to a master server, and would tell the master which transactions they had already processed – it was then up to the master to find the right position in its available binary logs and send all subsequent entries. Although the loss or corruption of the binary logs was still potentially fatal to the herd, this made setting up slaves much easier.
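With GTIDs enabled, a slave no longer needs to know about binary log files or positions at all – here’s a rough sketch of the 5.6-style configuration (hostnames and credentials are placeholders):

    # my.cnf on every server in the herd
    [mysqld]
    gtid_mode                = ON
    enforce-gtid-consistency = ON
    log-bin                  = mysql-bin
    log-slave-updates        = ON     # required for GTID replication in 5.6

    -- On the slave: no log file or position – the master works it out
    CHANGE MASTER TO
      MASTER_HOST          = 'master.example.com',
      MASTER_USER          = 'repl',
      MASTER_PASSWORD      = 'secret',
      MASTER_AUTO_POSITION = 1;
    START SLAVE;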

So…I set up replication, and my database is bulletproof?

Not quite. So far, we’ve only looked at master-slave replication, which still has a single point of failure – the master database server. More exotic configurations exist, such as slaves with multiple masters (multi-source replication), “master-master” replication, and clustering (which is not the same as master-master replication!), but they’re a bit too advanced for this introduction.

There are also advantages and disadvantages to both row-based and statement-based replication, and schema design considerations that are affected by which replication method is being used. There is still a certain amount of voodoo in replicated database setups, but at least we server attendants have cooler hats these days…

