January 5, 2018

Storage Mirroring vs. Database Replication/Mirroring Technologies

Most organizations have some sort of storage-based remote mirroring (array mirroring) technology as a disaster recovery solution. It is a good solution for files and applications, but definitely NOT for an RDBMS.

Array mirroring is a sophisticated technology promoted as a generic infrastructure solution that makes a simple promise – whatever is written to a primary volume will also be written to a mirrored volume at a remote site.

Keeping this promise, however, can have disastrous consequences for data protection and availability when the data written to primary volumes is corrupt.



Disadvantages of Array/Storage Mirroring -
  • No Isolation: There is no isolation between the primary and the replicated copy of the data. If the primary gets corrupted, the replicated copy is corrupted as well.
  • Bandwidth inefficient:
    • Since storage mirroring has no knowledge of RDBMS internals, it replicates everything that is written at the file/block level.
    • Array mirroring must replicate every write made to all primary volumes (writes to data files, undo and temp files, online redo log files, archive log files, flashback log files, control file, etc.). The biggest example is UNDO and TEMP in Oracle: these tablespaces can have huge amounts of data written and wiped out later, since they exist only to keep the database in a consistent state and contribute nothing to a remote copy. A quick way to see how little of the total write volume is actually needed for replication is sketched just after this list.
    • Array mirroring further increases the volume of replicated data because it must replicate the entire block (or potentially a 1 to 4 megabyte sector size), even if only a small portion of the data within the block has changed.
  • Little or no Data Validation:
    • Array mirroring offers zero database/data/block validation – it only performs basic storage-level checksum validation – providing limited isolation between mirrored volumes. Limited isolation and zero data block validation virtually ensure that physical corruptions written to a primary volume, as well as administrative errors that occur out-of-band of the database (e.g. accidental deletion of data files or log files), are faithfully replicated to remote volumes, making both copies unusable.
    • Array replication does not understand the logical block structure of an RDBMS, so if the primary site has a physical/logical corruption or a fractured block, array replication cannot detect it; as a result, the corruption is replicated to the DR site.
  • No Automatic Repair: Since array replication cannot detect corruption, it obviously cannot repair it automatically either.
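
To put the bandwidth argument in perspective, one way to see how little of a database's total write volume is actually needed for replication is to compare the redo generated with the total bytes physically written. A minimal sketch for Oracle follows, assuming access to V$SYSSTAT (the statistic names are standard in recent releases, but verify them on your version):

    -- Sketch (Oracle): redo generated vs. total bytes physically written since startup.
    -- Array mirroring must ship roughly the latter; redo-based replication ships the former.
    SELECT name,
           ROUND(value / 1024 / 1024) AS mb_since_startup
    FROM   v$sysstat
    WHERE  name IN ('redo size', 'physical write total bytes');

On a busy system the gap between the two numbers is typically large, which is exactly the point made above about UNDO, TEMP and whole-block writes.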

On the other hand, database-based replication technologies (e.g. Oracle Active Data Guard, MySQL Replication, PostgreSQL Streaming Replication, MS SQL Server Always On) offer, to a greater or lesser extent (depending on the RDBMS), the following features and functionality:
  • Superior Isolation: Database-aware replication/mirroring technologies replicate only the information needed to recover a transaction (redo), which represents a small percentage of the total write volume generated by a database. They replicate data directly from the memory of the primary database, ensuring that the standby is isolated from corruptions that can be introduced by the I/O stack.
  • Bandwidth Efficient: Since only the redo describing transactions is transported, the volume of shipped data is naturally minimal, making this approach bandwidth efficient.
  • Continuous Data Validation: Database-aware replication/mirroring technologies use an independent database at the standby/DR site that uses media recovery to apply changes and maintain a synchronized physical replica of the primary. Database recovery processes perform continuous validation as changes are applied to the standby. This validation uses knowledge of redo and data block structures to check for physical data corruption, logical intra-block corruption and lost write corruption, to ensure the highest level of isolation between primary and standby.
  • Automatic Repair: Database-aware replication/mirroring technologies make block-level corruption invisible to users with no changes to application code. Block-level corruption is caused by intermittent random I/O errors that can occur independently at either the primary or the standby database. Under normal operation, when a database reads a block and detects corruption, it marks the block as corrupt and reports the error to the application; no subsequent read of the block succeeds until the block is recovered manually. Database-aware replication technology like Oracle Active Data Guard automatically repairs physical block corruption at a primary database by retrieving a good version of the block(s) from the active standby. Conversely, corrupt blocks detected by either the apply process or by read-only users on the standby database are automatically repaired using the good version from the primary database. Both HA and data protection are maintained at all times. An Oracle-specific sketch of these validation and repair mechanisms follows this list.
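
As a concrete, Oracle-specific sketch of the validation and automatic repair points above (these are standard Oracle initialization parameters and views, but the values shown are only illustrative and carry CPU/redo overhead that should be tested first):

    -- Sketch (Oracle): strengthen block validation on both primary and standby.
    ALTER SYSTEM SET db_block_checksum     = FULL    SCOPE=BOTH;
    ALTER SYSTEM SET db_block_checking     = MEDIUM  SCOPE=BOTH;
    ALTER SYSTEM SET db_lost_write_protect = TYPICAL SCOPE=BOTH;

    -- Corrupt blocks that have been detected show up here; with Active Data Guard
    -- many of them are repaired automatically from the other site.
    SELECT file#, block#, blocks, corruption_type
    FROM   v$database_block_corruption;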

In summary, this is exactly the opposite of the disadvantages of array/storage mirroring discussed above.

Along with the above, database-based replication/mirroring has the following additional benefits -
  • Fast, Automatic Failover: Most RDBMS-based replication/mirroring solutions come with fast, automatic failover (see the fast-start failover sketch after this list).
  • Database Rolling Maintenance: Some RDBMSs (e.g. Oracle) support rolling maintenance (a DBMS_ROLLING sketch follows this list), where -
    • Upgrades and many other types of maintenance are first performed at the standby database.
    • Once implemented and fully tested, production is quickly switched to the standby database already running at the new version. 
    • Total database downtime is limited to the time required for switchover (less than 60 seconds). 
    • The standby can also be used to fully validate the new release before the switchover is performed without compromising data protection.
  • Less Risk: You Know it’s Working: Database-based replication/mirroring enables a standby database to service read-only workloads while it is being synchronized with the primary database (a read-only standby sketch also follows this list). Workloads running on a standby database provide continuous user- and application-level validation that the standby is ready for failover if needed. This online validation is impossible to accomplish with array mirroring. The only way to validate that mirrored volumes are able to support production is to stop mirroring, mount the volumes, and open the database. This explains why users of array mirroring often discover that they have problems at their remote site at the most inopportune time – after a primary outage has occurred.
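
For the fast, automatic failover point, Oracle implements this through the Data Guard broker's fast-start failover with an observer process. A minimal sketch follows; 'chicago' is a hypothetical primary, and the 30-second threshold is only an example:

    -- Sketch (DGMGRL, Oracle Data Guard broker).
    DGMGRL> CONNECT sys@chicago
    DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 30;
    DGMGRL> ENABLE FAST_START FAILOVER;
    DGMGRL> START OBSERVER;   -- run from a third site that watches both databases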
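
The rolling maintenance flow above maps, in Oracle 12c and later, to the DBMS_ROLLING package (which uses a transient logical standby under the covers). The outline below is only a sketch; 'boston' is a hypothetical standby that becomes the new primary:

    -- Sketch (Oracle 12c+ DBMS_ROLLING rolling upgrade).
    EXEC DBMS_ROLLING.INIT_PLAN(future_primary => 'boston');
    EXEC DBMS_ROLLING.BUILD_PLAN;
    EXEC DBMS_ROLLING.START_PLAN;    -- converts the standby and starts replication
    -- ... upgrade the standby to the new release and test it here ...
    EXEC DBMS_ROLLING.SWITCHOVER;    -- brief outage; production moves to 'boston'
    EXEC DBMS_ROLLING.FINISH_PLAN;   -- rebuilds the old primary as a standby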
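
And for the "you know it's working" point, an Oracle physical standby can be opened read-only while redo apply continues (Active Data Guard). A minimal sketch, run on the standby:

    -- Sketch (Oracle Active Data Guard): open the standby read-only with redo apply
    -- running, so reporting workloads continuously validate the copy.
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE OPEN READ ONLY;
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    SELECT open_mode, database_role FROM v$database;   -- expect READ ONLY WITH APPLY / PHYSICAL STANDBY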


About Storage Consistency Groups
This topic merits discussion because it is often used by storage vendors to diminish the value of RDBMS-based replication/mirroring. 
A storage consistency group is a composite group of storage devices created with special properties to maintain dependent-write consistency across all devices, and across one or more storage arrays. In an RDBMS context, a consistency group ensures crash consistency for database files that span multiple volumes. Consistency groups compensate for the fact that a storage array has zero intrinsic knowledge of the application data it is attempting to protect. Consistency groups are also required in order to provide a crash-consistent copy when array mirroring is used - by ensuring that writes are mirrored to the remote volumes in the same order that they were written at the source. 
Storage vendors often expand the use-case for consistency groups by positioning them as a tool for achieving global point in time consistency, for example in cases where dependencies span multiple databases and applications. It is evident that storage consistency groups provide an important benefit by enabling data consistency at the I/O level. It is incorrect, however, to conclude that consistency groups by themselves provide application or transactional level consistency, for either a single database or a set of databases.

Consistency groups enable databases to perform a database RESTART function, which examines the DB logs and tables, backing out 'in flight' and 'in doubt' transactions, making the DB transaction consistent. Depending on the application, this may or may not be enough.

I/O Consistent Crash Point versus Transactional Consistency 
Consistency groups provide storage-level crash consistency. They guarantee that writes to a number of volumes are applied as if they were written to a single volume. The problem is that crash consistency does not equal transaction consistency. 
When a database crashes there will be uncommitted transactions that were written to disk which applications should not see. For example, a user may have been in the process of updating 100 rows and not yet committed the transaction when the crash occurred. Imagine a similar scenario for the workload on each database participating in a given consistency group. When an outage occurs, the storage brings all participating databases up at the same [consistent] crash point. However, each database will then perform additional recovery that rolls it back to its own transaction boundary, which differs from the crash point. Achieving any level of application consistency would be highly unlikely, since there is no application-aware mechanism, such as a transaction monitor, that coordinates database crash recovery across the multiple databases in the consistency group. Additional reconciliation at the transaction level is still required across the different databases in the consistency group. 
This explains why storage provides I/O-consistent crash points instead of the transactionally consistent crash points required by the application. A similar outcome occurs when using consistency groups to recover a mix of database and non-database files. Each database will be recovered to a point in time that is different from the crash-consistent point that the storage system works so hard to present.
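
To make the distinction concrete, consider a hypothetical session (the table name and data below are invented purely for illustration) that has written changes to disk without committing at the moment the storage crash point is taken:

    -- Hypothetical illustration; the table name and data are invented.
    UPDATE orders SET status = 'SHIPPED' WHERE batch_id = 42;   -- 100 rows, no COMMIT yet
    -- The database writer may already have flushed some of these dirty blocks, so they are
    -- present in the I/O-consistent crash image on the mirrored volumes. When that image is
    -- opened, crash recovery rolls the change back:
    --   SELECT COUNT(*) FROM orders WHERE batch_id = 42 AND status = 'SHIPPED';  --> 0
    -- Each database in the consistency group does this independently, so each ends up at its
    -- own transaction boundary rather than at the common storage crash point.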
