Liverpool Data Recovery — RAID 1 Recovery Specialists
Liverpool’s No.1 RAID service • 25+ years • 2-to-32-disk mirrored sets • Consumer, SMB, Enterprise
When a RAID 1 (mirrored) system fails, it’s usually because one member has degraded silently, a rebuild went the wrong way, or controller metadata/bitmaps diverged after an unclean shutdown. We recover mirrored arrays from workstations, servers, NAS and racks, across Windows, Linux and macOS. Our workflow is image-first: we never write to your originals. We clone with PC-3000, DeepSpar and Atola hardware imagers, then work on the clones with controller-aware metadata parsers and our own mirror-reconciliation tooling.
Do not re-initialise, force a rebuild, swap disks around, or run CHKDSK/fsck on a degraded mirror. Power down and contact us—these actions can permanently overwrite the good member.
Platforms & vendors we handle
Software/host RAID: Linux mdadm/MD, Windows Dynamic Disks & Storage Spaces (mirror), Intel RST/Matrix, Apple Disk Utility (CoreStorage/APFS mirror), ZFS mirror, btrfs RAID1/1C3/1C4.
Hardware RAID/HBAs: LSI/Broadcom MegaRAID, Dell PERC, HPE Smart Array, Adaptec (Microchip), Areca, HighPoint, Promise.
Filesystems/LVMs: NTFS, exFAT, ReFS, EXT3/4, XFS, btrfs, ZFS, APFS/HFS+, VMFS/VMDK, VHDX/CSV, LVM2.
NAS brands & representative models we frequently see in UK recoveries (15)
(Representative of lab intake; not a sales ranking.)
Synology (DS920+/DS923+/DS1522+/RS1221+) • QNAP (TS-453D/TS-464/TS-873A/TVS-872XT) • WD My Cloud (EX2 Ultra/EX4100/PR4100) • Netgear ReadyNAS (RN424/RN528X) • Buffalo (TeraStation TS3410 / LinkStation LS220D) • TerraMaster (F4-423/F5-422) • Asustor (AS5304T/AS6604T) • Thecus (N4810/N5810) • LaCie (2big/5big) • TrueNAS/ixSystems (Mini X/X+) • Lenovo-Iomega (ix4-300d/px4-300r) • Zyxel (NAS326/520) • Promise (Pegasus DAS / VTrak) • Seagate (Business/BlackArmor NAS) • Drobo (5N/5N2 legacy).
Rack/server platforms & typical models (15)
Dell EMC PowerEdge (R730/R740/R750xd) • HPE ProLiant (DL380 Gen10/Gen11; ML350) • Lenovo ThinkSystem (SR630/SR650) • IBM xSeries (x3650 legacy) • Supermicro (SC846/847 2U/4U) • Cisco UCS (C220/C240) • Fujitsu PRIMERGY (RX2540) • NetApp (LUN exports) • Synology RackStation (RS1221RP+/RS3618xs) • QNAP Rack (TS-x53U/x83XU) • Promise VTrak • Areca (ARC-1883/1886) • Adaptec ASR • Intel RS3xxx • D-Link ShareCenter Pro (legacy).
Our RAID 1 recovery process (safe & deterministic)
Triage & preservation — Label members by slot; capture controller NVRAM/foreign configs and mdadm superblocks; disable auto-rebuild; read-only clone every disk (head-mapped imaging for failing HDDs; controlled NVMe/SATA for SSDs).
Metadata acquisition — Mirror bitmaps, event counters, array UUIDs, GPT/MBR, ZFS labels/btrfs chunk trees, Storage Spaces pool metadata.
Best-member selection — Determine which copy is newest and self-consistent (journal state, timestamps, bitmap). We never trust “last rebuilt” blindly.
Virtual assembly (no writes to originals) — Mount the best clone read-only; if both members hold unique data, export both and reconcile at the file level.
Filesystem repair on the image — NTFS ($MFT/$LogFile), EXT/XFS journals, APFS checkpoints, ZFS/btrfs scrub on images; never on originals.
Verification & delivery — Hash manifests for every exported file, sample-opening of priority files/DBs/VMs; secure hand-off with engineering notes (see the sketch below).
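As an illustration of the hash manifests we deliver, here is a minimal sketch that walks a recovered export tree and records a SHA-256 digest per file; the export path and manifest filename are placeholders, not our production tooling.

```python
import hashlib
from pathlib import Path

def build_manifest(export_root: str, manifest_path: str = "manifest.sha256") -> None:
    """Walk the recovered export tree and record a SHA-256 digest per file."""
    root = Path(export_root)
    with open(manifest_path, "w", encoding="utf-8") as out:
        for path in sorted(root.rglob("*")):
            if not path.is_file():
                continue
            digest = hashlib.sha256()
            with open(path, "rb") as fh:
                for chunk in iter(lambda: fh.read(1 << 20), b""):
                    digest.update(chunk)
            out.write(f"{digest.hexdigest()}  {path.relative_to(root)}\n")

# Example (hypothetical path): build_manifest("/recovered/case1234")
```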
Packaging: Put each drive in an anti-static bag and a small padded box or envelope, label the order/slot, and include contact details and symptoms. You can post the drives or drop them off in person.
40 RAID 1 failures we recover — with our technical approach
Disk/media & electronics (HDD/SSD)
One member clicking (head crash) → HSA donor swap, ROM/adaptives; per-head imaging; use the healthy mirror for gaps.
Both members degraded (different areas) → Image both; build a composite image by region, best-of-each (see the sketch after this list).
Heads stuck (stiction) on one member → Controlled ramp release; immediate low-duty clone.
Spindle seizure → Platter/hub transplant to matched chassis; alignment; image.
Service-area/translator corruption → Patch SA modules; rebuild translator; restore LBA; clone.
PCB/motor driver/TVS failure → Donor PCB + ROM transfer; preamp bias check; image.
G-list storm / SMART hang → Freeze reallocation; reverse imaging; head-select strategy.
SMR member timeouts → Long sequential passes; zone-aware mapping.
HPA/DCO capacity lock on one member → Normalise on the clone; reconcile partition offsets.
Thermal/reset instability → Temperature-controlled, duty-cycled imaging with persistent error maps.
SSD controller no-enumerate → Vendor/test mode; if removable NAND: chip-off → ECC/XOR/FTL rebuild → LBA image.
SSD FTL corruption → Parse vendor metadata; rebuild L2P; export consistent stream.
OPAL/SED on one mirror → Image then unlock with keys/PSID; compare to non-SED partner; choose newest.
High BER worn NAND → LDPC/BCH soft-decode; read-retry voltages; majority-vote reads.
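To illustrate the "best-of-each" composite mentioned above, here is a minimal sketch, assuming each clone comes with an error map of bad block indices produced during imaging; the block size, file names and map format are assumptions, not our in-house tooling.

```python
# Merge two RAID 1 clone images block by block, preferring whichever member
# read cleanly at each offset. Error maps are sets of bad block indices.

BLOCK = 4096  # bytes per merge unit (assumed)

def merge_mirrors(clone_a: str, clone_b: str, bad_a: set[int], bad_b: set[int],
                  out_path: str) -> None:
    with open(clone_a, "rb") as fa, open(clone_b, "rb") as fb, open(out_path, "wb") as out:
        block = 0
        while True:
            data_a = fa.read(BLOCK)
            data_b = fb.read(BLOCK)
            if not data_a and not data_b:
                break
            if block in bad_a and block not in bad_b:
                out.write(data_b)             # A unreadable here, B read cleanly
            else:
                out.write(data_a or data_b)   # default to member A
            block += 1
```

Blocks that are bad on both members stay in the error map and are handled separately (targeted re-reads or carving).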
Controller/bitmap/metadata issues
Divergent mirror bitmaps → Use event counters/journals to identify newest; mount newest clone; file-level merge for recent writes.
Controller writes new array over old headers → Carve residual superblocks; reconstruct prior mirror; ignore new empty map.
Dirty cache shutdown (WB cache) → Replay FS journals on the array image; avoid controller-level rebuilds.
Foreign config conflict after migration → Snapshot configs; choose quorum by epoch; assemble virtually (no init).
Wrong disk treated as ‘good’ → Serial/GUID audit; reverse unintended clone direction; salvage from true newest member.
Accidental quick-init/initialise → Recover previous GPT/superblocks from backups; mount clone RO.
512e vs 4Kn replacement mismatch → Logical sector emulation in the virtual map; fix offsets (illustrated after this list).
Backplane/expander link flaps → Direct-attach to HBA; re-image at QD=1.
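For the 512e vs 4Kn mismatch above, the fix is arithmetic: present the clone through a translation layer so partition offsets line up again. A minimal sketch, assuming the replacement member was imaged with 4096-byte sectors while the array metadata expects 512-byte LBAs; names and paths are illustrative only.

```python
# Read 512-byte logical sectors out of a clone imaged with 4096-byte sectors,
# so the virtual RAID 1 map sees the offsets the metadata was written for.

PHYS = 4096    # sector size of the imaged member (assumed 4Kn)
LOGICAL = 512  # sector size the array metadata expects

def read_logical(clone, lba: int, count: int = 1) -> bytes:
    """Return `count` 512-byte sectors starting at logical LBA `lba`."""
    byte_offset = lba * LOGICAL
    length = count * LOGICAL
    # Align the read to the underlying 4K physical sectors of the image.
    start = (byte_offset // PHYS) * PHYS
    end = ((byte_offset + length + PHYS - 1) // PHYS) * PHYS
    clone.seek(start)
    raw = clone.read(end - start)
    return raw[byte_offset - start: byte_offset - start + length]

# Example (hypothetical clone file):
# with open("member1_4kn.img", "rb") as img:
#     boot_sector = read_logical(img, 0)
```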
Geometry/order/host issues
Members swapped between bays → No parity to guide us; determine newest by FS/journal; mount best; reconcile file-level diffs.
Stale member reintroduced → Detect by timestamps/sequence; exclude stale; export from consistent clone.
Intel RST: degraded then forced rebuild → Stop; image both; select pre-rebuild newest; recover from that set.
Storage Spaces mirror metadata damage → Rebuild pool metadata from backups on clone; rebind virtual disk; mount.
mdadm: one member marked ‘bad’ incorrectly → Assemble in degraded mode on images; pick the correct event count; export (see the sketch after this list).
Apple CoreStorage/APFS mirror split-brain → Choose APFS checkpoint by TXID; export from coherent half.
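The event-count checks referred to above (divergent bitmaps, members wrongly marked bad) can be illustrated with a small script that runs `mdadm --examine` against each read-only clone and compares the reported event counters and update times. This is a sketch of the idea, not our in-house parser, and the device paths are placeholders.

```python
import re
import subprocess

FIELDS = ("Events", "Update Time", "Array State", "Device UUID")

def examine(member: str) -> dict:
    """Collect key superblock fields from `mdadm --examine` for one clone."""
    text = subprocess.run(["mdadm", "--examine", member],
                          capture_output=True, text=True, check=True).stdout
    info = {}
    for field in FIELDS:
        match = re.search(rf"^\s*{field}\s*:\s*(.+)$", text, re.MULTILINE)
        if match:
            info[field] = match.group(1).strip()
    return info

# Compare clones of both members (device paths are placeholders):
# for dev in ("/dev/loop0", "/dev/loop1"):
#     print(dev, examine(dev))
# The member with the higher Events count and later Update Time is normally
# the newer copy, but we still confirm against filesystem journal state.
```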
Filesystems/logical on top of the mirror
NTFS $MFT/$MFTMirr mismatch → Rebuild from mirror + $LogFile on the best clone; recover orphans.
ReFS metadata rot → Salvage intact object/stream maps; export valid trees.
EXT4 journal pending → Replay on the image; recover directories/inodes.
XFS log tail damage → Zero the bad log on the copy; run xfs_repair; rebuild AG headers.
APFS container corruption → Select healthy checkpoint; rebuild spacemaps/B-trees; mount RO; export.
HFS+ catalog/extent B-tree damage → Rebuild from extents; salvage forks; minimal carving.
RAW prompt / “format disk” → Restore boot/VBR and partition GUIDs; virtual mount.
Accidental deletion → Metadata-first restore (journals/snapshots/bitmaps); minimal carving to preserve names/timestamps.
Quick format on the mirror → Recover from secondary headers/anchors; carve gaps; verify with hashes.
BitLocker/VeraCrypt inside mirror → Decrypt on the image with recovery key; then repair inner FS.
Time-skewed mirrors (RTC drift) → Prefer the member with a consistent journal close; reconcile changed files from the other (see the sketch below).
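Reconciling time-skewed mirrors starts by finding where the two copies actually differ. A minimal sketch of that first pass, comparing two clone images chunk by chunk and reporting divergent byte ranges (chunk size and file names are assumptions); the ranges are then mapped back to files through the filesystem metadata before any file-level merge.

```python
def diff_ranges(clone_a: str, clone_b: str, chunk: int = 1 << 20):
    """Yield (start, end) byte ranges where the two clone images differ."""
    with open(clone_a, "rb") as fa, open(clone_b, "rb") as fb:
        offset = 0
        run_start = None
        while True:
            a = fa.read(chunk)
            b = fb.read(chunk)
            if not a and not b:
                break
            if a != b:
                if run_start is None:
                    run_start = offset
            elif run_start is not None:
                yield (run_start, offset)
                run_start = None
            offset += chunk
        if run_start is not None:
            yield (run_start, offset)

# Example (hypothetical clone files):
# for start, end in diff_ranges("member0.img", "member1.img"):
#     print(f"divergent bytes {start:#x}-{end:#x}")
```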
Environment & human factors
Rebuild in the wrong direction (good → bad) → Abort; image both; identify last-known-good; salvage from unaffected regions on the “overwritten” member using error maps.
What we need from you
RAID level (RAID 1), disk count, controller/NAS model, any changes just before failure, and your must-have folders/VMs/DBs.
If possible, note the time of first error and any rebuild/drive-swap attempts.
Why Liverpool Data Recovery
25+ years of mirrored-array recoveries across consumer, SMB and enterprise estates
Controller-aware, image-first workflows; filesystem repairs only on clones
Deep expertise with NTFS/ReFS, EXT/XFS/btrfs, APFS/HFS+, ZFS, VMFS/VHDX
Clear engineer-to-engineer communication and free diagnostics
Contact our Liverpool RAID 1 engineers for free diagnostics today. We’ll stabilise both members, select the newest coherent copy, reconcile differences safely, and return your data with full technical reporting.




