Liverpool Data Recovery — RAID 5 & RAID 10 Recovery Specialists
25+ years recovering arrays for home users, SMBs, enterprises, and the public sector.
We recover RAID 5 (striped + single parity) and RAID 10 (striped mirrors) from workstations, servers, NAS and rack systems across Windows, Linux, macOS and virtualised estates (VMware/Hyper-V/Proxmox). Our workflow is image-first (we never write to your originals), using PC-3000, DeepSpar and Atola imagers, controller-aware metadata parsers, and in-house stripe/mirror reconstruction tooling.
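For readers who want to see what "image-first" means in practice, here is a minimal sketch (not our production tooling, which runs on dedicated hardware imagers). It copies a member read-only in fixed chunks, zero-fills anything unreadable so the geometry is preserved, and records those holes in an error map. The device path, file names and chunk size below are hypothetical placeholders.

```python
# Minimal sketch of image-first acquisition: copy a member read-only in fixed chunks,
# zero-fill anything unreadable so offsets stay intact, and log the holes.
# SOURCE/IMAGE/ERRMAP and CHUNK are hypothetical placeholders, not production values.
import os

SOURCE = "/dev/sdX"                    # failing member, opened read-only
IMAGE = "member0.img"                  # forensic clone we work on from here on
ERRMAP = "member0.errmap"              # offsets/lengths of unreadable spans
CHUNK = 1 * 1024 * 1024                # 1 MiB read granularity

def image_device(source: str, image: str, errmap: str, chunk: int = CHUNK) -> None:
    bad_spans = []
    with open(source, "rb", buffering=0) as src, open(image, "wb") as dst:
        size = src.seek(0, os.SEEK_END)
        src.seek(0)
        offset = 0
        while offset < size:
            length = min(chunk, size - offset)
            try:
                src.seek(offset)
                data = src.read(length) or b""
            except OSError:
                data = b""
            if len(data) < length:                      # short or failed read
                bad_spans.append((offset + len(data), length - len(data)))
                data += b"\x00" * (length - len(data))  # keep geometry intact
            dst.write(data)
            offset += length
    with open(errmap, "w") as f:
        for off, ln in bad_spans:
            f.write(f"{off}\t{ln}\n")

if __name__ == "__main__":
    image_device(SOURCE, IMAGE, ERRMAP)
```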
Do not re-initialise, force a rebuild, or run CHKDSK/fsck on a degraded array. Power down and contact us—those actions can permanently overwrite parity, mirrors, or filesystem journals.
Platforms & vendors we handle
Controllers / Host RAID: Broadcom/LSI MegaRAID, Dell PERC, HPE Smart Array, Adaptec (Microchip), Areca, HighPoint, Intel RST/Matrix, Promise, mdadm, Windows Storage Spaces.
NAS / SAN: Synology, QNAP, Netgear ReadyNAS, Buffalo TeraStation/LinkStation, WD My Cloud, TerraMaster, Asustor, Thecus, LaCie, TrueNAS/ixSystems, Lenovo-Iomega.
Filesystems / LVMs: NTFS, exFAT, ReFS, EXT3/4, XFS, btrfs, ZFS, APFS/HFS+, VMFS/VMDK, VHDX/CSV, LVM2.
NAS brands & representative models we frequently see in UK recoveries (15)
(Representative of lab intake; not a sales ranking.)
Synology (DS920+/DS923+/DS1522+/RS1221+) • QNAP (TS-453D/TS-464/TS-873A/TVS-872XT) • WD My Cloud (EX2 Ultra/EX4100/PR4100) • Netgear ReadyNAS (RN424/RN528X) • Buffalo (TeraStation TS3410 / LinkStation LS220D) • TerraMaster (F4-423/F5-422) • Asustor (AS5304T/AS6604T) • Thecus (N4810/N5810) • LaCie (2big/5big) • TrueNAS/ixSystems (Mini X/X+) • Lenovo-Iomega (ix4-300d/px4-300r) • Zyxel (NAS326/520) • Promise (Pegasus/VTrak) • Seagate (Business/BlackArmor NAS) • Drobo (5N/5N2 legacy).
Rack/server platforms & typical models (15)
Dell EMC PowerEdge (R730/R740/R750xd) • HPE ProLiant (DL380 Gen10/Gen11; ML350) • Lenovo ThinkSystem (SR630/SR650) • IBM xSeries (x3650 legacy) • Supermicro (SC846/SC847 2U/4U) • Cisco UCS (C220/C240) • Fujitsu PRIMERGY (RX2540) • NetApp (FAS iSCSI/NFS LUN exports) • Synology RackStation (RS1221RP+/RS3618xs) • QNAP Rack (TS-x53U/x83XU) • Promise VTrak arrays • Areca ARC-1883/1886 • Adaptec ASR series • Intel RS3xxx • D-Link ShareCenter Pro (legacy).
Our RAID 5/10 recovery process (safe & deterministic)
Triage & preservation — Label members; capture controller NVRAM/foreign configs; disable auto-rebuild; read-only clone every disk (head-mapped imaging for failing HDDs; controlled NVMe/SATA for SSDs).
Metadata acquisition — mdadm superblocks/DDF headers, PERC/Smart Array/MegaRAID configs, DSM/QTS layouts (btrfs/ext), ZFS labels, Storage Spaces metadata, GPT/MBR.
Virtual reassembly (never on originals) — Reconstruct geometry (member order, chunk/stripe size, parity rotation/delay for RAID 5; stripe/mirror pairing for RAID 10); see the mapping sketch after this list.
Parity/mirror repair — RAID 5: resolve write-hole/half-stripes and stale members; RAID 10: select freshest mirror partners and build a consistent striped image.
Filesystem repair on the image — NTFS ($MFT/$LogFile), ReFS, EXT/XFS/btrfs, ZFS (MOS/txg), APFS/HFS+; mount read-only and extract target data.
Verification & delivery — Hash manifests, consistency checks, open-test critical files/VMs/DBs; secure delivery with an engineering report.
Packaging: Put each drive in an anti-static bag and a small padded box or envelope with your contact details and symptoms. You may post the drives or drop them off in person.
RAID levels — typical failures & how we handle them
RAID 5: A single member loss is tolerable, but unrecoverable read errors (UREs) during a rebuild, or re-introducing a stale member, will corrupt parity. We image all members, exclude stale disks, repair stripes offline, then fix the filesystem.
RAID 10: Two failures in the same mirror pair are critical. We identify healthy mirror partners, build a coherent striped set from the freshest mirrors, and export data from that composite.
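As a simplified illustration of the RAID 10 approach above: given mirror pairs imaged to clones, prefer the fresher member of each pair (for example, the one with the higher metadata event counter) and fall back to its partner over spans the imager flagged as unreadable. The layout here is classic striped mirror pairs; Linux md "near/offset" RAID 10 layouts differ, and every name and value below is a placeholder.

```python
# Sketch: read data from a RAID 10 (classic striped mirror pairs) through clone images,
# preferring the fresher member of each pair and falling back over known-bad rows.
# Member paths, event counters, bad-row sets and the chunk size are placeholders.
from dataclasses import dataclass

@dataclass
class Member:
    path: str                  # clone image path
    events: int                # freshness hint, e.g. a metadata event/generation counter
    bad_rows: set              # member-local row indices the imager flagged unreadable

@dataclass
class Raid10Geometry:
    pairs: list                # list of (Member, Member) mirror pairs, in stripe order
    chunk_size: int
    data_start: int = 0

    def read_chunk(self, chunk_index: int) -> bytes:
        """Read one stripe unit from the freshest healthy mirror copy."""
        n_pairs = len(self.pairs)
        pair = self.pairs[chunk_index % n_pairs]       # striping runs across the pairs
        row = chunk_index // n_pairs                   # row offset inside each pair member
        first, second = sorted(pair, key=lambda m: m.events, reverse=True)
        source = second if (row in first.bad_rows and row not in second.bad_rows) else first
        with open(source.path, "rb") as f:
            f.seek(self.data_start + row * self.chunk_size)
            return f.read(self.chunk_size)

# Example (hypothetical clones):
# geo = Raid10Geometry(pairs=[(Member("a1.img", 900, set()), Member("a2.img", 880, {17})),
#                             (Member("b1.img", 900, set()), Member("b2.img", 900, set()))],
#                      chunk_size=64 * 1024)
# first_unit = geo.read_chunk(0)
```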
40 RAID 5 & RAID 10 errors we recover — with our lab approach
Disk/media & electronics (HDD/SSD)
Head crash on one member → HSA donor swap; ROM/adaptives; per-head imaging; parity/mirror reconstruction.
Two degraded members (different areas) → Image both; parity fills (RAID 5; see the sketch after this list) or composite from mirrors (RAID 10).
Stiction (heads stuck) → Controlled ramp release; low-duty imaging to prevent re-adhesion.
Spindle seizure → Platter/hub transplant; alignment verification; clone.
Service-area/translator corruption → Patch SA modules; rebuild translator; restore LBA; image.
PCB/TVS/motor driver damage → Donor PCB + ROM transfer; preamp bias check; clone.
G-list storm / SMART hang → Freeze reallocation; reverse imaging; per-head strategy.
SMR latency storms → Long sequential passes; zone-aware re-stripe on reconstruction.
HPA/DCO capacity lock → Remove on the clones; recompute offsets in the virtual map.
Thermal resets during imaging → Temperature-controlled, duty-cycled sessions with persistent error maps.
SSD no-enumerate (controller dead) → Vendor/test mode; if removable NAND: chip-off → ECC/XOR/FTL rebuild → LBA image.
SSD FTL corruption → Parse vendor metadata; rebuild L2P; export consistent stream.
Worn NAND (high BER) → LDPC/BCH soft decode; voltage read-retries; majority-vote reads.
OPAL/SED enabled on a member → Image; unlock with keys/PSID; integrate into set; proceed with reconstruction.
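The parity fill mentioned above is, at its core, an XOR: a chunk that is unreadable on one RAID 5 member can be rebuilt from the matching chunks on every other member (data plus parity), provided those members are consistent. A minimal sketch follows, run against clone images only; the image names, offset and chunk size are placeholders.

```python
# Sketch of a RAID 5 parity fill on clone images: a chunk that is unreadable on one
# member is rebuilt as the byte-wise XOR of the matching chunks on all other members
# (data + parity). `member_images`, `offset` and `chunk` are illustrative placeholders.
from functools import reduce

def xor_blocks(blocks: list) -> bytes:
    """Byte-wise XOR of equally sized byte blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def parity_fill(member_images: list, missing: int, offset: int, chunk: int) -> bytes:
    """Reconstruct `chunk` bytes at `offset` on member `missing` from its peers."""
    peers = []
    for i, path in enumerate(member_images):
        if i == missing:
            continue
        with open(path, "rb") as f:
            f.seek(offset)
            peers.append(f.read(chunk))
    return xor_blocks(peers)

# Example (hypothetical clones): rebuild 64 KiB lost on member 2 at byte offset 10 MiB
# repaired = parity_fill(["m0.img", "m1.img", "m2.img", "m3.img"], 2,
#                        10 * 1024 * 1024, 64 * 1024)
```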
Controller/cache/NVRAM
Dead RAID controller → Dump NVRAM; import foreign config on bench controller to decode geometry; assemble virtually.
Dirty cache shutdown (write-back) → Reconstruct write journal; resolve half-stripes using majority logic + FS journals.
Firmware downgrade/upgrade changed metadata → Compare config epochs; select the last consistent one; rebuild the map (epoch selection is sketched after this list).
Foreign config conflicts → Snapshot all; choose quorum; disable initialisation; assemble on images.
Wrong disk selected as rebuild target → Serial/GUID audit; revert to valid member; reassemble from correct set.
Rebuild aborted at N% → Split the array into epochs; choose the consistent generation stripe by stripe, guided by FS anchors.
Cache policy flip (WB↔WT) mid-incident → Detect torn writes; correct stripes prior to FS mount.
Battery/BBU failure mid-IO → Identify incomplete stripes; repair offline on the images.
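Several of the items above come down to deciding which metadata generation to trust. A sketch of that decision, using generic placeholder fields rather than any particular controller's format: group members by configuration fingerprint, prefer the largest consistent group (with the event counter as a tiebreak), and flag the rest as stale so they are excluded from reassembly.

```python
# Sketch: decide which controller/metadata "epoch" to trust when members disagree.
# Each member clone yields a (config_fingerprint, event_counter) pair pulled from its
# RAID metadata; we group members by fingerprint, prefer the largest consistent group
# (quorum), and flag the others as stale. Field names and values are generic placeholders.
from collections import defaultdict

def pick_epoch(members: dict) -> tuple:
    """members: name -> (config_fingerprint, event_counter).
    Returns (chosen_fingerprint, trusted_members, stale_members)."""
    groups = defaultdict(list)
    for name, (fp, _events) in members.items():
        groups[fp].append(name)

    def score(fp):
        # Quorum first, then the highest event counter seen in the group as a tiebreak.
        return (len(groups[fp]), max(members[m][1] for m in groups[fp]))

    chosen = max(groups, key=score)
    trusted = groups[chosen]
    stale = [m for m in members if m not in trusted]
    return chosen, trusted, stale

# Example (hypothetical values):
# pick_epoch({"sda": ("cfgA", 1042), "sdb": ("cfgA", 1042),
#             "sdc": ("cfgA", 1042), "sdd": ("cfgB", 977)})
# -> ("cfgA", ["sda", "sdb", "sdc"], ["sdd"])
```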
Geometry/order/offsets
Unknown disk order → Brute-force the order with XOR scoring and entropy checks; confirm on FS anchors (MFT, superblocks). The XOR-scoring step is sketched after this list.
Unknown stripe size → Probe 16–1024 KB; choose by parity satisfaction and metadata alignment.
Parity rotation unknown (RAID 5) → Test left-sym/left-asym/right-* variants; validate on directory structures.
Delayed parity / data-start offsets → Detect skip-blocks; rebase LBA0; re-stripe.
4Kn/512e mixed members → Normalise logical sector size in the virtual map; recalc offsets.
Backplane/HBA induced global offset shift → Detect by pattern correlation; correct globally.
Hot-spare promotion changed member count → Epoch-split at cut-over; merge results after reconstruction.
HPA set on single member only → Expand on clone; reconcile map before assembly.
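The XOR scoring referenced above can be sketched as follows: for a candidate member set and data-start offset, sample aligned positions across the clones and measure how often the byte-wise XOR of all members is zero. A high score supports the candidate; localised failures point to stale members or torn stripes. (Member order itself is confirmed on filesystem anchors, since XOR is order-independent.) The sample count and block size below are illustrative assumptions only.

```python
# Sketch of XOR scoring while brute-forcing RAID 5 geometry candidates: sample offsets
# across the clones and count how often the byte-wise XOR of all members is zero.
# Works on consistent, aligned clone images; sample/block values are arbitrary.
import os
import random
from functools import reduce

def xor_is_zero(blocks: list) -> bool:
    acc = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)
    return not any(acc)

def parity_score(member_images: list, data_start: int,
                 block: int = 4096, samples: int = 2000) -> float:
    """Fraction of sampled offsets where all members XOR to zero."""
    size = min(os.path.getsize(p) for p in member_images) - data_start
    hits = 0
    for _ in range(samples):
        offset = data_start + random.randrange(0, size - block, block)
        chunks = []
        for path in member_images:
            with open(path, "rb") as f:
                f.seek(offset)
                chunks.append(f.read(block))
        hits += xor_is_zero(chunks)
    return hits / samples

# Example (hypothetical clones): a score near 1.0 supports the candidate geometry
# print(parity_score(["m0.img", "m1.img", "m2.img", "m3.img"], data_start=0))
```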
NAS/volume/filesystem layer
Synology SHR on top of RAID 5 equivalent → Derive md layers; assemble SHR; mount btrfs/ext volumes.
QNAP md + ext4/btrfs → Reassemble md sets; correct RAID chunk quirks; mount volume RO.
btrfs metadata tree damage → Rebuild chunk/tree roots; restore subvolumes/snapshots.
ZFS mirrored stripe (RAID 10-like) → Import pool RO; select healthy leaf vdevs by txg; export datasets.
NTFS $MFT/$MFTMirr mismatch → Rebuild from the mirror + $LogFile on the array image (a boot-sector/$MFT check is sketched after this list).
XFS log tail corruption → Zero the bad log on the copy; run xfs_repair; rebuild AG headers.
EXT4 journal pending → Replay the journal on the image; restore directories/inodes.
ReFS metadata rot → Salvage intact objects/stream maps; export valid trees.
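As a sanity check on a reconstructed NTFS volume, we can parse the boot sector on the assembled image and confirm that $MFT sits where it claims to. The field offsets follow the published NTFS boot-sector layout; the image name and partition offset below are assumptions for illustration.

```python
# Sketch: sanity-check a reconstructed RAID image by parsing the NTFS boot sector and
# confirming that $MFT is where it claims to be. Field offsets follow the published
# NTFS boot-sector layout; IMAGE and PARTITION_OFFSET are hypothetical placeholders.
import struct

IMAGE = "reconstructed_array.img"      # output of the virtual reassembly
PARTITION_OFFSET = 1048576             # e.g. a 1 MiB-aligned first partition (assumption)

def check_ntfs(image: str, part_ofs: int) -> None:
    with open(image, "rb") as f:
        f.seek(part_ofs)
        boot = f.read(512)
        if boot[3:11] != b"NTFS    ":
            raise ValueError("no NTFS OEM signature at this offset")
        bytes_per_sector = struct.unpack_from("<H", boot, 0x0B)[0]
        sectors_per_cluster = boot[0x0D]
        mft_cluster = struct.unpack_from("<Q", boot, 0x30)[0]
        cluster_size = bytes_per_sector * sectors_per_cluster
        mft_offset = part_ofs + mft_cluster * cluster_size
        f.seek(mft_offset)
        record = f.read(4)
        print(f"cluster size {cluster_size}, $MFT at image offset {mft_offset}, "
              f"first record signature {'OK' if record == b'FILE' else 'MISSING'}")

# check_ntfs(IMAGE, PARTITION_OFFSET)
```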
Human/admin actions
Accidental re-create / quick-init on the array → Carve prior superblocks; reconstruct the original geometry; ignore the new empty map (superblock scan sketched below).
CHKDSK/fsck run during degradation → Reverse damage using journal/backup metadata where possible; keep repairs strictly on the cloned image.
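For the re-create scenario above, a simplified sketch of superblock carving on a single clone: scan 4 KiB-aligned offsets for the Linux md v1.x superblock magic (0xa92b4efc, stored little-endian), then decode any hits for level, chunk size, member role and event counter. The scan window and step are illustrative assumptions, and v1.0 superblocks sit near the end of the device, so they need a separate pass.

```python
# Sketch: after an accidental re-create, older md superblocks often survive on the
# members. Scan a clone image at 4 KiB-aligned offsets for the md v1.x magic
# (0xa92b4efc, little-endian) so the original metadata can be parsed and the prior
# geometry recovered. Window and step sizes are illustrative assumptions.
MD_MAGIC_LE = (0xA92B4EFC).to_bytes(4, "little")

def find_md_superblocks(image: str, window: int = 64 * 1024 * 1024,
                        step: int = 4096) -> list:
    """Return offsets inside the first `window` bytes of `image` where the magic appears."""
    hits = []
    with open(image, "rb") as f:
        offset = 0
        while offset < window:
            f.seek(offset)
            if f.read(4) == MD_MAGIC_LE:
                hits.append(offset)
            offset += step
    return hits

# Example (hypothetical clone): find_md_superblocks("member0.img")
# Each hit can then be decoded for level, chunk size, member role and event counter.
```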
What we need from you
RAID level (5 or 10), disk count and sizes, controller/NAS model, and any actions taken just before failure (drive swaps, rebuild attempts, firmware changes).
The time of first error and your must-have folders/VMs/DBs so we can validate those first.
Why Liverpool Data Recovery
25+ years of RAID/NAS/server recoveries across consumer, SMB & enterprise
Controller-aware, image-first workflow; parity/mirror reconstruction on clones, never on originals
Deep expertise with NTFS/ReFS, EXT/XFS/btrfs, ZFS, APFS/HFS+, VMFS/VHDX and snapshot technologies
Clear engineer-to-engineer communication and free diagnostics
Contact our Liverpool RAID engineers for free diagnostics today. We’ll stabilise the members, reconstruct parity or stripes/mirrors safely, repair the filesystem on the image, and return your data with a full technical report.