The current bad-block strategy of rewriting blocks in another region of the disk is doubly flawed:
- Extremely high-density disks should be treated as large rewritable optical disks. They are great at "seek and stream", but offer exceedingly few random accesses per second per GB of capacity. Forcing the heads to move whilst streaming data hurts performance dramatically and should be avoided if performance is to be consistently and predictably good.
- Just where and how should the spare blocks be allocated?
  - Not the usual "end of the disk", which forces worst-case seeks.
  - "In the middle", which forces long seeks: better, but not ideal.
  - "Close by", i.e. for every few shingled-write bands or regions, include spare blank tracks (remembering they are 5+ shingled tracks wide). The first sketch after this list puts rough numbers on the seek distances each of these choices implies.
- This approach assumes large shingled-write bands/regions: 1-4 GB.
- Use 4-16 GB of Flash memory as a full-region buffer and perform continuous shingled writes within a band/region. This allows CD-ROM-style Reed-Solomon product codes to be used to correct long burst errors at low overhead; the second sketch after this list illustrates the layout.
- After the write, reread the shingled-write band/region and look for errors or "problematic" recording (low read signal), then re-record. The new write stream can place "sync patterns" in the areas that will not be used, step the heads over problematic tracks, or widen the track width for all or part of the band/region. The third sketch after this list shows the shape of that loop.
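To give a rough feel for the seek penalty of each spare-block placement, here is a back-of-the-envelope model in C. The track count, band size and spare spacing are made-up numbers, and the uniform-position assumption is mine; it is a sketch of the trade-off, not data from any real drive.

```c
/* Back-of-the-envelope model of the seek distance each spare-block
 * placement forces when a remapped block must be fetched.  The original
 * block position is taken as uniform over the surface; the track count,
 * band size and spare spacing are made-up numbers, not figures from any
 * real drive. */
#include <stdio.h>

int main(void)
{
    const double tracks          = 500000.0; /* assumed tracks per surface             */
    const double band_tracks     = 1000.0;   /* assumed tracks per shingled band       */
    const double bands_per_spare = 4.0;      /* "close by": spare tracks every 4 bands */
    const double group           = bands_per_spare * band_tracks;

    /* Expected and worst-case one-way seek distance, in tracks, to reach
     * the spare area holding the remapped block: */
    double end_avg   = tracks / 2.0;  /* E[N - x] for x ~ U(0, N)             */
    double mid_avg   = tracks / 4.0;  /* E|x - N/2|                            */
    double close_avg = group / 4.0;   /* spares assumed mid-way in each group  */

    printf("end of disk : %8.0f avg, %8.0f worst (tracks)\n", end_avg, tracks);
    printf("middle      : %8.0f avg, %8.0f worst (tracks)\n", mid_avg, tracks / 2.0);
    printf("close by    : %8.0f avg, %8.0f worst (tracks)\n", close_avg, group / 2.0);
    return 0;
}
```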
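The value of the full-region buffer for error correction is in the layout it enables: the band's data can be arranged as a two-dimensional block with codewords along rows and columns, so a long burst in the written stream lands as only a few symbols in each column codeword. The sketch below demonstrates that interleaving property; XOR parity stands in for the Reed-Solomon column code, the row code is omitted, and the block dimensions and burst length are arbitrary toy values.

```c
/* Layout sketch for the "full-region buffer" idea: data staged in flash is
 * arranged as a ROWS x COLS block and streamed to the band in row-major
 * order.  A long burst in the written stream then touches each column
 * codeword only a few times, so modest per-codeword correction power copes
 * with a long burst.  XOR parity over columns stands in for the
 * Reed-Solomon column code (the row code is omitted); ROWS, COLS and
 * BURST_LEN are toy values, not real format parameters. */
#include <stdio.h>
#include <string.h>

#define ROWS 64        /* interleave depth = column codeword length */
#define COLS 256       /* bytes per row = row codeword length       */
#define BURST_LEN 900  /* assumed long burst error, in bytes        */

static unsigned char block[ROWS][COLS];
static unsigned char col_parity[COLS];

int main(void)
{
    /* Fill the block with stand-in data and compute per-column parity. */
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            block[r][c] = (unsigned char)(r * 31 + c);
    memset(col_parity, 0, sizeof col_parity);
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            col_parity[c] ^= block[r][c];

    /* Corrupt BURST_LEN consecutive bytes of the row-major write stream. */
    int start = 12345;
    int errs_per_col[COLS] = {0};
    for (int i = 0; i < BURST_LEN; i++) {
        int r = (start + i) / COLS;
        int c = (start + i) % COLS;
        block[r][c] ^= 0xFF;
        errs_per_col[c]++;
    }

    /* Each column codeword sees only a small slice of the burst. */
    int worst = 0;
    for (int c = 0; c < COLS; c++)
        if (errs_per_col[c] > worst)
            worst = errs_per_col[c];
    printf("burst of %d bytes -> at most %d bad bytes per column codeword\n",
           BURST_LEN, worst);

    /* Even one-erasure-per-column power (what XOR parity gives once the row
     * code has flagged the bad rows) recovers any burst no longer than COLS
     * bytes; RS column codes correcting several symbols each stretch that
     * to much longer bursts with the same layout. */
    return 0;
}
```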
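The re-record step implies a fairly simple control loop in firmware: stream the band from the flash buffer, reread it, mark tracks with errors or a weak read signal, and write the band again with those tracks skipped (filled with sync patterns) or widened. The sketch below shows only the shape of that loop; the function names, threshold and sizes are all hypothetical, and the two "hardware" calls are stubs that simulate one weak track.

```c
/* Control-flow sketch of the write / reread / re-record loop.  The two
 * "hardware" functions are stubs that simulate one weak track on the first
 * pass; in a drive they would be servo/channel calls.  All names, the
 * quality threshold and the sizes are assumptions made for the sketch,
 * not a real firmware API. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define TRACKS_PER_BAND 1000
#define MAX_REWRITES    3

struct track_plan {
    bool skip;   /* step the head over this track, fill it with sync patterns */
    bool widen;  /* record this track with a widened track pitch              */
};

/* --- stubbed hardware hooks (simulation only) --------------------------- */
static int pass;

static void band_write(int band, const struct track_plan *plan)
{
    (void)band; (void)plan;
    pass++;
}

static void band_read_verify(int band, int quality[TRACKS_PER_BAND])
{
    (void)band;
    for (int t = 0; t < TRACKS_PER_BAND; t++)
        quality[t] = 90;            /* healthy read signal */
    if (pass == 1)
        quality[42] = 10;           /* one "problematic" track on the first pass */
}

/* --- the loop ------------------------------------------------------------ */
static bool write_band_verified(int band)
{
    struct track_plan plan[TRACKS_PER_BAND];
    int quality[TRACKS_PER_BAND];
    memset(plan, 0, sizeof plan);

    for (int attempt = 0; attempt <= MAX_REWRITES; attempt++) {
        band_write(band, plan);            /* stream the band from the flash buffer */
        band_read_verify(band, quality);   /* reread it and measure signal quality  */

        bool clean = true;
        for (int t = 0; t < TRACKS_PER_BAND; t++) {
            if (quality[t] < 50) {         /* errors or low read signal      */
                plan[t].skip  = true;      /* next pass steps over the track */
                plan[t].widen = true;      /* and/or records it wider        */
                clean = false;
            }
        }
        if (clean)
            return true;                   /* the whole band reads back cleanly     */
    }
    return false;                          /* rewrite budget spent: remap the band  */
}

int main(void)
{
    printf("band recorded cleanly: %s\n", write_band_verified(0) ? "yes" : "no");
    return 0;
}
```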
Should the strategy be tunable for the application? I'm not sure.
Firmware size and complexity must be minimal for high reliability and low defect rates. Only essential features can be included to achieve this aim...