John R Pierce wrote:
> On 5/31/2017 8:04 AM, m.roth@xxxxxxxxx wrote:
>> I've got an old RAID that I attached to a box. LSI card, and the RAID
>> has 12 drives, for a total RAID size of 9.1TB, I think. I started shred
>> /dev/sda the Friday before last... and it's still running. Is this
>> reasonable for it to be taking this long...?
>
> not at all surprising, as that raid sounds like it's built with older,
> slower drives.

It's maybe from '09 or '10. I *think* they're 1TB drives (which would make
sense, given the total size I remember for the RAID).

> I would discombobulate the raid, turn it into 12 discrete drives, and use

Well, shred's already been running for this long...
<snip>
> unless that volume has data that requires military level destruction,
> whereupon the proper method is to run the drives through a grinder so
> they are metal filings. the old DoD multipass erasure specification
> is long obsolete and was never that great.

If I had realized it would run this long, I would have used DBAN.... For
single drives, I do, and I choose DoD 5220.22-M (seven passes), which is
*way* overkill these days... but I sign my name to a certificate that gets
stuck on the outside of the server, meaning I, personally, am responsible
for the sanitization of the drive(s). And I work for a US federal
contractor.[1][2]

        mark

1. I do not speak for my employer, the US federal government agency I work
at, nor, as my late wife put it, the view out my window (if I had a window).
2. I'm with the government, and I'm here to help you. (Actually, civilian
sector, so yes, I am.)

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos
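[Editor's note: a back-of-envelope estimate of why a multi-day shred is plausible here. The throughput figure is an assumption, not a measurement of this array; shred's default is three overwrite passes.]

```python
# Rough wipe-time estimate for the array discussed above.
# Assumption: ~50 MB/s sustained sequential write on an older RAID
# (a guess for '09/'10-era 1TB drives behind an LSI controller).
size_bytes = 9.1e12   # ~9.1 TB reported array size
passes = 3            # shred's default number of overwrite passes
throughput = 50e6     # assumed bytes/second sustained write

seconds = size_bytes * passes / throughput
days = seconds / 86400
print(f"~{days:.1f} days")  # → ~6.3 days
```

At a slower 30 MB/s the same arithmetic gives roughly 10.5 days, which lines up with shred still running since "the Friday before last". Wiping the 12 drives individually in parallel would divide the wall-clock time by the number of drives, at the cost of tearing down the array first.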