Hi,

> > I've since learned it takes entirely too long to copy 1.3TB to two 2TB
> > disks. I can't keep the system down that long.
>
> You don't need to.

The problem is the LSI hardware RAID. All eight ports are consumed by the
eight 240GB disks, and the two 2TB disks are connected to the onboard SATA
controllers.

I forget why I didn't just use the onboard SATA controllers when I
installed the system seven years ago. I know there are only six onboard
ports and I'm using eight disks on the LSI controller, but that wasn't the
reason - the decision to use the LSI was made when there were only four
regular SATA disks installed. Maybe that was the reason - the onboard
ports were too slow.

Using the LSI makes me nervous - there have been one or two times when I
almost lost the array - but I'll probably keep using it.

This means I have to use an interim server to hold the 2TB of data while
rebuilding, then restore the data to the original server. I'll probably
set it up with the two 2TB regular disks, shift all the services to it,
rebuild the existing production system, copy the data back, then shift the
IP and services back to the original production machine.

Another problem - I just saw that one of the 2TB disks I'm using for
backup is failing:

[411086.090668] ata6.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
[411086.091908] ata6.00: irq_stat 0x40000001
[411086.093071] ata6.00: failed command: READ DMA EXT
[411086.094218] ata6.00: cmd 25/00:00:80:82:b9/00:05:49:00:00/e0 tag 16 dma 655360 in
                         res 53/40:00:80:82:b9/00:00:49:00:00/00 Emask 0x8 (media error)
[411086.096519] ata6.00: status: { DRDY SENSE ERR }
[411086.097699] ata6.00: error: { UNC }
[411086.099676] ata6.00: NCQ Send/Recv Log not supported
[411086.101691] ata6.00: NCQ Send/Recv Log not supported
[411086.102885] ata6.00: configured for UDMA/133
[411086.104086] sd 5:0:0:0: [sdb] tag#16 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[411086.105329] sd 5:0:0:0: [sdb] tag#16 Sense Key : Vendor Specific(9) [current]
[411086.105950] sd 5:0:0:0: [sdb] tag#16 <<vendor>>ASC=0x80 ASCQ=0x0
[411086.106522] sd 5:0:0:0: [sdb] tag#16 CDB: Read(16) 88 00 00 00 00 00 49 b9 82 80 00 00 05 00 00 00
[411086.107675] print_req_error: I/O error, dev sdb, sector 1236894336
[411086.108296] ata6: EH complete

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      1953381440 blocks super 1.2 [2/2] [UU]
      [======>..............]  check = 31.6% (618322048/1953381440) finish=115514.5min speed=192K/sec
      bitmap: 0/15 pages [0KB], 65536KB chunk

> Caveats:
>
> I'm hoping you do not have hardlinks in the tree to move. If you do,
> this gets more expensive. You need to use tar|tar or rsync with the -H

Thankfully no hardlinks. I will also take the opportunity to use XFS over
ext4.

> Do you have the hardware to assemble the new raidset with the new drives
> and have both online at once (with two machines I suppose)?
>
> If so you can do the cp-then-rsync directly to the new drives without
> the intermediate 2TB volume. Which means there's no time consuming copy
> back.

Because of the hardware RAID controller, I cannot.

> Cheers,
> Cameron Simpson <cs@xxxxxxxxxx>

Thanks, mate.
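For the record, the transfer to the interim box will be the same
cp-then-rsync idea you described, just aimed at the interim server rather
than the new drives. Roughly this, where the path and hostname are
placeholders, not the real ones:

  # first pass while the services are still up (no outage, just slow):
  rsync -aAX --numeric-ids /srv/data/ interim.example.com:/srv/data/

  # stop the services, then a short final catch-up pass:
  rsync -aAX --numeric-ids --delete /srv/data/ interim.example.com:/srv/data/

No -H needed since there are no hardlinks. The second pass only has to
move whatever changed since the first, so the actual outage is just that
delta plus shifting the services and IP.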