Re: xfsrestore performance

On Sat, May 28, 2016 at 11:25:32AM +0200, xfs.pkoch@xxxxxxxx wrote:
> Dear XFS experts,
> 
> I checked the situation this morning:
> 
> ....
> xfsdump: status at 10:25:53: 473543/7886560 files dumped, 1.3% data dumped,
> 76211 seconds elapsed
> xfsdump: status at 10:35:46: 478508/7886560 files dumped, 1.3% data dumped,
> 76804 seconds elapsed
> 
> and decided to stop the process. With this kind of speed xfsdump |
> xfsrestore
> will need more than one month. Something must be seriously wrong here.
> 

I don't have much experience working with xfsdump so I couldn't really
comment here without spending some significant time playing around with
it. Hopefully somebody else can chime in on this.

> I see two options: Use rsync again to copy the data from the 14TB xfs
> filesystem to the new 20TB xfs filesystem. This won't be finished
> this weekend.
> 

Could you temporarily promote your 14TB fs to production in order to
make the data available while an rsync process migrates the data back
over to the properly formatted 20TB fs?
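If you go that route, something along these lines could drive the migration; note the mount points below are hypothetical placeholders, not paths from this thread:

```shell
# Sketch only -- /mnt/old14tb and /mnt/new20tb are assumed mount points.
# -a preserves permissions/times/symlinks, -H hard links,
# -A ACLs, -X extended attributes (XFS supports both),
# --numeric-ids avoids uid/gid remapping.
rsync -aHAX --numeric-ids /mnt/old14tb/ /mnt/new20tb/

# A second pass after the first completes is usually much faster and
# narrows the window during which files change underneath it:
rsync -aHAX --numeric-ids --delete /mnt/old14tb/ /mnt/new20tb/
```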

> Or use dd to copy the 14TB XFS filesystem into the 20TB volume and
> then grow the filesystem.
> 
> dd runs at 300MB/sec that's approx 1TB per hour, so I decided to go this
> way.
> 
> So here's another question: The new filesystem will run on a 20 disk raid10
> volume and was copied from a 16 disk raid5 volume. So swidth will be wrong.
> Also all the data will be within the first 15TB.
> 
> What should I do to fix this? Or will xfs_growfs fix it automatically?
> 

xfs_growfs will add more allocation groups based on the size of the
current storage. It won't reallocate existing files or anything of that
nature.
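For what it's worth, the grow itself is a one-liner once the copied filesystem is mounted; the mount point here is a hypothetical placeholder:

```shell
# xfs_growfs operates on a *mounted* filesystem.  With no size argument
# it grows the data section to fill the underlying device:
xfs_growfs /mnt/new20tb

# Check the resulting AG count and geometry afterwards:
xfs_info /mnt/new20tb
```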

You can't change the stripe unit/width parameters after the fact, afaik.
The only way I can think of to do this properly is to reformat the
temporary filesystem with the settings targeted at the longer-term
storage, reimport the data from the production volume, and then copy
over the raw block device.
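As a rough sketch of the geometry for that reformat: on a 20-disk RAID10 only half the spindles hold data, so sw would be 10; the chunk size below is an assumption, not a value from this thread:

```shell
# Hypothetical numbers -- substitute the controller's real chunk size.
CHUNK_KB=64                  # assumed per-disk RAID chunk size
DISKS=20
DATA_DISKS=$((DISKS / 2))    # RAID10: half the disks are mirrors -> 10

echo "su=${CHUNK_KB}k sw=${DATA_DISKS}"

# Which would translate into something like (device path hypothetical):
#   mkfs.xfs -d su=${CHUNK_KB}k,sw=${DATA_DISKS} /dev/mapper/new20tb
```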

Brian

> Regards
> 
> Peter Koch

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs


