Pool migration using cache tiering

Hi all,

I'm going over the pool-migration-via-cache-tier process explained
here [1]. There are a couple of issues with it:

1. If the pool contains RBD images, it cannot be added as a tier:

$> ceph osd tier add newpool testpool --force-nonempty
Error ENOTEMPTY: tier pool 'testpool' has snapshot state; it cannot be
added as a tier without breaking the pool

This happens even though there are no snapshots in the pool. It can be
worked around by enabling mon_debug_unsafe_allow_tier_with_nonempty_snaps
on the monitors.
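For reference, the workaround I used looks roughly like this (a sketch;
the pool names are the ones from my example above, and the injectargs
change is runtime-only, so it does not survive a monitor restart):

```shell
# Allow adding a pool that carries snapshot state as a cache tier.
# This is deliberately an "unsafe" debug option, hence the question
# about how safe it is on a production cluster.
ceph tell mon.* injectargs \
    '--mon_debug_unsafe_allow_tier_with_nonempty_snaps=true'

# With the option enabled, the tier add no longer fails with ENOTEMPTY:
ceph osd tier add newpool testpool --force-nonempty
```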

2. The 'forward' mode seems to cause some issues, as seen in this
tracker issue [2] (write reordering) and in the comment here [3]. Using
the 'proxy' mode, though, seems to achieve the same end for pool
migration.

Are there any thoughts on this procedure? Should the 'proxy' mode be
used instead of the 'forward' mode? And how safe is it to enable tiering
with snapshot state on a production cluster?
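To make the question concrete, the overall procedure from [1] with
'proxy' substituted for 'forward' would look roughly like this. This is
an untested sketch using my pool names from above; depending on the
Ceph release, the cache-mode change may additionally require a
--yes-i-really-mean-it flag:

```shell
# Attach the old pool (testpool) as a cache tier of the new pool
# (newpool), so flushed objects land in the new pool.
ceph osd tier add newpool testpool --force-nonempty

# Proxy mode instead of forward: the cache tier proxies reads and
# writes to the base pool rather than redirecting clients, which is
# what should avoid the write-reordering problem from [2].
ceph osd tier cache-mode testpool proxy

# Flush and evict every object from the old pool into the new one.
rados -p testpool cache-flush-evict-all

# Once the old pool is empty, detach it and swap the names so clients
# keep using the original pool name.
ceph osd tier remove newpool testpool
ceph osd pool rename testpool testpool.old
ceph osd pool rename newpool testpool
```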

Thanks,
Mohamad

[1] https://ceph.com/geen-categorie/ceph-pool-migration/
[2] http://tracker.ceph.com/issues/12814
[3] https://github.com/ceph/ceph/commit/d7da68848a8551390a82cd6c46ffef00f98e9e59
