Online conversion of pool type

Hi,

Every now and then someone asks if it's possible to convert a pool to a
different type (replicated vs erasure coded, changing the number of PGs,
etc.), but this is not supported. The advised approach is usually to
create a new pool and somehow copy all data manually to the new pool,
removing the old pool afterwards. This is both impractical and very time
consuming.
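
For reference, that manual approach boils down to something like the
following (just a sketch; for rbd images you would typically use "rbd cp"
per image instead of a raw pool copy):

# ceph osd pool create test-B 32
# rados cppool test-A test-B
# ceph osd pool delete test-A test-A --yes-i-really-really-mean-it

The pool has to stay idle for the whole copy, which is exactly the
downtime we would like to avoid.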

Recently I saw someone on this list suggest that the cache tiering
feature may actually be used to achieve some form of online conversion
of pool types. Today I ran some tests and I would like to share my results.

I started out with a pool test-A, created an rbd image in the pool,
mapped it, created a filesystem on the rbd image, mounted the fs and
placed some test files in it, just to have some objects in the
test-A pool.
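
For completeness, the setup was roughly as follows (the names, size and
mount point are just what I happened to use):

# ceph osd pool create test-A 32
# rbd create test-A/test-rbd --size 1024
# rbd map test-A/test-rbd
# mkfs.ext4 /dev/rbd/test-A/test-rbd
# mount /dev/rbd/test-A/test-rbd /mnt
# cp -a /etc /mnt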

I then added a test-B pool and transferred the data using cache tiering
as follows:

Step 0: We have a test-A pool and it contains data, some of which is in use.
# rados -p test-A df
test-A          -           9941     11      0      0      0    324   2404     57   4717

Step 1: Create new pool test-B
# ceph osd pool create test-B 32
pool 'test-B' created

Step 2: Make pool test-A a cache pool for test-B.
# ceph osd tier add test-B test-A --force-nonempty
# ceph osd tier cache-mode test-A forward
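
To verify that the tier relationship is in place, the pool dump should
now show test-A as a tier of test-B with cache_mode forward:

# ceph osd dump | grep 'test-'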

Step 3: Move the data from test-A to test-B (this can potentially take a
long time)
# rados -p test-A cache-flush-evict-all
This step will move all data except the objects that are in active use,
so we are left with some remaining data in the test-A pool.
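
What is left behind can be checked with:

# rados -p test-A ls
# rados -p test-A df

In my case that was only the objects of the still-mapped rbd image that
were in active use.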

Step 4: Move the remaining data as well. This is the only step that does
not work "online".
Step 4a: Disconnect all clients
# rbd unmap /dev/rbd/test-A/test-rbd   (in my case)
Step 4b: Move remaining objects
# rados -p test-A cache-flush-evict-all
# rados -p test-A ls  (should now be empty)
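(With a mounted filesystem like mine, "disconnecting" in step 4a of
course also means unmounting before the unmap, roughly:
# umount /mnt
# rbd unmap /dev/rbd/test-A/test-rbd )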

Step 5: Remove test-A as cache pool
# ceph osd tier remove test-B test-A

Step 6: Clients can connect to the test-B pool again (we are back in
"online" mode)
# rbd map test-B/test-rbd  (in my case)
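(And again in my case, remount the filesystem and check that the test
files are all still there:
# mount /dev/rbd/test-B/test-rbd /mnt )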

Step 7: Remove the now empty pool test-A
# ceph osd pool delete test-A test-A --yes-i-really-really-mean-it
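
As a final sanity check, all data should now be reported under test-B:

# rados df

(test-B should show roughly the objects and bytes that test-A reported
in step 0, and test-A should be gone.)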


This worked smoothly. In my first try I actually used more steps, by creating