Re: EC pool migrations

I'm unlikely to get back to this soon with 100% confirmation that it
works (i.e., end-to-end testing from the client perspective), but
what I'd got to so far looked promising, so I thought I'd share. Note
that this was done on a Hammer cluster. Notes and expansions on the
steps are inline:

The assumption here is that you have enough space in your cluster for
a replicated pool that will temporarily hold the intermediate data.
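
You can sanity-check the available capacity up front with the usual
utilisation commands, e.g.:

ceph df detail
rados df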

> On 7 February 2017 at 23:50, Blair Bethwaite <blair.bethwaite@xxxxxxxxx> wrote:
>> 1) insert a large enough temporary replicated pool as a cache tier

The cache-tiering feature is so useful just for this ability to do
online pool migrations that I really wish EC->EC tiering were
possible. Is it disabled simply because no-one has considered the
idea, or is there a technical reason it cannot be allowed?
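
For completeness, step 1 on Hammer looks roughly like the below
("temp-cache" and the PG count are placeholders to tune for your
cluster):

ceph osd pool create temp-cache 4096 4096 replicated
ceph osd pool set temp-cache hit_set_type bloom
ceph osd tier add <old pool> temp-cache
ceph osd tier cache-mode temp-cache writeback
ceph osd tier set-overlay <old pool> temp-cache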

>> 2) somehow force promotion of every object into the cache (don't see
>> any way to do that other than actually read them - but at least some
>> creative scripting could do that in parallel)

A rados stat seems to do the job. I used:

rados -p <old pool> ls | xargs -n1 -P64 -I{} rados -p <old pool> stat {} 2>&1 | tee promote.log

Splitting up the object list and stat-ing across multiple clients
would probably significantly improve the overall promotion throughput.
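
Untested sketch of that idea, assuming GNU split and a handful of
client hosts to run the chunks on (four here, as an example):

rados -p <old pool> ls > objects.txt
split -n l/4 objects.txt chunk.
# then run one chunk per client, e.g. on the first:
xargs -n1 -P64 -I{} rados -p <old pool> stat {} < chunk.aa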

>> 3) once #objects in cache = #objects in old backing pool
>> then stop radosgw services
>> 4) remove overlay and tier remove

No issues here.
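
For reference, the count check and step 4 were along these lines
(pool names are placeholders):

rados df | grep -E '<old pool>|<temp cache pool>'   # compare OBJECTS
ceph osd tier remove-overlay <old pool>
ceph osd tier remove <old pool> <temp cache pool>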

>> 6) now we should have identical or newer data in the temporary
>> replicated pool and no caching relationship
>> then add the temporary replicated pool as a tier (--force-nonempty) to
>> the new EC pool

No issues here.
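
In command form that's roughly (same placeholder names; forward mode
stops new writes being promoted while you flush):

ceph osd tier add <new EC pool> <temp cache pool> --force-nonempty
ceph osd tier cache-mode <temp cache pool> forward
ceph osd tier set-overlay <new EC pool> <temp cache pool>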

>> 7) finally cache-flush-evict-all and remove the temporary replicated pool

cache-flush-evict-all won't actually flush anything down to the new
pool until the objects have been dirtied. It looks like a simple
rados setxattr is enough to do that, so e.g.:

rados -p <new pool> ls | xargs -n1 -P64 -I{} rados -p <new pool> setxattr {} flushed 1 2>&1 | tee demote.log

Then bump the tiering agent's flush parallelism:

ceph tell osd.* injectargs '--osd_agent_max_ops 16'

and kick off the flush:

rados -p <temp cache pool> cache-flush-evict-all
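
Once the cache is empty, the teardown should just be (untested from
here on; placeholder names again):

ceph osd tier remove-overlay <new EC pool>
ceph osd tier remove <new EC pool> <temp cache pool>
ceph osd pool delete <temp cache pool> <temp cache pool> --yes-i-really-really-mean-it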

-- 
Cheers,
~Blairo