Re: safest way to re-crush a pool

Michael;

I run a Nautilus cluster, but all I had to do was change the CRUSH rule associated with the pool, and Ceph moved the data on its own.
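
Roughly, it comes down to the following (a sketch, not copied from my history; the rule name, the root "default", and the failure domain "host" are assumptions you would adjust to your own CRUSH tree, and <pool-name> is a placeholder for each radosgw pool):

  # Create a replicated rule restricted to the ssd device class
  ceph osd crush rule create-replicated replicated_ssd default host ssd

  # Point each pool at the new rule; Ceph then backfills the PGs onto SSD OSDs
  ceph osd pool set <pool-name> crush_rule replicated_ssd

  # Watch recovery/backfill progress
  ceph -s

Since you haven't written any user data yet, the resulting data movement should be small and quick.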

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx 
www.PerformAir.com



-----Original Message-----
From: Michael Thomas [mailto:wart@xxxxxxxxxxx] 
Sent: Tuesday, November 10, 2020 1:32 PM
To: ceph-users@xxxxxxx
Subject:  safest way to re-crush a pool

I'm setting up a radosgw for my Ceph Octopus cluster.  As soon as I 
started the radosgw service, I noticed that it created a handful of new 
pools.  These pools were assigned the 'replicated_data' crush rule 
automatically.

I have a mixed hdd/ssd/nvme cluster, and this 'replicated_data' crush 
rule spans all device types.  I would like radosgw to use a replicated 
SSD pool and avoid the HDDs.  What is the recommended way to change the 
crush device class for these pools without risking the loss of any data 
in the pools?  I will note that I have not yet written any user data to 
the pools.  Everything in them was added by the radosgw process 
automatically.
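
For reference, the current assignment can be checked roughly like this (the pool name is a placeholder; the rule dump confirms there is no device-class restriction):

  # Which CRUSH rule is a given pool using?
  ceph osd pool get <pool-name> crush_rule

  # Inspect the rule itself
  ceph osd crush rule dump replicated_data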

--Mike
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


