Re: Can Ceph Do The Job?


It's my understanding that pool snapshots would basically put us in an all-or-nothing situation where we would have to revert all RBDs in a pool. If we could clone a pool snapshot for filesystem-level access the way we can with an RBD snapshot, that would help a ton. 
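
For reference, the per-RBD workflow I have in mind looks roughly like this (the pool, image, snapshot and mount-point names are just placeholders):

    rbd snap create rbdpool/client-vm@before-restore      # point-in-time snapshot of one image
    rbd snap protect rbdpool/client-vm@before-restore     # snapshot must be protected before cloning
    rbd clone rbdpool/client-vm@before-restore rbdpool/client-vm-restore   # writable clone of the snapshot
    rbd map rbdpool/client-vm-restore                      # prints the mapped device, e.g. /dev/rbd0
    mount /dev/rbd0 /mnt/restore                           # filesystem-level access to the clone
    # (with LVM inside the image, you would activate the volume group and mount the LV instead)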

Thanks, 
Adam Boyhan 
System & Network Administrator 
MEDENT(EMR/EHR) 
15 Hulbert Street - P.O. Box 980 
Auburn, New York 13021 
www.medent.com 
Phone: (315)-255-1751 
Fax: (315)-255-3539 
Cell: (315)-729-2290 
adamb@xxxxxxxxxx 

This message and any attachments may contain information that is protected by law as privileged and confidential, and is transmitted for the sole use of the intended recipient(s). If you are not the intended recipient, you are hereby notified that any use, dissemination, copying or retention of this e-mail or the information contained herein is strictly prohibited. If you received this e-mail in error, please immediately notify the sender by e-mail, and permanently delete this e-mail. 


From: "Janne Johansson" <icepic.dz@xxxxxxxxx> 
To: "adamb" <adamb@xxxxxxxxxx> 
Cc: "ceph-users" <ceph-users@xxxxxxx> 
Sent: Thursday, January 30, 2020 10:06:14 AM 
Subject: Re:  Can Ceph Do The Job? 

On Thu, 30 Jan 2020 at 15:29, Adam Boyhan <adamb@xxxxxxxxxx> wrote: 


We are looking to roll out an all-flash Ceph cluster as storage for our cloud solution. The OSDs will be on slightly slower Micron 5300 PROs, with WAL/DB on Micron 7300 MAX NVMe drives. 
My main concern with Ceph being able to fit the bill is its snapshot capabilities. 
For each RBD we would like the following snapshots: 
8x 30-minute snapshots (latest 4 hours) 
With our current solution (HPE Nimble) we simply pause all write IO on the 10-minute mark for roughly 2 seconds and then take a snapshot of the entire Nimble volume. Each VM within the Nimble volume sits on a Linux logical volume, so it's easy for us to take one big snapshot and still get access to a specific client's data. 
Are there any options for automating management/retention of snapshots within Ceph besides some bash scripts? Is there any way to take snapshots of all RBDs within a pool at a given time? 




You could make a snapshot of the whole pool; that would cover all RBDs in it, I gather? 
https://docs.ceph.com/docs/nautilus/rados/operations/pools/#make-a-snapshot-of-a-pool 
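
For completeness, a minimal sketch of the pool-snapshot commands (the pool and snapshot names are only examples):

    ceph osd pool mksnap rbdpool snap-2020-01-30     # snapshot the whole pool
    rados lssnap -p rbdpool                          # list the pool's snapshots
    ceph osd pool rmsnap rbdpool snap-2020-01-30     # remove a pool snapshot

These are RADOS pool snapshots, though, so as far as I know there is no way to clone an individual RBD image out of one the way you can with a per-image snapshot. 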

But if you need to work with snapshots from different points in time in parallel, clone them one by one, and so forth, doing it per-RBD would be better. 

https://docs.ceph.com/docs/nautilus/rbd/rbd-snapshot/ 
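
As for automating it, a rough bash sketch of the "snapshot every RBD in the pool, keep the newest N" approach (the pool name, snapshot prefix and retention count are just assumptions):

    #!/bin/bash
    # Snapshot every image in the pool with a timestamped name, then keep only the newest $KEEP.
    POOL=rbdpool
    PREFIX=auto
    KEEP=8

    STAMP=$(date +%Y%m%d-%H%M)
    for IMG in $(rbd ls "$POOL"); do
        # create this run's snapshot for the image
        rbd snap create "$POOL/$IMG@${PREFIX}-${STAMP}"

        # prune: list this image's auto snapshots, oldest first, and drop all but the newest $KEEP
        rbd snap ls "$POOL/$IMG" | awk '{print $2}' | grep "^${PREFIX}-" | sort | head -n -"$KEEP" |
        while read -r SNAP; do
            rbd snap rm "$POOL/$IMG@$SNAP"
        done
    done

Note that this walks the images one by one, so the snapshots are close together in time but not taken at a single instant, and nothing here pauses guest IO the way your Nimble pause does. 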

-- 
May the most significant bit of your life be positive. 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


