Jiri,
if you colocate multiple journals on 1 SSD (we do...), make sure you understand the following:
- if the SSD dies, all OSDs that had their journals on it are lost...
- the more journals you put on a single SSD (1 journal being 1 partition), the worse the performance, since the SSD's total performance is no longer dedicated/available to only 1 journal; if you colocate 6 journals on 1 SSD, each journal gets roughly 1/6 of the SSD's performance...
Latency will go up and bandwidth will go down the more journals you colocate... XFS recommended...
I suggest you strike a balance between the performance you want and the $$$ for SSDs... (a rough partitioning sketch is below)
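For illustration only, here is one way to carve an SSD into 6 journal partitions with sgdisk (the device name /dev/sdg and the 10G journal size are assumptions, not from this thread; size them to your needs):

    # create 6 x 10G journal partitions on the SSD
    for i in 1 2 3 4 5 6; do
        sgdisk --new=$i:0:+10G --change-name=$i:"ceph journal $i" /dev/sdg
    done

Each OSD then points its journal at one of these partitions (e.g. via the journal symlink in its data directory).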
best
On 29 September 2015 at 13:32, Jiri Kanicky <j@xxxxxxxxxx> wrote:
Hi Lionel.
Thank you for your reply. In this case I am considering creating a separate partition on the SSD drive for each disk. It would be good to know what the performance difference is, because creating partitions is kind of a waste of space.
One more question: is it a good idea to move the journals for 3 OSDs to a single SSD, considering that if the SSD fails the whole node with 3 HDDs will be down? Thinking about it, leaving the journal on each OSD might be safer, because the journal on one disk does not affect the other disks (OSDs). Or do you think that having the journal on an SSD is the better trade-off?
Thank you
Jiri
On 29/09/2015 21:10, Lionel Bouton wrote:
On 29/09/2015 07:29, Jiri Kanicky wrote:
> Hi,
> Is it possible to create the journal in a directory as explained here:
> http://wiki.skytech.dk/index.php/Ceph_-_howto,_rbd,_lvm,_cluster#Add.2Fmove_journal_in_running_cluster

Yes, the general idea (stop, flush, move, update ceph.conf, mkjournal, start) is valid for moving your journal wherever you want. That said, it probably won't perform as well on a filesystem (LVM has lower overhead than a filesystem).
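For concreteness, a minimal sketch of that stop/flush/move/start sequence for a single OSD (osd.0, the new journal target /dev/sda6, and the sysvinit-style service invocation are assumptions; adapt them to your cluster and init system):

    # stop the OSD so no new journal writes arrive
    service ceph stop osd.0
    # replay any pending journal entries into the object store
    ceph-osd -i 0 --flush-journal
    # point the OSD at the new journal (ceph.conf "osd journal" or the
    # journal symlink in the OSD data directory)
    ln -sf /dev/sda6 /var/lib/ceph/osd/ceph-0/journal
    # initialize the new journal and bring the OSD back
    ceph-osd -i 0 --mkjournal
    service ceph start osd.0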
> 1. Create BTRFS over /dev/sda6 (assuming this is the SSD partition allocated
> for the journal) and mount it to /srv/ceph/journal

BTRFS is probably the worst idea for hosting journals. If you must use BTRFS, you'll have to make sure that the journals are created NoCoW before the first byte is ever written to them.
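For example (a sketch, assuming a journal file at /srv/ceph/journal/osd0/journal; chattr +C only takes effect on files that are still empty):

    # pre-create the journal file empty, then mark it NoCoW
    touch /srv/ceph/journal/osd0/journal
    chattr +C /srv/ceph/journal/osd0/journal
    # verify: lsattr should show the 'C' attribute
    lsattr /srv/ceph/journal/osd0/journal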
> 2. Add OSD: ceph-deploy osd create --fs-type btrfs
> ceph1:sdb:/srv/ceph/journal/osd$id/journal

I've no experience with ceph-deploy...
Best regards,
Lionel
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Andrija Panić