Re: Can 16 server-grade SSDs be slower than 60 HDDs? (no extra journals)

We have older LSI RAID controllers with no HBA/JBOD option, so we expose the single disks as RAID0 devices. Ceph should not be aware of the cache status then?
Digging deeper into it, it seems that 1 out of 4 servers is performing a lot better and has super low commit/apply latencies, while the others show a lot more (20+ ms) on heavy writes. This only applies to the SSDs; for the HDDs I can't see a difference...
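Behind single-disk RAID0 the OS usually cannot toggle the drive cache directly; it has to go through the controller tool. A rough sketch, assuming the MegaCli64 utility matches the controller generation (storcli is the newer equivalent; -LAll/-aAll address all logical drives and adapters):

    # show whether the physical disk cache is enabled per logical drive:
    MegaCli64 -LDGetProp -DskCache -LAll -aAll
    # disable the volatile disk cache behind every RAID0 logical drive:
    MegaCli64 -LDSetProp -DisDskCache -LAll -aAll

The per-OSD commit/apply latencies of the four servers can be compared with a plain "ceph osd perf".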

-----Original Message-----
From: Frank Schilder <frans@xxxxxx>
Sent: Monday, 31 August 2020 13:19
To: VELARTIS Philipp Dürhammer <p.duerhammer@xxxxxxxxxxx>; 'ceph-users@xxxxxxx' <ceph-users@xxxxxxx>
Subject: Re: Can 16 server-grade SSDs be slower than 60 HDDs? (no extra journals)

Yes, they can - if the volatile write cache is not disabled. There are many threads on this, including recent ones. Search for "disable write cache" and/or "disable volatile write cache".
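For disks the OS can address directly, a minimal sketch (device names are placeholders; sdparm covers SAS/SCSI drives, hdparm the SATA ones):

    # check whether the volatile write cache (WCE) is enabled:
    sdparm --get=WCE /dev/sdX
    # disable it, and save the setting across power cycles:
    sdparm --set=WCE=0 --save /dev/sdX
    # SATA equivalent:
    hdparm -W 0 /dev/sdX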

You will also find different methods of doing this automatically.
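One sketch of such an automatic method, untested and assuming you want it for every SCSI disk on the host (in practice you would filter the rule, e.g. by ATTRS{model}):

    cat > /etc/udev/rules.d/99-disable-write-cache.rules <<'EOF'
    # treating the cache as write-through makes the kernel issue the
    # mode select that disables the volatile write cache on the drive
    ACTION=="add|change", SUBSYSTEM=="scsi_disk", ATTR{cache_type}="write through"
    EOF
    udevadm control --reload-rules && udevadm trigger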

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: VELARTIS Philipp Dürhammer <p.duerhammer@xxxxxxxxxxx>
Sent: 31 August 2020 13:02:45
To: 'ceph-users@xxxxxxx'
Subject:  Can 16 server-grade SSDs be slower than 60 HDDs? (no extra journals)

I have a production cluster with 60 OSDs and no extra journals. It is performing okay. Now I added an extra SSD pool with 16 Micron 5100 MAX drives, and the performance is slightly slower than, or equal to, the 60-HDD pool, for 4K random as well as sequential reads. Everything runs on a dedicated 2x10G network. The HDDs are still on FileStore; the SSDs are on BlueStore. Ceph Luminous.
What should be possible with 16 SSDs vs. 60 HDDs and no extra journals?
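One way to compare the two pools like for like, as a rough sketch (pool names are placeholders; --no-cleanup keeps the benchmark objects around for the read tests):

    # 60 s of 4K writes with 16 concurrent operations:
    rados bench -p ssd-pool 60 write -b 4096 -t 16 --no-cleanup
    # sequential and random reads against the objects written above:
    rados bench -p ssd-pool 60 seq -t 16
    rados bench -p ssd-pool 60 rand -t 16
    rados -p ssd-pool cleanup

Repeat with the HDD pool name and compare the latency and IOPS columns.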

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
