Re: Question about delayed write IOs, octopus, mixed storage

On 12/03/2021 18:25, Philip Brown wrote:
Well, that is a very interesting statistic.
Where does that 30GB partition size limit come from?

I believe it is currently using 28GB of SSD per HDD :-/
So you are implying that if I "throw away" 1/8 of my HDDs, so that I can get to that magic 30GB+ per HDD, things will magically improve?
Before I do that kind of rework, I would like to better understand the theory behind it, please :)

I vaguely recall reading something about WAL, SSD, and "db mostly".
I believe there is some way to check the status of that, but Google is not much help without a more specific search term.


----- Original Message -----
From: "Maged Mokhtar" <mmokhtar@xxxxxxxxxxx>
To: "Philip Brown" <pbrown@xxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxx>
Sent: Friday, March 12, 2021 8:04:06 AM
Subject: Re: Question about delayed write IOs, octopus, mixed storage



As a side issue, I do not know how cephadm would configure the 2 x 100
GB SSDs for wal/db serving the 8 HDDs. You need a partition size of
over 30 GB per HDD, else the db will end up mostly on the slow HDDs.
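
If you do end up rebuilding the OSDs by hand, ceph-volume lets you set
the db size explicitly. A rough sketch with made-up device names, and
note that on some releases the size may need to be given in bytes; the
--report flag only prints what would be created, so it is safe to try:

    # dry run: 8 data HDDs sharing 2 SSDs, a 32 GB db slice per OSD
    ceph-volume lvm batch --report \
        --block-db-size 32G \
        --db-devices /dev/sdi /dev/sdj \
        /dev/sd[a-h]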

/maged

I do not believe you are being slowed down by this yet, else your
cluster would be showing a "BlueFS spillover detected" health warning,
but it will eventually happen as you write more and your db grows.
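
To check for yourself, one way (a sketch, assuming you can reach the
admin socket on the OSD host; substitute your own OSD id for osd.0):

    # any spillover already flagged cluster-wide?
    ceph health detail | grep -i spillover

    # per-OSD view: how much of the db lives on flash vs the slow device
    ceph daemon osd.0 perf dump bluefs | grep -E 'db_|slow_'
    # slow_used_bytes above 0 means part of the db sits on the HDD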

You can read more in this post by Nick Fisk
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-October/030913.html
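
Roughly, as I understand the reasoning in that post (assuming the
default rocksdb level sizing bluestore uses):

    L1  ~0.25 GB   (max_bytes_for_level_base)
    L2  ~2.5  GB   (x10 level multiplier)
    L3  ~25   GB   (x10 again)

A level is only useful on the fast device if it fits there in full, so
to keep L1+L2+L3 plus the WAL and some compaction headroom on flash
you want roughly 30 GB. With a 28 GB partition, L3 no longer fits and
ends up on the HDD, which is where "db mostly on slow device" comes
from.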

/maged

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



