Re: Cache Tier Flush = immediate base tier journal sync?

On Mon, Mar 16, 2015 at 4:46 PM, Christian Balzer <chibi@xxxxxxx> wrote:
> On Mon, 16 Mar 2015 16:09:12 -0700 Gregory Farnum wrote:
>
>> Nothing here particularly surprises me. I don't remember all the
>> details of the filestore's rate limiting off the top of my head, but
>> it goes to great lengths to try and avoid letting the journal get too
>> far ahead of the backing store. Disabling the filestore flusher and
>> increasing the sync intervals without also increasing the
>> filestore_wbthrottle_* limits is not going to work well for you.
>> -Greg
>>
> While that's very true and matches what I recalled from earlier mails
> (the backing store being kicked off early), I think having every last
> configuration parameter documented in a way that doesn't reduce people
> to guesswork would be very helpful.

PRs welcome! ;)

More seriously, we create a lot of config options, and when we do it's
not always clear which ones users should ever be changing. A lot of
them (case in point: anything to do with journal and FS interactions)
should only be changed by people who really understand them, because
it's possible, as evidenced here, to bust up your cluster's performance
badly enough that it's basically broken. Historically that has meant
"people who can read the code and understand it", although we may now
have enough people in between that it's worth going back and
documenting. There isn't much pressure from anybody to do that work
compared to other things like "make CephFS supported" and "make RADOS
faster", though, for understandable reasons. So while we can try to
document these things more in the future, the names here are really
pretty self-explanatory, and the sort of configuration reference guide
I think you're asking for (i.e. "here are all the settings to change if
you are running on SSDs, and here's how they're related") is not the
kind of thing that developers produce. That comes out of the community
or from support contracts.

...so I guess I've circled back around to "PRs welcome!"
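
(For anyone who wants to experiment in the meantime: the knobs under
discussion all live together in the [osd] section of ceph.conf, and
the point above is that they have to move as a set. A rough sketch,
with deliberately made-up numbers rather than recommendations:

  [osd]
  # stretching how far the journal may run ahead of the backing store...
  filestore_min_sync_interval = 0.01
  filestore_max_sync_interval = 30    ; default is 5 seconds

  # ...only helps if the writeback throttle is loosened to match,
  # otherwise it starts flushing (and eventually blocking) long before
  # the longer sync interval buys you anything
  filestore_wbthrottle_xfs_bytes_start_flusher = 419430400
  filestore_wbthrottle_xfs_bytes_hard_limit = 4194304000
  filestore_wbthrottle_xfs_ios_start_flusher = 5000
  filestore_wbthrottle_xfs_ios_hard_limit = 50000
  filestore_wbthrottle_xfs_inodes_start_flusher = 5000
  filestore_wbthrottle_xfs_inodes_hard_limit = 50000

Again, those numbers are purely illustrative; check config_opts.h for
the real defaults and how they relate before touching any of them.)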

> For example "filestore_wbthrottle_xfs_inodes_start_flusher" which defaults
> to 500.
> Assuming that this means to start flushing once 500 inodes have
> accumulated, how would Ceph even know how many inodes are needed for the
> data present?

Number of dirtied objects, of course.
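
(To spell that out a bit: the throttle keeps a count of dirty objects
per filestore. The _start_flusher value is where it begins flushing
them in the background, and the matching _hard_limit is where it
starts blocking new IO until the backlog drains. If I'm remembering
the defaults right, that pair is

  filestore_wbthrottle_xfs_inodes_start_flusher = 500   ; begin background flushing
  filestore_wbthrottle_xfs_inodes_hard_limit = 5000     ; block new IO above this

but check config_opts.h rather than trusting my memory.)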

>
> Lastly, these parameters come in xfs and btrfs incarnations, but no
> ext4.
> Do the xfs parameters also apply to ext4?

Uh, it looks like they do, but I'm just skimming the source right now,
so you should check for yourself if you change these params. :)
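
(In other words there's no filestore_wbthrottle_ext4_* family at all;
an ext4 filestore appears to fall through to the xfs-flavoured
settings, so something like

  filestore_wbthrottle_xfs_inodes_start_flusher = 500

would be the knob to adjust on ext4 as well, assuming I'm reading the
source correctly.)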
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com