Re: About Ceph Cache Tier parameters

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> GODIN Vincent (SILCA)
> Sent: 07 May 2015 11:13
> To: ceph-users@xxxxxxxxxxxxxx
> Subject:  About Ceph Cache Tier parameters
> 
> Hi,
> 
> In the Cache Tier parameters, there is nothing to tell the cache to flush
> dirty objects to cold storage when the cache is under-utilized (as long as
> you are under the "cache_target_dirty_ratio", it looks like dirty objects
> can be kept in the cache for years).

Yes, this is correct. I have played around with a cron job to flush the dirty
blocks when I know the cluster will be idle, which improves write performance
for the next bunch of bursty writes. I think the idea behind the current
cache design is geared more towards workloads like running VMs, where
typically the same hot blocks are written to over and over again.

My workload involves a significant number of blocks which are written once
and then never again, so flushing the cache before each job run seems to
improve performance.
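
For anyone wanting to try the same approach, here is a minimal sketch of such
a cron job, assuming a cache pool named "hot-pool" (a placeholder; substitute
your own pool name):

    # /etc/cron.d/ceph-cache-flush (hypothetical file name)
    # Flush and evict everything in the cache pool at 03:00, when the
    # cluster is expected to be idle. Note this evicts clean objects as
    # well as flushing dirty ones, so the cache starts cold afterwards.
    0 3 * * *  root  rados -p hot-pool cache-try-flush-evict-all

cache-try-flush-evict-all skips objects that are currently locked or in use;
cache-flush-evict-all is the blocking variant.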

> 
> That is to say, the flush operations will always start during writes, once
> we have reached the "cache_target_dirty_ratio" value: this slows down the
> current write IOs.
> 
> Are any future improvements planned for this behavior?

Not that I'm currently aware of, but I did post here a couple of weeks ago
suggesting that having high and low watermarks for the cache flushing might
improve performance. At the low watermark, the cache would be flushed with a
low/idle priority (much like the scrub options), and at the high watermark
the current flushing behaviour would start. I didn't get any response, so I
think this idea may have hit a bit of a dead end. I did start looking through
the Ceph source to see if it was something I could try doing myself, but I
haven't found enough time to get my head around it.
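
For context, the existing behaviour hinges on a single per-pool dirty
threshold plus a full ratio for eviction; the knobs look like this
("hot-pool" is again a placeholder pool name):

    # Begin flushing dirty objects once they make up 40% of the cache,
    # and begin evicting objects once the cache is 80% full.
    ceph osd pool set hot-pool cache_target_dirty_ratio 0.4
    ceph osd pool set hot-pool cache_target_full_ratio 0.8

The watermark idea above would effectively split cache_target_dirty_ratio
into a low threshold (idle-priority flushing) and a high one (the current
behaviour).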

> 
> Thanks for your response
> 
> Vince
> 
> 
> ________________________________________
> This mail message and attachments (the "Message") are confidential and
> solely intended for the addressees.
> If you receive this message in error, please delete it and immediately
> notify the sender by e-mail.
> Any use other than its intended purpose, review, retransmission, or
> dissemination, whether whole or partial, is prohibited unless formal
> approval is granted. SILCA shall not be liable for the Message if altered,
> changed, falsified, retransmitted or disseminated.






_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com