On Thu, 17 Oct 2019 at 06:22, Lars Marowsky-Bree <lmb@xxxxxxxx> wrote:
>
> On 2019-10-16T11:28:25, "Honggang(Joseph) Yang" <eagle.rtlinux@xxxxxxxxx> wrote:
>
> I'm glad to see more performance work and caching happening in Ceph!
>
> I admit calling this a "tier" (to get the bike shedding done first ;-)
> is confusing me, because that used to mean something different. This
> seems to me to be more of a BlueStore feature based on hints/access from
> the upper layers?

The user can explicitly send a hint op, or do_op/the agent can send a
hint op based on object access statistics, to trigger a migration.

> So perhaps, at that level, it'd make sense to instead use the space on
> the RocksDB partition/device for this caching operation, instead of yet
> an additional device? (Intuitively, that's what most users already
> expect it does, anyway.)

Yes, that would be more user friendly.

> How would this, compared to bcache, possibly handle situations where
> multiple OSDs share one caching device?

The SSD is split into multiple partitions, and each partition is
assigned to an OSD as its fast partition.

> And does this only promote the local shard/replica? I'm wondering how
> this would affect EC pools.

Yes, only the local shard/replica is promoted. There is still some work
to do to support EC pools.

> Regards,
>     Lars
>
> --
> SUSE Linux GmbH, GF: Felix Imendörffer, Mary Higgins, Sri Rasiah, HRB 21284 (AG Nürnberg)
> "Architects should open possibilities and not determine everything." (Ueli Zbinden)
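
For illustration, the statistics-driven promotion mentioned in the reply above
could be sketched roughly as follows. This is a hypothetical Python sketch, not
Ceph code: the `PromotionAgent` class, its `threshold` policy, and all names
are assumptions introduced here to show the idea of counting per-object
accesses and emitting a promote hint once an object becomes hot.

```python
# Hypothetical sketch (not Ceph code): an agent that tracks per-object
# access statistics and signals a "promote" hint once an object's access
# count crosses a threshold, triggering migration to the fast partition.
from collections import Counter


class PromotionAgent:
    def __init__(self, threshold=3):
        self.threshold = threshold      # accesses before promotion (assumed policy)
        self.access_counts = Counter()  # per-object access statistics
        self.promoted = set()           # objects already on the fast partition

    def record_access(self, obj):
        """Count one access; return True when a promote hint should be sent."""
        if obj in self.promoted:
            return False                # already promoted, nothing to do
        self.access_counts[obj] += 1
        if self.access_counts[obj] >= self.threshold:
            self.promoted.add(obj)      # migrate local shard/replica only
            return True
        return False
```

A real implementation would also need demotion of cold objects and, per the
discussion above, extra handling before EC pools can be supported.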