Re: RHEL7/HAMMER cache tier doesn't flush or evict?


 



I'm using Inkscope to monitor my cluster, and looking at the pool details I
saw that the mode was set to "none". I'm pretty sure there must be a ceph
command-line way to get the option state, but I couldn't find anything obvious
when I looked for it.
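
(For reference, one way to check this from the CLI; "ctpool" below is just the
cache pool name from the original post:

    ceph osd dump | grep ctpool      # the pool line shows "cache_mode ..."

The cache mode, hit set settings and tier relationships all show up in that
pool entry.)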

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Don Doerner
> Sent: 30 April 2015 18:47
> To: Nick Fisk; ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  RHEL7/HAMMER cache tier doesn't flush or evict?
> Sensitivity: Personal
> 
> Hi Nick,
> 
> For brevity, I didn't detail all of the commands I issued.  Looking back
> through my command history, I can confirm that I did explicitly set
> cache-mode to writeback (and later reset it to forward to try
> flush-and-evict).  Question: how did you determine that your cache-mode was
> not writeback?  I'll do that, just to confirm that this is the problem, then
> reestablish the cache-mode.
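> 
> (For reference, the forward/flush-and-evict sequence referred to there looks
> roughly like this, assuming the cache pool is the "ctpool" described in the
> original post below:
> 
>     ceph osd tier cache-mode ctpool forward
>     rados -p ctpool cache-flush-evict-all
> 
> followed by setting the mode back to writeback.)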
> 
> Thank you very much for your assistance!
> 
> -don-
> 
> -----Original Message-----
> From: Nick Fisk [mailto:nick@xxxxxxxxxx]
> Sent: 30 April, 2015 10:38
> To: Don Doerner; ceph-users@xxxxxxxxxxxxxx
> Subject: RE: RHEL7/HAMMER cache tier doesn't flush or evict?
> Sensitivity: Personal
> 
> Hi Don,
> 
> I experienced the same thing a couple of days ago on Hammer. On
> investigation the cache mode wasn't set to writeback even though I'm sure
> it accepted the command successfully when I set the pool up.
> 
> Could you reapply the cache mode writeback command and see if that
> makes a difference?
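> 
> (That command is of the form
> 
>     ceph osd tier cache-mode <cache-pool> writeback
> 
> e.g. with "ctpool", the cache pool named further down, substituted in.)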
> 
> Nick
> 
> > -----Original Message-----
> > From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
> > Of Don Doerner
> > Sent: 30 April 2015 17:57
> > To: ceph-users@xxxxxxxxxxxxxx
> > Subject:  RHEL7/HAMMER cache tier doesn't flush or evict?
> > Sensitivity: Personal
> >
> > All,
> >
> > Synopsis: I can't get cache tiering to work in HAMMER on RHEL7.
> >
> > Process:
> > 1. Fresh install of HAMMER on RHEL7 went well.
> > 2. Crush map adapted to provide two "root" level resources:
> >    a. "ctstorage", to use as a cache tier based on very high-performance,
> >       high-IOPS SSD (intrinsic journal).  2 OSDs.
> >    b. "ecstorage", to use as an erasure-coded pool based on
> >       low-performance, cost-effective storage (extrinsic journal).  12 OSDs.
> > 3. Established a pool "ctpool", 32 PGs on ctstorage (pool size = min_size = 1).
> > Ran a quick RADOS bench test, all worked as expected.  Cleaned up.
> > 4. Established a pool "ecpool", 256 PGs on ecstorage (5+3 profile).
> > Ran a quick RADOS bench test, all worked as expected.  Cleaned up.
> > 5. Ensured that both pools were empty (i.e., "rados ls" shows no objects).
> > 6. Put the cache tier on the erasure-coded storage (one Bloom hit set,
> >    interval 900 seconds), set up the overlay.  Used defaults for flushing
> >    and eviction.  No errors.  (A sketch of the commands for this step
> >    follows the list.)
> > 7. Started a 3600-second write test to ecpool.
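> >
> > (A sketch of step 6 with the stock commands, for comparison; pool names as
> > above, and the target_max_bytes value at the end is only an example:
> >
> >     # tie the cache pool in front of the EC pool and direct client I/O at it
> >     ceph osd tier add ecpool ctpool
> >     ceph osd tier cache-mode ctpool writeback
> >     ceph osd tier set-overlay ecpool ctpool
> >
> >     # one bloom hit set, 900-second interval
> >     ceph osd pool set ctpool hit_set_type bloom
> >     ceph osd pool set ctpool hit_set_count 1
> >     ceph osd pool set ctpool hit_set_period 900
> >
> >     # the flushing/eviction ratios are interpreted relative to
> >     # target_max_bytes / target_max_objects, so one of those normally
> >     # needs to be set on the cache pool as well, e.g.:
> >     ceph osd pool set ctpool target_max_bytes 100000000000
> >
> > The pools themselves would have been created beforehand along the lines of
> > "ceph osd pool create ctpool 32 32 replicated" and "ceph osd pool create
> > ecpool 256 256 erasure <profile>", with crush rulesets pointing them at
> > ctstorage and ecstorage respectively.)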
> >
> > Objects piled up in ctpool (as expected) - went past the 40% mark (as
> > expected), then past the 80% mark (unexpected), then ran into the wall
> > (95% full - very unexpected).  Using "rados df", I can see that the cache
> > tier is full (duh!) but not one single object lives in the ecpool.
> > Nothing was ever flushed, nothing was ever evicted.  Thought I might be
> > misreading that, so I went back to SAR data that I captured during the
> > test: the SSDs were the only [ceph] devices that sustained I/O.
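> >
> > (One way to double-check that from the command line, pool names as above:
> >
> >     rados df                       # per-pool object counts and space used
> >     rados -p ecpool ls | wc -l     # zero here means nothing was ever flushed
> >     rados -p ctpool ls | wc -l     # objects still sitting in the cache tier
> >
> > )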
> >
> > I based this experiment on another (much more successful) experiment
> > that I performed using GIANT (.1) on RHEL7 a couple of weeks ago
> > (wherein I used RAM as a cache tier); that all worked.  It seems there
> > are at least three possibilities:
> > - I forgot a critical step this time around.
> > - The steps needed for a cache tier in HAMMER are different than the
> >   steps needed in GIANT (and different than the documentation online).
> > - There is a problem with HAMMER in the area of cache tiering.
> >
> > Has anyone successfully deployed cache-tiering in HAMMER?  Did you
> > have to do anything unusual?  Do you see any steps that I missed?
> >
> > Regards,
> >
> > -don-
> >
> 
> 
> 
> 




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



