Re: RHEL7/HAMMER cache tier doesn't flush or evict?


 



Hi Mohamed,

 

I did not.  But:

·        I confirmed that (by default) cache_target_dirty_ratio was set to 0.4 (40%) and cache_target_full_ratio was set to 0.8 (80%).

·        I did not set target_max_bytes (I felt that the simple, pure relative sizing controls were sufficient for my experiment).

·        I confirmed that (by default) cache_min_flush_age and cache_min_evict_age were set to 0 (so no required delay for either flushing or eviction).
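 
For reference, those values can be read straight off the cache pool (a sketch; "ctpool" is the cache-tier pool named in the setup quoted below, and the same "ceph osd pool get" calls apply to any cache pool):

    # read back the cache-tier thresholds on the cache pool
    ceph osd pool get ctpool cache_target_dirty_ratio
    ceph osd pool get ctpool cache_target_full_ratio
    ceph osd pool get ctpool cache_min_flush_age
    ceph osd pool get ctpool cache_min_evict_age
    ceph osd pool get ctpool target_max_bytes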

 

Given these settings, I expected to see:

·        Flushing begin to happen at 40% of my cache tier size (~200 GB, as it happened), or about 80 GB.  Or earlier.

·        Eviction begin to happen at 80% of my cache tier size, or about 160 GB.  Or earlier.

·        Cache tier capacity would exceed 80% only if the flushing process couldn't keep up with the ingest process for fairly long periods of time (at the observed ingest rate of ~400 MB/sec, a few hundred seconds).
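 
For concreteness, the arithmetic behind those expectations is sketched below (the target_max_bytes line is illustrative; it assumes the ratios are evaluated against that value rather than against raw pool capacity, which this thread has not confirmed):

    # expected thresholds against a ~200 GB cache tier
    #   flush:  0.4 * 200 GB = ~80 GB of dirty data in the tier
    #   evict:  0.8 * 200 GB = ~160 GB of data in the tier
    # pinning target_max_bytes to the cache size would make the same numbers
    # explicit (assumption, not something verified in this thread):
    ceph osd pool set ctpool target_max_bytes 214748364800    # ~200 GiB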

 

Am I misunderstanding something?

 

Thank you very much for your assistance!

 

-don-

 

From: Mohamed Pakkeer [mailto:mdfakkeer@xxxxxxxxx]
Sent: 30 April, 2015 10:52
To: Don Doerner
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?

 

Hi Don,

 

Did you configure target_dirty_ratio, target_full_ratio and target_max_bytes?
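 
(For reference, a sketch of setting these on the cache pool, assuming they correspond to the pool variables cache_target_dirty_ratio, cache_target_full_ratio and target_max_bytes, and reusing the pool name from Don's post; the byte count is illustrative:)

    ceph osd pool set ctpool cache_target_dirty_ratio 0.4
    ceph osd pool set ctpool cache_target_full_ratio 0.8
    ceph osd pool set ctpool target_max_bytes 214748364800    # e.g. ~200 GiB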

 

 

K.Mohamed Pakkeer

 

On Thu, Apr 30, 2015 at 10:26 PM, Don Doerner <Don.Doerner@xxxxxxxxxxx> wrote:

All,

 

Synopsis: I can’t get cache tiering to work in HAMMER on RHEL7.

 

Process:

1.      Fresh install of HAMMER on RHEL7 went well.

2.      Crush map adapted to provide two "root"-level resources:

a.       “ctstorage”, to use as a cache tier based on very high-performance, high IOPS SSD (intrinsic journal).  2 OSDs.

b.      "ecstorage", to use as an erasure-coded pool based on low-performance, cost-effective storage (extrinsic journal).  12 OSDs.

3.      Established a pool “ctpool”, 32 PGs on ctstorage (pool size = min_size = 1).  Ran a quick RADOS bench test, all worked as expected.  Cleaned up.

4.      Established a pool “ecpool”, 256 PGs on ecstorage (5+3 profile).  Ran a quick RADOS bench test, all worked as expected.  Cleaned up.

5.      Ensured that both pools were empty (i.e., "rados ls" showed no objects).

6.      Put the cache tier on the erasure-coded storage (one Bloom hit set, interval 900 seconds), set up the overlay; the specific commands are sketched after this list.  Used defaults for flushing and eviction.  No errors.

7.      Started a 3600-second write test to ecpool.
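 
For completeness, a condensed sketch of the commands behind steps 2 through 7 (pool names, PG counts, hit-set parameters and the 5+3 profile are taken from the list above; the rule and profile names, the ruleset id and the exact bench invocation are illustrative):

    # step 2: a crush rule that places the cache pool under the ctstorage root (rule name illustrative)
    ceph osd crush rule create-simple ct_rule ctstorage host
    # step 3: the cache pool, 32 PGs, size = min_size = 1
    ceph osd pool create ctpool 32 32 replicated
    ceph osd pool set ctpool crush_ruleset 1          # id assigned to ct_rule, illustrative
    ceph osd pool set ctpool size 1
    ceph osd pool set ctpool min_size 1
    # step 4: a 5+3 erasure-code profile rooted under ecstorage, and the 256-PG pool
    ceph osd erasure-code-profile set ec53 k=5 m=3 ruleset-root=ecstorage
    ceph osd pool create ecpool 256 256 erasure ec53
    # step 6: writeback tiering with a single bloom hit set over 900 seconds
    ceph osd tier add ecpool ctpool
    ceph osd tier cache-mode ctpool writeback
    ceph osd tier set-overlay ecpool ctpool
    ceph osd pool set ctpool hit_set_type bloom
    ceph osd pool set ctpool hit_set_count 1
    ceph osd pool set ctpool hit_set_period 900
    # step 7: the hour-long write test against the base pool
    rados -p ecpool bench 3600 write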

 

Objects piled up in ctpool (as expected) – went past the 40% mark (as expected), then past the 80% mark (unexpected), then ran into the wall (95% full – very unexpected).  Using “rados df”, I can see that the cache tier is full (duh!) but not one single object lives in the ecpool.  Nothing was ever flushed, nothing was ever evicted.  Thought I might be misreading that, so I went back to SAR data that I captured during the test: the SSDs were the only [ceph] devices that sustained I/O.
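 
For anyone reproducing this, the checks referred to above, plus one extra probe (a sketch; the cache-flush-evict-all step is an assumption, not something described in this thread):

    # per-pool usage: ctpool at ~95% of the tier, ecpool at zero objects
    rados df
    ceph df detail
    # confirm nothing has landed in the base pool
    rados -p ecpool ls | head
    # assumption: manually drive a flush/evict pass to see whether the
    # tiering agent can move anything at all
    rados -p ctpool cache-flush-evict-all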

 

I based this experiment on another (much more successful) experiment that I performed using GIANT (.1) on RHEL7 a couple of weeks ago (wherein I used RAM as a cache tier); that all worked.  It seems there are at least three possibilities…

·        I forgot a critical step this time around.

·        The steps needed for a cache tier in HAMMER are different from the steps needed in GIANT (and different from what the online documentation describes).

·        There is a problem with HAMMER in the area of cache tiering.

 

Has anyone successfully deployed cache-tiering in HAMMER?  Did you have to do anything unusual?  Do you see any steps that I missed?

 

Regards,

 

-don-

 







 

--

Thanks & Regards   
K.Mohamed Pakkeer
Mobile- 0091-8754410114

