Re: Activate Cache Tier on Running Pools

I hope the data you're running on that Ceph cluster isn't important if you're looking to run a cache tier with just 2 SSDs and a replication of 2.

If your cache tier fails, you basically corrupt most of the data on the pool below it.

Also, as Wido said, even if you get it to work, I don't think it will give you the performance you expect. Could you not create a separate small SSD pool as a scratch disk for the VMs to use when they are doing heavy temporary I/O? That would leave your current data and setup untouched.
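As a rough sketch (pool and rule names are placeholders, and the PG count needs sizing for your cluster), something like:

    # CRUSH rule that targets only SSD-class OSDs
    ceph osd crush rule create-replicated ssd-only default host ssd
    # small replicated pool on that rule for scratch/temp volumes
    ceph osd pool create vm-scratch 64 64 replicated ssd-only

Then attach a volume from that pool to each VM and point the build jobs' temp directories at it.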



---- On Mon, 16 Sep 2019 18:21:47 +0800 Eikermann, Robert <eikermann@xxxxxxxxxx> wrote ----

We have terrible I/O performance when multiple VMs do file I/O, mainly Java compilation on those servers. If we have 2 parallel jobs everything is fine, but with 10 jobs we see the warning “HEALTH_WARN X requests are blocked > 32 sec; Y osds have slow requests”. I have two enterprise SSDs which gave good results when tested with fio. They are too small to build a separate “ssd_vms” pool and advertise it in OpenStack as a separate storage backend.
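For reference, a typical fio sync-write check for SSDs in this role looks roughly like this (the device path is a placeholder, and it writes to the raw device, so only run it against an empty disk):

    fio --name=ssd-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based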

 

--
-----------------------------------------------------------------

Robert Eikermann M.Sc.RWTH               | Software Engineering

Lehrstuhl für Software Engineering       | RWTH Aachen University

Ahornstr. 55, 52074 Aachen, Germany      | http://www.se-rwth.de

Phone ++49 241 80-21306 / Fax -22218     |

 

From: Wido den Hollander [mailto:wido@xxxxxxxx]
Sent: Monday, 16 September 2019 11:52
To: Eikermann, Robert <eikermann@xxxxxxxxxx>; ceph-users@xxxxxxx
Subject: Re: [ceph-users] Activate Cache Tier on Running Pools

 

 

On 9/16/19 11:36 AM, Eikermann, Robert wrote:

Hi,

 

I’m using Ceph in combination with OpenStack. For the “VMs” pool I’d like to enable a writeback cache tier, as described here: https://docs.ceph.com/docs/luminous/rados/operations/cache-tiering/ .
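The steps in those docs boil down to roughly the following (pool names here are placeholders for our setup):

    ceph osd tier add vms vms-cache
    ceph osd tier cache-mode vms-cache writeback
    ceph osd tier set-overlay vms vms-cache
    # recent releases also require a hit set on the cache pool
    ceph osd pool set vms-cache hit_set_type bloom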

 

Can you explain why? Cache tiering has some serious flaws and can even decrease performance instead of improving it.

What are you trying to solve?

Wido

Should it be possible to do that on a running pool? I tried, and immediately all VMs (Ubuntu Linux) running on Ceph disks got read-only filesystems. No errors were shown in Ceph (but also no traffic arrived after enabling the cache tier). Removing the cache tier, rebooting the VMs and running a filesystem check repaired everything.
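For reference, removal was roughly the reverse of the setup, per the same docs (pool names again placeholders; on Luminous the mode switch needs the extra confirmation flag):

    ceph osd tier cache-mode vms-cache forward --yes-i-really-mean-it
    rados -p vms-cache cache-flush-evict-all
    ceph osd tier remove-overlay vms
    ceph osd tier remove vms vms-cache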

 

Best

Robert

 



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
