Adding Cache Tier breaks rbd access

Hello,

I am experimenting with adding an SSD cache tier to my existing Ceph
0.94.5 cluster.

Currently I have:
10 OSDs on 5 hosts (spinning disks)
2 OSDs on 1 host (SSDs)

I have followed the cache tier docs:
http://docs.ceph.com/docs/master/rados/operations/cache-tiering/

First I created a new (spinning) pool and set up the SSDs as a cache tier.
All is fine: I can create/access images with rbd, and I can see that the
cache is used with "rados -p cache-pool ls".
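
For reference, the commands I used were roughly these, taken from the
cache-tiering docs above (pool names are just examples from my setup):

    ceph osd tier add spinning-pool cache-pool
    ceph osd tier cache-mode cache-pool writeback
    ceph osd tier set-overlay spinning-pool cache-pool
    ceph osd pool set cache-pool hit_set_type bloom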


Now, when I add a cache pool to one of my existing pools, for example the
one which hosts all my VM images, all hell breaks loose.

The VMs all get end_request I/O errors and remount their filesystems
read-only. If I shut them down and try to start them again, they won't
start; virsh gives me an error "cannot read metadata header" (or something
like that).

Also, "rados -p cache-pool ls" hangs forewer.

I have read here:
https://software.intel.com/en-us/blogs/2015/03/03/ceph-cache-tiering-introduction
that adding a cache tier works on the fly.

Is there a special trick to adding a cache tier to a running pool under
load (the load is not very high, though)?

And what about the "osd tier add-cache <poolname> <poolname> <int>"
command? It's supposed to add a cache to the first pool. I can't see it
being used in either of the above links.
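
If I understand it correctly, it should roll the tier setup into a single
step, something like this (pool names and the size argument are only an
example, I have not verified this on 0.94.5):

    # add cache-pool as a cache tier for rbd-pool with a 100 GB target size
    ceph osd tier add-cache rbd-pool cache-pool 107374182400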

Thanks very much,
udo.

