I have a system w/ 7 hosts.
Each host has 1x 1TB NVMe and 2x 2TB SATA SSDs.
The intent is to use this for OpenStack: Glance images stored on the SATA SSDs, and Cinder + Nova backed by a replicated cache-tier pool on the NVMe in front of an erasure-coded pool on the SSDs.
The rationale is that, given the copy-on-write cloning, only the working set of the Nova images would be dirty, and thus the NVMe cache would improve latency.
Also, the endurance (TBW) of the NVMe is much higher, and its rated IOPS is *much* higher than the SATA SSDs' (particularly at low queue depths), so I believe this layout gives me the longest life and the highest performance.
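To spell out the copy-on-write bit: Nova/Cinder clone the Glance image rather than copying it, so only the objects a VM actually writes become dirty and should land in the cache tier. The manual equivalent would be roughly this (image and pool names are just placeholders, not what I've actually created):
$ # snapshot and protect the base image, then make a COW clone for a VM
$ rbd snap create glance-pool/ubuntu-base@snap
$ rbd snap protect glance-pool/ubuntu-base@snap
$ rbd clone glance-pool/ubuntu-base@snap nova-pool/vm-0-disk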
I have installed Ceph 12.1.2 on Ubuntu 16.04.
Before I start: does anyone have a different configuration to suggest with this equipment?
OK, so I started configuring, but I ran into an (error? warning?):
$ ceph osd erasure-code-profile set ssd k=2 m=1 plugin=jerasure technique=reed_sol_van crush-device-class=ssd
$ ceph osd crush rule create-replicated nvme default host nvme
$ ceph osd crush rule create-erasure ssd ssd
$ ceph osd pool create ssd-bulk 1200 erasure ssd
$ ceph osd pool create nvme-cache 1200 nvme
$ ceph osd pool set ssd-bulk allow_ec_overwrites true
$ ceph osd lspools
15 nvme-cache,16 ssd-bulk,
$ ceph osd tier add ssd-bulk nvme-cache
pool 'nvme-cache' is now (or already was) a tier of 'ssd-bulk'
$ ceph osd tier remove ssd-bulk nvme-cache
pool 'nvme-cache' is now (or already was) not a tier of 'ssd-bulk'
So what am I doing wrong? I'm following http://docs.ceph.com/docs/master/rados/operations/cache-tiering/
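In case it helps to see where I'm headed: per that page, after the tier add I was planning to run roughly the following (the hit_set and target values are just the doc's example numbers, not yet tuned for my hardware):
$ ceph osd tier cache-mode nvme-cache writeback
$ ceph osd tier set-overlay ssd-bulk nvme-cache
$ ceph osd pool set nvme-cache hit_set_type bloom
$ ceph osd pool set nvme-cache hit_set_count 12
$ ceph osd pool set nvme-cache hit_set_period 14400
$ ceph osd pool set nvme-cache target_max_bytes 1099511627776
$ ceph osd pool set nvme-cache cache_target_dirty_ratio 0.4
$ ceph osd pool set nvme-cache cache_target_full_ratio 0.8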