What error are you seeing? It looks like it worked. The "or already was" is just their way of saying they didn't check the previous state, but it is definitely not set that way now.
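If you want to confirm, something like this should show it (the 15/16 pool ids are from your lspools output):

$ ceph osd pool ls detail | grep -E 'nvme-cache|ssd-bulk'
# after a successful "tier add", the ssd-bulk line should list "tiers 15"
# and the nvme-cache line should show "tier_of 16"; after "tier remove",
# both of those fields should be gone again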
On Thu, Aug 10, 2017, 8:45 PM Don Waterloo <don.waterloo@xxxxxxxxx> wrote:
I have a system w/ 7 hosts. Each host has 1x 1TB NVMe and 2x 2TB SATA SSD.

The intent was to use this for OpenStack, with Glance stored on the SSDs, and Cinder + Nova using a replicated cache-tier pool on NVMe in front of an erasure-coded pool on SSD. The rationale is that, given the copy-on-write, only the working set of the Nova images would be dirty, so the NVMe cache would improve latency. Also, the lifespan (TBW) of the NVMe is much higher, and its rated IOPS is *much* higher (particularly at low queue depths) than the SATA drives. So I believe this will give me the longest life and highest performance.

I have installed Ceph 12.1.2 on Ubuntu 16.04.

Before I start: does someone have a different config to suggest w/ this equipment?

OK, so I started to configure it, but I ran into an (?error? warning?):

$ ceph osd erasure-code-profile set ssd k=2 m=1 plugin=jerasure technique=reed_sol_van crush-device-class=ssd
$ ceph osd crush rule create-replicated nvme default host nvme
$ ceph osd crush rule create-erasure ssd ssd
$ ceph osd pool create ssd-bulk 1200 erasure ssd
$ ceph osd pool create nvme-cache 1200 nvme
$ ceph osd pool set ssd-bulk allow_ec_overwrites true
$ ceph osd lspools
15 nvme-cache,16 ssd-bulk,
$ ceph osd tier add ssd-bulk nvme-cache
pool 'nvme-cache' is now (or already was) a tier of 'ssd-bulk'
$ ceph osd tier remove ssd-bulk nvme-cache
pool 'nvme-cache' is now (or already was) not a tier of 'ssd-bulk'

So what am I doing wrong? I'm following http://docs.ceph.com/docs/master/rados/operations/cache-tiering/
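For reference, assuming the tier add is kept rather than removed, the remaining steps from that cache-tiering doc page would look roughly like the sketch below; the hit-set values and target_max_bytes are placeholders to be sized against the actual NVMe capacity, not recommendations:

# make the NVMe pool a writeback cache and route client I/O through it
$ ceph osd tier cache-mode nvme-cache writeback
$ ceph osd tier set-overlay ssd-bulk nvme-cache

# hit-set tracking is required for the tiering agent to flush/evict sensibly
$ ceph osd pool set nvme-cache hit_set_type bloom
$ ceph osd pool set nvme-cache hit_set_count 12
$ ceph osd pool set nvme-cache hit_set_period 3600

# placeholder sizing and flush/evict thresholds - tune to the real NVMe capacity
$ ceph osd pool set nvme-cache target_max_bytes 800000000000
$ ceph osd pool set nvme-cache cache_target_dirty_ratio 0.4
$ ceph osd pool set nvme-cache cache_target_full_ratio 0.8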
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com