So, if you don't mind, two more questions for you:
- How do you handle your ceph.conf configuration (default data pool per
user) and its distribution? Manually, config management, openstack-ansible...?
- Did you make any comparisons or benchmarks between replicated pools and EC
pools on the same hardware/drives? I have read that small writes perform
poorly with EC.
ceph.conf with a default data pool is only needed by Cinder at image
creation time; after that, a Luminous+ rbd client will find the
"data-pool" feature on the image and direct its data I/O to that pool.
# rbd info erasure_rbd_meta/volume-09ed44bf-7d16-453a-b712-a636a0d3d812 <----- meta pool!
rbd image 'volume-09ed44bf-7d16-453a-b712-a636a0d3d812':
size 1500 GB in 384000 objects
order 22 (4096 kB objects)
data_pool: erasure_rbd_data <----- our data pool
block_name_prefix: rbd_data.6.a2720a1ec432bf
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, data-pool <----- "data-pool" feature
flags:
create_timestamp: Sat Jan 27 20:24:04 2018
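
If you want to reproduce this by hand outside of Cinder, a rough sketch
(pool names match the output above; the PG counts and image name are
just examples, and allow_ec_overwrites needs BlueStore OSDs on
Luminous+):

# ceph osd pool create erasure_rbd_data 128 128 erasure
# ceph osd pool set erasure_rbd_data allow_ec_overwrites true
# rbd create --size 1500G --data-pool erasure_rbd_data erasure_rbd_meta/volume-test

rbd keeps only the data objects in the EC pool; the image header and
metadata stay in the (replicated) pool named in the image spec.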
k