On Saturday, 21 July 2018 at 15:56:31 CEST, Satish Patel wrote:
> I am trying to deploy ceph-ansible with the lvm OSD scenario, reading
> http://docs.ceph.com/ceph-ansible/master/osds/scenarios.html
>
> I have all-SSD disks and no separate journal; my plan is to keep the
> WAL/DB on the same disk, since everything is SSD at the same speed.
>
> ceph-ansible doesn't create the LVM volumes, so I have to create them
> by hand, but I'm not sure exactly what I need to create, or how.
>
> The doc says the layout should be the following. Does that mean I have
> to create a volume group vg1 (for /dev/sdb) and a logical volume named
> data-lv1? Is that right, or am I missing something?
>
> ---
> osd_objectstore: bluestore
> osd_scenario: lvm
> lvm_volumes:
>   - data: data-lv1
>     data_vg: vg1
>     crush_device_class: foo
>
> If I have many drives, then my volume groups should be vg1, vg2, vg3
> and so on, right?

Yes, that's it. One PV + VG for each drive, with one LV.

I've done that. I don't have my conf at hand, but it is roughly:

On 2 SSDs, one VG + one LV for the WAL/DB of each HDD.
One VG + one LV on each HDD.

I provisioned them with an Ansible playbook using the standard LVM modules.
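For your all-SSD, data-only case, a minimal sketch using the stock lvg and
lvol modules could look like this (untested; the host group, device paths and
VG/LV names are placeholders, adjust them to your drives):

    - hosts: osds
      become: true
      vars:
        # Placeholder list: one entry per SSD, one VG + one LV each
        osd_devices:
          - { dev: /dev/sdb, vg: vg1, lv: data-lv1 }
          - { dev: /dev/sdc, vg: vg2, lv: data-lv2 }
      tasks:
        - name: One PV + VG per drive
          lvg:
            vg: "{{ item.vg }}"
            pvs: "{{ item.dev }}"
          loop: "{{ osd_devices }}"

        - name: One data LV per VG, taking all free space
          lvol:
            vg: "{{ item.vg }}"
            lv: "{{ item.lv }}"
            size: 100%FREE
            shrink: false
          loop: "{{ osd_devices }}"

The matching ceph-ansible variables then just list those LVs, e.g.:

    osd_objectstore: bluestore
    osd_scenario: lvm
    lvm_volumes:
      - data: data-lv1
        data_vg: vg1
      - data: data-lv2
        data_vg: vg2

As far as I know, with bluestore and no separate db:/wal: entries, the WAL/DB
stays on the same LV as the data, which is what you want since everything is
on the same SSD anyway.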