What would be the most appropriate procedure to move blockdb/wal to SSD?
1.- Remove the OSD and recreate it with block.db/WAL on the SSD (affects performance while the OSD backfills); a sketch of this follows the list below:
ceph-volume lvm prepare --bluestore --data <device> --block.wal <wal-device> --block.db <db-device>
2.- Follow this blog post on migrating BlueStore's block.db:
http://heiterbiswolkig.blogs.nde.ag/2018/04/08/migrating-bluestores-block-db/
3.- Follow this blog post on adding an SSD journal:
https://swamireddy.wordpress.com/2016/02/19/ceph-how-to-add-the-ssd-journal/
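For option 1, a rough sketch of the rebuild, done one OSD at a time so only one OSD's worth of data has to backfill (OSD id 12 and the device/LV names are placeholders, and the block.db/WAL LVs on the SSD are assumed to exist already):

ceph osd out 12
# wait for rebalancing to finish and the cluster to return to HEALTH_OK
systemctl stop ceph-osd@12
ceph osd purge 12 --yes-i-really-mean-it
ceph-volume lvm zap /dev/sdc --destroy
ceph-volume lvm prepare --bluestore --data /dev/sdc \
    --block.db ceph-db-ssd/db-12 --block.wal ceph-db-ssd/wal-12
ceph-volume lvm activate --all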
Thanks for the help
On Sun, 7 Jul 2019 at 14:39, Christian Wuerdig (<christian.wuerdig@xxxxxxxxx>) wrote:
One thing to keep in mind is that the blockdb/wal device becomes a single point of failure for all OSDs using it, so if that SSD dies you essentially have to consider all OSDs using it as lost. I think most go with something like 4-8 OSDs per blockdb/wal drive, but it really depends on how risk-averse you are, what your budget is, etc. Given that you only have 5 nodes I'd probably go for fewer OSDs per blockdb device.

On Sat, 6 Jul 2019 at 02:16, Davis Mendoza Paco (<davis.men.pa@xxxxxxxxx>) wrote:

Hi all,
I have installed Ceph Luminous with 5 nodes (45 OSDs); each OSD server supports up to 16 HDDs and I'm only using 9.
I wanted to ask for help improving IOPS, since I have about 350 virtual machines of approximately 15 GB each and I/O is very slow.
What would you recommend?
The Ceph documentation recommends using SSDs for the journal, so my question is:
How many SSDs do I need per server so that the journals of the 9 OSDs can be moved to SSD?
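As a rough illustration of the ratio suggested in the reply above (4-8 OSDs per blockdb/wal device), 9 OSDs per server would mean roughly 2-3 SSDs each. One common layout is to carve each SSD into one LV per OSD; the device, VG name, sizes and counts below are only placeholders:

# one SSD (placeholder /dev/nvme0n1) shared by 4-5 OSDs' block.db/WAL
vgcreate ceph-db-ssd /dev/nvme0n1
for i in 0 1 2 3; do
    lvcreate -L 60G -n db-$i ceph-db-ssd    # 60G is only an example size
    lvcreate -L 2G -n wal-$i ceph-db-ssd    # a separate WAL LV is optional
done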
I currently use Ceph with OpenStack, on 11 servers running Debian Stretch:
* 3 controller
* 3 compute
* 5 ceph-osd
network: 10 Gb LACP bond
RAM: 96GB
HD: 9x 3 TB SATA disks (BlueStore)
--
Davis Mendoza P.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com