Re: Ceph performance IOPS


 



Just set aside one or more SSDs for the BlueStore DB/WAL. As long as you stay within the 4% sizing rule (block.db sized at roughly 4% of each OSD's data device), I think it should be enough.
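For reference, a rough sketch of what that 4% rule works out to for the cluster described below (9 OSDs per node on 3 TB SATA disks); the numbers are illustrative, not an official sizing formula:

```python
# Rough BlueStore DB sizing per the ~4% rule of thumb:
# the block.db partition should be about 4% of the data device.
# Figures match the cluster described in this thread
# (9 OSDs per node, 3 TB SATA disks each).

osd_data_gb = 3000        # one 3 TB data disk per OSD
osds_per_node = 9
db_ratio = 0.04           # the "4% rule"

db_per_osd_gb = osd_data_gb * db_ratio          # DB partition per OSD
db_per_node_gb = db_per_osd_gb * osds_per_node  # SSD capacity per node

print(f"DB per OSD:  {db_per_osd_gb:.0f} GB")   # 120 GB
print(f"DB per node: {db_per_node_gb:.0f} GB")  # 1080 GB
```

So with roughly 120 GB of DB per OSD, one or two ~1 TB SSDs per node would cover all nine OSDs. With ceph-volume on Luminous the DB device is given at OSD creation time, e.g. `ceph-volume lvm create --data /dev/sdX --block.db /dev/nvme0n1pN` (device names here are placeholders for your own layout).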


On Fri, Jul 5, 2019 at 7:15 AM Davis Mendoza Paco <davis.men.pa@xxxxxxxxx> wrote:
Hi all,
I have installed Ceph Luminous with 5 nodes (45 OSDs); each OSD server supports up to 16 HDDs and I'm only using 9.

I wanted to ask for help improving IOPS performance: I have about 350 virtual machines of approximately 15 GB each, and I/O is very slow.
What would you recommend?

The Ceph documentation recommends using SSDs for the journal. My question is:
how many SSDs do I need to provision per server so that the journals of the 9 OSDs can be moved onto SSDs?

I currently use Ceph with OpenStack, on 11 servers running Debian Stretch:
* 3 controller
* 3 compute
* 5 ceph-osd
  network: LACP bond, 10 GbE
  RAM: 96 GB
  HD: 9 SATA disks, 3 TB each (BlueStore)

--
Davis Mendoza P.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
