BlueStore with v11.1.0 Kraken

Hi guys,

I'm building a first test cluster for my homelab and would like to start
using BlueStore, since data loss is not critical. However, there is
obviously no official documentation on basic best practices online yet.

My original layout uses 2x single-Xeon nodes with 24 GB RAM each under
Proxmox VE for the test application and two metadata servers, each
running as a VM guest. Each VM would get about 8 GB RAM, 16 GB max.

The Ceph OSD side is 7x dual-core Opteron nodes with 8 GB RAM each, and
two SATA drives per node in the 2x1 TB to 2x2 TB range. The current
total is 24 TB SATA and 56 GB RAM.

Each node has 4x Gbit NICs, so I run two local storage networks, each
on a dedicated unmanaged switch, and two NICs serving the app data on
two dedicated managed switches. I guess up to 0.6 GB/s worst case is
more than enough for dual-core Opterons, especially with crappy
(nVidia/Broadcom) NICs.

The question is, how does BlueStore change the picture?

E.g., looking at
http://www.slideshare.net/sageweil1/bluestore-a-new-faster-storage-backend-for-ceph-63311181
it says things like all metadata being in memory.
So how many GB of RAM per TB of disk, then? I'm assuming the default 4 MB object size.
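For what it's worth, here is my back-of-envelope arithmetic in Python;
the 4 KiB of cached metadata per object is a pure guess on my part,
not a number from the slides:

  # Back-of-envelope RAM estimate per TB of BlueStore data.
  TB = 10**12                    # 1 TB of raw disk
  OBJECT_SIZE = 4 * 2**20        # default 4 MB RADOS object size
  ONODE_CACHE_BYTES = 4 * 2**10  # assumed cache cost per object (guess!)

  objects_per_tb = TB // OBJECT_SIZE
  ram_bytes = objects_per_tb * ONODE_CACHE_BYTES
  print(f"objects per TB:    {objects_per_tb}")             # ~238,000
  print(f"RAM if all cached: {ram_bytes / 2**30:.2f} GiB")  # ~0.9 GiB

If that per-object guess is anywhere near reality, fully caching the
metadata would cost on the order of 1 GB RAM per TB. Since RocksDB
keeps everything on disk anyway, I assume the cache can be smaller at
the price of extra reads.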

Slide 23 has four example cases. Assuming I have only two HDDs, I
guess my option is a small partition for Linux boot/root and the rest
as raw partitions for RocksDB and object data; a hypothetical split is
sketched below. I could boot the nodes from a USB memory stick, of
course. Would that work, or would there still be too much I/O on the
slow USB device?
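The kind of split I have in mind for a single 2 TB drive, sketched in
Python for lack of a better notation (all sizes are my guesses, not
taken from any documentation):

  # Hypothetical partition plan for one 2 TB HDD; sizes are guesses.
  DISK_GB = 2000
  plan = [
      ("sda1  Linux boot/root",        32),  # or move the OS to USB
      ("sda2  BlueStore WAL (raw)",     2),  # the WAL should stay small
      ("sda3  RocksDB block.db (raw)", 30),  # grows with object count
  ]
  plan.append(("sda4  BlueStore block (raw data)",
               DISK_GB - sum(gb for _, gb in plan)))
  for name, gb in plan:
      print(f"{name:35s} {gb:5d} GB")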

Previously, the 8 GB RAM limited me to a max of 8 TB/node, e.g.
2x 4 TB disks. Is this still the case with BlueStore?
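Applying the same guesswork to one of my OSD nodes (again, the
1 GB/TB figure is my assumption from above, not an official rule):

  # Does 8 GB RAM still cover 8 TB under BlueStore?
  NODE_RAM_GB   = 8
  NODE_DISK_TB  = 8      # e.g. 2x 4 TB
  RAM_PER_TB_GB = 1.0    # assumed, see the estimate above
  OS_AND_OSD_GB = 2.0    # assumed baseline for OS + OSD daemons

  needed = NODE_DISK_TB * RAM_PER_TB_GB + OS_AND_OSD_GB
  print(f"estimated need: {needed:.1f} GB vs {NODE_RAM_GB} GB available")

With those assumptions it looks tight, which is exactly why I'm asking
whether the old rule of thumb still applies.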

Thanks!

Regards,
Eugen 