I use it in an archive-type environment.
Hi George...
On my latest deployment we have set
# grep journ /etc/ceph/ceph.conf
osd journal size = 20000

and configured the OSDs for each device by running 'ceph-disk prepare':
# ceph-disk -v prepare --cluster ceph --cluster-uuid XXX --fs-type xfs /dev/sdd /dev/sdb
# ceph-disk -v prepare --cluster ceph --cluster-uuid XXX --fs-type xfs /dev/sde /dev/sdb
# ceph-disk -v prepare --cluster ceph --cluster-uuid XXX --fs-type xfs /dev/sdf /dev/sdb
# ceph-disk -v prepare --cluster ceph --cluster-uuid XXX --fs-type xfs /dev/sdg /dev/sdb
where sdb is an SSD. Once the previous commands finish, this is the partition layout they create for the journals:
# parted -s /dev/sdb p
Model: DELL PERC H710P (scsi)
Disk /dev/sdb: 119GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name          Flags
 1      1049kB  21.0GB  21.0GB               ceph journal
 2      21.0GB  41.9GB  21.0GB               ceph journal
 3      41.9GB  62.9GB  21.0GB               ceph journal
 4      62.9GB  83.9GB  21.0GB               ceph journal
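If I am reading the sizes right, this matches the config: 'osd journal size' is given in megabytes, and 20000 MiB is about 21.0 GB in the decimal units parted reports (20000 x 2^20 bytes ≈ 21.0e9 bytes), so four journals take roughly 84 GB of the 119 GB SSD.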
However, I have never tested more than 4 journals per SSD.
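Regarding the LVM idea in your question: I have not tried it myself, but since 'osd journal' can point at any block device or file, something along these lines should work (volume group and LV names are invented):

# pvcreate /dev/sdb
# vgcreate journal-vg /dev/sdb
# lvcreate -L 20G -n journal-osd0 journal-vg

and then, per OSD in ceph.conf:

[osd.0]
osd journal = /dev/journal-vg/journal-osd0

One nice property is that creating an LV does not require a partition-table reread, so the udev races you mention below should not come into play.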
Cheers
G.
On 07/06/2016 10:03 PM, George Shuklin wrote:
Hello.
I've been testing an Intel 3500 as a journal store for a few HDD-based OSDs. I stumbled on issues with multiple partitions (>4) and udev (sda5, sda6, etc. sometimes do not appear after partition creation). And I'm starting to think that partitions are not that useful for OSD management, because Linux does not allow re-reading a partition table while the disk contains volumes that are in use.
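For what it is worth, the usual ways to ask the kernel for a reread are:

# partprobe /dev/sda
# blockdev --rereadpt /dev/sda

partprobe can sometimes add or remove individual partitions, but blockdev --rereadpt fails with 'Device or resource busy' as soon as any partition on the disk is in use, which is exactly the problem I mean.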
So my question: how do you store many journals on an SSD? My initial thoughts:
1) a filesystem with file-based journals (rough sketch below)
2) LVM with volumes
Anything else? Best practices?
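For (1), I mean something like this (paths invented, untested): make a filesystem on the SSD, mount it somewhere like /srv/ssd-journals, and point each OSD at a file:

[osd.0]
osd journal = /srv/ssd-journals/osd.0/journal
osd journal size = 20000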
P.S. I've done some benchmarking: the 3500 can support up to 16 10k-RPM HDDs.
--
Goncalo Borges
Research Computing
ARC Centre of Excellence for Particle Physics at the Terascale
School of Physics A28 | University of Sydney, NSW 2006
T: +61 2 93511937

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com