Re: Ceph OSDs with bcache experience

On 13-11-15 10:56, Jens Rosenboom wrote:
> 2015-10-20 16:00 GMT+02:00 Wido den Hollander <wido@xxxxxxxx>:
> ...
>> The system consists of 39 hosts:
>>
>> 2U SuperMicro chassis:
>> * 80GB Intel SSD for OS
>> * 240GB Intel S3700 SSD for Journaling + Bcache
>> * 6x 3TB disk
> 
> I'm currently testing a similar setup, but it turns out that setup and
> operations are a bit clumsy:
> 
> 1. ceph-disk-prepare, when given the bcache device, always wants to
> create a partition on it, which isn't possible. As a workaround I
> have to manually put a filesystem onto the device, mount it and give
> ceph-disk the mount point.
> 2. Since there is no partition, there is also no partition type UUID,
> so the usual udev-based autostart mechanisms do not work. Running
> "ceph-disk activate /dev/bcache0" works fine, though.
> 
> Did you come up with some more clever solutions for this?
> 

We are not using ceph-disk in those situations. These systems use
sysvinit with this in their configuration:

[osd.253]
  host = ceph39
  devs = /dev/disk/by-location/encl01-slot03
  osd uuid = 16585dc3-1f7e-4810-9d9e-4bf6651f54eb


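(The /dev/disk/by-location name is a stable symlink we create ourselves;
plain udev only provides names like by-id, by-path and by-uuid, so treat the
exact path as site-specific.) With a section like that in ceph.conf, the OSD
is started through the sysvinit script by its section name, for example:

  # start this one OSD on ceph39
  service ceph start osd.253

  # or start everything defined for this host
  service ceph start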
We wrote some custom scripting to set up the bcache devices and to run mkfs
on them afterwards.
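Roughly, and with hypothetical device names and filesystem choice (the real
scripts are site-specific), the per-disk part of that scripting looks like:

  # register the S3700 partition as cache device, the 3TB disk as backing device
  make-bcache -C /dev/sdb1
  make-bcache -B /dev/sdc

  # attach the backing device to the cache set
  # (the cache set UUID comes from 'bcache-super-show /dev/sdb1')
  echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

  # put the OSD filesystem on the resulting bcache device and mount it
  mkfs.xfs -f /dev/bcache0
  mount /dev/bcache0 /var/lib/ceph/osd/ceph-253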

> Also, do you have some experience with broken disks? I'm wondering if
> one failing disk will affect performance for the others running in the
> same cache set.
> 

We have experienced multiple disk failures. They did not seem to affect the
other OSDs in the same cache set.

Wido
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


