Re: journal placement for small office?

A 2x10GbE LAG across two switches, with 2x40GbE between them, should be enough bandwidth.
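
Back-of-the-envelope, assuming ~150 MB/s sequential per 4TB spinner (a
rough figure, not measured on this hardware):

    6 OSDs x 150 MB/s = 900 MB/s ~= 7.2 Gbit/s peak per node

so a 2x10GbE LAG (20 Gbit/s) leaves headroom for replication and
recovery traffic.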

On Mon, Feb 9, 2015 at 6:10 AM, Eneko Lacunza <elacunza@xxxxxxxxx> wrote:
> Hi,
>
> The common recommendation is to use one good SSD (e.g. an Intel S3700) for
> journals per 3-4 OSDs, or otherwise to use the internal journal on each
> OSD. Don't put more than one journal on the same spinning disk.
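
For reference, ceph-disk can point each OSD's journal at the shared SSD
and will carve out one journal partition per OSD. A sketch only; device
names are hypothetical:

    # /dev/sdb, /dev/sdc, ... are data disks; /dev/sda is the journal SSD.
    # ceph-disk creates a new journal partition on /dev/sda for each OSD.
    ceph-disk prepare /dev/sdb /dev/sda
    ceph-disk prepare /dev/sdc /dev/sda

The journal partition size is taken from ceph.conf at prepare time:

    [osd]
    osd journal size = 10240    ; in MB, i.e. a 10 GB journal per OSD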
>
> Also, it is recommended to use 500GB-1TB disks, especially if you have a
> 1gbit network; otherwise, when an OSD fails, recovery time can be quite
> long. Also look in the mailing list archives for some tuning of
> backfilling for small Ceph clusters.
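
The settings usually mentioned for that look roughly like this (values
illustrative; they slow recovery in exchange for steadier client I/O):

    [osd]
    osd max backfills = 1
    osd recovery max active = 1
    osd recovery op priority = 1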
>
> Cheers.
> Eneko
>
>
> On 06/02/15 16:48, pixelfairy wrote:
>>
>> 3 nodes, each with 2x1TB in a RAID 1 (for /) and 6x4TB for storage. All
>> of this will be used for block devices for KVM instances. Typical
>> office stuff: databases, file servers, internal web servers, a couple
>> dozen thin clients. Not using the object store or CephFS.
>>
>> I was thinking about putting the journals on the root disk (this is
>> how my virtual cluster works, because in that version the OSDs are 4GB
>> instead of 4TB) and keeping that on its current RAID 1 for resiliency,
>> but I'm worried about creating a performance bottleneck. I'm tempted
>> to swap these out for SSDs. If so, how big should I get? Is 1/2TB
>> enough?
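
On sizing: the rule of thumb in the Ceph docs is

    journal size >= 2 * (expected throughput * filestore max sync interval)
                 ~= 2 * (150 MB/s * 5 s) = 1500 MB per OSD

(assuming ~150 MB/s spinners and the default 5 s sync interval), so all
six journals on a node fit in ~10 GB. You'd buy the SSD for sync-write
speed and endurance, not capacity; 1/2TB is far more than journals need.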
>>
>> The other thought was little journal partitions on each OSD. We're
>> using XFS because I don't know enough about btrfs to feel comfortable
>> with it. Would the performance degradation be worse?
>>
>> Is there a better way?
>
>
> --
> Technical Director
> Binovo IT Human Project, S.L.
> Tel. 943575997
>      943493611
> Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
> www.binovo.es
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




