Has maximum performance been reached?


 



Hi, Karan!

These are physical CentOS clients mounting CephFS via the kernel module (kernel 4.1.3).
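
For reference, this is a minimal sketch of how such a kernel-client mount is done (the monitor address, mount point and secret file below are placeholders, not our actual values):

    mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret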

Thanks

>Hi
>
>What type of clients do you have?
>
>- Are they physical Linux machines or VMs mounting Ceph RBD or CephFS?
>- Or are they simply OpenStack / cloud instances using Ceph as Cinder volumes or something like that?
>
>
>- Karan -

>> On 28 Jul 2015, at 11:53, Shneur Zalman Mattern <shzama@xxxxxxxxxxxx> wrote:
>>
>> We've built a Ceph cluster:
>>     3 mon nodes (one of them combined with the mds)
>>     3 osd nodes (each one has 10 OSDs + 2 SSDs for journaling)
>>     24-port 10G switch
>>     10 gigabit - for the public network
>>     20 gigabit bonding - between OSDs
>>     Ubuntu 12.04.05
>>     Ceph 0.87.2
>> -----------------------------------------------------
>> Clients have:
>>     10 gigabit for the Ceph connection
>>     CentOS 6.6 with kernel 4.1.3, equipped with the CephFS kernel module
>>
>>
>>
 
 





