Fwd: optimizing ceph-fuse performance


 



I set up a Ceph cluster on 28 nodes:
24 nodes for OSDs. Each storage node has 16 drives, configured as
RAID 0 across sets of 4 drives, so each node runs 4 OSD daemons, one
per RAID volume, for a total of 96 OSD daemons in the cluster.
3 nodes for mons.
1 node for the MDS.
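For context, a minimal sketch of how this layout might be declared in
ceph.conf, old-style with one section per daemon (the hostnames and
address below are made up for illustration):

[mon.a]
        host = mon01
        mon addr = 10.0.0.1:6789

[mds.a]
        host = mds01

[osd.0]
        host = storage01
        # storage01 also carries osd.1 through osd.3, one per RAID 0 volume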
I mounted a share with ceph-fuse and I am generating load on the
cluster through the mount point.
Network bandwidth is as follows:
mon nodes - 20 Gb to switch
osd nodes - 1 Gb to switch
mds node - 20 Gb to switch
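For reference, the mount is done roughly like this (the monitor
address and mount point are placeholders; options vary by version):

ceph-fuse -m mon01:6789 /mnt/ceph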


Questions:
1. How can the network be optimized for better performance?
2. Is 1 MDS enough for this cluster?
3. How do I check replication, and what replication factor should be
used? (See the command sketch after this list.)
4. What should the debug logging configuration look like?
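On question 3, my understanding is that replication is set per pool
and can be checked and changed with the standard pool commands (the
pool name 'data' is just an example; substitute your own pool):

# show the current replica count for a pool
ceph osd pool get data size

# set 3-way replication; size 2 or 3 is typical, and 3 survives the
# loss of two replicas at a 3x storage cost
ceph osd pool set data size 3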

The overall aim is to get the best write, read, and delete performance
out of ceph-fuse. Please kindly guide me.
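For generating load, besides writing through the ceph-fuse mount, I
believe the RADOS layer can be benchmarked directly, which helps
separate backend performance from FUSE overhead (the pool name is an
example; duration and thread count are arbitrary):

# 60-second write benchmark against the 'data' pool with 16 threads;
# --no-cleanup keeps the objects so the read test has something to read
rados bench -p data 60 write -t 16 --no-cleanup

# sequential read benchmark of the objects just written
rados bench -p data 60 seq -t 16

# remove the benchmark objects afterwards
rados -p data cleanup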

Thanks







---------- Forwarded message ----------



Hi,

Please can somebody help me interpret what this configuration signifies.

[global]
        # Debug levels run from 0 (off) to 20 (most verbose).
        # Messenger (network layer) logging is disabled entirely.
        debug ms = 0

[mon]
        # Monitors log their core logic, the Paxos consensus traffic,
        # and authentication at maximum verbosity.
        debug mon = 20
        debug paxos = 20
        debug auth = 20

[osd]
        # OSDs log core OSD logic, the filestore backend, the journal,
        # and the monitor client at maximum verbosity.
        debug osd = 20
        debug filestore = 20
        debug journal = 20
        debug monc = 20

[mds]
        # The MDS logs its core logic, the balancer, the MDS journal,
        # and the migrator (subtree migration) at maximum verbosity.
        # These are very chatty and grow the MDS log extremely fast.
        debug mds = 20
        debug mds balancer = 20
        debug mds log = 20
        debug mds migrator = 20


I used the same configuration and my boot disk got full:

ceph-mds.a.log = 50299281408 bytes (about 47 GiB)
ceph-mon.a.log = 173858816 bytes (about 166 MiB)

Moreover, the results of my test using ceph-fuse to mount a share and
exporting the share with Samba aren't so good. How can I turn off some
of the logs to optimize Ceph performance for write, read, and delete
operations? I will publish the results as soon as I can get the
cluster better optimized.
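What I plan to try, based on my reading of the docs: turn the noisy
subsystems back down in ceph.conf, e.g.

[mds]
        debug mds = 1
        debug mds balancer = 1
        debug mds log = 1
        debug mds migrator = 1

or inject the change into running daemons without a restart (the
injectargs syntax may vary between Ceph versions):

ceph tell mds.a injectargs '--debug-mds 1 --debug-mds-log 1'
ceph tell osd.0 injectargs '--debug-osd 0 --debug-filestore 0'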


regards.

