Re: Writes to only one OSD?

Without having looked at your screencast, some thoughts:

Two mons increase the failure probability instead of reducing it: if you lose one mon, the remaining mon stops serving the cluster. This is intentional; a quorum requires a majority, and with two mons a single failure leaves only 1 of 2, which is not a majority. You need at least 3 mons to form a quorum, so running Ceph with two monitor nodes is a bad idea.
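For illustration, a minimal three-monitor ceph.conf section might look like the sketch below; the mon names, hostnames, and addresses are placeholders, not taken from your setup:

    # three monitors: quorum survives any single mon failure
    [mon.a]
        host = mon-host-1
        mon addr = 10.0.0.1:6789

    [mon.b]
        host = mon-host-2
        mon addr = 10.0.0.2:6789

    [mon.c]
        host = mon-host-3
        mon addr = 10.0.0.3:6789

With three mons, any single failure still leaves a 2-of-3 majority, so the quorum survives.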
As for the distribution across disks: this is governed by your CRUSH map. Did you tweak it in any way? What is the disk usage (df) of the OSDs? And how do you monitor disk usage; could you be using the wrong tool?
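You can check both with the standard tools; the dump path below is just an example:

    # show the CRUSH hierarchy and per-OSD weights
    ceph osd tree

    # decompile the CRUSH map to inspect the placement rules
    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt

    # raw per-disk usage on each storage node
    df -h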

Wolfgang

From: ceph-users-bounces@xxxxxxxxxxxxxx [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of Shaun Reitan [shaun.reitan@xxxxxxxxxxx]
Sent: Friday, 01 March 2013 03:45
To: ceph-users@xxxxxxxxxxxxxx
Subject: Writes to only one OSD?

 
I’m doing some basic testing of Ceph to see if it will fit our needs.  My current test deployment consists of 2 servers, each running a single monitor process and 4 OSDs, with separate 1 Gbit public and private networks.  I then have another server that just acts as a client using rados/rbd.  I have collectl -sD -oT running on each storage server so that I can monitor the stats of each disk.  On the client server I’m running the following...
 
rbd bench-write benchdisk --io-size 256 --io-threads 256
 
and then I’m watching the collectl stats.  What I’m seeing is that only a single disk/OSD is being used on each storage host.  Every so often another disk shows some I/O, but nothing consistent, and it’s only ever one other drive.  I’m seeing this on both storage servers.  What would cause this?  Shouldn’t all the disks be doing some work?
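For reference, this is how the placement of individual backing objects can be checked ('rbd' assumes the default pool; the object name is a placeholder for whatever the listing prints):

    # list a few of the objects backing the image
    rados -p rbd ls | head

    # show the PG and the OSDs serving one object
    ceph osd map rbd <object-name>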
 
btw, my Ceph configs are basic, nothing special; all the drives in the storage servers are the same make/model.
 
Here’s a screencast showing everything running for two minutes:
 
 
--
Shaun R.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
