Re: Redundancy with Ceph

Hi Thomas,

Thanks a lot for the link and your help!
Now the issue is clear.

The output of "ceph osd dump -o -" indicated that there should be replication, since max_osd is 3 and the size of all pools is 2. But osd2 was "out down":

# ceph osd dump -o -
...
max_osd 3
osd0 out down (up_from 0 up_thru 0 down_at 0 last_clean 0-0)
osd1 in weight 1 up   (up_from 2 up_thru 2 down_at 0 last_clean 0-0) 10.243.150.209:6801/20989 10.243.150.209:6802/20989
osd2 out down (up_from 0 up_thru 0 down_at 0 last_clean 0-0)
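
For reference, the replication level itself can be checked and adjusted per pool, as described on the wiki page Thomas linked. A rough sketch, assuming the default "data" pool; the exact output format and command syntax may differ between Ceph versions:

# ceph osd dump -o - | grep pool
# ceph osd pool set data size 2

With both OSDs in and up and a pool size of 2, every object should end up with a replica on each node.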

The reason: I forgot to start the Ceph daemon on osd2.
Now it looks better:

# ceph osd dump -o -
...
max_osd 3
osd0 out down (up_from 0 up_thru 0 down_at 0 last_clean 0-0)
osd1 in weight 1 up   (up_from 2 up_thru 6 down_at 0 last_clean 0-0) 10.243.150.209:6801/20989 10.243.150.209:6802/20989
osd2 in weight 1 up   (up_from 5 up_thru 5 down_at 0 last_clean 0-0) 10.212.118.67:6800/20983 10.212.118.67:6801/20983
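
In case someone else hits the same problem, here is a rough sketch of bringing the missing daemon up and re-checking; the stock /etc/init.d/ceph init script and the osd2 section name from ceph.conf are assumptions:

# /etc/init.d/ceph start osd2      (run on the osd2 host)
# ceph osd dump -o -

After that, osd2 should show up as "in weight 1 up" as above.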

Best Regards and thanks again,
   Christian


On Monday, 5 July 2010, Thomas Mueller wrote:
> On Mon, 05 Jul 2010 00:33:50 +0200, Christian Baun wrote:
> 
> > Hi,
> > 
> > I created 2 servers and one client
> > 
> > Server 1 => mon, mds, osd
> > Server 2 => osd
> > 
> > I tried some tests with iozone and I don't think Server 2 is used.
> 
> 
> did you read:
> 
> http://ceph.newdream.net/wiki/Adjusting_replication_level
> 
> 
> 
> - Thomas


