Re: Bobtail & Precise

John,

Thanks for your response.  I haven't spent a lot of time on this issue since then, so I'm still in the same situation.  I do remember seeing an error message about an unsupported feature at one point after setting the tunables to bobtail.
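In case it's useful for the troubleshooting section, this is roughly how to confirm which tunables actually made it into the CRUSH map (the file paths below are just examples):

ceph osd getcrushmap -o /tmp/crushmap
crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
head /tmp/crushmap.txt    # any non-default tunables show up as "tunable ..." lines at the top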

Bryan


On Thu, Apr 18, 2013 at 1:51 PM, John Wilkins <john.wilkins@xxxxxxxxxxx> wrote:
Bryan, 

It seems you got crickets with this question. Did you get any further? I'd like to add it to my upcoming CRUSH troubleshooting section.


On Wed, Apr 3, 2013 at 9:27 AM, Bryan Stillwell <bstillwell@xxxxxxxxxxxxxxx> wrote:
I have two test clusters running Bobtail (0.56.4) and Ubuntu Precise
(12.04.2).  The problem I'm having is that I'm not able to get either
of them into a state where I can both mount the filesystem and have
all the PGs in the active+clean state.

It seems that on both clusters I can get them into a 100% active+clean
state by setting "ceph osd crush tunables bobtail", but when I try to
mount the filesystem I get:

mount error 5 = Input/output error

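The mount attempt looks roughly like this (assuming the kernel client; the mount point and secret file are placeholders, not my exact setup):

mount -t ceph 172.16.0.50:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret
dmesg | tail    # the kernel client usually logs more detail here than the bare EIO from mount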

However, if I set "ceph osd crush tunables legacy" I can mount both
filesystems, but then some of the PGs are stuck in the
"active+remapped" state:

# ceph -s
   health HEALTH_WARN 29 pgs stuck unclean; recovery 5/1604152 degraded (0.000%)
   monmap e1: 1 mons at {a=172.16.0.50:6789/0}, election epoch 1, quorum 0 a
   osdmap e10272: 20 osds: 20 up, 20 in
    pgmap v1114740: 1920 pgs: 1890 active+clean, 29 active+remapped, 1 active+clean+scrubbing; 3086 GB data, 6201 GB used, 3098 GB / 9300 GB avail; 232B/s wr, 0op/s; 5/1604152 degraded (0.000%)
   mdsmap e420: 1/1/1 up {0=a=up:active}

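To narrow down which PGs are stuck and why, the usual checks would be something like this (the PG id below is only an example):

ceph health detail          # lists the individual stuck PGs
ceph pg dump_stuck unclean  # shows the up/acting OSD sets for each stuck PG
ceph pg 2.1f query          # replace with a real PG id; the recovery_state section explains why it is stuck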

Is anyone else seeing this?

Thanks,
Bryan



--
John Wilkins
Senior Technical Writer
Inktank
john.wilkins@xxxxxxxxxxx
(415) 425-9599
http://inktank.com



--
Photobucket

Bryan Stillwell
SENIOR SYSTEM ADMINISTRATOR

E: bstillwell@xxxxxxxxxxxxxxx
O: 303.228.5109
M: 970.310.6085


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
