Re: Bobtail & Precise

There's not really a fix: either update all your clients so they support the tunables (I'm not sure how new a kernel you need), or else run without the tunables. They aren't normally needed in setups where your branching factors aren't very close to your replication counts, so you may be able to avoid them by reshaping your cluster a little bit.
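Concretely, the two options look something like this (I can't say offhand which kernel first gained support for the bobtail tunables, so treat the client upgrade as a check-first step):

# Option 1: check which kernel each client is running, then upgrade the
# clients to one that understands the bobtail tunables
uname -r

# Option 2: go back to the legacy tunables so the older kernel clients
# can keep mounting
ceph osd crush tunables legacy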
-Greg

Software Engineer #42 @ http://inktank.com | http://ceph.com


On Thu, Apr 18, 2013 at 1:04 PM, Bryan Stillwell <bstillwell@xxxxxxxxxxxxxxx> wrote:
What's the fix for people running precise (12.04)?  I believe I see the same issue with quantal (12.10) as well.


On Thu, Apr 18, 2013 at 1:56 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
Seeing this go by again, it's simple enough to provide a quick
answer/hint: setting the tunables of course gets you a better
distribution of data, but the reason they're optional to begin with is
that older clients don't support them. In this case that's the kernel
client being run, so it returns an error.
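If you want to double-check which tunables a cluster is actually using, one way is to decompile the CRUSH map; on recent releases any non-legacy tunables show up as "tunable ..." lines in the decompiled output (I'd expect bobtail's crushtool to do the same, but verify on your version):

# Grab the current CRUSH map and decompile it
ceph osd getcrushmap -o /tmp/crushmap
crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
grep tunable /tmp/crushmap.txt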
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Thu, Apr 18, 2013 at 12:51 PM, John Wilkins <john.wilkins@xxxxxxxxxxx> wrote:
> Bryan,
>
> It seems you got crickets with this question. Did you get any further? I'd
> like to add it to my upcoming CRUSH troubleshooting section.
>
>
> On Wed, Apr 3, 2013 at 9:27 AM, Bryan Stillwell <bstillwell@xxxxxxxxxxxxxxx>
> wrote:
>>
>> I have two test clusters running Bobtail (0.56.4) and Ubuntu Precise
>> (12.04.2).  The problem I'm having is that I'm not able to get either
>> of them into a state where I can both mount the filesystem and have
>> all the PGs in the active+clean state.
>>
>> It seems that on both clusters I can get them into a 100% active+clean
>> state by setting "ceph osd crush tunables bobtail", but when I try to
>> mount the filesystem I get:
>>
>> mount error 5 = Input/output error
>>
>>
>> However, if I set "ceph osd crush tunables legacy" I can mount both
>> filesystems, but then some of the PGs are stuck in the
>> "active+remapped" state:
>>
>> # ceph -s
>>    health HEALTH_WARN 29 pgs stuck unclean; recovery 5/1604152 degraded (0.000%)
>>    monmap e1: 1 mons at {a=172.16.0.50:6789/0}, election epoch 1, quorum 0 a
>>    osdmap e10272: 20 osds: 20 up, 20 in
>>     pgmap v1114740: 1920 pgs: 1890 active+clean, 29 active+remapped, 1 active+clean+scrubbing; 3086 GB data, 6201 GB used, 3098 GB / 9300 GB avail; 232B/s wr, 0op/s; 5/1604152 degraded (0.000%)
>>    mdsmap e420: 1/1/1 up {0=a=up:active}
>>    mdsmap e420: 1/1/1 up {0=a=up:active}
>>
>>
>> Is anyone else seeing this?
>>
>> Thanks,
>> Bryan
>
>
>
>
> --
> John Wilkins
> Senior Technical Writer
> Inktank
> john.wilkins@xxxxxxxxxxx
> (415) 425-9599
> http://inktank.com
>
>



--

Bryan Stillwell
SENIOR SYSTEM ADMINISTRATOR

E: bstillwell@xxxxxxxxxxxxxxx
O: 303.228.5109
M: 970.310.6085



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
