Re: Calculate and increase pg_num

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



Dan,
this sound weird:
how can you run "cephfs /mnt/mycephfs set_layout 10" on a unmounted mountpoint?
My client says:
root@gw1:~# cephfs /mnt/ceph/ set_layout -p 3
Error setting layout: Inappropriate ioctl for device

And in IRC I've found that "path need to be a path to an already mounted cephfs"
(http://irclogs.ceph.widodh.nl/index.php?date=2012-11-26)

--
Marco Aroldi

2013/3/15 Dan van der Ster <dan@xxxxxxxxxxxxxx>:
> We eventually resolved the problem by doing "ceph mds add_data_pool
> 10; cephfs /mnt/mycephfs set_layout 10", where 10 is the id of our new
> "data" volume, and then rebooting the client machine (since the cephfs
> mount was hung).
> Cheers, Dan
>
> On Fri, Mar 15, 2013 at 4:13 PM, Marco Aroldi <marco.aroldi@xxxxxxxxx> wrote:
>> Same here,
>> Now mounting cephfs hangs for a minute then says "mount error 5 =
>> Input/output error"
>> Since the new pool has id=3, I've also executed "ceph mds
>> add_data_pool 3" and "ceph mds remove_data_pool 0"
>> The monitor log has this line:
>>
>> 2013-03-15 16:08:08.327049 7fe957441700  0 -- 192.168.21.11:6789/0 >>
>> 192.168.21.10:0/491826119 pipe(0x1b94c80 sd=23 :6789 s=0 pgs=0 cs=0
>> l=0).accept peer addr is really 192.168.21.10:0/491826119 (socket is
>> 192.168.21.10:54670/0)
>>
>> --
>> Marco Aroldi
>>
>> 2013/3/15 Dan van der Ster <dan@xxxxxxxxxxxxxx>:
>>> Hi,
>>>
>>>
>>> On Fri, Mar 15, 2013 at 9:52 AM, Sebastien Han <sebastien.han@xxxxxxxxxxxx>
>>> wrote:
>>>>
>>>> Hi,
>>>>
>>>> It's not recommended to use this command yet.
>>>>
>>>> As a workaround you can do:
>>>>
>>>> $ ceph osd pool create <my-new-pool> <pg_num>
>>>> $ rados cppool <my-old-pool> <my-new-pool>
>>>> $ ceph osd pool delete <my-old-pool>
>>>> $ ceph osd pool rename <my-new-pool> <my-old-pool>
>>>
>>>
>>>
>>> We've just done exactly this on the default pool data, and it leaves cephfs
>>> mounts in a hanging state. Is that expected?
>>>
>>> Cheers, Dan
>>>
>>>>
>>>>
>>>>
>>>> ––––
>>>> Sébastien Han
>>>> Cloud Engineer
>>>>
>>>> "Always give 100%. Unless you're giving blood."
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> PHONE : +33 (0)1 49 70 99 72 – MOBILE : +33 (0)6 52 84 44 70
>>>> EMAIL : sebastien.han@xxxxxxxxxxxx – SKYPE : han.sbastien
>>>> ADDRESS : 10, rue de la Victoire – 75009 Paris
>>>> WEB : www.enovance.com – TWITTER : @enovance
>>>>
>>>> On Mar 15, 2013, at 9:27 AM, Marco Aroldi <marco.aroldi@xxxxxxxxx> wrote:
>>>>
>>>> Hi,
>>>>
>>>> I have a new cluster with no data.
>>>> Now it has 44 osd and my goal is to increase in the next months to
>>>> reach a total of 88 osd.
>>>>
>>>> My pgmap is:
>>>> pgmap v841: 8640 pgs: 8640 active+clean; 8730 bytes data, 1733 MB
>>>> used, 81489 GB / 81491 GB avail
>>>> 2880 PG each for data, metadata and rbd pools
>>>> This value was set by mkcephfs
>>>>
>>>> Chatting on IRC channel, it was told me to calculate 100 pg per Osd
>>>> and round to near power-of-2
>>>> So in my case would be 8192 PG for each pool, right?
>>>>
>>>> My question:
>>>> Knowing to have to double the number of osd,
>>>> is it advisable to increase pg_num right now with the following commands?
>>>> ceph osd pool set data pg_num 8192 --allow-experimental-feature
>>>> ceph osd pool set metadata pg_num 8192 --allow-experimental-feature
>>>> ceph osd pool set rbd pg_num 8192 --allow-experimental-feature
>>>>
>>>> Thanks
>>>>
>>>> --
>>>> Marco Aroldi
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> ceph-users@xxxxxxxxxxxxxx
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> ceph-users@xxxxxxxxxxxxxx
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>
>>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



[Index of Archives]     [Information on CEPH]     [Linux Filesystem Development]     [Ceph Development]     [Ceph Large]     [Ceph Dev]     [Linux USB Development]     [Video for Linux]     [Linux Audio Users]     [Yosemite News]     [Linux Kernel]     [Linux SCSI]     [xfs]


  Powered by Linux