Re: radosgw breaking because of too many open files

Found the issue.

Upgrading to Octopus replaced /etc/init.d/radosgw, which changed the
distribution detection and the ulimit settings.

New radosgw init script:

-snip-
            echo "Starting $name..."
            if [ $DEBIAN -eq 1 ]; then
                start-stop-daemon --start -u $user -x $RADOSGW -p /var/run/ceph/client-$name.pid -- -n $name
            else
                ulimit -n 32768
                core_limit=`ceph-conf -n $name 'core file limit'`
                if [ -z $core_limit ]; then
                    DAEMON_COREFILE_LIMIT=$core_limit
                fi
                daemon --user="$user" "$RADOSGW -n $name"
            fi
-snip-

Old radosgw init script (or at least one that we may have customized
over the years):
-snip-
            echo "Starting $name..."
            if [ $DEBIAN -eq 1 ]; then
                ulimit -n 32768
                start-stop-daemon --start -u $user -x $RADOSGW -p /var/run/ceph/client-$name.pid -- -n $name
            else
                ulimit -n 32768
                core_limit=`ceph-conf -n $name 'core file limit'`
                if [ -z $core_limit ]; then
                    DAEMON_COREFILE_LIMIT=$core_limit
                fi
                daemon --user="$user" "$RADOSGW -n $name"
            fi
-snip-
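The only functional difference between the two branches above is the missing 'ulimit -n 32768' in the Debian path. The placement matters because ulimit only affects the shell that sets it and the children it forks, which is why the old script raised the limit immediately before launching the daemon. A minimal sketch of that behaviour (the value 4096 is just an illustration, not the init script's value):

```shell
# ulimit raises the soft open-files limit for this shell only;
# a child process forked afterwards inherits the raised limit.
ulimit -n 4096
echo "soft limit now: $(ulimit -n)"
sh -c 'echo "child inherits: $(ulimit -n)"'
```

Without that line in the Debian branch, radosgw is started with the default soft limit (typically 1024 on Ubuntu), which is easily exhausted under load.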

Editing this file to put back the first 'ulimit -n 32768', running
'systemctl daemon-reload', and bouncing the radosgw process got us
humming along nicely again.
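To confirm the raised limit actually took effect, the kernel's view of the running daemon can be checked directly (a sketch; 'pidof radosgw' assumes a single radosgw process on the host):

```shell
# /proc/<pid>/limits shows the limits the kernel enforces on a
# running process, regardless of what the init script intended.
pid=$(pidof radosgw)                      # assumes one radosgw process
grep 'Max open files' "/proc/$pid/limits"

# Quick sanity check against the current shell's own limits:
grep 'Max open files' /proc/self/limits
```

If the first grep still shows 1024 after the edit, the daemon was not restarted under the fixed script.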


On Tue, Oct 5, 2021 at 4:55 PM shubjero <shubjero@xxxxxxxxx> wrote:
>
> Just upgraded from Ceph Nautilus to Ceph Octopus on Ubuntu 18.04 using
> standard ubuntu packages from the Ceph repo.
>
> Upgrade has gone OK but we are having issues with our radosgw service,
> eventually failing after some load, here's what we see in the logs:
>
> 2021-10-05T15:55:16.328-0400 7fa47ffff700 -1 NetHandler create_socket
> couldn't create socket (24) Too many open files
> 2021-10-05T15:55:17.896-0400 7fa484b18700 -1 NetHandler create_socket
> couldn't create socket (24) Too many open files
> 2021-10-05T15:55:17.964-0400 7fa484b18700 -1 NetHandler create_socket
> couldn't create socket (24) Too many open files
> 2021-10-05T15:55:18.148-0400 7fa484b18700 -1 NetHandler create_socket
> couldn't create socket (24) Too many open files
>
> In Ceph Nautilus we used to set the following in ceph.conf, which I
> think helped us avoid this situation:
>
> [global]
>   max open files = 131072
>
> This config option no longer seems to be recognized by Ceph.
>
>
> Any help would be appreciated.
>
> Jared Baker
> Ontario Institute for Cancer Research
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


