Re: Mount error 12 = Cannot allocate memory

On Wed, Dec 4, 2013 at 7:15 AM, Mr.Salvatore Rapisarda
<salvorapi@xxxxxxxx> wrote:
> Hi,
>
> I have a Ceph cluster with 3 nodes on Ubuntu 12.04.3 LTS, running Ceph
> version 0.72.1.
>
> My configuration is as follows:
>
> * 3 MON
>   - XRVCLNOSTK001=10.170.0.110
>   - XRVCLNOSTK002=10.170.0.111
>   - XRVOSTKMNG001=10.170.0.112
> * 3 OSD
>   - XRVCLNOSTK001=10.170.0.110
>   - XRVCLNOSTK002=10.170.0.111
>   - XRVOSTKMNG001=10.170.0.112
> * 1 MDS
>   - XRVCLNOSTK001=10.170.0.110
>
> Everything is OK:
>
> -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#
> root@XRVOSTKMNG001:/mnt# ceph -s
>     cluster b53078ff-2cd3-4c8f-ad23-16476658e4a0
>      health HEALTH_OK
>      monmap e2: 3 mons at
> {XRVCLNOSTK001=10.170.0.110:6789/0,XRVCLNOSTK002=10.170.0.111:6789/0,XRVOSTKMNG001=10.170.0.112:6789/0},
> election epoch 54, quorum 0,1,2 XRVCLNOSTK001,XRVCLNOSTK002,XRVOSTKMNG001
>      mdsmap e10: 1/1/1 up {0=XRVCLNOSTK001=up:active}
>      osdmap e62: 3 osds: 3 up, 3 in
>       pgmap v8375: 448 pgs, 5 pools, 716 MB data, 353 objects
>             6033 MB used, 166 GB / 172 GB avail
>                  448 active+clean
> -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#
>
> If I try to mount CephFS on the first node (10.170.0.112), the one used
> for the cluster deploy process, there is no problem.
> But if I try to mount CephFS on the second node (10.170.0.110) or the
> third node (10.170.0.111), I get "mount error 12 = Cannot allocate memory":
>
> -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#
> root@XRVCLNOSTK002:/mnt# mount -t ceph XRVCLNOSTK001:6789:/ /mnt/nova -o name=admin,secret=my_secret_key
>
> mount error 12 = Cannot allocate memory
> -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#
>
>
> Any ideas? :)

I think everywhere the kernel client uses ENOMEM it means exactly that
— it failed to allocate memory for something. I'd check your memory
situation on that host, and see if you can reproduce it elsewhere.
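
For example, something like the following on the failing node, right after
the mount attempt (mount point and monitor address taken from your post;
this is just a sketch of the checks, not a fix):

    # Check the overall memory situation on the host
    free -m
    grep -i commit /proc/meminfo

    # The kernel client usually logs the real reason for the failure;
    # look at the tail of the kernel log right after the failed mount
    dmesg | tail -n 30

    # Rule out the kernel client by trying the FUSE client instead
    # (requires the ceph-fuse package and the client keyring in /etc/ceph)
    ceph-fuse -m 10.170.0.110:6789 /mnt/nova

If ceph-fuse mounts fine while the kernel mount keeps failing, the problem
is specific to the kernel client on that host rather than to the cluster.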
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com