Re: Ceph mount rbd


 



>>Of course, I always have to ask the use-case behind mapping the same image on multiple hosts. Perhaps CephFS would be a better fit if you are trying to serve out a filesystem?

Hi Jason,

Currently I'm sharing RBD images between multiple webserver VMs, with OCFS2 on top -- roughly the pattern sketched below.
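
Something like this on each VM (just a sketch; the o2cb cluster stack must already be configured, and the image and mount names here are only examples):

rbd map webshare                           # same image mapped on every VM
mount -t ocfs2 /dev/rbd0 /var/www/shared   # device name as printed by "rbd map"; OCFS2 arbitrates the concurrent writers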

They have old kernels, so I can't use CephFS for now.

Some servers also have between 20-30 million files, so I need to test CephFS to see if it can handle between 100-150 million files (currently handled by 5 RBD images).

Can CephFS handle that many files currently? (I'm waiting for Luminous to test it.)







----- Original Message -----
From: "Jason Dillaman" <jdillama@xxxxxxxxxx>
To: "Maged Mokhtar" <mmokhtar@xxxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Thursday, June 29, 2017 02:02:44
Subject: Re: Ceph mount rbd

... additionally, the forthcoming 4.12 kernel release will support non-cooperative exclusive locking. By default, since 4.9, when the exclusive-lock feature is enabled, only a single client can write to the block device at a time -- but they will cooperatively pass the lock back and forth upon write request. With the new "rbd map" option, you can map an image on exactly one host and prevent other hosts from mapping it. If that host should die, the exclusive lock will automatically become available to other hosts for mapping.
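
For example (a sketch only -- the option name is assumed to be "exclusive", and "veeamrepo" is the image from the quoted message below):

hostA$ rbd map -o exclusive veeamrepo   # first mapping acquires the exclusive lock and holds it non-cooperatively
hostB$ rbd map -o exclusive veeamrepo   # should fail while hostA is alive and holds the lock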
Of course, I always have to ask the use-case behind mapping the same image on multiple hosts. Perhaps CephFS would be a better fit if you are trying to serve out a filesystem? 

On Wed, Jun 28, 2017 at 6:25 PM, Maged Mokhtar <mmokhtar@xxxxxxxxxxx> wrote:





On 2017-06-28 22:55, lista@xxxxxxxxxxxxxxxxx wrote:

Hi People, 

I am testing a new environment with Ceph + RBD on Ubuntu 16.04, and I have one question.

I have my Ceph cluster, and I mount an image using the following commands in my Linux environment:

rbd create veeamrepo --size 20480    # 20 GiB image
rbd --image veeamrepo info           # verify the image
rbd feature disable veeamrepo exclusive-lock object-map fast-diff deep-flatten   # before mapping: the 16.04 kernel client can't handle these
modprobe rbd
rbd map veeamrepo                    # maps the image to /dev/rbd0
mkfs.ext4 /dev/rbd0                  # one-time: create a filesystem (ext4 here) before the first mount
mkdir /mnt/veeamrepo
mount /dev/rbd0 /mnt/veeamrepo

The commands work fine, but I have one problem: at the moment I can mount /mnt/veeamrepo on 2 machines at the same time, and this is bad for me because it could corrupt the filesystem.

I need only one machine to be allowed to mount and write at a time. 

For example, if machine1 mounts /mnt/veeamrepo and machine2 tries to mount it, an error should be displayed saying that machine2 cannot mount because the filesystem is already mounted on machine1.

Could someone help me with this, or give some tips to solve my problem?

Thanks a lot 







You can use Pacemaker to map the RBD image and mount the filesystem on one server, and in case of failure switch over to another server. For example:
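
A minimal sketch in crm shell syntax, assuming the ocf:ceph:rbd agent from the ceph-resource-agents package is installed; the parameter names and paths below are illustrative, so check them against the agent's metadata:

primitive p_rbd ocf:ceph:rbd \
    params pool=rbd name=veeamrepo user=admin \
    op monitor interval=10s
primitive p_fs ocf:heartbeat:Filesystem \
    params device=/dev/rbd/rbd/veeamrepo directory=/mnt/veeamrepo fstype=ext4 \
    op monitor interval=20s
group g_veeamrepo p_rbd p_fs   # keeps both on one node; Pacemaker moves them together on failure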



-- 
Jason 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



