NFS over CEPH - best practice

> Should this be done on the iscsi target server? I have a default option to enable rbd caching as it speeds things up on the vms.
Yes, only on the iSCSI target servers.

2014-05-08 1:29 GMT+12:00 Andrei Mikhailovsky <andrei at arhont.com>:
>> It's important to disable the rbd cache on the tgtd host. Set in
>> /etc/ceph/ceph.conf:
>
>
> Should this be done on the iscsi target server? I have a default option to
> enable rbd caching as it speeds things up on the vms.
>
> Thanks
>
> Andrei
>
>
>
> ________________________________
> From: "Vlad Gorbunov" <vadikgo at gmail.com>
> To: "Sergey Malinin" <hell at newmail.com>
> Cc: "Andrei Mikhailovsky" <andrei at arhont.com>, ceph-users at lists.ceph.com
> Sent: Wednesday, 7 May, 2014 2:23:52 PM
>
> Subject: Re: NFS over CEPH - best practice
>
> It's easy to install tgtd with Ceph support. On Ubuntu 12.04, for example:
>
> Connect ceph-extras repo:
> echo deb http://ceph.com/packages/ceph-extras/debian $(lsb_release -sc) main
> | sudo tee /etc/apt/sources.list.d/ceph-extras.list
>
> Install tgtd with rbd support:
> apt-get update
> apt-get install tgt
>
> It's important to disable the rbd cache on the tgtd host. Set in
> /etc/ceph/ceph.conf:
> [client]
> rbd_cache = false
>
> Define a permanent iSCSI export of the rbd image in /etc/tgt/targets.conf:
>
> <target iqn.2014-04.rbdstore.example.com:volume512>
>     driver iscsi
>     bs-type rbd
>     backing-store iscsi/volume512
>     initiator-address 10.166.18.87
> </target>
>
> service tgt reload
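For more than a handful of volumes, stanzas like the one above can be generated per image rather than hand-edited. A minimal sketch, reusing the pool name (`iscsi`), IQN base, and initiator address from the example (the volume names and output directory here are illustrative; tgt installations typically also read `/etc/tgt/conf.d/*.conf`, but verify the include line in your targets.conf):

```shell
#!/bin/sh
# Sketch: write one tgt target stanza per RBD volume.
# Pool "iscsi", the IQN base, and the initiator address follow the
# example above; the volume list and output directory are placeholders.
set -e
outdir=${1:-/tmp/tgt-conf.d}   # use /etc/tgt/conf.d on a real tgtd host
mkdir -p "$outdir"
for vol in volume512 volume513; do
    cat > "$outdir/rbd-$vol.conf" <<EOF
<target iqn.2014-04.rbdstore.example.com:$vol>
    driver iscsi
    bs-type rbd
    backing-store iscsi/$vol
    initiator-address 10.166.18.87
</target>
EOF
done
ls "$outdir"
```

After generating the files, `service tgt reload` picks up the new exports as in the step above.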
>
> Or use commands:
> tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1
> --backing-store iscsi/volume512 --bstype rbd
> tgtadm -C 0 --lld iscsi --op bind --mode target --tid 1 -I 10.166.18.87
>
> tgt-admin -s
> shows the current iSCSI settings and sessions.
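Note that the quoted tgtadm commands attach a LUN to target id 1 but never create the target itself, which must happen first. A sketch of the complete sequence, shown as a dry run via a `run` wrapper that only echoes the commands (drop the echo to execute for real, as root on the tgtd host; tid, IQN, and addresses follow the example above):

```shell
#!/bin/sh
# Dry-run sketch of the full tgtadm sequence for one RBD-backed target.
# The target (tid 1) must exist before a LUN can be attached to it.
run() { echo "+ $*"; }   # replace the echo with real execution on a live host

iqn="iqn.2014-04.rbdstore.example.com:volume512"
run tgtadm --lld iscsi --op new --mode target --tid 1 -T "$iqn"
run tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
    --backing-store iscsi/volume512 --bstype rbd
run tgtadm --lld iscsi --op bind --mode target --tid 1 -I 10.166.18.87
```

Unlike the targets.conf route, these settings do not survive a tgtd restart, so they are best suited to testing.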
>
>
>
> You can install tgtd on multiple OSD/monitor hosts and connect the iSCSI
> initiator to these servers with multipath enabled. Separate iSCSI proxy
> servers are not needed with tgtd.
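When two or more tgtd hosts export the same image, the initiator sees multiple paths to one device. A minimal /etc/multipath.conf sketch for that case; this assumes tgt's default SCSI inquiry strings (vendor "IET", product "VIRTUAL-DISK"), which you should verify with `multipath -ll` on your own setup. Since the gateways share no cache (hence rbd_cache = false above), failover grouping is the conservative choice:

```conf
devices {
    device {
        vendor  "IET"
        product "VIRTUAL-DISK"
        path_grouping_policy failover
        no_path_retry 12
    }
}
```

Discover and log in to each portal from the initiator (e.g. `iscsiadm -m discovery -t sendtargets -p <portal-ip>` followed by `iscsiadm -m node --login`), then confirm that all paths appear under a single multipath device with `multipath -ll`.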
>
> On Thu, May 8, 2014 at 12:20 AM, Sergey Malinin <hell at newmail.com> wrote:
>>
>>
>> http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devices
>>
>> On Wednesday, May 7, 2014 at 15:06, Andrei Mikhailovsky wrote:
>>
>>
>> Vlad, is there a howto somewhere describing the steps to set up
>> iSCSI multipathing over Ceph? It looks like a good alternative to NFS
>>
>> Thanks
>>
>> ________________________________
>> From: "Vlad Gorbunov" <vadikgo at gmail.com>
>> To: "Andrei Mikhailovsky" <andrei at arhont.com>
>> Cc: ceph-users at lists.ceph.com
>> Sent: Wednesday, 7 May, 2014 12:02:09 PM
>> Subject: Re: NFS over CEPH - best practice
>>
>> For XenServer or VMware it is better to use an iSCSI client against tgtd
>> with Ceph support. You can install tgtd on an OSD or monitor server and use
>> multipath for failover.
>>
>> On Wed, May 7, 2014 at 9:47 PM, Andrei Mikhailovsky <andrei at arhont.com>
>> wrote:
>>
>> Hello guys,
>>
>> I would like to offer NFS service to the XenServer and VMWare hypervisors
>> for storing vm images. I am currently running ceph rbd with kvm, which is
>> working reasonably well.
>>
>> What would be the best way of running NFS services over Ceph, so that the
>> XenServer and VMware vm disk images are stored in Ceph storage over NFS?
>>
>> Many thanks
>>
>> Andrei
>>
>>
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users at lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>

