Re: about FC support in ceph and file objects location list

Hi,

> 2) Does Ceph support FC, or how can I get FC supported in Ceph? For example,
> we use an iSCSI target tool to export an RBD block device so that Windows
> can use it as a disk.

The LIO Linux SCSI Target can export a block device via FC, among other
transports. I guess you can use that to access Ceph over FC.
http://linux-iscsi.org/wiki/Target
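
Something like the following might work; this is a rough, untested sketch. It
assumes an RBD image named rbd/myimage and a QLogic HBA whose qla2xxx driver
supports target mode, and the WWNs are placeholders you would replace with
your real port and initiator WWNs:

  # map the RBD image to a local block device (e.g. /dev/rbd0)
  rbd map rbd/myimage

  # register that device as a LIO block backstore
  targetcli /backstores/block create name=ceph-rbd0 dev=/dev/rbd0

  # create an FC target on the HBA port, export LUN 0, and allow one initiator
  targetcli /qla2xxx create naa.2100001b32aaaaaa
  targetcli /qla2xxx/naa.2100001b32aaaaaa/luns create /backstores/block/ceph-rbd0
  targetcli /qla2xxx/naa.2100001b32aaaaaa/acls create naa.2100001b32bbbbbb

The initiator should then see an ordinary FC LUN backed by RADOS, much like
the iSCSI setup you describe.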

Ugis

2013/7/5 Gregory Farnum <greg@xxxxxxxxxxx>:
> On Thu, Jul 4, 2013 at 7:09 PM, huangjun <hjwsm1989@xxxxxxxxx> wrote:
>> Hi all,
>> I have some questions about Ceph.
>> 1) Can I get the list of OSDs that hold the objects making up a file from
>> the command line?
>
> If using CephFS you can use the cephfs tool to map offsets to
> locations. Is that what you mean?
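
For reference, a rough sketch of what that looks like, assuming a CephFS
mount at /mnt/ceph and the default "data" pool (the cephfs tool's option
names are from memory and may differ between versions):

  # list the objects, placement groups and OSDs backing a file
  cephfs /mnt/ceph/myfile map

  # or show the object and OSD holding a particular byte offset
  cephfs /mnt/ceph/myfile show_location --offset 0

  # given an object name (hex inode + block number), the cluster can also
  # report its OSDs directly
  ceph osd map data 10000000000.00000000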
>
>> 2) Does Ceph support FC, or how can I get FC supported in Ceph? For example,
>> we use an iSCSI target tool to export an RBD block device so that Windows
>> can use it as a disk.
>
> Hmm, I don't think so yet. Does that concept even make sense for Fibre
> Channel connections?
>
>> 3) A question I have thought about many times: the CRUSH hierarchy is
>> rack, host, device, and the replication level is 2. If two disks fail in
>> different racks within 5 minutes, how can I reduce the risk of data loss as
>> much as possible? I have thought about disaster recovery by building a
>> remote datacenter, but what if another two disks also fail in that datacenter?
>
> The possibility of data loss due to simultaneous disk failures is
> pretty much a constant (though fairly unlikely!). If two replicas don't
> provide the redundancy you need, you can use three. Separating the
> failure domains as much as possible of course helps reduce the odds
> further.
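
For what it's worth, a hedged sketch of those two knobs, assuming a pool
named rbd and rack buckets already defined in the CRUSH map (command names
may vary by release):

  # keep three copies, and require at least two to be up to accept writes
  ceph osd pool set rbd size 3
  ceph osd pool set rbd min_size 2

  # place each replica in a different rack, then point the pool at the rule
  ceph osd crush rule create-simple replicate-across-racks default rack
  ceph osd pool set rbd crush_ruleset <rule-id>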
>
>> 4) "df -h" on the client shows the data stored in the cluster (including
>> the replicated data), not the data actually used by the user.
>
> Yep. Keeping in mind that, because pools can use different replication
> settings and so on, we can't map the total raw space to a user-visible
> figure, what would you rather see here?
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
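
For comparison with the client's df -h, the cluster-side counters can also be
checked; a quick sketch, assuming an admin keyring on the node (available on
recent releases):

  ceph df      # cluster-wide raw capacity/usage plus per-pool usage
  rados df     # per-pool object counts and space statistics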
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



