Re: CephFS

Hi James,

if you have 2 OSDs and you replicate all data 2x,
then logically every OSD will hold a copy of the data.

So it's like a RAID 1.
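
If you want to verify that on your cluster, here is a rough sketch
(assuming the data pool from the quick start is called cephfs_data;
adjust the pool name to your setup):

  # show the replication size of the CephFS data pool (should be 2)
  ceph osd pool get cephfs_data size

  # set it to 2 explicitly if it is not
  ceph osd pool set cephfs_data size 2

  # after copying a file into /mnt/mycephfs, ask where one of its
  # objects lives; the object name here is only an example of the
  # <inode-hex>.<offset> naming CephFS uses for file data objects
  ceph osd map cephfs_data 10000000000.00000000

With size = 2 and only two OSDs, the up/acting set printed by
"ceph osd map" should list both OSDs, i.e. every object ends up on
OSD0 and OSD1.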

-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402 at the Hanau District Court
Managing Director: Oliver Dzombic

Tax No.: 35 236 3622 1
VAT ID: DE274086107


Am 22.01.2016 um 18:19 schrieb James Gallagher:
> Hi there,
> 
> Got a quick question regarding CephFS. After following the setup from
> the quick start guide I have an Admin-Node, a Monitor/Metadata Server,
> OSD0 and OSD1. If I mount CephFS on the Admin-Node at /mnt/mycephfs and
> then transfer files to /mnt/mycephfs, will the files be distributed
> amongst OSD0 and OSD1, so that with the replication level set to 2 the
> files end up on both nodes?
> 
> If not, what setup should I be looking at in order for the files to be
> distributed amongst these devices?
> 
> Thanks,
> 
> J. Gallagher
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



