If you use the RADOS gateway, RBD, or CephFS, then you don't need to worry about striping. If you write your own application that uses librados, then you do have to worry about it. I understand that there is a radosstriper library that should help with that.

There is also a limit to the size of a single object that can be stored; I think I've seen the figure of 100GB thrown around.

----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1

On Wed, Sep 23, 2015 at 7:04 PM, Cory Hawkless wrote:
> Ok, so I have found this:
>
> "The objects Ceph stores in the Ceph Storage Cluster are not striped. Ceph
> Object Storage, Ceph Block Device, and the Ceph Filesystem stripe their data
> over multiple Ceph Storage Cluster objects. Ceph Clients that write directly
> to the Ceph Storage Cluster via librados must perform the striping (and
> parallel I/O) for themselves to obtain these benefits."
> (from http://docs.ceph.com/docs/master/architecture/)
>
> So it appears that breaking a single object up into chunks (or stripes)
> is the responsibility of the application writing to Ceph, not of the RADOS
> engine itself?
>
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Cory Hawkless
> Sent: Thursday, 24 September 2015 10:22 AM
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Basic object storage question
>
> Hi all,
>
> I have a basic question about how Ceph stores individual objects.
>
> Say I have a pool with a replica size of 3 and I upload a 1GB file to this
> pool. It appears that this 1GB file gets placed into 3 PGs on 3 OSDs,
> simple enough?
>
> Are individual objects ever split up? What if I want to store backup
> files or OpenStack Glance images totalling 100s of GBs?
> Potentially I could run into issues if I have an object whose size exceeds
> the available space on any of the OSDs. Say I have 1TB OSDs that are all
> 50% full and I try to upload a 501GB image; I presume this would fail
> because even though there is sufficient space in the pool, there is not a
> single OSD with >500GB of space available.
>
> Do I have this right? If so, is there any way around this? Ideally I'd like
> to use Ceph as the target for all of my servers' backups, but some of these
> total in the TBs, and none of my OSDs are that big (currently using 900GB
> SAS disks).
>
> Thanks in advance
> Cory
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
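For illustration, the client-side striping discussed above (a librados application splitting one large logical object into many fixed-size RADOS objects, as RGW/RBD/CephFS do internally) can be sketched roughly as follows. This is a minimal sketch, not the radosstriper API: a plain dict stands in for the RADOS pool, and the 4 MiB stripe size and the `name.00000000` chunk-naming scheme are assumptions for the example, not Ceph's actual conventions.

```python
# Sketch of client-side striping: split a large blob into fixed-size
# chunks, store each chunk under its own object name, reassemble on read.
# A real librados client would call Ioctx.write_full()/read() against a
# live cluster; here `pool` is just a dict acting as a fake object store.

STRIPE_SIZE = 4 * 1024 * 1024  # 4 MiB per chunk (assumed, configurable)

def striped_write(pool, name, data, stripe_size=STRIPE_SIZE):
    """Write `data` as a series of fixed-size chunk objects."""
    for i in range(0, len(data), stripe_size):
        chunk_idx = i // stripe_size
        # Zero-padded index so lexical sort matches chunk order.
        pool[f"{name}.{chunk_idx:08d}"] = data[i:i + stripe_size]

def striped_read(pool, name):
    """Reassemble the original blob from its chunk objects, in order."""
    chunk_keys = sorted(k for k in pool if k.startswith(name + "."))
    return b"".join(pool[k] for k in chunk_keys)

pool = {}
payload = b"x" * (10 * 1024 * 1024 + 123)   # 10 MiB plus a partial stripe
striped_write(pool, "backup.img", payload)
assert striped_read(pool, "backup.img") == payload
print(len(pool), "chunk objects")  # 3 chunk objects, each <= 4 MiB
```

Because each chunk is its own RADOS object, the chunks hash to different PGs and hence spread across OSDs, which is exactly how striping sidesteps the "one huge object must fit on one OSD" problem raised in the question.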