Re: Basic object storage question

Hi all, thanks for the replies.
So my confusion was because I was using "rados put test.file someobject testpool".
This command does not seem to split my 'files' into chunks when storing them as 'objects', hence my confusion over the terminology.

Upon bolting OpenStack Glance onto Ceph I can see hundreds of smaller objects created per ISO, which is much closer to the behaviour I expected!

So does the rados command-line tool simply not split files when writing them into Ceph?
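For what it's worth, the "hundreds of smaller objects per ISO" lines up with 4MB striping. A quick sketch of the arithmetic (plain Python; the 700MB ISO size is just a made-up example):

```python
# How many RADOS objects a striping client (e.g. RBD behind Glance) would
# create for a given image, assuming the 4MB default stripe size.
STRIPE = 4 * 1024 * 1024  # 4MB default object size

def object_count(size_bytes, stripe=STRIPE):
    """Number of objects a striping client would create for size_bytes."""
    return -(-size_bytes // stripe)  # ceiling division

iso = 700 * 1024 * 1024  # hypothetical 700MB ISO
print(object_count(iso))  # 175 objects -- i.e. "hundreds of smaller objects"
```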

-----Original Message-----
From: John Spray [mailto:jspray@xxxxxxxxxx] 
Sent: Thursday, 24 September 2015 6:04 PM
To: Cory Hawkless <Cory@xxxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Basic object storage question

On Thu, Sep 24, 2015 at 1:51 AM, Cory Hawkless <Cory@xxxxxxxxxxxxxx> wrote:
> Hi all,
>
>
>
> I have basic question around how Ceph stores individual objects.
>
> Say I have a pool with a replica size of 3 and I upload a 1GB file to
> this pool. It appears as if this 1GB file gets placed into 3 PGs on 3
> OSDs, simple enough?

Well, you've gone straight from asking about *objects* to talking about uploading a *file*, so that doesn't make sense :-)

When you write a file in CephFS, it gets striped into many objects in RADOS.  The same applies to objects in RGW and block devices in RBD.  The stripes are 4MB by default, which gives good data distribution for most workloads.  So the short answer is that unless you're writing directly to RADOS (i.e. with librados), you don't need to worry.
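A minimal sketch of the client-side striping John describes (plain Python, no Ceph involved; the `prefix.%08x` object naming is invented for illustration, not Ceph's actual scheme):

```python
STRIPE_SIZE = 4 * 1024 * 1024  # RADOS clients stripe at 4MB by default

def stripe(data, prefix, stripe_size=STRIPE_SIZE):
    """Split a byte buffer into (object_name, chunk) pairs, the way a
    striping client (CephFS/RBD/RGW) turns one file into many objects.
    The 'prefix.%08x' naming here is illustrative only."""
    return [
        (f"{prefix}.{i:08x}", data[off:off + stripe_size])
        for i, off in enumerate(range(0, len(data), stripe_size))
    ]

# 64MB stand-in buffer; by the same arithmetic a 1GB file yields 256 objects,
# so no single object ever exceeds 4MB regardless of file size.
chunks = stripe(b"\0" * (64 * 1024 * 1024), "test.file")
print(len(chunks))  # 16
```

Because each object is capped at the stripe size, the "object bigger than any one OSD" problem from the original question never arises for CephFS, RBD, or RGW data.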

John

> Are individual objects never split up? What if I want to store
> backup files or OpenStack Glance images totalling hundreds of GBs?
>
>
>
> Potentially I could run into issues if I have an object whose size
> exceeds the available space on any of the OSDs. Say I have 1TB OSDs
> that are all 50% full and I try to upload a 501GB image; I presume
> this would fail because, even though there is sufficient space in the
> pool, no single OSD has >500GB of space available.
>
>
>
> Do I have this right? If so, is there any way around this? Ideally I'd
> like to use Ceph as a target for all of my server backups, but some of
> these total in the TBs and none of my OSDs are that big (currently
> using 900GB SAS disks).
>
>
>
> Thanks in advance
>
> Cory
>
>
>
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
