Re: Performance issues with writing files to Ceph via S3 API

The slashes don't mean much, if anything, to Ceph.  Buckets are not hierarchical filesystems; object keys live in a single flat namespace.
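For illustration, here is a minimal boto3 sketch (endpoint, credentials and bucket name are placeholders) showing that a key containing slashes is just a longer key in the same flat namespace; no directory-like structure is created or consulted:

    import uuid
    import boto3

    # Placeholder endpoint and credentials -- substitute your provider's values.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://rgw.example.com",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    key = str(uuid.uuid4())

    # Both calls store a single object in one flat bucket namespace;
    # the slashes in the second key are just characters in the name.
    s3.put_object(Bucket="my-bucket", Key=key, Body=b"payload")
    s3.put_object(Bucket="my-bucket", Key=f"this/that/{key}", Body=b"payload")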

You speak of millions of files.  How many millions?

How big are they?  Very small objects stress any object system.  Very large objects may be multipart uploads that stage to slow media or otherwise add overhead.

Are you writing them to a single bucket?

How is the bucket index pool configured?  On what media?
Same question for the bucket data pool.

Which Ceph release?  What is the bucket index sharding config?
Are you mixing in bucket list operations?

It could be that you have an older release or a cluster set up on an older release that doesn’t effectively auto-reshard the bucket index.  If the index pool is set up poorly - slow media, too few OSDs, too few PGs - that may contribute. 

In some circumstances pre-sharding might help. 

Do you have the ability to utilize more than one bucket?  If you can limit the number of objects in a bucket, that might help.
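A rough sketch of that idea, assuming hypothetical bucket names and a made-up bucket count -- hash the object name so each UUID lands deterministically in one of N buckets, which caps how large any one bucket index grows:

    import hashlib
    import uuid
    import boto3

    s3 = boto3.client("s3", endpoint_url="https://rgw.example.com")  # placeholder endpoint

    NUM_BUCKETS = 16  # assumed count; size it to keep per-bucket object counts manageable
    BUCKETS = [f"data-{i:02d}" for i in range(NUM_BUCKETS)]  # hypothetical bucket names

    def bucket_for(key: str) -> str:
        # Deterministic mapping: the same key always maps to the same bucket,
        # so reads need no lookup table.
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return BUCKETS[h % NUM_BUCKETS]

    key = str(uuid.uuid4())
    s3.put_object(Bucket=bucket_for(key), Key=key, Body=b"payload")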

If your application keeps track of object names, you might try indexless buckets.
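If you go that route, your provider would first have to define an indexless placement target on the RGW zone.  After that -- and the placement id below is hypothetical, so check with them -- a bucket can, as far as I recall, be created in that placement from the S3 side by passing it in LocationConstraint:

    import boto3

    s3 = boto3.client("s3", endpoint_url="https://rgw.example.com")  # placeholder endpoint

    # ":indexless-placement" assumes the admin created a placement target with that
    # id and index_type set to indexless; RGW accepts "[zonegroup]:[placement-id]"
    # in LocationConstraint to choose a placement target at bucket creation.
    s3.create_bucket(
        Bucket="no-index-bucket",
        CreateBucketConfiguration={"LocationConstraint": ":indexless-placement"},
    )

Keep in mind that such a bucket cannot be listed, which is why it only works when the application tracks every object name itself.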

> On Feb 3, 2024, at 12:57 PM, Renann Prado <prado.renann@xxxxxxxxx> wrote:
> 
> Hello,
> 
> I have an issue at my company where we have an underperforming Ceph
> instance.
> The issue that we have is that sometimes writing files to Ceph via S3 API
> (our only option) takes up to 40s, which is too long for us.
> We are a bit limited on what we can do to investigate why it's performing
> so badly, because we have a service provider in between, so getting to the
> bottom of this really is not that easy.
> 
> That being said, the way we use the S3 API (again, Ceph under the hood) is
> by writing all files (multiple millions) to the root, so we don't use *any*
> folder-like structure, e.g. we write */<uuid>* instead of */this/that/<uuid>*.
> 
> The question is:
> 
> Does anybody know whether Ceph has performance gains when you create a
> folder structure vs when you don't?
> Looking at Ceph's documentation I could not find such information.
> 
> Best regards,
> 
> *Renann Prado*
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



