Re: CEPH S3 Listing Issue

Certainly if you turn the RGW logging up to 20, you can see just about everything that it does.  It should be possible to see the reason, but of course there will be a lot of output in the logs.
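
For example, on a recent release the level can be raised at runtime with "ceph config set client.rgw debug_rgw 20", or by setting "debug rgw = 20" in the RGW section of ceph.conf and restarting the gateway ("debug ms = 1" additionally shows messenger traffic). Once the level is raised, grepping the RGW log for the transaction ID from the S3 error quoted below (tx00000000000000000e7df-...) should land you near the failing request.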


Daniel

On 3/9/20 8:24 AM, rajiv chodisetti wrote:
Any other pointers on how to debug this issue? Will I be able to find this kind of issue through logs, and if so, where can I find them? I have checked the storage gateway logs and couldn't find anything useful there.

On Sat, Mar 7, 2020 at 9:35 AM rajiv chodisetti <rajivchodisetti54@xxxxxxxxx <mailto:rajivchodisetti54@xxxxxxxxx>> wrote:

    Hi Romit,

    First of all, thanks for your reply.

    The code I have attached above fetches blobs in batches of 1000,
    because 1000 keys is the most the S3 client can fetch at a time.

    * The bucket holds around 3000 objects, about 2.8 GB in total.
    * To rule out an error in my code, I tried several approaches in
      both Java and Python (using boto), and also increased the client
      timeout and retry limit, but I hit the same exception everywhere.
      I tried batch sizes of 10, 50, 100, 200, 500 and 1000, but every
      time the listing fails after N iterations. The code snippets for
      the approaches are pasted below, along with the client
      configuration I used for the timeout/retry test (after Approach 4).
    * I increased the S3 gateway pod count to rule out the possibility
      of a single gateway pod being overwhelmed.
    * I tried the s3cmd recursive approach to get the file count of the
      bucket, and the listing fails with errors there as well.

    Please let me know if there are any other ways I can check what's
    going on at the backend.

    Rajiv.Chodisetti@aakashonprem:~$ s3cmd ls -r s3://my-new-bucket/ --no-ssl --host=${AWS_HOST} --host-bucket=  s3://my-new-bucket > listing.txt
    WARNING: Retrying failed request: /?marker=test-training-akash/du/latest/11340612-0.png.json
    WARNING: 500 (UnknownError)
    WARNING: Waiting 3 sec...
    WARNING: Retrying failed request: /?marker=test-training-akash/du/latest/11340612-0.png.json
    WARNING: 500 (UnknownError)
    WARNING: Waiting 6 sec...
    ^CSee ya!


    *Approach 1:*

        // List every bucket, then page through each bucket's objects,
        // fetching the next batch until the listing is no longer truncated.
        for (Bucket bucket : s3.listBuckets()) {
            System.out.println(" - " + bucket.getName()
                    + " (owner = " + bucket.getOwner() + ")"
                    + " (creationDate = " + bucket.getCreationDate() + ")");
            ObjectListing objectListing = s3.listObjects(
                    new ListObjectsRequest().withBucketName(bucket.getName()));
            while (true) {
                for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
                    System.out.println(" --- " + objectSummary.getKey()
                            + " (size = " + objectSummary.getSize() + ")"
                            + " (eTag = " + objectSummary.getETag() + ")");
                }
                if (!objectListing.isTruncated()) {
                    break; // last batch processed
                }
                objectListing = s3.listNextBatchOfObjects(objectListing);
            }
        }

    *Approach 2:*

        // i is a global variable that tracks how many blobs have been listed so far.
        // Recursive code to list blobs in batches of N.
        private void listDataset(Bucket bucket, AmazonS3 conn, String nextMarker) {
            try {
                ListObjectsRequest request = new ListObjectsRequest();
                request.setBucketName(bucket.getName());
                request.setMaxKeys(200);
                if (nextMarker != null && nextMarker.length() > 0) {
                    request.setMarker(nextMarker);
                }
                ObjectListing result = conn.listObjects(request);
                for (S3ObjectSummary objectSummary : result.getObjectSummaries()) {
                    System.out.println(" - " + objectSummary.getKey()
                            + "  (size = " + objectSummary.getSize() + ")");
                    i = i + 1;
                }
                System.out.println("Total Blobs Fetched So far: " + i);
                if (result.getNextMarker() != null) {
                    System.out.println("Next Object Marker: " + result.getNextMarker());
                    listDataset(bucket, conn, result.getNextMarker());
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
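
    (A side note on Approach 2: it recurses once per batch, so the call
    stack grows with the number of pages; at 200 keys per call and
    roughly 3000 objects that is only about 15 frames, so it is harmless
    here, but an iterative loop like Approach 1 scales better for very
    large buckets.)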

    *Approach 3:*

        done = False
        marker = None
        while not done:
            if marker:
                data = s3.list_objects(Bucket='my-new-bucket', MaxKeys=10, Marker=marker)
            else:
                data = s3.list_objects(Bucket='my-new-bucket', MaxKeys=10)

            for key in data['Contents']:
                print(key)

            if data['IsTruncated']:
                marker = data['Contents'][-1]['Key']
            else:
                done = True

    *Approach 4:*

        paginator = s3.get_paginator('list_objects')
        page_iterator = paginator.paginate(Bucket='my-new-bucket')

        for page in page_iterator:
            if "Contents" in page:
                for key in page["Contents"]:
                    print(key)
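
    This is also roughly how I raised the client timeout and retry limit
    for the tests mentioned above (a minimal boto3 sketch; the endpoint
    URL and credentials here are placeholders, not the real values):

        import boto3
        from botocore.config import Config

        # More generous timeouts and retries than the boto3 defaults, to
        # rule out client-side impatience as the cause of the failures.
        cfg = Config(
            connect_timeout=60,            # seconds to establish the connection
            read_timeout=120,              # seconds to wait for a response
            retries={'max_attempts': 10},  # retry transient failures harder
        )
        s3 = boto3.client(
            's3',
            endpoint_url='http://rgw.example.local:8080',  # placeholder RGW endpoint
            aws_access_key_id='ACCESS_KEY',                # placeholder
            aws_secret_access_key='SECRET_KEY',            # placeholder
            config=cfg,
        )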

    On Sat, Mar 7, 2020 at 7:32 AM Romit Misra
    <romit.misra@xxxxxxxxxxxx <mailto:romit.misra@xxxxxxxxxxxx>> wrote:

        Some pointers:-

        1. What is the approx object count in the bucket?
        2. Use a range list: iterate 1000 objects at a time and store the
        last marker to move to the next list (also referred to as
        paginated listing); there is a sketch of this after the list below.
        3. Check your default timeouts, and tune them according to your
        environment.
        4. Monitor the system metrics (CPU and memory) while the listing
        is happening.
        5. Use an alternative client like s3cmd or boto to rule out an
        anomaly in your code.
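
        A rough boto3 sketch of pointer 2 (the bucket name and checkpoint
        file below are placeholders) that also persists the last marker,
        so a listing that dies partway can resume instead of restarting:

            import os
            import time

            import boto3
            from botocore.exceptions import ClientError

            CHECKPOINT = 'listing.marker'  # placeholder path for saved progress

            def resumable_list(s3, bucket):
                # Resume from the last saved marker, if one exists.
                marker = ''
                if os.path.exists(CHECKPOINT):
                    with open(CHECKPOINT) as f:
                        marker = f.read().strip()
                while True:
                    kwargs = {'Bucket': bucket, 'MaxKeys': 1000}
                    if marker:
                        kwargs['Marker'] = marker
                    try:
                        resp = s3.list_objects(**kwargs)
                    except ClientError as e:
                        # Back off and retry the same page on server-side errors.
                        if e.response['ResponseMetadata']['HTTPStatusCode'] >= 500:
                            time.sleep(3)
                            continue
                        raise
                    for obj in resp.get('Contents', []):
                        print(obj['Key'])
                    if not resp.get('IsTruncated'):
                        break
                    marker = resp['Contents'][-1]['Key']
                    with open(CHECKPOINT, 'w') as f:
                        f.write(marker)  # persist progress for resume

            # usage: resumable_list(boto3.client('s3'), 'my-new-bucket')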


        Thanks
        Romit


        On Sat, 7 Mar 2020, 02:34 rajiv chodisetti,
        <rajivchodisetti54@xxxxxxxxx
        <mailto:rajivchodisetti54@xxxxxxxxx>> wrote:

            Hi,

            I have created an S3 bucket backed by Ceph, and through a
            Java S3 client, via the S3 object gateway, I am listing all
            the files in the bucket. The listing always fails, sometimes
            after listing 1k+ blobs and sometimes after 2k+ blobs, and I
            am not able to figure out how to debug this issue.

            This is the exception I am getting:

            com.amazonaws.services.s3.model.AmazonS3Exception: null (Service: Amazon S3; Status Code: 500; Error Code: UnknownError; Request ID: tx00000000000000000e7df-005e626049-1146-rook-ceph-store; S3 Extended Request ID: 1146-rook-ceph-store-rook-ceph-store), S3 Extended Request ID: 1146-rook-ceph-store-rook-ceph-store
                at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1799)


            I tried with boto as well, and it's the same error there.

            I have checked the S3 gateway pod logs but couldn't find
            anything relevant there, so please let me know how to debug
            this issue, or what the possible reasons might be.

            I have attached the Java code I am using, for reference.

            Thanks & Regards,
            Rajiv
            UIPath




_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx



