Re: Cephalocon Seoul is canceled

Hi Sage,
Just read the news about the cancellation of Cephalocon 2020, although the
site still shows the status quo. Double-checking that we can proceed with the
cancellation of logistics for South Korea.

Thanks,
Romit

On Tue, Feb 4, 2020 at 11:02 PM <ceph-users-request@xxxxxxx> wrote:

> Send ceph-users mailing list submissions to
>         ceph-users@xxxxxxx
>
> To subscribe or unsubscribe via email, send a message with subject or
> body 'help' to
>         ceph-users-request@xxxxxxx
>
> You can reach the person managing the list at
>         ceph-users-owner@xxxxxxx
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of ceph-users digest..."
>
> Today's Topics:
>
>    1. Re: More OMAP Issues (Paul Emmerich)
>    2. Re: More OMAP Issues (DHilsbos@xxxxxxxxxxxxxx)
>    3. Re: Bluestore cache parameter precedence (Igor Fedotov)
>    4. Re: Understanding Bluestore performance characteristics
>       (vitalif@xxxxxxxxxx)
>    5. Cephalocon Seoul is canceled (Sage Weil)
>    6. Re: Bluestore cache parameter precedence (Boris Epstein)
>    7. Bucket rename with  (EDH - Manuel Rios)
>
>
> ----------------------------------------------------------------------
>
> Date: Tue, 4 Feb 2020 17:51:40 +0100
> From: Paul Emmerich <paul.emmerich@xxxxxxxx>
> Subject:  Re: More OMAP Issues
> To: DHilsbos@xxxxxxxxxxxxxx
> Cc: ceph-users <ceph-users@xxxxxxx>
> Message-ID:
>         <
> CAD9yTbEp1BrAagzWzkaAQ-aCq-4ghyEwVJSDdyJzL-So52whBA@xxxxxxxxxxxxxx>
> Content-Type: text/plain; charset="UTF-8"
>
> Are you running a multi-site setup?
> In this case it's best to set the default shard size to a large enough
> number *before* enabling multi-site.
>
> If you didn't do this: well... I think the only way is still to
> completely re-sync the second site...
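>
> For reference, a rough sketch of pre-setting the index shard count before
> multi-site is enabled (this assumes the rgw_override_bucket_index_max_shards
> option; 64 is only an example value, and in multi-site the zonegroup's
> bucket_index_max_shards field may be the preferred place for it):
>
>     # raise the default index shard count used for newly created buckets
>     # (depending on the release, the section may need to be the exact RGW
>     # entity name, e.g. client.rgw.gateway1, or the option set in ceph.conf)
>     ceph config set client.rgw rgw_override_bucket_index_max_shards 64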
>
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Tue, Feb 4, 2020 at 5:23 PM <DHilsbos@xxxxxxxxxxxxxx> wrote:
> >
> > All;
> >
> > We're back to having large OMAP object warnings regarding our RGW
> index pool.
> >
> > This cluster is now in production, so I can't simply dump the buckets /
> pools and hope everything works out.
> >
> > I did some additional research on this issue, and it looks like I need
> to (re)shard the bucket (index?).  I found information that suggests that,
> for older versions of Ceph, buckets couldn't be sharded after creation[1].
> Other information suggests that Nautilus (which we are running) can
> re-shard dynamically, but not when multi-site replication is configured[2].
> >
> > This suggests that a "manual" resharding of a Nautilus cluster should be
> possible, but I can't find the commands to do it.  Has anyone done this?
> Does anyone have the commands to do it?  I can schedule downtime for the
> cluster and take the RADOSGW instance(s) and dependent user services
> offline.
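>
> For what it's worth, a minimal sketch of a manual reshard on Nautilus
> (bucket name and shard count below are examples; whether this is safe while
> multi-site replication is enabled is exactly the open question here):
>
>     # list per-bucket index shard counts and objects per shard
>     radosgw-admin bucket limit check
>
>     # reshard one bucket's index to 64 shards (example value)
>     radosgw-admin bucket reshard --bucket=mybucket --num-shards=64
>
>     # confirm the new shard count
>     radosgw-admin bucket limit check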
> >
> > [1]: https://ceph.io/geen-categorie/radosgw-big-index/
> > [2]: https://docs.ceph.com/docs/master/radosgw/dynamicresharding/
> >
> > Thank you,
> >
> > Dominic L. Hilsbos, MBA
> > Director - Information Technology
> > Perform Air International Inc.
> > DHilsbos@xxxxxxxxxxxxxx
> > www.PerformAir.com
> >
> >
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
> ------------------------------
>
> Date: Tue, 4 Feb 2020 17:04:24 +0000
> From: <DHilsbos@xxxxxxxxxxxxxx>
> Subject:  Re: More OMAP Issues
> To: <ceph-users@xxxxxxx>
> Cc: <paul.emmerich@xxxxxxxx>
> Message-ID:
>         <0670B960225633449A24709C291A525243605D57@COM01.performair.local>
> Content-Type: text/plain; charset="utf-8"
>
> Paul;
>
> Yes, we are running a multi-site setup.
>
> Re-sync would be acceptable at this point, as we only have 4 TiB in use
> right now.
>
> Tearing down and reconfiguring the second site would also be acceptable,
> except that I've never been able to cleanly remove a zone from a zone
> group.  The only way I've found to remove a zone completely is to tear down
> the entire RADOSGW configuration (delete .rgw.root pool from both clusters).
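>
> For reference, a rough sketch (from memory) of the documented sequence for
> deleting a secondary zone; zone and zonegroup names below are examples, and
> this may well be the part that refuses to go cleanly:
>
>     # on the master zone: detach the secondary zone from the zonegroup
>     radosgw-admin zonegroup remove --rgw-zonegroup=us --rgw-zone=us-west
>     radosgw-admin period update --commit
>
>     # then delete the zone definition itself
>     radosgw-admin zone delete --rgw-zone=us-west
>     radosgw-admin period update --commit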
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director – Information Technology
> Perform Air International Inc.
> DHilsbos@xxxxxxxxxxxxxx
> www.PerformAir.com
>
>
>
> -----Original Message-----
> From: Paul Emmerich [mailto:paul.emmerich@xxxxxxxx]
> Sent: Tuesday, February 04, 2020 9:52 AM
> To: Dominic Hilsbos
> Cc: ceph-users
> Subject: Re:  More OMAP Issues
>
> Are you running a multi-site setup?
> In this case it's best to set the default shard size to a large enough
> number *before* enabling multi-site.
>
> If you didn't do this: well... I think the only way is still to
> completely re-sync the second site...
>
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Tue, Feb 4, 2020 at 5:23 PM <DHilsbos@xxxxxxxxxxxxxx> wrote:
> >
> > All;
> >
> > We're back to having large OMAP object warnings regarding our RGW
> index pool.
> >
> > This cluster is now in production, so I can't simply dump the buckets /
> pools and hope everything works out.
> >
> > I did some additional research on this issue, and it looks like I need
> to (re)shard the bucket (index?).  I found information that suggests that,
> for older versions of Ceph, buckets couldn't be sharded after creation[1].
> Other information suggests that Nautilus (which we are running) can
> re-shard dynamically, but not when multi-site replication is configured[2].
> >
> > This suggests that a "manual" resharding of a Nautilus cluster should be
> possible, but I can't find the commands to do it.  Has anyone done this?
> Does anyone have the commands to do it?  I can schedule downtime for the
> cluster and take the RADOSGW instance(s) and dependent user services
> offline.
> >
> > [1]: https://ceph.io/geen-categorie/radosgw-big-index/
> > [2]: https://docs.ceph.com/docs/master/radosgw/dynamicresharding/
> >
> > Thank you,
> >
> > Dominic L. Hilsbos, MBA
> > Director - Information Technology
> > Perform Air International Inc.
> > DHilsbos@xxxxxxxxxxxxxx
> > www.PerformAir.com
> >
> >
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
> ------------------------------
>
> Date: Tue, 4 Feb 2020 20:10:20 +0300
> From: Igor Fedotov <ifedotov@xxxxxxx>
> Subject:  Re: Bluestore cache parameter precedence
> To: Boris Epstein <borepstein@xxxxxxxxx>, ceph-users@xxxxxxx
> Message-ID: <0cb36a39-7dba-01b5-5383-dc1116f459a4@xxxxxxx>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> Hi Boris,
>
> General settings (unless they are set to zero) override the disk-specific
> settings.
>
> I.e. bluestore_cache_size overrides both bluestore_cache_size_hdd and
> bluestore_cache_size_ssd.
>
> Here is the code snippet, in case you know C++:
>
>    if (cct->_conf->bluestore_cache_size) {
>      cache_size = cct->_conf->bluestore_cache_size;
>    } else {
>      // choose global cache size based on backend type
>      if (_use_rotational_settings()) {
>        cache_size = cct->_conf->bluestore_cache_size_hdd;
>      } else {
>        cache_size = cct->_conf->bluestore_cache_size_ssd;
>      }
>    }
>
> Thanks,
>
> Igor
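>
> As a concrete illustration of that precedence (values below are examples
> only; this uses the centralized config database, but the same options can
> live in ceph.conf):
>
>     # per-device-type defaults, consulted only while bluestore_cache_size is 0
>     ceph config set osd bluestore_cache_size_hdd 1073741824   # 1 GiB on HDD OSDs
>     ceph config set osd bluestore_cache_size_ssd 3221225472   # 3 GiB on SSD OSDs
>
>     # a non-zero generic value overrides both per-device settings above
>     ceph config set osd bluestore_cache_size 2147483648       # 2 GiB everywhere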
>
> On 2/4/2020 2:14 PM, Boris Epstein wrote:
> > Hello list,
> >
> > As stated in this document:
> >
> >
> https://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
> >
> > there are multiple parameters defining cache limits for BlueStore. You have
> > bluestore_cache_size (presumably controlling the cache size),
> > bluestore_cache_size_hdd (presumably doing the same for HDD storage only)
> > and bluestore_cache_size_ssd (presumably being the equivalent for SSD). My
> > question is: does bluestore_cache_size override the disk-specific
> > parameters, or do I need to set the disk-specific (or, rather,
> > storage-type-specific) ones separately if I want to keep them at a certain
> > value?
> >
> > Thanks in advance.
> >
> > Boris.
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
> ------------------------------
>
> Date: Tue, 04 Feb 2020 20:22:30 +0300
> From: vitalif@xxxxxxxxxx
> Subject:  Re: Understanding Bluestore performance
>         characteristics
> To: Bradley Kite <bradley.kite@xxxxxxxxx>
> Cc: ceph-users@xxxxxxx
> Message-ID: <c381a59989a4f4f6760d061e745a281a@xxxxxxxxxx>
> Content-Type: text/plain; charset=US-ASCII; format=flowed
>
> The SSD (block.db) partition contains object metadata in RocksDB, so it
> probably loads the metadata before modifying objects (if it's not in the
> cache yet). It also sometimes performs compaction, which results in disk
> reads and writes. There are other things going on that I'm not completely
> aware of. There's the RBD object map... Maybe there are some locks that
> come into play when you parallelize writes...
>
> There's a config option to enable RocksDB performance counters. You can
> have a look into it.
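>
> A sketch of one way to look at those counters (this assumes the rocksdb_perf
> option plus the admin-socket perf dump; whether an OSD restart is needed and
> the exact counter names vary by version):
>
>     # enable RocksDB perf counters on the OSDs
>     ceph config set osd rocksdb_perf true
>
>     # dump the rocksdb section of one OSD's perf counters via its admin socket
>     ceph daemon osd.0 perf dump rocksdb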
>
> However, if you're just trying to understand why RBD isn't super fast,
> then I don't think these reads are the cause...
>
> > Hi Vitaliy
> >
> > Yes - I tried this and I can still see a number of reads (~110 iops,
> > 440KB/sec) on the SSD, so it is significantly better, but the result
> > is still puzzling - I'm trying to understand what is causing the
> > reads. The problem is amplified with numjobs >= 2 but it looks like it
> > is still there with just 1.
> >
> > It's as if some caching parameter is not correct, and the same blocks are
> > being read over and over when doing a write?
> >
> > Could anyone advise on the best way for me to investigate further?
> >
> > I've tried strace (with -k) and 'perf record', but neither produces any
> > useful stack traces to help understand what's going on.
> >
> > Regards
> > --
> > Brad
>
> ------------------------------
>
> Date: Tue, 4 Feb 2020 17:24:37 +0000 (UTC)
> From: Sage Weil <sage@xxxxxxxxxxxx>
> Subject:  Cephalocon Seoul is canceled
> To: ceph-announce@xxxxxxx, ceph-users@xxxxxxx, dev@xxxxxxx,
>         ceph-devel@xxxxxxxxxxxxxxx
> Message-ID: <alpine.DEB.2.21.2002041649050.21136@piezo.novalocal>
> Content-Type: text/plain; charset=US-ASCII
>
> Hi everyone,
>
> We are sorry to announce that, due to the recent coronavirus outbreak, we
> are canceling Cephalocon for March 3-5 in Seoul.
>
> More details will follow about how to best handle cancellation of hotel
> reservations and so forth.  Registrations will of course be
> refunded--expect an email with details in the next day or two.
>
> We are still looking into whether it makes sense to reschedule the event
> for later in the year.
>
> Thank you to everyone who has helped to plan this event, submitted talks,
> and agreed to sponsor.  It makes us sad to cancel, but the safety of
> our community is of the utmost importance, and it was looking increasingly
> unlikely that we could make this event a success.
>
> Stay tuned...
>
> ------------------------------
>
> Date: Tue, 4 Feb 2020 12:29:13 -0500
> From: Boris Epstein <borepstein@xxxxxxxxx>
> Subject:  Re: Bluestore cache parameter precedence
> To: Igor Fedotov <ifedotov@xxxxxxx>
> Cc: ceph-users@xxxxxxx
> Message-ID:
>         <CADeF1XHrPzTq1+8S_WG=ZH=SVNAbdLaY=
> FR7UaLGHn3O_yWLnw@xxxxxxxxxxxxxx>
> Content-Type: text/plain; charset="UTF-8"
>
> Hi Igor,
>
> Thanks!
>
> I think the code needs to be corrected - the choice criteria for which
> setting to use when
>
> cct->_conf->bluestore_cache_size == 0
>
> should be as follows:
>
> 1) See what kind of storage you have.
>
> 2) Select the type-appropriate setting.
>
> Is this code publicly editable? I'll be happy to correct that.
>
> Regards,
>
> Boris.
>
> On Tue, Feb 4, 2020 at 12:10 PM Igor Fedotov <ifedotov@xxxxxxx> wrote:
>
> > Hi Boris,
> >
> > General settings (unless they are set to zero) override the disk-specific
> > settings.
> >
> > I.e. bluestore_cache_size overrides both bluestore_cache_size_hdd and
> > bluestore_cache_size_ssd.
> >
> > Here is the code snippet, in case you know C++:
> >
> >    if (cct->_conf->bluestore_cache_size) {
> >      cache_size = cct->_conf->bluestore_cache_size;
> >    } else {
> >      // choose global cache size based on backend type
> >      if (_use_rotational_settings()) {
> >        cache_size = cct->_conf->bluestore_cache_size_hdd;
> >      } else {
> >        cache_size = cct->_conf->bluestore_cache_size_ssd;
> >      }
> >    }
> >
> > Thanks,
> >
> > Igor
> >
> > On 2/4/2020 2:14 PM, Boris Epstein wrote:
> > > Hello list,
> > >
> > > As stated in this document:
> > >
> > >
> >
> https://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
> > >
> > > there are multiple parameters defining cache limits for BlueStore. You have
> > > bluestore_cache_size (presumably controlling the cache size),
> > > bluestore_cache_size_hdd (presumably doing the same for HDD storage only)
> > > and bluestore_cache_size_ssd (presumably being the equivalent for SSD). My
> > > question is: does bluestore_cache_size override the disk-specific
> > > parameters, or do I need to set the disk-specific (or, rather,
> > > storage-type-specific) ones separately if I want to keep them at a certain
> > > value?
> > >
> > > Thanks in advance.
> > >
> > > Boris.
> > > _______________________________________________
> > > ceph-users mailing list -- ceph-users@xxxxxxx
> > > To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >
>
> ------------------------------
>
> Date: Tue, 4 Feb 2020 17:29:55 +0000
> From: EDH - Manuel Rios <mriosfer@xxxxxxxxxxxxxxxx>
> Subject:  Bucket rename with
> To: "ceph-users@xxxxxxx" <ceph-users@xxxxxxx>
> Message-ID:  <HE1P195MB02521946493264331A2CE2A3B0030@xxxxxxxxxxxxxxxxx
>         P195.PROD.OUTLOOK.COM>
> Content-Type: text/plain; charset="us-ascii"
>
> Hi
>
> A customer asked us about what should be a simple task: they want to rename
> a bucket.
>
> Checking the Nautilus documentation, it looks like this is not possible yet,
> but checking the master documentation, the following CLI command should
> apparently accomplish this:
>
> $ radosgw-admin bucket link --bucket=foo --bucket-new-name=bar --uid=johnny
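>
> If and when this lands in Nautilus, a rough usage sketch might look like the
> following (bucket and user names are the ones from the example above; the
> follow-up checks are only suggestions):
>
>     # relink the bucket under its new name, then verify
>     radosgw-admin bucket link --bucket=foo --bucket-new-name=bar --uid=johnny
>     radosgw-admin bucket list --uid=johnny     # should now list "bar"
>     radosgw-admin bucket stats --bucket=bar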
>
> Will this be backported to Nautilus? Or is it still only available to
> developer/master users?
>
> https://docs.ceph.com/docs/master/man/8/radosgw-admin/
>
> Regards
> Manuel
>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
>
> ------------------------------
>
> End of ceph-users Digest, Vol 85, Issue 17
> ******************************************
>


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



