Re: Cache Tiering Question


 



I think what also makes things seem a little disconnected is that target_max_bytes and the relative ratios are set at the pool level, whereas the current eviction logic works at a per-OSD/PG level, so these pool-level values are broken down into per-PG estimates. Depending on how the stale objects are distributed across PGs, you can therefore end up in a situation where flushing/eviction does not appear to stick strictly to the configured values.
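To make the per-PG breakdown concrete, here is a rough illustration with hypothetical numbers. The exact accounting is internal to the OSD; this sketch only assumes that each PG is evaluated against its even share of the pool-level target (target_max_bytes / pg_num), which is why uneven distribution of dirty objects can make flushing look off:

```python
# Illustrative only: pool-level targets broken down into per-PG estimates.
# All numbers are hypothetical, not taken from a real cluster.
target_max_bytes = 100 * 1024**3   # pool-level target: 100 GiB
pg_num = 512                       # PGs in the cache pool
cache_target_dirty_ratio = 0.4     # flush when 40% of the target is dirty

# Assumed per-PG share of the pool-level target.
per_pg_target = target_max_bytes / pg_num
per_pg_dirty_threshold = per_pg_target * cache_target_dirty_ratio

print(f"per-PG target: {per_pg_target / 1024**2:.1f} MiB")            # 200.0 MiB
print(f"per-PG dirty threshold: {per_pg_dirty_threshold / 1024**2:.1f} MiB")  # 80.0 MiB
```

A PG whose dirty data happens to sit below its own 80 MiB threshold won't flush, even if the pool as a whole has crossed the 40% mark.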

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Christian Balzer
> Sent: 16 October 2015 00:50
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Cache Tiering Question
> 
> 
> Hello,
> 
> Having run into this myself two days ago (setting relative sizing values doesn't
> flush things when expected) I'd say that the documentation is highly
> misleading when it comes to the relative settings.
> 
> And unclear when it comes to the size/object settings.
> 
> Guess this section needs at least one nice red paragraph and some further
> explanations.
> 
> Christian
> 
> On Thu, 15 Oct 2015 17:33:30 -0600 Robert LeBlanc wrote:
> 
> >
> > One more question. Is max_{bytes,objects} before or after replication
> > factor?
> > - ----------------
> > Robert LeBlanc
> > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
> >
> >
> > On Thu, Oct 15, 2015 at 4:42 PM, LOPEZ Jean-Charles  wrote:
> > > Hi Robert,
> > >
> > > yes they do.
> > >
> > > Pools don’t have a size when you create them, hence the
> > > value/ratio pair that must be defined for the cache tiering
> > > mechanism. Pools only have a number of PGs assigned. So the max
> > > values and the ratios for dirty and full must be set explicitly to
> > > match your configuration.
> > >
> > > Note that you can define max_bytes and max_objects at the same
> > > time. Whichever of the two values is breached first, in combination
> > > with your ratio settings, will trigger eviction and/or flushing.
> > > The ratios you choose apply to both values.
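[The settings JC describes could be put in place roughly as follows; the pool name "cachepool" and all numbers are placeholders, not values from this thread:]

```shell
# Both absolute targets can be set on the cache pool at the same time;
# whichever is reached first drives flushing/eviction.
ceph osd pool set cachepool target_max_bytes 107374182400   # 100 GiB
ceph osd pool set cachepool target_max_objects 1000000

# The ratios are fractions of the targets above and apply to both.
ceph osd pool set cachepool cache_target_dirty_ratio 0.4    # begin flushing
ceph osd pool set cachepool cache_target_full_ratio 0.8     # begin evicting
```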
> > >
> > > Cheers
> > > JC
> > >
> > >> On 15 Oct 2015, at 15:02, Robert LeBlanc  wrote:
> > >>
> > >>
> > >> hmmm...
> > >>
> > >> http://docs.ceph.com/docs/master/rados/operations/cache-tiering/#relative-sizing
> > >>
> > >> makes it sound like it should be based on the size of the pool and
> > >> that you don't have to set anything like max bytes/objects. Can you
> > >> confirm that cache_target_{dirty,dirty_high,full}_ratio works as a
> > >> ratio of target_max_bytes set?
> > >> - ----------------
> > >> Robert LeBlanc
> > >> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
> > >>
> > >>
> > >> On Thu, Oct 15, 2015 at 3:32 PM, Nick Fisk  wrote:
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>> -----Original Message-----
> > >>>> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On
> > >>>> Behalf Of Robert LeBlanc
> > >>>> Sent: 15 October 2015 22:06
> > >>>> To: ceph-users@xxxxxxxxxxxxxx
> > >>>> Subject:  Cache Tiering Question
> > >>>>
> > >>>>
> > >>>> ceph df (ceph version 0.94.3-252-g629b631
> > >>>> (629b631488f044150422371ac77dfc005f3de1bc)) is showing some odd
> > >>>> results:
> > >>>>
> > >>>> root@nodez:~# ceph df
> > >>>> GLOBAL:
> > >>>>    SIZE       AVAIL      RAW USED     %RAW USED
> > >>>>    24518G     21670G        1602G          6.53
> > >>>> POOLS:
> > >>>>    NAME         ID     USED      %USED     MAX AVAIL     OBJECTS
> > >>>>    rbd          0      2723G     11.11         6380G     1115793
> > >>>>    ssd-pool     2          0         0          732G           1
> > >>>>
> > >>>> The rbd pool is showing 11.11% used, but if you calculate the
> > >>>> numbers there it is 2723/6380 = 42.68%.
> > >>>
> > >>> I have a feeling that the percentage is based on the amount used
> > >>> relative to the total cluster size, i.e. 2723/24518.
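[Checking both interpretations against the `ceph df` output above confirms the hunch: %USED matches USED divided by the cluster SIZE, not by the pool's MAX AVAIL:]

```python
# Figures (in GB) taken from the ceph df output quoted above.
used, max_avail, cluster_size = 2723, 6380, 24518

print(f"USED / MAX AVAIL    = {100 * used / max_avail:.2f}%")    # 42.68%
print(f"USED / cluster SIZE = {100 * used / cluster_size:.2f}%") # 11.11% <- matches %USED
```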
> > >>>
> > >>>>
> > >>>> Will this cause problems with the relative cache tier settings?
> > >>>> Do I need to set the percentage based on what Ceph is reporting
> > >>>> here?
> > >>>
> > >>> The flushing/eviction thresholds are based on the target_max_bytes
> > >>> value that you set; they have nothing to do with the underlying
> > >>> pool size. It's up to you to come up with a sane number for this
> > >>> variable.
> > >>>
> > >>>>
> > >>>> Thanks,
> > >>>> - ----------------
> > >>>> Robert LeBlanc
> > >>>> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62
> > >>>> B9F1
> > >>>> _______________________________________________
> > >>>> ceph-users mailing list
> > >>>> ceph-users@xxxxxxxxxxxxxx
> > >>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > >>>
> > >>>
> > >>>
> > >>>
> > >>
> > >
> >
> 
> 
> --
> Christian Balzer        Network/Systems Engineer
> chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
> http://www.gol.com/







