Re: Can a pool tier to other pools more than once? Re: Must host bucket name be the same with hostname?

Hello,

On Wed, 8 Jun 2016 15:16:32 +0800 秀才 wrote:

> Thanks!
> 
> 
> It seems to work!
> 
> 
> I configured my cluster's CRUSH ruleset according to
> https://elkano.org/blog/ceph-sata-ssd-pools-server-editing-crushmap/.
> Then I restarted my cluster, and things look OK.
> 
> 
> My tests are not finished yet; I went on to set up the cache tier.
> 
> 
> ceph osd tier add images images ssdpool
> ceph osd tier cache-mode ssdpool writeback
> ceph osd tier add images volumes ssdpool
> ceph osd tier cache-mode ssdpool writeback
> 
To answer your question below: no, that won't work.
A cache pool cannot be shared; it can only serve a single base pool.
If you google for this, you may actually find a thread where I asked that
same question.
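
For illustration only, a minimal sketch of the usual pattern, assuming one
dedicated SSD-backed cache pool per base pool (the pool names "images-cache"
and "volumes-cache", the PG counts and the ruleset number are placeholders,
not recommendations):

  # one cache pool per base pool, placed on the SSD CRUSH ruleset
  ceph osd pool create images-cache 128 128
  ceph osd pool create volumes-cache 128 128
  ceph osd pool set images-cache crush_ruleset 1
  ceph osd pool set volumes-cache crush_ruleset 1
  ceph osd tier add images images-cache
  ceph osd tier add volumes volumes-cache
  ceph osd tier cache-mode images-cache writeback
  ceph osd tier cache-mode volumes-cache writeback
  ceph osd tier set-overlay images images-cache
  ceph osd tier set-overlay volumes volumes-cache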

> 
> 
> But 'ceph -s' reports:
> 
> 
> 1 cache pools are missing hit_sets
> 
If the 4 commands up there are all you did, your cache tier setup isn't
finished. 
Re-read the documentation and the various cache tier threads here,
including my "Cache tier operation clarifications" thread.
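
For example (a rough sketch only, the values are arbitrary placeholders
rather than tuned recommendations), the settings that the "missing hit_sets"
warning points at look roughly like this on Hammer:

  # hit set tracking -- this is what the warning is about
  ceph osd pool set ssdpool hit_set_type bloom
  ceph osd pool set ssdpool hit_set_count 1
  ceph osd pool set ssdpool hit_set_period 3600
  # limits so the tiering agent knows when to flush and evict
  ceph osd pool set ssdpool target_max_bytes 100000000000
  ceph osd pool set ssdpool cache_target_dirty_ratio 0.4
  ceph osd pool set ssdpool cache_target_full_ratio 0.8
  # and an overlay, so client I/O actually goes through the cache
  ceph osd tier set-overlay images ssdpool

And again, one ssdpool can only ever be the cache tier for a single base
pool, so the above only applies once the images/volumes setup is untangled.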

> 
> And then 'ceph osd tree' reports (a bit long):
> 
> 
> ID  WEIGHT  TYPE NAME                    UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -10 0.59999 root default-ssd
>  -6 0.14000     host bjd-01-control1-ssd
>   3 0.06999         osd.3                   down  1.00000          1.00000
>   4 0.06999         osd.4                   down  1.00000          1.00000
>  -7 0.14000     host bjd-01-control2-ssd
>  11 0.06999         osd.11                  down  1.00000          1.00000
>  12 0.06999         osd.12                  down  1.00000          1.00000
>  -8 0.17999     host bjd-01-compute1-ssd
>  18 0.09000         osd.18                  down  1.00000          1.00000
>  19 0.09000         osd.19                  down  1.00000          1.00000
>  -9 0.14000     host bjd-01-compute2-ssd
>  28 0.06999         osd.28                  down  1.00000          1.00000
>  29 0.06999         osd.29                  down  1.00000          1.00000
>  -1 6.06000 root default
>  -2 1.50000     host bjd-01-control1
>   0 0.25000         osd.0                     up  1.00000          1.00000
>   2 0.25000         osd.2                     up  1.00000          1.00000
>   5 0.25000         osd.5                     up  1.00000          1.00000
>   6 0.25000         osd.6                     up  1.00000          1.00000
>  22 0.25000         osd.22                    up  1.00000          1.00000
>  23 0.25000         osd.23                    up  1.00000          1.00000
>  -3 1.50000     host bjd-01-control2
>   7 0.25000         osd.7                     up  1.00000          1.00000
>   8 0.25000         osd.8                     up  1.00000          1.00000
>   9 0.25000         osd.9                     up  1.00000          1.00000
>  10 0.25000         osd.10                    up  1.00000          1.00000
>  13 0.25000         osd.13                    up  1.00000          1.00000
>  14 0.25000         osd.14                    up  1.00000          1.00000
>  -4 1.56000     host bjd-01-compute1
>  15 0.25000         osd.15                    up  1.00000          1.00000
>  16 0.25000         osd.16                    up  1.00000          1.00000
>  17 0.26999         osd.17                    up  1.00000          1.00000
>  20 0.26999         osd.20                    up  1.00000          1.00000
>  21 0.26999         osd.21                    up  1.00000          1.00000
>   1 0.25000         osd.1                     up  1.00000          1.00000
>  -5 1.50000     host bjd-01-compute2
>  24 0.25000         osd.24                    up  1.00000          1.00000
>  25 0.25000         osd.25                    up  1.00000          1.00000
>  26 0.25000         osd.26                    up  1.00000          1.00000
>  27 0.25000         osd.27                    up  1.00000          1.00000
>  30 0.25000         osd.30                    up  1.00000          1.00000
>  31 0.25000         osd.31                    up  1.00000          1.00000
> 
> 
> In the end, I ran ceph-osd manually with 'ceph-osd -i 12 -c /etc/ceph/ceph.conf -f':
> 
> 
> SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0d 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0d 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> 2016-06-08 06:21:58.335164 7fc376d74880 -1 osd.12 383 log_to_monitors {default=true}

That's just very, very bad. I don't know what you're doing there, but you
either have hardware or configuration problems.
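
Purely as a suggestion, those SG_IO errors would make me check the disk
behind that OSD and the kernel log on that node first, for example:

  # device name is just an example, use the actual disk behind osd.12
  smartctl -a /dev/sdX
  dmesg | grep -iE 'sg_io|i/o error|sense'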

Christian

> ./include/interval_set.h: In function 'void interval_set<T>::erase(T, T) [with T = snapid_t]' thread 7fc34dbe9700 time 2016-06-08 06:21:58.341270
> ./include/interval_set.h: 386: FAILED assert(_size >= 0)
> ./include/interval_set.h: In function 'void interval_set<T>::erase(T, T) [with T = snapid_t]' thread 7fc34d3e8700 time 2016-06-08 06:21:58.341246
> ./include/interval_set.h: 386: FAILED assert(_size >= 0)
> ./include/interval_set.h: In function 'void interval_set<T>::erase(T, T) [with T = snapid_t]' thread 7fc34c3e6700 time 2016-06-08 06:21:58.342349
> ./include/interval_set.h: 386: FAILED assert(_size >= 0)
> ./include/interval_set.h: In function 'void interval_set<T>::erase(T, T) [with T = snapid_t]' thread 7fc34abe3700 time 2016-06-08 06:21:58.341558
> ./include/interval_set.h: 386: FAILED assert(_size >= 0)
> ./include/interval_set.h: In function 'void interval_set<T>::erase(T, T) [with T = snapid_t]' thread 7fc34bbe5700 time 2016-06-08 06:21:58.342344
> ./include/interval_set.h: 386: FAILED assert(_size >= 0)
> ceph version 0.94.6 (e832001feaf8c176593e0325c8298e3f16dfb403)
>  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x85) [0xbc9195]
>  2: (interval_set<snapid_t>::subtract(interval_set<snapid_t> const&)+0xc0) [0x81ec80]
>  3: (PGPool::update(std::tr1::shared_ptr<OSDMap const>)+0x54e) [0x7f21be]
>  4: (PG::handle_advance_map(std::tr1::shared_ptr<OSDMap const>, std::tr1::shared_ptr<OSDMap const>, std::vector<int, std::allocator<int> >&, int, std::vector<int, std::allocator<int> >&, int, PG::RecoveryCtx*)+0x2a2) [0x7f26f2]
>  5: (OSD::advance_pg(unsigned int, PG*, ThreadPool::TPHandle&, PG::RecoveryCtx*, std::set<boost::intrusive_ptr<PG>, std::less<boost::intrusive_ptr<PG> >, std::allocator<boost::intrusive_ptr<PG> > >*)+0x2da) [0x6a591a]
>  6: (OSD::process_peering_events(std::list<PG*, std::allocator<PG*> > const&, ThreadPool::TPHandle&)+0x22c) [0x6a643c]
>  7: (OSD::PeeringWQ::_process(std::list<PG*, std::allocator<PG*> > const&, ThreadPool::TPHandle&)+0x28) [0x701d88]
>  8: (ThreadPool::worker(ThreadPool::WorkThread*)+0xa76) [0xbb9966]
>  9: (ThreadPool::WorkThread::entry()+0x10) [0xbba9f0]
>  10: (()+0x7dc5) [0x7fc3756ffdc5]
>  11: (clone()+0x6d) [0x7fc3741e228d]
>  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
> 2016-06-08 06:21:58.355662 7fc34d3e8700 -1 ./include/interval_set.h: In function 'void interval_set<T>::erase(T, T) [with T = snapid_t]' thread 7fc34d3e8700 time 2016-06-08 06:21:58.341246
> ./include/interval_set.h: 386: FAILED assert(_size >= 0)
> 
> 
> 
> 
> 
> Q: Can a pool be a cache tier for more than one other pool, as in my case,
> ssdpool--tier--images & ssdpool--tier--volumes?
> 
> 
> Best regards,
> 
> 
> Xiucai
> ------------------ Original Message ------------------
> From: "Christian Balzer" <chibi@xxxxxxx>
> Date: Wednesday, June 8, 2016, 11:16 AM
> To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> Cc: "秀才" <hualingson@xxxxxxxxxxx>
> Subject: Re: Must host bucket name be the same with hostname?
> 
> 
> 
> 
> Hello,
> 
> you will want to read:
> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
> 
> especially sections III and IV.
> 
> Another approach w/o editing the CRUSH map is here:
> https://elkano.org/blog/ceph-sata-ssd-pools-server-editing-crushmap/
> 
> Christian
> 
> On Wed, 8 Jun 2016 10:54:36 +0800 秀才 wrote:
> 
> > Hi all,
> > 
> >     There are SAS drives and SSDs in my nodes at the same time.
> >     Now I want to divide them into two groups, one composed of SAS drives and
> > one containing only SSDs. When I configure the CRUSH rulesets, see the segment below:
> > 
> > 
> >         # buckets
> >         host robert-a {
> > 	id -2		# do not change unnecessarily
> > 	# weight 1.640
> > 	alg straw
> > 	hash 0	# rjenkins1
> > 	item osd.0 weight 0.250    #SAS
> > 	item osd.1 weight 0.250    #SAS
> > 	item osd.2 weight 0.250    #SSD
> > 	item osd.3 weight 0.250    #SSD
> > 
> >         }
> > 
> > 
> >     So, I am not sure whether the host bucket name must be the same as the
> > hostname.
> > 
> > 
> >     Or does the host bucket name not matter?
> > 
> > 
> > 
> > Best regards,
> > 
> > Xiucai
> 
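
To tie up the bucket-name question quoted above: the host bucket name does
not have to match the hostname, but with the default settings ceph-osd will
move itself back under a bucket named after its hostname on every start.
A minimal ceph.conf sketch of the usual workaround (with hypothetical bucket
names like robert-a-ssd in mind; the two blog posts linked above go into
more detail on the SSD/SATA split itself):

  [osd]
  # keep OSDs where the CRUSH map puts them instead of re-homing
  # them under a host-named bucket at daemon start
  osd crush update on start = false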


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



