Re: seg fault

Hi Samuel,

After reading your mail again carefully, I see that my last questions are obsolete.
I will surely upgrade to 0.67.11 as soon as I can and take a closer look at the improvements.
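
For the record, here is a minimal sketch of how I plan to confirm that each daemon really runs the new version after its restart (the osd.2 admin-socket path just mirrors the log file name quoted below and assumes the default socket location; adjust for the other daemons):

# package version installed on the node
ceph --version

# version reported by the running daemon over its admin socket (run on the OSD's host)
ceph --admin-daemon /var/run/ceph/ceph-osd.2.asok version

# overall cluster state after each restart
ceph -s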

As far as I understand the release notes for 0.67.11, there are no further upgrade-related fixes, so I hope there will be no issues when going to firefly afterwards.
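
In case it helps anyone who hits the same crash, a rough sketch of how the raw addresses in the trace quoted below can be resolved against the ceph-osd binary that produced them (the /usr/bin/ceph-osd path is only an assumption for my install, and the binary plus any debug symbols must match version 0.67 exactly):

# resolve a single return address, e.g. the PGLog frame at 0x76b8d0
addr2line -C -f -e /usr/bin/ceph-osd 0x76b8d0

# or dump the annotated disassembly, as the crash log itself suggests
objdump -rdS /usr/bin/ceph-osd > ceph-osd.dump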

Thank you (all) again

Best
Philipp 


> On 08.12.2014 at 23:52, Samuel Just <sam.just@xxxxxxxxxxx> wrote:
> 
> To start with, dumpling itself is up to v0.67.11.  You are running
> v0.67.0.  There have been many bug fixes just in dumpling in that
> time.  You should start with upgrading to v0.67.11 even if you plan on
> upgrading to firefly or giant later (there were bug fixes in dumpling
> for bugs which only happen when upgrading to later versions).  Beyond
> that, it depends on your needs.  Giant won't be maintained for a
> particularly long time (like emperor), but firefly will (like
> dumpling).
> -Sam
> 
> On Mon, Dec 8, 2014 at 2:47 PM, Philipp von Strobl-Albeg
> <philipp@xxxxxxxxxxxx> wrote:
>> Thank you very much.
>> I planned this step already - so good to know ;-)
>> 
>> Do you recommend firefly or giant, given that radosgw is not needed?
>> 
>> 
>> Best
>> Philipp
>> 
>> 
>>> On 08.12.2014 at 23:42, Samuel Just wrote:
>>> 
>>> At a guess, this is something that has long since been fixed in
>>> dumpling; you probably want to upgrade to the current dumpling point
>>> release.
>>> -Sam
>>> 
>>> On Mon, Dec 8, 2014 at 2:40 PM, Philipp von Strobl-Albeg
>>> <philipp@xxxxxxxxxxxx> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> after using the ceph cluster for months without any problems - thank you
>>>> for that great piece of software - I noticed that one OSD crashed with the
>>>> following output.
>>>> What are the recommendations - just upgrading, or is this not a bug in
>>>> 0.67?
>>>> 
>>>> 
>>>> -1> 2014-11-08 04:24:51.127924 7f0d92897700  5 --OSD::tracker-- reqid:
>>>> client.9016.1:5037242, seq: 3484524, time: 2014-11-08 04:24:51.127924,
>>>> event: waiting_for_osdmap, request: osd_op(client.9016.1:5037242
>>>> rb.0.1798.6b8b4567.000000000076 [write 602112~4096] 2.c90060c7 snapc 7=[]
>>>> e554) v4
>>>>     0> 2014-11-08 04:24:51.141626 7f0d88ff9700 -1 *** Caught signal
>>>> (Segmentation fault) **
>>>> in thread 7f0d88ff9700
>>>> 
>>>> ceph version 0.67 (e3b7bc5bce8ab330ec1661381072368af3c218a0)
>>>> 1: ceph-osd() [0x802577]
>>>> 2: (()+0x113d0) [0x7f0db94d93d0]
>>>> 3: (std::string::compare(std::string const&) const+0xc)
>>>> [0x7f0db7e81c4c]
>>>> 4: (PGLog::check()+0x90) [0x76b8d0]
>>>> 5: (PGLog::write_log(ObjectStore::Transaction&, hobject_t
>>>> const&)+0x245)
>>>> [0x7672c5]
>>>> 6: (PG::append_log(std::vector<pg_log_entry_t,
>>>> std::allocator<pg_log_entry_t> >&, eversion_t,
>>>> ObjectStore::Transaction&)+0x31d) [0x71f03d]
>>>> 7: (ReplicatedPG::do_op(std::tr1::shared_ptr<OpRequest>)+0x36f3)
>>>> [0x623e63]
>>>> 8: (PG::do_request(std::tr1::shared_ptr<OpRequest>,
>>>> ThreadPool::TPHandle&)+0x619) [0x710a19]
>>>> 9: (OSD::dequeue_op(boost::intrusive_ptr<PG>,
>>>> std::tr1::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x330) [0x6663f0]
>>>> 10: (OSD::OpWQ::_process(boost::intrusive_ptr<PG>,
>>>> ThreadPool::TPHandle&)+0x4a0) [0x67cbc0]
>>>> 11: (ThreadPool::WorkQueueVal<std::pair<boost::intrusive_ptr<PG>,
>>>> std::tr1::shared_ptr<OpRequest> >, boost::intrusive_ptr<PG>
>>>> >::_void_process(void*, ThreadPool::TPHandle&)+0x9c) [0x6b893c]
>>>> 
>>>> 12: (ThreadPool::worker(ThreadPool::WorkThread*)+0x4e6) [0x8bb156]
>>>> 13: (ThreadPool::WorkThread::entry()+0x10) [0x8bcf60]
>>>> 14: (()+0x91a7) [0x7f0db94d11a7]
>>>> 15: (clone()+0x6d) [0x7f0db76072cd]
>>>> NOTE: a copy of the executable, or `objdump -rdS <executable>` is
>>>> needed to
>>>> interpret this.
>>>> 
>>>> --- logging levels ---
>>>>   0/ 5 none
>>>>   0/ 1 lockdep
>>>>   0/ 1 context
>>>>   1/ 1 crush
>>>>   1/ 5 mds
>>>>   1/ 5 mds_balancer
>>>>   1/ 5 mds_locker
>>>>   1/ 5 mds_log
>>>>   1/ 5 mds_log_expire
>>>>   1/ 5 mds_migrator
>>>>   0/ 1 buffer
>>>>   0/ 1 timer
>>>>   0/ 1 filer
>>>>   0/ 1 striper
>>>>   0/ 1 objecter
>>>>   0/ 5 rados
>>>>   0/ 5 rbd
>>>>   0/ 5 journaler
>>>>   0/ 5 objectcacher
>>>>   0/ 5 client
>>>>   0/ 5 osd
>>>>   0/ 5 optracker
>>>>   0/ 5 objclass
>>>>   1/ 3 filestore
>>>>   1/ 3 journal
>>>>   0/ 5 ms
>>>>   1/ 5 mon
>>>>   0/10 monc
>>>>   1/ 5 paxos
>>>>   0/ 5 tp
>>>>   1/ 5 auth
>>>>   1/ 5 crypto
>>>>   1/ 1 finisher
>>>>   1/ 5 heartbeatmap
>>>>   1/ 5 perfcounter
>>>>   1/ 5 rgw
>>>>   1/ 5 hadoop
>>>>   1/ 5 javaclient
>>>>   1/ 5 asok
>>>>   1/ 1 throttle
>>>>  -2/-2 (syslog threshold)
>>>>  -1/-1 (stderr threshold)
>>>>  max_recent     10000
>>>>  max_new         1000
>>>>  log_file /var/log/ceph/ceph-osd.2.log
>>>> 
>>>> --
>>>> Philipp Strobl
>>>> http://www.pilarkto.net
>>>> 
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> ceph-users@xxxxxxxxxxxxxx
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> 
>> 
>> --
>> Philipp v. Strobl-Albeg
>> Dipl.-Ing.
>> 
>> Zellerstr. 19
>> 70180 Stuttgart
>> 
>> Tel   +49 711 121 58269
>> Mobil +49 151 270 39710
>> Fax   +49 711 658 3089
>> 
>> http://www.pilarkto.net
>> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



