logrotate

Hi Sage,

I am facing the same problem:
ls -l /var/log/ceph/
total 54280
-rw-r--r-- 1 root root        0 Jul 17 06:39 ceph-osd.0.log
-rw-r--r-- 1 root root 19603037 Jul 16 19:01 ceph-osd.0.log.1.gz
-rw-r--r-- 1 root root        0 Jul 17 06:39 ceph-osd.1.log
-rw-r--r-- 1 root root 18008247 Jul 16 19:01 ceph-osd.1.log.1.gz
-rw-r--r-- 1 root root        0 Jul 17 06:39 ceph-osd.2.log
-rw-r--r-- 1 root root 17969054 Jul 16 19:01 ceph-osd.2.log.1.gz

Because of this, I lost logs until I restarted the OSDs.
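
For what it's worth, the Ceph daemons reopen their log files on SIGHUP (that is what
the logrotate postrotate step relies on), so it should be possible to get logging back
without a full restart. A rough sketch, untested, assuming default daemon names and
the default admin socket location:

    # ask every ceph-osd on this node to reopen its log file
    pkill -HUP -x ceph-osd

    # or per daemon, via the admin socket (if your version exposes it)
    ceph daemon osd.0 log reopen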

thanks
Sahana Lokeshappa
Test Development Engineer I

3rd Floor, Bagmane Laurel, Bagmane Tech Park
C V Raman nagar, Bangalore 560093
T: +918042422283
Sahana.Lokeshappa at SanDisk.com

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Uwe Grohnwaldt
Sent: Sunday, July 13, 2014 7:10 AM
To: ceph-users at ceph.com
Subject: Re: logrotate

Hi,

We are observing the same problem: after logrotate runs, the new logfile is empty
and the old logfiles show up as deleted in lsof. At the moment we are restarting
the OSDs on a regular basis as a workaround.
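
For reference, a quick way to confirm the symptom is to list open files whose link
count has dropped to zero; the OSDs still holding the rotated-away logs show up
there. A sketch (lsof option support and output formatting vary a bit between
versions):

    # open-but-deleted files still held by OSD processes
    lsof -nP +L1 | grep ceph-osd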

Uwe

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces at lists.ceph.com] On Behalf
> Of James Eckersall
> Sent: Freitag, 11. Juli 2014 17:06
> To: Sage Weil
> Cc: ceph-users at ceph.com
> Subject: Re: [ceph-users] logrotate
>
> Hi Sage,
>
> Many thanks for the info.
> I have inherited this cluster, but I believe it may have been created
> with mkcephfs rather than ceph-deploy.
>
> I'll touch the done files and see what happens.  Looking at the logic
> in the logrotate script, I'm sure this will resolve the problem.
>
> Thanks
>
> J
>
>
> On 11 July 2014 15:04, Sage Weil <sweil at redhat.com> wrote:
>
>
>       On Fri, 11 Jul 2014, James Eckersall wrote:
>       > Upon further investigation, it looks like this part of the ceph
>       > logrotate script is causing the problem for me:
>       >
>       > if [ -e "/var/lib/ceph/$daemon/$f/done" ] && [ -e
>       > "/var/lib/ceph/$daemon/$f/upstart" ] && [ ! -e
>       > "/var/lib/ceph/$daemon/$f/sysvinit" ]; then
>       >
>       > I don't have a "done" file in the mounted directory for any of my
>       > OSDs.  My mons all have the done file and logrotate is working fine
>       > for those.
>
>
>       Was this cluster created a while ago with mkcephfs?
>
>
>       > So my question is: what is the purpose of the "done" file, and
>       > should I just create one for each of my OSDs?
>
>
>       It's used by the newer ceph-disk stuff to indicate whether the OSD
>       directory is properly 'prepared' and whether the startup stuff should
>       pay attention.
>
>       If these are active OSDs, yeah, just touch 'done'.  (Don't touch
>       sysvinit, though, if you are enumerating the daemons in ceph.conf
>       with host = foo lines.)
>
>       sage
>
>
>
>       >
>       >
>       >
>       > On 10 July 2014 11:10, James Eckersall <james.eckersall at gmail.com> wrote:
>       >       Hi,
>       > I've just upgraded a ceph cluster from Ubuntu 12.04 with 0.73.1 to
>       > Ubuntu 14.04 with 0.80.1.
>       >
>       > I've noticed that the log rotation doesn't appear to work correctly.
>       > The OSDs are just not logging to the current ceph-osd.X.log file.
>       > If I restart the OSDs, they start logging, but then overnight, they
>       > stop logging when the logs are rotated.
>       >
>       > Has anyone else noticed a problem with this?
>       >
>       >
>       >
>       >
>
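
For anyone else landing on this thread: per Sage's suggestion above, creating the
missing 'done' marker in each OSD data directory is enough for the logrotate
condition to match again. A sketch, assuming the default /var/lib/ceph/osd/ceph-N
layout, that the 'upstart' marker the script also checks is already present, and
deliberately not creating 'sysvinit' (see the caveat above):

    for dir in /var/lib/ceph/osd/ceph-*; do
        [ -e "$dir/done" ] || touch "$dir/done"
    done

After that, restart the OSDs once (or send them a SIGHUP) so they start writing to
the current log files again.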



