Re: Logging

> > This is really interesting.  This is on the partitions that have _just_ 
> > the OSD data? 
> 
> Yes, with a couple of extra layers. node01 keeps its OSD data on an ext4
> filesystem on top of a dm-crypt encrypted native disk partition. node02
> on the other hand has an mdadm RAID0 of two partitions on separate disks
> with dm-crypt and ext4 on top of that. This layering - in particular the
> encryption - consumes CPU and can slow things down, but otherwise it's
> rock-solid; I've been running systems with this kind of setup for years
> and never once had a problem with them.
> 
> Here's an example from this morning:
> 
> node01:
> /dev/mapper/sda6        232003      5914    212830   3% /mnt/osd
> 
> node02:
> /dev/mapper/md4         225716      5704    207112   3% /mnt/osd
> 
> client:
> 192.168.178.100:6789:/
>                         232002      5913    212829   3% /mnt/n01

Oh... I suspect that only one of the OSDs is active.  The ceph client's 
df/statfs result is really just a sum over the statfs results on all of 
the OSDs.  The fact that it corresponds to node01 suggests that node02 
isn't part of the cluster.  Can you post the output from

	ceph osd dump -o -
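
For comparison: if both OSDs were reporting in, the client's total would be
roughly the sum of the two, 232003 + 225716 = 457719 1K-blocks, rather than
the 232002 you're seeing.  A quick way to cross-check the cluster state (the
exact commands and output format depend on your ceph version, so treat this
as a sketch) is

	ceph -s
	ceph osd stat

which should report how many osds are known and how many are up/in.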

> At this point I unmounted ceph on the client and restarted ceph. A few minutes
> later I see this:
> 
> node01:
> /dev/mapper/sda6        232003      5907    212837   3% /mnt/osd
> 
> node02:
> /dev/mapper/md4         225716      5626    207190   3% /mnt/osd
> 
> Note how disk usage went down on both nodes, considerably on node02.
> 
> Then they start exchanging data and an hour later or so they're back in sync:
> 
> node01:
> /dev/mapper/sda6        232003      5906    212838   3% /mnt/osd
> 
> node02:
> /dev/mapper/md4         225716      5906    206910   3% /mnt/osd

I wouldn't read too much into the disk utilizations.  There is logging 
going on at a couple of different levels that can make the utilization 
fluctuate depending on the timing of trimming.
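
If you do want to keep an eye on it anyway, something simple like

	watch -n 60 df -k /mnt/osd

on each node (standard tools only; the 60-second interval is arbitrary and
/mnt/osd is the mount point from your df output) will show you how the
numbers move over time.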

> > Do you see any OSD flapping (down/up cycles) during this 
> > period?
> 
> I've been running without logs since yesterday, but my experience is that
> they don't flap; once an OSD goes down it stays down until ceph is restarted.

Also, one thing you should do during these tests is keep a 

	ceph -w

running to monitor changes in the cluster state (to see, for example, if 
it's marking either osd down).
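
If you want a record you can look back at afterwards, a plain shell pipeline
works fine, e.g.

	ceph -w | tee ceph-watch.log

(ceph-watch.log is just an arbitrary file name; tee keeps printing to the
terminal while saving a copy).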

Thanks!
sage

