Re: Old OSDs on new host, treated as new?

Hi,
perhaps a stupid question, but why do you change the hostname?

I haven't tried it, but I guess if you boot the node with a new hostname, the old hostname is still in the CRUSH map, but without any OSDs - because they are now under the new host.
I don't know (I guess not) whether the degradation level also stays at 5% if you delete the empty host from the CRUSH map.
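Untested on my side, but something along these lines should show whether the old host bucket is left behind empty, and let you remove it afterwards (the hostname is just a placeholder):

    # compare the CRUSH tree - the old host should show up without any OSDs
    ceph osd tree

    # once the old host bucket is empty, it can be removed from the CRUSH map
    ceph osd crush remove <old-hostname>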

I would simply use the same host config on a rebuilt host.
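If you really have to boot with a different hostname, there is also (again untested from my side) the option of telling the OSDs not to re-register themselves under the new host bucket when they start, e.g. in ceph.conf:

    [osd]
    # keep the CRUSH location already stored in the map instead of
    # updating it to the (new) hostname when the OSD daemon starts
    osd crush update on start = false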

Udo

On 03.12.2014 05:06, Indra Pramana wrote:
Dear all,

We have a Ceph cluster with several nodes, each containing 4-6 OSDs. We are running the OS off a USB drive to maximise the use of the drive bays for the OSDs, and so far everything is running fine.

Occasionally, the OS running on the USB drive would fail, and we would normally replace the drive with a similar pre-configured OS with Ceph installed, so when the new OS boots up, it automatically detects all the OSDs and starts them. This works fine without any issues.

However, the issue is with recovery. When one node goes down, all its OSDs go down and recovery starts to move the PG replicas on the affected OSDs to other available OSDs, causing the cluster to become degraded, say 5%, which is expected. However, when we boot up the failed node with a new OS and bring the OSDs back up, more PGs are scheduled for backfilling and, instead of decreasing, the degradation level shoots up again to, for example, 10%, and on some occasions it goes up to 19%.

In our experience, when one node goes down, the cluster degrades to 5% and recovery starts, but when we manage to bring the node back up (still with the same OS), the degradation level drops to below 1% and recovery completes much faster.

Why doesn't the same behaviour apply in the above situation? The OSD numbers are the same when the node boots up, and the CRUSH map weight values are also the same. Only the hostname is different.

Any advice / suggestions?

Looking forward to your reply, thank you.

Cheers.


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

