Hi Marcus,

Please refer to the documentation:
http://docs.ceph.com/docs/master/rados/operations/crush-map/#editing-a-crush-map

I believe your suggestion only modifies the in-memory map and you never get a changed version written to the outfile. It could easily be tested by decompiling the new version and checking the clear-text output, but why not just do as the documentation suggests? (A rough sketch of that workflow, and of how to verify the outfile, is appended below the quoted thread.)

But you really should just apply one of the default, sane, usable tunables profiles (which one depends on your kernel versions and your clients) and NOT create your own settings. Allow the cluster to remap after the new profile has been applied, then change the CRUSH weights to correct values before you attempt any customization of individual tunables.

Regards,
Jens Dueholm Christensen
Rambøll Survey IT

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Marcus Müller
Sent: Wednesday, January 11, 2017 2:50 PM
To: Shinobu Kinjo
Cc: Ceph Users
Subject: Re: PGs stuck active+remapped and osds lose data?!

Yes, but all I want to know is whether my way of changing the tunables is right or not?

> On 11.01.2017 at 13:11, Shinobu Kinjo <skinjo@xxxxxxxxxx> wrote:
>
> Please refer to Jens's message.
>
> Regards,
>
>> On Wed, Jan 11, 2017 at 8:53 PM, Marcus Müller <mueller.marcus@xxxxxxxxx> wrote:
>> Ok, thank you. I thought I had to set Ceph to a tunables profile. If I'm right, then I just have to export the current CRUSH map, edit it and import it again, like:
>>
>> ceph osd getcrushmap -o /tmp/crush
>> crushtool -i /tmp/crush --set-choose-total-tries 100 -o /tmp/crush.new
>> ceph osd setcrushmap -i /tmp/crush.new
>>
>> Is this right or not?
>>
>> I started this cluster with these 3 nodes, each with 3 OSDs. They are VMs. I knew that this cluster would grow very large; that's the reason I chose Ceph. Now I can't add more HDDs to the VM hypervisor, and I want to separate the nodes physically too. I bought a new node with these 4 drives and now another node with only 2 drives. As I now hear from several people, this was not a good idea. For this reason, I have bought additional HDDs for the new node, so I now have two nodes with the same number and size of HDDs. In the next 1-2 months I will get the third physical node and then everything should be fine. But at this time I have no other option.
>>
>> Might it help to solve this problem if I add the 2 new HDDs to the new Ceph node?
>>
>>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
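
PS - a rough sketch of the edit cycle the documentation describes, using example filenames under /tmp (adjust to taste):

  # export the current compiled CRUSH map and decompile it to clear text
  ceph osd getcrushmap -o /tmp/crush
  crushtool -d /tmp/crush -o /tmp/crush.txt

  # edit /tmp/crush.txt (tunables, rules, weights), then recompile and inject it
  crushtool -c /tmp/crush.txt -o /tmp/crush.new
  ceph osd setcrushmap -i /tmp/crush.new

To check whether a crushtool option such as --set-choose-total-tries actually ended up in the outfile, decompile both versions and diff them:

  crushtool -d /tmp/crush -o /tmp/crush.txt
  crushtool -d /tmp/crush.new -o /tmp/crush.new.txt
  diff /tmp/crush.txt /tmp/crush.new.txt

And for the tunables themselves it is usually enough to let Ceph apply a predefined profile instead of hand-editing the map - for example (pick the profile your kernels and clients actually support):

  # show the values currently in effect
  ceph osd crush show-tunables

  # apply a predefined profile, e.g. firefly or hammer
  ceph osd crush tunables hammer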