Re: Clarification on common tasks


 



On 11/08/2016 7:13 PM, Gandalf Corvotempesta wrote:
1) kill the brick process (how can I tell which is the brick process
to kill)?


glusterfsd is the brick process.

Also "gluster volume status" lists the PIDs of all the brick processes.
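As a sketch (the hostname, brick path, port columns and PID below are made-up examples, not output from a real cluster), the PID is the last column of the brick's line in the status output, so it can be pulled out with awk:

```shell
# Hypothetical single line of 'gluster volume status' output; in practice
# you would pipe the real command instead, e.g.:
#   gluster volume status <VOLNAME> | grep '^Brick server1:/data/brick1'
status_line='Brick server1:/data/brick1    49152   0   Y   1234'

# The PID is the last field of the brick's status line.
pid=$(echo "$status_line" | awk '{print $NF}')
echo "$pid"    # the glusterfsd PID for that brick

# To stop that brick process you would then run (not executed here):
#   kill "$pid"
```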

2) unmount the brick, for example:
umount /dev/sdc

3) remove the failed disk

4) insert the new disk
5) create an XFS filesystem on the new disk
6) mount the new disk where the previous one was

Yes to all that.
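Put concretely, steps 2-6 might look like the following. The device name, mount point, and mkfs options are placeholders for your own setup, and the mkfs/mount commands are destructive, so don't run these blindly:

```shell
# 2) Unmount the failed brick's filesystem (mount point is an example).
umount /bricks/brick1

# 3)-4) Physically swap the failed disk for the new one, then
# 5) create an XFS filesystem on it (an inode size of 512 is commonly
#    suggested for Gluster bricks to leave room for extended attributes).
mkfs.xfs -i size=512 /dev/sdc

# 6) Mount the new disk at the same path as the old brick, so the brick
#    path recorded in the volume definition stays unchanged.
mount /dev/sdc /bricks/brick1
```

Mounting at the same path is the key point: it is why step 7 below is unnecessary.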


7) add the new brick to the gluster. How ?

No need. New brick is mounted where the old one was.

8) run "gluster v start <VOLNAME> force".

Yes.

Why would I need step 8? If the volume is already started and
working (remember that I would like to change the disk with no downtime,
so I can't stop the volume), why should I "start" it again?


This forces a restart of the glusterfsd process you killed earlier.

Next you do a :

  "gluster volume heal <VOLUME NAME> full"

That causes the files on the other bricks to be healed to the new brick.
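So the tail end of the procedure, with <VOLNAME> standing in for your volume name, would be roughly:

```shell
# Restart the brick process that was killed earlier. "force" starts the
# missing brick without interrupting the already-running volume.
gluster volume start <VOLNAME> force

# Trigger a full self-heal so the data on the surviving replicas is
# copied onto the empty new brick.
gluster volume heal <VOLNAME> full

# Optionally, watch the heal progress:
gluster volume heal <VOLNAME> info
```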



B) let's assume I would like to add a bunch of new bricks on existing
servers. What is the proper procedure to do so?

Different process altogether.


Ceph has a good documentation page where some common tasks are explained:
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/
I've not found anything similar in Gluster.


That would be good.


--
Lindsay Mathieson

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users


