Re: Verifying the location of the wal

IIRC there is a command like

ceph osd metadata

where you should be able to find information like this.
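For example, a minimal sketch (the OSD id, paths, and values are illustrative; these metadata fields are present on Luminous and later):

ceph osd metadata 0 | grep bluefs
    "bluefs": "1",
    "bluefs_db_partition_path": "/dev/dm-1",
    "bluefs_dedicated_db": "1",
    "bluefs_dedicated_wal": "0",

If "bluefs_dedicated_wal" is "0" and no "bluefs_wal_partition_path" appears, the WAL shares the DB device (or the data device when no separate DB was given).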

- Mehmet

On 21 October 2018 19:39:58 CEST, Robert Stanford <rstanford8896@xxxxxxxxx> wrote:

 I did exactly this when creating my OSDs, and found that my total utilization is about the same as the sum of the utilization of the pools, plus (WAL size * number of OSDs).  So it looks like my WALs are actually sharing the OSD data devices.  But I'd like to be 100% sure, so I am seeking a way to find out.

On Sun, Oct 21, 2018 at 11:13 AM Serkan Çoban <cobanserkan@xxxxxxxxx> wrote:
The WAL and DB device will be the same if you use just the DB path during
OSD creation. I do not know how to verify this with ceph commands.
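For example, a hedged sketch (device paths are illustrative): an OSD created with only a DB device gets its WAL co-located there, because no separate --block.wal is given:

ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1

Only an explicit --block.wal would place the WAL elsewhere:

ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2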
On Sun, Oct 21, 2018 at 4:17 PM Robert Stanford <rstanford8896@xxxxxxxxx> wrote:
>
>
>  Thanks Serkan.  I am using --path instead of --dev (--dev won't work because I'm using VGs/LVs).  The output shows block and block.db, but nothing about block.wal.  How can I learn where my WAL lives?
>
>
>
>
> On Sun, Oct 21, 2018 at 12:43 AM Serkan Çoban <cobanserkan@xxxxxxxxx> wrote:
>>
>> ceph-bluestore-tool can show you the disk labels.
>> ceph-bluestore-tool show-label --dev /dev/sda1
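>> For example (illustrative output; exact fields vary by release), a
>> dedicated DB or WAL partition carries a matching label:
>> ceph-bluestore-tool show-label --dev /dev/nvme0n1p1
>> {
>>     "/dev/nvme0n1p1": {
>>         "osd_uuid": "...",
>>         "description": "bluefs db"
>>     }
>> }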
>> On Sun, Oct 21, 2018 at 1:29 AM Robert Stanford <rstanford8896@xxxxxxxxx> wrote:
>> >
>> >
>> >  An email from this list stated that the WAL would be created in the same place as the DB, if the DB were specified on the ceph-volume lvm create command line.  I followed those instructions and, like the other person writing to this list today, I was surprised to find that my cluster usage was higher than the total of the pools (higher by an amount equal to all my WAL sizes on each node combined).  This leads me to think my WAL actually is on the data disk and not on the SSD I specified for the DB.
>> >
>> >  How can I verify which disk the WAL is on, from the command line?  I've searched the net and not come up with anything.
>> >
>> >  Thanks and regards
>> >  R
>> >
