Re: Best osd scenario + ansible config?

Wed, 4 Sep 2019 11:11:14 +0200
Yoann Moulin <yoann.moulin@xxxxxxx> ==> ceph-users@xxxxxxx :
> > On 04/09/2019 at 11:01, Lars Täuber wrote:
> > Wed, 4 Sep 2019 10:32:56 +0200
> > Yoann Moulin <yoann.moulin@xxxxxxx> ==> ceph-users@xxxxxxx :  
> >> Hello,
> >>  
> >>> Tue, 3 Sep 2019 11:28:20 +0200
> >>> Yoann Moulin <yoann.moulin@xxxxxxx> ==> ceph-users@xxxxxxx :    
> >>>> Is it better to put all WALs on one SSD and all DBs on the other one? Or put the WAL and DB of the first 5 OSDs on the first SSD and those of
> >>>> the other 5 on the second one?
> >>>
> >>> I don't know if this has a relevant impact on the latency/speed of the Ceph system, but we use LVM on top of a SW RAID 1 over two SSDs and put WAL & DB on that RAID 1.
> >>
> >> What is the recommended size for WAL and DB in my case?
> >>
> >> I have :
> >>
> >> 10x 6TB Disk OSDs (data)
> >>  2x 480G SSD
> >>
> >> Best,  
> > 
> > I'm still unsure about the size of the block.db and the WAL.
> > This seems to be relevant:
> > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-May/035086.html
> > 
> > But it is also said that the pure WAL needs just 1 GB of space.
> > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-August/036509.html
> > 
> > So the conclusion would be to use 2*X (DB) + 1 GB (WAL) if you put both on the same partition/LV.
> > With X being one of 3 GB, 30 GB or 300 GB.
> > 
> > You have 10 OSDs. That means you should have 10 partitions/LVs for DBs & WALs.  
> 
> So I don't have enough space on the SSDs to do RAID 1; I must use 1 SSD for 5 disks.
> 
> 5x64GB + 5x2GB should be good, shouldn't it?

I'd put both (DB and WAL) on one LV/partition per OSD.
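
Rough numbers for your setup (back-of-the-envelope, assuming the SSDs are dedicated to DB/WAL): 5 OSDs per 480 GB SSD gives about 480 GB / 5 ≈ 96 GB per OSD, so one ~90 GB LV per OSD for DB + WAL combined stays well above the 30 GB + 1 GB step from the sizing discussion above. Your 64 GB + 2 GB split would also fit, just as two LVs per OSD instead of one.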


> 
> And I still don't know whether the ceph-ansible playbook can manage the LVM setup or whether I need to prepare all the VGs and LVs beforehand.
> 

I did this manually before running the ansible-playbook. My host_vars look like this; a sketch of the manual LVM steps follows below.

host_vars/host3.yml
lvm_volumes:
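  # one entry per OSD: 'data' is the data disk, 'db' names the LV that
  # will hold block.db (the WAL goes onto the same LV when no separate
  # 'wal'/'wal_vg' is given), and 'db_vg' names the VG on the SSD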
  - data: /dev/sdb
    db: '1'
    db_vg: host-3-db
  - data: /dev/sdc
    db: '2'
    db_vg: host-3-db
  - data: /dev/sde
    db: '3'
    db_vg: host-3-db
  - data: /dev/sdf
    db: '4'
    db_vg: host-3-db
…
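
The manual LVM preparation for that boils down to something like this (only a sketch, not to be copied verbatim: /dev/sda as the SSD device and the 90G LV size are assumptions you will have to adapt; the VG/LV names just match the host_vars above):

pvcreate /dev/sda
vgcreate host-3-db /dev/sda
lvcreate -n 1 -L 90G host-3-db
lvcreate -n 2 -L 90G host-3-db
# ... one LV per OSD that gets its DB/WAL on this SSD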


Lars
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



