Re: libnuma / ceph

On Sat, 9 May 2015, Kyle Bader wrote:
> It looks like the NUMA problem [1] could easily be solved by:
> 
> #include <numa.h> in ceph-osd.cc
> check that numa_available() succeeds (it returns -1 if NUMA is unsupported)
> read the node number from /var/lib/ceph/ceph-$id/numa_node
> make sure the value is an int
> make sure that NUMA node actually exists
> numa_bind() the process to that node
> 
> Then that OSD will only use the CPU cores and memory in that NUMA
> node. This could also be handy for converging ceph-mon/rgw/etc. with
> OSDs.
> 
> [mon.a]
> numa_node = 1
> 
> ceph-mon numa_bind(1)
> 
> [rgw.a]
> numa_node = 1
> 
> radosgw numa_bind(1)

Hmm!

So, numa_node default would probably be -1.  What about 
a zone_reclaim_mode option?  Or should we just do a warning?
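[Editor's note: the "just do a warning" option could be a start-up check along these lines; the message text is an assumption.]

```shell
# Warn at daemon start-up if zone reclaim is enabled; a nonzero
# vm.zone_reclaim_mode tends to hurt page-cache-heavy daemons like
# ceph-osd. Falls back to 0 if the sysctl file is absent.
mode=$(cat /proc/sys/vm/zone_reclaim_mode 2>/dev/null || echo 0)
if [ "$mode" -ne 0 ]; then
    echo "warning: vm.zone_reclaim_mode is $mode; 0 is recommended for OSD nodes" >&2
fi
```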

sage

> 
> [1] https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg16274.html
> 
> -- 
> Kyle Bader - Red Hat
> Senior Solution Architect
> Ceph Storage Architectures
> 
> 