On Wed, Mar 5, 2014 at 2:48 AM, Sandro "red" Mathys <red@xxxxxxxxxxxxxxxxx> wrote:
> So, in our case hardware drivers are rather unnecessary and the kernel
> experts might know other ways to shrink the footprint for our limited
> use cases. The kernel we require supports all primary architectures
> (i686 and x86_64 right now, ARM likely later) and is able to run on a
> hypervisor (primarily KVM and Xen, but support for ESXi and Hyper-V is
> becoming crucial in private clouds). Only. No bare metal. We also
> require support for Linux Containers (LXC) to enable the use of
> Docker.
>
> Now, I heard some people screaming when I said HW drivers are
> unnecessary. Some people will want to make use of PCI passthrough, and
> while I think we don't want to ship the necessary modules by default,
> they should be easily installable through a separate package. If some
> more granularity is acceptable (and makes sense), I think one package
> for SR-IOV NICs, one for graphics cards (to enable mathematical
> computation stuff), and one for everything else PCI passthrough (plus
> one for all other HW drivers for the non-cloud products) would be
> totally nice.

I'm not overly thrilled with having multi-tiered driver packages. That
leads to major headaches when we start shuffling things around from one
driver package to another.

The current solution I have prototyped is a kernel-core/kernel-drivers
split. Here "core" could be analogous to "cloud", but I imagine it will
be useful for other things. People wanting to do PCI passthrough can
just install the kernel-drivers package.

> What does everybody think about this? Can it be done? How is it best
> done? What's the timeframe (we'd really like to see this implemented
> in F21 Beta, but obviously the earlier modular kernels can be tested,
> the better)? Do you require additional input?

Same questions I asked Sam and Matt:

- Do you need/want a firewall (requires iptables, etc.)?
- Do you need/want NFS or other cloudy storage things (for gluster?)?
- Do you need/want openvswitch?

The list of modules I have in my local rawhide KVM guest is below. The
snd_* related drivers probably aren't necessary. btrfs and the things it
depends on can be ignored (unless you plan on switching from ext4).
Anything that has "table" or "nf" in the module name is for the
firewall. Matt already provided a much smaller module list for OpenStack
and EC2, but I'm guessing we want to target the broadest use case.

Think about it and let me know.

josh

[jwboyer@localhost ~]$ lsmod
Module                  Size  Used by
nls_utf8               12557  1
isofs                  39794  1
uinput                 17708  1
bnep                   19735  2
bluetooth             445507  5 bnep
6lowpan_iphc           18591  1 bluetooth
fuse                   91190  3
ip6t_rpfilter          12546  1
ip6t_REJECT            12939  2
xt_conntrack           12760  9
cfg80211              583354  0
rfkill                 22195  4 cfg80211,bluetooth
ebtable_nat            12807  0
ebtable_broute         12731  0
bridge                135391  1 ebtable_broute
stp                    12946  1 bridge
llc                    14092  2 stp,bridge
ebtable_filter         12827  0
ebtables               30833  3 ebtable_broute,ebtable_nat,ebtable_filter
ip6table_nat           13015  1
nf_conntrack_ipv6      18777  6
nf_defrag_ipv6        100248  1 nf_conntrack_ipv6
nf_nat_ipv6            13213  1 ip6table_nat
ip6table_mangle        12700  1
ip6table_security      12710  1
ip6table_raw           12683  1
ip6table_filter        12815  1
ip6_tables             26809  5 ip6table_filter,ip6table_mangle,ip6table_security,ip6table_nat,ip6table_raw
iptable_nat            13011  1
nf_conntrack_ipv4      18791  5
nf_defrag_ipv4         12702  1 nf_conntrack_ipv4
nf_nat_ipv4            13199  1 iptable_nat
nf_nat                 25249  4 nf_nat_ipv4,nf_nat_ipv6,ip6table_nat,iptable_nat
nf_conntrack          110550  8 nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,ip6table_nat,iptable_nat,nf_conntrack_ipv4,nf_conntrack_ipv6
iptable_mangle         12695  1
iptable_security       12705  1
iptable_raw            12678  1
snd_hda_codec_generic  66943  1
ppdev                  17635  0
snd_hda_intel          56588  4
snd_hda_codec         133858  2 snd_hda_codec_generic,snd_hda_intel
snd_hwdep              17650  1 snd_hda_codec
snd_seq                65180  0
snd_seq_device         14136  1 snd_seq
crct10dif_pclmul       14250  0
crc32_pclmul           13113  0
crc32c_intel           22079  0
snd_pcm               103502  2 snd_hda_codec,snd_hda_intel
ghash_clmulni_intel    13259  0
microcode             216608  0
virtio_console         28109  1
serio_raw              13413  0
snd_timer              28806  2 snd_pcm,snd_seq
virtio_balloon         13530  0
snd                    83790  16 snd_hwdep,snd_timer,snd_pcm,snd_seq,snd_hda_codec_generic,snd_hda_codec,snd_hda_intel,snd_seq_device
soundcore              14491  1 snd
parport_pc             28048  0
parport                40605  2 ppdev,parport_pc
pvpanic                12801  0
i2c_piix4              22155  0
btrfs                 940276  1
xor                    21366  1 btrfs
raid6_pq              101472  1 btrfs
qxl                    74078  2
drm_kms_helper         50413  1 qxl
virtio_pci             17713  0
8139too                33711  0
virtio_ring            20004  3 virtio_pci,virtio_balloon,virtio_console
virtio                 14172  3 virtio_pci,virtio_balloon,virtio_console
ttm                    85373  1 qxl
8139cp                 32036  0
mii                    13527  2 8139cp,8139too
ata_generic            12910  0
drm                   288814  4 qxl,ttm,drm_kms_helper
i2c_core               38734  3 drm,i2c_piix4,drm_kms_helper
pata_acpi              13038  0

_______________________________________________
cloud mailing list
cloud@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/cloud
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct
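For anyone who wants to do the same triage on their own guest, here's a
quick-and-dirty sketch (illustration only, not part of the
kernel-core/kernel-drivers prototype) that buckets lsmod output using
the rough rules from the mail above: snd* is sound, anything with
"table" or "nf" in the name is firewall, and btrfs plus its xor and
raid6_pq helpers is btrfs. The function name classify_modules is made
up for this example.

```shell
#!/bin/sh
# Sketch: bucket lsmod-style output ("Module Size Used by" lines on
# stdin) into the rough categories discussed in this thread.
classify_modules() {
    awk 'NR > 1 {
        name = $1
        if (name ~ /^snd/ || name == "soundcore")
            bucket = "sound"
        else if (name ~ /table/ || name ~ /nf/)
            bucket = "firewall"
        else if (name == "btrfs" || name == "xor" || name == "raid6_pq")
            bucket = "btrfs"
        else
            bucket = "other"
        count[bucket]++
    }
    END { for (b in count) printf "%s: %d\n", b, count[b] }'
}

# Demo on a few lines taken from the listing above; on a live system
# you would run:  lsmod | classify_modules
classify_modules <<'EOF' | sort
Module Size Used by
snd_pcm 103502 2
iptable_nat 13011 1
nf_nat 25249 4
btrfs 940276 1
virtio_pci 17713 0
EOF
# Prints (sorted): btrfs: 1, firewall: 2, other: 1, sound: 1
```

The name-matching rules are approximate (e.g. ip6t_REJECT carries
neither "table" nor "nf" but is still firewall-related), so treat the
counts as ballpark numbers, not an authoritative split.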