On Tue, Apr 27, 2021 at 03:39:15PM +0530, Shreeya Patel wrote:
> > > Hence, make UTF-8 encoding loadable by converting it into a module and
> > > also add built-in UTF-8 support option for compiling it into the
> > > kernel whenever required by the filesystem.
>
> > The way this is implemented looks rather awkward.

I think what's a bit awkward is trying to create an abstraction separation between the unicode and utf8 layers, just in case, at some point, we want fs/unicode to support more than just utf8. I think we're better off being opinionated here, and saying that the only Unicode encoding that will be supported by the kernel is UTF-8. Period. In which case, we don't need to try to insert this unneeded abstraction layer.

If you really want to make fs/unicode support more than one encoding --- say, UTF-16LE, as used by NTFS --- at that point we can think about what the abstractions should look like. For example, it doesn't _actually_ make sense for the data-trie structures to be part of the utf-8 encoding. The normalization tables are for Unicode, and it wouldn't make sense for UTF-16 to have its own normalization tables, bloating the kernel even more. It *is* true that the normalization tables have been optimized for utf-8, because that's what the whole world actually uses; utf-16le is really a legacy use case. So presumably, we would find a way to code up the utf-16 functions so that they use the utf-8 data tables, even if that wasn't 100% optimal in terms of speed. But it's probably not worth it at this point.

> > Given that the large memory usage is for a data table and not for code,
> > why not treat it as a firmware blob and load it using request_firmware?
>
> utf8 module not just has the data table but also has some kernel code.
> The big part that we are trying to keep out of the kernel is a tree
> structure that gets traversed based on a key that is the file name.
> This is done when issuing a lookup in the filesystem, which has to be
> very fast. So maybe it would not be so good to use request_firmware
> for such a core feature.

Speed really isn't a great argument here; the request_firmware call is something that would only need to be done once, when a file system which requires Unicode normalization and/or case-folding is mounted.

I think the better argument to make is just one of simplicity; separating the Unicode data table from the kernel adds complexity. It also reduces flexibility, since for use cases where it's actually _preferable_ to have Unicode functionality permanently built into the kernel, we now force the use of some kind of initial ramdisk to load a module before the root file system (which might require Unicode support) can even be mounted.

The argument *for* making the Unicode table a loadable firmware blob is that it might make it possible to upgrade to a newer version of Unicode without needing to do a kernel recompile. On average, Unicode releases a new version every year or so to support new character sets, or whenever there's a new Japanese Emperor requiring a new reign name :-). Usually the new character sets are for obscure ancient alphabets, and so it's really not a big deal if the kernel doesn't support, say, Chorasmian[1] or Dives Akuru[2]. Perhaps people would make a much bigger deal about new Emoji characters, or new code points for the Creative Commons symbols. I'm personally not excited enough to claim that it's worth the extra complexity, but some people might think so. :-)

[1] Used in Central Asia across Uzbekistan, Kazakhstan, and Turkmenistan to write an extinct Eastern Iranian language.
[2] Historically used in the Maldives until the 20th century.

Of course, using those new Emoji symbols in file names would reduce the portability of that file system if strict normalization was mandated.
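To illustrate what that lookup actually involves, here is a rough userspace sketch in Python, with unicodedata standing in for the kernel's utf8 tables; the fs_key helper and the "inode 42" value are hypothetical, purely for illustration. The point is that the name must be normalized (and, for casefolding filesystems, casefolded) before comparison, so that precomposed and decomposed spellings of the same name find the same directory entry:

```python
import unicodedata

def fs_key(name: str) -> str:
    # Hypothetical stand-in for what a normalizing, casefolding
    # filesystem does on each lookup: normalize the name, then
    # casefold it, and compare using the result.
    return unicodedata.normalize("NFD", name).casefold()

# A toy "directory": one entry, keyed by its normalized+casefolded name.
directory = {fs_key("Café"): "inode 42"}

# Precomposed lowercase (U+00E9) and decomposed uppercase (E + U+0301)
# are different byte sequences, but both resolve to the same entry.
assert directory[fs_key("café")] == "inode 42"
assert directory[fs_key("CAFE\u0301")] == "inode 42"
```

This per-lookup transformation is why the data tables are on the hot path; but the table *load* still happens only once, at mount time.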
Fortunately, ext4 and f2fs don't enable strict normalization by default, which is also good, because it means that if we don't have the latest Unicode update in the kernel, it doesn't really matter that much.... Again, not worth the extra complexity/headache IMHO.

Cheers,

- Ted