On Mon, 15 Mar 2021 01:49:19 +0900, Johannes Berg wrote:
>
> On Wed, 2021-01-20 at 11:27 +0900, Hajime Tazaki wrote:
> >
> > +++ b/arch/um/lkl/include/asm/atomic.h
> > @@ -0,0 +1,11 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +#ifndef __UM_LIBMODE_ATOMIC_H
> > +#define __UM_LIBMODE_ATOMIC_H
> > +
> > +#include <asm-generic/atomic.h>
> > +
> > +#ifndef CONFIG_GENERIC_ATOMIC64
> > +#include "atomic64.h"
> > +#endif /* !CONFIG_GENERIC_ATOMIC64 */
>
> That's not actually configurable, is it? When is this on or off?

This is off when CONFIG_64BIT is off, because in that case (!64BIT) we
use the kernel's generic atomic64 facility.

> > +static inline void cpu_relax(void)
> > +{
> > +	unsigned long flags;
> > +
> > +	/* since this is usually called in a tight loop waiting for some
> > +	 * external condition (e.g. jiffies) lets run interrupts now to allow
> > +	 * the external condition to propagate
> > +	 */
> > +	local_irq_save(flags);
> > +	local_irq_restore(flags);
>
> Hmm?
>
> If interrupts are enabled, then you'll get them anyway, and if not, then
> this does nothing?

You might be right; the original LKL performs a yield-like operation
when this function is called, but that might no longer be the case
after the integration with UML. I will investigate this and fix it.

> > +static inline void arch_copy_thread(struct arch_thread *from,
> > +				    struct arch_thread *to)
> > +{
> > +	panic("unimplemented %s: fork isn't supported yet", __func__);
> > +}
>
> "yet" seems kind of misleading - I mean, it really *won't* be, since
> that's basically what UML is, no?
>
> I mean, in principle, nothing actually stops you from building a
> linux.so instead of linux binary, and having main() renamed to
> linux_main() so the application can start up whenever it needs to, or
> something like that?

I commented on this in the reply thread of the [00/20] patch.
> > diff --git a/arch/um/lkl/include/uapi/asm/bitsperlong.h b/arch/um/lkl/include/uapi/asm/bitsperlong.h
> > new file mode 100644
> > index 000000000000..f6657ba0ff9b
> > --- /dev/null
> > +++ b/arch/um/lkl/include/uapi/asm/bitsperlong.h
> > @@ -0,0 +1,13 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +#ifndef __UM_LIBMODE_UAPI_BITSPERLONG_H
> > +#define __UM_LIBMODE_UAPI_BITSPERLONG_H
> > +
> > +#ifdef CONFIG_64BIT
> > +#define __BITS_PER_LONG 64
> > +#else
> > +#define __BITS_PER_LONG 32
> > +#endif
> > +
> > +#include <asm-generic/bitsperlong.h>
>
> First, that include does nothing.

We followed the approach of arch/x86 (and other archs) here. This line
actually includes include/asm-generic/bitsperlong.h, which does do
something, I think.

> Second, the comment there says:
>
> /*
>  * There seems to be no way of detecting this automatically from user
>  * space, so 64 bit architectures should override this in their
>  * bitsperlong.h. In particular, an architecture that supports
>  * both 32 and 64 bit user space must not rely on CONFIG_64BIT
>  * to decide it, but rather check a compiler provided macro.
>  */
>
> So you're doing exactly what it says *not* to?
>
> In fact, that totally makes sense - I mean, this is *uapi* and
> applications don't have access to CONFIG_ defines etc.

Sorry, this patch is wrong. There is an additional diff in [10/20]
which should have been part of this patch; it eliminates the reliance
on CONFIG_*. I will fix this.

> Do you expose that somehow to LKL users that makes this OK-ish? But it's
> still very confusing, and you've not made sure the necessary header is
> actually included. IIUC, lkl builds a (shared?) library, so it won't
> really work this way?
> >
> > +++ b/arch/um/lkl/include/uapi/asm/byteorder.h
> > @@ -0,0 +1,11 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +#ifndef __UM_LIBMODE_UAPI_BYTEORDER_H
> > +#define __UM_LIBMODE_UAPI_BYTEORDER_H
> > +
> > +#if defined(CONFIG_BIG_ENDIAN)
> > +#include <linux/byteorder/big_endian.h>
>
> Same here.

Same problem here; we should not have split this between this patch
and [10/20]. Will fix this too.

> > +++ b/arch/um/lkl/um/delay.c
> > @@ -0,0 +1,31 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +#include <linux/jiffies.h>
> > +#include <linux/delay.h>
> > +#include <os.h>
> > +
> > +void __ndelay(unsigned long nsecs)
> > +{
> > +	long long start = os_nsecs();
> > +
> > +	while (os_nsecs() < start + nsecs)
> > +		;
>
> At least do something like cpu_relax()? Although ... you made that do
> nothing, so you probably want an arch-specific thing here?

I agree this isn't efficient at all, but it is portable enough, which
is the (sub)arch-specific requirement here.

-- 
Hajime