[RFC PATCH] Current status, suspend-to-disk support on ARM

Hi,

some status on making hibernation (suspend-to-disk) work on ARM.

I've simplified and updated the patch set again, to make the pieces obvious. The attached patch set also compiles cleanly when TuxOnIce is added on top.


Please don't take this as a formal patch submission; this is discussion material, and hence not currently based on any specific kernel revision.


The code essentially splits in two: a generic part supplying the glue code that the framework needs, and SoC-specific code doing the actual core state suspend/resume.

The generic bits do two things:

	* implement some glue the suspend-to-disk framework requires:

		- pfn_is_nosave
		- save/restore_processor_state
		- swsusp_arch_suspend/resume entrypoints

	* ARM assembly for the "guts" of swsusp_arch_suspend/resume

		- save/restore current regset, CPSR and svc stack/lr
		- page restore loop to copy the pageset back
		- redirect to SoC-specific code for core suspend/resume

Hopefully what's in there is actually agnostic enough to qualify as "ARM generic". This stuff is quite clean by now.
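For orientation, the arch-side contract that the framework expects looks roughly like this (a simplified sketch only; the real ARM implementations are in the attached arch/arm/kernel/cpu.c):

	/* Sketch of the swsusp arch hooks; see arch/arm/kernel/cpu.c below. */
	int pfn_is_nosave(unsigned long pfn);	/* is pfn inside __nosave_begin..__nosave_end? */
	int swsusp_arch_suspend(void);		/* save CPU state, then call swsusp_save() */
	int swsusp_arch_resume(void);		/* copy pagesets back, reload the saved CPU state */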

There's one ugly thing in this set - I've changed a generic kernel header, <linux/suspend.h>, to #define save/restore_processor_state() on ARM so that they only do preempt_disable/enable(). It's surprising that this isn't the default behaviour; all platforms need swsusp_arch_suspend/resume anyway, so why force the existence of _two_ arch-specific hooks?
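(A possibly cleaner alternative - just a sketch here, not part of the attached set - would be overridable __weak defaults in generic code, so no header #ifdef is needed:)

	/*
	 * Sketch only: generic default implementations, e.g. in
	 * kernel/power/, instead of the #ifdef CONFIG_ARM below.
	 * Architectures that need more can override these.
	 */
	void __weak save_processor_state(void)
	{
		preempt_disable();
	}

	void __weak restore_processor_state(void)
	{
		preempt_enable();
	}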



In addition to this glue, one needs:

	* a SoC-dependent __save/__restore_processor_state.
          There are two examples attached for that: OMAP3 and Samsung 6450.

This bit is the "hacky" part of the patch; on ARM, the platform code is blissfully unaware of suspend-to-disk, while the suspend-to-ram code is in places very complex.

The diffs shown hook into the "inner guts" of the existing suspend-to-ram code on ARM; while that looks like it does a large part of the job, there's surely a better way to structure it.

I've supplied those merely as illustrations, to show that the inline assembly orgies from previous patches just duplicate already-existing functionality unnecessarily. The way the code is reused here might not be perfect - the intent was only to show how much code re-use is actually possible in this area.

Anyone who wishes can instead substitute the __save/__restore_processor_state inline assembly orgies from previously posted patches here - it all hooks in very cleanly.



To test / extend this code on an ARM platform other than the two shown, one can start with the generic bits and simply provide NOP implementations for the SoC-specific bits; see the sketch below. That should at the very least allow a snapshot to be created, and so validate that quiescing/freezing and resuming/thawing the device tree works correctly. Resume from such an image wouldn't work, though; the MMU state, at the very least, is absolutely critical.
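Such stubs could look like this (a sketch that assumes the context-pointer-in-r0 convention the attached assembly uses; the names match the hooks the patch expects):

	/*
	 * Hypothetical bring-up stubs for a SoC without real support yet.
	 * Snapshot creation should work with these; resume will not,
	 * since MMU and coprocessor state are never saved or reloaded.
	 */
	void __save_processor_state(void *ctx)
	{
		/* deliberately empty */
	}

	void __restore_processor_state(void *ctx)
	{
		/* deliberately empty */
	}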




Re: how (if at all) to get this towards mainline:

It looks like the various ARM boards have _very_ different views of what suspend_ops.enter() has to do; some are quite simple there, others exceedingly complex.

One would ultimately hope for as much code sharing between hibernation and suspend-to-mem as possible, for sure, but the current code isn't structured for that; in many cases, saving/restoring state is done all over the place in the "arch framework" bits of suspend_ops.enter(), before the actual CPU core state is saved/resumed. ARM is heavily biased towards optimizing the hell out of suspend-to-mem, and a "grand central store" for system state isn't really there; all boards have their own strategy for this, and their own assumptions about what the CPU state is when resumed from RAM. This is even harder for "secure" SoCs, where part of the functionality is handled by (non-kernel) internal ROM code.
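(For reference, the hook in question, simplified from <linux/suspend.h> - only the members relevant here:)

	/* Simplified; only the members discussed above are shown. */
	struct platform_suspend_ops {
		int (*valid)(suspend_state_t state);
		int (*enter)(suspend_state_t state);	/* where each board hides its magic */
		/* begin/prepare/finish/end/... elided */
	};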


Russell King has recently changed the CPU suspend core code to use a "more generic" interface (at the least, having all implementations dump state into a caller-supplied buffer). The attached patches aren't yet fully aware of that, because of the need _not_ to suspend (but to return after the state save) when called through the hibernation code. My brain hasn't yet grown big enough to solve this well ...
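One conceivable shape for it - purely a sketch with made-up names (cpu_dump_state, soc_enter_sleep), not what the attached patches do - would be to make the state dump and the actual sleep separable:

	/*
	 * Sketch only: both paths share the state dump into the
	 * caller-supplied buffer; only suspend-to-ram really sleeps.
	 */
	static int cpu_save_state(void *buf, int enter_lowpower)
	{
		cpu_dump_state(buf);		/* made-up name: fill the buffer */
		if (enter_lowpower)
			soc_enter_sleep();	/* made-up name: s2ram, returns after wakeup */
		return 0;			/* hibernation: return right after the save */
	}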



Re: success with this patch: on OMAP3, I've got issues; the console doesn't properly suspend/resume, and using no_console_suspend keeps the messages but loses console input after resume. Also, graphics doesn't come back, and successive hibernation attempts cause crashes in the USB stack. It works much better on the Samsung 64xx boards, for me at least. But then, quite a few people reported success with the older patches on OMAP, hence I wonder: who's got it "fully working"?



Any comments? What can be improved?


Have fun,
FrankH.
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 6b6786c..b3c271f 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -195,6 +195,14 @@ config VECTORS_BASE
 	help
 	  The base address of exception vectors.
 
+config ARCH_HIBERNATION_POSSIBLE
+	bool
+	help
+	  If the machine architecture supports suspend-to-disk
+	  it should select this automatically for you.
+	  Otherwise, say 'Y' at your own peril.
+
 config ARCH_HAS_CPU_IDLE_WAIT
 	def_bool y
 
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 5421d82..23e93a6 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -191,6 +191,7 @@ static inline void *phys_to_virt(unsigned long x)
  */
 #define __pa(x)			__virt_to_phys((unsigned long)(x))
 #define __va(x)			((void *)__phys_to_virt((unsigned long)(x)))
+#define __pa_symbol(x)		__pa(RELOC_HIDE((unsigned long)(x),0))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
 
 /*
diff --git a/arch/arm/include/asm/suspend.h b/arch/arm/include/asm/suspend.h
new file mode 100644
index 0000000..8857c79
--- /dev/null
+++ b/arch/arm/include/asm/suspend.h
@@ -0,0 +1,6 @@
+#ifndef __ASM_ARM_SUSPEND_H
+#define __ASM_ARM_SUSPEND_H
+
+static inline int arch_prepare_suspend(void) { return 0; }
+
+#endif	/* __ASM_ARM_SUSPEND_H */
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index c9b00bb..541ac3a 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -36,6 +36,7 @@ obj-$(CONFIG_ARM_THUMBEE)	+= thumbee.o
 obj-$(CONFIG_KGDB)		+= kgdb.o
 obj-$(CONFIG_ARM_UNWIND)	+= unwind.o
 obj-$(CONFIG_HAVE_TCM)		+= tcm.o
+obj-$(CONFIG_HIBERNATION)	+= cpu.o swsusp.o
 
 obj-$(CONFIG_CRUNCH)		+= crunch.o crunch-bits.o
 AFLAGS_crunch-bits.o		:= -Wa,-mcpu=ep9312
diff --git a/arch/arm/kernel/cpu.c b/arch/arm/kernel/cpu.c
new file mode 100644
index 0000000..4fa9b80
--- /dev/null
+++ b/arch/arm/kernel/cpu.c
@@ -0,0 +1,63 @@
+/*
+ * Hibernation support specific for ARM
+ *
+ * Based on work by:
+ *
+ * Ubuntu project, hibernation support for mach-dove,
+ *	https://lkml.org/lkml/2010/6/18/4
+ *
+ * Copyright (C) 2010 Nokia Corporation
+ *	Contact: Hiroshi DOYU <Hiroshi.DOYU@xxxxxxxxx>
+ *	https://lists.linux-foundation.org/pipermail/linux-pm/2010-June/027422.html
+ *
+ * Copyright (C) 2010 Texas Instruments, Inc.
+ *	via linux-omap mailing list, Teerth Reddy et al.
+ *	https://patchwork.kernel.org/patch/96442/
+ *
+ * Copyright (C) 2006 Rafael J. Wysocki <rjw@xxxxxxx>
+ *
+ * License terms: GNU General Public License (GPL) version 2
+ */
+
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <asm/ptrace.h>
+#include <asm/tlbflush.h>
+
+/* References to section boundaries */
+extern const void __nosave_begin, __nosave_end;
+
+/*
+ * pfn_is_nosave - check if given pfn is in the 'nosave' section
+ */
+notrace int pfn_is_nosave(unsigned long pfn)
+{
+	unsigned long nosave_begin_pfn = __pa_symbol(&__nosave_begin) >> PAGE_SHIFT;
+	unsigned long nosave_end_pfn = PAGE_ALIGN(__pa_symbol(&__nosave_end)) >> PAGE_SHIFT;
+
+	return (pfn >= nosave_begin_pfn) && (pfn < nosave_end_pfn);
+}
+
+/*
+ * wrap the assembly helpers here.
+ */
+extern void swsusp_arch_suspend_internal(void);
+extern int swsusp_arch_resume_internal(void);
+extern int swsusp_save(void);
+
+notrace int swsusp_arch_suspend(void)
+{
+	swsusp_arch_suspend_internal();
+	return swsusp_save();
+}
+
+notrace int swsusp_arch_resume(void)
+{
+	/*
+	 * Set pagedir to swapper (so that resume via initramfs can work).
+	 */
+	cpu_switch_mm(swapper_pg_dir, current->active_mm);
+
+	return swsusp_arch_resume_internal();
+}
+
diff --git a/arch/arm/kernel/swsusp.S b/arch/arm/kernel/swsusp.S
new file mode 100644
index 0000000..98e0e4d
--- /dev/null
+++ b/arch/arm/kernel/swsusp.S
@@ -0,0 +1,142 @@
+/*
+ * Hibernation support specific for ARM
+ *
+ * Based on work by:
+ *
+ * Ubuntu project, hibernation support for mach-dove,
+ *	https://lkml.org/lkml/2010/6/18/4
+ *
+ * Copyright (C) 2010 Nokia Corporation
+ *	Contact: Hiroshi DOYU <Hiroshi.DOYU@xxxxxxxxx>
+ *	https://lists.linux-foundation.org/pipermail/linux-pm/2010-June/027422.html
+ *
+ * Copyright (C) 2010 Texas Instruments, Inc.
+ *	via linux-omap mailing list, Teerth Reddy et al.
+ *	https://patchwork.kernel.org/patch/96442/
+ *
+ * Copyright (C) 2006 Rafael J. Wysocki <rjw@xxxxxxx>
+ *
+ * License terms: GNU General Public License (GPL) version 2
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include <asm/cache.h>
+#include <asm/memory.h>
+#include <asm/page.h>
+#include <asm/ptrace.h>
+
+/*
+ * Force ARM mode because:
+ *	- we use PC-relative addressing with >8bit offsets
+ *	- we use msr with immediates
+ */
+.arm
+
+.align	PAGE_SHIFT
+.Lswsusp_page_start:
+
+/*
+ * Save the current CPU state before suspend / poweroff.
+ */
+ENTRY(swsusp_arch_suspend_internal)
+	adr	r0, ctx
+	mrs	r1, cpsr
+	stm	r0!, {r1}		/* current CPSR */
+	msr	cpsr_c, #SYSTEM_MODE
+	stm	r0!, {r0-r14}		/* user regs */
+	msr	cpsr_c, #SVC_MODE
+	mrs	r2, spsr
+	stm	r0!, {r2, sp, lr}	/* SVC SPSR, SVC regs */
+	msr	cpsr, r1		/* restore original mode */
+	b	__save_processor_state
+ENDPROC(swsusp_arch_suspend_internal)
+
+
+/*
+ * Restore the memory image from the pagelists, and load the CPU registers
+ * from saved state.
+ * This runs in a very restrictive context - namely, no stack can be used
+ * before the CPU register state saved by swsusp_arch_suspend_internal() has
+ * been
+ * restored.
+ */
+ENTRY(swsusp_arch_resume_internal)
+	/*
+	 * The following code is an assembly version of:
+	 *
+	 *	struct pbe *pbe;
+	 *	for (pbe = restore_pblist; pbe != NULL; pbe = pbe->next)
+	 *		copy_page(pbe->orig_address, pbe->address);
+	 *
+	 * Because this is the very place where data pages, including our stack,
+	 * are overwritten, function calls are obviously impossible. Hence asm.
+	 *
+	 * The core of the loop is taken almost verbatim from copy_page.S.
+	 */
+	ldr     r1, =(restore_pblist - 8)	/* "fake" pbe->next */
+	b	3f
+.ltorg
+.align L1_CACHE_SHIFT
+0:
+PLD(	pld	[r0, #0]			)
+PLD(	pld	[r0, #L1_CACHE_BYTES]		)
+	mov	r3, #(PAGE_SIZE / (2 * L1_CACHE_BYTES) PLD( -1 ))
+	ldmia	r0!, {r4-r7}
+1:
+PLD(	pld	[r0, #(2 * L1_CACHE_BYTES)]	)
+PLD(	pld	[r0, #(3 * L1_CACHE_BYTES)]	)
+2:
+.rept	(2 * L1_CACHE_BYTES / 16 - 1)
+	stmia	r2!, {r4-r7}
+	ldmia	r0!, {r4-r7}
+.endr
+	subs	r3, r3, #1
+	stmia	r2!, {r4-r7}
+	ldmgtia	r0!, {r4-r7}
+	bgt	1b
+PLD(	ldmeqia	r0!, {r4-r7}			)
+PLD(	beq	2b				)
+3:
+	ldr     r1, [r1, #8]		/* load next in list (pbe->next) */
+	cmp     r1, #0
+	ldrne	r0, [r1]		/* src page start address (pbe->address) */
+	ldrne	r2, [r1, #4]		/* dst page start address (pbe->orig_address) */
+	bne     0b
+
+	/*
+	 * Done - now restore the CPU state and return.
+	 */
+	msr	cpsr_c, #SYSTEM_MODE
+	adr	r0, ctx
+	ldm	r0!, {r1, sp, lr}	/* first word is CPSR, following are r0/r1 (irrelevant) */
+	msr	cpsr_cxsf, r1
+	ldm	r0!, {r2-r14}
+	msr	cpsr_c, #SVC_MODE
+	ldm	r0!, {r2, sp, lr}
+	msr	spsr_cxsf, r2
+	msr	cpsr_c, r1		/* use CPSR from above */
+
+	/*
+	 * From here on we have a valid stack again. Core state is
+	 * not restored yet, redirect to the machine-specific
+	 * implementation to get that done.
+	 * Note that at this point we have succeeded with restore;
+	 * if machine-specific code fails it'd need to panic, there
+	 * is no way anymore now to recover from "resume failure".
+	 */
+	mov	r1, #0			/* return value 0 (success) */
+	stmfd	sp!, {r1,lr}
+	bl	__restore_processor_state	/* restore core state */
+	ldmfd	sp!, {r0,pc}		/* r0 = 0 from above, return */
+ENDPROC(swsusp_arch_resume_internal)
+
+.ltorg
+
+/*
+ * Save the CPU context (register set for all modes and mach-specific cp regs)
+ * here. Setting aside what remains of this code page should be plenty.
+ */
+.align L1_CACHE_SHIFT
+ENTRY(ctx)
+.space	(PAGE_SIZE - (. - .Lswsusp_page_start))
+END(ctx)
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index 4957e13..e691c77 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -153,7 +153,6 @@ SECTIONS
 		__init_end = .;
 #endif
 
-		NOSAVE_DATA
 		CACHELINE_ALIGNED_DATA(32)
 
 		/*
@@ -176,6 +175,8 @@ SECTIONS
 	}
 	_edata_loc = __data_loc + SIZEOF(.data);
 
+	NOSAVE_DATA
+
 #ifdef CONFIG_HAVE_TCM
         /*
 	 * We align everything to a page boundary so we can
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index b6e818f..0d39ae0 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -171,7 +171,7 @@
 #define NOSAVE_DATA							\
 	. = ALIGN(PAGE_SIZE);						\
 	VMLINUX_SYMBOL(__nosave_begin) = .;				\
-	*(.data.nosave)							\
+	.data.nosave : { *(.data.nosave) }				\
 	. = ALIGN(PAGE_SIZE);						\
 	VMLINUX_SYMBOL(__nosave_end) = .;
 
diff --git a/include/linux/suspend.h b/include/linux/suspend.h
index 5e781d8..476d4c3 100644
--- a/include/linux/suspend.h
+++ b/include/linux/suspend.h
@@ -274,8 +274,13 @@ static inline void hibernate_nvs_restore(void) {}
 #endif /* CONFIG_HIBERNATION_NVS */
 
 #ifdef CONFIG_PM_SLEEP
+#ifndef CONFIG_ARM
 void save_processor_state(void);
 void restore_processor_state(void);
+#else
+#define	save_processor_state	preempt_disable
+#define	restore_processor_state	preempt_enable
+#endif
 
 /* kernel/power/main.c */
 extern int register_pm_notifier(struct notifier_block *nb);
diff --git a/arch/arm/plat-omap/Kconfig b/arch/arm/plat-omap/Kconfig
index df5ce56..b4713ba 100644
--- a/arch/arm/plat-omap/Kconfig
+++ b/arch/arm/plat-omap/Kconfig
@@ -23,6 +23,7 @@ config ARCH_OMAP3
 	select CPU_V7
 	select COMMON_CLKDEV
 	select OMAP_IOMMU
+	select ARCH_HIBERNATION_POSSIBLE
 
 config ARCH_OMAP4
 	bool "TI OMAP4"
diff --git a/arch/arm/mach-omap2/sleep34xx.S b/arch/arm/mach-omap2/sleep34xx.S
index ea4e498..fd48417 100644
--- a/arch/arm/mach-omap2/sleep34xx.S
+++ b/arch/arm/mach-omap2/sleep34xx.S
@@ -328,6 +328,17 @@ restore:
 	.word	0xE1600071		@ call SMI monitor (smi #1)
 #endif
 	b	logic_l1_restore
+#ifdef CONFIG_HIBERNATION
+ENTRY(__restore_processor_state)
+	stmfd	sp!, { r0 - r12, lr }
+	str	sp, [r0]		@ fixup saved stack pointer
+	str	lr, [r0, #8]		@ fixup saved link register
+	mov	r3, r0
+	mov	r1, #1
+	b	.Llogic_l1_restore_internal
+ENDPROC(__restore_processor_state)
+#endif
+
 l2_inv_api_params:
 	.word   0x1, 0x00
 l2_inv_gp:
@@ -358,6 +369,7 @@ logic_l1_restore:
 	ldr	r4, scratchpad_base
 	ldr	r3, [r4,#0xBC]
 	adds	r3, r3, #16
+.Llogic_l1_restore_internal:
 	ldmia	r3!, {r4-r6}
 	mov	sp, r4
 	msr	spsr_cxsf, r5
@@ -433,6 +445,10 @@ ttbr_error:
 	*/
 	b	ttbr_error
 usettbr0:
+#ifdef CONFIG_HIBERNATION
+	cmp	r1, #1
+	ldmeqfd	sp!, { r0 - r12, pc }	@ early return from __restore_processor_state
+#endif
 	mrc	p15, 0, r2, c2, c0, 0
 	ldr	r5, ttbrbit_mask
 	and	r2, r5
@@ -471,6 +487,16 @@ usettbr0:
 	mcr	p15, 0, r4, c1, c0, 0
 
 	ldmfd	sp!, {r0-r12, pc}		@ restore regs and return
+
+#ifdef CONFIG_HIBERNATION
+ENTRY(__save_processor_state)
+	stmfd	sp!, {r0-r12, lr}
+	mov	r1, #0x4
+	mov	r8, r0
+	b	l1_logic_lost
+ENDPROC(__save_processor_state)
+#endif
+
 save_context_wfi:
 	/*b	save_context_wfi*/	@ enable to debug save code
 	mov	r8, r0 /* Store SDRAM address in r8 */
@@ -545,6 +571,10 @@ l1_logic_lost:
 	mrc	p15, 0, r4, c1, c0, 0
 	/* save control register */
 	stmia	r8!, {r4}
+#ifdef CONFIG_HIBERNATION
+	cmp	r1, #4
+	ldmeqfd	sp!, {r0-r12, pc}	@ early return from __save_processor_state
+#endif
 clean_caches:
 	/* Clean Data or unified cache to POU*/
 	/* How to invalidate only L1 cache???? - #FIX_ME# */
diff --git a/arch/arm/plat-s5p/sleep.S b/arch/arm/plat-s5p/sleep.S
index 2cdae4a..fd2b0a1 100644
--- a/arch/arm/plat-s5p/sleep.S
+++ b/arch/arm/plat-s5p/sleep.S
@@ -48,10 +48,17 @@
 	 *
 	 * entry:
 	 *	r0 = save address (virtual addr of s3c_sleep_save_phys)
-	*/
+	 *	r1 (_internal_ only) = CPU sleep trampoline (if any)
+	 */
 
-ENTRY(s3c_cpu_save)
+ENTRY(__save_processor_state)
+	mov	r1, #0
+	b	.Ls3c_cpu_save_internal
+ENDPROC(__save_processor_state)
 
+ENTRY(s3c_cpu_save)
+	ldr	r1, =pm_cpu_sleep	@ set trampoline
+.Ls3c_cpu_save_internal:
 	stmfd	sp!, { r3 - r12, lr }
 
 	mrc	p15, 0, r4, c13, c0, 0	@ FCSE/PID
@@ -67,11 +74,13 @@ ENTRY(s3c_cpu_save)
 
 	stmia	r0, { r3 - r13 }
 
+	mov	r4, r1
 	@@ write our state back to RAM
 	bl	s3c_pm_cb_flushcache
 
+	movs	r0, r4			@ Z set iff no trampoline (flags lost across bl)
+	ldmeqfd	sp!, { r3 - r12, pc }	@ if there was no trampoline, return
 	@@ jump to final code to send system to sleep
-	ldr	r0, =pm_cpu_sleep
 	@@ldr	pc, [ r0 ]
 	ldr	r0, [ r0 ]
 	mov	pc, r0
@@ -86,9 +95,19 @@ resume_with_mmu:
 	str	r12, [r4]
 
 	ldmfd	sp!, { r3 - r12, pc }
+ENDPROC(s3c_cpu_save)
+
+ENTRY(__restore_processor_state)
+	stmfd	sp!, { r3 - r12, lr }
+	ldr	r2, =.Ls3c_cpu_resume_internal
+	mov	r1, #1
+	str	sp, [r0, #40]		@ fixup sp in restore context
+	mov	pc, r2
+ENDPROC(__restore_processor_state)
 
 	.ltorg
 
+
 	@@ the next bits sit in the .data segment, even though they
 	@@ happen to be code... the s5pv210_sleep_save_phys needs to be
 	@@ accessed by the resume code before it can restore the MMU.
@@ -131,6 +150,7 @@ ENTRY(s3c_cpu_resume)
 	mcr	p15, 0, r1, c7, c5, 0		@@ invalidate I Cache
 
 	ldr	r0, s3c_sleep_save_phys	@ address of restore block
+.Ls3c_cpu_resume_internal:
 	ldmia	r0, { r3 - r13 }
 
 	mcr	p15, 0, r4, c13, c0, 0	@ FCSE/PID
@@ -152,6 +172,9 @@ ENTRY(s3c_cpu_resume)
 	mcr	p15, 0, r12, c10, c2, 0	@ write PRRR
 	mcr	p15, 0, r3, c10, c2, 1	@ write NMRR
 
+	cmp	r1, #0
+	bne	0f			@ do the MMU phys init only when
+					@ not called via __restore_processor_state
 	/* calculate first section address into r8 */
 	mov	r4, r6
 	ldr	r5, =0x3fff
@@ -175,6 +198,7 @@ ENTRY(s3c_cpu_resume)
 	str	r10, [r4]
 
 	ldr	r2, =resume_with_mmu
+0:
 	mcr	p15, 0, r9, c1, c0, 0		@ turn on MMU, etc
 
         nop
@@ -183,6 +207,7 @@ ENTRY(s3c_cpu_resume)
         nop
         nop					@ second-to-last before mmu
 
+	ldmnefd	sp!, { r3 - r12, pc }
 	mov	pc, r2				@ go back to virtual address
 
 	.ltorg
