Read-only Root Filesystem

This page is my attempt to popularize the idea of using Linux distributions with their root filesystems mounted in read-only mode. This is somewhat different from the recent trend among Live CD makers, who typically use unionfs and require several files in /etc to be writable for the system even to boot.

In a typical GNU/Linux system (desktops, laptops, phones, appliances, etc.), most of the contents of the root filesystem are static application data, which change only when you upgrade or install packages. The only locations that need to store substantial data and be writable during normal operation are /home and /var. The remaining directories either need no on-disk representation (/tmp, /proc, /sys, /dev, /media) or should not be writable during normal operation (/usr, /lib, /boot, /bin, /sbin, /etc).

All the Linux boxes I manage run with their root filesystems mounted read-only, and in many cases these filesystems reside on flash memory. Why should I put system files on a cheap pendrive, even when a machine has a hard drive for users' data? For one thing, it's faster. How can a pendrive be faster than a hard drive? The trick is that it's very fast for small read-only accesses, and it's a separate drive, so disk-intensive operations in home directories do not interfere with reading application data.


Below is a quick list of what I think are the important benefits of using a read-only root filesystem. Once you realize precisely what "read-only root" means, you will easily see that the points below are true, and you may well think of more.


The behavior of the system and applications is perfectly reproducible if the user does not install or remove any packages.


Users can easily force all their data to stay in volatile memory whenever they need to. Additionally, this layout enforces a better separation between system and user data; any problems with this separation become immediately visible.


One can use a different physical drive for read-only partitions, thus avoiding disk scheduler waits.

Cost and power efficiency

Lowered disk wear and power consumption. Read-only partitions can reside on flash memory sticks without decreasing performance.


Yet another layer of security for malicious software to overcome, and one that can even be enforced at the hardware level (e.g. with a write-protect switch).


A user who is aware of remounting filesystems can ensure that no changes are made unintentionally.

Direct thin-client infrastructure

No need to prepare special images for multiple-client boot.

Direct Live-CD creation

File systems and kernels do not require special preparation before burning.

No shortcomings

All the directories of the filesystem tree that need write access can be mounted on separate partitions, so this imposes no limits on system usage.

Unionfs is not suitable

It does not allow remounting a filesystem read-write so that packages can be permanently installed. It is also a huge waste of memory.

Frequently Asked Questions

How do I switch my existing system to a read-only root?

The process typically consists of two steps. First, you need to ensure that the boot scripts never remount the root filesystem in "rw" mode.

While performing this operation, you probably shouldn't let the system mount any disk partition other than its root filesystem (so comment them out in /etc/fstab).

In some distributions, it suffices to add the "ro" flag to the line that lists your root filesystem in /etc/fstab (flags are in the fourth column). If the boot scripts happily ignore the "ro" flag in /etc/fstab, remove any commands similar to "mount / -o remount,rw" from all places where they could be executed automatically.
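For example, a root entry carrying the "ro" flag might look like this (the device name and filesystem type here are illustrative, not a recommendation):

```
/dev/sda1  /  ext4  ro,noatime  0  1
```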

After you do this, you can boot your system repeatedly and keep fixing boot scripts until you are sure that they never remount the root filesystem. Trial and error is safe here: even if the system doesn't work too well, a read-only partition cannot be corrupted.

You can be virtually sure that after this step your system will print lots of error messages and many programs will stop working. That is not dangerous while the only mounted partition is read-only.

The second step is just fixing all the errors, as discussed in the answer to the next question.

What do I need to change in my system to make it work properly with read-only root?

If you want your current system fully functional with read-only root filesystem, you will need to ensure that /var and several other locations are writable after reboot. Typically, the steps would be:

- replace /etc/mtab with a symlink to /proc/mounts,

- add lines to /etc/fstab that will cause tmpfs to be mounted on /tmp, /media, and /root,

- if you use a dynamic /etc/resolv.conf (e.g. from DHCP), replace it with a symbolic link to some writable location,

- check that the DHCP client script handles a symlinked /etc/resolv.conf properly; for example, replace "mv /tmp/somefile /etc/resolv.conf" (which would clobber the symlink) with "cat /tmp/somefile > /etc/resolv.conf",

- if you don't want to use a writable disk partition for /var, write a script, executed very early in the boot process, that dumps the directory tree of /var (preferably without non-empty files) from your real root filesystem, mounts tmpfs on /var, and then unpacks the saved directories and symlinks into it,

- if you want to use a tmpfs filesystem as /home (e.g. to create a Live CD) create your own boot script that will mount tmpfs at /home, create user directories and change their ownership,

- remove any commands that write to hardware clock from boot scripts.
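The /var-on-tmpfs step above can be sketched as a small POSIX sh helper. The function name and the paths in the comments are mine, purely for illustration; no distribution ships this:

```shell
# Recreate the directory skeleton of $1 under $2 without copying file
# contents -- the essence of the "dump /var, mount tmpfs, unpack" trick.
replicate_tree() {
    src=$1
    dst=$2
    ( cd "$src" && find . -type d | while read -r d; do
        mkdir -p "$dst/$d"
    done )
}

# At boot, as root, something like this would run (paths illustrative):
#   replicate_tree /var /run/var.skel   # capture the skeleton
#   mount -t tmpfs tmpfs /var
#   replicate_tree /run/var.skel /var   # recreate it on the tmpfs
```

A real script would also restore symlinks and ownership; this only shows the directory-tree part.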

How do I install/uninstall packages on a read-only filesystem?

The easiest way is to:

root@lhost:~# mount -o remount,rw /
root@lhost:~# # now you can use a package manager
root@lhost:~# mount -o remount,ro /

But in case you have mounted tmpfs filesystems that cover /var (or other locations) in such a way that the contents of the root filesystem have been copied to tmpfs, you'll have to do something like this:

root@lhost:~# cd /media
root@lhost:/media# mkdir r
root@lhost:/media# mount -o bind / r
root@lhost:/media# cd r
root@lhost:/media/r# mount -o remount,rw .
root@lhost:/media/r# mount -t proc proc proc
root@lhost:/media/r# mount -t sysfs sysfs sys
root@lhost:/media/r# mount -o bind /dev dev
root@lhost:/media/r# chroot .
root@lhost:/# # you are in chroot, you can use a package manager
root@lhost:/# exit
root@lhost:/media/r# umount proc sys dev
root@lhost:/media/r# cd /media
root@lhost:/media# umount r

How do I reboot into read-only mode as root?

What you probably want is to do some maintenance/repair and you would rather not change anything in your configuration. In this case, just boot with "init=/bin/bash" added to the list of kernel parameters. You might need to add a bootloader option for this, or if your bootloader has an interactive shell, you can set this at boot (e.g. after the GRUB menu appears, you can press "e" to edit commands and parameters). If you want to change something in your root filesystem, you will need to remount it to read-write mode temporarily:

bash: no job control in this shell
[root@(none) /]# mount -o remount,rw /
[root@(none) /]# passwd # change root user password
[root@(none) /]# mount -o remount,ro /

If you want to access other disk partitions and your distribution does not provide static /dev entries, you will have to mknod them or start udev daemon manually:

bash: no job control in this shell
[root@(none) /]# mount -t proc proc /proc
[root@(none) /]# mount -t sysfs sysfs /sys
[root@(none) /]# mount -t tmpfs tmpfs /dev
[root@(none) /]# /sbin/udevd --daemon
cannot open /dev/null
[root@(none) /]# /sbin/udevadm trigger # this will create /dev entries for your volumes
[root@(none) /]# mount -t tmpfs tmpfs /media
[root@(none) /]# cd media
[root@(none) media]# mkdir disk
[root@(none) media]# mount /dev/sda1 disk
[root@(none) media]# # do anything you need with other partitions
[root@(none) media]# umount disk
[root@(none) media]# cat /proc/mounts # check if you have unmounted everything
[root@(none) media]# sync # just to make sure
[root@(none) media]# /sbin/poweroff -f

Why does a filesystem go into read-only mode?

Problem: you never intended to set your filesystems read-only, but sometimes while the system is running they get remounted on their own. Why does this happen?

Distributions often set an option in fstab that causes a filesystem to be remounted read-only when an error occurs, which is typically caused by on-disk corruption (maybe you have some bad blocks on your disk?). So check whether the errors=remount-ro option is set on the filesystem:

[root@lhost ~]# cat /proc/mounts
/dev/disk / ext3 rw,relatime,errors=remount-ro 0 0

You could remove that option, but it would be a really bad idea. Instead, boot another system or a Live CD and fsck your filesystems. If errors keep appearing shortly after cleaning the filesystem, it may be time to replace the hard drive. There are also ways to add certain blocks to a badblocks list, which can help in some cases.
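As a small sketch, the check above can be scripted by parsing /proc/mounts. The function name is mine, and the optional file argument exists only so the helper can be exercised on a sample file:

```shell
# Print "ro" or "rw" for the mount point given in $2 (default "/"),
# reading a mounts table from $1 (default /proc/mounts).
# Splitting the option field on commas ensures that "errors=remount-ro"
# is not mistaken for the "ro" flag itself.
mount_mode() {
    awk -v mp="${2:-/}" '
        $2 == mp {
            n = split($4, o, ",")
            for (i = 1; i <= n; i++)
                if (o[i] == "ro") { print "ro"; exit }
            print "rw"
            exit
        }' "${1:-/proc/mounts}"
}
```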

More questions?

If you have a question that the points above did not answer, contact me and I will try to help, and possibly include your question in this FAQ.

File System Hierarchy

This is my recommendation for distributions and applications.

/usr, /boot, /lib, /bin, /sbin

Applications during normal operation (not being installed or uninstalled) must not attempt to write in these locations.

Installation scripts and package managers can assume these locations are writable during administrative actions initiated manually, and they should put here any data that they do not expect to change until the next administrative action (e.g. an upgrade or installation of other related packages).


/etc

Installation scripts can only create directories and stubs for configuration files, and these stubs must not contain any commands other than what is clearly interpreted as comments and whitespace. (They must not change any files in /etc on package upgrade or removal. Files containing default options should be placed in the same location as other immutable package files, preferably in /usr.)

On system installation, files in /etc can only be created automatically if they have an established format that is not likely to change for a lifetime of a system installation.

A front-end for configuring files in /etc should only create entries in configuration files for these options that were modified.
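As a sketch of such a stub (the file and directory names here are invented for illustration): a package would ship its defaults under /usr and leave only a commented placeholder in /etc:

```
# /etc/example-daemon.conf -- local overrides only.
# Defaults are read first from /usr/share/example-daemon/defaults.conf;
# uncomment and edit entries here to override them.
#
# listen_port = 8080
```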


/var

The system boot-up scripts can assume it is writable after mounting all fstab filesystems, but must not assume that any subdirectories or files exist in this location before boot.

Applications can assume it is writable and that it has some subdirectories with appropriate permissions, but only if the existence of those subdirectories is assured by the boot-up scripts of the targeted systems.

If an application writes to directories with custom names in /var, it must check whether they exist and attempt to create them if they don't.

Applications can assume that files put in custom subdirectories of /var will not be removed or changed until system shutdown.

/tmp, /dev, /home, /media, /proc, /root, /sys

These and other special-purpose locations should be mounted over the root file system and can be manipulated freely by processes that have permissions to access them.

Status of Distributions

The amount of work needed to install and use an upstream Linux distribution with a read-only root filesystem is small, as demonstrated by my linux-create scripts, but this way of using a system is not directly supported by any popular distribution (that I know of). Most of the changes could be avoided if upstream were aware of this use case, and that is the exact reasoning behind this page.

Archlinux issues

Last tested on 2012-02-03:

- there's an unconditional mount -o remount / in rc.sysinit (it should use options from /etc/fstab if they are present),

- init scripts should do (at least) mkdir -p /var/lib /var/log after mounting filesystems,

- syslog-ng and crond do not create their required subdirectories in /var, yet they are enabled in the default config, so they fail to start,

- /var/lib/pacman/local should be in /usr (these state files change only when packages are installed or removed, so they are no more "variable data" than the files contained in packages),

- with the default config, hardware clock shouldn't ever be written to.

Debian issues

Tested with Debian Lenny:

- /var/lib/dpkg should be in /usr,

- DHCP clients should have a separate file for automatically-updated nameserver information, or at the very least should do "cat xxx > /etc/resolv.conf" instead of "mv xxx /etc/resolv.conf",

- boot scripts should create required directories in /var if they are not present,

- boot scripts shouldn't touch the hardware clock unless the user configures them to do so.

Example scenario

Note: these aren't real instructions! This is how things SHOULD work. If you are a distribution developer, see what you can do to support this.

Suppose a user installs his favorite system. During installation, he has chosen to lay out partitions manually, created separate partitions for /, /home and /var, and checked an option to mount tmpfs over /tmp. The created /etc/fstab might look like this:

/dev/disk/by-label/System / btrfs compress 0 1
/dev/disk/by-label/Home /home ext3 defaults 0 1
/dev/disk/by-label/Var /var ext2 defaults 0 1
tmpfs /tmp tmpfs defaults 0 0

After the system boots, all three disk partitions are mounted read-write. The user issues mount -o remount,ro / and hopefully it works without a glitch. Now he has a root partition mounted read-only, but this state will not survive a reboot. The user wants to make it persistent, so he changes a line in /etc/fstab:

/dev/disk/by-label/System / btrfs ro,compress 0 1

The user reboots the system and verifies that not a single block has been written to the root partition. Also, his hardware clock has not been touched.
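One way to perform that verification (a sketch of mine, not part of any distribution) is to sample the "sectors written" counter, field 7 of /sys/block/DEV/stat, before and after some activity; equal values mean no blocks were written in between:

```shell
# Print the number of sectors written to a block device so far.
# $1: device name (e.g. sda); $2: optional stat file, so the helper
# can be tried on a sample line without real hardware.
sectors_written() {
    awk '{ print $7 }' "${2:-/sys/block/$1/stat}"
}

# Usage idea (as root, with the system partition on sda):
#   before=$(sectors_written sda); sleep 60
#   after=$(sectors_written sda)
#   [ "$before" = "$after" ] && echo "no writes"
```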

Now the user wants to make a Live CD. He does "mount -o bind / /media/burn" and "mount -o remount,rw /media/burn". For the purpose of burning, he changes /media/burn/etc/fstab to read:

rootfs / rootfs defaults 0 0
tmpfs /var tmpfs defaults 0 0
tmpfs /tmp tmpfs defaults 0 0
tmpfs /root tmpfs defaults 0 0
tmpfs /media tmpfs defaults 0 0

The user adds a boot script that is going to mount tmpfs at /home and create directories in it. Now the user only has to add a bootloader and burn the contents of "/media/burn" to a CD to create a fully functional Live CD with a snapshot of his system.