This page is my attempt to popularize the idea of using Linux distributions with their root filesystems mounted in read-only mode. This is somewhat different from the recent trend among Live CD makers, who typically use unionfs and keep the writable part of the filesystem in a RAM-backed overlay.
In a typical GNU/Linux software stack (desktops, laptops, phones, appliances etc.), most of the contents of the root filesystem are static application data, which only change when you upgrade or install new packages. The only locations that need to store serious data and be writable during normal work are /var, /tmp, and the users' home directories.
All the Linux boxes I manage run with their root file-systems mounted in read-only mode, and in many cases these read-only file systems reside on flash memory. Why should I put system files on a cheap pendrive, even if a machine has a hard drive for storing users' data? For one thing, it's faster. How is it that a pendrive is faster than a hard drive? The trick is, it's very fast for read-only access of small blocks and it's a separate drive, so disk-intensive operations performed in home directories do not interfere with reading application data.
Below is a quick list of what I think are important benefits of using a read-only root filesystem. When you realize precisely what "read-only root" means, you will easily see that the points below are indeed true, and you may even think of more reasons.
The behavior of the system and applications is perfectly reproducible if the user does not install or remove any packages.
Users can easily enforce all the data to stay in volatile memory whenever they need to. Additionally, this layout enforces a better separation between system and user data. Any problems with this separation are immediately visible.
One can use a different physical drive for read-only partitions, thus avoiding disk scheduler waits.
Cost and power efficiency
Lowered disk wear and power consumption. Read-only partitions can reside on flash memory sticks without decreasing performance.
Yet another layer of security for malicious software to overcome. It is even possible to enforce it at the hardware level, e.g. with the write-protect switch on some flash media.
If a user is aware of remounting file systems, he can ensure that no changes are made unintentionally.
Direct thin-client infrastructure
No need to prepare special images for multiple-client boot.
Direct Live-CD creation
File systems and kernels do not require special preparation before burning.
All the directories of file system tree that need to be accessed for writing can be mounted on separate partitions, so this imposes no limits on system usage.
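As an illustration, a partition layout along these lines keeps the root read-only while the writable locations get their own filesystems (the labels, devices, and filesystem types below are assumptions for the example, not a recommendation):

```
# /etc/fstab (sketch)
/dev/disk/by-label/System  /      ext4   ro          0 1
/dev/disk/by-label/Home    /home  ext4   rw,noatime  0 2
/dev/disk/by-label/Var     /var   ext4   rw,noatime  0 2
tmpfs                      /tmp   tmpfs  defaults    0 0
```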
Unionfs is not suitable
It does not allow remounting a file system read-write in order to permanently install some packages. It is also a huge waste of memory.
How to switch my existing system to readonly root?
The process typically consists of two steps. Firstly, you need to ensure that boot scripts never remount the root filesystem to "rw" mode.
While performing this operation, you probably shouldn't let the system mount any disk partition other than its root filesystem (so comment them out in /etc/fstab for the time being).
In some distributions, it will suffice to add the "ro" option to the root filesystem entry in /etc/fstab.
After you do this, you can boot your system repeatedly and continue fixing boot scripts until you are sure that they never remount the root filesystem. It is safe to do this by trial and error, because even if the system doesn't work too well, a read only partition cannot be corrupted.
You can be virtually sure that after you execute this step, your system will print lots of error messages and many programs will stop working. This is not dangerous as long as the only mounted partition is read-only.
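To see whether the root filesystem is currently mounted read-only, you can parse /proc/mounts. A minimal sketch (the helper name root_opts is mine, and the sample line stands in for a live /proc/mounts entry; on a real system you would feed it `awk '$2 == "/"' /proc/mounts`):

```shell
# Report ro/rw for a /proc/mounts-style line; field 4 holds the option list.
root_opts() {
    set -- $1
    case ",$4," in
        *,ro,*) echo "read-only" ;;
        *)      echo "read-write" ;;
    esac
}
root_opts "rootfs / rootfs ro,relatime 0 0"   # -> read-only
```

Note that matching the whole ",ro," token avoids a false positive on options like "errors=remount-ro".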
The second step is just fixing all errors, as it is discussed in the answer to the next question.
What do I need to change in my system to make it work properly with read-only root?
If you want your current system fully functional with a read-only root filesystem, you will need to ensure that nothing writes to it during normal operation. In practice:
- add lines to /etc/fstab that will cause tmpfs to be mounted on /tmp, /media, and /root.
- if you use a dynamic /etc/mtab, replace it with a symlink to /proc/mounts.
- check if the dhcp client script handles a symlinked /etc/resolv.conf correctly (the symlink should point into a writable location, such as /tmp).
- if you don't want to use a writable disk partition for /var, add an fstab line that mounts a tmpfs there.
- if you want to use a tmpfs filesystem as /var, make sure the boot scripts create the subdirectories that your services expect before those services start.
- remove any commands that write to hardware clock from boot scripts.
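For the resolv.conf item, the usual trick is to keep the real file in tmpfs and leave only a symlink in /etc. A sketch that rehearses the idea against a scratch directory (ROOT is a hypothetical stand-in; on the real system the target would be /etc itself, with / remounted read-write):

```shell
# Rehearsal in a scratch directory; a real run would target /etc directly.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc"
# the live file will sit in tmpfs-backed /tmp; /etc keeps only the symlink
ln -sfn /tmp/resolv.conf "$ROOT/etc/resolv.conf"
readlink "$ROOT/etc/resolv.conf"    # -> /tmp/resolv.conf
```

With this in place, the dhcp client can rewrite /tmp/resolv.conf freely while / stays read-only.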
How do I install/uninstall packages on a read only file system?
The easiest way is to:
root@lhost:~# mount -o remount,rw /
But in case you have mounted tmpfs filesystems that cover some directories of the root filesystem, writes to those directories would end up in the tmpfs instead of on the disk. Since /media is itself a tmpfs, you can create a mount point there and bind-mount the root filesystem onto it; a plain (non-recursive) bind mount does not carry the tmpfs submounts along, so it exposes the on-disk directories:
root@lhost:~# cd /media
root@lhost:~# mkdir rootdisk
root@lhost:~# mount --bind / rootdisk
How to reboot into read-only mode as root?
What you probably want is to do some maintenance/repair without changing anything in your configuration. In this case, just boot with the root filesystem kept read-only, for example by adding "ro" and a single-user/rescue option to the kernel command line in your boot loader.
If you want to access other disk partitions and your distribution does not provide static device nodes in /dev, you may have to create the needed nodes with mknod before you can mount anything.
Why does a file system go into read-only mode?
Problem: you have never intended to set your filesystems to read-only mode, but sometimes when your system is up, they get remounted read-only of their own will. Why does this happen?
Distributions often set the "errors=remount-ro" option in /etc/fstab; when the kernel detects filesystem corruption, it remounts the affected filesystem read-only to prevent further damage. You can see the options currently in effect like this:
[root@lhost ~]# cat /proc/mounts
You could remove that option, but it would be a really bad idea. Instead, power up another system or a Live CD and try fsck-ing your filesystems. If the errors keep appearing shortly after cleaning the filesystem, it might be time to replace the hard drive. There are also ways to add certain blocks to the filesystem's bad-block list, which might help in some cases.
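From the rescue system, the check could look like this (the device name is an assumption; e2fsck's -c option runs badblocks and records any bad blocks it finds in the filesystem's bad-block list):

```
root@lhost:~# fsck -f /dev/sdb1
root@lhost:~# e2fsck -c /dev/sdb1
```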
If you have a question that above points did not answer, contact me and I will try to help and possibly include your question in this FAQ.
This is my recommendation for distributions and applications.
/usr, /boot, /lib, /bin, /sbin
Applications during normal operation (not being installed or uninstalled) must not attempt to write in these locations.
Installation scripts and package managers can assume these locations are writable during administrative actions initiated manually, and they should put here any data that they do not expect to change until the next administrative action (e.g. an upgrade or installation of other related packages).
/etc
Installation scripts can only create directories and stubs for configuration files, and these stubs must not contain any commands other than what is clearly interpreted as comments and whitespace. (They must not change any files in /etc that they did not create themselves.)
On system installation, files in /etc may be filled with the settings chosen by the administrator.
A front-end for configuring files in /etc can then treat anything that is not a comment or whitespace as deliberate, user-made configuration.
/var
The system boot-up scripts can assume /var is writable after mounting all fstab filesystems, but must not assume that any subdirectories or files exist in this location before boot.
Applications can assume it is writable and that it has some subdirectories with appropriate permissions, only if the existence of those subdirectories is assured by the boot-up scripts of the targeted systems.
If an application writes to directories with custom names in /var, it must create them itself and must not expect them to exist before it first runs.
Applications can assume that files put in custom subdirectories of /var are preserved at least until the next reboot.
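A boot-script fragment along these lines would satisfy the guarantee above (the directory list is an assumption based on common daemons, and VAR stands in for the real /var so the sketch can be rehearsed safely):

```shell
# Populate a tmpfs-backed /var with the directories common services expect.
VAR=$(mktemp -d)     # a real boot script would use /var itself
for d in lib log run lock spool tmp; do
    mkdir -p "$VAR/$d"
done
chmod 1777 "$VAR/tmp"   # /var/tmp is world-writable with the sticky bit
```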
/tmp, /dev, /home, /media, /proc, /root, /sys
These and other special-purpose locations should be mounted over the root file system and can be manipulated freely by processes that have permissions to access them.
The amount of work needed to install and use an upstream Linux distribution with a root file system in read-only mode is insubstantial, as proven by my linux-create scripts, but this method of using a system is not directly supported by any popular distribution (that I know of). Most of the changes could be avoided if upstream was aware of this use case and this is the exact reasoning behind this page.
Last tested on 2012-02-03:
- there's an unconditional mount -o remount / in rc.sysinit (it should use options from /etc/fstab if they are present),
- init scripts should do (at least) mkdir -p /var/lib /var/log after mounting filesystems,
- syslog-ng and crond do not create their required subdirectories in /var, so although they are enabled in the default config, they fail to start,
- with the default config, hardware clock shouldn't ever be written to.
Tested with Debian Lenny:
- DHCP clients should have a separate file for automatically-updated nameserver information, or at the very least should write /etc/resolv.conf through the symlink instead of replacing it with a new file,
- boot scripts should create required directories in /var after mounting the fstab filesystems,
- boot scripts shouldn't touch the hardware clock without the user configuring them to do so.
Note: these aren't real instructions! This is how things SHOULD work. If you are a distribution developer, see what you can do to support this.
Suppose a user installs his favorite system. During installation, he chooses to lay out partitions manually and creates separate partitions for the root filesystem, /home, and /var. The root entry in his /etc/fstab reads:
/dev/disk/by-label/System / btrfs compress 0 1
After the system boots, all three disk partitions are mounted read-write. The user edits /etc/fstab and changes the root entry to:
/dev/disk/by-label/System / btrfs ro,compress 0 1
The user reboots the system and verifies that not a single block has been written to the root partition. Also, his hardware clock has not been messed up.
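One way to carry out that verification is to look at the disk's write counters in sysfs after the system has been up for a while (the device name is an assumption, and this only works when the read-only partitions have the whole device, e.g. a pendrive, to themselves; per the kernel's block-layer stat documentation, the fifth and seventh fields are write I/Os and write sectors):

```
root@lhost:~# cat /sys/block/sda/stat
```

If the partition has really not been written to since boot, both write counters stay at zero.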
Now the user wants to make a Live CD. He copies the whole filesystem tree into a working directory and replaces the disk entries in the copy's /etc/fstab with:
rootfs / rootfs defaults 0 0
The user adds a boot script that is going to mount tmpfs at /home and create directories in it. Now the user only has to add a bootloader and burn the contents of the working directory to a disc.