I published this short article to a hobby OS site in 2004. I don't think the idea has aged a bit!
This article offers feature suggestions to budding OS developers looking for that neat edge.
Wouldn't it be real nice if
People have been dreaming of 'mounting' remote filesystems on demand for a long time. It seems to be a popular pastime for architecture astronauts. Despite Joel's warnings to run from network transparency, I vote that you don't!
Client-side libraries, for example the excellent libferris, allow a program to access remote resources in the same way as local ones.
A new operating system that integrated such handling at the platform level (rather than as an additional, optional library) would have the advantage that each and every application could access the same resources. The 'ls' in the ported bash prompt would be able to list the contents of an FTP directory, and the notepad clone would load your text files whether they were local, on some Windows server, or on the other side of the internet.
Users already work with and are familiar with URIs, so URIs are the natural way of expressing a file's name and location. The filesystem ought to work with URIs.
Imagine the following snippet of fictitious command line:
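Something along these lines, with the commands, hosts, and paths purely illustrative:

```
$ ls ftp://ftp.kernel.org/pub/linux/
$ cp http://example.org/report.txt file://home/me/report.txt
$ edit smb://fileserver/shared/notes.txt
```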
The use of URIs introduces what I term the 'Multi-Root Filesystem'. The protocol, e.g. "file", "smb" or "webdav", is a root. All protocols are peers of each other, and you can't navigate between them in relative paths (i.e. no "file://home/me/../../../http://remote").
Protocol handlers are global to the user or system. The file structure might be built on demand, with the contents of "http://" reading like recent internet history; whereas "file://" might contain the local UNIX-style root "/", which in turn contains "home", "bin", etc.
Provide your system with a new protocol handler and suddenly all applications can use files available via that mechanism.
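The dispatch mechanism this implies can be sketched in a few lines. Everything here is invented for illustration (`register`, `vfs_open`, and the handler classes are not any real OS API): the scheme in front of "://" selects a handler from a global table, and installing a new handler instantly extends every program that goes through the same entry point.

```python
# Hypothetical sketch of scheme-based dispatch; names are invented.
HANDLERS = {}

def register(scheme, handler):
    """Install a handler; from then on every program can speak `scheme`."""
    HANDLERS[scheme] = handler

def vfs_open(uri, mode="r"):
    """Dispatch an open() on a URI to the handler for its scheme."""
    scheme, sep, path = uri.partition("://")
    if not sep or scheme not in HANDLERS:
        raise OSError(f"no protocol handler registered for '{scheme}'")
    return HANDLERS[scheme].open(path, mode)

class LocalFiles:
    """Maps file://home/me onto the ordinary UNIX root /home/me."""
    def open(self, path, mode="r"):
        return open("/" + path, mode)

register("file", LocalFiles())
```

Registering a second handler, say for "smb" or "http", requires no change to `vfs_open` or to any application, which is exactly the multi-root property described above.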
Obviously, opening a file over FTP can be a million miles more complicated than opening a local file. When the same API is used to access both local and remote file-like resources, failure modes that rarely bite on local operations (long latency, outright failure) happen far more often. The average programmer never checking whether operations succeed, and always putting IO on the UI thread, is just plain bad, for local and remote resources alike. It would have to be thought about. Massively multi-threaded, message-passing operating systems might have the edge in this respect.
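To make the point concrete, here is a minimal sketch of the discipline the paragraph above is asking for: run the open off the calling (UI) thread, bound it with a timeout, and surface failures explicitly. `slow_open` is a stand-in for a remote protocol handler, and the host names are made up.

```python
# Sketch: treat every open as something that can be slow or can fail.
import concurrent.futures

def slow_open(uri):
    """Stand-in for a remote protocol handler (hypothetical)."""
    import time
    time.sleep(0.1)  # pretend network latency
    if uri.startswith("ftp://down."):
        raise OSError("host unreachable")
    return f"contents of {uri}"

def fetch(uri, timeout=2.0):
    """Run the IO off the calling thread; convert a hang into an error."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(slow_open, uri)
        try:
            return future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            raise OSError(f"timed out opening {uri}")
```

Nothing here is specific to remote files; the same checks are simply cheap enough to skip locally and fatal to skip over a network.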
Unified, transparent network access for many protocols does not remove the need for dedicated protocol-handling libraries in specialist programs. But it does make the average program suddenly much more powerful and useful to the average user!
It is worth mentioning an additional feature for the interested OS developer to research: auto-mounting archives and encrypted files transparently, e.g. "sftp://www.mycom.org/mail/archives/2004-07.zip.pgp/get rich quick.msg".