A typical web server serving static content relies on two layers of security to ensure that only the content intended to be shared is shared. At the low level, the file system has a general ACL mechanism that makes only certain files readable by the web server process. But that alone is not sufficient to determine what should be served: any errant world-readable file anywhere in the system would suddenly be servable. So, web servers use a second mechanism, the configuration map, that explicitly states which files should be served (and under which URLs). This is a positive approach: "serve only that which is explicitly allowed".
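The configuration map described above can be sketched minimally as a lookup table; anything absent from the map is refused, even if the underlying file is world-readable. The map contents and paths here are invented for illustration:

```python
# Hypothetical "positive" layer: a configuration map listing exactly
# which files may be served, and under which URLs.
SERVE_MAP = {
    "/index.html": "/var/www/site/index.html",
    "/logo.png":   "/var/www/site/assets/logo.png",
}

def resolve(url: str):
    """Return the file path to serve, or None if the URL is not mapped."""
    return SERVE_MAP.get(url)
```

A world-readable file outside the map, such as `/etc/passwd`, simply has no URL and can never be reached, regardless of its file-system permissions.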
However, in a typical dynamic web application, the situation is quite different. Here, again, at the low level, there is usually a database (or other data store), and the scripts are allowed to access it with some account and privilege. The scripts are able to access any datum in the store that is available to the configured web process. The second layer of protection is implemented in the scripts themselves: when invoked, each script reviews its inputs, rejects that which is disallowed, and then performs the operation. This is a negative approach: "serve that which isn't explicitly disallowed".
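A minimal sketch of this negative layer, with invented names (`DATA_STORE`, `DISALLOWED_USERS`, `fetch_record`): each handler carries its own deny-checks, and anything the script forgot to disallow gets served.

```python
# Hypothetical "negative" layer: the script can read the whole store,
# so it must itself reject disallowed requests before operating.
DATA_STORE = {"alice": "alice's record", "admin": "secret admin record"}
DISALLOWED_USERS = {"admin"}          # the script's own deny-list

def fetch_record(user: str) -> str:
    if user in DISALLOWED_USERS:      # reject what is explicitly disallowed...
        raise PermissionError(user)
    return DATA_STORE[user]           # ...then perform the operation
```

The weakness is structural: the check and the data access live in the same code, so a single forgotten check in any one handler exposes the whole store.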
Belay adds the missing positive layer: when a script determines that a particular handler and data item combination should be available to a given client, it grants a BCAP URL for that specific combination and returns it to the client. Only requests presenting valid, non-revoked BCAP URLs ever make it to the scripts.
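The grant-and-invoke flow above can be sketched as follows. This is an illustrative model, not Belay's actual API: `GRANTS`, `grant`, and `invoke` are invented names, and the token scheme simply stands in for an unguessable BCAP URL.

```python
import secrets

# Hypothetical positive layer: mint an unguessable URL bound to exactly
# one (handler, item) combination; deleting an entry revokes the URL.
GRANTS = {}  # token -> (handler, item)

def grant(handler: str, item: str) -> str:
    token = secrets.token_urlsafe(16)
    GRANTS[token] = (handler, item)
    return f"https://example.com/bcap/{token}"

def invoke(url: str) -> str:
    token = url.rsplit("/", 1)[-1]
    if token not in GRANTS:           # invalid or revoked: never reaches a script
        raise PermissionError("no such capability")
    handler, item = GRANTS[token]
    return f"{handler}({item})"       # dispatch only the granted combination
```

A guessed or revoked URL is stopped before any application script runs; the scripts only ever see requests for combinations that were explicitly granted.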
In this way, Belay reduces the attack surface considerably: it simply isn't possible to hit most of a web server's API functionality with arbitrary input when most of the services are not directly exposed and can be accessed only by an entity to which an explicit URL has been handed.