We regularly see token machines in bank branches, hospitals, and similar places. The machine issues a token number to each customer waiting for some service. Now think of each customer as an event and the awaited service as an action. The token machine handles the event and generates the token quickly instead of waiting for the service to finish. This way, the token machine can cater to a high turnout of customers as well. If the token machine did not handle events quickly, the queue in front of it would grow steadily. Eventually, the queue would be full (imagine the standing space filling up completely).
In an event-based system, the event-handling process waits for an event, processes it, and then waits for the next one. For example, consider a process (say, a user-id process) whose job is to provision user-id info to a firewall whenever a user logs in to the AAA device. This process waits for user-login events. The login rate varies, so to support scale, the user-id process should handle a high login rate. Let's analyse this.
A high login rate results in a high number of events. Think of the observer design pattern: here, the user-id process subscribes to the login event. Assume the event is delivered via a unix domain socket. Note that a unix domain socket has a fixed-size queue for holding unprocessed events. Ideally, the user-id process should consume events faster than they are generated. This is healthy behaviour, and the queue will be empty most of the time. If the user-id process consumes events slowly, the event queue will keep growing. Eventually, the queue will be full. At this stage, we face a chicken-and-egg problem, since discarding events is not desirable.
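The wait-process-wait loop over a unix domain socket can be sketched as below. This is an illustrative sketch, not the real AAA integration: a socketpair stands in for the socket delivering login events, and the per-event delay stands in for the provisioning work. Unconsumed datagrams sit in the socket's fixed-size kernel buffer, which is exactly the queue that fills up when the consumer is slow.

```python
import socket
import time

# A socketpair stands in for the unix domain socket that delivers
# login events to the user-id process (assumed setup for this sketch).
producer_sock, consumer_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

def consume_events(count, delay):
    """Blocking wait-process-wait loop: each event costs `delay` seconds."""
    handled = []
    for _ in range(count):
        event = consumer_sock.recv(1024)   # wait for the next event
        time.sleep(delay)                  # simulate slow provisioning work
        handled.append(event)
    return handled

# A burst of 5 login events arrives before the consumer runs; until they
# are read, they wait in the socket's fixed-size kernel queue.
for i in range(5):
    producer_sock.send(b"login:user%d" % i)

events = consume_events(5, delay=0.01)
print(len(events))  # all 5 eventually drained from the kernel queue
```

If the producer kept sending faster than `delay` allows the consumer to drain, the kernel buffer would eventually fill and sends would block or fail, which is the queue-full state described above.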
Note that in a multiprocess environment, CPU cycles are shared among running processes. Consider the cpu usage output below. Here, the user-id process is consuming 94.3% of the CPU. If a process holds the CPU for a long time (multiple seconds), it will not be able to handle events generated during that period. Those events will wait in the queue, which indicates the danger of the queue filling up.
top - 05:24:06 up 6:29, 0 users, load average: 0.10, 0.08, 0.04
Tasks: 123 total, 1 running, 122 sleeping, 0 stopped, 0 zombie
Cpu(s): 95.0%us, 0.2%sy, 0.0%ni, 4.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 2054384k total, 841920k used, 1212464k free, 47312k buffers
Swap: 2048276k total, 49276k used, 1999000k free, 378780k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6667 root 20 0 76328 12m 10m S 94.3 0.6 0:16.10 user-id
1 root 20 0 3860 944 896 S 0.0 0.0 0:00.27 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 test
The user-id process can react fast if it maintains a separate task queue. For each event, it creates a task command and enqueues it; these tasks are then handled asynchronously. The event queue stays almost empty even when a burst of events is generated. One might ask why this approach works. To answer that, we need to understand the real-life pattern of events. Most of the time, the program handles a regular rate of event generation. But on special occasions (think of the bank branch use-case), the event generation rate increases sharply for a while. Call this a burst of events. The peak lasts for a while and then returns to normal. By consuming events quickly and deferring the actual work, the user-id process is able to absorb such bursts.
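A minimal sketch of this decoupling is below: the event handler only enqueues a task and returns immediately, while a worker thread performs the slow provisioning asynchronously. The names (on_login_event, provision_user_id, task_queue) are illustrative, not from any real API.

```python
import queue
import threading
import time

task_queue = queue.Queue()   # internal task queue, separate from the event queue
provisioned = []

def provision_user_id(user):
    time.sleep(0.01)         # stand-in for the slow firewall update
    provisioned.append(user)

def worker():
    """Drains tasks asynchronously, one at a time."""
    while True:
        user = task_queue.get()
        provision_user_id(user)
        task_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def on_login_event(user):
    """Fast path: enqueue a task and return, keeping the event queue empty."""
    task_queue.put(user)

# A burst of 20 login events is absorbed almost instantly ...
start = time.monotonic()
for i in range(20):
    on_login_event("user%d" % i)
enqueue_time = time.monotonic() - start
print(enqueue_time < 0.01)   # True: enqueuing is far cheaper than provisioning

task_queue.join()            # ... and drained asynchronously by the worker
print(len(provisioned))      # 20
```

The design works because bursts are transient: the internal queue only needs enough headroom to hold the peak, and the worker catches up once the event rate returns to normal.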