Concurrency

Concurrency simply means doing more than one thing at a time. A concurrent program has more than one point of control (e.g. threads or processes), and shared resources may be accessed by more than one of them simultaneously.

In the past, concurrency was mainly used for:

  • event threading for GUI rendering
  • isolating each client's work on a server
  • running producers and consumers in a work design

Now:

  • for scalability and performance

A design pattern is illustrated as follows:

Potentials

Everything in nature happens simultaneously, with or without interacting with anything else. Hence, it is natural to build systems as concurrent implementations, whether through threading or parallelism.

Building a concurrent implementation allows you to:

  • complete many tasks quickly and asynchronously
  • delegate tasks from one process to another
  • spend hardware resources effectively (e.g. avoid hogging the CPU)

Design Consideration: Atomicity

Before charging into writing code, we need to consider the atomicity of tasks and data structures, that is: whether an operation executes as a single indivisible step. Example:

  • Atomic - x := 500
    • Store x by setting it to 500
  • Non-atomic - x++
    • Load the value of x into a CPU register
    • Increment it by 1
    • Store the result from the register back into x
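The load/increment/store decomposition above can be observed directly in Python, whose bytecode splits `x += 1` into separate instructions. A minimal sketch using the standard `dis` module (the function name is illustrative):

```python
import dis

def increment(x):
    # A single source line, but NOT a single machine step:
    # it compiles to separate load, add, and store instructions,
    # so another thread could interleave between them.
    x += 1
    return x

ops = [ins.opname for ins in dis.get_instructions(increment)]
print(ops)  # includes a LOAD_FAST ... STORE_FAST sequence around the add
```

The exact opcode between the load and the store varies by Python version, but the three-step shape is what makes the operation non-atomic.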


Purpose: avoid race conditions

If two processes manage the same data at the same time asynchronously, there is a tendency to produce a scary class of bug: the race condition. Hence, you want to ensure the flow, execution, and synchronization are clearly defined throughout your concurrent implementation.
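A minimal Python sketch of preventing such a race: a `threading.Lock` serializes the non-atomic increment, so four threads updating a shared counter still produce an exact total (the names and iteration counts are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        # The lock makes the load-increment-store sequence indivisible.
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 -- guaranteed by the lock
```

Without the `with lock:` line, interleaved load/store steps can silently drop increments, and the final count becomes unpredictable.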

Design Consideration: Context Switching

The next thing to consider is how multiple processes communicate with one another. There are various ways to do this:

  • Naive Monitoring / Sentinel Looping
  • Cooperative Locking / Cooperative Threading / Non-Preemptive Threading
  • Preemptive Threading
  • Atomic Update Notification

This way, you control the context switching between processes to avoid:

  1. data corruption during reads or writes
  2. indefinite waiting between processes (deadlock by waiting)
  3. overwhelming the CPU (e.g. errors due to a non-atomic implementation on a single CPU)
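One of the strategies above, atomic update notification, can be sketched in Python with `threading.Event` (one possible interpretation; names are illustrative). The waiting thread blocks until notified instead of burning CPU in a naive sentinel loop:

```python
import threading

ready = threading.Event()
result = []

def producer():
    result.append(42)   # publish the data first...
    ready.set()         # ...then notify waiters atomically

def consumer():
    ready.wait()        # blocks here; no busy-wait polling
    print(result[0])    # prints 42, guaranteed visible after the event

c = threading.Thread(target=consumer)
p = threading.Thread(target=producer)
c.start()
p.start()
c.join()
p.join()
```

Compared with a sentinel loop (`while not done: pass`), the event hands control back to the scheduler while waiting, which addresses problem 3 above.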

Design Consideration: Immutability

As with context switching, this is a design consideration where you make each piece of data's mutability (whether it changes) explicit: is it a constant or a variable? This also helps avoid deadlock.

Immutable (non-changing) data is always the preferred choice in threading due to its simplicity. Otherwise, apply a context switching strategy to exchange data between processes.
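A Python sketch of this preference (names are illustrative): an immutable tuple can be read by any number of threads with no synchronization at all, while the mutable results list still needs a lock:

```python
import threading

# Immutable shared data: safe to read concurrently, no lock required,
# because it cannot change underneath any reader.
CONFIG = ("localhost", 8080)

seen = []
seen_lock = threading.Lock()

def reader():
    host, port = CONFIG       # lock-free concurrent read of immutable data
    with seen_lock:           # the mutable list, by contrast, needs a lock
        seen.append((host, port))

threads = [threading.Thread(target=reader) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(seen))  # 8
```

The asymmetry is the point: only the mutable structure forced us to reach for a synchronization tool.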

Design Consideration: Hardware and Language/Compiler Support

Lastly, you need to consider whether a given language and hardware can support the concurrency you intend to execute. Some CPUs are designed to execute a single thread effectively; multi-core CPUs allow true parallelism.

Some languages like Java, Go, C++, and C facilitate both safe and unsafe concurrency implementations. Others do not, or make it very complicated. Hence, you must ensure these facilities are available before implementation.
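A small Python probe of the hardware side of this question: `os.cpu_count()` reports the logical CPUs available, which hints whether true parallelism is even possible on the machine (the messages are illustrative):

```python
import os

cores = os.cpu_count() or 1  # may return None on unusual platforms
if cores > 1:
    print(f"{cores} logical CPUs: true parallelism is possible")
else:
    print("single CPU: concurrency here means interleaving, not parallelism")
```

Language-level support still has to be checked separately, in the language's own documentation; the hardware count only tells you what the runtime could exploit.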

Moving forward, you should refer to the hardware/language manuals for concurrency guidelines. There is no "one rule for all" in terms of concurrency.

That's all about concurrency design.