Hardware technology has undergone major advancements over the past decade. The number of cores on a chip grows exponentially, following Moore's Law, allowing transaction processing systems to execute more and more transactions in parallel. However, conventional transaction processing systems are ill-suited for modern and upcoming hardware architectures: as the number of concurrently executing transactions increases, contended critical sections become scalability bottlenecks. In typical transaction processing systems, the centralized lock manager is often the first contended component and the first scalability bottleneck. As a result, the parallelism offered by the underlying hardware cannot be exploited.

    We identify the conventional thread-to-transaction assignment policy as the primary cause of contention. Under such an assignment, the transaction determines which data a thread accesses, making it unpredictable which thread will access which data. Transaction processing systems therefore have to be pessimistic and protect the data with locks maintained by a centralized lock manager in order to achieve atomic updates.

    Instead of the thread-to-transaction policy, we propose data-oriented execution (DORA), which follows a thread-to-data assignment policy. DORA logically partitions the data and assigns a single thread to each partition; it also decomposes each transaction into smaller actions and assigns those actions to threads based on the data each action is about to access. This design allows each thread to access mostly thread-local data structures, minimizing interaction with the contention-prone centralized lock manager.
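As an illustration, the thread-to-data routing described above can be sketched as follows. This is a simplified, hypothetical model; the names (`Partition`, `Router`, `dispatch`) are illustrative and do not reflect Shore-MT's actual interfaces:

```python
from collections import deque

class Partition:
    """One logical partition, owned by exactly one worker thread.
    All its bookkeeping is thread-local, so no centralized lock
    manager is needed for data inside the partition."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi      # key range [lo, hi) owned by this thread
        self.queue = deque()           # thread-local action queue
        self.local_locks = {}          # per-partition lock table (illustrative)

class Router:
    """Routes each action of a decomposed transaction to the single
    thread that owns the data the action is about to access."""
    def __init__(self, ranges):
        self.partitions = [Partition(lo, hi) for lo, hi in ranges]

    def dispatch(self, key, action):
        for p in self.partitions:
            if p.lo <= key < p.hi:
                p.queue.append((key, action))
                return p
        raise KeyError(key)

# Two logical partitions over a toy key space of 1000 keys:
router = Router([(0, 500), (500, 1000)])
p = router.dispatch(42, "read-subscriber")   # routed to the first partition
```

Because the routing table maps key ranges to threads up front, which thread will touch which data is known before the action runs, in contrast to the thread-to-transaction policy.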

    Click here to get the poster of our demo, which illustrates the problem and the DORA design with graphs. Below are videos that show the different parts of our demo, with detailed descriptions. The demo is composed of two parts: the DORA Designer and the Live Monitor. If you want to watch the videos in full-screen, we advise you to change the video quality to Original or one of the HD modes offered by the menu under each video.


    The first part of the demo presents the DORA Designer, a tool that guides developers using DORA on how to split transactions into smaller actions. Given the SQL code of a transaction, the DORA Designer finds the split points, determines which parts can run in parallel and which parts depend on each other, and then gives a graphical representation of both the conventional and the DORA execution plan.
    The two videos below show how the Designer works. For simplicity, we have already loaded the database schemas and SQL statements of some transactional benchmarks, which are used in the videos.
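For illustration, the kind of dependency analysis described above can be sketched as follows. This is a deliberately simplified, hypothetical model: the real Designer works from the SQL code, whereas here each action is just paired with the set of data items it touches, and the action names are made up:

```python
def dependencies(actions):
    """actions: ordered list of (name, set of data items the action touches).
    Two actions conflict if they touch overlapping data; an action depends
    on every earlier conflicting action. Actions with no dependency between
    them can run in parallel."""
    deps = {name: set() for name, _ in actions}
    for i, (a, items_a) in enumerate(actions):
        for b, items_b in actions[:i]:
            if items_a & items_b:          # shared data => must serialize
                deps[a].add(b)
    return deps

# Hypothetical three-action transaction:
txn = [("read_subscriber",       {"Subscriber"}),
       ("read_special_facility", {"Special_Facility"}),
       ("update_subscriber",     {"Subscriber"})]
deps = dependencies(txn)
# The two reads touch disjoint tables, so they can run in parallel;
# the update must wait for the earlier read on the same table.
```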

Designer - TATP

Designer - TPC-C

    The second part of the demo is the Live Monitor, which monitors throughput, CPU utilization, data accesses, etc. In all of the videos below, we run TATP's GetSubscriberData transaction.

    The first video (Live - Throughput) compares the throughput and data access patterns of the baseline Shore-MT (blue) and of Shore-MT running with the DORA design (red), each running on a different socket of a SPARC server, with each socket supporting 64 hardware contexts. The throughput and CPU utilization are reported on the right-hand side; the short drops in throughput occur when the monitoring information is flushed to disk. As we can see, DORA significantly outperforms the baseline system in terms of both utilization and throughput. The main reason is shown in the access pattern graphs in the middle, where the x-axis is time, the y-axis is the data range, and each color represents a thread. These graphs show which thread accessed which data and when: while the thread access patterns of the baseline Shore-MT are very chaotic, DORA provides very predictable access patterns.

Live - Throughput

Live - Load Balancer


    The second video (Live - Load Balancer) shows how easy it is to repartition with DORA. One of the main problems of partitioning-based designs is that, if the load is not equally distributed across the partitions, some partitions become more loaded than others and, as a result, some cores are overloaded while others stay under-utilized. An efficient dynamic load balancing mechanism is therefore very important, and with DORA it is easy: since DORA has only logical partitions, repartitioning requires nothing more than deciding on the new partitions based on the load observed at each of them and then updating DORA's partition table.
    The video shows the baseline Shore-MT (blue), Shore-MT running the DORA design without dynamic load balancing (red), and Shore-MT running the DORA design with a lightweight dynamic load balancing mechanism (purple). The middle part shows the partitions of the two DORA systems in different colors, and the right-hand side shows the throughput together with a simple example of how our load balancing algorithm works. All systems start with uniform load; after 10 seconds, 50% of the load is directed to only 10% of the database. When this happens, the baseline system is unaffected since it has no partitioning, and DORA without load balancing takes a big performance hit due to the load imbalance, but DORA with load balancing repartitions very quickly and regains its previous performance even under the skewed load. The middle part shows how the partitions of the Subscriber table change, since the transaction we run only touches that table: the contended ranges are split into many smaller partitions, while the less contended ranges are merged into larger ones.
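A minimal sketch of such logical repartitioning, assuming per-range access counts are available; the function name and the split/merge policy are illustrative, not the actual load balancing algorithm:

```python
def rebalance(ranges, load, max_load):
    """ranges: list of (lo, hi) key ranges, one per logical partition;
    load: observed access count per range. Split hot ranges in half
    (assuming, for simplicity, that their load splits evenly) and merge
    a cold range into its left neighbor. Only the routing table changes;
    no data is physically moved."""
    out = []   # list of ((lo, hi), estimated load)
    for (lo, hi), l in zip(ranges, load):
        if l > max_load and hi - lo > 1:
            mid = (lo + hi) // 2
            out.append(((lo, mid), l // 2))       # hot range: split in two
            out.append(((mid, hi), l - l // 2))
        elif out and out[-1][1] + l <= max_load:
            (plo, _), pl = out[-1]
            out[-1] = ((plo, hi), pl + l)         # cold neighbors: merge
        else:
            out.append(((lo, hi), l))
    return [r for r, _ in out]

# Half of the accesses hit one quarter of this toy key space:
old = [(0, 100), (100, 200), (200, 300), (300, 400)]
new = rebalance(old, load=[10, 10, 200, 10], max_load=100)
# -> [(0, 200), (200, 250), (250, 300), (300, 400)]
```

The hot range (200, 300) is split into two smaller partitions, while the two cold leading ranges are merged, mirroring the behavior visible in the middle part of the video.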