RRP-Xen Multi Core

NOTE: This scheduler is an experimental version. It currently works on multi-core CPUs, exclusively on CPUs 1, 3, 5, and 7. The stable version will be released soon.

RRP-Xen Multi-Core Scheduler: Link

RRP Multi-core Domain-0 Userspace Simulators: Link

RRP-Xen Architecture Overview

  • All launch tables generated in the Domain-0 userspace are tagged with a CPU ID produced by our RRP multi-core algorithm.

  • These launch tables are ported to the RRP-Xen v2.0 scheduler residing in the Xen hypervisor space.

  • These launch tables are launched sequentially, one after another. This ensures the channel is never overloaded or clogged, and keeps the whole transfer efficient and lightweight.

  • Each launch table contains one or more schedule entries set up within the launch table's hyperperiod.
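As a rough illustration, a CPU-tagged launch table might look like the following minimal C sketch. All names here (rrp_launch_table, rrp_sched_entry, the field names, and MAX_ENTRIES) are hypothetical and not taken from the RRP-Xen source; the sketch only shows the idea of a per-CPU table whose entries fit inside a hyperperiod.

```c
#include <assert.h>
#include <stdint.h>

#define MAX_ENTRIES 8

/* Hypothetical schedule entry: which domain runs, and for how long. */
struct rrp_sched_entry {
    int      dom_id;     /* domain to run */
    uint64_t wcet_ms;    /* timeslice length in milliseconds */
};

/* Hypothetical launch table, tagged with the CPU it is bound to. */
struct rrp_launch_table {
    int      cpu_id;            /* CPU ID assigned in Domain-0 userspace */
    uint64_t hyperperiod_ms;    /* all entries must fit within this window */
    int      num_entries;
    struct rrp_sched_entry entries[MAX_ENTRIES];
};

/* Sanity check: total timeslice of all entries must not exceed the
 * hyperperiod, otherwise the table cannot repeat cleanly. */
static int launch_table_is_valid(const struct rrp_launch_table *lt)
{
    uint64_t total = 0;
    for (int i = 0; i < lt->num_entries; i++)
        total += lt->entries[i].wcet_ms;
    return total <= lt->hyperperiod_ms;
}
```

A validity check like this would naturally run in Domain-0 userspace before the table is handed to the hypervisor.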

RRP-Xen schedule Design - LaunchTable

  • The launch table is the most important data structure in RRP-Xen: it delivers the schedule (already assigned a CPU ID) from the Domain-0 userspace to the RRP-Xen scheduler residing in the Xen hypervisor space.

  • The RRP-Xen scheduler then schedules these launch tables based on the CPU ID it receives along with each launch table.

  • Each of these launch table entries can run for a user-defined timeslice length before do_schedule() in RRP-Xen brings up the next schedule entry from the launch table assigned to that CPU.

  • However, if the next schedule entry belongs to a different CPU, control quickly shifts to that CPU's launch table.

  • For instance, if a schedule entry without an assigned timeslice is encountered, RRP-Xen detects it before a kernel panic can be raised and assigns an IDLE_VCPU to run until the next schedule entry with a valid timeslice is reached.
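The idle fallback described above can be sketched as follows. This is a simplified model, not the actual do_schedule() code: IDLE_DOM_ID, the entry layout, and the "zero timeslice means unassigned" convention are all assumptions made for illustration.

```c
#include <assert.h>
#include <stdint.h>

#define IDLE_DOM_ID (-1)   /* stands in for Xen's IDLE_VCPU */

struct sched_entry {
    int      dom_id;
    uint64_t wcet_ms;   /* 0 models "no timeslice assigned" */
};

/* Pick the domain to run for the entry at position idx in a launch table
 * of n entries. An entry without a valid timeslice yields the idle domain
 * instead of letting an invalid entry reach the dispatcher, which is the
 * behavior that avoids the kernel panic mentioned above. */
static int pick_next_dom(const struct sched_entry *tab, int n, int idx)
{
    const struct sched_entry *e = &tab[idx % n];   /* tables repeat cyclically */
    if (e->wcet_ms == 0)
        return IDLE_DOM_ID;   /* run idle until the next valid entry */
    return e->dom_id;
}
```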

Schedule Setup Procedure - RRP-Xen Multi Core

Case Study

  1. Consider domains with availability factors: [Domain-1: 1/7, Domain-2: 2/7, Domain-3: 3/7, Domain-4: 4/5]

  2. Given CPUs: 2 [CPU #1, CPU #3]

  3. Remove CPUs #1 and #3 from Pool-0 using the xl cpupool-cpu-remove command.

  4. Add these CPUs to a newly created aaf_pool that runs the "aaf" scheduler.

  5. Navigate to directory /etc/xen for the creation of domains.

  6. Perform xl create domuX.cfg pool="aaf_pool" where X = {6,7,8,9}

  7. Navigate to directory /home/rtlabuser/rrp_multi_core/

  8. Open rrp_sched_entry_mc.c and comment out the line "#define RRP_SCHED_SET_ENABLE"

  9. Now run the shell script "uuid_multi_core.sh" residing in the same directory as rrp_sched_entry_mc.c

  10. Note: Running this shell script will not place any of the domains into blocked state (---b---)

  11. On the first run of this shell script, we get the domain-to-CPU assignment shown below.

  12. As can be seen from the CPU ID - Partition ID assignment, the domains were given input availability factors of 1/7, 2/7, 3/7, and 4/5. The domain with an availability factor of 4/5 occupies a whole CPU (CPU #1 in this case), while the other three domains are assigned to CPU #3. The domains are still not in the blocked state at this point, since the schedule_set hypercall was commented out in step 8 (as shown below).

  13. Now remove the domains and the CPUs from aaf_pool.

  14. Add CPU #1 back to aaf_pool and proceed with the addition of the domain that has a hard affinity for CPU #1 (see /etc/xen/domuX.cfg).

  15. Now repeat the same procedure for CPU #3. Doing so ensures there is a runnable VCPU for every valid schedule entry.

  16. We can finally see the domains in the blocked state. Once all the domains are blocked, we are ready for guest OS installations. Setting all the domains into the blocked state is also an important indicator of the correctness of the multi-core RRP-Xen.
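The assignment observed in step 12 can be reproduced with a small first-fit packing sketch over the availability factors. The actual RRP multi-core algorithm is more involved than this; the sketch (with hypothetical names af, assign, cpu_of) only illustrates why 4/5 ends up alone on one CPU while 1/7, 2/7, and 3/7 share the other, since 1/7 + 2/7 + 3/7 = 6/7 ≤ 1 but 6/7 + 4/5 > 1.

```c
#include <assert.h>

/* Availability factor expressed as a fraction num/den. */
struct af { int num, den; };

/* First-fit: place each domain on the first CPU whose total load stays
 * at or below 1. cpu_of[i] receives the CPU index for domain i, or -1
 * if the domain fits on no CPU. Assumes ncpus <= 8. */
static void assign(const struct af *afs, int ndoms, int ncpus, int *cpu_of)
{
    double load[8] = { 0 };
    for (int i = 0; i < ndoms; i++) {
        cpu_of[i] = -1;   /* -1 = unassigned */
        double a = (double)afs[i].num / afs[i].den;
        for (int c = 0; c < ncpus; c++) {
            if (load[c] + a <= 1.0 + 1e-9) {   /* tolerance for rounding */
                load[c] += a;
                cpu_of[i] = c;
                break;
            }
        }
    }
}
```

Running this on the case-study factors assigns the first three domains to one CPU (total load 6/7) and forces the 4/5 domain onto the other, matching the CPU ID - Partition ID assignment shown below.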

CPU ID - Partition Assignment

Domains Invalid State

Domains Valid State