Getting Started

How to use the version we are upstreaming: gEDF scheduler only

If you are using the version we are upstreaming to the Xen code base (https://github.com/pennpanda/RT-Xen, branch rtxen-v2.0-upstream), which contains only the gEDF scheduler, the following scenario shows how to configure and use the real-time scheduler (rt).

Hardware

The machine has an Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz, which has 6 cores (12 hardware threads). If you use a machine with fewer cores, please make sure the CPU indexes used in the commands below are less than the number of cores your machine has.

Grub

The grub menu we used to boot the rt scheduler is as follows:

menuentry 'Ubuntu GNU/Linux, with RT-Xen 4.5(rt_ds) and Linux 3.11.0-15-generic' --class ubuntu --class gnu-linux --class gnu --class os --class xen {
    insmod part_msdos
    insmod ext2
    set root='(hd0,msdos1)'
    search --no-floppy --fs-uuid --set=root b26736f6-3b5e-4d16-b840-9a80081b93b5
    echo    'Loading Xen xen ...'
    multiboot   /boot/xen-4.5-unstable.gz placeholder dom0_max_vcpus=4 dom0_memory=2048M sched=rt_ds console=ttyS0 com1=115200n8 console=com1
    echo    'Loading Linux 3.11.0-15-generic ...'
    module  /boot/vmlinuz-3.11.0-15-generic placeholder root=UUID=b26736f6-3b5e-4d16-b840-9a80081b93b5 ro  quiet splash console=hvc0 earlyprintk=xen
    echo    'Loading initial ramdisk ...'
    module  /boot/initrd.img-3.11.0-15-generic
}
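One common way to install such an entry (assuming a standard Ubuntu GRUB 2 setup; adapt paths, kernel versions and UUIDs to your machine) is to append it to /etc/grub.d/40_custom and regenerate the GRUB configuration:

# append the menuentry above to /etc/grub.d/40_custom, then:
sudo update-grub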

Scenario

After you boot the system with the rt scheduler, you can walk through the following scenario to try out its functionality:

//list each vcpu's parameters for each domain in cpu pools using the rt scheduler
//the time is in microseconds
# xl sched-rt
Cpupool Pool-0: sched=EDF
Name          ID   VCPU   Period   Budget
Domain-0       0      0    10000    10000
Domain-0       0      1    20000    20000
Domain-0       0      2    30000    30000
Domain-0       0      3    10000    10000
litmus1        1      0    10000     4000
litmus1        1      1    10000     4000

//set the parameters of vcpu 1 of domain litmus1:
# xl sched-rt -d litmus1 -v 1 -p 20000 -b 10000
//domain litmus1's vcpu 1 parameters have changed; display the domain's VCPU parameters:
# xl sched-rt -d litmus1
Name          ID   VCPU   Period   Budget
litmus1        1      0    10000     4000
litmus1        1      1    20000    10000
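After this change, vcpu 1 of litmus1 may consume at most 10000 us of budget in every 20000 us period, i.e., 10000/20000 = 50% of one physical CPU.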

//list cpupool information
# xl cpupool-list
Name             CPUs   Sched      Active   Domain count
Pool-0             12   rt         y        2

//create a cpupool named test
# xl cpupool-cpu-remove Pool-0 11
# xl cpupool-cpu-remove Pool-0 10
# xl cpupool-create name="test" sched="credit"
# xl cpupool-cpu-add test 11
# xl cpupool-cpu-add test 10
# xl cpupool-list
Name             CPUs   Sched      Active   Domain count
Pool-0             10   rt         y        2
test                2   credit     y        0
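Equivalently (a sketch; this uses the standard xl cpupool configuration file syntax, which is not specific to RT-Xen), the pool can be described in a small config file and created in one step once CPUs 10 and 11 have been removed from Pool-0:

# test.cfg
name = "test"
sched = "credit"
cpus = ["10", "11"]

# xl cpupool-create test.cfg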

//migrate litmus1 from cpupool Pool-0 to cpupool test
# xl cpupool-migrate litmus1 test
//now litmus1 is in cpupool test
# xl sched-credit
Cpupool test: tslice=30ms ratelimit=1000us
Name          ID   Weight   Cap
litmus1        1      256     0

How to use full-featured RT-Xen 2.x

If you are using the full-featured version (https://github.com/pennpanda/RT-Xen, branch rtxen-v2.0-full-feature), which contains global EDF, global RM, partitioned EDF and partitioned RM, the following scenario shows how to configure and use these schedulers.

Please note: The rtglobal scheduler has global EDF and global RM inside, and you can switch between these two policies on the fly.

The rtpartition scheduler has partitioned EDF and partitioned RM inside; each time you set the parameters of dom0, the scheduler toggles between partitioned EDF and partitioned RM. You could also follow the approach of the rtglobal scheduler (in function rtglobal_sys_cntl @ xen/common/sched_rtglobal.c) and implement toolstack support for selecting a specific policy.

Hardware (same as above)

The machine has an Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz, which has 6 cores (12 hardware threads). If you use a machine with fewer cores, please make sure the CPU indexes used in the commands below are less than the number of cores your machine has.

Grub

The grub menu we used to boot the rtglobal scheduler is as follows:

menuentry 'Ubuntu GNU/Linux, with RT-Xen 4.3.0(rtglobal) and Linux 3.11.0-15-generic' --class ubuntu --class gnu-linux --class gnu --class os --class xen {
    insmod part_msdos
    insmod ext2
    set root='(hd0,msdos1)'
    search --no-floppy --fs-uuid --set=root b26736f6-3b5e-4d16-b840-9a80081b93b5
    echo    'Loading Xen xen ...'
    multiboot   /boot/xen-4.3.0.gz placeholder dom0_max_vcpus=4 dom0_memory=2048M sched=rtglobal console=ttyS0 com1=115200n8 console=com1
    echo    'Loading Linux 3.11.0-15-generic ...'
    module  /boot/vmlinuz-3.11.0-15-generic placeholder root=UUID=b26736f6-3b5e-4d16-b840-9a80081b93b5 ro  quiet splash console=hvc0 earlyprintk=xen
    echo    'Loading initial ramdisk ...'
    module  /boot/initrd.img-3.11.0-15-generic
}

Scenario

After you boot the system with the rtglobal scheduler, you can walk through the following scenario to try out its functionality:

//list each vcpu's parameters for each domain in cpu pools using the rtglobal scheduler
//the time is in milliseconds
# xl sched-rtglobal
Cpupool Pool-0: sched=EDF
Name          ID   VCPU   Period   Budget   Extra
Domain-0       0      0       10       10       0
Domain-0       0      1       20       20       0
Domain-0       0      2       30       30       0
Domain-0       0      3       10       10       0
litmus1        1      0       10        4       0
litmus1        1      1       10        4       0

(Note: the Extra column has no meaning in this version and is always 0.)

//set the parameters of vcpu 1 of domain litmus1:
# xl sched-rtglobal -d litmus1 -v 1 -p 20 -b 10
//domain litmus1's vcpu 1 parameters have changed; display the domain's VCPU parameters:
# xl sched-rtglobal -d litmus1
Name          ID   VCPU   Period   Budget   Extra
litmus1        1      0       10        4       0
litmus1        1      1       20       10       0

//list cpupool information
# xl cpupool-list
Name             CPUs   Sched        Active   Domain count
Pool-0             12   rtglobal     y        2

//create a cpupool named test
# xl cpupool-cpu-remove Pool-0 11
# xl cpupool-cpu-remove Pool-0 10
# xl cpupool-create name="test" sched="credit"
# xl cpupool-cpu-add test 11
# xl cpupool-cpu-add test 10
# xl cpupool-list
Name             CPUs   Sched        Active   Domain count
Pool-0             10   rtglobal     y        2
test                2   credit       y        0

//migrate litmus1 from cpupool Pool-0 to cpupool test
# xl cpupool-migrate litmus1 test
//now litmus1 is in cpupool test
# xl sched-credit
Cpupool test: tslice=30ms ratelimit=1000us
Name          ID   Weight   Cap
litmus1        1      256     0

//change the scheduling policy on the fly
//change from the global EDF scheduler to the global RM scheduler
# xl sched-rtglobal -s RM
Input schedule scheme from user is: RM
Schedule scheme is RM now
//change back from the global RM scheduler to the global EDF scheduler
# xl sched-rtglobal -s EDF
Input schedule scheme from user is: EDF
Schedule scheme is EDF now

How to use RT-Xen v1.x

Hardware: 

Dell Q9400 quad-core machine without hyper-threading. Each core runs at 2.66 GHz. Domain 0 was given one core and 1 GB of memory; the other guest domains were pinned to another core and given 256 MB of memory each.

Software:

OS: Fedora 13, 64-bit, with paravirtualized kernel 2.6.32.25.

Xen: Compiled from 4.0.1 source.

How to Test RT-Xen

Please go to the Publications section to see the details and results of the evaluation.

We use the Sporadic Server scheduler as the hypervisor scheduler, and the Rate Monotonic scheduler as the Linux scheduler.

In [1], there are in total four requirements on the tasks running in the open environment.

To create a periodic real-time task in Linux, we use CLOCK_REALTIME to trigger the periodic task, and we measured a 1 ms workload on our test platform. The details of this periodic task can be found here.
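As an illustration only, the sketch below shows how such a periodic task can be structured; the period, priority, workload loop and build command are assumptions, and this is not the program linked above.

/*
 * Minimal sketch of a periodic real-time task released by CLOCK_REALTIME,
 * with a busy loop standing in for the calibrated ~1 ms workload.
 * Build (assumption): gcc -O2 -std=gnu99 periodic.c -o periodic -lrt
 */
#include <sched.h>
#include <stdio.h>
#include <time.h>

#define PERIOD_NS    10000000L     /* 10 ms period; pick this per task */
#define NSEC_PER_SEC 1000000000L

static void workload(void)
{
    /* Stand-in for the calibrated 1 ms workload. */
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 1000000UL; i++)
        x += i;
}

int main(void)
{
    /* Give the task a fixed (RM-assigned) priority; requires root. */
    struct sched_param sp = { .sched_priority = 50 };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    struct timespec next;
    clock_gettime(CLOCK_REALTIME, &next);

    for (int job = 0; job < 100; job++) {
        workload();
        /* Advance the release time by one period and sleep until then. */
        next.tv_nsec += PERIOD_NS;
        while (next.tv_nsec >= NSEC_PER_SEC) {
            next.tv_nsec -= NSEC_PER_SEC;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}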

Following the task requirements, you can configure the budget and period of each domain together with the corresponding generated tasks. The program we use can be found here.

In the test program, we record each job's start time, dispatch time, finish time, and deadline. We then use this record to calculate the deadline miss ratio of each task.
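For example, a sketch of that calculation (the record layout below is an assumption, not the actual program's):

/*
 * Illustrative only: a job misses its deadline when it finishes after the
 * deadline; the miss ratio is missed jobs / total jobs.
 */
struct job_record {
    double start;      /* job release time       */
    double dispatch;   /* first time the job ran */
    double finish;     /* job completion time    */
    double deadline;   /* absolute deadline      */
};

double miss_ratio(const struct job_record *jobs, int n)
{
    int missed = 0;
    for (int i = 0; i < n; i++)
        if (jobs[i].finish > jobs[i].deadline)
            missed++;
    return n > 0 ? (double)missed / n : 0.0;
}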

How to see if my task set is schedulable or not?

Following the theoretical requirements of [1], we developed a small tool to test whether a task set is theoretically schedulable on RT-Xen. Download it here. A sample input is here.

The sample input file should contain:

-------------
number of domains
for each domain:
    number of tasks
    each task's priority, execution time, and deadline (we assume deadline = period)
-------------
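For illustration only, a tiny two-domain input in this layout (values borrowed from the sample output below; the real test1.txt may also carry each domain's server budget and period) could look like:

2
3
99 1 75
98 1 100
97 12 125
2
99 7 150
99 3 150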

To use it, run the following commands:

gcc feasible_taiwei.c -o test.out
./test.out test1.txt

The sample output should be:

----------------
turtle141:Tools xisisu$ gcc feasible_taiwei.c
turtle141:Tools xisisu$ ./a.out test1.txt
In total, 5 domains
Domain 0, have 5 tasks
priority: 99, exec_time: 1, deadline: 75
priority: 98, exec_time: 1, deadline: 100
priority: 97, exec_time: 12, deadline: 125
priority: 97, exec_time: 1, deadline: 125
priority: 96, exec_time: 1, deadline: 200
The Server's Budget: 5, Period: 25
total utilization: 0.132333, utilization bound: 0.146954
Domain 1, have 10 tasks
priority: 99, exec_time: 7, deadline: 150
priority: 99, exec_time: 3, deadline: 150
priority: 99, exec_time: 2, deadline: 150
priority: 99, exec_time: 2, deadline: 150
priority: 99, exec_time: 3, deadline: 150
priority: 99, exec_time: 1, deadline: 150
priority: 99, exec_time: 1, deadline: 150
priority: 99, exec_time: 1, deadline: 150
priority: 98, exec_time: 1, deadline: 350
priority: 97, exec_time: 1, deadline: 1450
The Server's Budget: 10, Period: 50
total utilization: 0.136880, utilization bound: 0.143090
Domain 2, have 10 tasks
priority: 99, exec_time: 12, deadline: 300
priority: 99, exec_time: 13, deadline: 300
priority: 99, exec_time: 9, deadline: 300
priority: 99, exec_time: 4, deadline: 300
priority: 99, exec_time: 2, deadline: 300
priority: 99, exec_time: 1, deadline: 300
priority: 99, exec_time: 2, deadline: 300
priority: 99, exec_time: 2, deadline: 300
priority: 98, exec_time: 1, deadline: 400
priority: 97, exec_time: 1, deadline: 500
The Server's Budget: 22, Period: 100
total utilization: 0.154500, utilization bound: 0.157399
Domain 3, have 6 tasks
priority: 99, exec_time: 14, deadline: 600
priority: 99, exec_time: 3, deadline: 600
priority: 99, exec_time: 2, deadline: 600
priority: 99, exec_time: 3, deadline: 600
priority: 99, exec_time: 3, deadline: 600
priority: 98, exec_time: 60, deadline: 800
The Server's Budget: 35, Period: 200
total utilization: 0.116667, utilization bound: 0.127510
Domain 4, have 5 tasks
priority: 99, exec_time: 11, deadline: 1200
priority: 99, exec_time: 27, deadline: 1200
priority: 99, exec_time: 13, deadline: 1200
priority: 99, exec_time: 5, deadline: 1200
priority: 98, exec_time: 164, deadline: 1600
The Server's Budget: 82, Period: 400
total utilization: 0.149167, utilization bound: 0.150628
Total U is 1.000000
Passed Utilization check!
Passed Harmonic check!
Passed Single Schedulability check!
Schedulable!!!
----------------
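To interpret these numbers: a domain's total utilization is the sum of its tasks' exec_time/deadline ratios; for Domain 0, 1/75 + 1/100 + 12/125 + 1/125 + 1/200 = 0.132333, which matches the output. "Total U" is the sum of the servers' budget/period ratios: 5/25 + 10/50 + 22/100 + 35/200 + 82/400 = 1.000000.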

Reference:

[1] T.-W. Kuo and C.-H. Li, "A fixed-priority-driven open environment for real-time applications," IEEE Real-Time Systems Symposium, p. 256, 1999.

How to set up the environment for network experiments

RT-Xen does not make any changes to the network subsystem of Xen. If you want to run network experiments on RT-Xen, please set up the environment based on this and this tutorial.

Basically, you need to 

1. Set up the software bridge utility in Domain 0.

In Fedora/CentOS, please run 

yum install bridge-utils

to install the bridge package. 

In Ubuntu/Debian, run

apt-get install bridge-utils

to install it. You may need 'sudo' privileges to run the commands above.

2. Modify the network interface configuration files.

In Fedora/CentOS, the configuration files are under 

/etc/sysconfig/network-scripts/

Normally, you will find two configuration files under this path, ifcfg-eth0 and ifcfg-peth0 (the names may differ slightly depending on the Xen version), which are created by the network-bridge script of xend (under /etc/xen/scripts/). By default, eth0 is the bridge device, while peth0 is the network device of Domain 0. In ifcfg-eth0, there must be:

DEVICE=eth0
TYPE=Bridge
BOOTPROTO=static    # =dhcp if you need a dynamic IP address from your network service provider
BROADCAST=*.*.*.*
IPADDR=*.*.*.*
NETMASK=*.*.*.*
ONBOOT=yes
NM_CONTROLLED=no

The ifcfg-peth0 file should look like:

DEVICE=peth0
TYPE=Ethernet
BOOTPROTO=static    # or =dhcp
BROADCAST=*.*.*.*
IPADDR=*.*.*.*
NETMASK=*.*.*.*
ONBOOT=yes
BRIDGE=eth0
NM_CONTROLLED=no
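After restarting the network service, you can verify the bridge with the brctl tool from bridge-utils; the eth0 bridge should list peth0 (and, once a guest is running, its vif backend) as enslaved interfaces:

# brctl show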

In Ubuntu/Debian, the configuration file is

/etc/network/interfaces

Please follow these instructions to add the setup for eth0 and peth0 to the configuration file.
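For reference, a minimal /etc/network/interfaces bridge stanza (an illustrative sketch following the eth0/peth0 naming used above; addresses are placeholders and details may need adjustment for your setup) could look like:

auto eth0
iface eth0 inet static
    address *.*.*.*
    netmask *.*.*.*
    broadcast *.*.*.*
    bridge_ports peth0
    bridge_stp off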

3. Modify the config file of the guest domain. 

If your guest domain has a network interface, please add the following line into the config file:

vif=[ 'mac=*:*:*:*:*:*,bridge=eth0' ]

We recommend pinning Domain 0 to a dedicated physical CPU core so that it can better handle the communication and interrupts. To configure this, modify the grub entry and add 'dom0_max_vcpus=1 dom0_vcpus_pin' to the Xen boot parameters on the multiboot line.
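For example, based on the entry shown in the Grub section above, the multiboot line would become:

multiboot   /boot/xen-4.5-unstable.gz placeholder dom0_max_vcpus=1 dom0_vcpus_pin dom0_memory=2048M sched=rt_ds console=ttyS0 com1=115200n8 console=com1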