Lab Session 1
To use OpenMP, you have to:
- include <omp.h> if you use specific API calls
- compile with -fopenmp
In all exercises it is important to measure the execution time of your program. There are usually three ways to do it:
- the shell built-in time command
- the time command (usually in /usr/bin/time)
- the C call gettimeofday()
As a general rule, it is better to have a single source file compiled into two executables, one with and one without OpenMP. OpenMP-specific code (other than pragmas) can be guarded with #ifdef _OPENMP
Some code samples can be found here.
Exercise 1 (parallel for loop)
Take the code available here. It will be used as a base for the rest of the exercise.
- Write a C program which takes two int arguments. The first int will be used as the size of an array to be filled with random values between 0 and the second int.
- What is the difference between static and dynamic allocation in C?
- What is the maximum size of an array with static and dynamic allocation?
- Write a function fillRandomArray(...) which fills the array as described in the previous question.
- Write a function testOnRandom(...) which performs a primality test on each element of the array and returns a boolean array.
- Measure its execution time for various array sizes and a fixed maximum value (e.g. 20 000)
- What do you observe?
- Add OpenMP pragmas to all your for loops to make them parallel.
- Measure the execution time of the whole program and each for loop
- Perform a series of experiments to see how the execution time varies as a function of the array size for both the sequential and parallel versions.
- A campaign of experimentations should be fully automated, from the execution to the analysis of experimental results.
- Discuss the impact of parallelisation on the overall execution time (Results may depend on your operating system)
- Using the OpenMP C API, have your program display the number of threads at runtime.
- Modify your code to change the number of threads from 1 to 20 (use either the API or the OMP_NUM_THREADS environment variable)
- Measure the execution time of the whole program and of the testOnRandom(...) function.
- Plot the execution time of testOnRandom(...) as a function of the number of threads.
- Compare the execution time of your OpenMP code with 1 thread to the sequential version.
- Discuss the results.
- With the time command, you can differentiate between user-space and kernel-space execution time. How does this distribution vary with the number of parallel threads?
Exercise 2 (Shared and private variables)
We want to compute the sum of all the elements in our array. For this exercise we will not worry about overflows/underflows.
- Write a function sum(...) which takes an array of int as a parameter and has:
  - an int total declared before a for loop
  - a for loop which adds the values to total
- Compare the results of the sequential and the parallel version for large arrays
- Explain the results.
- Using pragma options, make the total variable private. What is the final result? Explain why.
- Use an OpenMP critical section to compute the correct result
- Discuss the impact and performance of this solution
- OpenMP supports reductions on variables in for loops.
- What is a reduction ?
- How do you use it in OpenMP ?
- Perform a reduction and measure the performance under various conditions. Discuss the results.
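The two correct approaches asked for above can be sketched as follows. The function names sum_critical and sum_reduction are only illustrative; the critical version serializes every addition (correct but slow), while the reduction clause gives each thread a private copy of total and combines them at the end:

```c
/* Sum with a critical section: correct, but every addition is
   serialized, so contention can make it slower than sequential code. */
long sum_critical(const int *a, int n) {
    long total = 0;
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        #pragma omp critical
        total += a[i];
    }
    return total;
}

/* Sum with a reduction: each thread accumulates into a private
   copy of total, and OpenMP combines the copies with + at the end. */
long sum_reduction(const int *a, int n) {
    long total = 0;
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}
```

Without -fopenmp both pragmas are ignored and the functions degrade to the plain sequential sum, which keeps the single-source approach from the introduction working.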