Even though computing devices are becoming more powerful with new technological advances, they can hardly keep up with resource-hungry applications that demand large amounts of CPU power, memory, and battery. Computation offloading can help augment computational capabilities beyond the boundaries of a local device by distributing the workload to remote computing resources. However, as soon as a task leaves the local device, it is exposed to varying network conditions and is likely to face a heterogeneous resource pool. Mobile cloud computing (MCC) was introduced to provide reliable computing power at scale in a pay-per-use manner. Despite all its advantages, MCC may bring along considerable network latencies, deployment costs, and privacy concerns. Mobile edge computing (MEC) can be considered an alternative to MCC, as it makes use of distributed edge servers in close proximity, which reduces communication latencies and enhances privacy. Ad-hoc computing makes opportunistic use of sporadically available resources, which might even be unreliable end-user devices. This paradigm facilitates better resource utilization, new resource-sharing opportunities, and highly decentralized architectures, but introduces a new level of complexity in terms of resource management, scheduling decisions, and fault tolerance.
All these paradigms make use of computation offloading: in one way or another, the computational workload needs to be transferred from the local device to a remote resource. This raises many questions regarding the design and implementation of computation offloading architectures. How can program code be partitioned, extracted, transferred, and eventually executed on remote devices that might be heterogeneous in nature? How is resource management performed, and which entity makes the scheduling decisions? How can volatile context information be taken into account when placing tasks on remote resources? Which tasks should be offloaded, and which are more efficiently computed on the local device? How can guarantees for reliability, privacy, and security be ensured? The answers to these and other questions determine for which applications and environments a particular distributed computing architecture is applicable.
Compared to the large number of research endeavors in the domain of computation offloading in ad-hoc or (mobile) edge computing, there are only a few approaches that provide prototypical implementations of real-world offloading systems. This leaves many questions open as to how the envisioned systems could be implemented and whether they would perform similarly in real-world environments. In this workshop, we particularly encourage researchers who are at any stage of designing or implementing a computation offloading architecture to share their current state of work. We aim to foster a lively discussion on the opportunities, solutions, and pitfalls disclosed in the process of designing and implementing an offloading architecture or parts of it. Thus, in addition to presentations of technical papers, the workshop will provide time to discuss results and experiences in a guided discussion session. A keynote will provide insights from implementing computation offloading architectures and conducting experiments in real-world scenarios.