Title | Student(s) | Supervisor | Description |
Using a Raspberry Pi as an Edge device | 1 | Fedor Smirnov | details |
Agile development of serverless functions with portable function templates | 1 | Sashko Ristov | details |
Cross-layered resource management in Cloud continuum | 1 | Sashko Ristov | details |
Experiments and data analysis for serverless computing | 1 | Sashko Ristov | details |
Experiments and data analysis for clouds | 1 | Thomas Fahringer | details |
Title | Using a Raspberry Pi as an Edge device |
Number of students | 1 |
Language | German or English |
Supervisor | Fedor Smirnov |
Description | The goal of this thesis is the design, implementation, and evaluation of the infrastructure required to use a Raspberry Pi (https://www.raspberrypi.org/) as an edge device for the execution of (serverless) functions. In addition to implementing this basic functionality, the created software is to be integrated into the Apollo platform (https://github.com/Apollo-Core) developed by the DPS group. |
Tasks |
|
Theoretical Skills | Distributed Systems, Cloud Computing |
Practical Skills | Java |
Additional Information | One (or, if necessary, multiple) Raspberry Pi devices (and potentially other edge devices) will be provided for the work on this topic. Depending on the main focus of interest, the topic can take various directions: an extensive survey and comparison of different edge frameworks (focus on technology review), the implementation of automatic detection of edge devices with subsequent deployment and invocation of functions on them (focus on software development), or the implementation of an experimental network of fog/edge devices (focus on working with edge hardware). |
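As a minimal illustration of the invocation path such infrastructure needs, the sketch below builds an HTTP request for a function deployed on an edge device. The host, port, and `/functions/...` path are invented for illustration and are not part of Apollo.

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

// Hypothetical sketch: constructing an HTTP invocation request for a
// (serverless) function running on an edge device such as a Raspberry Pi.
// Endpoint layout is an assumption, not an existing API.
public class EdgeInvoker {
    // Builds a POST request carrying the function input as a JSON body.
    static HttpRequest buildInvocation(String host, int port, String function, String jsonInput) {
        URI uri = URI.create("http://" + host + ":" + port + "/functions/" + function);
        return HttpRequest.newBuilder(uri)
                .timeout(Duration.ofSeconds(30))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonInput))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildInvocation("192.168.0.42", 8080, "resize", "{\"width\":128}");
        System.out.println(req.uri()); // http://192.168.0.42:8080/functions/resize
    }
}
```

Sending the request with `java.net.http.HttpClient` and registering discovered devices would be the next steps in the software-development variant of the topic.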
Title | Cross-layered resource management in Cloud continuum |
Number of students | 1 |
Language | English |
Supervisor | Sashko Ristov |
Description | The cloud continuum offers a variety of heterogeneous computing resources, each with specific properties in terms of scalability, latency, performance, capacity, provisioning delay, economic cost, flexibility, portability, etc. For example, VMs are cheaper and more flexible than serverless functions, but have a much higher provisioning delay. Many existing computing engines use resources of a single cloud provider or of a single resource type, which locks the user into that type's pros and cons. This thesis will research methods to develop CrossFlow, a scalable and portable platform that runs complex applications across various types of cloud continuum resources, allowing the user to exploit the strengths of each resource type. Applications are built with the existing AFCL language developed by the DPS group and run with the existing enactment engine for serverless applications, which is to be extended for cross-layered resources. |
Tasks |
|
Theoretical Skills | Distributed Systems, Cloud Computing, Functions as a Service, Fault tolerance. |
Practical Skills | Java, Cloud providers APIs. |
Additional Information | The following material/tools are useful for this thesis:
|
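The VM-versus-function trade-off described above can be sketched as a simple scoring problem. The sketch below (all names and numbers invented) picks a resource type by weighing provisioning delay against monetary cost for a task's runtime; a real cross-layered scheduler would consider far more dimensions.

```java
import java.util.Comparator;
import java.util.List;

// Illustrative sketch: choose between resource types (e.g. VM vs. serverless
// function) by combining provisioning delay and cost. Weights and prices are
// made up for demonstration.
public class ResourcePicker {
    record Resource(String name, double provisioningDelaySec, double costPerSec) {}

    // Score = weighted provisioning overhead + monetary cost over the task's runtime.
    static Resource pick(List<Resource> options, double taskRuntimeSec, double delayWeight) {
        return options.stream()
                .min(Comparator.comparingDouble(
                        r -> delayWeight * r.provisioningDelaySec + r.costPerSec * taskRuntimeSec))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<Resource> opts = List.of(
                new Resource("vm", 60.0, 0.0001),       // slow to provision, cheap per second
                new Resource("function", 0.5, 0.0010)); // near-instant, pricier per second
        // A short task favors the function; a long task amortizes the VM start-up.
        System.out.println(pick(opts, 10, 1.0).name());     // prints "function"
        System.out.println(pick(opts, 100000, 1.0).name()); // prints "vm"
    }
}
```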
Title | Agile development of serverless functions with portable function templates |
Number of students | 1 |
Language | English |
Supervisor | Sashko Ristov |
Description | Porting a serverless function from one cloud provider to another is a complex task, as it may require considerable development effort to rewrite the code for all cloud services the function uses (e.g. S3, RDS, …). For example, after migrating a function from IBM to AWS, a user may prefer it to use S3 rather than IBM Cloud Storage in order to reduce latency, which means the developer has to rewrite the function to use S3 instead of IBM storage. The goal of this master thesis is to research methods that simplify the portability of serverless functions by developing a dependency-aware faasifier that allows developers to write annotated "function templates" independently of any cloud provider. Students will explore and learn how to model the cloud service types a function template uses in order to abstract them from a specific cloud provider. Once a function template is developed, the faasifier adapts its code into function implementations for the cloud FaaS providers where they should run. |
Tasks |
|
Theoretical Skills | Distributed Systems, Cloud Computing, Functions as a Service. |
Practical Skills | Java, Node.js, Cloud providers APIs. |
Additional Information | The following tools/material can help with this master thesis:
student: Jakob Wallnöfer, supervisor: Sashko Ristov, https://github.com/qngapparat/js2faas |
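The idea of provider-independent templates can be illustrated with a minimal sketch (all names invented): the function's logic codes against an abstract store interface, and the faasifier would bind a provider-specific implementation (e.g. an S3 or IBM Cloud Object Storage client) when generating each function implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a provider-independent function template.
// ObjectStore, InMemoryStore, and handler are invented for demonstration;
// a real faasifier would emit S3/COS client code in place of InMemoryStore.
public class TemplateDemo {
    interface ObjectStore {
        void put(String key, String value);
        String get(String key);
    }

    // Stand-in binding used here so the sketch runs without cloud credentials.
    static class InMemoryStore implements ObjectStore {
        private final Map<String, String> data = new HashMap<>();
        public void put(String key, String value) { data.put(key, value); }
        public String get(String key) { return data.get(key); }
    }

    // The function template: its logic refers only to the abstract ObjectStore.
    static String handler(ObjectStore store, String input) {
        store.put("last-input", input);
        return "stored:" + input;
    }

    public static void main(String[] args) {
        ObjectStore store = new InMemoryStore();
        System.out.println(handler(store, "hello")); // prints "stored:hello"
        System.out.println(store.get("last-input")); // prints "hello"
    }
}
```

The dependency-aware part of the thesis would go further: detecting which service types a template actually uses and generating only those bindings.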
Title | Experiments and data analysis for serverless computing |
Number of students | 1 |
Language | English |
Supervisor | Sashko Ristov |
Description | The aim of this master thesis is to conduct a series of experiments to evaluate the properties and constraints of multiple regions of widely known FaaS systems (e.g. AWS Lambda, IBM Cloud Functions, Google Cloud Functions, Alibaba Function Compute, etc.). Numerous function implementations of serverless applications, represented as function choreographies (FCs), will be tested under various configurations (concurrency, assigned memory, latency, region, programming language, etc.). The times for the functions and FCs are measured and then evaluated. The measured times include: the time until a function request is submitted and the function is started, the time for the execution of the function and the whole FC (with measurement of memory and CPU consumption), the time to receive the response from the FaaS system, and more. A large number of experiments will be run. The measured data must be stored in a database and then statistically evaluated and visualized. A special feature is the consideration of highly scalable FCs, which run e.g. tens of thousands of functions. The aim of this work is a better understanding of serverless computing for different FCs and FaaS systems. The trade-off between performance and costs will be examined more closely. The applications are built with the existing AFCL language developed by the DPS group and run with the existing enactment engine. |
Tasks |
|
Theoretical Skills | Distributed Systems, Cloud Computing, Functions as a Service. |
Practical Skills | Java, Cloud providers APIs. |
Additional Information | The following material/tools are useful for this thesis:
|
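The core measurement the experiments above repeat many times is the end-to-end round-trip of one function invocation. The sketch below (names invented) times a single call; the stand-in `Supplier` would be replaced by a real FaaS request, e.g. an HTTP call to AWS Lambda.

```java
import java.util.function.Supplier;

// Illustrative sketch: measure the round-trip time of one function
// invocation, the basic data point stored in the experiments database.
public class InvocationTimer {
    record Sample(Object result, long roundTripMs) {}

    static Sample timeInvocation(Supplier<Object> invoke) {
        long start = System.nanoTime();
        Object result = invoke.get();                       // submit request, wait for response
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        return new Sample(result, elapsedMs);
    }

    public static void main(String[] args) {
        // Stand-in for a real FaaS call; the sleep simulates network + execution time.
        Sample s = timeInvocation(() -> {
            try { Thread.sleep(25); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            return "ok";
        });
        System.out.println(s.result() + " in " + s.roundTripMs() + " ms");
    }
}
```

Repeating this per region, memory size, and language, and inserting each `Sample` into a database, yields the dataset the thesis would analyze statistically.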
Title | Experiments and data analysis for clouds |
Number of students | 1 |
Language | German |
Supervisor | Thomas Fahringer |
Description | The goal of this thesis is to conduct a series of experiments to evaluate the properties and capabilities of cloud infrastructures (e.g. Amazon EC2). Numerous virtual machine instances (VMs) will be tested with small programs. The times for the VMs and the programs are measured and then evaluated. The measured times include: the time until a VM is allocated and started, the time for the execution of the programs (with measurement of memory and CPU consumption), the time to release the VM again, and more. A large number of experiments will be run (via scripts). VMs and programs must be instrumented beforehand. The measured data must be stored in a database and then statistically evaluated and visualized. A special feature is the consideration of spot instances, which are particularly cheap but can be withdrawn by the cloud provider at any time. To obtain such spot VMs, a bidding procedure must be implemented. The aim of this work is a better understanding of cloud resources for different programs. The trade-off between performance and costs will be examined more closely. |
Tasks |
|
Theoretical Skills | Basic knowledge of statistics |
Practical Skills | A scripting language, databases, data visualization |
Additional information |