Scheduling algorithms for the dynamic management, placement and scaling of applications embedded in Docker containers and composed as microservices ac

Lead Research Organisation: University of St Andrews
Department Name: Computer Science

Abstract

The number and variety of devices being built and used every day continues to grow at enormous scale, and with it has come the growth of new types of applications that are highly latency-sensitive. For these applications the traditional cloud model, whereby an application communicates with one or more remote data centres, is not optimal for meeting latency-sensitive requirements. The fog, initially proposed by Cisco, is a middle layer between the cloud and the edge of the network, allowing applications to run on hardware and resources throughout the network, such as routers and switches. This allows an application to meet its latency-sensitive requirements by using already-provisioned network hardware close to the edge of the network.

With the rise of the fog, a new set of research challenges has emerged. These include how to program for the fog, how to manage and place applications across the hardware resources on the network, and how to scale those applications in an optimal manner. There is currently no way for developers to program their applications across the "full stack" from the edge to the cloud (and make full use of the advantages that each layer brings).

My solution is this: a developer achieves this full-stack programming capability by composing their application as a series of microservices, which are then embedded inside Docker containers and subsequently orchestrated across the full stack of edge, fog, and cloud. My PhD will focus on building the scheduling algorithms responsible for deciding how to intelligently orchestrate the user's application in order to meet its desired requirements.

Publications

Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
EP/N509759/1                                   01/10/2016  30/09/2021
1950772            Studentship   EP/N509759/1  27/09/2017  30/06/2021  Nnamdi Ekwe-Ekwe
 
Description This work is still ongoing, but I will be in a position to publish concrete outcomes in the next submission period.

Introduction:

There has been a sizeable and significant increase in the number and variety of devices being built and used on a daily basis. Alongside this increase has come the growth of new types of applications that are highly latency-sensitive, requiring a near-immediate response to meet application requirements. An example is an application that a self-driving car uses to recognise an object in front of it.

Such an application requires a near-immediate response, so sending an image of that object to the cloud (in some remote data centre) to be processed and recognised, and then sending the result back to the car, takes too long for the car to obtain the information and take the necessary action. In a case such as this, one might have to run that processing on the car itself, or elsewhere close to the car, in order to avoid this latency. For this class of workload, therefore, the traditional cloud model, where an application communicates with one or more remote data centres, is no longer optimal for its latency-sensitive requirements.

We are looking at using devices not normally used within standard application deployments, called edge and fog devices. Edge devices are devices such as phones, TVs, and computers; fog devices, such as wireless routers or switches, can be seen as more powerful than these. These devices have limited processing power and storage, but they can still run workloads, and they have the advantage of being physically close to the end user, thereby delivering a faster response.

This award is still ongoing, but what we are looking to do is use any device as a potential server, allowing us to run the most latency-sensitive of workloads on it, meeting their requirements and avoiding the use of the (in some cases) high-latency cloud. My research focuses on building scheduling algorithms that are "resource-aware", placing these applications on the best available device in order to meet their requirements.
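As a rough illustration of what "resource-aware" placement means here, the following minimal Python sketch (with hypothetical device names and resource figures, not taken from the actual scheduler) filters out devices that cannot satisfy a workload's resource demands and then prefers the lowest-latency feasible device:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Device:
    """A candidate host in the edge/fog/cloud hierarchy."""
    name: str
    latency_ms: float   # round-trip latency to the end user
    cpu_free: float     # available CPU cores
    mem_free_mb: int    # available memory (MB)


@dataclass
class Workload:
    """Resource demands of one containerised microservice."""
    cpu_needed: float
    mem_needed_mb: int


def place(workload: Workload, devices: List[Device]) -> Optional[Device]:
    """Resource-aware placement: among devices with enough free CPU and
    memory, choose the one with the lowest latency to the user."""
    feasible = [d for d in devices
                if d.cpu_free >= workload.cpu_needed
                and d.mem_free_mb >= workload.mem_needed_mb]
    if not feasible:
        return None  # no device can host this workload
    return min(feasible, key=lambda d: d.latency_ms)


# Hypothetical pool spanning the full stack of edge, fog, and cloud.
devices = [
    Device("cloud-vm",   latency_ms=80.0, cpu_free=16.0, mem_free_mb=65536),
    Device("fog-router", latency_ms=5.0,  cpu_free=1.0,  mem_free_mb=512),
    Device("edge-phone", latency_ms=2.0,  cpu_free=0.5,  mem_free_mb=256),
]

# A small service lands on the fog router: the phone is closer but lacks
# the CPU, and the cloud VM is feasible but far slower to reach.
chosen = place(Workload(cpu_needed=1.0, mem_needed_mb=256), devices)
```

A real scheduler would of course weigh many more dimensions (bandwidth, energy, churn of devices joining and leaving), but the feasibility-filter-then-optimise shape is the core idea.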

Thus far, I have performed the initial motivating experiments to validate our theory; I am now focusing on building the algorithms that perform the necessary scheduling.
Exploitation Route This research helps in understanding the advantages and disadvantages of edge and fog devices when deploying latency-sensitive applications to them. It can also benefit the sectors listed below, as these typically host applications that require fast responses to their requests.
Sectors Aerospace, Defence and Marine; Agriculture, Food and Drink; Digital/Communication/Information Technologies (including Software); Energy; Healthcare; Retail; Transport