Edge Migration Using Containers

Project Objectives

The exponential growth of mobile devices and the Internet of Things (IoT) has inspired the concept of bringing cloud computation closer to the user. Installing small, cloud-like infrastructure at the network edge lets users access the resources they need within a few hops, allowing real-time applications with very low latency requirements to obtain their desired service. Proximity raises a further concern: how to provide users with good performance even while they move across locations. This constraint led us to a layered framework for migrating a real-time application running inside a container. The project includes network design, performance evaluation, and large-scale prototyping.

Technology Rationale

Migrating computing resources to a less loaded server, or to one closer to the user in the network sense, improves QoS. We perform application-level live container migration, which achieves very low to no service downtime and offers tangible benefits over virtual machine (VM) migration. The technique can be applied during system failures, load balancing, and resource reallocation. The goal of this research is to provide more efficient solutions for service-provisioning platforms in edge clouds. Moreover, the central Software Defined Networking (SDN) controller in our architecture regulates the edge clouds and reduces network complexity, which in turn helps the platform support real-time applications.
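The downtime cost of a live migration can be illustrated with a simple stop-and-copy model: the user-perceived interruption is roughly the final checkpoint, the transfer of residual dirty memory, and the restore at the destination edge. The function below is a hypothetical sketch with invented default timings, not measurements from this project.

```python
# Illustrative downtime model for live container migration.
# All parameter values are assumptions for the sketch, not project data.

def migration_downtime(dirty_mb: float, bandwidth_mbps: float,
                       checkpoint_s: float = 0.2, restore_s: float = 0.3) -> float:
    """Estimate stop-and-copy downtime (seconds) for a live migration:
    final checkpoint + transfer of dirty memory + restore at the new edge."""
    transfer_s = (dirty_mb * 8) / bandwidth_mbps  # megabytes -> megabits / Mbps
    return checkpoint_s + transfer_s + restore_s

# Example: 50 MB of residual dirty pages over a 400 Mbps edge-to-edge link
print(round(migration_downtime(50, 400), 2))  # 1.5 seconds of interruption
```

Such a model makes the trade-off explicit: a pre-copy phase that shrinks the residual dirty memory directly shrinks the user-visible downtime.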

Technology Approach

In this project we integrate ORBIT resources into a large-scale distributed platform, enabling us to test the container migration technique across multiple networks and thereby create a flexible, realistic, and scalable setup. As the mobile user moves from one location to another, the resource manager at the node determines the resources available at neighboring edges, while the service manager reports to the SDN controller ahead of time based on the required latency constraints. The SDN controller handles the control and decision logic, and in case of migration the application logic at the next edge serves the UE with the desired QoS, as shown in the figure below.
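The local and global decision steps described above can be sketched as follows. All names here (EdgeNode, needs_migration, pick_target) are illustrative, not the project's actual API: the serving edge flags a latency-budget violation, and the controller picks the least-loaded feasible edge.

```python
# Sketch of the migration-decision flow: local check at the serving edge,
# global target selection at the SDN controller. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    cpu_free: float      # fraction of CPU currently available
    latency_ms: float    # observed round-trip latency to the UE

def needs_migration(serving: EdgeNode, latency_budget_ms: float) -> bool:
    """Local decision: report to the SDN controller if the budget is violated."""
    return serving.latency_ms > latency_budget_ms

def pick_target(candidates: list, latency_budget_ms: float):
    """Global decision (at the controller): least-loaded edge meeting the budget."""
    feasible = [e for e in candidates if e.latency_ms <= latency_budget_ms]
    return max(feasible, key=lambda e: e.cpu_free, default=None)

# Example: the serving edge exceeds a 10 ms budget, so a neighbor is chosen.
serving = EdgeNode("edge-C", cpu_free=0.2, latency_ms=25.0)
neighbors = [EdgeNode("edge-A", 0.1, 4.0), EdgeNode("edge-B", 0.6, 8.0)]
if needs_migration(serving, latency_budget_ms=10.0):
    target = pick_target(neighbors, latency_budget_ms=10.0)
    print(target.name)  # edge-B (most free CPU among feasible edges)
```

The split mirrors the architecture: edges make cheap local observations, while only the controller, with its global view, commits to a migration.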

Migration takes place in three defined phases: decision, initiation, and completion. The local decision phase occurs at the edge, where the SDN controller is informed of degrading service quality. Globally, the SDN controller polls all edges and determines the available resources, which in turn enables it to make the appropriate migration decision. Upon receiving the SDN controller's migration decision, the previously serving edge initiates the transfer of memory as well as data to the new edge. Finally, the SDN controller instructs the new edge to start the migrated container, and the old edge stops its container, preserving service continuity for the user.
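The three phases above can be viewed as a tiny state machine. The sketch below is a toy model: the phase names follow the text, while the class and its transition method are invented for illustration.

```python
# Toy state machine for the three migration phases (decision -> initiation
# -> completion). The API is hypothetical; only the phase names come from
# the protocol described in the text.
PHASES = ["decision", "initiation", "completion"]

class Migration:
    def __init__(self):
        self.phase = None   # no migration in progress yet
        self.log = []       # ordered record of completed phases

    def advance(self) -> str:
        """Move to the next phase; raises IndexError past 'completion'."""
        nxt = PHASES[0] if self.phase is None else PHASES[PHASES.index(self.phase) + 1]
        self.phase = nxt
        self.log.append(nxt)
        return nxt

m = Migration()
m.advance()  # SDN polls the edges and makes the migration decision
m.advance()  # old edge transfers memory and data to the new edge
m.advance()  # new edge starts the container; old edge stops serving
print(m.log)  # ['decision', 'initiation', 'completion']
```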

Project Status

Real-time results have been achieved by emulating resources on the ORBIT testbed. We have collected application processing time, delay statistics, and various other migration parameters. We have also measured the effect of load variation at each edge node while running multiple applications, in order to determine when migration should occur. Further results are in preparation and submission for publication.


Prof. Dipankar Raychaudhuri
ray (AT) winlab (DOT) rutgers (DOT) edu


Maheshwari, Sumit, Shalini Choudhury, Ivan Seskar, and Dipankar Raychaudhuri. “Traffic-aware dynamic container migration for real-time support in mobile edge clouds.” In 2018 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS), pp. 1-6. IEEE, 2018.

Maheshwari, Sumit, Wuyang Zhang, Ivan Seskar, Yanyong Zhang, and Dipankar Raychaudhuri. “EdgeDrive: Supporting Advanced Driver Assistance Systems using Mobile Edge Clouds Networks.” In IEEE INFOCOM 2019-IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pp. 1-6. IEEE, 2019.