The Octopus System for IoT

Project Objectives

This project develops middleware abstractions and programming models for the Internet of Things. The goal is to enable non-specialized developers to deploy sensors and applications without detailed knowledge of the underlying technologies and networks. The architecture also derives aggregated context information from observed low-level sensor readings, with the objective of converting raw data into knowledge.

Technology Rationale

In recent years, many in the research and industry communities have forecast “Internet of Things” applications as the next big thing. To date, there have been many sensor deployments in a wide variety of environments, but the people deploying and using these systems are primarily experts in the field. Widespread adoption of ubiquitous systems is hindered by the high level of technical knowledge required to develop and deploy them. For sensor networks and ubiquitous computing technology to achieve widespread adoption in home, business, and industrial settings, IT staff must be able to use these systems without specialized training.

To fill this need we developed Octopus, an open-source middleware for ubiquitous systems. A clear separation of concerns between application developers, sensor designers, and system administrators reduces the complexity of application and sensor deployment. The Octopus abstraction layers allow non-specialized developers to deploy sensors and create applications with only intermediate knowledge of a single area of the deployment. The two abstraction layers are called the World Model and the Aggregator. The world model separates the concerns of the application developer from data analysis and processing in the system. The aggregator separates the sensing layer from data analysis and processing, and also allows the system to seamlessly support multiple sensing layers during data processing.

Technical Approach

Application Development: The application developer views the system through the world model, a named hierarchy of physical and conceptual objects and their attributes, similar in spirit to LDAP but with a time dimension. Each item in the world model is a name and a set of attributes with primitive or user-defined types. In Octopus, every attribute has a creation time and an expiration time that denote when its value is valid. This allows queries to capture temporal relationships along with hierarchical relationships.

Queries can capture implicit relationships between items through their names or explicit relationships through their attributes. For instance, the URI structure <location>.<type class>.<name> implicitly captures the relationship between items in the same location or of the same type class. If a client wishes to find information for all of the doors located on the 2nd floor of the Louvre, it could search for objects that match the pattern “Louvre.2.door.*”.
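As a rough illustration of this style of query, the sketch below models the world model as a plain dictionary and resolves the “Louvre.2.door.*” pattern with glob matching. The item names and the search helper are invented for illustration; the real world model is queried through a TCP API.

```python
from fnmatch import fnmatchcase

# Hypothetical in-memory world model: URI name -> attribute dictionary.
# Names follow the <location>.<type class>.<name> convention.
world_model = {
    "Louvre.2.door.entrance": {"state": "open"},
    "Louvre.2.door.gallery":  {"state": "closed"},
    "Louvre.3.door.office":   {"state": "closed"},
    "Louvre.2.window.north":  {"state": "closed"},
}

def search(pattern):
    """Return the items whose URI names match a glob-style pattern."""
    return {name: attrs for name, attrs in world_model.items()
            if fnmatchcase(name, pattern)}

# All doors on the 2nd floor of the Louvre:
doors = search("Louvre.2.door.*")
```

Because the location and type class are encoded in the name, one pattern selects every item sharing that prefix without any explicit relationship records.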

Figure 1: The world model allows developers to use a TCP protocol to query and subscribe to system information through a simple API.

Names do not change over time, so only immobile items should have implicit locations in their names. Attributes can change over time, so a mobile object has its location explicitly stated as an attribute rather than implicitly stated in its name. Thus a client that wishes to draw the current locations of all mobile items would request all items that have a “location” attribute.
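A minimal sketch of how timestamped attributes could support such a query, assuming an in-memory representation. The field names (created, expires) and item names are illustrative, not the actual Octopus data format.

```python
# Hypothetical attribute record: every value carries a creation time and an
# expiration time that bound when the value is valid.
world_model = {
    "Louvre.cart.17": {
        "location": {"value": "Louvre.2.hall", "created": 100.0, "expires": 200.0},
    },
    "Louvre.2.door.entrance": {
        "state": {"value": "open", "created": 100.0, "expires": float("inf")},
    },
}

def items_with_valid_attribute(attr, now):
    """Return items whose named attribute is valid at time `now`."""
    return [name for name, attrs in world_model.items()
            if attr in attrs
            and attrs[attr]["created"] <= now < attrs[attr]["expires"]]

# A client drawing mobile items asks for everything with a "location" attribute:
mobile = items_with_valid_attribute("location", now=150.0)
```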

Sensor Deployment

The sensor expert’s view of the system is a sink that expects data in a specified format. A sensor provides the aggregator with its physical layer, sensor ID, and sensed data. This allows a sensor expert to add any kind of sensor or sensor middleware to the system after converting its output to the format expected by the Octopus aggregator. The details of data processing and user interfaces are hidden during sensor deployment. This also allows the system to support low-power and transmit-only sensors because they never need to respond to requests.
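The flat record a sensor hands to the aggregator might be packed as in the sketch below. The field widths and ordering here are assumptions for illustration, not the actual Octopus wire format.

```python
import struct

# Illustrative framing of one sensor sample: physical layer ID (1 byte),
# sensor ID (8 bytes), data length (4 bytes), then the sensed bytes.
HEADER = "!BQI"  # network byte order
HEADER_LEN = struct.calcsize(HEADER)  # 13 bytes

def pack_sample(phy_layer_id, sensor_id, data):
    """Format a sensed value into the flat record the aggregator expects."""
    return struct.pack(HEADER, phy_layer_id, sensor_id, len(data)) + data

def unpack_sample(msg):
    """Recover (physical layer ID, sensor ID, sensed bytes) from a record."""
    phy, sid, length = struct.unpack(HEADER, msg[:HEADER_LEN])
    return phy, sid, msg[HEADER_LEN:HEADER_LEN + length]

msg = pack_sample(1, 0x00CAFE, b"\x19\x80")  # e.g. a raw temperature reading
```

A transmit-only sensor needs nothing beyond this one-way formatting step, which is what keeps the sensor layer compatible with low-power hardware.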

This does not prevent sensors from being more interactive – if a sensor is powerful enough (e.g. a cell phone) it can interact with the system in the analysis layer rather than in the sensor layer. The sensor layer is for sensors and sensor networks that are too energy- or resource-constrained to perform their own analysis, or that have programming models or network architectures that make data analysis very difficult.

Figure 2: Sensors and sensor middleware format information for the aggregator. The details of data processing are hidden from the sensors.

Data Analysis

The data analyst views the system as an aggregator with raw sensor data and a world model with processed sensor data and other high-level information. Analysis programs, called solvers in Octopus, subscribe to sensor data from the aggregator by specifying patterns of physical layer IDs and sensor IDs. Solvers request data from the world model in the same way that client applications query the system, and they are also free to use information from sources outside of the aggregator and world model. When solvers create new data they send it back to the world model. This mechanism allows each solver to be a standalone process, independent of other components.
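In this pipe-like style, a solver can consume an attribute that another solver produced and publish a higher-level one back to the world model. A minimal sketch, with the world model modeled as a dictionary and all names invented; the real interfaces are TCP APIs.

```python
# Hypothetical world model state after a temperature solver has run.
world_model = {
    "Winlab.3.room.lab":        {"temperature": 23.15},
    "Winlab.3.room.serverroom": {"temperature": 31.40},
}

def comfort_solver():
    """Derive a coarse "comfort" attribute from temperatures other solvers
    produced, without ever touching the aggregator or raw sensor data."""
    for attrs in world_model.values():
        t = attrs.get("temperature")
        if t is not None:
            attrs["comfort"] = "comfortable" if 18.0 <= t <= 26.0 else "uncomfortable"

comfort_solver()
```

Because this solver reads and writes only world model attributes, it runs as a standalone process and works unchanged no matter which sensing layer produced the underlying readings.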

The world model and aggregator APIs allow developers to use any combination of platforms and development tools that support TCP/IP, from smart phones to desktop computers. Tools to simplify development are available in multiple languages, including Java, C++, Actionscript, and Ruby. Although the possible complexity of a solver is high, we advocate the UNIX philosophy of small, independent units whose results can be combined to create high-level results, encouraging data reuse. This approach simplifies development and data analysis, and the system has been used by other groups for demos and data collection.

Figure 3: Data analysis is performed by independent programs called “solvers.”

Results To Date & Future Work Plan

Prototype Deployment

The simplest solvers take raw sensor information from the aggregator, process it, and put it into context in the world model. For example, a temperature value from a sensor is processed by a temperature solver and associated with an item in the world model. More complicated processing of that data is left to other solvers, which means that many solvers never process sensor data directly and never need to connect to the aggregator.

During the past year, we have used Octopus to power a “smart building” at WINLAB that tracks assorted office information such as room and device usage, item locations, and social gatherings. The system provides users with information through a live status map, email, and Twitter, and it simplifies sensor deployment.

The most important part of Octopus is its ability to adapt to user needs. There is no way to predict everything that users will want when a system is first deployed – this is why we cannot expect experts to always deploy sensor systems. By taking this technology out of the hands of experts and putting it into the hands of intermediate users, we have created a system that is much more useful. We believe that bringing this technology within the reach of interested users is the next step towards successful Internet of Things deployments.
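A hypothetical temperature solver of this simplest kind might look like the sketch below, with a queue standing in for the aggregator subscription and a list standing in for world model updates. The sensor-to-item mapping and the unit-conversion formula are invented for illustration.

```python
from queue import Queue

aggregator_feed = Queue()   # delivers (physical layer ID, sensor ID, raw value)
world_model_updates = []    # collects (item name, attribute, value)

# Hypothetical mapping from a sensor to the world model item it describes.
SENSOR_TO_ITEM = {0x00CAFE: "Winlab.3.room.lab"}

def to_celsius(raw):
    """Convert a raw reading to degrees Celsius (illustrative formula)."""
    return raw / 100.0

def run_once():
    """Take one sample from the aggregator and put it into context."""
    phy, sensor_id, raw = aggregator_feed.get()
    item = SENSOR_TO_ITEM[sensor_id]
    world_model_updates.append((item, "temperature", to_celsius(raw)))

aggregator_feed.put((1, 0x00CAFE, 2315))  # a raw sample arrives
run_once()
```

All downstream solvers then work from the “temperature” attribute in the world model, so only this one small program ever touches the aggregator.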

Figure 4: A live status map tracks the current location and state of objects in real time. A dropdown box allows users to filter viewable objects by type and name.

Figure 5: Different users have different needs – in our lab the status of fresh coffee was of the utmost importance so coffee status was distributed through email or Twitter.


Prof. Yanyong Zhang
732-932-6857 Ext. 646
yanyong.zhang (DOT) gmail (DOT) com

Prof. Richard Martin
rmartin (AT) cs (DOT) rutgers (DOT) edu

Prof. Rich Howard
732-932-6857 Ext. 645
reh (AT) winlab (DOT) rutgers (DOT) edu


Xu, C.; Firner, B.; Zhang, Y.; Howard, R.; Li, J.; Lin, X.: “Improving RF-Based Device-Free Passive Localization in Cluttered Indoor Environments Through Probabilistic Classification Methods”, IPSN’12, pp. 209-220

Firner, B.; Xu, C.; Howard, R.; Zhang, Y.: “Multiple Receiver Strategies for Minimizing Packet Loss in Dense Sensor Networks”, MobiHoc’10, pp. 211-220