The complexity of implementing large-scale distributed computations has motivated new programming models. Software frameworks such as MapReduce, Hadoop, Dryad, DryadLINQ, Pig, and SCOPE aim to hide the complex details of data partitioning and distribution, scheduling, synchronization, and fault tolerance. Our experience indicates that real-life applications must still be implemented as a collection of related programs when using some of these frameworks. Because the execution of these programs must be monitored and coordinated externally, several issues concerning scheduling, synchronization, and fault tolerance resurface.
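The core of the programming model these frameworks share can be illustrated with a minimal sketch. The sketch below is not code from any of the systems named above; it is a toy, single-machine rendition of the map, shuffle, and reduce phases, shown with the canonical word-count example:

```python
from collections import defaultdict

def map_phase(records, map_fn):
    """Apply the user's map function to every input record."""
    for record in records:
        yield from map_fn(record)

def shuffle(pairs):
    """Group intermediate key/value pairs by key (the 'shuffle' step)."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped.items()

def reduce_phase(grouped, reduce_fn):
    """Apply the user's reduce function once per distinct key."""
    return {key: reduce_fn(key, values) for key, values in grouped}

# Word count, the canonical MapReduce example.
def wc_map(line):
    for word in line.split():
        yield word, 1

def wc_reduce(word, counts):
    return sum(counts)

lines = ["to be or not to be"]
result = reduce_phase(shuffle(map_phase(lines, wc_map)), wc_reduce)
print(result)  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

In a real framework, each phase runs in parallel across partitioned data on many machines, and the run-time supplies the scheduling, synchronization, and fault tolerance that this sketch omits.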


To address these limitations, we have developed a series of alternatives to MapReduce, the original programming model and run-time from Google. One such system is Oivos, a high-level declarative programming model and underlying run-time in which computations can span multiple heterogeneous and interdependent data sets. Another is Update Maps, which combines the convenience of a key/value database with the performance of a batch-oriented approach. Cogset goes a step further in re-thinking the original MapReduce architecture: it fuses reliable storage and parallel data processing into a single system that ensures good data locality.
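To give a flavor of the idea behind combining a key/value interface with batch-oriented performance, here is a hypothetical sketch (the class and method names are ours, not the Update Maps API): point updates are buffered and later applied in one sorted batch pass, rather than as random-access writes.

```python
class UpdateMap:
    """Toy sketch: buffer point updates, apply them in sorted batches.

    Batching amortizes write cost over many updates, in the spirit of
    combining a key/value interface with batch-oriented processing.
    The names here are illustrative, not the actual Update Maps API.
    """
    def __init__(self, merge_fn):
        self.merge_fn = merge_fn      # how to combine an update with a value
        self.store = {}               # the "persistent" table
        self.pending = []             # buffered updates

    def update(self, key, delta):
        self.pending.append((key, delta))   # O(1): no random-access write

    def flush(self):
        # Sort pending updates so each stored key is touched once,
        # mimicking a sequential batch pass over the table.
        self.pending.sort(key=lambda kv: kv[0])
        for key, delta in self.pending:
            self.store[key] = self.merge_fn(self.store.get(key), delta)
        self.pending.clear()

    def get(self, key):
        self.flush()                  # reads see all buffered updates
        return self.store.get(key)

counts = UpdateMap(lambda old, d: (old or 0) + d)
for k in ["a", "b", "a"]:
    counts.update(k, 1)
print(counts.get("a"))  # 2
```

On disk-backed storage, the batch pass would translate into sequential I/O over the table, which is where the performance advantage over per-update random writes comes from.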

We have also been involved in developing Nornir, a similar run-time developed primarily by our partners at UiO. We are currently also building a Hadoop-compatible run-time targeting massive live video streaming analysis.


The Broadway project is a successor to Cogset, which established an efficient computing platform for batch-oriented analytics. Broadway expands this focus to include real-time computations, more compute-intensive workloads, and more flexible and elastic deployment scenarios. Beyond executing conventionally on a local cluster, Broadway applications can migrate seamlessly between machines, transparently checkpoint their state in the cloud, and scale out to employ available, possibly transient, computing resources in the cloud. The programming API is based on the actor model, in which lightweight processes communicate through asynchronous message exchanges, and generally aims to maximize location transparency. Broadway is built on top of Akka, a state-of-the-art actor framework. Progress so far includes extending Akka with a cloud-hosted class repository (to enable code mobility) and building an initial application stack for machine learning to help shape requirements. Work in progress includes low-level refinements to support more elastic routing of messages and placement of actors, as well as the design of additional higher-level applications, aiming to identify the most appropriate higher-level abstractions.
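The actor model underlying Broadway's API can be sketched in a few lines. The following is a minimal, language-neutral illustration using Python threads and queues, not Akka's actual API: each actor owns a mailbox drained by its own lightweight process, and senders never block on receivers.

```python
import queue
import threading

class Actor:
    """Minimal actor: a mailbox drained by a dedicated thread.

    Messages are delivered asynchronously; a sender enqueues and moves
    on, which is the essence of the model. Real actor frameworks add
    supervision, location transparency, and far lighter-weight
    scheduling than one OS thread per actor.
    """
    def __init__(self):
        self.mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def tell(self, message):
        self.mailbox.put(message)          # asynchronous send

    def _run(self):
        while True:
            message = self.mailbox.get()
            if message is None:            # poison pill: stop the actor
                break
            self.receive(message)

    def receive(self, message):
        raise NotImplementedError

    def stop(self):
        self.mailbox.put(None)
        self._thread.join()

class Counter(Actor):
    """Example actor: accumulates the integers it is sent."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def receive(self, message):
        self.count += message

counter = Counter()
for _ in range(100):
    counter.tell(1)                        # no lock needed by the sender
counter.stop()
print(counter.count)  # 100
```

Because each actor processes its mailbox sequentially, its state needs no locking; concurrency arises from running many actors, and location transparency follows naturally since `tell` need not care where the mailbox lives.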


Users increasingly rely on mobile devices to create, store, and access large digital libraries of personal media, such as photos, music, and video. As the amount of content grows, the challenge is to help users manage their personal data on a mobile (or any other) device without requiring them to spend significant time organizing it manually. We have developed the Gardi framework for digital libraries as a solution to the challenge of storing and managing large quantities of personal digital data.


Efficient matching of incoming events against persistent queries is fundamental to event pattern matching, complex event processing, and publish/subscribe systems. Recent processing engines based on non-deterministic finite automata (NFAs) have demonstrated scalability in the number of queries that can be executed efficiently on a single machine. However, existing NFA-based systems are limited to processing events on a single machine. In close collaboration with Cornell, we have developed Johka, a system that scales efficiently beyond a single node, so that event processing capacity can be increased by adding more machines.
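To make the NFA approach concrete, here is a toy single-machine sketch (not Johka's implementation): a pattern is a sequence of predicates, one per automaton state, and the engine tracks many partial matches ("runs") simultaneously as events stream in. The sketch uses skip-till-next-match semantics, where each run waits in place until an event satisfies its next predicate.

```python
def match_pattern(events, predicates):
    """Run a simple NFA over an event stream.

    `predicates` is a sequence of functions, one per NFA state; a run
    advances when the current event satisfies its next predicate, and
    otherwise stays put (skip-till-next-match). Tracking many partial
    runs at once is where the non-determinism comes in.
    """
    runs = []          # each run: (next_state_index, matched_events)
    complete = []      # fully matched event sequences
    for event in events:
        next_runs = []
        for state, matched in runs:
            if predicates[state](event):          # this run advances
                if state + 1 == len(predicates):
                    complete.append(matched + [event])
                else:
                    next_runs.append((state + 1, matched + [event]))
            else:
                next_runs.append((state, matched))  # run waits
        # The event may also begin a fresh run at state 0.
        if predicates[0](event):
            if len(predicates) == 1:
                complete.append([event])
            else:
                next_runs.append((1, [event]))
        runs = next_runs
    return complete

# Pattern: a price below 10 followed later by a price above 20.
events = [5, 30, 8, 15, 25]
matches = match_pattern(events, [lambda e: e < 10, lambda e: e > 20])
print(matches)  # [[5, 30], [8, 25]]
```

Scaling such an engine beyond one machine is the hard part Johka addresses: runs carry state, so events must be routed consistently to the node holding the partial matches they may extend.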


We have developed DEB (Dynamic Enterprise Bus), an autonomic computing run-time in which we propose the use of application-level failure detectors adhering to the well-known end-to-end argument. We also propose and demonstrate an implementation of a proactive failure handling scheme, well in line with the expected nature of an autonomous system.
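A common way to realize an application-level failure detector is heartbeat-based suspicion, sketched below. This is an illustrative toy, not DEB's implementation, and the timeout value and names are assumptions: components report liveness by heartbeating, and a component silent for longer than the timeout becomes suspected, letting the run-time act proactively (for example, restarting or migrating it) before requests pile up against a dead endpoint.

```python
import time

class HeartbeatFailureDetector:
    """Application-level failure detector sketch.

    Operating at the application level (per the end-to-end argument)
    lets the detector observe actual component liveness rather than
    relying on lower layers that may report a healthy transport to a
    wedged process.
    """
    def __init__(self, timeout, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock              # injectable for deterministic tests
        self.last_seen = {}             # component -> last heartbeat time

    def heartbeat(self, component):
        self.last_seen[component] = self.clock()

    def suspected(self):
        now = self.clock()
        return {c for c, t in self.last_seen.items()
                if now - t > self.timeout}

# Simulated clock so the example is deterministic.
now = [0.0]
fd = HeartbeatFailureDetector(timeout=3.0, clock=lambda: now[0])
fd.heartbeat("worker-1")
fd.heartbeat("worker-2")
now[0] = 2.0
fd.heartbeat("worker-2")    # worker-1 stays silent
now[0] = 4.0
print(fd.suspected())       # {'worker-1'}
```

Proactive handling then amounts to acting on the suspected set, trading an occasional false suspicion (a slow but live component) for faster recovery, with the timeout tuned to balance the two.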