The promise of the Internet of Things (IoT) is to gradually connect every object online and digitalize its environment, enabling a new set of services with improved security and efficiency. With such a large array of connected devices, processing data only in a centralized cloud is no longer a viable strategy. To take advantage of these new opportunities, we need to collect, process and analyze this massive amount of streaming data as fast as it is produced, which raises big challenges.
In this article, we explain what these challenges are, and how to deal with them effectively by detailing a reference architecture for managing these data streams from connected devices.
The legacy definition of a stream refers to bounded vs. unbounded data, but we will see that the stream computing mechanisms involved in processing this kind of data can be highly valuable for maximizing the value we can get from any kind of data, bounded or unbounded.
Bounded data is a dataset of known size, where we can count the number of elements it contains. Thus, even if the dataset is very large, a hardware system with finite memory and storage capacities can store and then process it.
With unbounded data, we cannot count the number of elements, as there is no fixed dataset: its size is theoretically infinite. Thus, we cannot store and process it with a legacy batch approach. We often say that we observe the data in motion: the system receives a stream of events over time.
Thus, the base challenge of stream computing is to process a potentially infinite stream of data on a system with a bounded amount of computing resources. In return, since data is processed much sooner after its generation, it opens great new opportunities to detect key events and create high value-added applications. This is why stream computing mechanisms are often applied even to datasets where batch solutions would work: stream computing offers much more valuable results.
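The idea of processing an unbounded stream with bounded resources can be illustrated with a running aggregate: each incoming event updates a constant-size summary, so memory never grows with the length of the stream. A minimal sketch (the temperature readings are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class RunningStats:
    """Constant-memory summary of an unbounded stream of readings."""
    count: int = 0
    total: float = 0.0
    minimum: float = float("inf")
    maximum: float = float("-inf")

    def update(self, value: float) -> None:
        # Each event touches fixed-size state: O(1) memory for any stream length.
        self.count += 1
        self.total += value
        self.minimum = min(self.minimum, value)
        self.maximum = max(self.maximum, value)

    @property
    def mean(self) -> float:
        return self.total / self.count if self.count else 0.0

# Simulated stream of temperature readings; in practice this would be
# an endless iterator over events arriving from sensors.
stats = RunningStats()
for reading in [21.5, 22.0, 21.8, 23.1]:
    stats.update(reading)

print(stats.count, round(stats.mean, 2))  # 4 22.1
```

A batch system would need to hold the whole dataset before computing these statistics; the streaming version keeps only four numbers regardless of how many events arrive.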
With IoT, a large amount of very heterogeneous data can potentially be produced. Processing the right data at the right time may be a key factor for some applications. And more than in other fields, IoT introduces dynamicity both in the production of streams and in their processing.
These challenges require mechanisms and tools capable of operating efficiently and taking advantage of this dynamicity.
Reference Architecture for IoT
The figure above presents a reference architecture for IoT.
- End devices and apps: On the bottommost layer are the end devices, such as sensors, and the apps that can be installed on an end device to enhance its functionalities. They may have energy or performance constraints, and reduced or even zero computational capabilities.
- Edge devices and gateways: On the next layer, edge devices and gateways are used by the end devices below to communicate among themselves, with the edge devices, and with the cloud. They most likely rely on heterogeneous wireless protocols such as BLE, WiFi, 5G, LoRa or Sigfox, depending on device capabilities and requirements. Communication latencies can vary from 1-2 ms with 5G to about 1 s with very low power protocols such as LoRa. Some edge devices may have computational capabilities, making it possible to offer services from the edge.
- Network: The network layer lies between the edge and the cloud. Latencies here are typically between 10 and 200 ms.
- Cloud services: The next layer contains cloud services, providing almost infinite computational and storage capacity to process IoT tasks and store the produced data.
- Middleware: On the side, we have the middleware, which offers cross-functional services. It ensures key functionalities for an IoT project, including scheduling, resource management, resilience and security.
- IoT applications: Finally, on the topmost layer, IoT applications provide services to end users.
Edge Computing for IoT Applications
Many applications, each with specific constraints, require an intermediate approach where processes are distributed across the infrastructure. They can run on the device itself or in a centralized cloud, but also at the edge of the network, to decrease latency and network congestion. This approach is often referred to as Edge Computing.
- Real-time applications: In some use cases, such as autonomous vehicles, most of the processing will run either on the vehicle or at the edge, because a minimal response time is a key factor for the application.
- Local knowledge base: The device may also need knowledge about its local environment. In such a case, using a centralized cloud could be irrelevant, adding unnecessary complexity both for the device itself and for the backend.
- Caching: Edge capabilities can be used to cache a dataset required for processing events generated by an IoT device, such as for object detection. In a field other than IoT, edge resources can also be used as part of web browsing, to cache some components of a page and thus improve the performance of a website.
- Pre-processing or data trimming: Finally, edge resources can also be used for an intermediate step before sending data to a centralized cloud. This may be the case, for example, of a pre-processing step for a video stream: sending huge volumes of raw data to the cloud induces high costs and leads to both core network and data center congestion.
- Autonomous applications: Some use cases require that IoT devices work by themselves and keep running even without a network connection. At the same time, when a network connection is available, they should be able to cooperate with other devices and the cloud to share information, raise alerts, or update their internal knowledge and improve their autonomous processes. For example: an autonomous drone that detects fire (see figure below), or drones for smart agriculture (read: https://ryaxtechnologi.wpengine.com/use-cases-agriculture/ and https://ryaxtechnologi.wpengine.com/smart-agriculture-cybele-2019/).
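The pre-processing and data-trimming case above can be sketched as a simple edge-side filter: instead of forwarding every raw reading to the cloud, the edge forwards only significant changes, cutting the volume sent upstream. The threshold and readings here are illustrative assumptions, not a real protocol:

```python
def trim_stream(readings, threshold=0.5):
    """Edge-side pre-processing: forward a reading only when it differs
    from the last forwarded value by more than `threshold`.
    Returns the reduced list that would be sent to the cloud."""
    forwarded = []
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > threshold:
            forwarded.append(value)
            last_sent = value
    return forwarded

# Seven raw temperature readings collapse to three significant ones.
raw = [20.0, 20.1, 20.2, 21.0, 21.1, 25.0, 25.1]
print(trim_stream(raw))  # [20.0, 21.0, 25.0]
```

Even this trivial filter reduces upstream traffic by more than half; real deployments apply the same idea to heavier payloads such as video frames.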
Stream Computing as a Critical Challenge
All the sensors producing data, whether temperatures, GPS positions, video streams, or even heart rate measurements, will constitute data streams from the device to Edge or Cloud services. Depending on the application, a result can be sent back to the device.
Many organizations are trying to collect and store as much data as possible. But then, they need to extract valuable information as fast as events are produced. Data obsolescence velocity reflects the rate at which a piece of information loses its value: if you wait until the data is stored to start an analysis, you may miss part of its value.
Running the analysis while the data is still hot and in motion, on the stream, can make a great difference to provide innovative services.
In the context of IoT, working with data streams and processing them at the edge can be challenging:
- Jitter: Data can be transmitted on a regular basis, at a rate which depends on its generation. But it can also suffer from jitter, which occurs when there is a variation in signal periodicity. To overcome this effect, a buffering strategy is often required.
- Churn rate: With IoT, everything is dynamic, and network connectivity may not always be available, whether to save energy or because a mobile object is no longer within range of a gateway. Some data can then be lost, or the device can send batches as soon as it is back online. This intermittent connectivity requires a fault-tolerant system, able to put analytics on hold when a data source or a computational resource is temporarily unavailable, and to resume the work in the proper state when everything is back online.
- Task placement and resource provisioning: Data sources and edge resources can evolve. We need to keep track of the state of available resources on the Edge and in the Cloud to continuously identify the best candidates to host incoming streams and tasks for execution.
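The buffering needed for jitter and churn can be sketched as a small store-and-forward queue: events accumulate locally while the uplink is down and are flushed in order once connectivity returns. The link model and event names below are hypothetical simplifications of what a real gateway would do:

```python
from collections import deque

class StoreAndForward:
    """Buffers events while the uplink is unavailable and replays them
    in arrival order once the connection comes back (simplified churn handling)."""
    def __init__(self, send):
        self.send = send          # callable that delivers one event upstream
        self.buffer = deque()
        self.online = True

    def publish(self, event):
        if self.online:
            self.send(event)
        else:
            self.buffer.append(event)  # hold events during the outage

    def reconnect(self):
        self.online = True
        while self.buffer:             # flush the backlog in order
            self.send(self.buffer.popleft())

delivered = []
link = StoreAndForward(delivered.append)
link.publish("e1")
link.online = False          # gateway out of range
link.publish("e2")
link.publish("e3")
link.reconnect()             # back online: the backlog is replayed
print(delivered)             # ['e1', 'e2', 'e3']
```

A production system would additionally bound the buffer and persist it to survive a device restart, but the store-and-forward pattern is the same.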
Ryax Platform for Stream Computing
The Ryax platform provides a solution for data engineering on hybrid infrastructures: it efficiently orchestrates processes along the data stream path, taking advantage of resources on-premise, on the Edge and in the Cloud. It provisions the right resource at the right time to achieve the best task placement.
As a middleware for hybrid infrastructures, the Ryax platform relies on several key features:
Workﬂow Model for Data Streams
All the steps to process a stream are defined in a comprehensive workflow model:
- connect to data sources
- run multiple data treatments at the right place, on the edge and in the cloud
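As an illustration, such a workflow could be described as a declarative structure mapping each step to an execution site. The step names, sites, and schema below are purely hypothetical, not the actual Ryax workflow format:

```python
# Hypothetical workflow description: each step declares where it should run.
workflow = {
    "name": "camera-analytics",
    "steps": [
        {"id": "ingest", "kind": "source",  "site": "edge",  "uses": "rtsp-camera"},
        {"id": "trim",   "kind": "process", "site": "edge",  "after": "ingest"},
        {"id": "detect", "kind": "process", "site": "cloud", "after": "trim"},
        {"id": "alert",  "kind": "sink",    "site": "cloud", "after": "detect"},
    ],
}

# Steps tagged "edge" run close to the device; the rest go to the cloud.
edge_steps = [s["id"] for s in workflow["steps"] if s["site"] == "edge"]
print(edge_steps)  # ['ingest', 'trim']
```

Keeping the placement declarative lets the orchestrator move a step between edge and cloud without changing the processing code itself.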
Dynamic and Fault Tolerant Resource Management
Relying on cutting-edge, production-ready infrastructure management solutions such as Kubernetes, the Ryax Platform can effectively allocate resources on demand, where they are required. It also monitors resource availability to enable fault tolerance for streaming processes. In case of a resource failure, the Ryax Platform puts the impacted part of the workflow on hold, remediates the issue by allocating a new resource, and resumes workflow execution.
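This hold-remediate-resume behavior can be sketched as a small supervision loop: a failing step is retried once a fresh resource is available, while the rest of the workflow waits. The failure model below is simulated in plain Python, not the actual Ryax or Kubernetes API:

```python
def run_with_retries(step, max_attempts=3):
    """Run a workflow step; on failure, retry as if a new resource
    had been allocated (simulated remediation)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step(attempt)
        except RuntimeError:
            # The workflow is on hold here; a real orchestrator would
            # reallocate the task to a healthy node before retrying.
            continue
    raise RuntimeError("step failed after all retries")

def flaky_step(attempt):
    # Simulated resource failure on the first attempt only.
    if attempt < 2:
        raise RuntimeError("node lost")
    return f"done on attempt {attempt}"

print(run_with_retries(flaky_step))  # done on attempt 2
```

The important property is that the step's result is identical whether or not a failure occurred in between, which is what lets the workflow resume in a consistent state.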
Data stream computing allows us to process a theoretically infinite volume of data, which cannot be stored on a system with finite capacity. By processing the data while it is still in motion, we reduce the time between the production of an event and its exploitation, thus making it possible to extract the maximum value.
In an IoT architecture, stream computing processes can be deployed in multiple places: on the devices themselves, on the gateways, at the edge or in the cloud. The strong heterogeneity of this environment and its dynamicity, especially in the context of mobile objects, place strong constraints on deploying an IoT application and keeping it running effectively.
The Ryax platform was designed from the beginning with these constraints in mind, to deliver a middleware that works great on hybrid and heterogeneous infrastructures. It provides fault tolerance for intermittent connectivity and device churn when processing IoT streams.
Brian Amedro for Ryax.