Standards, Protocols and Technology for Building IoT Platforms – Part 2 of 3
This second part of the Expert View series is once again aimed at technology decision-makers in mechanical and plant engineering. The first part dealt with the question of what the status quo in the mechanical and plant engineering industry currently looks like in the context of digitisation and was primarily aimed at strategy decision-makers. The second part of the Expert View series, “From machine constructor to platform operator”, focuses on the protocols, standards, and technologies that need to be considered when setting up an IoT platform.
The fundamental question is “make or buy?”. The answer depends greatly on the IoT strategy. The more strongly the products and platforms are positioned in the market as a revenue and success factor for the company, the more of the technology and the business case the company has to provide itself. Based on predefined standards and technologies, decision-makers should give their IoT platforms an individual touch – “buy and create IoT” is the motto of the future.
Above all, however, outdated IT and technology architectures are a central challenge that companies must overcome in order to implement their IoT projects.
We therefore look at the most important points in implementing an IoT platform, especially topics relevant to prototyping, platform design and sourcing, implementation and integration, and production and scale:
Fast IoT platform prototype with important core functions
Services and documentation to support the creation of a prototype (PoC)/minimum viable product (MVP) that can fulfil essential functional areas within a test environment. This also includes performance measurement tools, scaling, and data analytics.
Compatibility with all common communication protocols
Ensuring messaging and data transfer between multiple devices from the IoT edge (sensor) to the IT infrastructure is one of the most important points to consider when building an IoT platform. Machine-transmittable and machine-readable information is transferred through the common communication protocols and can thus be processed and fed back. IoT platforms and services require compatibility and integration with as many standards as possible because some of them build on and complement each other.
In doing so, the IoT platform can filter and summarise corresponding data at all levels. For example, sensors do not send all data to the cloud but rather only those pieces of data necessary for error analysis. Other data required for local control mechanisms may never leave production.
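The filtering described above can be sketched in a few lines. The following is a minimal, illustrative edge filter: the `Reading` structure, the limit values, and the rule "forward anything with an error code or outside its limit" are all assumptions for the sake of the example, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    kind: str        # e.g. "temperature", "vibration"
    value: float
    error_code: int  # 0 = no error

def for_cloud(readings, limits):
    """Forward only the readings needed for error analysis:
    anything with an error code set, or outside its configured limit.
    Everything else stays local for the control loop."""
    return [
        r for r in readings
        if r.error_code != 0 or abs(r.value) > limits.get(r.kind, float("inf"))
    ]

# Hypothetical batch: only the out-of-limit and faulty readings leave the edge.
readings = [
    Reading("s1", "temperature", 45.0, 0),
    Reading("s2", "temperature", 92.5, 0),  # above the 80.0 limit
    Reading("s3", "vibration", 0.2, 17),    # error code set
]
cloud_batch = for_cloud(readings, {"temperature": 80.0})
```

In a real deployment the limits would come from the platform's device configuration rather than a hard-coded dictionary.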
These standards include HTTP (hypertext transfer protocol). HTTP is advantageous because it can be integrated into many software tools and implemented quickly for data transfer. As long as the data volume does not play a major role, HTTP can indeed be a viable option. HTTP makes sense especially when small amounts of data are to be sent to clients, when it is used with REST APIs, and when the internet/network connection is fast and stable enough.
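For such small payloads, an HTTP/REST transfer needs little more than the standard library. The sketch below uses Python's built-in `urllib`; the endpoint URL and the JSON field names are hypothetical, and the network call itself is wrapped in a function rather than executed here.

```python
import json
import urllib.request

def encode_reading(sensor_id: str, value: float) -> bytes:
    """Serialise one small sensor reading as the JSON body of a REST call."""
    return json.dumps({"sensor": sensor_id, "value": value}).encode("utf-8")

def post_reading(endpoint: str, sensor_id: str, value: float) -> None:
    """POST one reading to a (hypothetical) REST endpoint.
    Suitable when payloads are small and the connection is stable."""
    req = urllib.request.Request(
        endpoint,
        data=encode_reading(sensor_id, value),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # network call, not run here
        resp.read()

body = encode_reading("temp-01", 23.4)
```

Note that plain HTTP as shown has no delivery guarantee beyond the TCP connection itself; that is exactly the limitation the next paragraphs address.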
However, as soon as the bandwidth and the data volume are crucial, it is better to rely on MQTT (message queuing telemetry transport), CoAP (constrained application protocol), or XMPP (extensible messaging and presence protocol). Although XMPP is used less and less, it is still useful for certain applications.
On the other hand, MQTT is one of the most widely used protocols. MQTT is highly scalable and particularly useful for large IoT environments with many devices. Extensive IoT use cases, especially in production processes, can be well covered in this way.
In general, there is no standard formula for determining which IoT protocol is most suitable. The right choice depends on the application: the performance of the devices, the speed and bandwidth of the network, the programming language used, and the deployment location all play a role. In the end, companies working with IoT can hardly avoid at least testing all relevant protocols.
Identical devices (from the same manufacturer) usually communicate in binary. For more open, cross-manufacturer communication, XML and web protocols are usually recommended, including REST calls with JSON payloads.
Local communication (e.g. between a robot and the assembly line) is mostly synchronous and binary. If the assembly line stops sending data, the robot stops immediately (this would be a typical OPC UA application).
For remote communication, either HTTP – especially if the communication does not have to be reliable – or MQTT is recommended; with MQTT, the Quality of Service level can be set so that a message is successfully delivered exactly once.
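MQTT defines three Quality of Service levels, and the highest one (QoS 2) provides the exactly-once delivery mentioned above. The policy function below is purely illustrative: the message kinds and the mapping to QoS levels are assumptions for this sketch, not part of the MQTT specification.

```python
# The three QoS levels defined by MQTT:
QOS_AT_MOST_ONCE = 0   # fire and forget; message may be lost
QOS_AT_LEAST_ONCE = 1  # guaranteed delivery; duplicates possible
QOS_EXACTLY_ONCE = 2   # four-step handshake; delivered exactly once

def qos_for(message_kind: str) -> int:
    """Pick a QoS level per message kind (illustrative policy only)."""
    if message_kind == "billing":   # must never be lost or duplicated
        return QOS_EXACTLY_ONCE
    if message_kind == "alarm":     # must arrive, duplicates tolerable
        return QOS_AT_LEAST_ONCE
    return QOS_AT_MOST_ONCE        # high-frequency periodic telemetry

# With a broker and a client library such as paho-mqtt, a publish
# would then look roughly like:
#   client.publish("plant/line1/meter", payload, qos=qos_for("billing"))
```

Higher QoS levels cost extra round trips per message, which is why bulk telemetry is typically sent at QoS 0 or 1.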
Open platform communication (OPC UA)
Another communication protocol is OPC UA. Open platform communication (OPC) is one of the most important communication protocols in the IoT environment. It is developed and promoted primarily by the OPC Foundation (approx. 700 members).
OPC standardises access to machines, devices, and other systems in the industrial environment and enables the exchange of data in a uniform and manufacturer-independent manner.
The UA in OPC UA stands for “unified architecture” and describes the latest specification of the standard. The key difference from its predecessor is platform independence: it moves away from COM/DCOM to purely binary TCP/IP or SOAP communication. OPC UA also supports a semantic description of data, among many other improvements.
In general, OPC semantics are suitable for describing data and communicating with the cloud. Most people associate OPC with local binary communication; however, this is precisely what should not be done via the cloud.
If you do use OPC UA in the machine system and want to transport data to the cloud, you should use a gateway that converts an OPC UA source (e.g. into an MQTT queue or into HTTP).
OPC UA thus bridges the gap between the IP-based IT world and the manufacturing plants. All manufacturing process data is transferred via a single protocol – whether within a machine, between machines, or between a machine and a database in the cloud.
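On the cloud side of such a gateway, each OPC UA node must end up on a transport-friendly address, for example an MQTT topic. The helper below sketches one possible naming scheme; the mapping itself is an assumption for this example and is not prescribed by OPC UA or MQTT.

```python
def node_to_topic(plant: str, browse_path: list[str]) -> str:
    """Map an OPC UA browse path to an MQTT topic on the cloud side
    of a gateway. The naming scheme is an illustrative convention:
    '/' and spaces are rewritten so each path element stays one
    MQTT topic level."""
    safe = [p.replace("/", "_").replace(" ", "-") for p in browse_path]
    return "/".join([plant, *safe])

# Hypothetical node in a machine's address space:
topic = node_to_topic("plant1", ["Machines", "Press 04", "Temperature"])
```

A production gateway would also carry over the OPC UA data type and timestamp in the message payload, so the semantic description is not lost in transit.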
Scalability of data volumes and cost optimisation
The ability to respond flexibly to the increasing data volumes brought about by connected devices should definitely be considered when building an IoT platform. It should be possible for the infrastructure (server, storage, network, etc.) to absorb strongly increasing data volumes through a horizontally scalable platform. It must be ensured that devices, applications, data types, and protocols can be entirely captured and integrated in real time.
At the same time, scaling must be cost-efficient: utilisation should be as high as possible while expansion of the infrastructure remains as flexible as possible. High infrastructure elasticity must therefore be ensured, depending on the dynamics of the load behaviour.
Scaling is achieved, among other things:
- by setting up an edge device between the sensor/actuator and the cloud.
- by choosing an appropriate fault-tolerant protocol (e.g. MQTT if many sensors/actuators are used) or by converting binary and synchronous protocols into fault-tolerant protocols locally and selecting the data locally (e.g. with a gateway or a mini- or medium edge).
- by putting workloads in the right place within the topology. If you run a video analysis in the machine 24/7, for example, it is better off in a heavy edge: for 24/7 operation, computing power in the cloud is not much cheaper than professionally operated edge hardware. Workloads that aggregate or combine data between plants or along the supply chain, however, should be in the cloud.
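The local data selection in the second point usually means aggregating a window of raw samples at the edge and sending only a small summary upstream. A minimal sketch (the window contents and the chosen summary fields are assumptions for illustration):

```python
from statistics import mean

def aggregate_window(values: list[float]) -> dict:
    """Reduce a window of raw sensor samples to the summary the cloud
    needs, so that only one small record per window leaves the edge
    instead of every sample."""
    return {
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": round(mean(values), 3),
    }

# Hypothetical one-second window of temperature samples:
summary = aggregate_window([20.1, 20.4, 35.0, 20.2])
```

With, say, 1 kHz sampling and one summary per second, this reduces the upstream data volume by roughly three orders of magnitude while keeping min/max outliers visible for error analysis.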
Applications for monitoring
When operating the IoT platform, it should be possible to use monitoring solutions to obtain metrics about the components of the application. These can be provided via dashboards and aggregated monitoring tools. Important standard information includes machine integrity, utilisation, and maintenance status. These are transmitted via the aforementioned or other specific protocols. Alarm systems and event-based queries can also be relevant.
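Utilisation, one of the standard metrics named above, can be derived from timestamped machine status events. The sketch below is a simplified illustration; the event format and the single "running" state are assumptions, not a monitoring standard.

```python
def utilisation(events: list[tuple[int, str]], end: int) -> float:
    """Compute the share of time a machine spent in state 'running',
    given (timestamp_seconds, state) events sorted by time, up to 'end'.
    This is the kind of rollup a monitoring dashboard would display."""
    running = 0
    for (t0, state), (t1, _) in zip(events, events[1:] + [(end, "end")]):
        if state == "running":
            running += t1 - t0
    return running / (end - events[0][0])

# Hypothetical shift: running 0-3600 s, down 3600-4500 s, running 4500-7200 s.
u = utilisation([(0, "running"), (3600, "down"), (4500, "running")], end=7200)
```

Real monitoring stacks compute such rollups continuously and feed them to dashboards and alerting rules rather than over a finished shift.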
Interfaces (e.g. ERP & MES)
APIs and standard connectors to common inventory systems (e.g. SAP) for retrieving and exchanging data should be available in any case, including the management and extension of these APIs and interfaces.
Reference projects and ecosystem from the mechanical and plant engineering industry
Reference projects and best practices from the field of digital twins in mechanical engineering on the respective platform should serve as a blueprint as far as possible – as should dedicated services for the respective industry/area of application and customisation options on the part of the provider.
In addition to the IoT product and the respective use of technology, another elementary topic is the ecosystem. Especially innovative companies that are already far advanced in the context of IoT cannot develop and evolve in a vacuum. Instead, companies rely on different resources and different platforms, technologies, processes, and standards in order to create collaborative networks and build business models.
The use of tools and platform services, as well as the integration of common standards and third-party services to create automated responses based on sensor data and the respective machine conditions, is essential. Overall, this includes the automation of the infrastructure, the IoT platform, and the data services.
Platform independence, cloud provider, on-premise etc.
The independence of the IoT platform is also an important point to consider when choosing the right partners. If you restrict yourself to a single technology provider from the outset, you risk vendor lock-in, a lack of customisation options for the technology, and insufficient virtualisation and management services, as well as limited extensibility of the architecture through additional cloud or on-premise architectures (hybrid cloud).
Access to the machine fleet for the manufacturer and the customer
For the machine manufacturer, one of the most central points is the possibility of networking and accessing relevant (anonymised) data from all machines in live operation. The pivotal points are the authorisation management, the encryption, and the concretisation of the interdependence between platform, machine, customer, and manufacturer.
At the same time, it must be possible to provide individual user groups for the machine operator, or to decouple a customer from the rest of the IoT platform so that this customer can set up its own IoT environment with dedicated control, development, and management, using only the manufacturer’s meta-management functions.
The crucial point is that the plant operator should provide the plant manufacturer with enough data that the manufacturer is able to offer predictive maintenance and further optimise the plant. Manufacturers and operators should therefore have an open dialogue about the appropriate allocation of data. The situation is different when the manufacturer offers the machine as equipment-as-a-service: the manufacturer may then own the data, and the operator will agree to a confidentiality arrangement.
Summary and self-check IoT platform
In summary, it can be said that the choice of technology highly depends on the requirements of a particular industry and the digital strategy of the company. In particular, external help should be sought to evaluate the technology.
If we look at the necessary technological components without going into specific products, these can basically be divided into three groups:
- Software running on the edge device
- Software needed to manage the edge devices and communicate with them (device control management)
- Cloud back ends for the creation of further application logic
There are also infrastructure components, which in turn ensure consistent platform management.
IoT platform design criteria – check
- Interoperability between machines
- Compatibility with all common communication protocols such as MQTT, CoAP, HTTP, and OPC UA
- Scalability of data volumes and cost optimisation
- Fast IoT prototype with important core functions
- Applications for monitoring (e.g. real-time progress, machine availability)
- Interfaces (e.g. ERP)
- Cloud provider, on-premise
- Access for customers to their machines
- Access for manufacturers to the machine fleet
- Application scenarios through digital twins
The third and last part of the Expert View series “From machine constructor to platform operator” is specifically about how digital business succeeds in industry and how machine and plant constructors can achieve a platform economy (use cases, products, and business models).
Sharing Expertise – book your free initial workshop!
Book your free initial workshop today to identify and leverage the digitalisation potential in your company.
Identification workshop on the following topics: Digital strategy and data-driven business models, IT infrastructure strategy, data science, process automation, customised production software, digital twin simulation, application modernisation, and customised software development.
- Best practices
- Requirements analysis
- Target image/solution concept
- Ideas for proof of concept