Understanding Edge Computing

Guest Contributor: Rittal

With the growth of Artificial Intelligence (AI), machinery that takes in information, learns, and makes decisions, edge computing will become not only beneficial but necessary. The need to process data at the source to ensure acceptable performance will continue to grow with AI, and AI will only be able to grow as fast as data storage capabilities allow.


To ensure acceptable performance of data processing at the source and reduce latency, edge computing will become more important. Formerly used only by large corporations, edge is now being utilized by small and medium businesses that need services such as peer-to-peer networking, mobile signature analysis, mobile data acquisition, and AI. In the case of machinery, this puts edge computing outside of a traditional data center environment, and the need for small, portable data centers with cooling will spread. According to a recent IDC study, by 2020 more than 70% of infrastructure-centric partners will be involved in IoT and edge deployment.


Rittal entered the U.S. industrial market in the 1980s, serving machinery and outdoor applications with products such as dust- and moisture-proof NEMA 12 enclosures. Rittal continues to lead the world in global enclosure solutions covering all types of environments, from dirty locations with extreme temperature fluctuations to typical clean, climate-controlled spaces.

Edge Computing Defined

Edge computing houses data processing capability at or near the “edge” of a network. Usually, servers are contained in a micro data center with as few as one or two enclosures. Mission-critical data, such as a system failure, is captured and available in real time on site. Edge computing is valuable for capturing bandwidth-intensive and latency-sensitive data for analysis, lowering operating costs and improving energy efficiency. Lower-priority data can be sent to the cloud or to a remote data center.

In Edge Computing, client data is processed at the periphery of the network, as close to the source of the originating data as possible. Companies are moving toward edge computing, driven by economics and efficiency. In edge computing architecture, critical data is processed at the point of origin via a server in close proximity to the output, for immediate and easy access. Data which is not as time sensitive is sent to the cloud or a data center for longer term storage, analysis or compliance record keeping.

The practice of edge computing alleviates the load on network resources. By processing data at the source, only the data required for transfer is shifted to a remote data center or cloud. The amount of data transmitted reduces the strain on bandwidth, and by specifying criteria, data can be sorted to provide key analytics at the site and to push non-essential data to the center.

With IoT and the proliferation of smart devices, edge computing becomes particularly valuable when massive data pushes would overload a data center. When monitoring enclosure temperature, for example, it is unnecessary to upload data which will only be valuable to the operations manager in real time. If this data has historical value, it can be pushed to a data center at a later time, or when bandwidth is not at a premium. This illustrates one of edge computing's major benefits.
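The enclosure-temperature example can be sketched in a few lines of Python. The alert band and batching policy below are illustrative assumptions, not part of any particular edge product:

```python
# Illustrative alert band for enclosure temperature (assumed values).
ALERT_LOW_C, ALERT_HIGH_C = 5.0, 45.0

def triage(readings):
    """Split raw readings into on-site alerts and deferred uploads.

    Out-of-band readings are handled immediately at the edge; the
    rest are batched for off-peak transfer to the cloud or data center.
    """
    alerts, deferred = [], []
    for temp_c in readings:
        if temp_c < ALERT_LOW_C or temp_c > ALERT_HIGH_C:
            alerts.append(temp_c)      # act on now, at the source
        else:
            deferred.append(temp_c)    # historical value only; upload later
    return alerts, deferred

alerts, deferred = triage([22.1, 48.9, 23.4, 3.2])
```

Only the two out-of-band readings demand immediate attention on site; the in-band readings wait until bandwidth is not at a premium.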

Since edge computing reduces response time to milliseconds, adjustments at the site level can be made almost simultaneously. However, the cloud and data centers will not become obsolete, since their long-term storage capacity is still needed.

Although edge computing reduces latency and improves accessibility, security concerns and configuration architecture must be addressed. The distributed architecture of an edge system increases the number of points exposed to attack, where security breaches and malware may be introduced.

When configuring each device, secure passwords must replace the defaults, and software must be updated vigilantly to avoid malware infiltration. Even with these potential points of vulnerability, the advantages of decreased latency and instant data accessibility overwhelmingly support the use of edge computing to improve efficiency.



Safely Switch Off Cylinders While Transmitting Field Data

Guest contributor: Matthias Wolfer, Balluff


Is it possible to safely switch off cylinders while simultaneously transmitting field data and set up the system in accordance with standards? Yes!

In order to rule out a safety-critical fault (short circuit) between adjacent printed circuit board tracks or contact points according to DIN EN ISO 13849, clearance and creepage distances must be considered. One way to eliminate such faults is galvanic isolation: safety-relevant circuits/segments are simply not interconnected. Charge carriers from one segment cannot cross over to the other, and this separation makes it possible to connect the safety world with automation via IO-Link. Safely switching off actuators while reliably collecting sensor signals via IO-Link is possible with just one module. To benefit from IO-Link and ensure safety at the same time, Balluff’s I/O module is galvanically isolated into a sensor segment and an actuator segment. Because the two circuits are not interconnected, the actuator segment can be safely switched off without affecting the sensors, and important sensor data can still be monitored and communicated.

The topological structure and the application of this safety function are shown in this figure as an example:


  1. A PLC is connected to an IO-Link master module via a fieldbus system.
  2. The IO-Link master is the interface to all I/O modules (IO-Link sensor/actuator hubs) or other devices, such as IO-Link sensors. The IO-Link communication takes place via a standardized M12 connector.
  3. Binary switching elements can be connected to the galvanically isolated sensor/actuator hub (BNI IOL-355). The four connection ports on the left correspond to the sensor segment and the four ports on the right correspond to the actuator segment. Communication of the states is done via IO-Link.
  4. The power supply for both segments takes place via a 7/8″ connection, whereby attention must be paid to potential-separated routing of the sensor and actuator circuits. Both the power supply unit itself and the wiring to the IO-Link device with its two segments must also ensure external galvanic isolation. This is made possible by separating the lines with a splitter.
  5. An external safety device is required to safely interrupt the supply voltage of the actuator segment (all four ports simultaneously). Thus, the module can implement safety functions up to SIL 2 according to EN 62061 and PL d according to ISO 13849.

For example, this can happen through the use of a safety relay, whereby the power supply is safely disconnected after actuation of peripheral safety devices (such as emergency stops and door switches). At the same time, the sensor segment remains active and can provide important information from the field devices.

The module can handle up to eight digital inputs and outputs. If the IO-Link connection is interrupted, the outputs assume predefined states that are retained until the IO-Link connection is restored. Once the connection is restored, this unique state of the machine can be used to continue production directly without a reference run.
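The hold-state behavior described above can be sketched roughly in Python. The OutputHub class, channel count, and fallback values are hypothetical illustrations, not the module's real API:

```python
# Assumed safe state per output channel while IO-Link is interrupted.
FALLBACK_STATES = [0] * 8

class OutputHub:
    """Toy model of an 8-channel output hub with hold-state fallback."""

    def __init__(self):
        self.connected = True
        self.commanded = [0] * 8   # last states commanded over IO-Link

    def command(self, channel, state):
        self.commanded[channel] = state

    def effective_outputs(self):
        # While the IO-Link connection is down, outputs assume the
        # predefined states; on reconnect the commanded states are
        # retained, so production can continue without a reference run.
        return list(self.commanded) if self.connected else list(FALLBACK_STATES)
```

The key property is that the commanded states survive a connection loss: once the link is restored, the machine resumes from a defined state.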

An application example for the interaction of sensors and actuators in a safety environment is the pneumatic clamping device of a workpiece holder. The position feedback of the cylinders is collected by the sensor segment, while at the same time the actuator segment can be switched off safely via its separately switchable safety circuit. If the sensor side is not required for application-related reasons, galvanically isolated IO-Link modules are also available with only actuator segments (BNI IOL 252/256). An isolated shutdown can protect up to two safety areas separately.

CMA/Flodyne/Hydradyne is an authorized Balluff distributor in Illinois, Wisconsin, Iowa and Northern Indiana.

In addition to distribution, we design and fabricate complete engineered systems, including hydraulic power units, electrical control panels, pneumatic panels & aluminum framing. Our advanced components and system solutions are found in a wide variety of industrial applications such as wind energy, solar energy, process control and more.

RFID in the Manufacturing Process: A Must-Have for Continuous Improvement

Guest Contributor:  Wolfgang Kratenzenberg, Balluff

There is an abundance of continuous improvement methodologies implemented in manufacturing processes around the globe. Whether it’s Lean, Six Sigma, or Kaizen, there is one thing all of these methodologies have in common: they all require actionable data in order to make an improvement. So the question becomes: how do I get my hands on actionable data?

All data begins its life as raw data, which has to be manipulated to produce actionable data. Fortunately, there are devices that help automate this process. Automatic data collection (ADC), which includes barcode and RFID technology, provides visibility into the process. RFID has evolved into the more advanced method of data collection because, unlike barcode technology, it doesn’t require a centralized database to store the data. RFID stores the data directly on the product or pallet in the process, which allows for much more in-depth data collection.


RFID’s greatest impact on the process tends to be improving overall quality and efficiency. For example, say Company X is creating widgets and there are thirty-five work cells required to make a widget. Between every work cell there is a quality check with a vision system that looks for imperfections created in the prior station. When a quality issue is identified, it is automatically written to the tag. In the following work cell, the RFID tag is read as soon as the widget enters the station. This is where the raw data becomes actionable data. As soon as a quality issue has been identified, someone or something needs to take action. At this point the data becomes actionable because it has a detailed story to tell. While the error code written to the tag might just be a “10”, the real story is: between cells five and six, the system found a widget was non-conforming. The action that can be taken now is much more focused. The process at cell five can be studied and fixed immediately, as opposed to waiting until an entire batch of widgets is manufactured with a quality issue.
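The step from raw tag value to actionable instruction can be sketched in a few lines. The error-code table below is invented for the widget example; real codes and cell names would come from the plant's own process documentation:

```python
# Invented error-code table for the widget example: maps a raw code
# read from the RFID tag to the cell to investigate and the story
# behind the code.
ERROR_CODES = {
    10: ("cell 5", "non-conforming widget found between cells 5 and 6"),
}

def act_on_tag(code):
    """Turn a raw code read from the RFID tag into an actionable instruction."""
    if code not in ERROR_CODES:
        return "no action: widget conforming"
    cell, detail = ERROR_CODES[code]
    return "inspect process at {}: {}".format(cell, detail)
```

Reading a “10” off the tag at the next station immediately points a technician at cell five, rather than at an anonymous failed batch.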

Ultimately, flawless execution is what brings success to organizations.  However, in order to execute with efficiency and precision the company must first have access to not only data, but actionable data. Actionable data is derived from the raw data that RFID systems automatically collect.



5 Common IIoT Mistakes and How to Avoid Them

Guest contributor: Pat Millott, Balluff

IIoT is the perfect solution for all your data accessibility needs, right? In my previous blogs, I discussed the many benefits of using the Industrial Internet of Things (IIoT) to remotely access data. However, if not used properly, IIoT can get you into trouble. Let’s review 5 common mistakes to avoid when building your IIoT application.

1. Excluding your IT department
It’s crucial to make sure your Information Technology group is involved in this project. IIoT applications can be very taxing on your network, and it’s easy to forget key aspects like bandwidth and network traffic when developing your application. When your application is finished, your IT department is going to want to know what network resources are being used. Some questions they might ask include:

  • How many potential clients will the server have at any given time?
  • What is the max refresh rate of your application?
  • How frequently do you query the SQL server?
  • How are your queries structured?
  • What might be some vulnerabilities on this application?
  • What measures are you taking to protect against these vulnerabilities?

It’s going to be a lot easier if they are included right away so everyone has a good understanding of what resources are available and how to protect them.

2. Excluding OT and Controls Engineers
Similar to the IT department, it’s important to include the controls engineer, especially if you plan on hosting data from a PLC. The controls engineer will want to determine what data is publicly available and what data should be kept private. Some questions the controls engineer(s) might ask include:

  • What is your application trying to show?
  • What PLC data do you want to use for this?
  • Is your application going to write data to the PLC?
  • Do any modifications need to be made to the PLC code?

Keep in mind that any modifications that need to be made to the PLC will probably have to go through the controls engineer. This is to ensure that no code changes on the PLC will impact the efficiency and safety of production.

3. Running out-of-date software
The software you write and the software your application relies on should always be up to date. In other words, if you use a module or library in your code, make sure you have the most recent version. It’s also important to keep updating your own application for additional security and functionality. Out-of-date software can lead to application crashes or even vulnerabilities to cyber attacks. Keep in mind that an application running on out-of-date software makes the server host vulnerable, as well as its clients.
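A lightweight way to catch stale dependencies is to pin minimum versions and check them at startup. The sketch below is a generic illustration; the package names and version numbers are invented, and real projects would normally lean on their package manager for this:

```python
# Hypothetical minimum-version pins for the libraries an IIoT
# application depends on; names and versions are illustrative only.
MINIMUM_VERSIONS = {"plc_client": "2.1.0", "web_framework": "4.0.2"}

def parse_version(v):
    """Turn '2.1.0' into (2, 1, 0) for component-wise comparison."""
    return tuple(int(part) for part in v.split("."))

def outdated(installed):
    """Return dependency names whose installed version is below the pin."""
    return [name
            for name, minimum in MINIMUM_VERSIONS.items()
            if parse_version(installed.get(name, "0")) < parse_version(minimum)]
```

Running this check at application start turns a silent stale dependency into a visible warning before it becomes a crash or a security hole.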

4. Unorganized data flow
Data flow is an important concept to consider early in the development of your application. Say you have a server forwarding PLC data to a SQL database that is then utilized in a web application, which acts as a historian and analyzes how data changes over time. Is it better to calculate the data in the back-end application, the SQL database, the forwarding server, or the PLC? The answer depends on the situation, but typically it’s best to keep the data calculations as close to the source as possible. For example, say your back-end application calculates percentages comparing yesterday’s production to today’s. If the back-end application crashes, you lose the historian calculations. A SQL database is typically much more reliable in terms of downtime and crashes, and it will run whether your back-end application is functional or not, so it would be better to do these calculations in the SQL database rather than in the back-end script. Continuing this concept, what if the PLC could do the calculation? Now the forwarding server, the SQL database, and the back-end script can all crash and you would still have your historian data when they come back up. The closer to the source of the data you calculate, the more reliable your calculations will be.
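The production-percentage example can be pushed into the database layer like this. It is a minimal sketch using sqlite3 as a stand-in for the SQL server; the table and column names are invented for illustration:

```python
import sqlite3

# The day-over-day percentage is computed inside the database, so
# the result does not depend on the back-end script staying up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE production (day TEXT, units INTEGER)")
conn.executemany("INSERT INTO production VALUES (?, ?)",
                 [("yesterday", 400), ("today", 500)])

(pct_of_yesterday,) = conn.execute("""
    SELECT 100.0 *
           (SELECT units FROM production WHERE day = 'today') /
           (SELECT units FROM production WHERE day = 'yesterday')
""").fetchone()
# pct_of_yesterday is today's output as a percentage of yesterday's
```

Because the query lives in the database, any client that can connect gets the same calculated figure, regardless of which applications happen to be running.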

5. Unprotected sensitive data
Unprotected sensitive data is possibly the most important thing to guard against when developing your application. Even simple applications that just display PLC data can give a hacker enough for an attack. Think about this IoT scenario: say I have a server that hosts data from my home, such as whether or not my front door is locked. This information is useful to me if I want to check whether someone forgot to lock the door. But to a burglar, this data is just as useful, if not more so, since they can now check the status of my door without leaving their car. If I don’t protect this data, I am openly advertising to the world when my front door is unlocked. This is why encryption is crucial for sensitive data, and why it’s important to discuss your project with the controls engineer. Data that seems harmless might actually be detrimental to host publicly.
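As a toy illustration of why the door-status payload should never leave the server in plaintext, here is a one-time-pad sketch using only the standard library. This is pedagogical only, not production cryptography; a real deployment would use an established library, and a one-time pad is secure only if the key is random, at least message-length, and never reused:

```python
import secrets

def xor_bytes(data, key):
    # One-time pad: XOR each payload byte with a key byte. XOR is
    # its own inverse, so the same function encrypts and decrypts.
    assert len(key) >= len(data), "key must cover the whole message"
    return bytes(d ^ k for d, k in zip(data, key))

status = b"front_door=unlocked"          # sensitive plaintext
key = secrets.token_bytes(len(status))   # fresh random key per message
ciphertext = xor_bytes(status, key)      # what the server would publish
recovered = xor_bytes(ciphertext, key)   # only a key holder can do this
```

Anyone sniffing the published ciphertext without the key learns nothing about whether the door is locked.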

Data accessibility is evolving from a convenience to a necessity. Everyone’s in a hurry to get their data into the cloud, but keeping these ideas in mind early in the application development process will save everyone a headache later on. That way, IIoT really can be the perfect solution for all your data accessibility needs.

