Layered Control Optimizes Machine Operation
Controlling today's sophisticated machines requires a layered architecture and careful attention to proper interfacing.
By: Kristin Lewotsky, Contributing Editor
There was a time when machines consisted primarily of gearing and fixed-speed motors, along with some relays to run the overall system. Today, a machine might incorporate a vision system, feedback, process control, robotics, a user interface, and safety functions, in addition to dozens of axes of highly synchronized motion. Such sophisticated designs require ever more complex control architectures featuring multiple control layers, programming languages, and communications protocols, all of which need to mesh seamlessly for the machine to operate as required. Overcoming these challenges requires careful design guided by best practices.
Layered control typically starts with the device layer, which consists of individual components—servo motors, drives, the HMI, robotics, etc. Collections of devices with similar functions can be grouped into cells, networked together on a communications bus and coordinated by a cell controller. A machine might consist of multiple cells that communicate controller-to-controller at the supervisory control layer to exchange status and diagnostic information. The supervisory layer, in turn, connects to the overall plant layer for process control and shop-floor to top-floor communications.
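The layering described above can be pictured as a minimal object model. This is only an illustrative sketch; the class and method names are assumptions, not any vendor's API.

```python
# Minimal sketch of the device / cell / supervisory layering described
# above. Class and method names are illustrative, not from any vendor API.

class Device:
    """A component on the cell's communications bus (drive, HMI, etc.)."""
    def __init__(self, name):
        self.name = name
        self.status = "ok"

class CellController:
    """Coordinates a group of related devices and reports upward."""
    def __init__(self, name, devices):
        self.name = name
        self.devices = devices

    def status_report(self):
        # status and diagnostic data exchanged controller-to-controller
        return {d.name: d.status for d in self.devices}

class SupervisoryController:
    """Aggregates cell-level status for plant-layer process control."""
    def __init__(self, cells):
        self.cells = cells

    def plant_view(self):
        return {c.name: c.status_report() for c in self.cells}

cell = CellController("sample_handling", [Device("servo_1"), Device("drive_1")])
plant = SupervisoryController([cell])
print(plant.plant_view())
```

The point of the hierarchy is that each layer only sees the summary the layer below chooses to publish, not the raw device traffic.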
For successful operation, control tasks need to be divided up throughout the system, and the control hardware and software tuned for optimal performance within the specific constraints of each layer. “Generally in a system, some levels need to have tighter coordination with some components than with others, both in the amount of data to be shared, and in the timeliness of the data,” says Donald Labriola, President of Quicksilver Controls Inc. “Other portions need coordination more at the scheduling level, and further up, at the goals level.”
When they set out to build a 45-axis analytical instrument, Labriola and his team took a divide-and-conquer approach. Shared processors distributed throughout the system controlled pairs of axes. They set groups of related systems on the same serial communications loops and used fiber-optic links to break ground loops. As a result, the hardware/software groups associated with specific functional areas such as sample handling were able to control the communications to associated modules on the timescale required while minimizing interference with other subsystems.
Generally speaking, the degree to which cell controllers handle synchronization varies depending on whether the control architecture within the cell is centralized or distributed. The classic centralized control architecture consists of a collection of dumb devices overseen by a controller. In a classic distributed architecture, control resides within each of a collection of smart components that boast local logic and memory. They can operate daisy-chained together in a master-slave configuration.
The reality is a bit less cut and dried. Even smart devices need to be overseen at some level to guarantee prioritization of commands across the machine. An intelligent subsystem like a robot arm may be able to run a program that controls its own motion and sequencing, but to ensure that it passes on critical commands and data, it ultimately needs to be treated as a slave device to the cell controller. The cell controller may even handle some sequencing of the robot. This prevents a scenario in which the safety subsystem, for example, sends a command that the robot does not recognize because it is in the middle of a sequence of its own.
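One way to picture that oversight is a cell controller that queues commands for the robot and lets safety traffic preempt the robot's own sequence. The priority levels and command names below are assumptions for illustration, not any real robot or PLC interface.

```python
import heapq

# Sketch of a cell controller mediating commands to a "smart" robot so
# that safety commands preempt an in-progress sequence. Priorities and
# command names are illustrative assumptions.
SAFETY, NORMAL = 0, 1  # lower number = higher priority

class CellController:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker that preserves arrival order

    def submit(self, priority, command):
        heapq.heappush(self._queue, (priority, self._seq, command))
        self._seq += 1

    def dispatch(self):
        """Hand commands to the robot, safety traffic first."""
        while self._queue:
            _, _, command = heapq.heappop(self._queue)
            yield command

ctrl = CellController()
ctrl.submit(NORMAL, "pick_part")
ctrl.submit(NORMAL, "place_part")
ctrl.submit(SAFETY, "e_stop")
order = list(ctrl.dispatch())
print(order)  # the safety command jumps the queue
```

Because prioritization happens in the cell controller rather than inside the robot's own program, a critical command is never stranded behind a sequence the robot happens to be running.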
The development of fieldbuses like CANbus and Modbus, as well as Ethernet-based communications protocols, has simplified the process of connecting devices together. That said, interfacing separate control loops can be challenging. A PLC, for example, might operate on ladder logic whereas a dedicated motion controller might leverage C or C++, but the two need to communicate. "Without sound software practices such as object-oriented programming, you can encounter difficulties not easily corrected," says Kevin Liu, Product Manager for industrial automation at Kollmorgen (Radford, Virginia). "In an earlier career, I integrated dedicated pieces of hardware from multiple vendors. I’ve ended up having to add in fudge factors, timer statements, and hard-coded, vendor-specific statements to get the handshaking working right.” The problem with these types of workarounds is that they tend to be temporary fixes. If something changes in the firmware of a device on the network, for example, a timing fix may no longer be valid. "Now, there are interactions involving all of those little hardcoded pieces that were needed to get something working between different dedicated pieces of hardware," he says. "Net effect, sustainability becomes a challenge."
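The difference between a timing fudge factor and a sound handshake can be sketched in a few lines. The device interface here is hypothetical; the point is that waiting for an explicit acknowledgment survives firmware timing changes, while a hard-coded delay does not.

```python
import time

# Sketch contrasting a hard-coded timing "fudge factor" with an explicit,
# acknowledged handshake. The device interface is hypothetical.

def fragile_handshake(device):
    device.send("start")
    time.sleep(0.2)            # fudge factor: breaks if firmware timing changes
    return device.read()

def robust_handshake(device, timeout=1.0):
    """Wait for an explicit ACK instead of guessing at the device's timing."""
    device.send("start")
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if device.read() == "ack":
            return "ack"
        time.sleep(0.01)       # poll interval
    raise TimeoutError("no ACK from device")

class FakeDevice:
    """Stand-in device that ACKs a short, variable time after 'start'."""
    def __init__(self):
        self._t0 = None
    def send(self, msg):
        self._t0 = time.monotonic()
    def read(self):
        return "ack" if time.monotonic() - self._t0 > 0.05 else None

print(robust_handshake(FakeDevice()))  # works regardless of the exact delay
```

The robust version also fails loudly on timeout instead of silently proceeding with stale data, which is exactly the behavior the temporary fixes described above lack.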
The issue is bigger than just protocols. Even the physical layers may be different—fiber-optic cables, Ethernet cables, RS-232 lines. These need to be interfaced in a way that does not compromise their ability to transmit a signal. Keep in mind, too, that every interface has the potential both to reduce signal strength and to introduce latency. This kind of heterogeneous integration brings inherent complexity that increases both the number of failure points and debugging time. "From a conceptual level, it makes a whole lot of sense to say, ‘I've got this set of functions and I'll just encapsulate each within a dedicated component and I'm not going to have to worry about it,’" says Liu. "It's really easy to begin a design with that mindset but then you uncover all these other issues when you peel back the layers of the onion."
Following best practices is critical. “Minimize the number of interfaces, and make the remaining ones robust and testable," says Labriola. "Minimize the interactions between interfaces because the permutations for testing quickly exceed the time or manpower to test them.” It is also important to monitor interfaces so that illegal communications from other modules can be detected and reported as faults.
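Monitoring an interface can be as simple as validating each message against the set of commands a module is allowed to send and logging everything else as a fault. A minimal sketch, in which the message format and the allowed-command set are illustrative assumptions:

```python
# Sketch of interface monitoring: legal commands pass through; anything
# else is rejected and reported as a fault. The message format and
# allowed-command set are illustrative assumptions.

ALLOWED = {"status", "move", "stop", "home"}

def monitor(message, fault_log):
    """Pass legal commands through; log and reject anything else."""
    cmd = message.get("cmd")
    if cmd not in ALLOWED:
        fault_log.append(f"illegal command from {message.get('src', '?')}: {cmd!r}")
        return None
    return message

faults = []
monitor({"src": "hmi", "cmd": "move"}, faults)           # passes through
monitor({"src": "vision", "cmd": "fire_laser"}, faults)  # rejected and logged
print(faults)
```

Putting the check at the interface itself keeps the test surface small: each interface can be exercised on its own, rather than testing every permutation of module interactions.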
A key aspect of designing a layered control architecture is preventing a failure in one subsystem from affecting the others. Working with smaller subsystems coordinated through a hierarchy yields a more robust architecture. While it is all but impossible to design a machine without a critical component or subsystem whose failure can shut down the entire system, the key is to minimize those vulnerabilities. Critical subsystems need to be designed to be as robust as possible, and with redundancy in mind.
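At the coordination level, that containment amounts to catching each subsystem's faults at its own boundary so a failure does not propagate upward. A sketch, with purely illustrative cell names:

```python
# Sketch of fault containment at subsystem boundaries: the coordinator
# catches a failure in one cell so the others keep running. All names
# are illustrative.

def run_cells(cells):
    results, faults = {}, {}
    for name, step in cells.items():
        try:
            results[name] = step()
        except Exception as exc:   # contain the failure at the cell boundary
            faults[name] = str(exc)
    return results, faults

def conveyor_step():
    return "ok"

def labeler_step():
    raise RuntimeError("drive fault")

results, faults = run_cells({"conveyor": conveyor_step, "labeler": labeler_step})
print(results, faults)  # the conveyor cell still completed
```

The faulted cell is reported rather than silently lost, so the supervisory layer can decide whether the machine degrades gracefully or stops.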
As if the various challenges of layered control weren’t difficult enough already, today's systems must operate in a world of hackers and industrial espionage, which means network security has to be designed in from the beginning. “You have to take each interface and consider how it may be properly used—and abused," says Labriola. “You have to consider how and where to firewall the system to prevent undesired communications while allowing the needed handshakes. This needs to involve partitioning and may involve multiple interconnections—CANopen for local low-level handshaking within a system or subsystem, with various Ethernet interfacing at higher levels, and routers, encryption, and isolation to block and limit access from the outside. Designing to be protected from intentional threats also helps with finding and fixing the accidental communications problems typically cleaned up at the various levels of integration.”
The alternative to the layered approach is to choose a centralized solution. At first blush, it seems like a good alternative—fewer components to buy, fewer interfaces to manage. Centralization comes with its own perils, however. "Material cost makes a single processor look wonderful, until you realize just how often the motherboards change—processors, companion chips, video cards, hard drives, BIOS, operating system patches, and so on," says Labriola. "Any of these can make for subtle changes in the operation of the system. Corrections to one part can break other areas quite easily.”
Combining the PLC and a motion controller in a single package can in theory provide greater access to data and greater ease of integration—if it's done properly. "At one level, it makes a lot of sense—they're in the same package, they're running on the same core and in the same CPU," says Liu. "The problem is that the interaction between those two may not be as seamless as you think. This causes performance-robbing inefficiency. You essentially have one box that's responsible for closing all of the loops and responding to I/O and all the logic that goes with that, and camming and gearing and more. It can actually become a significant burden on that one CPU. That's one of the challenges with a centralized topology.”
Ways to address this concern include offloading some of the burden onto slave axes, for example the task of closing the loops. Today's programmable automation controllers (PACs) include software designed to ensure that the processor is not overloaded and to prevent the HMI, for example, from disrupting the synchronization of axes. Depending on the application, a PAC may be a good fit, but it's important to make sure the design can overcome the types of challenges discussed above.
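Offloading loop closure can be sketched by letting the drive run its own correction loop from a commanded setpoint, so the central CPU only supplies targets. The proportional gain and cycle count below are illustrative assumptions, not tuning guidance.

```python
# Sketch of offloading loop closure to a slave drive: the controller only
# supplies position setpoints; the drive closes its own loop locally, so
# the central CPU is not burdened with every servo update. The gain and
# cycle count are illustrative.

class SlaveDrive:
    """Drive that closes its own position loop from a commanded setpoint."""
    def __init__(self, kp=0.5):
        self.kp = kp          # local proportional gain
        self.position = 0.0

    def update(self, setpoint):
        # one local control cycle; no central CPU involvement
        self.position += self.kp * (setpoint - self.position)
        return self.position

drive = SlaveDrive()
for _ in range(20):           # the central controller just repeats the setpoint
    drive.update(10.0)
print(round(drive.position, 3))  # converges on the 10.0 setpoint
```

With the loop in the drive, the central controller's per-axis work shrinks to sending setpoints and reading status, which is exactly the burden reduction described above.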
The best rule of thumb when working with layered control architectures is to plan the control and network architecture from the very beginning. “As machine automation content increases, it becomes more important to consider a mechatronic design approach," says Bob Hirschinger, Product Marketing Manager for motion at Rockwell Automation (Mequon, Wisconsin). Start with top-line cost, performance, and functionality goals. Plan the general control architecture, aligning it with your network architecture, distributed device specifications, the amount and type of data that has to be shared, and so on. "You need to have a design that can accommodate all that and is extensible without a redesign of the entire line if you have to add an additional capability in the future," adds Hirschinger.
Above all, it's important to plan for errors and expect the unexpected. "In larger real systems, only 20% to 25% of the complexity is involved in making the system work properly," says Labriola. “The rest lies in how to detect faults and gracefully recover from them."