
12 posts tagged with "OPC UA"

OPC UA protocol and connectivity


How to Set Up OPC UA Connectivity for Legacy Manufacturing Equipment

· 12 min read
MachineCDN Team
Industrial IoT Experts

Your factory runs on equipment that was installed before the iPhone existed. The PLCs controlling your injection molders speak Modbus RTU. Your CNC machines communicate via EtherNet/IP. That packaging line from 2004? It has a proprietary serial protocol that only one retired engineer understood.

Welcome to the reality of brownfield manufacturing — where 85% of installed equipment predates modern IIoT connectivity standards, and replacing it would cost millions you don't have.

The good news: OPC UA (Open Platform Communications Unified Architecture) was designed to solve exactly this problem. The bad news: most guides skip the messy details of actually connecting equipment that wasn't designed to be connected.

This guide covers what actually works on real factory floors — not in vendor demo environments.

OPC-UA Pub/Sub Over TSN: Building Deterministic Industrial Networks [2026 Guide]

· 12 min read

OPC-UA Pub/Sub over TSN architecture

The traditional OPC-UA client/server model has served manufacturing well for decades of SCADA modernization. But as factories push toward converged IT/OT networks — where machine telemetry, MES transactions, and enterprise ERP traffic share the same Ethernet fabric — the client/server polling model starts to buckle under latency requirements that demand microsecond-level determinism.

OPC-UA Pub/Sub over TSN solves this by decoupling data producers from consumers entirely, while TSN's IEEE 802.1 extensions guarantee bounded latency delivery. This guide breaks down how these technologies work together, the pitfalls of real-world deployment, and the configuration patterns that actually work on production floors.

Why Client/Server Breaks Down at Scale

In a typical OPC-UA client/server deployment, every consumer opens a session to every producer. A plant with 50 machines and 10 data consumers (HMIs, historians, analytics engines, edge gateways) generates 500 active sessions. Each session carries its own subscription, and the server must serialize, authenticate, and deliver data to each client independently.

The math gets brutal quickly:

  • 50 machines × 200 tags each = 10,000 data points
  • 10 consumers polling at 1-second intervals = 100,000 read operations per second
  • Session overhead: ~2KB per subscription keepalive × 500 sessions = 1MB/s baseline traffic before any actual data moves
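The back-of-envelope math above is worth sanity-checking yourself before committing to an architecture. A minimal sketch (the 2KB keepalive figure is this article's estimate, not a spec value):

```python
# Scaling math for OPC-UA client/server fan-out (article's example numbers).
machines = 50
tags_per_machine = 200
consumers = 10
poll_interval_s = 1.0
keepalive_bytes = 2 * 1024   # rough per-subscription keepalive cost (estimate)

data_points = machines * tags_per_machine             # total tags in the plant
sessions = machines * consumers                       # the N x M session explosion
reads_per_second = data_points * consumers / poll_interval_s
keepalive_baseline_Bps = keepalive_bytes * sessions   # bytes/s before any payload

print(data_points, sessions, int(reads_per_second), keepalive_baseline_Bps)
# 10000 500 100000 1024000
```

Plug in your own machine and consumer counts — the session total is usually what kills mid-range PLC servers first.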

In practice, most OPC-UA servers in PLCs hit their connection ceiling around 15-20 simultaneous sessions. Allen-Bradley Micro800 series and Siemens S7-1200 controllers — the workhorses of mid-market automation — will start rejecting connections well before you've connected all your consumers.

Pub/Sub eliminates the N×M session problem by introducing a one-to-many data distribution model where publishers push data to the network without knowing (or caring) who's consuming it.

The Pub/Sub Architecture: How Data Actually Flows

OPC-UA Pub/Sub introduces three key concepts that don't exist in the client/server model:

Publishers and DataSets

A publisher is any device that produces data — typically a PLC, edge gateway, or sensor hub. Instead of waiting for client requests, publishers periodically assemble DataSets — structured collections of tag values with metadata — and push them to the network.

A DataSet maps directly to the OPC-UA information model. If your PLC exposes temperature, pressure, and flow rate variables in an ObjectType node, the corresponding DataSet contains those three fields with their current values, timestamps, and quality codes.

The publisher configuration defines:

  • Which variables to include in each DataSet
  • Publishing interval (how often to push updates, typically 10ms-10s)
  • Transport protocol (UDP multicast for TSN, MQTT for cloud-bound data, AMQP for enterprise messaging)
  • Encoding format (UADP binary for low-latency, JSON for interoperability)

Subscribers and DataSetReaders

Subscribers declare interest in specific DataSets by configuring DataSetReaders that filter incoming network messages. A subscriber doesn't connect to a publisher — it listens on a multicast group or MQTT topic and selectively processes messages that match its reader configuration.

This is the critical architectural shift: publishers and subscribers are completely decoupled. A publisher doesn't know how many subscribers exist. A subscriber can receive data from multiple publishers without establishing any sessions.

WriterGroups and NetworkMessages

Between individual DataSets and the wire, Pub/Sub introduces WriterGroups — logical containers that batch multiple DataSets into a single NetworkMessage for efficient transport. A single NetworkMessage might contain DataSets from four temperature sensors, two pressure transducers, and a motor current monitor — all packed into one UDP frame.

This batching is crucial for TSN. Each WriterGroup maps to a TSN traffic class, and each traffic class gets its own guaranteed bandwidth reservation. By grouping DataSets with similar latency requirements into the same WriterGroup, you minimize the number of TSN stream reservations needed.

TSN: The Network Layer That Makes It Deterministic

Standard Ethernet is "best effort" — frames compete for bandwidth with no delivery guarantees. TSN (IEEE 802.1) adds four capabilities that transform Ethernet into a deterministic transport:

Time Synchronization (IEEE 802.1AS-2020)

Every device on a TSN network synchronizes to a grandmaster clock with sub-microsecond accuracy. This is non-negotiable — without a shared time reference, scheduled transmission is meaningless.

In practice, configure your TSN switches as boundary clocks and your edge gateways as slave clocks. The synchronization protocol (gPTP) runs automatically, but you need to verify accuracy after deployment:

# Check gPTP synchronization status on a Linux-based edge gateway
pmc -u -b 0 'GET CURRENT_DATA_SET'
# Look for: offsetFromMaster < 1000ns (1μs)

If your offset exceeds 1μs consistently, check cable lengths (asymmetric path delay), switch hop count (keep it under 7), and whether any non-TSN switches are breaking the timing chain.
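A gateway health check can automate that verification. The sketch below parses `offsetFromMaster` (reported in nanoseconds) out of pmc's text output, assuming pmc's usual two-column field layout; the 1μs budget is the figure from the text:

```python
def gptp_offset_ok(pmc_output: str, max_offset_ns: float = 1000.0) -> bool:
    """Extract offsetFromMaster (ns) from pmc output and check it
    against the 1 microsecond synchronization budget."""
    for line in pmc_output.splitlines():
        fields = line.split()
        if len(fields) == 2 and fields[0] == "offsetFromMaster":
            return abs(float(fields[1])) < max_offset_ns
    raise ValueError("offsetFromMaster not found in pmc output")

sample = """sending: GET CURRENT_DATA_SET
\tstepsRemoved     1
\toffsetFromMaster 250.0
\tmeanPathDelay    1200.0"""

print(gptp_offset_ok(sample))  # True — 250ns is comfortably within budget
```

Run a check like this periodically, not just at commissioning — drift under load is a known failure mode (see Pitfall 2 below).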

Scheduled Traffic (IEEE 802.1Qbv)

This is the heart of TSN for industrial use. 802.1Qbv implements time-aware shaping — the switch opens and closes transmission "gates" on a strict schedule. During a gate's open window, only frames from that traffic class can transmit. During the closed window, frames are queued.

A typical gate schedule for a manufacturing cell:

| Time Slot | Duration | Traffic Class | Content |
|---|---|---|---|
| 0–250μs | 250μs | TC7 (Scheduled) | Motion control data (servo positions) |
| 250–750μs | 500μs | TC6 (Scheduled) | Process data (temperatures, pressures) |
| 750–5000μs | 4250μs | TC0–5 (Best Effort) | IT traffic, diagnostics, file transfers |

The cycle repeats every 5ms (200Hz), giving motion control data a guaranteed 250μs window every cycle — regardless of how much IT traffic is on the network.
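A gate schedule only works if the slots tile the cycle exactly — overlaps or gaps are configuration errors worth catching before deployment. A small validation sketch using the schedule above (times in microseconds):

```python
# Validate an 802.1Qbv-style gate schedule: slots must be contiguous,
# non-overlapping, and cover the full cycle.
CYCLE_US = 5000  # 5ms cycle, 200Hz

schedule = [
    (0,   250,  "TC7",   "motion control"),
    (250, 750,  "TC6",   "process data"),
    (750, 5000, "TC0-5", "best effort"),
]

def validate(schedule, cycle_us):
    cursor = 0
    for start, end, _, _ in schedule:
        assert start == cursor and end > start, "slots must be contiguous"
        cursor = end
    assert cursor == cycle_us, "slots must cover the whole cycle"
    return True

print(validate(schedule, CYCLE_US))   # True
motion_duty = 250 / CYCLE_US          # 5% of every cycle reserved for motion
```

The duty-cycle figure is also a quick sanity check on how much scheduled bandwidth you are actually committing per traffic class.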

Stream Reservation (IEEE 802.1Qcc)

Before a publisher starts transmitting, it reserves bandwidth end-to-end through every switch in the path. The reservation specifies maximum frame size, transmission interval, and latency requirement. Switches that can't honor the reservation reject it — you find out at configuration time, not at 2 AM when the line goes down.

Frame Preemption (IEEE 802.1Qbu)

When a high-priority frame needs to transmit but a low-priority frame is already in flight, preemption splits the low-priority frame, transmits the high-priority data, then resumes the interrupted frame. This reduces worst-case latency from one maximum-frame-time (12μs at 1Gbps for a 1500-byte frame) to near-zero.

Mapping OPC-UA Pub/Sub to TSN Traffic Classes

Here's where theory meets configuration. Each WriterGroup needs a TSN traffic class assignment based on its latency and jitter requirements:

Motion Control Data (TC7, under 1ms cycle)

  • Servo positions, encoder feedback, torque commands
  • Publishing interval: 1-4ms
  • UADP encoding (binary, no JSON overhead)
  • Fixed DataSet layout (no dynamic fields — the subscriber knows the structure at compile time)
  • Configuration tip: Set MaxNetworkMessageSize to fit within one Ethernet frame (1472 bytes for UDP). Fragmentation kills determinism.

Process Data (TC6, 10-100ms cycle)

  • Temperatures, pressures, flow rates, OEE counters
  • Publishing interval: 10-1000ms
  • UADP encoding for edge-to-edge, JSON for cloud-bound paths
  • Variable DataSet layout acceptable (metadata included in messages)

Diagnostic and Configuration (TC0-5, best effort)

  • Alarm states, configuration changes, firmware updates
  • No strict timing requirement
  • JSON encoding fine — human-readable diagnostics matter more than microseconds

Practical Configuration Example

For a plastics injection molding cell with 6 machines, each reporting 30 process variables at 100ms intervals:

# OPC-UA Pub/Sub Publisher Configuration (conceptual)
publisher:
  transport: udp-multicast
  multicast_group: 239.0.1.10
  port: 4840

writer_groups:
  - name: "ProcessData_Cell_A"
    publishing_interval_ms: 100
    tsn_traffic_class: 6
    max_message_size: 1472
    encoding: UADP
    datasets:
      - name: "IMM_01_Process"
        variables:
          - barrel_zone1_temp      # int16, °C × 10
          - barrel_zone2_temp      # int16, °C × 10
          - barrel_zone3_temp      # int16, °C × 10
          - mold_clamp_pressure    # float32, bar
          - injection_pressure     # float32, bar
          - cycle_time_ms          # uint32
          - shot_count             # uint32

  - name: "Alarms_Cell_A"
    publishing_interval_ms: 0      # event-driven
    tsn_traffic_class: 5
    encoding: UADP
    key_frame_count: 1             # every message is a key frame
    datasets:
      - name: "IMM_01_Alarms"
        variables:
          - alarm_word_1           # uint16, bitfield
          - alarm_word_2           # uint16, bitfield

The Data Encoding Decision: UADP vs JSON

OPC-UA Pub/Sub supports two wire formats, and choosing wrong will cost you either bandwidth or interoperability.

UADP (UA Datagram Protocol)

  • Binary encoding, tightly packed
  • A 30-variable DataSet encodes to ~200 bytes
  • Supports delta frames — after an initial key frame sends all values, subsequent frames only include changed values
  • Requires subscribers to know the DataSet layout in advance (discovered via OPC-UA client/server or configured statically)
  • Use for: Edge-to-edge communication, TSN paths, anything latency-sensitive
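The key-frame/delta-frame mechanism is the main source of UADP's bandwidth advantage, and its logic is simple enough to sketch. A conceptual version (field names are illustrative, not from the spec):

```python
def delta_frame(previous: dict, current: dict) -> dict:
    """UADP-style delta encoding (conceptual): after a key frame carries
    all values, subsequent frames include only the fields that changed."""
    return {k: v for k, v in current.items() if previous.get(k) != v}

key_frame = {"zone1_temp": 2105, "zone2_temp": 2110, "pressure": 84.2}
next_scan = {"zone1_temp": 2105, "zone2_temp": 2112, "pressure": 84.2}

print(delta_frame(key_frame, next_scan))  # {'zone2_temp': 2112}
```

This is also why `key_frame_count` appears in publisher configurations: a periodic key frame lets late-joining subscribers resynchronize without replaying every delta.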

JSON Encoding

  • Human-readable, self-describing
  • The same 30-variable DataSet expands to ~2KB
  • Every message carries field names and type information
  • No prior configuration needed — subscribers can parse dynamically
  • Use for: Cloud-bound telemetry, debugging, integration with IT systems

The Hybrid Pattern That Works

In practice, most deployments run UADP on the factory-floor TSN network and JSON on the cloud-bound MQTT path. The edge gateway — the device sitting between the OT and IT networks — performs the translation:

  1. Subscribe to UADP multicast on the TSN interface
  2. Decode DataSets using pre-configured metadata
  3. Re-publish as JSON over MQTT to the cloud broker
  4. Add store-and-forward buffering for cloud connectivity gaps
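The translation step can be sketched in a few lines. This is a minimal illustration, not real UADP decoding: a fixed little-endian struct layout stands in for the pre-configured DataSet metadata, and the `outbox` deque stands in for the store-and-forward buffer that drains to MQTT when the cloud link is up. All names are assumptions.

```python
import json
import struct
from collections import deque

# Pre-configured DataSet metadata: field names and struct type codes.
LAYOUT = [("barrel_temp", "h"), ("pressure", "f"), ("cycle_count", "I")]
FMT = "<" + "".join(code for _, code in LAYOUT)  # little-endian, packed

outbox = deque()  # store-and-forward buffer for cloud connectivity gaps

def on_tsn_frame(payload: bytes) -> None:
    """Decode a binary DataSet and re-publish it as self-describing JSON."""
    values = struct.unpack(FMT, payload)
    doc = {name: val for (name, _), val in zip(LAYOUT, values)}
    outbox.append(json.dumps(doc))

on_tsn_frame(struct.pack(FMT, 2105, 84.5, 120034))
print(outbox[0])
```

The asymmetry is the point: the binary side needs shared metadata, the JSON side carries its own — which is exactly why each format sits on the side of the gateway it does.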

This is exactly the pattern that platforms like machineCDN implement — the edge gateway handles protocol translation transparently so that neither the PLCs nor the cloud backend need to understand each other's wire format.

Security Considerations for Pub/Sub Over TSN

The multicast nature of Pub/Sub changes the security model fundamentally. In client/server OPC-UA, each session is authenticated and encrypted end-to-end with X.509 certificates. In Pub/Sub, there's no session — data flows to anyone on the multicast group.

SecurityMode Options

OPC-UA Pub/Sub defines three security modes per WriterGroup:

  1. None — no encryption, no signing. Acceptable only on physically isolated networks with no IT connectivity.
  2. Sign — messages are signed with the publisher's private key. Subscribers verify authenticity but data is readable by anyone on the network.
  3. SignAndEncrypt — messages are both signed and encrypted. Requires key distribution to all authorized subscribers.

Key Distribution: The Hard Problem

Unlike client/server where keys are exchanged during session establishment, Pub/Sub needs a Security Key Server (SKS) that distributes symmetric keys to publishers and subscribers. The SKS rotates keys periodically (recommended: every 1-24 hours depending on sensitivity).

In practice, deploy the SKS on a hardened server in the DMZ between OT and IT networks. Use OPC-UA client/server (with mutual certificate authentication) for key distribution, and Pub/Sub (with those distributed keys) for data delivery.

Network Segmentation

Even with encrypted Pub/Sub, follow defense-in-depth:

  • Isolate TSN traffic on dedicated VLANs
  • Use managed switches with ACLs to restrict multicast group membership
  • Deploy a data diode or unidirectional gateway between the TSN network and any internet-facing systems

Common Deployment Pitfalls

Pitfall 1: Multicast Flooding

TSN switches handle multicast natively, but if your path crosses a non-TSN switch (even one), multicast frames flood to all ports. This can saturate uplinks and crash unrelated systems. Verify every switch in the path supports IGMP snooping at minimum.

Pitfall 2: Clock Drift Under Load

gPTP synchronization works well at low CPU load, but when an edge gateway is processing 10,000 tags per second, the system clock can drift because gPTP packets get delayed in software queues. Use hardware timestamping (PTP-capable NICs) — software timestamping adds 10-100μs of jitter, which defeats the purpose of TSN.

Pitfall 3: DataSet Version Mismatch

When you add a variable to a publisher's DataSet, all subscribers with static configurations will misparse subsequent messages. UADP includes a DataSetWriterId and ConfigurationVersion — increment the version on every schema change and implement version checking in subscriber code.

Pitfall 4: Oversubscribing TSN Bandwidth

Each TSN stream reservation is guaranteed, but the total bandwidth allocated to scheduled traffic classes can't exceed ~75% of link capacity (the remaining 25% prevents guard-band starvation of best-effort traffic). On a 1Gbps link, that's 750Mbps for all scheduled streams combined. Do the bandwidth math before deployment, not after.
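"Do the bandwidth math" is mechanical enough to script. A sketch using the 75% budget from the text (the stream mix is a hypothetical example):

```python
# Pre-deployment TSN bandwidth check: scheduled streams must stay under
# ~75% of link capacity to avoid starving best-effort traffic.
LINK_BPS = 1_000_000_000  # 1Gbps link
BUDGET = 0.75

def stream_bps(frame_bytes: int, interval_s: float) -> float:
    """Worst-case bandwidth of one reserved stream."""
    return frame_bytes * 8 / interval_s

streams = (
    [stream_bps(256, 0.001)] * 8     # 8 motion streams: 256B every 1ms
    + [stream_bps(1472, 0.1)] * 50   # 50 process streams: full frame / 100ms
)

total = sum(streams)
print(total <= LINK_BPS * BUDGET)  # True — this mix fits comfortably
```

If the check fails, either lengthen publishing intervals, batch more DataSets per WriterGroup, or move traffic to best-effort classes.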

When to Use Pub/Sub vs Client/Server

Pub/Sub over TSN isn't a universal replacement for client/server. Use this decision matrix:

| Scenario | Recommended Model |
|---|---|
| HMI reading 50 tags from one PLC | Client/Server |
| Historian collecting from 100+ PLCs | Pub/Sub |
| Real-time motion control (under 1ms) | Pub/Sub over TSN |
| Configuration and commissioning | Client/Server |
| Cloud telemetry pipeline | Pub/Sub over MQTT |
| 10+ consumers need same data | Pub/Sub |
| Firewall traversal required | Client/Server (reverse connect) |

The Road Ahead: OPC-UA FX

The OPC Foundation's Field eXchange (FX) initiative extends Pub/Sub with controller-to-controller communication profiles — enabling PLCs from different vendors to exchange data over TSN without custom integration. FX defines standardized connection management, diagnostics, and safety communication profiles.

For manufacturers, FX means the edge gateway that today bridges between incompatible PLCs will eventually become optional for direct PLC-to-PLC communication — while remaining essential for the cloud telemetry path where platforms like machineCDN normalize data across heterogeneous equipment.

Key Takeaways

  1. Pub/Sub eliminates the N×M session problem that limits OPC-UA client/server at scale
  2. TSN provides deterministic delivery with bounded latency guaranteed by the network infrastructure
  3. UADP encoding on TSN, JSON over MQTT is the hybrid pattern that works for most manufacturing deployments
  4. Hardware timestamping is non-negotiable for sub-microsecond synchronization accuracy
  5. Security requires a Key Server — Pub/Sub's multicast model doesn't support session-based authentication
  6. Budget 75% of link capacity for scheduled traffic to prevent guard-band starvation

The convergence of OPC-UA Pub/Sub and TSN represents the most significant shift in industrial networking since the migration from fieldbus to Ethernet. Getting the architecture right at deployment time saves years of retrofitting — and the practical patterns in this guide reflect what actually works on production floors, not just in vendor demo labs.

OPC-UA Information Modeling and Subscriptions: A Deep Dive for IIoT Engineers [2026]

· 12 min read

If you've spent time wiring Modbus registers to cloud platforms, you know the pain: flat address spaces, no built-in semantics, and endless spreadsheets mapping register 40004 to "Mold Temperature Zone 2." OPC-UA was designed to solve exactly this problem — but its information modeling layer is far richer (and more complex) than most engineers realize when they first encounter it.

This guide goes deep on how OPC-UA structures industrial data, how subscriptions efficiently deliver changes to clients, and how security policies protect the entire stack. Whether you're evaluating OPC-UA for a greenfield deployment or bridging it into an existing Modbus/EtherNet-IP environment, this is the practical knowledge you need.

Securing Industrial MQTT and OT Networks: TLS, Certificates, and Zero-Trust for the Factory Floor [2026]

· 13 min read

The edge gateway sitting on your factory floor is talking to the cloud. It's reading temperature, pressure, and flow data from PLCs over Modbus, packaging it into MQTT messages, and publishing to a broker that might be Azure IoT Hub, AWS IoT Core, or a self-hosted Mosquitto instance. The question isn't whether that data path is valuable — it's whether anyone else is listening.

Industrial MQTT security isn't a theoretical exercise. A compromised edge gateway can inject false telemetry (making operators think everything is fine when it isn't), intercept production data (exposing process parameters to competitors), or pivot into the OT network to reach PLCs directly. This guide covers the practical measures that actually protect these systems.

OPC-UA Subscriptions and Monitored Items: Engineering Low-Latency Data Pipelines for Manufacturing [2026]

· 10 min read

If you've worked with industrial protocols long enough, you know there are exactly two categories of data delivery: polling (you ask, the device answers) and subscriptions (the device tells you when something changes). OPC-UA's subscription model is one of the most sophisticated data delivery mechanisms in industrial automation — and one of the most frequently misconfigured.

This guide covers how OPC-UA subscriptions actually work at the wire level, how to configure monitored items for different manufacturing scenarios, and the real-world performance tradeoffs that separate a responsive factory dashboard from one that lags behind reality by minutes.

How OPC-UA Subscriptions Differ from Polling

In a traditional Modbus or EtherNet/IP setup, the client polls registers on a fixed interval — every 1 second, every 5 seconds, whatever the configuration says. This is simple and predictable, but it has fundamental limitations:

  • Wasted bandwidth: If a temperature value hasn't changed in 30 minutes, you're still reading it every second
  • Missed transients: If a pressure spike occurs between poll cycles, you'll never see it
  • Scaling problems: With 500 tags across 20 PLCs, fixed-interval polling creates predictable network congestion waves

OPC-UA subscriptions flip this model. Instead of the client pulling data, the server monitors values internally and notifies the client only when something meaningful changes. The key word is "meaningful" — and that's where the engineering gets interesting.

The Three Layers of OPC-UA Subscriptions

An OPC-UA subscription isn't a single thing. It's three nested concepts that work together:

1. The Subscription Object

A subscription is a container that defines the publishing interval — how often the server checks its monitored items and bundles any pending notifications into a single message. Think of it as the heartbeat of the data pipeline.

Publishing Interval: 500ms
Max Keep-Alive Count: 10
Max Notifications Per Publish: 0 (unlimited)
Priority: 100

The publishing interval is NOT the sampling rate. This is a critical distinction. The publishing interval only controls how often notifications are bundled and sent to the client. A 500ms publishing interval with a 100ms sampling rate means values are checked 5 times between each publish cycle.

2. Monitored Items

Each variable you want to track becomes a monitored item within a subscription. This is where the real configuration lives:

  • Sampling Interval: How often the server reads the underlying data source (PLC register, sensor, calculated value)
  • Queue Size: How many value changes to buffer between publish cycles
  • Discard Policy: When the queue overflows, do you keep the oldest or newest values?
  • Filter: What constitutes a "change" worth reporting?

3. Filters (Deadbands)

Filters determine when a monitored item's value has changed "enough" to warrant a notification. There are two types:

  • Absolute Deadband: Value must change by at least X units (e.g., temperature must change by 0.5°F)
  • Percent Deadband: Value must change by X% of its engineering range

Without a deadband filter, you'll get notifications for every single floating-point fluctuation — including ADC noise that makes a temperature reading bounce between 72.001°F and 72.003°F. That's not useful data. That's noise masquerading as signal.
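An absolute deadband is a one-line predicate; the subtlety is that the comparison runs against the last *reported* value, not the last sample — otherwise slow drift can sneak under the threshold forever. A sketch:

```python
def passes_deadband(last_reported: float, new_value: float,
                    deadband: float) -> bool:
    """Absolute deadband: report only when the value has moved at least
    `deadband` units away from the last REPORTED value."""
    return abs(new_value - last_reported) >= deadband

# ADC noise around 72°F with a 0.5°F deadband: suppressed.
print(passes_deadband(72.001, 72.003, 0.5))  # False
# A real half-degree drift: reported.
print(passes_deadband(72.0, 72.6, 0.5))      # True
```

Percent deadband works the same way, with the threshold computed as a fraction of the tag's engineering range instead of a fixed unit count.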

Practical Configuration Patterns

Pattern 1: Critical Alarms (Boolean State Changes)

For alarm bits — compressor faults, pressure switch trips, flow switch states — you want immediate notification with zero tolerance for missed events.

Subscription:
Publishing Interval: 250ms

Monitored Item (alarm_active):
Sampling Interval: 100ms
Queue Size: 10
Discard Policy: DiscardOldest
Filter: None (report every change)

Why a queue size of 10? Because boolean alarm bits can toggle rapidly during fault conditions. A compressor might fault, reset, and fault again within a single publish cycle. Without a queue, you'd only see the final state. With a queue, you see the full sequence — which is critical for root cause analysis.

Pattern 2: Process Temperatures (Slow-Moving Analog)

Chiller outlet temperature, barrel zone temps, coolant temperatures — these change gradually and generate enormous amounts of redundant data without deadbanding.

Subscription:
Publishing Interval: 1000ms

Monitored Item (chiller_outlet_temp):
Sampling Interval: 500ms
Queue Size: 5
Discard Policy: DiscardOldest
Filter: AbsoluteDeadband(0.5) // °F

A 0.5°F deadband means you won't get notifications from ADC noise, but you will catch meaningful process drift. At a 500ms sampling rate, the server checks the value twice per publish cycle, ensuring you don't miss a rapid temperature swing even with the coarser publishing interval.

Pattern 3: High-Frequency Production Counters

Cycle counts, part counts, shot counters — these increment continuously during production and need efficient handling.

Subscription:
Publishing Interval: 5000ms

Monitored Item (cycle_count):
Sampling Interval: 1000ms
Queue Size: 1
Discard Policy: DiscardOldest
Filter: None

Queue size of 1 is intentional here. You only care about the latest count value — intermediate values are meaningless because the counter only goes up. A 5-second publishing interval means you update dashboards at a reasonable rate without flooding the network with every single increment.

Pattern 4: Energy Metering (Cumulative Registers)

Power consumption registers accumulate continuously. The challenge is capturing the delta accurately without drowning in data.

Subscription:
Publishing Interval: 60000ms (1 minute)

Monitored Item (energy_kwh):
Sampling Interval: 10000ms
Queue Size: 1
Discard Policy: DiscardOldest
Filter: PercentDeadband(1.0) // 1% of range

For energy data, minute-level resolution is typically sufficient for cost allocation and ESG reporting. The percent deadband prevents notifications from meter jitter while still capturing real consumption changes.

Queue Management: The Hidden Performance Killer

Here's what most OPC-UA deployments get wrong: they set queue sizes too small and wonder why their historical data has gaps.

Consider what happens during a network hiccup. The subscription's publish cycle fires, but the client is temporarily unreachable. The server holds notifications in the subscription's retransmission queue for a configurable number of keep-alive cycles. But the monitored item queue is independent — it continues filling with new samples.

If your monitored item queue size is 1 and the network is down for 10 seconds at a 100ms sampling rate, you've lost 100 samples. When the connection recovers, you get exactly one value — the last one. The history is gone.

Rule of thumb: Set the queue size to at least (expected_max_outage_seconds × 1000) / sampling_interval_ms for any tag where you can't afford data gaps.

For a process that needs 30-second outage tolerance at 500ms sampling:

Queue Size = (30 × 1000) / 500 = 60

That's 60 entries per monitored item. Multiply by your tag count and you'll understand why OPC-UA server memory sizing matters.
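The rule of thumb is worth encoding so nobody re-derives it per tag. A sketch (the 48 bytes/entry memory figure is an assumption for illustration, not a spec value):

```python
import math

def queue_size(outage_tolerance_s: float, sampling_interval_ms: float) -> int:
    """One queue entry per sample that would occur during the
    worst-case outage: (outage_s * 1000) / sampling_ms, rounded up."""
    return math.ceil(outage_tolerance_s * 1000 / sampling_interval_ms)

q = queue_size(30, 500)
print(q)  # 60 entries per monitored item

# Server memory sizing follows directly, e.g. with an assumed ~48B/entry:
# 2,000 tags x 60 entries x 48 bytes ~= 5.5 MB of queue memory alone.
```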

Sampling Interval vs. Publishing Interval: Getting the Ratio Right

The relationship between sampling interval and publishing interval determines your system's behavior:

| Ratio | Behavior | Use Case |
|---|---|---|
| Sampling = Publishing | Sample once, publish once | Simple monitoring, low bandwidth |
| Sampling < Publishing | Multiple samples per publish, deadband filtering effective | Process control, drift detection |
| Sampling << Publishing | High-resolution capture, batched delivery | Vibration, power quality |

Anti-pattern: Setting sampling interval to 0 (fastest possible). This tells the server to sample at its maximum rate, which on some implementations means every scan cycle of the underlying PLC. A Siemens S7-1500 scanning at 1ms will generate 1,000 samples per second per tag. With 200 tags, that's 200,000 data points per second — most of which are identical to the previous value.

Better approach: Match the sampling interval to the physical process dynamics. A barrel heater zone that takes 30 seconds to change 1°F doesn't need 10ms sampling. A pneumatic valve that opens in 50ms does.

Subscription Diagnostics and Health Monitoring

OPC-UA provides built-in diagnostics that most deployments ignore:

Subscription-Level Counters

  • NotificationCount: Total notifications sent since subscription creation
  • PublishRequestCount: How many publish requests the client has outstanding
  • RepublishCount: How many times the server had to retransmit (indicates network issues)
  • TransferredCount: Subscriptions transferred between sessions (cluster failover)

Monitored Item Counters

  • SamplingCount: How many times the item was sampled
  • QueueOverflowCount: How many values were discarded due to full queues — this is your canary
  • FilteredCount: How many samples were suppressed by deadband filters

If QueueOverflowCount is climbing, your queue is too small for the sampling rate and publish interval combination. If FilteredCount is near SamplingCount, your deadband is too aggressive — you're suppressing real data.

How This Compares to Change-Based Polling in Other Protocols

OPC-UA subscriptions aren't the only way to get change-driven data from PLCs. In practice, many IIoT platforms — including machineCDN — implement intelligent change detection at the edge, regardless of the underlying protocol.

The pattern works like this: the edge gateway reads register values on a schedule, compares them to the previously read values, and only transmits data upstream when a meaningful change occurs. Critical state changes (alarms, link state transitions) bypass batching entirely and are sent immediately. Analog values are batched on configurable intervals and compared using value-based thresholds.

This approach brings subscription-like efficiency to protocols that don't natively support it (Modbus, older EtherNet/IP devices). The tradeoff is latency — you're still polling, so maximum detection latency equals your polling interval. But for processes where sub-second change detection isn't required, it's remarkably effective and dramatically reduces cloud ingestion costs.
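The core of that edge pattern fits in one function. A conceptual sketch — not machineCDN's actual implementation — with per-tag thresholds and a set of critical tags that bypass batching:

```python
def edge_filter(previous: dict, sample: dict, thresholds: dict,
                critical: set):
    """Edge change detection (conceptual): critical tags are forwarded
    immediately on any change; analog tags must move past a per-tag
    threshold to enter the batch. `previous` tracks the last read values."""
    immediate, batched = {}, {}
    for tag, value in sample.items():
        old = previous.get(tag)
        if tag in critical:
            if old != value:
                immediate[tag] = value
        elif old is None or abs(value - old) >= thresholds.get(tag, 0.0):
            batched[tag] = value
    previous.update(sample)  # compare against last read, per the text
    return immediate, batched

state = {"alarm": 0, "temp": 72.0}
imm, bat = edge_filter(state, {"alarm": 1, "temp": 72.1},
                       thresholds={"temp": 0.5}, critical={"alarm"})
print(imm, bat)  # {'alarm': 1} {} — alarm sent now, temp noise suppressed
```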

Real-World Performance Numbers

From production deployments across plastics, packaging, and discrete manufacturing:

| Configuration | Tags | Bandwidth | Update Latency |
|---|---|---|---|
| Fixed 1s polling, no filtering | 500 | 2.1 Mbps | 1s |
| OPC-UA subscriptions, 500ms publish, deadband | 500 | 180 Kbps | 250ms–500ms |
| Edge change detection + batching | 500 | 95 Kbps | 1s–5s (configurable) |
| OPC-UA subs + edge batching combined | 500 | 45 Kbps | 500ms–5s (priority dependent) |

The bandwidth savings from proper subscription configuration are typically 10–20x compared to naive polling. Combined with edge-side batching for cloud delivery, you can achieve 40–50x reduction — which matters enormously on cellular connections at remote facilities.

Common Pitfalls

1. Ignoring the Revised Sampling Interval

When you request a sampling interval, the server may revise it to a supported value. Always check the response — if you asked for 100ms and the server gave you 1000ms, your entire timing assumption is wrong.

2. Too Many Subscriptions

Each subscription has overhead: keep-alive traffic, retransmission buffers, and a dedicated publish thread on some implementations. Don't create one subscription per tag — group tags by priority class and use 3–5 subscriptions total.

3. Forgetting Lifetime Count

The subscription's lifetime count determines how many publish cycles can pass without a successful client response before the server kills the subscription. On unreliable networks, set this high enough to survive outages without losing your subscription state.

4. Not Monitoring Queue Overflows

If you're not checking QueueOverflowCount, you have no idea whether you're losing data. This is especially insidious because everything looks fine on your dashboard — you just have invisible gaps in your history.

Wrapping Up

OPC-UA subscriptions are the most capable data delivery mechanism in industrial automation today, but capability without proper configuration is just complexity. The fundamentals come down to:

  1. Match sampling intervals to process dynamics, not to what feels fast enough
  2. Use deadbands aggressively on analog values — noise isn't data
  3. Size queues for your worst-case outage, not your average case
  4. Monitor the diagnostics — OPC-UA tells you when things are wrong, if you're listening

For manufacturing environments where protocols like Modbus and EtherNet/IP dominate the device layer, an edge platform like machineCDN provides change-based detection and intelligent batching that delivers subscription-like efficiency regardless of the underlying protocol — bridging the gap between legacy equipment and modern analytics pipelines.

The protocol layer is just plumbing. What matters is getting the right data, at the right time, to the right system — without burying your network or your cloud budget under a mountain of redundant samples.

OPC-UA Pub/Sub vs Client/Server: Choosing the Right Pattern for Your Plant Floor [2026]

· 10 min read

OPC-UA Architecture

If you've spent any time connecting PLCs to cloud dashboards, you've run into OPC-UA. The protocol dominates industrial interoperability conversations — and for good reason. Its information model, security architecture, and cross-vendor compatibility make it the lingua franca of modern manufacturing IT.

But here's what trips up most engineers: OPC-UA isn't a single communication pattern. It's two fundamentally different paradigms sharing one information model. Client/server has been the workhorse since OPC-UA's inception. Pub/sub, ratified in Part 14 of the specification, is the newer pattern designed for one-to-many data distribution. Picking the wrong one can mean the difference between a system that scales to 500 machines and one that falls over at 50.

Let's break down when you need each, how they actually behave on the wire, and where the real-world performance boundaries lie.

The Client/Server Model: What You Already Know (and What You Don't)

OPC-UA client/server follows a familiar request-response paradigm. A client establishes a secure channel to a server, opens a session, creates one or more subscriptions, and receives notifications when monitored item values change.

How Subscriptions Actually Work

This is where many engineers have an incomplete mental model. A subscription isn't a simple "tell me when X changes." It's a multi-layered construct:

  1. Monitored Items — Each tag you want to observe becomes a monitored item with its own sampling interval (how often the server checks the underlying data source) and queue size (how many values to buffer between publish cycles).

  2. Publishing Interval — The subscription itself has a publishing interval that determines how frequently the server packages up change notifications and sends them to the client. This is independent of the sampling interval.

  3. Keep-alive — If no data changes occur within the publishing interval, the server sends a keep-alive message. After a configurable number of missed keep-alives, the subscription is considered dead.

The key insight: sampling and publishing are decoupled. You might sample a temperature sensor at 100ms but only publish aggregated notifications every 1 second. This reduces network traffic without losing fidelity at the source.
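The sampling/publishing decoupling has a direct sizing consequence, sketched below with an illustrative helper (names are my own, not an SDK API): the monitored-item queue must hold at least one value per sample taken between publish cycles, or the server overflows the queue and drops data.

```python
def samples_per_publish(sampling_ms: float, publishing_ms: float) -> int:
    """Max values a monitored item can accumulate between publish cycles."""
    return int(publishing_ms // sampling_ms)

# 100 ms sampling into a 1 s publishing interval: up to 10 values per cycle,
# so the monitored-item queue should hold at least 10 to avoid overflow.
print(samples_per_publish(100, 1000))  # → 10
```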

Real-World Performance Characteristics

In practice, a single OPC-UA server can typically handle:

  • 50-200 concurrent client sessions (depending on hardware)
  • 5,000-50,000 monitored items per server across all sessions
  • Publishing intervals down to ~50ms before CPU becomes the bottleneck
  • Secure channel negotiation times of 200-800ms depending on security policy

The bottleneck isn't usually bandwidth — it's the server's CPU. Every subscription requires the server to maintain state, evaluate sampling queues, and serialize notification messages for each connected client independently. This is the fan-out problem.

When Client/Server Breaks Down

Consider a plant with 200 machines, each exposing 100 tags. A central historian, a real-time dashboard, an analytics engine, and an alarm system all need access. That's four clients × 200 servers × 100 tags — 80,000 monitored items across the plant.

Every server must maintain four independent subscription contexts. Every data change gets serialized and transmitted four times — once per client. The server doesn't know or care that all four clients want the same data. It can't share work between them.

At moderate scale, this works fine. At plant-wide scale with hundreds of devices and dozens of consumers, you're asking each embedded OPC-UA server on a PLC to handle work that grows linearly with the number of consumers. That's the architectural tension pub/sub was designed to resolve.
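The fan-out math above can be made explicit with a small model (function and parameter names are illustrative): session count and server-side work both grow as the product of servers and consumers, which is exactly why adding a fifth consumer to a plant-wide deployment is so much more expensive than it looks.

```python
def client_server_load(servers: int, consumers: int,
                       tags_per_server: int, update_hz: float):
    """Rough load model for a pure client/server deployment."""
    sessions = servers * consumers                  # every consumer connects to every server
    monitored_items = sessions * tags_per_server    # each session duplicates the tag set
    notifications_per_s = monitored_items * update_hz
    return sessions, monitored_items, notifications_per_s

# 200 machines, 4 plant-wide consumers, 100 tags each, 1 Hz updates
print(client_server_load(200, 4, 100, 1.0))  # → (800, 80000, 80000.0)
```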

The Pub/Sub Model: How It Actually Differs

OPC-UA Pub/Sub fundamentally changes the relationship between data producers and consumers. Instead of maintaining per-client connections, a publisher emits data to a transport (typically UDP multicast or an MQTT broker) and subscribers independently consume from that transport.

The Wire Format: UADP vs JSON

Pub/sub messages can be encoded in two ways:

UADP (OPC UA Datagram Protocol) — A compact binary encoding optimized for bandwidth-constrained networks. A typical dataset message with 50 variables fits in ~400 bytes. Headers contain security metadata, sequence numbers, and writer group identifiers. This is the format you want for real-time control loops.

JSON encoding — Human-readable, easier to debug, but 3-5x larger on the wire. Useful when messages need to traverse IT infrastructure (firewalls, API gateways, log aggregators) where binary inspection is impractical.

Publisher Configuration

A publisher organizes its output into a hierarchy:

Publisher
└── WriterGroup (publishing interval, transport settings)
    └── DataSetWriter (maps to a PublishedDataSet)
        └── PublishedDataSet (the actual variables)

Each WriterGroup controls the publishing cadence and encoding. A single publisher might have one WriterGroup at 100ms for critical process variables and another at 10 seconds for auxiliary measurements.

DataSetWriters bind the data model to the transport. They define which variables go into which messages and how they're sequenced.
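The hierarchy above can be sketched as a few plain data classes — this is a conceptual model of the Part 14 configuration objects, not the API of any particular OPC-UA SDK; all names and field choices here are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class PublishedDataSet:
    name: str
    variables: list          # tag names / node ids to publish

@dataclass
class DataSetWriter:
    name: str
    dataset: PublishedDataSet  # binds the dataset to the transport

@dataclass
class WriterGroup:
    publishing_interval_ms: int
    encoding: str              # "uadp" or "json"
    writers: list = field(default_factory=list)

# One fast group for critical process variables, one slow group for auxiliaries
fast = WriterGroup(100, "uadp",
                   [DataSetWriter("pv", PublishedDataSet("process", ["temp", "pressure"]))])
slow = WriterGroup(10_000, "uadp",
                   [DataSetWriter("aux", PublishedDataSet("auxiliary", ["oil_level"]))])
```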

Subscriber Discovery

One of pub/sub's elegant features is publisher-subscriber decoupling. A subscriber doesn't need to know the publisher's address. It subscribes to a multicast group or MQTT topic and discovers available datasets from the messages themselves. DataSet metadata (field names, types, engineering units) can be embedded in the message or discovered via a separate metadata channel.

In practice, this means you can add a new analytics consumer to a running plant network without touching a single PLC configuration. The publisher doesn't even know the new subscriber exists.

Head-to-Head: The Numbers That Matter

| Dimension | Client/Server | Pub/Sub (UADP/UDP) | Pub/Sub (JSON/MQTT) |
| --- | --- | --- | --- |
| Latency (typical) | 5-50ms | 1-5ms | 10-100ms |
| Connection setup | 200-800ms | None (connectionless) | Broker-dependent |
| Bandwidth per 100 tags | ~2-4 KB/s | ~0.5-1 KB/s | ~3-8 KB/s |
| Max consumers per dataset | ~50 practical | Unlimited (multicast) | Broker-limited |
| Security | Session-level encryption | Message-level signing/encryption | TLS + message-level |
| Firewall traversal | Easy (single TCP) | Hard (multicast) | Easy (TCP to broker) |
| Deterministic timing | No | Yes (with TSN) | No |

The Latency Story

Client/server latency is bounded by the publishing interval plus network round-trip plus serialization overhead. The server must evaluate all monitored items in the subscription, package the notification, encrypt it, and transmit it — for each client independently.

Pub/sub with UADP over UDP can achieve sub-millisecond delivery when combined with Time-Sensitive Networking (TSN). The publisher serializes the dataset once, and the network fabric handles delivery to all subscribers simultaneously. There's no per-subscriber work on the publisher side.

Security Trade-offs

Client/server has the more mature security story. Each session negotiates its own secure channel with certificate-based authentication, message signing, and encryption. The server knows exactly who's connected and can enforce fine-grained access control.

Pub/sub security is message-based. Publishers sign and optionally encrypt messages using security keys distributed through a Security Key Server (SKS). Subscribers must obtain the appropriate keys to decrypt and verify messages. This works, but key distribution and rotation add operational complexity that client/server doesn't have.

Practical Architecture Patterns

Pattern 1: Client/Server for Configuration, Pub/Sub for Telemetry

The most common hybrid approach uses client/server for interactive operations — reading configuration parameters, writing setpoints, browsing the address space, acknowledging alarms — while pub/sub handles the high-frequency telemetry stream.

This plays to each model's strengths. Configuration operations are infrequent, require acknowledgment, and benefit from the request/response guarantee. Telemetry is high-volume, one-directional, and needs to scale to many consumers.

Pattern 2: Edge Aggregation with Pub/Sub Fan-out

Deploy an edge gateway that connects to PLCs via client/server (or native protocols like Modbus or EtherNet/IP), normalizes the data, and re-publishes it via OPC-UA pub/sub. The gateway absorbs the per-device connection complexity while providing a clean, scalable distribution layer.

This is exactly the pattern that platforms like machineCDN implement — the edge software handles the messy reality of multi-protocol PLC communication while providing a unified data stream that any number of consumers can tap into.

Pattern 3: MQTT Broker as Pub/Sub Transport

If your plant network can't support UDP multicast (many can't, due to switch configurations or security policies), use an MQTT broker as the pub/sub transport. The publisher sends OPC-UA pub/sub messages (JSON-encoded) to MQTT topics. Subscribers consume from those topics.

You lose the latency advantage of raw UDP, but you gain:

  • Standard IT infrastructure compatibility
  • Built-in persistence (retained messages)
  • Existing monitoring and management tools
  • Firewall-friendly TCP connections

The overhead is measurable — expect 10-50ms additional latency per hop through the broker — but for most monitoring and analytics use cases, this is perfectly acceptable.
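A JSON-encoded network message for this pattern can be built with nothing but the standard library. The shape below follows the general Part 14 JSON mapping (MessageId, MessageType "ua-data", PublisherId, a Messages array of dataset messages), but the field set is simplified for illustration and the topic name is an assumption — check your broker's topic conventions and the spec for the full header set.

```python
import json
import uuid
from datetime import datetime, timezone

def build_json_network_message(publisher_id: str, writer_id: int, values: dict) -> str:
    """Simplified JSON-encoded OPC-UA PubSub NetworkMessage for MQTT transport."""
    msg = {
        "MessageId": str(uuid.uuid4()),
        "MessageType": "ua-data",
        "PublisherId": publisher_id,
        "Messages": [{
            "DataSetWriterId": writer_id,
            "Timestamp": datetime.now(timezone.utc).isoformat(),
            "Payload": values,
        }],
    }
    return json.dumps(msg)

payload = build_json_network_message("edge-gw-01", 1, {"Temperature": 72.4, "Pressure": 1.8})
# publish `payload` to an MQTT topic, e.g. "opcua/json/data/edge-gw-01" (hypothetical)
```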

Migration Strategy: Moving from Pure Client/Server

If you're running a pure client/server architecture today and hitting scale limits, don't rip and replace. Migrate incrementally:

  1. Identify high-fan-out datasets — Which datasets have 3+ consumers? Those are your first pub/sub candidates.

  2. Deploy an edge pub/sub gateway — Stand up a gateway that subscribes to your existing OPC-UA servers (via client/server) and republishes via pub/sub. Existing consumers continue to work unchanged.

  3. Migrate consumers one at a time — Move each consumer from direct server connections to the pub/sub stream. Monitor for data quality and latency differences.

  4. Push pub/sub to the source — Once proven, configure PLCs and servers that support native pub/sub to publish directly, eliminating the gateway hop for those devices.

When to Use Which: The Decision Matrix

Choose Client/Server when:

  • You need request/response semantics (writes, method calls)
  • Consumer count is small and stable (< 10 per server)
  • You need to browse and discover the address space interactively
  • Security audit requirements demand per-session access control
  • Your network doesn't support multicast

Choose Pub/Sub when:

  • You have many consumers for the same dataset
  • You need deterministic, low-latency delivery (especially with TSN)
  • Publishers are resource-constrained (embedded PLCs)
  • You're distributing data across network boundaries (IT/OT convergence)
  • You want to decouple publisher lifecycle from consumer lifecycle

Choose both when:

  • You're building a plant-wide platform (this is most real deployments)
  • Configuration and telemetry have different reliability requirements
  • You need to scale consumers independently of device count

The Future: TSN + Pub/Sub

The convergence of OPC-UA Pub/Sub with IEEE 802.1 Time-Sensitive Networking is arguably the most significant development in industrial networking since Ethernet hit the plant floor. TSN provides guaranteed bandwidth allocation, bounded latency, and time synchronization at the network switch level. Combined with UADP encoding, this enables OPC-UA to replace proprietary fieldbus protocols in deterministic control applications.

We're not there yet for most brownfield deployments. TSN-capable switches are expensive, and PLC vendor support is still rolling out. But for greenfield installations making architecture decisions today, TSN-ready pub/sub infrastructure is worth designing for.

Getting Started

If you're evaluating OPC-UA patterns for your plant:

  1. Audit your current fan-out — Count how many consumers connect to each data source. If any source serves 5+ consumers, pub/sub will reduce its load.

  2. Test your network for multicast — Many industrial Ethernet switches support multicast, but it may not be configured. Work with your network team to test IGMP snooping and multicast routing.

  3. Start with MQTT transport — If multicast isn't viable, MQTT-based pub/sub is the lowest-friction path. You can always migrate to UADP/UDP later.

  4. Consider an edge platform — Platforms like machineCDN handle the protocol translation and data normalization layer, letting you focus on the analytics and business logic rather than wrestling with transport plumbing.
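Step 1 of the list above is easy to automate once you have a session inventory exported from your servers. A minimal sketch (the data shape and the threshold of 5 mirror the guidance above; function name is my own):

```python
from collections import Counter

def fanout_candidates(connections, threshold: int = 5):
    """connections: (consumer, source) pairs from a session audit.
    Returns sources whose consumer count makes them pub/sub candidates."""
    counts = Counter(source for _, source in connections)
    return sorted(source for source, n in counts.items() if n >= threshold)

conns = [("historian", "plc-01"), ("dashboard", "plc-01"), ("alarms", "plc-01"),
         ("analytics", "plc-01"), ("mes", "plc-01"), ("historian", "plc-02")]
print(fanout_candidates(conns))  # → ['plc-01']
```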

The choice between client/server and pub/sub isn't either/or. It's understanding which pattern serves which data flow — and designing your architecture accordingly.

OPC-UA Security Policies: Certificate Management for Industrial Networks [2026 Guide]

· 11 min read

OPC-UA Security Certificate Management

If you've ever deployed OPC-UA in a production environment, you've hit the certificate wall. Everything works beautifully in development with self-signed certs and None security — then the IT security team shows up, and suddenly your perfectly functioning SCADA bridge is a compliance nightmare.

This guide cuts through the confusion. We'll cover how OPC-UA security actually works at the protocol level, what the security policies mean in practice, and how to manage certificates across a fleet of industrial devices without losing your mind.

Securing Industrial IoT: TLS for MQTT, OPC-UA Certificates, and Zero-Trust OT Networks [2026]

· 12 min read

Industrial OT Security Architecture

Here's an uncomfortable truth from the field: most industrial IoT deployments I've seen have at least one Modbus TCP device exposed without any authentication. No TLS. No access control. Just port 502, wide open, on a "segmented" network that's one misconfigured switch away from the corporate LAN.

The excuse is always the same: "It's air-gapped." It never actually is.

This guide covers what securing industrial protocol communications looks like in practice — not the compliance checkbox version, but the engineering decisions that determine whether an attacker who lands on your OT network can read holding registers, inject false sensor data, or shut down a production line.

Best OPC UA Data Platforms 2026: Connecting Industrial Equipment to Modern Analytics

· 8 min read
MachineCDN Team
Industrial IoT Experts

OPC UA has become the de facto standard for industrial data interoperability, but choosing a platform that actually handles OPC UA data well — from edge collection to cloud analytics — remains one of the most confusing decisions in manufacturing IT. Most platforms claim OPC UA support. Far fewer deliver seamless, production-ready implementations that manufacturing engineers can deploy without a six-month integration project.

OPC-UA Information Modeling for IIoT: Beyond Simple Tag Reads [2026 Guide]

· 10 min read

OPC-UA Information Modeling Architecture

If you've spent any time polling PLC tags over EtherNet/IP or reading Modbus registers, you've felt the pain: flat address spaces, no self-description, and zero standardized semantics. Register 40001 on one chiller means something completely different on another vendor's dryer. You end up maintaining sprawling JSON configuration files that map register addresses to human-readable names, data types, element counts, and polling intervals — for every single device variant.

OPC-UA was designed to solve exactly this problem. But most guides treat it as an abstract specification. This article breaks down what actually matters when you're building industrial IoT infrastructure that needs to talk to real equipment.