
EtherNet/IP and CIP: A Practical Guide to Implicit vs Explicit Messaging for Plant Engineers [2026]

12 min read

EtherNet/IP is everywhere in North American manufacturing — from plastics auxiliary equipment to automotive assembly lines. But the protocol's layered architecture confuses even experienced controls engineers. What's the actual difference between implicit and explicit messaging? When should you use connected vs unconnected messaging? And how does CIP fit into all of it?

This guide breaks down EtherNet/IP from the wire up, with practical configuration considerations drawn from years of connecting real industrial equipment to cloud analytics platforms.

Understanding CIP: The Protocol Inside the Protocol

EtherNet/IP is really two things: standard Ethernet (TCP/IP and UDP) as the transport, and the Common Industrial Protocol (CIP) as the application layer. CIP is what makes EtherNet/IP speak "automation" — it's the same protocol used by DeviceNet and ControlNet, just running on different physical layers.

CIP models every device as a collection of objects. Each object has a class, an instance, and attributes. Think of it like an object-oriented database sitting inside every device on the network:

Object Model:
├── Identity Object (Class 0x01)
│   ├── Vendor ID
│   ├── Device Type
│   ├── Product Name
│   └── Serial Number
├── Connection Manager (Class 0x06)
│   └── Manages I/O and explicit connections
├── Assembly Object (Class 0x04)
│   ├── Input Assembly (Instance 101)
│   └── Output Assembly (Instance 100)
└── Vendor-Specific Objects (Class 0x64+)
    └── Custom machine parameters

When you read a tag value from a Micro800 or CompactLogix PLC via EtherNet/IP, what actually happens at the wire level is a CIP request targeting a specific object class, instance, and attribute. The tag name gets resolved to a symbolic path that maps to memory inside the controller.
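
To make that concrete, here is a sketch of how a tag name might be encoded into a CIP Read Tag request. The service code 0x4C (Read Tag) and segment type 0x91 (ANSI extended symbol) follow common Logix/CIP conventions, but treat the framing as a simplified illustration rather than a complete wire implementation:

```python
import struct

def symbolic_path(tag_name: str) -> bytes:
    """Encode a tag name as an ANSI extended symbol segment (type 0x91)."""
    data = tag_name.encode("ascii")
    seg = bytes([0x91, len(data)]) + data
    if len(seg) % 2:            # pad to a whole number of 16-bit words
        seg += b"\x00"
    return seg

def read_tag_request(tag_name: str, element_count: int = 1) -> bytes:
    """Service 0x4C (Read Tag), path size in words, path, element count."""
    path = symbolic_path(tag_name)
    return bytes([0x4C, len(path) // 2]) + path + struct.pack("<H", element_count)

req = read_tag_request("Tank_Temperature")
```

The request is just a handful of bytes: one service code, the symbolic path resolving the tag name, and how many elements to read.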

Why This Matters for Data Collection

The object model has a direct impact on how you architect your data collection. When an edge gateway connects to a PLC over EtherNet/IP, it needs to:

  1. Resolve tag names to CIP symbolic paths
  2. Determine element sizes — a BOOL is 1 byte, an INT16 is 2 bytes, an INT32/FLOAT is 4 bytes
  3. Handle arrays by specifying element counts and start indices
  4. Manage connection lifecycle — creating, maintaining, and recovering connections

A typical tag request for a Micro800 controller looks conceptually like this:

Protocol:      EtherNet/IP (ab-eip)
Gateway:       192.168.5.5
CPU:           micro800
Tag name:      Tank_Temperature
Element size:  2 (int16)
Element count: 1
Timeout:       2000 ms

The element size and count are critical. Get them wrong, and you'll read garbage data or crash the connection. A 32-bit float occupying 4 bytes at index 0 will return completely different data if you accidentally read it as two 16-bit integers.
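
A quick way to see this failure mode is to decode the same four bytes both ways. The value 72.5 is an arbitrary example; little-endian byte order is assumed, as is typical for these controllers:

```python
import struct

raw = struct.pack("<f", 72.5)             # the 4 bytes a PLC would hold for a float32
as_float = struct.unpack("<f", raw)[0]    # correct element size: 72.5
as_two_int16 = struct.unpack("<hh", raw)  # wrong element size: (0, 17041)
```

The same bytes read with the wrong element size produce values that look plausible enough to flow straight into a dashboard unnoticed.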

Implicit vs Explicit Messaging: Know the Difference

This is where most of the confusion lives. CIP defines two fundamentally different messaging paradigms, and choosing the wrong one for your use case will either waste bandwidth or miss critical events.

Explicit Messaging (Client/Server)

Explicit messaging is request/response. The client (your HMI, gateway, or SCADA system) sends a request, the server (the PLC) responds. Every message contains full routing information — think of it like HTTP requests.

Characteristics:

  • Runs over TCP (port 44818)
  • Each request contains complete addressing (class, instance, attribute)
  • Unscheduled — happens when the client decides to read/write
  • Higher overhead per message (~100-200 bytes of protocol framing)
  • Reliable delivery (TCP guarantees)
  • Typical round-trip: 2-10ms on a clean network

Best for:

  • Configuration reads/writes
  • Diagnostic data
  • Tag-by-tag polling at moderate rates (1-60 second intervals)
  • Reading serial numbers, firmware versions, device metadata
  • One-time reads that don't need continuous updates

In practice, explicit messaging is what most SCADA and IIoT systems use. When an edge gateway polls 200 tags from a chiller controller every 60 seconds, it's almost certainly using explicit messaging — opening a connection, reading each tag or batch of tags, processing the response, and closing or reusing the connection.

Implicit Messaging (I/O Messaging)

Implicit messaging is scheduled, connection-based data exchange. Once configured, the PLC and its I/O devices exchange data at fixed intervals without per-message addressing overhead. The data is pre-mapped through assembly objects.

Characteristics:

  • Runs over UDP (typically port 2222)
  • Minimal per-message overhead (~20-40 bytes)
  • Scheduled at a fixed RPI (Requested Packet Interval)
  • Connection-oriented — requires setup phase
  • No guaranteed delivery (UDP)
  • Typical cycle: 1-100ms RPI

Best for:

  • Real-time I/O control (drives, valves, motion)
  • High-speed data that must arrive on deterministic schedules
  • Scanner/adapter relationships (PLC ↔ remote I/O)
  • Applications where latency matters more than data integrity

Real-World Decision Matrix

Factor                        Explicit               Implicit
Update rate needed            > 500 ms               < 500 ms
Data type                     Diagnostics, config    Process I/O
Network overhead              Higher per message     Lower per message
Connection management         Simpler                Complex
Recovery from disconnection   Automatic retry        Requires re-establishment
Typical payload               Variable               Fixed-size assemblies

The common mistake: Engineers try to use implicit messaging for monitoring and analytics. Don't. Implicit messaging is designed for control loops, not data collection. You'll burn through connection limits on the PLC and introduce unnecessary complexity. Explicit messaging at sensible polling intervals (1-60 seconds for most process variables) gives you everything you need for analytics.

Scanner/Adapter Architecture

In EtherNet/IP terminology:

  • Scanner = the device that initiates and manages connections (typically the PLC)
  • Adapter = the device that responds (I/O modules, drives, sensors)

This is a strict hierarchy. The scanner "owns" the communication relationship. When you're building an IIoT data collection layer, your edge gateway acts as an explicit messaging client — which is neither scanner nor adapter in the traditional sense. It's an originator of unconnected or connected explicit messages.

Connection Limits: The Hidden Bottleneck

Every EtherNet/IP device has a finite number of CIP connections it can handle. A typical Micro800 might support 16-32 concurrent TCP connections. A CompactLogix might handle 64-128.

When multiple systems try to read from the same PLC — SCADA, HMI, historian, IIoT gateway — you can exhaust these connection limits. Symptoms include:

  • Connection refused errors on new clients
  • Intermittent timeouts as connections fight for slots
  • PLC scan time increases as the controller services more connections

Mitigation strategies:

  1. Connection reuse: Open one TCP connection and multiplex multiple tag reads through it. Don't open a new socket per tag.
  2. Read batching: Read multiple consecutive tags in a single request. If tags occupy contiguous memory, one read request with a higher element count is far more efficient than individual reads.
  3. Intelligent polling intervals: Not every tag needs 1-second updates. Temperature setpoints change rarely — poll them every 3600 seconds. Alarm words need every-second polling. Match the interval to the data's rate of change.
  4. Tag prioritization: Some tags only need to be read after a triggering event. For example, recipe values and batch weights only matter when a batch counter increments — making them dependent on a parent tag rather than independently polled.
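
Strategies 1 and 3 can be combined in one small scheduler: a single shared connection, with each tag polled on its own interval. This is an illustrative sketch, not a real driver — `read_fn` stands in for whatever read call your gateway's EtherNet/IP library provides:

```python
import time

class TagPoller:
    """Multiplex per-interval tag reads through one shared connection."""

    def __init__(self, read_fn):
        self.read_fn = read_fn      # one connection's read function, reused for all tags
        self.tags = {}              # name -> [interval_s, next_due]

    def add(self, name, interval_s):
        self.tags[name] = [interval_s, 0.0]   # due immediately on first poll

    def poll_due(self, now=None):
        now = time.monotonic() if now is None else now
        due = [n for n, (iv, nxt) in self.tags.items() if now >= nxt]
        readings = {n: self.read_fn(n) for n in due}  # all reads share the connection
        for n in due:
            self.tags[n][1] = now + self.tags[n][0]
        return readings
```

An alarm word registered at a 1-second interval and a temperature at 60 seconds then coexist on one socket instead of two, and slow tags stop consuming bandwidth between their due times.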

Practical Configuration Patterns

Configuring Tags for Data Collection

When setting up an edge gateway to collect data from EtherNet/IP devices, tag configuration is where most of the engineering effort goes. Each tag needs:

  • Tag name — the symbolic name as programmed in the PLC
  • Data type — bool, int16, uint16, int32, uint32, float32
  • Element count — 1 for scalars, N for arrays
  • Start index — where in the array to begin reading (0 for single values)
  • Poll interval — how often to read, in seconds
  • Change detection — whether to compare against last value before transmitting

Here's a realistic configuration pattern for a multi-circuit industrial chiller:

# Process variables — moderate polling rate
- name: "Tank Temperature"
  type: int16
  interval: 60      # Slow-changing, 1-minute is fine
  compare: false    # Always report (for trending)

# Alarm words — fast polling, change-only
- name: "Circuit 1 Alarm Word"
  type: uint16
  interval: 1       # Check every second
  compare: true     # Only transmit on change
  priority: high    # Don't batch — send immediately

# Static metadata — very infrequent
- name: "Serial Number"
  type: int16
  interval: 3600    # Once per hour is plenty
  compare: true     # Only if it changes (almost never)

Alarm Words: Bit-Level Extraction

Industrial PLCs often pack multiple alarm states into a single 16-bit register. Each bit represents a different fault condition:

Alarm Word Register (uint16):
Bit 0: High pressure fault
Bit 1: Low pressure fault
Bit 2: Sensor failure
Bit 3: Communication timeout
Bit 4: Overtemperature
...
Bit 15: General fault

A well-designed edge gateway will support calculated tags that extract individual bits from a parent register. When the parent alarm word changes, the system automatically computes the new state of each individual alarm bit and transmits it. This gives you per-alarm granularity for dashboards and notifications without multiplying the number of PLC reads.

The parent → calculated tag pattern:

  1. Read the alarm word register (1 read)
  2. For each bit position, apply shift and mask: (value >> shift) & mask
  3. Compare result against last known state
  4. Only transmit changes

This is far more efficient than reading each alarm as a separate BOOL tag, which would require 16 separate reads per alarm word.
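
The shift-and-mask pattern above fits in a few lines. This sketch keeps per-bit state in a plain dict so only changed bits are reported:

```python
def extract_alarm_bits(word: int, last: dict) -> dict:
    """Return only the alarm bits whose state changed since the previous read."""
    changed = {}
    for bit in range(16):
        state = (word >> bit) & 1        # shift and mask out one alarm bit
        if last.get(bit) != state:
            changed[bit] = state
            last[bit] = state
    return changed
```

On the first call every bit counts as "changed" (there is no prior state); after that, a steady alarm word produces no traffic at all, and a single tripped fault produces exactly one update.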

Dependent Tag Patterns

In batch manufacturing (blending, mixing, extrusion), certain measurements only matter when a batch event occurs. For example:

  • Batch counter increments → read target weights, actual weights, hopper inventory, throughput
  • Recipe change → read new recipe values for all material stations

Modeling these as dependent tags means the gateway only reads them when the parent tag's value changes. This dramatically reduces network traffic and PLC load. A batch blender with 8 material stations might have 50+ dependent tags that only fire when the batch counter increments — perhaps every 30-90 seconds during production.
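
The pattern reduces to one cheap read per cycle plus a conditional burst. A minimal sketch, where `read_fn` is a stand-in for your gateway's tag read call:

```python
def poll_with_dependents(read_fn, parent_tag, dependent_tags, state):
    """Read dependent tags only when the parent tag's value changes."""
    parent_value = read_fn(parent_tag)            # 1 read every cycle
    if parent_value == state.get(parent_tag):
        return {}                                 # no batch event: skip the children
    state[parent_tag] = parent_value
    readings = {t: read_fn(t) for t in dependent_tags}
    readings[parent_tag] = parent_value
    return readings
```

Between batch events, the 50+ dependent tags cost nothing; the gateway pays for them only in the one cycle where the counter actually moves.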

Data Batching and Transmission

Raw tag reads generate a lot of data. A chiller with 160+ process variables polled every 60 seconds produces roughly 10,000 data points per hour. Transmitting each one individually would be wasteful.

The standard approach is batch collection:

  1. Group readings by timestamp (all tags read in the same polling cycle share a timestamp)
  2. Accumulate groups until the batch reaches a size threshold (e.g., 4000 bytes) or a time threshold (e.g., 60 seconds)
  3. Transmit the complete batch as a single payload to your cloud platform

This reduces the number of network transmissions by 10-100x while maintaining complete data fidelity. The batch can be formatted as JSON for readability or binary for efficiency — binary typically reduces payload size by 40-60% compared to JSON.
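
The accumulate-then-flush logic can be sketched in a few lines. The 4000-byte and 60-second thresholds come from the example above; the JSON shape is illustrative:

```python
import json

class BatchBuffer:
    """Accumulate timestamped groups; flush on size or age threshold."""

    def __init__(self, max_bytes=4000, max_age_s=60):
        self.max_bytes, self.max_age_s = max_bytes, max_age_s
        self.groups, self.opened = [], None

    def add(self, timestamp, readings):
        if self.opened is None:
            self.opened = timestamp       # age is measured from the first group
        self.groups.append({"t": timestamp, "values": readings})

    def ready(self, now):
        size = len(json.dumps({"groups": self.groups}))
        too_old = self.opened is not None and now - self.opened >= self.max_age_s
        return size >= self.max_bytes or too_old

    def flush(self):
        payload = json.dumps({"groups": self.groups})
        self.groups, self.opened = [], None
        return payload
```

Whichever threshold trips first triggers transmission, so a quiet machine still reports at least once a minute while a busy one never builds oversized payloads.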

Binary vs JSON Payloads

JSON format:

{
  "serial": 12345,
  "type": 8,
  "groups": [
    {
      "t": 1709107200,
      "values": [
        {"id": 1, "s": 0, "v": 4520},
        {"id": 2, "s": 0, "v": 3210}
      ]
    }
  ]
}

Binary format packs the same data into a dense byte stream with fixed-width fields — device serial (4 bytes), device type (2 bytes), group count, timestamp (4 bytes), value count, then tag ID + status + value for each reading. No field names, no delimiters, no whitespace.
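
A sketch of that packing, using the field widths described above (the exact layout and count-field sizes are assumptions for illustration):

```python
import struct

def pack_batch(serial, dev_type, groups):
    """Pack [(timestamp, [(tag_id, status, value), ...]), ...] into dense bytes."""
    out = struct.pack("<IHB", serial, dev_type, len(groups))   # serial, type, group count
    for timestamp, values in groups:
        out += struct.pack("<IB", timestamp, len(values))      # timestamp, value count
        for tag_id, status, value in values:
            out += struct.pack("<HBi", tag_id, status, value)  # id, status, value
    return out
```

Packed this way, the two-reading payload from the JSON example fits in 26 bytes versus well over 100 for its JSON equivalent, with no field names or delimiters to transmit.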

For bandwidth-constrained environments (cellular modems, satellite links), binary format can be the difference between a workable system and one that blows through data caps in days.

Network Design Considerations

VLAN Segmentation

EtherNet/IP traffic should live on its own VLAN, separate from IT traffic. A recommended architecture:

VLAN 10: Corporate IT (email, web, file shares)
VLAN 20: Control Network (PLC ↔ HMI, PLC ↔ I/O)
VLAN 30: IIoT/Monitoring (edge gateways → cloud)
VLAN 40: Safety Network (safety PLCs, E-stops)

The edge gateway sits on both VLAN 20 (to read from PLCs) and VLAN 30 (to transmit to the cloud), acting as a controlled bridge point.

Firewall Rules

At minimum:

  • Allow TCP 44818 (EtherNet/IP explicit messaging) from gateway to PLCs
  • Allow UDP 2222 (implicit messaging) only if needed
  • Block all inbound connections to PLCs from VLAN 30
  • Allow outbound MQTT/HTTPS from gateway to cloud

Connection Recovery

Network disruptions happen. A robust edge gateway handles them gracefully:

  1. Detect disconnection within 2-5 seconds (TCP keepalive or application-level heartbeat)
  2. Buffer data locally during outage (store-and-forward pattern)
  3. Reconnect with backoff — don't hammer a recovering PLC with connection attempts
  4. Resume normal operation once connection is re-established, flushing buffered data
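
Step 3's backoff schedule is simple to generate. Base delay, growth factor, and cap are assumed values; tune them to how quickly your PLC recovers:

```python
def backoff_delays(base_s=1.0, factor=2.0, cap_s=60.0, max_attempts=10):
    """Exponentially growing reconnect delays, capped so retries never stop entirely."""
    delay, delays = base_s, []
    for _ in range(max_attempts):
        delays.append(delay)
        delay = min(delay * factor, cap_s)   # grow, but never beyond the cap
    return delays

# backoff_delays() -> [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0, 60.0, 60.0]
```

The cap matters: without it, a long outage would push the retry interval so high that the gateway stays offline long after the PLC comes back.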

The buffering capacity determines how long you can survive an outage without data loss. A 500KB buffer holding binary-format batches can store 2-4 hours of process data for a typical machine.

Where machineCDN Fits

machineCDN's edge infrastructure handles all of this complexity — EtherNet/IP tag resolution, intelligent polling intervals, change detection, dependent tag cascading, data batching, and store-and-forward buffering. It supports both Micro800 and CompactLogix controllers out of the box, with automatic protocol and device detection.

For plant engineers, this means you configure your tag maps, specify your intervals, and let the platform handle the rest — connection management, data format optimization, and reliable delivery to the cloud.

Key Takeaways

  1. Use explicit messaging for data collection — implicit is for real-time control, not analytics
  2. Batch your reads — one request for 10 contiguous registers beats 10 individual requests
  3. Match poll intervals to data dynamics — alarms every second, temperatures every minute, serial numbers every hour
  4. Plan for connection limits — reuse connections, multiplex reads, don't over-poll
  5. Design for disconnection — store-and-forward buffering is non-negotiable in production
  6. Use dependent tags for event-driven data — batch counters, recipe changes, mode transitions

EtherNet/IP is a mature, well-supported protocol. The challenge isn't the protocol itself — it's building reliable, efficient data collection infrastructure on top of it. Understanding CIP's object model, messaging types, and connection management is the foundation for getting that right.