TCP Event Source and Sink

Data Core objects “TCP Event Source” and “TCP Event Sink” provide a mechanism to route Data Core event messages between Data Core Nodes, potentially crossing network boundaries and firewalls.

 A Data Core Node is an instance of Intelligent Plant's
 data-routing and data-access software operating as a Windows
 service.
 
 App Store Connect is a type of Data Core Node pre-configured
 to work with the App Store.

When to use TCP Event Sink and Source

The TCP Event Sink and Source provide a reliable and secure way to relay data across network boundaries and firewalls. Message data is signed, encrypted and relayed on a strictly one-way, data-only channel, and only minimal firewall configuration is required. This makes it particularly suitable for moving data across high-security militarized networks: for example, from a Process Control Network to a Process Information Network to a Business Network.

The diagram below illustrates such a configuration.

 [Diagram: serial A&E feed relayed across three Data Core Nodes and two firewalls]

Alarms & Events (A&E) arrive on a serial feed and are captured by the Serial Port Listener; the data is collected and then relayed. Three distinct Data Core Nodes act as stepping stones across network boundaries. The firewall at each boundary requires only a single rule to allow the downstream flow of secure TCP traffic.

Secure TCP Connection

Connections between the TCP Event Sink and Source are only permitted once an authenticated, encrypted and signed communication channel is established.

  • TCP authentication on Windows Server 2012 or above (recommended) uses the Kerberos protocol and requires a system account (see “System Account Requirements” below).
  • Data signing helps to protect the integrity of the data; namely, it helps the recipient determine whether the data has been tampered with while in transit.
  • Encryption protects the privacy of the data. It helps to ensure that while data is in transit it cannot be deciphered by third parties.
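The signing step described above can be sketched in Python. This is an illustrative HMAC-based sign-and-verify example, not Data Core's actual implementation; the shared key and message format are assumptions for the sketch.

```python
import hmac
import hashlib

# Hypothetical pre-shared key; in Data Core the key material is
# managed internally and is not configured like this.
SHARED_KEY = b"example-shared-secret"

def sign_event(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 signature so the sink can detect tampering."""
    mac = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + mac

def verify_event(message: bytes) -> bytes:
    """Split payload and signature; raise if the data was altered in transit."""
    payload, mac = message[:-32], message[-32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("event message failed integrity check")
    return payload

signed = sign_event(b'{"tag": "A&E", "value": 1}')
assert verify_event(signed) == b'{"tag": "A&E", "value": 1}'
```

A real deployment would combine this with encryption (for privacy) as the bullet above notes; signing alone only proves integrity, not confidentiality.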

Resilient Data Transfer

The TCP Event Source and Sink support resilient data transfer.

The diagram below includes a resilient flow, where transmission is guaranteed, and an optional fast flow for cases where speed is essential.

To configure a resilient flow, the first step is to configure a local collector (this is essential if data is arriving on an ephemeral flow). The Big Data Components can be utilised to capture and save the data efficiently to a local database. The TCP Out component is then chained to the Big Data store.
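The local-collection idea can be illustrated with a minimal sketch: persist each event to a local store before any relay attempt, so nothing is lost if the downstream link drops. The table and column names here are illustrative only; the Big Data Components use their own storage.

```python
import sqlite3

# In-memory database stands in for the local Big Data store.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events ("
    "id INTEGER PRIMARY KEY, payload TEXT, relayed INTEGER DEFAULT 0)"
)

def collect(payload: str) -> None:
    """Persist an incoming event before any relay attempt is made."""
    conn.execute("INSERT INTO events (payload) VALUES (?)", (payload,))
    conn.commit()

def pending() -> list:
    """Events still awaiting a positive acknowledgement downstream."""
    return [row[0] for row in conn.execute(
        "SELECT payload FROM events WHERE relayed = 0 ORDER BY id")]

collect('{"alarm": "HH", "tag": "FIC-101"}')
assert pending() == ['{"alarm": "HH", "tag": "FIC-101"}']
```

Because events are written locally first, a relay outage only delays delivery: the TCP Out stage can drain `pending()` once the link returns.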

Next, the TCP Event Sink is configured with the “Check Response” property set to true. This acts as a guaranteed delivery mechanism - repeating attempts to relay data downstream until a positive acknowledgement is received.

Local data collection and acknowledgement verification will introduce a lag on the data relay process. On the fast flow, the TCP components are connected directly to the data source (before local collection) and configured to “fire-and-forget” by setting the “Check Response” property to false. Each event passes through both flows and is consolidated at the final destination.
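The contrast between the two flows can be sketched as a single sender whose behaviour is switched by a check-response flag. The `send` callable and retry policy are assumptions for illustration; the actual “Check Response” mechanism is internal to the TCP Event Sink.

```python
import time

def relay(event: dict, send, check_response: bool, retries: int = 5) -> bool:
    """Sketch of the 'Check Response' behaviour.

    check_response=True  -> resilient flow: retry until the sink
                            returns a positive acknowledgement.
    check_response=False -> fast flow: fire-and-forget, no guarantee.
    `send` is a hypothetical transport callable returning True on an ack.
    """
    if not check_response:
        send(event)  # fast flow: one attempt, minimum latency
        return True
    for attempt in range(retries):
        if send(event):  # resilient flow: wait for positive ack
            return True
        time.sleep(0.01 * (attempt + 1))  # back off before retrying
    return False
```

Running both modes against the same unreliable transport shows the trade-off directly: the resilient flow keeps retrying until an ack arrives, while the fast flow returns immediately after a single attempt.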

System Account Requirements

For TCP authentication, the TCP Client supplies credentials for the TCP Server to verify. This could be a service account available on a common domain or, if the Data Core Nodes are on separate networks, a local Windows account defined on the computer hosting the TCP Event Source (as in the picture below).

The username and password are then supplied in the TCP Event Sink configuration.

NB. Sensitive Data Core configuration is encrypted.

data_core/tcp.txt · Last modified: 2020/08/07 10:31 by su