Data Connectors

Data Connector introduction

This guide dives into the details of Data Connectors. If you're looking for a general orientation on what a Data Connector is and how it works, please see:

An Introduction to Data Connectors

Using a Data Connector is the easiest and most reliable way to get your sensor data out of DT Cloud and into an external service or database for further storage, processing, and analysis.

Table of contents:

  1. Creating a Data Connector
  2. What is sent?
  3. Configuring a Data Connector
  4. Receiving events

Before diving into how to create and configure a Data Connector, there are a few things worth keeping in mind when working with Data Connectors:

Multiple Projects Data Connectors are created and configured on a per-Project basis. If you want multiple Projects to send data to the same Endpoint URL, you have to create one Data Connector per Project.
Access control To create a Data Connector, your User (or Service Account) has to be a Project Administrator. To modify a Data Connector, it’s enough to be a Project Developer.
API access In the API, you may find Data Connector related methods under the /projects/{project}/dataconnectors endpoint. See the Data Connector API in the API Reference.
Multiple Data Connectors A Project can have several Data Connectors. If it does, all events are sent to all active Data Connectors.
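As a sketch of the API access noted above, listing a Project's Data Connectors could look like the following in Node.js. The base URL and the dataConnectors response field are assumptions based on the API Reference (verify them there), and the access token is a placeholder.

```javascript
// Sketch: listing a Project's Data Connectors over the REST API.
// Assumes Node 18+ (global fetch). BASE and the `dataConnectors`
// response field should be verified against the API Reference.
const BASE = 'https://api.disruptive-technologies.com/v2'

// Build the endpoint path described above.
function dataConnectorsEndpoint(projectId) {
  return `${BASE}/projects/${projectId}/dataconnectors`
}

// accessToken is a placeholder for a valid token for your Service Account.
async function listDataConnectors(projectId, accessToken) {
  const res = await fetch(dataConnectorsEndpoint(projectId), {
    headers: { Authorization: `Bearer ${accessToken}` },
  })
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`)
  const body = await res.json()
  return body.dataConnectors
}
```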


Creating a Data Connector

To create a Data Connector, follow these steps:

  1. Navigate to your Project in DT Studio, locate Data Connectors in the main menu, and then press Add Data Connector

  2. Fill in the mandatory fields:

    Data Connector display name: A name that describes the Data Connector

    Endpoint URL: The URL to which the Data Connector should send all events.

    Trying it out without your own server

    If you just want to test how the Data Connector works, you can use a free third-party request-inspection service instead of your own server.

    Such services typically require no configuration or accounts; just do the following:

    1. First, set the Endpoint URL to the URL provided by the service (replacing UNIQUE-NAME-HERE with something unique to you).
    2. Then, after the Data Connector has been set up, and some sensor data has been sent, you can see the events in real time by opening the service's inspection page and pressing the raw tab.

    If the service reports that it hasn't received anything yet, make sure that you see at least a few successes on the Data Connector (step 4) and then refresh again.

    Please note that Disruptive Technologies is not affiliated with any such service and that data sent in this way is publicly available. The service is simply a free, hosted stand-in for the role your own server would fill.

  3. Ignore the rest of the configuration options for now and instead scroll down and press SAVE NEW DATA CONNECTOR

  4. There we go!

    All new events from all sensors and Cloud Connectors in this Project will now be sent to the Endpoint URL.

    After your sensors have generated a few events, you should be able to see something like this if you open the Data Connector again:


What is sent?

Each event is sent as an HTTPS POST request with a JSON-encoded body to the Endpoint URL.

An example of a touch event is shown below.

  "event": {
    "eventId": "bboqciu55u1g00c0g9n0",
    "targetName": "projects/bbbk89v86c6000c19pbg/devices/bapo55k1hbj000f5l0ig",
    "eventType": "touch",
    "data": {
      "touch": {
        "updateTime": "2018-05-08T13:29:47.543486780Z"
    "timestamp": "2018-05-08T13:29:47.543483560Z"
  "labels": {
    "name": "Coffee machine service button"

Please see the Event article to read more about the anatomy of an event and what different types of events there are.

Configuring Data Connectors

A Data Connector with the default configuration works in a lot of cases. However, there are a few reasons to change it, such as when:

  • You only want to send specific types of events
  • You want to sign each event with a secret
  • You want to include sensor meta-data with each event
  • You want to add custom HTTP headers

Enable and disable

It is possible to enable and disable the Data Connector by toggling the Data Connector enabled switch (saving is required by pressing Update Data Connector).

Disabled Data Connectors have no at-least-once guarantee

When a Data Connector is disabled, undelivered events and events generated after it was disabled will not be sent.

Re-enabling the Data Connector will not backfill data from the period it was disabled. This means that the only way to programmatically fetch events generated during the time it was disabled is via the REST API.

Events to include

The default configuration is to send all events. By unchecking Forward All Events, you can instead select only the specific events that should be sent.

The following image illustrates how it could look if you only want to send the events containing sensor data (temperature readings, touch events, etc.).


For a description of all the different events, see the Events article.

Including Labels

If you want to include Labels that you have set on your sensors or Cloud Connectors with each event sent by the Data Connector, then you have to explicitly add these to the Data Connector.

A few examples where including Labels in each event may be useful are:

  • When you have added labels containing information about where the devices are installed
  • When you have labels containing a custom ID

To include labels, add them to INCLUDE SENSOR & CLOUD CONNECTOR LABEL DATA.


The image above adds two custom labels, location and custom-id, with each event.

Note that the name label is always included by default when you create the Data Connector.
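With a configuration like the one above, the labels object of each event would look something like the following (the location and custom-id values are hypothetical):

```json
"labels": {
  "name": "Coffee machine service button",
  "location": "3rd floor kitchen",
  "custom-id": "cm-0042"
}
```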

Adding HTTP headers

In some cases, you might need to include your own HTTP headers with each request the Data Connector makes. A typical use-case would be to include additional authentication information that is needed on the receiving side.

Custom HTTP headers can be added via the CUSTOM HTTP REQUEST HEADERS option.


The image above shows how it could look if you have added your own header called custom-token. In that case, an HTTP header named custom-token (with the value shown above) would be included in every request the Data Connector makes to the Endpoint URL.
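On the receiving side, such a header can serve as a simple shared-secret check. A minimal sketch (the expected value is a placeholder; note that Node.js lower-cases incoming header names):

```javascript
// Sketch: reject requests that do not carry the expected custom-token
// header. EXPECTED_TOKEN is a placeholder for your own secret value.
const EXPECTED_TOKEN = 'replace-with-your-own-value'

function isAuthorized(headers) {
  // Node.js exposes incoming HTTP header names in lower case.
  return headers['custom-token'] === EXPECTED_TOKEN
}
```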

Signing events

It is possible to sign each sent event with a secret key. By doing this, you can make sure that each event originated from your Data Connector and that the data has not been tampered with.

Highly recommended

It is highly recommended to use the Signature secret, described below, in production code.

This process consists of two parts:

  1. Add a secret key to the Signature secret of your Data Connector.
  2. In the code running at the Endpoint URL, verify the checksum of each incoming request using the same secret key.

Signing sent events

To sign all sent events with your secret, add a Signature secret to the Data Connector by filling in the field.


Verifying received events

When the Data Connector is configured with a Signature secret, it will automatically add an HTTP header called x-dt-signature to each event.

The x-dt-signature header contains a JWT (JSON Web Token), which in turn contains a checksum field. The JWT is signed with the configured Signature secret before it is sent.

To verify the received requests at the Endpoint URL, you need to:

  1. Extract the JWT from the HTTP header x-dt-signature of the received request,
  2. Verify the JWT's signature with the Signature secret
  3. Calculate a SHA1 checksum over the entire request body
  4. Compare the body checksum with the checksum contained in the JWT
  5. If these checksums are identical, you can be assured that the event has not been tampered with and that it originated from your Data Connector

Below is a code snippet (in JavaScript) that illustrates how this process could look on the receiving side.

const jwt = require('jsonwebtoken')
const crypto = require('crypto')

// Extract and verify the JWT from the x-dt-signature header, using the
// same Signature secret that is configured on the Data Connector.
const token = request.headers['x-dt-signature']
const jwtPayload = jwt.verify(token, SECRET) // throws if the signature is invalid

// Checksum the raw (unparsed) request body and compare it with the
// checksum carried inside the verified JWT.
const hash = crypto.createHash('sha1')
hash.update(rawRequestBody) // rawRequestBody: the request body exactly as received
const validRequest = (jwtPayload.checksum === hash.digest('hex'))

Receiving events

When you receive an event at the Endpoint URL from a Data Connector, you typically process it in some way that adds value. This could be saving the data to a database, putting it on an event queue, or doing something else that is use-case specific.

This section summarizes some of the most important things to keep in mind when integrating with a Data Connector. 

Acknowledging received events

A request-reply flow on the Endpoint URL should be implemented as follows:

  1. Your endpoint receives an HTTPS POST event
  2. Your service processes the data. This could involve saving the data into a database or maybe forwarding it to the next service inside your cloud.
  3. Your service replies to the event request with a 200 OK response

What is important to note here is that your service should never return an HTTP 200 OK response before it is done processing the event. This is because when our cloud receives that 200 OK, the event is taken off the internal Data Connector queue and checked off as delivered.
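The request-reply flow above can be sketched by isolating the decision of when to reply in a small helper; store() is a hypothetical stand-in for whatever confirms persistence (a database write, a queue publish, and so on):

```javascript
// Sketch: decide the HTTP status the endpoint should reply with.
// store() is a hypothetical stand-in for your own persistence step;
// 200 is only returned once it has resolved successfully.
async function processEvent(rawBody, store) {
  try {
    const event = JSON.parse(rawBody)
    await store(event) // wait until the event is confirmed saved...
    return 200         // ...and only then acknowledge it
  } catch (err) {
    return 500         // any failure: the Data Connector will retry
  }
}
```

Wire this into your HTTP handler of choice and reply with the returned status code.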

The Golden Rule

Do not reply 200 OK until your database, event queue (or whatever it may be) has confirmed that the event has been saved or added.

It is also important to note that the at-least-once guarantee is just that. An event could be delivered more than once in some circumstances, and it is up to the receiving end to handle this.

The at-least-once guarantee

Every event received by DT Cloud is put in a dedicated per-Data Connector queue. Messages are removed from this queue once acknowledged, or once they are older than 12 hours. This means that if your endpoint goes offline for a while, you will still receive the data when it comes back up again, because the Data Connector will retransmit each message for up to 12 hours.

An important side effect of this delivery guarantee is that, under certain conditions, you may receive duplicates of the same event. This will be rare, but you should make sure that your receiving code can handle this.
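Since every event carries a unique eventId, duplicate handling can be as simple as tracking the IDs you have already processed. A minimal in-memory sketch (a production system would use a persistent store, typically with a TTL):

```javascript
// Sketch: in-memory deduplication keyed on the event's eventId field.
const seenEventIds = new Set()

function isDuplicate(event) {
  if (seenEventIds.has(event.eventId)) return true
  seenEventIds.add(event.eventId)
  return false
}
```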

Retry policy

The Data Connector will retry anytime it does not receive a successful (HTTP 200 OK) response from your endpoint.

The retry interval is calculated using an exponential back-off policy, meaning that the resend interval increases for each failure.

The formula to calculate exponential back-off retries is as follows:

{initial_interval} * 2 ^{retry_count - 1}
  • 1st failed attempt = 8 sec until retry
  • 2nd failed attempt = 16 sec until retry
  • 3rd failed attempt = 32 sec until retry
  • 6th failed attempt = 256 sec until retry
  • ...

The initial retry interval is 8 sec and the maximum retry interval is 1 hour.
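Putting the formula and the cap together, the retry interval can be sketched as:

```javascript
// Sketch: the documented exponential back-off, capped at one hour.
const INITIAL_INTERVAL = 8 // seconds
const MAX_INTERVAL = 3600  // 1 hour

function retryInterval(retryCount) {
  return Math.min(INITIAL_INTERVAL * 2 ** (retryCount - 1), MAX_INTERVAL)
}
```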

For very slow endpoints, the minimum retry interval will be 4x the response time.