How to use the Raptor Streaming API

Mathias Kolind

Raptor Services provides an API that supports streaming data into the Raptor Services Customer Data Platform (CDP) through the Data Manager. The API is intended for server-to-server streaming.

The endpoint is customer specific and is split into different streams by sending a header that states the streamId. A streamId is defined by setting up a stream-dataflow in the Data Manager. This is also where you set up the schema of the input data and define transformations to match the schema of the CDP.

Once the schema is settled, you may only add new columns; existing columns cannot be deleted or changed.

If you need any schema change other than adding new columns to the input schema, create a new stream-dataflow that points to the same destination.

Note: It is advised to consider the streamId a secret, like an API key or password.


The endpoint for streaming is: {accountId}

The accountId (four or five digits) is found in the Raptor Control Panel.


Method

  • POST


Headers

  • x-streamid: {streamId} (a GUID provided by the Data Manager)
  • Content-Type: application/json
  • x-isdraft: set this header to true while the dataflow is in draft mode; remove the header when pushing the dataflow into production


Body

  • a JSON array of objects
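Putting the method, headers, and body together, a request could be built roughly like this. This is a sketch only: the base URL, accountId, streamId, and field names below are hypothetical placeholders, not values from Raptor.

```python
import json
import urllib.request

# Hypothetical values; replace with your own accountId and streamId.
ACCOUNT_ID = "12345"  # four or five digits, from the Raptor Control Panel
STREAM_ID = "00000000-0000-0000-0000-000000000000"  # GUID from the Data Manager
# Placeholder base URL; use the actual streaming endpoint provided by Raptor.
URL = "https://streaming.example.com/" + ACCOUNT_ID

def build_request(rows, draft=False):
    """Build a POST request whose body is a JSON array of objects."""
    headers = {
        "x-streamid": STREAM_ID,
        "Content-Type": "application/json",
    }
    if draft:
        # Only while the dataflow is in draft mode; remove in production.
        headers["x-isdraft"] = "true"
    body = json.dumps(rows).encode("utf-8")
    return urllib.request.Request(URL, data=body, headers=headers, method="POST")

req = build_request([{"Email": "jane@example.com", "Name": "Jane"}], draft=True)
# Sending is then a plain call: urllib.request.urlopen(req)
```

Keeping the streamId out of source code (for example in an environment variable) is consistent with treating it as a secret.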



Supported datatypes

  • String
  • Boolean (true/false)
  • Number
  • DateTime (ISO 8601)
  • Array of objects
  • Array of values
  • Null (for deleting a value in a property)
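A single entry could exercise all of these datatypes at once. The field names below are illustrative only, not part of any real schema:

```python
import json

# Illustrative entry covering each supported datatype (field names are made up).
entry = {
    "Name": "Jane Doe",                    # String
    "IsSubscribed": True,                  # Boolean
    "Age": 42,                             # Number
    "SignupDate": "2023-05-01T12:30:00Z",  # DateTime (ISO 8601)
    "Orders": [{"Sku": "A-1", "Qty": 2}],  # Array of objects
    "Tags": ["news", "offers"],            # Array of values
    "Nickname": None,                      # Null deletes the existing value
}

payload = json.dumps([entry])  # the body is always a JSON array of objects
```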

Supported schema types

  • Person data
  • Interaction data


Remarks about updating data

The API only supports adding and updating data. To delete a Person Data row, use the GDPR endpoint (contact Raptor to get access to it). It is currently not possible to delete an Interaction Data row.

Interaction Data is always appended. Person Data is always upserted (updated if the person exists, otherwise inserted).

When Person Data is updated, it either appends to or overwrites the existing data.

When appending, new data rows update matching rows in the dataset already in the CDP or are added to it; any other existing rows are left untouched. When overwriting, the original dataset is completely removed and replaced with the new data.
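The difference between the two modes can be sketched in a few lines. This is an illustration of the semantics described above, not Raptor's implementation; the key column and sample rows are hypothetical:

```python
def apply_append(existing, incoming, key="Email"):
    """Append mode: incoming rows update matching rows or are added; others stay."""
    merged = {row[key]: row for row in existing}
    for row in incoming:
        merged[row[key]] = {**merged.get(row[key], {}), **row}
    return list(merged.values())

def apply_overwrite(existing, incoming):
    """Overwrite mode: the original dataset is replaced entirely."""
    return list(incoming)

cdp = [{"Email": "a@x.com", "Name": "Ann"}, {"Email": "b@x.com", "Name": "Bob"}]
update = [{"Email": "a@x.com", "Name": "Anna"}]

appended = apply_append(cdp, update)      # Ann becomes Anna; Bob is untouched
overwritten = apply_overwrite(cdp, update)  # only Anna remains
```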

Status codes (of the response)

200 OK. Data is acknowledged and will eventually be added to the system (if the schema is correct)

400 Bad Request (the data is not valid JSON and/or the Content-Type header is missing)

401 Unauthorized (the streamId or accountId is invalid or missing)

408 Timeout (try again)

413 Payload Too Large (send fewer entries per request)
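A caller might map these codes to actions along the following lines. The policy is an assumption for illustration, not prescribed by the API:

```python
def classify(status):
    """Map a response status code to a coarse follow-up action (illustrative)."""
    if status == 200:
        return "done"         # acknowledged; transformation happens later
    if status in (400, 401):
        return "fix-request"  # invalid JSON/headers, or bad streamId/accountId
    if status == 408:
        return "retry"        # timeout: try again, ideally with backoff
    if status == 413:
        return "split-batch"  # payload too large: send fewer entries per request
    return "inspect"          # anything else deserves a closer look
```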




FAQ

Q: Can I really send any JSON?
A: Yes. When setting up the stream-dataflow, you settle on the schema of the data you are sending in. This schema must be obeyed going forward.

Q: What happens with data that doesn't have a correct schema?
A: It is acknowledged by the endpoint (status code 200 OK), but it might fail when the data is transformed to match the CDP schema. In that case an error is sent to Operational Insights, which can send an email with information about the error.

It is also possible to see samples of failed entries in the Data Manager.

Q: What is considered a failing schema?
A: Wrong datatypes, missing required columns, or empty entries.


Q: What if I send fewer columns than the schema dictates?
A: All columns from the schema must be present, even if they have empty, null, or default values.
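A sender might therefore pad absent columns with null before posting. The schema columns below are hypothetical:

```python
SCHEMA_COLUMNS = ["Email", "Name", "Age"]  # hypothetical input schema

def conform(row):
    # Missing columns become null so the entry still matches the schema.
    return {col: row.get(col) for col in SCHEMA_COLUMNS}

entry = conform({"Email": "jane@example.com"})  # Name and Age filled with null
```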

Q: What if I send more columns than expected?
A: Extra columns are ignored until they are mapped in the Data Manager.


