The simplest way to start publishing to Hermes is to send a POST request to the topic endpoint:

    curl -X POST -H "Content-Type: application/json" \
        http://hermes-frontend/topics/group.topic \
        -d '{"message": "Hello world!"}'
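The same request can be prepared with Python's standard library. This sketch only builds the request object rather than sending it; the host and topic name come straight from the curl example above:

```python
import json
import urllib.request

def build_publish_request(base_url, topic, event):
    """Prepare a POST to the Hermes topic endpoint (not sent here)."""
    return urllib.request.Request(
        f"{base_url}/topics/{topic}",
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_publish_request("http://hermes-frontend", "group.topic",
                            {"message": "Hello world!"})
# Actually sending it would be: urllib.request.urlopen(req)
print(req.full_url)
```
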

Creating group

As the data model describes, topics are grouped into topic groups. If you don't have a group yet, you need to create one by sending a POST request with application/json content type to the groups resource.


The request body must contain a groupName field, which is the name of the group.

Sample request:

    "groupName": "my-group"

Creating topic

Use the Hermes Management REST API to create a topic by sending a POST request with application/json content type to the topics resource:


The request body must contain at least the fields shown in the minimal request below.

Minimal request:

    "name": "",
    "description": "This is my topic",
    "contentType": "JSON",
    "retentionTime": {
        "duration": 1
    "owner": {
        "source": "Plaintext",
        "id": "My Team"

Other options:

| Option | Description | Options | Default value |
|--------|-------------|---------|---------------|
| ack | acknowledgement level | ALL, LEADER | LEADER |
| trackingEnabled | track incoming messages? | - | false |

Request that specifies all available options:

    "name": "myTopic",
    "description": "This is my topic",
    "ack": "LEADER",
    "retentionTime": {
        "duration": 1
    "trackingEnabled": false,
    "contentType": "JSON"

Message format

Each topic has a defined content type that describes the format of data sent to Kafka. At this point, Hermes supports messages sent in JSON and Avro.


JSON

When a topic has its content type set to JSON, it accepts messages in JSON format and stores them as JSON in Kafka.

This mode is dedicated to simple use cases and offers no validation of published messages.


Avro

Avro is the recommended message format for topics in Hermes. It has many advantages over plain JSON: for example, it provides built-in message validation (against a defined schema) and lowers the volume of data sent to Kafka.

Read the detailed documentation for publishing messages in Avro format here.

Response format

Message Id

The response contains a special header: Hermes-Message-Id. This is an event UUID generated by Hermes, which can be used to track how the event flowed through the system.
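On the client side, capturing this header for logging can be sketched as below, assuming the response headers are available as a plain mapping (the lookup is case-insensitive to be safe across HTTP client libraries):

```python
def message_id(headers):
    """Extract the Hermes-Message-Id header, if present (case-insensitive)."""
    for key, value in headers.items():
        if key.lower() == "hermes-message-id":
            return value
    return None

# Example with a placeholder UUID value:
print(message_id({"Hermes-Message-Id": "00000000-0000-0000-0000-000000000000"}))
```
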

Response codes

There are two possible response status codes that represent success:

* 201 Created – the message has been acknowledged by Kafka
* 202 Accepted – the message has been buffered by Hermes and will be delivered to Kafka asynchronously

Failure statuses fall into two groups: 4xx codes indicate client-side errors (e.g. a malformed message), while 5xx codes indicate server-side errors.
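Client-side handling of these codes can be sketched as follows. Treating 202 as "accepted and buffered" follows from the Acknowledgment level section below; interpreting 201 as "fully persisted in Kafka" is an assumption about the success pair:

```python
def classify(status):
    # Assumption: 201 = acknowledged by Kafka, 202 = accepted into the
    # frontend buffer (Hermes keeps retrying delivery to Kafka on its own).
    if status == 201:
        return "persisted"
    if status == 202:
        return "buffered"
    if 400 <= status < 500:
        return "client-error"   # e.g. malformed message; do not retry blindly
    return "server-error"       # transient; safe to retry

print(classify(202))
```
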

Acknowledgment level

Each topic can define the level of acknowledgement (ACK):

* LEADER – wait for an ACK from the partition leader only (the default)
* ALL – wait for an ACK from all replicas

For most topics, leader ACK is enough. This guarantees a roughly 99.999..% reception rate. Only in rare cases, during Kafka cluster rebalancing or node outages, might Kafka confirm that a message was received when it was not actually saved, in which case it will be lost.

What does this mean in practice? The numbers differ per case and are affected by multiple factors, such as how frequently rebalancing takes place on the Kafka clusters, the Kafka version, etc. In our production environment, using leader ACK means we falsely believe a message was received by Kafka once per 20 million events. This is a very rough estimate meant to show the scale; if you need numbers to base your decision on, please conduct your own measurements.

If you need a 100% guarantee that a message was saved, force all replicas to send an ACK. The downside is much longer response times, which also tend to vary a lot. Thanks to Hermes buffering (described in the paragraphs below), we are able to guarantee sane response times to our clients even in ACK-all mode.


The Hermes administrator can set the maximum time for which Hermes will wait for a Kafka acknowledgment. By default it is set to 65ms. After that time, a 202 response is sent to the client; the event is kept in the Kafka producer buffer and its delivery is retried until successful.

This makes Hermes resilient to Kafka malfunctions or hiccups, and lets us guarantee a maximum response time to clients. Moreover, in case of a Kafka cluster failure, Hermes can still receive incoming events and send them once Kafka is back online.

Buffer persistence

By default, events are buffered in memory only. This raises the question of what happens in case of a Hermes node failure (or a forced kill of the process). The Hermes Frontend API exposes callbacks that can be used to implement a persistence model for buffered events.

The default implementation uses OpenHFT ChronicleMap to persist unsent messages to disk. The map structure is continuously persisted to disk, as it is stored in off-heap memory as a memory-mapped file.