Handle Safe indexing events from the Transaction Service and deliver them as HTTP webhooks. This service should be connected to the Safe Transaction Service:
- Transaction Service sends events to RabbitMQ.
- Events service holds a database of services to send webhooks to; some filters, like `chainId` or `eventType`, can be configured.
- Events service connects to RabbitMQ and subscribes to the events. When an event matches the filters for a service, a webhook is posted.
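The filter-matching step described above can be sketched as follows. This is a hypothetical illustration: `WebhookConfig`, `SafeEvent`, and `matches` are invented names, not the service's actual data model.

```typescript
// Hypothetical sketch of the filter-matching step; names and shapes
// are illustrative, not the service's actual implementation.
interface WebhookConfig {
  url: string;
  // Empty arrays mean "no filter": every value matches.
  chainIds: string[];
  eventTypes: string[];
}

interface SafeEvent {
  type: string;
  chainId: string;
  [key: string]: unknown;
}

function matches(event: SafeEvent, config: WebhookConfig): boolean {
  const chainOk =
    config.chainIds.length === 0 || config.chainIds.includes(event.chainId);
  const typeOk =
    config.eventTypes.length === 0 || config.eventTypes.includes(event.type);
  return chainOk && typeOk;
}
```

A consumer would run each incoming event through `matches` for every configured service and POST the event to `config.url` on a match.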
Available endpoints:
- `/health/` -> Check health for the service.
- `/admin/` -> Admin panel to edit database models.
- `/events/sse/{CHECKSUMMED_SAFE_ADDRESS}` -> Server-sent events endpoint. If `SSE_AUTH_TOKEN` is defined, authentication is enabled and the header `Authorization: Basic $SSE_AUTH_TOKEN` must be added to the request.
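Consuming the SSE endpoint might look like the sketch below. The base URL is a placeholder, and `parseSseData` is a simplified parser that only handles `data:` lines, not the full SSE framing:

```typescript
// Minimal sketch of an SSE consumer; BASE_URL is a placeholder and the
// parser below only handles `data:` lines, not full SSE framing.
const BASE_URL = "http://localhost:3000"; // placeholder

function sseHeaders(token?: string): Record<string, string> {
  const headers: Record<string, string> = { Accept: "text/event-stream" };
  if (token) headers["Authorization"] = `Basic ${token}`; // only when SSE_AUTH_TOKEN is set
  return headers;
}

// Extract JSON payloads from the `data:` lines of an SSE chunk.
function parseSseData(chunk: string): unknown[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => JSON.parse(line.slice(5).trim()));
}

async function listen(safeAddress: string, token?: string): Promise<void> {
  const res = await fetch(`${BASE_URL}/events/sse/${safeAddress}`, {
    headers: sseHeaders(token),
  });
  const decoder = new TextDecoder();
  for await (const chunk of res.body as any) {
    for (const event of parseSseData(decoder.decode(chunk))) {
      console.log("event", event);
    }
  }
}
```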
If you want to integrate with the events service, you need to:
- Build a REST API with an endpoint that can receive `application/json` requests (take a look at Events Supported).
- The endpoint needs to answer with an `HTTP 202` status and nothing in the body.
- It should answer as soon as possible, as the events service will time out after 5 seconds by default (configurable via `HTTP_TIMEOUT`). If multiple timeouts are detected, the service will stop sending requests to your endpoint. So you should receive the event, return an HTTP response, and then act upon it.
- Each delivery includes an `X-Delivery-Id` header with a unique UUID. The same UUID is kept across retry attempts for the same event, so you can use it to deduplicate re-deliveries.
- Configuring HTTP Basic Auth on your endpoint is recommended so a malicious user cannot post fake events to your service.
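Under those requirements, a receiving endpoint could look like the sketch below, built on plain `node:http`. The credentials are hypothetical, and a real service would persist seen delivery IDs rather than keep them in memory:

```typescript
import { createServer, IncomingMessage, ServerResponse } from "node:http";

// Hypothetical credentials for HTTP Basic Auth.
const EXPECTED_AUTH = "Basic " + Buffer.from("user:pass").toString("base64");
// In-memory dedup of X-Delivery-Id values; use persistent storage in production.
const seenDeliveryIds = new Set<string>();

function isAuthorized(authHeader: string | undefined): boolean {
  return authHeader === EXPECTED_AUTH;
}

// Returns true the first time a delivery id is seen, false on re-deliveries.
function firstDelivery(deliveryId: string | undefined): boolean {
  if (!deliveryId || seenDeliveryIds.has(deliveryId)) return false;
  seenDeliveryIds.add(deliveryId);
  return true;
}

const server = createServer((req: IncomingMessage, res: ServerResponse) => {
  if (!isAuthorized(req.headers["authorization"])) {
    res.writeHead(401).end();
    return;
  }
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    // Answer immediately with 202 and an empty body...
    res.writeHead(202).end();
    // ...then act on the event, skipping re-deliveries.
    if (firstDelivery(req.headers["x-delivery-id"] as string | undefined)) {
      const event = JSON.parse(body);
      console.log("processing", event.type);
    }
  });
});
// server.listen(8000);
```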
Some parameters are common to every event:
- `address`: Safe address.
- `type`: Event type.
- `chainId`: Chain id.

```json
{
  "address": "<Ethereum checksummed address>",
  "type": "NEW_CONFIRMATION",
  "owner": "<Ethereum checksummed address>",
  "safeTxHash": "<0x-prefixed-hex-string>",
  "chainId": "<stringified-int>"
}
```

```json
{
  "address": "<Ethereum checksummed address>",
  "type": "EXECUTED_MULTISIG_TRANSACTION",
  "safeTxHash": "<0x-prefixed-hex-string>",
  "to": "<Ethereum checksummed address>",
  "data": "<0x-prefixed-hex-string>" | null,
  "failed": "true" | "false",
  "txHash": "<0x-prefixed-hex-string>",
  "chainId": "<stringified-int>"
}
```

```json
{
  "address": "<Ethereum checksummed address>",
  "type": "PENDING_MULTISIG_TRANSACTION",
  "safeTxHash": "<0x-prefixed-hex-string>",
  "to": "<Ethereum checksummed address>",
  "data": "<0x-prefixed-hex-string>" | null,
  "chainId": "<stringified-int>"
}
```

```json
{
  "address": "<Ethereum checksummed address>",
  "type": "DELETED_MULTISIG_TRANSACTION",
  "safeTxHash": "<0x-prefixed-hex-string>",
  "chainId": "<stringified-int>"
}
```

```json
{
  "address": "<Ethereum checksummed address>",
  "type": "INCOMING_ETHER" | "OUTGOING_ETHER",
  "txHash": "<0x-prefixed-hex-string>",
  "value": "<stringified-int>",
  "chainId": "<stringified-int>"
}
```

```json
{
  "address": "<Ethereum checksummed address>",
  "type": "INCOMING_TOKEN" | "OUTGOING_TOKEN",
  "tokenAddress": "<Ethereum checksummed address>",
  "txHash": "<0x-prefixed-hex-string>",
  "value": "<stringified-int>",
  "chainId": "<stringified-int>"
}
```

```json
{
  "address": "<Ethereum checksummed address>",
  "type": "INCOMING_TOKEN" | "OUTGOING_TOKEN",
  "tokenAddress": "<Ethereum checksummed address>",
  "txHash": "<0x-prefixed-hex-string>",
  "tokenId": "<stringified-int>",
  "chainId": "<stringified-int>"
}
```

```json
{
  "address": "<Ethereum checksummed address>",
  "type": "MESSAGE_CREATED" | "MESSAGE_CONFIRMATION",
  "messageHash": "<0x-prefixed-hex-string>",
  "chainId": "<stringified-int>"
}
```

```json
{
  "type": "REORG_DETECTED",
  "blockNumber": "<int>",
  "chainId": "<stringified-int>"
}
```

```json
{
  "address": "<Ethereum checksummed address>" | null,
  "type": "NEW_DELEGATE" | "UPDATED_DELEGATE" | "DELETED_DELEGATE",
  "delegate": "<Ethereum checksummed address>",
  "delegator": "<Ethereum checksummed address>",
  "label": "<string>",
  "expiryDateSeconds": "<int>" | null,
  "chainId": "<stringified-int>"
}
```

Not currently.
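A TypeScript consumer could model the payloads above as a discriminated union on `type`. This is a hypothetical modeling: the field names follow the JSON samples, but the type names are invented, and the two token payloads are merged into one variant with optional `value`/`tokenId`:

```typescript
// Hypothetical modeling of the webhook payloads; field names follow the
// JSON samples above, type names are invented for illustration.
interface BaseEvent {
  chainId: string; // stringified int
}

type SafeWebhookEvent =
  | (BaseEvent & { type: "NEW_CONFIRMATION"; address: string; owner: string; safeTxHash: string })
  | (BaseEvent & {
      type: "EXECUTED_MULTISIG_TRANSACTION";
      address: string;
      safeTxHash: string;
      to: string;
      data: string | null;
      failed: "true" | "false";
      txHash: string;
    })
  | (BaseEvent & { type: "PENDING_MULTISIG_TRANSACTION"; address: string; safeTxHash: string; to: string; data: string | null })
  | (BaseEvent & { type: "DELETED_MULTISIG_TRANSACTION"; address: string; safeTxHash: string })
  | (BaseEvent & { type: "INCOMING_ETHER" | "OUTGOING_ETHER"; address: string; txHash: string; value: string })
  // ERC-20 transfers carry `value`, ERC-721 transfers carry `tokenId`.
  | (BaseEvent & { type: "INCOMING_TOKEN" | "OUTGOING_TOKEN"; address: string; tokenAddress: string; txHash: string; value?: string; tokenId?: string })
  | (BaseEvent & { type: "MESSAGE_CREATED" | "MESSAGE_CONFIRMATION"; address: string; messageHash: string })
  | (BaseEvent & { type: "REORG_DETECTED"; blockNumber: number })
  | (BaseEvent & {
      type: "NEW_DELEGATE" | "UPDATED_DELEGATE" | "DELETED_DELEGATE";
      address: string | null;
      delegate: string;
      delegator: string;
      label: string;
      expiryDateSeconds: number | null;
    });

// Narrowing on `type` gives typed access to each variant's fields.
function describe(event: SafeWebhookEvent): string {
  switch (event.type) {
    case "EXECUTED_MULTISIG_TRANSACTION":
      return `executed ${event.safeTxHash} (failed=${event.failed})`;
    case "REORG_DETECTED":
      return `reorg at block ${event.blockNumber}`;
    default:
      return event.type;
  }
}
```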
No, this service is only meant to be run by companies running the Safe Transaction Service. You need to develop your own endpoint, as explained in How to integrate with the service.
Indexing can take 1-2 minutes in the worst cases and less than 15 seconds in good cases.
Yes. The service retries up to `HTTP_MAX_RETRIES` times (default: 2) with exponential backoff on transient network errors (e.g. `ECONNRESET`, `ETIMEDOUT`) and on 429/5xx responses. Every delivery attempt for the same event shares the same `X-Delivery-Id` header value, so your endpoint can use it to deduplicate re-deliveries.
If our systems go down, messages are stored in our queue, and delivery resumes when the systems are back up (unless the queue overflows because the services have been down for a while, in which case some old messages are discarded).
Yes, and we can configure the chains you want to get events from.
You get webhooks for all Safes; this currently cannot be configured.
No, we would like to keep webhook information minimal. Doing queries to the service afterwards is fine, but we are not planning on making webhooks the source of information for the service. The idea of webhooks is to remove the need to poll the services.
Every webhook request includes an `X-Delivery-Id` header containing a UUID that is unique per delivery and stable across retries. You can use it to implement idempotent processing on your end.
How do you handle confirmed/unconfirmed blocks and reorgs? When do you send an event: after waiting for confirmations, or immediately? If a transaction is removed due to a chain reorg, would you still send the event before it is confirmed?
We don't send notifications when a reorg happens. We send events as soon as we detect them, without waiting for confirmations. So you should always go to the API and make sure the data is what you expect. This events feature is built for notifying, so people don't have to HTTP-poll our API, but you shouldn't take the events as a source of truth, only as a signal to come back to the API (that's why we don't send a lot of information in the events).
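That "signal, then verify" pattern could look like the sketch below. The base URL is a placeholder you would resolve per chain, and the response handling is an assumption; the `/api/v1/multisig-transactions/{safeTxHash}/` path is part of the public Transaction Service API, but check its documentation for the exact response shape:

```typescript
// Sketch: treat the webhook as a trigger and fetch the canonical state
// from the Safe Transaction Service. TX_SERVICE_BASE is a placeholder.
const TX_SERVICE_BASE = "https://safe-transaction-mainnet.safe.global";

function multisigTxUrl(base: string, safeTxHash: string): string {
  return `${base}/api/v1/multisig-transactions/${safeTxHash}/`;
}

async function onExecutedEvent(safeTxHash: string): Promise<void> {
  // The event only signals that something happened; verify before acting.
  const res = await fetch(multisigTxUrl(TX_SERVICE_BASE, safeTxHash));
  if (!res.ok) throw new Error(`lookup failed: ${res.status}`);
  const tx = await res.json();
  if (tx.isExecuted && tx.isSuccessful) {
    // Safe to act on the confirmed transaction here.
  }
}
```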
Node 24 LTS is required.
```bash
$ corepack enable
$ pnpm install --frozen-lockfile
```
Docker Compose is required to run RabbitMQ and Postgres:
```bash
cp .env.sample .env
docker compose up -d
```
```bash
# development
$ pnpm run start

# watch mode
$ pnpm run start:dev

# production mode
$ pnpm run start:prod
```
Note: it's important that the web app is not running during tests, as it can consume messages and tests will fail.
```bash
cp .env.sample .env
```
Simple way:
```bash
bash ./scripts/run_tests.sh
```
Manual way:
```bash
docker compose down
docker compose up -d rabbitmq db db-migrations

# unit tests
pnpm test

# e2e tests
pnpm run test:e2e

# test coverage
pnpm run test:cov
```
All configuration is done through environment variables. See `.env.sample` for a full template.
| Variable | Required | Default | Description |
|---|---|---|---|
| `DATABASE_URL` | Yes | — | PostgreSQL connection URL |
| `AMQP_URL` | Yes | — | RabbitMQ connection URL |
| `AMQP_EXCHANGE` | Yes | — | RabbitMQ exchange name |
| `AMQP_QUEUE` | Yes | `safe-events-service` | RabbitMQ queue name |
| `ADMIN_EMAIL` | Yes | — | Admin panel login email |
| `ADMIN_PASSWORD` | Yes | — | Admin panel login password |
| `ADMIN_COOKIE_SECRET` | Yes | — | Secret used to sign admin session cookies |
| `ADMIN_SESSION_SECRET` | Yes | — | Secret used to encrypt admin sessions |
| `ADMIN_WEBHOOK_AUTH` | Yes | — | Bearer token for webhook management endpoints |
| `SSE_AUTH_TOKEN` | No | `""` (disabled) | Base64 token for the SSE endpoint (`Authorization: Basic <token>`). Auth is disabled when empty. |
| `NODE_ENV` | No | — | Set to `production` to disable schema auto-sync and enable production mode |
| `URL_BASE_PATH` | No | `""` | Global URL prefix (e.g. `/v1`) |
| `DATABASE_SSL_ENABLED` | No | `false` | Enable SSL for the database connection |
| `DATABASE_CA_PATH` | No | — | Path to CA certificate file for database SSL |
| `HTTP_TIMEOUT` | No | `5000` | Webhook HTTP client timeout in milliseconds |
| `HTTP_MAX_RETRIES` | No | `2` | Max retry attempts for transient network errors and 5xx/429 responses |
| `DB_HEALTH_CHECK_TIMEOUT` | No | `5000` | Database health check timeout in milliseconds |
| `AMQP_PREFETCH_MESSAGES` | No | `100` | RabbitMQ prefetch message count |
| `WEBHOOK_AUTO_DISABLE` | No | `false` | Auto-disable webhooks that exceed the failure threshold |
| `WEBHOOK_FAILURE_THRESHOLD` | No | `90` | Failure rate percentage (0–100) above which a webhook is auto-disabled |
| `WEBHOOK_HEALTH_MINUTES_WINDOW` | No | `60` | Rolling window in minutes used to compute per-webhook failure rates |
| `LOG_LEVEL` | No | `log` | Log verbosity: `verbose`, `debug`, `log`, `warn`, `error`, `fatal` |
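As an illustration of the defaults above, a consumer of these variables might resolve them as in the sketch below; this mirrors the table, not the service's actual config loader:

```typescript
// Sketch of resolving optional variables with their documented defaults;
// illustrative only, not the service's actual config loader.
function intFromEnv(name: string, fallback: number): number {
  const raw = process.env[name];
  const parsed = raw === undefined || raw === "" ? NaN : Number(raw);
  return Number.isFinite(parsed) ? parsed : fallback;
}

function boolFromEnv(name: string, fallback: boolean): boolean {
  const raw = process.env[name];
  return raw === undefined || raw === "" ? fallback : raw === "true";
}

const config = {
  httpTimeout: intFromEnv("HTTP_TIMEOUT", 5000),
  httpMaxRetries: intFromEnv("HTTP_MAX_RETRIES", 2),
  amqpPrefetch: intFromEnv("AMQP_PREFETCH_MESSAGES", 100),
  webhookAutoDisable: boolFromEnv("WEBHOOK_AUTO_DISABLE", false),
  webhookFailureThreshold: intFromEnv("WEBHOOK_FAILURE_THRESHOLD", 90),
  logLevel: process.env.LOG_LEVEL ?? "log",
};
```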
By default, the local dockerized migrations database will be used (`test` should not be used, as it doesn't use migrations).
To use a custom database for migrations, set the `MIGRATIONS_DATABASE_URL` environment variable.
Remember to add the new database entities to `./src/datasources/db/database.options.ts`:
```bash
bash ./scripts/db_generate_migrations.sh RELEVANT_MIGRATION_NAME
```
This repository contains code developed under two different ownership and licensing regimes, split by a defined cut-over date.
- Up to and including February 16, 2026: code is Copyright (c) Safe Ecosystem Foundation and licensed under the MIT License. The final SEF-owned MIT snapshot is tagged as `sef-mit-final`.
- From February 17, 2026 onward: new development is Copyright (c) Safe Labs GmbH and licensed under the Functional Source License, Version 1.1 (MIT Future License).
Users who require a purely MIT-licensed codebase should base their work on the `sef-mit-final` tag. The historical MIT-licensed code remains MIT and is not retroactively relicensed.
For details, see `LICENSE` and `NOTICE`.
