content/en/altinity-kb-setup-and-maintenance/users_in_keeper.md (15 additions, 15 deletions)
@@ -23,7 +23,7 @@ Before diving into the details, the core concept is:
 - With `user_directories.replicated`, ClickHouse stores the RBAC model in Keeper under a configured path (for example `/clickhouse/access`) and every node watches that path.
 - Each node maintains a local in-memory cache of replicated access entities and updates it via Keeper watch callbacks. As a result, access checks are fast and performed locally in memory, while RBAC modifications depend on Keeper availability and propagation.
 
-Flow of this KB:
+The flow of this article:
 1. Why this model helps.
 2. How to configure it on a new cluster.
 3. How to validate and operate it.
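To make the replicated-storage behavior above concrete, a minimal sketch with illustrative names (`readonly_analyst` on a hypothetical two-node cluster):

```sql
-- On node1: the role is written to Keeper under the configured access
-- path rather than to the local access directory.
CREATE ROLE readonly_analyst;

-- On node2, without ON CLUSTER: the Keeper watch callback has already
-- refreshed the local in-memory cache, so the role is visible here too.
SHOW CREATE ROLE readonly_analyst;
```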
@@ -36,14 +36,14 @@ Flow of this KB:
 In practice, it fans out the query through the distributed DDL queue (also Keeper/ZooKeeper-dependent) to currently known cluster nodes.
 It does not automatically replay old RBAC DDL for replicas/shards added later.
 
-Keeper-backed RBAC solves that:
+Keeper-backed RBAC differences:
 - one shared RBAC state for the cluster;
 - new servers read the same RBAC state when they join;
 - no need to remember `ON CLUSTER` for every RBAC statement.
 
 Mental model: Keeper-backed RBAC replicates access state, while `ON CLUSTER` fans out DDL to currently known nodes.
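A minimal sketch of that contrast, with placeholder names `my_cluster` and `rbac_admin` (the two statements illustrate the alternative models, not a sequence to run together):

```sql
-- Fan-out model: the DDL is queued once for the nodes currently in the
-- cluster; a replica added later never receives it.
CREATE ROLE IF NOT EXISTS rbac_admin ON CLUSTER my_cluster;

-- Replicated-storage model: the entity is stored in Keeper, so every
-- node, including future ones, reads the same state; no ON CLUSTER needed.
CREATE ROLE IF NOT EXISTS rbac_admin;
```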
 
-### 1.1 Pros and Cons
+### 1.1 Pros and Cons of Keeper-backed RBAC
 
 Pros:
 - Single source of truth for RBAC across nodes.
@@ -53,13 +53,13 @@ Pros:
 - Integrates with access-entity backup/restore.
 
 Cons:
-- Writes depend on Keeper availability. `CREATE/ALTER/DROP USER` and `CREATE/ALTER/DROP ROLE`, plus `GRANT/REVOKE`, fail if Keeper is unavailable, while existing authentication/authorization may continue from already loaded cache until restart.
+- Writes depend on Keeper availability. `CREATE/ALTER/DROP USER/ROLE` and `GRANT/REVOKE` fail if Keeper is unavailable, while existing authentication/authorization may continue from the already loaded cache until restart.
 - Operational complexity increases (Keeper health directly affects RBAC operations).
 - Keeper data loss or accidental Keeper path damage can remove replicated RBAC state, and users may lose access; keep regular RBAC backups and test restore procedures.
 - Can conflict with `ON CLUSTER` if both mechanisms are used without guard settings.
 - Invalid/corrupted payload in Keeper can be skipped or be startup-fatal, depending on `throw_on_invalid_replicated_access_entities`.
 - Very large RBAC sets (thousands of users/roles or very complex grants) can increase Keeper/watch pressure.
-- If Keeper is unavailable during server startup and replicated RBAC storage is configured, startup can fail, so you may be unable to log in until startup succeeds.
+- If Keeper is unavailable during server startup and replicated RBAC storage is configured, the server may fail to start.
 
 ## 2. Configure Keeper-backed RBAC on a new cluster
@@ -157,7 +157,7 @@ FROM system.user_directories
 ORDER BY precedence;
 ```
 
-Example expected result (values can vary by version/config; precedence values are relative and order matters):
+Expected result (values can vary by version/config; precedence values are relative and order matters):
 
 ```text
 name type precedence
@@ -173,7 +173,7 @@ FROM system.users
 ORDER BY name;
 ```
 
-Example expected result for SQL-created user:
+Expected result for a SQL-created user:
 
 ```text
 name storage
@@ -226,13 +226,13 @@ For production, prefer configuring this in a profile (for example `default` in `
 
 ## 6. Migrate existing clusters/users
 
-Before switching to Keeper-backed RBAC, treat this as a storage migration.
+Switching to Keeper-backed RBAC should be treated as a storage migration.
 
 **Important:** replay/restore RBAC on one node only. Objects are written to Keeper and then reflected on all nodes.
 
 Key facts before migration:
 - Changing `user_directories` storage or changing `zookeeper_path` does **not** move existing SQL RBAC objects automatically.
-- If path changes, old users/roles are not deleted, but become effectively hidden from the new storage path.
+- If the path changes, old users and roles are not deleted but become effectively hidden from the new storage path.
 - `zookeeper_path` cannot be changed at runtime via SQL.
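Before migrating, it can also help to confirm which Keeper path the replicated directory currently uses. A sketch, assuming your server version exposes the configured path in the `params` column of `system.user_directories`:

```sql
-- Show the configured Keeper path of the replicated access storage.
SELECT name, type, params
FROM system.user_directories
WHERE type = 'replicated';
```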
 
 Recommended high-level steps:
@@ -250,7 +250,7 @@ This path is useful when:
 - Replaying `SHOW ACCESS` output is idempotent only if you handle `IF NOT EXISTS`/cleanup; otherwise prefer restoring into an empty RBAC namespace.
 
 Recommended SQL-only flow:
-1. On source, check where entities are stored (local vs replicated):
+1. On the source, check where the entities are stored (local vs. replicated):
 
 ```sql
 SELECT name, storage FROM system.users ORDER BY name;
@@ -261,19 +261,19 @@ SELECT name, storage FROM system.row_policies ORDER BY name;
 SELECT name, storage FROM system.masking_policies ORDER BY name;
 ```
 
-2. Export RBAC DDL from source:
+2. Export RBAC DDL from the source:
 - simplest full dump:
 
 ```sql
 SHOW ACCESS;
 ```
 
-Save output as SQL (for example `rbac_dump.sql`) in your repo/artifacts.
+Save the output as SQL (for example `rbac_dump.sql`) in your repo/artifacts.
 
 You can also export individual objects with `SHOW CREATE USER/ROLE/...` when needed.
 
-3. Switch config to replicated `user_directories` on target cluster and restart/reload.
-4. Replay exported SQL on one node (without `ON CLUSTER` in replicated mode).
+3. Switch the configuration to replicated `user_directories` on the target cluster and restart/reload.
+4. Replay the exported SQL on one node (without `ON CLUSTER` in replicated mode).
 5. Validate from another node (`SHOW CREATE USER ...`, `SHOW GRANTS FOR ...`).
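For step 5, a hedged validation sketch; `analyst` stands in for one of the replayed users:

```sql
-- Run on a node other than the one where the dump was replayed.
SHOW CREATE USER analyst;
SHOW GRANTS FOR analyst;

-- storage should report the replicated directory, not local_directory.
SELECT name, storage FROM system.users WHERE name = 'analyst';
```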
 
 ### 6.2 Migration with `clickhouse-backup` (`--rbac-only`)
@@ -336,7 +336,7 @@ Operational implication:
 | New replica has no historical users/roles | Team used only `... ON CLUSTER ...` before scaling | Enable Keeper-backed RBAC so new nodes load shared state |
 | `CREATE USER ... ON CLUSTER` throws "already exists in replicated" | Query fan-out + replicated storage both applied | Remove `ON CLUSTER` for RBAC or enable `ignore_on_cluster_for_replicated_access_entities_queries` |
 | `CREATE USER`/`GRANT` fails with Keeper/ZooKeeper error | Keeper unavailable or connection lost | Check `system.zookeeper_connection`, `system.zookeeper_connection_log`, and server logs |
-| RBAC writes still go local though `replicated` exists | `local_directory` remains first writable storage | Use `user_directories replace="replace"` and avoid writable local SQL storage in front of replicated |
+| RBAC writes still go to `local_directory` even though `replicated` is configured | `local_directory` remains the first writable storage | Use `user_directories replace="replace"` and avoid writable local SQL storage in front of `replicated` |
 | Server does not start when Keeper is down; no one can log in | Replicated access storage needs Keeper during initialization | Restore Keeper first, then restart; if needed use a temporary fallback config and keep a break-glass `users.xml` admin |
 | Startup fails (or users are skipped) because of invalid RBAC payload in Keeper | Corrupted/invalid replicated entity and strict validation mode | Use `throw_on_invalid_replicated_access_entities` deliberately: `true` fail-fast, `false` skip+log; fix bad Keeper payload before re-enabling strict mode |
 | Two independent clusters unexpectedly share the same users/roles | Both clusters point to the same Keeper ensemble and the same `zookeeper_path` | Use unique RBAC paths per cluster (recommended), or isolate with Keeper chroot (requires Keeper metadata repopulation/migration) |
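For the Keeper-related rows above, a quick first probe is the connection table the article already references (output columns vary by ClickHouse version):

```sql
-- Which Keeper host this server currently talks to; an error or empty
-- result points at the connectivity problems described in the table.
SELECT * FROM system.zookeeper_connection;
```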