Only the host that acquires this node restores that replicated access storage.
### 10.4 Support in the clickhouse-backup tool
`clickhouse-backup` supports replicated RBAC (`--rbac`) by directly reading and writing Keeper state for replicated access storages.
Its goal is similar to native `BACKUP`/`RESTORE`, but the implementation is different: it does not use ClickHouse's native backup-coordination `repl_access` znodes. Instead, it performs an explicit Keeper subtree dump/restore from the host running the tool.
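The dump/restore approach can be pictured as a recursive walk over the replicated access subtree, serialized one znode per JSONL line. A minimal Python sketch (the `FakeKeeper` client and the record shape are illustrative assumptions, not the tool's actual Go implementation):

```python
import json

def dump_subtree(client, path):
    """Recursively dump a Keeper subtree as a list of {path, value} records."""
    records = [{"path": path, "value": client.get(path)}]
    for child in sorted(client.children(path)):
        records.extend(dump_subtree(client, f"{path}/{child}"))
    return records

def to_jsonl(records):
    """Serialize dump records as JSONL, one znode per line."""
    return "\n".join(json.dumps(r) for r in records)

class FakeKeeper:
    """Dict-backed stand-in for a Keeper client (illustration only)."""
    def __init__(self, nodes):
        self.nodes = nodes
    def get(self, path):
        return self.nodes[path]
    def children(self, path):
        prefix = path.rstrip("/") + "/"
        return {p[len(prefix):].split("/", 1)[0]
                for p in self.nodes if p.startswith(prefix)}

keeper = FakeKeeper({
    "/clickhouse/access": "",
    "/clickhouse/access/uuid": "",
    "/clickhouse/access/uuid/123": "ATTACH USER u1 ...",
})
jsonl = to_jsonl(dump_subtree(keeper, "/clickhouse/access"))
```

Restore is the reverse: replay each JSONL record as a Keeper write under the target replicated access path.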
#### 10.4.1 What is backed up
For `--rbac`, the tool backs up both:
- Local access files (`*.sql`) from the ClickHouse access storage path.
- Replicated access entities from Keeper, for each replicated user directory.
247
+
248
+
Replicated directories are discovered via:
- `SELECT name FROM system.user_directories WHERE type='replicated'`
For each such directory, the tool:
- Resolves its Keeper path from `config.xml` (`/user_directories/<name>/zookeeper_path`).
- Checks that `<zookeeper_path>/uuid` has children.
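The path-resolution step can be sketched as a plain XML lookup against the preprocessed config (the config fragment and helper function are illustrative; the element path follows `/user_directories/<name>/zookeeper_path` as quoted above):

```python
import xml.etree.ElementTree as ET

# Simplified preprocessed-config fragment (illustrative, not a full config)
CONFIG = """
<clickhouse>
  <user_directories>
    <replicated>
      <zookeeper_path>/clickhouse/access</zookeeper_path>
    </replicated>
  </user_directories>
</clickhouse>
"""

def zookeeper_path(config_xml, directory_name):
    """Look up user_directories/<name>/zookeeper_path in the config."""
    root = ET.fromstring(config_xml)
    node = root.find(f"user_directories/{directory_name}/zookeeper_path")
    return node.text if node is not None else None
```

Directories with no resolvable Keeper path (or an empty `<zookeeper_path>/uuid` subtree) would simply have nothing to dump.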
#### 10.4.2 Keeper connection settings

Keeper connection settings are taken from the preprocessed ClickHouse `config.xml`:
- `/zookeeper/node` endpoints
- optional TLS (`secure` plus `/openSSL/client/*`)
- optional digest auth
- optional Keeper root prefix
The tool therefore uses the same Keeper connectivity model as the ClickHouse server configuration.
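Reading those connection settings can be sketched the same way (element names follow the paths listed above; the config fragment itself is a made-up example):

```python
import xml.etree.ElementTree as ET

# Illustrative /zookeeper section of a preprocessed config
CONFIG = """
<clickhouse>
  <zookeeper>
    <node><host>keeper-1</host><port>2181</port></node>
    <node><host>keeper-2</host><port>2181</port></node>
    <root>/prefix</root>
  </zookeeper>
</clickhouse>
"""

def keeper_endpoints(config_xml):
    """Collect host:port endpoints and the optional root prefix from /zookeeper."""
    zk = ET.fromstring(config_xml).find("zookeeper")
    hosts = [f"{n.findtext('host')}:{n.findtext('port', '2181')}"
             for n in zk.findall("node")]
    return hosts, zk.findtext("root")  # root is None when no prefix is set

hosts, root = keeper_endpoints(CONFIG)
```

TLS and digest-auth settings would be picked up the same way from `secure`, `/openSSL/client/*`, and the relevant auth elements.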
#### 10.4.3 Restore behavior in replicated mode
During restore with `--rbac`, the tool:
1. Scans backed-up RBAC (`*.sql` and `*.jsonl`) and resolves conflicts against existing RBAC.
2. Applies the conflict policy:
   - `general.rbac_conflict_resolution`: `recreate` (default) or `fail`
   - `--drop` also forces dropping existing conflicting entries
3. Restores local access files.
4. Restores replicated Keeper data from JSONL files back into replicated access paths.
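The conflict handling in step 2 can be sketched as a small decision function (a hypothetical helper mirroring the documented policy, not the tool's actual code):

```python
def resolve_conflict(entity_exists, policy="recreate", drop_flag=False):
    """Decide what to do with an RBAC entity during restore.

    `recreate` (the default) replaces a conflicting entity, `fail` aborts,
    and the --drop flag forces dropping the existing entry regardless.
    """
    if not entity_exists:
        return "create"
    if drop_flag or policy == "recreate":
        return "drop_and_create"
    if policy == "fail":
        raise RuntimeError("conflicting RBAC entity exists")
    raise ValueError(f"unknown policy: {policy}")
```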
JSONL-to-directory mapping rule:
- If the file name matches `<user_directory_name>.jsonl`, it is restored to that directory.
- If no match is found, it falls back to the first replicated user directory.
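The mapping rule can be sketched as (hypothetical helper; `replicated_dirs` stands for the ordered list of replicated user directory names):

```python
def target_directory(jsonl_name, replicated_dirs):
    """Map a backed-up JSONL file to a replicated user directory.

    <name>.jsonl goes to directory <name>; otherwise fall back to the
    first replicated directory (sketch of the documented rule).
    """
    stem = jsonl_name.removesuffix(".jsonl")
    return stem if stem in replicated_dirs else replicated_dirs[0]
```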
After the local RBAC restore, the tool creates `need_rebuild_lists.mark`, removes the `*.list` files, and restarts ClickHouse (as with configs restore) so that access metadata is rebuilt correctly.
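That final cleanup step can be sketched as follows (demonstrated on a throwaway directory; the real target is the server's access storage path, and the restart of ClickHouse is outside this sketch):

```python
from pathlib import Path
import tempfile

def mark_access_rebuild(access_path):
    """Force ClickHouse to rebuild its access lists on next start:
    drop cached *.list files and create need_rebuild_lists.mark."""
    access = Path(access_path)
    for cached in access.glob("*.list"):
        cached.unlink()
    (access / "need_rebuild_lists.mark").touch()

# demo on a temporary directory standing in for the access path
demo = Path(tempfile.mkdtemp())
(demo / "users.list").touch()
mark_access_rebuild(demo)
```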