Overview
Daemon Mode runs continuously on client machines, automatically backing up devices according to a schedule. Configuration is managed via a YAML file at /etc/xreplicator/agent.yaml (Linux) or C:\ProgramData\xreplicator\agent.yaml (Windows).
Starting the Daemon
```shell
# Start the daemon
sudo systemctl start backup-agent

# Enable automatic startup on boot
sudo systemctl enable backup-agent

# Check status
sudo systemctl status backup-agent

# View logs
sudo journalctl -u backup-agent -f
```

```powershell
# Windows (run as Administrator)
Start-Service XReplicatorAgent
Get-Service XReplicatorAgent
```

Configuration File
Edit the config file, then restart the service to apply changes:
```shell
# Linux
sudo nano /etc/xreplicator/agent.yaml
sudo systemctl restart backup-agent
```

```powershell
# Windows
notepad C:\ProgramData\xreplicator\agent.yaml
Restart-Service XReplicatorAgent
```

Example Agent Configs
- Linux: /docs/configuration/agent-config-linux
- Windows: /docs/configuration/agent-config-windows
Device Configuration
device.path (string, required)
Path to the block device or file to back up.
```yaml
device:
  path: "/dev/vdb"
```

device.paths (array, optional)
Use this instead of device.path when backing up multiple devices.

```yaml
device:
  paths:
    - "/dev/vdb"
    - "/dev/vdc"
```

device.sync_wait_ms (integer, default: 500)
Wait time in milliseconds after filesystem sync before backup. Ensures buffered writes are committed to disk.
| Storage Type | Recommended Value |
|---|---|
| Local SSD/HDD | 500 (default) |
| Network (GCP/AWS EBS) | 2000–3000 |
| High-latency storage | 5000 |
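For instance, an agent backing up a network-attached volume might raise the wait accordingly (an illustrative fragment; the device path and value are examples based on the table above):

```yaml
device:
  path: "/dev/vdb"
  sync_wait_ms: 2500   # network-attached storage per the recommendations above
```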
Storage Configuration
storage.type (string, required)
Storage backend type. Options: "local" or "grpc".
storage.path (string, required if type is "local")
Local directory for the repository. Created automatically if it doesn’t exist.
```yaml
storage:
  type: "local"
  path: "/var/lib/backup/repo"
```

gRPC Storage Options
| Parameter | Type | Default | Description |
|---|---|---|---|
| storage.grpc.server_address | string | — | Server address in host:port format |
| storage.grpc.timeout | duration | "5m" | RPC timeout |
| storage.grpc.max_retries | integer | 3 | Max retry attempts. Use 0 to retry forever; otherwise 1–10. |
For long-running agents, set storage.grpc.max_retries: 0 so the daemon keeps reconnecting after server restarts.
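For example, a long-running agent could combine these options as follows (an illustrative fragment; the server address is a placeholder):

```yaml
storage:
  type: "grpc"
  grpc:
    server_address: "backup.example.com:50051"
    timeout: "5m"
    max_retries: 0   # retry forever so the daemon reconnects after server restarts
```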
TLS Options
| Parameter | Type | Default | Description |
|---|---|---|---|
| storage.grpc.tls.enabled | boolean | false | Enable TLS encryption |
| storage.grpc.tls.cert_file | string | — | Path to client certificate |
| storage.grpc.tls.key_file | string | — | Path to client private key |
| storage.grpc.tls.ca_file | string | — | Path to CA certificate |
| storage.grpc.tls.insecure_skip_verify | boolean | false | Skip server cert verification (development only) |
Never set insecure_skip_verify: true in production environments.
Change Tracking (eBPF)
| Parameter | Type | Default | Description |
|---|---|---|---|
| ebpf.enabled | boolean | true | Enable eBPF change tracking |
| ebpf.block_size_kb | integer | 64 | Block size in KB (must be power of 2: 4–1024) |
| ebpf.dirty_block_min_age | duration | "60s" | Min age of dirty blocks before syncing |
| ebpf.fallback_on_error | boolean | true | Fall back to full scan if tracking fails |
| ebpf.bitmap_persist_path | string | /var/lib/backup/bitmap.db | Path to persist dirty bitmap |
| ebpf.state_persist_path | string | /var/lib/backup/tracking_state.json | Path to persist tracking state |
| ebpf.heartbeat_interval | duration | "30s" | Heartbeat update interval |
| ebpf.max_tracking_gap | duration | "5m" | Max heartbeat gap before recovery mode |
Recovery Options
| Parameter | Type | Default | Description |
|---|---|---|---|
| ebpf.recovery.on_interruption | string | "full_incremental" | Recovery strategy when interruption detected |
| ebpf.recovery.verify_on_startup | boolean | true | Verify tracking continuity on startup |
| ebpf.recovery.log_recovery_details | boolean | true | Log detailed recovery info |
Recovery strategies:
"full_incremental"— Scan entire device with deduplication (safest, recommended)"full_backup"— Force full backup without deduplication"trust_bitmap"— Use persisted bitmap (fastest but risky)
Block Group (ext4)
| Parameter | Type | Default | Description |
|---|---|---|---|
| ebpf.block_group_aware | boolean | true | Enable block group-aware sync for ext4 |
| ebpf.block_group_detection | string | "auto" | Detection method: "auto", "manual", "disabled" |
| ebpf.block_group_sync_strategy | string | "full" | Sync strategy: "full", "partial", "smart" |
| ebpf.block_group_sync_threshold | float | 0.25 | Threshold for "smart" strategy (0.0–1.0) |
Backup Schedule
Full Backups
| Parameter | Type | Default | Description |
|---|---|---|---|
| schedule.full_backup.enabled | boolean | true | Enable scheduled full backups |
| schedule.full_backup.frequency | string | "weekly" | Options: "daily", "weekly", "monthly" |
| schedule.full_backup.day_of_week | string | "sunday" | Day for weekly backups |
| schedule.full_backup.time | string | "02:00" | Time in HH:MM 24-hour format |
| schedule.full_backup.force_if_missing | boolean | true | Force full backup if none exists |
| schedule.full_backup.max_age_days | integer | 30 | Force full backup if last full is older than N days |
Incremental Backups
| Parameter | Type | Default | Description |
|---|---|---|---|
| schedule.incremental_backup.enabled | boolean | true | Enable incremental backups |
| schedule.incremental_backup.frequency | string | "hourly" | Options: "hourly", "daily", "custom" |
| schedule.incremental_backup.interval | duration | — | Custom interval (e.g., "15m"), used when frequency is "custom" |
| schedule.incremental_backup.skip_if_no_changes | boolean | true | Skip if no changes detected (requires eBPF) |
Retention Policy
| Parameter | Type | Default | Description |
|---|---|---|---|
| retention.keep_last_n_full | integer | 4 | Number of full backups to retain |
| retention.keep_last_n_incremental | integer | 24 | Number of incremental backups to retain |
| retention.auto_cleanup | boolean | true | Automatically delete old snapshots |
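To illustrate how a keep-last-N policy behaves, here is a minimal sketch (a hypothetical helper, not the agent's actual code; a real implementation must also keep any full backup that retained incrementals depend on):

```python
def apply_retention(snapshots, keep_full=4, keep_incremental=24):
    """Return the snapshot IDs retained under a keep-last-N policy.

    `snapshots` is a list of (snapshot_id, kind) tuples ordered
    oldest -> newest, where kind is "full" or "incremental".
    """
    fulls = [s for s, kind in snapshots if kind == "full"]
    incs = [s for s, kind in snapshots if kind == "incremental"]
    # Keep the newest N of each kind; everything else is eligible for cleanup.
    keep = set(fulls[-keep_full:]) | set(incs[-keep_incremental:])
    return [s for s, _ in snapshots if s in keep]

history = [("f1", "full"), ("i1", "incremental"), ("f2", "full"),
           ("i2", "incremental"), ("f3", "full")]
print(apply_retention(history, keep_full=2, keep_incremental=1))
# → ['f2', 'i2', 'f3']
```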
Pipeline Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| daemon.pipeline.chunk_size_avg_kb | integer | 4096 | Average chunk size in KB (supported: 256–8192) |
| daemon.pipeline.fixed_block_size_mb | integer | 16 | Fixed-block boundary size (power of 2, 2–256 MB) |
| daemon.pipeline.max_chunk_size_mb | integer | 32 | Max chunk size in MB (must be larger than fixed_block_size_mb) |
| daemon.pipeline.batch_size | integer | 100 | Chunks to process before upload (10–1000) |
| daemon.pipeline.workers | integer | 4 | Parallel compression workers |
| daemon.pipeline.io_workers | integer | 0 | I/O workers (0 = auto) |
| daemon.pipeline.max_io_queue_depth | integer | 20 | Max blocks in I/O queue |
| daemon.pipeline.channel_buffer_mult | integer | 8 | Channel buffer multiplier |
| daemon.pipeline.max_pipeline_memory_mb | integer | 2048 | Max pipeline memory in MB |
| daemon.pipeline.backpressure_enabled | boolean | true | Enable adaptive backpressure control |
| daemon.pipeline.bandwidth_target_mbps | float | — | Auto-calculate batch size from bandwidth target |
| daemon.pipeline.zero_copy | boolean | true | Enable zero-copy pipeline optimizations |
| daemon.pipeline.optimized_io | boolean | true | Enable O_DIRECT and large buffers |
Keep chunk_size_avg_kb consistent across all backups. Changing it breaks deduplication with previous backups.
Tuning for Larger Disks (Best Practices)
For larger disks, tune for the outcome you want:
- Lower restore CPU and metadata overhead: increase `chunk_size_avg_kb`
- Better dedup granularity: decrease `chunk_size_avg_kb`
- Lower memory pressure and fewer stalls: reduce `batch_size` and `workers`
Rules to keep configurations safe
- Keep `fixed_block_size_mb` the same on agent and server.
- Keep `max_chunk_size_mb > fixed_block_size_mb`.
- Keep `chunk_size_avg_kb < fixed_block_size_mb * 1024`.
- Start conservative, then scale up one setting at a time.
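The last two rules can be sanity-checked mechanically. A minimal sketch (an illustrative helper, not part of the agent; the agent/server consistency rule cannot be checked locally):

```python
def check_pipeline_rules(chunk_size_avg_kb, fixed_block_size_mb, max_chunk_size_mb):
    """Return a list of violated pipeline safety rules (empty means OK)."""
    problems = []
    if not max_chunk_size_mb > fixed_block_size_mb:
        problems.append("max_chunk_size_mb must be larger than fixed_block_size_mb")
    if not chunk_size_avg_kb < fixed_block_size_mb * 1024:
        problems.append("chunk_size_avg_kb must be below fixed_block_size_mb * 1024")
    return problems

# The defaults from the table above pass both checks:
print(check_pipeline_rules(4096, 16, 32))   # → []
# A 16384 KB average with 8 MB fixed blocks violates the second rule:
print(check_pipeline_rules(16384, 8, 16))
```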
Quick chunk-count estimation
Approximate chunk count:
chunks ~= disk_size / average_chunk_size
Example for a 100 GB disk:
- `chunk_size_avg_kb: 1024` (~1 MB average): about 100,000 chunks
- `chunk_size_avg_kb: 2048` (~2 MB average): about 50,000 chunks
- `chunk_size_avg_kb: 4096` (~4 MB average): about 25,000 chunks

`workers` and `batch_size` change throughput/memory behavior, not the chunk count target.
Smaller chunk counts usually improve restore speed for mount-and-browse workflows because the system resolves, reads, and reconstructs fewer chunk records before exposing files.
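The estimate above is a simple division. A sketch for running the numbers on your own disks (an illustrative helper, not part of the agent):

```python
def estimate_chunks(disk_size_gb: int, chunk_size_avg_kb: int) -> int:
    """Approximate chunk count: disk size divided by average chunk size."""
    disk_size_kb = disk_size_gb * 1024 * 1024
    return disk_size_kb // chunk_size_avg_kb

# 100 GB disk at the default 4096 KB (~4 MB) average chunk size:
print(estimate_chunks(100, 4096))  # → 25600, i.e. about 25,000 chunks
```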
Recommended profiles
Profile A: Large disks, balanced (recommended starting point)
```yaml
daemon:
  pipeline:
    chunk_size_avg_kb: 4096
    fixed_block_size_mb: 16
    max_chunk_size_mb: 32
    batch_size: 100
    workers: 4
```

Profile B: Large disks on memory-constrained hosts
```yaml
daemon:
  pipeline:
    chunk_size_avg_kb: 2048
    fixed_block_size_mb: 8
    max_chunk_size_mb: 16
    batch_size: 25
    workers: 1
    max_io_queue_depth: 2
    channel_buffer_mult: 1
    max_pipeline_memory_mb: 256
```

Profile C: High-throughput hosts (more CPU and RAM)
```yaml
daemon:
  pipeline:
    chunk_size_avg_kb: 4096
    fixed_block_size_mb: 16
    max_chunk_size_mb: 32
    batch_size: 200
    workers: 6
```

Troubleshooting guidance
If agent memory grows rapidly or backups appear stuck:
- Reduce `batch_size` first (for example, 200 -> 100 -> 25).
- Reduce `workers` next (for example, 4 -> 2 -> 1).
- Keep chunk settings stable while troubleshooting.
Compression
| Parameter | Type | Default | Description |
|---|---|---|---|
| daemon.compression.enabled | boolean | true | Enable compression |
| daemon.compression.level | integer | 1 | Level 1–19 (1 = fast, 19 = maximum) |
| Level | CPU Usage | Ratio |
|---|---|---|
| 1 | Minimal | ~2x |
| 3 | Moderate | ~3x |
| 5 | Higher | ~3.5x |
| 19 | Very high | ~4x |
Logging
| Parameter | Type | Default | Description |
|---|---|---|---|
| daemon.log_level | string | "info" | Options: "debug", "info", "warn", "error" |
Configuration Examples
Local Storage
```yaml
storage:
  type: "local"
  path: "/var/lib/backup/repo"
```

gRPC with TLS (Production)
```yaml
storage:
  type: "grpc"
  grpc:
    server_address: "backup.prod.example.com:50051"
    timeout: "10m"
    max_retries: 5
    tls:
      enabled: true
      cert_file: "/etc/backup/tls/client.crt"
      key_file: "/etc/backup/tls/client.key"
      ca_file: "/etc/backup/tls/ca.crt"
```

Daily Full Backups (Database Server)
```yaml
schedule:
  full_backup:
    enabled: true
    frequency: "daily"
    time: "03:00"
    max_age_days: 7
  incremental_backup:
    enabled: true
    frequency: "custom"
    interval: "15m"
```

High Bandwidth (1 Gbps+)
```yaml
daemon:
  pipeline:
    batch_size: 400
    workers: 8
  compression:
    enabled: true
    level: 1
```

Low Bandwidth (10 Mbps)
```yaml
daemon:
  pipeline:
    batch_size: 20
    workers: 2
  compression:
    enabled: true
    level: 3
```

Memory-Constrained (1 GB Limit)
```yaml
daemon:
  pipeline:
    max_io_queue_depth: 3
    channel_buffer_mult: 2
    max_pipeline_memory_mb: 1024
    workers: 2
    fixed_block_size_mb: 8
  compression:
    enabled: true
    level: 1
```