Daemon Mode

Overview

Daemon Mode runs continuously on client machines, automatically backing up devices according to a schedule. Configuration is managed via a YAML file at /etc/xreplicator/agent.yaml (Linux) or C:\ProgramData\xreplicator\agent.yaml (Windows).

Starting the Daemon

```shell
# Start the daemon
sudo systemctl start backup-agent

# Enable automatic startup on boot
sudo systemctl enable backup-agent

# Check status
sudo systemctl status backup-agent

# View logs
sudo journalctl -u backup-agent -f
```

```powershell
# Windows (run as Administrator)
Start-Service XReplicatorAgent
Get-Service XReplicatorAgent
```

Configuration File

Edit the config file, then restart the service to apply changes:

```shell
sudo nano /etc/xreplicator/agent.yaml
sudo systemctl restart backup-agent
```

```powershell
# Windows
notepad C:\ProgramData\xreplicator\agent.yaml
Restart-Service XReplicatorAgent
```

Example Agent Configs

  • Linux: /docs/configuration/agent-config-linux
  • Windows: /docs/configuration/agent-config-windows

Device Configuration

device.path (string, required)

Path to the block device or file to back up.

```yaml
device:
  path: "/dev/vdb"
```

device.paths (array, optional)

Use this instead of device.path when backing up multiple devices.

```yaml
device:
  paths:
    - "/dev/vdb"
    - "/dev/vdc"
```

device.sync_wait_ms (integer, default: 500)

Wait time in milliseconds after filesystem sync before backup. Ensures buffered writes are committed to disk.

| Storage Type | Recommended Value |
|---|---|
| Local SSD/HDD | 500 (default) |
| Network (GCP/AWS EBS) | 2000–3000 |
| High-latency storage | 5000 |
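
For example, an agent backing up a network-attached volume might raise the wait window along these lines (the device path is illustrative):

```yaml
device:
  path: "/dev/vdb"
  # Network-attached storage: allow extra time for buffered
  # writes to reach the backing store before the backup starts.
  sync_wait_ms: 2500
```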

Storage Configuration

storage.type (string, required)

Storage backend type. Options: "local" or "grpc".

storage.path (string, required if type is "local")

Local directory for the repository. Created automatically if it doesn’t exist.

```yaml
storage:
  type: "local"
  path: "/var/lib/backup/repo"
```

gRPC Storage Options

| Parameter | Type | Default | Description |
|---|---|---|---|
| storage.grpc.server_address | string | — | Server address in host:port format |
| storage.grpc.timeout | duration | "5m" | RPC timeout |
| storage.grpc.max_retries | integer | 3 | Max retry attempts. Use 0 to retry forever; otherwise 1–10. |

For long-running agents, set storage.grpc.max_retries: 0 so the daemon keeps reconnecting after server restarts.
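
A minimal plaintext gRPC storage block for a long-running agent might look like the following sketch (the server address is a placeholder):

```yaml
storage:
  type: "grpc"
  grpc:
    server_address: "backup.internal:50051"
    timeout: "5m"
    max_retries: 0   # retry forever so the agent survives server restarts
```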

TLS Options

| Parameter | Type | Default | Description |
|---|---|---|---|
| storage.grpc.tls.enabled | boolean | false | Enable TLS encryption |
| storage.grpc.tls.cert_file | string | — | Path to client certificate |
| storage.grpc.tls.key_file | string | — | Path to client private key |
| storage.grpc.tls.ca_file | string | — | Path to CA certificate |
| storage.grpc.tls.insecure_skip_verify | boolean | false | Skip server cert verification (development only) |

Never set insecure_skip_verify: true in production environments.


Change Tracking (eBPF)

| Parameter | Type | Default | Description |
|---|---|---|---|
| ebpf.enabled | boolean | true | Enable eBPF change tracking |
| ebpf.block_size_kb | integer | 64 | Block size in KB (must be power of 2: 4–1024) |
| ebpf.dirty_block_min_age | duration | "60s" | Min age of dirty blocks before syncing |
| ebpf.fallback_on_error | boolean | true | Fall back to full scan if tracking fails |
| ebpf.bitmap_persist_path | string | /var/lib/backup/bitmap.db | Path to persist dirty bitmap |
| ebpf.state_persist_path | string | /var/lib/backup/tracking_state.json | Path to persist tracking state |
| ebpf.heartbeat_interval | duration | "30s" | Heartbeat update interval |
| ebpf.max_tracking_gap | duration | "5m" | Max heartbeat gap before recovery mode |
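
Putting the tracking parameters together, a configuration that keeps the defaults but coarsens the block size (illustrative values) might look like:

```yaml
ebpf:
  enabled: true
  block_size_kb: 128          # must be a power of 2 between 4 and 1024
  dirty_block_min_age: "60s"
  fallback_on_error: true     # fall back to a full scan if tracking fails
```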

Recovery Options

| Parameter | Type | Default | Description |
|---|---|---|---|
| ebpf.recovery.on_interruption | string | "full_incremental" | Recovery strategy when interruption detected |
| ebpf.recovery.verify_on_startup | boolean | true | Verify tracking continuity on startup |
| ebpf.recovery.log_recovery_details | boolean | true | Log detailed recovery info |

Recovery strategies:

  • "full_incremental" — Scan entire device with deduplication (safest, recommended)
  • "full_backup" — Force full backup without deduplication
  • "trust_bitmap" — Use persisted bitmap (fastest but risky)

Block Group (ext4)

| Parameter | Type | Default | Description |
|---|---|---|---|
| ebpf.block_group_aware | boolean | true | Enable block group-aware sync for ext4 |
| ebpf.block_group_detection | string | "auto" | Detection method: "auto", "manual", "disabled" |
| ebpf.block_group_sync_strategy | string | "full" | Sync strategy: "full", "partial", "smart" |
| ebpf.block_group_sync_threshold | float | 0.25 | Threshold for "smart" strategy (0.0–1.0) |
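
As an illustration, enabling the "smart" strategy with an explicit threshold could be written as:

```yaml
ebpf:
  block_group_aware: true
  block_group_detection: "auto"
  block_group_sync_strategy: "smart"
  block_group_sync_threshold: 0.25   # threshold for the "smart" strategy (0.0-1.0)
```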

Backup Schedule

Full Backups

| Parameter | Type | Default | Description |
|---|---|---|---|
| schedule.full_backup.enabled | boolean | true | Enable scheduled full backups |
| schedule.full_backup.frequency | string | "weekly" | Options: "daily", "weekly", "monthly" |
| schedule.full_backup.day_of_week | string | "sunday" | Day for weekly backups |
| schedule.full_backup.time | string | "02:00" | Time in HH:MM 24-hour format |
| schedule.full_backup.force_if_missing | boolean | true | Force full backup if none exists |
| schedule.full_backup.max_age_days | integer | 30 | Force full backup if last full is older than N days |

Incremental Backups

| Parameter | Type | Default | Description |
|---|---|---|---|
| schedule.incremental_backup.enabled | boolean | true | Enable incremental backups |
| schedule.incremental_backup.frequency | string | "hourly" | Options: "hourly", "daily", "custom" |
| schedule.incremental_backup.interval | duration | — | Custom interval (e.g., "15m"), used when frequency is "custom" |
| schedule.incremental_backup.skip_if_no_changes | boolean | true | Skip if no changes detected (requires eBPF) |

Retention Policy

| Parameter | Type | Default | Description |
|---|---|---|---|
| retention.keep_last_n_full | integer | 4 | Number of full backups to retain |
| retention.keep_last_n_incremental | integer | 24 | Number of incremental backups to retain |
| retention.auto_cleanup | boolean | true | Automatically delete old snapshots |
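
As a sketch, a policy that keeps roughly two months of weekly fulls plus two days of hourly incrementals (values are illustrative) would be:

```yaml
retention:
  keep_last_n_full: 8          # ~2 months of weekly full backups
  keep_last_n_incremental: 48  # ~2 days of hourly incrementals
  auto_cleanup: true
```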

Pipeline Configuration

| Parameter | Type | Default | Description |
|---|---|---|---|
| daemon.pipeline.chunk_size_avg_kb | integer | 4096 | Average chunk size in KB (supported: 256–8192) |
| daemon.pipeline.fixed_block_size_mb | integer | 16 | Fixed-block boundary size (power of 2, 2–256 MB) |
| daemon.pipeline.max_chunk_size_mb | integer | 32 | Max chunk size in MB (must be larger than fixed_block_size_mb) |
| daemon.pipeline.batch_size | integer | 100 | Chunks to process before upload (10–1000) |
| daemon.pipeline.workers | integer | 4 | Parallel compression workers |
| daemon.pipeline.io_workers | integer | 0 | I/O workers (0 = auto) |
| daemon.pipeline.max_io_queue_depth | integer | 20 | Max blocks in I/O queue |
| daemon.pipeline.channel_buffer_mult | integer | 8 | Channel buffer multiplier |
| daemon.pipeline.max_pipeline_memory_mb | integer | 2048 | Max pipeline memory in MB |
| daemon.pipeline.backpressure_enabled | boolean | true | Enable adaptive backpressure control |
| daemon.pipeline.bandwidth_target_mbps | float | — | Auto-calculate batch size from bandwidth target |
| daemon.pipeline.zero_copy | boolean | true | Enable zero-copy pipeline optimizations |
| daemon.pipeline.optimized_io | boolean | true | Enable O_DIRECT and large buffers |

Keep chunk_size_avg_kb consistent across all backups. Changing it breaks deduplication with previous backups.

Tuning for Larger Disks (Best Practices)

For larger disks, tune for the outcome you want:

  • Lower restore CPU and metadata overhead: increase chunk_size_avg_kb
  • Better dedup granularity: decrease chunk_size_avg_kb
  • Lower memory pressure and fewer stalls: reduce batch_size and workers

Rules to keep configurations safe

  • Keep fixed_block_size_mb the same on agent and server.
  • Keep max_chunk_size_mb > fixed_block_size_mb.
  • Keep chunk_size_avg_kb < fixed_block_size_mb * 1024.
  • Start conservative, then scale up one setting at a time.

Quick chunk-count estimation

Approximate chunk count:

chunks ~= disk_size / average_chunk_size

Example for a 100 GB disk:

  • chunk_size_avg_kb: 1024 (~1 MB average): about 100,000 chunks
  • chunk_size_avg_kb: 2048 (~2 MB average): about 50,000 chunks
  • chunk_size_avg_kb: 4096 (~4 MB average): about 25,000 chunks

workers and batch_size change throughput/memory behavior, not the chunk count target.

Smaller chunk counts usually improve restore speed for mount-and-browse workflows because the system resolves, reads, and reconstructs fewer chunk records before exposing files.

Profile A: Balanced defaults
```yaml
daemon:
  pipeline:
    chunk_size_avg_kb: 4096
    fixed_block_size_mb: 16
    max_chunk_size_mb: 32
    batch_size: 100
    workers: 4
```
Profile B: Large disks on memory-constrained hosts
```yaml
daemon:
  pipeline:
    chunk_size_avg_kb: 2048
    fixed_block_size_mb: 8
    max_chunk_size_mb: 16
    batch_size: 25
    workers: 1
    max_io_queue_depth: 2
    channel_buffer_mult: 1
    max_pipeline_memory_mb: 256
```
Profile C: High-throughput hosts (more CPU and RAM)
```yaml
daemon:
  pipeline:
    chunk_size_avg_kb: 4096
    fixed_block_size_mb: 16
    max_chunk_size_mb: 32
    batch_size: 200
    workers: 6
```

Troubleshooting guidance

If agent memory grows rapidly or backups appear stuck:

  • Reduce batch_size first (for example, 200 -> 100 -> 25).
  • Reduce workers next (for example, 4 -> 2 -> 1).
  • Keep chunk settings stable while troubleshooting.

Compression

| Parameter | Type | Default | Description |
|---|---|---|---|
| daemon.compression.enabled | boolean | true | Enable compression |
| daemon.compression.level | integer | 1 | Level 1–19 (1 = fast, 19 = maximum) |

| Level | CPU Usage | Ratio |
|---|---|---|
| 1 | Minimal | ~2x |
| 3 | Moderate | ~3x |
| 5 | Higher | ~3.5x |
| 19 | Very high | ~4x |

Logging

| Parameter | Type | Default | Description |
|---|---|---|---|
| daemon.log_level | string | "info" | Options: "debug", "info", "warn", "error" |
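
For example, to raise verbosity temporarily while diagnosing a problem:

```yaml
daemon:
  log_level: "debug"   # revert to "info" once the issue is resolved
```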

Configuration Examples

Local Storage

```yaml
storage:
  type: "local"
  path: "/var/lib/backup/repo"
```

gRPC with TLS (Production)

storage: type: "grpc" grpc: server_address: "backup.prod.example.com:50051" timeout: 10m max_retries: 5 tls: enabled: true cert_file: "/etc/backup/tls/client.crt" key_file: "/etc/backup/tls/client.key" ca_file: "/etc/backup/tls/ca.crt"

Daily Full Backups (Database Server)

```yaml
schedule:
  full_backup:
    enabled: true
    frequency: "daily"
    time: "03:00"
    max_age_days: 7
  incremental_backup:
    enabled: true
    frequency: "custom"
    interval: 15m
```

High Bandwidth (1 Gbps+)

```yaml
daemon:
  pipeline:
    batch_size: 400
    workers: 8
  compression:
    enabled: true
    level: 1
```

Low Bandwidth (10 Mbps)

```yaml
daemon:
  pipeline:
    batch_size: 20
    workers: 2
  compression:
    enabled: true
    level: 3
```

Memory-Constrained (1 GB Limit)

```yaml
daemon:
  pipeline:
    max_io_queue_depth: 3
    channel_buffer_mult: 2
    max_pipeline_memory_mb: 1024
    workers: 2
    fixed_block_size_mb: 8
  compression:
    enabled: true
    level: 1
```