
Shared Databases & Auto-Replication

Deploy databases on Flux with persistent storage — MongoDB, PostgreSQL, Redis replication strategies and backup patterns.

Deploying databases on a decentralized network introduces unique challenges: each instance runs on a different node, data must stay consistent across instances, and node changes can happen at any time. Flux provides mechanisms for persistent storage and supports databases with built-in replication to handle these challenges.

The Decentralized Database Challenge

When you deploy a Flux app with 3 instances, each instance runs on a different node. Each node has its own local storage. Unlike centralized cloud providers where you can attach a shared network volume, Flux uses local persistent volumes per instance. This means:

  • Each instance has its own separate data copy
  • Data written on Instance A is not automatically visible to Instance B
  • If a node goes offline and the instance migrates, local data on the old node is lost
  • Database replication must be handled at the application level, not the infrastructure level

Using containerData for Persistence

The containerData field in your Flux app spec mounts a persistent volume at the specified path. For databases, this is where data files are stored. Data persists across container restarts on the same node.

MongoDB component with persistent storage

{
  "name": "mongodb",
  "repotag": "mongo:7",
  "ports": [],
  "containerPorts": [27017],
  "domains": [],
  "environmentParameters": [
    "MONGO_INITDB_ROOT_USERNAME=admin",
    "MONGO_INITDB_ROOT_PASSWORD=secretpassword"
  ],
  "commands": [],
  "containerData": "/data/db",
  "cpu": 1,
  "ram": 1024,
  "hdd": 10
}

Replication Strategies

  1. MongoDB Replica Set

    MongoDB natively supports replica sets for automatic data synchronization across instances. Enable the replica set on every instance (for the official mongo image, pass --replSet in the component's commands; images such as bitnami/mongodb expose it through environment variables), initiate it once, and connect with a replica set connection string (see the connection sketch after this list). MongoDB then handles election of primary/secondary members and data sync.

  2. PostgreSQL with Streaming Replication

    PostgreSQL supports primary-standby replication with Write-Ahead Log (WAL) streaming. The primary instance handles writes; standbys replicate asynchronously. Use environment variables to configure replication settings.

  3. Redis with Sentinel

    Redis Sentinel provides automatic failover for Redis instances. Configure one instance as master with others as replicas. Sentinel monitors and promotes replicas on master failure.

  4. Application-level sync

    For simpler setups, use one instance as the primary (handling all writes) and implement read-only replicas that sync via application logic or periodic data dumps.
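
As a concrete illustration of the first strategy, the sketch below connects an application component to a three-member MongoDB replica set with pymongo. The member addresses, credentials, and the replica set name rs0 are placeholders rather than values Flux provides; substitute the addresses of your own instances once the replica set has been initiated.

Connecting to a MongoDB replica set (pymongo)

# Minimal sketch, assuming a replica set named "rs0" has already been
# initiated across the instances. Addresses and credentials are placeholders.
from pymongo import MongoClient
from pymongo.read_preferences import ReadPreference

# Listing several members lets the driver discover the current primary
# and fail over automatically if one node is replaced.
uri = (
    "mongodb://admin:secretpassword@"
    "10.0.0.1:27017,10.0.0.2:27017,10.0.0.3:27017"
    "/?replicaSet=rs0&authSource=admin"
)

client = MongoClient(uri, serverSelectionTimeoutMS=5000)

# Writes always go to the primary; reads may be served by secondaries.
db = client.get_database(
    "myapp", read_preference=ReadPreference.SECONDARY_PREFERRED
)
db.events.insert_one({"type": "startup"})   # routed to the primary
print(db.events.count_documents({}))        # may be answered by a secondary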

Primary/Standby Pattern

The most common pattern on Flux is primary/standby: one instance is designated as the primary (handling all writes), while other instances run in standby mode for redundancy. When the primary goes down, a standby can be promoted. This pattern works for databases, game servers (like the Minecraft example), and any stateful application.
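
One way to decide which instance is the primary is a deterministic election: each instance asks the Flux API which nodes are currently running the app and treats the first IP in the sorted list as the primary. Because every instance computes the same ordering from the same list, they agree on a single primary without extra coordination, and when the primary's node drops off the list the next instance takes over on its following check. The sketch below assumes the public endpoint https://api.runonflux.io/apps/location/<appname> and its usual response shape, plus api.ipify.org for discovering the instance's own IP; verify both against the current Flux documentation before relying on them.

Deterministic primary election (sketch)

# Sketch of a deterministic primary election. Assumes the Flux API lists the
# running instances of an app at /apps/location/<appname>; verify the exact
# response shape before relying on it.
import requests

APP_NAME = "myapp"  # placeholder: your Flux app name


def get_instance_ips(app_name: str) -> list[str]:
    """Return the sorted IPs of all nodes currently running the app."""
    resp = requests.get(
        f"https://api.runonflux.io/apps/location/{app_name}", timeout=10
    )
    resp.raise_for_status()
    # Assumed response shape: {"status": "success", "data": [{"ip": "1.2.3.4:16127", ...}]}
    return sorted(entry["ip"].split(":")[0] for entry in resp.json()["data"])


def i_am_primary(my_ip: str) -> bool:
    """The instance with the first IP in the sorted list acts as primary."""
    ips = get_instance_ips(APP_NAME)
    return bool(ips) and ips[0] == my_ip


if __name__ == "__main__":
    my_ip = requests.get("https://api.ipify.org", timeout=10).text.strip()
    role = "primary" if i_am_primary(my_ip) else "standby"
    print(f"{my_ip} is running as {role}")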

Never expose database ports to the public internet. Use empty ports arrays [] in your spec and access the database only via internal DNS from your application components. Always set strong authentication credentials.
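
To keep credentials out of the image, the application component can read them from its environment parameters and connect to the database component over the internal network only. The sketch below is an illustration under assumptions: the hostname fluxmongodb_myapp and the variable names DB_HOST, MONGO_USER, and MONGO_PASSWORD are placeholders, so confirm the actual container name FluxOS assigns to your database component before using it.

Reading credentials from environment parameters (sketch)

# Sketch: connect to the database component over the internal network using
# credentials supplied via environmentParameters. The hostname and variable
# names below are placeholders, not values defined by Flux.
import os

from pymongo import MongoClient

DB_HOST = os.environ.get("DB_HOST", "fluxmongodb_myapp")  # internal hostname, never a public IP
DB_USER = os.environ["MONGO_USER"]       # set via environmentParameters
DB_PASS = os.environ["MONGO_PASSWORD"]   # never hardcode credentials in the image

client = MongoClient(
    f"mongodb://{DB_USER}:{DB_PASS}@{DB_HOST}:27017/?authSource=admin"
)
client.admin.command("ping")  # fails fast if the database is unreachable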

Backup Strategies

  • Scheduled dumps: Use cron jobs inside the container to run mongodump, pg_dump, or equivalent tools on a schedule
  • External backup: Push backup files to an external S3-compatible storage service (combined with scheduled dumps in the sketch after this list)
  • Application-level export: Build export functionality into your application that can serialize data to an external service
  • Volume snapshots: If your data fits in containerData, periodic snapshots provide point-in-time recovery
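
A minimal sketch combining the first two bullets: run mongodump on a schedule inside the container and push the compressed archive to S3-compatible storage with boto3. The bucket name, endpoint, and environment variable names are placeholders, and the cron schedule is only a suggestion.

Scheduled mongodump pushed to S3-compatible storage (sketch)

# Sketch: gzip a full mongodump and upload it to any S3-compatible endpoint.
# Schedule this script with cron inside the container (e.g. "0 3 * * *").
# Bucket, endpoint, and credential variable names are placeholders.
import os
import subprocess
from datetime import datetime, timezone

import boto3

archive = f"/tmp/backup-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}.gz"

# Dump the whole database into a single gzipped archive.
subprocess.run(
    [
        "mongodump",
        "--uri", os.environ["MONGO_URI"],
        "--archive=" + archive,
        "--gzip",
    ],
    check=True,
)

# Upload to an S3-compatible service (AWS S3, MinIO, Backblaze B2, ...).
s3 = boto3.client(
    "s3",
    endpoint_url=os.environ.get("S3_ENDPOINT"),  # None falls back to AWS
    aws_access_key_id=os.environ["S3_ACCESS_KEY"],
    aws_secret_access_key=os.environ["S3_SECRET_KEY"],
)
s3.upload_file(archive, "my-backup-bucket", os.path.basename(archive))
os.remove(archive)  # keep the persistent volume clean
print("backup uploaded:", archive)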