Deployment
This page explains how Flowcontrol is deployed today in staging and production, and how that differs from local development.
The goal is to make the runtime model clear for new developers:
- GitHub Actions validates code and publishes images
- GitHub Container Registry (GHCR) stores the published images
- Portainer manages the deployed stack
- Docker Swarm is the runtime platform behind that stack
Scope
This page is about the deployed environments.
It is not the local development guide.
Use:
- Getting started for local setup
- CI/CD for workflow triggers and image publication
- Versioning for release version rules
Runtime model
The deployed stack is defined by the root docker-compose.yml.
That file is deployment-oriented, not local-dev-oriented. You can see this from the use of:
- deploy: blocks
- an external Docker network
- external Docker Swarm configs
- external Docker Swarm secrets
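For orientation, those deployment-only markers typically show up as external declarations in the stack file. This is a sketch, not the real file: the names below are the ones documented on this page, and the real secret list is longer.

```yaml
# Sketch of the external declarations a Swarm-oriented stack file carries.
networks:
  flowcontrol:
    external: true          # must already exist in the Swarm
configs:
  GLOBAL_CONFIGURATION:
    external: true          # managed outside the repository
secrets:
  DATABASE_PASSWORD:
    external: true          # one of several external secrets
```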
In practice the runtime flow is:
- Code is merged into a protected branch.
- GitHub Actions publishes the new images to GHCR.
- Portainer applies the rollout behavior configured for that environment.
- Docker Swarm runs the services defined in the stack.
The environment-specific rollout behavior differs between the two:
- staging is configured for auto-deploy from the published staging tags
- production is rolled forward manually in Portainer by updating APP_VERSION and IMAGE_TAG and then triggering the stack update
What belongs to what
Local development
Local development uses the development compose files, such as:
- docker-compose-development-postgres.yml
- docker-compose-development-mssql.yml
These are for running and testing the system locally.
Deployed environments
Staging and production are based on the root deployment stack:
docker-compose.yml
This is the file that reflects the Swarm/Portainer deployment model.
Required external resources
The deployment stack depends on several resources that must already exist in the Docker/Swarm environment.
External network
The stack expects this external network:
flowcontrol
If that network does not exist, the stack will not deploy correctly.
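If the network is missing, it can be created once per Swarm before the first deploy. A minimal sketch, assuming an overlay network is wanted; check the driver and options actually used in your environment:

```shell
# Create the external network the stack expects (driver/options are assumptions)
docker network create --driver overlay --attachable flowcontrol
```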
External config
The stack expects this external config:
GLOBAL_CONFIGURATION
This config is mounted into services as:
/run/configs/global.properties
It is meant for shared non-secret configuration.
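Creating or rotating that config happens on a Swarm manager, not in the repository. A sketch, assuming the properties are staged in a local file named global.properties (the local filename is an assumption):

```shell
# Create the external config from a local properties file
docker config create GLOBAL_CONFIGURATION ./global.properties

# Docker configs are immutable: to change one, create a new config and repoint
# the stack, or remove and recreate it while no service references it
docker config ls
```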
External secrets
The stack also expects external Swarm secrets. The current deployment compose declares:
- DATABASE_PASSWORD
- DATABASE_USERNAME
- DATABASE_ECO_PASSWORD
- DATABASE_ECO_USERNAME
- KEYCLOAK_PASSWORD
- KEYCLOAK_USER_SERVICE_SECRET
- SMTP_USERNAME
- SMTP_PASSWORD
- AWS_ACCESS_KEY_ID
- AWS_SECRET_KEY
Important:
- do not store real secret values in the repository
- document secret names and purpose, not their contents
- some services remap secret targets internally, so the exposed filename inside the container may differ from the external secret name
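Secrets are likewise created on a Swarm manager and never committed. A hedged sketch for one of the names above; the value is a placeholder, and in practice it should be piped from a file or password manager rather than typed as a literal:

```shell
# Create one external secret by piping the value via stdin
# (REPLACE_ME is a placeholder; do not pass real values as shell literals)
printf '%s' 'REPLACE_ME' | docker secret create DATABASE_PASSWORD -

# List secret names; Swarm never displays the stored values
docker secret ls
```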
Stack inputs
The deployed stack uses three main input types:
- stack environment variables
- external config
- external secrets
Stack environment variables
These are used for things like:
- image names
- image tags
- runtime version display
- service ports and hostnames
- Keycloak and gateway URLs
- infrastructure versions
- database host and port references
Two variables matter especially for releases:
- IMAGE_TAG: selects which published application image tag the stack should run
- APP_VERSION: the version value exposed inside the running services and frontend
Those two should normally describe the same rollout.
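In the stack file those two variables are typically consumed like this. This is a sketch: the service name and image path are hypothetical placeholders, not the project's real ones.

```yaml
services:
  backend:                                            # hypothetical service name
    image: ghcr.io/example-org/backend:${IMAGE_TAG}   # image selected per rollout
    environment:
      APP_VERSION: ${APP_VERSION}                     # version string shown at runtime
```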
Config
Use the external GLOBAL_CONFIGURATION config for shared non-sensitive runtime configuration.
Examples of things that belong here:
- shared Spring properties
- shared hostnames
- common non-secret runtime defaults
Secrets
Use Swarm secrets for sensitive values.
Examples:
- database usernames and passwords
- Keycloak credentials
- SMTP credentials
- AWS credentials
Do not move those values into plain stack env unless there is a very specific reason.
Swarm details that matter
The deployment stack is written with Swarm semantics in mind.
That has a few practical consequences:
- deploy: settings are meaningful in the deployed environment
- update and rollback behavior is defined per service in the compose file
- external configs and secrets are expected to be managed by the platform
- using the root deployment compose file as if it were a normal local compose file will not reflect the real deployment model
This is one of the reasons local development uses separate compose files.
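As an illustration of why per-service deploy: settings matter, here is a hedged sketch of a rollout policy. The service name and all values are examples, not the project's actual settings:

```yaml
services:
  backend:                      # hypothetical service name
    deploy:
      replicas: 2
      update_config:
        parallelism: 1          # replace one task at a time
        order: start-first      # start the new task before stopping the old
        failure_action: rollback
      rollback_config:
        parallelism: 1
```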
Deploying a release
For a normal release, the process starts the same way in both environments:
- Merge through the protected branch flow so GitHub Actions publishes the intended GHCR images.
- Choose the image tag that the stack should run.
Staging rollout
For staging:
- Merge into staging.
- Let the publish workflow push the staging tags to GHCR.
- Portainer auto-deploys the staging stack from those published staging tags.
- Verify the stack after rollout.
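Verification can be done from a Swarm manager as well as in the Portainer UI. A sketch, where the stack name flowcontrol and the service name are assumptions:

```shell
# Overall stack state: one line per service with replica counts
docker stack services flowcontrol

# Task history for one service, including failed starts
docker service ps flowcontrol_backend --no-trunc
```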
Production rollout
For production:
- Merge into master.
- Let the publish workflow push the immutable version tag to GHCR.
- Set or verify IMAGE_TAG in the Portainer-managed production stack environment.
- Set or verify APP_VERSION so the runtime version shown by the application matches the deployed release.
- Trigger the production stack update manually in Portainer so it pulls the selected image version.
- Verify the stack after rollout.
After deployment, check:
- the expected image tag is in use
- the application reports the expected version
- the gateway, client, and service-to-service communication still work
- required secrets and configs were mounted correctly
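The image-tag check in particular can be scripted from a manager node. A sketch, with the service name assumed:

```shell
# Print the exact image reference (including tag/digest) the service is running
docker service inspect flowcontrol_backend \
  --format '{{ .Spec.TaskTemplate.ContainerSpec.Image }}'
```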
Rolling back
Rollback is straightforward if the old image tag still exists in GHCR.
Typical rollback flow:
- Change IMAGE_TAG back to the previous known-good tag.
- Change APP_VERSION back as well if it was changed for the failed rollout.
- Redeploy the stack in Portainer.
- Verify health and connectivity again.
If IMAGE_TAG is rolled back but APP_VERSION is not, the system may run the old image while reporting a newer version string. Avoid that mismatch.
Staging vs production
The stack model is the same, but the release inputs differ.
Typically:
- staging uses staging-oriented image tags and Portainer auto-deploys them
- production uses immutable release version tags and Portainer is updated manually
- for production, APP_VERSION and IMAGE_TAG are part of the manual rollout step in Portainer
The exact tags are documented in CI/CD and Versioning.
Adding a new service
If a new service must be deployable, updating only docker-compose.yml is not enough.
You usually also need to:
- Add the service to the deployment stack with the correct env, configs, secrets, volumes, and resources.
- Add the service to the CI/publish workflows so its image is actually validated and published.
- Decide which config belongs in stack env, which belongs in GLOBAL_CONFIGURATION, and which must be a Swarm secret.
- Make sure networking and routing are correct.
- Update the documentation pages that describe pipeline, versioning, and deployment.
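Putting those points together, a new service entry in the deployment stack usually needs wiring along these lines. This is a sketch: the service name and image path are hypothetical, while the network, config, mount path, and secret names are the ones documented above.

```yaml
services:
  new-service:                                  # hypothetical name
    image: ghcr.io/example-org/new-service:${IMAGE_TAG}
    environment:
      APP_VERSION: ${APP_VERSION}
    networks:
      - flowcontrol
    configs:
      - source: GLOBAL_CONFIGURATION
        target: /run/configs/global.properties  # shared non-secret config
    secrets:
      - DATABASE_PASSWORD                       # only the secrets it needs
    deploy:
      replicas: 1

networks:
  flowcontrol:
    external: true
configs:
  GLOBAL_CONFIGURATION:
    external: true
secrets:
  DATABASE_PASSWORD:
    external: true
```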
Common pitfalls
- Treating docker-compose.yml as the local development compose file.
- Forgetting to create an external network, config, or secret before deploying.
- Updating IMAGE_TAG but forgetting to align APP_VERSION.
- Adding a service to the stack but not to the GitHub Actions publish workflow.
- Moving secret values into plain env variables instead of Swarm secrets.
- Assuming Portainer automation behavior from the repository alone. In our current setup, staging auto-deploys, but production still requires a manual Portainer update with APP_VERSION and IMAGE_TAG.
Recommended rule for new devs
When you think about deployment, split it into three layers:
- repository layer: code, workflows, compose definitions
- image layer: GHCR tags published by GitHub Actions
- runtime layer: Portainer and Docker Swarm configuration
That mental model prevents most confusion around:
- where versions come from
- where images are selected
- where secrets live
- what is controlled in git and what is controlled in the deployment platform