ArgoCD Repo Initializer

Answer a few questions about your app and we'll generate everything — Dockerfile, docker-compose for local dev, and all Kubernetes manifests for ArgoCD on OVH GRA9. No Docker or Kubernetes knowledge required.

Step 1 of 6
What's your app called?
This will be used to name all the files and resources we create.
App name
Lowercase letters, numbers, and hyphens only, e.g. my-api or user-service.
Container image name
The name of your app's Docker image. We'll use ghcr.io/alenia-consulting/{name} — the tag is set automatically by GitHub Actions on every push.
Which port does your app listen on?
Node.js often uses 3000, Python/FastAPI 8000, Spring Boot 8080.
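For context, the port you enter here ends up in the generated docker-compose.yml port mapping. A rough sketch (service name, image name, and port are illustrative examples, not the generator's exact output):

```yaml
# Illustrative sketch, not the exact generated file.
services:
  app:
    build: .
    image: ghcr.io/alenia-consulting/my-api:local
    ports:
      - "3000:3000"   # host:container, using the port you entered
```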

Step 2 of 6
What language & runtime does your app use?
We'll generate a Dockerfile, .dockerignore, and docker-compose.yml automatically from your answers.
Runtime
Node.js
Python
Java
Go
.NET
PHP
Rust
Static / Nginx
Runtime version
e.g. 20 for Node 20, 3.12 for Python, 21 for Java 21
Does your app depend on any backing services?
These will be added as services in docker-compose.yml for local dev. In production they are managed separately in Kubernetes and noted in the README.
Advanced: Dockerfile build options
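To give a feel for the output, the generated Dockerfile for a Node.js 20 app on port 3000 might look roughly like this (a sketch only; the generator's actual file, stages, and commands may differ):

```dockerfile
# Illustrative sketch, not the exact generated file.
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=build /app ./
EXPOSE 3000
CMD ["npm", "start"]
```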

Step 3 of 6
Does your app need to save data permanently?
If your app is stateless (APIs, web apps, workers) choose "No". If it runs a database or writes files that must survive a restart, choose "Yes".
No — stateless app
APIs, web frontends, workers. Restarts cleanly with no data loss.
Yes — needs persistent storage
Databases, file stores, queues. Data must survive restarts.
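Choosing "Yes" typically means the manifests include a PersistentVolumeClaim so data outlives pod restarts. A hedged sketch of what that could look like (the resource name, storage class, and size are assumptions, not the generator's actual output):

```yaml
# Illustrative only; names, size, and storageClassName are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-api-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-cinder-high-speed   # assumed OVH block-storage class
  resources:
    requests:
      storage: 10Gi
```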

Step 4 of 6
How many instances should run?
Running more than one instance means your app stays up if one crashes and can handle more traffic.
Staging (for testing)
Usually 1 is fine — just for checking things work.
Production (live)
At least 2 recommended so your app stays up if one instance has a problem.
Scale automatically under load?
The platform can spin up more instances when your app gets busy, and scale back down when it's quiet.
Fixed number of instances
Always runs exactly the number you chose above.
Yes — scale automatically
Spins up extra instances when busy, scales back down when quiet.
Scale up when busier than
CPU % that triggers adding more instances
Maximum instances (production)
Hard cap — will never exceed this
Advanced: memory & CPU limits
These are estimated automatically from your runtime and backing services. They affect the cost estimate and the Kubernetes resource requests. You can override them here.
CPU reserved auto-estimated
100m = 0.1 of a CPU core. Limit is set to 2× this value.
Memory reserved auto-estimated
256Mi = 256 MB. Limit is set to 2× this value.
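The replica count, CPU threshold, and reserved values above map onto the Deployment's resource block and, when autoscaling is enabled, a HorizontalPodAutoscaler. A rough sketch, with every name and number illustrative rather than the generator's exact output:

```yaml
# Illustrative sketch; names and values are examples only.
resources:
  requests:
    cpu: 100m        # reserved CPU
    memory: 256Mi    # reserved memory
  limits:
    cpu: 200m        # 2x the request
    memory: 512Mi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-api
  minReplicas: 2     # the production instance count chosen above
  maxReplicas: 6     # the hard cap
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # "scale up when busier than" 70%
```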

Step 5 of 6
Where will your app be reachable?
Set the web address for each environment. HTTPS is set up automatically.
Staging URL
Where you'll test before going live
Production URL
Your live, public-facing address
HTTPS certificates are issued automatically by Let's Encrypt on first deployment. No manual setup needed.
Advanced HTTPS settings
TLS secret name
Defaults to {app-name}-tls. Override only if needed.
Certificate issuer
Use letsencrypt-staging to test without rate limits.
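Behind the scenes, automatic HTTPS of this kind is usually wired up with a cert-manager annotation on the Ingress. A sketch under that assumption (hostnames, issuer name, and service port are examples, not the generator's actual values):

```yaml
# Illustrative sketch; hostnames and issuer are examples.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-api
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: my-api-tls   # defaults to {app-name}-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-api
                port:
                  number: 3000
```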

Step 6 of 6
Any environment variables or config?
Non-secret config your app reads at startup. Secrets (passwords, API keys) should be managed separately in a secrets manager.
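Non-secret config of this sort is commonly shipped as a ConfigMap that the Deployment references. A minimal sketch (key names and values are made-up examples; real passwords and API keys stay in a secrets manager):

```yaml
# Illustrative sketch; keys and values are examples only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-api-config
data:
  LOG_LEVEL: info
  API_BASE_URL: https://api.example.com
---
# Referenced from the Deployment's container spec, e.g.:
# envFrom:
#   - configMapRef:
#       name: my-api-config
```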

Estimated monthly cost

OVH Public Cloud · D2-8 node pool · GRA9 · ex. VAT

Staging
/month
Production
/month
Platform overhead — covers your app's share of the shared load balancer, gateway & public IP
10%
Generated files — click any file to preview its contents
Download for:
Bash script for Linux & macOS — run with bash setup-${app}.sh