Build a Docker image from the src/ directory and run it locally or deploy it to Azure Container Apps — the proxy listens on port 443 (HTTP) and exposes a health probe server on port 9000.
TL;DR

- Build from `src/` — the Dockerfile requires `Shared/` and `SimpleL7Proxy/` side-by-side; the build context must be `src/`.
- Probe paths are part of the Host connection string — use `Host1=host=https://api.example.com;probe=/health` (not separate `Probe_path1=` variables).
- Fastest path to Azure: (1) `.azure/setup.sh` (check prerequisites, choose scenario), (2) `azd provision` (create Container App + App Configuration + ACR), (3) `deployment/AppConfiguration/deploy.sh` (seed App Configuration from your config), (4) `.azure/deploy.sh` (build image, push to ACR, update Container App).
| Port | Purpose |
|---|---|
| 443 | Main proxy traffic (HTTP, not HTTPS — TLS is terminated by ACA ingress) |
| 9000 | Health probe server — serves `/liveness`, `/readiness`, `/startup` |
**Note:** Port 443 carries plain HTTP inside the container. TLS termination is done by the Azure Container Apps ingress or an upstream load balancer.
Full end-to-end path with automated scripts:
```bash
# From repo root
.azure/setup.sh
```

What it does:
- Checks prerequisites (azd, Azure CLI)
- Authenticates to Azure and selects subscription
- Guides you through selecting a deployment scenario (local-with-cloud, full-cloud, secure-vnet)
- Initializes AZD environment
Output: a `.azure/.env` file with resource names and the subscription ID.
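Downstream scripts consume that file by sourcing it. A sketch with made-up variable names (the real file's keys may differ):

```shell
# Simulate the generated env file; the keys below are illustrative only.
cat > /tmp/azure-env-sample <<'EOF'
AZURE_SUBSCRIPTION_ID=00000000-0000-0000-0000-000000000000
AZURE_ENV_NAME=dev
EOF
set -a                    # auto-export everything defined while sourcing
. /tmp/azure-env-sample
set +a
echo "$AZURE_ENV_NAME"    # dev
```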
```bash
azd provision
```

What it does:
- Creates resource group, Container App, Azure Container Registry, App Configuration store
- Sets up managed identity for Container App
- Configures networking based on your scenario selection
- Exports deployment variables to `.azure/.env` for downstream scripts
Time: ~5–10 minutes.
```bash
cd deployment/AppConfiguration
cp deploy.parameters.example.sh deploy.parameters.sh
# Edit deploy.parameters.sh with your resource names
./deploy.sh
```

What it does:
- Discovers all publishable settings from `src/SimpleL7Proxy/Config/ProxyConfig.cs` (marked with `[ConfigOption(...)]`)
- Reads current values from the running Container App (or falls back to local shell variables)
- Seeds App Configuration with all discovered keys in both Warm (hot-reload) and Cold (restart) modes
- Publishes them under the prefixes `Warm:*` and `Cold:*` so you can toggle reload behavior instantly from the portal
- Sets up the `Warm:Sentinel` key as the refresh trigger
Parameters required:
- `CONTAINER_APP_NAME`: deployed Container App name
- `CONTAINER_APP_RESOURCE_GROUP`: resource group containing the Container App
- `APPCONFIG_NAME`: App Configuration store name
- `RESOURCE_GROUP`: resource group for App Configuration (usually the same as the Container App's)
- `LOCATION`: Azure region
Output: all settings are now visible in the Azure Portal under App Configuration > Configuration Explorer; operators can modify Warm settings without restarting.
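The discovery step can be pictured as a simple attribute scan. A hypothetical sketch (the real script reads `src/SimpleL7Proxy/Config/ProxyConfig.cs`; the attribute shape and parsing below are assumptions for illustration):

```shell
# Extract setting names from [ConfigOption("...")] attributes in a
# ProxyConfig.cs-style file (sample content; shape is assumed).
cat > /tmp/ProxyConfig.sample.cs <<'EOF'
[ConfigOption("Workers")]   public int Workers { get; set; }
[ConfigOption("Timeout")]   public int Timeout { get; set; }
EOF
sed -n 's/.*\[ConfigOption("\([^"]*\)").*/\1/p' /tmp/ProxyConfig.sample.cs
# prints: Workers, then Timeout, one per line
```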
```bash
.azure/deploy.sh
```

What it does:
- Extracts the Docker image name and version from `src/SimpleL7Proxy/Constants.cs`
- Logs in to ACR (from the AZD environment)
- Builds the Docker image from the `src/` directory (the correct context)
- Pushes the image to ACR with version tags (e.g., `simple-l7-proxy:1.0.0`, `simple-l7-proxy:latest`)
- Optionally applies an environment template (Standard Production, High Performance, Cost Optimized, High Availability)
- Updates the running Container App to the new image
- Restarts the container with the updated configuration
Output: Container App running latest image; all Warm settings hot-reloaded within ~30 seconds; Cold settings require a second restart.
```
.azure/setup.sh
  ↓ (authenticate, select scenario)
azd provision
  ↓ (create resources)
cd deployment/AppConfiguration && ./deploy.sh
  ↓ (seed all proxy settings into App Config)
.azure/deploy.sh
  ↓ (build, push, deploy)
✅ Proxy running in Azure with App Configuration hot-reload enabled
```
Key takeaway: after the first deployment, you can change Warm settings in the Azure Portal without restarting; just update `Warm:Sentinel` to trigger a refresh cycle.
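That refresh can also be triggered from the CLI rather than the portal; a sketch, assuming `$APPCONFIG_NAME` names the store seeded by `deploy.sh` and that `az login` has already run:

```shell
# Any new value works as a sentinel; a timestamp is convenient because it
# never repeats.
SENTINEL=$(date +%s)
echo "Warm:Sentinel -> $SENTINEL"
# Guarded with || true so the sketch is inert where az is unavailable or
# not logged in.
command -v az >/dev/null && \
  az appconfig kv set --name "$APPCONFIG_NAME" \
    --key Warm:Sentinel --value "$SENTINEL" --yes || true
```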
Rule: use `src/SimpleL7Proxy/build.sh` — it handles the correct build context, version extraction from `Constants.cs`, ACR login, and push automatically.
```bash
export ACR=myregistry   # your ACR name, without .azurecr.io
cd src/SimpleL7Proxy
./build.sh
```

The script:
- Reads the version from `Constants.cs` (`VERSION = "..."`) and prefixes `v` if needed
- Logs in to ACR via `az acr login`
- Runs `docker build` from `src/` (the correct context that includes `Shared/`)
- Pushes `$ACR.azurecr.io/myproxy:<version>` to ACR
- Prints the `PROXY_VERSION` export line to paste into `deploy.parameters.sh`
**Note:** `ACR` can also be set in `deployment/proxy-with-sidecar/deploy.parameters.sh` — the script sources it automatically if that file exists.
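The version-extraction step can be sketched as follows (the real `build.sh` may parse `Constants.cs` differently; the file content here is a sample):

```shell
# Read VERSION = "..." from a Constants.cs-style file and prefix "v" if
# it is missing.
cat > /tmp/Constants.sample.cs <<'EOF'
public const string VERSION = "2.2.11";
EOF
VER=$(sed -n 's/.*VERSION = "\([^"]*\)".*/\1/p' /tmp/Constants.sample.cs)
case "$VER" in v*) ;; *) VER="v$VER";; esac
echo "$VER"   # v2.2.11
```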
```bash
# 1. Bump the version in Constants.cs if needed
#    VERSION = "2.x.x"

# 2. Rebuild and push
export ACR=myregistry
cd src/SimpleL7Proxy
./build.sh

# 3. Update the running Container App to the new image
#    (the script prints the exact version — use it below)
az containerapp update \
  --name $ACANAME \
  --resource-group $GROUP \
  --image $ACR.azurecr.io/myproxy:<version>
```

**Tip:** If you deployed with AZD, run `.azure/deploy.sh` instead of `az containerapp update` — it reads the version and environment from `azd env get-values` automatically.
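`azd env get-values` prints dotenv-style `KEY="value"` lines, so scripts can load them with `eval`; a sketch, simulated with a literal so it runs without `azd` (the variable names are made up):

```shell
# In practice: eval "$(azd env get-values)". Simulated here with sample
# output in the same KEY="value" shape.
VALUES='ACR_NAME="myregistry"
CONTAINER_APP_NAME="myproxy"'
eval "$VALUES"
echo "$ACR_NAME $CONTAINER_APP_NAME"   # myregistry myproxy
```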
Only needed if you are not using the sidecar deployment or want a custom image name:
```bash
# Must run from src/ — the Dockerfile references Shared/ at the same level
cd src
docker build -t simplel7proxy:latest -f SimpleL7Proxy/Dockerfile .
```

**Warning:** Running `docker build` from the repository root (not `src/`) will fail — `COPY Shared/` will not resolve.
If Docker is not available locally (corporate restrictions, CI/CD runners, etc.), build directly in Azure Container Registry:
```bash
# From repo root
export ACR=myregistry    # Your ACR name, without .azurecr.io
export VERSION=v2.2.11   # Or read from Constants.cs
az acr build \
  --registry $ACR \
  --image simple-l7-proxy:$VERSION \
  --file src/SimpleL7Proxy/Dockerfile \
  src
```

```bash
docker run -p 8000:443 \
  -e "Host1=host=https://api.example.com;probe=/health" \
  -e "Timeout=2000" \
  -e "Workers=10" \
  simplel7proxy:latest
```

Create `.env`:
```
Host1=host=https://api1.example.com;probe=/health
Host2=host=https://api2.example.com;probe=/health
Workers=20
MaxQueueLength=1000
Timeout=5000
APPINSIGHTS_CONNECTIONSTRING=your-connection-string
```

```bash
docker run -p 8000:443 --env-file .env simplel7proxy:latest
```

```yaml
services:
  proxy:
    build:
      context: ./src
      dockerfile: SimpleL7Proxy/Dockerfile
    ports:
      - "8000:443"
    environment:
      - Host1=host=https://api1.example.com;probe=/health
      - Host2=host=https://api2.example.com;probe=/health
      - Workers=20
      - MaxQueueLength=1000
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/liveness"]
      interval: 30s
      timeout: 10s
      retries: 3
```

```bash
docker compose up -d
```
**Tip:** Health checks pointed at port 443 will always fail — the probe server runs on port 9000. Use `http://localhost:9000/liveness`.
This is the recommended path for provisioning all required Azure resources (ACR, Container Apps environment, managed identity) in one step.
- Azure CLI
- Azure Developer CLI (azd)
- Docker (optional; only needed for local image builds)
- For no-Docker environments, use the remote ACR build workflow in Building without Docker (Remote ACR Build)
Linux/macOS:

```bash
chmod +x ./.azure/setup.sh
./.azure/setup.sh
```

Windows:

```powershell
.\.azure\setup.ps1
```

The setup script asks for:
- Deployment scenario — choose one:
  - `local-proxy-public-apim` — proxy runs locally, backends on public APIM
  - `aca-proxy-public-apim` — proxy deployed as ACA, backends on public APIM
  - `vnet-proxy-deployment` — proxy inside a VNet
- Environment template (optional, see table below)
```bash
azd provision
```

Linux/macOS:

```bash
chmod +x ./.azure/deploy.sh
./.azure/deploy.sh
```

Windows:

```powershell
.\.azure\deploy.ps1
```

The deploy script builds the image, pushes it to the provisioned ACR, and updates the Container App. It reads all variables from `azd env get-values`.
Templates live in .azure/env-templates/. Apply one during setup or when prompted by deploy.sh.
| Template | Best for |
|---|---|
| Standard Production | Balanced performance and cost — good default |
| High Performance | Maximum throughput, higher worker counts |
| Cost Optimized | Minimal resource usage |
| High Availability | Multiple backends, aggressive failover |
| Local Development | `dotnet run` on localhost backends |
| Container Development | Containerized dev/test scenarios |
Use this path when you already have an ACR and Container Apps environment.
```bash
export ACR=<your-acr-name>       # Without .azurecr.io
export GROUP=<resource-group>
export ACENV=<containerapp-env>
export ACANAME=<containerapp-name>
export LOCATION=eastus
```

```bash
# Build from src/ directory
cd src
docker build -t $ACR.azurecr.io/simple-l7-proxy:latest -f SimpleL7Proxy/Dockerfile .
cd ..
az acr login --name $ACR
docker push $ACR.azurecr.io/simple-l7-proxy:latest
```

```bash
az group create --name $GROUP --location $LOCATION
az containerapp env create \
  --name $ACENV \
  --resource-group $GROUP \
  --location $LOCATION
```

A ready-to-use YAML template is at `deployment/containerapp-single.yaml`. Edit the placeholder values, then:
```bash
az containerapp create \
  --name $ACANAME \
  --resource-group $GROUP \
  --yaml deployment/containerapp-single.yaml
```

Or deploy inline with minimal settings:
```bash
az containerapp create \
  --name $ACANAME \
  --resource-group $GROUP \
  --environment $ACENV \
  --image $ACR.azurecr.io/simple-l7-proxy:latest \
  --target-port 443 \
  --ingress external \
  --registry-server $ACR.azurecr.io \
  --registry-identity system \
  --min-replicas 2 --max-replicas 10 \
  --cpu 1.0 --memory 2Gi \
  --env-vars \
    "Host1=host=https://api1.example.com;probe=/health" \
    "Host2=host=https://api2.example.com;probe=/health" \
    "Workers=20" \
    "MaxQueueLength=1000" \
    "Timeout=5000" \
    "APPINSIGHTS_CONNECTIONSTRING=your-connection-string" \
  --query properties.configuration.ingress.fqdn
```
**Note:** Use `--registry-identity system` (managed identity) instead of `--registry-username`/`--registry-password` to avoid storing credentials. Grant the Container App's system identity AcrPull on the registry.
The container exposes a built-in probe server on port 9000. Configure these in your Container App YAML:
```yaml
probes:
  - type: Liveness
    httpGet:
      path: /liveness
      port: 9000
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    failureThreshold: 3
  - type: Readiness
    httpGet:
      path: /readiness
      port: 9000
      scheme: HTTP
    initialDelaySeconds: 5
    periodSeconds: 5
    failureThreshold: 3
  - type: Startup
    httpGet:
      path: /startup
      port: 9000
      scheme: HTTP
    initialDelaySeconds: 0
    periodSeconds: 5
    failureThreshold: 30
```

For deployments where the health probe runs as a dedicated sidecar container alongside the proxy in the same Container App revision, see SIDECAR_DEPLOYMENT.md.
```bash
cd src
docker build -t $ACR.azurecr.io/simple-l7-proxy:v2 -f SimpleL7Proxy/Dockerfile .
docker push $ACR.azurecr.io/simple-l7-proxy:v2
cd ..

az containerapp update \
  --name $ACANAME \
  --resource-group $GROUP \
  --image $ACR.azurecr.io/simple-l7-proxy:v2
```

```bash
# Send 0% traffic to the new revision initially
az containerapp revision copy \
  --name $ACANAME \
  --resource-group $GROUP \
  --image $ACR.azurecr.io/simple-l7-proxy:v2

# Gradually shift traffic
az containerapp ingress traffic set \
  --name $ACANAME --resource-group $GROUP \
  --revision-weight latest=50 previous=50   # replace 'previous' with the prior revision's actual name

# Complete the switch
az containerapp ingress traffic set \
  --name $ACANAME --resource-group $GROUP \
  --revision-weight latest=100
```

```bash
# Stream live logs
az containerapp logs show \
  --name $ACANAME --resource-group $GROUP --follow

# Recent logs
az containerapp logs show \
  --name $ACANAME --resource-group $GROUP --tail 100
```

```bash
az containerapp revision list \
  --name $ACANAME --resource-group $GROUP -o table
```

| Symptom | Likely cause | Fix |
|---|---|---|
| Container fails to start | Missing required env var or bad image | Check `az containerapp logs show` |
| `503 Service Unavailable` | All circuit breakers OPEN | Verify backend hosts are reachable from inside ACA |
| Probe failures, container cycling | Wrong probe port | Ensure probes target port 9000, not 443 |
| `docker build` fails with `COPY Shared/` error | Built from wrong directory | Run `docker build` from `src/`, not repo root |
- BACKEND_HOSTS.md — Host connection string format including probe paths
- CONFIGURATION_SETTINGS.md — All environment variables
- HEALTH_CHECKING.md — Health probe internals
- OBSERVABILITY.md — Application Insights setup
