Linux Consumption is going away, and Microsoft’s migration guidance points at Flex Consumption. But is Flex actually the right destination for your app(s), or are the other paths (Azure Functions on Azure Container Apps, Container Apps Jobs, AKS with KEDA, or plain ACA) worth a closer look?
The retirement notice landed in your inbox last quarter. Apps still running the (already end-of-life) Functions runtime v3 on Linux Consumption stop running entirely on September 30, 2026, and the Linux Consumption plan itself retires on September 30, 2028 (Microsoft Lifecycle). The cost comparison most teams draw up leaves out things that change the answer.
This post walks through the migration paths away from Linux Consumption Functions, and what’s caught up outside the Functions runtime in the meantime.
Acronyms used in this post
- ACA — Azure Container Apps. Microsoft’s managed container platform built on Kubernetes + KEDA, without the Kubernetes itself.
- ACR — Azure Container Registry. Where you push the image ACA pulls.
- KEDA — Kubernetes Event-Driven Autoscaling. The scaler family that watches a queue, HTTP RPS, or a custom metric and tells the platform when to add or remove instances.
- Dapr — A sidecar runtime that gives apps portable APIs for state, pub/sub, bindings, and workflows, regardless of the underlying broker or store.
- DTS — Durable Task Scheduler. Microsoft’s managed orchestration backend for Durable Functions, billed per action.
- OTLP — OpenTelemetry Protocol. The wire format an OpenTelemetry exporter speaks to a collector or backend.
- Isolated worker — The Functions hosting model where your code runs in its own .NET process alongside the Functions host (the only supported model on .NET 10), as opposed to the in-process model where your code loads into the host itself.
A short detour: how we got here
Azure Functions launched in 2016 on the original Consumption plan and quickly became the default answer for what the team called the event-driven story: a queue message lands, a Service Bus event fires, a Cosmos document changes, a Blob is uploaded, a timer ticks, and your code runs in response.
The whole programming model was built around that shape. Triggers handled the wake-up, bindings handled the input/output plumbing (consumer loop, deserialization, checkpointing, poison-message handling, output sinks), and scale-to-zero meant a quiet event source cost nothing between bursts.
The runtime made the rest of the decisions for you: logging, retries, concurrency, lifecycle. You wrote the function body and the platform did the event plumbing. Linux Consumption arrived later as the cross-platform variant of the same event-driven deal.
The landscape moved while Functions stood still on that core pitch. KEDA brought event-driven scaling to Kubernetes and then to Container Apps. Dapr layered portable bindings and pub/sub on top of plain services.
The .NET model itself shifted from in-process to isolated worker, which sands down some of the binding ergonomics Functions used to lead on. None of this kills Functions, but it does mean the question “where should this code run” has more honest answers in 2026 than it did in 2018, and the Linux Consumption retirement is the moment most teams actually have to stop and answer it.
The in-process retirement is the clock that runs first
Two clocks force you off Linux Consumption, and they don’t run at the same speed. The plan retires September 30, 2028, but the in-process .NET model retires on November 10, 2026, the same day .NET 8 hits EOL. In-process only ever supported LTS releases, .NET 8 is the last one, and there is no in-process build of .NET 10 on any plan, Linux or Windows. After November 10, 2026, in-process apps still run, but they get no security updates and no support.
From today (May 2026), that’s roughly six months of breathing room, not the two years the 2028 retirement might suggest. If you’ve already moved to isolated worker on .NET 8, you’ve paid this tax and you can skim the next section. If you haven’t, the rewrite is the immediate problem.
The rewrite, if you haven’t done it
If your code still looks like this, the bill is coming due no matter where you go:
// In-process model, gone in .NET 10
public static class HttpTriggerFunction
{
    [FunctionName(nameof(Run))]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")]
        HttpRequest req,
        ILogger log)
    {
        log.LogInformation("HTTP trigger function processed a request.");
        return new OkObjectResult("ok");
    }
}
What you need to refactor it to looks like this:
// Isolated worker on .NET 10
var builder = FunctionsApplication.CreateBuilder(args);
builder.ConfigureFunctionsWebApplication();
builder.Services
    .AddApplicationInsightsTelemetryWorkerService()
    .ConfigureFunctionsApplicationInsights();
builder.Build().Run();

public class HttpTriggerFunction(ILogger<HttpTriggerFunction> logger)
{
    [Function(nameof(Run))]
    public IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")]
        HttpRequest req)
    {
        logger.LogInformation("HTTP trigger function processed a request.");
        return new OkObjectResult("ok");
    }
}
Small in lines, big in surface area. DI, middleware, startup, logging, and App Insights wiring all change shape. Test fixtures break, integration tests that booted the host need new boilerplate, custom binding extensions have to be re-evaluated. The rewrite alone often takes longer than the “platform” piece of the migration.
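To make the test-fixture point concrete, here's a minimal sketch of what a unit test against the isolated-worker class above can look like, assuming xUnit and the ASP.NET Core abstractions; because the function is a plain class with constructor injection, no Functions host needs to boot:

// Hypothetical xUnit test against the isolated-worker function above.
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging.Abstractions;
using Xunit;

public class HttpTriggerFunctionTests
{
    [Fact]
    public void Run_returns_ok_result()
    {
        // Construct the function directly instead of spinning up the host.
        var function = new HttpTriggerFunction(NullLogger<HttpTriggerFunction>.Instance);

        var result = function.Run(new DefaultHttpContext().Request);

        Assert.IsType<OkObjectResult>(result);
    }
}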
And while Program.cs is open, the destination is also up for grabs
Given the landscape above, the question worth asking while you’re already rewiring DI, middleware, startup, and logging is whether the triggers and Durable orchestrations you still rely on are pulling enough weight to justify the runtime, the host.json, and the Functions upgrade cadence. If your code is dense with [ServiceBusTrigger], [CosmosDBTrigger], and Durable orchestrations, Flex (or Functions-on-ACA if you want the runtime in a container) is the natural destination. If it’s a CRUD HTTP service that picked Functions for the price tag and now owns a host.json it doesn’t really use, the rewrite is the cheapest moment to drop the runtime entirely.
What if we migrate to Flex
If you’re coming from Linux Consumption, the implicit expectation is “same shape, just on the supported plan”: scale to zero, pay per execution, near-zero idle bill, and a cold start you’ve already learned to live with. Run Flex at zero always-ready and you’re roughly in that neighbourhood. The two things that change the math regardless of how you set always-ready are the memory tier and the per-function scaling model.
Always-ready buys warm starts Consumption never offered
Always-ready instances are Flex’s answer to “we want better cold starts than Consumption gave us.” They’re billed by the second whether traffic shows up or not — a continuous floor, not a per-execution charge. That’s a fair price for warm starts, but it’s a new charge, not one that carries over from Consumption. If your old Consumption app papered over cold starts with a keep-warm ping, Flex without always-ready is the apples-to-apples comparison and the bill stays small. (Functions pricing for current rates.)
The memory tier is a concurrency knob, not just a price knob
Linux Consumption gave you 1.5GB per instance and didn’t ask you to think about it. Flex turns instance memory into a choice: 512MB, 2,048MB (the default), or 4,096MB. Pricing scales roughly linearly with the tier, but so does what each instance can absorb: a larger instance handles more concurrent executions and more memory-hungry workloads, so picking 4GB is sometimes a way to reduce total cost by needing fewer instances under load, not increase it.
That makes the knob worth tuning rather than dreading. The traps are at the edges. The 512MB tier is rarely sufficient for anything beyond a thin HTTP shim: an isolated worker process plus the Functions host plus typical dependencies (App Insights, EF Core, a JSON serializer) leaves little headroom on first request, and the failure mode is out-of-memory crashes that don’t reproduce on the dev box. The 4GB tier is sometimes a smaller bill than 2× 2GB depending on your concurrency profile, and sometimes a bigger one. The right answer is to measure your own working set and per-instance concurrency under load rather than trust a number from a blog post.
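If you want that measurement from inside the app while you drive load, one rough sketch is a timer-triggered probe that logs the worker's working set and managed heap. It only sees the worker process, not the Functions host, so treat the numbers as a floor rather than the whole instance; the schedule and names here are illustrative:

// Illustrative memory probe, not a product feature: logs worker-process memory every five minutes.
public class MemoryProbe(ILogger<MemoryProbe> logger)
{
    [Function(nameof(LogMemory))]
    public void LogMemory([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        var gcInfo = GC.GetGCMemoryInfo();
        logger.LogInformation(
            "WorkingSet={WorkingSetMb} MB, ManagedHeap={HeapMb} MB",
            Environment.WorkingSet / (1024 * 1024),
            gcInfo.HeapSizeBytes / (1024 * 1024));
    }
}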
Per-function scale-out is really per-function for most triggers
Per-function scaling is the headline upgrade over Linux Consumption, where the whole app scaled as one unit. The Flex model is more granular than the marketing line suggests. Per the Flex per-function scaling table and the event-driven scaling docs:
- All HTTP (and SignalR) triggers in the app are scaled as one unit on shared instances.
- All Blob (Event Grid) triggers are scaled as one unit on shared instances.
- All Durable Functions triggers (orchestration, activity, entity) are scaled as one unit on shared instances.
- Every other trigger (Service Bus, Storage Queue, Event Hubs, Cosmos, Timer) is scaled per individual function. Two Service Bus triggered functions in the same app run on two separate sets of instances.
New-instance allocation is capped at once per second for HTTP and once every 30 seconds for non-HTTP.
That has consequences for chained workflows. A common shape is HTTP receives a request, drops a message on Service Bus, a Service Bus trigger picks it up, then writes to a Storage Queue for downstream work. Inside one Function app that’s three separate scale units, each with its own cold start, and the non-HTTP hops scaling on a 30-second cadence. Linux Consumption ran all of that in one process, warm together, scaled together. Flex separates the units by design.
For workflows where those triggers really are independent, that isolation is the point. For tightly chained pipelines, it adds latency at each hop and instance count at each unit, both of which need to be on the migration sheet.
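As a sketch of what that looks like in code (queue names and connection settings are placeholders, and the HTTP caller just gets an empty success response here), two of those hops can live in one Flex app and still land in separate scale units:

// One app, two scale units on Flex: the HTTP pool and the Service Bus pool
// scale independently, each with its own cold starts and allocation cadence.
public class OrderPipeline(ILogger<OrderPipeline> logger)
{
    // Scale unit 1: all HTTP triggers in the app share one set of instances.
    [Function(nameof(Receive))]
    [ServiceBusOutput("orders", Connection = "ServiceBusConnection")]
    public string Receive([HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
    {
        logger.LogInformation("Order accepted, handing off to Service Bus.");
        return "order-payload"; // written to the 'orders' queue via the output binding
    }

    // Scale unit 2: this Service Bus trigger scales per function, on the 30-second allocation cadence.
    [Function(nameof(Process))]
    [QueueOutput("downstream")]
    public string Process([ServiceBusTrigger("orders", Connection = "ServiceBusConnection")] string message)
    {
        logger.LogInformation("Processing {Message}", message);
        return message; // handed to a Storage Queue for the next hop
    }
}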
One last thing worth flagging for anyone coming from Consumption: the pricing-relevant configuration (always-ready count, memory tier, per-function maximums) lives in ARM/Bicep, not in host.json. On Consumption, host.json was effectively the whole knob set; on Flex, the bill is set in your infra-as-code instead.
What if we move to ACA, with or without the Azure Functions runtime
Container Apps with the Functions runtime image is the path most teams don’t draw on the whiteboard, and it’s the one that tends to come out ahead once the costs above stop hiding. The behaviour is closer to what you had on Linux Consumption than Flex is: a single Container App scales as one unit, so a chained HTTP → Service Bus → Queue flow in one app stays warm together rather than splitting into separate scale units the way Flex forces. You also get more knobs surfaced where you can see them, instead of behind platform defaults.
You own the container, and that’s a feature
The objections that come up in every discussion are the same three: you have to publish a container image, maintain a Dockerfile, and run an Azure Container Registry.
The answer is shorter than it used to be: containers in 2026 are well-trodden ground. VS Code generates the Dockerfile, Aspire scaffolds the build pipeline, multi-stage builds are a copy-paste away, and ACR + GitHub Actions templates exist for every common shape. The “you’ll have to learn Docker” objection that landed in 2018 lands less in 2026, and the artifact runs the same locally and in production, which solves an entire category of “works on my machine” bugs that Flex doesn’t.
A Functions container for ACA is not exotic. It’s the standard mcr.microsoft.com/azure-functions/dotnet-isolated base image, a multi-stage build to compile and copy the published output, and a couple of AzureWebJobsScriptRoot env vars. Twenty lines of Dockerfile that you’d want for reproducibility regardless.
The bigger upside is what you can put in your container that you can’t put in Flex. Flex inherits the Functions runtime image and that’s it: no apt-get, no custom system packages, no swapping the base for something better suited. Real examples this matters for:
- Playwright / headless browsers: people have been trying to run Playwright on Linux Consumption for years, and the documented experience is sandbox restrictions, missing libnss3 and friends, and weeks of workarounds that break on the next platform update (microsoft/playwright#24455, horihiro/azure-functions-playwright-dotnet5-linux-consumption). On ACA, the Dockerfile starts FROM mcr.microsoft.com/playwright/dotnet and you're done.
- FFmpeg, ImageMagick, LibreOffice, anything native: Microsoft's official Flex pattern for FFmpeg is "upload the binary to an Azure Files SMB share and mount it" (Functions + Azure Files + FFmpeg sample). It works, but it's a workaround for what's one line of apt-get install ffmpeg in a container. Flex's main execution stack is also read-only, so some native binaries segfault even after you mount them.
- Custom system libraries: P/Invoke into a .so you ship works on Linux Functions, but only if every transitive system dependency is already in the platform image. SkiaSharp font rendering, anything pulling libfontconfig, libstdc++ of a specific version, or libnss3 regularly trips on this. On Flex you can't add those libraries; on a container, it's a RUN apt-get install away.
- Newer .NET runtimes: Flex supports a fixed list of language stack versions (currently .NET 8/9/10 isolated, Node 20/22, Python 3.10-3.13, etc.). Want to test a preview SDK or pin a patch version that hasn't reached the platform image yet? The container path supports that the day .NET ships it. Flex catches up later.
Owning the container isn’t a tax. It’s the line that puts “what runs in production” back under your control, instead of waiting for the platform image to catch up to the dependency you needed three quarters ago.
Min-replicas surfaces the cold-start trade-off directly
ACA puts the cold-start trade-off right in the configuration: either you set minReplicas: 0 and accept the cold-start cost, or you set minReplicas: 1+ and pay for the always-warm capacity. The configuration knob is the same as the pricing knob — there's no separate "always-ready" tier dressing it up.
properties:
  configuration:
    activeRevisionsMode: Single
    ingress:
      external: true
      targetPort: 8080
      transport: auto
  template:
    scale:
      minReplicas: 1
      maxReplicas: 10
      rules:
        - name: http-rule
          http:
            metadata:
              concurrentRequests: '50'
    containers:
      - name: my-functions-app
        image: myacr.azurecr.io/my-functions-app:1.0.0
        resources:
          cpu: 0.5
          memory: 1Gi
The idle bill on a single min-1 ACA app is in the same neighbourhood as a Flex 2GB always-ready instance — close enough that pure-idle on one app is a wash. Where ACA’s pricing pulls ahead is at multi-app scale (the free grants are per-subscription, shared across all your ACA apps) and under real traffic (the active/idle two-tier rate compounds when the workload mostly waits and bursts occasionally). Run the numbers against current rates on ACA billing and Functions pricing for your region — both pages move.
KEDA-derived scaling, visible in your Bicep
ACA uses KEDA-derived scalers (Kubernetes Event-Driven Autoscaling — the same scaler family the cloud-native world uses for queues, HTTP, and custom metrics).
It isn’t stock KEDA: not every upstream KEDA scaler is supported, and behaviour like cooldown and polling intervals is platform-managed. But the relevant primitives are there, and the rules live in your Bicep instead of inside the platform. Queue depth, HTTP requests per second, custom metrics — all first-class scale rules, mixable per app rather than shoehorned into “one rule per trigger.”
For workloads with three trigger types sharing code, a single ACA app with one unified scale rule tends to cost less and behave more predictably than the same workload split across three Flex per-function units. And if you actually want the Flex shape (one Container App per trigger, isolated scale, isolated cold starts), you can do that too: split into multiple Container Apps in the same environment. The choice is yours instead of the platform’s.
Container registry, image pulls, and Log Analytics
Three resources ACA breaks out separately so you can size and tune them, instead of folding them into the platform price:
- ACR: three SKUs — Basic for a single small registry, Standard once you need more storage and webhooks, Premium for geo-replication and private endpoints. If you already run an ACR for AKS, Aspire local dev, or another workload, the marginal cost of one more repository is effectively zero.
- Image pull egress: same-region pulls from ACR to ACA don’t incur egress, which covers the common case. Cross-region replicas pay per-GB, so a large base image multiplied by a wide scale event becomes real bandwidth. Pre-pull or use a smaller base if your topology is multi-region. ACA also caches images per node, so the worst case is the first cold replica per node, not every replica.
- Log Analytics ingestion: ACA’s default Container Apps Environment ships logs to a Log Analytics workspace, billed per GB ingested. For chatty Functions apps this can dwarf compute. You can swap to Azure Monitor basic logs (substantially cheaper per GB, with reduced query capability) or a storage account destination, but the default is the expensive one. Configure log levels and retention before traffic starts.
None of these are unique to ACA in cost terms: Flex bills App Insights ingestion per-GB on the same rate card, and any non-trivial Flex deployment ends up running an ACR for shared base images anyway. The three charges look bigger on ACA only because they’re broken out as separate resources you can see and tune, instead of bundled into a platform price you can’t.
The networking story comes built in
ACA bundles VNet integration, managed certificates, custom domains, and ingress in one place. Flex has the same primitives — VNet integration and private endpoints to Storage, Service Bus, and Key Vault are GA and well-documented, and the gap to ACA has narrowed considerably since 2024. The honest difference today is shape rather than capability: Flex networking is configured per-Function-App through a slightly different surface; ACA exposes it through the Container Apps Environment, which other services in the same environment share. If you’re standing up a single Functions app, Flex is fine. If you already operate an ACA Environment for adjacent services, plugging the Functions app into the same VNet posture is the cheaper path measured in engineering hours.
The costs you only see after you’ve shipped
This is the part of the comparison that decides which path is actually cheaper, and it’s the part most migration plans skip because the costs are paid in engineering time rather than on the Azure invoice.
Local dev loop
Flex apps run locally on the Functions Core Tools, same as before. ACA apps run locally as containers, ideally orchestrated by Aspire. The first feels familiar. The second feels like more work for the first week and considerably less work after that, because the local environment matches production.
It’s common to hit a class of bug that only reproduces in the cloud on Flex, because the local Functions host environment is not the Flex environment. ACA teams tend to hit fewer of those because the container is the same artifact in both places.
Observability
Both paths give you Application Insights, and both now support OpenTelemetry. On Flex you opt in by setting "telemetryMode": "OpenTelemetry" in host.json and adding Microsoft.Azure.Functions.Worker.OpenTelemetry in the worker, which gets you correlated host + worker traces and an OTLP exporter to any backend (Use OpenTelemetry with Azure Functions). On ACA you wire OpenTelemetry the same way you would in any .NET service, and the Container Apps environment can also forward platform logs and metrics independently of what the app emits.
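To make the Flex opt-in concrete, the worker half is small; a minimal sketch, assuming the Microsoft.Azure.Functions.Worker.OpenTelemetry package and the OTLP exporter package, paired with "telemetryMode": "OpenTelemetry" in host.json:

// Worker-side OpenTelemetry wiring for the isolated model (sketch; the OTLP endpoint
// comes from the standard OTEL_EXPORTER_OTLP_ENDPOINT environment variable).
var builder = FunctionsApplication.CreateBuilder(args);
builder.ConfigureFunctionsWebApplication();

builder.Services
    .AddOpenTelemetry()
    .UseFunctionsWorkerDefaults()  // correlates worker telemetry with the Functions host
    .UseOtlpExporter();            // or swap in the Azure Monitor exporter for App Insights

builder.Build().Run();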
The honest difference is scope. Flex’s OpenTelemetry support is scoped to what the Functions host and your worker emit. ACA’s is scoped to the whole container plus anything else you run alongside it (a Dapr sidecar, a sampling proxy, a custom OTLP collector). For Functions-shaped workloads both are sufficient. For workloads where Functions is one service among several already on OpenTelemetry, ACA reuses the pattern your other services already follow.
Secrets and identity
Key Vault references work in both. Managed identity works in both. The shape is slightly different: Flex apps use the Functions identity model, ACA apps use the standard managed identity flow that the rest of your Azure stack already uses. The cost is consistency overhead if your team runs non-Functions services alongside, or one extra concept to learn if Functions is your entire universe.
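In code, the standard flow looks the same on either host; a minimal sketch assuming Azure.Identity, the Service Bus SDK, and a role assignment on the namespace (names are placeholders):

// DefaultAzureCredential resolves to the app's managed identity when running in Azure
// and to your developer credentials locally; no connection string in either place.
using Azure.Identity;
using Azure.Messaging.ServiceBus;

var serviceBusClient = new ServiceBusClient(
    "my-namespace.servicebus.windows.net",  // placeholder fully qualified namespace
    new DefaultAzureCredential());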
Engineering hours
This is the big one, and it depends entirely on where your team is starting. For a team that has never run a containerized service, moving to Flex is the lower-effort path. For a team that already ships containers, runs Aspire locally, or has a Dapr or Kubernetes adjacent service in production, the effort flips: ACA reuses what you know, Flex introduces a hosting model you don’t.
The migration documentation defaults to Flex regardless of which bucket you’re in. Audit your own bucket before assuming the documentation’s default matches it.
A worked example
Three apps, three different cost shapes. These are illustrative orders of magnitude, not invoices, and your numbers will differ. The point isn’t the absolute figures, it’s where the money actually goes.
Small HTTP API, low traffic, no networking
A single HTTP function, ten requests per minute during business hours, no private dependencies, sub-second latency tolerated.
| What you pay for | Flex (2GB, 0 always-ready) | ACA (0.5 vCPU, 1GB, min 0) |
|---|---|---|
| Platform baseline | ~$0/mo idle | ~$0/mo idle |
| Per-execution cost | low | low |
| Cold-start latency | 1-2s post-idle (no always-ready) | 5-15s untuned, less with ReadyToRun + smaller image |
| Engineering hours to migrate | low | medium |
| Notes | Workload Flex was built for | Viable, more upfront setup |
If most of your apps are this shape, the Flex default lines up with the workload.
Medium business app, mixed triggers, private dependencies
HTTP + Service Bus queue + timer trigger, private SQL, private Service Bus, 200 requests per second sustained during business hours, sub-300ms p95 latency required (i.e. 95% of requests under 300ms).
| What you pay for | Flex (2GB, 1 always-ready) | ACA (1 vCPU, 2GB, min 1) |
|---|---|---|
| Platform baseline | continuous always-ready bill | continuous min-replica bill |
| Per-execution cost | meaningful | meaningful |
| Networking complexity | high (private endpoints) | low (VNet integrated) |
| Per-trigger warm pool tax | yes | no |
| Engineering hours to migrate | high | medium |
| Notes | Three cost lines compound | One operational concept reused |
This is where the costs add up differently. Flex’s always-ready bill plus the per-function scale-out cost plus the private endpoint configuration time often exceed ACA’s min-replica bill plus the container plumbing.
Spiky workload, deep scale, single trigger
A queue-triggered processor that goes from zero to a thousand concurrent invocations in seconds, then back to zero. Single trigger type. No private dependencies.
| What you pay for | Flex (2GB, 0 always-ready) | ACA (1 vCPU, 2GB, min 0) |
|---|---|---|
| Scale-up cadence | 1 new instance / 30s for queues | KEDA queue-length scaler, similar order |
| Cold-start tax per scale event | low platform cold start | medium, lower with ReadyToRun + smaller image |
| Per-execution cost | low | low |
| Engineering hours to migrate | medium | medium |
| Notes | Fast platform cold-start, fixed allocation cadence | Scale rule is yours to tune, base image is yours to shrink |
Neither plan does zero-to-thousand in seconds for a queue trigger. Flex caps non-HTTP allocation at one new instance per 30 seconds, ACA’s KEDA scaler operates on a similar polling cadence. If your workload genuinely needs sub-second burst-out, both plans want a non-zero baseline (Flex always-ready or ACA min-replicas), and the comparison shifts to the medium-app table above.
If you use Durable Functions, there are two decisions, not one
If your migration touches Durable Functions, untangle two things that the migration documentation tends to bundle together: the runtime rewrite and the backend choice.
The runtime rewrite is its own job. Durable Functions on isolated worker is fully supported, but migrating from in-process is a separate rewrite on top of the regular Functions rewrite. The packages change (Microsoft.Azure.WebJobs.Extensions.DurableTask → Microsoft.DurableTask.*), IDurableOrchestrationContext becomes TaskOrchestrationContext, attribute and binding shapes change, and orchestration state from the in-process model isn’t directly portable to the isolated equivalents. Microsoft’s Durable Functions migration guide walks the API mapping, but plan it as a discrete piece of work, not a side-effect of the .NET 10 isolated-worker move.
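For orientation, the isolated-worker shape after the package swap looks roughly like this; names are illustrative and real orchestrations will be longer:

// Durable Functions on the isolated worker: TaskOrchestrationContext replaces
// IDurableOrchestrationContext, and [Function] replaces [FunctionName].
public static class OrderOrchestration
{
    [Function(nameof(RunOrchestrator))]
    public static async Task<string> RunOrchestrator(
        [OrchestrationTrigger] TaskOrchestrationContext context)
    {
        var confirmation = await context.CallActivityAsync<string>(nameof(ReserveStock), "order-123");
        return confirmation;
    }

    [Function(nameof(ReserveStock))]
    public static string ReserveStock([ActivityTrigger] string orderId)
        => $"reserved:{orderId}";
}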
The backend choice is independent. Once you’re on isolated worker, the Azure Storage backend most Durable apps run on today still works. You don’t have to migrate the backend to migrate the runtime. Microsoft’s recommended forward path is the Durable Task Scheduler (DTS), a managed orchestration backend that replaces Azure Storage, but it’s a recommendation, not a requirement.
DTS is where the cost trap lives. It comes in two SKUs: a Dedicated tier with a flat per-Capacity-Unit monthly floor, and a Consumption tier billed per orchestration action. The Consumption rate looks small until you count actions: starting an orchestration is one action, every activity call is two (one to schedule, one for the result), and timers, external events, sub-orchestrations, and continue-as-new each add their own. A modest fan-out with five activities is 1 + 2 × 5 = 11 actions per run, before timers and events. At 100k runs per month that’s over a million actions, which lands two to three orders of magnitude above what the Azure Storage backend bills for the same workload in storage transactions. Verify current rates on the Durable Task Scheduler pricing page before committing.
For most workloads “isolated worker on Azure Storage backend” is the path that ships first and costs least. DTS earns its keep on specific shapes (heavy fan-out/fan-in, strict ordering guarantees the Storage backend doesn’t enforce, history that’s outgrown a single Storage account’s partitioning, or operational ergonomics like a managed control plane). Run your last 30 days of orchestration history through 1 + 2N + timers + events before you commit.
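A back-of-envelope helper for that check, using the same counting rule as above; it's an estimate, not the billing engine, and it ignores sub-orchestrations and continue-as-new:

// Rough DTS Consumption-tier estimate: 1 start + 2 per activity + timers + external events, per run.
static long EstimateDtsActions(long runsPerMonth, int activitiesPerRun, int timersPerRun = 0, int eventsPerRun = 0)
    => runsPerMonth * (1 + 2L * activitiesPerRun + timersPerRun + eventsPerRun);

// The example from the text: 100k runs with five activities each is ~1.1M actions per month.
Console.WriteLine(EstimateDtsActions(runsPerMonth: 100_000, activitiesPerRun: 5));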
A migration sequence that buys back optionality
The order you make these changes in affects how much of the work is throwaway. Each step either keeps you on a Microsoft-managed runtime image (and you redo the migration when the platform image changes shape) or moves you toward owning the container artifact (and the language-runtime upgrade becomes a Dockerfile bump you control).
This isn’t an argument that ACA is retirement-proof. ACA pushes platform versions too: KEDA bumps, runtime updates, default-image changes. The honest difference is which layer moves under you. On Flex, the platform owns the language stack version, the runtime image, and the deployment shape. On ACA, you own the language stack and the runtime image; the platform still owns the host, scaler, and ingress. That trades one set of upgrade events for another, smaller set.
flowchart TD
    A[".NET in-process"] --> B["Step 1: Isolated worker<br/>still on Linux Consumption"]
    B --> C["Step 2: Containerize<br/>+ run locally"]
    C --> D{"Step 3: Pick<br/>a host"}
    D --> E["Flex Consumption<br/>rezip, re-lock to platform image"]
    D --> G["Functions on ACA<br/>your container, Functions runtime inside"]
    D --> F["Plain ACA<br/>your container, no Functions runtime"]
    G -.->|"later, if you outgrow<br/>the Functions runtime"| F
    style C fill:#f9e2af,stroke:#585b70,color:#1e1e2e,stroke-width:2px
    style G fill:#a6e3a1,stroke:#585b70,color:#1e1e2e
    style F fill:#a6e3a1,stroke:#585b70,color:#1e1e2e
Step 2 is highlighted because it’s where the artifact stops being “a zip the platform unpacks” and starts being “a container you control.” After step 2, the day-one cost of all three step-3 destinations is similar; what differs is which upgrade events you absorb later.
Step 1: Isolated worker, still on Linux Consumption. Don’t change platforms and runtime models in the same PR. Migrate to .NET 10 isolated worker while still on Linux Consumption (or App Service plan, briefly). Ship it. Stabilize. This is the deprecation tax, paid in isolation.
Step 2: Containerize the app. Add a Dockerfile, run it locally, wire Aspire if you haven’t. Verify the container is the same artifact you’d ship anywhere. This step is the inflection point. After it, the artifact is yours: the same image runs locally, in a Functions-on-ACA app, in a plain ACA app, on AKS, on a developer’s laptop, on a self-hosted node. Before it, the artifact is whatever Microsoft’s platform image expects.
Step 3: Pick a host, knowing what each one costs you in optionality.
- Flex Consumption. Path of least change: rezip, redeploy, done. The trade-off is that the runtime version, the host configuration, and the deployment shape stay platform-owned, so the next platform shift (a runtime image change, a host.json schema bump, the next .NET LTS deadline) lands on you in whatever shape Microsoft chose. Pick this if the Functions programming model is exactly what your app wants and you don’t expect to outgrow it.
- Azure Functions on Azure Container Apps. The halfway house. Same container as Flex (you're already there from step 2), same [ServiceBusTrigger] and Durable orchestrations, but the host underneath is ACA: per-app KEDA scale rules, multi-revision deployments, the standard VNet integration story. The win is control over the base image and the host configuration. The cost is two managed surfaces instead of one — when something breaks at the boundary between the Functions runtime and ACA, you have a wider blast radius to debug.
- Plain ACA. Drop the Functions runtime entirely. Your container is just a .NET host (Microsoft.Extensions.Hosting, ASP.NET Core, whatever fits) with KEDA scalers, Service Bus clients, and Cosmos change-feed processors wired explicitly. More code, fewer concepts that only exist inside the Functions runtime, and the upgrade events you track collapse to .NET LTS (which you're tracking anyway) plus ACA platform changes.
The dashed arrow from Functions on ACA to plain ACA is honest about what that move actually is: the container artifact carries over, but every trigger and binding is a rewrite. [ServiceBusTrigger] becomes a ServiceBusProcessor in an IHostedService; [CosmosDBTrigger] becomes a change-feed processor; Durable orchestrations become DTS, hand-rolled state, or a move to a backend that supports workflows. It’s a smaller jump than Functions-on-Linux-Consumption to plain ACA, because you’ve already paid the container tax. It’s not free.
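To size that rewrite, here's a rough sketch of the Service Bus half on plain ACA, assuming Azure.Messaging.ServiceBus and a queue named orders (both placeholders); the processor loop, completion, and error handling the trigger used to own are now explicitly yours:

// Illustrative hosted-service replacement for a [ServiceBusTrigger] function.
using Azure.Messaging.ServiceBus;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public sealed class OrdersProcessor(ServiceBusClient client, ILogger<OrdersProcessor> logger)
    : IHostedService, IAsyncDisposable
{
    private ServiceBusProcessor? _processor;

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        _processor = client.CreateProcessor("orders", new ServiceBusProcessorOptions
        {
            MaxConcurrentCalls = 8  // the concurrency knob host.json used to manage for you
        });

        _processor.ProcessMessageAsync += async args =>
        {
            logger.LogInformation("Processing message {MessageId}", args.Message.MessageId);
            await args.CompleteMessageAsync(args.Message);  // poison handling is also yours now
        };
        _processor.ProcessErrorAsync += args =>
        {
            logger.LogError(args.Exception, "Service Bus processing error");
            return Task.CompletedTask;
        };

        await _processor.StartProcessingAsync(cancellationToken);
    }

    public Task StopAsync(CancellationToken cancellationToken)
        => _processor?.StopProcessingAsync(cancellationToken) ?? Task.CompletedTask;

    public ValueTask DisposeAsync()
        => _processor?.DisposeAsync() ?? ValueTask.CompletedTask;
}

Register it in the Program.cs you already own, along the lines of services.AddSingleton(new ServiceBusClient(...)) plus services.AddHostedService<OrdersProcessor>(), and the KEDA scale rule watches the same queue from the outside.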
Step 4 (if you’re on ACA, either flavour): the Dapr question opens up. Dapr is a runtime that sits as a sidecar (a separate container running alongside yours) and gives your app portable APIs for state, pub/sub, and bindings. On ACA it’s a flag. Workflows are the one gotcha: managed Dapr on ACA doesn’t support the Workflow API yet, so if workflows are why you wanted Dapr, the options are AKS with the Dapr extension, Diagrid Conductor, or Durable Task Scheduler running against your Functions app.
Closing
The clock that runs first is November 10, 2026, when the in-process .NET model retires. That’s the rewrite-shaped event you can’t defer. The 2028 plan retirement is a longer runway, but it’s not the binding constraint.
Three destinations are real today: Flex, Azure Functions on Azure Container Apps, and plain ACA. The migration documentation points at Flex by default. The other two ask for more work up front, in exchange for control over the artifact and the host. Whether that trade is worth it depends on your app: cost shape, trigger mix, networking story, and how much of your stack already runs in containers.
If you do one thing this week, do step 1 in isolation: move the runtime to .NET 10 isolated worker on the platform you already run, ship it, stabilize. The destination decision is easier to make on a stable codebase than on one mid-rewrite. The cost comparison, the container question, and the Durable backend choice all wait for that.