Azure App Service is the PaaS that every .NET shop in the world has at least one of, probably thirty of, and has probably forgotten about two-thirds of. It is still, in 2024, one of the most-used Azure services, and it is also one of the most credential-leaky services by default. Not because the platform is insecure — the platform is mature and has shipped a lot of hardening in the last two years — but because the path of least resistance for deploying an App Service is still publish profiles and FTP, and the defaults are still tuned for "it should just work" rather than "it should be the most secure configuration available."
This post covers the deployment security story for App Service as of late 2024: how code actually gets onto the app, which credentials matter, and the settings that make the difference between a production-grade deployment and an easy target.
The Deployment Paths That Exist
App Service has accumulated deployment paths over the last decade. The currently supported ones are:
- Zip deploy (/api/zipdeploy) — the original modern endpoint. Accepts a zip and expands it onto the filesystem. Synchronous.
- OneDeploy (/api/publish) — the newer endpoint. Supports more artifact types (war, jar, static), better diagnostics, partial deploys. Asynchronous.
- Run-from-package — a mode where the app runs directly from a mounted read-only zip, specified by the WEBSITE_RUN_FROM_PACKAGE app setting.
- Container deploy — for App Service on Linux / Web App for Containers, pull an image from a registry on start.
- FTP/FTPS — legacy, still enabled on many apps.
- WebDeploy (MSDeploy) — legacy, mostly used from Visual Studio.
- Local Git push — legacy, rarely seen in production.
- CI/CD via GitHub Actions or Azure Pipelines — usually wraps zip deploy or container deploy under the hood.
The security differences between these are significant. Zip deploy expands files onto a mutable filesystem, which means a compromised deployment credential can modify individual files on the running app. Run-from-package does not — the package URL changes or the package is replaced atomically, which is both more predictable (no partial state) and harder to tamper with. FTP is cleartext-equivalent from a credential perspective; FTPS is better but still secret-based. WebDeploy has historically been a rich target for attackers because it accepts arbitrary file overwrites and MSBuild transformations.
The first hardening pass is to pick one deployment path and disable the rest. Run-from-package for code-based apps, container deploy for containerized apps, and nothing else.
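A sketch of that first pass with the Azure CLI — resource group and app names here are placeholders, and the second command only applies if a legacy deployment source (local Git, external repo) is still attached:

```shell
# Disable FTP/FTPS entirely (placeholder resource names throughout)
az webapp config set \
  --resource-group my-rg --name my-app \
  --ftps-state Disabled

# Disconnect any legacy local-Git / external-repo deployment source
az webapp deployment source delete \
  --resource-group my-rg --name my-app
```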
Publish Profiles and Basic Auth
The publish profile is the single most-shared credential in the Azure ecosystem. It is an XML file containing two credential sets — one for WebDeploy, one for FTP — scoped to the App Service's SCM site. I have found publish profiles committed to Git, pasted into Slack, mailed as attachments, and uploaded to file-sharing services. Every one of those is a full compromise of the App Service.
Microsoft's position, since 2023, is that basic authentication on the SCM site should be disabled by default on new apps. The setting is scmBasicAuthPublishingDisabled (JSON) or "Basic authentication for SCM Publish" (portal), and the equivalent for the app endpoint is ftpsState: Disabled. Turn both off. There is an Azure Policy built-in for each (mentioned in the Policy post) — deploy the policy at the management group level and let it enforce.
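If you want to flip both switches per app rather than wait for policy remediation, the basicPublishingCredentialsPolicies child resources can be updated directly. A sketch, with placeholder names:

```shell
# Disable basic auth on the SCM (WebDeploy / zip deploy) publishing endpoint
az resource update \
  --resource-group my-rg \
  --namespace Microsoft.Web --parent sites/my-app \
  --resource-type basicPublishingCredentialsPolicies \
  --name scm --set properties.allow=false

# Same for the FTP publishing endpoint
az resource update \
  --resource-group my-rg \
  --namespace Microsoft.Web --parent sites/my-app \
  --resource-type basicPublishingCredentialsPolicies \
  --name ftp --set properties.allow=false
```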
With basic auth disabled, deployments authenticate through Azure AD. The deployment client uses az webapp deploy with an Azure AD token, and the deploying principal's Azure RBAC permissions on the resource are what grant deployment access. That token is short-lived, can be scoped with Conditional Access, and leaves an Azure AD audit trail that a publish profile does not. The migration from publish profiles to Azure AD deploys is a few days of pipeline changes for most organizations.
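In a pipeline, that typically looks like the following sketch — the environment variable names are the convention GitHub's OIDC-based Azure login uses, and would differ in other CI systems:

```shell
# CI identity logs in via OIDC federation — no stored client secret
az login --service-principal \
  --username "$AZURE_CLIENT_ID" \
  --tenant "$AZURE_TENANT_ID" \
  --federated-token "$OIDC_TOKEN"

# Deploy with the resulting Azure AD token; no publish profile involved
az webapp deploy \
  --resource-group my-rg --name my-app \
  --src-path ./app.zip --type zip
```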
Kudu, the SCM Site, and What It Runs
Every App Service has a companion site at <app-name>.scm.azurewebsites.net. This is Kudu — the deployment engine that accepts zip deploys, runs build steps, serves the console and process explorer, and hosts extensions. Kudu is powerful and, historically, has been the surface that attackers pivot through once they have any credential on the app.
Three controls matter on Kudu:
- Disable basic auth (above) so the SCM site only accepts Azure AD.
- Restrict access to the SCM site with access restrictions — the scmIpSecurityRestrictions list. An app whose SCM site is reachable from 0.0.0.0/0 is exposed to exploitation from anywhere; restricting it to corporate IP ranges or Azure DevOps / GitHub Actions runner ranges reduces the attack surface significantly.
- Set scmIpSecurityRestrictionsUseMain to false so the SCM site's restrictions are managed separately from the main site's. This is easy to miss — while the flag is on, the SCM site inherits the main site's restrictions, which are usually more permissive than SCM should be.
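Both controls can be applied from the CLI. A sketch with placeholder names and an example corporate range:

```shell
# Allow only the corporate egress range on the SCM site
az webapp config access-restriction add \
  --resource-group my-rg --name my-app \
  --scm-site true --priority 100 \
  --rule-name corp-egress --action Allow \
  --ip-address 203.0.113.0/24

# Stop the SCM site from inheriting the main site's (looser) rules
az webapp config access-restriction set \
  --resource-group my-rg --name my-app \
  --use-same-restrictions-for-scm-site false
```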
Kudu also supports extensions — small web apps that run inside the SCM site. They are almost never needed and have historically been a source of vulnerabilities; audit the installed site extensions once and remove any your app does not use.
Run-From-Package in Practice
Run-from-package is the deployment mode I recommend for any App Service that can support it. The mechanics: set WEBSITE_RUN_FROM_PACKAGE to a SAS URL pointing at a zip in a storage account, or to 1 if using zip deploy's integrated package mode. The App Service mounts the zip as a read-only filesystem and runs from it.
The security properties are:
- The running filesystem is immutable. Code-level compromises cannot modify the running app.
- Deployments are atomic. No partial state.
- The deployment credential (SAS URL or Azure AD to the storage account) is scoped to the storage container, not to the App Service. Separation of concerns.
- Rollback is trivial — point the setting at a previous package URL.
The tradeoffs are that Kudu-managed build steps (Oryx) are not available, which means the deployment pipeline has to produce a ready-to-run artifact, and some web frameworks that write to the filesystem need their log and temp directories redirected elsewhere (to /tmp or App Service mounted storage).
For the SAS URL path specifically, use stored access policies with a 90-day cap and rotate the policy on a schedule. A SAS URL without an expiry is a long-lived credential; with an expiry and a rotation, it is a rotating shared secret, which is acceptable for this use case.
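A sketch of that pattern — storage account, container, and package names are placeholders, and the expiry date would come from your rotation schedule:

```shell
# Stored access policy capping SAS lifetime; rotate the policy on a schedule
az storage container policy create \
  --account-name mypackages --container-name releases \
  --name deploy-ro --permissions r \
  --expiry 2025-03-01T00:00:00Z

# SAS bound to the policy — revocable by deleting or rotating the policy
SAS=$(az storage blob generate-sas \
  --account-name mypackages --container-name releases \
  --name app-1.4.2.zip --policy-name deploy-ro --output tsv)

# Point the app at the package
az webapp config appsettings set \
  --resource-group my-rg --name my-app \
  --settings WEBSITE_RUN_FROM_PACKAGE="https://mypackages.blob.core.windows.net/releases/app-1.4.2.zip?$SAS"
```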
Slot Swaps and the Settings That Move
App Service deployment slots are one of the best features of the platform and one of the most misconfigured. A slot is a full clone of the app with its own content and configuration; swapping promotes staging content to production without downtime.
The gotcha is that some settings are sticky to the slot and some are not. Sticky settings stay with the slot they were configured on after a swap: the system-assigned managed identity, the resource-level certificates, the inbound IP addresses, the deployment source configuration. Non-sticky settings move with the code during a swap: most app settings and connection strings, unless they are explicitly marked as sticky slot settings.
For supply chain security, the settings you want to be sticky are identity-related — the staging slot has a staging identity, the production slot has a production identity, and a swap does not move the identities. The settings you want to be non-sticky are feature flags and version-specific configuration that should follow the code.
Marking a setting as sticky is a single checkbox in the portal or a property on the ARM resource. There is no default answer for "which settings should be sticky" — it depends on what the setting represents. The discipline is to review stickiness as part of every configuration change review, which does not happen without a process.
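From the CLI, the distinction is which flag the setting is passed under. A sketch with hypothetical setting names:

```shell
# --slot-settings marks these as sticky: they stay with the slot on swap
az webapp config appsettings set \
  --resource-group my-rg --name my-app --slot staging \
  --slot-settings ENVIRONMENT_NAME=staging

# Plain --settings are non-sticky: they follow the code through a swap
az webapp config appsettings set \
  --resource-group my-rg --name my-app --slot staging \
  --settings FEATURE_FLAG_NEW_CHECKOUT=true
```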
Managed Identity on the App and the Credentials It Replaces
Every App Service should have a system-assigned managed identity and should use it for outbound authentication to Azure resources. The combination that matters for supply chain:
- Key Vault references for secrets (@Microsoft.KeyVault(...)) — uses the managed identity to fetch secrets at app-start time, so the secret is never in app settings.
- Identity-based connections to Azure SQL, Storage, Service Bus, Event Hubs — the app gets a token through the identity, no connection string in app settings.
- Outbound calls to custom APIs through Azure AD tokens — the app authenticates as itself to downstream services.
Each of these replaces a long-lived credential with a short-lived token. The app settings become configuration (resource names, endpoints) rather than secrets, which is both more secure and easier to manage.
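The Key Vault reference path, end to end, looks roughly like this — vault, secret, and subscription identifiers are placeholders, and the role assignment assumes the vault uses Azure RBAC rather than access policies:

```shell
# Enable the system-assigned identity
az webapp identity assign \
  --resource-group my-rg --name my-app

# Grant the identity read access to secrets in the vault (RBAC-mode vault)
az role assignment create \
  --assignee "$(az webapp identity show \
      --resource-group my-rg --name my-app \
      --query principalId --output tsv)" \
  --role "Key Vault Secrets User" \
  --scope "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.KeyVault/vaults/my-vault"

# The app setting holds a reference, not the secret itself
az webapp config appsettings set \
  --resource-group my-rg --name my-app \
  --settings SqlPassword='@Microsoft.KeyVault(SecretUri=https://my-vault.vault.azure.net/secrets/SqlPassword/)'
```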
How Safeguard Helps
Safeguard enumerates every App Service in the tenant and reports on the exact configuration drift that produces incidents — basic auth still enabled, FTPS not disabled, SCM IP restrictions permissive, run-from-package not in use, app settings containing connection strings when identity-based alternatives exist. The findings tie back to the policies that should be enforcing them, so "why is this app non-compliant" and "what policy should have caught this" are answered together. For App Service estates that span dozens of subscriptions, the platform produces a concrete hardening backlog instead of a compliance snapshot.