
Bitbucket Status

Operational

Last incident: 4/16/2026

Current Status
Overall Status: Operational
Last Incident: Bitbucket Pipelines degraded performance
Incident Status: Resolved

Recent Incidents

Bitbucket Pipelines degraded performance
4/16/2026, 8:23:10 PM
On April 16 at 8:00 PM UTC, Bitbucket Pipelines users may have experienced performance degradation when running new pipelines. The issue has now been resolved, and the service is operating normally for all affected customers.

Affected Components:

Website
API
Git via SSH
Authentication and user management
Git via HTTPS
Webhooks
Source downloads
Pipelines
Git LFS
Email delivery
Purchasing & Licensing
Signup
Users experiencing issues with login across Atlassian products
4/13/2026, 7:29:45 AM
### Summary

On April 13, 2026, between 05:49 and 06:29 UTC, customers experienced failures when attempting to log in, sign up, reset passwords, and complete multi-factor authentication flows across Atlassian cloud products. Approximately 90% of authentication requests failed during the peak impact window, affecting users in the US East and EU regions. The incident was mitigated within 40 minutes through manual intervention, and full service was restored by 06:29 UTC.

### **IMPACT**

* **Duration**: ~40 minutes (05:49–06:29 UTC, April 13, 2026)
* **Affected regions**: US East and EU (authentication infrastructure serves EU traffic from US East, with traffic primarily from the EU at this time of day).
* **Affected products**: All Atlassian cloud products requiring authentication, including Jira, Confluence, Jira Service Management, and Trello.
* **Customer experience**: Users attempting to log in, sign up, reset passwords, or complete MFA flows received errors. Users already logged in with active sessions were unaffected.

### **ROOT CAUSE**

This incident had several contributing factors that combined to produce a failure the system could not recover from without manual intervention.

**The primary cause** was a recently enabled change that caused our authentication infrastructure to retry requests to a downstream identity service when those requests were slow to respond. This retry behaviour was rolled out to 100% of traffic earlier the same day. Under normal conditions this would be benign, but it meant that any slowness in the downstream service was amplified. Because multiple upstream services were also independently retrying their own failed requests, the amplification compounded into a retry storm.

**The trigger** was a burst of legitimate user traffic. A pattern of many parallel link-preview requests for a single user caused a concentrated load spike on a downstream identity service, pushing its response times above the retry threshold. On its own, this kind of spike had occurred many times before and always recovered. With the retry amplification now in effect, the spike instead created a runaway feedback loop: slow responses caused retries, retries increased load, and increased load caused even slower responses, preventing recovery.

The incident was mitigated by manually scaling up the downstream identity service to provide sufficient capacity to absorb the amplified load. Once scaled, the service recovered immediately, bringing authentication error rates to zero within one minute.

### **REMEDIAL ACTIONS PLAN & NEXT STEPS**

We are taking the following actions designed to prevent recurrence and improve our resilience:

1. **Immediate**: The retry-on-timeout change has been disabled.
2. **Load shedding and self-healing**: We are adding load-shedding capabilities to our authentication services so that they can automatically shed excess load and self-recover during traffic spikes, without requiring manual intervention before automatic scaling takes effect.
3. **Reducing request fan-out**: We are reviewing patterns where a single user action can generate many parallel downstream requests, and will introduce methods where possible to reduce the amplification potential.

We apologize to customers whose services were interrupted by this incident, and we are taking immediate steps to improve the platform's reliability.

Thanks,
Atlassian Customer Support
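As an illustration of the retry-storm dynamic and the mitigation direction described above (a minimal sketch, not Atlassian's implementation; the `RetryBudget` and `call_with_retry` names are hypothetical), the following shows how a shared retry budget with capped, jittered backoff keeps retries from amplifying load on a slow downstream service:

```python
import random
import time

# Hypothetical sketch: a fixed retry-on-timeout multiplies load on a slow
# dependency; a capped, jittered retry gated by a shared budget bounds it.

class RetryBudget:
    """Allows retries only while they remain a small fraction of live traffic."""
    def __init__(self, ratio=0.1):
        self.ratio = ratio      # at most 10% of requests may be retries
        self.requests = 0
        self.retries = 0

    def record_request(self):
        self.requests += 1

    def can_retry(self):
        return self.retries < self.requests * self.ratio

    def record_retry(self):
        self.retries += 1


def call_with_retry(send, budget, attempts=2, base_delay=0.1):
    """Call a downstream service, retrying at most once, only if the shared
    budget allows it, and with jittered backoff to avoid synchronized retries."""
    budget.record_request()
    for attempt in range(attempts):
        try:
            return send()
        except TimeoutError:
            if attempt + 1 >= attempts or not budget.can_retry():
                raise  # shed the request rather than pile more load on a slow service
            budget.record_retry()
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

The design point, under these assumptions, is that retries are dropped once they would exceed a fixed fraction of live traffic, so a slowdown in the downstream service cannot feed back into itself.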

Affected Components:

Website
API
Git via SSH
Authentication and user management
Git via HTTPS
Webhooks
Source downloads
Pipelines
Git LFS
Email delivery
Purchasing & Licensing
Signup
Degraded performance of Bitbucket cloud
3/12/2026, 8:48:06 AM
We have successfully mitigated the incident, and the affected service is now fully operational. Our teams have verified that normal functionality has been restored and the service is performing as expected.

Affected Components:

Website
API
Git via SSH
Authentication and user management
Git via HTTPS
Webhooks
Source downloads
Pipelines
Git LFS
Email delivery
Purchasing & Licensing
Signup
Disrupted Bitbucket availability
3/6/2026, 2:44:40 AM
### Summary

On March 6, 2026, between 02:19 UTC and 04:00 UTC, Bitbucket Cloud experienced an incident impacting the web app, API, CLI, and Pipelines operations. This was caused by the Bitbucket application hitting a regional provisioning API rate limit with our hosting provider, preventing application workers from handling website traffic. The incident was detected within 1 minute by automated monitoring and mitigated by scaling systems down and then back up to full capacity, which put Atlassian systems into a known good state.

### **IMPACT**

Bitbucket Cloud services were unavailable for 1 hour and 6 minutes on March 6, 2026, between 02:19 UTC and 03:25 UTC, followed by degraded website performance until 04:00 UTC. During this time, customers were unable to access Bitbucket services including the web app, Git operations (clone, push, pull over HTTPS and SSH), the API, and running builds in Pipelines.

### **ROOT CAUSE**

The issue stemmed from a change to an internal deployment system that increased use of a platform credential service, hitting a quota with our hosting provider. This blocked Bitbucket services from deploying additional capacity, because new application nodes request credentials from the credential service on startup and were rate limited. This degraded Bitbucket experiences and increased failed requests to Bitbucket Cloud's website and public APIs.

### **REMEDIAL ACTIONS PLAN & NEXT STEPS**

The incident response team manually scaled down Bitbucket services, then gradually scaled them back up while closely monitoring our quota. We simultaneously engaged with our hosting provider to temporarily increase this limit and unblock bringing more Bitbucket service capacity online.

We know that outages impact your productivity. While we have a number of testing and preventative processes in place, Bitbucket services lacked the boundaries necessary to be resilient to upstream platform system changes. To help minimise the impact of breaking changes to our environments, we will implement additional preventative measures such as:

* Improve monitoring of shared Atlassian platform resources.
* Update Bitbucket application bootstrapping to prevent new capacity from failing during resource contention of shared platform services.
* Reduce Bitbucket's dependency on shared hosting provider services.
* Deploy Bitbucket services across multiple regions to reduce single-region failure risk.

We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform's performance and availability.

Thanks,
Atlassian Customer Support
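For illustration only, the sketch below shows one way the bootstrapping hardening listed above could work: a new application node that hits the credential service's rate limit backs off with jitter, and can fall back to a cached credential, instead of failing outright. The function and exception names are hypothetical and are not Bitbucket's actual code.

```python
import random
import time

# Hypothetical sketch: tolerate temporary quota exhaustion on a shared
# credential service during node startup instead of failing the node.

class RateLimitedError(Exception):
    """Raised when the provisioning/credential API returns a rate-limit response."""


def fetch_credentials_on_boot(fetch, cached=None, max_wait=300.0):
    """Obtain credentials at node startup, retrying with capped, jittered
    exponential backoff while the shared credential service is rate limited."""
    delay, waited = 1.0, 0.0
    while True:
        try:
            return fetch()
        except RateLimitedError:
            if cached is not None:
                return cached              # keep the node usable on a last-known credential
            if waited >= max_wait:
                raise                      # give up only after a bounded window
            sleep_for = min(delay, max_wait - waited) * random.uniform(0.5, 1.0)
            time.sleep(sleep_for)
            waited += sleep_for
            delay = min(delay * 2, 60.0)   # exponential backoff, capped at 60s
```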

Affected Components:

Website
API
Git via SSH
Authentication and user management
Git via HTTPS
Webhooks
Source downloads
Pipelines
Git LFS
Email delivery
Purchasing & Licensing
Signup
Disrupted Bitbucket availability in eu-west-1
1/28/2026, 4:49:04 PM
On January 28, 2026, Bitbucket Cloud users in eu-west-1 may have experienced service disruption. The issue has now been resolved, and the service is operating normally for all affected customers.
