Status Checker

Linode Status

Operational

Last incident: 8/1/2025

Current Status
Overall Status: Operational
Last Incident: Email Deliverability issues to Yahoo! addresses
Incident Status: Resolved
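Linode's status page is hosted on Statuspage, which exposes a public JSON API, so a status checker like the one above can be reproduced with a few requests. The sketch below uses the standard Statuspage v2 routes under status.linode.com/api/v2; the field names follow the usual Statuspage schema, but verify them against the live response before relying on this.

```python
# Minimal status-checker sketch against the public Statuspage v2 API
# that backs status.linode.com. Field names follow the standard
# Statuspage schema; verify against the live JSON before relying on it.
import requests

BASE = "https://status.linode.com/api/v2"

def overall_status() -> str:
    # /status.json returns {"status": {"indicator": ..., "description": ...}}
    data = requests.get(f"{BASE}/status.json", timeout=10).json()
    return data["status"]["description"]

def last_incident() -> dict:
    # /incidents.json lists incidents, most recent first
    data = requests.get(f"{BASE}/incidents.json", timeout=10).json()
    incidents = data.get("incidents", [])
    return incidents[0] if incidents else {}

if __name__ == "__main__":
    print("Overall Status:", overall_status())
    incident = last_incident()
    if incident:
        print("Last Incident:", incident["name"])
        print("Incident Status:", incident["status"])
```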

Recent Incidents

Email Deliverability issues to Yahoo! addresses
8/1/2025, 11:12:04 PM
At this time, we believe that the issue with mail deliverability to Yahoo! addresses has been resolved. Customers who are still experiencing issues with logging into Cloud Manager should visit our Contact page to fill out the corresponding form. Our team is also available by phone, 24/7:

- U.S.: 855-454-6633
- Global: +1-609-380-7100
Service Issue - All Services - US-EAST (Newark)
7/27/2025, 10:08:08 AM
On July 27, 2025, at approximately 08:30 UTC, Akamai services in the Newark, NJ (us-east) data center experienced a critical outage due to overheating of infrastructure. This overheating was triggered by a utility power failure at the facility, which resulted in the loss of HVAC functionality. Although the data center itself remained operational, the failure of the cooling system caused elevated temperatures that directly impacted Akamai hardware, leading to the shutdown of core services. The outage affected Linode Compute Instances (commonly referred to as 'Linodes'), Object Storage, NodeBalancers, and Linode Kubernetes Engine (LKE) within the Newark region. Additionally, internal dependencies on Newark infrastructure caused degraded performance for LKE services in other regions, including Dallas, Fremont, Sydney, Tokyo 2, Toronto, and Washington DC.

In response, we initiated mitigation efforts that included replacing failed network hardware, rerouting traffic away from Newark, and migrating impacted workloads to backup systems where possible. Restoration efforts were executed in phases once temperatures stabilized, prioritizing services based on severity and customer impact. While some services began recovering on July 28th, full recovery was completed by July 29th at 16:22 UTC.

We are currently conducting a comprehensive post-incident review to identify opportunities to improve our resilience. This includes auditing cross-region service dependencies that allowed an issue isolated to Newark to affect services elsewhere. We are also evaluating architectural improvements to better isolate services by region and increase fault tolerance. In parallel, we are reviewing our monitoring and alerting systems to improve early detection of cooling-related risks and ensure faster mitigation. Our commitment to transparency, accountability, and service reliability remains steadfast as we work to strengthen our systems and prevent future occurrences.

This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
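During a regional outage like this one, affected regions are also flagged in Linode's public APIv4. As a minimal sketch, assuming the documented GET /v4/regions endpoint and its "status" field ("ok" or "outage"), the snippet below lists any region not reporting "ok"; it is an illustration for readers monitoring impact, not part of the incident report.

```python
# Hedged sketch: list Linode regions whose APIv4 status is not "ok".
# Assumes the documented public endpoint GET https://api.linode.com/v4/regions,
# whose Region objects carry a "status" field ("ok" or "outage").
import requests

def degraded_regions() -> list:
    resp = requests.get("https://api.linode.com/v4/regions", timeout=10)
    resp.raise_for_status()
    return [r for r in resp.json()["data"] if r.get("status") != "ok"]

if __name__ == "__main__":
    down = degraded_regions()
    if not down:
        print("All regions report status: ok")
    for region in down:
        print(f"{region['id']} ({region['label']}): {region['status']}")
```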

Affected Components:

US-East (Newark)
US-East (Newark) Block Storage
US-East (Newark) NodeBalancers
US-East (Newark) Backups
US-East (Newark) Object Storage
US-East (Newark) Linode Kubernetes Engine
US-Central (Dallas) Linode Kubernetes Engine
US-West (Fremont) Linode Kubernetes Engine
Longview
US-IAD (Washington) Linode Kubernetes Engine
CA-Central (Toronto) Linode Kubernetes Engine
AP-Northeast-2 (Tokyo 2) Linode Kubernetes Engine
AP-Southeast (Sydney) Linode Kubernetes Engine
Emerging Service Issue - Object Storage - Chicago
7/24/2025, 8:25:43 PM
This incident has been resolved.

Affected Components:

US-ORD (Chicago) Object Storage
Connectivity Issue - US-MIA (Miami)
7/19/2025, 2:09:50 PM
Starting around 15:20 UTC on July 17, 2025, some edge delivery server sets in the US began experiencing connectivity issues, resulting in availability issues for traffic traversing the affected server sets. Both HTTP and HTTPS server sets were impacted, and approximately 15-20% of the overall traffic on those server sets was affected. Akamai's proactive monitoring system detected this issue, triggered internal alerts, and the appropriate actions were taken. Out of an abundance of caution, Akamai removed the affected server sets from production service. This action was completed at around 17:15 UTC on July 17, 2025. Following these actions, the internal alerts cleared and the impact was mitigated.

Akamai discovered a common error logged on multiple core network devices and escalated it to the third-party router vendor. Upon investigation, it was determined that an invalid route of 0.0.0.0/1 announced by one of Akamai's peers triggered a software defect in the core devices logging the error. Unfortunately, the third-party vendor was unable to provide remediation options beyond upgrades and reloads.

During these remediation upgrades in our Miami datacenter, packet loss was detected when accessing Akamai's Compute site in Miami after an initial device upgrade took place starting around 12:57 UTC on July 19, 2025. As a precautionary measure, emergency actions were taken to complete the remediation upgrade on the remaining Miami devices, which fully returned to service at 16:30 UTC on July 19, 2025. Follow-up investigation later revealed that a collision of maintenance actions in two metros that shared connectivity likely contributed to the loss.

Akamai upgraded the firmware version of all the impacted network routers in phases across the platform. Akamai eliminated the risk of recurrence as of 10:47 UTC on July 29, 2025, after all the impacted network routers were upgraded and rebooted.

To prevent recurrence, Akamai will strengthen the defensive posture of BGP policies and update internal router SQA tests for invalid routes.

This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.
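The root cause here is a classic BGP hygiene problem: a peer announced 0.0.0.0/1, a prefix far too broad to be a legitimate route. As a rough illustration of the kind of defensive import policy the report alludes to, the sketch below rejects announcements that are implausibly broad, overly specific, or inside well-known bogon space. The thresholds and bogon list are illustrative assumptions, not Akamai's actual policy.

```python
# Illustrative sketch of a defensive BGP import filter: reject prefixes
# that are implausibly broad (like the 0.0.0.0/1 announcement in this
# incident), overly specific, or overlapping well-known bogon ranges.
# Thresholds and the bogon list are assumptions for illustration only.
import ipaddress

BOGONS = [
    ipaddress.ip_network(n)
    for n in ("0.0.0.0/8", "10.0.0.0/8", "127.0.0.0/8",
              "169.254.0.0/16", "172.16.0.0/12", "192.168.0.0/16")
]
MIN_PREFIX_LEN = 8   # reject anything broader than a /8 (catches 0.0.0.0/1)
MAX_PREFIX_LEN = 24  # reject anything more specific than a /24

def accept_announcement(prefix: str) -> bool:
    net = ipaddress.ip_network(prefix, strict=False)
    if not MIN_PREFIX_LEN <= net.prefixlen <= MAX_PREFIX_LEN:
        return False
    # Reject routes inside a bogon range, or broad enough to cover one.
    return not any(net.subnet_of(b) or net.supernet_of(b) for b in BOGONS)

# The invalid route from the incident is rejected; a normal route passes.
assert accept_announcement("0.0.0.0/1") is False
assert accept_announcement("203.0.113.0/24") is True
```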

Affected Components:

US-MIA (Miami)
Service Issue - LKE and Object Storage - All Regions
7/17/2025, 8:35:50 PM
On July 17, Akamai began upgrading its internal Kubernetes control plane used to implement several core services of our Compute products. In the course of troubleshooting a problem with the upgrade, our technical team unexpectedly triggered an automatic process that incorrectly rebuilt several resources in active use and critical to the operation of our products. As a result, the following products experienced downtime or degraded performance starting at 19:47 UTC:

- Object Storage: key and bucket management were unavailable via the Linode Cloud Manager and Linode APIv4.
- Object Storage: E2 and E3 endpoints were unavailable in Chicago, Frankfurt, London, Melbourne, Mumbai, Seattle, Singapore, Tokyo, and Washington.
- LKE and LKE-E: the operations to create new clusters or restore clusters from backup were unavailable.
- Image Service: the operations to create and restore images were unavailable.
- Managed Databases: the operations to provision new instances, resize, resume suspended databases, or restore from backup were unavailable. In addition, the automatic recreation of failed nodes in multi-node databases was unavailable.

To address the problem, we initiated a comprehensive process to correctly rebuild the affected resources and to gradually bring systems back into operation. Throughout this process, we conducted extensive testing and validation operations. At approximately 00:20 UTC on July 18, 2025, the following services began to come back online:

- Image Service: full operation restored.
- LKE and LKE-Enterprise: new cluster provisioning available, except in London and Chicago.
- Managed Databases: all operations fully available.

The affected Object Storage E2 and E3 endpoints were re-established at the following hours. It is important to note that no data stored in these Object Storage systems was lost or damaged during this incident.

| Region | Mitigation Time (UTC) | Date |
| --- | --- | --- |
| Melbourne (au-mel) | 22:44 | July 18, 2025 |
| London (gb-lon) | 07:55 | July 19, 2025 |
| Tokyo (jp-tyo-3) | 11:40 | July 19, 2025 |
| Washington (us-iad) | 12:13 | July 19, 2025 |
| Singapore (sg-sin-2) | 12:30 | July 19, 2025 |
| Frankfurt (de-fra-2) | 13:20 | July 19, 2025 |
| Mumbai (in-bom-2) | 13:40 | July 19, 2025 |
| Seattle (us-sea) | 15:14 | July 19, 2025 |
| Chicago (us-ord) | 15:24 | July 19, 2025 |

To prevent a recurrence of this problem, we have frozen changes to the Kubernetes systems involved in this incident and are reviewing all designs and supporting operational processes, with the expectation that we will implement additional safeguards.

This summary provides an overview of our current understanding of the incident, given the information available. Our investigation is ongoing, and any information herein is subject to change.
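For readers who wanted to verify recovery themselves during the rollout, a lightweight probe of the affected endpoints is one way to do it. The sketch below issues HEAD requests against Object Storage endpoints; the hostnames assume Linode's <cluster>.linodeobjects.com naming convention, and the specific cluster IDs are guesses derived from the region codes in the mitigation table above, so verify them against Linode's documentation before use.

```python
# Hedged sketch: probe Object Storage endpoint reachability with HEAD
# requests. Hostnames assume Linode's <cluster>.linodeobjects.com naming;
# the cluster IDs below are guesses derived from the region codes in the
# mitigation table and should be verified against Linode's documentation.
import requests

ASSUMED_CLUSTERS = ["au-mel-1", "gb-lon-1", "jp-tyo-3", "us-iad-1",
                    "sg-sin-2", "de-fra-2", "in-bom-2", "us-sea-1", "us-ord-1"]

def probe(cluster: str) -> str:
    url = f"https://{cluster}.linodeobjects.com/"
    try:
        resp = requests.head(url, timeout=5)
        # Any HTTP response (even 403 on an unauthenticated HEAD) shows
        # the endpoint is up; only a transport error suggests an outage.
        return f"up (HTTP {resp.status_code})"
    except requests.RequestException as exc:
        return f"unreachable ({type(exc).__name__})"

if __name__ == "__main__":
    for cluster in ASSUMED_CLUSTERS:
        print(f"{cluster}: {probe(cluster)}")
```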

Affected Components:

US-East (Newark) Object Storage
US-East (Newark) Linode Kubernetes Engine
US-Central (Dallas) Linode Kubernetes Engine
US-Southeast (Atlanta) Object Storage
US-West (Fremont) Linode Kubernetes Engine
US-IAD (Washington) Object Storage
US-Southeast (Atlanta) Linode Kubernetes Engine
US-ORD (Chicago) Object Storage
EU-Central (Frankfurt) Object Storage
US-IAD (Washington) Linode Kubernetes Engine
AP-South (Singapore) Object Storage
US-ORD (Chicago) Linode Kubernetes Engine
CA-Central (Toronto) Linode Kubernetes Engine
FR-PAR (Paris) Object Storage
EU-West (London) Linode Kubernetes Engine
SE-STO (Stockholm) Object Storage
EU-Central (Frankfurt) Linode Kubernetes Engine
US-SEA (Seattle) Object Storage
FR-PAR (Paris) Linode Kubernetes Engine
JP-OSA (Osaka) Object Storage
AP-South (Singapore) Linode Kubernetes Engine
IN-MAA (Chennai) Object Storage
AP-Northeast-2 (Tokyo 2) Linode Kubernetes Engine
ID-CGK (Jakarta) Object Storage
AP-West (Mumbai) Linode Kubernetes Engine
BR-GRU (São Paulo) Object Storage
AP-Southeast (Sydney) Linode Kubernetes Engine
ES-MAD (Madrid) Object Storage
SE-STO (Stockholm) Linode Kubernetes Engine
US-SEA (Seattle) Linode Kubernetes Engine
JP-OSA (Osaka) Linode Kubernetes Engine
GB-LON (London 2)
IN-MAA (Chennai) Linode Kubernetes Engine
AU-MEL (Melbourne)
ID-CGK (Jakarta) Linode Kubernetes Engine
NL-AMS (Amsterdam) Object Storage
IT-MIL (Milan) Object Storage
BR-GRU (São Paulo) Linode Kubernetes Engine
US-MIA (Miami) Object Storage
NL-AMS (Amsterdam) Linode Kubernetes Engine
US-LAX (Los Angeles) Object Storage
ES-MAD (Madrid) Linode Kubernetes Engine
GB-LON (London 2) Object Storage
IT-MIL (Milan) Linode Kubernetes Engine
AU-MEL (Melbourne) Object Storage
US-MIA (Miami) Linode Kubernetes Engine
IN-BOM-2 (Mumbai 2) Object Storage
US-LAX (Los Angeles) Linode Kubernetes Engine
DE-FRA-2 (Frankfurt 2) Object Storage
GB-LON (London 2) Linode Kubernetes Engine
SG-SIN-2 (Singapore 2) Object Storage
AU-MEL (Melbourne) Linode Kubernetes Engine
JP-TYO-3 (Tokyo 3) Object Storage
IN-BOM-2 (Mumbai 2) Linode Kubernetes Engine
DE-FRA-2 (Frankfurt 2) Linode Kubernetes Engine
SG-SIN-2 (Singapore 2) Linode Kubernetes Engine
JP-TYO-3 (Tokyo 3) Linode Kubernetes Engine
