Linode Status
Operational
Last incident: 9/3/2025
Current Status
Overall Status: Operational
Last Incident: Service Issue - Block Storage (multiple regions)
Incident Status: Resolved
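For a programmatic view of the same information, the sketch below polls the status API that Statuspage-hosted pages such as this one typically expose. The base URL and the /api/v2/status.json and /api/v2/incidents/unresolved.json routes are assumptions based on the standard Statuspage API, not something confirmed by this page.

```python
# Minimal sketch, assuming status.linode.com exposes the standard
# Statuspage API v2 routes (status.json, incidents/unresolved.json).
import json
from urllib.request import urlopen

BASE = "https://status.linode.com/api/v2"  # assumed Statuspage host

def fetch(path: str) -> dict:
    """GET a JSON document from the status API."""
    with urlopen(f"{BASE}/{path}", timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    status = fetch("status.json")
    print("Overall:", status["status"]["description"])
    for incident in fetch("incidents/unresolved.json").get("incidents", []):
        print(f"- {incident['name']} ({incident['status']})")
```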
Recent Incidents
Service Issue - Block Storage (multiple regions)
9/3/2025, 4:05:15 PM
We haven’t observed any additional issues with the Block Storage service in any region, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
Affected Components:
US-East (Newark) Block Storage
US-Central (Dallas) Block Storage
US-West (Fremont) Block Storage
US-Southeast (Atlanta) Block Storage
US-IAD (Washington) Block Storage
US-ORD (Chicago) Block Storage
CA-Central (Toronto) Block Storage
EU-West (London) Block Storage
EU-Central (Frankfurt) Block Storage
FR-PAR (Paris) Block Storage
AP-South (Singapore) Block Storage
AP-Northeast-2 (Tokyo 2) Block Storage
AP-West (Mumbai) Block Storage
AP-Southeast (Sydney) Block Storage
SE-STO (Stockholm) Block Storage
US-SEA (Seattle) Block Storage
JP-OSA (Osaka) Block Storage
IN-MAA (Chennai) Block Storage
BR-GRU (São Paulo) Block Storage
NL-AMS (Amsterdam) Block Storage
ES-MAD (Madrid) Block Storage
IT-MIL (Milan) Block Storage
US-MIA (Miami) Block Storage
ID-CGK (Jakarta) Block Storage
US-LAX (Los Angeles) Block Storage
GB-LON (London 2) Block Storage
AU-MEL (Melbourne) Block Storage
IN-BOM-2 (Mumbai 2) Block Storage
DE-FRA-2 (Frankfurt 2) Block Storage
SG-SIN-2 (Singapore 2) Block Storage
JP-TYO-3 (Tokyo 3) Block Storage
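To verify on your own account that volumes have recovered after a Block Storage incident like the one above, one option is to list them through the Linode API v4 and check the reported status per region. This is a minimal sketch, assuming a personal access token with read access is available in the LINODE_TOKEN environment variable; only the first page of results is handled for brevity.

```python
# Minimal sketch: list Block Storage volumes and their reported status
# via the Linode API v4 (GET /v4/volumes). Assumes a personal access
# token with read access in the LINODE_TOKEN environment variable.
import json
import os
from urllib.request import Request, urlopen

API = "https://api.linode.com/v4/volumes"

def list_volumes(token: str) -> list[dict]:
    req = Request(API, headers={"Authorization": f"Bearer {token}"})
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)["data"]  # first page only, for brevity

if __name__ == "__main__":
    for vol in list_volumes(os.environ["LINODE_TOKEN"]):
        print(f"{vol['region']:>12}  {vol['label']:<24}  {vol['status']}")
```

If you already use the Linode CLI, `linode-cli volumes list` should give a roughly equivalent view.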
Connectivity Issue - All Regions
8/27/2025, 1:10:08 PM
Akamai deployed a configuration change to a router in our Oslo data center (DC) at around 10:30 UTC on August 27, 2025. Following this change, Compute and CDN services across multiple geographic regions began experiencing availability issues because the change created congestion points in the Akamai network. In parallel, Akamai's proactive monitoring detected losses extending beyond the expected site (Oslo DC). We rolled back the configuration change for immediate mitigation; the rollback was completed at 10:38 UTC and all associated internal alerts were cleared by 10:50 UTC on August 27, 2025.
The configuration change was intended to improve monitoring for Akamai's internal systems in Oslo by adjusting the communities tagged on certain prefixes, allowing them to be forwarded to the monitoring system. However, given that this change was non-critical, it has been placed on hold. Normal operations can continue without it, and no functional impact was expected from the rollback.
As previously mentioned, CDN and Cloud services experienced disruptions due to this change. Affected regions included STO, MAA, BOM, SG2, FRA2, AMS, as well as select routes within major European cloud environments. CDN disruptions were most prominent in Australia, though intermittent impact was observed globally, as confirmed by internal monitoring.
At this time, recurrence of the issue is unlikely. A detailed review of the change and its deployment process is underway, including targeted lab testing to determine the root cause. The configuration change will remain paused until a full assessment is completed.
Akamai will continue to investigate the root cause and will take appropriate preventive actions. We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements to our systems in an effort to prevent recurrence.
This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.
Affected Components:
US-East (Newark)
US-Central (Dallas)
US-West (Fremont)
US-Southeast (Atlanta)
US-IAD (Washington)
US-ORD (Chicago)
CA-Central (Toronto)
EU-West (London)
EU-Central (Frankfurt)
FR-PAR (Paris)
AP-South (Singapore)
AP-Northeast-2 (Tokyo 2)
AP-West (Mumbai)
AP-Southeast (Sydney)
SE-STO (Stockholm)
US-SEA (Seattle)
IT-MIL (Milan)
JP-OSA (Osaka)
IN-MAA (Chennai)
ID-CGK (Jakarta)
BR-GRU (São Paulo)
NL-AMS (Amsterdam)
US-MIA (Miami)
US-LAX (Los Angeles)
ES-MAD (Madrid)
AU-MEL (Melbourne)
GB-LON (London 2)
IN-BOM-2 (Mumbai 2)
SG-SIN-2 (Singapore 2)
DE-FRA-2 (Frankfurt 2)
JP-TYO-3 (Tokyo 3)
ZA-JNB (Johannesburg)
NZ-AKL (Auckland)
CO-BOG (Bogota)
US-DEN (Denver)
DE-HAM (Hamburg)
US-HOU (Houston)
MY-KUL (Kuala Lumpur)
FR-MRS (Marseille)
MX-QRO (Queretaro)
CL-SCL (Santiago)
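During a multi-region connectivity event like the one described above, a quick way to tell whether your own path is affected is to time TCP connections to the endpoints your workloads depend on. The sketch below uses only the Python standard library; the target hostnames are examples and should be replaced with the services you actually rely on.

```python
# Minimal sketch: time TCP connections to a few endpoints to spot-check
# reachability during a connectivity incident. The hostnames are
# examples only; substitute the endpoints your workloads depend on.
import socket
import time

TARGETS = [
    ("api.linode.com", 443),
    ("cloud.linode.com", 443),
]

def probe(host: str, port: int, timeout: float = 5.0) -> None:
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            elapsed_ms = (time.monotonic() - start) * 1000
            print(f"{host}:{port}  connected in {elapsed_ms:.1f} ms")
    except OSError as exc:
        print(f"{host}:{port}  FAILED ({exc})")

if __name__ == "__main__":
    for host, port in TARGETS:
        probe(host, port)
```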
Connectivity Issue - ES-MAD (Madrid)
8/26/2025, 6:15:34 PM
We haven’t observed any additional connectivity issues in our Madrid data center, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
Affected Components:
ES-MAD (Madrid)
Service Issue - Linode Cloud Manager, API, and CLI - All Regions
8/26/2025, 3:06:50 PM
We haven't observed any additional issues with the Cloud Manager or API, and will now consider this incident resolved. If you continue to experience issues, please contact us at 855-454-6633 (+1-609-380-7100 Intl.), or send an email to [email protected] for assistance.
Affected Components:
Linode.com
Linode Manager and API
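When the Cloud Manager or API is reported as degraded, it can help to separate "the API is unavailable" from "my token or client is misconfigured" by first hitting a public endpoint. The sketch below calls GET /v4/regions, which the Linode API documents as not requiring authentication; treat the endpoint choice as an assumption if that ever changes.

```python
# Minimal sketch: check that the Linode API is reachable and serving
# JSON by calling a public endpoint (GET /v4/regions is documented as
# not requiring authentication).
import json
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

URL = "https://api.linode.com/v4/regions"

def api_reachable() -> bool:
    try:
        with urlopen(URL, timeout=10) as resp:
            regions = json.load(resp).get("data", [])
            print(f"API OK: HTTP {resp.status}, {len(regions)} regions listed")
            return True
    except (HTTPError, URLError) as exc:
        print(f"API unreachable or returning errors: {exc}")
        return False

if __name__ == "__main__":
    api_reachable()
```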
Connectivity Issue - ES-MAD (Madrid)
8/26/2025, 12:12:38 AM
We have identified an issue with edge delivery where users would have experienced availability issues (5xx responses or connectivity failures). Our investigation determined that the issue occurred in one of our data centers in Madrid (Spain) and was caused by network connectivity issues. The issue started around 22:42 UTC on August 25, 2025, and primarily affected Compute customers, though not exclusively. The affected router links have been isolated from the production service; mitigation was completed by 23:56 UTC on August 25, 2025, and the issue is no longer occurring.
Further investigation indicated that the issue was caused by a third-party service provider's maintenance activity, expected to be non-impacting, that caused multiple link flaps on our routers. We will continue to work with the service provider to determine the root cause and will take appropriate preventive actions. We apologize for the impact this incident may have caused and appreciate your patience and continued support. We remain committed to making continuous improvements to our systems to prevent a recurrence of this issue.
This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.
Affected Components:
ES-MAD (Madrid)