Linode Status
Operational
Last incident: 3/20/2026
Current Status
Overall Status: Operational
Last Incident: Connectivity Issue - Jakarta, ID (id-cgk)
Incident Status: Monitoring
Recent Incidents
Connectivity Issue - Jakarta, ID (id-cgk)
3/20/2026, 11:29:18 AM
At this time we have been able to correct the issues affecting connectivity in our Jakarta, ID data center. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please open a Support ticket for assistance.
Affected Components:
ID-CGK (Jakarta)
AU-MEL (Melbourne) Linode Kubernetes Engine
SG-SIN-2 (Singapore 2) Linode Kubernetes Engine
Service Issue - Linode Kubernetes Engine Enterprise (LKE-E) - Chicago, IL (us-ord)
3/17/2026, 1:04:22 PM
Between approximately 01:00 and 14:12 UTC on March 17, 2026, customers were unable to deploy new Linode Kubernetes Engine Enterprise (LKE-E) clusters in the Chicago (us-ord) region. This was caused by exceeding the allowable number of DNS records in the DNS zone used for provisioning, which prevented the creation of new records required for cluster deployment. Existing LKE-E clusters and standard LKE clusters in the region continued to operate normally.
After receiving the first monitoring alert at 12:00 UTC, we investigated and identified the underlying issue. We found that DNS records associated with deleted LKE-E clusters were not being properly cleaned up, which led to a gradual buildup of unused records. This accumulation eventually reached the limit for the DNS zone, preventing new records from being created and blocking new cluster deployments. Between 13:00 and 13:20 UTC, we deleted approximately 500 obsolete domain records, which relieved the record limit and allowed new clusters to provision successfully. We restored impacted clusters to a healthy state and confirmed that deployments were functioning as expected. After a brief period to monitor these fixes, the incident was considered fully mitigated at 14:12 UTC the same day.
We are continuing to clean up additional obsolete domain records and estimate that about 11,000 records will qualify for deletion based on the number of active clusters. To help prevent similar incidents and ensure reliable cluster provisioning in the future, we are enhancing our record cleanup mechanisms and monitoring.
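The cleanup described above amounts to reconciling the DNS zone against the set of live clusters and deleting any record whose owning cluster no longer exists. A minimal sketch of that reconciliation, with hypothetical names (`find_orphaned_records` and the record/cluster shapes are illustrative, not a real Linode/Akamai API):

```python
def find_orphaned_records(zone_records, active_cluster_ids):
    """Return record names whose owning cluster no longer exists.

    zone_records: iterable of (record_name, cluster_id) pairs
    active_cluster_ids: set of cluster IDs that are still provisioned

    Records left behind by deleted clusters accumulate in the zone;
    once the zone's record limit is hit, new cluster deployments fail.
    """
    return [name for name, cluster_id in zone_records
            if cluster_id not in active_cluster_ids]


# Example: three records, one cluster since deleted -> one orphan to remove.
records = [("api.c1.example", "c1"),
           ("api.c2.example", "c2"),
           ("api.c3.example", "c3")]
active = {"c1", "c3"}
print(find_orphaned_records(records, active))  # ['api.c2.example']
```

In practice the orphan list would be fed to a record-deletion call and the reconciliation run periodically, so unused records are reclaimed long before the zone limit is reached.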
This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
Affected Components:
US-ORD (Chicago) Linode Kubernetes Engine
Emerging Service Issue - Networking in Washington, DC (US-IAD)
3/13/2026, 5:58:43 PM
On March 13, 2026, between approximately 17:35 UTC and 18:54 UTC, our Washington (US-IAD) data center experienced connectivity issues. During this time, customers may have noticed disruptions across all services hosted in this data center.
Our investigation revealed that three servers previously used for testing were released back to inventory without their drives being wiped. One of these servers was sent to the warehouse and later returned to the Washington data center, where it began broadcasting an invalid path, causing the service disruption.
We promptly isolated the affected host, and service began to improve around 18:54 UTC. After thorough monitoring and system checks, we confirmed that the issue was fully resolved by 19:01 UTC.
To help prevent similar issues in the future, we will conduct a detailed investigation to determine why this host was incorrectly configured and why our network propagated the invalid path. Based on our findings, we will implement corrective measures to prevent this type of misconfiguration from recurring.
We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements to make our systems better and prevent recurrence.
This summary provides an overview of our current understanding of the incident, given the information available. Our investigation is ongoing, and any information herein is subject to change.
Affected Components:
US-IAD (Washington)
Service Issue - LKE-Enterprise - JP-OSA (Osaka)
3/13/2026, 12:45:59 AM
Starting around 19:15 UTC on March 12, 2026, customers were unable to create Linode Kubernetes Engine Enterprise (LKE-E) clusters or perform administrative tasks such as LKE-E version upgrades and Control Plane ACL changes in the Osaka data center (JP-OSA). Attempts to deploy LKE-E clusters in this data center stalled indefinitely. Akamai's investigation revealed that the issue began following a phased rollout of the LKE software release in the Osaka data center.
Akamai deployed a software fix, and the impact was mitigated around 01:55 UTC on March 13, 2026. Clusters whose creation had stalled during the impacted window resumed provisioning.
Our subject matter experts will continue to investigate the root cause and will take appropriate preventive actions. We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements to make our systems better and prevent recurrence.
This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.
Affected Components:
JP-OSA (Osaka) Linode Kubernetes Engine
Service Issue - Compute Hosts - IAD (Washington, DC)
3/9/2026, 6:39:01 PM
Beginning at 18:09 UTC on March 9, 2026, approximately 25% of Compute hosts in our IAD3 (Washington, DC) data center experienced degraded network connectivity following the application of an updated routing configuration intended to improve network performance. This updated router configuration had been extensively tested and had been running in production in another region for several weeks without any issues. An investigation revealed that applying the change inadvertently caused a loss of connectivity to these hosts due to a network configuration specific to the IAD3 data center.
This resulted in a loss of access to and control of Linodes and clusters hosted on those machines until the issue was mitigated. Mitigation took longer than expected because the networking nature of the issue degraded internal visibility into the impacted devices, which required additional planning to effectively apply a rollback to the prior configuration.
We successfully rolled back the routing configuration and mitigated the impact at 21:16 UTC on March 9, 2026. To prevent a recurrence of this issue, we are developing a new routing configuration deployment plan that avoids the identified failure modes and others inferred from our observations.
This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
Affected Components:
US-IAD (Washington)