On March 20, 2026, at approximately 10:40 UTC, the Jakarta, ID (id-cgk) data center experienced power fluctuations that caused outages across all services in Jakarta, with downstream impacts to services in the Singapore Expansion, SG (sg-sin-2), and Melbourne, AU (au-mel) regions. The facility switched to backup power, which triggered a simultaneous reboot of all data center network equipment and machines. Although switching to generator power should not have caused disruption, unexpected issues with the uninterruptible power supplies (UPS) led to outages.

The data center’s facilities engineering team identified a transient under-voltage condition (voltage flicker) on the utility power sources of our third-party data center. During this event, two of the data center’s UPS systems did not respond as designed, resulting in a power outage for the affected racks. The local data center team manually bypassed the affected UPS systems, transferring the load directly to generator power. Once utility power stabilized, the site transitioned off generator power and is now working with the UPS system vendor on further analysis.

Content Delivery services experienced some performance degradation but recovered by 11:10 UTC. By 12:00 UTC, physical compute hosts were gradually coming back online, but many Linode instances remained unavailable, resulting in service interruptions for workloads hosted in Jakarta. Ongoing reliance on backup power led to intermittent performance issues until full restoration.
Customers in the Singapore Expansion, SG (sg-sin-2), and Melbourne, AU (au-mel) regions reported issues with Linode Kubernetes Engine (LKE) and Object Storage deployments. Investigation revealed that Object Storage and LKE were impacted in the Jakarta (id-cgk), Singapore Expansion (sg-sin-2), and Melbourne (au-mel) regions, affecting the deployment of new clusters, nodes, and node pools.
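For customers who want to confirm whether a region is healthy before creating new LKE clusters or node pools, the public region list can be queried programmatically. Below is a minimal sketch in Python, assuming the public Linode API v4 /regions endpoint and the region IDs used in this summary; it illustrates a pre-deployment check and is not part of the incident response itself.

```python
import requests

# Regions named in this incident summary (assumption: these IDs match
# the "id" field returned by the Linode API v4 /regions endpoint).
AFFECTED_REGIONS = {"id-cgk", "sg-sin-2", "au-mel"}

def check_region_status() -> dict:
    """Fetch the public region list and return the status of each affected region."""
    resp = requests.get("https://api.linode.com/v4/regions", timeout=10)
    resp.raise_for_status()
    regions = resp.json()["data"]
    # Each region object carries a "status" field; anything other than
    # "ok" suggests deferring deployments or targeting another region.
    return {r["id"]: r["status"] for r in regions if r["id"] in AFFECTED_REGIONS}

if __name__ == "__main__":
    for region_id, status in check_region_status().items():
        print(f"{region_id}: {status}")
```

During normal operation this typically reports "ok" for each region; any other value would be a signal to hold new deployments or select an unaffected region.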
The incident was traced to the transient under-voltage condition on the data center’s utility power sources. Akamai teams prioritized restoring data center networking, followed by Akamai products and services operating in Jakarta, ID. The data center returned to utility power at approximately 13:20 UTC. All services were restored, and the incident transitioned to a monitoring phase at 17:28 UTC. Teams closely monitored all products and services in the affected data center and assessed the likelihood of another power event. With no further issues identified, the incident was considered fully resolved at 18:27 UTC.
Efforts are ongoing to address dependencies between services and data centers to improve resilience and prevent recurrences. The data center operator is also working with its UPS system vendor to further analyze the UPS response during the event. We are committed to making continuous improvements to our systems to prevent similar incidents in the future. We apologize for the impact and thank you for your patience and continued support.
This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.