Block Storage Performance Issues – Newark
Incident Report for Linode
Postmortem

On July 17th, at approximately 8:50 UTC, we became aware of service degradation affecting a portion of Block Storage volumes within our Newark data center. Our administrators identified a system within the affected Block Storage cluster that was causing cascading I/O delays for a section of the cluster. The issue was escalated to our Block Storage team, who removed the faulty system from the cluster, at which point all I/O returned to a normal running state.

Our Operations team will complete an investigation into the specific hardware failure and the associated redundant systems. This investigation will not cause any further impact to service.

Posted Aug 02, 2019 - 19:27 UTC

Resolved
Performance of Block Storage volumes in Newark has remained consistent since we corrected the issue, and we're confident this matter has been resolved.
Posted Jul 17, 2019 - 14:46 UTC
Monitoring
Performance of Block Storage volumes in our Newark data center has returned to normal. We will continue to monitor the situation, and we do not anticipate further issues at this time.
Posted Jul 17, 2019 - 14:14 UTC
Identified
We've identified the cause of this issue, and we're working to restore normal Block Storage service in Newark.
Posted Jul 17, 2019 - 13:35 UTC
Update
Our team is continuing to investigate performance issues with Block Storage in our Newark data center.
Posted Jul 17, 2019 - 11:26 UTC
Investigating
Our team is investigating performance issues with Block Storage in our Newark data center. We will continue to provide updates as the situation develops.
Posted Jul 17, 2019 - 10:11 UTC
This incident affected: Block Storage (US-East (Newark) Block Storage).