Compute Instability
Incident Report for Keen
Resolved
Annnnd query durations are back to where they should be. We experienced a burst of traffic at our load balancers at the same time as a spike in query durations, and together they created a bottleneck in both our read and load balancer tiers, which obviously wasn't great. We were able to shift some traffic in our read cluster to lighten the pressure and are working on adding additional capacity to our load balancer tier as we speak. We apologize again for the inconvenience, and thank you kindly for your patience!
Posted Jun 29, 2017 - 06:55 PDT
Monitoring
Query durations are now back to normal levels and we are continuing to monitor.
Posted Jun 29, 2017 - 06:35 PDT
Identified
We've moved some queries with higher-than-normal durations into isolation to ease the pressure and are seeing quick improvement. More to come.
Posted Jun 29, 2017 - 06:28 PDT
Investigating
We're currently experiencing some compute instability in our read cluster. This instability is resulting in a higher-than-normal rate of 500 errors. We apologize for the inconvenience and will update you as soon as we have additional information.
Posted Jun 29, 2017 - 06:19 PDT
This incident affected: Compute API.