Datastore servicing
Incident Report for Keen
Postmortem

Usage patterns pushed us to capacity, driving up query durations. After some debugging, the offending query pattern ended and durations settled back to normal.

As the event subsided we doubled the capacity of our query processing API and increased backend query workers by 33% in hopes of preventing further slowdowns. We're also continuing to work on internal rate limiting to prevent this from happening again.
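For the curious, the kind of limiter we have in mind would cap how quickly any one source of queries can consume shared capacity. Below is a minimal token-bucket sketch in Python; it is illustrative only, and the class, rates, and per-caller scoping are hypothetical, not a description of our production code.

```python
import time


class TokenBucket:
    """Token-bucket rate limiter: allows `rate` requests per second,
    with short bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.updated = time.monotonic()

    def allow(self) -> bool:
        """Return True if a request may proceed, consuming one token."""
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        elapsed = now - self.updated
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# Hypothetical usage: one bucket per caller, 10 queries/sec, bursts of 20.
limiter = TokenBucket(rate=10, capacity=20)
if limiter.allow():
    pass  # dispatch the query to the backend workers
else:
    pass  # reject (e.g. HTTP 429) so one caller can't saturate capacity
```

In practice a limiter like this would be keyed per project or API key, so that one heavy query pattern is throttled before it can slow everyone else down.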

We apologize for the inconvenience and appreciate you bearing with us during these problems!

Posted Feb 17, 2015 - 16:05 PST

Resolved
Query durations have returned to normal levels. Now that the incident has ended, we are continuing to deploy more capacity in an effort to prevent further slowdowns.
Posted Feb 17, 2015 - 15:03 PST
Update
We are still working on deploying more capacity, as well as researching the root cause of the query slowdowns.
Posted Feb 17, 2015 - 14:44 PST
Update
We're continuing to investigate the root cause and are currently deploying more read capacity to try to bring down overall query durations.
Posted Feb 17, 2015 - 13:45 PST
Update
We are currently experiencing increased response times on our API. The team is working on the issue.
Posted Feb 17, 2015 - 13:12 PST
Investigating
We are making some minor changes to our storage backend, which may cause a brief service disruption. This is expected to be complete within the next 30 minutes.
Posted Feb 17, 2015 - 12:55 PST