We’ve identified endpoints that were not properly rate limited and, when receiving a high volume of traffic, were causing infrastructure issues. We’re working on rolling out better rate limiting coverage to prevent further outages.
Posted Oct 06, 2018 - 07:58 UTC
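Rate limiting coverage like that described above is commonly implemented with a token bucket per client or endpoint. A minimal sketch of the technique (this is illustrative only, not AskNicely's actual implementation; all names are assumptions):

```python
import time

class TokenBucket:
    """Token-bucket limiter: allows `rate` requests/sec with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last call, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit; the caller would typically return HTTP 429

bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
# The first 10 calls pass on the burst allowance; subsequent calls depend on refill timing.
```

Applying a limiter like this in front of the previously uncovered endpoints caps the load any one client can generate, which is the kind of protection the update above refers to.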
We have now resolved this incident and identified the cause. The engineering team is now conducting a postmortem of the event to prevent this from happening in the future.
Posted Oct 05, 2018 - 23:22 UTC
We are continuing to monitor for any further issues.
Posted Oct 05, 2018 - 23:19 UTC
We are now monitoring the situation, and all our monitoring tools are reporting that the system is operating within expected parameters.
Posted Oct 05, 2018 - 19:36 UTC
We have identified the source of the problem that has been causing an exceptionally high load.
Posted Oct 05, 2018 - 19:34 UTC
We have seen some performance issues that are causing some 502 and 504 errors. We are working hard to pinpoint where these are occurring and will update this page as we continue to investigate the root cause. All alert systems are operating as expected, and we are now reviewing our platform monitoring tools.
Posted Oct 05, 2018 - 19:19 UTC
We've rolled out changes to try to resolve issues accessing AskNicely, and we are monitoring the current status.