All Systems Operational
Dashboard - runscope.com: Operational
API Tests: Operational
Runscope URL Gateways - *.runscope.net: Operational
Runscope API - api.runscope.com: Operational
On-premises Agents: Operational
Past Incidents
May 23, 2018

No incidents reported today.

May 22, 2018

No incidents reported.

May 21, 2018

No incidents reported.

May 20, 2018

No incidents reported.

May 19, 2018

No incidents reported.

May 18, 2018

No incidents reported.

May 17, 2018

No incidents reported.

May 16, 2018

No incidents reported.

May 15, 2018

No incidents reported.

May 14, 2018

No incidents reported.

May 13, 2018

No incidents reported.

May 12, 2018

No incidents reported.

May 11, 2018
Resolved - The system has been stable for over 12 hours. We are continuing to work on additional performance and stability improvements, but the main cause of yesterday's downtime has been resolved. If you have any questions, please contact us at help@runscope.com.
May 11, 06:58 PDT
Update - Losing connectivity on such a large portion of the cluster, and the ensuing restoration, has caused delays in processing tests and starting new runs as queue processing becomes saturated in different areas of the test run pipeline. We are working to resolve these bottlenecks as they arise, but you may continue to see occasional issues until a full resolution is in place.
May 10, 17:04 PDT
Identified - Two separate security updates applied to our instances caused them to fail to boot under certain conditions (we are still determining the exact circumstances behind the failed boots). We have reverted or reconfigured the affected hosts to work around the issue, and service levels are returning to normal. We don't expect any regressions, but we are not yet out of the woods. We will keep this incident open while we finalize a resolution.
May 10, 11:26 PDT
Update - We have narrowed down the issue to a few possible causes and are rolling back the associated changes while deploying new instances. As we restore instances you may see some service levels restored to normal, but this process will take some time due to the number of instances affected.
May 10, 10:35 PDT
Update - We are continuing to work with AWS support to pinpoint the source of the issue causing our instances to have connectivity failures after being launched. We are working through different remedies to eliminate possible causes and restore service levels to normal.
May 10, 09:27 PDT
Update - We are continuing to experience issues across our EC2 cluster. Newly created and existing instances are failing connectivity checks. We're working as quickly as we can to restore service.
May 10, 08:15 PDT
Update - We are continuing to restore affected services through new instances and failing over to secondary systems. You may see intermittent availability while services are brought online, but expect continued issues until the instances are fully restored.
May 10, 07:44 PDT
Update - We experienced a sudden loss of multiple instances in our AWS cluster. We are continuing to investigate the cause while restoring instances and failing over to secondary systems.
May 10, 07:27 PDT
Update - The issue is also causing intermittent errors loading the dashboard. We are continuing to investigate.
May 10, 07:17 PDT
Investigating - We are investigating an issue preventing test runs from executing. Updates to follow as we know more.
May 10, 07:14 PDT
May 9, 2018

No incidents reported.