Sunday 17th December 2017

Network Outage

7:30 AM EST: External report received of a high rate of packet loss on the switch side. Investigating.

8:05 AM EST: DoS confirmed. Working with the upstream provider to null-route some of the offending IP addresses and to remove the suspected target from our network.
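For context, a null-route (blackhole route) simply tells a router to silently discard all traffic destined for a given prefix, so attack traffic dies at the provider's edge instead of saturating our link. As a minimal sketch of the idea using Linux iproute2 (the provider applies the equivalent on their edge routers; 192.0.2.10 is a documentation address standing in for the actual target):

    # Silently drop every packet destined for the targeted address.
    ip route add blackhole 192.0.2.10/32

    # Confirm the blackhole route is installed.
    ip route show type blackhole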

8:48 AM EST: The DoS has been contained and network connectivity is operational once again. We will continue to monitor the network over the next 24 hours.
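For those who want to verify from their end, per-hop packet loss toward any reference host can be watched with mtr (192.0.2.1 below is a placeholder for a real upstream hop):

    # 100-probe report; the Loss% column shows per-hop packet loss.
    mtr --report --report-cycles 100 192.0.2.1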

9:07 PM EST: Received word from the data center that the origin was not an external DoS, but an internal one caused by an improperly configured switch on a neighboring subnet:

This problem looks to have been caused by a new customer that moved in today. Their incorrectly configured switch caused a spanning-tree reconvergence, which sends a large amount of traffic across the network. We have moved their uplink to their firewall/router combo, as it should have been from the start, as the permanent resolution. Sorry about the trouble; let us know if you see any other problems.
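For anyone curious how a single customer switch can disrupt a shared network: a spanning-tree reconvergence forces every switch to flush its MAC address tables and flood frames to unknown destinations until the topology stabilizes, which from the inside looks much like a DoS. A common safeguard, sketched below in Cisco IOS syntax purely as an illustration (we don't know exactly what gear the data center runs), is to treat customer-facing ports as edge ports and err-disable any port that emits spanning-tree BPDUs:

    ! Customer-facing access port: no switch should ever hang off this.
    interface GigabitEthernet0/1
     spanning-tree portfast
     spanning-tree bpduguard enable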

Update 1:38 AM EST: The network problem has recurred. Data center network engineers are investigating.

3:19 AM EST: Network engineers are still troubleshooting the root cause.

3:43 AM EST: Network is back up. We will continue to monitor.