We would like to apologise for not informing everyone of the maintenance window between 5pm and 7pm on Tuesday 28 February 2017.

We were moving physical hardware between racks which hosted some of our servers, including knockando, our mail server.


We would like to apologise for the outage affecting various services between approximately 18:15 and 18:45 this evening.

This was caused by a configuration error whilst deploying a private interconnection with a specific customer. Unfortunately a layer-2 forwarding loop was created, which flooded a core network Ethernet segment with broadcast traffic.
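For context on why a layer-2 loop floods a segment so quickly: Ethernet frames carry no hop limit, so wherever spanning tree is absent or misconfigured, every switch re-floods each broadcast copy out of all its other looped links and the copy count multiplies at every forwarding step. A minimal sketch of that growth (the topology and function here are illustrative assumptions, not a description of our network):

```python
# Illustrative only: in an assumed mesh of switches containing a
# forwarding loop, each switch re-floods every broadcast copy out of
# its (inter_switch_links - 1) other looped links, and no TTL field
# ever removes a frame from the segment.

def copies_after(hops, inter_switch_links=3):
    """In-flight copies of a single broadcast frame after `hops`
    forwarding steps around the loop."""
    return (inter_switch_links - 1) ** hops

# A single ARP request becomes over a million in-flight frames within
# twenty forwarding steps:
print(copies_after(20))  # 1048576
```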


There was an unexpected reboot of our switch cluster (Beatrix).

This is now working again.

We will investigate the cause of this and post an update.


Shortly after 18:00 we suffered an interconnectivity glitch lasting a few minutes affecting leased line (fibre and EoFTTC) traffic.
All services are restored and we would like to apologise for the interruption in service.


There was an outage at around 12:30 today, lasting around 10 minutes, in one of our data centres.

This data centre provides connectivity for all leased lines and EoFTTC in our network.

This seems similar to the outage we experienced last week.

This has been fixed and we are currently looking into the exact cause of this.


The TalkTalk DSL connections have gone down again. As soon as we have had an update from our supplier we will post an update.

Update 10:58 – service to the lines has been restored again.


All DSL connections from us using TalkTalk Business circuits are currently down. Our wholesale supplier is investigating – the fault is currently believed to be within TalkTalk’s network. We will publish more information as soon as we have it.

Update 08:50 – our supplier has notified us that the problem is a power issue in Telehouse North. It seems that many providers are severely affected by this, including TalkTalk and BT. There is no ETA yet, but we will pass on any updates as we receive them.

Update 08:55 – our supplier has stated “This has been confirmed as a power issue within Telehouse North affecting multiple areas. We will post a further update by 10:30 BST.”

Update 10:25 – service is being restored to the affected TalkTalk DSL lines. We will be observing the lines to make sure all lines come back up.

Update 10:40 – our supplier has updated us with the following “The suite had power, but at the exact moment of the wider Telehouse power issue today the in-rack UPS unit appears to have failed. We believe a power surge may have damaged the equipment but engineers are looking through the logs to try and understand exactly what happened.”


There was an outage at around 8am this morning, lasting 30 minutes, in one of our data centres.

This data centre provides connectivity for all leased lines and EoFTTC in our network.

This has been fixed and we are currently looking into the exact cause of this.


We apologise to our TalkTalk DSL customers who suffered an outage to their service from approximately 01:50 until approximately 04:10 this morning.

We do know that it wasn’t an interconnectivity problem, so we are seeking an explanation from our TalkTalk wholesale supplier and will post an update when we know more.

We apologise for any inconvenience this may have caused.


Starting at approximately 19:55 this evening, our main link to Amsterdam started showing high latency and packet loss. This is considerably more disruptive than a complete outage, as the auto-failover mechanisms don’t kick in.
All traffic is currently manually routed over the backup link. A ticket has been raised with the link supplier.
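To illustrate why a degraded link is harder to handle than a dead one (a hedged sketch — the probe logic below is an assumption for illustration, not our actual monitoring): a monitor that declares a link failed only when probes stop answering entirely will keep a lossy, high-latency link in service, which is why manual rerouting was needed here.

```python
# Assumed, simplified failover check: only total probe loss triggers
# automatic failover, so a degraded link (e.g. 60% loss at 900 ms)
# is left carrying traffic and must be rerouted by hand.

def should_fail_over(probe_success_rate, latency_ms):
    """Naive policy: fail over only when the link looks completely dead."""
    return probe_success_rate == 0.0

print(should_fail_over(0.4, 900))  # degraded link -> False (no failover)
print(should_fail_over(0.0, 1))    # dead link -> True (failover)
```

A more robust policy would also compare loss and latency against thresholds, at the cost of occasional spurious failovers.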

Update at 21:22 – our monitoring is showing the main link to be behaving normally once again. We will leave the manual route override in place until we have heard from the supplier what has been done.

Update at 22:05 – An attempt to put the main link back in service was not successful, suggesting the link behaviour is changing with traffic. We remain on the backup link for now. Further investigations will continue.

We apologise for the outage, and also for the slow posting of the initial update – investigation of the problem took priority.

Update at 10:05 on 25/05/16 – The main link was successfully re-enabled at 09:30 this morning and has been behaving normally since.

The fault is believed to be with the supplier who routes traffic between our datacentre and the science park in Amsterdam. We have requested an explanation and will update this status with anything we receive.


We are aware that our link to Amsterdam had some glitches overnight, which resulted in a series of short outages of a few minutes each; the backup connection kicked in automatically as intended. However, because the outages occurred within minutes of each other and the backup takes a minute or so to kick in each time, the glitches were more noticeable. The link has been stable since 04:00 this morning.

We believe the cause of the glitches was down to a problem with one of our interconnectivity suppliers.

We apologise for any inconvenience this may have caused.


Update: We have received notification that the connectivity provider in question is changing a switch.

At around 14:15 there appears to have been an issue with a connectivity provider in Telehouse that affected DSL connections and some routes between our data centres.

The issue appears to have been resolved, and customers should already have seen, or will shortly see, their connections restored.

We will post an update when we are aware of the cause.

We apologise to those customers affected.
