
Reason for Outage (RFO): Network Connectivity Incident – 5 February 2026

Scheduled on: 05/02/2026 16:00:00 | Status: Resolved (Fault / Issue) | Estimated finish: 05/02/2026 16:40:00

Reason for Outage (RFO)

Date: 5th February 2026
Outage Time: 16:00–16:40
Location: Blynx AIM Ltd – Saxon House Data Centre
Affected Services: IP connectivity for IP ranges originating via legacy Juniper network



Summary

A network outage occurred due to a broadcast storm originating from legacy Juniper network equipment inherited from Safe Hosts Internet LLP. The storm interfered with BGP sessions between the legacy Juniper network and the new Arista network, causing multiple IP ranges to become unreachable.



Timeline (GMT)
• 15:45 – A configuration change was applied to an internal device and tested successfully
• 16:00 – Network monitoring alerts triggered for connectivity issues on several systems
• 16:05 – Engineers began investigation; recent changes were reverted
• 16:15 – Network services had not recovered following the rollback
• 16:20 – A second engineer joined the investigation and was briefed
• 16:24 – A broadcast storm was identified originating from the legacy Juniper network and traversing into the Arista network
• 16:28 – Corrective changes were applied; the broadcast storm began to subside
• 16:35 – BGP sessions between the Juniper and Arista networks stabilised and affected IP ranges were re-announced
• 16:40 – All network operations confirmed fully restored



Root Cause

A broadcast storm generated within the legacy Juniper network propagated into the new Arista network. This caused disruption to BGP sessions responsible for announcing IP ranges that still originated from the Juniper infrastructure, resulting in loss of external connectivity for those IPs.
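As an illustration only (this is not the exact change our engineers applied, and the interface names are hypothetical), broadcast storms of this kind are typically contained with per-interface storm-control thresholds on the interconnect between the two networks. On the Arista platform the configuration takes roughly this form:

```
interface Ethernet10
   description Interconnect to legacy Juniper network (hypothetical port)
   ! Cap broadcast and multicast traffic at 1% of interface bandwidth;
   ! excess frames are dropped rather than flooded onward
   storm-control broadcast level 1
   storm-control multicast level 1
```

Capping broadcast traffic at a small fraction of link bandwidth on the boundary interface prevents a storm on one side from saturating the interconnect, which is what starves BGP keepalive traffic and causes sessions to drop.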



Resolution
• Broadcast storm source identified and isolated
• Corrective configuration applied to stop propagation into the Arista network
• BGP sessions recovered and IP announcements restored
• Network stability fully confirmed



Preventative Actions / Future Remediation
• The legacy Juniper network is already largely decommissioned
• A formal status update will be issued confirming that during the weekend of 7–8 March, the remaining Juniper network routers will be fully removed
• Connections currently terminating on the Juniper routers will be redirected directly to the Arista equipment, removing an unnecessary layer of complexity and legacy configuration that has contributed to network issues over the past 8–12 months
• Complete removal will eliminate remaining interoperability risks and legacy-related issues
• All core and high-level routing functions will then be handled by the Arista platform, which is designed to support:
  - Multi-terabit transit capacity
  - Up to 400 Gbit/s internal network connectivity



Closing Statement

We apologise for any disruption caused and thank customers for their patience. Blynx AIM Ltd has been working continuously to resolve legacy infrastructure issues inherited from Safe Hosts Internet LLP. This incident relates to the final components of the old Juniper network, which is now scheduled for complete removal, ensuring a more resilient, higher-capacity network moving forward.

Engineers continue to monitor the network closely and remain on standby should any further issues arise.

Kind regards,
NOC Team
Blynx AIM Ltd.

Incident Update: Brief Interruption to B Feed Power Supply

Scheduled on: 28/01/2026 14:35:00 | Status: In-Progress | Estimated finish: 08/02/2026 23:59:00

Incident Summary
Yesterday, we experienced a brief interruption affecting the B Feed power supply within the data hall.

Initial Assessment
Initial inspection suggested a potential failure of a Power Distribution Unit (PDU) within the data hall, which was believed to have caused an overload on part of the B Feed circuit.

Further Investigation
Following a more detailed review of electrical and UPS logs, we have confirmed that the root cause was a brief interruption in the incoming electricity supply to the data centre, commonly referred to as a brownout.

A brownout is a very short-duration voltage drop and is often only noticeable as a brief flicker of lighting, after which normal voltage levels return.

UPS Behaviour
During this event:
- The A Feed UPS operated as expected and successfully carried the data centre load.
- The B Feed UPS, however, did not fully sustain the load during the brownout, resulting in a brief interruption to the B Feed supply affecting sections of the data hall.

Current Status & Actions
- A UPS service engineer has been engaged and will attend site to inspect the B Feed UPS.
- The purpose of this visit is to establish why the UPS did not maintain load during the brief supply disturbance and to carry out any required remedial work.
- Until this investigation and any necessary repairs are completed, the B Feed power supply is being treated as “at risk.”

Customer Impact & Guidance
- Customers with single-powered equipment connected only to the B Feed should consider this equipment at risk.
- If you would like your equipment's power feed to be migrated from the B Feed to the A Feed in a controlled manner, please contact us and we will arrange an agreed change window.

Apology & Monitoring
We apologise for any inconvenience caused. Our engineering team continues to monitor all systems closely to ensure any further impact is kept to an absolute minimum.

If you have any questions or would like to discuss mitigation options, please do not hesitate to get in touch.

Kind regards,
Data Centre Operations
Blynx AIM Ltd

Informational Notice: A-Feed PLC Power Supply Issue (No Service Impact)

Scheduled on: 03/01/2026 18:28:00 | Status: In-Progress | Estimated finish: 08/01/2026 13:00:00

Dear Customer,

This notice is to inform you of an at-risk condition identified today, 3 January 2026, relating to one of the PLC controllers associated with our A-feed switchgear.

18:28 – An issue was detected affecting the electronic power supply device serving the A-feed PLC controller.
18:35 – Engineers were on site and investigating.
18:55 – The root cause was identified as a failed electronic power supply device.

The B-feed remains stable and fully operational. As a precautionary measure, signalling from the B-feed generator has been configured to operate the A-feed, and both generators are capable of supplying the A-feed if required. The affected PLC controller also powers up correctly once generator power is present.

A replacement component has been ordered, and an engineer is scheduled to attend on Monday to install the part and return the system to normal configuration.

There is no significant risk to customer services, and this notification is issued for information purposes only.

Further confirmation will be provided once the replacement has been installed.

Kind regards,
Operations Team
Blynx AIM Ltd.
