
Lessons Learned: How Eindhoven University of Technology Faced a Major Cyberattack



How did you first discover the attack? What were the initial signs?
We first discovered it through the University’s security monitoring system, which alerted us to the installation of remote access tooling on the domain controllers. This happened around 9:30 PM on a Saturday evening. It was quite clear that this wasn’t a planned change from our team – that’s when we knew something was wrong.
As soon as the alert came in, our team of experts began investigating and remediating the alerts. It very quickly became obvious that the criminals had gained high-privileged access. I think within the first hour, we realized we needed to disconnect the network from the internet.
We were still battling with the criminals for about two hours after the initial alerts. During this time, we needed to take decisive action to regain control. We made the decision to completely disconnect the network at 1 AM on Sunday morning. So there were approximately three and a half hours between when we were first notified of the attack and when we actually disconnected.
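For readers who want a concrete picture, an alert like the one described above typically comes from watching for unexpected service installations on sensitive hosts such as domain controllers. The following is a minimal, hypothetical sketch of that idea, not TU/e's actual detection logic; it assumes Windows System log events (Event ID 7045, "a service was installed in the system") have already been exported to JSON lines, and the host names and keyword list are placeholders.

```python
# Hypothetical sketch: flag new service installations on domain controllers
# that look like remote access tooling. Assumes Windows System log events
# (Event ID 7045) were exported to a JSON lines file; field names and the
# keyword/host lists below are illustrative placeholders.
import json

SUSPICIOUS_KEYWORDS = ["anydesk", "screenconnect", "teamviewer", "atera"]
DOMAIN_CONTROLLERS = {"dc01.example.edu", "dc02.example.edu"}  # placeholder hosts

def flag_service_installs(path):
    alerts = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            event = json.loads(line)
            if event.get("EventID") != 7045:
                continue  # only "service installed" events
            host = event.get("Computer", "").lower()
            image = event.get("ImagePath", "").lower()
            name = event.get("ServiceName", "").lower()
            if host in DOMAIN_CONTROLLERS and any(
                k in image or k in name for k in SUSPICIOUS_KEYWORDS
            ):
                alerts.append(event)
    return alerts

if __name__ == "__main__":
    for hit in flag_service_installs("system_events.jsonl"):
        print(f"ALERT: service '{hit.get('ServiceName')}' installed on {hit.get('Computer')}")
```

In practice a rule like this would live in a SIEM or EDR platform rather than a standalone script, but the principle is the same: remote access tooling appearing on a domain controller is almost never a planned change.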
Once you discovered the scale of the attack, what were your first priorities from both a technical and crisis management perspective?
From a technical perspective, as soon as you discover something like this, you try to assess how extensive it is and what you need to do to quickly mitigate the damage. Within a few hours, we understood that this was too big to contain while staying online. They had compromised our domain controllers, which basically meant they had access to everything in our environment.
We were still actively battling with them during this period. We figured out that if we continued the way we were going, we wouldn’t be able to gain the upper hand and win the battle. That’s why we ultimately decided to disconnect the network. Of course, we also had to ensure that we could still access our own network to begin recovery work. In our case, regaining control meant completely disconnecting from the internet.
From a crisis management perspective, as soon as it became clear that this was a major crisis requiring the highest level of response, I was informed as the CISO within the first hour. The IT department was fully mobilized during the night. While we were in the process of disconnecting the university from the network, we knew we had a major incident on our hands. We immediately started preparations for our central crisis team to convene on Sunday morning.
So essentially, you have two parallel responsibilities: one is the technical response – trying to get the situation under control. The other is crisis management – assessing the full impact on the university and coordinating the response. We knew students would need to study for exams that were scheduled for the following week. They would need to understand that they wouldn’t be able to access the network. We knew there would be a lot of speculation and guessing unless we communicated clearly about what was happening.
How did you organize the crisis team? Who was involved, and how did you coordinate between internal and external partners?
From a technical point of view, we had an external partner who helped us with the analysis and response. They arrived on campus at 3 AM on Sunday morning and were basically embedded with us throughout the incident.
Our crisis organization operates at multiple levels. Within the IT department, we have a Crisis Resolution Team that handles the technical perspective of the situation. This team coordinates the technical resolution while other teams within IT focus on their specific restoration activities. Once you understand what needs to be restored and brought back online, all these teams activate to help restore systems. Then we have a Crisis Management Team, which coordinates the crisis throughout the IT department and communicates with the Central Crisis Team.
The Central Crisis Team at the university level manages all the other areas affected – everything from exam schedules to external communications. They coordinate the broader organizational response beyond just the technical aspects.
In terms of coordination, we maintained regular discussions and updates across all these teams. We had clearly defined roles and responsibilities. I personally kept my fellow CISOs at other Dutch universities informed about our situation. We also coordinated closely with SURF – the collaborative organization for ICT in Dutch education and research – keeping them informed about the technical aspects of the attack.
At the executive board level, we informed the Ministry of Education in the Netherlands and the education inspection authorities. We maintained communication with all relevant government agencies throughout the incident.
Can you tell us about your communication strategy during the crisis?
Our Central Crisis Team convened at 9 AM on Sunday morning. Starting around noon on Sunday, we began our regular communication cadence. Every day at the end of the afternoon, we would provide an update about the status and what people could expect for the next day – whether they could study or work normally.
On Sunday and Monday, we communicated that there would be no network or services the next day. On Tuesday, we communicated that there wouldn’t be any network connectivity available for at least a week. The key was to give everyone clear, consistent messages. You don’t want people constantly speculating or feeling anxious about whether they can work or study tomorrow. By providing daily updates at a predictable time, we helped reduce uncertainty.
We ultimately decided to postpone all exams by one week, giving everyone adequate time to prepare once systems were restored.
What was the attack vector? How did the criminals gain access, and what tactics did they use?
The attackers basically relied on three main techniques.
First, they used credentials that were found on the dark web – usernames and passwords from students and employees that had been compromised in previous breaches elsewhere. These users had already been identified as having leaked credentials and had been instructed to update their passwords, but a configuration issue allowed them to simply reuse the same password.
Second, they used these leaked credentials to access our VPN connection, which at that time did not yet have multi-factor authentication. MFA implementation was actually already scheduled, planned for before the summer.
Once they were on the network, they began reconnaissance, looking for environments where they could escalate privileges. They were able to exploit legacy protocols that we needed to maintain for compatibility with older software systems in our environment. Using these protocols, they were able to gain administrative access to critical systems.
We discovered the full attack chain on Monday, after we had shut down the systems and were able to conduct a thorough investigation. That’s when we understood exactly how they had compromised our environment.
After shutting down the systems, which services did you prioritize for restoration? What was your recovery strategy?
We were able to restore to a point from early Saturday morning. We had to carefully verify that our backups were clean – we confirmed that the Saturday morning backup was made before any compromise, while the backups from Sunday and Monday were potentially infected.
We used that clean Saturday backup to restore our domain controller environment. But before turning anything back on, we had to ensure everything was properly secured.
In total, besides the domain controllers, we had 14 servers that needed to be completely rebuilt and restored. Around 77 servers in our infrastructure showed some level of attacker activity, but most of those only had login attempts with no other malicious activity. We had to scan all of them thoroughly to ensure they were clean before bringing them back online.
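As a generic illustration of that kind of verification pass (not the university's or its forensic partner's actual tooling), a sweep might hash files on each restored server and compare them against indicators of compromise supplied by the investigators. The hash list and scan root below are placeholders.

```python
# Generic illustration of an indicator-of-compromise (IOC) sweep run before a
# server is brought back online. The IOC hashes and scan root are placeholders;
# real investigations rely on the forensic partner's tooling and indicators.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    # placeholder hashes provided by the incident response team
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def sweep(root: str):
    findings = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                if sha256_of(path) in KNOWN_BAD_SHA256:
                    findings.append(path)
            except OSError:
                pass  # unreadable file; in practice, log it and review manually
    return findings

if __name__ == "__main__":
    for hit in sweep("C:\\"):  # placeholder scan root
        print(f"IOC match: {hit}")
```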
What security measures did you implement to strengthen resilience against future attacks?
We identified and fixed several vulnerabilities. First, we corrected the misconfiguration in our password reset procedure. Now when people reset their passwords, the system actively checks to ensure they’re not reusing old passwords that may have been compromised.
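To illustrate the principle (this is a sketch, not the university's actual implementation), a reset flow can reject both reused passwords and passwords that appear in known breaches, for example via the public Have I Been Pwned "Pwned Passwords" range API:

```python
# Sketch of a breached-password check at reset time, using the public
# Have I Been Pwned "Pwned Passwords" range API (k-anonymity: only the first
# five characters of the SHA-1 hash leave the machine). Illustrative only;
# a real system would also use a proper password hashing scheme for history.
import hashlib
import urllib.request

def is_breached(password: str) -> bool:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-check-sketch"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read().decode("utf-8")
    # Response is lines of "HASH_SUFFIX:COUNT" for every hash sharing the prefix.
    return any(line.split(":")[0] == suffix for line in body.splitlines())

def validate_new_password(new_password: str, previous_hashes: set) -> None:
    """Reject reuse of an old password or any password seen in known breaches."""
    new_hash = hashlib.sha256(new_password.encode("utf-8")).hexdigest()
    if new_hash in previous_hashes:
        raise ValueError("New password must differ from previously used passwords.")
    if is_breached(new_password):
        raise ValueError("This password appears in known data breaches; choose another.")
```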
Second, we fully implemented multi-factor authentication on our VPN solution. We also hardened our systems to prevent the use of legacy authentication protocols wherever possible.
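Before disabling legacy authentication outright, it helps to measure how much it is still used. The sketch below is a hypothetical example of such an audit, assuming successful logon events (Windows Event ID 4624) have been exported to JSON lines; the field names follow the standard 4624 event schema.

```python
# Rough sketch: count logons that still use NTLM (especially NTLMv1) from
# exported Windows Security events (Event ID 4624), to see what would break
# before legacy authentication protocols are disabled. The JSON-lines export
# format is an assumption of this example.
import json
from collections import Counter

def audit_legacy_auth(path: str) -> Counter:
    usage = Counter()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            event = json.loads(line)
            if event.get("EventID") != 4624:
                continue  # only successful logon events
            if event.get("AuthenticationPackageName", "") == "NTLM":
                version = event.get("LmPackageName") or "NTLM (version unknown)"
                usage[(version, event.get("WorkstationName", "?"))] += 1
    return usage

if __name__ == "__main__":
    for (version, workstation), count in audit_legacy_auth("security_4624.jsonl").most_common():
        print(f"{version:20s} from {workstation}: {count} logons")
```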
Additionally, we accelerated the implementation of several other security measures that were already on our roadmap. The incident allowed us to fast-track these improvements and implement them with greater urgency than originally planned.
What advice would you give to CISOs at other universities based on your experience?
I think there are two main pieces of advice I would offer.
First, be very cautious about allowing legacy systems and protocols on your network. Yes, you may need them for compatibility with older systems – we certainly did. But looking back, we should have implemented better compensating controls.
Second, and this is critical: regularly exercise your cyber crisis organization. Conduct drills at all levels – from the technical response team to the crisis management team to the central crisis leadership. Make sure everyone has clear responsibilities and knows the escalation procedures. When something actually happens, these exercises pay off enormously.
The regular training really helped us because everyone knew what to do. People knew who was responsible for what, who would chair which meetings, and how the whole process would work. Even under extreme pressure, people knew their roles because we had practiced.
Take the exercises seriously, even if it’s “just a drill.” You learn how to handle the pressure, make decisions when you don’t have all the information, and work effectively when time is critical.
Is there anything else you’d like to add?
Maybe one other thing that I think went well is that during the crisis, we were very aware of the human side. We placed emphasis on the well-being of the people who were actually working on resolving the crisis, to ensure they were okay and not being crushed by the pressure.
From Sunday onwards, when we were in control, we were able to prioritize this. From a personal perspective, we made sure that people took their rest time during the crisis. Because in the end, it’s people who are actually doing this work. They can only perform well if they’re getting rest.