What are blue and red teams and penetration tests
Many people who work in cybersecurity are familiar with red and blue teams.
The blue team is responsible for defending an organisation from potential cybersecurity threats. This team typically consists of security professionals such as security operations centre (SOC) engineers, incident response engineers, and security-focused DevOps engineers. The blue team’s main focus is to identify potential security risks, implement safeguards to prevent attacks, and monitor systems for any suspicious activity.
On the other hand, the red team is a group of security experts that simulate real-world attacks on an organisation’s systems and networks. The goal of the red team is to test the effectiveness of an organisation’s security measures and identify any vulnerabilities and weaknesses that could be exploited by malicious actors. The red team uses a variety of techniques, including social engineering and vulnerability exploitation, to try to breach an organisation’s defence layers.
Red team activity and penetration testing are largely similar. However, there are differences between the two.
Red teams typically focus on simulating full-scale attacks on an organisation’s systems, including social, physical, and cyber elements. This involves using a wide range of techniques and approaches to try to breach an organisation’s defences, with the goal of testing the effectiveness of their security measures and ability to detect intrusions.
In contrast, penetration testers typically focus on a specific aspect of an organisation’s security, such as testing the security of a particular network or system. This typically involves using a more targeted set of techniques to try to breach the target systems’ defences, with the goal of identifying specific vulnerabilities that can be addressed.
The engagement process for both typically involves the following steps:
- Security management of the organisation defines the scope of the offensive operation;
- The offensive testing party gathers information;
- The offensive testing party performs the penetration test;
- The offensive testing party identifies vulnerabilities;
- The offensive testing party analyses the results and reports its findings;
- The blue team of the organisation remediates vulnerabilities.
Although this process works, it has a few shortcomings.
The offensive team often fails to give and collect feedback from the owners of the systems, so deviations from the scope, and even bypasses, can go undetected. This means real attackers could skip entire parts of the systems and human operations and still reach their targets. This is known as the “door on a clear field” problem.
Another problem is the length of the cycle: the organisation’s security always lags behind its other business processes, which exposes the company to the additional risk of missing newly developed attacks and vulnerabilities. This especially concerns the IoT industry.
Furthermore, there is often a lack of cooperation between the teams due to the difference in KPIs for red and blue teams. While red teams are praised for their achievements in breaking into the limited scope, blue teams deal with great uncertainty because their area of responsibility is much broader. As a result, blue teams are more often subject to management misjudgement and, at the same time, tend to react inadequately to security incidents, assuming they are the result of red team actions.
The concept of a “purple team” was developed to tackle these problems.
The purple colour appears when you mix blue and red. The same applies to purple teams. It is not a team per se but rather a communication and process framework between at least red and blue teams. But security architecture and security management teams are often involved as well.
One of the key features of purple teams is that the blue team is involved in attack planning and actively participates in the testing process. This allows the blue team to gain valuable insights into potential attacks and to improve their defences.
Purple teams often use attack trees or maps to plan their engagements and focus on business assets or values, such as intellectual property, customer data, and money. Because they are not restricted by scope, purple teams can explore shorter routes to their targets, even if this means skipping some parts of the system or exploiting human behaviour. This makes it easier to detect and eliminate real-world hacking scenarios.
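The attack-tree idea above can be illustrated with a minimal sketch. The tree below is entirely hypothetical (the node names and effort costs are made up for illustration): OR nodes let the attacker pick the cheapest option, AND nodes require every child step, and the cheapest path to the root suggests the route a real attacker would likely prefer, which is where the purple team focuses first.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A node in a simple attack tree (hypothetical example)."""
    name: str
    kind: str = "leaf"              # "leaf", "or", or "and"
    cost: float = 0.0               # effort estimate for leaf steps
    children: List["Node"] = field(default_factory=list)

def cheapest(node: Node) -> float:
    """Minimum total attacker effort needed to achieve this node's goal."""
    if node.kind == "leaf":
        return node.cost
    child_costs = [cheapest(c) for c in node.children]
    # OR: attacker picks the cheapest option; AND: all steps are required.
    return min(child_costs) if node.kind == "or" else sum(child_costs)

# Hypothetical goal: steal customer data via a network route OR a human route.
tree = Node("steal customer data", "or", children=[
    Node("network route", "and", children=[
        Node("exploit VPN flaw", cost=8.0),
        Node("escalate privileges", cost=5.0),
    ]),
    Node("human route", "and", children=[
        Node("phish an employee", cost=3.0),
        Node("reuse stolen credentials", cost=2.0),
    ]),
])

print(cheapest(tree))  # the human route wins: 3.0 + 2.0 = 5.0
```

In this toy model the human route (effort 5.0) beats the network route (effort 13.0), so the purple team would prioritise phishing awareness and credential hygiene over network hardening for this particular asset.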
Purple teams blur specialisation by cross-training red and blue team members to focus on each other’s areas of expertise. This is beneficial even for organisations that only have blue team members, as it makes the blue team aware of potential attacks and enables them to plan their defences.
The constant simultaneous engagement of both offensive and defensive teams in purple teams shortens the reaction time to new threats and attack techniques and makes testing more specific to the subject matter.
The way purple teams plan their engagements leads to less biased risk measurement and mitigation prioritisation.
Shorter cycles with intense involvement from both teams improve the blue team’s ability to monitor incidents and predict and solve potential problems, sometimes as early as the development phase. Blue teams are typically more relaxed, cooperative, and open to improvement when working in a purple team.
Finally, this approach helps security management to structure and document systems, and extract the most value out of the activities of the red team.
Where purple teams are a better fit
Generally speaking, purple teams work well in most environments, but they can provide the most value in setups with agile life cycles and short development and deployment cycles. Purple teams are the best option when continuous security improvement is a key to business success.
Implementing purple teams in regulated business environments can be challenging. It is still possible and beneficial, but it may require adapting to security management programs and regulatory requirements. Implementing purple teams in large environments, while incredibly beneficial, is also complex. It may require a proportionally large team with a greater focus on the blue side.
Additionally, the way purple teams typically structure their engagements can be difficult to fit into traditional security risk measures, such as risk matrices, and may require a different approach.
If you are interested in learning more, please don’t hesitate to contact us.