Cybersecurity Maturity Models: essential tools or useless burden?
TL;DR
A cybersecurity maturity assessment typically serves as the starting point for more complex and structured development plans. However, the concept is less straightforward than it seems at first glance: different models blend “capability”, “capacity”, and “sophistication” without precisely indicating what can be considered “mature”.
On top of that, the standard approach to cybersecurity assessment does not resonate with technical and operational staff, creating scepticism that is compounded by the high costs these assessments usually entail.
However, there are strategies that countries and governments can implement to increase their capacity to run assessments and to run them more effectively, including a more strategic approach to data collection and use, the adoption of new technologies, and a new paradigm for involving stakeholders.
Why Maturity Models are crucial
In every aspect of human life, the first step toward progress is to understand the starting point. This principle also applies to national cybersecurity. Countries that set out to achieve specific goals (such as those outlined in a national cybersecurity strategy) require an understanding of the status quo to weigh their options: what are the strengths and weaknesses of the country? Who are the relevant stakeholders or the national champions who can help? What constraints could they face? Cybersecurity maturity assessments are one of the most effective ways to address these questions.
The term “cybersecurity maturity assessment” represents an abstract concept that provides little guidance on the practical steps necessary to perform it. To fill this gap, various entities and organisations have created Cybersecurity Maturity Models (CMMs). CMMs are the methodologies and practical approaches used to plan, design, and conduct a cybersecurity assessment. They provide policymakers and national leaders with a tool to take a systematic and replicable snapshot of the status quo, and they are the necessary first step for any country interested in improving its cybersecurity while optimising the use of its resources.
Moreover, cybersecurity – especially at the national level – is a complex endeavour that requires coordination across multiple domains and stakeholders. Without proper tools and mechanisms, cybersecurity becomes a fragmented set of activities, making it challenging to achieve the identified goals and leading to inefficiencies and wasted resources. CMMs transform this fragmented set of activities into an evidence-based, staged improvement programme. This helps with prioritisation, budgeting, donor coordination, regulatory readiness, and public accountability, and it gives stakeholders a frame of reference they can use to measure progress over the years.
Additionally, CMMs provide a shared vocabulary across line ministries, regulators, critical sectors, donors, and other stakeholders. Once a country has adopted a CMM, coordination across relevant stakeholders becomes easier.
What is to be assessed? Scoping maturity
Maturity is not always a well-defined concept. This happens for at least two reasons. First, maturity does not exist in itself, but is always linked to other criteria. Indeed, when a CMM is deployed, it is first necessary to clarify what the object of assessment is, as different objects may have varying levels of maturity.
There is no standardised list of objects of a cybersecurity maturity assessment, but we can attempt to draft a list of typical ones:
- Capacity: the maximum potential throughput of a process. It represents what a given subject can achieve when it focuses its efforts and resources. For instance, a security operations team might have the capacity to monitor up to 2,000 devices in real time.
- Capability: the perimeter of possible action. It represents the sum of knowledge, expertise, and the practical limits of a subject. For instance, a security operations team might only be capable of monitoring laptops and mobile devices, but not industrial OT systems.
- Process sophistication: a measure of how advanced a given process is, usually ranging from ad-hoc, unstructured processes up to data-driven, optimised ones. This can be a relatively vague concept, but it remains a key component of early CMMs and is still largely in use. CMMs often refer to process sophistication as “capability”. This is not wrong, since capability is one of the aspects that contribute to the sophistication of a process. However, the two do not entirely overlap, and, therefore, we prefer to keep them separate here at XP.
Different models approach this differently. However, in general, readily available models tend to focus on a blend of capability and process sophistication.
The second reason why maturity is not a well-defined concept is the difficulty of defining what a mature entity is and what it looks like. This is highly challenging, as it can be interpreted differently depending on the context. For instance, a key aspect typically measured in maturity models is the presence of a legal framework. However, in some contexts, customary rules may have an impact equal to, if not stronger than, black-letter rules. This is one of the direct consequences of CMMs being input-based rather than output-based. CMMs do not approach maturity as the measurable outcome of an input (e.g., “the country managed to lower cybercrime by 10% in 5 years”), but as a static picture of commitment that is assumed to automatically produce outcomes (e.g., “the country has published a cybercrime law and, therefore, it is reasonable to assume that cybercrime will go down”).
The lack of quantifiable indicators has been a challenge that the industry has been addressing over the last few years. Still, to date, reliable quantitative indicators in cyber capacity building have not been established.
Which assessment models are out there?
The market for CMMs in national cybersecurity is not particularly large, but it does have some prominent options.
- Oxford CMM: One of the most widely used CMMs. Although not a standard, many consider it the de facto standard approach. Created by the University of Oxford, which retains the rights to its use, it covers many aspects of national cybersecurity.
- World Bank SCMM: It adopts an approach similar to the Oxford CMM, enriched with considerations from the academic field of complex systems analysis to address the specificities of sub-national assessments.
- Potomac CRI: Developed by the Potomac Institute, it focuses on the readiness of a country as an indicator of maturity. Despite its different focus, this model has many overlaps with the others and can therefore be considered a comparable option. The model is no longer being updated, but it remains valid.
In addition to these, indices and benchmarks can also be considered. These are outputs that compare different subjects, often to create a ranking. Indices are usually built on assessments conducted for specific countries. Thus, while they serve a different objective, it is possible to include indices in the discussion about CMMs. Two of the most well-known ones in national cybersecurity are:
- ITU GCI: One of the oldest – if not the oldest – structured approaches to measuring cybersecurity comprehensively at the national level. It is based on a questionnaire submitted periodically to all UN Member States. The exact questions and evaluation methods have changed over time, but it remains one of the most highly regarded and authoritative sources for understanding how countries are maturing in cybersecurity.
- eGA NCSI: One of the leading alternatives to the GCI. It covers different facets of national cybersecurity and presents them in an easy-to-use online dashboard.
- Other custom-made CMMs: Beyond the public models and indices above, consulting firms, NGOs, and other organisations have developed their own CMMs, either by tailoring existing models or by creating them from scratch. These are usually proprietary tools and thus not readily available.
Different models adopt different approaches: some focus more on desktop research and evidence collection, while others prefer an interview-based approach. Some are designed for national perimeters, others for sectoral ones, and yet others for organisational assessments. Some adopt a more granular approach, while others remain at a higher level. Understanding which approach is best requires a prior analysis of the context, including the scope of the assessment, the number of stakeholders to be involved, the complexity of the institutional architecture and, of course, the resources available to complete the assessment.
When maturity assessment models fall short
There are several issues with almost all of the maturity models mentioned in this article.
First, they measure inputs but are not suited to measuring outputs. We mentioned this point above. The problem is that, by measuring inputs only, assessors work on the assumption that the input being measured, as indicated in their assessment model, will have the intended effect regardless of the circumstances. We can all appreciate how this line of thinking is flawed. However, this is not entirely the assessors' fault. Unfortunately, cybersecurity is a domain where performance data is still rarely collected or available. KPIs are still in an early stage of development, and cybersecurity experts are often not equipped to design them appropriately (especially in cybersecurity governance and policymaking; the technical and operational levels are certainly better equipped to do so). This is slowly changing, but the sector is not yet there, making it almost impossible to identify and collect the data necessary to evaluate the outcome of any given cybersecurity activity (e.g., measuring how a strategy or policy positively impacts specific criteria).
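To make the input/output distinction concrete, here is a minimal sketch of the kind of outcome indicator an output-based assessment would rely on. The incident figures and the `incidents_per_year` structure are invented for illustration only; they are not data from any real country or model.

```python
# Hypothetical outcome indicator: year-over-year change in reported incidents.
# The figures below are invented for illustration only.
incidents_per_year = {2021: 1200, 2022: 1150, 2023: 980}


def yoy_change(counts: dict[int, int]) -> dict[int, float]:
    """Return the percentage change in incidents for each year versus the previous one."""
    years = sorted(counts)
    return {
        year: round((counts[year] - counts[prev]) / counts[prev] * 100, 1)
        for prev, year in zip(years, years[1:])
    }


if __name__ == "__main__":
    # e.g. {2022: -4.2, 2023: -14.8} -> an output-based signal, unlike
    # "a cybercrime law has been published", which is an input.
    print(yoy_change(incidents_per_year))
```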
Second, outputs are lengthy. Traditionally, assessment reports include a wealth of information. Sometimes, introductory or contextual information is more extensive than the presentation of the actual assessment results. There are primarily two reasons for this. First, assessments can be challenging to complete. An assessor can never be certain that they have investigated all possible resources or can entirely rely on the information received during interviews or consultations. Thus, assessors tend to spend a lot of words introducing guardrails and assumptions to protect themselves and their work in case someone challenges the results. The second reason is that these documents are often produced for institutional clients, like ministries and the public sector. These "institutional machines" are typically large bodies, where information does not travel quickly. Thus, to make the assessment results accessible to all parties, assessors must present all the contextual elements that ensure any reader can understand the document's content.

The combined effect of these – and other – issues has crystallised a common practice of "more is better", leading to the creation of piles of paper. To make things worse, the outcomes are often presented with a distinct academic flavour rather than a focus on being practical and actionable. Thus, maturity assessments are frequently perceived as existing in a limbo between real usefulness and mere political or advertising documents. Because of this, operational and technical personnel, who are tasked with acting on the recommendations of these reports, regard them with mixed feelings. Over the years, we have had many conversations with technical staff telling us, "We don't need another 100-page document, we need a 5-page table that tells us what to do". They are right.
Third, maturity assessments depend too much on the skills of the assessors. This is a problem because the need to understand cybersecurity maturity is growing faster than the number of skilled assessors. Being an assessor is not easy: not only must one be conversant with cybersecurity, national policy, politics, and several other topics (due to the cross-sector nature of cyber), but one also needs to be a skilled researcher (to perform desk research), psychologist (to run compelling interviews), facilitator (to run practical workshops), and public speaker (to present results to key national leaders). Such profiles are incredibly rare. One way to deal with the issue is to have large assessment teams. However, it is often impossible to find all the necessary expertise to assemble a team that can deliver top-notch quality in its assessments. And when this can be done, the advisory bills will be huge and costs will surge. This limits the number of countries that are willing (or sometimes even able) to conduct such assessments without the support of third-party donors, thereby limiting the freedom of sovereign countries to decide when and how to perform these assessments. If costs do not decrease, this situation is unlikely to change anytime soon.
How to move forward and drive maturity assessment into the next stage
Outlining the challenges and issues with CMMs is only one side of the coin. Let’s dive deeper into how the international cybersecurity community can address these challenges, what is currently being done, and what is expected for the future.
Data is king
Invest in data-driven cybersecurity. This will create data that can be used to measure the effectiveness and the outcomes of initiatives. This is not easy and requires several building blocks. First, a country needs to establish standards and identify the right entry points for data. Take the example of risk data aggregated from the risk assessments performed by organisations operating in the country. For central administrations to use this data, a specific taxonomy in a standard format must first be established to meet their data needs. Then, the collection methods must be implemented: will the data be pulled by the administration from organisations (top-down) or pushed by organisations to the central administration (bottom-up)? Second, the country needs the proper infrastructure in place to collect, store, and use the data. Given the sensitivity of cybersecurity data, it would be advisable to rely on sovereign cloud computing.
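As a minimal sketch of what such a standard format could look like, the snippet below defines a hypothetical risk-record schema. The `RiskRecord` class, its field names, and the severity scale are our own illustrative assumptions, not an established national taxonomy.

```python
# A minimal, hypothetical schema for risk data reported by organisations
# to a central administration. Field names and the severity scale are
# illustrative assumptions, not an established standard.
from dataclasses import dataclass, asdict
from enum import Enum
import json


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class RiskRecord:
    organisation_id: str  # anonymised identifier of the reporting entity
    sector: str           # e.g. "energy", "finance", aligned to a national sector list
    risk_category: str    # entry from the shared taxonomy, e.g. "ransomware"
    severity: Severity    # ordinal severity on an agreed scale
    reported_on: str      # ISO 8601 date, e.g. "2024-06-30"

    def to_json(self) -> str:
        """Serialise to the agreed exchange format (JSON here, for illustration)."""
        payload = asdict(self)
        payload["severity"] = self.severity.name
        return json.dumps(payload)


# Example of a bottom-up submission: an organisation pushes one record upstream.
record = RiskRecord("org-0042", "energy", "ransomware", Severity.HIGH, "2024-06-30")
print(record.to_json())
```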
Laser focus your presentation
Focus on dashboarding rather than reporting. Produce data-driven and actionable indicators, not lengthy report documents. There are numerous solutions available to present data, ranging from the most basic ones, such as spreadsheets (e.g., MS Excel, LibreOffice Calc), to more complex data visualisation tools (e.g., Power BI, Tableau). These tools can also be adapted to present qualitative data – you do not necessarily need to work only with numbers. However, this requires an intellectual effort to understand how the qualitative data should be structured.
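As one illustration of that structuring effort, the sketch below maps qualitative maturity judgements onto an ordinal scale so that a spreadsheet or a tool such as Power BI or Tableau can chart them. The dimension names, ratings, and five-level scale are hypothetical examples, not taken from any specific CMM.

```python
# Hypothetical example: turning qualitative maturity judgements into an
# ordinal table that a dashboarding tool can consume. The dimensions,
# ratings, and five-level scale below are illustrative, not from a real CMM.
import csv
import io

LEVELS = {"ad-hoc": 1, "initial": 2, "defined": 3, "managed": 4, "optimised": 5}

qualitative_findings = {
    "Incident response": "defined",
    "Legal framework": "initial",
    "Workforce development": "ad-hoc",
}


def to_dashboard_rows(findings: dict[str, str]) -> list[dict[str, object]]:
    """Convert textual ratings into rows with a numeric score for charting."""
    return [
        {"dimension": dim, "rating": rating, "score": LEVELS[rating]}
        for dim, rating in findings.items()
    ]


# Export as CSV, ready to be loaded into Excel, Power BI, or Tableau.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["dimension", "rating", "score"])
writer.writeheader()
writer.writerows(to_dashboard_rows(qualitative_findings))
print(buffer.getvalue())
```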
Kill the costs, whatever it takes
One thing we have been doing at XP is rethinking the structure of maturity models to make them more accessible to everyone. We spent a lot of time thinking and experimenting, and realised that a considerable cost can be cut in the desktop research phase, as this step can be partially automated. Before the advent of AI, we looked into automating OSINT techniques to make them work for the assessors (such as Google dorking). However, AI has changed everything, opening up the possibility to move the needle even further. This requires assessment models that are designed to be AI-ready; otherwise, the risk is falling prey to AI hallucinations (something that is not acceptable in such high-stakes, high-precision work). Our staff is working on it and has already created promising prototypes that achieve their intended purpose. We are excited about this and happy to provide you with samples (reach out to us at info@experiree.com).
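To give a flavour of the pre-AI automation mentioned above, a trivial example of "making OSINT work for the assessor" is generating targeted search queries (Google dorks) from an assessment checklist. The query templates, the placeholder domain, and the `build_dorks` helper below are hypothetical illustrations, not the prototypes described in this section.

```python
# Hypothetical sketch: generating Google-dork style queries for the desk
# research phase of an assessment. Templates and helper are illustrative only.
COUNTRY_DOMAIN = "gov.example"  # placeholder national government domain

QUERY_TEMPLATES = [
    'site:{domain} filetype:pdf "{topic}"',
    'site:{domain} intitle:"{topic}"',
    '"{topic}" "{country}" filetype:pdf',
]


def build_dorks(country: str, domain: str, topics: list[str]) -> list[str]:
    """Expand each checklist topic into a set of targeted search queries."""
    return [
        template.format(domain=domain, country=country, topic=topic)
        for topic in topics
        for template in QUERY_TEMPLATES
    ]


if __name__ == "__main__":
    checklist = ["national cybersecurity strategy", "data protection law"]
    for query in build_dorks("Exampleland", COUNTRY_DOMAIN, checklist):
        print(query)  # queries can then be run manually or fed to a search API
```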
Do not involve stakeholders: empower them
Lastly, another aspect that can make maturity assessments more sustainable is to change the paradigm from an assessor-driven approach to an assessor-facilitated one. Rather than the standard interview, the paradigm should shift to the workshop. Experience on the ground has taught us that whereas interviews often feel like tests (and thus limit the willingness to be open or to participate), workshops have the advantage of putting participants at the steering wheel. They feel empowered, rather than "assessed", and they naturally produce the information the assessor needs to complete the task. Moreover, workshops are flexible, with countless ways to facilitate them (such as the wonderful Gamestorming by Dave Gray and Sunni Brown). One aspect we have been working on over the past few years is how to effectively gamify workshops, so that participants not only feel part of the process but also enjoy it and have fun doing so. The results are extraordinary, and this appears to be confirmed by the clear trend towards gamification in cybersecurity today.
Where to next
Cybersecurity maturity assessments are a crucial component of national cybersecurity governance. Thus, the industry must work to clear away the obstacles that might impair countries' ability to conduct these assessments.
Cybersecurity is undergoing a period of change: relevant stakeholders are increasingly dedicating effort to getting this right, and technology is providing innovative tools to simplify the process. The outlook is optimistic, and we believe the future of cybersecurity assessment looks bright.