Report Compilation
Review your assessment and select areas for improvement
You’ve completed your cloud maturity assessment. Review each answer below and check “I need to do better” for any areas where you’d like to improve.
Your report will then show detailed guidance for only the areas you’ve selected, helping you focus on what matters most to your organization.
Cost & Sustainability
How do you set capacity for live services?
Your answer:
We set capacity for the busiest times, even if it’s not always needed.
How to determine if this is good enough
When an organisation provisions capacity solely based on the highest possible load (peak usage), it generally results in:
High Reliance on Worst-Case Scenarios
- You assume your daily or seasonal peak might occur at any time, so you allocate enough VMs, containers, or resources to handle that load continuously.
- This can be seen as “good enough” if your traffic is extremely spiky, mission-critical, or your downtime tolerance is near zero.
Predictable But Potentially Wasteful Costs
- By maintaining peak capacity around the clock, your spend is predictable, but you may overpay substantially during off-peak hours.
- This might be acceptable if your budget is not severely constrained or if your leadership prioritises simplicity over optimisation.
Minimal Operational Complexity
- No advanced autoscaling or reconfiguration scripts are needed, as you do not scale up or down dynamically.
- For teams with limited cloud or DevOps expertise, “peak provisioning” might be temporarily “good enough.”
Compliance or Regulatory Factors
- Certain government services may face strict requirements that demand consistent capacity. If scaling or re-provisioning poses risk to meeting an SLA, you may choose to keep peak capacity as a safer option.
You might find “Peak Provisioning” still acceptable if cost oversight is low, your risk threshold is minimal, and you prefer operational simplicity. However, with public sector budgets under increasing scrutiny and user load patterns often varying significantly, this approach often wastes resources—both financial and environmental.
Your answer:
We adjust capacity when needed.
How to determine if this is good enough
This stage represents an improvement over peak provisioning: you size your environment around typical usage rather than the maximum. You might see this as “good enough” if:
Periodic But Manageable Traffic Patterns
- You may only observe seasonal spikes (e.g., monthly end-of-period reporting, yearly enrollments, etc.). Manually scaling before known events could be sufficient.
- The overhead of full autoscaling might not seem worthwhile if spikes are infrequent and predictable.
Comfortable Manual Operations
- You have a change-management process that can quickly add or remove capacity on a known schedule (e.g., scaling up ahead of local council tax billing cycles).
- If your staff can handle these tasks promptly, the organisation might see no urgency in adopting automated approaches.
Budgets and Costs Partially Optimised
- By aligning capacity to average usage (rather than peak), you reduce some waste. You might see moderate cost savings compared to peak provisioning.
- The cost overhead from less frequent or smaller over-provisioning might be tolerable.
Stable or Slow-Growing Environments
- If your cloud usage is not rapidly increasing, a manual approach might not yet lead to major inefficiencies.
- You have limited real-time or unpredictable usage surges.
That said, manual scaling can become a bottleneck if usage unexpectedly grows or if multiple applications need frequent changes. The risks are human error (forgetting to scale back down), delayed response to traffic spikes, and missed savings opportunities.
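If the manual effort around known events becomes a burden, a small first step short of full autoscaling is to schedule capacity changes in advance. The sketch below uses AWS's boto3 SDK against an existing EC2 Auto Scaling group; the group name, sizes, and cron schedules are illustrative assumptions, and other providers offer equivalent scheduled-scaling features.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-2")

# Hypothetical Auto Scaling group name; replace with your own.
GROUP_NAME = "council-tax-web-asg"

# Scale up ahead of a known busy period (e.g. the start of a billing cycle)...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=GROUP_NAME,
    ScheduledActionName="scale-up-before-billing-run",
    Recurrence="0 6 1 * *",   # 06:00 UTC on the 1st of each month
    MinSize=4,
    MaxSize=10,
    DesiredCapacity=6,
)

# ...and scale back down once the peak has passed.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=GROUP_NAME,
    ScheduledActionName="scale-down-after-billing-run",
    Recurrence="0 20 3 * *",  # 20:00 UTC on the 3rd of each month
    MinSize=2,
    MaxSize=4,
    DesiredCapacity=2,
)
```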
Your answer:
Some systems scale automatically.
How to determine if this is good enough
At this stage, you’ve moved beyond purely manual methods: some of your workloads automatically scale in or out when CPU, memory, or queue depth crosses a threshold. This can be “good enough” if:
Limited Service Scope
- You have identified a few critical or high-variance components (e.g., your front-end web tier) that benefit significantly from autoscaling.
- Remaining workloads may be stable or less likely to see large traffic swings.
Simplicity Over Complexity
- You deliberately keep autoscaling rules straightforward (e.g., CPU > 70% for 5 minutes) to avoid over-engineering.
- This might meet departmental objectives, provided the load pattern doesn’t vary unpredictably.
Reduced Manual Overhead
- Thanks to autoscaling on certain components, you rarely intervene during typical usage spikes.
- You still handle major events or seasonal shifts manually, but day-to-day usage is more stable.
Partially Controlled Costs
- Because your most dynamic workloads scale automatically, you see fewer cost overruns from over-provisioning.
- You still might maintain some underutilised capacity for other components, but it’s acceptable given your risk appetite.
If your environment only sees moderate changes in demand and leadership doesn’t demand full elasticity, “Basic Autoscaling for Certain Components” can suffice. However, if your user base or usage patterns expand, or if you aim for deeper cost optimisation, you could unify autoscaling across more workloads and utilise advanced triggers.
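To make the simple threshold-style rule described above concrete, the following sketch (using AWS's boto3 SDK) attaches a target-tracking policy to an existing EC2 Auto Scaling group so that it aims to keep average CPU around 70%. The group and policy names are illustrative; equivalent policies exist on other platforms such as Kubernetes or Azure VM scale sets.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-2")

# Hypothetical Auto Scaling group for a front-end web tier.
response = autoscaling.put_scaling_policy(
    AutoScalingGroupName="front-end-web-asg",
    PolicyName="hold-average-cpu-near-70-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 70.0,      # scale out when average CPU rises above ~70%
        "DisableScaleIn": False,  # also allow the group to shrink when load drops
    },
)
print("Created scaling policy:", response["PolicyARN"])
```

A single target-tracking rule like this keeps the configuration deliberately simple, which matches the "good enough" posture described in this answer.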
Your answer:
Most systems scale automatically.
How to determine if this is good enough
You’ve expanded autoscaling across many workloads: from front-end services to internal APIs, possibly including some data processing components. However, you’re mostly using CPU, memory, or standard throughput metrics as triggers. This can be “good enough” if:
Comprehensive Coverage
- Most of your core applications scale automatically as demand changes. Manual interventions are rare and usually revolve around unusual events or big product launches.
Efficient Day-to-Day Operations
- Cost and capacity usage are largely optimised since few resources remain significantly underutilised or idle.
- Staff seldom worry about reconfiguring capacity for typical fluctuations.
Satisfactory Performance
- Using basic metrics (CPU, memory) covers typical load patterns adequately.
- The risk of slower scale-up in more complex scenarios (like surges in queue lengths or specific user transactions) might be acceptable.
Stable or Predictable Load Growth
- Even with widespread autoscaling, if your usage grows in somewhat predictable increments, basic triggers might suffice.
- You rarely need to investigate advanced logs or correlation with end-user response times to refine scaling.
If your service-level objectives (SLOs) and budgets remain met with these simpler triggers, you may be comfortable. However, more advanced autoscaling can yield better responsiveness for spiky or complex applications that rely heavily on queue lengths, user concurrency, or custom application metrics (e.g., transactions per second, memory leaks, etc.).
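If you decide to move towards richer triggers, the usual first step is publishing an application-level metric (such as a queue backlog) that scaling policies and alarms can then act on. A minimal sketch, assuming AWS CloudWatch; the namespace, metric name, and service label are illustrative.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-2")

def publish_backlog_metric(service_name: str, pending_jobs: int) -> None:
    """Publish a custom 'pending jobs' metric that scaling policies or alarms can react to."""
    cloudwatch.put_metric_data(
        Namespace="MyDepartment/Queues",  # hypothetical namespace
        MetricData=[
            {
                "MetricName": "PendingJobs",
                "Dimensions": [{"Name": "Service", "Value": service_name}],
                "Value": float(pending_jobs),
                "Unit": "Count",
            }
        ],
    )

# Example: report a backlog of 1,250 jobs for an illustrative document-processing service.
publish_backlog_metric("document-processing", 1250)
```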
Your answer:
Everything scales automatically.
How to determine if this is good enough
In this final, most mature stage, your organisation applies advanced autoscaling across practically every production workload. Detailed logs, queue depths, user concurrency, or response times drive scaling decisions. This likely means:
Holistic Observability and Telemetry
- You collect and analyze logs, metrics, and traces in near real-time, correlating them to auto-scale events.
- Teams have dashboards that reflect business-level metrics (e.g., transactions processed, citizen requests served) to trigger expansions or contractions.
Proactive or Predictive Scaling
- You anticipate traffic spikes based on historical data or usage trends (like major public announcements, election result postings, etc.).
- Scale actions happen before a noticeable performance drop, offering a seamless user experience.
Minimal Human Intervention
- Manual resizing is rare, reserved for extraordinary circumstances (e.g., emergency security patches, new application deployments).
- Staff focus on refining autoscaling policies, not reacting to capacity emergencies.
Cost-Optimised and Performance-Savvy
- Because you rarely over-provision for extended periods, your budget usage remains tightly aligned with actual needs.
- End-users or citizens experience consistently fast response times due to prompt scale-outs.
If you find that your applications handle usage spikes gracefully, cost anomalies are rare, and advanced metrics keep everything stable, you have likely achieved an advanced autoscaling posture. Nevertheless, with the rapid evolution of cloud services, there are always methods to iterate and improve.
How do you run services in the cloud?
Your answer:
Long-running virtual machines (VMs).
How to determine if this is good enough
An organisation that relies on “Long-Running Homogeneous VMs” typically has static infrastructure: they stand up certain VM sizes—often chosen arbitrarily or based on outdated assumptions—and let them run continuously. For a UK public sector body, this may appear straightforward if:
Predictable, Low-Complexity Workloads
- Your compute usage doesn’t fluctuate much (e.g., a small number of internal line-of-business apps with stable user counts).
- You don’t foresee major surges or dips in demand.
- The overhead of changing compute sizes or rearchitecting to dynamic services might seem unnecessary.
Minimal Cost Pressures
- If your monthly spend is low enough to be tolerated within your departmental budget or you lack strong impetus from finance to optimise further.
- You might feel that it’s “not broken, so no need to fix it.”
Legacy Constraints
- Some local authority or government departments could be running older applications that are hard to containerise or re-platform. If you require certain OS versions or on-prem-like architectures, homogeneous VMs can seem “safe.”
Limited Technical Skills or Resources
- You may not have in-house expertise to manage containers, function-based services, or advanced orchestrators.
- If your main objective is stability and you have no immediate impetus to experiment, you might remain with static VM setups.
If you fall into these categories—low complexity, legacy constraints, stable usage, minimal cost concerns—then “Long-Running Homogeneous VMs” might indeed be “good enough.” However, many UK public sector cloud strategies now emphasize cost efficiency, scalability, and elasticity, especially under increased scrutiny of budgets and service reliability. Sticking to homogeneous, always-on VMs without optimisation can lead to wasteful spending, hamper agility, and prevent future readiness.
Your answer:
VMs with containers or serverless for a few minor things.
How to determine if this is good enough
Organisations in this stage have recognised the benefits of more dynamic compute models—like containers or serverless—but apply them only in a small subset of cases. You might be “good enough” if:
Core Workloads Still Suited to Static VMs
- Perhaps your main applications are large, monolithic solutions that can’t easily shift to containers or functions.
- The complexity of re-platforming may outweigh the immediate gains.
Selective Use of Modern Compute
- You have tested container-based or function-based solutions for simpler tasks (e.g., cron jobs, internal scheduled data processing, or small web endpoints).
- The results are encouraging, but you haven’t had the internal capacity or business priority to expand further.
Comfortable Cost Baseline
- You’ve introduced auto-shutdown or partial right-sizing for your VMs, so your costs are not spiraling.
- Leadership sees no urgent impetus to push deeper into containers or serverless, perhaps because budgets remain stable or there’s no urgent performance/elasticity requirement.
Growing Awareness of Container or Serverless Advantages
- Some staff or teams are championing more frequent usage of advanced compute.
- The IT department sees potential, but organisational inertia, compliance considerations, or skill gaps limit widespread adoption.
If the majority of your mission-critical applications remain on VMs and you see stable performance within budget, this may be “enough” for now. However, if the cloud usage is expanding, or if your department is under pressure to modernise, you might quickly find you miss out on elasticity, cost efficiency, or resilience advantages that come from broader container or serverless adoption.
Your answer:
A mix of VMs and containers or serverless for some key systems.
How to determine if this is good enough
This stage indicates a notable transformation: your organisation uses multiple compute paradigms. You have container-based or serverless workloads in production, you sometimes spin up short-lived VMs for ephemeral tasks, and you’re actively right-sizing. It may be “good enough” if:
Functional, Multi-Modal Compute Strategy
- You’ve proven that containers or serverless can handle real production demands (e.g., public-facing services, departmental applications).
- VMs remain important for some workloads, but you adapt or re-size them more frequently.
Solid Operational Knowledge
- Your teams are comfortable deploying to a container platform (e.g., Kubernetes, ECS, Azure WebApps for containers, etc.) or using function-based services in daily workflows.
- Monitoring and alerting are configured for both ephemeral and long-running compute.
Balanced Cost and Complexity
- You have a handle on typical monthly spend, and finance sees a correlation between usage spikes and cost.
- You might not be fully optimising everything, but you rarely see large, unexplained bills.
Clear Upsides from Modern Compute
- You’ve recognised that certain microservices perform better or cost less on serverless or containers.
- Cultural buy-in is growing: multiple teams express interest in flexible compute models.
If these points match your environment, your “Mixed Use” approach might currently satisfy your user needs and budget constraints. However, you might still see opportunities to refine deployment methods, unify your management or monitoring, and push for greater elasticity. If you suspect further cost savings or performance gains are possible—or you want a more standardised approach across the organisation—further advancement is likely beneficial.
Your answer:
Containers or serverless.
How to determine if this is good enough
When your organisation regularly uses ephemeral or short-lived compute models, containers, and functions, you’ve likely embraced cloud-native thinking. This suggests:
Frequent Scaling and Automated Lifecycle
- You seldom keep large VMs running 24/7 unless absolutely necessary.
- Container-based architectures or ephemeral VMs scale up to meet demand, then terminate when idle.
High Automation in CI/CD
- Deployments to containers or serverless happen automatically via pipelines.
- Infrastructure provisioning is likely codified in IaC (Infrastructure as Code) tooling (Terraform, CloudFormation, Bicep, etc.).
Performance and Cost Efficiency
- You typically pay only for what you use, cutting down on waste.
- Application performance can match demand surges without manual intervention.
Multi-Service Observability
- Monitoring covers ephemeral workloads, with logs and metrics aggregated effectively.
If you have reached this point, your environment is more agile, cost-optimised, and aligned with modern DevOps. However, you may still have gaps in advanced scheduling, deeper security or compliance integration, or a formal approach to evaluating each new solution (e.g., deciding between containers, serverless, or a managed SaaS).
Your answer:
We pick the best tool for each job.
How to determine if this is good enough
At this highest maturity level, you explicitly choose the most appropriate computing model—often starting from SaaS (Software as a Service) if it meets requirements, then serverless if custom code is needed, then containers, and so on down to raw VMs only when necessary. Indicators that this might be “good enough” include:
Every New Project Undergoes a Thorough Fit Assessment
- Your solution architecture process systematically asks: “Could an existing SaaS platform solve this? If not, can serverless do the job? If not, do we need container orchestration?” and so forth.
- This approach prevents defaulting to IaaS or large container clusters without strong justification.
Rigorous Continual Right-sizing
- Teams actively measure usage and adjust resource allocations monthly or even weekly.
- Underutilised resources are quickly scaled down or replaced by ephemeral compute. Over-stressed services are scaled up or moved to more robust solutions.
Sophisticated Observability, Security, and Compliance
- With multiple service layers, you maintain consistent monitoring, security scanning, and compliance checks across SaaS, FaaS, containers, and IaaS.
- You have well-documented runbooks and automated pipelines to handle each technology layer.
Cost Efficiency and Agility
- Budgets often reflect usage-based spending, and spikes are quickly noticed.
- Development cycles are faster because you adopt higher-level services first, focusing on business logic rather than infrastructure management.
If your organisation can demonstrate that each new or existing application sits in the best-suited compute model—balancing cost, compliance, and performance—this is typically considered the pinnacle of cloud compute maturity. However, continuous improvements in vendor offerings, emerging technologies, and changing departmental requirements mean there is always more to refine.
How do you track sustainability?
Your answer:
We don’t track sustainability ourselves.
How to determine if this is good enough
In this stage, your organisation trusts its cloud provider to meet green commitments through mechanisms like carbon offsetting or renewable energy purchases. You likely have little to no visibility of actual carbon metrics. For UK public sector bodies, you might find this acceptable if:
Limited Scope and Minimal Usage
- Your cloud footprint is extremely small (e.g., a handful of testing environments).
- The cost and complexity of internal measurement may not seem justified at this scale.
No Immediate Policy or Compliance Pressures
- You face no urgent departmental or legislative requirement to detail your carbon footprint.
- Senior leadership may not yet be asking for sustainability reports.
Strong Confidence in Vendor Pledges
- Your contract or statements of work (SoW) reassure you that the provider is pursuing net zero or carbon neutrality.
- You have no immediate impetus to verify or go deeper into the supply chain or usage details.
If you are in this situation and operate with minimal complexity, “Basic Vendor Reliance” might be temporarily “good enough.” However, the UK public sector is increasingly required to evidence sustainability efforts, particularly under initiatives like the Greening Government Commitments. Larger or rapidly growing workloads will likely outgrow this approach. If you anticipate expansions, cost concerns, or scrutiny from oversight bodies, it is wise to move beyond vendor reliance.
Your answer:
We have some sustainability goals and prefer greener providers.
How to determine if this is good enough
At this stage, you have moved beyond “vendor says they’re green.” You may have a written policy stating that you will prioritise environmentally responsible suppliers or aim to reduce your cloud emissions. For UK public sector organisations, “Initial Awareness” may be adequate if:
Formal Policy Exists, but Execution Is Minimal
- You have a documented pledge or departmental instruction to pick greener vendors or to reduce carbon, but it’s largely aspirational.
Some Basic Tracking or Guidance
- Procurement teams might refer to environmental credentials during tendering, especially if you’re using Crown Commercial Service frameworks.
- Staff are aware that sustainability is important, but lack practical steps.
Minimal External Oversight
- You might not yet be required to publish detailed carbon metrics in annual reports or meet stringent net zero timelines.
- The policy helps reduce reputational risk, but you have not turned it into tangible workflows.
This approach is a step up from total vendor reliance. However, it often lacks robust measurement or accountability. If your workload, budget, or public scrutiny around environmental impact is increasing, particularly in line with the Greening Government Commitments, you will likely need more rigorous strategies soon.
Your answer:
We measure the environmental impact of our cloud use and set targets.
How to determine if this is good enough
Here, you have begun quantifying your cloud-based carbon output. You might set yearly or quarterly reduction goals (e.g., a 10% decrease in carbon from last year). You also factor environmental impacts into your choice of instance types, storage classes, or regions. Signs this might be “good enough” include:
Regular Carbon Footprint Data
- You have monthly or quarterly reports from vendor dashboards or a consolidated internal system (e.g., pulling data from cost/billing APIs plus vendor carbon intensity metrics).
Formal Targets and Milestones
- Leadership acknowledges these targets. They appear in your departmental objectives or technology strategy.
Procurement Reflects Sustainability
- RFQs or tenders explicitly weigh sustainability factors, awarding points to vendors or proposals that commit to lower carbon usage.
- You might require prospective suppliers to share energy efficiency data for their services.
Leadership or External Bodies Approve
- Senior managers or oversight bodies see your target-setting approach as credible.
- Your reports are used in annual reviews or compliance documentation.
While “Active Measurement and Target Setting” is a robust step forward, you may still discover that your usage continues to increase due to scaling demands or new digital services. Additionally, you might lack advanced optimisation practices like continuous resource right-sizing or dynamic load shifting.
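Alongside vendor carbon dashboards, the underlying usage data can be pulled programmatically to feed your own reporting, as the cost/billing API approach above suggests. This is a minimal sketch assuming AWS Cost Explorer is enabled; the month shown is illustrative, and in practice the per-service usage would be combined with the provider's published carbon-intensity figures rather than treated as a carbon number in itself.

```python
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer uses a single global endpoint

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # illustrative month
    Granularity="MONTHLY",
    Metrics=["UsageQuantity", "UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print usage and cost per service as one input into carbon reporting.
for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    usage = group["Metrics"]["UsageQuantity"]["Amount"]
    cost = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{service}: usage={usage}, cost={float(cost):.2f}")
```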
Your answer:
We monitor sustainability and make changes to improve.
How to determine if this is good enough
At this stage, sustainability isn’t a separate afterthought—it’s part of your default operational processes. Indications that you might be “good enough” for UK public sector standards include:
Frequent/Automated Monitoring
- Carbon metrics are tracked at least weekly, if not daily, using integrated dashboards.
- You have alerts for unexpected surges in usage or carbon-intense resources.
Cultural Adoption Across Teams
- DevOps, procurement, and governance leads all know how to incorporate sustainability criteria.
- Architects regularly consult carbon metrics during design sessions, akin to how they weigh cost or security.
Regular Public or Internal Reporting
- You might publish simplified carbon reports in your annual statements or internally for senior leadership.
- Stakeholders can see monthly/quarterly improvements, reflecting a stable, integrated practice.
Mapping to Strategic Objectives
- The departmental net zero or climate strategy references your integrated approach as a key success factor.
- You can demonstrate tangible synergy: e.g., your cost savings from scaling down dev environments are also cutting carbon.
Despite these achievements, additional gains can still be made, especially in advanced workload scheduling or region selection. If you want to stay ahead of new G-Cloud requirements, carbon scoring frameworks, or stricter net zero mandates, you may continue optimising your environment.
Your answer:
We use automated approaches to optimise sustainability.
How to determine if this is good enough
At the pinnacle of cloud sustainability maturity, your organisation leverages sophisticated methods such as:
Real-Time or Near-Real-Time Workload Scheduling
- When feasible and compliant with data sovereignty, you shift workloads to times/locations with lower carbon intensity.
- You may monitor the UK grid’s real-time carbon intensity and schedule large batch jobs during off-peak, greener times.
Full Lifecycle Carbon Costing
- Every service or data set has an associated “carbon cost,” influencing decisions from creation to archival/deletion.
- You constantly refine how your application code runs to reduce unnecessary CPU cycles, memory usage, or data transfers.
Continuous Improvement Culture
- Teams treat carbon optimisation as essential as cost or performance. Even minor improvements (e.g., 2% weekly CPU usage reduction) are celebrated.
Cross-Government Collaboration
- As a leader, you might share advanced scheduling or dynamic region selection techniques with other public sector bodies.
- You might co-publish guidance for G-Cloud or Crown Commercial Service frameworks on advanced sustainability requirements.
If you have truly dynamic optimisation but remain within the constraints of UK data protection or performance needs, you have likely achieved a highly advanced state. However, there’s almost always room to push boundaries, such as exploring new hardware (e.g., ARM-based servers) or adopting emergent best practices in green software engineering.
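To illustrate the grid-aware scheduling described above, the sketch below checks the National Grid's public Carbon Intensity API before deciding whether to start a deferrable batch job. The threshold and the placeholder job are illustrative assumptions; a real scheduler would also respect deadlines and data-residency constraints.

```python
import requests

CARBON_API = "https://api.carbonintensity.org.uk/intensity"
INTENSITY_THRESHOLD = 150  # gCO2/kWh; illustrative cut-off, tune for your workload

def current_carbon_intensity() -> int:
    """Return the current GB grid carbon intensity forecast in gCO2/kWh."""
    data = requests.get(CARBON_API, timeout=10).json()
    return data["data"][0]["intensity"]["forecast"]

def maybe_run_batch_job(run_job) -> bool:
    """Run the supplied job only when the grid is relatively green."""
    intensity = current_carbon_intensity()
    if intensity <= INTENSITY_THRESHOLD:
        run_job()
        return True
    print(f"Deferring job: grid intensity is {intensity} gCO2/kWh")
    return False

# Example usage with a placeholder job.
maybe_run_batch_job(lambda: print("Running large overnight report generation..."))
```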
How do you manage costs?
Your answer:
Our finance or management team can see spending reports.
How to determine if this is good enough
Restricted Billing Visibility typically implies that your cloud cost data—such as monthly bills, usage breakdowns, or detailed cost analytics—remains siloed within a small subset of individuals or departments, usually finance or executive leadership. This might initially appear acceptable if you believe cost decisions do not directly involve engineering teams, product owners, or other stakeholders. It can also seem adequate when your organisation is small, or budgets are centrally controlled. However, carefully assessing whether this arrangement still meets your current and emerging needs requires a closer look at multiple dimensions: stakeholder awareness, accountability for financial outcomes, cross-functional collaboration, and organisational growth.
Stakeholder Awareness and Alignment
- When only a narrow group (e.g., finance managers) knows the full cost details, other stakeholders may make decisions in isolation, unaware of the larger financial implications. This can lead to inflated resource provisioning, missed savings opportunities, or unexpected billing surprises.
- Minimal cost visibility might still be sufficient if your organisation’s usage is predictable, your budget is stable, and your infrastructure is relatively small. In such scenarios, cost control may not be a pressing concern. Nevertheless, even in stable environments, ignoring cost transparency could result in incremental increases that go unnoticed until they become significant.
Accountability for Financial Outcomes
- Finance teams that are solely responsible for paying the bill and analyzing cost trends might not have enough granular knowledge of the engineering decisions driving those costs. If your developers or DevOps teams are not looped in, they cannot easily optimise code, infrastructure, or architecture to reduce waste.
- This arrangement can be considered “good enough” if your service-level agreements demand minimal overhead from engineers, or if your leadership structure is comfortable with top-down cost directives. However, the question remains: are you confident that your engineering teams have no role to play in optimising usage patterns? If the answer is that engineers do not need to see cost data to be efficient, you might remain in this stage without immediate issues. But typically, as soon as your environment grows in complexity, the limitation becomes evident.
Cross-Functional Collaboration
- Siloed billing data hinders cross-functional input and collaboration. Product managers, engineering leads, and operational teams may not easily communicate about the cost trade-offs associated with new features, expansions, or refactoring.
- This might be “good enough” if your operating model is highly centralised and decisions about capacity, performance, or service expansion are made primarily through a few financial gatekeepers. Yet, even in such a centralised model, growth or changing business goals frequently demand more nimble, collaborative approaches.
Scalability Concerns and Future Growth
- When usage scales or new product lines are introduced, a lack of broader cost awareness can quickly escalate monthly bills. If your environment remains small or has limited growth, you might not face immediate cost explosions.
- However, any potential business pivot—such as adopting new cloud services, launching in additional regions, or implementing a continuous delivery model—might cause your costs to spike in ways that a small finance-only group cannot effectively preempt.
Risk Assessment
- A direct risk in “Restricted Billing Visibility” is the possibility of accumulating unnecessary spend because the people who can make technical changes (engineers, developers, or DevOps) do not have the insight to detect cost anomalies or scale down resources.
- If your usage remains modest and you have a proven track record of stable spending without sudden spikes, maybe it is still acceptable to keep cost data limited to finance. Nonetheless, you run the risk of missing optimisation pathways if your environment changes or if external factors (e.g., vendor price adjustments) affect your spending patterns.
In summary, this approach may be “good enough” for organisations with very limited complexity or strictly centralised purchasing structures where cost fluctuations remain low and stable. It can also suffice if you have unwavering trust that top-down oversight alone will detect anomalies. But if you see any potential for cost spikes, new feature adoption, or a desire to empower engineering with cost data, it might be time to consider a more transparent model.
Your answer:
Finance uses billing data to plan spending.
How to determine if this is good enough
In many organisations, cloud finance teams or procurement specialists negotiate contracts with cloud providers for discounted rates based on committed spend, often referred to as “Reserved Instances,” “Savings Plans,” “Committed Use Discounts,” or other vendor-specific programs. This approach can result in significant cost savings if done correctly. Understanding when this level of engagement is “good enough” often depends on the maturity of your cost forecasting, the stability of your workloads, and the alignment of these financial decisions with actual technical usage patterns.
Consistent, Predictable Workloads
- If your application usage is relatively stable or predictably growing, pre-committing spend for a year or multiple years may deliver significant savings. In these situations, finance-led deals—where finance is looking at historical bills and usage curves—can cover the majority of your resource requirements without risking over-commitment.
- This might be “good enough” if your organisation already has a stable architecture and does not anticipate major changes that could invalidate these predictions.
Finance Has Access to Accurate Usage Data
- The success of pre-commit or reserved instances depends on the accuracy of usage forecasts. If finance can access granular, up-to-date usage data from your environment—and if that data is correct—then they can make sound financial decisions regarding commitment levels.
- This approach is likely “good enough” if your technical teams and finance teams have established a reliable process for collecting and interpreting usage metrics, and if finance is skilled at comparing on-demand rates with potential discounts.
Minimal Input from Technical Teams
- Sometimes, organisations rely heavily on finance to decide how many reserved instances or committed usage plans to purchase. If your technical environment is not highly dynamic or if there is low risk that engineering changes will undermine those pre-commit decisions, centralising decision-making in finance might be sufficient.
- That said, if your environment is subject to bursts of innovation, quick scaling, or sudden shifts in resource types, you risk paying for commitments that do not match your actual usage. If you do not see a mismatch emerging, you might feel comfortable with the status quo.
No Urgent Need for Real-Time Adjustments
- One reason an exclusively finance-led approach might still be “good enough” is that you have not observed frequent or large mismatches between your committed usage and your actual consumption. The cost benefits appear consistent, and you have not encountered major inefficiencies (like leftover capacity from partially utilised commitments).
- If your workloads are largely static or have a slow growth pattern, you may not require real-time collaboration with engineering. Under those circumstances, a purely finance-driven approach can still yield moderate or even significant savings.
Stable Vendor Relationships
- Some organisations prefer to maintain strong partnerships with a single cloud vendor and do not plan on multi-cloud or vendor migration strategies. If you anticipate staying with that vendor for the long haul, pre-commits become less risky.
- If you have confidence that your vendor’s future services or pricing changes will not drastically shift your usage patterns, you might view finance’s current approach as meeting your needs.
However, this arrangement can quickly become insufficient if your organisation experiences frequent changes in technology stacks, product lines, or scaling demands. It may also be suboptimal if you do not track how the commitments are being used—or if finance does not engage with the technical side to refine usage estimates.
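A quick way to sanity-check a finance-led commitment is to calculate the utilisation level at which it breaks even against on-demand pricing, then compare that with your forecast. A worked sketch with illustrative (not real) rates:

```python
# Illustrative figures only; substitute your vendor's actual rates and terms.
on_demand_hourly = 0.20      # cost per hour at pay-as-you-go rates
committed_hourly = 0.13      # effective cost per hour under a 1-year commitment
hours_per_year = 24 * 365

# Break-even utilisation: the fraction of the year the capacity must actually be
# needed for the commitment to cost less than paying on demand.
break_even_utilisation = committed_hourly / on_demand_hourly
print(f"Break-even utilisation: {break_even_utilisation:.0%}")

# Compare annual costs at the utilisation level finance is forecasting.
expected_utilisation = 0.80  # e.g. capacity needed 80% of the time
on_demand_cost = on_demand_hourly * hours_per_year * expected_utilisation
committed_cost = committed_hourly * hours_per_year  # paid regardless of actual usage
print(f"On-demand: {on_demand_cost:,.0f} per year; committed: {committed_cost:,.0f} per year")
print("Commitment saves money" if committed_cost < on_demand_cost else "On-demand is cheaper")
```

With these example rates the commitment breaks even at 65% utilisation, so a forecast of 80% comfortably favours committing; a forecast below 65% would not.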
Your answer:
We use some automation to help reduce costs.
How to determine if this is good enough
Cost-Effective Resource Management typically reflects an environment where you have implemented proactive measures to eliminate waste in your cloud infrastructure. Common tactics include turning off development or testing environments at night, using auto-scaling to handle variable load, and continuously auditing for idle resources. The question becomes whether these tactics alone suffice for your organisational goals or if further improvements are necessary. To evaluate, consider the following:
Monitoring Actual Savings
- If you have systematically scheduled non-production workloads to shut down or scale down during off-peak hours, you should be able to measure the direct savings on your monthly bill. Compare your pre-implementation spending to current levels, factoring in seasonal usage patterns. If your cost has dropped significantly, you might conclude that the approach is providing tangible value.
- However, cost optimisation rarely stops at shutting down test environments. If you still observe large spikes in bills outside of work hours or suspect that production environments remain over-provisioned, you may not be fully leveraging the potential.
Resource Right-sizing
- Simply scheduling off-hours shutdowns is beneficial, but right-sizing resources can yield equally impactful or even greater results. For instance, if your production environment runs on instance types or sizes that are consistently underutilised, there is an opportunity to downsize.
- If you have not yet performed or do not regularly revisit right-sizing exercises—analyzing CPU and memory usage, optimising storage tiers, or removing unused IP addresses or load balancers—your “Cost-Effective Resource Management” might only be addressing part of the savings puzzle.
Lifecycle Management of Environments
- Shutting down entire environments for nights or weekends helps reduce cost, but it is only truly effective if you also manage ephemeral environments responsibly. Are you spinning up short-lived staging or test clusters for continuous integration, but forgetting to tear them down after usage?
- If you have robust processes or automation that handle the entire lifecycle—creation, usage, shutdown, deletion—for these environments, then your current approach could be “good enough.” If not, orphaned or abandoned environments might still be draining budgets.
Auto-Scaling Maturity
- Auto-scaling is a cornerstone of cost-effective resource management. If you have implemented it for your production and major dev/test environments, that may appear “good enough” initially. But is your scaling policy well-optimised? Are you aggressively scaling down during low traffic, or do you keep large buffer capacities?
- Evaluate logs to check if you have frequent periods of near-zero usage but remain scaled up. If auto-scaling triggers are not finely tuned, you could be missing out on further cost reductions.
Cost vs. Performance Trade-Offs
- Some teams accept a degree of cost inefficiency to ensure maximum performance. If your organisation is comfortable paying for extra capacity to handle traffic bursts, the existing environment might be adequate. But if you have not explicitly weighed the financial cost of that performance margin, you could be inadvertently overspending.
- “Good enough” might be an environment where you have at least set baseline checks to prevent runaway spending. Yet, if you want to refine performance-cost trade-offs further, additional tuning or service re-architecture could unlock more savings.
Empowerment of Teams
- Another dimension is whether only a small ops or DevOps group is responsible for shutting down resources or if the entire engineering team is cost-aware. If the latter is not the case, you may have manual processes that lead to inconsistent application of off-hour shutdowns. A more mature approach would see each team taking responsibility for their resource usage, aided by automation.
- If your processes remain centralised and manual, your approach might hit diminishing returns as you grow. Achieving real momentum often requires embedding cost awareness into the entire software development lifecycle.
When you reflect on these factors, “Cost-Effective Resource Management” is likely “good enough” if you have strong evidence of direct savings, a minimal presence of unused resources, and a consistent approach to shutting down or scaling your environments. If you still detect untracked resources, underused large instances, or an absence of automated processes, there are plenty of next steps to enhance your strategy.
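As one example of the automation discussed above, scheduled shutdown of non-production environments can be a short script run each evening. A minimal sketch using AWS's boto3 SDK, assuming an illustrative Environment tag on EC2 instances; similar patterns apply to other providers.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

def stop_non_production_instances() -> list[str]:
    """Stop all running instances tagged with an illustrative Environment=dev/test tag."""
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["dev", "test"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

# Typically run on a schedule (e.g. a cron job or scheduled function each evening).
stopped = stop_non_production_instances()
print(f"Stopped {len(stopped)} non-production instances")
```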
Your answer:
Developers can see and consider costs.
How to determine if this is good enough
Introducing “Cost-Aware Development Practices” means your engineering teams are no longer coding in a vacuum. Instead, they have direct or near-direct access to cost data and incorporate budget considerations throughout their software lifecycle. However, measuring if this approach is “good enough” requires assessing how deeply cost awareness is embedded in day-to-day technical activities, as well as the outcomes you achieve.
Extent of Developer Engagement
- If your developers see cloud cost dashboards daily but rarely take any action based on them, the visibility may not be translating into tangible benefits. Are they actively tweaking infrastructure choices, refactoring code to reduce memory usage, or questioning the necessity of certain services? If not, your “awareness” might be superficial.
- Conversely, if you see frequent pull requests that address cost inefficiencies, your development team is likely using their visibility effectively.
Integration in the Software Development Lifecycle
- Merely giving developers read access to a billing console is insufficient. If your approach is truly effective, cost discussions happen early in design or sprint planning, not just at the end of the month. The best sign is that cost considerations appear in architecture diagrams, code reviews, and platform selection processes.
- If cost is still an afterthought—addressed only when a finance or leadership team raises an alarm—then the practice is not yet “good enough.”
Tooling and Automated Feedback
- Effective cost awareness often involves integrated tooling. For instance, developers might see near real-time cost metrics in their Git repositories or continuous integration workflows. They might receive a Slack notification if a new branch triggers resources that exceed certain thresholds.
- If your environment lacks this real-time or near-real-time feedback loop, and developers only see cost data after big monthly bills, the awareness might be lagging behind actual usage.
Demonstrable Cost Reductions
- A simple yardstick is whether your engineering teams can point to quantifiable cost reductions linked to design decisions or code changes. For example, a team might say, “We replaced a full-time VM with a serverless function and saved $2,000 monthly.”
- If such examples are sparse or non-existent, you might suspect that cost awareness is not yet translating into meaningful changes.
Cultural Embrace
- A “good enough” approach sees cost awareness as a normal part of engineering culture, not an annoying extra. Team leads, product owners, and developers frequently mention cost in retrospectives or stand-ups.
- If referencing cloud spend or budgets still feels taboo or is seen as “finance’s job,” you have further to go.
Alignment with Company Goals
- Finally, consider how your cost-aware practices align with broader business goals—whether that be margin improvement, enabling more rapid scaling, or launching new features within certain budgets. If your engineering changes consistently support these objectives, your approach might be sufficiently mature.
- If leadership is still blindsided by unexpected cost overruns or if big swings in usage go unaddressed, it is likely that your cost-aware culture is not fully effective.
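One way to close the feedback loop described above is a pipeline step that compares recent spend for a team's tagged resources against an agreed limit and posts to a chat channel. This sketch assumes AWS Cost Explorer and a Slack incoming webhook; the tag key, team value, limit, and webhook URL are all illustrative.

```python
import datetime

import boto3
import requests

ce = boto3.client("ce", region_name="us-east-1")

# Illustrative values; replace with your own cost-allocation tag, limit and webhook URL.
TEAM_TAG_VALUE = "benefits-service"
DAILY_LIMIT = 250.0
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/EXAMPLE/WEBHOOK/URL"

end = datetime.date.today()
start = end - datetime.timedelta(days=1)

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    Filter={"Tags": {"Key": "team", "Values": [TEAM_TAG_VALUE]}},
)
total = response["ResultsByTime"][0]["Total"]["UnblendedCost"]
spend, unit = float(total["Amount"]), total["Unit"]

# Post to the team channel only when yesterday's spend exceeded the agreed limit.
if spend > DAILY_LIMIT:
    requests.post(
        SLACK_WEBHOOK_URL,
        json={"text": f"{TEAM_TAG_VALUE} spent {spend:,.2f} {unit} yesterday, above the {DAILY_LIMIT:,.2f} limit."},
        timeout=10,
    )
```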
Your answer:
We manage cost as a team, get alerts and look for savings.
Comprehensive Cost Management and Optimisation represents a mature stage in your organisation’s journey toward efficient cloud spending. At this point, cost transparency and accountability span multiple layers—from frontline developers to senior leadership. You have automated alerting structures in place to catch anomalies quickly, you track cost optimisation initiatives with the same rigor as feature delivery, and you’ve embedded cost considerations into operational runbooks. Below are key characteristics and actionable guidance to maintain or further refine this approach:
Robust and Granular Alerting Mechanisms
- In a comprehensive model, you’ve configured multi-tier alerts that scale with the significance of cost changes. For instance, a modest daily threshold might notify a DevOps Slack channel, while a larger monthly threshold might email department heads, and an even bigger spike might trigger urgent notifications to executives.
- Ensure these alerts are not just numeric triggers (e.g., “spend exceeded $X”), but also usage anomaly detections. For example, if a region’s usage doubles overnight or a new instance type’s cost surges unexpectedly, the right people receive immediate alerts.
- Each major cloud provider offers flexible budgeting and cost anomaly detection features that can back these alerts.
Cross-Functional Cost Review Cadences
- You have regular reviews—often monthly or quarterly—where finance, engineering, operations, and leadership analyze trends, track the outcomes of previous optimisation initiatives, and identify new areas of improvement.
- During these sessions, metrics might include cost per application, cost per feature, cost as a percentage of revenue, or carbon usage if sustainability is also a focus. This fosters a culture where cost is not an isolated item but a dimension of overall business performance.
Prioritisation of Optimisation Backlog
- In a comprehensive system, cost optimisation tasks are often part of your backlog or project management tool (e.g., Jira, Trello, or Azure Boards). Engineers and product owners treat these tasks with the same seriousness as performance issues or feature requests.
- The backlog might include refactoring older services to more modern compute platforms, consolidating underutilised databases, or migrating certain workloads to cheaper regions. By regularly ranking and scheduling these items, you show a commitment to continuous improvement.
End-to-End Visibility into Cost Drivers
- True comprehensiveness means your teams can pinpoint exactly which microservice, environment, or user activity drives each cost spike. This is usually achieved through detailed tagging strategies, advanced cost allocation methods, or third-party tools that break down usage in near-real-time.
- If a monthly cost review reveals that data transfer is trending upward, you can directly tie it to a new feature that streams large files, or a microservice that inadvertently calls an external API from an expensive region. You then take targeted action to reduce those costs.
Forecasting and Capacity Planning
- Beyond reviewing past or current costs, you systematically forecast future spend based on product roadmaps and usage growth. This might involve building predictive models or leveraging built-in vendor forecasting tools.
- Finance and engineering collaborate to refine these forecasts, adjusting resource reservations or scaling strategies accordingly. For example, if you anticipate doubling your user base in Q3, you proactively adjust your reservations or budgets to avoid surprises.
Policy-Driven Automation and Governance
- Comprehensive cost management often includes policy enforcement. For instance, you may have automated guardrails that prevent developers from spinning up large GPU instances without approval, or compliance checks that ensure data is placed in cost-efficient storage tiers when not actively in use.
- Some organisations implement custom or vendor-based governance solutions that block resource creation if it violates cost or security policies. This ensures cost best practices become part of the standard operating procedure.
Continuous Feedback Loop and Learning
- The hallmark of a truly comprehensive approach is the cyclical process of learning from cost data, making improvements, measuring outcomes, and then repeating. Over time, each iteration yields a more agile and cost-efficient environment.
- Leadership invests in advanced analytics, A/B testing for cost optimisation strategies (e.g., testing a new auto-scaling policy in one region), and might even pilot different cloud vendors or hybrid deployments to see if further cost or performance benefits can be achieved.
Scaling Best Practices Across the Organisation
- In a large enterprise, you may have multiple business units or product lines. A comprehensive approach ensures that cost management practices do not remain siloed. You create a central repository of best practices, standard operating procedures, or reference architectures to spread cost efficiency across all teams.
- This might manifest as an internal “community of practice” or “center of excellence” for FinOps, where teams share success stories, compare metrics, and continually push the envelope of optimisation.
Aligning Cost Optimisation with Business Value
- Ultimately, cost optimisation should serve the broader strategic goals of the business—whether to improve profit margins, free up budget for innovation, or support sustainability commitments. In the most advanced organisations, decisions around cloud architecture tie directly to metrics like cost per transaction, cost per user, or cost per new feature.
- Senior executives see not just raw cost figures but also how those costs translate to business outcomes (e.g., revenue, user retention, or speed of feature rollout). This alignment cements cost optimisation as a catalyst for better products, not just an expense reduction exercise.
Evolving Toward Continuous Refinement
- Even with a high level of maturity, the cloud landscape shifts rapidly. Providers introduce new instance types, new discount structures, or new services that might yield better cost-performance ratios. An ongoing commitment to learning and experimentation keeps you ahead of the curve.
- Your monthly or quarterly cost reviews might always include a segment to evaluate newly released vendor features or pricing models. By piloting or migrating to these offerings, you ensure you do not stagnate in a changing market.
In short, “Comprehensive Cost Management and Optimisation” implies that every layer—people, process, and technology—is geared toward continuous financial efficiency. Alerts ensure no cost anomaly goes unnoticed, cross-functional reviews nurture a culture of accountability, and an active backlog of cost-saving initiatives keeps engineering engaged. Over time, this integrated approach can yield substantial and sustained reductions in cloud spend while maintaining or even enhancing the quality and scalability of your services.
Keep doing what you’re doing, and consider writing up your experiences in blog posts or internal knowledge bases, then submitting pull requests to this guidance so that others can learn from your successes. By sharing, you extend the culture of cost optimisation not only across your organisation but potentially across the broader industry.
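To make the multi-tier alerting described earlier in this section concrete, the sketch below creates a monthly budget with two notification thresholds using AWS Budgets; the account ID, amount, and email addresses are placeholders, and other providers offer comparable budget and anomaly-alert features.

```python
import boto3

budgets = boto3.client("budgets", region_name="us-east-1")
ACCOUNT_ID = "123456789012"  # placeholder account ID

budgets.create_budget(
    AccountId=ACCOUNT_ID,
    Budget={
        "BudgetName": "monthly-cloud-spend",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            # First tier: warn the delivery team at 80% of the monthly budget.
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "devops-team@example.gov.uk"}],
        },
        {
            # Second tier: escalate to service owners when the full budget is reached.
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 100.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "service-owner@example.gov.uk"}],
        },
    ],
)
```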
How do you choose where to run workloads and store data?
Your answer:
Everything in one region.
How to determine if this is good enough
Moderate Tolerance for Region-Level Outages
- You may handle an AZ-level failure but might be vulnerable if the entire region goes offline.
Improved Availability Over Single AZ
- Achieving at least multi-AZ deployment typically satisfies many public sector continuity requirements, referencing NCSC's resilience guidelines.
Cost vs. Redundancy
- Additional AZ usage may raise costs (like cross-AZ data transfer fees), but many find the availability trade-off beneficial.
If you still have concerns about entire regional outages or advanced compliance demands for multi-region or cross-geography distribution, consider a multi-region approach. NIST SP 800-53 CP (Contingency Planning) controls often encourage broader geographical resiliency if your RPO/RTO goals are strict.
Your answer:
We use another region for non-production workloads or backup and recovery.
How to determine if this is good enough
Basic Multi-Region DR or Lower-Cost Testing
- You might offload dev/test to another region or keep backups in a different region for DR compliance.
Minimal Cross-Region Dependencies
- If you only replicate data or run certain non-critical workloads in the second region, partial coverage might suffice.
Meets Certain Compliance Needs
- Some public sector entities require data in at least two distinct legal jurisdictions; this setup may address that in limited scope.
If entire production workloads are mission-critical for national services or must handle region-level outages seamlessly, you might consider a more robust multi-region active-active approach. NIST SP 800-34 DR guidelines often advise multi-region for critical continuity.
Your answer:
We pick regions based on cost, performance, or sustainability.
How to determine if this is good enough
Advanced Region Flexibility
- You pick the region that offers the best HPC, GPU, or AI services, or one with the lowest carbon footprint or cost.
Sustainability & Cost Prioritised
- If your organisation strongly values green energy sourcing or cheaper nighttime rates, you shift workloads accordingly.
No Hard Legal Data Residency Constraints
- You can store data outside the UK or EEA as permitted, and no critical constraints block you from picking any global region.
If you want to adapt in real time based on cost or carbon intensity or maintain advanced multi-region failover automatically, consider a dynamic approach. NCSC’s guidance on green hosting or multi-region usage and NIST frameworks for dynamic cloud management can guide advanced scheduling.
Your answer:
We automatically move workloads to different regions to save money or energy.
How to determine if this is good enough
Your organisation pursues a true multi-region, multi-AZ dynamic approach. Automated processes shift workloads based on real-time cost (spot prices) or carbon intensity, while preserving performance and compliance. This may be “good enough” if:
Highly Automated Infrastructure
- You rely on complex orchestration or container platforms that can scale or move workloads near-instantly.
Advanced Observability
- A robust system of metrics, logging, and anomaly detection ensures seamless adaptation to cost or sustainability triggers.
Continuous Risk & Compliance Checks
- Even though workloads shift globally, you remain compliant with relevant data sovereignty or classification rules, referencing NCSC data handling or departmental policies.
Nevertheless, you can refine HPC or AI edge cases, adopt chaos testing for dynamic distribution, or integrate advanced zero trust for each region shift. NIST SP 800-207 zero-trust architecture principles can help ensure each region transition remains secure.
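A highly simplified illustration of cost-driven placement: compare current spot prices for the same instance type across a shortlist of candidate regions and report the cheapest. The regions and instance type are illustrative, and a real implementation would also weigh carbon intensity, latency, and data-residency rules before moving any workload.

```python
import datetime

import boto3

CANDIDATE_REGIONS = ["eu-west-2", "eu-west-1", "eu-north-1"]  # illustrative shortlist
INSTANCE_TYPE = "m5.large"

def latest_spot_price(region: str) -> float:
    """Return the lowest current Linux spot price for the instance type in a region."""
    ec2 = boto3.client("ec2", region_name=region)
    history = ec2.describe_spot_price_history(
        InstanceTypes=[INSTANCE_TYPE],
        ProductDescriptions=["Linux/UNIX"],
        StartTime=datetime.datetime.utcnow(),
    )
    return min(float(item["SpotPrice"]) for item in history["SpotPriceHistory"])

prices = {region: latest_spot_price(region) for region in CANDIDATE_REGIONS}
cheapest = min(prices, key=prices.get)
print(f"Spot prices: {prices}")
print(f"Cheapest candidate region for {INSTANCE_TYPE}: {cheapest}")
```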
Data
How do you manage data storage and usage?
Your answer:
Teams decide for themselves how to store and use data.
How to determine if this is good enough
In the “Decentralised and Ad Hoc Management” stage, each department, team, or project might handle data in its own way, with minimal organisational-level policies or guidance. You might consider this setup “good enough” under the following conditions:
Very Small or Low-Risk Datasets
- If your organisation handles mostly unclassified or minimal-risk data, and the volume is modest enough that the cost of implementing formal oversight isn’t easily justified.
Early Phases or Pilot Projects
- You might be in a startup-like environment testing new digital services, with no immediate demand for robust data governance.
Minimal Regulatory/Compliance Pressure
- If you’re not subject to strict data protection, privacy regulations, or public accountability—for example, a small-scale internal project with no personally identifiable information (PII).
Low Complexity
- If your data usage is straightforward (e.g., only a few spreadsheets or simple cloud storage buckets), with minimal sharing across teams or external partners.
However, for most UK public sector bodies, even “unofficial” data systems can become large or sensitive over time. In addition, compliance requirements from the UK GDPR, the Data Protection Act 2018, and departmental data security policies (e.g., Government Security Classifications) often dictate at least a baseline level of oversight. Therefore, truly “Decentralised and Ad Hoc” management is rarely sustainable.
Your answer:
Teams follow our organisation’s policies on data storage and usage.
How to determine if this is good enough
Here, you’ve moved from having no formal oversight to each team at least keeping track of their data usage—potentially in spreadsheets or internal wikis. You might view this as sufficient if:
Moderate Complexity but Clear Ownership
- Each department or project has well-defined data owners who can articulate what data they store, how sensitive it is, and where it resides.
Manual Policy is Consistently Applied
- You have a basic organisational data policy, and each team enforces it themselves, without heavy central governance.
- So far, you haven’t encountered major incidents or confusion over compliance.
Low Rate of Cross-Team Data Sharing
- If data seldom flows between departments, manual documentation might not be overly burdensome.
Acceptable Accuracy
- Although the process is manual, your teams keep it reasonably up to date. External audits or departmental reviews find no glaring misalignment.
However, manual adherence becomes error-prone as soon as data volumes grow or cross-team collaborations increase. The overhead of maintaining separate documentation can lead to duplication, versioning issues, or compliance gaps—particularly in the UK public sector, where data sharing among services can escalate quickly.
Your answer:
We inventory and classify all our data.
How to determine if this is good enough
Now you have a formal data inventory that might combine manual inputs from teams and automated scans to detect data types (e.g., presence of national insurance numbers or other PII). This can be “good enough” if:
You Know Where Your Data Lives
- You’ve mapped key data stores—cloud buckets, databases, file systems—and keep these records relatively up to date.
Consistent Data Classification
- You apply recognised categories like “OFFICIAL,” “OFFICIAL-SENSITIVE,” “RESTRICTED,” or other departmental terms.
- Teams are aware of which data must follow special controls (e.g., personal data under UK GDPR, payment card data under PCI-DSS, etc.).
-
Proactive Compliance
- You can respond to data subject requests or FOI (Freedom of Information) inquiries quickly, because you know which systems contain personal or sensitive data.
- Auditors or data protection officers can trace the location of specific data sets.
-
Clarity on Retention and Disposal
- You have at least basic retention timelines for certain data types (e.g., “Keep these records for 2 years, then archive or securely delete”).
- This helps you reduce storage bloat and security risk.
If your organisation can maintain this inventory without excessive overhead, meet compliance requirements, and quickly locate or delete data upon request, you might be satisfied. However, if data usage is growing or you’re facing more complex analytics and cross-team usage, you likely need more advanced governance, lineage tracking, and automation.
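Where automated scans supplement manual inventory entries, even a small script can surface likely personal data for classification review. Below is a minimal sketch in Python; the file names, sample contents, and the simplified National Insurance number pattern are illustrative assumptions rather than a definitive detection rule, and production scanners cover far more data types.
import re

# Simplified, illustrative pattern for UK National Insurance numbers,
# e.g. "QQ 12 34 56 C"; real scanners use stricter rules and more data types.
NINO_PATTERN = re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b")

def classify_text(name: str, text: str) -> str:
    """Return a suggested classification for human review, based on a naive PII scan."""
    if NINO_PATTERN.search(text):
        return f"{name}: possible personal data - review for OFFICIAL-SENSITIVE handling"
    return f"{name}: no obvious personal data found - confirm as OFFICIAL or lower"

if __name__ == "__main__":
    samples = {
        "citizen_records.csv": "Name,NINO\nJane Doe,QQ 12 34 56 C",
        "open_data_extract.csv": "Ward,Population\nCentral,10432",
    }
    for filename, contents in samples.items():
        print(classify_text(filename, contents))
The point is not the specific pattern but that scan results feed the inventory as suggestions, with a data owner confirming the final classification.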
Your answer:
We inventory and classify all our data and review this often. We understand how data moves through our systems but this isn’t always documented.
How to determine if this good enough
In this phase, your organisation has established processes to classify and review data regularly. You likely have:
-
Well-Established Inventory and Processes
- You know exactly where crucial data resides (cloud databases, file shares, analytics platforms).
- Teams reliably classify new data sets, typically with centralised or automated oversight.
-
Ongoing Compliance Audits
- Internal audits or external assessors confirm that data is generally well-managed, meeting security classifications and retention rules.
- Incidents or policy violations are rare and quickly addressed.
-
Partial Lineage Documentation
- Teams can verbally or via some diagrams explain how data flows through the organisation.
- However, it’s not uniformly captured in a single system or data catalog.
-
Confidence in Day-to-Day Operations
- You have fewer unexpected data exposures or confusion over who can access what.
- Cost inefficiencies or data duplication might still lurk if lineage isn’t fully integrated into everyday tools.
If your broad compliance posture is solid, and your leadership or data protection officer is satisfied with the frequency of reviews, you might remain comfortable here. Yet incomplete lineage documentation can hamper advanced analytics, complicate cross-team data usage, and reduce the efficiency of data discovery.
Your answer:
We have a catalogue describing all our data, its quality, use and origins.
How to determine if this good enough
In this final stage, your organisation has an extensive data catalog that covers:
-
Comprehensive Metadata and Glossary
- You store definitions, owners, classification details, transformations, and usage patterns in a single platform.
- Non-technical staff can also search and understand data context easily (e.g., “Which dataset includes housing records for local authorities?”).
-
Automated Lineage from Source to Consumption
- ETL pipelines, analytics jobs, and data transformations are captured, so you see exactly how data moves from one place to another.
- If a compliance or FOI request arises, you can trace the entire path of relevant data instantly.
-
Embedded Data Quality and Governance
- The catalog might track data quality metrics (e.g., completeness, validity, duplicates) and flag anomalies.
- Governance teams can set or update policy rules in the catalog, automatically enforcing them across various tools.
-
High Reusability and Collaboration
- Teams discover and reuse existing data sets rather than re-collect or replicate them.
- Cross-departmental projects benefit from consistent definitions and robust lineage, accelerating digital transformation within the UK public sector.
If you meet these criteria with minimal friction or overhead, your advanced catalog approach is likely “good enough.” Nonetheless, technology and data demands evolve—particularly with new AI/ML, geospatial, or real-time streaming data. Ongoing iteration keeps your catalog valuable and aligned with shifting data strategies.
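The specific catalogue product matters less than the metadata it captures. As a minimal sketch of the kind of record such a catalogue might hold (the field names and example values are assumptions for illustration, not a standard schema):
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One dataset as a data catalogue might describe it: ownership,
    classification, lineage and quality in a single, searchable record."""
    name: str
    owner: str                     # accountable data owner
    classification: str            # e.g. "OFFICIAL" or "OFFICIAL-SENSITIVE"
    source_systems: list = field(default_factory=list)   # upstream lineage
    consumers: list = field(default_factory=list)        # downstream users
    quality_notes: str = ""

housing = CatalogEntry(
    name="housing_records",
    owner="Housing Services Data Owner",
    classification="OFFICIAL-SENSITIVE",
    source_systems=["tenancy_management_db"],
    consumers=["annual_housing_report", "foi_search_index"],
    quality_notes="98% completeness; duplicates flagged weekly",
)
print(f"{housing.name} ({housing.classification}) owned by {housing.owner}")
In a real catalogue these records are populated largely by automated lineage capture from pipelines, with owners curating the descriptive fields.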
What is your approach to data retention? [change your answer]
Your answer:
We have policies and everyone knows them.
How to determine if this good enough
If your entire organisation has a defined data retention policy—aligning with UK legislative requirements (such as the Data Protection Act 2018, UK GDPR) or departmental mandates—and all relevant teams know they must comply, you might consider this stage “good enough” under these conditions:
-
Clear, Written Policy
- Your organisation publishes retention durations for various data types, including official government data, personal data, or any data with a defined statutory retention period.
-
Widespread Awareness
- Projects and programs understand how long to store data (e.g., 2 years, 7 years, or indefinite for certain record types).
- Staff can articulate the policy at a basic level when asked.
-
Minimal Enforcement Overhead
- If your data is relatively small or low-risk, the cost of automating or auditing might not seem immediately justified.
- No major incidents or compliance breaches have surfaced yet.
-
Simplicity Over Complexity
- You have a “one-size-fits-all” approach to retention because your data usage is not highly diverse.
- The overhead of implementing multiple retention categories might not be warranted yet.
In short, if you maintain a straightforward environment and your leadership sees no pressing issues with data retention, organisation-level policy awareness might suffice. However, for many UK public sector bodies, data sprawl and diverse workloads can quickly complicate retention, making manual approaches risky.
Your answer:
Teams must show they follow our policies.
How to determine if this good enough
In this stage, each project/program must explicitly confirm they follow the retention rules. This might happen through project gating, sign-offs, or periodic reviews. You can consider it “good enough” if:
-
Documented Accountability
- Each project lead or manager signs a statement or includes a section in project documentation confirming adherence to the retention schedule.
- This accountability often fosters better data hygiene.
-
Compliance Embedded in Project Lifecycle
- When new projects or services start, part of the onboarding includes discussing data retention needs.
- Projects are less likely to “slip” on retention because they must address it at key milestones.
-
Reduced Risk of Oversight
- If an audit occurs, you can point to each project’s attestation as evidence of compliance.
- This stage often prevents ad hoc or “forgotten” data sets from persisting indefinitely.
However, attestation can be superficial if not backed by validation or partial audits. Teams might sign off on compliance but still store data in ways that violate policy. As data footprints grow, manual attestations can fail to catch hidden or newly spun-up environments.
Your answer:
We check that we’re following policy and fix any problems we find.
How to determine if this good enough
Once regular audits and reviews are in place, your organisation systematically verifies whether teams are adhering to the mandated retention policies. This can be “good enough” if:
-
Scheduled, Transparent Audits
- Every quarter or half-year, a designated group (e.g., an internal compliance team) or external auditor reviews data lifecycle settings, actual usage, and retention logs.
-
Actionable Findings
- Audit outcomes lead to real change—if a project is over-retaining or missing a lifecycle rule, they must fix it promptly, with a follow-up check.
-
Reduction in Non-Compliance Over Time
- Each review cycle sees fewer repeated issues or new violations, indicating the process is effective.
-
Support from Leadership
- Senior leadership or governance boards take these findings seriously, dedicating resources to address them.
If your audits reveal minimal breaches and the cycle of reporting → fixing → re-checking runs smoothly, you might meet the operational needs of most public sector compliance frameworks. However, as data volumes scale, purely manual or semi-annual checks may miss real-time issues, leading to potential non-compliance between audits.
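Parts of these audits can be scripted rather than done by hand. As a hedged sketch using the AWS SDK for Python (boto3), assuming S3 is where the data lives and credentials are already configured, this lists buckets that have no lifecycle (retention) rules at all so an auditor can follow up:
import boto3
from botocore.exceptions import ClientError

def buckets_missing_lifecycle() -> list:
    """Return names of S3 buckets with no lifecycle configuration."""
    s3 = boto3.client("s3")
    missing = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_lifecycle_configuration(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchLifecycleConfiguration":
                missing.append(name)  # no retention rules set - flag for audit
            else:
                raise
    return missing

if __name__ == "__main__":
    for name in buckets_missing_lifecycle():
        print(f"Audit finding: bucket '{name}' has no lifecycle/retention rules")
Equivalent checks exist for other providers and storage services; the value is that findings are generated consistently between formal audit cycles.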
Your answer:
We follow policy, make checks and record exceptions in a central risk register.
How to determine if this good enough
At this stage, your organisation recognises that not all data fits neatly into standard retention policies. Some sensitive projects or legal hold scenarios might require exceptions. You might be “good enough” if:
-
Risk Awareness
- You systematically capture exceptions—like extended retention for litigation or indefinite archiving for historical records—within your official risk register.
-
Clear Exception Processes
- Teams that need longer or shorter retention follow a documented procedure, including justification and sign-off from legal or governance staff.
-
Risk-Based Decision Making
- Leadership reviews these exceptions periodically and weighs the potential risks (e.g., data breach, cost overhead, privacy concerns) against the need for extended retention.
-
Traceable Accountability
- Each exception has an owner who ensures compliance with any additional safeguards (e.g., encryption, restricted access).
Such a model keeps compliance tight, as unusual retention cases are formally recognised and managed. Still, some organisations lack robust automation or real-time checks that link risk registers to actual data settings, leaving room for human error.
Your answer:
We follow policy and use automated cloud tools to check that we do.
How to determine if this good enough
In this final, mature stage, your organisation uses automation to continuously track, enforce, and remediate data retention policies across all environments. It’s generally considered “good enough” if:
-
Policy-as-Code
- Retention rules are embedded in your Infrastructure as Code templates or pipelines. When new data storage is provisioned, the lifecycle or retention policy is automatically set.
-
Real-Time or Near Real-Time Enforcement
- If a project forgets to configure lifecycle rules or tries to extend retention beyond the allowed maximum, an automated policy corrects it or triggers an alert.
-
Central Visibility
- A dashboard shows the overall compliance posture in near-real-time, flagging exceptions or misconfigurations.
- Governance teams can quickly drill into any resource that violates the standard.
-
Minimal Manual Intervention
- Staff rarely need to manually fix retention settings; automation handles the majority of routine tasks.
- Audits confirm a high compliance rate, with issues addressed rapidly.
Although this represents a best-practice scenario, continuous improvements arise as new cloud services emerge or policy requirements change. Ongoing refinement ensures your automated approach stays aligned with departmental guidelines, security mandates, and potential changes in UK public sector data legislation.
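As one concrete illustration of the enforcement step, here is a minimal sketch using boto3, assuming a two-year retention standard and a hypothetical bucket name; in practice the rule would usually be set by your Infrastructure as Code templates or a pipeline guardrail rather than an ad hoc script:
import boto3

RETENTION_DAYS = 730  # assumed organisational standard: keep for two years

def enforce_retention(bucket_name: str) -> None:
    """Apply (or overwrite) a simple expiration rule so objects are
    deleted automatically once the retention period has passed."""
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket_name,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "standard-retention",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},          # apply to all objects
                    "Expiration": {"Days": RETENTION_DAYS},
                }
            ]
        },
    )

if __name__ == "__main__":
    enforce_retention("example-departmental-records")  # hypothetical bucket name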
Governance
How do you decide who handles the different aspects of cloud security? [change your answer]
Your answer:
We don’t have a standard approach, which leads to gaps and misunderstandings.
How to determine if this good enough
When an organisation minimally accounts for the shared responsibility model, it often treats cloud services like a traditional outsourcing arrangement, assuming the provider handles most (or all) tasks. This might be considered “good enough” if:
-
Limited Complexity or Strictly Managed Services
- You consume only highly managed or software-as-a-service (SaaS) solutions, so the cloud vendor’s scope is broad, and your responsibilities are minimal.
- In such cases, misunderstandings about lower-level responsibilities might not immediately cause problems.
-
Small Scale or Low-Risk Workloads
- You deploy minor pilot projects or non-sensitive data with minimal security or compliance overhead.
- The cost and effort of clarifying responsibilities could feel disproportionate.
-
Short-Term or Experimental Cloud Usage
- You might be running proof-of-concepts or test environments that you can shut down quickly if issues arise.
- If a gap in responsibility surfaces, it may not significantly impact operations.
However, as soon as you scale up, handle sensitive information, or rely on the cloud for critical services, ignoring the shared responsibility model becomes risky. For most UK public sector bodies, data security, compliance, and operational continuity are paramount—overlooking even a small portion of your obligations can lead to non-compliance or service disruptions.
Your answer:
Some people know about the shared responsibility model, but it’s not always used.
How to determine if this good enough
At this stage, your teams recognise that some aspects of security, patching, and compliance belong to you and others fall to the cloud provider. You might see this as “good enough” if:
-
General Understanding Among Key Staff
- Cloud architects, security leads, or DevOps teams can articulate the main points of the shared responsibility model.
- They know the difference between SaaS, PaaS, and IaaS responsibilities.
-
Minimal Incidents
- You’ve not encountered major operational issues or compliance failures that trace back to confusion over who handles what.
- Day-to-day tasks (e.g., OS patches, DB backups) proceed smoothly in most cases.
-
No Large, Complex Workloads
- If your usage is still relatively simple or in early phases, you might not need a fully systematic approach yet.
However, as soon as your environment grows or you onboard new teams or more complex solutions, “basic awareness” can be insufficient. If you rely on an ad hoc approach, you risk missing certain obligations (like security event monitoring or identity governance) and undermining consistent compliance.
Your answer:
We use the shared responsibility model.
How to determine if this good enough
At this level, your organisation actively references the shared responsibility model when selecting, deploying, or scaling cloud services. You might consider this approach “good enough” if:
-
Consistent Inclusion in Architecture and Procurement
- Whenever a new application is planned, an architecture review clarifies who will handle patching, logging, network security, etc.
- The procurement or project scoping includes the vendor’s responsibilities vs. yours, documented in service agreements.
-
Reduced Misconfigurations
- You see fewer incidents caused by someone assuming “the vendor handles it.”
- Teams rarely have to scramble for post-deployment fixes related to neglected responsibilities.
-
Cross-Functional Alignment
- Security, DevOps, finance, and governance teams share the same interpretation of the model, preventing blame shifts or confusion.
-
Auditable Evidence
- If challenged by an internal or external auditor, you can present decision logs or architecture documents showing how you accounted for each aspect of shared responsibility.
If your cloud consumption decisions reliably incorporate these checks and remain transparent to all stakeholders, you might meet day-to-day operational needs. Still, you can enhance the process by making it even more strategic, with regular updates and risk-based evaluations.
Your answer:
We plan with the shared responsibility model, review often, and record responsibilities.
How to determine if this good enough
At this stage, your organisation not only references shared responsibilities when building or buying new solutions, but actively uses them to shape strategic roadmaps and service-level agreements. You might see this as “good enough” if:
-
Proactive Vendor Collaboration
- You regularly discuss boundary responsibilities with the cloud provider, clarifying tasks that remain in-house and tasks the vendor can adopt.
- Contract renewals or expansions include updates to these responsibilities if needed.
-
Routine Audits on Allocation of Responsibilities
- Perhaps every 6–12 months, you review how the model is working in practice—are vendor-managed responsibilities handled properly? Are your in-house tasks well-executed?
-
Clear Documentation of In-House Retained Tasks
- For tasks like specialised security controls, data classification, or unique compliance checks, you deliberately keep them in house. You note these exceptions in your governance or vendor communication logs.
-
Enhanced Risk Management
- The risk register or compliance logs show minimal “unknown responsibility” gaps, and there’s a structured process for addressing new or changing requirements.
If your cloud planning and vendor engagements revolve around the shared responsibility model, ensuring alignment at both strategic and operational levels, you might meet advanced governance requirements in the UK public sector. Still, you can deepen the approach to ensure ongoing optimisation of cost, performance, and compliance.
Your answer:
Shared responsibility is at the centre of all our cloud decisions. We review often to make sure we have clear roles and get the best value.
How to determine if this good enough
This final maturity level sees the shared responsibility model as a cornerstone of your cloud strategy:
-
Continuous Governance and Optimisation
- Teams treat shared responsibilities as a dynamic factor—constantly reviewing how tasks, risk, or cost can be best allocated between you and the vendor.
- It’s integrated with your architecture, security, compliance, and financial planning.
-
Live Feedback Loops
- When new cloud features or vendor service upgrades appear, you evaluate if shifting responsibilities (e.g., to a new managed service) is beneficial or if continuing in-house control is more cost-effective or necessary for compliance.
-
Frequent Collaboration with Vendor
- You hold regular “architecture alignment” or “service optimisation” sessions with the cloud provider, ensuring your responsibilities remain well-balanced as your environment evolves.
-
High Transparency and Minimal Surprises
- Incidents or compliance checks rarely expose unknown gaps.
- You have robust confidence in your risk management, cost forecasting, and operational readiness.
If you operate at this level, you’re likely reaping the full benefit of cloud agility, cost optimisation, and compliance. Even so, continued vigilance is needed to adapt to new regulations, technology changes, or organisational shifts.
How do you manage and store build artefacts (files created when building software)? [change your answer]
Your answer:
We don’t, and people often change code on live servers.
How to determine if this good enough
In this stage, your organisation lacks formal processes to create or store build artifacts. You might find this approach “good enough” if:
-
Limited or Non-Critical Services
- You run only small-scale or temporary services where changes can be handled manually, and downtime or rollback is not a major concern.
-
Purely Experimental or Low-Sensitivity
- The data or systems you manage are not subject to stringent public sector regulations or sensitivity classifications (e.g., prototyping labs, dev/test sandboxes).
-
Single-Person or Very Small Team
- A single staff member manages everything, so there’s minimal confusion about versions or changes.
- The risk of accidental overwrites or lost code is recognised but considered low priority.
However, even small teams can face confusion if code is edited live on servers, making it hard to replicate environments or roll back changes. For most UK public sector needs—especially with compliance or audit pressures—lack of artifact management eventually becomes problematic.
Your answer:
We rebuild artefacts in each environment, which can cause problems.
How to determine if this good enough
In this scenario, your organisation has some automation but rebuilds the software in dev, test, and production separately. You might see this as “good enough” if:
-
Low Risk of Version Drift
- The codebase and dependencies rarely change, or you have a small dev team that carefully ensures each environment has identical build instructions.
-
Limited Formality
- If you’re still in early stages or running small services, you might tolerate the occasional mismatch between environments.
-
Few Dependencies
- If your project has very few external libraries or minimal complexity, environment-specific rebuilds don’t cause many issues.
However, environment-specific rebuilds can cause subtle differences, making debugging or compliance audits more complex—especially in the UK public sector, where consistent deployments are often required to ensure stable and secure services.
Your answer:
We save artefacts, sometimes with version control, but there’s no focus on making them secure or unchangeable.
How to determine if this good enough
Here, your organisation has progressed to storing build artifacts in a central place, often with versioning. This can be considered “good enough” if:
-
You Can Reproduce Past Builds
- You label or tag artifacts, so retrieving an older release is relatively straightforward.
- This covers basic audit or rollback needs.
-
Moderate Risk Tolerance
- You handle data or applications that don’t require the highest security or immutability (e.g., citizen-facing website with low data sensitivity).
- Rarely face formal audits demanding cryptographic integrity checks.
-
No Enforcement of Immutability
- Your system might allow artifact overwrites or deletions, but your teams rarely abuse this.
- The risk of malicious or accidental tampering is minimal under current conditions.
While this is a decent midpoint, the lack of immutability or strong security measures can pose challenges if you must prove the authenticity or integrity of a specific artifact, especially in regulated public sector contexts.
Your answer:
We lock down artefact dependencies and check them with digital signatures or hashes.
How to determine if this good enough
Here, your build pipelines ensure that not only your application code but also every library or dependency is pinned to a specific version, and you verify these via cryptographic means. You might consider this approach “good enough” if:
-
High Confidence in Artifact Integrity
- You can guarantee the code and libraries used in staging match those in production.
- Security incidents involving compromised packages are less likely to slip through.
-
Robust Supply Chain Security
- Attackers or misconfigured servers have a harder time injecting malicious code or outdated dependencies.
- This is crucial for UK public sector services handling personal or sensitive data.
-
Comprehensive Logging
- You track which pinned versions (e.g., libraryA@v2.3.1) were used for each build.
- This improves forensic investigations if a vulnerability is discovered later.
-
Controlled Complexity
- Pinning and verifying dependencies might slow down upgrades or require more DevOps effort, but your teams accept it as a valuable security measure.
If you rely on pinned dependencies and cryptographic verification, you’re covering a big portion of software supply chain risks. However, you might still enhance final artifact immutability or align with advanced threat detection in your build process.
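One common way to realise this is hash-pinned dependencies (for example, pip installed with --require-hashes against a fully pinned requirements file) plus an integrity check on the built artefact itself. The verification step is straightforward; a minimal sketch follows, with the artefact path and expected digest shown as placeholder assumptions that would normally come from your lock file or artefact manifest:
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a build artefact or downloaded package."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected_sha256: str) -> None:
    """Fail the build if the artefact does not match its recorded hash."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(
            f"Integrity check failed for {path}: expected {expected_sha256}, got {actual}"
        )
    print(f"{path} verified OK")

if __name__ == "__main__":
    # Placeholder values - in a real pipeline these come from a manifest, not hard-coded strings.
    verify_artifact(Path("dist/service-1.4.2.tar.gz"), "0" * 64)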
Your answer:
All build artefacts are unchangeable, signed, and stored for audits. We can recreate any environment if needed.
How to determine if this good enough
At this final stage, your organisation has robust, end-to-end artifact management. You consider it “good enough” if:
-
Full Immutability and Cryptographic Assurance
- Every production artifact is sealed (signed), ensuring no one can alter it post-build.
- You store these artifacts in a tamper-proof or strongly controlled environment (e.g., WORM storage).
-
Long-Term Retention for Audits
- You can quickly produce the exact code, libraries, and container images used in production months or years ago, aligning with public sector mandates (e.g., two years or more where relevant).
-
Ability to Recreate Environments
- If an audit or legal inquiry arises, you can spin up the environment from these artifacts to demonstrate what was running at any point in time.
-
Compliance with Regulatory/Criminal Investigation Standards
- If part of your remit includes potential criminal investigations (e.g., digital forensics for certain public sector services), the chain of custody for your artifacts is guaranteed.
If you meet these conditions, you are at a high maturity level, ensuring minimal risk of supply chain attacks, compliance failures, or untraceable changes. Periodic revalidations keep your process evolving alongside new threats or technologies.
How do you manage and update access policies, and how do you tell people about changes? [change your answer]
Your answer:
We don’t have formal policies. People decide based on what they think is best.
How to determine if this good enough
When access policies are managed in an ad-hoc manner:
-
Small Scale, Low Risk
- You may be a small team with limited scope. If you only handle low-sensitivity or non-critical information, an ad-hoc approach might not have caused major issues yet.
-
Minimal Regulatory Pressures
- If you’re in a part of the public sector not subject to specific frameworks (e.g., ISO 27001, Government Security Classifications), you might feel less pressure to formalise policies.
-
Very Basic or Temporary Environment
- You could be running short-lived experiments or pilot projects with no extended lifespans, so detailed policy management feels excessive.
However, this level of informality quickly becomes a liability, especially in the UK public sector. Requirements for compliance, security best practices, and data protection (including UK GDPR considerations) often demand a more structured approach. Inconsistent or undocumented policies can lead to significant vulnerability and confusion when staff rotate or the organisation scales up.
Your answer:
We document our access policies, but updates and communication are irregular.
How to determine if this good enough
At this stage, you have a documented policy—likely created once and updated occasionally. You might consider it “good enough” if:
-
Visibility of the Policy
- Stakeholders can find it in a shared repository, intranet, or file system.
- There’s a moderate awareness among staff.
-
Some Level of Consistency
- Access controls typically align with the documented policy, though exceptions may go unnoticed.
- Projects mostly follow the policy, but not always systematically.
-
Few or Minor Incidents
- You haven’t encountered major security or compliance issues from poor access control.
- Audits might find some improvement areas but no critical failings.
However, a lack of regular updates or structured communication means staff may be uninformed when changes occur. Additionally, bigger or cross-department projects can misinterpret or fail to adopt these policies if not regularly reinforced.
Your answer:
We review and update policies and tell the right people, but not always in a transparent way.
How to determine if this good enough
You conduct reviews on a known schedule (e.g., quarterly or bi-annually), and policy updates follow a documented communication plan. This might be “good enough” if:
-
Predictable Review Cycles
- Teams know when to expect policy changes and how to provide feedback.
- Surprises or sudden changes are less common.
-
Structured Communication Path
- You send out formal emails, intranet announcements, or notifications to staff and relevant teams whenever changes occur.
- The updates typically highlight “what changed” and “why.”
-
Most Stakeholders Are Informed
- While not fully collaborative, key roles (like security, DevOps, compliance leads) always see updates promptly.
- Regular staff might be passively informed or updated in team briefings.
-
Less Chaos in Access Controls
- The process reduces ad-hoc or unauthorised changes.
- Audits show improvements in the consistency of applied policies.
If your approach largely prevents confusion or major policy gaps, you’ve reached a good operational level. However, for advanced alignment—especially for larger or cross-government programs—you may want more transparency and active collaboration.
Your answer:
We review and update policies and tell the right people. Everyone understands the process.
How to determine if this good enough
In this scenario, the policy process is well-structured and inclusive:
-
Collaborative Policy Updates
- Stakeholders from various departments (security, finance, operations, legal, etc.) collaborate to shape and approve changes.
-
Clear, Consistent Communication
- Staff know exactly where to look for upcoming policy changes, final decisions, and rationale.
- The policy is more likely to be understood and adopted, reducing friction.
-
Fewer Exemptions or Gaps
- Because the right people are involved from the start, there are fewer last-minute exceptions.
- Auditors typically find the system robust and responsive to new requirements.
-
Measured Efficiency
- While more complex to coordinate, the integrated process might still be streamlined to avoid bureaucratic delays.
If your integrated approach ensures strong buy-in and minimal policy confusion, you are likely meeting the needs of most public sector compliance standards. You may still evolve by embracing a code-based approach or embedding continuous testing.
Your answer:
We store policies in version control. Anyone can see or suggest changes. Updates are open and tested like software.
How to determine if this good enough
At this top maturity level, policy management is treated like software development:
-
Full Transparency and Collaboration
- Anyone in the organisation (or designated roles) can propose, review, or comment on policy changes.
- Policy changes pass through a formal PR (pull request) or code review process.
-
Automated Testing or Validations
- Updates to policy are tested—either by applying them in a staging environment or using policy-as-code testing frameworks.
- This ensures changes do what they’re intended to do.
-
Instant Visibility of Policy State
- A central dashboard or repository shows the current “approved” policy version and any in-progress updates.
- Historical records of every previous policy version are readily available.
-
Regulatory Confidence
- Auditors or compliance officers see an extremely robust, traceable approach.
- Exemptions or special cases are handled via code-based merges or feature branches, ensuring full transparency.
If you meet these criteria, you’re likely an exemplar of policy governance within the UK public sector. Regular retrospectives can still uncover incremental improvements or expansions to new services or cross-department integrations.
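Testing policy changes before merge can be as simple as keeping policies in machine-readable files and running assertions in CI on every pull request. Below is a minimal sketch; the policy file name, field names, and thresholds are hypothetical assumptions, and many organisations use dedicated policy-as-code frameworks for this instead:
import json
from pathlib import Path

def load_policy(path: Path) -> dict:
    return json.loads(path.read_text())

def check_policy(policy: dict) -> list:
    """Return a list of human-readable failures; an empty list means the policy passes."""
    failures = []
    if policy.get("max_retention_days", 0) > 2555:          # roughly seven years
        failures.append("Retention exceeds the organisational maximum")
    if not policy.get("mfa_required_for_admin", False):
        failures.append("Admin access must require MFA")
    for role in policy.get("roles", []):
        if "*" in role.get("permissions", []):
            failures.append(f"Role '{role['name']}' grants wildcard permissions")
    return failures

if __name__ == "__main__":
    # "access-policy.json" is a placeholder path for the version-controlled policy file.
    failures = check_policy(load_policy(Path("access-policy.json")))
    for failure in failures:
        print(f"FAIL: {failure}")
    raise SystemExit(1 if failures else 0)
Because the check runs in the same pipeline as the pull request, a proposed policy change that breaks a rule is rejected before it is ever published.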
How do you manage your cloud environment? [change your answer]
Your answer:
Manually, when needed, with no set process.
How to determine if this good enough
Your organisation relies on the cloud provider’s GUIs or consoles to handle tasks, with individual admins making changes without formal processes or documentation. This might be “good enough” if:
-
Small, Low-Risk Projects
- You handle a small number of resources or have minimal production environments, and so far, issues have been manageable.
-
Exploratory Phase
- You’re testing new cloud services for proof-of-concept or pilot projects, with no immediate scaling needs.
-
Limited Compliance Pressures
- No strong mandates from NCSC supply chain or DevOps security guidance, or from internal governance, requiring rigorous configuration management.
However, purely manual approaches risk misconfigurations, leftover resources, security oversights, and inconsistent environments. NIST SP 800-53 CM controls and NCSC best practices encourage structured management to reduce such risks.
Your answer:
Documented manual processes. Test environments may not match live ones.
How to determine if this good enough
Your organisation documents step-by-step procedures for the cloud environment, with a test or staging environment that somewhat mirrors production. However, small differences frequently occur. It might be “good enough” if:
-
Moderate Complexity
- While you maintain a test environment, changes must still be repeated manually in production.
-
Consistent, Though Manual
- Admins do follow a standard doc for each operation, reducing accidental misconfigurations.
-
Some Variation Tolerated
- You can accommodate minor environment discrepancies that don’t cause severe issues.
However, manually repeating steps can lead to drift over time, especially if some updates never make it from test to production (or vice versa). NCSC operational resilience approaches and NIST SP 800-53 CM controls typically advocate more consistent, automated management to ensure parity across environments.
Your answer:
Some things are scripted, but we still do a lot by hand.
How to determine if this good enough
Your organisation uses scripts (e.g., Bash, Python, PowerShell) or partial IaC for routine tasks, while specialised or complex changes remain manual. This might be “good enough” if:
-
Significant Time Savings Already
- You see reduced misconfigurations for routine tasks (like creating instances or networks), but still handle complex or one-off scenarios manually.
-
Mixed Skill Levels
- Some staff confidently script or write IaC, others prefer manual steps, leading to a hybrid approach.
-
Minor Environment Discrepancies
- Since not everything is automated, drift can still occur, but less frequently.
You can further unify your scripts into a consistent pipeline or adopt a more complete Infrastructure-as-Code strategy. NCSC’s DevSecOps best practices and NIST SP 800-53 CM controls support extended automation for better security and consistency.
Your answer:
Most things are standardised and automated. We often review and make improvements.
How to determine if this good enough
Your organisation employs a robust Infrastructure-as-Code or automation-first approach, with minimal manual steps. This may be “good enough” if:
-
Consistent Environments
- Dev, test, and production are nearly identical, drastically reducing drift.
-
Frequent Delivery & Minimal Incidents
- You can deploy or update resources swiftly, with lower misconfiguration rates.
- Your pipeline practices align with NCSC’s DevSecOps approach and NIST SP 800-160 Vol 2 for secure engineering.
-
Adherence to Security & Compliance
- Automated pipelines incorporate security scanning or compliance checks, referencing AWS Config, Azure Policy, GCP Org Policy, OCI Security Zones.
To push further, you could adopt advanced drift detection, code-based policy enforcement, or real-time security scanning for each pipeline. NIST SP 800-137 for continuous monitoring and NCSC’s protective monitoring approaches might guide deeper expansions.
Your answer:
Everything is automated using code and we get alerts if anything changes unexpectedly.
How to determine if this good enough
At this advanced stage, every resource is defined in code (e.g., Terraform, CloudFormation, Bicep, ARM templates, Deployment Manager, or similar tools). The environment automatically reverts or alerts on changes outside of pipelines. Typically “good enough” if:
-
Zero Manual Changes
- All modifications go through code merges and CI/CD, preventing confusion or insecure ad-hoc changes.
-
Instant Visibility
- If drift occurs (someone clicked in the console or an unexpected event occurred), an alarm triggers, prompting immediate rollback or investigation.
-
Rapid & Secure Deployments
- Security, cost, and performance optimisations can be tested and deployed quickly without risk of untracked manual variations.
You can further refine HPC/AI ephemeral resources, cross-department pipeline sharing, or advanced policy-as-code with AI-based compliance. NCSC’s advanced DevSecOps or zero trust guidance and NIST SP 800-53 CM controls for automated configuration management encourage continuous iteration.
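Drift alerting can start small. As a hedged sketch using boto3, assuming your coded definitions include a tagging standard: this compares the tags actually present on running EC2 instances against a locally defined expectation and reports any gaps (the required tag keys are illustrative assumptions):
import boto3

REQUIRED_TAGS = {"service", "environment", "cost-centre"}  # assumed tagging standard

def instances_with_missing_tags() -> list:
    """Return (instance_id, missing_tag_keys) pairs for running instances."""
    ec2 = boto3.client("ec2")
    findings = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                missing = REQUIRED_TAGS - tags
                if missing:
                    findings.append((instance["InstanceId"], sorted(missing)))
    return findings

if __name__ == "__main__":
    for instance_id, missing in instances_with_missing_tags():
        print(f"Drift alert: {instance_id} is missing tags {missing}")
Provider-native drift detection and policy services can replace or complement a script like this once the standard is stable.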
How do you apply and enforce policies? [change your answer]
Your answer:
We don’t.
How to determine if this good enough
If policies are not actively applied, your organisation may still be at a very early or exploratory stage. You might perceive this as “good enough” if:
-
No Critical or Sensitive Operations
- You operate minimal or non-critical services, handling little to no sensitive data or regulated workloads.
- There’s no immediate requirement (audit, compliance, security) pressing for formal policy usage.
-
Limited Scale or Temporary Projects
- Teams are small and can coordinate informally, or the entire environment is short-lived with minimal risk.
-
No Internal or External Mandates
- No formal rules require compliance with recognised governance frameworks (e.g., ISO 27001, NCSC Cloud Security Principles).
- Organisational leadership has not mandated policy implementation.
However, as soon as you store personal, official, or sensitive data—or your environment becomes critical to a public service—lack of policy application typically leads to risk of misconfigurations, data leaks, or compliance failures.
Your answer:
We have policies, but don’t check whether people follow them.
How to determine if this good enough
Here, your organisation may have documented policies, but there is no real mechanism to ensure staff or systems comply. You might consider this “good enough” if:
-
Policies Are Referenced, Not Mandatory
- Teams consult them occasionally but can ignore them with minimal consequences.
- Leadership or audits haven’t flagged major non-compliance issues—yet.
-
Low Regulatory Pressure
- You might not be heavily audited or regulated, so the absence of enforcement tools has not been problematic.
-
Early in Maturity Journey
- You wrote policies to set direction, but formal enforcement mechanisms aren’t established. You rely on staff cooperation.
Over time, lack of enforcement typically leads to inconsistent implementation and potential security or compliance gaps. The risk escalates with more complex or critical workloads.
Your answer:
We use processes to apply policies, but not much technology.
How to determine if this good enough
In this scenario, your organisation integrates policies into formal workflows (e.g., ticketing, approval boards, or documented SOPs), but relies on manual oversight rather than automated technical controls. It could be “good enough” if:
-
Stable, Well-Understood Environments
- Your systems don’t change frequently, so manual approvals or reviews remain feasible.
- The pace of service updates is relatively slow.
-
Well-Trained Staff
- Teams consistently follow these processes, knowing policy steps are mandatory.
- Leadership or compliance officers occasionally check random samples for compliance.
-
Low Complexity
- A small number of applications or resources means manual reviews remain practical, and the risk of missing a violation is relatively low.
However, process-driven approaches can become slow and error-prone with scale or complexity. If you spin up ephemeral environments or adopt rapid CI/CD, purely manual processes might lag behind or fail to catch mistakes.
Your answer:
We use processes and some technology to apply policies.
How to determine if this good enough
At this stage, your organisation uses well-defined processes to ensure policy compliance, supplemented by some technical controls (e.g., partial automation or read-only checks). You might consider it “good enough” if:
-
Consistent, Repeatable Processes
- Your staff frequently comply with policy steps.
- Automated checks (like scanning for open ports or misconfigurations) reduce human errors.
-
Reduced Overheads
- Some tasks are automated, but you still rely on manual gating in certain high-risk or high-sensitivity areas.
- This balance feels manageable for your scale and risk profile.
-
Positive Audit Outcomes
- Internal or external audits indicate that your policy application is robust, with only minor improvements needed.
However, if you want to handle larger workloads or adopt faster continuous delivery, you might need more comprehensive technical enforcement that eliminates many manual steps and further reduces the chance of oversight.
Your answer:
We have robust processes and technology to ensure that policies are always followed.
How to determine if this good enough
At this final stage, policy application is deeply woven into both organisational processes and automated technical controls:
-
End-to-End Enforcement
- Every step of resource creation, modification, or retirement is governed by your policy—there’s no easy workaround or manual override without documented approval.
-
High Automation, High Reliability
- The majority of policy compliance checks and remediation are automated. Staff rarely need to intervene except for unusual exceptions.
-
Predictable Governance
- Audits or compliance reviews are smooth. Minimal policy violations occur, and if they do, they’re swiftly detected and addressed.
-
Alignment with Public Sector Standards
- You likely meet or exceed typical security or compliance frameworks, easily demonstrating robust controls to oversight bodies.
Even at this apex, continuous improvement remains relevant. Evolving technology or new departmental mandates might require ongoing updates to maintain best-in-class enforcement.
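As one concrete illustration of the technical side, here is a hedged sketch using boto3, assuming one of your policies is “no publicly accessible storage” and that S3 is the storage service: it flags buckets whose public-access settings fall short. In practice such checks usually run continuously through provider policy services (for example AWS Config or Azure Policy) with automated remediation, rather than as ad hoc scripts:
import boto3
from botocore.exceptions import ClientError

def buckets_allowing_public_access() -> list:
    """Return names of S3 buckets where public access is not fully blocked."""
    s3 = boto3.client("s3")
    non_compliant = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(config.values()):
                non_compliant.append(name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                non_compliant.append(name)  # no public access block configured at all
            else:
                raise
    return non_compliant

if __name__ == "__main__":
    for name in buckets_allowing_public_access():
        print(f"Policy violation: bucket '{name}' does not block public access")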
How do you use version control and branch strategies? [change your answer]
Your answer:
There is very little use of version control.
How to determine if this good enough
Your organisation may store code in a basic repository (sometimes not even using Git) with minimal branching or tagging. This might be “good enough” if:
-
Small/Short-Term Projects
- Projects with a single developer or short lifespans, where overhead from advanced version control might not be justified.
-
Low Collaboration
- Code changes are infrequent, or there’s no simultaneous development that requires merges or conflict resolution.
-
Non-Critical Systems
- Failure or regression from insufficient version control poses a manageable risk with minimal user impact.
Still, even small projects benefit from modern version control practices (e.g., Git-based workflows). NCSC’s advice on code security and NIST SP 800-53 CM controls recommend robust version control to ensure traceability, reduce errors, and support better compliance and security.
Your answer:
We have our own way of managing branches, not standard methods.
How to determine if this good enough
Your team might have created a unique branching model. This can be “good enough” if:
-
Small Team Agreement
- Everyone understands the custom approach, and the risk of confusion is low.
-
Limited Cross-Team Collaboration
- You rarely face external contributors or multi-department merges, so you haven’t encountered significant friction.
-
Works for Now
- The custom approach meets current needs and hasn’t caused major merge issues or frequent conflicts yet.
That said, widely recognised branch strategies (GitFlow, GitHub Flow, trunk-based development) typically reduce confusion and are better documented. NCSC’s developer best practices and NIST SP 800-160 secure engineering frameworks encourage standard solutions for consistent security and DevOps.
Your answer:
We use a recognised strategy (like GitFlow or GitHub Flow), with changes to better suit us.
How to determine if this good enough
You follow a known model (GitFlow, trunk-based, or GitHub Flow) but adapt it for local constraints. This is often “good enough” if:
-
Shared Terminology
- Most developers grasp main concepts (e.g., “feature branches,” “release branches”), reducing confusion.
-
Appropriate for Complexity
- If your application requires multiple parallel releases or QA stages, GitFlow might be suitable, or if you have frequent small merges, trunk-based might excel.
-
Relatively Low Merge Conflicts
- The adapted approach helps you handle concurrent changes with minimal chaos.
If you still encounter friction (e.g., complex release branches rarely used, too many merges), you could refine or consider a simpler approach. NCSC’s DevSecOps guidance and NIST SP 800-53 CM controls underscore the importance of an approach that’s not overly burdensome yet robust enough for security and compliance.
Your answer:
We follow a recognised strategy suited to complex projects (such as GitFlow).
How to determine if this good enough
You employ a formal version of GitFlow (or a similarly structured approach) with separate “develop,” “release,” “hotfix,” and “feature” branches. It can be “good enough” if:
-
Complex or Multiple Releases
- You manage multiple versions or release cycles in parallel, which GitFlow accommodates well.
-
Stable Processes
- Teams understand and follow GitFlow precisely, with few merges or rebase conflicts.
-
Clear Roles
- Release managers or QA teams appreciate the distinct “release branch” or “hotfix branch” logic, referencing NCSC’s secure release patterns or NIST SP 800-160 DevSecOps suggestions.
If you see minor friction in fast iteration, or developers complaining about overhead, you might consider a simpler trunk-based approach. GOV.UK Service Manual on continuous delivery suggests simpler flows often suffice for agile teams.
Your answer:
We follow a recognised strategy suited to continuous delivery and simplified collaboration (such as GitHub Flow).
How to determine if this good enough
You adopt a minimal branching approach—like trunk-based development or GitHub Flow—emphasizing rapid merges and continuous integration. It’s likely “good enough” if:
-
Frequent Release Cadence
- You can deploy changes daily or multiple times per day without merge conflicts piling up.
-
Highly Agile Culture
- The team is comfortable merging into main or trunk quickly, with automated tests ensuring no regressions.
-
Confidence in Automated Tests
- A robust CI pipeline instills trust that quick merges rarely break production.
Still, for some large or multi-release scenarios (like long-term LTS versions), a more complex branching model might help. NCSC agile DevSecOps guidance and NIST SP 800-160 for secure engineering at scale provide additional references on maintaining code quality with frequent releases.
How do you provision cloud services? [change your answer]
Your answer:
Manually, with no automation.
How to determine if this good enough
If your organisation primarily provisions cloud services using manual methods—such as web consoles, command-line interfaces, or custom ad hoc scripts—this might be considered “good enough” if:
-
Very Small or Low-Risk Environments
- You run minimal workloads, handle no highly sensitive data, and rarely update or modify your cloud infrastructure.
-
Limited Scalability Needs
- You do not expect frequent environment changes or expansions, so the overhead of learning automation might seem unnecessary.
-
No Immediate Compliance Pressures
- You might not be heavily audited or required to meet advanced DevOps or infrastructure-as-code (IaC) practices just yet.
However, as soon as your environment grows, new compliance demands appear, or you onboard more users, manual provisioning can lead to inconsistencies and difficulty tracking changes—particularly in the UK public sector, where robust governance is often required.
Your answer:
We use some scripts, but there are no standards or consistency.
How to determine if this good enough
In this scenario, your organisation uses partial automation or scripting, but each team might have its own approach, with no centralised or standardised method. You might consider it “good enough” if:
-
Small to Medium Environment
- Teams are somewhat comfortable with their own scripting techniques.
- No pressing requirement to unify them under a single approach.
-
Mixed Expertise
- Some staff are proficient with scripting (Python, Bash, PowerShell), but others prefer manual console methods.
- You haven’t faced major issues from inconsistent naming or versioning.
-
Infrequent Collaboration
- Your departments rarely need to share cloud resources or code, so differences in scripting style haven’t caused big problems.
However, as soon as cross-team projects arise or you face compliance demands for consistent infrastructure definitions, this fragmentation can lead to duplication of effort, confusion, and errors.
Your answer:
We use automation for some services, but not everything.
How to determine if this good enough
Declarative automation (often in the form of Infrastructure as Code) is partially adopted, but not every team or environment follows it consistently. This might be “good enough” if:
-
Sizeable Gains in Some Areas
- Some major projects are stable, reproducible, and versioned via IaC, reducing manual errors.
- Other smaller or legacy teams might still rely on older methods.
-
Limited Conflict Among Teams
- While some teams use IaC and others don’t, there isn’t a high need to integrate or share resources.
- Each team can operate fairly independently without causing confusion.
-
Compliance and Control
- Where the stakes are high (e.g., production or sensitive data), you likely already enforce declarative approaches.
- Lower-priority or test environments remain behind, but that may be acceptable for now.
If partial declarative automation meets your current needs, you may decide it’s sufficient. However, you might miss out on consistent governance, easier cross-team collaboration, and uniform operational efficiency.
Your answer:
Most teams use automation to set up cloud services.
How to determine if this good enough
In this phase, a large majority of your teams rely on IaC or declarative templates to provision and manage cloud services, yielding consistency and reliability. You might consider it “good enough” if:
-
High Reusability and Efficiency
- Teams share modules, templates, or code with minimal duplication.
- Common services (e.g., VPC, subnets, security groups) are easily spun up.
-
Improved Compliance and Auditing
- Audits show that configurations match version-controlled definitions—reducing manual drift.
- Staff can quickly roll back or replicate environments for test or disaster recovery.
-
Reduced Operational Overhead
- Fewer manual changes mean fewer untracked variations.
- Teams typically see improved speed for launching new environments.
If your use of declarative automation is broad but not yet mandated for every environment, you might still face occasional manual exceptions or unapproved changes. This can lead to minor inconsistencies.
Your answer:
All cloud services are set up by CI/CD pipelines.
How to determine if this good enough
This final stage means your organisation has fully embraced IaC—any production environment changes occur only through a pipeline and must be defined declaratively. It’s likely “good enough” if:
-
Extremely Consistent Environments
- No drift, as manual changes in production are disallowed or quickly overwritten by pipeline definitions.
-
Robust Governance
- Audits and compliance are straightforward—everything is in version control and accompanied by pipeline logs.
-
Seamless Reproducibility
- Dev, staging, and production can match precisely, barring data differences.
- Rapid rollback is possible by reverting to a previous commit.
-
High Organisational Discipline
- All stakeholders adhere to the policy that “no code, no deploy”—any infrastructure change must be made in IaC first.
You already operate at a high maturity level. Still, continuous improvement might revolve around advanced testing, policy-as-code integration, and cross-organisational collaboration.
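What “defined declaratively and applied only by the pipeline” can look like in practice: a minimal AWS CDK (Python) sketch, assuming CDK v2 and a hypothetical stack and bucket name; equivalent Terraform, Bicep, or other IaC tooling works just as well:
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class ReportsStack(Stack):
    """Everything about this bucket - versioning, encryption, public access and
    retention of the resource itself - lives in version control and is deployed
    only by the CI/CD pipeline."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "ReportsBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            removal_policy=RemovalPolicy.RETAIN,
        )

app = App()
ReportsStack(app, "ReportsStack")   # hypothetical stack name
app.synth()
The pipeline then runs cdk diff and cdk deploy, so any change to the bucket is visible and reviewable in a pull request before it reaches production.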
Operations
Do you use continuous integration and continuous deployment (CI/CD) tools? [change your answer]
Your answer:
We don’t; we build, test, and deploy by hand.
How to determine if this good enough
Your organisation may still rely on manual or semi-manual processes for building, testing, and deploying software. You might consider this “good enough” if:
-
Small or Non-Critical Projects
- You run a limited set of applications with low release frequency, so manual processes remain manageable.
-
Low Risk Tolerance
- The team is not yet comfortable adopting new automation tools or processes, and there is no immediate driver to modernise.
-
Minimal Compliance Pressures
- Formal requirements (e.g., from internal governance, GDS Service Standards, or security audits) haven’t mandated an automated pipeline or detailed audit trail for deployments.
However, as your projects grow, manual building and deploying typically becomes time-consuming and prone to human error. This can lead to inconsistency, difficulty replicating production environments, and a slower pace of iteration.
Your answer:
We use some CI/CD, but there’s no standard.
How to determine if this good enough
When some teams have adopted CI/CD pipelines for build and deploy tasks, but others remain manual or partially automated, you might find this “good enough” if:
-
Partial Automation Success
- Projects that do have CI/CD show faster releases and fewer errors, indicating the benefits of automation.
-
Mixed Team Maturity
- Some teams have the skills or leadership support to implement pipelines, while others do not, and there’s no pressing need to unify.
-
No Major Interdependence
- Projects that use CI/CD operate somewhat independently, not forcing standardisation across the entire organisation.
While this can work for a period, inconsistent CI/CD adoption often leads to uneven release quality, slower integration across departments, and missed opportunities for best-practice sharing.
Your answer:
Most teams use CI/CD, but each chooses its own tools.
How to determine if this good enough
This stage sees widespread CI/CD usage across the organisation, but with each team choosing different pipelines, scripts, or orchestrators. You might consider it “good enough” if:
-
Strong Automation Culture
- Almost every project has some form of automated build/test/deploy.
- Productivity and reliability are generally high.
-
High Team Autonomy
- Teams appreciate the freedom to select the best tools for their stack.
- Little friction arises from differences in pipeline tech, as cross-team collaboration is limited or well-managed.
-
No Major Standardisation Requirement
- Your department or top-level governance body hasn’t mandated a single CI/CD framework.
- Audits or compliance checks are typically satisfied by each team’s pipeline logs and versioning practices.
Though beneficial for agility, this approach can hinder knowledge sharing and pose onboarding challenges if staff move between teams. Maintaining multiple toolchains might also increase overhead.
Your answer:
Nearly all teams use CI/CD, but tools and processes vary.
How to determine if this good enough
In this stage, nearly all projects have automated pipelines, but there may still be variety in the tooling. Traditional or manual deploys exist only in niche situations. You might consider this “good enough” if:
-
Robust Automation Coverage
- A large percentage of code changes are tested and deployed automatically, minimising manual overhead.
- Releases are quicker and more reliable.
-
Limited Governance or Standardisation Issues
- Management is not demanding a single solution, and teams are content with the performance and reliability of their pipelines.
-
Minor Complexity
- While multiple CI/CD solutions exist, knowledge sharing is still manageable, and staff do not struggle excessively when rotating between teams.
If your approach still creates confusion for new or cross-functional staff, you might gain from more standardisation. Also, advanced compliance or security scenarios may benefit from a more centralised approach.
Your answer:
Everyone uses the same CI/CD process.
How to determine if this good enough
At this stage, your organisation has converged on a common CI/CD approach. You might consider it “good enough” if:
-
Uniform Tools and Processes
- All teams share a similar pipeline framework, leading to consistent build, test, security, and deployment steps.
- Onboarding is smoother—new staff learn one method rather than many.
-
High Governance and Compliance Alignment
- Auditing deployments is straightforward, as logs, artifacts, and approvals follow the same structure.
- Security or cost-optimisation checks are consistently applied across all services.
-
Continuous Improvement
- Each pipeline improvement (e.g., adding new test coverage or scanning) benefits the entire organisation.
- Teams collaborate on pipeline updates rather than reinventing the wheel.
While standardisation solves many issues, organisations must remain vigilant about tool stagnation. If the environment evolves (e.g., new microservices, containers, or serverless solutions), you should continuously update your pipeline approach.
How fast are your builds and deployments? [change your answer]
Your answer:
They take hours or days, but we don’t track it.
How to determine if this good enough
At this level, your organisation may treat builds and deployments as irregular events with minimal oversight. You might consider it “good enough” if:
-
Very Low Release Frequency
- You only release occasionally (e.g., once every few months), so tracking speed or efficiency seems less critical.
- Slow deployment cycles are acceptable due to stable requirements or minimal user impact.
-
Limited Pressure from Stakeholders
- Internal or external stakeholders do not demand quick rollouts or frequent features, so extended lead times go unchallenged.
-
No Critical Deadlines
- Lacking strict compliance or operational SLA obligations, you might not prioritise faster release cadences.
However, as soon as your environment grows, user demands increase, or compliance regulations require more frequent updates (e.g., security patches), slow processes can create risk and bottlenecks.
Your answer:
We track times, but things are often delayed.
How to determine if this is good enough
At this level, you record how long builds and deployments take, but you still experience extended lead times. You might consider it “good enough” if:
-
Moderately Frequent Releases
- You release a new version monthly or quarterly, and while not fast, it meets your current expectations.
-
Limited Pressure from Users
- Stakeholders occasionally push for quicker releases, but the demand remains manageable.
- You deliver essential updates without major user complaints.
-
Some Awareness of Bottlenecks
- You know where delays occur (e.g., environment setup, manual test cycles), but you haven’t tackled them systematically.
If your team can tolerate these delays and no critical issues arise, you might remain here temporarily. However, you risk frustrating users or missing security patches if you can’t accelerate when needed.
Your answer:
Reasonably fast and we do some monitoring.
How to determine if this is good enough
If you see mostly consistent build and deploy times—often measured in hours or under a day—and have some checks to ensure timely releases, you might consider it “good enough” if:
-
Regular Release Cadence
- You release weekly or bi-weekly, and while it’s not fully streamlined, you meet user expectations.
-
Intermediate Automation
- CI/CD pipelines handle building, testing, and packaging fairly reliably, with occasional manual steps.
-
Some Monitoring of SLAs
- You measure deployment times for important services. If they exceed certain thresholds, you investigate.
-
Sporadic Improvement Initiatives
- You occasionally gather feedback from dev teams or ops to tweak the pipeline, but you don’t have a continuous improvement loop.
If this approach satisfies your current workloads and stakeholder demands, you may feel it’s sufficient. However, you could still improve deployment speed, reduce manual overhead, and achieve faster feedback cycles.
Your answer:
Builds and deployments are quick and times are checked often.
How to determine if this is good enough
At this level, your builds and deployments are typically quick (tens of minutes or fewer) and monitored in near real time. You might consider it “good enough” if:
-
Predictable Release Cycles
- You release multiple times a week (or more frequently) with minimal disruptions or user complaints.
- Stakeholders trust the release process.
-
CI/CD Tools Are Widely Adopted
- Dev and ops teams rely on a mostly automated pipeline for build, test, and deploy steps.
- Manual intervention is needed only for critical approvals or exception handling.
-
Proactive Monitoring
- You gather metrics on build times, test coverage, and deployment frequency, and quickly spot regressions.
- Reports or dashboards are regularly reviewed by leadership.
-
Collaboration on Improvement
- Teams occasionally refine the pipeline or test processes, though not always in a continuous improvement cycle.
If your organisation can reliably deliver updates swiftly, you’ve likely avoided major inefficiencies. Yet there is usually room to refine further, aiming for near real-time feedback and single-digit-minute pipelines.
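If you want to quantify “quick”, a simple starting point is to derive duration and frequency figures from timestamps your CI system already records. The sketch below is illustrative only and uses made-up sample data in place of real pipeline records.
    # A rough sketch of computing deployment duration and frequency from recorded
    # pipeline timestamps. The sample data is illustrative only.
    from datetime import datetime
    from statistics import median

    # (pipeline start, pipeline finish) pairs, e.g. exported from your CI system.
    deployments = [
        (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 9, 18)),
        (datetime(2024, 5, 2, 14, 30), datetime(2024, 5, 2, 14, 41)),
        (datetime(2024, 5, 3, 11, 5), datetime(2024, 5, 3, 11, 27)),
    ]

    durations_minutes = [(end - start).total_seconds() / 60 for start, end in deployments]
    days_covered = (deployments[-1][1].date() - deployments[0][0].date()).days + 1

    print(f"Median pipeline duration: {median(durations_minutes):.1f} minutes")
    print(f"Deployment frequency: {len(deployments) / days_covered:.2f} per day")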
Your answer:
Builds and deployments finish in minutes, we monitor this and make improvements.
How to determine if this is good enough
At this final stage, your builds and deployments are lightning-fast, happening in minutes for most projects. You might consider it “good enough” if:
-
Highly Automated, Highly Reliable
- DevOps and security teams trust the pipeline to handle frequent releases with minimal downtime or errors.
- Manual approval steps exist only for the most sensitive changes, and they’re quick.
-
Real-Time Monitoring and Feedback
- You track pipeline performance metrics, code quality checks, and security scans in real time, swiftly adjusting if numbers dip below thresholds.
-
Continuous Innovation
- The pipeline is never considered “finished”; you constantly adopt new tools or practices that further reduce overhead or increase confidence.
-
Robust Disaster Recovery
- Rapid pipeline execution means quick redeploys in case of failure or environment replication.
- With single-digit-minute pipelines, rollback or rebuild times are also minimised.
Though exemplary, there’s always an opportunity to embed more advanced practices (e.g., AI/ML for anomaly detection in release metrics) and to collaborate with other public sector entities to share your high-speed processes.
How do you monitor your systems? [change your answer]
Your answer:
We check things when building or when there’s a problem.
How to determine if this is good enough
At this stage, monitoring is minimal or ad hoc, primarily triggered by developer curiosity or urgent incidents. You might consider it “good enough” if:
-
Small-Scale, Low-Criticality
- Your applications or infrastructure handle low-priority workloads with few users, so the cost of more advanced monitoring might feel unjustified.
-
Occasional Issues
- Incidents happen rarely, and when they do, developers can manually troubleshoot using logs or ad hoc queries.
-
No Formal SLAs
- You haven’t promised end users or other stakeholders strict uptime or performance guarantees, so reactive observation hasn’t caused major backlash.
While this might be workable for small or test environments, ignoring continuous monitoring typically leads to slow incident response, knowledge gaps, and difficulty scaling. In the UK public sector, especially if you handle official or personally identifiable data, a lack of proactive observability is risky.
Your answer:
We use simple tools and check systems by hand.
How to determine if this is good enough
Here, your organisation uses straightforward dashboards or partial metrics from various cloud services, but lacks integration or automation. You might consider it “good enough” if:
-
Steady Workloads, Infrequent Changes
- Infrastructure or application changes rarely happen, so manual checks remain sufficient to catch typical issues.
-
Limited Cross-Service Dependencies
- If your environment is not very complex, you might get away with separate dashboards for each service.
-
No Urgent Performance or SLA Pressures
- Although you have some basic visibility, you haven’t seen pressing demands to unify or automate deeper monitoring.
However, as soon as you need a single view into your environment, or if you must detect cross-service problems quickly, relying on manual checks and siloed dashboards can hinder timely responses.
Your answer:
Monitoring and alerts for problems, but it’s not yet an integrated system.
How to determine if this is good enough
At this stage, you have systematic monitoring, likely with a range of alerts for infrastructure-level events and some application-level checks. You might consider it “good enough” if:
-
Reliable Incident Notifications
- Issues rarely go unnoticed—teams are informed promptly of CPU spikes, database errors, or performance slowdowns.
-
Moderate Integration
- You combine some app logs with system metrics, but the correlation might not be seamless.
- High-level dashboards exist, but deeper analysis might require manually cross-referencing data sources.
-
SLAs Are Tracked but Not Always Guaranteed
- You monitor operational metrics that relate to your SLAs, but bridging them with application performance (like user transactions) can be patchy.
If your environment is relatively stable or the partial integration meets day-to-day needs, you may consider it sufficient. However, a more holistic approach can cut troubleshooting time and reduce guesswork.
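As a rough illustration of the alerting half of this stage, the sketch below checks a single metric against a threshold and only raises an alert after several consecutive breaches, which avoids paging on a one-off spike. The get_cpu_percent and notify functions are stand-ins for whatever metrics source and notification channel you actually use.
    # A minimal threshold-alert sketch. Replace the two placeholder functions with
    # your real metrics query and notification integration.
    import random
    import time

    CPU_THRESHOLD = 85.0

    def get_cpu_percent() -> float:
        return random.uniform(10, 100)  # stand-in for a real metrics query

    def notify(message: str) -> None:
        print(f"ALERT: {message}")  # stand-in for email/chat/pager integration

    def check_cpu(samples: int = 3, interval_seconds: float = 1.0) -> None:
        breaches = 0
        for _ in range(samples):
            cpu = get_cpu_percent()
            breaches = breaches + 1 if cpu > CPU_THRESHOLD else 0
            time.sleep(interval_seconds)
        if breaches == samples:  # every sample breached the threshold
            notify(f"CPU above {CPU_THRESHOLD}% for {samples} consecutive samples")

    if __name__ == "__main__":
        check_cpu()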
Your answer:
Advanced monitoring tools with some integration between infrastructure and applications.
How to determine if this is good enough
Here, your organisation invests in advanced monitoring or APM solutions, has robust metrics/alerts, and partial correlation across layers (e.g., logs, infrastructure usage, application performance). You might consider it “good enough” if:
-
Wide Observability Coverage
- Most services—compute, storage, container orchestration—are monitored, along with main application metrics or user experiences.
- Teams rarely scramble for data in incidents.
-
Significant Cross-Data Correlation
- You can jump from an app alert to relevant infrastructure metrics within the same platform, though some manual steps might remain.
-
Flexible Dashboards
- Stakeholders can view customised dashboards that show real-time or near real-time health.
-
Occasional Gaps
- Some older systems or sub-services might still lack advanced instrumentation.
- Full-blown correlation (like linking distributed traces to container CPU usage) might not always be frictionless.
If your advanced tools already deliver quick incident resolution and meet compliance or user demands, your approach might suffice. But full integration could further streamline triaging complex issues.
Your answer:
An integrated monitoring system, with insights from both infrastructure and applications.
How to determine if this is good enough
At this top level, your organisation has an advanced platform or combination of tools that unify logs, metrics, traces, and alerts into a cohesive experience. You might consider it “good enough” if:
-
Full Observability
- From server CPU usage to request-level app performance, all data is aggregated in near real time, and dashboards elegantly tie them together.
-
Proactive Issue Detection
- Teams often find anomalies or performance drifts before they cause incidents.
- MTTR (Mean Time to Resolution) is very low.
-
Data-Driven Decision-Making
- Observability data informs capacity planning, cost optimisation, and reliability improvements.
- Leadership sees clear reports on how changes affect performance or user experience.
-
High Automation
- Beyond alerting, some aspects of remediation or advanced analytics might be automated.
Even so, continuous evolution is possible—particularly in adopting AI/ML-based analytics, implementing even more automated healing, or orchestrating global multi-cloud monitoring.
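One way to picture what an integrated platform adds is the ability to join application events and infrastructure metrics on a shared identifier. The sketch below does that join by trace ID using hard-coded illustrative records; a real observability platform performs the same correlation automatically and at scale.
    # Correlating application log events with infrastructure metrics by trace ID.
    # Both record sets are illustrative stand-ins for real telemetry streams.
    app_logs = [
        {"trace_id": "abc123", "event": "checkout_failed", "latency_ms": 2300},
        {"trace_id": "def456", "event": "checkout_ok", "latency_ms": 120},
    ]
    infra_metrics = [
        {"trace_id": "abc123", "pod": "api-7f9", "cpu_percent": 97},
        {"trace_id": "def456", "pod": "api-2c1", "cpu_percent": 31},
    ]

    metrics_by_trace = {m["trace_id"]: m for m in infra_metrics}

    for log in app_logs:
        metric = metrics_by_trace.get(log["trace_id"], {})
        print(f"{log['trace_id']}: {log['event']} ({log['latency_ms']} ms) "
              f"on pod {metric.get('pod', '?')} at {metric.get('cpu_percent', '?')}% CPU")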
How do you get real-time data and insights? [change your answer]
Your answer:
Specialists check data and give answers, not always in real time.
How to determine if this is good enough
If your organisation primarily relies on a small group of subject matter experts (SMEs) to interpret raw data and produce insights, you might consider it “good enough” if:
-
Low Frequency of Data-Driven Questions
- Your operational or policy decisions rarely hinge on up-to-the-minute insights.
- Data queries happen sporadically, and a slower manual approach remains acceptable.
-
Very Specific Domain Knowledge
- Your SMEs possess deep domain expertise that general reporting tools cannot easily replicate.
- The data sets are not extensive, so manually correlating them still works.
-
No Immediate Performance or Compliance Pressures
- You do not face urgent NCSC or departmental mandates to provide real-time transparency.
- Stakeholders accept periodic updates from SMEs instead of continuous data streams.
While this may work in smaller, stable environments, relying heavily on a few experts for analysis often creates bottlenecks, raises single-point-of-failure risks, and lacks scalability. Additionally, guidance from GOV.UK and the NCSC often encourages better data literacy and real-time monitoring for government services.
Your answer:
We get reports, but data arrives late.
How to determine if this is good enough
If your organisation employs a standard BI or reporting tool (e.g., weekly or monthly data refreshes), you might regard it as “good enough” if:
-
Acceptable Lag
- Stakeholders generally tolerate the existing delay, as they do not require sub-daily or immediate data.
-
Modest Data Volume
- Data sets are not enormous, so overnight or batch processing remains practical for your current use cases.
-
Basic Audit/Compliance
- You meet essential compliance with government data handling rules (e.g., anonymising personal data, restricted access for sensitive data), and the time lag doesn’t violate any SLAs.
While functional for monthly or weekly insights, delayed reporting can hinder quick decisions or hamper incident response when faster data is needed. In alignment with the GDS Service Manual, near real-time data often improves service iteration.
Your answer:
We get some real-time insights, but not for everything.
How to determine if this is good enough
In this stage, your organisation has partial real-time analytics for select key metrics, while other data sets update less frequently. You might see it as “good enough” if:
-
Focused Real-Time Use Cases
- Critical dashboards (e.g., for incident management or user traffic) provide near real-time data, satisfying immediate operational needs.
-
Hybrid Approach
- Some systems remain batch-oriented for complexity or cost reasons, while high-priority services stream data into dashboards.
-
Occasional Gaps
- Some data sources or teams still rely on older processes, but you have enough real-time coverage for essential decisions.
If your partial real-time insights effectively meet operational demands and user expectations, it can suffice. However, expanding coverage often unlocks deeper cross-functional analyses and faster feedback loops.
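For teams extending real-time coverage, the underlying pattern is usually a rolling window over a stream of events rather than an overnight batch. The sketch below simulates this with random response times; in practice the events would come from your telemetry or streaming pipeline.
    # A sketch of near real-time aggregation: keep a rolling window of recent events
    # and report a live metric from it. The event stream here is simulated.
    from collections import deque
    from datetime import datetime, timedelta
    import random
    import time

    WINDOW = timedelta(seconds=30)
    events = deque()  # (timestamp, response_time_ms)

    def record_event() -> None:
        events.append((datetime.now(), random.randint(50, 900)))

    def live_average() -> float:
        cutoff = datetime.now() - WINDOW
        while events and events[0][0] < cutoff:
            events.popleft()  # discard anything older than the window
        if not events:
            return 0.0
        return sum(ms for _, ms in events) / len(events)

    if __name__ == "__main__":
        for _ in range(5):
            record_event()
            print(f"Rolling 30s average response time: {live_average():.0f} ms")
            time.sleep(1)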
Your answer:
We have advanced tools giving real-time data to many people.
How to determine if this is good enough
At this level, your organisation invests in robust analytics solutions (e.g., data warehouses, near real-time dashboards, possibly machine learning predictions). You might consider it “good enough” if:
-
Wide Real-Time Visibility
- Most or all key data streams update in minutes or seconds, letting staff see live operational trends.
-
Data-Driven Decision Culture
- Leadership and teams rely on metrics for day-to-day decisions, verifying progress or pivoting quickly.
-
Machine Learning or Predictive Efforts
- You may already run ML models for forecasting or anomaly detection, leveraging near real-time feeds for training or inference.
-
Sufficient Data Literacy
- Users outside the data team can navigate dashboards or ask relevant questions, with moderate skill in interpretation.
If you already see minimal delays and strong adoption, you’re likely well-aligned with GOV.UK’s push for data-driven services. Still, full self-service or advanced ML might remain partially underutilised.
Your answer:
Anyone can use dashboards to get real-time insights.
How to determine if this is good enough
In this final stage, your organisation has a fully realised self-service analytics environment, with real-time data at users’ fingertips. You might consider it “good enough” if:
-
High Adoption
- Most staff, from frontline teams to senior leadership, know how to navigate dashboards or create custom views, significantly reducing reliance on specialised data teams.
-
Minimal Bottlenecks
- Data is curated, well-governed, and updated in real-time or near real-time. Users rarely encounter outdated or inconsistent metrics.
-
Data Literacy Maturity
- Employees across departments can interpret charts, filter data, and ask relevant questions. The environment supports immediate insights for operational or policy decisions.
-
Continuous Improvement Culture
- Dashboards evolve rapidly based on feedback, and new data sets are easily integrated into the self-service platform.
Even at this apex, there might be scope to embed advanced predictive analytics, integrate external data sources, or pioneer AI-driven functionalities that interpret data automatically.
How do you release updates? [change your answer]
Your answer:
We stop services to update them, then restart.
How to determine if this is good enough
Your organisation might tolerate taking production offline during updates if:
-
Low User Expectations
- The service is internal-facing with predictable usage hours, so planned downtime does not disrupt critical workflows.
-
Simple or Infrequent Releases
- You rarely update the application, so the cost and user impact of downtime remain acceptable.
-
Minimal Data Throughput
- If the application doesn’t handle large volumes of data or real-time requests, a brief outage may not cause serious issues.
However, in the UK public sector environment—where services can be integral for citizens, healthcare, or internal government operations—planned downtime can erode trust and hamper 24/7 service expectations. Additionally, rollbacks relying on backups can be risky if not regularly tested.
Your answer:
We update parts of our services at a time, usually during scheduled windows.
How to determine if this is good enough
At this stage, your organisation has moved past full downtime, using a rolling mechanism that replaces or updates a subset of instances at a time. You might consider it “good enough” if:
-
Limited User Impact
- Some capacity is taken offline during updates, but carefully scheduled windows or off-peak hours minimise issues.
-
Predictable Workloads
- If your usage patterns allow for stable maintenance windows (e.g., nights or weekends), then capacity hits don’t severely affect performance.
-
Moderate Release Frequency
- The organisation has relatively few feature updates, so scheduled windows remain acceptable for user expectations.
While better than full downtime, rolling updates that rely on maintenance windows can still cause disruptions for 24/7 services or hamper urgent patch releases.
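Conceptually, a rolling update replaces instances in small batches and verifies health before continuing, which is what keeps most capacity online during the window. The sketch below shows only the control flow; update_instance and is_healthy are placeholders for your platform’s real APIs, and the instance names are hypothetical.
    # A sketch of a rolling update: replace instances in batches, check health,
    # and stop the rollout if a batch fails. All names and helpers are placeholders.
    import time

    INSTANCES = ["web-1", "web-2", "web-3", "web-4"]
    BATCH_SIZE = 2

    def update_instance(name: str, version: str) -> None:
        print(f"updating {name} to {version}")  # placeholder for a real update call

    def is_healthy(name: str) -> bool:
        return True  # placeholder for a real health check

    def rolling_update(version: str) -> None:
        for i in range(0, len(INSTANCES), BATCH_SIZE):
            batch = INSTANCES[i:i + BATCH_SIZE]
            for instance in batch:
                update_instance(instance, version)
            time.sleep(1)  # give the batch time to start before checking health
            if not all(is_healthy(instance) for instance in batch):
                raise RuntimeError(f"Batch {batch} unhealthy; pausing the rollout")
        print("Rolling update complete")

    if __name__ == "__main__":
        rolling_update("v2.4.1")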
Your answer:
We deploy new versions and switch over by hand, with manual rollback if needed.
How to determine if this is good enough
This approach is somewhat akin to a blue/green deployment but with a manually triggered cut-over. You might consider it “good enough” if:
-
Limited Release Frequency
- You update only occasionally, and a scheduled manual switch is acceptable to your stakeholders.
-
Manual Control Preference
- You desire explicit human oversight for compliance or security reasons (e.g., sign-off from a designated manager before cut-over).
-
Rollback Confidence
- Retaining the old version running in parallel offers an easy manual fallback if issues arise.
While this drastically reduces downtime compared to in-place updates, manual steps can introduce human error or delay. Over time, automating the cut-over can speed releases and reduce overnight tasks.
Your answer:
We use canary or blue/green releases, usually without maintenance windows.
How to determine if this is good enough
Here, your organisation uses modern deployment patterns (canary or blue/green) but triggers the actual traffic shift manually. You might consider it “good enough” if:
-
High Control Over Releases
- Your ops or dev team can watch key metrics (error rates, performance) before deciding to cut fully.
- This reduces the risk of a fully automated change proceeding when something subtle goes wrong.
-
Flexible Schedules
- You’re no longer constrained by a formal maintenance window, as the environment runs both old and new versions.
- You only finalise the transition once confidence is high.
-
Minimal User Impact
- Users experience near-zero downtime, with at most a brief session shift if the cut-over is handled carefully.
If your manual step ensures a safe release, meets compliance requirements for sign-off, and you have the capacity to staff this process, it can be fully viable. However, further automation can accelerate releases, especially if you deploy multiple times daily.
Your answer:
We use canary or blue/green releases and switch users with no need for maintenance windows.
How to determine if this is good enough
At this pinnacle, your organisation deploys new versions seamlessly, shifting traffic automatically or semi-automatically. You might consider it “good enough” if:
-
Continuous Deployment
- You can safely release multiple times a day with minimal risk.
- Pipeline-driven checks ensure swift rollback if anomalies arise.
-
Zero Downtime
- Users rarely notice updates—there are no enforced windows or service interruptions.
-
Real-Time Feedback
- Observability tools collect usage metrics and error logs, auto-deciding if further rollout is safe.
- Manual intervention is minimal except for major changes or exceptional circumstances.
-
Strong Compliance & Audit Trails
- Each release is logged, including canary results, ensuring alignment with NCSC operational resilience guidance or internal audit requirements.
- This meets or exceeds NIST guidelines for continuous monitoring and secure DevOps.
If you’ve reached near-instant deployments, zero-downtime strategies, and robust monitoring, your process is highly mature. You still might push further into A/B testing or advanced ML-driven optimisation.
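In practice, the automated traffic shift is usually a loop: increase the canary’s share in steps, watch an error-rate metric, and roll back if it breaches a limit. The sketch below shows that loop with placeholder shift_traffic and error_rate functions; the step sizes and threshold are illustrative, not recommendations.
    # A sketch of an automated canary rollout with automatic rollback.
    # shift_traffic() and error_rate() are placeholders for your load balancer
    # and observability integrations.
    import time

    ERROR_RATE_LIMIT = 0.02   # abort if more than 2% of canary requests fail
    STEPS = [5, 25, 50, 100]  # percentage of traffic sent to the new version

    def shift_traffic(percent: int) -> None:
        print(f"routing {percent}% of traffic to the new version")  # placeholder

    def error_rate() -> float:
        return 0.003  # placeholder for a real error-rate query

    def canary_release() -> bool:
        for percent in STEPS:
            shift_traffic(percent)
            time.sleep(1)  # observation window (shortened for the sketch)
            if error_rate() > ERROR_RATE_LIMIT:
                shift_traffic(0)  # roll all traffic back to the old version
                print("Canary failed; rolled back")
                return False
        print("Canary succeeded; new version fully live")
        return True

    if __name__ == "__main__":
        canary_release()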
How do you manage deployment and QA? [change your answer]
Your answer:
By hand, on a schedule.
How to determine if this is good enough
In this stage, your organisation relies on human-driven steps (e.g., emailing code changes to QA testers, manual approval boards, or ad hoc scripts) for both deployment and testing. You might consider it “good enough” if:
-
Very Limited Release Frequency
- You update your applications once every few months, and thus can handle manual overhead without major inconvenience.
-
Low Criticality
- The services do not require urgent patches or security updates on short notice, so the lack of continuous integration poses minimal immediate risk.
-
Simplicity and Stability
- The application is relatively stable, and major functional changes are rare, making manual QA processes manageable.
However, manual scheduling severely limits agility and can introduce risk if errors go unnoticed due to a lack of automated testing. For many UK public sector services, NCSC guidelines encourage more frequent updates and better security practices, which usually involve continuous integration.
Your answer:
Some automation, but most deployments are manual and rare.
How to determine if this is good enough
If your organisation has introduced some automated tests or a partial CI pipeline (e.g., unit tests running on commits), yet still deploys rarely or with manual checks, you might find it “good enough” if:
-
Low or Medium Release Velocity
- Even with some test automation, you prefer scheduled or larger releases rather than continuous iteration.
-
Limited Immediate Risk
- The application can handle occasional updates without strong demands for real-time patches or new features.
-
Stable Funding or Resource Constraints
- You have a moderate DevOps or QA budget, which doesn’t push for fully automated, frequent deployments yet.
While partial automation improves reliability, infrequent deployments may slow responses to user feedback or security issues. NCSC guidance on secure system development encourages a faster feedback loop to patch vulnerabilities promptly.
Your answer:
Integrated approach, with some automation and regular checks.
How to determine if this is good enough
In this scenario, your pipelines are well-defined. Automated tests run for each build, and you have a consistent process connecting deployment to QA. You might judge it “good enough” if:
-
Predictable Release Cycles
- You typically deploy weekly or bi-weekly, and your environment has minimal issues.
-
Moderately Comprehensive Testing
- You have decent coverage across unit, integration, and some acceptance tests.
-
Stable or Evolving DevOps Culture
- Teams trust the pipeline, and it handles the majority of QA checks automatically, though some manual acceptance or security tests might remain.
If your current approach reliably meets user demands and mitigates risk, it can suffice. Yet you can usually speed up feedback and further reduce manual overhead by adopting advanced CI/CD techniques.
Your answer:
We use CI/CD pipelines with automated testing and frequent deployments.
How to determine if this is good enough
Here, your organisation relies on a sophisticated, automated pipeline that runs on every code commit or merge. You might consider it “good enough” if:
-
High Release Frequency
- Deployments can happen multiple times a week or day with minimal risk.
-
Robust Automated Testing
- Your pipeline covers unit, integration, functional, and security tests, with little reliance on manual QA steps.
-
Low MTTR (Mean Time to Recovery)
- Issues discovered post-deployment can be quickly rolled back or patched, reflecting a mature DevOps culture.
-
Compliance and Audit-Friendly
- Pipeline logs, versioned artifacts, and automated checks document the entire release cycle for compliance with NCSC guidelines or NIST requirements.
Even so, you may refine or extend your pipeline (e.g., ephemeral testing environments, advanced canary releases, or ML-based anomaly detection in logs) to further boost agility and reliability.
Your answer:
We create short-lived test environments as needed, with a high degree of automation.
How to determine if this is good enough
At this top maturity level, your pipelines can spin up full-stack test environments for each feature branch or bug fix, and once tests pass, they’re torn down automatically. You might consider it “good enough” if:
-
High Flexibility, Minimal Resource Waste
- QA can test multiple features in parallel without the overhead of long-lived staging environments.
-
Extremely Fast Feedback Loops
- Developers receive near-instant validation that their changes work end-to-end.
-
Advanced Automation and Observability
- The pipeline not only provisions environments but also auto-injects test data, runs comprehensive tests, and collects logs/metrics for quick analysis.
-
Seamless Integrations
- Data security, user auth, or external services are seamlessly mocked or linked without complex manual steps.
While ephemeral environments typically reflect leading-edge DevOps, there’s always scope for refining cost efficiency, improving advanced security automation, or further integrating real-time analytics.
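The core of the ephemeral-environment pattern is a create/test/destroy lifecycle in which teardown always happens, even when tests fail. The sketch below illustrates that flow; create_environment, run_tests, and destroy_environment are placeholders for your provisioning and test tooling, and the branch name is hypothetical.
    # A sketch of the ephemeral test environment lifecycle: provision, test, and
    # always tear down. The helper functions are placeholders for real tooling.
    def create_environment(branch: str) -> str:
        print(f"provisioning environment for {branch}")
        return f"env-{branch}"  # placeholder for infrastructure-as-code provisioning

    def run_tests(environment: str) -> bool:
        print(f"running tests against {environment}")  # placeholder for a real test run
        return True

    def destroy_environment(environment: str) -> None:
        print(f"tearing down {environment}")  # placeholder for automated teardown

    def test_branch(branch: str) -> bool:
        environment = create_environment(branch)
        try:
            return run_tests(environment)
        finally:
            destroy_environment(environment)  # always reclaim the resources

    if __name__ == "__main__":
        print("passed" if test_branch("feature-login") else "failed")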
How do you develop and implement your cloud strategy? [change your answer]
Your answer:
We don’t have a cloud strategy team.
How to determine if this is good enough
Your organisation may run cloud operations without a formal cloud-oriented structure, relying on legacy or on-prem roles. This might be considered “good enough” if:
-
Low Cloud Adoption
- You only use minimal cloud services for pilot or non-critical workloads, making specialised cloud roles seem unnecessary.
-
Stable or Limited Growth
- Infrastructure demands rarely change, so a dedicated cloud team is not yet recognised as a priority.
-
No Formal Strategy
- Senior leadership or departmental heads are content with the status quo. No urgent requirement (e.g., cost optimisation, advanced digital services) drives a need for specialised cloud skills.
However, lacking a dedicated cloud focus often results in uncoordinated efforts, missed security best practices, and slow adoption of modern technologies. NCSC cloud security guidelines encourage establishing clear accountability and specialised skills for public sector cloud operations.
Your answer:
Some people have cloud skills and help others when needed.
How to determine if this is good enough
When some staff have cloud knowledge and organically help colleagues, your organisation achieves partial cloud collaboration. This may be “good enough” if:
-
Moderate Cloud Adoption
- You already operate a few production workloads in the cloud, and ad hoc experts resolve issues or give guidance sufficiently well.
-
Flexible Culture
- Teams are open to sharing cloud tips and best practices, but there’s no formal structure or authority behind it.
-
No Pressing Need for Standardisation
- Departments might be content with slight variations in cloud usage as long as top-level goals are met.
While better than complete silos, purely informal networks can cause challenges in scaling solutions, ensuring consistent security measures, or presenting a cohesive cloud vision at the organisational level.
Your answer:
We have a cross-team group that guides all cloud work.
How to determine if this is good enough
At this stage, you’ve established a Cloud Centre of Excellence (COE) or similar body that offers resources, best practices, and guidelines for cloud usage. It may be “good enough” if:
-
Visibility and Authority
- The COE is recognised by senior management or departmental leads, shaping cloud-related decisions across the organisation.
-
Standardised Practices
- The COE maintains patterns for infrastructure as code, security baselines, IAM policies, and cost optimisation.
- Teams typically consult these guidelines for new cloud projects.
-
Growing Cloud Adoption
- The COE’s existence accelerates confident use of cloud resources, boosting agility without sacrificing compliance.
If the COE is well-integrated and fosters consistent cloud usage, it might suffice. However, you can further embed COE standards into daily workflows or empower product teams with more autonomy.
Your answer:
Cloud teams use shared standards and patterns.
How to determine if this is good enough
Here, the COE’s guidance and patterns have been widely adopted. Project-specific cloud teams incorporate cross-functional roles (e.g., security, networking, DevOps). You might see it as “good enough” if:
-
Unified Governance
- Nearly all new cloud deployments adhere to COE-sanctioned architectures, security configurations, and cost policies.
-
Broad Collaboration
- Teams across the organisation share knowledge, follow standard templates, and integrate cloud best practices early in development.
-
Accelerated Delivery
- Because each project leverages proven patterns, time to deliver new cloud-based services is significantly reduced.
Still, certain advanced areas—like fully autonomous product teams or dynamic ephemeral environments—might remain underutilised, and you might expand the COE’s influence further.
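Shared standards tend to stick when they are backed by automated guardrails rather than documents alone. As a hypothetical example, the sketch below fails a pipeline when resources are missing mandatory tags; the tag names and resource records are illustrative, and in practice the inventory would come from your infrastructure-as-code plan or a cloud API.
    # A sketch of a COE-style guardrail: block a deployment if resources are missing
    # mandatory tags. The required tags and resources are illustrative examples.
    REQUIRED_TAGS = {"service", "owner", "cost-centre", "environment"}

    resources = [
        {"name": "app-db", "tags": {"service": "licensing", "owner": "team-a",
                                    "cost-centre": "1234", "environment": "prod"}},
        {"name": "temp-bucket", "tags": {"owner": "team-b"}},
    ]

    def missing_tags(resource: dict) -> set:
        return REQUIRED_TAGS - set(resource["tags"])

    failures = {r["name"]: missing_tags(r) for r in resources if missing_tags(r)}
    if failures:
        for name, missing in failures.items():
            print(f"{name} is missing tags: {', '.join(sorted(missing))}")
        raise SystemExit(1)  # fail the pipeline so the deployment does not proceed
    print("All resources meet the tagging standard")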
Your answer:
Teams work together across roles, with experts in all the areas we need them.
How to determine if this is good enough
At this final stage, you have a highly sophisticated COE model where product teams are fully empowered with cloud skills, processes, and governance. You might consider it “good enough” if:
-
High Autonomy, Low Friction
- Teams can spin up secure, cost-efficient cloud resources independently, referencing well-documented patterns, without bottlenecks or repeated COE approvals.
-
Robust Governance
- The COE remains a guiding entity rather than a gatekeeper, ensuring continuous compliance with NCSC guidelines or NIST standards via automated controls.
-
Continuous Innovation
- Because cross-functional teams handle security, DevOps, architecture, and user needs holistically, new services roll out quickly and reliably.
-
Data-Driven & Secure
- Cost usage, security posture, and performance metrics are all visible organisation-wide, enabling proactive decisions and swift incident response.
Though you’re at an advanced state, ongoing adaptation to new cloud technologies, security challenges, or legislative updates remains crucial for sustained leadership in digital transformation.
Who manages your cloud operations? [change your answer]
Your answer:
Developers handle all cloud operations.
How to determine if this is good enough
If developers handle cloud deployments, architecture, security, and day-to-day management without a specialised cloud team, you might consider it “good enough” if:
-
Small, Simple Environments
- The cloud footprint is minimal, with one or two services that developers can handle without overhead.
-
Low Operational Complexity
- The services don’t require advanced resilience, multi-region failover, or intricate compliance demands.
- Developer skill sets are adequate to manage basic cloud tasks.
-
Limited Budget or Staffing
- Your department lacks the resources to form a dedicated cloud or DevOps team, and you can handle ongoing operations with the existing developer group.
However, if your environment grows or demands 24/7 uptime, developer-led ops can hinder productivity and conflict with advanced security or compliance best practices recommended by NCSC or NIST SP 800-53.
Your answer:
A supplier manages all cloud operations and strategy.
How to determine if this is good enough
When all aspects of cloud—deployment, maintenance, security, strategy—are handled by an external vendor, you might consider it “good enough” if:
-
Limited Internal Capacity
- You do not have the in-house resources or time to recruit a dedicated cloud team.
- Outsourcing meets immediate needs without major overhead.
-
Tight Budget
- The contract with an external supplier may appear cost-effective at present, covering both ops and strategic planning.
-
Stable Workloads
- Your environment rarely changes, so a third-party can manage updates or occasional expansions without heavy internal oversight.
However, outsourcing strategic direction can leave the organisation dependent on external decisions, potentially misaligned with your departmental goals or public sector guidelines. NCSC’s recommendations often emphasize maintaining a degree of internal oversight for security and compliance reasons.
Your answer:
A supplier runs operations, but we set the strategy.
How to determine if this is good enough
Here, your organisation retains the cloud vision and strategy, while day-to-day ops remain outsourced. It might be “good enough” if:
-
High-Level Control
- You define the roadmap (e.g., which services to adopt, target costs, security posture), while the vendor handles operational execution.
-
Alignment with Department Goals
- Because strategy is owned internally, solutions remain consistent with your policy, user needs, and compliance.
-
Balanced Resource Usage
- Outsourcing ops can reduce staff overhead, allowing your in-house team to focus on strategic or domain-specific tasks.
If this arrangement effectively supports agile improvements, meets cost targets, and respects data security guidelines (from NCSC or NIST SP 800-53)—while you retain final say on direction—then it can suffice. But you can enhance synergy and reduce possible knowledge gaps further.
Your answer:
We use in-house and supplier teams, with our leaders in charge.
How to determine if this is good enough
When you blend internal expertise with external support—for instance, your staff handle architecture and day-to-day governance, while a vendor offers specialised services—this arrangement can be “good enough” if:
-
Flexible Resource Allocation
- You can easily scale up external help for advanced tasks (e.g., HPC workloads, complex migrations) or 24/7 on-call coverage without overstaffing internally.
-
Strong Collaboration
- Regular communication ensures your internal team remains involved, learning from the vendor’s advanced capabilities.
-
Cost-Effective
- Outsourcing only targeted areas (e.g., overnight ops or specialised DevOps) while your team handles strategic decisions can keep budgets manageable and transparent.
However, inconsistent processes between internal staff and vendor resources can cause friction or confusion about accountability. NCSC’s guidance on supplier assurance often emphasizes the importance of well-defined contracts and security alignment.
Your answer:
We have a strong in-house team for each cloud platform, with a shared roadmap.
How to determine if this is good enough
If your organisation has an in-house cloud team for each major platform (e.g., AWS, Azure, GCP, Oracle Cloud), or at least one broad team covering multiple platforms, you might consider it “good enough” if:
-
Comprehensive Expertise
- Your staff includes architects, DevOps engineers, security specialists, and cost analysts, ensuring all critical angles are covered.
-
Clear Organisational Roadmap
- A well-defined strategy for cloud migration, new service adoption, cost optimisation, or security posture, shared by leadership.
-
Strong Alignment with Public Sector Objectives
- The team ensures compliance with GOV.UK cloud policy, NCSC best practices, and possibly advanced NIST frameworks.
-
High Independence
- The team can rapidly spin up new projects, respond to incidents, and deliver advanced capabilities without external vendor lock-in.
Though at a high maturity level, ongoing improvements in team structure, cross-functional collaboration with developer squads, or advanced innovation remain possible.
How do you plan for incidents? [change your answer]
Your answer:
We don’t have a formal incident plan.
How to determine if this is good enough
If your organisation responds to incidents (e.g., system outages, security breaches) in an improvised manner—relying on a few knowledgeable staff with no documented plan—you might consider it “good enough” if:
-
Few or Infrequent Incidents
- You have a small, stable environment where major disruptions are rare, so ad-hoc responses haven’t caused major negative impacts or compliance issues.
-
Low-Risk Services
- The application or data in question is not critical to citizen services or departmental operations.
- Failure or compromise does not pose significant security or privacy risks.
-
Very Limited Resources
- Your team lacks the time or budget to formalise a plan, and you can handle occasional incidents with minimal fuss.
However, purely ad-hoc responses often lead to confusion, slower recovery times, and higher risk of mistakes. NCSC’s incident management guidance and NIST SP 800-61 on Computer Security Incident Handling recommend having at least a documented process to ensure consistent, timely handling.
Your answer:
We create a plan when launching a new service.
How to determine if this is good enough
Your organisation mandates that each new service or application must have a written incident response plan before going live. You might see it as “good enough” if:
-
Consistent Baseline
- All teams know they must produce at least a minimal IR plan for each service, preventing complete ad-hoc chaos.
-
Alignment with Launch Processes
- The IR plan is part of the “go-live” checklist, ensuring a modicum of readiness.
- Teams consider logs, metrics, and escalation paths from the start.
-
Improved Communication
- Stakeholders (e.g., dev, ops, security) discuss incident preparedness prior to launch, reducing confusion later.
While requiring IR documentation at service launch is beneficial, plans can become outdated if not revisited. Also, if the IR plan remains superficial, your team may not be fully prepared for evolving threats.
Your answer:
We have plans that we keep up to date.
How to determine if this is good enough
Here, your organisation’s IR plan is living documentation. You might consider it “good enough” if:
-
Periodic Reviews
- Your security or ops teams revisit the IR plan at least quarterly or after notable incidents.
- Updates reflect changes in architecture, threat landscape, or staff roles.
-
Cross-Team Collaboration
- Dev, ops, security, and possibly legal or management teams give input on the IR plan, ensuring a well-rounded approach.
-
Moderate Testing
- You occasionally run tabletop exercises or partial simulations to validate the plan.
Even so, you may enhance integration with broader IT continuity strategies or increase the frequency and realism of exercises. NCSC’s incident response maturity guidance typically advocates regular testing and cross-functional involvement.
Your answer:
Incident plans are part of wider IT and business continuity, and tested often.
How to determine if this is good enough
In this scenario, your incident response plan doesn’t sit in isolation—it’s part of a holistic approach to continuity, including DR (Disaster Recovery) and resilience. You might consider it “good enough” if:
-
Seamless Coordination
- If an incident occurs, your teams know how to escalate, who to contact in leadership, and how to pivot to DR or business continuity plans.
-
Frequent Drills
- You test different scenarios (network outages, data breaches, cloud region failovers) multiple times per year, refining the plan each time.
-
Proactive Risk Management
- The plan includes risk assessment outputs from continuity or resiliency committees, ensuring coverage of the top threats.
If you frequently test and unify IR with continuity, you likely handle incidents with minimal confusion. However, you can still refine procedures by adding ephemeral environment testing or advanced threat simulations. NCSC guidance on exercising incident response often recommends more thorough cross-team exercises.
Your answer:
We test our plans, keep them up to date and can recover critical systems within a day.
How to determine if this is good enough
At the highest maturity level, your IR plan is thoroughly integrated, tested, and refined. You might consider it “good enough” if:
-
Regular Full-Scale Exercises
- You conduct realistic incident drills—maybe even involving third-party audits or multi-department collaboration.
- Failover or system restoration is verified with near real-time performance metrics.
-
Near-Immediate Recovery
- Critical systems can be restored or replaced within hours, if not minutes, meeting strict RPO (Recovery Point Objective) and RTO (Recovery Time Objective) requirements.
-
Cross-Government Readiness
- You coordinate IR planning with other public sector bodies where interdependencies exist (e.g., healthcare, local councils).
While already impressive, continuous improvement is possible through refining automation, advanced threat hunting, or adopting chaos engineering to test response to unknown failure modes. NCSC’s advanced incident management guidelines recommend ongoing learning and adaptation.
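As a small worked example of the recovery targets mentioned above, the sketch below compares timestamps from an incident record against illustrative RTO and RPO values; the times and targets are examples, not recommendations.
    # Checking a recovery against RTO and RPO targets using incident timestamps.
    # All values are illustrative.
    from datetime import datetime, timedelta

    RTO = timedelta(hours=4)     # maximum tolerable time to restore the service
    RPO = timedelta(minutes=15)  # maximum tolerable window of lost data

    outage_start = datetime(2024, 6, 3, 10, 0)
    service_restored = datetime(2024, 6, 3, 12, 40)
    last_good_backup = datetime(2024, 6, 3, 9, 50)

    downtime = service_restored - outage_start
    data_loss_window = outage_start - last_good_backup

    print(f"Downtime {downtime} -> RTO {'met' if downtime <= RTO else 'missed'}")
    print(f"Data loss {data_loss_window} -> RPO {'met' if data_loss_window <= RPO else 'missed'}")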
People
How does your organisation work with cloud providers? [change your answer]
Your answer:
We just use services, with little or no contact with providers.
How to determine if this is good enough
Your organisation may simply use a cloud provider’s console or basic services without actively engaging them for training, account management, or technical guidance. This might be considered “good enough” if:
-
Low Cloud Adoption
- You only run a small set of workloads, and your staff have enough expertise to handle them without external help.
-
Limited Requirements
- You have no pressing need for advanced features, cost optimisation, or architectural guidance.
-
No Advanced Security/Compliance Demands
- Basic usage without deeper collaboration may suffice if your environment has minimal compliance or security constraints and your internal skills are adequate.
However, minimal engagement often leads to missed opportunities for cost savings, architecture improvements, or robust security best practices that a provider’s support team could offer—especially given NCSC cloud security recommendations for public sector contexts.
Your answer:
We sometimes get basic help or support from providers.
How to determine if this is good enough
Your organisation has begun reaching out to the cloud provider’s support channels for assistance (e.g., tickets, phone calls, or chat) on an as-needed basis. This could be “good enough” if:
-
Occasional Issues
- You typically resolve common problems quickly using vendor documentation, and only open support tickets for unusual or moderate complexity issues.
-
Low Complexity or Growth
- Your environment is stable, not requiring advanced architecture reviews or cost optimisation sessions with provider specialists.
-
Reasonable Timely Assistance
- The basic support meets your current operational SLA—especially if downtime or critical incidents remain infrequent.
Yet, to maximise public sector service resilience and cost efficiency, you might benefit from more proactive outreach, architecture reviews, or training options. NCSC’s operational resilience guidance often recommends deeper engagement for critical digital services.
Your answer:
We regularly talk to providers and use their training and support.
How to determine if this is good enough
At this stage, your organisation has established a relationship with the provider’s account or technical teams, periodically engaging them for advice or standard support. You might consider it “good enough” if:
-
Frequent Exchanges
- Monthly or quarterly calls, email updates, or Slack channels with the provider’s team, leading to timely advice on new services.
-
Technical Workshops
- You’ve participated in fundamental training or architecture sessions that help refine your environment.
-
Clear Escalation Paths
- If major incidents occur or you need advanced cost optimisation, you know how to escalate within the provider’s organisation.
While this approach can keep your environment stable and cost-aware, you could further deepen the partnership for tailored solutions, specialised trainings, or advanced architecture reviews aligned with public sector compliance. NCSC’s supply chain guidance encourages robust vendor relationships that go beyond minimal interactions.
Your answer:
We work closely with providers and get training and support tailored to our needs.
How to determine if this is good enough
In this scenario, your interactions with the provider aren’t just frequent—they’re customised for your department’s unique challenges and objectives. You might see it as “good enough” if:
-
Joint Planning
- Provider and internal teams hold planning sessions (quarterly or bi-annual) to match new services with your roadmap.
-
Customised Training
- You have in-person or virtual workshops focusing on your tech stack (e.g., AWS for HPC, Azure for AI, GCP for serverless, OCI for specialised Oracle workloads) and departmental constraints.
-
Aligned Security & Compliance
- Providers work closely with you on meeting NCSC cloud security guidelines or internal audits, possibly crafting special architectures for compliance.
-
High Adoption of Best Practices
- You regularly adopt well-architected reviews, cost optimisation sessions, or advanced managed services to streamline operations.
If your environment thrives under this proactive arrangement, you likely gain from reduced operational overhead and timely adoption of new features. Nonetheless, you can often elevate to a fully strategic partnership that involves co-marketing or advanced cloud transformation programs.
Your answer:
Providers are our partners. We work together often, get lots of support and can show our work on their platforms.
How to determine if this is good enough
At this highest maturity stage, your organisation forms a deep strategic alliance with the provider, leveraging broad support and showcasing initiatives publicly. You might consider it “good enough” if:
-
Integrated Strategic Alignment
- You and the provider co-plan multi-year roadmaps, ensuring cloud solutions directly serve departmental missions (e.g., digital inclusion, citizen service modernisation).
-
Extensive Training & Development
- Your staff frequently attend advanced workshops or immersion days, possibly earning official cloud certifications.
-
Joint Marketing or Showcasing
- The provider invites you to speak at summits or user conferences, highlighting your innovative public sector achievements.
-
Robust and Timely Innovation
- You often test or adopt new services early (alpha/beta features) with the provider’s help, shaping them to UK public sector needs.
While you may already be a leader in cloud adoption, continuous adaptation to new threats, technologies, and compliance updates remains essential. NCSC’s agile resilience approach suggests regular updates and real-world exercises to preserve top-tier readiness.
How does your organisation support cloud training and certification? [change your answer]
Your answer:
There is no formal support or plan for cloud training.
How to determine if this is good enough
If your organisation has no structured approach to cloud training—leaving staff to self-educate without guidance or incentives—it might be “good enough” if:
-
Minimal Cloud Adoption
- Cloud usage is negligible, so advanced training isn’t yet critical to daily operations.
-
Tight Budget or Staffing Constraints
- Leadership is unable or unwilling to allocate resources for training, preferring ad-hoc learning.
-
No Immediate Compliance Demands
- You have no pressing requirement for staff to hold certifications or demonstrate skill levels in areas like security or cost optimisation.
However, ignoring staff development can lead to skill gaps, security vulnerabilities, and missed cost-saving or operational improvement opportunities. NCSC’s cloud security guidance and NIST frameworks emphasize trained personnel as a cornerstone of secure and effective cloud operations.
Your answer:
Training is up to each manager and not tracked.
How to determine if this is good enough
When some managers actively encourage or fund cloud training while others do not, you might consider it “good enough” if:
-
Decentralised Teams
- Each team’s manager sets development priorities, leading to variability but still some access to training budgets.
-
Moderate Demand
- Some staff are obtaining certifications or improved knowledge, though there’s no overarching organisational push for uniform cloud competencies.
-
Acceptable Skills Coverage
- You can handle day-to-day cloud tasks, with no glaring skill shortage in critical areas like security or cost optimisation.
Yet, inconsistency can result in some teams lagging behind, risking security or performance issues. NIST SP 800-53 “Personnel Security” controls and NCSC workforce security guidelines recommend more structured approaches for critical technology roles.
Your answer:
Training is supported, tracked, and reported for all teams.
How to determine if this is good enough
Your organisation invests in cloud training at a corporate level, providing funds and tracking progress. You might consider it “good enough” if:
-
Clear Funding & Targets
- A portion of the budget is allocated for staff to attend vendor courses, exam fees, or relevant conferences.
-
Consistency Across Departments
- Each department sets training goals, reports progress, and aligns with overall skill objectives. This ensures no single team lags behind.
-
Organisational Visibility
- Leadership sees monthly/quarterly metrics on certifications achieved, courses completed, and can address shortfalls.
This robust structure fosters a learning culture, but you can refine it by tailoring training to specific roles or tasks, and by integrating self-assessment or advanced incentives. NCSC’s workforce development advice often supports role-specific skill mapping, especially around security.
Your answer:
Training is linked to each person’s role and development plan. People check their own progress.
How to determine if this is good enough
In this scenario, training is not only supported at a corporate level, but also each role has a defined skill progression, and staff regularly measure themselves. You might consider it “good enough” if:
-
Strong Ownership of Growth
- Employees see a clear path: e.g., from Cloud Practitioner to Solutions Architect Professional or from DevOps Associate to Security Specialist.
-
Regular Reflection
- Staff hold self-assessment sessions (quarterly or semi-annually) to gauge progress and plan next certifications.
-
Alignment with Team & Organisational Goals
- Each role’s recommended cert directly supports the team’s mission, whether optimising costs, enhancing security, or building new services.
If your approach fosters a culture of self-driven learning supported by structured role paths, it’s likely quite effective. Yet you can deepen it with formal incentives or broader organisational recognition programs.
Your answer:
People get rewards for completing training. Progress is checked and achievements recognised.
How to determine if this is good enough
At this top level, your organisation not only maps roles to training paths but also actively rewards certifications, publicises achievements, and ensures ongoing development. You might consider it “good enough” if:
-
Formal Recognition & Incentives
- Staff see a direct benefit (financial or career progression) upon earning relevant certs or completing advanced training.
-
Regular Assessments
- Beyond self-assessment, formal checks (e.g., exam simulations or performance evaluations) confirm skill proficiency.
-
Public Acknowledgment
- Achievements are recognised across teams or even externally (e.g., internal newsletters, digital badge schemes, vendor success stories).
-
Continuous Evolution
- As cloud services evolve, employees are encouraged to re-certify or pursue new advanced specialisations.
Even so, you can push further by connecting training outcomes directly to advanced strategic goals or building multi-department training programs. NCSC’s emphasis on robust workforce readiness often suggests cross-organisational knowledge sharing.
How important is cloud experience when hiring leaders, suppliers, and contractors? [change your answer]
Your answer:
Cloud experience is not required.
How to determine if this is good enough
Your organisation’s job postings do not mention or require cloud knowledge from applicants—even for senior/leadership roles. This could be “good enough” if:
-
Minimal or No Cloud Usage
- You operate almost entirely on-premises, with no plan or mandate to expand cloud operations in the near term.
-
Highly Specialised Legacy Roles
- Your roles focus on traditional IT (e.g., mainframe, specialised on-prem hardware), making cloud background less immediately relevant.
-
Solely Vendor or Outsourced Cloud Expertise
- You rely on a third-party supplier for cloud design and operations, so hiring for in-house cloud capability seems unnecessary.
However, ignoring cloud experience can become a blocker if your organisation decides to modernise or scale digital services. NCSC’s strategic cloud adoption guidance and GOV.UK’s Cloud First policy often suggest building at least some internal cloud capability to ensure secure and efficient usage.
Your answer:
Some roles ask for cloud experience.
How to determine if this is good enough
Your organisation mentions cloud skills for roles that clearly need them (e.g., DevOps, security engineering), while other positions (senior leadership, less technical roles) remain silent on cloud. This may be “good enough” if:
-
Targeted Cloud Adoption
- Only certain teams or projects are using cloud extensively, so broad-based cloud requirements aren’t mandatory.
-
Reasonable Cost/Benefit
- The budget and number of critical cloud roles are matched, so your approach to selectively recruiting cloud talent covers current demands.
-
Manager-Led Approach
- Hiring managers decide which roles should involve cloud experience, ensuring teams that do need it get the right people.
While this step ensures crucial roles have the necessary cloud skills, it may cause gaps in leadership or strategic roles if they remain cloud-agnostic. GDS leadership roles often emphasize digital knowledge, so integrating cloud awareness can future-proof your organisation’s direction.
Your answer:
All relevant roles must have cloud experience.
How to determine if this is good enough
Your organisation has moved to standardising cloud skill requirements in line with official frameworks, such as the GOV.UK DDaT profession capability framework. This may be “good enough” if:
-
Clear, Public Guidance
- Each role linked to “DDaT job family” or similar has explicit cloud knowledge expectations in the job descriptions.
-
Established Cloud Culture
- Colleagues in relevant fields (DevOps, architecture, security) all share a baseline of cloud competencies, ensuring consistent approaches across teams.
-
Confidence in Ongoing Staff Development
- You provide channels for employees to refresh or deepen their cloud skills (e.g., training budgets, exam vouchers).
If this meets your organisational scale—balancing modern service delivery with consistent cloud capabilities—it might be sufficient. Still, you can refine existing roles and adapt as the cloud environment evolves, ensuring continuous alignment with best practices from NCSC and NIST.
Your answer:
We have updated all roles to need cloud skills, not just new hires.
How to determine if this is good enough
Your organisation not only mandates cloud experience for new roles but also revises current positions, ensuring all necessary staff have relevant cloud responsibilities. You might see it as “good enough” if:
-
Comprehensive Role Audit
- You have completed an organisation-wide review of each position’s cloud skill requirements.
-
Seamless Transition
- Incumbent staff received training or redefined job objectives, mapping on-prem tasks to modern cloud tasks.
-
Consistent Cloud Readiness
- Department-wide, roles reflect a cloud-first approach—nobody is left operating purely on older skill sets if they have critical cloud duties.
Yet, as your environment or services evolve, you may consider advanced role specialisation (e.g., HPC, big data, AI/ML) or deeper multi-cloud skills. NCSC’s security frameworks might also push you to refine role-based security responsibilities.
Your answer:
Every role now needs cloud experience. We have reviewed and updated all roles.
How to determine if this good enough
At this top maturity level, your organisation has fully embraced a cloud-first model: all new and existing roles incorporate cloud knowledge. You might consider it “good enough” if:
-
Uniform Cloud Culture
- Cloud capabilities are not a niche skill; the entire workforce, from leadership to IT specialists, understands cloud fundamentals.
-
Frequent Revisits to Role Definitions
- If new technologies or security best practices emerge, roles adapt quickly.
-
Minimal Silos
- Cross-functional collaboration is straightforward, as everyone shares a baseline cloud understanding.
-
Strong Public Sector Alignment
- Your approach aligns with NCSC guidelines for secure cloud usage, NIST frameworks, and GOV.UK cloud-first policy expectations.
Even so, continuous refinement remains important. Evolving multi-cloud strategies, advanced DevSecOps, or specialised HPC/AI solutions might require targeted skill sets.
How do you choose suppliers and partners for cloud work? [change your answer]
Your answer:
We choose suppliers based on their marketing or being on frameworks.
How to determine if this good enough
If your organisation’s cloud supplier or partner selection relies mostly on brochures, websites, or the fact they appear on commercial frameworks (e.g., G-Cloud, DOS), you might see it as “good enough” if:
-
Limited Cloud Adoption
- You procure minimal cloud services, so in-depth vetting seems excessive.
-
Budget and Time Constraints
- There isn’t enough capacity to run thorough due diligence or procurement evaluations.
-
No High-Risk or Mission-Critical Projects
- Supplier performance is not yet vital to delivering crucial citizen-facing or secure workloads.
However, relying on marketing and basic framework presence can miss critical details like deep technical expertise, security maturity, or alignment with public sector compliance requirements. NCSC supply chain guidance and NIST SP 800-161 for supply chain risk management generally recommend a more robust approach.
Your answer:
We do basic checks to make sure suppliers meet standards.
How to determine if this good enough
Your organisation requires potential suppliers to pass some level of scrutiny—like verifying security certifications, relevant public sector framework compliance, or minimal references. You might consider it “good enough” if:
-
Compliance-Heavy or Standard Services
- The workloads require known certifications (e.g., Cyber Essentials, ISO 27001), and checking these meets your current risk appetite.
-
Occasional Cloud Projects
- For less frequent procurements, a standardised due diligence set (like a standard RFP template) covers enough detail.
-
Stable Risk Profile
- You have not encountered major incidents from suppliers, so the basic compliance approach seems adequate so far.
Still, basic checks do not confirm the supplier’s depth in technical cloud knowledge, cultural fit, or ability to handle advanced or evolving demands. NIST SP 800-161 supply chain risk management best practices and NCSC supplier assurance guidelines frequently recommend a more robust approach.
Your answer:
We check suppliers’ experience and make sure they fit our needs.
How to determine if this good enough
If your procurement team reviews suppliers by confirming they have verifiable cloud experience, meet standard public sector compliance, and fit your overarching strategic aims, you might deem it “good enough” if:
-
Consistent Approach Across Projects
- You use a standard set of criteria (e.g., data protection compliance, security posture, previous public sector references).
-
Moderate Cloud Maturity
- Your environment or projects have grown enough to demand thorough screening but not so large as to require specialised advanced partner relationships.
-
Proven Track Record in Delivery
- So far, these moderately screened partners have provided stable, cost-efficient solutions.
However, you might strengthen the process by including deeper due diligence around advanced areas like cost optimisation approaches, multi-cloud strategies, or specialised domain knowledge (e.g., HPC, AI) relevant to your departmental needs. NCSC’s approach to supplier assurance often encourages deeper, scenario-based evaluation.
Your answer:
We check for technical skills, values and ability to support our objectives.
How to determine if this good enough
Here, your organisation’s procurement approach encompasses in-depth assessments—covering not only technical prowess but also cultural fit and ethical standards. You might consider it “good enough” if:
-
Robust Vetting Process
- You scrutinise suppliers for cloud certifications, proven track record, security compliance, sustainability practices, and ethical supply chain standards.
-
Ethical and Green Priorities
- The supplier’s carbon footprint, corporate social responsibility (CSR), or alignment with UK government sustainability guidelines factor into selection.
-
Tailored Cloud Approach
- You ensure the supplier can deliver solutions matching your unique departmental use cases (e.g., HPC for research, serverless for citizen service web apps).
If your approach systematically ensures suppliers meet both technical and ethical standards, it likely fosters positive public sector outcomes. However, you can deepen the relationship by exploring strategic co-development or advanced partner statuses.
Your answer:
We look for suppliers with a strong track record, clear leadership, and long-term value.
How to determine if this good enough
At this highest maturity level, your selection process for cloud suppliers goes beyond technical checks—factoring in leadership alignment, risk transparency, and a future-facing approach. You might consider it “good enough” if:
-
Holistic Procurement
- You weigh track records, references from other government bodies, ethical stances, training or apprenticeship programs, and cost-effectiveness over time.
-
Strong Partnership
- The supplier aligns with your leadership’s strategic cloud vision, co-owning the roadmap for advanced digital transformation.
-
Defined KPIs & Metrics
- Contracts include measurable performance indicators (e.g., cost savings, user satisfaction, innovation initiatives), ensuring ongoing accountability.
-
Security and Compliance Embedded
- They proactively address NCSC cloud security guidelines or relevant NIST SP 800-53/800-161 controls, not waiting for you to raise concerns.
If you’ve reached a stage where each new supplier or partner truly integrates with your organisational goals and strategic direction, you likely ensure sustainable, high-value cloud engagements. Yet continual refinement remains essential to adapt to evolving requirements and technology.
How do you help staff with little or no cloud experience move into cloud roles? [change your answer]
Your answer:
We don’t plan for this.
How to determine if this good enough
Your organisation may not offer any structured way for employees to learn cloud technologies. You might consider it “good enough” if:
-
Minimal Cloud Footprint
- Your cloud usage is extremely limited, so extensive skill-building programs seem unnecessary.
-
No Immediate Skill Gaps
- Current projects do not require additional cloud expertise, and operational requirements are met without training investments.
-
Short-Term Budget or Resource Constraints
- Funding or leadership support for formal cloud training is unavailable at present.
However, a complete lack of development opportunities can lead to skill shortages if your cloud usage suddenly expands, or if staff who do have cloud expertise leave. NCSC’s workforce security guidance and NIST workforce frameworks often emphasise proactive skill-building to maintain operational security and resilience.
Your answer:
We give basic on-the-job training.
How to determine if this good enough
You may have a modest training approach, usually overseen by a line manager or a more experienced colleague. This can be “good enough” if:
-
Gradual Cloud Adoption
- The environment is evolving slowly, so incremental on-the-job training meets the immediate need.
-
In-House Mentors
- There are enough knowledgeable staff to guide newcomers on day-to-day tasks without being overloaded or risking burnout.
-
Basic Organisational Support
- A policy exists allowing some time for new staff to learn cloud basics, but no formal structured training plan is in place.
While more robust than having no path, purely on-the-job learning can be inconsistent. Some staff might receive thorough guidance, while others do not, depending on who they pair with. A standardised approach can yield faster, more uniform results—aligned with NCSC’s emphasis on skill-building for secure cloud operations.
Your answer:
We have training and mentoring programmes.
How to determine if this good enough
Here, your organisation invests in formal training paths or bootcamps, plus assigned mentors or peer learning groups. You might consider it “good enough” if:
-
Standardised Curriculum
- All new cloud-related hires or existing staff can follow a consistent set of modules or labs for fundamental cloud tasks.
-
Clear Mentorship Framework
- Each junior or novice staff member is paired with a specific mentor who checks in regularly, possibly with set learning milestones.
-
Frequent Feedback and Peer Exchange
- Staff share experiences in group sessions or Slack channels dedicated to troubleshooting and tips.
If such structured programs yield consistent, secure, and cost-effective cloud practices, it meets many public sector skill-building needs. Yet you can incorporate further advanced features—like external certification readiness or specialised domain training (e.g., HPC, AI). NCSC’s workforce security improvement guidelines often advocate deeper, continuous training expansions.
Your answer:
We offer in-house or external training to build cloud skills.
How to determine if this good enough
Your organisation provides robust training—like in-house cloud courses, external bootcamps, or vendor collaborations (AWS, Azure, GCP, OCI). You might consider it “good enough” if:
-
Managed End-to-End
- Employees sign up for consistent programs, from beginner to advanced, with recognised certification paths.
-
Frequent Engagement
- Regular classes or workshops ensure continuous skill growth, not just a one-time orientation.
-
Positive Impact
- Observed improvements in staff morale, faster cloud project delivery, and fewer errors or security incidents.
This approach likely meets most skill-building needs. Nonetheless, you can push for advanced or specialised tracks (e.g., HPC, AI/ML, security) or adopt apprenticeship or “bootcamp + aftercare” models. GOV.UK or GDS Academy courses may also be integrated to reinforce public sector-specific skill sets.
Your answer:
We have apprenticeship, career change and bootcamp programmes, with ongoing support.
How to determine if this good enough
At the highest level, your organisation runs a fully fledged apprenticeship or bootcamp approach to converting staff with little or no cloud background into proficient cloud practitioners, backed by ongoing mentorship. You might see it as “good enough” if:
-
High Conversion Rates
- Most participants complete the program and effectively fill cloud roles.
-
Post-Program Support
- After finishing, participants continue to receive coaching, refreshers, or advanced modules so their skills remain current.
-
Strategic Workforce Planning
- This pipeline of new cloud talent meets growing departmental or cross-government demands, minimising reliance on external hires.
Even so, continuous improvement can come through specialised advanced tracks, collaborating with other agencies on multi-disciplinary programs, or adding recognised certifications. NCSC guidance on building a secure workforce and NIST NICE frameworks reinforce deep, ongoing skill progression.
How much do you rely on third parties for cloud work? [change your answer]
Your answer:
Third parties do all our cloud work and have full access.
How to determine if this good enough
Your organisation might rely entirely on external suppliers or integrators to handle every aspect of your cloud environment (deployment, operations, security, cost optimisation). You may see this as “good enough” if:
-
Minimal Internal Capability or Resource
- Your team lacks capacity or skills to manage cloud tasks in-house, so outsourcing everything seems more efficient.
-
Stable, Low-Risk Environments
- You have not encountered major issues or compliance demands; the environment is small enough that handing all access to a trusted third party is acceptable.
-
Rigid Budget Constraints
- Management prefers paying a single supplier cost rather than investing in building in-house skills or a DevOps team.
However, complete third-party control often creates risk if the supplier fails, is compromised, or does not align with NCSC best practices on supply chain security. Also, NIST SP 800-161 supply chain risk management advises caution in giving total external control over strategic assets.
Your answer:
Third parties do a lot of our cloud work and have full access.
How to determine if this good enough
If your organisation still grants external partners or suppliers broad control of cloud resources, but you handle some tasks in-house, you might deem it acceptable if:
-
Shared Responsibilities
- Your staff can manage day-to-day tasks while suppliers handle complex architecture, major updates, or advanced security.
-
Periodic Oversight
- You monitor or audit the supplier’s activity at intervals, ensuring alignment with departmental standards.
-
Reasonable Security and Compliance
- The supplier meets basic compliance checks and commits to NCSC supply chain security best practices or relevant NIST SP 800-53/800-161 controls.
However, full account-level access can still introduce risk—particularly around misconfigurations, cost overruns, or insufficient security hardening if not carefully supervised. Evolving your posture can ensure robust, granular control.
Your answer:
Third parties help with specialist tasks and have emergency access only.
How to determine if this good enough
Here, your organisation typically handles daily operations, but calls on external experts for advanced tasks or emergencies—granting them only minimal privileged credentials. You might see it as “good enough” if:
-
Mature Internal Team
- Your staff can handle common issues; third parties fill skill gaps in HPC, ML, or specialised security incidents.
-
Controlled Access
- The supplier can escalate to “admin” only under defined protocols (e.g., break-glass accounts), reducing continuous broad privileges.
-
Balanced Costs
- You avoid paying for full outsourcing; instead, pay for specialised or on-demand engagements.
This approach offers strong security control while ensuring advanced expertise is available if required. NCSC’s principles of “least privilege” and “need-to-know” align with limiting third-party access in normal operations. NIST SP 800-161 supply chain risk guidance similarly endorses restricting vendor privileges.
Your answer:
Third parties give advice but have no special access.
How to determine if this good enough
Your organisation fully manages its cloud environment, relying on external experts for design reviews, architecture guidance, or training—but without granting them direct infrastructure permissions. This might be “good enough” if:
-
Sufficient In-House Ops and Security
- You have a capable ops and security team able to implement supplier recommendations without handing over admin keys.
-
Low Risk of Supply Chain Compromise
- Restricting external access to “view-only” or no direct access ensures minimal risk of unauthorised actions by a third party.
-
Strong Cultural Collaboration
- Communication flows well; suppliers can guide your staff effectively on advanced topics.
However, if you need external support for certain operational tasks, not giving them any direct access could slow response times or hamper complex troubleshooting. NCSC’s supply chain security advice advocates balancing minimal necessary access with real-world support requirements.
Your answer:
We do not use third parties, or they only help as extra staff with no special access.
How to determine if this good enough
At this highest maturity level, your organisation has robust internal cloud teams, perhaps occasionally hiring contract staff or specialised freelancers to augment efforts—but with no exclusive control or privileged role. You might consider it “good enough” if:
-
Self-Sufficient Internal Capability
- Your workforce covers all major cloud operations (DevOps, security, architecture, cost optimisation), reducing dependence on external vendors.
-
Minimal or Temporary Outsourcing
- External help is short-term, under strict direction, and does not lead or own critical processes.
-
Complete Knowledge Ownership
- No vendor or contractor has unique knowledge. All runbooks, configurations, or code remain well documented in-house.
If your internal team effectively manages all cloud tasks, external specialists only add temporary capacity. However, if new advanced needs arise (e.g., HPC, AI, specialised security audits), you might reintroduce deeper third-party involvement—so readiness for that possibility is key.
What does success look like for your cloud team? [change your answer]
Your answer:
The team doesn’t have specific ways to measure success.
How to determine if this good enough
Your cloud team lacks explicit metrics, goals, or success factors to gauge progress. This can feel acceptable if:
-
Minimal Cloud Footprint
- The team is in an exploratory or very early stage, with limited resources.
- There’s no immediate pressure to produce measurable outcomes.
-
Short-Term or Experimental Cloud Efforts
- The team is focusing on small PoCs without a formal success framework.
-
Uncertain Organisational Direction
- Senior management hasn’t outlined a precise cloud strategy, so the team lacks guidance on what “success” means.
However, without defined criteria, it’s difficult to justify budgets, measure progress, or ensure your efforts meet public sector demands. NCSC’s cloud security best practices and GOV.UK’s technology code of practice emphasise measurable outcomes for transparency and accountability.
Your answer:
Success means completing initial projects or building a basic cloud platform.
How to determine if this good enough
Your cloud team measures success by delivering small PoCs—like a pilot application running in the cloud or a “minimum viable” platform—for demonstration. This may be “good enough” if:
-
Early Adoption Phase
- You’re focusing on demonstrating feasibility and building internal confidence in cloud approaches.
-
Positive Reception
- Stakeholders are satisfied with these pilot results, seeing the potential for cost savings or faster deployments.
-
Limited Scale
- Organisationally, large-scale cloud migrations or complex workloads aren’t yet on the horizon.
Though better than having no success criteria, limiting measurements to “PoCs delivered” can hamper progression to full production readiness. NCSC operational resilience and NIST risk management frameworks often encourage planning for broader usage once pilot success is proven.
Your answer:
Success is launching one or more services in the cloud for real users.
How to determine if this good enough
In this scenario, your cloud team’s success criteria revolve around deploying real-world services or applications for actual users in cloud infrastructure. It may be “good enough” if:
-
Demonstrable Production Usage
- You can point to at least one or two services fully operating in the cloud, serving user or departmental needs.
-
Basic Reliability & Cost Gains
- Deployments show improved uptime, easier scaling, or partial cost savings over on-prem approaches.
-
Foundation for Expansion
- Success in these production workloads fosters confidence and sets a blueprint for migrating additional services.
Still, measuring success only by “production usage” can neglect other vital areas (like cost optimisation, security posture, or user satisfaction). NCSC’s cloud security guidance and NIST SP 800-53 controls underscore the importance of compliance, security checks, and continuous monitoring beyond just “it’s running in production.”
Your answer:
Success is moving core business services to the cloud.
How to determine if this good enough
The cloud team’s success is measured by graduating from smaller apps to significant, mission-critical systems. You might consider it “good enough” if:
-
Mission-Critical Cloud Adoption
- Key departmental or citizen-facing services run in the cloud, showcasing tangible operational or cost benefits.
-
Validated Resilience & Performance
- The services handle real production loads, meeting NCSC operational resilience best practices and departmental SLAs.
-
Cross-Functional Buy-In
- Architecture, finance, and security teams support your approach, indicating trust in cloud solutions for vital workloads.
However, you can refine success criteria to include advanced features like global failover, zero-downtime deployments, or integrated DevSecOps. NIST SP 800-160 systems security engineering often suggests deeper security integration once critical services are cloud-based.
Your answer:
Success means using the cloud to innovate, deliver real value by transforming core services, and support the organisation’s overall goals.
How to determine if this good enough
At this top maturity level, success measures for the cloud team emphasise innovation, experimentation, and direct ties to strategic value creation (e.g., cost savings, user satisfaction, or cross-government collaboration). You might see it as “good enough” if:
-
Clear Strategic Link
- Each new cloud feature or pilot directly supports organisational goals (e.g., citizen service improvement, efficiency targets).
-
Ongoing Experimentation
- The team fosters a culture of trying new services (e.g., AI/ML, serverless, HPC), measuring success with prototypes, while being able to fail fast and learn.
-
Demonstrable Value
- Whether it’s improved user experience, shortened delivery cycles, or significant cost reduction, the cloud initiatives produce measurable benefits recognised by leadership.
-
Comprehensive Security & Compliance
- As per NCSC cloud security principles or NIST controls, the environment remains robustly secure—balancing innovation with risk management.
Even at this level, you can refine success criteria by further integrating synergy with multi-cloud or cross-department projects, shaping a broader public sector digital transformation. GOV.UK’s digital transformation agenda encourages maximising user value with minimal friction.
Do leadership support your move to the cloud? [change your answer]
Your answer:
There is no support from leadership.
How to determine if this good enough
Your initiative to adopt 100% cloud is effectively grassroots-driven, without support from executive-level leaders (CEO, CFO, CIO, or equivalent). It might be “good enough” if:
-
Minimal Cloud Usage
- The organisation is still in a very early exploration stage, so top leadership’s involvement appears non-essential.
-
Limited or No Critical Workloads
- Cloud adoption does not yet impact vital citizen services or departmental mandates, so leadership sees no urgency.
-
No Current Funding/Resourcing Requirements
- The teams can sustain small pilot efforts within existing budgets or staff capacity without requiring strategic direction.
However, lacking executive buy-in often results in stalled progress, inability to scale secure cloud usage, and missed opportunities for cost optimisation or digital transformation. NCSC’s cloud security guidance and GOV.UK Cloud First policy typically advise leadership alignment to ensure secure, efficient, and future-proof adoption.
Your answer:
Senior managers support the move.
How to determine if this good enough
You have some backing from directors or departmental heads (below C-level) who champion the cloud initiative. This can be “good enough” if:
-
Visible Progress
- The department can proceed with cloud transformations in everyday operations.
- Key middle-management fosters departmental collaboration.
-
Partial Funding
- Senior managers can authorise training or pilot spending, but might need higher sign-off for large-scale expansions.
-
Some Accountability
- Senior managers track progress, but significant strategic shifts remain out of scope because top execs are not fully engaged.
Though beneficial, lacking the highest-level sponsorship might hinder cross-department alignment or hamper big-ticket modernisation. NCSC’s supply chain and cloud security frameworks often call for robust leadership direction for consistent security across the organisation.
Your answer:
Top executives (C-level) support the move.
How to determine if this good enough
Your cloud initiative is backed by a C-level executive (CIO, CTO, CFO, or equivalent), signalling strong leadership emphasis. It might be “good enough” if:
-
Robust Funding & Priority
- The sponsor secures budgets and champions cloud at board meetings, ensuring departmental alignment.
-
Influence Across Departments
- Cross-functional teams or other directors respect the executive’s authority, facilitating faster decisions.
-
Tangible Results
- With high-level backing, the cloud initiative can accelerate modernisation, cost savings, or improved service delivery.
Still, to sustain this advantage, you can adopt a structured roadmap, define deeper cultural changes, or integrate advanced DevSecOps. GOV.UK’s approach to agile/digital transformation guidance and NCSC well-architected security best practices can guide deeper integration.
Your answer:
Top executives support the move and there is a clear plan.
How to determine if this good enough
Here, your cloud initiative enjoys comprehensive executive support, with a well-defined plan across multiple departments or services. You might consider it “good enough” if:
-
Clear Multi-Department Involvement
- The entire leadership team endorses a unified cloud strategy, establishing integrated goals.
-
Detailed Migration & Transformation Plan
- A collaborative roadmap outlines which apps or services move first, timelines for HPC or AI expansions, or how to integrate new DevOps pipelines.
-
Measured Organisational Impact
- You can show cost savings, improved reliability, or user satisfaction correlating with the roadmap’s progress.
Though advanced, you can refine metrics, extend advanced HPC/AI usage, or further embed a “cloud-first” ethos across every level of staff. NCSC’s “cloud first” security posture advice and GOV.UK digital transformation frameworks remain relevant for continuous improvement.
Your answer:
Top executives lead a cloud-first culture and push for innovation.
How to determine if this good enough
At this ultimate stage, your organisation’s top leadership proactively cultivates a “cloud-first” mindset, championing experimentation and innovation. You might consider it “good enough” if:
-
Embedded Cloud Thinking
- Staff across all levels default to considering cloud solutions first for new projects, referencing GOV.UK Cloud First policy.
-
High Experimentation & Safe Fail
- DevSecOps teams conduct frequent PoCs, quickly pivoting from unsuccessful trials with minimal friction or blame.
-
Relentless Focus on Value & Security
- The culture merges cost-awareness, user-centric design, and NCSC security best practices at every step.
-
Confidence and Autonomy
- Teams can easily spin up new resources or adopt new services within guardrails—thanks to strong governance, automated compliance checks, and constant exec support.
Though already advanced, you can still refine cross-department synergy, adopt emerging HPC/AI capabilities, or serve as a best-practice model for other public sector organisations. Continuous improvement aligns with NIST cybersecurity frameworks and frequent updates to NCSC secure cloud guidelines.
Security
How do you manage accounts used by software, not people? [change your answer]
Your answer:
We use basic usernames and passwords.
How to determine if this good enough
In this scenario, your organisation creates standard user accounts (with a username/password) for services or scripts to authenticate within the cloud environment. It might be “good enough” if:
-
Minimal Cloud Usage
- Only a few workloads exist, and they don’t require advanced identity/access management or rigorous security controls.
-
Low-Risk Services
- The data or resources accessed by these service accounts do not involve sensitive citizen data or mission-critical infrastructure.
-
No Internal Skill for Advanced Approaches
- The team lacks time or resources to implement more secure methods of service account authentication.
However, user/password-based credentials can be easily leaked or shared, risking unauthorised access. NCSC’s Cloud Security Guidance and NIST SP 800-63 on digital identity guidelines often advise stronger or more automated credential management to avoid credential sprawl or reuse.
Your answer:
We use API keys that don’t change often.
How to determine if this good enough
If your service accounts rely on API keys for authentication—commonly found in scripts or CI/CD jobs—this might be acceptable if:
-
Limited Attack Surface
- The system is small-scale, and your keys do not provide broad or highly privileged access.
-
Reasonable Operational Constraints
- You only occasionally manage these keys, storing them in private repos or basic secret storage.
-
No Strict Security/Compliance Mandates
- You’re not handling data that triggers heightened security or compliance requirements beyond basic standards.
However, API keys can be compromised if not rotated or stored carefully. NCSC’s guidance on credential hygiene recommends more dynamic or short-lived solutions. Similarly, NIST SP 800-63 suggests limited-lifespan credentials for improved security.
Your answer:
We use a central place to store secrets and sometimes rotate them.
How to determine if this good enough
Your organisation employs a central solution (like AWS Secrets Manager, Azure Key Vault, GCP Secret Manager, or OCI Vault) to hold service account credentials. Some credentials rotate automatically, while others might still be static. This might be “good enough” if:
-
Enhanced Security Posture
- You have significantly reduced the chance of plain-text credentials being lost or shared in code repos.
-
Operational Efficiency
- Teams no longer manage credentials ad hoc. The secret store offers a single source for retrieving keys, tokens, or passwords.
-
Some Automated Rotation
- Certain credentials, such as database passwords (for example, RDS) or particular account keys, rotate on a schedule, improving security.
To further strengthen security, you could expand rotation across all credentials, adopt advanced ephemeral tokens, or integrate mutual TLS. NCSC’s guidance on secrets management and zero-trust approaches supports such expansions.
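As an illustration of the central secret store pattern described above, the sketch below fetches a credential from AWS Secrets Manager at runtime rather than embedding it in code. It assumes the boto3 SDK, an AWS environment, and a hypothetical secret named prod/reporting-db; Azure Key Vault, GCP Secret Manager, and OCI Vault offer equivalent APIs.

```python
# Minimal sketch: fetching a service credential from AWS Secrets Manager at runtime
# instead of hard-coding it. Assumes boto3 is installed and the caller's IAM role
# has secretsmanager:GetSecretValue on the (hypothetical) secret named below.
import json
import boto3

def get_database_credentials(secret_name: str = "prod/reporting-db") -> dict:
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_name)
    # SecretString holds the stored payload; here we assume it was saved as JSON
    return json.loads(response["SecretString"])

if __name__ == "__main__":
    creds = get_database_credentials()
    print("Retrieved credentials for user:", creds.get("username"))
```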
Your answer:
We use certificates (mutual TLS) for secure connections.
How to determine if this good enough
Your organisation deploys mutual TLS (mTLS)—each service has a certificate, and the server also presents a certificate to the client, ensuring bidirectional trust. This may be “good enough” if:
-
Secure End-to-End
- Services handle particularly sensitive data (e.g., health records, citizen data) requiring robust authentication.
-
Compliance with Zero-Trust or Strict Policies
- mTLS aligns with NCSC zero-trust architecture principles and NIST SP 800-207 zero trust frameworks.
-
Operational Maturity
- You maintain a solid PKI or certificate authority infrastructure, rotating and revoking certificates as needed.
However, implementing mTLS can be complex, requiring thorough certificate lifecycle management and robust observability. You might refine usage by embedding short-lived, dynamic certificates or adopting service mesh solutions that automate mTLS.
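For illustration, a minimal mutual TLS call from one service to another might look like the sketch below, using Python’s requests library. The endpoint URL and certificate paths are placeholders; in practice the client certificate and key would be issued and rotated by your PKI.

```python
# Minimal sketch of a mutual TLS call between two services using the requests
# library. Paths and the URL are placeholders; the client presents its own
# certificate and validates the server against your internal CA.
import requests

response = requests.get(
    "https://internal-api.example.gov.uk/records",        # hypothetical internal endpoint
    cert=("/etc/pki/client.crt", "/etc/pki/client.key"),  # client certificate and private key
    verify="/etc/pki/internal-ca.pem",                    # CA bundle used to verify the server
    timeout=10,
)
response.raise_for_status()
print(response.json())
```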
Your answer:
We use short-lived, strongly checked identities that change for each use.
How to determine if this good enough
Your approach for non-human accounts employs ephemeral tokens or federated identity solutions—limiting each credential’s lifespan and ensuring each request is securely verified. You might see it as “good enough” if:
-
Zero Standing Privileges
- No permanent credentials exist. Each service obtains a short-lived token or identity just before usage.
-
Granular, Real-Time Validation
- Policies and claims are checked on each request (or at frequent intervals), reflecting advanced zero-trust models recommended by NCSC and NIST zero-trust frameworks.
-
High Assurance of Security
- The risk of stolen or misused credentials is drastically reduced, as tokens expire rapidly.
Though highly advanced, you might further optimise performance, adopt specialised identity standards (e.g., OAuth2, JWT-based systems), or integrate with multi-cloud identity solutions. NCSC’s and NIST’s advanced DevSecOps suggestions encourage ongoing improvement in ephemeral, short-lived identity usage.
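A minimal sketch of the short-lived credential pattern, assuming an AWS environment and the boto3 SDK: the service exchanges a role assumption for temporary credentials that expire within minutes, so nothing long-lived is ever stored. The role ARN and session name are hypothetical.

```python
# Minimal sketch: obtaining short-lived credentials from AWS STS so no permanent
# secret is stored by the calling service. Role ARN and session name are placeholders.
import boto3

sts = boto3.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/reporting-batch-job",
    RoleSessionName="nightly-report",
    DurationSeconds=900,  # 15 minutes: the credentials expire shortly after use
)

creds = assumed["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```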
How does your organisation manage user identities and authentication? [change your answer]
Your answer:
There are a few rules for managing user identities, and little checking.
How to determine if this good enough
Your organisation may lack formal identity and password guidelines, or each team creates ad hoc rules. This might be seen as acceptable if:
-
Minimal Access Needs
- Only a handful of staff use cloud resources, making the risk of misconfiguration or credential sharing relatively low.
-
No Strict Compliance
- You operate in an environment where official audits or regulatory demands for identity controls are currently absent.
-
Limited Cloud Adoption
- You are still at an exploratory stage, so formalising identity policies hasn’t been prioritised yet.
However, lacking standard policies can result in weak or inconsistent credential practices, inviting security breaches. NCSC’s Password Guidance and NIST SP 800-63 on digital identity guidelines emphasise robust policy frameworks to mitigate credential-based threats.
Your answer:
There are some rules, and these are sometimes checked by hand.
How to determine if this good enough
Your organisation has some formal rules for passwords, MFA, or user provisioning, but verifying compliance requires manual checks, sporadic log reviews, or retrospective audits. You might see it as “good enough” if:
-
Limited-Scale or Low Risk
- You can manage manual checks if you have few user accounts or only a small set of privileged users.
-
Existing Staff Processes
- The team can handle manual policy checks (like monthly password rotation reviews), although it’s time-consuming.
-
No Immediate Audit Pressures
- You have not recently encountered external security audits or compliance enforcements that require continuous, automated reporting.
While this approach fosters some consistency, manual processes often fail to catch misconfigurations promptly, risking security lapses. NCSC’s identity management best practices and NIST frameworks generally advise automation to quickly detect and address policy violations.
Your answer:
There are rules, such as two-factor authentication for key accounts, and some automated checks.
How to determine if this good enough
You have implemented some automation for identity management—like requiring 2FA for admin roles and using scripting or built-in cloud tools for scanning compliance. It might be “good enough” if:
-
Reduction in Manual Oversight
- Automated checks detect certain policy violations or stale accounts, though not everything is covered.
-
Broader Governance
- The organisation has standard identity controls. Teams typically follow them, but some manual interventions remain.
-
Improved Security Baseline
- Regular or partial identity audits reveal fewer misconfigurations or abandoned accounts.
You still can refine these partial automations to fully handle user lifecycle management, integrate single sign-on for all users, or adopt real-time security responses. NIST SP 800-53 AC controls and NCSC identity recommendations consistently recommend deeper automation.
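The sketch below illustrates the kind of automated check mentioned above, assuming AWS IAM and the boto3 SDK: it lists users and flags any without an MFA device. Equivalent queries exist for other providers’ identity services.

```python
# Minimal sketch of an automated identity check: flag IAM users with no MFA device.
# Assumes boto3 and read-only permissions (iam:ListUsers, iam:ListMFADevices).
import boto3

iam = boto3.client("iam")
paginator = iam.get_paginator("list_users")

for page in paginator.paginate():
    for user in page["Users"]:
        mfa_devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
        if not mfa_devices:
            print(f"MFA not enabled for user: {user['UserName']}")
```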
Your answer:
There are central rules for all users, with most checks and enforcement automated. Single Sign-On and two-factor authentication are widely used.
How to determine if this good enough
Here, your organisation enforces a centralised identity solution with automated checks. Some manual steps may remain for edge cases, but 2FA or SSO is standard for all staff. This approach might be “good enough” if:
-
High Standardisation
- All departments follow a uniform identity policy, with minimal exceptions.
-
Frequent Automated Audits
- Tools or scripts detect anomalies (e.g., unused accounts, role expansions) and flag them without manual effort.
-
User-Friendly SSO
- Staff log in once, accessing multiple cloud services, ensuring better compliance with security measures (like forced MFA).
Though highly mature, you can further refine short-lived credentials for non-human accounts, adopt advanced or zero-trust patterns, and integrate additional threat detection. NIST SP 800-207 zero-trust architecture guidelines and NCSC cloud security frameworks suggest continuous iteration.
Your answer:
All identity rules and checks are fully centralised and automated. This includes strong authentication, automated approval processes, and good reporting, especially for sensitive data access.
How to determine if this good enough
Your organisation has reached the top maturity level, with a fully centralised, automated identity management solution. You might see it as “good enough” if:
-
Enterprise-Grade IAM
- Every user (human or non-human) is governed by a central directory, applying strong MFA/SSO, with role-based or attribute-based controls for all resources.
-
Zero Standing Privilege
- Privileged credentials are ephemeral, enforced by JIT or automated workflows.
- Minimises exposure from compromised accounts.
-
Continuous Compliance & Reporting
- Real-time dashboards or logs show who can access what, enabling immediate audits for regulatory or internal policy checks.
-
Seamless Onboarding & Offboarding
- Automated provisioning grants roles upon hire or team assignment, revoking them upon departure to ensure no orphaned accounts.
Though highly advanced, you can refine multi-cloud identity federation, adopt specialised HPC/AI or cross-government identity sharing, and embed advanced DevSecOps patterns. NCSC’s security architecture advice and NIST SP 800-53 encourage continuous improvement in a dynamic threat landscape.
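As one example of the continuous reporting described above, the sketch below pulls AWS IAM’s credential report and prints each user’s password and MFA status. It assumes boto3 and the relevant read permissions; other providers expose comparable audit APIs.

```python
# Minimal sketch of a point-in-time access report, the kind of evidence that feeds
# "who can access what" dashboards. Assumes AWS and permissions for
# iam:GenerateCredentialReport and iam:GetCredentialReport.
import csv
import io
import time
import boto3

iam = boto3.client("iam")

# Ask AWS to build the report, then poll until it is ready
while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)

report = iam.get_credential_report()["Content"].decode("utf-8")
for row in csv.DictReader(io.StringIO(report)):
    print(row["user"], "password_enabled:", row["password_enabled"],
          "mfa_active:", row["mfa_active"])
```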
How do you make sure people have the right access for their role? [change your answer]
Your answer:
We review users’ access when we need to.
How to determine if this good enough
Your organisation lacks a formal or scheduled approach to verifying user access, relying on admin discretion. This might be acceptable if:
-
Small or Static Environments
- Fewer staff changes, so new or removed accounts are manageable without a structured process.
-
No Critical Data or Systems
- Low sensitivity or risk if accounts remain overprivileged or are never deactivated.
-
Minimal Budgets/Resources
- The current state is all you can handle, with no immediate impetus to formalise.
However, ad-hoc reviews often result in outdated or excessive privileges, violating the NCSC’s principle of least privilege and ignoring NIST SP 800-53 AC (Access Control) controls. This can lead to security breaches or cost inefficiencies.
Your answer:
We sometimes review access, but rarely remove it.
How to determine if this good enough
Your organisation periodically inspects user entitlements—maybe annually or every six months—but rarely adjusts them, fearing interruptions if privileges are revoked. This might be considered “good enough” if:
-
Basic Governance in Place
- At least you have a schedule or routine for checking access.
-
Minimal Overhead
- The burden of frequent changes or potential disruptions might exceed perceived risk from leftover permissions.
-
No Evidence of Abuse
- You haven’t encountered security incidents or cost leaks due to over-privileged accounts.
Yet continuously retaining excessive privileges invites risk. NCSC’s guidelines and NIST SP 800-53 AC-6 on least privilege emphasise actively removing unneeded privileges to shrink your attack surface.
Your answer:
We review access often, but mostly add new access rather than remove it.
How to determine if this good enough
Your organisation systematically checks user access on a regular basis, but typically only grants new privileges (additive changes). Rarely do you remove or reduce existing entitlements. This may be “good enough” if:
-
Frequent or Complex Role Changes
- Staff rotate roles or new tasks come up often, so you keep adding privileges to accommodate new responsibilities.
-
Better Than Irregular Audits
- At least you’re reviewing systematically, capturing some improvements over purely ad-hoc or partial reviews.
-
No Major Security Incidents
- You haven’t experienced negative consequences from leftover or stale permissions yet.
However, purely additive processes lead to privilege creep. Over time, users accumulate broad access, conflicting with NCSC’s least privilege principle and NIST SP 800-53 AC-6 compliance. Reductions are vital to maintain a minimal attack surface.
Your answer:
Access is reviewed regularly, with expiry dates set for each role.
How to determine if this good enough
Your organisation systematically reviews user access with clear renewal or expiry deadlines, ensuring no indefinite privileges. This indicates a strong security posture. It’s likely “good enough” if:
-
Automated or Well-Managed Reviews
- The process is consistent, with each role or permission requiring re-validation after a certain period.
-
Minimal Privilege Creep
- Because roles expire, staff or contractors do not accumulate unneeded rights over time.
-
High Confidence in Access Data
- You maintain accurate data on who has which roles, and changes occur only after formal approval or re-certification.
Though robust, you can further refine by integrating real-time risk signals or adopting advanced identity analytics. NCSC’s operational resilience and NIST SP 800-53 Access Controls (AC-2, AC-3) generally encourage continuous improvement in automated checks.
Your answer:
Reviews are automated. Access changes when roles change, and all access has expiry dates.
How to determine if this good enough
At the apex of maturity, your organisation uses a fully automated, risk-based system for managing user permissions. You might consider it “good enough” if:
-
Zero Standing Privileges
- Privileges are automatically granted, adjusted, or revoked based on real-time role changes, with minimal human intervention.
-
Frequent or Continuous Verification
- A system or pipeline regularly checks each user’s entitlements and triggers escalations if anomalies arise.
-
Synchronised with HR Systems
- Staff transitions—new hires, promotions, departures—instantly reflect in user permissions, preventing orphaned or leftover access.
-
Strong Governance
- The process enforces compliance with NCSC identity best practices or relevant NIST AC (Access Control) guidelines through policy-as-code or advanced IAM solutions.
Although highly mature, you can still enhance cross-government collaboration or adopt real-time risk-based authentication. NCSC’s zero-trust architecture or advanced DevSecOps suggestions encourage ongoing adaptation to new technology or threat vectors.
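To illustrate what an automated check with expiry might look like, the sketch below flags IAM access keys older than a chosen maximum age so they can be rotated or revoked. It assumes AWS and the boto3 SDK; the 90-day threshold is illustrative rather than a mandated value.

```python
# Minimal sketch of an automated expiry check: report IAM access keys older than a
# maximum age. Assumes boto3 and read-only IAM permissions.
from datetime import datetime, timezone
import boto3

MAX_KEY_AGE_DAYS = 90  # illustrative threshold, not a mandated value
iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            age = (datetime.now(timezone.utc) - key["CreateDate"]).days
            if age > MAX_KEY_AGE_DAYS:
                print(f"{user['UserName']}: key {key['AccessKeyId']} is {age} days old")
```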
How do you create and manage user accounts for cloud systems? [change your answer]
Your answer:
People share accounts, or accounts are managed by hand.
How to determine if this good enough
Your organisation might rely on shared or manually managed individual accounts for cloud systems, with minimal traceability. This can feel “good enough” if:
-
Minimal Operational Complexity
- The cloud usage is small-scale, and staff prefer quick, ad-hoc solutions.
-
Limited or Non-Critical Workloads
- The risk from poor traceability is low if the environment does not hold sensitive data or mission-critical services.
-
Short-Term or Pilot
- You see the current manual or shared approach as a temporary measure during initial trials or PoCs.
However, sharing accounts blurs accountability, violates NCSC’s principle of user accountability and contravenes NIST SP 800-53 AC-2 for unique identification. Manually managing accounts can also lead to mistakes (e.g., failing to revoke ex-employee access).
Your answer:
We use a central directory (like Active Directory), but links to cloud systems are inconsistent.
How to determine if this good enough
Your organisation might store all user info in a standard directory (e.g., Active Directory or LDAP) but each cloud integration is handled manually. This can be “good enough” if:
-
Consistent On-Prem Directory
- You can reliably create and remove user entries in your on-prem directory, so internal processes generally work.
-
Limited Cloud Footprint
- Only a few cloud services rely on these user accounts, so manual processes don’t create major friction.
-
Medium Risk Tolerance
- The environment accepts manual integrations, though certain compliance or security requirements aren’t strict.
However, manual synchronisation or ad-hoc provisioning to cloud systems often leads to out-of-date accounts, security oversights, or duplication. NCSC’s identity and access management guidance and NIST SP 800-53 AC (Access Controls) recommend consistent, automated user lifecycle management across on-prem and cloud.
Your answer:
We have standard ways of working with cloud systems and try to avoid services that won’t work with this.
How to determine if this good enough
Your organisation has established guidelines for user provisioning, adopting standard protocols (e.g., SAML, OIDC) or dedicated identity bridging solutions. This is likely “good enough” if:
-
Consistent Approach
- Teams or new projects follow the same identity integration pattern, reducing one-off solutions.
-
Moderate Automation
- User accounts typically auto-provision or sync from a central IDP, though some edge cases may require manual effort.
-
Reduced Shadow IT
- You discourage or block cloud services that lack compliance with standard identity integration, referencing NCSC supply chain security guidance.
You may strengthen these standards by further automating the account lifecycle, ensuring short-lived credentials for privileged tasks, or integrating advanced analytics for anomaly detection. NIST SP 800-63 and 800-53 highlight deeper identity proofing and continuous monitoring strategies.
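As a sketch of the standard-protocol provisioning described above, the example below creates a user through a SCIM 2.0 endpoint using Python’s requests library. The base URL and bearer token are placeholders for whatever your identity provider actually exposes.

```python
# Minimal sketch of provisioning a user via a SCIM 2.0 endpoint. The base URL and
# token are placeholders; real identity providers publish their own SCIM endpoints.
import requests

SCIM_BASE = "https://idp.example.gov.uk/scim/v2"   # hypothetical SCIM endpoint
TOKEN = "replace-with-a-short-lived-token"

new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jane.doe@example.gov.uk",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "active": True,
}

response = requests.post(
    f"{SCIM_BASE}/Users",
    json=new_user,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
response.raise_for_status()
print("Created user with SCIM id:", response.json()["id"])
```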
Your answer:
Identity management is automated. Non-standard systems are kept separate.
How to determine if this good enough
Your organisation’s identities are managed seamlessly by a central provider, with minimal manual intervention. This might be “good enough” if:
-
Automatic User Lifecycle
- Hiring, role changes, or terminations sync instantly to cloud services—no manual updates needed.
-
Strong Policy Enforcement
- Services without SAML/OIDC or SCIM compliance are either disallowed or strictly sandboxed.
-
Robust Security & Efficiency
- The user experience is simplified with single sign-on, while security logs track every permission change, referencing NCSC’s recommended identity assurance levels.
You might further refine by adopting ephemeral credentials or advanced risk-based access policies. NIST SP 800-207 zero trust architecture suggests dynamic, continuous verification of user sessions.
Your answer:
We have a single cloud-based directory for all users. All accounts are managed in one place, and non-standard systems are gone.
How to determine if this good enough
At the highest maturity, your organisation uses a single, cloud-based IdP (e.g., Azure AD, AWS SSO, GCP Identity, or third-party SSO) for all user lifecycle events, and systems not integrating with it are deprecated or replaced. You might see it as “good enough” if:
-
Complete Lifecycle Automation
- All new hires automatically receive the relevant roles, staff moves trigger role changes, and departures instantly remove access.
-
Zero Trust & Full Federation
- Every service or app you rely on supports SAML, OIDC, or SCIM, leaving no manual provisioning.
-
Strong Compliance & Efficiency
- Auditors easily confirm who has access to what, and staff enjoy a frictionless SSO experience.
- Aligns well with NCSC’s guidelines for enterprise identity solutions and NIST’s recommended identity frameworks.
Even so, you can continuously refine cross-department identity, advanced DevSecOps integration, or adopt next-gen identity features (e.g., risk-based authentication or passwordless technologies).
How do you manage non-human service accounts in the cloud? [change your answer]
Your answer:
Service accounts are like user accounts, with long-lived passwords.
How to determine if this good enough
Your organisation may treat service accounts as if they were human users, granting them standard usernames and passwords (or persistent credentials). This might be acceptable if:
-
Low-Risk, Low-Criticality Services
- The services run minimal workloads without high security, compliance, or cost risks.
-
No Complex Scaling
- You rarely spin up or down new services, so manual credential management seems manageable.
-
Very Small Teams
- Only a handful of people need to coordinate these credentials, reducing the chance of confusion.
However, long-lived credentials that mimic human user accounts typically violate NCSC’s cloud security principles and NIST SP 800-53 AC (Access Control) due to potential credential sharing, lack of accountability, and higher risk of compromise.
Your answer:
Service accounts use long-lived API keys, managed by each team.
How to determine if this good enough
In this setup, non-human accounts are assigned API keys (often static), managed by the project team. You might see it as “good enough” if:
-
Limited Cross-Project Needs
- Each project operates in isolation, with minimal external dependencies or shared services.
-
Few Cloud Services
- The environment is small, so local management doesn’t cause major confusion or risk.
-
Low Security/Compliance Requirements
- No strong obligations for rotating or logging key usage, or a short-term approach that hasn’t caught up with best practices yet.
Still, static API keys managed locally can easily be lost, shared, or remain in code, risking leaks. NCSC supply chain or credential security guidance and NIST SP 800-63 on digital identity credentials advise more dynamic, centralised strategies.
Your answer:
All service accounts use a central secret store, which everyone must use.
How to determine if this good enough
Your organisation mandates storing service account credentials in a secure, central location (e.g., an enterprise secret store). This might be “good enough” if:
-
Reduced Credential Sprawl
- No more local storing of secrets in code or random text files.
- Standard enforcement ensures consistent usage.
-
Better Rotation & Auditing
- The secret store possibly automates or at least supports rotating credentials.
- You can track who accessed which secret, referencing NCSC’s credential management recommendations.
-
Strong Baseline
- This approach typically covers a major part of recommended practices from NIST SP 800-63 or 800-53 for credentials.
However, using a secret store alone doesn’t guarantee ephemeral or short-lived credentials. You can further adopt ephemeral tokens and embed attestation-based identity to limit credentials even more. NCSC’s zero trust advice also encourages dynamic authentication steps.
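For example, if you use AWS Secrets Manager as the central store, automatic rotation can be switched on per secret as in the sketch below. The secret name and Lambda ARN are placeholders, and it assumes a rotation function already exists; other secret stores offer equivalent rotation hooks.

```python
# Minimal sketch of enabling automatic rotation for a stored secret in AWS Secrets
# Manager. The secret name and rotation Lambda ARN are placeholders; a rotation
# function must already exist for this call to succeed.
import boto3

client = boto3.client("secretsmanager")
client.rotate_secret(
    SecretId="prod/reporting-db",  # hypothetical secret name
    RotationLambdaARN="arn:aws:lambda:eu-west-2:123456789012:function:rotate-db-secret",
    RotationRules={"AutomaticallyAfterDays": 30},  # rotate every 30 days
)
print("Rotation enabled")
```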
Your answer:
Service accounts use short-lived identities, checked each time.
How to determine if this good enough
Your organisation has moved beyond static credentials, using ephemeral tokens or certificates derived from environment attestation (e.g., the instance or container proves it’s authorised). This can be considered “good enough” if:
-
Near Zero Standing Privilege
- Non-human services only acquire valid credentials at runtime, with minimal risk of stolen or leaked credentials.
-
Cloud-Native Security
- You heavily rely on AWS instance profiles, Azure Managed Identities, GCP Service Account tokens, or OCI dynamic groups + instance principals to authenticate workloads.
-
Robust Automation
- The pipeline or infrastructure automatically provisions ephemeral credentials, referencing NCSC and NIST recommended ephemeral identity flows.
You might refine or strengthen with additional zero-trust checks, rotating ephemeral credentials frequently, or adopting code-managed identities for cross-department federations. NCSC zero trust architecture guidance might suggest further synergy with policy-based access.
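The sketch below shows the ephemeral-credential pattern in its simplest form, assuming an AWS workload with an instance profile or task role attached: the SDK obtains short-lived credentials from the metadata service automatically, so no key is ever written down. Azure Managed Identities and GCP service account tokens behave similarly. The bucket name is a placeholder.

```python
# Minimal sketch of the ephemeral-credential pattern: the workload never stores a
# secret. With an instance profile (or ECS/EKS task role) attached, boto3 resolves
# temporary credentials from the metadata service and refreshes them automatically.
import boto3

s3 = boto3.client("s3")  # no access keys anywhere in code or config
for obj in s3.list_objects_v2(Bucket="example-reporting-bucket").get("Contents", []):
    print(obj["Key"])
```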
Your answer:
Service accounts are managed as code, with trust set up across the whole organisation.
How to determine if this good enough
At this final level, your organisation defines service identities in code (e.g., Terraform, AWS CloudFormation, Azure Bicep, GCP Deployment Manager), and enforces trust relationships through a central identity federation. This is typically “good enough” if:
-
Full Infrastructure as Code
- All resource definitions, including service accounts or roles, are under version control, automatically deployed.
- Minimises manual steps or inconsistencies.
-
Seamless Federation
- Multi-department or multi-cloud environments rely on a single identity trust model—no specialised per-service or per-team trust links needed.
-
Robust Continuous Delivery
- Automated pipelines update identities, rotating credentials or ephemeral tokens as part of routine releases.
-
Holistic Governance & Observability
- Management sees a single source of truth for identity definitions and resource provisioning, aligning with NCSC supply chain and zero trust recommendations and NIST SP 800-53 policies.
Though advanced, you may refine ephemeral solutions further, adopt advanced zero-trust posture, or integrate multi-department synergy. Continuous improvements remain essential for evolving threat landscapes.
How do you manage risks? [change your answer]
Your answer:
Informally, by individuals.
How to determine if this good enough
Your organisation’s risk management approach is largely ad hoc—no formal tools or consistent methodology. It might be “good enough” if:
-
Limited Scale or Maturity
- You run small, low-criticality projects where major incidents are rare, so an informal approach hasn’t caused big issues yet.
-
Tight Budget or Short Timescale
- Adopting more structured processes may currently seem out of reach.
-
No External Compliance Pressures
- You aren’t subject to rigorous audits requiring standardised risk registers or processes.
Nevertheless, purely informal risk management can lead to overlooked threats—particularly in cloud deployments, which often demand compliance with NCSC security guidance and NIST risk management frameworks.
Your answer:
Tracked in spreadsheets by each team.
How to determine if this good enough
Your organisation does track risks in spreadsheets, but each project manages them independently, with no overarching or centralised view. This might be “good enough” if:
-
Limited Inter-Project Dependencies
- Each project’s risks are fairly separate, so missing cross-program synergy or consolidated reporting doesn’t cause issues.
-
Basic Consistency
- Each spreadsheet might follow a similar format (like risk ID, likelihood, impact, mitigations), but there’s no single consolidated tool.
-
Low Complexity
- The organisation’s scale is small enough to handle manual processes, and no major audits require advanced solutions.
Spreadsheets can lead to inconsistent categories, scattered ownership, and difficulty identifying enterprise-wide risks—especially for cloud security or data privacy. NCSC guidance and NIST risk frameworks often advocate a centralised or standardised method for managing overlapping concerns.
Your answer:
Teams keep risk registers, which they review and update.
How to determine if this good enough
Your organisation uses structured risk registers—most likely Excel-based or a simple internal tool—and schedules regular reviews (e.g., monthly or quarterly). This is likely “good enough” if:
-
Consistent Methodology
- Teams follow a standardised approach: e.g., risk descriptions, scoring, mitigations, owners, due dates.
-
Regular Governance
- Directors, programme managers, or a governance board reviews and signs off on updated risks.
-
Integration with Cloud Projects
- Cloud-based services or migrations are documented in the risk register, capturing security, cost, or vendor concerns.
While fairly robust, you may further unify these registers across multiple programmes, introduce real-time automation or advanced analytics, and incorporate risk-based prioritisation. NCSC’s operational resilience guidance and the NIST SP 800-37 risk management framework advise continual refinement.
Your answer:
We have a central risk system, which we review and update.
How to determine if this good enough
Your organisation has a singular system (e.g., a GRC platform) for capturing, prioritising, and reviewing risks from multiple streams, including cloud transformation efforts. It’s likely “good enough” if:
-
Enterprise-Wide Visibility
- Senior leadership and departmental leads see aggregated risk dashboards, no longer limited to siloed project registers.
-
Consistent Method & Language
- Risk scoring, categories, and statuses are uniform, reducing confusion or mismatches.
-
Active Governance
- A central board or committee regularly reviews top risks, ensures accountability, and drives mitigations.
To strengthen this further, you may embed advanced threat intelligence or real-time monitoring data, adopt risk-based budgeting, or unify cross-department risk sharing. NCSC’s supply chain security approach and NIST ERM guidelines both mention cross-organisational alignment as vital for robust risk oversight.
Your answer:
We use an advanced risk tool for all teams, which helps us spot and escalate risks.
How to determine if this good enough
At this final level, your organisation uses a sophisticated enterprise risk platform that automatically escalates or notifies stakeholders when certain thresholds are met. This approach is typically “good enough” if:
-
Near Real-Time Insights
- The tool collects data from multiple sources (e.g., CI/CD pipelines, security scans, cost anomalies) and auto-updates risk profiles.
-
Proactive Alerts
- If a new vulnerability emerges or usage surpasses a cost threshold, the system escalates to management or security leads.
-
High Maturity Culture
- Teams understand and act on risk metrics, fostering a supportive environment for quick mitigation.
Although quite mature, you might refine further by adopting advanced AI-based analytics, cross-organisation risk sharing (e.g., multi-department or local councils), or continuously updating zero-trust or HPC/AI risk frameworks. NCSC’s advanced risk guidance and NIST’s enterprise risk management frameworks suggest iterative refinement.
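To make the escalation idea concrete, here is a small illustrative Python sketch of threshold-based escalation. The scoring model, thresholds, and notify() helper are all hypothetical; a real enterprise risk platform would supply these rules and notifications itself.

```python
# Illustrative sketch only: automatic escalation when a risk score crosses a
# threshold. The scoring model, thresholds, and notify() helper are hypothetical.
from dataclasses import dataclass

@dataclass
class Risk:
    title: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (minor) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def notify(audience: str, risk: Risk) -> None:
    print(f"Escalating '{risk.title}' (score {risk.score}) to {audience}")

def escalate(risk: Risk) -> None:
    # Thresholds would normally come from the organisation's risk appetite statement.
    if risk.score >= 20:
        notify("senior leadership / risk board", risk)
    elif risk.score >= 12:
        notify("programme security lead", risk)

escalate(Risk("Unpatched CVE in public-facing API", likelihood=4, impact=5))
```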
How do you manage staff identities? [change your answer]
Your answer:
Each service manages its own identities.
How to determine if this good enough
Your organisation might allow each application or service to store and manage user accounts in its own silo. This can be considered “good enough” if:
-
Very Small Scale
- Each service supports only a handful of internal users; overhead of separate sign-ons or user directories is minimal.
-
Low Risk or Early Pilot
- There is no critical data or compliance requirement forcing you to unify identities; you’re still evaluating core cloud or digital services.
-
No Immediate Need for Central Governance
- With minimal overlap among applications, the cost or effort of centralising identities doesn’t seem justified yet.
While this approach can initially appear simple, it typically leads to scattered identity practices, poor visibility, and heightened risk (e.g., orphaned accounts). NCSC’s Identity and Access Management guidance and NIST SP 800-53 AC controls suggest unifying identity for consistent security and reduced overhead.
Your answer:
We have a central identity system, but not all services use it.
How to determine if this good enough
Your organisation has introduced a centralised identity solution (e.g., Active Directory, Azure AD, or an open-source LDAP), but only some cloud services plug into it. This might be “good enough” if:
-
Partial Coverage
- Key or high-risk systems already rely on centralised accounts, while less critical or legacy apps remain separate.
-
Reduced Complexity
- The approach cuts down on scattered logins for a majority of staff, although not everyone is unified.
-
Tolerable Overlap
- You can manage a few leftover local identity systems, and the overhead remains tolerable for now.
To improve further, you can unify or retire the leftover one-off user stores and adopt standards like SAML, OIDC, or SCIM. NCSC identity management best practices and NIST SP 800-63 digital identity guidance typically encourage full integration for better security posture.
Your answer:
Most services use our central identity system, but a few don’t.
How to determine if this good enough
Your organisation leverages a robust central identity solution for the majority of apps, but certain older or niche services remain separate. It might be “good enough” if:
-
Dominant Coverage
- The central ID system handles 80-90% of user accounts, giving broad consistency and security.
-
Exceptions Are Low-Risk or Temporary
- The leftover independent systems are less critical or slated for retirement/replacement.
-
Clear Process for Exceptions
- Any new service wanting to remain outside central ID must justify the need, preventing random fragmentation.
To move forward, you can retire or integrate these final exceptions and push for short-lived, ephemeral credentials or multi-cloud identity federation. NIST SP 800-53 AC controls and NCSC’s identity approach both stress bridging all apps for consistent security posture.
Your answer:
Nearly all services use our central system, and we keep them in sync.
How to determine if this good enough
You have a highly integrated identity solution covering nearly all apps, with consistent provisioning, SSO, and robust security controls like MFA or conditional access. This is likely “good enough” if:
-
Minimal Manual Overhead
- Onboarding, offboarding, or role changes propagate automatically to most systems without admin intervention.
-
High Security & Governance
- You can quickly see who has access to what, referencing NCSC’s recommended best practices for identity governance.
- MFA or advanced authentication is standard.
-
Frequent Audits & Reviews
- Identity logs are consolidated, enabling quick detection of anomalies or orphan accounts.
While robust, you could refine ephemeral or short-lived credentials for non-human accounts, integrate cross-department identity, or adopt advanced risk-based authentication. NIST SP 800-63 or 800-53 AC controls highlight the potential for continuous identity posture improvements.
Your answer:
Every service uses one identity system, with one identity per person.
How to determine if this good enough
At this top maturity level, your organisation enforces one authoritative identity system for every service. All staff have exactly one account, disallowing duplicates or shared credentials. You might consider it “good enough” if:
-
Complete Uniformity
- All cloud and on-prem solutions integrate with the same directory/IDP.
- No leftover local accounts exist.
-
Strong Accountability
- A single “human <-> identity” mapping yields clear traceability for actions across environments.
- Aligns with NCSC best practices on user accountability.
-
Robust Automation & Onboarding
- Upon hire or role change, the single identity is updated automatically, provisioning only the needed roles.
- Offboarding is likewise immediate and consistent.
Even so, you can expand advanced or zero-trust patterns (e.g., ephemeral tokens, risk-based authentication) or multi-department identity federation for cross-government collaboration. NIST SP 800-207 zero trust architecture or NCSC’s advanced identity frameworks might offer further insights.
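As an illustration of automated offboarding against a single authoritative identity, the sketch below issues a SCIM 2.0 PATCH request to deactivate a user so downstream services inherit the change. The identity provider URL, token handling, and user ID are assumptions; the payload shape follows the SCIM standard (RFC 7644).

```python
# Hedged sketch of automated offboarding against a SCIM 2.0 user endpoint.
# The base URL and token are hypothetical; the PATCH payload follows RFC 7644.
import requests

SCIM_BASE = "https://idp.example.gov.uk/scim/v2"   # hypothetical identity provider
TOKEN = "..."                                       # would come from a secrets store

def deactivate_user(user_id: str) -> None:
    """Mark the single authoritative account inactive; connected apps inherit this."""
    payload = {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [{"op": "replace", "path": "active", "value": False}],
    }
    resp = requests.patch(
        f"{SCIM_BASE}/Users/{user_id}",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()

deactivate_user("2819c223-7f76-453a-919d-413861904646")  # example SCIM user id
```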
How do you reduce the risk from staff with high-level access? [change your answer]
Your answer:
We vet all staff with high-level access.
How to determine if this good enough
Your organisation might ensure privileged users have been vetted by internal or external means (e.g., security clearances or supplier checks). This may be considered “good enough” if:
-
Rigorous Personnel Vetting
- Individuals with admin or root-level privileges have the relevant UK security clearance (e.g., BPSS, SC, DV) or supplier equivalent.
-
No Major Incidents
- You have not experienced breaches or insider threats, so you feel comfortable with existing checks.
-
Minimal Cloud Scale
- The environment is small enough that close oversight of a handful of privileged users seems straightforward.
Still, user vetting alone does not fully address the risk of privileged misuse (either malicious or accidental). NCSC’s insider threat guidance and NIST SP 800-53 PS (Personnel Security) controls typically recommend continuous monitoring and robust logging for privileged accounts.
Your answer:
Systems keep logs, but logs are not checked or centralised.
How to determine if this good enough
In your organisation, each system generates logs to satisfy a broad requirement (“we must have logs”), yet there is no centralised approach or deep analysis. It might be “good enough” if:
-
Meeting Basic Compliance
- You have documentation stating logs must exist, fulfilling a minimal compliance or policy demand.
-
No Frequent Incidents
- So far, you’ve not needed advanced correlation or instant threat detection from logs.
-
Limited Complexity
- Logging requirements are not high or the environment is small, so manual or local checks suffice.
To enhance threat detection and privileged user oversight, you could unify logs centrally and add real-time monitoring. NCSC’s logging guidance and NIST SP 800-92 on log management emphasise the importance of consistent, centralised logging for security and accountability.
Your answer:
We check logs before going live, but not regularly.
How to determine if this good enough
Your organisation ensures that each new system or release passes an IT Health Check (ITHC) or security check verifying that logs exist, but ongoing monitoring or correlation might not happen. This could be “good enough” if:
-
Meeting Basic Security Gate
- You confirm audit logs exist before go-live, mitigating total absence of logging.
-
High Manual Effort
- Teams may do point-in-time checks or random sampling of logs without continuous oversight.
-
Some Minimal Risk Tolerance
- No major security incident has forced you to adopt real-time log analysis, so you remain comfortable with the status quo.
Yet, post-launch, missing continuous log analysis can hamper early threat detection or wrongdoing by privileged users. NCSC protective monitoring guidance and NIST SP 800-53 AU controls highlight the importance of daily or real-time monitoring, not just checks at go-live.
Your answer:
Logs are stored in one place, can’t be changed, and are checked automatically.
How to determine if this good enough
Your organisation ensures all logs flow into a tamper-proof or WORM (write-once, read-many) storage with automated processes for retention and monitoring. This may be “good enough” if:
-
Complete Coverage
- Every system relevant to security or privileged actions ships logs to a central store with read-only or append-only policies.
-
Daily or Real-Time Analysis
- Automated scanners or scripts detect anomalies (e.g., unauthorised attempts, suspicious off-hours usage).
-
Confidence in Legal/Evidential Status
- The logs are immutable, meeting NCSC guidance or relevant NIST guidelines for evidential integrity if legal investigations arise.
Still, you might expand cross-department correlation (e.g., combining logs from multiple agencies), adopt advanced threat detection (AI/ML), or align with zero-trust. Continuous improvement helps keep pace with evolving insider threats.
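For illustration, the sketch below shows one way to create an append-only (WORM) log store, assuming AWS S3 Object Lock; the bucket name, region, and retention period are hypothetical, and other providers offer equivalent immutability controls.

```python
# Minimal sketch, assuming AWS: an append-only (WORM) log bucket using S3 Object Lock.
# Bucket name, region, and retention period are hypothetical.
import boto3

s3 = boto3.client("s3", region_name="eu-west-2")

bucket = "audit-logs-example"                     # hypothetical bucket name

s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "eu-west-2"},
    ObjectLockEnabledForBucket=True,              # must be enabled at creation time
)

# Default retention: objects cannot be modified or deleted for one year,
# even by administrators, supporting evidential integrity.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)
```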
Your answer:
We have regular audits with legal checks to make sure logs are complete and can be used as evidence.
How to determine if this good enough
At this highest maturity level, your organisation not only has robust logging but also runs frequent legal and forensic validations. This approach is typically “good enough” if:
-
Thorough Testing & Legal Assurance
- Auditors simulate real investigations, confirming the logs meet evidential standards for UK legal frameworks.
- Aligns with NCSC’s guidance on evidential logging or digital forensics.
-
Confidence in Potential Criminal Cases
- If insider misuse occurs, logs can stand up in court, verifying chain-of-custody and authenticity.
-
Mature Culture & Processes
- Teams are trained to handle forensic data, ensuring minimal disruption or tampering when collecting logs for review.
You may further refine by adopting next-generation forensics tools, cross-department collaborations, or advanced capabilities for HPC/AI-based anomaly detection. NIST SP 800-86 for digital forensics processes or NCSC advanced forensic readiness guidance highlight continuous improvement potential.
How do you keep your software supply chain secure? [change your answer]
Your answer:
We don’t track software dependencies. Updates are done as needed.
How to determine if this good enough
Your organisation or team may install open-source or third-party packages in an unstructured, manual manner, without consistent dependency manifests or version locks. This might be “good enough” if:
-
Limited or Non-Critical Software
- You only run small, low-risk applications where you’re comfortable with less stringent controls.
-
Short-Lived, Experimental Projects
- Minimal or proof-of-concept code that’s not used in production, so supply chain compromise would have little impact.
-
No Strong Compliance Requirements
- There’s no immediate demand to generate or maintain an SBOM, or to comply with stricter public sector security mandates.
However, ignoring structured dependency management often leads to vulnerabilities, unknown or out-of-date libraries, and risk. NCSC’s supply chain security guidance and NIST SP 800-161 on supply chain risk management recommend tracking dependencies to mitigate malicious or outdated code infiltration.
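As a first step towards tracking dependencies, the illustrative Python sketch below enumerates installed packages and versions, giving a basic, reviewable inventory. It is not a full SBOM and assumes a Python estate; dedicated SBOM tooling would normally generate richer output.

```python
# Illustrative starting point: enumerate installed Python dependencies so there is
# at least a basic, versioned inventory to review. Not a full SBOM, but a first step.
from importlib.metadata import distributions

def dependency_inventory() -> list[tuple[str, str]]:
    """Return (name, version) for every installed distribution."""
    return sorted(
        (dist.metadata["Name"], dist.version)
        for dist in distributions()
        if dist.metadata["Name"]
    )

for name, version in dependency_inventory():
    print(f"{name}=={version}")
```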
Your answer:
We set dependencies at the start and update for big changes. Some teams use tools to check security.
How to determine if this good enough
Your organisation employs some form of version locking or pinned dependencies, typically updating them at major releases or if a high-profile vulnerability arises. This might be “good enough” if:
-
Moderate Project Complexity
- Projects can go months without routine dependency updates while posing little risk.
-
Partial Security Consciousness
- Team leads scan dependencies manually or with open-source scanners but only in reaction to CVE announcements.
-
Limited DevSecOps
- Minimal continuous integration or automated scanning, relying on manual processes at release cycles.
Though better than unmanaged approaches, you might further automate scanning, adopt continuous patching, or integrate advanced DevSecOps. NCSC’s supply chain best practices and NIST SP 800-161 underscore proactive and more frequent checks.
Your answer:
All code is checked and updated regularly, with fixes applied as needed.
How to determine if this good enough
Your organisation has begun actively scanning code repositories, triggering automated dependency updates or PRs when new vulnerabilities appear. This might be considered “good enough” if:
-
Frequent Dependency Updates
- Teams integrate fresh library versions on a weekly or sprint basis, not just big releases.
-
Automated Patches or Merge Requests
- Tools generate PRs automatically for security fixes, and developers review or test them quickly.
-
Wider Organisational Awareness
- Alerts or dashboards highlight vulnerabilities in each project, ensuring consistent coverage across the enterprise.
You could further improve by employing advanced triage (prioritising fixes by severity or usage context), adopting container image scanning, or establishing a centralised SOC for supply chain threats. NCSC’s protective monitoring or NIST SP 800-161 supply chain risk management approach outlines more advanced strategies.
Your answer:
A central team watches all code, fixes big problems first, and checks how each dependency is used.
How to determine if this good enough
Your organisation’s SOC or security team has a single pane of glass for code repositories, assessing discovered vulnerabilities in context (e.g., usage path, data sensitivity). You might see it “good enough” if:
-
Robust Overviews
- The SOC sees each project’s open vulnerabilities, ensuring none slip through the cracks.
-
Contextual Prioritisation
- Vulnerabilities are triaged by severity and usage context (are dependencies actually loaded at runtime?).
-
Coordinated Response
- The SOC, dev leads, and ops teams collaborate on remediation tasks; no major backlog or confusion over ownership.
You can further refine by adopting advanced threat intel feeds, deeper container or HPC scanning, or linking to enterprise risk management. NCSC’s advice on a protective monitoring approach and NIST SP 800-171 for protecting CUI in non-federal systems might inform future expansions.
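To illustrate contextual prioritisation, the sketch below ranks hypothetical findings by severity, whether the dependency is actually loaded at runtime, and data sensitivity. The weighting factors, field names, and sample findings are assumptions, not a prescribed model.

```python
# Hedged sketch of contextual triage: rank findings by severity, runtime usage,
# and data sensitivity. All field names and the sample findings are hypothetical.
from dataclasses import dataclass

@dataclass
class Finding:
    package: str
    cvss: float              # base severity score
    loaded_at_runtime: bool  # is the dependency actually imported in production?
    handles_sensitive_data: bool

def priority(f: Finding) -> float:
    score = f.cvss
    if f.loaded_at_runtime:
        score *= 1.5
    if f.handles_sensitive_data:
        score *= 1.5
    return score

findings = [
    Finding("example-lib", cvss=9.8, loaded_at_runtime=False, handles_sensitive_data=False),
    Finding("json-parser", cvss=6.5, loaded_at_runtime=True, handles_sensitive_data=True),
]

for f in sorted(findings, key=priority, reverse=True):
    print(f"{f.package}: priority {priority(f):.1f}")
```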
Your answer:
We use advanced tools to watch and fix supply chain risks, focusing on real threats.
How to determine if this good enough
At this highest maturity level, your organisation unifies proactive scanning, advanced threat intel, context-based triage, and real-time analytics to handle supply chain security. You might consider it “good enough” if:
-
Minimal Noise, High Impact
- Automated processes accurately prioritise genuine threats, with few wasted cycles on false positives.
-
Strategic Alignment
- The SOC or security function continuously updates leadership or cross-department risk boards about relevant vulnerabilities or supplier issues, referencing NCSC’s supply chain security frameworks.
-
Cross-Organisational Culture
- DevOps, security, and product leads collaborate seamlessly, ensuring supply chain checks are integral to release processes.
Still, you might adopt zero trust or HPC/AI scanning, cross-government code sharing, or advanced developer training as next steps. NIST SP 800-161 on supply chain risk management and NCSC advanced DevSecOps patterns suggest iterative expansions of scanning and collaboration.
How do you find and fix security problems, vulnerabilities, and misconfigurations? [change your answer]
Your answer:
There is no clear way for people to report problems.
How to determine if this good enough
Your organisation may not offer any channel or official statement on how external security researchers or even the general public can report potential security flaws. It might be seen as “good enough” if:
-
Very Limited External Exposure
- The services you run are not publicly accessible or have little interaction with external users.
-
Low Risk Tolerance
- You have minimal data or no major known threat vectors, so you assume a public disclosure route will rarely be needed.
-
Short-Term or Pilot
- You’re in an early stage and have not formalised public-facing vulnerability reporting.
However, failing to provide a clear disclosure route can lead to undisclosed or zero-day vulnerabilities persisting in your systems. NCSC’s vulnerability disclosure guidelines and NIST SP 800-53 SI (System and Information Integrity) controls emphasise the importance of structured vulnerability reporting to quickly remediate discovered issues.
Your answer:
We publish how to report problems and respond quickly. We may use public reporting platforms.
How to determine if this good enough
Your organisation provides a public vulnerability disclosure policy or is listed on a responsible disclosure platform (e.g., HackerOne, Bugcrowd). It might be “good enough” if:
-
Good Public Communication
- External researchers or citizens know precisely how to submit a vulnerability, and you respond within a stated timeframe.
-
Moderate Volunteer Testing
- You handle moderate volumes of reported issues, typically from well-intentioned testers.
-
Decent Internal Triage
- You have a structured way to evaluate reported issues, possibly referencing NCSC’s vulnerability disclosure best practices.
However, you could enhance your approach with automated scanning and proactive threat detection. NIST SP 800-53 or 800-161 supply chain risk guidelines often advise balancing external reports with continuous internal or automated checks.
Your answer:
We use automated tools to scan for problems and do regular checks.
How to determine if this good enough
Your organisation invests in standard security scanning (e.g., SAST, DAST, container scans) as part of CI/CD or separate regular testing, plus periodic manual assessments. This is likely “good enough” if:
-
Continuous Improvement
- Regular scans detect new vulnerabilities promptly, feeding them into backlog or release cycles.
-
Routine Audits
- You run scheduled pen tests or monthly/quarterly security reviews, referencing NCSC’s 10 Steps to Cyber Security or relevant IT Health Check (ITHC).
-
Clear Remediation Path
- Once discovered, vulnerabilities are assigned owners and typically resolved in a reasonable timeframe.
You might refine the process by adding advanced threat hunting, zero trust, or cross-department threat intelligence sharing. NIST SP 800-53 CA controls and NCSC’s protective monitoring approach recommend proactive threat monitoring in addition to scanning.
Your answer:
We hunt for threats and respond quickly, with some automation.
How to determine if this good enough
Your organisation has a dedicated security function or SOC actively hunting for suspicious activity, not just waiting for automated scanners. It might be “good enough” if:
-
Threat Intelligence Feeds
- The SOC or security leads incorporate intel on new attack vectors or high-profile exploits, scanning your environment proactively.
-
Swift Incident Response
- When a threat is found, dedicated teams quickly isolate and remediate within defined SLAs.
-
Partial Automation
- Some standard or low-complexity threats are auto-contained (e.g., blocking known malicious IPs, quarantining compromised containers).
You could extend capabilities with advanced forensics readiness, red/purple team exercises, or more granular zero-trust microsegmentation. NCSC’s incident management guidance and NIST SP 800-61 Computer Security Incident Handling Guide encourage continuous threat hunting expansions.
Your answer:
We use red and purple teams to test security. A central team checks and fixes issues, with many actions automated.
How to determine if this good enough
At this top maturity level, your organisation invests in continuous offensive testing and advanced SOC operations. It’s likely “good enough” if:
-
Extensive Validation
- Regular (annual or more frequent) red team exercises and major release-based ITHCs confirm robust security posture.
-
Sophisticated SOC
- The SOC actively hunts threats, triages vulnerabilities, and automates mitigations for known patterns.
-
Organisational Priority
- Leadership supports ongoing security testing budgets, responding promptly to critical findings.
Still, you might refine multi-cloud threat detection, adopt advanced AI-based threat analysis, or integrate cross-public-sector threat sharing. NCSC’s advanced operational resilience guidelines and NIST SP 800-137 for continuous monitoring encourage iterative expansions.
How do you secure your network and control access? [change your answer]
Your answer:
We rely on network controls like firewalls and IP allow-lists.
How to determine if this good enough
Your organisation might rely heavily on firewall rules, IP allow-lists, or a perimeter-based model (e.g., on-premises network controls or perimeter appliances) to secure data and apps. This might be “good enough” if:
-
Limited External Exposure
- Only a few services are exposed to the internet, while most remain behind a well-managed firewall.
-
Legacy Infrastructure
- The environment or relevant compliance demands a dedicated network perimeter approach, with limited capacity to adopt more modern identity-based methods.
-
Strict On-Prem or Single-Cloud Approach
- If everything is co-located behind on-prem or one cloud’s network layer, perimeter rules might reduce external threats.
Yet perimeter security alone can fail if an attacker bypasses your firewall or uses compromised credentials internally. NCSC’s zero-trust principles and NIST SP 800-207 Zero Trust Architecture both encourage focusing on identity-based checks rather than relying solely on network boundaries.
Your answer:
We use network controls and also check user identity.
How to determine if this good enough
Your organisation still maintains a perimeter firewall, but user identity checks (e.g., login with unique credentials) are enforced when accessing apps behind it. It might be “good enough” if:
-
Mixed Legacy and Modern Systems
- Some older apps demand perimeter-level protection, but you do require user logins or limited authentication steps for critical apps.
-
Basic Zero-Trust Awareness
- Recognising that IP-based controls alone are insufficient, you at least require unique logins for each service.
-
Minimal Threat or Complexity
- You’ve had no incidents from insider threats or compromised internal network segments.
Though an improvement over pure perimeter reliance, deeper identity-based checks can help. NCSC’s zero-trust approach and NIST SP 800-207 guidelines promote validating each request’s user or device identity, not just pre-checking them at the perimeter.
Your answer:
We check both user and service identity, as well as network controls.
How to determine if this good enough
You verify not just the user’s identity but also ensure the service or system making the request is authenticated. This indicates a move towards more modern, partial zero-trust concepts. It might be “good enough” if:
-
Service Identities
- Non-human accounts also need secure tokens or certificates, so you know which microservice or job is calling your APIs.
-
User + Service Auth
- Each request includes user identity (or claims) plus the service’s verified identity.
-
Reduced Attack Surface
- Even if someone penetrates your perimeter, they need valid service credentials or ephemeral tokens to pivot or call internal APIs.
To progress further, you might adopt advanced mutual TLS, ephemeral identity tokens, or partial zero-trust microsegmentation. NCSC’s zero-trust approach and NIST SP 800-207 Zero Trust Architecture both advise deeper trust evaluations for each request.
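As an example of authenticating the calling service as well as the user, the sketch below makes a mutual-TLS request in Python, presenting a client certificate and trusting only an internal certificate authority. The URL and certificate paths are hypothetical.

```python
# Minimal sketch, assuming mutual TLS between services: the caller presents a
# client certificate and verifies the server against an internal CA.
# The URL and file paths are hypothetical.
import requests

response = requests.get(
    "https://reports.internal.example/api/v1/summaries",  # hypothetical internal API
    cert=("/etc/pki/reporting-service.crt",               # this service's identity
          "/etc/pki/reporting-service.key"),
    verify="/etc/pki/internal-ca.pem",                     # trust only the internal CA
    timeout=5,
)
response.raise_for_status()
```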
Your answer:
In some areas, we use strong identity checks instead of network controls, reducing VPN use.
How to determine if this good enough
Your organisation has started phasing out VPN or perimeter-based approaches, preferring direct connections where each request is authenticated and authorised at the identity level. It’s likely “good enough” if:
-
Mixed Environments
- Some apps still use older network-based rules, but new services rely on modern identity or SSO for access.
-
Reduction in Attack Surface
- No blanket VPN that grants wide network access—users or microservices authenticate to each resource directly.
-
Increasing Zero Trust
- You see initial success in adopting zero-trust patterns for some apps, but not fully universal yet.
To advance, you might unify all apps under identity-based controls, incorporate advanced device posture checks, or adopt full microsegmentation. NCSC’s zero-trust guidance and NIST SP 800-207 Zero Trust Architecture frameworks can guide further expansions.
Your answer:
We don’t use network perimeters. Access is based on device and user identity, with strong checks.
How to determine if this good enough
At this final maturity level, your organisation’s security is fully identity- and device-centric—no blanket perimeter or VPN. You might consider it “good enough” if:
-
Zero-Trust Realisation
- Every request is authenticated and authorised per device and user identity, referencing NCSC zero trust or NIST SP 800-207 approaches.
-
Full Cloud or Hybrid Environment
- You’ve adapted all systems to identity-based access, no backdoor VPN routes or firewall exceptions.
-
Streamlined Access
- Staff easily connect from anywhere, but each request must prove who they are and what device they’re on before gaining resources.
Even so, consider advanced HPC/AI zero-trust expansions, cross-department identity federation, or deeper attribute-based access control. Continuous iteration remains beneficial to match evolving threats, as recommended by NCSC and NIST guidance.
How do you use two-factor or multi-factor authentication (2FA/MFA)? [change your answer]
Your answer:
It’s suggested, but not required.
How to determine if this good enough
Your organisation may advise staff to enable 2FA (two-factor) or MFA (multi-factor) on their accounts, but it’s left to personal choice or departmental preference. This might be “good enough” if:
-
Minimal Risk Appetite
- You have low-value, non-sensitive services, so the impact of compromised accounts is relatively small.
-
Testing or Early Rollout
- You’re in a pilot phase before formalising a universal requirement.
-
No High-Stakes Obligations
- You don’t face stringent regulatory demands or public sector security mandates.
However, purely optional MFA typically leads to inconsistent adoption. NCSC’s multi-factor authentication guidance and NIST SP 800-63B Authenticator Assurance Level recommendations advise requiring MFA for all accounts, or at least privileged ones, to significantly reduce credential-based breaches.
Your answer:
It’s required, but not always enforced.
How to determine if this good enough
Your organisation has a policy stating all staff “must” enable MFA. However, actual compliance might vary—some services allow bypass, or certain users remain on single-factor. This can be “good enough” if:
-
Broad Organisational Recognition
- Everyone knows MFA is required, reducing the risk from total single-factor usage.
-
Partial Gains
- Many staff and services do indeed use MFA, reducing the chance of mass credential compromise.
-
Resource Constraints
- Full enforcement with zero exceptions hasn’t yet been achieved due to time constraints, legacy systems, or user objections.
Though better than optional MFA, exceptions or non-enforcement create holes. NCSC’s MFA best practices and NIST SP 800-63B (AAL2+) advise systematically enforcing multi-factor to effectively protect user credentials.
Your answer:
It’s enforced for nearly all users, with few exceptions.
How to determine if this good enough
Your organisation has successfully mandated MFA for nearly every scenario, though a small number of systems or roles may not align due to technical constraints or a specific risk-based exemption. This is likely “good enough” if:
-
High MFA Coverage
- Over 90% of your users and services require multi-factor login, substantially reducing the risk of account compromise.
-
Well-Documented Exceptions
- Each exception is risk-assessed and typically short-term. The organisation knows precisely which systems lack enforced MFA.
-
Strong Culture & Processes
- Staff generally accept MFA as standard, and you rarely experience pushback or confusion.
At this stage, you can introduce stronger factors (e.g., hardware tokens, FIDO2) for privileged accounts, or adopt risk-based step-up authentication. NCSC multi-factor recommendations and NIST SP 800-63B Authenticator Assurance Levels advise continuing improvements.
Your answer:
Only strong 2FA/MFA methods are allowed (no SMS or phone-based codes).
How to determine if this good enough
Your organisation refuses to allow SMS-based or similarly weak MFA. Instead, you use TOTP apps, hardware tokens, or other resilient factors. This might be “good enough” if:
-
High-Security Requirements
- Handling sensitive citizen data or critical infrastructure, so you need robust protection from phishing and SIM-swap attacks.
-
Firm Policy
- You publish a stance that phone-based authentication is disallowed, ensuring staff adopt recommended alternatives.
-
Consistent Implementation
- Everyone’s using TOTP, FIDO2 tokens, or other strong factors. Rarely do exceptions exist.
However, you might still refine device posture checks, adopt hardware-based tokens for privileged roles, or integrate continuous authentication for maximum security. NCSC’s guidance on phishing-resistant MFA and NIST SP 800-63B AAL3 recommendations highlight advanced factors beyond TOTP.
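For illustration, the sketch below generates and verifies a TOTP code using the pyotp library (an assumption; any RFC 6238 implementation would do), showing the kind of app-based factor that replaces SMS codes.

```python
# Illustrative sketch using pyotp: app-based TOTP instead of SMS codes.
# The user name and issuer are hypothetical.
import pyotp

# Generated once per user at enrolment and stored server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The secret is shared with the user's authenticator app (e.g. via QR code);
# at login the submitted code is verified against the same secret.
print("Provisioning URI:", totp.provisioning_uri(name="alice@example.gov.uk",
                                                 issuer_name="Example Department"))
submitted_code = totp.now()          # stand-in for the code typed by the user
print("Valid:", totp.verify(submitted_code))
```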
Your answer:
Only hardware-based MFA is used, managed and given out by the organisation.
How to determine if this good enough
At this pinnacle, your organisation requires hardware-based tokens (e.g., FIDO2, YubiKeys, or similar) for all staff, forbidding weaker factors like SMS or even TOTP. This is typically “good enough” if:
-
Full Hardware Token Adoption
- Everyone uses hardware keys for login, including privileged or admin accounts.
-
Central Key Lifecycle Management
- The organisation issues, tracks, and revokes hardware tokens systematically, referencing NCSC hardware token management best practices.
-
High Assurance
- This approach meets or exceeds NIST SP 800-63B AAL3 standards and offers strong resilience against phishing or SIM-swap exploits.
You could still refine ephemeral or risk-adaptive auth, integrate zero-trust posture checks, and implement cross-department hardware token bridging. Continuous iteration ensures alignment with future security advances. NCSC’s advanced multi-factor recommendations or vendor-based hardware token solutions might help expand coverage.
How do you manage privileged access? [change your answer]
Your answer:
Each admin manages their own privileged accounts, with no set process.
How to determine if this good enough
Your organisation may let each system admin handle privileged credentials independently, storing them in personal files or spreadsheets. This might be acceptable if:
-
Small-Scale or Legacy Systems
- You have few privileged accounts and limited complexity, and potential downsides of ad-hoc management haven’t yet materialised.
-
Short-Term or Pilot
- You’re in a transitional stage, planning to adopt better solutions soon but not there yet.
-
No Pressing Compliance Requirements
- Strict audits or public sector mandates for privileged account management haven’t been triggered.
However, ad-hoc methods often risk unauthorised usage, inconsistent rotation, and difficulty tracking who accessed what. NCSC’s privileged account security guidance and NIST SP 800-53 AC-6 (least privilege) emphasise stricter control over privileged credentials.
Your answer:
We use central controls for passwords and keys, with basic logging.
How to determine if this good enough
Your organisation implements a vaulting solution (e.g., a password manager or secrets manager) that securely stores privileged credentials, with usage logs or basic policy checks. This might be “good enough” if:
-
Reduced Credential Sprawl
- No more random spreadsheets or personal note files; vault usage is mandatory for storing admin credentials.
-
Initial Logging & Policy
- Access to vault entries is logged, and policy controls (like who can retrieve which credential) exist.
-
Improved Accountability
- Audit logs show which admin took which credential, though real-time or advanced analytics may be limited.
To enhance further, you can adopt ephemeral credentials, just-in-time privilege grants, or integrate automatic rotation. NCSC’s privileged access management guidance and NIST SP 800-63B AAL2+ usage for admin accounts suggest deeper automation and advanced threat detection.
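As an example of ephemeral, just-in-time credentials, the sketch below assumes AWS STS: an administrator assumes a tightly scoped role for a short session rather than holding standing credentials. The role ARN and session name are hypothetical.

```python
# Hedged sketch (AWS STS): assume a tightly scoped role for a short session
# instead of holding standing credentials. The role ARN is hypothetical.
import boto3

sts = boto3.client("sts")

session = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/db-maintenance",  # hypothetical role
    RoleSessionName="alice-change-1234",                      # ties actions to a person/ticket
    DurationSeconds=900,                                      # credentials expire in 15 minutes
)

creds = session["Credentials"]
scoped = boto3.client(
    "rds",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
# 'scoped' can now perform only what the db-maintenance role permits,
# and only until the temporary session expires.
```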
Your answer:
We have structured admin processes, with one-time passwords for access.
How to determine if this good enough
In this scenario, your organisation has formal processes: new privileged accounts require an approval workflow, privileges are tracked, and one-time passwords or tokens might be used to access certain sensitive credentials or sessions. It may be “good enough” if:
-
Managed Lifecycle
- You have explicit procedures for provisioning, rotating, and revoking privileged accounts.
-
OTP for Sensitive Operations
- For high-risk tasks (e.g., root or “god-mode” usage), a user must supply a fresh OTP from the vault or via a token generator.
-
Reduced Risk
- Mandatory approvals and short-lived passcodes curb the chance of stale or misused privileged credentials.
Still, you might consider advanced measures like ephemeral role assumption, context-based or zero-trust policies, or real-time threat detection. NCSC’s privileged user management best practices and NIST SP 800-53 AC-6 advanced usage outline continuing improvements.
Your answer:
We use automated systems for privileged access, with strong controls and checks.
How to determine if this good enough
Your organisation has advanced systems that dynamically adjust privileged user access based on real-time signals (e.g., user context, device posture, time of day), with logging across multiple clouds. It’s likely “good enough” if:
-
Flexible, Policy-Driven Access
- Certain tasks require elevated privileges only when risk or context is validated (e.g., location-based or device checks).
-
Unified Multi-Cloud Oversight
- You can see all privileged accounts for AWS, Azure, GCP, OCI in a single pane, highlighting anomalies.
-
Prompt Mitigation & Revocation
- If an account shows unusual behaviour, the system can auto-limit privileges or alert security leads in near real-time.
You could refine it by adopting zero-trust microsegmentation for each privileged action, or real-time AI threat detection. NCSC’s zero trust approach and NIST SP 800-207 Zero Trust Architecture often encourage continuous verification for highest-value accounts.
Your answer:
We use advanced tools for privileged access, with full logging, approval steps, and regular reviews.
How to determine if this good enough
At this highest maturity level, your organisation dynamically grants privileged access based on real-time context (time window, location, device posture, or manager approval) and logs all actions. Senior leadership is involved in after-action reviews for critical escalations. This is typically “good enough” if:
-
Comprehensive Zero-Trust
- Privileged roles exist only if requested and verified in real-time, with ephemeral credentials.
-
Senior Leadership Accountability
- Mandatory wash-up (after-action) sessions ensure no suspicious or repeated escalations go unexamined, reinforcing a security-focused culture.
-
Automation Minimises Need
- Many tasks that previously required manual privileged access are automated or delegated to safer, limited-scope roles, aligning with NCSC zero trust / least privilege guidance and NIST SP 800-207 approaches.
Though advanced, you may refine HPC/AI roles under ephemeral policies, integrate multi-department identity bridging, or further embed AI-based anomaly detection. Continual iteration aligns with future public sector security demands.
How does your organisation respond to security breaches and incidents? [change your answer]
Your answer:
We do not have a set process for handling security breaches.
How to determine if this good enough
Your organisation may rely on ad-hoc or manual processes to classify and secure data (e.g., staff deciding on classification levels individually, using guidelines but no enforcement tooling). This can be acceptable if:
-
Small or Low-Risk Datasets
- You handle minimal or non-sensitive data, so the impact of a breach is low.
-
Limited Organisational Complexity
- A few staff or single department handle data security manually, and no major compliance demands exist yet.
-
Short-Term/Pilot State
- You’re in early experimentation with cloud, planning better controls soon.
However, manual classification often leads to inconsistent labelling, insufficient logging, and potential data mishandling. NCSC’s data security guidance and NIST SP 800-53 SC (System and Communications Protection) controls advise more structured data classification and automated policy enforcement.
Your answer:
We have a basic process for reporting and managing breaches, but it is not always followed.
How to determine if this good enough
Your organisation has a recognised policy framework (e.g., data classification policy, access controls) and uses central configuration to handle data security, typically at least partially automated. This might be “good enough” if:
-
Consistent Application
- Most teams adhere to defined policies, ensuring a baseline of uniform data protection.
-
Reduced Complexity
- Staff leverage a standard set of controls for data at rest (encryption) and data in transit (TLS), referencing NCSC’s guidance on data encryption and NIST SP 800-53 SC controls.
-
Moderate Maturity
- You can see a consistent approach to user or service access across departmental data repositories.
You could enhance these controls by adding real-time monitoring, automation for labeling, or advanced data flow analysis. NCSC’s zero trust approach and NIST SP 800-171 for protecting CUI can guide expansions to more granular or continuous data security.
Your answer:
We have a clear process for handling breaches. Staff are trained, and we record what happens.
How to determine if this good enough
Your organisation enforces data protection policies but only partially monitors for suspicious activity (e.g., some DLP or logging solutions in place). It might be “good enough” if:
-
Basic DLP or Anomaly Detection
- You log file transfer or download activity from key systems, though coverage might not be universal.
-
Minimal Incidents
- You rarely see large-scale data leaks, so partial monitoring hasn’t caused major issues.
-
Structured but Incomplete
- Policies exist for classification, encryption, and access, but continuous or real-time exfiltration detection is partial.
You can strengthen this by adopting more advanced DLP solutions, real-time anomaly detection, and integrated threat intelligence. NCSC’s protective monitoring approach and NIST SP 800-53 SI controls emphasise continuous detection of, and response to, suspicious data movements.
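To illustrate a simple form of anomaly detection on data movement, the sketch below flags users whose latest daily transfer volume is an outlier against their own recent baseline. The threshold, sample data, and approach are illustrative assumptions; a real deployment would rely on a DLP or SIEM product rather than a standalone script.

```python
# Illustrative sketch only: flag unusually large data transfers per user compared
# with their recent baseline. Thresholds and the sample data are hypothetical.
from statistics import mean, pstdev

def unusual_transfers(events: dict[str, list[int]], sigma: float = 3.0) -> list[str]:
    """events maps user -> daily MB transferred; flag users whose latest day is an outlier."""
    flagged = []
    for user, volumes in events.items():
        history, today = volumes[:-1], volumes[-1]
        if len(history) < 5:
            continue  # not enough baseline data to judge
        baseline, spread = mean(history), pstdev(history)
        if spread and today > baseline + sigma * spread:
            flagged.append(user)
    return flagged

sample = {"alice": [20, 25, 18, 22, 30, 24, 400], "bob": [50, 55, 48, 52, 47, 51, 53]}
print(unusual_transfers(sample))  # -> ['alice']
```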
Your answer:
We test our breach process regularly and update it when needed.
How to determine if this good enough
Your organisation employs layered controls (encryption, classification, role-based access, DLP) plus automated anomaly detection systems. This approach might be “good enough” if:
-
Cross-Platform Coverage
- Data in AWS, Azure, GCP, or on-premises is consistently monitored, with uniform detection rules.
-
Immediate Alerts & Automated Responses
- If suspicious exfiltration or corruption is detected, the system can contain the user or action in near real-time.
-
Mature Security Culture
- Staff know that unusual data activity triggers alerts, so they practice good data handling.
Further evolution might include advanced zero trust for each data request, HPC/AI-specific DLP, or integrated cross-department data threat intelligence. NCSC operational resilience guidance and NIST SP 800-137 continuous monitoring frameworks highlight ongoing improvements in automation and analytics.
Your answer:
We have a well-tested breach process. We review incidents, learn from them, and make improvements each time.
How to determine if this good enough
At this top maturity level, your organisation’s data breach prevention strategy is fully integrated, with real-time automated responses and proactive scanning. It’s typically “good enough” if:
-
Continuous Visibility & Reaction
- You always see data flows, with immediate anomaly detection, containment, and incident response, referencing NCSC advanced protective monitoring or NIST continuous monitoring guidelines.
-
Frequent Access & Security Reviews
- Privileged or sensitive data access is automatically logged, regularly audited for minimal or suspicious usage.
-
Seamless Multi-Cloud or Hybrid
- You track data across AWS, Azure, GCP, on-prem systems, or container/Kubernetes platforms with uniform policies.
Even so, you might refine advanced AI-based analytics, adopt cross-department supply chain correlation, or evolve HPC data security. NCSC’s zero trust posture or NIST SP 800-207 zero trust architecture can guide further improvements.
Technology
How do you choose technologies for new projects? [change your answer]
Your answer:
Each project picks its own technologies. This leads to lots of different tools that may not work well together.
How to determine if this good enough
Your organisation may let project teams pick any tech stack or tool they prefer, resulting in minimal standardisation. This can be considered “good enough” if:
-
Small or Isolated Projects
- Few cross-dependencies exist; each project runs mostly independently without needing to integrate or share solutions.
-
Low Risk & Early Stage
- You’re in an experimental or startup-like phase, testing different tools before formalising a standard.
-
No Centralised Governance Requirements
- There isn’t (yet) a policy from senior leadership or compliance bodies demanding consistent technology choices.
However, purely ad-hoc selections often lead to higher maintenance costs, learning curves, and integration challenges. NCSC’s cloud and digital guidance and NIST enterprise architecture best practices encourage balancing project freedom with broader organisational consistency and security.
Your answer:
Everyone must use the same technology stack.
How to determine if this good enough
Your organisation has a strict policy (e.g., “All apps must use Java + Oracle DB” or a locked stack). It can be considered “good enough” if:
-
Stable & Predictable
- The environment is stable, and enforced uniformity hasn’t hindered project innovation or rapidly evolving business needs.
-
Meets Regulatory Compliance
- Uniform tooling might simplify audits, referencing NCSC frameworks or NIST guidelines for consistent security controls.
-
Sufficient for Current Workloads
- No major impetus from staff or leadership to adopt new frameworks or advanced cloud services.
Nevertheless, overly rigid mandates can stifle innovation, leading to shadow IT or suboptimal solutions. GOV.UK’s service manual on agile and iterative approaches often advises balancing standardisation with flexibility for user needs.
Your answer:
We have some guidance, but it’s out of date and not very useful.
How to determine if this good enough
Your organisation made an effort to produce a technology radar or pattern library, but it’s now stale or incomplete. Teams may ignore it, preferring to research on their own. It might be “good enough” if:
-
Past Good Intentions
- The existing radar or patterns once offered value, but no one has updated them in 1-2 years.
-
Low Current Impact
- Projects have found alternative references, so the outdated resources do minimal harm.
-
No High-Level Mandates
- Leadership or GDS/NCSC have not mandated an up-to-date approach yet.
Still, stale patterns or radars can lead to confusion about which tools are recommended or disapproved. NCSC’s guidance on choosing secure technology solutions and NIST’s enterprise architecture best practices emphasise regularly refreshed references that reflect modern security features.
Your answer:
We have up-to-date guidance and documents that help teams choose the right technology.
How to determine if this good enough
Your organisation invests in a living, frequently updated set of technology choices or recommended patterns, which teams genuinely consult before starting projects. This can be “good enough” if:
-
Broad Adoption
- Most dev/ops teams refer to the radar or patterns and find them beneficial.
-
Timely Updates
- Items are regularly revised in response to new cloud services, NCSC security alerts, or new GDS guidelines.
-
Consistent Security & Cost
- The recommended solutions reduce redundant spend and ensure up-to-date security features.
To push further, you might incorporate a community-driven pipeline for continuous improvement or collaborate with cross-public sector bodies on shared patterns. NIST enterprise architecture best practices or NCSC supply chain guidelines can help integrate security aspects more deeply.
Your answer:
Teams share what works, reuse solutions, and are encouraged to try new things.
How to determine if this good enough
At this top maturity level, your organisation not only maintains up-to-date patterns or a tech radar, but also fosters a culture of continuous improvement and knowledge sharing. This is typically “good enough” if:
-
Inherent Collaboration
- Teams frequently discuss or exchange solutions, referencing real success or lessons to guide new projects.
-
Focus on Reuse
- If an app or microservice solves a common problem, others can adopt or adapt it, reducing duplication.
-
Encouragement of New Ideas
- Innovation is rewarded through agile, user-centred ways of working, aligned with GDS and NCSC agile security approaches.
Nevertheless, you can deepen cross-government collaboration, embed HPC or AI solutions, or adopt multi-cloud approaches. NIST SP 800-160 on systems security engineering and NCSC’s supply chain and DevSecOps guidance can help you expand further.
What best describes your current technology stack? [change your answer]
Your answer:
Most systems are large, single applications using lots of different technologies.
How to determine if this good enough
Your organisation may bundle most functionalities (e.g., front-end, back-end, database access) into a single codebase. This can be considered “good enough” if:
-
Limited Project Scale
- You have only a few apps or these monoliths aren’t facing rapid feature changes that necessitate frequent deployments.
-
Stability Over Innovation
- The environment is stable, with minimal demands for agile or continuous deployment.
-
No Pressing Modernisation Requirements
- No immediate need from leadership or compliance frameworks for microservices, containerisation, or advanced DevSecOps.
However, monoliths often slow new feature rollout and hamper scaling. NCSC’s DevSecOps guidance and NIST SP 800-160 systems engineering best practices typically advise considering modular approaches to handle evolving user needs and security updates more flexibly.
Your answer:
Systems are split into parts, but these parts can’t run on their own.
How to determine if this good enough
Your application is conceptually modular—teams write separate modules or libraries—but the final deployment still merges everything into a single artifact or container. It can be considered “good enough” if:
-
Moderate Complexity
- The system’s complexity is contained enough that simultaneous deployment of modules is tolerable.
-
Basic Reuse
- Code modules are reused across the solution, even if they deploy together.
-
No Continuous Deployment Pressure
- You can handle monolithic-ish releases with scheduled downtime or limited user impact.
Though better than a single massive codebase, you might miss the benefits of shipping each module independently. NCSC DevOps best practices and NIST SP 800-204 microservices architecture guidance suggest modular architectures with independent deployment can accelerate security fixes and scaling.
Your answer:
Systems are made of parts that can run on their own, but they depend on each other a lot.
How to determine if this good enough
You have multiple microservices or modules each packaged and deployable on its own. However, there may be strong coupling (e.g., version sync or data schema dependencies). It can be “good enough” if:
-
Significant Gains Over Monolith
- You can release some parts separately, reducing the scope of each deployment risk.
-
Partial Testing Complexity
- Integrations require orchestrated end-to-end tests or mocking, but you still benefit from incremental updates.
-
Mature DevOps Practices
- Each component has a pipeline, though simultaneous releases across many components might pose a challenge.
Nevertheless, heavy interdependencies hamper the full advantage of modular architectures. NCSC zero trust or microsegmentation approaches and NIST microservices best practices advocate further decoupling or contract-based testing to reduce friction.
Your answer:
Most parts can run and be tested on their own, but a few main systems are still monolithic.
How to determine if this good enough
Your organisation has successfully modularised most services, yet some legacy or core systems remain monolithic due to complexity or historical constraints. It may be “good enough” if:
-
Limited Legacy Scope
- Only a small portion of the overall estate is monolithic, so the negative impacts are contained.
-
Proven Stability
- The remaining monolith(s) might be stable, with minimal changes needed, reducing the urgency of refactoring.
-
Mature DevOps for Modern Parts
- You enjoy the benefits of microservices for most new features or cloud expansions.
To fully benefit from independent deployments, you might eventually replace or further decompose those monoliths. NCSC’s approach to legacy modernisation or NIST SP 800-160 engineering guidelines can help plan that transition.
Your answer:
All systems are made of small parts that can run and be tested on their own, with no monolithic systems.
How to determine if this good enough
At this pinnacle, your organisation’s technology stack is entirely modular or microservices-based, each component testable and deployable on its own. It might be “good enough” if:
-
Highly Agile & Scalable
- Teams release features or bug fixes individually, mitigating risk and accelerating time-to-value.
-
Strong DevOps Maturity
- You have extensive CI/CD pipelines, container orchestration, thorough test automation, referencing NCSC or NIST SP 800-53 agile security approaches.
-
Minimal Coupling
- Interdependencies are managed via robust APIs or messaging, enabling each component to evolve with minimal friction.
Even so, you can refine HPC/AI or domain-specific modules, adopt advanced zero-trust gating, or unify cross-organisational microservices. NCSC’s guidance on microservices security and NIST SP 800-204 microservices frameworks encourage continuous improvements.
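To make “independently deployable” concrete, the sketch below shows a minimal service with its own health check and one narrowly scoped endpoint, so it can be built, tested, and released on its own. The FastAPI library and endpoint names are illustrative assumptions, not a recommended stack.

```python
# Minimal sketch of an independently deployable component: a small FastAPI service
# exposing its own health check and one narrowly scoped endpoint.
from fastapi import FastAPI

app = FastAPI(title="case-lookup")   # hypothetical service name

@app.get("/healthz")
def health() -> dict:
    """Used by the orchestrator to decide whether this instance receives traffic."""
    return {"status": "ok"}

@app.get("/cases/{case_id}")
def get_case(case_id: str) -> dict:
    # In a real service this would query the component's own datastore;
    # other services would call this API rather than share the database.
    return {"case_id": case_id, "status": "open"}

# Run locally with:  uvicorn service:app --port 8080
```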