
MDR Selection is a Partnership Decision

Last updated on Apr 28, 2026

Managed Detection and Response (MDR) is a cybersecurity service that combines human expertise and technology to detect, investigate, and respond to threats 24/7.

I write this as a Field CISO at Rapid7, but also as someone who has had to live with the operational reality of MDR on the customer side. I have seen what happens when a service is a black box, when technology and service drift apart, and when cost, retention, and accountability are misaligned. That experience shapes the view in this piece: MDR selection is not just about buying monitoring in isolation, but about choosing a partner that can help your team reduce risk and improve the way security operates over time.

When organisations evaluate MDR, they often start in the wrong place. The discussion begins with integration counts, dashboards, pricing tables, and increasingly bold claims about AI or dramatic reductions in alert volume. Those things all matter to a degree, but they are not the centre of the decision. The real question is whether you are choosing a provider that will work as a genuine partner, help you reduce risk over time, and strengthen the way your team operates when the environment becomes noisy, complex, or difficult to manage.

That matters because MDR is not a service that sits neatly off to one side of the security function. It becomes part of the operating model. It influences how visibility is created, how incidents are handled, how priorities are surfaced, and how much confidence a leadership team has in the people and processes around it. For that reason, I do not think MDR selection is primarily a tooling exercise. It is a partnership decision.

What poor MDR looks like in practice

My own view on this has been shaped by more than one experience. In one case, our MSSP was part of a defence company that was later carved out into a separate business. The service was built around a legacy SIEM. They had plenty of interest elsewhere in automation and future-state capability, but the fundamentals were being missed. We could talk about what we wanted to automate, but not with enough confidence about the quality of the underlying visibility, the operational process around it, or how the service was supposed to mature over time.

In another case, the issue was an MSSP overlay wrapped around a well-known, high-cost log indexer. On paper, that should have been a strong foundation. In practice, the management layer around it was poor. There was a lack of expertise, no credible roadmap, and very little meaningful tuning. As the MSSP was also reselling the ingest, there was no obvious incentive to optimise data use in the customer’s favour. Ingest was capped because of cost, retention was limited to 90 days, and we were left with the uncomfortable combination of high spend, constrained visibility, and a service that did not appear to be improving in any meaningful way.

Those experiences shaped how I think about MDR because they exposed the same underlying problem. The technology was not absent, but the service model around it was weak. When the gap between the platform and the service becomes too wide, the customer ends up paying for capability in theory while carrying the operational risk in practice.

Why the gap between platform and service matters

This is where many MDR relationships start to fail. Even when the tooling is capable, the provider still has to connect platform, people, process, and commercial model into one coherent service. If that does not happen, the customer ends up living with support issues, awkward hand-offs, misaligned contracts, unclear accountability, and a constant sense that there are too many moving parts and not enough ownership.

That is why I would start any MDR evaluation by looking at how the relationship is meant to work in practice. 

  • Does the provider genuinely own the experience end to end, or are they effectively brokering one element through another?

  • Can they show how the programme will improve over the first year, not just how onboarding works in the first month?

  • Do they understand the rest of your security ecosystem and how to operate within it, or do they assume every answer involves expanding their footprint?

Strong providers think holistically. They understand that the customer already has an environment to manage, existing tools to work with, and internal teams who need clarity rather than additional friction. They think in terms of operating model, monitoring, response, and continuous improvement over time, rather than treating the service as a thin wrapper around a platform. That is usually where the difference between coverage and real partnership becomes obvious.

Proactive defense starts with the fundamentals

True partnership is defined by its ability to deliver proactive defense and continuous improvement. By this, I do not just mean threat hunting or faster triage. I mean exposure reduction in the broader sense. It is understanding attack paths, using intelligence well, tuning detections properly, improving visibility where it matters, and building a service rhythm that reduces the conditions attackers rely on.

That sounds obvious, but it is surprisingly easy for organisations to be distracted from those fundamentals. Low entry prices often mask a fundamentally constrained operating model, shifting risk and cost back to the customer. 

Sweeping promises about single-digit alert volumes should be treated carefully, especially before a provider has properly understood the environment. The same is true of broad agentic AI claims. Automation can absolutely help, but it does not replace accountability, operational judgement, or the need for a provider to show how the service will improve over time.

For me, that last point is one of the clearest tests of whether the relationship is working. An MDR service should not be something you set and forget. A mature partnership should look better in month twelve than it did in month one. Visibility should improve. Tuning should improve. The roadmap should improve. Confidence in escalation and response should improve. If none of that is happening, it becomes very difficult to describe the relationship as a real partnership. At that point, you may simply have outsourced a queue.

When displacement becomes the right answer

That is also how I think about displacement. An incumbent should not be displaced simply because another provider has a sharper demo or a more fashionable story. Displacement makes sense when the existing model has stopped improving, when the service feels static or opaque, when the team lacks the expertise to tune and evolve it properly, or when the commercial structure and delivery model are working against the customer rather than with them.

If the relationship is held together by workarounds, if there is no meaningful roadmap, or if the customer is left carrying too much of the integration and governance burden themselves, the problem is usually structural rather than temporary. In that situation, the question is no longer whether the service can be tweaked around the edges. The question is whether the model is fit for purpose at all.

Consolidation is only useful if it improves the model

That does not automatically mean consolidation is the answer. Consolidation can be valuable, but only when it improves the operating model rather than simply reducing the number of logos in the environment. In some cases, the right answer will be to build a broader relationship with a provider that has earned trust and shown it can deliver more. In others, the right answer will be better integration and a clearer division of responsibilities.

What matters is whether the provider helps create a more coherent, scalable, and accountable way of operating. If consolidation leads to better hand-offs, stronger accountability, and a simpler way of reducing risk, it can be very valuable. If it does not, then consolidation is not the point. A better operating model is.

This broader view is also consistent with established security guidance. NIST CSF 2.0 frames cybersecurity as a risk management discipline across governance, protection, detection, response, and recovery [1]. NIST’s latest incident response guidance reinforces that response should be integrated into wider risk management and improved over time [2]. The NCSC makes a similar point in its guidance on building a SOC and on security monitoring, where tools, skills, and operating model all need to work together [3]. CISA’s exposure reduction guidance points in the same direction by focusing on reducing the conditions attackers rely on before incidents escalate [4].

Questions worth asking any MDR provider

There are a few practical questions I would encourage any CISO, Security Director, or Security Operations Manager to ask, whether they are reviewing an incumbent or evaluating a new provider:

  • How will the service improve over the first year and beyond?

  • Where do the hand-offs happen between your platform, your analysts, and my team?

  • How do you work with the security and IT tools we already rely on?

  • How predictable is the commercial model as coverage expands?

  • What are you doing to reduce risk before the next incident, not just respond after it?

  • If your commercial model benefits from more ingest, what incentive do you have to tune it down?

Those questions reveal far more than a polished demo ever will.

Ultimately, the organisations that get the most value from MDR tend to be the ones that treat it as part of a wider security partnership rather than a neatly outsourced function. They expect transparency, progress, and a provider that understands both the environment they have today and the operating model they are trying to build over time. That is the standard worth holding. If the provider is not improving the programme over time, you do not have a real partnership. And if consolidation does not lead to a better operating model, it is probably not worth doing in the first place.

Learn more about Rapid7's approach to preemptive MDR.

Alan Simpson is Field CISO for the UK and Ireland at Rapid7, advising CISOs and senior leaders on cyber risk, resilience, and security strategy that supports business outcomes. Before joining Rapid7, he served as Global Security Operations Manager and Acting CISO at Keyloop, where he led security operations and wider information security initiatives. He has also held senior security leadership roles at Allianz and LV=, with experience across security operations, incident response, architecture, awareness, supplier assurance, and security testing.

[1] https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.29.pdf

[2] https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r3.pdf

[3] https://www.ncsc.gov.uk/collection/building-a-security-operations-centre

[4] https://www.cisa.gov/resources-tools/resources/exposure-reduction

