Third-party breaches are forcing organisations to rethink how they oversee vendor risk. In many cases, the issue is not simply a security failure but a governance model that has not kept pace with modern data ecosystems. Joshua Stuts, Director of Security and Trust at Drata, explains why continuous oversight, clearer visibility into data flows, and stronger preparedness are becoming essential.
Why do organisations often underestimate how complex and opaque their data flows have become?
The average organisation’s IT environment is exponentially more complicated than it was 20 or even 10 years ago. Modern organisations rarely operate in isolation and depend on an increasingly dense network of SaaS platforms, integration partners, and specialised service providers that constantly exchange data.
So one of the biggest governance challenges today is simply understanding where all that data is going.
This is made even more difficult because of the way those connections form a web of dependencies that grows over time. Applications integrate through APIs, vendors may rely on subcontractors, and cloud services automatically move information between systems. An individual connection might make sense and be easy to track, but together they collectively create an ecosystem that is surprisingly difficult to map clearly.
So even for an organisation with a reasonable understanding of its direct vendors, the chain gets increasingly opaque the further down you go. Add complications like legacy integrations that remain active long after their original purpose, and visibility becomes even more challenging.
One approach that is gaining traction is obligation mapping. Rather than trying to track systems and vendors in isolation, obligation mapping links regulatory requirements, contractual commitments and internal policies directly to the vendors, systems and data types involved.
By connecting obligations to the actual data flows in the environment, organisations gain a clearer picture of where sensitive data lives, who has access to it, and what safeguards should apply along the way. This creates an understanding of where risk sits and where governance gaps might emerge across the wider supply chain.
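To make the idea concrete, here is a minimal sketch of what obligation mapping can look like in practice. Everything in it is hypothetical (the data types, the vendors, and the specific obligation are illustrative, not from Drata or any particular regulation's text): each data flow records which data categories it carries and which safeguards are actually in place, and a simple check surfaces flows where an applicable obligation is not yet satisfied.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Obligation:
    source: str        # e.g. a regulation, contract clause, or internal policy
    requirement: str   # the safeguard it mandates

@dataclass
class DataFlow:
    vendor: str
    system: str
    data_types: frozenset               # categories of data the flow carries
    safeguards: set = field(default_factory=set)  # controls actually in place

def governance_gaps(flows, obligations_by_data_type):
    """Return (vendor, system, missing requirement) triples where a data
    flow carries a data type whose obligations are not all satisfied."""
    gaps = []
    for flow in flows:
        for dtype in flow.data_types:
            for ob in obligations_by_data_type.get(dtype, []):
                if ob.requirement not in flow.safeguards:
                    gaps.append((flow.vendor, flow.system, ob.requirement))
    return gaps

# Hypothetical example: one obligation attached to personal data.
obligations = {
    "personal_data": [Obligation("GDPR Art. 32", "encryption at rest")],
}
flows = [
    DataFlow("AcmeCRM", "billing-sync", frozenset({"personal_data"}),
             {"access logging"}),
    DataFlow("MailCo", "newsletter", frozenset({"marketing_data"})),
]
print(governance_gaps(flows, obligations))
# → [('AcmeCRM', 'billing-sync', 'encryption at rest')]
```

The point of the structure is that the gap report is derived from the obligations themselves, so when a new regulation or contract clause is added, every affected flow across the supply chain is re-evaluated automatically.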
Why are point-in-time vendor assessments no longer enough to manage third-party risk?
So, while we have all this complexity building up, traditional third-party risk assessment has become increasingly inadequate.
Vendor risk management has followed a fairly predictable pattern for a long time. A vendor is assessed during procurement, questionnaires are completed, certifications are reviewed, and a contract is signed. From that point onward, the assumption is often that the vendor has been vetted and the risk is understood.
That largely worked in a slower-moving, analogue world, but today’s digital environments don’t stand still for long. Systems integrate through APIs, automated workflows connect services behind the scenes, and vendors regularly update their platforms or expand the way they interact with customer data.
Point-in-time assessments capture only a snapshot of that environment. They provide useful information about how a vendor looked at the moment of review, but they cannot account for how integrations, credentials, or access patterns might evolve months later. This creates blind spots where dormant integrations remain active, permissions expand gradually, or access tokens persist longer than intended.
This is why many organisations are shifting toward continuous monitoring. Instead of relying solely on periodic reviews, they maintain ongoing visibility into how vendors interact with systems and data. That might include monitoring changes in access behaviour, reviewing API credentials or service accounts, and identifying unusual activity patterns.
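As a rough illustration of that kind of review, the sketch below flags two of the blind spots mentioned above: tokens that have outlived a rotation policy, and integrations that look dormant but still hold valid credentials. The record format, thresholds, and vendor names are all assumptions for the example; in practice these records would come from an identity provider or API gateway logs.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy thresholds and reference time.
NOW = datetime(2025, 6, 1, tzinfo=timezone.utc)
MAX_TOKEN_AGE = timedelta(days=90)
DORMANCY = timedelta(days=60)

# Hypothetical vendor-access records.
credentials = [
    {"vendor": "AcmeCRM", "token_issued": NOW - timedelta(days=120),
     "last_used": NOW - timedelta(days=2)},
    {"vendor": "MailCo", "token_issued": NOW - timedelta(days=30),
     "last_used": NOW - timedelta(days=75)},
]

def review_findings(creds, now=NOW):
    """Flag tokens overdue for rotation and integrations that appear
    dormant while their credentials remain live."""
    findings = []
    for c in creds:
        if now - c["token_issued"] > MAX_TOKEN_AGE:
            findings.append((c["vendor"], "token overdue for rotation"))
        if now - c["last_used"] > DORMANCY:
            findings.append((c["vendor"], "dormant integration with live credentials"))
    return findings

print(review_findings(credentials))
# → [('AcmeCRM', 'token overdue for rotation'),
#    ('MailCo', 'dormant integration with live credentials')]
```

A periodic review of this kind turns a point-in-time snapshot into an ongoing control: the same checks run continuously against current records rather than once at procurement.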
This approach also positions third-party risk management as a governance strategy, rather than a technical tool.
Why are transparency and preparedness becoming competitive differentiators after third-party breaches?
Along with the complexity of our business environments, expectations around breach response have also changed significantly in recent years. In the past, the main question after an incident was often simply whether a breach occurred and what data and systems were affected. Today, stakeholders want to understand how prepared the organisation was before the incident happened.
Regulators, customers and partners increasingly ask what governance structures were in place to monitor vendor risk, how unusual behaviour could be detected, and how quickly the organisation could respond once something went wrong.
The effectiveness of the governance surrounding an incident can be just as important as the handling of the breach itself.
This preparedness matters because third-party incidents rarely involve a single organisation acting alone. They often require coordination between vendors, legal teams, compliance functions and senior leadership. Without clearly defined reporting structures and response processes, you end up with a slow and cumbersome response that increases the impact of the incident – and damages the reputations of everyone involved.
Transparency is also essential: organisations that communicate openly maintain far more credibility than those that limit communication to minimum disclosure requirements. Companies should be willing and able to detail what happened, how it was detected and how they are strengthening governance.
Alongside regulations like NIS2, which now make supply chain risk management mandatory, this governance maturity is becoming part of the competitive landscape in many industries.
When organisations evaluate potential partners or suppliers, they increasingly look for evidence of continuous oversight, structured vendor governance and tested incident response processes. If you can show that you’ve really taken this seriously, you’re going to stand far above those who have only done enough to tick boxes.