Gartner: half of firms to adopt zero-trust data governance by 2028
Gartner said half of organisations will implement a zero-trust posture for data governance by 2028 as unverified AI-generated data spreads across corporate systems and public data sources.
The analyst firm linked the shift to the rising volume of synthetic content and the growing difficulty of distinguishing it from human-created information. It also pointed to risks for large language models that rely on broad training data drawn from the open web and other repositories.
"Organisations can no longer implicitly trust data or assume it was human generated," said Wan Fui Chan, Managing VP, Gartner.
Gartner said large language models typically train on web-scraped sources and other datasets, including books, code repositories and research papers. Some of those sources already contain AI-generated material, it said, and could become heavily populated with it if current trends continue.
Model reliability
Gartner described a risk that future generations of large language models will train on outputs from earlier models, heightening the chance of what it called "model collapse": a scenario in which AI tools' responses no longer accurately reflect reality.
The firm also linked the trend to increasing compliance pressure. It said some regions may move towards rules that require verification of "AI-free" data. It said requirements could vary widely across jurisdictions.
"As AI-generated data becomes pervasive and indistinguishable from human-created data, a zero-trust posture, establishing authentication and verification measures, is essential to safeguard business and financial outcomes," said Chan.
Gartner also cited its 2026 Gartner CIO and Technology Executive Survey. It said 84% of respondents expect their enterprise to increase funding for generative AI in 2026. Gartner said greater adoption and investment will raise the volume of AI-generated content produced and stored inside organisations.
Zero-trust shift
The zero-trust approach has its roots in cybersecurity. Organisations have used it to reduce implicit trust inside networks and to increase verification of identities and access requests. Gartner applied the same concept to data governance in the context of AI-generated and machine-generated information.
In its view, organisations will need stronger processes for authentication, verification and tracking of data lineage. Gartner also said organisations will need to identify and tag AI-generated data as it moves through systems, products and reporting processes.
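To make the tagging idea concrete, one minimal approach is to attach provenance metadata to each record as it enters a pipeline, defaulting to "unverified" in line with a zero-trust posture. The field names and rules below are a hypothetical sketch, not drawn from any Gartner guidance.

```python
from datetime import datetime, timezone

# Hypothetical provenance labels; real taxonomies would be organisation-specific.
AI_GENERATED = "ai_generated"
HUMAN_CREATED = "human_created"
UNVERIFIED = "unverified"

def tag_record(record: dict, origin: str = UNVERIFIED) -> dict:
    """Attach provenance metadata so downstream systems can filter on it."""
    tagged = dict(record)
    tagged["_provenance"] = {
        "origin": origin,                     # who or what produced the data
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "verified": origin == HUMAN_CREATED,  # zero trust: default to False
    }
    return tagged

def training_safe(records: list[dict]) -> list[dict]:
    """Keep only verified records for sensitive uses such as model training."""
    return [r for r in records if r["_provenance"]["verified"]]
```

Downstream systems, reports and training pipelines can then branch on the `_provenance` field rather than implicitly trusting every record.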
"As AI-generated content becomes more prevalent, regulatory requirements for verifying 'AI-free' data are expected to intensify in certain regions," said Chan. "However, these requirements may differ significantly across geographies, with some jurisdictions seeking to enforce stricter controls on AI-generated content, while others may adopt a more flexible approach.
"In this evolving regulatory environment, all organisations will need the ability to identify and tag AI-generated data. Success will depend on having the right tools and a workforce skilled in information and knowledge management, as well as metadata management solutions that are essential for data cataloging."
Metadata focus
Gartner said active metadata management will become more important as organisations manage mixed datasets that include human-created, machine-generated and AI-generated content. It said organisations will use metadata to analyse and automate decision making across data assets. It also linked metadata practices to alerts that indicate when data becomes stale or needs recertification.
It said organisations face operational risk if business-critical systems ingest inaccurate or biased information. It also said the risks extend to financial outcomes, given the use of data in forecasting, decision support and customer interactions.
Governance roles
Gartner set out steps it said organisations should consider as they address unverified AI-generated data. It said organisations should appoint an AI governance leader responsible for zero-trust policies, AI risk management and compliance operations. That leader, it said, should work with data and analytics teams on data readiness and on systems that can handle AI-generated content.
It also said organisations should set up cross-functional teams that include cybersecurity and data and analytics stakeholders. It said those groups should run data risk assessments. Gartner said organisations should identify which risks existing data security policies already address and which require new strategies.
Gartner also said organisations should build on existing data and analytics governance frameworks and update policies on security, metadata management and ethics. It said organisations should adopt active metadata practices that create real-time alerts when data needs attention.