CFOtech US - Technology news for CFOs & financial decision-makers

Qlik adds trust controls for AI-ready data products

Thu, 16th Apr 2026

Qlik has expanded its AI trust and governance tools around data products, adding controls to help teams assess whether datasets are reliable enough for analytics and automated systems.

The update centres on treating data products as managed units within Qlik Analytics, with tools to create, share and monitor them across business and AI workflows. The package includes trust signals, service levels, anomaly detection and agent-assisted stewardship to help users judge whether data is fit for use.

The move reflects growing pressure on companies to improve oversight of the data that feeds AI systems. As more organisations push AI into operational workflows, poor-quality or outdated data can create risks that go beyond inaccurate reporting and affect business actions and decisions.

Trust signals

Among the additions is a Data Product Agent, designed to help teams create, manage and deliver data products through natural language interaction. It is intended to evaluate data quality, generate trust scores and guide users and AI systems to relevant datasets.

Qlik is also introducing Trust Score, a visible measure of a data product's readiness. The score assesses datasets against factors including accuracy, timeliness, diversity and completeness, giving teams a way to inspect data condition before decisions or automated processes depend on it.
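Qlik has not published its scoring formula, but the idea of combining those factors into a single readiness measure can be illustrated with a minimal sketch. The dimension names echo the article; the weights and the weighted-average method are assumptions for illustration only, not Qlik's actual model.

```python
# Hypothetical sketch: combining per-dimension quality checks into a
# single composite trust score. Weights and formula are illustrative
# assumptions, not Qlik's actual Trust Score implementation.

def trust_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-dimension scores (each 0.0-1.0) into a weighted composite."""
    total_weight = sum(weights.values())
    return sum(metrics[dim] * w for dim, w in weights.items()) / total_weight

metrics = {
    "accuracy": 0.97,      # share of records passing validation rules
    "timeliness": 0.90,    # freshness relative to the agreed refresh window
    "diversity": 0.85,     # coverage across expected segments
    "completeness": 0.99,  # share of non-null required fields
}
weights = {"accuracy": 0.4, "timeliness": 0.3, "diversity": 0.1, "completeness": 0.2}

print(round(trust_score(metrics, weights), 3))  # a single readiness figure
```

A visible composite like this lets a consumer compare data products at a glance, while the per-dimension inputs remain inspectable when the score dips.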

Another part of the update is a contract layer for data products. This is intended to define what a data product is expected to provide, setting a clearer operating standard for producing teams and a more explicit basis for trust for consumers.
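The shape of such a contract can be sketched in a few lines: a declared agreement on schema, freshness and quality that consumers can check observed metrics against. The field names and checks below are illustrative assumptions, not Qlik's contract format.

```python
# Hypothetical sketch of a data-product contract and a conformance check.
# All names and thresholds are assumptions for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataContract:
    name: str
    owner: str
    schema: dict[str, str]        # column name -> expected type
    max_staleness_hours: int      # freshness guarantee
    min_completeness: float       # minimum share of non-null required fields

contract = DataContract(
    name="customer_orders",
    owner="finance-data-team",
    schema={"order_id": "string", "amount": "decimal", "order_date": "date"},
    max_staleness_hours=24,
    min_completeness=0.98,
)

def meets_contract(staleness_hours: int, completeness: float) -> bool:
    """Check observed dataset metrics against the declared contract."""
    return (staleness_hours <= contract.max_staleness_hours
            and completeness >= contract.min_completeness)

print(meets_contract(staleness_hours=6, completeness=0.995))   # fresh and complete
print(meets_contract(staleness_hours=48, completeness=0.995))  # too stale
```

The point of making the contract explicit is that both sides hold the same artefact: producers know the bar they must clear, and consumers can verify it mechanically rather than on trust alone.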

Service-level objectives, alerting and anomaly detection are also being added to help users track whether data products continue to meet expectations over time. These tools are meant to surface degradation and drift before they turn into wider business problems.
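Drift detection of this kind often reduces to comparing the latest observation of a monitored metric against its recent baseline. As a hedged sketch, with the metric, window and threshold chosen purely for illustration:

```python
# Hypothetical sketch of anomaly detection on a monitored data-product
# metric (here, daily row counts): flag a value that deviates sharply
# from the historical baseline. Threshold and data are assumptions.
import statistics

def is_anomalous(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    """Flag the latest observation if it lies more than z_threshold
    standard deviations from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > z_threshold

daily_rows = [10_120, 10_340, 9_980, 10_205, 10_290, 10_150]
print(is_anomalous(daily_rows, 10_180))  # within the normal band
print(is_anomalous(daily_rows, 4_200))   # sudden drop in volume
```

Wiring a check like this to alerting is what turns a quality metric into a service-level objective: the degradation surfaces as a notification rather than as a downstream business problem.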

AI oversight

Qlik is also extending agent-assisted operations into data quality and trust workflows. Users will be able to retrieve trust signals and quality metrics, create and edit rules, define service levels, run calculations and detect anomalies through conversational interactions.

It is also adding stewardship functions intended to reduce manual work in data governance. These include tools to generate rules, improve glossary coverage, create field descriptions and suggest remediations, while leaving final decisions with human users.

Qlik framed the changes as a response to the need for stronger operational controls as AI systems take on a greater role in business processes. Data governance has long been a concern for analytics teams, but the spread of AI agents and automated decision systems has made reliability and accountability more urgent for software suppliers and their customers.

Data products, in Qlik's view, are becoming the main operating unit of trust rather than raw datasets managed in isolation. By putting service standards, quality indicators and monitoring around those products, Qlik aims to give organisations a clearer way to decide what humans and AI can use safely.

Qlik says its software is used by 75% of the Fortune 500, giving it a significant audience among large enterprises grappling with how to govern AI projects without slowing deployment across departments.

Mike Capone, Chief Executive Officer of Qlik, set out the company's view.

"As AI moves from answers into decisions and actions, weak data stops being a reporting problem and becomes an execution problem," Capone said. "Data products need the same accountability as any other production asset, with clear signals for what humans and AI can safely rely on. That is how enterprises scale AI without scaling risk."