PYMNTS Intelligence March 2025 CAIO Report Raises Concerns About GenAI’s Impact on Sensitive Company-Wide Data
A newly published March 2025 study from PYMNTS Intelligence reveals serious concerns among enterprise leaders about the implications of generative AI for internal data security, transparency, and workforce stability.
The report surveyed 60 CFOs from U.S. enterprises with more than $1 billion in annual revenue. Respondents were segmented by their level of GenAI adoption — high-impact users (using GenAI for strategic business decisions), medium-impact users (customer service, forecasting), and low-impact users (routine automation). Across these tiers, the underlying concern was consistent: AI adoption without clear operating standards increases the risk of internal data misuse and reduces control over sensitive information.
Nearly 40% of surveyed CFOs flagged the lack of clear operating standards as a significant barrier to further AI investment. High-impact users — those embedding GenAI into executive decision-making or financial modeling — were particularly vocal about this issue. These organizations often deal with confidential board-level information, proprietary financial forecasts, and strategic data. Without well-defined AI governance structures, the perceived risk of exposing such data to AI systems outweighs the technology’s potential benefits.
This hesitancy is not rooted in skepticism of GenAI’s capabilities, but rather in the uncertainty around how AI systems manage and retain internal information. Of the enterprises classified as high-impact GenAI users, an overwhelming 91% cited data privacy and internal access as a top concern.
These companies are worried about AI systems being trained or refined on proprietary data without adequate safeguards. The fear extends beyond data leaks: if data separation and permissioning protocols are insufficient, internal data could be accessed, or even inferred, by other users of the same AI platform.
For organizations with extensive financial, operational, and legal records, a lack of visibility into how AI models process internal documents raises compliance and liability issues. As AI continues to be integrated into functions like vendor management, budgeting, and supply chain logistics, ensuring traceability and oversight becomes even more critical.
The clear throughline across the study is that future GenAI integrations require transparency and governance. Organizations are calling for standardized operating frameworks that define data permissioning, model explainability, bias detection, and privacy controls. Without such foundations, the adoption of GenAI will continue to be cautious and uneven.
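To make the data-permissioning idea concrete, here is a minimal sketch of role-based filtering applied before any internal content reaches a GenAI prompt. Every name here (`Document`, `can_access`, `build_context`, the roles and documents) is hypothetical and illustrative, not drawn from the report or any vendor's API:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    content: str
    allowed_roles: set = field(default_factory=set)

def can_access(role: str, doc: Document) -> bool:
    """A role may read a document only if it was explicitly granted."""
    return role in doc.allowed_roles

def build_context(role: str, docs: list) -> str:
    """Assemble prompt context from only the documents the caller may see."""
    permitted = [d.content for d in docs if can_access(role, d)]
    return "\n---\n".join(permitted)

# Hypothetical corpus: board-level forecasts vs. public FAQ material.
docs = [
    Document("forecast-q3", "Q3 revenue forecast (confidential)", {"cfo", "board"}),
    Document("faq", "Public product FAQ", {"cfo", "board", "support"}),
]

# A support agent's AI session sees only the FAQ, never the forecast.
print(build_context("support", docs))
```

The point of the sketch is the ordering: access control is enforced before the model ever sees the data, so the AI layer cannot leak documents it was never given.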
Enterprises cannot afford to operate in a gray zone when it comes to GenAI. Whether it’s internal audit requirements, compliance with data privacy laws, or maintaining trust with customers and partners, the need for operational discipline around AI systems is rising.
As these challenges intensify, Dappier offers a pragmatic and secure approach to GenAI integration. By transforming internal content and data into AI-ready APIs, Dappier allows enterprises to adopt AI without relinquishing control over their proprietary information. Sign up today or schedule a demo at dappier.com/demo.
