
Three hidden risks in your AI stack and what to do about them

Tue, 18th Nov 2025

AI isn't just hype anymore. It's become a foundational part of how companies operate. From financial services in Singapore to eCommerce giants in Japan and software companies in Australia, AI is powering product features, automating operations, and shaping how businesses compete. But as developers race to build, security teams are often left in the dark, and that's where risk creeps in.

In a recent survey of 100 global security leaders, 1 in 4 admitted they had limited or no visibility into the AI services running across their environments. That's a gap that attackers can and will exploit, especially as local businesses accelerate AI initiatives in parallel with evolving regional regulations and data privacy laws.

The pace of AI adoption is outstripping the pace of governance. Wiz data shows that 75% of organizations now run self-hosted AI models, and a similar number have deployed dedicated AI/ML stacks. In cloud environments, OpenAI still dominates globally, but newer platforms like DeepSeek are gaining traction. When DeepSeek-R1 launched earlier this year, its usage tripled in just two weeks, a sign of how quickly preferences and platforms are shifting.

This rapid adoption brings real risk: misconfigured services, unclear data ownership, leaked credentials, and a dramatically wider attack surface. While AI creates enormous potential, it also introduces three emerging security challenges APJ leaders can't afford to ignore.

Shadow AI: an invisible threat

Different teams across your business are most likely spinning up AI tools without engaging the security or IT teams, sometimes unknowingly, sometimes to avoid friction. Whether it's marketing running an LLM-powered chatbot or developers testing open-source models, the pattern is consistent: tools get deployed, sensitive data gets processed, and no one in security is aware until something goes wrong. In tightly regulated industries, this can lead to data residency violations or non-compliance with local laws, including Singapore's PDPA, Australia's Privacy Act, Japan's APPI, and South Korea's PIPA. These laws impose strict conditions on how data is collected, transferred, and processed, especially when AI is involved, and failure to comply can result in regulatory penalties, reputational damage, and operational risk.

Trying to ban AI usage outside of official channels often backfires. Developers find workarounds, and usage just goes underground. What's needed instead is an approach that encourages responsible experimentation. That includes creating clear guidelines for teams, building default security guardrails into development workflows, and gaining visibility across cloud environments. Once teams can see what's running and understand the risks, they're in a much better position to protect it without slowing innovation down.
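To make that visibility concrete, here is a minimal sketch of the kind of check a security team might run to surface shadow AI: it scans a repository's dependency manifests for well-known AI SDK packages. The package names and manifest files listed are illustrative assumptions, not a complete inventory, and a real program would cover far more ecosystems.

```python
# Minimal sketch: flag AI/LLM SDK dependencies in a repo so security knows what's in use.
# The package list and manifest names below are illustrative, not exhaustive.
from pathlib import Path

AI_PACKAGES = {"openai", "anthropic", "langchain", "transformers", "boto3"}  # boto3 may indicate Bedrock usage
MANIFESTS = ("requirements.txt", "pyproject.toml", "package.json")

def find_ai_dependencies(repo_root: str) -> dict[str, set[str]]:
    """Return {manifest_path: {matched packages}} for a quick inventory report."""
    hits: dict[str, set[str]] = {}
    for manifest in MANIFESTS:
        for path in Path(repo_root).rglob(manifest):
            text = path.read_text(errors="ignore").lower()
            matched = {pkg for pkg in AI_PACKAGES if pkg in text}
            if matched:
                hits[str(path)] = matched
    return hits

if __name__ == "__main__":
    for manifest, packages in find_ai_dependencies(".").items():
        print(f"{manifest}: {', '.join(sorted(packages))}")
```

A check like this can run in CI or as a periodic sweep across source repositories, giving security a low-friction inventory of AI usage without blocking teams.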

Model Context Protocol (MCP): the new API frontier

MCP is the connective layer that lets LLMs interact with data, tools, and applications – essentially, an API framework for AI models. MCP is powerful and flexible, but also prone to many of the same risks we've long seen in the software supply chain. It's not uncommon to find plugins sourced from unofficial registries, auto-run configurations, or binaries that haven't been vetted properly. These scenarios open the door to impersonation, typosquatting, or malicious extensions, especially if permissioning is overly broad.

For enterprises in APJ that rely on third-party integrations or operate in hybrid cloud environments, MCP can become an invisible entry point for attackers. Just like any integration layer, it should be treated with scrutiny: understanding what plugins are in use, who has access, and whether they're coming from trusted sources. Risks that feel abstract on paper can escalate quickly in production if controls aren't in place.
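As an illustration of that scrutiny, the sketch below audits an MCP client configuration against an internal allowlist. It assumes a JSON config with an "mcpServers" map, as some MCP clients use, and the approved launchers and package names are hypothetical placeholders for whatever your organization has actually vetted.

```python
# Minimal sketch: audit an MCP client configuration against an internal allowlist.
# Assumes a JSON config with an "mcpServers" map (as some MCP clients use);
# the file path, approved launchers, and package names are illustrative.
import json
from pathlib import Path

APPROVED_COMMANDS = {"npx", "docker"}             # hypothetical: launchers your org has vetted
APPROVED_PACKAGES = {"@internal/crm-mcp-server"}  # hypothetical: packages from your trusted registry

def audit_mcp_config(config_path: str) -> list[str]:
    """Return a list of findings for MCP servers that fall outside the allowlist."""
    findings = []
    config = json.loads(Path(config_path).read_text())
    for name, server in config.get("mcpServers", {}).items():
        command = server.get("command", "")
        args = server.get("args", [])
        if command not in APPROVED_COMMANDS:
            findings.append(f"{name}: unapproved launcher '{command}'")
        if not any(pkg in args for pkg in APPROVED_PACKAGES):
            findings.append(f"{name}: package not on the trusted-source allowlist ({args})")
    return findings

if __name__ == "__main__":
    for finding in audit_mcp_config("mcp_config.json"):
        print("FLAG:", finding)
```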

Leaked secrets: a rising attack vector

In fast-paced AI development, shortcuts are common. Developers may hardcode credentials, forget to rotate API keys, or leave secrets in public repositories. In one scan of GitHub, Wiz found that four of the five most leaked secrets were related to AI services, including access keys for OpenAI, AWS Bedrock, and other compute-intensive endpoints.
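A basic version of that kind of scan can be approximated in a few lines. The sketch below searches a directory tree for credential formats commonly associated with AI services; the regular expressions are rough, illustrative patterns rather than production detection rules, and a dedicated secret scanner will catch far more.

```python
# Minimal sketch: scan files for credential patterns commonly tied to AI services.
# The regexes are illustrative approximations, not complete detection rules.
import re
from pathlib import Path

PATTERNS = {
    "OpenAI-style API key": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Hugging Face token": re.compile(r"hf_[A-Za-z0-9]{30,}"),
}

def scan_for_secrets(root: str) -> list[tuple[str, str]]:
    """Return (file_path, pattern_label) pairs for files that look like they contain secrets."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), label))
    return findings

if __name__ == "__main__":
    for file_path, label in scan_for_secrets("."):
        print(f"{file_path}: possible {label}")
```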

These keys are high-value targets. If exposed, they can be used to access sensitive data, spin up expensive cloud resources, or launch lateral attacks inside your infrastructure. For organizations dealing with cross-border data flows or country-specific compliance requirements, the fallout can be significant. And with the rising cost of AI compute, there's a growing financial incentive for threat actors to exploit stolen access.

The solution starts with visibility. Organizations need a way to track where credentials live, how they're being used, and whether they're scoped properly. Monitoring for abnormal behavior and automating rotation are important steps, but even more important is creating a culture where security is part of how AI gets built from day one.
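For the rotation piece, here is a minimal sketch of automating an AWS access key swap with boto3. It only shows the API flow: in practice you would publish the new key to every consumer (secrets manager, CI variables, deployed services) before deactivating the old one, and IAM allows at most two access keys per user at a time. The user name is hypothetical.

```python
# Minimal sketch: rotate an IAM user's access key with boto3.
# Real rotation must update every consumer of the old key before deactivating it,
# and IAM permits at most two access keys per user, so delete stale keys first.
import boto3

def rotate_access_key(user_name: str) -> str:
    iam = boto3.client("iam")

    # Issue a replacement key first so the workload never loses access.
    new_key = iam.create_access_key(UserName=user_name)["AccessKey"]

    # Deactivate (rather than delete) older keys so the change can be rolled back.
    for key in iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]:
        if key["AccessKeyId"] != new_key["AccessKeyId"]:
            iam.update_access_key(
                UserName=user_name,
                AccessKeyId=key["AccessKeyId"],
                Status="Inactive",
            )
    return new_key["AccessKeyId"]

if __name__ == "__main__":
    print("New active key:", rotate_access_key("ml-pipeline-service"))  # hypothetical IAM user
```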

How security can catch up

AI is moving faster than most security programs were designed to handle. But that doesn't mean security has to fall behind. What security leaders need now is a shift in approach – one that starts with mapping out the full AI footprint, identifying areas of exposure, and integrating safeguards into the development lifecycle itself.

This isn't about adding friction. It's about adding confidence so that teams can build without second-guessing if what they're doing is safe or compliant. In a region as diverse, fast-growing, and regulation-heavy as APJ, that confidence matters.

Security should be an enabler, not a blocker. When security teams can see what's running, understand how it connects, and embed governance into AI workflows from the start, that's when organizations can innovate freely and safely.
