A new report by professional services firm Alvarez & Marsal (A&M) warns that while Indian enterprises are rapidly scaling their use of artificial intelligence, governance and security frameworks are not evolving at the same pace.
The study highlights that despite rising interest in AI, enterprise-wide deployment remains limited, with only 15% of organisations using AI extensively across their operations.
The report draws on insights from a month-long survey of CISOs, CIOs, CTOs, and CROs across industries such as banking, financial services and insurance (BFSI), technology, healthcare, manufacturing, and retail.
A key concern raised is that AI governance maturity is still low. Around 60% of companies have introduced basic governance or acceptable-use policies, yet only 19% have conducted detailed risk assessments. More critically, 81% lack full visibility into how their AI systems are being monitored.
The prevalence of siloed AI projects — often combining in-house and third-party models — has further created inconsistent standards and unclear accountability. The report stresses the need for unified, organisation-wide governance structures that ensure transparency and define clear roles.
Despite widespread acknowledgement of the importance of responsible AI, actual implementation remains weak. Fewer than 20% of organisations have mechanisms in place for explainability, bias detection, or fairness assessment.
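For illustration only, a bias check of the kind the report says most organisations lack can start with something as simple as comparing positive-prediction rates across groups. The sketch below is not drawn from the report; the column names, sample data, and threshold are assumptions.

```python
# Illustrative sketch: a basic demographic parity check on model predictions.
# Column names, sample data, and the 0.2 threshold are assumptions, not from the report.
import pandas as pd

def demographic_parity_gap(predictions: pd.Series, group: pd.Series) -> float:
    """Return the gap between the highest and lowest positive-prediction
    rates across groups (0.0 means perfectly equal rates)."""
    rates = predictions.groupby(group).mean()
    return float(rates.max() - rates.min())

# Hypothetical scored data: 1 = approved, 0 = rejected.
scored = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "region":   ["north", "north", "north", "south", "south", "south", "south", "south"],
})

gap = demographic_parity_gap(scored["approved"], scored["region"])
if gap > 0.2:  # threshold chosen arbitrarily for the sketch
    print(f"Potential bias: approval-rate gap of {gap:.2f} across regions")
else:
    print(f"Approval-rate gap {gap:.2f} within tolerance")
```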
About 60% lack structured processes to validate model integrity. Data governance gaps mirror these concerns: only 26% have built-in scanning or masking of personally identifiable information (PII), and 60% do not perform systematic dataset validation. These weaknesses heighten risks related to bias, compromised datasets, and unstable model performance.
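As a hypothetical example of the PII controls the study measures, a minimal scanning-and-masking pass over free-text records might look like the sketch below; the regex patterns and sample record are assumptions, and real deployments typically rely on dedicated data-loss-prevention tooling.

```python
# Illustrative sketch: regex-based scanning and masking of common Indian PII
# in free-text training data. Patterns and the sample record are assumptions
# made for this example; production systems typically use dedicated DLP tools.
import re

PII_PATTERNS = {
    "email": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "phone": re.compile(r"\b[6-9]\d{9}\b"),           # 10-digit Indian mobile numbers
    "pan":   re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),   # PAN card format
}

def mask_pii(text: str) -> tuple[str, dict]:
    """Replace detected PII with redaction tokens and return counts per category."""
    counts = {}
    for label, pattern in PII_PATTERNS.items():
        text, n = pattern.subn(f"<{label.upper()}_REDACTED>", text)
        counts[label] = n
    return text, counts

record = "Contact Ravi at ravi.k@example.com or 9876543210, PAN ABCDE1234F."
masked, found = mask_pii(record)
print(masked)   # PII replaced with redaction tokens
print(found)    # e.g. {'email': 1, 'phone': 1, 'pan': 1}
```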
As organisations deploy more advanced AI models, security risks are intensifying. While 52% maintain secure development environments, fewer than 30% perform penetration testing or red-teaming exercises. Only 19% have safeguards against data poisoning during training. The report urges companies to strengthen end-to-end security, using containerised training setups, dataset authenticity checks, and adversarial testing.
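One way to picture the dataset authenticity checks the report recommends is a pre-training integrity gate that compares file hashes against a signed-off manifest. The sketch below is illustrative only; the manifest format and file names are assumptions.

```python
# Illustrative sketch: verifying training-data authenticity against a signed-off
# hash manifest before a training run, as one basic guard against data poisoning.
# File names and the manifest format are assumptions for this example.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 to avoid loading it fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_datasets(manifest_path: str) -> bool:
    """Compare each dataset's current hash with the value recorded at sign-off."""
    manifest = json.loads(Path(manifest_path).read_text())  # {"train.csv": "<hex digest>", ...}
    ok = True
    for filename, expected in manifest.items():
        actual = sha256_of(Path(filename))
        if actual != expected:
            print(f"Hash mismatch for {filename}: dataset may have been tampered with")
            ok = False
    return ok

if __name__ == "__main__":
    if not verify_datasets("dataset_manifest.json"):
        raise SystemExit("Aborting training run: dataset integrity check failed")
```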
Post-deployment oversight is identified as one of the weakest areas. Some 26% of organisations have no monitoring at all, and 45% rely on partial or delayed tracking. Only 15% have AI-specific incident response plans, and 66% do not audit their AI systems. This exposes businesses to hidden failures, performance issues, and compliance risks.
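To illustrate what even basic post-deployment monitoring could involve, the sketch below compares a live score distribution with its training baseline using the population stability index (PSI); the synthetic data and thresholds are assumptions for the example, not figures from the report.

```python
# Illustrative sketch: a simple post-deployment drift check using the population
# stability index (PSI). The synthetic data and alert thresholds are assumptions.
import numpy as np

def population_stability_index(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Higher PSI means the live distribution has drifted further from the baseline."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Clip to avoid division by zero / log(0) for empty bins
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.0, 1.0, 5_000)   # distribution seen at training time
live_scores = rng.normal(0.4, 1.2, 5_000)       # shifted distribution in production

psi = population_stability_index(baseline_scores, live_scores)
if psi > 0.25:   # commonly cited rule-of-thumb threshold for significant drift
    print(f"PSI={psi:.2f}: significant drift, trigger the AI incident-response process")
elif psi > 0.10:
    print(f"PSI={psi:.2f}: moderate drift, schedule a model review")
else:
    print(f"PSI={psi:.2f}: no material drift detected")
```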
A&M’s leaders emphasise that India’s expanding AI ecosystem will deliver long-term value only if governance, security, and accountability frameworks are strengthened early.