India's Digital Personal Data Protection Act, 2023 has fundamentally altered the country's Right to Information landscape by amending Section 8(1)(j) of the RTI Act, 2005. The amendment strips out the safeguards that previously governed when agencies could withhold personal information: the test of whether disclosure served a larger public interest, and the proviso that information which cannot be denied to Parliament or a State Legislature cannot be denied to any citizen. What was once a carefully balanced framework requiring justification for secrecy has become a broad exemption that agencies can invoke simply by asserting that the information "relates to personal information."

This isn't just about transparency — it's about AI governance in the world's most populous democracy. India is rapidly deploying AI systems across government services, from digital identity verification to automated decision-making in welfare distribution. When citizens can't access information about how these systems work, what data they collect, or how decisions are made, algorithmic accountability becomes impossible. The privacy-transparency trade-off here mirrors debates happening globally, but India's approach appears to heavily favor opacity over oversight.

The timing reveals the real stakes. India is positioning itself as both an AI powerhouse and a data protection leader, but this legislative sleight-of-hand suggests the government wants to have it both ways — claiming privacy leadership while avoiding scrutiny of its own AI deployments. The DPDP Act's broad language could easily be used to shield government AI systems from public examination, creating a transparency vacuum just as algorithmic governance expands.

For developers building AI systems in India, this creates a new compliance landscape where government transparency standards no longer match private-sector obligations. You'll need to be more transparent about your AI systems than the government is about theirs — a regulatory asymmetry that could reshape how AI accountability works in practice.
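One practical response to that asymmetry is to keep an accountability trail that can be disclosed without exposing personal data — recording what a system decided and why, while redacting the personal fields before any public release. The sketch below is purely illustrative: the `DecisionRecord` structure, the field names, and the redaction list are assumptions for the example, not a statutory or DPDP-mandated format.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical redaction list — which input fields count as personal
# data is an assumption here, not a legal determination.
PERSONAL_FIELDS = {"name", "aadhaar_number", "address", "phone"}


@dataclass
class DecisionRecord:
    """Illustrative audit record for one automated decision."""
    model_version: str
    decision: str
    inputs: dict
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def disclosable(self) -> dict:
        """Copy of the record with personal fields redacted, keeping
        the accountability trail (model version, decision, context)."""
        record = asdict(self)
        record["inputs"] = {
            k: ("[REDACTED]" if k in PERSONAL_FIELDS else v)
            for k, v in self.inputs.items()
        }
        return record


record = DecisionRecord(
    model_version="welfare-eligibility-v2",  # hypothetical system name
    decision="denied",
    inputs={"name": "A. Citizen", "income_band": "B", "district": "Pune"},
)
public_view = record.disclosable()
print(public_view["inputs"])
# personal fields are redacted; non-personal context survives disclosure
```

The design point is the separation itself: if the personal and non-personal parts of a decision log are never entangled, a blanket "personal information" exemption gives no cover for withholding the accountability metadata.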