The rapid integration of artificial intelligence into Indian classrooms is sparking urgent debate that goes beyond questions of educational efficiency. A growing concern centers on the vulnerability of children’s personal data amid rising cyber threats. Global bodies, including UNICEF, UNESCO, and UN human rights agencies, have issued a joint call for stringent safeguards on AI systems handling student information.
In the US, a massive data breach at edtech giant PowerSchool exposed sensitive details of over 60 million students and 10 million teachers, including Social Security numbers. This incident underscores the perils facing student information systems worldwide.
Closer home, a pilot study revealed Indian educational institutions endured over 200,000 cyberattacks and nearly 400,000 data breaches in just nine months. Such statistics paint a dire picture for data security in the education sector.
Enter the recent partnership between Indian NGO Pratham and US-based AI firm Anthropic. Their collaborative product, the Anytime Testing Machine (ATM), powered by Anthropic’s Claude model, digitizes handwritten student answers, generates curriculum-aligned tests, applies rubric-based grading, and delivers personalized bilingual feedback in Hindi and English.
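The rubric-based grading and bilingual feedback stages of such a pipeline can be sketched in miniature. This is an illustrative toy, not Pratham's or Anthropic's actual implementation: the `RubricItem` structure, the keyword-matching scorer, and the feedback strings are all assumptions; a real system would use the language model itself to judge answers rather than keyword lookup.

```python
from dataclasses import dataclass

@dataclass
class RubricItem:
    criterion: str        # what the grader looks for in the answer
    keywords: list        # evidence terms; any match earns the marks
    marks: int            # marks awarded if the criterion is met

def grade_answer(answer_text: str, rubric: list) -> dict:
    """Score one transcribed answer against a rubric and build
    bilingual (English/Hindi) feedback stubs."""
    text = answer_text.lower()
    earned, total, missed = 0, 0, []
    for item in rubric:
        total += item.marks
        if any(k.lower() in text for k in item.keywords):
            earned += item.marks
        else:
            missed.append(item.criterion)
    return {
        "score": earned,
        "out_of": total,
        "feedback_en": ("Revise: " + "; ".join(missed)) if missed else "Well done!",
        "feedback_hi": ("सुधार करें: " + "; ".join(missed)) if missed else "शाबाश!",
    }

# Hypothetical rubric for a science question.
rubric = [
    RubricItem("States that photosynthesis needs sunlight", ["sunlight", "light"], 2),
    RubricItem("Mentions chlorophyll", ["chlorophyll"], 1),
]
result = grade_answer("Plants use sunlight to make food.", rubric)
print(result["score"], "/", result["out_of"])  # 2 / 3
```

Even in this toy form, the privacy question is visible: the function's input is the child's own answer text, which in the real product travels to remote servers before any grading happens.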
While innovative, this setup raises red flags under India’s Digital Personal Data Protection (DPDP) Act, 2023. Section 9(1) mandates verifiable consent of a parent or lawful guardian before any personal data of a child under 18 is processed. The draft DPDP Rules further specify mechanisms such as OTP-based verification linked to government-issued identity details.
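What an OTP-based consent check might look like can be sketched as follows. This is a minimal hypothetical flow, not the mechanism the draft Rules prescribe in full: the `ConsentGateway` class, its method names, and the five-minute validity window are assumptions, and the government-ID identity check that the Rules contemplate is stubbed out entirely.

```python
import secrets
import time

OTP_TTL_SECONDS = 300  # 5-minute validity window (illustrative choice)

class ConsentGateway:
    """Hypothetical sketch of OTP-based verifiable parental consent.
    A real deployment would first verify the parent's identity against
    a government-ID-backed service; that step is omitted here."""

    def __init__(self):
        self._pending = {}  # parent_contact -> (otp, issued_at)

    def request_consent(self, parent_contact: str) -> str:
        otp = f"{secrets.randbelow(10**6):06d}"  # random 6-digit one-time code
        self._pending[parent_contact] = (otp, time.time())
        # In production the OTP would be sent via SMS, never returned.
        return otp

    def confirm_consent(self, parent_contact: str, otp: str) -> bool:
        record = self._pending.get(parent_contact)
        if record is None:
            return False
        issued_otp, issued_at = record
        if time.time() - issued_at > OTP_TTL_SECONDS:
            del self._pending[parent_contact]  # expired: force a fresh request
            return False
        if otp != issued_otp:
            return False
        del self._pending[parent_contact]  # single use: consent recorded once
        return True
```

The design point the sketch makes is that consent must be collected *before* any child data is processed: only after `confirm_consent` returns `True` should an answer sheet ever be photographed and uploaded.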
Parents may remain unaware that their child’s handwritten work is photographed, uploaded to cloud servers, processed by a US company, and analyzed by a large language model. This opacity could expose young learners to unforeseen privacy risks, demanding immediate regulatory scrutiny and transparent practices from edtech providers.
As AI transforms education, balancing innovation with child protection must become paramount. Policymakers, schools, and tech firms need to collaborate on robust frameworks to prevent data misuse.