The DPDP Act Section 8(5) blindspot: Use of AI by employees and the unseen IP and data breach

Why you may be blind to the biggest driver of a ₹250 crore penalty under the DPDP Act: the own goal in cybersecurity.
Data Privacy and The Internet

If you enter a boardroom and open an agenda item on cybersecurity today, odds are that the room will obsess over sophisticated AI threats and ransomware gangs breaching the external perimeter.

The Chief Information Security Officer (CISO) will present top-notch investments worth millions in firewalls, intrusion detection systems and penetration testing to ward off data breaches. Risk discussions with legal will highlight that the Digital Personal Data Protection Act, 2023 (DPDPA) opens a massive, never-seen-before regulatory exposure of a whopping ₹250 crore penalty that most organisations simply cannot wish away with a hope and a prayer. They will highlight the need to ramp up technical, physical and administrative safeguards that can protect against data breaches. And the wisest in the room will build a case for cyber insurance that can prevent a corporate obituary when the data breach inevitably happens.

While investing in legal policies and cyber resilience is of utmost importance, the problem with this approach is that it limits its focus to external threats and very likely misses the critical role of insider breaches (an own goal in the GRC field, so to speak). The actual data breach is very likely happening at that very moment within the office itself - in the marketing and engineering departments. Employees are freely pasting sensitive customer data, proprietary source code and confidential financial projections into public large language models. They are not doing it with malicious intent or to sabotage their careers. In their minds, it's only to draft emails faster, debug code more efficiently, or summarise lengthy documents before meetings.

This behaviour has become so normalised that most employees do not even register it as a security or legal concern. The marketing associate pastes customer complaint data into a chatbot to generate response templates and most team members don't even blink. The software engineer uploads proprietary algorithms to get debugging assistance. The finance analyst feeds quarterly projections into an AI tool to create presentation summaries. Each of these actions takes mere seconds, boosts organisational efficiency and helps employees avoid drudge work, but ends up compromising your most sensitive intellectual property and resulting in a fine-worthy data breach under the DPDPA.

The legal exposure is immediate

Section 8(5) of the DPDPA, read with its operational layer in Rule 6 of the DPDP Rules, 2025, places strict obligations on the data fiduciary to implement reasonable security safeguards to prevent breaches of the personal data in its possession or under its control (including data held through its processors like payroll vendors, marketing partners, or IT support teams). Dropping customer personal data into an ungoverned external AI tool constitutes an unauthorised third-party data transfer. The moment that data leaves your controlled environment and enters an external server, you have lost dominion over it and are looking at a personal data breach under Section 2(u).

The employee's intentions have nothing to do with the definitional requirements; you trigger DPDP compliance obligations regardless. If customer personal data was included in those prompts, under Section 8(6) and Rule 7 you may have obligations to notify the Data Protection Board of India and affected individuals (immediately upon discovery, followed by a more detailed notification within a 72-hour window), depending on the nature and sensitivity of the information disclosed.

Failure to meet Section 8(5) or Section 8(6) leaves you open to penalties of up to ₹250 crore and ₹200 crore, respectively. Each violation attracts a separate penalty, which means that your liability for multiple violations is theoretically uncapped. Bottom line: it's bad enough if your security safeguards cannot reasonably protect against employee misuse. It's even worse if you fail to honestly report it under the DPDPA.

Then comes the compromise of highly valuable IP and the loss of control over your trade secrets. That proprietary algorithm your engineering team spent 18 months developing now exists on infrastructure you do not control, governed by terms of service you likely never read. The AI provider may use submitted data for model training. Competitors using the same service might benefit from patterns learned from your intellectual property.

What’s worse is that such actions also likely hold you in material breach of client confidentiality agreements. Professional services firms, law practices, accounting firms and consultancies face particular exposure here. Client data shared with external AI tools violates the confidentiality provisions that form the foundation of these business relationships.

The visibility problem

Most enterprises have absolutely zero visibility into this shadow data transfer. Traditional data loss prevention tools were designed to monitor email attachments and USB drives. They were not architected to intercept browser-based interactions with AI chatbots.

Startups particularly face acute risk because informal cultures often celebrate productivity hacks without examining their compliance implications. The same scrappy attitude that drives product development can create catastrophic security blind spots. The DPDPA does not, at the time of this writing, exempt startups or SMEs from data protection obligations. A single engineer pasting production database queries into an external AI can expose thousands of customer records and sink the business under penalties before it has even left the launch pad.

For large enterprises, the problem lies in the sheer scale of operations and the size of the workforce. With thousands of employees across multiple departments, the aggregate exposure from uncontrolled AI tool usage becomes statistically staggering. Even if only 5% of your workforce engages in this behaviour, the volume of sensitive data leaving your perimeter is substantial. The DPDPA holds a data fiduciary absolutely liable for compliance by its processors, meaning that organisations bear the risk of their own employees plus the added risk of subcontractors, vendors and partners making the same mistake.

Building an educated defensive perimeter

The solution requires a combination of technical controls and cultural transformation. Ironically, it also involves the use of AI to stop the misuse of AI. Technical measures include deploying enterprise-grade AI tools with appropriate data handling agreements, implementing browser extensions that detect and block sensitive data from being pasted into unauthorised services and establishing network-level monitoring of AI platform traffic.
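To illustrate the paste-blocking idea, here is a minimal sketch of the kind of pattern-based check such a control might run on outbound prompt text. The patterns and function names are hypothetical and deliberately simplified; a production deployment would need far more robust detection (contextual NER models, checksum validation for identifiers, and allow-lists for approved tools).

```python
import re

# Illustrative, simplified patterns only - real tools need much stronger detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    # Indian mobile numbers: optional +91, then ten digits starting 6-9
    "indian_phone": re.compile(r"(?<!\d)(?:\+91[ -]?)?[6-9]\d{9}(?!\d)"),
    # PAN format: five letters, four digits, one letter (e.g. ABCDE1234F)
    "pan": re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),
    # Twelve digits in groups of four, resembling an Aadhaar number
    "aadhaar_like": re.compile(r"(?<!\d)\d{4}[ -]?\d{4}[ -]?\d{4}(?!\d)"),
}

def scan_prompt(text: str) -> dict:
    """Return the PII categories detected in the text, with matching snippets."""
    hits = {}
    for label, pattern in PII_PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[label] = found
    return hits

def should_block(text: str) -> bool:
    """Block the paste into an unapproved AI tool if any PII pattern matches."""
    return bool(scan_prompt(text))
```

In practice, a browser extension or network proxy would run a check like `should_block` before the prompt leaves the corporate perimeter, logging the attempt and redirecting the employee to an approved enterprise tool.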

Cultural measures are equally critical. Employees engage in this behaviour because they find AI tools genuinely useful for their work. Blanket prohibitions drive the behaviour underground rather than eliminating it. Effective programs educate employees about the specific risks, provide approved alternatives for common use cases and establish clear escalation paths for novel situations. There is a need to build a strong culture of privacy within organisations through both a carrot and stick approach.

AI prompts and LLMs are here to stay. It will be wise for Indian organisations to quickly and proactively audit their internal systems and place adequate safeguards against employee misuse of personal data and intellectual property that is worth millions (and might cost millions more in penalties and contractual damages if mishandled). Tailor usage guidelines to your business context, deploy enterprise-tier tools that provide productivity benefits without compliance risk, and run intensive behavioural change programmes that can transform the workforce into an educated defensive perimeter.

Mimansa Ambastha is a cybersecurity and privacy expert and Founder of Starlex Consultants (Delhi-NCR).

Bar and Bench - Indian Legal news
www.barandbench.com