Future Of AI: Legal Challenges In The Era Of The Digital Personal Data Protection Act, 2023


Artificial intelligence (AI) is transforming the world in unprecedented ways. From healthcare to education, from entertainment to e-commerce, AI is enabling new possibilities and opportunities for innovation and growth. However, AI also poses significant challenges to data privacy, as it often relies on the collection, processing, and analysis of enormous amounts of personal data.

Personal data is any information that relates to an identified or identifiable individual, such as name, email, location, biometrics, health records, preferences, behaviour, etc. Data privacy is the right of individuals to control how their data is used and shared by others and to protect it from unauthorised access, misuse, or harm.

As organisations increasingly deploy AI systems, they are likely to find themselves at the crossroads of innovation and regulation, struggling to balance technological progress with India's stringent data privacy law, the Digital Personal Data Protection Act, 2023 (DPDPA).

The DPDPA imposes various requirements and restrictions on how companies can collect, process, store, transfer, and disclose personal data, and also grants data principals (i.e., individuals) certain rights, such as the right to access, rectify, or erase their data.

AI systems thrive on data. They learn, evolve, and improve by analysing vast amounts of information, which includes personal data. This dependency on data creates an inherent conflict with the personal data protection law, which is designed to safeguard individuals' privacy. While the DPDPA aims to ensure that personal data is used in a fair, transparent, and lawful manner, it also poses significant challenges for companies that want to leverage the potential of AI.

AI often involves complex, automated, and opaque data processing activities that may not be compatible with the principles and obligations of the personal data protection legislation. The DPDPA imposes strict limitations on how personal data can be collected, processed, and stored, which is likely to be at odds with the data-hungry nature of AI.

The key challenges that companies may face in complying with the DPDPA for their use of personal data for AI processing purposes are set out below:

Lack of Transparency

One of the core principles of the DPDPA is transparency. Organisations must inform individuals about how their data is being processed in clear and plain language. However, many AI systems, particularly those based on deep learning, operate as ‘black boxes’, making it difficult to provide clear explanations of their processing and decision-making processes.

For the processing of personal data to be lawful, companies would be required to provide data principals with clear and comprehensive information about the nature, scope, and purpose of data processing, and about the potential risks and benefits of AI, in a concise, intelligible, and easily accessible manner using clear and plain language. For opaque AI systems, delivering this level of transparency may prove genuinely challenging.
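
By way of illustration only, one common engineering response to the black-box problem is to pair the production model with a simple attribution step so that a plain-language reason can accompany each automated decision. The sketch below is an assumption-laden example, not a DPDPA requirement: it uses scikit-learn, synthetic data, and hypothetical feature names.

```python
# Illustrative sketch only: pair a model with a simple attribution step so
# that a plain-language reason can accompany each automated decision.
# The data and feature names here are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age", "monthly_spend", "account_tenure_months"]  # hypothetical
X = rng.normal(size=(200, 3))
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic label

model = LogisticRegression().fit(X, y)

def explain(sample: np.ndarray) -> list[tuple[str, float]]:
    """Rank features by their contribution to this sample's decision score."""
    contributions = model.coef_[0] * sample  # exact decomposition for a linear model
    order = np.argsort(-np.abs(contributions))
    return [(feature_names[i], round(float(contributions[i]), 3)) for i in order]

print(explain(X[0]))  # e.g. [("monthly_spend", ...), ("account_tenure_months", ...)]
```

For a linear model, the coefficient-times-value products exactly decompose the decision score; for deep models, attribution methods are approximations and are correspondingly harder to validate and to explain in plain language.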

Data Minimisation Requirements

The DPDPA advocates data minimisation, meaning that organisations should collect and use only the personal data that is necessary for a specific purpose. AI systems, on the other hand, often require large datasets to function effectively and to enhance their learning and predictive capabilities, which creates friction between the obligation to minimise personal data collection and the desire to harness the power of big data.

Striking a balance between collecting sufficient data for AI functionality and adhering to data minimisation principles is a significant challenge. Organisations must clearly define and limit the scope of data collection, which can potentially hamper the effectiveness of AI models.
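
As a minimal sketch of how data minimisation is often operationalised in practice (the purposes and field names below are hypothetical assumptions, not terms from the DPDPA), a per-purpose whitelist can strip unnecessary fields before any record reaches an AI pipeline:

```python
# Illustrative sketch only: enforce a per-purpose field whitelist before any
# record reaches an AI pipeline. Purposes and field names are hypothetical.
ALLOWED_FIELDS = {
    "churn_prediction": {"monthly_spend", "account_tenure_months"},
    "fraud_detection": {"transaction_amount", "transaction_time"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Keep only the fields declared necessary for the stated purpose."""
    allowed = ALLOWED_FIELDS[purpose]
    return {field: value for field, value in record.items() if field in allowed}

record = {
    "name": "A. Sharma",       # not needed for churn prediction: dropped
    "email": "a@example.com",  # not needed for churn prediction: dropped
    "monthly_spend": 420.0,
    "account_tenure_months": 18,
}
print(minimise(record, "churn_prediction"))
# -> {"monthly_spend": 420.0, "account_tenure_months": 18}
```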

Consent and Purpose Limitation

One of the main challenges that companies may face in using personal data for AI processing is obtaining valid consent from data principals. Under the DPDPA, consent is the primary legal basis, or lawful purpose, for the processing of personal data, and it means that data principals have given their informed, specific, and voluntary agreement to the processing of their data.

Obtaining explicit consent for data use and ensuring that personal data is used only for the specified purpose are fundamental aspects of the DPDPA. AI applications frequently involve secondary data usage, where data collected for one purpose is repurposed for another, often without the explicit consent of the data principals.

This raises concerns under the DPDPA as companies can only use the data for the purpose for which consent was taken in the first place. Furthermore, AI often involves complex, dynamic, and unpredictable data processing activities that may not be fully understood or anticipated by the data principals or data fiduciaries themselves.

Additionally, under the provisions of the DPDPA, companies are required to seek separate consent for specific purposes. As such, companies may need to seek separate, specific consent if the personal data of individuals is to be processed for AI purposes. Bundled or pre-ticked consent for multiple or unrelated purposes may not be permitted under the DPDPA. Obtaining such specific consent may prove to be a challenge.
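
To make this concrete, the sketch below shows one plausible way to gate processing on specific, unbundled consent. The record schema and purpose names are assumptions for illustration, not a format prescribed by the DPDPA:

```python
# Illustrative sketch only: gate processing on specific, unbundled consent.
# The record schema and purpose names are assumptions for illustration,
# not a format prescribed by the DPDPA.
from datetime import datetime, timezone

consents = {
    ("principal-123", "service_delivery"): {
        "granted": True, "at": datetime(2024, 1, 5, tzinfo=timezone.utc)},
    ("principal-123", "ai_model_training"): {
        "granted": False, "at": None},  # no bundled opt-in: needs separate consent
}

def may_process(principal_id: str, purpose: str) -> bool:
    """Allow processing only where consent exists for this specific purpose."""
    entry = consents.get((principal_id, purpose))
    return bool(entry and entry["granted"])

print(may_process("principal-123", "service_delivery"))   # True
print(may_process("principal-123", "ai_model_training"))  # False
```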

Data Security and Confidentiality

Another challenge that companies are likely to face in using personal data for AI is data security and confidentiality. Security and confidentiality are vital to the trust and confidence of data principals and data fiduciaries alike, as they determine how well personal data is protected and preserved.

They are also crucial for compliant and responsible personal data processing, as they bear directly on the prevention and mitigation of personal data breaches and incidents. Securing AI systems, however, is not always easy or effective, because AI often involves complex, distributed, and interconnected data processing activities. For example, AI may require large and diverse datasets that are not encrypted, anonymised, or pseudonymised; may involve processing that is not properly authorised, authenticated, or audited; or may generate outputs that are themselves neither encrypted, anonymised, nor pseudonymised.

The DPDPA requires companies to implement appropriate technical and organisational measures as well as reasonable security safeguards to protect personal data. To mitigate privacy risks, companies often rely on data anonymisation and pseudonymisation. While these techniques can help protect personal data, they can also hinder the effectiveness of AI systems. Anonymised data may lose the granular details that AI algorithms need to make accurate predictions, leading to a trade-off between privacy and performance.
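
As an illustrative sketch of the pseudonymisation technique mentioned above (the keyed-hash approach and the field names are assumptions for illustration, not a DPDPA prescription), direct identifiers can be replaced with a keyed hash so that records remain linkable for model training without exposing the identifier itself:

```python
# Illustrative sketch only: pseudonymise a direct identifier with a keyed
# hash (HMAC-SHA256) so records remain linkable for model training without
# exposing the raw identifier. Key management is out of scope here.
import hashlib
import hmac

SECRET_KEY = b"store-me-in-a-separate-key-vault"  # hypothetical key

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, keyed pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "a@example.com", "monthly_spend": 420.0}
training_row = {
    "principal_pid": pseudonymise(record["email"]),  # same email -> same pseudonym
    "monthly_spend": record["monthly_spend"],
}
print(training_row)
```

The key must be held separately from the pseudonymised dataset; anyone holding both can trivially re-identify the records, which is why pseudonymised data is generally still treated as personal data.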

Rights of Data Principals

The DPDPA grants individuals the right to access their data, request corrections, and demand the deletion of their data (the right to be forgotten). Implementing these rights in AI systems may be technically complex. Continuous learning AI models, which adapt based on new data, may face difficulties in accurately deleting or rectifying data without disrupting the model’s integrity. Inability to honour access and erasure requests can lead to legal non-compliance and significant operational challenges.

The DPDPA requires organisations to delete personal data once the purpose for which it was collected is no longer being served, or if the data principal withdraws consent. As such, setting short retention periods may impact the performance of AI and machine learning technologies, while setting longer retention periods may violate the provisions of the DPDPA.
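
A minimal sketch of a retention check follows; the retention period shown is a hypothetical policy choice by the fiduciary, not a figure taken from the DPDPA or its rules:

```python
# Illustrative sketch only: flag records for erasure once the purpose lapses
# or consent is withdrawn. The retention period is a hypothetical policy
# choice, not a figure taken from the DPDPA or its rules.
from datetime import datetime, timedelta, timezone

RETENTION = {"ai_model_training": timedelta(days=180)}  # hypothetical policy

def must_erase(collected_at: datetime, purpose: str,
               consent_withdrawn: bool, now: datetime | None = None) -> bool:
    """Erase if consent is withdrawn or the retention period has lapsed."""
    now = now or datetime.now(timezone.utc)
    return consent_withdrawn or (now - collected_at > RETENTION[purpose])

collected = datetime.now(timezone.utc) - timedelta(days=200)
print(must_erase(collected, "ai_model_training", consent_withdrawn=False))  # True
```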

Continuous Compliance and Adaptation

The regulatory landscape is likely to be dynamic, with evolving guidelines, jurisprudence, and emerging best practices. Organisations must stay abreast of changes in the personal data protection law and continuously adapt their AI systems to remain compliant. This requires ongoing investment in compliance tools, regular training for staff, and close collaboration with legal and regulatory experts to navigate the complex intersection of AI and the DPDPA.

Conclusion

Artificial intelligence and data privacy are two of the most important and influential topics in the contemporary world, and they present immense opportunities and challenges for companies that want to process personal data for AI purposes or applications. Companies need to strike a balance between the benefits of AI and the risks to data privacy, and adopt a proactive and responsible approach that ensures their AI systems comply with the personal data protection law and respect the rights and expectations of data principals.

The author is a Partner at Saraf and Partners. Views expressed are personal.
