Recently, OpenAI CEO Sam Altman remarked that conversations with ChatGPT might need to be "produced" and handed over if authorities demanded them. The statement renewed discussion of one of today's most pressing questions: how confidential are our interactions with AI systems?
Altman's remarks highlight a reality often underestimated by everyday users and even businesses. While AI systems are becoming more capable and helpful in various ways, they are not legally covered by professional secrecy. The data shared through these tools, from casual conversations to personal thoughts to business insights, can land in dangerous territory if not adequately safeguarded.
This raises a critical question: How does Philippine law, specifically the Data Privacy Act of 2012 (DPA), protect users and businesses in an AI-driven world?
The Rise of AI and the Data Privacy Gap
AI has changed how organizations collect, analyze, and act on information from their users. From automating client communications to generating financial insights and forecasts, tools like ChatGPT and other large language models, such as Claude and DeepSeek, have become integrated into our daily workflows.
Yet many users still treat AI as a "digital confidant" or even a "digital therapist". They share sensitive information assuming that their privacy is safe, when in reality the interaction may not be protected at all.
The problem lies in the lack of a legal framework for confidentiality. Unlike conversations with a lawyer, doctor, accountant, or therapist, interactions with AI systems do not fall under privileged communication. When data passes through cloud servers or is used to improve AI models, it enters an area where control, accountability, and consent become harder to trace and enforce.
The Data Privacy Act and the New AI Guidelines
Fortunately, the Philippine Data Privacy Act (Republic Act No. 10173) stands as a strong framework for protecting personal data. On December 19, 2024, the National Privacy Commission (NPC) released Advisory No. 2024-04, which explicitly applies the DPA to AI systems processing personal data and addresses the critical factors involved in managing that data.
The key requirements set out in the guidelines are:
- Transparency: Companies and establishments using AI must explain the system's purpose, inputs, outputs, and potential risks.
- Accountability: Data controllers remain responsible for privacy compliance, even when using third-party AI providers. Demonstrable measures must include effective policies, procedures, and governance mechanisms that ensure the ethical processing of personal data.
- Fairness and Accuracy: AI systems must process data proportionately and accurately, avoiding manipulative or oppressive processing and biased results.
- Data Subject Rights: Individuals retain the rights to access, rectify, object to, or erase their personal data, even when it is used for AI training or testing.
Simply put, Philippine organizations can no longer treat AI as an ambiguous space for collecting vast amounts of data. If AI tools handle personal data, the businesses and establishments using them must ensure compliance with DPA standards and guidelines.
Why It Matters for Businesses
For companies adopting AI, whether for chatbots, analytics, or insights, privacy governance is now a strategic obligation. Here is what that means in practice:
- Map your data flows – Identify what personal data your AI tools collect, how it is processed, and whether user consent was obtained.
- Update your privacy notices – Be transparent whenever AI systems are involved in decision-making or in communication with clients.
- Establish AI accountability policies – Set internal controls for human error, bias detection, and data minimization.
- Build trust with users – Users now expect privacy and ethical handling of their data as a baseline, not merely as compliance. Clear communication and strong security build confidence.
- Prepare for future regulation – Bills like the proposed Artificial Intelligence Development and Regulation Act (H.B. 7396) signal that stricter AI regulation is coming. Early compliance and structuring will pay off in the near future.
The Bigger Picture
The tension between innovation and privacy is not new, and AI amplifies it. The more we rely on intelligent systems to process our information, the more urgent it becomes to clarify where privacy ends and accountability begins.
The Data Privacy Act gives Philippine organizations a framework that balances privacy and security. With the NPC's new AI advisory, businesses have a clear mandate: embrace innovation without compromising data protection.
In this view, compliance is not just a legal safety net; it is a competitive advantage. Companies that can deliver both intelligence and integrity will stand out in the digital economy.
Leading the Way to Innovative AI
At Babylon2k, we believe AI innovation must move hand in hand with data privacy, transparency, and accountability. That is why we built BETH AI, our proprietary AI assistant designed specifically for accounting and business workflows.
Book a demo or consultation today and see how BETH AI can help your firm work smarter without compromising data privacy.
Disclaimer: Content developed with support and insights from our in-house counsel.