Establishing Accountable AI: Governance Essentials for the Philippines

The Philippines has become one of the largest users of generative Artificial Intelligence (AI) models worldwide. The country ranks sixth globally, with around 42.4% of Filipino internet users reporting that they had used ChatGPT as of September.

As AI adoption accelerates, government attention has increasingly shifted toward the development of regulatory frameworks to address AI-related risks. Recent legislative initiatives reflect this direction, including House Bill No. 7913, the Artificial Intelligence Regulation Act, which seeks to establish an “AI Bill of Rights” to protect individuals from the unsafe or unethical use of AI systems. Similarly, House Bill No. 7396, the Artificial Intelligence Development and Regulation Act of the Philippines, proposes the creation of an Artificial Intelligence Development Authority (AIDA) to oversee and coordinate national AI development and governance efforts.

While regulatory action is both timely and necessary, a growing concern remains: proposed AI regulation in the Philippines tends to focus on surface-level outcomes rather than addressing the underlying causes of risk. When regulation is rushed, restriction is prioritized over understanding, constraining innovation without effectively resolving the problems the regulation is meant to address.

Fast-Paced Regulation of Innovation

Regulating AI poses a unique challenge because it is not a single system or product. AI encompasses a wide range of tools, models, and use cases, each with distinct risk profiles and requirements. Despite this, regulatory discussions often treat AI as a uniform threat requiring broad control measures.

This approach reflects a recurring pattern in technology regulation: innovation moves faster than understanding, resulting in reactive rules. Without sufficient technical knowledge and public engagement, legislation risks targeting perceived surface-level dangers rather than real, evidence-based harms.

The Difference Between AI Risks and Systemic Failures

In most commonly cited AI-related concerns, the problem is caused not by the AI itself but by deeper governance and structural issues. Bias and discrimination illustrate this clearly.

Algorithmic bias is frequently attributed to AI systems, but in reality it usually originates from historically biased or incomplete datasets, weak data governance standards, and poor oversight of automated decision-making. AI systems therefore tend to reflect existing institutional and social biases. Regulating AI tools alone does not correct flawed data pipelines or the discriminatory practices embedded in organizations and processes.

DeepSeek AI offers an illustrative example. In a demonstration by technologist and data scientist Dr. Dominic Ligot, the model declined to answer prompts about the Nine-Dash Line dispute between China and the Philippines, reflecting political constraints tied to its Chinese origin. Repeated attempts produced the same refusal, whereas other models, such as Perplexity AI, answered the question directly.

Privacy and Workplace Risks That Predate AI

AI is often seen as a threat to data privacy. Yet misuse of data, weak cybersecurity, and inconsistent compliance existed long before AI became mainstream.

In the Philippines, the Data Privacy Act already provides a regulatory framework for the protection of personal information. The challenge lies in enforcement, compliance maturity, and organizational governance, not in the mere presence of AI-driven systems. AI amplifies data usage, but it does not replace the need for strong data protection fundamentals.

Another concern often cited to justify AI regulation is automation and job displacement. In reality, employment risks are driven primarily by skill mismatches, slow reskilling and upskilling initiatives, and education curricula not aligned with technological change. Restricting AI without first strengthening workforce development frameworks may reduce productivity and, in the long run, fail to protect workers.

The Toll of Overregulation

Extensive and ambiguous AI regulations pose significant challenges, particularly for AI startups and small to medium-sized enterprises (SMEs) in the Philippines. Unlike multinational corporations with extensive resources and legal teams, these smaller entities must navigate stringent compliance requirements with limited capital and personnel. The result is a disproportionate regulatory burden that diverts critical resources away from innovation, research, and development toward legal compliance and AI risk management. Uncertainty arising from vague accountability standards can also induce excessive caution among entrepreneurs, leading to delays or abandonment of promising AI projects for fear of non-compliance or liability. This dynamic tilts the competitive landscape in favor of larger firms that can more easily absorb regulatory costs.

To cultivate a vibrant and inclusive Philippine AI ecosystem, regulatory frameworks must strike a balance between meaningful safeguards and undue burdens on smaller players. Clear, targeted, and proportionate AI regulations can empower startups and SMEs to innovate with confidence, attract AI investments, and contribute significantly to the country's digital transformation and economic development.

UNESCO AI Readiness Assessment and Why Root-Cause-Oriented Regulation Matters

The UNESCO AI Readiness Assessment Report on the Philippines reinforces the view that the country's AI challenges are anchored primarily in governance and capacity-related issues rather than purely technological ones. The report also notes the absence of a dedicated lead agency for AI governance and highlights the need to leverage existing legal frameworks, such as the Data Privacy Act and the Cybercrime Prevention Act, to address regulatory challenges in data protection, cybersecurity, and cross-border data flows.

UNESCO also noted structural constraints affecting the adoption of responsible AI in the Philippines, including limited broadband access, uneven digital skills, and mixed public trust in AI, particularly regarding transparency, explainability, job displacement, and data misuse. Despite strong private-sector adoption and projections placing the Philippine AI market at approximately US$3.49 billion by 2030, the report cautions that institutional capacity, domestic innovation, and ethical contextualization must be strengthened to sustain growth.

Globally, best practices increasingly emphasize a risk-based, use-case-driven approach to AI governance. Under this framework, regulation focuses on how AI is deployed, the level of inherent risk, accountability for decision-making, and whether existing laws already address potential harm. This allows governance to move beyond treating AI itself as the problem and instead concentrate on the conditions under which risk actually arises.

Taken together, these findings point to a clear policy direction: effective AI regulation should target the real sources of risk, namely poor governance, weak enforcement of existing statutes, and institutional capacity constraints. By strengthening these foundations, the Philippines can ensure the responsible development and deployment of AI across sectors without unnecessarily constraining innovation.

Conclusion

Proposed AI regulation in the Philippines must move beyond surface-level solutions. To be truly effective, it should address underlying issues such as data governance failures, weak accountability structures, persistent skills gaps, and algorithmic biases. Without tackling these root causes, regulation risks becoming ineffective or even counterproductive, rather than serving the public interest. A balanced, informed, and risk-based approach is essential for the Philippines to use AI for sustainable, inclusive, and competitive growth, particularly within the business sector.

For organizations navigating AI in compliance, tax, and accounting environments, Babylon2k offers BETH AI, a purpose-built solution that supports regulatory alignment, operational efficiency, and responsible technology adoption. If your organization faces complex or specialized AI-related concerns, connect with us to explore tailored advisory and professional support.
