Navigating Governance, Ethics, and Social Impact
Globally, artificial intelligence is moving from experimental pilot programs into everyday educational practice. AI-powered learning platforms adjust content to learner performance, and automated systems evaluate students’ work with predictive analytics, flagging early signs of academic struggle. A decade ago, few could have imagined how profoundly AI would change the way we learn.
Europe is trying to strike a balance between setting the rules for AI and encouraging innovation. The EU AI Act (2024) classifies educational AI as “high-risk”, requiring conformity assessments, transparency, and human oversight before deployment. The Digital Education Action Plan 2021–2027 outlines the European Commission’s vision for this transformation and sets ambitious targets for 2030: at least 80% of adults with basic digital skills, gigabit connectivity for all schools, and 20 million ICT (Information and Communication Technology) specialists across the EU. Supporting frameworks include the European Strategy for a Better Internet for Children (BIK+), which addresses age-appropriate design, and the European Declaration on Digital Rights and Principles, which establishes minimum expectations for human-centric technology development.
In addition, Horizon Europe, the EU’s research and innovation funding program, funds research on educational technology, examining its effectiveness, its failures, and the consequences of its use.
Hungary occupies an interesting position within this European ecosystem. The National Artificial Intelligence Strategy 2020–2030 identifies education as a priority, emphasizing digital literacy and workforce preparation. The government has invested in digital infrastructure through programs like the Digital Education Strategy to modernize teaching and expand access to technology.
Ethical and Societal Challenges
AI in education raises several ethical challenges.
Algorithmic bias is a major concern. The EU Fundamental Rights Agency has documented how systems trained on historical data absorb the patterns of discrimination embedded in it. An AI predicting student success might learn that students from certain neighborhoods or ethnic backgrounds are “less likely” to succeed, simply because past inequalities created those patterns. In Hungary, for example, where educational gaps persist between Roma and non-Roma learners and between regions, an AI trained on data from Budapest schools might perform unfairly when deployed in disadvantaged rural areas.
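To make this mechanism concrete, consider a minimal, hypothetical sketch (all data here is synthetic, and scikit-learn’s LogisticRegression stands in for whatever model a real platform might use): when historical “success” labels were shaped by unequal access to resources, a model trained on them scores two equally able students differently based on neighborhood alone.

```python
# Minimal, hypothetical sketch of how historical bias leaks into a
# predictive model. All data is synthetic; assumes numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# What we would *want* the model to use: a student's actual ability.
ability = rng.normal(0.0, 1.0, n)

# A proxy attribute: neighborhood group (0 = advantaged, 1 = disadvantaged).
group = rng.integers(0, 2, n)

# Historical labels: past "success" depended on ability *and* on unequal
# access to resources, so the inequality is baked into the training data.
logits = 1.5 * ability - 1.0 * group
past_success = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

X = np.column_stack([ability, group])
model = LogisticRegression().fit(X, past_success)

# Two students with identical ability but different neighborhoods:
same_ability = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_ability)[:, 1])
# The model predicts lower success for the disadvantaged student, purely
# because past inequality shaped the labels it learned from.
```

The point of the sketch is that no one programmed the bias in: the model simply reproduced the inequality present in its training data.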
Privacy becomes especially complex when the users are children. Although the General Data Protection Regulation strictly regulates children’s data, practical issues emerge: Can students give meaningful consent when schools require them to use specific AI platforms? Do parents truly understand the systems they are consenting to? How can algorithms be transparent when even teachers don’t fully understand how they work? The Hungarian National Authority for Data Protection and Freedom of Information (NAIH) provides guidance on educational data, but technology evolves faster than regulation.
Accountability is another challenge when responsibility is distributed among multiple actors. If an AI system incorrectly labels a student as “at risk of dropping out,” triggering interventions that stigmatize that student, who is responsible? The developer of the algorithm? The school administrator who used the system? The teacher who acted on its recommendation? Or the policymaker who mandated its adoption? AI systems often operate through complex chains of decision-making in which no single actor can be held fully accountable.
Digital divides risk widening educational inequalities rather than reducing them. Infrastructure disparities mean that AI-powered learning opportunities remain unevenly distributed: significant parts of the world, including parts of Europe, particularly rural areas, lack adequate internet connectivity. Indeed, UNESCO’s Global Education Monitoring Report found that only 40% of primary schools worldwide are connected to the internet. The gap between ambitious policies and everyday realities remains a governance challenge.
Governance, Regulation, and Policy
The EU AI Act classifies educational AI as high-risk, which means providers must conduct safety assessments, implement risk management, maintain documentation, ensure human oversight, and meet transparency standards before market entry. In theory, AI systems therefore reach schools already vetted for safety and ethics. Whether this works in practice depends on enforcement capacity and on whether regulators can keep pace with rapid technological change.
UNESCO’s Recommendation on the Ethics of Artificial Intelligence, applicable to all 194 member states, establishes principles rather than binding rules. It emphasizes participatory governance, cultural sensitivity, and respect for human dignity, offering policymakers reference points rather than enforcement power.
The OECD AI Principles, endorsed by 46 countries including Hungary, establish commitments to AI systems that are inclusive, sustainable, transparent, accountable, and rights-respecting.
The Council of Europe connects AI governance to human rights, particularly the right to education, framing these as fundamental rights that states must protect.
Hungary’s National AI Strategy 2020–2030, developed by a partnership of over 320 members from government, academia, and industry, aims for a 15% GDP increase from AI by 2030. The Strategy identifies education and digital literacy as priorities, with commitments to expand STEM education (Science, Technology, Engineering, and Mathematics), integrate computational thinking, and support teacher professional development in digital competencies.
However, implementation challenges persist. While Hungary has achieved above-EU-average broadband coverage (84.1% vs. 78.8%), the European Commission’s 2024 Digital Decade monitoring identifies significant gaps. Enterprise AI adoption stands at only 3.7%, less than half the EU average of 8%, and digital skills remain uneven. The Commission notes that “attention should remain high, to ensure that digital skills are spread across all the population”. The growth rate of ICT specialists in Hungary (+2.4% annually) lags behind the EU’s (+4.3%) and is therefore insufficient to meet the 2030 targets. The Commission concludes that “Hungary must improve its performance”, recommending accelerated efforts to bridge the digital gap through inclusion policies focused on vulnerable groups.
Why Schools Are Where AI Ethics Matters Most
Learners are still developing their capacities and identities. When AI systems make high-stakes decisions during these formative years (tracking students academically, recommending career paths, or flagging concerns), mistakes can shape futures and limit how young people see their own potential.
What the global discourse around AI in education still lacks is transparent pilot programs in which students and educators co-design AI tools rather than receive finished products. Such participatory design approaches align with the UN Convention on the Rights of the Child (Article 12, paragraph 1), which establishes children’s right to express their views on matters affecting them.
The danger is not just that AI might fail to improve education. It is that poorly governed AI could deepen the very inequalities it promises to solve, shaping young generations before they even have a chance to discover their own potential.
We can do better, and we must: imagine the possibilities if we get this right!
Photo: freepik.com