Navigating Governance, Ethics, and Social Impact
Globally, artificial intelligence is moving from experimental pilot programs to everyday educational practice. AI-powered learning platforms now adjust content based on learner performance, and automated systems evaluate students' work using predictive analytics and can flag students showing early signs of academic struggle. A decade ago, it was hard to imagine how profoundly AI would change the way we learn.
Europe is trying to strike a balance between setting the rules for AI and encouraging innovation. The EU AI Act (2024) classifies educational AI as “high-risk”, requiring conformity assessments, transparency, and human oversight before use.
The Digital Education Action Plan 2021–2027 outlines the European Commission’s vision for this transformation and sets ambitious targets for 2030: at least 80% of adults with basic digital skills, gigabit connectivity for all schools, and 20 million ICT (Information and Communication Technology) specialists across the EU. Supporting frameworks include the European Strategy for a Better Internet for Children (BIK+), which promotes age-appropriate design, and the European Declaration on Digital Rights and Principles, which establishes minimum expectations for human-centric technology development. In addition, Horizon Europe, the EU’s research and innovation funding programme, invests in educational technology research and in projects examining effectiveness, failures, and the consequences of use.
Hungary occupies an interesting position within this European ecosystem. The National Artificial Intelligence Strategy 2020–2030 identified education as a priority, emphasizing digital literacy and workforce preparation. In 2025, following the appointment of a dedicated government commissioner for artificial intelligence, Hungary adopted its Renewed Artificial Intelligence Strategy 2025–2030, which maintains education and competence development as a core pillar under the vision “Középpontban az emberi képességek” (Human Capabilities at the Centre). The renewed strategy restructures priorities into six pillars: regulation and security; infrastructure; education and competence development; data economy; research, development and innovation; and encouraging AI applications. Its three focus areas are AI for Technology, AI for Society, and AI for Business.
Ethical and Societal Challenges
AI in education raises several ethical challenges.
Algorithmic bias is a major concern. The EU Fundamental Rights Agency has documented how systems learn from historical data, including historical patterns of discrimination. An AI predicting student success might learn that students from certain neighborhoods or ethnic backgrounds are “less likely” to succeed, simply because past inequalities created those patterns. In Hungary and elsewhere, where educational gaps persist, for example between Roma and non-Roma learners or between regions, an AI trained on data from Budapest schools might perform unfairly when used in disadvantaged rural areas.
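The mechanics of this kind of bias can be sketched in a few lines of Python. This is a toy simulation, not any real system: the district labels, ability scores, and the fixed historical-disadvantage term are all invented for illustration, and the “model” is just per-district success rates standing in for any predictor that picks up district as a signal.

```python
import random

random.seed(0)

# Synthetic historical records: (district, ability, succeeded).
# Assumed past inequality: students in district "B" succeeded less often
# at the SAME ability level (e.g. under-resourced schools).
def make_record():
    district = random.choice(["A", "B"])
    ability = random.random()                    # true ability, 0..1
    disadvantage = 0.0 if district == "A" else -0.3
    noise = random.uniform(-0.1, 0.1)
    succeeded = (ability + disadvantage + noise) > 0.4
    return district, ability, succeeded

history = [make_record() for _ in range(10_000)]

# A naive predictor that simply learns the historical success rate
# per district — a stand-in for any model using district as a feature.
def predicted_success(district):
    outcomes = [s for d, _, s in history if d == district]
    return sum(outcomes) / len(outcomes)

rate_a = predicted_success("A")
rate_b = predicted_success("B")
print(f"predicted success, district A: {rate_a:.2f}")
print(f"predicted success, district B: {rate_b:.2f}")
# The model reproduces the historical gap: equally able students from
# district B systematically receive lower predictions.
```

The point of the sketch is that nothing in the code is malicious; the gap in predictions emerges purely because the training data encodes past inequality.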
Privacy becomes especially complex when the users are children. Although the General Data Protection Regulation strictly regulates children’s data, practical issues emerge: Can students give meaningful consent when schools require them to use specific AI platforms? Do parents truly understand the systems they are consenting to? How can algorithms be transparent when even teachers don’t fully understand how they work? The Hungarian National Authority for Data Protection and Freedom of Information (NAIH) provides guidance on educational data, but technology evolves faster than regulation.
Accountability is another challenge when responsibility is distributed among multiple actors. If an AI system incorrectly labels a student as “at risk of dropping out”, triggering interventions that stigmatize that student, who is responsible? The developer of the algorithm? The school administrator who deployed the system? The teacher who acted on its recommendation? Or the policymaker who mandated its adoption? AI systems often operate through complex chains of decision-making where no single actor can be held fully accountable.
Digital divides risk widening educational inequalities rather than reducing them. Infrastructure disparities mean that AI learning opportunities remain unevenly distributed. A significant part of the world, including parts of Europe, particularly rural areas, lacks adequate internet connectivity. Indeed, UNESCO’s Global Education Monitoring Report found that only 40% of primary schools are connected to the internet. The gap between ambitious policies and everyday realities remains a governance challenge.
Governance, Regulation, and Policy
The EU AI Act classifies educational AI as high-risk, which means providers must conduct safety assessments, implement risk management, maintain documentation, ensure human oversight, and meet transparency standards before market entry. In theory, AI systems reach schools already vetted for safety and ethics. Whether this works in practice depends on enforcement capacity and on whether regulators can keep pace with rapid technological change.
UNESCO’s Recommendation on the Ethics of Artificial Intelligence, applicable to all 194 member states, establishes principles rather than binding rules. It emphasizes participatory governance, cultural sensitivity, and respect for human dignity, offering reference points for policymakers rather than carrying any enforcement power.
The OECD AI Principles, endorsed by 46 countries including Hungary, establish commitments to AI systems that are inclusive, sustainable, transparent, accountable, and rights-respecting.
The Council of Europe connects AI governance to human rights, particularly the right to education, framing AI’s effects as implicating fundamental rights that states must protect.
Hungary’s renewed strategy includes specific mechanisms for algorithmic bias detection. The strategy establishes requirements for the registration, documentation, disclosure and human oversight of high-risk AI systems, and mandates the detection of algorithmic bias through continuous audits and independent checks. Furthermore, following an amendment to the Act on Higher Education, Hungarian institutions were required by 1 September 2025 to review their study and examination rules regarding AI use, ensuring that AI tools and technologies are integrated in a responsible and well-regulated manner.
The renewed strategy establishes specific education targets for 2030. The strategy commits to training 8,000 citizens in adult education programs, particularly in data-intensive sectors including health, agriculture, and manufacturing. Additionally, the strategy targets 300 PhD students conducting AI-related research and anticipates that 2.5 million citizens will participate in AI-supported education by 2030.
However, gaps persist between Hungary’s strategic ambitions and implementation capacity. While the country achieved above-EU-average connectivity in very high-capacity networks (86% versus the EU average of 82.5%), the European Commission’s 2025 Digital Decade monitoring report reveals that enterprise AI adoption stands at only 3.7%, less than half the EU average of 8%. This gap indicates that infrastructure investment alone has not translated into widespread AI deployment. Digital skills remain unevenly distributed: the growth rate of Information and Communication Technology (ICT) specialists in Hungary (+2.4% annually) significantly lags the EU average (+4.3%), placing the country at risk of missing its 2030 training and development targets. The European Commission’s assessment notes that “attention should remain high, to ensure that digital skills are spread across all the population”, recommending accelerated efforts to bridge the digital gap.
The adoption of the Renewed AI Strategy in September 2025 (five years into the 2020–2030 strategy) reflects a fundamental challenge for education governance: artificial intelligence technologies evolve faster than policy and institutional frameworks can accommodate. The strategy itself acknowledges this by mandating annual reviews, transparent assessment of implementation progress, and continuous adjustment to keep pace with rapid technological and regulatory change.
Why Schools Are Where AI Ethics Matters Most
Learners are still developing their capacities and identities. When AI systems make high-stakes decisions during these formative years (tracking students academically, recommending career paths, or flagging concerns), mistakes can shape futures and limit how young people see their own potential.
Global discourse around AI in education lacks transparent pilot programs where students and educators co-design AI tools rather than receiving pre-built products. Participatory design approaches align with the UN Convention on the Rights of the Child (Article 12, paragraph 1), which establishes children’s right to express their views on matters affecting them.
The danger is not just that AI might fail to improve education. It is that poorly governed AI could deepen the very inequalities it promises to solve, shaping young generations before they even have a chance to discover their own potential.
This is why education policymakers and scholars must recognize that AI governance in education is not a fixed endpoint but an ongoing process of alignment between emerging technologies, documented risks and benefits, and institutional implementation capacity.