
The week began with a simple, unassailable truth returning to headlines: teachers are the key to students’ AI literacy—and they need support to do the job well [1]. In a season of breathless product launches and policy whiplash, that reminder is less a slogan than a societal checkpoint. If we want classrooms to be where democratic competence with intelligent tools is cultivated rather than corroded, we must equip the educators who steward them. Everything else is vapor.
Breakthrough technologies have a habit of outpacing public understanding, not because people are incapable but because the bandwidth of civic learning is finite and the cadence of innovation is not. Schools are where we attempt to synchronize that cadence with human development, and teachers are the metronomes of that delicate rhythm. The Conversation Africa’s framing—that educators are central to AI literacy and require structured support—lands precisely because it shifts the question from gadgets to guardianship [1]. Without investment in teacher capacity, AI in classrooms is not progress; it is abdication disguised as innovation.
Not everyone agrees on the direction of travel. In Dublin, a group of lecturers publicly argued it is their responsibility to resist AI in higher education, voicing concerns about academic integrity, the erosion of critical skills, and the creeping normalization of machine mediation [2]. Their stance is not a Luddite tantrum but a moral signal: an unchecked rush can deform the very purposes education serves. Taken with the call to empower teachers, the message is consistent—professional judgment must lead, not follow, when machines enter the learning space [1][2].
The tension between “adopt” and “resist” is not a binary; it is a demand for agency. Agency starts with accessible pathways into computational thinking that do not worsen inequality. Recent research on “unplugged” and gamified coding tools shows that children can begin learning core programming concepts without computers, using thoughtfully designed activities [3]. That matters for communities where devices are scarce, bandwidth is patchy, or funding lags policy. If AI literacy is to be a public good, it must be teachable with low‑cost, low‑infrastructure methods alongside high‑tech platforms [3]. Otherwise we replicate the digital divide under a new acronym.
We also need to care for the carers. During the pandemic, a randomized clinical trial found that an adaptive simulation intervention reduced the physiologic stress emergency physicians experienced while caring for COVID‑19 patients [4]. Classrooms are not emergency rooms, but the lesson travels: well‑designed, context‑aware simulations can help professionals practice under pressure, learn from mistakes, and steady their nervous systems before the stakes rise. Imagine professional development that uses adaptive simulations to let teachers rehearse AI‑supported lesson planning, academic integrity dilemmas, or bias detection in educational tools, building calm competence rather than panic [4]. The point is not to gamify ethics; it is to give teachers a safe place to practice it.
We should not underestimate ambient stress among students either. A longitudinal qualitative study documented how a highly stressful global event affected health sciences students, shaping their experiences and coping over time [5]. Introducing powerful, poorly explained technologies into already strained learning environments risks compounding anxiety and disengagement. Responsible rollout therefore means pacing, transparency, and attention to student well‑being. The emotional climate of a classroom is not a soft variable; it is the medium in which literacy takes root or withers [5].
Public education also needs public education. Evidence from a study of breast cancer awareness campaigns shows that targeted outreach can shift knowledge, attitudes, and practices among employees [6]. We should apply that playbook to AI literacy: sustained, culturally attuned campaigns for teachers, students, and families that clarify what AI is and isn’t, model safe and creative uses, and explicitly address risks like bias, privacy, and over‑reliance [6]. Awareness isn’t a glossy poster; it’s scaffolding that enables healthy habits at scale.
Trust is the currency of adoption. Research on perceived value in tourism during crises offers a parallel lesson: in uncertain conditions, people’s sense of value hinges on how well institutions communicate, mitigate risk, and meet evolving expectations [7]. Schools operate under a similar trust calculus. When leaders introduce AI with opaque contracts, rushed timelines, or consultant‑speak, perceived value collapses; when they involve teachers early, pilot transparently, and share evidence of learning gains and guardrails, value becomes legible [7]. Perception is not mere optics; it is a rational proxy for lived experience.
So how do we move from slogans to systems? First, put contractual muscle behind support: timetabled hours for teacher training, stipends for mentoring, and protected planning periods devoted to AI‑integrated pedagogy, not just one‑off workshops. Second, co‑design norms with teachers and students: clear use cases, disclosure expectations when AI assists, and assessment practices that elevate process over product. Third, require transparency from vendors: audit trails, bias testing reports, data‑handling disclosures, and the ability to turn features off. Fourth, protect offline equity: continue to develop unplugged AI and coding activities so that curiosity and competence don’t depend on device counts [3]. Fifth, build simulation‑based training for the hard parts, such as cheating investigations, hallucination triage, and bias debriefs, so that educators rehearse judgment before it’s needed [4]. Sixth, run awareness campaigns that reach families in the languages and media they use, pairing optimism with concrete safety practices [6]. Finally, evaluate for learning, not novelty: publish what works, retire what doesn’t, and refuse to let procurement cycles set the pace of pedagogy [1].
There remains a philosophical wager beneath all this policy: whether we see AI as an occasion to outsource our humanity or to reorganize it. The Irish call to resist reminds us that refusal is sometimes a form of care, guarding attention, craft, and academic integrity against dilution [2]. The cases for unplugged learning and adaptive training show that humane design can widen the circle of participation and lower the temperature of change [3][4]. And the evidence on campaigns and perceived value says culture can be shaped, not merely endured [6][7].
Technology will not slow down for our comfort, but we can slow down enough to teach it well. If we get this right, classrooms can become places where machine intelligence expands human judgment rather than replaces it. Teachers, supported and trusted, can translate the raw power of new tools into literacies that belong to every neighborhood, not just the well‑resourced ones [1][3]. Students can learn to ask better questions of their algorithms and of themselves, with room to make mistakes and the resilience to recover [4][5]. And communities, informed by sustained campaigns and transparent leadership, can see the value of AI as something earned through ethical practice, not imposed by hype [6][7]. That is a future in which every generation has a dignified place: elders sharing wisdom about consequences, teachers orchestrating humane norms, and young people shaping systems that deserve their brilliance.
Sources
- [1] Teachers are key to students’ AI literacy, and need support (The Conversation Africa, 2025-09-01)
- [2] Opinion: We are lecturers in Trinity College Dublin. It is our responsibility to resist AI (The Irish Times, 2025-09-04)
- [3] Start learning coding without computers? A case study on children’s unplugged gamified coding education tool with explanatory sequential mixed method (Plos.org, 2025-09-03)
- [4] An adaptive simulation intervention decreases emergency physician physiologic stress while caring for patients during COVID-19: A randomized clinical trial (Plos.org, 2025-09-03)
- [5] Highly stressful global event affecting health sciences students: A longitudinal qualitative study (Plos.org, 2025-09-05)
- [6] The impact of breast cancer awareness campaigns on the knowledge, attitudes, and practices of breast cancer screening among Saudi female employees (Plos.org, 2025-09-05)
- [7] Understanding perceived value in tourism: Insights from destinations facing crises (Plos.org, 2025-09-02)