Linda Mannila
Artificial Intelligence in Schools
In her lecture, Linda discusses artificial intelligence in education from two important perspectives: on the one hand, learning WITH AI, and on the other, learning ABOUT AI. AI itself is not new, but generative AI tools such as ChatGPT have transformed the landscape. Linda discusses AI literacy and presents three different lessons that can be used with students to teach them what AI is, how it works, and what it should do.
“AI in education should not focus only on using tools, but also on building the understanding needed to navigate an AI-driven world.”
— Linda Mannila
Questions and answers from Slido
Q1: How do we teach students to use AI as a learning partner, without abusing it or relying on it 100%?
This is a million-dollar question, which still lacks clear answers. I assume the question is about generative AI (e.g., chatbots), and I will answer based on that, although AI, naturally, is much broader. I think teaching students to use generative AI responsibly begins with setting clear guidelines about why, when, and how the tools can and cannot be used. For instance, the AI Assessment Scale by Leon Furze provides a practical tool for guiding AI use, ranging from 0 to 100% AI. I also believe modelling good practices is key: educators should demonstrate ethical uses like fact-checking and brainstorming, while cautioning against simply copying AI-generated text. Encouraging critical reflection helps students consider where AI added value and where their own input mattered more. The overarching goal should be to see AI as a supportive tool: a copilot rather than an autopilot.
Q2: Is there a risk that excessive use of AI in education will reduce students' independent thinking?
Yes, there is indeed a risk that heavy reliance on AI could undermine independent thinking as we outsource more and more tasks. This, of course, holds for everyone, not only students. If we increasingly skip the struggle that deep learning often requires, we may lose resilience in problem-solving and reduce our sense of ownership of our work. Over time, we may grow complacent, accepting AI outputs without questioning them and thus diminishing vital critical-thinking and evaluative skills. On the other hand, the calculator did make us worse at calculating in our heads, but it has instead helped us do more complicated calculations. AI tools will probably result in similar trade-offs. The tricky thing is that we cannot know today which skills we might lose in the future from outsourcing tasks now.
Q3: How do we ensure that students benefit from the use of AI without becoming overly dependent on it for other/further or even all project work?
To prevent overdependence, I believe we need to rethink how we teach and assess learning. If a task can easily be completed by a chatbot, maybe the task itself is not that good. We also need to focus more on the process than the final result, and design activities that show students' learning and progress over time. Designing good tasks naturally requires time and creativity, and we cannot create 100% AI-proof tasks. But by using more motivating and "human-focused" activities, we can hopefully show students why they should put effort into doing them themselves. At the same time, we need to remember that students need to learn how to use AI tools responsibly and creatively, so the aim cannot be to create AI-free learning. Balance should be the goal: using AI to support teaching and learning when suitable, while keeping learners actively engaged.
Who is Linda Mannila?
Linda Mannila has worked for over 20 years at the intersection of technology, learning, and societal change as a researcher, entrepreneur, and teacher. She is currently an associate professor of computer science at the University of Helsinki and an adjunct professor of computer science education at Linköping University in Sweden.