A 2023 report on AI’s impact on the future of teaching and learning advised educational leaders “to avoid romancing the magic of AI”.[1] Easier said than done, though: novel and exciting technologies have often made us feel this way, and AI is no different. Yet we also sense that this feeling interferes with our ability to critically assess them. The report also notes, “the romance of technology can lead to a ‘let’s see what the tech can do’ attitude, which can weaken the focus on [our] goals and cause us to adopt models that fit our priorities poorly”.[2] It can also cause us to misunderstand a product’s true capabilities and to minimise the complexity behind the scenes of AI.
Keeping sight of those goals shouldn’t be a tall order for educators and leaders who want an approach to AI that realises valuable opportunities while also supporting their practice and safeguarding children’s information.
Enter the power of ‘cautious curiosity’
Cautious curiosity is a balance between being open to exploration and new ideas and maintaining appropriate boundaries, scepticism and critical thinking – held intentionally, it’s a mindset that offers the best of both worlds.
Cautious curiosity parallels how educators support and model children’s exploration day to day – letting children explore new materials, environments, or social situations while maintaining appropriate boundaries. Bringing that same mindset to your own professional practice with AI isn’t such a leap.
Considering the sector-specific stakes: Child protection and educator needs
Not all AI use involves handling personal or confidential information. However, the ECEC sector has unique considerations around consent, child protection, and developmental appropriateness that make cautious curiosity not just wise, but essential.
Another sector-specific stake worth considering is the needs and priorities of those actually using and engaging with AI tools: educators. Managers and leaders express concern about tools that (unintentionally) encourage cognitive offloading or ‘do the work on behalf of educators’; the flipside is effective, sector-specific AI tools that “make intentional choices about when to scaffold and when to push”.[3] Telling the difference between the array of AI tools available to educators today isn’t hugely difficult, but it can be overwhelming, and it does take curiosity.
What cautious curiosity looks like in practice
Developing the following practices will help you understand how AI tools work and use them in ECEC settings with confidence, clarity and healthy caution:
Understanding your role in data governance: This is the system of rules, responsibilities, and processes that controls who can access, use, and manage data, and how they can do it. In ECEC settings, this can be seen in clear policies about what child data can be used when interacting with AI features, who approves and/or reviews that use, and how you ensure families’ privacy is protected.
Questioning where information goes: Caution is healthy when you think about information processing. As AI tools process the information you enter, you’ll want to understand, even in the simplest terms, a bit about how it moves and how it’s used. The Storypark AI fact sheet, for example, details what data is used, how long data is retained for AI processing, and whether the data is used for training (it’s not). Anyone using AI tools can and should question where their information goes.
Checking AI outputs against pedagogical knowledge: AI tools have limitations. They may sometimes generate inaccurate or incomplete information, and they do not have access to all the information that educators do. Storypark will always recommend reviewing AI-generated suggestions and content to ensure accuracy and relevance. Engaging and trusting your professional intuition and insight is essential whenever you interact with AI tools, and particularly when you review their outputs.
Maintaining professional judgement rather than outsourcing decisions: The Artificial Intelligence and the Future of Teaching and Learning report uses a surprisingly simple but effective metaphor for how educators can interact with AI tools without letting go of professional judgement. “We envision a technology-enhanced future more like an electric bike and less like robot vacuums. On an electric bike, the human is fully aware and fully in control, but their burden is less… Robot vacuums do their job, freeing the human from involvement or oversight.”[4] As with riding a bike, educators can choose when to start, when to stop and where to go with AI tools.
Staying open to genuine improvements: The ‘curiosity’ part of cautious curiosity means remaining open to, and aware of, where AI tools offer specific benefits and real innovation. You can take your time to explore and assess new technologies thoroughly before committing to them. Not every AI tool will meet your standards or fit your goals, even if it shows exciting initial potential, but you’ll want to stay open to genuine improvements and benefits.
In short, being concerned about data safety, consent and the long-term consequences of ‘efficiency gains’ doesn’t mean saying no to innovation. A mindset of cautious curiosity suits ECEC because it lets you be considered in your thinking about which innovations and tools best align with your priorities, values, and responsibilities to children and families.
Questions about AI tool use you may like to consider and/or discuss with your team
How important is it to us that the AI tool(s) we use are specifically designed for the ECEC sector? Why or why not?
How important is it to us that the AI tool(s) we use have been created alongside educators?
Are there regular opportunities to share feedback on the tool’s usefulness and suitability?
Do we swing more towards caution or curiosity when thinking about AI?
Where do I/we need to be more cautious or more curious?
If you’re already using AI in your ECEC setting but don’t have a policy that addresses everyone’s use, consider discussing with your team the importance of creating a specific AI policy.
References
1. Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations
2. Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations
3. Built for Learning: Lessons From Emerging Artificial Intelligence Solutions and Approaches
4. Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations
5. (Un)making AI Magic: A Design Taxonomy
Ready to explore AI as part of your own practice?
Take a look at Storypark AI and see the difference considered AI tools make in supporting all educators and enhancing quality practice, whilst also safeguarding children’s information.

