Storypark AI is launching next week. We are excited to harness the strengths of artificial intelligence (AI) while providing a set of considered, safe tools specifically designed for ECE.

In creating a set of principles to guide any AI tools we build, a commitment to protecting children’s data, alongside openness and honesty, was always going to be important. These values align with the considered approach Storypark has taken since we began back in 2011.

How the information educators put into an AI tool is used and stored should not be opaque. Nor should it be left solely to educators to ascertain what data and privacy issues could arise as they use AI tools. We believe we can also openly acknowledge the limitations and challenges we see regarding AI now and in the future.

While it’s clear that AI carries many of the same risks as the other software and technology early childhood educators already use, AI in particular can still seem like a bit of a wild west. ECE educators are right to question how they can continue to protect children’s privacy if they choose to use AI tools. Recently, Storypark’s CEO Jamie MacDonald acknowledged, “I know that some of the concern around the use of AI comes from not being able to easily pop the hood and see how it works.”

We’ve given robust thought to the best way for children’s information to move through our AI tools and how to make this clear and transparent for educators and teams. We are also committed to sharing and educating about what responsible AI use looks like. This approach prioritises transparency, privacy, and the autonomy of early childhood professionals. The best security posture is a partnership of secure, up-to-date tools alongside informed, security conscious users.

So how does Storypark AI uniquely help educators safeguard the data of children in their care?

We align with the NIST Cybersecurity Framework (CSF) 2.0, ensuring a proactive, industry-standard approach to risk management, data protection, and secure AI practices. There are also some simpler, more tangible things to see once the hood of Storypark AI is popped:

>Designed for early childhood use
Risk around safeguarding children’s data does vary depending on the AI tool educators use and how they use it. Notably though, “the most common AI platforms are created by big technology companies and were not designed specifically for the use in education. These include ChatGPT from OpenAI and Google’s Gemini.” In creating Storypark AI, we gave careful and robust consideration to the kinds of data early childhood educators typically share and create. Although much of this is protected by the already high privacy standards set out in our Privacy Policy, there was also room to understand how we interact with the large language models our AI tools use, and to be informed by the expectations of the educators and leaders we work with.

>No child or educator data used to train models
Large language models (LLMs) are trained on vast amounts of text data. This data helps the AI ‘learn’ language patterns, grammar, facts, and other aspects of human communication. As a safeguard against models accidentally storing or learning from sensitive data, it is very important to us that neither child nor educator data from Storypark is involved in this process.

>Data is quickly processed and deleted
Data isn’t sent to AI partners at random: it is sent only when educators engage with the tools, and it is retained only for as long as necessary to complete the query. To identify and prevent misuse, partners may temporarily store data for up to 30 days before deletion, unless a longer retention period is required by law. Storypark collects metadata internally to understand feature usage and improve functionality (e.g. how frequently an AI feature is accessed). This metadata is not shared with our AI partners.

>Easy for all educators to understand and use
It’s not ‘AI for dummies’ exactly, but we believe the way data moves through Storypark AI and how the tools work should be easy to explain and understand for those who use them. You won’t see complicated acronyms or concepts used to explain the tools, and we’ve created a fact sheet with more detail and transparency for those who are interested.

>Storypark AI is opt-in
It is important to us that artificial intelligence tools are opt-in, meaning educators and leaders can take the time to make a considered, informed decision about their use. In a busy environment where children are at the heart of everything, implementation and use needn’t be rushed – we’ve also developed a free guide for services approaching the adoption of AI tools, helping them think through change and the best ways to support diverse teaching teams. Opt-in also means that if the tools aren’t quite right for a particular setting, that’s totally okay; educators can keep using Storypark as they regularly do.


Ready to explore AI as part of your own practice? 

We’d love to see you at the launch of Storypark AI, where you can see the tools for yourself and gain best-practice tips for safe, successful implementation.


Posted by Bernadette

Bernadette is part of the Storypark team. One of her earliest memories at kindergarten is declaring to the class that reading was too hard so she wasn't going to learn - whoops! She really enjoys helping educators and families get the most out of Storypark.


Try Storypark for free and improve family engagement with children’s learning

