The Opportunities and Drawbacks of AI-Powered Reading Coaches, Assistants and Tutors


The edtech market is saturated with tools designed to improve children’s literacy, from e-readers to apps to digital libraries. Over the past few years, more of these tools have been using generative AI, either to accelerate children’s reading proficiency or to stimulate more interest in reading.

Recently, a new kind of tool has emerged. Referred to as AI-powered reading coaches, assistants or tutors, these tools use generative AI to provide learners with personalized reading practice, stories, feedback and support.

Some of these tools focus on a specific learning objective, such as phonics instruction, or on a thematic area within a story. Others incorporate personal data like the child’s name and offer options for choosing settings and avatars, providing unique narratives for each child.

As a professor of reading and children’s development, specializing in children’s digital tools, I’ve researched what works and what doesn’t when it comes to coaching children to read. And by collaborating on research with colleagues through WiKIT, an international research organization focused on edtech evidence, I’ve reviewed multiple tools using generative AI to teach children to read. I have seen that many have the potential to bring learning breakthroughs, for example, by offering personalized fluency practice or feedback tailored to each user. But there are very real concerns about the impact of these tools on children’s literary and literacy experiences.

Potential Opportunities and Drawbacks

Depending on the tool, these AI-powered reading coaches, assistants and tutors include a variety of elements to support children with literacy. Common features include using speech recognition to listen to a child read aloud and then using AI to select interventions or feedback from a pre-built bank, and using AI to generate narrative texts for children to read or to create distinct prompts tailored to the child’s ability. And like many edtech tools, they commonly use reward systems, such as letting learners collect badges or prizes as they progress. Each of these elements comes with its own set of opportunities and drawbacks.
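To make that read-aloud feedback loop concrete, here is a minimal, hypothetical sketch of how such a feature might work: a speech-recognition transcript of the child’s oral reading is compared against the target passage, and a scripted intervention is chosen from a small bank based on accuracy. The function names, the feedback bank and the thresholds are illustrative assumptions, not the design of any particular product.

```python
# Hypothetical sketch of a read-aloud feedback loop.
# Names, thresholds and messages are illustrative assumptions only.

FEEDBACK_BANK = {
    "high_accuracy": "Great reading! Let's try a slightly harder passage.",
    "minor_slips": "Nice work. Let's reread the words you stumbled on.",
    "struggling": "Let's slow down and sound out this sentence together.",
}

def score_oral_reading(target_words: list[str], heard_words: list[str]) -> float:
    """Return the fraction of target words the child read correctly,
    using a simple word-match as a stand-in for the alignment a real
    speech-recognition pipeline would perform."""
    if not target_words:
        return 1.0
    heard = {w.lower() for w in heard_words}
    correct = sum(1 for w in target_words if w.lower() in heard)
    return correct / len(target_words)

def select_feedback(accuracy: float) -> str:
    """Pick a scripted intervention from the bank based on accuracy."""
    if accuracy >= 0.95:
        return FEEDBACK_BANK["high_accuracy"]
    if accuracy >= 0.80:
        return FEEDBACK_BANK["minor_slips"]
    return FEEDBACK_BANK["struggling"]

# Example: the child reads a sentence aloud and the (hypothetical)
# speech recognizer returns a transcript with one missed word.
target = "the little dog ran across the green garden".split()
transcript = "the little dog ran across the garden".split()
accuracy = score_oral_reading(target, transcript)
print(f"accuracy: {accuracy:.0%} -> {select_feedback(accuracy)}")
```

Even in this toy version, the key design question the article raises is visible: the quality of the experience depends entirely on what sits in the intervention bank and how the thresholds were chosen, which is exactly where learning-science input matters.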

Using speech recognition technology to listen to a child read and using AI to offer feedback can be helpful, as long as the technology is based on science-backed design. It is problematic that many tools claim to be science-based but, in reality, have not been developed by learning scientists or tested in rigorous evaluation studies. Such tools are typically designed to engage and motivate the child to interact with stories, but they don’t always help children improve their reading skills.

The same is true for AI-generated narratives, which typically engage children by letting them make choices, such as picking a story’s characters and setting, and by personalizing the experience, say by making the protagonist a character with the child’s name and age. But AI-generated narratives often misalign with what science recommends for children’s literary experiences. For example, they often exhibit inconsistencies in story elements. On one page, the main protagonist may appear as a 5-year-old blond girl, but on the next page she transforms into a teenager, with no indication of passing time in the text. Inconsistencies in story events are also very common: In a story I recently created on one of these tools, the main character, Natalia, whom I named after myself, of course, was suddenly interacting with a new character, “Remi’s dog,” with no prior reference to how Remi or the dog got into the story. Research indicates that such narrative disruptions confuse young readers and hinder their empathy for the characters.

Research is valuable not only for the content of narrative texts but also for their format. Currently, most stories generated by AI resemble illustrated e-books rather than digital picture books. In an illustrated e-book, characters are typically drawn merely to reflect the information in the text. If the text says, “Natalia is wearing a yellow shirt as she stands in her garden smiling,” the character is drawn to match exactly that description. In contrast, in high-quality children’s picture books, both pictures and text contribute to the narrative’s depth, expanding children’s horizons and prompting them to reflect and engage in abstract thinking. This is the kind of literary experience Jacqueline Woodson achieves in her book, “Brown Girl Dreaming,” where poetry paints a picture in readers’ minds and elevates reading to art.

Also, in high-quality digital children’s books, voice-overs do not merely recite the written text; they augment the story with additional emotion and drama. With the complementary, mutually enriching roles of images, text and voice-overs in stories, children can not only become better readers but also develop stronger writing skills and media competence.

While the aesthetic quality of AI-generated stories may improve over time, I am concerned about how exposure to such stories might shape children’s standards for story quality. Children’s ability to make meaning from a story across modes is diminished when these quality markers are stripped away. Despite producers’ claims that digital story-making tools democratize access to story production, poorly designed digital books may inadvertently widen the gap between digitally produced narratives and those crafted by professional authors. Such disparities sharpen the divide between what literary critics deem high-quality literature worth exposing children to and the quick reads generated on demand by AI tools. While the latter may entertain, the former educate.

Concerns about AI-powered reading coaches, assistants and tutors relate to both learning to read and reading to learn, especially when it comes to AI-generated prompts. Many digital book producers already integrate real-time conversation prompts that can enhance children’s comprehension, and these prompts have been found to support literacy development. The new AI-generated prompts may also help children, but not as much as reading with a skilled human adult, such as a teacher, parent or tutor, and they should not be used to replace that experience. Overall, while these tools hold potential, they may also exacerbate the existing digital divide, particularly for children who lack access either to the technology or to a qualified adult to work with them on using it effectively.

How the Research On These Tools Is Unfolding

As the tools are still in development, researchers can only predict, rather than determine, their effects. Based on academic research about reading motivation, we can anticipate some challenges. For example, research shows that extrinsic motivators, like badges, are either negatively correlated or insignificantly associated with reading competence. On the other hand, intrinsic reading motivation, which stems from readers’ curiosity and active involvement in the reading process, is moderately and positively correlated with measures of reading competence.

Contrary to these findings, AI-powered reading coaches seem designed to prioritize extrinsic motivation. Children’s progress and time spent on the platforms are rewarded with stickers, applause and unlockable rewards. Comprehension checks via quizzes can be easily bypassed through trial and error, resulting in children pretending to read and receiving rewards for incorrect answers. Moreover, there’s no external assessment to gauge whether skills transfer to other texts, weakening the accountability of these technologies.

A recent meta-analysis of interventions that foster reading motivation revealed a small but noteworthy impact from strategies that customize texts to various reading levels or incorporate real-world connections. Importantly, this short-term effect is more noticeable among advanced readers than struggling ones. Yet, as of now, the AI-powered reading coaches on the market lack the specificity of effective targeted approaches.

Observing these trends is disappointing. These tools have the potential to enhance reading experiences for children, if they’re designed with insights from educators and researchers, particularly in the field of learning science. For example, these tools could disrupt traditional ideologies in literary texts if they involved teachers in the design process. Through this collaborative approach, they could also foster teachers’ AI literacy. And product developers could draw from learning science research to build tools that foster children’s self-expression and creativity.

Unfortunately, there is a staggering lack of collaboration among edtech companies building children’s technology products, educators and researchers with domain-specific knowledge. Even when companies engage with researchers, the interaction tends to be sporadic advice rather than continuous dialogue. And while some companies test their tools with teachers, it’s more common to develop features that are popular or aligned with pressing curriculum requirements rather than with the latest and best science.

Who suffers most from low-quality technologies? The children. So how can we ensure that learners’ agency, their volition and ability to make free choices, is preserved and encouraged in their interactions with AI-powered reading coaches?

Currently, this key question is often reduced to concerns about data privacy and improving consent-gathering procedures. However, answering it also involves determining who ultimately benefits from these tools. If children are the intended beneficiaries, then the companies building these tools must reconsider their strategies for design and scaling. Instead of rapid scaling and integration into various reading products driven by tech trends and investor demands for growth, edtech development requires a more patient approach. This involves participatory design with diverse groups of children and engaging educators and researchers in iterative co-creation cycles. Let’s not diminish the potential of these technologies by hastily releasing tools that are not yet mature enough to fully support children’s development.


