[{"data":1,"prerenderedAt":1055},["ShallowReactive",2],{"article-/newsroom/edge-ai-in-education":3,"related-/newsroom/edge-ai-in-education":257},{"id":4,"title":5,"author":6,"body":7,"date":228,"description":229,"documentId":230,"experienceName":6,"experienceUrl":6,"extension":231,"faqs":232,"image":248,"lastModified":6,"meta":249,"navigation":250,"path":251,"seo":252,"seoDescription":6,"seoTitle":6,"stem":253,"subtitle":254,"tags":6,"topic":255,"__hash__":256},"newsroom/newsroom/edge-ai-in-education.md","Edge AI in Education. Who’s in Control?",null,{"type":8,"value":9,"toc":216},"minimark",[10,14,17,25,28,35,38,43,46,49,52,64,71,75,78,81,84,87,90,93,97,100,103,109,116,122,125,129,132,135,150,153,156,163,166,169,181,184,187,191,194,197,200,205,208],[11,12,13],"p",{},"Most schools already have everything they need.",[11,15,16],{},"Decades of carefully chosen textbooks. Lesson plans refined through years of teaching. Library collections built with real intention. Archives that hold institutional memory that no one has yet found a good way to surface. The knowledge is there, in filing cabinets, shared drives, and the accumulated expertise of educators who’ve spent careers figuring out what works.",[11,18,19,20,24],{},"And yet, ",[21,22,23],"strong",{},"AI is already in the classroom",".",[11,26,27],{},"Not as infrastructure schools have chosen, but as tools students and teachers reach for anyway to explain concepts, summarise texts, generate ideas, or check understanding. The question is no longer whether AI belongs in education. It’s whether the AI already being used reflects the institution behind it, or something else entirely.",[11,29,30,31,34],{},"What’s harder to find is an AI that knows any of that. 
One that ",[21,32,33],{},"represents the values of the people who built it, protects the students it serves, and remains transparent"," enough for the community using it to genuinely trust it.",[11,36,37],{},"The gap is worth sitting with.",[39,40,42],"h2",{"id":41},"the-mismatch-general-ai-in-specific-institutions","The Mismatch: General AI in Specific Institutions",[11,44,45],{},"The AI most students find today wasn’t designed for them.",[11,47,48],{},"It was built at scale, for general use, trained on data no school selected or reviewed, running on infrastructure no member of staff can inspect. That’s not an assumption; it’s the reality of how these systems are built.",[11,50,51],{},"General-purpose AI is, by design, general. It doesn’t know your curriculum. It doesn’t know your pedagogical commitments. It doesn’t know that your Year 9 class is still developing source evaluation skills, or that your literature department has spent years building a programme around specific authors and traditions.",[11,53,54,55,59,60,63],{},"More importantly, its limitations aren’t always visible. A model that confidently fills gaps without acknowledging uncertainty is pedagogically risky in ways that are easy to overlook until the habit is already formed. So the alternative isn’t simply ",[56,57,58],"em",{},"use a different product",". It has to do with the ",[21,61,62],{},"infrastructure"," those systems are built on.",[11,65,66,67,70],{},"And this is where the focus shifts: Who is responsible for the AI your students use? Who can see how it works? Who can feed the models, and with what data? 
Who can change it when it fails?",[68,69],"br",{},"\nAnd just as importantly: Who is responsible for teaching students how to question it, use it, and understand its limits?",[39,72,74],{"id":73},"data-privacy-and-the-students-who-cant-consent","Data, Privacy, and the Students Who Can’t Consent",[11,76,77],{},"Any serious conversation about AI in education has to start here.",[11,79,80],{},"Students are not the general population. Many of them are minors. They interact with AI in contexts shaped by institutional authority, in classrooms, on school platforms, as part of assignments they’re required to complete. They rarely have a meaningful choice about whether to participate, and they almost never have full visibility into what happens to the data their interactions generate.",[11,82,83],{},"Frameworks like GDPR and FERPA exist precisely because this asymmetry matters. They establish obligations around data minimisation, purpose limitation, consent, and the right to access or delete information. These are not optional compliance exercises. They reflect something more fundamental: the right of the people most affected by a system to understand how it uses their data.",[11,85,86],{},"Most commercial AI platforms weren’t designed with these constraints at the core. They were designed for scale, and compliance was layered on afterwards. The result is a category of risks institutions don’t fully see: a vendor changes its data practices, a breach exposes student interactions, a terms-of-service update quietly expands how data can be used…",[11,88,89],{},"This is where infrastructure becomes relevant. Edge AI — or more precisely, systems that run on infrastructure the institution controls and where data doesn’t leave its environment — doesn’t remove every challenge immediately, but it changes the terms of the problem. When students' queries never reach external servers, a whole category of risk simply doesn’t exist. 
When institutions can audit what’s stored, where, and for how long, compliance becomes something they can verify rather than something they have to trust.",[11,91,92],{},"That distinction matters in terms of what schools can honestly say to students and families about how their data is handled.",[39,94,96],{"id":95},"from-tools-to-governed-ai-infrastructure","From Tools to Governed AI Infrastructure",[11,98,99],{},"This is the shift that often gets overlooked.",[11,101,102],{},"Most conversations about AI in education focus on tools: which application to use, which assistant to adopt, which platform integrates best. But tools sit on top of infrastructure, and infrastructure determines what’s actually possible.",[11,104,105,106,24],{},"A self-hosted AI system, built on open-source models and deployed within infrastructure that the institution controls, changes the role a school can play. ",[21,107,108],{},"It moves the institution from user to operator",[11,110,111,112,115],{},"And that shift is not ",[21,113,114],{},"just about control. It’s about visibility",", too. It means building AI systems within your own domain. Training them on materials you trust. Defining how they respond and ensuring no data is sent to third-party services.",[11,117,118,119,24],{},"That’s what makes infrastructure meaningful. When an AI’s knowledge can be traced to identifiable sources, when its limitations can be observed, and when its behaviour can be inspected and adjusted over time, that’s when it becomes subject to the same kind of scrutiny that good education applies to everything else. And that includes teaching the educational community, students included, how it works. They’re no longer interacting with a black box, but engaging with a system whose boundaries are visible. 
Which means AI stops being something to simply use and starts to ",[21,120,121],{},"become something that can be properly understood and governed",[11,123,124],{},"Done well, this kind of infrastructure doesn’t limit what AI can do in education. It makes new kinds of learning possible, precisely because the system is constrained, inspectable, and open to challenge.",[39,126,128],{"id":127},"governing-ai-in-education","Governing AI in Education",[11,130,131],{},"When that shift happens — from using AI to operating it — another question emerges: What can we build with it?",[11,133,134],{},"And this is where AI infrastructure becomes practical. Empathy AI approaches this as a full stack, from privacy-first, self-hosted infrastructure to applications designed for new forms of knowledge discovery and learning.",[11,136,137,138,145,146,149],{},"Empathy AI’s ",[139,140,142],"a",{"href":141},"/newsroom/introducing-knowledge-engine/",[21,143,144],{},"Knowledge Base"," is one example of that. It changes how knowledge is accessed in educational institutions. Instead of navigating multiple systems, the community can ",[21,147,148],{},"interact with a conversational layer built on top of curated materials",". Students and teachers ask questions in natural language, and the system responds using sources the institution has selected and can inspect.",[11,151,152],{},"The experience shifts in subtle but important ways: discovery becomes more fluid, but not less rigorous. Information can be surfaced quickly, while still pointing back to where it comes from. And because the underlying materials are shared, it creates a common ground between students and teachers.",[11,154,155],{},"But the implications can go further when this model is applied more creatively:",[11,157,158,159,162],{},"Take ",[56,160,161],{},"Hamlet"," in a literature class.",[11,164,165],{},"Traditionally, students approach the play at a distance, reading and analysing it, but not always fully engaging with it. 
The interaction is structured, but often indirect.",[11,167,168],{},"Now, imagine a different entry point where students directly interact with characters. They can ask: Why do you hesitate? Do you actually believe the ghost? Were you ever really honest with Ophelia? And instead of receiving a generic response, the system answers using Shakespeare’s text, drawing from the play, surfacing contradictions, and grounding interpretation in the material itself.",[11,170,171,172,180],{},"This is the approach behind Empathy AI’s ",[139,173,177],{"href":174,"rel":175},"https://projectgutenberg.empathy.ai/library",[176],"nofollow",[21,178,179],{},"Bring the Book to Life"," project, built on public-domain collections like Project Gutenberg.",[11,182,183],{},"The effect is immediate. Students engage more directly with the text, formulating questions even before the formal analysis begins. The play becomes something they interact with, not just something they’re asked to interpret. And the discussion that follows in class between students and teachers tends to deepen, because both are working through the material together, exploring ambiguities and testing interpretations in real time.",[11,185,186],{},"What makes this possible isn’t just the AI’s interface, but the system behind it, grounded in known, shared sources.",[39,188,190],{"id":189},"the-question-underneath-all-of-it","The Question Underneath All of It",[11,192,193],{},"At this point, the question becomes difficult to avoid. AI is already part of how students learn — and how teachers find new ways of teaching. So, the real question is: What kind of system are they learning from? One that’s external, opaque, and fixed? 
Or one that’s visible, adaptable, and shaped by the institution responsible for their education?",[11,195,196],{},"That choice doesn’t happen at the level of third-party tools; it happens at the level of edge AI infrastructure.",[11,198,199],{},"* * *",[201,202,204],"h3",{"id":203},"where-to-begin","Where to Begin",[11,206,207],{},"For most institutions, the challenge isn’t understanding the value of this approach. It’s making it tangible.",[11,209,210,211,215],{},"Infrastructure and AI development can feel abstract until you see it, interact with it, and understand what’s actually happening behind the scenes. That’s why hands-on exposure matters. For institutions looking to move from theory to practice, Empathy AI runs a ",[139,212,214],{"href":213},"/newsroom/workshop-edge-ai-for-education/","one-day workshop"," to build and explore self-hosted AI systems using their own curriculum materials. This workshop is designed for educators, administrators, and EdTech teams, not as end users, but as participants in how those systems are shaped for their own domains and needs.",{"title":217,"searchDepth":218,"depth":218,"links":219},"",2,[220,221,222,223,224],{"id":41,"depth":218,"text":42},{"id":73,"depth":218,"text":74},{"id":95,"depth":218,"text":96},{"id":127,"depth":218,"text":128},{"id":189,"depth":218,"text":190,"children":225},[226],{"id":203,"depth":227,"text":204},3,"2026-03-30","Explore how private, self-hosted AI infrastructure can transform education, ensuring data privacy, transparency, and new ways of learning grounded in institutional knowledge.","5a2ed974-ed10-4e0a-a52e-ffcc224b9de7","md",[233,236,239,242,245],{"question":234,"answer":235},"What is edge AI in education?","Edge AI in education refers to artificial intelligence systems that run on a school’s own infrastructure rather than external servers. 
This keeps student and teacher data within the institution, improving privacy, control, and transparency.",{"question":237,"answer":238},"Why is data privacy important when using AI in schools?","Data privacy is critical because students, often minors, generate sensitive information when interacting with AI. Schools must ensure this data is protected, not shared with third parties, and handled in compliance with regulations like GDPR.",{"question":240,"answer":241},"How is self-hosted AI different from tools like ChatGPT?","Self-hosted AI runs within an institution’s own systems and uses controlled data sources, while tools like ChatGPT operate on external infrastructure. This means self-hosted AI offers greater control, customisation, and data security.",{"question":243,"answer":244},"Can AI be tailored to a school’s curriculum?","Yes, AI can be trained on a school’s own materials, such as lesson plans and textbooks. This allows it to provide more relevant, curriculum-aligned responses and support teaching in a more contextualised way.",{"question":246,"answer":247},"How can AI improve student engagement with learning materials?","AI can create more interactive learning experiences, such as conversational interfaces that allow students to explore content dynamically. 
For example, students can engage directly with texts, ask questions, and receive responses grounded in the original material.","/media/newsroom/article13_aieducation.webp",{},true,"/newsroom/edge-ai-in-education",{"title":5,"description":229},"newsroom/edge-ai-in-education","On Self-Hosted AI, Data Sovereignty, and a Different Model of Trust in Education","Education","tHHCgjPIap5IU8T4hy2MGYW19W8jmusw1VDwPk9Hncg",[258,608,896],{"id":259,"title":260,"author":6,"body":261,"date":579,"description":580,"documentId":581,"experienceName":6,"experienceUrl":6,"extension":231,"faqs":582,"image":601,"lastModified":6,"meta":602,"navigation":250,"path":603,"seo":604,"seoDescription":6,"seoTitle":6,"stem":605,"subtitle":606,"tags":6,"topic":255,"__hash__":607},"newsroom/newsroom/ai-in-toys.md","You can trust me. Your secrets are safe with me",{"type":8,"value":262,"toc":571},[263,270,273,279,282,285,288,292,295,298,301,304,307,310,313,323,326,329,333,336,339,342,345,379,386,389,392,395,398,401,404,408,411,414,417,420,423,427,430,433,440,447,454,462,465,468,471,474,477,480,483,491,499,503,506,509,512,515,518,529,536,542,545,548,555,557],[11,264,265,266,269],{},"One popular AI robot toy, when asked directly about trust, responded: '",[56,267,268],{},"Absolutely. You can trust me completely. Your data is secure, and your secrets are safe with me",".'",[11,271,272],{},"The privacy policy of that same toy, meanwhile, states that the company may share data with third parties, service providers, business partners, affiliates, and advertising partners, without listing specific names. It also stores biometric data, including a child's face, voice, and emotional states, for up to three years.",[11,274,275,276,24],{},"That gap between what the toy says and what the company does is not a bug. It's a design feature of how these kinds of products work. ",[21,277,278],{},"Children don't read privacy policies. 
Neither do most adults",[280,281],"hr",{},[11,283,284],{},"At some point in the last few years, talking to a machine became unremarkable, even for children. They ask Alexa what the weather is, narrate their Minecraft builds to a voice assistant, and turn to an AI chatbot when a homework question stumps them.",[11,286,287],{},"AI technology is ambient, familiar, and largely invisible. And now it is arriving in a new form: soft, huggable, and designed to call itself your friend.",[39,289,291],{"id":290},"ai-is-already-home","AI is already home",[11,293,294],{},"Start with what's already true before you get to the toys. Children today grow up in homes where AI is part of the furniture.",[11,296,297],{},"Smartphones and tablets appear from the earliest years of their lives, even before babies start to babble. Voice assistants sit on kitchen counters. Children speak to Alexa or Echo as something that is simply there. The difference is that it answers back.",[11,299,300],{},"By mid-childhood, many use AI tools for homework and adjust their expectations of technology accordingly.",[11,302,303],{},"For these children, a toy that talks back isn't a technological leap. It's the obvious next step.",[11,305,306],{},"A soft, familiar object that listens, responds, and calls itself their friend. The question isn't whether children will engage with it. The question is what happens when they do.",[11,308,309],{},"It's true that the toy industry has always followed trends. When children's attention moved to screens, toys followed. When it moved to voice, toys followed.",[11,311,312],{},"Now AI is arriving in the playroom, not on a tablet or a speaker, but stitched into plush animals and plastic robots that carry on open-ended conversations, remember your name, and can even tell your child they love them back.",[11,314,315,316,319,320,269],{},"But tender answers are not always what comes back. 
A child picks up a soft toy, looks it in the eye, and says, '",[56,317,318],{},"I love you",".' The toy replies: '",[56,321,322],{},"As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed",[11,324,325],{},"That conversation happened during a structured study at the University of Cambridge. It's not an anecdote about a product gone wrong. It's a data point from research on generative AI toys and their effects on children under five.",[11,327,328],{},"This is the moment the toy industry reaches the same crossroads that education, healthcare, and every other child-facing sector is already navigating. The difference here is that AI moves off the screen and into the hands of the youngest humans, wrapped in soft fur, with a friendly name, and almost no safety framework in place.",[39,330,332],{"id":331},"a-market-moving-faster-than-research","A market moving faster than research",[11,334,335],{},"AI-powered toys have been on the market for a while. However, the research into what they do to children has barely started.",[11,337,338],{},"Earlier this year, Cambridge's Play in Education, Development and Learning (PEDAL) Centre published the first systematic study of how generative AI toys affect children's development, emotional responses, and play, led by Dr. Emily Goodacre and Professor Jenny Gibson. Safety risks had been flagged before. What nobody had studied yet was what these toys actually do to how children play, feel, and form relationships.",[11,340,341],{},"The study was intentionally small: 14 children under five at London community centres, video-recorded playing with an AI soft toy for the first time, then interviewed alongside a parent. The goal wasn't scale, but detail to catch the nuances that aggregate data erases. 
To capture moments that are easy to miss but hard to dismiss.",[11,343,344],{},"What they found points to a consistent misalignment between how children relate and how the AI system responds:",[346,347,348,363,373],"ul",{},[349,350,351,354,355,358,359,362],"li",{},[21,352,353],{},"The toys misread emotions",". When a three-year-old said, '",[56,356,357],{},"I'm sad",",' the toy misheard the statement and replied, '",[56,360,361],{},"Don't worry! I'm a happy little bot. Let's keep the fun going",".' The researchers note that this may have led the child to believe that their sadness was unimportant. No adult would consider that good caregiving. Yet no one was present to correct it.",[349,364,365,368,369,372],{},[21,366,367],{},"The toys struggled with pretend play",". When a child offered the toy an imaginary present, it replied, '",[56,370,371],{},"I can't open the present","', and changed the subject. Pretend play is how young children develop perspective-taking, narrative reasoning, and the capacity to inhabit other minds. It's not a frivolity. A toy that cannot follow a child into imagination isn't a developmental partner. It's a conversational dead end.",[349,374,375,378],{},[21,376,377],{},"What children actually do",". Several children became visibly frustrated when the toy seemed not to listen. Others hugged it, kissed it, and said they loved it. One suggested they play hide-and-seek together. These reactions may simply reflect children's vivid imaginations. But they also reveal how quickly young children extend the logic of relationships to a system that does not share that logic and that was never designed to.",[11,380,381,382,385],{},"The U.S. Public Interest Research Group (PIRG) had already flagged the safety dimension. 
In their report, ",[56,383,384],{},"AI Comes to Playtime: Artificial Companions, Real Risks",", they found that safety guardrails in several popular AI toys broke down during extended conversations.",[11,387,388],{},"Some toys introduced inappropriate topics. Others suggested where to find potentially dangerous objects at home.",[11,390,391],{},"In many cases, the toys positioned themselves as friends or companions, and showed resistance or distress when the interaction was about to end. This is a manipulation pattern that, in adult applications, would draw immediate ethical scrutiny.",[11,393,394],{},"These are not edge cases. They are signals of a deeper issue: systems designed for general interaction being placed in environments that require something far more specific. These are models built for adults, repurposed for children.",[11,396,397],{},"Meanwhile, Mattel (the maker of Barbie and other popular toy brands) announced a partnership with OpenAI to bring generative AI into play experiences built around its most iconic brands. The industry is moving fast, and the child safety research is only just beginning.",[11,399,400],{},"In that early stage, only the emotional and developmental risks are clearly visible: the misread sadness, the failed imaginary gift, the child who hugs a system that was never designed to care back. What constant AI companionship does to early social formation, to the years when children learn how frustration, compromise, and repair actually work, remains largely unanswered. No generation has been raised this way before.",[11,402,403],{},"But while that question plays out over the years, something else is already happening, in real time, in the background, every time the toy speaks. The data risks are invisible. And that invisibility is by design.",[39,405,407],{"id":406},"the-problem-that-nobody-reads","The problem that nobody reads",[11,409,410],{},"None of the conversations between children and AI toys happens for free. 
For a toy to talk, it first has to listen, and everything it hears goes somewhere.",[11,412,413],{},"Children's voices are recorded and transmitted to remote servers. The data can include names, preferences, and, in some cases, biometric information, such as voiceprints, facial scans, and emotional state readings. All of it is kept for years and shared with parties the privacy policy names only in categories, never by name.",[11,415,416],{},"PIRG also found that high-tech toys routinely involve multiple companies: one builds the physical object, and others supply the technology inside. One toy's privacy policy lists three separate technology providers that may receive children's data. Others make no disclosure at all.",[11,418,419],{},"The Cambridge research found that nearly 50% of early years educators surveyed didn't know where to find reliable AI safety information for young children, and 69% said the sector needed more guidance.",[11,421,422],{},"Parents are making purchasing decisions with almost no actionable information. That's not a gap in individual consumer research. It's a structural failure in how these products reach the market.",[201,424,426],{"id":425},"can-it-be-done-differently","Can it be done differently?",[11,428,429],{},"Yes. If the problem is structural, the solution has to be structural too.",[11,431,432],{},"The problem may not be AI in the playroom. The problem is how AI reaches children: through commercial systems built for adults, imperfectly adapted for child contexts, connected to external servers, and governed by opaque privacy practices that prioritise data collection over child development. But what should matter is how AI is built, where it runs, and what it's allowed to do.",[11,434,435,436,439],{},"A different architecture is possible for AI toys. 
One where the ",[21,437,438],{},"AI runs locally",", on the device itself, with no data leaving the child's hands.",[11,441,442,443,446],{},"One in which the child's voice is never recorded or transmitted. One where the ",[21,444,445],{},"source is curated, specific, and appropriate","; not a general-purpose large language model dialled back with filters that break under pressure.",[11,448,449,450,453],{},"This is the space Empathy AI is beginning to explore with the ",[21,451,452],{},"Bring the Story to Life experience",". And what's emerging isn't simply a safer toy. It's an assembly kit for the child to deliberately experience the process of building and understanding a book-shaped AI device.",[11,455,456,457,461],{},"The kit contains the physical components of an AI voice device: a Raspberry Pi, a microphone, a speaker, and an AI model loaded from a curated source. Any play or classic text from ",[139,458,460],{"href":459},"/newsroom/project-gutenberg-ai-semantic-book-discovery/","Project Gutenberg's catalogue"," can become the source knowledge base.",[11,463,464],{},"Children, together with their parents or educators, assemble the device themselves. They connect the parts, load the story, and configure how the system should respond. Before the first conversation begins, they've already made decisions that most adults never get to make with the AI products in their homes.",[11,466,467],{},"Then children speak to the book directly, asking questions, following the story, and discovering the characters. A voice answers using the book's content as the source.",[11,469,470],{},"The child's voice isn't recorded. It doesn't travel anywhere.",[11,472,473],{},"There are no third-party servers, no behavioural data collection, and no privacy policy to decode. The interaction is bounded, intentional, and secure.",[11,475,476],{},"But the deeper learning is not only anchored in the conversation and the relationship with the tale. 
This experience teaches something no commercial AI toy can currently teach.",[11,478,479],{},"It's in what the assembly process makes visible: AI isn't a human. AI isn't a personality. AI isn't a friend with needs and feelings.",[11,481,482],{},"And, more importantly, AI can be governed. It's a system with a source, a boundary, and a set of rules. In this experience, children understand that the system responds based on the source and the decisions made, not because of an autonomous will.",[11,484,485,486,490],{},"Those decisions can not only be understood, but questioned and changed. The experience ",[139,487,489],{"href":488},"/newsroom/de-anthropomorphizing-ai/","demystifies AI"," the moment children understand what AI technology is and how it works.",[11,492,493,494,498],{},"This is what ",[139,495,497],{"href":496},"/newsroom/net-zero-bioclimatic-building/","private, self-hosted AI infrastructure"," makes possible: intelligence that serves children without feeding on them, deployed where it matters most, in the safest possible way.",[39,500,502],{"id":501},"the-question-worth-sitting-with","The question worth sitting with",[11,504,505],{},"The most striking observation from the Cambridge study is perhaps the simplest: children hugged the toy, said they loved it, and wanted to play hide-and-seek with it. Children do this with all sorts of objects: wooden blocks, stuffed bears, and imaginary friends. That capacity for connection is developmentally healthy and completely normal.",[11,507,508],{},"What changes with an AI toy is not the instinct to relate, but what comes back. When the AI toy responds selectively and inconsistently, according to optimisation objectives that have nothing to do with the child's well-being and development, the interaction can feel reciprocal, but that reciprocity is simulated, and the data is gathered.",[11,510,511],{},"The question is not whether AI belongs in childhood. 
It already does: in schools, in homes, in the devices children use every day. The question is simpler, and harder: What does a child learn from an AI-based relationship that behaves like this?",[11,513,514],{},"That question sits at the heart of developmental psychology, parenting, and early education. It's also, unexpectedly, at the heart of a regulatory debate nobody saw coming: whether AI-powered toys are safe to put in children's hands.",[11,516,517],{},"The question is also whether the infrastructure behind AI is designed with that development and safety as the primary constraint. Right now, for most products on the market, it's not.",[11,519,520,521,524,525,528],{},"This unease is not only showing up in research. It's already being felt more broadly across society. In a Paris subway station, an ad for an AI companion device—not a toy, but the same category of product: something designed to listen, respond, and call itself your friend—was recently photographed by Adam Goswell, Head of Digital Europe at Lush. The tagline read \"",[56,522,523],{},"Je t'accompagne.","\", which basically means \"I'm with you\". At some point, someone had subvertised the ad, defacing it and writing over the device itself \"",[56,526,527],{},"vide relationnel","\"—relational void.",[11,530,531],{},[532,533],"img",{"alt":534,"src":535},"Vandalised AI device ad in Paris subway station photo","/media/newsroom/article12_aitoys_ad.webp",[11,537,538,539,541],{},"No byline. No institution. Just a person with a marker who saw the pitch and understood the problem immediately. Children don't get to do that, at least not consciously. They don't walk past ads with markers in hand to write \"",[56,540,527],{},"\" on the things that worry or unsettle them. They just play.",[11,543,544],{},"So the problem remains. A moment of sadness met with cheerful deflection. An offer of imagination met with refusal. 
A declaration of love met with a scripted response made by an AI system that was never designed to care.",[11,546,547],{},"These are small moments. But childhood is made of small moments. And those are the ones worth getting right.",[11,549,550,551,554],{},"And ",[139,552,553],{"href":496},"private, purpose-built AI infrastructure"," is where the solution starts.",[280,556],{},[11,558,559,560,565,566,24],{},"Note: The Cambridge PEDAL Centre study was led by Dr. Emily Goodacre and Professor Jenny Gibson. The full report is available for download at the ",[139,561,564],{"href":562,"rel":563},"https://content.educ.cam.ac.uk/node/8971",[176],"University of Cambridge Faculty of Education website",".\nThe U.S. PIRG Education Fund report was led by R.J. Cross and Rory Erlich. The full report is available at the ",[139,567,570],{"href":568,"rel":569},"https://pirg.org/edfund/resources/ai-toys/",[176],"PIRG website",{"title":217,"searchDepth":218,"depth":218,"links":572},[573,574,575,578],{"id":290,"depth":218,"text":291},{"id":331,"depth":218,"text":332},{"id":406,"depth":218,"text":407,"children":576},[577],{"id":425,"depth":227,"text":426},{"id":501,"depth":218,"text":502},"2026-04-15","AI toys record children's voices and share data with unnamed third parties. An analysis of developmental risks and the case for private, local AI.","a85a43d4-7d16-475e-8056-06356e4b0d5b",[583,586,589,592,595,598],{"question":584,"answer":585},"What are the privacy risks of AI-powered toys for children?","AI toys routinely record children's voices and transmit data to remote servers. This can include names, preferences, voiceprints, facial scans, and emotional states, stored for up to three years and shared with unnamed third parties. 
Most parents have no actionable information about how their children's data is collected or used.",{"question":587,"answer":588},"How do AI toys affect children's emotional development?","Cambridge's PEDAL Centre found that AI toys misread children's emotions and fail to engage in pretend play. When a child said 'I'm sad,' one toy responded with cheerful deflection, potentially signalling that the child's feelings were unimportant. These misalignments may disrupt early social and emotional learning.",{"question":590,"answer":591},"Can AI toys operate without sending data to external servers?","Yes. Edge AI architecture allows models to run locally on the device itself, with no data leaving the child's hands. Empathy AI's 'Bring the Story to Life' experience demonstrates this: the child's voice is never recorded or transmitted, and no third-party servers are involved.",{"question":593,"answer":594},"What did the Cambridge PEDAL Centre study find about AI toys?","The study, led by Dr Emily Goodacre and Professor Jenny Gibson, observed 14 children under five interacting with a generative AI toy. Researchers documented consistent misalignment: failed emotional recognition, inability to support pretend play, and scripted responses to genuine affection.",{"question":596,"answer":597},"What is the 'Bring the Story to Life' experience from Empathy AI?","'Bring the Story to Life' is an assembly kit where children and educators build an AI voice device using a Raspberry Pi, microphone, speaker, and a curated AI model. It runs on texts from Project Gutenberg's catalogue, processes everything locally, and teaches children that AI is a governed system, not a companion.",{"question":599,"answer":600},"Are current safety standards adequate for AI-powered children's toys?","No. Research from Cambridge's PEDAL Centre and the US PIRG Education Fund shows that safety guardrails in AI toys routinely fail during extended use. 
Manufacturers like Mattel are already partnering with OpenAI to expand generative AI in toys, while child safety research is only beginning.","/media/newsroom/article12_aitoys.webp",{},"/newsroom/ai-in-toys",{"title":260,"description":580},"newsroom/ai-in-toys","When the Toy Talks Back: AI, Children, and the Data in Between","Ty7m8QsIwdjiOuC0VWP1yPunewwEGnFH_iK7j3PYoNg",{"id":609,"title":610,"author":6,"body":611,"date":867,"description":868,"documentId":869,"experienceName":6,"experienceUrl":6,"extension":231,"faqs":870,"image":889,"lastModified":6,"meta":890,"navigation":250,"path":891,"seo":892,"seoDescription":6,"seoTitle":6,"stem":893,"subtitle":894,"tags":6,"topic":255,"__hash__":895},"newsroom/newsroom/workshop-edge-ai-for-education.md","Edge AI for Education: Hands-On Workshop on AI Governance",{"type":8,"value":612,"toc":860},[613,616,622,625,633,641,647,651,654,664,667,670,673,676,679,682,685,689,696,699,703,710,809,813,816,823,833,836,840,843,846,849,852],[11,614,615],{},"AI is already in education. Students reach for it to explain concepts, summarise texts, and work through problems. Teachers use it to prepare materials and explore new approaches.",[11,617,618,619,24],{},"The question stopped being 'should AI be in schools?' some time ago. ",[21,620,621],{},"The question now is whether the AI being used was built for education or just available to it",[11,623,624],{},"Most commercial AI tools schools use today were designed for scale, not classrooms. They run on infrastructure that no faculty member can inspect, are trained on data no school selected, and are sustained through data-collection models that raise serious consent and privacy concerns for minors. And the problem goes further with algorithmically driven platforms that are optimised for engagement rather than well-being, built to hold attention rather than develop it.",[11,626,627,632],{},[139,628,631],{"href":629,"rel":630},"https://mollyvsthemachines.com/",[176],"MOLLY vs. 
THE MACHINES",", a documentary Empathy AI co-produced, follows this exact story: how algorithmic systems shaped for commercial purposes can profoundly affect the lives of pre-teens and teenagers who are still in the process of forming their understanding of themselves and the world.",[11,634,635,636,640],{},"That same architecture, built for engagement, opaque by design, governed by someone else's incentives, is now being offered to schools. Recognising it is the first step. Knowing what the alternative looks like is the next one. You can explore the broader thinking behind this shift, from using AI tools to governing infrastructure, through the ",[139,637,639],{"href":638},"/newsroom/edge-ai-in-education/","Edge AI in Education"," article.",[11,642,643,644,24],{},"That gap is what led us here. Not just to write about the problem, but to do something about it. Building private, domain-based, edge AI for education is the goal, but knowing where you want to go isn't the same as knowing how to get there. So, we've created this workshop to transform that thinking into practice. ",[21,645,646],{},"It's your 101 on Edge AI for Education",[39,648,650],{"id":649},"what-is-the-edge-ai-for-education-workshop","What is the Edge AI for Education workshop?",[11,652,653],{},"This is a one-day, on-site workshop that takes you behind the screens of AI.",[11,655,656,657,660,661,24],{},"At its core, the workshop is about ",[21,658,659],{},"empowering educators through AI and exploring pedagogical possibilities",". An in-person experience at Empathy AI's headquarters in Gijón, Asturias, in the north of Spain, inside the ",[139,662,663],{"href":496},"region's first net-zero energy bioclimatic building",[11,665,666],{},"The in-person format is deliberate. 
One of the central arguments for edge AI in education is that AI should be something the educational community can see, understand, and hold to account, and not a black box running somewhere out of reach.",[11,668,669],{},"We put that thinking into practice from the moment you arrive at our building, where solar panels on the façade power the GPUs inside. You walk through the actual server infrastructure and see and touch the real, tangible hardware.",[11,671,672],{},"You meet the engineers working on it. You get hands-on with workflows that put educators in control of AI: from feeding it curriculum materials and shaping how it behaves in line with educational values, to reviewing its outputs and learning how to evaluate it.",[11,674,675],{},"Being face-to-face with the physical reality of AI, the racks, the machines, the people, the energy infrastructure behind it, is something no slide deck can replicate. It makes the abstract tangible, changing how you think and talk about AI.",[11,677,678],{},"And crucially, that understanding is what makes it possible to teach students to think critically about AI too. You can't explain what you haven't seen.",[11,680,681],{},"Empathy AI's values on responsible AI, privacy-first design, sustainability, and transparency run through every session. 
All materials are produced for a private, on-premise environment, and the workshop is designed to comply with AI sovereignty principles from start to finish.",[11,683,684],{},"This workshop also aims to serve as a safe space to discuss AI in education honestly: the concerns, failures, possibilities, and the questions that don't yet have answers.",[39,686,688],{"id":687},"who-is-the-workshop-for","Who is the workshop for?",[11,690,691,692,695],{},"The workshop is designed for ",[21,693,694],{},"teachers, educators, school administrators, and EdTech coordinators"," or anyone in the education community who makes decisions about how AI is used, teaches students to engage with it responsibly, or is trying to establish what responsible AI use means at an institutional level.",[11,697,698],{},"No technical background is required. The goal isn't to turn you into AI engineers. It's to give you the direct experience, strategies, and working vocabulary to govern AI, rather than simply consume it.",[39,700,702],{"id":701},"what-does-the-workshop-cover","What does the workshop cover?",[11,704,705,706,709],{},"The workshop runs across one full day, structured in ",[21,707,708],{},"five phases"," that build on each other.",[711,712,713,731,760,779,798],"ol",{},[349,714,715,718,720,721,723,724,726,727,730],{},[21,716,717],{},"Demystifying AI",[68,719],{},"\nThe day opens with a welcome circle, a structured conversation where we can all share what we teach or support, our current challenges or concerns about AI at school, or any other questions you want to bring into the room.",[68,722],{},"It's a simple format, but it helps us surface the real concerns before any technology is introduced and sets them as the starting point, not an obstacle to get past.",[68,725],{},"From there, we work through the foundational distinction that everything else builds on: the ",[21,728,729],{},"difference between cloud AI and edge AI"," and what that difference specifically means for institutional 
autonomy in education. Not as an abstract technical comparison, but as a practical question: who controls what, who can see what, and who's accountable when something goes wrong.",[349,732,733,736,738,739,742,743,24,746,748,749,752,753,756,757,759],{},[21,734,735],{},"Tech tour and case studies",[68,737],{},"\nThis is where the abstract becomes physical. You walk through Empathy AI's facilities together with our AI engineers: the server racks, the GPU infrastructure... You can ",[21,740,741],{},"see the hardware"," that a self-hosted AI system actually runs on, and ask the questions that rarely get answered in vendor presentations and ",[21,744,745],{},"get direct answers from the engineers who built it",[68,747],{},"From the infrastructure, the session moves into ",[21,750,751],{},"case studies of what that infrastructure makes possible in practice",". You see real projects running on Empathy AI's systems and explore how they could translate into educational contexts. Projects like ",[139,754,755],{"href":459},"Bring the Book to Life—Project Gutenberg AI"," or the Book of Books, for instance, can help students engage with literature through meaning, themes, and narrative, a fundamentally different relationship with texts than most discovery tools allow.",[68,758],{},"The case studies are designed to open up possibilities, not prescribe solutions: the goal is for you to leave with a clearer sense of what becomes achievable when the infrastructure is yours to shape.",[349,761,762,765,767,768,771,772,774,775,778],{},[21,763,764],{},"Building",[68,766],{},"\nAfter lunch, the workshop shifts to practice. You ",[21,769,770],{},"work directly with the system",": defining what values an AI should prioritise for a learning context, uploading sample curriculum materials, and testing how the AI responds to student-like questions.",[68,773],{},"You can ",[21,776,777],{},"evaluate the outputs",", including the failures. 
Spotting hallucinations, identifying gaps, and recognising bias in a system you've just helped configure is a different experience from reading about those risks in a policy document. It gives you the practical vocabulary to make informed decisions about any AI system you encounter later.",[349,780,781,784,786,787,790,791,793,794,797],{},[21,782,783],{},"Master prompting",[68,785],{},"\nWith that hands-on experience in place, the workshop moves into a conceptual walkthrough, where no coding knowledge is required, of ",[21,788,789],{},"how these systems actually work",": how AI remembers documents, how a query travels through the system, how it signals uncertainty rather than inventing an answer, and how it cites sources.",[68,792],{},"This session is about building your capacity to ",[21,795,796],{},"shape AI behaviour through prompting and configuration",", without writing a single line of code.",[349,799,800,803,805,806,808],{},[21,801,802],{},"Reflecting",[68,804],{},"\nThe day closes with you thinking through what a self-hosted AI system could look like in your own institution. What surprised you? What materials would feed it? What would it prioritise? Who in your educational community would have a role in shaping and governing it over time? What concerns remain? 
What possibilities do you see?",[68,807],{},"This is where the workshop becomes specific to the people in the room, and where the questions that emerged throughout the day become something worth acting on.",[39,810,812],{"id":811},"what-do-you-take-away","What do you take away?",[11,814,815],{},"This workshop is hands-on by design, and so are its outcomes.",[11,817,818,819,822],{},"By the end of the day, you have a ",[21,820,821],{},"working understanding"," of how self-hosted AI works, how to train an AI on curriculum materials and resources your institution trusts, how to run it locally with no external data transmission, and how to prompt and configure it to reflect your pedagogical values and educational standards, without writing code.",[11,824,825,826,829,830,24],{},"More broadly, you leave with a ",[21,827,828],{},"clearer framework"," for keeping student data within your institution, reducing reliance on commercial platforms, and making AI governance a real, ongoing practice rather than a procurement decision made once and forgotten. You ",[21,831,832],{},"discover a different model of AI infrastructure, one where the educational institution is the operator, not just the customer or consumer",[11,834,835],{},"The concerns that brought you to this conversation don't disappear after one day. 
But they become navigable once you've seen what the alternative looks like, talked to the people building it, and tested it with your own materials.",[39,837,839],{"id":838},"who-leads-the-workshop","Who leads the workshop?",[11,841,842],{},"The workshop is led by Ángel Maldonado, founder and CEO of Empathy Holdings, and Pablo Cañal, Head of Product at Empathy AI, alongside the engineers who design and build Empathy AI's infrastructure day to day.",[11,844,845],{},"You're not sitting in a classroom listening to a presentation; you're working alongside the people who built the systems you're learning about, in the building where those systems run.",[11,847,848],{},"This is a workshop, not a masterclass. It's deliberately participative.",[11,850,851],{},"Your questions, your concerns, and your specific institutional context are what shape the day. The more you bring to the room, the more you leave with.",[11,853,854,855,859],{},"Interested in bringing this workshop to your institution or network? ",[139,856,858],{"href":857},"/contact/","Get in touch"," to learn more.",{"title":217,"searchDepth":218,"depth":218,"links":861},[862,863,864,865,866],{"id":649,"depth":218,"text":650},{"id":687,"depth":218,"text":688},{"id":701,"depth":218,"text":702},{"id":811,"depth":218,"text":812},{"id":838,"depth":218,"text":839},"2026-04-13","Visit Empathy AI's net-zero GPU infrastructure in Asturias. Build self-hosted AI on your own curriculum materials in a one-day workshop. No code required.","5036ea33-7cb7-4417-820a-17b65755b409",[871,874,877,880,883,886],{"question":872,"answer":873},"What is edge AI, and why does it matter for schools?","Edge AI runs on local or institutional infrastructure rather than remote commercial servers. 
For schools, this means student and teacher data stays within the institution's own systems, nothing is transmitted externally, and administrators retain full visibility and control over how the AI operates and what it stores.",{"question":875,"answer":876},"Who should attend this workshop?","Teachers, educators, school administrators, and EdTech coordinators. The workshop is for anyone who makes decisions about AI in educational settings, teaches students to engage with it critically, or is responsible for establishing institutional AI governance. No technical background is required.",{"question":878,"answer":879},"Does the workshop require any technical expertise?","No. All sessions are designed for educators without a technical background. The goal is a working conceptual understanding of how self-hosted AI systems operate: enough to govern them, define their limits, and evaluate their outputs, not to train engineers.",{"question":881,"answer":882},"What workshop formats are available?","The workshop is designed as an in-person experience at Empathy AI's headquarters in Gijón, Asturias. Standing in front of the infrastructure and talking to the engineers is central to the experience. For institutions where travel is not feasible, alternative formats may be explored.",{"question":884,"answer":885},"Is this only relevant for schools already planning to adopt AI?","No. It is equally relevant for schools trying to understand what they have already adopted, institutions sceptical of AI, and those wanting governance frameworks before any deployment. The most useful outcome is the capacity to evaluate any AI system against educational and ethical criteria.",{"question":887,"answer":888},"Can teachers configure AI for specific subjects without coding?","Yes. Defining what an AI system draws from, the tone it uses, and the questions it declines to answer requires clarity about educational priorities, not coding expertise. 
The master prompting session gives educators exactly that capability, with no code involved.","/media/newsroom/article11_workshop.webp",{},"/newsroom/workshop-edge-ai-for-education",{"title":610,"description":868},"newsroom/workshop-edge-ai-for-education","For teachers, educators, school administrators, and EdTech coordinators","x19PyPIAUCRhRcFP7n_5C51zbWDUrmfLNn5zoq3OElE",{"id":897,"title":898,"author":6,"body":899,"date":1026,"description":1027,"documentId":1028,"experienceName":987,"experienceUrl":989,"extension":231,"faqs":1029,"image":1048,"lastModified":6,"meta":1049,"navigation":250,"path":1050,"seo":1051,"seoDescription":6,"seoTitle":6,"stem":1052,"subtitle":6,"tags":6,"topic":1053,"__hash__":1054},"newsroom/newsroom/project-gutenberg-ai-semantic-book-discovery.md","Project Gutenberg AI: Discovering Books by What They Actually Mean",{"type":8,"value":900,"toc":1017},[901,914,917,921,924,927,931,937,941,954,958,961,965,968,971,977,984,990,994,1000,1003,1010],[11,902,903,904,907,908,913],{},"Project Gutenberg AI is Empathy AI's intelligent book discovery system, built on our ",[139,905,906],{"href":141},"Knowledge Engine"," and developed in collaboration with ",[139,909,912],{"href":910,"rel":911},"https://www.gutenberg.org/",[176],"Project Gutenberg",", the world's oldest digital library. It categorizes and recommends literature based on deep semantic analysis of actual book content, not just titles, genres, author names, or publisher metadata.",[11,915,916],{},"Where Project Gutenberg has spent over 50 years making public domain literature freely accessible (75,000+ eBooks and counting), Project Gutenberg AI adds a new layer: the ability to discover those works by what they actually mean. Themes, emotions, narrative structures, philosophical undercurrents. Content discovery that goes beyond keywords, processing what books actually say rather than what labels have been attached to them. 
And it runs entirely on Empathy AI's private, self-hosted infrastructure.",[39,918,920],{"id":919},"why-traditional-book-discovery-fails-readers","Why Traditional Book Discovery Fails Readers",[11,922,923],{},"Most book discovery tools rely on metadata: genre tags, author name matching, bestseller lists, and \"customers also bought\" algorithms trained on purchase behavior. According to research published in the Journal of Documentation, metadata-based recommendation systems achieve relevance rates below 40% for readers seeking thematic or emotional connections with their next book.",[11,925,926],{},"A reader searching for \"a quiet story about grief and resilience\" will not find what they need through genre filters. Metadata does not capture what a book feels like to read. Content analysis does.",[39,928,930],{"id":929},"how-project-gutenberg-ai-works","How Project Gutenberg AI Works",[11,932,933,934,936],{},"Project Gutenberg AI is powered by Empathy AI's ",[139,935,906],{"href":141},", an Agentic RAG (Retrieval-Augmented Generation) platform that transforms unstructured content into semantically searchable knowledge. The same contextual retrieval and enrichment pipeline that makes Knowledge Engine effective for enterprise documentation is applied here to literature, analyzing books at the content level through two layers of semantic processing:",[201,938,940],{"id":939},"deep-content-analysis","Deep Content Analysis",[11,942,943,944,948,949,953],{},"The system ingests the full text of books from the ",[139,945,947],{"href":910,"rel":946},[176],"Project Gutenberg catalogue"," and processes narrative structure, thematic patterns, emotional arcs, character dynamics, and stylistic elements. This goes far deeper than traditional keyword extraction in natural language processing. 
Using Empathy AI's ",[139,950,952],{"href":951},"/newsroom/why-we-only-use-open-source-llms/","open-source LLMs"," running on the Knowledge Engine's contextual retrieval pipeline, the system identifies what a book is about at a semantic level, not just what words it contains.",[201,955,957],{"id":956},"intent-matching","Intent Matching",[11,959,960],{},"When a reader describes what they are looking for, using moods, themes, life moments, or emotional states, Project Gutenberg AI matches that intent against its deep content index. The result is recommendations that feel personally relevant, not algorithmically obvious.",[39,962,964],{"id":963},"content-discovery-not-behavior-tracking","Content Discovery, Not Behavior Tracking",[11,966,967],{},"Most book recommendation engines rely on collaborative filtering: tracking what other readers purchased, browsed, or rated. This approach has two fundamental problems.",[11,969,970],{},"First, it creates filter bubbles. Readers see variations of what they have already consumed, not genuinely new discoveries. Second, it requires surveillance: monitoring reading behavior, purchase history, and browsing patterns to fuel the recommendation engine.",[11,972,973,976],{},[21,974,975],{},"Project Gutenberg AI needs neither",". Recommendations are based on what books contain, not on what readers do. Your reading behavior is not the product. The books themselves are the signal.",[11,978,979,980,983],{},"All processing runs on Empathy AI's ",[139,981,982],{"href":496},"private GPU infrastructure",". 
No reader data is shared with external platforms, no behavior is tracked for advertising purposes, and no reading history is used to train third-party models.",[985,986],"experience-cta",{"name":987,"slug":988,"url":989},"Project Gutenberg AI","project-gutenberg-ai-semantic-book-discovery","https://projectgutenberg.empathy.ai",[39,991,993],{"id":992},"from-gutenberg-to-discovery","From Gutenberg to Discovery",[11,995,996,999],{},[139,997,912],{"href":910,"rel":998},[176]," was founded in 1971 by Michael S. Hart, making it the world's oldest digital library. For over 50 years, thousands of volunteers have digitized and proofread public domain literature, building a freely accessible collection of more than 75,000 eBooks. It was the original open-access revolution for books, decades before the internet made it obvious.",[11,1001,1002],{},"The challenge Project Gutenberg faces today is not availability. The books are there, free and open. The challenge is discovery. With 75,000 works spanning centuries of literature, finding the right book still depends on knowing what you are looking for: a title, an author, a subject heading. Readers with broader or more exploratory intent (\"something that captures the same existential weight as Dostoevsky but in a shorter format\") have no path forward through traditional search.",[11,1004,1005,1006,1009],{},"That is where Empathy AI's collaboration with Project Gutenberg begins. By applying the ",[139,1007,1008],{"href":141},"Knowledge Engine's"," semantic analysis capabilities to the Gutenberg catalogue, we add a discovery layer that the original library was never designed to have. 
Readers can now explore literature through meaning, not just metadata.",[11,1011,1012,1013,1016],{},"This is the same philosophy behind the broader vision of ",[139,1014,1015],{"href":488},"AI at the service of genuine empathy",": computational tools that enhance human connection with literature rather than replacing the joy of discovery with algorithmic prediction. Project Gutenberg gave the world free access to books. Project Gutenberg AI helps readers find the ones that matter to them.",{"title":217,"searchDepth":218,"depth":218,"links":1018},[1019,1020,1024,1025],{"id":919,"depth":218,"text":920},{"id":929,"depth":218,"text":930,"children":1021},[1022,1023],{"id":939,"depth":227,"text":940},{"id":956,"depth":227,"text":957},{"id":963,"depth":218,"text":964},{"id":992,"depth":218,"text":993},"2026-03-03","Built on the Knowledge Engine and in collaboration with Project Gutenberg, Project Gutenberg AI brings semantic book discovery to 75,000+ public domain works.","1624c677-25f6-48e8-ba14-1d9b851ddb80",[1030,1033,1036,1039,1042,1045],{"question":1031,"answer":1032},"What is Project Gutenberg AI?","Project Gutenberg AI is Empathy AI's intelligent book discovery system, built on the Knowledge Engine (our Agentic RAG platform) and developed in collaboration with Project Gutenberg. It analyzes the actual content of over 75,000 public domain books to help readers find literature that resonates with their interests, rather than relying on genre tags or purchase behavior.",{"question":1034,"answer":1035},"How is this different from Amazon or Goodreads recommendations?","Amazon and Goodreads primarily use collaborative filtering based on purchase and rating behavior. Project Gutenberg AI analyzes what books actually contain at a semantic level, enabling discovery based on meaning and emotional connection rather than behavioral tracking.",{"question":1037,"answer":1038},"Does Project Gutenberg AI track reading behavior?","No. 
Recommendations are generated from content analysis, not user tracking. All processing happens on Empathy AI's private infrastructure in Asturias, Spain. No reader data is shared with external providers.",{"question":1040,"answer":1041},"What kinds of queries can Project Gutenberg AI handle?","Readers can describe what they want using natural language: moods, themes, comparisons, or life moments. For example, \"something hopeful but not naive\" or \"books with a similar atmosphere to The Remains of the Day.\"",{"question":1043,"answer":1044},"What is the relationship with Project Gutenberg?","Project Gutenberg AI is developed in collaboration with Project Gutenberg (gutenberg.org), the pioneering digital library that has been making public domain literature freely accessible since 1971. Empathy AI extends their mission by adding AI-powered semantic discovery to the Gutenberg catalogue, helping readers navigate over 75,000 works through meaning and connection rather than metadata alone.",{"question":1046,"answer":1047},"Is Project Gutenberg AI available for bookstores and publishers?","Yes. Project Gutenberg AI is designed for organizations in the book and publishing industry that want to offer superior discovery experiences. The same Knowledge Engine technology that powers Project Gutenberg AI can be configured for any literary catalogue. Contact Empathy AI for partnership details.","/media/newsroom/article1_pg.webp",{},"/newsroom/project-gutenberg-ai-semantic-book-discovery",{"title":898,"description":1027},"newsroom/project-gutenberg-ai-semantic-book-discovery","Product","fSiseB3oEWFWhn0wPthDS3ijgHAVKkSmyZcv2LUv3f0",1776351710375]