👋 Good morning! This week’s stories highlight a quieter but consequential shift in how AI is being positioned for everyday users, especially those who don’t think of themselves as “AI users” at all. Rather than new model releases or benchmark wins, the focus is on who AI is built for and how it shows up in their daily lives. Apple is reworking Siri by leaning on external foundation models to finally make its assistant useful at scale, while a new startup founded by former Googlers is reimagining AI as an interactive learning companion for children. In both cases, the competition is less about raw capability and more about interface design, trust, and adoption: the layers that determine whether AI becomes embedded or ignored.

🧒 Former Googlers Seek to Captivate Kids With an AI-Powered Learning App

Three former Google employees, Lax Poojary, Lucie Marchand, and Myn Kang, founded Sparkli to solve a practical problem they experienced as parents: standard AI assistants tend to respond with a “wall of text,” which isn’t naturally engaging for children. Sparkli’s bet is that kids don’t just want answers; they want interactive experiences.

Sparkli generates what it calls AI-powered learning “expeditions.” Kids can choose from predefined categories or ask their own questions to generate a learning path, with chapters that mix audio, video, images, quizzes, and games, including “choose-as-you-go” adventures designed to reduce the pressure of right/wrong testing. The company says it can generate a full learning experience within about two minutes of a question being asked.

On implementation and go-to-market, TechCrunch notes Sparkli’s target audience is ages 5–12, and that it tested the product in 20+ schools last year and is piloting with an institute connected to a network of schools totaling 100,000+ students. The company also built a teacher module for assigning homework and tracking progress, and it borrows engagement mechanics (streaks, rewards, quest cards) inspired by Duolingo to drive repeat use.

Safety is addressed explicitly: Sparkli says certain topics (e.g., sexual content) are banned, and that for sensitive prompts like self-harm it aims to steer toward emotional intelligence and encourage kids to talk to parents.

Finally, Sparkli raised a $5M pre-seed round, led by Founderful, and plans to focus on schools in the near term, with a consumer app for parents expected by mid-2026.

🧠 Prompt Engineering: Why AI Feels Underwhelming (and It’s Not a Model Problem)

As AI becomes more embedded in everyday tools and workflows, an interesting paradox is emerging: capability is increasing rapidly, yet many users report that AI still feels generic, shallow, or surprisingly unhelpful in practice.

The issue is rarely the model. It’s how people interact with it.

Most users still approach AI the way they approach Google: by asking open-ended questions and expecting ready-made answers. That interaction model works for search engines, which retrieve existing information. It is far less effective for AI systems, which are designed to generate, transform, and reason through tasks.

In practice, this leads to a familiar pattern:

  • Vague questions

  • Polished but generic responses

  • The impression that “AI is overhyped”

A more accurate mental model is to treat AI not as an oracle, but as a junior operator or analyst: someone fast and capable, but who requires direction, context, and clear deliverables.

The difference is not prompt cleverness, but task framing.

Compare:

  • “How can I be more productive at work?”
    versus

  • “You are my junior analyst. I spend ~6 hours a week in meetings. Identify which meetings can be cut or replaced, and propose a revised weekly schedule that reduces meeting time by at least 30%.”

The second prompt does not ask for advice; it assigns work. It defines a role, a goal, and an output. This is where AI’s strengths show up most clearly.
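The role/goal/output structure can be sketched as a small helper that assembles a delegation-style prompt. This is a minimal illustration only, not any library’s API; the function name and field names are assumptions chosen for clarity:

```python
# Hypothetical sketch: framing a prompt as assigned work rather than an
# open-ended question. The four fields mirror the pattern described above:
# a role, some context, a goal, and a concrete deliverable.

def build_task_prompt(role: str, context: str, goal: str, output: str) -> str:
    """Combine the four framing elements into a single prompt string."""
    return (
        f"You are my {role}. "
        f"Context: {context} "
        f"Goal: {goal} "
        f"Deliverable: {output}"
    )

# Recreating the meeting-audit example from the text:
prompt = build_task_prompt(
    role="junior analyst",
    context="I spend ~6 hours a week in meetings.",
    goal="Identify which meetings can be cut or replaced.",
    output="A revised weekly schedule that reduces meeting time by at least 30%.",
)
print(prompt)
```

The point of the helper is not the string concatenation itself, but the discipline it enforces: a prompt missing any of the four fields is usually the vague, oracle-style question that produces generic answers.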

As AI continues to move closer to users, through hardware, IDE integrations, and creator tools, the limiting factor will increasingly be human-side interfaces: how clearly intent is expressed, how well tasks are scoped, and how feedback loops are structured. In that sense, prompting is less about “talking to AI” and more about learning how to delegate.

⚙️ AI Tools and Updates: Apple to Reveal Gemini-Powered Siri in February

Apple is preparing to unveil a next-generation Siri assistant in the second half of February 2026, powered by Google’s Gemini AI models, the first concrete product to come from the recently announced AI partnership between Apple and Google.

This update is significant because it’s expected to finally deliver on Apple’s earlier promises about a more capable Siri. The new version will reportedly be able to complete tasks by accessing personal data and on-screen content, moving beyond basic voice commands toward deeper contextual assistance.

The February announcement will likely be a preview or demonstration, with Apple planning an even larger update later in the year, possibly at its Worldwide Developers Conference. That future upgrade could shift Siri toward a more conversational, chatbot-like experience akin to alternatives such as ChatGPT.

Past internal struggles with its AI strategy, including leadership departures and earlier delays, have contributed to this moment, making the Google partnership a pivotal shift for Apple’s AI roadmap.

Practical takeaway: Apple’s upcoming Gemini-powered Siri could finally meaningfully improve the assistant’s usefulness by acting on context and personal data, but the initial February reveal will be just the first step in a broader rollout of deeper capabilities through 2026.

🧩 Closing Thought

Taken together, these developments underscore a broader pattern: the next phase of AI isn’t being decided by smarter models alone, but by who can translate intelligence into experiences that fit real users. Apple’s Gemini-powered Siri represents a pragmatic admission that AI assistants live or die by execution, not ambition. Sparkli’s approach shows that even powerful generative models fail if they don’t align with how specific audiences, in this case, children, actually learn and engage. As AI matures, success will increasingly belong to teams that understand distribution, context, and human behavior as well as they understand technology.

Keep Reading