👋 Good morning! Today’s highlights show AI making waves in unexpected places, from diagnosing medical cases in seconds to helping shoppers buy more efficiently, and even outperforming top mathematicians on one of the toughest competitions in the world. At the same time, debates rage over AI-detection tools mislabeling historic texts and platforms like ChatGPT preparing to introduce ads. These examples underscore that AI is no longer a distant concept; it’s actively reshaping decisions, workflows, and even our perception of authorship and expertise.
🧓 America’s Founding Fathers Used ChatGPT?

AI-detection tools are having a full-blown credibility crisis. What started as scattered complaints from frustrated students has now turned into a broad online backlash, especially on X, where people are testing historic texts, classic literature, and handwritten letters, only to watch detectors confidently label them as “AI-generated.”
The failures aren’t subtle.
Users have run 19th-century novels, original handwritten correspondence, and even personal journals through popular detection tools, and watched the tools flag them as machine-written. The most absurd example so far: the 1776 Declaration of Independence being “detected” as AI-authored. That test has gone viral multiple times, partly because it makes the core issue impossible to ignore. If a model can’t distinguish between foundational historical documents and ChatGPT output, it’s not reliable enough to judge anyone’s work.
The impact goes beyond funny screenshots. Entire classes are reporting mass assignment failures when instructors rely on these tools. Some students say they’ve been accused of cheating despite writing everything themselves, and appeals are often messy because the tools don’t explain their reasoning. The problem is structural: these detectors aren’t actually identifying AI; they’re guessing based on writing patterns, patterns that appear in perfectly normal human writing from any era.
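To make that failure mode concrete, here is a deliberately simplified toy score in Python. It is not the algorithm behind any real detector (those typically lean on language-model perplexity), but it captures the same flawed logic: reward predictable vocabulary and evenly paced sentences, which is exactly what polished 18th- and 19th-century prose looks like.

```python
# Toy "AI-likeness" score -- an illustration only, not any real product's method.
# Heuristic: predictable vocabulary + evenly sized sentences => "looks AI-generated".
import re
import statistics

COMMON_WORDS = {"the", "of", "and", "to", "in", "a", "that", "is", "for",
                "it", "as", "with", "be", "by", "are", "this", "which"}

def ai_likeness(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or len(sentences) < 2:
        return 0.0
    # How "predictable" the vocabulary is (share of very common words).
    common_ratio = sum(w in COMMON_WORDS for w in words) / len(words)
    # How uneven the sentence lengths are ("burstiness"); human writing varies a lot.
    lengths = [len(s.split()) for s in sentences]
    burstiness = statistics.pstdev(lengths) / (statistics.mean(lengths) or 1)
    return round(common_ratio * (1.0 - min(burstiness, 1.0)), 3)

formal = ("We hold these truths to be self-evident, that all men are created "
          "equal. They are endowed with certain unalienable rights. Among these "
          "are life, liberty and the pursuit of happiness.")
casual = ("Wow. I literally cannot believe my professor flagged my essay today. "
          "Cheating? Me? No way.")

# The formal, evenly paced historic prose scores higher than the bursty casual
# note, even though both are human-written -- the core false-positive problem.
print(ai_likeness(formal), ai_likeness(casual))
```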
Bottom line: AI-detection tech isn’t ready for high-stakes use. It produces false positives, can’t justify its conclusions, and collapses when tested on anything outside its training assumptions. For educators, companies, and policymakers, the takeaway is simple but uncomfortable: if you need proof of authorship, use methods that actually rely on evidence, not statistical guesswork dressed up as certainty.
🤖 AI in Action: Amazon’s Rufus Chatbot — Real-World Proof That AI Can Drive Sales
On Black Friday 2025, Amazon saw a significant boost in conversions when shoppers used Rufus. According to data from market-intelligence firm Sensor Tower, U.S. Amazon sessions that included Rufus and resulted in a purchase surged 100% compared with the trailing 30 days, while purchase sessions without Rufus rose only ~20%.
Breaking it down further: day over day, Rufus sessions that ended in a sale increased 75%, compared with just 35% for non-Rufus sessions. Meanwhile, overall website sessions across Amazon rose 20%, but sessions involving Rufus rose 35%.
Rufus, which launched in beta in early 2024 and has since rolled out broadly, acts as a conversational assistant: it helps users find products, compare options, and make decisions via natural-language queries instead of traditional search.
This isn’t just a minor bump in engagement. The jump suggests that, for many buyers, an AI assistant turns browsing into purchases more effectively than the standard shopping flow does. For e-commerce and retail, that means conversational AI is no longer a novelty; it’s becoming a core part of the customer journey.
Rufus’s performance underlines a bigger trend: AI doesn’t just automate tasks. Deployed thoughtfully, it can reshape how decisions get made, how users shop, and how value flows through digital products.
⚙️In Focus: ChatGPT Might Start Showing Ads — What That Means for AI Users
A new leak suggests that OpenAI is preparing to introduce advertising in ChatGPT, at least for users on the free tier. The evidence comes from a beta version of ChatGPT’s Android app (v1.2025.329), where developers found code references to an “ads feature,” “search ad,” “search ads carousel,” and “bazaar content.”
With roughly 800 million weekly users — and about 95% of them on the free tier — ads could generate a sizable revenue stream for OpenAI.
That shift carries trade-offs. On the one hand, ads might allow OpenAI to keep ChatGPT accessible to a very large user base. On the other hand, injecting ads or sponsored content into a conversational AI threatens one of ChatGPT’s core appeals: a relatively clean, unbiased interface where responses are based on knowledge and reasoning, not commercial incentives.
For researchers, founders, and tech-savvy users, the practical takeaway is this: treat any future shopping suggestions, tool recommendations, or product comparisons coming via ChatGPT with scepticism, and double-check them against independent sources.
Bottom line: ChatGPT is evolving. If ads go live, the tool may begin to resemble traditional ad-supported platforms more than a pure AI assistant. For users who care about objectivity and unbiased recommendations, that calls for increased vigilance.
💡Quick Hits and Numbers
An AI system developed by Shanghai AI Lab and local hospitals reportedly diagnosed a complex gastrointestinal case in under 2 seconds, while human doctors took about 13 minutes during a live diagnostic showdown in Shanghai.
Apple names a former Microsoft and Google exec to replace its retiring AI chief.
DeepSeek launches DeepSeekMath-V2, an open-weights maths AI that scored 118/120 on the 2024 Putnam Competition, outperforming nearly all human competitors by generating and self-verifying full natural-language proofs.
🧩Closing thought
AI isn’t just a tool; it’s a disruptive collaborator, capable of boosting efficiency, augmenting human decision-making, and challenging assumptions about expertise and trust. From medical diagnostics to online shopping, and from mathematical problem-solving to debates over AI detection, the question isn’t whether AI can perform tasks; it’s how we integrate it responsibly and thoughtfully into the work we do and the lives we lead.
