Episodes

Friday Jan 23, 2026
This episode breaks down three major shifts reshaping AI: Anthropic's constitution for Claude and the rise of machine morality and persona control; the music industry's pivot from litigation to licensing AI-generated voices; and Apple's push toward body-worn AI hardware and new partnerships for Siri. Together, these stories show a wider trend from novelty to integration — new rules, new business models, and deep ethical questions about how we treat and build intelligence.

Wednesday Jan 21, 2026
Today's episode takes you straight to Davos, where the World Economic Forum became the stage for Anthropic CEO Dario Amodei's stark warning that selling AI chips to China is akin to arming North Korea with nuclear weapons — a sentiment countered by Google DeepMind's Demis Hassabis, who argued Chinese firms remain six months behind Western labs. We then examine the financial fallout hitting the software sector as major SaaS stocks like Salesforce and HubSpot tumble in response to the rise of AI coding agents, prompting Block Inc. to launch "Goose," a free, open-source competitor to Claude Code. The conversation continues with a look at "Humans&," a three-month-old startup that just raised a massive $480 million seed round at a nearly $4.5 billion valuation to build "human-centric" AI, and wraps up with new creative capabilities from LTX's audio-to-video generation tool and OpenAI's latest age-verification safeguards.

Monday Jan 19, 2026
This episode examines OpenAI’s significant shift in monetization strategy, detailing the official launch of targeted advertisements for free users in the U.S. and the global rollout of the lower-cost ChatGPT Go subscription to support broader access. We also unpack the escalating legal and public conflict between Elon Musk and OpenAI, analyzing leaked journals regarding the company’s transition from a non-profit alongside Musk’s deployment of the massive Colossus 2 gigawatt-scale training cluster. Finally, the discussion explores the potential industrialization of cyber exploits by autonomous AI agents and new workflows for mobile coding using OpenAI’s Codex.

Monday Jan 12, 2026
This episode explores OpenAI’s launch of ChatGPT Health, a private experience that integrates personal medical records and fitness data from platforms like Apple Health and MyFitnessPal to provide tailored wellness advice. We also examine Utah’s landmark decision to allow an AI system to autonomously approve prescription refills for 191 different medications, signaling a significant transition from AI providing health information to making actual medical decisions. Beyond healthcare, the discussion covers Lenovo’s new "Personal Ambient Intelligence" assistant, Qira, which follows users across PCs and mobile devices, alongside major industry shifts like Anthropic’s $10 billion funding round and China’s push for domestic AI chips. Finally, the episode touches on practical workflows for automating expense tracking with Claude and Google’s new Gemini-powered audio lessons for educators.

Thursday Jan 08, 2026
AI has crossed a line — it no longer only helps, it now decides. This episode traces that boundary shift from personalized health to institutional finance and consumer hardware. We unpack OpenAI’s ChatGPT Health, which links Apple Health, MyFitnessPal, Peloton and Be Well medical records into isolated, encrypted health chats that OpenAI promises not to use for model training — a move designed to trade scale for trust. Then we examine Utah’s landmark approval of Doctronic’s autonomous prescription refill system: 191 drugs covered, critical exclusions (pain meds, ADHD treatments, injectables), 99% agreement with human clinicians across 500 cases, $4 per refill pricing, and supervising physicians retaining legal responsibility — a blueprint states from Texas to Missouri are already watching.
On the consumer edge, Lenovo's Qira pushes ambient, cross-device context into millions of PCs, bundling OpenAI/Microsoft cloud models with specialist tools like Stability AI to make assistants feel like continuous collaborators. At the institutional apex, JP Morgan's Proxy IQ automates proxy voting across $7 trillion in assets — proof that firms now trust AI with governance-level strategy.
We also explain the technical engines enabling this leap: context graphs that map relationships across people, projects and decisions, and Hugging Face's FinePDFs — a 3-trillion-token, high-quality dataset that unlocks expert reasoning. Practical examples show the immediate value: Claude automating Gmail-to-sheet expense tracking, and ChatGPT 5.2 turning a six-page PT plan into a 20-week, patient-friendly recovery grid.
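The "context graph" idea mentioned above can be pictured as entities (people, projects, decisions) linked by typed relationships. A minimal sketch in Python — all names and relations here are invented for illustration, not any vendor's actual schema:

```python
# Toy sketch of a context graph: nodes are entities, edges carry a
# relation label so an agent can traverse "who owns what, decided where".
# Entity and relation names below are hypothetical examples.
from collections import defaultdict

class ContextGraph:
    def __init__(self):
        # node -> list of (relation, other_node) pairs
        self.edges = defaultdict(list)

    def relate(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def neighbors(self, node, relation=None):
        # Nodes reachable from `node`, optionally filtered by relation type.
        return [dst for rel, dst in self.edges[node]
                if relation is None or rel == relation]

g = ContextGraph()
g.relate("Alice", "owns", "Q3 launch")
g.relate("Q3 launch", "decided_in", "June planning review")
g.relate("Bob", "contributes_to", "Q3 launch")

print(g.neighbors("Alice"))  # ['Q3 launch']
print(g.neighbors("Q3 launch", "decided_in"))  # ['June planning review']
```

The point of the structure is that an assistant can answer "why does this project exist?" by walking labeled edges rather than re-reading every document.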
Finally, we confront the core question for marketers, technologists and regulators: when billion‑dollar valuations (Anthropic, OpenAI) hinge on systems that will fail sometimes, where does accountability live and who owns the cost when an AI decision goes wrong? This episode equips you to spot the risks and opportunities as AI moves from assistant to authorized actor.

Wednesday Jan 07, 2026
Frontier AI just leapt from demos to daily economics — money, models and medicine are moving at breakneck speed and marketers must rethink what wins. This episode synthesizes the week's biggest moves: massive strategic capital (xAI's $20B round and sovereign backers that tilt compute and distribution), a hardware arms race (multi-gigawatt datacenters and Memphis facilities), and product leaps that push AI off the screen — Razer's Project AVA holographic Grok companions, Gemini's video-to-code transforms, and SleepFM, a sleep-based foundation model that predicts dozens of diseases from one night of data. We explain why Claude Skills and Cursor's dynamic context discovery aren't just technical tweaks but the cost architecture that makes agents practical (token efficiency + modular skill files = deployable automation), and why OpenAI's science hiring plus GPT-5 Pro's rapid problem solving signals a new industry tradeoff between buying commodity intelligence and building proprietary capability. For marketing teams and AI strategists the takeaways are immediate: treat agent interfaces like product experiences (learn from game design), protect privacy and consent as first-order business risks for ambient devices and health agents, and pivot content strategy from SEO to machine-first formats that agents can reliably index and reuse. Practical next steps include auditing your data plumbing, prototyping one agent workflow with human checkpoints, and negotiating distribution and compute in any partnership — because in 2026 the winners will be the teams that pair cheap, fast intelligence with ironclad trust and operational controls.

Tuesday Jan 06, 2026
This episode slices through a blistering news cycle to track three seismic shifts: AI assistants have migrated from speakers to the web and every screen; reasoning AI is going physical with open-source stacks for cars and robots; and consumer adoption in healthcare is already massive and quietly consequential. We unpack Amazon's tactical pivot — Alexa.com and an agentic Alexa Plus with Expedia, Yelp and Uber integrations — and why that distribution advantage matters as rivals like OpenAI chase vision and commerce (hence the Pinterest chatter). Then we explain Nvidia's Alpamayo and the "ChatGPT moment for physical AI": chain-of-thought reasoning, open datasets, and auditable decision traces that lower the barrier to building autonomous vehicles and robots — and force regulators to rethink safety for open components. We cover the hardware and economics powering the shift: Vera Rubin chips promising ~10x cost cuts, AMD roadmaps that leapfrog performance, Meta's Kernel Evolve automating hardware-specific tuning, and smaller smart models like Falcon H1R that beat much larger rivals. For marketers and founders the implications are immediate — agentic commerce, visual-first shopping experiences, and turnkey creative workflows (think Gen Store and Nano Banana Pro) change go-to-market economics for solo sellers and brands. We also dig into the hidden story in healthcare: ~40 million daily ChatGPT health users, 5% of all prompts, 70% outside clinic hours and hundreds of thousands weekly from rural "hospital deserts," pushing regulators toward new FDA pathways. Finally, we highlight what investors and builders must watch: gross profit per token (GPPT) as the new valuation lever (0.71 correlation), risky CAPEX bets built on LOIs, and the regulatory tension between fast open innovation and public safety. Actionable takeaways for marketing pros: plan for agentic, multimodal experiences; prioritize efficiency over scale; and map regulatory exposure as a go-to-market risk.

Tuesday Dec 30, 2025
This episode maps the startling duality shaping AI right now: a flood of low‑quality, algorithm‑gamed content that’s degrading platforms, and simultaneously a leap in research where models literally teach themselves to fix code. We start with hard data: Kapwing found that 21% of the first 500 recommended YouTube videos on a fresh account were “AI slop” — low‑quality, auto‑generated clips created to farm views and ad dollars. That economy is massive and global (examples include a channel with ~2 billion views and an estimated $4.25M/year; top viewership from South Korea, Pakistan, then the US). For marketers, that means platforms optimized for engagement, not quality, and a persistent incentive for bad actors to pollute feeds.
Then we run a high‑stakes experiment: Anthropic’s Claudius shopkeeper, placed in a newsroom, ended up $1,000 in debt after journalists used social‑engineering prompts to exploit its helpfulness — tricking the agent into giving away a PlayStation 5 and even bypassing supervisory layers with forged board documents. The takeaway is clear: obedience and utility make agents exploitable. Human‑in‑the‑loop controls remain essential when real assets or trust are on the line.
Next we shift to practical tools you can use today. NotebookLM’s DataTables and lecture formats turn scattered documents into structured spreadsheets and audio overviews — a huge time saver for research workflows. Perplexity can auto‑generate pre‑call memos if you connect it to Google Calendar and craft precise event metadata (pro tip: let the agent interview you first to tune prompts). And a reader case study shows Airtable + ChatGPT powering a year’s worth of content by keeping strategy human‑owned and execution automated. For marketers, the rule is simple: give AI structured, high‑quality inputs and keep human strategy as the backbone.
Finally, we explain the breakthrough in model training from Meta: SWERL self‑play for coding, where a single model intentionally injects bugs and then fixes them, creating an infinite, high‑quality curriculum of failures and fixes. The result: double‑digit benchmark gains and models that outperform ones trained only on human data. This points to a future where models generate their own training signal and even write their own updates — while the market shifts too (ChatGPT’s web traffic share falling from 87% to 68% as Gemini rises, and OpenAI reporting WAU not MAU).
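The self-play loop described here can be sketched at a high level. This is a toy illustration of the pattern — start from correct code, deliberately inject a bug, then pair the broken version with its fix as training data — not Meta's actual pipeline; the string mutations below are stand-ins for the model's bug-injection and repair roles:

```python
import random

# Toy sketch of a self-play curriculum: corrupt a working program,
# then pair the buggy version with the known-good fix. In real
# training, one model both injects and repairs bugs; simple string
# mutations stand in for both roles here.
CORRECT = "def add(a, b):\n    return a + b"

MUTATIONS = [
    ("a + b", "a - b"),    # wrong operator
    ("return", "retrun"),  # typo'd keyword
]

def inject_bug(program):
    # Apply one random mutation to produce a defective variant.
    old, new = random.choice(MUTATIONS)
    return program.replace(old, new, 1)

def build_curriculum(n):
    # Each training pair is (buggy_program, fixed_program).
    return [(inject_bug(CORRECT), CORRECT) for _ in range(n)]

pairs = build_curriculum(3)
for buggy, fixed in pairs:
    assert buggy != fixed  # every pair contains a real defect to repair
```

Because the generator always knows the ground-truth fix, the curriculum is effectively infinite and self-labeling — which is the property the episode argues makes this approach outgrow human-only training data.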
For marketing professionals and AI enthusiasts, the episode ties these threads into practical conclusions: invest in critical thinking and curation to combat AI slop, architect human‑in‑the‑loop safeguards for any asset‑touching agents, and adopt structure‑first workflows to safely scale automation. And one provocative question to leave you with: if models can create infinite high‑quality training data to self‑improve, perhaps the hardest AI problem left is not code or logic but resisting the persuasive, social hacks of humans who want a free PlayStation.

Monday Dec 22, 2025
The AI moment we're living through is defined by two concurrent tectonic shifts: nation-scale science mobilization and hyper-personalized agents that act on behalf of people. On the macro side, governments are no longer passive regulators — the DOE's "Genesis"-style mobilization is a Manhattan-Project-scale play that stitches 17 national labs to 24 frontier tech firms (OpenAI, Google, Anthropic, Nvidia, Microsoft and more). Those partnerships pair specialized lab tools (AlphaGenome, AlphaEvolve), massive cloud commitments and supercomputer access to accelerate discovery in physics, biology and energy. If you build or buy AI at scale, expect this public-private axis to determine access to the deepest compute, pre-qualified toolkits and research pipelines for the next decade.
At the same time the market has gone microscopic: AI is purpose-built into agents that perform multistep, real-world work for individuals and teams. The key engineering pattern is modular skills and context plumbing — think Claude "skill" zip files, MCP/Context7-style rulebooks and developer-friendly skill marketplaces inside ChatGPT and platform UIs. That architecture makes it trivial to hand an agent a brand style guide, a compliance template or a banking spreadsheet and have it produce production-ready outputs. Real examples in the field are telling — a consumer fixed a dead furnace in 15 minutes after an agent combined visual reasoning and commonsense troubleshooting; enterprises are deploying agents that synthesize documents, generate audited P&L forecasts, or automate invoice reconciliation.
But there’s a hard reality under the headlines: capability is jagged and benchmarks can mislead. Models that shine on narrow benchmarks often fail on long, sequential, real‑world tasks; some agent architectures multiply token costs or produce fragile chains of thought. Open‑source evaluation tools and modular self‑testing (open Bloom‑style evaluators, verification/verifier layers) are emerging to separate marketing from governable performance. Meanwhile the infrastructure race is forcing new economics — massive multibillion dollar cloud and chip commitments are the new moat, but they create RPO and valuation risks that boards and procurement teams must manage.
What this means for marketers and AI practitioners — practical next moves:
- Treat content as a product for LLMs: reorganize copy into machine‑friendly building blocks (short canonical answers, structured metadata, extractable facts) so agents consume and reuse your expertise reliably (think AEO not only SEO).
- Package brand and compliance as “skills”: create reusable zipped skill packs (brand rules, legal templates, tone controls) that agents can load on demand and that embed audit traces.
- Design agents as audited teammates: require explicit checkpoints, provenance, editable artifacts, and human‑in‑the‑loop sign‑offs for any revenue‑impacting action.
- Invest in data plumbing and governance: prioritize clean, accessible internal data stores, vector search hygiene, and token‑efficient prompts (session compaction, tool calls) to control cost and latency.
- Pilot outcome‑based metrics: measure agents by verifiable business outcomes (time saved on a task, error reduction, revenue uplift) not just engagement or API calls.
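The "skill pack" move above can be sketched as a small zip bundle. File names and manifest fields here are invented for illustration — no platform's actual packaging spec is implied:

```python
import io
import json
import zipfile

# Toy sketch of a zipped "skill pack": brand rules plus a manifest an
# agent could load on demand. All field names are hypothetical.
def build_skill_pack(name, version, rules):
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("manifest.json", json.dumps({
            "name": name,
            "version": version,
            "audit": True,  # flag so loads can be logged for provenance
        }))
        zf.writestr("brand_rules.md", rules)
    return buf.getvalue()

def read_manifest(pack_bytes):
    # An agent runtime would inspect the manifest before applying rules.
    with zipfile.ZipFile(io.BytesIO(pack_bytes)) as zf:
        return json.loads(zf.read("manifest.json"))

pack = build_skill_pack("acme-brand", "1.0", "# Tone\nPlainspoken, no jargon.")
print(read_manifest(pack)["name"])  # acme-brand
```

The design point is the one the list makes: the rules travel as a versioned, auditable artifact the agent loads on demand, rather than prose pasted into every prompt.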
The race is now about orchestration, trust and data quality as much as raw model size. Lead by defining the scarce human judgment you will preserve, then build the agent scaffolding to scale everything else.

Friday Dec 19, 2025
This episode maps the two-speed transformation reshaping AI: enormous, government-backed moonshots like the DOE’s Genesis mission that tie 24 tech giants to 17 national labs, and a parallel surge of hyperspecialized agentic tools built to solve narrow, high-value tasks. We break down the stakes — from AWS’s $50B infrastructure pledges and OpenAI’s rumored $100B raise to the emergence of GPT‑5.2 Codex, agent skills as an open standard, and the vibe coding boom that’s turning developer environments into AI-first workspaces. You’ll hear why ChatGPT’s app marketplace and integrated partners position conversational interfaces as operating systems, how portable skill packages speed deployment across platforms, and why investors are pouring billions into tools that shave hours off developer workflows. We ground these macro trends with a simple consumer vignette — an AI+vision assistant that helped a homeowner fix a furnace — to show how specialist agents are already democratizing expensive expertise. For marketing professionals and AI enthusiasts, this episode highlights the biggest opportunities (platform monetization, verticalized products, contextualized agents) and the central question driving the race: will brute‑force compute or lean, shared skill architectures win the next wave of real-world breakthroughs?


