As AI adoption accelerates, the focus is shifting from single-shot prompts and chat interfaces to persistent, intelligent agents that can plan, act, and collaborate with both humans and tools. But deploying such agents in production, especially in infrastructure- or DevOps-heavy contexts, requires more than LLM magic: it requires architecture.
This talk walks you through building interruptible, tool-calling AI agents that can reason over tasks, handle human input in the loop, cooperate with other agents, and call external tools and APIs dynamically, using modern techniques such as the Model Context Protocol (MCP), loop debugging, and event-based interruptions.
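To make the event-based interruption idea concrete, here is a minimal sketch of an interruptible tool-calling loop. It stubs out the model and tools entirely: `mock_llm`, `check_disk`, and `run_agent` are hypothetical names invented for illustration, not part of MCP or any real agent framework; the point is only the interruption check between tool calls.

```python
import threading

def mock_llm(task, history):
    # Hypothetical stand-in for a real model call: requests a tool
    # three times, then emits a final answer.
    if len(history) < 3:
        return {"tool": "check_disk", "args": {"host": f"node-{len(history)}"}}
    return {"final": f"Completed: {task}"}

def check_disk(host):
    # Mock tool; a real agent would call an external API here.
    return f"{host}: 42% used"

TOOLS = {"check_disk": check_disk}

def run_agent(task, interrupt: threading.Event):
    history = []
    while True:
        # Event-based interruption: check before each step so a human
        # (or a supervising agent) can halt the loop between tool calls.
        if interrupt.is_set():
            return {"status": "interrupted", "history": history}
        step = mock_llm(task, history)
        if "final" in step:
            return {"status": "done", "answer": step["final"], "history": history}
        result = TOOLS[step["tool"]](**step["args"])
        history.append((step["tool"], result))

# Uninterrupted run: the loop completes after three tool calls.
done = run_agent("audit disk usage", threading.Event())

# A pre-set event stops the loop before any tool runs.
stop = threading.Event()
stop.set()
halted = run_agent("audit disk usage", stop)
```

Checking the event between steps, rather than killing the loop mid-call, is what keeps the agent's state consistent and lets a human resume or redirect it later.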