Lecture 17: LLM Agents & Tool Use
From prompting to acting: reasoning, tools, and multi-agent systems
Overview
Large Language Models are more than chatbots—they can plan, call tools, and take actions. This lecture shows how to turn an LLM into an agent: structuring outputs, invoking external functions/APIs, and iterating with reasoning-and-action loops (ReAct). We contrast naive prompting with constrained and schema-validated approaches, calibrate model decisions via log-prob scoring, and discuss evaluation, safety, and failure modes when models act in the real world.
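The schema-validated approach mentioned above can be sketched as follows. This is a minimal illustration (the tool names, the `parse_tool_call` helper, and the simulated model output are all hypothetical): the model is prompted to emit JSON naming one of the registered tools, and we validate that output against the tool's schema before executing anything.

```python
import json

# Registered tools and their required argument types (illustrative schemas).
TOOLS = {
    "get_weather": {"required": {"city": str}},
    "calculator": {"required": {"expression": str}},
}

def parse_tool_call(raw: str) -> tuple[str, dict]:
    """Parse and validate a model's JSON tool call, raising on any mismatch."""
    call = json.loads(raw)                      # fails fast on malformed JSON
    name, args = call["tool"], call["arguments"]
    schema = TOOLS[name]                        # unknown tool name -> KeyError
    for arg, typ in schema["required"].items():
        if not isinstance(args.get(arg), typ):
            raise ValueError(f"{name}: argument {arg!r} must be {typ.__name__}")
    return name, args

# Simulated model output -- in a real system this string comes from the LLM.
raw_output = '{"tool": "get_weather", "arguments": {"city": "Boston"}}'
name, args = parse_tool_call(raw_output)
print(name, args)  # get_weather {'city': 'Boston'}
```

Validating before dispatch is the key design choice: a malformed or hallucinated tool call fails loudly at the parsing boundary instead of reaching the external API.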
Learning Objectives
By the end of this lecture, you will be able to:
- Understand LLM versatility - Recognize that LLMs are general-purpose text processors capable of classification, extraction, reasoning, and generation tasks
- Implement tool use - Build systems that enable LLMs to interact with external functions and APIs through structured tool calling
- Create ReAct agents - Develop agents that combine reasoning and acting in an iterative loop to solve multi-step problems
- Fine-tune models - Customize language models for specific domains through efficient fine-tuning techniques
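The reasoning-and-action loop behind a ReAct agent can be sketched in a few lines. This is a toy version under stated assumptions: the scripted `(thought, action, argument)` steps stand in for real LLM outputs, and `calculator` is a hypothetical tool, but the control flow (reason, act, observe, repeat until a final answer) is the same shape a real agent uses.

```python
def calculator(expression: str) -> str:
    """Toy tool: evaluate a restricted arithmetic expression."""
    allowed = set("0123456789+-*/(). ")
    assert set(expression) <= allowed, "unsafe expression"
    return str(eval(expression))

TOOLS = {"calculator": calculator}

# Scripted steps emulating what an LLM would generate at each turn.
SCRIPTED_STEPS = [
    ("I need the product first.", "calculator", "17 * 23"),
    ("Now add 100 to that.", "calculator", "391 + 100"),
    ("I have the answer.", "finish", "491"),
]

def react_loop(steps):
    """Run a ReAct-style loop: each turn reasons, acts, and records the observation."""
    transcript = []
    for thought, action, arg in steps:
        transcript.append(f"Thought: {thought}")
        if action == "finish":
            transcript.append(f"Final Answer: {arg}")
            return arg, transcript
        observation = TOOLS[action](arg)       # act, then observe the result
        transcript.append(f"Action: {action}[{arg}] -> Observation: {observation}")
    raise RuntimeError("agent never produced a final answer")

answer, log = react_loop(SCRIPTED_STEPS)
print("\n".join(log))
```

In a real agent, the transcript of thoughts, actions, and observations is fed back into the model's context at each turn, so later reasoning can condition on earlier tool results.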
Materials
Resources
- Classic Blog Posts:
  - A Visual Guide to LLM Agents — Architectures for reasoning, planning, and tool use
  - LLM Powered Autonomous Agents — Planning, memory, and tool use in autonomous agent systems
Previous: ← Lecture 16: Recurrent Neural Networks | Next: Lecture 18: Coming Soon →