Open Source AI Agent Framework

TinyAgent

A minimal yet powerful AI agent framework. Works with any LiteLLM-supported model, including GPT-5, Claude-4, Gemini, Moonshot, and 100+ others. Features subagent tools for parallel task execution, sandboxed file operations, MCP client support, persistent storage, built-in UI hooks, and an extensible architecture.

View on GitHub
MIT License
Python 3.8+
100 Lines Core
GPT-5 & Claude-4 Ready
Subagent Tools

Get Started in Minutes

Install TinyAgent and build your first AI agent

Installation

pip install tinyagent-py

Basic Usage

import asyncio
from tinyagent import TinyAgent
from tinyagent.tools.subagent import create_general_subagent

async def main():
    # Initialize TinyAgent with new features
    agent = TinyAgent(
        model="gpt-4o-mini",  # or "claude-4", "gpt-5", etc.
        api_key="your-api-key",
        enable_todo_write=True  # Enable TodoWrite tool
    )
    
    # Add a general-purpose subagent for parallel tasks
    helper_subagent = create_general_subagent(
        name="helper",
        model="gpt-4.1-mini",
        max_turns=20,
        enable_python=True,
        enable_shell=True
    )
    agent.add_tool(helper_subagent)
    
    try:
        result = await agent.run("""
        I need help with a complex project:
        1. Create a todo list for this task
        2. Use the helper subagent to research AI trends 2024
        3. Generate a comprehensive report
        
        Track progress with the todo system.
        """)
        print(result)
    finally:
        await agent.close()

asyncio.run(main())

Works with Any LLM Model

Powered by LiteLLM. Switch between models seamlessly.

OpenAI

gpt-5, gpt-4o, gpt-4o-mini, gpt-4, gpt-3.5-turbo

Anthropic

claude-4, claude-3-5-sonnet, claude-3-opus, claude-3-haiku

Google

gemini-pro, gemini-1.5-pro, gemini-1.5-flash

Others

Moonshot Kimi-k2, Cohere, Llama, and 100+ more

Model Switching Examples

OpenAI GPT-5 (Latest)

agent = TinyAgent(
  model="gpt-5",
  api_key="your-openai-key"
)

Claude-4 (Latest)

agent = TinyAgent(
  model="claude-4",
  api_key="your-anthropic-key"
)

Google Gemini

agent = TinyAgent(
  model="gemini-1.5-pro",
  api_key="your-gemini-key"
)

Moonshot Kimi

agent = TinyAgent(
  model="moonshot/moonshot-v1-8k",
  api_key="your-moonshot-key"
)

Powerful Features

Everything you need to build production-ready AI agents

Subagent Tools

Revolutionary parallel task execution system with specialized workers and context isolation

Sandboxed File Tools

Secure file operations: read_file, write_file, update_file, glob, grep with provider sandboxes
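
A quick sketch of how these tools are typically exercised from a prompt. It assumes the sandboxed file tools are already enabled for the agent (the exact enablement option is not shown here, so check the project docs); the ./src path is only an example.

import asyncio
from tinyagent import TinyAgent

async def main():
    # Assumes the sandboxed file tools (read_file, write_file, update_file,
    # glob, grep) are enabled for this agent; see the project docs for the
    # exact configuration option.
    agent = TinyAgent(
        model="gpt-4o-mini",
        api_key="your-api-key"
    )

    try:
        # The model decides which file tools to call; all file access is
        # routed through the configured provider sandbox.
        result = await agent.run(
            "Use glob to find the Python files under ./src, "
            "grep them for TODO comments, and summarize what you find."
        )
        print(result)
    finally:
        await agent.close()

asyncio.run(main())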

Enhanced Shell Tool

Improved bash tool with safety validation, platform-specific tips, and provider-backed execution

TodoWrite Tool

Built-in task management system for tracking progress and organizing complex workflows

Anthropic Prompt Caching

Automatic prompt caching for Claude models reduces API costs by reusing large prompt prefixes across requests
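
The cache hook is registered like any other callback; the same pattern appears in the Production-Ready Agent example further down.

from tinyagent import TinyAgent
from tinyagent.hooks import anthropic_prompt_cache

agent = TinyAgent(model="claude-3-5-sonnet-20241022", api_key="your-anthropic-key")
# Caching is applied automatically to large prompts on Claude models
agent.add_callback(anthropic_prompt_cache())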

Universal Tool Hooks

Control any tool execution via before_tool_execution/after_tool_execution callbacks
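
A minimal sketch of a tool hook. It assumes callbacks are invoked with the event name, the agent, and keyword arguments, and are registered with the same add_callback call used for the UI hooks below; the kwargs key used here (tool_name) is illustrative and may differ in your installed version.

import asyncio
from tinyagent import TinyAgent

# Assumed callback signature: (event_name, agent, **kwargs). The kwargs key
# names below (e.g. tool_name) are illustrative, not confirmed API.
async def tool_audit_hook(event_name, agent, **kwargs):
    if event_name == "before_tool_execution":
        print(f"About to run tool: {kwargs.get('tool_name')}")
    elif event_name == "after_tool_execution":
        print(f"Finished tool: {kwargs.get('tool_name')}")

async def main():
    agent = TinyAgent(model="gpt-4o-mini", api_key="your-api-key")

    # Hooks are registered the same way as the UI callbacks below
    agent.add_callback(tool_audit_hook)

    try:
        result = await agent.run("Summarize the benefits of tool hooks")
        print(result)
    finally:
        await agent.close()

asyncio.run(main())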

GPT-5 & Claude-4 Ready

Latest AI models including GPT-5, Claude-4, and 100+ models via LiteLLM

Advanced Usage with Subagents

Build sophisticated agents with subagent tools, prompt caching, and parallel execution

Production-Ready Agent

import asyncio
from tinyagent import TinyAgent
from tinyagent.storage import JsonFileStorage
from tinyagent.hooks.rich_ui_callback import RichUICallback
from tinyagent.hooks import anthropic_prompt_cache
from tinyagent.tools.subagent import create_research_subagent, create_coding_subagent

async def main():
    # Production-ready agent with all new features
    storage = JsonFileStorage("./sessions")
    ui = RichUICallback(markdown=True, show_thinking=True)
    
    agent = TinyAgent(
        model="claude-3-5-sonnet-20241022",  # or "gpt-5", "claude-4"
        api_key="your-api-key",
        storage=storage,
        session_id="user_session",
        enable_todo_write=True
    )
    
    # Add Anthropic prompt caching for cost reduction
    cache_callback = anthropic_prompt_cache()
    agent.add_callback(cache_callback)
    agent.add_callback(ui)
    
    # Add specialized subagents
    researcher = create_research_subagent("researcher", "gpt-4o", max_turns=20)
    coder = create_coding_subagent("coder", "claude-3-sonnet", max_turns=25)
    agent.add_tool(researcher)
    agent.add_tool(coder)
    
    try:
        result = await agent.run("""
        Complex project workflow:
        1. Use researcher to analyze market trends
        2. Use coder to implement analysis algorithms
        3. Track all tasks with todos
        4. Generate final report
        """)
        print(result)
    finally:
        await agent.close()

asyncio.run(main())

Built-in UI Hooks

Choose your preferred interface: Terminal, Web, or Jupyter

RichUI (Terminal)

Beautiful terminal interface with markdown support

import asyncio
from tinyagent import TinyAgent
from tinyagent.hooks.rich_ui_callback import RichUICallback

async def main():
    # Beautiful terminal UI with Rich library
    ui = RichUICallback(
        markdown=True,
        show_thinking=True,
        show_tool_calls=True
    )
    
    agent = TinyAgent(
        model="o4-mini",
        api_key="your-api-key"
    )
    
    agent.add_callback(ui)
    
    try:
        result = await agent.run("Analyze this data and create a report")
        print(result)
    finally:
        await agent.close()

asyncio.run(main())

GradioUI (Web)

Web-based interface accessible from any browser

import asyncio
from tinyagent import TinyAgent
from tinyagent.hooks.gradio_callback import GradioCallback

async def main():
    # Web-based UI with Gradio
    gradio_ui = GradioCallback(
        show_thinking=True,
        show_tool_calls=True,
        port=7860
    )
    
    agent = TinyAgent(
        model="o4-mini",
        api_key="your-api-key"
    )
    
    agent.add_callback(gradio_ui)
    
    try:
        # Launch web interface at http://localhost:7860
        result = await agent.run("Help me with data analysis")
        print(result)
    finally:
        await agent.close()

asyncio.run(main())

JupyterUI (Notebook)

Interactive interface directly in Jupyter notebooks

# In a Jupyter notebook cell
import asyncio
from tinyagent import TinyAgent

async def main():
    # Interactive UI directly in Jupyter notebooks
    agent = TinyAgent(
        model="o4-mini",
        api_key="your-api-key",
        ui="jupyter"  # Automatically enables JupyterUI
    )
    
    try:
        # Rich interactive output in notebook cells
        result = await agent.run("""
        Create a visualization of sales data:
        1. Generate sample sales data
        2. Create a matplotlib chart
        3. Show analysis insights
        """)
        # Output appears directly in notebook with rich formatting
        return result
    finally:
        await agent.close()

# Run in notebook
await main()

Real-World Examples

See TinyAgent in action with practical use cases

Customer Support Bot

import asyncio
from tinyagent import TinyAgent, tool

@tool("get_order", "Get order status")
def get_order_status(order_id: str) -> str:
    # Your database logic here
    return f"Order {order_id}: Shipped"

async def main():
    bot = TinyAgent(
        model="o4-mini",
        api_key="your-api-key",
        system_prompt="You are a helpful customer support agent"
    )
    
    bot.add_tool(get_order_status)
    
    try:
        result = await bot.run("Check order #12345")
        print(result)
    finally:
        await bot.close()

asyncio.run(main())

Research Assistant

import asyncio
from tinyagent import TinyAgent
from tinyagent.storage import JsonFileStorage

async def main():
    storage = JsonFileStorage("./research_sessions")
    
    researcher = TinyAgent(
        model="o4-mini",
        api_key="your-api-key",
        storage=storage,
        session_id="research_session",
        system_prompt="You are a research assistant"
    )
    
    # Connect to MCP servers for research tools
    await researcher.connect_to_server("npx", ["-y", "@modelcontextprotocol/server-web-search"])
    
    try:
        result = await researcher.run("""
        Research AI trends in 2024 and
        create a summary report
        """)
        print(result)
    finally:
        await researcher.close()

asyncio.run(main())

Why Choose TinyAgent?

🚀 Revolutionary Subagent System

Execute multiple tasks in parallel with specialized AI workers. Each subagent operates independently with complete context isolation, automatic cleanup, and specialized capabilities for research, coding, analysis, and more.
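
For instance, the same create_general_subagent helper from the Basic Usage example can define additional specialized workers; a small sketch of an analysis-focused worker using only parameters shown earlier:

import asyncio
from tinyagent import TinyAgent
from tinyagent.tools.subagent import create_general_subagent

async def main():
    agent = TinyAgent(model="gpt-4o-mini", api_key="your-api-key")

    # A second, analysis-focused worker; it runs with its own isolated
    # context and is cleaned up automatically, as described above.
    analyst = create_general_subagent(
        name="analyst",
        model="gpt-4o-mini",
        max_turns=15,
        enable_python=True,
        enable_shell=False
    )
    agent.add_tool(analyst)

    try:
        result = await agent.run(
            "Ask the analyst subagent to compute summary statistics "
            "for the list [3, 7, 11, 42]."
        )
        print(result)
    finally:
        await agent.close()

asyncio.run(main())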

🔒 Sandboxed File Operations

Secure file tools (read_file, write_file, update_file, glob, grep) route all file access through provider sandboxes. Universal tool hooks let you intercept any tool execution with before/after callbacks for security and auditability.

💰 Cost Optimization

Automatic Anthropic prompt caching reduces API costs for large messages. TodoWrite tool tracks progress to prevent redundant work. Support for cost-effective models like GPT-4o-mini and Claude-3-haiku alongside premium models.

🤖 GPT-5 & Claude-4 Ready

First-class support for the latest AI models including GPT-5 and Claude-4. Enhanced shell tool with safety validation, platform-specific tips, and provider-backed execution. Multiple UI options and comprehensive monitoring built-in.

Ready to Build Your AI Agent?

Start building powerful AI agents with TinyAgent today. It's free and open source.