The TaskToolSet lets a parent agent launch sub-agents that handle complex, multi-step tasks autonomously. Each sub-agent runs synchronously — the parent blocks until the sub-agent finishes and returns its result. Sub-agents can be resumed later using a task ID, preserving their full conversation context.

This pattern is useful when:
- Delegating specialized work to purpose-built sub-agents
- Breaking a problem into sequential steps handled by different experts
- Maintaining conversational context across multiple interactions with a sub-agent
- Isolating sub-task complexity from the parent agent's context
TaskToolSet is designed for sequential, blocking tasks. For parallel sub-agent execution, see Sub-Agent Delegation.
The agent calls the task tool with a prompt and a sub-agent type. The TaskManager creates (or resumes) a sub-agent conversation, runs it to completion, and returns the result to the parent.
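The create/resume lifecycle described above can be modeled with a toy stand-in. This is illustrative only: `ToyTaskManager` and `run_task` are hypothetical names invented for this sketch, not the SDK's actual API. It shows the essential bookkeeping, namely that a fresh call mints a new task ID with an empty history, while a `resume` call reloads the persisted history so context accumulates across calls.

```python
import itertools


class ToyTaskManager:
    """Illustrative model (NOT the SDK's TaskManager) of the create/resume
    flow: each task's conversation history is kept under a task ID so a
    later call with resume=<id> continues where it left off."""

    def __init__(self):
        self._tasks = {}  # task_id -> conversation history
        self._ids = itertools.count(1)

    def run_task(self, prompt, subagent_type=None, resume=None):
        if resume is not None:
            # Reload the persisted conversation for this task ID.
            task_id, history = resume, self._tasks[resume]
        else:
            # Fresh task: mint an ID like "task_00000001", start empty.
            task_id, history = f"task_{next(self._ids):08d}", []
        history.append(("user", prompt))
        reply = f"reply #{(len(history) + 1) // 2}"  # stand-in for LLM output
        history.append(("assistant", reply))
        self._tasks[task_id] = history  # "persisted"; full context survives
        return task_id, reply


mgr = ToyTaskManager()
task_id, question = mgr.run_task("Generate a zebra question", subagent_type="quiz_expert")
print(task_id)  # → task_00000001
same_id, verdict = mgr.run_task("The user answered A. Correct?", resume=task_id)
assert same_id == task_id  # resumed task keeps its ID and its full history
```

The real TaskManager persists conversations to disk rather than in a dict, but the parent-facing contract is the same: run to completion, return a result plus a task ID, and accept that ID later to continue the conversation.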
A key feature of TaskToolSet is the ability to resume a previously completed task. When a task finishes, its conversation is persisted to disk. Passing the resume parameter with the task ID reloads the full conversation history, allowing the sub-agent to continue where it left off.
```python
# First call — sub-agent generates a quiz question
conversation.send_message(
    "Use the task tool with subagent_type='quiz_expert' to generate "
    "a multiple-choice question about zebras."
)
conversation.run()
# The agent receives task_id "task_00000001" in the observation

# Second call — resume the same sub-agent to verify the answer
conversation.send_message(
    "The user answered A. Use the task tool with resume='task_00000001' "
    "to ask the same sub-agent whether that answer is correct."
)
conversation.run()
```
"""Animal Quiz with Task Tool SetDemonstrates the TaskToolSet with a main agent delegating to ananimal-expert sub-agent. The flow is:1. User names an animal.2. Main agent delegates to the "animal_expert" sub-agent to generate a multiple-choice question about that animal.3. Main agent shows the question to the user.4. User picks an answer.5. Main agent resumes the same sub-agent to check whether the answer is correct and explain why."""import osfrom pydantic import SecretStrfrom openhands.sdk import LLM, Agent, AgentContext, Conversation, Toolfrom openhands.sdk.context import Skillfrom openhands.tools.delegate import DelegationVisualizer, register_agentfrom openhands.tools.task import TaskToolSet# ── LLM setup ────────────────────────────────────────────────────────api_key = os.getenv("LLM_API_KEY")assert api_key is not None, "LLM_API_KEY environment variable is not set."llm = LLM( model=os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929"), api_key=SecretStr(api_key), base_url=os.getenv("LLM_BASE_URL", None),)# ── Register the animal expert sub-agent ─────────────────────────────def create_animal_expert(llm: LLM) -> Agent: """Factory for the animal-expert sub-agent.""" return Agent( llm=llm, tools=[], # no tools needed – pure knowledge agent_context=AgentContext( skills=[ Skill( name="animal_expertise", content=( "You are a world-class zoologist. " "When asked to generate a quiz question, respond with " "EXACTLY this format and nothing else:\n\n" "Question: <question text>\n" "A) <option>\n" "B) <option>\n" "C) <option>\n" "D) <option>\n\n" "When asked to verify an answer, state whether it is " "correct or incorrect, reveal the right answer, and " "give a short fun-fact explanation." 
), trigger=None, # always active ) ], system_message_suffix="Keep every response concise.", ), )register_agent( name="animal_expert", factory_func=create_animal_expert, description="Zoologist that creates and verifies animal quiz questions.",)# ── Main agent ───────────────────────────────────────────────────────main_agent = Agent( llm=llm, tools=[Tool(name=TaskToolSet.name)],)conversation = Conversation( agent=main_agent, workspace=os.getcwd(), visualizer=DelegationVisualizer(name="QuizHost"),)# ── Round 1: generate the question ──────────────────────────────────animal = input("Pick an animal: ")conversation.send_message( f"The user chose the animal: {animal}. " "Use the task tool to delegate to the 'animal_expert' sub-agent " "and ask it to generate a single multiple-choice question (A-D) " f"about {animal}. " "Once you get the question back, display it to the user exactly " "as the sub-agent returned it and ask the user to pick A, B, C, or D.")conversation.run()# ── Round 2: verify the answer ──────────────────────────────────────answer = input("Your answer (A/B/C/D): ")conversation.send_message( f"The user answered: {answer}. " "Use the task tool to delegate to the 'animal_expert' sub-agent again " f"and ask it whether '{answer}' is the correct answer to the question " "it generated earlier. Don't include the question; instead, use the " "'resume' parameter to continue the previous conversation.")conversation.run()# ── Done ────────────────────────────────────────────────────────────cost = conversation.conversation_stats.get_combined_metrics().accumulated_costprint(f"\nEXAMPLE_COST: {cost}")
You can run the example code as-is.
The model name should follow the LiteLLM convention: `provider/model_name` (e.g., `anthropic/claude-sonnet-4-5-20250929`, `openai/gpt-4o`).
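A quick way to sanity-check a model identifier before passing it to the LLM constructor is to split it on the first slash. The `parse_model_id` helper below is hypothetical (not part of the SDK or LiteLLM), shown only to make the `provider/model_name` convention concrete:

```python
def parse_model_id(model_id: str) -> tuple[str, str]:
    """Split a LiteLLM-style identifier into (provider, model_name).

    Splits on the FIRST slash only, since the model_name part may itself
    contain slashes for some providers. Raises ValueError if either half
    is missing."""
    provider, sep, name = model_id.partition("/")
    if not sep or not provider or not name:
        raise ValueError(f"expected 'provider/model_name', got {model_id!r}")
    return provider, name


provider, name = parse_model_id("anthropic/claude-sonnet-4-5-20250929")
print(provider)  # → anthropic
print(name)      # → claude-sonnet-4-5-20250929
```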
The `LLM_API_KEY` should be the API key for your chosen provider.
ChatGPT Plus/Pro subscribers: You can use `LLM.subscription_login()` to authenticate with your ChatGPT account and access Codex models without consuming API credits. See the LLM Subscriptions guide for details.