🔄 Agent Lifecycle

Follow your agent's journey from first thought to final response! 🔄 Understanding the lifecycle is like having X-ray vision into your AI's mind - see every step, hook into every moment, and optimize every interaction!

🌟 The Journey of an Agent Request

When you ask an agent to do something, it goes through a carefully orchestrated lifecycle that ensures reliability, traceability, and extensibility. Let's follow a request from start to finish!

🗺️ The Complete Flow

1. Your Code → AgentExecutor (Fluent API)
2. AgentExecutor → AgentManager (Orchestration)
3. AgentManager → Agent Instance (Your Logic)
4. Agent → LLM Provider (AI Processing)
5. LLM → Tools (If Needed)
6. Response → Back Through the Chain

🚀 Step 1: Starting the Journey

Everything begins with a simple, fluent API call. Here's how you kick things off:

// The journey begins!
$response = CustomerSupportAgent::ask('Where is my order?')
    ->forUser($user)
    ->withSession('session-123')
    ->withContext(['order_id' => 123])
    ->temperature(0.7)
    ->execute();

// Or for async processing
DataProcessingAgent::process($bigDataset)
    ->async()
    ->onQueue('heavy-lifting')
    ->execute();

💡 What Happens Behind the Scenes?

The AgentExecutor collects all your configuration and passes it to the AgentManager, which orchestrates the entire execution. If you chose async, it dispatches a job to your queue!
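To make that hand-off concrete, here's a rough, self-contained sketch of the idea in plain PHP - simplified stand-ins, not Vizra's actual classes: a fluent builder collects configuration, then either runs inline or defers the work when async() was requested.

// Conceptual sketch only - illustrative stand-ins, not the framework's real internals.
class SketchExecutor
{
    private array $config = [];
    private bool $async = false;

    public function withContext(array $context): static
    {
        $this->config['context'] = $context;
        return $this;
    }

    public function async(): static
    {
        $this->async = true;
        return $this;
    }

    public function execute(): string
    {
        if ($this->async) {
            // In the real framework this is where a queued job would be dispatched.
            return 'queued: ' . json_encode($this->config);
        }

        // Synchronous path: the collected config is handed to the orchestrator.
        return 'ran synchronously: ' . json_encode($this->config);
    }
}

// (new SketchExecutor())->withContext(['order_id' => 123])->async()->execute();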

🎭 Step 2: The Lifecycle Hooks

Your agent has six powerful hooks to customize behavior at key moments. These are your backstage passes to the agent's thinking process!

🎬 beforeLlmCall() - Pre-Processing Magic

Called right before sending messages to the AI. Perfect for adding context, filtering messages, or starting traces.

public function beforeLlmCall(array $inputMessages, AgentContext $context): array
{
    // Add system context, filter sensitive data, start tracing
    return $inputMessages;
}
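For instance, you might prepend extra system context before the call. The sketch below assumes each message is an associative array with 'role' and 'content' keys - check the exact message shape in your installed version.

public function beforeLlmCall(array $inputMessages, AgentContext $context): array
{
    // Assumes messages are ['role' => ..., 'content' => ...] arrays -
    // verify the exact shape against your installed version.
    array_unshift($inputMessages, [
        'role'    => 'system',
        'content' => 'Prefer concise answers and cite order data when available.',
    ]);

    return $inputMessages;
}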

🎁 afterLlmResponse() - Post-Processing Power

Called after receiving the AI's response. Transform responses, extract insights, or trigger side effects.

public function afterLlmResponse(Response|Generator $response, AgentContext $context): mixed
{
    // Log token usage: $response->usage
    // Extract and store memories
    return $response;
}
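As one hedged example, you could record token usage when the response is a fully materialised Response (a streaming call hands this hook a Generator instead); the exact fields on the usage object may vary by version.

public function afterLlmResponse(Response|Generator $response, AgentContext $context): mixed
{
    // Only a non-streaming Response carries usage data here.
    // Log is Laravel's Illuminate\Support\Facades\Log facade.
    if ($response instanceof Response) {
        Log::info('LLM call completed', ['usage' => (array) $response->usage]);
    }

    return $response;
}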

🔧 beforeToolCall() - Tool Preparation

Called before executing any tool. Validate inputs, add authentication, or modify parameters.

public function beforeToolCall(string $toolName, array $arguments, AgentContext $context): array
{
    // Validate permissions, add API keys, log tool usage
    return $arguments;
}
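For example, you could guard a sensitive tool behind an explicit confirmation flag - the tool name and rule below are made up purely for illustration.

public function beforeToolCall(string $toolName, array $arguments, AgentContext $context): array
{
    // Hypothetical policy: the 'issue_refund' tool (illustrative name)
    // may only run when the caller explicitly confirmed the action.
    if ($toolName === 'issue_refund' && empty($arguments['confirmed'])) {
        throw new \RuntimeException("Tool '{$toolName}' requires explicit confirmation.");
    }

    return $arguments;
}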

📊 afterToolResult() - Tool Result Processing

Called after a tool returns results. Format data, handle errors, or cache results.

public function afterToolResult(string $toolName, string $result, AgentContext $context): string
{
    // Process results, cache for future use
    return $result;
}
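A simple illustration: trim oversized tool output before it goes back into the conversation. The 4,000-character limit below is an arbitrary example, not a framework default.

public function afterToolResult(string $toolName, string $result, AgentContext $context): string
{
    // Keep very large tool output from bloating the prompt; the limit
    // here is an arbitrary example, not a framework default.
    if (strlen($result) > 4000) {
        $result = substr($result, 0, 4000) . ' [truncated]';
    }

    return $result;
}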

🤝 Sub-Agent Delegation Hooks

Control how your agent delegates tasks to specialized sub-agents.

// Before delegation
public function beforeSubAgentDelegation(string $subAgentName, string $taskInput, ...): array

// After delegation
public function afterSubAgentDelegation(string $subAgentName, string $result, ...): string

📡 Step 3: Events Along the Way

As your agent works, it broadcasts seven key events that you can listen to for monitoring, logging, or triggering other actions!

🏁 AgentExecutionStarting - When execution begins
🔮 LlmCallInitiating - Before calling the AI
🔧 ToolCallInitiating - Before using a tool
ToolCallCompleted - After a tool finishes
📨 LlmResponseReceived - AI response received
🎯 AgentResponseGenerated - Final response ready
🎉 AgentExecutionFinished - All done!

👂 Listening to Events

// In your EventServiceProvider
protected $listen = [
    AgentExecutionStarting::class => [
        LogAgentActivity::class,
        CheckRateLimits::class,
    ],
    ToolCallInitiating::class => [
        ValidateToolPermissions::class,
        LogToolUsage::class,
    ],
];

// Example event listener
class LogAgentActivity
{
    public function handle(AgentExecutionStarting $event): void
    {
        Log::info('Agent starting', [
            'agent' => $event->agentClass,
            'mode' => $event->mode,
            'user' => $event->userId,
        ]);
    }
}

🧩 Step 4: Key Players in the Lifecycle

Several important components work together to make the agent lifecycle smooth and reliable:

📦 AgentContext

Maintains conversation history, user info, and session state throughout execution

📊 Tracer

Creates detailed execution traces for debugging and performance analysis

💾 StateManager

Persists and loads agent state between requests

🎭 Prism

Handles LLM communication and orchestrates tool execution (up to 5 steps)

Streaming: Real-time Responses

When you enable streaming, the lifecycle adapts to deliver responses token by token:

// Enable streaming for real-time responses
$stream = CreativeWriterAgent::ask('Write a story')
    ->streaming()
    ->execute();

foreach ($stream as $chunk) {
    // Each chunk arrives as soon as it's generated
    echo $chunk;
    flush();
}

// Events are still dispatched during streaming!

💡 Pro Tips for Lifecycle Mastery

🔍 Use Tracing for Debugging

The built-in tracer shows you exactly what happened at each step. Access traces with php artisan vizra:trace session-id

📊 Monitor with Events

Listen to lifecycle events for metrics, logging, and alerting. Know exactly what your agents are doing!

🎣 Hook Into Everything

The six lifecycle hooks give you control at every critical moment. Use them wisely!

🚀 Async for Heavy Lifting

Use async execution for long-running tasks. The same lifecycle applies, just in the background!