AI Agents
Meet your new AI workforce! Agents are the heart and soul of Vizra ADK - intelligent, conversational, and incredibly powerful. Ready to build agents that will amaze your users and transform your business?
What's an Agent?
Think of agents as your smart AI assistants that can handle complex tasks, remember conversations, and even work with other agents. They're not just chatbots; they're intelligent systems that can analyze data, make decisions, and take actions!
Smart & Contextual
Remembers conversations, understands context, and learns from interactions
Tool-Powered
Uses custom tools to access databases, APIs, and perform real actions
Multi-Model Support
Works with OpenAI, Anthropic Claude, and Google Gemini models
Team Players
Agents can delegate tasks to other specialized agents
Vector Memory
Built-in semantic search and RAG capabilities with public API access
Testing Friendly
Public methods make testing and experimentation a breeze
Creating Your First Agent
Ready to build your first AI agent? It's easier than you think! Let's create a helpful customer support agent that can answer questions and solve problems.
Quick Start with Artisan
Fire up your terminal and run this command:
php artisan vizra:make:agent CustomerSupportAgent
Boom! You've just created your first agent. Let's peek inside and see what makes it tick:
The Agent Blueprint
<?php

namespace App\Agents;

use Vizra\VizraADK\Agents\BaseLlmAgent;

class CustomerSupportAgent extends BaseLlmAgent
{
    protected string $name = 'customer_support';
    protected string $description = 'Handles customer inquiries and support requests';

    protected string $instructions = "You are a helpful customer support assistant.
        Be friendly, professional, and solution-oriented.
        Always prioritize customer satisfaction.";

    // Optional: specify model and parameters
    protected string $model = 'gpt-4o';
    protected ?float $temperature = 0.7;
    protected ?int $maxTokens = 1000;
}
Agent Configuration Options
class AdvancedAgent extends BaseLlmAgent
{
    // Model configuration
    protected string $model = 'gpt-4o';
    protected ?float $temperature = 0.7;
    protected ?int $maxTokens = 1000;
    protected ?float $topP = 0.9;

    // Tools this agent can use
    protected array $tools = [
        OrderLookupTool::class,
        RefundProcessorTool::class,
        EmailSenderTool::class,
    ];

    // Sub-agents this agent can delegate to
    protected array $subAgents = [
        TechnicalSupportAgent::class,
        BillingSupportAgent::class,
    ];

    // Streaming responses (disabled by default; set to true to enable)
    protected bool $streaming = false;
}
Unleashing Your Agent's Powers
Now comes the fun part: putting your agent to work! Vizra ADK gives you a powerful and simple way to interact with your agents.
One Method to Rule Them All
The run() method is your gateway to agent intelligence. Simple, powerful, and flexible, it handles everything from conversations to complex data processing.
Pro tip: The run() method adapts to your needs - whether you're chatting, analyzing data, or processing events!
Let's Chat! Using the Fluent API
Working with agents feels natural with our fluent API. Check out these examples:
// Basic conversational usage
$response = CustomerSupportAgent::run('How do I reset my password?')
    ->go();

// With user context
$response = CustomerSupportAgent::run('Show me my recent orders')
    ->forUser($user)
    ->go();

// With session for conversation continuity
$response = CustomerSupportAgent::run('What was my previous question?')
    ->withSession($sessionId)
    ->go();

// Event-driven execution
OrderProcessingAgent::run($orderCreatedEvent)
    ->async()
    ->go();

// Data analysis with context
$insights = AnalyticsAgent::run($salesData)
    ->withContext(['period' => 'last_quarter'])
    ->go();

// Process large datasets asynchronously
DataProcessorAgent::run($largeDataset)
    ->async()
    ->onQueue('processing')
    ->go();

// Monitor system health with thresholds
SystemMonitorAgent::run($metrics)
    ->withContext(['threshold' => 0.95])
    ->go();

// Generate reports with specific formats
$report = ReportAgent::run('weekly_summary')
    ->withContext(['format' => 'pdf'])
    ->go();
Real-time Magic with Streaming
Want to see your agent think in real-time? Enable streaming for that ChatGPT-like experience:
// Enable streaming for real-time responses
$stream = StorytellerAgent::run('Tell me a story')
    ->streaming()
    ->go();

foreach ($stream as $chunk) {
    echo $chunk;
    flush();
}
Managing Conversation History
Important: All messages are always saved to the database for session continuity. These settings only control which previous messages are sent to the LLM for context.
By default, agents don't send conversation history to the LLM (for better performance and lower costs). But for chat agents, you'll want context! Here's how to control which history the LLM receives:
// Enable history for conversational agents
class ChatAgent extends BaseLlmAgent
{
    protected bool $includeConversationHistory = true; // Send history to LLM
    protected string $contextStrategy = 'recent';      // 'none', 'recent', 'full'
    protected int $historyLimit = 10;                  // last 10 messages (for 'recent' strategy)
}

// Override at runtime
$context->setState('include_history', true);
$context->setState('history_depth', 5);
Customizing Agent Behavior
Want to add your own special sauce? Agents come with powerful lifecycle hooks that let you customize exactly how they work. It's like having backstage passes to your agent's brain!
Available Hooks
- beforeLlmCall - Tweak messages before they hit the AI
- afterLlmResponse - Process AI responses your way
- beforeToolCall - Modify tool inputs on the fly
- afterToolResult - Transform tool outputs
Here's how to use these superpowers:
class CustomAgent extends BaseLlmAgent
{
    public function beforeLlmCall(array $inputMessages, AgentContext $context): array
    {
        // Modify messages before sending to LLM
        // This is also where tracing starts
        return $inputMessages;
    }

    public function afterLlmResponse(Response|Generator $response, AgentContext $context): mixed
    {
        // Process the LLM response
        // Access token usage: $response->usage
        return $response;
    }

    public function beforeToolCall(string $toolName, array $arguments, AgentContext $context): array
    {
        // Modify tool arguments before execution
        return $arguments;
    }

    public function afterToolResult(string $toolName, string $result, AgentContext $context): string
    {
        // Process tool results
        return $result;
    }
}
Dynamic Prompts
Want to test different personalities without changing code? Prompt versioning lets you A/B test, switch between tones, and evolve your agent's voice on the fly!
Switch Prompts at Runtime
No more hardcoding prompts! Store different versions and switch between them instantly:
// Use a specific prompt version
$response = CustomerSupportAgent::run('Help me with my order')
    ->withPromptVersion('friendly')
    ->go();

// A/B test different tones
$version = rand(0, 1) ? 'professional' : 'casual';
$response = CustomerSupportAgent::run($query)
    ->withPromptVersion($version)
    ->go();
Pro tip: Store prompts as .md files in resources/prompts/{agent_name}/ for easy version control!
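Following that convention, a prompt directory for the customer support agent might look like this (the version file names here are illustrative, not prescribed by the framework):

```text
resources/prompts/
└── customer_support/
    ├── default.md
    ├── friendly.md
    └── professional.md
```

Each file name (minus the extension) becomes the version string you pass to withPromptVersion().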
Dynamic Prompts with Blade Templates
Create dynamic, context-aware prompts using Laravel's Blade templating engine. Your prompts can adapt based on user data, session state, and custom variables.
Quick Example
Save your prompt as a .blade.php file to enable dynamic content:
You are {{ $agent['name'] }}, a helpful assistant.
@if(isset($user_name))
Hello {{ $user_name }}! How can I help you today?
@else
Hello! How can I assist you?
@endif
@if($context && $context->getState('premium_user'))
Premium support mode activated
@endif
@if($tools->isNotEmpty())
I can help you with: {{ $tools->pluck('description')->join(', ') }}
@endif
Available Variables
Templates have access to: $agent, $context, $user, $memory_context, $tools, $sub_agents, and any custom variables you provide.
Adding Custom Variables
Inject your own data by implementing getPromptData() in your agent:
class CustomerSupportAgent extends BaseLlmAgent
{
    protected function getPromptData(AgentContext $context): array
    {
        return [
            'company_name' => config('app.company_name'),
            'support_hours' => '9 AM - 5 PM EST',
            'ticket_count' => $this->getUserTicketCount($context),
        ];
    }
}
Then use them in your template:
Welcome to {{ $company_name }} support!
Our hours: {{ $support_hours }}
@if($ticket_count > 0)
You have {{ $ticket_count }} open tickets.
@endif
Building Agent Teams
Why have one agent when you can have a whole team? Agents can delegate tasks to specialized sub-agents, creating powerful AI workflows. Think of it as building your own AI company!
class ManagerAgent extends BaseLlmAgent
{
    protected array $subAgents = [
        TechnicalSupportAgent::class,
        BillingSupportAgent::class,
        SalesAgent::class,
    ];
}
Advanced Agent Techniques
Ready to level up? Here are some pro tips and advanced features that'll make your agents work harder and smarter!
Vector Memory & RAG
Give your agents superpowers with built-in semantic search and knowledge retrieval! Perfect for building documentation assistants, knowledge bases, and intelligent Q&A systems.
class DocumentationAgent extends BaseLlmAgent
{
    protected string $name = 'docs_agent';

    public function loadKnowledge(string $content): void
    {
        // Simple: just add content
        $this->vector()->addDocument($content);

        // Advanced: with full metadata and organization
        $this->vector()->addDocument([
            'content' => $content,
            'metadata' => ['type' => 'docs', 'version' => '2.0'],
            'namespace' => 'knowledge_base',
            'source' => 'user_upload'
        ]);
    }

    public function answerQuestion(string $question, AgentContext $context): mixed
    {
        // Generate contextual answer using RAG
        $ragContext = $this->rag()->generateRagContext($question, [
            'namespace' => 'knowledge_base',
            'limit' => 5,
            'threshold' => 0.7
        ]);

        if ($ragContext['total_results'] > 0) {
            $contextualInput = "Based on this knowledge:\n" .
                $ragContext['context'] .
                "\n\nAnswer: " . $question;
        } else {
            $contextualInput = $question;
        }

        return parent::run($contextualInput, $context);
    }
}

// New! Public access means you can test directly:
$agent = Agent::named('docs_agent');
$agent->vector()->addDocument('Laravel is awesome!');
$results = $agent->vector()->search('framework');
Vector Memory Pro Tips
- Use the progressive API: simple strings for prototypes, arrays for production
- Public methods make testing in Tinkerwell super easy
- Organize with namespaces: 'docs', 'faqs', 'policies', etc.
- Perfect for building chatbots that remember context
Background Processing with Async
Got heavy lifting to do? Send your agents to work in the background:
// Execute agent asynchronously via queue
$job = DataProcessorAgent::run($largeDataset)
    ->async()
    ->onQueue('processing')
    ->go();

// With delay and retries
$job = ReportAgent::run('quarterly_report')
    ->delay(300)   // 5 minutes
    ->tries(3)
    ->timeout(600) // 10 minutes
    ->go();
Vision & Multimodal Magic
Your agents aren't just text wizards; they have eyes too! Send images and documents, and watch the magic happen:
// Send images with your request
$response = VisionAgent::run('What\'s in this image?')
    ->withImage('/path/to/image.jpg')
    ->go();

// Multiple images and documents
$response = DocumentAnalyzer::run('Summarize these documents')
    ->withDocument('/path/to/report.pdf')
    ->withImage('/path/to/chart.png')
    ->withImageFromUrl('https://example.com/diagram.jpg')
    ->go();
Fine-Tune on the Fly
Need more creativity? Want faster responses? Override any parameter at runtime:
// Override agent parameters at runtime
$response = CreativeWriterAgent::run('Write a poem')
    ->temperature(0.9) // More creative
    ->maxTokens(500)
    ->go();

// Set multiple parameters
$response = AnalyticalAgent::run($data)
    ->withParameters([
        'temperature' => 0.2,
        'max_tokens' => 2000,
        'top_p' => 0.95
    ])
    ->go();
Memory That Actually Remembers
Unlike that friend who forgets your birthday every year, your agents have perfect memory! They remember everything important about your conversations and context.
What Your Agents Remember
- Every message in the session
- What tools did and returned
- Who they're talking to
- Any context you provide
// Add custom context
$response = ShoppingAssistant::run('Find me a laptop')
    ->withContext([
        'budget' => 1500,
        'preferences' => ['brand' => 'Apple'],
        'location' => 'New York'
    ])
    ->go();
Understanding the Agent Lifecycle
When you ask an agent to do something, it goes through a carefully orchestrated lifecycle. Understanding this flow helps you build more powerful agents and debug issues faster!
The Request Journey
1. Your Code → AgentExecutor (Fluent API)
2. AgentExecutor → AgentManager (Orchestration)
3. AgentManager → Agent Instance (Your Logic)
4. Agent → LLM Provider (AI Processing)
5. LLM → Tools (If Needed)
6. Response → Back Through the Chain
Lifecycle Hooks - Your Power Points
Agents provide strategic hooks where you can intercept and modify behavior. These are your control points for customization:
beforeLlmCall() - Pre-Processing
Called before sending messages to the AI. Perfect for adding context, filtering messages, or injecting system prompts.
public function beforeLlmCall(array $messages, AgentContext $context): array
{
    // Add user preferences to system message
    $messages[0]['content'] .= "\nUser prefers formal tone.";

    return $messages;
}
afterLlmResponse() - Post-Processing
Called after receiving the AI's response. Transform responses, extract insights, or trigger side effects.
public function afterLlmResponse(Response $response, AgentContext $context): mixed
{
    // Log token usage for monitoring
    Log::info('Token usage', $response->usage);

    return $response;
}
Tool Execution Hooks
Control tool execution with beforeToolCall() and afterToolResult() hooks.
public function beforeToolCall(string $toolName, array $arguments, AgentContext $context): array
{
    // Add API keys or validate permissions
    if ($toolName === 'weather_api') {
        $arguments['api_key'] = config('services.weather.key');
    }

    return $arguments;
}
Lifecycle Events
As your agent works, it broadcasts events you can listen to for monitoring, logging, or triggering other actions:
- AgentExecutionStarting - when execution begins
- LlmCallInitiating - before calling the AI
- ToolCallInitiating - before using a tool
- ToolCallCompleted - after a tool finishes
- LlmResponseReceived - AI response received
- AgentExecutionFinished - all done!
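Since Vizra ADK is a Laravel package, these events can be consumed through Laravel's standard event system. Here's a minimal sketch of a listener registration; note that the Vizra\VizraADK\Events namespace is an assumption, so verify the exact event class paths in your installed version:

```php
<?php

// e.g. in App\Providers\AppServiceProvider::boot()
// NOTE: the Vizra\VizraADK\Events namespace below is an assumption;
// check your installed package for the actual event class locations.
use Illuminate\Support\Facades\Event;
use Illuminate\Support\Facades\Log;
use Vizra\VizraADK\Events\AgentExecutionStarting;
use Vizra\VizraADK\Events\ToolCallCompleted;

Event::listen(AgentExecutionStarting::class, function ($event) {
    // Fires when an agent run begins - handy for timing metrics
    Log::info('Agent execution starting');
});

Event::listen(ToolCallCompleted::class, function ($event) {
    // Fires after each tool finishes - handy for auditing tool usage
    Log::info('Tool call completed');
});
```

You could also route these to dedicated listener classes via your EventServiceProvider if you prefer Laravel's class-based listeners.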
Pro Tips from the Trenches
Want to build agents that users actually love? Here's the wisdom we've gathered from building hundreds of agents:
Crystal Clear Instructions
Write instructions like you're explaining to a smart friend. Be specific about what you want!
Right Model for the Job
GPT-4 for complex reasoning, GPT-3.5 for quick tasks. Don't use a Ferrari to go to the corner store!
Hook Into Everything
Use lifecycle hooks for debugging and monitoring. You'll thank yourself later!
Delegate Like a Boss
Complex workflows? Use sub-agents! Let specialists handle what they do best.
Stream for the Win
Enable streaming for long responses. Users love seeing agents "think" in real-time!
Ready to Build Something Amazing?
You've got the knowledge, now let's put it to work!
Ready for Professional AI Agent Evaluation?
Evaluate and debug your Vizra ADK agents with professional cloud tools. Get early access to Vizra Cloud and be among the first to experience advanced evaluation and trace analysis at scale.