The Protocol & Technology
Distributed microservices architecture powered by multi-agent AI for intelligent event ticket discovery and trading
Understanding Our Architecture Philosophy
At ParlayTix, we've built something different. While most ticketing platforms rely on monolithic architectures that become bottlenecks under load, we've embraced a distributed microservices approach inspired by modern cloud-native principles.
Our system isn't just about finding tickets—it's about orchestrating intelligent agents that work together like a well-coordinated team. Each service has a specific role, a clear boundary, and communicates through well-defined contracts. This separation of concerns allows us to:
- Scale individual components based on real-time demand
- Deploy updates without system-wide downtime
- Maintain 99.9% uptime even when external ticket sites fail
- Process 10,000+ concurrent requests without degradation
"In a world where popular events sell out in seconds, our distributed architecture ensures that no single point of failure can stand between you and your tickets."
Core Architecture Principles
Clear Service Boundaries
We maintain strict separation of concerns across our services. The Frontend Agent owns ALL user interactions—it never tries to process complex logic. The Agent Swarm handles ALL complex processing—it never talks directly to users. This clarity eliminates confusion, reduces bugs, and allows each team to optimize their service without affecting others.
Smart Communication Patterns
Not all communication is created equal. We've carefully chosen the right protocol for each interaction:
- →REST APIs: For immediate responses (user queries, session data)
- →RabbitMQ: For async operations (web crawling, batch processing)
- →Direct DB: For consistency-critical operations (transactions, inventory)
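One way to picture this split is a routing table that maps each operation type to its transport. This is an illustrative sketch, not ParlayTix's actual code; the operation names and `Transport` enum are hypothetical:

```python
from enum import Enum

class Transport(Enum):
    REST = "rest"            # synchronous request/response
    RABBITMQ = "rabbitmq"    # asynchronous, queued work
    DIRECT_DB = "direct_db"  # transactional, consistency-critical

# Hypothetical routing table mirroring the guidelines above.
OPERATION_TRANSPORT = {
    "user_query": Transport.REST,
    "session_lookup": Transport.REST,
    "web_crawl": Transport.RABBITMQ,
    "batch_process": Transport.RABBITMQ,
    "payment_settlement": Transport.DIRECT_DB,
    "inventory_update": Transport.DIRECT_DB,
}

def transport_for(operation: str) -> Transport:
    """Pick the transport for an operation, defaulting to REST."""
    return OPERATION_TRANSPORT.get(operation, Transport.REST)
```

Anything latency-sensitive stays synchronous; anything slow or bursty goes on the queue; anything that must never be half-done talks to the database directly.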
User Experience First
Every architectural decision starts with one question: "How does this affect the user?" We ensure users always get immediate acknowledgment, even for long-running operations. Background processes run invisibly while maintaining conversation flow. Results are delivered conversationally when ready, not as raw data dumps. This philosophy drives our entire system design.
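The "immediate acknowledgment" pattern can be sketched with Python's asyncio: the handler replies right away and schedules the slow work in the background. The function names and messages here are illustrative, not ParlayTix's actual handlers:

```python
import asyncio

async def long_running_search(query: str) -> str:
    """Stand-in for a slow operation such as a ticket-site crawl."""
    await asyncio.sleep(0.1)  # simulate external-site latency
    return f"Results for {query!r} are ready!"

async def handle_user_message(query: str, deliver) -> str:
    """Acknowledge instantly; deliver full results when they arrive."""
    # Schedule the slow work without blocking the conversation.
    task = asyncio.create_task(long_running_search(query))
    task.add_done_callback(lambda t: deliver(t.result()))
    return "On it! I'll let you know as soon as I find something."

async def main():
    responses = []
    ack = await handle_user_message("Lakers tickets", responses.append)
    print(ack)                # the user sees this immediately
    await asyncio.sleep(0.2)  # later, the background result lands
    print(responses[0])

asyncio.run(main())
```

The user never waits on the crawl; the conversation continues while the task runs, and the result is delivered when ready.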
The Power of Specialization
Traditional ticket platforms try to do everything in one place—chat, search, automation, and data processing all tangled together. This creates fragility. When one feature fails, everything fails.
We took a different approach. By separating our system into five specialized services, each optimized for its specific task, we achieve remarkable efficiency. The Frontend Agent excels at understanding human conversation. The Agent Swarm orchestrates complex multi-step operations. The Crawler Core navigates hostile web environments with sophisticated anti-detection. Each service is a master of its domain.
Technology Stack by Purpose:
Microservices Architecture
Five specialized services working together to deliver intelligent ticket discovery
User Interface Layer
Multi-platform access points
Eliza AI Agent
Conversational intelligence powered by ElizaOS
Agent Swarm
OpenAI Multi-Agent System
Crawler Core
Web Automation Engine
Unified Data Platform
High-performance data infrastructure
Chat Frontend
Port 4000: Modern chat interface with WebSocket connections for real-time updates
Frontend Agent
Port 3000: Handles ALL user interactions across Web, Telegram, and Twitter platforms
Agent Swarm
Port 8100: The 'brain' that processes complex tasks without direct user interaction
Crawler Core
Port 8080: Automated browser control with anti-bot evasion for ticket sites
Data API
Port 8101: Stores user profiles, ticket data, price history, and request tracking
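The five services and their ports can be captured in a simple registry; the dictionary keys and `base_url` helper below are hypothetical, but the ports match the list above:

```python
# Hypothetical service registry matching the ports listed above.
SERVICES = {
    "chat_frontend":  {"port": 4000, "role": "Chat UI with WebSocket updates"},
    "frontend_agent": {"port": 3000, "role": "All user interactions (Web, Telegram, Twitter)"},
    "agent_swarm":    {"port": 8100, "role": "Complex task processing, no direct user contact"},
    "crawler_core":   {"port": 8080, "role": "Browser automation with anti-bot evasion"},
    "data_api":       {"port": 8101, "role": "Profiles, tickets, price history, request tracking"},
}

def base_url(name: str, host: str = "localhost") -> str:
    """Build a service's local base URL from the registry."""
    return f"http://{host}:{SERVICES[name]['port']}"
```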
How Services Communicate
The magic happens in the communication layer. Our services don't just pass messages—they engage in intelligent conversations. When you ask Pepper about Lakers tickets, your request triggers a carefully choreographed sequence that feels instantaneous to you but involves sophisticated orchestration behind the scenes.
The Frontend Agent immediately acknowledges your request, maintaining the conversation flow while the Agent Swarm begins its work behind the scenes. If fresh data is needed, a crawl request is published to our RabbitMQ message bus—not as a simple command, but as a rich context object containing your preferences, budget constraints, and timing requirements.
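That "rich context object" might look something like the dataclass below; the exact field names are an illustrative guess, not the production schema:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class CrawlRequest:
    """Hypothetical shape of the context object published on crawl.request."""
    query: str
    max_price_usd: float
    quantity: int
    event_date: str  # ISO 8601 date the user asked about
    preferences: dict = field(default_factory=dict)

request = CrawlRequest(
    query="Lakers vs Celtics",
    max_price_usd=250.0,
    quantity=2,
    event_date="2025-03-08",
    preferences={"section": "lower bowl", "seats_together": True},
)

# The message bus carries it as JSON, not as a bare command string.
payload = json.dumps(asdict(request))
```

Because the whole context travels with the request, the Crawler Core can filter results at the source instead of shipping everything back for post-processing.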
Real-time Request Flow:
The Crawler Core, constantly monitoring the message bus, picks up the request and spins up a browser session through BrowserBase. Using Playwright with custom anti-detection measures, it navigates ticket sites as a human would—clicking buttons, waiting for content to load, even solving simple challenges. The scraped data flows back through the message bus, gets processed and stored by the Data API, and ultimately returns to you as a conversational response from Pepper.
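A small piece of that anti-detection work is making each browser session look like a distinct human visitor. The sketch below builds randomized context options of the kind Playwright's `browser.new_context(...)` accepts; the specific values and rotation logic are illustrative, not Crawler Core's real configuration:

```python
import random

# Illustrative user-agent pool; a real deployment would rotate many more.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
]

def stealth_context_options() -> dict:
    """Build human-looking browser context options for a crawl session."""
    return {
        "user_agent": random.choice(USER_AGENTS),
        "viewport": {
            "width": random.choice([1280, 1366, 1440, 1920]),
            "height": random.choice([720, 768, 900, 1080]),
        },
        "locale": "en-US",
        "timezone_id": "America/Los_Angeles",
    }
```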
This asynchronous architecture means you're never waiting for slow external sites. You get immediate feedback, and detailed results arrive as they become available. It's the difference between a frustrating wait and a delightful conversation.
Request Processing Flow
1. User Query Reception: Frontend Agent receives natural language query from any platform. (Service: Frontend Agent)
2. Intent Classification: ElizaOS processes intent and extracts entities from user input. (Service: Frontend Agent)
3. Task Delegation: Complex tasks are delegated to Agent Swarm for processing. (Service: Agent Swarm)
4. Data Check: Agent Swarm checks cached data vs. the need for a fresh crawl. (Service: Data API)
5. Async Crawl: If needed, a crawl request is published to the message bus. (Service: RabbitMQ)
6. Web Automation: Crawler Core performs browser automation on ticket sites. (Service: Crawler Core)
7. Result Processing: Results are stored in the database and returned via the message bus. (Service: Data API)
8. Response Delivery: Formatted results are sent back to the user through conversation. (Service: Frontend Agent)
Message Bus Communication
Asynchronous Operations
crawl.request
Agent Swarm → Crawler Core: Initiate web scraping
crawl.response
Crawler Core → Agent Swarm: Return scraping results
price.alert
Data API → Frontend Agent: Price drop notifications
Synchronous APIs
User Messages
Frontend → Frontend Agent: Real-time conversations
Task Delegation
Frontend Agent → Agent Swarm: Complex task processing
Data Persistence
All Services → Data API: Direct database access
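The routing keys above can be modeled with a tiny in-process stand-in for RabbitMQ's topic routing. This is a teaching sketch only; the real system uses RabbitMQ exchanges, and the `MessageBus` class here is invented:

```python
from collections import defaultdict

class MessageBus:
    """Tiny in-process stand-in for RabbitMQ topic routing."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, routing_key: str, handler):
        self._handlers[routing_key].append(handler)

    def publish(self, routing_key: str, message: dict):
        for handler in self._handlers[routing_key]:
            handler(message)

bus = MessageBus()
results = []

# Crawler Core listens on crawl.request and answers on crawl.response.
bus.subscribe(
    "crawl.request",
    lambda msg: bus.publish("crawl.response",
                            {"query": msg["query"], "tickets_found": 12}),
)
# Agent Swarm listens for the answer.
bus.subscribe("crawl.response", results.append)

# Agent Swarm kicks off a crawl.
bus.publish("crawl.request", {"query": "Lakers vs Celtics"})
```

Decoupling publisher from subscriber is what lets the Crawler Core fail, restart, or scale out without the rest of the system noticing.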
Technology Stack
AI & Intelligence
- •ElizaOS Framework: Multi-platform conversational AI (Web, Telegram, Twitter)
- •OpenAI Agents SDK: GPT-4 powered task orchestration and planning
- •ChromaDB Vectors: Semantic search and similarity matching
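Under the hood, semantic matching of the kind ChromaDB provides reduces to comparing embedding vectors. The toy three-dimensional "embeddings" below are made up for illustration; real vectors come from an embedding model and live in the vector store:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-d "embeddings" for two user queries and one stored event.
queries = {
    "lakers basketball tickets": [0.90, 0.10, 0.20],
    "taylor swift concert":      [0.10, 0.95, 0.30],
}
stored_event = [0.85, 0.15, 0.25]  # "NBA: Lakers vs Celtics"

# The query whose vector points the same way as the event wins.
best = max(queries, key=lambda q: cosine_similarity(queries[q], stored_event))
```

This is why "Lakers basketball tickets" matches "NBA: Lakers vs Celtics" even though the strings share almost no words.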
Infrastructure
- •Next.js 15 + TypeScript: Modern web UI with type safety
- •Python + FastAPI: High-performance backend services
- •RabbitMQ: Asynchronous message bus for scalability
Web Automation
- •Playwright + BrowserBase: Cloud browser automation with anti-detection
- •LLM Navigation: AI-powered web scraping and form filling
- •Session Recording: Replay and debug crawl sessions
Payments & Settlements
- •x402 Payment Protocol: Coinbase's instant, fee-free USDC payments on Base
- •Coinbase Facilitator: Automated payment verification and settlement
- •USDC on Base: Stablecoin payments with instant settlement
Architecture Insights
✅ How It Works
- →Frontend Agent is the conversational layer for ALL users
- →Agent Swarm is the backend brain for complex processing
- →Message bus handles async operations like web crawling
- →Data API maintains consistency across all services
❌ Common Misconceptions
- ×Agent Swarm does NOT talk to users directly
- ×Multiple services do NOT handle conversations
- ×Everything does NOT go through message bus
- ×Services are NOT monolithic - each has one job
Enterprise-Grade Security
🔒 Encryption
TLS 1.3 for all API communications
🔐 Authentication
API key validation across services
🛡️ Isolation
Containerized service deployment
📊 Monitoring
Request tracking & audit logs
Built for Scale & Performance
Our distributed architecture ensures reliable ticket discovery at any scale