OpenAI Agents SDK + GEPA + SuperOptiX = Self-Optimizing AI Agents

Today, we are excited to announce superoptix-lite-openai, an open-source implementation of GEPA optimization built on the OpenAI Agents SDK. It is a lite version of SuperOptiX that anyone can try for demo purposes. Yesterday, OpenAI featured GEPA (Genetic Pareto optimization) in its official cookbook on Self-Evolving Agents. To make that idea practical and accessible, today's release is a complete, ready-to-run example of how to build self-optimizing agents with the official OpenAI Agents SDK.
From Vision to Reality: Self-Evolving Agents Are Here
When we started building SuperOptiX, evals and optimization were the key focus, and GEPA now adds structured optimization of AI agents on top of that foundation. At Superagentic AI, our vision was simple: agents that can systematically improve themselves through evaluation-driven optimization. OpenAI's cookbook inclusion gives that vision strong community validation, and our repo provides a ready-to-run, production-minded example you can adapt to your own domain.
What We're Releasing Today
superoptix-lite-openai: Open Source, Production-Ready, FREE
- Repository: github.com/SuperagenticAI/superoptix-lite-openai
- License: MIT, free to use, modify, and deploy
- Docs & Tutorial: Full tutorial and integration guide published in our docs. (See links at the end.)
Why This Release Matters
Featured in OpenAI's Cookbook
OpenAI's Self-Evolving Agents cookbook highlights optimization-driven retraining loops as practical paths to robust agent systems. That cookbook provides patterns and an end-to-end retraining loop you can adapt.
Works with FREE Local Models
Our implementation runs with local runtimes (e.g., Ollama) so you can iterate without cloud costs or sending private data to external APIs.
Real Production Patterns
We integrate GEPA with the official OpenAI Agents SDK usage patterns rather than shipping a toy wrapper; the code is designed to be understandable and deployable.
Cloud-Ready
If you want to run on cloud providers, the same workflow supports OpenAI, Anthropic, Google, or any OpenAI-compatible API by configuration alone.
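As a minimal sketch (the model names and endpoints below are illustrative, not prescriptions from the repo), switching from a local Ollama endpoint to a cloud provider is only a change in client configuration:
from agents import Agent, OpenAIChatCompletionsModel
from openai import AsyncOpenAI
# Local development: point the client at Ollama's OpenAI-compatible endpoint.
local_model = OpenAIChatCompletionsModel(
    model="llama3.1:8b",
    openai_client=AsyncOpenAI(base_url="http://localhost:11434/v1", api_key="ollama"),
)
# Cloud deployment: the default AsyncOpenAI client reads OPENAI_API_KEY from the environment.
cloud_model = OpenAIChatCompletionsModel(
    model="gpt-4o-mini",
    openai_client=AsyncOpenAI(),
)
# Everything else about the agent stays the same regardless of provider.
agent = Agent(name="Code Reviewer", instructions="Review code for quality and issues.", model=cloud_model)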
Understanding GEPA: Genetic Pareto Optimization
GEPA (Genetic Pareto) is an optimization technique that combines genetic algorithm ideas with Pareto-style selection to iteratively improve agent behavior. The newly released repo demonstrates the technique applied to the instructions (system prompt) of an OpenAI SDK agent so improvements are persistent and reproducible.
How GEPA Optimization Works in SuperOptiX Lite (High Level)
1. Evaluate current agent performance on a suite of test scenarios (YAML driven).
2. Analyze failure modes and identify where outputs fall short of expectations.
3. Generate candidate instruction variants (mutation / crossover style).
4. Test the variants using the same evaluation pipeline and metrics.
5. Select improved candidates using Pareto selection to balance multiple objectives (e.g., recall vs precision, thoroughness vs brevity).
In our OpenAI Agents SDK integration, GEPA targets the agent's system prompt (the instruction text) because changing that text often has the largest impact on how an agent reasons and responds.
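To make this loop concrete, here is a heavily simplified sketch in Python. It is not the repo's implementation: the function names (evaluate, propose_variants, pareto_front) and the two-objective scoring are placeholders for illustration.
import random

def evaluate(instructions: str, scenarios: list[dict]) -> dict:
    # Placeholder scoring: the real pipeline runs the agent on each YAML scenario
    # and measures objectives such as issue coverage and response brevity.
    return {"coverage": random.random(), "brevity": random.random()}

def propose_variants(parent: str, n: int = 4) -> list[str]:
    # Placeholder mutation: in practice an LLM rewrites the instructions
    # based on the failure modes observed during evaluation.
    return [f"{parent}\n(variant {i})" for i in range(n)]

def dominates(a: dict, b: dict) -> bool:
    # a Pareto-dominates b if it is at least as good on every objective
    # and strictly better on at least one.
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

def pareto_front(scored: list[tuple[str, dict]]) -> list[tuple[str, dict]]:
    # Keep only candidates that no other candidate dominates.
    return [c for c in scored if not any(dominates(o[1], c[1]) for o in scored if o is not c)]

def gepa_loop(seed: str, scenarios: list[dict], generations: int = 3) -> str:
    population = [(seed, evaluate(seed, scenarios))]
    for _ in range(generations):
        parents = pareto_front(population)
        children = [v for p, _ in parents for v in propose_variants(p)]
        population = parents + [(c, evaluate(c, scenarios)) for c in children]
    # Final pick: choose from the Pareto front by the objective you care about most.
    return max(pareto_front(population), key=lambda c: c[1]["coverage"])[0]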
Concrete Example: Instructions Before & After
Before Optimization:
instructions = """You are a Code Reviewer.
Review code for quality and issues."""
After GEPA Optimization (Example):
instructions = """You are a Code Reviewer specialized in security and performance.
When reviewing code, you MUST explicitly check for:
1. SECURITY VULNERABILITIES: Identify SQL injection, XSS, command injection.
Always use terms like "SQL injection", "vulnerability", "security risk".
2. MEMORY LEAKS: Look for unbounded data structures, event listeners
without cleanup. Always mention "memory leak" when found.
3. ERROR HANDLING: Check for try-catch blocks, validation.
Mention "error handling" when missing.
4. PERFORMANCE ISSUES: Identify O(n²) algorithms. State complexity
and suggest alternatives like "set", "hash map".
Your review MUST include these specific terms when issues are present."""
Result: The optimized instructions are more specific, actionable, and lead to significantly better code reviews with structured feedback.
Why OpenAI Agents SDK?
We chose the OpenAI Agents SDK for this demo because it provides a straightforward, provider-agnostic API for building agent workflows. The same patterns apply whether you attach a local model for development or a cloud model for production. SuperOptiX itself also supports other major frameworks, including Deep Agents, CrewAI, Agent Framework, and Google ADK.
Example (abridged):
from agents import Agent, Runner, OpenAIChatCompletionsModel
from openai import AsyncOpenAI
# Initialize model (works with Ollama or cloud)
model = OpenAIChatCompletionsModel(
model="gpt-oss:20b",
openai_client=AsyncOpenAI(
base_url="http://localhost:11434/v1",
api_key="ollama",
),
)
agent = Agent(name="Code Reviewer", instructions=instructions, model=model)
result = await Runner.run(agent, input=code_sample)  # awaited inside an async function

Technical Architecture
Our repo wraps an OpenAI SDK agent in a component that exposes an optimizable variable (the instructions). The evaluation pipeline runs predefined YAML test scenarios against the agent and measures pass rates. GEPA then proposes instruction variants and the pipeline re-evaluates them to find improved versions. Optimized instructions are persisted and loaded automatically in subsequent runs.
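A rough sketch of that wrapper is below. The class and file names (OptimizableAgent, optimized_instructions.txt, load_scenarios) are hypothetical stand-ins for whatever the repo actually uses; the point is that the instruction text is the single variable GEPA edits and persists.
import pathlib
import yaml
from agents import Agent, Runner

INSTRUCTIONS_FILE = pathlib.Path("optimized_instructions.txt")  # hypothetical persistence path

class OptimizableAgent:
    """Wraps an OpenAI Agents SDK agent and exposes its instructions as the optimizable variable."""

    def __init__(self, name: str, default_instructions: str, model):
        # Reuse previously optimized instructions when they exist; otherwise start from the default.
        self.instructions = (
            INSTRUCTIONS_FILE.read_text() if INSTRUCTIONS_FILE.exists() else default_instructions
        )
        self.name = name
        self.model = model

    async def run(self, user_input: str) -> str:
        agent = Agent(name=self.name, instructions=self.instructions, model=self.model)
        result = await Runner.run(agent, input=user_input)
        return result.final_output

    def save(self) -> None:
        # Persist the current (optimized) instructions so later runs load them automatically.
        INSTRUCTIONS_FILE.write_text(self.instructions)

def load_scenarios(path: str) -> list[dict]:
    # Illustrative YAML layout: a top-level "scenarios" list of test cases.
    return yaml.safe_load(pathlib.Path(path).read_text())["scenarios"]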

Real-World Use Case: Code Reviewer
The provided Code Reviewer agent demonstrates the full loop: evaluation scenarios (security, memory, error handling, performance), automated GEPA optimization, and deployment-ready agent loading. The pipeline produces structured reviews that identify issues, severity, and remediation suggestions.
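For illustration only (the field names below are not the repo's actual YAML schema), a single scenario might pair an input snippet with the terms an acceptable review must mention, and a pass check can be as simple as keyword matching:
# Hypothetical scenario, shown as a Python dict for brevity; the repo defines these in YAML.
security_scenario = {
    "name": "sql_injection_in_query_builder",
    "language": "python",
    "input_code": 'query = "SELECT * FROM users WHERE id = " + user_id',
    "expected_keywords": ["SQL injection", "vulnerability", "security risk"],
}

def passes(review: str, scenario: dict) -> bool:
    # A scenario passes when the review mentions every expected term (case-insensitive).
    return all(term.lower() in review.lower() for term in scenario["expected_keywords"])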
Example usage (abridged):
from openai_gepa.agents.code_reviewer.pipelines.code_reviewer_openai_pipeline import CodeReviewerPipeline
pipeline = CodeReviewerPipeline('.../code_reviewer_playbook.yaml')
result = pipeline.run(code=sample_code, language='python')
print(result['review'])

Getting Started
Quick Start (Local, ~5 Minutes):
# Clone repository
git clone https://github.com/SuperagenticAI/superoptix-lite-openai.git
cd superoptix-lite-openai
# Install Ollama, then pull the local models
ollama pull gpt-oss:20b
ollama pull llama3.1:8b
# Setup & run demo
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python demo_local.py

A full tutorial and integration guide are available in our documentation. The tutorial walks through environment setup, test scenario design, running GEPA, and interpreting results.
Try It Now
github.com/SuperagenticAI/superoptix-lite-openai
Acknowledgments: Thanks to the OpenAI team for the Agents SDK and the cookbook resources, and to the researchers at UC Berkeley and other institutions behind the open-source GEPA work featured in the cookbook.
