
OpenAI Codex /goal: the new long-horizon mode for agentic coding
OpenAI Codex CLI's new /goal command lets AI agents pursue persistent objectives across sessions, enabling multi-step engineering tasks without constant prompting.
OpenAI's Codex CLI just shipped a feature that fundamentally changes how AI coding agents work. The new /goal command, available in Codex CLI v0.128.0+, introduces persistent objectives that survive interruptions, session breaks, and token budget limits. Instead of giving your agent one instruction at a time, you can now define a durable engineering goal and let Codex pursue it across multiple turns.
This shifts the interaction model from "answer this prompt" to "pursue this outcome"—much closer to how software engineers actually work on complex projects.
What is the /goal command in OpenAI Codex CLI?
The /goal command creates a persistent objective that your Codex agent tracks and pursues until completion. Unlike a regular prompt, which gets a single response, a goal maintains state: the objective text, whether it is active, paused, or budget-limited, how many tokens it has used, and whether it has been completed.
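A rough sketch of that state, assuming one record per thread; the field names below are illustrative, not Codex's actual internals:

from dataclasses import dataclass

@dataclass
class Goal:
    goal_id: str        # identity, used later to detect stale updates
    objective: str      # the text passed to /goal <objective>
    status: str         # "active", "paused", or "budget_limited"
    tokens_used: int = 0
    completed: bool = False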
OpenAI Codex CLI is the terminal-based coding agent where this feature lives. The commands are straightforward:
/goal <objective> # Create or replace a goal
/goal # View current goal status
/goal pause # Pause active goal
/goal resume # Resume paused goal
/goal clear # Remove goal entirely
The key difference from a normal prompt: when Codex finishes a turn of work but the goal isn't complete, it automatically continues working. It doesn't wait for you to type another message.
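In sketch form, the decision after each turn looks roughly like this (again illustrative, not Codex's real code):

def should_continue(goal) -> bool:
    # When a turn ends and the goal is still active and unfinished,
    # Codex starts another turn instead of waiting for input.
    # Paused, budget-limited, or completed goals do not auto-continue.
    return goal is not None and goal.status == "active" and not goal.completed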
How does long-horizon mode actually work?
"Long-horizon" means Codex can pursue engineering objectives that require multiple steps, iterations, and verification cycles. Think of goals like these:
- "Migrate this package from v2 to v3 API, validating each step"
- "Increase test coverage until all critical paths are covered"
- "Reproduce this bug, fix it, and verify the fix passes"
- "Convert this JavaScript project to TypeScript"
Each of these requires the agent to plan, execute, check results, adjust, and repeat—sometimes across dozens of turns. Without /goal, you'd need to keep re-prompting after each step.
How does the persistence layer keep goals alive?
The implementation uses a five-layer architecture built across five merged pull requests (#18073 through #18077). At its foundation is a persistence layer that stores goal state at the thread level. This means your goal survives if you lose connection, run out of tokens mid-task, or deliberately pause work.
The system tracks elapsed time, token usage, and completion state. A stale-update protection mechanism using goal_id prevents race conditions when multiple updates happen simultaneously.
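The stale-update check amounts to a compare-before-write on goal_id. A minimal sketch, assuming a hypothetical thread-level store interface:

def apply_goal_update(store, thread_id, update):
    # The update carries the goal_id it was computed against. If the goal
    # stored on the thread has since been replaced, the stale update is
    # dropped instead of clobbering the newer goal.
    # `store.load_goal` / `store.save_goal` are hypothetical stand-ins.
    current = store.load_goal(thread_id)
    if current is None or current.goal_id != update.goal_id:
        return False          # stale update: ignore it
    store.save_goal(thread_id, update)
    return True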
What happens when you run out of tokens mid-goal?
Rather than abruptly stopping, Codex enters a "budget-limited" state. The runtime steers the agent toward a graceful wrap-up—summarizing progress, noting what's left, and saving state so you can resume later.
Token accounting happens at multiple boundaries: turn completions, tool calls, file mutations, interrupts, and resume events. The agent knows exactly where it stands budget-wise and can plan accordingly.
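As a sketch of that bookkeeping (the event names and threshold logic here are assumptions for illustration):

BOUNDARY_EVENTS = {"turn_complete", "tool_call", "file_mutation", "interrupt", "resume"}

def record_usage(goal, event, tokens, budget):
    # Usage is tallied at each boundary event; crossing the budget flips
    # the goal to budget-limited so the agent wraps up gracefully
    # (summarize progress, note what's left) rather than stopping mid-change.
    assert event in BOUNDARY_EVENTS
    goal.tokens_used += tokens
    if goal.status == "active" and goal.tokens_used >= budget:
        goal.status = "budget_limited"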
How does interruption handling work?
If you interrupt Codex while it's pursuing a goal (pressing Ctrl+C or typing a new message), the goal automatically pauses. When you resume the session, it reactivates. Your input always takes priority over automatic continuation—the agent won't ignore you in favor of chasing its goal.
There's also a safety mechanism: if the agent produces continuation turns with no tool calls (meaning it's stuck in a loop without making progress), the system suppresses repeated continuations.
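Both behaviors can be sketched in a few lines; the window size and the turn records here are illustrative, not Codex's actual values:

def on_user_input(goal):
    # Your input always wins: an interrupt or new message pauses an active
    # goal rather than competing with it; resuming the session reactivates it.
    if goal is not None and goal.status == "active":
        goal.status = "paused"

def should_suppress(recent_turns, window=3):
    # Loop guard: if the last few continuation turns made no tool calls,
    # stop auto-continuing. `recent_turns` are hypothetical turn records
    # with a tool_calls list; the real threshold is internal to Codex.
    tail = recent_turns[-window:]
    return len(tail) == window and all(not t.tool_calls for t in tail)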
How is /goal different from planning mode?
Plans and goals serve different purposes. A plan is a structured step outline—a list of things to do. A goal is a durable objective that the agent pursues regardless of how the plan changes.
You might set a goal of "get all integration tests passing" and the agent creates a plan to achieve it. If the plan fails partway through, the goal remains active and the agent can devise a new plan. The goal is the what, the plan is the how.
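That relationship can be sketched as a re-planning loop; agent.make_plan, agent.execute, and agent.verify are hypothetical stand-ins, not real Codex APIs:

def pursue(goal, agent):
    # The goal is the "what"; each plan is a disposable "how". If a plan
    # fails partway through, the goal stays active and a fresh plan is made.
    while goal.status == "active" and not goal.completed:
        plan = agent.make_plan(goal.objective)
        for step in plan:
            if not agent.execute(step):
                break                      # this plan failed; devise a new one
        goal.completed = agent.verify(goal.objective)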
The distinction from session resume is similar: resume picks a session back up where you left off, while a goal defines which objective stays active within it.
What are the best use cases for /goal?
The feature works best for engineering work that requires iteration and verification:
- Package migrations — Moving from one API version to another across many files
- Test coverage improvements — Writing tests until a coverage threshold is met
- Refactoring — Restructuring code while preserving behavior, running tests after each change
- Bug reproduction and fixing — The agent keeps trying approaches until tests pass
- TypeScript modernization — Converting files one by one with type checking after each
These are tasks where you'd normally need to supervise closely and keep re-prompting. With /goal, you set the objective and check back when it's done or budget-limited.
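In practice that looks like the commands shown earlier; the objective text here is just an example:

/goal Migrate src/ from the v2 client API to v3, running the test suite after each file
# ...let Codex work, then check back in:
/goal          # view the current objective and its status
/goal pause    # park the goal if you need the session for something else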
Should you trust /goal output without review?
No. OpenAI is clear that /goal increases agent persistence, which makes human oversight more critical, not less. The agent works longer autonomously, producing more code changes that you need to validate before merging.
Treat /goal output the same way you'd treat any AI-generated code: review the diff, run the tests yourself, and verify the changes match your intent. The feature handles the tedious iteration—you still own the quality gate.
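A minimal review pass might look like this; swap npm test for whatever your project actually uses:

git status     # see which files the agent touched
git diff       # read every change before you commit or merge
npm test       # your project's test runner, run by you rather than the agent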
How do you get started with /goal today?
You need Codex CLI version 0.128.0 or later. The feature is currently behind a goals feature flag, so you may need to enable it in your configuration.
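If the flag does need enabling, Codex CLI reads its settings from ~/.codex/config.toml; the table and key below are placeholders for illustration, not confirmed names, so check the v0.128.0 release notes for the real setting:

# ~/.codex/config.toml
# "goals" here is a placeholder key name, not a documented flag
[features]
goals = true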
Once available, start with a well-defined, verifiable objective. "Fix the flaky test in auth.spec.ts" is better than "improve the codebase." The more specific and testable your goal, the better the agent can determine when it's done.
The /goal feature represents a meaningful step toward AI agents that work like persistent collaborators rather than single-turn assistants. For developers comfortable with AI coding tools, it removes the biggest friction point: having to babysit every step of multi-stage engineering work.