GPT-5.3-Codex — What’s New

Released by OpenAI on 2026-02-05 — the same day as Claude Opus 4.6.

Key updates

A unified model

GPT-5.3-Codex combines Codex-grade coding with GPT-style general reasoning. The goal: fewer model switches depending on the task.

25% faster (reported)

OpenAI reports a ~25% speedup for Codex users. In agent workflows, small per-turn latency savings add up quickly.

“Self-involved” development

OpenAI describes this as the first model that participated in parts of its own creation pipeline: debugging training processes, supporting deployment operations, and running diagnostics and evaluations.

Real-time interactive coding

Execution is more interactive: the model reports progress and decisions as it works, and developers can interrupt or redirect it mid-run, closer to collaborating with a teammate than dispatching a batch job.

Cybersecurity positioning

OpenAI notes elevated safeguards under its Preparedness Framework and uses deliberately cautious language around capability thresholds; the posture suggests concern that cyber-offensive capability is rising quickly.

Long-running task execution

Better support for research + tool use + complex multi-step execution chains.

Availability

Channel                                              | Status
ChatGPT paid plans (app / CLI / IDE extension / web) | Available
API                                                  | "Rolling out with safety gating" (per OpenAI)

What this means for me

A unified model is the most practical change: fewer decisions about “which model do I use for this?”

The speed gain is underrated. Agent tasks are multi-turn; shaving seconds per turn can mean minutes per task.
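A quick back-of-the-envelope check of that claim. The turn count and per-turn latency below are made-up illustrative numbers; only the ~25% speedup comes from OpenAI's report.

```python
# Hypothetical agent run: 40 turns at 9 s per turn, then apply the
# reported ~25% speedup and see how much wall-clock time one task saves.
turns_per_task = 40          # assumed turn count, not a measured figure
seconds_per_turn = 9.0       # assumed baseline latency per turn
speedup = 0.25               # reported ~25% faster (per OpenAI)

baseline = turns_per_task * seconds_per_turn   # 360 s per task
faster = baseline * (1 - speedup)              # 270 s per task
saved_minutes = (baseline - faster) / 60
print(f"saved per task: {saved_minutes:.1f} min")  # -> saved per task: 1.5 min
```

Scale that by dozens of agent tasks a day and "seconds per turn" really does become minutes, then hours.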

Real-time interaction also changes the workflow: instead of “send a spec and wait,” it’s closer to live collaboration—better for ambiguous requirements.

API availability lag is the main downside for automation setups. For now, the realistic plan is: use what’s available today, and switch once the API path is open.
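That "switch once the API path is open" plan can be encoded as a simple preference-ordered fallback. Everything here is a hypothetical sketch: the channel names and availability flags are placeholders you'd maintain yourself, not an OpenAI API.

```python
# Pick the most-preferred GPT-5.3-Codex channel that is actually
# available right now. Channel names are illustrative placeholders.
def pick_channel(available: dict[str, bool], preference: list[str]) -> str:
    """Return the first channel in preference order marked available."""
    for channel in preference:
        if available.get(channel, False):
            return channel
    raise RuntimeError("no channel for this model is available yet")

# Today the API is still gated, so the CLI wins; once the API rollout
# reaches your account, flip its flag and the same call returns "api".
status = {"api": False, "cli": True, "ide_extension": True}
print(pick_channel(status, ["api", "cli", "ide_extension"]))  # -> cli
```

The point of the ordering is that automation code never hard-codes a channel: the day the API flag flips, the preferred path takes over with no other changes.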