OpenAI Releases GPT-5.3-Codex-Spark, a High-Speed Codex Model
OpenAI has introduced GPT-5.3-Codex-Spark, a smaller version of GPT-5.3 designed for high-speed processing, targeting 1,000 tokens per second. At that speed, most tasks, particularly those that do not involve long test runs, complete in under three minutes.
The model supports a 128,000-token context window but does not accept image input. A key advantage is that GPT-5.3-Codex-Spark usage does not count against users' rate limits, making it well suited for relatively simple tasks that would otherwise consume quota on the frontier models.
It is available across multiple platforms, including the Codex app, the CLI, and the VS Code extension (an update is required).
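For those who want to try the model from the Codex CLI, the sketch below shows one way to drive a quick, non-interactive run from Python. The exact model identifier "gpt-5.3-codex-spark" and the availability of the `exec` subcommand and `--model` flag in your installed CLI version are assumptions, not confirmed details from the announcement; check `codex --help` for the options your version actually supports.

```python
# Minimal sketch: invoking the Codex CLI from Python for a short task
# against the new high-speed model. Assumes the `codex` CLI is installed
# and that "gpt-5.3-codex-spark" is the model slug (unconfirmed assumption).
import subprocess

result = subprocess.run(
    [
        "codex", "exec",                   # non-interactive run (assumed subcommand)
        "--model", "gpt-5.3-codex-spark",  # hypothetical model identifier
        "Rename the helper function in utils.py and update its callers.",
    ],
    capture_output=True,
    text=True,
)

# Print whatever the CLI wrote to stdout for this run.
print(result.stdout)
```

Because the model targets 1,000 tokens per second, short edits like the one above are exactly the kind of task where it should return well within the sub-three-minute window described in the announcement.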