Codex - Cheatsheet
Overview
Estimated time: 15–20 minutes
Quick reference guide for OpenAI Codex patterns, prompting techniques, and historical significance. While Codex is deprecated, understanding these patterns helps with modern AI coding tools.
Codex Model Versions
Historical: Enhanced Codex Model
- Context: Up to 8,192 tokens
- Best for: Complex code generation
- Strengths: Instruction following
- Status: Deprecated (March 2023)
Prompting Patterns
Function Generation
# Pattern: Comment + signature
# Create a function that calculates compound interest
def compound_interest(principal, rate, time, compound_frequency):
    """Calculate compound interest"""
    # Codex would complete this function
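A completion in the spirit of this pattern might look like the following sketch; the standard formula A = P(1 + r/n)^(nt) is assumed, and the exact body Codex produced would vary:

```python
def compound_interest(principal, rate, time, compound_frequency):
    """Calculate compound interest earned (not the final balance)."""
    # A = P * (1 + r/n)^(n*t); the interest earned is A - P
    amount = principal * (1 + rate / compound_frequency) ** (compound_frequency * time)
    return amount - principal

# $1,000 at 5% compounded monthly for 2 years
print(round(compound_interest(1000, 0.05, 2, 12), 2))  # → 104.94
```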
Natural Language Instructions
# Pattern: Detailed comments
# Write a function that:
# 1. Takes a list of numbers
# 2. Filters out negative numbers
# 3. Returns the sum of squares of remaining numbers
def sum_of_positive_squares(numbers):
    # Implementation would be generated
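A plausible generated body for the three numbered requirements (a sketch, not Codex's actual output):

```python
def sum_of_positive_squares(numbers):
    """Sum the squares of the non-negative numbers in the list."""
    # Filter out negatives, square what remains, and sum
    return sum(n * n for n in numbers if n >= 0)

print(sum_of_positive_squares([3, -1, 4, -5]))  # → 25
```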
Code Explanation
// Pattern: Code + explanation request
function debounce(func, wait) {
  let timeout;
  return function executedFunction(...args) {
    const later = () => {
      timeout = null;
      func(...args);
    };
    clearTimeout(timeout);
    timeout = setTimeout(later, wait);
  };
}
// Explain what this function does:
// [Codex would provide explanation]
API Usage Patterns
Chat Completion (Modern)
Use the chat completions endpoint for best results. Provide a system message to set behavior and a user message with the task.
# curl example - modern chat completion
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "system", "content": "You are a helpful coding assistant."},
      {"role": "user", "content": "Write a Python function that sorts a dictionary by values and returns a list of tuples."}
    ],
    "max_tokens": 300,
    "temperature": 0.1
  }'
# Python example using openai (modern chat API)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that sorts a dictionary by values and returns a list of tuples."},
    ],
    max_tokens=300,
    temperature=0.1,
)
print(resp.choices[0].message.content)
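For reference, the function the prompt above asks for is essentially a one-liner; a typical response would resemble this sketch:

```python
def sort_dict_by_values(d):
    """Return the dictionary's items as a list of tuples, sorted by value."""
    return sorted(d.items(), key=lambda item: item[1])

print(sort_dict_by_values({"a": 3, "b": 1, "c": 2}))  # → [('b', 1), ('c', 2), ('a', 3)]
```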
Common Parameters
Temperature
- 0.0: Deterministic output
- 0.1-0.3: Good for code
- 0.7-1.0: Creative but less reliable
Max Tokens
- 50-100: Short completions
- 150-300: Function generation
- 500+: Complex implementations
OpenAI Codex (2025) – Web, CLI & VS Code
Web Console & Workspace Assistant
In 2025 the OpenAI Codex web console acts as a full workspace assistant. It supports repo-aware analysis, multi-file refactors, sandboxed test execution, and saved prompt/workflow templates. Use it to prototype multi-step refactors and preview patches before applying them locally.
- Repo-aware assistance: Grant read-only access or upload a repo bundle to get project-level suggestions and multi-file patches.
- Execution sandbox: Run generated code in an isolated runner to validate outputs and tests before copying into your source tree.
- Prompt workflows: Save reusable prompt templates, chains (task → test → lint), and guarded transformations (pre-checks + post-linting) for repeatable automation.
- Model selection: Choose between precision, speed, and local-augmentation modes from the UI; tune temperature, max tokens, and safety filters per workspace.
CLI & Automation (2025)
The modern Codex CLI is designed for reproducible automation: it produces machine-readable patches, supports streaming outputs, function-calling, and integrates with CI pipelines. The recommended pattern is to use chat completions + function-calling to request patches or tests, then consume the structured response in scripts.
# Example: request a repo patch via chat + function-calling
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "system", "content": "You are Codex. When possible, produce a JSON patch by invoking the create_patch function."},
      {"role": "user", "content": "Refactor process_items to use streaming and add unit tests."}
    ],
    "tools": [
      {"type": "function", "function": {"name": "create_patch", "parameters": {"type": "object", "properties": {"patch": {"type": "string"}}, "required": ["patch"]}}}
    ],
    "tool_choice": "auto"
  }'
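On the consuming side, a script extracts the patch from the structured response. A minimal sketch, assuming the tool-call response shape (a mock response dictionary stands in for a live API call here; field names may differ slightly across API versions):

```python
import json

# Mock of the relevant part of a chat completion response containing a tool call
response = {
    "choices": [{
        "message": {
            "tool_calls": [{
                "function": {
                    "name": "create_patch",
                    "arguments": '{"patch": "--- a/process_items.py\\n+++ b/process_items.py\\n..."}'
                }
            }]
        }
    }]
}

call = response["choices"][0]["message"]["tool_calls"][0]["function"]
if call["name"] == "create_patch":
    # The arguments field arrives as a JSON-encoded string
    patch = json.loads(call["arguments"])["patch"]
    print(patch.splitlines()[0])  # → --- a/process_items.py
```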
Official CLI tooling (e.g., codex patch, codex run) wraps these same patterns and emits JSON-friendly outputs for CI. Use pre-commit hooks to apply lint/test gates before accepting automated patches.
VS Code Extension (2025)
The 2025 OpenAI Codex extension integrates workspace mode, sandboxed test runs, and multi-file code actions directly into the editor.
- Inline assistant & code actions: Request rewrites, optimizations, or explanations for selections; preview multi-file patches and apply them as staged changes.
- Multi-file refactor: Run a single refactor across the workspace and review a proposed patch with diffs, tests, and linter results before applying.
- Test generation & execution: Generate unit tests and run them in the sandbox; failing tests remain in the preview so you can iterate.
- Security & privacy: Store keys in OS secret stores, enable workspace delegation tokens, and exclude sensitive files from remote processing via settings.
- Local augmentation: Use a local code index/embedding cache to speed up retrieval and keep sensitive data on-device.
VS Code settings example
{
  "openai.apiKey": "${env:OPENAI_API_KEY}",
  "codex.enableWorkspaceMode": true,
  "codex.sandboxOnApply": true,
  "codex.model": "gpt-4o",
  "codex.safetyLevel": "medium"
}
Always preview auto-generated patches, run generated tests, and run linters/formatters before committing changes.
Language-Specific Patterns
Python
# Class definition pattern
class DataProcessor:
    """Process and analyze data from various sources"""

    def __init__(self, data_source):
        self.data_source = data_source  # Codex would complete initialization

    def clean_data(self):
        """Remove invalid entries and normalize format"""
        # Implementation generated

    def analyze_trends(self):
        """Identify patterns and trends in the data"""
        # Analysis code generated
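As a concrete illustration, here is one way the clean_data stub might be filled in, assuming data_source is a simple list of raw strings (an assumption for this sketch; real sources would vary):

```python
class DataProcessor:
    """Process and analyze data from various sources"""

    def __init__(self, data_source):
        # Assumes data_source is an iterable of raw string records
        self.data_source = data_source
        self.data = list(data_source)

    def clean_data(self):
        """Remove invalid entries and normalize format"""
        # Drop None/empty entries, trim whitespace, lowercase the rest
        self.data = [s.strip().lower() for s in self.data if s and s.strip()]
        return self.data

proc = DataProcessor(["  Alpha ", None, "", "Beta"])
print(proc.clean_data())  # → ['alpha', 'beta']
```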
JavaScript
// API endpoint pattern
// Create an Express.js endpoint for user authentication
app.post('/api/auth/login', async (req, res) => {
  // Codex would generate authentication logic
});

// React component pattern
// Create a React component for displaying user profiles
const UserProfile = ({ user }) => {
  // Component implementation generated
};
SQL
-- Complex query pattern
-- Find the top 5 customers by total order value in the last quarter
SELECT
    -- Codex would complete the query
Best Practices (Historical)
Effective Prompting
Do's
- Use clear, descriptive comments
- Provide function signatures
- Include context and requirements
- Use consistent naming conventions
Don'ts
- Don't rely on generated code blindly
- Avoid vague or ambiguous prompts
- Don't ignore security implications
- Don't skip code review
Quality Control
- Always Review: Generated code needed human validation
- Test Thoroughly: Codex could generate plausible but incorrect code
- Security Check: Scan for potential vulnerabilities
- Performance Review: Optimize generated algorithms
Common Use Cases
Code Generation
Utility Functions
# Generate helper functions
def validate_email(email):
def format_currency(amount):
def generate_uuid():
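As an example of what one of these helpers might come back as, here is a simple validate_email (a basic regex sketch, not a full RFC 5322 validator):

```python
import re

def validate_email(email):
    """Return True if the string looks like a plausible email address."""
    # One '@', no whitespace, and a dot in the domain part
    pattern = r"^[^@\s]+@[^@\s]+\.[^@\s]+$"
    return re.match(pattern, email) is not None

print(validate_email("user@example.com"))  # → True
print(validate_email("not-an-email"))      # → False
```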
API Integrations
# API wrapper functions
def fetch_weather_data(city):
def send_notification(message):
def upload_to_s3(file):
Code Translation
# Pattern: Language conversion
# Convert this JavaScript function to Python:
# function fibonacci(n) { ... }
def fibonacci(n):
    # Python equivalent generated
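The translated result would typically be along these lines (an iterative sketch; since the JavaScript original is elided, the usual Fibonacci semantics are assumed):

```python
def fibonacci(n):
    """Return the n-th Fibonacci number (fibonacci(0) == 0)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(10))  # → 55
```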
Documentation Generation
def complex_algorithm(data, threshold, mode='strict'):
    # Implementation here
    pass
# Generate docstring for the above function:
"""
[Codex would generate comprehensive docstring]
"""
Integration Patterns
GitHub Copilot (Codex-powered)
- Inline Suggestions: Real-time code completion
- Comment-driven: Generate code from comments
- Context Aware: Use surrounding code for context
- Multi-language: Support across programming languages
VS Code Extensions
# Extension configuration pattern
{
  "openai.apiKey": "${env:OPENAI_API_KEY}",
  "openai.model": "gpt-4o",
  "openai.temperature": 0.1,
  "openai.maxTokens": 200
}
Limitations Reference
Technical Constraints
- Context Window: Maximum 8,192 tokens
- Knowledge Cutoff: Training data up to early 2021
- No Internet Access: Couldn't fetch current information
- No Execution: Couldn't run or test generated code
Quality Issues
- Hallucination: Could generate non-existent APIs
- Security Gaps: Sometimes suggested vulnerable patterns
- Logic Errors: Plausible but incorrect implementations
- Outdated Patterns: Used deprecated APIs or practices
Modern Equivalents
Migration Path
From Codex
- Legacy Codex-era models (deprecated)
- Single-turn completion
- Function-level generation
- Limited context
To Modern Models
- GPT-4, Claude, Gemini
- Conversational interfaces
- Multi-file understanding
- Extended context windows
Tool Evolution
- GitHub Copilot: Now uses GPT-4 and specialized models
- Cursor: Advanced multi-file editing with GPT-4
- Cline: Autonomous agents beyond simple completion
- Windsurf: Enhanced codebase understanding
Historical Context
Timeline
- June 2021: GitHub Copilot technical preview
- August 2021: Codex API private beta launch
- March 2022: Updated Codex models with edit and insert capabilities
- June 2022: GitHub Copilot general availability
- March 2023: Codex API deprecation
Impact Metrics
- GitHub Copilot: Over 1 million developers using Codex-powered features
- Code Generation: Billions of lines of AI-suggested code
- Industry Shift: Launched the AI coding assistant market
- Developer Adoption: Proved AI assistance value
Learning Resources
Historical Documentation
- OpenAI Research Papers: Original Codex methodology
- GitHub Copilot Studies: Usage patterns and effectiveness
- Community Experiments: Creative applications and use cases
- Academic Research: AI-assisted programming studies
Conclusion
While OpenAI Codex is no longer available, its patterns and approaches continue to influence modern AI coding tools. Understanding these historical foundations helps developers better use current AI assistants and appreciate how far the technology has advanced.