Exploring Artificial Intelligence

How to run the Codex CLI locally with Ollama and LM Studio

Elisa Terumi
Feb 01, 2026

In our last article, we saw how to run Claude Code locally with Ollama.

In this week’s article, we’ll look at how to configure and run OpenAI’s Codex CLI locally on our machine.

This means: no reliance on paid APIs, no sending our code to external servers, and full control over the AI models we use.

If you want privacy, cost savings, and flexibility to experiment with different open-source models, this tutorial is for you.

Follow our page on LinkedIn for more content like this! 😉

What we’re going to cover

  • What Codex CLI is

  • How to install and configure Codex

  • How to run local models with Ollama and LM Studio

  • Practical configuration of the config.toml file

  • Commands to switch between different models

Let’s go!


Step 1: Understanding Codex CLI

Codex CLI is a command-line tool from OpenAI that works as a programming assistant, designed to translate natural language into code and execute coding tasks autonomously.

Features

  • Code Generation: Codex can write code from natural language instructions, allowing developers to describe what they want in simple terms, such as “create a login page” or “sort a list of numbers.”

  • Debugging and Testing: It is capable of identifying and fixing errors in code, as well as running automated tests to ensure the code works as expected.

  • Integration with Development Environments: Codex operates in isolated sandbox environments, where it can interact with code repositories and execute tasks without affecting the production environment.

  • Multi-language Support: The system understands more than a dozen programming languages, including Python, JavaScript, TypeScript, Ruby, Go, and PHP, making it versatile for different software projects.

And here’s the cool part: it’s not locked to OpenAI models!

We can configure it to use local models running on our own machine through different providers.

Installing Codex CLI

Via Node.js:

npm i -g @openai/codex

Via Homebrew (macOS):

brew install codex

After installation (and a first run), Codex creates a ~/.codex/ folder on your system. That’s where we’ll configure everything.
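Whichever install route you took, it’s worth a quick sanity check before moving on. A sketch, assuming a standard install (the exact contents of the folder will vary):

```shell
# Confirm the CLI is on your PATH and see where its config lives
codex --version   # prints the installed Codex CLI version
ls ~/.codex       # config.toml (and session logs) end up here
```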

Step 2: Choose your local provider

To run AI models locally, we need a “server” that executes those models. Let’s look at two options:

Ollama – The Most Popular

It’s like Docker for language models. Simple to install and use, works on Mac, Linux, and Windows. Ideal for beginners.

Website: ollama.ai

See an article explaining more about Ollama here.

LM Studio – Graphical Interface + CLI

It has a friendly visual interface to manage models, but also offers a CLI. Great if you don’t live 100% in the terminal.

Website: lmstudio.ai

See an article explaining more about LM Studio here.

Step 3: Configuration with Ollama

Installing Ollama

Download and install it from the official website (or update it if you already have it installed). Then, download an open-source model:

ollama pull gpt-oss:20b

or a smaller model:

ollama pull qwen2.5-coder:1.5b

Check that the model is available (this also confirms the Ollama server is responding):

ollama list
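You can also hit Ollama’s HTTP API directly — this is the same endpoint Codex will talk to later. A sketch, assuming Ollama is on its default port, 11434:

```shell
# Ollama exposes an OpenAI-compatible API on localhost:11434 by default.
# If this returns a JSON list of your pulled models, the server is up.
curl http://localhost:11434/v1/models
```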

Configuring Codex for Ollama
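The wiring happens in ~/.codex/config.toml: we declare Ollama as a model provider and point it at the local endpoint. A minimal sketch, assuming Ollama’s default port (11434) and the `gpt-oss:20b` model pulled above — swap in whatever `ollama list` shows on your machine:

```toml
# ~/.codex/config.toml — sketch, assuming Ollama's defaults
model = "gpt-oss:20b"        # any model shown by `ollama list`
model_provider = "ollama"

[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"   # Ollama's OpenAI-compatible endpoint
```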
