Setting Up A Free Claude-Like Assistant With OpenCode And Ollama

I’ve been experimenting with Claude Code for a while, and there’s no doubt it’s a powerful coding partner, capable of handling a wide range of tasks. But it’s also a paid service, and depending on your budget it may not be a viable option, so it’s worth exploring alternatives.

There are several options you can consider. One of them is OpenCode, an open-source coding assistant that can be configured to work with various LLM providers and models, including local ones.

Your Local Alternative with Ollama

Installing OpenCode is straightforward. You can accomplish this by running the following command in your terminal:

curl -fsSL https://opencode.ai/install | bash

There are other installation methods available in the official documentation.
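For instance, if you prefer a package manager, the docs also list npm and Homebrew methods (package names below are as documented at the time of writing; double-check before running):

npm install -g opencode-ai     # via npm
brew install sst/tap/opencode  # via Homebrew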

Once installed, the command opencode should be available in your terminal. Executing it will start the TUI (Terminal User Interface) for OpenCode.

OpenCode assistant running

In my installation, the tool came with two free models by default: big-pickle and grok-code. These are provided by the OpenCode team as part of a curated list that also includes models from other providers.

I’m using Ollama to run local models. I was eager to experiment with the qwen3-coder model, which I have high expectations for, given my experience with other models in the Qwen series.
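Before pointing OpenCode at it, the model needs to be available in Ollama. A minimal sketch, assuming a recent Ollama release with cloud model support (cloud-hosted tags require signing in first):

ollama signin                       # one-time sign-in, required for cloud tags
ollama pull qwen3-coder:480b-cloud  # fetch the cloud-hosted model reference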

With a file named opencode.json in my configuration directory (~/.config/opencode/), I configured OpenCode to use Ollama as the provider and qwen3-coder as the model.

{
  "$schema": "https://opencode.ai/config.json",
  "model": "ollama/qwen3-coder:480b-cloud", // set default model
  "provider": {
    "ollama": { // provider id
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen3-coder:480b-cloud": { // configure the model
          "name": "Qwen3 Coder 480B Cloud"
        }
      }
    }
  }
}

The baseURL points to my local Ollama server, running on port 11434. Note that the model is specified as ollama/qwen3-coder:480b-cloud, meaning it’s hosted in Ollama Cloud. That’s due to my hardware limitations: I don’t have the resources to run the full model locally. If you have a powerful enough machine, you can run it locally by changing the model name to ollama/qwen3-coder:480b, or use the smaller ollama/qwen3-coder:30b.
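If your hardware can handle it, pulling the smaller local variant is a one-liner (the exact tag comes from the Ollama model library):

ollama pull qwen3-coder:30b  # smaller variant that runs fully locally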

I can, for instance, incorporate any other model I have running locally simply by adding it to the models section of the configuration file.

Listing available models with the command "ollama list"

To incorporate the mistral model from Ollama, I would add the following entry to the models section:

"mistral": {
  "name": "Mistral"
}
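For context, the models section from the earlier configuration would then contain both entries:

      "models": {
        "qwen3-coder:480b-cloud": {
          "name": "Qwen3 Coder 480B Cloud"
        },
        "mistral": {
          "name": "Mistral"
        }
      }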

NOTE: The model you use must support tool calling. You can check a model’s capabilities by running ollama show <model-name>.
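For example, to check mistral (on recent Ollama releases, the output includes a Capabilities section; the model qualifies if tools is listed there):

ollama show mistral  # look for "tools" under Capabilities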

After configuring, select the model by typing /models, which opens the model selection interface.

Listing available models on OpenCode

That’s it! Get your Ollama server running with your desired models, configure OpenCode, and start coding with your local AI assistant, with no dependency on paid services.

Connecting to Ollama Cloud Using an API Key

When a model is hosted in Ollama Cloud, having a local Ollama server is not required; simply connect to the cloud endpoint.

First, store the API key by running the command:

opencode auth login

This command opens a prompt asking for some details about the provider. In the version of OpenCode I’m using (1.0.25), Ollama is not listed, so I selected "Other", set the provider id to ollama, and provided the API key.

It warns:

This only stores a credential for ollama – you will need to configure it in opencode.json, check the docs for examples.

I needed to modify the configuration file to point to the cloud endpoint:

{
  "$schema": "https://opencode.ai/config.json",
  "model": "ollama/qwen3-coder:480b-cloud",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama",
      "options": {
        "baseURL": "https://ollama.com/v1" // set Ollama Cloud endpoint
      },
      "models": {
        "qwen3-coder:480b-cloud": {
          "name": "Qwen3 Coder 480B Cloud"
        }
      }
    }
  }
}

Pretty simple, right? After storing the API key, it’s just a matter of pointing the baseURL to the cloud endpoint.
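If you want to sanity-check the key outside of OpenCode, here’s a quick sketch, assuming the cloud endpoint exposes the usual OpenAI-compatible chat route and accepts a Bearer token (worth verifying against the Ollama docs):

curl https://ollama.com/v1/chat/completions \
  -H "Authorization: Bearer $OLLAMA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3-coder:480b-cloud", "messages": [{"role": "user", "content": "Say hi"}]}'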

Connecting to Claude Models Using an API Key

I wanted to test the capabilities of OpenCode compared to Claude Code. The fairest way to do that is to connect OpenCode to the same models used by Claude Code. It turns out that it’s possible, and even easier than configuring Ollama.

Using the same command:

opencode auth login

you can connect to the Anthropic models used by Claude Code: select "Anthropic" as the provider and enter your API key.

After doing that, I could see the Anthropic models inside OpenCode without changing opencode.json.

OpenCode listing Anthropic models like Claude Opus, Claude Sonnet, and so on
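If you want one of them as the default, it’s just the top-level model field in opencode.json (the model id below is illustrative; copy the exact id from the /models list):

{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4-5" // illustrative id, pick from /models
}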

You can try any other available provider; there are quite a few of them.

Final Thoughts

I just started exploring OpenCode, but so far I’m impressed with its capabilities and flexibility. Here, I demonstrated how to configure it to work with Ollama and local models, but there are many other possibilities like agents, MCP, ACP, formatting, and more.

If you’re looking for a local alternative to Claude Code, give OpenCode a try. It’s open-source, provider-agnostic, and has a growing community. Happy coding!
