Running Gemma 4 Locally with Ollama and OpenCode

Source: DEV Community
First steps

The usual first step in getting Gemma 4 running on Ollama is to pull the model:

```shell
ollama pull gemma4:e4b
```

Browse the available models and select the correct version for your system. The e4b variant is a good starting point if your hardware can support it. Use the `ollama list` command to confirm that your version is now available to Ollama.

Testing

Now, run the model to make sure it works as expected:

```shell
ollama run gemma4:e4b
```

Ask a simple question or just say "Hello", then use /bye to exit. Immediately run `ollama ps`. You should see something like this:

```
NAME        ID            SIZE   PROCESSOR  CONTEXT  UNTIL
gemma4:e4b  c6eb396dbd59  10 GB  100% GPU   4096     4 minutes from now
```

Pay close attention to that CONTEXT value. If you see 4096, as here, Ollama is using the default 4K context window. This will bite you when you try to work with the model in OpenCode. A typical symptom of the small context window is the model repeatedly replying "Just let me know what you want to do", or similar. The cause is the system prompt: OpenCode sends a large system prompt, and with only a 4K window there is little room left for your actual request.
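One way to avoid the small default is to raise the context window before pointing OpenCode at the model. A sketch of two approaches, using Ollama's `OLLAMA_CONTEXT_LENGTH` environment variable and the Modelfile `num_ctx` parameter; the 16384 value and the derived model name `gemma4-16k` are examples of my choosing, so size the window to what your hardware can hold:

```shell
# Option 1: set the server-wide default context length.
# Applies to every model this server loads; restart the server to take effect.
OLLAMA_CONTEXT_LENGTH=16384 ollama serve

# Option 2: bake a larger context into a derived model with a Modelfile.
cat > Modelfile <<'EOF'
FROM gemma4:e4b
PARAMETER num_ctx 16384
EOF
ollama create gemma4-16k -f Modelfile

# Verify: run the derived model, say "Hello", exit with /bye,
# then check that the CONTEXT column now shows the larger value.
ollama run gemma4-16k
ollama ps
```

A larger window also means a larger KV cache in memory, so if the model stops fitting on your GPU (the PROCESSOR column drops below 100% GPU), step the value back down.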