How to use a local AI model like Llama instead of OpenAI

Has anyone tried using a local API endpoint for an AI assistant instead of the OpenAI API? I have Ollama running on my laptop, along with several models that assist with my tasks, so I'd like to use these local models instead of relying on Anthropic, OpenAI, or Google. By the way, I'm using Llama and Open WebUI.
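For context, Ollama exposes an OpenAI-compatible API on localhost, so in principle any client that lets you override the base URL can talk to it. Here is a rough sketch of what I mean (port 11434 is Ollama's default, and llama3.1 is just an example model name from my machine):

```python
# Sketch: point the standard OpenAI client at a local Ollama server.
# Assumes Ollama is running on its default port (11434) and that a model
# named "llama3.1" has already been pulled; adjust both as needed.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # any non-empty string works; Ollama ignores the key
)

response = client.chat.completions.create(
    model="llama3.1",
    messages=[{"role": "user", "content": "Say hello from my local model."}],
)
print(response.choices[0].message.content)
```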
7 Replies
Hall (7d ago)
Someone will reply to you shortly. In the meantime, this might help:
eugene (7d ago)
Link: AI Completion - marimo (The next generation of Python notebooks)
Crimson Nebula (OP, 6d ago)
Yeah. But I have another question: when I try to select another model from the dropdown, I can't select anything, and no other ML models show up in the dropdown menu. Is there any way to fix this?
eugene (6d ago)
may i ask which dropdown you're dealing with? i can't see any dropdown here
[image attachment]
Crimson Nebula (OP, 6d ago)
The last one, the field named Model.
eugene (6d ago)
that's a text input area, not a dropdown. you can type the model name there by hand, like llama3.1 or qwen2.5:0.5b, and the AI assist will automatically use that model. you can run the ollama ls command to list all available models
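If you'd rather check the available names programmatically (for example, to copy one into that Model field), Ollama also exposes a local REST endpoint that lists installed models. A small sketch, assuming the server is running on its default port:

```python
# Sketch: list locally installed Ollama models, similar to `ollama ls`.
# Assumes Ollama is running on its default port (11434).
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()

for model in resp.json().get("models", []):
    # Each entry's "name" (e.g. "llama3.1:latest") is what you'd type
    # into the Model field.
    print(model["name"])
```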
Crimson Nebula (OP, 6d ago)
Oooo, I got it. Thanks for your help.
