How to use a local AI model like Llama instead of OpenAI
Has anyone tried using a local API endpoint for an AI assistant instead of the OpenAI API?
I have Ollama running on my laptop, along with several models that assist with my tasks, which is why I want to use a local model instead of relying on Anthropic, OpenAI, or Google. By the way, I’m using Llama and Open WebUI.
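For context, Ollama exposes an OpenAI-compatible API at http://localhost:11434/v1, so anything that can talk to the OpenAI API can usually be pointed at the local server instead. A minimal sketch with the openai Python package, assuming Ollama is running locally (the llama3.1 model name is just an example; use whatever ollama ls reports on your machine):

```python
from openai import OpenAI

# Point the standard OpenAI client at the local Ollama server.
client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # required by the client, ignored by Ollama
)

response = client.chat.completions.create(
    model="llama3.1",  # example name; use any model shown by `ollama ls`
    messages=[{"role": "user", "content": "Say hello from a local model."}],
)
print(response.choices[0].message.content)
```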
Hi, can this help with your problem?
https://docs.marimo.io/guides/editor_features/ai_completion/?h=ai#using-ollama
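For reference, the linked guide essentially points marimo's OpenAI settings at the local Ollama server. A rough sketch of the marimo.toml entry, assuming the section and key names shown in that guide (double-check the docs for your marimo version, and substitute a model you have actually pulled):

```toml
[ai.open_ai]
api_key = "ollama"                       # not used by Ollama, but the field is required
model = "llama3.1"                       # any model listed by `ollama ls`
base_url = "http://localhost:11434/v1"   # Ollama's OpenAI-compatible endpoint
```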
Yeah
But I have another question: when I try to select another model from the dropdown, I can't select anything, and no other ML models show up in the dropdown menu.
Is there any way to fix this?
May I ask which dropdown you are dealing with? I can't see any dropdown here.
The last one, named Model.
That's a text input area. You can type the model name there by hand, like llama3.1 or qwen2.5:0.5b, and the AI assistant will automatically use that model.
You can use the ollama ls command to list all available models.
Oooo, I got it. Thanks for your help.