"Generate with AI" fails / vLLM OpenAI compatible server
I am serving a local LLM with vLLM's OpenAI-compatible server. I can set the URL, API key, and model successfully in the marimo Settings pane.
When I hit "generate", marimo reaches the server, but the request fails because the message list violates the server's requirement that chat roles strictly alternate between "user" and "assistant".
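For context, a common workaround for this class of error (not specific to marimo's internals; `normalize_roles` is a hypothetical helper, not marimo or vLLM API) is to merge consecutive same-role messages before sending the request, so the resulting list alternates:

```python
def normalize_roles(messages):
    """Merge consecutive messages with the same role so that roles
    alternate, which some chat templates served by vLLM require.

    This is an illustrative sketch only; marimo's actual
    message-building code may differ.
    """
    merged = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            # Fold this message's content into the previous one.
            merged[-1]["content"] += "\n\n" + msg["content"]
        else:
            merged.append({"role": msg["role"], "content": msg["content"]})
    return merged
```

For example, two back-to-back "user" messages would be collapsed into one before the payload is sent to the server.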