UN-3115 [FIX] - Ollama Adapters Failing post LiteLLM Migration #1730
base: main
Conversation
Summary by CodeRabbit
Walkthrough
Updates parameter validation for the Ollama LLM and Embedding adapters.
Changes
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
Actionable comments posted: 1
In unstract/sdk1/src/unstract/sdk1/adapters/base1.py:
- Around lines 785-789: The code constructs the Ollama model identifier via adapter_metadata.get("model_name"/"model") and then conditionally prepends "ollama/", which is incorrect for embeddings. Update the logic in base1.py so that the embedding model name is passed exactly as extracted from adapter_metadata, without adding or checking for the "ollama/" prefix, ensuring the plain embedding model name is sent to LiteLLM's embeddings API.
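A rough sketch of the suggested change; the helper name and metadata keys here are illustrative, not copied from base1.py:

```python
# Hypothetical helper sketching the fix described in the comment above.
def resolve_embedding_model(adapter_metadata: dict) -> str:
    # Extract the configured model name ("model_name", falling back to "model").
    model = adapter_metadata.get("model_name", adapter_metadata.get("model", ""))
    # Return it as-is: no adding or checking for an "ollama/" prefix,
    # so the plain embedding model name reaches LiteLLM's embeddings API.
    return model
```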
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
Knowledge base: Disabled due to Reviews > Disable Knowledge Base setting
📒 Files selected for processing (1)
unstract/sdk1/src/unstract/sdk1/adapters/base1.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build
🔇 Additional comments (2)
unstract/sdk1/src/unstract/sdk1/adapters/base1.py (2)
629-631: Good fix for the double-validation issue. The nested fallback get("base_url", get("api_base", "")) correctly preserves the existing api_base value when base_url is absent during the second validation call, preventing the connection-refused error.
777-779: Consistent fix applied to embedding adapter. The same fallback logic correctly addresses the double-validation issue for embeddings, maintaining consistency with the LLM adapter fix.
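A minimal sketch of the fallback both comments describe; the validate() wrapper and key handling are assumed from the review, not copied verbatim from base1.py:

```python
# Hypothetical validator illustrating the nested fallback.
def validate(adapter_metadata: dict) -> dict:
    # First pass: "base_url" is present and wins.
    # Second pass: "base_url" was already converted, so fall back to the
    # existing "api_base" instead of clobbering it with an empty string.
    adapter_metadata["api_base"] = adapter_metadata.get(
        "base_url", adapter_metadata.get("api_base", "")
    )
    adapter_metadata.pop("base_url", None)
    return adapter_metadata
```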
Test Results
Summary
Runner Tests - Full Report
SDK1 Tests - Full Report
gaya3-zipstack
left a comment
Left one comment regarding maintainability. Else looks fine. Approving for now.
What
Fixes the Ollama LLM and Embedding adapters, which fail after the LiteLLM migration.

Why
Ollama parameters are validated twice:
a. First in LLM.__init__() / Embedding.__init__() - converts base_url → api_base
b. Second in LLM.complete() - re-validates the already-validated kwargs; by then base_url is gone, so api_base was reset to an empty string, causing the connection-refused error

How
Fall back to the existing value with adapter_metadata.get("base_url", adapter_metadata.get("api_base", "")) and apply the same change in OllamaEmbeddingParameters.validate(); a toy walkthrough of the two-pass flow follows below.

Can this PR break any existing features. If yes, please list possible items. If no, please explain why. (PS: Admins do not merge the PR without this section filled)
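To make the two-pass flow from the How section concrete, here is a toy demonstration; validate_once() is a hypothetical stand-in, not the SDK's actual validator:

```python
# Stand-in validator using the fallback described in the How section.
def validate_once(metadata: dict) -> dict:
    metadata["api_base"] = metadata.get("base_url", metadata.get("api_base", ""))
    metadata.pop("base_url", None)
    return metadata

metadata = {"base_url": "http://localhost:11434", "model": "llama3"}
metadata = validate_once(metadata)  # first pass: base_url -> api_base
metadata = validate_once(metadata)  # second pass: api_base now survives
assert metadata["api_base"] == "http://localhost:11434"
```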
Database Migrations
Env Config
Relevant Docs
Related Issues or PRs
Dependencies Versions
Notes on Testing
Screenshots
Checklist
I have read and understood the Contribution Guidelines.