Is your feature request related to a problem? Please describe.
In a Langchain.rb-based application that is set up and hosted by different users,
it is not known in advance which LLMs can be used via API keys.
So the idea would be to allow users to run such an application with agents that:
- do not rely on a specific LLM, or
- use specific LLM methods and parameters on an optional basis, or
- raise a specific error when the LLM does not match.
Users could just provide the specific LLM API keys they have access to, and
the application would select and use the corresponding LLM class.
While the application could handle this on its own, it might be a common use
case for Langchain.rb users.
Providing a mechanism for generic LLM configuration in Langchain.rb directly
would thus allow application developers to use Langchain.rb out of the box
without implementing their own LLM setup.
This would also be important for multi-tenant applications, cost
optimization, failover scenarios, and experimentation with different LLMs.
Describe the solution you'd like
A generic `Langchain::LLM::Generic` class could handle several parts:
Dynamic LLM Selection:
- Use a `LANGCHAINRB_LLM` environment variable containing the provider and model name (e.g., `"anthropic.claude-3-sonnet"`).
- Support fallback values or a prioritized list of LLMs (e.g., `"openai.gpt-4,anthropic.claude-3-sonnet"`).
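The selection scheme above could be sketched as follows. The helper name `parse_llm_env` and the exact parsing are assumptions for illustration, not an existing Langchain.rb API:

```ruby
# Hypothetical sketch: split a prioritized LANGCHAINRB_LLM value into
# ordered provider/model candidates using the "provider.model" format
# proposed in this issue.
def parse_llm_env(value)
  value.to_s.split(",").map do |entry|
    provider, model = entry.strip.split(".", 2)
    { provider: provider, model: model }
  end
end

parse_llm_env("openai.gpt-4,anthropic.claude-3-sonnet")
# => [{ provider: "openai", model: "gpt-4" },
#     { provider: "anthropic", model: "claude-3-sonnet" }]
```

The generic class would then try each candidate in order until one matches an available API key.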
LLM Initialization:
- Instantiate the corresponding LLM class based on the environment variable.
- Validate the configuration and handle initialization errors gracefully.
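One way to resolve a provider key to a class and fail gracefully is a simple registry; the registry contents and error class below are illustrative assumptions, not part of Langchain.rb:

```ruby
# Hypothetical sketch: map a provider key to an LLM class name and raise a
# dedicated error for unknown providers instead of failing obscurely later.
class UnknownLLMProviderError < StandardError; end

LLM_REGISTRY = {
  "openai"    => "Langchain::LLM::OpenAI",
  "anthropic" => "Langchain::LLM::Anthropic"
}.freeze

def resolve_llm_class(provider)
  LLM_REGISTRY.fetch(provider) do
    raise UnknownLLMProviderError, "No LLM class registered for #{provider.inspect}"
  end
end
```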
Method Availability Check:
- Provide a check on the LLM instance (or a maintained list of methods) before forwarding a call, so agents can find out whether a method is available in that LLM.
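A minimal sketch of that check, assuming the generic wrapper holds the concrete LLM in `@llm`; `supports_message?` is the method name proposed in this issue, not an existing Langchain.rb method:

```ruby
# Hypothetical wrapper: answers whether the wrapped LLM implements a
# given method, so agents can check before calling it.
class GenericLLMSketch
  def initialize(llm)
    @llm = llm
  end

  # True when the underlying LLM implements the given method.
  def supports_message?(method_name)
    @llm.respond_to?(method_name)
  end
end
```

A maintained list of supported methods per provider would work the same way, just with a lookup table instead of `respond_to?`.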
Parameter Mapping:
- Map generic parameters to provider-specific ones (e.g., `temperature` to `top_p` for certain providers).
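Such a mapping could be a per-provider table of key renames; the table below is invented for illustration and does not reflect any real provider's parameter semantics:

```ruby
# Illustrative sketch: rename generic parameter keys to provider-specific
# ones; keys without a mapping pass through unchanged.
PARAM_MAP = {
  "exampleprovider" => { temperature: :top_p }
}.freeze

def map_params(provider, params)
  mapping = PARAM_MAP.fetch(provider, {})
  params.transform_keys { |key| mapping.fetch(key, key) }
end

map_params("exampleprovider", temperature: 0.7)  # => { top_p: 0.7 }
```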
Method Forwarding:
- Forward all method calls to the underlying LLM instance.
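Forwarding can be sketched with `method_missing`; class and variable names here are hypothetical:

```ruby
# Hypothetical sketch: forward only calls the wrapped LLM actually
# implements; everything else raises NoMethodError as usual.
class ForwardingLLM
  def initialize(llm)
    @llm = llm
  end

  def method_missing(name, *args, **kwargs, &block)
    return super unless @llm.respond_to?(name)

    @llm.public_send(name, *args, **kwargs, &block)
  end

  def respond_to_missing?(name, include_private = false)
    @llm.respond_to?(name, include_private) || super
  end
end
```

In real code, Ruby's `SimpleDelegator` or `Forwardable` from the standard library could provide much of this behavior out of the box.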
Describe alternatives you've considered
Factory Pattern:
- A factory could deliver the specific LLM instance directly based on the environment.
- Pros: simpler implementation, clearer separation of concerns.
- Cons: less flexibility for dynamic method handling.
Plugin System:
- A plugin system could allow users to add new LLM providers dynamically.
- Pros: highly extensible.
- Cons: more complex to implement and maintain.
Additional context
This feature would be particularly useful for:
- Multi-tenant or locally hosted applications where different users may have access to different LLMs.
- Cost optimization by dynamically selecting the most cost-effective LLM.
- Agents could provide a list of required LLM methods.
- Failover scenarios where the application can switch to a backup LLM if the primary one fails.
- Experimentation with different LLMs to compare performance and results.
Examples

```ruby
llm = Langchain::LLM::Generic.new

# Attempt to use a method that is not supported by the selected LLM
if llm.supports_message?(:summarize)
  summary = llm.summarize(text: "A long article about AI advancements...")
  puts "Summary: #{summary}"
else
  puts "Summarization is not supported by the selected LLM."
end
```
If, for example, the selected LLM is Anthropic (which doesn't support `summarize`):

```
The `summarize` method is not supported by the selected LLM.
Supported LLMs for `summarize`: AI21, Cohere, OpenAI
```
Implementation notes:

```ruby
module Langchain::LLM
  class MethodNotSupportedError < StandardError
    attr_reader :supported_llms

    def initialize(method_name, supported_llms)
      @supported_llms = supported_llms
      super("#{method_name} is not supported by the selected LLM. Supported LLMs: #{supported_llms.sort.join(', ')}")
    end
  end
end
```
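Callers could also handle this error with `rescue` at the call site. A self-contained approximation is sketched below; the explicit `raise` stands in for a generic LLM wrapping a provider that lacks `summarize`, and the error class is re-declared here only to keep the snippet runnable:

```ruby
# Hypothetical "usage with rescue" sketch for the error class proposed
# in this issue.
module Langchain
  module LLM
    class MethodNotSupportedError < StandardError
      attr_reader :supported_llms

      def initialize(method_name, supported_llms)
        @supported_llms = supported_llms
        super("#{method_name} is not supported by the selected LLM. Supported LLMs: #{supported_llms.sort.join(', ')}")
      end
    end
  end
end

begin
  # A generic LLM wrapping Anthropic would raise here, since Anthropic
  # does not support summarize (per the example above).
  raise Langchain::LLM::MethodNotSupportedError.new(:summarize, ["OpenAI", "Cohere", "AI21"])
rescue Langchain::LLM::MethodNotSupportedError => e
  puts e.message
  # => summarize is not supported by the selected LLM. Supported LLMs: AI21, Cohere, OpenAI
end
```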