Chat Services

The LLM Agentic Tool Mesh platform's chat services provide all the necessary tools to create a robust chat application utilizing Large Language Models (LLMs). These services handle model instantiation, conversation memory, prompt rendering, and message management.

All these services are implemented following the Factory Design Pattern. Configuration settings and details of each general service can be found in its abstract base class, while instance-specific settings and results are documented within each specific implementation file.
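
As a conceptual illustration, a factory of this kind maps the 'type' field of a configuration dictionary to a concrete implementation class. The sketch below shows the general shape of such a factory; the ServiceFactory name and internal registry are hypothetical, not the platform's actual code. Only the create(config) entry point mirrors the services documented on this page.

# Hypothetical sketch of the Factory Design Pattern, for illustration only.
# The real services expose a similar create(config) entry point.
class ServiceFactory:
    _registry = {}  # maps a 'type' string to an implementation class

    @classmethod
    def register(cls, type_name, implementation):
        cls._registry[type_name] = implementation

    @classmethod
    def create(cls, config):
        try:
            implementation = cls._registry[config['type']]
        except KeyError:
            raise ValueError(f"Unknown service type: {config['type']}")
        return implementation(config)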

Chat Model

The chat model service manages the instantiation and utilization of different LLMs based on the provided configuration. It uses the Factory Design Pattern to abstract the complexity of model selection, making integration straightforward and scalable.

Example: Chat Model

Here’s an example of how you can use the ChatModel class to initialize a chat model and invoke it with specific prompts.

from athon.chat import ChatModel
from langchain.schema import HumanMessage, SystemMessage

# Example configuration for the Chat Model
LLM_CONFIG = {
    'type': 'LangChainChatOpenAI',
    'api_key': 'your-api-key-here',
    'model_name': 'gpt-4o',
    'temperature': 0.7
}

# Initialize the Chat Model with the provided configuration
chat = ChatModel.create(LLM_CONFIG)

# Define the prompts
prompts = [
    SystemMessage(content="Convert the message to pirate language"),
    HumanMessage(content="Today is a sunny day and the sky is blue")
]

# Invoke the model with the prompts
result = chat.invoke(prompts)

# Handle the response
if result.status == "success":
    print(f"COMPLETION:\n{result.content}")
else:
    print(f"ERROR:\n{result.error_message}")

Chat Memory

The chat memory service manages the storage and retrieval of conversation history, which is essential for maintaining context in chat interactions. The example below uses the LangChainBufferMemory implementation (configured with the 'LangChainBuffer' type) to manage chat memory.

Example: Chat Memory

Here’s an example of how you can initialize a chat memory instance and retrieve it using the get_memory method.

from athon.chat import ChatMemory

# Example configuration for the Chat Memory
MEMORY_CONFIG = {
    'type': 'LangChainBuffer',
    'memory_key': 'chat_history',
    'return_messages': True
}

# Initialize the Chat Memory with the provided configuration
memory = ChatMemory.create(MEMORY_CONFIG)

# Retrieve the memory instance
memory_result = memory.get_memory()

# Handle the response
if memory_result.status == "success":
    print(f"MEMORY INSTANCE:\n{memory_result.memory}")
else:
    print(f"ERROR:\n{memory_result.error_message}")

Prompt Rendering

The prompt rendering service in Athon handles the creation and management of prompts, which are essential for interacting with LLMs. This service allows for rendering prompts from both string templates and files, as well as saving customized prompts back to the file system. The service is built using the Factory Design Pattern, allowing flexibility and ease of use.

Example: Prompt Rendering with JinjaTemplate

Here’s an example of how you can use the JinjaTemplatePromptRender class to render a prompt from a string template, load a prompt from a file, and save a prompt to a file.

from athon.chat import PromptRender

# Example configuration for the Prompt Render
PROMPT_CONFIG = {
    'type': 'JinjaTemplate',
    'environment': 'path/to/templates',
    'templates': {
        'welcome': 'welcome_template.txt',
        'goodbye': 'goodbye_template.txt'
    }
}

# Initialize the Prompt Render with the provided configuration
prompt_render = PromptRender.create(PROMPT_CONFIG)

# Render a prompt from a string template
template_string = "Hello, {{ name }}! Welcome to our service."
render_result = prompt_render.render(template_string, name="John Doe")

if render_result.status == "success":
    print(f"RENDERED CONTENT:\n{render_result.content}")
else:
    print(f"ERROR:\n{render_result.error_message}")

# Load a prompt from a file
load_result = prompt_render.load('welcome', name="John Doe")

if load_result.status == "success":
    print(f"LOADED CONTENT:\n{load_result.content}")
else:
    print(f"ERROR:\n{load_result.error_message}")

# Save a prompt to a file
save_result = prompt_render.save('goodbye', "Goodbye, John Doe! See you next time.")

if save_result.status == "success":
    print("Prompt content saved successfully.")
else:
    print(f"ERROR:\n{save_result.error_message}")

Message Management

The message management service in LLM Agentic Tool Mesh allows for the serialization and deserialization of messages, which is crucial for converting between different data formats and ensuring compatibility with various components of the chat system. The service supports converting dictionaries of strings into message objects and vice versa, using the Factory Design Pattern for flexibility and ease of use.

Example: Message Management with LangChainPrompts

Here’s an example of how you can use the LangChainPromptsMessageManager class to serialize and deserialize messages.

import json

from athon.chat import MessageManager

# Example configuration for the Message Manager
MESSAGE_CONFIG = {
    'type': 'LangChainPrompts',
    'json_convert': True,
    'memory_key': 'chat_history'
}

# Initialize the Message Manager with the provided configuration
message_manager = MessageManager.create(MESSAGE_CONFIG)

# Example dictionary of prompts to be converted into message objects
prompts_dict = {
    'chat_history': json.dumps([
        {'type': 'SystemMessage', 'content': 'You are a helpful assistant.'},
        {'type': 'HumanMessage', 'content': 'What is the weather like today?'},
        {'type': 'AIMessage', 'content': 'The weather is sunny and warm.'}
    ])
}

# Convert the dictionary to message objects
convert_result = message_manager.convert_to_messages(prompts_dict)

if convert_result.status == "success":
    print(f"CONVERTED MESSAGES:\n{convert_result.prompts}")
else:
    print(f"ERROR:\n{convert_result.error_message}")

# Now convert the message objects back to a dictionary of strings
if convert_result.status == "success":
    messages = convert_result.prompts['chat_history']
    serialize_result = message_manager.convert_to_strings(messages)

    if serialize_result.status == "success":
        print(f"SERIALIZED MESSAGES:\n{serialize_result.prompts}")
    else:
        print(f"ERROR:\n{serialize_result.error_message}")