diff --git a/python/instrumentation/openinference-instrumentation-smolagents/CHANGELOG.md b/python/instrumentation/openinference-instrumentation-smolagents/CHANGELOG.md new file mode 100644 index 000000000..72254c74e --- /dev/null +++ b/python/instrumentation/openinference-instrumentation-smolagents/CHANGELOG.md @@ -0,0 +1,7 @@ +# Changelog + +## 0.1.0 (2025-01-10) + +### Features + +* **smolagents:** instrumentation ([#1182](https://github.com/Arize-ai/openinference/issues/1182)) ([#1184](https://github.com/Arize-ai/openinference/pull/1184)) diff --git a/python/instrumentation/openinference-instrumentation-smolagents/LICENSE b/python/instrumentation/openinference-instrumentation-smolagents/LICENSE new file mode 100644 index 000000000..191f9d346 --- /dev/null +++ b/python/instrumentation/openinference-instrumentation-smolagents/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+
+END OF TERMS AND CONDITIONS
+
+APPENDIX: How to apply the Apache License to your work.
+
+   To apply the Apache License to your work, attach the following
+   boilerplate notice, with the fields enclosed by brackets "[]"
+   replaced with your own identifying information. (Don't include
+   the brackets!) The text should be enclosed in the appropriate
+   comment syntax for the file format. We also recommend that a
+   file or class name and description of purpose be included on the
+   same "printed page" as the copyright notice for easier
+   identification within third-party archives.
+
+Copyright The OpenTelemetry Authors
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
\ No newline at end of file
diff --git a/python/instrumentation/openinference-instrumentation-smolagents/README.md b/python/instrumentation/openinference-instrumentation-smolagents/README.md
new file mode 100644
index 000000000..8f0971584
--- /dev/null
+++ b/python/instrumentation/openinference-instrumentation-smolagents/README.md
@@ -0,0 +1,57 @@
+# OpenInference smolagents Instrumentation
+
+[![pypi](https://badge.fury.io/py/openinference-instrumentation-smolagents.svg)](https://pypi.org/project/openinference-instrumentation-smolagents/)
+
+Python auto-instrumentation library for LLM agents implemented with smolagents.
+
+Traces are fully OpenTelemetry-compatible and can be sent to an OpenTelemetry collector for monitoring, such as [`arize-phoenix`](https://github.com/Arize-ai/phoenix).
+
+## Installation
+
+```shell
+pip install openinference-instrumentation-smolagents
+```
+
+## Quickstart
+
+This quickstart shows you how to instrument your smolagents application.
+
+Install the required packages.
+
+```shell
+pip install smolagents arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp
+```
+
+Start Phoenix in the background as a collector. By default, it listens on `http://localhost:6006`. You can visit the app via a browser at the same address. (Phoenix does not send data over the internet. It only operates locally on your machine.)
+
+```shell
+python -m phoenix.server.main serve
+```
+
+Set up `SmolagentsInstrumentor` to trace your agent and send the traces to Phoenix at the endpoint defined below.
+
+```python
+from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import SimpleSpanProcessor
+
+from openinference.instrumentation.smolagents import SmolagentsInstrumentor
+
+endpoint = "http://0.0.0.0:6006/v1/traces"
+trace_provider = TracerProvider()
+trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
+
+SmolagentsInstrumentor().instrument(tracer_provider=trace_provider)
+
+from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel
+
+agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=HfApiModel())
+
+agent.run("How many seconds would it take for a leopard at full speed to run through Pont des Arts?")
+```
+
+## More Info
+
+* [More info on OpenInference and Phoenix](https://docs.arize.com/phoenix)
+* [How to customize spans to track sessions, metadata, etc.](https://github.com/Arize-ai/openinference/tree/main/python/openinference-instrumentation#customizing-spans)
+* [How to account for private information and span payload customization](https://github.com/Arize-ai/openinference/tree/main/python/openinference-instrumentation#tracing-configuration)
\ No newline at end of file
diff --git a/python/instrumentation/openinference-instrumentation-smolagents/examples/e2b_example.py b/python/instrumentation/openinference-instrumentation-smolagents/examples/e2b_example.py
new file mode 100644
index 000000000..acf62e32c
--- /dev/null
+++ b/python/instrumentation/openinference-instrumentation-smolagents/examples/e2b_example.py
@@ -0,0 +1,39 @@
+from io import BytesIO
+
+import requests
+from PIL import Image
+from smolagents import CodeAgent, GradioUI, HfApiModel, Tool
+from smolagents.default_tools import VisitWebpageTool
+
+
+class GetCatImageTool(Tool):
+    name = "get_cat_image"
+    description = "Get a cat image"
+    inputs = {}
+    output_type = "image"
+
+    def __init__(self):
+        super().__init__()
+        self.url = "https://em-content.zobj.net/source/twitter/53/robot-face_1f916.png"
+
+    def forward(self):
+        response = requests.get(self.url)
+
+        return Image.open(BytesIO(response.content))
+
+
+get_cat_image = GetCatImageTool()
+
+agent = CodeAgent(
+    tools=[get_cat_image, VisitWebpageTool()],
+    model=HfApiModel(),
+    additional_authorized_imports=["Pillow", "requests", "markdownify"],  # "duckduckgo-search",
+    use_e2b_executor=True,
+)
+
+agent.run(
+    "Return me an image of a cat. 
Directly use the image provided in your state.", + additional_args={"cat_image": get_cat_image()}, +) + +GradioUI(agent).launch() diff --git a/python/instrumentation/openinference-instrumentation-smolagents/examples/managed_agent.py b/python/instrumentation/openinference-instrumentation-smolagents/examples/managed_agent.py new file mode 100644 index 000000000..cddd749f8 --- /dev/null +++ b/python/instrumentation/openinference-instrumentation-smolagents/examples/managed_agent.py @@ -0,0 +1,50 @@ +import os + +from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter +from opentelemetry.sdk.trace import TracerProvider +from opentelemetry.sdk.trace.export import SimpleSpanProcessor +from smolagents import ( + CodeAgent, + DuckDuckGoSearchTool, + ManagedAgent, + OpenAIServerModel, + ToolCallingAgent, +) + +from openinference.instrumentation.smolagents import SmolagentsInstrumentor + +endpoint = "http://0.0.0.0:6006/v1/traces" +trace_provider = TracerProvider() +trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint))) + +SmolagentsInstrumentor().instrument(tracer_provider=trace_provider, skip_dep_check=True) + + +model = OpenAIServerModel( + model_id="gpt-4o", + api_base="https://api.openai.com/v1", + api_key=os.environ["OPENAI_API_KEY"], +) +agent = ToolCallingAgent( + tools=[DuckDuckGoSearchTool()], + model=model, + max_steps=3, +) +managed_agent = ManagedAgent( + agent=agent, + name="managed_agent", + description=( + "This is an agent that can do web search. " + "When solving a task, ask him directly first, he gives good answers. " + "Then you can double check." + ), +) +manager_agent = CodeAgent( + tools=[DuckDuckGoSearchTool()], + model=model, + managed_agents=[managed_agent], +) +manager_agent.run( + "How many seconds would it take for a leopard at full speed to run through Pont des Arts? 
" + "ASK YOUR MANAGED AGENT FOR LEOPARD SPEED FIRST" +) diff --git a/python/instrumentation/openinference-instrumentation-smolagents/examples/openai_model.py b/python/instrumentation/openinference-instrumentation-smolagents/examples/openai_model.py new file mode 100644 index 000000000..eee9f669c --- /dev/null +++ b/python/instrumentation/openinference-instrumentation-smolagents/examples/openai_model.py @@ -0,0 +1,22 @@ +import os + +from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter +from opentelemetry.sdk.trace import TracerProvider +from opentelemetry.sdk.trace.export import ( + SimpleSpanProcessor, +) +from smolagents import OpenAIServerModel + +from openinference.instrumentation.smolagents import SmolagentsInstrumentor + +endpoint = "http://0.0.0.0:6006/v1/traces" +trace_provider = TracerProvider() +trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint))) + +SmolagentsInstrumentor().instrument(tracer_provider=trace_provider, skip_dep_check=True) + +model = OpenAIServerModel( + model_id="gpt-4o", api_key=os.environ["OPENAI_API_KEY"], api_base="https://api.openai.com/v1" +) +output = model(messages=[{"role": "user", "content": "hello world"}]) +print(output) diff --git a/python/instrumentation/openinference-instrumentation-smolagents/examples/openai_model_tool_call.py b/python/instrumentation/openinference-instrumentation-smolagents/examples/openai_model_tool_call.py new file mode 100644 index 000000000..954fdad1b --- /dev/null +++ b/python/instrumentation/openinference-instrumentation-smolagents/examples/openai_model_tool_call.py @@ -0,0 +1,42 @@ +import os + +from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter +from opentelemetry.sdk.trace import TracerProvider +from opentelemetry.sdk.trace.export import ( + SimpleSpanProcessor, +) +from smolagents import OpenAIServerModel +from smolagents.tools import Tool + +from openinference.instrumentation.smolagents import SmolagentsInstrumentor + +endpoint = "http://0.0.0.0:6006/v1/traces" +trace_provider = TracerProvider() +trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint))) + +SmolagentsInstrumentor().instrument(tracer_provider=trace_provider, skip_dep_check=True) + + +class GetWeatherTool(Tool): + name = "get_weather" + description = "Get the weather for a given city" + inputs = {"location": {"type": "string", "description": "The city to get the weather for"}} + output_type = "string" + + def forward(self, location: str) -> str: + return "sunny" + + +model = OpenAIServerModel( + model_id="gpt-4o", api_key=os.environ["OPENAI_API_KEY"], api_base="https://api.openai.com/v1" +) +output_message = model( + messages=[ + { + "role": "user", + "content": "What is the weather in Paris?", + } + ], + tools_to_call_from=[GetWeatherTool()], +) +print(output_message) diff --git a/python/instrumentation/openinference-instrumentation-smolagents/examples/rag.py b/python/instrumentation/openinference-instrumentation-smolagents/examples/rag.py new file mode 100644 index 000000000..006f29771 --- /dev/null +++ b/python/instrumentation/openinference-instrumentation-smolagents/examples/rag.py @@ -0,0 +1,95 @@ +import os + +import datasets +from langchain.docstore.document import Document +from langchain.text_splitter import RecursiveCharacterTextSplitter +from langchain_community.retrievers import BM25Retriever +from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter +from opentelemetry.sdk.trace import TracerProvider 
+from opentelemetry.sdk.trace.export import ( + SimpleSpanProcessor, +) +from smolagents import CodeAgent, OpenAIServerModel, Tool + +from openinference.instrumentation.smolagents import SmolagentsInstrumentor + +endpoint = "http://0.0.0.0:6006/v1/traces" +trace_provider = TracerProvider() +trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint))) + +SmolagentsInstrumentor().instrument(tracer_provider=trace_provider) + +knowledge_base = datasets.load_dataset("m-ric/huggingface_doc", split="train") +knowledge_base = knowledge_base.filter( + lambda row: row["source"].startswith("huggingface/transformers") +) + +source_docs = [ + Document(page_content=doc["text"], metadata={"source": doc["source"].split("/")[1]}) + for doc in knowledge_base +] + +text_splitter = RecursiveCharacterTextSplitter( + chunk_size=500, + chunk_overlap=50, + add_start_index=True, + strip_whitespace=True, + separators=["\n\n", "\n", ".", " ", ""], +) +docs_processed = text_splitter.split_documents(source_docs) + + +class RetrieverTool(Tool): + name = "retriever" + description = ( + "Uses semantic search to retrieve the parts of transformers documentation " + "that could be most relevant to answer your query." + ) + inputs = { + "query": { + "type": "string", + "description": ( + "The query to perform. " + "This should be semantically close to your target documents. " + "Use the affirmative form rather than a question." + ), + } + } + output_type = "string" + + def __init__(self, docs, **kwargs): + super().__init__(**kwargs) + self.retriever = BM25Retriever.from_documents(docs, k=10) + + def forward(self, query: str) -> str: + assert isinstance(query, str), "Your search query must be a string" + + docs = self.retriever.invoke( + query, + ) + return "\nRetrieved documents:\n" + "".join( + [ + f"\n\n===== Document {str(i)} =====\n" + doc.page_content + for i, doc in enumerate(docs) + ] + ) + + +retriever_tool = RetrieverTool(docs_processed) +agent = CodeAgent( + tools=[retriever_tool], + model=OpenAIServerModel( + "gpt-4o", + api_base="https://api.openai.com/v1", + api_key=os.environ["OPENAI_API_KEY"], + ), + max_steps=4, + verbose=True, +) + +agent_output = agent.run( + "For a transformers model training, which is slower, the forward or the backward pass?" 
+) + +print("Final output:") +print(agent_output) diff --git a/python/instrumentation/openinference-instrumentation-smolagents/examples/requirements.txt b/python/instrumentation/openinference-instrumentation-smolagents/examples/requirements.txt new file mode 100644 index 000000000..bd948433b --- /dev/null +++ b/python/instrumentation/openinference-instrumentation-smolagents/examples/requirements.txt @@ -0,0 +1,9 @@ +datasets +langchain +langchain-community +opentelemetry-exporter-otlp +opentelemetry-exporter-otlp-proto-http +opentelemetry-sdk +rank_bm25 +requests +sqlalchemy diff --git a/python/instrumentation/openinference-instrumentation-smolagents/examples/text2sql.py b/python/instrumentation/openinference-instrumentation-smolagents/examples/text2sql.py new file mode 100644 index 000000000..e6a02ff3d --- /dev/null +++ b/python/instrumentation/openinference-instrumentation-smolagents/examples/text2sql.py @@ -0,0 +1,99 @@ +import os + +from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter +from opentelemetry.sdk.trace import TracerProvider +from opentelemetry.sdk.trace.export import ( + SimpleSpanProcessor, +) +from smolagents import ( + CodeAgent, + OpenAIServerModel, + tool, +) +from sqlalchemy import ( + Column, + Float, + Integer, + MetaData, + String, + Table, + create_engine, + insert, + inspect, + text, +) + +from openinference.instrumentation.smolagents import SmolagentsInstrumentor + +endpoint = "http://0.0.0.0:6006/v1/traces" +trace_provider = TracerProvider() +trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint))) + +SmolagentsInstrumentor().instrument(tracer_provider=trace_provider) + +engine = create_engine("sqlite:///:memory:") +metadata_obj = MetaData() + +# create city SQL table +table_name = "receipts" +receipts = Table( + table_name, + metadata_obj, + Column("receipt_id", Integer, primary_key=True), + Column("customer_name", String(16), primary_key=True), + Column("price", Float), + Column("tip", Float), +) +metadata_obj.create_all(engine) + +rows = [ + {"receipt_id": 1, "customer_name": "Alan Payne", "price": 12.06, "tip": 1.20}, + {"receipt_id": 2, "customer_name": "Alex Mason", "price": 23.86, "tip": 0.24}, + {"receipt_id": 3, "customer_name": "Woodrow Wilson", "price": 53.43, "tip": 5.43}, + {"receipt_id": 4, "customer_name": "Margaret James", "price": 21.11, "tip": 1.00}, +] +for row in rows: + stmt = insert(receipts).values(**row) + with engine.begin() as connection: + cursor = connection.execute(stmt) + +inspector = inspect(engine) +columns_info = [(col["name"], col["type"]) for col in inspector.get_columns("receipts")] + +table_description = "Columns:\n" + "\n".join( + [f" - {name}: {col_type}" for name, col_type in columns_info] +) +print(table_description) + + +@tool +def sql_engine(query: str) -> str: + """ + Allows you to perform SQL queries on the table. Returns a string representation of the result. + The table is named 'receipts'. Its description is as follows: + Columns: + - receipt_id: INTEGER + - customer_name: VARCHAR(16) + - price: FLOAT + - tip: FLOAT + + Args: + query: The query to perform. This should be correct SQL. 
+ """ + output = "" + with engine.connect() as con: + rows = con.execute(text(query)) + for row in rows: + output += "\n" + str(row) + return output + + +agent = CodeAgent( + tools=[sql_engine], + model=OpenAIServerModel( + "gpt-4o-mini", + api_base="https://api.openai.com/v1", + api_key=os.environ["OPENAI_API_KEY"], + ), +) +agent.run("Can you give me the name of the client who got the most expensive receipt?") diff --git a/python/instrumentation/openinference-instrumentation-smolagents/examples/tool_calling_agent.py b/python/instrumentation/openinference-instrumentation-smolagents/examples/tool_calling_agent.py new file mode 100644 index 000000000..156b5ec5d --- /dev/null +++ b/python/instrumentation/openinference-instrumentation-smolagents/examples/tool_calling_agent.py @@ -0,0 +1,45 @@ +from typing import Optional + +from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter +from opentelemetry.sdk.trace import TracerProvider +from opentelemetry.sdk.trace.export import ( + SimpleSpanProcessor, +) +from smolagents import ( + LiteLLMModel, + tool, +) +from smolagents.agents import ToolCallingAgent + +from openinference.instrumentation.smolagents import SmolagentsInstrumentor + +endpoint = "http://0.0.0.0:6006/v1/traces" +trace_provider = TracerProvider() +trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint))) + +SmolagentsInstrumentor().instrument(tracer_provider=trace_provider, skip_dep_check=True) + +# Choose which LLM engine to use! +# model = HfApiModel(model_id="meta-llama/Llama-3.3-70B-Instruct") +# model = TransformersModel(model_id="meta-llama/Llama-3.2-2B-Instruct") + +# For anthropic: change model_id below to 'anthropic/claude-3-5-sonnet-20240620' +model = LiteLLMModel(model_id="gpt-4o") + + +@tool +def get_weather(location: str, celsius: Optional[bool] = False) -> str: + """ + Get weather in the next days at given location. + Secretly this tool does not care about the location, it hates the weather everywhere. 
+ + Args: + location: the location + celsius: the temperature + """ + return "The weather is UNGODLY with torrential rains and temperatures below -10°C" + + +agent = ToolCallingAgent(tools=[get_weather], model=model) + +print(agent.run("What's the weather like in Paris?")) diff --git a/python/instrumentation/openinference-instrumentation-smolagents/examples/tool_invocation.py b/python/instrumentation/openinference-instrumentation-smolagents/examples/tool_invocation.py new file mode 100644 index 000000000..9b2e40c46 --- /dev/null +++ b/python/instrumentation/openinference-instrumentation-smolagents/examples/tool_invocation.py @@ -0,0 +1,29 @@ +from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter +from opentelemetry.sdk.trace import TracerProvider +from opentelemetry.sdk.trace.export import ( + SimpleSpanProcessor, +) +from smolagents.tools import Tool + +from openinference.instrumentation.smolagents import SmolagentsInstrumentor + +endpoint = "http://0.0.0.0:6006/v1/traces" +trace_provider = TracerProvider() +trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint))) + +SmolagentsInstrumentor().instrument(tracer_provider=trace_provider, skip_dep_check=True) + + +class GetWeatherTool(Tool): + name = "get_weather" + description = "Get the weather for a given city" + inputs = {"location": {"type": "string", "description": "The city to get the weather for"}} + output_type = "string" + + def forward(self, location: str) -> str: + return "sunny" + + +get_weather_tool = GetWeatherTool() +assert get_weather_tool("Paris") == "sunny" +assert get_weather_tool(location="Paris") == "sunny" diff --git a/python/instrumentation/openinference-instrumentation-smolagents/pyproject.toml b/python/instrumentation/openinference-instrumentation-smolagents/pyproject.toml new file mode 100644 index 000000000..3c1b2c2c1 --- /dev/null +++ b/python/instrumentation/openinference-instrumentation-smolagents/pyproject.toml @@ -0,0 +1,92 @@ +[build-system] +requires = ["hatchling"] +build-backend = "hatchling.build" + +[project] +name = "openinference-instrumentation-smolagents" +dynamic = ["version"] +description = "OpenInference smolagents Instrumentation" +readme = "README.md" +license = "Apache-2.0" +requires-python = ">=3.10, <3.13" +authors = [ + { name = "OpenInference Authors", email = "oss@arize.com" }, +] +classifiers = [ + "Development Status :: 5 - Production/Stable", + "Intended Audience :: Developers", + "License :: OSI Approved :: Apache Software License", + "Programming Language :: Python", + "Programming Language :: Python :: 3", + "Programming Language :: Python :: 3.10", + "Programming Language :: Python :: 3.11", + "Programming Language :: Python :: 3.12", +] +dependencies = [ + "opentelemetry-api", + "opentelemetry-instrumentation", + "opentelemetry-semantic-conventions", + "openinference-instrumentation>=0.1.20", + "openinference-semantic-conventions", + "wrapt", + "typing-extensions", +] + +[project.optional-dependencies] +instruments = [ + "smolagents>=1.2.2", +] +test = [ + "smolagents>=1.2.2", + "opentelemetry-sdk", + "pytest-recording", +] + +[project.urls] +Homepage = "https://github.com/Arize-ai/openinference/tree/main/python/instrumentation/openinference-instrumentation-smolagents" + +[tool.hatch.version] +path = "src/openinference/instrumentation/smolagents/version.py" + +[tool.hatch.build.targets.sdist] +include = [ + "/src", +] + +[tool.hatch.build.targets.wheel] +packages = ["src/openinference"] + +[tool.pytest.ini_options] +asyncio_mode 
= "auto" +testpaths = [ + "tests", +] + +[tool.mypy] +strict = true +explicit_package_bases = true +exclude = [ + "examples", + "dist", + "sdist", +] + +[[tool.mypy.overrides]] +ignore_missing_imports = true +module = [ + "smolagents", + "wrapt", +] + +[tool.ruff] +line-length = 100 +target-version = "py310" + +[tool.ruff.lint.per-file-ignores] +"*.ipynb" = ["E402", "E501"] + +[tool.ruff.lint] +select = ["E", "F", "W", "I"] + +[tool.ruff.lint.isort] +force-single-line = false diff --git a/python/instrumentation/openinference-instrumentation-smolagents/src/openinference/instrumentation/smolagents/__init__.py b/python/instrumentation/openinference-instrumentation-smolagents/src/openinference/instrumentation/smolagents/__init__.py new file mode 100644 index 000000000..73bf333ce --- /dev/null +++ b/python/instrumentation/openinference-instrumentation-smolagents/src/openinference/instrumentation/smolagents/__init__.py @@ -0,0 +1,107 @@ +from typing import Any, Callable, Collection, Optional + +from opentelemetry import trace as trace_api +from opentelemetry.instrumentation.instrumentor import BaseInstrumentor # type: ignore +from wrapt import wrap_function_wrapper + +from openinference.instrumentation import ( + OITracer, + TraceConfig, +) +from openinference.instrumentation.smolagents._wrappers import ( + _ModelWrapper, + _RunWrapper, + _StepWrapper, + _ToolCallWrapper, +) +from openinference.instrumentation.smolagents.version import __version__ + +_instruments = ("smolagents >= 1.2.2",) + + +class SmolagentsInstrumentor(BaseInstrumentor): # type: ignore + __slots__ = ( + "_original_run_method", + "_original_step_methods", + "_original_tool_call_method", + "_original_model_call_methods", + "_tracer", + ) + + def instrumentation_dependencies(self) -> Collection[str]: + return _instruments + + def _instrument(self, **kwargs: Any) -> None: + from smolagents import CodeAgent, Model, MultiStepAgent, Tool, ToolCallingAgent + + if not (tracer_provider := kwargs.get("tracer_provider")): + tracer_provider = trace_api.get_tracer_provider() + if not (config := kwargs.get("config")): + config = TraceConfig() + else: + assert isinstance(config, TraceConfig) + self._tracer = OITracer( + trace_api.get_tracer(__name__, __version__, tracer_provider), + config=config, + ) + + run_wrapper = _RunWrapper(tracer=self._tracer) + self._original_run_method = getattr(MultiStepAgent, "run", None) + wrap_function_wrapper( + module="smolagents", + name="MultiStepAgent.run", + wrapper=run_wrapper, + ) + + self._original_step_methods: Optional[dict[type, Optional[Callable[..., Any]]]] = {} + step_wrapper = _StepWrapper(tracer=self._tracer) + for step_cls in [CodeAgent, ToolCallingAgent]: + self._original_step_methods[step_cls] = getattr(step_cls, "step", None) + wrap_function_wrapper( + module="smolagents", + name=f"{step_cls.__name__}.step", + wrapper=step_wrapper, + ) + + model_subclasses = Model.__subclasses__() + self._original_model_call_methods: Optional[dict[type, Callable[..., Any]]] = {} + for model_subclass in model_subclasses: + model_subclass_wrapper = _ModelWrapper(tracer=self._tracer) + self._original_model_call_methods[model_subclass] = getattr(model_subclass, "__call__") + wrap_function_wrapper( + module="smolagents", + name=model_subclass.__name__ + ".__call__", + wrapper=model_subclass_wrapper, + ) + + tool_call_wrapper = _ToolCallWrapper(tracer=self._tracer) + self._original_tool_call_method = getattr(Tool, "__call__", None) + wrap_function_wrapper( + module="smolagents", + name="Tool.__call__", + 
wrapper=tool_call_wrapper, + ) + + def _uninstrument(self, **kwargs: Any) -> None: + from smolagents import MultiStepAgent, Tool + + if self._original_run_method is not None: + MultiStepAgent.run = self._original_run_method + self._original_run_method = None + + if self._original_step_methods is not None: + for step_cls, original_step_method in self._original_step_methods.items(): + setattr(step_cls, "step", original_step_method) + self._original_step_methods = None + + if self._original_model_call_methods is not None: + for ( + model_subclass, + original_model_call_method, + ) in self._original_model_call_methods.items(): + setattr(model_subclass, "__call__", original_model_call_method) + self._original_model_call_methods = None + + if self._original_tool_call_method is not None: + Tool.__call__ = self._original_tool_call_method + self._original_tool_call_method = None diff --git a/python/instrumentation/openinference-instrumentation-smolagents/src/openinference/instrumentation/smolagents/_wrappers.py b/python/instrumentation/openinference-instrumentation-smolagents/src/openinference/instrumentation/smolagents/_wrappers.py new file mode 100644 index 000000000..6eacb56ba --- /dev/null +++ b/python/instrumentation/openinference-instrumentation-smolagents/src/openinference/instrumentation/smolagents/_wrappers.py @@ -0,0 +1,399 @@ +from enum import Enum +from inspect import signature +from typing import TYPE_CHECKING, Any, Callable, Dict, Iterator, Mapping, Optional, Tuple + +from opentelemetry import context as context_api +from opentelemetry import trace as trace_api +from opentelemetry.util.types import AttributeValue + +from openinference.instrumentation import get_attributes_from_context, safe_json_dumps +from openinference.semconv.trace import ( + MessageAttributes, + OpenInferenceMimeTypeValues, + OpenInferenceSpanKindValues, + SpanAttributes, + ToolAttributes, + ToolCallAttributes, +) + +if TYPE_CHECKING: + from smolagents.tools import Tool # type: ignore[import-untyped] + + +def _flatten(mapping: Optional[Mapping[str, Any]]) -> Iterator[Tuple[str, AttributeValue]]: + if not mapping: + return + for key, value in mapping.items(): + if value is None: + continue + if isinstance(value, Mapping): + for sub_key, sub_value in _flatten(value): + yield f"{key}.{sub_key}", sub_value + elif isinstance(value, list) and any(isinstance(item, Mapping) for item in value): + for index, sub_mapping in enumerate(value): + for sub_key, sub_value in _flatten(sub_mapping): + yield f"{key}.{index}.{sub_key}", sub_value + else: + if isinstance(value, Enum): + value = value.value + yield key, value + + +def _get_input_value(method: Callable[..., Any], *args: Any, **kwargs: Any) -> str: + arguments = _bind_arguments(method, *args, **kwargs) + arguments = _strip_method_args(arguments) + return safe_json_dumps(arguments) + + +def _bind_arguments(method: Callable[..., Any], *args: Any, **kwargs: Any) -> Dict[str, Any]: + method_signature = signature(method) + bound_args = method_signature.bind(*args, **kwargs) + bound_args.apply_defaults() + return bound_args.arguments + + +def _strip_method_args(arguments: Mapping[str, Any]) -> dict[str, Any]: + return {key: value for key, value in arguments.items() if key not in ("self", "cls")} + + +def _smolagent_run_attributes( + agent: Any, arguments: dict[str, Any] +) -> Iterator[Tuple[str, AttributeValue]]: + if task := agent.task: + yield "smolagents.task", task + if additional_args := arguments.get("additional_args"): + yield "smolagents.additional_args", 
safe_json_dumps(additional_args) + yield "smolagents.max_steps", agent.max_steps + yield "smolagents.tools_names", list(agent.tools.keys()) + for managed_agent_index, managed_agent in enumerate(agent.managed_agents.values()): + yield f"smolagents.managed_agents.{managed_agent_index}.name", managed_agent.name + yield ( + f"smolagents.managed_agents.{managed_agent_index}.description", + managed_agent.description, + ) + if managed_agent.additional_prompting: + yield ( + f"smolagents.managed_agents.{managed_agent_index}.additional_prompting", + managed_agent.additional_prompting, + ) + yield ( + f"smolagents.managed_agents.{managed_agent_index}.max_steps", + managed_agent.agent.max_steps, + ) + yield ( + f"smolagents.managed_agents.{managed_agent_index}.tools_names", + list(managed_agent.agent.tools.keys()), + ) + + +class _RunWrapper: + def __init__(self, tracer: trace_api.Tracer) -> None: + self._tracer = tracer + + def __call__( + self, + wrapped: Callable[..., Any], + instance: Any, + args: Tuple[Any, ...], + kwargs: Mapping[str, Any], + ) -> Any: + if context_api.get_value(context_api._SUPPRESS_INSTRUMENTATION_KEY): + return wrapped(*args, **kwargs) + span_name = f"{instance.__class__.__name__}.run" + agent = instance + arguments = _bind_arguments(wrapped, *args, **kwargs) + with self._tracer.start_as_current_span( + span_name, + attributes=dict( + _flatten( + { + OPENINFERENCE_SPAN_KIND: AGENT, + INPUT_VALUE: _get_input_value( + wrapped, + *args, + **kwargs, + ), + **dict(_smolagent_run_attributes(agent, arguments)), + **dict(get_attributes_from_context()), + } + ) + ), + ) as span: + agent_output = wrapped(*args, **kwargs) + span.set_attribute(LLM_TOKEN_COUNT_PROMPT, agent.monitor.total_input_token_count) + span.set_attribute(LLM_TOKEN_COUNT_COMPLETION, agent.monitor.total_output_token_count) + span.set_attribute( + LLM_TOKEN_COUNT_TOTAL, + agent.monitor.total_input_token_count + agent.monitor.total_output_token_count, + ) + span.set_status(trace_api.StatusCode.OK) + span.set_attribute(OUTPUT_VALUE, str(agent_output)) + return agent_output + + +class _StepWrapper: + def __init__(self, tracer: trace_api.Tracer) -> None: + self._tracer = tracer + + def __call__( + self, + wrapped: Callable[..., Any], + instance: Any, + args: Tuple[Any, ...], + kwargs: Mapping[str, Any], + ) -> Any: + if context_api.get_value(context_api._SUPPRESS_INSTRUMENTATION_KEY): + return wrapped(*args, **kwargs) + agent = instance + span_name = f"Step {agent.step_number}" + with self._tracer.start_as_current_span( + span_name, + attributes={ + OPENINFERENCE_SPAN_KIND: CHAIN, + INPUT_VALUE: _get_input_value(wrapped, *args, **kwargs), + **dict(get_attributes_from_context()), + }, + ) as span: + result = wrapped(*args, **kwargs) + step_log = args[0] # ActionStep + span.set_attribute(OUTPUT_VALUE, step_log.observations) + if step_log.error is not None: + span.record_exception(step_log.error) + span.set_status(trace_api.StatusCode.OK) + return result + + +def _llm_input_messages(arguments: Mapping[str, Any]) -> Iterator[Tuple[str, Any]]: + if isinstance(prompt := arguments.get("prompt"), str): + yield f"{LLM_INPUT_MESSAGES}.0.{MESSAGE_ROLE}", "user" + yield f"{LLM_INPUT_MESSAGES}.0.{MESSAGE_CONTENT}", prompt + elif isinstance(messages := arguments.get("messages"), list): + for i, message in enumerate(messages): + if not isinstance(message, dict): + continue + if (role := message.get("role", None)) is not None: + yield ( + f"{LLM_INPUT_MESSAGES}.{i}.{MESSAGE_ROLE}", + role, + ) + if (content := message.get("content", 
None)) is not None: + yield ( + f"{LLM_INPUT_MESSAGES}.{i}.{MESSAGE_CONTENT}", + content, + ) + + +def _llm_output_messages(output_message: Any) -> Iterator[Tuple[str, Any]]: + if (role := getattr(output_message, "role", None)) is not None: + yield ( + f"{LLM_OUTPUT_MESSAGES}.0.{MESSAGE_ROLE}", + role, + ) + if (content := getattr(output_message, "content", None)) is not None: + yield ( + f"{LLM_OUTPUT_MESSAGES}.0.{MESSAGE_CONTENT}", + content, + ) + if isinstance(tool_calls := getattr(output_message, "tool_calls", None), list): + for tool_call_index, tool_call in enumerate(tool_calls): + if (tool_call_id := getattr(tool_call, "id", None)) is not None: + yield ( + f"{LLM_OUTPUT_MESSAGES}.0.{MESSAGE_TOOL_CALLS}.{tool_call_index}.{TOOL_CALL_ID}", + tool_call_id, + ) + if (function := getattr(tool_call, "function", None)) is not None: + if (name := getattr(function, "name", None)) is not None: + yield ( + f"{LLM_OUTPUT_MESSAGES}.0.{MESSAGE_TOOL_CALLS}.{tool_call_index}.{TOOL_CALL_FUNCTION_NAME}", + name, + ) + if isinstance(arguments := getattr(function, "arguments", None), str): + yield ( + f"{LLM_OUTPUT_MESSAGES}.0.{MESSAGE_TOOL_CALLS}.{tool_call_index}.{TOOL_CALL_FUNCTION_ARGUMENTS_JSON}", + arguments, + ) + + +def _output_value_and_mime_type(output: Any) -> Iterator[Tuple[str, Any]]: + yield OUTPUT_MIME_TYPE, JSON + yield OUTPUT_VALUE, output.model_dump_json() + + +def _llm_invocation_parameters( + model: Any, arguments: Mapping[str, Any] +) -> Iterator[Tuple[str, Any]]: + model_kwargs = _ if isinstance(_ := getattr(model, "kwargs", {}), dict) else {} + kwargs = _ if isinstance(_ := arguments.get("kwargs"), dict) else {} + yield LLM_INVOCATION_PARAMETERS, safe_json_dumps(model_kwargs | kwargs) + + +def _llm_tools(tools_to_call_from: list[Any]) -> Iterator[Tuple[str, Any]]: + from smolagents import Tool + from smolagents.models import get_json_schema # type:ignore[import-untyped] + + if not isinstance(tools_to_call_from, list): + return + for tool_index, tool in enumerate(tools_to_call_from): + if isinstance(tool, Tool): + yield ( + f"{LLM_TOOLS}.{tool_index}.{TOOL_JSON_SCHEMA}", + safe_json_dumps(get_json_schema(tool)), + ) + + +def _tools(tool: "Tool") -> Iterator[Tuple[str, Any]]: + if tool_name := getattr(tool, "name", None): + yield TOOL_NAME, tool_name + if tool_description := getattr(tool, "description", None): + yield TOOL_DESCRIPTION, tool_description + yield TOOL_PARAMETERS, safe_json_dumps(tool.inputs) + + +def _input_value_and_mime_type(arguments: Mapping[str, Any]) -> Iterator[Tuple[str, Any]]: + yield INPUT_MIME_TYPE, JSON + yield INPUT_VALUE, safe_json_dumps(arguments) + + +class _ModelWrapper: + def __init__(self, tracer: trace_api.Tracer) -> None: + self._tracer = tracer + + def __call__( + self, + wrapped: Callable[..., Any], + instance: Any, + args: Tuple[Any, ...], + kwargs: Mapping[str, Any], + ) -> Any: + if context_api.get_value(context_api._SUPPRESS_INSTRUMENTATION_KEY): + return wrapped(*args, **kwargs) + arguments = _bind_arguments(wrapped, *args, **kwargs) + span_name = f"{instance.__class__.__name__}.__call__" + model = instance + with self._tracer.start_as_current_span( + span_name, + attributes={ + OPENINFERENCE_SPAN_KIND: LLM, + **dict(_input_value_and_mime_type(arguments)), + **dict(_llm_invocation_parameters(instance, arguments)), + **dict(_llm_input_messages(arguments)), + **dict(get_attributes_from_context()), + }, + ) as span: + output_message = wrapped(*args, **kwargs) + span.set_status(trace_api.StatusCode.OK) + 
span.set_attribute(LLM_TOKEN_COUNT_PROMPT, model.last_input_token_count) + span.set_attribute(LLM_TOKEN_COUNT_COMPLETION, model.last_output_token_count) + span.set_attribute(LLM_MODEL_NAME, model.model_id) + span.set_attribute( + LLM_TOKEN_COUNT_TOTAL, model.last_input_token_count + model.last_output_token_count + ) + span.set_attribute(OUTPUT_VALUE, output_message) + span.set_attributes(dict(_llm_output_messages(output_message))) + span.set_attributes(dict(_llm_tools(arguments.get("tools_to_call_from", [])))) + span.set_attributes(dict(_output_value_and_mime_type(output_message))) + return output_message + + +class _ToolCallWrapper: + def __init__(self, tracer: trace_api.Tracer) -> None: + self._tracer = tracer + + def __call__( + self, + wrapped: Callable[..., Any], + instance: Any, + args: Tuple[Any, ...], + kwargs: Mapping[str, Any], + ) -> Any: + if context_api.get_value(context_api._SUPPRESS_INSTRUMENTATION_KEY): + return wrapped(*args, **kwargs) + span_name = f"{instance.__class__.__name__}" + with self._tracer.start_as_current_span( + span_name, + attributes={ + OPENINFERENCE_SPAN_KIND: TOOL, + INPUT_VALUE: _get_input_value( + wrapped, + *args, + **kwargs, + ), + **dict(_tools(instance)), + **dict(get_attributes_from_context()), + }, + ) as span: + response = wrapped(*args, **kwargs) + span.set_status(trace_api.StatusCode.OK) + span.set_attributes( + dict( + _output_value_and_mime_type_for_tool_span( + response=response, + output_type=instance.output_type, + ) + ) + ) + return response + + +def _output_value_and_mime_type_for_tool_span( + response: Any, output_type: str +) -> Iterator[Tuple[str, Any]]: + if output_type in ( + "string", + "boolean", + "integer", + "number", + ): + yield OUTPUT_VALUE, response + yield OUTPUT_MIME_TYPE, TEXT + elif output_type == "object": + yield OUTPUT_VALUE, safe_json_dumps(response) + yield OUTPUT_MIME_TYPE, JSON + + # TODO: handle other types + + +# span attributes +INPUT_MIME_TYPE = SpanAttributes.INPUT_MIME_TYPE +INPUT_VALUE = SpanAttributes.INPUT_VALUE +LLM_INPUT_MESSAGES = SpanAttributes.LLM_INPUT_MESSAGES +LLM_INVOCATION_PARAMETERS = SpanAttributes.LLM_INVOCATION_PARAMETERS +LLM_MODEL_NAME = SpanAttributes.LLM_MODEL_NAME +LLM_OUTPUT_MESSAGES = SpanAttributes.LLM_OUTPUT_MESSAGES +LLM_PROMPTS = SpanAttributes.LLM_PROMPTS +LLM_TOKEN_COUNT_COMPLETION = SpanAttributes.LLM_TOKEN_COUNT_COMPLETION +LLM_TOKEN_COUNT_PROMPT = SpanAttributes.LLM_TOKEN_COUNT_PROMPT +LLM_TOKEN_COUNT_TOTAL = SpanAttributes.LLM_TOKEN_COUNT_TOTAL +LLM_TOOLS = SpanAttributes.LLM_TOOLS +OPENINFERENCE_SPAN_KIND = SpanAttributes.OPENINFERENCE_SPAN_KIND +OUTPUT_MIME_TYPE = SpanAttributes.OUTPUT_MIME_TYPE +OUTPUT_VALUE = SpanAttributes.OUTPUT_VALUE +TOOL_DESCRIPTION = SpanAttributes.TOOL_DESCRIPTION +TOOL_NAME = SpanAttributes.TOOL_NAME +TOOL_PARAMETERS = SpanAttributes.TOOL_PARAMETERS + +# message attributes +MESSAGE_CONTENT = MessageAttributes.MESSAGE_CONTENT +MESSAGE_FUNCTION_CALL_ARGUMENTS_JSON = MessageAttributes.MESSAGE_FUNCTION_CALL_ARGUMENTS_JSON +MESSAGE_FUNCTION_CALL_NAME = MessageAttributes.MESSAGE_FUNCTION_CALL_NAME +MESSAGE_NAME = MessageAttributes.MESSAGE_NAME +MESSAGE_ROLE = MessageAttributes.MESSAGE_ROLE +MESSAGE_TOOL_CALLS = MessageAttributes.MESSAGE_TOOL_CALLS + +# mime types +JSON = OpenInferenceMimeTypeValues.JSON.value +TEXT = OpenInferenceMimeTypeValues.TEXT.value + +# span kinds +AGENT = OpenInferenceSpanKindValues.AGENT.value +CHAIN = OpenInferenceSpanKindValues.CHAIN.value +LLM = OpenInferenceSpanKindValues.LLM.value +TOOL = 
OpenInferenceSpanKindValues.TOOL.value + +# tool attributes +TOOL_JSON_SCHEMA = ToolAttributes.TOOL_JSON_SCHEMA + +# tool call attributes +TOOL_CALL_FUNCTION_ARGUMENTS_JSON = ToolCallAttributes.TOOL_CALL_FUNCTION_ARGUMENTS_JSON +TOOL_CALL_FUNCTION_NAME = ToolCallAttributes.TOOL_CALL_FUNCTION_NAME +TOOL_CALL_ID = ToolCallAttributes.TOOL_CALL_ID diff --git a/python/instrumentation/openinference-instrumentation-smolagents/src/openinference/instrumentation/smolagents/py.typed b/python/instrumentation/openinference-instrumentation-smolagents/src/openinference/instrumentation/smolagents/py.typed new file mode 100644 index 000000000..e69de29bb diff --git a/python/instrumentation/openinference-instrumentation-smolagents/src/openinference/instrumentation/smolagents/version.py b/python/instrumentation/openinference-instrumentation-smolagents/src/openinference/instrumentation/smolagents/version.py new file mode 100644 index 000000000..3dc1f76bc --- /dev/null +++ b/python/instrumentation/openinference-instrumentation-smolagents/src/openinference/instrumentation/smolagents/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/python/instrumentation/openinference-instrumentation-smolagents/tests/openinference/instrumentation/smolagents/cassettes/test_instrumentor/TestModels.test_openai_server_model_has_expected_attributes.yaml b/python/instrumentation/openinference-instrumentation-smolagents/tests/openinference/instrumentation/smolagents/cassettes/test_instrumentor/TestModels.test_openai_server_model_has_expected_attributes.yaml new file mode 100644 index 000000000..503596857 --- /dev/null +++ b/python/instrumentation/openinference-instrumentation-smolagents/tests/openinference/instrumentation/smolagents/cassettes/test_instrumentor/TestModels.test_openai_server_model_has_expected_attributes.yaml @@ -0,0 +1,26 @@ +interactions: +- request: + body: '{"messages": [{"role": "user", "content": "Who won the World Cup in 2018? 
+ Answer in one word with no punctuation."}], "model": "gpt-4o", "max_tokens": + 1500, "stop": null, "temperature": 0.7}' + headers: {} + method: POST + uri: https://api.openai.com/v1/chat/completions + response: + body: + string: "{\n \"id\": \"chatcmpl-Ap7ufxJG59lObqU3ZjqsDq0ryGo1M\",\n \"object\": + \"chat.completion\",\n \"created\": 1736748509,\n \"model\": \"gpt-4o-2024-08-06\",\n + \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": + \"assistant\",\n \"content\": \"France\",\n \"refusal\": null\n + \ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n + \ ],\n \"usage\": {\n \"prompt_tokens\": 25,\n \"completion_tokens\": + 2,\n \"total_tokens\": 27,\n \"prompt_tokens_details\": {\n \"cached_tokens\": + 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": + {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": + 0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\": + \"default\",\n \"system_fingerprint\": \"fp_703d4ff298\"\n}\n" + headers: {} + status: + code: 200 + message: OK +version: 1 diff --git a/python/instrumentation/openinference-instrumentation-smolagents/tests/openinference/instrumentation/smolagents/cassettes/test_instrumentor/TestModels.test_openai_server_model_with_tool_has_expected_attributes.yaml b/python/instrumentation/openinference-instrumentation-smolagents/tests/openinference/instrumentation/smolagents/cassettes/test_instrumentor/TestModels.test_openai_server_model_with_tool_has_expected_attributes.yaml new file mode 100644 index 000000000..4636a5895 --- /dev/null +++ b/python/instrumentation/openinference-instrumentation-smolagents/tests/openinference/instrumentation/smolagents/cassettes/test_instrumentor/TestModels.test_openai_server_model_with_tool_has_expected_attributes.yaml @@ -0,0 +1,33 @@ +interactions: +- request: + body: '{"messages": [{"role": "user", "content": "What is the weather in Paris?"}], + "model": "gpt-4o", "max_tokens": 1500, "stop": null, "temperature": 0.7, "tool_choice": + "auto", "tools": [{"type": "function", "function": {"name": "get_weather", "description": + "Get the weather for a given city", "parameters": {"type": "object", "properties": + {"location": {"type": "string", "description": "The city to get the weather + for"}}, "required": ["location"]}}}]}' + headers: {} + method: POST + uri: https://api.openai.com/v1/chat/completions + response: + body: + string: "{\n \"id\": \"chatcmpl-Ap7uf7aUH5nQ3tOeCHnA7WP70vV23\",\n \"object\": + \"chat.completion\",\n \"created\": 1736748509,\n \"model\": \"gpt-4o-2024-08-06\",\n + \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": + \"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n + \ \"id\": \"call_SJU40osDa7rxyCVRXc9wG2Vs\",\n \"type\": + \"function\",\n \"function\": {\n \"name\": \"get_weather\",\n + \ \"arguments\": \"{\\\"location\\\":\\\"Paris\\\"}\"\n }\n + \ }\n ],\n \"refusal\": null\n },\n \"logprobs\": + null,\n \"finish_reason\": \"tool_calls\"\n }\n ],\n \"usage\": + {\n \"prompt_tokens\": 61,\n \"completion_tokens\": 15,\n \"total_tokens\": + 76,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\": + 0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\": + 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n + \ \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\": + \"default\",\n \"system_fingerprint\": \"fp_703d4ff298\"\n}\n" + headers: {} + status: + code: 200 + message: OK +version: 1 diff --git 
a/python/instrumentation/openinference-instrumentation-smolagents/tests/openinference/instrumentation/smolagents/test_instrumentor.py b/python/instrumentation/openinference-instrumentation-smolagents/tests/openinference/instrumentation/smolagents/test_instrumentor.py
new file mode 100644
index 000000000..4e992b472
--- /dev/null
+++ b/python/instrumentation/openinference-instrumentation-smolagents/tests/openinference/instrumentation/smolagents/test_instrumentor.py
@@ -0,0 +1,534 @@
+import json
+import os
+from typing import Any, Generator, Optional
+
+import pytest
+from openai.types.chat.chat_completion_message_tool_call import ChatCompletionMessageToolCall
+from opentelemetry import trace as trace_api
+from opentelemetry.sdk import trace as trace_sdk
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.trace.export import SimpleSpanProcessor
+from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter
+from smolagents import OpenAIServerModel, Tool
+from smolagents.agents import (  # type: ignore[import-untyped]
+    CodeAgent,
+    ManagedAgent,
+    ToolCallingAgent,
+)
+
+from openinference.instrumentation.smolagents import SmolagentsInstrumentor
+from openinference.semconv.trace import (
+    MessageAttributes,
+    OpenInferenceMimeTypeValues,
+    OpenInferenceSpanKindValues,
+    SpanAttributes,
+    ToolAttributes,
+    ToolCallAttributes,
+)
+
+
+def remove_all_vcr_request_headers(request: Any) -> Any:
+    """
+    Removes all request headers.
+
+    Example:
+    ```
+    @pytest.mark.vcr(
+        before_record_request=remove_all_vcr_request_headers
+    )
+    def test_openai() -> None:
+        # make request to OpenAI
+    ```
+    """
+    request.headers.clear()
+    return request
+
+
+def remove_all_vcr_response_headers(response: dict[str, Any]) -> dict[str, Any]:
+    """
+    Removes all response headers.
+
+    Example:
+    ```
+    @pytest.mark.vcr(
+        before_record_response=remove_all_vcr_response_headers
+    )
+    def test_openai() -> None:
+        ...  # make request to OpenAI
+    ```
+    """
+    response["headers"] = {}
+    return response
+
+
+@pytest.fixture
+def in_memory_span_exporter() -> InMemorySpanExporter:
+    return InMemorySpanExporter()
+
+
+@pytest.fixture
+def tracer_provider(in_memory_span_exporter: InMemorySpanExporter) -> trace_api.TracerProvider:
+    resource = Resource(attributes={})
+    tracer_provider = trace_sdk.TracerProvider(resource=resource)
+    span_processor = SimpleSpanProcessor(span_exporter=in_memory_span_exporter)
+    tracer_provider.add_span_processor(span_processor=span_processor)
+    return tracer_provider
+
+
+@pytest.fixture(autouse=True)
+def instrument(
+    tracer_provider: trace_api.TracerProvider,
+    in_memory_span_exporter: InMemorySpanExporter,
+) -> Generator[None, None, None]:
+    SmolagentsInstrumentor().instrument(tracer_provider=tracer_provider, skip_dep_check=True)
+    yield
+    SmolagentsInstrumentor().uninstrument()
+    in_memory_span_exporter.clear()
+
+
+@pytest.fixture
+def openai_api_key(monkeypatch: pytest.MonkeyPatch) -> str:
+    api_key = "sk-0123456789"
+    monkeypatch.setenv("OPENAI_API_KEY", api_key)
+    return api_key
+
+
+class TestModels:
+    @pytest.mark.vcr(
+        decode_compressed_response=True,
+        before_record_request=remove_all_vcr_request_headers,
+        before_record_response=remove_all_vcr_response_headers,
+    )
+    def test_openai_server_model_has_expected_attributes(
+        self,
+        openai_api_key: str,
+        in_memory_span_exporter: InMemorySpanExporter,
+    ) -> None:
+        model = OpenAIServerModel(
+            model_id="gpt-4o",
+            api_key=os.environ["OPENAI_API_KEY"],
+            api_base="https://api.openai.com/v1",
+        )
+        input_message_content = (
+            "Who won the World Cup in 2018? Answer in one word with no punctuation."
+ ) + output_message = model( + messages=[ + { + "role": "user", + "content": input_message_content, + } + ] + ) + output_message_content = output_message.content + assert output_message_content == "France" + + spans = in_memory_span_exporter.get_finished_spans() + assert len(spans) == 1 + span = spans[0] + assert span.name == "OpenAIServerModel.__call__" + assert span.status.is_ok + attributes = dict(span.attributes or {}) + assert attributes.pop(OPENINFERENCE_SPAN_KIND) == LLM + assert attributes.pop(INPUT_MIME_TYPE) == JSON + assert isinstance(input_value := attributes.pop(INPUT_VALUE), str) + input_data = json.loads(input_value) + assert "messages" in input_data + assert attributes.pop(OUTPUT_MIME_TYPE) == JSON + assert isinstance(output_value := attributes.pop(OUTPUT_VALUE), str) + assert isinstance(json.loads(output_value), dict) + assert attributes.pop(LLM_MODEL_NAME) == "gpt-4o" + assert isinstance(inv_params := attributes.pop(LLM_INVOCATION_PARAMETERS), str) + assert json.loads(inv_params) == {} + assert attributes.pop(f"{LLM_INPUT_MESSAGES}.0.{MESSAGE_ROLE}") == "user" + assert attributes.pop(f"{LLM_INPUT_MESSAGES}.0.{MESSAGE_CONTENT}") == input_message_content + assert isinstance(attributes.pop(LLM_TOKEN_COUNT_PROMPT), int) + assert isinstance(attributes.pop(LLM_TOKEN_COUNT_COMPLETION), int) + assert isinstance(attributes.pop(LLM_TOKEN_COUNT_TOTAL), int) + assert attributes.pop(f"{LLM_OUTPUT_MESSAGES}.0.{MESSAGE_ROLE}") == "assistant" + assert ( + attributes.pop(f"{LLM_OUTPUT_MESSAGES}.0.{MESSAGE_CONTENT}") == output_message_content + ) + assert not attributes + + @pytest.mark.vcr( + decode_compressed_response=True, + before_record_request=remove_all_vcr_request_headers, + before_record_response=remove_all_vcr_response_headers, + ) + def test_openai_server_model_with_tool_has_expected_attributes( + self, + openai_api_key: str, + in_memory_span_exporter: InMemorySpanExporter, + tracer_provider: trace_api.TracerProvider, + ) -> None: + model = OpenAIServerModel( + model_id="gpt-4o", + api_key=os.environ["OPENAI_API_KEY"], + api_base="https://api.openai.com/v1", + ) + input_message_content = "What is the weather in Paris?" 
+ + class GetWeatherTool(Tool): # type: ignore[misc] + name = "get_weather" + description = "Get the weather for a given city" + inputs = { + "location": {"type": "string", "description": "The city to get the weather for"} + } + output_type = "string" + + def forward(self, location: str) -> str: + return "sunny" + + output_message = model( + messages=[ + { + "role": "user", + "content": input_message_content, + } + ], + tools_to_call_from=[GetWeatherTool()], + ) + output_message_content = output_message.content + assert output_message_content is None + tool_calls = output_message.tool_calls + assert len(tool_calls) == 1 + assert isinstance(tool_call := tool_calls[0], ChatCompletionMessageToolCall) + assert tool_call.function.name == "get_weather" + assert isinstance(tool_call_arguments := tool_call.function.arguments, str) + assert json.loads(tool_call_arguments) == {"location": "Paris"} + + spans = in_memory_span_exporter.get_finished_spans() + assert len(spans) == 1 + span = spans[0] + assert span.name == "OpenAIServerModel.__call__" + assert span.status.is_ok + attributes = dict(span.attributes or {}) + assert attributes.pop(OPENINFERENCE_SPAN_KIND) == LLM + assert attributes.pop(INPUT_MIME_TYPE) == JSON + assert isinstance(input_value := attributes.pop(INPUT_VALUE), str) + input_data = json.loads(input_value) + assert "messages" in input_data + assert attributes.pop(OUTPUT_MIME_TYPE) == JSON + assert isinstance(output_value := attributes.pop(OUTPUT_VALUE), str) + assert isinstance(json.loads(output_value), dict) + assert attributes.pop(LLM_MODEL_NAME) == "gpt-4o" + assert isinstance(inv_params := attributes.pop(LLM_INVOCATION_PARAMETERS), str) + assert json.loads(inv_params) == {} + assert attributes.pop(f"{LLM_INPUT_MESSAGES}.0.{MESSAGE_ROLE}") == "user" + assert attributes.pop(f"{LLM_INPUT_MESSAGES}.0.{MESSAGE_CONTENT}") == input_message_content + assert isinstance( + tool_json_schema := attributes.pop(f"{LLM_TOOLS}.0.{TOOL_JSON_SCHEMA}"), str + ) + assert json.loads(tool_json_schema) == { + "type": "function", + "function": { + "name": "get_weather", + "description": "Get the weather for a given city", + "parameters": { + "type": "object", + "properties": { + "location": { + "type": "string", + "description": "The city to get the weather for", + }, + }, + "required": ["location"], + }, + }, + } + assert isinstance(attributes.pop(LLM_TOKEN_COUNT_PROMPT), int) + assert isinstance(attributes.pop(LLM_TOKEN_COUNT_COMPLETION), int) + assert isinstance(attributes.pop(LLM_TOKEN_COUNT_TOTAL), int) + assert attributes.pop(f"{LLM_OUTPUT_MESSAGES}.0.{MESSAGE_ROLE}") == "assistant" + assert ( + attributes.pop(f"{LLM_OUTPUT_MESSAGES}.0.{MESSAGE_TOOL_CALLS}.0.{TOOL_CALL_ID}") + == tool_call.id + ) + assert ( + attributes.pop( + f"{LLM_OUTPUT_MESSAGES}.0.{MESSAGE_TOOL_CALLS}.0.{TOOL_CALL_FUNCTION_NAME}" + ) + == "get_weather" + ) + assert isinstance( + tool_call_arguments_json := attributes.pop( + f"{LLM_OUTPUT_MESSAGES}.0.{MESSAGE_TOOL_CALLS}.0.{TOOL_CALL_FUNCTION_ARGUMENTS_JSON}" + ), + str, + ) + assert json.loads(tool_call_arguments_json) == {"location": "Paris"} + assert not attributes + + +class TestRun: + @pytest.mark.xfail + def test_multiagents(self) -> None: + from smolagents.models import ( # type: ignore[import-untyped] + ChatMessage, + ChatMessageToolCall, + ChatMessageToolCallDefinition, + ) + + class FakeModelMultiagentsManagerAgent: + def __call__( + self, + messages: list[dict[str, Any]], + stop_sequences: Optional[list[str]] = None, + grammar: Optional[Any] = None, + 
tools_to_call_from: Optional[list[Tool]] = None, + ) -> Any: + if tools_to_call_from is not None: + if len(messages) < 3: + return ChatMessage( + role="assistant", + content="", + tool_calls=[ + ChatMessageToolCall( + id="call_0", + type="function", + function=ChatMessageToolCallDefinition( + name="search_agent", + arguments="Who is the current US president?", + ), + ) + ], + ) + else: + assert "Report on the current US president" in str(messages) + return ChatMessage( + role="assistant", + content="", + tool_calls=[ + ChatMessageToolCall( + id="call_0", + type="function", + function=ChatMessageToolCallDefinition( + name="final_answer", arguments="Final report." + ), + ) + ], + ) + else: + if len(messages) < 3: + return ChatMessage( + role="assistant", + content=""" +Thought: Let's call our search agent. +Code: +```py +result = search_agent("Who is the current US president?") +``` +""", + ) + else: + assert "Report on the current US president" in str(messages) + return ChatMessage( + role="assistant", + content=""" +Thought: Let's return the report. +Code: +```py +final_answer("Final report.") +``` +""", + ) + + manager_model = FakeModelMultiagentsManagerAgent() + + class FakeModelMultiagentsManagedAgent: + def __call__( + self, + messages: list[dict[str, Any]], + tools_to_call_from: Optional[list[Tool]] = None, + stop_sequences: Optional[list[str]] = None, + grammar: Optional[Any] = None, + ) -> Any: + return ChatMessage( + role="assistant", + content="", + tool_calls=[ + ChatMessageToolCall( + id="call_0", + type="function", + function=ChatMessageToolCallDefinition( + name="final_answer", + arguments="Report on the current US president", + ), + ) + ], + ) + + managed_model = FakeModelMultiagentsManagedAgent() + + web_agent = ToolCallingAgent( + tools=[], + model=managed_model, + max_steps=10, + ) + + managed_web_agent = ManagedAgent( + agent=web_agent, + name="search_agent", + description=( + "Runs web searches for you. Give it your request as an argument. " + "Make the request as detailed as needed, you can ask for thorough reports" + ), + ) + + manager_code_agent = CodeAgent( + tools=[], + model=manager_model, + managed_agents=[managed_web_agent], + additional_authorized_imports=["time", "numpy", "pandas"], + ) + + report = manager_code_agent.run("Fake question.") + assert report == "Final report." + + manager_toolcalling_agent = ToolCallingAgent( + tools=[], + model=manager_model, + managed_agents=[managed_web_agent], + ) + + report = manager_toolcalling_agent.run("Fake question.") + assert report == "Final report." 
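+        # Both manager styles (CodeAgent and ToolCallingAgent) should surface
+        # the managed agent's report as the final answer.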
+ + +class TestTools: + def test_tool_invocation_returning_string_has_expected_attributes( + self, in_memory_span_exporter: InMemorySpanExporter + ) -> None: + class GetWeatherTool(Tool): # type: ignore[misc] + name = "get_weather" + description = "Get the weather for a given city" + inputs = { + "location": {"type": "string", "description": "The city to get the weather for"} + } + output_type = "string" + + def forward(self, location: str) -> str: + return "sunny" + + weather_tool = GetWeatherTool() + + result = weather_tool("Paris") + assert result == "sunny" + + spans = in_memory_span_exporter.get_finished_spans() + assert len(spans) == 1 + span = spans[0] + attributes = dict(span.attributes or {}) + + assert attributes.pop(OPENINFERENCE_SPAN_KIND) == TOOL + assert isinstance(input_value := attributes.pop(INPUT_VALUE), str) + assert json.loads(input_value) == { + "args": ["Paris"], + "sanitize_inputs_outputs": False, + "kwargs": {}, + } + assert attributes.pop(OUTPUT_VALUE) == "sunny" + assert attributes.pop(OUTPUT_MIME_TYPE) == TEXT + assert attributes.pop(TOOL_NAME) == "get_weather" + assert attributes.pop(TOOL_DESCRIPTION) == "Get the weather for a given city" + assert isinstance(tool_parameters := attributes.pop(TOOL_PARAMETERS), str) + assert json.loads(tool_parameters) == { + "location": { + "type": "string", + "description": "The city to get the weather for", + }, + } + assert not attributes + + def test_tool_invocation_returning_dict_has_expected_attributes( + self, in_memory_span_exporter: InMemorySpanExporter + ) -> None: + class GetWeatherTool(Tool): # type: ignore[misc] + name = "get_weather" + description = "Get detailed weather information for a given city" + inputs = { + "location": {"type": "string", "description": "The city to get the weather for"} + } + output_type = "object" + + def forward(self, location: str) -> dict[str, Any]: + return {"condition": "sunny", "temperature": 25, "humidity": 60} + + weather_tool = GetWeatherTool() + + result = weather_tool("Paris") + assert result == {"condition": "sunny", "temperature": 25, "humidity": 60} + + spans = in_memory_span_exporter.get_finished_spans() + assert len(spans) == 1 + span = spans[0] + attributes = dict(span.attributes or {}) + + assert attributes.pop(OPENINFERENCE_SPAN_KIND) == TOOL + assert isinstance(input_value := attributes.pop(INPUT_VALUE), str) + assert json.loads(input_value) == { + "args": ["Paris"], + "sanitize_inputs_outputs": False, + "kwargs": {}, + } + assert isinstance(output_value := attributes.pop(OUTPUT_VALUE), str) + assert json.loads(output_value) == {"condition": "sunny", "temperature": 25, "humidity": 60} + assert attributes.pop(OUTPUT_MIME_TYPE) == JSON + assert attributes.pop(TOOL_NAME) == "get_weather" + assert ( + attributes.pop(TOOL_DESCRIPTION) == "Get detailed weather information for a given city" + ) + assert isinstance(tool_parameters := attributes.pop(TOOL_PARAMETERS), str) + assert json.loads(tool_parameters) == { + "location": { + "type": "string", + "description": "The city to get the weather for", + }, + } + assert not attributes + + +# message attributes +MESSAGE_CONTENT = MessageAttributes.MESSAGE_CONTENT +MESSAGE_FUNCTION_CALL_ARGUMENTS_JSON = MessageAttributes.MESSAGE_FUNCTION_CALL_ARGUMENTS_JSON +MESSAGE_FUNCTION_CALL_NAME = MessageAttributes.MESSAGE_FUNCTION_CALL_NAME +MESSAGE_NAME = MessageAttributes.MESSAGE_NAME +MESSAGE_ROLE = MessageAttributes.MESSAGE_ROLE +MESSAGE_TOOL_CALLS = MessageAttributes.MESSAGE_TOOL_CALLS + +# mime types +JSON = 
OpenInferenceMimeTypeValues.JSON.value +TEXT = OpenInferenceMimeTypeValues.TEXT.value + +# span kinds +CHAIN = OpenInferenceSpanKindValues.CHAIN.value +LLM = OpenInferenceSpanKindValues.LLM.value +TOOL = OpenInferenceSpanKindValues.TOOL.value + +# span attributes +INPUT_MIME_TYPE = SpanAttributes.INPUT_MIME_TYPE +INPUT_VALUE = SpanAttributes.INPUT_VALUE +LLM_INPUT_MESSAGES = SpanAttributes.LLM_INPUT_MESSAGES +LLM_INVOCATION_PARAMETERS = SpanAttributes.LLM_INVOCATION_PARAMETERS +LLM_MODEL_NAME = SpanAttributes.LLM_MODEL_NAME +LLM_OUTPUT_MESSAGES = SpanAttributes.LLM_OUTPUT_MESSAGES +LLM_PROMPTS = SpanAttributes.LLM_PROMPTS +LLM_TOKEN_COUNT_COMPLETION = SpanAttributes.LLM_TOKEN_COUNT_COMPLETION +LLM_TOKEN_COUNT_PROMPT = SpanAttributes.LLM_TOKEN_COUNT_PROMPT +LLM_TOKEN_COUNT_TOTAL = SpanAttributes.LLM_TOKEN_COUNT_TOTAL +LLM_TOOLS = SpanAttributes.LLM_TOOLS +OPENINFERENCE_SPAN_KIND = SpanAttributes.OPENINFERENCE_SPAN_KIND +OUTPUT_MIME_TYPE = SpanAttributes.OUTPUT_MIME_TYPE +OUTPUT_VALUE = SpanAttributes.OUTPUT_VALUE +TOOL_DESCRIPTION = SpanAttributes.TOOL_DESCRIPTION +TOOL_NAME = SpanAttributes.TOOL_NAME +TOOL_PARAMETERS = SpanAttributes.TOOL_PARAMETERS + +# tool attributes +TOOL_JSON_SCHEMA = ToolAttributes.TOOL_JSON_SCHEMA + +# tool call attributes +TOOL_CALL_FUNCTION_ARGUMENTS_JSON = ToolCallAttributes.TOOL_CALL_FUNCTION_ARGUMENTS_JSON +TOOL_CALL_FUNCTION_NAME = ToolCallAttributes.TOOL_CALL_FUNCTION_NAME +TOOL_CALL_ID = ToolCallAttributes.TOOL_CALL_ID diff --git a/python/tox.ini b/python/tox.ini index 7db3507da..387e96b00 100644 --- a/python/tox.ini +++ b/python/tox.ini @@ -18,6 +18,7 @@ envlist = py3{8,12}-ci-{litellm,litellm-latest} ; py3{9,12}-ci-instructor py3{8,12}-ci-{anthropic,anthropic-latest} + py3{10,13}-ci-{smolagents,smolagents-latest} py38-mypy-langchain_core [testenv] @@ -43,6 +44,7 @@ changedir = litellm: instrumentation/openinference-instrumentation-litellm/ instructor: instrumentation/openinference-instrumentation-instructor/ anthropic: instrumentation/openinference-instrumentation-anthropic/ + smolagents: instrumentation/openinference-instrumentation-smolagents/ commands_pre = instrumentation: uv pip install --reinstall {toxinidir}/openinference-instrumentation[test] semconv: uv pip install --reinstall {toxinidir}/openinference-semantic-conventions @@ -84,6 +86,7 @@ commands_pre = anthropic: python -c 'import openinference.instrumentation.anthropic' anthropic: uv pip install -r test-requirements.txt anthropic-latest: uv pip install -U anthropic 'httpx<0.28' + smolagents: uv pip install --reinstall {toxinidir}/instrumentation/openinference-instrumentation-smolagents[test] commands = ruff: ruff format {posargs:.} ruff: ruff check --fix {posargs:.}