Additional chat results to include time spent waiting on LLMs, time spent in code execution, etc. #2556
DarinShapiroMS started this conversation in General
Replies: 1 comment
You can take a look at the runtime_logging module: https://microsoft.github.io/autogen/docs/notebooks/agentchat_logging
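Once logging is enabled, the SQLite database it produces can be queried to see how much wall-clock time went to LLM calls. Below is a minimal sketch: it assumes runtime_logging was started with a SQLite backend (e.g. `autogen.runtime_logging.start(config={"dbname": "logs.db"})`) and that the resulting database contains a `chat_completions` table with `session_id`, `start_time`, and `end_time` columns; the table/column names and the timestamp format are assumptions here, so check them against your own `logs.db` schema.

```python
import sqlite3
from datetime import datetime

# Assumed timestamp format in the log database; adjust if your
# logs.db stores timestamps differently.
TS_FORMAT = "%Y-%m-%d %H:%M:%S.%f"

def llm_time_per_session(db_path: str, session_id: str) -> float:
    """Sum the seconds spent waiting on LLM completions for one
    logging session, reading the (assumed) chat_completions table."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT start_time, end_time FROM chat_completions "
            "WHERE session_id = ?",
            (session_id,),
        ).fetchall()
    finally:
        conn.close()
    total = 0.0
    for start, end in rows:
        total += (
            datetime.strptime(end, TS_FORMAT)
            - datetime.strptime(start, TS_FORMAT)
        ).total_seconds()
    return total
```

Comparing this total against the overall run time of the group chat gives a rough split between LLM latency and everything else (code execution, Teachability lookups, orchestration overhead), and running it per session lets you compare the different LLMs you assign to each agent.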
I have a multi-agent setup with group chat, with each agent using a different LLM. When executing my plan, there seems to be a lot of latency between calls to the LLMs. It feels like a black box: I can't tell, for example, whether my AOAI endpoint is throttling, especially since Teachability adds a lot of chattiness. I have also been trying different LLMs for different agents to compare performance vs. quality. Is there a good way to see where time is being spent? If not, would it make sense to include this type of telemetry in the chat results (gated behind a certain logging verbosity, of course)?