Commit 050524d: Address feedback

J2-D2-3PO committed Jan 22, 2025 (1 parent: bece3fa)
Showing 1 changed file with 12 additions and 9 deletions.

docs/docs/introduction.md (12 additions, 9 deletions)
```diff
@@ -6,15 +6,18 @@ slug: /
 
 Weights & Biases (W&B) Weave is a framework for tracking, experimenting with, evaluating, deploying, and improving LLM-based applications. Designed for flexibility and scalability, Weave supports every stage of your LLM application development workflow:
 
-- **Track:** Log and analyze LLM application inputs and outputs for debugging and further analysis.
-- **Experiment:** Test and compare models, prompts, inputs, and outputs.
-- **Evaluate:** Track costs, score and annotate model inputs and outputs, and systematically evaluate your application.
-- **Deploy:** Implement safety guardrails such as content moderation or access control, and monitor production systems.
-- **Iterate:** Systematically refine and enhance your application.
-
-You can interact programmatically with Weave via a [Python SDK](./reference/python-sdk/weave/index.md), [TypeScript SDK](./reference/typescript-sdk/weave/README.md), or the Service API.
-
-Weave [integrates](./guides/integrations/index.md) with numerous popular LLM providers, local models, frameworks, and third-party services.
+- **Tracing & Monitoring**: [Track your application from input to output](./guides/tracking/) to debug and analyze production systems.
+- **Systematic Iteration**: Refine and iterate on [prompts](./guides/core-types/prompts.md), [datasets](./guides/core-types/datasets.md), and [models](./guides/core-types/models.md).
+- **Experimentation**: Experiment with different models and prompts in the [LLM Playground](./guides/tools/playground.md).
+- **Evaluation**: Use [pre-built scorers](./guides/evaluation/scorers#predefined-scorers), [object comparison](./guides/tools/comparison.md), and LLM tools to systematically assess and enhance application performance.
+- **Guardrails**: Protect your application with [pre- and post-safeguards](./guides/evaluation/guardrails_and_monitors.md) for content moderation, prompt safety, and more.
+
+Integrate Weave with your existing development stack via the:
+- [Python SDK](./reference/python-sdk/weave/index.md)
+- [TypeScript SDK](./reference/typescript-sdk/weave/README.md)
+- [Service API](./reference/service-api/call-start-call-start-post)
+
+Weave supports [numerous LLM providers, local models, frameworks, and third-party services](./guides/integrations/index.md).
 
 ## Get started
 
```
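The "Tracing & Monitoring" feature described in the revised docs can be illustrated with a minimal, framework-free sketch: a decorator that records each call's inputs and outputs. This is the core idea behind Weave's tracing (Weave's real `@weave.op` decorator logs calls to a W&B project rather than an in-memory list); the `traced`, `trace_log`, and `summarize` names below are illustrative, not part of the Weave API.

```python
import functools

# In-memory stand-in for a tracing backend.
trace_log = []

def traced(fn):
    """Record each call's operation name, inputs, and output."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        trace_log.append({"op": fn.__name__, "inputs": args, "output": result})
        return result
    return wrapper

@traced
def summarize(text):
    # Stand-in for an LLM call: return the first sentence.
    return text.split(".")[0]

summarize("Tracing records inputs and outputs. A second sentence.")
```

After the call, `trace_log` holds one entry capturing the function name, its input string, and the returned first sentence, which is the input/output record a tracing UI would display.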
