Merge pull request #32 from docker/cm/readme-2
New readme
ColinMcNeil authored Nov 14, 2024
2 parents 70d0c60 + d0c16f1 commit 36b8bc4
Showing 2 changed files with 49 additions and 19 deletions.
README.md
**This README is an agentic workflow**

# AI Tools for Developers

Agentic AI workflows enabled by Docker containers.

Just Docker. Just Markdown. BYOLLM.

![overall architecture diagram preview](img1.png)

Source for many experiments in our [LinkedIn newsletter](https://www.linkedin.com/newsletters/docker-labs-genai-7204877599427194882/)

[**VSCode Extension**](https://github.com/docker/labs-ai-tools-vscode)

[**Docs**](https://vonwig.github.io/prompts.docs/)

# What is this?

This is a simple Docker image that enables novel agentic workflows by combining Dockerized tools, Markdown, and the LLM of your choice.

## Markdown is the language

Humans already speak it. So do LLMs. This software lets you write complex workflows in markdown files and run them with your own LLM in your editor, terminal, or any other environment, thanks to Docker.
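For instance, the tail of this README itself defines a workflow using `# prompt system` and `# prompt user` sections. A minimal workflow file might look like the following sketch (the filename and prompt text here are made up for illustration):

```shell
# Write a minimal workflow file; the format mirrors the prompt sections
# at the end of this README (filename and prompt text are hypothetical).
cat > my-workflow.md <<'EOF'
# prompt system

You are a careful assistant for this software project.

# prompt user

Summarize what this project does in one paragraph.
EOF

cat my-workflow.md
```

The engine then runs a file like this with your chosen LLM.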

## Dockerized Tools
![dockerized tools](img4.png)

OpenAI API-compatible LLMs already support tool calling. We believe these tools could just be Docker images. Based on our [research](https://www.linkedin.com/newsletters/docker-labs-genai-7204877599427194882/), using Docker enables the LLM to:
- take more complex actions
- get more context with fewer tokens
- work across a wider range of environments
- operate in a sandboxed environment

## Conversation *Loop*
The conversation loop is the core of each workflow. Tool results, agent responses, and, of course, the markdown prompts all pass through the loop. If an agent sees an error, it retries the tool with different parameters, or tries other tools, until it gets the right result.
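Conceptually, the retry behavior looks like the shell sketch below. This is illustrative only, not the engine's actual implementation; `run_tool` is a stand-in for a real tool call.

```shell
# Illustrative sketch of the loop's error handling: if a tool call fails,
# the agent retries (here, with an adjusted parameter) until it succeeds.
attempt=0
run_tool() {
  attempt=$((attempt + 1))
  # Hypothetical tool: fails on the first attempt, succeeds afterwards.
  [ "$attempt" -ge 2 ]
}
until run_tool; do
  echo "tool failed on attempt $attempt; retrying with new parameters"
done
echo "tool succeeded on attempt $attempt"
```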

## Multi-Model Agents
Each prompt can be configured to run with different LLM models, or even different model families. This lets you use the best tool for the job. When you combine these tools, you can create multi-agent workflows in which each agent runs with the model best suited to its task.

With Docker, it is possible to have frontier models plan, while lightweight local models execute.

## Project-First Design
To get help from an assistant in your software development loop, the only context necessary is the project you are working on.

Expand All @@ -23,27 +50,24 @@ An extractor is a Docker image that runs against a project and extracts informat
## Prompts as a trackable artifact
![prompts as a trackable artifact](img3.png)

Prompts are stored in a git repo and can be versioned, tracked, and shared for anyone to run in their own environment.
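A workflow run then points at prompts by their location in a repo. The locator below is the one used in the run command later in this README; the general shape `github:<org>/<repo>?ref=<branch>&path=<dir>` is our reading of that example, not a documented specification.

```shell
# Reference prompts stored in a GitHub repo. The general shape
# (inferred from the example below) is github:<org>/<repo>?ref=<branch>&path=<dir>.
PROMPTS_REF="github:docker/labs-githooks?ref=main&path=prompts/git_hooks"
echo "$PROMPTS_REF"
```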

# Get Started
We highly recommend using the VSCode extension to get started. It will help you create prompts and run them with your own LLM.

You can install it with one click using our Docker Desktop Extension:
https://hub.docker.com/extensions/docker/labs-vscode-installer

## Running your first loop
These instructions assume you have a terminal open and Docker Desktop running.

1. Set OpenAI key
```sh
echo $OPENAI_API_KEY > $HOME/.openai-api-key
```
Note: the engine assumes this file exists, so set a dummy value when using local models.
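For local models, a placeholder satisfies that requirement (per the note above, the file just has to exist; the value "dummy-key" here is arbitrary):

```shell
# The key file must exist even when no real OpenAI key is needed (local models);
# the placeholder value itself is arbitrary.
echo "dummy-key" > "$HOME/.openai-api-key"
cat "$HOME/.openai-api-key"
```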

2. Run the container in your project directory

```sh
docker run \
  ...
  --prompts "github:docker/labs-githooks?ref=main&path=prompts/git_hooks"
```

See [docs](https://vonwig.github.io/prompts.docs/#/page/running%20the%20prompt%20engine) for more details on how to run the conversation loop.

## Building

Expand All @@ -74,3 +95,12 @@ and especially how to use it to run local prompts that are not yet in GitHub.
docker build -t vonwig/prompts:local -f Dockerfile .
```

Now, for the agentic workflow...

# prompt system

You are an expert at software development, so of course you know how great Docker is. Just tell people how great Docker is.

# prompt user

Why is Docker great?
Binary file modified img4.png
