From f5ad2b7af65791dee465a007b3665e79a7c477bb Mon Sep 17 00:00:00 2001
From: Thomas Vitale
Date: Fri, 26 Jan 2024 18:54:36 +0100
Subject: [PATCH] Update docs

---
 01-chat-models/chat-models-ollama/README.md   |  1 +
 01-chat-models/chat-models-openai/README.md   |  3 +-
 02-prompts/prompts-basics-ollama/README.md    | 16 ++++++-
 02-prompts/prompts-basics-openai/README.md    | 12 +++++
 02-prompts/prompts-messages-ollama/README.md  | 16 ++++++-
 02-prompts/prompts-messages-openai/README.md  | 12 +++++
 02-prompts/prompts-templates-ollama/README.md | 16 ++++++-
 02-prompts/prompts-templates-openai/README.md | 12 +++++
 .../output-parsers-ollama/README.md           | 16 ++++++-
 .../output-parsers-openai/README.md           | 12 +++++
 .../embedding-models-ollama/README.md         | 44 +++++++++++++++++++
 .../embedding-models-openai/README.md         | 42 ++++++++++++++++++
 README.md                                     | 40 ++++++++---------
 13 files changed, 217 insertions(+), 25 deletions(-)

diff --git a/01-chat-models/chat-models-ollama/README.md b/01-chat-models/chat-models-ollama/README.md
index b1bfe72..9387940 100644
--- a/01-chat-models/chat-models-ollama/README.md
+++ b/01-chat-models/chat-models-ollama/README.md
@@ -55,6 +55,7 @@ The application relies on the native Testcontainers support in Spring Boot to sp
 ## Calling the application
 
 You can now call the application that will use Ollama and llama2 to generate text based on a default prompt.
+This example uses [httpie](https://httpie.io) to send HTTP requests.
 
 ```shell
 http :8080/ai/chat
diff --git a/01-chat-models/chat-models-openai/README.md b/01-chat-models/chat-models-openai/README.md
index 1ef1327..5c4bd8a 100644
--- a/01-chat-models/chat-models-openai/README.md
+++ b/01-chat-models/chat-models-openai/README.md
@@ -46,7 +46,8 @@ Finally, run the Spring Boot application.
 
 ## Calling the application
 
-You can now call the application that will use Ollama and llama2 to generate text based on a default prompt.
+You can now call the application that will use OpenAI and _gpt-3.5-turbo_ to generate text based on a default prompt.
+This example uses [httpie](https://httpie.io) to send HTTP requests.
 
 ```shell
 http :8080/ai/chat
diff --git a/02-prompts/prompts-basics-ollama/README.md b/02-prompts/prompts-basics-ollama/README.md
index c3a640e..37546d8 100644
--- a/02-prompts/prompts-basics-ollama/README.md
+++ b/02-prompts/prompts-basics-ollama/README.md
@@ -1,25 +1,39 @@
 # Prompts Basic: Ollama
 
-## Running the application
+Prompting using simple text with LLMs via Ollama.
+
+## Running the application
+
+The application relies on Ollama for providing LLMs. You can either run Ollama locally on your laptop (macOS or Linux), or rely on the Testcontainers support in Spring Boot to spin up an Ollama service automatically.
 
 ### When using Ollama
 
+First, make sure you have [Ollama](https://ollama.ai) installed on your laptop (macOS or Linux).
+Then, use Ollama to run the _llama2_ large language model.
+
 ```shell
 ollama run llama2
 ```
 
+Finally, run the Spring Boot application.
+
 ```shell
 ./gradlew bootRun
 ```
 
 ### When using Docker/Podman
 
+The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama2_ model at startup time.
+
 ```shell
 ./gradlew bootTestRun
 ```
 
 ## Calling the application
 
+You can now call the application that will use Ollama and llama2 to generate an answer to your questions.
+This example uses [httpie](https://httpie.io) to send HTTP requests.
+
 ```shell
 http --raw "What is the capital of Italy?" :8080/ai/chat/simple
 ```
diff --git a/02-prompts/prompts-basics-openai/README.md b/02-prompts/prompts-basics-openai/README.md
index 6260431..f8a83d5 100644
--- a/02-prompts/prompts-basics-openai/README.md
+++ b/02-prompts/prompts-basics-openai/README.md
@@ -1,19 +1,31 @@
 # Prompts Basic: OpenAI
 
+Prompting using simple text with LLMs via OpenAI.
+
 ## Running the application
 
+The application relies on the OpenAI API for providing LLMs.
+
 ### When using OpenAI
 
+First, make sure you have an OpenAI account.
+Then, define an environment variable with the OpenAI API Key associated with your OpenAI account as the value.
+
 ```shell
 export SPRING_AI_OPENAI_API_KEY=
 ```
 
+Finally, run the Spring Boot application.
+
 ```shell
 ./gradlew bootRun
 ```
 
 ## Calling the application
 
+You can now call the application that will use OpenAI and _gpt-3.5-turbo_ to generate an answer to your questions.
+This example uses [httpie](https://httpie.io) to send HTTP requests.
+
 ```shell
 http --raw "What is the capital of Italy?" :8080/ai/chat/simple
 ```
diff --git a/02-prompts/prompts-messages-ollama/README.md b/02-prompts/prompts-messages-ollama/README.md
index 1fea76d..2bb3029 100644
--- a/02-prompts/prompts-messages-ollama/README.md
+++ b/02-prompts/prompts-messages-ollama/README.md
@@ -1,25 +1,39 @@
 # Prompts Messages: Ollama
 
-## Running the application
+Prompting using structured messages and roles with LLMs via Ollama.
+
+## Running the application
+
+The application relies on Ollama for providing LLMs. You can either run Ollama locally on your laptop (macOS or Linux), or rely on the Testcontainers support in Spring Boot to spin up an Ollama service automatically.
 
 ### When using Ollama
 
+First, make sure you have [Ollama](https://ollama.ai) installed on your laptop (macOS or Linux).
+Then, use Ollama to run the _llama2_ large language model.
+
 ```shell
 ollama run llama2
 ```
 
+Finally, run the Spring Boot application.
+
 ```shell
 ./gradlew bootRun
 ```
 
 ### When using Docker/Podman
 
+The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama2_ model at startup time.
+
 ```shell
 ./gradlew bootTestRun
 ```
 
 ## Calling the application
 
+You can now call the application that will use Ollama and llama2 to generate an answer to your questions.
+This example uses [httpie](https://httpie.io) to send HTTP requests.
+
 ```shell
 http --raw "What is the capital of Italy?" :8080/ai/chat/single
 ```
diff --git a/02-prompts/prompts-messages-openai/README.md b/02-prompts/prompts-messages-openai/README.md
index 0c4f6c6..0d17353 100644
--- a/02-prompts/prompts-messages-openai/README.md
+++ b/02-prompts/prompts-messages-openai/README.md
@@ -1,19 +1,31 @@
 # Prompts Messages: OpenAI
 
+Prompting using structured messages and roles with LLMs via OpenAI.
+
 ## Running the application
 
+The application relies on the OpenAI API for providing LLMs.
+
 ### When using OpenAI
 
+First, make sure you have an OpenAI account.
+Then, define an environment variable with the OpenAI API Key associated with your OpenAI account as the value.
+
 ```shell
 export SPRING_AI_OPENAI_API_KEY=
 ```
 
+Finally, run the Spring Boot application.
+
 ```shell
 ./gradlew bootRun
 ```
 
 ## Calling the application
 
+You can now call the application that will use OpenAI and _gpt-3.5-turbo_ to generate an answer to your questions.
+This example uses [httpie](https://httpie.io) to send HTTP requests.
+
 ```shell
 http --raw "What is the capital of Italy?" :8080/ai/chat/single
 ```
diff --git a/02-prompts/prompts-templates-ollama/README.md b/02-prompts/prompts-templates-ollama/README.md
index 5d589ef..13b69e9 100644
--- a/02-prompts/prompts-templates-ollama/README.md
+++ b/02-prompts/prompts-templates-ollama/README.md
@@ -1,25 +1,39 @@
 # Prompts Templates: Ollama
 
-## Running the application
+Prompting using templates with LLMs via Ollama.
+
+## Running the application
+
+The application relies on Ollama for providing LLMs. You can either run Ollama locally on your laptop (macOS or Linux), or rely on the Testcontainers support in Spring Boot to spin up an Ollama service automatically.
 
 ### When using Ollama
 
+First, make sure you have [Ollama](https://ollama.ai) installed on your laptop (macOS or Linux).
+Then, use Ollama to run the _llama2_ large language model.
+
 ```shell
 ollama run llama2
 ```
 
+Finally, run the Spring Boot application.
+
 ```shell
 ./gradlew bootRun
 ```
 
 ### When using Docker/Podman
 
+The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama2_ model at startup time.
+
 ```shell
 ./gradlew bootTestRun
 ```
 
 ## Calling the application
 
+You can now call the application that will use Ollama and llama2 to generate an answer to your questions.
+This example uses [httpie](https://httpie.io) to send HTTP requests.
+
 ```shell
 http :8080/ai/chat/user genre="rock" instrument="piano"
 ```
diff --git a/02-prompts/prompts-templates-openai/README.md b/02-prompts/prompts-templates-openai/README.md
index 5b0a594..a7422e8 100644
--- a/02-prompts/prompts-templates-openai/README.md
+++ b/02-prompts/prompts-templates-openai/README.md
@@ -1,19 +1,31 @@
 # Prompts Templates: OpenAI
 
+Prompting using templates with LLMs via OpenAI.
+
 ## Running the application
 
+The application relies on the OpenAI API for providing LLMs.
+
 ### When using OpenAI
 
+First, make sure you have an OpenAI account.
+Then, define an environment variable with the OpenAI API Key associated with your OpenAI account as the value.
+
 ```shell
 export SPRING_AI_OPENAI_API_KEY=
 ```
 
+Finally, run the Spring Boot application.
+
 ```shell
 ./gradlew bootRun
 ```
 
 ## Calling the application
 
+You can now call the application that will use OpenAI and _gpt-3.5-turbo_ to generate an answer to your questions.
+This example uses [httpie](https://httpie.io) to send HTTP requests.
+
 ```shell
 http :8080/ai/chat/user genre="rock" instrument="piano"
 ```
diff --git a/03-output-parsers/output-parsers-ollama/README.md b/03-output-parsers/output-parsers-ollama/README.md
index 968efb0..2da2af5 100644
--- a/03-output-parsers/output-parsers-ollama/README.md
+++ b/03-output-parsers/output-parsers-ollama/README.md
@@ -1,25 +1,39 @@
 # Output Parsers: Ollama
 
-## Running the application
+Parsing the LLM output as structured objects (Beans, Map, List) via Ollama.
+
+## Running the application
+
+The application relies on Ollama for providing LLMs. You can either run Ollama locally on your laptop (macOS or Linux), or rely on the Testcontainers support in Spring Boot to spin up an Ollama service automatically.
 
 ### When using Ollama
 
+First, make sure you have [Ollama](https://ollama.ai) installed on your laptop (macOS or Linux).
+Then, use Ollama to run the _llama2_ large language model.
+
 ```shell
 ollama run llama2
 ```
 
+Finally, run the Spring Boot application.
+
 ```shell
 ./gradlew bootRun
 ```
 
 ### When using Docker/Podman
 
+The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama2_ model at startup time.
+
 ```shell
 ./gradlew bootTestRun
 ```
 
 ## Calling the application
 
+You can now call the application that will use Ollama and llama2 to generate an answer to your questions.
+This example uses [httpie](https://httpie.io) to send HTTP requests.
+
 ```shell
 http :8080/ai/chat/bean genre="rock" instrument="piano"
 ```
diff --git a/03-output-parsers/output-parsers-openai/README.md b/03-output-parsers/output-parsers-openai/README.md
index 542fdb7..2fbe6c0 100644
--- a/03-output-parsers/output-parsers-openai/README.md
+++ b/03-output-parsers/output-parsers-openai/README.md
@@ -1,19 +1,31 @@
 # Output Parsers: OpenAI
 
+Parsing the LLM output as structured objects (Beans, Map, List) via OpenAI.
+
 ## Running the application
 
+The application relies on the OpenAI API for providing LLMs.
+
 ### When using OpenAI
 
+First, make sure you have an OpenAI account.
+Then, define an environment variable with the OpenAI API Key associated with your OpenAI account as the value.
+
 ```shell
 export SPRING_AI_OPENAI_API_KEY=
 ```
 
+Finally, run the Spring Boot application.
+
 ```shell
 ./gradlew bootRun
 ```
 
 ## Calling the application
 
+You can now call the application that will use OpenAI and _gpt-3.5-turbo_ to generate an answer to your questions.
+This example uses [httpie](https://httpie.io) to send HTTP requests.
+
 ```shell
 http :8080/ai/chat/bean genre="rock" instrument="piano"
 ```
diff --git a/04-embedding-models/embedding-models-ollama/README.md b/04-embedding-models/embedding-models-ollama/README.md
index 02d9e09..279f997 100644
--- a/04-embedding-models/embedding-models-ollama/README.md
+++ b/04-embedding-models/embedding-models-ollama/README.md
@@ -1,25 +1,69 @@
 # Embedding Models: Ollama
 
+Vector transformation (embeddings) with LLMs via Ollama.
+
+## Description
+
+Spring AI provides an `EmbeddingClient` abstraction for integrating with LLMs via several providers, including Ollama.
+
+When using the _Spring AI Ollama Spring Boot Starter_, an `EmbeddingClient` object is autoconfigured for you to use Ollama.
+By default, the _llama2_ model is used.
+
+```java
+@RestController
+class EmbeddingController {
+    private final EmbeddingClient embeddingClient;
+
+    EmbeddingController(EmbeddingClient embeddingClient) {
+        this.embeddingClient = embeddingClient;
+    }
+
+    @GetMapping("/ai/embed")
+    String embed(@RequestParam(defaultValue = "And Gandalf yelled: 'You shall not pass!'") String message) {
+        var embeddings = embeddingClient.embed(message);
+        return "Size of the embedding vector: " + embeddings.size();
+    }
+}
+```
+
 ## Running the application
 
+The application relies on Ollama for providing LLMs. You can either run Ollama locally on your laptop (macOS or Linux), or rely on the Testcontainers support in Spring Boot to spin up an Ollama service automatically.
+
 ### When using Ollama
 
+First, make sure you have [Ollama](https://ollama.ai) installed on your laptop (macOS or Linux).
+Then, use Ollama to run the _llama2_ large language model.
+
 ```shell
 ollama run llama2
 ```
 
+Finally, run the Spring Boot application.
+
 ```shell
 ./gradlew bootRun
 ```
 
 ### When using Docker/Podman
 
+The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama2_ model at startup time.
+
 ```shell
 ./gradlew bootTestRun
 ```
 
 ## Calling the application
 
+You can now call the application that will use Ollama and llama2 to generate a vector representation (embeddings) of a default text.
+This example uses [httpie](https://httpie.io) to send HTTP requests.
+
 ```shell
 http :8080/ai/embed
 ```
+
+Try passing a custom message and check the result.
+
+```shell
+http :8080/ai/embed message=="The capital of Italy is Rome"
+```
diff --git a/04-embedding-models/embedding-models-openai/README.md b/04-embedding-models/embedding-models-openai/README.md
index d6ef745..1444e62 100644
--- a/04-embedding-models/embedding-models-openai/README.md
+++ b/04-embedding-models/embedding-models-openai/README.md
@@ -1,19 +1,61 @@
 # Embedding Models: OpenAI
 
+Vector transformation (embeddings) with LLMs via OpenAI.
+
+## Description
+
+Spring AI provides an `EmbeddingClient` abstraction for integrating with LLMs via several providers, including OpenAI.
+
+When using the _Spring AI OpenAI Spring Boot Starter_, an `EmbeddingClient` object is autoconfigured for you to use OpenAI.
+By default, the _text-embedding-ada-002_ model is used.
+
+```java
+@RestController
+class EmbeddingController {
+    private final EmbeddingClient embeddingClient;
+
+    EmbeddingController(EmbeddingClient embeddingClient) {
+        this.embeddingClient = embeddingClient;
+    }
+
+    @GetMapping("/ai/embed")
+    String embed(@RequestParam(defaultValue = "And Gandalf yelled: 'You shall not pass!'") String message) {
+        var embeddings = embeddingClient.embed(message);
+        return "Size of the embedding vector: " + embeddings.size();
+    }
+}
+```
+
 ## Running the application
 
+The application relies on the OpenAI API for providing LLMs.
+
 ### When using OpenAI
 
+First, make sure you have an OpenAI account.
+Then, define an environment variable with the OpenAI API Key associated with your OpenAI account as the value.
+
 ```shell
 export SPRING_AI_OPENAI_API_KEY=
 ```
 
+Finally, run the Spring Boot application.
+
 ```shell
 ./gradlew bootRun
 ```
 
 ## Calling the application
 
+You can now call the application that will use OpenAI and _text-embedding-ada-002_ to generate a vector representation (embeddings) of a default text.
+This example uses [httpie](https://httpie.io) to send HTTP requests.
+
 ```shell
 http :8080/ai/embed
 ```
+
+Try passing a custom message and check the result.
+
+```shell
+http :8080/ai/embed message=="The capital of Italy is Rome"
+```
diff --git a/README.md b/README.md
index 116738c..88bcebf 100644
--- a/README.md
+++ b/README.md
@@ -20,35 +20,35 @@ Samples showing how to build Java applications powered by Generative AI and LLMs
 
 ### 2. Prompts
 
-| Project | Description |
-|------------------------------------------------------------------------------------------------------------------------------------|---------------|
-| [prompts-basics-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-basics-ollama) | _Coming soon_ |
-| [prompts-basics-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-basics-openai) | _Coming soon_ |
-| [prompts-messages-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-messages-ollama) | _Coming soon_ |
-| [prompts-messages-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-messages-openai) | _Coming soon_ |
-| [prompts-templates-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-templates-ollama) | _Coming soon_ |
-| [prompts-templates-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-templates-openai) | _Coming soon_ |
+| Project | Description |
+|------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------|
+| [prompts-basics-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-basics-ollama) | Prompting using simple text with LLMs via Ollama. |
+| [prompts-basics-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-basics-openai) | Prompting using simple text with LLMs via OpenAI. |
+| [prompts-messages-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-messages-ollama) | Prompting using structured messages and roles with LLMs via Ollama. |
+| [prompts-messages-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-messages-openai) | Prompting using structured messages and roles with LLMs via OpenAI. |
+| [prompts-templates-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-templates-ollama) | Prompting using templates with LLMs via Ollama. |
+| [prompts-templates-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-templates-openai) | Prompting using templates with LLMs via OpenAI. |
 
 ### 3. Output Parsers
 
-| Project | Description |
-|------------------------------------------------------------------------------------------------------------------------------------|---------------|
-| [output-parsers-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/03-output-parsers/output-parsers-ollama) | _Coming soon_ |
-| [output-parsers-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/03-output-parsers/output-parsers-openai) | _Coming soon_ |
+| Project | Description |
+|------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------|
+| [output-parsers-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/03-output-parsers/output-parsers-ollama) | Parsing the LLM output as structured objects (Beans, Map, List) via Ollama. |
+| [output-parsers-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/03-output-parsers/output-parsers-openai) | Parsing the LLM output as structured objects (Beans, Map, List) via OpenAI. |
 
 ### 4. Embedding Models
 
-| Project | Description |
-|------------------------------------------------------------------------------------------------------------------------------------------|---------------|
-| [embedding-models-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/04-embedding-models/embedding-models-ollama) | _Coming soon_ |
-| [embedding-models-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/04-embedding-models/embedding-models-openai) | _Coming soon_ |
+| Project | Description |
+|------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------|
+| [embedding-models-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/04-embedding-models/embedding-models-ollama) | Vector transformation (embeddings) with LLMs via Ollama. |
+| [embedding-models-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/04-embedding-models/embedding-models-openai) | Vector transformation (embeddings) with LLMs via OpenAI. |
 
 ### 5. Document Readers
 
-| Project | Description |
-|----------------------------------------------------------------------------------------------------------------------------------------------------|---------------|
-| [document-readers-json-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/05-document-readers/document-readers-json-ollama) | _Coming soon_ |
-| [document-readers-text-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/05-document-readers/document-readers-text-ollama) | _Coming soon_ |
+| Project | Description |
+|----------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------|
+| [document-readers-json-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/05-document-readers/document-readers-json-ollama) | Reading and vectorizing JSON documents with LLMs via Ollama. |
+| [document-readers-text-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/05-document-readers/document-readers-text-ollama) | Reading and vectorizing text documents with LLMs via Ollama. |
 
 ### 6. Document Transformers