Merge pull request #7 from samestrin/2.0.9
2.0.9
samestrin authored Jun 29, 2024
2 parents 6400dcb + e5962f6 commit 997a86f
Showing 85 changed files with 3,833 additions and 663 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -132,3 +132,5 @@ dist
/src/cache
.prettier*

.DS_STORE
cache/
38 changes: 19 additions & 19 deletions README.md
@@ -2,40 +2,40 @@

[![Star on GitHub](https://img.shields.io/github/stars/samestrin/llm-interface?style=social)](https://github.com/samestrin/llm-interface/stargazers) [![Fork on GitHub](https://img.shields.io/github/forks/samestrin/llm-interface?style=social)](https://github.com/samestrin/llm-interface/network/members) [![Watch on GitHub](https://img.shields.io/github/watchers/samestrin/llm-interface?style=social)](https://github.com/samestrin/llm-interface/watchers)

![Version 2.0.8](https://img.shields.io/badge/Version-2.0.8-blue) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Built with Node.js](https://img.shields.io/badge/Built%20with-Node.js-green)](https://nodejs.org/)
![Version 2.0.9](https://img.shields.io/badge/Version-2.0.9-blue) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Built with Node.js](https://img.shields.io/badge/Built%20with-Node.js-green)](https://nodejs.org/)

## Introduction

`llm-interface` is a wrapper designed to interact with multiple Large Language Model (LLM) APIs. `llm-interface` simplifies integrating various LLM providers, including **OpenAI, AI21 Studio, Anthropic, Cloudflare AI, Cohere, DeepInfra, Fireworks AI, Friendli AI, Google Gemini, Goose AI, Groq, Hugging Face, Mistral AI, Monster API, Octo AI, Perplexity, Reka AI, watsonx.ai, and LLaMA.cpp**, into your applications. It is available as an [NPM package](https://www.npmjs.com/package/llm-interface).
`llm-interface` is a wrapper designed to interact with multiple Large Language Model (LLM) APIs. `llm-interface` simplifies integrating various LLM providers, including **OpenAI, AI21 Studio, AIML API, Anthropic, Cloudflare AI, Cohere, DeepInfra, Fireworks AI, Forefront, Friendli AI, Google Gemini, Goose AI, Groq, Hugging Face, Mistral AI, Monster API, Octo AI, Ollama, Perplexity, Reka AI, Replicate, watsonx.ai, Writer, and LLaMA.cpp**, into your applications. It is available as an [NPM package](https://www.npmjs.com/package/llm-interface).

The goal of `llm-interface` is to provide a single, simple, unified interface for sending messages and receiving responses from different LLM services, making it easier for developers to work with multiple LLMs without worrying about the specific intricacies of each API.
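
A minimal sketch of that unified flow, modeled on the bundled examples (the `groq` provider and `GROQ_API_KEY` are stand-ins for any supported provider and key):

```javascript
const { LLMInterface } = require('llm-interface');

require('dotenv').config();

// One key, one provider name; the same call shape works for every interface.
LLMInterface.setApiKey('groq', process.env.GROQ_API_KEY);

async function main() {
  try {
    const response = await LLMInterface.sendMessage(
      'groq',
      'Explain the importance of low latency LLMs.',
      { max_tokens: 150 },
    );
    console.log(response.results);
  } catch (error) {
    console.error('Error processing LLMInterface.sendMessage:', error);
  }
}

main();
```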

## Features

- **Unified Interface**: `LLMInterfaceSendMessage` is a single, consistent interface to interact with **19 different LLM APIs**.
- **Unified Interface**: `LLMInterfaceSendMessage` is a single, consistent interface to interact with **24 different LLM APIs** (22 hosted LLM providers and 2 local LLM providers).
- **Dynamic Module Loading**: Automatically loads and manages LLM interfaces only when they are invoked, minimizing resource usage.
- **Error Handling**: Robust error handling mechanisms to ensure reliable API interactions.
- **Extensible**: Easily extendable to support additional LLM providers as needed.
- **Response Caching**: Efficiently caches LLM responses to reduce costs and enhance performance.
- **Graceful Retries**: Automatically retry failed prompts with increasing delays to ensure successful responses.
- **JSON Output**: Simple to use native JSON output for OpenAI, Fireworks AI, and Gemini responses.
- **JSON Repair**: Detect and repair invalid JSON responses.
- **JSON Output**: Simple-to-use native JSON output for various LLM providers, including OpenAI, Fireworks AI, Google Gemini, and more.
- **JSON Repair**: Detect and repair invalid JSON responses (a sketch combining these options follows this list).
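
A hedged sketch of how these features combine on a single request; `attemptJsonRepair` appears in the bundled json-repair example, while `cacheTimeoutSeconds` and `retryAttempts` are assumed option names used here only for illustration and may differ from the released API:

```javascript
const { LLMInterface } = require('llm-interface');

require('dotenv').config();

LLMInterface.setApiKey('groq', process.env.GROQ_API_KEY);

async function featuresSketch() {
  const prompt =
    'List 3 uses for low latency LLMs. Respond only with a JSON object: {title, reason}';

  const response = await LLMInterface.sendMessage(
    'groq',
    prompt,
    { max_tokens: 150 }, // per-request options forwarded to the provider
    {
      attemptJsonRepair: true, // shown in examples/json-repair.js
      cacheTimeoutSeconds: 3600, // assumed name for response caching
      retryAttempts: 3, // assumed name for graceful retries
    },
  );

  console.log(response.results);
}

featuresSketch().catch(console.error);
```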

## Updates

**v2.0.8**

- **Removing Dependencies**: The removal of OpenAI and Groq SDKs results in a smaller bundle, faster installs, and reduced complexity.
**v2.0.9**

**v2.0.7**
- **New LLM Providers**: Added support for AIML API (_currently not respecting option values_), DeepSeek, Forefront, Ollama, Replicate, and Writer.
- **New LLMInterface Methods**: `LLMInterface.setApiKey`, `LLMInterface.sendMessage`, and `LLMInterface.streamMessage` (a hedged streaming sketch follows this list).
- **Streaming**: Streaming support available for: AI21 Studio, AIML API, DeepInfra, DeepSeek, Fireworks AI, FriendliAI, Groq, Hugging Face, LLaMa.CPP, Mistral AI, Monster API, NVIDIA, Octo AI, Ollama, OpenAI, Perplexity, Together AI, and Writer.
- **New Interface Function**: `LLMInterfaceStreamMessage`
- **Test Coverage**: 100% test coverage for all interface classes.
- **Examples**: New usage [examples](/examples).
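
A hedged sketch of the new streaming call, assuming `LLMInterface.streamMessage` mirrors `sendMessage`'s (interface, prompt, options) signature and returns (or wraps, via a `data` property) a readable stream of text chunks; both assumptions may differ from the released API:

```javascript
const { LLMInterface } = require('llm-interface');

require('dotenv').config();

LLMInterface.setApiKey('groq', process.env.GROQ_API_KEY);

async function streamingSketch() {
  // streamMessage is listed above; the return shape handled below is assumed.
  const stream = await LLMInterface.streamMessage(
    'groq',
    'Explain the importance of low latency LLMs.',
    { max_tokens: 100 },
  );

  // Assumption: the chunks arrive on a readable stream (possibly stream.data).
  const readable = stream && stream.data ? stream.data : stream;
  for await (const chunk of readable) {
    process.stdout.write(chunk.toString());
  }
}

streamingSketch().catch(console.error);
```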

- **New LLM Providers**: Added support for DeepInfra, FriendliAI, Monster API, Octo AI, Together AI, and NVIDIA.
- **Improved Test Coverage**: New DeepInfra, FriendliAI, Monster API, NVIDIA, Octo AI, Together AI, and watsonx.ai test cases.
- **Refactor**: Improved support for OpenAI compatible APIs using new BaseInterface class.

**v2.0.6**
**v2.0.8**

- **New LLM Provider**: Added support for watsonx.ai.
- **Removing Dependencies**: The removal of OpenAI and Groq SDKs results in a smaller bundle, faster installs, and reduced complexity.

## Dependencies

@@ -111,13 +111,13 @@ The project includes tests for each LLM handler. To run the tests, use the follo
npm test
```

#### Current Test Results

```bash
Test Suites: 52 passed, 52 total
Tests: 2 skipped, 215 passed, 217 total
Test Suites: 1 skipped, 65 passed, 65 of 66 total
Tests: 2 skipped, 291 passed, 293 total
Snapshots: 0 total
Time: 76.236 s
Time: 103.293 s, estimated 121 s
```

_Note: Currently skipping NVIDIA test cases due to API key limits._
26 changes: 24 additions & 2 deletions docs/APIKEYS.md
@@ -32,16 +32,30 @@ The Cohere API offers trial keys. Trial keys are rate-limited, and cannot be use

- https://dashboard.cohere.com/api-keys

## Deepinfra
## DeepInfra

The Deepinfra API is commercial but new accounts will start with a $1.80 credit.
The DeepInfra API is commercial but new accounts will start with a $1.80 credit.

- https://deepinfra.com/dash/api_keys

## DeepSeek

The DeepSeek API is commercial and requires a credit card or debit card to get started.

- https://platform.deepseek.com/api_keys

## Fireworks AI

The Fireworks AI API offers a free developer tier and commercial accounts. A credit card is not required for the free developer tier.

- https://fireworks.ai/api-keys

## Forefront

The Forefront API is commercial but it comes with $20 of free credit.

- https://platform.forefront.ai/app/api-keys

## Friendli AI

The Friendli AI API is commercial but it comes with a $5.00 credit.
@@ -110,6 +124,14 @@ The Reka AI API requires a credit card, but currently comes with a $5.00 credit.

- https://platform.reka.ai/apikeys

## Replicate

The Replicate API is commercial but it does offer a free tier that you can use without providing a credit card.

- https://replicate.com/

After you log in, you will need to click "Dashboard", then "Run a model".

## Together AI

The Together AI API is commercial, but it does not require a credit card and comes with a $5.00 credit.
11 changes: 7 additions & 4 deletions env
@@ -11,14 +11,17 @@ AI21_API_KEY=
FIREWORKSAI_API_KEY=
CLOUDFLARE_API_KEY=
CLOUDFLARE_ACCOUNT_ID=
LLAMACPP_URL=http://localhost:8080/completions
CLOUDFLARE_API_KEY=
CLOUDFLARE_ACCOUNT_ID=
WATSONXSAI_API_KEY=
WATSONXSAI_SPACE_ID=
FRIENDLIAI_API_KEY=
NVIDIA_API_KEY=
DEEPINFRA_API_KEY=
TOGETHERAI_API_KEY=
MONSTERAPI_API_KEY=
OCTOAI_API_KEY=
OCTOAI_API_KEY=
AIMLAPI_API_KEY=
FOREFRONT_API_KEY=
DEEPSEEK_API_KEY=

REPLICATE_API_KEY=
LLAMACPP_URL=http://localhost:8080/completions
41 changes: 41 additions & 0 deletions examples/json-output.js
@@ -0,0 +1,41 @@
/**
* @file examples/json-output.js
* @description Example showing JSON output. To do this, I will specify my JSON output requirements through my prompt.
*/
const { LLMInterface } = require('llm-interface');
const { simplePrompt, options } = require('../src/utils/defaults.js');

require('dotenv').config({ path: '../.env' });

// Setup your key and interface
const interface = 'huggingface';
const apiKey = process.env.HUGGINGFACE_API_KEY;

/**
* Main exampleUsage() function.
*/
async function exampleUsage() {
let prompt = `${simplePrompt} Return 5 results.\n\nProvide the response as a JSON object.\n\nFollow this output format, only responding with the JSON object and nothing else:\n\n{title, reason}`;

console.log('JSON Output (Prompt Based):');
console.log();
console.log('Prompt:');
console.log(`> ${prompt.replaceAll('\n\n', '\n>\n> ')}`);
console.log();

LLMInterface.setApiKey(interface, apiKey);

try {
const response = await LLMInterface.sendMessage(interface, prompt, {
max_tokens: 1024,
});

console.log('JSON Result:');
console.log(response.results);
console.log();
} catch (error) {
console.error('Error processing LLMInterface.sendMessage:', error);
}
}

exampleUsage();
48 changes: 48 additions & 0 deletions examples/json-repair.js
@@ -0,0 +1,48 @@
/**
 * @file examples/json-repair.js
 * @description Example showing JSON repair. To do this, I will specify my JSON output requirements through my prompt, and I will request a
 * larger result set than can be returned within the available response tokens. This results in a response containing an invalid JSON object, which I
 * will then repair using the attemptJsonRepair interfaceOption.
*/
const { LLMInterface } = require('llm-interface');
const { simplePrompt, options } = require('../src/utils/defaults.js');

require('dotenv').config({ path: '../.env' });

// Setup your key and interface
const interface = 'groq';
const apiKey = process.env.GROQ_API_KEY;

/**
* Main exampleUsage() function.
*/
async function exampleUsage() {
let prompt = `${simplePrompt} Return 5 results.\n\nProvide the response as a JSON object.\n\nFollow this output format, only responding with the JSON object and nothing else:\n\n{title, reason}`;

console.log('JSON Repair:');
console.log();
console.log('Prompt:');
console.log(`> ${prompt.replaceAll('\n\n', '\n>\n> ')}`);
console.log();

LLMInterface.setApiKey(interface, apiKey);

try {
const response = await LLMInterface.sendMessage(
interface,
prompt,
{
max_tokens: 100,
},
{ attemptJsonRepair: true },
);

console.log('Repaired JSON Result:');
console.log(response.results);
console.log();
} catch (error) {
console.error('Error processing LLMInterface.sendMessage:', error);
}
}

exampleUsage();
43 changes: 43 additions & 0 deletions examples/native-json-output.js
@@ -0,0 +1,43 @@
/**
* @file examples/native-json-output.js
 * @description Example showing native JSON output. I will specify my JSON requirements in my prompt and also specify native JSON mode. This has
 * the added benefit of server-side JSON validation; however, it can return a null response when the result set is too large for the response token limit.
*/
const { LLMInterface } = require('llm-interface');
const { simplePrompt, options } = require('../src/utils/defaults.js');

require('dotenv').config({ path: '../.env' });

// Setup your key and interface
const interface = 'gemini';
const apiKey = process.env.GEMINI_API_KEY;

/**
* Main exampleUsage() function.
*/
async function exampleUsage() {
let prompt = `${simplePrompt} Return 5 results.\n\nProvide the response as a valid JSON object; validate the object before responding.\n\nJSON Output Format: [{title, reason}]`;

console.log('Native JSON Output:');
console.log();
console.log('Prompt:');
console.log(`> ${prompt.replaceAll('\n\n', '\n>\n> ')}`);
console.log();

LLMInterface.setApiKey(interface, apiKey);

try {
const response = await LLMInterface.sendMessage(interface, prompt, {
max_tokens: 1024,
response_format: 'json_object',
});

console.log('JSON Result:');
console.log(response.results);
console.log();
} catch (error) {
console.error('Error processing LLMInterface.sendMessage:', error);
}
}

exampleUsage();
