
Glossary

LLMInterface-Specific

Interface Name

The specific string used within the LLM Interface package to identify and access a provider's API (e.g., 'openai', 'cohere', 'anthropic').

Interface Options

An optional interfaceOptions object that can be used to control the behavior of LLMInterface.

LLM Interface

A Node.js module that provides a standardized way to interact with various LLM providers' APIs.

Options

An optional options object that can be used to send parameters to an LLM.
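
As a minimal sketch of how the interface name, options, and interfaceOptions fit together (assuming a sendMessage-style entry point; the exact method signature and the interfaceOptions fields shown are illustrative and should be checked against the package documentation):

```javascript
const { LLMInterface } = require('llm-interface');

async function main() {
  const response = await LLMInterface.sendMessage(
    'openai',                              // interface name (selects the provider)
    process.env.OPENAI_API_KEY,            // provider API key
    'Say this is a test!',                 // prompt
    { max_tokens: 100, temperature: 0.7 }, // options forwarded to the LLM
    { cacheTimeoutSeconds: 60 }            // interfaceOptions (field is illustrative)
  );
  console.log(response);
}

main();
```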

OpenAI Compatible Structure

An object that is compatible with the OpenAI chat.completions API. The object can contain the model, messages, and parameters.

Example:

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [{ "role": "user", "content": "Say this is a test!" }],
  "temperature": 0.7
}
```

Provider

The company or organization that offers a language model API (e.g., OpenAI, Cohere, Anthropic).

General

API (Application Programming Interface)

A set of rules, protocols, and tools for building software applications. In the context of LLMs, an API allows you to interact with a language model hosted by a provider.

Embeddings

Numerical representations of text that capture meaning and semantic relationships. Embeddings can be used for tasks like semantic search, clustering, and recommendation.
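
To illustrate how embeddings encode semantic similarity, here is a self-contained sketch that compares two embedding vectors with cosine similarity (the vectors below are made up; in practice they would come from a provider's embeddings endpoint):

```javascript
// Cosine similarity between two embedding vectors: values near 1 indicate
// similar meaning, values near 0 indicate unrelated text.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy 4-dimensional embeddings; real embeddings have hundreds or
// thousands of dimensions.
const cat = [0.9, 0.1, 0.3, 0.0];
const kitten = [0.85, 0.15, 0.35, 0.05];
console.log(cosineSimilarity(cat, kitten)); // close to 1
```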

Functions

Specific actions or operations that can be performed by an LLM. LLM Interface may allow you to define custom functions or use pre-defined functions offered by providers.
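
For example, providers that follow the OpenAI chat completions convention describe a callable function as a JSON Schema object; a sketch (the get_weather function itself is hypothetical):

```javascript
// A function definition in the OpenAI-compatible style. The model does not
// execute the function; it returns the name and arguments for your code to run.
const functions = [
  {
    name: 'get_weather', // hypothetical function
    description: 'Get the current weather for a city',
    parameters: {
      type: 'object',
      properties: {
        city: { type: 'string', description: 'City name, e.g. "Paris"' },
      },
      required: ['city'],
    },
  },
];
```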

Inference

The process of generating a response from an LLM based on a given prompt.

LLM (Large Language Model)

A type of artificial intelligence model trained on a massive dataset of text and code. LLMs can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.

Model

The specific string used to reference an LLM offered by a provider (e.g., 'gpt-3.5-turbo', 'command-nightly').

Native JSON Mode

A communication mode in which the LLM accepts and returns JSON objects directly, facilitating structured data exchange and function calling.
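
With OpenAI-compatible providers, native JSON mode is commonly requested through the response_format request parameter; a sketch (support varies by provider and model, so check the provider's documentation):

```javascript
// An OpenAI-compatible request asking for a JSON object response.
const request = {
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'user', content: 'List three colors as a JSON object.' },
  ],
  response_format: { type: 'json_object' }, // request well-formed JSON output
};
```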

Parameter

A value sent with an API request that adjusts how the model generates its response (e.g., temperature, top_p). This is distinct from a model's internal parameters, the weights learned during training. Supported parameters vary between providers and models; check the provider documentation for what is supported.

Prompt

The input given to an LLM, which can be a question, instruction, or piece of text.

Response

The output generated by an LLM in response to a prompt.

Streaming

A feature that allows you to receive the LLM's response incrementally as it is generated, rather than waiting for the complete response.
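
A common consumption pattern is iterating over the response as an async iterable; a sketch assuming a hypothetical client.chat method, since option names and chunk shapes differ between providers:

```javascript
// Consume a streamed response chunk by chunk instead of waiting for the
// full completion. The stream option and chunk shape are illustrative.
async function streamExample(client) {
  const stream = await client.chat({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Write a haiku.' }],
    stream: true, // ask the provider to stream tokens as they are generated
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.delta ?? ''); // print each partial piece
  }
}
```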

Temperature

A parameter that controls the randomness of the LLM's outputs. Lower values make the model more deterministic, while higher values increase randomness.

Token

The basic unit of text used by LLMs. Tokens can be words, subwords, or even characters, depending on the specific model. Token usage can affect pricing in some LLM APIs.

Tools

External resources or functionalities (e.g., calculators, code interpreters, search engines) that can be integrated with LLMs to enhance their capabilities.

Top-k Sampling

A decoding strategy where the model considers only the top k most probable next tokens when generating text, promoting diversity in the output.

Top-p (Nucleus) Sampling

A decoding strategy where the model considers the smallest set of tokens whose cumulative probability exceeds a certain threshold p, balancing between diversity and relevance.
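
The two strategies above can be illustrated with a toy sketch that filters a next-token probability distribution before sampling (not how any particular provider implements decoding):

```javascript
// Toy next-token distribution: [token, probability] pairs sorted descending.
const dist = [['the', 0.5], ['a', 0.3], ['cat', 0.15], ['zebra', 0.05]];

// Top-k: keep only the k most probable tokens.
function topK(dist, k) {
  return dist.slice(0, k);
}

// Top-p (nucleus): keep the smallest prefix whose cumulative
// probability reaches p.
function topP(dist, p) {
  const kept = [];
  let cumulative = 0;
  for (const [token, prob] of dist) {
    kept.push([token, prob]);
    cumulative += prob;
    if (cumulative >= p) break;
  }
  return kept;
}

console.log(topK(dist, 2));   // [['the', 0.5], ['a', 0.3]]
console.log(topP(dist, 0.9)); // [['the', 0.5], ['a', 0.3], ['cat', 0.15]]
```

In practice, the surviving tokens are renormalized and the next token is sampled from that reduced distribution.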