docs updates #3632

Open · wants to merge 1 commit into base: main
4 changes: 2 additions & 2 deletions CONTRIBUTING.md
@@ -188,7 +188,7 @@ After you've written your context provider, make sure to complete the following:

### Adding an LLM Provider

Continue has support for more than a dozen different LLM "providers", making it easy to use models running on OpenAI, Ollama, Together, Novita AI, LM Studio, Msty, and more. You can find all of the existing providers [here](https://github.com/continuedev/continue/tree/main/core/llm/llms), and if you see one missing, you can add it with the following steps:
Continue has support for more than a dozen different LLM "providers", making it easy to use models running on OpenAI, Ollama, Together, LM Studio, Msty, and more. You can find all of the existing providers [here](https://github.com/continuedev/continue/tree/main/core/llm/llms), and if you see one missing, you can add it with the following steps:

1. Create a new file in the `core/llm/llms` directory. The name of the file should be the name of the provider, and it should export a class that extends `BaseLLM`. This class should contain the following minimal implementation. We recommend viewing pre-existing providers for more details. The [LlamaCpp Provider](./core/llm/llms/LlamaCpp.ts) is a good simple example.
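A minimal subclass along the lines of step 1 might look like the sketch below. This is only an illustration: `BaseLLM` is stubbed here so the snippet is self-contained, and the method name `_streamComplete` and its signature are assumptions — check the real `BaseLLM` in `core/llm` and the LlamaCpp provider for the actual interface.

```typescript
// Self-contained sketch of a minimal provider. In the repo you would
// import the real BaseLLM from core/llm instead of stubbing it.
abstract class BaseLLM {
  static providerName = "base";
  static defaultOptions: { model?: string } = {};
}

class ExampleProvider extends BaseLLM {
  // The provider string users will reference in config.json.
  static providerName = "example";
  static defaultOptions = { model: "example-7b" };

  // Real providers implement a streaming completion generator; this one
  // just echoes the prompt so the sketch is runnable.
  async *_streamComplete(prompt: string): AsyncGenerator<string> {
    yield `echo: ${prompt}`;
  }
}
```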

@@ -209,7 +209,7 @@ While any model that works with a supported provider can be used with Continue,
1. Add a `ModelPackage` entry for the model into [configs/models.ts](./gui/src/pages/AddNewModel/configs/models.ts), following the lead of the many examples near the top of the file
2. Add the model within its provider's array to [AddNewModel.tsx](./gui/src/pages/AddNewModel/AddNewModel.tsx) (add provider if needed)
- [index.d.ts](./core/index.d.ts) - This file defines the TypeScript types used throughout Continue. You'll find a `ModelName` type. Be sure to add the name of your model to this.
- LLM Providers: Since many providers use their own custom strings to identify models, you'll have to add the translation from Continue's model name (the one you added to `index.d.ts`) and the model string for each of these providers: [Ollama](./core/llm/llms/Ollama.ts), [Together](./core/llm/llms/Together.ts), [Novita AI](./core/llm/llms/Novita.ts), and [Replicate](./core/llm/llms/Replicate.ts). You can find their full model lists here: [Ollama](https://ollama.ai/library), [Together](https://docs.together.ai/docs/inference-models), [Novita AI](https://novita.ai/llm-api?utm_source=github_continuedev&utm_medium=github_readme&utm_campaign=github_link), [Replicate](https://replicate.com/collections/streaming-language-models).
- LLM Providers: Since many providers use their own custom strings to identify models, you'll have to add the translation from Continue's model name (the one you added to `index.d.ts`) and the model string for each of these providers: [Ollama](./core/llm/llms/Ollama.ts), [Together](./core/llm/llms/Together.ts), and [Replicate](./core/llm/llms/Replicate.ts). You can find their full model lists here: [Ollama](https://ollama.ai/library), [Together](https://docs.together.ai/docs/inference-models), [Replicate](https://replicate.com/collections/streaming-language-models).
- [Prompt Templates](./core/llm/index.ts) - In this file you'll find the `autodetectTemplateType` function. Make sure that for the model name you just added, this function returns the correct template type. This is assuming that the chat template for that model is already built in Continue. If not, you will have to add the template type and corresponding edit and chat templates.
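The per-provider translation step above can be sketched as follows. This is a hypothetical illustration: the real tables live inside each provider file, and both the `ModelName` values and provider strings below are examples, not the repo's actual entries.

```typescript
// Hypothetical translation table: maps Continue's model name (the
// ModelName type in core/index.d.ts) to the provider's own model string.
type ModelName = "llama3.1-405b" | "mistral-7b";

const togetherModelMap: Partial<Record<ModelName, string>> = {
  "llama3.1-405b": "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo",
  "mistral-7b": "mistralai/Mistral-7B-Instruct-v0.3",
};

function toTogetherModel(name: ModelName): string {
  // Fall back to the raw name when no translation is registered.
  return togetherModelMap[name] ?? name;
}
```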

### Adding Pre-indexed Documentation
2 changes: 1 addition & 1 deletion docs/docs/chat/context-selection.md
@@ -15,7 +15,7 @@ The highlighted code you’ve selected by pressing <kbd>cmd/ctrl</kbd> + <kbd>L<

## Active file

You can include the currently open file as context by pressing <kbd>cmd</kbd> + <kbd>opt</kbd> + <kbd>enter</kbd> (Mac) or <kbd>alt</kbd> + <kbd>enter</kbd> (Windows) when you send your request at Chat window(Prompt can't be empty).
You can include the currently open file as context by pressing <kbd>opt</kbd> + <kbd>enter</kbd> (Mac) or <kbd>alt</kbd> + <kbd>enter</kbd> (Windows) when you send your request in the Chat window (the prompt can't be empty).

## Specific file

26 changes: 1 addition & 25 deletions docs/docs/chat/model-setup.mdx
@@ -33,7 +33,7 @@ Our current top recommendation is Claude Sonnet 3.5 from [Anthropic](../customiz

### Llama 3.1 405B from Meta

If you prefer to use an open-weight model, then Llama 3.1 405B from Meta is your best option right now. You will need to decide if you use it through a SaaS model provider (e.g. [Together](../customize/model-providers/more/together.md) or [Novita AI](../customize/model-providers/more/novita.md) or [Groq](../customize/model-providers/more/groq.md)) or self-host it (e.g. using [vLLM](../customize/model-providers//more/vllm.md) or [Ollama](../customize/model-providers/top-level/ollama.md)).
If you prefer to use an open-weight model, then Llama 3.1 405B from Meta is your best option right now. You will need to decide if you use it through a SaaS model provider (e.g. [Together](../customize/model-providers/more/together.md) or [Groq](../customize/model-providers/more/groq.md)) or self-host it (e.g. using [vLLM](../customize/model-providers//more/vllm.md) or [Ollama](../customize/model-providers/top-level/ollama.md)).

<Tabs groupId="providers">
<TabItem value="Together">
@@ -48,18 +48,6 @@ If you prefer to use an open-weight model, then Llama 3.1 405B from Meta is your
]
```
</TabItem>
<TabItem value="Novita">
```json title="config.json"
"models": [
{
"title": "Llama 3.1 405B",
"provider": "novita",
"model": "meta-llama/llama-3.1-405b-instruct",
"apiKey": "[NOVITA_API_KEY]"
}
]
```
</TabItem>
<TabItem value="Groq">
```json title="config.json"
"models": [
@@ -94,18 +82,6 @@ If you prefer to use an open-weight model, then Llama 3.1 405B from Meta is your
]
```
</TabItem>
<TabItem value="Nebius">
```json title="config.json"
"models": [
{
"title": "Llama 3.1 405B",
"provider": "nebius",
"model": "llama3.1-405b",
"apiKey": "[NEBIUS_API_KEY]"
}
]
```
</TabItem>
</Tabs>

### GPT-4o from OpenAI
22 changes: 0 additions & 22 deletions docs/docs/customize/tutorials/llama3.1.md
@@ -71,28 +71,6 @@ Together AI provides fast and reliable inference of open-source models. You'll b
}
```


## Novita AI

[Novita AI](https://novita.ai?utm_source=github_continuedev&utm_medium=github_readme&utm_campaign=github_link) offers an affordable, reliable, and simple inference platform with scalable [LLM API](https://novita.ai/docs/model-api/reference/introduction.html), empowering developers to build AI applications. Try the [Novita AI Llama 3 API Demo](https://novita.ai/model-api/product/llm-api/playground/meta-llama-llama-3.1-70b-instruct?utm_source=github_continuedev&utm_medium=github_readme&utm_campaign=github_link) today!

1. Create an account [here](https://novita.ai/user/login?&redirect=/&utm_source=github_continuedev&utm_medium=github_readme&utm_campaign=github_link)
2. Copy your API key that appears on the welcome screen
3. Update your Continue config file like this:

```json title="config.json"
{
"models": [
{
"title": "Llama 3.1 405b",
"provider": "novita",
"model": "meta-llama/llama-3.1-405b-instruct",
"apiKey": "<API_KEY>"
}
]
}
```

## Replicate

Replicate makes it easy to host and run open-source AI with an API.
@@ -33,7 +33,7 @@ import TabItem from "@theme/TabItem";

### Llama 3.1 405B from Meta

If you prefer to use an open-weight model, then Llama 3.1 405B from Meta is your best option right now. You will need to decide whether to use it through a SaaS model provider (e.g. [Together](../customize/model-providers/more/together.md), [Novita AI](../customize/model-providers/more/novita.md), or [Groq](../customize/model-providers/more/groq.md)) or to self-host it (e.g. using [vLLM](../customize/model-providers//more/vllm.md) or [Ollama](../customize/model-providers/top-level/ollama.md)).
If you prefer to use an open-weight model, then Llama 3.1 405B from Meta is your best option right now. You will need to decide whether to use it through a SaaS model provider (e.g. [Together](../customize/model-providers/more/together.md) or [Groq](../customize/model-providers/more/groq.md)) or to self-host it (e.g. using [vLLM](../customize/model-providers//more/vllm.md) or [Ollama](../customize/model-providers/top-level/ollama.md)).

<Tabs groupId="providers">
<TabItem value="Together">
@@ -71,27 +71,6 @@ Together AI provides fast and reliable inference of open-source models. You can
}
```

## Novita AI

[Novita AI](https://novita.ai?utm_source=github_continuedev&utm_medium=github_readme&utm_campaign=github_link) offers an affordable, reliable, and simple inference platform. You can run the 405b model at a good speed.

1. Create an account [here](https://novita.ai/user/login?&redirect=/&utm_source=github_continuedev&utm_medium=github_readme&utm_campaign=github_link)
2. Copy your API key from [Key Management](https://novita.ai/settings/key-management?utm_source=github_continuedev&utm_medium=github_readme&utm_campaign=github_link)
3. Update your Continue config file like this:

```json title="config.json"
{
"models": [
{
"title": "Llama 3.1 405b",
"provider": "novita",
"model": "meta-llama/llama-3.1-405b-instruct",
"apiKey": "<API_KEY>"
}
]
}
```

## Replicate

Replicate makes it easy to host and run open-source AI with an API.
14 changes: 7 additions & 7 deletions gui/src/pages/AddNewModel/configs/providers.ts
@@ -165,7 +165,8 @@ export const providers: Partial<Record<string, ProviderInfo>> = {
title: "Scaleway",
provider: "scaleway",
refPage: "scaleway",
description: "Use the Scaleway Generative APIs to instantly access leading open models",
description:
"Use the Scaleway Generative APIs to instantly access leading open models",
longDescription: `Hosted in European data centers, ideal for developers requiring low latency, full data privacy, and compliance with EU AI Act. You can generate your API key in [Scaleway's console](https://console.scaleway.com/generative-api/models). Get started:\n1. Create an API key [here](https://console.scaleway.com/iam/api-keys/)\n2. Paste below\n3. Select a model preset`,
params: {
apiKey: "",
@@ -426,13 +427,12 @@ Select the \`GPT-4o\` model below to complete your provider configuration, but n
},
...completionParamsInputsConfigs,
],
packages: [
models.llama318BChat, models.mistralChat
].map((p) => {
packages: [models.llama318BChat, models.mistralChat].map((p) => {
p.params.contextLength = 4096;
return p;
}),
apiKeyUrl: "https://novita.ai/settings/key-management?utm_source=github_continuedev&utm_medium=github_readme&utm_campaign=github_link",
apiKeyUrl:
"https://novita.ai/settings/key-management?utm_source=github_continuedev&utm_medium=github_readme&utm_campaign=github_link",
},
gemini: {
title: "Google Gemini API",
@@ -670,8 +670,8 @@ To get started, [register](https://dataplatform.cloud.ibm.com/registration/stepo
provider: "free-trial",
refPage: "freetrial",
description:
"New users can try out Continue for free using a proxy server that securely makes calls to OpenAI, Anthropic, Together, or Novita AI using our API key",
longDescription: `New users can try out Continue for free using a proxy server that securely makes calls to OpenAI, Anthropic, Together or Novita AI using our API key. If you are ready to set up a model for long-term use or have used all ${FREE_TRIAL_LIMIT_REQUESTS} free uses, you can enter your API key or use a local model.`,
"New users can try out Continue for free using a proxy server that securely makes calls to OpenAI, Anthropic, or Together using our API key",
longDescription: `New users can try out Continue for free using a proxy server that securely makes calls to OpenAI, Anthropic, or Together using our API key. If you are ready to set up a model for long-term use or have used all ${FREE_TRIAL_LIMIT_REQUESTS} free uses, you can enter your API key or use a local model.`,
icon: "openai.png",
tags: [ModelProviderTags.Free],
packages: [