
When using OpenAI's o1-preview or o1-mini, no results are returned; gpt-4o works normally. #3279

Closed
3 tasks done
TaoWolf opened this issue Dec 10, 2024 · 8 comments
Assignees
Labels
area:configuration Relates to configuration options ide:jetbrains Relates specifically to JetBrains extension kind:bug Indicates an unexpected problem or unintended behavior priority:medium Indicates medium priority

Comments

@TaoWolf

TaoWolf commented Dec 10, 2024

Before submitting your bug report

Relevant environment info

- OS: Windows 11
- Continue version: 0.0.83
- IDE version: IntelliJ IDEA 2023.2.2
- Model: o1-preview or o1-mini
- config.json:
  
{
  "model": "o1-preview",
  "provider": "openai",
  "title": "o1-preview",
  "systemMessage": "When providing code completion, only supply the part that needs to be completed, without repeating the original code. Responses should be in Chinese, keeping them concise and helpful, and include necessary comments and explanations where appropriate. The developer should maintain professionalism by delivering clear, direct solutions focused on the user's needs.",
  "apiKey": "xxxx-myKey private ……",
  "contextLength": 128000
}

Description

No response

To reproduce

No response

Log output

No response

@dosubot dosubot bot added area:configuration Relates to configuration options kind:bug Indicates an unexpected problem or unintended behavior priority:medium Indicates medium priority labels Dec 10, 2024
@tomasz-stefaniak
Collaborator

@TaoWolf to make sure we are on the same page, this is located under models in your config.json, not tabAutocompleteModel?

Do you see any error messages or any output in the logs?


@TaoWolf
Author

TaoWolf commented Dec 16, 2024

Ahem, isn't that an obvious question?
It's definitely o1-preview and o1-mini, and the requirements for the message structure have changed!
Wait a moment, I'll verify against the official docs and report back. Thanks for your work, but please hurry with the adjustments; many users are waiting for this update!

@TaoWolf
Author

TaoWolf commented Dec 16, 2024

Hello, I have looked into it; please note the following points:
1. The o1 APIs currently do not support the 'system' message field; move the 'system' content into an 'assistant' or 'user' message.
2. The o1 APIs currently only accept 'temperature' with the fixed value 1.
3. 'max_tokens' is not supported; only the 'max_completion_tokens' field is accepted.

At a glance, these are the three differences I found from the earlier gpt-4o model; changing these three places should make it work.
Please also check the official requirements carefully and verify what changes the backend requests need when users use o1 models.

@TaoWolf
Author

TaoWolf commented Dec 16, 2024

Hello, I have verified this; please note the following points:

  1. The o1 APIs currently do not support the 'system' message field; move the 'system' content into an 'assistant' or 'user' message.
  2. The o1 APIs currently only accept 'temperature' with the fixed value 1.
  3. 'max_tokens' is not supported; only the 'max_completion_tokens' field is accepted.

At a glance, these three places differ from the earlier gpt-4o model; changing them should make it work.
Please also review the official requirements carefully and verify what changes the backend requests need when users use o1 models.
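The three differences above can be sketched as a small request transformation. This is an illustrative Python function over a generic OpenAI-style chat payload, not Continue's actual code; the function name and structure are assumptions for demonstration:

```python
def adapt_request_for_o1(payload: dict) -> dict:
    """Illustrative: rewrite a gpt-4o style chat payload for o1 models."""
    out = dict(payload)

    # 1. o1 rejects the 'system' role: fold system content into the first user message.
    messages = []
    system_parts = []
    for msg in out.get("messages", []):
        if msg["role"] == "system":
            system_parts.append(msg["content"])
        else:
            messages.append(dict(msg))
    if system_parts and messages and messages[0]["role"] == "user":
        messages[0]["content"] = "\n".join(system_parts) + "\n\n" + messages[0]["content"]
    out["messages"] = messages

    # 2. 'temperature' must be exactly 1.
    out["temperature"] = 1

    # 3. 'max_tokens' is replaced by 'max_completion_tokens'.
    if "max_tokens" in out:
        out["max_completion_tokens"] = out.pop("max_tokens")
    return out
```

A client would apply this transform just before sending the request whenever the model name starts with "o1".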

@tomasz-stefaniak
Collaborator

Thanks for checking, I'll take a look and see if we can implement a quick fix.

@tomasz-stefaniak tomasz-stefaniak added the ide:jetbrains Relates specifically to JetBrains extension label Dec 16, 2024
@tomasz-stefaniak
Collaborator

I'm tagging @Patrick-Erichsen who owns JetBrains on our team.

On my end, I tested it locally on VSCode and o1 seems to work as expected. I found an issue with the system prompt being ignored and addressed it in this PR: #3408

For reference, we handle max_completion_tokens and other conversions here:

https://github.com/continuedev/continue/blob/main/core/llm/llms/OpenAI.ts#L130
https://github.com/continuedev/continue/blob/main/core/llm/llms/OpenAI.ts#L223

It might be that this logic doesn't get triggered when using JetBrains.

@TaoWolf
Author

TaoWolf commented Dec 17, 2024

Based on your suggestion, I have modified the configuration file config.json by directly updating the relevant settings as follows:

"completionOptions": {
  "temperature": 1,
  "maxCompletionTokens": 4096
},

After testing, I can successfully use o1 and o1-mini, so there is no need to fix this issue. However, such adjustments to the configuration file should be reflected in the reference documentation.

I have already updated the documentation and submitted a pull request: #3420

Please review and approve the merge so that it can be shared with other users.
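Putting the pieces from this thread together, a complete model entry might look like the following. This is an illustrative sketch, not verified documentation: the apiKey is a placeholder, and systemMessage is omitted because, per the points above, o1 models reject the 'system' role:

```json
{
  "model": "o1-preview",
  "provider": "openai",
  "title": "o1-preview",
  "apiKey": "<YOUR_API_KEY>",
  "contextLength": 128000,
  "completionOptions": {
    "temperature": 1,
    "maxCompletionTokens": 4096
  }
}
```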

@TaoWolf TaoWolf closed this as completed Dec 17, 2024
@xihuai18

Hello, I have looked into it; please note the following points: 1. The o1 APIs currently do not support the 'system' message field; move the 'system' content into an 'assistant' or 'user' message. 2. The o1 APIs only accept 'temperature' with the fixed value 1. 3. 'max_tokens' is not supported; only the 'max_completion_tokens' field is accepted.

At a glance, these are the three differences from the earlier gpt-4o model; changing these three places should make it work. Please also check the official requirements and verify what changes the backend requests need when users use o1 models.

Streaming ('stream') is not supported either.
