---
irc:
  # The hostname or IP address of the IRC server.
  host: irc.rwx.im
  # The port number of the IRC server.
  port: 6697
  # Use a secure connection.
  tls: true
  # The nickname used by the bot.
  nick: chud
  # The username used by the bot.
  username: chad
  # The version response to CTCP VERSION requests.
  version: "github.com/rwx-labs/chadgpt"
  # List of channels to join when initially connected.
  channels:
    - "#chad"
  # List of nicknames to ignore messages from.
  ignored_nicks: []
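  # For illustration only (the nick below is hypothetical): to ignore another
  # bot's messages you could list it here, e.g.
  #
  #   ignored_nicks:
  #     - "otherbot"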
  # Whether to automatically switch to the configured nickname if it wasn't
  # available during registration and the bot later sees it become available.
  keepnick: true
# Configuration for the GPT completion requests.
# Reference: https://beta.openai.com/docs/api-reference/completions
openai:
  # The identifier of the model to use for completions.
  model: gpt-3.5-turbo
  # What sampling temperature to use, between 0 and 2. Higher values like 0.8
  # will make the output more random, while lower values like 0.2 will make it
  # more focused and deterministic.
  # temperature: 1
  # The maximum number of tokens to generate in the chat completion.
  max_tokens: 90
  # An alternative to sampling with temperature, called nucleus sampling, where
  # the model considers the results of the tokens with top_p probability mass.
  # So 0.1 means only the tokens comprising the top 10% probability mass are
  # considered.
  #
  # It's recommended to use this or `temperature`, but not both.
  # top_p: 1
  # Number between -2.0 and 2.0. Positive values penalize new tokens based on
  # whether they appear in the text so far, increasing the model's likelihood
  # to talk about new topics.
  # presence_penalty: 0.0
  # Number between -2.0 and 2.0. Positive values penalize new tokens based on
  # their existing frequency in the text so far, decreasing the model's
  # likelihood to repeat the same line verbatim.
  # frequency_penalty: 0.5
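  # For illustration only (values chosen per the notes above, not part of the
  # project's defaults): for more focused and deterministic replies you could
  # uncomment one of the sampling options, e.g.
  #
  #   temperature: 0.2
  #
  # or, alternatively (but not together with `temperature`):
  #
  #   top_p: 0.1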
  # List of messages used for instructing GPT.
  #
  # Each message has a `content` field, which is templated with Mustache.
  #
  # Available variables:
  # `message` - The message a user sent to the bot (not including the bot nick).
  # `rawMessage` - The message a user sent to the bot, including the bot nick.
  # `channel` - The name of the channel the message was sent in.
  # `nick` - The nickname of the sender.
  #
  # See the illustrative example after the message list below.
  messages:
    - role: system
      content: "You are ChatGPT, a large language model trained by OpenAI. Answer as concisely as possible."
    - role: user
      content: "{{ message }}"
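  # For illustration only (nick, channel, and text are hypothetical, and the
  # "chud: <text>" addressing format is an assumption): if user "alice" writes
  # "chud: hello there" in "#chad", the templated user message becomes:
  #
  #   - role: user
  #     content: "hello there"
  #
  # with `{{ rawMessage }}` rendering to "chud: hello there", `{{ channel }}`
  # to "#chad", and `{{ nick }}` to "alice".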