Releases: RAHB-REALTORS-Association/chat2gpt
v2.2.0
Introducing the API_URL variable! 🚀
- Now route chatbot requests to custom API endpoints.
- Just set the API_URL environment variable and you're good to go.
Upgrade today and enjoy the added flexibility!
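To give a feel for how this can be used, here is a minimal sketch of reading the variable and pointing the client at a custom endpoint. It assumes the pre-1.0 `openai` Python SDK; the exact handling in the bot may differ.

```python
import os

import openai  # assumes the pre-1.0 openai Python SDK

# Illustrative sketch: if API_URL is set, point the client at that endpoint;
# otherwise the default OpenAI base URL is used.
api_url = os.environ.get("API_URL")
if api_url:
    openai.api_base = api_url
```

This lets the same deployment talk to a proxy or an OpenAI-compatible self-hosted endpoint without any code changes.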
Changes
- API_URL variable added by @justinh-rahb in #41
Full Changelog: v2.1.2...v2.2.0
v2.1.2
v2.1.1
v2.1.0
🚀 Welcome to Version 2.1! 🚀
We're ecstatic to present version 2.1. With this update, we're ushering in a series of important enhancements that will elevate your chatbot experience!
What's New
- Content Moderation: Keeping our platform safe and user-friendly is a top priority. With the new content moderation feature, inappropriate messages will be flagged, ensuring a wholesome experience for all.
- Max Token Limit: We're optimizing interactions by introducing a max token output limit, making sure responses are concise and relevant.
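Roughly, the two features fit together like this. The sketch below assumes OpenAI's moderation endpoint and the pre-1.0 `openai` SDK; the model name, reply wording, and token cap are illustrative, not the bot's actual settings.

```python
import openai  # assumes the pre-1.0 openai Python SDK

MAX_OUTPUT_TOKENS = 512  # illustrative cap, not the bot's actual setting

def is_flagged(text: str) -> bool:
    # Run the incoming message through OpenAI's moderation endpoint.
    result = openai.Moderation.create(input=text)
    return result["results"][0]["flagged"]

def reply(prompt: str) -> str:
    # Refuse flagged messages, and cap the size of generated replies.
    if is_flagged(prompt):
        return "Sorry, that message was flagged by content moderation."
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=MAX_OUTPUT_TOKENS,
    )
    return completion["choices"][0]["message"]["content"]
```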
Changes
- Added Moderation & Enhanced Output Limitations by @justinh-rahb in #39
Full Changelog: v2.0.0...v2.1.0
v2.0.0
🚀 Welcome to Version 2.0! 🚀
We are thrilled to introduce version 2.0, a significant update that brings the long-awaited integration with DALL-E for image generation!
What's New
🎨 DALL-E Integration: Users can now generate images directly in the chat! Use the /image command followed by your creative prompt to bring your ideas to life.
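Behind the /image command, the generation step looks roughly like this. The sketch assumes the pre-1.0 `openai` SDK's image endpoint; the image size and count are illustrative.

```python
import openai  # assumes the pre-1.0 openai Python SDK

def generate_image(prompt: str) -> str:
    # Illustrative sketch: create one image for the /image prompt and
    # return its URL so the bot can post it back into the chat.
    response = openai.Image.create(prompt=prompt, n=1, size="512x512")
    return response["data"][0]["url"]
```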
Changes
- GitHub Page by @justinh-rahb in #34
- Add DALL-E image generation by @justinh-rahb in #38
Full Changelog: v1.7.1...v2.0.0
v1.7.1
LLMs are lossy compression algorithms, never forget.
What's Changed
- Fix /reset regression by @justinh-rahb in #30
Full Changelog: v1.7.0...v1.7.1
v1.7.0
We've implemented token counting for each message the bot processes. This prevents the bot from sending messages that are too large and could overload the system. If a message exceeds the token limit, the bot responds with an error message asking the user to try a shorter one.
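A minimal sketch of the idea, assuming tiktoken is used for counting; the limit, model name, and error wording below are illustrative, not the bot's actual values.

```python
import tiktoken

TOKEN_LIMIT = 4096  # illustrative limit, not the bot's actual setting

def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    # Count tokens with the same encoding the target model uses.
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

def check_length(system_prompt: str, user_message: str) -> str | None:
    # Count both the system prompt and the user message, and refuse the
    # request up front if the total would exceed the limit.
    total = count_tokens(system_prompt) + count_tokens(user_message)
    if total > TOKEN_LIMIT:
        return "Sorry, that message is too long. Please try a shorter one."
    return None
```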
What's Changed
- Implement token count limit by @justinh-rahb in #24
- Fix tokenizer encoding by @justinh-rahb in #25
- Fix tokenizer function by @justinh-rahb in #26
- Count system and user prompt by @justinh-rahb in #27
- Fix tokenizer (again) by @justinh-rahb in #28
- Fix argument count by @justinh-rahb in #29
Full Changelog: v1.6.1...v1.7.0
v1.6.1
The /reset command now sends a canned response instead of generating a reply.
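In other words, something along these lines; this is an illustrative sketch, not the actual handler, and the reply wording is made up.

```python
RESET_REPLY = "Conversation history cleared."  # illustrative wording

def handle_reset(user_id: str, sessions: dict) -> str:
    # Drop any stored conversation state and return the canned reply
    # instead of asking the model to generate one.
    sessions.pop(user_id, None)
    return RESET_REPLY
```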
What's Changed
- Update main.py by @justinh-rahb in #20
Full Changelog: v1.6.0...v1.6.1
v1.6.0
v1.5.0
What's Changed
- Repo infrastructure by @justinh-rahb in #12
- Update README.md by @justinh-rahb in #13
- Update gcp-deploy.yml by @justinh-rahb in #14
- Update gcp-deploy.yml by @justinh-rahb in #15
- Update repo meta by @justinh-rahb in #16
- Integrate simpleaichat for session-based chat functionality by @justinh-rahb in #17
- Enhance User Session Management with TTL Functionality by @justinh-rahb in #18
Full Changelog: v1.0.0...v1.2.0
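For the TTL-based session management added in #18, here is a minimal sketch of the idea. The names and TTL value are illustrative, and a generic factory stands in for simpleaichat's actual session objects.

```python
import time
from typing import Any, Callable

SESSION_TTL_SECONDS = 3600  # illustrative TTL, not the bot's actual setting

# Maps a user ID to (creation time, session object).
_sessions: dict[str, tuple[float, Any]] = {}

def get_session(user_id: str, factory: Callable[[], Any]) -> Any:
    # Reuse the user's existing session until its TTL expires, then build a
    # fresh one so stale conversations are dropped automatically.
    now = time.time()
    created, session = _sessions.get(user_id, (0.0, None))
    if session is None or now - created > SESSION_TTL_SECONDS:
        session = factory()
        _sessions[user_id] = (now, session)
    return session
```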