Update changelog
countzero committed Jul 6, 2023
1 parent 272dab5 commit 3c70add
Showing 2 changed files with 25 additions and 0 deletions.
9 changes: 9 additions & 0 deletions CHANGELOG.md
@@ -4,6 +4,15 @@
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [1.2.0] - 2023-07-06

### Added
- Add server example to the build
- Add documentation on how to use the web interface

### Fixed
- Fix automatic update of the submodules

## [1.1.0] - 2023-07-03

### Added
16 changes: 16 additions & 0 deletions README.md
@@ -103,6 +103,22 @@
You can now chat with the model:
--interactive
```

### Chat via web interface

You can start llama.cpp as a web server:

```PowerShell
./vendor/llama.cpp/build/bin/Release/server `
--model "./vendor/llama.cpp/models/open-llama-7B-open-instruct.ggmlv3.q4_K_M.bin" `
--ctx-size 2048 `
--threads 16 `
--n-gpu-layers 32
```

You can then access the llama.cpp web interface at:

* http://localhost:8080/
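Besides the browser UI, the running server also accepts HTTP requests directly. A minimal sketch of querying it from PowerShell, assuming the server's `/completion` endpoint with its `prompt` and `n_predict` parameters (these may differ between llama.cpp versions):

```PowerShell
# Build the JSON request body for the completion endpoint.
$body = @{
    prompt    = "Building a website can be done in 10 simple steps:"
    n_predict = 128
} | ConvertTo-Json

# Send the request to the locally running server.
$response = Invoke-RestMethod `
    -Uri "http://localhost:8080/completion" `
    -Method Post `
    -ContentType "application/json" `
    -Body $body

# The generated text is returned in the "content" field.
$response.content
```

This requires the server started above to be running and will block until the model has generated the requested tokens.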

### Measure model perplexity

Execute the following to measure the perplexity of the GGML-formatted model:
