# OpenAI Function Calling (#42)

## ♻️ Current situation & Problem
As of now, the `SpeziLLMOpenAI` target doesn't provide an option to use
the [OpenAI Function Calling
mechanism](https://platform.openai.com/docs/guides/function-calling).
This functionality is crucial to advanced LLM use cases, such as
`HealthGPT` or `LLMonFHIR`.

Related to #10


## ⚙️ Release Notes 
- `SpeziLLMOpenAI` now supports the OpenAI function calling mechanism via a declarative Domain Specific Language, enabling straightforward usage of function calling within Spezi-based applications (see the sketch below).
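For illustration, here is a minimal sketch of how a function might be declared and registered with this DSL. It builds on the `LLMFunction`, `LLMFunction/Parameter`, and `LLMOpenAI` types introduced in this PR, but the exact member names (`description`, `execute()`, `modelType`) and the `WeatherFunction` type are illustrative assumptions, not the definitive API:

```swift
import SpeziLLMOpenAI

// Hypothetical example function; the exact `LLMFunction` protocol surface is an assumption.
struct WeatherFunction: LLMFunction {
    static let name = "get_weather"     // name under which the LLM requests the call
    static let description = "Returns the current weather for a given city."

    // `@Parameter` stands for the `LLMFunction/Parameter` property wrapper from this PR;
    // its value is injected from the decoded OpenAI function call arguments.
    @Parameter(description: "The city to look up.")
    var city: String

    func execute() async throws -> String? {
        "It is sunny in \(city)."       // handed back to the LLM as the function response
    }
}

// Functions are aggregated via the result builder DSL (`LLMFunctionBuilder`, see below).
let llm = LLMOpenAI(parameters: .init(modelType: .gpt4)) {
    WeatherFunction()
}
```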


## 📚 Documentation
Inline DocC documentation as well as dedicated DocC articles were written; other documentation was adjusted where necessary.


## ✅ Testing
Unit tests properly cover the injection logic of parameters into
`LLMFunction`s as well as the validation of the schema synthesis.
There are no automated UI tests, as these would exercise the live OpenAI API and
therefore require valid API tokens and OpenAI account balance. However, the
feature was extensively tested manually via the UI test application.
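As a rough sketch of what such a unit test could look like — `injectParameters(from:)` is a hypothetical name for the internal injection entry point, and `WeatherFunction` is the hypothetical example function from the release notes above:

```swift
import XCTest
@testable import SpeziLLMOpenAI

final class LLMFunctionParameterInjectionTests: XCTestCase {
    func testParameterInjection() throws {
        let function = WeatherFunction()
        let arguments = Data(#"{"city": "Stanford"}"#.utf8)

        // Hypothetical internal entry point that decodes the LLM-provided JSON
        // arguments into the function's `@Parameter` property wrappers.
        try function.injectParameters(from: arguments)

        XCTAssertEqual(function.city, "Stanford")
    }
}
```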

## 📝 Code of Conduct & Contributing Guidelines 

By creating this pull request, you agree to follow our [Code
of
Conduct](https://github.com/StanfordSpezi/.github/blob/main/CODE_OF_CONDUCT.md)
and [Contributing
Guidelines](https://github.com/StanfordSpezi/.github/blob/main/CONTRIBUTING.md):
- [x] I agree to follow the [Code of
Conduct](https://github.com/StanfordSpezi/.github/blob/main/CODE_OF_CONDUCT.md)
and [Contributing
Guidelines](https://github.com/StanfordSpezi/.github/blob/main/CONTRIBUTING.md).
philippzagar authored Jan 22, 2024
1 parent c3b31a3 commit 24d6c19
Showing 52 changed files with 2,811 additions and 119 deletions.
20 changes: 18 additions & 2 deletions .github/workflows/build-and-test.yml
```diff
@@ -19,18 +19,34 @@ jobs:
   buildandtest:
     name: Build and Test Swift Package
     uses: StanfordSpezi/.github/.github/workflows/xcodebuild-or-fastlane.yml@v2
+    strategy:
+      matrix:
+        include:
+          - buildConfig: Debug
+            artifactname: SpeziLLM-Package.xcresult
+          - buildConfig: Release
+            artifactname: SpeziLLM-Package-Release.xcresult
     with:
-      artifactname: SpeziLLM-Package.xcresult
+      artifactname: ${{ matrix.artifactname }}
       runsonlabels: '["macOS", "self-hosted"]'
       scheme: SpeziLLM-Package
+      buildConfig: ${{ matrix.buildConfig }}
   buildandtestuitests:
     name: Build and Test UI Tests
     uses: StanfordSpezi/.github/.github/workflows/xcodebuild-or-fastlane.yml@v2
+    strategy:
+      matrix:
+        include:
+          - buildConfig: Debug
+            artifactname: TestApp.xcresult
+          - buildConfig: Release
+            artifactname: TestApp-Release.xcresult
     with:
-      artifactname: TestApp.xcresult
+      artifactname: ${{ matrix.artifactname }}
       runsonlabels: '["macOS", "self-hosted"]'
       path: 'Tests/UITests'
       scheme: TestApp
+      buildConfig: ${{ matrix.buildConfig }}
   uploadcoveragereport:
     name: Upload Coverage Report
     needs: [buildandtest, buildandtestuitests]
```
14 changes: 7 additions & 7 deletions Package.swift
```diff
@@ -24,14 +24,14 @@ let package = Package(
         .library(name: "SpeziLLMOpenAI", targets: ["SpeziLLMOpenAI"])
     ],
     dependencies: [
-        .package(url: "https://github.com/MacPaw/OpenAI", .upToNextMinor(from: "0.2.4")),
+        .package(url: "https://github.com/MacPaw/OpenAI", .upToNextMinor(from: "0.2.5")),
         .package(url: "https://github.com/StanfordBDHG/llama.cpp", .upToNextMinor(from: "0.1.6")),
-        .package(url: "https://github.com/StanfordSpezi/Spezi", .upToNextMinor(from: "0.8.2")),
-        .package(url: "https://github.com/StanfordSpezi/SpeziStorage", .upToNextMinor(from: "0.5.0")),
-        .package(url: "https://github.com/StanfordSpezi/SpeziOnboarding", .upToNextMinor(from: "0.7.0")),
-        .package(url: "https://github.com/StanfordSpezi/SpeziSpeech", .upToNextMinor(from: "0.1.1")),
-        .package(url: "https://github.com/StanfordSpezi/SpeziChat", .upToNextMinor(from: "0.1.2")),
-        .package(url: "https://github.com/StanfordSpezi/SpeziViews", .upToNextMinor(from: "0.6.3"))
+        .package(url: "https://github.com/StanfordSpezi/Spezi", from: "1.1.0"),
+        .package(url: "https://github.com/StanfordSpezi/SpeziStorage", from: "1.0.0"),
+        .package(url: "https://github.com/StanfordSpezi/SpeziOnboarding", from: "1.0.0"),
+        .package(url: "https://github.com/StanfordSpezi/SpeziSpeech", from: "1.0.0"),
+        .package(url: "https://github.com/StanfordSpezi/SpeziChat", .upToNextMinor(from: "0.1.4")),
+        .package(url: "https://github.com/StanfordSpezi/SpeziViews", from: "1.0.0")
     ],
     targets: [
         .target(
```
1 change: 1 addition & 0 deletions README.md
```diff
@@ -113,6 +113,7 @@ struct LocalLLMChatView: View {
 
 A module that allows you to interact with GPT-based Large Language Models (LLMs) from OpenAI within your Spezi application.
 `SpeziLLMOpenAI` provides a pure Swift-based API for interacting with the OpenAI GPT API, building on top of the infrastructure of the [SpeziLLM target](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm).
+In addition, `SpeziLLMOpenAI` provides developers with a declarative Domain Specific Language to utilize the OpenAI function calling mechanism. This enables structured, bidirectional, and reliable communication between the OpenAI LLMs and external tools, such as the Spezi ecosystem.
 
 #### Setup
 
```
17 changes: 14 additions & 3 deletions Sources/SpeziLLM/Helpers/Chat+Append.swift
```diff
@@ -15,12 +15,13 @@ extension Chat {
     ///
     /// - Parameters:
     ///    - output: The `ChatEntity/Role/assistant` output `String` (part) that should be appended.
+    ///    - overwrite: Indicates if the already present content of the assistant message should be overwritten.
     @MainActor
-    public mutating func append(assistantOutput output: String) {
+    public mutating func append(assistantOutput output: String, overwrite: Bool = false) {
         if self.last?.role == .assistant {
             self[self.count - 1] = .init(
-                role: self.last?.role ?? .assistant,
-                content: (self.last?.content ?? "") + output
+                role: .assistant,
+                content: overwrite ? output : ((self.last?.content ?? "") + output)
             )
         } else {
             self.append(.init(role: .assistant, content: output))
@@ -44,4 +45,14 @@ extension Chat {
     public mutating func append(systemMessage systemPrompt: String) {
         self.insert(.init(role: .system, content: systemPrompt), at: 0)
     }
+
+    /// Append a `ChatEntity/Role/function` response from a function call to the `Chat`.
+    ///
+    /// - Parameters:
+    ///    - functionName: The name of the `ChatEntity/Role/function` that is called by the LLM.
+    ///    - functionResponse: The response `String` of the `ChatEntity/Role/function` that is called by the LLM.
+    @MainActor
+    public mutating func append(forFunction functionName: String, response functionResponse: String) {
+        self.append(.init(role: .function(name: functionName), content: functionResponse))
+    }
 }
```
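To make the intended usage of the two helpers concrete, a short sketch (assuming execution on the main actor, as both methods are `@MainActor`; the message contents are made up):

```swift
var chat: Chat = [.init(role: .user, content: "What's the weather in Stanford?")]

// Streamed assistant tokens extend the trailing assistant message instead of adding new entries.
chat.append(assistantOutput: "It is ")
chat.append(assistantOutput: "sunny.")   // chat now ends with one assistant message: "It is sunny."

// `overwrite: true` replaces the current assistant content, e.g., to discard a placeholder.
chat.append(assistantOutput: "It is sunny in Stanford.", overwrite: true)

// A function call result enters the chat as a `.function` message tagged with the function's name.
chat.append(forFunction: "get_weather", response: #"{"condition": "sunny"}"#)
```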
3 changes: 2 additions & 1 deletion Sources/SpeziLLM/LLMRunner.swift
```diff
@@ -189,8 +189,9 @@ public actor LLMRunner: Module, DefaultInitializable, EnvironmentAccessible {
 
     /// Upon deinit, cancel all ``LLMRunnerInferenceTask``'s.
     deinit {
+        let runnerTasks = runnerTasks
         Task {
-            for runnerTask in await runnerTasks.values {
+            for runnerTask in runnerTasks.values {
                 await runnerTask.task?.cancel()
             }
         }
```
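For readability, the resulting `deinit` after this change reads as follows. Copying the actor-isolated dictionary into a local constant lets the unstructured `Task` iterate it without `await`ing back into the actor that is being deallocated:

```swift
deinit {
    let runnerTasks = runnerTasks   // capture the task dictionary by value before `self` goes away
    Task {
        for runnerTask in runnerTasks.values {
            await runnerTask.task?.cancel()
        }
    }
}
```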
6 changes: 5 additions & 1 deletion Sources/SpeziLLM/Views/LLMChatView.swift
```diff
@@ -45,7 +45,11 @@ public struct LLMChatView: View {
     }
 
     public var body: some View {
-        ChatView($model.context, disableInput: inputDisabled)
+        ChatView(
+            $model.context,
+            disableInput: inputDisabled,
+            exportFormat: .pdf
+        )
             .onChange(of: model.context) { oldValue, newValue in
                 /// Once the user enters a message in the chat, send a request to the local LLM.
                 if oldValue.count != newValue.count,
```
30 changes: 30 additions & 0 deletions Sources/SpeziLLMLocal/Helpers/String+Cxx.swift
@@ -0,0 +1,30 @@
```swift
//
// This source file is part of the Stanford Spezi open source project
//
// SPDX-FileCopyrightText: 2022 Stanford University and the project authors (see CONTRIBUTORS.md)
//
// SPDX-License-Identifier: MIT
//

import Foundation


extension String {
    /// Initializes a Swift `String` from a C++ `string`.
    ///
    /// - Parameters:
    ///    - cxxString: The given C++ `string`
    ///
    /// In the Release build mode, the Swift compiler is unable to choose the correct `String` initializer from the Swift stdlib.
    /// Therefore, SpeziLLM provides a manual `String` extension that mirrors the C++ interop implementation within the Swift stdlib: https://github.com/apple/swift/blob/cf2a338afca54a787d59b83db6238b1568215b94/stdlib/public/Cxx/std/String.swift#L231-L239
    init(_ cxxString: std.string) {
        let buffer = UnsafeBufferPointer<CChar>(
            start: cxxString.__c_strUnsafe(),
            count: cxxString.size()
        )
        self = buffer.withMemoryRebound(to: UInt8.self) {
            String(decoding: $0, as: UTF8.self)
        }
        withExtendedLifetime(cxxString) {}
    }
}
```
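A minimal usage sketch, assuming C++ interoperability is enabled for the target; within the module, this manual initializer is chosen instead of the stdlib overload that the doc comment above describes as problematic in Release builds:

```swift
let cxxString = std.string("Hello from llama.cpp")   // `std.string` is the imported C++ type
let swiftString = String(cxxString)                  // uses the manual initializer above
print(swiftString)                                   // "Hello from llama.cpp"
```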
5 changes: 4 additions & 1 deletion Sources/SpeziLLMLocal/LLMLlama+Generation.swift
```diff
@@ -48,7 +48,7 @@ extension LLMLlama {
             return
         }
 
-        // Tokenizes the entire context of the `LLM?
+        // Tokenizes the entire context of the `LLM`
         guard let tokens = try? await tokenize() else {
             Self.logger.error("""
             SpeziLLMLocal: Tokenization failed as illegal context exists.
@@ -85,6 +85,9 @@
         batch.logits[Int(batch.n_tokens) - 1] = 1
 
         if llama_decode(self.modelContext, batch) != 0 {
+            Self.logger.error("""
+            SpeziLLMLocal: Initial prompt decoding has failed!
+            """)
             await finishGenerationWithError(LLMLlamaError.generationError, on: continuation)
             return
         }
```
```diff
@@ -40,7 +40,7 @@ public struct LLMOpenAIModelParameters: Sendable {
     ///   - responseFormat: Format for model responses.
     ///   - temperature: Sampling temperature (0 to 2); higher values (e.g., 0.8) increase randomness, lower values (e.g., 0.2) enhance focus. Adjust this or topP, not both.
     ///   - topP: Nucleus sampling threshold; considers tokens with top_p probability mass. Alternative to temperature sampling.
-    ///   - completionsPerOutput: Number of generated chat completions per input.
+    ///   - completionsPerOutput: Number of generated chat completions (choices) per input; defaults to 1 choice.
     ///   - stopSequence: Sequences (up to 4) where generation stops; output doesn't include these sequences.
     ///   - maxOutputLength: Maximum token count for each completion.
     ///   - presencePenalty: Adjusts new topic exploration (-2.0 to 2.0); higher values encourage novelty.
```
@@ -0,0 +1,56 @@
```swift
//
// This source file is part of the Stanford Spezi open-source project
//
// SPDX-FileCopyrightText: 2022 Stanford University and the project authors (see CONTRIBUTORS.md)
//
// SPDX-License-Identifier: MIT
//

import Foundation


/// A result builder used to aggregate multiple ``LLMFunction``s within the ``LLMOpenAI``.
@resultBuilder
public enum LLMFunctionBuilder {
    /// If declared, provides contextual type information for statement expressions to translate them into partial results.
    public static func buildExpression<L: LLMFunction>(_ expression: L) -> [L] {
        [expression]
    }

    /// Required by every result builder to build combined results from statement blocks.
    public static func buildBlock(_ children: [any LLMFunction]...) -> [any LLMFunction] {
        children.flatMap { $0 }
    }

    /// Enables support for `if` statements that do not have an `else`.
    public static func buildOptional(_ component: [any LLMFunction]?) -> [any LLMFunction] {
        // swiftlint:disable:previous discouraged_optional_collection
        // The optional collection is a requirement defined by @resultBuilder, we can not use a non-optional collection here.
        component ?? []
    }

    /// With buildEither(second:), enables support for 'if-else' and 'switch' statements by folding conditional results into a single result.
    public static func buildEither(first: [any LLMFunction]) -> [any LLMFunction] {
        first
    }

    /// With buildEither(first:), enables support for 'if-else' and 'switch' statements by folding conditional results into a single result.
    public static func buildEither(second: [any LLMFunction]) -> [any LLMFunction] {
        second
    }

    /// Enables support for 'for..in' loops by combining the results of all iterations into a single result.
    public static func buildArray(_ components: [[any LLMFunction]]) -> [any LLMFunction] {
        components.flatMap { $0 }
    }

    /// If declared, this will be called on the partial result of an 'if #available' block to allow the result builder to erase type information.
    public static func buildLimitedAvailability(_ component: [any LLMFunction]) -> [any LLMFunction] {
        component
    }

    /// If declared, this will be called on the partial result from the outermost block statement to produce the final returned result.
    public static func buildFinalResult(_ component: [any LLMFunction]) -> _LLMFunctionCollection {
        _LLMFunctionCollection(functions: component)
    }
}
```
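Because the builder implements `buildOptional`, `buildEither`, and `buildArray`, the trailing closure of ``LLMOpenAI`` can contain plain Swift control flow. A sketch — the feature flag, the `fhirResources` collection, and the function types are hypothetical:

```swift
let llm = LLMOpenAI(parameters: .init(modelType: .gpt4)) {
    WeatherFunction()                   // buildExpression / buildBlock

    if enableHealthFunctions {          // buildOptional
        HealthDataFunction()
    }

    for resource in fhirResources {     // buildArray
        FHIRResourceFunction(resource: resource)
    }
}
```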
@@ -0,0 +1,25 @@
```swift
//
// This source file is part of the Stanford Spezi open source project
//
// SPDX-FileCopyrightText: 2022 Stanford University and the project authors (see CONTRIBUTORS.md)
//
// SPDX-License-Identifier: MIT
//

import Foundation


/// In case of a `DecodingError` when decoding the called function's parameters into the ``LLMFunction/Parameter``s, indicates where the `DecodingError` occurred.
struct LLMFunctionParameterCodingKey: CodingKey {
    let stringValue: String
    var intValue: Int?


    init(stringValue: String) {
        self.stringValue = stringValue
    }

    init?(intValue: Int) {
        self.stringValue = String(intValue)
    }
}
```
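A sketch of the error path this key type enables — surfacing which function call parameter failed to decode; the surrounding decode logic is assumed context:

```swift
throw DecodingError.keyNotFound(
    LLMFunctionParameterCodingKey(stringValue: "city"),
    .init(
        codingPath: [],
        debugDescription: "The LLM did not supply the required function call parameter `city`."
    )
)
```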
@@ -0,0 +1,85 @@
```swift
//
// This source file is part of the Stanford Spezi open source project
//
// SPDX-FileCopyrightText: 2022 Stanford University and the project authors (see CONTRIBUTORS.md)
//
// SPDX-License-Identifier: MIT
//

import Foundation


/// Serves as an intermediary representation of the requested function call parameters in order to decode the parameters into the ``LLMFunction/Parameter``s.
enum LLMFunctionParameterIntermediary: Codable {
    case null
    case int(Int)
    case bool(Bool)
    case string(String)
    case double(Double)
    case array(Array<LLMFunctionParameterIntermediary>)
    case dictionary([String: LLMFunctionParameterIntermediary])


    /// Provides a representation of the received JSON where each first-level parameter (the key) maps to the respective nested JSON `Data`.
    var topLayerJSONRepresentation: [String: Data] {
        get throws {
            guard case let .dictionary(dictionary) = self else {
                return [:]
            }

            return try dictionary.mapValues {
                try JSONEncoder().encode($0)
            }
        }
    }


    init(from decoder: Decoder) throws {
        let container = try decoder.singleValueContainer()

        if container.decodeNil() {
            self = .null
        } else if let bool = try? container.decode(Bool.self) {
            self = .bool(bool)
        } else if let int = try? container.decode(Int.self) {
            self = .int(int)
        } else if let double = try? container.decode(Double.self) {
            self = .double(double)
        } else if let string = try? container.decode(String.self) {
            self = .string(string)
        } else if let array = try? container.decode([LLMFunctionParameterIntermediary].self) {
            self = .array(array)
        } else if let dictionary = try? container.decode([String: LLMFunctionParameterIntermediary].self) {
            self = .dictionary(dictionary)
        } else {
            throw DecodingError.dataCorrupted(
                DecodingError.Context(
                    codingPath: decoder.codingPath,
                    debugDescription: "Encountered unexpected JSON values within LLM Function Calling Parameters"
                )
            )
        }
    }


    func encode(to encoder: Encoder) throws {
        var container = encoder.singleValueContainer()

        switch self {
        case .null:
            return
        case let .int(int):
            try container.encode(int)
        case let .bool(bool):
            try container.encode(bool)
        case let .string(string):
            try container.encode(string)
        case let .double(double):
            try container.encode(double)
        case let .array(array):
            try container.encode(array)
        case let .dictionary(dictionary):
            try container.encode(dictionary)
        }
    }
}
```
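A runnable sketch of the decode path, using the type exactly as declared above; the argument JSON is a made-up example:

```swift
let arguments = Data(#"{"city": "Stanford", "days": 3}"#.utf8)

// Decode the raw OpenAI function call arguments into the intermediary representation.
let intermediary = try JSONDecoder().decode(LLMFunctionParameterIntermediary.self, from: arguments)

// Each top-level parameter maps to its re-encoded JSON `Data`, which can then be
// decoded individually into the matching `LLMFunction/Parameter` type.
let topLayer = try intermediary.topLayerJSONRepresentation
let city = try JSONDecoder().decode(String.self, from: topLayer["city"] ?? Data())   // "Stanford"
let days = try JSONDecoder().decode(Int.self, from: topLayer["days"] ?? Data())     // 3
```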
@@ -0,0 +1,25 @@
```swift
//
// This source file is part of the Stanford Spezi open-source project
//
// SPDX-FileCopyrightText: 2022 Stanford University and the project authors (see CONTRIBUTORS.md)
//
// SPDX-License-Identifier: MIT
//

import Foundation
import SwiftUI


/// Defines a collection of ``SpeziLLMOpenAI`` ``LLMFunction``s.
///
/// You can not create a `_LLMFunctionCollection` yourself. Please use the ``LLMOpenAI`` that internally creates a `_LLMFunctionCollection` with the passed ``LLMFunction``s.
public struct _LLMFunctionCollection { // swiftlint:disable:this type_name
    var functions: [String: LLMFunction] = [:]


    init(functions: [any LLMFunction]) {
        for function in functions {
            self.functions[Swift.type(of: function).name] = function
        }
    }
}
```
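The name-keyed storage is what enables dispatching an incoming OpenAI function call to the matching ``LLMFunction``. A sketch — `calledName` comes from the model response, the error case is hypothetical, and `execute()` is the requirement assumed in the earlier DSL sketch:

```swift
guard let function = functionCollection.functions[calledName] else {
    throw LLMOpenAIError.invalidFunctionCallName(calledName)   // hypothetical error case
}

// After parameter injection (see the intermediary type above), run the function and
// hand its response back to the chat as a `.function` message.
let functionResponse = try await function.execute()
```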