The assistant should respond dynamically to my microphone (audio) input. However, even though I am sending my audio data to the API, the model doesn't seem to recognize or respond to my actual audio. Instead, it produces generic responses that ignore the content of the input audio.
Technical Setup:
- Using Node.js with the `realtime-api-beta` package.
- Audio input is captured from the microphone, converted to PCM16 format, and streamed to the API.
- The `appendInputAudio()` method is used to send audio chunks, followed by `createResponse()` to initiate a response when silence is detected (sketched below).
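For context, here is a trimmed-down sketch of the flow. The helper names (`onMicChunk`, `onSilenceDetected`) and the Float32-to-PCM16 conversion are simplified stand-ins for my actual capture code, and I'm assuming `turn_detection` should be `null` since I handle silence detection myself:

```js
import { RealtimeClient } from '@openai/realtime-api-beta';

const client = new RealtimeClient({ apiKey: process.env.OPENAI_API_KEY });

// Manual turn handling: no server-side VAD, since I detect silence locally.
// input_audio_transcription lets me inspect what the model "heard".
client.updateSession({
  turn_detection: null,
  input_audio_transcription: { model: 'whisper-1' },
});

client.on('conversation.updated', ({ item, delta }) => {
  if (delta?.audio) {
    // play back assistant audio chunks here
  }
});

await client.connect();

// Called for every chunk coming off the microphone.
// float32Chunk is a Float32Array in [-1, 1] from my capture layer (stand-in name).
function onMicChunk(float32Chunk) {
  const pcm16 = new Int16Array(float32Chunk.length);
  for (let i = 0; i < float32Chunk.length; i++) {
    const s = Math.max(-1, Math.min(1, float32Chunk[i]));
    pcm16[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  client.appendInputAudio(pcm16); // Int16Array (or ArrayBuffer), pcm16 at 24 kHz
}

// Called by my (stand-in) silence detector once the user stops speaking.
function onSilenceDetected() {
  client.createResponse(); // ask the model to respond to the buffered audio
}
```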
Any insights on debugging this setup or ensuring the model correctly interprets the live audio input would be greatly appreciated!