LLM selection (depends on #2) #3
Overview
This PR adds the ability to select different LLM models when generating playlists, letting users choose between the higher-quality GPT-4 and the faster, more affordable GPT-3.5 Turbo. It also fixes a bug with the min/max track count parameters.
This change also embeds metadata about the generation (prompt, model used, etc.) in the summary of the generated playlist, so that this data isn't lost over time.
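As a rough sketch of what the two changes above amount to, the request could carry the chosen model alongside the track-count bounds, and the prompt builder must actually use those bounds (the bug being fixed). The class and field names here are illustrative, not the PR's actual schema:

```python
from dataclasses import dataclass

# Hypothetical request shape; field names are illustrative only.
@dataclass
class PlaylistRequest:
    prompt: str
    model: str = "gpt-3.5-turbo"  # or "gpt-4" for higher quality
    min_tracks: int = 5
    max_tracks: int = 20

def build_llm_prompt(req: PlaylistRequest) -> str:
    # The bug fix: make sure the min/max track counts actually
    # reach the prompt sent to the selected model.
    return (
        f"Create a playlist of {req.min_tracks} to {req.max_tracks} "
        f"tracks matching: {req.prompt}"
    )
```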
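The metadata embedding could look something like the following minimal sketch; the helper name and summary layout are assumptions, not the PR's actual implementation:

```python
import json

def summary_with_metadata(description: str, prompt: str, model: str) -> str:
    # Hypothetical helper: append the generation metadata to the
    # playlist summary so the prompt and model travel with the
    # playlist itself rather than being lost after generation.
    metadata = {"prompt": prompt, "model": model}
    return f"{description}\n\nGenerated with: {json.dumps(metadata)}"
```

Storing the metadata as a JSON suffix keeps it both human-readable and machine-parseable later.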
Changes
Technical Details
- `app/services/llm_service.py`: properly use the min/max track parameters
- `app/models.py`: clarify model options in the field description

Motivation
GPT-4 requests can be expensive ($0.30+ per request), while GPT-3.5 Turbo is significantly more affordable (under $0.01 for the same request) and faster. This change gives users the flexibility to choose based on their quality needs and budget constraints.
Testing
Screenshot