
Interpret Epidemiological Data or Visualisations using LLMs
llm_interpret.Rd
This function interprets a given data frame or ggplot visualisation by sending it to a language model API via the elmer package. It supports multiple LLM providers, allowing users to specify the desired provider and model through environment variables.
Arguments
- input
A data frame or a ggplot object representing the data or visualisation to be interpreted.
- word_limit
Integer. The target length of the response, in words. Defaults to 100.
- prompt_extension
Character. Optional additional instructions to extend the standard prompt. Defaults to NULL.
Value
A character string containing the narrative or interpretation of the input object as generated by the LLM.
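A minimal usage sketch, assuming the package providing llm_interpret is loaded and the environment variables described under Details are already set; the data frame and plot below are illustrative only, not real surveillance data.

library(ggplot2)

# Illustrative case counts by week
cases <- data.frame(
  week  = 1:6,
  cases = c(12, 18, 25, 40, 33, 21)
)

# Interpret a data frame, asking for a short narrative
summary_text <- llm_interpret(cases, word_limit = 50)

# Interpret a ggplot visualisation, extending the standard prompt
p <- ggplot(cases, aes(x = week, y = cases)) +
  geom_line()

plot_text <- llm_interpret(
  p,
  word_limit       = 100,
  prompt_extension = "Focus on the trend over time."
)

cat(plot_text)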
Details
Supported LLM Providers and Models:
OpenAI: Utilizes OpenAI's models via chat_openai(). Requires setting the OPENAI_API_KEY environment variable. Applicable models include:
- "gpt-4o"
- "gpt-4o-mini"
- "o1-mini"
Google Gemini: Utilizes Google's Gemini models via chat_gemini(). Requires setting the GOOGLE_API_KEY environment variable. Applicable models include:
- "gemini-1.5-flash"
Anthropic Claude: Utilizes Anthropic's Claude models via chat_claude(). Requires setting the CLAUDE_API_KEY environment variable. Applicable models include:
- "claude-1"
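The package's actual implementation is not shown on this page; the following is a rough sketch of how provider dispatch could be wired up with elmer's chat_openai(), chat_gemini(), and chat_claude() constructors and the environment variables listed below. The helper name make_chat and the model/api_key argument names are assumptions for illustration.

# Sketch only: dispatch to an elmer chat constructor based on env vars.
make_chat <- function() {
  provider <- Sys.getenv("LLM_PROVIDER")
  api_key  <- Sys.getenv("LLM_API_KEY")
  model    <- Sys.getenv("LLM_MODEL")

  if (provider == "" || api_key == "" || model == "") {
    stop("LLM_PROVIDER, LLM_API_KEY and LLM_MODEL must all be set.")
  }

  switch(
    provider,
    openai = elmer::chat_openai(model = model, api_key = api_key),
    gemini = elmer::chat_gemini(model = model, api_key = api_key),
    claude = elmer::chat_claude(model = model, api_key = api_key),
    stop("Unsupported LLM provider: ", provider)   # default branch
  )
}

The resulting chat object would then be sent the assembled prompt (for example via its chat() method) and the text reply returned to the caller.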
Environment Variables:
- LLM_PROVIDER: Specifies the LLM provider ("openai", "gemini", or "claude").
- LLM_API_KEY: The API key corresponding to the chosen provider.
- LLM_MODEL: The model identifier to use.
Note: Ensure that the appropriate environment variables are set before invoking this function. The function will throw an error if the specified provider is unsupported or if required environment variables are missing.
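A minimal configuration sketch for an interactive session; the values are placeholders, and real keys are better kept out of scripts (for example in ~/.Renviron).

# Placeholder values; replace with your own provider, model, and key.
Sys.setenv(
  LLM_PROVIDER = "openai",
  LLM_MODEL    = "gpt-4o-mini",
  LLM_API_KEY  = "<your-api-key>"
)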