A Python package for interpreting user-submitted text about natural or scientific phenomena, extracting structured insights, and classifying events based on textual input.
phenomenon_interpreter is designed to analyze free-form descriptions of phenomena (e.g., solar storms, earthquakes, or other natural events) and generate structured summaries or classifications. It leverages large language models (LLMs) to extract domain-specific insights from unstructured text, enabling automated analysis without requiring multimedia processing.
Install the package via pip:
```bash
pip install phenomenon_interpreter
```

Basic usage:

```python
from phenomenon_interpreter import phenomenon_interpreter

user_input = "A massive solar storm caused radio blackouts in Australia today."
response = phenomenon_interpreter(user_input)
print(response)  # Structured output based on the input
```

You can replace the default ChatLLM7 with any LangChain-compatible LLM (e.g., OpenAI, Anthropic, Google Generative AI). Examples:
```python
from langchain_openai import ChatOpenAI
from phenomenon_interpreter import phenomenon_interpreter

llm = ChatOpenAI()
response = phenomenon_interpreter(user_input, llm=llm)
```

```python
from langchain_anthropic import ChatAnthropic
from phenomenon_interpreter import phenomenon_interpreter

# ChatAnthropic requires a model name; use any Anthropic chat model you have access to.
llm = ChatAnthropic(model="claude-3-5-sonnet-latest")
response = phenomenon_interpreter(user_input, llm=llm)
```

```python
from langchain_google_genai import ChatGoogleGenerativeAI
from phenomenon_interpreter import phenomenon_interpreter

# ChatGoogleGenerativeAI requires a model name; any Gemini chat model works here.
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
response = phenomenon_interpreter(user_input, llm=llm)
```

By default, the package uses ChatLLM7 with an API key fetched from the environment variable `LLM7_API_KEY`. You can:
- Set it via an environment variable:

  ```bash
  export LLM7_API_KEY="your_api_key_here"
  ```

- Pass it directly:

  ```python
  from phenomenon_interpreter import phenomenon_interpreter

  response = phenomenon_interpreter(user_input, api_key="your_api_key_here")
  ```
Get a free API key from LLM7.
| Parameter | Type | Description |
|---|---|---|
| `user_input` | `str` | The text describing the phenomenon to analyze. |
| `api_key` | `Optional[str]` | LLM7 API key (default: fetched from the `LLM7_API_KEY` environment variable). |
| `llm` | `Optional[BaseChatModel]` | Custom LLM instance (e.g., `ChatOpenAI`, `ChatAnthropic`). Default: `ChatLLM7`. |
The function returns a list of structured insights extracted from the input text, formatted to match predefined patterns (e.g., impact classification, event nature).
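The exact shape of each insight is determined by the package's predefined patterns, so treat the following as a minimal consumption sketch (it assumes each returned item can be printed directly):

```python
from phenomenon_interpreter import phenomenon_interpreter

# Hypothetical input; any free-form description of a phenomenon works.
insights = phenomenon_interpreter(
    "A magnitude 6.1 earthquake struck off the coast of Japan this morning."
)

# The call returns a list of structured insights; iterate and handle each one.
for insight in insights:
    print(insight)
```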
- LLM7 Free Tier: Sufficient for most use cases.
- Custom API Key: For higher rate limits, provide your own `api_key` or set `LLM7_API_KEY`.
MIT License.
For bugs or feature requests, open an issue on GitHub.
- Eugene Evstafev (GitHub)
- Email: hi@euegne.plus