App
create_openai_common_kwargs(llm_model_name)
Create common parameters for OpenAI's LLM.
These parameters include:
- model_name (str): model ID of the LLM.
- temperature (float): temperature to use when sampling.
- request_timeout (float): timeout in seconds before cancelling the request to the OpenAI API.
- max_tokens (int): maximum number of tokens to generate with the LLM.
- top_p (float): top-p value to use when sampling the LLM.
Some of these parameters are reused for other model providers.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
llm_model_name | `str` | model ID of the LLM. | required |
Returns:
Type | Description |
---|---|
dict | common parameters for the LLM: model name, temperature, request timeout, max tokens and top-p. |
Source code in projects/gptstonks_api/gptstonks/api/initialization/app.py
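A minimal sketch of how such a kwargs builder could look, reading the sampling defaults from env variables. The env variable names below are hypothetical placeholders; the actual implementation is in the source linked above.

```python
import os


def create_openai_common_kwargs_sketch(llm_model_name: str) -> dict:
    """Illustrative only: build the common LLM parameters from env vars."""
    return {
        "model_name": llm_model_name,
        # Env variable names below are hypothetical placeholders.
        "temperature": float(os.getenv("LLM_TEMPERATURE", "0.1")),
        "request_timeout": float(os.getenv("LLM_REQUEST_TIMEOUT", "60")),
        "max_tokens": int(os.getenv("LLM_MAX_TOKENS", "512")),
        "top_p": float(os.getenv("LLM_TOP_P", "1.0")),
    }
```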
download_vsi(vsi_path)
Download Vector Store Index (VSI) if necessary.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
vsi_path | `str` | Path to the VSI. If it doesn't already exist, it is downloaded from the Google Drive URI AUTOLLAMAINDEX_VSI_GDRIVE_URI. | required |
Source code in projects/gptstonks_api/gptstonks/api/initialization/app.py
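For example, a startup script might call it like this; the local path is illustrative, and the Google Drive URI comes from AUTOLLAMAINDEX_VSI_GDRIVE_URI as described above.

```python
from gptstonks.api.initialization.app import download_vsi

# Skips the download when the path already exists; otherwise fetches
# the index from the Google Drive URI in AUTOLLAMAINDEX_VSI_GDRIVE_URI.
download_vsi(vsi_path="data/vsi")  # path is illustrative
```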
init_agent_tools(embed_model, llm, use_openai_agent=False)
Initialize the agent tools.
By default these tools are:
- World Knowledge: a multi-step reasoning tool that answers complex queries by searching the Internet.
- OpenBB: a custom tool to retrieve financial data using the OpenBB Platform.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
embed_model | `str \| OpenAIEmbedding` | embedding model to use for the RAG. It should be the same as in the Vector Store Index. | required |
llm | `langchain_core.language_models.llms.LLM` | LLM to use inside the tools that need one. | required |
Returns:
Type | Description |
---|---|
list[Tool] | the agent tools: by default, World Knowledge and OpenBB. |
Source code in projects/gptstonks_api/gptstonks/api/initialization/app.py
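A minimal usage sketch combining it with the loaders documented below; the import path is inferred from the source path shown above.

```python
from gptstonks.api.initialization.app import (
    init_agent_tools,
    load_embed_model,
    load_llm_model,
)

embed_model = load_embed_model()  # str or OpenAIEmbedding (see below)
llm = load_llm_model()            # Langchain LLM selected via env vars
tools = init_agent_tools(embed_model=embed_model, llm=llm)
# `tools` now holds the World Knowledge and OpenBB tools by default.
```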
init_api(app_data)
Initial function called during the application startup.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
app_data | `AppData` | global application data. | required |
Source code in projects/gptstonks_api/gptstonks/api/initialization/app.py
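A rough sketch of wiring it into a FastAPI startup hook, assuming a synchronous init_api; the AppData import path is an assumption for illustration.

```python
from fastapi import FastAPI

from gptstonks.api.initialization.app import init_api
from gptstonks.api.models import AppData  # hypothetical import path

app = FastAPI()
app_data = AppData()  # global application data


@app.on_event("startup")
def startup() -> None:
    # Populates app_data (LLM, agent tools, vector index, ...) before serving.
    init_api(app_data)
```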
init_openbb_async_tool(auto_rag, node_postprocessors, name='OpenBB', return_direct=True)
Initialize OpenBB asynchronous agent tool.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
auto_rag | `AutoRag` | contains the necessary objects for performing RAG (e.g., vector store, embedding model). | required |
node_postprocessors | `list[BaseNodePostprocessor]` | list of LlamaIndex's postprocessors to apply to the retrieved nodes. | required |
name | `str` | name of the tool. | 'OpenBB' |
return_direct | `bool` | whether or not to return directly from this tool, without going through the agent again. | True |
Returns:
Type | Description |
---|---|
Tool | the OpenBB asynchronous agent tool. |
Source code in projects/gptstonks_api/gptstonks/api/initialization/app.py
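A usage sketch with placeholder inputs; in the real app, `auto_rag` and the postprocessors come from the RAG initialization.

```python
from gptstonks.api.initialization.app import init_openbb_async_tool

auto_rag = ...       # placeholder for an AutoRag instance (vector store, embed model, ...)
postprocessors = []  # list of LlamaIndex BaseNodePostprocessor; empty list applies none

openbb_tool = init_openbb_async_tool(
    auto_rag=auto_rag,
    node_postprocessors=postprocessors,
    name="OpenBB",
    return_direct=True,  # the tool's answer is returned as-is, without another agent pass
)
```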
init_world_knowledge_tool(llamaindex_llm, name='world_knowledge', use_openai_agent=False, return_direct=True, verbose=False)
Initialize World Knowledge tool.
The World Knowledge tool can solve complex queries by applying multi-step reasoning. It has several tools available, which include:
- Search: to look up information on the Internet.
- Wikipedia: to look up information about places, people, etc.
- Request: to look up specific webpages on the Internet.
In each step, the LLM can select any tool (or its own knowledge) to solve the target query. The final response is generated by combining the responses to each subquery.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
llamaindex_llm | `llama_index.core.llms.llm.LLM` | LLM that will decompose the main query and answer the subqueries. | required |
name | `str` | name of the tool. | 'world_knowledge' |
return_direct | `bool` | whether or not the tool should return when the final answer is given. | True |
verbose | `bool` | whether or not the tool should write to stdout the intermediate information. | False |
Returns:
Type | Description |
---|---|
Tool | the World Knowledge agent tool. |
Source code in projects/gptstonks_api/gptstonks/api/initialization/app.py
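A usage sketch; the LlamaIndex OpenAI LLM is just one possible choice, and its import path varies across llama-index versions.

```python
from llama_index.llms.openai import OpenAI

from gptstonks.api.initialization.app import init_world_knowledge_tool

world_knowledge = init_world_knowledge_tool(
    llamaindex_llm=OpenAI(model="gpt-4"),  # decomposes the query and answers subqueries
    verbose=True,  # write intermediate steps to stdout
)
```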
load_embed_model()
Get LlamaIndex embedding model.
Returns:
Type | Description |
---|---|
str \| OpenAIEmbedding | the embedding model to use for the RAG. |
Source code in projects/gptstonks_api/gptstonks/api/initialization/app.py
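Since the return type is a union, callers may need to branch on it; a small sketch:

```python
from gptstonks.api.initialization.app import load_embed_model

embed_model = load_embed_model()
if isinstance(embed_model, str):
    # A model ID string for a non-OpenAI embedding model.
    print(f"Embedding model id: {embed_model}")
else:
    # A LlamaIndex OpenAIEmbedding instance.
    print(f"Embedding model: {type(embed_model).__name__}")
```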
load_llm_model()
Initialize the Langchain LLM to use.
Several providers are currently supported:
- OpenAI.
- AWS Bedrock.
- Llama.cpp.
- HuggingFace.
The provider is selected and configured based on env variables.
Returns:
Type | Description |
---|---|
LLM | the configured Langchain LLM. |
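A usage sketch; the env variable shown below is illustrative only, since the exact variable names are defined in the source above.

```python
import os

from gptstonks.api.initialization.app import load_llm_model

# Hypothetical env configuration: the real variable names are in app.py.
os.environ.setdefault("LLM_PROVIDER", "openai")

llm = load_llm_model()
print(type(llm))  # a Langchain LLM for the configured provider
```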