Tutorial - Bootstrap your AI app project
In this tutorial, we will guide you through all the steps needed to create a new AI app with the Athena Owl Framework.
We will:
- create a new AI app project from a template
- run the AI app so that you can understand the various ways to interact with the app
- modify the AI app to add a new agent with its own prompt and a custom tool
Prerequisites
You will need to be able to run Docker on your machine. The following instructions have been built using Docker Desktop but you can use alternative tools such as Colima or Rancher Desktop.
Create a new Agent App from a template
Create a working directory for everything related to Athena and move into it. From a terminal, you can do:
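For example (the directory name is illustrative, use any name you like):

```shell
# Create a working directory for all Athena-related repositories and enter it
mkdir -p athena
cd athena
```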
Clone the athena-owl-core and athena-owl-demos GitHub repositories, either over HTTPS:

```shell
git clone https://github.com/AthenaDecisionSystems/athena-owl-core.git
git clone https://github.com/AthenaDecisionSystems/athena-owl-demos.git
```

or over SSH:

```shell
git clone git@github.com:AthenaDecisionSystems/athena-owl-core.git
git clone git@github.com:AthenaDecisionSystems/athena-owl-demos.git
```
Create a folder that will be the placeholder for your first Athena project:
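For example (the folder name is illustrative):

```shell
# Create the folder that will hold your first Athena project
mkdir -p my-first-owl-app
```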
Copy the content of the skeleton app SkeletonAppHelloLLM into your app folder
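A sketch of the copy step; the source path is an assumption about the athena-owl-demos layout, so adjust SRC to wherever SkeletonAppHelloLLM lives in your checkout:

```shell
# Hypothetical paths: adjust SRC and DEST to your own checkout and app folder
SRC=athena-owl-demos/SkeletonAppHelloLLM
DEST=my-first-owl-app
mkdir -p "$DEST"
if [ -d "$SRC" ]; then
  cp -r "$SRC"/. "$DEST"/
else
  echo "Skeleton not found at $SRC - adjust SRC to your checkout" >&2
fi
```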
Create a .env file in the app folder. Add the API keys for the various third-party providers that you want to use.
In this demo, we use OpenAI and Tavily. So, the .env file should look like this:
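A minimal .env could look like the sketch below. The variable names are assumptions based on common OpenAI and Tavily client conventions; check the skeleton's configuration for the exact names it expects.

```
OPENAI_API_KEY=sk-...your-openai-key...
TAVILY_API_KEY=tvly-...your-tavily-key...
```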
Please insert your OpenAI and Tavily API keys in the .env file if you already have them. Otherwise, follow these links to create keys for OpenAI and Tavily.
We are now ready to run our application. First, make sure that Docker Desktop is started, then start the demo with Docker Compose by running docker compose up -d from the deployment/local folder of your app (the folder that contains the docker-compose.yaml file).
This will pull the two Docker images that are used to run our application:
- athenadecisionsystems/athena-owl-backend is the backend component that serves the APIs
- athenadecisionsystems/athena-owl-frontend is the OOTB web application that can be used to interact with AI agents through a chatbot interface.
Once the images have been pulled, you can check with docker ps that the two containers are actually up and running. The output of the command should look like:

```
CONTAINER ID   IMAGE                                             COMMAND                  CREATED          STATUS          PORTS                    NAMES
d1bee478ae99   athenadecisionsystems/athena-owl-backend:1.0.0    "uvicorn athena.main…"   11 seconds ago   Up 10 seconds   0.0.0.0:8002->8000/tcp   ibu-backend
35530e7bd690   athenadecisionsystems/athena-owl-frontend:1.0.0   "docker-entrypoint.s…"   6 days ago       Up 10 seconds   0.0.0.0:3000->3000/tcp   owl-frontend
```
Run the default 'Hello LLM' Agent App
We are now ready to interact with our Hello World AI application. Running our Agent App is a good way to understand how we can interact with it, either using an out-of-the-box chatbot provided as a webapp or by calling APIs.
Start a browser and point to localhost:3000
- We will use the OOTB chatbot to send queries that will be served by the pre-configured LLM. Click on the Chatbot tab in the top navigation bar and enter a message in the chat. The agent will use the common knowledge of an LLM like OpenAI's models to provide a reasonable answer.
- The agent has been configured with some tools and will use a specific tool to retrieve customer data based on their email address. For a query such as "what is the data of pierre@acme.fr?", the response is returned by a custom function that mimics a call to an enterprise data API.
- The agent can also use Tavily to search the web for queries that require fresh data that was not available when the LLM was trained; for such queries, the agent will perform an API call to Tavily.
Play with the Athena Backend APIs using Swagger UI
Start a browser and point to localhost:8002/docs. Use the generic_chat endpoint with the following JSON payload:
```json
{
  "locale": "en",
  "query": "what is the data of pierre@acme.fr?",
  "user_id": "123",
  "agent_id": "hello_world_agent_with_tools"
}
```
You should get a response similar to:

```json
{
  "messages": [
    {
      "content": "The data for the email address pierre@acme.fr is as follows:\n\n- **Date of Birth**: December 14, 1994\n- **Income**: 19,500 EUR\n- **Country of Residence**: France",
      "style_class": null
    }
  ],
  "closed_questions": null,
  "reenter_into": null,
  "status": 200,
  "error": "",
  "chat_history": [
    {
      "content": "what is the data of pierre@acme.fr?",
      "role": "human"
    },
    {
      "content": "The data for the email address pierre@acme.fr is as follows:\n\n- **Date of Birth**: December 14, 1994\n- **Income**: 19,500 EUR\n- **Country of Residence**: France",
      "role": "AI"
    }
  ],
  "user_id": "123",
  "agent_id": "hello_world_agent_with_tools",
  "thread_id": "8d3a7baf-b266-4f35-9a28-04edff0a5703"
}
```
If you want, you can also make direct calls to the Athena Backend APIs using cURL from a terminal.
```shell
curl -X 'POST' \
  'http://localhost:8002/api/v1/c/generic_chat' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "locale": "en",
  "query": "what is the data of pierre@acme.fr?",
  "user_id": "123",
  "agent_id": "hello_world_agent_with_tools"
}'
```
Tailor the Agent App to your needs
Let's have a look at the docker-compose.yaml file under deployment/local to understand how an application can be customized. The Docker Compose file defines a service called ibu-backend that uses a predefined Docker image provided by Athena Decision Systems and serves the APIs used to interact with the agent app.
When running, the ibu-backend Docker container has access to three mounted volumes that play a crucial role in customizing the application:
- ../../ibu_backend/src/config:/app/config. This mounted volume is used to access the various configuration files for our app, such as prompts.yaml and agents.yaml.
- ../../ibu_backend/src/ibu:/app/ibu. This mounted volume is used to access your custom Python code.
- ./data/file_content:/app/file_content. This mounted volume is used to store the document database used for RAG.
```yaml
# THE BACKEND
ibu-backend:
  hostname: ibu-backend
  image: athenadecisionsystems/athena-owl-backend:1.0.0
  container_name: ibu-backend
  ports:
    - 8002:8000
  environment:
    CONFIG_FILE: /app/config/config.yaml
  env_file:
    - ../../.env # path to the file containing the API keys of your various providers
  volumes:
    - ../../ibu_backend/src/config:/app/config # <-- access the app configuration files
    - ../../ibu_backend/src/ibu:/app/ibu # <-- access some custom Python code
    - ./data/file_content:/app/file_content # <-- used for the document database and RAG
```
Create a new prompt
All the prompts used by our application are specified in the prompts.yaml file located in the folder ibu_backend/src/config (mounted into the container at /app/config).
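The actual schema is defined by the framework, so open the skeleton's prompts.yaml to see the real fields. As a purely hypothetical illustration, an entry associating a prompt id with its text might have a shape like:

```yaml
# Hypothetical shape - not the framework's actual schema
my_agent_prompt:
  text: >
    You are a helpful assistant for ACME customer support.
```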
Create a new agent
All the agents used by the application are specified in the agents.yaml file located in the folder ibu_backend/src/config (mounted into the container at /app/config).
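Again, the real schema comes from the framework; check the skeleton's agents.yaml for the actual fields. A hypothetical entry wiring an agent to a prompt might look like:

```yaml
# Hypothetical shape - not the framework's actual schema
my_new_agent:
  name: My new agent
  prompt_ref: my_agent_prompt   # reference to an entry in prompts.yaml
  model: gpt-4o                 # illustrative model id
```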
Add a new tool
Register a new function as a tool in the tool factory so that the agent can call it.
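The registration API of the tool factory is framework-specific, so follow the framework documentation for that step; the tool itself is ordinary Python. As a hypothetical sketch mimicking the demo's customer lookup (the function name, docstring, and data below are illustrative, not part of the framework):

```python
def get_customer_data(email: str) -> dict:
    """Return customer data for the given email address."""
    # In a real app this would call an enterprise data API;
    # here we return canned data, as the demo's tool does.
    fake_db = {
        "pierre@acme.fr": {
            "date_of_birth": "1994-12-14",
            "income_eur": 19500,
            "country": "France",
        }
    }
    return fake_db.get(email, {})
```

Agent frameworks typically surface the function's name, docstring, and parameter types to the LLM as the tool description, so keep them short and descriptive.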