
chore: fix PR template #487

Open · wants to merge 13 commits into base: main
4 changes: 1 addition & 3 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -20,14 +20,12 @@

## How Has This Been Tested?
<!--- Put an `x` in all the boxes that apply: --->
- [ ] Pass the test by running `pytest qlib/tests/test_all_pipeline.py` from the parent directory of `qlib`.
- [ ] If you are adding a new feature, test it with your own test scripts.

<!--- **ATTENTION**: If you are adding a new feature, please make sure your code is **correctly tested**. If our test scripts do not cover your cases, please provide your own test scripts under the `tests` folder and run them. More information about test scripts can be found [here](https://docs.python.org/3/library/unittest.html#basic-example), or you could refer to those we provide under the `tests` folder. -->

## Screenshots of Test Results (if appropriate):
1. Pipeline test:
2. Your own tests:
1. Your own tests:

## Types of changes
<!--- What types of changes does your code introduce? Put an `x` in all the boxes that apply: -->
25 changes: 20 additions & 5 deletions README.md
@@ -84,14 +84,25 @@ Users must ensure Docker is installed before attempting most scenarios. Please r
```

### ⚙️ Configuration
- You have to config your GPT model in the `.env`
- If you are using the `OpenAI API`, configure your GPT model in the `.env` file like this:
```bash
cat << EOF > .env
OPENAI_API_KEY=<your_api_key>
OPENAI_API_KEY=<replace_with_your_openai_api_key>
# EMBEDDING_MODEL=text-embedding-3-small
CHAT_MODEL=gpt-4-turbo
EOF
```
- If you are using `Azure OpenAI`, configure your GPT model in the `.env` file like this:
```bash
cat << EOF > .env
USE_AZURE=True
OPENAI_API_KEY=<replace_with_your_azure_openai_api_key>
# EMBEDDING_MODEL=text-embedding-3-small
CHAT_MODEL=<replace_with_the_name_of_your_chat_model>
CHAT_AZURE_API_VERSION=<replace_with_the_version_of_your_Azure_OpenAI_API>
EOF
```
- For more configuration information, please refer to the [documentation](https://rdagent.readthedocs.io/en/latest/installation_and_configuration.html).
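The `.env` snippets above are plain `KEY=VALUE` lines with `#` comments. A minimal sketch of how such lines are typically interpreted — the `parse_env` helper is hypothetical, not part of RD-Agent:

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and `#`-commented lines."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # e.g. the commented-out EMBEDDING_MODEL line is ignored
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """\
USE_AZURE=True
OPENAI_API_KEY=sk-example
# EMBEDDING_MODEL=text-embedding-3-small
CHAT_MODEL=gpt-4-turbo
"""
print(parse_env(sample)["CHAT_MODEL"])  # gpt-4-turbo
```

In practice a library such as python-dotenv does this parsing and exports the values into the process environment.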

### 🚀 Run the Application

@@ -170,9 +181,11 @@ The **[🖥️ Live Demo](https://rdagent.azurewebsites.net/)** is implemented b
> - The **Competition List Available** can be found [here](https://rdagent.readthedocs.io/en/latest/scens/kaggle_agent.html#competition-list-available). <br />

### 🖥️ Monitor the Application Results
- You can serve our demo app to monitor the RD loop by running the following command:
- You can run the following command to launch our demo app and view the run logs:

**Note:** Port 19899 is not commonly used, but before you run this demo, check whether it is occupied. If it is, change to another free port.
```sh
rdagent ui --port 80 --log_dir <your log folder like "log/">
rdagent ui --port 19899 --log_dir <your log folder like "log/">
```
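One way to check whether port 19899 is already occupied is to probe it with a short Python snippet — a sketch, not part of RD-Agent:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True when nothing is accepting connections on the given port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 on a successful connection, i.e. the port is busy
        return s.connect_ex((host, port)) != 0

print(port_is_free(19899))
```

On Linux, `lsof -i :19899` or `ss -ltn` give the same answer from the shell.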

# 🏭 Scenarios
@@ -210,8 +223,10 @@ Different scenarios vary in entrance and configuration. Please check the detaile

Here is a gallery of [successful explorations](https://github.com/SunsetWolf/rdagent_resource/releases/download/demo_traces/demo_traces.zip) (5 traces shown in the **[🖥️ Live Demo](https://rdagent.azurewebsites.net/)**). You can download and view the execution traces using the command below:

**Note:** Port 19899 is not commonly used, but before you run this demo, check whether it is occupied. If it is, change to another free port.

```bash
rdagent ui --port 80 --log_dir ./demo_traces
rdagent ui --port 19899 --log_dir ./demo_traces
```

Please refer to **[📖readthedocs_scen](https://rdagent.readthedocs.io/en/latest/scens/catalog.html)** for more details of the scenarios.
2 changes: 2 additions & 0 deletions rdagent/app/cli.py
@@ -25,6 +25,7 @@
from rdagent.app.qlib_rd_loop.factor import main as fin_factor
from rdagent.app.qlib_rd_loop.factor_from_report import main as fin_factor_report
from rdagent.app.qlib_rd_loop.model import main as fin_model
from rdagent.app.utils.health_check import health_check
from rdagent.app.utils.info import collect_info


@@ -52,6 +53,7 @@ def app():
"med_model": med_model,
"general_model": general_model,
"ui": ui,
"health_check": health_check,
"collect_info": collect_info,
"kaggle": kaggle_main,
}
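The mapping above ties each subcommand name (including the new `health_check` entry) to a function. A self-contained sketch of this name-to-callable dispatch pattern — the `hello` command and `app` wiring here are hypothetical, not RD-Agent's actual CLI machinery:

```python
import sys

def hello() -> None:
    print("hello")

# name -> callable, mirroring the command mapping in cli.py
COMMANDS = {"hello": hello}

def app(argv=None) -> None:
    argv = sys.argv[1:] if argv is None else argv
    name = argv[0] if argv else ""
    command = COMMANDS.get(name)
    if command is None:
        print(f"unknown command: {name!r}")
    else:
        command()

app(["hello"])  # prints "hello"
```

Adding a new subcommand then reduces to importing its entry function and adding one dictionary entry, which is exactly what this diff does.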
41 changes: 41 additions & 0 deletions rdagent/app/utils/health_check.py
@@ -0,0 +1,41 @@
import shutil
import subprocess

from rdagent.log import rdagent_logger as logger


def check_command_exists(command: str) -> bool:
return shutil.which(command) is not None


def check_command_execution(command: str) -> bool:
    result = subprocess.run(command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)

if "Hello from Docker!" in result.stdout:
return True
else:
print("STDOUT:", result.stdout)
print("STDERR:", result.stderr)
return False


def check_docker():
    if check_command_exists("docker"):
        if check_command_execution("docker run hello-world"):
            logger.info("Docker status is normal.")
        else:
            if check_command_execution("sudo docker run hello-world"):
                logger.warning("Docker works only with sudo; please add the user to the docker group.")
            else:
                logger.error(
                    "Docker is not working properly; please check the Docker configuration or reinstall it. "
                    "Refs: https://docs.docker.com/engine/install/ubuntu/."
                )
    else:
        logger.warning(
            "Docker is not installed; please install it. Refs: https://docs.docker.com/engine/install/ubuntu/."
        )


def health_check():
check_docker()
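The docker probe above matches on the `Hello from Docker!` banner in stdout. An alternative would be to key off the exit status instead — a sketch with a hypothetical `run_ok` helper, not the PR's actual implementation:

```python
import subprocess

def run_ok(command: str) -> bool:
    """Return True when the command exits with status 0."""
    result = subprocess.run(command.split(), capture_output=True, text=True)
    return result.returncode == 0

print(run_ok("echo hello"))  # True on a system where `echo` is on PATH
```

Matching on stdout is stricter (it proves the container actually ran and printed its banner), while the exit-status check is simpler and not tied to the image's output text.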
6 changes: 5 additions & 1 deletion rdagent/log/ui/app.py
@@ -681,7 +681,11 @@ def evolving_window():
if manually:
st.text_input("log path", key="log_path", on_change=refresh, label_visibility="collapsed")
else:
folders = [folder.relative_to(main_log_path) for folder in main_log_path.iterdir() if folder.is_dir()]
folders = [
    folder.relative_to(main_log_path)
    for folder in main_log_path.iterdir()
    if folder.is_dir() and any((sub / "__session__").exists() for sub in folder.iterdir())
]
folders = sorted(folders, key=lambda x: x.name)
st.selectbox(f"**Select from `{main_log_path}`**", folders, key="log_path", on_change=refresh)
else:
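The folder filter above keeps only log folders that contain a `__session__` entry in some child directory. It can be exercised in isolation against a throwaway tree — the `session_folders` helper is a hypothetical wrapper mirroring the comprehension, not code from the PR:

```python
import tempfile
from pathlib import Path

def session_folders(main_log_path: Path) -> list:
    """Keep only subfolders with a __session__ entry in some child, like the UI filter."""
    return sorted(
        folder.relative_to(main_log_path)
        for folder in main_log_path.iterdir()
        if folder.is_dir() and any((sub / "__session__").exists() for sub in folder.iterdir())
    )

with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    # run1 holds a loop with session state; run2 does not
    (root / "run1" / "Loop_0" / "__session__").mkdir(parents=True)
    (root / "run2" / "Loop_0").mkdir(parents=True)
    print(session_folders(root))  # only run1 survives the filter
```

Note that `Path.exists` must be called — referencing the bound method without parentheses is always truthy, so an uncalled `.exists` would make the filter accept every folder.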