Generating anomaly detection models for the MAX32690 Evaluation Kit with AutoML¶
This example demonstrates how to find anomaly detection models and evaluate them on a simulated MCU using Kenning, Zephyr RTOS and Renode. The models are generated with the Auto-PyTorch AutoML framework. The platform used in this example is the MAX32690 Evaluation Kit. The demo uses Kenning Zephyr Runtime and Zephyr RTOS to execute the model on simulated hardware.
Prepare an environment for development¶
Assuming git and docker are available in the system, first let’s clone the repository:
git clone https://github.com/antmicro/kenning-zephyr-runtime-example-app.git sample-app
Then, let’s build the Docker image based on ghcr.io/antmicro/kenning-zephyr-runtime:latest for quicker environment setup:
docker build -t kenning-automl ./sample-app/environments
After successful build of the image, run:
docker run --rm -it --name automl -w $(realpath sample-app) -v $(pwd):$(pwd) kenning-automl:latest bash
Then, in the Docker container, initialize the Zephyr application and Kenning Zephyr Runtime as follows:
west init -l app
west update
west zephyr-export
pushd ./kenning-zephyr-runtime
./scripts/prepare_zephyr_env.sh
./scripts/prepare_modules.sh
popd
Note
To make sure you have the latest version of Kenning with AutoML features, run:
pip3 install "kenning[iree,tvm,torch,anomaly_detection,auto_pytorch,tensorflow,tflite,reports,renode,uart] @ git+https://github.com/antmicro/kenning.git"
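To verify the installation succeeded, you can check the installed package version; the helper below is a small illustrative sketch (not part of Kenning's CLI), relying only on the standard library:

```python
from importlib.metadata import PackageNotFoundError, version

# Hypothetical sanity check: report the installed version of a package,
# or None when it is not installed in the current environment.
def installed_version(package: str):
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# After the pip install above, this should print the installed Kenning version.
print(installed_version("kenning"))
```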
With the configured environment, you can now run the AutoML flow.
To create a workspace directory, where intermediate results of the commands executed below will be stored, run:
mkdir -p workspace
Note
For more step-by-step instructions on how to set up the environment locally, see Kenning Zephyr Runtime build instructions.
Run AutoML flow¶
The AutoML flow can be configured using the YAML below.
# This scenario demonstrates the AutoML flow on an example of anomaly detection
# in time series for the MAX32690 Evaluation Kit.
#
# Configures AutoML flow
automl:
  type: AutoPyTorchML
  parameters:
    # Time limit for the AutoML task (in minutes)
    time_limit: 20
    # List of model architectures used for AutoML,
    # represented by ModelWrapper (has to implement the AutoMLModel class)
    use_models:
      - PyTorchAnomalyDetectionVAE
    # Folder storing AutoML results
    output_directory: ./workspace/automl-results
    # Maximum number of models returned by the flow
    n_best_models: 5
    # AutoPyTorch-specific options
    # Chosen metric to optimize
    optimize_metric: f1
    # Type of budget for training models, either epochs or time limit
    budget_type: epochs
    # Lower and upper limits of the budget
    min_budget: 1
    max_budget: 5
    # Size of the application that will use the generated models
    application_size: 75.5

# Chooses the platform to run
platform:
  type: ZephyrPlatform
  parameters:
    # Chooses MAX32690 Evaluation Kit
    name: max32690evkit/max32690/m4
    # Use Renode to simulate the platform
    simulated: True

# Defines dataset for anomaly detection
dataset:
  type: AnomalyDetectionDataset
  parameters:
    dataset_root: ./workspace/CATS
    csv_file: kenning:///datasets/anomaly_detection/cats_nano.csv
    split_fraction_test: 0.1
    split_seed: 12345
    inference_batch_size: 1

# The remaining configuration, despite not being directly
# used by the AutoML flow, is copied to the resulting scenarios
# and influences the standard Kenning flow run with the generated models
optimizers:
  - type: TFLiteCompiler
    parameters:
      target: default
      compiled_model_path: ./workspace/automl-results/vae.tflite
      inference_input_type: float32
      inference_output_type: float32

runtime_builder:
  type: ZephyrRuntimeBuilder
  parameters:
    workspace: .
    # venv_dir: ../.venv
    output_path: ./workspace/kzr_build
    run_west_update: false
    extra_targets: [board-repl]
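The split_fraction_test and split_seed parameters in the dataset section above control a deterministic train/test partition. The sketch below only illustrates their semantics; it is not Kenning's actual implementation:

```python
import random

# Illustrative sketch of a seeded, deterministic train/test split,
# mirroring the meaning of split_fraction_test and split_seed.
def split_dataset(samples, split_fraction_test=0.1, split_seed=12345):
    indices = list(range(len(samples)))
    # The seed fixes the shuffle, so the split is reproducible across runs
    random.Random(split_seed).shuffle(indices)
    n_test = int(len(samples) * split_fraction_test)
    test = [samples[i] for i in indices[:n_test]]
    train = [samples[i] for i in indices[n_test:]]
    return train, test

train, test = split_dataset(list(range(100)))
print(len(train), len(test))  # 90 10
```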
The introduced automl entry provides an implementation of the AutoML flow, as well as its parameters. The main parameters affecting the model search process are:
- time_limit - determines how much time the AutoML algorithm will spend searching for models
- application_size - determines how much space is consumed by the application itself, excluding the model. By taking into account the provided size of the application run on the board and the size of RAM (obtained from the platform class), the flow automatically rejects (before training) all models that do not fit into the available space. This ensures that time is not wasted on models that cannot be run on the hardware (or its simulation).
- use_models - provides the models for the algorithm to take into account. Models provided in this list contribute to the search space of available configurations, providing their hyperparameters with acceptable ranges and structure definitions. Default ranges can be overridden by specifying them right after the chosen model:

  use_models:
    - PyTorchAnomalyDetectionVAE:
        encoder_neuron_list:
          list_range: [4, 15]
          item_range: [6, 128]
        dropout_rate:
          item_range: [0.1, 0.4]
        output_activation:
          enum: [tanh]
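The size-based pre-filtering described for application_size can be sketched as follows. This is an illustrative model of the behavior, not Kenning's API; the assumption that sizes are in kilobytes and the 128 KB RAM figure are examples only:

```python
# Hypothetical sketch of the pre-training size filter: a candidate model is
# kept only if it fits into RAM alongside the application itself.
def model_fits(model_size_kb: float, application_size_kb: float, ram_kb: float) -> bool:
    return model_size_kb + application_size_kb <= ram_kb

# With application_size set to 75.5 and, say, 128 KB of available memory,
# any model larger than 52.5 KB is rejected before training starts.
print(model_fits(50.0, 75.5, 128.0))   # True  - trained and evaluated
print(model_fits(60.0, 75.5, 128.0))   # False - rejected up front
```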
The list of AutoML-specific options is available in the Defining arguments for classes section.
Note
To use the microTVM runtime, change the optimizer to:
optimizers:
  - type: TVMCompiler
    parameters:
      compiled_model_path: ./workspace/vae.graph_data
The application size can vary greatly depending on the selected runtime, so make sure to adjust the application_size value accordingly to generate models of the correct size.
To run the full flow, use this command:
kenning automl optimize test report \
--cfg ./kenning-zephyr-runtime/kenning-scenarios/renode-zephyr-auto-tflite-automl-vae-max32690.yml \
--report-path ./workspace/automl-report/report.md \
--allow-failures --to-html --ver INFO \
--skip-general-information
The command above:

- Runs an AutoML search for models for the amount of time specified in time_limit
- Optimizes the best-performing models using the given optimization pipeline
- Runs evaluation of the compiled models in a Renode simulation
- Generates a full comparison report for the models so that the user can pick the best one (located in workspace/automl-report/report/report.html)
- Generates optimized models (vae.<id>.tflite), their AutoML-derived configurations (automl_conf_<id>.yml) and IO specification files for Kenning (vae.<id>.tflite.json) under workspace/automl-results/.
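To get a quick overview of what the flow produced, the generated files can be grouped by kind. The helper below is a hypothetical sketch; the glob patterns simply follow the file names listed above:

```python
from pathlib import Path

# Illustrative helper: group the artifacts under the results directory
# into model binaries, AutoML configs and IO specification files.
def collect_artifacts(results_dir):
    results = Path(results_dir)
    return {
        "models": sorted(p.name for p in results.glob("vae.*.tflite")),
        "configs": sorted(p.name for p in results.glob("automl_conf_*.yml")),
        "io_specs": sorted(p.name for p in results.glob("vae.*.tflite.json")),
    }

print(collect_artifacts("workspace/automl-results"))
```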
Run sample app with a chosen model¶
Once the best model is selected (e.g. workspace/automl-results/vae.0.tflite), let’s compile the sample app with it:
west build \
-p always \
-b max32690evkit/max32690/m4 app -- \
-DEXTRA_CONF_FILE="tflite.conf" \
-DCONFIG_KENNING_MODEL_PATH=\"$(realpath workspace/automl-results/vae.0.tflite)\"
west build -t board-repl
Note
To use the microTVM runtime, change -DEXTRA_CONF_FILE to "tvm.conf" and -DCONFIG_KENNING_MODEL_PATH to a chosen model compiled with TVM (usually with the .graph_data extension).
Finally, the app with the model can be simulated with:
python3 kenning-zephyr-runtime/scripts/run_renode.py --no-kcomm
To end the simulation, press Ctrl+C.