Generating anomaly detection models for the MAX32690 Evaluation Kit with AutoML

This example demonstrates how to find anomaly detection models with Kenning and evaluate them on a simulated MCU. The models are generated with the Auto-PyTorch AutoML framework, and the target platform is the MAX32690 Evaluation Kit. The demo uses Kenning Zephyr Runtime and Zephyr RTOS to execute the models on hardware simulated in Renode.

Prepare an environment for development

Assuming git and docker are available on the system, first let’s clone the repository:

git clone https://github.com/antmicro/kenning-zephyr-runtime-example-app.git sample-app
cd ./sample-app

Then, let’s build the Docker image based on ghcr.io/antmicro/kenning-zephyr-runtime:latest for quicker environment setup:

docker build -t kenning-automl ./environments

After a successful build of the image, run:

docker run --rm -it --name automl -w $(pwd) -v $(pwd):$(pwd) kenning-automl:latest bash

Then, in the Docker container, initialize the Zephyr application and Kenning Zephyr Runtime as follows:

west init -l app
west update
west zephyr-export
pushd ./kenning-zephyr-runtime
./scripts/prepare_zephyr_env.sh
./scripts/prepare_modules.sh
popd

Note

To make sure you have the latest version of Kenning with the AutoML features, run:

pip3 install "kenning[iree,tvm,torch,anomaly_detection,auto_pytorch,tensorflow,tflite,reports,renode,uart] @ git+https://github.com/antmicro/kenning.git"

With the configured environment, you can now run the AutoML flow.

To create a workspace directory where intermediate results of the commands executed below will be stored, run:

mkdir -p workspace

Note

For more step-by-step instructions on how to set up the environment locally, see Kenning Zephyr Runtime build instructions.

Run AutoML flow

The AutoML flow can be configured using the YAML below.

platform:
  type: ZephyrPlatform
  parameters:
    simulated: True
    name: max32690evkit/max32690/m4
    uart_port: /tmp/uart
    uart_log_port: /tmp/uart-log
    profiler_dump_path: ./build/renode_profiler.dump
runtime_builder:
  type: ZephyrRuntimeBuilder
  parameters:
    workspace: ./
    run_west_update: false
    output_path: ./build
    extra_targets: [board-repl]
dataset:
  type: AnomalyDetectionDataset
  parameters:
    dataset_root: build/CATS
    csv_file: ./data.csv
    split_seed: 12345
    split_fraction_test: 0.0001
    inference_batch_size: 1
    reduce_dataset: 0.1
automl:
  type: AutoPyTorchML
  parameters:
    output_directory: ./workspace/automl-results
    time_limit: 30
    use_models:
      - PyTorchAnomalyDetectionVAE
    n_best_models: 5
    # AutoPyTorch specific options
    optimize_metric: f1
    budget_type: epochs
    min_budget: 1
    max_budget: 5
    application_size: 74.01
optimizers:
- type: TFLiteCompiler
  parameters:
    target: default
    compiled_model_path: ./build/fp32.tflite
    inference_input_type: float32
    inference_output_type: float32

The automl entry introduced above selects the implementation of the AutoML flow and configures its parameters. The main parameters affecting the model search process are:

  • time_limit - determines how much time the AutoML algorithm will spend searching for models

  • application_size - determines how much space is consumed by the application excluding the model. By taking into account the provided size of the application running on the board and the size of RAM (obtained from the platform class), the flow automatically rejects (before training) all models that do not fit into the available space. This ensures that no time is wasted on models that cannot be run on the hardware (or its simulation); see the sketch after this list.

  • use_models - lists the models for the algorithm to take into account. Models provided in the list contribute to the search space of available configurations, providing their hyperparameters and structure definitions.
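
The size check can be pictured with a minimal sketch. Note that this is not Kenning’s actual implementation; the function name and the assumption that all sizes are simply summed in a common unit are illustrative:

def fits_on_target(model_size: float, application_size: float, ram_size: float) -> bool:
    # Keep a candidate model only if the application plus the model
    # fit into the RAM reported by the platform class.
    return application_size + model_size <= ram_size

# With the scenario's application_size of 74.01 and a hypothetical
# ram_size of 1024 (same unit), any model larger than
# 1024 - 74.01 = 949.99 is rejected before any training time is spent on it.
assert fits_on_target(500.0, application_size=74.01, ram_size=1024.0)
assert not fits_on_target(1000.0, application_size=74.01, ram_size=1024.0)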

Note

To use the microTVM runtime, change the optimizer to:

optimizers:
- type: TVMCompiler
  parameters:
    compiled_model_path: ./workspace/vae.graph_data

Application size can vary greatly between runtimes, so make sure to adjust application_size accordingly so that the generated models fit on the target.
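
For example, with the microTVM runtime the automl entry could be updated along these lines (the application_size value below is purely illustrative; measure the size of your TVM-based application first):

automl:
  type: AutoPyTorchML
  parameters:
    # ... other parameters unchanged ...
    application_size: 105.0  # illustrative; not a measured value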

To run the full flow, use this command:

kenning automl optimize test report \
  --cfg ./kenning-zephyr-runtime/kenning-scenarios/renode-zephyr-auto-tflite-automl-vae-max32690.yml \
  --report-path ./workspace/automl-report/report.md \
  --allow-failures --to-html --ver INFO \
  --comparison-only --skip-general-information

The command above:

  • Runs an AutoML search for models for the amount of time specified in time_limit

  • Optimizes the best-performing models using the given optimization pipeline

  • Runs evaluation of the compiled models in a Renode simulation

  • Generates a full comparison report for the models so that the user can pick the best one (located in workspace/automl-report/report/report.html)

  • Generates optimized models (vae.<id>.tflite), their AutoML-derived configuration (automl_conf_<id>.yml) and IO specification files for Kenning (vae.<id>.tflite.json) under workspace/automl-results/.
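
After the flow completes, these files can be inspected directly; their names follow the patterns above, with a numeric <id> assigned to each candidate model:

ls workspace/automl-results/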

Run sample app with a chosen model

Once the best model is selected (e.g. workspace/automl-results/vae.0.tflite), let’s compile the sample app with it:

west build \
  -p always \
  -b max32690evkit/max32690/m4 app -- \
  -DEXTRA_CONF_FILE="tflite.conf" \
  -DCONFIG_KENNING_MODEL_PATH=\"$(realpath workspace/automl-results/vae.0.tflite)\"
west build -t board-repl

Note

To use the microTVM runtime, change -DEXTRA_CONF_FILE to "tvm.conf" and -DCONFIG_KENNING_MODEL_PATH to a chosen model compiled with TVM (usually with the .graph_data extension).
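
For instance, assuming the chosen TVM-compiled model is named vae.0.graph_data (the exact name depends on the AutoML run), the build could look as follows:

west build \
  -p always \
  -b max32690evkit/max32690/m4 app -- \
  -DEXTRA_CONF_FILE="tvm.conf" \
  -DCONFIG_KENNING_MODEL_PATH=\"$(realpath workspace/automl-results/vae.0.graph_data)\"
west build -t board-repl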

Finally, the app with the model can be simulated with:

python3 kenning-zephyr-runtime/scripts/run_renode.py --no-kcomm

To end the simulation, press Ctrl+C.

