Using Frameworks

Qualcomm® AI Hub supports the use of multiple Qualcomm® AI Engine Direct versions for compile, profile, and inference jobs.

Querying Frameworks

To programmatically get a list of all supported Qualcomm® AI Engine Direct versions, use get_frameworks():

import qai_hub as hub

client = hub.Client()
supported_frameworks = client.get_frameworks()
print(supported_frameworks)

This is also available via the CLI:

qai-hub list-frameworks

Each framework has two special tags:

  • default: Refers to the default version of the framework used in Qualcomm® AI Hub Workbench.

  • latest: Refers to the most recently released version of the framework used in Qualcomm® AI Hub Workbench.
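These tags can be used to pick a version out of the query results. The following is a minimal sketch assuming each framework entry carries an api_version string and a list of tags; the actual field names on the objects returned by get_frameworks() may differ:

```python
# Hypothetical framework records; real entries come from get_frameworks().
frameworks = [
    {"api_version": "2.28", "tags": ["default"]},
    {"api_version": "2.31", "tags": ["latest"]},
    {"api_version": "2.29", "tags": []},
]

def version_for_tag(frameworks, tag):
    """Return the api_version of the framework carrying the given tag, if any."""
    for fw in frameworks:
        if tag in fw["tags"]:
            return fw["api_version"]
    return None

print(version_for_tag(frameworks, "default"))  # 2.28
print(version_for_tag(frameworks, "latest"))   # 2.31
```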

Submitting a Job with a Framework Version

Compile, profile, or inference jobs can be submitted with a specific framework version, identified by the api_version listed in get_frameworks():

import qai_hub as hub

client = hub.Client()
compile_job = client.submit_compile_job(
    model="mobilenet_v2.onnx",
    device=hub.Device("Samsung Galaxy S23 (Family)"),
    target_runtime="qnn_context_binary",
    options="--qairt_version 2.31"
)
print(compile_job)

Frameworks can also be identified using the default or latest tags. The following code sample uses the latest available version of the Qualcomm® AI Engine Direct framework:

import qai_hub as hub

client = hub.Client()
compile_job = client.submit_compile_job(
    model="mobilenet_v2.onnx",
    device=hub.Device("Samsung Galaxy S23 (Family)"),
    target_runtime="qnn_context_binary",
    options="--qairt_version latest"
)
print(compile_job)

The following code sample uses the default (most stable) version of the Qualcomm® AI Engine Direct framework:

import qai_hub as hub

client = hub.Client()
compile_job = client.submit_compile_job(
    model="mobilenet_v2.onnx",
    device=hub.Device("Samsung Galaxy S23 (Family)"),
    target_runtime="qnn_context_binary",
    options="--qairt_version default"
)
print(compile_job)

Version Selection

When submitting a profile or inference job with an explicit Qualcomm® AI Runtime version, Qualcomm® AI Hub Workbench will use that version if it is compatible with the input model, and will return an error if it is not.

When submitting a profile or inference job without an explicit Qualcomm® AI Runtime version, Qualcomm® AI Hub Workbench will automatically select a compatible version based on the input model.

If the input model is not a Qualcomm® AI Engine Direct asset (e.g., a TensorFlow Lite model or a standard ONNX model), Qualcomm® AI Hub Workbench will always select the default Qualcomm® AI Engine Direct version.

If the input model is a Qualcomm® AI Engine Direct asset (a DLC or a context binary), Qualcomm® AI Hub Workbench will query the Qualcomm® AI Engine Direct version from the asset in order to select a compatible version. The default Qualcomm® AI Engine Direct version is used if it is compatible with the model; the latest version is used if the default is not compatible. An error is returned if no compatible version is available.
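The selection rules above can be summarized in a short sketch. This is illustrative pseudologic, not the actual Qualcomm® AI Hub Workbench implementation; the compatibility predicate here is a stand-in assumption:

```python
def select_qairt_version(asset_version, default_version, latest_version,
                         is_compatible):
    """Pick a Qualcomm AI Engine Direct version for a submitted model.

    asset_version: version queried from a DLC/context-binary asset, or None
    for non-Qualcomm AI Engine Direct models (TensorFlow Lite, standard ONNX).
    is_compatible: predicate (candidate_version, asset_version) -> bool.
    """
    if asset_version is None:
        # Non-Qualcomm AI Engine Direct assets always use the default version.
        return default_version
    if is_compatible(default_version, asset_version):
        return default_version
    if is_compatible(latest_version, asset_version):
        return latest_version
    raise ValueError("No compatible Qualcomm AI Engine Direct version available")

# Illustrative compatibility rule (an assumption, not the real policy):
# a candidate is compatible when it is at least as new as the asset's version.
compat = lambda candidate, asset: float(candidate) >= float(asset)

print(select_qairt_version("2.31", "2.28", "2.31", compat))  # 2.31
print(select_qairt_version(None, "2.28", "2.31", compat))    # 2.28
```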