Working with Frameworks

Qualcomm® AI Hub supports multiple Qualcomm® AI Engine Direct versions for compile, profile, and inference jobs.

Listing Frameworks

To programmatically get a list of all supported Qualcomm® AI Engine Direct versions, use get_frameworks().

import qai_hub as hub

client = hub.Client()
supported_frameworks = client.get_frameworks()
print(supported_frameworks)

This can also be done through the CLI:

qai-hub list-frameworks

각 프레임워크에는 두 가지 특별한 태그가 있습니다:

  • default: Refers to the default version of the framework used in Qualcomm® AI Hub Workbench.

  • latest: Refers to the most recently released version of the framework used in Qualcomm® AI Hub Workbench.
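The tag-based lookup above can be sketched as a simple filter. The records below are hypothetical sample data standing in for the entries returned by get_frameworks(); the real objects expose similarly named fields, but check the client's actual return type before relying on them.

```python
# Hypothetical framework records shaped like get_frameworks() output.
# The versions and field names here are illustrative, not real API data.
frameworks = [
    {"api_version": "2.28", "api_tags": ["default"]},
    {"api_version": "2.31", "api_tags": []},
    {"api_version": "2.32", "api_tags": ["latest"]},
]

def find_by_tag(frameworks, tag):
    """Return the api_version carrying the given tag, or None."""
    for fw in frameworks:
        if tag in fw["api_tags"]:
            return fw["api_version"]
    return None

print(find_by_tag(frameworks, "default"))  # → 2.28
print(find_by_tag(frameworks, "latest"))   # → 2.32
```

The resolved api_version string is what you would pass to the --qairt_version option shown below.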

Submitting Jobs with a Framework Version

Compile, profile, or inference jobs can be submitted with a specific framework version, identified by the api_version listed in get_frameworks():

import qai_hub as hub

client = hub.Client()
compile_job = client.submit_compile_job(
    model="mobilenet_v2.onnx",
    device=hub.Device("Samsung Galaxy S23 (Family)"),
    target_runtime="qnn_context_binary",
    options="--qairt_version 2.31"
)
print(compile_job)

A framework can also be identified by its default or latest tag. The following code sample uses the latest available Qualcomm® AI Engine Direct framework version.

import qai_hub as hub

client = hub.Client()
compile_job = client.submit_compile_job(
    model="mobilenet_v2.onnx",
    device=hub.Device("Samsung Galaxy S23 (Family)"),
    target_runtime="qnn_context_binary",
    options="--qairt_version latest"
)
print(compile_job)

The following code sample uses the default (most stable) Qualcomm® AI Engine Direct framework version.

import qai_hub as hub

client = hub.Client()
compile_job = client.submit_compile_job(
    model="mobilenet_v2.onnx",
    device=hub.Device("Samsung Galaxy S23 (Family)"),
    target_runtime="qnn_context_binary",
    options="--qairt_version default"
)
print(compile_job)

Version Selection

When submitting a profile or inference job with an explicit Qualcomm® AI Runtime version, Qualcomm® AI Hub Workbench uses that version if it is compatible with the input model; if the requested version is not compatible, an error is returned.

When submitting a profile or inference job without an explicit Qualcomm® AI Runtime version, Qualcomm® AI Hub Workbench automatically selects a compatible version based on the input model.

If the input model is not a Qualcomm® AI Engine Direct asset (e.g., a TensorFlow Lite model or a standard ONNX model), Qualcomm® AI Hub Workbench always selects the default Qualcomm® AI Engine Direct version.

If the input model is a Qualcomm® AI Engine Direct asset (a DLC or context binary), Qualcomm® AI Hub Workbench queries the Qualcomm® AI Engine Direct version from the asset in order to select a compatible version. The default Qualcomm® AI Engine Direct version is used if it is compatible with the model; otherwise the latest version is used if it is compatible. If no compatible version is available, an error is returned.
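The fallback order described above can be sketched as a short function. This is an illustrative model of the selection logic, not Qualcomm® AI Hub Workbench's actual implementation; the is_compatible predicate and the toy version strings are assumptions made for the example.

```python
def select_qairt_version(asset_version, default_version, latest_version, is_compatible):
    """Illustrative sketch of the version-selection fallback described above.
    `is_compatible(runtime, asset)` is a hypothetical predicate, not part of
    the qai_hub API."""
    if asset_version is None:
        # Non-Qualcomm AI Engine Direct inputs (TFLite, standard ONNX)
        # always get the default version.
        return default_version
    if is_compatible(default_version, asset_version):
        return default_version
    if is_compatible(latest_version, asset_version):
        return latest_version
    raise ValueError("no compatible Qualcomm AI Engine Direct version")

# Toy compatibility rule for illustration: a runtime can load assets built
# with the same or an older version (plain string comparison works here
# because the toy versions share one fixed-width format).
compatible = lambda runtime, asset: runtime >= asset

print(select_qairt_version(None, "2.28", "2.32", compatible))    # → 2.28
print(select_qairt_version("2.30", "2.28", "2.32", compatible))  # → 2.32
```

In the second call the asset was built with a version newer than the default, so the sketch falls through to the latest version, mirroring the behavior described above.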