ailia package

Classes

class ailia.Net(stream=None, weight=None, env_id=-1, num_thread=0, memory_mode=None, debug_log=False, enable_layer_fusion=True)

Bases: object

wrapper class for ailia network model instance

__init__(stream=None, weight=None, env_id=-1, num_thread=0, memory_mode=None, debug_log=False, enable_layer_fusion=True)

constructor of ailia network model instance.

Parameters
  • stream (str, numpy.ndarray) – network model file path (e.g. “foobar.prototxt”) or network model data. When using an ONNX file, specify None.

  • weight (str, numpy.ndarray) – network weight file path (e.g. “foobar.caffemodel”, “foobar.onnx”) or network weight data.

  • env_id (int, optional, default:ENVIRONMENT_AUTO(-1)) –

    environment id of ailia execution. To retrieve an env_id value, use the

    get_environment_count() / get_environment() pair

    or

    get_gpu_environment_id() .

  • num_thread (int, optional, default: MULTITHREAD_AUTO(0)) –

    number of threads. valid values:

    MULTITHREAD_AUTO=0 [means the system’s logical processor count], or 1 to 32.

  • memory_mode (int or None, optional, default: None) – memory management mode of ailia execution. To retrieve a memory_mode value, use get_memory_mode() .

  • debug_log (bool, optional, default: False) – enable trace logging and verbose log for ailia.

  • enable_layer_fusion (bool, optional, default: True) – enable layer fusion optimization.
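
For reference, a minimal construction sketch (a hedged example: the weight file name “model.onnx” is a placeholder, and an ONNX model with static shapes is assumed, so stream is None as described above):

    import ailia

    # prefer a GPU environment when available; falls back to ENVIRONMENT_AUTO(-1)
    env_id = ailia.get_gpu_environment_id()

    # for an ONNX model, pass None as stream and the .onnx file as weight
    net = ailia.Net(None, "model.onnx", env_id=env_id)

    print(net.get_input_shape())
    print(net.get_output_shape())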

copy_blob_data(dst_idx, src_idx, src_net)

copy blob data between Net objects

Parameters
  • dst_idx (int or str) – The destination blob index (int) or blob name (str).

  • src_idx (int or str) – The source blob index (int) or blob name (str).

  • src_net (Net or None) – If a Net object is specified, copy the blob from src_net; otherwise, copy within self.

find_blob_index_by_name(name)

retrieve blob index by name.

Parameters

name (str) – blob name.

Returns

blob index.

Return type

int

get_blob_count()

get blob count.

Returns

blob count.

Return type

int

get_blob_data(idx, buffer=None)

get the data of the blob specified by idx.

Parameters
  • idx (int or str) – blob index (int) or blob name (str). valid values (int) : range(0, a.get_blob_count()) .

  • buffer (numpy.ndarray, optional) –

    output blob buffer (for performance tuning). if provided, ailia does not create a new output buffer on every call. requirements :

    buffer.dtype is numpy.float32 . buffer.flags[‘C_CONTIGUOUS’] is True . buffer.shape is the same as a.get_blob_shape(idx) .

Returns

output blob data.

Return type

numpy.ndarray

get_blob_name(idx)

get the name of the blob specified by idx.

Parameters

idx (int) – blob index. valid values : range(0, a.get_blob_count()) .

Returns

blob name.

Return type

str

get_blob_shape(idx)

get the shape of the blob specified by idx.

Parameters

idx (int or str) – blob index (int) or blob name (str). valid values (int) : range(0, a.get_blob_count()) .

Returns

blob shape.

Return type

tuple of ints
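
A sketch of blob inspection using the methods above (a hedged example: the weight file name is a placeholder, and the shape of an intermediate blob may still be undetermined before inference, hence the guard):

    import ailia

    net = ailia.Net(None, "model.onnx")  # placeholder ONNX weights

    # list every blob with its name and, if already determined, its shape
    for idx in range(net.get_blob_count()):
        name = net.get_blob_name(idx)
        try:
            shape = net.get_blob_shape(idx)
        except Exception:
            shape = None  # shape not determined yet (see AiliaUnsettledShapeException)
        print(idx, name, shape)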

get_error_detail()

get error detail.

Returns

error detail.

Return type

str

get_input_blob_list()

get input blob indices list.

Returns

input blob indices.

Return type

list(int)

get_input_shape()

get input blob shape.

Returns

input blob shape. (same as numpy’s shape)

Return type

tuple of ints

get_output_blob_list()

get output blob indices list.

Returns

output blob indices.

Return type

list(int)

get_output_shape()

get output blob shape.

Returns

output blob shape. (same as numpy’s shape)

Return type

tuple of ints

get_raw_pointer()

get raw instance of ailia

Returns

raw instance of ailia.

Return type

ctypes.c_void_p

get_results()

get the list of output blobs data.

Returns

output blob data.

Return type

list(numpy.ndarray)

get_selected_environment()

get current execution environment.

Returns

id : int, used as the env_id argument of the wrapper class constructors.
type : str, one of the following : ‘CPU’, ‘BLAS’, ‘GPU’
name : str, detailed description of the execution environment.
backend : str, one of the following : ‘NONE’, ‘CUDA’, ‘MPS’, ‘VULKAN’
props : list(str), empty or containing one or more of the following : ‘LOWPOWER’, ‘FP16’

Return type

namedtuple(“Environment”)

get_summary()

get, as a string, a summary of all layers and blobs.

Returns

summary string.

Return type

str

predict(input, output=None)

run ailia network model.

Parameters
  • input (numpy.ndarray or dict(str, numpy.ndarray) or sequence( numpy.ndarray )) –

    input blob data. requirements :

    input.shape is the same as a.get_input_shape() .

  • output (numpy.ndarray or sequence( numpy.ndarray ), optional) –

    output blob buffer (for performance tuning). if provided, ailia does not create a new output buffer on every call, and only the top N (len(output)) blobs of the model are retrieved. requirements :

    output.dtype is numpy.float32 . output.flags[‘C_CONTIGUOUS’] is True . output.shape is the same as a.get_output_shape() .

Returns

output blob data. When input is an ndarray, the return value is a single ndarray (output[0]); when input is a dict or a sequence, the return value is a list of ndarrays.

Return type

numpy.ndarray or list( numpy.ndarray )
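
A usage sketch for predict() (a hedged example: the weight file name and the blob name “input” are placeholders; a single-input model with a static input shape is assumed):

    import numpy as np
    import ailia

    net = ailia.Net(None, "model.onnx")  # placeholder ONNX weights

    # dummy input that matches the model's expected shape
    input_data = np.zeros(net.get_input_shape(), dtype=np.float32)

    # ndarray in -> single ndarray out
    output = net.predict(input_data)

    # dict in -> list of ndarrays out (the key must be an input blob name of the model)
    outputs = net.predict({"input": input_data})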

run(input, output=None)

run ailia network model.

Parameters
  • input (numpy.ndarray or dict(str, numpy.ndarray) or sequence( numpy.ndarray )) –

    input blob data. requirements :

    input.shape is the same as a.get_input_shape() .

  • output (numpy.ndarray or sequence( numpy.ndarray ), optional) –

    output blob buffer (for performance tuning). if provided, ailia does not create a new output buffer on every call, and only the top N (len(output)) blobs of the model are retrieved. requirements :

    output.dtype is numpy.float32 . output.flags[‘C_CONTIGUOUS’] is True . output.shape is the same as a.get_output_shape() .

Returns

list of output blob data. The return value is always a list.

Return type

list( numpy.ndarray )

set_input_blob_data(input, idx)

set input blob data. (for multiple input network model)

Parameters
  • input (numpy.ndarray) –

    input data. requirements :

    input.shape is a shape acceptable to the model.

  • idx (int or str) – blob index (int) or blob name (str). valid values (int) : range(0, a.get_blob_count())

See also

update

set_input_blob_shape(shape, idx)

set input blob shape. (for multiple input network model)

Parameters
  • shape (tuple of ints) – new input blob shape.

  • idx (int or str) – blob index (int) or blob name (str). valid values (int) : range(0, a.get_blob_count())

See also

update

set_input_shape(shape)

change input blob shape.

Parameters

shape (tuple of ints) – new input layer shape.
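
A sketch of reshaping the input before inference (a hedged example: it assumes the model accepts the new shape, the shape values are placeholders, and net is an ailia.Net constructed as above):

    import numpy as np

    new_shape = (1, 3, 416, 416)  # placeholder shape
    net.set_input_shape(new_shape)
    output = net.predict(np.zeros(new_shape, dtype=np.float32))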

set_profile_mode(mode=1)

change profile mode.

Parameters

mode (int, optional, default=PROFILE_AVERAGE(1)) –

valid values:

PROFILE_DISABLE(0) PROFILE_AVERAGE(1)

update()

run the ailia network model with pre-stored input data.

used together with a.set_input_blob_data() / a.get_results() . ex.

    # set input [A]
    idx = a.find_blob_index_by_name("input_a")
    a.set_input_blob_data(data_a, idx)
    # set the other input [B]
    idx = a.find_blob_index_by_name("input_b")
    a.set_input_blob_data(data_b, idx)
    # run the ailia network model
    a.update()
    # get results
    list_of_outputs = a.get_results()

class ailia.Detector(stream_path, weight_path, category_count, env_id=-1, num_thread=0, format=1, channel=0, range=3, algorithm=0, flags=0, debug_log=False, enable_layer_fusion=True, memory_mode=None)

Bases: ailia.wrapper.Net

wrapper class for ailia object detector (YOLO style) instance

__init__(stream_path, weight_path, category_count, env_id=-1, num_thread=0, format=1, channel=0, range=3, algorithm=0, flags=0, debug_log=False, enable_layer_fusion=True, memory_mode=None)

constructor of ailia object detector instance.

Parameters
  • stream_path (str) – network model file path (e.g. “foobar.prototxt”). When using an ONNX file, specify None.

  • weight_path (str) – network weight file path (e.g. “foobar.caffemodel”)

  • category_count (int) – number of detection categories; use the same value as at model training time.

  • env_id (int, optional, default:ENVIRONMENT_AUTO(-1)) –

    environment id of ailia execution. To retrieve an env_id value, use the

    get_environment_count() / get_environment() pair

    or

    get_gpu_environment_id() .

  • num_thread (int, optional, default: MULTITHREAD_AUTO(0)) –

    number of threads. valid values:

    MULTITHREAD_AUTO=0 [means the system’s logical processor count], or 1 to 32.

  • format (int, optional, default=NETWORK_IMAGE_FORMAT_RGB(1)) –

  • channel (int, optional, default=NETWORK_IMAGE_CHANNEL_FIRST(0)) –

  • range (int, optional, default=NETWORK_IMAGE_RANGE_S_FP32(3)) – use the network model’s expected input data format.

  • algorithm (int, optional, default=DETECTOR_ALGORITHM_YOLOV1(0)) –

    algorithm selector. valid values:

    DETECTOR_ALGORITHM_YOLOV1(0), DETECTOR_ALGORITHM_YOLOV2(1), DETECTOR_ALGORITHM_YOLOV3(2), DETECTOR_ALGORITHM_YOLOV4(3), DETECTOR_ALGORITHM_YOLOX(4), DETECTOR_ALGORITHM_SSD(8).

  • flags (int, optional, default=DETECTOR_FLAGS_NORMAL(0)) – reserved for future use.

  • debug_log (bool, optional, default: False) – enable trace logging and verbose log for ailia.

  • enable_layer_fusion (bool, optional, default: True) – enable layer fusion optimization.

  • memory_mode (int or None, optional, default: None) – memory management mode of ailia execution. To retrieve a memory_mode value, use get_memory_mode() .
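
A construction sketch for a YOLOv3-style detector (a hedged example: the weight file name and category count are placeholders, and it assumes the constants listed above are exposed as ailia module attributes):

    import ailia

    detector = ailia.Detector(
        None,                 # stream_path: None for an ONNX model
        "yolov3.onnx",        # placeholder weight file
        80,                   # placeholder category count (match the training setup)
        format=ailia.NETWORK_IMAGE_FORMAT_RGB,
        channel=ailia.NETWORK_IMAGE_CHANNEL_FIRST,
        range=ailia.NETWORK_IMAGE_RANGE_S_FP32,
        algorithm=ailia.DETECTOR_ALGORITHM_YOLOV3,
        env_id=ailia.get_gpu_environment_id(),
    )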

compute(image, threshold, iou)

run the ailia object detector on an input image.

Parameters
  • image (numpy.ndarray) – input image data. expects the result of cv2.imread(img_path, cv2.IMREAD_UNCHANGED) .

  • threshold (float) – object recognition threshold. objects whose probability is less than the threshold are not detected.

  • iou (float) – intersection over union threshold, used for non-maximum suppression.

get_object(idx)

get the details of the detected object specified by idx.

Parameters

idx (int) – object index. valid values : range(0, a.get_object_count())

Returns

category : int, object category index.
prob : float, object probability.
x : float, top-left x coordinate of the object rectangle.
y : float, top-left y coordinate of the object rectangle.
w : float, width of the object rectangle.
h : float, height of the object rectangle.

Return type

namedtuple(“DetectedObject”)

get_object_count()

get detected object count.

Returns

number of objects.

Return type

int
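
A sketch combining compute(), get_object_count() and get_object() (detector is the ailia.Detector from the construction sketch above; the image path, threshold, and IoU values are placeholders):

    import cv2

    img = cv2.imread("input.jpg", cv2.IMREAD_UNCHANGED)  # placeholder image path

    detector.compute(img, 0.4, 0.45)  # example threshold and IoU values

    for idx in range(detector.get_object_count()):
        obj = detector.get_object(idx)
        print(obj.category, obj.prob, obj.x, obj.y, obj.w, obj.h)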

run(image, threshold, iou)

run the ailia object detector on an input image.

Parameters
  • image (numpy.ndarray) – input image data. expects the result of cv2.imread(img_path, cv2.IMREAD_UNCHANGED) .

  • threshold (float) – object recognition threshold. objects whose probability is less than the threshold are not detected.

  • iou (float) – intersection over union threshold, used for non-maximum suppression.

Returns

numpy structured array(“DetectorObject”)

category : int, object category index.
prob : float, object probability.
box : numpy structured array(“DetectorRectangle”)

    x : float, top-left x coordinate of the object rectangle.
    y : float, top-left y coordinate of the object rectangle.
    w : float, width of the object rectangle.
    h : float, height of the object rectangle.

Return type

array of numpy structured array(“DetectorObject”)

set_anchors(data)

sets the anchor information for YOLOv2 or other models.

Parameters

data (numpy.ndarray) – extra data, such as anchors or biases.

set_input_shape(input_width, input_height)

set model input image size (for YOLOv3)

Parameters
  • input_width (int) – model input image width.

  • input_height (int) – model input image height.

class ailia.Classifier(stream_path, weight_path, env_id=-1, num_thread=0, format=0, channel=0, range=3, debug_log=False, enable_layer_fusion=True, memory_mode=None)

Bases: ailia.wrapper.Net

wrapper class for ailia image classifier instance

__init__(stream_path, weight_path, env_id=-1, num_thread=0, format=0, channel=0, range=3, debug_log=False, enable_layer_fusion=True, memory_mode=None)

constructor of ailia image classifier instance.

Parameters
  • stream_path (str) – network model file path (e.g. “foobar.prototxt”). When using an ONNX file, specify None.

  • weight_path (str) – network weight file path (e.g. “foobar.caffemodel”)

  • env_id (int, optional, default:ENVIRONMENT_AUTO(-1)) –

    environment id of ailia execution. To retrieve an env_id value, use the

    get_environment_count() / get_environment() pair

    or

    get_gpu_environment_id() .

  • num_thread (int, optional, default: MULTITHREAD_AUTO(0)) –

    number of threads. valid values:

    MULTITHREAD_AUTO=0 [means the system’s logical processor count], or 1 to 32.

  • format (int, optional, default=NETWORK_IMAGE_FORMAT_BGR(0)) –

  • channel (int, optional, default=NETWORK_IMAGE_CHANNEL_FIRST(0)) –

  • range (int, optional, default=NETWORK_IMAGE_RANGE_S_FP32(3)) – use the network model’s expected input data format.

  • debug_log (bool, optional, default: False) – enable trace logging and verbose log for ailia.

  • enable_layer_fusion (bool, optional, default: True) – enable layer fusion optimization.

  • memory_mode (int or None, optional, default: None) – memory management mode of ailia execution. To retrieve a memory_mode value, use get_memory_mode() .

compute(image, max_class_count=1)

run the ailia image classifier on an input image.

Parameters
  • image (numpy.ndarray) – input image data. expects the result of cv2.imread(img_path, cv2.IMREAD_UNCHANGED) .

  • max_class_count (int, optional, default=1) – maximum number of listed classes. finds the top N classes in descending order of probability.

get_class(idx)

get the class information specified by idx.

Parameters

idx (int) – class index. valid values : range(0, a.get_class_count())

Returns

category : int, class category index. prob : float, class probability.

Return type

namedtuple(“ClassifierClass”)

get_class_count()

get listed classes count.

Returns

number of classes.

Return type

int
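
A sketch of the classifier workflow (a hedged example: the weight file name and image path are placeholders):

    import cv2
    import ailia

    classifier = ailia.Classifier(None, "classifier.onnx")  # placeholder ONNX weights

    img = cv2.imread("input.jpg", cv2.IMREAD_UNCHANGED)  # placeholder image path
    classifier.compute(img, max_class_count=5)

    for idx in range(classifier.get_class_count()):
        info = classifier.get_class(idx)
        print(info.category, info.prob)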

run(image, max_class_count=1)

run the ailia image classifier on an input image.

Parameters
  • image (numpy.ndarray) – input image data. expects the result of cv2.imread(img_path, cv2.IMREAD_UNCHANGED) .

  • max_class_count (int, optional, default=1) – maximum number of listed classes. finds the top N classes in descending order of probability.

Returns

numpy structured array(“ClassifierClass”)

category : int, class category index. prob : float, class probability.

Return type

array of numpy structured array(“ClassifierClass”)

class ailia.PoseEstimator(stream_path, weight_path, env_id=-1, num_thread=0, algorithm=0, memory_mode=None, debug_log=False, enable_layer_fusion=True)

Bases: ailia.wrapper.Net

ailia pose estimator instance wrapper class

__init__(stream_path, weight_path, env_id=-1, num_thread=0, algorithm=0, memory_mode=None, debug_log=False, enable_layer_fusion=True)

constructor of ailia pose estimator instance.

Parameters
  • stream_path (str) – network model file path (e.g. “foobar.prototxt”). When using an ONNX file, specify None.

  • weight_path (str) – network weight file path (e.g. “foobar.caffemodel”)

  • env_id (int, optional, default:ENVIRONMENT_AUTO(-1)) –

    environment id of ailia execution. To retrieve an env_id value, use the

    get_environment_count() / get_environment() pair

    or

    get_gpu_environment_id() .

  • num_thread (int, optional, default: MULTITHREAD_AUTO(0)) –

    number of threads. valid values:

    MULTITHREAD_AUTO=0 [means the system’s logical processor count], or 1 to 32.

  • algorithm (int, optional, default=POSE_ALGORITHM_ACCULUS_POSE(0)) – algorithm selector; use one of the POSE_ALGORITHM_* constants.

  • debug_log (bool, optional, default: False) – enable trace logging and verbose log for ailia.

  • enable_layer_fusion (bool, optional, default: True) – enable layer fusion optimization.

  • memory_mode (int or None, optional, default: None) – memory management mode of ailia execution. To retrieve a memory_mode value, use get_memory_mode() .

compute(image)

run the ailia pose estimator on an input image.

Parameters

image (numpy.ndarray) – input image data. expects the result of cv2.imread(img_path, cv2.IMREAD_UNCHANGED) .

get_object_count()

get detected object count.

Returns

number of objects.

Return type

int

get_object_hand(idx)

get the details of the detected object specified by idx.

Parameters

idx (int) – object index. valid values : range(0, a.get_object_count())

Returns

points : list of namedtuple(“PoseEstimatorKeypoint”)

    x : float, keypoint position.
    y : float, keypoint position.
    z_local : float, keypoint position.
    score : float, keypoint probability.
    interpolated : int, 0 or 1.

total_score : float, sum of object probability.

Return type

namedtuple(“PoseEstimatorObjectHand”)

get_object_pose(idx)

get the details of the detected object specified by idx.

Parameters

idx (int) – object index. valid values : range(0, a.get_object_count())

Returns

points : list of namedtuple(“PoseEstimatorKeypoint”)

    x : float, keypoint position.
    y : float, keypoint position.
    z_local : float, keypoint position.
    score : float, keypoint probability.
    interpolated : int, 0 or 1.

total_score : float, sum of object probability.
num_valid_points : int, number of valid keypoints.
id : int, person id.
angle_x : float, object angle.
angle_y : float, object angle.
angle_z : float, object angle.

Return type

namedtuple(“PoseEstimatorObjectPose”)
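
A sketch of the pose workflow (a hedged example: the weight file name and image path are placeholders, and the default full-body algorithm is assumed, so get_object_pose() applies):

    import cv2
    import ailia

    estimator = ailia.PoseEstimator(None, "pose.onnx")  # placeholder ONNX weights

    img = cv2.imread("person.jpg", cv2.IMREAD_UNCHANGED)  # placeholder image path
    estimator.compute(img)

    for idx in range(estimator.get_object_count()):
        pose = estimator.get_object_pose(idx)
        print(pose.total_score, pose.num_valid_points)
        for kp in pose.points:
            print(kp.x, kp.y, kp.score)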

get_object_up_pose(idx)

get the details of the detected object specified by idx.

Parameters

idx (int) – object index. valid values : range(0, a.get_object_count())

Returns

points : list of namedtuple(“PoseEstimatorKeypoint”)

    x : float, keypoint position.
    y : float, keypoint position.
    z_local : float, keypoint position.
    score : float, keypoint probability.
    interpolated : int, 0 or 1.

total_score : float, sum of object probability.
num_valid_points : int, number of valid keypoints.
id : int, person id.
angle_x : float, object angle.
angle_y : float, object angle.
angle_z : float, object angle.

Return type

namedtuple(“PoseEstimatorObjectUpPose”)

run(image, max_class_count=1)

run the ailia pose estimator on an input image.

Parameters

image (numpy.ndarray) – input image data. expects the result of cv2.imread(img_path, cv2.IMREAD_UNCHANGED) .

Returns

  • when algorithm is POSE_ALGORITHM_ACCULUS_POSE, POSE_ALGORITHM_OPEN_POSE, POSE_ALGORITHM_LW_HUMAN_POSE, or POSE_ALGORITHM_OPEN_POSE_SINGLE_SCALE :

    array of numpy structured array(“PoseEstimatorObjectPose”)

        points : array of numpy structured array(“PoseEstimatorKeypoint”)

            x : float, keypoint position.
            y : float, keypoint position.
            z_local : float, keypoint position.
            score : float, keypoint probability.
            interpolated : int, 0 or 1.

        total_score : float, sum of object probability.
        num_valid_points : int, number of valid keypoints.
        id : int, person id.
        angle_x : float, object angle.
        angle_y : float, object angle.
        angle_z : float, object angle.

  • when algorithm is POSE_ALGORITHM_ACCULUS_UPPOSE or POSE_ALGORITHM_ACCULUS_UPPOSE_FPGA :

    array of numpy structured array(“PoseEstimatorObjectUpPose”), with the same fields as “PoseEstimatorObjectPose” above.

  • when algorithm is POSE_ALGORITHM_ACCULUS_HAND :

    array of numpy structured array(“PoseEstimatorObjectHand”)

        points : array of numpy structured array(“PoseEstimatorKeypoint”), with the same fields as above.
        total_score : float, sum of object probability.

set_threshold(thre)

Functions

ailia.set_temporary_cache_path(path)

set system cache path.

ailia.get_environment_count()

get available environments count.

Returns

available execution environments count

Return type

int

See also

get_environment

ailia.get_environment(idx)

get environment detail.

Parameters

idx (int) – env_id. 0 to (get_environment_count()-1).

Returns

execution environment detail.

id : int, used as the env_id argument of the wrapper class constructors.
type : str, one of the following : ‘CPU’, ‘BLAS’, ‘GPU’
name : str, detailed description of the execution environment.
backend : str, one of the following : ‘NONE’, ‘CUDA’, ‘MPS’, ‘VULKAN’
props : list(str), empty or containing one or more of the following : ‘LOWPOWER’, ‘FP16’

Return type

namedtuple(“Environment”)
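
A sketch of enumerating environments and selecting one (the printed fields follow the Environment namedtuple described above):

    import ailia

    for i in range(ailia.get_environment_count()):
        env = ailia.get_environment(i)
        print(env.id, env.type, env.name, env.backend, env.props)

    # or simply take the first GPU environment (ENVIRONMENT_AUTO(-1) if none exists)
    env_id = ailia.get_gpu_environment_id()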

ailia.get_environment_list()

get the details of all available environments.

Returns

execution environment details, one namedtuple(“Environment”) per environment.

id : int, used as the env_id argument of the wrapper class constructors.
type : str, one of the following : ‘CPU’, ‘BLAS’, ‘GPU’
name : str, detailed description of the execution environment.
backend : str, one of the following : ‘NONE’, ‘CUDA’, ‘MPS’, ‘VULKAN’
props : list(str), empty or containing one or more of the following : ‘LOWPOWER’, ‘FP16’

Return type

list of namedtuple(“Environment”)

ailia.get_version()

get version string of ailia library.

Returns

version string.

Return type

str

ailia.get_gpu_environment_id()

utility function to get GPU execution environment.

Returns

first env_id of GPU execution environment.

Return type

int

Note

if no GPU execution environment is available, this function returns ENVIRONMENT_AUTO(-1).

ailia.get_memory_mode(reduce_constant=False, ignore_input_with_initializer=False, reduce_interstage=False, reuse_interstage=False)

get the combined memory management mode value.

Returns

memory management mode (for ailia.Net() ).

Return type

int

Parameters
  • reduce_constant (bool, optional, default=False) – free a constant intermediate blob.

  • ignore_input_with_initializer (bool, optional, default=False) – consider all initializers as constant (even if they overlap with inputs).

  • reduce_interstage (bool, optional, default=False) – free an intermediate blob.

  • reuse_interstage (bool, optional, default=False) – reuse an available intermediate blob.
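
A sketch of passing a memory mode to ailia.Net() (a hedged example: the weight file name is a placeholder). Note that freeing or reusing intermediate blobs may make intermediate blob data unavailable afterwards (compare AiliaDataRemovedException below):

    import ailia

    # trade intermediate blob availability for lower memory usage
    memory_mode = ailia.get_memory_mode(
        reduce_constant=True,
        ignore_input_with_initializer=True,
        reduce_interstage=False,
        reuse_interstage=True,
    )

    net = ailia.Net(None, "model.onnx", memory_mode=memory_mode)  # placeholder weights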

Objects

class ailia.ClassifierClass(category, prob)

Bases: tuple

property category

Alias for field number 0

property prob

Alias for field number 1

class ailia.DetectorObject(category, prob, x, y, w, h)

Bases: tuple

property category

Alias for field number 0

property h

Alias for field number 5

property prob

Alias for field number 1

property w

Alias for field number 4

property x

Alias for field number 2

property y

Alias for field number 3

class ailia.Environment(id, type, name, backend, props)

Bases: tuple

property backend

Alias for field number 3

property id

Alias for field number 0

property name

Alias for field number 2

property props

Alias for field number 4

property type

Alias for field number 1

class ailia.PoseEstimatorKeypoint(x, y, z_local, score, interpolated)

Bases: tuple

property interpolated

Alias for field number 4

property score

Alias for field number 3

property x

Alias for field number 0

property y

Alias for field number 1

property z_local

Alias for field number 2

class ailia.PoseEstimatorObjectHand(points, total_score)

Bases: tuple

property points

Alias for field number 0

property total_score

Alias for field number 1

class ailia.PoseEstimatorObjectPose(points, total_score, num_valid_points, id, angle_x, angle_y, angle_z)

Bases: tuple

property angle_x

Alias for field number 4

property angle_y

Alias for field number 5

property angle_z

Alias for field number 6

property id

Alias for field number 3

property num_valid_points

Alias for field number 2

property points

Alias for field number 0

property total_score

Alias for field number 1

class ailia.PoseEstimatorObjectUpPose(points, total_score, num_valid_points, id, angle_x, angle_y, angle_z)

Bases: tuple

property angle_x

Alias for field number 4

property angle_y

Alias for field number 5

property angle_z

Alias for field number 6

property id

Alias for field number 3

property num_valid_points

Alias for field number 2

property points

Alias for field number 0

property total_score

Alias for field number 1

Exceptions

exception ailia.core.AiliaException

Bases: Exception

Base class for exceptions of ailia
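
A sketch of catching these exceptions around model loading (a hedged example: the weight file name is a placeholder):

    import ailia
    import ailia.core

    try:
        net = ailia.Net(None, "model.onnx")  # placeholder ONNX weights
    except ailia.core.AiliaFileIoException:
        print("model file could not be read")
    except ailia.core.AiliaException as e:
        # the base class catches any other ailia error
        print("ailia error:", e)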

exception ailia.core.AiliaInvalidArgumentException

Bases: ailia.core.AiliaException

Incorrect argument

Please check the arguments of the called API.

exception ailia.core.AiliaFileIoException

Bases: ailia.core.AiliaException

File access failed.

Please check whether the file exists and that you have access permission.

exception ailia.core.AiliaInvalidVersionException

Bases: ailia.core.AiliaException

Incorrect struct version

Please check the struct version passed to the API and pass the correct struct version.

exception ailia.core.AiliaBrokenDataException

Bases: ailia.core.AiliaException

A corrupt file was passed.

Please check that the model file is correct and pass a correct model.

exception ailia.core.AiliaResourceInsufficientException

Bases: ailia.core.AiliaException

Insufficient system resource

Please check the usage of system resources (e.g. threads), and call the API again after releasing system resources.

exception ailia.core.AiliaInvalidStateException

Bases: ailia.core.AiliaException

The internal status of the ailia is incorrect.

Please check the API documentation and the API call sequence.

exception ailia.core.AiliaUnsupportNetException

Bases: ailia.core.AiliaException

Unsupported network

An unsupported model file was passed to a wrapper class (e.g. Detector). Please check the documentation to confirm whether the model is supported.

exception ailia.core.AiliaInvalidLayerException

Bases: ailia.core.AiliaException

Incorrect layer weight, parameter, or input or output shape

A layer of the model has an incorrect weight, parameter, or similar issue. Please check the detail message with get_error_detail() and check the model.

exception ailia.core.AiliaInvalidParamException

Bases: ailia.core.AiliaException

The content of the parameter file is invalid.

Please check whether the parameter file is correct.

exception ailia.core.AiliaNotFoundException

Bases: ailia.core.AiliaException

The specified element was not found.

The element specified by the given name/index was not found. Please check whether the element exists in the model.

exception ailia.core.AiliaGpuUnsupportLayerException

Bases: ailia.core.AiliaException

A layer parameter not supported by the GPU was given.

A layer or parameter that is not supported by the GPU was given. Please check that the model file is correct and contact the support desk described in the documentation.

exception ailia.core.AiliaGpuErrorException

Bases: ailia.core.AiliaException

Error during processing on the GPU

Please check that the GPU driver is up to date and that VRAM is sufficient.

exception ailia.core.AiliaUnimplementedException

Bases: ailia.core.AiliaException

Unimplemented error

The called API is not available in the current environment. Please contact the support desk described in the documentation.

exception ailia.core.AiliaPermissionDeniedException

Bases: ailia.core.AiliaException

Operation not allowed

The called API is not allowed for this model (e.g. an encrypted model is used). Please check the model file and change the API call flow.

exception ailia.core.AiliaExpiredException

Bases: ailia.core.AiliaException

Model Expired

The model file has expired. Please regenerate the model with ailia_obfuscate_c.

exception ailia.core.AiliaUnsettledShapeException

Bases: ailia.core.AiliaException

The shape is not yet determined

The shape (e.g. output shape) is not yet determined. Before calling an API that returns the output shape, please set the input shape and run inference, then call the shape API.

exception ailia.core.AiliaDataRemovedException

Bases: ailia.core.AiliaException

The information was not available from the application

The specified information was removed due to optimization. If you need the information, please disable the optimization and call the API again.

exception ailia.core.AiliaDataHiddenException

Bases: ailia.core.AiliaDataRemovedException

exception ailia.core.AiliaLicenseNotFoundException

Bases: ailia.core.AiliaException

No valid license found

A license file is required for the trial version. Please contact the support desk described in the documentation.

exception ailia.core.AiliaLicenseBrokenException

Bases: ailia.core.AiliaException

License is broken

The license file required for the trial version is broken. Please contact the support desk described in the documentation.

exception ailia.core.AiliaLicenseExpiredException

Bases: ailia.core.AiliaException

License expired

The license file required for the trial version has expired. Please contact the support desk described in the documentation.

exception ailia.core.AiliaShapeHasExDimException

Bases: ailia.core.AiliaException

The shape has 5 or more dimensions.

The called API supports at most 4 dimensions. Please use the alternative API described in the API documentation.