The Xentara Torch Engine v1.0
User Manual
Torch Engine
See also
I/O Components in the Xentara user manual
Torch machine learning library.

Description

The Xentara Torch Engine module is a type of Xentara component. A module is a computing instance that allows Xentara to perform real-time processing using a neural network implemented with libtorch. The module generates a set of outputs that can be used for further processing and decision making.

Torch model

Requirements

The Xentara Torch Engine requires a pre-trained Torch model for the assigned task. The interface parameters and the type of task are configured during the training process and cannot be changed or adapted to a different behavior while the model is used by the Torch Engine in the evaluation process. The supported model type is TorchScript, which allows the model to be portable and optimized.
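
The snippet below is a minimal sketch of how such a model might be produced on the training side, assuming a small PyTorch network and the hypothetical file name model.pt; the actual architecture and export path depend entirely on the assigned task.

    import torch
    import torch.nn as nn

    # Hypothetical pre-trained network; the real architecture is defined by
    # the task the model was trained for.
    class TinyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(4, 2)

        def forward(self, x):
            return self.layer(x)

    model = TinyNet()
    model.eval()

    # Convert to TorchScript by tracing with a representative example input,
    # then save the portable, optimized module for use by the Torch Engine.
    example_input = torch.zeros(1, 4)
    scripted = torch.jit.trace(model, example_input)
    scripted.save("model.pt")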

Interface parameters and task type

Using a Torch model in the Xentara Torch Engine plug-in requires knowledge of the model's interface parameters and task type.

  • The interface parameters for the input and the output of the Torch model describe the number of data units, the data type of each, and the index of the individual data units.
  • The task type can be time series or non-time series. If the Torch model is a time series model, the number of time steps must also be known (see the probing sketch below).
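
If these properties are not documented, they can be probed on the training side. The sketch below, which assumes the hypothetical file model.pt and an example input of four Float32 values, simply runs the loaded TorchScript model on a dummy sample and reports the resulting shapes and data types.

    import torch

    # Load the exported TorchScript model (file name is an assumption).
    model = torch.jit.load("model.pt")
    model.eval()

    # Probe the interface with a dummy input of the shape and dtype the model
    # was trained with: for example [batch, num_inputs] for a simple model, or
    # [batch, time_steps, num_inputs] for a time series model.
    dummy = torch.zeros(1, 4, dtype=torch.float32)
    with torch.no_grad():
        output = model(dummy)

    print("input  shape:", tuple(dummy.shape), "dtype:", dummy.dtype)
    print("output shape:", tuple(output.shape), "dtype:", output.dtype)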

Device type

The Xentara Torch Engine supports two device types for processing, listed below:

  • CPU (x86-64)
  • CUDA (Nvidia)

The Torch Engine plug-in can run on an x86-64 processor. If the hardware is available, the Xentara Torch Engine can take advantage of the acceleration offered by NVIDIA GPUs through CUDA. CUDA is a parallel computing platform developed by NVIDIA that provides access to CUDA cores for accelerated data processing.
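
As an illustration only, the following sketch shows how a TorchScript model could be placed on the CUDA device when one is available and on the CPU otherwise; the file name model.pt and the input shape are assumptions.

    import torch

    # Select CUDA if an NVIDIA GPU is available, otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Load the model directly onto the selected device.
    model = torch.jit.load("model.pt", map_location=device)
    model.eval()

    # Input tensors must live on the same device as the model.
    sample = torch.zeros(1, 4, dtype=torch.float32, device=device)
    with torch.no_grad():
        result = model(sample)
    print("evaluated on:", device)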

Input

The input data of the Torch model in the Xentara Torch Engine plug-in is described by the following elements.

Datapoints

The input datapoints define the data applied to the input of the Torch model and are described by a list of datapoints. The order of the datapoints in the list defines their input index within the Torch model.
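
The sketch below illustrates this index mapping with hypothetical datapoint names and values: the position of a datapoint in the list determines the position of its value in the model's input tensor.

    import torch

    # Hypothetical list of input datapoints; the position in this list is the
    # index the value receives in the model's input tensor.
    datapoints = ["temperature", "pressure", "flow_rate", "valve_position"]

    # Current values read from the datapoints (illustrative numbers only).
    values = {"temperature": 21.5, "pressure": 1.013,
              "flow_rate": 0.8, "valve_position": 0.25}

    # Assemble the sample in list order, so index 0 holds the first datapoint.
    sample = torch.tensor([values[name] for name in datapoints],
                          dtype=torch.float32)
    print(sample)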

Data type

All inputs share the same data type, which must match the data type the Torch model requires.

The available input data types are listed below:

Datatype   Description
Float32    32-bit floating point
Float64    64-bit floating point
Int8       8-bit signed integer
UInt8      8-bit unsigned integer
Int16      16-bit signed integer
Int32      32-bit signed integer
Int64      64-bit signed integer
Bool       Boolean
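
For reference, these names correspond to the standard libtorch/PyTorch tensor data types; the mapping below is an illustrative sketch, not part of the plug-in's API.

    import torch

    # Illustrative mapping from the data type names above to PyTorch dtypes.
    DTYPE_MAP = {
        "Float32": torch.float32,
        "Float64": torch.float64,
        "Int8":    torch.int8,
        "UInt8":   torch.uint8,
        "Int16":   torch.int16,
        "Int32":   torch.int32,
        "Int64":   torch.int64,
        "Bool":    torch.bool,
    }

    # All input values share one dtype, matching what the model expects.
    sample = torch.tensor([1.0, 2.0, 3.0], dtype=DTYPE_MAP["Float32"])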

Simple Torch model type

In a simple Torch model, a single set of data is used. After a single input sample set has been collected, the data can be evaluated to provide the values at the output.

Time series model type

A time series model uses a pre-defined number of samples, collected over a specified interval, as the input data for processing. The input of the time series model is implemented as a fixed-size buffer with a queue (FIFO) structure that works as a sliding window over time.

Every time the collect task is called, the data samples from the input datapoints are pushed into the buffer until it reaches full capacity. Once the buffer is full, the evaluation process is enabled. When the buffer is full and a new data sample is pushed in, the oldest sample is removed.
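
The behaviour of this sliding window can be sketched in Python with a fixed-length deque; the time step count and number of inputs below are assumed values.

    import collections
    import torch

    TIME_STEPS = 10   # buffer size, fixed by the trained model (assumed value)
    NUM_INPUTS = 4    # number of input datapoints (assumed value)

    # Fixed-size FIFO buffer: once full, pushing a new sample drops the oldest.
    buffer = collections.deque(maxlen=TIME_STEPS)

    def collect(sample):
        """Push one sample (one value per input datapoint) into the window."""
        buffer.append(sample)

    def ready():
        """Evaluation is only enabled once the buffer has filled up."""
        return len(buffer) == TIME_STEPS

    # Simulate collect calls; after TIME_STEPS calls the window is full and
    # keeps sliding forward with every new sample.
    for step in range(15):
        collect(torch.zeros(NUM_INPUTS))
        if ready():
            window = torch.stack(list(buffer))   # shape: [TIME_STEPS, NUM_INPUTS]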

To configure the Xentara Torch Engine for a time series model, the time steps must be defined. Optionally, an additional compute delay property can be defined.

Time steps

Time steps define the size of the buffer and the number of samples required to evaluate the input data. This number is defined during the training process of the model and does not change.

Compute Delay

The compute delay defines the number of additional collection steps the engine waits after the evaluate task is triggered. The default value is 0, which means that once evaluation is triggered, the engine processes all the data in the buffer at that moment. For example, if this number is 5, the engine collects another 5 samples over the next 5 steps and executes the evaluation after the 5th step. The compute delay cannot be negative.
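
The following sketch illustrates how such a delay could behave, assuming a compute delay of 5 and the hypothetical helper run_evaluation: a triggered request is held back until the requested number of additional samples has been collected.

    import collections

    TIME_STEPS = 10
    COMPUTE_DELAY = 5        # assumed value; 0 means evaluate immediately

    buffer = collections.deque(maxlen=TIME_STEPS)
    pending_delay = None     # None means no evaluation has been requested

    def run_evaluation(window):
        print(f"evaluating {len(window)} samples")

    def on_evaluate_triggered():
        """Run immediately with delay 0, otherwise wait for more collects."""
        global pending_delay
        if COMPUTE_DELAY == 0:
            run_evaluation(list(buffer))
        else:
            pending_delay = COMPUTE_DELAY

    def on_collect(sample):
        """Push a new sample, then run a delayed evaluation once it expires."""
        global pending_delay
        buffer.append(sample)
        if pending_delay is not None:
            pending_delay -= 1
            if pending_delay == 0:
                run_evaluation(list(buffer))
                pending_delay = None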

Output

The number, the index, and the data type of the outputs are defined during the training process and must match the output configuration in the Xentara model.
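
As an illustration, the sketch below assigns each value of the model's output tensor to an output datapoint by index; the datapoint names, file name, and input shape are assumptions.

    import torch

    # Hypothetical output datapoints; index i receives output tensor value i.
    output_datapoints = ["anomaly_score", "predicted_load"]

    model = torch.jit.load("model.pt")
    model.eval()

    with torch.no_grad():
        output = model(torch.zeros(1, 4))      # assumed input shape

    # Drop the batch dimension and assign each value to its datapoint by index.
    values = output.squeeze(0).tolist()
    for name, value in zip(output_datapoints, values):
        print(f"{name} = {value}")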

Accessing Torch Engine module members

See also
Accessing Xentara Elements in the Xentara user manual

In addition to the standard members of Xentara I/O points, the module has the following attributes:

Attributes
quality      The quality of the channel’s value. See The Quality Attribute in the Xentara user manual for an explanation of an I/O point’s quality.
updateTime   The last time the channel’s value was polled.
error        The error that occurred the last time the output’s value was updated. The attribute contains one of the error string values describing the error state.

Module Tasks

Collect Data

The collect task reads the input samples from the datapoints as defined. In the case of a time series model, the collect task must be called at regular intervals, which defines the sample rate. This rate must match the sampling rate that the Torch model was trained with.

Evaluate Data

The evaluate task is enabled once the input buffer reaches full capacity for the first time, and it always evaluates the data present in the buffer at the moment the task is triggered, regardless of the execution time. If an evaluation is triggered while another evaluation is in progress, it is executed after the other one has finished. If there are multiple evaluation requests while another evaluation is in progress, only the most recent request is executed and the rest are ignored.
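
This coalescing behaviour can be sketched as follows, assuming a background worker thread and a user-supplied evaluate_fn; it is an illustration of the described behaviour, not the plug-in's actual implementation.

    import threading

    class EvaluationQueue:
        """Coalesce evaluation requests: while one evaluation runs, only the
        most recently requested buffer snapshot is kept; older pending
        requests are discarded."""

        def __init__(self, evaluate_fn):
            self._evaluate_fn = evaluate_fn
            self._lock = threading.Lock()
            self._running = False
            self._pending = None          # most recent pending snapshot

        def trigger(self, snapshot):
            with self._lock:
                if self._running:
                    self._pending = snapshot   # replace any older request
                    return
                self._running = True
            threading.Thread(target=self._run, args=(snapshot,)).start()

        def _run(self, snapshot):
            while True:
                self._evaluate_fn(snapshot)
                with self._lock:
                    if self._pending is None:
                        self._running = False
                        return
                    snapshot, self._pending = self._pending, None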

Tasks
collect    Reads the input datapoints and pushes the data to the neural network model.
evaluate   Triggers the evaluation process.

Module Events

Evaluated event

The evaluated event is triggered when the evaluation process has finished and the output data are ready to be collected. The evaluated event can trigger tasks that use the output.

Events
evaluated    This event is triggered once the evaluation process is complete.
Note
The Xentara Torch Engine cannot modify or adapt the Torch model requirements automatically. It is the user's responsibility to ensure that the proper configuration parameters of the Torch model are set in the Xentara Torch Engine plug-in model.