The Xentara Torch Engine v1.1
User Manual
The Xentara Torch Engine Module is a type of Xentara component. A Module is a computing instance that enables real-time processing in Xentara using neural networks implemented with libtorch. The Module generates a set of outputs that can be used for further processing and decision making.
The Xentara Torch Engine requires a pre-trained Torch model for the assigned task. The interface parameters and the type of task are configured during the training process; they cannot be changed, nor can the behaviour be adapted, while the model is being used by the Torch Engine in the evaluation process. The supported model type is a TorchScript model, which makes the model portable and optimized.
Using a Torch model in the Xentara Torch Engine plug-in requires knowing the model's interface parameters and task type.
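As an illustration, a TorchScript model is typically produced on the training side by tracing or scripting a trained PyTorch model. The following Python sketch is purely illustrative; the network architecture and the file name example_model.pt are assumptions and are not part of the Torch Engine itself.

```python
import torch
import torch.nn as nn

# Hypothetical model with two Float32 inputs and one output, for illustration only.
class ExampleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x):
        return self.layers(x)

model = ExampleNet().eval()

# Trace the trained model with a sample input to obtain a portable TorchScript module,
# then save it to the file that the Torch Engine will load.
example_input = torch.zeros(1, 2, dtype=torch.float32)
torch.jit.trace(model, example_input).save("example_model.pt")
```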
The Xentara Torch Engine supports two device types for processing, described below.
The Torch Engine plug-in can run on an x86-64 processor (CPU). If suitable hardware is available, the Xentara Torch Engine can also make use of an NVIDIA GPU through CUDA. CUDA is a parallel computing platform developed by NVIDIA that provides access to CUDA cores for accelerating data processing.
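As a minimal sketch, the two device types map to libtorch's device selection as shown below (using the Python API; the model file name is the assumed example from above).

```python
import torch

# Use the CUDA device when an NVIDIA GPU is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the TorchScript model directly onto the selected device and evaluate a sample.
model = torch.jit.load("example_model.pt", map_location=device)
sample = torch.zeros(1, 2, dtype=torch.float32, device=device)
output = model(sample)
```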
The input data of the Torch model in the Xentara Torch Engine plug-in is described by the following elements.
The input data applied to the Torch model is described by a list of datapoints. The order of the datapoints in the list defines their input index in the Torch model.
All inputs share the same datatype, which must match the datatype the Torch model requires.
The available input datatypes are listed below:
Datatype | Description |
---|---|
Float32 | 32-bit floating point |
Float64 | 64-bit floating point |
Int8 | 8-bit signed integer |
UInt8 | 8-bit unsigned integer |
Int16 | 16-bit signed integer |
Int32 | 32-bit signed integer |
Int64 | 64-bit signed integer |
Bool | Boolean |
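For illustration, the datatypes in the table above correspond to the following torch dtypes when a collected sample is converted into an input tensor. The mapping, the sample values, and the choice of Float32 are assumptions for this sketch.

```python
import torch

# Assumed mapping from the engine's input datatype names to torch dtypes.
DATATYPE_TO_TORCH = {
    "Float32": torch.float32,
    "Float64": torch.float64,
    "Int8": torch.int8,
    "UInt8": torch.uint8,
    "Int16": torch.int16,
    "Int32": torch.int32,
    "Int64": torch.int64,
    "Bool": torch.bool,
}

# All input datapoints share one datatype; a single sample becomes a 1 x N tensor,
# where N is the number of datapoints in the input list.
sample = [0.5, 1.25, -3.0]
input_tensor = torch.tensor([sample], dtype=DATATYPE_TO_TORCH["Float32"])
```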
In a simple Torch model, a single set of data is used. After a single input sample has been collected, the data can be evaluated and the resulting values are provided at the outputs.
A time series model uses a pre-defined number of samples, collected over a specified interval, as the input data for processing. The input of the time series model is implemented as a fixed-size FIFO buffer that works as a sliding window over time.
Every time the collect task is called, the samples from the input datapoints are pushed into the buffer until it reaches its full capacity. Once the buffer is full, the evaluation process is enabled. When the buffer is full and a new sample is pushed into it, the oldest sample is removed.
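The sliding-window behaviour can be sketched in Python with a fixed-size deque. The buffer size, the tensor shape (1, time steps, number of datapoints), and the Float32 datatype are assumptions for illustration, not the engine's internal implementation.

```python
from collections import deque

import torch

TIME_STEPS = 10  # assumed buffer size, fixed by the trained model

# A deque with maxlen acts as a FIFO sliding window: appending to a full
# deque automatically discards the oldest sample.
buffer = deque(maxlen=TIME_STEPS)

def collect(sample):
    """Push one sample (the current values of all input datapoints)."""
    buffer.append(sample)

def can_evaluate():
    """Evaluation is only enabled once the buffer has been filled."""
    return len(buffer) == TIME_STEPS

def evaluate(model):
    """Evaluate the current window as a (1, TIME_STEPS, N) Float32 tensor."""
    window = torch.tensor([list(buffer)], dtype=torch.float32)
    return model(window)
```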
To configure the Xentara Torch Engine for a time series model, it is mandatory to define the Time steps. Optionally, an additional Compute Delay property can be defined.
Time steps define the size of the buffer and the number of samples required to evaluate the input data. This number is defined during the training process of the model and cannot be changed.
Compute delay defines the number of collect steps by which the evaluation is delayed after the evaluate task is triggered. The default value is 0, which means that as soon as the evaluation is triggered, all the data in the buffer at that moment is processed. For example, if this value is 5, the engine collects another 5 samples over the next 5 steps and executes the evaluation after the 5th step. The compute delay value cannot be negative.
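A minimal sketch of the compute delay behaviour follows; the class and method names are illustrative only and not part of the Torch Engine API.

```python
class ComputeDelay:
    """Delays a requested evaluation by a configured number of collect steps."""

    def __init__(self, delay=0):
        self.delay = delay      # configured compute delay (0 = evaluate immediately)
        self.pending = None     # collect steps remaining before the pending evaluation

    def request_evaluation(self, run_evaluation):
        if self.delay == 0:
            run_evaluation()    # default: evaluate the buffer contents right away
        else:
            self.pending = self.delay

    def on_collect_step(self, run_evaluation):
        if self.pending is None:
            return
        self.pending -= 1
        if self.pending == 0:   # e.g. delay = 5: runs after the 5th further sample
            run_evaluation()
            self.pending = None
```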
The number, the index, and the data type of the outputs are defined during the training process and must match the output configuration in the Xentara model.
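For illustration, the check below shows how the outputs returned by a TorchScript model could be compared against an assumed output configuration; the output name, count, and datatype are hypothetical and continue the earlier example model.

```python
import torch

# Load the assumed example model and run it on a dummy sample.
model = torch.jit.load("example_model.pt")
result = model(torch.zeros(1, 2, dtype=torch.float32))
outputs = result if isinstance(result, tuple) else (result,)

# Assumed output configuration: one Float32 output at index 0.
expected = [("output_0", torch.float32)]

assert len(outputs) == len(expected), "output count does not match the configuration"
for tensor, (name, dtype) in zip(outputs, expected):
    assert tensor.dtype == dtype, f"{name}: unexpected datatype"
```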
In addition to the standard members of Xentara elements, Modules have the following attributes:
Attribute | Description |
---|---|
quality | The quality of the channel’s value. See xentara_io_points_quality in the Xentara user manual for an explanation of an I/O point’s quality. |
updateTime | The last time the channel’s value was polled. |
error | The error that occurred the last time the output’s value was updated. The attribute contains an error string describing the error state. |
The collect task reads the input samples from the datapoints as defined. In the case of a time series, the collect task must be called at regular intervals, which defines the sample rate. This rate must match the sampling rate the Torch model was trained with.
The evaluate task is enabled once the input buffer reaches its full capacity for the first time. It always evaluates the data that is in the buffer at the moment the task is triggered, regardless of the execution time. If an evaluation is triggered while another evaluation is in progress, it is executed after the other one has finished. If multiple evaluation requests arrive while another evaluation is in progress, only the most recent request is executed and the rest are ignored; see the sketch after the task table below.
Task | Description |
---|---|
collect | Reads the input datapoints and pushes the data into the neural network model. |
evaluate | Triggers the evaluation process. |
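The scheduling behaviour described above, where a triggered evaluation waits for a running one and only the most recent pending request survives, could be sketched as follows; the class is illustrative and not part of the Torch Engine API.

```python
import threading

class EvaluationScheduler:
    """Runs evaluations one at a time; only the most recent pending request is kept."""

    def __init__(self, evaluate_fn):
        self._evaluate_fn = evaluate_fn
        self._lock = threading.Lock()
        self._running = False
        self._pending = False

    def trigger(self):
        with self._lock:
            if self._running:
                self._pending = True        # remember only the latest request
                return
            self._running = True
        threading.Thread(target=self._run).start()

    def _run(self):
        while True:
            self._evaluate_fn()
            with self._lock:
                if self._pending:
                    self._pending = False   # run once more for the latest request
                    continue
                self._running = False
                return
```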
The evaluated event is triggered when the evaluation process is finished and the output data is ready to be collected. The evaluated event can be used to trigger tasks that use the output.
Event | Description |
---|---|
evaluated | This event is triggered once the evaluation process is complete. |