minibatchpredict
Syntax
[Y1,...,YM] = minibatchpredict(___,Name=Value)
Description
[Y1,...,YM] = minibatchpredict(___,Name=Value) specifies additional options using one or more name-value arguments.
Examples
Make Predictions Using dlnetwork Object
This example shows how to make predictions using a dlnetwork object by looping over mini-batches.
For large data sets, or when predicting on hardware with limited memory, make predictions by looping over mini-batches of the data using the minibatchpredict function.
Load dlnetwork Object
Load a trained dlnetwork object and the corresponding class names. The neural network has one input and two outputs. It takes images of handwritten digits as input, and predicts the digit label and angle of rotation.
load dlnetDigits
Load Data for Prediction
Load the digits test data for prediction.
load DigitsDataTest
View the class names.
classNames
classNames = 10x1 cell
{'0'}
{'1'}
{'2'}
{'3'}
{'4'}
{'5'}
{'6'}
{'7'}
{'8'}
{'9'}
View some of the images and the corresponding labels and angles of rotation.
numObservations = size(XTest,4);
numPlots = 9;
idx = randperm(numObservations,numPlots);
figure
for i = 1:numPlots
    nexttile(i)
    I = XTest(:,:,:,idx(i));
    label = labelsTest(idx(i));
    imshow(I)
    title("Label: " + string(label) + newline + "Angle: " + anglesTest(idx(i)))
end
Make Predictions
Make predictions using the minibatchpredict function and convert the classification scores to labels using the scores2label function. By default, the minibatchpredict function uses a GPU if one is available. Using a GPU requires a Parallel Computing Toolbox™ license and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). Otherwise, the function uses the CPU. To specify the execution environment, use the ExecutionEnvironment option.
[scoresTest,Y2Test] = minibatchpredict(net,XTest);
Y1Test = scores2label(scoresTest,classNames);
Visualize some of the predictions.
idx = randperm(numObservations,numPlots);
figure
for i = 1:numPlots
    nexttile(i)
    I = XTest(:,:,:,idx(i));
    label = Y1Test(idx(i));
    imshow(I)
    title("Label: " + string(label) + newline + "Angle: " + Y2Test(idx(i)))
end
Input Arguments
net — Neural network
dlnetwork object
Neural network, specified as a dlnetwork object.
images — Image data
numeric array | dlarray object | datastore | minibatchqueue object
Image data, specified as a numeric array, dlarray object, datastore, or minibatchqueue object.
Tip
For sequences of images, for example video data, use the sequences input argument.
If you have data that fits in memory and does not require additional processing, then it is usually easiest to specify the input data as a numeric array. If you want to make predictions with image files stored on disk, or want to apply additional processing, then it is usually easiest to use datastores.
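For example, with the digits network and test images loaded earlier (a sketch; it assumes `XTest` is an h-by-w-by-c-by-N array that fits in memory), in-memory prediction is a single call:

```matlab
% XTest is an h-by-w-by-c-by-N numeric array that fits in memory.
% minibatchpredict loops over mini-batches internally.
[scores,angles] = minibatchpredict(net,XTest,MiniBatchSize=64);
labels = scores2label(scores,classNames);
```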
Tip
Neural networks expect input data with a specific layout. For example, image classification networks typically expect an image to be represented as an h-by-w-by-c numeric array, where h, w, and c are the height, width, and number of channels of the image, respectively. Most neural networks have an input layer that specifies the expected layout of the data.
Most datastores and functions output data in the layout that the network expects. If your data is in a different layout from what the network expects, then indicate that your data has a different layout by using the InputDataFormats option or by specifying the input data as a formatted dlarray object. It is usually easier to adjust the InputDataFormats option than to preprocess the input data.
For neural networks that do not have input layers, you must use the InputDataFormats option or use formatted dlarray objects.
For more information, see Deep Learning Data Formats.
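As a sketch of both approaches, suppose your images are stored channels-first as a c-by-h-by-w-by-N array (a hypothetical layout; most image networks expect h-by-w-by-c-by-N):

```matlab
% X is c-by-h-by-w-by-N (channels first).
% Option 1: describe the layout with the InputDataFormats option.
scores = minibatchpredict(net,X,InputDataFormats="CSSB");

% Option 2: label the dimensions with a formatted dlarray object.
Xdl = dlarray(X,"CSSB");   % channel, spatial, spatial, batch
scores = minibatchpredict(net,Xdl);
```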
Numeric Array or dlarray Object
For data that fits in memory and does not require additional processing, you can specify a data set of images as a numeric array or a dlarray object.
The layout of numeric arrays and unformatted dlarray objects depends on the type of image data and must be consistent with the InputDataFormats option.
Most networks expect image data in these layouts:
Data | Layout |
---|---|
2-D images | h-by-w-by-c-by-N array, where h, w, and c are the height, width, and number of channels of the images, respectively, and N is the number of images. Data in this layout has the data format "SSCB" (spatial, spatial, channel, batch). |
3-D images | h-by-w-by-d-by-c-by-N array, where h, w, d, and c are the height, width, depth, and number of channels of the images, respectively, and N is the number of images. Data in this layout has the data format "SSSCB" (spatial, spatial, spatial, channel, batch). |
For data in a different layout, indicate that your data has a different layout by using the InputDataFormats option or use a formatted dlarray object. For more information, see Deep Learning Data Formats.
Datastore
Datastores read batches of images and targets. Datastores are best suited when you have data that does not fit in memory or when you want to apply augmentations or transformations to the data.
For image data, the minibatchpredict function supports these datastores:
Datastore | Description | Example Usage |
---|---|---|
ImageDatastore | Datastore of images saved on disk. | Make predictions with images saved on disk, where the images are the same size. When the images are different sizes, use an AugmentedImageDatastore object. |
AugmentedImageDatastore | Datastore that applies random affine geometric transformations, including resizing. | Make predictions with images saved on disk, where the images are different sizes. When you make predictions using an augmented image datastore, do not apply additional augmentations such as rotation, reflection, shear, and translation. |
TransformedDatastore | Datastore that transforms batches of data read from an underlying datastore using a custom transformation function. | Apply custom transformations to datastore output, or transform datastores with outputs that the minibatchpredict function does not support. |
CombinedDatastore | Datastore that reads from two or more underlying datastores. | Make predictions using networks with multiple inputs. |
Custom mini-batch datastore | Custom datastore that returns mini-batches of data. | Make predictions using data in a layout that other datastores do not support. For details, see Develop Custom Mini-Batch Datastore. |
Tip
Use augmentedImageDatastore for efficient preprocessing of images for deep learning, including image resizing. Do not use the ReadFcn option of ImageDatastore objects.
ImageDatastore allows batch reading of JPG or PNG image files using prefetching. If you set the ReadFcn option to a custom function, then ImageDatastore does not prefetch and is usually significantly slower.
You can use other built-in datastores for making predictions by using the transform and combine functions. These functions can convert the data read from datastores to the layout required by the minibatchpredict function. The required layout of the datastore output depends on the neural network architecture. For more information, see Datastore Customization.
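For instance, a transformed datastore can rescale images read from disk before prediction. This is a sketch; the folder name "digitsData" is hypothetical:

```matlab
% Read images from a folder on disk and rescale pixel values to [0,1]
% before passing them to the network.
imds = imageDatastore("digitsData",IncludeSubfolders=true);
tds = transform(imds,@(X) rescale(im2double(X)));
scores = minibatchpredict(net,tds);
```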
minibatchqueue Object
For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object.
If you specify data as a minibatchqueue object, then the minibatchpredict function ignores the MiniBatchSize property of the object and uses the MiniBatchSize option instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".
Note
This argument supports complex-valued predictors and targets.
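A minimal minibatchqueue sketch, assuming `XTest` is an h-by-w-by-c-by-N image array as in the earlier example:

```matlab
% Wrap the test images in a minibatchqueue for custom mini-batch handling.
ads = arrayDatastore(XTest,IterationDimension=4);
mbq = minibatchqueue(ads,MiniBatchFormat="SSCB");

% minibatchpredict ignores mbq.MiniBatchSize and uses its own option.
scores = minibatchpredict(net,mbq,MiniBatchSize=64);
```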
sequences — Sequence or time series data
cell array of numeric arrays | cell array of dlarray objects | numeric array | dlarray object | datastore | minibatchqueue object
Sequence or time series data, specified as a numeric array, a cell array of numeric arrays, a dlarray object, a cell array of dlarray objects, a datastore, or a minibatchqueue object.
If you have sequences of the same length that fit in memory and do not require additional processing, then it is usually easiest to specify the input data as a numeric array. If you have sequences of different lengths that fit in memory and do not require additional processing, then it is usually easiest to specify the input data as a cell array of numeric arrays. If you want to make predictions with sequences stored on disk, or want to apply additional processing such as custom transformations, then it is usually easiest to use datastores.
Tip
Neural networks expect input data with a specific layout. For example, vector-sequence classification networks typically expect a sequence to be represented as a t-by-c numeric array, where t and c are the number of time steps and channels of the sequences, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.
Most datastores and functions output data in the layout that the network expects. If your data is in a different layout from what the network expects, then indicate that your data has a different layout by using the InputDataFormats option or by specifying the input data as a formatted dlarray object. It is usually easier to adjust the InputDataFormats option than to preprocess the input data.
For neural networks that do not have input layers, you must use the InputDataFormats option or use formatted dlarray objects.
For more information, see Deep Learning Data Formats.
Numeric Array, dlarray Object, or Cell Array
For data that fits in memory and does not require additional processing like custom transformations, you can specify a single sequence as a numeric array or dlarray object, or a data set of sequences as a cell array of numeric arrays or dlarray objects.
For cell array input, the cell array must be an N-by-1 cell array of numeric arrays or dlarray objects, where N is the number of observations. The size and shape of the numeric arrays or dlarray objects that represent the sequences depend on the type of sequence data and must be consistent with the InputDataFormats option.
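As a sketch (assuming `net` is a vector-sequence network with 3 input channels, unlike the digits network in the earlier example), a cell array input looks like this:

```matlab
% Two vector sequences with 3 channels and different lengths,
% stored as an N-by-1 cell array of t-by-c matrices.
sequences = {rand(100,3); rand(80,3)};
scores = minibatchpredict(net,sequences);
```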
This table describes the expected layout of data for a neural network with a sequence input layer.
Data | Layout |
---|---|
Vector sequences | s-by-c matrices, where s and c are the numbers of time steps and channels (features) of the sequences, respectively. |
1-D image sequences | h-by-c-by-s arrays, where h and c correspond to the height and number of channels of the images, respectively, and s is the sequence length. |
2-D image sequences | h-by-w-by-c-by-s arrays, where h, w, and c correspond to the height, width, and number of channels of the images, respectively, and s is the sequence length. |
3-D image sequences | h-by-w-by-d-by-c-by-s arrays, where h, w, d, and c correspond to the height, width, depth, and number of channels of the 3-D images, respectively, and s is the sequence length. |
For data in a different layout, indicate that your data has a different layout by using the InputDataFormats option or use a formatted dlarray object. For more information, see Deep Learning Data Formats.
Datastore
Datastores read batches of sequences and targets. Datastores are best suited when you have data that does not fit in memory or when you want to apply transformations to the data.
For sequence and time-series data, the minibatchpredict function supports these datastores:
Datastore | Description | Example Usage |
---|---|---|
TransformedDatastore | Datastore that transforms batches of data read from an underlying datastore using a custom transformation function. | Apply custom transformations to datastore output, or transform datastores with outputs that the minibatchpredict function does not support. |
CombinedDatastore | Datastore that reads from two or more underlying datastores. | Make predictions using networks with multiple inputs. |
Custom mini-batch datastore | Custom datastore that returns mini-batches of data. | Make predictions using data in a layout that other datastores do not support. For details, see Develop Custom Mini-Batch Datastore. |
You can use other built-in datastores for prediction by using the transform and combine functions. These functions can convert the data read from datastores to the layout required by the minibatchpredict function. For example, you can transform and combine data read from in-memory arrays and CSV files using ArrayDatastore and TabularTextDatastore objects, respectively. The required layout of the datastore output depends on the neural network architecture. For more information, see Datastore Customization.
minibatchqueue Object
For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object.
If you specify data as a minibatchqueue object, then the minibatchpredict function ignores the MiniBatchSize property of the object and uses the MiniBatchSize option instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".
Note
This argument supports complex-valued predictors and targets.
features — Feature or tabular data
numeric array | dlarray object | table | datastore | minibatchqueue object
Feature or tabular data, specified as a numeric array, dlarray object, table, datastore, or minibatchqueue object.
If you have data that fits in memory and does not require additional processing, then it is usually easiest to specify the input data as a numeric array or table. If you want to make predictions with feature or tabular data stored on disk, or want to apply additional processing such as custom transformations, then it is usually easiest to use datastores.
Tip
Neural networks expect input data with a specific layout. For example, feature classification networks typically expect feature and tabular data to be represented as a 1-by-c vector, where c is the number of features of the data. Neural networks typically have an input layer that specifies the expected layout of the data.
Most datastores and functions output data in the layout that the network expects. If your data is in a different layout from what the network expects, then indicate that your data has a different layout by using the InputDataFormats option or by specifying the input data as a formatted dlarray object. It is usually easier to adjust the InputDataFormats option than to preprocess the input data.
For neural networks that do not have input layers, you must use the InputDataFormats option or use formatted dlarray objects.
For more information, see Deep Learning Data Formats.
Numeric Array or dlarray Object
For feature data that fits in memory and does not require additional processing like custom transformations, you can specify feature data as a numeric array or dlarray object.
The layout of numeric arrays and unformatted dlarray objects must be consistent with the InputDataFormats option. Most networks with feature input expect input data specified as an N-by-numFeatures array, where N is the number of observations and numFeatures is the number of features of the input data.
Table
For feature data that fits in memory and does not require additional processing like custom transformations, you can specify feature data as a table.
To specify feature data as a table, specify a table with numObservations rows and numFeatures+1 columns, where numObservations and numFeatures are the number of observations and channels of the input data. The minibatchpredict function uses the first numFeatures columns as the input features and uses the last column as the targets.
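A sketch of this table layout, assuming a hypothetical network with a 4-channel feature input:

```matlab
% 50 observations with 4 features each; the last column holds the targets.
features = rand(50,4);
targets = rand(50,1);
tbl = array2table([features targets]);
scores = minibatchpredict(net,tbl);
```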
Datastore
Datastores read batches of feature data and targets. Datastores are best suited when you have data that does not fit in memory or when you want to apply transformations to the data.
For feature and tabular data, the minibatchpredict function supports these datastores:
Datastore | Description | Example Usage |
---|---|---|
TransformedDatastore | Datastore that transforms batches of data read from an underlying datastore using a custom transformation function. | Apply custom transformations to datastore output, or transform datastores with outputs that the minibatchpredict function does not support. |
CombinedDatastore | Datastore that reads from two or more underlying datastores. | Make predictions using neural networks with multiple inputs. |
Custom mini-batch datastore | Custom datastore that returns mini-batches of data. | Make predictions using data in a layout that other datastores do not support. For details, see Develop Custom Mini-Batch Datastore. |
You can use other built-in datastores for making predictions by using the transform and combine functions. These functions can convert the data read from datastores to the table or cell array format required by minibatchpredict. For more information, see Datastore Customization.
minibatchqueue Object
For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object.
If you specify data as a minibatchqueue object, then the minibatchpredict function ignores the MiniBatchSize property of the object and uses the MiniBatchSize option instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".
Note
This argument supports complex-valued predictors and targets.
data — Generic data or combinations of data types
numeric array | dlarray object | datastore | minibatchqueue object
Generic data or combinations of data types, specified as a numeric array, dlarray object, datastore, or minibatchqueue object.
If you have data that fits in memory and does not require additional processing, then it is usually easiest to specify the input data as a numeric array. If you want to make predictions with data stored on disk, or want to apply additional processing, then it is usually easiest to use datastores.
Tip
Neural networks expect input data with a specific layout. For example, vector-sequence classification networks typically expect a sequence to be represented as a t-by-c numeric array, where t and c are the number of time steps and channels of the sequences, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.
Most datastores and functions output data in the layout that the network expects. If your data is in a different layout from what the network expects, then indicate that your data has a different layout by using the InputDataFormats option or by specifying the input data as a formatted dlarray object. It is usually easier to adjust the InputDataFormats option than to preprocess the input data.
For neural networks that do not have input layers, you must use the InputDataFormats option or use formatted dlarray objects.
For more information, see Deep Learning Data Formats.
Numeric Array or dlarray Object
For data that fits in memory and does not require additional processing like custom transformations, you can specify data as a numeric array or dlarray object.
For a neural network with an inputLayer object, the expected layout of the input data is given by the InputFormat property of the layer.
For data in a different layout, indicate that your data has a different layout by using the InputDataFormats option or use a formatted dlarray object. For more information, see Deep Learning Data Formats.
Datastores
Datastores read batches of data and targets. Datastores are best suited when you have data that does not fit in memory or when you want to apply transformations to the data.
For generic data or combinations of data types, the minibatchpredict function supports these datastores:
Datastore | Description | Example Usage |
---|---|---|
TransformedDatastore | Datastore that transforms batches of data read from an underlying datastore using a custom transformation function. | Apply custom transformations to datastore output, or transform datastores with outputs that the minibatchpredict function does not support. |
CombinedDatastore | Datastore that reads from two or more underlying datastores. | Make predictions using neural networks with multiple inputs. |
Custom mini-batch datastore | Custom datastore that returns mini-batches of data. | Make predictions using data in a format that other datastores do not support. For details, see Develop Custom Mini-Batch Datastore. |
You can use other built-in datastores for making predictions by using the transform and combine functions. These functions can convert the data read from datastores to the table or cell array format required by minibatchpredict. For more information, see Datastore Customization.
minibatchqueue Object
For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object.
If you specify data as a minibatchqueue object, then the minibatchpredict function ignores the MiniBatchSize property of the object and uses the MiniBatchSize option instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".
Note
This argument supports complex-valued predictors.
X1,...,XN — In-memory data for multi-input network
numeric array | dlarray object | cell array
In-memory data for multi-input networks, specified as numeric arrays, dlarray objects, or cell arrays.
For multi-input networks, if you have data that fits in memory and does not require additional processing, then it is usually easiest to specify the input data as in-memory arrays. If you want to make predictions with data stored on disk, or want to apply additional processing, then it is usually easiest to use datastores.
Tip
Neural networks expect input data with a specific layout. For example, vector-sequence classification networks typically expect a sequence to be represented as a t-by-c numeric array, where t and c are the number of time steps and channels of the sequences, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.
Most datastores and functions output data in the layout that the network expects. If your data is in a different layout from what the network expects, then indicate that your data has a different layout by using the InputDataFormats option or by specifying the input data as a formatted dlarray object. It is usually easier to adjust the InputDataFormats option than to preprocess the input data.
For neural networks that do not have input layers, you must use the InputDataFormats option or use formatted dlarray objects.
For more information, see Deep Learning Data Formats.
For each input X1,...,XN, where N is the number of inputs, specify the data as a numeric array, dlarray object, or cell array as described by the argument images, sequences, features, or data that matches the type of data. The input Xi corresponds to the network input net.InputNames(i).
Note
This argument supports complex-valued predictors.
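A sketch for a hypothetical two-input network, where the first input takes images and the second takes feature vectors:

```matlab
% X1 holds image data and X2 holds feature data;
% Xi corresponds to net.InputNames(i).
[Y1,Y2] = minibatchpredict(net,X1,X2);
```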
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Example: minibatchpredict(net,images,MiniBatchSize=32) makes predictions by looping over images using mini-batches of size 32.
MiniBatchSize — Size of mini-batches
128 (default) | positive integer
Size of mini-batches to use for prediction, specified as a positive integer. Larger mini-batch sizes require more memory, but can lead to faster predictions.
When you make predictions with sequences of different lengths, the mini-batch size can impact the amount of padding added to the input data, which can result in different predicted values. Try using different values to see which works best with your network. To specify mini-batch size and padding options, use the MiniBatchSize and SequenceLength options, respectively.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Outputs — Layers to extract outputs from
string array | cell array of character vectors
Layers to extract outputs from, specified as a string array or a cell array of character vectors containing the layer names.
If Outputs(i) corresponds to a layer with a single output, then Outputs(i) is the name of the layer.
If Outputs(i) corresponds to a layer with multiple outputs, then Outputs(i) is the layer name followed by the / character and the name of the layer output: "layerName/outputName".
The default value is net.OutputNames.
Acceleration — Performance optimization
"auto" (default) | "mex" | "none"
Performance optimization, specified as one of these values:
"auto" — Automatically apply a number of optimizations suitable for the input network and hardware resources.
"mex" — Compile and execute a MEX function. This option is available only when you use a GPU. The input data or the network learnable parameters must be stored as gpuArray objects. Using a GPU requires Parallel Computing Toolbox™ and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.
"none" — Disable all acceleration.
When you use the "auto" or "mex" option, the software can offer performance benefits at the expense of an increased initial run time. Subsequent calls to the function are typically faster. Use performance optimization when you call the function multiple times using new input data.
When Acceleration is "mex", the software generates and executes a MEX function based on the model and parameters you specify in the function call. A single model can have several associated MEX functions at one time. Clearing the model variable also clears any MEX functions associated with that model. When Acceleration is "auto", the software does not generate a MEX function.
The "mex" option is available only when you use a GPU. You must have a C/C++ compiler installed and the GPU Coder™ Interface for Deep Learning support package. Install the support package using the Add-On Explorer in MATLAB®. For setup instructions, see MEX Setup (GPU Coder). GPU Coder is not required.
The "mex" option has these limitations:
Only single precision is supported. The input data or the network learnable parameters must have underlying type single.
Networks with inputs that are not connected to an input layer are not supported.
Traced dlarray objects are not supported. This means that the "mex" option is not supported inside a call to dlfeval.
Not all layers are supported. For a list of supported layers, see Supported Layers (GPU Coder).
MATLAB Compiler™ does not support deploying your network when using the "mex" option.
For quantized networks, the "mex" option requires a CUDA® enabled NVIDIA® GPU with compute capability 6.1, 6.3, or higher.
ExecutionEnvironment — Hardware resource
"auto" (default) | "gpu" | "cpu"
Hardware resource, specified as one of these values:
"auto" — Use a GPU if one is available. Otherwise, use the CPU.
"gpu" — Use the GPU. Using a GPU requires a Parallel Computing Toolbox license and a supported GPU device. For information about supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.
"cpu" — Use the CPU.
SequenceLength — Option to pad or truncate sequences
"longest" (default) | "shortest"
Option to pad or truncate input sequences, specified as one of these values:
"longest" — Pad sequences to have the same length as the longest sequence. This option does not discard any data, though padding can introduce noise to the neural network.
"shortest" — Truncate sequences to have the same length as the shortest sequence. This option ensures that no padding is added, at the cost of discarding data.
To learn more about the effect of padding and truncating the input sequences, see Sequence Padding and Truncation.
SequencePaddingDirection — Direction of padding or truncation
"right" (default) | "left"
Direction of padding or truncation, specified as one of these values:
"right" — Pad or truncate sequences on the right. The sequences start at the same time step and the software truncates or adds padding to the end of the sequences.
"left" — Pad or truncate sequences on the left. The software truncates or adds padding to the start of the sequences so that the sequences end at the same time step.
Because recurrent layers process sequence data one time step at a time, when the recurrent layer OutputMode property is "last", any padding in the final time steps can negatively influence the layer output. To pad or truncate sequence data on the left, set the SequencePaddingDirection option to "left".
For sequence-to-sequence neural networks (when the OutputMode property is "sequence" for each recurrent layer), any padding in the first time steps can negatively influence the predictions for the earlier time steps. To pad or truncate sequence data on the right, set the SequencePaddingDirection option to "right".
To learn more about the effect of padding and truncating sequences, see Sequence Padding and Truncation.
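A sketch for a sequence-to-one network (assuming `net` is a recurrent network whose final recurrent layer has OutputMode="last" and `sequences` is a cell array of sequences):

```matlab
% Pad on the left so that padding does not occupy the final time steps,
% which a "last" output mode reads.
scores = minibatchpredict(net,sequences,SequencePaddingDirection="left");
```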
SequencePaddingValue — Value to pad sequences
0 (default) | scalar
Value by which to pad input sequences, specified as a scalar.
Do not pad sequences with NaN, because doing so can propagate errors throughout the neural network.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
InputDataFormats — Description of input data dimensions
"auto" (default) | string array | cell array of character vectors | character vector
Description of the input data dimensions, specified as a string array, character vector, or cell array of character vectors.
If InputDataFormats is "auto", then the software uses the formats expected by the network input. Otherwise, the software uses the specified formats for the corresponding network input.
A data format is a string of characters, where each character describes the type of the corresponding data dimension. The characters are:
"S" — Spatial
"C" — Channel
"B" — Batch
"T" — Time
"U" — Unspecified
For example, consider an array containing a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can specify that this array has the format "CBT" (channel, batch, time).
You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" at most once. The software ignores singleton trailing "U" dimensions after the second dimension.
For a neural network with multiple inputs net, specify an array of input data formats, where InputDataFormats(i) corresponds to the input net.InputNames(i).
For more information, see Deep Learning Data Formats.
Data Types: char | string | cell
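A sketch of the "CBT" case described above, assuming a vector-sequence network and an array with channels first:

```matlab
% X is a c-by-N-by-t array: channels, observations (batch), time steps.
scores = minibatchpredict(net,X,InputDataFormats="CBT");
```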
OutputDataFormats — Description of output data dimensions
"auto" (default) | string array | cell array of character vectors | character vector
Description of the output data dimensions, specified as one of these values:
"auto" — If the output data has the same number of dimensions as the input data, then the minibatchpredict function uses the format specified by InputDataFormats. If the output data has a different number of dimensions from the input data, then the minibatchpredict function automatically permutes the dimensions of the output data so that they are consistent with the network input layers or the InputDataFormats option.
Data formats, specified as a string array, character vector, or cell array of character vectors — The minibatchpredict function uses the specified data formats.
A data format is a string of characters, where each character describes the type of the corresponding data dimension. The characters are:
"S" — Spatial
"C" — Channel
"B" — Batch
"T" — Time
"U" — Unspecified
For example, consider an array containing a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can specify that this array has the format "CBT" (channel, batch, time).
You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" at most once. The software ignores singleton trailing "U" dimensions after the second dimension.
For more information, see Deep Learning Data Formats.
Data Types: char | string | cell
UniformOutput — Flag to return padded data as uniform array
1 (true) (default) | 0 (false)
Flag to return padded data as a uniform array, specified as 1 (true) or 0 (false). When you set the value to 0, the software outputs a cell array of predictions.
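A sketch for variable-length sequence prediction (assuming `net` is a sequence-to-sequence network and `sequences` is a cell array of sequences of different lengths):

```matlab
% Return one prediction per observation as a cell array, instead of a
% single padded uniform array.
Y = minibatchpredict(net,sequences,UniformOutput=false);
```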
Output Arguments
Y1,...,YM — Neural network predictions
numeric array | dlarray object | cell array
Neural network predictions, returned as numeric arrays, dlarray objects, or cell arrays Y1,...,YM, where M is the number of network outputs.
The predictions Yi correspond to the output Outputs(i).
More About
Floating-Point Arithmetic
The minibatchpredict function casts integer numeric array and datastore inputs to single precision. For minibatchqueue input, the software uses the data type specified by the OutputCast property of the object.
When you use prediction or validation functions with a dlnetwork object with single-precision learnable and state parameters, the software performs the computations using single-precision, floating-point arithmetic.
When you use prediction or validation functions with a dlnetwork object with double-precision learnable and state parameters:
If the input data is single precision, the software performs the computations using single-precision, floating-point arithmetic.
If the input data is double precision, the software performs the computations using double-precision, floating-point arithmetic.
Extended Capabilities
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™. This function fully supports GPU acceleration.
By default, the minibatchpredict function uses a GPU if one is available. You can specify the hardware that the minibatchpredict function uses by setting the ExecutionEnvironment name-value argument. For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2024a