featureInputLayer

Feature input layer

Description

A feature input layer inputs feature data into a network and applies data normalization. Use this layer when you have a data set of numeric scalars representing features (data without spatial or time dimensions).

For image input, use imageInputLayer.

Creation

Description

layer = featureInputLayer(numFeatures) returns a feature input layer and sets the InputSize property to the specified number of features.


layer = featureInputLayer(numFeatures,Name,Value) sets the optional properties using name-value pairs. You can specify multiple name-value pairs. Enclose each property name in single quotes.

Properties


Feature Input

InputSize

Number of features for each observation in the data, specified as a positive integer.

For image input, use imageInputLayer.

Example: 10

Normalization

Data normalization to apply every time data is forward propagated through the input layer, specified as one of the following:

  • 'zerocenter' — Subtract the mean specified by Mean.

  • 'zscore' — Subtract the mean specified by Mean and divide by StandardDeviation.

  • 'rescale-symmetric' — Rescale the input to be in the range [-1, 1] using the minimum and maximum values specified by Min and Max, respectively.

  • 'rescale-zero-one' — Rescale the input to be in the range [0, 1] using the minimum and maximum values specified by Min and Max, respectively.

  • 'none' — Do not normalize the input data.

  • function handle — Normalize the data using the specified function. The function must be of the form Y = func(X), where X is the input data, and the output Y is the normalized data.
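For example, a custom normalization can be supplied as a function handle. The following is a minimal sketch, assuming feature data arranged with one observation per row and a hypothetical scaling constant chosen only for illustration.

scaleFactor = 0.01;                      % hypothetical constant for illustration
normFcn = @(X) X*scaleFactor;            % must have the form Y = func(X)
layer = featureInputLayer(10,'Normalization',normFcn,'Name','input');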

Tip

The software, by default, automatically calculates the normalization statistics at training time. To save time when training, specify the required statistics for normalization and set the 'ResetInputNormalization' option in trainingOptions to false.
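For example, the following sketch supplies precomputed (placeholder) per-feature statistics for z-score normalization and turns off the recomputation of those statistics in trainingOptions.

numFeatures = 10;
mu = zeros(numFeatures,1);               % precomputed means (placeholder values)
sigma = ones(numFeatures,1);             % precomputed standard deviations (placeholder values)
layer = featureInputLayer(numFeatures, ...
    'Normalization','zscore', ...
    'Mean',mu,'StandardDeviation',sigma);
options = trainingOptions('adam','ResetInputNormalization',false);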

NormalizationDimension

Normalization dimension, specified as one of the following:

  • 'auto' – If the ResetInputNormalization training option is false and you specify any of the normalization statistics (Mean, StandardDeviation, Min, or Max), then normalize over the dimensions matching the statistics. Otherwise, recalculate the statistics at training time and apply channel-wise normalization.

  • 'channel' – Channel-wise normalization.

  • 'all' – Normalize all values using scalar statistics.
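For example, the following sketch normalizes all features with the same scalar statistics (hypothetical values) instead of per-feature statistics.

layer = featureInputLayer(10, ...
    'Normalization','rescale-zero-one', ...
    'NormalizationDimension','all', ...
    'Min',0,'Max',255);                  % scalar statistics applied to all features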

Mean

Mean for zero-center and z-score normalization, specified as a numFeatures-by-1 vector of means per feature, a numeric scalar, or [].

If you specify the Mean property, then Normalization must be 'zerocenter' or 'zscore'. If Mean is [], then the software calculates the mean at training time.

You can set this property when creating networks without training (for example, when assembling networks using assembleNetwork).

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

StandardDeviation

Standard deviation for z-score normalization, specified as a numFeatures-by-1 vector of standard deviations per feature, a numeric scalar, or [].

If you specify the StandardDeviation property, then Normalization must be 'zscore'. If StandardDeviation is [], then the software calculates the standard deviation at training time.

You can set this property when creating networks without training (for example, when assembling networks using assembleNetwork).

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Min

Minimum value for rescaling, specified as a numFeatures-by-1 vector of minima per feature, a numeric scalar, or [].

If you specify the Min property, then Normalization must be 'rescale-symmetric' or 'rescale-zero-one'. If Min is [], then the software calculates the minimum at training time.

You can set this property when creating networks without training (for example, when assembling networks using assembleNetwork).

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Max

Maximum value for rescaling, specified as a numFeatures-by-1 vector of maxima per feature, a numeric scalar, or [].

If you specify the Max property, then Normalization must be 'rescale-symmetric' or 'rescale-zero-one'. If Max is [], then the software calculates the maximum at training time.

You can set this property when creating networks without training (for example, when assembling networks using assembleNetwork).

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
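For example, per-feature minima and maxima can be supplied as numFeatures-by-1 vectors. The values below are placeholders for illustration.

featMin = [0; -5; 10];                   % hypothetical minimum of each of 3 features
featMax = [1; 5; 100];                   % hypothetical maximum of each of 3 features
layer = featureInputLayer(3, ...
    'Normalization','rescale-symmetric', ...
    'Min',featMin,'Max',featMax);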

Layer

Name

Layer name, specified as a character vector or a string scalar. To include a layer in a layer graph, you must specify a nonempty unique layer name. If you train a series network with the layer and Name is set to '', then the software automatically assigns a name to the layer at training time.

Data Types: char | string

NumInputs

Number of inputs of the layer. The layer has no inputs.

Data Types: double

InputNames

Input names of the layer. The layer has no inputs.

Data Types: cell

NumOutputs

Number of outputs of the layer. This layer has a single output only.

Data Types: double

OutputNames

Output names of the layer. This layer has a single output only.

Data Types: cell

Examples


Create a feature input layer with name 'input' for observations consisting of 21 features.

layer = featureInputLayer(21,'Name','input')
layer = 
  FeatureInputLayer with properties:

                      Name: 'input'
                 InputSize: 21

   Hyperparameters
             Normalization: 'none'
    NormalizationDimension: 'auto'

Include a feature input layer in a Layer array.

numFeatures = 21;
numClasses = 3;
 
layers = [
    featureInputLayer(numFeatures,'Name','input')
    fullyConnectedLayer(numClasses, 'Name','fc')
    softmaxLayer('Name','sm')
    classificationLayer('Name','classification')]
layers = 
  4x1 Layer array with layers:

     1   'input'            Feature Input           21 features
     2   'fc'               Fully Connected         3 fully connected layer
     3   'sm'               Softmax                 softmax
     4   'classification'   Classification Output   crossentropyex
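A layer array like this can be trained with trainNetwork on feature data stored as a numeric array with one observation per row. The sketch below uses randomly generated placeholder data and arbitrary training options.

X = rand(100,numFeatures);                % 100 observations of 21 features (placeholder data)
Y = categorical(randi(numClasses,100,1)); % placeholder class labels
options = trainingOptions('adam','MaxEpochs',5,'Verbose',false);
net = trainNetwork(X,Y,layers,options);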

To train a network containing both an image input layer and a feature input layer, you must use a dlnetwork object in a custom training loop.

Define the size of the input image, the number of features of each observation, the number of classes, and the size and number of filters of the convolution layer.

imageInputSize = [28 28 1];
numFeatures = 1;
numClasses = 10;
filterSize = 5;
numFilters = 16;

To create a network with two input layers, you must define the network in two parts and join them, for example, by using a concatenation layer.

Define the first part of the network. Define the image classification layers and include a concatenation layer before the last fully connected layer.

layers = [
    imageInputLayer(imageInputSize,'Normalization','none','Name','images')
    convolution2dLayer(filterSize,numFilters,'Name','conv')
    reluLayer('Name','relu')
    fullyConnectedLayer(50,'Name','fc1')
    concatenationLayer(1,2,'Name','concat')
    fullyConnectedLayer(numClasses,'Name','fc2')
    softmaxLayer('Name','softmax')];

Convert the layers to a layer graph.

lgraph = layerGraph(layers);

For the second part of the network, add a feature input layer and connect it to the second input of the concatenation layer.

featInput = featureInputLayer(numFeatures,'Name','features');
lgraph = addLayers(lgraph, featInput);
lgraph = connectLayers(lgraph, 'features', 'concat/in2');

Visualize the network.

plot(lgraph)

Create a dlnetwork object.

dlnet = dlnetwork(lgraph)
dlnet = 
  dlnetwork with properties:

         Layers: [8x1 nnet.cnn.layer.Layer]
    Connections: [7x2 table]
     Learnables: [6x3 table]
          State: [0x3 table]
     InputNames: {'images'  'features'}
    OutputNames: {'softmax'}
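With the dlnetwork created, you pass one formatted dlarray per input, in the order given by InputNames. The following sketch makes predictions on a mini-batch of random placeholder data.

miniBatchSize = 32;
dlX1 = dlarray(rand([imageInputSize miniBatchSize],'single'),'SSCB');  % image input
dlX2 = dlarray(rand(numFeatures,miniBatchSize,'single'),'CB');         % feature input
dlY = predict(dlnet,dlX1,dlX2);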

Introduced in R2020b