neural_networks package

Submodules

neural_networks.autoencoder_neural_network module

Module containing the AutoencoderNeuralNetwork class and the command line interface.

class neural_networks.autoencoder_neural_network.AutoencoderNeuralNetwork(input_decode_path, output_model_path, input_predict_path=None, output_test_decode_path=None, output_test_predict_path=None, properties=None, **kwargs)[source]

Bases: BiobbObject

biobb_ml AutoencoderNeuralNetwork
Wrapper of the TensorFlow Keras LSTM method for encoding.
Fits and tests a given dataset and saves the compiled model for an Autoencoder Neural Network. Visit the LSTM documentation page on the TensorFlow Keras official website for further information.
Parameters:
  • input_decode_path (str) – Path to the input decode dataset. File type: input. Sample file. Accepted formats: csv (edam:format_3752).

  • input_predict_path (str) (Optional) – Path to the input predict dataset. File type: input. Sample file. Accepted formats: csv (edam:format_3752).

  • output_model_path (str) – Path to the output model file. File type: output. Sample file. Accepted formats: h5 (edam:format_3590).

  • output_test_decode_path (str) (Optional) – Path to the test decode table file. File type: output. Sample file. Accepted formats: csv (edam:format_3752).

  • output_test_predict_path (str) (Optional) – Path to the test predict table file. File type: output. Sample file. Accepted formats: csv (edam:format_3752).

  • properties (dict - Python dictionary object containing the tool parameters, not input/output files) –

    • optimizer (string) - (“Adam”) Name of the optimizer instance. Values: Adadelta (Adadelta optimization is a stochastic gradient descent method based on an adaptive learning rate per dimension, addressing two drawbacks: the continual decay of learning rates throughout training and the need for a manually selected global learning rate), Adagrad (Adagrad is an optimizer with parameter-specific learning rates, adapted relative to how frequently a parameter gets updated during training: the more updates a parameter receives, the smaller the updates), Adam (Adam optimization is a stochastic gradient descent method based on adaptive estimation of first-order and second-order moments), Adamax (a variant of Adam based on the infinity norm; default parameters follow those provided in the paper. Adamax is sometimes superior to Adam, especially in models with embeddings), Ftrl (optimizer that implements the FTRL algorithm), Nadam (much as Adam is essentially RMSprop with momentum, Nadam is Adam with Nesterov momentum), RMSprop (optimizer that implements the RMSprop algorithm), SGD (gradient descent (with momentum) optimizer).

    • learning_rate (float) - (0.02) [0~100|0.01] Determines the step size at each iteration while moving toward a minimum of the loss function.

    • batch_size (int) - (100) [0~1000|1] Number of samples per gradient update.

    • max_epochs (int) - (100) [0~1000|1] Number of epochs to train the model. Since early stopping is enabled, this is an upper bound (a minimal training sketch follows this list).

    • remove_tmp (bool) - (True) [WF property] Remove temporary files.

    • restart (bool) - (False) [WF property] Do not execute if output files exist.
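
Because these training properties map onto standard TensorFlow Keras calls, the following minimal sketch shows how optimizer, learning_rate, batch_size and max_epochs typically translate into a compile/fit cycle with early stopping. This is an illustration, not the biobb_ml source; the model and data below are hypothetical stand-ins.

import numpy as np
import tensorflow as tf

X_train = np.random.rand(200, 10)   # hypothetical training data
y_train = np.random.rand(200)

# 'optimizer' and 'learning_rate' select and configure the optimizer instance.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02), loss="mse")

# Early stopping makes 'max_epochs' an upper bound rather than a fixed count.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="loss", patience=5)
model.fit(X_train, y_train, batch_size=100, epochs=100, callbacks=[early_stop])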

Examples

Example of how to use the building block from Python:

from biobb_ml.neural_networks.autoencoder_neural_network import autoencoder_neural_network
prop = {
    'optimizer': 'Adam',
    'learning_rate': 0.01,
    'batch_size': 32,
    'max_epochs': 300
}
autoencoder_neural_network(input_decode_path='/path/to/myDecodeDataset.csv',
                           output_model_path='/path/to/newModel.h5',
                           input_predict_path='/path/to/myPredictDataset.csv',
                           output_test_decode_path='/path/to/newDecodeDataset.csv',
                           output_test_predict_path='/path/to/newPredictDataset.csv',
                           properties=prop)

build_model(n_in, n_out=None)[source]

Builds the neural network.
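
For orientation, a typical Keras LSTM encoder-decoder looks like the sketch below. The exact layer layout, sizes and activations used by this building block may differ, so treat all of them as assumptions.

import tensorflow as tf

def build_autoencoder(n_in, n_out):
    model = tf.keras.Sequential([
        # Encoder: compress the input sequence into a fixed-size state.
        tf.keras.layers.LSTM(100, activation="relu", input_shape=(n_in, 1)),
        # Decoder: repeat the state and unroll it back into a sequence.
        tf.keras.layers.RepeatVector(n_out),
        tf.keras.layers.LSTM(100, activation="relu", return_sequences=True),
        tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1)),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_autoencoder(n_in=10, n_out=10)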

check_data_params(out_log, err_log)[source]

Checks all the input/output paths and parameters

launch() → int[source]

Execute the AutoencoderNeuralNetwork object.

neural_networks.autoencoder_neural_network.autoencoder_neural_network(input_decode_path: str, output_model_path: str, input_predict_path: str | None = None, output_test_decode_path: str | None = None, output_test_predict_path: str | None = None, properties: dict | None = None, **kwargs) → int[source]

Instantiate the AutoencoderNeuralNetwork class and execute its launch() method.

neural_networks.autoencoder_neural_network.main()[source]

Command line execution of this building block. Please check the command line documentation.

neural_networks.classification_neural_network module

Module containing the ClassificationNeuralNetwork class and the command line interface.

class neural_networks.classification_neural_network.ClassificationNeuralNetwork(input_dataset_path, output_model_path, output_test_table_path=None, output_plot_path=None, properties=None, **kwargs)[source]

Bases: BiobbObject

biobb_ml ClassificationNeuralNetwork
Wrapper of the TensorFlow Keras Sequential method for classification.
Trains and tests a given dataset and saves the complete model for a Neural Network Classification. Visit the Sequential documentation page on the TensorFlow Keras official website for further information.
Parameters:
  • input_dataset_path (str) – Path to the input dataset. File type: input. Sample file. Accepted formats: csv (edam:format_3752).

  • output_model_path (str) – Path to the output model file. File type: output. Sample file. Accepted formats: h5 (edam:format_3590).

  • output_test_table_path (str) (Optional) – Path to the test table file. File type: output. Sample file. Accepted formats: csv (edam:format_3752).

  • output_plot_path (str) (Optional) – Loss, accuracy and MSE plots. File type: output. Sample file. Accepted formats: png (edam:format_3603).

  • properties (dict - Python dictionary object containing the tool parameters, not input/output files) –

    • features (dict) - ({}) Independent variables or columns from your dataset you want to train on. You can specify either a list of column names from your input dataset, a list of column indexes or a range of column indexes. Formats: { “columns”: [“column1”, “column2”] } or { “indexes”: [0, 2, 3, 10, 11, 17] } or { “range”: [[0, 20], [50, 102]] }. In case of multiple formats, the first one will be picked (see the selection sketch after this list).

    • target (dict) - ({}) Dependent variable you want to predict from your dataset. You can specify either a column name or a column index. Formats: { “column”: “column3” } or { “index”: 21 }. In case of multiple formats, the first one will be picked.

    • weight (dict) - ({}) Weight variable from your dataset. You can specify either a column name or a column index. Formats: { “column”: “column3” } or { “index”: 21 }. In case of multiple formats, the first one will be picked.

    • validation_size (float) - (0.2) [0~1|0.05] Represents the proportion of the dataset to include in the validation split. It should be between 0.0 and 1.0.

    • test_size (float) - (0.1) [0~1|0.05] Represents the proportion of the dataset to include in the test split. It should be between 0.0 and 1.0.

    • hidden_layers (list) - (None) List of dictionaries with hidden layer values. Format: [ { ‘size’: 50, ‘activation’: ‘relu’ } ].

    • output_layer_activation (string) - (“softmax”) Activation function to use in the output layer. Values: sigmoid (Sigmoid activation function: sigmoid[x] = 1 / [1 + exp[-x]]), tanh (Hyperbolic tangent activation function), relu (Applies the rectified linear unit activation function), softmax (Softmax converts a real vector to a vector of categorical probabilities).

    • optimizer (string) - (“Adam”) Name of the optimizer instance. Values: Adadelta (Adadelta optimization is a stochastic gradient descent method based on an adaptive learning rate per dimension, addressing two drawbacks: the continual decay of learning rates throughout training and the need for a manually selected global learning rate), Adagrad (Adagrad is an optimizer with parameter-specific learning rates, adapted relative to how frequently a parameter gets updated during training: the more updates a parameter receives, the smaller the updates), Adam (Adam optimization is a stochastic gradient descent method based on adaptive estimation of first-order and second-order moments), Adamax (a variant of Adam based on the infinity norm; default parameters follow those provided in the paper. Adamax is sometimes superior to Adam, especially in models with embeddings), Ftrl (optimizer that implements the FTRL algorithm), Nadam (much as Adam is essentially RMSprop with momentum, Nadam is Adam with Nesterov momentum), RMSprop (optimizer that implements the RMSprop algorithm), SGD (gradient descent (with momentum) optimizer).

    • learning_rate (float) - (0.02) [0~100|0.01] Determines the step size at each iteration while moving toward a minimum of the loss function.

    • batch_size (int) - (100) [0~1000|1] Number of samples per gradient update.

    • max_epochs (int) - (100) [0~1000|1] Number of epochs to train the model. Since early stopping is enabled, this is an upper bound.

    • normalize_cm (bool) - (False) Whether or not to normalize the confusion matrix.

    • random_state (int) - (5) [1~1000|1] Controls the shuffling applied to the data before applying the split.

    • scale (bool) - (False) Whether or not to scale the input dataset.

    • remove_tmp (bool) - (True) [WF property] Remove temporary files.

    • restart (bool) - (False) [WF property] Do not execute if output files exist.
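
As a reading aid for the features formats above, this hypothetical snippet shows how each of the three documented shapes could be resolved against a pandas DataFrame. It assumes the range bounds are inclusive, which is an interpretation rather than a confirmed detail of the implementation.

import pandas as pd

df = pd.DataFrame({f"column{i}": range(3) for i in range(6)})  # stand-in dataset

features = {"range": [[0, 2], [4, 5]]}  # any one of the three documented formats

if "columns" in features:                    # { "columns": [...] }
    X = df[features["columns"]]
elif "indexes" in features:                  # { "indexes": [...] }
    X = df.iloc[:, features["indexes"]]
elif "range" in features:                    # { "range": [[start, end], ...] }
    idx = [i for start, end in features["range"] for i in range(start, end + 1)]
    X = df.iloc[:, idx]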

Examples

Example of how to use the building block from Python:

from biobb_ml.neural_networks.classification_neural_network import classification_neural_network
prop = {
    'features': {
        'columns': [ 'column1', 'column2', 'column3' ]
    },
    'target': {
        'column': 'target'
    },
    'validation_size': 0.2,
    'test_size': 0.33,
    'hidden_layers': [
        {
            'size': 10,
            'activation': 'relu'
        },
        {
            'size': 8,
            'activation': 'relu'
        }
    ],
    'optimizer': 'Adam',
    'learning_rate': 0.01,
    'batch_size': 32,
    'max_epochs': 150
}
classification_neural_network(input_dataset_path='/path/to/myDataset.csv',
                              output_model_path='/path/to/newModel.h5',
                              output_test_table_path='/path/to/newTable.csv',
                              output_plot_path='/path/to/newPlot.png',
                              properties=prop)

build_model(input_shape, output_size)[source]

Builds the neural network according to the hidden_layers property.
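
A minimal sketch of what building a model from the hidden_layers property could look like in Keras; the real implementation may add further details, so this is only illustrative.

import tensorflow as tf

hidden_layers = [{"size": 10, "activation": "relu"},
                 {"size": 8, "activation": "relu"}]

def build_model(input_shape, output_size):
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Input(shape=input_shape))
    for layer in hidden_layers:
        # One Dense layer per entry in 'hidden_layers'.
        model.add(tf.keras.layers.Dense(layer["size"], activation=layer["activation"]))
    # 'output_layer_activation' (softmax by default) shapes the output layer.
    model.add(tf.keras.layers.Dense(output_size, activation="softmax"))
    return model

model = build_model(input_shape=(3,), output_size=2)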

check_data_params(out_log, err_log)[source]

Checks all the input/output paths and parameters

launch() → int[source]

Execute the ClassificationNeuralNetwork object.

neural_networks.classification_neural_network.classification_neural_network(input_dataset_path: str, output_model_path: str, output_test_table_path: str | None = None, output_plot_path: str | None = None, properties: dict | None = None, **kwargs) → int[source]

Instantiate the ClassificationNeuralNetwork class and execute its launch() method.

neural_networks.classification_neural_network.main()[source]

Command line execution of this building block. Please check the command line documentation.

neural_networks.recurrent_neural_network module

Module containing the RecurrentNeuralNetwork class and the command line interface.

class neural_networks.recurrent_neural_network.RecurrentNeuralNetwork(input_dataset_path, output_model_path, output_test_table_path=None, output_plot_path=None, properties=None, **kwargs)[source]

Bases: BiobbObject

biobb_ml RecurrentNeuralNetwork
Wrapper of the TensorFlow Keras LSTM method using Recurrent Neural Networks.
Trains and tests a given dataset and saves the complete model for a Recurrent Neural Network. Visit the LSTM documentation page on the TensorFlow Keras official website for further information.
Parameters:
  • input_dataset_path (str) – Path to the input dataset. File type: input. Sample file. Accepted formats: csv (edam:format_3752).

  • output_model_path (str) – Path to the output model file. File type: output. Sample file. Accepted formats: h5 (edam:format_3590).

  • output_test_table_path (str) (Optional) – Path to the test table file. File type: output. Sample file. Accepted formats: csv (edam:format_3752).

  • output_plot_path (str) (Optional) – Loss, accuracy and MSE plots. File type: output. Sample file. Accepted formats: png (edam:format_3603).

  • properties (dict - Python dictionary object containing the tool parameters, not input/output files) –

    • target (dict) - ({}) Dependent variable you want to predict from your dataset. You can specify either a column name or a column index. Formats: { “column”: “column3” } or { “index”: 21 }. In case of multiple formats, the first one will be picked.

    • validation_size (float) - (0.2) [0~1|0.05] Represents the proportion of the dataset to include in the validation split. It should be between 0.0 and 1.0.

    • window_size (int) - (5) [0~100|1] Number of steps in each training window (see the windowing sketch after this list).

    • test_size (int) - (5) [0~100000|1] Represents the number of samples of the dataset to include in the test split.

    • hidden_layers (list) - (None) List of dictionaries with hidden layer values. Format: [ { ‘size’: 50, ‘activation’: ‘relu’ } ].

    • optimizer (string) - (“Adam”) Name of the optimizer instance. Values: Adadelta (Adadelta optimization is a stochastic gradient descent method based on an adaptive learning rate per dimension, addressing two drawbacks: the continual decay of learning rates throughout training and the need for a manually selected global learning rate), Adagrad (Adagrad is an optimizer with parameter-specific learning rates, adapted relative to how frequently a parameter gets updated during training: the more updates a parameter receives, the smaller the updates), Adam (Adam optimization is a stochastic gradient descent method based on adaptive estimation of first-order and second-order moments), Adamax (a variant of Adam based on the infinity norm; default parameters follow those provided in the paper. Adamax is sometimes superior to Adam, especially in models with embeddings), Ftrl (optimizer that implements the FTRL algorithm), Nadam (much as Adam is essentially RMSprop with momentum, Nadam is Adam with Nesterov momentum), RMSprop (optimizer that implements the RMSprop algorithm), SGD (gradient descent (with momentum) optimizer).

    • learning_rate (float) - (0.02) [0~100|0.01] Determines the step size at each iteration while moving toward a minimum of the loss function.

    • batch_size (int) - (100) [0~1000|1] Number of samples per gradient update.

    • max_epochs (int) - (100) [0~1000|1] Number of epochs to train the model. Since early stopping is enabled, this is an upper bound.

    • normalize_cm (bool) - (False) Whether or not to normalize the confusion matrix.

    • remove_tmp (bool) - (True) [WF property] Remove temporary files.

    • restart (bool) - (False) [WF property] Do not execute if output files exist.
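
To make window_size concrete, the hypothetical snippet below turns a univariate series into supervised sliding windows of the shape an LSTM expects; the actual windowing code in biobb_ml may differ in details.

import numpy as np

series = np.arange(20, dtype=float)  # stand-in univariate target series
window_size = 5

X, y = [], []
for i in range(len(series) - window_size):
    X.append(series[i:i + window_size])  # the past 'window_size' steps
    y.append(series[i + window_size])    # the next step to predict
X = np.array(X).reshape(-1, window_size, 1)  # (samples, timesteps, features)
y = np.array(y)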

Examples

Example of how to use the building block from Python:

from biobb_ml.neural_networks.recurrent_neural_network import recurrent_neural_network
prop = {
    'target': {
        'column': 'target'
    },
    'window_size': 5,
    'validation_size': 0.2,
    'test_size': 5,
    'hidden_layers': [
        {
            'size': 10,
            'activation': 'relu'
        },
        {
            'size': 8,
            'activation': 'relu'
        }
    ],
    'optimizer': 'Adam',
    'learning_rate': 0.01,
    'batch_size': 32,
    'max_epochs': 150
}
recurrent_neural_network(input_dataset_path='/path/to/myDataset.csv',
                         output_model_path='/path/to/newModel.h5',
                         output_test_table_path='/path/to/newTable.csv',
                         output_plot_path='/path/to/newPlot.png',
                         properties=prop)

build_model(input_shape)[source]

Builds the neural network according to the hidden_layers property.

check_data_params(out_log, err_log)[source]

Checks all the input/output paths and parameters

launch() → int[source]

Execute the RecurrentNeuralNetwork object.

neural_networks.recurrent_neural_network.main()[source]

Command line execution of this building block. Please check the command line documentation.

neural_networks.recurrent_neural_network.recurrent_neural_network(input_dataset_path: str, output_model_path: str, output_test_table_path: str | None = None, output_plot_path: str | None = None, properties: dict | None = None, **kwargs) → int[source]

Instantiate the RecurrentNeuralNetwork class and execute its launch() method.

neural_networks.regression_neural_network module

Module containing the RegressionNeuralNetwork class and the command line interface.

class neural_networks.regression_neural_network.RegressionNeuralNetwork(input_dataset_path, output_model_path, output_test_table_path=None, output_plot_path=None, properties=None, **kwargs)[source]

Bases: BiobbObject

biobb_ml RegressionNeuralNetwork
Wrapper of the TensorFlow Keras Sequential method for regression.
Trains and tests a given dataset and saves the complete model for a Neural Network Regression. Visit the Sequential documentation page on the TensorFlow Keras official website for further information.
Parameters:
  • input_dataset_path (str) – Path to the input dataset. File type: input. Sample file. Accepted formats: csv (edam:format_3752).

  • output_model_path (str) – Path to the output model file. File type: output. Sample file. Accepted formats: h5 (edam:format_3590).

  • output_test_table_path (str) (Optional) – Path to the test table file. File type: output. Sample file. Accepted formats: csv (edam:format_3752).

  • output_plot_path (str) (Optional) – Loss, MAE and MSE plots. File type: output. Sample file. Accepted formats: png (edam:format_3603).

  • properties (dict - Python dictionary object containing the tool parameters, not input/output files) –

    • features (dict) - ({}) Independent variables or columns from your dataset you want to train on. You can specify either a list of column names from your input dataset, a list of column indexes or a range of column indexes. Formats: { “columns”: [“column1”, “column2”] } or { “indexes”: [0, 2, 3, 10, 11, 17] } or { “range”: [[0, 20], [50, 102]] }. In case of multiple formats, the first one will be picked.

    • target (dict) - ({}) Dependent variable you want to predict from your dataset. You can specify either a column name or a column index. Formats: { “column”: “column3” } or { “index”: 21 }. In case of multiple formats, the first one will be picked.

    • weight (dict) - ({}) Weight variable from your dataset. You can specify either a column name or a column index. Formats: { “column”: “column3” } or { “index”: 21 }. In case of multiple formats, the first one will be picked.

    • validation_size (float) - (0.2) [0~1|0.05] Represents the proportion of the dataset to include in the validation split. It should be between 0.0 and 1.0.

    • test_size (float) - (0.1) [0~1|0.05] Represents the proportion of the dataset to include in the test split. It should be between 0.0 and 1.0.

    • hidden_layers (list) - (None) List of dictionaries with hidden layer values. Format: [ { ‘size’: 50, ‘activation’: ‘relu’ } ].

    • output_layer_activation (string) - (“softmax”) Activation function to use in the output layer. Values: sigmoid (Sigmoid activation function: sigmoid[x] = 1 / [1 + exp[-x]]), tanh (Hyperbolic tangent activation function), relu (Applies the rectified linear unit activation function), softmax (Softmax converts a real vector to a vector of categorical probabilities).

    • optimizer (string) - (“Adam”) Name of the optimizer instance. Values: Adadelta (Adadelta optimization is a stochastic gradient descent method based on an adaptive learning rate per dimension, addressing two drawbacks: the continual decay of learning rates throughout training and the need for a manually selected global learning rate), Adagrad (Adagrad is an optimizer with parameter-specific learning rates, adapted relative to how frequently a parameter gets updated during training: the more updates a parameter receives, the smaller the updates), Adam (Adam optimization is a stochastic gradient descent method based on adaptive estimation of first-order and second-order moments), Adamax (a variant of Adam based on the infinity norm; default parameters follow those provided in the paper. Adamax is sometimes superior to Adam, especially in models with embeddings), Ftrl (optimizer that implements the FTRL algorithm), Nadam (much as Adam is essentially RMSprop with momentum, Nadam is Adam with Nesterov momentum), RMSprop (optimizer that implements the RMSprop algorithm), SGD (gradient descent (with momentum) optimizer).

    • learning_rate (float) - (0.02) [0~100|0.01] Determines the step size at each iteration while moving toward a minimum of the loss function.

    • batch_size (int) - (100) [0~1000|1] Number of samples per gradient update.

    • max_epochs (int) - (100) [0~1000|1] Number of epochs to train the model. Since early stopping is enabled, this is an upper bound.

    • random_state (int) - (5) [1~1000|1] Controls the shuffling applied to the data before applying the split.

    • scale (bool) - (False) Whether or not to scale the input dataset (see the scaling sketch after this list).

    • remove_tmp (bool) - (True) [WF property] Remove temporary files.

    • restart (bool) - (False) [WF property] Do not execute if output files exist.
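
As a rough illustration of the scale property, scaling usually amounts to something like scikit-learn's StandardScaler; whether biobb_ml uses exactly this scaler is an assumption.

import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])                  # stand-in feature matrix
X_scaled = StandardScaler().fit_transform(X)  # zero mean, unit variance per column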

Examples

Example of how to use the building block from Python:

from biobb_ml.neural_networks.regression_neural_network import regression_neural_network
prop = {
    'features': {
        'columns': [ 'column1', 'column2', 'column3' ]
    },
    'target': {
        'column': 'target'
    },
    'validation_size': 0.2,
    'test_size': 0.33,
    'hidden_layers': [
        {
            'size': 10,
            'activation': 'relu'
        },
        {
            'size': 8,
            'activation': 'relu'
        }
    ],
    'optimizer': 'Adam',
    'learning_rate': 0.01,
    'batch_size': 32,
    'max_epochs': 150
}
regression_neural_network(input_dataset_path='/path/to/myDataset.csv',
                          output_model_path='/path/to/newModel.h5',
                          output_test_table_path='/path/to/newTable.csv',
                          output_plot_path='/path/to/newPlot.png',
                          properties=prop)

build_model(input_shape)[source]

Builds the neural network according to the hidden_layers property.

check_data_params(out_log, err_log)[source]

Checks all the input/output paths and parameters

launch() → int[source]

Execute the RegressionNeuralNetwork object.

neural_networks.regression_neural_network.main()[source]

Command line execution of this building block. Please check the command line documentation.

neural_networks.regression_neural_network.regression_neural_network(input_dataset_path: str, output_model_path: str, output_test_table_path: str | None = None, output_plot_path: str | None = None, properties: dict | None = None, **kwargs) → int[source]

Instantiate the RegressionNeuralNetwork class and execute its launch() method.

neural_networks.neural_network_decode module

Module containing the DecodingNeuralNetwork class and the command line interface.

class neural_networks.neural_network_decode.DecodingNeuralNetwork(input_decode_path, input_model_path, output_decode_path, output_predict_path=None, properties=None, **kwargs)[source]

Bases: BiobbObject

biobb_ml DecodingNeuralNetwork
Wrapper of the TensorFlow Keras LSTM method for decoding.
Decodes and predicts, given a dataset and a model file compiled by an Autoencoder Neural Network. Visit the LSTM documentation page on the TensorFlow Keras official website for further information.
Parameters:
  • input_decode_path (str) – Path to the input decode dataset. File type: input. Sample file. Accepted formats: csv (edam:format_3752).

  • input_model_path (str) – Path to the input model. File type: input. Sample file. Accepted formats: h5 (edam:format_3590).

  • output_decode_path (str) – Path to the output decode file. File type: output. Sample file. Accepted formats: csv (edam:format_3752).

  • output_predict_path (str) (Optional) – Path to the output predict file. File type: output. Sample file. Accepted formats: csv (edam:format_3752).

  • properties (dict - Python dictionary object containing the tool parameters, not input/output files) –

    • remove_tmp (bool) - (True) [WF property] Remove temporary files.

    • restart (bool) - (False) [WF property] Do not execute if output files exist.

Examples

Example of how to use the building block from Python:

from biobb_ml.neural_networks.neural_network_decode import neural_network_decode
prop = {}
neural_network_decode(input_decode_path='/path/to/myDecodeDataset.csv',
                      input_model_path='/path/to/newModel.h5',
                      output_decode_path='/path/to/newDecodeDataset.csv',
                      output_predict_path='/path/to/newPredictDataset.csv',
                      properties=prop)
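
Conceptually, decoding reduces to loading the compiled h5 model and calling predict on the reshaped dataset, roughly as sketched below. The paths are reused from the example above, and the reshape assumes a univariate, header-less CSV; both are illustrative assumptions, not the biobb_ml source.

import numpy as np
from tensorflow.keras.models import load_model

model = load_model('/path/to/newModel.h5')  # model produced by AutoencoderNeuralNetwork
data = np.loadtxt('/path/to/myDecodeDataset.csv', delimiter=',')  # header-less, univariate
data = data.reshape((1, data.shape[0], 1))  # (samples, timesteps, features)
decoded = model.predict(data)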

check_data_params(out_log, err_log)[source]

Checks all the input/output paths and parameters

launch() → int[source]

Execute the DecodingNeuralNetwork object.

neural_networks.neural_network_decode.main()[source]

Command line execution of this building block. Please check the command line documentation.

neural_networks.neural_network_decode.neural_network_decode(input_decode_path: str, input_model_path: str, output_decode_path: str, output_predict_path: str | None = None, properties: dict | None = None, **kwargs) → int[source]

Instantiate the DecodingNeuralNetwork class and execute its launch() method.

neural_networks.neural_network_predict module

Module containing the PredictNeuralNetwork class and the command line interface.

class neural_networks.neural_network_predict.PredictNeuralNetwork(input_model_path, output_results_path, input_dataset_path=None, properties=None, **kwargs)[source]

Bases: BiobbObject

biobb_ml PredictNeuralNetwork
Makes predictions from an input dataset and a given model.
Makes predictions from an input dataset (provided either as a file or as a dictionary property) and a given model trained with the TensorFlow Keras Sequential or TensorFlow Keras LSTM methods.
Parameters:
  • input_model_path (str) – Path to the input model. File type: input. Sample file. Accepted formats: h5 (edam:format_3590).

  • input_dataset_path (str) (Optional) – Path to the dataset to predict. File type: input. Sample file. Accepted formats: csv (edam:format_3752).

  • output_results_path (str) – Path to the output results file. File type: output. Sample file. Accepted formats: csv (edam:format_3752).

  • properties (dict - Python dictionary object containing the tool parameters, not input/output files) –

    • predictions (list) - (None) List of dictionaries with the values of the variables for which you want to predict targets. It will be taken into account only if input_dataset_path is not provided. Format: [{ ‘var1’: 1.0, ‘var2’: 2.0 }, { ‘var1’: 4.0, ‘var2’: 2.7 }] for datasets with headers and [[ 1.0, 2.0 ], [ 4.0, 2.7 ]] for datasets without headers (see the sketch after this list).

    • remove_tmp (bool) - (True) [WF property] Remove temporary files.

    • restart (bool) - (False) [WF property] Do not execute if output files exist.
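
To make the two predictions formats above concrete, the hypothetical snippet below turns each into a model-ready array; how the building block consumes them internally is not shown here.

import numpy as np
import pandas as pd

with_headers = [{'var1': 1.0, 'var2': 2.0}, {'var1': 4.0, 'var2': 2.7}]
without_headers = [[1.0, 2.0], [4.0, 2.7]]

X1 = pd.DataFrame(with_headers).to_numpy()  # columns matched by name
X2 = np.array(without_headers)              # columns taken positionally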

Examples

Example of how to use the building block from Python:

from biobb_ml.neural_networks.neural_network_predict import neural_network_predict
prop = {
    'predictions': [
        {
            'var1': 1.0,
            'var2': 2.0
        },
        {
            'var1': 4.0,
            'var2': 2.7
        }
    ]
}
neural_network_predict(input_model_path='/path/to/myModel.h5',
                       input_dataset_path='/path/to/myDataset.csv',
                       output_results_path='/path/to/newPredictedResults.csv',
                       properties=prop)

check_data_params(out_log, err_log)[source]

Checks all the input/output paths and parameters

launch() → int[source]

Execute the PredictNeuralNetwork object.

neural_networks.neural_network_predict.main()[source]

Command line execution of this building block. Please check the command line documentation.

neural_networks.neural_network_predict.neural_network_predict(input_model_path: str, output_results_path: str, input_dataset_path: str | None = None, properties: dict | None = None, **kwargs) → int[source]

Instantiate the PredictNeuralNetwork class and execute its launch() method.