Configuring the pipeline

This feature is in the beta tier. For more information on feature tiers, see API Tiers.

This page explains how to create and configure a link prediction pipeline.

Creating a pipeline

The first step in building a new pipeline is to create one using gds.beta.pipeline.linkPrediction.create. This stores a trainable pipeline object of type Link prediction training pipeline in the pipeline catalog. It represents a configurable pipeline that can later be invoked for training, which in turn creates a trained pipeline. The latter is also a model, stored in the catalog with type LinkPrediction.

Syntax

Create pipeline syntax
CALL gds.beta.pipeline.linkPrediction.create(
  pipelineName: String
)
YIELD
  name: String,
  nodePropertySteps: List of Map,
  featureSteps: List of Map,
  splitConfig: Map,
  autoTuningConfig: Map,
  parameterSpace: List of Map
Table 1. Parameters
Name Type Description

pipelineName

String

The name of the created pipeline.

Table 2. Results
Name Type Description

name

String

Name of the pipeline.

nodePropertySteps

List of Map

List of configurations for node property steps.

featureSteps

List of Map

List of configurations for feature steps.

splitConfig

Map

Configuration to define the split before the model training.

autoTuningConfig

Map

Configuration to define the behavior of auto-tuning.

parameterSpace

List of Map

List of parameter configurations for models which the train mode uses for model selection.

Example

The following will create a pipeline:
CALL gds.beta.pipeline.linkPrediction.create('pipe')
Table 3. Results
name nodePropertySteps featureSteps splitConfig autoTuningConfig parameterSpace

"pipe"

[]

[]

{negativeSamplingRatio=1.0, testFraction=0.1, trainFraction=0.1, validationFolds=3}

{maxTrials=10}

{LogisticRegression=[], MultilayerPerceptron=[], RandomForest=[]}

This shows that the newly created pipeline does not contain any steps yet, and has defaults for the split and train parameters.

Adding node properties

A link prediction pipeline can execute one or several GDS algorithms in mutate mode that create node properties in the projected graph. Such node property steps can be chained one after another, and the properties they create can also be used to add features. Moreover, the node property steps added to the pipeline are executed both when training a pipeline and when the trained model is applied for prediction.

The name of the procedure to add can be a fully qualified GDS procedure name ending in .mutate. The .mutate suffix may be omitted, and shorthand forms such as node2vec instead of gds.node2vec.mutate are also accepted. Note, however, that a tier qualification must still be given as part of the name.
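This naming convention can be sketched with a small hypothetical helper. This is an illustration only, not part of GDS; actual name resolution happens inside the library, and tier qualifiers such as beta must already be part of the supplied name:

```python
def normalize_step_name(name: str) -> str:
    # Shorthand forms omit the 'gds.' prefix and/or the '.mutate' suffix;
    # tier qualifiers such as 'beta.' must already be included by the caller.
    if not name.startswith("gds."):
        name = "gds." + name
    if not name.endswith(".mutate"):
        name += ".mutate"
    return name

print(normalize_step_name("node2vec"))           # gds.node2vec.mutate
print(normalize_step_name("gds.fastRP.mutate"))  # gds.fastRP.mutate
```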

For example, pre-processing algorithms can be used as node property steps.

Syntax

Add node property syntax
CALL gds.beta.pipeline.linkPrediction.addNodeProperty(
  pipelineName: String,
  procedureName: String,
  procedureConfiguration: Map
)
YIELD
  name: String,
  nodePropertySteps: List of Map,
  featureSteps: List of Map,
  splitConfig: Map,
  autoTuningConfig: Map,
  parameterSpace: List of Map
Table 4. Parameters
Name Type Description

pipelineName

String

The name of the pipeline.

procedureName

String

The name of the procedure to be added to the pipeline.

procedureConfiguration

Map

The map used to generate the configuration of the procedure. It includes procedure-specific configuration, except nodeLabels and relationshipTypes. It can optionally contain the parameters listed in the table below.

Table 5. Node property step context configuration
Name Type Default Description

contextNodeLabels

List of String

[]

Additional node labels which are added as context.

contextRelationshipTypes

List of String

[]

Additional relationship types which are added as context.

During training, the context configuration is combined with the train configuration to produce the final node label and relationship type filter for each node property step.

Table 6. Results
Name Type Description

name

String

Name of the pipeline.

nodePropertySteps

List of Map

List of configurations for node property steps.

featureSteps

List of Map

List of configurations for feature steps.

splitConfig

Map

Configuration to define the split before the model training.

autoTuningConfig

Map

Configuration to define the behavior of auto-tuning.

parameterSpace

List of Map

List of parameter configurations for models which the train mode uses for model selection.

Example

The following will add a node property step to the pipeline:
CALL gds.beta.pipeline.linkPrediction.addNodeProperty('pipe', 'fastRP', {
  mutateProperty: 'embedding',
  embeddingDimension: 256,
  randomSeed: 42
})
Table 7. Results
name nodePropertySteps featureSteps splitConfig autoTuningConfig parameterSpace

"pipe"

[{config={contextNodeLabels=[], contextRelationshipTypes=[], embeddingDimension=256, mutateProperty="embedding", randomSeed=42}, name="gds.fastRP.mutate"}]

[]

{negativeSamplingRatio=1.0, testFraction=0.1, trainFraction=0.1, validationFolds=3}

{maxTrials=10}

{LogisticRegression=[], MultilayerPerceptron=[], RandomForest=[]}

The pipeline will now execute the fastRP algorithm in mutate mode both before training a model, and when the trained model is applied for prediction. This ensures the embedding property can be used as an input for link features.

Adding link features

A Link Prediction pipeline executes a sequence of steps to compute the features used by a machine learning model. A feature step computes a vector of features for given node pairs. For each node pair, the results are concatenated into a single link feature vector. The order of the features in the link feature vector follows the order of the feature steps. Like with node property steps, the feature steps are also executed both at training and prediction time. The supported methods for obtaining features are described below.

Syntax

Adding a link feature to a pipeline
CALL gds.beta.pipeline.linkPrediction.addFeature(
  pipelineName: String,
  featureType: String,
  configuration: Map
)
YIELD
  name: String,
  nodePropertySteps: List of Map,
  featureSteps: List of Map,
  splitConfig: Map,
  autoTuningConfig: Map,
  parameterSpace: List of Map
Table 8. Parameters
Name Type Description

pipelineName

String

The name of the pipeline.

featureType

String

The featureType determines the method used for computing the link feature. See supported types.

configuration

Map

Configuration for adding the link feature.

Table 9. Configuration
Name Type Default Description

nodeProperties

List of String

n/a

The names of the node properties that should be used as input.

Table 10. Results
Name Type Description

name

String

Name of the pipeline.

nodePropertySteps

List of Map

List of configurations for node property steps.

featureSteps

List of Map

List of configurations for feature steps.

splitConfig

Map

Configuration to define the split before the model training.

autoTuningConfig

Map

Configuration to define the behavior of auto-tuning.

parameterSpace

List of Map

List of parameter configurations for models which the train mode uses for model selection.

Supported feature types

A feature step can use node properties that exist in the input graph or are added by the pipeline. For each node in each potential link, the values of nodeProperties are concatenated, in the configured order, into a vector. That is, for each potential link the feature vector s = (s1, s2, ..., sd) for the source node is combined with the feature vector t = (t1, t2, ..., td) for the target node into a single link feature vector f.

The supported types of features can then be described as follows:

Table 11. Supported feature types
Feature Type Formula / Description

L2

f = [(s1 - t1)^2, (s2 - t2)^2, ..., (sd - td)^2]

HADAMARD

f = [s1 * t1, s2 * t2, ..., sd * td]

COSINE

f = sum(i = 1..d) si * ti / (sqrt(sum(i = 1..d) si^2) * sqrt(sum(i = 1..d) ti^2))

SAME_CATEGORY

The feature is 1 if the category values of the source and target nodes are the same, otherwise it is 0. Similar to Same Community.
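Assuming the node pair's property values have already been concatenated into the vectors s and t described above, the three numeric feature types can be sketched in plain Python. The function names are illustrative, not part of the GDS API:

```python
import math

def l2(s, t):
    # Element-wise squared difference: one feature per dimension.
    return [(si - ti) ** 2 for si, ti in zip(s, t)]

def hadamard(s, t):
    # Element-wise product: one feature per dimension.
    return [si * ti for si, ti in zip(s, t)]

def cosine(s, t):
    # Cosine similarity: a single scalar feature.
    dot = sum(si * ti for si, ti in zip(s, t))
    norm_s = math.sqrt(sum(si * si for si in s))
    norm_t = math.sqrt(sum(ti * ti for ti in t))
    return [dot / (norm_s * norm_t)]

s, t = [1.0, 2.0], [2.0, 0.0]
print(l2(s, t))        # [1.0, 4.0]
print(hadamard(s, t))  # [2.0, 0.0]
print(cosine(s, t))    # [0.447...]
```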

Example

The following will add a feature step to the pipeline:
CALL gds.beta.pipeline.linkPrediction.addFeature('pipe', 'hadamard', {
  nodeProperties: ['embedding', 'age']
}) YIELD featureSteps
Table 12. Results
featureSteps

[{config={nodeProperties=["embedding", "age"]}, name="HADAMARD"}]

When executing the pipeline, the nodeProperties must either be present in the input graph or be created by a previous node property step. For example, the embedding property could be created as in the previous example, and we expect age to already be present in the in-memory graph used as input, at both train and predict time.

Configuring the relationship splits

Link Prediction training pipelines manage splitting the relationships into several sets and add sampled negative relationships to some of these sets. Configuring the splitting is optional, and if omitted, splitting will be done using default settings.

The splitting configuration of a pipeline can be inspected by using gds.model.list and possibly only yielding splitConfig.

The splitting of relationships proceeds internally in the following steps:

  1. The graph is filtered according to specified sourceNodeLabel, targetNodeLabel and targetRelationshipType, which are configured at train time.

  2. The relationships remaining after filtering are called positive; they are split into the test, train and feature-input sets.

    • The test set contains a fraction testFraction of the positive relationships. The remaining positive relationships are referred to as the testComplement set.

    • The train set contains a fraction trainFraction of the testComplement set.

    • The feature-input set contains the rest.

  3. Random negative relationships, which conform to the sourceNodeLabel and targetNodeLabel filter, are added to the test and train sets.

    • The number of negative relationships in each set is the number of positive ones multiplied by the negativeSamplingRatio.

    • The negative relationships do not coincide with positive relationships.

    • If negativeRelationshipType is specified, then instead of sampling, all relationships of this type in the graph are partitioned according to the test and train set size ratio and added as negative relationships. All relationships of negativeRelationshipType must also conform to the sourceNodeLabel and targetNodeLabel filter.

The positive and negative relationships are given relationship weights of 1.0 and 0.0 respectively so that they can be distinguished.
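The splitting steps above can be sketched as follows. This is a simplified illustration only, not the GDS implementation; the rounding behavior and the bookkeeping of sampled negatives are assumptions of this sketch:

```python
import random

def split_relationships(positive, candidate_pairs, test_fraction, train_fraction,
                        negative_sampling_ratio, seed=42):
    """Sketch of the relationship split. positive: (source, target) pairs in the
    filtered graph; candidate_pairs: pairs conforming to the node-label filter."""
    rng = random.Random(seed)
    positive_set = set(positive)
    shuffled = list(positive)
    rng.shuffle(shuffled)

    n_test = round(len(shuffled) * test_fraction)
    test_pos, test_complement = shuffled[:n_test], shuffled[n_test:]

    n_train = round(len(test_complement) * train_fraction)
    train_pos, feature_input = test_complement[:n_train], test_complement[n_train:]

    # Negative relationships must not coincide with positive ones.
    negatives = [p for p in candidate_pairs if p not in positive_set]
    rng.shuffle(negatives)
    n_test_neg = round(len(test_pos) * negative_sampling_ratio)
    n_train_neg = round(len(train_pos) * negative_sampling_ratio)
    test_neg = negatives[:n_test_neg]
    train_neg = negatives[n_test_neg:n_test_neg + n_train_neg]

    # Positives get relationship weight 1.0, negatives 0.0, so they can be told apart.
    test_set = [(p, 1.0) for p in test_pos] + [(p, 0.0) for p in test_neg]
    train_set = [(p, 1.0) for p in train_pos] + [(p, 0.0) for p in train_neg]
    return test_set, train_set, feature_input

test_set, train_set, feature_input = split_relationships(
    positive=[(i, i + 1) for i in range(12)],
    candidate_pairs=[(i, j) for i in range(13) for j in range(13) if i != j],
    test_fraction=0.25, train_fraction=0.6, negative_sampling_ratio=1.0)
print(len(test_set), len(train_set), len(feature_input))  # 6 10 4
```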

The train and test relationship sets are used for:

  • determining the label (positive or negative) for each training or test example

  • identifying the node pair for which link features are to be computed

However, they are not used by the algorithms run in the node property steps. The reason for this is that otherwise the model would use the prediction target (existence of a relationship) as a feature.

Each node property step uses a feature-input graph. The feature-input graph contains the nodes with the sourceNodeLabel, targetNodeLabel and contextNodeLabels labels, and the relationships of the feature-input set plus those of the contextRelationshipTypes. This graph is used for computing node properties and features that depend on node properties. The node properties generated in the feature-input graph are used in training and testing.

Syntax

Configure the relationship split syntax
CALL gds.beta.pipeline.linkPrediction.configureSplit(
  pipelineName: String,
  configuration: Map
)
YIELD
  name: String,
  nodePropertySteps: List of Map,
  featureSteps: List of Map,
  splitConfig: Map,
  autoTuningConfig: Map,
  parameterSpace: List of Map
Table 13. Parameters
Name Type Description

pipelineName

String

The name of the pipeline.

configuration

Map

Configuration for splitting the relationships.

Table 14. Configuration
Name Type Default Description

validationFolds

Integer

3

Number of divisions of the training graph used during model selection.

testFraction

Double

0.1

Fraction of the graph reserved for testing. Must be in the range (0, 1).

trainFraction

Double

0.1

Fraction of the test-complement set reserved for training. Must be in the range (0, 1).

negativeSamplingRatio

Double

1.0

The desired ratio of negative to positive samples in the test and train sets. This parameter is mutually exclusive with negativeRelationshipType.

negativeRelationshipType

String

n/a

Specifies which relationships should be used as negative relationships, added to the test and train sets. This parameter is mutually exclusive with negativeSamplingRatio.

Table 15. Results
Name Type Description

name

String

Name of the pipeline.

nodePropertySteps

List of Map

List of configurations for node property steps.

featureSteps

List of Map

List of configurations for feature steps.

splitConfig

Map

Configuration to define the split before the model training.

autoTuningConfig

Map

Configuration to define the behavior of auto-tuning.

parameterSpace

List of Map

List of parameter configurations for models which the train mode uses for model selection.

Example

The following will configure the splitting of the pipeline:
CALL gds.beta.pipeline.linkPrediction.configureSplit('pipe', {
  testFraction: 0.25,
  trainFraction: 0.6,
  validationFolds: 3
})
YIELD splitConfig
Table 16. Results
splitConfig

{negativeSamplingRatio=1.0, testFraction=0.25, trainFraction=0.6, validationFolds=3}

We have now reconfigured the splitting of the pipeline, and it will be applied during training.

As an example, consider a graph with nodes 'Person' and 'City' and relationships 'KNOWS', 'BORN' and 'LIVES'. Please note that this is the same example as in Training the pipeline.

Visualization of the example graph
Figure 1. Full example graph

Suppose we filter by sourceNodeLabel and targetNodeLabel being Person and targetRelationshipType being KNOWS. The filtered graph looks like the following:

example graph for LP split
Figure 2. Filtered graph

The filtered graph has 12 relationships. If we configure the split with testFraction 0.25 and negativeSamplingRatio 1.0, it randomly picks 12 * 0.25 = 3 positive relationships plus 1 * 3 = 3 negative relationships as the test set.

Then, with trainFraction 0.6 and negativeSamplingRatio 1.0, it randomly picks 9 * 0.6 = 5.4 ≈ 5 positive relationships plus 1 * 5 = 5 negative relationships as the train set.

The remaining 12 * (1 - 0.25) * (1 - 0.6) = 3.6 ≈ 4 relationships, shown in yellow, form the feature-input set.
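The arithmetic above can be checked directly. Whole-relationship rounding is assumed here to be standard rounding; the exact rounding rule is an implementation detail:

```python
positives = 12
test_fraction, train_fraction, negative_sampling_ratio = 0.25, 0.6, 1.0

test_pos = round(positives * test_fraction)          # 3
test_complement = positives - test_pos               # 9
train_pos = round(test_complement * train_fraction)  # 5  (from 5.4)
feature_input = test_complement - train_pos          # 4  (from 3.6)

test_neg = round(test_pos * negative_sampling_ratio)    # 3
train_neg = round(train_pos * negative_sampling_ratio)  # 5

print(test_pos, train_pos, feature_input, test_neg, train_neg)  # 3 5 4 3 5
```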

example graph for LP split
Figure 3. Positive and negative relationships for each set according to the split. The test set is in blue, train set in red and feature-input set in yellow. Dashed lines represent negative relationships.

Suppose, for example, that a node property step is added with contextNodeLabels City and contextRelationshipTypes BORN. Then the feature-input graph for that step would be:

example graph for LP split
Figure 4. Feature-input graph. The feature-input set is in yellow.

Adding model candidates

A pipeline contains a collection of configurations for model candidates which is initially empty. This collection is called the parameter space. Each model candidate configuration contains either fixed values or ranges for training parameters. When a range is present, values from the range are determined automatically by an auto-tuning algorithm, see Auto-tuning. One or more model configurations must be added to the parameter space of the training pipeline, using one of the following procedures:

  • gds.beta.pipeline.linkPrediction.addLogisticRegression

  • gds.beta.pipeline.linkPrediction.addRandomForest

  • gds.alpha.pipeline.linkPrediction.addMLP

For information about the available training methods in GDS, logistic regression, random forest and multilayer perceptron, see Training methods.

In Training the pipeline, we explain further how the configured model candidates are trained, evaluated and compared.

The parameter space of a pipeline can be inspected using gds.model.list and optionally yielding only parameterSpace.

At least one model candidate must be added to the pipeline before training it.

Syntax

Configure the train parameters syntax
CALL gds.beta.pipeline.linkPrediction.addLogisticRegression(
  pipelineName: String,
  config: Map
)
YIELD
  name: String,
  nodePropertySteps: List of Map,
  featureSteps: List of Map,
  splitConfig: Map,
  autoTuningConfig: Map,
  parameterSpace: Map
Table 17. Parameters
Name Type Description

pipelineName

String

The name of the pipeline.

config

Map

The logistic regression config for a model candidate. The allowed parameters for a model are defined in the next table.

Table 18. Logistic regression configuration
Name Type Default Optional Description

batchSize

Integer or Map [1]

100

yes

Number of nodes per batch.

minEpochs

Integer or Map [1]

1

yes

Minimum number of training epochs.

maxEpochs

Integer or Map [1]

100

yes

Maximum number of training epochs.

learningRate [2]

Float or Map [1]

0.001

yes

The learning rate determines the step size at each epoch while moving in the direction dictated by the Adam optimizer for minimizing the loss.

patience

Integer or Map [1]

1

yes

Maximum number of unproductive consecutive epochs.

tolerance [2]

Float or Map [1]

0.001

yes

The minimal improvement of the loss to be considered productive.

penalty [2]

Float or Map [1]

0.0

yes

Penalty used for the logistic regression. By default, no penalty is applied.

focusWeight

Float or Map [1]

0.0

yes

Exponent for the focal loss factor, to make the model focus more on hard, misclassified examples in the train set. The default of 0.0 implies that focus is not applied and ordinary cross entropy is used. Must be non-negative.

classWeights

List of Float

[1.0, 1.0]

yes

Weights for each class in loss function. The list must have length 2. The first weight is for negative examples (missing relationships), and the second for positive examples (actual relationships).

1. A map should be of the form {range: [minValue, maxValue]}. It is used by auto-tuning.

2. Ranges for this parameter are auto-tuned on a logarithmic scale.
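To make the roles of classWeights and focusWeight concrete, here is a sketch of the class-weighted focal loss for a single example, with focusWeight as the exponent; when it is 0.0 this reduces to weighted cross entropy. This illustrates the standard focal-loss formula under that assumption and is not GDS source code:

```python
import math

def focal_loss(prob_positive, label, class_weights=(1.0, 1.0), focus_weight=0.0):
    """Loss for one example. label is 0 (negative) or 1 (positive);
    prob_positive is the predicted probability of the positive class."""
    p = prob_positive if label == 1 else 1.0 - prob_positive
    weight = class_weights[label]  # first weight: negatives, second: positives
    return -weight * (1.0 - p) ** focus_weight * math.log(p)

# With focus_weight = 0 this is plain weighted cross entropy:
print(focal_loss(0.9, 1))                    # ≈ 0.105
# A confidently wrong example is up-weighted when focus_weight > 0:
print(focal_loss(0.1, 1, focus_weight=2.0))  # ≈ 1.865
```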

Table 19. Results
Name Type Description

name

String

Name of the pipeline.

nodePropertySteps

List of Map

List of configurations for node property steps.

featureSteps

List of Map

List of configurations for feature steps.

splitConfig

Map

Configuration to define the split before the model training.

autoTuningConfig

Map

Configuration to define the behavior of auto-tuning.

parameterSpace

List of Map

List of parameter configurations for models which the train mode uses for model selection.

Configure the train parameters syntax
CALL gds.beta.pipeline.linkPrediction.addRandomForest(
  pipelineName: String,
  config: Map
)
YIELD
  name: String,
  nodePropertySteps: List of Map,
  featureSteps: List of Map,
  splitConfig: Map,
  autoTuningConfig: Map,
  parameterSpace: Map
Table 20. Parameters
Name Type Description

pipelineName

String

The name of the pipeline.

config

Map

The random forest config for a model candidate. The allowed parameters for a model are defined in the next table.

Table 21. Random Forest Classification configuration
Name Type Default Optional Description

maxFeaturesRatio

Float or Map [3]

1 / sqrt(|features|)

yes

The ratio of features to consider when looking for the best split

numberOfSamplesRatio

Float or Map [3]

1.0

yes

The ratio of samples to consider per decision tree. We use sampling with replacement. A value of 0 indicates using every training example (no sampling).

numberOfDecisionTrees

Integer or Map [3]

100

yes

The number of decision trees.

maxDepth

Integer or Map [3]

No max depth

yes

The maximum depth of a decision tree.

minLeafSize

Integer or Map [3]

1

yes

The minimum number of samples for a leaf node in a decision tree. Must be strictly smaller than minSplitSize.

minSplitSize

Integer or Map [3]

2

yes

The minimum number of samples required to split an internal node in a decision tree. Must be strictly larger than minLeafSize.

criterion

String

"GINI"

yes

The impurity criterion used to evaluate potential node splits during decision tree training. Valid options are "GINI" and "ENTROPY" (both case-insensitive).

3. A map should be of the form {range: [minValue, maxValue]}. It is used by auto-tuning.

Table 22. Results
Name Type Description

name

String

Name of the pipeline.

nodePropertySteps

List of Map

List of configurations for node property steps.

featureSteps

List of Map

List of configurations for feature steps.

splitConfig

Map

Configuration to define the split before the model training.

autoTuningConfig

Map

Configuration to define the behavior of auto-tuning.

parameterSpace

List of Map

List of parameter configurations for models which the train mode uses for model selection.

Configure the train parameters syntax
CALL gds.alpha.pipeline.linkPrediction.addMLP(
  pipelineName: String,
  config: Map
)
YIELD
  name: String,
  nodePropertySteps: List of Map,
  featureSteps: List of Map,
  splitConfig: Map,
  autoTuningConfig: Map,
  parameterSpace: Map
Table 23. Parameters
Name Type Description

pipelineName

String

The name of the pipeline.

config

Map

The multilayer perceptron config for a model candidate. The allowed parameters for a model are defined in the next table.

Table 24. Multilayer Perceptron Classification configuration
Name Type Default Optional Description

batchSize

Integer or Map [4]

100

yes

Number of nodes per batch.

minEpochs

Integer or Map [4]

1

yes

Minimum number of training epochs.

maxEpochs

Integer or Map [4]

100

yes

Maximum number of training epochs.

learningRate [5]

Float or Map [4]

0.001

yes

The learning rate determines the step size at each epoch while moving in the direction dictated by the Adam optimizer for minimizing the loss.

patience

Integer or Map [4]

1

yes

Maximum number of unproductive consecutive epochs.

tolerance [5]

Float or Map [4]

0.001

yes

The minimal improvement of the loss to be considered productive.

penalty [5]

Float or Map [4]

0.0

yes

Penalty applied to the model weights during training. By default, no penalty is applied.

hiddenLayerSizes

List of Integers

[100]

yes

List of integers representing number of neurons in each layer. The default value specifies an MLP with 1 hidden layer of 100 neurons.

focusWeight

Float or Map [4]

0.0

yes

Exponent for the focal loss factor, to make the model focus more on hard, misclassified examples in the train set. The default of 0.0 implies that focus is not applied and ordinary cross entropy is used. Must be non-negative.

classWeights

List of Float

[1.0, 1.0]

yes

Weights for each class in cross-entropy loss. The list must have length 2. The first weight is for negative examples (missing relationships), and the second for positive examples (actual relationships).

4. A map should be of the form {range: [minValue, maxValue]}. It is used by auto-tuning.

5. Ranges for this parameter are auto-tuned on a logarithmic scale.

Table 25. Results
Name Type Description

name

String

Name of the pipeline.

nodePropertySteps

List of Map

List of configurations for node property steps.

featureSteps

List of Map

List of configurations for feature steps.

splitConfig

Map

Configuration to define the split before the model training.

autoTuningConfig

Map

Configuration to define the behavior of auto-tuning.

parameterSpace

List of Map

List of parameter configurations for models which the train mode uses for model selection.

Example

We can add multiple model candidates to our pipeline.

The following will add a logistic regression model with default configuration:
CALL gds.beta.pipeline.linkPrediction.addLogisticRegression('pipe')
YIELD parameterSpace
The following will add a random forest model:
CALL gds.beta.pipeline.linkPrediction.addRandomForest('pipe', {numberOfDecisionTrees: 10})
YIELD parameterSpace
The following will add a configured multilayer perceptron model with class weighted focal loss and ranged parameters:
CALL gds.alpha.pipeline.linkPrediction.addMLP('pipe',
{hiddenLayerSizes: [4, 2], penalty: 0.5, patience: 2, classWeights: [0.55, 0.45], focusWeight: {range: [0.0, 0.1]}})
YIELD parameterSpace
The following will add a logistic regression model with a range parameter:
CALL gds.beta.pipeline.linkPrediction.addLogisticRegression('pipe', {maxEpochs: 500, penalty: {range: [1e-4, 1e2]}})
YIELD parameterSpace
RETURN parameterSpace.RandomForest AS randomForestSpace, parameterSpace.LogisticRegression AS logisticRegressionSpace, parameterSpace.MultilayerPerceptron AS MultilayerPerceptronSpace
Table 26. Results
randomForestSpace logisticRegressionSpace MultilayerPerceptronSpace

[{criterion="GINI", maxDepth=2147483647, methodName="RandomForest", minLeafSize=1, minSplitSize=2, numberOfDecisionTrees=10, numberOfSamplesRatio=1.0}]

[{batchSize=100, classWeights=[], focusWeight=0.0, learningRate=0.001, maxEpochs=100, methodName="LogisticRegression", minEpochs=1, patience=1, penalty=0.0, tolerance=0.001}, {batchSize=100, classWeights=[], focusWeight=0.0, learningRate=0.001, maxEpochs=500, methodName="LogisticRegression", minEpochs=1, patience=1, penalty={range=[0.0001, 100.0]}, tolerance=0.001}]

[{batchSize=100, classWeights=[0.55, 0.45], focusWeight={range=[0.0, 0.1]}, hiddenLayerSizes=[4, 2], learningRate=0.001, maxEpochs=100, methodName="MultilayerPerceptron", minEpochs=1, patience=2, penalty=0.5, tolerance=0.001}]

The parameterSpace in the pipeline now contains the four different model candidates, expanded with the default values. Each specified model candidate will be tried out during the model selection in training.

These are somewhat naive examples of how to add and configure model candidates. Please see Training methods for more information on how to tune the configuration parameters of each method.

Configuring Auto-tuning

In order to find good models, the pipeline supports automatically tuning the parameters of the training algorithm. Optionally, the procedure described below can be used to configure the auto-tuning behavior; otherwise, the default auto-tuning configuration is used. Currently, it is only possible to configure the maximum number of trials of hyper-parameter settings that are evaluated.

Syntax

Configuring auto-tuning syntax
CALL gds.alpha.pipeline.linkPrediction.configureAutoTuning(
  pipelineName: String,
  configuration: Map
)
YIELD
  name: String,
  nodePropertySteps: List of Map,
  featureSteps: List of Map,
  splitConfig: Map,
  autoTuningConfig: Map,
  parameterSpace: List of Map
Table 27. Parameters
Name Type Description

pipelineName

String

The name of the created pipeline.

configuration

Map

The configuration for auto-tuning.

Table 28. Configuration
Name Type Default Description

maxTrials

Integer

10

The value of maxTrials determines the maximum number of model candidates that are evaluated and compared when training the pipeline. If no ranges are present in the parameter space, maxTrials is ignored and each model candidate in the parameter space is evaluated.

Table 29. Results
Name Type Description

name

String

Name of the pipeline.

nodePropertySteps

List of Map

List of configurations for node property steps.

featureSteps

List of Map

List of configurations for feature steps.

splitConfig

Map

Configuration to define the split before the model training.

autoTuningConfig

Map

Configuration to define the behavior of auto-tuning.

parameterSpace

List of Map

List of parameter configurations for models which the train mode uses for model selection.

Example

The following will configure the maximum trials for the auto-tuning:
CALL gds.alpha.pipeline.linkPrediction.configureAutoTuning('pipe', {
  maxTrials: 2
}) YIELD autoTuningConfig
Table 30. Results
autoTuningConfig

{maxTrials=2}

We have now reconfigured the auto-tuning to try out at most 2 model candidates during training.