
The Bert Large training performance is sometimes wrongly calculated #171


Description

@taotod

The script below uses the final training iteration's time to calculate the training performance.

https://github.com/IntelAI/models/blob/cdd842a33eb9d402ff18bfb79bd106ae132a8e99/quickstart/language_modeling/pytorch/bert_large/training/gpu/bf16_training_plain_format.sh#L57

If the final training iteration falls at the end of the data file, its batch will be smaller than the expected batch size (16 or 32), so the final iteration's time will be very small (the batch may hold only half the expected number of samples, or fewer). The script then reports incorrect performance numbers.
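For illustration, with hypothetical numbers: if the expected batch size is 32 but the final batch holds only 7 samples, dividing the nominal batch size by the shortened final-iteration time inflates the reported throughput several times over:

```python
# Hypothetical numbers showing the skew: the script divides the nominal
# batch size by the *last* iteration's time, so a short final batch
# inflates the reported throughput.
batch_size = 32           # expected batch size
full_batch_time = 0.50    # assumed seconds per full-batch iteration
last_batch_samples = 7    # final batch truncated at the end of the file
last_batch_time = full_batch_time * last_batch_samples / batch_size

true_throughput = batch_size / full_batch_time      # 64 samples/s
reported_throughput = batch_size / last_batch_time  # ~293 samples/s
print(f"true: {true_throughput:.1f}, reported: {reported_throughput:.1f}")
```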

Suggestion: set the "drop_last" parameter in the training code below so the partial final batch of every data set file is dropped; see the sketch after the link.
https://github.com/IntelAI/models/blob/cdd842a33eb9d402ff18bfb79bd106ae132a8e99/models/language_modeling/pytorch/bert_large/training/gpu/run_pretrain_mlperf.py#L904
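A minimal sketch of the fix, assuming the pretraining script constructs its DataLoader roughly as below (train_dataset, train_sampler, and args.train_batch_size are illustrative names, not necessarily the exact ones in run_pretrain_mlperf.py):

```python
from torch.utils.data import DataLoader

train_dataloader = DataLoader(
    train_dataset,
    sampler=train_sampler,
    batch_size=args.train_batch_size,
    num_workers=4,
    pin_memory=True,
    drop_last=True,  # discard the partial final batch of each data file
)
```

With drop_last=True, every timed iteration processes a full batch, so dividing the batch size by the final iteration's time gives a consistent samples-per-second figure.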
