Reentrant gradient checkpointing (the default) conflicts with Accelerate's gradient accumulation context manager, causing a "Trying to backward through the graph a second time" error on the first training step. Passing `use_reentrant=False` selects the non-reentrant autograd path, which is based on saved-tensor hooks and is compatible with Accelerate >= 0.27.
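A minimal sketch of the two checkpointing modes, assuming plain PyTorch; the `block` function and tensor shapes here are illustrative, not taken from the code under discussion:

```python
import torch
from torch.utils.checkpoint import checkpoint

# Non-reentrant checkpointing (use_reentrant=False) uses saved-tensor hooks
# instead of a custom autograd Function, so the recomputation during backward
# composes with outer context managers such as Accelerate's gradient
# accumulation wrapper.
def block(x):
    # Hypothetical stand-in for a transformer block.
    return torch.relu(x @ torch.ones(4, 4))

x = torch.randn(2, 4, requires_grad=True)
y = checkpoint(block, x, use_reentrant=False).sum()
y.backward()  # `block` is recomputed here; x.grad is populated normally
print(x.grad.shape)  # torch.Size([2, 4])
```

With Hugging Face models, the equivalent switch is typically made via `model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant": False})` rather than calling `checkpoint` directly.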