Near-linear scaling of gigantic-model training on AWS

June 27, 2022, by Amazon AWS

A new distributed-training library achieves near-linear efficiency in scaling from tens to hundreds of GPUs.