

Blazingly Fast Computer Vision Training with the Mosaic ResNet and Composer

Introducing the Mosaic ResNet, the most efficient recipe for training ResNets on ImageNet

TL;DR: Match benchmark accuracy on ImageNet (He et al., 2015) in 27 minutes, a 7x speedup (ResNet-50 on 8x A100s). Reach higher levels of accuracy up to 3.8x faster than the existing state of the art (Wightman et al., 2021). Try it out in Composer, our open-source library for efficient neural network training. It's written in standard, easy-to-use PyTorch, so modify it to suit your needs and build on it!

ResNets are the workhorses of the computer vision world. Although they are ancient by deep learning standards (seven years old, to be exact), they remain a go-to choice for image classification and as backbones for segmentation and object detection. In the years since ResNets first came out, hundreds of researchers have proposed improvements to the training recipe that speed up training or enhance final performance.

Today, we are releasing the Mosaic ResNet, a modern training recipe that combines a dozen of the best such improvements (including new improvements and better versions of existing ones that we developed in-house) with the goal of maximizing efficiency. Simply put, the Mosaic ResNet can be trained faster and cheaper than any other ResNet recipe we know of, and it does so without sacrificing any accuracy. This speedup is available for any budget, whether you plan a short training run to get baseline accuracy or a longer training run to reach the highest accuracy possible. These recipes modify the training algorithm; the network architecture is the same ResNet you've known and loved since 2015 (with updated anti-aliasing pooling via Blurpool).

We measure efficiency by looking at the tradeoff between final accuracy and training time or cost (after all, time is money on the cloud; see our Methodology blog post for more details about how we quantify efficiency). As the plot below shows, for the same accuracy, the Mosaic ResNet is significantly faster than our NVIDIA Deep Learning Examples-derived baseline (7.1x faster) and other recently proposed training recipes such as ResNet Strikes Back in the TIMM repository (2x to 3.8x) or the PyTorch blog (1.6x).

Comparison between the best MosaicML ResNet-50 recipe for a given time and accuracy and different baselines. Data collected on the MosaicML Cloud (8x NVIDIA A100).

In the past, combining the diverse set of improvements in this recipe would have required assembling a hodgepodge of code and inserting it into your training loop in messy, ad hoc, bug-prone ways. For this reason, we built Composer, a PyTorch library that enables you to easily assemble recipes of improvements to train faster and better models.
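To make the one architectural change mentioned above concrete: Blurpool replaces naive strided downsampling with a blur-then-subsample step, which reduces aliasing when feature maps shrink. The following is a minimal NumPy sketch of that idea only; the binomial kernel and edge padding are illustrative assumptions on our part, and this is not Composer's actual implementation (Composer ships Blurpool as a built-in algorithm you enable on the trainer).

```python
import numpy as np

def blurpool2d(x: np.ndarray, stride: int = 2) -> np.ndarray:
    """Anti-aliased downsampling sketch: low-pass blur, then subsample.

    x: a 2-D array (H, W), e.g. one channel of a feature map.
    Returns an array of shape (ceil(H/stride), ceil(W/stride)).
    """
    # Separable normalized binomial kernel [1, 2, 1] / 4 (an assumption;
    # real Blurpool supports several filter sizes).
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    # Edge-pad by 1 so the "valid" convolution preserves the input size.
    xp = np.pad(x, 1, mode="edge")
    # Blur rows, then columns (separable 2-D convolution).
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="valid"), 1, xp)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, k, mode="valid"), 0, blurred)
    # Subsample every `stride`-th pixel: the actual downsampling step.
    return blurred[::stride, ::stride]
```

Because the blur kernel is normalized, a constant input stays constant after downsampling, while high-frequency content that would otherwise alias is attenuated before the subsample.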
