Making large AI models cheaper, faster and more accessible
PyTorch-based distributed training system for large AI models. Reduces GPU memory usage by up to 10x through tensor parallelism, pipeline parallelism, and ZeRO optimisation. Enables training GPT-3, BLOOM, and similar models on smaller GPU clusters.
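The ZeRO technique mentioned above reduces memory by partitioning optimizer state across data-parallel ranks instead of replicating it on every GPU. A minimal conceptual sketch in plain Python (this is an illustration of the idea, not the library's actual API; all function names here are hypothetical):

```python
# Conceptual sketch of ZeRO-style optimizer state sharding.
# Each of N ranks keeps optimizer state (e.g. Adam moments) for only
# its 1/N slice of the parameters, so per-GPU state memory shrinks
# roughly by a factor of N.

def shard_params(num_params: int, world_size: int) -> dict[int, int]:
    """Assign each parameter index to the rank that owns its optimizer state."""
    # Round-robin assignment for simplicity; real systems shard
    # contiguous blocks of the flattened parameter tensor.
    return {p: p % world_size for p in range(num_params)}

def per_rank_state_count(num_params: int, world_size: int) -> list[int]:
    """Count how many parameters' optimizer state each rank holds."""
    counts = [0] * world_size
    for rank in shard_params(num_params, world_size).values():
        counts[rank] += 1
    return counts
```

For example, with 8 parameters across 4 ranks, each rank holds optimizer state for only 2 parameters rather than all 8, a 4x reduction in that memory component; tensor and pipeline parallelism split the model weights and layers themselves in a similar spirit.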