Knowledge distillation, a technique for shrinking neural networks, yields relative performance increases of up to 122%.
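The blurb doesn't say which distillation variant produced that figure. As a point of reference only, below is a minimal PyTorch sketch of the classic soft-target distillation loss (Hinton et al., 2015), in which a small student network is trained to match a large teacher's temperature-softened output distribution; the temperature `T` and mixing weight `alpha` are illustrative hyperparameters, not values from the article.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Classic knowledge-distillation loss: a weighted sum of
    cross-entropy on the hard labels and KL divergence between
    temperature-softened teacher and student distributions."""
    # Soften both output distributions with temperature T.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    # The KL term is scaled by T^2 so its gradient magnitude stays
    # comparable to the hard-label term as T varies.
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Usage sketch: compute teacher logits without gradients, then
# backpropagate the combined loss through the student only.
# with torch.no_grad():
#     teacher_logits = teacher(x)
# loss = distillation_loss(student(x), teacher_logits, y)
```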