Dec 13, 2023 · In this study, we propose a USM fine-tuning approach for ASR, with a low-bit quantization and N:M structured sparsity aware paradigm on the model weights.
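The N:M structured sparsity mentioned above keeps only N non-zero weights in every contiguous group of M. As a generic illustration (not the paper's exact training procedure), a mask for such a pattern can be built by keeping the N largest-magnitude entries per group:

```python
import numpy as np

def nm_sparsity_mask(weights, n=2, m=4):
    """Return a 0/1 mask keeping the n largest-magnitude entries in each
    contiguous group of m weights (generic N:M structured sparsity sketch)."""
    flat = weights.reshape(-1, m)                      # groups of m along the last axis
    # indices of the (m - n) smallest-magnitude entries in each group
    drop = np.argsort(np.abs(flat), axis=1)[:, : m - n]
    mask = np.ones_like(flat)
    np.put_along_axis(mask, drop, 0.0, axis=1)
    return mask.reshape(weights.shape)

w = np.arange(1, 9, dtype=float).reshape(2, 4)         # [[1,2,3,4],[5,6,7,8]]
mask = nm_sparsity_mask(w, n=2, m=4)
print(mask)                                            # each row keeps its 2 largest weights
```

With a 2:4 pattern, half the weights are zeroed, which is the source of the parameter reduction the snippets below describe.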
Apr 16, 2024 · This reduction in parameters decreases the memory footprint of models and can improve computational efficiency.
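The low-bit quantization side of the memory saving can be sketched with uniform symmetric quantization; the scheme below is illustrative only and may differ from the paper's exact method. Storing 4-bit codes plus one float scale in place of float32 weights is roughly an 8x reduction:

```python
import numpy as np

def quantize_symmetric(w, num_bits=4):
    """Uniform symmetric quantization (illustrative sketch, not the paper's
    exact scheme): map floats to signed integers with a single scale."""
    qmax = 2 ** (num_bits - 1) - 1                     # e.g. 7 for 4-bit
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, scale = quantize_symmetric(w, num_bits=4)
dequant = q.astype(np.float32) * scale                 # reconstruct for inference
err = np.abs(w - dequant).max()                        # worst-case rounding error
print(f"max abs error: {err:.4f}")
```

Rounding error is bounded by half the quantization step, which is the accuracy cost traded for the smaller footprint.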
Apr 18, 2024 · USM-Lite: Quantization and Sparsity Aware Fine-tuning for Speech Recognition with Universal Speech Models. Session: SLP-L21: End-to-end modeling.
USM-Lite: Quantization and Sparsity Aware Fine-tuning for Speech Recognition with Universal Speech Models · no code implementations · 13 Dec 2023 · Shaojin Ding et al.
In this paper, we propose a dynamic cascaded encoder Automatic Speech Recognition (ASR) model, which unifies models for different deployment scenarios.
Oct 7, 2024 · USM-Lite: Quantization and Sparsity Aware Fine-Tuning for Speech Recognition with Universal Speech Models. ICASSP 2024: 10756-10760.