BYOL-S: Learning Self-supervised Speech Representations by Bootstrapping

Research output: Contribution to conference › Paper › Research

Standard

BYOL-S: Learning Self-supervised Speech Representations by Bootstrapping. / Elbanna, Gasser; Scheidwasser-Clow, Neil; Kegler, Mikolaj; Beckmann, Pierre; Hajal, Karl El; Cernak, Milos.

2022. Paper presented at HEAR: Holistic Evaluation of Audio Representations (NeurIPS 2021 Competition).

Research output: Contribution to conference › Paper › Research

Harvard

Elbanna, G, Scheidwasser-Clow, N, Kegler, M, Beckmann, P, Hajal, KE & Cernak, M 2022, 'BYOL-S: Learning Self-supervised Speech Representations by Bootstrapping', Paper presented at HEAR: Holistic Evaluation of Audio Representations (NeurIPS 2021 Competition), 13/12/2021 - 14/12/2021.

APA

Elbanna, G., Scheidwasser-Clow, N., Kegler, M., Beckmann, P., Hajal, K. E., & Cernak, M. (2022). BYOL-S: Learning Self-supervised Speech Representations by Bootstrapping. Paper presented at HEAR: Holistic Evaluation of Audio Representations (NeurIPS 2021 Competition).

Vancouver

Elbanna G, Scheidwasser-Clow N, Kegler M, Beckmann P, Hajal KE, Cernak M. BYOL-S: Learning Self-supervised Speech Representations by Bootstrapping. 2022. Paper presented at HEAR: Holistic Evaluation of Audio Representations (NeurIPS 2021 Competition).

Author

Elbanna, Gasser ; Scheidwasser-Clow, Neil ; Kegler, Mikolaj ; Beckmann, Pierre ; Hajal, Karl El ; Cernak, Milos. / BYOL-S: Learning Self-supervised Speech Representations by Bootstrapping. Paper presented at HEAR: Holistic Evaluation of Audio Representations (NeurIPS 2021 Competition).

Bibtex

@conference{a88c560841934bc0bcde98787f80baf7,
title = "BYOL-S: Learning Self-supervised Speech Representations by Bootstrapping",
abstract = " Methods for extracting audio and speech features have been studied since pioneering work on spectrum analysis decades ago. Recent efforts are guided by the ambition to develop general-purpose audio representations. For example, deep neural networks can extract optimal embeddings if they are trained on large audio datasets. This work extends existing methods based on self-supervised learning by bootstrapping, proposes various encoder architectures, and explores the effects of using different pre-training datasets. Lastly, we present a novel training framework to come up with a hybrid audio representation, which combines handcrafted and data-driven learned audio features. All the proposed representations were evaluated within the HEAR NeurIPS 2021 challenge for auditory scene classification and timestamp detection tasks. Our results indicate that the hybrid model with a convolutional transformer as the encoder yields superior performance in most HEAR challenge tasks. ",
keywords = "cs.SD, cs.AI, cs.LG, eess.AS",
author = "Gasser Elbanna and Neil Scheidwasser-Clow and Mikolaj Kegler and Pierre Beckmann and Hajal, {Karl El} and Milos Cernak",
year = "2022",
month = jun,
day = "24",
language = "Udefineret/Ukendt",
note = "null ; Conference date: 13-12-2021 Through 14-12-2021",
url = "https://proceedings.mlr.press/v166/",

}
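
The abstract above refers to self-supervised learning by bootstrapping, i.e. the BYOL family of methods the paper builds on. The sketch below illustrates a generic BYOL-style training step in PyTorch: an online network is trained to predict a target network's projection of a different augmented view of the same audio clip, and the target is refreshed as an exponential moving average (EMA) of the online weights. This is a minimal illustration of the general technique under assumed settings, not the paper's implementation; the encoder stub, input/layer sizes, and names (Encoder, mlp, train_step, tau) are illustrative assumptions.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Stand-in audio encoder: maps a flattened input view (e.g. a 64-band x
    96-frame log-mel spectrogram, an assumed shape) to a fixed-size embedding.
    The paper itself explores several encoder architectures."""
    def __init__(self, in_dim=64 * 96, dim=2048):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)

def mlp(in_dim, hidden=4096, out_dim=256):
    # Projector/predictor head in the style of BYOL.
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.BatchNorm1d(hidden),
                         nn.ReLU(inplace=True), nn.Linear(hidden, out_dim))

online_encoder, online_projector, predictor = Encoder(), mlp(2048), mlp(256)

# Target network: a frozen copy of the online network, updated only by EMA.
target_encoder = copy.deepcopy(online_encoder)
target_projector = copy.deepcopy(online_projector)
for p in [*target_encoder.parameters(), *target_projector.parameters()]:
    p.requires_grad = False

def byol_loss(p, z):
    # Negative cosine similarity; the stop-gradient on z (detach) is what
    # keeps the online network from collapsing to a trivial constant output.
    return 2 - 2 * F.cosine_similarity(p, z.detach(), dim=-1).mean()

@torch.no_grad()
def ema_update(online, target, tau=0.996):
    # Slowly move the target weights toward the online weights.
    for po, pt in zip(online.parameters(), target.parameters()):
        pt.mul_(tau).add_((1.0 - tau) * po)

def train_step(v1, v2, optimizer):
    """One step on two augmented views v1, v2 of the same audio clip."""
    p1 = predictor(online_projector(online_encoder(v1)))
    p2 = predictor(online_projector(online_encoder(v2)))
    with torch.no_grad():
        z1 = target_projector(target_encoder(v1))
        z2 = target_projector(target_encoder(v2))
    loss = byol_loss(p1, z2) + byol_loss(p2, z1)  # symmetrized loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(online_encoder, target_encoder)
    ema_update(online_projector, target_projector)
    return loss.item()

A typical driver would build an optimizer over the online parameters only, e.g. torch.optim.Adam([*online_encoder.parameters(), *online_projector.parameters(), *predictor.parameters()], lr=3e-4), and call train_step on batches of paired augmented views. The hybrid representation described in the abstract additionally combines such learned embeddings with handcrafted acoustic features, a detail omitted from this sketch.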

RIS

TY - CONF

T1 - BYOL-S: Learning Self-supervised Speech Representations by Bootstrapping

T2 - HEAR: Holistic Evaluation of Audio Representations (NeurIPS 2021 Competition)

AU - Elbanna, Gasser

AU - Scheidwasser-Clow, Neil

AU - Kegler, Mikolaj

AU - Beckmann, Pierre

AU - Hajal, Karl El

AU - Cernak, Milos

PY - 2022/6/24

Y1 - 2022/6/24

N2 - Methods for extracting audio and speech features have been studied since pioneering work on spectrum analysis decades ago. Recent efforts are guided by the ambition to develop general-purpose audio representations. For example, deep neural networks can extract optimal embeddings if they are trained on large audio datasets. This work extends existing methods based on self-supervised learning by bootstrapping, proposes various encoder architectures, and explores the effects of using different pre-training datasets. Lastly, we present a novel training framework to come up with a hybrid audio representation, which combines handcrafted and data-driven learned audio features. All the proposed representations were evaluated within the HEAR NeurIPS 2021 challenge for auditory scene classification and timestamp detection tasks. Our results indicate that the hybrid model with a convolutional transformer as the encoder yields superior performance in most HEAR challenge tasks.

AB - Methods for extracting audio and speech features have been studied since pioneering work on spectrum analysis decades ago. Recent efforts are guided by the ambition to develop general-purpose audio representations. For example, deep neural networks can extract optimal embeddings if they are trained on large audio datasets. This work extends existing methods based on self-supervised learning by bootstrapping, proposes various encoder architectures, and explores the effects of using different pre-training datasets. Lastly, we present a novel training framework to come up with a hybrid audio representation, which combines handcrafted and data-driven learned audio features. All the proposed representations were evaluated within the HEAR NeurIPS 2021 challenge for auditory scene classification and timestamp detection tasks. Our results indicate that the hybrid model with a convolutional transformer as the encoder yields superior performance in most HEAR challenge tasks.

KW - cs.SD

KW - cs.AI

KW - cs.LG

KW - eess.AS

M3 - Paper

Y2 - 13 December 2021 through 14 December 2021

ER -

ID: 337591641