Llasa: Llama-Based Speech Synthesis

CalmStorm | 158 points

Odd that the page doesn't seem to link to either:

paper: https://arxiv.org/abs/2502.04128

github: https://github.com/zhenye234/LLaSA_training

ks2048 | 18 hours ago

LLaSA is a simple framework for speech synthesis that employs a single-layer vector quantizer (VQ) codec and a single Transformer architecture to fully align with standard LLMs such as LLaMA.
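
In other words, the codec's speech tokens just extend the LLM's text vocabulary, and TTS becomes ordinary next-token prediction over the merged sequence. A toy sketch of that idea (not the authors' code; all vocabulary sizes and model dimensions below are made up for illustration):

```python
import torch
import torch.nn as nn

TEXT_VOCAB = 32_000     # e.g. a Llama-style text tokenizer (assumption)
SPEECH_VOCAB = 65_536   # single-layer VQ codebook size (assumption)
VOCAB = TEXT_VOCAB + SPEECH_VOCAB
MAX_LEN = 2048

class TinyLlasaLM(nn.Module):
    """One decoder-only Transformer over text + speech tokens."""
    def __init__(self, d_model=512, n_head=8, n_layer=4):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, d_model)
        self.pos = nn.Embedding(MAX_LEN, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_head, batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layer)
        self.lm_head = nn.Linear(d_model, VOCAB, bias=False)

    def forward(self, tokens):  # tokens: (B, T) ids from the merged vocab
        T = tokens.size(1)
        x = self.tok(tokens) + self.pos(torch.arange(T, device=tokens.device))
        causal = nn.Transformer.generate_square_subsequent_mask(T)
        h = self.blocks(x, mask=causal, is_causal=True)
        return self.lm_head(h)  # (B, T, VOCAB) next-token logits

# A training pair is just [text tokens] + [speech codec tokens]; the loss
# is plain next-token cross-entropy, exactly as for a text-only LLM.
model = TinyLlasaLM()
text = torch.randint(0, TEXT_VOCAB, (2, 16))
speech = torch.randint(TEXT_VOCAB, VOCAB, (2, 64))
seq = torch.cat([text, speech], dim=1)
logits = model(seq[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB), seq[:, 1:].reshape(-1))
```

At inference you would feed the text prefix, sample speech tokens autoregressively, and hand them to the codec's decoder to get a waveform; the single-layer VQ is what keeps this a flat token stream instead of the multi-codebook hierarchies other codec LMs juggle.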

CalmStorm | 21 hours ago

I can't wait to see this integrated into Open WebUI! These sound amazing.

StevenNunez | 20 hours ago

the long 'uuuuhhhhhhh' from some of the lesser models is killing me.

mring33621 | 19 hours ago

> employs a single-layer vector quantizer (VQ) codec and a single Transformer architecture to fully align

I really wish that when new models were released, they came with a diagram of all the layers and the tensor input and output sizes at each layer, with zoom in/out capabilities via D3.js or whatever visualization framework if needed. Every single layer should be on there with its input and output sizes.

These one-sentence descriptions and approximate block diagrams with arrows pointing at each other are never enough to understand how something is actually implemented.
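
In the meantime, for any PyTorch model you can get most of this yourself with forward hooks: attach one to every leaf module and print input/output shapes during a dummy forward pass. A rough sketch (the demo model at the bottom is arbitrary; swap in any loaded checkpoint and a matching input):

```python
import torch
import torch.nn as nn

def shape_of(x):
    # Render tensor shapes; fall back to the type name for non-tensors.
    if isinstance(x, torch.Tensor):
        return tuple(x.shape)
    if isinstance(x, (list, tuple)):
        return [shape_of(t) for t in x]
    return type(x).__name__

def trace_shapes(model, dummy_input):
    hooks = []
    def report(mod, inputs, output, name):
        print(f"{name:15s} {type(mod).__name__:12s} "
              f"in={shape_of(inputs)} out={shape_of(output)}")
    for name, mod in model.named_modules():
        if not list(mod.children()):  # leaf modules only
            hooks.append(mod.register_forward_hook(
                lambda m, i, o, n=name: report(m, i, o, n)))
    with torch.no_grad():
        model(dummy_input)
    for h in hooks:
        h.remove()

# Arbitrary demo model, just to show the per-layer shape trace.
demo = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 10),
)
trace_shapes(demo, torch.randn(1, 3, 32, 32))
```

Libraries like torchinfo print roughly the same table off the shelf, but plain hooks work even on models those tools choke on. It's still no substitute for authors shipping a real interactive diagram, though.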

dheera | 18 hours ago