outer_spec@lemmy.blahaj.zone to 196@lemmy.blahaj.zone · 1 year ago
Ruletanic (image, 7 comments, 177 upvotes)
JDubbleu · 1 year ago
I’ve hosted one on a Raspberry Pi and it took at most a second to process and act on commands. Basic speech-to-text doesn’t require massive models and has become much less compute-intensive in the past decade.
Norah - She/They@lemmy.blahaj.zone · 1 year ago
Okay, well, I was running faster-whisper through Home Assistant.