As a hearing-impaired software engineer, I have built my own local-first tool that transcribes speech entirely on-device, in real time, word by word. It's my daily driver for transcribing meetings, interviews, etc. Because everything runs locally, I don't have to worry about privacy when transcribing meetings at work: all data stays on my machine. It's about as fast as Otter.ai, though there's definitely room for improvement in both UX and speed. The caveat is that it only works on MacBooks with Apple silicon and is fronted by a very simple TUI.
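For a rough sense of what "entirely local, word by word" looks like in practice, here is a minimal sketch using Apple's Speech framework with on-device recognition. This is just an illustration of the general approach, not my actual implementation:

```swift
import AVFoundation
import Speech

// Illustrative sketch: on-device streaming transcription on macOS.
// Assumes microphone and speech-recognition permissions have already been granted.
func startLiveTranscription() throws {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.supportsOnDeviceRecognition else {
        fatalError("On-device recognition is unavailable for this locale")
    }

    let request = SFSpeechAudioBufferRecognitionRequest()
    request.shouldReportPartialResults = true      // emit hypotheses as words are recognized
    request.requiresOnDeviceRecognition = true     // audio never leaves the machine

    let audioEngine = AVAudioEngine()
    let input = audioEngine.inputNode
    let format = input.outputFormat(forBus: 0)

    // Feed microphone buffers into the recognition request.
    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        request.append(buffer)
    }
    audioEngine.prepare()
    try audioEngine.start()

    // Redraw each partial hypothesis over the previous one (crude TUI-style output).
    _ = recognizer.recognitionTask(with: request) { result, error in
        if let result = result {
            print("\u{1B}[2K\r" + result.bestTranscription.formattedString, terminator: "")
        }
        if error != nil || result?.isFinal == true {
            audioEngine.stop()
            input.removeTap(onBus: 0)
        }
    }
}
```

The same shape applies if you swap in whisper.cpp or another local model: microphone buffers go into a streaming recognizer, partial results come back, and nothing is sent over the network.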
I am thinking of putting it up as an open source project on GitHub to gauge interest. If interest is high, commercializing it as a product would be a next step. Before I go too deep in that direction, I'm curious about the market demand. What does the HN community think about the demand for a local-first live transcription tool?