Your friendly conversational AI companion
A voice that understands Arabic, as it's spoken.
TTS, STT, dubbing, and live voice in one studio. Ship with Python or Node on the same API.
Agents that actually talk.
Real-time voice agents your customers can interrupt.
Sub-90ms first-token latency, full-duplex transport, and persistent memory across sessions. Drop into telephony, web, or apps in 4 lines of code.
Characters who improvise.
Voices, vision, and memory for game worlds.
Per-character emotion vectors, in-engine plugins for Unity and Unreal, lip-sync, and prosody control. Build NPCs that surprise the player — every time.
Conversations that last.
Long-session voice for embodied systems.
6+ hour sessions with no degradation. Agentic tool use, on-device inference, and end-of-shift memory commits. Built for fleets that talk for themselves.
Built for the work voice actually does.
Built in Riyadh.
Spoken everywhere.
8 Arabic dialects. 32 languages. Hand-tuned in the cities that speak them — not approximated from a transcript pile.
For 70 years, machines listened.
Now they're finally
ready to speak.
Voice was the most human interface and the worst computer one — stilted, slow, embarrassed by its own latency. Nur was built by people who'd given up on it, then refused to. We are open-source at the core, fluent in 32 languages, and obsessed with the small space between a question and an answer.
Hear it. Then ship it.
Pick a scenario, pick a voice, generate. Powered by the same model that ships in production.
Releases & research.
Honest meters. No platform tax.
Same model and SDK at every tier. Open-source core stays free, forever.