Hippocratic AI’s mission is to develop the first safety-focused Large Language Model (LLM) for healthcare. The company believes that a safe LLM can dramatically improve healthcare accessibility and health outcomes worldwide by bringing deep healthcare expertise to every human. No other technology has the potential to have this level of global impact on health.
The company was co-founded by CEO Munjal Shah, alongside a group of physicians, hospital administrators, healthcare professionals, and artificial intelligence researchers from El Camino Health, Johns Hopkins, Washington University in St. Louis, Stanford, Google, and Nvidia. Hippocratic AI has received a total of $120M in funding and is backed by leading investors, including General Catalyst, Andreessen Horowitz, Premji Invest, and SV Angel.
We are currently hiring an experienced Research Scientist to focus on Speech Technologies for our AI Healthcare Agents.
Design, develop, evaluate, and update data-driven models for speech-first applications.
Participate in research activities, including the application and evaluation of speech technologies in the medical domain.
Research and implement state-of-the-art (SOTA) models for conversational speech recognition, building from 0 to 1.
PhD with 3+ years of experience in speech recognition or a related field, or a Master's with 5+ years of hands-on experience with automatic speech recognition (ASR).
Experience designing and developing algorithms for accurate and efficient speech recognition in both streaming and non-streaming use cases.
Experience training, evaluating, and optimizing ASR models for accuracy, latency, and resource utilization.
Experience preprocessing and curating large speech datasets for model training.
Strong programming skills, with working knowledge of Python and C++.
Comfort working in a Linux/Unix command-line environment.
Team player with strong oral and written communication skills.
Experience building ASR solutions from 0 to 1, including setting up data pipelines, SOTA model architectures, and evaluation pipelines.
Hands-on experience with ESPnet, Kaldi, and PyTorch.
Experience with CUDA.
Experience leveraging LLMs to enhance speech recognition tasks.
Experience with neural/end-to-end (E2E) endpointer modeling.
Publications in tier-1 journals in the field of speech recognition or NLP.