
Ishiki Labs
Winter 2026
Building the Future of Multimodal AI
About Company
Current multimodal models can see and hear, but they talk when they shouldn't: they can't tell whether you're speaking to them or to someone else. We are building an AI that knows when to stay silent while still following your conversation, so it can assist in real time when you actually need it. Our first version, fern-0.1, provides real-time expert opinions on demand, instant task delegation, and zero interruptions, all as fast as ChatGPT voice and Gemini Live.
Active Founders
Co-founder & CTO of Ishiki Labs (W26). Previously worked on multi-modal AI and Orion AR glasses at Meta and research infra at Citadel Securities.
Co-founder & CEO of Ishiki Labs (W26). Previously a Research Scientist at Meta, first on the Llama team training multimodal LLMs and then in Reality Labs training a video assistant for smart glasses. PhD from Purdue University with 20+ publications at top conferences, including CVPR, NeurIPS, and ICASSP.

