Lightmatter

Lightmatter is a US photonic-chip startup (founded 2017, MIT spinout) building silicon photonic interconnects and photonic inference accelerators for AI datacenters. It has raised ~$850M across multiple rounds. Its Passage optical interconnect (shipped 2024) connects GPUs at bandwidth and latency conventional copper can't match; its Envise photonic AI chip targets large-batch inference workloads. It is the current leader among US-domiciled photonic-computing companies.

**Lightmatter** is a US-domiciled photonic-computing startup founded in 2017 as an MIT spinout. It is the highest-funded independent photonic-AI company in the world (as of 2026), targeting both **optical interconnects** for AI datacenters and **photonic compute accelerators** for large-batch inference.

## Founders

- **Nick Harris** (CEO) — PhD from MIT on silicon photonics.
- **Darius Bunandar** (CTO) — co-author on foundational MIT photonic-chip papers.
- **Thomas Graham** (COO).

The company emerged from MIT's Dirk Englund lab, a major academic contributor to silicon photonics.

## Funding

- **~$850M raised across multiple rounds** as of 2025.
- Lead investors: GV (Google Ventures), Matrix Partners, Viking Global, GlobalFoundries, SIP Global, and others.
- Valued at $4.4B in its 2024 Series D.

## Products

### Passage — optical interconnect (shipped 2024)

A silicon photonic interposer that sits under compute dies (GPUs, ASICs) and replaces electrical chip-to-chip interconnect with optical links. Key metrics:

- 5-10× higher bandwidth density than electrical equivalents.
- Lower latency and power than existing optical interconnect approaches (e.g., pluggable optics feeding InfiniBand switches).
- Works with standard CMOS fabrication; requires no exotic materials.
- Initial customers: hyperscaler datacenter operators scaling up GPU clusters.

Passage is the more commercially mature product. Interconnect is a less ambitious engineering target than full photonic compute, and the market is large: every datacenter GPU cluster needs interconnect.

### Envise — photonic AI accelerator

A photonic matrix-multiplication chip designed as an inference accelerator. Architecture:

- 512×512 analog photonic matrix multiplier per tile.
- Multiple tiles per chip.
- Standard PCIe card form factor.
- Targeted at transformer inference, where matrix multiplications dominate and weights can be reused across large batches.
- Software stack exposes a PyTorch-compatible API — similar in role to the Q.ANT Photonic AI Processor (NPU 2, 2026).
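The trade-off an analog photonic matmul tile makes can be sketched numerically. The toy model below is an illustration of the general principle (limited-precision weights programmed into an optical mesh, activations streamed through, noisy analog readout), not Lightmatter's actual design; tile size, bit width, and noise level are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def photonic_matmul(x, w, bits=8, noise_std=1e-3):
    """Toy model of one analog photonic matmul tile.

    Hypothetical behavior: weights are quantized to the mesh's
    programmable precision, activations stream through optically,
    and the analog readout adds Gaussian noise. All parameters are
    illustrative, not Envise specs.
    """
    levels = 2 ** bits - 1
    # Quantize weights to the assumed programmable precision.
    scale = np.abs(w).max() / (levels / 2)
    w_q = np.round(w / scale) * scale
    y = x @ w_q
    # Additive readout noise, scaled to the output magnitude.
    y += rng.normal(0.0, noise_std * np.abs(y).max(), size=y.shape)
    return y

# One 512x512 tile, batch of 64 activation vectors: batching
# amortizes the one-time cost of programming weights into the mesh.
x = rng.standard_normal((64, 512)).astype(np.float32)
w = rng.standard_normal((512, 512)).astype(np.float32)

exact = x @ w
approx = photonic_matmul(x, w)
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"relative error: {rel_err:.4f}")
```

The point of the sketch: at ~8-bit effective precision the relative error stays around a percent, which is tolerable for inference but not for training — one reason photonic accelerators target inference first.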
## Competitive landscape

| Company | Country | Focus | Status |
|---|---|---|---|
| **Lightmatter** | US | Optical interconnect + photonic compute | Commercial; Passage shipping |
| **Q.ANT** | Germany | Photonic AI accelerator | Deployed at LRZ + JSC |
| **Celestial AI** | US | Photonic fabric (compute + memory) | Pre-commercial |
| **Ayar Labs** | US | Optical I/O for chip-to-chip links | Commercial; Intel partnership |
| **Lightelligence** | US (China-linked) | Photonic inference | Pre-commercial |
| **Salience Labs** | UK (Oxford) | Photonic in-memory compute | Early research-to-product |

The industry is at the "one or two firms might break out" stage. Lightmatter's interconnect-first strategy is the commercially shrewdest: sell the less ambitious product first to fund the more ambitious one.

## Challenges

- **Software ecosystem**: users need to port workloads. Lightmatter has a partial PyTorch interface, but deep CUDA interoperability remains Nvidia's moat.
- **Memory Wall**: photonic compute still bottlenecks on electrical DRAM. Until Optical SRAM and the Photonic Latch mature, photonic accelerators remain specialized coprocessors.
- **Market education**: hyperscalers and enterprises are cautious about novel hardware categories.
- **Manufacturing scale**: silicon photonics needs the same leading-edge fab capacity as logic silicon, and supply is constrained.

## Significance

Lightmatter is the company most likely to produce the first **commercially dominant** US-made photonic AI accelerator. If photonic computing becomes a meaningful fraction of datacenter AI compute over the next 5-10 years, Lightmatter is positioned to anchor that ecosystem — similar to Nvidia's position in GPUs in the early 2010s.

If interconnect is where the early commercial value lands (as seems likely), Passage is effectively the NVIDIA NVLink competitor for non-Nvidia clusters.

## Related

- Q.ANT Photonic AI Processor (NPU 2, 2026) — European commercial-deployment counterpart.
- Memory Wall — the structural constraint all photonic compute bumps into.
- Optical SRAM and the Photonic Latch — the research frontier that would remove that constraint.
- High Bandwidth Memory (HBM) — the current-generation memory photonic compute still depends on.
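The Memory Wall constraint above can be made concrete with a back-of-envelope roofline calculation: attainable throughput is the lesser of peak compute and memory bandwidth times arithmetic intensity. The peak-compute and bandwidth figures below are hypothetical round numbers, not specs of any Lightmatter (or other vendor) product.

```python
# Roofline sketch: why DRAM bandwidth, not optical compute, bounds
# throughput for matmuls whose weights stream from off-chip memory.
# Both machine parameters are illustrative assumptions.

peak_compute_tops = 400.0   # hypothetical photonic peak, TOPS
hbm_bandwidth_tbs = 3.0     # hypothetical HBM bandwidth, TB/s

def attainable_tops(arith_intensity_ops_per_byte):
    """Roofline model: min(peak compute, bandwidth * intensity)."""
    return min(peak_compute_tops,
               hbm_bandwidth_tbs * arith_intensity_ops_per_byte)

# Batch-1 inference uses each int8 weight byte once: ~2 ops/byte
# (multiply + add), deep in the bandwidth-bound region. Batch 128
# reuses each weight 128x, reaching the compute-bound region.
for batch in (1, 8, 128):
    intensity = 2.0 * batch
    print(f"batch {batch:4d}: {attainable_tops(intensity):6.1f} TOPS")
```

Under these assumed numbers, batch-1 inference reaches only 6 of 400 TOPS, while batch 128 saturates the photonic compute — the arithmetic behind both the large-batch targeting of Envise and the interest in Optical SRAM as a way to lift the bandwidth ceiling.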


This knowledge chunk is from Philosopher's Stone (https://philosophersstone.ee), an open knowledge commons, with 87% confidence.