Neogenint Intelligence

Brain-inspired Processing.

Event-driven spiking neural networks with in-memory computing — neuromorphic chips and systems that far outpace GPU efficiency on tasks that match their paradigm.

  • 10× vs A100
  • 10⁹ neurons
  • ~100× efficiency gain
§ 001 Definition

Not another GPU. It's a brain.

BPU (Brain Processing Unit) refers to spike/event-driven neuromorphic computing chips and systems. Its core goal is to provide native hardware support for SNN neuron updates, synaptic event propagation, and event routing/scheduling — constructing scalable brain-inspired computing systems through asynchronous AER (Address-Event Representation) communication.

TrueNorth, Loihi, and more recent wafer-scale neuromorphic systems are all representatives of this class. On tasks matching its paradigm — event streams, sparse temporal signals, online learning — BPU far outpaces traditional GPU efficiency.

  • Event-driven: compute only on actual spikes
  • Sparse activation: skip wasted dense MAC sweeps
  • In-memory computing: moving data costs more than computing
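These principles can be made concrete with a toy event-driven update loop. The sketch below is illustrative only — a minimal leaky integrate-and-fire model in plain Python, not Neogenint's actual neuron model, parameters, or API:

```python
def lif_step(v, spikes_in, weights, v_th=1.0, leak=0.9):
    """One event-driven timestep of a toy leaky integrate-and-fire layer.

    v          : dict neuron_id -> membrane potential
    spikes_in  : list of presynaptic neuron ids that fired this step
    weights    : dict pre_id -> list of (post_id, weight) fan-out pairs

    Only neurons reached by an incoming spike are touched; silent neurons
    cost nothing — the event-driven principle. (For simplicity, leak is
    applied only to touched neurons; a real model leaks every neuron.)
    """
    touched = set()
    for pre in spikes_in:                    # iterate events, not a dense matrix
        for post, w in weights.get(pre, []):
            v[post] = v.get(post, 0.0) + w   # synaptic event propagation
            touched.add(post)
    fired = []
    for n in touched:
        v[n] *= leak                         # leaky integration
        if v[n] >= v_th:                     # threshold crossing emits a spike
            fired.append(n)
            v[n] = 0.0                       # reset after firing
    return fired

# one spike on neuron 0 fans out to neurons 1 and 2; only neuron 2 crosses threshold
v = {}
print(lif_step(v, [0], {0: [(1, 0.6), (2, 1.2)]}))  # → [2]
```

Note that the cost of the step scales with the number of spike events, not with the layer's full connection matrix.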

§ 002 Comparison

BPU vs GPU:
not a replacement, a complement.

If "computing efficiency" means effective work done per unit power, a BPU typically outperforms a GPU by a wide margin on tasks that match its paradigm.

Aspect             | BPU                                    | GPU
Computing paradigm | Event-driven, sparse activation        | Dense parallel, tensor operations
Energy efficiency  | Significant advantage in sparse scenarios | High efficiency in dense computing
Suitable tasks     | Event streams, sparse temporal signals | Dense tensors, matrix operations
Communication      | Asynchronous AER, on demand            | Global sync, batch transfer
Latency profile    | Ultra-low latency, real-time response  | Batch mode, queuing latency

Efficiency depends on whether the task can be expressed as a sparse event-driven SNN and efficiently mapped to BPU neuron/synapse models.
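A back-of-the-envelope operation count shows where the sparse advantage comes from. The layer size and the 2% spike rate below are illustrative assumptions, not measured BPU figures:

```python
def op_counts(n_pre: int, n_post: int, spike_rate: float) -> tuple[int, int]:
    """Operation count for one timestep of a fully connected layer.

    dense_macs : multiply-accumulates a dense sweep performs regardless
                 of activity (GPU-style)
    event_ops  : synaptic updates when only firing neurons propagate
                 (event-driven style); spike_rate is the fraction of
                 presynaptic neurons that fire this step
    """
    dense_macs = n_pre * n_post
    event_ops = int(n_pre * spike_rate) * n_post
    return dense_macs, event_ops

dense, sparse = op_counts(n_pre=4096, n_post=4096, spike_rate=0.02)
# at 2% activity, the event-driven path performs roughly 50x fewer synaptic ops
```

The ratio is simply the inverse of the spike rate: the sparser the activity, the larger the event-driven saving — which is why the advantage holds only on tasks whose signals really are sparse.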

§ 003 Product Line

Three forms,
one brain.

From PCIe accelerator card to wafer-scale server — choose the right BPU form factor for your task scale.

01
Entry

BPU PCI Compute Card

Single or few BPU chips packaged as cards / development platforms, connected via PCIe to host. Easy integration into existing server workflows with low barriers.

  • PCIe interface, easy integration
  • Low development barrier
  • Suitable for prototyping
02
Mid-tier

BPU Wafer Computing Module

BPU chiplet modular packaging, flexibly integrable into various computing platforms for higher-density brain-inspired compute.

  • Modular design
  • Flexible integration
  • High-density computing
  • 400M+ neuron simulation
03
Flagship

Tianqin Xinhai · Wafer Server

Wafer-scale neuromorphic system with on-wafer short-distance high-density interconnects. Significant advantages in large-scale event communication and energy efficiency.

  • Billion-scale neurons
  • Trillion-scale synapses
  • 10×+ vs NVIDIA A100
  • Near-biological efficiency
How to choose

  • PCIe version: prototyping and small-scale apps; easy workflow integration
  • Module version: mid-scale apps; flexible integration, customizable
  • Wafer-scale: ultra-large brain simulation; billion-neuron scale, near-biological efficiency

§ 004 Deep dive

Tianqin
Xinhai.

A breakthrough wafer-scale neuromorphic computing system — interconnected on a single wafer into a unified event-driven compute network.

What is wafer-scale computing?

Wafer-scale BPU computing interconnects numerous brain-inspired chips (or chiplets) on a single wafer into a unified event-driven system. Computation remains fundamentally SNN neuron state updates and synaptic event propagation — but scaled to "wafer-level neuron-synapse totals." Events transmit at high speed via asynchronous AER, with hierarchical timesteps or GALS synchronization ensuring temporal consistency across chiplets.
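AER itself is simple: each spike travels across the wafer as a small packet carrying little more than the address of the neuron that fired. A minimal sketch follows, with field widths chosen purely for illustration rather than taken from any published BPU packet format:

```python
def aer_pack(neuron_addr: int, timestamp: int) -> int:
    """Pack one spike event into a 32-bit word: a 20-bit neuron address
    plus a 12-bit wrapping timestamp (field widths are illustrative)."""
    assert 0 <= neuron_addr < (1 << 20)
    return (neuron_addr << 12) | (timestamp & 0xFFF)

def aer_unpack(word: int) -> tuple[int, int]:
    """Recover (neuron_addr, timestamp) from a packed event word."""
    return word >> 12, word & 0xFFF

# a spike from neuron 12345 at (wrapped) time 77 travels as one small word
word = aer_pack(12345, 77)
assert aer_unpack(word) == (12345, 77)
```

Because only addresses of firing neurons are transmitted, interconnect traffic scales with spike activity rather than with network size — the property that on-wafer high-density links then exploit.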

Performance breakthrough

  • Scale: 10⁹ neurons
  • vs A100: 10×+
  • Efficiency: ~100×
  • Event latency: < 1 ms
01

High-density interconnect

On-wafer short-distance high-density interconnects replace PCB-level long connections, significantly reducing bandwidth, latency and power penalties.

02

Ultra-high efficiency

Brings large-scale SNN and brain simulations closer to biological system efficiency in power-latency metrics.

03

Event-driven architecture

Events transmitted at high speed via asynchronous AER, with GALS synchronization ensuring temporal consistency.

04

Brain-scale simulation

Supports near-brain-scale parallel spiking computation with billion-neuron parallel processing.

§ 005 Use Cases

An event-driven world.

BPU is best suited for scenarios where inputs are naturally event streams or sparsifiable, and decisions depend heavily on temporal structure.

01

Brain simulation research

Large-scale neuroscience circuit simulation with billion-neuron parallel processing.

02

DVS event camera

Event-based visual perception processing with ultra-low latency real-time response.

03

Low-power edge AI

Ultra-low latency real-time control and online learning for IoT and embedded scenarios.

04

Spiking sensor fusion

Radar / sonar / tactile sensor integration with unified multi-modal event stream processing.

Common thread: strong requirements for low latency, low power, sparse temporal processing, or online plasticity.

§ 006 Hardware

From card,
to server.

Neogenint Brain-Inspired Computing Acceleration Card LBM212

Built on the self-developed LYRArc-II memory-computing fusion architecture, the LBM212 supports BI-Link card-to-card interconnect expansion, full-range neuron connectivity, and variable compute precision (FP32/FP16/INT8), delivering high flexibility, high processing efficiency, high interconnect bandwidth, and ultra-low communication latency.

  • Self-developed LYRArc-II memory-computing fusion processing architecture
  • Supports BI-Link brain-inspired computing card interconnect expansion
  • Supports full-range neuron connections
  • Supports variable computing precision (FP32/FP16/INT8)
  • High flexibility, high processing efficiency, high interconnect bandwidth, and ultra-low communication latency
  • Supports up to 26 million neuron simulation computing
  • Supports event-driven computing and sparse computing
  • Supports microcode-level instruction reconfiguration for brain-inspired computing
  • Supports brain-inspired neural network training and inference

Neogenint Brain-Inspired Wafer Computing Subsystem Module LBW2216

Built on the self-developed LYRArc-II memory-computing fusion architecture, with self-developed integrated assembly of compute, power delivery, cooling, and interconnect, the LBW2216 supports BI-Link system-level interconnect expansion and variable compute precision (FP32/FP16/INT8).

  • Self-developed LYRArc-II memory-computing fusion processing architecture
  • Self-developed integrated assembly technology for computing, power supply, cooling, and interconnect
  • Supports BI-Link system-level expansion interconnect
  • Supports variable computing precision (FP32/FP16/INT8)
  • High flexibility, high processing efficiency, high interconnect bandwidth, and ultra-low communication latency
  • Supports over 400 million neuron simulation computing
  • Supports event-driven computing and sparse computing
  • Supports microcode-level instruction reconfiguration for brain-inspired computing
  • Supports brain-inspired neural network training and inference


Neogenint High-Density Brain-Inspired Computing Server BPSC-II

Self-developed ultra-high-density integration (16 LBM212 BPU acceleration cards in 4U) with self-developed brain-inspired vascular phase-change liquid cooling; the BPSC-II supports BI-Link card interconnect and variable compute precision (FP32/FP16/INT8), with operating noise below 65 dB.

  • Self-developed ultra-high computing density integrated technology (4U with 16 LBM212 BPU acceleration cards)
  • Self-developed brain-inspired vascular phase-change liquid cooling technology
  • Supports BI-Link brain-inspired computing card interconnect
  • Supports variable computing precision (FP32/FP16/INT8)
  • Operating noise below 65dB (approximately office environment noise level)
  • Supports over 400 million neuron simulation computing
  • Supports event-driven computing and sparse computing
  • Supports microcode-level instruction reconfiguration for brain-inspired computing
  • Supports brain-inspired neural network training and inference

§ 007 What's next

Want to solve
real problems with neuromorphic computing?

Zhuhai · Hengqin · NEOGENINT · BPU · 2026