Neogenint Intelligence
NEOGENINT · 001 · Brain-Inspired Intelligence

Ushering in a New Era of
Brain-Inspired Intelligence.

INN™ intuitive neural networks, BPU brain-inspired silicon, two-phase cooling, and liquid metal — a full stack built for the next generation of intelligence.

1.35M tokens/s
Inference
99.9%
Energy saved
<100ms
Latency
99.5%
Peak accuracy
Zhuhai · Hengqin
§ 001 Principles

Intelligence
isn't stacked.

Better intelligence comes from better first principles — rethinking every layer, from neurons to silicon to the way heat leaves the room.

  • 01

    Think like a brain

    INN™ isn't a bigger Transformer — it's a new architecture closer to biological intuition. Explainable, low-power, inference on CPU alone.

99.5% Accuracy
  • 02

    Silicon, remade for inference

    BPU brain-inspired silicon and wafer-scale systems, designed from first principles for sparse, event-driven computation — not as a GPU afterthought.

1.35M tokens/s
  • 03

    Let heat flow away

Two-phase liquid cooling and liquid-metal interfaces carry heat out of 45 kW cabinets silently — no high-power CDU, and PUE converging to 1.08.

200 W/mK
  • 04

    Efficiency is the new performance

    99.9% less energy isn't a marketing number — it means intelligence can live anywhere there's power, not just in hyperscale data centers.

1.08 PUE
  • 05

    Cut the cost of intelligence

    When inference runs on CPU and cooling needs no chiller loop, operational costs fall by 50% — not through compromise, but through better design.

50% Cost reduction
  • 06

    Academician-led research

    Research led by Chinese Academy of Engineering academicians, building INN™ from foundational theory — not engineering optimization, but scientific breakthrough.

INN™ Original arch.
§ 002 A Note

Neogenint · Zhuhai, China

2024 — 2026

For a decade, the story of intelligence has been a story about scale — bigger models, more GPUs, hotter rooms. We're not going to keep telling it.

We're starting over in three places: the algorithm, the compute, the cooling. An algorithm that behaves like a brain, not a fatter Transformer. Compute built for sparse, event-driven thought, not matrix multiplication. Heat carried away quietly, not pinned down by a building full of chillers.

These three have to happen together. Any one alone is an improvement. All three together is a generation change.

That's why we exist. Our name points to the Neogene: the period when much of life began to look recognizably modern.

— The Neogenint Team
§ 004 Scale

Not bigger.
An order smaller.

Drop energy by two orders of magnitude and the question changes — intelligence lives wherever power does, not only in hyperscale halls. Here's how we compare to the traditional approach across every dimension that matters.

Dimension · Traditional → Neogenint

  • Footprint: 42U rack → 3U wall-mount
    From a full row of racks to a single wall.
  • Power draw: 120 kW → 1.2 kW
    Runs on a standard wall outlet.
  • Cooling: CDU + building-scale cooling tower → zero auxiliary gear
    Heat leaves on its own — no escort needed.
  • Deployment: 18 months → 2 weeks
    From permits to inference — not a project cycle.

An order-of-magnitude gain isn't optimized into existence — it comes from redesigning every layer from first principles.

§ 004.5 Benchmarks

Numbers,
not claims.

INN™ accuracy on public scientific classification datasets, measured against traditional systems.

96.5%
Average accuracy
99.5%
Peak accuracy
+4.3%
Avg. lead vs. traditional
10×
Inference speed-up
  • Kaggle Diabetes: INN™ 89.7% · Traditional 85.2% · Lead +4.5%
  • Kaggle Heart Disease: INN™ 98.6% · Traditional 94.5% · Lead +4.1%
  • Kaggle MNIST: INN™ 98.2% · Traditional 96.8% · Lead +1.4%
  • Double Helix: INN™ 99.5% · Traditional 92.0% · Lead +7.5%

Tests run independently on public datasets. Accuracy figures represent INN™ vs. equivalent traditional classifiers on the same data.
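The summary figures above follow directly from the four per-dataset cards; a minimal sketch of that arithmetic (dataset names and percentages are the ones shown on this page):

```python
# Per-dataset accuracy (%) as listed in the benchmark cards: (INN, traditional).
results = {
    "Kaggle Diabetes":      (89.7, 85.2),
    "Kaggle Heart Disease": (98.6, 94.5),
    "Kaggle MNIST":         (98.2, 96.8),
    "Double Helix":         (99.5, 92.0),
}

inn = [r[0] for r in results.values()]
trad = [r[1] for r in results.values()]

avg_inn = sum(inn) / len(inn)               # 96.5 — the "Average accuracy" figure
avg_lead = avg_inn - sum(trad) / len(trad)  # 4.375 — the average lead vs. traditional
```

Note the exact average lead is 4.375 points; the headline figure rounds it to +4.3%.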

§ 005 In the field

Where it's
actually needed.

One small machine, one wall outlet, one decision that actually matters — that's what intelligence should look like.

  • 01 · Healthcare
    98.6%heart-disease detection

Explainable assistive diagnosis on imaging and records — where the doctor can ask the model why it decided that.

  • 02 · Life Sciences
    40%cycle reduction

    Inference that keeps pace with wet-lab throughput, compressing discovery cycles from months to weeks.

  • 03 · Finance
    92%signal accuracy

    Sparse models resist overfitting in non-stationary markets and return auditable decisions with calibrated confidence.

  • 04 · Data Centers
    1.08PUE

    Thermal as a first-class concern — the cabinet carries its own heat away quietly, no building-scale chiller loop required.

  • 05 · Manufacturing
    99%defect recall

    On-line, on-device quality inspection — CPU-only, no accelerator card per production line.

  • 06 · Smart City
    35%energy reduction

    Models that run at the intersection, the substation, the service hall — not every decision round-tripped to the cloud.

  • 07 · E-commerce
    conversion lift

    Real-time recommendations and fraud detection decided in milliseconds — not every transaction round-tripped to a remote data center.

  • 08 · Education
    94%early-intervention accuracy

    Explainable learning-path assessment — the teacher knows why the model flagged a student for extra support, not just that it did.

§ 006 What's next

Want to see how it
runs in your room?

Zhuhai · Hengqin · NEOGENINT · 001 — 2026 · lane_nie@neogenint.com