
A brain-closer architecture than Transformer. Three-valued logic, sparse activation, CPU-only inference. Explainability built in, not bolted on.
Traditional deep learning is a black box — you feed it data, it spits out predictions, and you never know why. INN changes that from first principles.
INN adopts a "structural information extraction, logical networking" mechanism that integrates symbolic computation with a data-driven, brain-inspired three-valued logic model. It can reason logically like a human and explain its own decision-making process.
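The text doesn't specify which three-valued logic INN uses. As an illustration only, here is a minimal sketch of Kleene's strong three-valued logic (K3), a common choice in which a third "unknown" value propagates through AND, OR, and NOT; INN's actual semantics may differ.

```python
# Sketch of Kleene's strong three-valued logic (K3): values are
# True, False, and None (unknown). This is one plausible reading of
# "three-valued logic", not necessarily INN's implementation.

def k3_not(a):
    # Negating an unknown value stays unknown.
    return None if a is None else (not a)

def k3_and(a, b):
    # False dominates AND; otherwise unknown propagates.
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def k3_or(a, b):
    # True dominates OR; otherwise unknown propagates.
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False
```

The practical appeal for explainability is that "unknown" is a first-class answer: a chain of inferences can report that it lacks evidence instead of forcing a binary verdict.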
This isn't a patch on Transformer. It's a new brain — from neurons to network structure, from activation functions to inference paths, every layer designed for explainability.
INN is a new brain architecture — every layer redesigned, from neurons to networks, from activation to inference.
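"Sparse activation" (mentioned above) isn't specified further in this text. One common mechanism is top-k masking, in which only the k strongest pre-activations fire and the rest are zeroed; the sketch below shows that idea, purely as an illustration and not as INN's actual design.

```python
# Illustrative top-k sparse activation: keep the k largest values,
# zero the rest. One common form of sparse activation; not necessarily
# the mechanism INN uses. Ties at the threshold are all kept.

def sparse_topk(values, k):
    if k <= 0:
        return [0.0] * len(values)
    threshold = sorted(values, reverse=True)[k - 1]
    return [v if v >= threshold else 0.0 for v in values]

out = sparse_topk([0.1, 0.9, -0.3, 0.5], 2)  # only the two largest survive
```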
Extract patterns with minimal data, not millions of labels.
Learn new knowledge without forgetting, no catastrophic interference.
Process text, images, and speech in a unified representation.
No GPU cluster required — runs efficiently on ordinary CPUs.
3-node BIE cluster, CPU-only, no GPU required
10 billion tokens trained in 30 hours on a single CPU node, a workload that traditionally calls for a cluster of multiple high-end GPUs.
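The quoted figure implies a sustained training rate that can be computed directly:

```python
# Sustained training throughput implied by "10 billion tokens in 30 hours".
tokens = 10_000_000_000
seconds = 30 * 3600          # 30 hours
rate = tokens / seconds      # tokens per second
print(round(rate))           # → 92593, i.e. about 92.6k tokens/s on one node
```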
Public datasets, reproducible results. Not cherry-picked lab cases — real classification tasks.
Medical diagnosis
Disease prediction
Image recognition
Scientific classification
INN isn't just a faster model — it redefines the baseline across precision, transparency, energy, and generality.
Consistently outperforms comparable models on standard benchmarks — high accuracy, minimal error.
Every inference step is traceable. The logic chain is visible — no more black box.
Native CPU inference. No GPU cluster needed — power draw drops by orders of magnitude.
Full-system inference throughput of 12.6M tokens/s. Response latency under 100ms.
Symbolic logic and numerical computation in a single framework — precision meets generalization.
From NLP to image recognition, classification to multimodal — one architecture, many tasks.
An AI development framework for building and deploying INN models. It generates task-specific learning networks, determining the network's nodes, connection structure, and per-node computation models.
Provides intuitive neural network data storage and metadata management on top of the underlying CPU, GPU, and BPU hardware resources. This includes a KV storage engine, a metadata dictionary, data-access interfaces for neuron nodes and network structures, and network communication interfaces for cross-process or multi-machine deployments.
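The storage layer's actual API is not shown here. As a rough illustration of the kind of KV-plus-metadata interface described, here is a sketch in which every name (`NeuronStore`, `put_node`, and so on) is invented for illustration and is not INN's real API.

```python
# Hypothetical sketch of a neuron-node KV store with a metadata dictionary,
# illustrating the interfaces described above. All names are invented.

class NeuronStore:
    def __init__(self):
        self._kv = {}    # KV engine: node_id -> node payload (weights, state)
        self._meta = {}  # metadata dictionary: node_id -> attributes

    def put_node(self, node_id, payload, **meta):
        # Store the node's data and its descriptive metadata together.
        self._kv[node_id] = payload
        self._meta[node_id] = meta

    def get_node(self, node_id):
        return self._kv.get(node_id)

    def describe(self, node_id):
        # Metadata lookup, e.g. layer, modality, connection degree.
        return self._meta.get(node_id, {})

store = NeuronStore()
store.put_node("n1", {"weights": [0.2, 0.8]}, layer=0, modality="text")
```

Keeping metadata separate from payloads is what lets tooling (such as the visualization tools below) inspect network structure without loading full node state.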
During task training, this layer extracts organizational relationships from use-case data and fits data annotations. Using the intuitive neural network's perception and cognition components, combined with filtering, classification, and other operators, it constructs the multi-directional, multi-scale neural network graph structures a task requires, built on the underlying neuron abstraction interfaces.
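One simple reading of "extracting organizational relationships from use-case data" is building a weighted graph of feature-to-label edges from labeled examples. The sketch below shows only that idea; the real INN construction is certainly more elaborate, and the function name and data shapes here are assumptions.

```python
# Hypothetical sketch: derive a directed feature -> label graph from
# labeled use-case data, with edge weight = co-occurrence count.
# Illustrative only; not INN's actual graph-construction algorithm.

from collections import defaultdict

def build_graph(examples):
    """examples: list of (feature_set, label) pairs."""
    edges = defaultdict(int)
    for features, label in examples:
        for f in features:
            edges[(f, label)] += 1  # strengthen the edge on each co-occurrence
    return dict(edges)

graph = build_graph([
    ({"fever", "cough"}, "flu"),
    ({"fever", "rash"}, "measles"),
])
```

A graph built this way keeps every inference step inspectable: a prediction can be traced back to the specific edges (and the examples behind them) that supported it.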
Provides auxiliary tools and management interfaces for the full lifecycle of intuitive neural network training and inference, enabling rapid construction of model applications. These mainly include visualization tools for network connection structures, data preprocessing and loading tools, and command-line interfaces for training and inference.
As a general-purpose artificial intelligence learning algorithm, the intuitive neural network enables the Flint platform to support intelligent application development across scenarios: NLP dialogue systems, traditional classification tasks (e.g., Kaggle competitions), and multimodal applications combining text and images.
Balances ease of use with performance: developers can get started quickly and run workloads efficiently.
Each component can be independently replaced or extended, supporting flexible system architecture design.
Supports rapid iterative development with seamless transition from prototype to production.
Built-in network visualization tools that make complex neural networks understandable.
The smallest INN model, available for direct download. This entry-level edition of the INN brain-inspired large model lets you experience the core features of the INN architecture.
Note: INN inside™ PRO and INN inside™ Ultra are only available with BIE PRO and BIE Ultra hardware platforms.