Cross-GPU Verified on Kaggle

4.4 MILLIONx

Faster than mpmath. EXACT results.

NumPy gives WRONG determinant signs on Hilbert matrices. VLA gives CORRECT ones.

4.4M

Speedup vs mpmath

4

NumPy Wrong Signs

0

VLA Wrong Signs

v6.3.6

Latest Release

VLA vs mpmath Speedup

Click any bar for details. mpmath is the gold standard for exact arithmetic.

| Size    | mpmath (CPU) | VLA (GPU) | Speedup    |
|---------|--------------|-----------|------------|
| 64×64   | 0.25 s       | 26 μs     | 9,365×     |
| 128×128 | 2.0 s        | 40 μs     | 49,979×    |
| 256×256 | 18.9 s       | 44 μs     | 433,926×   |
| 512×512 | 3.1 min      | 41 μs     | 4,455,047× |

mpmath is NEVER reproducible

We tested mpmath at 15, 50, 100, and 500 decimal digits of precision. Each precision setting produced a different result. VLA produces identical checksums every time.

Benchmarked on Tesla T4 (Kaggle), v6.3.6, March 2026

NumPy Gets WRONG Answers

Hilbert matrices are a classic ill-conditioned test: their determinant is ALWAYS positive, yet NumPy returns negative values for several sizes from n = 14 on.

| n  | NumPy det(H) | NumPy sign | VLA sign | NumPy correct? |
|----|--------------|------------|----------|----------------|
| 10 | 2.16e-53     | +          | +        | ✓              |
| 12 | 2.72e-78     | +          | +        | ✓              |
| 14 | -3.44e-106   | −          | +        | ✗              |
| 15 | -2.43e-122   | −          | +        | ✗              |
| 16 | -4.93e-135   | −          | +        | ✗              |
| 18 | 3.84e-164    | +          | +        | ✓              |
| 20 | -2.23e-194   | −          | +        | ✗              |
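The positivity claim is easy to verify independently. VLA's own API is not shown on this page, so the sketch below uses only Python's `fractions.Fraction` to compute Hilbert determinants by exact Gaussian elimination, with no rounding at any step; every result comes out strictly positive, as the table requires.

```python
from fractions import Fraction

def hilbert(n):
    # H[i][j] = 1/(i+j+1); every entry is an exact rational
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def det_exact(a):
    """Determinant by fraction-exact Gaussian elimination (no rounding)."""
    a = [row[:] for row in a]
    n = len(a)
    det = Fraction(1)
    for k in range(n):
        # any nonzero pivot works, since the arithmetic is exact
        piv = next(r for r in range(k, n) if a[r][k] != 0)
        if piv != k:
            a[k], a[piv] = a[piv], a[k]
            det = -det
        det *= a[k][k]
        for r in range(k + 1, n):
            f = a[r][k] / a[k][k]
            for c in range(k, n):
                a[r][c] -= f * a[k][c]
    return det

for n in (10, 14, 20):
    print(n, det_exact(hilbert(n)) > 0)  # True for every n
```

Exact rational arithmetic is slow, but it settles the sign question definitively: the sign flips in the NumPy column are artifacts of FP64 rounding, not properties of the matrices.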

4

NumPy wrong signs

n = 14, 15, 16, 20

0

VLA wrong signs

All positive, as mathematically required

For quantum simulation, financial modeling, and scientific computing: wrong signs can be catastrophic.

Cross-GPU Reproducibility

Same checksum on completely different architectures. This is unprecedented.

Matrix Multiply Checksums (SHA-256)

32x32 Matrix

ca242dbb106174d4a2f637d77e2d8cf2fd4b9c57c10139e76475fdafed5a3622

64x64 Matrix

f5eac2fd06b2b14bcb26cac85a9ef688fdfa78a088a9a916eedf47ece344eae7

128x128 Matrix

d8e7492022a1a18b9101e71f93a21f7bf5f1ea22c0b785efcf3e48879361d83c

BIT-IDENTICAL ACROSS GPUs

RTX 4070 (sm_89) and Tesla T4 (sm_75) produce identical results

100%

Reproducible

2

GPU architectures verified

0

Bit differences
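The checksum scheme itself is straightforward to reproduce. As an illustrative stdlib-only sketch (not VLA's actual pipeline), hash the raw IEEE-754 bytes of a result matrix: any single-bit difference in any entry changes the SHA-256 digest, so matching digests across GPUs mean bit-identical results.

```python
import hashlib
import struct

def checksum(matrix):
    """SHA-256 over the raw IEEE-754 bytes of a row-major float matrix.
    A one-bit difference in any entry yields a different digest."""
    h = hashlib.sha256()
    for row in matrix:
        h.update(struct.pack(f"<{len(row)}d", *row))
    return h.hexdigest()

def matmul(a, b):
    # plain triple-loop multiply; a fixed summation order gives a fixed
    # rounding order, hence a deterministic result on one machine
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

a = [[1.0, 2.0], [3.0, 4.0]]
print(checksum(matmul(a, a)))
```

Cross-device reproducibility is the hard part: the digest only matches across architectures when the reduction order (and therefore every rounding step) is identical on both, which is the property the table above demonstrates.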

Exact Linear Algebra

New in v6.3: Exact determinant, inverse, solve, rank, null space for ANY matrix size.

Exact Solve

0

Residual

A @ x = b exactly

Exact Inverse

I

A @ inv(A)

True identity matrix

Exact Null Space

0

A @ v

True null vectors
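What "residual exactly 0" means can be shown without VLA's API (which this page does not list): solve over the rationals and the residual A @ x − b is the integer zero, not a small float. A minimal Gauss-Jordan sketch with `fractions.Fraction`:

```python
from fractions import Fraction

def solve_exact(A, b):
    """Solve A x = b over the rationals by Gauss-Jordan elimination."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(b[i])]
         for i, row in enumerate(A)]
    for k in range(n):
        piv = next(r for r in range(k, n) if M[r][k] != 0)
        M[k], M[piv] = M[piv], M[k]
        p = M[k][k]
        M[k] = [v / p for v in M[k]]          # normalize the pivot row
        for r in range(n):
            if r != k and M[r][k] != 0:
                f = M[r][k]
                M[r] = [v - f * w for v, w in zip(M[r], M[k])]
    return [row[n] for row in M]

A = [[4, 7], [2, 6]]
b = [1, 1]
x = solve_exact(A, b)
residual = [sum(Fraction(A[i][j]) * x[j] for j in range(2)) - b[i]
            for i in range(2)]
print(x, residual)  # residual is exactly [0, 0]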

Quantum simulation requires unitarity. VLA guarantees U @ U† = I EXACTLY.

Real-World Impact

Quantum Computing

1000+ Gates

Unitarity preserved perfectly

FP64: Accumulating error
VLA: U†U = I exactly
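The contrast is easy to demonstrate in miniature. As an illustrative sketch (the rotation and gate count here are stand-ins, not the benchmark's circuit), compose an exactly orthogonal rational rotation 1000 times: in `Fraction` arithmetic UᵀU stays exactly the identity, while the same product in FP64 drifts.

```python
from fractions import Fraction

def matmul2(a, b):
    # 2x2 matrix product, written out explicitly
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def transpose2(a):
    return [[a[0][0], a[1][0]], [a[0][1], a[1][1]]]

# (3/5, 4/5) is a rational point on the unit circle,
# so this rotation is exactly orthogonal
Rq = [[Fraction(3, 5), Fraction(-4, 5)], [Fraction(4, 5), Fraction(3, 5)]]
Rf = [[0.6, -0.8], [0.8, 0.6]]   # the same matrix, but 0.6 and 0.8
                                 # are not exact in binary

Uq, Uf = Rq, Rf
for _ in range(999):             # 1000 "gates" total
    Uq = matmul2(Uq, Rq)
    Uf = matmul2(Uf, Rf)

Gq = matmul2(transpose2(Uq), Uq)   # exactly [[1, 0], [0, 1]]
Gf = matmul2(transpose2(Uf), Uf)   # near identity, but drifted
print(Gq)
print(Gf[0][0] - 1.0, Gf[0][1])    # the FP64 unitarity defect
```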

Financial Transactions

$881,143,573.77

1 million transactions summed

FP64 error: $0.0000001
VLA error: $0.00
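The mechanism behind the dollar figures is ordinary binary rounding: most decimal amounts have no exact binary representation. A stdlib sketch with made-up transaction data (not the benchmark's data set) shows the same effect at million-transaction scale:

```python
from decimal import Decimal

# a million 10-cent transactions (illustrative figures)
n = 1_000_000
float_total = 0.0
for _ in range(n):
    float_total += 0.1            # 0.1 is not exact in binary FP64
exact_total = Decimal("0.1") * n  # decimal arithmetic is exact here

print(float_total)   # drifts away from 100000
print(exact_total)   # exactly 100000.0
```

Per-addition the error is below a nanocent, but naive accumulation compounds it across the sum, which is why exact arithmetic matters for ledgers.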

Scientific Computing

Hilbert n=20

Classic ill-conditioned problem

NumPy: WRONG sign
VLA: CORRECT sign

AI/ML Training

Gradient Sums

Large batch accumulation

FP32: Catastrophic loss
VLA: Zero loss
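The "catastrophic loss" failure mode is small addends vanishing against a large accumulator. VLA's internal method is not described here, but the classic software remedy, Kahan compensated summation, is a few lines and shows both the failure and the fix:

```python
def kahan_sum(values):
    """Kahan compensated summation: keeps the rounding error of each
    addition in `c` and feeds it back into the next addition."""
    s, c = 0.0, 0.0
    for v in values:
        y = v - c
        t = s + y
        c = (t - s) - y   # the low-order bits lost by t = s + y
        s = t
    return s

# one large value plus a million tiny ones: each tiny addend is below
# FP64 resolution at 1.0, so naive accumulation drops every one of them
data = [1.0] + [1e-17] * 1_000_000
print(sum(data))        # 1.0 -- all increments lost
print(kahan_sum(data))  # ~1.00000000001 -- increments recovered
```

The same pattern appears when many small per-example gradients are accumulated into a large running sum; exact or compensated accumulation keeps contributions that naive summation silently discards.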

See The Proof

All benchmarks are reproducible. Verify on Kaggle or discuss your use case.