No More DeLuLu: What Happens When We Let Information Speak for Itself

Live at Google DevFest Silicon Valley — how physics-grounded information flows build transparent, auditable AI.

Where did we go wrong?

Current AI systems make fundamentally flawed architectural choices. In this talk, we traced the black-box problem back to its root: the reliance on activation functions. These non-linear transformations obscure information flow and make models opaque by design.

We also exposed a deeper issue: today's systems rely solely on angular information, measuring similarity by vector direction alone. Under that view, two vectors pointing in opposite directions count as maximally dissimilar, even though in linear algebra they are linearly dependent and lie on the same line, so treating them as unrelated throws structure away. True similarity requires both angular and spatial information: how aligned two vectors are, and how far apart they sit.
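
A quick way to see the gap: cosine similarity is a purely angular measure, so it cannot tell nearby vectors from distant ones along the same direction, and it flags antiparallel vectors as opposites. A minimal numpy sketch (the vectors here are illustrative, not from the talk):

```python
import numpy as np

def cosine_similarity(a, b):
    """Angular information only: direction, not position or magnitude."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 0.0])
b = np.array([100.0, 0.0])   # same direction, 99 units away
c = np.array([-1.0, 0.0])    # antiparallel, yet on the same line as a

print(cosine_similarity(a, b))  #  1.0 -> "identical", despite the distance
print(cosine_similarity(a, c))  # -1.0 -> "maximally dissimilar", despite linear dependence
print(np.linalg.norm(a - b))    # 99.0 -> the spatial information cosine throws away
```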

Our solution: The Yat Product

We introduced a kernel that combines angular and spatial information, removing the need for activation functions to learn non-linear patterns. The approach is detailed in our technical report "No More DeLuLu", which presents the yat product: a transparent, inherently non-linear measure of how similar two things are.
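
One way to read "combining angular and spatial information" is an inverse-square, physics-flavored kernel: a dot-product term over a squared-distance term. The exact definition lives in the technical report; the squared numerator and the eps stabilizer below are our assumptions, so treat this as a sketch rather than the canonical formula.

```python
import numpy as np

def yat_product(x, w, eps=1e-6):
    """Sketch of a kernel mixing angular and spatial information.

    Numerator:   squared dot product        -> angular agreement (and magnitude)
    Denominator: squared Euclidean distance -> spatial proximity
    eps keeps the value finite when x == w (assumed stabilizer, not from the report).
    """
    angular = (x @ w) ** 2
    spatial = np.sum((x - w) ** 2)
    return angular / (spatial + eps)

x = np.array([1.0, 0.0])
print(yat_product(x, np.array([2.0, 0.0])))   # aligned and close -> large response
print(yat_product(x, np.array([0.0, 2.0])))   # orthogonal        -> zero
print(yat_product(x, np.array([-5.0, 0.0])))  # same line, far    -> damped by distance
```

The response is already non-linear in x, which is the point: the non-linearity comes from the kernel itself, not from an activation stacked on top.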

The result: we successfully trained models without activation functions, maintaining full transparency while capturing complex, non-linear relationships. Every computation remains interpretable, every decision traceable.
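
To make the "no activation functions" claim concrete, here is a minimal sketch of what a layer built on such a kernel could look like. The layer shape, the naming, and the kernel form are our illustration assumptions, not the architecture from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

def yat_layer(x, W, eps=1e-6):
    """Activation-free layer sketch: each unit responds via the kernel above.

    x: (d,) input vector; W: (k, d) one weight vector per unit.
    The output is already non-linear in x, so no ReLU/GELU is stacked on top.
    """
    dots = W @ x                          # (k,) angular terms
    dists = np.sum((W - x) ** 2, axis=1)  # (k,) spatial terms
    return dots ** 2 / (dists + eps)

x = rng.normal(size=8)
W = rng.normal(size=(4, 8))
print(yat_layer(x, W))  # four non-negative unit responses, each traceable to a weight vector
```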

Key takeaways

  • Activation functions are the main cause of black-box AI—they're not necessary for non-linearity.
  • Angular-only similarity measures are fundamentally incomplete.
  • Combining angular and spatial information via the yat product enables white-box learning.
  • Models can be trained end-to-end without sacrificing interpretability.
  • This architecture grounds AI in physics and information theory, not heuristics.

Resources