Philip Sheldrake

In the current AI landscape, safety is typically treated as an additive layer: a collection of probabilistic wrappers and guardrails designed to constrain the inherent creativity and volatility of LLMs. Remarkably, from an engineering perspective, safety is being implemented in the same unstable medium (natural language) that it is intended to govern.

AI acts in the world through people and through software. We're focused on the latter.

While others attempt to ease the symptoms, we’re resolving the root cause. While others bolt it on, we're building it in.

We’re building the tech to help everyone develop software that understands itself. Software that is itself symbolic AI. It’s superior software in itself, and truly shines in collaboration with LLMs. We call it supersoftware.

Supersoftware and LLMs are intelligent in different yet highly complementary ways. The LLM provides intuition, intent, and context, and supersoftware provides a world model, grounding, validation, security, and architectural guarantees. LLMs generate accurate, explainable, and policy-compliant responses when guided and constrained by the supersoftware's verifiable logic.

This neurosymbolic synergy provides the ontological stability that LLMs innately lack. Supersoftware acts as a formal reasoning layer that governs the operational logic of agents, ensuring they cannot violate predefined security policies, resource constraints, or architectural rules, regardless of the 'intuition' provided by the neural model.
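To make the idea concrete, here is a minimal sketch of the pattern being described: a symbolic policy layer that validates actions proposed by a neural model before they execute. This is an illustration of the general neurosymbolic gating pattern, not Recognitive's actual implementation; all class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """An action proposed by the neural (LLM) layer."""
    name: str
    resource: str
    cost: int  # e.g. tokens, API calls, or dollars

class PolicyLayer:
    """Symbolic rules the neural layer cannot override."""

    def __init__(self, allowed_resources, max_cost):
        self.allowed_resources = set(allowed_resources)
        self.max_cost = max_cost

    def validate(self, action: Action) -> tuple[bool, str]:
        # Declarative checks: deterministic, inspectable, explainable.
        if action.resource not in self.allowed_resources:
            return False, f"resource '{action.resource}' not permitted"
        if action.cost > self.max_cost:
            return False, f"cost {action.cost} exceeds budget {self.max_cost}"
        return True, "ok"

# The LLM proposes; the symbolic layer disposes.
policy = PolicyLayer(allowed_resources={"docs", "search"}, max_cost=10)
proposed = Action(name="delete", resource="prod-db", cost=1)
allowed, reason = policy.validate(proposed)
# allowed is False: the policy blocks the action regardless of how
# confidently the neural model proposed it.
```

The point of the pattern is that the constraint lives in verifiable code rather than in a natural-language prompt, so it cannot be talked around.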

Our core technology is already enterprise-proven in an adjacent context.

You can find our white paper at the following URL, Matilda, and you won't be surprised to hear that we'd love to share our deck with you. My contact details are at the top of the white paper.

Best regards.

https://recognitive.io/docs/supersoftware-white-paper.html
