About Redeo

Large language models are fluent: they speak well. But they do not always think well.

Most models used today are trained to follow instructions: take an input, understand what the user wants, and produce a coherent answer. These instruction-tuned models have been enormously useful, but they still generate responses in a single pass, prioritizing fluency over careful deliberation and stability.

Redeo explores a different approach: treating reasoning not as a one-shot act but as a process that can be guided, observed, and refined before a final answer emerges.


Turning Instruct Models into Thinking Models

Imagine the difference between a student who blurts out the first answer that comes to mind and one who works through the problem before responding.

Standard instruct models are like the first student: they are tuned to answer directly from patterns learned during training.

By contrast, thinking models (sometimes called reasoning models) engage in a deeper effort: they allocate more compute at inference time, thinking longer in their chain of thought, revisiting intermediate steps, and producing results that benefit from reflection rather than prediction alone.

Redeo does not require retraining models from scratch to make them behave this way. Instead, it runs existing models under a different runtime framework that encourages them to operate more like thinking systems than single-shot text generators.

That framework is called Wit-1: our first inference paradigm, designed to unlock latent reasoning capabilities in instruct models without extra training.


Powered by Wit-1

Wit-1 is not a new model. It is a way of running models so that they unfold reasoning over time rather than deliver a one-off response.

Many research efforts have shown that models produce better results when reasoning is structured step by step rather than predicted in one go, as with chain-of-thought techniques that break complex problems into intermediate steps.
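As a toy illustration of that idea (not Wit-1's actual mechanism, and not any real Redeo API), here is the difference between a one-jump answer and a computation unfolded into inspectable intermediate steps; all names are made up for this sketch:

```python
# Toy sketch: answering a multi-step word problem as explicit, recorded
# intermediate steps rather than one opaque jump, in the spirit of
# chain-of-thought. Every name here is illustrative, not a real API.

def solve_step_by_step(apples_start: int, bought: int, eaten: int):
    """Return the final answer together with its reasoning trace."""
    trace = []
    after_buying = apples_start + bought
    trace.append(f"start with {apples_start}, buy {bought} -> {after_buying}")
    after_eating = after_buying - eaten
    trace.append(f"eat {eaten} -> {after_eating}")
    return after_eating, trace

answer, trace = solve_step_by_step(3, 5, 2)
print(answer)          # 6
for step in trace:     # each intermediate step is visible, not hidden
    print(step)
```

Because each step is materialized, it can be checked or revised before the final answer is committed, which is the property the techniques above exploit.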

Wit-1 takes this a step further, using massively parallel compute to improve reasoning, safety, and stability, in the same spirit as Google DeepMind's Deep Think and xAI's agent-based Grok 4 Heavy, but as a plug-and-play framework that works with virtually any foundation model.

Wit-1 powers Redeo, so you can steer the same reasoning process however you like. Imagine spending more compute on a particularly difficult part of a larger problem, while leaving the rest untouched. Instead of treating inference as a flat, all-or-nothing operation, Wit-1 allows reasoning effort to be allocated where it actually matters. Redeo is the operating system that lets you do this, intuitively.
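One way to picture uneven effort allocation is sampling a model more times on the hard sub-problems and taking a majority vote. The sketch below uses a stand-in `stub_model` function in place of a real LLM call; the names and the effort plan are invented for illustration and are not the actual Wit-1 interface:

```python
# Sketch: spend more inference samples on the sub-problems flagged as hard,
# then majority-vote the samples. `stub_model` is a pretend model that is
# right more often on easy parts than hard ones; nothing here is real API.
import random
from collections import Counter

def stub_model(subtask: str, rng: random.Random) -> str:
    # Pretend model: right 90% of the time on easy parts, 60% on hard ones.
    p_correct = 0.9 if "easy" in subtask else 0.6
    return "correct" if rng.random() < p_correct else "wrong"

def solve_with_effort(subtask: str, samples: int, rng: random.Random) -> str:
    # More samples -> more reliable majority vote on that sub-problem.
    votes = Counter(stub_model(subtask, rng) for _ in range(samples))
    return votes.most_common(1)[0][0]

rng = random.Random(0)
# Effort plan: one sample for the easy part, fifteen for the hard part.
plan = {"easy: parse the question": 1, "hard: multi-step algebra": 15}
results = {task: solve_with_effort(task, n, rng) for task, n in plan.items()}
print(results)
```

The point of the sketch is the shape of the plan, not the numbers: inference effort becomes a per-sub-problem budget rather than a flat, all-or-nothing setting.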

This shift does not just change how reasoning happens; it changes which models can reason. Smaller models that would typically struggle on complex tasks can deliver results that feel closer to those from much larger, more expensive models. (Benchmarks coming soon.)


What This Means in Practice

Instead of relying on brute force (training ever bigger models with ever more data), Redeo focuses on how inference is structured and controlled.

That has two important consequences: reasoning quality depends less on raw model scale, and compute can be spent where it actually matters rather than uniformly.

This is not about model size alone, but about how you let models use what they already know.


Steerable Thinking

Reasoning should not be opaque or uncontrollable.

Redeo makes reasoning observable and steerable: users can see how a conclusion evolved and guide the process at meaningful points, shaping outcomes not simply by editing prompts but by actively participating in the reasoning flow.

This makes reasoning something you collaborate with, not just consume.
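A minimal sketch of what "steerable" could mean, assuming a reasoning process that exposes each step to a user callback before continuing; the function names and the draft steps are hypothetical, not Redeo's real interface:

```python
# Sketch: observable, steerable reasoning. Each proposed step is shown to
# a user-supplied `steer` callback, which may accept it or replace it
# before the process moves on. Illustrative only; not a real Redeo API.
from typing import Callable, List

def reason(steps: List[str], steer: Callable[[int, str], str]) -> List[str]:
    trace = []
    for i, proposed in enumerate(steps):
        accepted = steer(i, proposed)  # user may edit the step in flight
        trace.append(accepted)
    return trace

# Example: the user corrects a faulty middle step instead of re-prompting
# the whole problem from scratch.
draft = ["assume x > 0", "conclude x = -2", "report answer"]
fixed = reason(draft, lambda i, s: "conclude x = 2" if i == 1 else s)
print(fixed)  # ['assume x > 0', 'conclude x = 2', 'report answer']
```

The design choice this illustrates: intervention happens at the level of individual reasoning steps, not at the level of the prompt.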


Why This Matters

As AI systems become more powerful, the cost of confident mistakes grows too. Improving reliability is not just about bigger models or more data; it is about how inference itself is constructed and managed.

Redeo explores precisely that layer: the interface between human intent and machine reasoning.

If this approach works, it does not replace models. It changes how they are used.


Our View

We believe intelligence improves through a simple loop:

try -> reflect -> adjust

Redeo applies this loop at inference time.
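The same loop is easy to make concrete on a toy numeric problem (finding a square root), where each pass tries a guess, reflects on the error, and adjusts; this is only an analogy for how the loop operates, not how Redeo is implemented:

```python
# The try -> reflect -> adjust loop on a toy problem: guess a square root,
# measure the error, refine the guess, and repeat until it is good enough.
# Analogy only; Redeo applies the same loop to model reasoning, not numbers.

def loop_sqrt(n: float, tol: float = 1e-6) -> float:
    guess = n / 2 or 1.0                   # try: an initial attempt
    while abs(guess * guess - n) > tol:    # reflect: how wrong is it?
        guess = (guess + n / guess) / 2    # adjust: refine and retry
    return guess

print(round(loop_sqrt(144.0), 3))  # 12.0
```

Each pass through the loop keeps what worked and corrects what did not, which is exactly the behavior Redeo aims to elicit from a model at inference time.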