Foundation models for the real world

Hominis is a new family of AI models built on a simple, powerful idea: true intelligence isn't about knowing everything; it's about doing the right thing, reliably. We traded encyclopedic memory for deep procedural competence to create agents you can trust.

Fine-tuning and Training Notebooks

We've published comprehensive training notebooks and fine-tuning guides for the Hominis model family. These resources walk through our multi-stage instruction tuning process, from general-purpose instruction following to specialized agentic plan synthesis training. The notebooks include code for the bootstrapping methodology we used to generate logical plan datasets and our rejection sampling techniques that filter for epistemic safety.
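To make the rejection-sampling idea concrete, here is a minimal sketch of how a filter for epistemic safety can work: draw several candidates from a generator and keep only those a verifier accepts. The `toy_generate` and `toy_verify` functions are illustrative stand-ins, not the pipeline from our notebooks.

```python
import random

def rejection_sample(prompt, generate, verify, k=8):
    """Draw up to k candidate responses and keep only those the
    verifier accepts; the accepted set may be empty."""
    accepted = []
    for _ in range(k):
        candidate = generate(prompt)
        if verify(prompt, candidate):
            accepted.append(candidate)
    return accepted

# Toy stand-ins: a "model" that sometimes fabricates a citation,
# and a verifier that rejects any answer containing one.
def toy_generate(prompt):
    return random.choice(["I don't know.",
                          "The answer is 42 [source: made-up]."])

def toy_verify(prompt, answer):
    return "made-up" not in answer

random.seed(0)
kept = rejection_sample("What is X?", toy_generate, toy_verify, k=16)
```

In practice the verifier can be any programmatic check (a unit test, a citation lookup, a consistency probe); the sampling loop itself stays the same.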

Improvements

  • Released complete pre-training and fine-tuning pipeline
  • Published distillation notebooks showing teacher-student knowledge transfer
  • Included structured initialization strategy for accelerated convergence
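For readers who have not opened the distillation notebooks yet, the core teacher-student objective is the standard temperature-scaled distillation loss; a minimal NumPy sketch (the variable names and temperature are illustrative, not our exact configuration):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) between temperature-softened
    distributions, scaled by T^2 as in standard knowledge
    distillation."""
    p = softmax(teacher_logits / T)                 # soft teacher targets
    log_q = np.log(softmax(student_logits / T))     # student log-probs
    return float(T * T * np.sum(p * (np.log(p) - log_q), axis=-1).mean())

teacher = np.array([[4.0, 1.0, 0.5]])
student_matched = teacher.copy()                    # identical logits -> ~0 loss
student_flipped = np.array([[0.5, 1.0, 4.0]])       # disagreeing student
```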

Hugging Face Release: 8B

Hominis-lite (8B) is now available on Hugging Face with full weights and inference code. This efficient variant achieves high throughput on a single consumer-level GPU while maintaining competitive MMLU-Pro scores. The model features Grouped-Query Attention, RMSNorm pre-normalization, and a 32,768-token context window. It's designed for resource-constrained deployments where reliability matters more than encyclopedic knowledge.
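RMSNorm pre-normalization is one of the simpler pieces to see in isolation. A minimal NumPy sketch of the standard RMSNorm formulation (scale by the reciprocal root-mean-square, then apply a learned per-channel gain, with no mean subtraction) follows; it illustrates the operation, not our exact kernel:

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    """RMSNorm: normalize by the root-mean-square of the last axis,
    then apply a learned per-channel gain (no mean subtraction)."""
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return (x / rms) * weight

x = np.array([[3.0, -4.0]])   # RMS = sqrt((9 + 16) / 2) = sqrt(12.5)
w = np.ones(2)                # identity gain for the demo
y = rms_norm(x, w)            # output has unit RMS
```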

Improvements

  • Released Hominis-lite (8B) under RAIL license
  • Optimized for efficient inference with BF16 support

Preprint Release

Today we're releasing "From Knowing to Doing: A Principled Architecture for Procedurally Competent Agents," our research paper introducing Hominis. Rather than pursuing scale for encyclopedic knowledge, we've built a model family that intentionally trades declarative knowledge for procedural competence. The architecture is founded on three pillars: curated pre-training data emphasizing logical inference, integrated self-assessment for epistemic humility, and a novel WebAssembly-based agentic framework that compiles high-level reasoning into secure, sandboxed execution.

The paper presents our complete methodology, from the strategic up-weighting of arXiv papers and permissively-licensed code during pre-training, to the multi-stage distillation process that teaches the model to recognize its knowledge boundaries.
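Recognizing knowledge boundaries ultimately surfaces at inference time as an abstention decision. As a hedged illustration (the threshold and function names below are hypothetical, not taken from the paper), one simple realization is to refuse when the mean token log-probability of a candidate answer falls below a calibrated threshold:

```python
import numpy as np

def answer_or_abstain(answer, token_logprobs, threshold=-1.5):
    """Return the answer only when mean token log-probability clears
    the threshold; otherwise fall back to an explicit refusal."""
    confidence = float(np.mean(token_logprobs))
    return answer if confidence >= threshold else "I don't know."

confident = answer_or_abstain("Paris", np.array([-0.1, -0.2]))
unsure = answer_or_abstain("Atlantis", np.array([-3.0, -4.0]))
```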

Early Testing Access Opened

We're opening early access to the Hominis base and instruct models on a rolling basis.

Improvements

  • Launched ReAgent framework with logical plan synthesis capabilities
  • Distilled lighter variants on 20,000 instruction-plan pairs covering multi-step reasoning patterns
  • Pre-trained a 15B model on RedPajama v2 and a curated scientific corpus
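To give a feel for what an instruction-plan pair looks like, here is a toy sketch of a structured plan executed step by step, with each step's result recorded as a named fact. The `PlanStep`/`Plan` classes and the toy tools are illustrative assumptions, not the ReAgent data format:

```python
from dataclasses import dataclass, field

@dataclass
class PlanStep:
    action: str      # tool to invoke
    args: dict       # keyword arguments naming input facts
    produces: str    # name under which the result is stored

@dataclass
class Plan:
    goal: str
    steps: list = field(default_factory=list)

def execute(plan, tools, facts=None):
    """Run steps in order; each action is a callable in `tools`
    that reads from and writes to the shared fact store."""
    facts = dict(facts or {})
    for step in plan.steps:
        facts[step.produces] = tools[step.action](facts, **step.args)
    return facts

# Toy example: a two-step arithmetic plan.
tools = {
    "add": lambda facts, a, b: facts[a] + facts[b],
    "double": lambda facts, a: 2 * facts[a],
}
plan = Plan(goal="compute 2 * (x + y)", steps=[
    PlanStep("add", {"a": "x", "b": "y"}, "sum"),
    PlanStep("double", {"a": "sum"}, "result"),
])
out = execute(plan, tools, facts={"x": 3, "y": 4})  # out["result"] == 14
```

Separating the declarative plan from its execution is what makes the multi-step reasoning patterns in the dataset checkable before anything runs.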

Project Kickoff

We're pleased to announce the commencement of the Hominis project. We have secured 1M compute hours on the Leonardo GPU cluster through an ISCRA-B grant from CINECA. Prof. Stefano Marrone will lead the effort as Principal Investigator to create human-centric foundation models. RealAI will support the project as Industrial Partner.