Spectral
v0.1
A composite model designed for speed and quality, purpose-built for aexol.ai artifact generation and for generating definitions in the Aexol language.
Average Speed
~500 tok/s
Architecture
Composite
Version
0.1
Release Date
Feb 9, 2026
Model Type
Composite
Weight Sources
Open + Closed
Overview
Spectral is a composite AI model that combines multiple open- and closed-weight frontier models into a unified architecture, specifically optimized for the Aexol ecosystem. Unlike single-model approaches, Spectral leverages the strengths of different frontier models to deliver both speed and quality in artifact generation.
With an average throughput of ~500 tokens per second, Spectral is engineered for real-time development workflows. Whether generating GraphQL schemas, Prisma models, TypeScript types, or complete application scaffolds from Aexol specifications, Spectral delivers production-ready outputs with minimal latency.
The composite architecture allows Spectral to dynamically route tasks to the most suitable underlying model, ensuring optimal quality for each type of generation task while maintaining consistent high throughput across all operations.
Capabilities
Aexol Artifact Generation
Generate complete application artifacts from Aexol specifications including GraphQL schemas, database models, API endpoints, and type definitions.
Definition Generation
Create and refine Aexol language definitions from natural language descriptions, documentation, or existing codebases.
Multi-Language Output
Generate code artifacts targeting TypeScript, Python, Rust, Go, and other languages from a single Aexol specification.
Iterative Refinement
Support for chat-based refinement workflows where specifications are progressively improved through interactive dialog.
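To make the multi-language output capability concrete, here is an illustrative sketch of how a single language-agnostic model definition can be emitted as both a TypeScript interface and a Python dataclass. The `ModelSpec` shape below is a hypothetical stand-in for a parsed specification, not the actual Aexol format, and the emitters are assumptions for illustration only.

```typescript
// Hypothetical, minimal shape for a parsed language-agnostic model
// definition. The real Aexol specification format is not shown here.
type FieldSpec = { name: string; type: "string" | "int" | "bool"; required: boolean };
type ModelSpec = { name: string; fields: FieldSpec[] };

// Map abstract field types to concrete types in each target language.
const tsTypes: Record<FieldSpec["type"], string> = { string: "string", int: "number", bool: "boolean" };
const pyTypes: Record<FieldSpec["type"], string> = { string: "str", int: "int", bool: "bool" };

// Emit a TypeScript interface; optional fields become `name?: type`.
function toTypeScript(spec: ModelSpec): string {
  const fields = spec.fields
    .map((f) => `  ${f.name}${f.required ? "" : "?"}: ${tsTypes[f.type]};`)
    .join("\n");
  return `interface ${spec.name} {\n${fields}\n}`;
}

// Emit a Python dataclass; optional fields become `Optional[type]`
// (the generated code would need `from typing import Optional`).
function toPython(spec: ModelSpec): string {
  const fields = spec.fields
    .map((f) => `    ${f.name}: ${f.required ? pyTypes[f.type] : `Optional[${pyTypes[f.type]}]`}`)
    .join("\n");
  return `@dataclass\nclass ${spec.name}:\n${fields}`;
}

// One spec, two target languages.
const user: ModelSpec = {
  name: "User",
  fields: [
    { name: "id", type: "string", required: true },
    { name: "age", type: "int", required: false },
  ],
};
```

The point of the sketch is the design constraint it encodes: all target-language emitters consume the same intermediate representation, so adding a new output language means adding one emitter, not re-parsing the specification.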
Architecture
Spectral employs a composite architecture that orchestrates multiple frontier models — both open and closed weight — into a unified inference pipeline. This approach combines the best characteristics of each underlying model:
- Speed optimization — Lightweight models handle routine generation tasks for maximum throughput
- Quality assurance — Frontier models handle complex reasoning and architectural decisions
- Task routing — Intelligent routing ensures each subtask is handled by the most capable model
- Aexol-native training — Fine-tuned specifically for the Aexol specification language and its artifact ecosystem
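The routing idea above can be sketched as a simple dispatcher: routine, small tasks go to a lightweight backend for throughput, while large or architecturally complex tasks go to a frontier backend for quality. The task kinds, token threshold, and backend stubs below are assumptions for illustration, not Spectral internals.

```typescript
// Hypothetical task and backend shapes for illustrating composite routing.
type Task = { kind: "schema" | "types" | "scaffold" | "refactor"; inputTokens: number };

interface ModelBackend {
  name: string;
  generate(task: Task): string;
}

// Stub backends standing in for a lightweight model and a frontier model.
const fastModel: ModelBackend = {
  name: "fast",
  generate: (t) => `fast:${t.kind}`,
};
const frontierModel: ModelBackend = {
  name: "frontier",
  generate: (t) => `frontier:${t.kind}`,
};

// Heuristic router: complex task kinds or large inputs escalate to the
// frontier model; everything else stays on the fast path.
function route(task: Task): ModelBackend {
  const complexKinds = new Set<Task["kind"]>(["scaffold", "refactor"]);
  if (complexKinds.has(task.kind) || task.inputTokens > 4000) return frontierModel;
  return fastModel;
}
```

A production router would also weigh signals such as backend load and past output quality, but the shape is the same: a cheap classification step in front of a pool of heterogeneous models.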
Version History
v0.1
February 9, 2026
Initial release. Composite architecture with support for Aexol artifact generation, definition generation, and multi-language output. Average throughput of ~500 tok/s.