Treble SDK
Treble's SDK is the first cloud-based programmatic interface for acoustic simulation, using finite element solvers to accurately capture complex wave phenomena at all frequencies and scales. The SDK enables next-generation machine-learning training and accelerated product design cycles, and integrates seamlessly into custom R&D workflows and third-party tools.
A paradigm shift in acoustic simulation
Treble's hybrid solver models the full audible spectrum, combining wave-based simulations for low-mid frequencies and geometrical acoustics for higher frequencies.
Geometrical acoustics
State-of-the-art phased geometrical acoustics for efficient simulation of large rooms and high frequencies.
Wave-based FEM
Massively accelerated time-domain FEM simulations, inherently capturing wave phenomena like diffraction, phase and scattering.
Synthetic audio data generation in complex acoustic environments
Treble SDK enables the easy generation of high-quality acoustic training data, creating realistic acoustic scenes with complex materials, geometries, furnishings, directional sound sources, and microphone arrays. This enhances machine learning algorithms for applications such as speech enhancement, source localization, blind room estimation, echo cancellation, room adaptation, and generative AI audio. Independent research has shown that wave-based synthetic acoustic data significantly improves ML performance, making Treble SDK a powerful tool for AI-driven audio development.
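To illustrate, the sketch below shows one common way such simulated data feeds a training pipeline: convolving dry speech with a simulated room impulse response to produce a reverberant input/target pair. It uses only NumPy and SciPy; the file names are placeholders, not part of the Treble SDK API, and mono signals are assumed.

```python
# Illustrative sketch: turning simulated room impulse responses into
# reverberant training examples for, e.g., a speech-enhancement model.
# File names/paths are placeholders, not part of the Treble SDK API.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

def make_reverberant_example(dry_speech_path, rir_path, out_path):
    sr_speech, speech = wavfile.read(dry_speech_path)  # mono dry speech
    sr_rir, rir = wavfile.read(rir_path)               # mono simulated RIR
    assert sr_speech == sr_rir, "resample first if sample rates differ"

    speech = speech.astype(np.float32)
    rir = rir.astype(np.float32)

    # Convolve dry speech with the simulated RIR to obtain the reverberant
    # signal used as the model input; the dry signal serves as the target.
    wet = fftconvolve(speech, rir)[: speech.size]
    wet /= np.max(np.abs(wet)) + 1e-9  # simple peak normalization

    wavfile.write(out_path, sr_speech, wet)

make_reverberant_example("dry_speech.wav", "simulated_rir.wav", "reverberant.wav")
```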
Machine learning model testing and validation
Treble SDK leverages high-fidelity simulations to evaluate audio ML algorithms, enabling near-real-time device-specific rendering in post-processing. Its Python-based programmatic interface allows seamless, automated performance evaluation across diverse acoustic scenarios. By replacing costly and time-consuming physical measurements with virtual prototyping, Treble SDK accelerates development while ensuring accurate and scalable testing.
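As a rough illustration of such an automated evaluation loop, the sketch below scores a speech-enhancement model with SI-SDR across a batch of device-rendered scenarios. The file layout and the `enhance` stand-in are assumptions for the example, not Treble SDK calls.

```python
# Illustrative evaluation loop: score an enhancement model on a batch of
# device-rendered scenarios using SI-SDR. File names and the `enhance`
# stand-in are placeholders, not Treble SDK calls. Mono signals assumed.
import numpy as np
from scipy.io import wavfile

def si_sdr(reference, estimate, eps=1e-9):
    # Scale-invariant signal-to-distortion ratio, a common enhancement metric.
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    scale = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = scale * reference
    noise = estimate - target
    return 10.0 * np.log10(np.dot(target, target) / (np.dot(noise, noise) + eps))

def enhance(signal):
    return signal  # stand-in for the ML model under test

scores = []
for i in range(100):  # one device-rendered recording per simulated scenario
    _, dry = wavfile.read(f"scenario_{i:03d}_dry.wav")          # clean reference
    _, rendered = wavfile.read(f"scenario_{i:03d}_device.wav")  # device output
    estimate = enhance(rendered.astype(np.float64))
    n = min(dry.size, estimate.size)
    scores.append(si_sdr(dry.astype(np.float64)[:n], estimate[:n]))

print(f"mean SI-SDR: {np.mean(scores):.2f} dB")
```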
Product virtual prototyping
Treble SDK enables high-fidelity simulations to evaluate audio hardware performance, accurately modeling specific sound sources with complex directivity patterns and any microphone array design. Its Python-based programmatic interface allows seamless, automated testing across diverse acoustic scenarios. By replacing costly physical measurements with virtual prototyping and offering effortless data augmentation through proprietary device rendering, Treble SDK streamlines development and enhances precision.
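For example, a microphone-array design to be evaluated virtually can be described as a simple set of Cartesian coordinates, as in the hypothetical sketch below; the helper and coordinate convention are illustrative assumptions, not part of the Treble SDK API.

```python
# Illustrative sketch of a microphone-array layout that could be attached to
# a simulated device. The helper and coordinate convention are assumptions,
# not the Treble SDK API.
import numpy as np

def circular_array(num_mics, radius_m, height_m=0.0):
    """Return (num_mics, 3) Cartesian mic positions on a horizontal circle."""
    angles = np.linspace(0.0, 2.0 * np.pi, num_mics, endpoint=False)
    return np.stack(
        [radius_m * np.cos(angles),
         radius_m * np.sin(angles),
         np.full(num_mics, height_m)],
        axis=1,
    )

# Six-microphone ring with a 4 cm radius, e.g. for a smart-speaker prototype.
mic_positions = circular_array(num_mics=6, radius_m=0.04)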
Automotive acoustics and audio
Treble SDK enables rapid and accurate acoustic simulations of car cabins, capturing the interaction between infotainment systems, background noise, and cabin acoustics. It generates authentic virtual listening experiences to evaluate different designs and optimizes audio algorithms for automotive applications, enhancing in-car sound quality and performance.
Spatial audio rendering and analysis
Generate high-fidelity spatial room impulse responses (up to 32nd-order ambisonics), render perceptually authentic binaural and spatial audio, and efficiently set up thousands of acoustic scenarios using imported geometries, Treble’s room database, or programmatic scene generation.
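As background, an Nth-order ambisonics impulse response carries (N + 1)^2 channels, so a 32nd-order response has 1089 channels. The sketch below, using only NumPy and SciPy, shows that channel count and a straightforward per-channel convolution that turns a dry signal and a spatial room impulse response into a spatial render ready for binaural or loudspeaker decoding; array shapes are assumptions for the example.

```python
# Illustrative sketch: channel count of an Nth-order ambisonics response and
# a per-channel convolution of a dry signal with a spatial RIR. Array shapes
# are assumptions for the example, not a Treble SDK output format.
import numpy as np
from scipy.signal import fftconvolve

def ambisonic_channels(order: int) -> int:
    # Full-sphere ambisonics of order N uses (N + 1)^2 channels.
    return (order + 1) ** 2

def render_ambisonic(dry: np.ndarray, srir: np.ndarray) -> np.ndarray:
    """dry: (num_samples,), srir: (channels, rir_samples) -> (channels, num_samples)."""
    return np.stack([fftconvolve(dry, channel)[: dry.size] for channel in srir])

print(ambisonic_channels(32))  # -> 1089 channels for a 32nd-order response
```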
Parametric analysis workflows
Automate virtual prototyping and optimization with parametric workflows in the Treble SDK. Seamlessly integrate with game engines, CAD tools, and other third-party platforms to refine acoustic designs, PA systems, material configurations, microphone arrays, and machine learning algorithms with minimal manual effort.
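Such a parametric sweep often reduces to a small driver script; the sketch below shows one possible shape, where `run_simulation` is a placeholder for whatever simulation entry point the workflow uses rather than an actual Treble SDK function.

```python
# Illustrative parametric-sweep skeleton: iterate over candidate absorption
# coefficients and source positions, dispatch one simulation per combination,
# and collect the results. `run_simulation` is a placeholder, not a Treble
# SDK function.
from itertools import product

absorption_coefficients = [0.1, 0.3, 0.5, 0.7]
source_positions = [(1.0, 2.0, 1.5), (3.0, 2.0, 1.5)]

def run_simulation(absorption, source_position):
    # Placeholder: configure and launch a simulation, return a result handle.
    return {"absorption": absorption, "source": source_position}

results = [
    run_simulation(a, p)
    for a, p in product(absorption_coefficients, source_positions)
]
```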
SDK feature summary
Go to our SDK pricing page for a full list of features.
Simulation at scale
Set up and launch thousands of advanced acoustic simulations in complex environments with a few Python commands.
Advanced source modeling
Model complex sound sources such as loudspeakers and the human voice via directional point sources and surface sources.
Spatial audio
Output physically accurate ambisonics room impulse responses up to 32nd order and render binaural / multi-channel output for auralization.
Device modeling
Model microphone arrays and listeners and render device-specific output in post-processing.
Scene generation
Import your own scenes, leverage Treble’s enormous scene database, or programmatically set up complex acoustic environments.
Machine learning workflows
Efficiently train and evaluate any kind of ML-based algorithm, e.g., speech enhancement and blind room estimation.
Real-time collaboration
Modern collaborative workflows, in-product support and shared assets such as materials, sources, receivers and sounds.
Automated workflows
Integrate the SDK into your custom workflows and connect with third-party tools for automated development and prototyping.