Our vision is to use AI to help us understand our world and ourselves, and to increase the fidelity of knowledge transfer between humans and AI.
We accomplish this by building models designed to explore vast spaces of plausible theories.
We are working on a new breed of generative AI algorithms that drastically expand the reach of human creative thought. Our models ingest a problem statement, expressed as a small set of observational data, and output the entire space of plausible theories consistent with that data.
Our workflow is designed to surface:
- cognitive errors,
- inconsistencies, and
- edge cases.
These issues are often implicit in the training data and in the reasoning vocabulary itself.
Our AI generates theories and expresses them in the same language and concepts originally defined by the researcher. This streamlines feedback and validation, resulting in models that are transparent and explainable.
By grounding our AI in concepts defined by the researcher, we develop models that are robust and generalizable, qualities missing from many popular AI models today.
Our approach augments human reasoning through a collaborative process with AI.
First, researchers validate the AI's reasoning. Then, the AI explores hypotheses beyond the reach of unaided human search. Finally, researchers examine the AI's output to guide the integration of its discoveries. These discoveries are achievable by neither humans nor AI working independently; the loop is sketched below.
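To make the division of labor concrete, here is a minimal sketch of one round of this loop in Python. All names here (Theory, collaborate, the toy generator and reviewer) are hypothetical illustrations of the pattern, not our actual API.

```python
# Hypothetical sketch of the validate -> explore -> integrate loop described above.
# Every name in this file is illustrative; it is not a real product interface.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Theory:
    statement: str        # expressed in the researcher's own vocabulary
    reasoning: list[str]  # step-by-step trace a researcher can audit

def collaborate(
    observations: Iterable[str],
    generate: Callable[[Iterable[str]], list[Theory]],
    researcher_approves: Callable[[Theory], bool],
) -> list[Theory]:
    """Run one round of the human-AI collaboration loop."""
    # The AI explores the space of plausible theories for the observations.
    candidates = generate(observations)
    # Researchers audit each reasoning trace, keeping only theories whose
    # steps they can validate, and integrate those discoveries.
    return [t for t in candidates if researcher_approves(t)]

if __name__ == "__main__":
    # Toy stand-ins: a generator proposing two theories for a tiny dataset,
    # and a reviewer accepting any theory with a non-empty reasoning trace.
    obs = ["metal A expands when heated", "metal B expands when heated"]
    toy_generate = lambda o: [
        Theory("all metals expand when heated",
               ["A expands", "B expands", "generalize over metals"]),
        Theory("only A and B expand", []),
    ]
    accepted = collaborate(obs, toy_generate, lambda t: bool(t.reasoning))
    for t in accepted:
        print(t.statement)
```

The key design choice this sketch illustrates is that the filter is a human judgment over an auditable reasoning trace, not an opaque score.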
Our models require drastically smaller datasets than conventional deep learning approaches.
Scientists and researchers can validate AI-generated reasoning sequences, so the resulting models are inherently faithful to human decision-making.
We explore much larger spaces of potential hypotheses than are accessible to the human mind.