Technology
Our approach to AI is dynamic and platform-agnostic, with universal I/O data formats that integrate directly with your current workflows. Our partners appreciate that our AI/ML solution scales both vertically and horizontally alongside their existing systems, with little to no new code or refactoring.
While most solutions today are engineered through statistical and mathematical lenses, we looked to physics, information theory, and neuroscience to model cognitive processors that emulate known neurobiological functions rather than purely statistical mechanisms. The system uses causality to go beyond narrow machine-learning predictions and can be integrated directly into human decision-making. The result is a fully explainable universal framework that learns from data, accepts expert input, and solves complex data-centric problems.
We’ve separated the implementation layers – Data, Intelligence, and Application – to allow them to be addressed individually by different groups with varying expertise within a team or organization.
Our AI platform is built with flexibility and scalability in mind. Our universal data input format, a robust three-field object, is designed to assimilate any dataset, irrespective of its nature or complexity. This format translates your input data into a representation our intelligence system can decode and analyze. Configured manually or via a simple API integration, your input data becomes a question for the system to answer, expressed as a sequence of events. Because your data is decoupled from the intelligence and application layers, input datasets can change while our AI agents, which house the cognitive processors and data manipulatives, and your integration all remain the same, allowing for ultimate scalability.
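The exact schema of the three-field object is not spelled out here, so the field names in the sketch below are illustrative assumptions; the point is only that any dataset reduces to an ordered sequence of small, uniform event objects.

```python
from dataclasses import dataclass
from typing import Any

# A minimal sketch of a "three-field" universal input object.
# The actual field names are not specified in this overview, so
# `source`, `event`, and `timestamp` are illustrative assumptions.
@dataclass
class InputEvent:
    source: str       # where the observation came from (sensor, user, system)
    event: Any        # the observation itself, in its native representation
    timestamp: float  # when it occurred, so events form an ordered sequence

# Any dataset, regardless of domain, becomes a sequence of events that
# the intelligence layer reads as a "question" to answer.
question = [
    InputEvent(source="pump-7", event={"vibration_hz": 112.4}, timestamp=1700000000.0),
    InputEvent(source="pump-7", event={"vibration_hz": 168.9}, timestamp=1700000060.0),
]
```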
The platform is our GUI for integrating applications and configuring our framework. Affexy enables engineers at all levels to design solution AI agents through a graphical topology of primitives: wrappers for cognitive processors and data manipulatives that manage network connections and API calls. Our AI agents can learn in real time to accelerate improvements, particularly as complexity increases. Once configured, the intelligence system returns a series of universally formatted prediction objects, which it can actively assess to determine the best course of action by maximizing the predicted utility value of its outputs.
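As a sketch of what a universally formatted prediction object with a predicted utility value could look like in practice (the field names are assumptions, not a published schema):

```python
from dataclasses import dataclass
from typing import Any

# Illustrative sketch of a universally formatted prediction object.
# Field names are assumptions; the overview only says each output
# carries a prediction and a predicted utility value, and that all
# results are explainable.
@dataclass
class Prediction:
    action: Any         # a candidate course of action or predicted outcome
    probability: float  # how likely the system believes this outcome is
    utility: float      # predicted utility value used to rank outputs
    explanation: str    # human-readable rationale for the prediction

def best_course_of_action(predictions: list[Prediction]) -> Prediction:
    """Select the output that maximizes predicted utility, as described above."""
    return max(predictions, key=lambda p: p.utility)
```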
Only simple REST API calls are needed to integrate into current workflows and future domains, so our platform-agnostic framework enables endless opportunities for application. Because this layer is separate from the others in the stack, minimal effort or skill is required to integrate or scale. Since the REST interface stays the same, you can increase the complexity of a problem without updating or changing code. And because of the universal I/O data formats, you can apply the same AI agents to new problem domains without any pre-processing or pre-modeling; simply change the dataset. The best part? All solutions are explainable.
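A minimal sketch of what such an integration could look like, assuming a hypothetical endpoint (the URL, route, and payload keys below are placeholders, not the documented API):

```python
import requests

# Hypothetical endpoint and payload keys, shown only to illustrate the
# pattern: the same REST interface is kept as problems and datasets change.
AGENT_URL = "https://api.example.com/v1/agents/my-agent/query"

payload = {
    "events": [  # the universal input format: an ordered sequence of events
        {"source": "pump-7", "event": {"vibration_hz": 112.4}, "timestamp": 1700000000.0},
        {"source": "pump-7", "event": {"vibration_hz": 168.9}, "timestamp": 1700000060.0},
    ]
}

response = requests.post(AGENT_URL, json=payload, timeout=30)
response.raise_for_status()

# The response would contain universally formatted prediction objects.
for prediction in response.json()["predictions"]:
    print(prediction["action"], prediction["utility"], prediction["explanation"])
```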
Though our platform consists of multiple layers, it is the Affexy Framework that is responsible for real-time learning, fully explainable results, domain adaptation, and universal I/O. We built our solution to eliminate complex integrations, time-consuming development, and the difficulty of rapidly deploying scalable solutions. When working with Affexy, you visually design solutions by modeling and manipulating topologies that describe how data flows through the cognitive processors in our cognitive AI system, which provides explainable results while learning in real time to improve solutions automatically.
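One way to picture that visual modeling step is as editing a small graph of primitives. The sketch below shows a topology as plain data; every node and operation name is invented for illustration, since the framework's actual primitive catalog is not listed here.

```python
# Illustrative, hand-written topology: nodes wrap cognitive processors or
# data manipulatives, and edges describe how event data flows between them.
topology = {
    "nodes": {
        "ingest":  {"kind": "manipulative", "op": "normalize_events"},
        "memory":  {"kind": "processor",    "op": "sequence_memory"},
        "predict": {"kind": "processor",    "op": "predict_next_event"},
        "utility": {"kind": "processor",    "op": "score_utility"},
    },
    "edges": [
        ("ingest", "memory"),
        ("memory", "predict"),
        ("predict", "utility"),
    ],
}
```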
Build, scale, and improve faster with fewer mistakes. Because solutions evolve automatically within our framework, problem complexity can increase without additional modeling. And if you do want to iterate manually (who doesn't love a little control?), hop into the platform to visually re-model in a few simple steps. Because we keep the data, intelligence, and application layers separate and employ universal I/O data formats, you can use the same solutions for different datasets and applications, and team members can work on different layers concurrently. No more solving every problem. Solve any problem.
To solve problems in any environment from any dataset, we needed the platform to answer a multitude of questions grounded in complex and dynamical systems theory (a sketch of how these could map onto a single query interface follows the list):
What classification or category does X belong to?
What has happened, what is happening, and what will happen?
What anomalies, utilities, and missing or extra events are in the data?
How do I iterate or change X to make it better?
For this event and context, what is the best course of action?
Why did this particular part of the machine break down?
What's the probability of X happening next?
What patterns exist in the data?
What does X mean from a historical or analytical perspective?
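Purely as an illustration, the sketch below shows how these question families could be expressed against one agent through a single interface; the `Question` names and the `agent.query` call are assumptions, not a documented API.

```python
from enum import Enum

# Illustrative only: one way the question families above could map onto a
# single agent interface. All names here are assumptions.
class Question(Enum):
    CLASSIFY = "classify"    # what category does X belong to?
    FORECAST = "forecast"    # what will happen next, and with what probability?
    ANOMALY = "anomaly"      # what is missing, extra, or unusual in the data?
    OPTIMIZE = "optimize"    # how do I change X to make it better?
    RECOMMEND = "recommend"  # best course of action for this event and context
    EXPLAIN = "explain"      # why did this happen; what does X mean historically?
    PATTERN = "pattern"      # what patterns exist in the data?

def ask(agent, question: Question, events: list) -> list:
    """Same agent, same universal event format; only the question kind changes."""
    return agent.query(kind=question.value, events=events)
```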