How It Started
In 2021, Noa Srebrnik and Gilad Manor founded Citrusˣ with the mission of bringing transparency and accountability to Machine Learning (ML) models. But the story of Citrusˣ began long before that.
Their path to founding Citrusˣ began while they were working together at their previous company. Gilad, as CTO and co-founder, developed the algorithms and technology, while Noa was the VP of Product and the first employee. Through that experience, and after speaking with many companies, they gained a deep understanding of the problems organizations face when implementing Artificial Intelligence and ML.
The most important problem preventing widespread AI adoption is a lack of trust in how well machine learning models actually work. Companies pour money into AI infrastructure, data science teams, and more, yet only 35% of businesses globally have implemented AI products. In the end, people simply aren't confident that AI can deliver what the business needs.
Why Don't We Trust AI?
The lack of trust in AI often stems from the complexity of the underlying ML models. The more complex a model, the harder it is to validate its accuracy from development through production.
On the tech side, there is a constant struggle between data scientists and their ML models. When data scientists build complex models, they often have, and can offer, only minimal visibility into a model's risk and accuracy. Moreover, they have limited tools for building trust and confidence in their models.
Many have turned to free open-source explainability tools like SHAP and LIME, which have their place in certain use cases but only help up to a point. Beyond that, manual validation is required, and it unfortunately falls short of a comprehensive assessment and can cause substantial delays. Data science managers, in turn, struggle to get their models approved because of this limited visibility, further undermining trust in those models.
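To make this concrete, here is a minimal sketch of the kind of open-source workflow described above, using SHAP on a tabular model. The dataset and model choices are illustrative assumptions on our part, not anything specific to Citrusˣ:

import shap
import xgboost
from sklearn.model_selection import train_test_split

# Train a simple gradient-boosted classifier on a sample tabular dataset.
X, y = shap.datasets.adult()  # census income data bundled with shap
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = xgboost.XGBClassifier().fit(X_train, y_train)

# TreeExplainer attributes each prediction to the input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Explain a single prediction, or summarize behavior across the test set.
# (In a notebook, call shap.initjs() first to render the force plot.)
shap.force_plot(explainer.expected_value, shap_values[0], X_test.iloc[0])
shap.summary_plot(shap_values, X_test)

Attributions like these answer "which features mattered for this prediction?", but they say nothing about whether the model is robust, fair, or drifting in production, and that is where the manual effort begins.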
Non-tech stakeholders, who are crucial for successful AI implementation, also struggle to understand and trust the models. When a model is intricate and its decisions are hard to explain, the risk of lawsuits, financial setbacks, and regulatory trouble rises, because every decision must be accounted for. These are exactly the teams that need clear explanations, and open-source solutions give them very little.
And as decisions grow more significant, demand more human engagement to formulate and implement, and carry more business impact, the level of trust required escalates accordingly.
Our Solution
With all of these issues in mind, Gilad and Noa wanted Citrusˣ to bring explainability and transparency to the black box, giving everyone an x-ray view of their models. Drawing on more than 20 years of experience developing AI and ML algorithms and tools, Gilad created a unique solution to these problems, built on proprietary technology.
Citrusˣ’s holistic solution goes beyond explainability, enabling governance of AI/ML models throughout the entire development and deployment cycle. It gives users close control while mitigating the most common risks that delay or prevent models from reaching production. And once a model is in production, the platform lets you monitor and evaluate it in real time.
A Platform for Everyone
The solution is model agnostic: it supports any supervised model built on structured data, regardless of the specific model type, and it can be installed on-premise. Whatever your model’s use case, Citrusˣ can help you mitigate risks, ensure fairness, explain outcomes, compare models, increase AI transparency, meet regulatory requirements, and more.
At its core, Citrusˣ aims to bridge the gap between the diverse stakeholders in an organization's AI/ML lifecycle. Its functionality extends across the spectrum of roles, offering a triple usability approach.
Beyond the data science team, Citrusˣ offers a robust collaborative solution for evaluation, management, and control roles. Risk officers and model risk managers (MRMs) can verify a model's integrity and ensure responsible usage, while decision-makers benefit from visibility and reports that explain the variables behind time-sensitive decisions.
The Core of Citrusˣ
In the realm of model development and deployment, achieving the intended value often proves difficult. The majority of models never make it into production, hindered by manual validation processes, complexity concerns, and the necessity of adhering to regulatory requirements.
Citrusˣ emerged as a game-changer, cutting the journey to production by a remarkable 82%. Its efficiency shows up as fewer errors and better outcomes, and it integrates seamlessly: the platform is agnostic to model and data type, sets up on-premise, and supports a variety of infrastructures.
The platform also delivers substantial cost savings, helping organizations identify issues early, steer clear of potential lawsuits and regulatory fines, and shorten time-to-market. Additionally, Citrusˣ enables organizations to stand out, with real-time explainability and certainty, vigilant monitoring with alerts, and a clear path toward product excellence.
What Have We Done and What Does the Future Hold
Gilad and Noa handpicked the team to find the most experienced, out-of-the-box thinkers who could handle anything and everything while building the Citrusˣ solution. Our team of professionals has hands-on experience across many aspects of machine learning and AI, at companies ranging from small startups to large corporations, helping us see problems and solutions from every perspective.
Citrusˣ is working to become the one place that can validate and verify AI models, including third-party models, for use in the real world. With a focus on model validation, governance, and monitoring, our mission is clear: to ensure robustness, fairness, and explainability for AI models. We're determined to stand as the leading certifier for responsible and accurate AI.