Enterprise ML/AI requires audit trails that explain the reasons behind each decision, so model explainability is of paramount importance. This creates a trade-off between a complex deep-learning model and a shallower, more interpretable one. Most deep-learning models are black boxes, defended on accuracy alone rather than explainability, which makes them a poor fit for financial, healthcare, and other regulated applications. Gyrus pairs the production model with complementary explainability models that surface the reasoning behind its decisions, even for multi-layer models.
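Gyrus's own explainability models are not detailed here; as an illustration of the general idea, the sketch below uses permutation importance, a standard model-agnostic technique that treats the model as a black box and measures how much the score drops when each feature is shuffled. The toy model, data, and `accuracy` metric are all hypothetical.

```python
import random

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Explain a black-box model by shuffling one feature at a time:
    the larger the score drop, the more the model relies on that feature."""
    rng = random.Random(seed)
    baseline = metric(y, [model(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(baseline - metric(y, [model(r) for r in X_perm]))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy "black-box" classifier: its decision depends only on feature 0.
model = lambda row: 1 if row[0] > 0.5 else 0
accuracy = lambda y_true, y_pred: sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)

random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]
imps = permutation_importance(model, X, y, accuracy)
# imps[0] is large, imps[1] is 0: the explainer correctly attributes
# the decisions to feature 0 without opening the model up.
```

This is the kind of post-hoc, complementary explanation an auditor can attach to an otherwise opaque model: it reports which inputs actually drove a decision without requiring access to the model internals.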
Bias is one of the most important checks required for any ML/AI algorithm. There have been infamous failures at large companies, in HR screening and in image recognition, caused by skipping bias checks. Bias often starts with human bias in how the datasets are curated. Gyrus runs orthogonal bias checks to address this.
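The specific checks Gyrus runs are not described in the text; as one simple example of what a bias check can look like, the sketch below computes the demographic-parity gap, the difference in positive-prediction rates across groups, on a hypothetical hiring-screen output (the predictions and group labels are invented for illustration).

```python
def demographic_parity_gap(y_pred, groups):
    """Difference in positive-prediction rate between groups.
    A gap near 0 suggests the model treats the groups similarly
    on this axis; a large gap flags the model for review."""
    counts = {}
    for pred, g in zip(y_pred, groups):
        n, pos = counts.get(g, (0, 0))
        counts[g] = (n + 1, pos + (pred == 1))
    rates = {g: pos / n for g, (n, pos) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical hiring screen: group "B" is selected far less often.
preds  = [1, 1, 0, 1, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
# rates: A -> 0.8, B -> 0.2; the 0.6 gap flags a potential bias problem
```

A check like this is orthogonal to the model itself: it inspects outcomes rather than internals, so it catches bias regardless of whether it entered through the data, the labels, or the training process.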
ML/AI models can leak the data used to train them. Models that do not generalize well tend to memorize and expose unique training inputs and outputs, which can mean disclosing proprietary data or data bound by NDAs, such as pricing. Gyrus performs differential-privacy checks and prevents such leaks.
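One common way to quantify this leakage risk (not necessarily the method Gyrus uses) is a membership-inference signal: if a model's per-example loss is much lower on its training records than on unseen records, an attacker can often tell which records were in the training set. The loss values below are invented for illustration.

```python
import statistics

def membership_inference_gap(losses_train, losses_holdout):
    """A simple leakage signal: the gap between the model's average loss
    on held-out data and on training data. A large gap means the model
    memorized its training set, so membership (and hence the data itself)
    is at risk of being inferred."""
    return statistics.mean(losses_holdout) - statistics.mean(losses_train)

# Hypothetical per-example losses from two models on the same task.
overfit_gap = membership_inference_gap(
    losses_train=[0.01, 0.02, 0.01], losses_holdout=[1.2, 0.9, 1.5])
generalized_gap = membership_inference_gap(
    losses_train=[0.40, 0.35, 0.42], losses_holdout=[0.45, 0.38, 0.41])
# The overfit model shows a large gap (high privacy risk); the
# well-generalized model's gap is near zero.
```

A privacy audit would run a check like this before deployment and gate release on the gap staying below a threshold, or retrain with differential-privacy noise (e.g. DP-SGD) when it does not.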
Most ML/AI algorithms today are developed with the sole objective of maximizing accuracy. While that is fine for academic papers and presentations, business targets are often quite different, for example zero false positives even at the cost of overall accuracy, and the modeling approach must change to meet them.
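As a concrete illustration of optimizing for a business target rather than raw accuracy, the sketch below tunes a classifier's decision threshold on validation data so that it produces zero false positives, accepting whatever recall and accuracy result. The scores and labels are hypothetical.

```python
def threshold_for_zero_fp(scores, labels):
    """Choose the lowest decision threshold that yields zero false
    positives on validation data: anything strictly above the
    highest-scoring negative example is never flagged wrongly."""
    max_negative_score = max(s for s, y in zip(scores, labels) if y == 0)
    return max_negative_score + 1e-9  # tiny epsilon to make "strictly above"

# Hypothetical validation scores from a fraud classifier.
scores = [0.10, 0.30, 0.55, 0.60, 0.80, 0.95]
labels = [0,    0,    0,    1,    1,    1]
t = threshold_for_zero_fp(scores, labels)
preds = [1 if s > t else 0 for s in scores]
# false positives = 0 by construction; recall may drop on harder data,
# which is exactly the trade-off the business target demands.
```

The same model can therefore serve very different objectives purely through how its outputs are thresholded, which is why the deployment target, not the benchmark metric, should drive the approach.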