Hybrid AI models: combining classical algorithms and neural networks to enhance interpretability
Modern deep learning models achieve high accuracy in classification and forecasting tasks, but the interpretability of their results remains severely limited. This hinders their adoption in critical domains where decisions must be fully transparent. In this paper, we propose a hybrid approach that combines neural-network-based feature extraction with classical interpretable machine learning algorithms. We develop an architecture in which a neural network produces a compact representation of the data, and the final decision is made by an interpretable model such as a decision tree or logistic regression. Experiments on open datasets confirm that the proposed approach improves interpretability while maintaining accuracy comparable to that of end-to-end deep learning models. The results demonstrate the promise of hybrid architectures for domains that require transparency and explainability.
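To make the two-stage pipeline concrete, the following is a minimal sketch assuming a PyTorch feature extractor and scikit-learn interpretable heads. The paper does not name specific frameworks, so the names used here (FeatureExtractor, fit_hybrid, embed_dim) are illustrative assumptions rather than the authors' implementation.

import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier


class FeatureExtractor(nn.Module):
    # Neural network that compresses raw inputs into a compact representation.
    def __init__(self, in_dim, embed_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        return self.net(x)


def fit_hybrid(X, y, in_dim, use_tree=True):
    # Stage 1: embed the data with the neural extractor (shown untrained here;
    # in practice it would first be trained, e.g. with a temporary softmax head).
    extractor = FeatureExtractor(in_dim)
    extractor.eval()
    with torch.no_grad():
        Z = extractor(torch.as_tensor(X, dtype=torch.float32)).numpy()
    # Stage 2: fit an interpretable model on the learned representation.
    head = (DecisionTreeClassifier(max_depth=4) if use_tree
            else LogisticRegression(max_iter=1000))
    head.fit(Z, y)
    return extractor, head


# Usage on synthetic data (a stand-in for the open datasets in the paper):
X = np.random.randn(200, 10).astype(np.float32)
y = (X[:, 0] > 0).astype(int)
extractor, head = fit_hybrid(X, y, in_dim=10)

The key design choice is that only the compact representation reaches the final model, so its decision logic (tree splits or regression coefficients) can be inspected directly.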