AI Summary
“This survey examines how to build interpretable surrogate models that reduce computational costs while maintaining explainability in complex system simulations. As AI systems become decision-critical across scientific and engineering domains, understanding model behavior through explainable AI techniques is essential for trustworthy deployment and regulatory compliance.”
Black-box simulators need interpretable surrogates to reveal how inputs drive outputs.
Read full article on ArXiv CS.AI


