This article proposes a process through which a finance practitioner's knowledge interacts with artificial intelligence (AI) models. AI models are widely applied, but how these models learn, or whether they learn the right things, is not easily unveiled. Extant studies, especially those on neural networks, have attempted to extract reliable rules/features from AI models. However, if these models make mistakes, the extracted rules/features may lead the decision maker to form paradoxical beliefs. Therefore, extracted rules/features should be justified against the practitioner's prior beliefs, and vice versa. That is, given these extracted rules/features, a practitioner may need either to update his or her beliefs or to disregard the AI models. This study sets up a finance demonstration of the proposed process. The proposed guide exhibits an abductive-reasoning effect.