As more and more business processes have been digitized and orchestrated through various software systems, leveraging the immense volume of recorded data has become inevitable. In this context, the potential of artificial intelligence (AI) should be exploited to enable the continuous improvement of business processes. Realizing this potential requires going beyond descriptive analysis and developing capabilities to proactively identify undesirable outcomes and define a set of actions to mitigate the associated risks. Advanced machine learning approaches can now generate accurate business process predictions, but their black-box nature prevents their wide adoption and operationalization. Non-transparent methods scarcely explain their reasoning and hardly provide a mechanism for justifying their outcomes. As a result, the applicability of advanced AI methods in predictive process monitoring and analytics is significantly hampered in practice by a lack of trust and confidence. Recently, explainable artificial intelligence (XAI) has re-emerged as a promising research domain that aims to enhance the collaboration between AI-based systems and human users by making the underlying opaque algorithms interpretable. In this keynote, after discussing the necessity of AI for business process management and analytics, we will present recent trends and the relevant technical, social, and organizational considerations for designing explainable predictive process analytics systems.