ExSS-ATEC Concept
Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommenders, are difficult for users to understand. Algorithms enable the exploitation of rich and varied data sources to support human decision-making and/or to take direct action; however, there are growing concerns about their transparency and accountability, as these processes are typically opaque to the user, for example because they are too technically complex to explain or are protected trade secrets. Transparency and accountability have attracted increasing interest as means toward more effective system training, better reliability, and improved usability. This workshop will provide a venue for exploring issues that arise in designing, developing, and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. In addition, our goal is to focus on approaches to mitigating algorithmic biases that researchers can apply even without access to a given system's inner workings, such as awareness, data provenance, and validation.
Topics of interest include, but are not limited to:
- Is transparency (or explainability) always a good idea? Can transparent algorithms or explanations “hurt” the user experience, and in what circumstances?
- What are the optimal points at which explanations are needed for transparency?
- What are explanations? What should they look like?
- Which models are more transparent while still performing well in terms of speed and accuracy?
- How can we detect biases and discrimination in transparent systems? (See the sketch following this list.)
- What is important in user modeling for system transparency and explanations?
- What are important social aspects in interaction design for system transparency and explanations?
- What metrics can be used to evaluate transparent systems and explanations?
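As a concrete illustration of the kind of black-box bias detection raised above, the sketch below probes a model only through its predictions, comparing favorable-outcome rates across groups via the demographic parity difference and the disparate impact ratio. This is a minimal sketch under stated assumptions, not a prescribed workshop method: the `predict` function, the binary favorable outcome (1), the `group` attribute, and the informal 80% rule threshold are all illustrative choices.

```python
from collections import defaultdict

def audit_demographic_parity(predict, records, group_key):
    """Black-box fairness probe: compare favorable-prediction rates
    across groups using only the model's outputs.

    `predict` is treated as an opaque function (no access to the
    system's inner workings); `records` is a list of dicts and
    `group_key` names the protected attribute (illustrative).
    """
    positives = defaultdict(int)  # favorable predictions per group
    totals = defaultdict(int)     # records seen per group

    for record in records:
        group = record[group_key]
        totals[group] += 1
        if predict(record) == 1:  # assumed binary decision: 1 = favorable
            positives[group] += 1

    rates = {g: positives[g] / totals[g] for g in totals}

    # Demographic parity difference: gap between the highest and
    # lowest favorable-outcome rates; 0 means parity.
    parity_diff = max(rates.values()) - min(rates.values())

    # Disparate impact ratio: min rate / max rate; values below 0.8
    # are often flagged under the informal "80% rule".
    max_rate = max(rates.values())
    impact_ratio = min(rates.values()) / max_rate if max_rate > 0 else 0.0

    return rates, parity_diff, impact_ratio

if __name__ == "__main__":
    # Hypothetical stand-in for an opaque decision system.
    opaque_model = lambda r: 1 if r["score"] > 0.5 else 0
    data = [
        {"group": "A", "score": 0.9}, {"group": "A", "score": 0.7},
        {"group": "B", "score": 0.4}, {"group": "B", "score": 0.6},
    ]
    rates, diff, ratio = audit_demographic_parity(opaque_model, data, "group")
    print(rates, diff, ratio)  # e.g. {'A': 1.0, 'B': 0.5} 0.5 0.5
```

Because the audit consumes only inputs and predictions, it fits the constraint emphasized above: it can be run by researchers who have no access to a system's internals or training data.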