Machine learning solutions have taken an important place in our lives. It is no longer only about performance but also about responsibility.
In recent decades, many AI projects have focused on model efficiency and performance. Results are documented in scientific articles, and the best-performing models are deployed in organizations. Now it is time to add another important part to our AI systems: responsibility. The algorithms are here to stay and are nowadays accessible to everyone through tools like ChatGPT and Copilot, and techniques like prompt engineering. Now comes the more challenging part, which includes moral consultations, ensuring careful commissioning, and informing the stakeholders. Together, these practices contribute to a responsible and ethical AI landscape. In this blog post, I will describe what responsibility means in AI projects and how to include it in projects using six practical steps.
Before I dive into responsible AI (rAI), let me first outline some of the important steps that have been taken in the field of data science. In a previous blog, I wrote about what to learn in Data Science [1], and how data science products can increase revenue, optimize processes, and lower (production) costs. Currently, many of the deployed models are optimized in terms of performance and efficiency. In other words, models should have high accuracy in their predictions and low computational costs. But higher model performance usually comes with the side effect that model complexity gradually increases too. Some models turn into so-called “black box” models. Examples can be found in the fields of image recognition and text mining, where neural networks with hundreds of millions of parameters are trained using specific model architectures. It has become difficult, or even impossible, to understand why such models make particular decisions. Another example is in finance, where many core processes already run on algorithms and machines make decisions on a daily basis. It is crucial that such machine-made decisions can be fact-checked and re-evaluated by humans when required.
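To make the black-box problem concrete, here is a minimal sketch (my own illustration, not part of the original post) of one common way to keep a complex model checkable by humans: permutation importance from scikit-learn. The dataset and model choice are assumptions for demonstration purposes only.

```python
# Sketch: inspecting which features drive a model's predictions with
# permutation importance, one simple way to fact-check a "black box".
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on the test set and measure the drop in accuracy:
# large drops mark features the model actually relies on.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
ranking = result.importances_mean.argsort()[::-1]
for i in ranking[:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```

Techniques like this do not fully open the black box, but they give stakeholders a starting point for the fact-checking and re-evaluation described above.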