Kick-off Event

On September 22, 2023, the kick-off event of the project "DigitalisierungsDiskurse - DiDi" took place at the Conti-Campus (Königsworther Platz 1, 30167 Hannover). Below you will find a few impressions from the meeting:


At our kick-off event, we had the pleasure of getting to know each other in person and having a first exchange. On 21 September, some of the participants had already had the opportunity to experience an innovative format of science communication at the science slam "Virtuality as a way of life", part of the innovercity project of the Hanover universities, which offered inspiring perspectives on the challenges of digitalisation. On 22 September, under the heading "Projects & Friends", the participants presented work results and new ideas that had emerged from the participating research projects. Talks on the legal handling of phenomena of digitalisation - such as automated decision-making and artificial intelligence - were complemented by contributions from philosophy, sociology and computer science.

Our keynote speaker Katja Anclam, after being welcomed by Prof. Dr. Margrit Seckelmann, shared her experiences from the project "KIDD - AI in the Service of Diversity". The KIDD process developed in the project helps organizations respond to the challenges posed by algorithmic bias by shaping the introduction of algorithms in an organization in the interest of diversity. Central to this is the discursive participation of stakeholders in a diversity panel, in which biases or disadvantages can be discovered and addressed early. Ms. Anclam reported on both the substantive work on the KIDD process and the interest shown by the public and the companies involved in the project, and demonstrated how practice-oriented research results on human-centered digitization can be obtained and immediately put to use.

In the first contribution from the collaborating projects, Filip Paspalj (University of Vienna) presented results of his work in the Digitize! project, in which he examined the copyright privilege for text and data mining for scientific purposes in Austria. One outcome of this work was scientific advice to the legislator in the form of a statement on the Austrian copyright amendment act implementing Directive (EU) 2019/790. The DiDi participants then discussed the legal situation regarding text and data mining for scientific purposes in Germany and Austria as well as the opportunities this regulation creates for social science research.

Katharina Lippke (Chair of Civil Law, Labour and Business Law) and Nelli Schlee (Institute of Legal Informatics) from the PhD program "Responsible Artificial Intelligence in the Digital Society" gave an insight into their work on algorithmic decision-making in hiring from the perspective of current anti-discrimination law and of data protection rules on the transparency of automated decisions. They identified the risk of discrimination, combined with the particular difficulty applicants face in recognizing it, as one key challenge posed by the introduction of artificial intelligence in this area. As possible legal solutions, Lippke and Schlee suggested changing the rules on the burden of proof in the AGG (the German General Equal Treatment Act) and a new distribution of risk where automated decision-making lacks traceability.

The following contributions underscored the project’s interdisciplinarity. First, Justus Rahn from the Institute of Sociology at LUH presented a paper on the discursive negotiation of algorithm implementation in the Austrian Employment Service (AMS) (Braunsmann, K., Gall, K., & Rahn, F. J. (2022). Discourse Strategies of Implementing Algorithmic Decision Support Systems: The Case of the Austrian Employment Service. Historical Social Research, 47(3), 171-201). Using sociological discourse analysis, the paper investigated the public discourse surrounding the introduction and use of an algorithm for forecasting labour market opportunities at the AMS. According to the authors, the discourse about the algorithm was intertwined with discourses about efficiency and social assistance. In addition, the vagueness and changeability of the terms and images used to describe the algorithm, as well as a focus on the integration of the algorithm into the organization of the AMS, contributed to shielding the algorithm itself from criticism.

Jannik Zeiser (Institute of Philosophy) and Jan Horstmann (Institute of Legal Informatics), building on their work in the BIAS project, highlighted different facets of responsibility in automated decision-making. With the question "Who decides?", Zeiser directed attention to the attributability of decisions, which he contrasted in particular with the aspect of accountability. Attributability asks whether a decision reflects a value judgment that can ultimately be attributed to the acting human being. For decisions for which, on legal or ethical grounds, a human being should be responsible, artificial intelligence challenges attributability by introducing implicit value judgments into the decision-making process; the acting human, Zeiser noted, is not always aware of such judgments. Horstmann followed up on this work by classifying and critiquing the rules on automated decisions enshrined in EU data protection law against the facets of responsibility identified by Zeiser. Although the legislator attempted to ensure responsibility, the regulations do not cover all of its facets. In combination with other GDPR provisions and the proposed AI regulation, however, there is potential for a stronger safeguarding of responsibility.

In the second block of the day, Luca Deck (University of Bayreuth/Fraunhofer Institute for Applied Information Technology) presented a review study in progress on how research assesses the relationship between explainable artificial intelligence (XAI) and fairness. He showed that there is broad agreement that explainability is generally conducive to fairness. On closer inspection, however, the relationship turns out to be complex, mapping roughly along the dimensions of formal fairness criteria and human (subjective) perceptions of fairness. The concept of fairness is itself ambiguous: one might focus, for example, on procedural, often formal, fairness or on fair outcomes. Generating explanations of an AI-generated output, which is itself subject to limitations, therefore cannot guarantee fairness; rather, it can support human judgment in implementing and monitoring the fairness criteria appropriate for a given use case.
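To make the notion of a formal fairness criterion concrete, the following sketch computes one common example, the demographic parity gap, i.e. the difference in positive-decision rates between two groups. It is purely illustrative: the metric choice, function names and data are our own example and are not taken from the study presented.

```python
# Toy illustration of a formal fairness criterion: demographic parity.
# All data here is invented for demonstration purposes.

def positive_rate(decisions, groups, group):
    """Share of positive decisions received by one group."""
    selected = [d for d, g in zip(decisions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between groups A and B."""
    return abs(positive_rate(decisions, groups, "A") -
               positive_rate(decisions, groups, "B"))

# 1 = hired / approved, 0 = rejected
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(round(gap, 2))  # group A: 3/4 positive, group B: 1/4 -> gap 0.5
```

A gap of zero would satisfy this particular criterion, yet, as discussed above, such formal measures capture only one dimension of fairness and say nothing about subjective perceptions of it.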

Kristen Scott (KU Leuven) presented her ongoing work within the NoBias project. In collaboration with the BBC, she is using Natural Language Processing (NLP) tools for the qualitative analysis of media content. As the selection of displayed and recommended content on BBC platforms is largely driven by journalistic curation, the aim is to explore how the impact of new technologies such as recommendation algorithms can be monitored. The tool Scott uses provides the editorial team with an interactive visualization of how certain topics are framed, based, among other things, on measuring the frequency of word pairs and triplets (so-called bigrams and trigrams). The tools developed in this process could help to understand how the BBC can better fulfil its public service purposes in terms of, for example, impartial reporting and the representation of the diversity of British regions and nations.
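The bigram and trigram frequencies mentioned above are straightforward to compute. The following minimal Python sketch illustrates the underlying idea only; it is not the tool Scott presented, and the sample text and function names are invented:

```python
# Minimal n-gram frequency count of the kind used in frame analysis.
# The sample text is invented for illustration.
from collections import Counter

def ngram_counts(text, n):
    """Count contiguous sequences of n lowercase word tokens."""
    tokens = text.lower().split()
    ngrams = zip(*(tokens[i:] for i in range(n)))
    return Counter(ngrams)

sample = ("public service broadcasting serves the public interest "
          "public service values guide editorial curation")

bigrams = ngram_counts(sample, 2)   # ("public", "service") occurs twice
print(bigrams.most_common(3))
```

Comparing such frequency profiles across outlets, time periods or topics is one simple way of making differences in framing visible to an editorial team.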

Boris Kandov (University of Vienna), working in the project RAP - Legal Aspects of the Platform Economy and Platform Society, devoted his contribution to regulatory approaches for algorithms on online platforms in the Digital Services Act (DSA). The DSA provides for transparency obligations for content recommendation algorithms on platforms. Kandov emphasized the challenge of providing information about algorithmic recommender systems in the clear, simple, understandable, user-friendly and unambiguous language required by the DSA. He also discussed the obligation to assess and mitigate systemic risks, which applies to both recommender systems and content moderation systems. Enforcement of the DSA occurs through a wide range of tools. In addition to monitoring by the European Commission, these include, for example, data access rights for the Digital Services Coordinators and support from the newly established European Centre for Algorithmic Transparency (ECAT).

Some of the questions raised in the presentations were explored in greater depth in the final discussion, with a view to sketching the next steps in the research project. In particular, there was animated discussion about how people can recognize whether they are being treated fairly, or possibly even unlawfully discriminated against, in automated decision-making involving artificial intelligence. This would first require a basic understanding of which forms of automated decision-making systems are already in use and how they work. It was emphasized that conceptual clarity is particularly important in view of both the technical complexity and the abundance of fashionable buzzwords in this field. The importance of the data from which the algorithms are ultimately built was also pointed out. Finally, it was agreed that society at large must be able to discuss which values artificial intelligence conveys and whether technical development really brings socially desired progress. In this respect, the DiDi project will have the future task of clarifying terminology on the basis of the current state of research and of raising public awareness of the significance and risks of artificial intelligence.

Programme Schedule

  • Thursday, September 21, 2023
    Time Programme
    7.00-10.00 p.m.

    Science Slam "Virtualität als Lebensform"
    Auditorium "aufhof",
    Osterstr. 13, 30159 Hannover


  • Friday, September 22, 2023
    Time Programme
    9.00-9.15 a.m. Arrival, Coffee/Tea
    9.15-9.30 a.m. Welcome Address
    Prof. Dr. Margrit Seckelmann, M.A.,
    Leibniz Universität Hannover
    9.30-9.45 a.m. Introduction of the project DigitalisierungsDiskurse (DiDi)
    Marlene Delventhal & Jan Horstmann,
    Leibniz Universität Hannover
    9.45-10.30 a.m. Wenn KIs gendern… Digitalisierungsdiskurse zwischen Wissenschaft, Zivilgesellschaft und Unternehmen im Forschungsprojekt KIDD - Künstliche Intelligenz im Dienste der Diversität
    Katja Anclam, M.A.,
    Institut für Gutes Leben e.V., Berlin
    10.30-10.45 a.m. Coffee break


    Projects & Friends I:

    Time Programme
    10.45-11.05 a.m. Die Privilegierung von Text- und Data-Mining für wissenschaftliche Zwecke in Österreich
    Filip Paspalj,
    Universität Wien
    11.05-11.35 a.m. Algorithmische Entscheidungsfindung im Kontext von Einstellungsverfahren und Anti-Diskriminierungsrecht
    Katharina Lippke & Nelli Schlee,
    Leibniz Universität Hannover
    11.35-11.55 a.m. Diskursive Verhandlung von Algorithmeneinführung im Arbeitsmarktservice Österreich
    Justus Rahn & Korbinian Gall,
    Leibniz Universität Hannover
    11.55 a.m.-12.25 p.m. Wer trifft die Entscheidung? Algorithmen und Facetten von menschlicher Verantwortung im Lichte von Philosophie und Recht
    Jannik Zeiser & Jan Horstmann,
    Leibniz Universität Hannover
    12.25-1.30 p.m. Lunch


    Projects & Friends II:

    Time Programme
    1.30-2.00 p.m. Overcoming Intuition: A Critical Survey on the Multidimensional Relationship between Explainable AI and Fairness
    Luca Deck,
    Universität Bayreuth
    2.00-2.20 p.m. Using Concurrency for Frame Analysis by Gender and non-UK Region in BBC News
    Kristen Scott,
    KU Leuven
    2.20-2.40 p.m. Die Regelung von Algorithmen auf Online-Plattformen im Digital Services Act
    Boris Kandov,
    Universität Wien
    2.40-2.45 p.m. Coffee break
    2.45-3.45 p.m. Interactive development of topics
    3.45-4.30 p.m. Presentation of results and final discussion: Quo vadis, DiDi?
    4.30-5.00 p.m. Wrap-Up & Goodbye


If you have any questions about the event, please feel free to contact us!