📌 Key facts

- Objective: Annotate and detect ad hominem attacks in media articles using LLMs and Machine Learning, expanding GETT’s media analysis capabilities
- Start date: Flexible, starting now! Applications open immediately
- How to apply: Email your CV, grade report, and a short motivation letter (see below)
💡 Background
The Gender Equality Tech Tool (GETT) is a research-based analysis platform designed to make representation in media measurable. While originally focused on gender-balanced reporting, GETT’s underlying technology and methodology are broadly applicable to analyzing visibility, voice, and representation of different groups, roles, and topics in public communication.
By processing media content—such as articles, interviews, and announcements—from universities, research institutions, companies, and major media outlets, GETT extracts key metrics such as:
- Frequency and context of person mentions
- Quoted statements per person
- Gender distribution across articles and timeframes
- Professional roles and descriptions as mentioned in the media
These insights help organizations in media, science, and business understand and optimize their public presence, storytelling impact, and diversity strategies.
🦾 Who We Are
The Chair for Strategy and Organization focuses on research with impact. This means we do not want to repeat old ideas or base our research solely on work done ten years ago. Instead, we research topics that will shape the future, such as Agile Organizations and Digital Disruption, Blockchain Technology, Creativity and Innovation, Digital Transformation and Business Model Innovation, Diversity, Education (Education Technology and Performance Management), HRTech, Leadership, and Teams. We are always early in noticing trends, technologies, strategies, and organizations that shape the future, which has its ups and downs.
🎯 Goals
We are looking for motivated students or research assistants to expand GETT’s capabilities by analyzing how media articles employ different forms of argumentative tactics—with a focus on detecting ad hominem statements. Your work will directly contribute to the next generation of data-driven editorial analysis.
Your Tasks
- Annotate sentences from news articles for the presence of ad hominem attacks (required). Optionally, you may also annotate other argumentative levels (see Graham’s Hierarchy of Disagreement: refuting the central point, counterargument, name-calling, etc.). News article data is already available!
- Develop and apply both Large Language Model (LLM) prompting approaches and classic Machine Learning (ML) methods to perform ad hominem detection. You must compare the effectiveness of these approaches. Self-proposed, creative methods are also welcome!
- Evaluate your models/approaches using appropriate metrics. Your evaluation must include precision and recall for ad hominem detection.
- Work primarily with German-language media data, but also test your approaches on English-language articles.
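To give a feel for the detection and evaluation tasks, here is a minimal standard-library sketch. The prompt wording and label names are our own assumptions (not a prescribed template for this project), and the toy labels are invented for illustration only:

```python
from typing import List, Tuple

def build_prompt(sentence: str) -> str:
    """Hypothetical zero-shot prompt for an LLM API (e.g., OpenAI or Anthropic).

    The exact wording and the AD_HOMINEM / NOT_AD_HOMINEM label set are
    assumptions; you would tune both during the project.
    """
    return (
        "Does the following sentence contain an ad hominem attack, i.e. an "
        "attack on a person rather than on their argument?\n"
        f"Sentence: {sentence}\n"
        "Answer with exactly one label: AD_HOMINEM or NOT_AD_HOMINEM."
    )

def precision_recall(gold: List[int], pred: List[int]) -> Tuple[float, float]:
    """Precision and recall for the positive (ad hominem = 1) class."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Toy check: 3 true ad hominem sentences; the model finds 2 of them
# plus 1 false alarm, so precision = recall = 2/3.
gold = [1, 1, 0, 0, 1]
pred = [1, 0, 0, 1, 1]
p, r = precision_recall(gold, pred)
```

In practice you would send `build_prompt(...)` to your chosen API, map the returned label to 0/1, and score against the annotations; for the classic ML baseline, libraries such as scikit-learn provide the same metrics via `precision_score` and `recall_score`.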
🎓 Profile
- Strong interest in AI, NLP, and LLMs (e.g., GPT-4, Claude, Llama).
- Experience with Python and machine learning workflows
- Ability to develop and experiment with LLM prompts (e.g., OpenAI, Anthropic APIs).
- Solid analytical and evaluation skills, with an understanding of precision/recall metrics.
- Detail-oriented—data quality and annotation accuracy are critical!
- Bonus: Proficiency in German (helpful for labeling the data)
📚 Further Reading
- Graham’s Hierarchy of Disagreement (pyramid image, levels explained)
- MBIC: A Media Bias Annotation Dataset Including Annotator Characteristics (Spinde et al., 2021)
- ChatGPT Outperforms Crowd-workers for Text-Annotation Tasks (Gilardi et al., 2023)
📝 How to Apply
Send your CV, grade report, and a brief motivation letter explaining your interest in ad hominem detection and how your skills fit this project to:
Joe Yu (Chair for Strategy and Organization)
joe.yu@tum.de
📬 Contact
Joe Yu
joe.yu@tum.de