REFSQ 2026
Mon 23 - Thu 26 March 2026, Poznań, Poland

Special Theme: Trustworthy and Ethical Systems via Requirements Engineering

Information systems and software engineering face many challenges today, driven by continuous and rapid developments in AI. In our view, this evolution is unstoppable, and we must make the world ready for it. As usual, however, technology evolves before the methods to develop it responsibly are in place. The Requirements Engineering community is attentive to these trends and to the needs of society, and REFSQ is one of the conferences responsible for keeping it so. This year, we want to continue this tradition by proposing the special theme Trustworthy and Ethical Systems via Requirements Engineering. We cannot expect novel systems to lead to trustworthy and ethical results if we do not endow requirements engineers, system designers, and developers with proper approaches that focus on trust and ethics from the early stages of the development cycle. We therefore solicit contributions in this direction, emphasizing the following aspects:

  • Correctness and ethics should go hand in hand with requirements elicitation, analysis, negotiation, monitoring, and assessment.
  • Trust is paramount to creating a society that relies on machines for crucial processes and aspects of people’s lives.
  • Providing people with trustworthy, ethical, and readily available information about novel technologies, so that they can choose if and how to use them, is at the essence of a safe and happy world.

This program is tentative and subject to change.


Tue 24 Mar

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

08:00 - 09:00
Registration (Research Track) at CW 3
09:00 - 10:30
09:00
15m
Day opening
Opening
Research Track
Sylwia Kopczyńska Poznan University of Technology
09:15
75m
Keynote
Keynote 1
Research Track
Tony Gorschek Blekinge Institute of Technology / DocEngineering
10:30 - 11:00
Coffee break (Catering) at CW 053
11:00 - 11:15
Poster Pitches (Posters & Tools) at CW 3
12:25 - 14:00
Lunch / Posters (Catering) at CW 053
14:00 - 15:40
Trustworthiness in AI and Information Systems (Research Track) at CW 3
14:00
30m
Technical design
Fairness as a First-Class Requirement: A Fairness Hazard Analysis Approach to Socio-Technical Processes (Technical Design Paper)
Research Track
Giovanna Broccia ISTI-CNR, FMT Lab, Lucio Lelii , Roberto Cirillo , Dario Di Dario University of Salerno, Samuel Fricker FHNW, Fabio Palomba University of Salerno, Giorgio Spagnolo ISTI-CNR, FMT Lab, Alessio Ferrari CNR-ISTI
14:30
30m
Scientific evaluation
Supporting Stakeholder Requirements Expression with LLM Revisions: An Empirical Evaluation (Scientific Evaluation Paper)
Research Track
Michael Mircea , Emre Gevrek , Elisa Schmid Leibniz Universität Hannover, Kurt Schneider Leibniz Universität Hannover, Software Engineering Group
15:00
20m
Research preview
Specifying and Validating Fairness & Transparency Requirements for AI-Based Social Benefit Allocation in Digital Government (Research Preview Paper)
Research Track
Amanda Aline Figueiredo Carvalho Vicenzi , José Siqueira de Cerqueira Tampere University, Edna Dias Canedo Computer Science Department - University of Brasília, Pekka Abrahamsson University of Jyväskylä
15:20
20m
Research preview
Embedding Normative Requirements in Fuzzy Logic (Research Preview Paper)
Research Track
Ziba Assadi , Paola Inverardi Gran Sasso Science Institute (GSSI)
14:00 - 15:30
LLM Use in RE (Research Track) at CW 8
14:00
30m
Scientific evaluation
Opportunities and Limitations of GenAI in RE: Viewpoints from Practice (Scientific Evaluation Paper)
Research Track
Anne Hess Technical University of Applied Sciences Würzburg-Schweinfurt, Andreas Vogelsang paluno – The Ruhr Institute for Software Technology, University of Duisburg-Essen, Xavier Franch Universitat Politècnica de Catalunya, Andrea Herrmann Herrmann & Ehrlich, Sylwia Kopczyńska Poznan University of Technology, Alexander Rachmann Hochschule Niederrhein
14:30
30m
Scientific evaluation
A Comparative Study of Large and Small Language Models for Conceptual Model Extraction (Scientific Evaluation Paper)
Research Track
Cheng Yi Chou , Fatma Başak Aydemir Utrecht University, Fabiano Dalpiaz Utrecht University
15:00
30m
Scientific evaluation
From Online User Feedback to Requirements: Evaluating Large Language Models for Classification and Specification Tasks (Scientific Evaluation Paper)
Research Track
Manjeshwar Mallaya , Alessio Ferrari CNR-ISTI, Mohammad Amin Zadenoori ISTI-CNR, Jacek Dąbrowski Lero - the Science Foundation Ireland Research Centre for Software
Pre-print
15:40 - 16:10
Coffee break / Posters (Catering) at CW 053
16:10 - 17:30
Panel: Trustworthy and Ethical Systems via Requirements Engineering (Research Track) at CW 3

Thu 26 Mar

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

09:00 - 10:15
09:00
75m
Keynote
Keynote 2
Research Track
Pınar Yolum Utrecht University
10:15 - 10:45
Coffee break (Catering) at CW 053
10:45 - 12:25
Requirements Specification and Privacy by Design (Research Track) at CW 3
10:45
30m
Technical design
An Industry-Driven Template for the Documentation of Non-Functional Requirements (Technical Design Paper)
Research Track
Sabine Molenaar Utrecht University, Fabiano Dalpiaz Utrecht University
11:15
30m
Technical design
FeClustRE: Hierarchical Clustering and Semantic Tagging of App Features from User Reviews (Technical Design Paper)
Research Track
Max Tiessler Universitat Politècnica de Catalunya, Quim Motger Universitat Politècnica de Catalunya
Pre-print
11:45
20m
Research preview
Eliciting and Ingraining Cultural Elements in Digital Information Systems with CEFIS (Research Preview Paper)
Research Track
12:05
20m
Research preview
Towards a Goal-Centric Assessment of Requirements Engineering Methods for Privacy by Design (Research Preview Paper)
Research Track
Oleksandr Kosenkov Blekinge Institute of Technology, Ehsan Zabardast Nordea / Blekinge Institute of Technology, Jannik Fischbach Netlight Consulting GmbH and fortiss GmbH, Tony Gorschek Blekinge Institute of Technology / DocEngineering, Daniel Mendez Blekinge Institute of Technology and fortiss
10:45 - 12:05
Formal Methods (Research Track) at CW 8
10:45
30m
Technical design
Provably Relevant HAL Interface Requirements for Embedded Systems (Technical Design Paper)
Research Track
Manuel Bentele University of Freiburg, Andreas Podelski University of Freiburg, Axel Sikora , Bernd Westphal German Aerospace Center (DLR)
11:15
30m
Technical design
A practical and complete method for detecting rt-inconsistencies in real-time requirements (Technical Design Paper)
Research Track
Nico Hauff University of Freiburg, Elisabeth Henkel University Freiburg, Elisabeth Fünfgeld , Vincent Langenfeld University of Freiburg, Andreas Podelski University of Freiburg
11:45
20m
Research preview
Automata-Represented Requirements in HanforPL (Research Preview Paper)
Research Track
Tobias Kolzer Albert-Ludwigs-Universitaet Freiburg, Vincent Langenfeld University of Freiburg, Nico Hauff University of Freiburg, Elisabeth Henkel University Freiburg, Andreas Podelski University of Freiburg
12:05 - 12:20
Best Poster Presentation (Posters & Tools) at CW 8
12:25 - 14:00
14:00 - 15:30
Explainability (Research Track) at CW 3
14:00
30m
Scientific evaluation
Immersive and Enjoyable Explanations - On Distinct Explainability Requirements in Games (Scientific Evaluation Paper)
Research Track
Jakob Droste Leibniz Universität Hannover, Ronja Fuchs Leibniz Universität Hannover, Hannah Deters Leibniz University Hannover, Martin Obaidi Leibniz Universität Hannover, Alexander Dockhorn University of Southern Denmark, SDU, Kurt Schneider Leibniz Universität Hannover, Software Engineering Group
14:30
30m
Scientific evaluation
Misunderstandings by Design: Using Erroneous Tutorials to Induce Mental Model Conflicts and the Need for Explanations (Scientific Evaluation Paper)
Research Track
Jakob Droste Leibniz Universität Hannover, Hannah Deters Leibniz University Hannover, Carolin Kirchhoff , Lukas Nagel Leibniz Universität Hannover, Software Engineering Group, Martin Obaidi Leibniz Universität Hannover, Kurt Schneider Leibniz Universität Hannover, Software Engineering Group
15:00
30m
Scientific evaluation
All Eyes on User Needs: Using Gaze and Pupillometric Measures to Identify Explanation Needs (Scientific Evaluation Paper)
Research Track
Laura Reinhardt Leibniz University Hannover, Hannah Deters Leibniz University Hannover, Jakob Droste Leibniz Universität Hannover, Kurt Schneider Leibniz Universität Hannover, Software Engineering Group
14:00 - 15:30
Software Development (Research Track) at CW 8
14:00
30m
Technical design
A Context-Aware Multi-Agent Approach to Enhancing User Story Management in Agile Software Development (Technical Design Paper)
Research Track
Hoang Khoa Nguyen , Malik Sami Tampere University, Zheying Zhang Tampere University, Pekka Abrahamsson Tampere University
14:30
30m
Scientific evaluation
Security under Pressure: How Agile Teams Experience and Manage Security Requirements (Scientific Evaluation Paper)
Research Track
Dahlia Thaewjaturat , Oksana Kulyk IT University of Copenhagen, Denmark, Elda Paja IT University of Copenhagen
15:00
30m
Scientific evaluation
Understanding Usefulness in Developer Explanations on Stack Overflow (Scientific Evaluation Paper)
Research Track
Martin Obaidi Leibniz Universität Hannover, Kushtrim Qengaj , Hannah Deters Leibniz University Hannover, Jakob Droste Leibniz Universität Hannover, Marc Herrmann Leibniz University Hannover, Kurt Schneider Leibniz Universität Hannover, Software Engineering Group, Jil Klünder University of Applied Sciences | FHDW Hannover
15:30 - 16:00

Unscheduled Events

Not scheduled
Awards
Best Poster Presentation
Research Track

Accepted Papers

All papers listed below were accepted to the Research Track.

  • A Comparative Study of Large and Small Language Models for Conceptual Model Extraction (Scientific Evaluation Paper)
  • A Context-Aware Multi-Agent Approach to Enhancing User Story Management in Agile Software Development (Technical Design Paper)
  • All Eyes on User Needs: Using Gaze and Pupillometric Measures to Identify Explanation Needs (Scientific Evaluation Paper)
  • An Industry-Driven Template for the Documentation of Non-Functional Requirements (Technical Design Paper)
  • A practical and complete method for detecting rt-inconsistencies in real-time requirements (Technical Design Paper)
  • Automata-Represented Requirements in HanforPL (Research Preview Paper)
  • A Visual Formalism for the Specification of Maritime Traffic Scenarios (Technical Design Paper)
  • Eliciting and Ingraining Cultural Elements in Digital Information Systems with CEFIS (Research Preview Paper)
  • Embedding Normative Requirements in Fuzzy Logic (Research Preview Paper)
  • Extending iStar for Synthetic Data Generation and Simulation Modeling for Industry 5.0 (Research Preview Paper)
  • Fairness as a First-Class Requirement: A Fairness Hazard Analysis Approach to Socio-Technical Processes (Technical Design Paper)
  • FeClustRE: Hierarchical Clustering and Semantic Tagging of App Features from User Reviews (Technical Design Paper, pre-print available)
  • From Online User Feedback to Requirements: Evaluating Large Language Models for Classification and Specification Tasks (Scientific Evaluation Paper, pre-print available)
  • Immersive and Enjoyable Explanations - On Distinct Explainability Requirements in Games (Scientific Evaluation Paper)
  • Misunderstandings by Design: Using Erroneous Tutorials to Induce Mental Model Conflicts and the Need for Explanations (Scientific Evaluation Paper)
  • Opportunities and Limitations of GenAI in RE: Viewpoints from Practice (Scientific Evaluation Paper)
  • Provably Relevant HAL Interface Requirements for Embedded Systems (Technical Design Paper)
  • Security under Pressure: How Agile Teams Experience and Manage Security Requirements (Scientific Evaluation Paper)
  • Specifying and Validating Fairness & Transparency Requirements for AI-Based Social Benefit Allocation in Digital Government (Research Preview Paper)
  • Supporting Stakeholder Requirements Expression with LLM Revisions: An Empirical Evaluation (Scientific Evaluation Paper)
  • The Software Engineering Simulations Lab: Agentic AI for RE Quality Simulations (Research Preview Paper)
  • Towards a Goal-Centric Assessment of Requirements Engineering Methods for Privacy by Design (Research Preview Paper)
  • Understanding Usefulness in Developer Explanations on Stack Overflow (Scientific Evaluation Paper)

Call for Papers

We invite submissions along the following categories:

  • Technical design papers (15 pages incl. references) describe the design of new artifacts, i.e., novel solutions for problems relevant to practice and/or significant and theoretically sound improvements of existing solutions. A preliminary validation of the artifacts is also expected.

  • Scientific evaluation papers (15 pages incl. references) investigate existing real-world problems, evaluate existing artifacts implemented in real-world settings, or validate newly designed artifacts, e.g., through case studies, action research, quasi-controlled experiments, simulations, surveys, or secondary studies that clearly synthesize the state of reported evidence in the literature (via systematic literature reviews or mapping studies). Please also refer to the ACM SIGSOFT Empirical Standards for Software Engineering for guidelines and review criteria for each research method: https://github.com/acmsigsoft/EmpiricalStandards

  • Experience report papers (12 pages incl. references) describe retrospective reports on experiences in applying RE techniques in practice, or addressing RE problems in real-world contexts. These papers focus on reporting the experience and give special attention to practical insights, lessons learned, and/or key takeaways and recommendations to the community. Experience reports may also include studies in which the authors interview practitioners about the application of specific RE techniques or about RE problems in practice.

  • Vision papers (8 pages incl. references) state where research in the field should be heading.

  • Research previews (8 pages incl. references) describe well-defined research ideas at an early stage of investigation which may not be fully developed.

Each type of paper has its own review criteria, based on the description above.

Finally, we cordially invite authors to disclose their research artifacts following our open science guidelines. Authors who wish to disclose their artifacts can find further guidance and support under the Open Science Initiative.

Submission, Reviewing and Publication

Contributions must be submitted via EasyChair: https://easychair.org/conferences/?conf=refsq2026

Each submission in the scope of REFSQ will undergo a single-blind review process that will involve at least three members of the program committee. The REFSQ 2026 proceedings will be published in Springer’s LNCS series. Proceedings of previous editions can be found at https://link.springer.com/conference/refsq.

Formatting

All submissions must be formatted according to the Springer LNCS/LNBIP conference proceedings template for LaTeX and Word, available at https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines. As per the guidelines, please remember to include keywords after your abstract.

Furthermore, to facilitate accurate bidding and a better understanding of the papers, each paper submitted to REFSQ 2026 is required to have an abstract structured in exactly four paragraphs with the following content:

  • Context and motivation: situate and motivate your research.

  • Question/problem: formulate the specific question/problem addressed by the paper.

  • Principal ideas/results: summarize the ideas and results described in your paper. State, where appropriate, your research approach and methodology.

  • Contribution: state the main contribution of your paper, by highlighting its added value (e.g., to theory, to practice). Also, state the limitations of your results.

To ensure that the research and artifacts are more accessible and that the REFSQ 2026 Open Science Policy is followed, an explicit Data Availability Statement (similar to the acknowledgments) should be included at the end of each submitted paper. Specifically, authors should either

  • provide details about any material disclosed alongside their submission, such as data, code, or other relevant material, or
  • justify why disclosure is not possible (e.g., due to IP agreements).

Open Science Policy and Competition

As in the previous year, REFSQ 2026 encourages and supports authors in making their research and artifacts more accessible, reproducible, and verifiable by adhering to the Open Science Policy.

Moreover, REFSQ’26 and the Open Research Knowledge Graph (ORKG) organize the second Open Science Competition with two challenges to contribute to the promotion of Open Science in Requirements Engineering. All authors are invited to take up these challenges. Fame, honor, ORKG Awards, and prize money await you - don’t miss this chance!

For further details on the objectives, review procedure, policy, and related guidelines, as well as the competitions, please check the website of the Open Science Track.

Important Dates

  • 10 Oct 2025: Abstract submissions (optional)

  • 17 Oct 2025: Paper submissions

  • 24 Oct 2025: End of Grace Period (for Paper updates)

  • 15 Dec 2025: Author notification

  • 19 Jan 2026: Camera-ready submissions

All dates are AoE.

Contacts

For any questions and clarifications, please contact: r.guizzardi@utwente.nl or joao.araujo@fct.unl.pt.

Each paper category has its own review criteria. We invite authors and reviewers to check the criteria and consider their order of relevance. We also invite authors and reviewers to consider the Open Science initiative.

Technical design papers (15 pages incl. references)

Describe the design of new artifacts, i.e., novel solutions for requirements-related problems or significant improvements of existing solutions. A preliminary evaluation of the artifacts is also expected.

Review Criteria (in order of relevance):

  • Novelty: to what extent is the proposed solution novel with respect to the state-of-the-art? To what extent is related literature considered? To what extent did the authors clarify their contribution? [NOTE: The potential lack of novelty is NOT an argument for rejection, but we expect authors to clearly convey the novelty of their contribution in light of the existing body of knowledge]

  • Potential Impact/Relevance: is the potential impact on research and practice clearly stated? Is the potential impact convincing? Has the proposed solution been preliminarily evaluated in a representative setting?

  • Soundness: has the novel solution been developed according to recognised research methods? Is the preliminary evaluation of the solution sound? Did the authors clearly state the research questions? Are the conclusions of the preliminary evaluation logically derived from the data? Did the authors discuss the limitations of the proposal?

  • Verifiability: did the authors share their software? Did the authors share their data? Did the authors share their material? Did the authors provide guidelines on how to reuse their artifacts and replicate their results? [NOTE: sharing data and software is NOT mandatory, but papers that make an effort in this direction should be adequately rewarded]

  • Presentation: is the paper clearly presented? To what extent can the content of the paper be understood by the general RE public? If highly technical content is presented, did the authors make an effort to also summarise their proposal in an intuitive way?

Scientific evaluation papers (15 pages incl. references)

Investigate existing real-world problems, evaluate existing artifacts implemented in real-world settings, or validate newly designed artifacts, e.g., by means of case studies, experiments, simulations, surveys, systematic literature reviews, mapping studies, or action research. You might want to check the Empirical Standards for guidelines and review criteria for each research strategy at https://github.com/acmsigsoft/EmpiricalStandards.

Review Criteria (in order of relevance):

  • Soundness: has the novel solution been developed according to recognised research methods? Is the research method justified? Is the research method adequate for the problem at hand? Did the authors clearly state the research questions, data collection, and analysis? Are the conclusions of the evaluation logically derived from the data? Did the authors discuss the threats to validity?

  • Potential Impact: is the potential impact on research and practice clearly stated? Is the potential impact convincing? Was the study carried out in a representative setting?

  • Verifiability: did the authors share their software? Did the authors share their data? Did the authors provide guidelines on how to reuse their artifacts and replicate their results? [NOTE: sharing data and software is NOT mandatory, but papers that make an effort in this direction should be adequately rewarded]

  • Novelty: to what extent is the proposed solution novel with respect to the state-of-the-art? To what extent is related literature considered? To what extent did the authors clarify their contribution? [NOTE: The potential lack of novelty is NOT an argument for rejection, but we expect authors to clearly convey the novelty of their contribution in light of the existing body of knowledge (including and especially when submitting replication studies)]

  • Presentation: is the paper clearly presented? To what extent can the content of the paper be understood by the general RE public? If highly technical content is presented, did the authors make an effort to also summarise their study in an intuitive way?

Experience report papers (12 pages incl. references)

Describe retrospective reports on experiences in applying RE techniques in practice, or addressing RE problems in real-world contexts. These papers focus on reporting the experience in a narrative form, and give prominence to the lessons learned by the authors and/or by the participants.

Review Criteria (in order of relevance):

  • Relevance of the Application: is the application context in which the experience is carried out interesting for the RE public? Is the application context sufficiently representative? To what extent is the paper reporting a real-world experience involving practitioners? Is the experience credible?

  • Relevance of Lessons Learned: are the lessons learned sufficiently insightful? Did the authors report convincing evidence, even anecdotal, to justify the lessons learned?

  • Potential for Discussion: will the presentation of the paper raise discussion at the REFSQ conference? To what extent can REFSQ participants take inspiration to develop novel solutions based on the reported experience? To what extent can REFSQ participants take inspiration to perform sound empirical evaluations based on the reported experience?

  • Novelty: is the context of the study in line with the current RE practice? Does the study report on a contemporary problem that RE practitioners and researchers typically face?

  • Presentation: is the application context clearly presented? Are the lessons learned clearly described? To what extent can the content of the paper be understood by the general RE public?

Vision papers (8 pages incl. references)

State where research in the field should be heading.

Review Criteria (in order of relevance):

  • Potential Impact: will the vision impact the future research and practice in RE? Is a roadmap discussed? Is the vision sufficiently broad to affect different subfields of RE? Do the authors discuss both short-term and long-term impacts of their vision?

  • Potential for Discussion: will the presentation of the vision raise the interest of the REFSQ audience? Will the vision raise discussion? Can the vision raise controversial opinions in the audience?

  • Novelty: is the vision sufficiently novel with respect to existing reflections within the REFSQ community? Do the authors clarify the novelty of their vision?

  • Soundness of Arguments: is the vision supported by logical arguments? Are the implications convincing?

  • Presentation: is the vision presented in a compelling way? Is the vision presented in a way that can elicit reflections in the RE community?

Research previews (8 pages incl. references)

Describe well-defined research ideas at an early stage of investigation which may not be fully developed.

Review Criteria (in order of relevance):

  • Novelty: did the research preview make you say “I heard it first at REFSQ!”? Is the idea sufficiently novel with respect to the state-of-the-art? Do the authors discuss related work and the contribution of their study?

  • Soundness of the Research Plan: do the authors present a convincing research plan? Did the authors discuss the limitations and risks of their plan? Is the plan referring to sound research methods? Do the authors clarify their research questions, planned data collection, and data analysis? Did the authors perform a convincing proof-of-concept or preliminary research step?

  • Potential for Discussion: will the presentation of the preview raise the interest of the REFSQ audience? Will the preview raise discussion? Will the audience be able to provide useful feedback to the authors, given the typical background of the REFSQ audience? Can the preview raise controversial opinions in the audience?

  • Presentation: is the paper clearly presented? To what extent can the content of the paper be understood by the general RE public?

Questions? Use the REFSQ Research Track contact form.