FSE 2025 Preview

The ACM International Conference on the Foundations of Software Engineering (FSE) is one of the leading venues for software engineering research, targeting researchers, practitioners, and stakeholders. We're excited to share that several of our recent papers were accepted and will be presented at the main conference this year (a follow-up post will highlight the workshop papers). In this post, we briefly summarize the problems explored, provide an overview of our research findings, and discuss implications from three papers appearing in the FSE SEET and Tool Demonstrations tracks:

How do Software Engineering Candidates Prepare for Technical Interviews? [FSE SEET]

Brian Bell, Teresa Thomas, Sang Won Lee, Chris Brown

  • 🔍 Problem: Current tech hiring interview processes are challenging, difficult to prepare for, and rarely covered in computing curricula and education.

  • 🧪 Study: We surveyed 131 candidates actively pursuing SE-related roles to understand how they prepare for technical interviews.

  • 📊 Findings: Candidates use varied tools to prepare (mostly YouTube and LeetCode), but rarely train in authentic settings (with coding, communication, and an audience) and lack support in their education, leading to increased anxiety and unpreparedness. However, candidates who practice with others (i.e., mock interviews) feel significantly more prepared than those who train alone.

  • 💡 Implication: We provide implications for how candidates, employers, tech interview preparation resources, and CS education can enhance students' preparedness for technical interviews, a process necessary to obtain employment in the tech industry.

DevCoach: Supporting Students Learning the Software Development Life Cycle with a Generative AI powered Multi-Agent System [FSE SEET]

Tianjia Wang, Matthew Trimble, Chris Brown

  • 🔍 Problem: The software development lifecycle (SDLC) is critical for developing and maintaining software systems, yet challenging to incorporate effectively within learning environments. Recent work has explored leveraging generative AI to enhance SE education, but how effectively can it support learning and practical experiences for students learning SDLC concepts?

  • 🧪 Study: We implemented DevCoach, a multi-agent system consisting of AI-powered personas representing roles on SE teams to help students learn and complete tasks related to the SDLC (e.g., requirements, design, implementation, testing, and review). We evaluated our tool in a user study with 20 students across DevCoach and baseline study settings. (A minimal sketch of this kind of role-based multi-agent setup follows this list.)

  • 📊 Findings: Our results show that DevCoach can enhance student learning gains, improve students' completion of SDLC activities, and strengthen perceived aspects of learning environments grounded in the Community of Inquiry framework (social, cognitive, and teaching presence) [Garrison1999].

  • 💡 Implications: We discuss the potential of generative AI and multi-agent systems to advance SE education by offering personalized and collaborative experiences that help learners train on other software development concepts.
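To make the multi-agent idea concrete, here is a minimal, hypothetical sketch of a role-based persona loop over SDLC stages. It is not DevCoach's actual implementation; the names (Persona, call_llm, SDLC_STAGES, run_session) are illustrative, and the LLM call is stubbed out.

```python
# Illustrative sketch only: one way a role-based multi-agent loop over SDLC
# stages could be structured. Not DevCoach's actual code.
from dataclasses import dataclass

SDLC_STAGES = ["requirements", "design", "implementation", "testing", "review"]


@dataclass
class Persona:
    role: str          # e.g., "product owner", "architect", "QA engineer"
    instructions: str  # system prompt describing how this persona coaches

    def respond(self, stage: str, student_input: str) -> str:
        # Build a role-conditioned prompt and hand it to the (stubbed) LLM.
        prompt = (
            f"You are a {self.role} on a software team.\n"
            f"{self.instructions}\n"
            f"Current SDLC stage: {stage}\n"
            f"Student submission: {student_input}"
        )
        return call_llm(prompt)


def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call (e.g., a chat-completion request)."""
    return f"[stubbed feedback for a prompt of {len(prompt)} characters]"


def run_session(personas: dict, submissions: dict) -> dict:
    """Route each stage's student submission to the persona assigned to that stage."""
    return {
        stage: personas[stage].respond(stage, submissions.get(stage, ""))
        for stage in SDLC_STAGES
    }
```

In a real system, call_llm would wrap an LLM provider's chat API, and each persona's instructions would encode the coaching behavior for its SDLC role.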

AutoPyDep: A Recommendation System for Python Dependency Management Utilizing Graph-Based Analytics [FSE Tool Demo]

Dibyendu Brinto Bose, Travis Chan, Matthew Trimble, Chris Brown

  • 🔍 Problem: Managing software dependencies is increasingly complex in modern software development, particularly in popular programming languages such as Python [Niels2024], causing frustration for programmers across varied domains.

  • 🧪 Study: We implemented AutoPyDep, a graph-based recommendation system that supports Python dependency management through features such as centrality analysis of dependencies, visualizations of library interdependencies, and predictions of the nature and timing of releases based on historical data. We conducted a preliminary evaluation to assess the prediction capabilities of our tool. (A small sketch of graph-based dependency analysis follows this list.)

  • 📊 Findings: On a collected dataset of Python release notes, AutoPyDep accurately predicted release types (New Features (F1 = 0.95), Bug Fixes (F1 = 0.94), Security (F1 = 0.92), and Performance (F1 = 0.89)) and release dates (mean error ~ 2 months).

  • 💡 Implication: We briefly discuss implications and future work to enhance AutoPyDep and future dependency management systems.
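As a rough illustration of the graph-based analytics AutoPyDep builds on, the sketch below constructs a small dependency graph and ranks packages by in-degree centrality using networkx. The edge list is made up for the example, and this is not AutoPyDep's actual code.

```python
# Illustrative sketch only: centrality analysis over a toy Python dependency
# graph, in the spirit of AutoPyDep's graph-based analytics (not its actual code).
import networkx as nx

# Hypothetical (dependent, dependency) edges for the example.
edges = [
    ("myapp", "pandas"), ("myapp", "requests"),
    ("pandas", "numpy"), ("pandas", "python-dateutil"),
    ("requests", "urllib3"), ("requests", "certifi"),
]

G = nx.DiGraph(edges)

# In-degree centrality: packages that many others depend on score higher,
# flagging the libraries whose releases most deserve attention.
centrality = nx.in_degree_centrality(G)
for package, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{package:20s} {score:.2f}")
```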

We look forward to sharing these efforts and engaging with the research community at FSE 2025. We hope these papers, which reflect ongoing threads in our overall research, provide useful insights, and we welcome discussions, critiques, and collaborations that extend them further. We hope to see you in Trondheim, Norway next week! Also stay tuned for an upcoming post on our contributions to several of the FSE workshops.


References

[Garrison1999] Garrison, D. Randy, Terry Anderson, and Walter Archer. "Critical inquiry in a text-based environment: Computer conferencing in higher education." The Internet and Higher Education 2.2-3 (1999): 87-105.

[Niels2024] Niels Cautaerts. "Python dependency management is a dumpster fire." 2024. https://nielscautaerts.xyz/python-dependency-management-is-a-dumpster-fire.html
