How Progress Ends: A Software Engineering Perspective

I was recently invited to give a guest lecture for ASPT 6004: AI in History and Its Social and Environmental Implications, a course offered through the Alliance for Social, Political, Ethical, and Cultural Thought (ASPECT) program at Virginia Tech. This interdisciplinary class examines the challenges that come with artificial intelligence (AI) and machine learning (ML) technologies from historical and political perspectives. The assigned reading for the guest lecture was "How Progress Ends: Technology, Innovation, and the Fate of Nations" by Carl Benedikt Frey (Chapters 1-3), a book that examines the history of civilizations to explore how economic and technological progress advances and stagnates. To prepare for the class and align my talk with the reading, I read through the first three chapters of the book---which I found really interesting.

Discussion Overview

The beginning of the discussion focused on my research---including what software engineering is, our work on AI in hiring processes [Vaishampayan2025] and AI for SE education [Wang2025], and general perceptions of AI in software development. We also discussed a variety of other topics: whether AI performance will peak as more online content becomes AI-generated, difficulties with AI policies and usage (e.g., surveillance), reducing bias in AI, the importance of teaching social science concepts to computing students (see this talk from Greg Wilson), accountability for AI decisions, why even increasingly advanced models continue to get common questions wrong (e.g., "how many r's are in 'strawberry'" and "who is the pope"), and the implications for society.

How Progress Ends

In "How Progress Ends", the author provided insights how technologies progress and stagnate---noting many historical examples. We also briefly discussed the students' reading for this week, and how AI could potentially stagnate progress. In addition, I noticed several concepts that could apply to recent literature on AI in software engineering. Two potential issues came to mind.

Social Disruption

The importance of social networks for innovation is no mystery...one of our most important resources is our social network, which acts as our "collective brain." And when networked people are free to explore, they test more technological pathways. [p. 5, Frey2025]

Historical Example 🍻 At the time, many economists expected Prohibition in the United States to increase the productivity and efficiency of workers. However, one negative consequence of banning alcohol was "the disappearance of the saloon, [which] disrupted ordinary people's daily lives and social networks...Skilled workers and craftsmen went there not just to drink but to socialize and exchange ideas. And because these workers were responsible for developing the most inventive contrivances of the era, it should be no surprise that innovation took a hit as saloons across the country were forcibly shut down." [p. 3, Frey2025]. For instance, after Prohibition there was an 18% decline in patents. Frey argues that social networks are critical for innovation, referencing the cofounders of Google and the discovery of mRNA as examples. Thus, while Prohibition may have improved static efficiency (individuals' operational performance), it stagnated dynamic efficiency (the new ideas and collaborations that lead to economic growth and technological progress).

Relevance to SE 💻 Programmers are increasingly integrating AI into software development tasks, replacing human peers in activities such as tech interview preparation, code reviews, and pair programming. However, this could hinder innovation by limiting the exchange of ideas and transfer of knowledge between individuals. For example, prior work shows that peers are the most effective way developers learn about new tools [MurphyHill2011]. Moreover, a recent study shows that humans pair programming with AI are less likely to engage in knowledge transfer than human-human pairs, and they also lack access to insights derived from tacit knowledge, organization- and team-specific context, and domain expertise [Welter2025]. Thus, while generative AI and LLMs can enhance static efficiency, their impact on dynamic efficiency is still an open area of investigation.

Diversity of Thought

The rise of the Chinese state surely encouraged scale, but it came at the price of diversity of thought. [p. 36, Frey2025]

Historical Example 🇨🇳 China was one of the world's early leaders in global innovation and technological progress in the 14th century, yet was rapidly surpassed by Europe in the 15th century. Frey states, "The question of why China, despite its early technological superiority, did not produce an industrial revolution, and even ended up being overtaken by the West...[is] one of the greatest paradoxes of Chinese development" [p. 36, Frey2025]. One inhibitor was a growing lack of diverse thinking, introduced under the Qin and Song dynasties and the Confucian education system. During this time, the selection process for bureaucratic positions was only open to males who could travel to Beijing for the metropolitan exam and pass it (95% failed), and talented individuals were motivated to pursue careers in government instead of other fields. Or, as Frey puts it, "if Galileo had lived in China, he would have been a bureaucrat and not a scientist" [p. 28, Frey2025]. At the same time, "under this system, China did well as long as technological progress was incremental and served the state. Yet this also meant that the Middle Kingdom would have no scientific revolution" [p. 38, Frey2025]. This bureaucratization reinforced a hierarchical and patrimonial structure, discouraging many talented individuals from becoming scientists, inventors, and entrepreneurs.

Relevance to SE 💻 There are a few ways diverse thinking relates to AI in software development today. First, diverse thinking has been shown to motivate innovation in technology. For instance, Frey notes how startups "like OpenAI and Anthropic are now challenging Meta and Google" [p. 16, Frey2025]. However, over-reliance on a few dominant AI models or tools can limit creative approaches to problem-solving and, as more training content becomes AI-generated, may also degrade the future performance of models (a risk often described as model collapse). In addition, bias against individuals from underrepresented backgrounds can inhibit diversity of thought and progress. Diversity in SE teams is critical for success and innovation in software development [Hyrynsalmi2025], yet diverse thinking remains under-prioritized in the tech industry. For example, current hiring processes incur bias against individuals from underrepresented backgrounds, prioritizing those able to devote more time and effort to preparing for interviews [Behroozi2019]. AI-based filtering can also exhibit bias against the large population of candidates without standardized credentials (e.g., elite degrees) and those from underrepresented backgrounds [Armstrong2024]. These examples suggest that when systems reward only one kind of thinking, they optimize for incremental improvement rather than innovation and progress.

These thoughts are my own and based on a brief reading of the first three chapters of this book (mostly Chapter 1). If I ever get to finish it, I hope to share more thoughts on progression and stagnation in software development and research, and to discuss the role SE research could play in preventing the end of progress.


Footnotes and References

[Vaishampayan2025] Swanand Vaishampayan, Hunter Leary, Yoseph Berhanu Alebachew, Louis Hickman, Brent Stevenor, Fletcher Wimbush, Weston Beck, Chris Brown. "Human and LLM-Based Resume Matching: An Observational Study". Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL 2025).

[Wang2025] Tianjia Wang, Matthew Trimble, Chris Brown. "DevCoach: Supporting Students Learning the Software Development Life Cycle with a Generative AI powered Multi-Agent System". Foundations of Software Engineering, Software Engineering Education Track (FSE-SEET 2025).

[Frey2025] Carl Benedikt Frey. "How Progress Ends: Technology, Innovation, and the Fate of Nations". Princeton University Press, 2025.

[MurphyHill2011] Emerson Murphy-Hill and Gail C. Murphy. "Peer interaction effectively, yet infrequently, enables programmers to discover new tools". Proceedings of the ACM 2011 conference on Computer supported cooperative work. 2011.

[Welter2025] Alisa Welter, Niklas Schneider, Tobias Dick, Kallistos Weis, Christof Tinnes, Marvin Wyrich, Sven Apel. "An Empirical Study of Knowledge Transfer in AI Pair Programming".

[Hyrynsalmi2025] Sonja M Hyrynsalmi, Sebastian Baltes, Chris Brown, Rafael Prikladnicki, Gema Rodriguez-Perez, Alexander Serebrenik, Jocelyn Simmonds, Bianca Trinkenreich, Yi Wang, and Grischa Liebel. "Making Software Development More Diverse and Inclusive: Key Themes, Challenges, and Future Directions." ACM Transactions on Software Engineering and Methodology 34, no. 5 (2025): 1-23.

[Behroozi2019] Mahnaz Behroozi, Chris Parnin, and Titus Barik. "Hiring is broken: What do developers say about technical interviews?". In 2019 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pp. 1-9. IEEE, 2019.

[Armstrong2024] Lena Armstrong, Abbey Liu, Stephen MacNeil, and Danaë Metaxa. "The silicon ceiling: Auditing gpt’s race and gender biases in hiring." In Proceedings of the 4th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, pp. 1-18. 2024.
