To better understand how digital media is shaping the relationship between government institutions and the public, the GovDX lab is conducting these projects:
UX Design and Administrative Burdens
For governments around the world, the digitalization of administrative functions, processes, and systems, coupled with a user-centered philosophy, has shown promise in helping agencies design and deploy digital tools built around the needs, preferences, and experiences of end users.
Federal agencies across the U.S., for instance, are leveraging user-centered design methods to design and implement new digital tools and interfaces to improve the delivery of public programs and services. Many of these efforts have been touted as successes in policy and practice, but as public administration scholars, we still have a limited theoretical understanding of the mechanisms that influence outcomes. To date, we have a growing number of cases that illustrate the “what” but very little substantive insight into the “how” or “why.”
Drawing on the study and practice of user experience (UX) design, cognitive and behavioral science, and public management theory, this research develops an original conceptual framework that maps the dimensions of digital user engagement (affective, behavioral, and cognitive) onto the three types of costs identified in the administrative burden literature: learning, compliance, and psychological costs.
The aim is to test the framework empirically and tease out the theoretical mechanisms that can advance our understanding of how digitalization, and interface design more specifically, shape public service outcomes.
This research was awarded the Paul A. Volcker Junior Scholar Grant Award from the American Political Science Association’s Public Administration Section (2024).
Algorithm-Aware Government Communication
How are public agencies leveraging memes, pop culture and ephemeral content to increase engagement on social media, and what are the impacts?
Today, social media platforms such as Facebook, X (formerly Twitter), Instagram, and TikTok represent dominant media environments that provide users with tools to share content, engage with one another, and drive global conversations. As these platforms have evolved, each has developed its own affordances, constraints, and cultural norms.
Of these, the algorithm is perhaps the most influential force shaping each platform’s cultural norms. Social media algorithms shape what content is visible to users and create feedback loops by incentivizing certain types of behavior, content, and engagement. These algorithms have come to govern social media platforms through a strong emphasis on engagement metrics that prioritize, reward, and even penalize specific types of content, which in turn influences what users post and how they engage with one another.
In this way, metrics such as likes, comments, and shares function as a form of currency: engagement generates more engagement, which translates into greater visibility for users and their content.
This suggests that to be effective on social media, public sector organizations must adapt their communication practices to align with a given platform’s norms and the incentives of commercial platform ecosystems.
Such algorithm-aware strategies include adapting content to fit the affordances, constraints, and logics of specific platforms. They also increasingly mean navigating algorithmic pressures that privilege certain types of content over others and operating within a broader online ecosystem and “internet culture” that has its own unique (and often ephemeral) trends, jokes, and even vernacular, i.e., “internet-speak.”
Drawing heavily on the work of media theorists Marshall McLuhan and Neil Postman, this research considers how the modalities and incentives of social media platforms are reshaping government communication and the ways that governments are navigating the paradigm shift that commercial social media platforms have produced in the broader media environment.
Additionally, this work examines the implications of algorithmically driven social media strategies for reputation management, institutional voice, and public perceptions, including social equity.
Using a mixed-methods research design, we explore how government agencies are leveraging algorithm-friendly content such as memes, pop culture references, and trending audio, filters, and hashtags to creatively adapt bureaucratic communication to digital culture in ways that maximize visibility and engagement on social media platforms.
This project includes “Meme-ifying Government: Understanding Public Perceptions of Memetic Communication,” which was funded by the Minerva Center for High Impact Learning’s Innovation Funding for Research & Creative Endeavors (2025).
The Role of CSOs in AI Governance
“Shaping Ethical and Responsible AI: The Role of Civil Society Organizations in AI Governance” examines how civil society organizations (CSOs) in the US influence the development and implementation of AI governance activities and frameworks.
Using Resource Mobilization Theory as a theoretical framework, we identify over 50 CSOs involved in AI governance, including prominent organizations like the Algorithmic Justice League, Partnership on AI, and the Center for Democracy & Technology.
The study uses qualitative methods to investigate two key questions:
- How are civil society organizations helping to shape the governance of AI?
- What unique opportunities and challenges exist in the mobilization of resources for these CSOs in their work on AI governance?
Preliminary findings indicate that CSOs make significant contributions through a number of pathways:
- Development of AI assessment tools and safety standards;
- Monitoring and accountability;
- Influence on industry standards;
- Building technical and policy expertise;
- Public communication, education, and raising awareness;
- Knowledge production/R&D;
- Policy advocacy.
The research also reveals that while financial resources for AI-related initiatives are currently abundant due to the economic power of “Big Tech,” CSOs must navigate this environment with great care given the conditions and potential reputational risks that come with accepting funding from tech companies.
This ongoing research is expected to contribute to our understanding of how civil society is helping to shape the development and deployment of AI technologies through formal and informal governance mechanisms, and to highlight the growing importance of technical expertise, regulatory understanding, and public communication as the AI governance landscape continues to evolve.