Designing Human-Centered AI for Social Impact • Using AI to Serve the Public Good, Part One 

d.school instructors Ariam Mogos and Nadia Roumani discuss the process of integrating AI-based solutions into social sector work in this three-part series. 

The d.school has always approached design with an interdisciplinary lens. That approach is at the forefront of a new project exploring how to bring relevant, impactful uses of AI to social sector leaders. d.school emerging tech lead Ariam Mogos and senior designer and social sector collaborations lead Nadia Roumani teamed up to help folks in the social sector who are facing complex challenges use emerging tech in innovative ways.

A practical approach to AI for the social sector

The familiar phrase “at the speed of light” might now be “at the speed of AI.” AI is moving fast, both in how it is built and in how it is adopted. Every few weeks, a new chatbot, smart assistant, or more advanced large language model (LLM) emerges, promising to solve big problems in fields like education, healthcare, and government services. Many of these AI tools are built by private sector companies that are optimizing for scale, efficiency, and profit—and not necessarily for real impact in these social sector fields, even when heavily subsidized by taxpayer dollars.

Leaders of nonprofits, schools, public health institutions, and state and local governments are on the frontlines, working to solve complex human problems. Many are curious about how AI might help them serve more people or make their work more impactful. But the path forward feels chaotic, even overwhelming.

In partnership with the Stanford Institute for Human-Centered Artificial Intelligence (HAI), we’ve been working to help social sector leaders move from feeling overwhelmed by AI to actively and confidently shaping its future. We’ve identified a process specific to social sector leaders and have developed a program to guide them through this work.

Our program is informed by our work with social sector leaders. We have consistently heard questions like: “I don’t have the resources to hire new staff; how do I train existing staff to apply and develop AI tools?” “What technology should I be using?” “I don’t have AI talent on staff; who do I hire?” “How do I test and scale something with limited resources?” “What kind of legal structures, policies, or data practices do I need?”

To address these questions, we’ve designed a set of activities that helps participants explore what to prioritize when designing AI solutions and how to deconstruct AI concepts. This informational arc includes a process for addressing the emotional needs that arise throughout. By demystifying the complexity of AI and supporting participants as they tinker and experiment with its various components, we aim to guide social sector leaders toward meaningful uses of AI.

Make sure you are building something people want and need

The first need we identified was for social sector leaders navigating AI to slow down and ask the right questions before surfacing solutions and building interventions. The core belief behind our efforts: to ensure positive social impact, you need both a solid foundation in understanding, framing, and testing clear community needs and a strong grasp of the fundamentals of designing AI-supported resources. Otherwise, leaders fall into the inevitable trap of building resources and applications nobody wants, adding to the growing and costly wasteland of unused AI-supported efforts.

In our program, we start with the basics to ensure we’re creating something useful. We’ve seen it again and again: expensive AI tools built with good intentions end up unused, untrusted, or entirely unusable—and at great environmental cost. Too often, teams get swept up in the excitement of the tech and skip over the basics: understanding real human needs and testing early and often. Without a solid foundation rooted in clarity about the end user and repeated testing of assumptions and solutions, failure (and potential harm) is inevitable. Ironically, defining problems first is exactly what these leaders, given their proximity to community needs, are positioned to excel at.

And so we begin with the fundamentals, re-centering humans and their needs in the process. Before talking about tools and vendors, we support participants in going back to the basics: Who are you designing for? What does your community actually need? How have you tested that this need exists with your users? And why AI?

After going through this project scoping phase, several participants realized that they may not have a clearly defined need, that they may have made fundamental assumptions that need to be examined, or that they may still have many questions about their users’ behavior to explore before landing on an intervention—let alone an AI-supported intervention. As one participant said, “We jumped to the solution; we don’t even know if this is what people want.” Others shared, “I really appreciated the time spent scoping our work; it grounded the entire process,” and, “The program boosted my confidence as a user.”

The high costs of missteps for social sector leaders when building with emerging technology

When you’re leading an under-resourced, overextended organization without the in-house technical capacity or budget to build AI-supported solutions, the cost of a misstep while building isn’t just wasted time or money. Such missteps erode an organization’s confidence in building with emerging technology tools and, above all, can cost an organization the trust it has spent years, sometimes decades, forging with the communities it serves. And in the social sector, trust is everything.

Our goal is to help leaders build impactful and meaningful interventions, which means embedding human-centered design practices into the entire scoping and building journey. The fundamentals do not change with AI-supported interventions, except perhaps in how prototypes are built and tested.

Our approach isn’t to turn social sector leaders into AI engineers. It’s about making them better leaders who are more confident decision-makers about which technology will meet users’ needs. When you’re clear on the problem, centered in your community’s needs, and equipped with the right questions, you’re already ahead of the game—even in the face of fast-moving tech.

AI is not a magic fix. AI may be innovative, but it is useful only when it helps you focus on, and solve, a real need.

In our next post, we’ll share how we simplify the complexity of AI for adult learners who have little technical background. We’ll walk through how we break AI down into discrete parts, so that social sector leaders have the language to engage, the confidence to build, and the agency to lead.


Credits

Special thanks to Cece Malone for her help writing this article. Cece is a 2024 graduate of Pitzer College with a degree in human-centered design and environmental analysis, with an extended focus on systems design and design for social impact. Her recent projects have focused on design in philanthropy, healthcare, and narrative storytelling.