Our Data and Survey Strategy

To improve the quality and strategic value of volunteer, participant, and alumni surveys, Anders Johnson, a USC sophomore studying Quantitative Biology and Public Policy, recommended changes to 1) align survey content with our long-term goals and 2) increase administrative effectiveness and consistency.

The stated long-term goals in the strategic plan are:

  1. Build community members’ power to shape the spaces where they live and work

  2. Build young leaders’ power to organize alongside them for grassroots change

  3. Apply that power to built projects, starting small and scaling up over time

  4. Use the success of those projects to create a replicable process for community-led design, applied in cities across the country

  5. Embed that process as industry standard, to reduce neighborhood inequality

These goals must be directly reflected in the kind of data A+A collects. Survey tools should be designed to capture the clearest signals of progress toward these outcomes, beginning with the immediate and measurable impacts on participants and communities. Since Architecture + Advocacy outcomes are primarily tracked and measured through surveys, it is critical to concentrate on those best positioned to reflect these goals in their responses: young architectural leaders, both during and beyond their time in the program, and community members who engage with the program’s work.

I. Recommended Tools

Event Close Checklist

The A+A Event Close Checklist was created to improve data tracking and consistency by ensuring that volunteers and A+A leaders themselves document key information, such as event location, number of attendees, and number of volunteers, rather than relying on external partners who may have different priorities, methods, or levels of rigor in their data practices. By standardizing how event data is gathered and reported, this checklist helps systematize internal evaluation processes and provides valuable insights into the effectiveness and impact of each project, ultimately strengthening organizational learning and accountability.

Volunteer Entrance Surveys

The Volunteer Entrance Survey is designed to establish a consistent and reliable baseline for measuring the impact of the program on its participants. Administered at the beginning of a volunteer’s involvement, this survey captures information on motivations, prior experience with community design and organizing, expectations for the program, and self-assessed skills related to leadership, collaboration, and equity. By focusing on a point where data collection is most accessible—at the moment of enrollment—the survey ensures that outcomes can be measured against a clear starting point. This approach strengthens the robustness of data collection and allows for more meaningful comparisons across cohorts over time.

New questions have been proposed to strengthen both demographic tracking and internal program evaluation. Collecting participant phone numbers will support the development of a longitudinal tracking system, allowing the program to monitor changes in skill application over time. Core evaluation questions should now include self-assessment of growth in three key areas: the ability to apply equitable design, the use of interpersonal skills in professional and collaborative settings, and the ability to facilitate community organizing and engagement. For each area, the goal is to quantitatively measure self-reported ability, track increases in skill confidence, and identify sustained impact over time.
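To make the pre/post comparison concrete, the sketch below shows one way the entrance and exit responses could be paired to quantify changes in self-rated confidence. It assumes, purely for illustration, that responses are exported as CSV files keyed by a hypothetical participant_id column, with the three skill areas rated on a 1–5 scale; the file and column names are placeholders, not A+A’s actual schema.

```python
# Minimal sketch: pair entrance and exit survey responses by participant
# and report the average change in self-rated confidence per skill area.
# Column names and the 1-5 scale are illustrative assumptions.
import csv
from statistics import mean

SKILL_AREAS = ["equitable_design", "interpersonal_skills", "community_organizing"]

def load_responses(path):
    """Read a survey export as a dict keyed by participant ID."""
    with open(path, newline="") as f:
        return {row["participant_id"]: row for row in csv.DictReader(f)}

def confidence_change(entrance_path, exit_path):
    """Average exit-minus-entrance change in self-rated confidence for each area."""
    entrance = load_responses(entrance_path)
    exit_responses = load_responses(exit_path)
    shared_ids = entrance.keys() & exit_responses.keys()  # only volunteers with both surveys
    changes = {}
    for area in SKILL_AREAS:
        deltas = [int(exit_responses[pid][area]) - int(entrance[pid][area]) for pid in shared_ids]
        changes[area] = mean(deltas) if deltas else None
    return changes

if __name__ == "__main__":
    print(confidence_change("entrance_survey.csv", "exit_survey.csv"))
```

Keeping the same rating scale and question wording in both surveys is what makes this simple difference meaningful; if the questions change between entrance and exit, the deltas no longer measure growth.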

Volunteer Exit Surveys

The Volunteer Exit Survey complements the entrance survey by capturing outcomes at the point of program completion, when reflection is most immediate and data collection is most feasible. This survey assesses changes in knowledge, skills, attitudes, and behaviors, as well as perceptions of the program’s effectiveness and relevance to long-term goals. By administering this survey consistently, the organization can track growth in leadership capacity, community engagement, and preparedness to carry forward community-led design principles. Together with entrance surveys, this tool supports a more rigorous and systematic evaluation of the program’s impact, reinforcing the reliability and strategic value of the data collected.

Participant Exit Surveys

The Participant Exit Survey is designed to capture immediate feedback and perceived impact on community members and event participants following their involvement in an A+A project or event. Given the challenges of maintaining long-term contact and tracking participants after events conclude, this survey serves as a valuable but limited data point for understanding short-term outcomes. It provides insight into participants’ experiences, shifts in perspective, and feelings of empowerment or engagement. However, because it lacks a longitudinal component or a comparison group, the results may not fully represent broader trends or long-term impact. While not comprehensive, this survey adds an important layer of qualitative and quantitative feedback to help inform future program improvements.

Alumni Surveys

The Alumni Survey is a key tool for understanding the long-term impact of the A+A program on its volunteers after they have completed their formal involvement. This survey tracks how alumni continue to engage in community-focused architecture, including whether they apply the principles of community-led design in their academic, professional, or organizing work. It is especially important for assessing progress toward the program’s long-term goal of embedding community-led design as an industry standard by growing a network of young architectural leaders who carry these values into the field. By capturing data on alumni trajectories, influence, and ongoing work, the survey helps measure the ripple effects of the program and informs strategies to strengthen and scale its impact across the architecture and design professions.

Alumni surveys should build upon existing volunteer surveys by evaluating how frequently alumni apply the skills learned during A+A and whether their approach differs from peers who did not participate in the program. These additions will help link program outputs—such as skill development—to broader objectives, providing a clearer picture of the program’s long-term impact.
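As a rough illustration of how that comparison might be summarized, the sketch below tallies self-reported skill application by group. The file name, the "alumni" and "peer" group labels, and the 0–4 frequency scale are assumptions made for the example, not an existing A+A dataset or method.

```python
# Minimal sketch: compare how often alumni vs. non-participant peers report
# applying community-led design skills. Labels and scale are illustrative.
import csv
from collections import defaultdict
from statistics import mean

def application_frequency_by_group(path):
    """Mean self-reported application frequency (0-4) for each respondent group."""
    scores = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            scores[row["group"]].append(int(row["application_frequency"]))
    return {group: mean(values) for group, values in scores.items()}

if __name__ == "__main__":
    means = application_frequency_by_group("alumni_survey.csv")
    print(means)
    if {"alumni", "peer"} <= means.keys():
        print("Difference (alumni - peer):", means["alumni"] - means["peer"])
```

Even a simple group comparison like this requires recruiting a peer comparison group, which is why the survey design choices described above matter before any analysis is run.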

II. Ongoing Needs + Next Steps

Strengthening Program Evaluation

While the strategic plan provides a strong vision, several elements remain ambiguous and require further clarification to support meaningful evaluation. For example, the goal of “improving design skills” is broad and undefined—without specific learning outcomes, it is difficult to assess what this means in practice or how to measure it. Surveys will need to help define and track what design skills participants are expected to develop, and how those skills align with the program’s mission.

Similarly, the phrase “equipping people with the tools of architecture” is vague. It raises important questions: What are the concrete tools being referenced—software, design thinking frameworks, spatial analysis, construction methods? What activities or responsibilities are volunteers engaging in that lead to these outcomes? Beyond high-level concepts, there needs to be a clearer articulation of the learning experiences that contribute to this goal. While the team has started defining learning objectives for participants, those definitions must be consistently integrated into program design and data collection.

Additionally, because the program has limited capacity for long-term, longitudinal studies, it will be especially important to focus evaluation efforts on areas where we have the most control—namely, alumni and participant data. Robust, consistent tracking in these areas will be essential for measuring progress and demonstrating impact over time.

Recommendations to Overcome Implementation Obstacles

To ensure that survey questions are effective, future evaluation efforts should include focus groups with volunteers. These sessions would help assess whether questions are interpreted as intended, if responses feel accurate in reflecting the skills being measured, and whether the length of the survey is manageable. Such feedback can guide ongoing iteration and refinement of the surveys to maintain clarity and relevance.

Consistent and strategic survey administration is key to collecting high-quality data. Administering surveys on a regular schedule, and incorporating a pre-commitment to complete surveys during the volunteer application process, can significantly boost response rates, as can offering incentives, such as t-shirts. To ensure reliability—especially for externally used data such as web content or grant applications—neutral and quantitative questions should be prioritized. It is also important to regularly review which questions are required and adjust them as needed. The use of reminders and meaningful incentives, such as tying rewards or recognition to survey completion, can further enhance participation. Whenever possible, administering the surveys in person is recommended for even greater effectiveness.

Survey design should remain consistent to support a longitudinal strategy. Keeping questions unchanged across cohorts will ensure the comparability and reliability of the data. For alumni, surveys should additionally assess how often they apply the skills learned in the program and the extent to which they continue to engage with them. Embedding skill assessments into the initial volunteer application and requiring a pre-commitment to complete the post-survey later in the semester will ensure that the program captures a full cycle of learning and impact. These changes will provide a clearer and more measurable understanding of how A+A supports the long-term growth and effectiveness of its participants.

To strengthen the impact and reliability of our data collection, further research is needed on best practices in survey design and data methodology. This includes exploring what constitutes “good data practices” in nonprofit and community-based settings, with particular attention to how questions are framed, how surveys are formatted, and how to ensure accessibility and clarity for diverse respondents. Understanding how to craft effective questions that minimize bias, encourage thoughtful responses, and align with intended outcomes is essential. In addition, research should consider the overall survey experience—from structure and length to timing and delivery method—to ensure the highest possible response quality and relevance. A key part of this process will be learning how to balance conciseness with the need to gather rich, informative results, so that surveys remain accessible without sacrificing depth.

III. Personal Reflection from the Author

Survey creation is a quick and convenient answer to measuring long-term program impacts. Yet connecting what you are doing and trying to achieve to the exact question you are asking is often a drawn-out and thought-intensive task. In theory, surveys are one of the most accessible and widely used data collection tools. They are easy to distribute, cost-effective, and familiar to most participants. But I learned that their apparent simplicity masks a complex web of considerations that must be addressed to ensure that the data they produce is meaningful. A well-designed survey requires a clear understanding of program goals, defined outcomes, and thoughtful attention to timing, audience, and purpose.

For example, when working with youth or marginalized communities, questions must be tailored to reflect participants’ lived experience and level of knowledge. A technical or abstract question—particularly in fields like architecture or planning—may not yield useful data if respondents don’t have the baseline context to interpret it. Similarly, surveys must be designed with intention about when the data is collected: what a person thinks or feels before a program may differ dramatically from what they reflect on afterward. Without this structure, results can be skewed, incomplete, or unhelpful.

Moreover, data is not neutral or one-size-fits-all. It serves multiple, sometimes competing functions: providing accountability for funders, guiding internal improvement, supporting advocacy, or demonstrating long-term impact. Each of these purposes may require a different kind of question or framing. For instance, a question designed to measure internal program improvement might not satisfy the need to show long-term community impact to an external partner. That’s why survey design isn’t just a technical task—it’s a reflective, iterative process that forces us to define our values, clarify our goals, and ultimately, ask ourselves whether we’re measuring what truly matters.

Written by Anders Johnson
