Zencity’s community survey offers local government leaders a way to conduct ongoing performance management, using year-round data collection to deliver timely feedback and generate a large sample size that includes more resident voices. This overview explains the technology that powers Zencity’s community survey and analysis, and the process you can expect when you run a community survey with Zencity.

Below, we will cover how we recruit respondents, ensure representativeness, and calculate results, as well as how to access your scores.

Recruiting respondents

We reach residents on a variety of digital platforms via any device they might be using, such as smartphones or tablets. Using targeted ads, we’re able to assemble a representative sample of your community and ensure that voices that might not be included in traditional survey formats are heard.

We choose this distribution method for its ability to let us quickly collect a large sample of diverse backgrounds. In addition, this approach eliminates any need for respondents to download an app or create an account on another site, minimizing a common barrier to participation.

Ensuring representativeness

For every survey, we recruit a sample of respondents that reflects the unique demographic characteristics of your community, as measured by the U.S. Census. Census data allows us to track the demographic representativeness of incoming responses in real time and adjust our advertising bidding and targeting strategy accordingly.

In communities where many residents are not native English speakers, we make our questionnaires available in multiple languages and reach out to potential respondents in their native languages. This ensures that our surveys reflect the views of all residents, regardless of their linguistic or cultural backgrounds.

For example, our surveys in New York City are conducted simultaneously in English, Spanish, Chinese, and Russian, since each of these languages is spoken at home by more than 2% of the population. We can also offer surveys in Filipino, French, Vietnamese, Korean, or any other language commonly spoken by a community’s residents.

Calculating results

As survey responses come in, we use a common technique called rake weighting (also known as “rim weighting” or “iterative proportional fitting”) to correct for any remaining differences between the makeup of our survey respondents and the community as a whole.

Under this approach, each respondent receives a unique weight based on their various demographic characteristics (including such factors as age, gender, race, education), so that the distribution of each of these characteristics in the final weighted sample is the same as in the community as a whole. For surveys conducted in larger communities, we can also account for geographic characteristics in the weighting process, so that the views of residents from all areas of the community are reflected in the results.

It’s a way of understanding how much to “listen to” a given respondent’s answer as we assemble the total score, making sure no demographic group is overrepresented or underrepresented in the calculation.
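As a rough illustration, rake weighting can be sketched as follows. The demographic categories, target shares, function name, and convergence settings below are hypothetical, not Zencity’s actual implementation:

```python
# A minimal sketch of rake weighting (iterative proportional fitting).
# The demographics and target shares here are illustrative only.
def rake_weights(respondents, targets, iterations=100):
    """Adjust weights until each demographic margin matches its target share."""
    weights = [1.0] * len(respondents)
    for _ in range(iterations):
        for dim, target_shares in targets.items():
            # Current weighted total for each category in this dimension
            totals = {cat: 0.0 for cat in target_shares}
            for w, r in zip(weights, respondents):
                totals[r[dim]] += w
            grand_total = sum(totals.values())
            # Scale each respondent's weight toward the target share
            for i, r in enumerate(respondents):
                current_share = totals[r[dim]] / grand_total
                weights[i] *= target_shares[r[dim]] / current_share
    # Normalize so the average weight is 1
    mean_w = sum(weights) / len(weights)
    return [w / mean_w for w in weights]

# A sample that skews young and male, raked to a 50/50 community on both margins
respondents = [
    {"age": "18-34", "gender": "M"},
    {"age": "18-34", "gender": "M"},
    {"age": "18-34", "gender": "F"},
    {"age": "35+",   "gender": "M"},
    {"age": "35+",   "gender": "F"},
]
targets = {
    "age":    {"18-34": 0.5, "35+": 0.5},
    "gender": {"M": 0.5, "F": 0.5},
}
weights = rake_weights(respondents, targets)
```

After raking, the overrepresented respondents (young men, in this toy sample) carry weights below 1 and the underrepresented respondents carry weights above 1, so each margin’s weighted share matches its target.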

Keeping those weights in mind, we arrive at your overall satisfaction score by averaging each resident’s ratings of quality of life and community characteristics on a numeric scale (1-5). Residents whose average is greater than 3.6 are counted as positive. The overall satisfaction score, then, is the weighted percentage of residents who gave an overall positive rating.
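That arithmetic can be sketched as follows. The 3.6 threshold comes from the description above; the ratings, weights, and function name are made-up examples:

```python
# Sketch of the overall satisfaction calculation: average each respondent's
# 1-5 ratings, count averages above 3.6 as positive, and report the weighted
# percentage of positive respondents. Ratings and weights are illustrative.
def satisfaction_score(ratings, weights, threshold=3.6):
    positive_weight = 0.0
    total_weight = 0.0
    for person_ratings, w in zip(ratings, weights):
        average = sum(person_ratings) / len(person_ratings)
        if average > threshold:
            positive_weight += w
        total_weight += w
    return 100.0 * positive_weight / total_weight

ratings = [[4, 4, 4], [3, 3, 3], [5, 4, 4.5]]  # three respondents' 1-5 ratings
weights = [1.0, 1.0, 2.0]                      # weights from the raking step
score = satisfaction_score(ratings, weights)   # (1.0 + 2.0) / 4.0 -> 75.0
```

Here the first and third respondents average above 3.6, so the score is their combined weight (3.0) divided by the total weight (4.0), or 75%.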

Accessing your results

Every time you enter your survey dashboard, you will see your overall satisfaction score for the most recent 90-day period. Each day, as new responses are recorded and raked, we recalculate the percentage of satisfied residents using data from that day and the previous 90 days. This rolling time frame ensures you always have access to your most up-to-date score.
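A minimal sketch of that rolling window, with made-up dates and response records:

```python
# Sketch of the rolling 90-day window described above: each day's score
# uses only responses from that day and the previous 90 days.
from datetime import date, timedelta

def responses_in_window(responses, as_of, window_days=90):
    """Keep responses recorded within the trailing window ending on as_of."""
    cutoff = as_of - timedelta(days=window_days)
    return [r for r in responses if cutoff <= r["date"] <= as_of]

responses = [
    {"date": date(2024, 1, 5),  "positive": True},   # falls outside the window
    {"date": date(2024, 3, 1),  "positive": False},
    {"date": date(2024, 4, 10), "positive": True},
]
window = responses_in_window(responses, as_of=date(2024, 4, 15))
# Only the March and April responses feed that day's recalculation
```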

To help you match your scores to your annual planning timelines, the reports you see at the conclusion of each survey cycle present your overall satisfaction score for a fixed quarterly or semiannual period.

Finally, we want to note that our standard for determining the statistical validity of a demographic group is collecting at least 30 responses. In other words, we believe that’s the minimum number of responses needed in order to draw any meaningful conclusions from the data. In the event that a demographic group’s sample size has not reached statistical validity, an icon will appear to notify you of that fact and we will refocus our targeting to more actively recruit respondents from that group. The moment the sample size reaches statistical validity, the icon will disappear.
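That validity check amounts to a simple threshold, sketched here with hypothetical group names and counts:

```python
# Sketch of the 30-response validity threshold described above. The group
# names and response counts are illustrative.
MIN_RESPONSES = 30

def reportable_groups(group_counts, minimum=MIN_RESPONSES):
    """Flag which demographic groups have enough responses to report on."""
    return {group: count >= minimum for group, count in group_counts.items()}

counts = {"18-34": 45, "65+": 12}
flags = reportable_groups(counts)   # {"18-34": True, "65+": False}
```

In this example the 65+ group would show the not-yet-valid icon until its count reaches 30.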
