“What if nobody votes?”
Colby (pictured left) is Vice President of Research and Evaluation at RWJF (my employer). He had been producing an annual list of the most influential health policy research articles for a couple of years, when he decided in 2008 that it was time to let a broader audience help choose the list. But clearly he was struggling with a common fear. What if we throw a party and no one shows up? Would all those party hats go unused? The dip turn brown and develop a crust? Would no one ever hear the ’80s mix tape we had so carefully prepared?
I tried to reassure him (“If no one shows, at least we’ve learned something. And there will be plenty of punch for us.”), but all we could really do was go ahead with the preparations. We posted the poll page listing 25 articles from RWJF’s work that the Research and Evaluation team had identified as influential, and asked our visitors to vote for 10. We sent an e-mail to subscribers on our Web site, contacted five or six bloggers, notified all of the authors, put a notice on our homepage, posted the first tweet on our newly launched Twitter feed, and sat back to see what would happen.
Guess what? I had barely put my feet on the desk and reached for the bag of M&Ms when people started voting—and voting and voting. In the end, more than 1,400 people weighed in across 48 states, the District of Columbia and Puerto Rico—plenty to provide us with a solid list of our most influential research for the year. Because we asked for demographic data, we also learned about some interesting regional differences.
Colby could breathe easier again, and we learned a few quick lessons:
- 25 articles was too many. (We cut it to 21 this year.)
- Expecting people to rank 10 articles is ambitious. (We dropped it to five this year.)
- Most important, if RWJF throws a party, people will come by (as long as we keep the conversation interesting).
The project was successful enough that we repeated it at the end of 2009. Since the results from that poll just came in, it seemed like a good time to corner Colby to get his thoughts on lessons learned and plans for the future.
The Year in Research was around for a few years when you decided it was time to open this up to a much broader audience outside of the foundation. What was the thinking behind that decision?
It was a spontaneous idea. I was thinking about how we could get people more involved and more engaged in our work, and this was fairly simple and contained (people were just checking a box). It was a way of getting more participation.
I know there was worry initially that no one would participate, but participation went very well. I’m wondering what other concerns there were before you tackled this.
There were concerns about how short a list you have to give people to vote on. We post about 400-600 abstracts of articles on our Web site each year. You can’t give people 600 choices. You have to somehow make it a limited list, and you want to make it representative of all the areas the foundation is working in. I had a concern about ballot stuffing—people voting only for their friend’s article, etc. That came up. You mentioned the big worry that I had: that we’d stage an election and nobody would show up. I was clearly proven wrong.
Could you talk about the process of how you narrowed the list down to 21 articles?
This last year we gave all of the program officers in the Research and Evaluation unit a list of the articles in the areas they were working in. They were to nominate four or five of those for me to look at and choose from. I chose three for every area. One of the principles around all of it was to represent different areas of work at the foundation.
That’s a pretty time-consuming process. Are you thinking about a different way of doing this?
In the future, we could take the most read articles off our Web site and use those as the starting list. Have people vote from the most read articles. Or, if people were rating articles, if this were like Amazon, we could have the four- or five-star articles as part of the voting. Have the public narrow them themselves. We could take that list and have people vote on it.
Any surprises from this project?
I was surprised at how engaged some people got. I actually got a phone call about one article and how the person didn’t think the methodology was very good. It did exactly what we wanted it to. It engaged people into thinking about these articles more than they would have otherwise. The other surprise to me was that there were a lot of people who were not researchers who were taken with this whole idea because it filtered things for them. It gave them a short list of important articles in their area. That was a big surprise.
Do you have any lessons learned to share with any other foundation crazy enough to tackle something like this?
For me, the big lesson is that what happened was not what I expected to happen. The lesson is that people really wanted to participate and wanted to be actively involved and that you can get them involved pretty easily. This really boosted traffic to our Web site. It recycled ideas for people. These things had been published earlier in the year. They had already been sent out in e-mail blasts—many of them. They had been posted on our Web site. Some of them had press conferences around them. And yet we were able to give them one more recycling and get them into people’s minds one more time.
Any lessons for another foundation on how they can do it differently? Things you wouldn’t do again?
If you are a relatively large foundation, [work on] getting a handle on how you’re going to make a reasonable list for your [audience]. I think using social media to do that in the first place, allowing people to post their comments on articles and their rating of articles early on, is a way of getting more of their involvement and also making your job easier.
What are the plans for the future?
The Year in Research is going to evolve. I think it will partially evolve because of our plans for incorporating either the popularity of articles or the ratings of articles. [That] will make us adjust how we do the process. I [also] would like to think about a way of getting people to both vote and comment, if they want to. Go beyond just the voting. Let’s find out how intense they are in their feelings about a certain article and what they found was useful there. As people read the articles, what can they apply?
So that at the end of the day you have more than a list …
I have more than a list. I have a sense about why people chose those articles – my own little focus group about what was useful. I think it would also help us in thinking about our grantmaking. Why did this article appeal to the research public, and how did they use it?
Colby would love feedback on the process and choices in the 2009 poll. You can comment directly to him here.
Any similar projects you are working on? I’d love to hear about how they are going. Feel free to comment below.