I feel like we have come a long way in our public discussion efforts, but we still need to develop a clearer sense of how to learn from them. James Schneider has set up some structure for tracking discussions, but it does not seem like we are using it to the extent we could.
I also feel like we are accumulating experience about what makes for a “useful” discussion and how this relates to our projects and their resulting reports. My sense is that we have distinct discussion audiences, and that we haven’t quite figured out whether one report can fit them all or how to adjust to varying levels of awareness and engagement. Some of my best public discussion experiences have come with what I would call our least favorite reports among discussants (I’m thinking specifically of Regulation, where intense discussions among government professionals went very well). On the other hand, I have seen one group of citizens laud a report and go away civically stimulated, and then seen the same report go flat with another group of citizens. I hope we can more fully tease out why that might be.
My own thinking about “what works” here revolves around a combination of the topic (area of concern), the presentation of the report (formatting and clarity), the types of individuals in the discussion, and the level of preparation and openness of those discussants. So far, my sense is that three of our reports align the first two factors favorably: K-12 Education, Energy, and Helping America Talk. The K-12 Education report has the best track record in my view and has probably shaped my sense of “what works” the most (I also have considerable lessons learned about what doesn’t work, drawn from other reports, including several of my own).
It is these experiences (and informal comments from others) that prompt the following observations and suggestions:
- We may want to place more emphasis on targeted developmental discussions for specific things we want to find out about specific reports or report drafts.
- Use of our contract facilitators might be shifted more toward assigned reports, reflecting the above point.
- We might want to slow the pace of report discussions for individual contract facilitators to build in better planning and tracking of results, perhaps limiting each person to a maximum of 10-12 discussions a year.
- We need more clarity about whether contract facilitators are recruiting new participants each time or, in essence, maintaining ongoing discussion groups.
- Because of the positive reception of the K-12 report, we may benefit from some reflection and developmental discussion of what works there and why (we should probably bring some of our contract facilitators into this discussion).
- We might want to undertake some sort of contract facilitator evaluation to see what we can learn about what makes for a “helpful” facilitator of IF reports (James Schneider could be brought into this).
- We might want to address what I am hearing is a problem with “open discussion” of IF reports involving little or no participant preparation. Since our discussions are meant to be organized around reports into which we put a great deal of energy and expense, I am starting to believe that effective participation depends a great deal on participants’ “agreement” to familiarize themselves with the material.
- There may be a place for “open discussion” as a way of discerning what sorts of areas of concern seem to resonate with the public and further exploring tentative areas of concern that do not seem quite ready for prime time.
- I would hope that fellows stay involved in public discussions of our reports, particularly their own, as a way of staying connected to that part of our mission and hearing the things that our contract facilitators hear.
I strongly believe in the public discussion element of IF’s three-fold mission and feel it is a cost-effective means of feedback for improving our practice and products. I can see some economies that could be achieved in this area, but because it represents a relatively low proportion of our budget, I do not see how we could achieve major savings here.