
Participatory methods

Lead Editor of this section is Kaja Peterson.

This text is based on Pedrosa, T., Pereira, A. G. (2006). Participatory tools. SustainabilityA-Test. http://www.ivm.vu.nl/en/projects/Archive/SustainabailityA-test/index.asp (29 November 2013).

Introduction

Participation is of key importance in integrated assessments and is about mobilising stakeholders and their values, views, knowledge and ideas.

Role of participatory tools in an integrated assessment

Participatory methods play a leading role in the early phases of an integrated assessment (IA), when the main goals of the assessment and its boundaries are defined (i.e. for problem framing).

Participatory tools support the engagement of stakeholders (experts and laypeople) and the knowledge, ideas and views they hold. This in turn supports an extended reflection upon the problem under study and its boundaries, resulting in shared framing of the problem and/or understanding that different stakeholders frame the problem differently. In short, stakeholders become the (co-)definers of the problem to be addressed.

In addition, participatory tools may also be used for more specific tasks derived from problem framing, such as exploring the knowledge base (identifying knowledge gaps), assuring the relevance of the assessment, increasing its social robustness and assuring the assessment's quality from a societal point of view (fitness for purpose, relevance and legitimacy).

When exploring interrelationships between different effects and establishing policy options, participation may be used to involve stakeholders in the development of scenarios. By doing so, stakeholders become involved in identifying the cause-effect relations needed to build scenarios, and they help to identify which parts of the knowledge base are contested (scientific and societal controversies) and whether the available knowledge is adequate.

Participation can also be used to ensure that the stakeholders agree on the selection of relevant models and indicators, and on the definition of the multi-criteria analysis (MCA), cost-benefit analysis (CBA) and cost-effectiveness analysis (CEA) criteria used to evaluate different options. Participation in this phase of an assessment is thus organised to support the deployment of other methods: setting the context in which stakeholders can participate, supplying the process with knowledge and information in order to enhance understanding of the problem and, where a deliberative purpose is intended, helping to decide which criteria and/or indicators to use.

In the advanced phase of an integrated assessment, participatory tools could be used to involve stakeholders in MCA, CBA and CEA. This improves the robustness and legitimacy of such evaluation. Legitimacy of policy options is even further improved through an extended assessment process, whereby participatory tools help with attaining shared ground for concerted action, including deliberation (e.g. attaining consensus).

In ex-post analysis, participation can be used to evaluate the assessment process by internal and/or external (peer) review. Participatory methods can allow those involved in the decision-making process, as well as stakeholders who did not participate in it, to reflect on the integrated assessment with the aim of further improving the tools, methods and framework used.

Participatory processes may also be used to extend the peer review process. Extended peer review (Funtowicz, 2001; Funtowicz and Ravetz, 1990) is a reflexive process through which the quality of processes and/or products is enhanced by integrating different sources of knowledge, i.e. by including those affected by, and/or affecting, the issue of concern.

Choosing between different participatory tools

Due to the variety of methods, it is difficult to choose which method, or combination of methods, to use in which situation. The choice of tools depends a great deal on the objectives, context, participants, goals and issues being addressed.

Participation (and therefore the method chosen to organise or set it up) can be seen from two main perspectives: methods of an exploratory, investigative nature (with hardly any connection to established institutional arrangements of decision and policy making) and deliberative methods (having a binding effect on policy making, even if used for explorative or analytical purposes). If a sustainability assessment includes a deliberative phase, which may well be the case (for instance, the scoping phase can be considered deliberative in the sense that decisions are taken about what has to be assessed), deliberative participatory methods (e.g. citizens' juries) may be appropriate.

Participatory IA processes may serve different purposes and functions (see for instance Tàbara, 2006).

Specifically, they may help to:

  1. Frame and define in more relevant ways the problems at stake, their possible causes and effects, and feasible courses of action or futures, on the basis of the stakeholders' views;
  2. Improve the available information, communication and participation channels, both for the production of knowledge and for feeding the policy-making process with preferences and views that would rarely be taken into account otherwise;
  3. Enhance the integration of diverse forms of knowledge and value domains, both from experts and non-experts, as well as from different scientific disciplines;
  4. Optimise the existing processes of social and institutional learning, by raising awareness of the complexities and uncertainties of the situation, as well as the limits of, or gaps in, the available knowledge and the capacities to deal with them.

When selecting the type of tool(s) to use to carry out a participatory process, six main criteria may be considered (other selection criteria exist, see e.g. Rauschmayer and Risse, 2005):

1. The number of participants and the method of their identification/selection;

2. The goal of carrying out a participation process (Arnstein, 1969);

3. The problem content of the issue to be addressed (Steyaert and Lisoir, 2005);

4. The type of desired outcome (Involve, 2005);

5. The style of moderation required (Guimarães Pereira, 2005);

6. Whether and how ICT is used.

These criteria are explained in more detail in the following sections.
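As a purely illustrative sketch, the six criteria can be thought of as a screening record for a catalogue of candidate methods. Everything below (the method profiles, attribute names and matching rules) is invented for this example and is not part of the source material:

```python
# Purely illustrative: the six selection criteria encoded as a record, used
# to screen a small (hypothetical) catalogue of participatory methods.
from dataclasses import dataclass

@dataclass
class MethodProfile:
    name: str
    max_participants: int      # criterion 1: number of participants
    goal: str                  # criterion 2: consultation | partnership | deliberation
    handles_controversy: bool  # criterion 3: problem content
    outcomes: set              # criterion 4: types of outcome produced
    moderation: str            # criterion 5: facilitator | mediator | ...
    ict_based: bool            # criterion 6: whether ICT carries the process

catalogue = [
    MethodProfile("consensus conference", 30, "deliberation", True,
                  {"recommendations", "informed opinions"}, "facilitator", False),
    MethodProfile("online forum", 10_000, "consultation", False,
                  {"map existing opinions"}, "facilitator", True),
]

def shortlist(catalogue, goal, n_participants, controversial):
    """Return names of methods compatible with the stated requirements."""
    return [m.name for m in catalogue
            if m.goal == goal
            and m.max_participants >= n_participants
            and (m.handles_controversy or not controversial)]

print(shortlist(catalogue, "deliberation", 15, controversial=True))
# -> ['consensus conference']
```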

Participants

There are several methods to identify and select the participants in a participatory process, and many participatory tool methodologies already prescribe specific formats for the identification and selection of participants. The identification and selection method is of crucial importance for the sake of transparency and, if deemed necessary, the ‘representativeness’ of the process.

There are four selection processes considered here (Involve, 2005):

  1. Self-selected participants – anyone who wants to join can. This selection process is appropriate when community engagement is wanted as widely as possible;
  2. Stakeholder representatives – participants representing the views, values and knowledge of specific interest groups, or with specific skills;
  3. Demographic selection – participants are selected to provide a representative sample of a larger population (see the sketch after this list);
  4. Number of participants – the number of participants that the tool/method foresees.
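As a purely illustrative sketch of the third option, demographic (stratified) selection can be expressed in a few lines of Python. The strata, quotas and applicant records below are hypothetical; real processes draw on census data and the selection rules of the tool itself.

```python
import random

# Illustrative sketch: fill a quota of seats per demographic stratum by
# random draw from the applicant pool.
def select_panel(applicants, quotas, seed=1):
    rng = random.Random(seed)  # fixed seed keeps the draw reproducible/auditable
    panel = []
    for stratum, seats in quotas.items():
        pool = [a for a in applicants if (a["age_group"], a["region"]) == stratum]
        if len(pool) < seats:
            raise ValueError(f"not enough applicants in stratum {stratum}")
        panel.extend(rng.sample(pool, seats))
    return panel

# Hypothetical applicant pool: 20 volunteers over two age groups and regions.
applicants = [{"name": f"P{i}", "age_group": age, "region": region}
              for i, (age, region) in enumerate(
                  [("18-39", "north"), ("18-39", "south"),
                   ("40+", "north"), ("40+", "south")] * 5)]
quotas = {("18-39", "north"): 1, ("18-39", "south"): 1,
          ("40+", "north"): 1, ("40+", "south"): 1}

print([p["name"] for p in select_panel(applicants, quotas)])
```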

Participatory process goal

Participatory processes may entail different types of involvement, depending on the tool/method applied. The tool may foster (or not) more active participation by the participants in the final result of the process. These types of participation can be divided into three broad categories, adapted from Arnstein's (1969) original ladder:

  1. Consultation (gauging opinions, obtaining reactions or options) – co-thinking;
  2. Partnership – citizen engagement (in-depth thinking by citizens about key public policy issues, informing policy and the decision-making process with citizen perspectives and values) – co-operating, co-defining or co-production;
  3. Deliberation – placing final decision-making in the hands of the public – co-decision.

Problem Content

The nature and scope of the issue to be addressed can be characterised on the basis of four aspects (Steyaert and Lisoir, 2005):

  1. Knowledge – to what extent does society already possess a general knowledge of the subject? To what extent do participants possess relevant common knowledge?
  2. Maturity – to what extent has the society already developed opinions or even legislation on the subject? Do strong views exist or is the issue so emergent that norms have not become established?
  3. Complexity – is the subject highly complex, such that a great deal of (technical) information is required?
  4. Controversy – is the issue highly controversial and has the debate become polarised, such that consensus is difficult to reach?

Participatory Process Outcome

Different methods produce different types of outcomes (Involve, 2005). Here we also take into account the knowledge that participants already possess or acquire during the process. Seven types of outcomes that tools are good at producing are considered:

  1. Map existing opinions – some methods are good for discovering existing opinions about, or perceived impacts of, an issue;
  2. Map informed opinions – methods that involve deliberation usually lead to the creation of better-informed opinions;
  3. Improved relationships – some methods are better than others at revealing common interests and thereby improving relationships;
  4. Shared vision – some methods are good for creating a shared vision;
  5. New ideas – some methods are also excellent at producing new ideas and visions for change;
  6. Recommendations – some methods are good at producing recommendations;
  7. Participant empowerment – finally, some methods empower participants by giving them the skills and/or confidence to take a more active part in decision-making.

Style of moderation

Each participatory tool requires a specific style of moderation (Guimarães Pereira, 2005), which will affect the way processes are conducted and results and outcomes are achieved. Practitioners originally trained in certain approaches tend to value those outputs above others. For example, a facilitator trained in Stakeholder Dialogue will run a Focus Group very differently from the way a facilitator with a marketing background would run it. In other words, the shape, use and results of methods are determined by who is using them, as well as by the nature of the methods themselves and the context, purpose etc. (Involve, 2005). It should also be considered that some styles of moderation require more skill than others.

Five styles are considered:

  1. Arbitrator – a style of moderation used when the direct discussion between two or more parties needs to be arbitrated. The arbitrator facilitates the direct dialogue between participants;
  2. Facilitator – leads the participants through an agenda, keeps the dialogue flowing, or provides technical assistance for software deployment;
  3. Mediator – mediators need the skills of facilitators and, in addition, assist with the communication between the participants, translating between different languages (jargon) if necessary. They need a good knowledge of the issues under discussion and, if necessary, should assist parties in reaching agreements;
  4. Negotiator – has an active role in the final result of the participation process. Negotiators may have a direct interest in a specific result, and their main objective is to achieve an agreement/solution regarding the issue(s) at stake;
  5. Assistant – gives the necessary assistance to the moderator.

Apart from the moderation styles considered above, it is desirable that either the moderator or a dedicated person takes the role of ‘integrator’: the person who integrates the different forms of knowledge feeding into and arising from the participatory process, and who mediates that knowledge in the assessment and policy-making process. This task may be assigned to a moderator of the participatory process (as suggested in Tàbara, 2006), but it can also be assigned to a specific professional (as suggested in Guimarães Pereira et al., 2003a; 2003b).

Information and Communication Technologies (ICT)

ICT can be used in two main ways in participatory processes. It can be used to support the process, i.e. the tool/method deploys ICTs that help with the participatory process (introducing issues, facilitating visualization, etc.); or it can carry the process itself, allowing stakeholders to participate virtually (e.g. via internet, video conference, email or forums), in which case the deployment of ICT becomes the participatory process itself.

References

Abelson, J., P.G. Forest, J. Eyles, P. Smith, E. Martin and F.P. Gauvin (2001). Deliberations about deliberations: Issues in the design and evaluation of public consultation processes. McMaster University Centre for Health Economics and Policy Analysis Research, Working Paper 01-04, June 2001. Available at: http://www.vcn.bc.ca/citizens-handbook/compareparticipation.pdf.

Arnstein, S. (1969). A ladder of citizen participation in the USA. Journal of the American Institute of Planners 35(4): 216-224.

Funtowicz, S.O. and J.R. Ravetz (1990). Uncertainty and Quality in Science for Policy. Dordrecht: Kluwer Academic Press.

Funtowicz, S.O. (2001). Peer review and quality control. In: International Encyclopaedia of the Social and Behavioural Sciences. Elsevier, pp. 11179-11183.

Guimarães Pereira, Â., J.D. Rinaudo, P. Jeffrey, J. Blasques, S. Corral Quintana, N. Courtois, S.O. Funtowicz and V. Petit (2003a). ICT tools to support public participation in water resources governance and planning: Experiences from the design and testing of a multi-media platform. Journal of Environmental Assessment Policy and Management 5(3): 395-420.

Guimarães Pereira, Â., J. Blasques, S. Corral Quintana and S.O. Funtowicz (2003b). TIDDD - Tools to Inform Debates, Dialogues and Deliberations. The GOUVERNe Project at the JRC. European Commission, Ispra, Italy.

Guimarães Pereira, Â. (2005). Knowledge Assessment Methodologies Fall School - note book. European Communities, PB/2005/IPSC/0384.

IAP2 (n.d.). Public Participation Toolbox. Available at: http://www.vcn.bc.ca/citizens-handbook/participation_toolbox.pdf [last accessed 7 August 2006].

Involve (2005). People & Participation - How to Put Citizens at the Heart of Decision-Making. Beacon Press.

Rauschmayer, F. and N. Risse (2005). A framework for the selection of participatory approaches for SEA. Environmental Impact Assessment Review 25: 650-666.

Rowe, G. and L.J. Frewer (2000). Public participation methods: A framework for evaluation. Science, Technology, & Human Values 25(1): 3-29.

Steyaert, S. and H. Lisoir (2005). Participatory Methods Toolkit - A Practitioner's Manual. King Baudouin Foundation and Flemish Institute for Science and Technology Assessment, Belgium.

Tàbara, D. (2006). Participatory sustainability assessment using computer models. In: P. Valkering et al. (eds.). Puzzle Solving for Policy - II. International Centre for Integrative Studies - European Forum for Integrated Environmental Assessment, Maastricht, the Netherlands. http://www.icis.unimaas.nl/downloads/SummerCourseBook_051201.pdf.

 

Consensus Conference

Introduction

The consensus conference is a participatory method aimed at involving the public in the policy-making process and at informing policymakers and experts about what citizens find important and why. It can thereby raise public awareness, may lead to better decisions, may increase the legitimacy and accountability of decision-making, and may stimulate learning (for the public as well as for decision-makers and experts). The consensus conference was developed by the U.S. National Institutes of Health (NIH) in 1977, which wanted to settle a controversy over breast cancer screening. The NIH’s consensus conferences evolved into a way to transfer new medical knowledge and devices to clinical practice, and several European nations imported the model to answer similar questions of medical research and practice. Denmark was the first country to alter the format to involve citizens rather than experts and to expand the purview beyond medicine to broad questions of technology, thereby creating the participatory consensus conference (Jørgensen, 1995).

Important characteristics of this method are that the public determines the agenda for the conference and chooses which experts to consult. They gain knowledge about the issue at hand during the process, which enables informed discussions (as opposed to focus groups). The citizens participating in the consensus conference write a report presenting their ideas on the issue. Although the name of this tool suggests a focus on consensus, the citizens are also asked to indicate in the report their points of disagreement.

The consensus conference is suitable for topics that are socially relevant, that involve technological/scientific knowledge and that are subject to unclear and divergent opinions and points of view.

Methodology

The consensus conference aims to give a voice to the public by forming a citizen panel. The panel (a group of 10-30 citizens) formulates the questions to be taken up and participates in the selection of experts to answer these questions. At the end, a report is produced containing the consensus view (expectations, concerns and recommendations) of the (informed) citizens regarding the issue at hand. Though the panel’s report cannot be considered to represent THE voice of the public, it represents the ideas and opinions of a diverse group of citizens who are normally not involved in the policy process. Outcomes can be viewed as a collection of ideas and viewpoints of the public, and can as such be used as input for assessments (together with other stakeholders’ ideas and points of view).

The methodology of the Danish model of the consensus conference is quite precise and detailed. However, many variants of this tool exist, and the methodology is also conducted under names other than ‘consensus conference’, mainly because of a (cultural) preference for a smaller focus (or no focus at all) on consensus. The methodology of the ‘citizens' jury’ resembles the consensus conference; both are specific types of citizen panels. Since many variants of both methods exist, it is hard to indicate the exact differences between the two. For instance, the citizens' jury held in the context of the River Dialogue in the Netherlands in 2003 followed the same procedure (see below), apart from the fact that the citizen panel did not decide on the agenda of the conference or on which experts to consult. Some say that a difference between the two methods is that the meetings of a consensus conference are generally open to the public and the media, while the meetings of a citizens' jury are not (this was, however, not true for the River Dialogue citizens' jury), or that the time scale of a consensus conference is more precise than that of a citizens' jury (Rowe & Frewer, 2000). The procedure of both methods can be changed on the basis of specific design criteria, as a consequence of which differences are gradual or even nonexistent.

Process

The steps of the Danish model of the consensus conference are commonly as follows:

  1. A steering committee of known partisan authorities is chosen, who represent different and opposing perspectives, who are familiar with the full scope of the topic and who are willing to support an unbiased effort. The steering committee oversees the organization of the consensus conference and the fairness and correctness of its informational materials.
  2. Participants are recruited. This can be done by placing advertisements, or by sending letters randomly. Volunteers should send a one-page letter describing their background and their reasons for wanting to participate.
  3. From the replies, 10 to 30 citizens (mostly about 15; up to 30 with multilingual panels) are chosen, who roughly represent the demographic breadth of the country’s population and who lack prior knowledge of, or a partisan interest in, the topic.
  4. A background paper (information brochure) is commissioned that maps the political terrain surrounding the issue; this is screened and approved by the steering committee.
  5. During a preparatory weekend, the citizen panel discusses the background paper and formulates questions for experts. The panel should also get the opportunity during this weekend to get to know one another and to develop their ability to reason together.
  6. The citizen panel chooses the types of experts that are required. A group of experts is assembled; the citizen panel itself chooses which experts from this group are invited to answer their questions (which are based on information provided by the steering committee). The group of experts covers the broad dimensions of the problem (ethical, societal, technical etc.).
  7. During a second preparatory weekend, the citizen panel discusses the background reading provided by the steering committee, refines their questions and revises the expert panel list to suit their needs. (Choosing the experts can also take place solely during this second weekend.)
  8. The experts prepare oral and written responses to the panel's questions, using language understandable by ordinary people.
  9. An open public forum (the consensus conference) is announced, in which the citizen and expert panels will meet, attracting media, legislators and interested citizens.
  10. On day one of the actual consensus conference, each expert speaks for about 15-30 minutes in response to the questions posed by the citizen panel, and answers follow-up questions from the citizen panel and, as time allows, from the audience.
  11. After the public session, the citizen panel discusses what it has heard.
  12. On day two, the citizen panel cross-examines the expert panel.
  13. After this public session, on day two and day three the citizen panel deliberates and prepares a report that summarizes their points of consensus and disagreement. The citizen panel fully controls the report’s content, but may be assisted by secretaries and editors.
  14. On day four, the expert panel gets the chance to correct outright factual misstatements in the report, but does not otherwise comment on it.
  15. The citizen panel presents its report at a national press conference; reports are 15-30 pages long, clearly reasoned and nuanced in judgment (www.co-intelligence.org/P-ConsensusConference3.html).
  16. In most cases, the report is publicized to bring it to the attention of the broad public, for instance through local dialogues, leaflets and videos. Policy makers can use the report as input for assessments.

Use of Consensus Conferences in Policy processes

This tool can contribute to policy processes in several ways. Depending on the phase of the policy process in which the consensus conference is deployed, the tool can be helpful in recognizing problems, identifying conflicting assumptions, exploring possible solutions, analyzing policy proposals, selecting policy options, evaluating policy options and bringing poorly performing policy options to light; all from a citizens’ point of view.

The consensus conference can contribute to the legitimacy of decision making. It is an “opportunity for those with little power to obtain information and to be heard, and thus an opportunity for more democratic decision-making on the use and regulation of new technology” (Andersen & Jaeger, 1999). The consensus conference can help to reduce the distance between policy makers and the public; it enables citizens to engage in deliberation about the decisions that need to be taken, and it can generate support for measures to be taken.

The consensus conference can also contribute to the accountability of policy processes, as participants get an inside view into the decision-making process and feel co-responsible for the process and its outcomes. The tool is a way for a government to become more responsive to the concerns of the public, to create transparency and to give access. Open access of the people to public institutions is needed to give them a share of ownership ('this is my policy maker') and to create a sense of trustworthiness ('they have nothing to hide') (Huitema & Van de Kerkhof, 2006).

With respect to the contributions to policy processes, a few remarks have to be made. Though, in statistical terms, the citizen panel cannot be considered a representative sample of the public (representative for all ages, socio-economic classes, places of residence, ideas, preferences etc.), the citizen panel can be seen as a group of people that 'resemble' the public, in terms of representing different social perspectives rather than providing a demographic representation (Brown, 2006). This group of citizens can bring to light certain problems or aspects of the topic that were not recognized before. It is questionable, however, whether this tool is the best option when the aim is to make explicit new problems, or unique aspects of problems, ideas, underlying assumptions or points of view. For this goal, tools that focus more on underlying assumptions, such as the Repertory Grid Technique, are probably better suited.

The name of the tool implies that a consensus has to be found. Finding a consensus would mean that deviating ideas are lost, which would be disadvantageous in a search for new aspects of problems. In many past instances, however, this has not been the case, and according to Joss (2000) the citizen panel is asked to pay attention in its report to conflicting points of view (disagreement). Because finding a consensus is often not a central issue, some countries prefer to use another name for the consensus conference, e.g. public debate or public forum.

Although discussion between the citizen panel and the experts is part of the conference, in practice there appears to be little interaction between the experts and the citizens, in the sense that experts and citizens do not work out differences together. This is a point of attention, since such interaction can lead to more, or richer, outcomes (and more satisfied participants).

Another thing to keep in mind is that the way this participatory tool is shaped probably induces citizens to codify their knowledge in “expert terms”. They base themselves solely on expert knowledge when writing their report. It is not hard to imagine that the citizen panel feels social pressure to deliver a “scientifically sound” document. As a consequence, it is quite possible that specific "lay" knowledge that does not fit in with this format is omitted. This would mean that not all aspects of a problem are made explicit.

Operational aspects

According to Joss (2000), the time and costs needed to organize a consensus conference can easily be underestimated: it takes more than one year to organize a four-day conference and, measured in person-months, the total effort amounts to about four years of work. The financial costs of organizing a typical consensus conference are about 166,000 € (Joss, 2000).

The data input that is needed in this tool is not high; only a certain amount of expert input is needed to prepare the background paper and to answer the citizen panel’s questions.

The tool is fairly transparent, or at any rate it can be made transparent. Transparency depends very much on specific design characteristics: for instance, how the participants are recruited, how the background paper is established, whether the participants are really free to choose the experts they want, etcetera.

The results of the consensus conference are restricted to information about the group of which the (citizen) panel is a sample: the public. Though the citizen panel can be asked to think about long-term problems, the present is the reference; for a thorough analysis of long-term effects another tool, such as the use of scenarios, is probably better suited. As for the geographic coverage of this tool, it is hard to give precise indications. For instance, consensus conferences have been held to discuss ozone in the upper atmosphere and genetic modification; these are not local issues. Probably this tool is suited to all problems that concern citizens, at local, national, regional or global level, but using the tool at a higher level of scale will be more challenging in terms of participant selection, dealing with cultural differences, language, etcetera.

Experiences

There is much experience with consensus conferences all over the world. A list of European and non−European consensus conferences is provided below. Unfortunately, not many evaluations are available that provide information about the extent to which consensus conferences affected policy processes in the past. This is probably partly due to the fact that the main goal of past consensus conferences was not to improve policy processes, but to stimulate public discussion and to increase public learning (e.g. Canadian consensus conference on food biotechnology; see Einsiedel & Eastlick, 2000). The evaluations that are available seem to convey that past consensus conferences did not have significant political influence (see for instance Einsiedel, Jelsøe & Breck, 2001).

Most examples show that the consensus conference delivers a new (citizens') view on the particular topic; hereby, a consensus conference can focus attention on certain aspects of the topic that were not considered before. A conclusion of the Australian consensus conference on gene technology in the food chain (1999) was, for example, that “science and industry have to take account of the concerns of citizens about ethics, the environment, the right of choice and information, and many others, if they wish to win public support not only for gene technology but also from consumers and for science itself” (McKay, 1999). An important lesson that can be learned from this and other past experiences is that the diffusion of the results of a consensus conference (the emergence of a broad public debate) depends to a large extent on the mass media.

Vandenabeele & Goorden (2004) evaluated the Belgian consensus conference on genetic testing. They conclude that there was little interaction between the participating experts and citizens: the emphasis was very much on one-way communication, or on extensive monologues by the experts with a question-and-answer pattern afterwards. They furthermore stress the influence of the press: “Journalists are trained to blow up disagreements instead of focusing on the agreements. If only one journalist picked up the conflicts within a citizen group, it was feared by citizens that the importance of the final report would be reduced and perhaps not taken seriously by politicians. Lay people know this and therefore strive for consensus (Andersen & Jæger, 1999, p. 331)”. They also mention that the diversity within the citizen panel became a problem: a growing conflict with one participant arose, resulting in endless, not very useful, discussions. Furthermore, they observed that the organizer and facilitators had an enormous faith in the consensus conference as a method. With a central role for citizens, the role of the experts is adapted to the questions put forward by these citizens, and the focus of the facilitation process is on cooperative skills and expert information. According to Vandenabeele & Goorden this focus hides, paradoxically, an inability to deal with the type of argumentation that can be expected from citizens. Secondly, they observe that the facilitation process confirms the image of the layperson with his or her many questions: the emphasis was on seeking answers and on the expectation that expert knowledge could alleviate any concerns citizens may have.

Joss & Bellucci (2000) evaluated the Austrian consensus conference on ozone in the upper atmosphere (held in 1997). They conclude that this consensus conference was not a success. There was little input from the citizen panel during the discussion with experts. Experts addressed themselves mainly to other experts and did not go beyond their own field of expertise. To the citizen panel it seemed as if there were no policy options; as a consequence they became frustrated and their trust in the experts decreased even more. The citizen panel got the impression that the experts did not want to find solutions. The citizen panel carried out the consensus-finding process behind closed doors and without a facilitator; in trying to reach a consensus, the pressure increased considerably. The report of the citizen panel did not include policy options or concrete measures; responsibilities were shifted to policymakers. Media attention was not as great as expected, and not evenly distributed over time. Later it appeared that the subject of ozone soon disappeared from the political agenda.

The U.S. consensus conference ‘Telecommunications and the Future of Democracy’ (1997) was evaluated by Guston (1999). Guston evaluated the consensus conference on four types of impact: 1) actual impact, 2) impact on general thinking, 3) impact on training of knowledgeable personnel and 4) interaction with lay knowledge. He concludes that the consensus conference did not succeed in any of these impacts. He did find that there were “small-scale impacts on procedural and reflexive learning among elite participants [experts] and all kinds of learning among the panelists.” Media coverage was small, possibly also due to a snowstorm at the time of the conference. The citizens’ report was too broad and not timely. Furthermore, there was minimal interaction between experts and citizens. In accordance with Guston (1999), Einsiedel & Eastlick (2000) conclude about the Canadian consensus conference on food biotechnology that there were a number of indications that, in policy circles, the consensus conference heightened sensitivity to and appreciation for deliberative processes of this nature. Furthermore, they conclude that in the Canadian consensus conference on food biotechnology frustrations emerged with regard to time constraints on the process, the “posturing” or non-response by some of the experts, and the creation of an “us-them” mentality.

These examples show that theory and practice diverge; in practice the tool faces a multitude of difficulties, hampering the possible contributions to policy processes. These experiences can be used to learn about this tool and its points of particular attention.

List of experiences with consensus conferences:

Argentina: Genetically modified foods (2000); human genome project (2001).

Australia: Gene technology in the food chain (1999)

Austria: Ozone in the upper atmosphere (1997)

Belgium: Genetic testing (2003?)

Canada: Mandatory laptop computers in universities (1998 - pilot organized by students at McMaster University); McMaster's policy concerning online education (1999 - pilot organized at McMaster University); food biotechnology (Western Canada, 1999); municipal waste management (Hamilton City/Region, 2000)

Denmark: Gene technology in industry & agriculture (1987); food irradiation (1989); human genome mapping (1989); air pollution (1990); educational technology (1991); transgenic animals (1992); future of private automobiles (1993); infertility (1993); electronic identity cards (1994); information technology in transport (1994); integrated production in agriculture (1994); setting limits on chemicals in food & the environment (1995); gene therapy (1995); consumption & the environment (1997); teleworking (1997); citizens' food policy (1998); future of fishing (1998); genetically modified foods (1999); noise and technology (2000)

France: Genetically modified foods (1998)

Germany: Citizens' Conference on Genetic Testing (November 23-26, 2001, at the Deutsches Hygiene-Museum Dresden)

Israel: Future of transportation (2000)

Japan: Gene therapy (1998); high information society (1999); genetically modified food (2000)

Netherlands: Genetically modified animals (1993); human genetics research (1995) (in the Netherlands the consensus conference was actually held under the name of “publiek debat” (public debate))

New Zealand: Plant biotechnology (1996); plant biotechnology 2 (May 1999); biotechnological pest control (Sept. 1999)

Norway: Genetically modified foods (1996); smart−house technology for nursing homes (2000)

South Korea: Safety & ethics of genetically modified foods (1998); cloning (Sept. 1999)

Switzerland: National electricity policy (1998 - conducted in 3 languages with simultaneous translation); genetic engineering and food (June 1999); transplantation medicine (Nov. 2000)

U.K.: Genetically modified foods (1994); radioactive waste management (May 1999)

U.S.A.: Telecommunications & future of democracy (1997 - Boston area pilot initiated by The Loka Institute); genetically engineered food (scheduled for February 2002)

Combination with other methods

There are no specific combinations or links with other methods. This method does not need input from other steps in the IA process, nor does it provide specific input for other methods. The citizen panel’s report is a very specific outcome, comparable to the report that is the outcome of a citizens' jury. With regard to the assessment of citizens’ views, IA focus groups might deliver comparable results.

Strengths and weaknesses

Strengths:

  • The consensus conference may increase public awareness (dependent, among other things, on media attention). It may lead to better decisions, by enriching the process with relevant points of view. Or, as Andersen & Jaeger (1999) put it: “The consensus conference may provide political and public debate and decision-making on new technology with dimensions and reasoning which were not taken into account previously”.
  • Learning is probably also a very important impact of a consensus conference. The citizen panel can learn about the subject, and the experts and policymakers can learn about citizens' views.
  • Another strength is that this tool actively involves citizens, who are normally not asked and who give a deliberate view on the topic. As a consequence, the citizen panel may acquire self-confidence with regard to scientific and policy matters (Andersen & Jaeger, 1999).
  • It may increase the accountability of decision-making, as participants get an inside view into the decision-making process and become co-responsible for the process and its outcomes.

Weaknesses:

  • In striving for a shared position on the topic, certain deviating insights/points of view can get lost. (According to Joss (2000) there has to be a constant striving towards a shared position, although he stresses that this doesn’t mean it is by any means necessary to reach a consensus.)
  • As for the impacts of the tool, it seems that there is little effect on policy. Evaluations are unfortunately scarce.
  • The report of the citizen panel cannot be regarded as THE voice of the public. The selection of the citizen panel is not completely random and the sample is small; therefore the validity is low.
  • The Belgian example illustrates that “the facilitation process confirms the image of the layperson with his or her many questions. The emphasis was on seeking answers and the expectation that expert knowledge could alleviate any concerns citizens may have” (Vandenabeele & Goorden, 2004). This characteristic is probably not unique to the Belgian situation.
  • The focus of the facilitation process in this Belgian example was on cooperative skills and expert information; this hides an inability to deal with the type of argumentation that can be expected from citizens (Vandenabeele & Goorden, 2004). This corresponds to what we argued before about the coding of "lay" knowledge in experts' terms, possibly leading to the omission of specific valuable lay knowledge.
  • There is a question-and-answer type of interaction between the experts and the citizens. Interaction in terms of working out differences together (often) does not occur; this may result in sub-optimal outcomes.
  • The various strengths and weaknesses can manifest themselves in various stages of the policy-making process. To an important extent, the size and impact of the strengths and weaknesses are determined by the accuracy and thoroughness with which the tool is deployed.

References

Andersen, I.-E. and B. Jæger (1999). Danish participatory models. Scenario workshops and consensus conferences: towards more democratic decision-making. Science and Public Policy 26(5): 331-340.

Brown, M. (2006). Citizen panels and the concept of representation. Journal of Political Philosophy 14(2): 203-225.

Einsiedel, E.F. and D.L. Eastlick (2000). Consensus conferences as deliberative democracy. Science Communication 21(4): 323-343.

Einsiedel, E.F., E. Jelsøe and T. Breck (2001). Publics at the technology table: The consensus conference in Denmark, Canada, and Australia. Public Understanding of Science 10: 83-98.

Guston, D.H. (1999). Evaluating the first U.S. consensus conference: The impact of the citizens panel on telecommunications and the future of democracy. Science, Technology, & Human Values 24(4): 451-482.

Huitema, D. and M. van de Kerkhof (2006). Public participation in water management. The outcomes of an experiment with two participatory methods under the Water Framework Directive in the Netherlands: Analysis and prospects. In: Grover, V. (ed.). Water: Global Common and Global Problems. Science Publishers, pp. 269-296.

Joss, S. (2000). Die Konsenskonferenz in Theorie und Anwendung [The consensus conference in theory and application]. Stuttgart: Akademie für Technikfolgenabschätzung in Baden-Württemberg.

Joss, S. and J. Durant (eds.) (1995). Public Participation in Science: The Role of Consensus Conferences in Europe. London: The Science Museum.

McKay (1999). First Australian Consensus Conference, March 10-12, 1999: Gene technology in the food chain. Evaluation report phase 1. Available through: http://www.csiro.au/pubgenesite/eval_rep.htm

Rowe, G. and L.J. Frewer (2000). Public participation methods: A framework for evaluation. Science, Technology, & Human Values 25(1): 3-29.

Vandenabeele, J. and L. Goorden (2004). Consensus conference on genetic testing: Citizenship and technology. Journal of Community & Applied Social Psychology 14: 207-213.

Repertory Grid Technique

Introduction

Repertory Grid Technique (RGT) has its origins in personal construct psychology, one of several major theories of psychology in the world today. Personal construct psychology has mainly been used in clinical settings, as a way of trying to increase the psychologist's understanding of how individuals view and shape their worlds. Since its introduction, the methodology has also found a home in the areas of marketing, business, artificial intelligence, education and human learning. In the field of policy analysis, too, RGT has gradually gained ground. The basic idea of RGT is that the minds of people are 'construct systems', a construct system being defined as the set of qualities, or dimensions, that people use in their everyday efforts to make sense of the world. These construct systems are highly individual in nature and may guide people's behavior, provided that they develop a reflective awareness of how 'negative' constructs that impede their behavior can be changed. People observe, draw conclusions about patterns of cause and effect, and behave according to those conclusions. People's construct systems are not static, but are confirmed or challenged every moment they are conscious. Moreover, construct systems are not always internally consistent: people can, and do, live with a degree of internal inconsistency within their construct system. Basically, RGT aims to unfold categorizations by articulating the individual construct systems of people so they can be changed or maintained. This helps to better understand what meaning people give to a certain problem situation and what kinds of solutions they would prefer.

Methodology

RGT includes two concepts: 'elements' and 'constructs'. The elements are the objects of people's thinking, to which they relate their concepts or values. The constructs are the discriminations that people make to describe the elements in their personal, individual world. An essential characteristic of constructs is that they are 'bipolar' (e.g. cold-hot, good-bad). In the first applications of RGT, the elements were the people who were important to the person being interviewed; constructs were the qualities used to describe these persons, for instance 'nice' or 'aggressive'. Basically, RGT relates the constructs of an individual directly to the elements.

Process

The RGT procedure can best be characterized as a semi-structured interview (face-to-face, computerized, or by phone) in which the respondent is confronted with a triad of elements and is then asked to specify some important way in which two of the elements are alike and thereby different from the third. The characteristic that the respondent uses to distinguish between the elements is the construct. Since the construct is bipolar, it can be presented on a scale. After that, the respondent is asked to rate the elements (those that are possible/desirable to rate) on the scale that represents the construct, and to indicate which pole of the construct he or she prefers. Then the interviewer moves on to the next triad of elements. Typically, these steps are repeated until the respondent mentions no new constructs.

The basic procedure of RGT includes the following steps (Jankowicz, 2004; see also Fransella et al., 2004):

  1. Agree on a topic with your respondent*.
  2. Agree on a set of elements.
  3. Explain to the respondent that you wish to find out how s/he thinks about the elements and that you will do this by asking him or her to compare them systematically.
  4. Take three elements and ask the respondent which two of them are alike in some way, as opposed to the third.
  5. Ask the respondent why: 'What do the two have in common, as opposed to the third?'
  6. Check that you understand what contrast (which construct) is being expressed.
  7. Present the construct as a rating (or ranking) scale, with one pole of the construct on the left and the other pole of the construct on the right.
  8. Ask the respondent to rate each of the three elements on this scale and to make clear which end of the scale they are nearest to.
  9. Ask the respondent to rate each of the remaining elements on this construct. RGT aims to elicit as many different constructs as the respondent might hold about the topic, so repeat steps 4 to 8 with different triads of elements, asking for a fresh construct each time, until the respondent cannot offer any new ones.

* Often, the first two steps are taken by the RGT designer, meaning the interviewer decides the topic and the elements of the analysis (Fransella et al., 2004: 21).
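To make the elicitation loop (steps 4 to 9) concrete, here is a minimal sketch, assuming a command-line interview. The elements and the 1-5 rating scale are hypothetical; real studies adapt both to the topic and the respondent.

```python
# Minimal sketch of triadic construct elicitation (steps 4-9 above).
from itertools import combinations

elements = ["petrol car", "electric car", "bicycle", "bus"]  # hypothetical

def elicit_grid(elements):
    grid = {}  # construct (left_pole, right_pole) -> {element: rating}
    for triad in combinations(elements, 3):
        print(f"\nConsider: {', '.join(triad)}")
        left = input("In what way are two of these alike? (blank to stop) ").strip()
        if not left:
            break  # no fresh construct offered: stop (saturation)
        right = input(f"What is the opposite of '{left}'? ").strip()
        ratings = {}
        for el in elements:  # step 9: rate ALL elements, not just the triad
            ratings[el] = int(input(f"Rate '{el}' (1={left} ... 5={right}): "))
        grid[(left, right)] = ratings
    return grid

if __name__ == "__main__":
    for construct, ratings in elicit_grid(elements).items():
        print(construct, ratings)
```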

Use of RGT in Policy processes

RGT gives insight into the ways in which respondents view a specific problem or topic. It reveals the 'construct systems' on which respondents base their viewpoints, including the inconsistencies and conflicts within and between these construct systems. Underlying assumptions can be explored, and changed. The method allows for a systematic comparison of solution options and/or policy proposals in order to elicit respondents' preferences. Using RGT can result in the selection of a preferred policy option, thus helping policy makers to make a reasoned choice on the basis of a set of criteria. RGT has been used in environmental research, and the experience with the method in a number of studies has proven that it is a useful technique for eliciting respondents' perceptions of environmental risks and their preferences for specific policy options.

Operational aspects

The literature does not mention the costs of this tool at all. Obviously, the costs depend very much on the number of respondents and elements. Existing computerized versions of this tool, in which the constructs are elicited and analyzed by means of a specific computer program, cost between €0 (some are free) and €400. Developing project-specific software, as was done in the H2 Dialoog project, costs about €2,000. With regard to manpower: the interviews (elicitation) and the analysis of the constructs can be done by one and the same person. If a computerized version of RGT is used, this person will also be the facilitator. When using RGT in a workshop (as in the H2 Dialoog project, a workshop with 25 participants), it is useful to have a number of people (say, four) as facilitators to assist the workshop participants. On average, manpower can be estimated at 2 to 3 person-months.

On average, it will take about 20 to 25 interviews of about one hour each to obtain a sound overview of the most relevant constructs in a particular context. Assuming one interview takes 4 hours (conducting the interview and entering the data into the computer), an RGT procedure will take about 80 to 100 hours. In addition, substantial time will be needed to select and pre-test the elements and to analyse the data (estimated at 100 hours).
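Written out as a quick calculation (the hour counts are the source's own assumptions):

```python
# Quick check of the effort figures above: 20-25 interviews, each costing
# ~4 hours in total (conducting plus data entry), plus ~100 hours for
# element selection, pre-testing and analysis.
hours_per_interview = 4
prep_and_analysis = 100

for n_interviews in (20, 25):
    total = n_interviews * hours_per_interview + prep_and_analysis
    print(f"{n_interviews} interviews -> {total} hours in total")
# 20 interviews -> 180 hours in total
# 25 interviews -> 200 hours in total
```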

The required data input is low. The only input that may be needed concerns the selection of elements, but this can also be done with the help of the respondents (as in the COOL project and the H2 Dialoog project). The output of the tool consists of a list of constructs that respondents use to give meaning to a specific topic, as well as rankings (i.e. respondents' preferences) of elements according to the elicited constructs.

Basically, RGT is simple but powerful. Most articles report that RGT is well understood by the respondents. In the Dutch COOL project this was not the case for all participants, but that is probably because the tool was used in phone interviews instead of face-to-face interviews.

RGT is very transparent. Many variations exist for different aspects of the procedure, such as the elicitation procedure, the sorting technique, and the rating direction. Also, the tool is fairly user-friendly. Eliciting the constructs is not difficult; the questions to ask respondents are simple. Analyzing the constructs is more difficult and, if done quantitatively, requires statistical methods. The transparency of the outcomes depends to some extent on the type of analysis that is conducted.
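As an illustration of the quantitative side, the sketch below correlates the rating rows of a toy grid. The elements, constructs and ratings are invented, and real analyses (e.g. cluster analysis or principal components) go further.

```python
# One common quantitative grid analysis: how similarly two constructs are
# used, measured by correlating their rating rows across the elements.
# The grid (constructs x elements, ratings 1-5) is hypothetical.
import numpy as np

elements = ["petrol car", "electric car", "bicycle", "bus"]
constructs = [("cheap", "expensive"), ("clean", "polluting"), ("slow", "fast")]
ratings = np.array([[2, 4, 1, 2],   # cheap-expensive
                    [5, 2, 1, 3],   # clean-polluting
                    [3, 3, 4, 4]])  # slow-fast

# Pairwise Pearson correlations between construct rows; values near +/-1
# suggest two constructs discriminate the elements in (almost) the same way.
corr = np.corrcoef(ratings)
for i in range(len(constructs)):
    for j in range(i + 1, len(constructs)):
        print(constructs[i], "vs", constructs[j], f"r = {corr[i, j]:+.2f}")
```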

With regard to the reliability of the tool, researchers argue that RGT is not about producing the same results each time, but about seeing to what extent it shows change (in preferences, meanings, etc.), and what that change signifies. The tool is able to capture the true range of constructs in a particular context and at a particular moment by means of 20 to 25 interviews; that is, after 20 to 25 interviews, no new constructs will come up anymore (saturation). Some test-retest studies show rather stable patterns of construct relationships, whereas other studies report a lower degree of stability.

It is hard to say how long it takes before the results become outdated. People's constructs can change as the result of new knowledge, new developments, etc.; indeed, this is one of the aims of RGT. With regard to the time scale, it needs to be stressed that the present situation is the reference, but respondents can be asked to specify their preferences for the long term, as was done in the COOL project and the H2 Dialoog project.

Experiences

Much of the 'grid work' has taken, and takes, place in the clinical setting, with individuals, as a way of trying to increase the psychologist's understanding of how the person views the world (Kelly, 1955; Ryle, 1975; Bonarius et al., 1981). A few examples of psychological problems for which RGT has been used are abuse, anorexia nervosa, depression, suicide and phobias. Most of the published studies refer to a rather successful use of RGT (see Fransella et al., 2004, Chapter 8, for a detailed overview). Some studies report a less successful use of the tool, e.g. because it did not sufficiently elicit value-laden constructs.

Other settings in which the tool is used are: development of children, education, nursing, social relationships, drug use, forensic work, market research, politics, careers, business and sport (see Fransella et al., 2004, Chapter 8). Wright (2004), for instance, reports on the use of RGT to increase the understanding of employees' perceptions of performance appraisals, and argues that the method allowed the investigation to go much deeper than past research (which used conventional questionnaires) into the core perceptions that influenced respondents' attitudes and subsequent behavior. Stein et al. (2003) used RGT to evaluate the extent to which a one-semester university course helped students to better understand technological design processes, by conducting a Repertory Grid Technique at the start and at the end of the semester. Tan & Hunter (2002) report on the use of this tool to understand the cognition of users and information systems professionals.

There are over 400 articles published on RGT. Only a relatively small number of these report on RGT research in the area of environment and sustainability. As far as we are aware, there are no reports of the use of the tool in actual policy-making settings (although Q Methodology, a method very close to RGT, has been used in such settings), but the method has been used in assessment processes that feed the policy process. Examples of applications of the method to environmental and sustainability issues are:

UK: To evaluate urban planning map formats by lay people (Stringer, P., 1974).

Australia: To probe the ideas that students have about energy (Fetherstonhaugh, 1994); a computerized version of the method to understand consumers' perceptions of food products (Russell and Cox, 2003, 2004);

Argentina: To elicit the perceptions of genetically modified food by consumers (Mucci, 2004).

The Netherlands: To elicit perceptions of scientists and other stakeholders in the Netherlands with regard to climate risk management (Van der Sluijs et al., 2001; De Boer et al., 2000); to compare climate mitigation options and to elicit preferences for long-term climate policy among stakeholders in the Netherlands (Van de Kerkhof, 2004, Chapter 9; Van de Kerkhof et al., 2002); and to compare hydrogen futures and elicit the most important concepts in the discussion on hydrogen technology (Van de Kerkhof et al., 2005). Also, in the case study of the Sustainability A-Test project, which concerns an impact assessment of the introduction of biofuels in the European Union (related to the Biofuels Directive and the CAP Energy Crop Premium), Consortium 3 has proposed to use RGT. The aim of RGT in this proposal is to systematically analyze and compare the potential impacts of three different policy packages (scenarios) that can be implemented to increase the share of biofuels in the transport sector.

Combination with other Methods

RGT can be used in interviews, in questionnaires, but also in a computerized version. There are a number of software packages available for both eliciting constructs, and for analyzing the repertory grid data, such as REPGRID, FLEXIGRID, NEWGRID and WEBGRID.

  • It can be used in combination with interviews, observations, and secondary data analysis (Stein et al. 2003; Fetherstonhaugh, 1994).
  • In the Dutch COOL project, the method was used in a participatory assessment as input for a stakeholder dialogue on criteria for climate policy (Van de Kerkhof, 2004).
  • In the Dutch COOL project, the method was used to integrate the outcomes of an interactive Backcasting exercise. In this exercise, groups of stakeholders had explored the implementation pathways of a number of response options to climate change. Since the options were mainly explored in isolation from one another, RGT was used to compare and integrate the different analyses (Van de Kerkhof, 2004).
  • Scholes and Freeman (1994) report on the use of RGT in combination with the reflexive dialogue in order to explore the contribution of nursing practitioners to the therapeutic milieu. 
  • Fetherstonhaugh (1994) used RGT in combination with interviews in order to get insight into students' ideas about energy.
  • In the biofuel case study of the Sustainability A-Test project, the proposal is to use RGT in combination with Scenario Analysis - Application; the outcomes of RGT serve as input for, among others, Global Land Use Accounting/Total Resources Use Accounting and Life Cycle Assessment.
  • Q Methodology, Semantic Differential, Value-Focused Thinking, or Cognitive Mapping can be used as alternatives to RGT.

Strengths and weaknesses

RGT is a flexible method that can be applied in many variations and in a variety of issue areas (Fransella & Bannister, 1977). It is able to develop the intersection between objective and subjective methods of assessment: it targets the articulation of deeply personal meanings and enables the comparison or compilation of these meanings vis-à-vis the meanings of others (Bannister, 1985, referred to in Neimeyer, 2002). Furthermore, with a limited number of interviews (20 to 25, see Van der Sluijs et al., 2001), RGT is able to elicit the true range of relevant constructs in a particular context (Dunn, 2001). Another strength is that, when used in a participatory assessment, the method has the capacity to enhance the quality of the argumentative process by facilitating the exploration of conflicting arguments and (underlying) claims on a specific topic (Van de Kerkhof, 2004). A final strength that we would like to mention is that the interviewer, due to his or her minimal role, does not steer the respondent through questioning (Van der Sluijs et al., 2001). The role of the interviewer becomes even smaller if the respondents, rather than the researcher, choose the elements in the analysis (Van de Kerkhof, 2004).

Weaknesses of RGT include the fact that the method's variations with regard to e.g. elicitation method, sorting technique or rating direction (Neimeyer, 2002; Neimeyer & Hagans, 2002), or variations in the examples that are used to introduce and explain RGT (Reeve et al., 2002), affect the outcomes of the method. In other words, variations in the use of the method may elicit different sets of constructs (this concerns the validity of the method). As a result, the policy maker might consider the grid outcomes insufficiently reliable and, therefore, less relevant. Another weakness is that RGT only elicits the constructs to which a person can attach verbal labels (Fransella et al., 2004). Furthermore, respondents can be suspicious of the rather open questions and, as a result, feel constrained in thinking up constructs with an open mind (Van de Kerkhof, 2004). Finally, in a participatory setting in which RGT outcomes are fed back to the group, it is possible that participants do not know how to deal with the outcomes, especially when these reveal inconsistencies and changes in their own way of thinking (Van de Kerkhof, 2004).

References

Dunn, W. (2001). Using the method of context validation to mitigate Type III errors in environmental policy analysis. In: Hisschemoller, M., R. Hoppe, W. Dunn and J. Ravetz (eds.). Knowledge, Power and Participation in Environmental Policy Analysis. Transaction Publishers, New Jersey, USA, pp. 417-436.

Fransella, F., R. Bell and D. Bannister (2004). A Manual for Repertory Grid Technique. Second edition. John Wiley & Sons, Ltd., UK.

Jankowicz, D. (2004). The Easy Guide to Repertory Grids. John Wiley & Sons Ltd., UK.

Kelly, G.A. (1991). The Psychology of Personal Constructs. Volume One: A Theory of Personality. Routledge, London, UK (originally published New York, Norton, 1955).

Neimeyer, G.J. and C.L. Hagans (2002). More madness in our method? The effects of repertory grid variations on construct differentiation. Journal of Constructivist Psychology 15: 139-160.

Russell, C.G. and D.N. Cox (2004). Understanding middle-aged consumers' perceptions of meat using repertory grid methodology. Food Quality and Preference 15: 317-329.

Smith, H.J. (2000). The reliability and validity of structural measures derived from repertory grids. Journal of Constructivist Psychology 13: 221-230.

Van de Kerkhof, M., R. Bode, E. Cuppen, M. Hisschemoller, T. Stam and I. Varol (2005). Verslag van de Scoping Workshop van de H2 Dialoog [Report of the Scoping Workshop of the H2 Dialogue]. Working document 2, IVM W05/15, Amsterdam, The Netherlands.

Van de Kerkhof, M. (2004). Debating Climate Change. A Study of Stakeholder Participation in an Integrated Assessment of Long-Term Climate Policy in the Netherlands. Lemma Publishers, Utrecht, The Netherlands.

Van der Sluijs, J., M. Hisschemoller, J. de Boer and P. Kloprogge (2001). Climate Risk Assessment: Evaluation of Approaches. Synthesis report, Utrecht University, The Netherlands.

Wikipedia (2013). Public participation.
