

Effectiveness, evaluation and support of environmental education programmes

2009-12-22

Jan Činčera, Jiří Kulich, Dita Gollová

The article presents results of a qualitative survey that has been done among Czech environmental education centres and regional and municipal government institutions providing financial support in this area. Three research questions were discussed: what evaluation strategies are used by the organizations? How are the strategies connected with methodology of the programs, or how do the programs correspond with standards for effective environmental education? How important is the real effectiveness of the programs for local governments' decision about granting them financial support? The research proved that few organizations use evaluation strategies providing relevant information about program effectiveness, there are methodological weaknesses in the majority of analyzed programs, and the local government institutions do not have and do not demand relevant information about the effectiveness of supported programs.

1. Introduction

„Evaluation is a process of critical review of a programme. It includes collection and analysis of information on the programme activities, characteristics and outcomes. Its objective is to formulate statements on the programme to improve its effectiveness and/or provide information for decision making on the programme." (Patton, based on Barch, Duvall, Higgs, Wolske & Zint, 2007) Evaluation in environmental education is used to assess the value of programmes at the level of products, outcomes, processes, resources, or resource use. A distinction is made between formative evaluation, focused on analysing programme implementation, and summative evaluation, assessing the degree to which the expected programme outcomes have been achieved (Bennett, 1989; Barch et al., 2008; Frechtling, 2002).

Summative evaluation is one of the preconditions for an effective environmental education programme (Hungerford, 2005). Based on summative evaluation, the programme provider gains the data necessary for its critical analysis and possible modification within the evaluation cycle (Ecosystem, 2004). The methodology of summative evaluation in environmental education has been developed in a number of methodological papers abroad (e.g., Bennett, 1989; Barch et al., 2008; Marcinkowski, 1997; Thomson et al., 2008; Frechtling, 2002). The outcomes of summative evaluations are often published in peer-reviewed journals abroad, helping to develop the theory of the discipline as a whole.

Through supporting environmental education, regional and municipal government authorities carry out their environmental policy objectives. Success presupposes the effectiveness of the supported programmes - the programmes should guide participants towards responsible environmental behaviour, i.e., behaviour in which a person takes the environmental impacts of their choices into account in everyday decision making. Proenvironmental behaviour is regarded as the main objective of environmental education by a vast majority of foreign authors (Disinger, 2005; Hungerford, Peyton & Wilke, 1980; Ramsey, 2005; Marcinkowski, 2005; Hungerford & Volk, 1990).

Programme evaluation is a crucial precondition for programme effectiveness. Without evaluation, a programme's deficiencies cannot be identified, nor can its quality be improved or demonstrably documented. That is why, in countries with an advanced theory of environmental education, evaluation of environmental education programmes is a common component of the programme and a precondition for receiving funding support from sponsoring organisations.

In the Czech Republic, evaluation has so far spread very little in environmental education. Only sporadic surveys (e.g., Hornová, 2007; Činčera, 2008) and theoretical papers (Činčera, 2007a; Činčera, 2007b; Činčera, 2008) have been published. The present study follows directly on from the qualitative survey Evaluation Strategies of Environmental Education Centres (Činčera, 2008b). That survey identified four basic evaluation strategies used in the member centres of the Pavučina Environmental Education Centre Network. The results indicated that only a small fraction of the centres were using applied research methods with the potential to provide relevant information on programme effectiveness when evaluating their programmes. The majority of the centres apply a combination of intuitive approaches based on non-standardised monitoring by the lecturer or an external evaluator, possibly complemented with non-standardised evaluation of the group outcomes during or at the end of the programme.

One of the objectives of the survey presented below was to verify this thesis on a larger survey sample. At the same time, the survey built upon the variables identified in the previous paper and examined their interrelations in greater depth.

2. Research Methodology

2.1. Research Questions

Three research questions were discussed in the survey:

  • What evaluation strategies are used by the organizations offering instruction programmes[1] in environmental education?
  • How are the strategies connected with methodology of the programs, or how do the programs correspond with standards for effective environmental education?
  • How important is the real effectiveness of the programs for local governments' decision about granting them financial support?

The survey was carried out from January to June 2009 in co-operation among several organisations:

  • BEZK - part project co-ordinator, provided enquirers and data transcription;
  • SEVER - part project co-ordinator, provided enquirers;
  • Institute for Environmental Policy - provided enquirers.

The first author of the present paper designed the survey and analysed the collected data. To improve the credibility of the proposed interpretations, they were commented on by the survey co-authors, who come from a centre that was part of the survey sample. The interpretations are therefore based on both etic and emic perspectives.

2.2. Sample

The survey worked with two basic sets. The first set, referred to as "providers", comprised 94 organisations operating in the Czech Republic and offering environmental education programmes. The set was defined as part of the project Analysis of the necessity and utilisation of environmental education centres in the Czech Republic, executed by BEZK, SEVER and Agentura Koniklec, as centres "associated in networks, i.e., primarily Pavučina EECN, ČSOP, and networks funded in the last three years substantially by the ESF/Operating Programme Developing Human Resources (OP DHR), (hereinafter, the Organisations) which have executed successful EE projects under the MoE Selective procedure for support to projects presented by citizens' associations and EE projects supported by other major sources (EU Pre-Accession and Transitory Funds)." The choice of organisations matching the above definition was made by an expert team composed of representatives of all the three providers.

The criteria for the group were as follows:

  • Public provision of EE services - i.e., not only to own employees, members, or learners and students;
  • Provision of educational services in a contact form and to a defined specific target group - i.e., not only via publishing and general awareness raising, but via direct educational work;
  • Predominant focus of the institution on EE;
  • Scope of services provided - i.e., entities doing regular work of at least several dozen hours of educational events a year (all the organisations included reported at least 100 hours a year).

All the organisations were addressed with a request to co-operate in the survey. Data were collected from 85 organisations, i.e., over 90% of the basic set. The respondents were visited by an enquirer during the survey period of March to May 2009, and a structured interview was conducted. All the enquirers had taken joint training. The interviews were then transcribed and submitted for processing. Written materials on a selected currently executed programme were also collected from the respondents for the purposes of the survey. These included programme preparations or related presentation papers.

The data collected were further assessed under pre-prepared categories; part of the data was processed by open coding.

The other basic set, referred to as "supporters", comprised regional and regional capital authority employees in charge of supporting environmental education. Ten out of the fourteen regional capitals were involved in the survey. Another ten respondents out of the thirteen regional authorities (outside Prague) agreed to be interviewed. The total sample size was thus 20 respondents, i.e., 74% of the basic set.

Data were collected via structured interviews; additional data were obtained from submitted official materials of the organisations. Open coding and data analysis under pre-prepared categories were applied in the processing.

2.3. Survey Design

The survey was conceived as mixed with the main perspective in the qualitative research paradigm (simultaneous combination, QUAL+quan scheme) (Hendl, 2008).

Two types of data were processed:

  • structured interviews with representatives of each organisation;
  • documents - written programme preparations, evaluation tools, presentation materials, criteria for financial support to organisations.

The data were analysed in several ways. The primary means of data reduction was a pre-prepared category system, under which the data were coded using established coding tables (typologies). At the same time, all the data were processed by open coding. Axial and selective coding methods, characteristic of grounded theory, were not applied. Since the survey worked with a relatively large set of data, descriptive and inferential statistical methods were used to clarify and verify the identified findings. Given the interpretative nature of the data analysis, all the numeric values provided need to be understood as indicative only.

Final interpretations are always based on two data sources (interviews and documents). This was done to improve the credibility of the survey through data triangulation.

The chosen approach combines, to some extent, quantitative and qualitative research methodologies. The choice was made primarily due to the pragmatic focus of the survey, aimed at opening a debate on the effective use of funds in support of environmental education.

Three basic categories were studied during the survey: Programme, Evaluation, and Support. Each category was sub-divided into four to six sub-categories.

Figure 1: Programme category

Programme was the first of the categories studied. The respondents representing the organisations classified as providers were asked to choose out of their offer one programme that they think characterises their organisation: typifies its activity and corresponds to the profile of the centre. The respondents were asked for written documents for the programme (the preparations in particular) and enquired about selected characteristics of the programme. Six main areas were studied under this category: Formulation of Goals, Programme Sequence, Group Work, Programme Duration, Variables in the REB model by Hungerford and Volk (1990), and Preparation Quality. The data collected were then processed into pre-prepared coding tables. The category as a whole provides indicative information on the methodological quality of the programme selected by the centre.

Figure 2: Evaluation category

The other studied category was Evaluation. Again, structured interviews with the providers' representatives were the primary data source, complemented, where relevant, with the evaluation tools provided. The text analyses four basic sub-categories: Evaluation Methods, Evaluation Level according to the Kirkpatrick model (Hogan, 2003), use of the Pavučina EECN Evaluation Table, and the organisation's Self Evaluation.

Figure 3: Support category

The Support category indicates the strategies used by the respondents in the supporter set, i.e., regional and regional capital authorities, in selecting programmes to grant funding support. The data were obtained both by interviews with the respondents in this set and by interviews with the respondents in the providers set. The Level field studies the respondents' views of the importance and utilisation of the components of the programme logical model (W.K. Kellogg Foundation, 2004) in decision making on granting funding support. Respondents were asked both about their own opinion of the weight of each stage of the logical model in evaluation and about its real weight in deciding on funding support. The data were recorded on a numeric scale expressing the degree of importance of each criterion. The Methodology field analysed the criteria for selecting programmes to be supported and the composition of programme evaluation teams. The Ideal Programme field expressed the respondents' opinions on what a quality environmental education programme should look like. The Providers' Reflection field contains the coded responses by respondents in the providers set, expressing their opinions on the evaluation methods applied by municipal or regional officials.

2.4. Survey Limitations

Given the size of the survey, the data were collected by a team of enquirers. Although they had all been trained and were using a uniform interviewing method, some data bias may have occurred due to the enquirers' differing personal dispositions towards interviewing techniques. The data from some respondents were therefore less rich than from others. Although the centres were asked to choose a representative respondent, some bias may also have occurred due to an inappropriate choice.

During the interview, the respondents were asked to characterise one environmental programme of their choice that they considered typical of their organisation. Although this method has its benefits, its weaknesses have to be noted. Some of the respondents, for instance, may have chosen a programme that they knew the most about, even though it was not one of the "typical" programmes of that centre. Others may have chosen one that they considered a "showcase", although it is offered less often than others. The survey therefore does not describe the total offer in EE and must not be construed in this way.

Some of the respondents refused to have their answers recorded or ended the interview prematurely. This was particularly the case with municipal and regional representatives, who may have felt threatened in their position. The quality of such respondents' answers may have further been affected by the situation following the regional elections and the degree of uncertainty of the respective authorities.

For municipalities, the interview was conducted with a single representative - mostly the executive employee in charge of EE, typically under the environment or education department. However, these are large bodies with a complex horizontal and vertical structure and division of responsibilities (EE falls under at least two different departments, whose mutual communication is often limited). The answers may have been seriously affected, e.g., by the respondent's department (environment, education, etc.) and level (councillor/deputy, head of department/section, executive employee).

The data analysis was conducted by a single processor: an academic with experience in co-operating with environmental education centres and in evaluating environmental education programmes. The analysis was grounded in subtle realism, i.e., the presumption that the social constructions expressed by the respondents point, to a certain degree, to objective reality: in this case, to the quality of programmes and the ways of evaluating them (Hendl, 2008). Based on the same intellectual perspective, the processor assumed that some of the methods recommended and tested in the professional discourse can be regarded as objectively more effective than others, and can be used to formulate rating judgements on the subject matters studied. The processor's background and perspectives may have resulted in a biased interpretation of certain responses and their inaccurate coding. For this reason, the interpretations were validated from the emic perspective by the survey co-authors.

For the above reasons, the results presented below must not be understood as quantitative responses describing a situation by means of exact figures, but rather as an interpretation of measured data by the evaluator.

3. Presentation of Results

3.1. Programme Category

3.1.1. Preparation Quality

Detailed written preparation is an important indicator of programme quality. In practice, a quality programme cannot be guaranteed without written preparation. Written preparation ensures a standardised, verified version of the programme that can be repeated by various instructors, while giving the instructor room to concentrate on the finer details of delivering the programme. The quality of written preparation is therefore one of the criteria verified by the Pavučina EECN Evaluation Table. Exceptions may include pedocentric and process-oriented programmes created ad hoc depending on the group's needs and on its initiative. Such programmes, however, were not presented in the survey, with one exception, and play a marginal role in the Czech Republic. Moreover, even for this type of programme it can be expected that some form of written preparation is necessary for effective implementation, as a precondition for observing its basic methodology.

Out of the 85 respondents, 27 (31.7%) were capable of presenting detailed written preparations. Another 11 organisations presented only an indicative timeline instead of preparations, 23 presented only a basic description of the programme, and 24 presented nothing. Given the fact that each of the centres only presented one programme in their offer, these results cannot be generalised entirely. It can be assumed, however, that approximately one third of the set of respondents develop detailed preparations for their programmes.

3.1.2. Programme Duration

Duration is another important factor affecting programme quality. Based on a meta-analysis of evaluation surveys, Zelezny (1999) assumes that, particularly for younger participants, longer programmes are more effective than shorter ones. He cites eight hours, the equivalent of a day-long programme, as a rough threshold. Obviously, shorter programmes do not have the chance to influence, in a complex way, the multiple variables crucial for developing proenvironmental behaviour. If short programmes are part of the standardised offer of a centre and are not created according to the needs of a particular school, it is not very likely that the instructors will manage, in the short time available, to identify the state of the learners' environmental literacy in the variable that the programme is to develop and respond effectively to the feedback. In addition, it is impossible, in such a short time, to develop learners' skills or work with group dynamics, which is a precondition for the group's effective co-operation on more challenging tasks.

Out of the 85 programmes evaluated, sixteen (18.8%) exceeded one day in length. Of these, 11 were stay events and 5 were longer programmes based on more systematic work with the group. Eight programmes were between three hours and one day; fifty-three programmes were shorter than three hours; and the duration could not be identified for eight programmes.

It can be assumed based on the results that short instruction programmes under three hours predominate noticeably in the offer of the centres. Stay and longer events are offered by the centres, but only comprise a marked minority of their offer.

3.1.3. Programme Sequence

The influence of the sequencing of the activities within an instruction programme on its effectiveness is not entirely clear. There are numerous alternatives to the conventional teaching model based on the motivation-exposition-fixation sequence, building upon various learning theories. NAAEE materials (Simmons, 2004; Excellence, 2004) recommend a constructivist methodology, of which the approach presented by Reading and Writing for Critical Thinking is the most popular in the Czech Republic. The organisation promotes the so-called EUR model, working with the realisation and modification of learners' preconceptions of a given topic (Grecmanová & Urbanovská, 2007). On the other hand, it has not been conclusively verified that the constructivist method is more effective than conventional teaching models (Wright, 2008). Other model sequences used in environmental education are based on experiential learning (Kolb, 1984), Earth education (Matre, 1999), and other sources (O'Donoghue, 2007; Činčera, 2007).

Out of the 85 programmes evaluated, six (7%) were developed following the constructivist EUR model. Another five were more or less based on the learning-by-experience cycle, or the Kolb cycle. Two programmes showed inspiration from the global education methodology described by Pike and Selby (1994). One of the programmes roughly corresponded to the integrated thematic units methodology described by S. Kovalik (1994), and one case was a pedagogical project. Most of the programmes did not correspond to any of the recommended models. The programmes mostly open with a brief "motivation" in the form of a verbal explanation of the programme content and goal. A combination of activities and lecture follows, including application activities and final revision in some cases. Some programmes resemble the EUR model at the beginning, but lack the final reflective stage. Other programmes are problematic as to group dynamics: centres often include activities that require a certain degree of group maturity, yet this aspect is rarely reflected.

It can be assumed based on the analysis that most of the programmes offered by the respondents are compiled based on the instructors' intuition and experience, or correspond to the "conventional" motivation-exposition-fixation model (using activating methods instead of plain lecture) and are not grounded on modern learning models.

3.1.4. Group Work

At present, great emphasis is placed on co-operative learning and the use of group work in all subjects. However, the effectiveness of co-operative learning depends on the relationships within the group, which are grounded in the group's developmental stage (Johnson & Johnson, 2008). A group in the storming phase has different needs and capacities than one in the performing phase, which in turn influences the effectiveness of implemented programmes.

Some methodologies therefore recommend developing the group's potential by means of experiential learning activities alongside the implementation of curricular activities (Činčera, 2007; Frank, 2001; Gibbs, 2001; Henton, 1996). The survey therefore identified whether the centres include in their programmes activities for diagnosing or forming relationships within the group.

Out of the 85 programmes evaluated, 18 (approx. 21%) included these activities. Another thirty-six programmes included elements of group work and co-operative learning, but not activities intended primarily for developing group skills and relationships. Seven programmes did not apply co-operative work, and the question could not be answered in twenty-four cases.

The results indicate that most of the centres include elements of co-operative learning and group work in their programmes. At the same time, only a minority work consciously with group dynamics and try to prepare groups for co-operative activities.

3.1.5. Formulation of Goals

Well-formulated goals, or expected outcomes, of a programme are an important precondition for its effectiveness (Hungerford, 2005). Without precisely formulated outcomes, it is not clear how to compile and evaluate a programme. Although there are methodologies with generally formulated goals (in particular, process-oriented methodologies, in which goals surface during the programme or are formulated by the learners), it is recommended to formulate the programme outcomes according to the SMART principles (Hungerford, 2005; Bennett, 1989). In particular, the outcomes should be formulated from the point of view of learner performance and be specific, measurable, achievable, relevant, and defined in time.

Out of the 85 programmes evaluated, fourteen (16.4%) lived up to these requirements. Another two programmes applied alternative methodologies relevantly: one used generally formulated outcomes within a process-oriented methodology applied to long-term work in a leisure club, and the other used a back-planning method for a stay programme. The goals of the other programmes departed from the SMART methodology to a greater or lesser extent. Seventeen programmes only defined goals at the level of basic variables; nineteen did not use learner-based formulations; five were too vague and unmeasurable. Twenty-eight programmes did not have any goals formulated, or the goals could not be identified in the materials submitted.

Formulations noncompliant with the SMART methodology included:

  • "The goal of the programmes is to familiarise children with the issues that the programme deals with."
  • "The goal (...) is to do manual work in the forest, taking the opportunity to educate the programme target, which is concerned young people, students (secondary, college), in nature conservation, forestry care, etc."

In addition, the goals of some programmes were unrealistic given the programme duration: they contained too many goals, or concentrated on all the relevant variables in environmental education within a five-day stay event.

It can be assumed that only a small fraction of the programmes offered have goals formulated in a way that enables good evaluation.

3.1.6. REB Variables

The theory of responsible environmental behaviour (REB) by Hungerford and Volk (1990) is the theory of variables affecting proenvironmental behaviour most frequently applied in the context of environmental education. The model presupposes three main spheres of variables, i.e., entry-level (environmental sensitivity and ecological concepts), ownership (complex understanding of issues), and empowerment (locus of control, empowerment knowledge, and empowerment skills). The theory gave rise to the NAAEE standards (Simmons, 2004; Excellence, 2004), defining the main goal spheres for environmental education in the United States.

Out of the 85 programmes evaluated, most focused on the entry-level variables, i.e., environmental sensitivity (19 programmes) and ecological concepts (22 programmes).

Programmes focusing on ownership variables were noticeably less numerous. Only thirteen programmes focused on complex understanding of environmental issues and conflicts. Six programmes focused on developing research skills, particularly related to nature monitoring. Programmes focusing on empowerment variables were numerous. Twenty-three programmes focused on developing empowerment skills; seventeen, on developing empowerment knowledge. The empowerment knowledge and skills concerned either eco-management, i.e., behaviour in which people handle nature and natural resources directly (waste management, energy and water management, outdoor conduct, etc.), or, in three cases, persuasive skills (methodological programmes for teachers and instructors). None of the programmes were aimed at influencing political, legal or consumer behaviour. One programme expressed a goal focused on strengthening the learners' internal locus of control.

Four programmes focused on developing the sense of place. Sense of place is not included in the REB model, but has been frequently discussed in recent years, and some papers (Ardoin, 2009) document its influence on proenvironmental behaviour.

Sixteen programmes focused on goal areas that are not included in the REB model; their relation to environmental education is therefore problematic. These were either programmes focused on traditional crafts, or programmes on animals, herbs, etc. Without a clear description of the goals and means that would allow one to judge whether the programmes contribute to developing environmental sensitivity or to understanding ecological concepts, it cannot be determined whether these can truly be considered environmental education programmes. The fact that respondents chose such programmes as model programmes for their organisations, and the way they described them, point to an inadequate understanding of the theory of environmental education.

Moreover, some of the responses clearly indicated a belief in the KAB (knowledge-attitude-behaviour) model, which is, however, considered unproven and likely non-functional (Disinger, 1997; Hungerford & Volk, 1990; Hungerford, 2005):

„I cannot imagine how this positively perceived activity would not mean a change in the visitors' value attitudes, when they are introduced to nature protection and the need to conserve resources, albeit through walks to springs in Brno. I think the idea that this should not have an impact is odd. We haven't really studied whether a person who participates in our lectures then goes and puts something nasty in a spring; but I don't think that's the case."

Many of the programmes were built upon intuitive, but naïve theories of the interrelations among variables:

„This is a programme - first we do a projection of the ant kingdom, we talk about how things work between ants and people, who is the pest, the intruder, and we too can harm ants. Then I play a few alternative games with them, and we walk in the open, where we find an anthill, we look how they live. The end of the programme is that the kids learn to protect these little creatures."

In other cases, the programme goals concerned primarily an area different than environmental education:

„...for the participants to realise why paper is recycled, what it's good for, how broad the uses of paper are at present, what things are made of paper. And last but not least, the joy of making a thing with your own hands, a product that can be a gift or a favour, or a joy."

The theory of other programmes was very vague and nonspecific:

„...the main goal is that this, generally speaking, ecology is not purely some kind of textbook matter for the kids, for them to experience the linkages in nature, be it at the level of farm animals or at the level of these sort of herbivores and predators, so that even the games that we play with them in the open they tend towards this experience teaching, to experience your sort of ecology, that's where we try to be sort of most successful, that those... putting it like this, you can always make a lecture on the stuff, but it's harder to make the kids experience it, so that it sort of becomes part of their, say, personalities, and that they simply realise that your say milk is not made in boxes somewhere, but how much work it is just to calm down that goat, milk her, and the entire process, that it's much much more multifarious than it looks in the city."

Even some of the concepts behind developing the chosen variables were problematic. Some programmes on developing environmental sensitivity did not bring children into contact with wildlife; most of the programmes were very short. Programmes focused on ownership variables typically concentrated on presenting environmental issues ignoring the key level of environmental conflicts. Research skills developed were only partly related to environmental issues; skills for research into environmental conflicts were absent: not one of the programmes complied with the third target level of environmental education, which focuses on research and evaluation (Hungerford, Peyton & Wilke 1980). There is therefore a threat that children will be confronted with information on problems that they consider big but irrelevant to their own lives and beyond their capacities to influence.

Three out of five areas of proenvironmental behaviour - consumerism, legal and political behaviour (Hungerford & Volk, 1990) - were not dealt with. Since each respondent only chose one of the programmes offered, it must be noted that this conclusion is not generally valid for the offer of EEP. Programmes focused on environmental consumerism in particular are routinely offered by environmental education centres. However, we think it is interesting to note that not a single respondent made reference to them. We also believe that legal and political behaviour is noticeably less present in the EEP than eco-management. Nevertheless, a separate research task focused on an analysis of the overall offer by environmental education providers in the Czech Republic would have to be conducted to bring a relevant answer.

3.2. Evaluation Category

3.2.1. Evaluation Table

The Pavučina Environmental Education Centre Network has prepared a so-called Evaluation Table for the purpose of ensuring programme quality: a tool for standardised monitoring and evaluation of the programmes of its member centres. Although the Evaluation Table has not been completed yet (work on defining quality indicators is being finished), it is being tested by several member organisations. The Evaluation Table is designed primarily for evaluating so-called ecological instruction programmes, i.e., not for evaluating public workshops, for instance. It can therefore be assumed that the Evaluation Table will not be widely used by respondents who are not members of the Pavučina EECN and do not run instruction programmes.

Out of the 85 respondents, twenty (23.5%) use the Evaluation Table. The following codes were used in evaluating the Table:

Figure 4: Coding the Evaluation Table sphere

The reasons against using the Evaluation Table most frequently quoted by those respondents who know it (i.e., Pavučina EECN member centres) were related to the complexity of the tool. In part, it discourages respondents; in part, it leads them to simplify the Table and create their own alternative versions. The perceived complexity of the Table leaves respondents with the impression that they have neither the time nor the resources to use it, or that they lack the know-how required to complete it. The following statement is characteristic:

„...the Table is quite complex, in fact if you were to evaluate things by all the criteria that it has in it, it would be awfully complicated and take huge amounts of time, in the end we have agreed in the quality group, since I go there, that it's way too complicated even after practical verification, and that it will be better if the criteria remain in fact only the best best ones and then the worst, and in between a scale, because as it was in separate little fields, all the things could not be evaluated based on that..."

Strongly critical reservations are expressed among respondents who do not use the Evaluation Table. Some are aimed at the complexity of the tool; others point out that the Table is only suitable for a certain type of programmes in their view:

„I know the Evaluation Table and I refuse to use it as a matter of principle... I think it's way too complicated, way too big, and what's more, it only covers one specific type of programme. And EE may have several types of programmes, but it doesn't cover those. That means it sees your main stream, but the things at the margin, the Table makes them seem poor. And I can see in our experience, when we go to trade fairs for example, there you meet instructors who are supposed to evaluate what takes place there, and they refuse to follow the Table. So I think the Table is a sort of negotiated compromise and I don't know what prospect it has."

Respondents who use the Evaluation Table appreciate its coherence and consider it a useful evaluation tool, or a tool whose full application is a goal of the organisation:

„I think the Table is very good as a basis. Because it shows which way the instructor or on which part the instruction should put emphasis. What in the programme is measurable. Or can be measured. Or, what you can focus on. But then of course, not all the items that are contained in this Evaluation Table in every programme, are relevant to the given programme. So then when evaluating programmes using the Table, I then know that some really aren't built on team co-operation. So I know that scoring a zero for that does not diminish the value of the programme. Or if I don't feel good about one of the criteria. That I rather use it as a perspective from which to evaluate the programme. And I think that it's good."

Based on the analysis, it can be assumed that the Evaluation Table is only used by a minority of the respondents. Most respondents who do not use it cite the complexity of the tool, a lack of time, or a rejection of the tool as such.

3.2.2. Evaluation Methods

A great many data collection methods can be used for evaluating programmes, based on qualitative or quantitative research paradigms. The choice of a specific method depends chiefly on the nature of the evaluation question, the programme type, and the participant group, which is why methods cannot be qualified as suitable or unsuitable in general. The trend seems to be to use a mixed evaluation design combining the qualitative and quantitative approaches, and a certain shift away from one-sided quantitative pre-post testing using the questionnaire method (Hart & Nolan, 1999).

Certain typically quantitative as well as qualitative evaluation methods were detected in the survey.

Four respondents presented comprehensively prepared evaluation models using multiple methods. The most typical qualitative tool is (usually non-structured) observation, used by 44 respondents (51.7%). Twenty respondents (23.5%) evaluate programme quality using programme participants' remarks made during the final oral reflection; five respondents used final revision for the same purpose; one respondent mentioned the use of posters. None of the responses that mentioned using qualitative data collection methods indicated that the data would be subject to further processing within qualitative survey methodology, i.e., coding and categorisation.

Other respondents prefer quantitatively focused tools. The principal ones include questionnaires or enquiries assessing the clients' satisfaction (37 respondents, i.e., 43.5%); another 13 respondents (15.2%) derive success rating from records on programme visit rates (how many schools order them); four respondents use variants of so-called hitting, i.e., participants awarding points to the programme; finally, two mentioned using a final test.

„We are very content. We are one of the biggest in the country, we get over 20,000 children a year, and our capacity is not enough."

The results indicate that respondents use qualitative data collection methods for evaluating the programme effectiveness at the learning level, while they apply quantitative tools more in assessing participants' satisfaction with the programmes. Since qualitative methods are not always used correctly and the data are not processed according to qualitative research methodology, it can be assumed that programme evaluation at the learning level brings respondents highly unreliable information.

3.2.3. Evaluation Level

According to the Kirkpatrick model (Hogan, 2005), evaluation should encompass four basic levels:

  1. reactive, i.e., assessment of clients' satisfaction with a programme;
  2. learning, i.e., shift in the group's knowledge, skills, and attitudes;
  3. transfer, i.e., change in the group's behaviour as a result of having taken the programme;
  4. impacts, i.e., the effect of the change in participants' behaviour on their environment and community.

It can be said that the more levels evaluation includes, the more comprehensive it is. The highest level of evaluation applied by the centres was recorded in the survey. Sixty-five respondents (75.5%) only evaluate at the reactive level. The visit rate of the programme and the clients' / visiting teachers' satisfaction are crucial to them:

„When you see kids leave bored, then it was poor. When you see kids leave keen and wanting to return or the kids themselves write you a letter from school, that's amazing... If we would improve the programme; we will improve it by, when we provide the kids - when the museum looks better. Or when there are more objects, when we can tell more stories. But meaning we would improve it by doing self-scrutiny and self-flagellation, I don't think so."

Sixteen respondents (18.6%) try to evaluate at the learning level. Their evaluation is therefore focused on registering new knowledge, skills or attitudes that participants learn:

„To us, the most important thing is a shift in the attitude or realising one's impact on the state of the environment, so we try to move the learners along the value ladder, and that's what the final review at the end of the instruction programme is primarily for. Of course it is complemented by the fact it's not only for them to like, but to really learn something, to advance their knowledge and skills somewhat. And that's what the teacher's questionnaire is for, among other things. Nevertheless, we find out that teachers expect an instruction programme to entertain the kids above all, which is why activities are necessary, but the target is to teach them something."

Four respondents (4.6%) have experience with evaluation at the transfer level, i.e., the projection of the knowledge, skills and attitudes attained into participants' behaviour. For two centres, this level was part of an evaluation report prepared by an external partner. Of the remaining two, one centre evaluates this level in an international project for adult clients that it is preparing; the other evaluates the behaviour of its adult clients after an instruction workshop.

It can be assumed based on the survey that most of the respondents only evaluate their programmes in terms of clients' and orderers' satisfaction. Evaluation at the behaviour level is linked either to external evaluation or to programmes for adult clients.

3.2.4. Satisfaction

A survey of evaluation strategies used by environmental education centres (Činčera, 2008) suggested a connection between the quality of the evaluation strategy and the degree of the centre's satisfaction with its own work. The assumption was that centres that are more critical of their own work would be more likely to use more demanding evaluation strategies; at the same time, the more demanding evaluation would help them identify flaws in their programmes, which would in turn support a more critical self-evaluation.

The survey asked each respondent to evaluate their satisfaction with their centre's work in environmental education. The responses were subsequently coded on a 1-5 scale.

Figure 5: Coding the Satisfaction sphere

Out of the 85 respondents, eight centres were not able to evaluate themselves. Sixty respondents (70.5%) rated the work of their centre as effective or very effective. Respondents' positive self-evaluations typically cited clients' interest in the centre's programmes or the results of reactive programme evaluation:

„As for the kids, they really enjoy the instruction programmes, so I would see the effectiveness at up to 80 per cent."

In addition, centres that considered their work effective but limited by external barriers were coded as satisfied with their own work. Insufficient funding was the chief such barrier: these respondents were convinced that they were doing their job well "under the given conditions" and were providing quality output, but would show even better results with greater support. Thirteen centres (15.2%) gave a mixed evaluation, stating both positive and negative aspects. Negative aspects predominated in the evaluations of four centres; these derived either from the results of previous evaluations, from insufficient utilisation of internal resources, or from external barriers to effectiveness. Internal reasons for self-critical evaluation occurred only rarely.

„Some of our programmes we see now that they're not effective and that we have to reconsider them, plus we have cancelled instruction programmes for kindergartens completely, because we thought them particularly ineffective, as we were at the conjurer and puppet theatre level there, and the teacher didn't really work with it further on. ... Some lectures for the public, well, the effect there was only that the community perceived us, that we work here and do something for them, but I don't think it would have had any impact on their environmental conduct and behaviour."

The analysis shows that, as a rule, respondents who evaluate their own work as rather effective do not work with results of evaluation at levels above assessing clients' satisfaction. While it cannot be asserted with any certainty that the degree of criticism grows linearly with the level of evaluation, a statistically significant difference can be found between the distributions of critical and positive respondents in the sets of centres that evaluate at the first, second, and third levels (Kruskal-Wallis H=14.8211, df=2, p=0.0006, α=0.05). At the same time, it seems that this difference concerns respondents who evaluate at the behaviour level: a comparison of the first and second sets yields no statistically significant difference (Kruskal-Wallis H=0.1448, df=1, p=0.7035, α=0.05). It can therefore be assumed that centres which embark on more complex evaluations are more critical towards themselves than centres which are content with less demanding evaluations. Given the small number of respondents and the imbalance of the groups, however, this conclusion cannot be fully generalised and has to be regarded as purely indicative.
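The comparison described above can be reproduced with standard statistical tooling. The sketch below is illustrative only: it assumes hypothetical satisfaction codes and group sizes (the actual survey data are not reproduced here) and uses scipy.stats.kruskal, a standard implementation of the Kruskal-Wallis test, to compare the distributions across groups of centres split by their highest evaluation level.

```python
from scipy.stats import kruskal

# Hypothetical satisfaction codes (1 = very critical ... 5 = very positive) for
# three groups of centres, split by the highest Kirkpatrick level at which they
# evaluate. These values are illustrative placeholders, not the survey data.
reactive_only  = [5, 4, 5, 4, 5, 4, 5, 5]  # evaluate at the reactive level only
learning_level = [4, 5, 3, 4, 5, 4]        # evaluate up to the learning level
transfer_level = [2, 3, 2, 3]              # evaluate up to the transfer (behaviour) level

h, p = kruskal(reactive_only, learning_level, transfer_level)
print(f"Kruskal-Wallis H = {h:.4f}, df = 2, p = {p:.4f}")  # compare p with alpha = 0.05
```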

3.2.5. Model for Factors Affecting Choice of Evaluation Level

The survey assumed that the application of a higher evaluation level by a centre implies the application of the lower levels. The number of Kirkpatrick model evaluation levels included is one of the important indicators of the quality of evaluation at the centre. Certain correlations between the number of levels at which a centre evaluates the effectiveness of its programmes and the variables characterising the programme can be expected. The table below therefore assesses the correlations between the highest evaluation level applied and the other variables in the Programme category. The correlations were assessed using Spearman's rank correlation coefficient.

 

                   Goals   Evaluation Level   Preparation   Satisfaction   Group
Goals              x       x                  x             x              x
Evaluation Level   0.52    x                  x             x              x
Preparation        0.54    0.43               x             x              x
Satisfaction       -0.16   -0.17              -0.17         x              x
Group              0.51    0.3                0.43          -0.05          x
Duration           0.12    0.13               0.07          -0.07          0.25

Figure 6: Correlations between selected variables in the Programme and Evaluation categories
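The pairwise coefficients summarised in Figure 6 can be computed directly from the coded data. The following is a minimal sketch under stated assumptions: the column names and ordinal codes are hypothetical placeholders, not the survey dataset, and pandas' corr(method="spearman") is used to obtain the Spearman rank correlation matrix.

```python
import pandas as pd

# Hypothetical ordinal codes for a handful of centres; placeholders only,
# not the survey data.
data = pd.DataFrame({
    "Goals":        [3, 1, 2, 4, 0, 2],   # quality of goal formulation
    "EvalLevel":    [2, 1, 1, 3, 1, 2],   # highest Kirkpatrick level applied
    "Preparation":  [3, 1, 2, 3, 0, 2],   # quality of written preparation
    "Satisfaction": [4, 5, 4, 2, 5, 3],   # self-rated satisfaction (1-5)
    "Group":        [2, 0, 1, 2, 0, 1],   # group-forming elements included
    "Duration":     [1, 0, 0, 2, 0, 1],   # coded programme duration
})

# Pairwise Spearman rank correlations, analogous to the matrix in Figure 6.
rho = data.corr(method="spearman")
print(rho.round(2))
```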

The analysis shows that a strong positive correlation exists between the variables Goals and Evaluation Level, Goals and Preparation, and Goals and Group. In addition, a medium positive correlation exists between Evaluation Level and Preparation, and between Preparation and Group. Moreover, a borderline positive correlation was detected between Group and Evaluation Level.

In contrast, the duration of the programme and the degree of satisfaction of the centre with its own work are irrelevant variables. Admittedly, the degree of satisfaction of the centre correlates negatively with the other variables, as expected, but the correlation is very weak. This finding does not contradict the previously presented relation between the evaluation level and satisfaction: it may be that the relation only begins to hold for centres from the third evaluation level upwards. Given the small number of organisations in this group, however, the hypothesis can be neither validated nor disproved.

Figure 7: Schematic of relations between programme and evaluation variables

The number of evaluation levels that an environmental education organisation applies is a crucial variable for assessing the evaluation strategy of the centre. The choice of the evaluation strategy is connected to the methodology that the centre uses in preparing its programme. The evaluation level chosen is particularly closely related to how the centre defines the goals (expected outcomes) of its programmes. It can be assumed that if an organisation formulates the expected outcomes of its programmes precisely, it will tend to choose a higher programme evaluation level. Programme preparation quality is another key factor: if the centre has detailed written preparations, it will also choose higher evaluation levels more often. Moreover, the inclusion of group-forming elements in the programme has an effect on the choice of evaluation level. The way the organisation formulates the goals of its programmes is furthermore closely related to preparation quality and to group work during the programme. There is also an interrelation between preparation quality and the inclusion of group-forming elements in the programme.

3.3. Support Category

3.3.1. Methodology

The first part of the analysis of responses by municipal and regional authority employees focused on identification and comparison of decision-making mechanisms for allocation of funding support to organisations offering environmental education programmes. Several different methodologies occurred in the study sample:

  • grant support allocated from a special environmental support fund;
  • commissioning;
  • operation of municipal or regional centre focused on environmental education;
  • allocation of funds based on applications for funding support.

The exact mechanism then derives from the form of funding support. Some of the organisations publish subsidy rules and project assessment criteria; in others, "there is no grant system and each project is assessed separately."

Significant differences occurred in the composition of the assessment body:

  • environment committee composed exclusively of external experts;
  • department employees without co-operation of external experts;
  • department employees, deputies and external experts hired for issues beyond the employees' expertise;
  • employees and representatives of a regional environmental education centre;
  • environment committee composed of employees and externists; council and assembly decide;
  • fund board of trustees composed exclusively of employees;
  • department prepares background info for environment committee composed of externists;
  • each project assessed by two environment department employees and one education department employee;
  • representatives of political parties in the assembly and a representative of the authority;
  • representatives of the authority and self-government.

The definition of members of the assessment bodies shows that respondents differ in their ideas of the expertise required of project assessors. One group of respondents presumes that employees of the authority possess adequate qualifications for expert assessment of environmental education programmes. Other respondents recognise the need to co-operate with external experts, but cite barriers that make it difficult or impossible:

 

„Once, three years ago, we managed to get an externist from a college on the committee, but it's quite a problem to get your externists, because then they are not allowed to apply, so there's a conflict in that, because your people who would make good experts for the assessment are in fact in the organisation that is applying for the grant. So there's a conflict of interests. So we chose the option to hire an externist from the college, but in the end he was on only once, and since then the composition has been authority and self-government people again."

Others prefer political qualifications to professional ones:

 

„The steering committee is an altogether political affair. Meaning each party that's in the X Regional Assembly has two representatives, and the department that administrates it has one..."

The last group of respondents relies on co-operation with externists for expertise. Those are either members of a regional environmental education centre, other citizens' associations, or representatives of local businesses in the environmental sector.

It can be summed up that the methodology for granting funding support to environmental education is not unified, with each entity opting for its own procedure. The key issue here is the selection of project assessors. These are sometimes experts in environmental education, but more often support for the projects presented is decided by authority employees, assembly members, representatives of organisations without a direct professional link to environmental education, or representatives of organisations dealing with various aspects of the environment, on the assumption that their professional competence in that aspect guarantees expertise in environmental education.

The professional qualities of the authority employees and other environmental education project assessors could not be rated in the survey. They can be assumed to range broadly from former employees of environmental education centres to "lay" persons. The situation is further complicated by the fact that many of the experts are applicants at the same time, and cannot therefore be involved in the assessment due to possible conflicts of interests.

3.3.2. Evaluation Level

The next stage of the interviews focused on analysing which criteria the respondents considered crucial for assessing the quality of an environmental education programme and the extent to which the represented organisation bases its decisions on them. The criteria were chosen according to an adjusted version of the logical model[2], i.e., a tool used for presenting a programme theory of the relations among its inputs, activities and outputs (W.K. Kellogg Foundation, 2004):

Figure 8: Programme logical model

On a five-point scale from "absolutely insignificant" to "essential", respondents were asked first to rate their subjectively perceived importance of a criterion and then the degree to which the represented authority bases its decisions on it when assessing programmes.

A comparison of the ratings for the two questions using the Wilcoxon paired test did not show a statistically significant difference between the two groups of answers in any of the logical model categories. Respondents were thus unable to distinguish between their subjective rating and the "objective" policy of their respective authorities. For this reason, the following text uses the information expressing the degree to which the represented authorities base their assessment of the quality of environmental education programmes on the given criteria.
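A minimal sketch of this paired comparison follows; the 1-5 ratings below are hypothetical placeholders for a single logical-model criterion (not the survey data), and scipy.stats.wilcoxon implements the paired signed-rank test used.

```python
from scipy.stats import wilcoxon

# Hypothetical paired ratings for one criterion: subjectively perceived importance
# vs. the weight the authority actually applies (1-5 scale). Placeholders only.
subjective_importance = [4, 5, 3, 4, 4, 5, 3, 4, 4, 3]
weight_applied        = [3, 4, 3, 5, 5, 3, 3, 2, 4, 2]

stat, p = wilcoxon(subjective_importance, weight_applied)
# A p-value above 0.05 would indicate no statistically significant difference,
# as reported for all logical model categories in the text.
print(f"Wilcoxon statistic = {stat}, p = {p:.3f}")
```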

Figure 9: Mean weight of programme logical model criteria in programme assessment by supporters

The chart plots respondents' average rating by category, where 1 stood for "never" and 5, for "always".

Although the differences in rating were minute, two groups of criteria can be identified. The first comprises satisfaction, standards and behavioural impacts of the programme (mode 3, i.e., "sometimes"); the other one (mode 4, i.e., "frequently") comprises the other criteria. None of the respondents chose "never" for any of the criteria.

The following types of comments appeared for the criterion "Resources":


Figure 10. Coding the "Resources" criterion

For one of the respondents, the cost "makes up nearly one third of the rating". For others, formal criteria associated with programme costs play a role:

„We have this co-funding as a specific criterion, so if they put in half of the money or more, they have a bigger chance of getting funds than if they put in a quarter of the money."


Figure 11: Coding the "Background" criterion

The assessor's or authority's experience with the applicant plays the most important role in assessing quality based on the organisation's background. This view was reiterated by respondents.

One of the respondents even pointed out a disadvantage of this criterion:

„Here the point is, we know some of the centres, the applicants. We can make a picture how things are with them, what instructors they have, how they work. Then there are applicants who apply for the first time, we've never seen them, we don't even know how they work. So in fact we should be sort of impartial and assess the project based on how they describe, or write it."


Figure 12: Coding the "Region's needs" criterion

The region's needs were one of the most emphatically mentioned criteria. Here, respondents referred to existing municipal or regional EE concepts or other analyses that had been conducted:

„That's the basic thing! Agreement with the regional development programme is the fundamental criterion. If it's not in agreement with the region's needs, the self-government won't give us a crown towards it."



Figure 13: Coding the "Activities" criterion

Activities proved to be a noticeably polarising criterion.

For some respondents, what goes on in the programme plays a key role in assessing it. The following statement expresses these respondents' ideas of good environmental education:

„Well, this should be very important. As for me, I think that those needs, those activities you could say, that very important, essential, because basically we would like the projects that are supported with our grant, that it's not like a school buys a trip to X, a stay. But to come up with some activities for the children, to say organise a year-long projects. I mean, so that it is, so that they are involved actively, not just buying a service. So this then, would be quite essential to us."

For another group, in contrast, this criterion is not important:

„It may be important from the organiser's point of view, but from mine - regional - this is what we interfere the least with. We leave them a free hand in this, and as for the scope, we leave that up to them... we don't meddle with how they should do it. What we care for is the outcome."


Figure 14: Coding the "Satisfaction" criterion

The reactive evaluation level was frequently perceived by respondents as crucial, yet also difficult to assess due to concerns about subjectivity. Respondents were sometimes aware of the discrepancy:

„...when it's not liked, we may even cancel it. We starve out a programme that doesn't hit home. Even though the assessment is, to be honest, subjective, or emotional sometimes. Nobody can map these things."

Respondents obtain information on fulfilling this criterion from informal sources, providers' final reports, or random inspections.

Although the Standards criterion was rated as "used sometimes", most of the respondents could not recall any standards in environmental education. If they were mentioned, they were standards related to schools, such as the Framework Curricula, or environmental consultancy.



Figure 15: Coding the "Audience" criterion

The size and type of the target group are perceived as important by most respondents; the number of participants plays the key role, as it is understood to be easy to measure. For others, the economic utilisation percentage is important.

 


Figure 16: Coding the "Learning" criterion

The impacts of the programme at the learning level, i.e., changes in knowledge, skills, attitudes and values, are perceived as important or crucial by all respondents. Differences exist in the methods they use to obtain data for such an assessment. One respondent mentioned sending questionnaires to schools; another requires an evaluation from the provider; yet another admits to his own "intuitive" assessment. Nearly all respondents stated that it was not clear to them how to assess the criterion:

„It is important, but it's hard to find out in your environmental education, and it's an awfully long-term thing, so you can't tell much from one project."

"We would like to, but so far I haven't seen a way how to assess that... Very interesting idea... I can imagine this as a sophisticated assessment method, but so expensive it would eat more money than the project."

The same problem was identified for the "Behaviour" criterion. Respondents state that assessment at this level would be important, but they do not know how to ensure it or whether such an assessment is in fact possible.

Nevertheless, respondents stated at the same time that they assess programmes against this criterion.

A similar situation occurred for the last criterion, i.e., assessment of programme impacts on the community and the environment. Respondents stated that they considered the criterion important and based their assessment on it. At the same time, they admitted they could not measure it:

„It is crucial, but hard to assess... It is always taken into account."

 

Some respondents believe that although they do not measure these impacts, they are able to assess them:

„It should certainly be crucial to me. We try to convince people here into sorting, also through these grants, and we keep monitoring it, so we always find it in the city too. Well, we don't do that (we don't check it or investigate what impacts it has) but we know it, I mean we know the impacts."

The analysis shows an interesting paradox. On the one hand, respondents say that they apply nearly all the criteria of the programme logical model in its assessment, and consider nearly all of them of equal importance. On the other hand, they admit that (particularly for assessment of the programme outputs) they do not know any relevant methods that would provide them with data necessary for the assessment. Only some of the regions and municipalities have written and published project assessment criteria. The application of most of the criteria is therefore more at the level of wishes, or the criteria are only assessed intuitively in some cases. This indicates an inadequate capacity of public administration bodies for formulating and assessing requirements on effectiveness of supported programmes.

3.3.3. Ideal Programme

The final stage of the interviews asked respondents to describe how they imagine the ideal environmental education programme.


Figure 17: Coding the "Ideal Programme" criterion

Respondents' ideas of the ideal programme are rather heterogeneous. Methodological recommendations were made most frequently: the programme should have a clearly defined goal and should target a relevant group. The requirement to use activating methods occurred repeatedly, sometimes with a reference to programmes the respondent was familiar with. Quality teaching aids and a programme duration of one day were each mentioned once.

Programme topics were mentioned relatively less frequently; according to respondents, providers should concentrate, for example, on "current issues in nature and landscape protection, touch global issues too". Besides environmental issues, ecological concepts were also mentioned once.

A larger share of the responses was directed at programme effectiveness. According to these, it is crucial that the programme has impacts:

„The ideal programme is one that has as broad an impact as possible, is successful - that all participants act in agreement with the project goals, important that a programme is successful in the media too."

As a rule, impacts refer to enhancing participants' knowledge. One respondent cited the inclusion of evaluation methods in the programme - revision and reflection with participants - as a criterion.

Moreover, the ideal programme should respond to the region's needs, i.e., correspond with its priority areas. The programme instructor should have at least a minimum teaching qualification and the communication skills to captivate the children and to keep the feedback between the instructor and the audience functional.

In addition, respondents stated several aspects that they miss in EE in their respective regions:

  • more programmes for adults - several respondents said that centres usually offer events for schools, while the region or municipality is interested in programmes for broader public, businesspeople, farmers, seniors, or other groups ("The centres focus a lot on children and youth and instruction programmes. What they fear and don't want to go for is the adult population. Playing in the sandpit, they're good at that, but to influence the decisive four fifths of the population over 18 years of age, they avoid that like the plague.");
  • focusing on other areas of "domestic ecology" than waste separation ("Most organisations have waste, nature protection, etc., well developed, but not enough room is given for alternative sources of energy and technologies.");
  • environmental education evaluation methods ("Methodology for functioning and evaluating effectiveness of EE. ...The only thing the Ministry of the Environment are capable of mapping is the amount of money that flows into it, but what falls out... Outcomes... Nobody can map those things. ...We forage for the purpose, and sometimes they can't express that.").

3.3.4. Providers' Reflection

The final part of the survey asked respondents in the Providers set to evaluate the methods that their regional or municipal bodies apply in evaluating them. Out of the 85 respondents, only 55 (59%) of the basic set responded. The remaining respondents either could not give any response or evaluated their co-operation with the region or municipality only in general terms.


Figure 18: Coding the "Reflection of Supporters" sphere

According to twenty-eight respondents, regions and municipalities do not evaluate environmental education providers and their programmes at all, or do so in a way that is not comprehensible to the providers:

„We are not quite aware of any coherent methods of evaluation that these institutions would apply to us."

 

Two respondents stated that they are evaluated by criteria other than factual ones, such as their political involvement:

 

„Our experience with the municipality is that they don't evaluate us mostly, and if they do, then unfortunately, not by the work we've done but concerning political involvement. They certainly don't evaluate us based on the work we've done, which we regret, because we do a lot of things within the town; plus the departments are not able to communicate that somehow, that we're doing something for the school, then it's under Environment, then for Regional Development, that they aren't able to evaluate us as a whole that works for the town, or in the town's interest. In most cases, something turns out that the municipality don't like and evaluate us based on that, that we've made a statement on a case that's going on and the municipality has a different view."

Another twenty-five respondents state that regions and municipalities only evaluate them based on formal information, such as whether funds were spent in accordance with the project, audience numbers, and the financial demands of events:

„If they evaluate, then it's qualitative - based on the numbers of learners and hours of instruction. I don't think there's any qualitative evaluation, it's rather quantitative. And the quality is basically up to us, we try to keep it up as much as we can, because of the very principle that we want to be sought-after, there to be interest in our programmes, and mainly for them to be of some consequence. But otherwise, I don't sense any evaluation by the region - I mean qualitative."

"I'm afraid they don't even evaluate us in any way. That what matters to them is that are events are attended."

Respondents frequently mention that evaluators have not come to the centre to see the programmes:

 

„Because the region doesn't really want to evaluate it. I'm not after completing some tables, but what we would appreciate, if the official had the chance to visit us and see. If there was that sort of experienced familiarity, not just from tables, evaluated."

According to two respondents, regions and municipalities try to evaluate the participants' satisfaction with the programme:

„Because they have the responses from our actual clients, be it citizens or schools, the relationship has now changed a lot, and I think they may even be evaluating us in operation now, in terms of the budget now, and quite as we would have expected, that they are beginning to realise it now."

 

Some respondents state possible reasons for insufficient evaluation of the centres by regions and municipalities. They are as follows:

  • little expertise in environmental education among evaluators ("It's us rather than them who are the experts in environmental education here, so perhaps it would be hard for them to evaluate us in terms of content. They will rather just watch what we're doing. To see if we have enough activities and what they can check is the numbers of participants.")
  • fear of results ("It's a bit of a classic. When you give out money, of course it's not in your interest to document that you've used the money wrongly. That means for most projects the inspection is purely formal in terms of content and fulfilment of goals, so there are three or four outcomes maybe that are necessary for accounting the project duly as approved, because often we apply for say 100,000 and only get 30,000.")
  • ignorance of evaluation methods ("I don't think the municipality is using such standard methods that we have managed to learn in our own initiative as part of developing the non-profit civil sector. I think like objectively that the municipality is in fact lagging behind in using these methods. They may praise the effort or even evaluate something, but we can see in many programmes that thanks to our own training for our own people and members, we are quite simply at a higher level and the municipality has a lot to catch up with in say community and development planning and project evaluation methods.")

Overall, it can be summarised that according to the providers, the supporters evaluate their work predominantly at the formal level, i.e., in terms of the inputs and products in the logical model. Programme outputs are not evaluated at all.

4. Discussion and Interpretation

Programme quality is understood in the text as programme effectiveness, i.e., degree of achieving its educational and instructional goals. Therefore, programme quality cannot be assessed without conducting an evaluation of its effectiveness. The quality of evaluation methods and the quality of programme design are two preconditions for effectiveness affecting one another. The survey showed a correlation between the evaluation level and selected variables related to the programmes, i.e., the degree of precision in programme goal formulation, elaboration of written programme preparation, and development of relationship within the group during the programme. This relationship can be assumed to work in both directions: centres that use good methodology in programme preparation will choose more effective evaluation methods than centres that choose less thorough approaches. At the same time, centres that apply effective evaluation methods will develop their programme more thoroughly and update it based on the results of the evaluation report.
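
The relationship reported here is a rank correlation between ordinal variables. The following is a minimal sketch of how such a correlation could be computed; the variable names, scale levels and scores are illustrative only, not the survey data or the original analysis.

```python
# Minimal sketch of a rank correlation between two ordinal programme variables;
# the scores below are illustrative only, not the survey data.
from scipy.stats import spearmanr

evaluation_level = [1, 2, 2, 3, 1, 4, 3, 2, 4, 1]  # e.g. 1 = no evaluation ... 4 = summative evaluation
goal_precision = [1, 2, 3, 3, 2, 4, 4, 2, 3, 1]    # e.g. 1 = vague goals ... 4 = precisely formulated goals

rho, p_value = spearmanr(evaluation_level, goal_precision)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```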

The assumption is of great practical applicability: it shows that efforts to improve the quality of environmental education providers' evaluation strategies cannot be separated from improving the quality of their programme design methodology. At the same time, a centre can be expected to start continuously improving the quality of the programmes it provides once it improves its programme design methodology and obtains the first evaluations of its programmes.

Given the nature of the qualitative survey, it is difficult to speculate on the number of centres with the potential to shift towards using evaluation as a routine part of their programmes within a short time. The survey indicates that approximately one third of the providers develop detailed written preparation for their programmes; approximately one sixth structure their programmes based on one of the modern learning models; one fifth work with the group actively during their programmes; and one sixth formulate the educational and instructional goals of the programmes correctly. Many of the programmes give the impression of an intuitive accumulation of activities and show deficiencies that restrain their potential effects.

Although the survey examined only a small fraction of the programmes on offer, the analysed programmes do show certain common patterns. Most of the programmes focus on the entry-level variables in the REB model, i.e., developing environmental sensitivity and knowledge of ecological concepts (Hungerford & Volk, 1990). Another large set of programmes focuses on developing empowerment knowledge and skills targeted at eco-management, i.e., behaviours in which persons interact directly with the environment and natural resources (waste separation, etc.). Part of the programmes evaluated also target the persuasion sphere (environmental education methodology). Programmes influencing ownership variables, i.e., a complex understanding of environmental issues, are less abundant. As a rule, programmes in this category fail to follow recommendations based on international research: they present problems as external and to be solved by experts, do not adequately develop the target group's research skills, and do not provide room for independent examination of a problem and of how it is reflected in society (environmental conflicts). Moreover, programmes focused on influencing other areas of proenvironmental behaviour, i.e., consumer, political and legal behaviour, are missing. Although the programmes evaluated constituted only a selection of the existing offer of EEP, it is likely that these areas (in particular, the political and legal aspects of behaviour) are not adequately represented. Part of the programmes evaluated do not target any of the relevant variables and probably miss the goals of environmental education.

If a learner attended all the programmes in the studied sample, they would probably develop a relationship to nature, a basic overview of ecology and global problems, and behaviour patterns related to waste separation and conduct in the open. However, they would probably not learn to examine problems independently, understand how others interpret them, formulate their own position on them, and decide how to reflect them in various aspects of their own everyday lives. Based on the analysed sample, it can be said that the offer of environmental education programmes does not cover all the key areas to the same extent, and that some key variables leading to proenvironmental behaviour are not influenced sufficiently by the existing offer.

The problems with programme design methodology are probably related to the providers' evaluation strategies. In one half of the centres, the instructor running the programme evaluates its effectiveness. Three quarters of the organisations only evaluate at the reactive level, i.e., degree of clients' satisfaction with the programme. Only one fifth use the standardised tool devised by Pavučina EECN for formative evaluation: the Evaluation Table. Only four of the studied organisations have experience with methodologically adequately conducted summative evaluation. At the same time, the majority of the centres are content with their work in environmental education, and therefore do not have enough internal motivation for change, although some of the centres realise the insufficient nature of their evaluation methods. In contrast, others view evaluation as useless administrative work or excessive work load; yet others consider environmental education to be unmeasurable. The centres' limited capacities and resources, often prohibiting deeper evaluation, are a related problem. If centres do evaluation, it stems from their internal needs; effectiveness requirements are seldom formulated by public administration and schools. The centres' satisfaction with their work in environmental education is largely the result of inadequate feedback that the centres receive on their programmes.

The analysis also indicated that even organisations interested in evaluating the effectiveness of their work lack the relevant know-how. The key problem here is the low level of involvement of external entities - evaluators from colleges, other EE workplaces, or other expert organisations conversant in the methodology of pedagogy research, programme evaluation, and environmental education theory.

A large group of environmental education providers regard the number of schools booking their programmes as the main source of feedback. Although this information seemingly corresponds to the "market logic", it must be understood that it measures only (and only partially) the degree of clients' satisfaction with the programme. Moreover, clients' satisfaction may be related to other factors as well, such as the existence of competition, which may be insufficient in many regions, and clients' ability to define their needs and assess the degree to which they have been satisfied. Since schools, the chief customers of environmental education providers, are not yet used to evaluating the degree of achievement of their own curricula in this area, i.e., they do not evaluate their learners' environmental literacy, the importance of this feedback must not be overrated.

Inaccurate and insufficient evaluation methods may lead to a long-term stagnation of the domain.

The programme offer evaluated indicates great differences in the quality of programmes run by the different centres. If these differences are not reflected in the amount of funding support provided by the subsidising bodies, an inequitable environment is created to a certain extent, and the motivation for natural quality development of environmental education in the Czech Republic is inhibited.

Regional and municipal authorities, being the providers of funding support, wish to assess the quality of programmes and presume that they are doing so. Yet their statements indicate that they do not have sufficient tools for measuring programme outputs. Since the officials in charge do not require the centres to provide evaluations of programme effectiveness and are content with assessing their quality formally, based on budgetary inspection or other formal information, they do not have sufficient information to assess programme quality either. Programmes are mostly assessed at the project application stage by committees, which sometimes include external members, but these are rarely experts in environmental education. Although it is theoretically possible to assess the likely effectiveness of a programme based on an expert analysis of its submitted preparation, this can, firstly, only ever be a qualified guess, and secondly, assessment committee members cannot be expected to be capable of such an assessment without extensive knowledge of environmental education theory and methodology. At present, therefore, decisions on granting funding support are not derived from the actual quality of the programmes.

An interesting question, which might be the subject of subsequent research, is the relation between the number of programmes run by a centre and their methodological quality and effectiveness. The interviews revealed that some centres regard the number of programmes offered as an indicator of the quality of their work; some centres run up to a hundred different programmes. It is difficult to imagine that a centre could ensure adequate quality across such a number of programmes, or evaluate each of them adequately. It may be that consistent evaluation would result not only in increased programme quality, but also in a reduction of their number and in the sharing of high-quality, established programmes by multiple centres.

At present, evaluation thus remains a matter of internal motivation at the centres that provide programmes of higher quality than others. Although inner motivation is essential for the decision to act (de Young, 2000), it would be advisable to consider measures to enhance it. According to Ajzen's (1991) theory of planned behaviour, three variables are preconditions for the intention to act: attitudes towards the behaviour, formed by the subject's beliefs about the consequences and benefits of the action; subjective norms, derived from the expected behaviour of reference persons; and conviction of the manageability of the action, stemming primarily from experience with the behaviour.
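
Schematically, and only as an illustrative rendering of this model (the weights are estimated empirically, not prescribed by the theory), the relationship is often written as:

BI \propto w_{1}\,A_{B} + w_{2}\,SN + w_{3}\,PBC

where BI denotes the behavioural intention, A_B the attitude towards the behaviour, SN the subjective norm, and PBC the perceived behavioural control, i.e., the conviction of manageability mentioned above.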

Environmental education providers' attitudes to evaluation currently draw only on their own inner motivation, which is inhibited by the financial and other demands of evaluation and is not adequately compensated. This opens room to strengthen pro-evaluation attitudes, for example by favouring, in grant competitions, programmes with documented effectiveness or new programmes with an appropriately designed evaluation mechanism and overall design.

In addition, organisations' resolve to evaluate could be strengthened by publishing the evaluation reports they develop, whether as articles in specialised journals or at conferences and workshops.

The conviction that evaluation is manageable can be encouraged, to some extent, by promoting methodological training for centres. Here, however, one must be aware of the differences between the requirements of formative and summative evaluation. Above all, providers should strive to create methodologically sound programmes with minimum deficiencies in the programme theory, and to eliminate flaws in presenting them. That is the subject of formative evaluation, which may be conducted with external assistance or within co-operating networks of centres. The Evaluation Table devised for the use of Pavučina EECN can be a suitable tool for both purposes.

Given the greater methodological requirements of summative evaluation, it is probably more advisable to commission this type of evaluation from external entities in academia, or to take advantage of co-operation within the Pavučina EECN. Possible alternatives include the establishment of a centre or platform focusing on summative evaluation and advising centres on autonomous evaluation efforts. Presenting the evaluation methodology in an easily accessible form might help as well.

5. Bibliography

[1] Ajzen, Icek. (1991) The Theory of Planned Behavior. Organizational Behavior and Human Decision Processes, 1991, vol. 50, p. 179-211. ISSN 0749-5978.

[2] Ardoin, Nicole M. Sense of Place and Responsible Behavior: What the Research Says. [online] Yale School of Forestry and Environmental Studies. [Cit. 2009-02-01]. Available at http://www.naaee.org/conferences/biloxi/n_ardoin_3_10008a.pdf

[3] Barch, Brian; Duvall, Jason; Higgs, Amy; Wolske, Kim; Zint, Michaela. Planning and Implementing an EE Evaluation. [online] [Last updated 2007-11-06] [Cit. 2008-07-20]. Available at http://66.135.39.45:7080/meera-dev/knowledge-base/plan-an-ee-evaluation/

[4] Bennett, Dean B. Evaluating Environmental Education in Schools. A practical guide for teachers. [online] UNESCO - UNEP, Division of Science, Technical and Environmental Education, 1989. Available at http://unesdoc.unesco.org/images/0006/000661/066120eo.pdf

[5] Činčera, Jan. (2007) Environmentální výchova. Od cílů k prostředkům. Brno: Paido.

[6] Činčera, Jan. (2008a) Evaluace programu Ekoškola [online]. Envigogika. Praha: Centrum pro otázky životního prostředí Univerzity Karlovy. Roč. 3, 2008, č. 2. Available at <http://www.czp.cuni.cz/envigogika> ISSN 1802-3061.

[7] Činčera, Jan. (2008b) Evaluační strategie středisek ekologické výchovy [online]. Envigogika. Praha: Centrum pro otázky životního prostředí Univerzity Karlovy. Roč. 3, 2008, č. 2. Available at <http://www.czp.cuni.cz/envigogika> ISSN 1802-3061.

[8] Činčera, Jan. (2007) Práce s hrou. Pro profesionály. Praha: Grada.

[9] Disinger, John F. (2005) Environmental Education's Definitional Problem. In Hungerford, Harold H.; Bluhm, William J.; Volk, Trudi L.; Ramsey, John M. Essential Readings in Environmental Education. Champaign: Stipes. ISBN 1-58874-469-8. P. 17-32.

[10] Disinger, John F. (1997) Environment in the K-12 Curriculum: An Overview. In Wilke, Richard J. Environmental Education. Teacher Resource Handbook. A Practical Guide for K-12 Environmental Education. Thousand Oaks: Corwin. P. 23-44.

[11] Ecosystem Management Initiative. Measuring Progress. An Evaluation Guide for Ecosystem and Community-based Projects. Ver. 3.0. [online]

[12] Ecosystem Management Initiative; School of Natural Resources and Environment; University of Michigan (2004) [2004-03-22] Available at http://www.snre.umich.edu/emi/evaluation/

[13] Excellence in Environmental Education - Guidelines for Learning (Pre K-12). (2004) North American Association for Environmental Education. [Cit. 2008-07-21]. Available at http://www.naaee.org/npeee/learner_guidelines.php

[14] Frank, L. S. (2001) The Caring Classroom. Using Adventure to Create Community in the Classroom and Beyond. Madison: Project Adventure.

[15] Frechtling, Joy et. al. (2002) The 2002 User-Friendly Handbook for Project Evaluation. The National Science Foundation. [online] [Cit. 2009-01-05] Available at http://www.nsf.gov/pubs/2002/nsf02057/nsf02057.pdf

[16] Gibbs, Jeane. (2001) Tribes. A New Way of Learning and Being Together. Windsor: Center Source System. 432 s. ISBN 0-932762-40-9.

[17] Grecmanová, Helena; Urbanovská, Eva. (2007) Aktivizační metody ve výuce. Prostředek ŠVP. Olomouc: Hanex.

[18] Hart, Paul; Nolan, Kathleen. (1999) A critical analysis of research in environmental education. Studies in Science Education, 1999, issue 34, p. 1-69.

[19] Hendl, Jan. (2008) Kvalitativní výzkum. Základní teorie, metody a aplikace. Praha: Portál. ISBN 978-80-7367-485-4.

[20] Henton, Mary. (1996) Adventure in the Classroom. Dubuque: Kendal / Hunt Publishing; Project Adventure. ISBN 0-7872-2459-6.

[21] Hogan, Christine. (2003) Practical facilitation. A toolkit of techniques. London: Kogan Page. ISBN 0-7494-3827-4.

[22] Hornová, Karolína. (2007) Evaluace výukového programu environmentální výchovy. Envigogika, 2007, č. 3. Available at http://envigogika.cuni.cz . ISSN 1802-3061

[23] Hungerford, Harold; Peyton, Ben R.; Wilke, Richard J. (1980) Goals for Curriculum Development in Environmental Education. The Journal of Environmental Education, 1980, Vol. 11, Issue 3, p. 42-47. ISSN 0095-8964.

[24] Hungerford, Harold R.; Volk, Trudi L. (1990) Changing Learner Behavior Through Environmental Education. The Journal of Environmental Education, 1990, Vol. 21, Issue 3, p. 8-21. ISSN 0095-8964.

[25] Hungerford, Harold R. (2005) The General Teaching Model (GTM). In Hungerford, Harold H.; Bluhm, William J.; Volk, Trudi L.; Ramsey, John M. Essential Readings in Environmental Education. Champaign: Stipes, 2005. ISBN 1-58874-469-8. P. 423-443

[26] Hungerford, Harold R. (2005) The Myth of Environmental Education - Revisited. In Hungerford, Harold H.; Bluhm, William J.; Volk, Trudi L.; Ramsey, John M. Essential Readings in Environmental Education. Champaign: Stipes, 2005. ISBN 1-58874-469-8. P. 49-56.

[27] Johnson, David W.; Johnson, Frank P. (2006) Joining Together. Group Theory and Group Skills. Boston: Pearson. 650 s.

[28] Kolb, David. (1984) Experiential Learning. Experience as The Source of Learning and Development. Prentice Hall.

[29] Kovalik, Susan J.; Olsen, Karen D. (1994) Kid's eye view of science. A Teacher's Handbook for Implementing an Integrated Thematic Approach to Teaching Science, K-6. Kent: Center for the Future of Public Education.

[30] Marcinkowski, Thomas. (2005) Predictors of Responsible Environmental Behavior. A Review of Three Dissertation Studies. In Hungerford, Harold H.; Bluhm, William J.; Volk, Trudi L.; Ramsey, John M. Essential Readings in Environmental Education. Champaign: Stipes. ISBN 1-58874-469-8. P.265-294.

[31] Marcinkowski, Thomas. (1997) Assessment in Environmental Education. In Wilke, Richard J. Environmental Education. Teacher Resource Handbook. A Practical Guide for K-12 Environmental Education. Thousand Oaks: Corwin, 1997. P. 143-198.

[32] Martin, Duncan. (2003) Research in Earth Education. Zeitschrift für Erlebnispädagogik, 23. Jhrg., Heft 5/6 (Mai/Juni), p. 32-47. ISSN 0933-565X.

[33] Matre, Steven van. (1999) Earth Education: a new beginning. Greenville: The Institute for Earth Education.

[34] O'Donoghue, Rob. (2007) Environment and Sustainability Education in a Changing South Africa: A critical historical analysis of outline schemes for defining and guiding learning interactions. Southern African Journal of Environmental Education. Learning in a Changing World. Vol. 24 (2007), p. 141-157. ISSN 1810-0333.

[35] Pike, Graham; Selby, David. (1994) Globální výchova. Praha: Grada. ISBN 80-85623-98-6. 322 s.

[36] Ramsey, John. (2005) Comparing Four Environmental Problem Solving Models: Additional Comments. In Hungerford, Harold H.; Bluhm, William J.; Volk, Trudi L.; Ramsey, John M. Essential Readings in Environmental Education. Champaign: Stipes. ISBN 1-58874-469-8. P. 161-172.

[37] Simmons, Bora et. al. (2004) Nonformal Environmental Education Programs: Guidelines for Excellence. North American Association for Environmental Education. Available at http://naaee.org/npeee/nonformal/nonformalguidelines.pdf

[38] Thomson, Gareth; Hofman, Jenn. Measuring the Success of Environmental Education Programs. [online] Canadian Parks and Wilderness Society; Sierra Club of Canada. [Cit. 2008-04-02] Available at http://www.peecworks.org/PEEC/PEEC_Inst/I00052276.0/ee-success.pdf

[39] Wessa P., (2009), Spearman Rank Correlation (v1.0.0) in Free Statistics Software (v1.1.23-r3), Office for Research Development and Education, URL http://www.wessa.net/rwasp_spearman.wasp/

[40] W.K. Kellog Foundation. Logic Model Development Guide. Using Logic Models to Bring Together Planning, Evaluation, and Action. [online] Battle Creek: W.K. Kellog Foundation, 2004. [Cit. 2008-07-20] Available at http://www.wkkf.org

[41] Wright, Michael J. (2008) The Comparative Effects of Constructivist Versus Traditional Teaching Methods on the Environmental Literacy of Postsecondary Nonscience Majors. Bulletin of Science, Technology & Society, August 2008, Vol. 28, n. 4, p. 324-337. ISSN 0270-4676.

[42] De Young, Raymond. (2000) Expanding and Evaluating Motives for Environmentally Responsible Behavior - Statistical Data Included. Journal of Social Issues. Vol. 56 (2000), n. 3. ISSN 0022-4537.

[43] Zelezny, Lynnete C. (1999) Educational Interventions That Improve Environmental Behaviors: A Meta-Analysis. The Journal of Environmental Education,1999, Vol. 31, Issue 1, p. 5-14. ISSN 0095-8964.

The research was conducted for the Ministry of the Environment of the Czech Republic as part of the commission „Analysis of the necessity and utilisation of environmental education".


[1] The term „programme" is understood herein as an interlinked group of activities that fulfil a common educational objective.

[2] A number of versions of the logical model are used. The present survey used a version where outputs at the learning level are understood as "outputs", transfer of what is learnt into behaviour is "outcomes", and long-term effects of the programme are the "impacts".