Sunday, April 6, 2008

Week 5 & 6 Evaluation Methods

Although my evaluation project is not clearly determined yet, it will most likely focus on methodology within an existing structure, and as such I have been looking at articles that encompass summative evaluation with monitoring or integrative evaluation. I am not sure at this stage that formative evaluation is relevant, but I am open-minded about methods in this category if further development could prove practicable within my area.

I found that the article by Rob Phillips – “We can’t evaluate e-learning if we don’t know what we mean by evaluating e-learning” – raised several interesting points.

In particular, Phillips states that in order to study the effectiveness of e-learning products, a mixture of evaluation and research needs to be employed. To me, this concurs with the multiple methods model associated with the eclectic-mixed methods-pragmatic paradigm. In this article, Phillips illustrates a model, derived from work by Alexander and Hedberg (1994) and Reeves and Hedberg (2002) and proposed by Bain (1999), which identifies the various stages of the evaluation process, from analysis and design evaluation (front-end analysis) through to institutionalisation (monitoring or integrative evaluation). To gain meaningful evaluation results within an e-learning environment, it would appear that a variety of techniques need to be employed, depending on whether the focus is on qualitative or quantitative data and on the needs of the research. Phillips refers to three independent studies carried out to research students’ use of e-learning, each using one of the recognised educational paradigms: analytic-empirical-positivist-quantitative, constructivist-hermeneutic-interpretivist-qualitative, and critical theory-neomarxist-postmodern-praxis. The findings provided relevant insights, but each study had weaknesses which Phillips suggests could have been addressed by the use of a mixed methods approach.

Whilst the Multiple Methods Evaluation Model would appear to be a good match, I feel consideration should also be given to Stake’s responsive evaluation model, which recognises the need for evaluation processes and methods that promote continuous communication, allowing questions to emerge during the evaluation process. I feel that both approaches support an ‘open mind’ stance which would accommodate the constantly moving target that evaluation seems to be.

References:
Alexander, S., & Hedberg, J. G. (1994). Evaluating technology-based learning: Which model? In K. Beattie, C. McNaught & S. Wills (Eds.), Multimedia in Higher Education: Designing for Change in Teaching and Learning. Amsterdam: Elsevier.
Bain, J. (1999). Introduction to special issue on learning-centred evaluation of innovation in higher education. Higher Education Research and Development, 18(2), 165-172.
Phillips, R. (2005). We can’t evaluate e-learning if we don’t know what we mean by evaluating e-learning! Interact, 30, 3-6. Learning Technology Support, University of Bristol.
Reeves, T. C., & Hedberg, J. G. (2002). Interactive Learning Systems Evaluation. Educational Technology Press.

Another article that I found to be appropriate to my research was the following:
Learning through online discussion: A case of triangulation in research, by Michael Hammond and Mongkolchai Wiriyapinit.

This article focuses on online discussion and its place within e-learning and distance learning programmes. It identifies a commitment to student-student and student-tutor interaction as an important feature, in keeping with the constructivist approach to e-learning. The studies carried out focus on triangulation, and the findings from each of the methods used were examined for consistency (whether findings matched) and contrast (whether findings were contradictory). It was, however, identified during the study that some of the surveys carried out did not adequately measure likely variables. Analysis of the data therefore points to the need to identify evaluation methods that adequately reflect the purpose of the research. Notwithstanding the strengths and weaknesses of the study, it:

“Reinforced the case for triangulation and showed three major advantages.
  • There were some perspectives which could only be accessed via one method e.g. students’ management of time, their engagement with reading and approaches to composing messages only emerged clearly during interviews.
  • Findings from one method could be put in a wider perspective through comparison with those from other methods, e.g. students’ accounts of their online activity could be compared to the objective data concerning frequency of message postings.
  • Consistency between findings gave a greater authority in reporting, e.g. the claim that students valued the module and adopted a task focused approach to group work is credible”.

If my area for evaluation and proposed project centre on an existing e-learning structure, then I believe there would be a strong case for considering triangulation and bracketing approaches.


At this point, I think I need to ‘scratch my head’ and identify an evaluation project that will justify further research and help me identify a relevant and appropriate structure.
Reference
Hammond, M., & Wiriyapinit, M. (2005). Learning through online discussion: A case of triangulation in research. Australian Journal of Educational Technology, 21(3), 283-302.

1 comment:

Bronwyn Hegarty said...

Hilary you are the second person to favour Stake's responsive model. Yvonne has also mentioned this as a possible model for use in her project - see: http://yvonne-learns-online.blogspot.com/2008/04/project-thoughts.html

Yes, I also like the way Phillips critiques three different studies using three different paradigms in his paper, and affirms the use of a mixed-methods approach.

The second paper illustrates, as you say, the importance of evaluating from a broad range of angles and perspectives.

I wonder if there are some existing courses and/or resources which have been running for a while in your area that might benefit from an impact evaluation. That is, how are people applying what they are learning, and are they going on to do further study as a result?