Thursday, April 17, 2008

Evaluation Project Thoughts

I have had to rethink my original ideas for my project due to restrictions within the area in which I work. So I have had another ‘head scratching’ session and picked something else that will be more suitable and will not create the same barriers.


For my project, I will focus on impact evaluation of an online module that we provide within a blended learning course. For many of our students, this module is their first experience of online learning and, as such, it is quite often met with little enthusiasm. I made reference in my previous blog to a model used by Phillips (2005) which illustrates a development cycle into which various stages of evaluation fit, namely: Design, Production, Implementation and Maintenance. I wish to look at the third phase of evaluation and identify whether students are able to apply what they are learning and, in particular, whether or not they are motivated to pursue further study using the same methodology.


I feel that my previous thoughts regarding paradigm and models are still appropriate to my new project, namely the Eclectic-Mixed Methods-Pragmatic Paradigm, utilising either Stake’s Responsive or the Multiple Methods evaluation model. However, I have had to revisit the guidelines I will use, as the ones previously posted do not appear appropriate. I have therefore selected the following, which I have adjusted to suit my ideas (as suggested by Bronwyn - thanks).


TD3 Does the e-learning encourage a realistic progression towards self-direction? Does it recognise varied starting points in confidence and motivation?


SD5 Do students acquire the learning skills that promote staircasing to higher learning?
SO8 Do students get guidance on study skills for the e-learning environment?


SO10 Do students get an explanation of any differences between the e-learning modules and a more familiar approach?


I feel that all of the above guidelines follow a similar pattern, and I will probably look at paring them down to perhaps two once I have formulated my plan a little further.

Sunday, April 6, 2008

Weeks 5 & 6: Evaluation Methods

Although not clearly determined yet, my evaluation project will most likely focus on methodology within an existing structure and, as such, I have been looking at articles that encompass summative evaluation with monitoring or integrative evaluation. I am not sure at this stage that formative evaluation is relevant, but I am open-minded about methods in this category if further development could be considered practicable within my area.

I found that the article by Rob Phillips, “We can’t evaluate e-learning if we don’t know what we mean by evaluating e-learning!”, raised several interesting points.

In particular, Phillips states that in order to study the effectiveness of e-learning products, a mixture of evaluation and research needs to be employed. This, to me, concurs with the multiple methods model associated with the eclectic-mixed methods-pragmatic paradigm. In this article, Phillips illustrates a model, proposed by Bain (1999) and derived from work by Alexander and Hedberg (1994) and Reeves and Hedberg (2002), which identifies the various stages of the evaluation process, from analysis and design evaluation (front-end analysis) to institutionalisation (monitoring or integrative evaluation). To gain meaningful evaluation results within an e-learning environment, it would appear that a variety of techniques needs to be employed, depending on whether the focus is on qualitative or quantitative data and on the needs of the research. Phillips refers to three independent studies carried out to research students’ use of e-learning, each using one of the recognised educational paradigms: analytic-empirical-positivist-quantitative, constructivist-hermeneutic-interpretivist-qualitative and critical theory-neomarxist-postmodern-praxis. The findings provided relevant insights, but each study had weaknesses which Phillips suggests could have been addressed by the use of a mixed methods approach.
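To make the mixed methods idea a little more concrete for myself, here is a minimal, purely illustrative sketch of how a quantitative strand (module activity data) and a qualitative strand (coded interview themes) might be brought together per student. All names and figures are hypothetical; this is not from Phillips’ article.

```python
# Quantitative strand: e-learning module activity (hypothetical data)
logins = {"student_a": 24, "student_b": 3, "student_c": 15}
quiz_scores = {"student_a": 82, "student_b": 45, "student_c": 74}

# Qualitative strand: themes coded from interview transcripts (hypothetical)
interview_themes = {
    "student_a": ["confident", "would study online again"],
    "student_b": ["anxious", "prefers face-to-face"],
    "student_c": ["neutral", "wanted study-skills guidance"],
}

# Pair the two strands so each student reads as a single 'case':
# the numbers show *what* happened, the themes hint at *why*.
for student in sorted(logins):
    print(f"{student}: {logins[student]} logins, score {quiz_scores[student]}, "
          f"themes: {', '.join(interview_themes[student])}")
```

Even a toy pairing like this shows how one strand can compensate for the other’s weakness: student_b’s low score means little on its own until the interview themes suggest anxiety about the medium.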

Whilst the Multiple Methods Evaluation Model would appear to be a good match, I also feel consideration should be given to Stake’s Responsive Evaluation Model, which recognises the need for more focus on evaluation processes and methods that promote continuous communication, thus allowing questions to emerge during the evaluation process. I feel that both models support an ‘open-minded’ approach, which would accommodate the constantly moving target that evaluation seems to be.

References:
Alexander, S., & Hedberg, J. G. (1994). Evaluating technology-based learning: Which model? In K. Beattie, C. McNaught & S. Wills (Eds.), Multimedia in higher education: Designing for change in teaching and learning. Amsterdam: Elsevier.
Bain, J. (1999). Introduction to special issue on learning centred evaluation of innovation in higher education. Higher Education Research & Development, 18(2), 165-172.
Phillips, R. (2005). We can’t evaluate e-learning if we don’t know what we mean by evaluating e-learning! Interact, 30, 3-6. Learning Technology Support, University of Bristol.
Reeves, T. C., & Hedberg, J. G. (2002). Interactive learning systems evaluation. Educational Technology Press.

Another article that I found appropriate to my research was the following:
“Learning through online discussion: A case of triangulation in research” by Michael Hammond and Mongkolchai Wiriyapinit

This article focuses on online discussion and its role within e-learning and distance learning programmes. It identifies that a commitment to student-student and student-tutor interaction is an important feature, which concurs with the constructivist approach to e-learning. The studies carried out focus on triangulation, and the findings from each of the methods used were examined for consistency (where findings matched) and contrast (where findings were contradictory). It was, however, identified during the study that some of the surveys carried out did not adequately measure the likely variables. The analysis of the data therefore points to ensuring that the evaluation methods chosen adequately reflect the purpose of the research. Notwithstanding the strengths and weaknesses of the study, it:

“Reinforced the case for triangulation and showed three major advantages.
  • There were some perspectives which could only be accessed via one method e.g. students’ management of time, their engagement with reading and approaches to composing messages only emerged clearly during interviews.
  • Findings from one method could be put in a wider perspective through comparison with those from other methods, e.g. students’ accounts of their online activity could be compared to the objective data concerning frequency of message postings.
  • Consistency between findings gave a greater authority in reporting, e.g. the claim that students valued the module and adopted a task focused approach to group work is credible”.

If my area for evaluation and proposed project centres on an existing e-learning structure, then I believe triangulation and bracketing approaches merit strong consideration.
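As a note to myself on what the consistency/contrast checking described above might look like in practice, here is a minimal, hypothetical sketch (not from the article) that compares students’ self-reported online activity from a survey against objective posting counts from discussion-board logs, flagging where the two methods agree or contradict each other. The threshold and all data are invented for illustration.

```python
# Survey strand: students' own estimate of their activity ("low"/"high")
self_report = {"student_a": "high", "student_b": "low", "student_c": "low"}

# Log strand: actual number of messages each student posted (hypothetical)
posts = {"student_a": 31, "student_b": 2, "student_c": 18}

HIGH_THRESHOLD = 10  # arbitrary cut-off separating "low" from "high" posters

for student, reported in sorted(self_report.items()):
    observed = "high" if posts[student] >= HIGH_THRESHOLD else "low"
    verdict = "consistent" if observed == reported else "contrast"
    print(f"{student}: survey={reported}, logs={observed} "
          f"({posts[student]} posts) -> {verdict}")

# Consistent cases lend greater authority when reporting; contrasting cases
# (like student_c here) flag where one method alone could mislead.
```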


At this point, I think I need to ‘scratch my head’ and identify an evaluation project that will justify further research and help me identify a relevant and appropriate structure.
Reference
Hammond, M., & Wiriyapinit, M. (2005). Learning through online discussion: A case of triangulation in research. Australasian Journal of Educational Technology, 21(3), 283-302.