p@psyche-zine on educational and instructional psychology |
The first educational-psychological e-zine on the Internet, since 1996 |
ISSN 1561-2503 |
6. Jahrgang 2001 |
Marc Jelitto
University of Lüneburg, Germany
Essay for the course
"Certificate in Online Education and Training (OET)"
http://www.fz.uni-lueneburg.de/FERN/3-STU-PRO/Projekte/OET.htm
This document is available online at: http://MarcJelitto.de/lernen/oet_eval.htm
I often quote parts of my own web sites
"www.evaluieren.de" and "www.marcjelitto.de". Instead of writing out the
full citation, I will only give the URL. For example, http://www.evaluieren.de/evaluat.ion/definiti.htm#synonyme
is to be read as:
Jelitto, Marc (2000): Definitionen für Begriffe rund um
Evaluation : Definitionen aus dem Evaluationsbereich. http://www.evaluieren.de/evaluat.ion/definiti.htm#synonyme,
last checked XX.05.2000.
All URLs in this text were checked in May 2000.
In cases where I'm not sure of the right translation, I will
add the German word in brackets (Klammern). This is my first English
text in 12 years, so please excuse any mistakes.
In this text I want to give an overview of how online education can be evaluated. I find this a very important topic because I know from experience how important evaluation is when producing software (while developing software for an environmental center, we found a mistake too late to repair it). In my seminars at the University of Lüneburg (all face to face) I give the students a short questionnaire before the course begins and another one at the end. This way I know what the students expect and what I have to improve next time. Evaluation of online education is a new field for me (so far I have only looked at the evaluation of the WWW, CD-ROMs and computers used in exhibitions), and therefore I want to work out answers to the following questions: "Why should online education be evaluated?" (see goals below), "What can be evaluated?" (see targets), "How can online education be evaluated?" and "What is the outcome of evaluation?" (see results).
I won't tell you what has been found out by evaluating online education. There are a lot of links to evaluation results below, but I have only used those texts to find information about research possibilities in this area.
In the era of lifelong learning there are many forms of learning. One can learn alone (self-study, for instance reading books), in informal groups (going to a museum with family or friends) or in organized groups with professional support (supported learning in a school class, university seminars, commercial seminars). Organized groups use two traditional modes of learning: face-to-face instruction (F2F) and distance learning. In the information age, computers gain more and more influence on these forms, starting from the use of web pages instead of books while learning alone and going up to complete online courses. In professional education, traditional learning was first supported by the computer (learning software on disk, CD-ROM and today on DVD). Today online learning plays an increasing role, and I think three forms can be distinguished: traditional courses can be online-supported (a list of Internet addresses or a chat room); online courses can have F2F meetings; or courses can be purely online.
Online education currently has four learning contexts. It is used at school, for pupils in the afternoon (Nachmittagsmarkt), at institutions of higher education like universities, and in adult education. Each of these contexts has a different influence on the way of teaching, learning and, last but not least, evaluating.
Today online education means the use of a computer to pull and push data via the Internet or (decreasingly) via bulletin board systems (mailboxes).
There are a lot of different techniques that can be used in online education. Paulsen (1995) lists many of them, starting with one-alone techniques like online databases, online journals, online applications, software libraries, online interest groups and interviews. Then he describes one-to-one techniques like learning contracts, apprenticeships, internships and correspondence studies, followed by one-to-many techniques like lectures, symposiums and skits. Many-to-many techniques are debates, simulations or games, role plays, case studies, discussion groups, transcript-based assignments, brainstorming, Delphi techniques, nominal group techniques, forums and project groups. He found some techniques not yet utilized in CMC (computer-mediated communication), like in-basket exercises, panels, committee hearings, cognitive networks and jigsaws. Salmon (12.1997) worked out another overview of 21 techniques, Bubenheimer (02.2000) describes the use of e-mail in schools, and Brammerts (10.1999) runs a portal site for tandem learning. To apply these techniques, many different software products and learning styles are used.
A lot of other terms play a role in online education: tele-learning, tele-teaching, virtual learning, online learning, web-based training and so on. For further terms and abbreviations please look at http://MarcJelitto.de/lernen/abkuerz.htm.
Many definitions of "evaluation" from dictionaries look like this example:
e-val-u-ate tr.v. e-val-u-at-ed, e-val-u-at-ing, e-val-u-ates. (American Heritage Dictionary, 1995) |
It is not clear where the word "evaluation" comes from. Maybe from the English value (Der Große Brockhaus (1978) p. 590 f.), from the French évaluer (Meyers Großes Universallexikon (1981) p. 540), from the Old French evaluer (American Heritage Dictionary, 1995), or the Latin valere (Brockhaus, 1997, p. 716). In German the word evaluation is often used in the context of schools and lessons. Manfred Karbach (01.2000) describes the history of the word in the German language (in the beginning the word "Evalvation" was used); he argues that instead of "Evaluation" the word "Lehrplanbeurteilung" (assessment of the curriculum) should be used. But today the word evaluation is used in many more fields. For example, you can find three definitions from the field of media research at http://www.evaluieren.de/evaluat.ion/definiti.htm#Medien (in German).
In this text I want to use the definition of an Austrian evaluation expert in the field of multimedia learning, Peter Baumgartner. He says that "evaluation comprises all activities or results which help to judge, assess or rate the merit, usability, (cash) value, importance, functionality ... of something. Only this widened understanding of evaluation does justice to the characteristics of the special fields of evaluation and helps in finding a theory of evaluation." (German version: http://www.evaluieren.de/evaluat.ion/definiti.htm#evaluation, Baumgärtner (1999) p. 71.)
A lot of other terms (synonyms) are used in this field, for example judgement or assessment (Beurteilung, bewertende Analyse, Bewertung), experience utilization (Erfahrungsauswertung), achievement comparison (Leistungsvergleich), and test (Test). Other terms are quality measurement, quality assurance, quality judgment and quality control (Qualitätsmessung, Qualitätssicherung, Qualitätsbeurteilung, Qualitätskontrolle), and determination of value (Wertbestimmung) (from http://www.evaluieren.de/evaluat.ion/definiti.htm#synonyme).
There are several forms of evaluation. Three of them are distinguished by time. Front-end evaluation (assessment) means evaluation before doing something (like developing a course) to find out background information (Evaluation in der Vorbereitungsphase, Grundlagenermittlung). Formative evaluation is done while something happens, in order to improve it. Summative evaluation is made after something is finished. In the literature I found another differentiation: process evaluation means analyzing a process (like leading a course), while product evaluation is done by evaluating a product (like a course). One more point should be mentioned here: self-evaluation (by teacher or learner; Selbstevaluation) and external evaluation (an external evaluator or the use of external results; Fremdevaluation) make an important difference. (Tergan, 2000)

"Quantitative evaluation relies on a breadth of response and is patterned after experimental research focused on the collection and manipulation of statistically relevant quantities of data. In contrast, qualitative evaluation focuses on a depth of response, using more subjective methods such as interviews and observation to query a smaller number of respondents in greater depth. Qualitative approaches may be of special value because the diversity of distant learners may defy relevant statistical stratification and analysis. The best approach often combines quantitative measurement of student performance with open-ended interviewing and non-participant observation to collect and assess information about attitudes toward the course's effectiveness and the delivery technology." (Gottschalk, 10.1995)
Before starting an evaluation, it is important to keep in mind some standards and ethical guidelines. The "Arbeitsstelle für Evaluation pädagogischer Dienstleistungen" (11.1999a) gives an overview of guidelines for good evaluation, with links to the "Ethical Standards of AERA" (aera.net, 06.1992), the "Guidelines for the Ethical Conduct of Evaluations" (Australasian Evaluation Society, 01.1998) and others. I found two more sources of information, the Zentrum für Umfragen, Methoden und Analysen (11.1999) and the Schweizerische Evaluationsgesellschaft (seval) (1999).
There are a lot of reasons for evaluation. When starting out in online education, you must find out how online education works and which tools and techniques you should use; for this you need evaluation know-how. Often you want to improve your teaching and need information. If online education is the result of a commission (Auftrag), the customer wants a report on the success for his money. Sometimes a law demands an evaluation (see the German and Austrian laws at the Arbeitsstelle für Evaluation pädagogischer Dienstleistungen, 11.1999b). Teachers need material to give marks to their pupils. Researchers want to do basic research. In some cases students demand an evaluation. A good example is my own collection of negative experiences in the OET course (see appendix "Students demand an evaluation - an example"). I read about some problems in the e-mails and felt dissatisfied, so I decided to collect the problems and negative feelings in a list. After seeing the problems of the others I felt better, and I hope some points will be improved in the next course.
When you know why you evaluate, you should look at who will use the results (the audience for which the evaluation is intended). If it is you, no big problem will appear. If your pupils want to know why they got a bad mark, you should be able to give them some arguments. If it is somebody in a department who doesn't know what you are doing, you may have to evaluate in another way to get the best result.
Once you know why and for whom you evaluate, you can start to work on content. First you have to decide what you want to find out. There are many possible goals for an evaluation, from finding out which software to use for an online seminar, to looking at the costs of developing one hour of learning material, to gathering information about students in order to help them learn in the most effective way.
After defining the goal, you have to decide what to analyze. These targets range from software to learning material to messages from the students, from log files of the use of an Internet page to chat participation to final works.
With the knowledge of goals and targets, it is time to decide which methods and techniques you want to use. There are easy-to-use checklists and questionnaires; you can look at the work of the students, ask an expert, and so on.
After you have picked one method or made a method mix, you have to keep in mind the advantages and problems of evaluation (not covered in this text), not forgetting the standards and ethical aspects (see above). In most cases it is useful to tell the people why you evaluate. Now you can evaluate.
After the evaluation you have to summarize the results and work with them, for example improve the seminar or send a report to a department (or hide the result, if it is not a good one).
Oliver (1999a) describes the following steps of an evaluation procedure: stakeholder analysis, refining the evaluation question, selecting a methodology, selecting data capture methods, selecting data analysis methods and presenting the findings.
In this part I describe some goals that can be reached by evaluation. I have sorted the goals so that they fit front-end, formative and summative evaluation. I will start with typical goals of front-end evaluation.
It is interesting to find out what the learners think about using online education before planning a course. In the winter semester 1999/2000 I held two weekend seminars with different students, titled "exhibition medium computer" (Ausstellungsmedium Computer). At the end of each weekend I asked the students whether they would prefer to do the seminar on "one weekend", "weekly" or "completely online". To "completely online" I added: "This would mean a kick-off meeting where you get materials to read, everyone gets a small task, and additionally there will be online discussions. At the end there will be a final meeting with an evaluation of the course." One student wrote nothing, one said weekly, and eleven said they would prefer the weekend. Nobody wanted the online course. I assume the students enjoyed the locations of the learning (the first day took place in an environmental center with an exhibition the students worked in; on the second day we worked together in a computer room at the university) and their importance for the topic. (There is a German description of the seminar at http://www.evaluieren.de/jelitto/lernen/seminar/ws99_00/bisher.htm; the official page of the seminar can be found at http://www.uni-lueneburg.de/infu/team/jelitto/lehre2.htm.) But as an optimist I think that if I offered this seminar to all students in Germany, it would work.
It is also important to find out what the teaching staff think about using online education. It is not useful to invest a lot of money in hardware and software if nobody is interested in using them. In Germany professors can be forced to use multimedia in their lessons; the "Kultusministerien" (ministries of education of the German federal states) decided this (Kultusministerkonferenz, 10.99). I am not sure whether this raises the quality of teaching.
If you want to start online education, there are a lot of points to consider. For example, before developing materials for online education it is important to find the best learning environment, because the kind of material depends on the technical abilities of the learning environment.
If you want to use commercial software to deliver content to the learners, you have to find good learning software. You could test it yourself, but there are several problems. Unlike a book, in most cases a CD-ROM cannot be tested in a shop. Sometimes there is a lot of software for the same topic, so you do not have the time to check it all yourself. In this case you can use the results and experiences of others, like awards or reviews (see below under results).
It is important to find out in which ways and how much multimedia is already used before starting a project of your own, so that you can use the positive and negative experiences. The Hochschul-Informations-System (05.1997) looked at the use of multimedia in teaching at institutions of higher education in Germany. As the result of a survey, there is a database on the use of multimedia for teaching in Lower Saxony (Niedersachsen) (Landesarbeitskreis Multimedia, 4.2000).
It is interesting to find out advantages and problems of online education for later use. These can be general points to watch out for, but also evaluations depending on specific software or teaching techniques.
Information about the motivation of teachers and students is an important field of evaluation. Early studies showed a motivational push through the new media, but here the "Hawthorne effect" must be mentioned: every new thing gives a motivational push. I found one article by Warschauer (1996) that describes motivational aspects.
Something that is done regularly is to set up a comparison between different learning methods (or combinations of methods), for example learning face to face versus tele-learning.
Some people say that evaluating the behavior of learners is not acceptable because of their personal rights. I think it is important to help students find out where they stand and to give them specific hints for their work. It is like in school, when teachers look at homework, tests, cooperation and so on. And by using log files and other techniques, the teacher is able to look at the learning of everyone; not only those who take an active part in the lessons, but also the lurkers.
Some things can only be evaluated while a course is running; this is called formative evaluation. One point is to find out problems with international participants. There are problems with the use of language by non-native speakers, the discussion backgrounds differ (what does an American understand by higher education, and what does a German?), the use of abbreviations can be confusing (HE, F2F, CMC) and so on. One result could be the "Guidelines for Online Writing in English as an International Language (EIL)", as shown in the appendix.
Evaluation should be a standard tool for finding information to optimize the course. It can be as simple as finding errors or as difficult as finding out motivational obstacles.
Summative evaluation takes place after something has ended, for example after the end of a seminar. Then you can assess the results or find out whether the learners passed the lesson.
With targets I want to give an overview of what can be evaluated in order to reach the goals of evaluation.
The basis of any online course is the software, which can be divided into different classes. Technical software like the learning environment, the e-mail program and the browser are basic instruments of online education. Picture collections and information databases deliver material for developing learning materials, and learning software with ready content can be used instead of self-developed software. Software can be checked for capabilities, disadvantages, ease of use, quality and so on. It is also interesting to check services (Dienstleistungen) like hiring a server or an online learning environment. Using ready-made materials or analyzing an existing course requires a view of quality, usability, relevance, combination and so on. Evaluating the learner means looking at learning type, quality of participation, success of learning and more. Looking at a teacher, the way of teaching and the reaction to students can be examined. Instead of evaluating parts of a seminar, like materials or people, you can evaluate a seminar as a whole. Unlike a seminar together with other people, a self-learning course needs other points of view. Looking at the efficiency of online education, you should check costs (see Arvan and others (09.1998), Moonen (08.1997), Oliver and others (1999b)) and the time needed (Zeitbedarf), both for teacher and learner. The number of messages can be interesting, but more interesting is the quality of the discussion. Here lies the problem of lurkers (people who only read but do not take part in the discussion). It is also interesting to look at the outcome of a course; this can be final reports, essays, produced web pages or guidelines.
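Counting messages per participant, as mentioned above, is easy to automate. The following is only a minimal sketch with invented names and message counts; a real course would read the authors from the discussion forum or mailing-list archive.

```python
from collections import Counter

# Hypothetical data: enrolled participants and the author of each
# posted message (invented for illustration only).
participants = ["Anna", "Ben", "Carla", "David"]
messages = ["Anna", "Anna", "Ben", "Anna", "Ben"]

counts = Counter(messages)
for person in participants:
    n = counts.get(person, 0)
    # A participant with no messages at all is a lurker.
    status = "lurker" if n == 0 else f"{n} message(s)"
    print(person, "->", status)
```

Such a count says nothing about the quality of the discussion; it only shows who is silent, so the teacher knows whom to look at more closely.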
Interesting is the existence of media breaks (Medienbrüche). This means that a course is completely digital, but the final paper must be handed in on paper. I have the feeling that this text I am writing now is such a break. Instead of writing an essay of 6000 words, I would prefer to build out the web page of the seminar I developed through this OET course at http://MarcJelitto.de/lernen/seminar/ and write some descriptions of why I did what I did while planning and developing this course. I can understand this problem, because it is not easy to change the examination regulations. But I am happy that my students are allowed to give me a digital work with a description on paper.
On my web page http://www.evaluieren.de/evaluat.ion/methoden.htm I have listed a lot of different methods that can be used to evaluate digital media. Here I want to give an overview of the most important methods available for online education. I have sorted the evaluation methods into two fields - the classical ones used in other areas, and specific ones newly appearing with the new media.
Classic methods can be divided into four parts: the use of experts, tests by persons, using the results of others, and looking at the use in real life. Experts can be the teacher himself or an external person. An expert can check the content of a software product, look at how fast information can be found, or use a checklist. Furthermore, he can make an assessment (from free - just explaining his impression - to using a checklist), draw a comparison between different versions, make an ISO 9000 quality check, or judge the development; or two or more people can analyze a software product from different points of view.
Tests by persons are often used in formative evaluation to test software before selling it (observation). You can watch and listen to a person using software directly, via a mirror, a camera or a microphone. Watching several persons working at the same computer, you can analyze their behavior and their talk. Thinking aloud ("Lautes Denken") means that the user talks about his way of using the software and the feelings and thoughts he has. It is possible to record video and sound to check them later. After making a video of a person, it can be shown to the user so he can comment on it. You can use a questionnaire online, via telephone, in writing or in person. You can conduct interviews. By asking before a learning course and afterwards, you can try to find out what the user has learned (Lernerfolgsmessung). You can check whether persons can use their new knowledge in other contexts (Transferanalyse). You can let people solve tasks and see how well that works. It can be helpful to categorize something to make it comparable with other products. If you have a high-tech testing room, you can do eye tracking (following the movement of the eyes with a camera) and medical tests.
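The pre-test/post-test idea (Lernerfolgsmessung) can be turned into a simple number. One common way, sketched here with invented scores, is the normalized gain: how much of the possible improvement a learner actually achieved.

```python
# Sketch of measuring learning success with a pre-test and a post-test.
# The scores below are invented examples, not data from a real course.
def learning_gain(pre: float, post: float, max_score: float) -> float:
    """Normalized gain: achieved improvement / possible improvement."""
    if max_score <= pre:
        return 0.0  # no room left for improvement
    return (post - pre) / (max_score - pre)

# A learner scoring 40 of 100 before the course and 70 of 100 afterwards
# achieved half of the possible improvement:
print(learning_gain(40, 70, 100))  # 0.5
```

The advantage over the raw difference (post minus pre) is that learners who start high and therefore cannot improve much are not automatically rated worse.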
A good starting point for evaluation is to look at the results of others. You can make a metastudy (a study about studies, Metastudie). You can collect the first impressions of persons, self-reflections and their reports of use. You can check newspapers, books and databases for reviews, and look at awards and elections of the best product of the year.
Looking at practice means looking at the economic value: where, when and why the product is used, whether the user is happy with the software, and whether there are a lot of technical questions.
The methods above I call classical evaluation methods; a few more can be added for digital media and online education. They cannot be used in every case, because it depends on the software in use. Often log files give you information about the way a user learns, what has been seen, how long somebody was online and so on. It is interesting to look at how often and from which places a help page is called. Using an online quiz gives you the chance to collect information that can be analyzed mostly automatically. When developing software, you can use beta versions for testing and look at the results of registrations. Delivered materials need tests of how they work on different platforms (Mac, Windows and Unix) and whether they can be printed out correctly.
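As a small illustration of log-file evaluation: the sketch below counts how often the help page is called and how many pages each user visits. The log format (user, page, time) is invented for the example; real server log files have other formats and usually no user name, only an address.

```python
from collections import defaultdict

# Invented log lines in a simplified format: user, page, time of access.
log_lines = [
    "anna /lesson1.htm 10:00",
    "anna /help.htm 10:05",
    "ben /lesson1.htm 10:02",
    "anna /lesson2.htm 10:15",
    "ben /help.htm 10:20",
]

pages_per_user = defaultdict(list)
help_calls = 0
for line in log_lines:
    user, page, time = line.split()
    pages_per_user[user].append(page)
    if page == "/help.htm":
        help_calls += 1  # how often the help page was needed

print("help page called", help_calls, "times")
for user, pages in pages_per_user.items():
    print(user, "visited", len(pages), "pages")
```

From the same raw data one could also reconstruct the order of pages per user, or the time between two accesses as a rough measure of how long somebody stayed on a page.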
New on the methods list on my www page are special things happening only in online courses. These methods belong to the practical area. An internal or external expert can make a (non-)participant observation of a course. It can also be important to analyze the users in order to help them optimize their learning.
For examples, see the appendix "Online links of evaluation: Examples for the use of some evaluation methods".
Most results of evaluation will be used for internal purposes. In the worst case the results (filled-out questionnaires) will not be used at all. Sometimes the evaluator just has a look at them to find his own opinion confirmed. In the normal case he will use the results to improve the running project or let them flow into the next developments. If the evaluator thinks that some of the results are interesting enough to be known by others, he will publish them. He can use the classical way of publishing in newspapers and magazines (Germany: c't, Computerbild, screen multimedia...) or books (learning software, not only for kids, is reviewed by Thomas Feibel in the Kinder Software-Ratgeber 1996, 1997, 1998, 1999, 2000). Today the results are often published on the Internet (see the reviews of CD-ROMs at http://www.amazon.com).
Results of evaluation in governmental education (schools and higher education) are marks. Often learners need a certificate of participation for which they have to fulfil minimum standards. In the commercial field the user will get a certificate of participation (Teilnahmebestätigung).
Sometimes evaluation is made for public use. The most famous form is the granting of awards. There are some official awards for software, like the Milia d'Or or the Europrix (see these and more prizes with descriptions at http://www.uni-lueneburg.de/fb4/pr/MMEvaluation/Awards.htm). There are also a lot of unimportant awards, like most of the ones listed in the German "Web-Awards-Datenbank" at http://www.awards.de/, where over 1900 (!) Internet awards can be found.
CD-ROMs are often evaluated. They are not typical media for online education, but they can be added to a course. SODIS, a German server run by school organisations at http://www.sodis.de, offers a big database with short descriptions of thousands of software products. Most of the reviews include a filled-out questionnaire with additional data. In many cases, descriptions of experiences with use in schools are added. Two other examples of the evaluation of CD-ROMs are Tina Velgos with "The Review Zone" at http://www.TheReviewZone.com/ and the Kids Domain Review at http://www.kidsdomain.com/review/index.html.
To make it easier to find good software, some organizations award seals of quality (Gütesiegel), for example the "Gütesiegel" of the organization "Bildungswege in der Informationsgesellschaft" (initiative for ways of education in the information society, see http://www.big-internet.de/guetesiegel.htm). This is imaginable for online courses as well.
I think collections of experiences, like the text by Ben Watson (1996), who describes tricks and traps from his lessons, are the result of a longer-term self-evaluation. All the problems listed in that text have been assessed.
In Europe it is becoming more and more usual that research projects are only partly paid in advance, with the rest paid when the final report appears. And in most cases evaluation is part of the final report.
It is probable that in the future there will be servers with comments on online education, like http://www.ratingsonline.com/, where U.S. students can rate their professors today; I think this will become a world-wide opportunity. An example of professional evaluation is "Blue Web'n", where online courses and materials for education are tested and rated with stars. The categories are Web Based Tutorials, Web Based Activities, Web Based Projects, Unit & Lesson Plans, Hotlists, Other Resources, and References & Tools; see http://www.kn.pacbell.com/wired/bluewebn/.
One result that can be used while a seminar is running is the famous FAQ list. These lists of frequently asked questions can collect technical or content-specific questions asked by the learners and, most importantly, the answers of the teacher. This way the teacher does not have to answer the same questions again and again; he can ask the students to look at the FAQ first and only ask new questions.
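Such an FAQ list is, at its core, just a collection of question-answer pairs that can be searched before a new question is posted. A minimal sketch, with invented example entries:

```python
# Invented FAQ entries: questions asked by learners with the
# teacher's answers.
faq = [
    ("How do I log in to the course server?",
     "Use the account from the welcome e-mail."),
    ("Which browser should I use?",
     "Any current browser works."),
]

def search_faq(keyword: str):
    """Return all entries whose question contains the keyword."""
    keyword = keyword.lower()
    return [(q, a) for q, a in faq if keyword in q.lower()]

for question, answer in search_faq("browser"):
    print(question, "->", answer)
```

Even a plain web page with the collected questions serves the same purpose; the point is that the answers are written down once and stay findable.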
Online education has a lot of faces. Evaluation has a lot of faces, too. I think the information above gives some stimulation to use evaluation while planning, creating and using online materials and courses. For information on how to start evaluating, I added an idea to the appendix; see below at "How to learn to evaluate". Further information about evaluation on the World Wide Web can be found in appendix E, "Online information about evaluation".
Writing this last sentence, I wish I had evaluated the OET course and my behavior / feelings / thoughts in a professional way.
Next time it will be better ;-)
Marc Jelitto, Lüneburg, 12.05.2000
aera.net The Education & Research Network (06.1992):
Ethical Standards of AERA
http://www.aera.net/about/policy/ethics.htm
American Heritage Dictionary (1995): Deluxe Edition, CD-ROM, Talking Dictionary, München.
Arbeitsstelle für Evaluation pädagogischer
Dienstleistungen (11.1999a): Die Standards für die
Evaluation von Programmen im Vergleich mit weiteren Regelwerken zur
Qualität fachlicher Leistungserstellung
http://www.uni-koeln.de/ew-fak/Wiso/q_vergl.htm
Arbeitsstelle für Evaluation pädagogischer
Dienstleistungen (11.1999b): Evaluation in Rechtsgrundlagen
deutschsprachiger Länder
http://www.uni-koeln.de/ew-fak/Wiso/s_ges.htm
Arvan, Lanny; John C. Ory, Cheryl D. Bullock, Kristine K.
Burnaska, Matthew Hanson (09.1998): The SCALE Efficiency Projects.
In: JALN Volume 2, Issue 2 - September 98
http://www.aln.org/alnweb/journal/vol2_issue2/arvan2.htm
Australasian Evaluation Society (01.1998): Guidelines for
the Ethical Conduct of Evaluations
http://203.32.109.1/aes/guidelines.htm
Baumgärtner, Peter (1999): Evaluation mediengestützten Lernens : Theorie - Logik - Modelle. In: Kindt, Michael [Hrsg.]: Projektevaluation in der Lehre : Multimedia an Hochschulen zeigt Profil(e). Münster [u.a.], Waxmann: S. 71.
Brammerts, Helmut (10.1999): International Tandem Network :
Language learning in tandem via the Internet
http://marvin.uni-trier.de/Tandem/email/idxdeu00.html
Brockhaus (1997): Die Enzyklopädie: in 24 Bänden, Band 6, 20., überarbeitete und aktualisierte Auflage, Leipzig [u.a.], S. 716.
Bubenheimer, Felix (02.2000): E-Mail-Projekte im Deutsch
als Fremdsprache-Unterricht
http://www.uni-bielefeld.de/~felixbub/emdafkom.html
Gottschalk, Tania H. (10.1995): Distance Education at a
Glance : Guide #3 : Instructional Development for Distance
Education
http://www.uidaho.edu/evo/dist3.html#evaluation
Der Große Brockhaus (1978): Band 3, achtzehnte, völlig neu bearbeitete Aufl., Wiesbaden, S. 590 f.
Hochschul-Informations-System (05.1997): Dokumentation
medienunterstützten Lehrens und Lernens an Hochschulen
http://www.his.de/abt3/proj/676/index.html
Karbach, Manfred (01.2000): Anmerkungen zum Wort
Evaluation
http://schulen.hagen.de/GSGE/ew/EvalW.html
Kultusministerkonferenz (10.99): Neue Medien und
Telekommunikation im Bildungswesen (Hochschulbereich) -
dienstrechtliche Aspekte (Lehrverpflichtung, Haupt- und Nebenamt,
Verwertungsrechte, Personalstruktur) -
http://www.lak-nds.de/lak/dokumente/kmk_neue_medien_1099.pdf
"Die am 29.10.99 veröffentlichte Stellungnahme der
Kultusministerkonferenz bzgl. "Neue Medien und Telekommunikation im
Bildungswesen (Hochschulbereich) - dienstrechtliche Aspekte
(Lehrverpflichtung, Haupt- und Nebenamt, Verwertungsrechte,
Personalstruktur)" is [sic!] Ende Dezember offiziell als
KMK-Beschluss wirksam geworden. Die Hochschulen sind aufgerufen, ggf.
geeignete Maßnahmen entlang der hier entwickelten Leitlinien zu
treffen." http://www.lak-nds.de/ang_start.htm
Landesarbeitskreis Multimedia (4.2000): Datenbank
"Multimedia in der Lehre"
http://147.172.59.206/lak/skripts/abfrage.asp
Meyers Großes Universallexikon (1981): Band 4, Mannheim [u.a.], S. 540
Moonen, Jef (08.1997): The Efficiency of Telelearning. In:
JALN Volume 1, Issue 2 - August 1997
http://www.aln.org/alnweb/journal/issue2/moonen.htm
Oliver, Martin (1999a): The ELT Toolkit
http://www.unl.ac.uk/tltc/elt/toolkit.pdf
Oliver, Martin; Grainne Conole, Lisa Bonetti (1999b): The
hidden costs of change: Evaluating the impact of moving to online
delivery. In: Oliver, M. (Ed) (1999): The Evaluation of Learning
Technology: Conference proceedings. University of North London. p.
76-81
http://www.unl.ac.uk/tltc/elt/elt99.pdf
Paulsen, Morten Flate (1995): The Online Report on
Pedagogical Techniques for Computer-Mediated Communication.
http://home.nettskolen.nki.no/~morten/innled.html#online
publications
Salmon, Gilly (12.1997): Techniques for CMC
http://pcbs042.open.ac.uk/gilly/cmctech.html
Schweizerische Evaluationsgesellschaft (seval) (1999):
Evaluationsstandards
http://www.seval.ch/deutsch/stad/standd.pdf
Tergan, Sigmar-Olaf (2000): Grundlagen der Evaluation: ein Überblick. In: Schenkel, P.; Sigmar-Olaf Tergan; A. Lottmann (Eds.) (2000): Qualitätsbeurteilung multimedialer Lern- und Informationssysteme : Evaluationsmethoden auf dem Prüfstand. Nürnberg: Bildung und Wissen.
Warschauer, Mark (1996): Motivational aspects of using
computers for writing and communication
http://www.lll.hawaii.edu/nflrc/NetWorks/NW1/NW01.html
Watson, Ben (1996): Tricks & Traps: Lessons the
Microsoft Online Institute has Learned
http://www.uvm.edu/~hag/naweb96/zwatson.html
Zentrum für Umfragen, Methoden und Analysen (11.1999):
Informationsquellen : Richtlinien
http://www.or.zuma-mannheim.de/inhalt/Informationsquellen/richtlinien.htm
This source contains several ADM guidelines of the Arbeitskreis
deutscher Markt- und Sozialforschungsinstitute e.V., such as the
"Guidelines for Interviewing Minors". The original German site with
the ADM guidelines can be found at http://www.adm-ev.de/richtlinien.html.
This message about negative experiences in the German group was posted in the OET course by Marc Jelitto on 10 April 2000. It is an abridged version of the German document (more than six pages when printed out). Some of the experiences may be subjective and shared by only a few students.
Title: <negative experiences> <German report>
Hi!
In the German area we collected some negative experiences we had in
the course; a few are specific to the German part of this course.
Feel free to comment and to add other points.
I (Marc Jelitto) wrote the initial paper and revised it for a
final report. You can have a look at the original paper in
"German_DK"; the final paper is called "<neg. Erfahrungen>
<Abschlußpapier>" (oops, written in German
<bg>).
Besides some comments at the end of the paper, the following 14
points were mentioned and discussed by 6 users; 5 further points that
I think are specific to the German group are not mentioned here:
1. too many or too few students in the discussion groups
2. sometimes very little participation
3. humor is missing (only a little life in the cafe)
4. texts are too long (sometimes only one page out of 30 was
important for the work, and a lot of time was wasted)
5. high drop-out rate
6. mostly negative experiences
7. missing scientific input
8. not enough variation in the methods of learning
9. missing reflection and discussion of what we have learned so
far
10. no use made of the experiences of the participants
11. missing or poorly given praise (Lob)
12. some tasks were difficult to understand
13. bad navigation (numbered weeks)
14. bad material (not possible to open, could not be printed without
problems, URLs missing, PowerPoint presentations not optimized for
printing, literature could not be cited correctly because
information was missing)
Marc
Guidelines for Online Writing in English as an International Language (EIL)
A. For English Speakers:
1. Avoid colloquial language and restricted cultural references (e.g.
most learners find The Sun harder to read than The Telegraph).
2. English Latinate vocabulary, which natives think of as obscure and
formal, is often more accessible than down-to-earth Anglo-Saxon
(e.g. "continue" rather than "get on with it").
3. Use one term consistently throughout a text instead of different
words meaning the same thing (e.g. ask, request, implore, intercede...).
B. For all:
1. Keep it short and simple (not just for non-native readers!).
2. Don't use acronyms without saying what they mean, e.g. EE
(Environmental Education)...
3. If it is worldwide communication, remember to give some
information on the background / context.
(In Germany, "higher education" means studying at universities or
"Fachhochschulen"; students are aged over 18. I think in other
countries it has another meaning, depending on the
educational system.)
4. It often doesn't matter if your English is not the most perfect or
the clearest, as long as your thought / meaning (significado; Meinung
- ?) is clear. Quickly check that your meaning is clear before you
send.
C. For English as Foreign Language Speakers:
1. If it suits you, consider using translations of key words in your
text (e.g. Marc's writing). It gives an international dimension to
the meaning and includes the perspective of your language, which can
only be enriching for us all.
2. Use your program's spelling checker (e.g. First Class users
find it under Edit : Check Spelling).
How can you learn something about evaluation? Read books, take lessons at schools or universities, or follow the links below under "Online-information about evaluation". If you want to learn to evaluate, you have to practice it, or just do it. For example, you can take an online seminar on your own to learn to evaluate web pages: Pam Berger's "Web Evaluation Guide" at http://www.infosearcher.com/cybertours/tours/tour04/_tourlaunch1.htm gives you the chance to see the Internet through the eyes of an evaluator. For a more academic way of learning, take an online course at the Centre for Program Evaluation at the University of Melbourne, Australia, at http://www.edfac.unimelb.edu.au/cpe/cpefiles/CPEcourses.html, such as "Program Evaluation: Forms and Approaches" at http://www.edfac.unimelb.edu.au/cpe/cpefiles/CourseDescriptions.html#482822 or "Qualitative Methods in Evaluation" at http://www.edfac.unimelb.edu.au/cpe/qual.html. But I think the best way is to do it yourself: start by looking at your day's work and thinking about what went wrong and what was good, then go on by testing other goals and targets and using different methods.
TLTC, The Learning Centre, University of North London (02.1999):
Evaluation of Learning Technologies - The BP ELT project -
http://www.unl.ac.uk/tltc/elt/
Campbell, J. Olin (08.1997): Evaluating ALN: What Works, Who's
Learning? In: ALN Magazine Volume 1, Issue 2
http://www.aln.org/alnweb/magazine/issue2/campbell_alntalk.htm
Gottschalk, Tania H. (10.1995): Distance Education at a Glance :
Guide #4 : Evaluation for Distance Educators
http://www.uidaho.edu/evo/dist4.html
Lander, Rachel; John Burns (February 2000): De Montfort University
Case Studies in the use and evaluation of Videoconferencing in
Teaching and Learning.
http://www.jtap.ac.uk/reports/htm/jtap-046.html
- questionnaires
Enzinger, Christoph (1997): Fragebogen zur Evaluation von
BIBOS
http://www.cosy.sbg.ac.at/~leo/diplomarbeit/fragebogen.html
Hibbert, John (September 1996): Online Rolling User Feedback
Questionnaire
http://www.jtap.ac.uk/reports/htm/jtap_1.htm
Kenyon, Paul (n.d.): How to Put Questionnaires on the Internet
http://salmon.psy.plym.ac.uk/mscprm/forms.htm
Courseware Evaluation Database
http://Kramer.ume.maine.edu/cev/eval.html
EDUVINET Didactically and Methodically: Questionnaire
http://www.eduvinet.de/eduvinet/quest.htm
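The questionnaire resources above cover collecting answers; summarizing the collected answers afterwards takes only a few lines of code. Here is a minimal sketch in Python, with invented item names and an assumed 1-5 rating scale (neither is taken from any of the sources above), that computes the mean rating per questionnaire item:

```python
# Summarize Likert-scale questionnaire answers (assumed scale: 1-5).
# Each row is one respondent; each key is one questionnaire item.
# Item names and ratings are invented for this illustration.
responses = [
    {"navigation": 4, "materials": 2, "tutor_feedback": 5},
    {"navigation": 3, "materials": 1, "tutor_feedback": 4},
    {"navigation": 5, "materials": 2, "tutor_feedback": 5},
]

def mean_per_item(rows):
    """Return the average rating for every questionnaire item, rounded to 2 decimals."""
    totals = {}
    for row in rows:
        for item, rating in row.items():
            totals.setdefault(item, []).append(rating)
    return {item: round(sum(vals) / len(vals), 2) for item, vals in totals.items()}

print(mean_per_item(responses))
# prints {'navigation': 4.0, 'materials': 1.67, 'tutor_feedback': 4.67}
```

A real evaluation would of course look at the spread of answers and at free-text comments as well, not only at means; the sketch just shows the mechanical step.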
- checklist
"Developing Playful Media For Children and Families", with a list
of some characteristics of successful interactive programs
http://www.larsmedia.com/playful.htm
Schulz, S. [u.a.] (01.1999): Quality Criteria for
Electronic Publications in Medicine
http://www.imbi.uni-freiburg.de/medinf/gmdsqc/e.htm
Röllinghoff, Andreas (September 1998): SoftWare Evaluation
Checkliste
http://it-resources.icsa.ch/Evaluation/SoftEvQE.html
Alexander, Jan; Marsha Ann Tate (December 1999): Checklist for an
Informational Web Page
http://www2.widener.edu/Wolfgram-Memorial-Library/inform.htm
- list of marks (Bewertungsbogen)
EDUVINET Team (1999): Lehrerüberlegungen und
Bewertungskriterien für Schülerbeiträge auf dem
Internet
http://www.eduvinet.de/eduvinet/de019.htm
- inquiry
Kindt, Michael; Birgit Oelker, Erwin Wagner (1997): Umfrage zu
Multimedia- und Telematikprojekten an niedersächsischen
Hochschulen
http://www.lak-nds.de/umfrage97/index.htm
- comparison of different VLEs:
Britain, Sandy; Oleg Liber (October 1999): A Framework for
Pedagogical Evaluation of Virtual Learning Environment
http://www.jtap.ac.uk/reports/htm/jtap-041.html
Fritsch, Mirjam (November 1997): Platforms for Virtual Seminars :
Comparison of the features of the three internet-communication-tools
HyperNews, BSCW and WebBoard 2.0
http://www.fernuni-hagen.de/ZIFF/vergleichE.html
- evaluation of web resources:
Alexander, Jan; Marsha Ann Tate (December 1999): Evaluating Web
Resources
http://www2.widener.edu/Wolfgram-Memorial-Library/webeval.htm
Trochim, William (11.1999): Evaluating Websites
http://trochim.human.cornell.edu/webeval/webeval.htm
McKenzie, Jamie (06.1997): Comparing & Evaluating Web
Information Sources. In: From Now On - The Educational Technology
Journal, 6, 9
http://fromnowon.org/jun97/eval.html
Schrock, Kathy (2000): Kathy Schrock's Guide for
Educators: Critical Evaluation Information
http://school.discovery.com/schrockguide/eval.html
Linklist "Evaluation of information sources" as a part of the
Information Quality WWW Virtual Library
http://www.vuw.ac.nz/~agsmith/evaln/evaln.htm
- testing two alternatives:
Milligan, Colin (November 1998): The Role of Virtual Learning
Environments in the Online Delivery of Staff Development
http://www.icbl.hw.ac.uk/jtap-573/573r1-0.html
Arbeitsstelle für Evaluation pädagogischer
Dienstleistungen
http://www.uni-koeln.de/ew-fak/Wiso/
Deutsche Gesellschaft für Evaluation e.V.
http://www.degeval.de
Deutsche Gesellschaft für Online Forschung e.V.
http://www.dgof.de/
online-forschung.de
http://www.online-forschung.de/index.htm/linx/
Zentrum für Umfragen, Methoden und Analysen -
OnlineResearch
http://www.or.zuma-mannheim.de/
forum-evaluation
http://www.uni-koeln.de/ew-fak/Wiso/mailing.htm
German Internet Research List (gir-l)
http://www.dgof.de/info/info3.html
(enrolment)
http://www.online-forschung.de/index.htm/gir-l/
(description)
m 3 e s (communication forum on methods of market, opinion, and
empirical social research)
http://www.uni-kl.de/FB-SoWi/LS-Bliemel/m3es/
American Evaluation Association Discussion List:
EVALTALK@BAMA.UA.EDU
http://bama.ua.edu/archives/evaltalk.html
© 2000, Marc Jelitto, Lüneburg