Module 4: Notes and Helpful Diagrams

Module 4. Assessment Planning and Construction; Reporting and Feedback.

Here are some diagrams I found useful in my EDS 113 journey…

[Diagram: rubric checklist (source: http://www.aisj-jhb.com/)]

[Diagram: developing a teacher-made test (source: http://image.slidesharecdn.com/lesson3developingateachermadetest-121012222240-phpapp01/95/lesson-3-developing-a-teacher-made-test-11-728.jpg?cb=1350080756)]

Module 3: Notes and Helpful Diagrams

Module 3. Types of Classroom Assessment

Here are some diagrams I found useful in my EDS 113 journey…

[Diagram: the future of assessment in physical education (source: http://image.slidesharecdn.com/afpresentation2lr-131221172314-phpapp02/95/the-future-of-assessment-in-physical-education-8-638.jpg?cb=1387648036)]

[Diagram: summative assessment (source: http://www.edudemic.com/)]

[Diagram: alternative assessment (source: http://image.slidesharecdn.com/ed225-reportonalternativeassessment-140603073103-phpapp02/95/alternative-assessment-6-638.jpg?cb=1401780710)]

[Diagram: self- and peer-assessment prompt sheets (source: http://www.greatmathsteachingideas.com/)]

[Diagram: guiding principles (source: https://www.pinterest.com/explore/differentiated-instruction/)]

[Diagram: differentiated instruction (source: http://image.slidesharecdn.com/diradha-120411084533-phpapp01/95/differentiated-instruction-6-728.jpg?cb=1334134342)]



Assessment Glossary


Academic Aptitude Test
An aptitude test predicts achievement in academic pursuits. Ideally, in constructing this type of test, the developer tries to minimize the effect of exposure to specific materials or courses of study on the examinee’s score.

Accommodation
An adjustment in the administration of an assessment to meet the needs of a student. Accommodations may include extended time or a test booklet with larger print.

Achievement Test
An assessment that measures a student’s acquired knowledge and skills in one or more common content areas (for example, reading, mathematics, or language).

Adult Accountability Test
An assessment intended primarily for individuals 18 years old or older who are no longer attending elementary or secondary school.

Alternative Assessment
An assessment that differs from traditional achievement tests. For example, an alternative assessment may require a student to generate or produce responses or products rather than answer only selected-response items. This type of assessment may include constructed-response activities, essays, portfolios, interviews, teacher observations, work samples, and/or group projects.

Analytic Scoring
A scoring procedure in which a student’s work is evaluated for selected traits or dimensions, with each dimension receiving a separate score.

Aptitude Test
A test consisting of items selected and standardized so that the test predicts a person’s future performance on tasks not obviously similar to those in the test. Aptitude tests may or may not differ in content from achievement tests, but they do differ in purpose. Aptitude tests consist of items that predict future learning or performance; achievement tests consist of items that sample the adequacy of past learning.

Authentic Assessment
An assessment that measures a student’s performance on tasks and situations that occur in real life. This type of assessment is closely aligned with, and models, what students do in the classroom.

Battery
A test battery is a set of several tests designed to be administered as a unit. Individual subject-area tests measure different areas of content and may be scored separately; scores from the subtests may also be combined into a single score.

Bias
A situation that occurs in testing when items systematically measure differently for different ethnic, gender, or age groups. Test developers reduce bias by analyzing item data separately for each group, then identifying and discarding items that appear to be biased.

Ceiling
The upper limit of performance that can be measured effectively by a test. Individuals are said to have reached the ceiling of a test when they perform at the top of the range in which the test can make reliable discriminations. If an individual or group scores at the ceiling of a test, the next higher level of the test should be administered, if available.

Checklist
An assessment that is based on the examiner observing an individual or group and indicating whether or not the assessed behavior is demonstrated.

Composite Score
A single score used to express the combination, by averaging or summation, of the scores on several different tests.

Comprehensive Equal-Interval Scale
A scale marked off in units of equal size that is applied to all groups taking a given test, regardless of group characteristics or time of year. Each test yields its own scale. On TABE, for example, scale scores are expressed in numbers ranging from 0 to 999. The continuity of the scale among levels comes from administering special test forms containing items from adjacent test levels to random groups of students. This allows the TABE scales to be calibrated so that a given adult learner is expected to obtain the same scale score regardless of the form or level of the test he or she takes. However, the standard error of measurement associated with that student’s score will vary systematically from level to level.

Computer Adaptive Tests
Computer adaptive tests (CATs) are computer-administered tests that tailor the selection of test items during the administration of the assessment based on the responses of examinees. By adapting the difficulty of the test to the ability level of the examinee, CATs generally provide greater precision and/or enable shorter testing time compared to non-adaptive tests.
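
To make the adaptive idea concrete, here is a toy Python sketch of one simple selection rule: give the unused item whose difficulty is closest to the current ability estimate, then move the estimate up or down after each response. This is illustration only (real CATs estimate ability with item response theory); the item difficulties and step-size rule are made up.

```python
def run_cat(item_difficulties, answers_correctly, n_items=5):
    """Toy adaptive test: item_difficulties is a list of numbers;
    answers_correctly takes a difficulty and returns True/False."""
    ability = 0.0                      # start from an average ability estimate
    step = 1.0                         # how far the estimate moves per response
    remaining = list(item_difficulties)
    for _ in range(n_items):
        # administer the unused item whose difficulty best matches the estimate
        item = min(remaining, key=lambda d: abs(d - ability))
        remaining.remove(item)
        if answers_correctly(item):
            ability += step            # correct answer: try harder items next
        else:
            ability -= step            # incorrect answer: try easier items next
        step /= 2                      # smaller adjustments as the test proceeds
    return ability

# Hypothetical examinee who answers correctly whenever difficulty <= 0.8
print(run_cat([-2, -1, -0.5, 0, 0.5, 1, 2], lambda d: d <= 0.8))  # 0.8125
```

Notice how the estimate homes in on the examinee's level after only a few items, which is why CATs can be shorter than fixed-form tests.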

Construct
The concept or characteristic that a test is designed to measure.

Constructed-Response Item
An assessment unit with directions, a question, or a problem that elicits a written, pictorial, or graphic response from a student. Sometimes called an “open-ended” item.

Content Validity
Content validity indicates the extent to which the content of the test samples the subject matter or situation about which conclusions are to be drawn. Methods used in determining content validity are textbook analysis, description of the universe of items, adequacy of the sample, representativeness of the test content, inter-correlations of subtest scores, and opinions of a jury of experts.

Conversion Tables
Tables used to convert a student’s test scores from scale score units to grade equivalents, percentile ranks, and stanines.

Criterion
A standard or judgment used as a basis for quantitative and qualitative comparison; the variable against which a test is compared to establish a measure of the test’s validity. For example, grade-point average and attainment of curricular objectives are often used as criteria for judging the validity of an academic aptitude test.

Criterion-Referenced Test
A test in which every item is directly identified with an explicitly stated educational behavioral objective. The test is designed to determine which of these objectives have been mastered by the examinee.

Culture-Fair Test
A test devised to exclude specific cultural stimuli so that persons from a particular culture will not be penalized or rewarded on the basis of differential familiarity with the stimuli.

Curriculum-Referenced Test
An assessment that measures what a student knows or can do in relation to specific, commonly taught curriculum objectives.

Derived Score
A test score pertaining to a norm group (such as a percentile, stanine, or grade equivalent) that is an outgrowth of the scale scores. Derived scores are useful descriptors; however, they are not calibrated on an equal-interval scale, so they cannot be added, subtracted, or averaged across test levels the way scale scores can.

Diagnostic Test
A test intended to locate learning difficulties or patterns of error. Such tests yield measures of specific knowledge, skills, or abilities underlying achievement within a broad subject. Thus, they provide a basis for remedial instruction.

Discrimination Parameter
The property that indicates how accurately an item distinguishes between examinees of high ability and those of low ability on the trait being measured. An item that can be answered equally well by examinees of low and high ability does not discriminate well and does not give any information about relative levels of performance.

Distractor
An incorrect answer choice in a selected-response or matching test item.

Early Childhood Test
An assessment intended for students in kindergarten and grades 1 through 3.

Educational (Instructional) Objective
A statement that defines an intended outcome of instruction. It describes what a successful learner is able to do at the end of the lesson or course, defines the conditions under which the behavior is to occur, and often specifies the criterion or standard of acceptable performance.

Equal-Interval Scale
A scale marked off in units of equal size that is applied to all groups taking a given test, regardless of group characteristics or time of year. Each test yields its own scale. On TABE, for example, scale scores are expressed in numbers ranging from 0 to 999. The continuity of the scale among levels comes from administering special test forms containing items from adjacent test levels to random groups of students. This allows the TABE scales to be calibrated so that a given adult learner is expected to obtain the same scale score regardless of the form or level of the test he or she takes. However, the standard error of measurement associated with that student’s score will vary systematically from level to level.

Equated Score
A score from one test that is equivalent to a score from another test. Equated scores are usually obtained by administering the two tests of interest to a representative sample of students. Scores from one test are then aligned with scores on the other test using equating analysis.

Face Validity
An evaluation of a test based on inspection only.

Floor
The opposite of ceiling, it is the lowest limit of performance that can be measured effectively by a test. Individuals are said to have reached the floor of a test when they perform at the bottom of the range in which the test can make reliable discriminations. If an individual or group scores at the floor of a test, the next lower level of the test, if available, should be administered.

Formative Assessment
Assessment questions, tools, and processes that are embedded in instruction and are used by teachers and students to provide timely feedback for purposes of adjusting instruction to improve learning.

Frequency Distribution
An ordered tabulation of individual scores (or groups of scores) showing the number of persons who obtained each score or placed within each range of scores.
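
A quick Python illustration of such a tabulation (the scores are hypothetical):

```python
from collections import Counter

scores = [85, 92, 85, 78, 92, 85, 70, 78]   # hypothetical test scores
distribution = Counter(scores)               # tallies how often each score occurs

for score in sorted(distribution):
    print(score, distribution[score])
# 70 1
# 78 2
# 85 3
# 92 2
```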

Functional Range
The functional range of a test is the range of grades for which the test can be administered in order to obtain accurate norm-referenced data. For most tests, this range is two grades above or below the grade for which the test was intended.

Grade Equivalent
A score on a scale developed to indicate the school grade (usually measured in months) that corresponds to an average chronological age, mental age, test score, or other characteristic of students. A grade equivalent of 6.4 is interpreted as a score that is average for a group in the fourth month of Grade 6. Grade equivalents do not form an equal-interval scale and cannot be added, subtracted, or averaged across test levels the way scale scores can.

Grade Norm
The average test score obtained by students classified at a given grade placement.

Guessing Parameter
The probability that a student with very low ability on the trait being measured will answer the item correctly. There is always some chance of guessing the answer to a multiple-choice item, and this probability can vary among items. The guessing parameter enables a model to account for these factors.

Holistic Scoring
A scoring procedure yielding a single score based on overall student performance rather than on an accumulation of points. Holistic scoring uses rubrics to evaluate student performance.

Intelligence Test
A test that measures the higher intellectual capacities of a person, such as the ability to perceive and understand relationships and the ability to recall associated meaning; in other words, it measures the ability to learn.

Interim Assessment
An assessment that occurs multiple times throughout the academic year rather than just at the end. Through an interim assessment, teachers can see weaknesses and strengths of students that otherwise may have gone unnoticed.

Interpretation
The act of explaining test scores to students so they understand exactly what each type of score means. For example, a percentile rank refers to the percentage of students in the norm group who fall below a particular point, not the percentage of items answered correctly.

Item
A question or problem on a test.

Item Bias
An item is biased when it systematically measures differently for different ethnic, cultural, regional, or gender groups.

Item Response Theory
The basis of various statistical models for analyzing item and test data. In TABE, the three-parameter model was used in the selection and scaling of items. This model takes into account discrimination, difficulty, and chance level of success (guessing) to describe each item’s statistical characteristics.
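
For reference, the three-parameter logistic (3PL) model mentioned above combines a discrimination parameter a, a difficulty (location) parameter b, and a guessing parameter c. Here is a short Python sketch of the formula; the item parameter values are hypothetical, not TABE’s:

```python
import math

def p_correct(theta, a, b, c):
    """3PL model: probability that an examinee with ability theta answers an
    item correctly, given discrimination a, difficulty b, and guessing c."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# Hypothetical item: a = 1.2, b = 0.0, four-choice guessing floor c = 0.25
for theta in (-2, 0, 2):
    print(theta, round(p_correct(theta, a=1.2, b=0.0, c=0.25), 2))
# -2 0.31
# 0 0.62
# 2 0.94
```

Note that even very low-ability examinees succeed about 25 percent of the time here (the guessing floor), which is exactly what the guessing parameter accounts for.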

K–12 Assessment
An assessment intended primarily for students in elementary and secondary schools. CTB assessments may assess students in the entire K–12 range or just in selected grades, e.g., Grades 2–12.

Local Norms
Norms that have been obtained from data collected in a limited locale, such as a school system, county, or state. They may be used instead of, or along with, national norms to evaluate student performance.

Location Parameter
A statistic from item response theory that pinpoints the ability level at which an item discriminates, or measures, best.

Mean
The quotient obtained by dividing the sum of a set of scores by the number of scores; also called “average.” Mathematicians call it “arithmetic mean.”

Median
The middle score in a set of ranked scores. Equal numbers of ranked scores lie above and below the median. It corresponds to the 50th percentile and the 5th decile.

Mixed-Format Tests
An assessment that includes different forms of questions. The questions could include a mix of multiple choice, essays, or performance tasks.

Mode
The score or value that occurs most frequently in a distribution.
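
The mean, median, and mode defined above can all differ for the same set of scores; a quick check with Python’s statistics module (hypothetical scores):

```python
import statistics

scores = [70, 78, 85, 85, 85, 92, 92]   # hypothetical test scores

print(statistics.mean(scores))          # 83.857... (sum divided by count)
print(statistics.median(scores))        # 85 (middle of the ranked scores)
print(statistics.mode(scores))          # 85 (most frequent score)
```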

Multiple Measures
Assessments that measure student performance in a variety of ways. Multiple measures may include standardized tests, teacher observations, classroom performance assessments, and portfolios.

Multiple-Choice Item
A question, problem, or statement (called a “stem”) that appears on a test, followed by two or more answer choices, called alternatives or response choices. The incorrect choices, called distractors, usually reflect common errors. The examinee’s task is to choose, from among the alternatives provided, the best answer to the question posed in the stem. These are also called “selected-response items.”

Norm-Referenced Test
A standardized assessment in which all students perform under the same conditions. This type of test compares a student or group of students with a specified reference group, usually others of the same grade and age for K–12 students or, for adults, those with similar characteristics, such as those in an adult basic education class.

Normal Distribution Curve
A bell-shaped curve representing a theoretical distribution of measurements that is often approximated by a wide variety of actual data. It is often used as a basis for scaling and statistical hypothesis testing and estimation in psychology and education because it approximates the frequency distributions of sets of measurements of human characteristics.

Norms
The average or typical scores on a test for members of a specified group. They are usually presented in tabular form for a series of different homogeneous groups.

Number Correct or “Raw” Score
The Number of Correct Responses (NCR) is the number of items answered correctly by a student on any given test section.

Objective
A desired educational outcome such as “constructing meaning” or “adding whole numbers.” Usually several different objectives are measured in one subtest.

Objective Test
A test for which a list of correct answers, one for each test item, can be provided so that subjective opinion or judgment is eliminated from the scoring procedure. Multiple-choice, true/false, and matching-item tests are purely objective, while short answer and completion-item tests are less so.

Percentile
One of the 99 points that divide a ranked distribution into 100 groups, each of which contains 1/100 of the scores. The 73rd percentile denotes the score or point below which 73 percent of the scores fall in a particular distribution of scores.
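
A percentile rank follows directly from this definition (the percent of scores falling below a given point, as noted under “Interpretation” above); a minimal Python sketch with hypothetical scores:

```python
def percentile_rank(score, all_scores):
    """Percent of scores in the distribution that fall below the given score."""
    below = sum(1 for s in all_scores if s < score)
    return 100 * below / len(all_scores)

scores = [55, 60, 64, 68, 70, 73, 75, 80, 88, 95]   # hypothetical distribution
print(percentile_rank(73, scores))   # 50.0: half the scores fall below 73
```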

Performance Assessment
An assessment activity that requires students to construct a response, create a product, or perform a demonstration. Usually there are multiple ways that an examinee can approach a performance assessment and more than one correct answer.

Performance Standard
A level of performance on a test, established by education experts, as a goal of student attainment.

Power Test
A test that samples the range of an examinee’s capacity in particular skills or abilities and that places minimal emphasis on time limits. A “pure” power test is sometimes defined as one in which every examinee has sufficient time to complete the test.

Predictive Validity
The ability of a score on one test to forecast a student’s probable performance on another test of similar skills. Predictive validity is determined by mathematically relating scores on the two different tests.

Prompt
An assessment topic, situation, or statement to which students are expected to respond.

Raw Score
The first score obtained in scoring a test, which is often the number of correct answers. Sometimes it is the number right minus a fraction of the number wrong, the time required to complete the test, the number of errors, or some other number obtained directly from the test’s administration.

Readiness Test
A test of ability to engage in a new type of specific learning. Level of maturity, previous experience, and emotional and mental set are important determinants of readiness.

Reliability
The consistency of test scores obtained by the same individuals on different occasions or with different sets of equivalent items; accuracy of scores.

Rubric
A scoring tool, or set of criteria, used to evaluate a student’s test performance.

Scale
An organized set of measurements, all of which measure one property or characteristic. Different types of test-score scales use different units, for example, number correct, percentiles, or IRT scale scores.

Scale Scores
Scores on a single scale with intervals of equal size. The scale can be applied to all groups taking a given test, regardless of group characteristics or time of year, making it possible to compare scores from different groups of examinees. Scale scores are appropriate for various statistical purposes; for example, they can be added, subtracted, and averaged across test levels. Such computations permit educators to make direct comparisons among examinees, compare individual scores to groups, or compare an individual’s pre-test and post-test scores in a way that is statistically valid. This cannot be done with percentiles or grade level equivalents.

Selected-Response Item
A question or incomplete statement that is followed by answer choices, one of which is the correct or best answer. Also referred to as a “multiple-choice” item.

Special Admissions Test
A test of a student’s ability to participate in special programs or advanced learning situations. For example, an honors-level class or a magnet school may require the attainment of high scores on an assessment for admission.

Speed Test
A test in which one aspect of performance is measured by the number of tasks performed in a given time. A “pure” speed test is one in which examinees make no errors and that cannot be completed by any examinee in the allotted time.

Standard Deviation
A statistic used to express the extent of the divergence of a set of scores from the average of all the scores in the group. In a normal distribution, approximately two-thirds (68.3%) of the scores lie within the limits of one standard deviation above and one standard deviation below the mean. One-sixth of the scores lie more than one standard deviation above the mean, and one-sixth lie more than one standard deviation below the mean.
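
The two-thirds figure can be checked with a quick simulation (illustrative only; the mean and SD below are made-up values):

```python
import random
import statistics

random.seed(1)
# simulate 100,000 normally distributed scores (mean 100, SD 15)
scores = [random.gauss(100, 15) for _ in range(100_000)]

mean = statistics.mean(scores)
sd = statistics.stdev(scores)
within_one_sd = sum(1 for s in scores if mean - sd <= s <= mean + sd)
print(round(100 * within_one_sd / len(scores), 1))   # close to 68.3 (percent)
```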

Standard Error of Measurement
A measure of the amount of error to be expected in a score from a particular test. The smaller the standard error of measurement, the greater the accuracy of the test score. The standard error of measurement is the standard deviation of a theoretical distribution of a set of variations, each of which is the difference between the obtained score and true score. Thus, if a standard error of measurement is 5, the chances are two to one that an obtained score lies within five units of the true score.
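
In classical test theory, the standard error of measurement is commonly estimated from a test’s standard deviation and reliability as SEM = SD * sqrt(1 - reliability). That formula is not part of the glossary entry above, but it connects the two concepts; a small Python sketch with hypothetical figures:

```python
import math

def standard_error_of_measurement(sd, reliability):
    """Classical test theory estimate: SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

# Hypothetical test: score SD of 15 points, reliability coefficient 0.89
sem = standard_error_of_measurement(sd=15, reliability=0.89)
print(round(sem, 1))   # 5.0 -> about 2-to-1 odds the true score lies
                       # within 5 points of the obtained score
```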

Standard Score
A derived score scaled to produce an arbitrarily assigned mean and standard deviation. For example, deviation IQs are standard scores with a mean of 100 and, usually, a standard deviation of 16.

Standardization
The process of administering a test to a nationally representative sample of examinees using carefully defined directions, time limits, materials, and scoring procedures. The results produce norms to which the performance of other examinees can be compared, provided they took the test under the same conditions.

Standardization Sample
That part of the population that is used in the norming of a test, i.e., the reference population. The sample should represent the population in essential characteristics, some of which may be geographical location, age, or grade for K-12 students, or, for adults, participation in a specific type of program (for example, adult basic education).

Standardized Test
A test constructed of items that are appropriate in level of difficulty and discriminating power for the intended examinees, and that fit the pre-planned table of content specifications. The test is administered in accordance with explicit directions for uniform administration and is interpreted using a manual that contains reliable norms for the defined reference groups.

Stanine
A unit of a standard score scale that divides the norm population into nine groups with the mean at stanine 5. The word stanine draws its name from the fact that it is a STAndard score on a scale of NINE units.
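
Stanines are conventionally assigned from percentile ranks using fixed cut points that give the nine bands their standard normal-curve proportions (4, 7, 12, 17, 20, 17, 12, 7, and 4 percent of the norm group). A Python sketch assuming those conventional boundaries (they are standard practice, not stated in this glossary):

```python
def stanine(percentile_rank):
    """Map a percentile rank (1-99) to a stanine (1-9) using the
    conventional cut points; boundaries assumed, not from the glossary."""
    upper_bounds = [4, 11, 23, 40, 60, 77, 89, 96]   # top percentile of stanines 1-8
    for value, upper in enumerate(upper_bounds, start=1):
        if percentile_rank <= upper:
            return value
    return 9                                          # percentile 97 and above

print(stanine(50))   # 5 -> the average band, centered on the mean
print(stanine(95))   # 8
```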

Stem
The part of an item that asks a question, provides directions, or presents a statement to be completed.

Stimulus
A passage or graphic display about which questions are asked.

Test Battery
A test battery is a set of several tests designed to be administered as a unit. Individual subject-area tests measure different areas of content and may be scored separately; scores from the subtests may also be combined into a single score.

Test Developer
One who prepares and develops tests.

Test Item
A question or problem on a test.

Test Objective
A desired educational outcome such as “constructing meaning” or “adding whole numbers.” Usually several different objectives are measured in one subtest.

Test User
One who uses test results for some decision-making purpose.

Test-Taker
One who takes a test whether by choice, direction, or necessity.

Validity
The capability of a test to measure what its authors or users intend it to measure.

Web-Based Assessment
An assessment that is delivered over the World Wide Web and is accessed via a Web browser.

Source:

DRC. (2015). Assessment glossary. Retrieved from http://www.ctb.com/ctb.com/control/assessmentGlossaryTabAction?startLimit=A&endLimit=B&p=underAssess

Just Click and Learn!


Here’s my easy access to resources, taken from our EDS 113 modules; credits again to our dear professor, Dr. Lou Juachon. 🙂

Here I have compiled the different reading materials that have been helpful to me in my EDS 113 journey:

Module 1: Assessment Basics

American Public University System (2013). Glossary of Assessment Terms. http://www.apus.edu/community-scholars/learning-outcomes-assessment/university-assessment/glossary.htm

Center for School Success. (2011). Assessment vs. Evaluation. http://www.centerforschoolsuccess.org/index.php?option=com_content&view=article&id=86&Itemid=150

Centre for the Study of Higher Education-Australian Universities Teaching Committee. (2002). Core principles of effective assessment. http://www.cshe.unimelb.edu.au/assessinglearning/05/

CIIA (2008). Using Assessment to Improve Instruction. [YouTube video]. Available at https://www.youtube.com/watch?v=BZ3USs16J3Y&index=2&list=PL8BC599D3B6289157

CTB/McGraw-Hill LLC. (2014). Assessment Basics (Overview; Types of Assessment). [Web] http://www.ctb.com/ctb.com/control/assessmentBasicsTabAction?p=underAssess

CTB/McGraw-Hill LLC. (2014). Assessment Glossary. http://www.ctb.com/ctb.com/control/assessmentGlossaryTabAction?startLimit=A&endLimit=B&p=underAssess

Difference Between (n.d.) http://www.differencebetween.com/

Duke. What is the difference between assessment and evaluation? http://duke.edu/arc/documents/The%20difference%20between%20assessment%20and%20evaluation.pdf

Edutopia. (2008). Assessment Professional Development Guide. Retrieved from http://www.edutopia.org/assessment-guide

Field-Tested Learning Assessment Guide. [Web] Retrieved August 1, 2014, from http://www.flaguide.org/start/start.php

Formative Assessment for Middle School: Gathering and Analyzing Evidence. [Video]. Available at http://www.youtube.com/watch?v=rhL_sQwGl5c&list=PL9s6JUcLAVlAf7tEFUOyw4QWH0HMzm-D5&index=40

James, R. (2002). Core principles of effective assessment. [Excerpt from James, R., McInnis, C. and Devlin, M. (2002) Assessing Learning in Australian Universities.] Retrieved from http://www.cshe.unimelb.edu.au/assessinglearning/05/index.html

Massachusetts Elementary and Secondary Education. (2014). Basics of Assessment. [YouTube video] Available at https://www.youtube.com/watch?v=ucwzvG6JkOI

National Institute for Learning Outcomes. (2012). Providing Evidence of Student Learning: A Transparency Framework. [Web]. http://www.learningoutcomeassessment.org/TransparencyFramework.htm

Rogers, G.M. (2005). Assessment: Keeping it Simple [PowerPoint slides]. Retrieved from http://www.utexas.edu/provost/sacs/ppt/Assessment-Keeping%20it%20Simple_UT-Austin.ppt

Soulsby, E. (2009). Assessment Notes [PDF document]. Retrieved from http://assessment.uconn.edu/docs/resources/Eric_Soulsby_Assessment_Notes.pdf

Suskie, L. (2006). What are good assessment practices? [PDF document]. Retrieved from http://www.clark.edu/tlc/outcome_assessment/documents/suskie1.pdf

Teaching and Learning Laboratory. Assessment and Evaluation. Retrieved from http://tll.mit.edu/help/assessment-and-evaluation

Teaching and Learning Laboratory. Types of Assessment and Evaluation. Retrieved from http://tll.mit.edu/help/types-assessment-and-evaluation

Teaching Strategies. (n.d.). The Importance of the Assessment Cycle in the Creative Curriculum for Preschool. [PDF document]. Retrieved from http://teachingstrategies.com/content/pageDocs/Theory-Paper-Assessment-Creative-Curriculum-Preschool-10-2012.pdf

TKI. (n.d.) Using evidence for learning. (Assessment Online Webpage). http://assessment.tki.org.nz/Using-evidence-for-learning

Types of assessment – some definitions. In University of Exeter Website http://as.exeter.ac.uk/support/staffdevelopment/aspectsofacademicpractice/assessmentandfeedback/principlesofassessment/typesofassessment-definitions/

University of Connecticut. Assessment. http://assessment.uconn.edu/index.html

Vanderbilt Institutional Research Office. (2010). Vanderbilt University Assessment Website http://virg.vanderbilt.edu/AssessmentPlans/About.aspx

Victoria Department of Education and Early Childhood Development. Assessment Advice Page. http://www.education.vic.gov.au/school/teachers/support/Pages/advice.aspx

Weaver, B. (n.d.) Formal vs. Informal Assessments. Retrieved from http://www.scholastic.com/teachers/article/formal-versus-informal-assessments

Westminster College. (n.d.). Accreditation and Assessment. [Web] http://www.westminster.edu/acad/oaac/index.cfm

Module 2: Framework for Assessment of Student Learning

Biggs, J. & Tang, C. (2007). Teaching for Quality Learning at University. (Maidenhead: Open University Press). Available at http://docencia.etsit.urjc.es/moodle/pluginfile.php/18073/mod_resource/content/0/49657968-Teaching-for-Quality-Learning-at-University.pdf

Brabrand, C. & Dahl, B. (n.d.). Using the SOLO Taxonomy to Analyze Competence Progression of University Science Curricula. Retrieved from http://itu.dk/~brabrand/progression.pdf

Carnegie Mellon. Teaching Excellence & Educational Innovation: Articulate Your Learning Objectives. Available at http://www.cmu.edu/teaching/designteach/design/learningobjectives.html

Carnegie Mellon. Teaching Excellence & Educational Innovation: Bloom’s Taxonomy. Available at http://www.cmu.edu/teaching/designteach/design/bloomsTaxonomy.html

Carnegie Mellon. Teaching Excellence & Educational Innovation: Learning Objectives Samples. Available at http://www.cmu.edu/teaching/designteach/design/learningobjectives-samples/index.html

Classroom Assessment: Every Student a Learner. [PDF document] Retrieved from http://ati.pearson.com/downloads/chapters/CASL_02E_C01.pdf

Earl, L. & Katz, S. (2006). Rethinking classroom assessment with purpose in mind. Western & Northern Canadian Protocol for Collaboration on Education. [PDF documents]. Available at http://www.edu.gov.mb.ca/k12/assess/wncp/

Earl, L. (2006) Viewing and Discussion Guide (VDG) for the webcast on “Rethinking Classroom Instruction with Purpose in Mind.” Curriculum Services Canada.

Earl, L. (2006) Webcast on “Rethinking classroom assessment with purpose in mind.” Curriculum Services Canada.

Field-Tested Learning Assessment Guide (FLAG). Available at http://www.flaguide.org/start/primerfull.php

GECDSB AER. (2011). Assessment FOR, AS, & OF Learning. [YouTube video]. Available at https://www.youtube.com/watch?v=Q7QuQpMStS4

Goldner, S. (2014). Purposes of Classroom Assessment. [YouTube video]. Available at https://www.youtube.com/watch?v=noGP2DNesDU

Gravelis, A. (2013). Teaching, learning and assessment cycle [YouTube video]. Available at https://www.youtube.com/watch?v=Dz6StcOdOZg&list=PLLt6OuwVc3L93y-jzutjJH8c8FYFJsO0p

Huba, M.E. & Freed, J.E. (2000). Learner-Centered Assessment on College Campuses: Shifting the Focus from Teaching to Learning. Allyn & Bacon.

Illinois Online Network. Developing Course Objectives. Available at http://www.ion.uillinois.edu/resources/tutorials/id/developObjectives.asp

NILOA. (2012). Making Learning Outcomes Usable & Transparent. [Web]. http://www.learningoutcomeassessment.org/TFcomponents.htm

Shermis, M. & Di Vesta, F. (2011). Classroom Assessment in Action. MD: Rowman & Littlefield.

Soulsby, E. (2009). Assessment Notes [PDF document]. Retrieved from http://assessment.uconn.edu/docs/resources/Eric_Soulsby_Assessment_Notes.pdf

Teaching and Educational Development Institute. (n.d.) Biggs’ structure of the observed learning outcome (SOLO) taxonomy. Retrieved from http://www.tedi.uq.edu.au/downloads/Biggs_Solo.pdf


Teaching Strategies, LLC. (2012). The Importance of the Assessment Cycle in The Creative Curriculum® for Preschool. [PDF document] Retrieved from http://teachingstrategies.com/content/pageDocs/Theory-Paper-Assessment-Creative-Curriculum-Preschool-10-2012.pdf

University of Connecticut. Assessment. http://assessment.uconn.edu/index.html

Vanderbilt Institutional Research Office. (2010). Vanderbilt University Assessment Website http://virg.vanderbilt.edu/AssessmentPlans/About.aspx

Westminster College. (n.d.). Accreditation and Assessment. [Web] http://www.westminster.edu/acad/oaac/index.cfm

Walvoord, B. (n.d.). Assessment Clear and Simple. [PDF document]. Retrieved from http://www.westliberty.edu/institutional-research-and-assessment/files/2012/03/Assessment-Clear-and-Simple.pdf

Module 3: Types of Classroom Assessment

Module 3A. Formal and Informal Assessments

Weaver, Brenda. (2015). Formal vs. informal assessment. Retrieved from http://www.scholastic.com/teachers/article/formal-versus-informal-assessments

Williams, Yolanda. (2013-2015). Formal assessments: examples and types [lesson]. Retrieved from http://study.com/academy/lesson/formal-assessments-examples-types-quiz.html

Module 3B. Summative and Formative Assessments

Bilash, Olenka. (2011). Summative assessment. Retrieved from http://www.educ.ualberta.ca/staff/olenka.bilash/best%20of%20bilash/summativeassess.html

Eberly Center. What is the difference between formative and summative assessment? Retrieved from http://www.cmu.edu/teaching/assessment/basics/formative-summative.html

Education Service Australia. Formative use of summative assessment. Retrieved from http://www.assessmentforlearning.edu.au/professional_learning/formative_use_of_summative_assessment/formative_landing_page.html

Ronan, Amanda. (2015). Every Teacher’s Guide to Assessment. Retrieved from http://www.edudemic.com/summative-and-formative-assessments/

Module 3C. Traditional and Alternative Assessments

Dikli, S. (2003). Assessment at a distance: Traditional vs. alternative assessments. The Turkish Online Journal of Educational Technology, 2(3) Article 2 [PDF document]. Retrieved from http://www.tojet.net/articles/v2i3/232.

Kwako. (n.d.). A brief summary of traditional and alternative assessment. Retrieved from www.stat.wisc.edu/~nordheim/Kwako_assessment4.doc

Traditional vs. Authentic Assessment. (2012). Retrieved from http://www.cssvt.org/wp/wp-content/uploads/2012/05/Traditional-vs-Authentic-Assessment.pdf

Wiggins, G. (1990). The case for authentic assessment. Retrieved from http://pareonline.net/getvn.asp?v=2&n=2

Module 3D. Peer and Self Assessments

NCLRC. (2014). Peer and self-assessment. Retrieved from http://www.nclrc.org/essentials/assessing/peereval.htm

UNSW. (2015). Student peer assessment. Retrieved from https://teaching.unsw.edu.au/peer-assessment

UNSW. (2015). Student self-assessment. Retrieved from https://teaching.unsw.edu.au/peer-assessment

Module 3E. Differentiated Assessments

BOSTES. (n.d.). Differentiated assessment. Retrieved from http://syllabus.bos.nsw.edu.au/support-materials/differentiated-assessment/

Burrus, Z. & Messer, D. (n.d.). Differentiation and assessment. Retrieved from https://sites.google.com/site/aceeducatorresources/Home/assessment-resources/differentiation-and-assessment

Dodge, J. (2009). 25 Quick formative assessments for a differentiated classroom. Retrieved from http://store.scholastic.com/content/stores/media/products/samples/21/9780545087421.pdf

Kinzie, C.L. & Markovchick, K (n.d.). Comparing traditional and differentiated classrooms. Retrieved from http://www.mainesupportnetwork.org/pdfs/sing07/Singapore%20-%20Handout%20-%20DI%20-%20Comparing%20Traditional%20and%20Diff.pdf

Teaching as Leadership. (n.d.) P-4: Differentiate your plans to fit your students. Retrieved from http://teachingasleadership.org/sites/default/files/How_To/PP/P-4/P4_Trad_v_Diff_Classroom.pdf

Module 4: Assessment Planning and Construction; Reporting and Feedback

APICS. (n.d.). Understanding a scaled score. Retrieved from http://www.apics.org/docs/cert-faq-pdf/scaledscoredocument.pdf?Status=Master

Clay, B. (2001). Is this a trick question? Kansas Curriculum Center. Retrieved from http://www.ksde.org/Portals/0/CSAS/CSAS%20Home/CTE%20Home/Instructor_Resources/TrickQuestion.pdf

Caribbean Examinations Council. (n.d.). Classroom assessment. Retrieved from https://www.cxc.org/SiteAssets/CPEADocuments/Assessment.pdf

Cornell University Center for Teaching Excellence. (2014). Using rubrics. Retrieved from http://www.cte.cornell.edu/teaching-ideas/assessing-student-learning/using-rubrics.html

Duquesne University. Good, better, best: Multiple choice exam construction. Retrieved from http://www.duq.edu/about/centers-and-institutes/center-for-teaching-excellence/teaching-and-learning/multiple-choice-exam-construction

Eberly Center. (n.d.). Grading methods for group work. Retrieved from http://www.cmu.edu/teaching/assessment/assesslearning/groupWorkGradingMethods.html

Enerson, D., Plank, K., and Johnson, R.D. (2001). Classroom assessment techniques. Retrieved from http://www.schreyerinstitute.psu.edu/pdf/Classroom_Assessment_Techniques_Intro.pdf

PALS Guide. (2005). Rubrics and scoring. Retrieved from http://pals.sri.com/guide/scoringlearn.html

SFSU. (n.d.). Practice constructing items. Retrieved from http://www.sfsu.edu/~testing/MCTEST/practiceconstructing.html

SFSU. (n.d.). Test construction and assembly. Retrieved from http://www.sfsu.edu/~testing/MCTEST/testconstruction.html

Tan, X. and Michel, R. (2011). Why do standardized testing programs report scaled scores? Retrieved from http://www.ets.org/Media/Research/pdf/RD_Connections16.pdf

Teacher Vision. (2015). Creating rubrics. Retrieved from https://www.teachervision.com/teaching-methods-and-management/rubrics/4521.html

University of Exeter. (n.d.). Marking and giving feedback. Retrieved from http://as.exeter.ac.uk/support/staffdevelopment/aspectsofacademicpractice/assessmentandfeedback/markingandgivingfeedback/generalprinciples/

Wikipedia. (2013). Test score. Retrieved from https://en.wikipedia.org/wiki/Test_score

Hope this helps, folks. Thanks again, Teacher Malou! 🙂