Slide 1: Reviewing systematic reviews: meta-analysis of What Works Clearinghouse computer-assisted reading interventions
Improving Education through Accountability and Evaluation: Lessons from Around the World, Rome, Italy, October 2012
Andrei Streke and Tsze Chan
Slide 2: Session 4.2: Building and Interpreting Scientific Evidence, Thursday, October 4th
Slide 3: Presentation Overview
- What Works Clearinghouse (WWC) systematic reviews
- Meta-analysis of computer-assisted programs across WWC topic areas, reading outcomes
- Meta-analysis of computer-assisted programs within the Beginning Reading topic area (grades K-3)
Slide 4: Key terms
- The What Works Clearinghouse (WWC) is a “central and trusted source of scientific evidence for what works in education.” The WWC produces systematic reviews on the effectiveness of educational interventions (programs, curricula, products, and practices), grouped by topic area.
- Meta-analysis is a statistical technique that summarizes quantitative findings across similar studies. Each study’s findings are converted to a standard effect size.
- Computer-assisted interventions encompass reading software products as well as programs that combine computer activities with traditional curriculum elements.
Slide 5: WWC Systematic Review
- A clearly stated set of objectives with pre-defined eligibility criteria for studies
- An explicit, reproducible methodology
- A systematic search that attempts to identify all studies that would meet the eligibility criteria
- An assessment of the validity of the findings of the included studies
- A systematic presentation and synthesis of the characteristics and findings of the studies
Slide 6: Meta-Analysis of Reading Interventions
- Extraction of statistical and descriptive information from intervention reports and study review guides
- Aggregation of effect sizes across studies
- Moderator analysis: ANOVA type and regression type
Slide 7: WWC Systematic Review
WWC products:
- Intervention reports (http://ies.ed.gov/ncee/wwc/publications_reviews.aspx)
- Practice guides
- Quick reviews
Normative documents (http://ies.ed.gov/ncee/wwc):
- WWC Procedures and Standards Handbook
- WWC topic area review protocol
Slide 9: Meta-analysis of computer-assisted programs across WWC topic areas, reading outcomes
Does the evidence in WWC reports indicate that computer-assisted programs increase student reading achievement?
Slide 10: Computer-assisted interventions
Slide 11: Example of computer-assisted programs
Earobics® is interactive software that provides students in pre-K through third grade with individual, systematic instruction in early literacy skills as students interact with animated characters. The program builds children’s skills in phonemic awareness, auditory processing, and phonics, as well as the cognitive and language skills required for comprehension.
Slide 12: Meta-Analysis procedures
- Effect sizes
- Aggregation method
- Testing for homogeneity (Q statistic shown below)
- Fixed and random effects models
- Moderator analysis: ANOVA type and regression type
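The slide lists testing for homogeneity without a formula; the standard Q statistic used for this purpose (Hedges & Olkin, 1985) is reproduced here for reference (notation mine, not copied from the slide):

```latex
% Homogeneity test: under the null hypothesis that all k studies estimate a
% single population effect, Q follows a chi-square distribution with k-1 df.
Q = \sum_{i=1}^{k} w_i \left( ES_i - \overline{ES} \right)^2 ,
\qquad w_i = \frac{1}{v_i} ,
\qquad Q \;\sim\; \chi^2_{k-1} \ \text{under homogeneity}
```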
Slide 13: Effect Size
(1) Effect size (Hedges & Olkin, 1985):
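The formula itself appears on the slide only as an image; a reconstruction of the standard Hedges & Olkin (1985) standardized mean difference it presumably shows (notation mine), with the small-sample correction:

```latex
% Standardized mean difference with Hedges' small-sample correction;
% subscripts T and C denote the treatment and control groups.
d = \frac{\bar{X}_T - \bar{X}_C}{S_{pooled}} ,
\qquad
S_{pooled} = \sqrt{\frac{(n_T - 1) S_T^2 + (n_C - 1) S_C^2}{n_T + n_C - 2}} ,
\qquad
g = \left( 1 - \frac{3}{4(n_T + n_C) - 9} \right) d
```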
Slide 14: Flowchart for calculation of effect size (Tobler et al., 2000)
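The flowchart is a figure and is not reproduced in the text. As a rough, hypothetical illustration of the kind of decision logic such flowcharts encode (the specific branches from Tobler et al., 2000 are not reconstructed here; the `study` keys are assumptions, not the authors' schema):

```python
import math

def effect_size(study):
    """Illustrative sketch: compute a standardized mean difference from
    whichever statistics a study reports, in order of preference."""
    n_t, n_c = study["n_treatment"], study["n_control"]

    if "mean_t" in study and "sd_t" in study:
        # Best case: group means and standard deviations reported directly.
        s_pooled = math.sqrt(((n_t - 1) * study["sd_t"] ** 2 +
                              (n_c - 1) * study["sd_c"] ** 2) / (n_t + n_c - 2))
        d = (study["mean_t"] - study["mean_c"]) / s_pooled
    elif "t" in study:
        # Independent-samples t statistic: d = t * sqrt(1/n_t + 1/n_c).
        d = study["t"] * math.sqrt(1 / n_t + 1 / n_c)
    elif "F" in study:
        # One-way F with two groups: |t| = sqrt(F); sign from the reported direction.
        d = math.copysign(math.sqrt(study["F"]) * math.sqrt(1 / n_t + 1 / n_c),
                          study.get("direction", 1))
    else:
        raise ValueError("not enough statistics to compute an effect size")

    # Hedges' small-sample correction.
    return d * (1 - 3 / (4 * (n_t + n_c) - 9))
```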
Slide 15: Number of students and effect sizes by topic area

Topic Area | Students (total) | Students (intervention) | Students (control) | Number of effect sizes
Adolescent Literacy | 26970 | 12717 | 14253 | 59
Beginning Reading | 2636 | 1339 | 1297 | 151
Early Childhood Education | 910 | 447 | 463 | 39
English Language Learners | 308 | 173 | 135 | 6
Total | 30824 | 14676 | 16148 | 255
Slide 16: Aggregation of Effect Sizes
(1) Effect size (Hedges)
(2) Effect size variance; weight w = (variance)^-1
(3) Weighted average effect size
(4) Weighted average effect size variance
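The slide shows these four quantities as images; the standard inverse-variance formulas from Hedges & Olkin (1985) and Lipsey & Wilson (2001), which they presumably reproduce, are (notation mine):

```latex
\begin{align*}
v_i &= \frac{n_T + n_C}{n_T n_C} + \frac{g_i^2}{2(n_T + n_C)}
      && \text{(2) effect size variance} \\
w_i &= \frac{1}{v_i}
      && \text{weight} \\
\overline{ES} &= \frac{\sum_i w_i g_i}{\sum_i w_i}
      && \text{(3) weighted average effect size} \\
v_{\overline{ES}} &= \frac{1}{\sum_i w_i}
      && \text{(4) variance of the weighted average}
\end{align*}
```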
Slide 17: Fixed and Random Effects Model Weights
- The fixed effects model weights each study by the inverse of its sampling variance.
- The random effects model weights each study by the inverse of the sampling variance plus a constant that represents variability across the population of effects (Lipsey & Wilson, 2001). This constant is the random effects variance component. (A computational sketch follows below.)
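A minimal sketch of the two weighting schemes, assuming the method-of-moments estimate of the random effects variance component described in Lipsey & Wilson (2001); function and variable names are illustrative, not the authors' code:

```python
import math

def pool(effect_sizes, variances, random_effects=True):
    """Inverse-variance pooling of study effect sizes.

    Fixed effects: w_i = 1 / v_i.
    Random effects: w_i = 1 / (v_i + tau2), where tau2 is the method-of-moments
    estimate of the between-study variance component.
    """
    k = len(effect_sizes)
    w = [1.0 / v for v in variances]                      # fixed effects weights
    mean_fe = sum(wi * es for wi, es in zip(w, effect_sizes)) / sum(w)

    if random_effects:
        # Homogeneity statistic Q and the variance component tau^2.
        q = sum(wi * (es - mean_fe) ** 2 for wi, es in zip(w, effect_sizes))
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (k - 1)) / c)
        w = [1.0 / (v + tau2) for v in variances]         # random effects weights

    mean = sum(wi * es for wi, es in zip(w, effect_sizes)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return mean, se, mean / se                            # pooled ES, its SE, z value

# Example call with made-up numbers (not the study data from the slides):
# pool([0.2, 0.35, 0.1], [0.02, 0.04, 0.03])
```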
Slide 18: Computer-assisted programs, random effects

WWC Topic Area | Number of Studies | Weighted Effect Size | Standard Error | 95% CI Lower | 95% CI Upper | Z-value | P-value
Adolescent literacy | 31 | 0.13 | 0.03 | 0.07 | 0.18 | 4.56 | 0.00
Beginning reading | 33 | 0.28 | 0.06 | 0.16 | 0.40 | 4.71 | 0.00
Early childhood education | 6 | 0.12 | 0.07 | -0.01 | 0.25 | 1.74 | 0.14
English language learners | 3 | 0.30 | 0.27 | -0.23 | 0.83 | 1.11 | 0.38
Slide 19: Computer-assisted reading interventions, topic area effects and 95% CIs
Slide 20: Meta-analysis of computer-assisted programs within the Beginning Reading topic area
Are computer-assisted reading programs more effective than non-computer reading programs in improving student reading achievement?
Slide 21: Selection Criteria for the Beginning Reading Topic Area
- Manuscript is written in English and published in 1983 or later
- Both published and unpublished reports are included
- Eligible designs: RCT; QED with statistical controls for pretest and/or a comparison group matched on pretest; regression discontinuity; SCD
- At least one relevant quantitative outcome measure
- Manuscript focuses on beginning reading
- Focus is on students ages 5-8 and/or in grades K-3
- Primary language of instruction is English
Slide 22: Beginning Reading Topic Area
Slide 23: Example of “other” reading programs
Reading Recovery® is a short-term tutoring intervention intended to serve the lowest-achieving first-grade students. The goals of Reading Recovery® are to promote literacy skills, reduce the number of first-grade students who are struggling to read, and prevent long-term reading difficulties. Reading Recovery® supplements classroom teaching with one-to-one tutoring sessions, generally conducted as pull-out sessions during the school day.
Slide 24: Number of students and effect sizes by type of program: Beginning Reading topic area

Type of Program | Students (total) | Students (intervention) | Students (control) | Number of effect sizes
BR Computer-Assisted Programs | 2636 | 1339 | 1297 | 151
Other BR Programs | 7591 | 4042 | 3549 | 174
Total Beginning Reading | 10227 | 5381 | 4846 | 325
Slide 25: Beginning Reading programs, random effects

Type of Program | n | M | Standard Error | 95% Lower | 95% Upper | Z-value | P-value
Computer-assisted programs | 33 | 0.28 | 0.06 | 0.16 | 0.40 | 4.71 | 0.000
Other BR programs | 47 | 0.39 | 0.04 | 0.32 | 0.47 | 9.84 | 0.000
Beginning Reading Total | 80 | 0.35 | 0.03 | 0.29 | 0.42 | 10.65 | 0.000
Slide 26: Beginning Reading Interventions, Random Effects, 95% Confidence Intervals
Slide 27: Moderator Analysis, random effects
Modeling between-study variability:
- Categorical models (analogous to a one-way ANOVA)
- Regression models (continuous variables and/or multiple variables with weighted multiple regression); a sketch follows below
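A minimal sketch of the regression-type moderator analysis, assuming random effects weights have already been computed and using an illustrative dummy-coded moderator (the actual coding scheme appears on later slides only as images):

```python
import numpy as np

def weighted_meta_regression(es, weights, moderators):
    """Weighted least squares regression of effect sizes on moderators.

    es         : study effect sizes
    weights    : random effects weights, 1 / (v_i + tau2)
    moderators : moderator codes (e.g., 0/1 dummies), one row per study
    Returns the regression coefficients and their standard errors.
    """
    es = np.asarray(es, dtype=float)
    w = np.asarray(weights, dtype=float)
    X = np.column_stack([np.ones(len(es)), np.asarray(moderators, dtype=float)])

    W = np.diag(w)
    xtwx_inv = np.linalg.inv(X.T @ W @ X)
    beta = xtwx_inv @ X.T @ W @ es        # weighted regression coefficients
    se = np.sqrt(np.diag(xtwx_inv))       # SEs under the meta-analytic model
    return beta, se

# Illustrative call: one dummy moderator, 1 = computer-assisted, 0 = other program.
# beta, se = weighted_meta_regression([0.2, 0.4, 0.3], [25.0, 18.0, 30.0], [[1], [0], [0]])
```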
Slide 28: Categorical analysis: moderators of program effectiveness
- Population
- Design
- Sample size
- Control group
- Reading domain
Slide 29: Weighted mean Effect Sizes for moderators: 80 studies, Beginning Reading, random effects
Slide 30: Weighted mean Effect Sizes for moderators: 80 studies, Beginning Reading, random effects
Slide 31: Dummy Variables for Regressions
Slide 32: Regression Statistics for BR Programs, Random Effects
Slide 33: Regression Statistics for BR Programs, Random Effects

Type of Program | n | M | Standard Error | 95% Lower | 95% Upper | Z-value | P-value
Computer-assisted programs | 33 | 0.28 | 0.06 | 0.16 | 0.40 | 4.71 | 0.000
Other BR programs | 47 | 0.39 | 0.04 | 0.32 | 0.47 | 9.84 | 0.000
Beginning Reading Total | 80 | 0.35 | 0.03 | 0.29 | 0.42 | 10.65 | 0.000

Slide 34: Regression Statistics for BR Programs, Random Effects
Slide 35: Meta-Analytic Multiple Regression Results from the Wilson/Lipsey SPSS Macro
Slide 36: Conclusions
- Investments in education have become an important national policy tool across the globe. With schools facing substantial hardware and software costs, concerns naturally arise about the contribution of technology to students’ learning.
- The present work lends some support to the proposition that computer-assisted interventions in reading are effective. The average effect for beginning reading computer-assisted programs is positive and substantively important (that is, greater than 0.25).
- For the Beginning Reading topic area (grades K-3), the effect appears smaller than the effect achieved by non-computer reading programs.
Slide 37: References
- Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. R. (2009). Introduction to meta-analysis. John Wiley and Sons.
- Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. New York: Academic Press.
- Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.
- Tobler, N. S., Roona, M. R., Ochshorn, P., Marshall, D. G., Streke, A. V., & Stackpole, K. M. (2000). School-based adolescent drug prevention programs: 1998 meta-analysis. Journal of Primary Prevention, 20(4), 275-336.
Slide 38: For More Information
Please contact:
Andrei Streke, AStreke@mathematica-mpr.com
Tsze Chan, TChan@air.org
Slide 40: Beginning Reading programs, random and fixed effects

Random effects
Type of Program | n | M | Standard Error | 95% Lower | 95% Upper | Z-value | P-value
Computer-assisted programs | 33 | 0.28 | 0.06 | 0.16 | 0.40 | 4.71 | 0.000
Other BR programs | 47 | 0.39 | 0.04 | 0.32 | 0.47 | 9.84 | 0.000
Beginning Reading Total | 80 | 0.35 | 0.03 | 0.29 | 0.42 | 10.65 | 0.000

Fixed effects
Type of Program | n | M | Standard Error | 95% Lower | 95% Upper | Z-value | P-value
Computer-assisted programs | 33 | 0.26 | 0.04 | 0.18 | 0.34 | 6.50 | 0.000
Other BR programs | 47 | 0.34 | 0.02 | 0.29 | 0.39 | 14.35 | 0.000
Beginning Reading Total | 80 | 0.32 | 0.02 | 0.28 | 0.36 | 15.65 | 0.000
Slide 41: Computer-assisted programs, random and fixed effects
Slide 42: Random versus Fixed Effects Models
- The fixed effects model assumes that (1) there is one true population effect that all studies are estimating, and (2) all of the variability between effect sizes is due to sampling error.
- The random effects model assumes that (1) there is a distribution of population effects that the studies are estimating, and (2) variability between effect sizes is due to sampling error plus variability in the population of effects (Lipsey & Wilson, 2001).
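Expressed in terms of study weights (standard notation, not copied from the slides), the contrast amounts to:

```latex
% Fixed effects: weight by the inverse sampling variance only.
w_i^{FE} = \frac{1}{v_i}
\qquad
% Random effects: add the estimated between-study variance component \hat{\tau}^2
% (method of moments, from the homogeneity statistic Q over k studies).
w_i^{RE} = \frac{1}{v_i + \hat{\tau}^2} ,
\qquad
\hat{\tau}^2 = \max\!\left( 0,\ \frac{Q - (k - 1)}{\sum_i w_i - \sum_i w_i^2 / \sum_i w_i} \right)
```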
Slide 43: Beginning Reading Interventions, Random Effects, 95% Confidence Intervals
Slide 44: Examples of problematic study designs that do not meet WWC criteria
- Designs that confound study condition and study site: programs that were tested with only one treatment and one control classroom or school
- Non-comparable groups: study designs that compared struggling readers to average or good readers to test a program’s effectiveness