From Approved Solution to Dynamic Mosaic: A Personal Reflection on the Accelerating Evolution of Education
The following is an edited version of a paper submitted as coursework for the Educational Psychology program at the University of New Mexico in May 2012.
To situate myself by educational era, I learned how to “touch type” in a semester-long elective course in high school. I also learned how to use a slide rule, a technology that was necessary for the more mathematics-intensive courses offered by my small high school in Texas. This technology was the basis for state-wide academic competition sponsored by Texas’s University Interscholastic League, just like band, drama, and football.
I did not participate in the slide rule competition, but I continued to develop my proficiency with the technology throughout my college career at the U.S. Air Force Academy. The first handheld scientific calculator I saw, the HP-35, appeared during my sophomore year, but its cost was prohibitive for me and for most of my classmates. I took the required Computer Science course as a junior, working on a Burroughs mainframe that was programmed via IBM punch cards. We wrote programs in the ALGOL programming language, tediously typing the commands onto punch cards, then leaving the stack of cards to batch run overnight. The next morning we could pick up a printout that provided the output of our program – or, more likely, error codes with cryptic explanations for why the program failed to run, such as “undeclared variable in line 17.”
In my math and engineering courses, the instructors employed the instructional technique of “going to the boards.” Each wall in every classroom was covered with a blackboard. A portion of most class periods was spent with the cadets standing at the board, working out assigned problems under the scrutiny of the instructor. At the completion of the drill, he (the Academy’s faculty, as well as the Cadet Wing, was all male at that time) would walk through “the approved solution” to each problem.
Now approaching the 40th anniversary of the day I entered the Air Force Academy, I’ve attended my last class session and am about to graduate from the Educational Psychology Master’s program at the University of New Mexico (UNM). The “blackboard” employed in this last class did not use chalk – it is the brand name of a software Learning Management System (LMS) that was, ironically, displayed to the class by projection onto a whiteboard. Slide rules are now trivia answers or Halloween costume accessories. On most days, I carried two computers to my UNM class, one in a backpack and one in my pocket. The class itself was held in a computer lab with a couple of dozen networked computers available to students. So far as I know, punch cards haven’t been used in decades.
The purpose of this preface, however, is not to reminisce about the good old days but to establish a baseline from which we might consider the mind-blowing (or mind-numbing?) pace of technological change and what it might mean for education. I contend that we are not only well into an indeterminate period of “disruption” with respect to technologies themselves (Conole, de Laat, Dillon, & Darby, 2008; Henn, 2012), but we also find ourselves in great uncertainty about how these disruptive technologies have and will cause us to think differently about virtually all aspects of education.
In this paper I want to discuss how technology may well impact an area of education that is, in my opinion, ripe for widespread and long overdue disruption – research.
The Approved Solution
While my Academy instructors used references to “the approved solution” only in the context of mathematics and engineering problems, I’m going to appropriate the phrase as a metaphor. The phrase here serves to represent a mindset that believes there is always one answer, or approach, or process, that is right, best, only, or simply what works. In philosophical terms, one might say that “the approved solution” succinctly captures the gist of the paradigm or worldview that has been called positivism (Guba & Lincoln, 1994) or objectivism (Marley & Levin, 2011). I believe it is also fair to say that this worldview is the foundational philosophy that underlies the current attitude toward educational research, especially in the decade since No Child Left Behind and the unmistakable bias favoring evidence-based, empirical research (Eisenhart & Towne, 2003).
One of the nation’s leading educational psychologists, Dr. Joel Levin, visited UNM for two days in March 2012. I was fortunate to attend his two lectures. The first, “How to conduct more scientifically credible educational intervention research,” provided a comprehensive summation of the theory and practice for rigorous intervention experiments in educational research. In the second, “Practical points for the prospective professional publisher,” Dr. Levin shared his wisdom and experience regarding the academic publishing business for aspiring researchers who would prefer to publish rather than perish.
Among my many takeaways from these two lectures, I’ll mention two that are relevant to this paper. Regarding rigorous research, I was struck by how exceedingly difficult valid and reliable research in education is to achieve or to assess, even for educational researchers. For the average teacher, the results of studies published in peer-reviewed journals are rarely understandable or meaningful. As a result, Dr. Levin noted, there is a need for “translators” who are skilled both in analyzing research and in communicating results to the layperson. Regarding his tips for publishing, he stepped through a typical timeline to illustrate that the process from research idea to publication is much longer than most students realize – almost three years.
This caused me to wonder not only about the timeliness of new research, but also about the age of the research that is referenced in new research. To satisfy my curiosity, I compiled the age of references for two datasets from a course on principles of classroom learning, required in the UNM Educational Psychology program. The first dataset included all of the references in the textbook for the course (Bruning, Schraw, & Norby, 2011). The second dataset included a list of 21 articles assigned during the course, including all the references in those articles. Table 1 summarizes the data.
The textbook contained 1,250 unique references. The mean date of publication for those references was 1994, and the median date was 1996. Age-at-publication data are not included because the book was first published in 1990 and last updated in 2011. With a median publication date of 1996, the median age of the textbook’s references is 16 years in 2012. The 21 articles contained a total of 1,307 references, a mean of 62 per article and a median of 48. For those 1,307 references, the mean date of publication was 1986 and the median date was 1989. The mean age at publication was 10.4 years, and the median age at publication was 7.4 years. With a median publication date of 1989, the median age of the articles’ references is 23 years in 2012.
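For a reader who wants to replicate this kind of reference-age tally, the arithmetic is straightforward. The sketch below (in Python, using a small made-up toy dataset rather than my actual reference lists) shows the summary statistics reported above: mean and median publication dates, age at publication, and age as of a chosen reference year.

```python
from statistics import mean, median

def summarize_references(refs, as_of=2012):
    """Summarize (cited_reference_year, citing_work_year) pairs."""
    ref_years = [r for r, _ in refs]
    ages_at_pub = [p - r for r, p in refs]  # age of each reference when the citing work appeared
    return {
        "mean_ref_year": mean(ref_years),
        "median_ref_year": median(ref_years),
        "mean_age_at_pub": mean(ages_at_pub),
        "median_age_at_pub": median(ages_at_pub),
        "median_age_now": as_of - median(ref_years),
    }

# Toy data (hypothetical): year the cited reference appeared,
# paired with the year of the article that cited it.
toy = [(1980, 1995), (1985, 1995), (1990, 2000), (1992, 2000)]
print(summarize_references(toy))
```

Running the same computation over the full lists of reference dates produced the figures in Table 1.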
What do these data mean? I do not suggest any specific interpretation or inference, beyond these general observations. First, one would expect that the faster a domain is changing and discovering (or creating) new findings, the more recent the references in its published books and articles will be. Second, the older the references, the more one can assume that the knowledge in the domain is settled, accepted, and not as susceptible to change or challenge. Third, a preponderance of older references might suggest that more recent research is less relevant. In this context, therefore, a three-year wait before the results of the latest educational research are published does not strike me as a concern.
Now I want to return to my takeaway from Dr. Levin’s first lecture and his suggestion that the field of educational research is in need of “translators” to help communicate the results of studies to educators. Spawned and funded by No Child Left Behind legislation, the federal Internet-based What Works Clearinghouse (WWC) was created in 2002 by the Department of Education’s Institute of Education Sciences as the “single place for policy makers, practitioners, and parents to turn for information on what works in education” (Whitehurst, 2003). Operating with an initial 5-year contract worth $26 million, the WWC did not produce any “product” until 2004, causing critics to derisively refer to it as the “nothing works clearinghouse” (Viadero, 2006). A Government Accountability Office report in 2010 documented a litany of concerns with the WWC, including the fact that the WWC had failed to establish any cost/benefit metrics; its screening criteria excluded “some rigorous research designs that may be appropriate;” the dissemination of its results to states and school districts was not timely or adequate; and the WWC had failed to disclose potential conflicts of interest with respect to research that had been provided by textbook publishers and authors who stood to benefit financially (U.S. Government Accountability Office, 2010). After ten years, $80 million, and two different contractors, the WWC now offers online reviews of 275 different interventions, with the results shown in Table 2.
The Effectiveness column refers to an assessment of the extent to which the intervention achieved the intended outcome, scored as Negative Effects, Potentially Negative Effects, No Discernible Effects, Mixed Effects, Potentially Positive Effects, and Positive Effects. Table 2 indicates only the number of interventions scored as Positive or Potentially Positive. The Extent of Evidence column is an indicator of sample size and number of studies included, which relates to generalizability. The three possible extent indicators are Not Rated, Small, and Medium-Large. This column in Table 2 reflects only the number of interventions with Medium-Large extent of evidence, of those that also had Positive or Potentially Positive Effects.
From a cost-effectiveness standpoint, for its $80 million investment over 10 years, the federal government has so far received 275 intervention reports ($0.291M per report), 139 reports that showed Positive or Potentially Positive Effects ($0.575M per report), and 34 reports that showed Positive or Potentially Positive Effects with a Medium-Large Extent of Evidence ($2.35M per report). Of those 34, only one was in Math and Science. I would also point out that of those 34 “highest rated” interventions, 27 (79%) are commercial textbook or curriculum offerings whose publishers have a financial interest in the WWC ratings.
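The per-report figures above are simply the total budget divided by the count at each tier; a quick sketch (in Python, using only the totals cited in this paper) makes the arithmetic explicit:

```python
# WWC cost-effectiveness arithmetic: $80M total over ten years,
# divided by each successively stricter tier of report counts.
BUDGET_M = 80.0

tiers = {
    "all intervention reports": 275,
    "Positive or Potentially Positive Effects": 139,
    "Positive effects with Medium-Large evidence": 34,
}

for label, count in tiers.items():
    print(f"{label}: ${BUDGET_M / count:.3f}M per report")
```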
To summarize this “approved solution” mindset in educational research, based on this brief analysis:
- it reflects a positivist or objectivist worldview that is beholden to strict and rigorous scientific method;
- only research that employs such scientific methods can be deemed to provide evidence on which educational policy and practice decisions should be based;
- most results of educational research studies are dated and not understandable by everyday teachers; and
- given the 10-year results of the WWC, only a modest number of credible or definitive research results have been reported, and those positive results have come at a high cost.
The Developing Dynamic Mosaic
Any attempt to analyze in a few paragraphs why technology matters to education risks trivializing the subject, but I will note a few things to consider.
- There are families of hardware devices, software services, web or cloud-based capabilities, and applications and platforms that scale from the individual to the enterprise.
- There are advantages that can be characterized by cost, speed, accessibility (in terms of location, disability, disadvantage), learning modality, pacing, interests, individual need, and institutional scale.
- The scope of technology applications crosses domains and dimensions, from pre-school to adult, from formal education to personal enrichment to corporate training, from online distance learning to classroom seminars, and from scheduled synchronous connections to asynchronous on-demand access.
One manifestation of how digital technologies are disrupting education is the degree to which they are disrupting, or influencing, every aspect of our cultural and social lives. Students of all ages who meet even the most rudimentary levels of technology literacy have adopted and integrated these capabilities in their own lives such that “it is central to how they organize and orientate their learning” (Conole, de Laat, Dillon, & Darby, 2008). Our always-on communication devices connect us with friends, family, music, video, games, Facebook, Twitter, email, Flickr, and the rest of the online world all the time. Some educators and researchers want to take this momentum and ride it to an educational philosophy that goes beyond “learning anytime and anywhere” to “learning all the time and everywhere” (Cook, 2012). Just as previous generations sought a work/life balance before idealizing a seamless, distinction-less integration of work as life, the current generation may be the first to recognize no distinction between living and learning.
The disruption to education has been tumultuous enough to spawn a new theory of learning from George Siemens, now affiliated with Athabasca University, Canada’s open university. He calls it connectivism, “an integration of principles explored by chaos, network, complexity, and self-organization theories” (Siemens, 2005).
The rate of change has been so swift that the relatively recent incorporation of Learning Management Systems (LMS) as integral components of university online offerings is already viewed by some as an institutional albatross. Rather than the centralized, capable, but cumbersome do-everything system that serves the university, some academics are now advocating for a student-centered approach they call Personal Learning Environments, or PLE (Mott, 2010; Tu, Sujo-Montes, Yeh, Chan, & Blocher, 2012). Tu, Sujo-Montes, Yeh, Chan, and Blocher attribute three characteristics to a PLE: students set their own learning goals, they manage their own learning process and content, and they interact with others throughout their learning process (p. 14). Mott contrasts the PLE with the LMS by describing it as “the educational manifestation of the web’s ‘small pieces loosely joined’.”
I think a better name than “small pieces loosely joined” is dynamic mosaic.
Two examples that illustrate where the momentum of this dynamic mosaic may be heading are the educational initiatives begun by Sebastian Thrun and Salman Khan.
Thrun might be the nearest thing there is to a digital renaissance man. He is the visionary behind Google’s driverless car project and the new Google glasses, and he heads Stanford’s Artificial Intelligence (AI) lab. Last year, he and a colleague, Peter Norvig, offered a free online course in AI through Stanford and were astounded when over 160,000 people around the world enrolled. Thrun has since given up some of his teaching responsibilities at Stanford and raised venture capital to found udacity.com, which offers free, world-class courses focused primarily on computing-related topics. The philosophy that drives Udacity is a commitment to “free online education for everybody” (Henn, 2012).
Salman Khan, founder of the Khan Academy and holder of multiple degrees from MIT and Harvard Business School, consented to help a 13-year-old cousin with math in 2004. From phone calls to Yahoo chats to crude videos, Khan gradually built a library of short (generally less than 10 minutes) video tutorials covering a variety of classroom lessons and posted them to YouTube. The number of views rapidly climbed past a million. So began a dizzying ascent to a position of influence in education: he has attracted funding from the Bill and Melinda Gates Foundation and Google, and has been profiled by “60 Minutes” and the New York Times. His YouTube channel is approaching an astonishing 150 million views. His Khan Academy website offers over 3,100 videos where one can “learn almost anything for free.”
In late 2010, Khan used some of his funding to hire developers to create a teacher management system to monitor and direct students in a classroom that uses Khan videos extensively. The students can proceed at their own pace while the teacher follows their progress, or their stumbles, on her own monitor. The system also provides real-time data on progress, quiz scores, incentive rewards reached, time spent on each module, and other relevant measures. That kind of ongoing, real-time information is something “old school” researchers must envy.
At the other end of the scale are the data associated with those nearly 150 million tutorial-video views. Khan and his associates are now mining that “massive pile of data about how people learn and where they get stuck” (Thompson, 2011). Their objective is to develop algorithms that can be used to tailor or customize lessons for students based on variables such as how many times a video should be viewed before taking a quiz, correlations among results from different subject areas, and how to spot someone who’s stuck on a concept.
More and more individuals and organizations are launching education initiatives based on open source, non-proprietary standards at no, or low, cost. These include a variety of offering agencies that target different constituencies, such as: iTunes U, TED Talks, Connexions, Bill Hammack’s EngineerGuy.com, Google Code U, PBS Teachers, and YouTube Edu.
To summarize the key characteristics of the emerging dynamic mosaic enabled by online technologies:
- there is increasing focus on the needs and expectations of the learner rather than the provider;
- platforms cross technologies and modalities;
- high quality (even world-class) lessons are offered for free; and
- data on user progress, time on task, and other key metrics are collected and in some cases analyzed in near real-time.
From Approved Solution to Dynamic Mosaic
I close by highlighting what’s missing in the evolution from the educational research mindset of the approved solution to the development of the dynamic mosaic.
In fact, I contend that nothing is missing. So far as I know, the educational research establishment had nothing to do with the development of online technologies. The technologies and capabilities that are beginning to be woven into the ever-developing mosaic of online education did not wait for educational research to give a thumbs-up that online education would or ought to work.
Of course, that’s not to say that the mosaic cannot be improved for future learners. But I will go out on a limb and speculate that the kind of educational research that’s been in search of the “approved solution” is rapidly becoming as obsolete as my old slide rule. When a provider like Salman Khan is collecting real time data on student learning during actual classroom activities, the need for rigorous and expensive experimental trials that produce inconclusive and difficult-to-understand results is obviated.
Some progress just doesn’t want to wait for research.
Bruning, R.H., Schraw, G.J., & Norby, M.M. (2011). Cognitive Psychology and Instruction (5th ed.). Boston: Pearson Education.
Conole, G., de Laat, M., Dillon, T., & Darby, J. (2008). ‘Disruptive technologies’, ‘pedagogical innovation’: What’s new? Findings from an in-depth study of students’ use and perception of technology. Computers and Education, 50(2), 511-524.
Cook, V. (2012). Learning everywhere, all the time. The Delta Kappa Gamma Bulletin, 78(3), 48-51.
Eisenhart, M. & Towne, L. (2003). Contestation and change in national policy on “scientifically based” education research. Educational Researcher, 32(7), 31-38.
Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of Qualitative Research (pp. 105–117). Thousand Oaks: Sage Publications.
Henn, S. (2012, January 23). Stanford takes online schooling to the next academic level. Retrieved from http://www.npr.org/blogs/alltechconsidered/2012/01/23/145645472/stanford-takes-online-schooling-to-the-next-academic-level
Marley, S.C. & Levin, J.R. (2011). When are prescriptive statements in educational research justified? Educational Psychology Review, 23(2), 197-206.
Mott, J. (2010). Envisioning the Post-LMS Era: The Open Learning Network. EDUCAUSE Quarterly, 33(1). Retrieved from http://www.educause.edu/EDUCAUSE+Quarterly/EDUCAUSEQuarterlyMagazineVolum/EnvisioningthePostLMSEraTheOpe/199389
Siemens, G. (2005, April 5). Connectivism: A Learning Theory for the Digital Age. Retrieved from http://www.elearnspace.org/Articles/connectivism.htm
Thompson, C. (2011, July 15). How Khan Academy is changing the rules of education. Wired, August 2011. Retrieved from http://www.wired.com/magazine/2011/07/ff_khan/all/1
Tu, C-H., Sujo-Montes, L., Yeh, C-J., Chan, J-Y., & Blocher, M. (2012). Personal Learning Environments & Open Network Learning Environments. TechTrends, May/June 2012. Retrieved from http://www.springerlink.com/content/36541j5782346770/
U.S. Government Accountability Office. (2010, July). Improved dissemination and timely product release would enhance the usefulness of the What Works Clearinghouse. (Publication No. GAO-10-644). Retrieved from: http://www.gao.gov/products/GAO-10-644
Viadero, D. (2006, September 26). ‘One stop’ research shop seen as slow to yield views that educators can use. Education Week. Retrieved from http://www.edweek.org/ew/articles/2006/09/27/05whatworks.h26.html?p
Whitehurst, G.J. (2003). Statement of Assistant Secretary Grover J. Whitehurst before the House Subcommittee on Labor/HHS/Education Appropriations on the FY 2004 budget request for the Institute of Education Sciences. Retrieved from http://www2.ed.gov/news/speeches/2003/03/03132003a.html.