8

Martin Weller

Editor’s Note: The following was originally published by Educause with the following citation:

Weller, M. (2018, July 2). 20 Years of EdTech. EDUCAUSE Review 53(4). Available at https://er.educause.edu/articles/2018/7/twenty-years-of-edtech 

 

An opinion often cited among educational technology (edtech) professionals is that theirs is a fast-changing field. This statement is sometimes used as a motivation (or veiled threat) to senior managers to embrace edtech because if they miss out now, it’ll be too late to catch up. However, amid this breathless attempt to keep abreast of new developments, the edtech field is remarkably poor at recording its own history or reflecting critically on its development. When Audrey Watters recently put out a request for recommended books on the history of educational technology,1 I couldn’t come up with any beyond the handful she already had listed. There are edtech books that often start with a historical chapter to set the current work in context, and there are edtech books that are now part of history, but there are very few edtech books dealing specifically with the field’s history. Maybe this reflects a lack of interest, as there has always been something of a year-zero mentality in the field. Edtech is also an area to which people come from other disciplines, so there is no shared set of concepts or history. This can be liberating but also infuriating. I’m sure I was not alone in emitting the occasional sigh when, during the MOOC rush of 2012, so many “new” discoveries about online learning were reported—discoveries that were already tired concepts in the edtech field.

The twentieth anniversary of EDUCAUSE presents an opportune moment to examine some of this history. There are different ways to do so, but for this article I have taken the straightforward approach of selecting a different educational technology, theory, or concept for each of the years from 1998 through 2018. This is not just an exercise in nostalgia (although comparing horror stories about metadata fields is enjoyable); it also allows us to examine what has changed, what remains the same, and what general patterns can be discerned from this history. Although the selection is largely a personal one, it should resonate here and there with most practitioners in the field. I have also been rather arbitrary in allocating a specific year: the year is not when a particular technology was invented but, rather, when it became—in my view—significant.

Looking back twenty years starts in 1998, when the web had reached a level of mainstream awareness. It was accessed through dial-up modems, and there was a general sense of puzzlement about what it would mean, both for society more generally and for higher education in particular. Some academics considered it to be a fad. One colleague dismissed my idea of a fully online course by declaring: “No one wants to study like that.” But the potential of the web for higher education was clear, even if the direction this would take over the next twenty years was unpredictable.

1998: Wikis

Perhaps more than any other technology, wikis embody the spirit of optimism and philosophy of the open web. The wiki—a web page that could be jointly edited by anyone—was a fundamental shift in how we related to the internet. The web democratized publishing, and the wiki made the process a collaborative, shared enterprise. In 1998 wikis were just breaking through. Ward Cunningham is credited with inventing them (and the term) in 1994. Wikis had their own markup language, which made them a bit technical to use, although later implementations such as Wikispaces made the process easier. Wikis encapsulated the promise of a dynamic, shared, respectful space—the result partly of the ethos behind them (after all, they were named after the Hawaiian word for quick) and partly of their technical infrastructure. Users can track edits, roll back versions, and monitor contributions. Accountability and transparency are built in.
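As a purely illustrative aside, the sketch below (invented class and field names, not any real wiki engine) shows how that transparency follows from the data structure: every edit, including a rollback, is kept as a new revision, so nothing is ever silently overwritten.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class Revision:
    author: str
    content: str
    timestamp: datetime = field(default_factory=datetime.utcnow)


class WikiPage:
    """Toy model of a wiki page: every edit is kept, so any version can be restored."""

    def __init__(self, title: str):
        self.title = title
        self.revisions: List[Revision] = []

    def edit(self, author: str, content: str) -> None:
        # Each edit appends a new revision rather than overwriting the page.
        self.revisions.append(Revision(author, content))

    def current(self) -> str:
        return self.revisions[-1].content if self.revisions else ""

    def rollback(self, author: str, steps: int = 1) -> None:
        # Reverting is itself just another edit that restores an earlier
        # revision, so the act of rolling back is recorded and attributable.
        target = self.revisions[-(steps + 1)]
        self.edit(author, target.content)

    def history(self):
        return [(r.timestamp, r.author) for r in self.revisions]


page = WikiPage("Sine wave")
page.edit("alice", "A sine wave describes smooth periodic oscillation.")
page.edit("bob", "A sine wave is a curve. Buy cheap watches!")  # vandalism
page.rollback("alice")                                          # restore Alice's version
print(page.current())
print(page.history())
```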

With Wikipedia now the default global knowledge source, with over 5.5 million articles in English alone, it would seem churlish to bemoan that wikis failed to fulfil their potential. Nevertheless, that statement is probably true in terms of the use of wikis in teaching. For instance, why aren’t MOOCs conducted in wikis? It’s not necessarily that wikis as a technology have not fully realized their potential. Rather, the approach to edtech they represent—cooperative and participatory—has been replaced by a broadcast, commercial publisher model.

1999: E-Learning

E-learning had been in use as a term for some time by 1999, but the rise of the web and the addition of the “e” prefix to everything saw it come to prominence. By 1999, e-learning was knocking on the door of, if not already becoming part of, the mainstream. Conventional and distance colleges and universities were adopting e-learning programs, often wherever the target audience was willing to learn this way. One of the interesting aspects of e-learning was the consideration of costs. The belief was that e-learning would be cheaper than traditional distance-education courses. It wasn’t, although e-learning did result in a shift in costs: institutions could spend less in production (by not using physical resources and by reusing material), but there was a consequent increase in presentation costs (from support costs and a more rapid updating cycle). This cost argument continues to recur and was a significant driver for MOOCs (see year 2012).

E-learning set the framework for the next decade in terms of technology, standards, and approaches—a period that represents, in some respects, the golden age of e-learning.

2000: Learning Objects

E-learning was accompanied by new approaches, often derived from computer science. One of these was learning objects. The concept can be seen as arising from programming: object-oriented programming had demonstrated the benefits of reusable, clearly defined pieces of functional code that could be implemented across multiple programs. Learning objects seemed like a logical step in applying this model to e-learning. As Stephen Downes argued:

There are thousands of colleges and universities, each of which teaches, for example, a course in introductory trigonometry. Each such trigonometry course in each of these institutions describes, for example, the sine wave function. . . .

Now for the premise: the world does not need thousands of similar descriptions of sine wave functions available online. Rather, what the world needs is one, or maybe a dozen at most, descriptions of sine wave functions available online. The reasons are manifest. If some educational content, such as a description of sine wave functions, is available online, then it is available worldwide.2

This made a lot of sense then, and it still makes a lot of sense today. A learning object was roughly defined as “a digitized entity which can be used, reused or referenced during technology supported learning.”3 But learning objects never really took off, despite the compelling rationale for their existence. The failure to make them a reality is instructive for all in the edtech field. They failed to achieve wide-scale adoption for a number of reasons, including over-engineering, debates around definitions, the reusability paradox,4 and the fact that they were an alien concept for many educators who were already overloaded. Nevertheless, the core idea of learning objects would resurface in different guises.
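Downes’s argument maps neatly onto the programming analogy. The sketch below is a loose illustration with invented fields rather than any real metadata standard such as IEEE LOM (whose complexity was part of the over-engineering problem): describe a resource once, then reference it from many courses, just as a programmer reuses a well-defined object.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class LearningObject:
    """Illustrative (not standards-conformant) learning object: a self-contained
    chunk of content plus the metadata needed to find and reuse it."""
    identifier: str
    title: str
    description: str
    keywords: List[str]
    content_url: str
    licence: str = "CC BY 4.0"


@dataclass
class Course:
    title: str
    objects: List[LearningObject] = field(default_factory=list)


# One description of the sine function, written once...
sine = LearningObject(
    identifier="lo-0042",
    title="The sine wave function",
    description="An explanation of the sine wave with worked examples.",
    keywords=["trigonometry", "sine", "periodic functions"],
    content_url="https://example.org/objects/sine-wave",  # placeholder URL
)

# ...and reused, rather than rewritten, in many courses.
intro_trig = Course("Introductory Trigonometry", [sine])
signal_processing = Course("Signals and Systems", [sine])
```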

2001: E-Learning Standards

By the turn of the millennium, e-learning was seeing significant interest, resulting in a necessary concentration of efforts: platforms that could be easily set up to run e-learning programs; a more professional approach to the creation of e-learning content; the establishment of evidence; and initiatives to describe and share tools and content. Enter e-learning standards and, in particular, IMS. This was the body that set about developing standards to describe content, assessment tools, courses, and, more ambitiously, learning design. Perhaps the most significant standard was SCORM, which went on to become an industry standard for specifying content that could be used in virtual learning environments (VLEs). Prior to this, considerable overhead was involved in moving content from one platform to another.

E-learning standards are an interesting case study in edtech. Good standards retreat into the background and just help things work, as SCORM has done. But other standards have failed in some of their ambitions to create easily assembled, discoverable, plug-and-play content. So while the standards community continues to work, it has encountered problems with vendors5 and has been surpassed in popular usage by the less specific but more human description and sharing approach that underpinned the web 2.0 explosion (see year 2006).

2002: Open Educational Resources (OER)

Now that the foundations of modern edtech had been laid, the more interesting developments could commence. In 2001, MIT announced its OpenCourseWare initiative, marking the initiation of the OER movement. But it was in 2002 that the first OER were released and that people began to understand licenses. MIT’s goal was to make all the learning materials used in its 1,800 courses available via the internet, where the resources could be used and repurposed as desired by others, without charge.

As with learning objects, the roots of OER lie in the software approach, in particular open-source software. The open-source movement can be seen as creating the context within which open education could flourish, partly by analogy and partly by establishing a precedent. But there is also a very direct link, via David Wiley, through the development of licenses.6 In 1998 Wiley became interested in developing an open license for educational content, and he directly contacted pioneers in the open-source world. Out of this came the Open Content License (OCL); working with publishers, he then developed the Open Publication License (OPL) the following year.

The OPL proved to be one of the key components, along with the Free Software Foundation’s GNU license, of the Creative Commons licenses, developed by Larry Lessig and others in 2002. These went on to become essential in the open-education movement. The simple licenses in Creative Commons allowed users to easily share resources, and OER became a global movement. Although OER have not transformed higher education in quite the way many envisaged in 2002, and many projects have floundered once funding ended, the OER idea continues to be relevant, especially through open textbooks and open educational practice (OEP).

The general lesson from OER is that they succeeded where learning objects failed because they tapped into existing practice (and open textbooks doubly so). The concept of using a license to openly share educational content is alien enough on its own, without all the accompanying standards and concepts associated with learning objects. Patience is also required: educational transformation is a slow burn.

2003: Blogs

Blogging developed alongside the more education-specific developments and was then co-opted into edtech. In so doing, it foreshadowed much of the web 2.0 developments, with which it is often bundled.

Blogging was a very obvious extension of the web. Once people realized that anyone could publish on the web, they inevitably started to publish diaries, journals, and regularly updated resources. Blogging emerged from a simple version of “here’s my online journal” when syndication became easy to implement. The advent of feeds, and particularly the universal standard RSS, provided a means for readers to subscribe to anyone’s blog and receive regular updates. This was as revolutionary as the liberation that web publishing initially provided. If the web made everyone a publisher, RSS made everyone a distributor.
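As a small illustration of what subscription meant in practice, the sketch below assumes the widely used third-party feedparser library and placeholder feed URLs: the reader polls each feed and pulls in new posts, rather than revisiting every site by hand.

```python
# Minimal sketch of RSS-style syndication using the third-party feedparser
# library (pip install feedparser). The feed URLs are placeholders.
import feedparser

subscriptions = [
    "https://example.org/edtech-blog/feed",
    "https://example.net/another-blog/rss",
]

for url in subscriptions:
    feed = feedparser.parse(url)
    print(feed.feed.get("title", url))
    # Each entry is one post; the subscriber receives updates automatically
    # instead of checking every site by hand.
    for entry in feed.entries[:5]:
        print("  -", entry.get("title"), entry.get("link"))
```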

People swiftly moved beyond journals. After all, what area isn’t impacted by the ability to create content freely, whenever you want, and have it immediately distributed to your audience? Blogs and RSS-type distribution were akin to giving everyone superhero powers. It’s not surprising that in 2018, we’re still wrestling with the implications. No other edtech has continued to develop and solidify (as the proliferation of WordPress sites attests) while also remaining so full of potential. For almost every edtech that comes along—e-portfolios, VLEs, MOOCs, OER, social media—I find myself thinking that a blog version would be better. Nothing develops and anchors an online identity quite like a blog.

2004: The LMS

The learning management system (LMS) offered an enterprise solution for e-learning providers. It stands as the central e-learning technology. Prior to the LMS, e-learning provision was realized through a variety of tools: a bulletin board for communications; a content-management system; and/or home-created web pages. The quality of these solutions was variable, often relying on the enthusiasm of one particular devotee. The combination of tools also varied across any one higher education institution, with the medical school adopting one set of tools, the engineering school another, the humanities school yet another, and so on.

As e-learning became more integral to both blended and fully online courses, this variability in tools and their reliability became a more critical issue. The LMS offered a neat collection of the most popular tools, any one of which might not be as good as the best-of-breed specific tool but was good enough. The LMS allowed for a single, enterprise solution with the associated training, technical support, and helpdesk. The advantage was that e-learning could be implemented more quickly across an entire institution. However, over time this has come to be seen more as a Faustian pact, as institutions found themselves locked into contracts with vendors, most famously with providers (e.g., Blackboard) that attempted to file restrictive patents.7 More problematically, the LMS has become the only route for delivering e-learning in many institutions, with a consequent loss of expertise and innovation.8

2005: Video

YouTube was founded in 2005, which seems surprisingly recent, so much has it become a part of the cultural landscape. As internet access began to improve and compression techniques along with it, the viability of streaming video had reached a realistic point for many by 2005. YouTube and other video-sharing services flourished, and the realization that anyone could make a video and share it easily was the next step in the broadcast democratization that had begun with HTML. While the use of video in education was often restricted to broadcast, this was a further development on the learning objects idea. As the success of the Khan Academy illustrates, simple video explanations of key concepts—explanations that can be shared and embedded easily—met a great educational demand. However, colleges and universities for the most part still do not assess students on their use of video. In some disciplines, such as the arts, this is more common, but in 2018, text remains the dominant communication form in education. Although courses such as DS106 have innovated in this area,9 many students will go through their education without being required to produce a video as a form of assessment. We need to fully develop the critical structures for video in order for it to fulfil its educational potential, as we have already done for text.

2006: Web 2.0

The “web 2.0” tag gained popularity from Tim O’Reilly’s use of it at the first Web 2.0 Conference in 2004, but not until around 2006 did the term begin to penetrate educational usage, with Bryan Alexander highlighting the relevance of the social and open aspects of its application.10 As a practical term, “web 2.0” gathered together the user-generated content services, including YouTube, Flickr, and blogs. But it was more than just a useful label for a set of technologies; it seemed to capture a new mindset in our relationship with the internet. After O’Reilly set out the seven principles of web 2.0, the boom took off.11

Just as the fascination with e-learning had seen every possible term prefixed with “e,” so the addition of “2.0” to any educational term made it fashionable. But soon the boom was followed by the consequent bust (a business plan was needed after all), and problems with some of the core concepts meant that by 2009, web 2.0 was being declared dead.12 Inherent in much of the web 2.0 approach was a free service, which inevitably led to data being the key source for revenue and gave rise to the oft-quoted line “If you’re not paying for it, you’re the product being sold.”13 As web 2.0 morphed into social media, the inherent issues around free speech and offensive behavior came to the fore. In educational terms, this raises issues about the duty of care for students, the recognition of academic labor, and the treatment of marginalized groups. The utopia of web 2.0 turned out to be one with scant regard for employment laws and largely reserved for “tech bros.”

Nevertheless, at the time, web 2.0 posed a fundamental question as to how education conducts many of its cherished processes. Peer review, publishing, ascribing quality—all of these were founded on what David Weinberger referred to as filtering on the way in rather than on the way out.14 While the quality of much online content was poor, there was always an aspect of what was “good enough” for any learner. With the demise of the optimism around web 2.0, many of the accompanying issues it raised for higher education have largely been forgotten—before they were even addressed. For instance, while the open repository for physics publications (arXiv) and open-access methods for publication became mainstream, the journal system is still dominant, largely based on double-blind, anonymous peer review. Integrating into the mainstream the participatory culture that web 2.0 brought to the fore remains both a challenge and an opportunity for higher education.

2007: Second Life and Virtual Worlds

Online virtual worlds and Second Life had been around for some time, with Second Life launching in 2003, but they began to see an upsurge in popularity around 2007. Colleges and universities began creating their own islands, and whole courses were delivered through Second Life. While the virtual worlds had strong devotees, they didn’t gain as much traction with students as envisaged, and most Second Life campuses are now deserted. Partly this was a result of a lack of imagination: they were often used to re-create an online lecture. The professor may have been represented by a seven-foot-tall purple cat in that lecture, but it was a lecture nonetheless. Virtual worlds also didn’t manage to shrug off their nerdy, role-playing origins, and many users felt an aversion to this. The worlds could be glitchy as well, which meant that many people never made it off Orientation Island in Second Life, for example. However, with the success of games such as Minecraft and Pokémon Go, more robust technology, and more widespread familiarity with avatars and gaming, virtual worlds for learning may be one of those technologies due for a comeback.

2008: E-Portfolios

Like learning objects, e-portfolios were backed by a sound idea. The e-portfolio was a place to store all the evidence a learner gathered to exhibit learning, both formal and informal, in order to support lifelong learning and career development. But like learning objects—and despite academic interest and a lot of investment in technology and standards—e-portfolios did not become the standard form of assessment as proposed. Many of their problems were similar to those that beleaguered learning objects, including overcomplicated software, an institutional rather than a user focus, and a lack of accompanying pedagogical change. Although e-portfolio tools remain pertinent for many subjects, particularly vocational ones, for many students owning their own domain and blog remains a better route to establishing a lifelong digital identity. It is perhaps telling that although many practitioners in higher education maintain blogs, asking to see a colleague’s e-portfolio is likely to be met with a blank response.

2009: Twitter and Social Media

Founded in 2006, Twitter had moved well beyond the tech-enthusiast bubble by 2009 but had yet to become what we know it as today: a tool for wreaking political mayhem. With the trolls, bots, daily outrages, and generally toxic behavior not only on Twitter but also on Facebook and other social media, it’s difficult to recall the optimism that we once held for these technologies. In 2009, though, the ability to make global connections, to easily cross disciplines, and to engage in meaningful discussion all before breakfast was revolutionary. There was also a democratizing effect: formal academic status was not significant, since users were judged on the value of their contributions to the network. In educational terms, social media has done much to change the nature of the relationship between academics, students, and the institution. Even though the negative aspects are now undeniable, some of that early promise remains. What we are now wrestling with is the paradox of social media: the fact that its negatives and its positives exist simultaneously.

2010: Connectivism

The early enthusiasm for e-learning saw a number of pedagogies resurrected or adopted to meet the new potential of the digital, networked context. Constructivism, problem-based learning, and resource-based learning all saw renewed interest as educators sought to harness the possibility of abundant content and networked learners. Yet connectivism, as proposed by George Siemens and Stephen Downes in 2004–2005, could lay claim to being the first internet-native learning theory. Siemens defined connectivism as “the integration of principles explored by chaos, network, and complexity and self-organization theories. Learning is a process that occurs within nebulous environments of shifting core elements—not entirely under the control of the individual.”15 Further investigating the possibility of networked learning led to the creation of the early MOOCs, including influential open courses by Downes and Siemens in 2008 and 2009.16 Pinning down exactly what connectivism was could be difficult, but it represented an attempt to rethink how learning is best realized given the new realities of a digital, networked, open environment, as opposed to forcing technology into the service of existing practices. It also provided the basis for MOOCs, although the approach they eventually adopted was far removed from connectivism (see 2012).

2011: PLE

Personal Learning Environments (PLEs) were an outcome of the proliferation of services that suddenly became available following the web 2.0 boom. Learners and educators began to gather a set of tools to realize a number of functions. In edtech, the conversation turned to whether these tools could be somehow “glued” together in terms of data. Instead of talking about one LMS provided to all students, we were discussing how each learner had his/her own particular blend of tools. Yet beyond a plethora of spoke diagrams, with each showing a different collection of icons, the PLE concept didn’t really develop after its peak in 2011. The problem was that passing along data was not a trivial task, and we soon became wary about applications that shared data (although perhaps not wary enough, given recent news regarding Cambridge Analytica17). Also, providing a uniform offering and support for learners was difficult when they were all using different tools. The focus shifted from a personalized set of tools to a personalized set of resources, and in recent years this has become the goal of personalization.

2012: MOOCs

Inevitably, 2012 will be seen as the year of MOOCs.18 In many ways the MOOC phenomenon can be viewed as the combination of several preceding technologies: some of the open approach of OER, the application of video, the experimentation of connectivism, and the revolutionary hype of web 2.0. Clay Shirky mistakenly proclaimed that MOOCs were the internet happening to education.19 If he’d been paying attention, he would have seen that this had been happening for some time. Rather, MOOCs were Silicon Valley happening to education. Once Stanford Professor Sebastian Thrun’s course had attracted over 100,000 learners and almost as many headlines,20 the venture capitalist investment flooded in.

Much has been written about MOOCs, more than I can do justice to here. They are a case study still in the making. The raised profile of open education and online learning caused by MOOCs may be beneficial in the long run, but the MOOC hype (only ten global providers of higher education by 2022?)21 may be equally detrimental. The edtech field needs to learn how to balance these developments. Millions of learners accessing high-quality material online is a positive, but the rush by colleges and universities to enter into prohibitive contracts, outsource expertise, and undermine their own staff has long-term consequences as well.

2013: Open Textbooks

If MOOCs were the glamorous side of open education, all breathless headlines and predictions, open textbooks were the practical, even dowdy, application. An extension of the OER movement, and particularly pertinent in the United States and Canada, open textbooks provided openly licensed, purpose-written textbooks, free in their digital version. The cost of commercial textbooks provided a motivation for adoption, and the switching of costs from purchase to production offers a viable model. As with LMSs, open textbooks offer an easy route to adoption. Exploration around open pedagogy, co-creation with students, and diversification of the curriculum all point to a potentially rich, open, edtech ecosystem—with open textbooks at the center.22 However, the possible drawback is that like LMSs, open textbooks may not become a stepping-stone on the way to a more innovative, varied teaching approach but, rather, may become an end point in themselves.

2014: Learning Analytics

Data, data, data. It’s the new oil and the new driver of capitalism, war, politics. So inevitably its role in education would come to the fore. Interest in analytics is driven by the increased amount of time that students spend in online learning environments, particularly LMSs and MOOCs. The positive side of learning analytics is that for distance education, it provides the equivalent of responding to discreet signals in the face-to-face environment: the puzzled expression, the yawn, or the whispering between students looking for clarity. Every good face-to-face educator will respond to these signals and adjust his/her behavior. If in an online environment, an educator sees that students are repeatedly going back to a resource, that might indicate a similar need to adapt behavior. The downsides are that learning analytics can reduce students to data and that ownership over the data becomes a commodity in itself. The use of analytics has only just begun. The edtech field needs to avoid the mistakes of data capitalism; it should embed learner agency and ethics in the use of data, and it should deploy that data sparingly.23
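As a concrete, entirely invented illustration of that signal, a few lines like the following could flag resources that multiple students keep returning to. Real learning analytics pipelines are far more elaborate, and the caveats about agency and ethics above apply to anything built along these lines.

```python
# Purely illustrative sketch (invented data, invented threshold) of the
# "students keep returning to the same resource" signal described above.
from collections import Counter

# Each record: (student, resource) for one page view in the VLE.
access_log = [
    ("s1", "week3-notes"), ("s1", "week3-notes"), ("s1", "week3-notes"),
    ("s2", "week3-notes"), ("s2", "week3-notes"),
    ("s2", "week1-intro"),
    ("s3", "week3-notes"), ("s3", "week3-notes"), ("s3", "week3-notes"),
]

views = Counter(access_log)  # views per (student, resource) pair

# Count how many students revisited each resource more than twice.
repeat_visitors = Counter(
    resource for (student, resource), n in views.items() if n > 2
)

for resource, students in repeat_visitors.most_common():
    if students >= 2:  # arbitrary threshold, for illustration only
        print(f"{resource}: {students} students keep returning; worth a second look?")
```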

2015: Digital Badges

Providing digital badges for achievements that can be verified and linked to evidence started with Mozilla’s open badge infrastructure in 2011. Like many other edtech developments, digital badges had an initial flurry of interest from devotees but then settled into a pattern of more laborious long-term acceptance. They represent a combination of key challenges for educational technology: realizing easy-to-use, scalable technology; developing social awareness that gives them currency; and providing the policy and support structures that make them valuable.

Of these challenges, only the first relates directly to technology; the more substantial ones relate to awareness and legitimacy. For example, if employers or institutions come to widely accept and value digital badges, then they will gain credence with learners, creating a virtuous circle. There is some movement in this area, particularly with regard to staff development within organizations and often linked with MOOCs.24 Perhaps more interesting is what happens when educators design for badges, breaking courses down into smaller chunks with associated recognition, and when communities of practice give badges value. Currently, their use is at an indeterminate stage—neither a failed enterprise nor the mainstream adoption once envisaged.
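To make that first, technical challenge concrete: at heart a badge is a small, verifiable record linking an achievement to its evidence. The sketch below is a loose illustration with invented field names, not the actual Open Badges data model.

```python
# Loose, hypothetical sketch of the kind of record a verifiable badge involves.
# Field names are invented for illustration and are NOT the Open Badges specification.
import json

badge_assertion = {
    "badge": {
        "name": "Peer Reviewer",
        "criteria": "Completed three moderated peer reviews in the course forum.",
        "issuer": "https://example.edu",           # placeholder issuer
    },
    "recipient": "learner@example.org",            # placeholder recipient
    "evidence": "https://example.edu/reviews/123", # link to the work itself
    "issued_on": "2018-07-02",
}

print(json.dumps(badge_assertion, indent=2))
```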

2016: The Return of AI

Artificial intelligence (AI) was the focus of attention in education in the 1980s and 1990s with the possible development of intelligent tutoring systems. The initial enthusiasm for these systems waned, mainly because they worked for only very limited, tightly specified domains. Designers needed to predict the types of errors people would make in order to provide advice on how to rectify those errors. And in many subjects (the humanities in particular), people are very creative in the errors they make, and, more significantly, what constitutes the right answer is less well defined.
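That design constraint is easier to see in a toy example. The following sketch, with a single invented question, shows why such systems suited narrow domains: the tutor can respond helpfully only to the mistakes its author anticipated.

```python
# Toy illustration of the 1980s/90s intelligent-tutoring approach: the designer
# enumerates anticipated wrong answers for one narrow task and maps each to
# canned advice. Everything here is invented for illustration.

def diagnose_fraction_addition(answer: str) -> str:
    # Question: 1/2 + 1/3 = ?
    known_errors = {
        "2/5": "You added numerators and denominators separately; find a common denominator first.",
        "2/6": "It looks like you used 6 as the denominator but forgot to convert 1/2 to 3/6.",
    }
    if answer == "5/6":
        return "Correct!"
    # The system can only advise on mistakes its designer anticipated;
    # anything else falls through to a generic response.
    return known_errors.get(answer, "I can't tell what went wrong; try showing your working.")


print(diagnose_fraction_addition("2/5"))
print(diagnose_fraction_addition("7/8"))
```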

Interest in AI faded as interest in the web and related technologies increased, but it has resurfaced in the past five years or so. What has changed over this intervening period is the power of computation. This helps address some of the complexity because multiple possibilities and probabilities can be accommodated. Here we see a recurring theme in edtech: nothing changes while, simultaneously, everything changes. AI has definitely improved since the 1990s, but some of its fundamental problems remain. It always seems to be a technology that is just about to break out of the box.

More significant than the technological issues are the ethical ones. As Audrey Watters contends, AI is ideological.25 The concern about AI is not that it won’t deliver on the promise held forth by its advocates but, rather, that someday it will. And then the assumptions embedded in code will shape how education is realized, and if learners don’t fit that conceptual model, they will find themselves outside of the area in which compassion will allow a human to alter or intervene. Perhaps the greatest contribution of AI will be to make us realize how important people truly are in the education system.

2017: Blockchain

Of all the technologies listed here, blockchain is perhaps the most perplexing, both in how it works and in why it is even in this list. In 2016 several people independently approached me about blockchain—the distributed, secure ledger for keeping the records that underpin Bitcoin. The question was always the same: “Could we apply this in education somehow?” The imperative seemed to be that blockchain was a cool technology, and therefore there must be an educational application. It could provide a means of recording achievements and bringing together large and small, formal and informal, outputs and recognition.26

Viewed in this way, blockchain is attempting to bring together several issues and technologies: e-portfolios, with the aim to provide an individual, portable record of educational achievement; digital badges, with the intention to recognize informal learning; MOOCs and OER, with the desire to offer varied informal learning opportunities; PLEs and personalized learning, with the idea to focus more on the individual than on an institution. A personal, secure, permanent, and portable ledger may well be the ring to bind all these together. However, the history of these technologies should also be a warning for blockchain enthusiasts. With e-portfolios, for instance, even when there is a clear connection to educational practice, adoption can be slow, requiring many other components to fall into place. In 2018 even the relatively conservative and familiar edtech of open textbooks is far from being broadly accepted. Attempting to convince educators that a complex technology might solve a problem they don’t think they have is therefore unlikely to meet with widespread support.
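For readers unfamiliar with the mechanics, the toy sketch below shows only the core ledger idea, with invented record names and none of the distribution or consensus that makes a real blockchain: each achievement record is chained to the previous one by a hash, so earlier records cannot be quietly rewritten without breaking every later link.

```python
# Toy sketch of the hash-linked ledger idea, not a real blockchain: no
# distribution, no consensus, just records chained by hashes so that tampering
# with any earlier achievement breaks every later link.
import hashlib
import json


def add_record(ledger, record):
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    ledger.append({
        "record": record,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })


def verify(ledger):
    prev_hash = "0" * 64
    for entry in ledger:
        payload = json.dumps({"record": entry["record"], "prev": prev_hash}, sort_keys=True)
        if entry["prev"] != prev_hash or entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True


ledger = []
add_record(ledger, {"learner": "a.n.other", "achievement": "Open course: Introductory Trigonometry"})
add_record(ledger, {"learner": "a.n.other", "achievement": "Badge: Peer Reviewer"})
print(verify(ledger))                                        # True
ledger[0]["record"]["achievement"] = "PhD in Astrophysics"   # tampering...
print(verify(ledger))                                        # ...is detectable: False
```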

If blockchain is to realize any success, it will need to work almost unnoticed; it will succeed only if people don’t know they’re using blockchain. Nevertheless, many who propose blockchain display a definite evangelist’s zeal. They desire its adoption as an end goal in itself, rather than as an appropriate solution to a specific problem.

2018: TBD

We’re only halfway through 2018, so it would be premature to select a technology, theory, or concept for the year. But one aspect worth considering is what might be termed the dark side of edtech. Given the use of social media for extremism, data scares such as the Facebook breach by Cambridge Analytica, anxieties about Russian bots, concerted online abuse, and increased data surveillance, the unbridled optimism that technology will create an educational utopia now seems naïve. It is not just informed critics such as Michael Caulfield27 who are warning of the dangers of overreliance on and trust in edtech; the implicit problems are now apparent to almost everyone in the field. In 2018, edtech stands on the brink of a new era, one that has a substantial underpinning of technology but that needs to build on the ethical, practical, and conceptual frameworks that combat the nefarious applications of technology.

Conclusion

Obviously, one or two paragraphs cannot do justice to technologies that require several books each, and my list has undoubtedly omitted several important developments (e.g., gaming, edupunk, automatic assessment, virtual reality, and Google might all be contenders). However, from this brief overview, a number of themes can be extracted to help inform the next twenty years.

The first of these is that in edtech, the tech part of the phrase walks taller. In my list, most of the innovations are technologies. Sometimes these come with strong accompanying educational frameworks, but other times they are a technology seeking an application. This is undoubtedly a function of my having lived through the first flush of the digital revolution. A future list may be better balanced with conceptual frameworks, pedagogies, and social movements.

Second, several ideas recur, with increasing success in their adoption. Learning objects were the first attempt at making teaching content reusable, and even though they weren’t successful, the ideas they generated led to OER, which begat open textbooks. So, those who have been in the edtech field for a while should be wary of dismissing an idea by saying: “We tried that; it didn’t work.” Similarly, those proposing a new idea need to understand why previous attempts failed.

Third, technology outside of education has consistently been co-opted for educational purposes. This has met with varying degrees of success. Blogs, for instance, are an ideal educational technology, whereas Second Life never reached sustainable adoption. The popularity of—or the number of Wired headlines about—a technology does not automatically make it a useful technology for education.

This leads into the last point: education is a complex, highly interdependent system. It is not like the banking, record, or media industries. The simple transfer of technology from other sectors often fails to appreciate the sociocultural context in which education operates. Generally, only those technologies that directly offer an improved, or alternative, means of addressing the core functions of education get adopted. These core functions can be summarized as content, delivery and recognition.28 OER, LMS, and online assessment all directly map onto these functions. Yet even when there is a clear link, such as between e-portfolios and recognition, the required cultural shifts can be more significant. Equally, edtech has frequently failed to address the social impact of advocating for or implementing a technology beyond the higher education sector. MOOCs, learning analytics, AI, social media—the widespread adoption of these technologies leads to social implications that higher education has been guilty of ignoring. The next phase of edtech should be framed more as a conversation about the specific needs of higher education and the responsibilities of technology adoption.

When we look back twenty years, the picture is mixed. Clearly, a rapid and fundamental shift in higher education practice has taken place, driven by technology adoption. Yet at the same time, nothing much has changed, and many edtech developments have failed to have significant impact. Perhaps the overall conclusion, then, is that edtech is not a game for the impatient.

Notes

  1. Audrey Watters, “What Are the Best Books about the History of Education Technology?” Hack Education (blog), April 5, 2018.
  2. Stephen Downes, “Learning Objects: Resources for Distance Education Worldwide,” International Review of Research in Open and Distributed Learning 2, no. 1 (2001).
  3. Robin Mason and D. Rehak, “Keeping the Learning in Learning Objects,” in Allison Littlejohn, ed., Reusing Online Resources: A Sustainable Approach to E-Learning (London: Kogan Page, 2003). 
  4. David Wiley, “The Reusability Paradox” (August 2002). 
  5. See, e.g., Michael Feldstein, “How and Why the IMS Failed with LTI 2.0,” e-Literate (blog), November 6, 2017. 
  6. David Wiley, keynote address, OER18, Bristol, UK, April 19, 2018. 
  7. Scott Jaschik, “Blackboard Patents Challenged,” Inside Higher Ed, December 1, 2006. 
  8. Jim Groom and Brian Lamb, “Reclaiming Innovation,” EDUCAUSE Review 49, no. 3 (May/June 2014). 
  9. Alan Levine, “ds106: Not a Course, Not Like Any MOOC,” EDUCAUSE Review 48, no. 1 (January/February 2013). 
  10. Tim O’Reilly, “What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software,” O’Reilly (website), September 30, 2005; Bryan Alexander, “Web 2.0: A New Wave of Innovation for Teaching and Learning?” EDUCAUSE Review 41, no. 2 (March/April 2006). 
  11. O’Reilly, “What Is Web 2.0.” 
  12. Robin Wauters, “The Death of ‘Web 2.0,’” TechCrunch, February 14, 2009. 
  13. Jason Fitzpatrick, “If You’re Not Paying for It; You’re the Product,” Lifehacker, November 23, 2010.
  14. David Weinberger, Everything Is Miscellaneous: The Power of the New Digital Disorder (New York: Times Books, 2007). 
  15. George Siemens, “Connectivism: A Learning Theory for the Digital Age,” elearnspace (blog), December 12, 2004. See also Stephen Downes, “An Introduction to Connective Knowledge,” December 22, 2005, and “What Connectivism Is,” February 5, 2007. 
  16. George Siemens, “What Is the Theory That Underpins Our MOOCs?,” elearnspace (blog), June 3, 2012. 
  17. Patrick Greenfield, “The Cambridge Analytica Files: The Story So Far,” The Guardian, March 25, 2018. 
  18. Laura Pappano, “The Year of the MOOC,” New York Times, November 2, 2012. 
  19. Clay Shirky, “Napster, Udacity, and the Academy,” Clay Shirky (blog), November 12, 2012. 
  20. Anne Raith, “Stanford for Everyone: More Than 120,000 Enroll in Free Classes,” KQED News, August 23, 2011. 
  21. Steven Leckart, “The Stanford Education Experiment Could Change Higher Learning Forever,” Wired, March 20, 2012. 
  22. Robin DeRosa and Scott Robison, “From OER to Open Pedagogy: Harnessing the Power of Open,” in Rajiv S. Jhangiani and Robert Biswas-Diener, eds., Open: The Philosophy and Practices That Are Revolutionizing Education and Science (London: Ubiquity Press, 2017).
  23. Sharon Slade and Paul Prinsloo, “Learning Analytics: Ethical Issues and Dilemmas,” American Behavioral Scientist 57, no. 10 (2013). 
  24. Abby Jackson, “Digital Badges Are the Newest Effort to Help Employees Stave Off the Robots—and Major Companies Are Getting Onboard,” Business Insider, September 28, 2017. 
  25. Audrey Watters, “AI Is Ideological,” New Internationalist, November 1, 2017. 
  26. Alexander Grech and Anthony F. Camilleri, Blockchain in Education, JRC Science for Policy Report (Luxembourg: Publications Office of the European Union, 2017). 
  27. Michael A. Caulfield, Web Literacy for Student Fact-Checkers, January 8, 2017. 
  28. Anant Agarwal, “Where Higher Education Is Headed in the 21st Century: Unbundling the Clock, Curriculum, and Credential,” The Times of India, May 12, 2016. 
Please complete this short survey to provide feedback on this chapter: http://bit.ly/20yearsEdTech

Dr. Weller is Professor of Educational Technology at the Open University. His interests are in digital scholarship, open education, and the impact of new technologies. He has written some books, including The Digital Scholar and Battle for Open, which are available under an open access licence. He is also co-editor of the open access journal JIME.

He is Director of the OER Research Hub and the ICDE Chair in OER. He regularly provides workshops and keynote talks on the use of social media by academics, open education, and online learning. He blogs at http://blog.edtechie.net/about/.

 


License


Foundations of Learning and Instructional Design Technology Copyright © 2018 by Richard E. West is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
