Personal learning environment – my space?

Personal learning environments (PLEs) are simply the collection of applications, websites and technologies that we use for studying. Because I also learn from people, I have incorporated my personal learning network (PLN) into my PLE. It changes all the time, and this version was created a couple of years ago; if I were to redraw it today I would make Twitter a much bigger part of it and include Moodle too.

A representation of my PLE (mixed with my PLN)

My PLE also includes things which I am not representing in this image, because they don’t have icons. For example, I am typing on a Chromebook, and this little notepad computer has become the place where I study most of the time. I don’t write assessments on it and I can’t use Mendeley, but for reading, making notes and searching it is great and very portable. The most important thing for me, though, is that I am not at my desk: if I were at my desk I would be worrying about work rather than my studies. So I do think that PLEs need to include a sense of the physical environment as well as the technological one.

When I first did this exercise I looked at a lot of other people’s PLEs and saved their images to Pinterest. Pinterest then became more interesting as a space for keeping diagrams, images of other research topics – as well as a shopping wishlist!


So where are PLEs headed?

The very nature of PLEs is that they are fluid: the applications and technologies will change as our needs change. At the moment, for example, I am using Twitter much more than I have done in the past. Partly this is because it is encouraged by the course I am taking (MA ODE), and many of the students are using the #H800 hashtag to support each other and share experiences. It does make me wonder, though, whether Twitter is really part of my PLE at all for this module, or whether it has actually been usurped by the module team. However, because I still follow many other people who constantly introduce me to interesting resources and material, I think I can be relaxed about this.

I have also moved from Blogger to hosting this blog in my own WordPress environment. Since I learn by reflecting, and my blog is where I reflect, it also needs to be included in my PLE. I purposefully chose to keep my blog away from the OU’s hosting service (still part of Moodle) because I wanted to use my blog more openly and, ultimately, to attract an audience. I don’t think that happens via the university’s hosted blog service; I certainly hardly ever read any blogs there. At the same time, I recognise that for students who don’t want to start their own account anywhere else, it is more convenient to satisfy course requirements by taking the simplest route.


Students should be free to make choices and to work together. Whether those choices are free of influence is a different matter, and probably one that will become more interesting to examine in the future. At the moment I think we are in a settling-in period: we are getting used to incorporating different technologies into our learning as students and our teaching as teachers. Only when those technologies are embedded will we really be able to see the effect they have had. There is a tension between innovation and experimentation on the one hand, and giving students a good learning experience on the other.

I learnt this the hard way when I used a beta version of App Inventor with some students on a GCSE project; unfortunately the hosting of the application was changed halfway through their project, which caused a few problems. I had assumed that something hosted by Google would be more stable – now I know better. Considering the stability of any technology being incorporated into an assessment has to be a priority. Although everything turned out okay in the end, it was unnecessarily stressful at the time. When I was teaching, my approach was always to stay ahead of the curve with technologies, and I think my students appreciated getting to try out new things; it often made the tasks they needed to do fresh and exciting. But sometimes, as is the way with all technologies, there were delays and frustrations too.

This shouldn’t, however, ever stop us from assessing new technologies in order to find fresh ways to approach learning and teaching.

Thoughts about video as a learning technology

I am considering the use of video in the context of tutoring a STEM subject on a higher education graduate course. There are several ways students encounter video: as part of their module materials; as an activity where they are required to produce a video themselves; as recordings of online tutorials they were unable to attend (or want to review); and, finally, as tutor-created videos.

It is likely that students will be fairly media literate; nevertheless, the quality of video module materials will be critical to the way they perceive the course. The Open University (OU) once had a reputation for lecture-type videos fronted by men with beards and jumpers, an image it may have taken quite a few years to escape, though the university probably now has. All module material videos now include transcripts for accessibility, but these transcripts also serve another purpose: they enable students to scan the content. Harvard’s edX uses a more sophisticated video-embedding technology which includes a separate transcript but also a rolling transcript next to the video pane, allowing the student to read and listen, review and scan ahead. At the OU the videos are sometimes external resources, but more often they are produced in house and, in contrast to its reputation, no longer attempt to replicate the lecture hall. Were they still to feature the traditional OU tutor character, they would be unlikely to impress these media-savvy students.
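The rolling transcript edX displays beside the video pane boils down to a simple synchronisation rule: given a list of timestamped transcript cues and the current playback time, highlight the cue whose interval contains that time. A minimal sketch in Python (the cue text, data layout and function name here are my own illustration, not edX’s actual code):

```python
from bisect import bisect_right

# Each cue: (start_time_in_seconds, text), sorted by start time,
# as it would come from a parsed transcript file.
cues = [
    (0.0,  "Welcome to the module."),
    (4.5,  "Today we look at video as a learning technology."),
    (11.0, "Transcripts let students scan as well as listen."),
]

def active_cue_index(cues, current_time):
    """Return the index of the cue playing at current_time,
    or None if playback hasn't reached the first cue yet."""
    starts = [start for start, _ in cues]
    i = bisect_right(starts, current_time) - 1
    return i if i >= 0 else None
```

A player would run this lookup on every time update and scroll the matching transcript line into view; clicking a transcript line would do the reverse, seeking the video to that cue’s start time.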

Still from Educating Rita

We tutors often berate students for failing to attend online tutorials, and it is true that, in my experience, if students are not assessed on attendance most will not attend. At the moment, however, there is no easy access to data that would give us viewing figures. Our online tutorials normally last around an hour, and we need to know how our students access and watch the recordings of them. I suspect they may use the built-in facilities of the OU Live recordings to find the parts of a tutorial they can use to help with assessment, rather than treating it as a deep learning experience. The OU Live environment also constrains the tutor – but that may not be a bad thing.

Many tutors are now producing short, quick-fire video messages, using YouTube for dissemination. Some of these have a distinctly professional feel, but the majority are simpler offerings.

Whether or not these amateur productions enhance the overall OU offering is a very good question, with interesting implications. I have been producing quite a few of them, usually aiming for less than five minutes. I use the tag “open university” when I publish, and this makes me question whether there needs to be a more formal approach. At the same time, short, to-the-point video messages – which could, for example, include feedback to groups and even individual feedback, perhaps using another online video grab from Collaj – would certainly help to create more presence between tutor and student.

If tutors were able to use these video technologies to help students reflect more deeply on feedback, I believe those of us working online would find that it also improved our communications with students. However, there remains a danger that the face-to-face lecture is simply replicated via video. I don’t believe that is the most appropriate use of the technology, because students’ attention is diverted elsewhere too early.


Exploring authentic assessment


Redefining objectives: A government school in Faridabad. From Livemint.

I find myself having a rather immature emotional reaction to this subject. It stems from experiencing years of inauthentic assessment at a grammar school during the 70s, when the only thing that mattered was memory. Memory is also pretty much the only thing I lack, since I have intelligence in bucketloads! (I’m not saying this to blow my own trumpet, only to try to throw light on how frustrating, and damaging, experiencing false assessment can be.)

So from a deeply personal perspective I am hyper-aware that assessment of knowledge is a very different thing from assessment of memory. When I finally discovered the joy of learning, it came hand in hand with being assessed on the work I was doing, whether that was simply writing an essay (in the peace of my own room) or producing a film with a group of like-passioned people.
Therefore I am tempted to define authentic assessment as judgement of students’ knowledge and learning through means other than memory.
However, it is clear from reading Whitelock and Cross (2012) and Mueller (1993) that this requires a far more thoughtful and positive response! The former demonstrate that the argument over its definition has raged for many years, while the latter offers quite a simple one:

“A form of assessment in which students are asked to perform real-world tasks that demonstrate meaningful application of essential knowledge and skills.” 
Having taught KS3 and KS4, the problems of authentic assessment are very obvious to me: sheer numbers of students require frequent reporting of progress, and the simplest and quickest way to get that information is to set a summative, computer-marked test. Nevertheless, changes in the use of materials in exams are certainly a move in the right direction: in English language and literature, for example, students are now supplied with texts to analyse and are not required to quote by rote. This differs hugely from my experience in the 70s of having to memorise poems, novels and Shakespeare plays. Unfortunately, our current government seems intent on returning to this type of testing and eradicating coursework.

Whitelock, D. and Cross, S. (2012) ‘Authentic assessment: what does it mean and how is it instantiated by a group of distance learning academics?’, International Journal of e-Assessment, vol. 2, no. 1 [Online]. Available at ijea/index.php/journal/article/view/31 (accessed 30 June 2014).

Mueller, J. (1993) ‘What is Authentic Assessment?’, Authentic Assessment Toolbox [Online] (accessed 30 June 2014).

Reflecting on our group project

Averting disaster

These screenshots represent the first page of our prototype website viewed on my mobile phone.
The objective was to work in a group to produce a resource enabling people to learn about the local history of some location using mobiles and social networking.

I think, rather remarkably, we succeeded in doing exactly this – and we haven’t even fallen out yet.

Having read ahead through the materials, I knew this task was coming up, and it would not be an exaggeration to say that I was absolutely dreading it. My earlier experience, during Block 1 of H808, of feeling responsible for sending the group off down completely the wrong track was not yet a distant memory, and I was quite determined not to do that again! But I also decided not to let that stop me from just getting on and doing my best to ensure that we all succeeded.
I was quite happy to be assigned to the local history project, because it was similar to a project I had started several years ago, and I felt that work might be useful here. In the end it really wasn’t, except as a talking point and as a reference for myself when it came to creating a template for teachers to set up their own activity. It also quickly became apparent that almost everyone else wanted to be doing something different, which created a slight air of negativity that needed dissipating. Interestingly, quite a few people saw little or no purpose in this project for their own learning; for it to have meaning, they needed it to be something tangible that they would be able to use in their own environments. This is something we struggled with throughout the project.
I also chose this project because it played to my strengths: I understand mobile technology, I understand web development, and I understand how to mash everything together using social networking. My personal challenge was to step back and allow others to demonstrate their own strengths, for example in organising the site, getting to grips with appropriate theory, or working through the design challenge the module required. My contribution to working through the module requirements was to create a Gantt chart for us and to keep encouraging everyone to use it. I think most people did end up using it, and I’d venture that the times they didn’t were when they became unsure about what they should be doing. I also decided to toughen up a bit: if other people were struggling with aspects of the project, I resisted imagining it was my responsibility to untangle them. That was also why I didn’t want the team leader role, even though I took on several aspects of it and shared others, particularly with Lawrence.
It was interesting to see people shine, though, and everyone did at one point or another. Our meetings were generally amiable, and only occasionally did anyone (including me) succeed in taking us off track. My weakness in meetings was that I had such a clear idea of how the project was going to pan out – not specifically our vision, but the practical side of it – that I was continuously pushing towards that outcome and sometimes not paying enough heed to suggestions that might take us too far off that track. At the same time, there was often an unhappy equilibrium between just getting on and doing things without asking others for permission, and trying to seek consensus.
Following the process outlined by the module had its issues, but overall it provided a means to an end, and it was interesting, if sometimes a little frustrating, to see a learning design theory in practice. Once we allowed ourselves the liberty of assigning different tasks to different people (at the point when we split up the theoretical and case study research), the whole project became a lot smoother. Before that there was a constant concern that one or two people might not get something done and so hold up the group.
I would be sad to read now that some people were unhappy with their own contribution, or with anyone else’s, because I sincerely believe that everyone in the group contributed critical aspects to both the process and the product. Whether or not each of us achieved a personal goal, as a group we achieved our collective one.

Just one open education technology

One technology that I believe is increasingly useful in open education is Google Hangouts.
Hangouts provides free video conferencing, though a video hangout allows only ten participants, each of whom must install Google’s chat plugin; a text-only hangout allows 100 participants. A video hangout can, however, be streamed to YouTube with chat enabled there too, which greatly enlarges its potential reach.
It is possible to share documents and screens, so Hangouts can be a good environment for working on collaborative projects at a distance, as well as for hosting discussions, viewing demos and so on. Google ran its own education conference series, which provides an interesting example of what might be possible in the future:

It is important for open education because it is free and yet, with minimum technical requirements, can enable anyone to create a learning experience.

Comparing MOOCs

A cMOOC v. an xMOOC

(cMOOCs and xMOOCs are defined here by Stephen Downes)

This is DS106 v. an iversity course, also called Digital Storytelling.

This post was supposed to compare one of the original xMOOC platforms with DS106, but instead I decided to compare an equivalent platform offering a similar subject. I wanted the treatment of the subject and the pedagogy, rather than the technology, to be the main focus of the comparison, but nevertheless…

the technology, just to get this out of the way:

DS106 is built on the WordPress (WP) content management system (CMS). Highly modular, with lots of add-ons available, WP allows anyone with a modicum of web development experience to build as complicated a network of websites as they want. And DS106 has become a pretty deep website, with many offshoots and choices for visitors (including a link for teachers who want to use the resources as an OER source for their own courses).

iversity is a MOOC platform built specifically for this purpose and generally mirrors others like Udacity, FutureLearn and Coursera. The major difference is that this one was built in, and operates from, Germany, though it still uses English as its language.

Another major difference is that iversity offers academic credit in the form of European credit transfer (ECTS), whereas DS106 does not – at least not in the traditional sense.

iversity courses appear to be led only by professors, though there are seven people in the storyMOOC team and twelve visiting guest lecturers. This differs even from many of the other xMOOCs. But now at least we are getting closer to looking at the teaching.

DS106 is designed to enable students to learn about digital storytelling by reading, watching and listening to background material and then telling stories digitally. Students set their own assignments, they choose their own methods, and there doesn’t appear to be anyone in charge; in fact one of the original pathways is even called “headless DS106”. The only thing we know for certain is that the idea originated at the University of Mary Washington and was created by Jim Groom; after that it appears to have taken on a life of its own, an ongoing environment that continues to grow within a structure without dates and times: “The Open DS106”. The course has become the standard bearer for a connectivist approach to teaching online, because it relies on the connections between, and the experience of, the people undertaking it in order to progress the students.

Both courses have a curriculum: DS106 has 12 units (open ended), iversity has 8 chapters (based on weeks of study). Side by side (DS106 unit – iversity chapter):

1: Bootcamp – storytelling basics
2: Getting Through Bootcamp / Personal Cyber Infrastructure – serial formats (on the TV, web and beyond)
3: What Mean Ye Digital Storytelling? – storytelling in role-playing games
4: Listening to Audio – interactive storytelling in video games
5: Telling Stories in Photos – transmedia storytelling
6: It’s All By Design – alternate-reality gaming
7: Advanced Audio And Radio Show Production – augmented reality and location-based storytelling
8: Telling Stories Within the Web – the role of tools, interfaces and information architectures in current storytelling
9: Reading Movies
10: Making Movies
11: ximeR and M@$#up
12: Final Project

The major superficial difference between these two curricula is that one appears more embedded in an academic framework; in particular, it uses academic language to convey what is arguably pretty similar material. But there is little doubt that DS106 is all about doing, producing, experimenting and learning that way, whereas iversity requires listening to and watching the experts telling the students about the subject. iversity also broadcasts via Facebook and had a Twitter feed which attracted 323 followers. DS106 seems to have avoided Facebook completely but has several hashtag feeds, each focusing on a different aspect of the module (making a numerical comparison a little more difficult) – perhaps because it is not person-centric, so there is no one leader to follow. @dsradio has over 500 followers, @ds106 a couple of hundred and @ds106dc nearly 700.

The general approach and philosophy of DS106 really does seem to have a life of its own: it revels in creativity, and that creativity is exemplified by its approach to teaching, assessment (peer) and openness. There is little doubt that iversity’s storyMOOC also celebrates creativity, but its approach is steeped in its own appraisal of that creativity. This contrasts strongly with the overwhelming sense of joy that participants in DS106 appear to be experiencing. Surely this is what learning should be about?

Background to MOOCs

Thoughts from an interview

in which George Siemens and Dave Cormier are interviewed by Martin Weller about a range of issues concerning MOOCs.

  • The starting point for thinking about MOOCs should not necessarily be to criticise the idea from a personal, or first-world, perspective, as many seem to do, but instead to take the view that if universities around the world really are willing to publish courses open to students everywhere, then that in itself should be celebrated.
  • One of the major issues is the perception that a business model has to be attached to the MOOC. It seems, to the speakers, that as soon as this happens the MOOC loses its focus and purpose.
  • The technology and presentational methods of MOOCs need continued innovation; it is not enough to throw another MOOC onto Coursera (or anywhere else) and consider the job done. (They all pointed to DS106 as an example of where this didn’t happen.)
  • Burn-out of staff delivering the course: an interesting idea that as staff energy flags, so does student participation – and this can be mapped! They suggest around twenty people as an optimum team size for a massive course. My experience certainly bears this out: the Moodle MOOC I attended, which had at least ten people working solidly during the MOOC’s window, was more successful than those with only one or two visible “leaders”.

Thoughts from Maturing the MOOC

Conflicting perspectives on MOOCs divide education communities

There is a split between “elite” universities (in the US), which are keen to explore the potential of MOOCs (and are financially able to do so), and smaller universities which don’t have the same kind of resources.

But perhaps the criticisms that MOOCs are unable (currently) to help students with complex learning needs, though less visible, are more important.

Learning Practitioners disagree about the value of MOOCs

Though MOOCs could be innovative, they also can be seen as packaging over content. The format itself has many issues that are yet to be resolved.

Formal comprehensive analyses of MOOCs mostly concur that they are disruptive and possibly threatening to current HE models 

Dramatic change is imminent :), so say various government think tanks.

Reporting of MOOC learner experiences is positive

Even though many don’t complete the courses on offer, this doesn’t mean they are not positive about their experiences. Nevertheless, there is little data to support this.

The MOOC is maturing – and engaging with its business and accreditation issues

Two biggies, sustainability and accreditation, are both high on universities’ priority lists for MOOCs, and therefore these issues will be solved one way or another.

Could MOOCs be used in “my area”

One of my colleagues is very keen to produce a Computer Science MOOC for young people (K12). I think this would be a great idea. Now just to find someone to pay us to create it, run it, market it, host it……

An OER course

A digital skills course using OER

This is a short five-week online course aimed at adult learners who want to improve their understanding of the digital skills necessary to engage with an increasingly digital world.
There is an excellent digital skills framework resource at the OU Library, which students could use for self-assessment, but this skills course is aimed at adults who may not be quite at level 1 yet. We were only supposed to use a specified set of OER banks, which meant not being able to use the one which, when I happened upon it following a link or two, turned out to be the very easiest to search.

The main conclusion I came to, however, was that one needs a very clear idea of audience and structure before starting out. My chosen audience was adult online learners, and it turned out that one resource was head and shoulders above all the others for this particular student group – OpenLearn.

A brief review of issues follows:

  • Ariadne: often broken links
  • Jorum: searching not easy, previews don’t work
  • MIT: too advanced for the purpose
  • CNX: too difficult to use, and many non-English texts without a way to filter them out
  • Merlot: ratings useful, descriptions of learning material also useful and search good, but out-of-date links
  • OpenLearn: despite my criticisms of this OER in my earlier blog post, I found it provided excellent resources that I could pick and choose from easily
I would always look for existing suitable resources before deciding to create my own, so I have no hesitation in using OER. However, the ease of searching for and finding the right ones quickly would inevitably mean that I ended up relying on those OER that were easiest and quickest to use.
The course is not complete by any means, but here is the beginning (resources rated G=good, M=medium, B=bad):

Week 1: Using a computer or mobile smartphone and getting connected
  • Software – browsers and apps
  • Both of the resources found were bad (B) in terms of suitability.

Week 2: Creating and caring for your digital identity
  • Creating a profile
  • Creating accounts
  • Establishing your icon
  • Resources: OpenLearn (Online Safety); OpenLearn (Identity online)

Week 3: Searching and evaluating
  • Using search engines
  • Search terms
  • Evaluating results
  • Finding like-minded people

Week 4: Organising your digital things, offline and online
  • File storage
  • Cloud storage

Week 5: Communicating and collaborating
  • Web 2.0
  • Collaborative applications
  • Resources from OpenLearn


Exploring OER issues

Activity 7: Exploring OER issues

From the OER InfoKit YouTube channel

The JISC report on OER discusses several issues in OER:

  • embedding sustainable practice
  • funding and resourcing
  • time involved in repurposing materials
  • widening engagement
  • licensing and locating license holders for permission
  • multiple OER models
  • institutional policies, practices and coherent strategies
  • technical infrastructures
  • staff skills, understanding and raising individuals’ digital literacies
  • “quality, institutional branding and marketisation” (p9), quality and trust of the materials
  • lack of awareness of OER and their benefits

But perhaps more importantly, it identifies that progress has been made in significant areas:

“Increased awareness, knowledge and expertise around issues to do with technical, quality, accessibility, and legal aspects have led to the development of systems, policies and procedures to support ongoing OER activities.” (Jisc, 2013, p11)

I have been exploring the OER Research Hub’s impact map and, based on the results (though there are not many participants yet), have explored the issue of widening participation more deeply. The impact map for Access represents evidence gathered about the hypothesis that OER will widen participation in education. With no evidence presented from K12, and most of the evidence originating from the USA, there appears to be more evidence against the idea that access is widened; nevertheless, there is a slight skew towards broader access in higher education and at colleges, but less in informal learning. In the HEFCE survey, 55% of people who work with OER found increased access for learners to be of most significance. As Emma Blake (2014) points out, there is also a dearth of evidence from regions outside the first world. However, the Open Educational Resources Survey from UNESCO (2012) shows promising responses from developing regions and also shows that OER activity is spread across the different educational sectors.

This same UNESCO report also highlights the need for policies and funding to support the establishment of OER. Here the impact map demonstrates clearly that once an OER policy is adopted it brings financial benefits to institutions and students, particularly through the open textbook movement. Where countries report that they do not have a policy, this is not necessarily the end of the story, because many are in the process of creating one.

The third issue I want to address concerns digital literacies. It seems from the review that although digital skills are improving amongst staff, things get a little trickier when students are required to cooperate in the production of OER. So perhaps, although students are engaged with OER, they are not necessarily getting the best out of them.

It seems that sometimes we forget that, even in a fast-paced technological era, some things always take time. The impact map is a little disappointing in terms of the results it displays, but this is due to the voluntary nature of evidence gathering: it depends on individuals proposing the evidence, which could certainly skew the perception of the information it delivers. In my experience in K12, making a change and assessing its impact takes many years; for example, were I to propose a digital reading list to year 7s, the interesting result (their English GCSE results) would take five years to come through.