Complex for the Few
It seems like Learning Analytics is headed for more complexity. With so many analysis tools that could be used to measure learning and behavior in online learning environments, it is not clear what the best ways are for combining Social Network Analysis (SNA), discourse analysis, multilevel regression modeling of count data, descriptive web analytics tools, etc. But even if researchers settle on a preferred method for combining these tools in their current form, who else would really be able to use the results of these analyses at the point of learning to make informed, data-enabled decisions? Not many. So, as the methods to analyze learning become more advanced and combine more complex techniques, the number of people who can make sense of the data grows smaller.
SNA is great, but what is a teacher or undergrad student going to do with an SNA of her Blackboard sessions? Multilevel regression analysis is very powerful, but who besides someone trained in advanced predictive statistics is going to be able to interpret the effects of “mediated moderation” for a particular class or an individual? My sister, for example, works as a fourth-grade teacher in L.A. Unified. She recently received her Value Added Analysis report of her performance. She was not happy with it, and at the same time she did not know what it meant, even though she has a master’s degree in education and has been teaching since I was in elementary school. The report that the district handed out explained the results in terms that only a researcher would understand. It’s funny that they would spend all that time on the analysis and then shoot themselves in the foot by not setting up a proper communication plan for the results.

Anyway, this is the same danger that Learning Analytics faces. In the LAK11 MOOC, some very impressive technologies were shown from the semantic web tools and literature. Cohere, for example, is a powerful technology for annotating and linking meanings among artifacts you interact with on the internet, but it does not have the feel of something easy to use at the classroom level. I know that it is early on in the Learning Analytics world, but if its technology is going to be used by the masses like the social web technologies that preceded it, then we need to consider from the beginning how easily these tools can be used by instructors and students in the classroom and by lifelong learners on their own. Otherwise, Learning Analytics will be relegated to academic circles and have relatively little impact on the majority of learners.
Simple for the Masses
Learning Analytics should be going towards more simplicity. Learning Analytics needs what blogging and social networks did for the world of user-generated content: they made it easy, so anyone could do it. If Learning Analytics were easy to deploy and interpret, it would move from the ivory tower (or the open source ivory tower) to the masses. It would have its greatest use with a legion of Learning Analytics creators and interpreters rather than a select few in forward-thinking higher education instructional technology and learning science departments. Just as sociability has taken on a new fabric and culture in the digital age of virtual spaces, and as millions have networked together to extend and innovate how they communicate, so would analysis of learning transform and integrate with daily life if it were made easy for those involved at the classroom level and at the lifelong-learning level.
Klout.com seems to be going in the simple-social-analytics direction, though perhaps not simple in all the right areas. Take my Klout score today, for example. It seems relatively simple to monitor your online influence on certain social networks with its graphs and scores. I think the idea is that if you have a dashboard, analytics will be simple. But it is deceptively difficult to communicate the right message and information through dashboards. For example, my Klout score changed, but why it changed is not clear to me. It went down from the upper 30s to a 20 in a day. Apparently I did something to lose my Klout. It is nice that there is an aggregate score, but there is no simple way of knowing what that score means. They were nice enough to say that I need to interact more with my network in order for my score to go up, but that’s hardly an actionable, strategic suggestion.
My network influence score is a little easier to interpret; I am assuming that the metrics on the right influence the graph on the left. By the way, I think the big drop in my score is from about the time we had our baby a few weeks ago. I’ll take my baby girl over my Klout score any day of the week!
Klout-like dashboards are a lot more helpful to people in the know on social web analytics than they are to the average user of Facebook. This is because they display information rather than teach you how to interpret it along the way. Similarly, if learning analytics dashboards are going to be useful for learners and instructors, information needs to be communicated so that those who are unfamiliar with the analysis, or with monitoring their own learning, can easily see what is happening, follow a recommendation, and know why they are following it.
If teachers or students cannot see fairly quickly how they are going to use a Learning Analytics technology to either make what they already do easier or enable them to quickly do something new that makes their lives better or richer, then it will not take off. It will have limited potential. Decision makers at the top of technology hierarchies may be able to make decisions that affect those who use the technology (e.g. Google), but this is much less of an effect than if the numerous users of a technology were able to easily adapt it and use it for their own purposes (e.g. the blogosphere).
Google’s Auto Fill, but for Decisions and Interventions
Simplicity in web tools brings usefulness to a wider audience. And the wider audience brings a new playing field of data. So what technology advances could bring Learning Analytics into the simple-for-the-masses space? One idea could be to have technology like Google’s auto fill, but for decisions and interventions that instructors can make in the classroom. What is auto fill? It’s a tool that suggests, right in your search field, the most likely next letters or words you are going to type, so that you can select the complete word or phrase rather than typing the rest of it out. The algorithms that Google runs on billions of words build models of which words go together most frequently. These models make it possible for Google to show the user suggestions of what is commonly typed next.
Some may object to applying this technology to decision making, because it would appear to take away a teacher’s discretion. But wait; let’s take a look at how we would get there before drawing any conclusions.
The Analytics Playing Field of Tomorrow
So, as Learning Analytics is made easy and ubiquitous in virtual learning environments and personal learning environments, massive amounts of data will exist not only about single-loop learning against specified outcomes, but also about the decisions users make as a result of viewing the Learning Analytics, and the success of those decisions. This extension of Learning Analytics data and technologies seems to be an area yet unexplored in education applications. The Learning Analytics standards we are trying to establish today will shape the analytics playing field of tomorrow.
An Example of Tomorrow’s Learning Analytics
Imagine if an instructor of an online class were able to see a learning analytics profile of her class, but not just a profile: the instructor would see that her class is part of a population of 10,000 other online classes of similar size and profile characteristics that have taken place in the past five years. Given the point in the semester that the instructor and class have reached, and given the groupings of performance levels among the class participants, an auto-fill-like suggestion engine would show the instructor an array of next steps to address the performance issues she is facing in the class. Again, these suggestions would be based on the actions of other instructors in her population of 10,000 similar classes. It would show that 20% chose Action A, 40% chose Action B, 25% chose Action C, 10% chose Action D, and 5% chose other actions. It would also show how certain performance outcomes were related to the action choices just described (e.g. 80% of classes in which an instructor chose Action B at this point in the semester saw a positive average grade change of 1.0 on the 4-point scale).
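The core of such a suggestion engine can be sketched in a few lines. Everything here is hypothetical: the action names, the cohort records, and the single “grade change” outcome measure are invented stand-ins for whatever a real system would log about similar past classes. The sketch just shows how action shares and average outcomes, like the percentages above, would be computed from that log.

```python
from collections import Counter

# Hypothetical records from similar past classes: (action taken, avg grade change).
cohort = [
    ("Action A", 0.2), ("Action B", 1.0), ("Action B", 1.2),
    ("Action C", 0.5), ("Action B", 0.9), ("Action A", -0.1),
    ("Action D", 0.0), ("Action C", 0.4),
]

def suggestions(records):
    """For each action, report how often it was chosen and its mean outcome."""
    counts = Counter(action for action, _ in records)
    total = len(records)
    report = {}
    for action, n in counts.most_common():  # most frequently chosen first
        outcomes = [grade for a, grade in records if a == action]
        report[action] = {
            "share": n / total,
            "mean_grade_change": sum(outcomes) / len(outcomes),
        }
    return report

for action, stats in suggestions(cohort).items():
    print(f"{action}: chosen {stats['share']:.0%}, "
          f"avg grade change {stats['mean_grade_change']:+.2f}")
```

An instructor-facing dashboard would render this as the “40% chose Action B, with an average grade change of +1.0” style of display described above, rather than as raw numbers.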
I know that I am being vague about “actions”, but the point I am going for here is that Learning Analytics data on the classroom-level interactions of learners and instructors is just the inner layer. What will also need to be tracked, mashed up, etc. is what humans decide to do with the Learning Analytics data they interact with, and the outcomes of those decisions. Then Learning Analytics comes full circle as comprehensive decision support.
Such an analytics infrastructure in teaching would, of course, invite all sorts of ethical dilemmas. For instance, what if administrative bodies of educational institutions locked instructors down to only the suggested decisions that met a certain threshold of success? It seems that such performance-oriented restrictions would limit instructors’ ability to innovate. Many more issues could arise, but that is a topic for another day. For now, it seems that if Learning Analytics goes simple, decision support in learning like I have just described is just around the corner.