“We have to walk a fine line in the use of learning analytics. On the one hand, analytics can provide valuable insight into the factors that influence learners’ success (time on task, attendance, frequency of logins, position within a social network, frequency of contact with faculty members or teachers). Peripheral data analysis could include the use of physical services in a school or university: access to library resources and learning help services. On the other hand, analytics can’t capture the softer elements of learning, such as the motivating encouragement from a teacher and the value of informal social interactions. In any assessment system, whether standardized testing or learning analytics, there is a real danger that the target becomes the object of learning, rather than the assessment of learning.”

via Top Ed-Tech Trends of 2011: Data (Which Still Means Mostly “Standardized Testing”) | Hack Education.


What is a MOOC?

Massive Open Online Courses (MOOCs) are large-scale online courses (in the thousands of participants) where an expert or group of experts from a particular field both 1. create the large draw to the course, and 2. facilitate a multi-week series of interactive lectures and discussion forums on critical issues from that field. Participants are expected to self-organize, to share and discuss the course material, and to create and publish new artifacts that represent their learning. Additionally, MOOC participation is recorded and published openly so that those who come upon it later may follow peripherally.

Where did MOOCs Come From?

This is best answered in the words of David Cormier and George Siemens,

“The term was coined in response to Siemens and Downes’s 2008 “Connectivism and Connective Knowledge” course. An initial group of twenty-five participants registered and paid to take the course for credit. The course was then opened up for other learners to participate: course lectures, discussion forums, and weekly online sessions were made available to nonregistered learners. This second group of learners–those in The Open Course who wanted to participate but weren’t interested in course credit–numbered over 2,300. The addition of these learners significantly enhanced the course experience, since additional conversations and readings extended the contributions of the instructors.” (2010, p. 32).

Since 2008, several other MOOCs have been developed.

What is a MOOC Experience?

The scale of interaction among MOOC participants resembles that of massively multiplayer online games, such as World of Warcraft. But whereas in the gaming environment large numbers of people come together online to play, self-organize, develop skill, strategize as a group, and execute strategies, MOOCs facilitate learning about, or the development of, a particular knowledge domain at a participation scale ripe for diversity.

As Mackness, Mak, and Williams described, “The experience was, in part, positive and stimulating, and in part frustrating and negative…For participants not only was the course design unique, but so too was the learning experience. Easy access to advancing technologies means that learners can now take control of where, when, how, what and with whom they learn. There has been a massive growth in online social networking in recent years. The use of online and other web 2.0 technologies is becoming common. Increasingly some learners can, and do, choose not to use the learning environment provided by a course or institution, but to meet instead in locations of their choice, such as Facebook, Twitter, wikis or blogs (Beetham, 2008; Guldberg & Mackness, 2009)” (2010, p. 267). This great flexibility can also detract from the learning for many participants. It has been difficult for some to find the right group to join; consequently, parts of the MOOC experience have not been well received (2010).

Other ways to experience a MOOC are to lurk or to follow the course after the fact. For example, unlike the live MOOC participants, I have only accessed posted materials and recorded MOOC sessions, which I found engaging and full of value. I also noticed that my trajectory of feelings followed what many in the live MOOC experienced. For example, in the LAK11 MOOC, a significant drop-off occurred, and some disillusionment was expressed, when data mining and data science were the focus. For someone not from those fields, it was overwhelming to see all the skills that one did not possess. It made me think about how relevant my contributions to the field of Learning Analytics could be if I were not also a data miner. But as I continued through the sessions, I regained confidence that there are many ways to participate in the field of learning analytics. It was remarkable how much I felt that I was there in the class; the feeling of presence was much stronger than if I had just been watching a webinar after the fact. I felt immersed through my after-the-fact peripheral participation.

Is MIT’s OpenCourseWare a MOOC?

The short answer is no. I again point to Cormier and Siemens:

“In an open course, participants engage at different levels of the educator’s practice, whether that be helping to develop a course or participating in the live action of the course itself. This is distinctly different from the idea of open in the open content movement, where open is used in the sense of being free from the intellectual property stipulations that restrict the use and reuse of content” (2010, p. 32).

Though MIT’s OpenCourseWare is revolutionary, making content publicly available is not enough, because it only focuses on the content. The proposed benefit of MOOCs, on the other hand, is “the interaction, the access to the debate, to the negotiation of knowledge–not to the stale cataloging of content” (2010, p. 32). Essentially, MOOCs and other open courses are “open” (i.e., transparent) in the practice of knowledge negotiation and developing the field of study (2010, p. 32), as opposed to merely letting open-content consumers stay aware of the latest developments.

Are Stanford’s Massive Online Courses MOOCs?

Stanford has opened three courses to the public for the fall of 2011: AI, Databases, and Machine Learning. The number of participants in these courses is unprecedented: 135,455, 38,499, and 38,779, respectively, as of midday on August 27, 2011. The numbers will continue to increase until registration closes and the courses begin in October. According to the course pages, participants “receive a statement of accomplishment from the instructor,” including a normative performance ranking against other online students, but only enrolled Stanford students receive credit and grades. Online students can submit questions to the instructor and staff, but these questions go through an aggregation and rating process in which only “top-rated” questions are answered (“Introduction to Artificial Intelligence – Fall 2011,” n.d.; “Introduction to Databases – Stanford University,” n.d.; “Machine Learning – Stanford University,” n.d.).

MOOCs seem to differ from Stanford’s classes in these principal ways:

  1. Direct access to course facilitators: MOOC (yes), Stanford (no)
  2. Inclusion of all participation: MOOC (yes), Stanford (no)
  3. Ranking of performance: MOOC (no), Stanford (yes)
  4. Degree of separation between accredited and online participants: MOOC (lesser), Stanford (greater)
  5. Flexible, personalized curriculum: MOOC (yes), Stanford (no)
  6. Define or develop the field: MOOC (yes), Stanford (no)
Other differences may emerge as the Stanford courses proceed.

Stanford’s large-scale courses do not appear to be MOOCs, but, like MOOCs, they are massive, are online, have celebrity draw (Peter Norvig), appear to invite both real-time and asynchronous participation and self-organization, and make the sessions and forums publicly available. The Stanford courses do seem to have one technological innovation over the MOOC model, however: the ability to rank individuals’ course performance. It will be interesting to see what metrics and technologies are used to achieve such measures at scale.


MOOCs and the similar variations I have discussed appear to be carving out a substantial niche in the array of online learning experiences. They are a significant and unique addition to how people may engage virtually at scale for both learning and exploration.


Cormier, D., & Siemens, G. (2010). Through the Open Door: Open Courses as Research, Learning, and Engagement. Educause Review, 45(4), 30-39.

Introduction to Artificial Intelligence – Fall 2011. (n.d.). Retrieved August 27, 2011, from http://www.ai-class.com/

Introduction to Databases – Stanford University. (n.d.). Retrieved August 27, 2011, from http://www.db-class.org/

Machine Learning – Stanford University. (n.d.). Retrieved August 27, 2011, from http://www.ml-class.com/

Mackness, J., Mak, S. F. J., & Williams, R. (2010). The Ideals and Reality of Participating in a MOOC. Proceedings of the 7th International Conference on Networked Learning 2010.

McAuley, A., Stewart, B., Siemens, G., & Cormier, D. (2010). The MOOC Model for Digital Practice. Retrieved from http://davecormier.com/edblog/wp-content/uploads/MOOC_Final.pdf

Complex for the Few

It seems that Learning Analytics is headed for more complexity. With so many analysis tools that could be used to measure learning and behavior in online learning environments, it is not clear how best to combine Social Network Analysis (SNA), discourse analysis, multilevel regression modeling of count data, descriptive web analytics tools, and so on. But even if researchers establish a preferred method for these tools in their current form, who else would really be able to use the results from these analyses at the point of learning to make informed, data-enabled decisions? Not many. So, as the methods to analyze learning become more advanced and combine more complex techniques, the number of people who can make sense of the data grows smaller.

SNA is great, but what is a teacher or an undergraduate going to do with an SNA of her Blackboard sessions? Multilevel regression analysis is very powerful, but who besides someone trained in advanced predictive statistics is going to be able to interpret the effects of “mediated moderation” for a particular class or an individual? My sister, for example, works as a fourth-grade teacher in L.A. Unified. She recently received her Value Added Analysis report of her performance. She was not happy with it, and at the same time she did not know what it meant, even though she has a master’s degree in education and has been teaching since I was in elementary school. The report that the district handed out explained the results in terms that only a researcher would understand. It’s funny that they would spend all that time on the analysis and then shoot themselves in the foot by not setting up a proper communication plan for the results.

This is the same danger that Learning Analytics faces. In the LAK11 MOOC, some very impressive technologies were shown from the semantic web tools and literature. Cohere, for example, is a powerful technology for annotating and linking meanings among artifacts you interact with on the internet, but it does not have the feel of something easy to use at the classroom level. I know that it is early in the Learning Analytics world, but if its technology is going to be used by the masses like the social web technologies that preceded it, then we need to consider from the beginning how easily these tools can be used by instructors and students in the classroom and by lifelong learners on their own. Otherwise, Learning Analytics will be relegated to academic circles and have relatively little impact on the majority of learners.
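To make the point concrete, here is the kind of output an SNA hands a teacher: a degree-centrality ranking computed from who replied to whom on a discussion board. The reply log below is invented for illustration. The computation itself is a few lines; interpreting and acting on the numbers is the hard part.

```python
from collections import defaultdict

def degree_centrality(edges, n_nodes):
    """Fraction of the other participants each person interacted with.

    edges: iterable of (replier, original_poster) pairs from forum logs.
    """
    neighbors = defaultdict(set)
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    return {node: len(nbrs) / (n_nodes - 1) for node, nbrs in neighbors.items()}

# Invented reply log: who replied to whom in a course discussion board
replies = [("ann", "bob"), ("ann", "cam"), ("bob", "cam"), ("dee", "ann")]
centrality = degree_centrality(replies, n_nodes=4)
# "ann" interacted with all three other participants, so her score is 1.0
```

The scores are trivially cheap to produce; the question the post raises is what a teacher is supposed to change in her classroom once she has them.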

Simple for the Masses

Learning Analytics should be moving toward more simplicity. Learning Analytics needs what blogging and social networks did for the world of user-generated content: make it so easy that anyone can do it. If Learning Analytics were easy to deploy and interpret, it would move from the ivory tower (or the open-source ivory tower) to the masses. It seems that it would have its greatest use with a legion of Learning Analytics creators/interpreters rather than a select few in forward-thinking higher education instructional technology and learning science departments. Just as sociability has taken on a new fabric and culture in the digital age of virtual spaces, and as millions have networked together to extend and innovate how they communicate, so would the analysis of learning transform and integrate with daily life if it were made easy for those involved at the classroom level and at the lifelong-learning level.

Klout.com seems to be going in the simple-social-analytics direction, though perhaps not simple in all of the right areas. Take my Klout score today, for example. With its graphs and scores, it seems relatively simple to monitor your online influence on certain social networks. I think the idea is that if you have a dashboard, analytics will be simple. But it is deceptively difficult to communicate the right message and information through dashboards. For example, my Klout score changed, but why it changed is not clear to me. It went down from the upper 30s to a 20 in a day. Apparently I did something to lose my Klout. It is nice that there is an aggregate score, but there is no simple way of knowing what that score means. They were nice enough to say that I need to interact more with my network in order for my score to go up, but that’s hardly an actionable, strategic suggestion.

My Klout Aggregated Social Web Influence Score


My network influence score is a little easier to interpret; I am assuming that the metrics on the right influence the graph on the left. By the way, I think the big drop in my score came around the time we had our baby a few weeks ago. I’ll take my baby girl over my Klout score any day of the week!

My Klout Network Influence Score


Klout-like dashboards are a lot more helpful to people in the know on social web analytics than they are to the average user of Facebook. This is because they display information rather than teach you how to interpret it along the way. Similarly, if learning analytics dashboards are going to be useful for learners and instructors, information needs to be communicated so that those who are unaccustomed to the analysis, or unfamiliar with monitoring their own learning, can easily see what is happening, follow a recommendation, and know why they are following it.
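One way a dashboard can teach interpretation is to attach a plain-language explanation to every score change rather than only plotting it. A minimal sketch, where the metric names and the single-driver heuristic are my own invention, not Klout's model:

```python
def explain_change(before, after):
    """Turn a raw score change into a plain-language explanation by
    naming the component metric that moved the most."""
    deltas = {k: after[k] - before[k] for k in before}
    driver = max(deltas, key=lambda k: abs(deltas[k]))
    verb = "dropped" if deltas[driver] < 0 else "rose"
    return (f"Your score changed mainly because '{driver}' {verb} "
            f"by {abs(deltas[driver])}. Try engaging there first.")

# Invented weekly metrics for one user
last_week = {"replies received": 14, "mentions": 9, "posts": 5}
this_week = {"replies received": 3, "mentions": 8, "posts": 6}
explain_change(last_week, this_week)
```

Even a crude explanation like this gives the user something to act on, which is exactly what an unexplained drop from the upper 30s to a 20 does not.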

If teachers or students cannot see fairly quickly how they are going to use a Learning Analytics technology either to make what they already do easier or to let them quickly do something new that makes their lives better or richer, then it will not take off. It will have limited potential. Decision makers at the top of technology hierarchies may be able to make decisions that affect those who use the technology (e.g., Google), but this is a much smaller effect than if the numerous users of a technology were able to easily adapt and use it for their own purposes (e.g., the blogosphere).

Google’s Auto Fill, but for Decisions and Interventions

Simplicity in web tools brings usefulness to a wider audience. And the wider audience brings a new playing field of data. So what technology advances could bring Learning Analytics into the simple-for-the-masses space? One idea could be technology like Google’s auto fill, but for decisions and interventions that instructors can make in the classroom. What is auto fill? It’s a tool that suggests, in your search field, the most likely next letters or words you are going to type, so that you can select the complete word or phrase rather than typing the rest of it out. The algorithms that Google runs on billions of words enable models of which words go together most frequently. These models let Google display suggestions to the user of what is commonly typed next.
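The mechanics can be sketched in a few lines: count how often each query appears in a log, then rank completions of a typed prefix by frequency. The query log here is invented, and Google's production models are far richer, but the shape of the idea is the same:

```python
from collections import Counter

# Invented query log; real systems would mine billions of searches
query_log = [
    "learning analytics", "learning analytics", "learning styles",
    "learning analytics tools", "learning management system",
]
counts = Counter(query_log)

def suggest(prefix, k=3):
    """Return up to k logged queries starting with prefix, most frequent first."""
    matches = [(q, n) for q, n in counts.items() if q.startswith(prefix)]
    matches.sort(key=lambda qn: -qn[1])
    return [q for q, _ in matches[:k]]

suggest("learning a")  # "learning analytics" ranks first: it was logged most often
```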

Google's Auto Fill Feature on a Search Field


Some may object to applying this technology to decision making because it would appear to take away a teacher’s discretion. But wait: let’s take a look at how we would get there before any conclusions are drawn.

The Analytics Playing Field of Tomorrow

So, as Learning Analytics is made easy and ubiquitous in virtual learning environments and personal learning environments, massive amounts of data will exist not only about single-loop learning according to specified outcomes, but also about the decisions users make as a result of viewing the Learning Analytics, and about the success of those decisions. This extension of Learning Analytics data and technologies seems to be an area yet unexplored in education applications. The Learning Analytics standards we are trying to establish today will make the analytics playing field of tomorrow.

Data Layers for Meta-level Learning Analytics


An Example of Tomorrow’s Learning Analytics

Imagine if an instructor of an online class were able to see a learning analytics profile of her class, but not just a profile: the instructor would see that her class is part of a population of 10,000 other online classes of similar size and profile characteristics that have taken place in the past five years. Given the point in the semester that the instructor and class have reached, and given the groupings of performance levels among the class participants, an auto-fill-like suggestion engine would show the instructor an array of next steps to address the performance issues she is facing in the class. Again, these suggestions would be based on the actions of other instructors in her population of 10,000 similar classes. It would show that 20% chose Action A, 40% chose Action B, 25% chose Action C, 10% chose Action D, and 5% chose other actions. It would also show how certain performance outcomes were related to the action choices just described (e.g., 80% of classes in which an instructor chose Action B at this point in the semester saw a positive average grade change of 1.0 on the 4-point scale).

I know that I am being vague about “actions,” but the point here is that Learning Analytics data on the online, classroom-level interactions of learners and instructors is just the inner layer. What will also need to be tracked, mashed up, and so on is what humans decide to do with the Learning Analytics data they interact with, and the outcomes of those decisions. Then Learning Analytics comes full circle as comprehensive decision support.
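A sketch of the suggestion engine described above, under loose assumptions of my own: class "profiles" are reduced to numeric feature vectors, and each historical class logs one action taken plus the grade change that followed. Similarity is plain Euclidean distance over the profile features.

```python
import math
from collections import Counter, defaultdict

# Invented history: (class profile vector, action the instructor took,
# average grade change observed afterward on a 4-point scale)
history = [
    ((0.40, 0.70), "Action B", +1.0),
    ((0.50, 0.60), "Action B", +0.8),
    ((0.90, 0.10), "Action A", -0.2),
    ((0.45, 0.65), "Action C", +0.3),
]

def suggest_actions(profile, k=3):
    """For the k most similar past classes, report the share that chose
    each action and the mean grade change that followed it."""
    nearest = sorted(history, key=lambda rec: math.dist(rec[0], profile))[:k]
    chosen = Counter(action for _, action, _ in nearest)
    outcomes = defaultdict(list)
    for _, action, delta in nearest:
        outcomes[action].append(delta)
    return {action: (n / k, sum(outcomes[action]) / len(outcomes[action]))
            for action, n in chosen.items()}

# Share of similar classes choosing each action, plus the mean outcome
suggest_actions((0.5, 0.6))
```

With 10,000 real classes instead of four toy tuples, the same aggregation yields exactly the "20% chose Action A, 80% saw a +1.0 change" readout described above.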

Such an analytics infrastructure in teaching would, of course, invite all sorts of ethical dilemmas. For instance, what if the administrative bodies of educational institutions locked instructors into only those suggested decisions that met a certain threshold of success rates? Such performance-oriented restrictions would seem to limit instructors’ ability to innovate. Many more issues could arise, but that is a topic for another day. For now, it seems that if Learning Analytics goes simple, decision support in learning like I have just described is just around the corner.

Posted by: Michael Atkisson | July 28, 2011

LAK11 MOOC: Learning Analytics Successes?

The two most compelling cases for the impact of Learning Analytics in the LAK11 MOOC were those of UMBC and Purdue’s Signals.


John Fritz presented on his work with Blackboard at UMBC. Using Learning Analytics, he was able to give performance feedback to students and instructors through the LMS. In the classes he studied, he found that students who received a D or an F interacted with Blackboard 39% less than students who earned higher grades.
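The UMBC comparison boils down to an aggregate any LMS could report: average activity of D/F students relative to everyone else. A sketch with invented log data (the 39% figure is Fritz's finding, not reproduced by these toy numbers):

```python
from statistics import mean

# Invented (final grade, Blackboard session count) pairs from course logs
students = [("A", 120), ("B", 95), ("C", 88), ("D", 60), ("F", 55)]

low = [n for grade, n in students if grade in ("D", "F")]
high = [n for grade, n in students if grade not in ("D", "F")]
gap = 1 - mean(low) / mean(high)  # relative shortfall in LMS use
```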

One of the instructors John studied used a tool called Adaptive Release. That instructor’s students performed 20% higher on the departmental econ final than other students, even though the instructor did not administer or develop the final.


Kimberly Arnold of Purdue presented on the Signals program. Students receive feedback in the LMS by means of a traffic-light metaphor, along with recommendations on what to do differently to improve.

The model they developed to predict grades from certain levels of performance and certain behaviors (50 courses, 40 instructors, and 1,500 students) was 66% accurate for all students studied. Accuracy was 77% for first-year students.

The Signals team was able to improve grades, on average, among the students who used Signals. Overall, students with Ds went to Cs and students with Cs went to Bs. Students with As stayed at As, and there was little movement from Bs to As.

Signals also affected retention. For the 2008 cohort, 94% of students who used Signals were still in school one semester later, vs. 77% of the cohort overall; 82% of the Signals students were still in school one year later, vs. 76% of the cohort.
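The traffic-light idea itself is simple to sketch: combine a few normalized indicators into a weighted risk score and bucket it into red, yellow, or green. The indicators, weights, and cutoffs below are placeholders of mine, not Purdue's actual model:

```python
def signal_light(grade_pct, effort_pct, prior_prep_pct,
                 weights=(0.5, 0.3, 0.2)):
    """Map three normalized indicators (0-1, higher is better) to a light.

    Weights and cutoffs are illustrative placeholders only.
    """
    score = sum(w * x for w, x in
                zip(weights, (grade_pct, effort_pct, prior_prep_pct)))
    if score >= 0.7:
        return "green"
    if score >= 0.4:
        return "yellow"
    return "red"

signal_light(0.55, 0.30, 0.60)  # a borderline student
```

The hard part, as the Purdue results suggest, is not the bucketing but validating the prediction model behind the indicators against actual outcomes.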

Student Retention as a Result of Signals


Other Institutions that are Developing Learning Analytics

I’m sure that there are other success stories for Learning Analytics, but these are the two discussed in the MOOC. I am sure we will see more in the near future as there are several schools and organizations promoting the development of Learning Analytics (Arnold, 2011; Fritz, 2011), such as:

  • SunGard Education/Signals
  • iStrategy Solutions (Blackboard)
  • UMBC’s Blackboard Reports and CMA
  • Starfish Early Alert (Starfishsolutions)
  • Blackboard Greenhouse Grant, Project ASTRO
  • Argosy University
  • Purdue University (Signals)
  • Slippery Rock University of Pennsylvania
  • South Texas College
  • SUNY Buffalo
  • University of Alabama
  • University of Central Florida
  • University System of Georgia
  • Hofstra University
  • Educause Center for Applied Research (ECAR)
  • Capella
  • University of New England (AU), looking at student sentiment to predict student success
  • University of Phoenix
  • Institute of Educational Technology at the Open University of the United Kingdom
  • Minnesota State


Arnold, K. (2011, January 26). Purdue Signals and Learning Analytics. Presented at the LAK11 MOOC, Purdue Signals and Learning Analytics. Retrieved from https://sas.elluminate.com/site/external/jwsdetect/playback.jnlp?psid=2011-01-26.1256.M.340DDA914E66190DED68B759DCF9C3.vcr&sid=2008104

Fritz, J. (2011, January 11). Learning Analytics. Retrieved from https://sas.elluminate.com/site/external/jwsdetect/playback.jnlp?psid=2011-01-11.1101.M.340DDA914E66190DED68B759DCF9C3.vcr&sid=2008104


In preparation for the Learning Analytics and Knowledge 2011 (LAK11) conference, George Siemens, Jon Dron, Dave Cormier, Sylvia Currie, and Tanya Elias hosted a massive open online course (MOOC) on the subject. A MOOC is a course in which large numbers of participants converge online to discuss and debate a subject for a set period of time, and all the resources and interaction data are publicly available and recorded for future use. The participants use formal means of communicating through Moodle (a learning management system) and Elluminate (an online collaboration tool), as well as less formal means such as social network groups, microblogging, aggregated personal blogs, and social bookmarking. In the LAK11 MOOC, participants used these means to become aware of and deliberate the fundamentals and the future of Learning Analytics. The LAK11 MOOC has become the most comprehensive source for Learning Analytics anywhere. I experienced the LAK11 MOOC after the fact by watching the Elluminate sessions, reading the suggested content, perusing the Moodle forums, and playing with the suggested analysis tools.

Excerpt from the LAK11 MOOC Syllabus Page


What are the Learning Analytics Fundamentals?

In general, those participating in the LAK11 MOOC seemed to conceptualize Learning Analytics as the intersection of:

  1. Student/machine data
  2. Analysis: how the data are connected and why
  3. Curation: personalized and adapted content and relationships
  4. Prediction: targeting remediation and interventions, recommending resources and behaviors

Potential Learning Analytics Infrastructure

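One way to read those four parts is as stages of a pipeline, from raw events to a targeted recommendation. A deliberately tiny sketch, with the event names and the remediation rule invented for illustration:

```python
from collections import defaultdict

# 1. Data: raw learner/machine events (invented)
events = [("lee", "forum_post"), ("lee", "quiz_fail"), ("kim", "quiz_pass")]

# 2. Analysis: connect the events into per-learner profiles
profile = defaultdict(list)
for learner, event in events:
    profile[learner].append(event)

# 3. Curation + 4. Prediction: adapt content and target the intervention
def recommend(learner):
    if "quiz_fail" in profile[learner]:
        return "review module + practice set"  # targeted remediation
    return "enrichment reading"                # personalized next step

recommend("lee")
```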

But my reduced conceptualization (note that this is my version of what was deliberated in the MOOC) is lacking. Other important elements of learning analytics were also discussed in the MOOC, though it is not yet clear to me how they fit together, such as:

  • Ethics,
  • Privacy
  • Data ownership
  • Learning philosophy fit
  • Open vs. closed systems
  • How to deal with the necessity of closed learning systems for minors?
  • What are the advantages and risks of the timing of decisions based on predictions (preemptive, just-in-time, postmortem)?
  • Who consumes Learning Analytics data and recommendations?
  • Who makes the Learning Analytics decisions (i.e., algorithms, etc.) for students, instructors, departments, schools, etc.?
  • How are data, recommendations, and requirements best communicated to those involved?
  • How can technical decisions in the Learning Analytics workflows be easily evaluated by consumers of the data and recommendations?
  • What is learning?
  • What is success?
  • Can meaningful learning be measured and evaluated by Learning Analytics?

Dr. Linda Baer (2011) of the Gates Foundation also presented a useful hierarchy for strategic intelligence in higher education. As she cited it, the framework is from Competing on Analytics (Davenport & Harris, 2007):

  • Optimization: What’s the best that can happen?
  • Predictive Modeling: What will happen next?
  • Forecasting/Extrapolation: What if these trends continue?
  • Statistical Analysis: Why is this happening?
  • Alerts: What actions are needed?
  • Query/Drill Down: Where exactly is the problem?
  • Ad Hoc Reports: How many, how often, where?
  • Standard Reports: What happened?

There is still quite a lot to iron out in Learning Analytics, and I am beginning to wonder whether it is useful to talk about Learning Analytics for its own sake, given that it is so different depending on what is being measured and in what context.

Learning Analytics is Broad and a Bit Overwhelming

One thing is clear to me about Learning Analytics: it is a bit overwhelming. To be conversant in its variety of applications, it appears that one needs familiarity or expertise in the areas described in the image below. What kind of program provides the opportunity to gain skills, know-how, and knowledge in all of these areas? With such a high bar for entry, it hardly seems that the field can take off. There needs to be a way for specialists to come together easily so that interdisciplinary contributions can be made. But are there some skills that should be pervasive? Are data mining and SNA the new language of literacy for social research? If so, it seems that we need, for learning analytics, what blog software did for web publishing, so that learning measurement, evaluation, and prediction may be available to the masses.

Learning Analytics Required Knowledge and Skills?


Who Benefits from Learning Analytics?

George Siemens addressed the beneficiaries of Learning Analytics in his MOOC session of January 11, 2011. He described which technologies or techniques are used in Learning Analytics at each level of the formal-education hierarchy, and who benefits from analytics at each of those levels.

  1. Course level:
    1. Social networks, Conceptual development, language analysis
    2. Learners, faculty
  2. Aggregate:
    1. Predictive modeling, patterns of success/failure
    2. Learners, faculty
  3. Institutional:
    1. Learner profiles, performance of academics, knowledge flow
    2. Admin, funders, marketing
  4. Regional:
    1. Comparisons between systems
    2. Funders, admin
  5. National and International:
    1. Governments

I think it will also be good to address who will be disadvantaged by Learning Analytics. In Ian Ayres’s presentation at Google (2007), he mentioned studies showing that programmed instruction had a greater positive effect on student test scores than other methods. The danger in returning to such methods is that teachers become disengaged over time by such restrictions, and disengaged teachers often produce uninspired students. There is no doubt that for certain types of learning, strict regimens are an advantage to students. But that is not to say that strict regimens should be imposed on all learners for all types of learning.

But what does this have to do with Learning Analytics? The immense amount of evidence behind certain techniques of learning and instruction that will become possible through learning analytics may turn teacher discretion into a thing of the past. The current entry point of modern technologies into the educational system usually comes with the assumption, “let computers do what they do well and people do what they do well.” This may turn, however, into “make people and computers do what computers do well.” As school districts and legislatures look for ways to spend education money efficiently, they often turn to large standardized programs that restrict teachers’ creativity and resourcefulness in the name of raising the minimum bar. Will Learning Analytics have the same effect? Will the holders of the education purse strings interpret easy-to-graph-and-display data as the only data worth entertaining for decision making, thus reducing teachers’ input and their freedom to educate and engender passion for learning by restricting them to prescribed, trackable methods? I am sure some of this will happen, but I surely hope it remains the minority of outcomes in the long run.


Ayres, I. (2007, November 8). YouTube – Authors@Google: Ian Ayres. Retrieved from http://www.youtube.com/watch?v=5Yml4H2sG4U&feature=player_embedded

Baer, L. (2011, February 8). Systemic Adoption of Learning Analytics. Presented at the LAK11 MOOC, Systemic Adoption of Learning Analytics. Retrieved from https://sas.elluminate.com/site/external/jwsdetect/playback.jnlp?psid=2011-02-08.1140.M.340DDA914E66190DED68B759DCF9C3.vcr&sid=2008104

Davenport, T. H., & Harris, J. G. (2007). Competing on Analytics: The New Science of Winning (1st ed.). Harvard Business School Press.

Fritz, J. (2011, January 11). Learning Analytics. Retrieved from https://sas.elluminate.com/site/external/jwsdetect/playback.jnlp?psid=2011-01-11.1101.M.340DDA914E66190DED68B759DCF9C3.vcr&sid=2008104

Siemens, G. (2011, January 10). Learning Analytics: A foundation for informed change in education. Presented at the EDUCAUSE ELI Webinar: Recording. Retrieved from http://educause.adobeconnect.com/p63014716/

Posted by: Michael Atkisson | June 18, 2011

“Crazy Like Us: Globalizing the American [Educational] Psyche”

On the way home from taking my father out to brunch, I heard part of the Anxiety episode of To the Best of Our Knowledge, hosted by Jim Fleming. He was talking to Ethan Watters, who wrote Crazy Like Us: The Globalization of the American Psyche. In the part that I heard, Watters was talking about GlaxoSmithKline’s marketing of the antidepressant Paxil in Japan.

GlaxoSmithKline’s challenge in Japan was that, culturally, the Japanese viewed depression very differently than Americans did. In the U.S., depression was viewed as a common, pathological mental health disorder that could be treated with chemical intervention. In Japan, on the other hand, it was viewed as a rare, extreme disease and was hardly ever diagnosed. The threshold for what constituted depression as a pathology was much higher in Japan, and a wider range of melancholy was acceptable, even respected, in the cultural and religious narrative than in the U.S.

So GlaxoSmithKline spent a ton of money to figure out how to convince Japanese citizens that depression was a common and treatable disease. They came up with the line, “The cold of the soul,” which signified both marketable traits. The campaign took off. Antidepressants are now some of the most prescribed medications for mental health in Japan.

The author felt that this marketing effort brought both benefits and curses to Japan. The pharma companies have argued that suicide rates are down significantly since the introduction of these types of drugs in Japan, but the campaign has also homogenized the way depression is viewed and treated in a dramatic way. However, the statistics I found from the WHO show the suicide rate per 100,000 in Japan going up.

Suicide Rates in Japan by Gender per 100,000 people. http://www.who.int/mental_health/media/japa.pdf


Will pharma marketing forever change what it means to be Japanese by changing the level at which people look to medications to change their states of mind?

To Watters’ point, great care needs to be taken in evaluating ethical and cultural implications of proselytizing scientific innovation.

Crazy Like Us: Globalizing the American [Educational] Psyche:

So what does this mean for innovations in education? The higher education system in the U.S. is arguably the most coveted in the world (not to say that there are not great places to study elsewhere). At the same time, a great upheaval is taking place in the U.S., removing many of the foundations upon which higher education rests. In many cases, debt burdens for graduating undergrads are too high for the types of jobs that are available. Many employers require specialized degrees rather than liberal arts degrees. States are reducing funding under increased revenue pressures from the slow recovery from the “depression.” More people are going to college than ever before, and more people are dropping out than ever before (gaining school debt without the benefit of the degree). Undergraduate degrees are worth less now in many cases, as advanced and professional degrees are required for career advancement. Take a look at the graph I made with Gapminder on “Education Expenditure per Student by Income per Person from 1991-2004: USA, Japan, UK, Netherlands, Germany, France, and Norway.” With the exception of Norway, the curves all go almost directly up: income is staying about the same while expenditure per student is going up significantly. All of these and other factors are converging to force innovation in how higher education is carried out in the U.S. and elsewhere.

Education Expenditure per Student by Income per Person from 1991-2004: USA, Japan, UK, Netherlands, Germany, France, and Norway

Many government and business leaders are looking to online and blended models of higher education as a way to meet growing demand and contain rising costs. With this shift to virtual spaces has come the affordance of increased measurability, at much less cost and in much less time than would be possible otherwise. With the explosion of data online, education is beginning to ride the wave.

Big Ethical and Evaluative Questions for Virtual Education:

  • So what will be the consequences of this great shift to virtual space for undergraduate education?
  • Will the ability to measure and incentivize diversify and enable, or will it ultimately homogenize and limit the type of “degree validated” learning available to the masses? Soon, mountains of data will stand behind certain types of learning and instructional strategies, to the extent that it will seem, on the surface, that there is no question as to what “works” and what does not. But will we just be gathering enormous data sets on things that are easy to track, or on things that make a difference?
  • Will virtual learning make the intended difference in the U.S. and in dissimilar cultures?
  • Will the cultures that are advancing online education and learning analytics stamp out diverse educational innovations from cultures with other educational values?
  • Will the funding structure for educational innovation be skewed toward only those innovations that play well with online components, because that is the easiest environment in which to make it appear that a return on investment has been achieved?
  • Will the virtual learning juggernauts be looking to other cultures to drop their historical practices and genres of education in order to step into the “modern economy,” forcing them to adopt “modern” measures of educational performance and achievement at the expense of what has been culturally valuable in the past?


I am not saying, with the questions I have asked, that there will be negative outcomes across the board as we move toward greater incorporation of virtual learning environments. But given the great effect that technological, scientific, and educational advances have on cultures, especially as the Internet makes the world smaller than it has ever been, it is important to do our best to anticipate the cultural and ethical implications of the virtual learning methods we prescribe. It may not be possible to do no harm, but it is a worthy goal when it comes to educational advancements across the world.

Posted by: Michael Atkisson | June 18, 2011

Visualizing my Twitter Network with NodeXL


As part of the Semantic Web, Linked Data, and Intelligent Curriculum subject matter for week three of the LAK11 Open Course, I made NodeXL visualizations of my Twitter network and of a Twitter search on “Learning Analytics.” NodeXL is a Microsoft-developed, open-source Excel add-in built for entry-level social network analysis (SNA) researchers who are not experienced in data mining. So far, I have found it very easy to get started. You can download data sets directly from Twitter, Facebook, YouTube, and Flickr, as well as import data sets that you create from other sources. I just downloaded the software and connected it to my Twitter account. There was just one hiccup: an authentication error that appeared to result from my asking for too much data. Once I scaled that back a bit, it downloaded my Twitter data with ease. On the third analysis I did within the hour, Twitter placed a rate limit on me and I had to wait about an hour for the request to be processed. Overall it was pretty slick, since I didn’t have to waste time worrying about how to structure the data or which visualizations to build; it is all right there. There is a learning curve to organizing the data and adjusting the visualization properties, but within two hours I was able to read the manual and manipulate data in ways that would let me answer research questions. I’m no pro yet, but I felt more encouraged than I expected, seeing that my only other SNA excursion was with SNAPP (an easier program to use, but with much less flexibility). There are some limitations, though, which I will discuss through the examples.

Visualization 1: SNA of Twitter Accounts by Number Followed

First, I queried all the Twitter accounts that I follow and all the accounts that follow me, and decided to look at the variance in how many accounts a particular account follows. So I scaled the size of the nodes to that data across all the accounts that I follow. “hootsuite” is the big node outlier in my network, following over 520,000 Twitter accounts.

This data set was of the Twitter accounts that I follow. The lines between nodes are directional, representing the connections among the accounts that I follow. The nodes are scaled by how many Twitter accounts each node is following.

This data set was of the Twitter accounts that either I follow or follow me. The lines between nodes are directional, representing the connections among the accounts that I follow. The nodes are scaled by how many Twitter accounts each node is following. The nodes towards the center of the graph have more connections among themselves than the nodes around the periphery.
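The two scalings used here and in the next visualization correspond to out-degree and in-degree in a directed graph. As a minimal pure-Python sketch (with hypothetical account names; NodeXL computes these metrics for you), a follow network can be stored as an edge list where an edge (a, b) means account a follows account b:

```python
# Sketch of the directed graph behind these visualizations.
# Account names below are hypothetical.
from collections import Counter

def degree_counts(edges):
    """Return (out_degree, in_degree) Counters for a directed edge list."""
    out_deg = Counter(a for a, b in edges)  # how many accounts a follows
    in_deg = Counter(b for a, b in edges)   # how many followers b has
    return out_deg, in_deg

edges = [
    ("hootsuite", "matkisson"),
    ("hootsuite", "gsiemens"),
    ("matkisson", "gsiemens"),
    ("gsiemens", "matkisson"),
]

out_deg, in_deg = degree_counts(edges)
print(out_deg["hootsuite"])  # 2 -- scales node size by accounts followed
print(in_deg["gsiemens"])    # 2 -- scales node size by followers
```

Scaling node size by out-degree gives Visualization 1; scaling by in-degree gives Visualization 2.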

Visualization 2: SNA of Twitter Accounts by Number Following

Next I looked at the same data set but scaled the nodes by how many followers each account has. Here more variance appeared at the high end: about 15 accounts in the data set had significant numbers of followers, as opposed to the previous visualization, where only one account was following others on a large scale.

This is a NodeXL SNA visualization of Twitter accounts that either I follow or that follow me. I scaled the nodes by how many followers each account has.

Visualization 3: Scaling Nodes by Number of Tweets

On the third try I wanted to see how the network varied in numbers of tweets in two ways. I wanted to see which accounts were tweeting more than others, but I also wanted to draw a line in the sand: who in the network has tweeted 1,000 or more times? So in this graph I also had the shape of the node change to a square when the account had 1,000 or more tweets. There are some small squares, so the larger ones have a significant number of tweets. The largest, for example, is daveyp, at 74,550 total tweets on the day that I did the analysis.

This is an SNA of my Twitter network with the nodes scaled by number of tweets. The node shape also changes from a circle to a square as the number of tweets exceeds 1,000.

Visualization 4: Tweet-Scaled Nodes Grouped by ????

Next I wanted to test the clustering feature. It did a great job of breaking my data set into visually distinct groups, but I could not find anywhere in the documentation what the groupings were based on. Maybe this is because I am new to data mining and SNA, but generally it is good to make explicit the data being used to form clusters. The algorithm used to make the calculation is cited, but what use is that to the audience of the tool? Auto clustering is cool, but this falls short of the immediate usefulness of the other features in NodeXL.

SNA of My Twitter Network Using NodeXL's Auto Grouping.
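Since NodeXL does not surface which variables drive its auto grouping, here is a transparent stand-in (a sketch only, not NodeXL's actual clustering algorithm): grouping nodes by connected component of the undirected graph, a criterion that is at least easy to explain to an audience:

```python
# Group nodes by connected component -- a simple, explainable alternative
# to an opaque auto-grouping. Edge lists and node names are hypothetical.
def connected_components(edges):
    """Return a list of node sets, one per connected component."""
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    seen, components = set(), []
    for start in neighbors:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:            # depth-first walk of one component
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(neighbors[node] - comp)
        seen |= comp
        components.append(comp)
    return components

# Two clusters with no ties between them.
edges = [("a", "b"), ("b", "c"), ("x", "y")]
print([sorted(c) for c in connected_components(edges)])
# [['a', 'b', 'c'], ['x', 'y']]
```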


Visualization 5: Twitter Search for “Learning Analytics”

Next I queried a Twitter search for “Learning Analytics.” It returned recent tweets and the connections among them, if any. Edges (the lines between nodes) were color-coded by connection type: dark blue lines mean that one node is following the other, and sky blue lines are mentions. I scaled the nodes by the number of tweets each account has made overall and set the label to display the hashtags used in any tweets from the account within this query. The auto grouping is also turned on here, but again I am not sure at this point what properties the nodes are being grouped by. Frequent tags in this query were #Calrg11, #sakai11, #mupple, #edchat, and #kmiou. I was surprised to see only 1 LAK12 tag.

I did a Twitter search for "Learning Analytics" and this network resulted.


Visualization 6: Circle Diagram of Twitter Search

The circle diagram of the same data as in Visualization 5 sheds a different light. Rather than showing the centrality of connections through the x and y coordinates of the nodes, it places all the nodes equally distant from one another so you can see the density and direction of the edges. One interesting thing here is that the number of overall tweets does not seem to be indicative of the number of connections between tweeters on this topic.


Circle SNA of a Twitter Query for "Learning Analytics" Labeled by Present Hash Tags.
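The layout logic behind a circle diagram is simple enough to sketch (node names hypothetical): every node gets an equal angle around a circle, so position encodes nothing and only the edges carry information:

```python
# Evenly space nodes on a circle, as in a circle-layout SNA diagram.
import math

def circle_layout(nodes, radius=1.0):
    """Map each node to (x, y) coordinates evenly spaced on a circle."""
    n = len(nodes)
    return {
        node: (radius * math.cos(2 * math.pi * i / n),
               radius * math.sin(2 * math.pi * i / n))
        for i, node in enumerate(nodes)
    }

pos = circle_layout(["a", "b", "c", "d"])
print(round(pos["a"][0], 6), round(pos["a"][1], 6))  # 1.0 0.0
```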



It seems that this tool is more versatile than SNAPP. I may be mistaken, but I was under the impression that SNAPP only allows visualization of discussion forums. With NodeXL, a wider variety of data is made instantly available. Instructors of online or blended courses could require students to tweet with a class hashtag and save the search for the class in their Twitter account or aggregator. Then a daily snapshot of the discussion could easily be monitored outside of an LMS. Though getting discussion forum data into NodeXL would take more steps than pushing one button in your browser as you do with SNAPP, NodeXL offers more flexible analysis and is also fairly easy to use. The challenge that may arise is if there are a lot of students; then, skill in filtering and sorting data will be needed to make sense of how to display the visualizations.

At a superficial level, I have had a good experience with NodeXL. However, some noticeable challenges prevented me from doing serious analysis of my data in a short amount of time. The variables by which clustering was suggested were not made known. Also, the worksheets that supposedly had the totals and averages were not displaying the statistics that the manual said they would, which made it difficult to get a descriptive feel for the data in terms of quantities rather than visual cues. Nevertheless, with a little more time exploring, I’m sure I’ll figure it out. The bottom line is, for the beginner, NodeXL is great for small and medium-sized SNA data sets.

Posted by: Michael Atkisson | June 17, 2011

A Precious Morning

June 17, 2011

Just had a precious morning with my twins, Peter and Greta, one of the last just with them. They are two-and-a-half, and Kristen is going to have our new baby girl in a week or two. It’s Friday, so I don’t go into work on my 30-hour-per-week schedule, and if I do, it’s later and to work on my dissertation. Peter woke up at 6:30. He came into our room and started playing with his truck on top of Kristen’s nightstand, right by her open bottle of Tums. I heard his truck rattling around. Kristen was already downstairs. So I rolled over, a little groggy from my earache and bad sleep, and snapped the bottle shut. At that point, Peter looked at me for a second and then turned and ran out of the room. I called for him but he kept on going. When I caught up to him he was halfway down the stairs, watching Kristen open the door for her walk. Kristen picked him up at the bottom of the stairs and Peter said he wanted the car. We thought that he meant a toy, but when we turned to go into the living room he screamed no and said, “the car!” He wanted to go with Kristen. I pulled him away as he cried for mom, and Kristen went out the door. We went into the kitchen as I tried to soothe him (because I was worried that he was going to wake Greta, who is always a grump when she gets woken up early). I poured milk into his green bottle (we usually use glass, but they love these colored ones when we use them) and got the milk to him. We walked upstairs with him curled up in my arms as he drank the milk, binky in one hand and bottle in the other, and Eli (his blue night-night elephant) under his arm. I changed his diaper and we curled up in bed. After he finished his milk, I put it on my nightstand and he curled up in the pillows next to me, his little bottom against my stomach and his feet together, wedged between my thighs. We lay together there with soft morning light peeking through the blinds and Peter nuzzling around until 7:00.

Greta woke with her morning cry at 7:00. I got up and cracked her door. She was fumbling around on her bed trying to gather Baby and Draffy (her sleep animals). As I opened the door, she was dropping down out of her bed feet-first, making little moans and sniffles. I said good morning happily, leaned down, put my hands on her shoulders, and told her to go in to mom and dad’s room while I got her milk. I walked with her to Kristen’s side of the bed, lifted her up onto the pillows on her back, and tucked her in. Peter was excited to see her and leaned in on the backside of her pillow and began talking to her. She rolled to her side and closed her eyes. When I came up with the yellow bottle she was still there and Peter was still leaned in towards her. When she saw me coming she looked up and then saw the little children’s paper book that I had stuffed into the space between the base of the ceiling fan and the fan a couple of nights before so it would stop making noise when it spun. She really wanted the book and got upset. She wouldn’t drink her milk and tried to just lie down with it. I took it away until she finally agreed to start drinking it. I got out the Truckery Rhymes book and Greta made a fuss about me reading it. I knelt on the bed in front of them and started to sing through the book. Greta finally decided not to object anymore, and I sat between them on the bed and finished the book. Just as I was getting out some more books, Kristen got home and called from downstairs as she always does when she gets home, “My babies!? My babies!?” When she came in, Peter handed her the Popcorn book and she read that to them while they scooted up to her where she was sitting on the corner of the bed. I stayed where I was, but moved some pillows and put them on my legs so P&G could lean back while they listened.
I started Tumble Me Tumbley and Kristen and I alternated lines while Greta listened and Peter began to play alone with a train engine where he was on the bed on the discarded books we had read. Next we got out the large book of children’s stories and read several stories, including their favorite in that book, “Mouse,” which is about a mouse that misses its family and helps them escape from peril and ends up with a new baby brother he wished for at the end.

Kristen fell asleep (she is less than two weeks away from her due date, after all) and I took the kids down to have some breakfast. Peter came first, and I fixed him some multigrain toast and got him started on that. I went back up and got Greta. I split a plum from Costco, but the skin was too sour so they didn’t eat that. Peter wanted some yogurt, so I split a peach and then a strawberry yogurt between them. I got them to put down their food so we could say a prayer. They both folded their arms and bowed their heads (it went much better than last night’s dinner prayer, where a lot of food and utensils started going around mid-prayer). I sat with them while they ate and munched on some toast. They were both working so hard at dipping their spoons in their little cups and getting it into their mouths. They would be concentrating and then would look up and smile. They were so sweet and innocent. I thought this must be heaven. An overwhelming sense of gratitude came over me as I looked and smiled at each one of them. Just as I was getting teary-eyed, Greta started bossing Peter around for pointing his hand at her spoon. She thought he was going to take it. They were covered with yogurt. When they finished, I cleaned up Peter while Greta ran and jumped on the couch and got some on it. I cleaned her up and wiped her mouth too hard, and she exclaimed, “You hurt me!” I said I was sorry and announced, “Bath time!” They both hurried to follow me out of the kitchen but then got sidetracked when they saw their night animals on the floor and tried to grab them. I explained that we were going to have bath time. They put them down eventually and came upstairs. Peter took his usual little break, lying down on his back on the bottom stair so the rest of us had to step over him. I turned on the water and then Peter and Greta helped me put the bath toys into the tub. Greta got in right away.
Peter suddenly didn’t want to; he just wanted to lean in over the tub wall and play with his red Fairlane Matchbox car under the water. I finally lifted him in, and he stomped one foot down into the water, pinning a toy to the bottom and pushing up, trying to stay out of the water. Eventually he got in. They played for a minute while I sat on the toilet seat. Then I washed them. By the time I finished rinsing off Peter, Greta had gulped a few ounces of bath water. She loves it no matter how many times I tell her it’s gross and that we don’t drink bath water! When I washed Greta she got soap in her eyes and wasn’t too happy about it. I popped the drain plug out and Peter was ready to go. I pulled him out, dried off his head and the rest of his little body, wrapped him up as a waddling cocoon, and sent him into the bedroom where Kristen was up and waiting with the hair dryer. Greta wanted to stay in the bath until every last drop of water went down the drain. I pulled her out and dried her off with her mermaid towel, and she was very particular about making sure that the print was on the outside when I wrapped her up. I cleaned out her nose and then carried her to my spot on the bed where Shaun the Sheep season 1 was streaming on Netflix (their favorite show). I put Greta’s diaper on and tucked her in.

I took the rolled-up diapers down to the trash and then wrote up instructions for David and Connie for the sleepover that they are having over there tonight while Kristen got the kids ready. I went back up and carried P&G, one in each arm, downstairs, and we ran around the house playing monster chase. They went upstairs and then forgot the chase when they saw Kristen’s makeup bag on the counter and got into it. Kristen exclaimed something like, “Why do you have to always get into my stuff!” and “Can’t I just have five seconds!” I got Peter back downstairs to put his socks and shoes on. He cuddled up tight in my arms and giggled. We went back upstairs to get Greta. Kristen wanted me to put her purple fleece on, but Greta just looked at me and ran. P&G ran into Greta’s room. Greta hid under the slide and Peter hid behind the bookcase shelves. We played peek-a-boo for a few minutes until it was almost time for them to head out. Kristen called me into our room to see if I thought the maternity pants she had just put on were terrible. They weren’t; she looked great. She grabbed the keys off the dresser and said self-mockingly, “Why are my keys here? I always put my keys on the hook in the kitchen,” lampooning one of her rants from a couple of weeks ago when she called me at work, not being able to find her keys. I headed downstairs and asked Greta if she wanted her pink flower sandals or her purple shoes. She wanted her shoes. So I went and got some white footie socks that had a red stripe at the top and put them on her. The red clashed with the purple in her pants, shirt, and shoes, but it didn’t matter; it was time for them to go. Greta lined up my shoes so I could put them on.
I still wasn’t dressed and wasn’t going to be going with them to the park or to Costco (Today is the last day of the term for my last class before my dissertation starts officially and I am two weeks behind from over 80 hours of overtime at work, moving my 104-year-old grandma into her first assisted living facility, two weeks of being sick, my brother-in-law’s wedding and everything else that happened this term). Kristen came down ready to go and opened the door. I stood out of the way so I didn’t flash anyone in the hall and Kristen said, “What do you say?” and both P&G yelled out, “Thank you dad!” Kristen and I laughed. Greta bolted out the door. Peter stood still, turned, and then looked at me and said, “Bye dad.” Kristen took his hand, and with her diaper-full purse on her shoulder and the Costco list in her other hand, they stepped out, closing the door behind them.

Posted by: Michael Atkisson | June 16, 2011

Tying Business Goals and Behavioral Outcomes to Training Design

Allen Communication Learning Services, my employer, recorded its designers talking about tips for analyzing client needs during project kickoffs. This was done in conjunction with a new iPad app we are developing for instructional designers. I was asked to talk about tying business goals and behavioral outcomes to training and performance support design. The linked video is me (eeek!) raising my eyebrows too much while I cite an example from a training needs analysis I did recently for an aerospace/defense company. I’m not great on camera, but the substance of what I said is good. Check it out. I would have embedded the video, but the YouTube channel it comes from bars embedding. Enjoy! I would be interested in hearing your feedback.

Posted by: Michael Atkisson | June 10, 2011

SNAPP Visualization of the LAK11 Forum

As part of the Big Data subject matter for week two of the LAK11 Open Course, I made SNAPP visualizations of three forums from the course. There were not a lot of participants overall, and most of the forum interaction centered on the question originator. This exercise has shown me that the SNAPP software is promising, even though the following three examples were not too informative.

Forum Visualization 1:

The forum question was, “Where are you online?” by George Siemens – Sunday, 9 January 2011, 07:18 PM. There were 50 respondents and 56 posts, nearly all of them single responses to the initial post. Because the point of this question was to gather information from individuals about how they can be contacted and referenced online throughout the open course and the conference, I would not expect to see many interactions among respondents.

SNAPP Visualization of the LAK11 Forum, "Where are you on line?"

SNAPP Visualization of the LAK11 Forum, "Where are you on line?"

Forum Visualization 2:


The forum question was, “Critiques of learning analytics? What are your concerns with analytics when applied to learning and knowledge? What types of critiques and concepts should we explore/consider? I’ve started with a few quick thoughts on the topic here: http://www.learninganalytics.net/?p=101,” by George Siemens – Sunday, 16 January 2011, 09:50 AM. There were 22 respondents and 29 posts; again, nearly all the responses were directed to the question initiator. In this case, however, the question was meant to generate conversation. There were about a third more posts than respondents, resulting in some conversation, but still no significant interaction.

SNAPP Visualization of a LAK11 Forum, "Critiques of Learning Analytics".

Forum Visualization 3:

The forum question was, “Playing around with Hunch. If you created a Hunch account (week 1 activities: http://learninganalytics.net/syllabus.html#Week_1 ), share your reactions with others – were the Hunch recommendations accurate? What are the educational uses of a Hunch-like tool for learning?” by George Siemens – Sunday, 9 January 2011, 08:24 PM. http://scope.bccampus.ca/mod/forum/discuss.php?d=16362. There were 65 respondents and 114 posts, a significant increase in overall posts and in multiple posts per respondent over the other two visualizations. Over 10 small conversations started on this forum outside of direct interaction with the question originator. Unfortunately, I am not very sophisticated in my SNA as of yet, but from what I understand so far, it is interesting to get a bird’s-eye view of the concentration of interaction.

SNAPP Visualization of a LAK11 Forum, "Playing Around with Hunch."
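The structure SNAPP draws can be sketched from reply records (all names and data hypothetical): each post names its author and the author it replies to, and the respondent counts and side conversations described above fall out directly:

```python
# Summarize a forum's reply structure from (author, replied_to) records.
# Data below is hypothetical, not from the actual LAK11 forums.
def forum_stats(posts, originator):
    """posts: list of (author, replied_to) pairs. Returns a summary dict."""
    respondents = {author for author, _ in posts if author != originator}
    side_replies = [(a, r) for a, r in posts if r != originator]
    return {
        "posts": len(posts),
        "respondents": len(respondents),
        "side_replies": len(side_replies),  # conversation outside the originator
    }

posts = [
    ("p1", "gsiemens"),  # direct reply to the question originator
    ("p2", "gsiemens"),
    ("p3", "p1"),        # a side conversation between participants
]
print(forum_stats(posts, "gsiemens"))
# {'posts': 3, 'respondents': 3, 'side_replies': 1}
```

A forum like the Hunch one, with far more posts than respondents and many side replies, would show up here as side_replies growing relative to posts.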

Initial impressions of SNAPP:

Though SNAPP is relatively easy to use for the purpose of visualization, it is less clear how it would be used as a tool by instructors, learners, and participants in virtual environments. Yes, an instructor or facilitator would be able to create a visualization, but the tool gives no sense of social practice as to what kind of decisions an instructor or student should make as a result of this information. For SNAPP to have an effect at the class level, it would need to provide guidance on interpretation and reasonable actions to take. Otherwise, the value of this information remains inert with researchers who are removed in space and time from the virtual interaction.

This was an exploratory learning activity as part of the Learning Analytics Open Course, in conjunction with the 2011 Learning and Knowledge Analytics (LAK11) conference (which I presented at), organized by George Siemens of Athabasca University.
