TinCan API Sparks: Matters of Authority
One thing a Learning Management System does very well is convey an implicit, if perhaps unearned, sense of authority to the learning activities it contains. To those of us who tend to do our learning informally, it may seem a bit quaint, but in the TinCan era, as the corporate LMS becomes a side-show to the main act of “all that other content out there,” issues surrounding authoritativeness and quality will have to be addressed.
In the healthcare domain I work in, worries about the quality of information, compliance, risk, consistency, and currency are very real. They don’t prevent anyone from searching the web for whatever they need, but they may be barriers to the spread of good ideas. One way through might be some type of chain or cascade of authoritative approval: whose judgement do you trust, whose judgement do they trust, and what sources are they using? And is there any data anywhere that supports it?
As the LMS’s walled garden of content becomes less important, we’ll need to consider using all the tools at our disposal that could represent the value of content or activities. Besides peer review, numbers of citations, and user evaluations and reviews, there are all the sharing and content-collection tools used on the web to curate and “float” things to the top: “Like” and “+1” buttons, favorites, sharing, stacking, ordering, ranking, reviewing, annotating, etc. We may even need to find some way to assert that an official document has not been tampered with.
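One common way to make tampering detectable is a cryptographic digest: the authority publishes a hash of the official document, and anyone who retrieves a copy can recompute and compare. This is a minimal sketch, with made-up content and function names, not any part of the TinCan spec itself:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 hex digest an authority could publish with a document."""
    return hashlib.sha256(content).hexdigest()

def is_untampered(content: bytes, published_digest: str) -> bool:
    """Check a retrieved copy against the digest the authority published."""
    return fingerprint(content) == published_digest

# Hypothetical official document and its published digest
official = b"Hand hygiene policy, rev. 4"
digest = fingerprint(official)

print(is_untampered(official, digest))                         # True
print(is_untampered(b"Hand hygiene policy, rev. 5", digest))   # False
```

A digest alone only proves integrity; proving *who* published the document would take a digital signature on top, but the checking pattern is the same.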
I’m wondering if some of these functions should be incorporated into the TinCan spec at some point, either as extensions or as verbs.
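To make that concrete, here is what a curation-style statement might look like if “liking” were expressed as a TinCan verb. The actor/verb/object shape follows the spec, but the verb IRI, activity, and names below are all hypothetical, not registered vocabulary:

```python
import json

# Sketch of a hypothetical "liked" statement in the TinCan actor/verb/object shape.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.org",
    },
    "verb": {
        "id": "http://example.com/verbs/liked",   # hypothetical, unregistered IRI
        "display": {"en-US": "liked"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.org/guides/hand-hygiene",
        "definition": {"name": {"en-US": "Hand Hygiene Quick Guide"}},
    },
}

print(json.dumps(statement, indent=2))
```

Whether this belongs in the core verb vocabulary or in an extension is exactly the open question; either way, once these statements live in an LRS, they become queryable evidence of what a community values.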
On a larger scale, search engines have had to think about quality for quite a while now, and something akin to Google’s PageRank algorithm may also have some application here.
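The core idea transfers directly: treat “A cites, shares, or approves B” as a link, and let authority flow along those links. This is a toy power-iteration sketch over a made-up endorsement graph, not a production ranking system:

```python
# Toy PageRank over an "endorsement graph": an edge A -> B means A endorses B.
# The graph and node names are invented for illustration.

def pagerank(links, damping=0.85, iterations=50):
    """Iteratively distribute each node's rank to the nodes it endorses."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, outgoing in links.items():
            if not outgoing:  # dangling node: spread its rank evenly
                for other in nodes:
                    new_rank[other] += damping * rank[node] / n
            else:
                share = damping * rank[node] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

endorsements = {
    "guideline_a": ["guideline_b"],
    "guideline_b": ["guideline_c"],
    "guideline_c": ["guideline_a", "guideline_b"],
}
ranks = pagerank(endorsements)
print(ranks)  # guideline_b, endorsed by both others, ranks highest
```

The appeal for learning content is the same as for the web: instead of asking one gatekeeper whether a document is authoritative, you ask the whole graph of who trusts whom.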