One of the first questions people ask me about 3D (and sometimes VR or HD visualization technologies) concerns instructional effectiveness and research. “What difference do these technologies make in learning?” “How effective are these technologies with young learners?” they ask.
Of course, a key issue is “What kind of research are you talking about?” In school environments, there are many types of formal and informal research. There are survey data, focus group reporting, and case studies. There is also anecdotal evidence, which can provide very useful empirical insight when collected well and over time. There is action research, informal classroom research, and even research on fidelity of implementation—how to implement well. There is industry-conducted research, sponsored external research, and independent research (if the latter exists!). There is also planned research, which is quite insightful because it gives us advance knowledge of the purpose and key research questions of an upcoming study.
Then, of course, there is capital ‘R’ Research—the gold standard—with control groups and rigorous evaluative processes. The most expensive kind, I must add. And let’s not forget my favorite type of research: the meta-analysis, the compilation, or big picture, of what we have learned from many dozens of previous research studies.
But back to my kick-off sentence: Regarding modern visualization technologies, the first question educators typically ask me is “How much does it cost?” The second question, though, invariably targets effectiveness—the research question. Answering it in the spare seconds the listener is willing to offer is a difficult proposition, to say the least. I usually offer to send the requester an insightful chart that succinctly summarizes what we know to date about the instructional effectiveness of 3D (and, to a lesser extent, VR and HD visualization technologies). I'll show you this chart in next week's post.