Unfold Learning

exploring the best innovations in learning and teaching


From Instruction to Construction: What Does “Platonic” Teaching Teach?

Part 2

Pink Floyd’s 1979 album The Wall presented a harrowing vision of the “educational industrial complex.”

In my last article, I described two kinds of educational approach: the “Platonic,” which prizes “pure” abstract or conceptual information, and the “Aristotelian,” which focuses on embodiment and application of knowledge in learning-by-making and real-world contexts. In other words, it’s the difference between instruction and construction as teaching strategies. As I discussed, these approaches represent a dichotomy in today’s educational practice. However, they’re not evenly distributed. Despite copious evidence supporting the more “Aristotelian” approach, the “Platonic” one prevails in schools throughout much of the world. Instruction has eclipsed construction. And this poses a profound challenge for our collective future that most educators haven’t even considered….

Continue reading →


From Instruction to Construction: Plato & Aristotle

Part 1

In a previous edition of my career, when I was a professor of literature and literary theory, I used to tell my students that much of literary history could effectively be seen as an argument between Plato and Aristotle….

Plato believed in an absolute “reality” that exists outside of human perspective and experience — a perfect realm of universal “forms” that shape and give meaning to everything. He believed that the physical universe around us is an inferior, decaying shadow of these forms — nothing but a poor copy. Since only a few “elect” people can see beyond the distracting surface of the material universe, most people don’t really understand what’s important. And what’s important is not the concrete, physical world, but only the “abstract” one that hides beyond it in the perfect, ethereal plane. Human creation (whether by art, skill, or application) is merely another distraction associated with the inferiority of this material world: it’s okay for the “lesser” people, but not appropriate for those “elites” who know what’s what.

Continue reading →


A new beginning: hello, again…

It’s been far too long since I posted to the Unfold Learning blog, but my absence has been profoundly productive. For about the last 18 months, I had the privilege of leading an exceptional team of learning designers as we developed a learning approach centered on learning-by-making. The conversations were challenging and rich, and we made some spectacular learning materials. Along the way, we recognized the extreme importance of supporting our work with research and making sure it’s academically sound yet also easily accessible and easy to implement.

One challenge of the cubic learning model that’s been the subject of so many posts here: while it’s very helpful for diagnosing learning situations and for understanding the interrelationships between the three central elements of learning — content, context, and community — it isn’t immediately clear how to apply it when creating projects or how to integrate it into larger curricular or teaching plans. This new paradigm is designed to remedy that, providing teachers and learners with a simple “fractal” seed that can scale to any dimension for creating meaningful, engaging project-based and constructionist learning.

Over the next few weeks, I’ll be exploring the details and background of this paradigm in a series of posts, but here’s a quick taste:

Continue reading


Considering technology use with SAMR

Note: It seems this post is required reading for a group of students using Google Classroom. I’d love to know what the class is saying about SAMR. Feel free to leave a comment.

Developed by Dr. Ruben Puentedura based on his observations and study of a statewide laptop initiative in the U.S. state of Maine, the SAMR model has become a popular, though frequently misunderstood, benchmark for considering the incorporation of technologies in education. The chief misunderstanding stems from the mistaken belief that the levels of this taxonomy can be distinguished entirely by differences in the learning content or its delivery, or that using a particular technology or tool automatically places one at a particular level. Both of these elements may be part of the story, but as we have seen with the Cubic Learning model, considering only the content dimension (whether in content delivery or in the use of a particular tool) tells just part of the story — and oversimplifying SAMR to focus primarily on content delivery obscures the model’s broad applicability as a guide for educators. Continue reading


Making Conferences More Dimensional: Bett 2018

Bett Arena

While we’ve seen considerable experimentation and exploration scattered across the educational landscape, one of the holdout areas often untouched by the transformations of recent technologies is the standard conference presentation. Think about it: because of their logistics and their average venue — an auditorium with a stage facing row upon row of chairs or a rigid constellation of tables packed together to maximize attendance — most conference sessions focus primarily on a leader delivering information for an audience’s consumption. If that ‘delivery & consumption’ model is something we’re working to transform in classrooms, couldn’t we also work to transform it at conferences?

This is why it was especially exciting to team with the Bett content team this year to explore ways to do just that. You can read more about our rationale for the experiment and some of the outcomes we were hoping to achieve here. Did we succeed in helping people move from being passive consumers to active partners? We’re still collating data and following up with participants… I’ll post the results here once they’re available. But today, I wanted to consider some of the complexities of the challenge… Continue reading


How Hard Is It?

Rigid Post Image

The definition of rigor, according to the dictionary, is “strictness, severity, or harshness, as in dealing with people.” Over the past decade, this word has crept into eduspeak more and more, often to argue that students are earning high grades for work that is too easy and does not lead to learning. Do we really mean to make classrooms inflexible places? Probably not.

Search for the words “rigor” and “education” together and you will mostly find a wide range of new “definitions” for this word. Many of these are less definitions than circumlocutions: rambling explanations about challenging students that never get at how to challenge them or how to ascertain this supposed rigor. There are no tools or scales for measuring rigor and no advice on how to identify it.

The buzz about rigor is not new. A 2010 post at the Hechinger Report states that students are leaving high school unprepared for college-level work or the workplace. The post mentions designing a curriculum that prepares students for college, but gives no specifics. It also cites an expert who tells us there is a difference between rigorous teaching, rigorous questioning, and rigorous assessment, but never spells out what those differences are. I have some rigorous questions of my own to help me think through “rigor,” a word that seems to be more a political construct with suspect motives than a pedagogical one.

What is the point of rigor in schools? If the goal is to ensure students are working hard, is the hard work a means to an end or an end in itself?

Challenge is necessary for learning. Doing what is already known is easy, but does not lead to further acquisition of skills or knowledge. Learning happens at the edge of what has already been learned and what can be learned next, at the edge of what is easy and what starts to pose a challenge. Vygotsky described this edge when he wrote about the “zone of proximal development” (ZPD). If rigor ensures students are challenged so they are constantly learning, there must be a way to figure out whether students are challenged, and how much. Too much challenge (a leap too far into the ZPD) leads to frustration. Too little leads to boredom and a lack of learning. However, “hard” and “easy” vary with each individual; teachers and students judge the difficulty of any situation very differently, and for different reasons.

The Glossary of Education Reform defines “rigor” as an element of “…lessons that encourage students to question their assumptions and think deeply, rather than […] lessons that merely demand memorization and information recall.” This is not very different from the goal of Deeper Learning, which is defined by the Alliance for Excellent Education and the Hewlett Foundation as academic activities that lead to students acquiring the following six competencies.

  1. Mastery of core content
  2. Critical thinking
  3. Problem solving
  4. Collaboration
  5. Communication
  6. Self-directed learning

If learning activities are designed to challenge students while promoting the development of these competencies, rigor does not need to be quantified and assessed as a separate element. So why is it?

We ask students to learn things that are definitely hard, but have little or no connection to what will be required of them after graduation, and we do it in the name of rigor. As the Glossary of Education Reform notes, “While some educators may equate rigor with difficulty, many educators would argue that academically rigorous learning experiences should be sufficiently and appropriately challenging for individual students or groups of students, not simply difficult” (emphasis added). Until there is a consensus on what rigor means and how it can be observed and measured for individual students, we should ask ourselves questions that help us ensure students are challenged for a clear purpose and that learning outcomes are genuinely productive. Continue reading


1 Comment

‘Cubic’ ELM Assessments 3: A Problem-Based Learning Course…

Scattered cube

This post is the third in a series using “engagement and learning multiplier” (ELM) assessments to examine some common teaching and learning methods. If you’d like to (re)familiarize yourself with how these assessments work, you can refer to this post. If you’d like to compare this current post’s ELM assessment with others I’ve done, you can find the assessment of a laboratory course here, and you can find the rest of the “cubic” assessments in the sidebar (which may be at the bottom of this screen if you’re reading this on a mobile device).

For each of these assessments, I’ll set the learning scenario and then present analysis about why that approach has a given “cubic” shape and why it receives a particular ELM score. These posts are designed to provide useful examples and guidance as you evaluate your own learning approaches and as you make your own teaching and technology choices.

A problem-based learning course

Description

This middle-school social studies course is organized around a single major concept: building a human colony on Mars. Learners work through three separate phases of this concept over the course of the year: 1) the initial planning and colonization — deciding what would be needed to establish a colony and who should be invited as initial settlers; 2) the running of the colony — deciding what system of laws and government should characterize the settled colony; and 3) the expansion of the colony — deciding how to attract new settlers from Earth to come and expand the colony’s capabilities. While the teacher has determined this overall structure, the details of what learners plan, what they make to demonstrate their plan (to both peers and parents), and how they present their work are left entirely up to the learners. The teacher begins the course by introducing learners to Scrum, the collaboration method originally developed to help software developers work more productively together (see this helpful post by Bea Leiderman about Scrum in school). All work in the course is developed by learners using this method, with the teacher serving as the “Product Owner.”

Learners form their own teams of three to five members whom they choose based on a “skills résumé” (accompanied by examples where appropriate) that each learner prepares and presents to the class: descriptions of drawing or artistic ability, experience making movies, writing or math skills, knowledge of particular software or apps, etc. These teams will stay together throughout the year — though learners are also encouraged to “cross-pollinate” by seeking help from other teams if they need something no one on their own team can provide. Cross-pollination works by means of barter: teams have to negotiate, with one team offering services the other team wants in exchange for the services the first one needs. Any team that finishes a project before the other teams is designated a “consulting group”: its members are expected to split up and serve the other groups by helping with whatever they need. If a team experiences any interpersonal difficulties, its members are responsible for working those difficulties out themselves (though the teacher offers guidance and resources if the team members request help). The role of Scrum Master rotates through all of the team’s members during the first unit, with every member serving as Scrum Master at least once. After that, team members are allowed to choose their own roles based on their abilities and their team’s collective sense of how they can serve best.

Every class day begins with a “Stand-Up,” during which learners show the products of their work to one another, deal with delays or impediments, and decide what their work for the day will involve. Following this initial meeting, the teacher might briefly present relevant materials, involve students in a mini-project, or ask one of the teams to present some of their recent discoveries or work. She also provides materials on the class blog with the understanding that learners will use these as a starting point for their own explorations and creations. As learners develop their projects, they conduct research, develop media, and share results, all facilitated by the tablet devices the school provides for each learner. Teams present the results of the first two project phases in December and March during evening assemblies open to the public. Each team posts its assembled materials on the course blog for “public review” one week prior to the assembly, and is expected to use feedback gathered from this review period and from the public forum to revise their work. Each group gives a 10-minute presentation followed by 10 minutes of public Q&A. At the end of each forum, the assembled audience votes on which group presented the most compelling plan, which produced the best presentation, and which demonstrated the best responses to the audience’s questions. Many former class members participate in these public reviews “just for fun,” though the top team from the previous year serves as a formal “review committee” — service they perform both for the honor of the position and for the pizza party they get during final reviews. The “review committee” provides specific observations about what each group has done well and what each group needs to improve. In late May, learners present the results of the final project phase to the entire school, and the assembled school votes on which settlement they’d most like to join — and why. 
These three public forums (and the materials prepared for them) take the place of course exams.

The top team (and next year’s “review committee”) is chosen by combining the results of the three public forums and an end-of-year, in-class vote determining which overall project was the best researched, best supported, and best presented — a process which the previous year’s “review committee” referees.

Continue reading


2 Comments

‘Cubic’ ELM Assessments 2: A Laboratory Course…

Scattered cube

This post is the second in a series using “engagement and learning multiplier” (ELM) assessments to examine some common teaching and learning methods. If you’d like to (re)familiarize yourself with how these assessments work, you can refer to this previous post. If you’d like to compare this current post’s ELM assessment with others I’ve done, you can find the assessment of lecture here, and you can find the rest of the “cubic” assessments in the sidebar (which may be at the bottom of this screen if you’re reading this on a mobile device).

For each of these assessments, I’ll set the learning scenario and then present analysis about why that approach has a given “cubic” shape and why it receives a particular ELM score. These posts are designed to provide useful examples and guidance as you evaluate your own learning approaches and as you make your own teaching and technology choices.

A laboratory course

Description

This secondary-level chemistry course is designed to introduce learners not only to the main concepts of general chemistry, but also to much of the basic equipment and lab protocols used in this field. As part of the standard science curriculum, it is a general education course all learners are expected to complete prior to graduation. Because of its broad and introductory nature, the teacher has tried to make course concepts accessible and follows a carefully organized curriculum in which more complex concepts and skills build on the simpler ones that precede them. Each two-week unit begins with an introduction featuring a presentation by the teacher. He uses a variety of media to illustrate concepts, including videos and images made by learners in previous years, which he attributes to their authors. This begins the first phase, focusing on conceptualization. The teacher’s introduction is followed up with homework assignments and in-class scenarios designed to give learners practice in understanding and internalizing the unit’s central concepts. The teacher chooses five learners for each unit (eventually rotating through the whole class twice) who present their homework as a basis for class discussions. Their fellow learners are asked to critique and correct the work presented with the requirement that they double-check the science and also describe “something great and something that needs improvement” for each peer presenter. All learners are encouraged to find resources on the web or in the library that they find helpful, posting links to them in the course’s learning portal so others can access them for help understanding course concepts.

At the next phase, focusing on experimentation, the teacher presents learners with a hypothesis that will anchor their laboratory explorations. He follows this with a brief introduction to lab and safety protocols, introducing learners to the equipment and procedures they’ll use to conduct the unit’s central experiment.
Each learner is also assigned two partners for the experiment, with partners changing for every unit. The teacher has designed these rotating partnerships to help learners make more connections with their classmates as well as to distribute the “advantages” offered by high-performing learners. The three-person lab teams record experimental data not only by writing results, but also by making photographs and videos. These will be used to illustrate lab reports, which are jointly authored and submitted digitally. Each partner is assigned specific parts of the lab report and is expected to identify the sections she has authored, but each must also “sign off” on the other partners’ work. During laboratory experiments, the teacher circulates to answer questions, correct improper uses of the equipment or errors in the experimental protocols, and ensure that all teams are on task and distributing work evenly among learners. The teacher grades lab reports for scientific and experimental accuracy as well as for writing and media quality. For evaluation of the non-empirical aspects of the lab report, the teacher also follows the “something great / something that needs improvement” model, offering all comments via audio files delivered to each team via the learning portal. Learners receive both an individual and a team score. The teacher asks those who produce exemplary reports for permission to use them to illustrate concepts for learners in future classes.
Continue reading


3 Comments

‘Cubic’ ELM Assessments 1: Traditional Lecture…

Scattered cube

In my last post, I described the ways the “cubic” model could be used to evaluate learning approaches and described a method for calculating “engagement and learning multiplier” (ELM) scores. If you aren’t familiar with these concepts, you might want to review that post before continuing…

Over the next several posts, I’ll perform cubic ELM assessments of several common learning approaches. For each, I’ll set the learning scenario and then present analysis about why that approach has a given “cubic” shape and why it receives a particular ELM score. Hopefully, these posts will provide useful examples and guidance as you evaluate your own learning approaches and as you make your own teaching and technology choices.



A traditional lecture course

Description

In this course, the teacher presents information most days through a combination of lectures and presentations — some conducted using an “interactive” white board. The teacher also incorporates materials from the course textbook in her lectures, highlighting the points learners will have to know for exams. Learners are expected to take careful notes, and exams and other assessments come largely from material the teacher has covered in lecture, though some also comes from exercises and readings in the course text. During class, learners are encouraged to ask questions if they don’t understand a concept, and the teacher organizes weekly discussions where she probes learners’ understanding of course topics. In addition to homework exercises, learners are expected to complete a major research project. This project is designed to introduce learners to important books and journals in the discipline, and they must use materials from the school’s library, including the library’s online, full-text database, to complete it successfully. Learners choose from a list of topics furnished by the teacher, who has ensured that library holdings are adequate to support each topic. Assessment of these projects (as with exams and homework) is completed by the teacher, who writes comments designed to correct errors, to help learners acquire disciplinary literacies and conform to disciplinary norms, and to praise particularly insightful or advanced responses. The teacher periodically presents exceptional or noteworthy homework exercises, exam responses, and final projects to the class, being careful to protect the authors’ anonymity, in order to encourage dedicated, thoughtful work. She makes herself readily available outside of class to discuss course concepts and encourages learners to come by her office or contact her by email if they have questions or difficulties. Continue reading


3 Comments

Evaluating Learning Approaches in ‘Cubic’ Space: Engagement & Learning Multipliers…

Measuring

In my preceding posts, I’ve described the dimensional levels of each facet of our “cubic” learning model: content, community, and context. This post (and the next several) will incorporate and build on content from those posts, so if you haven’t yet seen them, you might want to review them before continuing here.

In “cubic” learning, each element’s “dimensionality” increases as the learner becomes more engaged and plays a more central role. The increasing agency, skills, and responsibility learners must demonstrate at each progressive level also means that more and more, they need support rather than direction, individual resources rather than a one-size-fits-all recipe, and companions and partners rather than controllers. More dimensionality means more learner-centered — and learner-driven — learning.

As we’ve seen previously, the dimensional levels of the “cubic” model look like this, with the higher levels increasing the volume of the cube they generate:

Cubic dimensions & values

While “volume” in this model is something of a metaphor, it’s one backed up by research. For example, as both Anderson and Krathwohl’s revision of Bloom’s taxonomy and Webb’s “depth of knowledge” (DOK) argue, creating content not only requires more ability from learners than recalling, it also increases their learning potential: the deeper level of engagement makes learning more likely to take hold. In other words, moving from recall to creation increases the potential “volume” for learning — and therefore makes a bigger cube in this model.
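The “volume” metaphor lends itself to a quick arithmetic sketch. The snippet below treats the three facet levels as edge lengths and takes their product as the multiplier. The facet names and the 1–4 scale here are illustrative assumptions for this sketch, not the model’s actual scoring rubric; the real levels and values are defined in the earlier “cubic” posts.

```python
# Hedged sketch: a 1-4 ordinal scale per facet is assumed here purely
# for illustration; the actual levels come from the "cubic" model posts.

def elm_score(content: int, community: int, context: int) -> int:
    """Treat the three facet levels as edge lengths of a cube;
    their product is the 'volume', i.e. the engagement and
    learning multiplier."""
    for level in (content, community, context):
        if not 1 <= level <= 4:
            raise ValueError("each facet level must be between 1 and 4")
    return content * community * context

# A course stuck at recall on every facet yields the minimum volume,
# while learner-driven creation on all three facets maximizes it.
print(elm_score(1, 1, 1))  # prints 1
print(elm_score(4, 4, 4))  # prints 64
```

On this reading, raising any single facet multiplies the whole score, which is why improving one dimension alone (say, better content delivery) moves the needle far less than deepening content, community, and context together.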

Continue reading