Monday, July 6, 2015

Transitioning to Competency

Imagine a day when, instead of asking “what is my grade?”, a student asks “what is my learning velocity?” Or “what can I do to change my learning momentum?” Or “would I reach competency faster with a different approach, maybe some mental training, or maybe a different model?” Or “how confident am I that I’ve ‘got’ this?”

John Hattie's book on “Visible Learning” is really just the idea that we have to get student-centered in education!  This is the toughest mental change for teachers who have grown up with, and grown accustomed to, teacher-centric schools.  But here we sit with mostly teacher-centric or classroom-centric methods and tools for support.  From report cards to GPAs, the system has built itself around the fixed-pace classroom.  If you have read "Disrupting Class" then you know how hard it is for a system to disrupt itself.  You go out on a limb once and succeed.  Now you are defined by what you do rather than what you could do, so you just stay the course.  But then you become obsolete when the tipping point actually arrives.

It usually happens as a series of patches that ultimately evolve into the main idea, so you often can't see it coming until it is too late.  In Portland, for example (see ref), many fixed-pace dropouts who could not keep up ultimately graduate and still go to college, or at least earn a high school education.  But variable pace is not the mainstream: that option still exists outside of the current program rather than inside, and the label of "Dropout" still exists.  So finding methods that can live just fine with the current fixed-pace, classroom- or teacher-centric structure and then quickly or slowly morph into supporting a variable-paced, student-centric structure is critical if we expect schools to change.

Current technology allows about 90% of student evaluation to live in a teacher's head and maybe 10% on paper or in technology.  Competency education tends to put more of that evaluation onto paper or technology, as knowledge scores are collected that were once just buried in summary grades based on the pace of a curriculum.  So attacking the measurement overload dilemma from as many directions as possible is important.  For example, Bob Marzano once said that at least 5 scores per measurement topic were required to give teachers a learning trend outlook on a student.  But rarely did any teacher have the time to collect that many per student for every measurement topic that needed a score.  To make this transition really possible, several internal changes also need to occur around measurements of student knowledge or skill.

First, there has to be more focus on the reliability of the measurements. More reliable measurements mean fewer measurements. That reliability could easily come from a simple consensus of three sources (one way to combine them is sketched after the list):

a. Student self-measurement (personal confidence)
b. Trained observer/assessor opinion (more authentic assessment)
c. System confirmation (common test).
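
To make the idea concrete, here is a minimal sketch of how such a consensus might be combined into a single reliability signal. The function name, the 0-to-1 ratings, the weights, and the threshold are all illustrative assumptions rather than a description of any existing system.

```python
# Hypothetical sketch: combine the three independent views of one
# learning goal into a single consensus score. All names, weights,
# and thresholds here are illustrative assumptions.

def consensus(self_rating, observer_rating, system_rating,
              weights=(0.25, 0.40, 0.35), threshold=0.80):
    """Each rating is a 0-to-1 confidence that the goal has been met:
    (a) student self-measurement, (b) trained observer opinion,
    (c) system confirmation such as a common test."""
    ratings = (self_rating, observer_rating, system_rating)
    score = sum(w * r for w, r in zip(weights, ratings))
    # Stop measuring only if the weighted score is high enough and no
    # single source strongly disagrees.
    reached = score >= threshold and min(ratings) >= 0.5
    return score, reached

# Example: confident student, convinced observer, solid common test.
print(consensus(0.9, 0.85, 0.8))   # -> roughly (0.845, True)
```

Weighting the observer a little higher simply reflects the lean toward more authentic assessment; other proportions are just as defensible.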

Next, the system needs to somehow allow only focused measurements. Not every student needs every possible assessment for every skill. Once a reliable consensus of competence on a learning goal has been reached for a student, you can stop measuring! But this means teachers have to tackle the legacy fairness problem: how come I had to take the extra test and he or she didn't?  The easiest answer is that they passed the first one, if we can make that answer work.

Third is the realization that traditional grades in some form are still required to report a student's level of effort applied to the learning goals and the degree to which they are meeting an “expected”, although not required, learning pace. But these grades are not conclusive of specific knowledge and should not be used for antiquated evaluations like GPAs or Valedictorian types of recognition.  And they ultimately can't be derived in the traditional fashion of simply averaging classroom grades over time.

And last but not least is the creation of new measurement statistics like learning acceleration, velocity, and momentum, where acceleration is the speed of change per learning goal, velocity is the current rate of progress in mastering learning goals, and momentum is the combined learning progress over time.

These may seem far-fetched, but a student's learning acceleration could help evaluate the value of specific content per learning style.  Their learning velocity could indicate students in need of more assistance at a given point in time.  And their learning momentum is probably a good predictor of the level of effort a student is applying to the learning process, and more indicative of a traditional grade.
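
Since these statistics are only loosely defined here, the sketch below is just one possible interpretation. It assumes the raw input is the list of dates on which a student's learning goals were marked competent, and the 28-day window is an arbitrary illustrative choice.

```python
# Illustrative sketch only: one reading of learning velocity,
# acceleration, and momentum based on a student's history of
# mastered learning goals. The window size is an assumption.

from datetime import date, timedelta

def learning_stats(mastery_dates, today, window_days=28):
    """mastery_dates: dates on which learning goals were marked competent."""
    window = timedelta(days=window_days)
    recent = [d for d in mastery_dates if today - window < d <= today]
    prior = [d for d in mastery_dates
             if today - 2 * window < d <= today - window]

    weeks = window_days / 7
    velocity = len(recent) / weeks            # goals mastered per week right now
    prior_velocity = len(prior) / weeks       # the same rate, one window earlier
    acceleration = velocity - prior_velocity  # change in that rate
    momentum = sum(1 for d in mastery_dates if d <= today)  # progress over time
    return velocity, acceleration, momentum

# Example: six goals mastered over roughly two months.
history = [date(2015, 5, 4), date(2015, 5, 18), date(2015, 6, 1),
           date(2015, 6, 8), date(2015, 6, 15), date(2015, 6, 29)]
print(learning_stats(history, today=date(2015, 7, 6)))  # -> (0.5, -0.25, 6)
```

Under this reading, a falling velocity (negative acceleration) would be the flag that a student needs more assistance right now.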

Making these disruptive changes in philosophy with students, parents, and communities has proven to be a very difficult task. Districts that take on the challenge of teaching communities first, and transitioning only after this understanding is in place, are showing good progress. Those who think the understanding will just happen on its own are paying some heavy prices for the oversight.


Tuesday, June 16, 2015

Finding Confidence in Competence

It is a pretty simple concept really.  Anyone who tests knows that they really want some level of confidence that the results of the test actually reflect the reality of what was being tested.  You test students, and you probably have some level of confidence that the scores produced actually reflect what the student does or does not know or understand.
However, what about a teacher's own personal judgment of that same student's knowledge or skill?  What if it does not agree with the results of the test?  Do they believe, as a trained observer, that the student really knows the material and yet for some reason struggled with the test?  Or the opposite, where they are reluctant to believe the student knows something even though they passed the test?  Do they have the confidence to believe their own judgment over that of the exam?

And then there is the student's confidence in their own knowledge or ability.  We have all probably said, or at least considered saying, “Look, I know what I know”.  More important, however, is the implication of such a thought or statement: if you indeed do “know what you know”, you probably also have a pretty good handle on what it is that you do not know.

So what does all this have to do with the confidence of competency?  Well, a lot actually.  As schools slowly start to transform themselves into more student-based and confidence-based entities, knowing when you have sufficient confidence to actually mark a student competent, and to begin building on that knowledge or skill with new curriculum, can be critical.  Legacy systems just assumed that a student passing the tests and doing all the other prescribed learning work somehow magically produced confidence in the ability to build on that work in the future.  But that has proven to be a very poor assumption.

One way to look at the problem is by first asking yourself whether a student would say “I think I know this stuff”, or simply… “I know this stuff!”  Would they defend their answer with an explanation if challenged?  Would they properly identify errors in their understanding of future conceptual models because they believe the past foundations are sound, and therefore the problem could only be with the new learning and not the old?

Dr. Robert Marzano is an established expert in the field of standards-based and competency-based education.  In most of his publications he calls for a “preponderance of evidence” when evaluating student performance for competency.  The need for such a preponderance of evidence is to confirm some reasonable level of confidence and reliability in the final declaration of competency.  But how do you know what that is?  When has such a “preponderance” level been established?  Most likely it is slightly different for each thing being measured, for who or what is doing the measuring, and for who is being measured.  Yet no system of tracking is designed to handle that extreme level of diversity.  But there are some good ways to think about it.

The best way is probably to take those three dimensions, the tools, the evaluators, and the learners, and somehow apply a well-ordered estimate of competency or understanding to each learning goal.  Knowledge and skill also have to be looked at separately, with different priorities.  For example, if you can demonstrate that you can hammer a nail and that was the goal, then the assessment is complete via just the observation by a trained observer.  Yet if the goal is the addition of 3-digit numbers, it is impractical to test all of the combinations of two 3-digit numbers being added together.
 
You really need three things: a tool that challenges a variety of options, a trained observer to try to confirm that a process of addition has been captured by the learner when presented with the challenge, and the opinion of the learners themselves that they indeed have the confidence to complete any of the million options if required.
If we can get these three views of the competency combined in proper proportion for the competency being considered, the preponderance of evidence we are looking for should be maximized.  So new competency tracking systems will need the following (a rough sketch follows the list):
  • The ability to organize a variety of different and independent competency measurements.
  • The ability to estimate the level of confidence each one offers into the real level of understanding by the learner.
  • Logical and user-friendly ways of adjusting those estimates based on the structure of the goal itself.
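
A rough sketch of what one such tracking record might look like is below. The goal-type field and its weight table are hypothetical; they simply illustrate how the structure of the goal itself (an observable skill like hammering a nail versus a knowledge goal like 3-digit addition) could adjust the proportions of the three views.

```python
# Rough sketch, not a real product: a tracking record that stores
# independent measurements per learning goal and lets the goal's own
# structure adjust how the three views are proportioned.

from dataclasses import dataclass, field

# Illustrative proportions of (learner, observer, tool) per goal type.
DEFAULT_WEIGHTS = {
    "skill":     (0.2, 0.6, 0.2),   # observable skills lean on the trained observer
    "knowledge": (0.2, 0.3, 0.5),   # knowledge goals lean on the assessment tool
}

@dataclass
class CompetencyGoal:
    name: str
    goal_type: str                                     # "skill" or "knowledge"
    measurements: dict = field(default_factory=dict)   # source -> 0..1 confidence

    def record(self, source, confidence):
        """Keep the most recent confidence estimate from each source."""
        self.measurements[source] = confidence

    def evidence(self):
        """Weighted 'preponderance of evidence' for this goal."""
        w_learner, w_observer, w_tool = DEFAULT_WEIGHTS[self.goal_type]
        return (w_learner  * self.measurements.get("learner", 0.0) +
                w_observer * self.measurements.get("observer", 0.0) +
                w_tool     * self.measurements.get("tool", 0.0))

goal = CompetencyGoal("Add 3-digit numbers", "knowledge")
goal.record("tool", 0.9)       # varied practice set
goal.record("observer", 0.8)   # teacher watched the process
goal.record("learner", 0.85)   # student's own confidence
print(round(goal.evidence(), 2))   # -> 0.86
```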

And they will also need user-friendly ways of illustrating these unique pathways to competency measurement as they apply to the equally unique ways students will have of completing their competency-based learning.  It is quite possible that not only are grades as we have traditionally used them slowly finding their way into the history books, but with them will go the spreadsheet of grades that simply averages out how well a student is meeting expectations rather than describing specific knowledge or skill.  If we don't need the formulas, tracking the same measurements for the same students at the same time, do we still need complex spreadsheets to average the results?

Probably a more practical way would be a simple individual learning path map, adopted from and tied to a larger competency-based map with options and requirements.  Where students are merging their maps would indicate opportunities for collaboration; where they are diversifying their maps, opportunities would exist for re-grouping.  Progress along their map simply becomes their personal “grade book” or portfolio of competency sign-offs, confirmations, badges, evidence, and enabled opportunities for future adoptions of learning pathways into their individual map.
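
Purely as a sketch with made-up goal names, such a map could be modeled as a directed graph of prerequisites: each student's completed set becomes their personal “grade book,” and overlapping next-step frontiers mark the places where maps merge and collaboration makes sense.

```python
# Hypothetical sketch: model the larger competency map as a directed
# graph of prerequisites, track each student's own progress through it,
# and flag where two students' next steps overlap (collaboration).

PREREQS = {                      # goal -> prerequisite goals (made-up names)
    "fractions": set(),
    "decimals": {"fractions"},
    "percentages": {"fractions", "decimals"},
    "ratios": {"fractions"},
}

def frontier(completed):
    """Goals not yet mastered whose prerequisites are all satisfied."""
    return {g for g, pre in PREREQS.items()
            if g not in completed and pre <= completed}

def collaboration_opportunities(student_a, student_b):
    """Goals both students are ready to work on right now."""
    return frontier(student_a) & frontier(student_b)

alice = {"fractions"}                 # each set is a personal 'grade book'
bob = {"fractions", "decimals"}
print(collaboration_opportunities(alice, bob))   # -> {'ratios'}
```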

Their competency report will become an accumulation of both completion and confidence in their knowledge or skill, established along their learning path.  Incorporating this confidence component in more of an ongoing fashion greatly reduces the pressure on tools like final exams to try to estimate this confidence level in a more summative format.  This certainly might be a new way of thinking about measurements of competency in learners, but the advantages are obvious.  Manually this is a lot of extra work, but with the right tools and technology for teachers it won't have to be difficult.  And with it, everyone including the learners themselves can have the confidence that a competency is complete and ready for prime time.

Tacit Knowledge Teaching

We all remember our great teachers.  Yet it may be that great learning is not a result of great teaching, but that great teachers are simply justified in taking some credit for its occurrence.  I have heard so much about how good teachers inspire or engage students in learning, and I have experienced it firsthand.  There is a lot of time and money being spent on trying to define effective teaching attributes, discover how to best measure them, and somehow transfer them to teachers who are missing them.  But the concept of "teaching" is actually a bit elusive.  I remember meeting a great juggler once and asking if he could teach me to juggle.  I was surprised when he said he could not teach me to juggle, but that he could help me with some concepts on how to teach myself.

The Hungarian philosopher-chemist Michael Polanyi introduced a new concept in his 1966 book "The Tacit Dimension": the idea of "informal knowledge" that could not actually be formally taught, which he called tacit knowledge.  Formal knowledge is basically the same for everyone, whereas informal knowledge is unique for each of us.  Most experts today seem to think that this informal knowledge is the larger part of a person's knowledge base, typically built from years of collecting experience, insight, and intuition.  It may be that somehow enabling this informal knowledge development is now becoming the primary focus of the learning process, over the more traditional consumption of formal knowledge.

Certainly teachers once thought it their job to deliver the "formal", more teachable, knowledge to students, who would then go out into the world and use this knowledge base to build their own personal "informal", less teachable, knowledge.  Indeed, our traditional classrooms were designed to deliver the same formal knowledge to the entire class at the same time.  Now we seem to think it is important to redefine teaching as either an art or a science or both, and somehow explicitly measure and quantify it.

So in addition to learning, the art of teaching itself is most likely a "tacit" skill, one learned by collecting experience, insight, and intuition.  There is little evidence that increasing a teacher's base knowledge of teaching makes them more effective, or that removing technology or other tools from them actually makes them less effective.  It is as if we can easily "measure" someone's ability to juggle or keep their balance, yet still know that only practice and failure and experience and practice and some successes, followed by more practice, can ultimately make them better at the task.  Interestingly enough, there are robots now that can balance on a ball very skillfully.


So has technology such as the internet taken over the science of teaching, formal knowledge delivery, or the job of engaging and inspiring?  Certainly the vast amount of knowledge available on the internet is alluring, fascinating, captivating, and engaging to the point of addiction.  It has quickly put formal knowledge (right or wrong) from nearly everyone... nearly everywhere.  Academics call this phenomenon distributed cognition.  Young minds are already wiring quickly to deal with the "critical consumption", or more simply the BS detection, once performed by textbook authors or qualified consolidators.  Potentially the internet has become, or soon will become, the almost sole content provider and the world repository of formal knowledge, but what of informal knowledge?

Lev Vygotsky's constructivism theories predated the ideas of formal and informal knowledge. However, I believe that all of the newly claimed "effective" teachers will somehow, by whatever means required, be able to cause informal knowledge to be constructed internally within each student.  It would be as if every task were teaching students how to juggle, and not simply learning the area of a circle, unless that fact was somehow required for the student to become a better juggler.  It is a concept that I believe to be fundamental to the idea of personalized learning.

I also believe that technology, and a connected world, has become or will soon become more than adept at delivering any formal knowledge, engagement, or inspiration that might be required.  It will also become sufficiently adept at the majority of communication, collaboration, and coordination needs.  Just like many other aspects of our life, many of the things we once had a job doing are now being done by technology.  Certainly education is no different and no more immune to the evolution.

Within education, informal knowledge development is rising to take over from the age of formal knowledge delivery.  The new skills required of teachers, as informal knowledge development comes into focus, will be directing and delivering the "experience" while measuring the levels of "insight" and "intuition" that result from that experience and which indirectly imply that the "informal knowledge" has indeed developed.  These are skills that require the teacher's own experience, insight, and intuition, and ones that technology continues to struggle with.  The delivered experience will need to somehow cause learning to happen through failures and successes, and the resulting levels of informal knowledge development will determine a student's readiness to continue the process on their own.  These are things great teachers have always done, and a bit of tacit knowledge skill that many teachers are still going to need to develop.  And the changes are already happening in schools across the country.

  • The focus is shifting from classrooms to individual students.
  • The classrooms are “flipping” and grade level boundaries are fading.
  • Competency is replacing the time-based expectations for growth.
  • Assessments are going authentic and becoming more adaptive and interactive.
  • Individual learning maps are replacing traditional one-size-fits-all content.

There is still a long road ahead, but as new tools and new support for change become available, more schools and teachers will start to make the journey.  And as the economic wheels begin turning, these newcomers will drive even better tools and better support for the changes, and everybody wins.

But the big winners will be the students and the world they will make for all of us.