Education ∪ Math ∪ Technology


For the last two years, the project I am currently working with has been asking teachers in many different schools to use common initial and final assessment tasks. The tasks themselves have been drawn from the library of MARS tasks available through the Math Shell project, as well as other very similar tasks curated by the Silicon Valley Math Initiative.

Here is a sample question from a MARS task with an actual student response. The shaded-in circles below represent the scoring decisions made by the teacher who scored this task.

This summer I have been tasked with rethinking how we use our common beginning of unit formative assessments in our project. The purposes of our common assessments are to:

• provide teachers with information they can use to help plan,
• provide students with rich mathematics tasks to introduce them to the mathematics for that unit,
• provide our program staff with information on the aggregate needs of students in the project.

We recently had the senior advisors to our project give us some feedback, and much of the feedback around our assessment model fell right in line with feedback we got from teachers throughout the year: the information the teachers were getting wasn’t very useful, and the tasks were often too hard for students, particularly at the beginning of the unit.

The first thing we are considering is providing more options for initial tasks for teachers to use, rather than specifying a particular assessment task for each unit (although for the early units, this may be less necessary). This, along with some guidance as to the emphases for each task and unit, may help teachers choose tasks which provide more access to more of their students.

The next thing we are exploring is using a completely different scoring system. In the past, teachers went through the assessment for each student, and according to a rubric, assigned a point value (usually 0, 1, or 2) to each scoring decision, and then totaled these for each student to produce a score on the assessment. The main problem with this scoring system is that it tends to focus teachers on what students got right or wrong, and not on what they did to attempt to solve the problem. Both foci have some use when deciding what to do next with students, but the first operates from a deficit model (what did they do wrong) and the second operates from a building-on-strengths model (what do they know how to do).
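As a concrete sketch of the old scoring model (the students, point values, and number of scoring decisions here are made up for illustration):

```python
# Hypothetical rubric data: each student's point value (0, 1, or 2)
# for each scoring decision on the assessment.
rubric_points = {
    "student_a": [2, 1, 0, 2, 2],
    "student_b": [0, 1, 1, 0, 2],
}

# The old model reduces each student to a single total score,
# which tells us how much they got right, but nothing about
# what they actually did to attempt the problems.
scores = {name: sum(points) for name, points in rubric_points.items()}
print(scores)  # {'student_a': 7, 'student_b': 4}
```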

I took a look at a sample of 30 students’ work on this task, and decided that I could roughly group each student’s solution for each question under the categories of “algebraic”, “division”, “multiplication”, “addition”, and “other” strategy. I then took two sample classrooms of students and analyzed each student’s work on each question, categorizing according to the above set of strategies. It was pretty clear to me that in one classroom the students were attempting to use addition more often than in the other, and were struggling to successfully use arithmetic to solve the problems, whereas in the other class, most students had very few issues with arithmetic. I then recorded this information in a spreadsheet, along with the student answers, and generated some summaries of the distribution of strategies attempted as shown below.
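The kind of summary I generated in the spreadsheet can be sketched in a few lines of code (the strategy labels come from the categories above; the sample data for the two classes is hypothetical):

```python
from collections import Counter

# Hypothetical data: the strategy each student attempted on one question,
# one list per classroom, using the categories described above.
class_a = ["addition", "addition", "addition", "multiplication", "other"]
class_b = ["multiplication", "multiplication", "algebraic", "division", "addition"]

def strategy_distribution(strategies):
    """Summarize how often each strategy was attempted, as whole percentages."""
    counts = Counter(strategies)
    total = len(strategies)
    return {s: round(100 * n / total) for s, n in counts.items()}

print(strategy_distribution(class_a))  # addition dominates in this class
print(strategy_distribution(class_b))  # more varied, more sophisticated strategies
```

Comparing the two distributions side by side surfaces exactly the pattern described above: one class leaning almost entirely on addition, the other spread across multiplication and algebraic approaches.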

One assumption I made when even thinking about categorizing student strategies instead of scoring them for accuracy is that students will likely use the strategy to solve a problem which seems most appropriate to them, and that by extension, if they do not use a more efficient or more accurate strategy, it is because they don’t really understand why it works. In both of these classrooms, students tended to use addition to solve the first problem, but in one classroom virtually no students ever used anything beyond addition to solve any of the problems, and in the other classroom, students used more sophisticated multiplication strategies, and a few students even used algebraic approaches.

I tested this approach with two of my colleagues, who are also mathematics instructional specialists, and after categorizing the student responses, they both were able to come up with ideas on how they might approach the upcoming unit based on the student responses, and did not find the amount of time to categorize the responses to be much different than it would have been if they were scoring the responses.

I’d love some feedback on this process before we try to implement it in the 32 schools in our project next year. Has anyone focused on categorizing or summarizing types of student responses on an assessment task in this way before? Does this process seem like it would be useful for you as a teacher? Do you have any questions about this approach?

I’ve been working hard to read research carefully, both research with which I agree, and research with which I disagree. I still struggle with my tendency to overlook the flaws in research with which I agree, and to find fatal flaws in research with which I disagree.

This does not mean that I should ignore research; only that I continue to be careful to read all research with a critical eye, and discuss the findings with other people. My suspicion is that norming about what research means with people who have a wide variety of viewpoints might reduce my tendency toward personal bias.

According to Grant Wiggins, there are seven essential elements of effective feedback.

Feedback should be:

• goal-referenced,
• tangible and transparent,
• actionable,
• user-friendly,
• timely,
• ongoing,
• consistent.

I recently looked at the new Common Core Math section of the Khan Academy. I’d like to analyze it within this framework of feedback.

Here is a sample exercise from the Khan Academy, related to the Common Core grade 3 standard of 3.G.A.1. Not familiar with this standard? Just follow the previous link; the language is identical to what is written on the Khan Academy website.

Goal-referenced:

There are two places where goals are embedded within this question. The first is an implicit goal, described by the Common Core standard linked to this problem. The second goal is embedded within the problem itself: get five correct in a row.

An obvious issue here: given that the Khan Academy is mostly intended to be used independently by students (apparently the definition of personalized learning the Khan Academy is using), the language of the standard is probably not student-friendly, and in particular it does not lend itself well to being a goal for students.

Getting five questions correct in a row is clearly a goal that a 3rd grade student can tell whether or not they have achieved, although the question remains whether five correct answers in a row mean students understand the standard being assessed by these questions.

Tangible and transparent:

There are two main ways students get feedback from the Khan Academy exercises: they can check their answer, and they can ask for a hint. If students click on check answer, and their answer is wrong, the button they just clicked on shakes back and forth. Presumably this means the answer is not correct, but both of these feedback opportunities are opaque.

The box shaking back and forth does not tell the learner WHAT they did incorrectly, nor is a shaking button even explicitly a signal that something is wrong.

Next, feedback that is tangible is something a student understands. Look at the series of four hints students are given when working on this problem.

Now imagine you are a 3rd grade student, and you do not know what a right angle is, what is meant by 4 equal sides, or what a rhombus, rectangle, or square is. How does this feedback help you understand these key concepts or the connection between these concepts and the diagrams given? Finally, telling students that the correct answer is “none of these” is not helpful feedback because it does not respond to whatever thinking a student may have done about this problem; it just states something that should be accepted as fact.

Actionable:

Giving students the correct answer to a problem or telling them that they are wrong does not give them actionable feedback because it doesn’t give them any actions they can take in order to improve. A student who receives this feedback has no other alternative but to continue guessing until they are able to get the answers right. This does not lead to student understanding of mathematical concepts.

User-friendly:

The key to user-friendly feedback is to give a small amount of feedback at once so that students have an opportunity to make use of the feedback and not be overwhelmed, but at the same time give them feedback that they know how to act on. Feedback that is constructed in language students may not know, or which requires them to use knowledge they do not have is not user-friendly. What evidence do the developers of the Khan Academy have of the effectiveness of the feedback they have included in their system?

Timely:

What little feedback the Khan Academy exercises do give is certainly timely.

Ongoing:

Again, this feedback is ongoing. If students want another chance to try again, they just need to do another exercise.

Consistent:

The feedback offered by the Khan Academy is certainly consistent, in fact for any given exercise, it is likely virtually identical. This actually leads to problems, since some variance in the language used in feedback is often necessary in order for students to make sense of it.

This analysis of the feedback offered by the Khan Academy platform suggests that their mechanisms for feedback fail on four out of seven of the criteria shared by Grant Wiggins. One open question here: is it possible for a computerized feedback system to give feedback that meets all seven of these criteria? Next question: what is learned from using a system that gives poor feedback?

What constitutes “good teaching” is not well defined. My evidence for this claim is that so many organizations appear to use very different exemplars of good teaching when sharing their work.

For example, this is considered good teaching by the Whole Brain Teaching institute.

The Program for Complex Instruction would likely define this as good teaching.

Seymour Papert, and other constructivists would likely define this as good teaching.

People who follow John Sweller’s (and company) work on Cognitive Load Theory might offer this as an example of good teaching.

People who believe that the future of education lies in personalized education might offer this example as good teaching.

All of these methods of teaching are very different from each other. Would people who use these methods agree on what good teaching looks like? There would likely be some overlap, but if you took a representative of each of these teaching methods and asked them to observe a classroom (which as far as I know has never been done), I would be willing to bet that it would be very unlikely that they would agree as to whether or not the teaching they observed was “good teaching”.

A better measure of effectiveness is to look at the goals of the teaching, and the impact the teaching has on the learners in terms of meeting these goals. If you have x goal for your students, how much impact does your teaching have on your students? “Good teaching” would therefore be defined as teaching that has a greater impact on achieving a specific goal, and consequently, we are not able to define “good teaching” without knowing the goals. In the examples above, it is hopefully clear that the goals of the users of each teaching method are different, and consequently each of these could be considered good teaching, within the set of goals defined.

What goals do you have for your students? Are your goals the right goals for your students? Who has defined the goals for your students? How do you know if your students are closer to achieving your goals than when you started teaching them?

If you can answer these questions, you will be a lot closer to knowing what kind of teaching you should be using, and whether or not it is effective.

This is the presentation proposal I submitted last Thursday to the NCTM conference committee. Would you attend this workshop?

Description:

Effective mathematics teaching is more than just teaching procedures; students must have opportunities to grapple with rich mathematics. In this workshop we will collaboratively investigate using rich math tasks to explore students’ use of the Common Core Standards for Mathematical Practice as part of formative assessment for learning.

Objective:

Participants will walk away from this workshop with a source of rich mathematics tasks they can use in their classroom, and a flexible and useful protocol they can use to interpret student thinking about these tasks as part of formative assessment practices. Participants will also explore the power of teachers collaborating to make evidence-based decisions and improve their own practice.

Focus on Math:

The participants in this session will be given appropriate rich mathematical tasks and samples of student work, all along the continuum of algebraic reasoning from grade 9 to grade 12. Participants will not only be able to use these tasks in their own classrooms, they will be able to apply the protocol for looking at student work to their own students’ work, and build their school teams’ capacity for collaboration at the same time.

I’m not an expert on standards by any means, but I know that the standards in British Columbia (where I was trained to teach) were coherent and made sense. You could follow the threads through the years and understand why they had been designed that way. I know that the Common Core content standards in Math have the same level of coherence.

I don’t know if they are always appropriate, or how one even defines appropriate, given the strong relationship between what set of standards students learn under and what they are therefore capable of learning in later years. I know that recent research suggests that young kids are capable of learning higher level math than what is currently expected; many or even most kindergarten classrooms practice skills that almost all of their students already have. I believe in play based early years teaching, but this doesn’t preclude teachers from focusing on problem solving and pattern finding and continuing to develop students’ number sense.

What I do know is that the Common Core Standards for Mathematical Practice (SMP) are not pedagogy-neutral.

These non-content standards require students to be able to make sense of problems and persevere in solving them. This requires teachers to offer opportunities for students to problem solve (this is how some people define “doing mathematics”).

Students have to be able to construct viable arguments and critique the reasoning of others. While this could be done entirely through paper and pencil means, it is far easier to teach students to do this by regularly engaging them in dialogue and giving them opportunities to discuss mathematics together.

Students have to be able to model with mathematics, which again means that they have to be given opportunities to do mathematical modelling. The type of mathematical modelling described in the standards requires students to be able to make sense of problems resulting from everyday life, which means that teachers should be using examples of problems that result from the cultural contexts students live in (it’s not everyday life if it’s someone else’s life).

These are just three of the eight SMP, and the other five SMP have pedagogical significance attached to them as well. The SMP require that some teachers teach differently than they currently do, so that hopefully more students will get opportunities to grapple with mathematical ideas.

What I think we need to be careful to recognize is:

• Even though the standards and the increase in testing happened at the same time, these are two different issues. I can like the Common Core standards, and also think the testing is excessive.
• Many of the “Common Core Math problems” that have been shared via social media have either not been very good problems, or have had insufficient context to explain them. However, none of these problems is “Common Core” since the Common Core is a set of standards, not a set of curriculum. Standards define what kids are supposed to know and when; curriculum is a tool used to align specific mathematical examples to those standards. The fact this has sometimes been done poorly is not the fault of the people who wrote the Common Core standards, and in fact, these kinds of poorly written problems have been plaguing education for many years.
• Value-added measures (e.g. teacher evaluation programs), privatization of education, ALEKS, and a number of other issues that have arisen across the USA in recent years are also not actually related to the Common Core Standards. Again, I can support the standards and not support people excessively profiting off of education.
• Children being given problems that are too challenging or being given insufficient support when attempting these problems at home is again not the fault of the Common Core. Every standard in the Common Core has a range of possible curricular resources, and hence challenge levels, and educators just need to be careful when selecting amongst these. If students are being sent home assignments that they cannot reasonably be expected to do with minimal support from their parents, then these are the wrong types of homework assignments to assign. Homework is probably not appropriate at all in elementary and middle school, but fortunately the Common Core does not come with a requirement to assign homework.
• There are tonnes of interesting and rich mathematics available that fall under the set of content defined by the Common Core. Almost every puzzle or challenging problem on this website, for example, is aligned to some Common Core standard.
• The Common Core is not going to solve the problems of inequity, poverty, and racism in our education system. It is unreasonable to expect a set of standards to do this.

tl;dr: Strategic inquiry is a lesson study structure.

One of my roles in my current job is to help facilitate team meetings for two schools. In these team meetings, our objective is to collaboratively study our individual impact on student learning, and work together to design instructional strategies for improving the learning outcomes of students.

This means we collaboratively:

1. Look at student work.
2. Take the time to notice things about the student work that everyone agrees on.
3. Identify a specific area that we should focus on for this student.
4. Offer suggestions as to what we would do if we were the teacher for this student.
5. Make a plan that includes re-assessing students.
6. With the assessment information from our plan, if necessary, revisit steps 1 to 5.

Teachers definitely need to collaborate in this process. The most important reason to collaborate with other teachers when studying the impact of your own teaching is that other teachers can offer insight and feedback that you cannot see yourself. Also, when you first start looking carefully at the impact of your teaching, it can be disheartening to see how little impact you sometimes have, and having some colleagues to reflect on this with and offer support is invaluable.

There is evidence to suggest that teachers improve faster when they work together to plan and reflect on their teaching. Two central tenets of John Hattie’s book, Visible Learning for Teachers (2012), are that teachers should know their impact, and work together to improve each other’s teaching. A highly effective model, Hattie suggests, is for teams of teachers to norm around what it means to be successful in their subject area, look at sources of student data, and collaboratively create instructional plans to attend to trends in that student data. In Ilana Horn’s summary of her research into professional learning of math teachers, she suggests that teachers learn most about teaching when their conversations are centred around teaching, students, and mathematics.

I have a proposal. I would like to form and facilitate an inquiry group of 3 or 4 other people from the online mathematics community. We would start the inquiry process for next September.

Here’s what would make your participation ideal:

• You are interested in studying your impact as a teacher,
• You have enough time to meet once a week (or once every two weeks) for about an hour or 90 minutes,
• You are teaching a course which is substantially similar to what everyone else that participates is teaching (or alternatively, you are happy to help someone study their teaching in a course you are not teaching),
• You are able to give your students a pre-assessment before you start teaching, and the same post-assessment after you teach a unit,
• You have the technical know-how to upload your student work, minus personally identifying features, into a shareable space,
• And you are able to participate using our chosen communication technology (Google+ Hangout or BigMarker are my two suggestions for now).

Here are two other ideas to make our work even MORE ideal:

• All participants are teaching the same course using roughly the same scope and sequence. For example, we could all teach the Common Core aligned Integrated Algebra 1 using the Mathematics Design Collaborative scope and sequence.
• All participants use the same pre-assessments and the same post-assessments (but are obviously free to sequence and teach the unit topics as they see fit).

Benefits:

• Your teaching will improve (probably),
• Your students will likely benefit,
• You will have a source of other people teaching roughly the same content at roughly the same time, which will make collaboration around resources and lesson planning much easier.

If you are interested in participating, fill out this form here: http://wees.it/inquiry.

First, give an exit slip to your students based on a critical math concept for which you want to check for understanding.

After class, sort the exit slips into piles based on the method students chose to use (whether they used it perfectly or not). Choose two examples from the student work that highlight one or two probable misconceptions students still have on the chosen critical math concept.

Remove identifying information from the student work, photograph it (or use a document camera) and show it the next day in your class. Ask students a question about the work that requires them to think about the work. “Which one of these two examples is correct?” is not a very good question because it can be answered by guessing. “Why do I really want these two students to talk to each other about their solution?” is a better question because if students answer it, they will have to think about the concept a bit differently.

Ask students to think about their own answer, write it down (if necessary), then turn and talk and share their work with a partner while you circulate and listen in on student discussions. Select 1 – 3 students to share their thinking with the whole class.

Repeat this every day.

The objective of my presentation at NCTM in New Orleans was to introduce participants to social media, which was made difficult because participants did not have Internet access. As it turns out, this ended up forcing me into a couple of activities which were pivotal experiences for participants.

Here are my slides from my presentation.

Instead of trying to bring my participants to the Internet, I brought the Internet (or at least a portion of it) to my participants, and in doing so, provided them with concrete examples of how people use social media to interact.

I started my presentation by sharing some of the stories from my own use of social media: who I have been able to interact with, and how this has enriched my professional learning. If you use social media as a professional tool, then you have some of these stories too.

Next, I gave them an experience of what it might be like to participate in a live Twitter session. Participants were given a question, 30 seconds to find new group members, and 140 seconds (suggested by Dvora) to discuss the question in their small groups. This highlights that Twitter conversations are often with people you don’t know very well, and can be brief interactions.

I then asked participants to describe the attributes of our face to face conversations, and to speculate as to how these might transfer to an online conversation. I then highlighted for participants some of the features of these kinds of conversations. In particular, online conversations parallel the conversations you would have with people face to face, but they can take place between participants who are separated by vast geographic (and cultural) distances.

Participants went around the room and read one or two of the four blog posts I had printed and put up on the wall, and put sticky notes up to comment on the blog posts. We then debriefed the experience with the main observation being that blogging is a lot like reading and responding to a letter from the editor.

Finally I wrapped up by talking about some of the specific projects that have been created through collaboration with other people in the online mathematics education community, and how our participation online has resulted in resources of real value in our teaching.

In the final questions at the end, one participant astutely observed that it would be easy to find a “how-to” guide online, but that he felt my “why-to” session was more helpful. There’s no reason to tell people how to do a bunch of technical details if they don’t see a reason to do them.