Tuesday, August 19, 2014

What is acceptable evidence?

I try to keep the goals of a course in mind when designing it, and when making day-to-day decisions once it is underway. But I have a harder time thinking about evidence of student learning. As it turns out, verbs are helpful.

Through my work with teachers, and in partnering with teacher educators, I came into contact with the book Understanding by Design, by Grant Wiggins and Jay McTighe. In their model for developing curriculum and lessons, there are three stages, embodied in the three questions: What are the learning goals? What is acceptable evidence? What activities, experiences, and lessons will lead to the desired results as evidenced by the assessments? 

I teach a range of courses, some for aspiring elementary teachers, some for math majors, and some for practicing secondary teachers. In planning any of these courses, I generally begin with my learning goals. While the official syllabus sets a direction for the course, I sometimes find it helpful to rephrase the goals, and to prioritize them. For purposes of illustration, one phrasing of a goal from the course for future elementary teachers is: Students will understand fractions and their representations, and be able to solve problems involving fractions. My rephrasing of that goal is: Students will be able to explain operations on fractions using models such as the area model and the number line, apply the models in realistic contexts, solve problems involving fractions, and interpret their answers.

Pivoting from goals to assessment, I am now faced with the question: What am I looking for, during and after the unit on fractions, that will let me know whether students have reached the goals for the unit? Whereas the initial phrasing of “understand fractions” does not translate easily into something that can be assessed, the verbs explain, apply, solve, and interpret give much clearer direction. They tell me that I am going to assess students via problems in context that require them to interpret their answers, as well as problems that require explanation of diagrams or models. I will know students have succeeded if their explanations are coherent, their diagrams illustrate mathematical reasoning, and they are purposeful in interpreting their answers in context, for instance by using appropriate units, or by rounding up or down as the situation demands. Although I am sure it is possible to write shallow goals using the same verbs, I find that words such as explain, apply, solve, and interpret translate an abstract notion like understanding into something observable and measurable.
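As a concrete illustration, here is a made-up item of the kind those verbs point toward, together with the sort of interpretation I would look for. The recipe numbers are invented for this example, not taken from the course:

```latex
\textbf{Hypothetical item:} A recipe calls for $\tfrac{3}{4}$ cup of sugar
per batch. You have 5 cups of sugar. How many full batches can you make?
Explain with a model and interpret your answer.

\emph{Solve:} $5 \div \tfrac{3}{4} = 5 \cdot \tfrac{4}{3} = \tfrac{20}{3} = 6\tfrac{2}{3}$.

\emph{Interpret:} Only whole batches make sense here, so round down:
6 batches, with $5 - 6 \cdot \tfrac{3}{4} = \tfrac{1}{2}$ cup of sugar left over.
```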

Rethinking my goals with assessment in mind has helped me keep a focus on what is important in my classes. Having clear goals, phrased with verbs that make them measurable, makes it easier to write exams. More importantly, because I interact with students at every class meeting, if I sense that students are not reaching the goals, I know what to emphasize, and what is important enough that we need to slow down, because those priorities are embedded in the goal statements.

Have you worked with your goal statements? Do you find that you assess progress toward your goals regularly? 

Tuesday, August 12, 2014

Examining Reasons to Use Technology in the Classroom: Mathematical Modeling of Flight Times

In this post, I explore how technology has made mathematical modeling more accessible.

This summer, I had the privilege of teaching a 3-week institute for eighth grade teachers. One of our aims was to help teachers grapple with mathematics in the Common Core State Standards that is new to them (or long forgotten). One of the major changes is the inclusion of a mathematical modeling standard (Standard for Mathematical Practice 4), and in eighth grade, three standards refer to investigating patterns in bivariate data, including thinking about whether a pattern fits a linear model and informally fitting a line to data. Thus, we spent a number of sessions engaged with bivariate data. For purposes of this post, the main point is that technology has made fairly sophisticated mathematics more accessible; below, I briefly describe how we used technology to do mathematical modeling.

As I have written previously, I think the TI-Nspire is worthwhile in spite of its price, and so it was the focus of much of our work. A good example is the problem Gate-to-Gate, which I updated and adapted from a problem I found in a book. In brief, the goal of the lesson is to build and assess a model that predicts the time of a flight from Chicago given its distance.

In that problem, we started with a map produced by http://www.flighttimesmap.com that shows concentric rings labeled with estimated flight times; I used Chicago as the point of origin. First, we collected observations, such as: the rings appear to be circles, the circles appear to be equally spaced, and the first circle is marked as a 1 hour flight. Next, we discussed the meaning of those observations, and we conjectured that the equal spacing indicates a fairly constant flight speed. We also wondered whether the map's times were completely accurate.

The next step was to have the teachers explore a data set. I looked up actual flight times and put the data in a TI-Nspire file, which saved the teachers the trouble of typing in the data themselves. Teachers then created scatter plots, fit their own informal lines to the data, and ran a linear regression. With a line in hand, they chose flight destinations, looked up the flight distances (via web search), and compared the flight times predicted by the model to those posted on the web. They also tried to think of cities on the map that would be 3 hours away from Chicago by air, and again compared the real data to the model's predictions.
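For readers who want to play with the same workflow outside the TI-Nspire, here is a minimal sketch in Python. The (distance, time) pairs below are invented stand-ins, not the actual flight data from the class file:

```python
import numpy as np

# Invented (miles, hours) pairs standing in for real gate-to-gate data.
distance = np.array([300, 700, 1000, 1500, 1750, 2000])
time = np.array([1.1, 1.9, 2.5, 3.4, 3.9, 4.4])

# Least-squares fit of time as a linear function of distance,
# the analogue of running the regression on the handheld.
slope, intercept = np.polyfit(distance, time, 1)
print(f"time = {slope:.4f} * distance + {intercept:.2f}")

# Test the model: predict the time for a destination of known distance.
def predicted_time(miles):
    return slope * miles + intercept

print(f"Predicted time for a 1200-mile flight: {predicted_time(1200):.2f} hours")

# Invert the model: about how far away are the 3-hour cities?
print(f"3-hour cities lie roughly {(3 - intercept) / slope:.0f} miles out")
```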

Finally, we had a summary discussion about the quality of the model fit, the meaning of the values in the linear equation, and what domain is appropriate for the linear function. (What does it mean to have a flight covering a distance of 0 miles? Read that way, the intercept looks like fixed overhead for taxiing, takeoff, and landing.) Some teachers graphed distance as the independent variable, and others graphed time, giving two different equations whose slope and intercept values carry different meanings. It was a good discussion and led to real insights about both modeling and the meaning of slope and intercept in context.
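Continuing the sketch above with the same invented data, fitting both ways shows why the two choices of independent variable led to different insights. (One caveat worth surfacing in discussion: the two fitted slopes are exact reciprocals only when the data are perfectly linear.)

```python
import numpy as np

# Same invented data as the sketch above.
distance = np.array([300, 700, 1000, 1500, 1750, 2000])
time = np.array([1.1, 1.9, 2.5, 3.4, 3.9, 4.4])

# Fit both ways, as the teachers did.
m_td, b_td = np.polyfit(distance, time, 1)  # time as a function of distance
m_dt, b_dt = np.polyfit(time, distance, 1)  # distance as a function of time

# time = m_td * distance + b_td:
#   m_td is in hours per mile; b_td is the time a "0-mile flight" would
#   take, i.e., overhead for taxiing, takeoff, and landing.
print(f"hours per mile: {m_td:.4f}, overhead: {b_td:.2f} hours")

# distance = m_dt * time + b_dt:
#   m_dt is an average speed in miles per hour.
print(f"miles per hour: {m_dt:.0f}")

# The two slopes multiply to r^2, not 1, unless the data are perfectly linear.
print(f"m_td * m_dt = {m_td * m_dt:.3f}")
```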

Stepping back from the problem, here is a look at how technology enhanced this exploration.
  • Data is more easily shared. This saves a tremendous amount of time. I shared the TI-Nspire file as a Dropbox link on a Lino board that I established for the class. This is a long way from plotting data points by hand, or even from sharing the data but having each person enter it into their own spreadsheet for analysis. If I were not doing this lesson with iPads, I would have had to pre-load the data onto handheld calculators or, if those were not available, perhaps give the data in a table alongside an already-plotted graph (or two graphs, one for each choice of independent variable).
  • Data is more easily analyzed. Fitting a line informally is easy to explore on a touch screen, and since the technology finds the equation of the moveable line, the conversation focuses on the quality of the model's fit rather than on the procedure for finding the line's equation. Computing technology for tasks such as regression has been available for many years, but the ability to run a regression with a button click means we can discuss how our informal lines compare with the regression line. In a calculator lesson, we might still use moveable lines, though less easily; barring that, we would lay spaghetti on a paper graph, but then we could not compute the line equations quickly.
  • Access to the web makes it easier to test a model against real-world data. With access to maps and the ability to look up flights, teachers had a lot of freedom to test their models. Without the web, I would have had to preselect a set of cities, listed with distances and flight times, and use that as the basis for testing the model.
  • Sharing results is easier. We used Baiboard, and I selected individual teachers, who then uploaded screen shots of their models and results. This meant that when teachers were sharing, either they or I could add annotations to the screen shots. Moreover, as others shared, we could swap back and forth between the current person’s work and the work already shared by others. If this were a lesson on calculators, teachers would have had to keep a separate handwritten record of their work, and switch back and forth between sharing their written work and sharing the work on the calculator. We would probably have to keep a (partial) record of what was shared on a whiteboard for later reference.
In looking at the effect of technology on the lesson, the point is not that the lesson is impossible without iPads. Rather, compared with, say, classroom calculators, the technology makes the lesson run more smoothly and quickly, adds the authenticity of finding one’s own data, and improves the way results can be shared.

Tuesday, August 5, 2014

Keep Tinkering

I am always making adjustments to my courses, tinkering both during and, especially, between iterations. My teaching is never a finished product. It is in the nature of teaching that what worked in one year for one course may not work for another course, or even for the same course in a subsequent year. I want to share one change I made over the past year, the effect it had, and what I am doing as a result.

In my Transition to Proof course last fall, I began building concept questions to supplement the regular proofs, using them to target specific misconceptions or difficulties that students have (or that I expect based on past experience). By concept questions, I mean short questions, usually multiple choice or true/false, designed to draw out students’ thinking and generate productive disagreement. Every time we had one of those discussions, I was exhilarated by the amount of discourse in the room. This practice evolved because I had promised myself to focus on getting more discussion out of students in that class; in the past, I felt that too few students were able to comment on or question the proofs presented by their peers at the board. With the concept questions, I felt I was seeing what the students were getting or missing from those proof presentations. In particular, the questions really helped to draw out the main points of proofs, points that I thought students would have gotten from a direct discussion of the proof, but which may have been less apparent than I had assumed. I almost feel that students in previous iterations of the course were shortchanged because they did not get this added layer of discussion to push their thinking forward. That’s when tinkering pays off.

As a result, I have planned some form of concept questions into both of my courses for this fall, and I have made the concept questions part of the course grade. In addition to using the concept questions as a teaching tool, I am curious about: (a) whether simply participating in the concept questions correlates with performance in the course, and (b) whether answering questions correctly on the first try correlates with performance in the course. Most of all, I would like to know whether using the concept questions in class improves the class’s understanding of the key concepts, but this will be hard to measure. I am thinking that I may reuse some exam items from prior years in one of the classes, so that I can compare performance. That is not as good as an experiment, but at least I will have a basis for comparison.
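Once the semester's data are in, the checks in (a) and (b) come down to simple correlations. Here is a minimal sketch in Python; the per-student records are invented for illustration:

```python
import numpy as np

# Invented per-student records for one semester.
participation = np.array([0.9, 0.6, 1.0, 0.4, 0.8, 0.7])  # fraction of concept questions attempted
first_try = np.array([0.7, 0.3, 0.8, 0.2, 0.5, 0.6])      # fraction answered correctly on first try
course_score = np.array([88, 72, 93, 61, 80, 77])         # final course percentage

# (a) Does participation alone track course performance?
r_participation = np.corrcoef(participation, course_score)[0, 1]

# (b) Does first-try correctness track course performance?
r_first_try = np.corrcoef(first_try, course_score)[0, 1]

print(f"r(participation, course score) = {r_participation:.2f}")
print(f"r(first-try correct, course score) = {r_first_try:.2f}")
```

With a single class of students, the resulting correlations would be suggestive rather than conclusive, which matches the caveat above about not having a true experiment.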

The larger message is that it is healthy to revisit one's goals for a course, to set personal goals for improving one's teaching, to be willing to try new ideas that show promise of bringing students closer to the learning goals, and to measure the impact of the changes, so that what works remains in place and what does not is revised or edited out of the course.