CHILD RESEARCH NET

Papers & Essays

Adolescent Polling and School Improvement

Schools are expected to monitor learning and provide reports of individual progress. Most youth are motivated to earn good grades and believe that the marks their teachers give them are an accurate reflection of achievement. However, in the past decade, grade inflation has become a national problem.1 Grade inflation occurs when students are given higher grades than they have earned.2 Some teachers assign high grades to almost all students in order to support self-esteem, avoid pressure from parents, eliminate grievances, and maintain mental health.3,4 However, when scores on standardized tests reveal lower performance than is portrayed by grades, students are misled regarding their competence, deficits, and tutoring needs.5 The dangers associated with inaccurate grading practices were prominent factors motivating the United States Congress to pass the No Child Left Behind Act in 2001. This law mandates annual testing in every state to detect success and failure of students. Early identification of difficulties should result in the help individuals need to avoid feeling discouraged, giving up, or leaving school.6

Underperforming Schools in the United States

There is agreement that schools underperform just as students do and therefore need assistance. Underperforming schools should be identified and required to implement changes that will produce an acceptable learning environment. Every school receives an annual report card that describes its progress in relation to federal and state mandates. No Child Left Behind assigns a status label, making known whether adequate yearly progress (AYP) has been made toward measurable objectives. Annual evaluations consider grade level, certain subgroups (gender, racial/ethnic, students with disabilities, English language learners, economically disadvantaged), whether 95% of students take the required tests, and one additional factor: graduation rate for high schools and attendance rate for elementary schools. Title I schools that fail to reach annual goals for two consecutive years are put on improvement status.7 The Title I designation identifies schools eligible for federal monies because at least 40% of students live in poverty.28 Although all schools are monitored for progress, escalating sanctions are imposed only on Title I institutions, based on the number of consecutive years without achieving progress. These sanctions range from an initial warning to restructuring the school in year five of improvement status.6

Underperforming schools have to submit an improvement plan that details the changes proposed to increase achievement and close the gap between the advantaged and disadvantaged. In addition to monitoring the federal standards of No Child Left Behind, the department of education in each of the 50 states sets its own expectations that schools must meet. Formulating school improvement plans has become even more complex in urban areas, where politicians and educators disagree over how to help the many children who fail to read or perform mathematics at basic levels. Politicians often believe educators overestimate the quality of public schools while underestimating the need for reform. In response, mayors of some cities have sought approval from their state government or city council to grant them control over the education administration.8,9,10

Respecting Student Opinions through Polling

George Gallup established polling as a credible methodology for finding out how a population or representative sample thinks about particular issues. In his book The Pulse of Democracy, Gallup speculated that polling could become a national equivalent of New England town meetings by giving the public a voice in government affairs.11 He believed that polls could support democracy by reducing the power of corporate lobbyists in favor of allowing common people to participate in dialogue on public policy. The Gallup organization has teamed with Phi Delta Kappa since 1969 to conduct an annual poll to determine what people think of the public schools.12 This innovative effort has contributed significantly to the improvement of education.

Most polls that assess the opinions of youth are sponsored by businesses that regard them as an important consumer group. A growing number of television programs include polling so viewers can have input to decide winners of competitions and the contestants to be eliminated. Nickelodeon, VH1, and American Idol typically invite viewers to phone in or log on to a web site and cast their vote based on specified criteria. Responses are tallied and results are announced. The opportunity to vote on issues that interest them motivates viewers to feel more involved than when they are limited to being passive spectators.13 Because of innovations in technology, we appear to be at the beginning of a new era, an 'age of personal and participatory media in which the boundaries between audiences and creators become blurred and are often invisible'.14

Many reporters have urged educators to consider students as a source of opinion about changes needed in school practices.15,16 The opinions of students about conditions of learning should be understood because improving the quality of their experience is the purpose of educational reform.17,18 Finding out how these consumers see education could yield insights on preferred ways of learning, obstacles to progress, and factors that influence motivation, engagement, and satisfaction. Discovering how students interpret their school experience would not necessarily dictate change but could add to the perspective of adults, thereby enabling more informed decisions. Yet the current practice is to look to adults as the sole source of ideas on school change. This practice leads students to conclude that adults do not value reciprocal learning.19

A promising departure from the customary paradigm was taken in 2007 by the New York City Department of Education. Students from grades 6-12 were invited to complete a survey in the classroom on attitudes about their school. A different survey was sent to parents by postal mail. Both generations could fill out the paper-and-pencil form or respond online. The Children First Initiative was aimed at 1.8 million potential respondents at a cost of $2 million. Mayor Michael Bloomberg explained that the survey was expected to reveal hard facts about schools that are succeeding and others that are falling behind. The parent survey asked them to agree or disagree with statements regarding school quality, activities, course work, and teacher support. Ten potential ways to improve schools, such as smaller classes and more teacher training, were listed so parents could identify the most important options from their vantage. A similar process was used to assess agreement among students on statements like "My teachers inspire me to learn" and "Most students at my school like to put others down." Schools receive a report card grade from A through F, with survey data counting for 10% of the school grade. The plan is to assign slightly greater weight to parent responses.20

Construction of Polls about Conditions of Learning

Our strategy to gather student perceptions began with draft construction of polls about conditions of learning. Focus groups were then convened with middle school and high school students. Adolescents read and reacted to the polls, each containing 15 to 20 multiple-choice items including an open-ended 'other' response for views not represented by the given options. Based on feedback, item revisions were made and polls were checked for readability using the Flesch-Kincaid Grade Level formula.17,18 The ten topics assessed by our Conditions of Learning Polls include: (1) Internet Learning, (2) Tutoring, (3) Time Management, (4) Cheating, (5) Student Responsibilities, (6) Stress, (7) Boredom, (8) Cyberbullying, (9) Support From Peers, and (10) Career Exploration.
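The Flesch-Kincaid Grade Level referenced above is a standard readability formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. A minimal sketch of such a check follows; the naive vowel-group syllable counter is an assumption for illustration, as production readability tools use more careful syllable rules and dictionaries.

```python
import re

def count_syllables(word):
    # Naive heuristic: count groups of consecutive vowels.
    # Real readability tools use dictionary lookups and special cases.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    """Approximate Flesch-Kincaid Grade Level of a passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

A check like this makes it possible to confirm that each poll reads at or below the grade level of the youngest respondents before it is administered.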

The three polls chosen for exploring polling procedures and student responses were Internet Learning, Time Management, and Tutoring.

Internet Learning Poll. Teachers frequently identify lack of interest as the major obstacle that keeps students from reaching academic standards.21 The purpose of this poll is to discover how students feel about learning on the Internet. Information regarding student motivation and teacher instruction is shown by responses to items on why students spend time on the Internet, benefits from the Internet, and the need for electronic search skills. Educators should know the problems students face in trying to navigate the Internet and how to support self-directed learning. The value adolescents see in homework that involves web searching, and how often teachers assign homework tasks that require the Internet, should be understood. The ways that Internet learning compares to other forms of learning deserve consideration. The role students believe parents should have in Internet learning can clarify the division of home and school accountability and create opportunities for parent involvement. Increased parent involvement is mandated as part of No Child Left Behind. Student estimates of teacher ability to use technology tools, advantages and shortcomings of virtual schooling, and how the school web site could be more helpful warrant attention.

Time Management Poll. An important factor in success for any endeavor is time management. People who schedule time so that priority concerns get sufficient attention typically feel more in control of their lives, experience more satisfaction, and have a more productive work record.22 It is vital to equip adolescents with the sense of balance that time management skills can provide so they are able to avoid overscheduling themselves, procrastinating on important tasks, breaking promises to others, and ignoring the people who matter most. Items on this poll explore experiences with time management questions related to school, such as: How do students feel about spending the same amount of class time on subjects of varying difficulty? How suitable is the school library and computer center schedule? What do students think about the scheduling of final exams, optional possibilities for attendance, access to teacher help outside class, and having enough time for themselves? How do youth feel about the morning start time for classes at school? How often do hurrying and overscheduling occur based on tasks adolescents have chosen or had thrust on them by others?

Tutoring Poll. Tutoring is recognized as an essential tool to help students make progress. There is a body of research documenting benefits and shortcomings of tutoring practices. The Supplemental Education Services (SES) provision of the No Child Left Behind Act makes free tutoring available for every qualified low-income student.23 However, even though all parents are informed of this resource as required by No Child Left Behind, tutoring participation rates are low, rural areas lack services for English language learners, and services for students in special education are lacking in all geographic contexts. Because tutoring has provided less benefit than expected, there is growing opposition to continued reliance on corporate tutors that lack accountability for gains.24 In 2005 the Arizona State Department of Education allocated $10 million to districts for tutoring students who failed the state test and were in danger of not being able to graduate.25 The statewide implementation resulted in just $3 million spent for the intended purpose because fewer students participated than anticipated. Knowing how students feel about tutoring as a way to learn is an important aspect of being able to provide help.

This poll identifies how students perceive the relevance of tutoring to overcome failure, ways of motivating them to admit a need for help, convenient times to schedule tutoring, anticipated response of friends and relatives to admission of need for tutoring, reasons why individuals might require assistance, preferred conditions for tutoring, ways to deal with difficult course content, subjects where tutoring is needed, expected teacher response to requests for tutoring, methods to announce availability of tutoring, making known results of tutoring, and willingness to volunteer as a tutor.

Adequate yearly progress required by No Child Left Behind is based on how particular student subgroups perform. Therefore, it is relevant to find out if subgroups have distinctive views on ways to improve the effectiveness of their school experience. For example, do males differ from females in perceptions about conditions of learning? National attention has focused on gender disparity in achievement.26 How influential is ethnicity or race on student impressions? Most underperforming schools have high proportions of minority students. How do student perceptions compare among schools in the same district? Answers to these questions can reveal unrecognized aspects of success, contexts for intervention, and student-preferred solutions.

Field Test of Online Polling

A field test was conducted to assess the merits and shortcomings of online polling as a process to gather student perceptions about conditions of learning. Students in grades seven through twelve from Title I schools in rural Arizona were the participants. In rural Arizona, 50% of students graduate from high school, the third-worst rate in the nation.27 The percentage of rural students in Arizona who speak English 'less than very well' is 12%, the second-highest in the country. As indicated earlier, a Title I designation identifies schools that are eligible for federal monies because at least 40% of students live in poverty.28 Schools throughout the state were chosen, including ones near the border with Mexico, proximate to Indian reservations, in remote areas, and near urban areas. Eleven principals were oriented and asked to participate; eight accepted. The three high schools, two junior highs, one middle school, and two elementary schools ranged in enrollment from 100 to 500 students. Five schools had been on improvement status in the past three years. A high proportion of the students were from minority groups, with Hispanics accounting for over 50% of enrollment in four schools.

Each principal followed the online instructions that would later be expected of the students. Then, the principals sent a common letter to their students explaining why they should take the polls, potential benefits for the school, and the computer lab schedule for completing their task within two weeks. Accommodations for English language learners were described. Students were assured of anonymity so that personal identity could not be linked with responses. The students chose whether to participate and were encouraged to share the letter with their family.

Most principals directed teachers to bring classes to the computer lab at specified times and then randomly distribute to each student who chose to participate a copy of the letter received earlier, this time containing additional coded information. Details included password-protected entry data for separate access to each of the three polls. School codes were printed to ensure that information could be disaggregated by school. The individual student codes, produced with a random generator, were attached on small self-adhesive labels to ensure anonymity and limit students to one vote per poll. The number of completers at each school was monitored every couple of days, with updated figures provided to the principals.
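The one-code-per-student scheme described above can be sketched in a few lines. This is an illustrative assumption, not the study's actual generator: it produces a batch of unique, hard-to-guess codes suitable for anonymous, one-vote-per-poll access.

```python
import secrets
import string

def generate_student_codes(n, length=8):
    """Generate n unique random access codes for anonymous polling.

    Uses the secrets module so codes are unguessable; a set guards
    against the (rare) chance of a duplicate draw.
    """
    alphabet = string.ascii_uppercase + string.digits
    codes = set()
    while len(codes) < n:
        codes.add("".join(secrets.choice(alphabet) for _ in range(length)))
    return sorted(codes)
```

Because no roster is kept linking codes to names, a code confirms that a respondent is an invited student and prevents repeat voting without revealing who the respondent is.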

The number of students enrolled in the grade levels invited to complete polls was compared with the actual number who entered responses. Percentages varied, but most of the schools had completion rates of 75% or higher on all three polls. Overall, 956 students completed the Internet Learning Poll, 834 the Time Management Poll, and 766 the Tutoring Poll. Administrators reported that the polls were given in sequence and that lab schedule variations may not have allowed some students time to finish all three polls. This is one possible explanation for the declining number of respondents across the successive polls.
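The completion-rate comparison above amounts to a simple per-school, per-poll calculation. The enrollment and completion figures below are hypothetical, used only to illustrate the bookkeeping:

```python
def completion_rate(completed, enrolled):
    """Percentage of invited students who completed a poll."""
    return round(100 * completed / enrolled, 1)

# Hypothetical figures for one school's invited grade levels.
enrolled = 240
completed = {"Internet Learning": 205, "Time Management": 190, "Tutoring": 178}

rates = {poll: completion_rate(n, enrolled) for poll, n in completed.items()}
meets_threshold = {poll: rate >= 75.0 for poll, rate in rates.items()}
```

Tracking rates this way also surfaces the sequencing effect the administrators noted: each successive poll shows a slightly lower rate.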

When polling was complete, data files were used to create reports for individual schools showing distribution of student responses. The report consisted of colored graphs and charts displaying responses on every item for the three polls. Student demographic data on age, grade, gender, and race/ethnicity were separately displayed and narratives for 'other' responses provided in a box beneath each item. Reports to principals included the suggestion to share data with stakeholders such as faculty, students, parents, district administrators, and the school board.

Student and Principal Perceptions

Quantitative and qualitative analyses were applied to determine outcomes. Chi-square tests were done on individual responses for each question of the three polls. Each response option had to be tested separately because students could select more than one option, so each option had its own data field. The purpose of testing was to find out whether the relationships between responses and the demographic variables of gender, grade, and race/ethnicity were dependent or independent. The same tests were conducted between responses and specific schools to detect whether there were significant differences in student perceptions from school to school.
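One such test can be sketched as follows. The counts are hypothetical (boys and girls selecting or not selecting a single response option), and a production analysis would normally use a statistics library rather than this hand-rolled Pearson statistic; the sketch only shows the dependence test applied to each option.

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts for one response option, split by gender:
#               selected   not selected
table = [[120, 80],    # boys
         [90, 140]]    # girls
stat = chi_square_statistic(table)

critical_05_df1 = 3.841  # chi-square critical value, alpha = .05, df = 1
dependent = stat > critical_05_df1  # True: response depends on gender
```

When the statistic exceeds the critical value, the response option is counted as having a dependent relationship with that demographic variable, which is what Table 1 tallies across all options.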

Table 1, Percentage of Poll Responses with Significant Relationships to Demographic Variables, compares the percentage of student responses showing dependent relationships with each of the four tested variables. Sixty-nine percent of the responses showed dependent relationships with one or more variables. The school variable had the highest frequency of response relationships at 46%, followed by gender at 35% and grade at 23%. Race/ethnicity had the lowest number of significant relationships at 17%. These findings support the assertion that the local school context should be the primary consideration for assessment and implementation of changes that influence conditions of learning.

Table 1. Percentage of Poll Responses with Significant Relationships to Demographic Variables

Poll                 N       Gender %   Grade %   Ethnicity %   School %
Internet Learning    956     35         21        17            49
Time Management      834     29         19        19            41
Tutoring             766     39         28        16            48
Overall Total        2,556   35         23        17            46


Qualitative evaluation took place six weeks after poll reports were distributed to schools. Sixteen questions were sent to each principal for reflection prior to an interview at their school. The 30-45 minute interviews were taped with permission of principals so their complete responses could be considered. Several questions related to the advantages and disadvantages of our procedures. Overall, reaction of principals was highly favorable, identifying advantages such as good use of technology, appealing method to get students involved, suitable pacing so most students could finish, using the school computer lab to ensure equal opportunity, greater accuracy because responses were confidential, getting easy to interpret results, and confirming that students have insights to improve decision making about school change.

Use of the school computer lab to conduct polling was identified as an advantage. Each principal explained that an unknown but large proportion of students did not have a computer at home. This assertion was corroborated by a survey two of the administrators carried out on their own. They found that only 38% of students at one school and 33% in the other had Internet access at home. The common speculation was that computers were lacking at home mainly because of poverty, especially among families close to the Indian reservations. Principals near the border with Mexico or having a large number of English language learners agreed that the procedures were inadequate to gain the perspective of this subgroup. Even though students could get help with translation, a Spanish version of each poll would yield more accurate results and should be available.

Scheduling questions centered on timing for taking polls and getting the results. Everyone agreed that polling should not be scheduled during the two months preceding annual state testing in April because this is the busiest and most stressful time for teachers and students. Instead, consensus favored September as the best time to poll, shortly after the school year begins. This would allow for consideration of results when developing the school improvement plan that must be submitted to the state department of education in January. Polling students again at the end of the year could evaluate the effects of changes implemented based on initial polling. Feedback of results should be available as quickly as possible. All of the administrators preferred to get their findings within a week. Every principal perceived continued use of polls as a valuable method to improve conditions of learning at their school.

The selection of polling topics was discussed. For the field test it was necessary to have students in every school complete the same polls, so administrators were not given a choice about the three topics for polling. During their interviews, principals were asked to identify, from a list of other already prepared polls, the ones most appropriate for their school. The topics chosen were Cyberbullying, Cheating, Dress Codes, Boredom, and Career Exploration.17,18 Everyone agreed that principals should get to choose some topics they feel would be of greatest benefit to their school.

Although only six weeks had passed since schools had received summaries, an inquiry was made to find out whether poll results had motivated consideration of change in school practices. Most principals described certain changes that had already been made or were pending. For example, one school increased the amount of required homework that involved using the Internet. Another school had begun to post homework assignments online so that students could check if they were unsure about tasks and absentees would know the obligations expected of them, along with due dates. Procedures for identifying students for tutoring were altered at one school, where teachers expressed surprise and concern that students reported feeling embarrassed to request tutoring. Two schools were changing their web sites in response to insightful student suggestions.

One procedure used by most of the schools warrants improvement. There is a need to identify stakeholder groups, notify them of poll results, and create a process so they can convey suggestions for potential actions. All the principals shared findings with grade level representative teachers or the teachers on the school improvement committee. However, only two of the eight schools provided feedback to students and their parents. There was agreement that all stakeholders need to receive information, but this kind of reporting presented a new challenge for schools and a plan for communication had not yet been formulated. When principals were asked about mechanisms for stakeholders to suggest reforms based on their interpretation of poll results, helpful ideas emerged. First, common stakeholder groups were identified, including Title I committees, school improvement committees, community citizen committees, homeroom groups, and parent-teacher organizations. Each of these vested groups should have procedures to convey recommendations for faculty consideration. The school newsletter and school web site were suggested as essential outlets to inform and solicit input from a larger community audience made up of groups external to the school.

Implications for School Failure and Success

The use of online polling to gather student views about conditions of learning revealed aspects of failure and success in relation to questions that guided our inquiry.

1. Is Internet polling a viable method to solicit student views in rural Title I schools with high minority populations?

Results support viability because students were able to complete three polls online with ease. The patterns of participation accurately reflected demographics of each school, indicating that the processes were relevant for both genders and for individuals from diverse racial and ethnic backgrounds. Many students did not have Internet access at home so scheduling time for polling in the computer lab at school allowed for equal opportunity to respond. However, perceptions of students whose first language is not English would be better understood if they were polled in their primary language. For Arizona, that would require translation of polls to Spanish.

2. What is the value and usability of student perception data to school leaders?

All administrators reported that learning student opinions was valuable and expressed a desire to continue polling on other conditions of learning. Half of the schools had already taken initiatives to make changes based on poll results. Principals would like some choice of polls so that local circumstances can be reflected. They also prefer the flexibility of deciding when to schedule polling. If certain polls were administered at the beginning and end of a school year, results could be part of accountability reports to state and federal officials.

A conspicuous barrier involved our failure to provide principals with procedures to identify stakeholder groups, share poll results with them, and establish a mechanism for each group to recommend changes. Principals were asked to share data with stakeholders, but in most cases this was not done. To address this need, a practical process model with the following features will be conveyed to schools in the future.

(a) Principals select the topics of some polls they consider relevant to needs and priorities of their school based on student achievement, district mandates, local concerns, and state and federal accountability directives.

(b) Principals will receive a rationale for all poll topics, to be augmented by indigenous information they provide reflecting the local setting. This strategy can support student, teacher, and parent understanding of the significance of topics, generate interest among recruitment groups, and foster ownership in dealing with concerns.

(c) Students complete polls in some specified period (e.g., 2-week window) using English or Spanish versions. The school computer lab schedule should allow all students to finish their polls within the time frame communicated to everyone.

(d) Results should be conveyed to the principal within one week following the polling.

(e) Principals distribute results directly to specific stakeholder groups and on the school Web site. After deliberation, each stakeholder group formulates their interpretations and suggestions to convey to the principal.

(f) The principal solicits written and oral recommendations from stakeholder groups and considers them in cooperation with the Title I school improvement committee and faculty.

(g) The Title I committee decides on possible interventions to apply during the next term.

(h) A designated period is given to try out and monitor effects of recommended changes.

(i) Intervention is evaluated to decide if change should be adopted, revised or abandoned.

3. Are there differences in student responses based on demographic variables of gender, race/ethnicity and grade level?

Under No Child Left Behind, each demographic subgroup must make adequate yearly progress. Therefore, the stakes are high for schools to make sure that the needs of these groups are met. To illustrate, one item on the Tutoring Poll inquires about how a student's willingness to seek tutoring might be interpreted by friends. A greater proportion of responses indicating that friends would make fun of them or try to talk them out of tutoring came from boys (65%) than from girls (35%). In contrast, the belief that friends would encourage them to seek help was more often expressed by girls (62%) than boys (38%). Knowing that boys may be less likely to request tutoring because they anticipate a lack of support from friends, a school improvement team might create a process by which boys can access tutoring without drawing attention to their initiative. Conversely, to motivate tutoring for girls, schools may establish ways that encourage groups to participate, since girls may be more willing if supportive friends are involved. There may also be school plans to build cooperative learning norms that support tutoring.

Another item on the Tutoring Poll determined that students in grades 7 and 8 are more likely than students in grades 9, 10, and 11 to ask teachers for help when they have difficulty with a subject. Students in high school are less likely to seek help, even if they fail. More students in grade 8 than other grades ask friends for help if they find a subject difficult. Therefore, high school teachers may increase their effectiveness by more often assessing students, formally or informally, to detect level of understanding, recognizing that students in these grades are less likely to seek help even if they are struggling. While cooperative learning groups are used effectively in all grades, eighth grade teachers may want to ensure that they take advantage of this grade's interest in learning from friends.

Response differences based on racial/ethnic background were the least prevalent of the demographic variables tested. Nevertheless, knowing which questions elicit different responses can help a school meet the needs of a diversified student body. For example, the Internet Learning Poll includes an item on ways to improve the school Web site. More Whites and Blacks than Hispanics or Native Americans suggested that students should be recognized for good conduct and community service. This is important information for a school that is trying to serve a diverse student population.

4. Are there differences in student responses based on the local context of the school?

An important outcome was awareness that student perceptions have a dependent relationship with local school context. On average, across the three polls, 46% of the items showed significant differences based on the individual school variable. The way students view a topic at one school may not be the same as the way students see things at another location. There are decision-making implications because NCLB requires schools to implement research-based interventions that have proven effective in other settings. The assumption that an intervention will be effective anywhere because it worked at one school may be contributing to the high rate of interventions that are less successful than anticipated. Some research-based interventions might be replicated more successfully if student views at the local school are taken into account during implementation. This strongly supports the notion that while some views of students might be similar across schools, every school needs to systematically assess local impressions about changes that affect conditions of learning.

Next Steps in Polling

The purpose of this project was to assess the viability of polling students online as a method to discover their opinions about conditions of learning. There were no models to follow so an exploratory focus was necessary. Local school context had the greatest influence on student experience, more powerful than gender, grade, ethnicity, and race. This finding supports the federal and state emphasis on individual school accountability.

Looking ahead, responses of students attending Title I urban schools could be examined to determine how they resemble and differ from rural schools. Additional comparisons could contrast underperforming with highly performing schools and large schools with small ones. There could be benefit in examining how subgroups like immigrants, special education students, and slow learners view school in comparison with their classmates. Society expects its schools to provide each student with personal experience in democratic decision-making. This goal can be achieved when educators pursue reciprocal learning from students by inviting their anonymous opinions online and assuring them that their experience in the classroom will be considered in guiding efforts to create better schools.

References

1. Brian L. Burke, "For the Grader Good: Considering What You Grade and Why," Observer, vol. 19, November 2006, pp. 33-37.

2. Valen E. Johnson, Grade Inflation (New York: Springer, 2003).

3. James D. Allen, "Grades as Valid Measures of Academic Achievement of Classroom Learning," Middle School Journal, vol. 78, May/June 2005, pp. 218-223.

4. Dorothea Anagnostopoulos, "Real Students and True Demotes: Ending Social Promotion and the Moral Ordering of Urban High Schools," American Educational Research Journal, vol. 43, Spring 2005, pp. 5-42.

5. Craig Huhn, "How Many Points Is This Worth?" Educational Leadership, vol. 63, November 2005, pp. 81-82.

6. Richard Simpson, Paul LaCava, and Patricia Graner, "The No Child Left Behind Act: Challenges and Implications for Educators," Intervention in School and Clinic, vol. 40, November 2004, pp. 67-75.

7. Jack Jennings and Diane Rentner, "Ten Big Effects of the No Child Left Behind Act on Public Schools," Phi Delta Kappan, vol. 88, October 2006, pp. 110-113.

8. Jack Jennings, "Answering the Question That Matters Most: Has Student Achievement Increased Since No Child Left Behind?" Center on Education Policy, June 2007, available at http://www.cep-dc.org

9. Tony Favro, "US Mayors Are Divided About Merits of Controlling Schools," City Mayors Education, February 2, 2007, available at http://www.citymayors.com/education/usa_schoolboards.html

10. Jean Johnson, Ana Maria Arumi, and Amber Ott, The Insiders: How Principals and Superintendents See Public Education Today (New York: Public Agenda, 2006).

11. George Gallup, The Pulse of Democracy (Princeton, NJ: Gallup, 1940).

12. Lowell C. Rose, "A Brief History of the Phi Delta Kappa/Gallup Poll," Phi Delta Kappan, vol. 87, April 2006, pp. 631-633.

13. David E. Campbell, Why We Vote: How Schools and Communities Shape Our Civic Life (Princeton, NJ: Princeton University Press, 2006).

14. James Trier, "Cool Engagement with YouTube," Journal of Adolescent and Adult Literacy, vol. 50, February 2007, pp. 408-412.

15. Susan Black, "Listening to Students," American School Board Journal, vol. 192, 2005, pp. 39-41.

16. Mark Girod, Michael Pardales, Shane Cavanaugh, and Pam Wadsworth, "By Teens, for Teachers: A Descriptive Study of Adolescence," American Secondary Education, vol. 33, 2005, pp. 4-19.

17. Paris Strom and Robert Strom, "Cyberbullying by Adolescents: A Preliminary Assessment," The Educational Forum, vol. 70, 2005, pp. 21-36.

18. Paris Strom and Robert Strom, "Cheating in Middle School and High School," The Educational Forum, vol. 71, 2006, pp. 104-116.

19. Catherine Gewertz, "Student-Designed Poll Shows Teenagers Feel Lack of Adult Interest," Education Week, vol. 24, 2004, pp. 6-7.

20. Julie Bosman, "Views of Parents, Students and Teachers Sought," The New York Times, May 1, 2007, p. B6.

21. David Buckingham and Rebekah Willett, Digital Generations: Children, Young People, and the New Media (Mahwah, NJ: Erlbaum, 2006).

22. Peter Drucker, "Managing Oneself," Harvard Business Review, January 2005, pp. 1-10.

23. Margaret Spellings, Building on Results: A Blueprint for Strengthening the No Child Left Behind Act (Washington, DC: U.S. Department of Education, January 2007).

24. Patricia Burch, Supplemental Education Services Under NCLB: Emerging Evidence and Policy Issues (East Lansing, MI: The Great Lakes Center for Education Research and Practice, May 2007).

25. David J. Hoff, "Arizona Students Welcome Tutoring for Graduation Exam," Education Week, vol. 24, February 16, 2005, p. 6.

26. Dan Kindlon, Alpha Girls: Understanding the New American Girl and How She is Changing the World (New York: Rodale, 2006).

27. Jerry Johnson, Why Rural Matters: The Facts about Rural Education in the 50 States (Washington, DC: The Rural School and Community Trust, 2005).

28. Lorna Jimerson, "Placism in NCLB: How Rural Children are Left Behind," Equity and Excellence in Education, vol. 38, 2005, pp. 211-219.
