
Through the Keyhole: Insights into Classroom Technology Integration Practices in K-12 Schools

Copyright Miguel Guhlin 05/04/2002

CAN WE REALLY BE SCIENTIFIC IN K-12 SETTINGS?

Terms like reliability and validity, the use of "rigorous, systematic and objective procedures," call to mind the scientific method. Hypotheses are put forth, control and test groups set, data collected and analyzed, and then the conclusions shared. Existing programs are evaluated using experimental or quasi-experimental designs. These terms and expectations for quantitative data analysis are liberally sprinkled through the reauthorized Elementary and Secondary Education Act (ESEA). Is your district ready to provide this type of information to grant funders? And can any one measure really give us the whole picture, or are multiple measures and approaches needed?

WHY BE ACCOUNTABLE?

The story of TIF is often recounted where educational technology professionals gather. As the legislature meets again, the data collection effort--too late in coming, in the minds of some--has begun. It is clear that K-12 may be losing a valuable source of funding as multiple sources compete for the TIF funds. And it all seems to hinge on whether TIF can show not that it fulfilled its original purpose--getting connectivity into schools--but rather what impact all these boxes and wires have had on student achievement. In the case of TIF, it is clear that implementation strategies will differ from evaluation strategies.

With the TIF grant, the need to document the impact technology infusion has had on teaching and learning across Texas is critical. Yet doing an evaluation AFTER the fact is difficult. The potency of evaluation rests in formative assessment rather than in end-of-program evaluation.

We perform formative and summative assessments for various reasons, among them the following:

1) Be accountable for funding invested in education so far.

2) Determine how we will assess future outcomes.

3) Highlight effective technology-enhanced learning environments that impact student achievement.

4) Discover what makes a particular program educationally worthwhile so that it can be replicated.

If we view evaluation as a continuum, then there must be a:

*Shift from Quantitative to Qualitative Measures

*Shift from "out of the box," self-reported surveys to on-site,

customized measures adapted to your needs.

Currently, what passes for accountability includes the following:

*Lab Schedules

*Computer Lab Usage Reports

*Hardware and/or software purchase order requests

*Sign-in sheets

*Professional development evaluation forms

*Multiple-choice questionnaire on the quality of professional development

*TAAS and/or other Standardized Test Scores

And despite the ESEA's focus on quantitative evaluation, research findings refute a single approach. We know that longitudinal studies are difficult to conduct, and that teachers need several years to become comfortable using technology, so their progress is difficult to assess. We also know that it is very difficult to isolate the effects of technology within a dynamic environment where so many elements work together. Also, and perhaps most importantly, controlled experimentation--quasi-experimental design--is probably the wrong model.

NEEDS ASSESSMENT OR SOMETHING MORE?

The most challenging obstacle any school district faces is not knowing where it stands. Unless you know exactly how your district perceives technology, you will make little progress. There are several ways to assess where you are now that will help provide a portrait of the district, instructional practice at the campus level, and the level of technology implementation.

Our school districts need to establish a baseline of technology needs and to chart patterns of growth and achievement. Such an assessment should be performed before beginning any further professional development or infrastructure investment.

This baseline will allow the school district to show growth from its present position as a result of new initiatives, an influx of grant funds, or bond monies. Furthermore, it provides valuable data in a way that meets the reauthorized ESEA's emphasis on quasi-experimental research. Establishing accountability through a variety of procedures is critical to the school district's funding of new initiatives in technology integration. Unfortunately, a Mankato-type scale that looks strictly at technology needs can no longer be our focus.

Use of these assessments should give you an overview of where the district is in terms of hardware, software, and staff perception of your district program by campus. In the long run, your assessment pays off when asking for money from your school board (i.e., they cannot avoid the issue of hardware access if the perception is that administrators have all the computers and teachers do not) and from the State.

Some of the tools that I would recommend for use by a district technology coordinator/director include:

1) Levels of Technology Implementation (LOTI): The school district should contract with an external evaluator to determine the levels of instructional use and levels of technology implementation. An excellent tool is the Levels of Technology Implementation (LOTI) 50-item survey developed by Dr. Chris Moersch. The survey would establish a baseline, not only for teachers but also for administrators. Dr. Moersch would then follow up in person to review the gathered data with the school district staff. This type of approach has proven successful in the sixth-largest school district in Texas, Northside ISD, as well as in several other districts in Area 20. The LOTI Survey results can help the district respond to:

*What impact is technology having on student achievement?

*How has technology professional development changed teaching practices?

*How are teachers "integrating" technology into their instruction?

The LOTI is a consistent set of measures that accurately reflects the progressive nature of teaching with technology. It has been used nationally and internationally to assess over 100,000 classroom teachers' levels of technology implementation. Yet its most important impact has been to get principals sharing ideas about how to assess, and what constitutes, technology integration.

2) School Technology and Readiness (STaR) Chart aligned with the Texas Long-Range Plan for Technology: Your school district will profit from employing the STaR Chart as a way to determine its progress toward meeting the goals of the Long-Range Plan for Technology as well as its own goals. The Texas STaR Chart will also assist you in measuring the impact of state and local efforts to improve student learning through the use of technology.

The STaR Chart will also help your school district respond to these questions:

*What is your district's current educational technology profile?

*What evidence can be provided to demonstrate the district's progress in meeting the goals of the Long Range Plan for Technology?

*What areas should your district focus on to improve its level of technology integration to ensure the best possible teaching and learning?

Currently, a campus version of the Texas STaR Chart is being developed for use. Combining these two instruments--the district and the campus STaR Chart--will provide valuable information to school district central office staff. More importantly, new funding initiatives may require the Charts to be completed. In my work with the PAVE TIE Grant implementation, a FileMaker Pro version of the STaR Chart was developed and is available for download at http://www.mguhlin.net/service/dbase/

3) Taking a Good Look at Instructional Technology (TAGLIT): Another self-report survey is the Bill and Melinda Gates Foundation's TAGLIT. The Texas Association of School Administrators (TASA) uses it for its Technology Leadership Academy (interested in TASA's TLA? See Sidebar 2).

TAGLIT is a "set of assessment tools designed to provide school personnel with information about the current status of instructional technology use at their school." After participants complete TAGLIT online, the data are organized into several areas: Plan, Teachers, Students, Community, and Stuff. According to the TAGLIT web site, each component is described in the following way:

1. Plan addresses technology planning, policies, and expenditures.

2. Teachers addresses teachers' technology skills, teachers' technology use in teaching and learning, technology-related professional development, and technology-related instructional support.

3. Students addresses students' technology skills, students' frequency of technology use for learning, and students' and teachers' perspectives about how technology affects their classroom environment.

4. Community addresses technology-related community connections.

5. Stuff addresses hardware, software and electronic/online resources, and technical support.

The aspects of an instructional technology program that a school develops over time are scored on a 4-point scale. The lower the score, the less developed is that aspect of the program. The four points refer to the following stages of development:

1. Embarking: The school is just getting started with this aspect of using technology for teaching and learning.

2. Progressing: The school is making some effort and showing some progress with this aspect of using technology for teaching and learning.

3. Emerging: The school is making considerable effort and showing considerable progress with this aspect of using technology for teaching and learning.

4. Transforming: The school's use of technology is transforming the way teaching and learning take place. (Information about TAGLIT obtained from http://www.taglit.org/Taglit/Preview/)
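To make the scoring concrete, here is a minimal sketch, in Python, of how scores on a 4-point rubric like this might be rolled up by component and labeled with the stages above. The component names follow the TAGLIT areas already described, but the sample item scores, the averaging, and the rounding rule are assumptions for illustration, not TAGLIT's actual method.

# Hypothetical sketch: roll up 4-point rubric scores by component for one school.
# Component names follow the TAGLIT areas above; the item scores are invented.

STAGES = {1: "Embarking", 2: "Progressing", 3: "Emerging", 4: "Transforming"}

scores = {
    "Plan": [2, 3, 2],
    "Teachers": [3, 3, 4],
    "Students": [2, 2, 3],
    "Community": [1, 2, 2],
    "Stuff": [3, 4, 3],
}

for component, items in scores.items():
    average = sum(items) / len(items)
    stage = STAGES[round(average)]  # nearest stage; a real rubric may define its own cut-offs
    print(f"{component:10s} {average:.1f}  {stage}")

Run for each campus, a summary like this gives central office a quick read on which components sit at Embarking and which are approaching Transforming.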

4) External Evaluation Team: Self-reported surveys are not enough. It is important to perform quasi-experimental research and classroom observation in the district in line with ESEA's guidelines. In my work with the PAVE grant, I have had the opportunity to work with The Solutions and Services Group, Inc. In working with them, I have gained some insights into the evaluation process--a process that is important to all districts now in light of the reauthorized ESEA.

For example, in the PAVE grant's Program Evaluation Design--authored by The Solutions and Services Group, Inc. and submitted to the Texas Education Agency--one can read that

...the evaluation plan incorporates both quantitative and qualitative data to provide a comprehensive and balanced record of project implementation and of the impact of the use and incorporation of technology in teaching, learning and administrative tasks. The role of the external evaluation firm is to monitor, inform, and guide project implementation and its impact from an objective and standardized perspective.

Notice that the project's progress will be recorded and monitored in this manner:

1) Pre-project survey to set a baseline

2) Post-project survey to measure outcomes and identify areas of need

3) Training evaluations to record the quality, type, and degree of professional development

4) Use of participants' electronic journals to provide rich information regarding the change in the level of use and incorporation of technology

5) On-site observations that provide ongoing monitoring of project implementation

6) Formal and informal interviews or focus groups of stakeholders and participants to measure specific program impact

What is fascinating about these is that they embody a balanced approach to formative assessment and evaluation. Process evaluation, product evaluation, and ongoing monitoring come together not only to help us find value AFTER the program is over but also to help us find our way as the program progresses.
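As a rough sketch of the quantitative piece, the pre- and post-project survey averages can be compared campus by campus to show growth against the baseline. The campus names and ratings below are invented for illustration, not PAVE data, and a real analysis would sit alongside the journals, observations, and interviews listed above.

# Hypothetical sketch: compare pre-project and post-project survey averages
# per campus. All names and ratings below are invented for illustration.

pre_survey = {"Campus A": 2.1, "Campus B": 1.8, "Campus C": 2.6}
post_survey = {"Campus A": 2.9, "Campus B": 2.4, "Campus C": 2.7}

print(f"{'Campus':10} {'Pre':>5} {'Post':>5} {'Growth':>7}")
for campus, baseline in pre_survey.items():
    growth = post_survey[campus] - baseline
    print(f"{campus:10} {baseline:5.1f} {post_survey[campus]:5.1f} {growth:+7.1f}")

The numbers only become meaningful when read alongside the qualitative sources in the same plan.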

WHY USE THEM NOW? DON'T WE HAVE ENOUGH DATA ALREADY?

Using these measures, your school district can develop all its professional development activities around gathered information rather than supposition about what is happening in its schools. In this new environment, it is important that research in your schools be an active, recursive process. The differences become as clear as shifting from still photographs to digital video. Both types of media can give us insights into the work we are about.

As Director of Instructional Technology, your job is to look at new technologies and find ways to integrate them so as to maximize their impact on professional development practices. It is also to establish a process for the acceptance of technologies, one that involves primary stakeholders and pilot projects.

The suggested approaches for collecting data may provide some insights through the keyhole of the teacher's door. A balanced approach to data collection is preferred. After all, as the old saying goes, "Not everything that counts can be counted, and not everything that can be counted counts."

SIDEBAR #1: LOTI

Visit the official LoTi (Level of Technology Implementation) portal, called the LoTi Lounge, which will serve as a resource for teachers and school districts throughout the country. Provided below is the URL to the LoTi Lounge.

http://www.lotilounge.com

This site will enable you to do the following:

*preview the MyePortfolio module

*try out the online Professional Development Planner

*explore the latest research and articles about the LoTi Framework

*elicit feedback from those who have used the LoTi online assessment system

*sign up to take the LoTi survey

Two of the modules highlighted include the Professional Development Planner and MyePortfolio. The Professional Development Planner allows teachers to create personalized professional development plans based on different diagnostic tools.

MyePortfolio allows participants to create their own online portfolio by adding different types of work samples (e.g., video clips, web pages, PDF documents, images) based on specific local, regional, and/or national teacher standards. MyePortfolio also enables outside reviewers/assessors to assess each portfolio entry based on those pre-established standards.

Both modules can be accessed directly from the LoTi Lounge front page or separately:

LoTi Professional Development Planner (PDP): http://lotilounge.com/modules.php?name=pdp

MyePortfolio: http://lotilounge.com/modules.php?name=eport

SIDEBAR #2: SIGN UP FOR TASA'S TECHNOLOGY LEADERSHIP ACADEMY

Get more information regarding TASA's Technology Leadership Academy online at http://www.tasanet.org/TechConf/

SIDEBAR #3: Quantitative and Qualitative

Quantitative data are reported in numerical form: test scores, numbers of people attending, rate of drop-out, or percentages.

Quantitative data can be counted and measured. They are useful for describing concrete phenomena and for statistical analysis of your results. Surveys and questionnaires often include rating scales that can be used in quantitative analyses.

Qualitative data are reported in narrative form: written descriptions of program activities, testimonials of program effects, comments about how a program was or was not helpful. Qualitative information can be used to describe how your project functions and what it may mean to the people involved. Qualitative data provide a context for the project and can be used to convey information about people's perceptions of and reactions to a program.
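A small sketch can show how a single survey yields both kinds of data: ratings that can be averaged, and comments that have to be read and summarized. The items and responses below are invented examples, not drawn from any of the instruments discussed in this article.

# Hypothetical sketch: one survey record holding both quantitative ratings
# and qualitative comments. The responses are invented examples.

responses = [
    {"rating": 4, "comment": "The lab schedule finally matches my lesson plans."},
    {"rating": 2, "comment": "I still can't get the projector to work reliably."},
    {"rating": 3, "comment": "Training helped, but I need follow-up in my classroom."},
]

# Quantitative: the ratings can be counted and averaged.
average_rating = sum(r["rating"] for r in responses) / len(responses)
print(f"Average rating: {average_rating:.2f} (n={len(responses)})")

# Qualitative: the comments must be read, grouped, and described.
for r in responses:
    print("-", r["comment"])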

SIDEBAR #4: External Evaluators

Have professional, external evaluators who have no vested interest in the program participating from the beginning; these may not be available through the low bid. Evaluations should be formative. Be aware of the ethics of control and experimental groups.

Pay what is necessary to get a "good" mix of quantitative and qualitative data rather than just a standard percent of the grant.

Who is the recipient of the change? Focus on what they are supposed to be doing.

Acknowledgements

Dr. Philip Linerode, Office of Testing and Evaluation, Northside ISD

Ellen Bell, Texas Association of School Administrators (TASA)

Dr. Sandra Pakes and Dr. I.G. Young, The Solutions and Services Group, Inc.

(Email: sjpakes@msn.com, igyoung@swbell.net)

REFERENCES NOT MENTIONED IN THE TEXT

Qualitative Interview in Evaluation: http://ag.arizona.edu/fcr/fs/cyfar/Intervu5.htm

Evaluating Your Program Using the Logic Model: http://www.open.org/~westcapt/ev5a1.htm