
Evaluate Items and Response Scales

Issues involved

Most of the children evaluated occasionally envision diverse means through which they can effectively reason about an item and response scale (Pressley, Snyder, & Cariglia-Bull, 2007, p. 83). For instance, hopeful thinking among such children, though beneficial in the long run, may carry short-term costs, and the child’s cognitive pursuit of goals may be impeded (Pressley et al., 2007, p. 86). At the same time, hopeful reasoning can be crucial in aiding a child’s current medical condition and treatment. Such hope, as captured by the item and response scale, is best defined by its ability to contribute to a cognitive set of beliefs along the path to full recovery (Pressley et al., 2007, p. 88).

This section addresses the construction and analysis of Hope Scales for children with a comprehensive response criterion (Pressley et al., 2007, p. 93). Building the item and response scales is the first step towards a comprehensive instrument: items are pooled so as to reflect the urgency and direction of a child’s cognitive activity (Pressley et al., 2007, p. 95). Second, agreement is reached to select approximately six items reflecting urgency reasoning and another six reflecting response thinking (Pressley et al., 2007, p. 97).

The sample of hopeful reasoning among children is represented by the number of items chosen for analysis. Notably, the children’s concentration span is not strained during administration or response (Pressley et al., 2007, p. 101). This pilot study aims at gathering the most adequate responses on the diverse meanings and implications the scale holds for the children’s lives (Pressley et al., 2007, p. 103). Each participating child chooses a suitable response on a 6-option scale ranging from 1 (none of the time) to 6 (all of the time) (Pierce, Gardner, Cummings, & Dunham, 2009, p. 624). With the consent of parents and guardians, the children complete the item and response scales, and the analyses are conducted comprehensively thereafter (Pierce et al., 2009, p. 626).
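The scoring step can be sketched in a few lines. This is a minimal illustration assuming a simple sum score: the option labels match the 6-point range described above, but the mapping function and the example answers are hypothetical.

```python
# Minimal scoring sketch for the 6-option scale described above.
# The label-to-number mapping follows the 1 (none of the time) to
# 6 (all of the time) range; the example answers are hypothetical.

RESPONSE_OPTIONS = {
    "none of the time": 1,
    "a little of the time": 2,
    "some of the time": 3,
    "a lot of the time": 4,
    "most of the time": 5,
    "all of the time": 6,
}

def score_scale(answers):
    """Map verbal answers to numeric scores and return them with their sum."""
    scores = [RESPONSE_OPTIONS[answer.lower()] for answer in answers]
    return scores, sum(scores)

scores, total = score_scale(
    ["all of the time", "some of the time", "most of the time"]
)
# scores == [6, 3, 5], total == 14
```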

Based on the responses, items in the scales may be rewritten to simplify the structure of questions and responses (Pierce et al., 2009, p. 628). To test the validity of item responses, the same scale can be re-administered to the same children to check whether a coherent pattern emerges in the item responses (Pierce et al., 2009, p. 629). Once responses have been gathered from the scale feedback, several issues may arise (Pierce et al., 2009, p. 632). For instance, score differences between genders are closely monitored, though no significant differences are expected (Pierce et al., 2009, p. 634). Similarly, race had no significant effect on the responses collected, although notable differences appeared when genders were compared within the same racial group (Pierce et al., 2009, p. 637).
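Re-administering the scale and correlating the two sets of totals is one common way to check for a coherent response pattern. A sketch using a plain Pearson correlation; the scores below are invented for illustration, not pilot data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical total scores from the same children on two administrations
first = [28, 31, 25, 34, 29, 22]
retest = [27, 33, 24, 33, 30, 23]
r = pearson_r(first, retest)
# r close to 1 suggests a coherent (stable) pattern across administrations
```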

Major issues involved in creating items and response scales used to measure a construct

Among the major issues that arise in the creation of items and response scales is internal consistency, especially in responses to the questions from the scale (Pierce et al., 2009, p. 639). For instance, since the theory explaining hope among children demands summing the agentic and response contemplations, its components may become separated in the subsequent analysis, causing internal inconsistencies (Pierce et al., 2009, p. 641). Additionally, temporal stability poses a major threat to the operation and authenticity of item and response scales. The scales are built on dispositional measures, which hypothesize that any child retaking the scale should, in essence, produce broadly consistent scores (Pierce et al., 2009, p. 643). However, such consistency may be recorded only on the first test and may not be observed on subsequent retakes (Pierce et al., 2009, p. 645). In addition, variation in responses may be detected through coefficients of variation, which express the standard deviation as a ratio of the mean scale score.
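Internal consistency is commonly summarized with Cronbach's alpha, and the variation coefficient mentioned above is the standard deviation divided by the mean total score. A minimal sketch with hypothetical item scores; the four-item layout and the data are assumptions for illustration, not the pilot data.

```python
import statistics

def cronbach_alpha(items):
    """Internal consistency of a scale.

    items: one list of scores per item, each of length n_children.
    """
    k = len(items)
    item_var = sum(statistics.pvariance(i) for i in items)
    totals = [sum(child) for child in zip(*items)]
    return k / (k - 1) * (1 - item_var / statistics.pvariance(totals))

def coefficient_of_variation(totals):
    """Standard deviation expressed as a ratio of the mean scale score."""
    return statistics.pstdev(totals) / statistics.mean(totals)

# Hypothetical scores: 4 items answered by 5 children on the 1-6 scale
items = [
    [4, 5, 3, 6, 2],
    [4, 6, 3, 5, 2],
    [5, 5, 4, 6, 3],
    [3, 5, 3, 6, 2],
]
totals = [sum(child) for child in zip(*items)]
alpha = cronbach_alpha(items)          # close to 1 -> internally consistent
cv = coefficient_of_variation(totals)  # spread relative to the mean total
```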

Evaluation of the Snyder process

The evaluation of the Snyder process involves three primary phases responsible for building a cohesive process (Marston & Smith, 2001, p. 617): critical evaluation of the situation, evaluation of the outcomes, and the design and evaluation of the short-term life cycle (Marston & Smith, 2001, p. 619). Evaluating the process entails developing ideals, defining the targeted outcomes, and precisely comparing the two (Marston & Smith, 2001, p. 621). Similarly, the activities involved in the item and response scales should be defined explicitly. Evaluation of the outcomes should rest on a precise understanding of the performance indicators (Marston & Smith, 2001, p. 623). The achievements of the process are critically reviewed to ensure that the validity and authenticity of the whole process are maintained (Marston & Smith, 2001, p. 626).

During the short-term life-cycle analysis, the core issue is a critical review of the criteria used in the evaluation processes that create the information systems (Marston & Smith, 2001, p. 627). Notably, the Snyder process is all-inclusive, seeking to involve the relevant stakeholders in evaluation. Its methods are rigorous and action-oriented, allowing enough flexibility in evaluating the items and response scales (Marston & Smith, 2001, p. 629). Essentially, the process combines qualitative and quantitative approaches, encouraging in-depth analysis and reflection in a flexible manner (Marston & Smith, 2001, p. 630). The validity of the evaluation will depend on the outcomes of the process measured in both the long run and the short run.

Definition and discussion of the construct

Arguably, the validity of a construct rests on the definition of the quality to be measured and on the overall content of the items and responses (Zaichkowsky, 2005, p. 343). The scale aims to measure a variety of valid constructs while exploring a number of untested assumptions. In defining the constructs, one should first ascertain whether there is satisfactory support for the assumption that results collected from the scales match outside observations (Zaichkowsky, 2005, p. 346). Second, one should ask whether the predicted assumptions of hope relate in any way to the assumed competence and control of the children and their ability to respond to the scales (Zaichkowsky, 2005, p. 343). The last task is to examine the predicted values relating to a child’s self-worth and self-actualization. Finally, examining self-presentation biases in responses to the item and response scale is necessary to ensure that the responses are the best obtainable under such conditions (Zaichkowsky, 2005, p. 348).

A response scale

The response scale developed in this document describes the general thinking processes of the examined children (Cox III, 2000, p. 408). Each question tends to prompt a child’s thinking about the situation that best describes him or her (Cox III, 2000, p. 409). The child ticks the most fitting response from the descriptions provided, noting that the questions test cognitive ability rather than right or wrong answers (Cox III, 2000, p. 411). Such questions include, but are not limited to, the one below.

I am okay with my situation (responses may include):

  • All of the time
  • Most of the time
  • A lot of the time
  • Some of the time
  • A little of the time
  • None of the time

The questions continue through number six, and the responses are used to gauge the children’s cognitive ability and reasoning (Cox III, 2000, p. 412).

References

Cox III, E. P. (2000). The optimal number of response alternatives for a scale: A review. Journal of Marketing Research, 17(4), 407-422.

http://www.rangevoting.org/optinumb.pdf

Marston, S. A., & Smith, N. (2001). States, scales and households: Limits to scale thinking? A response to Brenner. Progress in Human Geography, 25(4), 615-619.

http://phg.sagepub.com/content/25/4/615.full.pdf

Pressley, M., Snyder, B. L., & Cariglia-Bull, T. (2007). How can good strategy use be taught to children? Evaluation of six alternative approaches. In Transfer of learning: Contemporary research and applications (pp. 81-120).

http://www.aare.edu.au/data/publications/1998/naj98081.pdf

Pierce, J. L., Gardner, D. G., Cummings, L. L., & Dunham, R. B. (2009). Organization-based self-esteem: Construct definition, measurement, and validation. Academy of Management Journal, 32(3), 622-648.

http://journal-bmp.de/wp-content/uploads/2012/05/13-21_Kanning_final1.pdf

Zaichkowsky, J. L. (2005). Measuring the involvement construct. Journal of Consumer Research, 12(3), 341-352.

http://www.sfu.ca/~zaichkow/JCR%2085.pdf