Saturday, January 26, 2019

Psychological Assessment Essay

Section A
Write an essay on the process you would follow in developing a psychological assessment measure. Discuss the steps that you would take in this process, including how you would choose items for your measure, how you would evaluate the reliability and validity of your test, and the issue of establishing norms. Discuss the theory comprehensively and illustrate your understanding with an example or examples.

Introduction
The process of developing a psychological test is a complex and lengthy one (Foxcroft & Roodt, 2001), but aspects related to the planning of a psychological test are not always sufficiently emphasised and are sometimes not mentioned at all (Kaplan & Saccuzzo, 1997). When the test is to be used in a multicultural context, attention needs to be paid to the cultural relevance (and potential bias) of the test right from the planning and design phase, instead of only becoming sensitive to cultural aspects from the item-writing phase onwards. Also, given that we do not have a long history of developing culturally appropriate tests applicable to diverse groups in South Africa, test developers need to grapple with basic issues such as what methods of test administration might be appropriate or inappropriate for certain cultural groups and what language to develop the test in, for example. More time needs to be spent in the planning phase exploring and critically considering test design issues.

Planning phase
The first and most important step in developing psychological measures is the planning phase. Planning involves writing out a plan of what one intends to achieve. Careful thought needs to go into deciding on the aim of the measure, defining the content of the measure and the key elements of the test plan.
A test plan consists of the following aspects: (a) defining the purpose and rationale for the test as well as the intended target population, (b) defining the construct (content domain) and creating a set of test specifications to guide item writing, (c) choosing the test format, (d) choosing the item format, and (e) specifying the administration and scoring methods (Robertson, 1990).

Specifying the aim of the measure
The first step is to state the aim of the measure, the name I will use and how the outcome will be used. If I am conducting this study in South Africa I will also need to mention that the measure will be used in a multicultural society. I would need to elaborate on what I mean by multicultural by highlighting the context. I would state the age of the test takers and their educational status. The information above is important because it may have an impact on the test specifications and design. I would need to state whether the test would be paper-based or computer-based. When that decision is made I would need to consider whether the test-takers are familiar with such tests. The test takers may underperform on the evaluation because they are not practised in the medium of measurement. This may impact the validity of the study to be conducted. I would also need to decide whether the test will be administered individually or in a group setting.

Because psychological measures are developed in western societies, the emphasis is on individualism. When working in a multicultural society, however, it is important to consider the norms of the society I would be working in. In some cultures, for example, the group identity is valued over the individual identity. This could have an effect on the content of the measure.

Defining the content of the measure
Here I need to figure out what I want to measure and why. This will show me what to focus on during the other steps.
A working definition of the construct is needed. This includes identifying exactly what I aim to get out of this research study. To do this I need to embark on a comprehensive literature review. I will see how my topic has been investigated in the past and spot the gaps. I can then decide whether I am conducting a new study or adapting an existing study to the South African context. Later I will need to make the same decision about the instrument I will use for data gathering. Since I would be working in South Africa, I need to decide whether separate norms should be developed for test takers from advantaged and disadvantaged schooling backgrounds and/or for urban and rural areas. I would assemble a team of content, language and cultural experts to scrutinise the content being developed. Nell (1994) states that language is a critical moderator variable of test performance. If the test taker is not proficient in the test language, it is difficult to ascertain whether poor performance is due to a language or communication barrier or to the test-taker having a low level of the construct being measured. I would produce the test in a bilingual format and specify the source language. Work would need to be done to ensure that the construct is meaningful for every group.

Developing the test plan (specifications)
Once the construct to be assessed has been defined and operationalised, a decision needs to be reached regarding what approach will be employed to guide the development of the test content and specifications. Decisions will be made regarding the format to be used (open-ended items, forced-choice items, etc.), how the items will be scored (objective or subjective tests), and whether time limits will be imposed. The language and cultural experts are once again needed during this step. Sometimes psychological constructs conceptualised in western society do not have a known equivalent in African languages.
For such constructs the translated version would need to formulate the construct in a way that is closest to the English meaning. This will require more time from the African-language test taker. The test specification should eliminate the possibility of construct bias. The format therefore needs to be standardised for a variety of cultural groups, or it should at least include items that will be considered easy, moderate and difficult by all groups. Although these steps follow after each other, I will need to go backwards and forwards to ensure content and construct validity.

Item writing
The second step is item writing. Once the test specifications have been finalised, the team of experts writes or develops the items. The trend in South Africa has been to simply adapt an already-developed test to accommodate South African test takers. This is not necessarily the easier option. Firstly, concepts are not always understood in the same way in different societies. For example, the term "depression" is sometimes taken to mean simply being very sad in some societies. It is therefore important to ensure construct validity even for an English test given to English mother-tongue speakers of a society different to that of the test's origin. If the measure will be administered to children, face validity will be ensured through the use of large writing, colour and drawings. The length of the items should also be considered. With every step of item writing, reliability is attended to.

Reviewing the items
An item bank is then developed and the items reviewed in terms of whether they match the content specification and whether they are well written. Items which do not meet the specifications are removed from the bank before it can be used to generate criterion-referenced tests. The team of experts should focus on content validity and indicate whether the items are free from stereotyping and potential bias. The experts will then return the item list with recommendations.
The flagged items will then need to be re-written or revised.

Assembling and pre-testing the experimental version of the measure
Items need to be arranged in a logical way. Since we are dealing with a multicultural society, we need to ensure that the items are balanced and appear on appropriate pages. The length of the items in each category needs to be finalised. For long problem-based items, time adjustments need to be made. A decision would have been made with regard to whether the test is paper-based or computer-based, and the appropriate apparatus needs to be made available.

Pre-testing the experimental version of the measure
The test items have to be administered to a large group of examinees. This sample should be representative of the population for which the eventual test is intended. This will be the norm group.

Item analysis phase
During this phase items are checked for relevance. Again we see whether each item is reliable and valid for the study. The characteristics of the items can be evaluated using classical test theory (CTT) or item response theory (IRT). At the item level, the CTT model is relatively simple. CTT does not invoke a complex theoretical model to relate an examinee's ability to success on a particular item. Instead, CTT collectively considers a pool of examinees and empirically examines their success rate on an item (assuming it is dichotomously scored). This success rate of a particular pool of examinees on an item, known as the p value of the item, is used as the index of item difficulty (strictly, it is an inverse indicator of difficulty, with a higher value indicating an easier item). The ability of an item to discriminate between higher-ability and lower-ability examinees is known as item discrimination, which is often expressed statistically as the Pearson product-moment correlation coefficient between the scores on the item (e.g., 0 and 1 on an item scored right-wrong) and the scores on the total test.
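As a minimal sketch of the CTT item statistics just described, the snippet below computes each item's difficulty (p value) and discrimination (item-total correlation) from a small dichotomously scored response matrix; the data are invented purely for illustration, not taken from any actual test.

```python
# Sketch of classical test theory (CTT) item analysis on a small
# dichotomously scored (0/1) response matrix: rows = examinees,
# columns = items. The data are made up for illustration.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

n_examinees = len(responses)
n_items = len(responses[0])
total_scores = [sum(row) for row in responses]

def pearson(x, y):
    """Pearson product-moment correlation; applied to a 0/1 item
    this is equivalent to the point-biserial correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

for j in range(n_items):
    item = [row[j] for row in responses]
    # Item difficulty: proportion of examinees answering correctly
    # (an inverse indicator -- a higher p means an easier item).
    p = sum(item) / n_examinees
    # Item discrimination: correlation between item and total score.
    r = pearson(item, total_scores)
    print(f"item {j + 1}: difficulty p = {p:.2f}, discrimination r = {r:.2f}")
```

In practice items with very extreme p values or near-zero (or negative) discrimination would be flagged for revision or removal.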
When an item is dichotomously scored, this estimate is often computed as a point-biserial correlation coefficient. IRT, on the other hand, is more theory-grounded and models the probabilistic distribution of examinees' success at the item level. As its name indicates, IRT primarily focuses on item-level information, in contrast to CTT's primary focus on test-level information. The IRT framework encompasses a group of models, and the applicability of each model in a particular situation depends on the nature of the test items and the viability of the different theoretical assumptions about the test items.

Revising and standardising the final version of the measure
Once the qualitative and quantitative information has been gathered, the test is administered to the large sample for standardisation. All the items that were found to be unclear are simplified. Vocabulary and grammar are corrected. Split-half reliability is assessed. The translated version is checked through back-translation (into the source language). The items are finalised for the test. The final database is used to check reliability and validity. The administration and scoring instructions may need to be modified. Then the final version is administered.

Technical evaluation and establishing norms
The items can be analysed using item response theory. The characteristics of each item may be represented graphically by means of a curve which relates an individual's ability score to their probability of passing the item. Items with large variances are selected. The scores obtained by the norm group on the final test form are referred to as the norms of the test. To compare an individual's score with the norms, their raw score is converted to the same type of derived score as that in which the test norms are reported (e.g. percentile ranks, McCall's T scores, etc.).

Publishing and ongoing refinements
A test manual is compiled before a measure is published.
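The norm-conversion step described under technical evaluation above can be sketched as follows. The norm-group scores are invented for illustration, the percentile convention (half-weight for ties) is one of several in use, and a population standard deviation is assumed; a published manual would tabulate these conversions rather than compute them ad hoc.

```python
# Sketch of converting a raw score to derived scores against a norm
# group. The norm-group scores below are invented for illustration.
norm_scores = [12, 15, 18, 20, 21, 23, 24, 26, 28, 30]

def percentile_rank(raw, norms):
    """Percentage of the norm group scoring below the raw score,
    counting half of any ties (one common convention)."""
    below = sum(1 for s in norms if s < raw)
    ties = sum(1 for s in norms if s == raw)
    return 100.0 * (below + 0.5 * ties) / len(norms)

def t_score(raw, norms):
    """McCall's T: a linear transformation of the z score so that
    the norm group has mean 50 and standard deviation 10."""
    n = len(norms)
    mean = sum(norms) / n
    sd = (sum((s - mean) ** 2 for s in norms) / n) ** 0.5
    z = (raw - mean) / sd
    return 50 + 10 * z

raw = 24
print(f"percentile rank: {percentile_rank(raw, norm_scores):.1f}")
print(f"T score: {t_score(raw, norm_scores):.1f}")
```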
The manual should make information on the psychometric properties of the test easily understandable. It will be updated from time to time as more information becomes available.

Section B
List the steps that should be followed in the adaptation of an assessment measure for cross-cultural application and briefly explain what each step means.

1. Reasons for adapting measures
Cross-cultural assessment has become a sensitive issue due to specific concerns regarding the use of the same tests across cultures. By adapting an instrument, the researcher is able to compare already-existing data with newly acquired data, thus allowing for cross-cultural studies at both the national and international level. Adaptations can also save time and expense (Hambleton, 1993). Test adaptation can lead to increased fairness in assessment by allowing individuals to be assessed in the language of their choice (Hambleton & Kanjee, 1995).

2. Important considerations when adapting measures
The test can be compromised if there are problems between the test takers and the administrator. The administrator should therefore be familiar with the culture of the test-taker. They cannot take it for granted that the test taker will be receptive to the format of the test. This could lead to the score representing a lack of skill with regard to the format of the test instead of measuring the construct being assessed. Some languages, like isiZulu, require more time to be spent reading and would therefore require more time to complete the test.

3. Designs for adapting measures
Before selecting an assessment instrument for use in counselling or research, counsellors and researchers are trained to verify that the test is appropriate for use with their population. This includes investigation of validity, reliability, and the appropriate norm groups to which the population is to be compared.
Validity and reliability take on additional dimensions in cross-cultural testing, as does the question of the appropriate norm group. The instrument must be validly adapted, the test items must have conceptual and linguistic equivalence, and the test and the test items must be bias-free (Fouad, 1993; Geisinger, 1994). Two basic methods for test adaptation have been identified: forward translation and back-translation. In forward translation, the original test in the source language is translated into the target language, and then bilinguals are asked to compare the original version with the adapted version (Hambleton, 1993, 1994). In back-translation, the test is translated into the target language and then re-translated back into the source language. This process can be repeated several times. Once the process is complete, the final back-translated version is compared to the original version (Hambleton, 1994). Each of these adaptation processes has its strengths and limitations.

4. Bias analysis and differential item functioning
Another issue that must be considered in cross-cultural assessment is test bias. The test user must ascertain that the test and the test items do not systematically discriminate against one cultural group or another. Test bias may occur when the contents of the test are more familiar to one group than to another, or when the tests have differential predictive validity across groups (Fouad, 1994). Culture plays a significant role in cross-cultural assessment. Whenever tests developed in one culture are used with another culture there is the potential for misunderstanding and misuse unless cultural issues are considered. Issues of test adaptation, test equivalence and test bias must be considered in order to fully utilise the benefits of cross-cultural assessment.
5. Steps for maximising success in test adaptation
Hambleton (2004) summarised nine key steps that should be addressed when adapting or translating an assessment instrument.

6. Challenges related to test adaptation in South Africa
A disadvantage of adaptation is the risk of imposing conclusions based on concepts that exist in one culture but may not exist in the other. There are no guarantees that a concept in the source culture exists in the target culture (Lonner & Berry, 1986). Another disadvantage of adapting existing tests for use in another culture is that if certain constructs measured in the original version are not found in the target population, or if the construct is manifested in a different manner, the resulting scores can prove to be misleading (Hambleton, 1994). Despite the difficulties associated with using adapted instruments, the practice is important because it allows for greater generalisability and for the investigation of differences within a growing, diverse population. Once the test has been adapted, test equivalence must be determined.

References
Foxcroft, C.D. & Roodt, G. (2009). An introduction to psychological assessment in South Africa. Johannesburg: Oxford University Press.
Hambleton, R. K. (2001). The next generation of the ITC Test Translation and Adaptation Guidelines. European Journal of Psychological Assessment, 17, 164-172.
Hambleton, R. K. (2004). Issues, designs, and technical guidelines for adapting tests into multiple languages and cultures. In R. K. Hambleton, P. F. Merenda, & C. D. Spielberger (Eds.), Adapting educational and psychological tests for cross-cultural assessment (pp. 3-38). Mahwah, NJ: Lawrence Erlbaum Associates.
Van Ede, D.M. (1996). How to adapt a measuring instrument for use with various cultural groups: a practical step-by-step introduction. South African Journal of Higher Education, 10, 153-160.
