… confirms that "questions must not be opaque and ambiguous by nature and must not contain complex syntax, difficult vocabulary, or unintended clues." Leeds (2000, as cited in Swart, 2009) indicates that "Effective questions include problem-solving or informational questions." Leeds (2000), Black, Harrison & Lee (2003), Chin (2004), and Jones, Harland, Reid & Bartlett (2009), as cited in Köksal & Ulum (2018), confirm that "Efficient exam questions should cover various difficulty levels to refer to the different capabilities of learners." Piaget (2001) and Bruner (1960), as cited in Schneider (2017), argue that "assist devices on exams facilitate the interaction of the test taker with key exam elements so that they may better construct their understanding of test questions. Arguably then, use of such devices should actually improve test validity." Hand, Prain & Wallace (2002, as cited in Ali, 2005) showed that "students prefer low-order questions and don't prefer questions which need to be thought on." Black, Harrison & Lee (2003, as cited in Swart, 2009) state that "Effectual questions must help to raise issues on which academics need feedback or about which the students need to think." Lundberg (2004, as cited in Swart, 2009) further explains that "Short answers or multiple-choice questions requiring mainly factual recall tend to elicit surface learning, while essays (or long-answer questions) are more likely to encourage deep learning."

Ali (2005) concludes that "Teachers are in need of preparing questions which develop students' scientific thinking." Ali (2005) further elaborates that "teachers should prepare questions together and they should pay attention for choosing questions from every step of cognitive levels." Andrade (2005, as cited in Balch, Blanck & Balch, 2016) concludes that "a rubric provides feedback which in turn provides clear and individually focused diagnostic feedback." Lord & Baviskar (2007) confirm that "It is generally believed by the test creator that, while short-answer and multiple-choice questions can be used efficiently to test the lower levels of learning behaviors, they are not sufficient to assess the higher levels." Thompson, Luxton-Reilly, Whalley & Robbins (2008) confirm that "During the analysis of the examinations, we found examples of questions that could be reworded in such a way that the cognitive level is altered." Jones, Harland, Reid & Bartlett (2009, as cited in Köksal & Ulum, 2018) propose that "A good assessment requires an exam paper that covers different cognitive levels to accommodate diverse capabilities of learners." Swart (2009) confirms that "academics must acquire the art of skillful questioning if they are to produce effective questions that will engage students in higher order cognitive processes such as problem-solving and critical thinking." Swart (2009, as cited in Jayakodi, Bandara, Perera & Meedeniya, 2016), by the same token, adds that "When questions are prepared, there should be an effective balance between questions that assess the high level of learning and questions that assess the basic level of learning."
Swart (2009), accordingly, further explains that "The number of multiple-choice questions (38% on average in the Knowledge objective) used in these examination papers further suggests that surface learning is being promoted." Marquardt (2011, as cited in Swart, 2009) concludes that "the quality of the questions often depends on the nature of the topic. For example, children frequently ask questions that may merit a simple 'yes' or 'no' reply (closed-ended questions, according to Marquardt)." Marquardt (2011, as cited in Swart, 2009) further explains that "Critical thinking is promoted through open-ended questions." Demir & Eryaman (2012) conclude that "It is necessary to ask high cognitive level questions to enable prospective student teachers to think in a multifaceted way. Therefore, they can avoid the tendency of superficial thinking that they get used to by answering cognitive level questions." Demir & Eryaman (2012) further add that "The questions given in the exams by the instructors reflect the objectives, goals, outputs and the methodologies that the instructors apply in their teaching." Omar, Haris, Hassan, Arshad, Rahmat, Zainal & Zulkifli (2012, as cited in Köksal & Ulum, 2018) state that "Although a list of assessment types are available, a written exam is the most employed tool chosen by academic institutions."

Freahat & Smadi (2014, as cited in Köksal & Ulum, 2018) confirm that "While low level cognitive questions increase the acquisition of the accurate knowledge and pave the way for acquiring high-cognitive skills, high level questions are practical tools for prompting thinking and improving other cognitive skills like problem solving and decision making." Paul, Naik & Pawar (2014, as cited in Köksal & Ulum, 2018) confirm that "choosing the right question is obviously the most difficult part of forming the exam paper, in addition to being the most time taking activity." Abduljabbar & Omar (2015) point out that "the process of questions writing is very challenging step for the lecturer. The situation is getting more challenging when lecturers try to produce good quality and fair questions to assess different level of cognitive." Abduljabbar & Omar (2015) further add that "the question must be provided in accordance with the subject content learned by students to fulfill learning objectives." Balch, Blanck & Balch (2016) reasoned that "as long as tools such as rubrics are incorporated, the student and the teacher will produce the optimum learning experience. The reward will be mutual." Chandio, Pandhiani & Iqbal (2016) conclude that "if questions are repeated in examinations, which is a very dangerous trend as it gives rise to rote learning. The reason being, that even if the questions belong to higher order thinking domain and are repeated, the repetition will cause students to memorize the answers to such questions." Jayakodi, Bandara, Perera & Meedeniya (2016) state that "An exam question often falls into more than one level of