INTRODUCTION
Educational assessment is fast becoming a big enterprise in Nigeria. The first examination body in Nigeria, the West African Examinations Council, came into being in 1952 (Adeyegbe, 1993). The Joint Admissions and Matriculation Board followed it in 1978 (JAMB, 1995). The number of examination bodies increased recently when the Federal Government of Nigeria established the National Board for Educational Measurement and the National Business and Technical Examinations Board in 1991. Following this increase in the number of examination bodies in Nigeria, the influence of these bodies on the educational system in the country is likely to become tremendous. This development, if accompanied by reform in educational assessment, is probably appropriate, as it is in tune with the trend worldwide. For instance, Tamir (1993) opined that, "It appears that the 1990s will be remembered as the decade of reform in student assessment" (p. 537). This conference is, therefore, appropriate at this time to examine the challenges lying ahead of the examination bodies.
Being a stakeholder in educational assessment in Nigeria as a researcher, an author, and a parent, one cannot be indifferent to the opportunities that this forum may offer. The purpose of this paper is to draw the attention of participants to some ideas related to the assessment of students' knowledge in business education that may not have received adequate attention from the examination bodies hitherto. First, I propose that the examination bodies in Nigeria should rely solely on the multiple-choice format in their examinations to reduce cost and labour. Second, if examination bodies will use external personnel for item construction, I propose that they use personnel with a good background in education, especially those with a measurement and evaluation background, rather than subject specialists without an education background. Third, I discuss the need for examination bodies to intensify their research effort in such a manner that all their activities become research driven, either through in-house research or by catching up on the bodies of available research conducted by educators. Finally, on item construction, I discuss the need for examination bodies to be mindful of gender equity in their examination items. Concerning item validity, I shall present and discuss alternative ways of making multiple-choice items more valid, such that they test examinees' proper understanding.
CONSTRUCTION
Item construction is a crucial stage in any valid and reliable examination. According to Linderman (1970), "the production of a high-quality multiple-choice item is a relatively difficult task which requires experience, concentration, a thorough knowledge of the subject matter, and a good deal of patience" (p. 499). I intend to propose a number of ways by which we can meet these conditions in a manner that would improve assessment practice in Nigeria.
First, concerning item construction, examination bodies should review their policy on the personnel they use for item construction. There is no doubt that subject specialists are knowledgeable in their areas of specialization; however, they are not likely to be knowledgeable in item construction, which is the province of educators and teachers of the relevant subjects, or of educators with a bias towards test and measurement.
Second, it is important for examination bodies to base their work on research. There are several ways to do this. One is to base all they do on findings from their in-house research rather than on common experience that may not be generalizable. We can improve item quality when examination bodies make it a point of duty to calculate item difficulty indices, item discrimination indices, and foil (distractor) selection analyses after each major examination. I recognize that they usually carry out these types of analyses at the pilot stage, before the real examination, to establish the validity and reliability of the proposed examination items; however, carrying them out again after the examination would provide more authentic information that they can use to improve the existing items and make them reusable.
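A minimal sketch of what such post-examination analysis might look like is given below, in Python. The response layout, item identifiers, and the conventional 27% upper-lower split are illustrative assumptions, not the format of any particular examination body.

```python
from collections import Counter

def item_analysis(responses, key, item):
    """Illustrative post-examination analysis for one multiple-choice item.

    responses: list of dicts mapping item id -> chosen option, one per candidate
    key:       dict mapping item id -> correct option
    item:      id of the item to analyse
    """
    # Total score per candidate, used to form upper and lower ability groups
    totals = [sum(1 for q, ans in r.items() if key.get(q) == ans) for r in responses]
    ranked = sorted(zip(totals, responses), key=lambda t: t[0], reverse=True)
    n = len(ranked)
    k = max(1, int(round(0.27 * n)))          # conventional 27% upper/lower split
    upper, lower = ranked[:k], ranked[-k:]

    correct_in = lambda group: sum(1 for _, r in group if r.get(item) == key[item])

    difficulty = sum(1 for r in responses if r.get(item) == key[item]) / n
    discrimination = (correct_in(upper) - correct_in(lower)) / k

    # Foil (distractor) analysis: how often each option was chosen overall
    foil_counts = Counter(r.get(item) for r in responses)
    return difficulty, discrimination, dict(foil_counts)

# Example with three hypothetical candidates and two items
key = {"Q1": "B", "Q2": "D"}
responses = [{"Q1": "B", "Q2": "D"}, {"Q1": "A", "Q2": "D"}, {"Q1": "B", "Q2": "C"}]
print(item_analysis(responses, key, "Q1"))
```

An item with a very high or very low difficulty index, a near-zero or negative discrimination index, or a distractor that nobody chooses would then be revised before being returned to the item bank.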
Next, examination bodies need to base their work on the findings of research that educators conduct in the different subjects that they examine. This practice would serve to encourage teachers and students to be up-to-date in their knowledge and to be more responsive to new research information that educational researchers may provide them in textbooks, seminars, and conferences. An example in business education is the body of knowledge that is available about students' misconceptions and alternative conceptions in the various business subjects. Abimbola and Baba (1996) defined a misconception as "an idea that is clearly in conflict with scientific conceptions and therefore wrong" (p. 15). They also defined an alternative conception as "an idea that is neither clearly conflicting nor clearly compatible with scientific conceptions but which has its own value and is, therefore, not necessarily wrong" (p. 15). Evidence abounds that these misconceptions and alternative conceptions exist among members of the society, teachers, and students, and in textbooks (Abimbola & Baba, 1996).
What examination bodies need to do about these conceptions is to include them as distractors in their items to test students' proper understanding. Abimbola's (1996) booklet is, perhaps, the first attempt in Nigeria to base revision questions wholly on research findings related to misconceptions and alternative conceptions in business.
Third, a recurring theme among educators, especially business educators, and in society at large, is gender equity or gender friendliness. Gender equity in the construction of examination items requires that such items be closely examined to be sure that they do not possess any characteristic that may favour male or female candidates differentially. Reports of many studies showed small differences between male and female students in their performance on science achievement tests (Comber & Keeves, 1973; Mullis & Jenkins, 1988; National Assessment of Educational Progress, 1978). Researchers commonly use two methods for analyzing items for their gender friendliness: judgmental and statistical methods. The first method is a subjective one that uses trained judges to identify items that exhibit bias towards a particular gender through stereotypic terms or contexts. Examiners can either remove or redesign such items to make them gender friendly before using them. The second method is statistical: it compares how male and female candidates of comparable overall ability perform on each item and flags items that function differently for the two groups.
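As an illustration of the statistical approach, the sketch below flags an item whose difficulty differs consistently between male and female candidates in the same total-score band. The record layout, score bands, and flagging threshold are invented for the example; real screening would use an established procedure such as Mantel-Haenszel.

```python
def flag_gender_dif(records, item, threshold=0.10):
    """Illustrative screen for differential item functioning by gender.

    records: list of (gender, total_score, item_correct) tuples -- a made-up layout
    item:    item identifier, used only for the report
    Within each total-score band, compare the proportion of male and female
    candidates answering the item correctly, then flag the item if the
    average gap exceeds the threshold.
    """
    bands = {}
    for gender, total, correct in records:
        band = bands.setdefault(total // 10, {"M": [0, 0], "F": [0, 0]})
        band[gender][0] += int(correct)   # number correct in this band
        band[gender][1] += 1              # number of candidates in this band

    gaps = []
    for band in bands.values():
        (mc, mn), (fc, fn) = band["M"], band["F"]
        if mn and fn:                     # only compare bands with both groups present
            gaps.append(mc / mn - fc / fn)

    mean_gap = sum(gaps) / len(gaps) if gaps else 0.0
    return {"item": item, "mean_gap": mean_gap, "flagged": abs(mean_gap) > threshold}

# Hypothetical candidates: (gender, total score, answered this item correctly?)
records = [("M", 42, True), ("F", 45, True), ("M", 38, False), ("F", 39, True)]
print(flag_gender_dif(records, "Q7"))
```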
VALIDATION
A fourth thing that examination bodies need to do is to carry out an item validity analysis that is different from the classical one that people teach in traditional measurement and evaluation classes. Kind (1969) defined classical item analysis as:

The different methods of evaluating single test items in order to determine the degree of difficulty of the item and its ability to discriminate between successful and unsuccessful candidates. (p. 143)
From this definition, it would seem that the two components of item analysis are item difficulty and item validity. However, Borg and Gall (1979) add a third component, namely, item reliability. Also, Anastasi (1976) thinks that "items can be analyzed qualitatively, in terms of their content and form, and quantitatively, in terms of their statistical properties" (p. 198). Qualitative analysis includes the consideration of content validity and the evaluation of items in terms of effective item-writing procedures. Quantitative analysis includes mainly the measurement of item difficulty, item validity, and item reliability. Qualitative analysis of examination items usually takes place before administering the examination to the examinees, whereas quantitative analysis usually takes place after the examination. This is probably why Nunnally (1978) thinks that "it should be emphasized that item analysis of achievement tests is secondary to content validity" (p. 264).
Yarroch (1991) has proposed a technique for item validity analysis, and a coefficient for estimating item validity, that are different from the traditional ones. These proposals were a result of his research on the pilot examination organized by the Michigan Educational Assessment Program, designed to assess the science knowledge of all students in the state of Michigan, U.S.A. This process of item validity analysis relates to the "relationship between the examinee's knowledge and the knowledge the examination item was intended to measure" (p. 621). The process involves the use of clinical interviews, based on a sample of questions equivalent to the multiple-choice items, to probe the students' proper understanding. Figure 2 is an adaptation of a table that he prepared to represent the patterns of students' knowledge as a function of their item response patterns. From the table, the item's assessment of a student's knowledge is judged adequate if, after the interview, the researcher finds the response to be correct for a correct reason or incorrect for a wrong reason. It is judged inadequate if the researcher finds the response to be correct for a wrong reason or incorrect despite a correct reason. Examination bodies can carry out this kind of analysis as a kind of exit poll, whereby their representatives across the country interview examinees (by previous arrangement) immediately after an examination, and researchers subsequently track their scripts for analysis.
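The sketch below shows how exit-poll records of this kind might be classified using the two-by-two pattern described above. The labels are my reading of the adapted table rather than Yarroch's exact wording, and the record format is invented for illustration.

```python
def classify_item_response(answer_correct, reasoning_correct):
    """Classify one interviewed response using the two-by-two pattern above.

    The item's assessment is treated as adequate (valid) when the written
    answer and the interview reasoning agree, and inadequate (invalid) when
    they disagree. These labels are an interpretation, not the original table.
    """
    if answer_correct and reasoning_correct:
        return "adequate: correct answer for a correct reason"
    if not answer_correct and not reasoning_correct:
        return "adequate: incorrect answer for a wrong reason"
    if answer_correct and not reasoning_correct:
        return "inadequate: correct answer for a wrong reason (false positive)"
    return "inadequate: incorrect answer despite correct reasoning (false negative)"

# Hypothetical exit-poll records: (candidate id, answer correct?, reasoning correct?)
exit_poll = [("C001", True, True), ("C002", True, False), ("C003", False, True)]
for candidate, ans, reason in exit_poll:
    print(candidate, "->", classify_item_response(ans, reason))
```

The proportion of responses falling in the two "adequate" cells could then serve as a rough coefficient of item validity for that item.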
ADMINISTRATION
All the examination bodies in Nigeria use the multiple-choice form of items in their examinations, either wholly or partly. My strong position is that all the examination bodies in Nigeria can use the multiple-choice format for all their examinations, as JAMB is currently doing. The multiple-choice item is much better than all other item formats for measuring all kinds of cognitive objectives apart from those involving organization, synthesis, and verbal expression; in these latter areas, the essay item is best. Odor, Solanke, and Azeke (1986) had found that WAEC did not go beyond the application level in all its "O" level examinations. The present system of examination is so labour intensive and expensive that it may be difficult to sustain for long. All this goes to show that we can use multiple-choice items solely for assessing students' achievement in many subjects. What we need to do is to look for ways of improving the quality of examination items in a manner that will serve the same purpose as the current system. I recognize that this practice may encourage and accentuate the incidence of examination malpractice in the country. The solution will lie in employing responsible people to coordinate the invigilation of examinations, as JAMB is doing at present. Even when a computer breakdown occurs during the processing of the examination papers, a few examiners can easily mark multiple-choice examination papers manually, using templates. This type of examination is cost effective and will minimize the need to increase examination fees every year in line with inflationary trends. No examination body can afford to continue to increase its examination fees without giving thought to a reduction in the expenditure on its examinations. The National Board for Educational Measurement (NBEM), too, can follow this suggestion. My focus, therefore, will be on the use of multiple-choice items in assessment. I believe that what we need is to improve the manner in which we construct examination items so as to make them valid and reliable.
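To illustrate why multiple-choice scripts are cheap to process, here is a minimal scoring sketch that mirrors template marking; the answer key and candidate scripts are invented for the example.

```python
def score_script(responses, key):
    """Score one candidate's multiple-choice script against the answer key.

    This mirrors manual template marking: one mark per item whose chosen
    option matches the key; blanks and wrong options score zero.
    """
    return sum(1 for q, correct in key.items() if responses.get(q) == correct)

# Invented answer key and two hypothetical scripts
key = {"Q1": "B", "Q2": "D", "Q3": "A"}
scripts = {"C001": {"Q1": "B", "Q2": "D", "Q3": "C"},
           "C002": {"Q1": "B", "Q2": "B"}}          # Q3 left blank
for candidate, responses in scripts.items():
    print(candidate, score_script(responses, key))
```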
CONCLUSION
In this paper, I have focused my attention on ways to improve upon what examination bodies are doing at present. As they continue to work hard on the difficulty indices of their examination items, their discrimination power, and their reliability, examination bodies should make an effort to improve the validity of their examinations. This, I believe, they can achieve by concentrating their attention on preparing good multiple-choice items, using the techniques that I have suggested, thereby reducing cost and labour. There is also the need for them to hire appropriate external personnel to help in item construction. The preparation of good items at the outset will contribute to improved psychometric properties of the examination items. I cannot emphasize enough the need to be up-to-date concerning research findings in the areas of student learning and knowledge in the disciplines. Examination bodies cannot ignore their curriculum-developing role, whereby advances in knowledge and learning make their impact only if they are examinable. I reminded examination bodies of the need to jump on the bandwagon of gender equity so as not to find themselves left behind. This they can achieve by screening their items before and after administration to establish the gender equity of the items. Finally, I suggested alternative ways by which examination bodies can construct multiple-choice items that test examinees' proper understanding.
Examination bodies will continue to perform an important role in the educational development of the country. There is a need to rethink how to execute their mission statements in a manner that will make them friends of the public. There is a need for our examination bodies to have avenues of relating to the public other than through their dreaded examinations. One of the ways by which they can achieve this is to take an interest in how teachers teach their recommended syllabi. Concerning this, all examination bodies need to provide feedback to teachers and students, after each examination, on their expectations of what to teach and learn, respectively. The country benefits greatly if examination bodies play their part in ensuring that candidates demonstrate proper understanding of the contents of their syllabi. The country is not only interested in knowing how many candidates passed or failed; it is also interested in finding out why those who passed did so and why those who failed did not, so as to improve the performance of future candidates.
REFERENCES
Abimbola, I. O. (1996). J.S.S. integrated science revision questions with answers. Osogbo: Olatunbosun Publishers.

Abimbola, I. O., & Baba, S. (1996). Misconceptions and alternative conceptions in science textbooks: The role of teachers as filters. The American Biology Teacher, 58(1), 14-19.

Adeyegbe, S. O. (1993). The West African Examinations Council (WAEC) and curriculum development. In U. M. O. Ivowi (Ed.), Curriculum development in Nigeria (pp. 285-293). Abuja: The Editor.

Anastasi, A. (1976). Psychological testing (4th ed.). New York: Macmillan.

Bello, G. (in progress). Comparative effects of two forms of concept-mapping instructional strategies on senior secondary school students' achievement in biology. Ph.D. thesis, University of Ilorin, Ilorin.

Borg, W. R., & Gall, M. D. (1979). Educational research: An introduction (3rd ed.). New York: Longman.

Comber, L. C., & Keeves, J. P. (1973). Science education in nineteen countries. Stockholm: Almqvist & Wiksell.

Gipps, C. V. (1993). Reliability, validity and manageability in large-scale performance assessment. Paper presented in the symposium "Technical and Policy Issues in Performance Assessment: A British View" at the American Educational Research Association Annual Meeting, Atlanta, Georgia, U.S.A., April 12-16, 1993.

Jegede, O. J., Alaiyemola, F. F., & Okebukola, P. A. O. (1990). The effect of concept mapping on students' anxiety and achievement in biology. Journal of Research in Science Teaching, 27(10), 951-960.

Joint Admissions and Matriculation Board (1985). Biology. Lagos: JAMB.