What Do You Mean by Reliability of a Test?

The degree to which an evaluation instrument yields stable and consistent results is known as reliability. In everyday terms, reliability is the "consistency" or "repeatability" of your measurements.

Reliability, then, is the degree to which a measure produces consistent findings. Your bathroom scale is dependable if it displays a roughly constant reading each time you step on it.
If Bilal administers her survey to the same individual twice and the first administration indicates that he is extremely racist, she would not expect the opposite result on the second. All other factors being equal, a reliable measure yields the same result.

There are two types of reliability: internal and external.

Internal reliability assesses the consistency of results across items within a test.

External reliability refers to the extent to which a measure varies from one use to another.

Types of Reliability

Internal

(the extent to which a measure is consistent within itself)

  • Split-half method: Measures the extent to which all parts of the test contribute equally to what is being measured (see the sketch below).
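To make the split-half idea concrete, here is a minimal Python sketch (assuming hypothetical item scores, not data from the text): the test items are split into odd and even halves, the two half-test totals are correlated, and the Spearman-Brown formula adjusts that correlation to estimate reliability for the full-length test.

```python
import numpy as np

def split_half_reliability(item_scores: np.ndarray) -> float:
    """Estimate internal reliability from odd- and even-item halves.

    item_scores: 2-D array, one row per respondent, one column per item.
    Returns the Spearman-Brown corrected split-half coefficient.
    """
    odd_half = item_scores[:, 0::2].sum(axis=1)   # total score on odd-numbered items
    even_half = item_scores[:, 1::2].sum(axis=1)  # total score on even-numbered items

    # Pearson correlation between the two half-test scores
    r_half = np.corrcoef(odd_half, even_half)[0, 1]

    # Spearman-Brown correction estimates the reliability of the whole test
    return 2 * r_half / (1 + r_half)

# Hypothetical responses: 6 respondents answering a 6-item test scored 1-5
scores = np.array([
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [5, 5, 4, 5, 5, 5],
    [3, 3, 3, 2, 3, 3],
    [1, 2, 1, 1, 2, 1],
    [4, 4, 5, 4, 4, 4],
])
print(f"Split-half reliability: {split_half_reliability(scores):.2f}")
```

A coefficient close to 1 suggests that both halves of the test are measuring the same construct; a low value suggests the items are not contributing equally to what is being measured.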

External

(the extent to which a measure varies from one use to another)

  • Test-retest: Measures the stability of a test over time.
  • Inter-rater: The degree to which different raters give consistent estimates of the same behaviour (a short sketch of both checks follows below).
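As a hedged illustration of both external checks, the Python sketch below estimates test-retest reliability as the Pearson correlation between two administrations of the same test, and inter-rater reliability as Cohen's kappa between two raters' judgements. All scores and ratings here are invented for the example.

```python
import numpy as np

# Test-retest: the same 6 people take the same test two weeks apart (hypothetical scores)
time1 = np.array([12, 18, 25, 9, 21, 15])
time2 = np.array([13, 17, 24, 10, 22, 14])
test_retest_r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest correlation: {test_retest_r:.2f}")

# Inter-rater: two raters classify the same 8 behaviours (0 = absent, 1 = present)
rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 1])
rater_b = np.array([1, 0, 1, 0, 0, 1, 0, 1])

def cohens_kappa(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    categories = np.union1d(a, b)
    observed = np.mean(a == b)  # proportion of items on which the raters agree
    # Chance agreement: for each category, multiply the raters' marginal proportions
    expected = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (observed - expected) / (1 - expected)

print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")
```

A high test-retest correlation indicates the measure is stable over time, and a kappa well above zero indicates that agreement between raters is better than would be expected by chance.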