Measuring Vision and Vision Loss
Table Of Contents
ASPECTS OF VISION LOSS
ASSESSMENT OF VISUAL ACUITY
VISUAL ACUITY MEASUREMENT
ASSESSMENT OF FUNCTIONAL VISION
DIRECT ASSESSMENT OF VISUAL ABILITIES AND FUNCTIONAL VISION
DIRECT ASSESSMENT OF PARTICIPATION
ASPECTS OF VISION LOSS
Since the visual system alone provides as much input to the brain as all other senses combined, it is not surprising that vision loss can have a devastating impact on people's lives. Different observers have different points of view and therefore emphasize different aspects of vision loss and its consequences. Clarity about these differences is important.1 They will be discussed using as a conceptual framework the four aspects of functional loss that were first introduced in the World Health Organization Classification of Impairments, Disabilities and Handicaps (ICIDH).2 The aspects are distinct, although different publications may use slightly different terms to describe them (Table 1).
Two of the four aspects refer to the organ system; the other two refer to the person. The first aspect is that of anatomic and structural changes. The second aspect is that of functional changes at the organ level; examples include visual acuity loss and visual field loss. The third aspect describes the generic skills and abilities of the individual. The final aspect points to the social and economic consequences of a loss of abilities. In colloquial use, persons with vision loss are often described as “blind”; this terminology is inappropriate since most people with vision loss are not blind but have residual vision. We will return to this issue when discussing ranges of vision loss.
ANATOMIC AND STRUCTURAL CHANGES
This aspect describes the underlying disorders or diseases at the organ level. Ophthalmoscopy and slit-lamp biomicroscopy have given ophthalmology tools to describe anatomic changes in more detail than is possible for many other organ systems. Most of the ophthalmic literature is devoted to this aspect. Yet, these changes give us relatively poor cues to the severity of their functional consequences.
FUNCTIONAL CHANGES AT THE ORGAN LEVEL
This aspect describes functional changes at the organ level. Here again, ophthalmology has developed unique tools that can measure visual functions, such as visual acuity and visual field, in great detail. These tools are well developed and give objective measurements. These measurements can be used for two purposes: to assist in diagnosing the underlying disorder and to predict the functional consequences (Fig. 1). For example, tests such as electroretinography and visual evoked potentials (VEPs) are helpful in diagnosing the underlying condition but are poor predictors of the functional consequences. Because visual acuity loss can have many different causes, visual acuity testing adds little to the differential diagnosis, but it can help in predicting the impact on activities of daily living (ADLs). The Ishihara color test is good at diagnosing even minor red-green deficiencies for genetic studies, but it overestimates the functional consequences. The D15 color test, on the other hand, was designed to be insensitive to minor deficiencies and to detect only those that might have functional consequences. The discussion in this chapter is oriented toward the functional consequences.
SKILLS AND ABILITIES OF THE INDIVIDUAL
This aspect reaches beyond the description of organ function by describing the skills and abilities of the individual. It describes how well the individual is able to perform ADLs given the vision loss. This aspect has been described under different names. In the field of vision, the term functional vision is used. In ICIDH-80,2 loss (or lack) of ability was described as disability. Its successor, ICIDH-2,3 provides a taxonomy of activities and of the ability to perform them. The use of the term disability is discouraged since it can have different meanings in different contexts. (Having a disability may be a synonym for having an impairment; being disabled points to a loss of ability; being on disability points to an economic consequence.) In the AMA Guides to the Evaluation of Permanent Impairment,4 the term impairment refers to organ function, and impairment rating refers to an estimate of the ability to perform activities of daily living.
SOCIETAL AND ECONOMIC CONSEQUENCES
The last aspect describes the societal and economic consequences for the individual caused by an impairment or by a loss of ability. In ICIDH-80 this aspect was described as handicap and measured in terms of loss of independence; in ICIDH-2 it is described under the heading Participation. Handicaps do not preclude participation. The story of Helen Keller is one example of how some people can achieve full participation despite extraordinary handicaps.
The different aspects are measured in very different ways. Visual functions are measured with clinical tests, such as a letter chart, a tangent screen, or a color test. Functional vision is assessed by the ability to perform generic ADLs. Different impairments have different effects. Visual acuity loss affects activities such as reading ability and face recognition. Visual field loss manifests primarily by difficulties in orientation and mobility (O&M) tasks. The participation aspect looks beyond the ADL abilities to the actual environment. How well is the individual able to hold a job and to earn a living? What “reasonable accommodations” are mandated by statutes such as the Americans with Disabilities Act (ADA)? Do difficulties in face recognition limit a person's social activities? This aspect is not limited to generic daily living skills but can consider the effect of specific environmental conditions and demands. Uncorrected myopia, for instance, would be a severe handicap for a hunter but might be an asset for a watchmaker.
Improving the participation aspect is the ultimate goal of all medical and social interventions. There clearly are links between the aspects: a disorder may cause an impairment, an impairment may cause a loss of abilities, a loss of abilities may cause a lack of participation. However, these links are not rigid. Medical and surgical interventions can reduce the impairment caused by a disorder. Assistive devices may improve abilities in the face of a given impairment. Changes in the human and physical environment may increase participation, regardless of reduced abilities. The art of rehabilitation is to manipulate each of these links so that a given disorder results in the least possible loss of participation.
The outcome of various interventions must be measured in different ways. Visual acuity measurement is useful as an outcome measure for medical and surgical interventions but cannot be used to measure the outcome of rehabilitative interventions. Rehabilitative effects must be judged by an improved ability to perform ADLs. This can be expressed in an ability profile.
This chapter will pay much attention to visual acuity and visual acuity measurement. The reader should keep in mind, however, that visual acuity is only one of many organ functions and that organ function is only one of the many aspects of vision loss. Particularly among the elderly, measuring functions such as contrast sensitivity, glare sensitivity, and vision at low luminance may reveal deficits that are missed by the usual visual acuity measurement at high contrast.5
ASSESSMENT OF VISUAL ACUITY
The visual function that is measured most often is visual acuity. Here again, different users may measure different aspects of visual acuity. Various basic aspects of visual acuity, such as detection, resolution, and hyperacuity, are discussed elsewhere. This chapter considers the clinical testing of visual acuity, which is based on letter recognition. Letter recognition is a rather complex function, requiring not only the optical ability to resolve the image but also the cognitive ability to recognize it and the motor ability to respond. In young children, in developmentally delayed individuals, and in elderly stroke patients, it may be their inability to respond, rather than optical factors, that limits their test performance.
Reading tests have been used since before the Middle Ages to test the function of the eye. Major changes started to occur in the middle of the 19th century.
In 1843 Kuechler, a German ophthalmologist in Darmstadt, wrote a treatise advocating the need for standardized vision tests.6 He developed a set of three charts to avoid memorization. Unfortunately, he was a decade too early, and his work was almost completely forgotten.
Around 1850 began what would later be called the Golden Age of Ophthalmology. In 1850, Franciscus Donders, from Utrecht, The Netherlands, visited William Bowman, of anatomic and histologic fame, at an international conference in London. There he met Albrecht von Graefe, who would become the father of German clinical ophthalmology. Donders and von Graefe became lifelong friends.* With Bowman and Hermann von Helmholtz, who invented the ophthalmoscope in 1851, they became the foursome that would lead ophthalmology to become the first organ-oriented specialty. In 1850 von Graefe had just opened his famous eye clinic in Berlin. In 1852 Donders would open what would later become the Royal Dutch Eye Hospital in Utrecht.
Donders later wrote, “I had just seen Jaeger [Friedrich, Eduard's father] performing cataract surgery alternately with the left and the right hand, when a young man stormed into the room embracing his preceptor. It was Albrecht von Graefe. Jaeger thought that we would fit well together and we soon agreed. Those were memorable days. Von Graefe was my guide for all we heard in practical matters, and in scientific matters he listened eagerly to the smallest detail. We lived together for a month to separate as brothers. To have William Bowman and Albrecht von Graefe as friends became an incredible treasure on my life's path.”
Thus, the scene had changed considerably when, in 1854, Eduard von Jaeger, the son of a well-known ophthalmologist in Vienna, published a set of reading samples.7 His reading samples were first published as an appendix to his book About Cataract and Cataract Surgery.8 They became an immediate success as a means to document functional vision. Because Vienna was an international center, he published samples in German, French, and English as well as a variety of Central European languages. He used fonts that were available in the State Printing House in Vienna and labeled them with the numbers from the printing house catalog.
Meanwhile Donders, who was a professor of physiology before he decided to concentrate on ophthalmology, was working on his epoch-making studies on refraction and accommodation. He clarified the nature of hyperopia as a refractive error, rather than as a form of “asthenopia,” and brought the prescription of glasses from trial and error at the county fair to a scientific routine. His work would be published in London in 1864.9 For this work, Donders needed not only reading samples for presbyopes but also distance targets to use in the refractive process of myopes and hyperopes. Initially, he had used some of the larger type samples from Jaeger's publication as a distance target. However, he felt the need for a more scientific method and for a measurement unit to measure visual function. He coined the term visual acuity to describe the “sharpness of vision” and defined it as the ratio between a subject's performance and a standard performance. In 1861, he asked his coworker and later successor Herman Snellen to devise a measurement tool.
In 1862 Snellen published his letter chart.10 His most significant decision was not to use existing typefaces but to design special targets, which he called optotypes. He experimented with various targets designed on a 5 × 5 grid (Fig. 2). Eventually, he chose letters (Fig. 3). Some others published charts based on Donders' formula in the same year, using existing typefaces rather than optotypes. Snellen's chart prevailed and spread quickly around the world. One of the early big orders came from the British army, wanting to standardize the testing of recruits.
To implement Donders' formula, Snellen defined “standard vision” as the ability to recognize one of his optotypes when it subtended 5 minutes of arc. This choice was inspired by the work of the English astronomer Robert Hooke, who, two centuries earlier,11 had found that the human eye can separate double stars when they are 1 minute apart. Since Snellen chose an external, physical standard, others could accurately reproduce his charts. This was different from Jaeger's samples, which were based on existing typefaces. When others wanted to reproduce them, they had to use whatever typefaces were available locally. This accounts for the wide variability among today's Jaeger samples.
Donders and Snellen were well aware that their standard represented less than perfect vision and that most normal healthy eyes could do better. Thus, it is wrong to refer to “20/20” (1.0) vision as “normal,” let alone as “perfect” vision. Indeed, the connection between normal vision and standard vision is no closer than the connection between the standard 12-inch measure for the American foot and the average length of “normal” American feet. The significance of the 20/20 (1.0) standard can best be thought of as the lower limit of normal or as a screening cutoff. When used as a screening test, we are satisfied when subjects reach this level and feel no need for further investigation, even though the average visual acuity of healthy eyes is 20/16 (1.25) or 20/12 (1.6).
While Snellen was preparing his chart, Donders already commissioned a study by one of his doctoral students to document the normal changes in visual acuity with age12 using prototypes of Snellen's symbols. The study was published in 1862, the same year that Snellen published his chart. The similarity with more recent data (Fig. 4) is remarkable.
Since Snellen's days, few major improvements in visual acuity measurement have been made. Many tried to devise better optotypes, but, as A.G. Bennett remarked in an exhaustive review of historical developments while preparing for the British standard,15 “The road of visual acuity measurement is littered with stillborn charts.”16 Some developments, however, are worth mentioning.
In 1866 John Green of St. Louis spent some time with Donders and Snellen and wrote a small paper there about the measurement of astigmatism. He developed his own chart, which he presented to the American Ophthalmological Society in 1868,17 modifying a prior proposal from 1867. His chart featured sans-serif letters (Snellen used letters with serifs), proportional spacing of the characters, and a geometric progression of letter sizes (10 steps = 10×; Fig. 5), three features that are now part of standardized letter chart design. He was a century too early; his proposals gained little acceptance. Green went back to letters with serifs because letters without serifs were said to “look unfinished.” A century later, the British standard would choose sans-serif letters, because letters with serifs “look old-fashioned.”
Snellen originally calibrated his charts in Parisian feet. At the time there were some 20 different measurement systems used in Europe. It is not surprising that the uniform metric system18 was gaining ground. Snellen soon changed from 20 Parisian feet to 6 meters or, for adherents of the decimal system, to 5 meters. Today, the 20-foot distance prevails in the United States, 6 meters prevails in Britain, and 5 or 6 meters is used in continental Europe. Conversion between these different measurements is awkward. In 1875 Ferdinand Monoyer* of Lyons, France, proposed to replace the fractional Snellen notation with its decimal equivalent (e.g., 20/40 = 0.5, 6/12 = 0.5, 5/10 = 0.5).19 Decimal notation makes it simple to compare visual acuity values, regardless of the original measurement distance, and is used in large parts of Europe (Table 2).
Monoyer is also known for the introduction of the diopter20 in 1872. The diopter is the reciprocal of any metric distance; it greatly simplified lens formulas. Earlier, the power of a lens was expressed by its focal distance (f). Changing to the reciprocal of the focal distance (D) simplified the awkward formula 1/f1 + 1/f2 = 1/f3 to D1 + D2 = D3. We will see later that the diopter notation can also simplify Snellen's formula when used for near vision.
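The arithmetic advantage of the diopter can be illustrated with a short Python sketch; the function name is ours, introduced for illustration only:

```python
# Sketch of why diopters simplify lens arithmetic.
# A lens power in diopters is the reciprocal of its focal length in meters.

def power_from_focal_length(f_m: float) -> float:
    """Diopters = 1 / focal length (meters)."""
    return 1.0 / f_m

# Combining two thin lenses in contact.
# Focal-length form: 1/f1 + 1/f2 = 1/f3 (awkward)
f1, f2 = 0.50, 0.25                 # focal lengths in meters
f3 = 1.0 / (1.0 / f1 + 1.0 / f2)

# Diopter form: D1 + D2 = D3 (simple addition)
D1 = power_from_focal_length(f1)    # 2 D
D2 = power_from_focal_length(f2)    # 4 D
D3 = D1 + D2                        # 6 D

assert abs(D3 - 1.0 / f3) < 1e-9    # both forms agree
```

The same reciprocal trick reappears below when Sloan's M-unit notation turns near-vision acuity into simple division.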
Edmund Landolt had worked with Snellen in Utrecht and later became a professor of ophthalmology in Paris. In 1874 Snellen and Landolt had cooperated in publishing a major chapter on “optometrology,”21 the science of measuring vision. They recognized that not all of Snellen's optotypes were equally recognizable. This led Landolt to propose the broken ring symbol (1888), a symbol that has only one element of detail and varies only in its orientation.22 Landolt's Cs (Fig. 6) would become the preferred visual acuity measurement symbol for laboratory experiments but gained only limited acceptance in clinical use.
Relatively little happened in the period that followed. Efforts at standardization were made, such as a standard proclaimed by the International Council of Ophthalmology in 1909.24 Such documents were filed and never gained a wide following. That clinicians did not feel an urgent need for standardization can be explained by the fact that everyday uses of the letter chart do not require it. For refractive correction, any set of targets will do, since the only question is “better or worse?” For screening, the distinction between “within normal limits” and “not within normal limits” is the most important. We have seen that Snellen's standard is well positioned for screening purposes. At the lower end, the difference between 20/200 (0.1) and 20/400 (0.05) is unimportant for screening purposes.
After 1945 the interest in low-vision rehabilitation was gaining ground. It was recognized that most of those considered “industrially blind” actually had some level of usable vision. In 1952 the first low-vision services were opened in New York at the Industrial Home for the Blind and at the New York Lighthouse. For rehabilitation purposes the difference between 20/200 and 20/400, which was unimportant for screening, became very important, since the patient with 20/400 needs twice as much magnification as the patient with 20/200. It is not surprising then that major refinements in clinical visual acuity measurement came from individuals involved in low-vision rehabilitation.
In 1959 Louise Sloan, the founder of the low-vision service at the Wilmer Eye Institute of Johns Hopkins University in Baltimore, designed a new optotype set of 10 letters25 (see Fig. 11). She chose sans-serif letters while maintaining Snellen's 5 × 5 grid. This is in contrast to the British standard,16 which selected a 4 × 5 grid for its sans-serif letters. She recognized that not all letters were equally recognizable. To avoid this problem she proposed to use all 10 letters on each line. The larger letter sizes thus required more than one physical line.
Louise Sloan also proposed a new letter size notation.26 To implement Donders' definition of visual acuity as the ratio between a subject's performance and a standard performance, Snellen had used the following formula:

V = d / D

where d = the distance at which the subject recognizes the optotype, and D = the distance at which a standard eye recognizes the optotype.
Sloan simplified this rather verbose definition and made the use of the metric system implicit by introducing the term M-unit for the “distance in meters at which a standard eye recognizes the optotype” (i.e., at which the optotype subtends 5 minutes of arc). The formula then becomes

V = m / M

where m (lower case) = test distance (in meters), and M (upper case) = letter size (in M-units).
In line with other definitions of measurement units in the SI system, this terminology allows us to define the measurement unit for visual acuity more easily by stating that standard acuity (1.0, 20/20) represents the ability to recognize a standard letter size (1 M-unit) at a standard distance (1 meter).
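Sloan's metric relation (visual acuity = test distance in meters divided by letter size in M-units) lends itself to a short Python sketch. The function names and the physical-size helper are our illustrative assumptions, not part of the chapter:

```python
import math

def m_unit_height_mm(M: float = 1.0) -> float:
    """Physical height of an M-unit letter: 5 arcmin at 1 meter (about 1.45 mm)."""
    return math.tan(math.radians(5 / 60)) * 1000 * M

def visual_acuity(test_distance_m: float, letter_size_M: float) -> float:
    """Decimal visual acuity: V = m / M."""
    return test_distance_m / letter_size_M

# Standard acuity: a 1 M-unit letter recognized at 1 meter -> V = 1.0 (20/20).
assert visual_acuity(1, 1) == 1.0

# The same 4 M letter read at 4 m, 2 m, and 1 m yields 1.0, 0.5, and 0.25,
# illustrating that any letter size can represent any acuity value,
# depending on the viewing distance.
print([visual_acuity(d, 4) for d in (4, 2, 1)])  # [1.0, 0.5, 0.25]
```

This is the same three-way relationship among letter size, viewing distance, and visual acuity that the nomogram below displays graphically.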
The relationship between the three variables—letter size, viewing distance, and visual acuity—can be demonstrated in the nomogram in Figure 7. Connecting the markers for any two of the variables with a straight line will point to the marker for the third variable. It demonstrates that any letter size can represent any visual acuity value, depending on the viewing distance. The gray scales represent the preferred, metric notations, as will be discussed later. The nonmetric scales are given for comparison. The Jaeger numbers in this figure are based on Jaeger's original print samples.27
In the 1960s, the WHO had surveyed national definitions of legal blindness and found that 65 countries used as many different definitions. In 1974 the World Health Assembly approved the 9th Revision of the International Classification of Diseases (ICD-9).28 In it, the old dichotomy between “legally sighted” and “legally blind” was abandoned for a series of (numbered) ranges of vision loss. In the same year, the International Council of Ophthalmology (ICO)29 adopted the same ranges, extended them to include normal vision and adopted the naming convention used in ICD-9-CM30 and here.
In 1976, Ian Bailey and Jan Lovie (then at the Kooyong Low Vision Service in Melbourne) published a new chart31 featuring a novel layout with five letters on each row and spacing between letters and rows equal to the letter size. This layout standardized the crowding effect and the number of errors that could be made on each line. Thus, the letter size became the only variable between the acuity levels. Their charts have the shape of an inverted triangle and are much wider at the top than traditional charts. Like Sloan's chart, they followed a geometric progression of letter sizes.
That same year, Hugh Taylor, also in Melbourne, used these design principles for an illiterate E chart,32 used to study the visual acuity of Australian Aborigines. He found that, as a group, Australian Aborigines had significantly better visual acuity than Europeans.14 This is another reason not to regard 20/20 visual acuity as “normal” or as “perfect” vision (see Fig. 4).
Based on the above work, Ferris and colleagues of the National Eye Institute chose the Bailey-Lovie layout, implemented with Sloan letters, to establish a standardized method of visual acuity measurement for the Early Treatment of Diabetic Retinopathy Study (ETDRS).33 These charts were used in all subsequent clinical studies and did much to familiarize the profession with the new layout and progression (Fig. 8). Data from the ETDRS were used to select letter combinations that give each line the same average difficulty, without using all letters on each line.
The ICO approved a new Visual Acuity Measurement Standard, also incorporating the above features.23
VISUAL ACUITY MEASUREMENT
Ranges of Vision Loss
Vision loss is not an all or none phenomenon. Since the 1970s, the WHO has recognized this by replacing the simplistic dichotomy between those who are considered legally blind and those who are considered legally sighted with a set of ranges. In ICD-928 and ICD-9-CM,30 the range of low vision took its place between the ranges of normal (or near-normal) vision and blindness (or near-blindness). The word low indicates that these individuals do not have normal vision; the word vision indicates that they are not blind. The ranges used in ICD-9-CM are listed in Table 3.
Although these changes were made a quarter century ago, the use of the term blindness to denote partial vision loss is still prevalent. This is regrettable, since it fosters misconceptions among patients and practitioners. Patients tend to accept the statement that they are legally blind as an irreversible verdict of hopelessness. Telling them that they have severe low vision (the corresponding ICD-9-CM term) tells them that they have a problem but that there are ways to cope with it. To call a patient with a severe vision loss “legally blind” is as preposterous as calling a patient with a severe heart ailment “legally dead.”
Letter recognition, on which clinical visual acuity measurement is based, is a rather complex function that involves not only optical factors but also cognitive and motor abilities. When choosing our test parameters, we strive to keep the cognitive and motor requirements minimal so that we measure mainly optical factors. Within the group of optical factors, we strive to keep factors such as contrast and illumination optimized so that the main remaining variable is magnification.
Visual acuity can be thought of as the reciprocal of the magnification threshold for letter recognition. Magnification is the factor on which Snellen's formula is based. If a subject needs letters that are twice as large or twice as close as those needed by a standard eye, the visual acuity is said to be 1/2 (20/40, 0.5); if the magnification need is 5×, the visual acuity is 1/5 (20/100, 0.2); and so forth.
It is not always possible to avoid the cognitive factors. This is the case for infants and preschool children who do not yet know the entire alphabet. For them, we often use other methods, such as grating detection or picture recognition. It is important to realize that these are different tasks, which may have different magnification requirements. Similar considerations exist for developmentally delayed individuals. Sometimes it appears that the motor concept of directionality that is required to respond to tumbling Es is a limiting factor. Testing with different modalities may help to give an insight into these nonoptical factors. In elderly stroke patients with macular degeneration, the question may arise whether inability to read is the result of the macular degeneration or of the stroke. Failure to respond to larger print may point to cognitive rather than optical factors. In the following discussions, it will be assumed that cognitive and motor factors are indeed trivial. Even so, many choices remain to be made, including test distance, letter size progression, criterion, contrast and illumination, visual acuity notation, and test symbols.
CHOICE OF TEST DISTANCE FOR NORMAL AND NEAR-NORMAL VISION.
Most patients seen in ordinary practice have visual acuity in the range of normal and near-normal vision (20/60 or better, ICD-9-CM; see Table 3). For these patients, the most commonly used testing distances are 20 feet, 6 meters, and 5 meters. These distances were chosen not because they are especially appropriate for visual acuity measurement but because at these distances the optical difference with infinity may be ignored. The stimulus for the development of the letter chart came from Donders' work on refraction. Traditional chart designs reflect the emphasis on screening and on refractive use. In the near-normal range, the steps between letter sizes are small; for lower acuity they become larger (see Fig. 10); for acuity worse than 20/200 (0.1), vague statements such as “count fingers” and “hand motions” are used.
In 1973 Hofstetter proposed the use of a 4-meter test distance34 for use in smaller rooms. For visual acuity measurement, this distance is as valid as any other distance, provided that it is properly entered into the Snellen formula. Sloan liked the 4-meter distance because it made for easy conversion to a 40-cm reading distance. The ETDRS charts adopted it because charts with the Bailey-Lovie layout would have to be substantially wider if designed for 5 or 6 meters. At 4 meters, however, the accommodative demand becomes 0.25 diopter and can no longer be ignored. Another option for small rooms is the use of mirrors.
For young children, a test distance of 10 feet or 3 meters is often recommended, because it is easier to hold their attention at the shorter distance.
CHOICE OF TEST DISTANCE FOR LOW VISION.
A much smaller group of patients has visual acuity in the low-vision range (less than 20/60, ICD-9-CM; see Table 3). For this group, the magnification need for visual rehabilitation becomes an important objective. Kestenbaum35 pointed out that the magnification need can be found by taking the reciprocal of the visual acuity (e.g., 20/100 requires 100/20 = 5×, 20/200 requires 200/20 = 10×). Bringing the chart from 20 feet (6 meters) to 10 feet (3 meters) can double the measurement range, but bringing the chart to 1 meter extends it by a factor of 6. Measuring at 1 meter has the additional advantage that the Snellen fraction is as simple as possible (1/…) and can be converted easily to an equivalent for any other distance by multiplying numerator and denominator by the same number (e.g., 1/20 = 20/400 = 5/100 = 6/120 = 0.05). The 1-meter column in Table 3 shows that a 1-meter chart with letters up to 50 M can cover the entire low-vision range down to 1/50 (20/1000, 0.02). Taking the same chart to 10 feet would extend the measurement range only to 20/300 (0.06).
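Kestenbaum's reciprocal rule and the 1-meter fraction conversions can be sketched as follows; the function names are ours, chosen for illustration:

```python
from fractions import Fraction

def magnification_need(snellen_num: int, snellen_den: int) -> float:
    """Kestenbaum's rule: magnification need = reciprocal of the acuity fraction."""
    return snellen_den / snellen_num

def equivalent_fraction(numer: int, denom: int, new_numer: int) -> str:
    """Rescale a Snellen fraction to express it for a different test distance."""
    v = Fraction(numer, denom)                 # acuity as an exact ratio
    return f"{new_numer}/{new_numer / float(v):g}"

assert magnification_need(20, 100) == 5.0      # 20/100 requires 5x
assert magnification_need(20, 200) == 10.0     # 20/200 requires 10x

# 1/20 measured at 1 meter, expressed for other distances:
print(equivalent_fraction(1, 20, 20))  # 20/400
print(equivalent_fraction(1, 20, 5))   # 5/100
print(equivalent_fraction(1, 20, 6))   # 6/120
```

The simplicity of the 1-meter fraction (1/…) is what makes these conversions a matter of mental arithmetic at the bedside.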
At short distances, such as at 1 meter, it becomes critically important to maintain the viewing distance accurately. A movement of only 10 cm (4 inches) would introduce a 10% error. This can be prevented with a 1-meter cord attached to the chart (Fig. 9). Such charts can be homemade or purchased commercially.36
Optical correction for refractive error is important for this group, but the question “better or worse?” loses significance when the patient cannot see the letters on a chart at 20 feet. Being able to see several lines on a 1-meter chart can provide major encouragement and better responses to subjective refraction. Presbyopic patients need a 1 D correction for the 1-meter distance. This is easier to provide than a 1/3-D correction for a 10-foot (3-meter) distance.37,37a
CHOICE OF LETTER SIZE PROGRESSION.
Snellen's original charts had small steps for the normal range and larger steps for the lower ranges. Introduction of the decimal acuity notation19 led to charts with visual acuity steps in 0.1 increments. On these charts the steps at the top of the scale, such as 0.9 → 1.0 → 1.1, are too small to be practical. If equal increments of the denominator were used, the steps at the bottom of the scale would be too small to be useful. The only scale that can span the full range is a logarithmic scale, based on equal ratios between each pair of successive lines. This is in accordance with Weber-Fechner's law,38 which states that geometric increments in stimulus give rise to linear increments in sensation. Westheimer39 has shown that this also holds for visual acuity. Figure 10 compares various progressions.
Use of Preferred Numbers.
Various geometric progressions are possible. The one that fits best with the decimal system is one in which 10 steps equal 10× , so that the same numbers repeat in each 10× interval, with only a shift in decimal place. A very convenient feature of this series is that 3 steps equal 2× . When this series includes the values 1 and 10, it is known as the preferred numbers series. It is extensively used in international standards* and, indeed, is the subject of an international standard itself.40 This is the series that Green used in 1868.
Its use in standards goes back to Renard, a French army engineer, who used it in the 1870s to reduce the number of cables for hot-air balloons from 400 to 17. In his honor, the series is also known as Renard series.
An important characteristic of the preferred numbers series is that the product or quotient of two preferred numbers is again a preferred number. Thus, if letter sizes and viewing distances follow the series, so will the resulting visual acuity numbers. A visual acuity chart based on this feature was published by M.C. Colenbrander41 in 1937.
Sloan and Bailey both used the progression but apparently were unaware of the preferred numbers standard. For the Sloan and ETDRS charts, this does not make a difference, since 20 feet and 4 meters are both preferred numbers. Bailey anchored his series at a 6-meter viewing distance, which is 5% off the closest preferred number (6.3); therefore, his letter sizes include values such as 19, 48, and 95 instead of 20, 50, and 100 (Table 4). For clinical use, these 5% differences can be ignored. The tables and figures in this chapter are based on the use of preferred numbers.
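The preferred-number progression can be generated programmatically. This sketch (names ours) verifies the “10 steps = 10×” and “3 steps ≈ 2×” properties described above:

```python
# R10 preferred-number (Renard) series: 10 steps per decade,
# each step a ratio of 10**(1/10), about 1.2589.

def r10_series(start: float, steps: int) -> list[float]:
    """Geometric series with ratio 10**(1/10), rounded to 3 significant digits."""
    ratio = 10 ** 0.1
    return [float(f"{start * ratio**i:.3g}") for i in range(steps)]

sizes = r10_series(1.0, 11)
print(sizes)  # [1.0, 1.26, 1.58, 2.0, 2.51, 3.16, 3.98, 5.01, 6.31, 7.94, 10.0]

# Key properties of the series:
assert sizes[-1] == 10 * sizes[0]            # 10 steps = 10x
assert abs(sizes[3] / sizes[0] - 2) < 0.01   # 3 steps ~ 2x (1.2589**3 ~ 1.995)
```

Note how the same digits repeat in each decade with only a shift of the decimal point, which is why letter sizes, viewing distances, and acuity values chosen from the series stay within the series.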
CHOICE OF CONTRAST AND ILLUMINATION.
Contrast and illumination both influence visual acuity. Fortunately, in the range of commonly used values, this influence is minimal. If contrast is reduced to a level where it affects visual acuity, we speak of a contrast sensitivity test, which is discussed elsewhere. If illumination is lowered to threshold values, we may speak of a dark adaptation test.
Visual acuity is usually not affected until contrast drops below 20%. Normal visual acuity charts have contrasts of 80% or better. For use in a routine eye examination, projector charts in a dim or darkened room are generally preferred. In the United States, the average projector chart has a luminance of about 85 cd/m2; European charts are generally brighter, up to 300 cd/m2. The lower luminance has the advantage that the pupil may be wider, so that refractive errors may be more obvious; the brighter charts have the advantage that they suffer less from stray light, which causes contrast degradation. The ICO Visual Acuity Measurement Standard23 recommends a range that includes both the lower and the higher values.
To predict the everyday performance of patients, a printed chart in a lighted room is preferred. Front lighting is easiest to implement. Back lighting of a translucent chart on a light box gives the most even and most reproducible illumination. The usual backlit ETDRS chart has a luminance of about 200 cd/m2. For patients with conditions such as albinism or rod dystrophy, it should be possible to reduce the illumination, which may result in a significant increase in visual acuity.
A presentation method that will undoubtedly gain more widespread use in the future is presentation on a computer screen. This allows presentation of single letters as well as presentation in a letter chart format. It also allows control over parameters such as crowding, contrast, and brightness.
CHOICE OF VISUAL ACUITY NOTATION.
The result of the visual acuity measurement can be recorded in a variety of ways.
True Snellen Fractions.
The notation promoted by Snellen was that of a true Snellen fraction, in which the numerator indicates the actual test distance and the denominator indicates the actual size of the letter seen. The advantage of this notation is that it indicates the actual test conditions. The disadvantage is that it becomes awkward to compare visual acuity values measured under different conditions. This is especially true for projector charts, where the projector magnification is often adjusted to accommodate fractional viewing distances.
To overcome this difficulty, Snellen equivalents are used. In Europe, the decimal equivalent of the Snellen value is used most often. This notation is clear because there is no numerator or denominator. The notation becomes confusing when the decimal notation is converted back to a pseudo-Snellen fraction. For example, 5/25 = 0.2 = 2/10; the 2/10 fraction would suggest that the subject saw a 10 M letter at 2 meters, instead of a 25 M letter at 5 meters.
In the US Notation, a 20-foot fraction is usually used as a Snellen equivalent. For example, in an examination lane of 18 or 21 feet, the true Snellen fractions would be 18/18 or 21/21. Instead, the visual acuity is recorded as 20/20 in both cases. Thus, seeing 20 as the numerator of a visual acuity fraction rarely implies that the actual measurement was made at 20 feet.
In Britain, the 6/6 notation is similarly used as a Snellen equivalent.
Visual Angle Notation was used by Louise Sloan. It refers to the visual angle of the stroke width of 5 × 5 letters. Thus, 1 minute equals 20/20 (1.0), 2 minutes equals 20/40 (0.5), and so forth. The visual angle is the reciprocal of the visual acuity value and equals the denominator of the 1-meter Snellen fraction. Others have used the acronym MAR. In the context of physiologic optics, this term is usually interpreted as “minimal angle of resolution” and best describes grating acuity; in the context of psychophysics and clinical testing it might be better interpreted as “minimum angle of recognition,” while in the context of vision rehabilitation it might be interpreted as “magnification requirement.” Because higher MAR values indicate poorer vision, MAR should be considered a measure of vision loss.
LogMAR Notation was introduced by Bailey.31 As the name implies, logMAR is the logarithm of the MAR value, thus converting a geometric sequence of letter sizes to a linear scale. Like MAR, logMAR is a notation of vision loss, since positive logMAR values indicate reduced vision, while normal vision (better than 20/20, 1.0) is indicated by negative logMAR numbers. Standard vision (20/20, 1.0) equals 0. On a standard chart, each line is equivalent to 0.1 logMAR; thus +1.0 logMAR means 10 lines lost or 20/200 (0.1), and +2.0 logMAR means 20 lines lost or 20/2000 (0.01).
Since Bailey used the logMAR notation with a geometric progression of letter sizes, the term logMAR chart is often used to imply a geometric progression. This is not necessarily so; a logarithmic scale could be applied to any progression. The decimal values and reverse scale do not make the logMAR notation particularly user-friendly. For everyday clinical practice, Snellen equivalents are easier, since they relate directly to the measured quantities of letter size and viewing distance.
The logMAR notation has gained widespread use in psychophysical studies, for statistical calculations, and for graphical presentation of the results of multicenter clinical studies. It provides a more scientific equivalent for the traditional clinical statement of “lines lost” or “lines gained,” which is valid only when all steps between lines are equal.
Visual Acuity Rating (VAR; Bailey42) and Visual Acuity Score (VAS; Colenbrander43) are two names given to a more user-friendly equivalent of the logMAR scale. On the VAR or VAS, 20/20 (1.0) is rated as 100, 20/200 (0.1) is rated as 50, and 20/2000 (0.01) is rated as 0. On an ETDRS-type chart, each line thus represents a five-point increment. The score can therefore be interpreted as a count of the total number of letters read, starting from 20/2000 (0.01). See Table 4 to relate the VAS/VAR, MAR, and logMAR notations to various visual acuity levels. The VAR relates only to visual acuity; the VAS is part of a broader scoring system.
The VAS, VAR, and logMAR notations convert the geometric sequence of visual acuity values to a linear scale. This is important if visual acuity values are to be averaged or subjected to other statistical calculations. The difference between averaging on a geometric scale and averaging on a linear scale is best demonstrated with an example. What is the average of 20/20 and 20/200? Averaging the denominators yields 20/110, a value too close to 20/200 (see Table 4). Averaging the decimal equivalents (1.0 and 0.1) yields 0.55, a value too close to 1.0. On the VAS scale, the average of 100 and 50 is 75, which can be converted back to 20/63 or 0.32 (rounded to 20/60 or 0.3), exactly halfway.
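The averaging example above can be checked numerically. A minimal Python sketch (the function names are mine) converting decimal acuity to logMAR and VAS, and averaging on the linear scale:

```python
import math

def log_mar(decimal_acuity):
    """logMAR: 0.0 at 20/20 (1.0); positive values indicate vision loss."""
    return -math.log10(decimal_acuity)

def vas(decimal_acuity):
    """Visual Acuity Score: 100 at 20/20, 50 at 20/200, 0 at 20/2000;
    one chart line (0.1 logMAR) equals 5 points."""
    return 100.0 - 50.0 * log_mar(decimal_acuity)

def mean_acuity(acuities):
    """Average on the linear VAS scale, then convert back to decimal."""
    mean_vas = sum(vas(a) for a in acuities) / len(acuities)
    return 10 ** ((mean_vas - 100.0) / 50.0)

# The average of 20/20 (1.0) and 20/200 (0.1) on the VAS scale is 75,
# which converts back to about 0.316 (20/63), exactly halfway.
```

Averaging the denominators or the decimal values, as shown in the text, lands too close to one end of the range; only the linear scale places the result at the geometric midpoint.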
CHOICE OF CRITERION AND ROUNDING OF VALUES.
The recorded visual acuity value can be influenced by the choice of completion criterion and by rounding. Most clinicians record visual acuity in line increments and consider a line read if more than half of the letters are read correctly (e.g., three of five on an ETDRS-type chart). A suffix such as -1 or +2 may be added to indicate one letter missed or two letters read on the next line. These suffixes are most meaningful if the number of letters on each line is constant. On most charts, the test-retest confidence limits are about ±2 letter increments or about 0.5 line increment.44 For routine clinical use, in which the patient generally reads each line only once, rounding to line values is common practice. It is appropriate, since the rounding errors are of the same order as the confidence limits.
For a finer gradation on an ETDRS-type chart, letter increments can be counted. The total number of letters read, starting from 20/2000 (0.01), is the VAR or VAS discussed above. Letter increments are appropriate in research settings, where measurements are repeated and then averaged to detect smaller changes.
Another factor that can affect the score is whether subjects are encouraged to guess. Since different subjects may vary in their willingness to guess, forcing all to guess will produce more homogeneous results.
When a subject cannot read a line on a chart, some clinicians present an isolated line or an isolated letter. This reduces the crowding effect, makes fixation easier and can improve the VAS. Pointing to a letter may also make the task easier. One should be aware that using different presentation modes at different times reduces the comparability of the scores.
CHOICE OF TEST SYMBOLS.
Most visual acuity charts use letters. For the patient, this choice gives a sense of immediate validity when the primary objective is to read. For the practitioner, errors are easy to spot, since most practitioners know their chart by heart. Use of letters, however, is warranted only if the assumption may be made that familiarity with the alphabet plays a trivial role. The Sloan letter set is shown in Figure 11.
For less literate adults, the use of a number chart may be more appropriate.
For illiterate patients and preschool children, pictures may be used. However, it is difficult to judge the equivalence of letters and pictures, and a child's performance may depend on whether naming of pictures is a game that is played at home.
LEA symbols (Fig. 12) were devised by Lea Hyvärinen.45 They form a set of four simple symbols (square, circle, house, apple) that require little naming ability. They are left-right symmetrical, so that left-right reversals in young children will not influence the results. They have been designed to blur equally and have been calibrated against Landolt Cs46 (Fig. 13). They form excellent tests for children and can also be used for adults. The same symbols are used in a variety of tests—as a letter chart, as a contrast sensitivity chart, in a reading format, on single symbol cards, on a domino game for older children, and as a jigsaw puzzle for the very young.
The HOTV test contains four symbols—H, O, T and V—also chosen because they have no characteristics that require a sense of laterality. To standardize the effect of contour interaction when the symbols are presented singly, they may be surrounded by crowding bars.
Tumbling Es are probably the symbols most often used for the testing of children. They do require a sense of laterality, which can be a stumbling block for young and for developmentally delayed children. They can be presented in a chart format or as single symbols. When comparing findings, it should be remembered that presentation as single symbols is an easier test than presentation in a chart format. Comparison of these different conditions and of findings on a closer spaced chart may give insight into the importance of crowding and of lateral contour interaction, which can be particularly informative in the treatment of amblyopia.
Tumbling Es also are the basis for the WHO low-vision training kits, which are widely used in developing countries and in countries where the Roman alphabet is not used.
Landolt Cs22 have become the symbols of choice for many scientific measurements. They are much less frequently used in a clinical setting, except in Japan, where the characters of the Kanji alphabet are too complex. When used in a chart format, it is harder to detect errors unless the observer points to the symbol. However, pointing, like single presentation, affects the difficulty of the test.
The Visual Acuity Measurement Standard of the ICO23 requires that letter charts in non-Roman alphabets (e.g., Cyrillic, Arabic, Hindi, Kanji, Hebrew) be calibrated against Landolt Cs for equal recognizability.
Grating acuity is another visual acuity measurement that is used mostly in the laboratory and mostly in connection with contrast measurements. For infants it can be used on cards as a preferential looking test. Preferential looking is a detection test and thus not strictly equivalent to a recognition test.
When recording visual acuity for patients in the range of normal vision, the preferred measurement tool will often be a projector chart at 5 or 6 meters or 20 feet in a darkened room. The preferred notation will be a Snellen equivalent. In continental Europe this is most often decimal notation; in Britain it is the 6/6 equivalent; in the United States it is the 20/20 equivalent.
When recording visual acuity for patients in the low-vision range, the preferred tool is a lighted chart in a lighted room at a distance of 1 meter. The preferred notation will be a true Snellen fraction with 1 as the numerator. It is often useful to add the commonly used Snellen equivalent in parentheses. Thus, the ability to recognize an 8 M letter at 1 meter would be recorded as 1/8 (20/160) or 1/8 (0.125). If the same patient were tested on an ETDRS chart at 4 meters, the notation could be 4/32 (1/8, 20/160).
NEAR VISION MEASUREMENT
Although the testing of reading vision predated the development of letter charts to measure distance vision, the methodology to accurately measure reading acuity has lagged behind. This is partly because the prescription of a reading correction for normally sighted individuals aims more at reading comfort than at accurate measurement, and partly because of the lack of accurate measuring tools. Reading distances are more often estimated than measured, while the “Jaeger numbers,” which are widely used in the United States, have no numerical meaning. Under these circumstances, it is not surprising that many practitioners believe that reading acuity and distance acuity have little in common. We will show that this is not so.
As is the case for distance vision, accurate determination of near vision acuity requires measurement of two variables—letter size and viewing distance. For distance vision, the viewing distances are standardized, so that only the letter sizes vary. For individuals in the normal visual acuity range, reading distances may be standardized, but the standards vary. Some use 40 cm (16 inches, 2.5 D) or 14 inches (35 cm, 2.75 D); others use 33 cm (13 inches, 3 D), or even 30 cm (12 inches, 3.25 D) or 25 cm (10 inches, 4 D, the reference point for the power of magnifiers). Individuals in the low-vision range often need distances that are even shorter and certainly cannot be handled with a “one size fits all” distance. They need a formula in which both the letter size and the viewing distance can be varied easily.
Modified Snellen Formula
The standard Snellen formula, V = viewing distance ÷ letter size, becomes awkward to use when the numerator (viewing distance in meters) is itself a fraction within a fraction. This can be overcome by using the reciprocal value of the viewing distance. The reciprocal of a metric distance is known as the diopter (e.g., 2 D = 1/2 meter, 5 D = 1/5 meter).20 The traditional formula thus becomes:
1/V = M × D = letter size (in M-units) × viewing distance (in diopters)
Use of this modified Snellen formula has several advantages: both the letter size and the viewing distance can be varied easily, and fractions within fractions are avoided.
The results of these calculations are listed in Figure 14. This figure is based on the use of preferred numbers, so that the same values appear for the viewing distances, the letter sizes, and the resulting visual acuity values.
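The modified formula is straightforward to apply in code. A minimal Python sketch (the function name is illustrative):

```python
def acuity(letter_size_m, distance_cm):
    """Modified Snellen formula: 1/V = M x D, so V = 1 / (M x D),
    where D = 100 / distance_cm is the viewing distance in diopters."""
    diopters = 100.0 / distance_cm
    return 1.0 / (letter_size_m * diopters)

# An 8 M letter read at 1 meter: V = 1/8 = 0.125 (20/160).
# Newsprint (1 M) read at 40 cm requires only V = 0.4 (20/50), which is
# why a 20/20 reader has a 2.5x reserve for comfortable reading.
```

When letter sizes and distances are taken from the preferred numbers series, the computed acuity values fall on the same series, as the tabulated values show.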
Many reading cards are calibrated for a specific reading distance (i.e., for a specific column in Fig. 14). This has led to the habit of using visual acuity values to refer to letter sizes. For instance, a letter size that would represent 20/100 at 40 cm might be referred to as a 20/100 letter. Figure 14 shows that the same letter at 25 cm would represent an entirely different acuity value. A 20/100 letter on a 20-foot chart is very different again.
As visual acuity drops (M × D increases), subjects can compensate in two ways: they may move to a different column, bringing the same print size closer by increasing the reading add (or the amount of accommodation in younger people), or they can move to a different row, thereby enlarging the print size while maintaining the reading distance. Large-print books enlarge the physical print size; various magnification devices enlarge the virtual print size.
Under most circumstances, letter chart acuity and reading acuity—if measured appropriately and with the proper refractive correction—are similar. However, when measuring letter chart acuity, subjects are often pushed for threshold or marginal performance, whereas reading tests more often aim at a level of comfortable performance. For this reason, the magnification requirement for reading acuity may be somewhat greater than that for letter acuity. The difference, known as the magnification reserve,47 is needed for reading fluency.
While 20/20 (1.0) acuity implies the ability to read 1 M print at 1 meter, comfortable reading of newsprint (1 M) is generally done at 40 cm, indicating a 2.5× magnification reserve (four line intervals). Traditionally, the power of magnifiers is referenced to the ability to read at 25 cm (10 inches). One M at 25 cm denotes 20/80 (0.25). This is the top value in the low-vision band (see Table 3).
To verify the relation between reading acuity and letter chart acuity, the two values were compared for 150 consecutive patients from my low-vision service. The results are shown in Figure 15. I found that a close relationship exists between letter chart acuity and reading acuity and that this relationship holds up at all visual acuity levels. Usually, the two are within one line from each other (diagonal gray band in Fig. 15); for some patients the magnification need for reading is larger than the magnification need for letter recognition (spread to the right of the diagonal). This difference is the magnification reserve, defined above. Since the objective of visual acuity measurement in the low-vision range is to help patients function with their own fixation ability, I do not push patients for maximum letter chart acuity by pointing to letters or by isolating letters (see the earlier discussion under choice of criterion). If I had used these techniques to improve the letter chart acuity, the magnification reserve for reading fluency would probably have appeared somewhat greater.
Letter Size Notations for Continuous Text
For letter charts with metric notation, the unit for letter size measurement is the M unit, as it was defined by Snellen and named by Sloan. A corresponding F unit for charts with feet notation was never defined and would probably only lead to confusion since calculating with nonmetric measurements is so much harder. The situation for continuous text letter sizes is more diverse.
In the United States, Jaeger numbers are widely used. We have seen that these numbers have no numeric meaning since they refer to item numbers in a printing house catalog in Vienna in 1854. They cannot be used for calculations. Furthermore, since Jaeger did not establish an external reference, those who wanted to produce similar samples had to approximate Jaeger's samples with fonts that happened to be available at their local print shop. The result is great inconsistency in the use of Jaeger numbers. The first column in Figure 14 indicates the range of Jaeger ratings that were found to represent the same physical letter size on a number of contemporary Jaeger cards.
Other countries have used similar samples, such as de Wecker samples in Germany and Parinaud samples in France.
The need for a numeric designation led some practitioners to the use of printer's points. This might have been useful if printer's points referred to the letter height; instead, they refer to the height of the slug on which letters used to be mounted. On average, lowercase letters tend to be about 50% of the slug height. Thus,
1 point (slug height) = 1/72 inch
However, this relationship varies with the type font. For example, in the TrueType (TT) family of computer fonts, an Arial letter of 8 points has the same size as a Times New Roman letter of 9 points. Another problem is that the point notation does not apply to the optotypes used for distance vision, so that comparison of far and near measurements is impossible.
A AND N SERIES.
On British type samples, the size in printer's points is designated by the notation “N = ”. British cards often also carry an “A = ” notation. The A series is based on the logarithm of the letter size. As such, it is related to the “letter size credit” mentioned in Figure 7 (A = 17 - [letter size credit] ÷ 5).
The M-unit is the only letter size unit that applies to distance charts as well as to reading samples, so it is the only unit that allows comparisons between the two tests. The M-unit is used in this chapter and on an increasing number of newer reading cards. It is convenient that 1 M is the size of average news print.
By definition, 1 M-unit subtends 5 minutes of arc at 1 meter and equals 1.454 mm. Useful equivalents include the following: 7 M = 10 mm (error -2% or 0.1 line interval) and 1 M = 1/16 inch (error + 10% or 0.4 line interval). Based on the size of lowercase letters without ascenders or descenders (x-height), 8 points = 1 M for the TT Arial and TT Courier computer fonts, but for the TT Times New Roman computer font, 1 M = 9 points.
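The stated equivalents can be verified directly from the definition. A small sketch (variable and function names are mine):

```python
M_UNIT_MM = 1.454  # height of a 1 M letter: 5 minutes of arc at 1 meter

def letter_height_mm(m_units):
    """Physical letter height for a given size in M-units."""
    return m_units * M_UNIT_MM

# 7 M = 10.18 mm, so "7 M = 10 mm" is about 2% small (0.1 line interval);
# 1/16 inch = 1.59 mm, so "1 M = 1/16 inch" is about 9-10% large.
seven_m_error = 10.0 / letter_height_mm(7) - 1.0      # about -0.017
inch_error = (25.4 / 16) / letter_height_mm(1) - 1.0  # about +0.092
```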
For reading tests, it is important to record not only the letter size and the distance at which the subjects can just decipher the text but also the level at which they can read with reasonable fluency. Most reading cards have short paragraphs with large letters and longer paragraphs with smaller letters. On such cards only a subjective comparison of reading fluency with different levels of magnification is possible.
Cards on which all paragraphs have the same length offer the opportunity to measure the reading speed objectively. This layout was pioneered by the MN-read cards48 and is now also available in other cards in multiple languages (Fig. 16).36
When the reading time is recorded for each print size, the usual pattern is that the subject reads at a reasonably stable rate at larger print sizes. At smaller sizes, reading becomes slower and then impossible (fast—fast—slow). The print size just before the reading speed starts to drop off is the critical print size. Providing magnification of ordinary print to the critical print size will give the best reading performance with the least magnification (largest field of view).
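Finding the critical print size from speed-versus-size data can be summarized programmatically. This is a sketch under stated assumptions: it presumes equal-length paragraphs (as on MN-read-style cards), and the 90% plateau threshold is my own illustrative choice, not a standard value.

```python
def critical_print_size(speeds, tolerance=0.9):
    """speeds: mapping of print size (M-units) -> reading speed (words/min).
    Returns the smallest print size still read at >= tolerance * plateau
    speed. The tolerance threshold is an assumption for illustration."""
    plateau = max(speeds.values())
    fluent = [size for size, wpm in speeds.items() if wpm >= tolerance * plateau]
    return min(fluent)

# Hypothetical measurements: speed holds near 180 wpm down to 1 M, then drops
# off sharply (the "fast-fast-slow" pattern), giving a critical size of 1 M.
speeds = {8: 180, 4: 175, 2: 178, 1: 170, 0.8: 120, 0.5: 40}
```

Magnifying ordinary print to the size returned here gives fluent reading with the least magnification and thus the largest field of view.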
Some subjects show a pattern that can be characterized as slow—fast—slow. This pattern occurs when macular degeneration patients read in a small island of vision within a pericentral scotoma. For large text, the island is not large enough to cover a whole word; this slows reading down. At medium print sizes, more letters are covered and reading speeds up. At the smallest sizes, reading slows down again. The same pattern can be seen in patients with extreme tunnel vision in end-stage glaucoma or retinitis pigmentosa. In these cases, it is important not to prescribe too much magnification. Using the underlining technique to facilitate tracking along the line may also be beneficial.
Occasionally, the pattern is slow—slow—slow. This pattern, which can be seen in patients with scattered drusen, indicates that magnification alone will be of limited benefit. In these patients, other means, such as underlining to facilitate tracking, together with training and practice in the most effective use of the available retinal areas can lead to more improvement than magnification alone.
Infant Vision Testing
In infants, both the physical basis of visual acuity and the cognitive skills to use it are still developing. Standard visual acuity testing is impossible, yet early detection of deficits is extremely important. Not acting on a suspicion of vision loss may cause developmental delays, since it deprives the infant of its most abundant source of stimulation.
Instead of adult vision testing techniques, we must use behavioral observations. The list in Table 5, supplied by Lea Hyvärinen, provides a transition to the discussion of the next aspect of vision loss: functional vision.
|ASSESSMENT OF FUNCTIONAL VISION|
|In the introduction (see Table 1), a distinction was made between visual functions and functional vision. Visual
functions (such as visual acuity) can be measured for each eye
separately. Functional vision is a property of the person; for adults, it
denotes the ability to perform ADLs such as reading. Measuring
reading fluency begins to measure such an ability, although full reading
proficiency also includes other factors, such as reading comprehension
and reading endurance.|
When we embark on an individualized rehabilitative plan, such as to improve reading proficiency, we need to measure the individual's performance directly and then compare the findings before and after the intervention. For other purposes, however, it may be sufficient to estimate the reading ability based on the measured visual acuity (Fig. 17). Such estimates are necessarily based on statistical averages and ignore individual differences. This approach has some advantages if the purpose is the assignment of disability benefits, since we want to avoid penalizing those who have made a successful adjustment by reducing their benefits.
FUNCTIONAL VISION ESTIMATES
Use of statistical ability estimates is meaningful only if a reasonable correlation can be established between visual function measurements and functional vision. Ophthalmology was one of the first fields in which attempts were made to establish such a correlation. Best known, although not the first, is the Visual Efficiency Scale developed by Snell in 1925. Snell did a survey establishing that persons with 20/200 (0.1) visual acuity had lost 80% of their employability in 1925.49 He combined this with a study about progressive visual blur to come up with a formula assigning a visual efficiency percentage rating to every visual acuity level.50 In the same year, his report was adopted by the AMA Committee on Compensation for Eye Injuries.51
In 1958, this report was one of several published as Guides to the Evaluation of Permanent Impairment.52 Several editions followed in which some additions were made but in which the basis of Snell's scale remained unchanged. This was the situation up to the 4th edition (1993) of the AMA Guides. The Visual Efficiency Scale can be found quoted in many publications.
The AMA Vision chapter in the 5th edition of the Guides4 (2000) incorporates radical changes. The Visual Efficiency Scale has been replaced by the Functional Vision Score system. The major change is that 20/200 (0.1) visual acuity is no longer rated as an 80% loss (of employability in 1925) but as a 50% loss of the generic ability to perform ADLs. This statistical estimate, combined with individual factors such as specific job requirements, can then contribute to an administrative decision about the assignment of benefits, which is a separate step not covered by the AMA Guides. Other alterations in the 5th edition involve some changes in the rules for combining losses and the elimination of various inconsistencies that had crept in over the years.
General Ability Score
To compare performance across dissimilar abilities (e.g., reading ability, hearing ability, walking ability), a set of generic ability ranges is needed. A useful scale is shown in Table 6. It recognizes that most functions have reserve capacity. When the reserve capacity is lost, peak performance suffers, but average performance is still acceptable. When loss proceeds further, some assistive devices are needed to enhance the function (enhancement aids). When loss proceeds beyond the midpoint of the scale, performance is restricted and finally impossible. At some point, enhancement aids are no longer useful, and the patient needs substitution aids to replace the lost function (e.g., talking books instead of magnifiers, lip reading instead of a hearing aid, a wheelchair instead of crutches).
Visual Acuity Score
In Table 3, the set of ICD-9-CM visual acuity ranges was compared with a set of reading ability ranges. We found a good fit. The VAS, discussed under visual acuity notations, fits equally well with the visual acuity ranges as with the General Ability Scale quoted above. We may conclude that the VAS (Table 7) provides a reasonable statistical estimate of the generic ability to perform tasks requiring detail vision.
Converting a visual acuity value to a VAS converts the geometric sequence of visual acuity values to a linear scale that can be used for averaging and for other calculations.
In the process outlined in the new AMA Guides, this is the first of three steps. The second step is to derive a statistical estimate of the acuity-related abilities of the person. This is done by calculating a weighted average of the scores obtained for the right eye, the left eye, and binocularly. Since normal vision is binocular vision, the binocular score receives 60% of the weight; the right eye and left eye receive 20% each. Thus, the result, called the Functional Acuity Score (FAS), is FAS = (3 × VAS_OU + VAS_OD + VAS_OS) ÷ 5.
The last step is to combine the FAS with a similarly derived functional field score (FFS) to a single functional vision score (FVS), as indicated in Figure 18.
Visual Field Score
A similar scoring system can be developed for visual field loss. While visual acuity loss may manifest itself primarily in a loss of reading ability, visual field loss affects another set of ADL skills, commonly covered under the term orientation and mobility (O&M) skills. The importance of these skills is obvious, but designing a good measurement tool for O&M skills is difficult. The technical aspects of visual field measurement have been discussed elsewhere. Modern static perimetry plots are a great help in defining the underlying disorder; they are harder to interpret with regard to the functional consequences. Traditional Goldmann isopters were easier to interpret in this regard. Also, for diagnostic purposes, the central 30 degrees are the most informative, whereas a full-field plot is needed to predict O&M skills. Capturing all aspects of visual field loss in a single number is a serious oversimplification of a complex reality. Nevertheless, it has been attempted because of administrative demand.
The old AMA Guides offered two options: a formula-based calculation and use of overlay grids. The formula gave equal weight to the upper and lower field and to peripheral and central loss. The overlay grids, designed by Esterman,53–55 gave double weight to the lower field and concentrated most weight in the Bjerrum area. The two methods do not give the same result and differ from the traditional legal blindness criterion (20-degree diameter, 10-degree radius).
The new AMA Guides use a method that can be implemented with paper and pencil or with an overlay grid and that has the potential of being implemented on an automated perimeter.56 Fifty points are assigned to the central 10-degree radius (20-degree diameter), since this area corresponds to about 50% of the primary visual cortex; the other 50 points are assigned to the periphery. The points are arranged along 10 meridians, 3 in each of the lower quadrants and 2 in each of the upper quadrants. This gives the lower field 50% extra weight. Measuring along meridians within the quadrants, rather than along the principal meridians, avoids special rules for hemianopias. Along each of the 10 meridians, 5 points are counted from 0 to 10 degrees and 5 points from 10 to 60 degrees. This maintains the traditional equivalence between a visual acuity loss to 20/200 (0.1) and a visual field loss to 10 degrees, and assigns 100 points to a field of 60-degree average radius. The assignments are summarized in Figure 19.
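The point assignment described above can be approximated in code. This is a sketch of the counting rule, not the Guides' actual grid: it interpolates continuously (1 point per 2 degrees in the central 10 degrees, 1 point per 10 degrees from 10 to 60 degrees), whereas the Guides count discrete grid points.

```python
def meridian_points(extent_deg):
    """Approximate points for one meridian of a given field radius (degrees):
    5 points within the central 10 degrees, 5 more from 10 to 60 degrees."""
    central = min(extent_deg, 10) / 2.0
    peripheral = max(0.0, min(extent_deg, 60) - 10) / 10.0
    return central + peripheral

def visual_field_score(extents):
    """extents: field radius along each of the 10 scoring meridians
    (3 per lower quadrant, 2 per upper quadrant)."""
    if len(extents) != 10:
        raise ValueError("expected 10 meridian extents")
    return sum(meridian_points(e) for e in extents)

# A full 60-degree average-radius field scores 100; a 10-degree-radius
# tunnel field scores 50, mirroring the 20/200 acuity equivalence.
```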
Similar to the VAS, which can be calculated as the number of letters read on a standardized chart, the visual field score (VFS) can be calculated as the number of points seen on a standardized grid. Table 7 compares the VAS with ranges of reading skills and the VFS with ranges of O&M skills. The agreement is good, indicating that the VFS is a reasonable estimate of O&M ability.
The next step is to combine the VFSs obtained for the right eye, left eye, and binocularly to obtain a statistical ability estimate: the FFS. As for the FAS, this is done by averaging. The formula is FFS = (3 × VFS_OU + VFS_OD + VFS_OS) ÷ 5. The binocular visual field is not measured directly but constructed by superimposition of the monocular fields.
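The weighted average is a direct transcription of the formula above; the function name below is mine:

```python
def functional_field_score(vfs_ou, vfs_od, vfs_os):
    """FFS: weighted average of the binocular (OU) and monocular
    (OD, OS) visual field scores, with triple weight for the
    binocular field, per the AMA formula."""
    return (3 * vfs_ou + vfs_od + vfs_os) / 5
```

A person with full fields in both eyes scores 100; losing one eye entirely while the binocular and fellow-eye fields stay full reduces the FFS only to 80, reflecting the dominant weight given to the binocular field.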
Combining Visual Acuity and Visual Field Values
After the FAS and the FFS have been calculated, they can be combined into a single FVS for the whole person. The formula is FVS = (FAS × FFS) ÷ 100. The process is summarized in Figure 18. For more detailed rules, refer to the AMA Guides.4
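In code, the combination is a simple multiplication (function name is mine; the formula is the one just given):

```python
def functional_vision_score(fas, ffs):
    """FVS: combines the Functional Acuity Score (FAS) and the
    Functional Field Score (FFS), each on a 0-100 scale, into a
    single whole-person score via FVS = (FAS x FFS) / 100."""
    return fas * ffs / 100
```

Because the scores multiply, a normal field (FFS = 100) leaves the FVS equal to the FAS, while a severe loss on either dimension pulls the combined score down: an FAS of 50 with an FFS of 50 yields an FVS of 25.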
|DIRECT ASSESSMENT OF VISUAL ABILITIES AND FUNCTIONAL VISION|
|While the statistical estimates of functional vision as outlined above
can be useful for administrative purposes, the planning of rehabilitative
efforts requires a more detailed assessment of an individual's
abilities. This can take the form of an ability profile in which the ability to perform each of a series of ADL activities is
rated. Such profiles can be simple or complex.|
A simple, yet effective, visual ability profile is used by Lea Hyvärinen.57 Her model is particularly effective for children and infants, since it contains only four ADL groups:
The early recognition of vision defects and their remediation or compensation is important, since vision alone provides as much input to the brain as all other senses combined. Loss or reduction of this input can have a profound effect on all aspects of an infant's development (see Table 5).
Overall visual functioning can be affected by several types of impairments. With regard to rehabilitative efforts, it is important to make a distinction between ocular visual impairment (OVI; caused by prechiasmal lesions, such as a macular scar) and cerebral (or cortical) visual impairment (CVI; caused by postchiasmal lesions). Cerebral lesions can also cause defects in the higher visual functions—visual perceptual impairment (VPI). CVI and VPI are harder to quantify, but the newer neuroimaging techniques have helped our understanding of these processes.
In children, the major cause of CVI and VPI is perinatal asphyxia and ischemia. Since this affects all parts of the brain, such children often have other, non-vision-related problems, which make the diagnosis more difficult. Nevertheless, it has been possible to identify types of visual processing that are affected in some children and not in others. A child can be said to have VPI if his or her visual processing capability is more restricted than the general developmental (not chronological) age would suggest.58 VPI can be task specific and can exist in the presence of normal acuity. Thus, the fact that a child's visual acuity is not in the low-vision range (see Table 7) does not mean that the child does not need rehabilitative interventions. Unfortunately, many agencies and professionals are not yet aware of this fact.
Visual perceptual impairments can also exist in the adult (e.g., after a stroke in the elderly). In the adult brain, the effects may be less generalized. It is important to separate a VPI from a possibly coexisting OVI (e.g., macular degeneration), since the rehabilitative efforts are different.
For vision rehabilitation plans for adults, the various activities and performance levels need to be specified in more detail. I have proposed a profile43 with 10 ADL groups:
To rate performance for each of these activities, the 100-point scale used for the visual acuity and visual field scales is too detailed and should be reduced to a 10-point ability scale. Although the scores for the 10 activity groups could be combined into a 100-point global score, this is not recommended, since the purpose of an ability profile is to highlight differential performance, and hence different rehabilitation needs, in different areas.
Other groups have devised numerous other lists, often directed at specific problems. ICIDH-23 provides a detailed taxonomy of activities, from which relevant ones can be selected.
|DIRECT ASSESSMENT OF PARTICIPATION|
|To judge the actual impact of vision loss on a person's quality of
life, an even broader perspective is needed. In ICIDH-80 this aspect
was described as handicap and measured in terms of loss of independence; in ICIDH-2 it is described
under the heading participation. These terms describe different subaspects: handicap describes the barriers that need to be overcome; participation describes the result of overcoming them. Loss of independence seems to
imply full independence as an ideal; participation also stresses interdependence.|
How well different individuals can overcome their barriers depends not only on the impairment and on the abilities of the individual but also on the society and the environment in which the individual operates. The Americans with Disabilities Act (ADA) has drawn much needed attention to accommodations that can be made in the workplace. The story of Helen Keller is one example of how some people can achieve full participation despite extraordinary handicaps.
Improving the quality of life and the participation aspect remains the ultimate goal of all rehabilitative interventions. For this reason, the National Eye Institute (NEI) has developed a Visual Function Questionnaire (VFQ)59 with 50 or 25 items. The NEI-VFQ is used in many NEI-sponsored clinical studies and also in private studies. Other groups have developed similar instruments, and more activity in this field can be expected.
|The assessment of vision loss can be approached from different points of
view. Visual functions such as visual acuity are easily measured and
are often used to characterize patients or patient groups. Functional
vision and the ability to perform ADLs are more difficult to measure.|
For administrative uses, a statistical estimate of the ADL ability can be derived from the visual function measurements. For individual rehabilitation plans, the individual abilities must be evaluated in an ability profile.
Improving the participation aspect is the ultimate goal of all rehabilitative efforts. Instruments such as the NEI-VFQ can be used to assess this aspect.
36. Low Vision Test Chart. One side: Letter chart from 50M to 1M, representing acuity values from 1/50 (20/1000, 0.02) to 1/1 (20/20, 1.0); 1-m cord attached. Other side: standardized reading segments (10M to 0.6M) for reading rate measurements; diopter ruler included. Available in English, Spanish, Portuguese, German, French, Dutch, Swedish, Finnish. Precision Vision, 944 First Street, LaSalle, IL 61301; fax: 1-815-223-2224
43. Colenbrander A. The functional vision score: A coordinated scoring system for visual impairments, disabilities and handicaps. In: Kooijman A et al, eds. Low Vision: Research and New Developments in Rehabilitation. Studies in Health Technology and Informatics. Amsterdam: IOS Press, 1994:552
56. Colenbrander A, Lieberman MF, Schainholz DC. Preliminary implementation of the functional vision score on the Humphrey Field Analyzer. Proceedings of the International Perimetric Society, Kyoto, 1992