# What is a Weber Fraction?

### From Panamath

## Model Representations of the ANS

In modeling performance on tasks that engage the ANS, it is necessary first to specify a model for the underlying approximate number representations. It is generally agreed that each numerosity is mentally represented by a distribution of activation on an internal “number line.” These distributions are inherently “noisy” and do not represent number exactly or discretely [1][2]. This means that there is some error each time they represent a number, and this error can be thought of as a spread of activation around the number being represented.

## The Mental Number Line

The mental number line is often modeled as having linearly increasing means and linearly increasing standard deviations [2]. In such a format, the representation for, e.g., cardinality seven is a probability density function that has its mean at 7 on the mental number line and a smooth degradation to either side of 7, such that 6 and 8 on the mental number line are also highly activated by instances of seven in the world. In Figure 1a I have drawn idealized curves which represent the ANS representations for numerosities 4-10 for an individual with Weber fraction = .125. You can think of these curves as representing the amount of activity generated in the mind by a particular array of items in the world, with a different bump for each numerosity you might experience (e.g., 4 balls, 5 houses, 6 blue dots, etc.). Rather than activating a single discrete value (e.g., 6), the curves are meant to indicate that a range of activity is present each time an array of (e.g., 6) items is presented [3]. That is, an array of, e.g., *six* items will greatly activate the ANS numerosity representation of 6, but because these representations are noisy this array will also activate representations of 5 and 7, etc., with the amount of activation centered on 6 and gradually decreasing to either side of 6.
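Curves like those in Figure 1a can be sketched numerically. Below is a minimal Python sketch, assuming the linear model just described: the representation of numerosity n is a Gaussian with mean n and standard deviation w × n (scalar variability), with w = 0.125 as in the figure. The function name `activation` is mine, chosen for illustration.

```python
import math

W = 0.125  # assumed Weber fraction, matching Figure 1a

def activation(x, n, w=W):
    """Activation at mental-number-line position x for numerosity n.

    Under the linear model each numerosity n is represented by a
    Gaussian with mean n and standard deviation w * n, so the curves
    flatten and widen as n grows."""
    sd = w * n
    return math.exp(-((x - n) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

# Peak height falls as numerosity increases (curve 4 is "higher and
# skinnier" than curve 10), reproducing the look of Figure 1a:
for n in range(4, 11):
    print(n, round(activation(n, n), 3))
```

Note that an array of six items activates the 6 curve most strongly, but also activates neighboring values: `activation(5, 6)` is well above zero while `activation(2, 6)` is essentially nil.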

## Neuronal Associations of the Mental Number Line

The bell-shaped representations of number depicted in Figure 1a are more than just a theoretical construct; “bumps” like these have been observed in neuronal recordings of the cortex of awake behaving monkeys as they engage in numerical discrimination tasks (e.g., shown an array of six dots, neurons that are preferentially tuned to representing 6 are most highly activated, while neurons tuned to 5 and 7 are also fairly active, and those tuned to 4 and 8 are active above their resting state but less active than those for 5, 6, and 7). These neurons are found in the monkey brain in the same region of cortex that has been found to support approximate number representations in human subjects. This type of spreading, “noisy” activation is common throughout the cortex and is not specific to representing approximate number. Rather, approximate number representations obey principles that operate quite generally throughout the mind/brain.

## Interpreting the Gaussian Curves

When trying to discriminate one numerosity from another using the Gaussian representations in Figure 1a, the more overlap there is between the two Gaussians being compared, the less accurately they can be discriminated. Ratios that are closer to 1 (ratio = bigger # / smaller #), where the two numbers being compared are close (e.g., 9 versus 10), give rise to Gaussians with greater overlap, resulting in poorer discrimination (i.e., “ratio-dependent performance”). Visually, the curve for 5 in Figure 1a looks clearly different in shape from the curve for 4 (e.g., curve 4 is higher and skinnier than curve 5), and discriminating 4 from 5 is fairly easy. As you increase in number (i.e., move to the right in Figure 1a), the curves become more and more similar-looking (e.g., is curve 9 higher and skinnier than curve 10?), and discrimination becomes harder.

But it is not simply that larger numbers are harder to discriminate across the board. For example, performance at discriminating 16 from 20 (not shown) will be identical to performance discriminating 4 from 5, as these pairs differ by the same ratio (i.e., 5/4 = 1.25 = 20/16); and the curves representing these numbers overlap in the ANS such that the representations of 4 and 5 overlap in area to the same extent that 16 overlaps with 20 (i.e., although 16 and 20 each activate very wide curves with large standard deviations, these curves are far enough apart on the mental number line that their overlap is the same amount of area as the overlap between 4 and 5; i.e., they have the same discriminability). This is ratio-dependent performance.
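This equal-ratio prediction can be checked directly. Under the linear model with SD = w × n, the difference between the two activated Gaussians has mean n2 − n1 and standard deviation w√(n1² + n2²), so the predicted accuracy depends only on the ratio. A sketch under those assumptions (the function name `p_correct` and the value w = 0.125 are illustrative choices):

```python
import math

def p_correct(n1, n2, w=0.125):
    """Predicted accuracy for judging n2 > n1 (with n1 < n2) under the
    linear model: the difference X2 - X1 is Gaussian with mean n2 - n1
    and SD w * sqrt(n1**2 + n2**2); accuracy is the area of that
    difference curve to the right of 0 (the Gaussian CDF at 0)."""
    z = (n2 - n1) / (w * math.sqrt(n1 ** 2 + n2 ** 2))
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(p_correct(4, 5), 3))    # ratio 1.25
print(round(p_correct(16, 20), 3))  # same ratio, so same predicted accuracy
print(round(p_correct(9, 10), 3))   # ratio closer to 1: worse performance
```

Although the curves for 16 and 20 are individually much wider than those for 4 and 5, the pairs sit far enough apart that the two discriminations come out identical, exactly as described above.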

## Tasks That Give Rise to the Gaussian Curves

The Gaussian curves in Figure 1a are depictions of the mental representations of 4-10 in the ANS. Similar-looking curves can be generated by asking subjects to make rapid responses that engage the ANS, such as asking subjects to press a button 9 times as quickly as possible while saying the word “the” repeatedly to disrupt explicit counting. In such tasks, the resulting curves, generated over many trials, represent the number of times the subject pressed the button when asked to press it, e.g., 9 times. Because the subject can’t count verbally and exactly while saying “the”, they tend to rely on their ANS to tell them when they have reached the requested number of taps.

When this is the case, the variance in the number of taps across trials is the result of the noisiness of the underlying ANS representations, and so can be thought of as another method for determining what the underlying Gaussian representations are. That is, if starting and stopping tapping did not contribute additional noise to the number of taps (i.e., if the ANS sense of how many taps had been made were the only source of over- and under-tapping), then the standard deviation of the number of taps for, e.g., 9 across trials would be identical to the standard deviation of the underlying ANS number representation of 9.
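A quick simulation illustrates this logic. Assuming (purely for illustration) that the produced tap count on each trial is Gaussian around the target with scalar variability, SD = w × target, the coefficient of variation of the tap counts (SD divided by mean) recovers the Weber fraction; w = 0.15 here is an arbitrary choice:

```python
import math
import random

random.seed(1)
W_TRUE = 0.15  # assumed underlying Weber fraction (arbitrary for the demo)

def tap_trial(target, w=W_TRUE):
    """Simulate one 'tap N times' trial: the subject stops when their
    noisy ANS estimate reaches the target, so the produced count is
    roughly Gaussian with mean `target` and SD w * target."""
    return max(1, round(random.gauss(target, w * target)))

taps = [tap_trial(9) for _ in range(10_000)]
mean = sum(taps) / len(taps)
sd = math.sqrt(sum((t - mean) ** 2 for t in taps) / len(taps))
print(round(mean, 2), round(sd / mean, 3))  # SD/mean approximately recovers w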

When attempting to visualize what the noisy representations of the ANS are like, one can think of the Gaussian activations depicted in Figure 1a, and these representations affect performance in a variety of tasks including discriminating one number for another (e.g., 5 versus 4) and generating number-relevant behaviors (e.g., tapping 9 times).

## Numerical Discrimination in the ANS

To understand how numerical discrimination is possible in the ANS, consider the task of briefly presenting a subject with two arrays, e.g., 5 yellow dots and 6 blue dots, and asking the subject to determine which array is greater in number (Figure 2a). The 5 yellow dots will activate the ANS curve representation of 5 and the 6 blue dots will activate the ANS curve representation of 6 (assume that the subject uses attention to select which dots to send to the ANS for enumerating and then stores and compares those numerosity representations bound to their respective colors) (Figure 2a-b).

## ANS Modeling and Subtraction

An intuitive way to think about ordinal comparison within the ANS is to liken it to subtraction; this is mathematically equivalent to other ways of making an ordinal judgment within the ANS, and my use of subtraction here should be thought of as one illustration among several mathematically equivalent ones.

Imagine that an operation within the ANS subtracts the smaller (i.e., five-yellow) representation from the larger (i.e., six-blue) representation (Figure 2b). Because the 5 and 6 representations are Gaussian curves, this subtraction results in a new Gaussian representation of the difference: a Gaussian curve on the mental number line that has a mean of 1 (viz., 6 − 5 = 1) and a standard deviation of √(σ₅² + σ₆²) (Figure 2c). That is, when subtracting one Gaussian random variable from another (i.e., X₆ − X₅), the result is a new Gaussian random variable with its mean at the difference (6 − 5 = 1) and a variance that sums the variances of the original variables (σ₅² + σ₆²). This results in a Gaussian curve that is centered on 1, but that extends to both the left and right of 0 (Figure 2c).
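In numbers, with w = 0.125 as in Figure 1a (so σ₅ = 0.625 and σ₆ = 0.75), the subtraction looks like the following sketch; the area of the difference curve to the right of 0 comes from the Gaussian CDF:

```python
import math

w = 0.125                  # Weber fraction, as in Figure 1a
sd5, sd6 = w * 5, w * 6    # scalar variability: SD grows with n

# X6 - X5 is Gaussian: the means subtract, the variances add.
mu_diff = 6 - 5
sd_diff = math.sqrt(sd5 ** 2 + sd6 ** 2)

# Area under the difference curve to the right of 0
# = probability of correctly judging six > five.
p_correct = 0.5 * (1 + math.erf(mu_diff / (sd_diff * math.sqrt(2))))
print(round(sd_diff, 3), round(p_correct, 3))
```

The difference curve is centered on 1 but, with sd_diff near 1, a noticeable tail extends left of 0; that tail is the error rate on 5-versus-6 trials.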

One can think of 0 as the demarcation line separating evidence “for” and “against”: the area under the curve to the right of 0 is the portion of the resulting representation that correctly indicates that *six* is greater than *five*, while the area under the curve to the left of 0 is the portion that incorrectly indicates that *five* is greater than *six*. This area to the left of 0 results from the overlap between the original Gaussian representations, *five* and *six*, in which some of the area of *five-yellow* is to the right of (i.e., greater than) some of the area of *six-blue* (Figure 2b).

## Interpreting Gaussian Overlap: Weber's Law

As described above, the area of the difference curve to the right of 0 serves as the evidence that six is greater than five; deciding based on this demarcation at 0 is one way of making the judgment.

Another method would rely on assessing the total evidence for blue and the total evidence for yellow. Either of these ways of making a decision will have the result that, on a particular trial, the probability of the subject getting the trial correct will depend on the relative area under the curve to the left and right of 0 which is itself determined by the amount of overlap between the original Gaussian representations for the numerosities being compared (i.e., *five* and *six*).

The more overlap there is between the two Gaussian representations being compared, the less accurately they can be discriminated. Consider comparing a subject’s performance on a 5 dots versus 6 dots trial to a trial involving 9 versus 10 dots. Using the curves in Figure 1a as a guide, we see that the overlapping area for the curves representing 5 and 6 is less than the overlapping area for the curves representing 9 and 10, because the curves flatten and spread as numerosity increases. This means that it will be easier for the subject to tell the difference between 5 and 6 than between 9 and 10, i.e., the resulting Gaussian for the subtraction will have more area to the right of 0 for the subtraction of 5 from 6 than for the subtraction of 9 from 10.

How rapidly performance rises from chance (50%) to near-asymptotic performance (100%) in this kind of dot numerosity discrimination task is controlled by the subject’s Weber fraction (w).
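Under the model sketched above, this psychometric curve can be written purely as a function of the ratio r = bigger/smaller and w, since the absolute numerosities cancel: accuracy = Φ((r − 1) / (w√(1 + r²))), where Φ is the standard normal CDF. A sketch comparing two hypothetical subjects (the w values 0.125 and 0.25 are illustrative choices):

```python
import math

def p_correct(ratio, w):
    """Predicted accuracy for discriminating n vs ratio * n; under the
    linear model (SD = w * n) the n cancels and only the ratio matters."""
    z = (ratio - 1) / (w * math.sqrt(1 + ratio ** 2))
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# A sharper ANS (smaller w) rises from chance (50%) toward 100% faster:
for r in (1.0, 1.1, 1.25, 1.5, 2.0):
    print(r, round(p_correct(r, 0.125), 3), round(p_correct(r, 0.25), 3))
```

At ratio 1.0 both subjects are at chance; as the ratio grows, the subject with the smaller w reaches near-asymptotic accuracy at ratios where the larger-w subject is still struggling.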

## The Weber Fraction: Overview

The Weber fraction indexes the amount of spread in the subject’s ANS number representations and therefore the overlap between any two numbers as a function of ratio (described in a succeeding section). The precision of the ANS varies across individuals with some people having a smaller Weber fraction (i.e., better performance and sharper Gaussian curves) and others having a larger Weber fraction (i.e., poorer performance owing to wider noisier Gaussian curves).

Numerical discrimination (e.g., determining which color, blue or yellow, has more dots in an array of dots flashed too quickly for explicit counting) is possible in the ANS through a process that attempts to determine which of the two resulting curves (e.g., five-yellow or six-blue) is further to the right on the mental number line. The full shape of these noisy curves is used to make this decision (and not just the mode, mean, or some other summary statistic), and successful discrimination thereby depends on the amount of overlap between the two activated curves (i.e., ratio-dependent performance).

The amount of overlap is indexed by a subject’s Weber fraction (w) with a larger Weber fraction indicating more noise, more overlap, and thereby worse discrimination performance. This model has been found to provide an accurate fit to data from rats, pigeons, and humans of all ages.
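In practice such a model is fit by maximum likelihood: given each trial's dot pair and the subject's answer, find the w that best predicts the observed pattern of correct and incorrect responses. A toy sketch with simulated data (the trial pairs, grid range, and w = 0.2 for the simulated subject are arbitrary choices of mine):

```python
import math
import random

random.seed(0)
W_TRUE = 0.2  # simulated subject's Weber fraction (arbitrary)

def p_correct(n1, n2, w):
    """Predicted accuracy under the linear model (SD = w * n)."""
    z = (n2 - n1) / (w * math.sqrt(n1 ** 2 + n2 ** 2))
    p = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return min(max(p, 1e-9), 1 - 1e-9)  # keep the log-likelihood finite

# Simulate a session: 1000 trials spread across several ratios.
pairs = [(5, 6), (9, 10), (4, 5), (8, 12), (7, 14)] * 200
results = [random.random() < p_correct(a, b, W_TRUE) for a, b in pairs]

def neg_log_lik(w):
    return -sum(math.log(p_correct(a, b, w) if ok else 1 - p_correct(a, b, w))
                for (a, b), ok in zip(pairs, results))

# Grid-search maximum-likelihood estimate of w over 0.050 .. 0.500.
w_hat = min((i / 1000 for i in range(50, 501)), key=neg_log_lik)
print(w_hat)  # recovered estimate, close to W_TRUE
```

With enough trials at informative ratios, the recovered w converges on the simulated subject's true value, which is why a single number can summarize a whole psychometric curve.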

## References

- ↑ Dehaene, S., *The Number Sense: How the Mind Creates Mathematics*, Oxford University Press, 1997.
- ↑ ^{2.0} ^{2.1} Gallistel, C. R., & Gelman, R., “Non-verbal numerical cognition: from reals to integers”, *Trends in Cognitive Sciences*, 2000.
- ↑ Nieder, A., & Dehaene, S., “Representation of number in the brain”, *Annual Review of Neuroscience*, 2009.