# Weber Fraction (Beginners)

### From Panamath

You walk into a restaurant and order the same item as a friend. A few sodas and some jokes later, the waiter comes with both orders. Something, though, seems wrong. You can't quite tell what. After a quick count, you raise your head, motion for the waiter to come closer: you'd gotten fewer appetizers, so could you please have some more?

What your brain used in the first part of this scenario was its approximate number system (ANS), which perceives "ballpark estimates" rather than exact quantities. This ability can be modeled with the "Gaussian curve". You might already know what that is; the Gaussian is also called the normal or "bell-shaped" distribution in statistics. If not, imagine the outline of a bell sitting on a table. You can see that it's symmetrical and that one point is always the highest. That point sits at the mean, the number your data tends to "cluster" around, even if there is some variation.

The area under the Gaussian curve depicts probability. This fits well with our ANS model. Remember that number line from grade school? Imagine putting a Gaussian curve on top of it. The function here depicts the amount of brain activity associated with a certain number.

Let's say you were shown seven circles. The ANS looks at them and goes: hey, there are seven! (Seven, at this point, is the mean.) However, because the ANS doesn't detect exact quantities, it also allows that the answer might be six or eight, and it assigns ever-lower probabilities to numbers further away from seven.
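To make this concrete, here is a minimal Python sketch of the Gaussian model for "seven circles". The Weber fraction value of 0.15 is purely illustrative, not a measured one, and the sketch assumes the curve's spread is proportional to the number being represented.

```python
import math

def gaussian_pdf(x, mean, sd):
    """Density of a normal (Gaussian) distribution at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

w = 0.15          # illustrative Weber fraction, not a measured value
mean = 7          # the number of circles actually shown
sd = w * mean     # assumption: spread is proportional to the number

# Density is highest at 7 and falls off for numbers further away.
for n in [5, 6, 7, 8, 9]:
    print(n, round(gaussian_pdf(n, mean, sd), 3))
```

Running this shows the density peaking at 7 and tapering symmetrically on either side, which is exactly the "clustering around the mean" described above.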

Now consider a higher number, say 35. This one's trickier. Unlike in seven's case, this amount could plausibly be a whole lot of other numbers, and you're a lot less sure that it's exactly 35. This is consistent: the larger the number, the lower the Gaussian's "peak" (its probability is spread more thinly) and the wider its "spread" (the range of reasonable and possible values).

Because the range steadily increases, we can use Weber's law for modeling the ANS: how well you can discriminate numbers depends on a ratio. This ratio, or Weber fraction, is calculated by the following formula:
Weber fraction = (number of items in the bigger set − number of items in the smaller set) / (number of items in the smaller set)

This number helps us calculate the spread (standard deviation = number × Weber fraction) of any Gaussian placed on the number line. This, in turn, can be used to show how well someone is able to discriminate between any possible combination of relative quantities.
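The formula can be sketched directly in code. This is a minimal Python illustration; the pair 8 vs. 7 is just an example of a just-discriminable pair, not a measured result.

```python
def weber_fraction(bigger, smaller):
    """Weber fraction for a pair of set sizes:
    (bigger - smaller) / smaller."""
    return (bigger - smaller) / smaller

def spread(number, w):
    """Standard deviation of the Gaussian for `number`
    on the mental number line: number * Weber fraction."""
    return number * w

w = weber_fraction(8, 7)          # 1/7, about 0.143
print(round(w, 3))                # -> 0.143
print(round(spread(35, w), 2))    # -> 5.0 (wider Gaussian for larger numbers)
```

Note how `spread(35, w)` is much larger than `spread(7, w)` for the same Weber fraction, which is the widening effect described above.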

Why is this so? When you have two Gaussian functions, they may or may not overlap. Let's take a look at the first case. You're comparing 36 objects vs. 37. The Gaussian curves for these numbers sit close together and overlap heavily. In the overlapping region, it's hard to tell which one is which. However, as long as the two numbers aren't exactly the same, there will always be some region, however small, where the bigger quantity is reliably perceived as bigger. There's a small probability, then, that you can tell the "37" group is greater. The Weber fraction allows you to calculate the spreads and the amount of overlap or non-overlap, letting us model even this near-chance discrimination.
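This overlap story can be sketched as well. Under a common Gaussian reading of the ANS model, the difference between the two noisy representations is itself Gaussian, so the chance of judging the larger set as larger comes out of the normal CDF. The Weber fraction of 0.15 below is illustrative, and this is one standard formulation, not necessarily the exact computation Panamath uses.

```python
import math

def norm_cdf(z):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def p_correct(n1, n2, w):
    """Chance of judging the larger set as larger, assuming each set is
    represented by a Gaussian with sd = w * n; the difference then has
    sd = w * sqrt(n1^2 + n2^2)."""
    big, small = max(n1, n2), min(n1, n2)
    z = (big - small) / (w * math.sqrt(big**2 + small**2))
    return norm_cdf(z)

w = 0.15  # illustrative Weber fraction
print(round(p_correct(36, 37, w), 3))  # heavy overlap: barely above chance
print(round(p_correct(7, 14, w), 3))   # 2:1 ratio: nearly certain
```

The 36-vs-37 comparison lands only slightly above the 50% guessing level, while the 2:1 comparison is almost always judged correctly, matching the ratio-dependence described above.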

**Other uses of the Weber fraction:**

You might have already heard of the Weber fraction as the basis for calculating some sort of "threshold" below which someone wouldn't be able to discriminate two numbers, or as the midpoint between one's best performance and an "equalized" value where everyone has the same chance of getting the ANS task correct. These descriptions can be accurate, but they use the Weber fraction in a very specific way.