History of zero
One of the simplest questions our readers pose is: who discovered or invented the number zero?
Much ado about nothing: First a placeholder and then a full-fledged number, zero had many inventors
The number zero as we know it arrived in the West circa 1200, most famously delivered by Italian mathematician
Fibonacci
(aka Leonardo of Pisa), who brought it, along with the rest of the
Arabic numerals, back from his travels to North Africa. But the history
of zero, both as a concept and a number, stretches far deeper into
history—so deep, in fact, that its provenance is difficult to nail down.
"There are at least two discoveries, or inventions, of zero," says Charles Seife, author of
Zero: The Biography of a Dangerous Idea (Viking,
2000). "The one that we got the zero from came from the Fertile
Crescent." It first came to be between 400 and 300 B.C. in Babylon,
Seife says, before developing in India, wending its way through northern
Africa and, in Fibonacci's hands, crossing into Europe via Italy.
Initially, zero functioned as a mere placeholder—a way to tell 1
from 10 from 100, to give an example using Arabic numerals. "That's not a
full zero," Seife says. "A full zero is a number on its own; it's the
average of –1 and 1."
It began to take shape as a number, rather than a punctuation mark
between numbers, in India in the fifth century A.D., says Robert Kaplan,
author of
The Nothing That Is: A Natural History of Zero
(Oxford University Press, 2000). "It isn't until then, and not even
fully then, that zero gets full citizenship in the republic of numbers,"
Kaplan says. Some cultures were slow to accept the idea of zero, which
for many carried darkly magical connotations.
The second appearance of zero occurred independently in the New
World, in Mayan culture, likely in the first few centuries A.D. "That, I
suppose, is the most striking example of the zero being devised wholly
from scratch," Kaplan says.
Kaplan pinpoints an even earlier emergence of a placeholder zero,
a pair of angled wedges used by the Sumerians to denote an empty number column some 4,000 to 5,000 years ago.
But Seife is not certain that even a placeholder zero was in use so
early in history. "I'm not entirely convinced," he says, "but it just
shows it's not a clear-cut answer." He notes that the history of zero is
too nebulous to clearly identify a lone progenitor. "In all the
references I've read, there's always kind of an assumption that zero is
already there," Seife says. "They're delving into it a little bit and
maybe explaining the properties of this number, but they never claim to
say, 'This is a concept that I'm bringing forth.'"
Kaplan's exploration of zero's genesis turned up a similarly blurred
web of discovery and improvement. "I think there's no question that one
can't claim it had a single origin," Kaplan says. "Wherever you're
going to get placeholder notation, it's inevitable that you're going to
need some way to denote absence of a number."
One of the commonest questions which the readers of this archive ask is:
Who discovered zero? Why then have we not written an article on zero as
one of the first in the archive? The reason is basically because of the
difficulty of answering the question in a satisfactory form. If someone
had come up with the concept of zero which everyone then saw as a
brilliant innovation to enter mathematics from that time on, the
question would have a satisfactory answer even if we did not know which
genius invented it. The historical record, however, shows quite a
different path towards the concept. Zero makes shadowy appearances only
to vanish again almost as if mathematicians were searching for it yet
did not recognise its fundamental significance even when they saw it.
The first thing to say about zero is that there are two uses of zero
which are both extremely important but are somewhat different. One use
is as an empty place indicator in our place-value number system. Hence
in a number like 2106 the zero is used so that the positions of the 2
and 1 are correct. Clearly 216 means something quite different. The
second use of zero is as a number itself in the form we use it as 0.
There are also different aspects of zero within these two uses, namely
the concept, the notation, and the name. (Our name "zero" derives
ultimately from the Arabic sifr which also gives us the word "cipher".)
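The distinction the placeholder makes can be sketched in a few lines of Python (the digit lists and the helper name are purely illustrative):

```python
# Evaluate a sequence of digits by place value: each step shifts the
# running total one column left and adds the next digit.
def value(digits, base=10):
    total = 0
    for d in digits:
        total = total * base + d
    return total

# With a 0 holding the tens place, 2-1-0-6 and 2-1-6 name different numbers.
print(value([2, 1, 0, 6]))  # 2106
print(value([2, 1, 6]))     # 216
```

Remove the zero and the remaining digits slide into the wrong columns, which is exactly the ambiguity the early placeholder was invented to resolve.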
Neither of the above uses has an easily described history. It just did
not happen that someone invented the ideas, and then everyone started to
use them. Also it is fair to say that the number zero is far from an
intuitive concept. Mathematical problems started as 'real' problems
rather than abstract problems. Numbers in early historical times were
thought of much more concretely than the abstract concepts which are our
numbers today. There are giant mental leaps from 5 horses to 5 "things"
and then to the abstract idea of "five". If ancient peoples solved a
problem about how many horses a farmer needed then the problem was not
going to have 0 or -23 as an answer.
One might think that once a place-value number system came into
existence then the 0 as an empty place indicator is a necessary idea,
yet the Babylonians had a place-value number system without this feature
for over 1000 years. Moreover there is absolutely no evidence that the
Babylonians felt that there was any problem with the ambiguity which
existed. Remarkably, original texts survive from the era of Babylonian
mathematics. The Babylonians wrote on tablets of unbaked clay, using
cuneiform writing. The symbols were pressed into soft clay tablets with
the slanted edge of a stylus and so had a wedge-shaped appearance (and
hence the name cuneiform). Many tablets from around 1700 BC survive and
we can read the original texts. Of course their notation for numbers was
quite different from ours (and not based on 10 but on 60) but to
translate into our notation they would not distinguish between 2106 and
216 (the context would have to show which was intended). It was not
until around 400 BC that the Babylonians put two wedge symbols into the
place where we would put zero to indicate which was meant, 216 or 21''6.
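A minimal sketch of the Babylonian situation in Python, assuming we translate the wedge marks into lists of base-60 digits (the function name is ours):

```python
# Evaluate a list of base-60 digits, as in Babylonian positional notation.
def sexagesimal(digits):
    total = 0
    for d in digits:
        total = total * 60 + d
    return total

# Without a placeholder, the marks 2, 1, 6 can only be read as three
# adjacent columns; the wedge sign lets a scribe mark an empty column.
print(sexagesimal([2, 1, 6]))     # 7266   (2*3600 + 1*60 + 6)
print(sexagesimal([2, 1, 0, 6]))  # 435606 (2*216000 + 1*3600 + 0*60 + 6)
```

The two readings differ by a factor of roughly 60, which is why context alone was often enough to disambiguate in practice.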
The two wedges were not the only notation used, however, and on a tablet
found at Kish, an ancient Mesopotamian city located east of Babylon in
what is today south-central Iraq, a different notation is used. This
tablet, thought to date from around 700 BC, uses three hooks to denote
an empty place in the positional notation. Other tablets dated from
around the same time use a single hook for an empty place. There is one
common feature to this use of different marks to denote an empty
position. This is the fact that it never occurred at the end of the
digits but always between two digits. So although we find 21 '' 6 we
never find 216 ''. One has to assume that the older feeling that the
context was sufficient to indicate which was intended still applied in
these cases.
If this reference to context appears silly then it is worth noting that
we still use context to interpret numbers today. If I take a bus to a
nearby town and ask what the fare is then I know that the answer "It's
three fifty" means three pounds fifty pence. Yet if the same answer is
given to the question about the cost of a flight from Edinburgh to New
York then I know that three hundred and fifty pounds is what is
intended.
We can see from this that the early use of zero to denote an empty place
is not really the use of zero as a number at all, merely the use of
some type of punctuation mark so that the numbers had the correct
interpretation.
Now the ancient Greeks began their contributions to
mathematics around the time that zero as an empty place indicator was
coming into use in Babylonian mathematics. The Greeks however did not
adopt a positional number system. It is worth thinking just how
significant this fact is. How could the brilliant mathematical advances
of the Greeks not see them adopt a number system with all the advantages
that the Babylonian place-value system possessed? The real answer to
this question is more subtle than the simple answer that we are about to
give, but basically the Greek mathematical achievements were based on
geometry. Although
Euclid's
Elements
contains a book on number theory, it is based on geometry. In other
words Greek mathematicians did not need to name their numbers since they
worked with numbers as lengths of lines. Numbers which required to be
named for records were used by merchants, not mathematicians, and hence
no clever notation was needed.
Now there were exceptions to what we have just stated.
The exceptions were the mathematicians who were involved in recording
astronomical data. Here we find the first use of the symbol which we
recognise today as the notation for zero, for Greek astronomers began to
use the symbol O. There are many theories why this particular notation
was used. Some historians favour the explanation that it is omicron, the
first letter of the Greek word for nothing namely "ouden".
Neugebauer,
however, dismisses this explanation since the Greeks already used
omicron as a number - it represented 70 (the Greek number system was
based on their alphabet). Other explanations offered include the fact
that it stands for "obol", a coin of almost no value, and that it arises
when counters were used for counting on a sand board. The suggestion
here is that when a counter was removed to leave an empty column it left
a depression in the sand which looked like O.
Ptolemy in the
Almagest written around 130 AD uses the Babylonian sexagesimal system together with the empty place holder O. By this time
Ptolemy
is using the symbol both between digits and at the end of a number and
one might be tempted to believe that at least zero as an empty place
holder had firmly arrived. This, however, is far from what happened.
Only a few exceptional astronomers used the notation and it would fall
out of use several more times before finally establishing itself. The
idea of the zero place (certainly not thought of as a number by
Ptolemy who still considered it as a sort of punctuation mark) makes its next appearance in Indian mathematics.
The scene now moves to India where, it is fair to say, the numerals and
number system that have evolved into the highly sophisticated ones we use
today were born. Of course that is not to say that the Indian system
did not owe something to earlier systems and many historians of
mathematics believe that the Indian use of zero evolved from its use by
Greek astronomers. As well as some historians who seem to want to play
down the contribution of the Indians in a most unreasonable way, there
are also those who make claims about the Indian invention of zero which
seem to go far too far. For example Mukherjee in [6] claims:-
... the mathematical conception of zero ... was also present in the spiritual form from 17 000 years back in India.
What is certain is that by around 650 AD the use of zero as a number came
into Indian mathematics. The Indians also used a place-value system and
zero was used to denote an empty place. In fact there is evidence of an
empty place holder in positional numbers from as early as 200 AD in
India but some historians dismiss these as later forgeries. Let us
examine this latter use first since it continues the development
described above.
In around 500 AD
Aryabhata
devised a number system which has no zero yet was a positional system.
He used the word "kha" for position and it would be used later as the
name for zero. There is evidence that a dot had been used in earlier
Indian manuscripts to denote an empty place in positional notation. It
is interesting that the same documents sometimes also used a dot to
denote an unknown where we might use
x. Later Indian
mathematicians had names for zero in positional numbers yet had no
symbol for it. The first record of the Indian use of zero which is dated
and agreed by all to be genuine was written in 876.
We have an inscription on a stone tablet which contains a date which
translates to 876. The inscription concerns the town of Gwalior, 400 km
south of Delhi, where they planted a garden 187 by 270 hastas which
would produce enough flowers to allow 50 garlands per day to be given to
the local temple. Both of the numbers 270 and 50 are denoted almost as
they appear today although the 0 is smaller and slightly raised.
We now come to considering the first appearance of
zero as a number. Let us first note that it is not in any sense a
natural candidate for a number. From early times numbers are words which
refer to collections of objects. Certainly the idea of number became
more and more abstract and this abstraction then makes possible the
consideration of zero and negative numbers which do not arise as
properties of collections of objects. Of course the problem which arises
when one tries to consider zero and negatives as numbers is how they
interact in regard to the operations of arithmetic, addition,
subtraction, multiplication and division. In three important books the
Indian mathematicians
Brahmagupta,
Mahavira and
Bhaskara tried to answer these questions.
Brahmagupta
attempted to give the rules for arithmetic involving zero and negative
numbers in the seventh century. He explained that given a number then if
you subtract it from itself you obtain zero. He gave the following
rules for addition which involve zero:-
The sum of zero and a negative number is negative, the sum of a
positive number and zero is positive, the sum of zero and zero is zero.
Subtraction is a little harder:-
A negative number subtracted from zero is positive, a positive number
subtracted from zero is negative, zero subtracted from a negative
number is negative, zero subtracted from a positive number is positive,
zero subtracted from zero is zero.
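Brahmagupta's rules for sums and differences involving zero agree with modern integer arithmetic, which a few assertions can confirm (the sample values are arbitrary):

```python
# Check Brahmagupta's sign rules against ordinary integer arithmetic.
for n in [7, -7]:
    assert n + 0 == n    # adding zero preserves sign and magnitude
    assert n - 0 == n    # subtracting zero changes nothing
    assert 0 - n == -n   # subtracting from zero flips the sign
assert 0 + 0 == 0 and 0 - 0 == 0
print("Brahmagupta's addition and subtraction rules all hold.")
```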
Brahmagupta then says that any number when multiplied by zero is zero but struggles when it comes to division:-
A positive or negative number when divided by zero is a fraction with
the zero as denominator. Zero divided by a negative or positive number
is either zero or is expressed as a fraction with zero as numerator and
the finite quantity as denominator. Zero divided by zero is zero.
Really Brahmagupta is saying very little when he suggests that n divided by zero is n/0.
Clearly he is struggling here. He is certainly wrong when he then
claims that zero divided by zero is zero. However it is a brilliant
attempt from the first person that we know who tried to extend
arithmetic to negative numbers and zero.
In 830, around 200 years after
Brahmagupta wrote his masterpiece,
Mahavira wrote
Ganita Sara Samgraha which was designed as an updating of
Brahmagupta's book. He correctly states that:-
... a number multiplied by zero is zero, and a number remains the same when zero is subtracted from it.
However his attempts to improve on
Brahmagupta's statements on dividing by zero seem to lead him into error. He writes:-
A number remains unchanged when divided by zero.
Since this is clearly incorrect my use of the words
"seem to lead him into error" might be seen as confusing. The reason for
this phrase is that some commentators on
Mahavira have tried to find excuses for his incorrect statement.
Bhaskara wrote over 500 years after
Brahmagupta. Despite the passage of time he is still struggling to explain division by zero. He writes:-
A quantity divided by zero becomes a fraction the denominator of
which is zero. This fraction is termed an infinite quantity. In this
quantity consisting of that which has zero for its divisor, there is no
alteration, though many may be inserted or extracted; as no change takes
place in the infinite and immutable God when worlds are created or
destroyed, though numerous orders of beings are absorbed or put forth.
So Bhaskara tried to solve the problem by writing n/0 = ∞. At first sight
we might be tempted to believe that Bhaskara has it correct, but of course
he does not. If this were true then 0 times ∞ must be equal to every number
n, so all numbers would be equal. The Indian mathematicians could not bring
themselves to the point of admitting that one could not divide by zero.
Bhaskara did correctly state other properties of zero, however, such as 0² = 0 and √0 = 0.
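Modern IEEE-754 floating point, as exposed in Python, echoes Bhaskara's intuition that n/0 behaves like an unbounded quantity, and it also exposes the flaw in treating ∞ as a full number: the standard refuses to give 0 × ∞ a value at all.

```python
import math

# As the divisor shrinks toward zero, n/x grows without bound,
# which is the intuition behind Bhaskara's n/0 = "infinite quantity".
for x in [1e-2, 1e-8, 1e-300]:
    print(5.0 / x)

# But the "all numbers become equal" argument shows 0 * inf cannot be
# assigned consistently; IEEE arithmetic marks it as "not a number".
print(math.isnan(0.0 * math.inf))  # True
```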
Perhaps we should note at this point that there was another civilisation
which developed a place-value number system with a zero. This was the
Maya people who lived in central America, occupying the area which today
is southern Mexico, Guatemala, and northern Belize. This was an old
civilisation but flourished particularly between 250 and 900. We know
that by 665 they used a place-value number system to base 20 with a
symbol for zero. However their use of zero goes back further than this
and was in use before they introduced the place-valued number system.
This is a remarkable achievement but sadly did not influence other
peoples.
You can see a separate article about
Mayan mathematics.
The brilliant work of the Indian mathematicians was
transmitted to the Islamic and Arabic mathematicians further west. It
came at an early stage for
al-Khwarizmi wrote
Al'Khwarizmi on the Hindu Art of Reckoning
which describes the Indian place-value system of numerals based on 1,
2, 3, 4, 5, 6, 7, 8, 9, and 0. This work was the first in what is now
Iraq to use zero as a place holder in positional base notation. Ibn
Ezra, in the 12th century, wrote three treatises on numbers which helped to bring the
Indian symbols and ideas of decimal fractions to the attention of some
of the learned people in Europe.
The Book of the Number
describes the decimal system for integers with place values from left to
right. In this work ibn Ezra uses zero which he calls galgal (meaning
wheel or circle). Slightly later in the 12th century al-Samawal was writing:-
If we subtract a positive number from zero the same negative number
remains. ... if we subtract a negative number from zero the same
positive number remains.
The Indian ideas spread east to China as well as west to the Islamic countries. In 1247 the Chinese mathematician
Ch'in Chiu-Shao wrote
Mathematical treatise in nine sections which uses the symbol O for zero. A little later, in 1303,
Zhu Shijie wrote
Jade mirror of the four elements which again uses the symbol O for zero.
Fibonacci was one of the main people to bring these new ideas about the number system to Europe. As the authors of [12] write:-
An important link between the Hindu-Arabic number system and the European mathematics is the Italian mathematician Fibonacci.
In
Liber Abaci he described the nine Indian
symbols together with the sign 0 for Europeans in around 1200 but it was
not widely used for a long time after that. It is significant that
Fibonacci
is not bold enough to treat 0 in the same way as the other numbers 1,
2, 3, 4, 5, 6, 7, 8, 9 since he speaks of the "sign" zero while the
other symbols he speaks of as numbers. Although clearly bringing the
Indian numerals to Europe was of major importance we can see that in his
treatment of zero he did not reach the sophistication of the Indians
Brahmagupta,
Mahavira and
Bhaskara nor of the Arabic and Islamic mathematicians such as
al-Samawal.
One might have thought that the progress of the number
systems in general, and zero in particular, would have been steady from
this time on. However, this was far from the case.
Cardan
solved cubic and quartic equations without using zero. He would have
found his work in the 1500's so much easier if he had had a zero but it
was not part of his mathematics. By the 1600's zero began to come into
widespread use but still only after encountering a lot of resistance.
Of course there are still signs of the problems caused by zero. Recently
many people throughout the world celebrated the new millennium on 1
January 2000. Of course they celebrated the passing of only 1999 years
since when the calendar was set up no year zero was specified. Although
one might forgive the original error, it is a little surprising that
most people seemed unable to understand why the third millennium and the
21st century begin on 1 January 2001. Zero is still causing problems!
http://www-gap.dcs.st-and.ac.uk/~history/HistTopics/Zero.html
https://www.washingtonpost.com
As you may have noticed lately, a lot of ordinarily sensible citizens
tend to develop fairly severe fruitcake syndrome when the calendar
turns up a year with a lot of zeros. There's something about 2000, with
its three creepy ciphers, that seems important, if not actually ominous.
Of
course, it doesn't mean anything. Calling this 2000 A.D. is based on a
dating system that uses the wrong year for the start of the A.D. era:
Jesus of Nazareth probably was born in 4 B.C. And even then, the 21st
century -- and the new millennium -- don't start until 2001. That's
because there's no year 0 in our numbering scheme, which dates from the
early medieval period.
In fact, zero -- as a number or a concept
-- is a shockingly new idea in western civilization. For nearly everyone
in Europe, such a thing was unthinkable until the 14th century at the
earliest. That's nearly 1,000 years after zero's invention.
Why the delay?
There
are several answers, and they begin about 10,000 years ago in the
earliest grubby epoch of organized human activity. In those days, people
didn't really need a zero. Arithmetic probably arose from the social
need for counting things -- number of goats sold, bundles of grain
delivered and the like.
So who wanted a symbol for nothing? As
the English philosopher Alfred North Whitehead once observed, "No one
goes out to buy zero fish."
All that was necessary for commerce
or inventory was to make a mark on something to represent each goat or
grain bundle or non-zero seafood order.
But as quantities grew
larger, societies more complex and human mathematical curiosity a bit
friskier, people had to rely on fancier kinds of tallies. There is
evidence of such rudimentary numbering systems as long ago as 3,500
B.C., and each culture devised its own.
The differences were
sometimes spectacular. You might think that it would be "natural" to
base a number system on 10, as we do now, because there are 10 digits on
both hands. And in fact, the Sumerians, Egyptians and, later, the
Romans did.
But the ancient Babylonians, the same outfit that
brought you the 360-degree circle and the 60-minute hour, used a system
based, not surprisingly, on 60. The Maya in Central America used a
base-20 system. Some early Greek cultures bumbled along with base-5
before the decimal (base-10) system took over.
There are still
societies in Brazil that employ a base-2, or binary, system. They count
this way: one, two, two-and-one, two-and-two, two-and-two-and-one and so
on, as Charles Seife describes in his forthcoming book Zero: The
Biography of a Dangerous Idea.
And lots of computer programmers
learn to write in base-16, or hexadecimal, which accords nicely with PC
systems that use eight-bit "bytes" and 16-bit "words."
In truth,
it really doesn't matter what your base is; you'll get the same result
if you do your arithmetic correctly. What does matter, in terms of
efficiency of computation, is how you represent the numbers.
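The point that the base is a matter of representation, not of value, can be illustrated with a small conversion routine (a sketch; the helper name is ours):

```python
# Represent a non-negative integer as a list of digits in a given base
# by repeated division, most significant digit first.
def to_base(n, base):
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(r)
    return digits[::-1] or [0]

# The same quantity in four of the bases mentioned above: binary,
# hexadecimal, Mayan base-20, and Babylonian base-60.
for b in [2, 16, 20, 60]:
    print(b, to_base(315, b))
# e.g. base 16 gives [1, 3, 11], since 1*256 + 3*16 + 11 = 315
```

Whichever base you pick, arithmetic done correctly on the representations yields the same underlying quantity.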
Unfortunately, most of the early systems lacked two characteristics that
make modern calculations so easy.
One is called "positional
notation," a fancy term for a way of depicting numbers so you can stack
them in columns to add or subtract. The other is a number for zero. They
arrived in the West in that order. And they were a long time coming.
Some
ancient cultures used a different, unique symbol for each number up to
some limit. The Greek system, for example, employed dozens of
alphabetical characters. Alpha was 1, beta 2, kappa 20, sigma 200 and so
forth.
Many numbering schemes left room for a lot of
uncertainty. For example, in the Babylonian system a single wedge meant
one. But the same symbol also stood for 60. And also 60 times 60, or
3,600. So a pair of wedges was 2 or 61 or maybe 3,601. (Similar logic
seems to persist in Washington today when dealing with the federal
budget.) The intended value was determined by context.
But by 300 B.C., the Babylonians had
solved this problem -- at least somewhat -- with an innovation. They
added a symbol that functioned as a place holder.
This was
possible because their numbering system employed positional notation, or
place value. That is, the value of a number depended on its position
within the whole number, just as it does in our own system. The number 2
signifies something quite different in 12 than it does in 429 or 2,763,
and the difference depends on its place within the number.
So the Babylonians stuck in a couple of hash marks to tell readers at a glance what position a symbol was supposed to occupy.
The
place holder wasn't a real number and certainly not zero as we know it
today. It was more like an empty string on an abacus or an empty row in
the typically sand-covered "counting boards" used by various societies.
In those, each row represented quantities of a different magnitude.
For
example, suppose you wanted to depict the number 315 on a base-10
counting board. You'd put five stones in the farthest-right "singles"
row; then one stone in the next, or "tens" row to the left of that; and
then three stones in the "hundreds" row to the left of that. So popular
was this system that our word "calculate" comes from the Latin word
calculus, meaning pebble. Not exactly rocket science, but a clear step
in the right direction.
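The counting-board procedure is just repeated division by the base. A short sketch (with an illustrative helper) reproduces the 315 example and also shows where the empty row, the proto-zero, appears:

```python
# Stones per row of a base-10 counting board, from the "hundreds" row
# down to the "singles" row, found by repeated division.
def board_rows(n, base=10):
    rows = []
    while n:
        n, stones = divmod(n, base)
        rows.append(stones)
    return rows[::-1]

print(board_rows(315))  # [3, 1, 5]: three hundreds, one ten, five singles
print(board_rows(305))  # [3, 0, 5]: the empty "tens" row is the proto-zero
```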
Counting boards, and thus empty spaces on
counting boards, had been around at least since the Socratic era. But
it would be nearly 1,000 years until a genuine zero arose independently
within two cultures -- Mayan civilization in the Yucatan peninsula,
which was unknown to Europe at the time, and India to the far more
accessible east.
The Maya didn't mess around. Not only did they
develop positional notation, but they also had a real zero and used it
boldly. The first day of each 20-day month was the 0th; the last was the
19th. Unfortunately, Europeans, whose systems eventually would dominate
world culture, didn't have a clue about the Maya until the 16th
century.
Instead, our zero came from India, which seems to have
picked up the rudimentary concept of an empty-value place holder from the
invading forces of Alexander the Great in the 4th century B.C. For a
fascinating discussion of how this cross-cultural transplant probably
took place, see Robert Kaplan's new book, The Nothing That Is: A Natural
History of Zero.
Whatever happened, the Hindus eventually
elevated zero to the rank of a number. That is, it was not just a
place-holder. It was a real part of the number system, and it
represented a real quantity: nothing.
Nobody knows exactly when
the first Indian nullmeister came up with this improved zero and a
circular symbol to represent it, but it seems to have been well
established by at least the 7th century A.D., when Islamic peoples began
pushing toward China.
Without a doubt, zero behaved badly. It
scarcely made sense, for example, that a real quantity such as, say, 352
would, if multiplied times zero, simply equal nothing. Where did all
those 352 real things go? Dividing by zero or raising a number to the
zeroth power yielded equally baffling results. Nonetheless, zero soon
was headed west.
As the epic wave of Islamic conquest washed
eastward, it swept up the best ideas of local populations on the way.
One of those was the Indian zero. In fact, Arab scholars admired the
entire base-10 Indian symbol set -- nine numbers and a circle for zero
-- and brought it back.
Shortly after zero reached Baghdad, it
attracted the attention of the leading Arab mathematician of the 9th
century, the great al-Khwarazmi. He may seem rather obscure, but a
corruption of his name gives us the modern computer-science word
"algorithm," and the word "algebra" comes from the Arabic term al-jabr
that means something like "reassembly" and was part of the title of one
of his major works.
Al-Khwarazmi popularized use of the Indian
symbols, which we now incorrectly call "Arabic numerals," as well as the
exotic notion of zero -- a symbol for nothing whatsoever.
European sages, in turn, picked up the idea of zero from the Arabs.
At
first, the notion of a complete absence (true zero) didn't sit well
with western minds. Greco-Roman philosophy, as typified in the teachings
of Aristotle, had been hostile to the concept of nothingness. It had no
place for the idea of complete emptiness, even in what passed for the
idea of "outer space," which in those days of an Earth-centered cosmos
included everything above the moon.
You might say that classical
thinkers abhorred a vacuum. Certainly, they avoided the void. Indeed,
until the end of the 19th century, thousands of highly sophisticated
scientists believed that outer space simply could not be empty. They
figured it had to be filled with some stuff they called "ether."
Moreover,
zero messed up the orderly behavior of arithmetic. Sometimes, it didn't
do anything at all: 473 - 0 = 473. Big deal. But sometimes it made a
colossal mess: 473 / 0 = . . . um, well, infinity. Or something else. Or
whatever you want it to be.
But to Eastern minds, nothing was no
big deal. After all, many Hindu and Buddhist beliefs were based on the
idea that reality actually is illusory, a sort of fictional movie that
our brains foolishly project on a screen of cosmic nothingness. In
addition, Judeo-Christian and Islamic creation stories involved a deity
shaping the Earth from a featureless void.
Philosophy aside,
Arabic numerals, with their positional notation and nifty zero, were
terrific business tools for those more interested in money than
metaphysics.
So
by the early 13th century, the Islamic-Indian method of calculating was
being advocated by a few Europeans, among them a well-traveled Italian
merchant and part-time mathematician named Fibonacci. His book Liber
Abaci, or Book of the Abacus, urged the benefits of the Arabic numbering
system.
It made a lot of sense. In those days, Europeans were
still doing math with Roman numerals. And you couldn't get very far by
placing CDXXXVII over LXIV and adding the columns. But put 103 over 21,
and you almost automatically ended up with 124. Indeed, calculating with
Arabic numerals seemed to be just as fast as using a counting board and
gave the user far greater range.
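The contrast is easy to feel in code: even before any column-addition can start, a Roman numeral has to be decoded symbol by symbol. A small (illustrative) reader for the article's example:

```python
# Minimal Roman-numeral reader: a smaller symbol before a larger one
# means subtract, otherwise add.
VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(s):
    total = 0
    for ch, nxt in zip(s, s[1:] + ' '):  # pad so the last symbol is added
        v = VALUES[ch]
        total += -v if nxt in VALUES and VALUES[nxt] > v else v
    return total

# The sum the article describes: no column in sight.
a, b = roman_to_int('CDXXXVII'), roman_to_int('LXIV')
print(a, b, a + b)  # 437 64 501
```

With Arabic numerals the same sum is a mechanical column addition; with Roman numerals the representation itself fights you, which is the article's point.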
Nonetheless, acceptance was
slow. In 1299, Florence banned use of Arabic numerals, ostensibly
because it was so easy to alter them -- for example, by turning a 0 into
a 6. Of course, a Roman I could be changed to a V with as little
effort, but never mind.
Finally, by 1500, there had been plenty
of head-to-head computation competitions between counting boards and
Arabic numbers. And the numbers were starting to win every time.
Roman
numerals finally began to disappear except for ceremonial purposes. And
zero, the numerical embodiment of nothing at all, was here to stay.