1 is 0.9999999999999............

Captain Kremmen,
It's not about rounding; it's about limits. Real numbers are about limits.

$$\pi = \lim_{n \to \infty} \sum_{k=0}^n \frac{(-1)^{k} 4}{2 k + 1} = \lim_{n \to \infty} \sum_{k=0}^n \frac{(-1)^{k} ( 16 \times 5^{-2k-1} - 4 \times 239^{-2k-1} ) }{2 k + 1} = \lim_{n \to \infty} \sum_{k=0}^n \frac{(-1)^{k} 2 \times 3^{\frac{1}{2}-k}}{2 k + 1} = \lim_{n \to \infty} \sum_{k=0}^n \frac{(k!)^2 \, 2^{k+1}}{(2k+1)!} = \ldots$$

Until you understand limits, you do not understand continuity. Until you understand continuity, you do not understand what separates the concept of real numbers from the rational numbers.
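A quick numerical sketch (my own illustration, not part of the post): the partial sums of the first two series above really do close in on $\pi$ as $n$ grows, with the Machin-style series converging far faster.

```python
import math

def leibniz(n):
    # Partial sum of the Leibniz series: sum_{k=0}^{n} (-1)^k * 4 / (2k + 1)
    return sum((-1)**k * 4 / (2*k + 1) for k in range(n + 1))

def machin(n):
    # Partial sum of Machin's formula:
    # sum_{k=0}^{n} (-1)^k * (16*5^(-2k-1) - 4*239^(-2k-1)) / (2k + 1)
    return sum((-1)**k * (16 * 5**(-2*k - 1) - 4 * 239**(-2*k - 1)) / (2*k + 1)
               for k in range(n + 1))

for n in (10, 100, 1000):
    print(n, abs(leibniz(n) - math.pi), abs(machin(n) - math.pi))
```

The errors shrink toward zero; that shrinking, not any rounding rule, is what the limit notation captures.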
 
Yes. Without those limits within continuity, there is no way to define mathematics and no way of using it.
 
Are you saying that 1 divided by 3 is .333...? If so, I am saying that ".333..." is a notation that represents the result of 1 divided by 3. Let me clarify: ".333...", i.e. a short sequence of threes followed by "...", is a shorthand notation that is defined to equal 1/3, while 1 divided by 3 actually is a decimal point followed by an infinite sequence of threes.

No, what I am saying is that your assertion that the statements are only true because they are defined that way is false.

The only definition that is relevant is the use of "..." or "()" to represent a set of recurring decimals.

The statements are true because they are mathematically true, not because they are defined to be true.
 
Thanks, I know they are true. I also was under the impression that the "..." was, as you say, to represent the infinite sequence, and so I used terms that I am familiar with like "notation" and "designation" to refer to what the use of "..." in place of an infinite sequence meant. But thanks for reminding me that mathematicians are clear about .999... and 1 being the same value.
 
I shouldn't have mentioned the words "rounding up"
It really has nothing to do with my argument.

My argument is that the number 0.3333 recurring can be seen in two ways.
1. As the decimal equivalent of one third. For this purpose it is perfectly serviceable, but will forever be infinitesimally inaccurate,
even with an infinity of 3's. Three times this amount is equal to 1, not 0.9999 recurring.
2. As the number 0.3333 recurring. For this purpose the number is accurate. Three times this amount is 0.9999 recurring.

1 and 2 are not the same thing.
 
I believe that the teachers were playing with context... an old trick..

X = 0.99999..... : value of one "x" only. Context:"We have one object with the value of 0.9999..."
10X = 9.999999....: quantity of x and subsequent value of x. Context: "we still have only 10 OF object 0.999....."
10X - X = 9 :quantity of x - the value of x Contextual : mixing
9X = 9 : quantity of value only
X = 1 :quantity of value only
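The manipulation being discussed can be sketched numerically (my own check, not from the thread): run the 10X − X step on the finite truncation x_n = 0.99...9 with n nines, using exact rationals, and watch the gap between 9·x_n and 9 vanish as n grows.

```python
from fractions import Fraction

def truncated(n):
    # 0.99...9 with exactly n nines, as an exact rational: 1 - 10^(-n)
    return 1 - Fraction(1, 10**n)

for n in (1, 5, 20):
    x = truncated(n)
    # 10x - x = 9x = 9 - 9/10^n, so the gap below 9 is exactly 9/10^n
    print(n, 10*x - x, 9 - (10*x - x))
```

At every finite n the gap is exactly 9/10^n; the value 0.999... is defined as the limit of these truncations, which is why 9X = 9 holds exactly for it.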



Nope: 1 = a value of 0.9999.... not a quantity of 0.99999
we have 1 OF 0.999999 this is not the same as 1 equals 0.9999


the logical fallacy of mixing context seems to be happening here...
On the surface it makes sense but if you look at it deeper it is logically unsound IMO
The equal sign is being used in two ways.
I don't see it...

X = 0.9999... "we have one object with a value of 0.9999..."
10X = 9.9999... "we have 10 objects of 0.9999, but the VALUE is 9.9999...."
10X-X = 9 "we have 9 objects of 0.99999 and we have a value of 9."

There is no mixing that was not inherent in the opening statement.
You are reading 10X-X to be a quantity - value... when it is still quantity - quantity.
On the left of the "=" we continue to use quantity; on the right we continue to use value.

As said, the key is in accepting that 10X = 9.9999....

But I do not see a mixing of contexts.
Example:
say
1x= 1a
10x = 10a
10x - 1a = 9x
9x = 9a
1x= 1a
But the bolded step, "10x - 1a = 9x", is NOT what the proof does.
That step should read: "10x - 1x = 10a - 1a"

So no context mixing.

If anything it simply begs the question in that it relies, as stated, on 10 lots of 0.9999.... being 9.9999.... and not 0.9999....0 (i.e. with a 0 at the end of the infinith(?) decimal place - if such is even meaningful to talk about), and that 9.9999... - 0.9999... (both values) = 9 (value).
But it is not context mixing as far as I can tell.
 
1 divided by 3 = .333...

yet

0.333... times 3 = 0.999..., and not the original 1 (can't get there from here).

When the 9 is continued to infinity, there needs to be a logical assumption, not based on the math, that it is close enough. It keeps getting 9/10's closer infinitely.

One way to pose this problem using math and no logic assumptions is to try to subtract 0.999... from 1.

1 - 0.999... = 0.000..., which goes on infinitely and would have a one at the end if infinity had an end; but anybody doing this long division for a hundred years would see zero difference between 1 and 0.999...

So 0.999... = 1 is based on math as well as assumption.
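The subtraction described above can be sketched exactly (my illustration, not the poster's): 1 minus 0.99...9 with n nines is precisely 10^(-n), a quantity that can be made smaller than any positive number by taking n large enough, rather than a fixed leftover "1 at the end".

```python
from fractions import Fraction

def gap(n):
    # 0.99...9 with n nines is (10^n - 1)/10^n; the gap from 1 is 1/10^n
    nines = Fraction(10**n - 1, 10**n)
    return 1 - nines

for n in (3, 10, 50):
    print(n, gap(n))   # 1/10**3, 1/10**10, 1/10**50: shrinking without bound
```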

Using an ellipsis is one of several accepted methods of showing a repeating decimal.
 

it is an interesting logic issue and one worth pursuing IMO.

Possibly another thread devoted to this sort of thing might be in order.

Possibly one of the problems I see, after thinking about it a bit, is that
by using the simple : x= 0.9999....
we are transposing an irrational infinite repeater into a finite quantity called "x"
The value of x is "real" or "complete" or "whole" where as the value we wish to place upon it is irrational. [infinite repeating, infinitely incomplete]

As soon as we infer x=0.999.... we are already stating that x= 1x = 1(0.9999....) because there is only one x
in writing [including the implied value of 1x]:
1x = 0.9999.....
1= 0.9999...
of course if x = 0.999.... there is no one x as x now equals less than one.

To then go on and claim x as being a real number quantity (1) is false as it is not (1) it is 0.999.....

the value of 1x is now an irrational rather than real 1x.
(I am not sure I have the terminology correct re: "real" and "irrational" - numbers )

any ways... thoughts..only
 
The problem is 1/3 does not equal 0.333... unless you add an infinitesimal to it.

There is a need to prove that 1/3 = 0.333... to begin with and I believe that it does not and can not equal exactly 1/3 unless an infinitesimal is added.


Can I suggest that the (1) at the end of an infinite set of (0)'s would be what is commonly referred to as an "infinitesimal"?

1 - 0.999... = 0 + (1/infinity)
or
1- 0.999... = 0 + 1(infinitesimal)
 
Hmm, interesting way to describe "infinitesimal". Could we write it .000...1?
 
For all numbers, there is a number greater than that number, because x + 1 > x.
1 is a number.
0.999... is a number.
If 0.999... = 1 then 1 - 0.999... = 0. But this is not what the OP assumes.

Our hypothesis must be 1 > 0.999..., from which the following follows:

If 1 > 0.999... then 1 - 0.999... > 0.
Define x = 1 - 0.999....
Then x is a number. And x > 0.
Then 1/x is a number. Therefore there is a number N > 1/x.

Therefore our hypothesis leads invariably to $$\exists N\quad N \gt \frac{1}{1 - \lim_{n \to \infty } \sum_{k=1}^{n} 9 \times 10^{-k} } = \lim_{n \to \infty } \frac{1}{1 - \sum_{k=1}^{n} 9 \times 10^{-k} } = \lim_{n \to \infty } \frac{1}{ 10^{-n} } = \lim_{n \to \infty } 10^n $$.

But no such N can exist, because there is always a value of M which makes the above untrue for all values of n > M. We can even write:
$$M = 1 + \lceil \log_{10} N \rceil$$ and prove
$$\forall n \geq M \quad N \not\gt 10^M \leq 10^n$$ .


Therefore the hypothesis that $$1 > 0.999...$$ cannot be true because 1 - 0.999... is smaller than any positive number.
Not even if N = $$(\textrm{1 googol})^{\tiny \textrm{1 googol}} = (10^{100})^{(10^{100})} = (10^{10^2})^{(10^{100})} = 10^{\left( 10^2 \times 10^{100} \right)} = 10^{10^{102}}$$ can we avoid this Archimedean property of numbers.
There is always a larger number -- the natural numbers are without bound.
And so $$M = 1 + \lceil \log_{10} 10^{10^{102}} \rceil = \textrm{100 googol} + 1$$
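The Archimedean step in this post can be checked directly (my own sketch; the witness M plays the role of the post's bound). Python integers are exact and unbounded, so even a googol poses no problem: for any candidate N there is an M with 10**M > N, so no N exceeds 10**n for all n.

```python
def witness_M(N):
    # Smallest M such that 10**M > N, found by exact integer comparison.
    M = 0
    while 10**M <= N:
        M += 1
    return M

googol = 10**100
M = witness_M(googol)
print(M, 10**M > googol)   # the witness exists, defeating the hypothesis
```

(The post's N = 10^(10^102) is too large to search by loop, but the closed form M = 1 + ceil(log10 N) gives its witness the same way.)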
 
I believe in my naive way that the invention of the infinitesimal was to allow Zeno's paradox to be resolved mathematically without Achilles ever catching the Tortoise.

In other words, zero can only be proved as a null value when everything else reduces to the infinitesimal.

Perhaps another thread.
 
The problem is 1/3 does not equal 0.333....unless you add an infinitesimal to it.

Define $$\epsilon_Q = 1/3 - 0.333... \\ \epsilon_c = 1 - 0.999... $$ in honor of QQ and the OP.

Then it follows that $$3 \epsilon_Q = 3 \times ( 1/3 - 0.333... ) = 1 - 0.999... = \epsilon_c$$ so $$3 \epsilon_Q = \epsilon_c$$, right?

So why is $$\epsilon_Q \neq \epsilon_c$$?

Doesn't your scheme require an infinite number of infinitesimals if $$\epsilon_c \neq 0$$ ? How is this an improvement on the Real Numbers which have no such infinitesimals?
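The relation $3\epsilon_Q = \epsilon_c$ can be verified on finite truncations (my illustration of the point above): at every n, the leftover for 1/3 is exactly one third of the leftover for 1, and both shrink to zero together.

```python
from fractions import Fraction

def eps_Q(n):
    # 1/3 minus the n-digit truncation 0.33...3
    return Fraction(1, 3) - Fraction(10**n // 3, 10**n)

def eps_c(n):
    # 1 minus the n-digit truncation 0.99...9
    return 1 - Fraction(10**n - 1, 10**n)

for n in (2, 8):
    assert 3 * eps_Q(n) == eps_c(n)   # 3*eps_Q = eps_c exactly, at every n
    print(n, eps_Q(n), eps_c(n))
```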
 

I am not sure that I understand the question fully, [perhaps one day I will understand the notation you are using] but I tend to believe that the use of an infinitesimal is a way that mathematics can resolve the paradox using the legitimate "fudge" factor of 1(1/infinity).

It has to do I think with the philosophical question raised by Zeno's paradox and acutely demonstrated in the notion of 10/3 or 1/3 using base 10.

Extended:
Philosophically it sums up to the ability for mathematics to prove that "zero" is indeed a null value.
From what I have come to understand after exhaustive philosophical discussion, zero can only be proved as a "null" quantity or value with the use of infinitesimals as the smallest quantity or value we can reach without arriving at zero. Thus zero is proved only by default and not directly. Deductive reasoning or logic? Perhaps...
An infinitesimal is unable to be rationally multiplied or divided, as its definition is always going to be 1(1/infinity) regardless of the number of times it is used.
Same argument would I think apply to infinity * infinity = infinity...

To cut a portion of cake exactly 1/3 [in absolutum] is impossible [just as it is to resolve Pi], as infinite reduction towards the infinitesimal, as part of measuring that 1/3, is all that is possible.

The other issue worth mentioning is that looking at any whole number... it may be realised that the "wholeness" is only available when used in the context of "Quantity" and not "Quality" [aka value]


Example:
We have 10 apples each weighing 1 kg.

10 apples is true [quantity]
1 kg exactly is false [ due to infinite reduction.] [quality - value] Suffice to say it can never be exactly 1kg [in absolutum]

The issue displayed in the process Sarkus mentioned is I believe erroneous due to the fact that it mixes quantity with quality and there is a subtle but important loss of contextual consistency in that process. [as exampled above with the apples]
 
X = 0.99999.....
10X = 9.999999....
10X - X = 9
9X = 9
X = 1

Therefore X = 1 = 0.9999....

using number of apples and weight

1 apple = 0.999...kg
10 apples = 10 (0.999...kg)

10 apples - 1 apple = 9 apples
9 apples = 9 apples
1 apple = 1 apple

no paradox evident.
The issue appears to be when we multiply our apples by their weight: instead of maintaining quantity as context we shift to a mix of quality and quantity.
To arrive at 1= 0.999... is simply saying that yes 1 apple weighs 0.999...kg
 
@ QQ,

No, saying 1 = 0.999... is an equation. Stating that one side is an apple while the other side is the weight makes the two sides unrelated, which is incorrect.

The matter is best seen if you subtract 0.999... from 1 as I suggested in my last post, followed by similar statements by Rpenner.
 

The problem is we do not start out by saying 1 = 0.999...; we start out by saying X = 0.999...

big difference..
x is an algebraic symbol that is being replaced with the value 0.999....
thus the equation is simply
x = 0.999...
or transposed
0.999.... = 0.999...

X is being treated as a whole number, whereas it is not a whole number. It is merely an algebraic symbol.
So to say that x = 0.999... is simply replacing x with the value of 0.999..., hence the result 0.999... = 0.999...

so if
X = 0.999.....
10X - X = 9x
9X = 9x
X = X

Therefore X = X = 0.999.... = 0.999...
x cannot be replaced by a 1
the other lines are redundant and unnecessary if one sticks to the value x = x

The original version again:
X = 0.99999.....
10X = 9.999999....
10X - X = 9 <<<:replaces 9X with a real number... logical inconsistency (loss of context)
9X = 9
X = 1 <<: irrational number now converted to real

Therefore X = 1 = 0.9999.... <<: 1 is now an irrational number


Why is my version incorrect and the one below it deemed correct?
try

x = 0.7777...
10x=7.7777...
10x-x = 7
9x = 7
therefore 1= 1.285714586 (9/7)
which of course is total BS...:)
using the method suggested could lead to 1 being just about anything...
 
QQ, you don't seem to understand how algebra works.

x = 0.7777...
10x=7.7777...
10x-x = 7
9x = 7
So far so good.
Next step:
x = 7/9

therefore 1= 1.285714586 (9/7)
You've divided the left hand side by 9x, and the right hand side by 9. Are you thinking that x = 1, instead of x = 0.777...?
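The corrected algebra is easy to confirm with exact rationals (my own check, not part of the post): the same 10x − x manipulation applied to 0.777... lands on x = 7/9, and 7/9 really does have the repeating decimal 0.777... The method never claimed x = 1.

```python
from fractions import Fraction

x = Fraction(7, 9)        # the value the algebra yields for 0.777...
assert 10*x - x == 7      # 10x - x = 7, exactly as in the post
assert 9*x == 7           # so 9x = 7 and x = 7/9, not 1
print(float(x))           # approximately 0.777..., as expected
```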
 