1- (0.999999...) is an infinitesimally small value not quite equal to zero

OK, if it's not zero, then what is it? If you're so sure it's a nonzero value, you must be able to say what that value is! And leaving out or writing in the ellipsis makes all the difference here: no ellipsis means the number ends exactly where you stop writing digits.

When we talk about 0.99999999... we are talking about the limit of x as x approaches 1, which is written lim(x→1) x. This is understood to be infinitesimally close to 1 but not quite one.

No, we are talking about the decimal representation of a number, that is, how we write it down. We all share a concept of what the integer "one" means. The question is whether it is more valid to write 1.00000... or 0.99999... to express that number. And the point here is that either way is equally acceptable: both decimals (yes, including the 1.00000...) must be extended to infinity to correctly represent the integer we all call 1.
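One concrete way to see this (a standard geometric-series calculation, added here as an illustration): expanding 0.999... digit by digit gives a geometric series with ratio 1/10, and the usual closed-form sum evaluates to exactly 1:

```latex
0.999\ldots \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
\;=\; 9 \cdot \frac{1/10}{1 - 1/10}
\;=\; 9 \cdot \frac{1}{9}
\;=\; 1.
```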

The proof that writing 0.99999... means exactly the same thing as writing 1.00000... rewrites the decimal as an infinite series (because that is what a decimal really is): an infinite sum of numbers. The value of the series approaches 1 as the number of terms in the sum grows without bound. For any small difference from 1 that you choose to inquire about, I can always add enough terms to get closer to 1 than the difference you've specified. (This is essentially the epsilon-N definition of the limit of a sequence: you prove that for ANY small difference you are interested in, no matter how small, the partial sums eventually get closer to the limit than that specified difference.)
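As a quick sanity check of that epsilon argument, here is a short sketch (my own illustration, not part of the original post), using Python's standard fractions module so the arithmetic is exact rather than floating-point:

```python
# Partial sums of 0.9 + 0.09 + 0.009 + ... approach 1.
# After n terms the gap to 1 is exactly 1/10^n, so for any
# epsilon > 0 we can pick n large enough that the gap < epsilon.
from fractions import Fraction

def partial_sum(n):
    """Sum of the first n terms 9/10^k, i.e. 0.99...9 with n nines."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# The gap shrinks as predicted: 1 - partial_sum(n) == 1/10^n.
for n in (1, 5, 10):
    assert 1 - partial_sum(n) == Fraction(1, 10**n)

# Given any epsilon (here 10^-6), choosing n = 7 terms suffices,
# since 10^-7 < 10^-6.
epsilon = Fraction(1, 10**6)
assert 1 - partial_sum(7) < epsilon
```

The point of using Fraction instead of float is that every partial sum and every gap is computed exactly, so the "closer than any epsilon" claim is verified with no rounding error involved.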