Hi Pete.

Again, I beg to differ. Here is why...

The decimal NOTATION construct has PRE-EXISTING 'empty places' (a STRING of 'empty' or zero symbols) from the units position upwards, extending towards the left from the decimal point SEPARATOR symbol, which denotes the TRANSITION to the PRE-EXISTING 'empty places' extending towards the right from the "tenths" position.

**All 'leading zeros' and 'trailing zeros' are TRIVIAL inclusions, made by convention to clarify where the 'level of accuracy' terminates, and not for any fundamental 'value' information as such.**
So the decimal notation expression 000007.00000 is trivially extended, and can be reduced to fundamentals by writing it as 7. That's it.
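To illustrate this reduction concretely (a minimal Python sketch of my own, not part of the argument itself, with a hypothetical helper name), stripping the trivial leading and trailing zeros from a decimal string recovers the fundamental expression:

```python
def strip_trivial_zeros(s: str) -> str:
    """Reduce a decimal string to its fundamental form by dropping
    leading zeros and, after the decimal point, trailing zeros."""
    if "." in s:
        whole, frac = s.split(".")
        whole = whole.lstrip("0") or "0"  # keep one zero if all are stripped
        frac = frac.rstrip("0")
        return whole + "." + frac if frac else whole
    return s.lstrip("0") or "0"

print(strip_trivial_zeros("000007.00000"))  # -> 7
```

The value is unchanged either way; only the trivial padding is removed.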

So the multiplication operation by 10 is adding a "0" to 7, making it 7 x 10 = 70, and that's it. Anything more is trivial and unnecessary. So any 'proofs' depending on trivial, unnecessary manipulations which hide this fundamental operation are not 'proofs' at all, but rather trivial exercises with no point to them at all except the circuitous 'proofs' we have been seeing.
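The append-a-zero view can be sketched in a couple of lines of Python (my own illustration with a hypothetical function name, not drawn from the post): for a whole-number digit string with no decimal point in play, appending a "0" in the units place gives the same digit string as multiplying the number by 10.

```python
def times_ten_by_appending(s: str) -> str:
    """Multiply a whole-number digit string by 10 by appending a '0'
    in the units place, shifting every existing digit one place left."""
    return s + "0"

print(times_ten_by_appending("7"))   # "70"  == str(7 * 10)
print(times_ten_by_appending("10"))  # "100" == str(10 * 10)
```

Note that no decimal point appears or moves anywhere in this operation; the existing digits simply shift one place to the left.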

Anyhow, see how there is NO "." decimal point TO move, in either direction, in the fundamental treatment?

Whereas there IS a "0" to add in the "units" place position, which FORCES the 10 in the target string to move to the left, placing the 1 and 0 of the original 10 notation into the hundreds and tens places respectively, while the "0" in the units place is the additional "0" effectively brought to the string by the multiplier 10?

It is simpler and more fundamental to think of it that way than to think of some decimal point being moved (especially when, in this case, NO "." decimal point is non-trivially invoked/involved at all in the string/operation).

That was all I wanted to point to. No more than that aspect, which makes all the trivial manipulation/format 'proofs' totally unnecessary/unconvincing in the more fundamental reality context.