Faldage is right. It's not that the decimal point is part of the base-ten system; it's that base ten is the system you are used to.
Consider a nonal system:
Shift the digits of 10 one place to the left and you get 100. But that is actually nine nines, or 81 decimal. In nonal it's written 100 (and you can call it a hundred if you want to, since that's just a name for your base times itself). One of the nice things about a nonal system is that the square root of the base comes out to a whole number: 3 x 3 = 10 (nine decimal).
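A quick way to check the nonal arithmetic, using Python's built-in base parsing (just a sketch; the base-9 digit strings are illustrations):

```python
# Python's int() can parse a digit string in any base from 2 to 36.
# "100" read as nonal (base 9) is nine nines:
print(int("100", 9))   # 81

# And the square root of the base is whole: 3 x 3 = 10 in nonal.
print(int("10", 9))    # 9
print(3 * 3)           # 9
```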
It really is just that the decimal system is what you are used to, and you can use any base you want. If it's higher than ten, you just have to come up with new symbols (or recycle familiar ones) for the digit values from ten up to one less than your base. Don't think of your 10 as ten in decimal (you can call it that if you want to); you have to train yourself to think of it in terms of your base.
For example, if you use base 16, the convention is to use 0 through 9, then A, B, C, D, E, and F to represent what we think of as 10 through 15 in base ten. 10 is not ten; it's F plus 1, or sixteen. And 100 is 10 times 10, but convert it and it comes out to 256 in our decimal system. That makes for compactness when dealing with large numbers, and it works really well on the computer.
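The hex claims check out the same way:

```python
# F plus 1 is written 10 in hex, and it's sixteen in decimal:
print(int("F", 16) + 1 == int("10", 16))  # True
print(int("10", 16))                      # 16

# And hex 100 is 16 times 16:
print(int("100", 16))                     # 256
```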
The one I'd like to see is a ternary system for computers. Binary, of course, is 1s and 0s, ons and offs. But in a ternary system you would have plus charge, minus charge, and neutral, at least the way I put it together in my mind. In fact, though I've thought of it before, this may well be the first time I've actually told anyone about it. It just seems to me that you could make computations a LOT faster than in binary, since each digit would carry roughly 50 percent more information. Though, not being a computer whiz, I'll admit that's speculation on my part.
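One way to put a number on that hunch is information per symbol: a three-state digit carries log2(3) bits, about 58 percent more than a bit, which is in the neighborhood of the 50 percent guess (this says nothing about actual hardware speed, of course):

```python
import math

# How many bits of information one three-state digit (a "trit") carries.
bits_per_trit = math.log2(3)
print(bits_per_trit)              # roughly 1.585
print((bits_per_trit - 1) * 100)  # roughly 58.5 percent more than a bit
```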
It would count 1, 2, 10, 11, 12, 20, 21, 22, 100, etc. 10 would be our decimal 3, 20 our decimal 6, and 100 our decimal 9.
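That counting sequence is easy to generate and check (the helper name to_ternary is just something I picked for the sketch):

```python
def to_ternary(n):
    """Render a non-negative integer as a base-3 digit string."""
    out = ""
    while n:
        n, r = divmod(n, 3)
        out = str(r) + out
    return out or "0"

# Count 1 through 9 in ternary:
print([to_ternary(n) for n in range(1, 10)])
# ['1', '2', '10', '11', '12', '20', '21', '22', '100']

# And back to decimal: 10, 20, and 100 ternary are 3, 6, and 9.
print(int("10", 3), int("20", 3), int("100", 3))  # 3 6 9
```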