# Tensors and moment of inertia

Discussion in 'Physics & Math' started by arfa brane, Dec 16, 2017.

1. ### arfa brane (call me arf), Valued Senior Member

Messages:
5,501
Since it's about the moment of inertia, this is something a rotating body has. But first, a mathematical way to define a rotating point on a circle: it traces out an arc, s, which is equal to the angle times the radius.

But the angle is only defined if there are two points on the circle; assume the horizontal radius line is fixed and vary the angle. The point of interest is at the end of r in the diagram, so r is a position vector and the point it identifies has a tangential velocity. Of course, when you physically rotate a particle with mass in a circular path about the centre, no little arrows pop up out of nowhere; what you see is motion.
The tangential velocity, the momentum and so on are abstractions, but well-defined ones.
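Those two relations, s = rθ for the arc and v = rω for the tangential speed, can be sketched with made-up numbers (the radius and angular speed here are arbitrary examples):

```python
import math

# A point on a circle of radius r swept through angle theta (radians)
# traces an arc s = r * theta; if theta grows at rate omega (rad/s),
# the tangential speed is v = r * omega.
r = 2.0              # radius (arbitrary units)
theta = math.pi / 3  # swept angle
omega = 0.5          # angular speed, rad/s

s = r * theta        # arc length
v = r * omega        # tangential speed

print(s)  # ~2.094
print(v)  # 1.0
```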

3. ### QuarkHead (Remedial Math Student), Valued Senior Member

Messages:
1,540
This operation, which you seem to think is some sort of "magic" is explained in the thread I referenced. I suggest you read it carefully.

I cannot parse this sentence; it is unintelligible to me. But I insist again: the inner product of two vectors is a number. Likewise the scalar product of a vector and a co-vector, and reciprocally.

5. ### Confused2, Registered Senior Member

Messages:
479
QuarkHead - really a request for help even if it doesn't look like it...
You lost me in the first line. I'm not sure Arfa Brane is up to that first line either. I'm guessing (italics? why?) the reason for using tensors is that the notation is immensely efficient; despite this, it may be helpful to step back to what the notation 'hides' (or represents) so 'we' can get a better grasp of what is actually going on.

7. ### arfa brane (call me arf), Valued Senior Member
Well, I can say I merely conjured it up so that I could write down: $u_jv^j = u^iv^j \delta_{ij}$.

I did this just because everything is well defined on both sides of the = sign. Talk about motivated.
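As a numerical check of that equation, with arbitrary example vectors and numpy's einsum doing the index contraction:

```python
import numpy as np

# Check of u_j v^j = u^i v^j delta_ij in R^2: contracting the outer
# product u^i v^j with the Kronecker delta recovers the ordinary
# dot product. The vectors are arbitrary examples.
u = np.array([3.0, 4.0])
v = np.array([1.0, 2.0])
delta = np.eye(2)                          # delta_ij

lhs = u @ v                                # u_j v^j (components equal in this basis)
rhs = np.einsum('i,j,ij->', u, v, delta)   # u^i v^j delta_ij

print(lhs, rhs)  # 11.0 11.0
```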

You say the inner product is a scalar, a number. I say it depends on which end of the map you mean: it's an operation, and it's a number. You say the inner product is linear in both arguments, and also that it's a number (how does a number have linear arguments?). The inner product is a (0,2) tensor, Schuller says, and so does Wikipedia.

Schuller also says, around 65 minutes into lecture 3, that everything he's covered so far says nothing about numbers, doesn't mention numbers etc.

Last edited: Dec 22, 2017
8. ### arfa brane (call me arf), Valued Senior Member
Look at the diagram I posted; it's a vector space because Euclidean space is a vector space. If you say that r is rotating about the centre, then r is a rotating position vector, and the point on the circle that r points to has a velocity.

9. ### arfa brane (call me arf), Valued Senior Member
--https://ncatlab.org/nlab/show/bilinear+form
Where is "the inner product is a real number"? Already the concept is swimming with terminology--possibly not all that well-defined for most people.

What is degeneracy? What is a positive definite bilinear map?

10. ### arfa brane (call me arf), Valued Senior Member
$u_jv^j = u^iv^j \delta_{ij}$.

Let's stare at the above for a little, and ask: why are some indices upper, and some lower? Does it matter in a Euclidean vector space like $\mathbb {R}^2$?

If you have the row and column indices written up or down, it doesn't matter to matrix algebra, and it doesn't matter to Euclid either, or Pythagoras. A vector in Euclidean space always has a basis which 'yields' Pythagoras' well-defined relation between the sides of a right triangle. The equation represents this relation with tuples of (scalar) components, ordered by indices. Summing over an index is another way to say multiplying matrices together.

So $u^iv^j$ represents the tuple-multiplication defined as $(u_1, u_2, . . .)\cdot (v_1, v_2, . . .) = u_1v_1 + u_2v_2 + . . .$, a (possibly infinite) sum over indices. There's another way to define it, as $u^{ \intercal} v$, where u and v are n x 1 matrices. In the case of a rotating vector you have implicit polar coordinates, but fixing a direction (the horizontal radius line) means you can now 'find' another line perpendicular to it; you can use vector algebra to fix a vertical line and, bingo, Cartesian vector coordinates (i.e. their components).
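The index-sum, the tuple multiplication, and the matrix product $u^{\intercal}v$ can be checked side by side (made-up 3-component vectors; numpy assumed):

```python
import numpy as np

# The same dot product written two ways: as an explicit sum over an
# index, and as the matrix product u^T v with u, v as n x 1 matrices.
u = np.array([[1.0], [2.0], [3.0]])   # 3 x 1 column matrix
v = np.array([[4.0], [5.0], [6.0]])   # 3 x 1 column matrix

as_sum = sum(u[i, 0] * v[i, 0] for i in range(3))  # u_1 v_1 + u_2 v_2 + u_3 v_3
as_matrix = (u.T @ v)[0, 0]           # 1x3 times 3x1 gives a 1x1 matrix

print(as_sum, as_matrix)  # 32.0 32.0
```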

Last edited: Dec 22, 2017
11. ### arfa brane (call me arf), Valued Senior Member
So you could say the inner product of the rotating vector, after fixing a direction (which is to say, choosing the horizontal radius in the diagram), gives you a Cartesian pair, or basis pair, of position vectors on the circle and you can 'store' this information in something.

So you introduce, to a frame of reference whose origin is the centre of a locus of points given by the equation $r^2 = x^2 + y^2$, a second Cartesian frame of reference based only on the properties of abstract vectors (and include the concept of a rotating point).

Note there are two variables on the right we can take in pairs (x,y), the definition of a position vector.

Last edited: Dec 22, 2017
12. ### arfa brane (call me arf), Valued Senior Member
Of course, there are an infinite number of ways to "choose" a pair of orthogonal vectors, but doing so means we can in the abstract "shrink" everything in the basis to the size of unit vectors, since two parallel vectors in a Euclidean domain are the same vector up to a scale factor.

So we just divide "the length" of r by a suitable number and call r a unit vector; the circle is then a unit circle, abstractly speaking. But wait, that does something to the norm: it means $r^2 = r$ when r = 1. Have we made a mistake already?
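A quick numeric check of the rescaling (made-up vector; numpy assumed):

```python
import numpy as np

# Rescaling a position vector to unit length: divide by its norm.
# On the unit circle the squared norm equals the norm (1^2 = 1),
# which is the observation above, not a mistake.
r = np.array([3.0, 4.0])
r_hat = r / np.linalg.norm(r)     # unit vector in the same direction

print(np.linalg.norm(r_hat))      # ~1.0
print(np.dot(r_hat, r_hat))       # ~1.0, norm squared equals norm here
```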

13. ### Confused2, Registered Senior Member
Ace. I'll stare at that till either I get the notation or the notation gets me. Many thanks.

14. ### arfa brane (call me arf), Valued Senior Member
--http://www.mathpages.com/rr/s5-02/5-02.htm

15. ### arfa brane (call me arf), Valued Senior Member
Note the axes in the above vector diagram aren't perpendicular; they must have (even if you don't "take it") an inner product.
In two dimensions this is just called the dot product, and each vector only needs two components.

These, from the diagram, are the pairs $(p^1, p^2), (P_1, P_2)$, the first has contravariant components, the second has covariant components, in the frame of reference, aka system of coordinates.

It's a linear system because vectors are linear objects (they are defined that way), and the coordinate lines are straight: no curvature, or concept thereof.

Last edited: Dec 22, 2017
16. ### arfa brane (call me arf), Valued Senior Member
Consider the apparently different notions of a point (or pointlike object) moving along a circular path, and every point of a circle moving the same way.

In both thought experiments, each point has a unique position vector (r, θ). If we suppose or choose that r = 1, then the (x, y) Cartesian form means $x^2 + y^2 = 1$. Transforming from one system of coordinates to the other means having a linear map from one to the other which is bijective; since there are two orthogonal components in each system, the map needs to be bilinear. Once we sort that out, we can give our unit radius a real physical radius, and measure it.
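A minimal sketch of the polar-to-Cartesian transformation and its inverse (the angle is an arbitrary example):

```python
import math

# Converting the polar form (r, theta) of a position vector to
# Cartesian (x, y) and back; with r = 1 the point lies on the
# unit circle, so x^2 + y^2 = 1.
r, theta = 1.0, math.pi / 6

x = r * math.cos(theta)
y = r * math.sin(theta)

print(x**2 + y**2)        # ~1.0, the unit-circle relation
print(math.hypot(x, y))   # recovers r
print(math.atan2(y, x))   # recovers theta
```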

So when all the pointlike individual (but connected) parts of a ring rotate, the total moment of inertia is a sum over the contributions of each part. Calculus lets us define a small volume element dV, into which we can 'pour' some mass dm. We want the density dm/dV of the entire ring to be the same everywhere in the total volume, so we can sum over all the volumes. All the mass is (approximately) at the same radius, so that radius is the radius of gyration.
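That summation can be sketched numerically; assuming made-up values for the ring's mass and radius, the sum over equal mass elements reproduces the closed form $I = MR^2$:

```python
# Sum the contributions m_i * r_i^2 of N equal point masses spread
# around a ring of radius R; since every mass element sits at the
# same radius, the sum reproduces the closed form I = M * R^2.
M, R, N = 2.0, 0.5, 1000   # made-up mass, radius, element count
dm = M / N                 # equal mass elements, constant density

I_sum = sum(dm * R**2 for _ in range(N))
I_closed = M * R**2

print(I_sum, I_closed)  # both ~0.5
```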

Given the polar form (r,θ) of a vector you can easily say it has magnitude r and direction θ. But the Cartesian form seems to have neither. Aha, you mean the change of basis maps a direction θ to the concept of vector addition somehow?

17. ### arfa brane (call me arf), Valued Senior Member

Compare this diagram with the previous one. Consider that it looks like a pair of orthogonal Cartesian frames rotated at their common point, except one side of the rotated frame has the axes switched around; with this mathematical trick, contravariant projections (dashed lines) and covariant projections (more dashed lines) are all parallel to some axis and perpendicular to a second axis.

So imagine that you can rotate $X^1$ downwards, and also rotate $\Xi^1$ upwards, taking the components with each coordinate line. What does this do?

18. ### arfa brane (call me arf), Valued Senior Member
The moment of inertia of a unit, or pointlike, mass m at a radius r is just $mr^2$, and it's independent of the motion of the mass.

That is, it's given by geometric relations only, such that you can sum over all the 'unit' masses and show that the sum corresponds to integrating differential units dm of mass over volume elements dV, given that you know how to calculate this volume geometrically, i.e. that you have information about the distribution of mass in a total volume V.

This is fairly straightforward to show: you have two well-defined operations, summation and integration. If the density is the same for each unit $m_i$ at each position $r_i$, then $dm/dV$ is also constant.

Last edited: Dec 23, 2017
19. ### arfa brane (call me arf), Valued Senior Member

The first diagram is contained in the second, so the first is a subset of a system (a composition of simpler systems) of coordinates. It's an illustration of a relation between parallel and perpendicular lines, but it does have a notion of rotatedness in it. You could then construct a line parallel to $\Xi^1$ and call it $X^{*1}$, and get the same projections, thus 'recovering' a rectangular system.
From mathpages where I pinched the diagrams:
So the first diagram, when it's "completed" properly (which essentially means rotating two rectangular frames and then exchanging a pair of the axes), represents a vector space and its dual space (since every vector space has a dual). The first diagram is then missing a few things from either the vector space or from its dual (but which things?).

I have to correct a typo I've noticed.

This should be: "$u^jv^j$ represents the tuple-multiplication . . ."
$u^iv^j$ represents $u_1v_1 + u_1v_2 + ... + u_2v_1 + u_2v_2 + ...$

Last edited: Dec 23, 2017
20. ### arfa brane (call me arf), Valued Senior Member
So from mathpages, a pair of diagrams that say you can construct a vector space and its dual space by changing the symmetry of a pair of Cartesian coordinate systems which are rotated by an angle θ.
You exchange (rotate?) a pair of the axes of the latter, either blithely or by defining some function that can do it mathematically; the thing here is, you are free to choose any coordinate system you think might be useful. Now you have symmetric relations between covariant (perpendicular projection) and contravariant (parallel projection) components in the construction, so what got transformed?

Schuller does say somewhere that "all of this is constructible". So mathpages is showing us such a construction, I assume.

21. ### arfa brane (call me arf), Valued Senior Member
Back to the math of tensors: we have a geometry (diagram 1) which says it's a vector basis in non-rectangular coordinates $X_1, X_2$. The indices are just numbers, notice.
The geometry means there are two distinguished sets of components: one set is perpendicular, and the other parallel, to a chosen axis.

But notice that if in the diagram $X_n$ were rectangular (meaning the inner product $X_1 \cdot X_2$ is zero), then there would be only one set of components; each component would be perpendicular to one axis and parallel to the other, so it would be both covariant and contravariant.

If this can be defined algebraically (say, as a sum over components), you can distinguish between a basis which is rectangular and one which isn't (because the latter's components are partitioned into two sets, one with upper "parallel" indices, the other with lower "perpendicular" indices).
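A small numerical sketch of that partition (the basis and vector are made-up; numpy assumed): in a non-rectangular basis, the parallel-projection (contravariant) components and the perpendicular-projection (covariant) components differ, and the metric $g_{ij} = e_i \cdot e_j$ connects them. With an orthonormal basis the two sets would coincide, as argued above.

```python
import numpy as np

# A non-rectangular basis in R^2. Contravariant components v^i are the
# expansion coefficients (parallel projection onto the axes); covariant
# components v_i are dot products with the basis vectors (perpendicular
# projection). The metric g_ij lowers the index.
E = np.column_stack(([1.0, 0.0], [1.0, 1.0]))  # basis vectors e1, e2, not orthogonal
v = np.array([2.0, 1.0])                       # a vector in standard coordinates

contra = np.linalg.solve(E, v)   # v^i: expansion coefficients
co = E.T @ v                     # v_i: dot products with each e_i
g = E.T @ E                      # metric g_ij = e_i . e_j

print(contra)      # [1. 1.]
print(co)          # [2. 3.]
print(g @ contra)  # lowering the index: g_ij v^j equals v_i
```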

In the second diagram all the coordinate lines have upper indices, I don't think that's a mistake.

Last edited: Dec 23, 2017
22. ### arfa brane (call me arf), Valued Senior Member
What struck me about the radius of gyration of a solid body being "where all the mass can be concentrated" without changing the moment of inertia was that you can take a unit of mass and distribute it several ways. Firstly, it has a radius of gyration equal to the radius of a single "minimally distributed" pointlike mass. Then, by changing (i.e. lowering) the density but keeping it constant over the body, you could re-distribute the point mass over a larger set, like a ring, redistribute the ring over a cylindrical or spherical shell, etc.

That might be the limit; the sphere is also "all the mass concentrated" at a constant distance from the common centre. Going from the pointlike mass to a ring is interesting: can you maximise the mass density in a ring so it gets close to the density of the unit pointlike mass? That means you try to approach the initial pointlike density by increasing it, i.e. you compress the ring by reducing its cross section.

The moment of inertia and radius of gyration let you play around with densities; you don't need to worry about how you can scale a density, you just assume it's possible.
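As a numeric sketch (made-up mass and radius; the formulas are the standard results for a thin ring, $I = MR^2$, and a thin spherical shell, $I = \frac{2}{3}MR^2$, about an axis through the centre):

```python
import math

# Radius of gyration k = sqrt(I / M): the radius at which all the mass
# could be concentrated without changing the moment of inertia.
M, R = 1.0, 2.0   # made-up mass and radius

k_ring = math.sqrt((M * R**2) / M)                  # thin ring: k = R
k_shell = math.sqrt(((2.0 / 3.0) * M * R**2) / M)   # thin shell: k = R * sqrt(2/3)

print(k_ring)   # 2.0
print(k_shell)  # ~1.633, the shell's mass sits closer to the axis on average
```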

Last edited: Dec 24, 2017
23. ### arfa brane (call me arf), Valued Senior Member
Ok. I can say I'm pretty confident the moment of inertia is a (0,2) tensor. Wikipedia says: "the component tensors are each constructed from two basis vectors".
In two dimensions (as in the rotating point diagram), basis vectors have two components, or, each index runs from 1 to 2, so there are four component tensors:

$e_1 \otimes e_1,\; e_1 \otimes e_2,\; e_2 \otimes e_1,\; e_2 \otimes e_2\;$. There is also an $I_{ij}$ such that each component of the inertia tensor (for motion in a plane) is described by $I_{ij}e_i \otimes e_j$.

Just a little more explicitly, in case you missed something: you have a set of elements, ordered by indices, which you express more succinctly as the elements of a matrix (or table) ordered by row and column indices, so here we have:

$\{ I_{11} e_1 \otimes e_1,\; I_{12} e_1 \otimes e_2,\; I_{21} e_2 \otimes e_1,\; I_{22} e_2 \otimes e_2\; \} = I_{ij}e_i \otimes e_j$.

The moment of inertia (a scalar or real quantity, but not just a number) depends non-dynamically on geometry, but when the body (a ring, say) rotates it has angular momentum $L$, and $L = I \omega$; there are two vectors involved. You need to be careful here because the two vectors are in different subspaces (you can't add momentum to velocity; they have different units).
But you can always take the cross product of any two vectors in the plane; the cross product "is" another vector (though it is also a (p,q) tensor), and it has to be perpendicular, or orthogonal, to the two vectors it's a product of. The $\omega$ lies along the cross product $r \times v$, so it's already "out of plane", and $L$ points in the same direction.
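A sketch of those components in code (the point masses, positions, and angular velocity are made-up values; the formula $I_{ij} = \sum_k m_k\,(|r_k|^2\delta_{ij} - x_i x_j)$ is the standard point-mass inertia tensor):

```python
import numpy as np

# Build the inertia tensor I_ij = sum_k m_k (|r_k|^2 delta_ij - x_i x_j)
# for point masses, then apply it to an angular velocity: L = I omega.
masses = [1.0, 1.0]
positions = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])   # two unit masses in the plane

I = np.zeros((3, 3))
for m, x in zip(masses, positions):
    I += m * (np.dot(x, x) * np.eye(3) - np.outer(x, x))

omega = np.array([0.0, 0.0, 1.0])   # rotation about the z axis, out of plane
L = I @ omega                       # angular momentum, also out of plane

print(I)
print(L)   # [0. 0. 2.]
```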

However, in two dimensions you can still use cross products if you identify the plane (i.e. which side of the plane) with a circular arrow, i.e. introduce rotation and a direction normal to the plane. This identification has a complementary rotation in the opposite direction, which identifies the other side of the plane and points down. It's where the axis of rotation of a circle lives, too.

What I mean is that you can ignore the philosophical objection to a vector having a direction which doesn't lie in the plane, by notating direction in the plane itself and letting the notation (a circular arrow, not a straight arrow) identify a fictitious "up" direction.
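One way to realise that notational trick in code: keep only the scalar z-component of the cross product, whose sign encodes the circular arrow (counterclockwise positive). The vectors here are arbitrary examples:

```python
import numpy as np

# In two dimensions the "cross product" of r and v can be kept as a
# single scalar, the z-component of the 3D cross product; its sign
# plays the role of the circular arrow, standing in for the fictitious
# out-of-plane direction.
def cross_2d(a, b):
    return a[0] * b[1] - a[1] * b[0]

r = np.array([1.0, 0.0])
v = np.array([0.0, 2.0])     # counterclockwise motion

print(cross_2d(r, v))        # 2.0, positive: "up" out of the page
print(cross_2d(v, r))        # -2.0, the opposite rotation sense
```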

Last edited: Dec 28, 2017