Schur-Weyl Duality and Keeping Track of the Geometry
Misc. Notes — 15 September 2025
1. Writing out in a different way
Earlier today, we wrote out the expression
The explanation regarding vector spaces is great and gets the main point across. Let’s see if we can put this into a crazy rigorous mathematical formulation so that we can identify the structure more easily (there may be something simple we can exploit). First, let’s define the space
$$V \equiv A \otimes B, \qquad A \cong B \cong \mathbb{C}^2,$$
so that the space of interest is $V \otimes V$. We can always decompose the tensor product of a vector space with itself into the symmetric and antisymmetric components:
$$V \otimes V \cong \text{Sym}^2 V \oplus \textstyle\bigwedge^2 V.$$
Example 1
If this statement is not obvious, let’s construct the following example: We can write the Hilbert space of a single qubit as $\mathcal{H} \cong \mathbb{C}^2$, spanned by the basis $\{|0\rangle, |1\rangle\}$, and the tensor product of $\mathcal{H}$ with itself being the Hilbert space of 2 qubits with the basis
$$\{|00\rangle,\ |01\rangle,\ |10\rangle,\ |11\rangle\}$$
spanned over $\mathbb{C}$. Then we can write the second symmetric power as
$$\text{Sym}^2 \mathcal{H} = \operatorname{span}\{\,|00\rangle,\ |01\rangle + |10\rangle,\ |11\rangle\,\}.$$
What we mean by the second symmetric power is the collection of tensors built from products $v \otimes w$ such that $v \otimes w = w \otimes v$. Then we can see here that swapping the order of the factors in any of the tensors above will always yield the same tensor.
Then the second exterior power of $\mathcal{H}$ is just spanned by the single vector
$$|01\rangle - |10\rangle.$$
This vector is the only one (up to scale) that will satisfy the expression $v \otimes w = -\,w \otimes v$, which we usually denote with a wedge: $v \wedge w$. The direct sum of these two spaces yields the whole tensor product $\mathcal{H} \otimes \mathcal{H}$; however, the obvious identification may not necessarily be an isometry. For this to be an isometry, I think we just need to take the symmetric and antisymmetric linear combinations of the elements from the original space and divide out by $\sqrt{2}$, but I have not checked this.
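As a quick numerical sanity check of the qubit example (a sketch using numpy; the basis ordering $|00\rangle, |01\rangle, |10\rangle, |11\rangle$ is my assumption), we can build the swap operator, confirm that its symmetric and antisymmetric eigenspaces have dimensions 3 and 1, and confirm that the $1/\sqrt{2}$ normalization does make the change of basis an isometry:

```python
import numpy as np

# SWAP operator on C^2 ⊗ C^2 in the (assumed) basis {|00>, |01>, |10>, |11>}.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

# Projectors onto the +1 (symmetric) and -1 (antisymmetric) eigenspaces.
P_sym = (np.eye(4) + SWAP) / 2
P_anti = (np.eye(4) - SWAP) / 2

# Their traces give the eigenspace dimensions: dim Sym^2 = 3, dim Wedge^2 = 1.
print(int(round(np.trace(P_sym))), int(round(np.trace(P_anti))))  # 3 1

# Dividing the symmetric/antisymmetric combinations by sqrt(2) makes the
# combined basis orthonormal, so the identification is an isometry.
s = 1 / np.sqrt(2)
basis = np.array([[1, 0, 0, 0],    # |00>
                  [0, s, s, 0],    # (|01> + |10>)/sqrt(2)
                  [0, 0, 0, 1],    # |11>
                  [0, s, -s, 0]])  # (|01> - |10>)/sqrt(2)
print(np.allclose(basis @ basis.T, np.eye(4)))  # True
```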
Now, we are going to further decompose the following:
$$\text{Sym}^2(A \otimes B) \cong \left(\text{Sym}^2 A \otimes \text{Sym}^2 B\right) \oplus \left(\textstyle\bigwedge^2 A \otimes \bigwedge^2 B\right).$$
Ok, woah woah woah. This decomposition is super unobvious, and even after writing out and understanding a whole proof I am still wrapping my head around it. That said, I will present two proofs of this construction: the first is more from first principles, and the second uses the Schur-Weyl duality that we haven’t even gotten to yet.
Proof 1
Let’s select $A$ and $B$ to be two vector spaces over some field $k$ whose characteristic is not 2. Then when we examine the space
$$(A \otimes B) \otimes (A \otimes B),$$
there exists a canonical “shuffle” isomorphism of the form
$$\alpha : (A \otimes B) \otimes (A \otimes B) \xrightarrow{\ \sim\ } (A \otimes A) \otimes (B \otimes B), \qquad (a_1 \otimes b_1) \otimes (a_2 \otimes b_2) \mapsto (a_1 \otimes a_2) \otimes (b_1 \otimes b_2).$$
- We call the shuffle isomorphism “canonical” because it is defined without making any choice of basis, and it is an isomorphism of $GL(A) \times GL(B)$-modules.
Now, let’s define the (anti)symmetrizing functions as
$$P_\pm = \tfrac{1}{2}\left(\mathrm{id} \pm \tau\right),$$
where $\tau$ is the tensor swapping function that takes $v \otimes w \mapsto w \otimes v$. Then we can identify the images of these functions as
$$\operatorname{im}(P_+) = \text{Sym}^2 V, \qquad \operatorname{im}(P_-) = \textstyle\bigwedge^2 V.$$
This part is not as obvious at first, but, under $\alpha$, the flip $\tau_{A \otimes B}$ is given by $\tau_A \otimes \tau_B$.
Instead of ‘proving’ this to you, let’s just show the following diagram (which is a perfectly fine proof):

(A⊗B)⊗(A⊗B) ----α----> (A⊗A)⊗(B⊗B)
     |                       |
 τ_{A⊗B}                τ_A ⊗ τ_B
     |                       |
     v                       v
(A⊗B)⊗(A⊗B) ----α----> (A⊗A)⊗(B⊗B)

Explicitly:

(a₁⊗b₁)⊗(a₂⊗b₂) -----> (a₁⊗a₂)⊗(b₁⊗b₂)
       |                       |
       v                       v
(a₂⊗b₂)⊗(a₁⊗b₁) -----> (a₂⊗a₁)⊗(b₂⊗b₁)

which clearly commutes, implying that $\tau_A \otimes \tau_B = \alpha \circ \tau_{A \otimes B} \circ \alpha^{-1}$, constructing the expression we were looking for. Since the diagram above commutes, we can therefore see that
$$\alpha\left(\text{Sym}^2(A \otimes B)\right) = \left((A \otimes A) \otimes (B \otimes B)\right)^{+},$$
where the superscript $+$ on the expression above implies that we are taking the $+1$ eigenspace of $\tau_A \otimes \tau_B$.
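The commuting square can also be checked numerically. Below is a small numpy sketch in which $A = \mathbb{C}^2$ and $B = \mathbb{C}^3$ (arbitrary choices on my part) and each of $\alpha$, $\tau_{A\otimes B}$, and $\tau_A \otimes \tau_B$ acts by permuting tensor indices:

```python
import numpy as np

rng = np.random.default_rng(0)
# A = C^2, B = C^3; a tensor in (A⊗B)⊗(A⊗B) carries indices (a1, b1, a2, b2).
T = rng.normal(size=(2, 3, 2, 3))

# Shuffle isomorphism alpha: (a1⊗b1)⊗(a2⊗b2) -> (a1⊗a2)⊗(b1⊗b2).
alpha = lambda X: X.transpose(0, 2, 1, 3)
# Flip tau_{A⊗B}: swaps the two (a, b) pairs.
tau_AB = lambda X: X.transpose(2, 3, 0, 1)
# tau_A ⊗ tau_B on (A⊗A)⊗(B⊗B): swaps a1 <-> a2 and b1 <-> b2.
tauA_tauB = lambda X: X.transpose(1, 0, 3, 2)

# The commutative diagram: alpha ∘ tau_{A⊗B} = (tau_A ⊗ tau_B) ∘ alpha.
print(np.allclose(alpha(tau_AB(T)), tauA_tauB(alpha(T))))  # True
```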
From here, we can decompose $A \otimes A$ and $B \otimes B$ into the eigenspaces of their flips:
$$A \otimes A = \text{Sym}^2 A \oplus \textstyle\bigwedge^2 A, \qquad B \otimes B = \text{Sym}^2 B \oplus \textstyle\bigwedge^2 B.$$
But all we have done so far is show that the tensor product of a vector space with itself can be decomposed into a sum of symmetric and antisymmetric subspaces, and we already knew that! The magic here is in examining the space $(A \otimes A) \otimes (B \otimes B)$ in the $+1$ eigenspace of the operator $\tau_A \otimes \tau_B$. We can immediately break this eigenspace up into the $(+,+)$ and $(-,-)$ parity sectors:
$$\left((A \otimes A) \otimes (B \otimes B)\right)^{+} = \left(\text{Sym}^2 A \otimes \text{Sym}^2 B\right) \oplus \left(\textstyle\bigwedge^2 A \otimes \bigwedge^2 B\right).$$
But from a previous observation we made with the commutative diagram, we know that the left-hand side is just the image under $\alpha$ of the second symmetric power of $A \otimes B$, concluding the proof.
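A cheap consistency check on the decomposition we just proved is that the dimensions match for arbitrary $\dim A = m$ and $\dim B = n$, using $\dim \text{Sym}^2 \mathbb{C}^n = \binom{n+1}{2}$ and $\dim \bigwedge^2 \mathbb{C}^n = \binom{n}{2}$:

```python
from math import comb

def dim_sym2(n):
    """dim Sym^2 C^n = n(n+1)/2."""
    return comb(n + 1, 2)

def dim_wedge2(n):
    """dim Wedge^2 C^n = n(n-1)/2."""
    return comb(n, 2)

# Sym^2(A⊗B) ≅ (Sym^2 A ⊗ Sym^2 B) ⊕ (Wedge^2 A ⊗ Wedge^2 B), so the
# dimensions on both sides must agree for every m, n.
for m in range(1, 8):
    for n in range(1, 8):
        lhs = dim_sym2(m * n)
        rhs = dim_sym2(m) * dim_sym2(n) + dim_wedge2(m) * dim_wedge2(n)
        assert lhs == rhs
print("symmetric-power dimension check passed")
```

For the qubit case $m = n = 2$ this reads $10 = 3 \cdot 3 + 1 \cdot 1$.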
Proof 2
This one is quite a bit simpler but uses facts that we need to take for granted right now. A special case of the Schur-Weyl duality leads us to the Cauchy identity given by
$$\text{Sym}^d(A \otimes B) \cong \bigoplus_{\lambda \,\vdash\, d} S_\lambda A \otimes S_\lambda B,$$
which is stated in Lecture 6 of Fulton and Harris’s ‘Representation Theory: A First Course’ under Section 6.1, titled ‘Schur Functors and Their Characters’, where the authors immediately identify $S_\lambda$ as being the Schur functor or Weyl module corresponding to $\lambda$, where $\lambda \vdash d$ means that $\lambda$ is a partition of $d$. Then for $d = 2$, we only have the partitions $(2)$ and $(1,1)$, and we can identify from equations (6.1) and (6.2) in the Fulton and Harris text the expressions
$$S_{(d)}V = \text{Sym}^d V, \tag{6.1}$$
$$S_{(1,\ldots,1)}V = \textstyle\bigwedge^d V, \tag{6.2}$$
so plugging the partitions $(2)$ and $(1,1)$ into the Cauchy identity yields exactly $\text{Sym}^2(A \otimes B) \cong \left(\text{Sym}^2 A \otimes \text{Sym}^2 B\right) \oplus \left(\bigwedge^2 A \otimes \bigwedge^2 B\right)$, which concludes the proof.
Ok, whatever, now that this is all out of the way we can just keep chugging forward with the calculation. To move forward, we need to motivate the following:
$$\dim \text{Sym}^2 \mathbb{C}^2 = 3, \qquad \dim \textstyle\bigwedge^2 \mathbb{C}^2 = 1.$$
This should be fairly clear in terms of the representation theory. The second symmetric power of a two-dimensional space has three linearly independent symmetric tensors (equivalently, the degree-2 monomials $x^2$, $xy$, $y^2$), while the second exterior power of a two-dimensional space is spanned only by the single antisymmetric wedge product of the two linearly independent vectors in $\mathbb{C}^2$. Then we can write the tensor products of these objects in the shorthand notation
$$\text{Sym}^2 \mathbb{C}^2 \cong \mathbf{3}, \qquad \textstyle\bigwedge^2 \mathbb{C}^2 \cong \mathbf{1},$$
and therefore
$$\text{Sym}^2(A \otimes B) \cong (\mathbf{3} \otimes \mathbf{3}) \oplus (\mathbf{1} \otimes \mathbf{1}).$$
Now that all of this is out of the way, we need to decompose the second exterior power of $A \otimes B$. To do this, we can write
$$\textstyle\bigwedge^2(A \otimes B) \cong \left(\text{Sym}^2 A \otimes \bigwedge^2 B\right) \oplus \left(\bigwedge^2 A \otimes \text{Sym}^2 B\right).$$
Proof of the decomposition above
All we need to do is recall from the first proof of the symmetric power decomposition that we also have the $-1$ eigenspace of the operator $\tau_A \otimes \tau_B$, which realizes the second exterior power of $A \otimes B$; this eigenspace consists of the mixed $(+,-)$ and $(-,+)$ parity sectors. Then we just use the canonical shuffle isomorphism in a similar way and build another commutative diagram to conclude that the above decomposition works. I’ll let someone else fill in the details, since this follows the previous proof almost exactly.
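The same dimension bookkeeping as before confirms the exterior-power decomposition, using $\dim \bigwedge^2 \mathbb{C}^{mn} = \binom{mn}{2}$:

```python
from math import comb

# Wedge^2(A⊗B) ≅ (Sym^2 A ⊗ Wedge^2 B) ⊕ (Wedge^2 A ⊗ Sym^2 B), where
# dim Sym^2 C^n = C(n+1, 2) and dim Wedge^2 C^n = C(n, 2).
for m in range(1, 8):
    for n in range(1, 8):
        lhs = comb(m * n, 2)  # dim Wedge^2(A ⊗ B)
        rhs = comb(m + 1, 2) * comb(n, 2) + comb(m, 2) * comb(n + 1, 2)
        assert lhs == rhs
print("exterior-power dimension check passed")
```

For $m = n = 2$ this reads $6 = 3 \cdot 1 + 1 \cdot 3$.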
Once we examine the dimensions of the vector spaces in question, we find that we can write this second exterior power of $A \otimes B$ as
$$\textstyle\bigwedge^2(A \otimes B) \cong (\mathbf{3} \otimes \mathbf{1}) \oplus (\mathbf{1} \otimes \mathbf{3}).$$
Therefore, we can combine the last two results to find
$$(A \otimes B) \otimes (A \otimes B) \cong (\mathbf{3} \otimes \mathbf{3}) \oplus (\mathbf{1} \otimes \mathbf{1}) \oplus (\mathbf{3} \otimes \mathbf{1}) \oplus (\mathbf{1} \otimes \mathbf{3}).$$
Now, the last step is to figure out how we can resolve the sum of irreps. The way that we can recover the result we began with is to restrict to the diagonal subgroup, written as
$$SU(2)_{\text{diag}} = \{(g, g)\} \subset SU(2)_A \times SU(2)_B.$$
- $\mathbf{3} \otimes \mathbf{3}$: the diagonal $SU(2)$ acts simultaneously on both factors, so we have to use the Clebsch-Gordan decomposition for spin-1 $\otimes$ spin-1 to obtain $\mathbf{5} \oplus \mathbf{3} \oplus \mathbf{1}$.
- $\mathbf{1} \otimes \mathbf{1}$: the diagonal acts trivially, just giving us the singlet $\mathbf{1}$.
- $\mathbf{3} \otimes \mathbf{1}$: the diagonal acts as spin-1 on the triplet factor and trivially on the singlet, so the result is just $\mathbf{3}$.
- $\mathbf{1} \otimes \mathbf{3}$: the diagonal acts the same exact way, producing another $\mathbf{3}$.
Combining everything here, we get back
$$\mathbf{5} \oplus \mathbf{3} \oplus \mathbf{3} \oplus \mathbf{3} \oplus \mathbf{1} \oplus \mathbf{1},$$
whose dimensions sum to $16 = \dim\left(\mathbb{C}^2\right)^{\otimes 4}$, as they should.
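The bookkeeping of the diagonal restriction can be double-checked with a short script (a sketch; `cg_dims` is a hypothetical helper implementing the standard $SU(2)$ Clebsch-Gordan rule $|j_1 - j_2| \le j \le j_1 + j_2$ on irrep dimensions $d = 2j + 1$):

```python
def cg_dims(d1, d2):
    """Dimensions of the SU(2) irreps appearing in the tensor product of
    irreps of dimensions d1 and d2 (hypothetical helper; d = 2j + 1)."""
    j1, j2 = (d1 - 1) / 2, (d2 - 1) / 2
    jmin, jmax = abs(j1 - j2), j1 + j2
    return [int(2 * (jmin + k) + 1) for k in range(int(jmax - jmin) + 1)]

pieces = []
pieces += cg_dims(3, 3)  # 3 ⊗ 3 -> 5 ⊕ 3 ⊕ 1
pieces += cg_dims(1, 1)  # 1 ⊗ 1 -> 1
pieces += cg_dims(3, 1)  # 3 ⊗ 1 -> 3
pieces += cg_dims(1, 3)  # 1 ⊗ 3 -> 3

print(sorted(pieces, reverse=True))  # [5, 3, 3, 3, 1, 1]
print(sum(pieces))                   # 16 = dim (C^2)^(⊗4)
```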
Remark 1
Group actions make this make more sense, in my opinion. We can write