Douglas Blumeyer and Dave Keenan's Intro to exterior algebra for RTT
This article is meant as an introduction to Exterior Algebra (EA)^{[1]} as it was developed for use in regular temperament theory primarily by Gene Ward Smith. It also includes some concepts from Multilinear Algebra (MLA), and represents structures using an extended version of bra-ket notation.
This article was prepared by Dave Keenan and Douglas Blumeyer during a period of deep study of Gene's work on RTT, and as such is colored with their perspective on these concepts. Some of the terminology here is the terminology they tend to use in their interpretation of RTT, and differs from some of the other mathematical theory pages. Mike Battaglia also took part in editing some of this; they offered to let him change some of the terminology to match the rest of the mathematical pages (e.g. "multival" instead of "multimap") but he ultimately decided not to, as although the page derives from their somewhat unique interpretation and philosophy, it remains an excellent read with some good information for those learning about the exterior algebra.
Introduction
From vectors to multivectors
If you have browsed around the regular temperaments part of this wiki before, it's likely that you've come across the wedgie, which is a special way to represent a regular temperament.
The mathematical structure used to represent a wedgie is called a multicovector, and this structure is indeed related to the mathematical structure called the covector (also called a linear form). Like how a (co)vector is an element in linear algebra, a multi(co)vector is an element in exterior algebra. You are likely familiar with covectors already due to their use for representing maps in RTT, such as the map for 5-limit 12-ET, ⟨12 19 28]. A good introductory example of a multicovector would be the wedgie for meantone: ⟨⟨1 4 4]].
Covectors and multicovectors both:
- represent information written horizontally, as a row.
- use left angle brackets on the left and square brackets on the right, ⟨...], to enclose their contents.
- exist on the left half of tuning duality, on the side built up out of ETs which concerns rank (not the side built up out of commas, which uses vectors/columns, and concerns nullity).
- have a dual structure with the same name minus the "co" part.
The main difference between the two is somewhat superficial, and has to do with the “multi” part of the name. A plain covector comes in only one type, but a multicovector can be a bicovector, tricovector, quadricovector, etc. Yes, a multicovector can even be a plain covector, which can be called a monocovector when there might otherwise be ambiguity. A multicovector can even be a nilocovector^{[2]} (a scalar).
Depending on its numeric prefix, a multicovector will be written with a different count of brackets on either side^{[3]}. For example, a bicovector uses two of each: ⟨⟨...]]. A tricovector uses three of each: ⟨⟨⟨...]]]. And of course a (mono)covector is written with one of each, like ⟨...]. And a nilocovector, with zero brackets, is indistinguishable from a simple scalar. As you can see, our meantone example is a bicovector, because it has two of each bracket.
Just as covectors have dual structures called vectors, multicovectors have duals which are called multivectors. When we need to refer to multivectors and multicovectors in the general case, we call them "varianced multivectors"^{[4]}. Which brings us to our next section: on variance.
Variance
Multivectors and multicovectors cannot be arbitrarily mixed; it is of critical importance to keep track of which type one is. To convert between a multivector and its dual multicovector, we use a "dual" function. This function is discussed later in this article.
Variance is a word which captures this notion of duality, because multicovectors are covariant, while multivectors are contravariant. Variance has to do with how vectors vary with respect to the system's units: covectors vary in the same direction as the units, hence the co- prefix meaning "with", while (contra)vectors vary in the opposite direction to the units, hence the contra- prefix meaning "against".^{[5]}^{[6]}
In other words, let's say we divide octaves into 1200 parts to get cents, so that they're now 1/1200th the size they used to be. If we wanted our system to represent the same information, then, we have a couple options for recalibrating with respect to the new units.
- We could choose to change our vectors. If we choose this, we must multiply the numbers in each of our vectors by 1200.
- We could choose to change our covectors. If we choose this, we must divide the numbers in each of our covectors by 1200.
So we say that the vectors vary in the opposite direction to the units, but the covectors vary in the same direction as the units. In other words the vectors are "contravariant" with the units and the covectors are "covariant" with the units.^{[7]}
As compressed antisymmetric tensors
You can think of the varianced multivectors used in RTT as compressed forms of antisymmetric tensors, a concept that comes to us from MLA. Let's define both of those words.
A tensor is like a multi-dimensional array, or you could think of it as a generalization of square matrices to other dimensions besides 2, i.e. cubes, hypercubes, etc.
And as for antisymmetric, we should first say that a symmetric tensor is one that remains unchanged if you swap any two of its dimensions^{[8]}, for example transposing a 2D matrix so that its rows become columns and vice versa. So an antisymmetric tensor is one that becomes its own negation, i.e. the signs of all the numbers change, if you swap any two of its dimensions.
To visualize how, let's take that example of meantone again. For these purposes, ignore variance; we'll just use the terms multivector and vector here. So, while in RTT materials the wedge product (from where "wedgie" gets its name) is described as returning a multivector, in EA (which is where the wedge product comes to us from), when you take the wedge product of two vectors, the result looks like a matrix but is really a bivector represented in tensor form, as a tensor of order 2:
[math] \left[ \begin{array} {rrr} 0 & 1 & 4 \\ -1 & 0 & 4 \\ -4 & -4 & 0 \\ \end{array} \right] [/math]
But as you can see, this isn't a particularly efficient representation of this information. It exhibits a lot of redundancy. Everything in the bottom left triangular half is a mirror of what's in the top right triangular half, just with the signs changed. This is the aforementioned antisymmetry! So, in RTT, we compress this as the multivector ⟨⟨1 4 4]] instead, leveraging the antisymmetry.
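The wedge-then-compress process described above can be sketched in Python (an illustrative sketch, not the library's Wolfram Language code), using the two rows of the meantone mapping, ⟨1 0 -4] and ⟨0 1 4]:

```python
from itertools import combinations

# Build the order-2 antisymmetric tensor from the two meantone mapping rows.
a = [1, 0, -4]   # ⟨1 0 -4]
b = [0, 1, 4]    # ⟨0 1 4]

# Wedge product as an antisymmetric tensor: T[i][j] = a[i]*b[j] - a[j]*b[i]
t = [[a[i] * b[j] - a[j] * b[i] for j in range(3)] for i in range(3)]
print(t)  # [[0, 1, 4], [-1, 0, 4], [-4, -4, 0]]

# Antisymmetry: swapping the two dimensions negates every entry.
assert all(t[i][j] == -t[j][i] for i in range(3) for j in range(3))

# Compress: keep only the upper-triangular entries, in lexicographic order.
multimap = [t[i][j] for i, j in combinations(range(3), 2)]
print(multimap)  # [1, 4, 4] — the entries of ⟨⟨1 4 4]]
```

Note that the diagonal of the tensor is forced to zero by antisymmetry, which is why none of its entries survive into the compressed form.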
For a higher dimensional temperament, the tensor representation would be a higher-dimensional square, such as a cube, hypercube, 5-cube, etc. and the antisymmetry would lead to one tetrahedral, 4-simplex, 5-simplex, etc. half being mirrored but with signs changed. So we compress it into a multivector with even more brackets. But no matter how many brackets, the multivectors in RTT will always be 1D strings of numerals, sorted in lexicographic order by their indices in the higher-dimensional square.
For more information, see: https://en.wikipedia.org/wiki/Exterior_algebra#Alternating_tensor_algebra
Multimaps & multicommas
In everyday RTT, we don't usually need to use the words for the mathematical structures such as "covector" or "vector" which underpin our work. We usually simply refer to the musical objects that these mathematical objects represent, such as "maps", or "commas". Similarly, in EA, we can generally stick to using words closer to the domain application, so henceforth we will stick primarily with multimap, the object typically represented by a multicovector, and multicomma, the object typically represented by a multivector. This is an extrapolation from the terms "map" and "comma", which could be written as "monomap" and "monocomma" if necessary to distinguish them from other multimaps and multicommas, but generally can be shortened to "map" and "comma". A wedgie is a multimap. For example, a wedgie for a rank-2 temperament would be a bimap or 2-map.
When speaking of multimaps and multicommas in general, we still need the mathematical term "varianced multivector". But this is no different than how we use the term "varianced matrix" when speaking of mappings and comma bases in general.
In order to make these materials as accessible as possible, we will be doing what is possible to lean away from jargon and instead toward generic, previously established mathematical and/or musical concepts, especially descriptive ones. That is why we avoid the terms "monzo", "val", "breed", and "wedgie". When established mathematical and/or musical concepts are unavailable, we can at least use unmistakable analogs built upon what we do have.
Wolfram Language implementation
Dave and Douglas have also collaborated on producing a code library for RTT implemented in Wolfram Language, including functions for all major operations in EA form as well: eaDual[], eaCanonicalForm[], matrixToMultivector[], and multivectorToMatrix[], as well as helpers like eaDimensionality[], eaRank[], and eaNullity[].
The library represents varianced multivectors with a custom data structure. The first entry is its list of numerals. The second entry is its variance. The third entry is its grade, which is the generic word for rank or nullity, so you can think of it as the count of the angle brackets.^{[9]}
The code is maintained and shared on GitHub here: https://github.com/cmloegcmluin/RTT
More details can be found on the README there.
Converting matrices to varianced multivectors
Basic steps for converting mapping to multimap
To begin, we’ll just list the steps. These steps work for converting a mapping to a multimap; if you're interested in converting a comma basis to a multicomma, that process is a variation on this process, and will be discussed at the end of this section. Don’t worry if it doesn’t all make sense the first time. We’ll work through several examples and go into more detail soon.
- Take each combination of [math]r[/math] primes where [math]r[/math] is the rank, sorted in lexicographic order, e.g. if we're in the 7-limit, we'd have [math](2,3,5)[/math], [math](2,3,7)[/math], [math](2,5,7)[/math], and [math](3,5,7)[/math].
- Convert each of those combinations to a square [math]r×r[/math] matrix by slicing a column for each prime out of the mapping-row-basis and putting them together.
- Take each matrix's determinant.
- Set the results inside [math]r[/math] brackets.
And you've got your multimap.
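The steps above can be sketched in Python (an illustrative sketch, not the library's actual Wolfram Language code; the function names here are hypothetical):

```python
from itertools import combinations

def det(m):
    # Laplace expansion along the first row; fine for the small matrices here.
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def multimap(mapping):
    r = len(mapping)        # rank = number of rows
    d = len(mapping[0])     # dimension = number of primes
    entries = []
    for cols in combinations(range(d), r):   # lexicographic order
        # Slice out a column for each chosen prime, forming an r×r matrix.
        square = [[row[c] for c in cols] for row in mapping]
        entries.append(det(square))
    return entries

# Meantone: [⟨1 0 -4] ⟨0 1 4]⟩ → ⟨⟨1 4 4]]
print(multimap([[1, 0, -4], [0, 1, 4]]))  # [1, 4, 4]
```

The same function handles the rank-1 and rank-3 examples worked through later in this section.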
As alluded to earlier, the conversion process from a mapping to a multimap is closely related to the wedge product. But to be clear, what we're doing here is different from the strict definition of the wedge product as you may see it elsewhere. That is because we are going straight from a matrix representation, bypassing a tensor representation, and going straight for the varianced multivector representation.
More on the wedge product here: Varianced Exterior Algebra#The wedge product
Canonical form
Now, if you want the canonical multimap, two further steps are required:
- Change the sign of every result if the first non-zero result is negative.
- Divide each of these results by their GCD.
If your mapping is in canonical form as a matrix, the resultant multimap will already be the canonical multimap.
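These two canonicalization steps can be sketched as a small Python function (hypothetical code, not the library's eaCanonicalForm[], though it plays the same role):

```python
from math import gcd
from functools import reduce

def canonical(entries):
    # 1. Flip all signs if the first non-zero entry is negative.
    leading = next((e for e in entries if e != 0), 0)
    if leading < 0:
        entries = [-e for e in entries]
    # 2. Divide out the GCD (the `or 1` guards the all-zero case).
    g = reduce(gcd, (abs(e) for e in entries)) or 1
    return [e // g for e in entries]

print(canonical([-2, 3, 1, -11]))  # [2, -3, -1, 11]
print(canonical([0, -24, -38]))    # [0, 12, 19]
print(canonical([1, 4, 4]))        # [1, 4, 4] — already canonical
```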
Examples
Rank-2
Let’s work through the steps broken out above, on our example of meantone.
We have rank [math]r[/math] = 2, so we’re looking for every combination of two primes. That’s out of the three total primes we have in the 5-limit: 2, 3, and 5. So those combinations are [math](2,3)[/math], [math](2,5)[/math], and [math](3,5)[/math]. Those are already in lexicographic order, which works just like alphabetical order, but generalized to handle numbers of any size (so that 11 comes after 2, not before).
Here's the meantone mapping-row-basis again, with some color applied which should help identify the combinations:
[math] \begin{bmatrix} \color{red}1 & \color{lime}0 & \color{blue}-4 \\ \color{red}0 & \color{lime}1 & \color{blue}4 \\ \end{bmatrix} [/math]
So now each of those combinations becomes a square matrix, made out of columns sliced from the mapping-row-basis:
[math] \begin{array}{ccc} \text{(2,3)} & \text{(2,5)} & \text{(3,5)} \\ \begin{bmatrix}\color{red}1 & \color{lime}0 \\ \color{red}0 & \color{lime}1 \end{bmatrix} & \begin{bmatrix}\color{red}1 & \color{blue}-4 \\ \color{red}0 & \color{blue}4 \end{bmatrix} & \begin{bmatrix}\color{lime}0 & \color{blue}-4 \\ \color{lime}1 & \color{blue}4 \end{bmatrix} \end{array} [/math]
Now we must take each matrix’s determinant. For 2×2 matrices, this is quite straightforward.
[math] \begin{bmatrix} a & b \\ c & d \end{bmatrix} → ad - bc [/math]
So the three determinants are
[math] (1 × 1) - (0 × 0) = 1 - 0 = 1 \\ (1 × 4) - (-4 × 0) = 4 - 0 = 4 \\ (0 × 4) - (-4 × 1) = 0 + 4 = 4 [/math]
So just set these inside a number of brackets equal to the rank, and we’ve got ⟨⟨1 4 4]].
Let's go for the canonical multimap. The first term is positive, so no sign changes are necessary. And the GCD of these terms is 1, so there's nothing to divide out. So we're already there. Which is to be expected, because we used the canonical form of the meantone mapping as input.
Rank-1 (equal)
This method even works on an equal temperament, e.g. ⟨12 19 28]. The rank is 1 so each combination of primes has only a single prime: they’re [math](2)[/math], [math](3)[/math], and [math](5)[/math]. The square matrices are therefore [math]\begin{bmatrix}12\end{bmatrix} \begin{bmatrix}19\end{bmatrix} \begin{bmatrix}28\end{bmatrix}[/math]. The determinant of a 1×1 matrix is defined as the value of its single term, so now we have 12 19 28. [math]r[/math] = 1, so we set the answer inside one layer of brackets, so our monocovector is ⟨12 19 28].
Going for the canonical multimap, we check that the leading term is positive and that the GCD is 1. Both check out. So this looks the same as what we started with, which is fine.
Rank-3
Let’s try a slightly harder example now: a rank-3 temperament, and in the 7-limit. There are four different ways to take 3 of 4 primes: [math](2,3,5)[/math], [math](2,3,7)[/math], [math](2,5,7)[/math], and [math](3,5,7)[/math].
If the mapping-row-basis is
[math] \begin{bmatrix} \color{red}1 & \color{lime}0 & \color{blue}1 & \color{magenta}4 \\ \color{red}0 & \color{lime}1 & \color{blue}1 & \color{magenta}-1 \\ \color{red}0 & \color{lime}0 & \color{blue}-2 & \color{magenta}3 \\ \end{bmatrix} [/math]
then the combinations are
[math] \begin{array}{ccc} \text{(2,3,5)} & \text{(2,3,7)} & \text{(2,5,7)} & \text{(3,5,7)} \\ \begin{bmatrix}\color{red}1 & \color{lime}0 & \color{blue}1 \\ \color{red}0 & \color{lime}1 & \color{blue}1 \\ \color{red}0 & \color{lime}0 & \color{blue}-2 \end{bmatrix} & \begin{bmatrix}\color{red}1 & \color{lime}0 & \color{magenta}4 \\ \color{red}0 & \color{lime}1 & \color{magenta}-1 \\ \color{red}0 & \color{lime}0 & \color{magenta}3 \end{bmatrix} & \begin{bmatrix}\color{red}1 & \color{blue}1 & \color{magenta}4 \\ \color{red}0 & \color{blue}1 & \color{magenta}-1 \\ \color{red}0 & \color{blue}-2 & \color{magenta}3 \end{bmatrix} & \begin{bmatrix}\color{lime}0 & \color{blue}1 & \color{magenta}4 \\ \color{lime}1 & \color{blue}1 & \color{magenta}-1 \\ \color{lime}0 & \color{blue}-2 & \color{magenta}3 \end{bmatrix} \\ \end{array} [/math]
The determinant of a 3×3 matrix is trickier, but also doable:
[math] \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \\ \end{bmatrix} → a(ei - fh) - b(di - fg) + c(dh -eg) [/math]
In natural language, that’s each element of the first row times the determinant of the square matrix from the other two columns and the other two rows, summed but with an alternating pattern of negation beginning with positive. If you ever need to do determinants of matrices bigger than 3×3, see this webpage. Or, you can just use an online calculator.
And so our results are [math]-2[/math], [math]3[/math], [math]1[/math], [math]-11[/math]. Set these inside triply-nested brackets, because it’s a trimap for a rank-3 temperament, and we get ⟨⟨⟨-2 3 1 -11]]].
Now for canonical form. We need the first term to be positive; this doesn’t make a difference in how things behave, but is done because it canonicalizes things (we could have found the result where the first term came out positive by simply changing the order of the rows of our mapping-row-basis, which doesn’t affect how it works as a mapping at all, or mean there's anything different about the temperament). And so we change the signs^{[10]}, and our list ends up as [math]2[/math], [math]-3[/math], [math]-1[/math], [math]11[/math]. There's no GCD to divide out. So now we have ⟨⟨⟨2 -3 -1 11]]].
Converting comma basis to multicomma
You may have noticed that the canonical multimap for meantone, ⟨⟨1 4 4]], looks really similar to the meantone comma, [-4 4 -1⟩. This is not a coincidence.
To understand why, we have to cover (or review) a few key points:
- Just as a vector is the dual of a covector, we also have a multivector which is the dual of a multicovector. Analogously, a multicomma is what we call the thing the multivector represents.
- We can calculate a multicomma from a comma basis much in the same way we can calculate a multimap from a mapping-row-basis.
- We can convert between multimaps and multicommas using an operation called “taking the dual”, which basically involves reversing the order of terms and changing the signs of some of them.
Calculating the multicomma is almost the same as calculating the multimap. The only difference is that as a preliminary step you must transpose^{[11]} the matrix, or in other words, exchange rows and columns.
Example
To demonstrate these points, let’s first calculate the multicomma from a comma basis. Later in this article, but not too much later, we'll confirm our answer by calculating the same multicomma as the dual of its dual multimap.
Here’s the comma basis for meantone: ⟨[-4 4 -1⟩]. The preliminary transposition step, in our extended bra-ket notation, just looks like replacing ⟨[-4 4 -1⟩] with [⟨-4 4 -1]⟩. Now we can see that this is just like our ET map example from the previous section: basically an identity operation, breaking the thing up into three 1×1 matrices [math]\begin{bmatrix}-4\end{bmatrix} \begin{bmatrix}4\end{bmatrix} \begin{bmatrix}-1\end{bmatrix}[/math] which are their own determinants, then nesting them back inside one layer of brackets because the nullity is 1. So we have [-4 4 -1⟩.
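This transpose-then-take-minors procedure can be sketched in Python (hypothetical code, not the library's; the helper names are illustrative):

```python
from itertools import combinations

def det(m):
    # Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def minors_list(rows):
    # Same procedure as for a mapping: determinants of all square slices.
    n, d = len(rows), len(rows[0])
    return [det([[row[c] for c in cols] for row in rows])
            for cols in combinations(range(d), n)]

# Meantone's comma basis has one column, [-4 4 -1⟩; transposed, it is one row.
comma_basis = [[-4], [4], [-1]]
transposed = [list(col) for col in zip(*comma_basis)]
print(minors_list(transposed))  # [-4, 4, -1] — the multicomma [-4 4 -1⟩
```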
Canonical form
As with the canonical multimap:
- if we want the canonical multicomma, dividing out any GCD is necessary, and normalizing the signs is necessary. However, with the multicomma, we need to ensure that the trailing nonzero entry is positive, not the leading.
- if your comma basis is in canonical form as a matrix, the resultant multicomma from wedging its comma vectors together will already be the canonical multicomma.
Wolfram Language implementation
At its most basic, conversion of a mapping to a multimap can be implemented in Wolfram Language as:
Minors[mapping, rank]
if you provide the rank of the mapping yourself. This returns the list of entries, which you can understand as appearing inside a number of brackets equal to that rank.
The full implementation for matrixToMultivector[], which is found in the library, builds upon that core definition, adding new capabilities. It:
- works for either mappings or comma bases
- works on edge cases like nilovectors
- automatically finds the rank (identifying any deficiencies in the matrix you provided)
- returns the result in canonical form
- uses a data structure which encodes both a multivector's entries list as well as its variance, grade, and dimensionality, so that it can then be used for other EA purposes
Converting varianced multivectors to matrices
As for getting from a varianced multivector back to a matrix, there is a way to do that too!
Dave discovered a code implementation for this process written by Gene, reverse-engineered it, and used his understanding of it to develop his own simpler algorithm using a tensor-flattening approach. Here we will first document Gene's method, and then the tensor-flattening one.
Gene's algorithm
Original code
Gene's code here was written in Maple, and came from a page named Basic abstract temperament translation code, which is an astoundingly rich resource that is unfortunately only plugged into one other wiki page, at the end of the subsection linked here: Mathematical theory of regular temperaments#The normal comma list within a section titled "Translation between methods of specifying temperaments". Here is the relevant segment of the code, with its original indentation restored, and heavily commented by Dave:
# wedgie2e(w, n, p) converts a multimap (in list-of-minors form) to a non-square matrix (in list-of-lists form) in reduced-row echelon form (RREF).
# We don't like RREF because it may contain non-integer rational elements, and when its rows are multiplied by their lowest common denominator to integerise them, the matrix may become enfactored.
# The only reason RREF is used is because Gene decided this was the most convenient form to serve as a common intermediate for conversions between multiple types.
# We could have it return HNF instead (in the hope of preserving any enfactoring).
# The arguments to wedgie2e(w, n, p) are the multimap w, its rank n, and its prime limit p whose only purpose is to provide its dimension m.
# We wouldn't bother with prime limit, we'd just compute the multimap's dimension from its list length and the rank, which we might generalize to a signed grade, allowing this conversion to work on multicommas as well as multimaps.
# Maple uses (...) for function arguments where Wolfram uses [...].
# Maple uses [...] for forming lists and for list/array indexing, where Wolfram uses {...} for forming lists and [[...]] for indexing.
# Maple has a concept of a sequence which is like a list without its brackets. It is formed purely by putting commas between things.
# The function op(), when used with a single argument, strips the brackets off a list and turns it into a sequence.
# The function nops() gives the length of a list.
wedgie2e := proc(w, n, p)
# rank n p-limit multival to rref
local b, c, i, j, k, m, u, v, x, y, z;
# Obtain the dimension m from the prime limit p.
m := numtheory[pi](p);
# Create a list b of lists of m indices taken n at a time. e.g. combinat[choose](3,2) = [[1,2],[1,3],[2,3]]
# This is like Subsets[] in Wolfram.
b := combinat[choose](m, n);
# Create a list c of lists of m indices taken n-1 at a time.
c := combinat[choose](m, n-1);
# Initialise a sequence of lists (rows) that will become the output matrix.
z := NULL;
# For each combination of n-1 indices (where n is the rank) do.
for i from 1 to nops(c) do
# Set u to the current combination of n-1 indices.
u := c[i];
# Initialise a new matrix row.
v := NULL;
# For each single index j (from 1 to the dimension) do.
for j from 1 to m do
# Append the current single index to the end of the current combination of n-1 indices and call it y.
y := [op(u), j];
# If the combination of a single index with another n-1 indices
# contains any duplicate index, then append a zero to the current row-so-far.
if nops(convert(y, set))<n then v:=v,0 fi;
# Rearrange the indices of y into lexicographic order and call it x.
# Note this may have duplicates, but combinations that have duplicates
# will never match any combination from b below.
x := sort(y);
# For each combination of n indices (where n is the rank) do. Which means:
# For each entry of the multimap do.
for k from 1 to nops(b) do
# Here's where we finally refer to the multimap entries.
# If the current sorted combination of a single index with another n-1 indices
# matches the combination of indices for the current entry of the multimap,
# then append the current entry of the multimap to the current row-so-far,
# but first multiply it by the parity (1, -1 or 0) of the
# combination of indices for the current entry of the multimap
# relative to the unsorted combination of a single index with another n-1 indices.
if x=b[k] then v := v,relpar(b[k], y)*w[k] fi od od;
# Change the row v from a sequence to a list.
v := [v];
# Append the row to the end of the matrix-so-far.
z := z,v od;
# Convert the matrix to RREF, delete any rows of all zeros, and return it as the result of this function.
vec2e([z]) end:
These are defined elsewhere in the provided code (also re-indented below): relpar(), vec2e(), ech() (which is called by vec2e()).
relpar := proc(u, v)
# relative parity of two permutations
local t;
# Create an empty antisymmetrised array or tensor t.
t := table('antisymmetric');
# Write a 1 to the entry of t indexed by the index combination u.
t[op(u)] := 1;
# Read out the entry of t indexed by the index combination v,
# to learn whether it was set to 1 or -1, or left as 0, by the previous write operation.
# Return this 1, -1 or 0 as the result of this function.
t[op(v)];
end:
vec2e := proc(w)
# rref temperament identifier from val list or projection matrix w
local i, u, v, z;
# Convert the matrix to RREF.
u := ech(w);
# Delete any rows of all zeros.
z := NULL;
for i from 1 to nops(u) do
v := u[i];
if not convert(v, set)={0} then
z := z,v fi od:
[z] end:
ech := proc(l)
# reduced row echelon form of listlist l
local M;
M := Matrix(l);
convert(LinearAlgebra[ReducedRowEchelonForm](M), listlist) end:
These appear to be Maple built-ins: numtheory[pi](), combinat[choose](), nops(), op(), convert(), sort(), table(), Matrix(), LinearAlgebra[ReducedRowEchelonForm]()
By hand example
Here's an example of doing Gene's algorithm by hand on a 4D rank-3 temperament, adapted from an email correspondence from Dave to Douglas:
Here's the 7-limit Marvel trimap (wedgie) with [math]\binom43 = 4[/math] elements: ⟨⟨⟨1 2 -2 -5]]]
Let's see if we can convert it into the RREF 7-limit Marvel mapping:
[⟨1 0 0 -5] ⟨0 1 0 2] ⟨0 0 1 2]⟩
The trimap above was obtained as the result of Minors[{{1, 0, 0, -5}, {0, 1, 0, 2}, {0, 0, 1, 2}}, 3].
Gene's algorithm will often initially produce a matrix with many more rows than it needs. These will not all be linearly independent and so will reduce to the expected number of rows when row-reduced.
It produces [math]\binom{d}{r-1}[/math] rows where [math]d[/math] is the dimension and [math]r[/math] is the rank. This may be more or less than the number of elements in the multimap, depending on whether you're on the right or left half of Pascal's triangle. In this case it will generate [math]\binom42 = 6[/math] rows.
We list in lexicographic order the 6 sorted combinations of 2 indexes chosen from 4. One for each row that the algorithm will generate. This is Gene's list c.
c = 12 13 14 23 24 34
We list in lexicographic order the 4 sorted combinations of 3 indexes chosen from 4. These are the compound indices of the entries in the trimap ⟨⟨⟨1 2 -2 -5]]]. This is Gene's list b.
b = 123 124 134 234
We're going to make the first row of the matrix. So we take the first index combo from c, which is 12, and for each element of that matrix row we combine "12" with the index of the element's column. I'll put a vertical bar between the row and column parts.
12|1 12|2 12|3 12|4 appended indices
We compute the sign of each permutation. It's 0 if it contains a duplicate, -1 if it requires an odd number of swaps to sort, and +1 otherwise.
12|1  12|2  12|3  12|4    original
            123   124     sorted index
 0     0    +1    +1      sign
Now we find the entries in the multimap whose indices match the sorted indices above. Here's the multimap with the indices above each element.
 123   124  134  234      index
⟨⟨⟨1    2   -2   -5]]]    multimap
So we have:
12|1  12|2  12|3  12|4    original
            123   124     sorted index
 0     0    +1    +1      sign
             1     2      multimap entry
Now we multiply the signs by the multimap entries to obtain our first matrix row:
0 0 1 2 1st matrix row
Now the second row:
13|1  13|2  13|3  13|4    original
      123         134     sorted index
 0    -1     0    +1      sign
       1          -2      multimap entry
 0    -1     0    -2      2nd matrix row
Now the remaining rows:
14|1  14|2  14|3  14|4    original
      124   134           sorted index
 0    -1    -1     0      sign
       2    -2            multimap entry
 0    -2     2     0      3rd matrix row

23|1  23|2  23|3  23|4    original
123               234     sorted index
+1     0     0    +1      sign
 1                -5      multimap entry
 1     0     0    -5      4th matrix row

24|1  24|2  24|3  24|4    original
124         234           sorted index
+1     0    -1     0      sign
 2          -5            multimap entry
 2     0     5     0      5th matrix row

34|1  34|2  34|3  34|4    original
134   234                 sorted index
+1    +1     0     0      sign
-2    -5                  multimap entry
-2    -5     0     0      6th matrix row
So our matrix is:
 0  0  1  2    1st matrix row
 0 -1  0 -2    2nd matrix row
 0 -2  2  0    3rd matrix row
 1  0  0 -5    4th matrix row
 2  0  5  0    5th matrix row
-2 -5  0  0    6th matrix row
I used Wolfram's RowReduce[] on the above matrix and obtained:
{{1,0,0,-5}, {0,1,0,2}, {0,0,1,2}, {0,0,0,0}, {0,0,0,0}, {0,0,0,0}}
Using Last[HermiteDecomposition[]] instead of RowReduce[] gives the same result in this case.
Removing the three rows of zeros leaves:
[⟨1 0 0 -5] ⟨0 1 0 2] ⟨0 0 1 2]⟩ as expected.
You're a bloody genius, Gene.
[from Dave Keenan in email, on completing the above single-stepped example of Gene's multivector-to-matrix algorithm]
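The by-hand procedure above can be sketched in Python (an illustrative reimplementation of Gene's algorithm, not his Maple code and not the library's function; it substitutes a rational RREF for Maple's ReducedRowEchelonForm):

```python
from itertools import combinations
from fractions import Fraction

def parity(seq):
    # +1 if seq is an even permutation of its sorted order, -1 if odd,
    # 0 if it contains a duplicate index (mirroring Gene's relpar()).
    if len(set(seq)) < len(seq):
        return 0
    inversions = sum(1 for i in range(len(seq))
                       for j in range(i + 1, len(seq)) if seq[i] > seq[j])
    return -1 if inversions % 2 else 1

def rref_nonzero(rows):
    # Reduced row echelon form over the rationals, dropping all-zero rows.
    m = [[Fraction(x) for x in row] for row in rows]
    pivot = 0
    for col in range(len(m[0])):
        pr = next((r for r in range(pivot, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot], m[pr] = m[pr], m[pivot]
        m[pivot] = [x / m[pivot][col] for x in m[pivot]]
        for r in range(len(m)):
            if r != pivot and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot])]
        pivot += 1
    return [row for row in m if any(row)]

def multimap_to_matrix(w, n, d):
    # b: the compound indices of the multimap entries (n indices from d).
    b = list(combinations(range(1, d + 1), n))
    rows = []
    for u in combinations(range(1, d + 1), n - 1):   # Gene's list c
        row = []
        for j in range(1, d + 1):
            y = u + (j,)             # append the column index to the combo
            sign = parity(y)
            row.append(sign * w[b.index(tuple(sorted(y)))] if sign else 0)
        rows.append(row)
    return rref_nonzero(rows)

# The Marvel example: ⟨⟨⟨1 2 -2 -5]]] with rank 3 and dimension 4.
result = multimap_to_matrix([1, 2, -2, -5], 3, 4)
print([[int(x) for x in row] for row in result])
# → [[1, 0, 0, -5], [0, 1, 0, 2], [0, 0, 1, 2]]
```

Running this reproduces the six candidate rows stepped through above before row reduction collapses them to the three-row Marvel mapping.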
Wolfram Language implementation
This can be found in the RTT library as smithMultivectorToMatrix[].
Earlier form: subgroup commas
Gene's method appears to be a simplification from an earlier method he described here which was designed for finding "subgroup commas".
Simple example
Here's my alternative rendition of an example work-through. Start with the multimap for septimal meantone:
(2,3)  (2,5)  (2,7)  (3,5)  (3,7)  (5,7)
  1      4     10      4     13     12
Now work in subsets of [math]r+1[/math] indices, where [math]r[/math] is the rank. For each of these subsets, raise each member to the power of the multimap entry whose compound index is the set of the other [math]r[/math] indices. The other thing we need to recognize is the pattern of negations within these groups, which is based on a lexicographic sorting swap count, as we often see in EA operations: in this case it's the swap count for sorting the list formed by concatenating the index being used as the base with the remaining indices used as the power (which are already in order). This pattern of alternating signs is the same from one index subset to the next.
the {2,3,5} group:  2 ^ (3,5)   3 ^ -(2,5)   5 ^ (2,3)  → [4 -4 1⟩
the {2,3,7} group:  2 ^ (3,7)   3 ^ -(2,7)   7 ^ (2,3)  → [13 -10 0 1⟩
the {2,5,7} group:  2 ^ (5,7)   5 ^ -(2,7)   7 ^ (2,5)  → [6 0 -5 2⟩
the {3,5,7} group:  3 ^ (5,7)   5 ^ -(3,7)   7 ^ (3,5)  → [0 12 -13 4⟩
And if you take the Hermite normal form of the matrix formed by concatenating those four vectors, ⟨[4 -4 1 0⟩ [13 -10 0 1⟩ [6 0 -5 2⟩ [0 12 -13 4⟩] and trim off the rows of zeros at the end, you get ⟨[4 -4 1 0⟩ [13 -10 0 1⟩], which is indeed a comma basis for septimal meantone. So we've converted from varianced multivector to matrix form, but with the quirk of also switching sides of duality: covariant thing in, contravariant thing out. This could be hidden by taking the dual as a last step.
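The subset-and-alternating-sign pattern above can be sketched in Python (hypothetical code, for illustration only). Note that it yields the {2,5,7} vector as [12 0 -10 4⟩, which is the [6 0 -5 2⟩ shown above times its GCD of 2; either way it names the same comma:

```python
from itertools import combinations

def subgroup_commas(w, r, d):
    # b: the compound indices of the multimap entries, in lexicographic order.
    b = list(combinations(range(d), r))
    vectors = []
    for subset in combinations(range(d), r + 1):
        vec = [0] * d
        for k, member in enumerate(subset):
            # Exponent of the k-th member: the multimap entry indexed by the
            # other r members, with alternating sign.
            rest = tuple(i for i in subset if i != member)
            vec[member] = (-1) ** k * w[b.index(rest)]
        vectors.append(vec)
    return vectors

# Septimal meantone's multimap ⟨⟨1 4 10 4 13 12]]:
for v in subgroup_commas([1, 4, 10, 4, 13, 12], 2, 4):
    print(v)
```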
Another example, with more generalization and explanation
Let's do another example, to show how to generalize to other dimensions and ranks. Let's try something with [math]d=5[/math] and [math]r=3[/math].
Here's unimarv's mapping: [⟨1 0 0 -5 12] ⟨0 1 0 2 -1] ⟨0 0 1 2 -3]⟩
And here it is as a multimap: ⟨⟨⟨1 2 -3 -2 1 -4 -5 12 9 -19]]]
So we want to know if we can get from the multimap back to the mapping using this method.
(2,3,5)  (2,3,7)  (2,3,11)  (2,5,7)  (2,5,11)  (2,7,11)  (3,5,7)  (3,5,11)  (3,7,11)  (5,7,11)
   1        2       -3        -2        1        -4        -5       12         9       -19

the {2,3,5,7} group:   2 ^ (3,5,7)    3 ^ -(2,5,7)    5 ^ (2,3,7)    7 ^ -(2,3,5)   → [-5 2 2 -1 0⟩ (225/224, marvel)
the {2,3,5,11} group:  2 ^ (3,5,11)   3 ^ -(2,5,11)   5 ^ (2,3,11)   11 ^ -(2,3,5)  → [12 -1 -3 0 -1⟩ (4096/4125)
the {2,3,7,11} group:  2 ^ (3,7,11)   3 ^ -(2,7,11)   7 ^ (2,3,11)   11 ^ -(2,3,7)  → [9 4 0 -3 -2⟩ (41472/41503)
the {2,5,7,11} group:  2 ^ (5,7,11)   5 ^ -(2,7,11)   7 ^ (2,5,11)   11 ^ -(2,5,7)  → [-19 0 4 1 2⟩ (529375/524288)
the {3,5,7,11} group:  3 ^ (5,7,11)   5 ^ -(3,7,11)   7 ^ (3,5,11)   11 ^ -(3,5,7)  → [0 -19 -9 12 5⟩ (off the charts)
So in the end we just HNF ⟨[-5 2 2 -1 0⟩ [12 -1 -3 0 -1⟩ [9 4 0 -3 -2⟩ [-19 0 4 1 2⟩ [0 -19 -9 12 5⟩] to get ⟨[5 -2 -2 1 0⟩ [-12 1 3 0 1⟩]. And if we take the dual of that comma basis (its anti-null-space) indeed we get the mapping [⟨1 0 0 -5 12] ⟨0 1 0 2 -1] ⟨0 0 1 2 -3]⟩ which is indeed the mapping we were looking for. Great.
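To make the procedure concrete, here is a sketch of this extraction step in Python (a hypothetical mock-up; the actual RTT library is in Wolfram Language, and the final HNF step is omitted here). The minors dictionary maps each sorted tuple of prime indices to its multimap entry, and each resulting vector is reduced by its GCD, matching the reduced forms shown above:

```python
from itertools import combinations
from functools import reduce
from math import gcd

def multimap_to_vectors(minors, d, r):
    # For each (r+1)-subset of prime indices, build a vector: the member at
    # position j in the subset gets (-1)^j times the multimap entry whose
    # compound index is the set of the other r members. Each vector is then
    # reduced by its GCD.
    vectors = []
    for subset in combinations(range(d), r + 1):
        v = [0] * d
        for pos, i in enumerate(subset):
            rest = tuple(j for j in subset if j != i)
            v[i] = (-1) ** pos * minors[rest]
        g = reduce(gcd, (abs(x) for x in v))
        if g > 1:
            v = [x // g for x in v]
        vectors.append(v)
    return vectors

# Septimal meantone: d=4, r=2, multimap ⟨⟨1 4 10 4 13 12]].
meantone = dict(zip(combinations(range(4), 2), [1, 4, 10, 4, 13, 12]))
print(multimap_to_vectors(meantone, 4, 2))
# → [[4, -4, 1, 0], [13, -10, 0, 1], [6, 0, -5, 2], [0, 12, -13, 4]]

# Unimarv: d=5, r=3, multimap ⟨⟨⟨1 2 -3 -2 1 -4 -5 12 9 -19]]].
unimarv = dict(zip(combinations(range(5), 3), [1, 2, -3, -2, 1, -4, -5, 12, 9, -19]))
print(multimap_to_vectors(unimarv, 5, 3)[0])
# → [-5, 2, 2, -1, 0]
```

Feeding these vectors into an HNF routine then yields the comma bases shown above.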
The tensor-flattening algorithm
How it works
Dave's tensor-flattening approach is a simplification of Gene's approach. Dave saw that Gene's algorithm's triple-nested loops were designed to generate only the upper triangle (or tetrahedron, etc.) worth of rows. He reasoned that since the tensors representing varianced multivectors are always anti-symmetric, it wouldn't matter if you included the lower triangle as well; it would be redundant but harmless. And of course the all-zero rows on the diagonal would have no effect either, once you took the HNF of the lot.
Wolfram Language implementation
The core of this algorithm is what it does in the case of a multimap with minors list [math]w[/math] and grade [math]g[/math] of 2 or more. This can be written simply as:
multivectorToMatrix[w_] := hnf[Flatten[multivectorToTensor[w], grade[w] - 2]];
After uncompressing the varianced multivector back to its full antisymmetric tensor form, you flatten it down by a number of dimensions (note: not the dimensionality of the temperament) equal to its grade minus 2, i.e. into a 2-dimensional structure no matter how many dimensions it started with, or in other words, a matrix.
The rest of this code is mainly for handling edge cases (grade less than 2), detecting nondecomposable input varianced multivectors, and including the proper transposes and such to make it work for multicommas as well as multimaps.
This can be found in the RTT library as multivectorToMatrix[].
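For illustration, here is a hypothetical Python sketch of the uncompression step (function and variable names here are our own, not the library's): each tensor cell holds the minor for its sorted indices, negated when the cell's indices are an odd permutation of sorted order, and zero when any index repeats.

```python
from itertools import combinations

def parity(seq):
    # Sign of the permutation that sorts seq: flip once per inversion.
    sign = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

def multivector_to_tensor(entries, d, g):
    # Expand a compressed grade-g multivector (minors in lexicographic
    # index order) into its full antisymmetric rank-g tensor (nested lists).
    minors = dict(zip(combinations(range(d), g), entries))
    def cell(idx):
        if len(set(idx)) < g:
            return 0  # repeated index
        return parity(idx) * minors[tuple(sorted(idx))]
    def build(prefix):
        if len(prefix) == g:
            return cell(prefix)
        return [build(prefix + (i,)) for i in range(d)]
    return build(())

# 5-limit meantone ⟨⟨1 4 4]] expanded to its full antisymmetric matrix:
print(multivector_to_tensor([1, 4, 4], 3, 2))
# → [[0, 1, 4], [-1, 0, 4], [-4, -4, 0]]
```

For grade 2 the tensor is already a matrix; for higher grades you would flatten all but the last axis before taking the HNF.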
The dual
Now let’s see how to do the dual function.
Comparison with LA dual
In linear algebra, the dual function involves the null-space.
For a full explanation, see: RTT How-To#Null-space
A brief explanation is:
- To get from a mapping to a comma basis, find a basis for its null-space.
- To get from a comma basis to a mapping, take the anti-null-space, where the anti-null-space is simply the null-space operation but in an "antitranspose sandwich", or in other words, first antitranspose, then take the null-space as if it were a mapping, and then antitranspose again back into the original comma basis form.
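As a sketch of the null-space step, here is a basic Gaussian-elimination approach in Python (the helper name nullspace_basis is our own; real implementations would typically use HNF or a library routine):

```python
from fractions import Fraction
from math import lcm

def nullspace_basis(rows):
    # Integer basis vectors for the null-space of an integer matrix,
    # found via Gaussian elimination over the rationals.
    m = [[Fraction(x) for x in r] for r in rows]
    nrows, ncols = len(m), len(m[0])
    pivots, r = [], 0
    for c in range(ncols):
        if r == nrows:
            break
        pr = next((i for i in range(r, nrows) if m[i][c] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(nrows):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    basis = []
    for free in (c for c in range(ncols) if c not in pivots):
        v = [Fraction(0)] * ncols
        v[free] = Fraction(1)
        for i, p in enumerate(pivots):
            v[p] = -m[i][free]
        d = lcm(*(x.denominator for x in v))  # clear denominators
        basis.append([int(x * d) for x in v])
    return basis

# The 5-limit 12-ET map ⟨12 19 28]: its commas include the Pythagorean comma.
print(nullspace_basis([[12, 19, 28]]))
# → [[-19, 12, 0], [-7, 0, 3]]

# 5-limit meantone's mapping gives the meantone comma 81/80:
print(nullspace_basis([[1, 0, -4], [0, 1, 4]]))
# → [[4, -4, 1]]
```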
So the EA dual in RTT is the equivalent of this operation, but defined on varianced multivectors rather than matrices.
Simplified method for low limit temperaments
If your temperament's dimensionality [math]d[/math] is 6 or less (within the 13-limit), you can take advantage of this table I've prepared, and use this simplified method:
- Find the correct cell in Figure 2 below using your temperament's dimensionality [math]d[/math] and grade [math]g[/math]. Again, grade is like rank or nullity, but generic; so if you are taking the dual of a multimap, use the rank as the grade, and if you are taking the dual of a multicomma, use the nullity as the grade. This cell should contain the same number of symbols as there are terms in your multimap.
- Match up the terms of your multimap with these symbols. If the symbol is [math]+[/math], do nothing. If the symbol is [math]-[/math], change the sign (positive to negative, or negative to positive; you could think of it like multiplying by either +1 or -1).
- Reverse the order of the terms.
- Set the result in the proper count of brackets.
So in this case:
- We have [math]d=3[/math], [math]r=2[/math], so the correct cell contains the symbols [math]+-+[/math].
- Matching these symbols up with the terms of our multimap, we don't change the sign of 1, we do change the sign of 4 to -4, and we don't change the sign of the second 4.
- Now we reverse 1 -4 4 to 4 -4 1.
- Now we set the result in the proper count of brackets: [4 -4 1⟩
Ta-da! Both operations get us to the same result: [4 -4 1⟩.
What’s the proper count of brackets though? Well, the total count of brackets on the multicomma and multimap for a temperament must always sum to the dimensionality of the system from which you tempered. It’s the same thing as [math]d - n = r[/math], just phrased as [math]r + n = d[/math], where [math]r[/math] is the bracket count for the multimap and [math]n[/math] is the bracket count for the multicomma. So with 5-limit meantone, with dimensionality 3, there should be 3 total pairs of brackets. If 2 are on the multimap, then only 1 is on the multicomma.
Canonical form
The EA dual in RTT is defined to always divide out the GCD and normalize the leading (first nonzero) entry to positive in the case of a multimap. Therefore it is defined to always return its varianced multivector in canonical form as discussed above: Varianced_Exterior_Algebra#Canonical_form
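A minimal Python sketch of this canonicalization (the helper name is hypothetical; it only covers the multimap case of normalizing the leading entry):

```python
from math import gcd
from functools import reduce

def canonicalize(entries):
    # Divide out the GCD, then flip all signs if the leading
    # (first nonzero) entry is negative.
    g = reduce(gcd, (abs(e) for e in entries))
    if g > 1:
        entries = [e // g for e in entries]
    leading = next((e for e in entries if e != 0), 0)
    return [-e for e in entries] if leading < 0 else list(entries)

print(canonicalize([-2, -8, -8]))  # → [1, 4, 4]
```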
Some insights re: the dual arising from Pascal's triangle
Note the Pascal’s triangle shape to the numbers in Figure 2. Also note that the mirrored results within each dimensionality are reverses of each other. Sometimes that means they’re identical, like [math]+-+-+[/math] and [math]+-+-+[/math]; other times not, like [math]+-++-+-+-+[/math] and [math]+-+-+-++-+[/math].
An important observation to make about multicommas and multimaps is that — for a given temperament — they always have the same count of terms. This may surprise you, since the rank and nullity for a temperament are often different, and the length of the multimap comes from the rank while the length of the multicomma comes from the nullity. But there’s a simple explanation for this. In either case, the length is not directly equal to the rank or nullity, but to the dimensionality choose the rank or nullity. And there’s a pattern to combinations that can be visualized in the symmetry of rows of Pascal’s triangle: [math]{d \choose n}[/math] is always equal to [math]{d \choose {d - n}}[/math], or in other words, [math]{d \choose n}[/math] is always equal to [math]{d \choose r}[/math]. Here are some examples:
[math]d[/math] | [math]r[/math] | [math]d - r[/math] | [math]{d \choose r}[/math] | [math]{d \choose {d - r}}[/math] | count |
---|---|---|---|---|---|
3 | 2 | 1 | [math](2,3) (2,5) (3,5)[/math] | [math](2) (3) (5)[/math] | 3 |
4 | 3 | 1 | [math](2,3,5) (2,3,7) (2,5,7) (3,5,7)[/math] | [math](2) (3) (5) (7)[/math] | 4 |
5 | 3 | 2 | [math](2,3,5) (2,3,7) (2,3,11) (2,5,7) (2,5,11) (2,7,11) (3,5,7) (3,5,11) (3,7,11) (5,7,11)[/math] | [math](2,3) (2,5) (2,7) (2,11) (3,5), (3,7) (3,11) (5,7) (5,11) (7,11)[/math] | 10 |
Each set on one side corresponds to the set on the other side which contains exactly the complementary elements.
A comparison of duals in relevant algebras
This operation uses the same process as is used for finding the complement in exterior algebra. However, whereas exterior algebra does not convert between vectors and covectors (it can be used on either one, staying within that category), with EA's dual in RTT you switch which type it is as the last step. More details can be found below. The dual used in EA for RTT is #2 in the table; it essentially combines elements from both #1 and #3.
# | dual type | notes | variance changing | using RTT's extended bra-ket notation to | operator | example | alternate example 1 | alternate example 2 | alternate example 3 | example (ASCII only) | alternate example (ASCII only) |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | Grassmann/EA/orthogonal complement | also called "dual" within EA, but "complement" is preferred to avoid confusion with the variance-changing MLA dual | no | demonstrate agnosticism to and unchanging of variance | negation, overbar, tilde | ¬[1 4 4⟩ = [4 -4 1⟩ | [1̅ ̅4̅ ̅4̅⟩⟩ = [4 -4 1⟩ | ~[1 4 4> = [4 -4 1> | |||
2 | MLA dual | is in exterior algebra form: compresses the antisymmetric matrix/tensor into a list of minors; this operation is the dual that EA in RTT uses | yes | distinguish covariance from contravariance | diamond operator, sine wave operator, (postfix) degree symbol, tilde | ⋄⟨⟨1 4 4]] = [4 -4 1⟩ | ⟨⟨1 4 4]]° = [4 -4 1⟩ | ∿⟨⟨1 4 4]] = [4 -4 1⟩ | <><<1 4 4]] = [4 -4 1> | ~<<1 4 4]] = [4 -4 1> | |
3 | MLA complement | is in tensor form: uses the full antisymmetric matrix/tensor itself; this operation is also known as the "Hodge dual", but "Hodge star" is preferred to avoid confusion with a variance-changing MLA dual | no | demonstrate agnosticism to and unchanging of variance | Hodge star, asterisk operator | ⋆[⟨0 1 4] ⟨-1 0 4] ⟨-4 -4 0]⟩ = ⟨4 -4 1] | ⋆[[0 1 4] [-1 0 4] [-4 -4 0]]⁰₂ = [4 -4 1]⁰₁ | ∗[⟨0 1 4] ⟨-1 0 4] ⟨-4 -4 0]⟩ = ⟨4 -4 1] | ∗[[0 1 4] [-1 0 4] [-4 -4 0]]⁰₂ = [4 -4 1]⁰₁ | *<<0 1 4] <-1 0 4] <-4 -4 0]] = ⟨4 -4 1] | *[[0 1 4] [-1 0 4] [-4 -4 0]] type (0,2) = [4 -4 1] type (0,1) |
A generalized method that works for higher-limit temperaments
If you need to do this process for a higher dimensionality than 6, then you'll need to understand how I found the symbols for each cell of Figure 2. Here's how:
- Take the rank, halved, rounded up. In our case, [math]\lceil \frac{r}{2} \rceil = \lceil \frac{2}{2} \rceil = \lceil 1 \rceil = 1[/math]. Save that result for later. Let’s call it [math]x[/math].
- Find the lexicographic combinations of [math]r[/math] primes again: [math](2,3)[/math], [math](2,5)[/math], [math](3,5)[/math]. Except this time we don’t want the primes themselves, but their indices in the list of primes. So: [math](1,2)[/math], [math](1,3)[/math], [math](2,3)[/math].
- Take the sums of these sets of indices, and to each sum, also add [math]x[/math]. So [math]1+2+x[/math], [math]1+3+x[/math], [math]2+3+x[/math] = [math]1+2+1[/math], [math]1+3+1[/math], [math]2+3+1[/math] = [math]4[/math], [math]5[/math], [math]6[/math].
- Even sums become [math]+[/math]'s and odd sums become [math]-[/math]'s.
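Putting these steps together, here is a hypothetical Python sketch (the RTT library itself is in Wolfram Language) that generates the sign pattern for any [math]d[/math] and [math]g[/math], and then uses it to take the dual:

```python
from itertools import combinations
from math import ceil

def dual_sign_pattern(d, g):
    # One sign per size-g combination of 1-based prime indices, in
    # lexicographic order: combinations whose index sum plus the offset
    # x = ceil(g/2) is even get +, the rest get -.
    x = ceil(g / 2)
    return ['+' if (sum(c) + x) % 2 == 0 else '-'
            for c in combinations(range(1, d + 1), g)]

def ea_dual(entries, d, g):
    # Apply the sign pattern, then reverse the order of the terms.
    signs = dual_sign_pattern(d, g)
    signed = [e if s == '+' else -e for e, s in zip(entries, signs)]
    return list(reversed(signed))

print(''.join(dual_sign_pattern(3, 2)))  # → +-+
print(''.join(dual_sign_pattern(5, 3)))  # → +-++-+-+-+
print(ea_dual([1, 4, 4], 3, 2))          # → [4, -4, 1]
```

The last line reproduces the meantone example: the dual of ⟨⟨1 4 4]] is [4 -4 1⟩ (canonicalization aside).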
Wolfram Language implementation
The Wolfram Language implementation developed by Dave and Douglas closely parallels the process described here, using Wolfram Language's built-in HodgeDual[]
to determine and apply those all-important sign-changing sequences per the grade and dimensionality:
This can be found in the RTT library as eaDual[].
A deeper look at the math underlying the sign patterns of the EA dual
You may wonder: but why are these sign patterns the way they are? Well, to deeply understand this, you'd probably have to hit some math books. I can shed a little more light on it, but it will still be fairly hand-wavy. The basic gist of it is this. The sets of prime numbers we've looked at, such as (2,3) or (3,5,11), are really combinations of basis vectors, or in other words, atomic elements that are mutually independent (orthogonal, if you like). In most exterior algebra texts, you'll see them in abstract, variable form, like this: e₁, e₂, e₃, etc., and for this demonstration it will be easier to use them that way, so that's what we'll do. To be clear, 2 is like e₁, 3 is like e₂, 5 is like e₃, etc.
So as you've seen, when we take the dual, the resulting dual's terms correspond to the complementary set of these basis vectors; i.e. if there are 5 of them, the corresponding term in the dual for the term with basis vector combination e₁e₄ will be e₂e₃e₅. We can write this like:
∗(e₁∧e₄) = (e₂∧e₃∧e₅)
Let's look at a simple case: dimensionality 3. Here's all such pairs of dual basis vector combinations:
e₁ ↔ e₂e₃
e₂ ↔ e₁e₃
e₃ ↔ e₁e₂
We can rewrite this like
1|23
2|13
3|12
Now, for each line, we need to swap elements until they are in order. Note that this is the kind of swapping where you imagine the objects are in boxes and we pick up two at a time and swap which box they're in; not the kind of swapping where you slide them around a table to change order and afterwards shift things around to fill in gaps. The two boxes chosen each time don't have to be adjacent (though it won't affect the count of swaps if you did decide to restrict yourself to that for whatever reason).
1|23 (already done)
2|13, 1|23 (done, 1 swap necessary)
3|12, 1|32, 1|23 (done, 2 swaps necessary)
Finally, you count the number of swaps required to get things in order. If the count is odd, then this term will be negative in the dual. If it is even, this term will be positive. This is the derivation of the sign change pattern for d=3 g=1 being +-+. But we still haven't really explained why this is. That's because a property of exterior algebra is that it's not exactly commutative: e₁e₂ ≠ e₂e₁. Instead, e₁e₂ = -e₂e₁! So every time you need to swap these elements, it introduces another sign change. Let's try one more example: d=5 r=3.
e₁e₂e₃ ↔ e₄e₅
e₁e₂e₄ ↔ e₃e₅
e₁e₂e₅ ↔ e₃e₄
e₁e₃e₄ ↔ e₂e₅
e₁e₃e₅ ↔ e₂e₄
e₁e₄e₅ ↔ e₂e₃
e₂e₃e₄ ↔ e₁e₅
e₂e₃e₅ ↔ e₁e₄
e₂e₄e₅ ↔ e₁e₃
e₃e₄e₅ ↔ e₁e₂
Rewrite it:
123|45
124|35
125|34
134|25
135|24
145|23
234|15
235|14
245|13
345|12
Swap until in order:
123|45
124|35, 123|45
125|34, 123|54, 123|45
134|25, 124|35, 123|45
135|24, 125|34, 123|54, 123|45
145|23, 125|43, 123|45
234|15, 134|25, 124|35, 123|45
235|14, 135|24, 125|34, 123|54, 123|45
245|13, 145|23, 125|43, 123|45
345|12, 145|32, 125|34, 123|54, 123|45
Counting swaps, we see 0,1,2,2,3,2,3,4,3,4 and that gives even,odd,even,even,odd,even,odd,even,odd,even, so that gives +-++-+-+-+. Great!
One last note on the e₂∧e₃∧e₅ style notation. We've mentioned that the bra-ket notation RTT uses does not come from EA^{[12]}. So if we wanted to use a more EA-style notation, we could write the multimap ⟨⟨1 4 4]] like 1e₁∧e₂ + 4e₁∧e₃ + 4e₂∧e₃, or perhaps 1(2,3) + 4(2,5) + 4(3,5).
The wedge product
Uses
The wedge product is primarily used in RTT for joining and meeting temperaments.
In linear algebra, join and meet are accomplished by concatenating matrices (and then putting into canonical form, to eliminate potential resultant rank-deficiencies or enfactoring). Join is when multiple mappings are thus concatenated and canonicalized. Meet is when multiple comma bases are concatenated and canonicalized.
Think of the wedge product as the EA version of concatenation in this situation; in other words, the wedge product is used either for join or for meet. Wedging two multimaps is a join. Wedging two multicommas is a meet.
Collinearity exception
There is an exception to the above relationship, however. When two varianced multivectors are collinear, every entry in the resultant wedge product will be 0.
What do we mean by collinear, though? Well, there are a couple different ways that may be helpful for you to think about it:
- Looking at the numbers: they share a common vector. For example, the canonical multimap for 11-limit meantone is ⟨⟨1 4 10 18 4 13 25 12 28 16]] and the canonical multimap for 11-limit opossum extension with an extra dimension is ⟨⟨3 5 9 19 1 6 20 7 27 22]]. We know they share a common vector because you can find meantone's multimap as the wedge product of ⟨7 11 16 19 23] and ⟨5 8 12 15 19], and you can find this opossum's multimap as the wedge product of ⟨7 11 16 19 23] and ⟨15 24 35 42 52]; the vector shared in common is ⟨7 11 16 19 23]. So, wedging ⟨⟨1 4 10 18 4 13 25 12 28 16]] and ⟨⟨3 5 9 19 1 6 20 7 27 22]] won't give us anything useful, as you might now expect! Normally, wedging two 5-D bicovectors would give a 5-D quadricovector representing another temperament. But in this case we get ⟨⟨⟨⟨0 0 0 0 0]]]]. You can verify this using the method described in the next section if you would like. Note that you can still join these temperaments using the linear algebra method — by concatenating their mappings and reducing — just fine. That looks like:
join([⟨1 0 -4 -13 -25] ⟨0 1 4 10 18]⟩, [⟨1 2 3 4 6] ⟨0 3 5 9 19]⟩) = [⟨1 0 0 -1 -5] ⟨0 1 0 -2 -2] ⟨0 0 1 3 5]⟩
and if you wedge together those three covectors you get the tricovector for this temperament, ⟨⟨⟨1 3 5 2 2 -4 -1 -5 -10 -8]]].
- Looking at a diagram: they intersect in PTS. Think about how these tunings would look in projective tuning space, or PTS, where joins look like unions and meets look like intersections. So if two temperaments don't meet — for example they are two different rank-1 points, or a rank-1 point and a rank-2 line that does not pass through it — then it's clear that there'd be no redundancy if you join them. But if two temperaments do meet — like two intersecting rank-2 lines, or a rank-2 line and a rank-1 point it passes through — then when you join them there will definitely be some redundancy, and it's that redundancy that's equivalent to collinearity.^{[13]} So again, using linear algebra methods, the join of two collinear temperaments will work out fine: it will simply give you the temperaments corresponding to the expected union of the input points, lines, planes, etc. However the exterior algebra approach with the wedge product cannot handle inputs with any collinearity.
How to think about why it's different in EA
As for why the wedge product gives all zeros when the input vectors are collinear, it may help to think of it like this. The wedge product gives a directed area. This is a generalized area, i.e. it isn't necessarily 2-dimensional. It is of whichever dimension is required of the given multivector, and that will always be the multivector's grade (its multi number).
The wedge product of two (mono)vectors represents the area of the parallelogram formed with these two vectors as its sides. To be specific, that's the vector leading from an origin O to a point A, the other vector leading from O to point B, and then the first vector again but this time from point B to point AB, and the second vector again but from point A to point AB as well.
If the two input vectors were collinear (and if they are monovectors the only way this could be would be if they were the same vector or a multiple of it!) then of course the area would be 0. In this case we are indeed looking for a standard 2-D area, because the grade of this result is 1 + 1 = 2. In other words, even the wedge product of two collinear monovectors is a bivector.
Now let's think about the example of wedging a monovector with a bivector. Assuming linear independence, the resulting shape will be a parallelepiped, i.e. a higher dimensional parallelogram. Basically if the bivector is that parallelogram we described connecting points O, A, B, and AB, and the monovector is from O to C, then we need to create two new parallelograms: one between C and A in the same way we made one between A and B, and also one between C and B. But then we'll only have half of our parallelepiped. Like a cube it has 6 faces but these are only 3 of those 6 — the three which touch the vertex O. So each of these parallelograms needs a copy on the opposite side (in the same way that in the simpler case we needed a copy of A opposite the first A, and a copy of B opposite the first B, to close the shape). And so, the wedge product of this monovector and bivector is a trivector which represents the 3-D area — or volume as we typically call this dimensionality of area — of this resulting parallelepiped.
But now what if we wedged a monovector with a bivector, but they were collinear? This means that our monovector from O to C is not in a new direction, but a point along the same trajectory as either A or B. That means that the resulting parallelepiped is going to be just the same parallelogram that we had already from A and B! However, we have increased the grade from 2 to 3 by wedging with this monovector, so now, instead of asking for the 2-D area of this parallelogram, which has a positive value, we're asking for its 3-D area, and of course a 2-D shape has no 3-D area. So that's why we get zeros here.
This principle gets more difficult to imagine beyond the 3-D realm that we inhabit as physical beings, but hopefully if the limitation makes sense this far, you can accept the abstraction.
Generalization to higher grade varianced multivectors
We've seen that the process for converting matrices into varianced multivectors is closely related to the wedge product. That's because the mere act of treating multiple maps as rows of a mapping matrix (or multiple commas as columns of a comma basis matrix) is the same as concatenating them as per a join or meet. So converting a mapping or comma basis matrix to a varianced multivector is equivalent to wedging together all of the maps (or commas). To visualize this, let's compare how we can write:
[⟨1 0 -4] ⟨0 1 4]⟩ = ⟨⟨1 4 4]]
but we can also write it:
⟨1 0 -4]∧⟨0 1 4] = ⟨⟨1 4 4]]
where ∧ is the wedge symbol.
When we converted a higher rank example, that was like showing:
[⟨1 0 1 4] ⟨0 1 1 -1] ⟨0 0 -2 3]⟩ = ⟨⟨⟨2 -3 -1 11]]]
and we could also write that like:
⟨1 0 1 4]∧⟨0 1 1 -1]∧⟨0 0 -2 3] = ⟨⟨⟨2 -3 -1 11]]]
Now that this is clear, we can show how you can actually use the wedge product to combine any number of varianced multivectors of any grades together to get a new varianced multivector.
Basic steps
- For each entry in each varianced multivector, find its compound indices.
- Make a list of every combination of one entry from each varianced multivector, and a corresponding list of the concatenations of these entries' compound indices.
- If a concatenation of compound indices includes duplicate indices, throw it and its corresponding entry combination away. Otherwise, take the product of its corresponding entry combination.
- Sort all concatenations of compound indices into lexicographic order, using only swaps of two indices at a time. If an odd count of swaps is required, negate the corresponding product.
- Some sorted concatenations of compound indices will now match. For each unique sorted concatenation of compound indices, sum all of its corresponding products.
- These sums, sorted by the lexicographic order of their corresponding sorted concatenations of compound indices, are the entries in the target varianced multivector.
This method was developed by Dave Keenan as a simplification of the explanation given by John Browne in chapter 1.2 of this book: https://grassmannalgebra.com/index_htm_files/Browne%20John%20Vol%201%20Chapter%201.pdf
According to Dave, "...a basic aspect... is that wedge product is anticommutative, so ab = -ba, i.e. a swap requires a change of sign. But what if a and b have the same index. In that case their unit vectors are identical. And ab = -ba implies also that aa = -aa which implies that a = 0. So a duplicated index implies a zero result."
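The six steps above can be sketched in Python like so (a hypothetical mock-up, not the RTT library's Wolfram Language code; the inputs are minors lists together with their grades):

```python
from itertools import combinations, product

def parity(seq):
    # Sign from sorting the concatenated compound indices:
    # flip once per inversion (i.e. per pairwise swap needed).
    sign = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

def wedge(a, ga, b, gb, d):
    # a, b: minors lists (lexicographic index order) of grades ga, gb.
    A = dict(zip(combinations(range(d), ga), a))
    B = dict(zip(combinations(range(d), gb), b))
    out = {c: 0 for c in combinations(range(d), ga + gb)}
    for (ia, va), (ib, vb) in product(A.items(), B.items()):
        concat = ia + ib
        if len(set(concat)) < len(concat):
            continue  # duplicated index: term vanishes
        out[tuple(sorted(concat))] += parity(concat) * va * vb
    return list(out.values())

# ⟨7 11 16]∧⟨5 8 12] = ⟨⟨1 4 4]] (meantone):
print(wedge([7, 11, 16], 1, [5, 8, 12], 1, 3))
# → [1, 4, 4]

# ⟨16 25 37 45]∧⟨⟨1 4 10 4 13 12]] = ⟨⟨⟨1 3 2 -1]]] (starling):
print(wedge([16, 25, 37, 45], 1, [1, 4, 10, 4, 13, 12], 2, 4))
# → [1, 3, 2, -1]
```

This reproduces both worked examples below, so you may find it handy for checking your own hand computations.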
Example
Let's take the wedge product of 16-ET with septimal meantone and see what we get.
⟨16 25 37 45]∧⟨⟨1 4 10 4 13 12]]
Step one: find the compound indices for each entry in these varianced multivectors.
[math]
\begin{array} {c}
& (2) & (3) & (5) & (7) & & & & & & (2,3) & (2,5) & (2,7) & (3,5) & (3,7) & (5,7) \\
\langle & 16 & 25 & 37 & 45 & ] & & ∧ & & \langle\langle & 1 & 4 & 10 & 4 & 13 & 12 & ]] \\
\end{array}
[/math]
Step two: make two corresponding lists. One is a list of every possible combination of entries from these varianced multivectors. The other is a list of concatenations of the entries' indices.
Note that in this case we're only wedging together two varianced multivectors at once. But this method works for any number of varianced multivectors at a time. The only difference is that each combination would be of [math]k[/math] elements, where [math]k[/math] is the count of varianced multivectors being wedged simultaneously.
These two lists aren't going to fit in one row on most screens, so I'll break it halfway.
[math]
\begin{array} {c}
(2)(2,3)\!\! & \!\!(2)(2,5)\!\! & \!\!(2)(2,7)\!\! & \!\!(2)(3,5)\!\! & \!\!(2)(3,7)\!\! & \!\!(2)(5,7)\!\! & \!\!(3)(2,3)\!\! & \!\!(3)(2,5)\!\! & \!\!(3)(2,7)\!\! & \!\!(3)(3,5)\!\! & \!\!(3)(3,7)\!\! & \!\!(3)(5,7)\!\! & ... \\
{16·1}\!\! & \!\!{16·4}\!\! & \!\!{16·10}\!\! & \!\!{16·4}\!\! & \!\!{16·13}\!\! & \!\!{16·12}\!\! & \!\!{25·1}\!\! & \!\!{25·4}\!\! & \!\!{25·10}\!\! & \!\!{25·4}\!\! & \!\!{25·13}\!\! & \!\!{25·12}\!\! & ... \\
\end{array}
[/math]
[math] \begin{array} {c} ... & \!\!(5)(2,3)\!\! & \!\!(5)(2,5)\!\! & \!\!(5)(2,7)\!\! & \!\!(5)(3,5)\!\! & \!\!(5)(3,7)\!\! & \!\!(5)(5,7)\!\! & \!\!(7)(2,3)\!\! & \!\!(7)(2,5)\!\! & \!\!(7)(2,7)\!\! & \!\!(7)(3,5)\!\! & \!\!(7)(3,7)\!\! & \!\!(7)(5,7) \\ ... & \!\!{37·1}\!\! & \!\!{37·4}\!\! & \!\!{37·10}\!\! & \!\!{37·4}\!\! & \!\!{37·13}\!\! & \!\!{37·12}\!\! & \!\!{45·1}\!\! & \!\!{45·4}\!\! & \!\!{45·10}\!\! & \!\!{45·4}\!\! & \!\!{45·13}\!\! & \!\!{45·12} \\ \end{array} [/math]
Step three: throw away any combination whose indices contain duplicates. Otherwise, take their product.
[math]
\begin{array} {c}
(\color{red}2\color{black})(\color{red}2\color{black},3)\!\! & \!\!(\color{red}2\color{black})(\color{red}2\color{black},5)\!\! & \!\!(\color{red}2\color{black})(\color{red}2\color{black},7)\!\! & \!\!(2)(3,5)\!\! & \!\!(2)(3,7)\!\! & \!\!(2)(5,7)\!\! & \!\!(\color{red}3\color{black})(2,\color{red}3\color{black})\!\! & \!\!(3)(2,5)\!\! & \!\!(3)(2,7)\!\! & \!\!(\color{red}3\color{black})(\color{red}3\color{black},5)\!\! & \!\!(\color{red}3\color{black})(\color{red}3\color{black},7)\!\! & \!\!(3)(5,7)\!\! & ... \\
\cancel{16·1}\!\! & \!\!\cancel{16·4}\!\! & \!\!\cancel{16·10}\!\! & \!\!64\!\! & \!\!208\!\! & \!\!192\!\! & \!\!\cancel{25·1}\!\! & \!\!100\!\! & \!\!250\!\! & \!\!\cancel{25·4}\!\! & \!\!\cancel{25·13}\!\! & \!\!300\!\! & ... \\
\end{array}
[/math]
[math] \begin{array} {c} ... & \!\!(5)(2,3)\!\! & \!\!(\color{red}5\color{black})(2,\color{red}5\color{black})\!\! & \!\!(5)(2,7)\!\! & \!\!(\color{red}5\color{black})(3,\color{red}5\color{black})\!\! & \!\!(5)(3,7)\!\! & \!\!(\color{red}5\color{black})(\color{red}5\color{black},7)\!\! & \!\!(7)(2,3)\!\! & \!\!(7)(2,5)\!\! & \!\!(\color{red}7\color{black})(2,\color{red}7\color{black})\!\! & \!\!(7)(3,5)\!\! & \!\!(\color{red}7\color{black})(3,\color{red}7\color{black})\!\! & \!\!(\color{red}7\color{black})(5,\color{red}7\color{black}) \\ ... & \!\!37\!\! & \!\!\cancel{37·4}\!\! & \!\!370\!\! & \!\!\cancel{37·4}\!\! & \!\!481\!\! & \!\!\cancel{37·12}\!\! & \!\!45\!\! & \!\!180\!\! & \!\!\cancel{45·10}\!\! & \!\!180\!\! & \!\!\cancel{45·13}\!\! & \!\!\cancel{45·12} \\ \end{array} [/math]
Let's take a pass to clean up, merging the concatenating indices and erasing the thrown away bits.
[math]
\begin{array} {c}
\!\!(2,3,5)\!\! & \!\!(2,3,7)\!\! & \!\!(2,5,7)\!\! & \!\!(3,2,5)\!\! & \!\!(3,2,7)\!\! & \!\!(3,5,7)\!\! & \!\!(5,2,3)\!\! & \!\!(5,2,7)\!\! & \!\!(5,3,7)\!\! & \!\!(7,2,3)\!\! & \!\!(7,2,5)\!\! & \!\!(7,3,5)\!\! \\
\!\!64\!\! & \!\!208\!\! & \!\!192\!\! & \!\!100\!\! & \!\!250\!\! & \!\!300\!\! & \!\!37\!\! & \!\!370\!\! & \!\!481\!\! & \!\!45\!\! & \!\!180\!\! & \!\!180\!\! \\
\end{array}
[/math]
Step four: we must sort each concatenation of compound indices until they're in order. But we can only swap two indices at a time (for a detailed illustration of this process, see Varianced Exterior Algebra#A deeper look at the math underlying the sign patterns of the EA dual). And as we go, we need to keep track of how many swaps were required. We don't need to know the exact count of swaps — only if the count is odd or even. And if odd, negate the product.
[math]
\begin{array} {c}
\!\!0\!\! & \!\!0\!\! & \!\!0\!\! & \!\!\colorbox{yellow}{1}\!\! & \!\!\colorbox{yellow}{1}\!\! & \!\!0\!\! & \!\!2\!\! & \!\!\colorbox{yellow}{1}\!\! & \!\!\colorbox{yellow}{1}\!\! & \!\!2\!\! & \!\!2\!\! & \!\!2\!\! \\
& & & \color{blue}\curvearrowleft\;\;\; & \color{blue}\curvearrowleft\;\;\; & & \color{blue}\curvearrowright\curvearrowright & \color{blue}\curvearrowleft\;\;\; & \color{blue}\curvearrowleft\;\;\; & \color{blue}\curvearrowright\curvearrowright & \color{blue}\curvearrowright\curvearrowright & \color{blue}\curvearrowright\curvearrowright \\
\!\!(2,3,5)\!\! & \!\!(2,3,7)\!\! & \!\!(2,5,7)\!\! & \!\!(\color{blue}2\color{black},3,5)\!\! & \!\!(\color{blue}2\color{black},3,7)\!\! & \!\!(3,5,7)\!\! & \!\!(2,3,\color{blue}5\color{black})\!\! & \!\!(\color{blue}2\color{black},5,7)\!\! & \!\!(\color{blue}3\color{black},5,7)\!\! & \!\!(2,3,\color{blue}7\color{black})\!\! & \!\!(2,5,\color{blue}7\color{black})\!\! & \!\!(3,5,\color{blue}7\color{black})\!\! \\
\!\!64\!\! & \!\!208\!\! & \!\!192\!\! & \!\!\colorbox{yellow}{-100}\!\! & \!\!\colorbox{yellow}{-250}\!\! & \!\!300\!\! & \!\!37\!\! & \!\!\colorbox{yellow}{-370}\!\! & \!\!\colorbox{yellow}{-481}\!\! & \!\!45\!\! & \!\!180\!\! & \!\!180\!\! \\
\end{array}
[/math]
Step five: Now that we've sorted these indices, we've got a bunch of matches. Four sets of three. Here's those color-coded:
[math]
\begin{array}{c}
\!\!\colorbox{CarnationPink}{(2,3,5)}\!\! & \!\!\colorbox{Dandelion}{(2,3,7)}\!\! & \!\!\colorbox{LimeGreen}{(2,5,7)}\!\! & \!\!\colorbox{CarnationPink}{(2,3,5)}\!\! & \!\!\colorbox{Dandelion}{(2,3,7)}\!\! & \!\!\colorbox{CornflowerBlue}{(3,5,7)}\!\! & \!\!\colorbox{CarnationPink}{(2,3,5)}\!\! & \!\!\colorbox{LimeGreen}{(2,5,7)}\!\! & \!\!\colorbox{CornflowerBlue}{(3,5,7)}\!\! & \!\!\colorbox{Dandelion}{(2,3,7)}\!\! & \!\!\colorbox{LimeGreen}{(2,5,7)}\!\! & \!\!\colorbox{CornflowerBlue}{(3,5,7)}\!\! \\
\!\!\colorbox{CarnationPink}{64}\!\! & \!\!\colorbox{Dandelion}{208}\!\! & \!\!\colorbox{LimeGreen}{192}\!\! &
\!\!\colorbox{CarnationPink}{-100}\!\! & \!\!\colorbox{Dandelion}{-250}\!\! & \!\!\colorbox{CornflowerBlue}{300}\!\! &
\!\!\colorbox{CarnationPink}{37}\!\! & \!\!\colorbox{LimeGreen}{-370}\!\! & \!\!\colorbox{CornflowerBlue}{-481}\!\! &
\!\!\colorbox{Dandelion}{45}\!\! & \!\!\colorbox{LimeGreen}{180}\!\! & \!\!\colorbox{CornflowerBlue}{180}\!\! \\
\end{array}
[/math]
So now we consolidate the matches, summing the products.
[math]
\begin{array} {c}
& \colorbox{CarnationPink}{(2,3,5)} & \colorbox{Dandelion}{(2,3,7)} & \colorbox{LimeGreen}{(2,5,7)} & \colorbox{CornflowerBlue}{(3,5,7)} \\
& \colorbox{CarnationPink}{64 + -100 + 37} & \colorbox{Dandelion}{208 + -250 + 45} & \colorbox{LimeGreen}{192 + -370 + 180} & \colorbox{CornflowerBlue}{300 + -481 + 180} \\
\end{array}
[/math]
Step six: Due to the natural way we ordered the original lists, our resulting indices are already in lexicographic order. So here's our final result!
[math]
\begin{array} {c}
& (2,3,5) & (2,3,7) & (2,5,7) & (3,5,7) \\
\langle\langle\langle & 1 & 3 & 2 & -1 & ]]] \\
\end{array}
[/math]
Ah, so we've found that
⟨16 25 37 45]∧⟨⟨1 4 10 4 13 12]] = ⟨⟨⟨1 3 2 -1]]]
Or in other words, 16&meantone (where the "&" is read "join") in the 7-limit gives us starling.
Example for monovectors
This generalized method still works on the typical case of wedging two monovectors, such as ⟨7 11 16]∧⟨5 8 12]. Check it out:
[math]
\begin{array} {c}
& (2) & (3) & (5) & & & & & & (2) & (3) & (5) \\
\langle & 7 & 11 & 16 & ] & & ∧ & & \langle & 5 & 8 & 12 & ] \\
\end{array}
[/math]
Form the two lists of combinations:
[math]
\begin{array} {c}
(2)(2) & (2)(3) & (2)(5) & (3)(2) & (3)(3) & (3)(5) & (5)(2) & (5)(3) & (5)(5) \\
7·5 & 7·8 & 7·12 & 11·5 & 11·8 & 11·12 & 16·5 & 16·8 & 16·12 \\
\end{array}
[/math]
Throw away dupe indexed ones, and take products of others:
[math]
\begin{array} {c}
(\color{red}2\color{black})(\color{red}2\color{black}) & (2)(3) & (2)(5) & (3)(2) & (\color{red}3\color{black})(\color{red}3\color{black}) & (3)(5) & (5)(2) & (5)(3) & (\color{red}5\color{black})(\color{red}5\color{black}) \\
\cancel{7·5} & 56 & 84 & 55 & \cancel{11·8} & 132 & 80 & 128 & \cancel{16·12} \\
\end{array}
[/math]
Clean up, and merge indices:
[math]
\begin{array} {c}
(2,3) & (2,5) & (3,2) & (3,5) & (5,2) & (5,3) \\
56 & 84 & 55 & 132 & 80 & 128 \\
\end{array}
[/math]
Swap to sort indices, and negate products if odd swap count:
[math]
\begin{array} {c}
0 & 0 & \colorbox{yellow}{1} & 0 & \colorbox{yellow}{1} & \colorbox{yellow}{1} \\
& & \color{blue}\curvearrowright\; & & \color{blue}\curvearrowright\; & \color{blue}\curvearrowright\; \\
(2,3) & (2,5) & (2,\color{blue}3\color{black}) & (3,5) & (2,\color{blue}5\color{black}) & (3,\color{blue}5\color{black}) \\
56 & 84 & \colorbox{yellow}{-55} & 132 & \colorbox{yellow}{-80} & \colorbox{yellow}{-128} \\
\end{array}
[/math]
Note matches:
[math]
\begin{array} {c}
\colorbox{CarnationPink}{(2,3)} & \colorbox{Dandelion}{(2,5)} & \colorbox{CarnationPink}{(2,3)} & \colorbox{LimeGreen}{(3,5)} & \colorbox{Dandelion}{(2,5)} & \colorbox{LimeGreen}{(3,5)} \\
\colorbox{CarnationPink}{56} & \colorbox{Dandelion}{84} & \colorbox{CarnationPink}{55} & \colorbox{LimeGreen}{132} & \colorbox{Dandelion}{80} & \colorbox{LimeGreen}{128} \\
\end{array}
[/math]
Consolidate, and sum:
[math]
\begin{array} {c}
\colorbox{CarnationPink}{(2,3)} & \colorbox{Dandelion}{(2,5)} & \colorbox{LimeGreen}{(3,5)} \\
\colorbox{CarnationPink}{56 + -55} & \colorbox{Dandelion}{84 + -80} & \colorbox{LimeGreen}{132 + -128} \\
\end{array}
[/math]
Done!
[math]
\begin{array} {c}
& (2,3) & (2,5) & (3,5) \\
\langle\langle & 1 & 4 & 4 & ]] \\
\end{array}
[/math]
So ⟨7 11 16]∧⟨5 8 12] = ⟨⟨1 4 4]], or 7&5 is meantone.
Relationship to Gene's algorithm for converting from varianced multivector to matrix
We can see some similarities between this process and Gene's method for converting varianced multivectors to matrices, in particular the concatenation of compound indices and the zeroing or negating of their corresponding values based on the parity of their swap counts to achieve lexicographic order: Varianced Exterior Algebra#Gene's algorithm
As a minors list
Another way to think about the wedge product when it is used on matrices (or equivalently, lists of monovectors) is as the list of the minors (short for “minor determinants”) of the matrix, where a minor is the determinant of a square submatrix.
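To illustrate, here is a small Python sketch (our own illustrative code, not the RTT library's) that computes this minors list directly: one determinant per combination of columns, in lexicographic order:

```python
from itertools import combinations

def det(m):
    # determinant by Laplace expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def minors_list(rows):
    # one minor per combination of r columns of an r × d matrix
    r, d = len(rows), len(rows[0])
    return [det([[row[j] for j in combo] for row in rows])
            for combo in combinations(range(d), r)]
```

For example, `minors_list([[7, 11, 16], [5, 8, 12]])` gives `[1, 4, 4]`, matching the walkthrough above, and the septimal meantone mapping `[[1, 0, -4, -13], [0, 1, 4, 10]]` gives `[1, 4, 10, 4, 13, 12]`.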
Relation to the cross product
For two monovectors with three entries each, or in other words ordinary 3D vectors, the wedge product is essentially the cross product: its entries are the same up to a reordering and the negation of one entry.
In terms of other EA products
The wedge product comes from Exterior Algebra, where it is also known as the progressive product or the exterior product.
The progressive product is the dual of the regressive product, which uses the symbol ∨; naturally, this is the ∧ symbol of the progressive product turned upside down.
The exterior product is not quite dual to the interior product, though. For more information, see: Talk:Interior product
Wolfram Language implementation
These products are implemented in the RTT library as progressiveProduct[], regressiveProduct[], and interiorProduct[], using Wolfram Language's built-in TensorWedge[].
In terms of the outer product
Be careful not to confuse the exterior and interior products with the outer and inner products. They have similar names, but are quite different.
In RTT, the outer and inner products are ordinary matrix products between a vector and a covector:
- The outer product gives a matrix, e.g. [1 2⟩⟨3 4] = ⟨[3 6⟩ [4 8⟩]
- The inner product gives a scalar, e.g. ⟨3 4][1 2⟩ = 11. The inner product is the same as the dot product we use for maps and intervals.
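Both products are one-liners; a Python sketch (function names ours) for concreteness:

```python
def outer(ket, bra):
    # |u⟩⟨v|: entry (i, j) of the resulting matrix is ket[i] * bra[j]
    return [[u * v for v in bra] for u in ket]

def inner(bra, ket):
    # ⟨v|u⟩: the ordinary dot product, a scalar
    return sum(v * u for v, u in zip(bra, ket))

print(outer([1, 2], [3, 4]))  # [[3, 4], [6, 8]]
print(inner([3, 4], [1, 2]))  # 11
```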
Another way to think of the wedge product is as the difference between two outer products. For example, consider wedging ⟨7 11 16] with ⟨12 19 28]. Call these u and v, respectively. So u∧v (that's the wedge product) is the same thing as u.v - v.u (where the dots are outer products). Those two outer products are:
[⟨84 133 196]
⟨132 209 308]
⟨192 304 448]⟩
and
[⟨84 132 192]
⟨133 209 304]
⟨196 308 448]⟩
And so it's easy to see that the difference is
[⟨0 1 4]
⟨-1 0 4]
⟨-4 -4 0]⟩
Note also that those two outer products are transposes of each other, which explains the symmetry across the diagonal (thanks for pointing that out, Kite!).
So it's really just a different way of slicing and dicing a bunch of determinants: the wedgie ⟨⟨1 4 4]] appears as the upper triangle of this antisymmetric difference matrix. With this difference of outer products approach, you do all the multiplications at once as the first step, then all of the differences at once as the second step. Whereas with the minors approach, you do a ton of separate differences of products individually.
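Here is that difference-of-outer-products recipe as a Python sketch (names ours); the wedgie can be read off the upper triangle of the resulting antisymmetric matrix, left to right and top to bottom:

```python
def wedge_via_outer(u, v):
    d = len(u)
    # u.v - v.u: the difference of the two outer products
    diff = [[u[i] * v[j] - v[i] * u[j] for j in range(d)] for i in range(d)]
    # read off the upper triangle
    return [diff[i][j] for i in range(d) for j in range(i + 1, d)]

print(wedge_via_outer([7, 11, 16], [12, 19, 28]))  # [1, 4, 4]
```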
Preimage generators
Using LA, it is possible to find preimage generators using the method explained here: Transversal generators#Finding the transversal generators
Using EA, a different method has been found for finding preimage generators, but it only works for rank-2 temperaments. The method is described with a mathematical lean and an accompanying proof here: Wedgies and multivals#How the period and generator falls out of a rank-2 wedgie. Some alternatively styled walkthroughs of this method are presented here.
Example 1: full walkthrough
Consider meantone. Here is its multimap, with its indices labelled:
[math]
\begin{array} {c}
& (2,3) & (2,5) & (3,5) \\
\langle\langle & 1 & 4 & 4 & ]] \\
\end{array}
[/math]
We only care about the entries whose indices have 2 in them, so that's the first two. We can throw away the remainder:
[math]
\begin{array} {c}
& (2,3) & (2,5) & \color{red}\cancel{(3,5)} \\
\langle\langle & \color{blue}1 & \color{blue}4 & \color{red}\cancel{4} & ]] \\
\end{array}
[/math]
We need to know the remaining numbers' greatest common divisor, because that will tell us the period. We don't need this information right away, but we'll use it in a moment. In this case, because one of the numbers is 1, the answer is obvious. We'll look at general ways of finding this later, here: Varianced Exterior Algebra#How to solve the equations.
[math]\text{gcd}(\color{blue}1\color{black},\color{blue}4\color{black}) = 1[/math]
Let's associate each of these numbers with a variable. How about [math]x[/math] and [math]y[/math].
[math]
\begin{array} {c}
& (2,3) & (2,5) \\
\langle & 1 & 4 & ... & ] \\
& \color[rgb]{0,0.666,0}x & \color[rgb]{0,0.666,0}y \\
\end{array}
[/math]
So now we make each of those numbers into a coefficient on its variable.
[math]
\begin{array} {c}
& (2,3) & (2,5) \\
\langle & \color{blue}1 & \color{blue}4 & ... & ] \\
& \color{blue}1\color{black}x & \color{blue}4\color{black}y \\
\end{array}
[/math]
And set up an equation where they sum to the period we found in the earlier step to be 1.
[math]1x \color[rgb]{0,0.666,0}+\color{black} 4y \color[rgb]{0,0.666,0}= 1\color{black}[/math]
Solve. This one's pretty easy to eyeball. Sometimes it won't be though, so we'll come back to a general method for this step in more detail later too (it's actually connected to the method for the GCD calculation). For now the simplest solution that falls out seems to be
[math]x=1, y=0[/math]
Now this is the clincher: what we do with these results. Let's bring them back with the rest of our info so far.
[math]
\begin{array} {c}
& (2,3) & & (2,5) \\
\langle & 1 & & 4 & ... & ] \\
& 1x & + & 4y & = & 1 \\
& \color[rgb]{0,0.666,0}x=1 & & \color[rgb]{0,0.666,0}y=0 \\
\end{array}
[/math]
From each compound index, take the other number besides the 2:
[math]
\begin{array} {c}
& (2,\color{blue}3\color{black}) & & (2,\color{blue}5\color{black}) \\
\langle & 1 & & 4 & ... & ] \\
& 1x & + & 4y & = & 1 \\
& x=1 & & y=0 \\
& \color{blue}3 & & \color{blue}5 \\
\end{array}
[/math]
And now raise those numbers to the [math]x[/math]th and [math]y[/math]th powers, respectively.
[math]
\begin{array} {c}
& (2,3) & & (2,5) \\
\langle & 1 & & 4 & ... & ] \\
& 1x & + & 4y & = & 1 \\
& \color{blue}x\color{black}=1 & & \color{blue}y\color{black}=0 \\
& 3^{\color{blue}x} & & 5^{\color{blue}y} \\
\end{array}
[/math]
The product of those powers is our target preimage generator.
[math]
\begin{array} {c}
& (2,3) & & (2,5) \\
\langle & 1 & & 4 & ... & ] \\
& 1x & + & 4y & = & 1 \\
& x=\color{blue}1 & & y=\color{blue}0 \\
& 3^{\color{blue}1} & \color[rgb]{0,0.666,0}× & 5^{\color{blue}0} & \color[rgb]{0,0.666,0}= & \color[rgb]{0,0.666,0}\frac31 \\
\end{array}
[/math]
So 3/1 is a valid JI tuning of the generator for meantone, or we could say it's a JI interval the generator approximates.
Because of how this method works, it's clear that it would never be capable of returning any preimages with factors of 2 in them, so there's no way it could have given us 3/2 or 4/3 straight away. We are likely to need to octave reduce, take the reciprocal, or adjust the result by commas in the temperament. But we've got somewhere to start.
Example 2: simplified
Another example: magic. Slightly abridged this time through, hopefully it's the right amount of info to follow along:
[math]
\langle\langle 5 \; 1 \; -10 ]] \\
5\;1\\
\text{gcd}(5,1) = 1 \\
5x + 1y = 1 \\
x=0, y=1 \\
3^{0}5^{1} = \frac51 \\
[/math]
Again, 5/1 is valid, though we usually reduce it to 5/4.
Example 3: higher-dimensionality
This also works for higher-dimensionality temperaments. Each higher dimension adds another variable, which can make it harder to solve. Let's look at septimal meantone:
[math]
\langle\langle 1 \; 4 \; 10 \; 4 \; 13 \; 12 ]] \\
1\;4\;10\\
\text{gcd}(1,4,10) = 1 \\
1x + 4y + 10z = 1 \\
x=1, y=0, z=0 \\
3^{1}5^{0}7^{0} = \frac31 \\
[/math]
In that case it was still really easy to solve the equation by eye-balling it. We do still plan to look at a method that works in general here: Varianced Exterior Algebra#How to solve the equations
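All three examples can be run through one small Python sketch of this rank-2 method (our own illustrative code; it assumes the wedgie's entries are ordered so that the first d − 1 of them are the ones whose compound indices contain prime 2, as in all the examples above):

```python
from fractions import Fraction

def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def preimage_generator(wedgie, primes):
    # rank-2 only: take the entries whose compound index contains prime 2,
    # solve for Bézout coefficients, and use them as exponents on the
    # other prime from each index
    coeffs = wedgie[:len(primes) - 1]
    g, bezout = coeffs[0], [1]
    for c in coeffs[1:]:
        g, x, y = ext_gcd(g, c)
        bezout = [b * x for b in bezout] + [y]
    gen = Fraction(1)
    for p, e in zip(primes[1:], bezout):
        gen *= Fraction(p) ** e
    return gen

print(preimage_generator([1, 4, 4], (2, 3, 5)))                 # 3 (meantone)
print(preimage_generator([5, 1, -10], (2, 3, 5)))               # 5 (magic)
print(preimage_generator([1, 4, 10, 4, 13, 12], (2, 3, 5, 7)))  # 3 (septimal meantone)
```

Note that, as discussed above, the result is only one valid JI interpretation; you may still want to octave-reduce, take the reciprocal, or adjust by commas in the temperament.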
Why it doesn't work for higher-rank temperaments
So we've seen that the method works for higher-dimensionality temperaments. However, it fails as soon as you try to go higher in rank than 2. The problem arises at the step where you select the numbers from the indices that aren't 2. These numbers are to be used as bases for powers of the variables solved for in the previous step. But if there is more than one number in the index besides 2, what could that possibly mean? For example:
[math]
\langle\langle\langle 1 \; 2 \; -2 \; -5 ]]] \\
1\;2\;-2\\
\text{gcd}(1,2,-2) = 1 \\
1x + 2y + -2z = 1 \\
x=1, y=0, z=0 \\
(3,5)^{1}(3,7)^{0}(5,7)^{0} = \; ? \\
[/math]
We have no way of raising [math](3,5)[/math] to a power, so the method breaks down.
How to solve the equations
Sometimes the answers to the GCD or the [math]x, y, z ...[/math] equations won't fall out for you. In that case you have several options.
In Wolfram Language, the ExtendedGCD[] function will give you everything you need. Consider the example of porcupine, whose first two entries are 3 and 5:
In: ExtendedGCD[3, 5]
Out: {1, {2, -1}}
The first result, 1, is the GCD. The other two results are a solution for [math]x[/math] and [math]y[/math]:
[math] 3x + 5y = 1 \\ 3(2) + 5(-1) = 1 \\ 6 + -5 = 1 \\ [/math]
So that tells us our generator is [math]3^{2}5^{-1} = \frac95[/math], which is reasonable, though we usually take the reciprocal and octave-reduce to [math]\frac{10}{9}[/math].
It's also possible to find just the answers to the equations in Wolfram Language using FindInstance[], if you prefer to work it out this way:
In: 3^x*5^y /. First[FindInstance[3x + 5y == 1, {x, y}, Integers]]
Out: 9/5
Working through the Extended Euclidean Algorithm
To get technical, the solutions to the equations with [math]x[/math] and [math]y[/math] are Bézout coefficients. They are essentially byproducts of a particular method of calculating the GCD, the Extended Euclidean Algorithm, which is why they come bundled in one neat package as part of Wolfram Language's ExtendedGCD[] function. You can work through this method by hand if you'd like to understand it more deeply. At least the case of finding the GCD of just two numbers is fairly easy to work through. Let's try it on the example of porcupine. So we're looking for the GCD of [math]\color{Aquamarine}3\color{black}[/math] and [math]\color{Magenta}5\color{black}[/math].
Begin by finding the [math]\color{Orange}\text{remainder}[/math] after dividing the [math]\color{Magenta}\text{bigger}[/math] of the two numbers by the [math]\color{Aquamarine}\text{smaller}[/math]:
[math] \color{Magenta}5\color{black} \bmod \color{Aquamarine}3\color{black} = \;\color{Orange}?\color{black} [/math]
Or, equivalently, [math]\color{Orange}\text{how much you still have to add}[/math] in order to get the [math]\color{Magenta}\text{bigger}[/math] number after you find how many times the [math]\color{Aquamarine}\text{smaller}[/math] number goes into the bigger one without going over:
[math] \color{Magenta}5\color{black} = \;?×\color{Aquamarine}3\color{black}\; + \;\color{Orange}?\color{black} [/math]
So this value we're looking at is the [math]\color{Orange}\text{orange question mark}\color{black}[/math]. The other [math]\text{black question mark}[/math] is less important now, but we will need it later, and at that time it will be revealed why we needed to rewrite the equation in this more complicated way. So, our solution is:
[math] \color{Magenta}5\color{black} = 1×\color{Aquamarine}3\color{black} + \color{Orange}2\color{black} [/math]
Now we need to recurse the process. Use the colors to follow which numbers went where.
[math] \color{Aquamarine}3\color{black} = \;?×\color{Orange}2\color{black}\; + \;\color{YellowGreen}?\color{black} [/math]
So this time, we still want to find a remainder, but it's the remainder when the [math]\color{Orange}\text{previous step's remainder}[/math] is divided into the [math]\color{Aquamarine}\text{previous step's divisor}[/math]. And so this time we're looking for the [math]\color{YellowGreen}\text{green question mark}\color{black}[/math]. And that gets us:
[math] \color{Aquamarine}3\color{black} = 1×\color{Orange}2\color{black}\; + \color{YellowGreen}1\color{black} [/math]
Recurse again. Now we're looking for the [math]\color{Orchid}\text{purple question mark}\color{black}[/math]:
[math] \color{Orange}2\color{black} = \;?×\color{YellowGreen}1\color{black}\; + \;\color{Orchid}?\color{black} [/math]
And that gives us:
[math] \color{Orange}2\color{black} = 2×\color{YellowGreen}1\color{black}\; + \color{Orchid}0\color{black} [/math]
Upon reaching [math]\color{Orchid}0\color{black}[/math], the recursive process is complete. We've found the GCD: it's the most recent non-zero remainder, which in this case is [math]\color{YellowGreen}1\color{black}[/math].
But what we're really interested in now are those Bézout coefficients! To find these, we just need to work our way backwards through this series of equations we've created. The last equation where we found the [math]\color{Orchid}0\color{black}[/math] is irrelevant now. So beginning with the penultimate one and working backwards, then, our solved equations are:
[math] \color{Aquamarine}3\color{black} = 1×\color{Orange}2\color{black}\; + \color{YellowGreen}1\color{black} \\ \color{Magenta}5\color{black} = 1×\color{Aquamarine}3\color{black}\; + \color{Orange}2\color{black} [/math]
Each of these equations is currently solved for the previous step's divisor (or in the first case, the [math]\color{Magenta}\text{bigger}\color{black}[/math] of our two original numbers). We're going to rearrange them so that they're solved instead for the remainders.
[math] \color{Aquamarine}3\color{black} - 1×\color{Orange}2\color{black} = \color{YellowGreen}1\color{black} \\ \color{Magenta}5\color{black} - 1×\color{Aquamarine}3\color{black} = \color{Orange}2\color{black} [/math]
Now, we can begin substituting. In the first equation, substitute in for the [math]\color{Orange}\text{divisor}\color{black}[/math] the value it's equal to in the next equation as the [math]\color{Orange}\text{remainder}\color{black}[/math]:
[math] \color{Aquamarine}3\color{black} - 1×(\color{Magenta}5\color{black} - 1×\color{Aquamarine}3\color{black}) \;= \color{YellowGreen}1\color{black} [/math]
Distribute:
[math] \color{Aquamarine}3\color{black} - 1×\color{Magenta}5\color{black} + 1×\color{Aquamarine}3\color{black} = \color{YellowGreen}1\color{black} [/math]
For clarity, we're going to add a black [math]1×[/math] at the beginning:
[math] 1×\color{Aquamarine}3\color{black} - 1×\color{Magenta}5\color{black} + 1×\color{Aquamarine}3\color{black} = \color{YellowGreen}1\color{black} [/math]
So now it's more obvious how to simplify:
[math] 2×\color{Aquamarine}3\color{black} - 1×\color{Magenta}5\color{black} = \color{YellowGreen}1\color{black} [/math]
And there are our Bézout coefficients, in black! We've gotten [math]2[/math] and [math]-1[/math] as our answers, just as Wolfram Language found for us above.
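The back-substitution can also be folded into the forward pass, giving the standard iterative form of the extended Euclidean algorithm. A Python sketch (names ours):

```python
def extended_gcd(a, b):
    # carries the Bézout coefficients forward at each division step,
    # instead of back-substituting through the equations afterwards
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t  # gcd, x, y such that a*x + b*y == gcd

print(extended_gcd(3, 5))  # (1, 2, -1), matching the worked example
```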
Advantages of EA
Many of the advantages of EA listed here are things one can do with simple lists of minor determinants, i.e. without actually leveraging EA-specific concepts. So we think it's generous but reasonable to collect them here nonetheless.
Pure representation of temperament information
Identifying temperaments by their canonical multimap can be advantageous, because that information is "independent of the choice of generators"^{[14]}.
Varianced multivectors contain exclusively what we could call "temperament information", without any "generator-size information" as is also present in the case of mapping matrices, or "specific commas information" as is also present in the case of comma bases. In other words, when we consider the canonical mapping for a temperament, while it does effectively uniquely identify a temperament, one does nonetheless have to "look through" the generator-size information that it also contains to see that pure temperament information. And so we could say that varianced multivectors are great from an information architecture standpoint, because they perfectly capture the information about the temperament without any extraneous information, such as whether, say, this is meantone with generators of ~2/1 and ~3/1 or a meantone with generators of ~2/1 and ~3/2.
Temperament sums and differences
It is much easier to find temperament sums and differences (WIP) by entry-wise arithmetic of varianced multivectors. Entry-wise arithmetic of temperament matrices gives unpredictable results.
Computing TE complexity
Varianced multivectors can be used as an effective way to compute TE complexity.
The meaning of varianced multivector entries
Progressive product periodicity parallelotope projections
(WIP)
Tempered lattice fractions generated by prime combinations
So we now understand how to get to canonical multimaps. And we understand that they uniquely identify the temperament. But what about the individual terms — do they mean anything in and of themselves? It turns out: yes!
The first thing to understand is that each term of the canonical multimap pertains to a different combination of primes. We already know this: it’s how we calculated it from the mapping-row-basis. For example, in the canonical multimap for meantone, ⟨⟨1 4 4]], the 1 is for [math](2,3)[/math], the first 4 is for [math](2,5)[/math], and the second 4 is for [math](3,5)[/math].
Now, let’s convert every term of the canonical multimap by taking its absolute value and its reciprocal. In this case, each of our terms is already positive, so the absolute value has no effect. But taking the reciprocal converts us to [math]\frac 11[/math], [math]\frac 14[/math], [math]\frac 14[/math]. These values tell us what fraction of the tempered lattice we can generate using the corresponding combination of primes.
What does that mean? Who cares? The motivation here is that it’s a good thing to be able to generate the entire lattice. We may be looking for JI intervals we could use as generators for our temperament, and if so, we need to know what primes to build them out of so that we can make full use of the temperament. So this tells us that if we try to build generators out of primes 2 and 3, we will succeed in generating [math]\frac 11[/math] or in other words all of the tempered lattice. Whereas if we try to build the generators out of primes 2 and 5, or 3 and 5, we will fail; we will only be able to generate [math]\frac 14[/math] of the lattice. In other words, prime 5 is the bad seed here; it messes things up.
It’s easy to see why this is the case if you know how to visualize it on the tempered lattice. Let’s start with the happy case: primes 2 and 3. Prime 2 lets us move one step up (or down). Prime 3 lets us move one step right (or left). Clearly, with these two primes, we’d be able to reach any node in the lattice. We could do it with generators 2/1 and 3/1, in the most straightforward case. But we can also do it with 2/1 and 3/2: that just means one generator moves us down and to the right (or the opposite), and the other moves us straight up by one (or the opposite) (see Figure 2). 2/1 and 4/3 works too: one moves us to the left and up two (or… you get the idea) and the other moves us straight up by one. Heck, even 3/2 and 4/3 work; try it yourself.
But now try it with only 5 and one other of primes 2 or 3. Prime 5 takes you over 4 in both directions. But if you have only prime 2 otherwise, then you can only move up or down from there, so you’ll only cover every fourth vertical line through the tempered lattice. Or if you only had prime 3 otherwise, then you could only move left and right from there, you’d only cover every fourth horizontal line (see Figure 3).
One day you might come across a canonical multimap which has a term equal to zero. If you tried to interpret this term using the information here so far, you'd think it must generate [math]\frac 10[/math]th of the tempered lattice. That's not easy to visualize or reason about. Does that mean it generates essentially infinitely many lattices? No, not really. More like the opposite. The question itself is somewhat undefined here. If anything, that combination of primes generates approximately none of the lattice. Because in this situation, the combination of primes whose canonical multimap term is zero generates so little of the tempered lattice that it's completely missing one entire dimension of it; the amount it generates is infinitesimal. For example, the 11-limit temperament 7&12&31 has canonical multimap ⟨⟨⟨0 1 1 4 4 -8 4 4 -12 -16]]] and mapping-row-basis [⟨1 0 -4 0 -12] ⟨0 1 4 0 8] ⟨0 0 0 1 1]⟩; we can see from this how primes [math](2,3,5)[/math] can only generate a rank-2 cross-section of the full rank-3 lattice, because while 2 and 3 do the trick of generating that rank-2 part (exactly as they do in 5-limit meantone), prime 5 doesn't bring anything to the table here, so that's all we get.
Disadvantages of EA
EA is a higher-level mathematical topic
EA is harder to learn and use than LA, and fewer people know how to use it.
The collinearity exception
For details about this issue, see the above section Varianced exterior algebra#Collinearity exception.
There is an argument that this aspect of EA means the wedge product can be used as a quick check for collinearity. But surely its nonconformity with the results of the simpler LA on the matter of meet and join is a much bigger detractor, especially when it's not difficult to check for collinearity in LA either (concatenate the matrices, put the result in HNF, and see if there are any zero-rows).
The varianced multivector format lacks direct information about dimensionality
When we consider a matrix-based representation of a temperament, such as a mapping, we have near-immediate access to three key pieces of information about the temperament:
- the rank ([math]r[/math])
- the dimensionality ([math]d[/math])
- the nullity ([math]n[/math])
As an example, consider meantone temperament's mapping: [⟨1 0 -4] ⟨0 1 4]⟩. We can get the rank from the row count: 2. We can get the dimensionality from the column count: 3. And we can get the nullity from the rank-nullity theorem, which essentially states that [math]r + n = d[/math]. So nullity should be 1. We can equivalently get this information from meantone's comma basis ⟨[4 -4 1⟩]; the nullity is the column count, 1, the dimensionality is the row count, 3, and rank is the difference, 2.
On the other hand, the varianced multivector form of meantone ⟨⟨1 4 4]] only provides one of these three pieces of information: the rank, which is the count of brackets it's enclosed within. Finding the dimensionality is possible, but involves cross-referencing the count of entries in the minors list with the bracket count in a Pascal's triangle table such as the one demonstrating sign patterns for the EA dual above. Either that, or memorizing sections of Pascal's triangle. And without the dimensionality, the nullity cannot be ascertained from the rank.
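That cross-referencing can be mechanized: given the entry count and the grade, search upward for the dimensionality whose binomial coefficient matches. A Python sketch (names ours):

```python
from math import comb

def dimensionality(entry_count, grade):
    # find d such that C(d, grade) == entry_count
    d = grade
    while comb(d, grade) < entry_count:
        d += 1
    if comb(d, grade) != entry_count:
        raise ValueError("no dimensionality gives this entry count")
    return d

print(dimensionality(3, 2))   # 3, for meantone ⟨⟨1 4 4]]
print(dimensionality(35, 3))  # 7, for a 17-limit rank-3 multimap
```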
For nilovectors, the varianced multivector format also lacks information about variance
Consider the nilovector 1. Without any angle brackets, it is unable to convey which side of duality it is on. So here we have no information about dimensionality or variance.
Compare this situation with the equivalent LA structure, a matrix consisting only of a row or column of zeros. A rank-0 mapping or "zero mapping" looks like [⟨0 0 0...]⟩, a single row of zeros, structured in the same way as any other mapping, like meantone [⟨1 0 -4] ⟨0 1 4]⟩. While a zero comma basis looks the other way around, ⟨[0 0 0...⟩], a single column of zeros, structured in the same way as any other comma basis, like meantone ⟨[4 -4 1⟩].
Inefficiency of representing temperaments
While varianced multivectors are indeed compressed forms of tensors (as discussed earlier here), they are not generally smaller in size than a corresponding matrix representation of the same temperament information. In fact, it is only in the case of dimensionality 4 and grade 2 that matrices cannot represent a temperament more succinctly than a varianced multivector, because either the mapping or the comma basis would require 8 total entries (2×4 or 4×2) while the varianced multivector can accomplish this in 6 entries. Everywhere else, matrices are either tied with multivectors or have fewer entries. To be absolutely clear, this comparison allows for using comma bases to specify temperaments when they can do it in fewer entries, despite the fact that the matrix-based canonical form for temperaments is the mapping.
matrices (minimum entries, by dimensionality d and grade g):

| g \ d | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|
| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| 2 | | 3 | **8** | 10 | 12 | 14 | 16 |
| 3 | | | 4 | 10 | 18 | 21 | 24 |
| 4 | | | | 5 | 12 | 21 | 32 |
| 5 | | | | | 6 | 14 | 24 |
| 6 | | | | | | 7 | 16 |
| 7 | | | | | | | 8 |

varianced multivectors (entries, by dimensionality d and grade g):

| g \ d | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|
| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| 2 | | 3 | **6** | 10 | 15 | 21 | 28 |
| 3 | | | 4 | 10 | 20 | 35 | 56 |
| 4 | | | | 5 | 15 | 35 | 70 |
| 5 | | | | | 6 | 21 | 56 |
| 6 | | | | | | 7 | 28 |
| 7 | | | | | | | 8 |

(The bolded cells mark the one case, d = 4 and g = 2, where the multivector needs fewer entries.)
For an example of the only case where EA is more efficient, septimal meantone can be represented with the multimap ⟨⟨1 4 10 4 13 12]] or multicomma [[12 -13 4 10 -4 1⟩⟩, each of which has 6 entries, but its mapping [⟨1 0 -4 -13] ⟨0 1 4 10]⟩ and comma basis ⟨[4 -4 1 0⟩ [13 -10 0 1⟩] require 8 entries.
For an example where they are tied, consider (5-limit) meantone. Whenever rank or nullity is 1, you are guaranteed to have a varianced multivector which is the same size as a matrix; in this case it's the multicomma, [4 -4 1⟩, which is essentially the same as the comma basis ⟨[4 -4 1⟩]. Of course meantone's mapping [⟨1 0 -4] ⟨0 1 4]⟩ has more entries, but what is of interest here is the minimum entries required for a structure in the given algebra to completely identify the object.
And for an example where EA is less efficient, consider the 17-limit, rank-3 temperament represented by the mapping [⟨1 0 0 -5 -13 21 12] ⟨0 1 0 2 6 -8 -5] ⟨0 0 1 2 3 -2 0]⟩. That's a total of 21 entries, which is a lot. But its corresponding multimap is ⟨⟨⟨1 2 3 -2 0 -2 -6 8 5 -6 12 10 12 15 -10 -5 -13 21 12 -11 32 24 37 36 -24 -4 -2 1 -22 -7 -9 -30 -17 -16 -41]]], which has 35 entries, and due to being presented as a single unbroken string of numbers, is arguably much harder to read and understand.
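The entry counts being compared throughout this section come from two formulas: a matrix representation needs d × min(g, d − g) entries (whichever of the mapping and comma basis is smaller), while a varianced multivector needs the binomial coefficient C(d, g). A Python sketch (names ours):

```python
from math import comb

def matrix_entry_count(d, g):
    # the smaller of the mapping (g × d) and the comma basis (d × (d - g))
    return d * min(g, d - g)

def multivector_entry_count(d, g):
    return comb(d, g)

print(matrix_entry_count(4, 2), multivector_entry_count(4, 2))  # 8 6: EA wins
print(matrix_entry_count(7, 3), multivector_entry_count(7, 3))  # 21 35: matrices win
```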
To be fair, there is an argument that 7-limit, rank-2 temperaments as a category are more popular than all 13-limit and higher temperaments with grade ranging from 2 to d-2 combined; in other words, that the single case in the above table where EA wins (d = 4, g = 2) outweighs all of the cases where matrices win, even imagining the table extended off to infinity.
Possibility of nondecomposable varianced multivectors
Of course one must at least take care to make sure the count of minors in the list is possible given the grade (rank or nullity, i.e. count of brackets), requiring a similar lookup in a Pascal's triangle as was discussed a couple subsections ago. The only way to mess up a matrix in this way would be to fail to make it rectangular, i.e. fail to make sure that all rows are the same length as each other and all columns are the same length as each other.
But it's actually much more difficult than meeting that straightforward constraint. Consider this varianced multivector, for example, which was randomly generated: ⟨⟨2 -4 8 -9 7 2]]. As a varianced multivector with grade 2, six is a valid count of minors, because that would pin it at dimensionality 4, so we can see that much is fine; that's not the problem. The problem is that if you try to find a mapping for which this is its list of minors, you will find yourself unable to. This is therefore a "nondecomposable" varianced multivector, which cannot represent a temperament.
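For grade 2 and dimensionality 4 there is exactly one such constraint, the Plücker relation, and a multivector is decomposable precisely when it satisfies it. A Python sketch (names ours) showing that the random example above fails the relation while septimal meantone's multimap passes:

```python
def decomposable_grade2_dim4(w):
    # the single Plücker relation for grade 2, dimensionality 4:
    # m23*m57 - m25*m37 + m27*m35 == 0 (subscripts name the primes in each index)
    m23, m25, m27, m35, m37, m57 = w
    return m23 * m57 - m25 * m37 + m27 * m35 == 0

print(decomposable_grade2_dim4([2, -4, 8, -9, 7, 2]))   # False: no mapping exists
print(decomposable_grade2_dim4([1, 4, 10, 4, 13, 12]))  # True: septimal meantone
```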
On the other hand, it is not possible to create RTT matrices that don't represent temperaments. They may be completely ridiculous musically, but mathematically they will always be sound.
The reason it's possible to create nondecomposable varianced multivectors this way is that they are a compressed form of temperament information, while matrices are not. Each entry in an RTT matrix is independent. But most varianced multivectors' entries represent the results of interactions between entries in RTT mappings. And it is certainly possible to produce a string of supposed varianced multivector entries which could never all together at once have been the results of a real matrix.
An interesting consequence of this problem is that EA cannot deliver on a promise of easier canonicalization than LA that it might otherwise have had. Canonicalization relies on defactoring, and defactoring in LA is hard; defactoring algorithms are fairly tricky to get your head around. Defactoring in EA is super easy, though: it's just GCD removal and positive leading entry normalization. However, because of the fact that a given varianced multivector cannot be assumed to be valid, its validity must be checked by converting to matrix form and back, thus rendering any potential benefits of EA here moot.
Nondecomposability and EA rank
This section just provides a little extra background information on this topic.
Exterior algebra uses the term "rank" too, but in a different sense than how it is used in linear algebra. In EA, rank refers to the decomposability of a varianced multivector. If a varianced multivector can be decomposed into a wedge product of vectors, then it is rank-1. If a varianced multivector cannot be thus decomposed, however it can be expressed as the entry-wise sum of two rank-1 varianced multivectors, then it is rank-2. And so forth.
For example, porcupine plus meantone is tetracot, or ⟨⟨3 5 1]] + ⟨⟨1 4 4]] = ⟨⟨4 9 5]]. We know that porcupine and meantone are each rank-1, because they can be decomposed into wedge products of vectors (⟨1 0 -4]∧⟨0 1 4] and ⟨1 2 3]∧⟨0 3 5], respectively). And in this case, tetracot can also be decomposed into a wedge product of vectors, ⟨1 1 1]∧⟨0 4 9]. So it is possible for two rank-1's to sum to a rank-1. This is because porcupine and meantone are collinear temperaments, or in other words, we can find vector decompositions for these two temperaments that include a common vector. In this case, that shared vector would be the map for 7-ET, ⟨7 11 16].
(WIP: example of nondecomposable)
Lack of intuitiveness
One can imagine handcrafting RTT mappings directly. The meantone mapping speaks to us directly: give me a world where one step of the first generator reaches prime 2, one step of the second generator reaches prime 3, and prime 5 is reached by going down four of the first generator and up four of the second. But the entries of varianced multivectors do not provide this level of direct insight into the functioning of the temperament they represent. The "advantage" discussed in the previous section, "tempered lattice fractions generated by prime combinations", is not nearly as valuable as the information that mapping entries convey.
Lack of use for tuning
With mappings, reasonable tuning calculations for temperaments are a function away, leveraging the power of the Moore-Penrose pseudoinverse function. No such trick exists for varianced multivectors and tuning.
Similarly, preimage generators (JI interpretations of the generators) can be found from a mapping with the method (described here: Transversal generators#Finding the transversal generators), whereas the method for finding these from multimaps only works for rank-2 (described here: Wedgies and multivals#How the period and generator falls out of a rank-2 wedgie).
Summary diagrams and tables
| terminology category | building block → | temperament ID | EA temperament ID | EA temperament ID dual | temperament ID dual | ← building block |
|---|---|---|---|---|---|---|
| RTT application | map, mapping-row (often an ET) | mapping, mapping-row-basis | multimap | multicomma | comma basis | interval, comma |
| RTT structure | map | list of maps | list of minors | list of minors | list of prime count lists | prime count list |
| linear algebra structure | row vector, matrix row, covector | matrix, list of covectors | minors row vector, multicovector | minors (column) vector, multivector | matrix, list of vectors | (column) vector, matrix column, vector |
| extended bra-ket notation representation | bra | ket of bras | nested bras | nested kets | bra of kets | ket |
| RTT jargon | val | list of vals | wedgie, multival | multimonzo | list of monzos | monzo |
- ↑ You may also see it referred to as Geometric Algebra (GA), which is a superset of EA, or as Grassmann algebra, which is a synonym.
- ↑ Unlike tri-, bi-, and mono-, the nilo- prefix is not commonplace, but is a coinage Dave Keenan and Douglas Blumeyer decided upon for discussions of EA in RTT. The suggested pronunciation of "nilo" is /'nɪ lo/, i.e. to rhyme with "pillow", because other pronunciations might suggest an annihilating covector, i.e. a covector that maps vectors to the null vector. "Nil-" was also considered, but it was felt that, in the case of a nilcovector, it would too strongly suggest a (mono) covector with all of its entries equal to 0. Also, nilo- favorably rhymes with mono-.
- ↑ This notation was originally proposed by Dave Keenan here: https://yahootuninggroupsultimatebackup.github.io/tuning-math/topicId_7525#7749
- ↑ Dave Keenan and Douglas Blumeyer also considered the term "vectoral". This noun (emphasis on the first syllable) was coined by them with the etymology of "vector-like thing", by analogy with the nouns "functional" (mathematics) and "orbital" (physics). This could have worked both in natural and computer language. But, sensitive about introducing new made-up terminology, they decided not to recommend it. Note that in RTT with linear algebra, we do not need a generic word covering both covector and vector, because we do not represent temperaments directly with those objects, so there "vector" is perfectly fine as short for "contravector".
- ↑ This notion of variance is not to be confused with the unrelated notion of variance in statistics.
- ↑ There's an argument that true "nilocovectors" or "nilocontravectors" could not exist, because varianced multivectors with grade 0 are neither covariant nor contravariant (their entries vary neither with nor against the units), and that we should therefore only use the generic "nilovector". But we think it is still a good idea to permit both "nilocovector" and "nilocontravector", because they let you be specific about on which side of duality you find this object, i.e. they help you understand its dual in the given temperament.
- ↑ Interestingly, regular temperament theoreticians are more similar to physicists than mathematicians in a certain respect: we have units of measurement associated with our vectors, such as octaves or cents. The multilinear algebra from which RTT draws is closely related to "tensor analysis", but not to "tensor algebra", the former being the way that physicists use tensors, while the latter is how general mathematicians use them.
- ↑ or as you might say in MLA, its "indices"
- ↑ Most varianced multivectors' dimensionalities are inferable from their grades and entry lists; not so for nilovectors, however. So, for parity with matrices, the dimensionality is required as a fourth entry in their case.
- ↑ If it helps you, you could think of this sign-changing step as paired with the GCD removal step, if you think of it like dividing out a GCD of -1.
- ↑ Or do an anti-transpose sandwich, if you want to be consistent with the process for the dual in linear algebra, as discussed at Douglas_Blumeyer's_RTT_How-To#Null-space; either way will work here.
- ↑ The bra-ket notation comes from physics, specifically quantum mechanics.
- ↑ You may be wondering — what about two temperaments which on PTS are parallel, e.g. compton and blackwood? Their comma bases are each nullity 1, and they meet to give a nullity 2 comma basis, which corresponds to a rank-1 mapping, which means it should appear as an ET point on the PTS diagram. But how could that be? Well, here's their meet: ⟨[1 0 0⟩ [0 1 0⟩], and so that corresponding mapping is [⟨0 0 1]⟩. So it's some degenerate ET. I suppose we could say it's the point at infinity away from the center of the diagram or something.
- ↑ Quote from Graham Breed.