Ineffective Theory

Mutual Subgroups

Two groups can be proper subgroups of each other. Of course, they can’t be finite groups, but that’s alright.

A good place to start is with the free group on $n$ generators, which I’ll call $F^n$ (nonstandard notation, but fine when $n$ is finite). The free group $F^1$ is isomorphic to the integers under addition, so it isn’t very interesting. Let’s look at $F^2 = \langle x,y\rangle$ and $F^4 = \langle a,b,c,d\rangle$. It is obvious that $F^2 < F^4$. Of course there are multiple such inclusions, but the 12 most natural ones are $(x,y)\mapsto(a,b)$ and the like.

Now let’s construct an injection from $F^4$ to $F^2$, demonstrating that $F^4 < F^2$. A tempting guess is \begin{equation*} \phi : \left\{\begin{matrix} a \mapsto xx\\
b \mapsto xy\\
c \mapsto yy\\
d \mapsto yx\\
\end{matrix}\right.\text. \end{equation*} Alas! This is indeed a group homomorphism, but it is not injective: you can verify that $\phi(b c^{-1} d a^{-1}) = 1$. A more robust encoding scheme (again, far from unique) is given by \begin{equation*} \phi : \left\{\begin{matrix} a \mapsto x^0yx^{-0}\\
b \mapsto x^1yx^{-1}\\
c \mapsto x^2yx^{-2}\\
d \mapsto x^3yx^{-3}\\
\end{matrix}\right.\text. \end{equation*} (To see that this is injective, just think about how you would write a program to “decode” the string in $F^2$.)
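To make the "decoding" idea concrete, here is a minimal Python sketch (my own illustration, not part of the original argument): represent a free-group word as a list of (generator, ±1) pairs, freely reduce with a stack, and decode the second encoding by tracking the running $x$-exponent.

```python
# Hypothetical encoding of free-group words: lists of (generator, exponent)
# pairs with exponent +1 or -1, e.g. [('x', 1), ('y', -1)] for x y^{-1}.

def reduce_word(word):
    """Freely reduce a word by cancelling adjacent inverse pairs (stack-based)."""
    out = []
    for g, e in word:
        if out and out[-1] == (g, -e):
            out.pop()              # g^e cancels against g^{-e}
        else:
            out.append((g, e))
    return out

def invert(word):
    """Inverse of a word: reverse it and flip every exponent."""
    return [(g, -e) for g, e in reversed(word)]

# The tempting (non-injective) encoding:
phi = {'a': [('x', 1), ('x', 1)], 'b': [('x', 1), ('y', 1)],
       'c': [('y', 1), ('y', 1)], 'd': [('y', 1), ('x', 1)]}

# phi(b c^{-1} d a^{-1}) freely reduces to the empty word:
w = phi['b'] + invert(phi['c']) + phi['d'] + invert(phi['a'])
assert reduce_word(w) == []

# The robust encoding: the n-th letter maps to x^n y x^{-n}.
def psi(letter):
    n = 'abcd'.index(letter)
    return [('x', 1)] * n + [('y', 1)] + [('x', -1)] * n

def decode(word):
    """Decode a reduced psi-image: the running x-exponent at each y names the letter."""
    depth, letters = 0, []
    for g, e in word:
        if g == 'x':
            depth += e
        else:
            letters.append(('abcd'[depth], e))
    return letters

print(decode(reduce_word(psi('a') + psi('b') + invert(psi('d')))))
# -> [('a', 1), ('b', 1), ('d', -1)]
```

The decoder works because, in the reduced image of a reduced word, the $y$'s never cancel (that would require adjacent inverse letters in the original word), so each surviving $y^{\pm1}$ sits at the $x$-depth of the letter that produced it.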

Are $F^2$ and $F^4$ isomorphic? No. I like the proof that comes from looking at $\mathrm{Hom}(F^n, \mathbb Z / 2\mathbb Z)$ — that is, the set of all group homomorphisms from $F^n$ to the group with two elements. A homomorphism out of a free group is determined by an independent choice of image for each generator, so there are exactly $2^n$ such homomorphisms. Since the size of $\mathrm{Hom}(G, \mathbb Z / 2\mathbb Z)$ is an isomorphism invariant, that suffices to show that the structure of $F^n$ depends on $n$.
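As a sanity check (my own illustration): elements of $\mathbb Z / 2\mathbb Z$ are just bits under addition mod 2, and a homomorphism out of $F^n$ is exactly a choice of one bit per generator.

```python
from itertools import product

# A homomorphism from F^n to Z/2Z is determined by an arbitrary choice of
# image (0 or 1) for each of the n generators, and every such choice
# extends to a homomorphism, since the group is free: count the choices.
def count_homs(n):
    return sum(1 for _ in product((0, 1), repeat=n))

print(count_homs(2), count_homs(4))  # -> 4 16
```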

In conclusion: $F^2 < F^4$, $F^4 < F^2$, but $F^2 \not\cong F^4$.

It should be clear that none of this really depends on which free groups were chosen, as long as neither is $F^1$. In fact, we also have the glorious result $F^2 < F^2$ (and the same for any $F^n$ with $n \ge 2$). This is demonstrated by $(x,y)\mapsto(xx,yy)$. ($F^1 < F^1$ is easy to show as well.)

Links for March 2021

Here is the third part of a vim tutorial, for advanced users.

Exercise your bullshit detector! Courtesy of Robert Jervis, a list of 78-ish fallacies, largely centered around international relations and, to a lesser extent, politics. These aren’t chess puzzles: as far as I can tell, there’s really no guarantee of one clean answer. Nevertheless, a good read. You can, of course, get much the same effect by reading the news, but the fallacies here are largely in the past and therefore less irritating.

“Facial recognition technology can reveal political orientation,” scream the headlines! Go look at figure 3; facial recognition (in this case referring to an algorithm that extracts a grab-bag of humanly incomprehensible characteristics from a face) does about half again as well as an algorithm would if it just looked at: facial hair, glasses, position, and emotion. The most interesting thing here, frankly, is that stuff like “facial expression” is such a strong predictor — culture matters! A close competitor for “most interesting”: humans are terrible at guessing political orientation, getting it right only 55% of the time, as compared with about 65% for the amalgamation of position, glasses, hair, and emotion.

Rust code can perform unsafe operations without using an unsafe block. Once you see it, it’s obvious: just open the /proc/self/mem file and edit the memory manually! On the one hand, this is some combination of obvious, pointless, and idiotic. On the other, it’s an important reminder of the fact that modern software is largely too complicated for any security/correctness model to be, y’know, secure. Or correct.

A fable of opportunity cost. Long-winded (in the tradition of Faulkner’s Fable, I suppose), but readable. I’m not sure this is the best story to illustrate the point, which, as nearly as I understand it, is: “Losing a penny is the same as failing to gain a penny. First derivatives of your objective function ought to be continuous everywhere!” Still, it’s worth reading and thinking about, hence the link.

Quantum computing optimism recedes by a small amount. See also Scott Aaronson on ethics.

And finally, a laugh:

All Hail the Social Mobility Index

The social mobility index is a ranking of U.S. colleges that you can find here. Scroll down to about the middle of that page to see the actual rankings. The idea is that colleges are ranked not according to prestige, or the average performance of graduates, or amount of money spent, but rather according to how many students have been lifted how far. Blurring over the methodological details, a college does better in the SMI rankings by taking a struggling student and helping them a lot than by accepting a wealthy prodigy and helping them network. This contrasts with other rankings, which might, for instance, penalize colleges for accepting too high a proportion of applicants.

The rankings themselves are fun to read, if only for the glorious feeling of seeing all your least favorite schools at the bottom of the list where they belong. The top fifteen are all California State — certainly not to be confused with the UC schools — and CUNY. High-ranking “prestigious” schools are, according to my rather poor understanding of which schools have prestige:

  1. Rutgers (Newark campus — this is R2, not the R1 campus)
  2. Florida International University
  3. Texas A&M
  4. Stony Brook
  5. Rutgers (Camden campus — also not the famous campus)
  6. UCSB

For some sense of scale, Stony Brook is ranked 37. My undergraduate and graduate school, UMD-College Park, is ranked 455. MIT is 989, but at least they’re doing better than Harvard: 1264.

Ranking colleges this way is glorious. Reading the copy on that page, I’m not even sure the people who implemented the social mobility index realize how wonderful their invention is — they seem to be concerned solely with attacking economic inequality (and perhaps scoring the occasional political point). But this way of ranking schools attacks more problems than just economic inequality.

Let’s start by looking at incentives. The traditional rankings often explicitly penalize colleges for accepting too many people, but even leaving that aside, the general emphasis on prestige (asking professors to rank universities off the top of their heads, for instance) is always going to create an incentive to be selective. Moreover, the idea that we should care about performance of students some years after graduation, without reference to where they started from, means that colleges are incentivized to select students who would be successful anyway. This ties in to Caplan’s signaling theory of education, but you can be skeptical of that while still acknowledging that colleges are being encouraged to find the best students, not the students they can help the most.

And, of course, when I say “best” students, I really mean “students who were doing very well before college”. Seeking such students is not the path to obtaining a racially, economically, or culturally diverse student body. Nor is it a good way to maximize utilitarian good. It’s pretty much the equivalent of a charity that only gives money to wealthy people, on the grounds that then the average recipient will be wealthier.

The SMI — made sufficiently influential — incentivizes colleges to find the students they can help the most. Colleges are rewarded for taking poorer students, charging them less, and raising their post-graduation salaries. That’s good. That’s what colleges should be doing. Colleges should be preferentially finding ways to help those who are disadvantaged. Well, the college that figures out how to do that will rocket to the top of the SMI.

So, the SMI moves us toward greater social good. Maybe I don’t care. Maybe I just want to know which college is best. Well, the SMI is better at that! When I ask “which college is best”, what I’m interested in is the causal relationship between the college and how well I do afterward. I’m not interested in the fact that being accepted to Stanford is a bigger honor than being accepted to Oklahoma State. I don’t care that the average graduate of Princeton is wealthier, unless Princeton caused them to be wealthier.

Most rankings make really quite a poor effort to determine a causal relationship. The SMI, perhaps inadvertently, does much better. A college that has effective teachers, and an environment conducive to learning, and all the other things you might want — that college is going to have a very easy time rising to the top of the SMI rankings. A college that relies exclusively on determining who will already do well, and attracting them via prestige, will suffer horribly. So, if I want to look for the college that will teach me the most (or any similar goal), SMI should be much better than traditional rankings!

So, I’m in love with these rankings. They aid the selfish interests of the million or so people looking at colleges each year; they create incentives for colleges to pursue the utilitarian good. They work towards the goals of the left (social justice and racial equality) while pandering beautifully to the right (Harvard’s ranking is just what it should be, and as far as I can tell, none of the top-ranked schools are currently targets of discrimination lawsuits). Everybody’s happy.

It’s worth noting that debates over methodology are entirely reasonable. (I can’t find a detailed description of SMI’s methodology.) With one of the more prestige-oriented rankings (US News and World Report comes to mind), it doesn’t make sense for two rankings to disagree by much, because, well, prestige is prestige. We’re not going to have a debate about whether a school is prestigious or not — the whole point is that everybody already agrees. But the social mobility index is trying to measure something real, and there can be legitimate disagreement over how to measure that thing. For instance, “Education Reform Now” pays much more attention to Pell grants, among other differences, and some of its rankings are wildly different. In particular, BYU ranks in the top ten with ERN, while not making it to the top 500 under the social mobility index!

Another similar ranking is from the “Equality of Opportunity Project”. I’m sure there are others I missed, and hopefully still more will be created.

Actually, I should expand on that last part a bit more. The SMI, while great, is not free of flaws. As it is, it uses a small number of aggregate variables (like tuition and graduation rate) as a proxy for aiding social mobility. Of course, collecting data is hard, but if I had my ’druthers, the ranking would be calculated directly from the average difference between a student’s post-graduation salary and that student’s parents’ salary. I’m sure there are more-easily accomplished improvements available.
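The proposed score is simple enough to sketch in a few lines of Python. The data and field names below are entirely hypothetical (invented for illustration; this is not SMI’s actual methodology or data):

```python
# Hypothetical ranking: score each college by the average gap between a
# graduate's post-graduation salary and that graduate's parents' salary.

def mobility_score(students):
    """students: list of (post_graduation_salary, parental_salary) pairs."""
    gaps = [grad - parents for grad, parents in students]
    return sum(gaps) / len(gaps)

# Made-up numbers, chosen only to show the ordering this score produces:
colleges = {
    "State U": [(52_000, 31_000), (48_000, 25_000), (61_000, 40_000)],
    "Elite U": [(95_000, 180_000), (120_000, 150_000)],
}
ranking = sorted(colleges, key=lambda c: mobility_score(colleges[c]),
                 reverse=True)
print(ranking)  # -> ['State U', 'Elite U']
```

Note how the score rewards lift rather than absolute outcomes: the fictional Elite U’s graduates earn more, but they started from wealthier families, so its average gap is negative and it ranks below State U.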