## DyingLoveGrape.

## Complex Algebraic Curves, Part 1: Introduction to Homogeneous Polynomials.

## Prereqs.

For this series, we're not going to require all that much; mainly just calculus or familiarity with partial derivatives. The first few posts will be somewhat general, focusing mostly on homogeneous polynomials — this can lead us in a number of different directions, but projective space is on the horizon. The book I will mainly be glancing through is Frances Kirwan's Complex Algebraic Curves, though I will deviate from it as I see fit. I'm by no means proficient in this subject, so I will count on you, the loyal reader, to correct me in the comments below if I've made an error — though, I'll try my best not to make any particularly devastating mistakes.

## Homogeneous Functions.

We're all familiar with functions; we see them all the time in calculus, algebra, and so forth. Let's recall that a polynomial function of one variable is denoted by $f(x)$ and is equal to a sum of terms of the form $a_{i}x^{m_{i}}$, where each $a_{i}$ is a scalar and each $m_{i}$ is a non-negative integer. For a polynomial function of more than one variable, the same general idea applies: it's denoted by \[f(x_{1}, x_{2}, \dots, x_{k})\] and equal to a bunch of terms which look like \[a_{i}x_{1}^{m_{1}}x_{2}^{m_{2}}\cdots x_{k}^{m_{k}}\] where $a_{i}$ is a scalar, as usual, and the $m_{i}$ are all non-negative integers. For example, if we have a function of two variables, it might look something like this: \[f(x,y) = 4 + 3x + 2y + 5xy + 6x^{2}y^{7} + 7x^{8}y\] and a three-variable function may look something like this: \[f(x,y,z) = 2 + x - y + 2z - x^{2}y + y^{2}z + xyz^{3}.\] Note that, for two and three variables, we generally write our variables as $x,y,z$ instead of $x_{1},x_{2},x_{3}$, but the idea is the same.

The idea behind the degree of a multivariable polynomial is a bit stranger. Consider the polynomial \[f(x,y) = 2 + x^{6} + y^{7} + x^{5}y^{6}.\] What should $\deg(f)$ be? The standard way to define the degree is to add the powers of the variables within each term, then take the largest of these sums to be the degree.
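As a quick check of this definition, here's a small sketch in Python (the representation of a polynomial as a list of `(coefficient, exponents)` pairs is just an illustrative choice of mine, not anything from the text) that computes the degree of the example polynomial:

```python
# Represent a polynomial as a list of (coefficient, exponent-tuple) terms.
# The example f(x, y) = 2 + x^6 + y^7 + x^5 y^6 becomes:
f = [(2, (0, 0)), (1, (6, 0)), (1, (0, 7)), (1, (5, 6))]

def degree(poly):
    """The degree is the largest sum of exponents over all terms."""
    return max(sum(exponents) for _, exponents in poly)

print(degree(f))  # the x^5 y^6 term wins: 5 + 6 = 11
```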
In this case, $\deg(f) = 11$, since the powers in $x^{5}y^{6}$ sum to 11, whereas the other terms are of degree 0 (the constant), 6, and 7 respectively.

With that out of the way, let's talk a bit about the way that some functions can grow. Suppose that $f(x)$ models how much a worker gets paid per hour; that is, $f(x)$ is the amount he earns for working $x$ hours. This grows linearly; if he makes 8 dollars an hour, then he will make $8x$ dollars in $x$ hours. The way that we can write this is \[f(x) = 8x.\] This function has an interesting property: for any real number $\alpha$ we have $f(\alpha x) = \alpha f(x)$. Essentially, we can "pull out" the $\alpha$ from inside the function notation.

Not every function is like this. Consider $g(x) = x^{2} + 1$. We have \[g(2x) = 4x^{2} + 1 \neq 2x^{2} + 2 = 2g(x).\] Hence it's not the case that we can always "pull a number out of the function." Hmph. Well, let's try the function $f(x) = x^{3}$, just for kicks. \[f(\alpha x) = \alpha^{3}x^{3} = \alpha^{3}f(x).\] In this case, we were able to pull out a few $\alpha$'s; the power of $\alpha$ we pulled out also (coincidentally?) happened to be equal to the degree of the function. Curious, indeed!

The question is, when can we do this? For one-variable functions, the answer is fairly straightforward and the reader should attempt it. Hint: the function should have at most one term in it! But for a many-variable function, what can happen? Let's look at a few examples.
\[f(x,y) = x^{3}y^{2} + 3\]
You can convince yourself that plugging in $\alpha x$ or $\alpha y$ (or both) won't give you anything nice; it's that darn constant that's throwing things off. What about
\[f(x,y) = x^{3}y^{2} + xy\]
Note that
\[f(\alpha x, \alpha y) = \alpha^{5}x^{3}y^{2} + \alpha^{2}xy\]
What does this equal in terms of $f(x,y)$? Well, we can pull out $\alpha^{2}$, but then we're left with $\alpha^{3}x^{3}y^{2} + xy$. This certainly isn't equal to $\alpha^{i}f(x,y)$ for any power $i$: the two terms have different degrees, so no single power of $\alpha$ factors out of both.
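We can test this failure numerically. The sketch below (the helper name and sample points are my own illustrative choices, not from the text) checks whether $f(\alpha x, \alpha y) = \alpha^{m}f(x,y)$ holds at a few sample points:

```python
def is_homogeneous_of_degree(f, m, samples, alphas=(2.0, 3.0, -1.5)):
    """Numerically test whether f(a*x, a*y) == a^m * f(x, y) at sample points."""
    for (x, y) in samples:
        for a in alphas:
            if abs(f(a * x, a * y) - a**m * f(x, y)) > 1e-9:
                return False
    return True

samples = [(1.0, 1.0), (2.0, -1.0), (0.5, 3.0)]

# x^3 y^2 has every term of degree 5, so scaling pulls out alpha^5:
print(is_homogeneous_of_degree(lambda x, y: x**3 * y**2, 5, samples))  # True

# x^3 y^2 + xy mixes degrees 5 and 2, so no single power works:
print(any(is_homogeneous_of_degree(lambda x, y: x**3 * y**2 + x * y, m, samples)
          for m in range(8)))  # False
```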
Moreover, the same problem arises whenever two terms have different degrees. This suggests a definition: call a polynomial $f(x_{1},\dots,x_{n})$ **homogeneous of degree $m$** if every term has degree exactly $m$; equivalently, if \[f(\alpha x_{1}, \dots, \alpha x_{n}) = \alpha^{m}f(x_{1},\dots,x_{n})\] for every scalar $\alpha$.
This notion is surprisingly useful in some geometries. Specifically, suppose we define some function $f(x,y)$ of two variables such that we know: \[\begin{align*} f(1,0) &= P\\ f(0,1) &= Q\\ \end{align*}\] If we knew nothing else about this function, it would be impossible for us to talk about any other points; if we happen to know that this is a homogeneous polynomial of degree $m$, then we now have the following: \[\begin{align*} f(\alpha,0) &= \alpha^{m} f(1,0) = \alpha^{m} P\\ f(0,\alpha) &= \alpha^{m} f(0,1) = \alpha^{m} Q\\ \end{align*}\] This gives us a surprising amount of information: it gives us all of the points on the line $\ell_{1}$ which passes through the origin and $(1,0)$, and also all of the points on the line $\ell_{2}$ which passes through the origin and $(0,1)$. That's kind of neat.

What if we want all of the values on the line that passes through the origin and, say, $(2,4)$? We note that \[f(2,4) = 2^{m}f(1,2),\] which tells us that if we want $f(2,4)$ then it suffices to find $f(1,2)$. We could have also said \[f(2,4) = 4^{m}f(\tfrac{1}{2},1),\] which is equivalent. Notice something here: given any point $(a,b)$ with $a \neq 0$, we can always reduce it to the form $(1,\frac{b}{a})$ and, given a homogeneous polynomial of degree $m$, this gives us \[f(a,b) = a^{m}f(1,\tfrac{b}{a}).\] This might not seem like too big of a deal at first, but the following proposition makes apparent what is neat about this idea:
**Proposition.** If $f(x_{1},\dots,x_{n})$ is homogeneous of degree $m$, then every value of $f$ is determined by its values at the points whose first nonzero coordinate is equal to $1$.

This might not seem like a huge deal, but what it does for us is essentially cut the amount of work we have to do down significantly: instead of having to find values for every possible combination $f(a_{1},\dots, a_{n})$, we need only work with, at worst, $f(1,a_{2},\dots, a_{n})$. You may want to try to prove this claim yourself: the idea is, if the first coordinate is not zero, then factor it out to make the first coordinate equal to 1 (that is, divide all the other coordinates by the first coordinate's value). If it's 0, then go to the next coordinate. Notice that we don't ever need to worry about $f(0,0,0,\dots,0)$, since its value is always $0$.
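The proof idea above can be sketched directly. Below, `normalize` is a hypothetical helper of my own (not from the text) that factors out the first nonzero coordinate; for a degree-$m$ homogeneous $f$, the original value is then recovered as $\alpha^{m}$ times the value at the normalized point:

```python
def normalize(point):
    """Return (alpha, base) with point == alpha * base, where the first
    nonzero coordinate of base equals 1.  The origin maps to (0, itself)."""
    for c in point:
        if c != 0:
            return c, tuple(x / c for x in point)
    return 0, point

# For f(x, y) = x^3 * y^2, homogeneous of degree 5:
f = lambda x, y: x**3 * y**2
alpha, base = normalize((2, 4))
print(alpha, base)                     # 2 (1.0, 2.0)
print(f(2, 4) == alpha**5 * f(*base))  # True
```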
Note that this is the case since $f(\alpha) = \alpha^{m}f(1)$ gives us the value for any real number $\alpha$. Note also that this forces $f(0) = 0$; in fact, this is true for any homogeneous function of positive degree (why?). Also, the same reasoning works in two variables: a homogeneous polynomial $f(x,y)$ of degree $m$ is determined by its values at the points of the form $(1,y)$ and $(0,y)$, since $f(a,b) = a^{m}f(1,\frac{b}{a})$ whenever $a \neq 0$.
This isn't that bad, all things considered. The "picture" of this is also somewhat nice. The idea behind this picture is: given any point on the plane (let's say in the first quadrant; if it's negative, we do the same thing, but we need to multiply by $(-1)^{m}$), just draw a line from the point to the origin; it will go through exactly one point on the dashed blue line which represents $(1,y)$. That's one of the kinds of points that we're talking about in the corollary above. Similarly, if the point is on the $y$-axis, then it will be of the form $(0,y)$, which is the other kind of point from the corollary above. There are similar nice pictures for other dimensions, but we'll get to that soon.

## Properties of Homogeneous Polynomials: Zeros.

Homogeneous polynomials are pretty neat. But, being functions, we can do a bunch of different things with them. What's the first thing we usually think of when we talk about functions? Zeros are a big deal, so let's see what happens with a homogeneous polynomial. One zero of the function is given by $(0,0,\dots, 0)$, certainly (why?). Suppose $f$ is homogeneous of degree $m$ and suppose that another zero is given by \[f(a_{1},a_{2},\dots, a_{n}) = 0\] for some $a_{1}, \dots, a_{n}\in {\mathbb R}$. Then, since for any real $\alpha$ we have \[f(\alpha a_{1},\dots, \alpha a_{n}) = \alpha^{m}f(a_{1},\dots,a_{n}) = \alpha^{m}\cdot 0 = 0,\] any multiple of the coordinates will be a zero as well. That is, the zeros of a homogeneous polynomial come in entire lines through the origin: if a point is a zero, so is every scalar multiple of that point.
Interestingly, this means, in particular, that we need only look for zeros of $f$ on the unit circle (or, in higher dimensions, on the unit $n$-sphere), since every other zero is a scalar multiple of one of these. And, further, we only need half of the circle (or sphere): since $f(-a_{1},\dots,-a_{n}) = (-1)^{m}f(a_{1},\dots,a_{n})$, a point is a zero exactly when its antipodal point is.
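As a quick illustration (the example polynomial is my own choice, not from the text), take $f(x,y) = x^{2} - y^{2}$, which is homogeneous of degree 2. A zero on the unit circle generates a whole line of zeros:

```python
import math

f = lambda x, y: x**2 - y**2  # homogeneous of degree 2

# A zero on the unit circle: (cos 45 degrees, sin 45 degrees).
x0, y0 = math.cos(math.pi / 4), math.sin(math.pi / 4)
print(abs(f(x0, y0)) < 1e-12)  # True: it's (numerically) a zero

# Every scalar multiple of a zero is again a zero:
print(all(abs(f(a * x0, a * y0)) < 1e-9 for a in (-3.0, 0.5, 7.0)))  # True
```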
Sometimes it's nice to use algebraic manipulation to find the zeros of homogeneous polynomials, but other times it may require a lot of effort. The nice thing about using the unit $n$-sphere is that we may eliminate one or more variables, thus making the calculation potentially easier.

## Properties of Homogeneous Polynomials: Derivatives.

Usually, given some function, we want to find a few things besides zeros: maxima, minima, and so forth. Most of this is done via a derivative, so it's not all that surprising that we'd want to see if there are any neat properties which derivatives of homogeneous polynomials have. Let's try to do this for a few special cases. If we have a two-variable homogeneous polynomial $f(x,y)$ of degree $m$, what happens when we take partial derivatives? Let's try for a single term $ax^{j}y^{k}$ with $j + k = m$. We have \[x\frac{\partial}{\partial x}\left(ax^{j}y^{k}\right) = jax^{j}y^{k} \quad\text{and}\quad y\frac{\partial}{\partial y}\left(ax^{j}y^{k}\right) = kax^{j}y^{k},\] and adding these together gives $(j + k)ax^{j}y^{k} = m\cdot ax^{j}y^{k}$. Since every term of $f$ has degree $m$, summing over all the terms gives \[x\frac{\partial f}{\partial x} + y\frac{\partial f}{\partial y} = mf(x,y).\]
Note that the reader should try to prove something similar about the one-variable case. The reader should also do this for the $n$-variable case, since it's virtually the same deal. We'll give it its own box, because it is a pretty nice relation: \[x_{1}\frac{\partial f}{\partial x_{1}} + x_{2}\frac{\partial f}{\partial x_{2}} + \cdots + x_{n}\frac{\partial f}{\partial x_{n}} = mf(x_{1},\dots,x_{n}).\] This is sometimes called Euler's Relation for Homogeneous Polynomials.
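Euler's Relation is easy to sanity-check numerically. The sketch below (using a simple central-difference approximation of the partial derivatives, my own choice of method and example) verifies $x f_{x} + y f_{y} = mf$ for $f(x,y) = x^{3}y^{2} + x^{5}$, which is homogeneous of degree 5:

```python
def partial(f, i, point, h=1e-6):
    """Central-difference approximation of the i-th partial derivative of f."""
    p_plus = list(point); p_plus[i] += h
    p_minus = list(point); p_minus[i] -= h
    return (f(*p_plus) - f(*p_minus)) / (2 * h)

f = lambda x, y: x**3 * y**2 + x**5  # homogeneous of degree m = 5

for (x, y) in [(1.0, 2.0), (0.5, -1.5), (3.0, 1.0)]:
    euler = x * partial(f, 0, (x, y)) + y * partial(f, 1, (x, y))
    print(abs(euler - 5 * f(x, y)) < 1e-4)  # True at each sample point
```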
Besides writing this out "formally" as we did above, there is another slick proof which is similar to how we looked at the one-dimensional case — see if you can figure it out!

## Next Time... (Interpreting Homogeneous Polynomials.)

At this point, we know a bunch about homogeneous polynomials, but the reader may have a lingering question: what do these polynomials actually represent? That's what we'll look at next time.