## Thursday, June 26, 2014

### Fun with elliptic curve L-function explicit formulae

Although I gave a brief exposition of the "baseline" explicit formula for elliptic curve $L$-functions in a previous post, I wanted to spend some more time showing how one can use this formula to prove some useful statements about elliptic curves and their $L$-functions. Some of these are already implemented in some way in the code I am building for my GSoC project, and I hope to include more as time goes on.

First up, let's fix some notation. For the duration of this post we fix an elliptic curve $E$ over the rationals with conductor $N$. Let $L_E(s)$ be its associated $L$-function, and let $\Lambda_E(s)$ be the completed $L$-function thereof, i.e.
$\Lambda_E(s) = N^{s/2}(2\pi)^{-s}\Gamma(s)L_E(s)$, where $\Gamma$ is the usual Gamma function on $\mathbb{C}$. If you want to know more about how $E$, $L_E$ and $\Lambda_E$ are defined and how we compute with them, some of my previous posts (here and here) go into their exposition in more detail.

Let's recap briefly how we derived the baseline explicit formula for the $L$-function of $E$ (see this post for more background thereon). Taking logarithmic derivatives of the above formula for $\Lambda_E(s)$ and shifting to the left by one gives us the following equality:
$$\frac{\Lambda_E^{\prime}}{\Lambda_E}(1+s) = \log\left(\frac{\sqrt{N}}{2\pi}\right) + \frac{\Gamma^{\prime}}{\Gamma}(1+s) + \frac{L_E^{\prime}}{L_E}(1+s).$$
Nothing magic here yet. However, $\frac{\Lambda_E^{\prime}}{\Lambda_E}$, $\frac{\Gamma^{\prime}}{\Gamma}$ and $\frac{L_E^{\prime}}{L_E}$ all have particularly nice series expansions about the central point. We have:

• $\frac{\Lambda_E^{\prime}}{\Lambda_E}(1+s) = \sum_{\gamma} \frac{s}{s^2+\gamma^2}$, where $\gamma$ ranges over the imaginary parts of the zeros of $L_E$ on the critical line; this converges for any $s$ not equal to a zero of $L_E$.
• $\frac{\Gamma^{\prime}}{\Gamma}(1+s) = -\eta + \sum_{k=1}^{\infty} \frac{s}{k(k+s)}$, where $\eta$ is the Euler-Mascheroni constant $=0.5772156649\ldots$ (this constant is usually denoted by the symbol $\gamma$ - but we'll be using that symbol for something else soon enough!); this sum converges for all $s$ not equal to a negative integer.
• $\frac{L_E^{\prime}}{L_E}(1+s) = \sum_{n=1}^{\infty} c_n n^{-s}$; this only converges absolutely in the right half plane $\Re(s)>\frac{1}{2}$.

Here
$$c_n = \begin{cases} -\left[p^m+1-\#E(\mathbb{F}_{p^m})\right]\frac{\log p}{p^m}, & n = p^m \mbox{ is a perfect prime power,} \\ 0, & \mbox{otherwise.}\end{cases}$$
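The digamma series in the second bullet above is easy to sanity-check numerically. A minimal Python sketch (the helper name and truncation bound $K$ are my own choices), using the fact that at $s=1$ the sum telescopes to give $\frac{\Gamma^{\prime}}{\Gamma}(2) = 1-\eta$:

```python
import math

EULER_GAMMA = 0.5772156649015329  # the Euler-Mascheroni constant eta

def digamma_series(s, K=100000):
    """Truncation of -eta + sum_{k=1}^K s/(k(k+s)), the series
    expansion of Gamma'/Gamma(1+s); the tail is roughly s/K."""
    return -EULER_GAMMA + sum(s / (k * (k + s)) for k in range(1, K + 1))

# At s = 1 the sum telescopes to 1 - 1/(K+1), so the series
# converges to Gamma'/Gamma(2) = 1 - eta = 0.42278...
print(digamma_series(1.0))
```

Since the tail of the sum behaves like $s/K$, an actual implementation would either take $K$ large or add an explicit tail correction.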

Assembling these equalities gives us the aforementioned explicit formula:
$$\sum_{\gamma} \frac{s}{s^2+\gamma^2} = \left[-\eta+\log\left(\frac{\sqrt{N}}{2\pi}\right)\right] + \sum_{k=1}^{\infty} \frac{s}{k(k+s)} + \sum_n c_n n^{-s}$$
which holds for any $s$ where all three series converge. It is this formula which we will use repeatedly to plumb the nature of $E$.

For ease of exposition we'll denote the term in the square brackets $C_0$. It pitches up a lot in the math below, and it's a pain to keep writing out!

Some things to note:

• $C_0 = -\eta+\log\left(\frac{\sqrt{N}}{2\pi}\right)$ is easily computable (assuming you know $N$). Importantly, this constant depends only on the conductor of $E$; it contains no other arithmetic information about the curve, nor does it depend in any way on the complex input $s$.
• The sum $\sum_{k=1}^{\infty} \frac{s}{k(k+s)}$ doesn't depend on the curve at all. As such, when it comes to computing values associated to this sum we can just hard-code the computation before we even get to working with the curve itself.
• The coefficients $c_n$ can be computed by counting the number of points on $E$ over finite fields up to some bound. This is quick to do for any particular prime power.
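To make the point-counting concrete, here is a minimal pure-Python sketch (naive and slow, for illustration only; the function names are mine) computing $c_p$ for an odd prime $p$ of good reduction on the curve $y^2 = x^3 - 12x + 29$ used in the session below. Note the sign convention: the leading minus is the one that matches the values printed by Z.cn.

```python
import math

def count_points(a, b, p):
    """#E(F_p) for E: y^2 = x^3 + a*x + b, p an odd prime of good
    reduction, counted naively via Euler's criterion."""
    total = 1  # the point at infinity
    for x in range(p):
        f = (x**3 + a*x + b) % p
        if f == 0:
            total += 1          # one point with y = 0
        elif pow(f, (p - 1) // 2, p) == 1:
            total += 2          # f is a nonzero square: two y values
    return total

def c_p(a, b, p):
    """c_p = -(p + 1 - #E(F_p)) * log(p) / p for a good odd prime p."""
    return -(p + 1 - count_points(a, b, p)) * math.log(p) / p

# E: y^2 = x^3 - 12x + 29, the curve from the session below.
print(c_p(-12, 29, 7))    # a_7 = 3 here, so this is -3*log(7)/7
print(c_p(-12, 29, 389))  # should match Z.cn(389) in the session
```

This brute-force count is $O(p)$ per prime; the real code is of course much cleverer about computing the $a_p$.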

Good news: the code I've written can compute all the above values quickly and efficiently:

sage: E = EllipticCurve([-12,29])
sage: Z = LFunctionZeroSum(E)
sage: N = E.conductor()
sage: print(Z.C0(),RDF(-euler_gamma+log(sqrt(N)/(2*pi))))
(2.0131665172, 2.0131665172)
sage: print(Z.digamma(3.5),RDF(psi(3.5)))
(1.10315664065, 1.10315664065)
sage: Z.digamma(3.5,include_constant_term=False)
1.68037230555
sage: Z.cn(389)
-0.183966457901
sage: Z.cn(next_prime(1e9))
0.000368045198812
sage: timeit('Z.cn(next_prime(1e9))')
625 loops, best of 3: 238 µs per loop

So by computing the values on the right we can compute the sum on the left - without having to know the exact locations of the zeros $\gamma$, which are in general hard to compute.

Now that we have this formula in the bag, let's put it to work.

### Naïve rank estimation

If we multiply the sum over the zeros by $s$ and let $\Delta = 1/s$, we get
$$\sum_{\gamma} \frac{\Delta^{-2}}{\Delta^{-2}+\gamma^2} = \sum_{\gamma} \frac{1}{1+(\Delta\gamma)^2}.$$
Note that for large values of $\Delta$, $\frac{1}{1+(\Delta\gamma)^2}$ is small but strictly positive for all nonzero $\gamma$, and 1 for the central zeros, which have $\gamma=0$. Thus the value of the sum when $\Delta$ is large gives a close upper bound on the analytic rank $r$ of $E$. That is, we can bound the rank of $E$ from above by choosing a suitably large value of $\Delta$ and computing the quantity on the right in the inequality below:
$$r < \sum_{\gamma} \frac{1}{1+(\Delta\gamma)^2} = \frac{1}{\Delta}\left[C_0 + \sum_{k=1}^{\infty} \frac{1}{k(1+\Delta k)} + \sum_n c_n n^{-1/\Delta}\right].$$
Great! Right? Wrong. In practice this approach is not a very good one. The reason is that the infinite sum over $n$ only converges absolutely for $\Delta<2$, and for $\Delta$ values as small as this, the obtained bound won't be very good. A value of $\Delta=2$, for example, gives us the zero sum $\sum_{\gamma} \frac{1}{1+(2\gamma)^2}$. If a curve's $L$-function has a zero with imaginary part about 0.5, for example - as many $L$-functions do - then such a zero will contribute 0.5 to the sum. And because zeros come in symmetric pairs, the sum value will then be at least a whole unit larger than the actual analytic rank. In general, for $\Delta<2$ the computed sum can be quite a bit bigger than the actual analytic rank of $E$.
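To see the problem concretely, here is a toy computation with a made-up zero spectrum (the $\gamma$ values below are purely illustrative, not the zeros of any particular curve): two central zeros plus a low-lying pair at $\gamma = \pm 0.5$ already push the $\Delta = 2$ sum a full unit above the true analytic rank of 2.

```python
# Hypothetical imaginary parts of zeros: two central zeros (so the
# "true" analytic rank is 2), a low-lying pair, and a higher pair.
gammas = [0.0, 0.0, 0.5, -0.5, 2.0, -2.0]

def naive_zero_sum(gammas, Delta):
    """The sum over zeros of 1/(1 + (Delta*gamma)^2)."""
    return sum(1.0 / (1.0 + (Delta * g) ** 2) for g in gammas)

# At Delta = 2 the pair at +/-0.5 contributes 0.5 each:
print(naive_zero_sum(gammas, 2.0))    # about 3.12: overshoots rank by > 1
# Only for much larger Delta does the bound tighten toward 2...
print(naive_zero_sum(gammas, 100.0))  # about 2.0009
# ...but the c_n series is unusable there, since it needs Delta < 2.
```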

Moreover, even though the Generalized Riemann Hypothesis predicts that the sum $\sum_n c_n n^{-1/\Delta}$ does indeed converge for any positive value of $\Delta$, in practice the convergence is so slow that we end up needing to compute inordinate amounts of the $c_n$ in order to hope to get a good approximation of the sum. So no dice; this approach is just too inefficient to be practical.

 A graphic depiction of the convergence problems we run into when trying to evaluate the sum over the $c_n$. For the elliptic curve $E: y^2 + y = x^3 - 79x + 342$, the above plot shows the cumulative sum of the series $\sum_n c_n n^{-1/\Delta}$ as a function of the number of terms included.

### A better approach (the one implemented in my code)

We can get around this impasse by evaluating a modified sum over the zeros, one which requires us to only ever compute finitely many of the $c_n$. Here's how we do it.

Start with the sum $\sum_{\gamma} \frac{s}{s^2+\gamma^2}$, and divide by $s^2$ to get $\sum_{\gamma} \frac{1}{s(s^2+\gamma^2)}$. Now hearken back to your college sophomore math classes and take the inverse Laplace transform of this sum. We get
$$\mathcal{L}^{-1} \left[\sum_{\gamma} \frac{1}{s(s^2+\gamma^2)}\right](t) = \sum_{\gamma} \mathcal{L}^{-1} \left[\frac{1}{s(s^2+\gamma^2)}\right](t) = \frac{t^2}{2} \sum_{\gamma} \left(\frac{\sin(\frac{t}{2}\gamma)}{\frac{t}{2}\gamma}\right)^2.$$
Letting $\Delta = \frac{t}{2\pi}$ we get the sum
$$\mathcal{L}^{-1} \left[\sum_{\gamma} \frac{1}{s(s^2+\gamma^2)}\right](2\pi\Delta) = 2\pi^2\Delta^2 \sum_{\gamma} \text{sinc}^2(\Delta\gamma),$$
where $\text{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$, and $\text{sinc}(0) = 1$.

Note that as before, $\text{sinc}^2(\Delta\gamma)$ evaluates to 1 for the central zeros, and is small but positive for all nonzero $\gamma$ when $\Delta$ is large. So again, this sum gives an upper bound on the analytic rank of $E$, and this bound converges to the true analytic rank as $\Delta \to \infty$.

If we do the same - divide by $s^2$ and take inverse Laplace transforms - to the quantities on the right, we get the following:

• $\mathcal{L}^{-1} \left[\frac{C_0}{s^2}\right](2\pi\Delta) = 2\pi\Delta C_0;$
• $\mathcal{L}^{-1} \left[\sum_{k=1}^{\infty} \frac{1}{sk(k+s)}\right](2\pi\Delta) = \sum_{k=1}^{\infty} \frac{1}{k^2}\left(1-e^{-2\pi\Delta k}\right);$
• $\mathcal{L}^{-1} \left[\sum_{n=1}^{\infty} c_n \frac{n^{-s}}{s^2}\right](2\pi\Delta) = \sum_{\log n<2\pi\Delta} c_n\cdot(2\pi\Delta - \log n).$

Note that the last sum is finite: for any given value of $\Delta$, we only need to compute the $c_n$ for $n$ up to $e^{2\pi\Delta}$. This is the great advantage here: we can compute the sum exactly without introducing any truncation error. The other two quantities are also readily computable to any precision.

Combining the above values and dividing by $2\pi^2\Delta^2$, we arrive at the rank bounding formula which is implemented in my GSoC code, hideous as it is:
\begin{align*}
r < \sum_{\gamma} \text{sinc}^2(\Delta\gamma) = \frac{1}{\pi\Delta} &\left[ C_0 + \frac{1}{2\pi\Delta}\sum_{k=1}^{\infty} \frac{1}{k^2}\left(1-e^{-2\pi\Delta k}\right) \right. \\
&\left. + \sum_{n<e^{2\pi\Delta}} c_n\left(1 - \frac{\log n}{2\pi\Delta}\right) \right]
\end{align*}

Of course, the number of terms needed on the right hand side is still exponential in $\Delta$, so this limits the tightness of the sum we can compute on the left hand side. In practice a personal computer can compute the sum with $\Delta=2$ in a few seconds, and a more powerful cluster can handle $\Delta=3$ in a similar time. Beyond $\Delta$ values of about 4, however, the number of $c_n$ is just too great to make the sum effectively computable.

### The density of zeros on the critical line

Explicit formula-type sums can also be used to answer questions about the distribution of zeros of $L_E$ along the critical line. Let $T$ be a positive real number, and let $M_E(T)$ be the counting function that gives the number of zeros in the critical strip with imaginary part at most $T$ in magnitude. By convention, when $T$ coincides with a zero of $L_E$, we count that zero with weight $\frac{1}{2}$. In other words, we can write
$$M_E(T) = \sum_{|\gamma|<T} 1 + \sum_{|\gamma|=T} \frac{1}{2}.$$

There is a more mathematically elegant way to write this sum. Let $\theta(x)$ be the Heaviside function on $x$ given by
$$\theta(x) = \begin{cases} 0, & x<0, \\ \frac{1}{2}, & x=0, \\ 1, & x>0. \end{cases}$$
Then we have that
$$M_E(T) = \sum_{\gamma} \theta(T^2-\gamma^2).$$

We have written $M_E(T)$ as a sum over the zeros of $L_E$, so we expect that it comprises the left-hand side of some explicit formula-type sum. This is indeed the case. Using Fourier transforms we can show that
$$M_E(T) = \frac{2}{\pi}\left[C_0 T + \sum_{k=1}^{\infty} \left(\frac{T}{k} - \arctan \frac{T}{k}\right) + \sum_{n=1}^{\infty} \frac{c_n}{\log n}\cdot \sin(T\log n) \right].$$

With this formula in hand we can start asking questions about how the zeros of $L_E$ are distributed. For example, how does the average zero density on the critical line scale as a function of $T$, the imaginary part of the area in the critical strip we're considering? How does zero density scale with the curve's conductor $N$?

If one assumes the Generalized Riemann Hypothesis, we can show that $\sum_{n=1}^{\infty} \frac{c_n}{\log n}\cdot \sin(T\log n) = O(\log T)$. Moreover, this sum is in a sense equidistributed about zero, so we expect its contribution for a sufficiently large randomly chosen value of $T$ to be zero.

The other two quantities are more straightforward. The $C_0 T$ term is clearly linear in $T$. Going back to the definition of $C_0$, we see that $C_0 T = \frac{1}{2}T\log N + \text{constant}\cdot T$. Finally, we can show that the sum over $k$ equals $T\log T + O(\log T)$. Combining this gives us the estimate
$$M_E(T) = \frac{1}{\pi} T \left(\log N + 2\log T + a\right) + O(\log T)$$
for some explicitly computable constant $a$ independent of $E$, $N$ or $T$.

That is, the number of zeros with imaginary part at most $T$ in magnitude is "very close" to $\frac{1}{\pi}T(\log N + 2\log T)$. Another way to put it is that the number of zeros in the interval $[T,T+1]$ is about $\frac{1}{\pi}\left(\frac{1}{2}\log N + \log T\right)$.

 The value of $M_E(T)$ for $T$ up to 30, for the elliptic curve $E: y^2 + y = x^3 - x$. The black line is the 'smooth part' of $M_E(T)$, given by $\frac{2}{\pi}\left(C_0 T + \sum_{k=1}^{\infty} \left(\frac{T}{k} - \arctan \frac{T}{k}\right)\right)$. This gives us the 'expected number of zeros up to $T$', before we know any data about $E$ other than its conductor. The blue line is what we get when we add in $\frac{2}{\pi}\sum_{n=1}^{\infty} \frac{c_n}{\log n}\cdot \sin(T\log n)$, which tells us the exact locations of the zeros: they will be found at the places where the blue curve jumps by 2.

Note that the sum over $n$ converges *very* slowly. To produce the above plot, I included terms up to $n=1\,000\,000$, and there is still considerable rounding visible in the blue curve. If I could somehow evaluate the $c_n$ sum in all its infinite glory, the resulting plot would be perfectly sharp, comprised of flat lines that jump vertically by 2 at the loci of the zeros.

These are two of the uses of explicit formula-type sums in the context of elliptic curve $L$-functions. If you want to read more about them, feel free to pick up my PhD thesis - when it eventually gets published!

## Thursday, June 19, 2014

### The structure of my GSoC code and a short demo

I'm at the point where I can explain the layout of the code that I've written so far, and even demonstrate some of its functionality.

As mentioned in previous posts, the initial goal of my Google Summer of Code project is to implement in Sage functionality that will allow us to bound the analytic rank of an elliptic curve $E$ via certain explicit formula-type sums relating to the $L$-function of $E$. One could certainly achieve this by simply writing a method on the existing elliptic curve class in Sage. However, the machinery we'll be using to bound rank is useful outside of the context of elliptic curves: it allows us to compute sums over zeros of any $L$-function that comes from a modular form. In fact, so long as

1. The $L$-function has an Euler product expansion over the primes;
2. The Euler product coefficients - akin to the $a_p$ values we've mentioned previously - are readily computable; and
3. The $L$-function obeys a functional equation which allows us to define and compute with the function inside of the critical strip;

then we can use this explicit-formula methodology to compute sums over the zeros of the $L$-function.

All the code is hosted on GitHub, so you can view it freely here (note: this repository is a branch of the main Sage repo, so it contains an entire copy of the Sage source. I'm only adding a tiny fraction of code in a few files to this codebase; I'll link to these files directly below).

To this effect, it makes sense to write a new class in Sage that can ultimately be made to compute zero sums for any motivic $L$-function. This is what I'll be doing. I've created a file zero_sums.py in the sage/lfunctions/ directory, inside of which I'll write a family of classes that take as input a motivic object, and contain methods to compute various sums and values related to the $L$-function of that object. I'll start by writing two classes: LFunctionZeroSum_abstract, which will contain all the general methods that can apply to a zero sum estimator attached to any motivic $L$-function. The second class, LFunctionZeroSum_EllipticCurve, inherits from the abstract class, and contains the code specific to elliptic curve $L$-functions.

I have also added a method to the EllipticCurve_rational_field class in the sage/schemes/elliptic_curves/ell_rational_field.py file (the class modeling elliptic curves over the rationals) to access the analytic rank estimation functionality of the zero sum estimator.

Let's see the code in action. To download a copy yourself, say from within a project in cloud.sagemath.com, open up a terminal and type

~$ git clone git://github.com/haikona/GSoC_2014.git

This will download the code in the repo into a new folder GSoC_2014 in your current directory. cd into that directory, type make and hit enter to build. This will unfortunately take a couple of hours to complete, but the good news is you'll only need to do that once. Alternatively, if you have an existing freshly-built copy of the Sage source and good github-fu, you should be able to download just the relevant code and apply it, greatly speeding up the build time.

Once your copy of sage containing the new LFunctionZeroSum code is built, fire up that copy of sage (e.g. type ./sage while in the GSoC_2014 directory; don't just type sage, as this runs the system-wide copy of sage - not the one we want). This command-line interface will have all the functionality of any other current copy of Sage, but with the extra methods and classes I've written.

Let's start by estimating the rank of some elliptic curves:

sage: E = EllipticCurve([0,1,1,-2,0]); E
Elliptic Curve defined by y^2 + y = x^3 + x^2 - 2*x
over Rational Field
sage: E.rank(), E.conductor()
(2, 389)
sage: E.analytic_rank_bound()
2.0375000846

In the code above, I've defined an elliptic curve given by the Weierstrass equation $y^2 +y = x^3+x^2-2x$; for those of you in the know, this is the curve with Cremona label '389a', the rank 2 curve with smallest conductor. I've then printed out the rank of the curve, which is indeed 2. Then I've called my analytic_rank_bound() method on $E$, and it returns a value which is strictly greater than the analytic rank of $E$. Since the analytic rank of an elliptic curve is always a non-negative integer, we can conclude that the analytic rank of $E$ is at most 2.

How long did this rank estimation computation take? Thankfully Sage has a timing function to answer this question:

sage: timeit('E.analytic_rank_bound()')
125 loops, best of 3: 3.33 ms per loop

Neat; that was quick. However, this elliptic curve has a small conductor - 389, hence its Cremona label - so computing with its $L$-function is easy. The great thing with the zero sum code is that you can run it on curves with really large conductors. Here we reproduce the commands above on the curve $y^2 + y = x^3 - 23737x + 960366$, which has rank 8 and conductor $N \sim 4.6\times 10^{14}$:

sage: E = EllipticCurve([0,0,1,-23737,960366]); E
Elliptic Curve defined by y^2 + y = x^3 - 23737*x + 960366
over Rational Field
sage: E.rank(), E.conductor()
(8, 457532830151317)
sage: E.analytic_rank_bound()
8.51446122452
sage: timeit('E.analytic_rank_bound()')
125 loops, best of 3: 3.37 ms per loop

So we see that this zero sum code doesn't care very much about the size of the conductor of $E$; this is one of its huge plusses. There are downsides, of course - but we'll save those for another post.

Now let's look at the LFunctionZeroSum code.

sage: E = EllipticCurve([23,-100])
sage: Z = LFunctionZeroSum(E); Z
Zero sum estimator for L-function attached to
Elliptic Curve defined by y^2 = x^3 + 23*x - 100
over Rational Field

Right now we don't have too much beyond that which you can access from the EllipticCurve class. For one, we can compute the coefficients of the logarithmic derivative, since they are needed to compute the rank-estimating zero sum. We can also return and/or compute one or two other values associated to the zero sum class. As time goes on we'll flesh out more functionality for this class.

sage: for n in range(1,11):
....:     print(n,Z.logarithmic_derivative_coefficient(n))
....:
(1, 0)
(2, 0.0)
(3, -1.09861228867)
(4, 0.0)
(5, 1.28755032995)
(6, 0)
(7, -0.277987164151)
(8, 0.0)
(9, -0.366204096223)
(10, 0)
sage: Z.elliptic_curve()
Elliptic Curve defined by y^2 = x^3 + 23*x - 100
over Rational Field
sage: Z.elliptic_curve().rank()
1
sage: Z.rankbound(Delta=1.7,function="gaussian")
1.58647787024

All of this code is of course written to Sage spec. That means that every method written has documentation, including expected input, output, examples, and notes/warnings/references where necessary. You can access the docstring for a method from the command line by typing that method followed by a question mark and hitting enter.

All my code will initially be written in Python. Our first goal is to replicate the functionality that has already been written by Jonathan Bober in C. As such we're not aiming for speed here; instead the focus is to make sure all the code is mathematically correct.

Once that's up and running there are two natural directions to take the project:

1. Speed the code up. Rewrite key parts in Cython, so that computations run faster, and thus analytic rank can be estimated for curves of larger conductor.
2. Generalize the code. Write classes that model zero sum estimators for $L$-functions attached to eigenforms (those modular forms with Euler product expansions) of any level and weight, and allow for more types of sums to be computed. Ultimately write code that can be used to compute sums for any motivic $L$-function.

I'm going to attack speeding up the code first, since the elliptic curve rank estimation functionality will probably see the most use. If there's time at the end of the project I can look to generalizing the code as much as I can.

### My favourite hat

In 2007 I took an overland trip though Namibia with my parents and two sets of friends. Our first port of call was the Augrabies National Park on the South African side of the South Africa/Namibia border. At the gift shop there I bought this hat (along with a warthog fluff toy, which I also still own).

It's a Rogue handcrafted leather bush hat, and it's wonderful. Supremely comfortable, it generally goes where I go whenever there's the chance of sunny weather - which here in Seattle means the months of June through September. I hope to continue to wear this hat for many a year to come.

Also it makes me look like Indiana Jones.

## Friday, June 13, 2014

### The explicit formula and bounding analytic rank from above

Wednesday's post gave some detail on how one can numerically evaluate $L_E(s)$, the $L$-function attached to $E$, at any given complex input $s$. This is certainly the first step in determining the analytic rank of $E$, i.e. the order of vanishing of $L_E(s)$ at the central point $s=1$.

However, we immediately run into two major issues. The first, as mentioned previously, is that we have no proven lower bound on the size of the coefficients of the Taylor expansion of $L_E(s)$ at $s=1$, so we can never truly ascertain if the $n$th derivative there vanishes, or is just undetectably small.

Today's post focuses on a method that at least partially overcomes the second issue: that evaluating $L_E(s)$ and its derivatives takes at least $O(\sqrt{N})$ time, where $N$ is the conductor of $E$. This means that working with elliptic curve $L$-functions directly becomes computationally infeasible if the curve's conductor is too big.

To see how we can bypass this problem, we'll have to talk about logarithmic derivatives. The logarithmic derivative of a function  $f(s)$ is just
$$\frac{d}{ds} \log(f(s)) = \frac{f^{\prime}(s)}{f(s)}.$$
Logarithmic derivatives have some nice properties. For one, they turn products into sums: if $f(s) = g(s)h(s)$, then $\frac{f^{\prime}}{f} = \frac{g^{\prime}}{g} + \frac{h^{\prime}}{h}$. Because we can write $L_E(s)$ as an Euler product indexed by the primes, we therefore have that
$$\frac{L_E^{\prime}}{L_E}(s) = \sum_p \frac{d}{ds} \log\left(\left(1-a_p p^{-s} + \epsilon_p p^{1-2s}\right)^{-1}\right).$$
When you multiply this out, you rather amazingly get a Dirichlet series whose coefficients can be described in a very mathematically elegant way:
$$\frac{L_E^{\prime}}{L_E}(1+s) = \sum_{n=1}^{\infty} c_n n^{-s},$$
where
$$c_n = \begin{cases} -\left(p^m+1-\#E(\mathbb{F}_{p^m})\right)\frac{\log p}{p^m}, & n = p^m \mbox{ is a perfect prime power,} \\ 0, & \mbox{otherwise.}\end{cases}$$

That is, the $n$th coefficient of $\frac{L_E^{\prime}}{L_E}(1+s)$ is $0$ when $n$ is not a prime power, and when it is, the coefficient relates directly to the number of points on the reduced curve over the finite field of $n$ elements. [Note we're shifting the logarithmic derivative over to the left by one unit, as this puts the central point at the origin i.e. at $s=0$.]
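As an aside, the product-to-sum property of logarithmic derivatives invoked above is easy to check numerically. A toy sketch with ordinary functions (nothing here is specific to $L_E$; the functions and finite-difference step are arbitrary choices of mine):

```python
import math

def log_deriv(f, s, h=1e-6):
    """Numerical logarithmic derivative f'(s)/f(s), computed with a
    central finite difference of step h."""
    return (f(s + h) - f(s - h)) / (2 * h * f(s))

g = lambda s: s * s + 1.0         # g'/g = 2s/(s^2+1)
h_fn = lambda s: math.exp(3 * s)  # h'/h = 3
f = lambda s: g(s) * h_fn(s)      # a product of the two

s = 0.7
# The logarithmic derivative of the product is the sum of the
# logarithmic derivatives of the factors:
print(log_deriv(f, s))
print(log_deriv(g, s) + log_deriv(h_fn, s))
```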

How does this yield a way to estimate analytic rank? Via something called the Explicit Formula for elliptic curve $L$-functions - which, confusingly, is really more of a suite of related methods than any one specific formula.

Remember the Generalized Riemann Hypothesis? This asserts, in the case of elliptic curve $L$-functions at least, that all nontrivial zeros of $L_E(s)$ lie on the line $\Re(s) = 1$, and two zeros can never lie on top of each other except at the central point $s=1$. The number of such zeros at the central point is precisely the analytic rank of $E$. Each nontrivial zero can therefore be written as $1\pm i\gamma$ for some non-negative real value of $\gamma$ - it turns out that whenever $L_E(1+i\gamma)=0$, then $L_E(1-i\gamma)=0$ too, so noncentral zeros always come in pairs. We believe GRH to be true with a high degree of confidence, since in the 150 years that $L$-functions have been studied, no-one has ever found a non-trivial zero off the critical line.

We can invoke GRH with logarithmic derivatives and something called the Hadamard product representation of an entire function to get another formula for the logarithmic derivative of $L_E(s)$:
$$\frac{L_E^{\prime}}{L_E}(1+s) = -\log\left(\frac{\sqrt{N}}{2\pi}\right) - \frac{\Gamma^{\prime}}{\Gamma}(1+s) + \sum_{\gamma} \frac{s}{s^2+\gamma^2}.$$
Here $\frac{\Gamma^{\prime}}{\Gamma}(s)$ is the logarithmic derivative of the Gamma function (known as the digamma function; this is a standard function in analysis that we know a lot about and can easily compute), and $\gamma$ runs over the imaginary parts of the zeros of $L_E(s)$ on the critical line $\Re(s)=1$.

What does this get us? A way to equate a certain sum over the zeros of $L_E(s)$ to other more tractable functions and sums. When you combine the two formulae for $\frac{L_E^{\prime}}{L_E}(1+s)$ you get the following equality:

$$\sum_{\gamma} \frac{s}{s^2+\gamma^2} = \log\left(\frac{\sqrt{N}}{2\pi}\right) + \frac{\Gamma^{\prime}}{\Gamma}(1+s) + \sum_n c_n n^{-s}.$$
The key realization here is that while computing the zeros of $L_E$ is in general nontrivial and time-consuming, all the other quantities in the equation above are readily computable. We can therefore evaluate the sum on the left by computing the quantities on the right to sufficient precision, none of which requires knowledge of the exact locations of the zeros of $L_E(s)$.

The above is the first and perhaps most basic example of an explicit formula for elliptic curve $L$-functions, but it is not the one we want. Specifically, trouble still arises from the fact that we're still dealing with an infinite sum on the right, and when we look close to the central point $s=0$, convergence is unworkably slow. We can, however, work with related explicit formulae that reduce the sum on the right to a finite one.

The formula we'll use in our code can be obtained from the above equation by dividing both sides by $s^2$ and taking inverse Laplace transforms; we can also formulate it in terms of Fourier transforms. Specifically, we get the following. Recall the sinc function $\mbox{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$ with $\mbox{sinc}(0)=1$. Then we have:
$$\sum_{\gamma} \mbox{sinc}(\Delta\gamma)^2 = Q(\Delta,N) + \frac{1}{2\pi^2\Delta^2}\sum_{\log n<2\pi\Delta} c_n\cdot\left(2\pi\Delta-\log n\right),$$
where $Q(\Delta,N)$ is a certain quantity depending only on $\Delta$ and $N$ that is easy to compute.

Let's look at the left hand side. Since $\mbox{sinc}(\Delta\gamma)^2$ is non-negative for any $\gamma$, and $\mbox{sinc}(0)^2 = 1$, when we sum over all zeros $\gamma$ we get a value which must be strictly greater than the number of zeros at the central point (i.e. the analytic rank of $E$). Moreover, as we increase $\Delta$, the contribution to the sum from all the noncentral zeros goes to zero, since $\mbox{sinc}(x) \to 0$ as $x \to \infty$.
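This behaviour is easy to see numerically. A small sketch (the zero height $\gamma = 0.3$ is an arbitrary illustrative value, not a zero of the curve in the figure below):

```python
import math

def sinc2(x):
    """sinc(x)^2 with sinc(x) = sin(pi*x)/(pi*x) and sinc(0) = 1."""
    if x == 0:
        return 1.0
    return (math.sin(math.pi * x) / (math.pi * x)) ** 2

# A central zero always contributes exactly 1 to the sum:
print(sinc2(0.0))  # 1.0
# The contribution of a noncentral zero at gamma = 0.3 shrinks
# as Delta grows (roughly 0.74, 0.25, 0.012 for Delta = 1, 2, 3):
gamma = 0.3
for Delta in (1, 2, 3):
    print(Delta, sinc2(Delta * gamma))
```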

 A graphic representation of the sum on the left side of the equation for the $L$-function attached to the elliptic curve $y^2 = x^3 + 103x - 51$ for three increasing values of the parameter $\Delta$. Vertical lines have been plotted at $x=\gamma$ whenever $L_E(1+i\gamma) = 0$, and the height of each line is given by the black curve $\mbox{sinc}(\Delta x)^2$. Thus summing up the length of the vertical lines gives you the value of the sum on the left side of the equation. We see that as $\Delta$ increases, the contribution from the blue lines - corresponding to noncentral zeros - goes to zero, while the contribution from the central zeros in red remain at 1 apiece. Since there are two central zeros (plotted on top of each other here), the sum will limit to 2 from above as $\Delta$ increases.

Now consider the right hand side of the sum. Note that the sum over $n$ is finite, so we can compute it to any given precision without having to worry about truncation error. Since $Q(\Delta,N)$ is easy to compute, this allows us to evaluate the entire right hand side with (relatively) little fuss. We therefore are able, via this equality, to compute a quantity that converges to $r$ from above as $\Delta \to \infty$.

 A graphical representation of the finite sum $\sum_{\log n<2\pi\Delta} c_n\cdot\left(2\pi\Delta-\log n\right)$ for the same curve, with $\Delta=1$. The black lines are the triangular function $y = \pm (2\pi\Delta-x)$, whose values at $\log n$ weight the $c_n$ coefficients. Each blue vertical line is placed at $x=\log n$, and has height $c_n\cdot\left(2\pi\Delta-\log n\right)$. Summing up the signed lengths of the blue lines gives the value of the sum over $n$. Note that there are only 120 non-zero terms in this sum when $\Delta=1$, so it is quick to compute.

This formula, and others akin to it, will form the core functionality of the code I will be writing and adapting to include in Sage, as they give us a way to quickly (at least when $\Delta$ is small) show that the analytic rank of a curve can't be too big.

There are some downsides, of course, most notable of which is that the number of terms in the finite sum on the right is exponential in $\Delta$. Tests show that on a laptop one can evaluate the sum in a few milliseconds for $\Delta=1$, a few seconds for $\Delta = 2$, but the computation takes on the order of days when $\Delta=4$. Nevertheless, for the majority of the elliptic curves we'll be looking at, $\Delta$ values of 2 or less will yield rank bounds that are in fact tight, so this method still has much utility.
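The blow-up is easy to quantify: the finite sum needs the $c_n$ for all $n$ up to roughly $e^{2\pi\Delta}$, so the term count grows by a factor of $e^{2\pi} \approx 535$ every time $\Delta$ increases by 1:

```python
import math

# Number of c_n coefficients needed for each Delta: roughly e^(2*pi*Delta).
for Delta in (1, 2, 3, 4):
    print(Delta, int(math.exp(2 * math.pi * Delta)))
# Delta = 1 needs a few hundred terms; Delta = 4 needs tens of billions.
```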

Another issue that warrants investigation is to analyze just how fast - or slow - the convergence to $r$ will be as a function of increasing $\Delta$; this will allow us to determine what size $\Delta$ we need to pick a priori to get a good estimate out. But this is a topic for a future date; the next post or two will be dedicated toward detailing how I'm implementing this rank estimation functionality in Sage.

## Wednesday, June 11, 2014

### How to compute with elliptic curve L-functions and estimate analytic rank

In a previous post I defined what an elliptic curve $L$-function $L_E(s)$ is, and indicated why the behaviour of the function at the central point $s=1$ is of such great interest: the order of vanishing of $L_E(s)$ at $s=1$ is conjecturally precisely equal to the algebraic rank of $E$.

Because this equality is still conjectural, we will call the former quantity - the number of derivatives of $L_E(s)$ that vanish at $s=1$ - the analytic rank of $E$. The topic of this post is to address the question: given an elliptic curve $E$, how do we go about computing its analytic rank?

Before we can hope to answer this question, we need to know how to evaluate $L_E(s)$ itself for any given $s$. In the previous post I gave both the Euler product and Dirichlet series definitions for $L_E(s)$; to jog your memory, here's the Euler product of $L_E(s)$:
$$L_E(s) = \prod_p \left(1-a_p p^{-s} + \epsilon_p p^{1-2s}\right)^{-1},$$
where the product runs over all prime numbers, $a_p = p+1-\#E(\mathbb{F}_p)$, and $\epsilon_p = 0$ if $p$ divides the conductor of $E$ and $1$ otherwise. The Dirichlet series is $L_E(s) = \sum_{n}a_n n^{-s}$, which is precisely what you get when you multiply out the Euler product. Note that we are suppressing the dependence on $E$ in both the $a_n$ and $\epsilon_p$ constants.

However, both the Euler product and Dirichlet series representations of $L_E(s)$ will only converge absolutely when the real part of $s$ exceeds $\frac{3}{2}$. Although the Sato-Tate Conjecture (now a theorem, but the name has stuck) implies that the expansions will in fact converge conditionally for $\Re(s)>\frac{1}{2}$, the convergence is so slow that attempting to evaluate $L_E(s)$ near the central point by multiplying or summing enough terms is horribly inefficient. As such, we need a better way to evaluate $L_E(s)$ - and thankfully, such better ways do indeed exist.

Remember how I mentioned in the previous post that $L$-functions obey a very nice symmetry condition? Well, here's that condition exactly: first, we need to define something called the completed $L$-function $\Lambda_E(s)$. This is just $L_E(s)$ multiplied by some extra factors. Specifically,
$$\Lambda_E(s) = N^{\frac{s}{2}}(2\pi)^{-s}\Gamma(s) L_E(s),$$
where $N$ is the conductor of $E$ and $\Gamma(s)$ is the usual Gamma function on $\mathbb{C}$ (the one that gives you $(n-1)!$ when you evaluate it at the positive integer $s=n$).

We can show that $\Lambda_E(s)$ is entire on $\mathbb{C}$; that is, it doesn't have any poles. Moreover, $\Lambda_E(s)$ obeys the glorious symmetry property
$$\Lambda_E(s) = w_E \Lambda_E(2-s),$$
where $w_E$ is either $1$ or $-1$ and depends on $E$. This is called the functional equation for the $L$-function attached to $E$.

Another way to put it is that the shifted completed $L$-function $\Lambda_E(1+s)$ is either an even or an odd function of $s$. Because the factors $N^{\frac{s}{2}}$, $(2\pi)^{-s}$ and $\Gamma(s)$ are all readily computable, this allows us to determine the value of $L_E(s)$ when the real part of $s$ is less than $\frac{1}{2}$.
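It's worth making one consequence of this symmetry explicit. Setting $s=1$ in the functional equation gives
$$\Lambda_E(1) = w_E \Lambda_E(1),$$
so if $w_E=-1$ then $\Lambda_E(1)$ - and hence $L_E(1)$ - must vanish. More generally, differentiating the functional equation $n$ times and evaluating at $s=1$ yields $\Lambda_E^{(n)}(1) = w_E(-1)^n \Lambda_E^{(n)}(1)$, so when $w_E=-1$ all even-order central derivatives vanish, and when $w_E=1$ all odd-order ones do. In other words, the sign $w_E$ determines the parity of the analytic rank.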

What's left, then, is to figure out how to efficiently evaluate $L_E(s)$ in the strip $\frac{1}{2} \le \Re(s) \le \frac{3}{2}$. This is called the critical strip for the $L$-function, and it is here that the behaviour of the function is most interesting.

[Aside: we can, for example, show that $\Lambda_E(1+s)$ is never zero outside of the critical strip. The Generalized Riemann Hypothesis in fact asserts that elliptic curve $L$-functions are only ever zero along the exact center of this strip, the critical line $\Re(s)=1$. We'll get back to the zeros of elliptic curve $L$-functions in a later post.]

To evaluate $\Lambda_E(s)$ in the critical strip, we make use of the modularity of $E$. The modularity theorem states that elliptic curves are modular: for every elliptic curve $E$ over the rationals there exists a weight 2 cuspidal eigenform $f_E$ of level $N$ (where $N$ is precisely the conductor of $E$), such that the $a_n$ of $E$ as defined previously equal the Fourier coefficients of $f_E$. If you haven't studied modular forms before, the crucial piece of knowledge is that there is a natural procedure for constructing $L$-functions from modular forms in such a way that the definition makes sense for all complex inputs $s$, and that the $L$-function attached to the cusp form $f_E$ will exactly equal the elliptic curve $L$-function $L_E(s)$. This is in fact how we show that elliptic curve $L$-functions can be analytically continued to all of $\mathbb{C}$.
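To see modularity in action on a small example: the conductor-11 curve $y^2 + y = x^3 - x^2 - 10x - 20$ corresponds to the classical level-11 cusp form given by the eta product $q\prod_{n\geq 1}(1-q^n)^2(1-q^{11n})^2$, and multiplying out this product recovers the curve's $a_n$. Here's a minimal plain-Python sketch (illustrative only, not the Sage implementation) that expands the product and checks the first few coefficients against their well-known values:

```python
# Expand q * prod_{k>=1} (1-q^k)^2 (1-q^{11k})^2 up to degree M; this is
# the weight-2 newform of level 11 attached to the conductor-11 curve
# y^2 + y = x^3 - x^2 - 10x - 20.
M = 20
poly = [0] * (M + 1)
poly[0] = 1  # start with the constant power series 1

def mult_by_one_minus_qk(poly, k):
    """Multiply the truncated power series by (1 - q^k)."""
    new = poly[:]
    for i in range(len(poly) - k):
        new[i + k] -= poly[i]
    return new

for k in range(1, M + 1):
    for _ in range(2):                      # the factor (1 - q^k)^2
        poly = mult_by_one_minus_qk(poly, k)
    if 11 * k <= M:
        for _ in range(2):                  # the factor (1 - q^{11k})^2
            poly = mult_by_one_minus_qk(poly, 11 * k)

# Multiplying by q shifts indices: a_n is the coefficient of q^{n-1} above.
a = [poly[n - 1] for n in range(1, M + 1)]
print(a[:10])  # the Fourier coefficients a_1 through a_10
```

These match the $a_p$ one gets by counting points on the curve modulo $p$ - for instance $a_2 = -2$, $a_3 = -1$, $a_5 = 1$ - which is exactly what modularity promises.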

The take-away is that via modularity there is an infinite sum representation for $L_E(s)$ which converges absolutely for all $s$. Here it is: define the auxiliary function
$$\lambda_E(s) = \left(\frac{\sqrt{N}}{2\pi}\right)^{s} \sum_{n=1}^\infty a_n n^{-s}\Gamma \left(s,\frac{2\pi}{\sqrt{N}}\cdot n\right),$$
where all the quantities are as defined previously, and $\Gamma(s,x)$ is the upper incomplete Gamma function (note that since $\Gamma(s,x)$ goes to zero exponentially as $x$ goes to infinity, this sum will converge absolutely for any $s$, with rate of convergence scaling with $\sqrt{N}$). Then we have
$$\Lambda_E(s) = \lambda_E(s) + w_E \lambda_E(2-s).$$
Because we know how to compute incomplete Gamma functions, this gives us a way to evaluate $\Lambda_E(s)$, and thus $L_E(s)$, in the critical strip. The upside with this formula and variations thereof is that it works for any value of $s$ you stick in - including values near $s=1$. Similar formulae exist for the derivatives of $L_E(s)$, so we can in theory compute $L_E^{(n)}(1)$, the $n$th derivative of $L_E(s)$ at the central point, to any degree of accuracy for any $n$.
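To make this concrete, here's a minimal plain-Python sketch (again illustrative, not the Sage implementation) evaluating $\Lambda_E(1)$ and hence $L_E(1)$ for the conductor-11 curve $y^2+y=x^3-x^2-10x-20$, which has $w_E = +1$. At $s=1$ the incomplete Gamma function collapses to an exponential, $\Gamma(1,x)=e^{-x}$, so no special functions are needed; the first ten Dirichlet coefficients are hard-coded from their well-known values:

```python
from math import exp, pi, sqrt

# Data for the conductor-11 curve y^2 + y = x^3 - x^2 - 10x - 20:
# its first ten (well-known) Dirichlet coefficients, conductor and sign.
a = [1, -2, -1, 2, 1, 2, -2, 0, -2, -2]
N, w = 11, 1

def lambda_at_1():
    """lambda_E(1): since Gamma(1, x) = exp(-x), the sum is elementary.
    The tail decays like exp(-2*pi*n/sqrt(N)), so ten terms suffice here."""
    x0 = 2 * pi / sqrt(N)
    return (sqrt(N) / (2 * pi)) * sum(
        a[n - 1] / n * exp(-x0 * n) for n in range(1, len(a) + 1))

Lambda1 = (1 + w) * lambda_at_1()      # Lambda_E(1) = lambda_E(1) + w*lambda_E(1)
L1 = Lambda1 / (sqrt(N) / (2 * pi))    # strip off N^{s/2}(2*pi)^{-s}Gamma(s) at s=1
print(L1)  # approximately 0.2538: nonzero central value, consistent with rank 0
```

Note how few terms are needed when $N$ is tiny; for large conductors the $e^{-2\pi n/\sqrt{N}}$ decay kicks in only after roughly $\sqrt{N}$ terms, which is precisely the bottleneck discussed below.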

Thus if we want to compute a curve's analytic rank, what's stopping us from just evaluating successive derivatives of $L_E(s)$ at $s=1$ until we hit one which is not zero?

Two reasons. The first is that there's no way around the fact that you need about $\sqrt{N}$ terms to compute $L_E(s)$ or its derivatives to decent precision. If the conductor of the curve is too big, as is often the case, it takes an inordinate amount of time to simply evaluate the $L$-function near the central point. This makes direct evaluation impractical for all but the smallest-conductor curves -- and for those curves we can usually compute rank via other methods anyway.

The second reason is a more subtle one: how do you tell numerically if the $n$th derivative of $L_E(s)$ at $s=1$ is zero? If you think about it, it's easy to answer this question in one direction only: if you evaluate $L_E^{(n)}(1)$ to some precision and get digits that aren't all zero, then (assuming your code is correct) the $n$th derivative of $L_E(s)$ does not vanish. However, if the digits we compute are all zero, then no matter how many digits of precision we use, the possibility always remains that the next digit along might not be zero.

In general, there is no numerical way to determine if the value of a black-box complex-valued function at a point is zero, or just really really close to zero. This is why, if you look in the literature, you'll find "methods to estimate analytic rank", but never to compute the quantity exactly. It's impossible to do without extra knowledge: specifically, we'd need some sort of computable lower bound on the size of the derivatives of $L_E(s)$ at the central point. Unfortunately, no such theorems currently exist, so we're stuck with estimating analytic rank for now.

Thankfully, the $\sqrt{N}$-dependence issue is more tractable. The next post will detail a method that provides good estimates for the analytic rank and that scales much more slowly with the curve's conductor.

## Tuesday, June 10, 2014

### How to develop for Sage using Sage Math Cloud and Git

I will be using Sage Math Cloud to write all the code for my Google Summer of Code project. The advantage of choosing the cloud-based route should be clear: you don't need to install or store anything locally, and it frees you up to write code on any device with a web browser. The downside, as Sage Math Cloud is still new, is that tutorials and how-tos on the setup process are a bit lacking. Since I couldn't find any single unified source on how to do it, I thought I'd create a step-by-step guide on getting set up in Sage Math Cloud to write code for inclusion in the Sage codebase.

Much of what I've done follows the Git the Hard Way tutorial for Sage development. However, there are significant departures. I recommend reading through that tutorial first; if you do then what follows below will be a piece of cake.

First, you’ll need an account at cloud.sagemath.com. Good news: the whole service is completely free to use, and there’s no annoying back-and-forthing via emailed confirmation links. It’ll take you two minutes tops.

 The cloud.sagemath.com homepage.

Your work on your Sage Math Cloud account is project-based; each account can have a large number of projects associated to it. Each project acts like its own Linux-based file system inside of which you can create and edit any number of files of any type. Create a project in which you'll house your code.

 The project root view, before any files have been created.

Next, you'll want a local installation of Sage. We're going to clone a copy of the Sage master repo, as hosted on GitHub. Click New --> Terminal to open up a new terminal window - we'll be doing everything via terminal commands. Then type the following three commands:

```
~$ git clone git://github.com/sagemath/sage.git
~$ cd sage
~/sage$ make
```

The first command downloads a copy of the Sage codebase into a new folder 'sage' (it should appear in the project root directory once the download is complete); the second changes into that directory; and the third starts compiling the Sage source. Note that building Sage usually takes about 3 hours, so get this going and then kick back and take inspiration from this.

While your code is compiling you'll want to make sure you have a GitHub account. If you don't, follow the steps there to create one. While not required, hosting your projects on GitHub is highly recommended: it allows multiple people to work on a project simultaneously, and gives you access to sophisticated revision control functionality that will prevent you from accidentally deleting code or data.

 GitHub repo creation page. Create an empty repository - we'll fill it with things later.

So long as the repo exists and you know its name, you're good to go. The next thing we'll want to do is register the Sage Trac repository, where all the active tickets for Sage development are hosted. Once you're done creating an empty repo on your GitHub account, go back to your Sage Math Cloud project and wait for Sage to finish building. When this is done, type the following in the terminal window:

```
~/sage$ git remote add trac git@trac.sagemath.org:sage.git -t master
```

```
~/sage$ git remote -v
origin  git://github.com/sagemath/sage.git (fetch)
origin  git://github.com/sagemath/sage.git (push)
trac    git@trac.sagemath.org:sage.git (fetch)
trac    git@trac.sagemath.org:sage.git (push)
```

The git remote add command registers the Sage trac repo as a remote repository (trac is the name it's registered under locally; you can of course call it anything you want). The git remote -v command simply lists the repos you have now registered. We won't be using Sage trac for now; if you want more practice with trac, see the Git the Hard Way tutorial mentioned previously.

Note the inclusion of the word master in the first command above: this means we will only track the master branch of the remote repo. A branch is an instanced copy of the codebase which you can work on and edit without altering other branches. The master branch should be kept pristine, so we will want to create and switch to a new branch. This is accomplished with the git checkout command:

```
~/sage$ git checkout -b demo
Switched to a new branch 'demo'
```

```
~/sage$ git branch
* demo
  master
```

The -b parameter creates the branch, while checkout switches to it. Typing git branch then shows which branches currently exist locally, with an asterisk marking the one you're currently on. We're now in a position to register the demo branch with our fresh repo on GitHub:

```
~/sage$ git remote add demo https://github.com/haikona/demo.git
```

```
~/sage$ git remote -v
demo    https://github.com/haikona/demo.git (fetch)
demo    https://github.com/haikona/demo.git (push)
origin  git://github.com/sagemath/sage.git (fetch)
origin  git://github.com/sagemath/sage.git (push)
trac    git@trac.sagemath.org:sage.git (fetch)
trac    git@trac.sagemath.org:sage.git (push)
```

And finally, we can push to our repo to sync the local copy therewith. For this you will need your GitHub password - so not just any old Harry can push to your repo.

```
~/sage$ git push demo demo
Counting objects: 4, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 317 bytes | 0 bytes/s, done.
Total 3 (delta 1), reused 0 (delta 0)
To https://github.com/haikona/demo.git
   2b1d88b..21cddb8  demo -> demo
```

You're now synced and ready to start writing code! From hereon you should be able to follow one of the many tutorials on working with git; the hard part - setting up - is all done.

## Monday, June 9, 2014

### Mathematical background: elliptic curves and L-functions

Day 1 of my Google Summer of Code project! I will try to post updates at least once a week; I thought a good starting point would be to give an introduction to the mathematical objects that this project revolves around. The next post will then give a more detailed description of the project itself, and the structure of the code that I'm going to adapt, write and include in Sage.

To that end, for this post I will assume some knowledge of elementary number theory, algebra and complex analysis, but nothing more complicated than that.

Let $E$ be an elliptic curve over the rational numbers. We can think of $E$ as the set of rational solutions $(x,y)$ to a two-variable cubic equation in the form:
$$E: y^2 = x^3 + Ax + B$$
for some integers $A$ and $B$, along with an extra "point at infinity". An important criterion is that $E$ be a smooth curve; this translates to the requirement that the discriminant of the curve, given by $-16(4A^3+27B^2)$, is not zero.
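The smoothness check is easy to do by hand or in a couple of lines of Python (a toy helper for illustration, not Sage's built-in machinery):

```python
def discriminant(A, B):
    """Discriminant of y^2 = x^3 + A*x + B; the curve is smooth iff nonzero."""
    return -16 * (4 * A**3 + 27 * B**2)

print(discriminant(-1, 0))  # y^2 = x^3 - x is smooth: discriminant 64
print(discriminant(-3, 2))  # y^2 = x^3 - 3x + 2 = (x-1)^2 (x+2) is singular: 0
```

The second example has a repeated root at $x=1$, which is exactly what a vanishing discriminant detects.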

One of the natural questions to ask when considering an elliptic curve is "how many rational solutions are there?" It turns out elliptic curves fall in that sweet spot where the answer could be zero, finitely many or infinitely many - and figuring out which is the case is a deeply non-trivial problem.

The rational solutions form an abelian group with a well-defined group operation that can be easily computed. By a theorem of Mordell, the group of rational points on an elliptic curve $E(\mathbb{Q})$ is finitely generated; we can therefore write
$$E(\mathbb{Q}) \approx T \times \mathbb{Z}^r,$$
where $T$ is a finite group (called the torsion subgroup of $E$), and the non-negative integer $r$ is called the algebraic rank of $E$.

 The elliptic curve group law explained visually: three points on the curve lying on a straight line sum to zero; because the point at infinity is the identity element, this means that the sum R of two points P and Q is the reflection about the x-axis of the third point at which the straight line through P and Q meets the curve.

Determining the torsion subgroup of $E$ is a relatively straightforward endeavor. By a theorem of Mazur, rational elliptic curves have torsion subgroups that are (non-canonically) isomorphic to one of precisely fifteen possibilities: $\mathbb{Z}/n\mathbb{Z}$ for $n = 1$ through $10$ or $12$; or $\mathbb{Z}/2\mathbb{Z}\oplus \mathbb{Z}/2n\mathbb{Z}$ for $n = 1$ through $4$. Computing the rank $r$ - the number of independent rational points on $E$ - is the hard part, and it is towards this end that this project hopes to contribute.

Perhaps surprisingly, we can translate the algebraic problem of finding rational solutions to $E$ to an analytic one - at least conjecturally. To understand this we'll need to know what an elliptic curve $L$-function is. These are holomorphic functions defined on the whole complex plane that somehow encode a great deal of information about the elliptic curve they're attached to.

The definition goes as follows: for each prime $p$, count the number of points on the curve modulo $p$, including the point at infinity; we'll call this number $N_p(E)$. Then define the number $a_p(E)$ by
$$a_p(E) = p+1 - N_p(E).$$
Hasse's Theorem states that $a_p(E)$ is always less than $2\sqrt{p}$ in magnitude for any $p$, and the Sato-Tate conjecture (recently proven by Taylor et al) states that for a fixed elliptic curve, the $a_p$ (suitably transformed) are asymptotically distributed in a semi-circular distribution about zero.
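This definition translates directly into a slow but transparent point-counting sketch in plain Python (illustrative only - Sage computes $a_p$ far more cleverly). For each $x$ we count square roots of $x^3+Ax+B$ modulo an odd prime $p$ using Euler's criterion, which says $r$ is a nonzero square mod $p$ exactly when $r^{(p-1)/2} \equiv 1 \pmod p$:

```python
def a_p(A, B, p):
    """a_p for y^2 = x^3 + A*x + B via naive point counting
    (odd prime p of good reduction). N_p counts all affine solutions
    plus the point at infinity."""
    affine = 0
    for x in range(p):
        r = (x**3 + A * x + B) % p
        if r == 0:
            affine += 1                       # single solution y = 0
        elif pow(r, (p - 1) // 2, p) == 1:    # Euler's criterion: r is a square
            affine += 2                       # two square roots y and p - y
    N_p = affine + 1                          # include the point at infinity
    return p + 1 - N_p

# For y^2 = x^3 - x, a_p = 0 whenever p = 3 (mod 4) (supersingular primes)
print(a_p(-1, 0, 3), a_p(-1, 0, 5), a_p(-1, 0, 7))  # 0 -2 0
```

Running this over many primes and comparing $|a_p|$ against $2\sqrt{p}$ gives a hands-on check of Hasse's bound.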

Next, for a given $p$ define the local factor $L_p(s)$ to be the function of the complex variable $s$ as follows:
$$L_p(s) = \left(1-a_p(E)p^{-s} + \epsilon(p)p^{1-2s}\right)^{-1},$$
where $\epsilon(p)$ is 0 if $p$ divides the conductor of $E$, and 1 otherwise.

The conductor of $E$ is a positive integer that encodes the bad reduction of $E$: for a finite list of primes $p$, when we reduce the curve modulo $p$, we don't get a smooth curve but rather a singular one instead. The conductor is just the product of these primes to the first or second power (or in the cases $p=2$ or $3$, up to the eighth and fifth powers respectively). If you're unfamiliar with elliptic curves, the thing to note is that the conductor always divides the discriminant of $E$, namely the quantity $-16(4A^3+27B^2)$ mentioned previously.

Finally we can define the $L$-function attached to $E$:
$$L_E(s) = \prod_p L_p(s) = \prod_p \left(1-a_p(E)p^{-s} + \epsilon(p)p^{1-2s}\right)^{-1}.$$
The above representation of $L_E(s)$ is called the Euler product form of the $L$-function. If we multiply out the terms and use power series inversion we can also write $L_E(s)$ as a Dirichlet series:
$$L_E(s) = \sum_{n=1}^{\infty} a_n(E) n^{-s},$$
where for non-prime $n$ the coefficients $a_n$ are defined to be exactly the integers you get when you multiply out the Euler expansion.
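Concretely, multiplying out the Euler product makes the $a_n$ multiplicative: $a_{mn} = a_m a_n$ for coprime $m, n$, and inverting a single Euler factor gives the recursion $a_{p^{k+1}} = a_p\,a_{p^k} - \epsilon(p)\,p\,a_{p^{k-1}}$ at prime powers. A plain-Python sketch, seeded with the well-known prime coefficients of the conductor-11 curve $y^2+y=x^3-x^2-10x-20$ (where 11 is the only bad prime):

```python
# Prime coefficients a_p and epsilon(p) for the conductor-11 curve
# y^2 + y = x^3 - x^2 - 10x - 20 (the a_p values are well known).
ap  = {2: -2, 3: -1, 5: 1, 7: -2, 11: 1}
eps = {p: (0 if p == 11 else 1) for p in ap}

def a_n(n):
    """Dirichlet coefficient a_n from the a_p via multiplicativity:
    a_{p^{k+1}} = a_p * a_{p^k} - eps(p) * p * a_{p^{k-1}},  a_{mn} = a_m * a_n."""
    if n == 1:
        return 1
    p = next(q for q in ap if n % q == 0)   # smallest listed prime factor
    k, m = 0, n
    while m % p == 0:
        k, m = k + 1, m // p                # n = p^k * m with gcd(p, m) = 1
    prev, curr = 1, ap[p]                   # a_{p^0} and a_{p^1}
    for _ in range(k - 1):
        prev, curr = curr, ap[p] * curr - eps[p] * p * prev
    return curr * a_n(m)

print([a_n(n) for n in range(1, 13)])  # a_1 through a_12 of the level-11 newform
```

For instance $a_4 = a_2^2 - 2 = 2$ and $a_6 = a_2 a_3 = 2$, matching the Fourier expansion of the associated modular form.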

If you do some analysis using Hasse's bound on the size of the $a_p(E)$ and their distribution according to Sato-Tate, you can show that the above two representations only converge absolutely when the real part of $s$ is greater than $\frac{3}{2}$, and conditionally for $\Re(s)>\frac{1}{2}$. However, the modularity theorem states that these elliptic curve $L$-functions can actually be analytically continued to the entire complex plane. That is, for every elliptic curve $L$-function $L_E(s)$ as defined above, there is an entire function on $\mathbb{C}$ which agrees with the Euler product/Dirichlet series definition for $\Re(s)>\frac{3}{2}$, but is also defined - and readily computable - for all other complex values of $s$. This entire function is what we actually call the $L$-function attached to $E$.

The way we analytically continue $L_E(s)$ yields that the function is highly symmetric about the line $\Re(s)=1$; moreover, because the function is defined by real coefficients $L_E(s)$ also obeys a reflection symmetry along the real axis. The point $s=1$ is in a very real sense therefore the central value for the $L$-function. It thus makes sense to investigate the behaviour of the function around this point.

Because $L_E(s)$ is entire, it has a Taylor expansion at any given point. We can ask what the Taylor expansion of $L_E(s)$ about the point $s=1$ is, for example. One of the central unproven conjectures in modern-day number theory is the Birch and Swinnerton-Dyer Conjecture: that the order of vanishing of the Taylor expansion of $L_E(s)$ about the point $s=1$ is precisely $r$, the algebraic rank of the elliptic curve $E$. That is, if we let $z=s-1$ so that
$$L_E(s) = c_0 + c_1 z + c_2 z^2 + \ldots$$
is the expansion of $L_E(s)$ about the central point, the BSD conjecture holds that $c_0$ through $c_{r-1}$ are all zero, and $c_r$ is not zero.

 The values of three elliptic curve L-functions along the critical line $1+it$ for $-10 \le t \le 10$. Blue corresponds to the curve $y^2 = x^3 - 13392x - 1080432$, a rank 0 curve, red is that of $y^2 = x^3 - 16x + 16$, a rank 1 curve, and green is $y^2 = x^3 - 3024x + 46224$, a rank 2 curve. Note that close to the origin the graphs look like a non-zero constant function, a straight line through the origin and a parabola respectively.

Thus if we can compute the order of vanishing of the elliptic curve $L$-function at the central point, we can at least conjecturally compute the rank of the curve. This converts an algebraic problem into a numerical one, which is perhaps more tractable.

The techniques we'll use to attempt to do this will be the subject of the next blog post. Unfortunately there are still plenty of challenges to this approach - not the least of which boils down to: how do you numerically determine if the value of a function is zero, or just really, really close to zero? The short answer is that without theorems to back you up, you can't -- but we can still make considerable progress toward the problem of computing an elliptic curve's rank.