I published my first scientific paper today. It made sense to try to explain, using less scientific language, what we were trying to accomplish. For reference, here is a link to the full text with the accompanying plots. There are two core concepts to the paper: the first is **forecasting**, and the second is the **Fisher information matrix** (don’t be scared by the name).

**Forecasting**

Forecasting, for my purposes, is the calculation of the observability of a theory. Most physicists start by imagining a theory of some particular phenomenon, for example dark matter, which they then try to describe mathematically. There are large frameworks that physicists have set up to help calculate the observables, such as quantum field theory, but in the end it must come down to something measurable. A good example of this is Newton and gravity. Newton wanted to describe the motions of the planets, so he formed a mathematical description of them, but to test the theory it needed to be translated into what we see on Earth.

Let’s take a simple but illustrative example. Imagine standing in a pretty dark room (not completely pitch black) with a device that can count the number of photons you receive per second. The experiment is to find out whether there is a very dim light bulb at a given position in the room. You know where the light bulb *should* be, but you are not sure whether it *is* there, so you point your device in that direction. In this particular case, you have extremely good knowledge of the background dim light, since you can point the device in other directions and measure the number of photons. With some calibration, you decide that **on average** you receive two photons per second. Now the question is: how do you know if there is a light bulb? As you may have guessed from the emphasised ‘average’, the average is not the full story. If you were to observe three photons, what would that mean? What if you only see one? Does that mean your calibration is wrong?
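Photon counting like this is usually modelled with Poisson statistics (my assumption here, though it is the standard one for counting experiments). A quick sketch shows why a single reading of one or three photons tells you very little when the true average is two:

```python
from math import exp, factorial

def poisson_pmf(k, mean):
    """Probability of counting exactly k photons when the true average is `mean`."""
    return mean ** k * exp(-mean) / factorial(k)

background = 2.0  # calibrated average photons per second, as in the example

# Even with no light bulb at all, counts other than 2 are entirely ordinary:
for k in range(6):
    print(f"P({k} photons in one second) = {poisson_pmf(k, background):.3f}")
```

Counts of one and three come up roughly 27% and 18% of the time respectively with no bulb present at all, so a single second of data cannot settle the question.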

The above example is extremely common in the field of astrophysics and indirect dark matter searches. We make up some model, and then try to calculate whether it is possible to see a signal from that model. Obviously, there are some variables to this problem, such as how far away the light bulb is, or how bright it is inherently. These are what we call the *parameters* of the model. If it is not possible to observe a signal with a particular detector, most of the time people do not give up on that model but instead say that the parameters of the problem are such that we would not be able to see it. This is not as unreasonable as it sounds when you consider the scale of something like a dark matter search. If we took the same problem with the light bulb and scaled it up to the size of the Universe, then spotting a faint light bulb from millions of miles away would be a tall order. Instead, the idea is to ask: since we did not see it, what is the brightest it could have been? The bulb could not be in the same room as us while also being as bright as the Sun without us seeing it. Therefore this combination of parameters is ruled out. This process is often referred to as setting upper limits.
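As a toy version of this (my own simplification, not the construction used in the paper), you can ask: given the calibrated background of two photons per second and an observed count, what is the largest signal rate that would still have a reasonable chance of producing so few photons?

```python
from math import exp, factorial

def poisson_cdf(k_obs, mean):
    """Probability of counting k_obs photons or fewer, given the true mean."""
    return sum(mean ** k * exp(-mean) / factorial(k) for k in range(k_obs + 1))

def upper_limit(k_obs, background, cl=0.95, step=0.01):
    """Scan the signal rate upward until observing k_obs or fewer photons
    becomes too unlikely; that rate is the upper limit at confidence cl."""
    signal = 0.0
    while poisson_cdf(k_obs, background + signal) > 1.0 - cl:
        signal += step
    return signal

# We pointed at the bulb and saw 2 photons in a second, on a background of 2/s:
print(f"95% upper limit on the bulb's rate: {upper_limit(2, 2.0):.2f} photons/s")
```

Any bulb delivering more than about 4.3 photons per second to our detector is ruled out; a dimmer one could still be hiding in the background fluctuations.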

**Fisher information matrix**

To avoid talking about matrices, the easiest way to understand the Fisher information matrix is to think of it as a box of numbers with many entries. Each entry contains a number that describes a feature of the parameters called the variance or co-variance. It is not important what these words mean technically, but they are the key ingredient for forecasting. What is special about the Fisher information matrix is how simple it is to calculate. Most of the time, physicists are required to run simulations that attempt to calculate whether their theory is observable. This takes a long time and significant computing power to do correctly. The first result of our paper shows that the much simpler Fisher information matrix produces results similar to those simulations, and as such can be an extremely reliable tool moving forward.
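To make this concrete with a deliberately tiny example (mine, not the paper's calculation): for a Poisson counting experiment, the Fisher information about the mean rate is one over the rate per second of data, so the forecast uncertainty is a one-line formula. Comparing it against the scatter of many brute-force simulated experiments shows the two routes agreeing:

```python
import random
import statistics
from math import exp, sqrt

random.seed(1)

def poisson_draw(mean):
    """One Poisson sample via Knuth's multiplication algorithm."""
    threshold = exp(-mean)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

mu_true = 2.0  # true photon rate per second
n_sec = 500    # seconds of data per experiment

# Fisher route: information per second is 1/mu, so F = n_sec / mu
# and the forecast uncertainty is 1/sqrt(F).
sigma_fisher = sqrt(mu_true / n_sec)

# Simulation route: run the whole experiment many times and
# measure the scatter of the estimated rate directly.
estimates = []
for _ in range(1000):
    counts = [poisson_draw(mu_true) for _ in range(n_sec)]
    estimates.append(sum(counts) / n_sec)
sigma_sim = statistics.stdev(estimates)

print(f"Fisher forecast:    {sigma_fisher:.4f}")
print(f"Simulation scatter: {sigma_sim:.4f}")
```

The agreement is the point: the Fisher number took one line of arithmetic, while the simulation took half a million random draws.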

In the second half of the paper we showed how the Fisher information matrix can be used in a host of other ways. Most importantly, we gave a useful prescription for extracting the maximum amount of information from a given search strategy. What that means is that we have defined a method for calculating the optimal observational strategy. Going back to the example with the light bulb in a room: if there were now many light bulbs, and you knew roughly where they were but not exactly, then it would be important to be able to confidently calibrate the background photon counts. But how do you do this, and for how long? We introduced a concept called the *effective information flux*, which is easy to calculate and takes exactly these effects into account. The key point here is that when designing an expensive experiment with limited observing time, the maximum information gain is of extreme importance. I hope that our prescription will help the community towards that goal.
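To illustrate the kind of trade-off being optimised (a toy on/off counting model of my own, not the effective information flux formula from the paper): suppose you have 100 seconds in total and must split them between staring at the suspected bulb and calibrating the background elsewhere. The forecast uncertainty on the bulb's rate depends on the split, and there is a best choice:

```python
from math import sqrt

# Toy numbers, not from the paper: background rate, assumed signal rate, time budget.
b, s, total_time = 2.0, 0.5, 100.0

def sigma_signal(t_on):
    """Forecast uncertainty on the signal rate when t_on seconds are spent
    on-source and the remainder on background calibration (on/off counting)."""
    t_off = total_time - t_on
    return sqrt((b + s) / t_on + b / t_off)

# Scan all splits in steps of 0.1 s and pick the one with the smallest uncertainty.
candidates = [t / 10 for t in range(1, int(total_time * 10))]
best = min(candidates, key=sigma_signal)
print(f"Best split: {best:.1f} s on-source, {total_time - best:.1f} s calibrating")
print(f"Forecast uncertainty there: {sigma_signal(best):.3f} photons/s")
```

In this simple case an even split is nearly optimal, but with many bulbs and a poorly known background the best allocation is far less obvious, which is where a quantity like the effective information flux earns its keep.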