Investigating the early Universe with Professor Nial Tanvir


Professor Nial Tanvir, who is involved in the proposed ESA M5 mission THESEUS, met with SEQ at NAM 2019 to discuss how gamma-ray bursts can be used to investigate the early Universe.

In June, some 500 astronomers and space scientists gathered at Lancaster University, UK, for the Royal Astronomical Society National Astronomy Meeting 2019 (NAM 2019). The conference, which SciTech Europa Quarterly attended, is the largest annual astronomy and space science event in the UK and saw leading scientists from the UK and around the world present the latest cutting-edge research.

NAM 2019 incorporates the RAS National Astronomy Meeting (NAM), and includes the annual meetings of the UK Solar Physics (UKSP) and Magnetosphere Ionosphere Solar-Terrestrial (MIST) groups. The conference is principally sponsored by the Royal Astronomical Society (RAS), the Science and Technology Facilities Council (STFC) and Lancaster University.

On the sidelines of the event, SEQ met with Professor Nial Tanvir from the University of Leicester, UK, who is involved in the proposed ESA M5 mission, THESEUS (a space mission concept aimed at exploiting gamma-ray bursts for investigating the early Universe and at providing a substantial advancement of multi-messenger and time-domain astrophysics) to discuss this exciting area of astronomy.

You began researching gamma-ray bursts in 1997. How have recent discoveries – such as the fact that there are many more high-energy gamma-ray sources in the Universe than previously thought – impacted your work?

It is certainly true that the range of phenomena that we have found in the last 25 years or so has turned out to be broader than anyone imagined; different classes of gamma ray bursts, different events which mimic gamma ray bursts and so on.

And also perhaps the fact that, as has been highlighted in a presentation here today, many of these events aren’t exotic but are, in fact, very frequent?

Yes. In a sense, one thing this has changed is that, while the same kinds of follow-up observations are very often used, diagnosing as early as you can the kind of event you are dealing with certainly has implications, not least with regard to how much effort you put into chasing it. The most interesting things, the rarest things, the things that we don’t understand very well, all tend to be the highest priority, and figuring that out quickly (these events, of course, tend to be brief) means there is a better chance of optimising your follow-up.

Are capabilities evolving now to make this possible?

Sometimes, yes; but there are some examples where this is still difficult, for instance when you are looking at events at extreme distances. Indeed, very distant gamma ray bursts are still of great interest, but one thing we have learned over the years is that trying to determine their distance based on high energy properties (which is what you tend to get first) is both difficult and unreliable. As such, that does push you towards trying to make at least some observations in the optical and infrared bands early on, which is onerous and expensive to do. But that seems to be, at the moment at least, the only way we have of doing it.

There is a hope that future missions might have better optical/infrared capability built in on board, which would make things a lot easier and more efficient. That would certainly be the case if this is put together with other wave bands, and now gravitational waves and, in the future, potentially, very wide-area radio surveys, and so on. These are all helpful and, of course, we might then go on to discover new things, new phenomena that we have not yet predicted.
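A rough illustration of why chasing the most distant bursts pushes observations into the infrared: cosmological redshift stretches a source's rest-frame light by a factor (1 + z), so a spectral feature such as the Lyman-alpha break (rest-frame 121.6 nm) moves out of the optical for the highest redshifts. The specific redshift values below are illustrative assumptions, not figures from the interview.

```python
# Observed wavelength of the Lyman-alpha break for a source at redshift z.
# Rest-frame Lyman-alpha wavelength is 121.6 nm; redshift stretches it
# by a factor of (1 + z).
LYA_REST_NM = 121.6

def lyman_alpha_observed_nm(z: float) -> float:
    """Observed Lyman-alpha wavelength in nanometres for redshift z."""
    return LYA_REST_NM * (1.0 + z)

for z in (2.0, 6.0, 8.0):
    obs = lyman_alpha_observed_nm(z)
    band = "infrared" if obs > 750 else "optical/UV"
    print(f"z = {z:.0f}: break observed at {obs:.0f} nm ({band})")
```

At z of around 6 and beyond the break lands past roughly 750 nm, in the near-infrared, which is one reason early optical and infrared follow-up is needed to pin down distances for the most remote events.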

Observations of short gamma-ray bursts (SGRBs) and gravitational-wave sources will provide direct measurements of heavy element enrichment throughout the Universe. How do you hope to be involved in this? What will be the main challenges of achieving this?

Great question. I think it is a really important project globally to try to determine the heavy element enrichment history and in particular the extent to which neutron stars being shredded in mergers contribute to that. They could be the dominant source, but they might not be. There is certainly a lot of anticipation around this now, and some sense that it might be a solved problem, at least in terms of us identifying the kilonova explosions created by the mergers. However, to really prove that point is going to be a long and arduous process.

One key angle that we are coming at this problem from is to examine in detail any further mergers that we find in the coming years where we get the gravitational wave event, we find the kilonova event that accompanies it, and we try to measure its properties in as much detail as we can. The goal of that is to work out from each event what the heavy element yield is and how much of each different element is formed. And that gives you at least one side of what might be termed the ‘production budget’, but that is complicated for at least two reasons: one is that it is probably very variable from event to event, and so we need to somehow span a population of events; and the second is that disentangling the actual heavy element yield from the light that we see is potentially also very difficult. Broadly speaking, the more heavy elements that are produced, the brighter the event is likely to be. But pinning down individual elements from the spectroscopy is difficult, not least because we still don’t have enough knowledge of even the relevant nuclear and atomic physics to give us clear predictions.

Could AI or machine learning tools play a role in filling in these knowledge gaps?

To a certain extent this can indeed help, and people are working on this by doing more calculations with the necessary atomic transitions, for example. But putting that into an astrophysical context becomes complex. Obviously, we tend to start out modelling these things – for instance using the classic ‘spherical cow’ type models where everything is treated with a minimum number of free parameters in order to simplify things as much as possible. But, of course, the reality is likely to be more complex, and we have got a number of different possible sources of material being ejected from the systems, and they could have different compositions, different velocities, and different density structures. And to make things even more complicated, potentially, the light of one component might be enshrouded by another component that is outside of it, in which case you only have a very rough idea that that inner component was produced, and working out what it is made of is going to be a real challenge. We are setting ourselves up here for a pretty long-term global project to solve that.

Is the appetite there to do that?

Definitely. Because the whole multi-messenger, gravitational wave, electromagnetic discipline is in its infancy, there is great enthusiasm for really throwing everything we have at any new event. There is also great enthusiasm on the theoretical side for improving those models and trying to remove as many uncertainties as we can, and as we go forward it will become clearer what uncertainties remain. It is nevertheless difficult to predict whether this is a problem that will take several decades to solve, or only a few years.

The other angle to this is the cosmic evolution. Of course, it is interesting to know what the heavy element yield from each individual merger is and the range that can span. But we also need to map out those mergers over cosmic history so that we can start to see whether the total global production matches what we find in terms of the heavy elements. We need to put those two things together, which is challenging, but that is where the gamma ray bursts come in because we can see the mergers that produce short gamma ray bursts at great distances and so we can start to map out their cosmic evolution. This is still difficult though, because many short duration gamma ray bursts turn out to be very faint in other bands, and so while we might detect the burst, we might not detect the afterglow and so can’t do much with it.

While this therefore remains a significant challenge, it is one which stands to be better addressed by the next generation of ground based telescopes, such as the ELT, which will enable us to chase things that are much fainter than we are able to chase today.

How important are a combination of ground- and space-based technologies for your work? Do you have any hopes/concerns for the future with regard to proposed telescopes/missions etc.?

The combination of ground and space based technologies has been completely crucial for a great deal of what we have done. Indeed, that is where Swift really came into its own by virtue of providing very rapid, very good localisations, as this meant that we could quickly turn all the other telescopes onto a target.

The issue we have from the point of view of funding and building new facilities is that this is largely done in isolation, which means that while the synergies between different facilities are appreciated, and it is understood that they will benefit the science case, the actual funding itself is disjointed, in most cases at least.

In the past, the temptation for space agencies has been to build a very powerful next generation satellite that does much more than its predecessors, but they then find this can drive fields into something of a dead end, in the sense that once that mission is finished it is even more expensive to build something to supersede it. It can also mean that there is no money left to build the synergistic capacity, and so they might have to sacrifice one way of viewing the Universe because they have essentially invested everything in another approach. The alternative philosophy would be to build somewhat less ambitious projects which have a more widely spread capacity across all the different wave bands. But it will nevertheless remain impossible to please everyone all of the time.

You mentioned multi-messenger astronomy, and although this is indeed in its infancy it seems to be a very promising field. What are your thoughts on this? How would you like to see it evolve? Where would you like to see the priorities lie when it comes to investments perhaps or instruments? In your earlier presentation, you also mentioned KM3NeT, a next generation neutrino telescope, and so do you feel that the different infrastructures involved in multi-messenger astronomy are all coming together now?

Yes, it is indeed all coming together. We are in an era now where, of course, we have had the first gravitational wave detections, which have been remarkably successful. We have also had breakthroughs in terms of identifying sources of high energy neutrinos and fast radio bursts (while the latter is not exactly multi-messenger in quite the same way, it is at the very extreme end of the electromagnetic spectrum). These are topics which we have been anticipating for many years, and the community has been working towards these breakthroughs for some time. As such, it is really great that it is starting to happen.

It will almost undoubtedly be gravitational waves which go on to be the highest priority and the most fruitful area of study; the gravitational wave breakthroughs have been remarkable, and they touch on so many different problems. We were very lucky with the first neutron star binary event because it was such a rich phenomenon, and it meant that we could do things including tests of fundamental physics and the constancy of the speed of light. We have done the cosmological application of standard sirens, as well as the kilonovae and the nucleosynthesis and the connection to gamma ray bursts, which means that we have been ticking boxes here in a way that we really had no right to expect.

Therefore, one really looks to that as being the example of where it is clearest what the path forward in the future is, and the sort of reward that that could bring, perhaps more so than the other areas, which I think are still in a more exploratory phase while we see what comes out of them.

I actually think, from everything else we know, that seeing high energy radiation accompanying the gravitational wave events will not be so common. When we do, it may still be important (we might see the ones that are far away only because they emit high energy radiation), but I think that for most of the ones that LIGO and Virgo will be detecting in the next few years, from relatively nearby mergers, it is going to be unusual to detect high energy radiation. This is largely because any jet that they form is usually going to be pointed away from our line of sight, and so, quite likely, we are not going to see the jet emission. But for the kilonovae, seeing the optical and infrared emission is, I think, still very much on the cards. That is a hard job, because it is harder to survey the necessary large areas of sky quickly, but nonetheless kilonovae should be more ubiquitous, so we would expect to see them accompany most of these mergers if we can dig them out.
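A back-of-the-envelope sketch of why on-axis high-energy emission should be rare: for a pair of oppositely directed jets with half-opening angle theta, the fraction of randomly oriented mergers whose jet points towards us is roughly 1 − cos(theta). The 10-degree angle used below is an illustrative assumption, not a figure from the interview.

```python
import math

def beamed_fraction(half_opening_deg: float) -> float:
    """Fraction of the sky covered by two opposite jets of the given
    half-opening angle, i.e. the chance a random observer is on-axis."""
    theta = math.radians(half_opening_deg)
    # Solid angle of two cones of half-angle theta, over the full sphere:
    # 2 * 2*pi*(1 - cos(theta)) / (4*pi) = 1 - cos(theta)
    return 1.0 - math.cos(theta)

f = beamed_fraction(10.0)
print(f"{f:.1%} of mergers would be seen on-axis")  # roughly 1.5%
```

On this simple geometric estimate, only a percent or so of nearby mergers would show us their jet, consistent with the expectation above that kilonova emission, which is much less strongly beamed, will be the more ubiquitous counterpart.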

How does the ESA M5 mission THESEUS fit in with this? How does it build on Swift’s capabilities etc.?

THESEUS is built on the heritage of Swift, which has been a really successful mission and was designed with a focus on discovering gamma ray bursts, with the intention that we could then learn a lot about the gamma ray bursts themselves. We have been doing that for a long time now and, whilst doing so, we have realised that gamma ray bursts can also act as probes of the Universe in different ways, and that is what motivated the development of the THESEUS concept.

The first way that we hope to be able to use THESEUS to probe the Universe is to find gamma ray bursts at very high redshift in the era of reionisation, or even before the era of reionisation. This is potentially the only way to see individual stars at that time, because they explode in such dramatic fashion, and their brightness means that we can learn a great deal about the environments in which they exploded, the kinds of stars involved, and their host galaxies. We think these are likely very representative of typical galaxies at that time.

This is really a window that is totally independent and different from the traditional way of looking at distant galaxies where, even with the most powerful telescopes, we only see them as very small blobs with rather little information about them. With gamma ray bursts, however, while the samples tend to be smaller, we expect to be able to study those discovered by THESEUS in exquisite detail thanks to follow-up with the next generation of giant ground based optical infrared telescopes, especially by using spectroscopy.

Looking at it from the multi-messenger angle, we hope to be able to tie THESEUS in very closely to the gravitational wave facilities that we expect to exist in the 2030s, which will find binary mergers out to much larger distances than we can at the moment. And so, by finding the accompanying short duration gamma ray burst in at least some fraction of those, we should be able to get redshifts and host galaxies and other things which are really important if we are to exploit them as fully as we can.


Professor Nial Tanvir

Professor of Physics and Astronomy

University of Leicester

+44 (0)116 223 1217

Tweet @PhysicsUoL


