LECTURE 2 : Nothing Will Come Of Nothing
Last time I looked at the basic properties of galaxies : their sizes, morphologies, colours and brightnesses; the different environments they inhabit and the different processes that affect them in different locations. This lecture will be less about what we know and more about how we know it. There will be a little bit more about how we think galaxies behave, but for the most part this will be a very practical session - right down to which buttons you need to press to get the results. I'll also be covering a little bit about what it is we can measure directly and what we have to infer, the amount of subjectivity involved and why this isn't as bad as you might think.
While I'll mention a few different topics, I'll concentrate heavily on two particular areas : firstly photometry (measuring the brightness of galaxies), which is often covered in undergraduate courses but I want to make sure everyone's on the same level (if you've done it before, then great, the first part of this will be easy), and secondly HI spectral line analysis. This is much less common in undergraduate courses so hopefully everyone should learn something from that.
Photometry : An Idealised Example
You can't do photometry on ordinary RGB images like the one above. Those are composed of data at multiple wavelengths - you need the raw data. But images like this are not just pretty pictures, they are genuinely very useful. They enable you to see clearly, at a glance, the colours of different structures. The blue galaxy stands in stark contrast to the orange blob and yellow-green star to its immediate right, which means that those features are unlikely to be associated (also note the diffraction spikes of the star, which, because it is significantly fainter at some wavelengths, may not be visible in some FITS files). This means we're going to want to exclude them from our photometry. Such details can't be easily seen when we look at the original images from individual wavelengths. So if you have such an image available for your data set, keep it open somewhere so you can keep referring back to it.
When you get hold of a FITS file (more on that in a minute) and open it in ds9, the view will probably look something like this :
Not very impressive. We need to alter the colour transfer function. This is just a fancy term for how the data values are converted into RGB values displayed on the screen. ds9's default options aren't very useful, but you can change this by RMB+drag (right mouse button) anywhere in the window. You'll see the greyscale bar at the bottom change; try moving the mouse vertically and horizontally and you'll see what happens. In this way things can be improved quite a bit.
Not great, but at least we can see the galaxy. The main problem is that the galaxy is very faint compared to the stars, so a linear colour transfer function isn't really suitable. A better option is to use a logarithmic scheme, which is easily found from the top "Scale" menu.
Waaaay better ! Note also the middle section determines the scaling range of the data - by default it's min max. This means the minimum data value corresponds to black while the maximum corresponds to white, with everything in between using a greyscale interpolation (if you want more interesting colour schemes, use the "Colour" menu - but I strongly recommend starting with a simple greyscale). The other options let you change that. For example if you use "90%", then the program will compute the value of the 90th percentile and all data points above that will be white.
Another important function : aligning the image, via the "Zoom" menu. This will orient the view of the image such that north is up and east is to the left, the standard astronomical convention. Also, click the MMB to move around.
Now we can start to define regions to measure the source. This is easy : LMB+drag to start defining a region. By default regions are circular. These are sometimes appropriate, but usually ellipses are better, so use the top "Region" menu and choose "shape" from the drop-down menu that appears.
How to decide on the size of the region ? It's very subjective - this is not an exact process. My advice is to play around with the transfer function and the data range displayed. Try and get the galaxy to extend as far as you can, then make the region a bit bigger than that to err on the side of caution. As long as the data has a nice flat background, then it doesn't matter if you extend a bit too far - the noise will cancel out to zero anyway. We'll see how much of a difference this makes later on.
You're also going to need two other region types : masks and background. A mask region tells the program not to count any flux within that region. A background region is used for calculating the level of the background where no galaxies or other sources are present. Even if the background has already been subtracted, it's a good idea to create a few background regions just in case there are any local variations that the data reduction didn't quite manage to handle. So for this galaxy, the final set of regions would look like this :
For the background regions I've chosen to use boxes, but the shape doesn't really matter. For the masks I usually use circles - if you start getting more complicated, it can simply take too much time. Both types of regions are created in the same way as the source region. Once you've created them, double left click anywhere inside one and you'll see a dialogue box like the one above. This serves several functions :
- For the source region you can define its position angle, so that you can rotate the ellipse to fit the galaxy (circular regions, like the one selected above, don't have position angles).
- Mask regions are defined as in the above example. From the "Property" menu, check "Source" and "Exclude". Any flux within this region will be ignored. A mask region has a solid outline with a red strikethrough.
- To define a background region, choose "Background" and "Include". Background regions have dashed outlines. How many you create depends on the size of the galaxy but generally speaking, "a few". All it does is take the mean of the flux within the regions so there's no point going nuts with this. Keep them close to the source aperture but never let them overlap with it.
- You can also set the exact coordinates of the selected region. This can be useful if you're trying to find a source from another data set.
You are now in principle ready to do the photometry. But wait ! First you should save those regions. This will let you (or someone else) check your work later on, and if you do photometry on multiple wavebands you can use your current regions as a starting point rather than defining them all from scratch. To save regions, use the highlighted button :
And of course later you can use the "load" button to load existing regions, either to the original FITS file or at another wavelength.
If you don't see this button, hit the "Region" button in the middle toolbar (note that ds9 has two toolbars, some of which give duplicate functionality and some of which do not).
I suggest creating a file structure in which each galaxy is stored in a separate subdirectory, holding both the FITS files and the region files. It sounds trivial, but simple logistical organisation makes the world of difference on a large project.
Now you're ready to do photometry. From the upper toolbar, choose Analysis->Funtools->Counts in region. That should produce something like this :
It looks intimidating at first but there's really only one interesting value here : net_counts. This is the flux within the source aperture, after discounting the mask regions and subtracting the mean of the background regions.
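Roughly speaking (this is my sketch of what the Funtools task reports - check its documentation for the exact definitions) :

    net_counts ≈ sum(counts in source region, excluding masked pixels) - N_pix × mean(background counts per pixel)

where N_pix is the number of unmasked pixels in the source aperture and the background level comes from your background regions.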
Calibration
What we now need to do is convert this "net counts" into some more physically meaningful value, i.e. the magnitude system. Exactly how we do this depends on the data set. The general formula is as follows :
Where f is the flux, f0 is a reference flux, the exposure time is in seconds and the other parameters we'll come back to shortly. This is a two-stage process so once you have the ratio f/f0 you use another equation :
Where m is the apparent magnitude - the thing you're interested in. aa is the zero point, kk is the extinction and am is the airmass. These parameters might be in the FITS header, which you can access in ds9 via the File menu ("Display FITS header"), but often you need to consult accompanying data. Note that all these parameters - m, exposure time, aa, kk, am - are all waveband dependent !
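As a concrete sketch of those two steps in code - this follows the standard SDSS counts-to-magnitude calibration, but the variable names and the numbers in the example call are mine, so check the calibration values for your own field :

    import numpy as np

    def sdss_magnitude(net_counts, exptime, aa, kk, airmass):
        # Step 1 : convert counts to the flux ratio f/f0
        f_over_f0 = (net_counts / exptime) * 10.0 ** (0.4 * (aa + kk * airmass))
        # Step 2 : convert the flux ratio to an apparent magnitude
        return -2.5 * np.log10(f_over_f0)

    # Illustrative values only - take aa, kk and airmass from the field calibration data
    m_g = sdss_magnitude(net_counts=150000.0, exptime=53.9, aa=-23.75, kk=0.17, airmass=1.2)

Remember to repeat this with the appropriate parameters for each waveband.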
The survey I've used for the above examples is the Sloan Digital Sky Survey. This has an excellent, very simple web interface to let you access both the data and the calibration parameters. If you open the SDSS "navigate" tool, you'll see something like this :
Note that if you click on the above hyperlink you'll see something slightly different, for reasons I'll explain in a moment. Anyway the interface is mostly self-explanatory. Positional buttons on the left let you find whatever source you're interested in. They understand sexagesimal notation, so if you input 12:30:59 it will automatically be converted to decimal degrees. In more recent versions there's also a "Name" box above the position, where you can just type the common name of the source (e.g. M31) and it will automatically find the coordinates if it has that name in its database.
You can click and drag in the main window to pan the view slightly. The RGB image is composed from the 5 different wavebands the SDSS observes (u, g, r, i, z) and does not necessarily resemble what you'd see, but is realistic - bluer objects really are bluer than red objects.
Once you've identified the object you're interested in, left-click to place the green cursor at its location. Then you can hit "explore" in the panel on the right. This will open a new window like this :
Which gives you a lot more information, but can look intimidating. We don't need most of this, however. You can see it also gives magnitude estimates in each of the different wavebands - these are useful, but don't trust them. These have been done by fancy automatic algorithms but they are notoriously unreliable, tending to break apart large galaxies into multiple sources and include stars and other non-extragalactic objects which should be masked out. They can be a useful sanity check, but don't be surprised if your results turn out to be quite different to these values.
The lower half of this screen covers spectroscopy, which is beyond the scope of this course because I know absolutely nothing about it.
To access the photometric calibration parameters, use the "Field" button highlighted on the left. That will open something like this :
Which is basically everything you could ever want... except you don't need to do this any more ! From data release 8 onwards, the helpful people at the SDSS decided to do this calibration for you. So now you can use the very much simpler formula :
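If I remember the DR8 conventions correctly, the corrected frames are calibrated directly in nanomaggies, so the conversion reduces to roughly :

    m = 22.5 - 2.5 log10( net_counts )

with 22.5 being the zero point of the nanomaggy system - but do check the data release documentation for the frames you're actually using.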
So you don't need anything apart from the net counts from ds9. Hooray ! So why am I mentioning this more complex data ? Well, you'll still need to access this "Explore" window for other reasons. First, you can download the FITS files here via the "FITS" button in the "PhotoObj" section (NOT the one in the "SpecObj" section). Second, near the bottom left you have a tool, "NED search" which provides access to information from other surveys that have examined this source.
What if things aren't so ideal ?
The basics of photometry are easy. In practice, a bunch of things can happen to make it more complicated.
- Stars. We saw an example of a foreground star in the ideal case, but there its emission didn't overlap with the disc of our target galaxy - we masked it simply for the convenience of defining the aperture. In other cases things can be much worse. This example shows one of the worst possible situations, where a very bright foreground star is directly over a low surface brightness galaxy. In cases like these, it's inevitable that your measurements will suffer.
- Galactic cirrus. Clouds of dust in our Galaxy cause extinction, obscuring more distant objects. Sometimes this can be subtle and hard to spot, as in the example below on the left. Here the top spiral galaxy just looks a bit redder than the one below it (even with practice, this is hard to spot visually because some spiral galaxies really are as red as this). The example on the right is a really extreme case, where the cirrus is so dense that even foreground stars are hard to see, much less distant galaxies. Towards the galactic centre, extinction can make objects appear fainter by hundreds of magnitudes - enough to obscure a quasar if there was one !
- Fringing is an instrumental effect relating to defects in the optical filters. For the SDSS it's quite rare, but it can be pretty serious for other surveys. The images below compare the same patch of sky observed with the INT WFS, using the B band on the left and the i band on the right. This is a severe example, close to the worst possible case.
- Galaxies aren't always well-separated. Sometimes the galaxies themselves make measurements difficult. This is a problem which gets worse before it gets better : galaxies which are very far apart are fine, and galaxies which have actually merged are fine (because now you can count them as a single object), but galaxies experiencing a close encounter are very difficult. For objects like those below, it's very difficult to see exactly where one begins and the other ends. This becomes even harder in ds9 when you play around with the colour transfer function.
These problems can generally be alleviated to some extent, and sometimes removed completely (if a star is well-separated from the galaxy, for example). Sometimes you can correct the problem sufficiently well as to make useful measurements, but not always - that's just part of the reality of observational data that you have to accept. Really bright stars in front of your galaxy, for example, can't be removed with any accuracy. You can mask them, but that won't restore the lost data.
Galactic cirrus can be corrected in a more sophisticated way. Fortunately, people are so obsessed with dust (both correcting for it and studying it) that they've constructed models of the extinction across the entire sky. You can access these corrections via NED's Coordinate Transformation & Galactic Extinction Calculator, either directly or via a standard NED search (which you can also get directly from the SDSS "Explore" panel). From a standard NED search, scroll down until you see something like the following :
Here you see a bunch of different corrections for different wavebands, and links to the files explaining these models (which I will not go into). The ones you want are the SDSS values in the upper panel (the older versions in the lower panel are probably a bit less accurate). Applying these corrections is easy - just use this simple equation :
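    m_corrected = m_observed - A_band

where A_band is the extinction NED lists for your waveband (extinction makes things fainter, hence the subtraction).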
Or in other words just get your apparent magnitude as normal, then subtract the value given in the NED table. Note that this is strongly wavelength dependent ! So first of all, of course telescope mirrors aren't like normal mirrors :
... but if you didn't know that you're probably reading the wrong guide (sorry !). More subtly, objects in the sky are actually brighter and bluer than they appear. Extinction doesn't just cause objects to appear fainter or redder, it does both. So the correction causes a diagonal shift on the colour-magnitude diagram, not a vertical or horizontal one.
For fringing, or other problems with the background, if the variation is regular then you might be able to correct by fitting a polynomial. But for very complex, strong variations you're better off not trying - subtracting a model will do more harm than good. Fortunately, even weak variations are very rare for the SDSS, which has an exceptionally flat background.
If all else fails, you'll just have to flag bad data. Flagging just means labelling your data in some way that describes the problem (presumably, you'll create a table with columns holding the name of your object, the position, net counts in each band, etc... add another column for the flagging). Exactly how you do this is up to you - you could create an alphanumeric code or even write a short description. I like using numerical codes since this makes things easy to sort in a database program (such as Topcat). So for example :
0 : perfect data, no problems
1 : contamination by foreground stars but minor
2 : severe cirrus
... etc. Just make sure you store these codes in a file somewhere close to your data so you don't forget them ! Flagging is important because eventually someone else might want your data for totally different purposes to what you intended. Sometimes they might just want to know if there was a galaxy present at all - sometimes that's enough to do useful science. More often, they'll want to know if they can trust the photometry, and if not, then what exactly was the problem.
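For example, a results table with a flag column might look something like this (entirely made-up names and numbers, just to show the layout) :

    # flag codes : 0 = clean, 1 = minor star contamination, 2 = severe cirrus, ...
    name,ra,dec,counts_g,counts_i,flag
    Galaxy1,187.70593,12.39112,152340,208519,0
    Galaxy2,187.99215,12.33901,88712,104330,2

A plain numeric column like this sorts and filters very easily in Topcat.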
OK, we fixed everything. Now what ?
So you've measured the data as best you can, corrected it where possible and flagged everything that needed flagging. What are you supposed to do with these measurements ? Well, having a single apparent magnitude measurement is USELESS and BORING, no-one wants to hear about your boring photometry. Start telling people about it and it'll be like listening to Grandpa Simpson :
But in combination with other data it can become very interesting ! I'll go through some points in detail but as a sneak peek :
- Distance : this gets you the luminosity of your galaxy and also tells you about its environment, based on the other galaxies around it
- Colour : this tells you about the star formation history and/or metallicity
- Morphology : this gives you clues to the dynamics of the object and the influence of the environment via the morphology-density relation
- Kinematics : enables total mass estimates leading to all kinds of fun with dark matter
- Non-stellar content : e.g. gas and its relation to all other parameters
Distance is the most important parameter. It tells you about the environment since large redshift surveys have already mapped the 3D positions of very large numbers of galaxies, so most of the time there will be a catalogue from which you can get a reasonable idea of what sort of environment you're dealing with (and thus what processes might be at work). Distance also lets you get absolute magnitude, the intrinsic luminosity of your galaxy : meaning you can make meaningful comparisons to other galaxies. This you can get via the standard distance modulus equation :
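In its usual form :

    M = m - 5 log10( d ) + 5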
Where m is the apparent magnitude, M is the absolute magnitude, and d is the distance to the source in parsecs (not kpc or Mpc, just regular parsecs).
It's also possible to convert magnitudes into luminosities, which can be a more intuitive measure of brightness by comparing it to the Sun :
I don't know why blogger has decided to make that equation so big, it's not especially important or anything. Oh well. The standard solar absolute magnitudes are generally reckoned to be around 5.45 in the B band, 5.33 in the g band and 4.48 in the i band.
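Written out, the conversion is just :

    L / L_sun = 10^( 0.4 × (M_sun - M) )

using whichever of those solar values matches your waveband.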
For most analyses, magnitudes are generally sufficient, but we do need the luminosity values for other calculations (some examples will be given later). Note that solar luminosity and solar mass are not directly equivalent : if your galaxy is a billion times brighter than the Sun, it does not follow that it has a mass a billion times greater than the Sun (though that would be a reasonable, zeroth-order ballpark estimate).
So now you've got your absolute magnitude, you can start those all-important comparisons with other galaxies. Even absolute magnitude by itself is enough to do useful science.
Luminosity functions
A luminosity function just measures the distribution of luminosities or absolute magnitudes (the terms are somewhat interchangeable); i.e. how many galaxies there are in different magnitude bins. Now you may think that after having done all this work and carefully inspected the data, this will throw away a lot of information. And it will, as other astronomers will attest. Binggeli (famous, among other things, for cataloguing the Virgo cluster) drew this charming cartoon of the universal luminosity function crushing all the details beneath its foot :
Of course there are other things we'll extract from the data beside the LF, but LFs themselves are surprisingly interesting. Now here I must digress from the main theme of the lecture and stray into theory, because if I don't, you'll go away thinking that luminosity functions are really boring. The thing is that there's a massive discrepancy between theory and observations. Here I'll just give a very brief, superficial analysis, but I'll go into a lot more details in the remaining lectures.
Here's a plot of a simulation by Moore et al. 1999 of the formation of a Virgo-mass object on the top...
... and a Local Group-mass object on the bottom. They're practically identical, but as we know from the first lecture, that's not the case in reality. The Local Group has far fewer galaxies than the Virgo cluster. Luminosity functions do indeed suppress details - the problem is, we don't even understand galaxies at this very crude level of how many there should be. Luminosity functions are how we quantify this "missing satellite" problem. Many more details on the nature of the problem and how we might solve it in the next two lectures.
There are different ways of parameterising the luminosity function. One of the simplest is the dwarf to giant ratio. This is very simple : you count the number of giant (say, brighter than -19) and dwarf (fainter than -14) galaxies in a region, and divide the one by the other. Easy. Except that, as mentioned last time, there's no widely-accepted standard definition of dwarf and giant. While it's not such an issue these days, not so long ago it wasn't common to include full data tables in a paper (paper cost money, after all). So if authors had used a slightly different definition, then it became very hard to make comparisons. Fortunately the rise of online data publication has made that essentially a non-issue.
The D/G ratio has the benefit of being a single number, making it easy to parameterise an environment and study how it varies. The disadvantage is that it's not very precise, and throws away even more of the details than the LF. With surveys becoming ever larger, these days the most common approach is to actually display the LF itself. Which generally looks something like this :
This one uses actual luminosity but they can also use absolute magnitudes instead.
The standard approach is to describe the LF with a Schechter function :
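In its usual form (conventions for the normalisation vary a little between papers) :

    φ(L) dL = φ* × (L/L*)^α × exp( -L/L* ) d(L/L*)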
This has two shape parameters (plus an overall normalisation φ*) : L*, which describes the turnover point or "knee" of the function, generally about where our Milky Way is found, and α, the slope of the faint end. Astronomers have this very bad habit of making exciting discoveries and reporting them in the most boring way possible. So instead of talking of hundreds of missing galaxies, they talk of the "faint-end slope of the luminosity function". But if you hear that term, then that's what it should trigger in your head, "ahh, I wonder if they've found any of those missing galaxies yet or figured out where they've gone". The answer usually being no, of course.
Constructing the luminosity function is not without issues, not least of which is the issue of whether we've really found every galaxy in each luminosity bin. This is a hugely complicated issue, but more on that next time.
There are variations on the luminosity function. For example, there's the HI mass function :
... which is used for understanding how the HI gas content varies between different environments, and the velocity function :
This measures the distribution of galaxies which have a given circular (rotational) velocity, which is a good proxy for total dynamical mass. Here you can see the discrepancy between theory (red and blue lines) and observation (black circles) even more clearly.
Colour
It's already obvious that comparisons are king, turning a simple observation like brightness into a major astrophysical challenge. This is especially true for colour, which as mentioned last time is useless without an accompanying magnitude measurement. Fortunately you can't really get colour without measuring magnitude, so that doesn't require any extra work.
As described, colour is simply the subtraction of the brightness in two different wavebands. By convention, colour is usually written as (for example) g-i, but what we really mean is Mg - Mi. Colour (or rather, position on the CMD) is an important indicator of environmental effects in its own right, indicating whether galaxies are mainly star-forming or quiescent. But to use it properly you really need a large sample, comparing the CMDs divided by environment, morphology, distance etc. to examine how they vary.
Colour can also be used to give an estimate of the true stellar mass rather than the luminosity. There are many different recipes used, which depend on the wavelengths you've measured. A common one :
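One widely used example, based on the g-i colour (this particular version is from Taylor et al. 2011; there are many alternatives depending on which bands you have) :

    log10( M* / M_sun ) ≈ 1.15 + 0.70 (g - i) - 0.4 M_i

where M_i is the absolute magnitude in the i band.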
This is useful when you need to compare quantities measured in a different way. For example, whereas optical astronomy uses magnitudes, radio astronomy uses flux. Converting everything into common, physical units allows for meaningful comparisons.
As mentioned, the CMD has the red sequence, blue sequence, and the transition region. You don't have to use the g and i bands by any means, and you can even use non-optical data. This can give a much better separation of the sequences, making it easier to determine with greater accuracy whether a galaxy is in one of the sequences or in the transition region. For example in the Virgo cluster :
Here, the small triangles are early-type galaxies. The blue squares are late-type galaxies. Note that those two groups are very well-separated, with only a few points in between the main sequences. The ugly green splodges are late-type galaxies which are strongly HI-deficient (we'll quantify that later). This is pretty compelling evidence for environmentally-driven gas loss (because galaxies outside the cluster are rarely deficient) causing not just a colour change, but morphological evolution as well.
Finally, this simple by-eye method of photometry gives perfectly reasonable values for colour and magnitude. It's not perfect, but unless you do something seriously mega-wrong you're always going to find a CMD like the ones above. As long as you have a reasonable sample size, the subjectivity won't change your conclusions.
Measuring size and shape
Aperture photometry isn't well-suited for measuring structural parameters because the subjective element in defining the aperture leads to much bigger errors here. So it's only very crude... but you can do it, if you really need to. The values will be good enough for a press release (compared to the absolute drivel that gets into press releases, a slightly wrong size measurement is practically something to celebrate !), but you'll want to avoid using them for actual science unless you have no other choice. If you need to measure the radius (or diameter) of your galaxy, you can use the following very simple formula :
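Explicitly :

    diameter ≈ ( θ / 360 ) × 2π × d

which comes out in Mpc if d is in Mpc.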
Where theta is the angular size of your galaxy in degrees, which you can get from double-clicking on the region in ds9 (don't forget to set the coordinate system correctly) and d is the distance to the galaxy in Mpc. This formula just uses the fact that the galaxy spans some fraction of the degrees of a circle (theta/360) and you can work out the circumference of that circle if you know the distance to the galaxy.
You can also estimate the inclination angle of the galaxy. If you assume it's a thin circular disc, then this is simply :
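    cos( i ) = b / a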
Where a and b are the major and minor axes. Again you can get these by double-clicking in the ds9 region. By convention, inclination is said to be zero if the galaxy is face-on and 90 degrees if it's edge-on. A slightly better approach is to assume the galaxy isn't a perfectly thin circular disc, but has some thickness. In that case the formula becomes :
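    cos²( i ) = ( (b/a)² - q² ) / ( 1 - q² )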
The q parameter is somewhat morphologically dependent, but if you use a value of 0.2 then no-one's going to ask too many awkward questions. If the axial ratio a/b is less than q, then assume the inclination is 90 degrees. This formula has some limits : it relies on the assumption of a thinnish disc, and will give you the wrong result if that's not the case (i.e. for a spheroidal, early-type system). In any case, if your estimated inclination angle is less than about 30 degrees or so, you probably want to treat this very carefully : even the nicest galaxies aren't perfectly circular, and your measurement errors will start to dominate.
Basic structures
What can you actually do with an inclination angle ? Quite a lot. First, you need it for correcting the rotational velocities of your source. Spectroscopy only measures the velocity along the line of sight since it relies on redshift (more later), so if your galaxy is face-on, you won't measure any rotation since none of that motion is along your line of sight. Conversely, if the galaxy is edge-on, then you will measure the rotation perfectly with no need for any correction. The general formula is very simple :
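    v_rot = v_observed / sin( i )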
You can see that at low angles, the correction factor becomes extremely large, hence you should be wary of this since your measurement errors will become dominant even if the galaxy is a nice circular disc. If you're not careful, the very small line-of-sight velocity dispersion you'd measure for a face-on disc could be amplified to give an enormous rotational velocity.
If you have a large enough population of objects, then you can actually use your measured inclination angles to test the assumption of the galaxies being thinnish discs. Assuming your galaxies are randomly oriented with respect to you (there's no reason to assume otherwise), then you should see a flat distribution of inclination angles. If they're not discs then the distribution will be skewed (imagine a population of perfect spheres - you'd never measure any high inclination angles, because that would be impossible as they'd look circular from every direction).
I always used to think (and still have a not insignificant residual sympathy for the idea) that irregular galaxies are actually very fat systems, but the inclination angle distribution apparently disproves that. What we are actually seeing is stars in highly asymmetric distributions throughout the disc, creating the appearance of non-discy structures. This is easy to see with spirals, where the regular, circular structures naturally convey the appearance of everything being in the same plane, but the same applies to irregulars too. Of course this result is statistical, so it's hard to be sure for any individual object.
Inclination angle measurements can also be used to correct images to examine the galaxy's true morphology. Here's an example of the Andromeda galaxy as seen by the Herschel Space Observatory :
Quite a dramatic difference. Instead of a classical spiral, Andromeda has a distinct ring structure, suggesting that our nearest neighbour is not an entirely typical object.
Finally, inclination angle measurements are necessary for correcting for the internal extinction of a galaxy, but I won't go into any details on this.
What about morphology ? Well you can really only measure Hubble type by eye - it's very hard to write an algorithm that produces a meaningful result. This is problematic for large samples. If you have 20,000 galaxies, then you have no choice but to use a proxy such as position on the CMD - no-one would expect you to examine all your objects in that case ! An alternative approach is to use crowd sourcing such as the Galaxy Zoo project, in which thousands of volunteers visually classify millions of galaxies. Fortunately, while Hubble type itself is difficult to measure objectively, once you've established what kind of galaxy it is then individual structural parameters can be measured far more precisely and rigorously.
There are two other reasons why it's important (whenever possible) to actually look at your data and not just trust to the statistics. The first is that there are some things you just can't quantify. Much more on this next time, but for now, consider the following example :
This is an exceptionally peculiar galaxy with a truly bizarre structure; some people think it looks like a scorpion but I favour a resemblance to the Loch Ness Monster. My point is that, okay, maybe you could assign some Hubble type to the disc in the middle... but if you did that, you'd lose the most important feature of this galaxy ! You could always call it a peculiar galaxy, I suppose, but this galaxy is so strange it deserves individual attention. Reducing it to a simple Hubble type risks throwing this really cool feature away.
The second reason is that looking at galaxies is fun. For example this galaxy looks pretty innocuous :
... but then someone realised that if you rotate it and play with the colour transfer function, you can pretty well reproduce a famous politically incorrect corporate logo :
And that seems a good point to move swiftly on to another topic.
Surface brightness
Surface brightness is a measure of how much emission occurs per unit area. It's classically expressed in units of magnitudes per square arcsecond :
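    μ = m + 2.5 log10( A )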
Where A is the area in square arcseconds. But you can also get the "surf_bri" parameter from ds9, which is already in counts per square arcsecond so you can just put that value directly into the apparent magnitude equation and get the surface brightness directly. Alternatively, you can express it in physical units :
For which the units of this surface density are usually solar masses per square parsec. Again, this makes it possible to compare with other properties which have different observational measurements.
Surface brightness (or density) is currently a hot topic and poorly understood. For a long time it appeared that disc galaxies have approximately the same central surface brightness without much variation, an observation known as Freeman's Law. But now it seems that that was just a selection effect and we're discovering large populations of much fainter, larger objects than we previously knew about. More on that in the next two lectures.
Ideally we'd measure true 3D volume densities, but this is difficult to do. However, even the 2D surface densities have given us important clues to the complex physics of star formation. Surface densities vary strongly depending on which component you're measuring. Here's a plot from a THINGS paper :
The y-axis is star formation efficiency but that's not important here. The blue component shows the atomic HI gas while the red shows the molecular hydrogen. Note that in spiral galaxies, the HI has a wide range of values but with an upper limit. Beyond this limit almost all of the gas is molecular, though H2 is also found below the threshold. In dwarfs there's far less H2 (none at all in this sample) and the HI has a narrower distribution, with much less at lower surface densities.
What's going on here ? We don't fully understand. We think that if HI reaches some upper limit, it becomes dense enough to self-shield against the stellar radiation background so it's able to cool to form H2. Being colder and denser, this molecular gas has less thermal pressure supporting it against gravity so it's able to collapse and form stars. For dwarf galaxies, we think the upper limit is partially an effect of poor resolution : overall the HI remains at low densities, but local overdensities (which we cannot resolve) may be above the limit and allow star formation to proceed at a low level. Why there seems to be a lower density limit of the HI in dwarfs but not in spirals, we just don't know.
Objectivity : nice if you can get it
That just about covers the basic properties. Clearly we can do a lot with aperture photometry but it would be preferable if we had a more objective measurement technique. And we do, but first we should consider the pros and cons of which technique we want to use.
Aperture photometry has the advantage of being simple and fast to do - typically a couple of minutes per object or even less (though of course some difficult objects can take much longer). It gives very reasonable values for magnitudes and colour despite its subjectivity. The reason for this is that as long as the background is fairly flat, increasing the size of the aperture won't matter very much : the noise values will average to zero. However, aperture photometry comes a bit unstuck when measuring structural parameters like size and inclination, because here the subjectivity makes a much bigger difference to the end results :
Surface brightness profiles, on the other hand, are a different beast. Here the basic idea is to create a series of annuli and measure the average surface brightness within each one, so you can plot the radial variation of surface brightness. This is surprisingly robust to variations in the stellar distribution. Consider this example of NGC 4254 :
It has one much more prominent spiral arm, but the stellar surface brightness profile is pretty smooth until we get down near the sensitivity limits (for more details go here). There's nothing very difficult about constructing such profiles, but they are tedious to do - much slower than aperture photometry. Actually the most difficult part of the process - and I say this from bitter experience - is installing the bloody software you need to fit the profiles.
Having such a profile gives you very precise, mostly objective, sort-of rigorous values for all parameters (size, inclination, magnitude, etc.). They are not perfectly objective, because there is still some subjective intervention required (more on that soon), but they're much better than the totally subjective measurements of aperture photometry. You also of course get the surface brightness throughout the entire galaxy, meaning you can quantify things very precisely indeed, and get detailed information about the galaxy's structural parameters.
The downside is that surface brightness profiles may be impossible to construct, or give you meaningless results, for some extreme objects. Consider the Nessie galaxy : there, plotting a radial distribution just wouldn't be appropriate, because it's not at all well-described as a disc. This can also be a problem for highly irregular or faint objects which appear very patchy. In those cases, trying to fit a profile would be a mistake : you'd get a better result using a simple aperture. And of course you can't fit a profile if the galaxy's angular size is too small.
Surface brightness profiles
As we know, galaxies come in a wide variety of shapes and sizes. Fortunately there's a single equation we can use to describe pretty much every case (at least those where a radial profile makes any sense) : the Sersic profile.
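In its general form :

    I(r) = I_0 exp( -(r/α)^(1/n) )

with I_0 the central intensity.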
Where I is the intensity at some radius r, α is the scale length, and n is the so-called Sersic index. The index is the most important parameter as this controls the shape of the profile. Here's a little animation to show how varying n (while keeping the other parameters constant) changes the profile shape. At high n, the profile is strongly peaked at low radii, while at low n the shape changes quite dramatically to something very similar to a Gaussian.
More often, surface brightness profiles are shown with a logarithmic y-axis (because of both the strong central peak and the convention of the magnitude system), so here it is again with a logarithmic scale :
And although it's not so interesting, for the sake of completeness here's the effect of varying only α :
Which has the simple effect of just flattening the outer parts of the profile without fundamentally changing the shape.
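If you want to play with these profiles yourself, they only take a few lines to generate (a minimal sketch, assuming numpy and matplotlib are installed) :

    import numpy as np
    import matplotlib.pyplot as plt

    r = np.linspace(0.01, 10.0, 500)        # radius in units of the scale length alpha
    for n in (0.5, 1.0, 2.0, 4.0):          # bar-like, disc, intermediate, bulge-like
        I = np.exp(-r ** (1.0 / n))         # Sersic profile with I0 = 1 and alpha = 1
        plt.plot(r, I, label="n = %.1f" % n)
    plt.yscale("log")                        # log scale, as in the second plot above
    plt.xlabel("r / alpha")
    plt.ylabel("I(r) / I0")
    plt.legend()
    plt.show()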
Different types of galaxies and their internal components can be described using different values for the Sersic index. Elliptical galaxies typically have 1.5 < n < 20, bulges have 1.5 < n < 10, pseudo-bulges have 1 < n < 2, discs have n~1 and bars have n ~0.5. More details can be found here.
Individual galaxies can have components which have very different Sersic indices. Spiral galaxies sometimes have purely discy profiles, but often contain a central bulge. What you'd do in these cases is not try and fit a smoothly-varying n to the profile, but specify in software where each component begins and ends and it will do the interpolation for you (i.e. the radius at which the bulge ends and the disc begins). Hence these are not entirely objective measurements.
Many but not all galaxies have truncated surface brightness profiles. At some radius, the surface brightness remains well above the sensitivity limit, but then drops precipitously. Galaxies do have a real edge - it's not an exact boundary, but it nevertheless exists. This further complicates how you describe them. For instance, to measure brightness you could extrapolate the profile to infinity (as long as the function is convergent, which it might not be !) to work out the total magnitude, or give the magnitude at the point of truncation (which would be far more sensible, but only if the profile actually is truncated !). Which means that unfortunately, different techniques are more appropriate for different galaxies, making fair comparisons surprisingly tricky.
One choice that's less subject to these difficulties is which value of the surface brightness itself you quote. There are a few different conventions, but the most common is to use the central surface brightness. This isn't the actual measured value, though : the Sersic profile is so strongly peaked that measurement errors would cause havoc. Instead, the profile is extrapolated inwards from the outer regions, where the shape can be measured more accurately over a wider range of radii.
Galaxy sizes
As has been hinted at, measuring this is not at all trivial. The comparison images I showed in the first lecture are not wrong, it's just that defining size can be fiendishly difficult. Consider the following example, NGC 3227 :
These two galaxies are clearly interacting and it's not at all obvious where one begins and the other ends. But it gets much worse. If you look at the raw FITS files, they don't look too bad at first :
But with a little adjustment you can see that this pair of galaxies has some complex surrounding features, including a dramatic southern tail and a northern loop :
This image relies on a combination of (simple) processing techniques. We've discussed altering the colour transfer function already, and using a green-purple colour scheme gives a very high contrast between different features (though it looks ugly). But the key here is smoothing the image, which can be done in ds9 via the Analysis->Smooth menu (see also "Smooth parameters"). Smoothing increases sensitivity at the expense of resolution. The larger the smoothing kernel, the greater the effect - but with too large a kernel you can wash out smaller features.
You'd never reveal these features in the SDSS data without smoothing. It's always good practice to try smoothing your data, because it requires zero effort and minimal time and you just might make an interesting discovery. Essentially you have a small chance of making a publication-worthy discovery for free.
In this case, these highly extended features make defining the size of the galaxy impossible, because you can't say what the galaxy is. Is it one of the discs ? Should it include the smaller extensions close to the central region ? What about the long tail, is that part of the galaxy or should we count it as something else ?
For more regular galaxies, the situation is better but still complicated. Common parameters for the size include the isophotal radius (usually at the level of 25 magnitudes per square arcsecond), the radius of the last visible part of the surface brightness profile, the scale length and the effective radius. Which one is more suitable depends on both the galaxy and the characteristics of the survey data.
The effective radius is a particularly important measurement. This is just another term for the half-light radius, the radius enclosing half the light. For a Sersic profile it can be related directly to the scale length and index :
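    R_e = b_n^n × α

where b_n depends only on the Sersic index, via Γ(2n) = 2 γ(2n, b_n) (the complete and incomplete gamma functions); a handy approximation is b_n ≈ 2n - 1/3. This follows from requiring half of the total light to fall within R_e.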
The key point here is that effective radius can be very different to the isophotal radius, since the Sersic profile is so strongly peaked in the centre. For example, the isophotal radius of the Milky Way is generally reckoned to be about 15 kpc, whereas its effective radius is estimated at between 2-5 kpc, with the latest value apparently being 3.6 kpc. This will be important in the next lecture.
One final point : the effective radius is useful for elliptical galaxies. Having such a steep surface brightness profile, the effective radius provides a convenient point at which to measure the average surface brightness without having to specify a galaxy/survey-dependent isophotal radius. Thus elliptical surface brightnesses are often average values rather than central values.
What about other wavelengths ?
That's certainly more than enough about galaxies in the optical. Many of the techniques described above are also relevant to other wavelengths, but the physics behind them is quite different. Consider the galaxy M31 as seen across the spectrum :
The animation begins with the ultra-violet, which traces hot young stars and the excited gas around them. This makes the UV an excellent way to measure the star formation rate in a galaxy, as does the Hα spectral line. Then we switch to the more familiar visible light, which traces the main sequence stars, followed by the infra-red which comes from dust and old stars, and finally the gas as seen at the 21 cm wavelength.
One of the other key components has already been mentioned, the cold molecular H2 gas. Since we think this is the component largely responsible for star formation, we'd love to be able to observe it. Unfortunately this is almost impossible because of the nature of the molecule, but we think that CO (which is relatively easy to observe) is an effective tracer for H2. The problem is that the chemistry linking CO and H2 is fiendishly complicated, so the conversion "X" factor between CO emission and H2 abundance adds a major level of uncertainty. To make matters worse, it's probably environmentally-dependent, so the conversion factor may vary depending on whether you're inside a galaxy or outside one. At least some H2 is thought to be CO-dark, meaning that it has no associated CO emission.
I know that was an incredibly superficial overview - there are a whole host of different components traced by particular wavelengths, each one of which could take a whole course to explain properly. Instead of attempting this mammoth task, I'm going to concentrate for this final section on the HI line. Partly this is for the very selfish and pragmatic reason that that's what they pay me to study, and partly because it is also one of the most important components of galaxies.
Compositionally, the interstellar medium is actually quite a simple place. By mass it's around 70% hydrogen (of which most is HI, the rest is HII and H2), 28% helium and the remainder are metals (anything heavier than helium). 99% of the ISM is gaseous, with just a tiny fraction in solid dust grains. However, the behaviour of the gas is hugely complicated, with the gas being controlled by a wide variety of complex processes :
And the above diagram doesn't even include external effects like tides or ram pressure stripping. All of these influence the gas and trigger the internal processes which regulate star formation. So, for instance, a tidal encounter might increase the density of the gas, which causes compressional heating but also allows it to radiate and cool more effectively, triggering star formation which generates hot young stars that inject gas, metals and energy into the ISM, with the metals altering the cooling rate and the density wave from the feedback propagating throughout the ISM triggering new stars and perhaps even ejecting the gas into the IGM if the feedback is strong enough... it's an incredibly complex cycle, to say the least.
For all these reasons and more, understanding how the HI relates to star formation is complicated. Yet for all that, there is a correlation. Any HI astronomer will tell you that more often than not, HI is associated with blue, star-forming galaxies rather than "red and dead" ellipticals. There are many very interesting exceptions, but the overall trend is clear.
The Kennicutt-Schmidt Law illustrates the complexity of the trend :
Star formation rate density correlates with the total mass surface density, but not in a nice linear fashion. There's clearly a break at the low end, the effect of the supposed threshold below which star formation doesn't occur. But above that the behaviour isn't exactly a nice linear function either.
Given all these complexities, you might wonder why anyone would study the HI line at all. Well, it has some compensations for its difficulties.
Neutral atomic hydrogen : advantages
The Platonic ideal of an HI survey would be a single-dish map with a truly gigantic spaceborne single dish. That would have the exquisite sensitivity of a single dish combined with the fabulous resolution of an interferometer, plus a huge multibeam receiver for fast survey mapping. Mmmm....
Finding HI : pointed observations
Surface brightness
Surface brightness is a measure of how much emission occurs per unit area. It's classically expressed in units of magnitudes per square arcsecond :
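In the usual convention (with m being the apparent magnitude measured within the aperture) :

$$\mu = m + 2.5 \log_{10}(A)$$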
Where A is the area in square arcseconds. You can also get the "surf_bri" parameter from ds9, which is already in counts per square arcsecond, so you can put that value directly into the apparent magnitude equation and get the surface brightness directly. Alternatively, you can express it in physical units :
For which the units of this surface density are usually solar masses per square parsec. Again, this makes it possible to compare with other properties which have different observational measurements.
Surface brightness (or density) is currently a hot topic and poorly understood. For a long time it appeared that galaxies all have approximately the same surface brightness, without much variation - an observation known as Freeman's Law. But now it seems that this was just a selection effect, and we're discovering large populations of much fainter, larger objects than we previously knew about. More on that in the next two lectures.
Ideally we'd measure true 3D volume densities, but this is difficult to do. However, even the 2D surface densities have given us important clues to the complex physics of star formation. Surface densities vary strongly depending on which component you're measuring. Here's a plot from a THINGS paper :
The y-axis is star formation efficiency but that's not important here. The blue component shows the atomic HI gas while the red shows the molecular hydrogen. Note that in spiral galaxies, the HI has a wide range of values but with an upper limit. Beyond this limit almost all of the gas is molecular, though H2 is also found below the threshold. In dwarfs there's far less H2 (none at all in this sample) and the HI has a narrower distribution, with much less at lower surface densities.
What's going on here ? We don't fully understand. We think that if HI reaches some upper limit, it becomes dense enough to self-shield against the stellar radiation background, so it's able to cool to form H2. Being colder and denser, this molecular gas has less thermal pressure supporting it against gravity, so it's able to collapse and form stars. For dwarf galaxies, we think the fact that they appear to stay below this limit is partially an effect of poor resolution : overall the HI remains at low densities, but local overdensities (which we cannot resolve) may exceed the limit and allow star formation to proceed at a low level. Why there seems to be a lower density limit of the HI in dwarfs but not in spirals, we just don't know.
Objectivity : nice if you can get it
That just about covers the basic properties. Clearly we can do a lot with aperture photometry but it would be preferable if we had a more objective measurement technique. And we do, but first we should consider the pros and cons of which technique we want to use.
Aperture photometry has the advantage of being simple and fast to do - typically a couple of minutes per object or even less (though of course some difficult objects can take much longer). It gives very reasonable values for magnitudes and colour despite its subjectivity. The reason for this is that as long as the background is fairly flat, increasing the size of the aperture won't matter very much : the noise values will average to zero. However, aperture photometry comes a bit unstuck when measuring structural parameters like size and inclination, because here the subjectivity makes a much bigger difference to the end results :
Surface brightness profiles, on the other hand, are a different beast. Here the basic idea is to create a series of annuli and measure the average surface brightness within each one, so you can plot the radial variation of surface brightness. This is surprisingly robust to variations in the stellar distribution. Consider this example of NGC 4254 :
It has one much more prominent spiral arm, but the stellar surface brightness profile is pretty smooth until we get down near the sensitivity limits (for more details go here). There's nothing very difficult about constructing such profiles, but they are tedious to do - much slower than aperture photometry. Actually the most difficult part of the process - and I say this from bitter experience - is installing the bloody software you need to fit the profiles.
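To make the idea concrete, here's a minimal sketch in Python (numpy only) of extracting a crude profile from circular annuli. Everything here - the image array, the centre position, the bin width - is a hypothetical placeholder; real profile-fitting software also handles elliptical annuli for inclined discs, star masking and proper sky subtraction :

```python
import numpy as np

def radial_profile(image, x0, y0, bin_width=2.0):
    """Average the pixel values in concentric circular annuli.

    image     : 2D numpy array of sky-subtracted counts
    x0, y0    : centre of the galaxy in pixel coordinates
    bin_width : width of each annulus in pixels
    Returns the mean radius and mean intensity of each annulus.
    """
    y, x = np.indices(image.shape)
    r = np.hypot(x - x0, y - y0)             # radius of every pixel from the centre

    edges = np.arange(0.0, r.max(), bin_width)
    radii, profile = [], []
    for r_in, r_out in zip(edges[:-1], edges[1:]):
        mask = (r >= r_in) & (r < r_out)      # pixels inside this annulus
        if mask.any():
            radii.append(0.5 * (r_in + r_out))
            profile.append(image[mask].mean())  # mean counts per pixel in the annulus
    return np.array(radii), np.array(profile)

# Hypothetical usage : a fake exponential disc, just to show the call.
yy, xx = np.indices((201, 201))
fake = np.exp(-np.hypot(xx - 100, yy - 100) / 20.0)
r, I = radial_profile(fake, 100, 100)
```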
Having such a profile gives you very precise, mostly objective, sort-of rigorous values for all parameters (size, inclination, magnitude, etc.). They are not perfectly objective, because there is still some subjective intervention required (more on that soon), but they're much better than the totally subjective measurements of aperture photometry. You also of course get the surface brightness throughout the entire galaxy, meaning you can quantify things very precisely indeed, and get detailed information about the galaxy's structural parameters.
The downside is that surface brightness profiles may be impossible to construct, or give you meaningless results, for some extreme objects. Consider the Nessie galaxy : there, plotting a radial distribution just wouldn't be appropriate, because it's not at all well-described as a disc. This can also be a problem for highly irregular or faint objects which appear very patchy. In those cases, trying to fit a profile would be a mistake : you'd get a better result using a simple aperture. And of course you can't fit a profile if the galaxy's angular size is too small.
Surface brightness profiles
As we know, galaxies come in a wide variety of shapes and sizes. Fortunately there's a single equation we can use to describe pretty much every case (at least those where a radial profile makes any sense) : the Sersic profile.
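In one common parameterisation, with I0 the central intensity :

$$I(r) = I_{0} \exp\left[ -\left( \frac{r}{\alpha} \right)^{1/n} \right]$$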
Where I is the intensity at some radius r, α is the scale length, and n is the so-called Sersic index. The index is the most important parameter as this controls the shape of the profile. Here's a little animation to show how varying n (while keeping the other parameters constant) changes the profile shape. At high n, the profile is strongly peaked at low radii, while at low n the shape changes quite dramatically to something very similar to a Gaussian.
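If you want to reproduce something like that animation yourself, a quick matplotlib sketch does the job (all the parameter values here are arbitrary) :

```python
import numpy as np
import matplotlib.pyplot as plt

def sersic(r, I0=1.0, alpha=1.0, n=1.0):
    """Sersic profile : I(r) = I0 * exp(-(r/alpha)**(1/n))."""
    return I0 * np.exp(-(r / alpha) ** (1.0 / n))

r = np.linspace(0.01, 10, 500)
for n in (0.5, 1, 2, 4, 10):
    plt.plot(r, sersic(r, n=n), label=f"n = {n}")

plt.xlabel("radius (arbitrary units)")
plt.ylabel("intensity")
plt.yscale("log")      # log scale - the usual convention, as the next figure shows
plt.legend()
plt.show()
```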
More often, surface brightness profiles are shown with a logarithmic y-axis (because of both the strong central peak and the convention of the magnitude system), so here it is again with a logarithmic scale :
And although it's not so interesting, for the sake of completeness here's the effect of varying only α :
Which has the simple effect of just flattening the outer parts of the profile without fundamentally changing the shape.
Different types of galaxies and their internal components can be described using different values for the Sersic index. Elliptical galaxies typically have 1.5 < n < 20, bulges have 1.5 < n < 10, pseudo-bulges have 1 < n < 2, discs have n~1 and bars have n ~0.5. More details can be found here.
Individual galaxies can have components with very different Sersic indices. Spiral galaxies sometimes have purely discy profiles, but often contain a central bulge. What you'd do in these cases is not try and fit a smoothly-varying n to the profile, but specify in software where each component begins and ends (i.e. the radius at which the bulge ends and the disc begins), and it will fit each component for you. Hence these are not entirely objective measurements.
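As a rough sketch of the idea - note this is a simultaneous two-component fit rather than the piecewise bulge/disc splitting just described, and all the data values are invented - you could decompose a measured profile with scipy :

```python
import numpy as np
from scipy.optimize import curve_fit

def bulge_plus_disc(r, I_b, alpha_b, n_b, I_d, alpha_d):
    """Sersic bulge plus exponential (n = 1) disc."""
    bulge = I_b * np.exp(-(r / alpha_b) ** (1.0 / n_b))
    disc = I_d * np.exp(-(r / alpha_d))
    return bulge + disc

# Hypothetical measured profile (radius in arcsec, intensity in counts)
r_obs = np.linspace(0.5, 60, 60)
I_obs = bulge_plus_disc(r_obs, 50, 1.5, 4, 5, 15) * np.random.normal(1, 0.05, r_obs.size)

p0 = [30, 1, 2, 3, 10]   # initial guesses : the fit is quite sensitive to these
popt, pcov = curve_fit(bulge_plus_disc, r_obs, I_obs, p0=p0, maxfev=10000)
I_b, alpha_b, n_b, I_d, alpha_d = popt
print(f"bulge n = {n_b:.2f}, disc scale length = {alpha_d:.1f} arcsec")
```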
Many but not all galaxies have truncated surface brightness profiles : at some radius, while still well above the sensitivity limit, the surface brightness drops precipitously. Galaxies do have a real edge - it's not an exact boundary, but it nevertheless exists. This further complicates how you describe them. For instance, to measure brightness you could extrapolate the profile to infinity (as long as the function is convergent, which it might not be !) to work out the total magnitude, or give the magnitude at the point of truncation (far more sensible, but only possible if the profile is actually truncated !). Which means that unfortunately, different techniques are more appropriate for different galaxies, making fair comparisons surprisingly tricky.
One common use of surface brightness profiles that's not so subject to these difficulties is quoting a value of the surface brightness itself. There are a few different conventions, but the most common is to use the central surface brightness. This isn't the actual measured value at the centre : the Sersic profile is so strongly peaked there that measurement errors would cause havoc. Instead, the central value is extrapolated from the outer regions, where the shape of the profile can be measured more accurately over a wider range of radii.
Galaxy sizes
As has been hinted at, measuring this is not at all trivial. The comparison images I showed in the first lecture are not wrong, it's just that defining size can be fiendishly difficult. Consider the following example, NGC 3227 :
These two galaxies are clearly interacting and it's not at all obvious where one begins and the other ends. But it gets much worse. If you look at the raw FITS files, they don't look too bad at first :
But with a little adjustment you can see that this pair of galaxies has some complex surrounding features, including a dramatic southern tail and a northern loop :
This image relies on a combination of (simple) processing techniques. We've discussed altering the colour transfer function already, and using a green-purple colour scheme gives a very high contrast between different features (though it looks ugly). But the key here is smoothing the image, which can be done in ds9 via the Analysis->Smooth menu (see also "Smooth parameters"). Smoothing increases sensitivity at the expense of resolution. The larger the smoothing kernel, the greater the effect - but with too large a kernel you can wash out smaller features.
You'd never reveal these features in the SDSS data without smoothing. It's always good practice to try smoothing your data : it requires zero effort and minimal time, and you just might make an interesting discovery. Essentially you have a small chance of making a publication-worthy discovery for free.
In this case, these highly extended features make defining the size of the galaxy impossible, because you can't say what the galaxy is. Is it one of the discs ? Should it include the smaller extensions close to the central region ? What about the long tail, is that part of the galaxy or should we count it as something else ?
For more regular galaxies, the situation is better but still complicated. Common parameters for the size include the isophotal radius (usually at the 25 magnitudes per square arcsecond level), the radius of the last visible part of the surface brightness profile, the scale length and the effective radius. Which one is most suitable depends on both the galaxy and the characteristics of the survey data.
The effective radius is a particularly important measurement. This is just another term for the half-light radius, the radius enclosing half the light. For a Sersic profile of the form above, it can be shown that :

$$R_{eff} = b^{n} \alpha$$

Where b depends only on the Sersic index n; a handy approximation is b ≈ 2n - 1/3.
The key point here is that effective radius can be very different to the isophotal radius, since the Sersic profile is so strongly peaked in the centre. For example, the isophotal radius of the Milky Way is generally reckoned to be about 15 kpc, whereas its effective radius is estimated at between 2-5 kpc, with the latest value apparently being 3.6 kpc. This will be important in the next lecture.
One final point : the effective radius is particularly useful for elliptical galaxies. Because their surface brightness profiles are so steep, the effective radius provides a convenient point at which to measure the average surface brightness without having to specify a galaxy- or survey-dependent isophotal radius. Thus elliptical surface brightnesses are often quoted as average values rather than central values.
What about other wavelengths ?
Two different views of the galaxy I used for the title slides, UGC 1810. They look dramatically different, but actually this is just down to clever manipulation of the colour transfer function rather than using different wavelengths.
That's certainly more than enough about galaxies in the optical. Many of the techniques described above are also relevant to other wavelengths, but the physics behind them is quite different. Consider the galaxy M31 as seen across the spectrum :
The animation begins with the ultra-violet, which traces excited gas around hot young stars. This makes the UV an excellent way to measure the star formation rate in a galaxy, as does the Hα spectral line. Then we switch to the more familiar visible light, which traces the main sequence stars, followed by the infra-red which comes from dust and old stars, and finally the gas as seen at the 21 cm wavelength.
One of the other key components has already been mentioned : the cold molecular H2 gas. Since we think this is the component largely responsible for star formation, we'd love to be able to observe it. Unfortunately this is almost impossible because of the nature of the molecule, but we think that CO (which is relatively easy to observe) is an effective tracer for H2. The problem is that the chemistry linking CO to H2 is fiendishly complicated, so the conversion "X" factor between CO emission and H2 abundance adds a major level of uncertainty. To make matters worse, it's probably environmentally-dependent, so the conversion factor may vary depending on whether you're inside a galaxy or outside one. At least some H2 is thought to be CO-dark, meaning that it has no associated CO emission.
I know that was an incredibly superficial overview - there are a whole host of different components traced by particular wavelengths, each one of which could take a whole course to explain properly. Instead of attempting this mammoth task, I'm going to concentrate for this final section on the HI line. Partly this is for the very selfish and pragmatic reason that that's what they pay me to study, and partly because it is also one of the most important components of galaxies.
Compositionally, the interstellar medium is actually quite a simple place. By mass it's around 70% hydrogen (of which most is HI, the rest is HII and H2), 28% helium and the remainder are metals (anything heavier than helium). 99% of the ISM is gaseous, with just a tiny fraction in solid dust grains. However, the behaviour of the gas is hugely complicated, with the gas being controlled by a wide variety of complex processes :
And the above diagram doesn't even include external effects like tides or ram pressure stripping. All of these influence the gas and trigger the internal processes which regulate star formation. So, for instance, a tidal encounter might increase the density of the gas, which causes compressional heating but also allows it to radiate and cool more effectively, triggering star formation which generates hot young stars that inject both gas and metals and energy into the ISM, with the metals altering the cooling rate and the density wave from the feedback propagating throughout the ISM triggering new stars and perhaps even ejecting the gas into the IGM if the feedback is strong enough... it's an incredibly complex cycle, to say the least.
For all these reasons and more, understanding how the HI relates to star formation is complicated. Yet for all that, there is a correlation. Any HI astronomer will tell you that more often than not, HI is associated with blue, star-forming galaxies rather than "red and dead" ellipticals. There are many very interesting exceptions, but the overall trend is clear.
The Kennicutt-Schmidt Law illustrates the complexity of the trend :
Star formation rate density correlates with the total mass surface density, but not in a nice linear fashion. There's clearly a break at the low end, the effect of the supposed threshold below which star formation doesn't occur. But above that the behaviour isn't exactly a nice linear function either.
Given all these complexities, you might wonder why anyone would study the HI line at all. Well, it has some compensations for its difficulties.
Neutral atomic hydrogen : advantages
- Relatively simple atomic physics. The hydrogen atom is the simplest there is, with just one electron orbiting one proton. The 21 cm line arises due to a spin-flip transition, when the electron changes from the high-energy state (aligned parallel to the proton) to the low-energy state (anti-parallel to the proton). This process isn't strongly dependent on the density of the gas and it's optically thin, so we can detect the HI from within the entire galaxy. Which means we can accurately and easily estimate its total mass. This all sounds great, and it is, but bear in mind that the detailed physics of this is horrendous.
- Precise kinematic measurements can be made, since the line is very narrow. It's common to be able to measure speeds with a precision of just 1 km/s, but if you really want to you can get down to something like 100 m/s. To be able to measure a speed so precisely, bearing in mind that galaxies are at Mpc distances, is extremely impressive. More commonly the precision is about 5-10 km/s, but that's still more than sufficient to accurately measure rotation.
But there's a complication. The red line has a resolution of 10 km/s but a high S/N, whereas the black has 1.3 km/s but a lower S/N. The actual precision of the measurement thus depends on the S/N as well as the instrumental capabilities.
- HI extends beyond the optical disc. A fact which has given us the most important discovery from HI observations of all : that galaxies are rotating more quickly than expected and so must have large quantities of unseen dark matter. And as well as probing the kinematics, the extended nature of the HI makes it a sensitive probe of the effects of environment, since it's more extended than the stars and less bound to the galaxy.
- High resolution is possible. With the right setup, HI measurements can reach ~10" resolution. Not as good as the <1" resolution possible with optical surveys, but good enough to make very precise measurements and comparisons. For example, the fact that lower-density HI "holes" correspond to greater amounts of H2 lends credence to the idea of HI reaching a saturation point and then forming molecular gas.
NGC 628 as seen with the SDSS and THINGS.
Neutral atomic hydrogen : disadvantages
- Highly complex macroscopic physics. All those people who say that the "theory of everything" will be the end of physics are idiots. Understanding the atomic physics of HI doesn't help us understand the incredibly large-scale structures we observe, which depends on the hydrodynamics and requires detailed observations. Theories help, but that's all they do. And of course the difficulties in relating HI to star formation have already been mentioned.
The enormous Leo Ring. See also the Rogue's Gallery for more weird HI features.
- High resolution is technically challenging. Not only do you need a large number of telescopes spread out over a wide area, and not only does this require some serious-level computational power, but the data reduction itself is highly complex. It also comes with a severe sensitivity penalty, which I'll describe in a moment. So yeah, you can do it, but it's tough.
This is the correlator for ALMA, a radio telescope which does not observe HI but uses the same basic principles for data analysis.
- Blind surveys are very slow. You can map the HI line just like you can map the sky at optical wavelengths, but because the wavelength is so large the receivers have to be very large as well. Which limits how many pixels you can have, so limiting your survey speed. Whereas a modern CCD will have millions or even billions of pixels, the ALFA receiver at Arecibo has seven. Not seven million, just seven.
ALFA undergoing maintenance.
- The HI line is very weak. Another reason for the slow survey speed is that the line itself is intrinsically weak. The spin flip occurs spontaneously in each atom only on timescales of ~10 Myr. It's only detectable at all because there are a lot of atoms, and collisions help the transition occur more frequently. This severely limits the distance at which HI can be detected. Whereas optical and infra-red emission can be detected at redshifts >10, the most distant reliable HI detection is at z=0.3, and that's with several hours of integration time with the Arecibo 305 m reflector.
Observing HI
There are several different techniques for observing HI. Each has its own subtleties which make it suited to different kinds of science - none is inherently better or worse, it depends what you need to do.
- Single-dish pointed observations. This is the simplest technique there is : point your telescope at a target, integrate for some time and you get a spectrum out, like the one above. This is simple, relatively fast, and gets you high sensitivity. Of course it only gives you crude kinematic information from the HI spectrum (though this can be surprisingly interesting, as we'll see) and you need a target - no-one's going to let you point the telescope wherever you happen to feel like. This means your selection is biased, usually towards objects already detected in optical surveys.
The Arecibo 305 m reflector.
- Single-dish mapping. It's possible to map the HI using a single dish telescope just as with an optical imaging system. This is relatively simple, but slow for the reasons discussed above. It gives extremely high sensitivity but has poor spatial resolution (Arecibo, as the largest functioning telescope, has a 3.5' resolution; the Chinese 500 m FAST will only slightly improve on this - it's a limitation of dish size, not instrumentation). A tremendous advantage is that you don't need a specific target, just an area of sky you're interested in (and a few such surveys have mapped the entire sky, albeit with rather unimpressive sensitivity). This means that you can use them to detect HI features without any optical bias, i.e. things that have no optical counterpart. Depending on what you're studying, you might be able to get some detailed kinematics and spatial information, but only for nearby galaxies. Most detections in single-dish surveys are far away, so you only get line profiles, with the maps largely consisting of point sources in the noise (see later).
- Interferometry. This is a whole other level compared to single dish mapping. As mentioned, it's a highly complex technique and the observations are slow (even by the standards of single dish observations). Its sensitivity, compared to single dish observations, is shite. No, really, I mean it - it's a fundamental limit of how you combine multiple telescopes together that can't really be overcome (describing why this happens is not easy !). For example, the VLA has a collecting area about 5x less than Arecibo, but its sensitivity to low surface density gas is about a thousand times worse ! This is so poor a limit that it essentially can't be overcome by throwing more observing time at it. On the positive side, an interferometer has both a wide field of view and a much higher resolution than a single dish. So you get a large, detailed map for every pointing, enabling detailed kinematic studies at much greater distances. You don't really need a specific target for interferometry, but only in the same way that you don't need shoes - the telescope time is difficult to get, so you have to be very confident there's something there. You can't really do blind HI surveys with an interferometer, as yet.
The Very Large Array in New Mexico.
The Platonic ideal of an HI survey would be a single-dish map with a truly gigantic spaceborne single dish. That would have the exquisite sensitivity of a single dish combined with the fabulous resolution of an interferometer, plus a huge multibeam receiver for fast survey mapping. Mmmm....
Finding HI : pointed observations
How you go about determining if you've detected anything at the HI line depends on what type of observations you've run. For pointed observations, all you'll get is a spectrum, which makes things easy. You just look at each velocity channel and decide if the signal to noise is high enough that you count it as a detection, i.e., do you see a rise in the spectrum over several consecutive velocity channels ? If you don't already know the redshift of the source, a detection in the HI line can be used to obtain it. If you already know the redshift, then you can go straight to the channels at the known velocity range and see what's there. In practice you'll have several hundred or possibly several thousand velocity channels; the examples below have been truncated.
The classic "ideal" HI spectrum is known as a double horn profile, for obvious reasons...
... but I prefer to refer to it as the Batman profile, also for obvious reasons.
The resemblance of the spectrum to the Caped Crusader is really quite uncanny. Why does this happen ? The reason is the shape of the rotation curve - not that it's flat, but that most of it is flat. That is, most of the gas is moving at a single speed along the observer's line of sight, either towards or away from them depending on which side of the galaxy you're looking at (this is easiest to imagine if you consider an edge-on galaxy). And since the beam of a single dish is normally so large that it includes the whole galaxy, you see both flat parts of the rotation curve as two separate horns.
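You can convince yourself of this with a toy model : scatter test particles over a disc with a (mostly) flat rotation curve, take the line-of-sight velocity component for an edge-on view, and histogram the result. Everything below is invented purely for illustration :

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
n_gas = 200_000

# Gas on a thin disc : uniform in azimuth, uniform surface density out to the edge
phi = rng.uniform(0, 2 * np.pi, n_gas)
r = np.sqrt(rng.uniform(0, 1, n_gas))        # uniform surface density on a unit disc

# Rotation curve : rising in the inner 10%, flat at v_flat everywhere else
v_flat = 200.0                                # km/s
v_circ = np.where(r < 0.1, v_flat * r / 0.1, v_flat)

# Edge-on view : the line-of-sight component of a circular orbit is v * sin(phi),
# plus a little velocity dispersion
v_los = v_circ * np.sin(phi) + rng.normal(0, 8, n_gas)

plt.hist(v_los, bins=100)
plt.xlabel("line-of-sight velocity (km/s)")
plt.ylabel("relative flux")
plt.show()
```

The histogram piles up at the extreme velocities, because most of the gas sits on the flat part of the rotation curve : two horns.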
Measuring the HI parameters, and to a lesser degree detecting it in the first place, is subjective. The above example shows a quintessential HI detection, with a high S/N ratio and a steep sided profile. This makes it easy and virtually unambiguous to decide which channels contain detectable HI and which contain only noise. Such detections are not uncommon, but often the situation can be more difficult. For example :
This galaxy has a much lower S/N ratio and a shallower profile. It's very much harder to say exactly where the HI ends and the noise begins. HI has very similar issues of subjectivity as optical data, but if anything it's somewhat worse. Just as optical data can suffer from fringing and foreground stars corrupting the galaxy data, so radio frequency interference (RFI), continuum sources and sensitivity variations can cause strange things to happen to the baseline in HI spectra :
That strange baseline variation means it's not only harder to decide where the galaxy ends and the noise begins, but it also raises concerns about how accurate the measurements will be even when a decision has been made. That linear slope on the left looks suspiciously like it extends at least partway into the galaxy itself. And the low resolution means that problems of overlapping galaxies are more frequent in HI spectra, which can result in some very strange profiles :
Close proximity of galaxies can result in the superposition of the HI spectra combined with tidal interactions which actually alter their kinematics, so there's no upper limit to how strange the profiles can get :
And of course, the galaxies themselves can be intrinsically strange. Many strange profiles occur because of tidal interactions or other nearby sources in the beam, but sometimes you get strange profiles when no other major companion is visible :
A classic edge-on giant disc galaxy - it should have a Batman profile, but it doesn't. Why not ? I have no idea, it's from a data set I'm still analysing.
Since there are problems with RFI and receivers and whatnot, the only way to be really sure of what you're measuring is to obtain follow-up observations : getting the same strange result twice puts it on much firmer footing. Of course, telescope time is finite, so you wouldn't do this for strong signals unless there was some very compelling reason, but it's the preferred option if a source is so weak you can't be sure it's real.
If your source is reasonably distant so as to be unresolved by the telescope, then for the most part spectra from pointed observations are fine - the beam will enclose the whole source and you'll measure the total HI mass accurately. But mapping is always better, because it's the only way to find extended HI structures or other features that have no optical counterpart.
Finding HI : mapping
When you have a fully-sampled HI data set, you have to catalogue what's in it. However you go about this, and for all surveys in general, you need your catalogue to be both complete and reliable. These have very specific meanings :
Completeness is defined as the fraction of real sources present that are in your catalogue. In the above image there are nine real meerkats (and one mere cat), so if your catalogue includes all of them then it is 100% complete.
Reliability is defined as the fraction of sources in your catalogue which are real. So if your catalogue has nine meerkats and one mere cat, then it is 100% complete but it is not 100% reliable. With a really complex image like the one above, unless you have some very fancy algorithm indeed, then it's entirely possible you might get a very much lower level of reliability.
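In code these are just two ratios - a trivial sketch using the meerkat numbers above :

```python
def completeness(real_found, real_total):
    """Fraction of the real sources that made it into the catalogue."""
    return real_found / real_total

def reliability(real_found, catalogue_total):
    """Fraction of the catalogued sources that are actually real."""
    return real_found / catalogue_total

# Nine real meerkats, all found, plus one mere cat mistakenly catalogued :
print(completeness(9, 9))      # 1.0  -> 100% complete
print(reliability(9, 10))      # 0.9  -> only 90% reliable
```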
It's quite important to get these terms right. Consider, for example, this recent report on a drone that spots sharks. The news report says it has a 92% reliability. If they are using the term in the scientific sense, then this is extremely worrying. It means that 92% of the things it identifies as sharks are actually sharks... but it says absolutely nothing about completeness ! The damn thing could potentially be missing thousands upon thousands of sharks !
This is not at all easy. Even for humans, who have astonishing pattern-recognition skills which are far superior to any algorithm, it can be hard to distinguish where things begin and end. Writing a program to do this stuff is hard. It's estimated that a human's catalogue might be up to 80% complete and 50% reliable, whereas an algorithm, on a really good day, might be 80-90% complete and 20% reliable. And that 20% is very much an upper limit, easily dropping to 15%, 10%, 5% if the conditions aren't favourable. More details on this next time.
Data visualisation is also crucial, though I don't have time to go into this in depth. HI "maps" are actually 3D, because we get spectra at every point. Traditionally we inspect these maps by looking at 2D slices :
We could look at it in the normal RA-Dec view (i.e. the sky projection), but this turns out to be unhelpful. Since the sources are smaller than the telescope beam, most of them are completely spatially unresolved. So they appear only as point sources in the sky map. But the kinematic resolution can be excellent - the galaxies are detected in many different velocity channels (because of their rotation), centred on the systemic redshift. Hence they appear as long, cigar-like blobs in the above RA-Velocity projection.
This 2D view is extremely boring. More advanced, modern methods allow us to view the data in 3D :
This is a flight through a large, particularly rich data cube containing a few hundred sources. We start at the low redshift end where the sources are bright because they're nearby, and as the movie goes on you'll see the galaxies start to look fainter and are harder to spot. You'll also see huge, bright, extended features filling the entire screen - sadly these are not alien megastructures, but the effect of RFI from things like mobile phones and overhead satellites.
Measuring HI
Unfortunately the procedures of data reduction produce very different products for optical and HI data, so you can't just do aperture photometry in ds9 on an HI data cube. Instead you need a dedicated software package such as miriad. Since this lecture is designed specifically to accompany a student exercise, I'm not going to go into the details of that; instead I'll just show the end results.
For single dish maps or pointings, most sources will be unresolved. This is true for any telescope since even Arecibo has a 3.5' beam and is not likely to be surpassed any time soon. This means that to calculate the total flux, you just need to integrate the area under the spectrum :
Though of course you need to make sure the baseline has been calibrated, e.g. using a first or second order polynomial. You can then get the HI mass via the standard equation :
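In its standard form (for an optically thin, unresolved source) this is :

$$M_{HI} = 2.356 \times 10^{5} \, d^{2} \, F_{HI}$$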
Where MHI is in solar masses, d is the distance to the source in Mpc, and FHI is the total flux in Jy km/s.
The other main parameters to measure are the velocity components. First, the systemic velocity of the source, i.e. its redshift, which gives distance via Hubble's Law (v = H0d). This is found via the second property : the velocity width. By convention, width is determined in two ways : as the width of the profile at 50% (top line) and 20% (lower line) of the peak flux :
The systemic velocity is then just the mid-point of where those horizontal lines intersect the HI profile. For this source, a nice, bright, well-behaved profile, the W50 and W20 are practically identical. But note the vertical dotted and dashed lines. Miriad requires the user to specify over which interval the measurements are performed, as for much weaker sources it's very hard for an algorithm to identify where the source ends and the noise begins. So the user's visual inspection is important.
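As a rough sketch of what the measurement task is doing under the hood, here's how you'd get the same numbers from a hypothetical, already baseline-subtracted spectrum (the velocity range, flux level and distance are all invented) :

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical baseline-subtracted spectrum : velocity in km/s, flux density in Jy
vel = np.linspace(900, 1500, 601)                        # 1 km/s channels
flux = np.where((vel > 1150) & (vel < 1350), 0.05, 0.0)  # crude top-hat "profile"
flux = flux + rng.normal(0, 0.005, vel.size)             # plus some noise

# User-specified measurement window around the source, as in miriad
window = (vel > 1100) & (vel < 1400)

# Total flux : integrate the profile over velocity (Jy km/s)
dv = vel[1] - vel[0]
F_HI = flux[window].sum() * dv

# HI mass from the standard single-dish relation, for an assumed distance in Mpc
d_Mpc = 20.0
M_HI = 2.356e5 * d_Mpc**2 * F_HI

# W50 and systemic velocity : width and mid-point of the profile at 50% of the peak
peak = flux[window].max()
crossing = vel[window][flux[window] > 0.5 * peak]
W50 = crossing.max() - crossing.min()
v_sys = 0.5 * (crossing.max() + crossing.min())

print(f"F_HI = {F_HI:.2f} Jy km/s, M_HI = {M_HI:.2e} Msun")
print(f"W50 = {W50:.0f} km/s, v_sys = {v_sys:.0f} km/s")
```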
The output of the measurement task from miriad looks like this :
Which looks pretty intimidating, but for an introductory course, only the red highlights need concern us here.
The top red oval shows the source position. The user has to enter some initial estimate, which the program then refines by fitting a 2D Gaussian and finding the centroid. This significantly improves the accuracy, which is important for identifying the optical counterpart. The second red oval shows the total flux, FHI, which we use for calculating the HI mass ("moment 0" just being another term for the integrated flux). The third red oval shows the systemic velocity (redshift) and velocity width of the source. It's all very straightforward really.
What can you do with this ?
A just question, my liege. Well, a great many things, really. The first task is to identify the optical counterpart. Sometimes - more often than not, in fact - this is quite simple. You can start by using the SDSS navigate tool and inspecting the RGB images. Fortuitously, by default the field of view is pretty well equal to the Arecibo 3.5' beam. But you may also need to download the FITS files, as sometimes the RGB images lack sensitivity or you may need to smooth the data to find the source. You can then create a circular region in ds9 and set its coordinates to the HI coordinates and its radius to be 1.75'. In exceptional circumstances, the source of the HI might be outside this circle, but 99% of the time if you can't see anything within the circle, then there's no optical counterpart.
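A few lines of Python will write such a region file for you (the coordinates below are invented purely for illustration) :

```python
# Write a ds9 region file marking the Arecibo beam around the HI position.
ra, dec = 187.70593, 12.39112          # HI coordinates in degrees (made-up example values)
radius_arcsec = 105                     # 1.75' = half the 3.5' Arecibo beam

with open("hi_beam.reg", "w") as f:
    f.write("fk5\n")
    f.write(f'circle({ra},{dec},{radius_arcsec}")\n')

# Load this in ds9 via Region -> Load Regions.
```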
The above images illustrate two examples. On the left is what's normally encountered : a single diffuse object close to the centre of the Arecibo F.O.V., with a nice strong spectrum. With absolutely no other sources visible within the beam, that has to be the optical counterpart of the HI. The only exception would be if that source had a measured optical redshift which was in strong disagreement (say, > 200 km/s) with the HI measurement - and even then, I'd tend to doubt the optical redshift.
The example on the right is far more interesting. The HI profile looks weak, but this one has follow-up observations so we know it's real (I've seen weaker detections than that turn out to be real). Yet there's no kind of diffuse object within the beam at all. Oh, I've identified something, but honestly that identification is pretty desperate - it's so compact and unresolved, it's more likely to be a very distant background object. The HI redshifts are always so low that you can essentially rely (to a very high degree, at least) on the optical counterparts being diffuse and fuzzy.
The real fun comes when you've got a decent sample of objects with both optical photometry and HI measurements. For instance, in the Virgo cluster :
All filled symbols are HI-detected galaxies, while the open circles are non-detections. You can clearly see the red and blue sequences defined by morphology on this CMD, but at least some early-type galaxies have HI yet lie firmly on the red sequence. So they are probably not misidentifications : they really are early-type galaxies, yet their gas isn't forming stars. Why ? No idea. If we add a sample of galaxies detected outside the cluster :
Now we see that galaxies within the cluster tend to be redder than those outside it, even though both samples were detected in HI - again, this is evidence for a cluster-based environmental influence. When we compare the gas contents, a compelling picture starts to emerge :
Now we see that the HI mass-to-light ratio (sometimes known as the gas fraction, slightly inaccurately) is systematically lower for cluster galaxies : at any given magnitude, cluster members tend to have less gas than field galaxies. So the picture is one of a galaxy being captured by the cluster, losing its gas content and having its star formation quenched as it runs out of fuel. I'll describe some of the complexities of interpreting the data in this way next time. You can also use this MHI/L to quantify the star formation efficiency of a galaxy.
We can also use line width for estimating the dynamical mass. Ideally we would have a proper rotation curve :
... where we have both the rotational velocity and the radius. Then we'd use the simple equation :
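For material on a circular orbit of radius r at speed v, this is just :

$$M_{dyn} \approx \frac{v^{2} \, r}{G}$$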
But for line profiles we only have v, after correcting for inclination. We have to assume some value of r for the HI, usually taken to be 1.7xropt. This then gives us an estimate of the dynamical mass, but it is neither an upper nor a lower limit unless the radius is also a limit. In clusters this is highly uncertain, since we know that galaxies there often possess truncated HI discs in which rHI < ropt. Rotation curves are always better because they give you so much more detailed information about the kinematics, but they're very much more laborious to obtain than line profiles.
Knowing the HI mass and the optical diameter of the galaxy, we can quantify if the galaxy's HI mass is typical for a galaxy of this class via the HI deficiency equations (since MHI/L varies strongly). First, we need to calculate how much gas we expect the galaxy to contain :
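Schematically, the expected mass takes the form (the exact coefficients depend on the calibration used) :

$$\log M_{HI}^{exp} = a + b \log(d)$$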
Where d is the optical diameter of the galaxy. Different groups prefer different values for the a and b calibration parameters. The values without parentheses come from Solanes et al. 1996, while those in parentheses are from Gavazzi et al. 2005 (don't mix and match these : if you use Solanes for a, don't use Gavazzi for b !). Knowing the observed mass of HI the galaxy actually has, we can then calculate the deficiency :
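The deficiency is then just the logarithmic shortfall between the expected and observed masses :

$$def_{HI} = \log M_{HI}^{exp} - \log M_{HI}^{obs}$$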
Deficiency is a logarithmic parameter. Positive values mean the galaxy has less gas than expected. A value of 1 means a galaxy has only 10% of the gas a comparable field galaxy usually has, a value of 2 means it has 1%, etc.
This parameter is not very precise ! The intrinsic scatter is something like +/- 0.3. Which means that for individual galaxies, you can only really describe them as being non deficient (-0.3 < def < + 0.3), weakly deficient (+ 0.3 < def < +0.6) or strongly deficient (def > +0.6). Anything more precise than that is foolhardy, though you might be able to get away with this if you have a large sample. It's also possible, but very rare, for a galaxy to have a significantly negative deficiency, meaning that it has substantially more gas than expected.
You can do even more with the line width, such as estimating the distance to the galaxy, but that's something I'll cover in the fourth lecture. For now, I think that's more than enough, so you may return to your homes and places of business.
The classic "ideal" HI spectrum is known as a double horn profile, for obvious reasons...
... but I prefer to refer to as the Batman profile, also for obvious reasons.
The resemblance of the spectrum to the Caped Crusader is really quite uncanny. Why does this happen ? The reason is the shape of the rotation curve - not that it's flat, but that most of it is flat. That is, most of the gas is moving at a single speed along the observer's line of sight, either towards or away from them depending on which side of the galaxy you're looking at (this is easiest to imagine if you consider an edge-on galaxy). And since the beam of a single dish is normally so large that it includes the whole galaxy, you see both flat parts of the rotation curve as two separate horns.
Measuring the HI parameters, and to a lesser degree detecting it in the first place, is subjective. The above example shows a quintessential HI detection, with a high S/N ratio and a steep sided profile. This makes it easy and virtually unambiguous to decide which channels contain detectable HI and which contain only noise. Such detections are not uncommon, but often the situation can be more difficult. For example :
This galaxy has a much lower S/N ratio and a shallower profile. It's very much harder to say exactly where the HI ends and the noise begins. HI has very similar issues of subjectivity as optical data, but if anything it's somewhat worse. Just as optical data can suffer from fringing and foreground stars corrupting the galaxy data, so radio frequency interference (RFI), continuum sources and sensitivity variations can cause strange things to happen to the the baseline in HI spectra :
That strange baseline variation means it's not only harder to decide where the galaxy ends and the noise begins, but it also raises concerns about how accurate the measurements will be even when a decision has been made. That linear slope on the left looks suspiciously like it extends at least partway into the galaxy itself. And the low resolution means that the problems are overlapping galaxies are more frequent in HI spectra, which can result in some very strange profiles :
Close proximity of galaxies can result in the superposition of the HI spectra combined with tidal interactions which actually alter their kinematics, so there's no upper limit to how strange the profiles can get :
And of course, the galaxies themselves can be intrinsically strange. Many strange profiles occur because of tidal interactions or other nearby sources in the beam, but sometimes you get strange profiles when no other major companion is visible :
A classic edge-on giant disc galaxy - it should have a Batman profile, but it doesn't. Why not ? I have no idea, it's from a data set I'm still analysing.
Since there are problems with RFI and receivers and whatnot, the only way to be really sure of what you're measuring is to obtain follow-up observations : getting the same strange result twice puts it on much firmer footing. Of course, telescope time is finite, so you wouldn't do this for strong signals unless there was some very compelling reason, but it's the preferred option if a source is so weak you can't be sure it's real.
If your source is reasonably distant so as to be unresolved by the telescope, then for the most part spectra from pointed observations are fine - the beam will enclose the whole source and you'll measure the total HI mass accurately. But mapping is always better, because it's the only way to find extended HI structures or other features that have no optical counterpart.
Finding HI : mappping
When you have a fully-sampled HI data set, you have to catalogue what's in it. However you go about this, and for all surveys in general, you need your catalogue to be both complete and reliable. These have very specific meanings :
Completeness is defined as the fraction of real sources present that are in your catalogue. In the above image there are nine real meerkats (and one mere cat), so if your catalogue includes all of them then it is 100% complete.
Reliability is defined as the fraction of sources in your catalogue which are real. So if your catalogue has nine meerkats and one mere cat, then it is 100% complete but it is not 100% reliable. With a really complex image like the one above, unless you have some very fancy algorithm indeed, then it's entirely possible you might get a very much lower level of reliability.
It's quite important to get these terms right. Consider, for example, this recent report on a drone that spots sharks. The news report says it has a 92% reliability. If they are using the term in the scientific sense, then this is extremely worrying. It means that 92% of the things it identifies as sharks are actually sharks... but it says absolutely nothing about completeness ! The damn thing could potentially be missing thousands upon thousands of sharks !
This is not at all easy. Even for humans, who have astonishing pattern-recognition skills which are far superior to any algorithm, it can be hard to distinguish where things begin and end. Writing a program to do this stuff is hard. It's estimated that a human's catalogue might be up to 80% complete and 50% reliable, whereas an algorithm, on a really good day, might be 80-90% complete and 20% reliable. And that 20% is very much an upper limit, easily dropping to 15%, 10%, 5% if the conditions aren't favourable. More details on this next time.
Data visualisation is also crucial, though I don't have time to do into this in depth. HI "maps" are actually 3D, because we get spectra at every point. Traditionally we inspect these maps by looking at 2D slices :
We could look at it in the normal RA-Dec view (i.e. the sky projection), but this turns out to be unhelpful. Since the sources are smaller than the telescope beam, most of them are completely spatially unresolved. So they appear only as point sources in the sky map. But the kinematic resolution can be excellent - the galaxies are detected in many different velocity channels (because of their rotation), centred on the systemic redshift. Hence they appear as long, cigar-like blobs in the above RA-Velocity projection.
This 2D view is extremely boring. More advanced, modern methods allow us to view the data in 3D :
This is a flight through a large, particularly rich data cube containing a few hundred sources. We start at the low redshift end where the sources are bright because they're nearby, and as the movie goes on you'll see the galaxies start to look fainter and are harder to spot. You'll also see huge, bright, extended features filling the entire screen - sadly these are not alien megastructures, but the effect of RFI from things like mobile phones and overhead satellites.
Measuring HI
Unfortunately the procedures of data reduction produce very different products for optical and HI data, so you can't just do aperture photometry in ds9 on an HI data cube. Instead you need a dedicated software package such as miriad. This lecture being designed specially for a student exercise, I'm not going to go in to the details of that, so instead I'll just show the end results.
For single dish maps or pointings, most sources will be unresolved. This is true for any telescope since even Arecibo has a 3.5' beam and is not likely to be surpassed any time soon. This means that to calculate the total flux, you just need to integrate the area under the spectrum :
Though of course you need to make sure the baseline has been calibrated, e.g. using a first or second order polynomial. You can then get the HI mass via the standard equation :
Where MHI is in solar masses, d is the distance to the source in Mpc, and FHI is the total flux in Jy km/s.
The other main parameters to measure are the velocity components. First, the systemic velocity of the source, i.e. its redshift, which gives distance via Hubble's Law (v = H0d). This is found via the second property : the velocity width. By convention, width is determined in two ways : as the width of the profile at 50% (top line) and 20% (lower line) of the peak flux :
The systemic velocity is then just the mid-point of where those horizontal lines intersect the HI profile. For this source, a nice, bright, well-behaved profile, the W50 and W20 are practically identical. But note the vertical dotted and dashed lines. Miriad requires the user to specify over which interval the measurements are performed, as for much weaker sources it's very hard for an algorithm to identify where the source ends and the noise begins. So the user's visual inspection is important.
The output of the measurement task from miriad looks like this :
Which looks pretty intimidating, but for an introductory course, only the red highlights need concern us here.
The top red oval shows the source position. The user has to enter some initial estimate, which the program then refines by fitting a 2D Gaussian and finding the centroid. This significantly improves on the accuracy, which is important for identifying the optical counterpart. The second red oval shows the total flux, FHI, which we use for calculating the HI mass ("moment 0" just being another term for integrated flux). The third red oval shows the systemic velocity (redshift) and velocity width of the source. It's all very straightforward really.
What can you do with this ?
A just question, my liege. Well, a great many things, really. The first task is to identify the optical counterpart. Sometimes - more often than not, in fact - this is quite simple. You can start by using the SDSS navigate tool and inspecting the RGB images. Fortuitously, by default the field of view is pretty well equal to the Arecibo 3.5' beam. But you may also need to download the FITS files, as sometimes the RGB images lack sensitivity or you may need to smooth the data to find the source. You can then create a circular region in ds9 and set its coordinates to the HI coordinates and its radius to be 1.75'. In exceptional circumstances, the source of the HI might be outside this circle, but 99% of the time if you can't see anything within the circle, then there's no optical counterpart.
The above images illustrate two examples. On the left is what's normally encountered : a single diffuse object close to the centre of the Arecibo F.O.V., with a nice strong spectrum. With absolutely no other sources visible within the beam, that has to be the optical counterpart of the HI. The only exception would be if that source had a measured optical redshift which was in strong disagreement (say, > 200 km/s) with the HI measurement. And if that was the case, I'd tend to doubt the optical redshift in that case.
The example on the right is far more interesting. The HI profile looks weak, but this one has follow-up observations so we know it's real (I've seen weaker detections than that turn out to be real). Yet there's no kind of diffuse object within the beam at all. Oh, I've identified something, but honestly that identification is pretty desperate - its so compact and unresolved, it's more likely to be a very distant background object. The HI redshifts are always so low that you can essentially rely (to a very high degree, at least) on the optical counterparts being diffuse and fuzzy.
The real fun comes when you've got a decent sample of objects with both optical photometry and HI measurements. For instance, in the Virgo cluster :
All filled symbols are HI-detected galaxies, while the open circles are non-detections. You can clearly see the red and blue sequences defined by morphology on this CMD, but at least some early-type galaxies have HI yet lie firmly on the red sequence. So they are probably not misidentifications : they really are early-type galaxies, yet their gas isn't forming stars. Why ? No idea. If we add a sample of galaxies detected outside the cluster :
Now we see that galaxies within the cluster tend to be redder than those outside it, even though both samples were detected in HI - again, this is evidence for a cluster-based environmental influence. When we compare the gas contents, a compelling picture starts to emerge :
Now we see that the HI mass-to-light ratio (sometimes known as the gas fraction, slightly inaccurately) is systematically lower for cluster galaxies : at any given magnitude, cluster members tend to have less gas than field galaxies. So the picture is one of a galaxy being captured by the cluster, losing its gas content and having its star formation quenched as it runs out of fuel. I'll describe some of the complexities of interpreting the data in this way next time. You can also use this MHI/L to quantify the star formation efficiency of a galaxy.
We can also use line width for estimating the dynamical mass. Ideally we would have a proper rotation curve :
... where we have both the rotational velocity and the radius. Then we'd use the simple equation :
But for line profiles we only have v, after correcting for inclination. We have to assume some value of r for the HI, usually taken to be 1.7xropt. This then gives us an estimate of the dynamical mass, but it is neither an upper nor a lower limit unless the radius is also a limit. In clusters this is highly uncertain, since we know that galaxies there often possess truncated HI discs in which rHI < ropt. Rotation curves are always better because they give you so much more detailed information about the kinematics, but they're very much more laborious to obtain than line profiles.
Knowing the HI mass and the optical diameter of the galaxy, we can quantify if the galaxy's HI mass is typical for a galaxy of this class via the HI deficiency equations (since MHI/L varies strongly). First, we need to calculate how much gas we expect the galaxy to contain :
Deficiency is a positive, logarithmic parameter. Positive values mean the galaxy has less gas than expected. A value of 1 means a galaxy has only 10% of the gas a comparable field galaxy usually has, a value of 2 means it has 1%, etc.
This parameter is not very precise ! The intrinsic scatter is something like +/- 0.3. Which means that for individual galaxies, you can only really describe them as being non deficient (-0.3 < def < + 0.3), weakly deficient (+ 0.3 < def < +0.6) or strongly deficient (def > +0.6). Anything more precise than that is foolhardy, though you might be able to get away with this if you have a large sample. It's also possible, but very rare, for a galaxy to have a significantly negative deficiency, meaning that it has substantially more gas than expected.
You can do even more with the line width, such as estimating the distance to the galaxy, but that's something I'll cover in the fourth lecture. For now, I think that's more than enough, so you may return to your homes and places of businesses.