Monthly Archives: January 2012


The observant eye will notice something strange about the orientations of some of these galaxies

One method for detecting extrasolar planets that does not get a significant amount of attention is the use of gravitational microlensing. The reason for this may be that the method has not produced a long list of planet detections so far (indeed, as of this writing, only fourteen planets have been found this way in nearly two decades of searching), or perhaps that it’s a fairly complicated topic. The concept of detecting planets through their transits across a star is fairly intuitive, and the idea of using Doppler spectroscopy can be made sense of after considering it for a time, but gravitational microlensing is far outside the norm of human experience.

Stars are not stationary in space. The galaxy has two or three hundred billion stars swirling around in a disk, so from the perspective of any one of them, stars appear to pass close to other stars. When a foreground object passes very close (along our line of sight) to a background light source, the foreground object’s gravity may act as a lens, distorting the background image while magnifying it at the same time. This is called a “microlensing event.” The background source is split into two separate images with a separation on the order of a milliarcsecond (too small to resolve with our current technology). Since the total area of the images is larger than the actual area of the source, the event results in an apparent brightening of the background source. Photometry of the event will show a symmetrical brightening and dimming (called a “Paczynski curve”) of the combined light of the two images, which themselves are unresolved due to their extreme apparent proximity.

When an observer, a lens, and a source are well aligned, light from the source will be bent by the lens by a deflection angle expressed as

\displaystyle \alpha = \frac{4GM}{r_E c^2}

Where r_E is the Einstein radius. This causes the apparent location of the source to be displaced from its true position by an angle \theta_E. In the case where the source passes directly behind the lens, the image of the source is distorted into a ring around the lens. The radius of this ring, projected onto the source plane, is \hat{r}_E, which is called the “projected Einstein radius.” If the alignment is not perfect, the symmetry is broken and, instead of a ring, two images appear.

The difference between \hat{r}_E and \tilde{r}_E is that \tilde{r}_E is the Einstein radius projected onto the observer plane, whereas \hat{r}_E is the Einstein radius projected onto the source plane. This distinction underlies the relations between the observables (\theta_E, \tilde{r}_E) and the physical parameters (M, \pi_{rel}).

The geometry of a lensing event may be visualised as

Lensing geometry, conceptual and mathematical

Under the small-angle approximation,

\displaystyle \frac{\alpha}{\tilde{r}_E} = \frac{\theta_E}{r_E}


\displaystyle \tilde{r}_E \theta_E = \alpha r_E = \frac{4GM}{c^2}

And because of the exterior-angle theorem,

\displaystyle \theta_E = \frac{\tilde{r}_E}{D_l} - \frac{\tilde{r}_E}{D_s}

Where D_l and D_s are the distances to the lens and the source, respectively. Therefore, the image displacement from the true position, \theta_E, is expressed as

\displaystyle \frac{\theta_E}{\tilde{r}_E} = \frac{\pi_{rel}}{AU}

Where \pi_{rel} is the relative parallax between the lens and the source.
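Combining the two relations above (\tilde{r}_E \theta_E = 4GM/c^2 and \theta_E / \tilde{r}_E = \pi_{rel}/AU) gives \theta_E^2 = (4GM/c^2)(1/D_l - 1/D_s). As a rough sanity check, here is a short Python sketch evaluating \theta_E for a typical bulge event; the numerical constants and the chosen lens mass and distances are approximate, illustrative values:

```python
import math

# Physical constants (approximate, SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # kiloparsec, m
MAS = math.radians(1.0 / 3600.0) / 1000.0  # one milliarcsecond, in radians

def theta_E(M, D_l, D_s):
    """Angular Einstein radius (radians), from
    theta_E^2 = (4GM/c^2) * (1/D_l - 1/D_s)."""
    return math.sqrt(4 * G * M / c**2 * (1 / D_l - 1 / D_s))

# An illustrative bulge event: a 0.3 solar-mass lens halfway to an 8 kpc source
te = theta_E(0.3 * M_SUN, 4 * KPC, 8 * KPC)
print(te / MAS)  # roughly half a milliarcsecond, below the resolution limit
```

The result (about 0.5 mas) is consistent with the statement below that \theta_E \lesssim 1 mas for typical lens masses and distances.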

The distance of the two images of the source from the lens is expressed as

\displaystyle y_{\pm} = \pm\frac{1}{2} \left( \sqrt{u^2 + 4} \pm u \right)

Where u = \beta / \theta_E and y = \theta / \theta_E. The positive, or “major,” image is always outside the Einstein ring, whereas the negative, or “minor,” image is always inside it. The angular separation between these two images at the time of closest alignment is ~2\theta_E. Thus, for typical lens masses (0.1 – 1 M_{\odot}) and lens-source distances (1 – 10 kpc), \theta_E \lesssim 1 mas, and so the images are unresolved.
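The image positions are easy to evaluate numerically. A minimal Python sketch (illustrative only) confirming that the major image falls outside the Einstein ring and the minor image inside it:

```python
import math

def image_positions(u):
    """Positions of the two images in units of theta_E:
    y_pm = +/- (1/2)(sqrt(u^2 + 4) +/- u)."""
    s = math.sqrt(u**2 + 4)
    return 0.5 * (s + u), -0.5 * (s - u)

y_plus, y_minus = image_positions(1.0)
print(y_plus, y_minus)  # (sqrt(5)+1)/2 ~ 1.618 and -(sqrt(5)-1)/2 ~ -0.618
# major image outside the Einstein ring (|y| > 1), minor image inside (|y| < 1)
assert y_plus > 1.0 and abs(y_minus) < 1.0
```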

The magnification of the source is dependent on the impact parameter of the event

We know the images are magnified because surface brightness is conserved: the magnification of each image is simply the ratio of the area of the image to the area of the source. The images are typically elongated tangentially by an amount y_{\pm} / u, but compressed radially by an amount dy_{\pm} / du. The magnification of each image is then expressed as

\displaystyle A_{\pm} = \left| \frac{y_{\pm}}{u} \frac{dy_{\pm}}{du} \right| = \frac{1}{2} \left[ \frac{u^2+2}{u \sqrt{u^2+4}} \pm 1 \right]

And their sum, the total magnification of the source, is given as

\displaystyle A(u) = \frac{u^2+2}{u \sqrt{u^2+4}}
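These expressions can be checked numerically. A small Python sketch (a sanity check, not tied to any survey software) verifying that the two image magnifications sum to the total, and evaluating the magnification at u = 1:

```python
import math

def magnifications(u):
    """Magnifications of the major (+) and minor (-) images:
    A_pm = (1/2) * [ (u^2+2) / (u*sqrt(u^2+4)) +/- 1 ]."""
    a = (u**2 + 2) / (u * math.sqrt(u**2 + 4))
    return 0.5 * (a + 1), 0.5 * (a - 1)

def total_magnification(u):
    """Total magnification A(u) = (u^2+2) / (u*sqrt(u^2+4))."""
    return (u**2 + 2) / (u * math.sqrt(u**2 + 4))

a_plus, a_minus = magnifications(1.0)
print(a_plus + a_minus)          # ~1.342, i.e. 3/sqrt(5)
print(total_magnification(1.0))  # ~1.342, matching the sum
```

The value A(1) = 3/\sqrt{5} \approx 1.34 is the threshold magnification quoted below for u \leq 1.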

Now, because the lens, source and observer are all in relative motion, a microlensing event will be time-resolved (which is why microlensing phenomena are called “events”). In the simplest case of uniform, rectilinear motion, the relative lens-source motion can be expressed as

\displaystyle u(t) = \left[ u_0^2 + \left(\frac{t - t_0}{t_E} \right)^2 \right]^{1/2}

Where u_0 is the minimum separation between the lens and source (in units of \theta_E), t_0 is the time of maximum magnification (corresponding to when u = u_0), and t_E is the time scale to cross the angular Einstein ring radius, t_E \equiv \theta_E / \mu_{rel}, where \mu_{rel} is the proper motion of the source relative to the lens. This form for u gives rise to a smooth, symmetric microlensing event which, for events toward the Galactic bulge, tends to last on the order of a month, though events can be shorter than a day or as long as years. The magnification is A > 1.34 for u \leq 1, and so the events are substantial and easily detectable.
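As a sketch of how such a light curve is generated, here is a short Python example producing a symmetric Paczynski curve; the parameter values (u_0 = 0.3, t_E = 30 days) are arbitrary illustrative choices:

```python
import math

def u_of_t(t, u0, t0, tE):
    """Lens-source separation for uniform rectilinear relative motion."""
    return math.sqrt(u0**2 + ((t - t0) / tE)**2)

def A(u):
    """Total magnification of a point-source, point-lens event."""
    return (u**2 + 2) / (u * math.sqrt(u**2 + 4))

# A month-long event peaking at t0 = 0 with impact parameter u0 = 0.3
u0, t0, tE = 0.3, 0.0, 30.0  # tE in days
curve = [A(u_of_t(t, u0, t0, tE)) for t in range(-60, 61, 5)]
peak = A(u0)
print(round(peak, 2))  # maximum magnification ~3.4 at closest approach
# the Paczynski curve is symmetric about t0
assert abs(A(u_of_t(-15, u0, t0, tE)) - A(u_of_t(15, u0, t0, tE))) < 1e-12
```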

It was suggested in 1991 (by Mao & Paczynski) that gravitational microlensing could be used to discover planets around the primary foreground lensing star. If the lensing star has a planetary companion that happens to be located near the paths of one or both of the two images created by the primary lens, then the planet’s own gravity will contribute to the gravitational lensing and perturb the microlensing event when the image sweeps past the planet. The duration of this deviation is ~t_{E,p} = q^{1/2} t_E, where q = m_p/M is the mass ratio and m_p is the mass of the planet. The magnitude of the perturbation depends on how close a source image passes by the planet. Given a range of microlensing event durations of 10 – 100 days, the perturbation caused by a planet may last from a few hours for an Earth-mass planet to a few days for a Jupiter-mass planet. The location of the perturbation relative to the peak of the primary event depends on the angle of the projected star-planet axis relative to the source trajectory, and on d, the instantaneous angular separation between the planet and host star in units of \theta_E.
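These timescales are easy to estimate. A small Python sketch (the mass ratios and the 40-day event timescale are illustrative approximations for a solar-mass host):

```python
import math

# Approximate mass ratios for a solar-mass host star
q_jupiter = 9.5e-4  # ~1 Jupiter mass / 1 solar mass
q_earth = 3.0e-6    # ~1 Earth mass / 1 solar mass

def perturbation_duration(tE_days, q):
    """Characteristic planetary deviation timescale, t_E,p = q^(1/2) * t_E."""
    return math.sqrt(q) * tE_days

tE = 40.0  # an illustrative event timescale, in days
print(perturbation_duration(tE, q_jupiter))     # ~1.2 days
print(perturbation_duration(tE, q_earth) * 24)  # ~1.7 hours
```

The results (days for a Jupiter, hours for an Earth) match the ranges quoted above.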

Appearance of a lensing event

Assuming the orientation of the source trajectory is random, and assuming the position of the planet about the star is random, the time of this perturbation is unpredictable. The probability of the planet being detectable is

\displaystyle A(t_{0,p}) \frac{\theta_{E,p}}{\theta_E}

Where A(t_{0,p}) is the unperturbed magnification of the image being perturbed at the time t_{0,p} of the perturbation, and \theta_{E,p} \equiv q^{1/2}\theta_E is the Einstein ring radius of the planet itself. Here, the factor A accounts for the fact that the area of the image plane covered by magnified images is larger by their magnification.
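Plugging in rough numbers shows why these probabilities are modest. In this Python sketch, the magnification A = 3 and the mass ratios are illustrative assumptions, not taken from any particular event:

```python
import math

def detection_probability(A, q):
    """Rough detection probability, P ~ A(t_0,p) * theta_E,p / theta_E
    = A * q^(1/2), where q is the planet/star mass ratio."""
    return A * math.sqrt(q)

# At a modest unperturbed magnification of A ~ 3:
p_jup = detection_probability(3.0, 9.5e-4)  # Jupiter-mass analogue
p_earth = detection_probability(3.0, 3.0e-6)  # Earth-mass analogue
print(p_jup)    # ~9%
print(p_earth)  # ~0.5%
```

These rough figures are broadly consistent with the tens-of-percent versus few-percent probabilities quoted below.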

Because the planet must be near one of the two images in order to produce a detectable perturbation, and these images are always located near the Einstein radius when the source is significantly magnified, the sensitivity of the microlensing method is highest for planet-star separations of r_E, or d \approx 1. Microlensing can also detect planets well outside the Einstein ring (d \gg 1), though with less sensitivity. Since the magnification of the minor image decreases as y^4, microlensing is not generally sensitive to planets with d \ll 1, i.e., close-in planets.

For a planet with a projected separation within a factor of two of r_E, the detection probability ranges from tens of percent for Jupiter-mass planets to a few percent for Earth-mass planets.

In rare cases where the lens and source are very well aligned (which are therefore high-magnification events), the two images of the source sweep along almost the entire Einstein ring of the lensing star. In this case, all planets with separations near r_E are detected regardless of their orientation with respect to the source trajectory. These events have nearly 100% sensitivity to planets near the Einstein ring radius, including low-mass planets. But such events are rare.

When the lens is binary, the magnification is no longer a simple function of the angular separation between the lens and the source, and so the light curve of a lensing event of a background source will not be a smooth, symmetrical curve. The presence of two gravitationally lensing bodies creates an asymmetrical distribution and intensity of magnification regions on the plane of the sky, and a background source passing through them is magnified asymmetrically with time. The jagged regions of very high magnification are called “caustics.”

When the source crosses one of these caustics, it is very strongly magnified, and this constitutes an extremely powerful tool. For example, despite a distance of 15,000 light years, the source in the microlensing event MOA 2002-BLG-33 was spatially resolved well enough that its shape was determined(!), simply because it fortuitously crossed one of these caustics in a tight binary lensing system. This effectively delivered an angular resolution of 0.04 µas(!). As a star crosses a caustic, the caustic acts as a probe of the surface brightness variation across the stellar surface, and therefore of the star’s limb darkening. And because the source can be made very bright through the magnification, spectroscopy can yield a measurement of the stellar effective temperature and atmospheric abundances when this would not typically be feasible due to the source star’s great distance and hence low brightness. This video has a decent animation showing a source crossing a caustic in a binary lens system.

Single and binary lens magnification regions. Note the binary lens creates high-magnification regions called "caustics"

Another thing to note is that a binary lens always produces a total magnification of the images of A \geq 3 when the source is inside the caustic curve.

Binary lenses always produce either three or five images of the source, and will have one, two, or three closed and non-self-intersecting caustic curves. Which of these is the case depends on the mass ratio of the lens, q \equiv m_1 / m_2, and on the angular separation of the two lens components in units of the Einstein ring radius of the binary, d \equiv |z_{m,1} - z_{m,2}|. A triple lens will produce a maximum of ten images, but no fewer than four, with the number of images at any given time changing by multiples of two. This can lead to quite complicated magnification topologies, with nested and/or self-intersecting caustic curves. In general, the maximum number of images a lens of N_l bodies will produce can be expressed as 5(N_l - 1) for N_l \geq 2.
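The image-count formula can be stated compactly; a trivial Python sketch:

```python
def max_images(n_lenses):
    """Maximum number of images for an N_l-body point lens:
    5(N_l - 1), valid for N_l >= 2."""
    if n_lenses < 2:
        raise ValueError("formula applies only for N_l >= 2")
    return 5 * (n_lenses - 1)

print(max_images(2))  # binary lens: 5 images
print(max_images(3))  # triple lens: 10 images
```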

Mathematically, these caustics are the set of positions for which a point source would have infinite magnification. Caustic curves have multiple concave segments called “folds,” which meet at points called “cusps.” Because caustic curves are closed, caustic crossings come in pairs. Since most of the length of a caustic is made of fold caustics, both the caustic entry and exit are usually on folds; once a fold caustic entry is observed, an exit is guaranteed, and is usually a fold exit.

Even if the mass ratio is extreme, a star+planet system constitutes a binary lens here, for both bodies contribute to lensing the background source, and the science is effectively the same. Because the magnification strength of the caustics depends on the mass ratio between the two lenses, one can determine the mass of the second lens in the system, whether it be a planet or an additional star, with as much certainty as one can determine the mass of the host lensing star.

The concept of detecting planets through microlensing is highly technical and there are great practical challenges associated with it. Analysis of microlensing data is complicated and time-consuming, and once planets are unambiguously found, little is known about them, and they will certainly never be as well studied as closer planets in the solar neighbourhood. It was originally thought that the only information that could be gathered from microlensing detections would be the planet/star mass ratio and the projected separation of the planet. Individual planet detections would be of little interest because the microlensing event gives very little information on the system, and follow-up spectroscopy would be difficult or impossible due to both the great distance of the system and the fact that it would be guaranteed to be blended with the background star for some years. While experience with actual microlensing events has shown that it is possible to get considerably more information, it is extremely difficult to extract and requires a large amount of data. The detection method does not bask in the glory enjoyed by the well-established and successful Doppler spectroscopy method, or by the transit method with which Kepler is having great success. Resources like dedicated telescopes are scarce, as is manpower: not many seem interested in the detection method.

The detection of an extrasolar planet

Light curves that arise from binary lenses tend to have an extraordinarily diverse and complex range of phenomenology. Unlike transit light curves, where one might even say “if you’ve seen one, you’ve seen them all,” light curves from microlensing events take a huge variety of shapes, sizes, and durations. This complicates the interpretation of observed light curves for multiple reasons. The features of binary and planetary microlensing light curves have no direct relationship with the parameters of the underlying model. Consider transit light curves or radial velocity plots: for both, there is a clear, predictable way in which they should behave, so the parameters of the system can be read off from the clearly visible behaviour of the plots. The radius comes from the transit depth, the period comes from the frequency of the periodicity, and so on. Microlensing is not nearly as straightforward, so it is difficult to choose even an initial guess for the best-fit parameters of the system. Even when a solution is found, it is difficult to be sure that it is the best one, and that some other, better fit does not exist with considerably different parameters. Lens fits are therefore often degenerate: there can be two decent fits that are completely different. For this reason, in some lensing events it is hard to tell whether a planet or a star is responsible for the perturbed light curve. Also, slight changes in the values of the parameters can lead to severe changes in the resulting light curve, especially the sharp changes associated with the source coming near or crossing a caustic.

While this may be discouraging, it is important to realise that microlensing surveys do have advantages. Microlensing surveys are sensitive to planets in regions of parameter space that are difficult or impossible to probe with current instrumentation, especially low-mass planets beyond the ice line of a planetary system. Detecting an Earth-mass planet at 5 AU around a star of any mass would pose a considerable challenge with current instrumentation, and only microlensing has the capability to feasibly detect such an object. Microlensing is a great tool for planet statistics: being less constrained by planet mass and semi-major axis, it can detect planets of very low mass as well as free-floating planets. It can (and has been used to) detect multi-planet systems, and could detect analogues of every planet in our solar system (with the possible exception of Mercury). This permits microlensing surveys to set constraints on the statistics of effectively all planets with a mass greater than or equal to that of Mars. Furthermore, it permits the detection of planets throughout the Galaxy, allowing discussion of how those planet statistics change across the Galaxy once a sufficient quantity of planets has been found. Additionally, it is possible, in principle, to use the method to discover extrasolar planets in nearby galaxies, including the Andromeda Galaxy.

The strategy for finding planets through gravitational microlensing has been the same since it was first proposed in 1992 (by Gould & Loeb), even if it has matured and developed considerably. Finding extrasolar planets this way is a two-step process. First, survey telescopes monitor the sky and discover microlensing events as they occur. Then, smaller dedicated observatories are alerted in real time to the existence of the event and focus their attention on gathering high-cadence data. Higher-magnification events are favoured for follow-up because they have the highest chance of revealing the presence of planets around the lensing star. Future microlensing surveys will be more optimised and will combine these steps into one, working in a sort of “survey mode” where wide-field telescopes both discover microlensing events and switch to high-cadence observations for the duration of each event.

The uniqueness of the event means one has only one shot at discovering the planet. After the microlensing event is over, the planet may not be detectable again through this method for tens or hundreds of thousands of years, as stars do not often pass close enough to background stars to produce detectable lensing. For this reason, microlensing surveys target dense star fields, frequently toward the Galactic disk and centre: the lens is frequently a main-sequence star or stellar remnant in the foreground disk or Galactic bulge, whereas the background source is often a main-sequence or giant star, typically in the bulge. The uniqueness of microlensing events for a single target may raise a legitimate question about the security of the science involved in detecting planets this way, but if the microlensing event has a decent amount of data, then the mathematics leads to a clear and unambiguous planet detection.

For a binary (or higher-order) lens, relative orbital motion of the two components may be detectable as deviations from the light curve expected under the usual static binary lens model. This yields the velocity projected onto the plane of the sky, because the motion of the second body around the primary can change the distribution of the caustics (and thus the magnification regions) in two ways: the angular separation between the two bodies may increase or decrease, fundamentally changing the caustic structure and hence the magnification pattern, and the perpendicular velocity component simply rotates the pattern of the lens on the sky. If one has a planet’s projected velocity, and if the mass of the lens is known, and if one assumes a circular orbit for the planet, then a fairly well-constrained orbit may be obtained, including its inclination. For especially well-resolved events, the eccentricity can be estimated. As an outstanding example, the microlensing event OGLE-2006-BLG-109 revealed two planets that appear to be scaled-down Jupiter-Saturn analogues. For the outer planet, the eccentricity was well constrained; for both, an inclination was derived, and the system was found to be coplanar.

The light curve for the microlensing event OGLE-2006-BLG-109

The serious attempt to find planets with microlensing began in 1995, but no convincing planet detections were made until 2001. In this six year period, a relatively small number of microlensing events were reported each year (~50 – 100), but the theory and practice improved and in 2001, the OGLE group upgraded to a new camera with a higher cadence and a sixteen-fold improvement in field of view. This boosted the number of events being reported by an order of magnitude by 2002. The following year, the microlensing technique produced its first planet discovery, OGLE-2003-BLG-235Lb/MOA-2003-BLG-53Lb, which is often abbreviated to OGLE235-MOA53. The MOA group upgraded their telescope to a 1.8 metre, 2 square degree telescope in 2004. By 2007, the OGLE and MOA groups were reporting ~850 events per year.

The future of microlensing surveys is promising, even if the detection rate will probably never compete with radial velocity or the transit method for the foreseeable future. Next-generation ground-based surveys will have large fields of view, making it possible to monitor tens of millions of stars simultaneously with cadences of 10 – 20 minutes. This would produce thousands of detected microlensing events per year. Simultaneous high-cadence measurements of microlensing events, once identified, would permit the unambiguous discovery of Earth-mass planets. This would require telescopes spread across Earth to permit constant sky coverage. If Earth-mass planets with separations of several AU are common in the Galaxy, such a set-up should find several per year. Furthermore, if there exists at least one free-floating planet for every star in the Galaxy, the survey should detect them at a rate of hundreds per year. Already, microlensing surveys have placed constraints on the frequency of free-floating planets, finding them to be almost twice as common as main-sequence stars.

Ideally, microlensing surveys would benefit greatly from a space-based mission. A space-based microlensing mission could place robust constraints on the frequency of \geq Mars-mass planets at separations a \gtrsim 0.5 AU. The biases intrinsic to microlensing are completely unlike the biases of Doppler spectroscopy and transit photometry, and so this would provide a complementary and independent confirmation of such planet statistics.


The Kepler Spacecraft

The Kepler spacecraft attached to its booster stage on a Delta II rocket

It’s the biggest thing in extrasolar planets right now so I figured it deserves an obligatory post. I would like to give more attention to instrumentation, techniques, and technology in these entries, and a post on the Kepler spacecraft seems a wonderful way to start.

The spacecraft was launched into a heliocentric orbit on March 7, 2009 atop a Delta II rocket. Compared to some other spacecraft, Kepler is rather simple in design and purpose. It is essentially a dedicated photometer attached to a 0.95 metre telescope operating in visible light (more specifically, at wavelengths from 400 – 865 nm). It observes nearly 150,000 main-sequence stars continuously, using the transit method to discover extrasolar planet candidates (see here for a description of how planets are found this way), and has uncovered thousands of planet candidates so far. Light enters the front of the telescope, bounces off the primary mirror at the back, and is focused onto the CCDs in the middle of the telescope body. These CCDs measure the brightness of each star every 29.4 minutes for most of the stars, but some special target stars get high-cadence observations, with measurements taken once every minute.

Kepler's CCD

The CCD array (imaged above) is what does all the magic. One of the squares malfunctioned and no longer works, but besides that and some trouble with the spacecraft going into safe mode and resetting early in the mission, everything continues to go well as of this writing.

What the spacecraft ends up seeing is the following:

The Kepler Field of View

Kepler sees this, all day, every day. Except for the seasonal 90° roll to keep the solar arrays aimed at the sun, this field of view does not change. The image appears mostly hazy, but upon closer inspection it is actually comprised of an obscene quantity of stars (see the full image here, but be careful, for it is a large image).

Despite the simplicity of its design and purpose, Kepler is revolutionising the fields of astrophysics and extrasolar planets. Contributions of Kepler to astronomy include studying active galactic nuclei, finding additional hot Jupiters, performing asteroseismology on red giant stars, sun-like stars, and white dwarfs, studying the central stars of planetary nebulae, discovering more eclipsing binary stars, finding the first transiting giant planet around a limb-darkened star and constraining its spin-orbit alignment, studying red giant granulation, finding a class of bloated white dwarfs, measuring the frequency of terrestrial planets around sun-like stars, permitting the public to discover their own planet candidates, discovering a non-transiting planet through transit timing variations for the first time, improving our knowledge of RR Lyrae stars, studying stars that tidally affect each other, studying stars in open clusters, measuring giant planet reflectivity, studying sdB star pulsation behaviour, and studying B-type stars and roAp stars. Work is also progressing toward other fields, including the discovery of extrasolar moons.

Kepler's Crowning Achievements: Low-Mass Planets

The mission was designed to last 3.5 years. The need for at least three years comes from the requirement to detect three transits of a planet to confirm its candidacy. One transit reveals the planet’s existence. The second transit constrains its orbital period. The third transit confirms the orbital period. While this need for a third transit might seem redundant, consider the case of two similarly-sized planets transiting a single star: one transit of each planet could be confused for two transits of one planet. A planet at 1 AU from a solar-like star will have a period of about 1 year (like, for example, Earth). We therefore expect extrasolar Earth clones to transit roughly once a year, and so three years of observations are required to confirm such a planet as a candidate.

However, Kepler found that sun-like stars are more photometrically variable than expected; it turns out our star is a bit quieter than typical for its type. This means there is more noise in the data, and the transits do not stand out as much. Multiple transits must now be stacked to build up enough signal to confirm a planet. The punch line is that for Kepler to get a complete measurement of the frequency of Earth-sized planets in the habitable zones of solar-type stars (so-called \eta_{\oplus}, or “eta Earth”), Kepler‘s mission must be extended to six years. Kepler has the fuel on board to do this, but the funding has not been secured. That topic is a discussion for later…