
EXPERIMENTAL LIMITS ON PRIMORDIAL BLACK HOLE DARK MATTER FROM THE FIRST 2 YR OF KEPLER DATA


Published 2014 April 25 © 2014. The American Astronomical Society. All rights reserved.
Citation: Kim Griest et al. 2014 ApJ 786 158. DOI: 10.1088/0004-637X/786/2/158


ABSTRACT

We present new limits on a dark matter (DM) halo consisting of primordial black holes (PBHs) or massive compact halo objects. We search the first two yr of publicly available Kepler mission data for potential signatures of gravitational microlensing caused by these objects, and present an extensive analysis of the astrophysical sources of background error. These include variable stars, flare events, and comets or asteroids moving through the Kepler field. We discuss the potential of detecting comets using the Kepler light curves, presenting measurements of two known comets and one unidentified object, most likely an asteroid or comet. After removing the background events with statistical cuts, we find no microlensing candidates. We therefore present a Monte Carlo efficiency calculation in order to constrain the PBH DM with masses in the range of 2 × 10−9 M☉ to 10−7 M☉. We find that PBHs in this mass range cannot make up the entirety of the DM, thus closing a full order of magnitude of the previously allowed mass range for PBH DM.


1. INTRODUCTION

The nature of dark matter (DM) remains one of the most important unsolved problems in science (Feng 2010 and references therein). We know that it has a universal density around five times larger than that of material made of ordinary atoms, and it is an essential ingredient of the current consensus cosmological model (Ade et al. 2013), but we have little information as to its actual nature. Hundreds of candidates have been proposed, with the most popular being various candidates from beyond the standard model of particle physics, many involving the lightest supersymmetric particle (LSP). Besides the thousands of theoretical papers on supersymmetric (SUSY) weakly interacting massive particle (WIMP) candidates, there have been many experimental searches for these DM candidates (Feng 2010). Despite some tantalizing hints, currently there seems to be no compelling WIMP candidate, SUSY or otherwise. The lack of detection of SUSY partners of any sort from the recent Large Hadron Collider run is especially disappointing in this regard (Chatrchyan et al. 2012, 2013b; ATLAS Collaboration 2013). SUSY has been so popular over the past few decades because it seemed to simultaneously solve the hierarchy fine-tuning problem (Martin 2011) and give a "natural" DM candidate. If SUSY partners existed below the TeV scale, then the LSP annihilation cross section tended to be in the range that gave the measured relic abundance of LSP particles (Jungman et al. 1996). This so-called WIMP miracle motivated much theoretical and experimental work on SUSY DM. The discovery of the Higgs boson with a mass around 126 GeV (Aad et al. 2012; Chatrchyan et al. 2013a), along with the lack of SUSY particles below the TeV scale, removes some of this motivation. The LSP can still be the DM, but now some fine tuning will be required to arrive at the measured relic abundance.

More generally, we note that the theoretical and experimental emphasis on the admittedly elegant SUSY models over the past few decades may have been misplaced. Many important experimental discoveries have not verified our aesthetic desire for simple unified models. In fact, experimental breakthroughs have led us in almost the opposite direction. The discovery of cosmic microwave background (CMB) anisotropies points strongly toward an epoch of cosmic inflation in the early universe, most likely caused by a (finely tuned) inflaton scalar field. The discovery of the dark energy points to an extremely finely tuned cosmological constant or quintessence-like scalar field. (Even a cosmological constant can be thought of as the vacuum expectation value of a scalar field.) Finally, the Higgs boson mass of around 126 GeV (Aad et al. 2012; Chatrchyan et al. 2013a) seems to require some fine tuning.

Perhaps we should abandon our Occam's razor proclivities and accept that finely tuned scalar fields seem to be part of modern physics and thus may also be part of the solution to the DM problem. If so, then primordial black holes (PBHs) should be seriously considered as DM candidates.

In light of the above, PBH DM has several things going for it. First, it is one of the few standard model DM candidates. There is no need for SUSY or superstring-inspired Grand Unified Models. The DM problem is detected primarily through gravity, so PBH DM would be a gravitational solution to a gravitational problem. There are many ways to create PBH DM, and many of these involve finely tuned scalar fields. In the past, this has been taken as a negative for PBH DM, but the above considerations may allow rethinking of this attitude. For example, there are several double inflation models where one inflation solves the flatness, etc., problems that inflation is invoked to solve, and the other inflationary epoch gives rise to PBHs (Frampton et al. 2010; Kawasaki et al. 1998), which can then become the DM. Recall that one of the initial motivations for the scale-free Harrison–Zeldovich spectrum of primordial fluctuations was to avoid creating PBHs. Thus PBHs can easily be made via a tilted spectrum of fluctuations or through production of particles that then create the black holes (BHs).

Note that some mechanisms of PBH creation result in a broad spectrum of PBH masses, but double inflation mechanisms tend to create a nearly delta-function spectrum with the masses strongly concentrated near the mass enclosed in the horizon at the epoch of formation. Also note that there have been dozens of other suggested ways to create PBH DM (see, for example, Khlopov 2008; Frampton et al. 2010; Carr et al. 2010). Finally note that if PBHs are created early enough and in appropriate mass ranges they can evade big bang nucleosynthesis and CMB constraints and make up the entirety of the DM.

2. GRAVITATIONAL MICROLENSING OF KEPLER SATELLITE DATA

Since the work of Paczynski (1986), gravitational microlensing has been a powerful method for probing the DM in the Milky Way. If the DM is composed of massive compact halo objects (MACHOs), its signature could be detected through the occasional magnification of stellar flux when the objects pass near the line of sight to a star as they move through the Milky Way halo. Many theoretical and experimental results have been obtained that together eliminate MACHO DM (which includes PBH DM) in the mass range from 3 × 10−8 M☉ to 30 M☉ from being the entirety of the DM (Carr et al. 2010; Wyrzykowski et al. 2011a, 2011b; Tisserand et al. 2007; Yoo et al. 2004; Alcock et al. 1996, 1998, 2001; Renault et al. 1997; Griest 1991). The strongest of these limits on MACHO DM come from observing programs toward the Large and Small Magellanic Clouds (LMC and SMC), but here we follow Griest et al. (2011, hereafter Paper I), Cieplak & Griest (2013, hereafter Paper II), and Griest et al. (2013) in using microlensing of the nearby Kepler mission source stars to search for potential MACHO DM signatures. Throughout this paper we follow the methods developed in those works.

The Kepler telescope has a 1 m aperture with a 115 deg2 field of view and is in an Earth-trailing heliocentric orbit (see Koch et al. 2010 and Borucki et al. 2010 for a description of the Kepler mission). It took photometric measurements of around 150,000 stars every 30 minutes toward the Cygnus-Lyra region of the sky. The telescope was launched in 2009 March and successfully completed a four yr nominal mission. The main goal of the Kepler mission is to discover extrasolar planets by the transit technique. For well-aligned systems, a planet will cross in front of the stellar limb and cause the measured stellar flux to drop by a small amount. In order to detect Earth-size planets, Kepler had exquisite photometric accuracy, measuring fluxes to one part in 10,000 or better. In this paper, we analyze these same stellar light curves but look instead for short-duration increases in stellar flux caused by gravitational microlensing of a PBH as it passes near the line of sight of the star.

Naively, the nearby Kepler source stars should not be very useful for microlensing because they are at a typical distance of 1 kpc and only 150,000 in number. In general, the sensitivity of microlensing searches for DM is proportional to the distance to the stars, the number of stars monitored, and the duration of the observing program (Paczynski 1986). Previous microlensing searches for DM toward the LMC (Renault et al. 1997; Alcock et al. 2000; Tisserand et al. 2007) monitored more than 12 million stars at a distance of 50 kpc for 8 yr, naively giving a factor of several thousand larger sensitivity than the Kepler source stars. However, as shown in Papers I and II, the finite size of the Kepler stars coupled with Kepler's high-precision photometry implies that Kepler is more sensitive than any previous microlensing experiment for PBHs in the mass range 2 × 10−10 M☉ to 2 × 10−6 M☉.

3. DATA ANALYSIS AND EVENT SELECTION

Here we present a preliminary analysis of two yr of publicly available Kepler data (quarters 2 through 9; http://archive.stsci.edu/kepler, 2013; Fraquelli & Thompson 2011).

Each quarter, the Kepler team released around 150,000 light curve files, each containing around 4400 flux measurements taken over about 90 days, with a cadence of around 30 minutes (more precisely: 29.425 minutes). The data we use consists of the reported time of each flux measurement (time), the photometric flux (pdcflux), the flux error (pdcfluxerr), and the quality flag (sapquality). We use the Kepler pipeline flux pdcflux, which removes various trends from the data. The aperture photometric fluxes are also available but we do not use these. Because the Kepler team is looking for dips in the flux that last only a few hours and we are looking for microlensing bumps in the data that last a similarly short time, the detrending procedure that the Kepler team applies to the data should be good for us as well. The quality flag is set nonzero if problems exist in the photometry; for example, from cosmic rays or reaction wheel desaturation (Fraquelli & Thompson 2011). We ignore all data where the quality flag is nonzero.
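As a concrete illustration, the quality-flag filtering above amounts to a simple mask. This is a sketch only: the array names follow the column names in the text, and the function name is ours.

```python
import numpy as np

def good_points(time, pdcflux, pdcfluxerr, sapquality):
    """Keep only cadences whose quality flag is zero and whose flux is
    finite, mirroring the filtering described in the text."""
    mask = (sapquality == 0) & np.isfinite(pdcflux)
    return time[mask], pdcflux[mask], pdcfluxerr[mask]
```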

We divide the good fluxes by the mean flux of the light curve, which is calculated over 300 data points toward the center of the light curve, and subtract unity from this quantity to produce a fractional magnification light curve. We divide the flux errors by the mean as well to give us appropriate errors on the fractional magnifications. These light curves are used for our initial search; however, when performing fits to microlensing shapes, etc., we renormalize our selected light curves using the median flux of the whole light curve, which gives a more robust baseline. For each light curve, we calculate a set of statistics that allow us to identify microlensing candidate events and also to distinguish various types of background events. These statistics and the selection criteria are listed in Tables 1 and 2.
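A minimal sketch of the normalization step (the 300-point centering convention follows the description above; the function name is ours, and this is illustrative rather than the actual pipeline):

```python
import numpy as np

def fractional_magnification(flux, fluxerr, ncentral=300):
    """Divide by the mean of ncentral points at the light-curve center
    and subtract unity, giving the fractional magnification A_i - 1.
    Errors are scaled by the same mean."""
    mid = len(flux) // 2
    lo = max(0, mid - ncentral // 2)
    mean = flux[lo:lo + ncentral].mean()
    return flux / mean - 1.0, fluxerr / mean
```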

Table 1. Definitions of Statistics

Statistic Definition
〈flux〉 Average of fluxi over 300 central points in light curve
Ai fluxi/〈flux〉
σi Reported error of flux normalized by average flux
bump Sequence of four or more contiguous fluxes with Ai − 1 ⩾ 3σi
nbump Number of bumps in light curve
bumplen Number of contiguous fluxes with Ai − 1 ⩾ 3σi
lag1autocorr $\frac{1}{N-1}\sum_{i=1}^{N-1}(A_i-1)(A_{i+1}-1)$
bumpvar ∑|Ai − 1|/σi over points under bump
Ledgevar ∑|Ai − 1|/σi over 2 bumplen points starting 6 bumplen before bump
Redgevar ∑|Ai − 1|/σi over 2 bumplen points starting 4 bumplen after bump
edgecriterion 1, if the bump begins and ends on a good data point, else 0
dof Number of data points within 5 bumplen of peak minus number of fit parameters
t* Microlensing shape duration fit parameter (see text)
λ Flare event shape duration fit parameter (see text)
mlchi2dof χ2 of fit to microlensing shape divided by dof
fchi2dof χ2 of fit to exponential flare shape divided by dof
chi2in χ2 of microlensing fit for points with time, ti, such that t0 − 1.5t* < ti < t0 + 1.5t*
chi2out χ2 of microlensing fit for points with time, t0 − 6t* < ti < t0 + 6t*, but not in chi2in
Nasy Number of points near peak time, t0, for asymmetry; larger of 1.5λ and 2t*
asymmetry $\sum _{ N_{\mathrm{asy}}\ \rm points} |A(t_0-t_i) - A(t_0+t_i) |/ (\sum A(t_i) - N_{\mathrm{asy}}A_{{\rm min}})$


Table 2. Selection Criteria

Selection Criterion Purpose Number of Light Curves^a
initial Before selection 1,215,788
nbump < 3 AND lag1autocorr < 0.7 Remove obvious variable stars 700,136
0 < nbump < 3 AND bumplen ⩾ 4 Level 1 trigger (one or two significant bumps) 121,092
bumpvar > 11(1/Ledgevar + 1/Redgevar)−1 Signal-to-noise cut when reported errors are non-Gaussian 981
edgecriterion > 0 Remove bumps that start or end in bad data 881
mlchi2dof < fchi2dof Microlensing fit better than flare star fit 288
if bumplen = 4 or 5, fchi2dof/mlchi2dof > 1.4 Stronger flare vs. microlensing χ2 requirement for short duration bumps 177
asymmetry < 0.17 Remove short duration flare events 61
mlchi2dof < 3.5 Microlensing fit is not too bad 33
chi2in < 5 Require microlensing $\chi ^2_{\rm dof}$ not dominated by region under peak 17
not caused by moving object Remove comets/asteroids 0

Note. ^a Total number of light curves remaining after each selection is made; each light curve in quarters 2–9 is counted separately.


3.1. Statistics and Selection Criteria

Because we are looking for very low magnification, short duration bumps in a large amount of data, the main backgrounds are measurement noise and stellar variability. Our level 1 requirement is that a candidate event contain a "bump", defined as four sequential flux measurements that are three standard deviations higher than the mean flux. (This was also the criterion applied in our theoretical papers, Paper I and Paper II.)

Naively applied, this criterion selects about 50% of all of the 150,000 light curves. The problem is that given the extreme precision of Kepler photometry, around 25% of main sequence stars and 95% of giant stars have measurable variability (Ciardi et al. 2011). So before applying our bump selection criteria we first identify and remove "variable stars" from the database. We call a star a "variable star" if it contains more than two bumps (defined above) or if the autocorrelation function of the light curve calculated with a lag time of one measurement (around 30 minutes) is larger than 0.7. This lag1autocorr statistic is a measure of how correlated one flux measurement is with the next. Because most variable stars vary on timescales much longer than 30 minutes, we expect variable stars to have a large value for this statistic. The threshold value of 0.7 is set by requiring that the simulated microlensing events we add to light curves to measure our detection efficiency are not removed by this cut. In general, the cut values of each selection criterion below are set as a compromise. We want to select as many of our added simulated microlensing events as possible, while still eliminating whatever background the cut is intended to remove. Thus our cuts are as loose as possible while eliminating background. All of our cuts are listed and defined in Tables 1 and 2. Table 2 also shows the number of light curves remaining after each selection criterion is applied.
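The level 1 bump trigger and the lag1autocorr statistic of Table 1 can be sketched as follows (illustrative NumPy; the function names are ours):

```python
import numpy as np

def lag1autocorr(A):
    """(1/(N-1)) * sum over i of (A_i - 1)(A_{i+1} - 1), per Table 1."""
    d = A - 1.0
    return np.sum(d[:-1] * d[1:]) / (len(A) - 1)

def find_bumps(A, sigma, nsig=3.0, minlen=4):
    """Level-1 trigger: runs of at least minlen contiguous points with
    A_i - 1 >= nsig * sigma_i.  Returns a list of (start, length)."""
    high = (A - 1.0) >= nsig * sigma
    bumps, i = [], 0
    while i < len(high):
        if high[i]:
            j = i
            while j < len(high) and high[j]:
                j += 1
            if j - i >= minlen:
                bumps.append((i, j - i))
            i = j
        else:
            i += 1
    return bumps
```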

Using these two criteria, we remove about 34% of the dwarf stars and around 91% of the giant stars from further consideration. Note that these numbers are consistent with the results of Ciardi et al. (2011) mentioned above, whose numbers we used in Paper II for our theoretical estimate. Note that we are using the term "giant star" very roughly to mean any star with radius greater than 3 R☉, as listed in the Kepler light curve file.

After eliminating variable stars, our bump selection criteria give us around 15,000 candidate events per quarter. However, the Gaussian error approximation implied by our bump criterion is not adequate, so we calculate three local measures of variability for each bump we find: bumpvar, the absolute value of the fractional flux increase divided by the error, summed over the duration of the bump (bumpvar = ∑|A − 1|/σ); Ledgevar, the same quantity for the region of the light curve just before the bump; and Redgevar, the same for a small portion of the light curve just after the bump. We then demand that the variation during the bump be substantially more significant than the variation just before and after it. This selection criterion eliminates bumps caused by sequences of very noisy measurements, where the noise is non-Gaussian and the reported error bars therefore do not reliably determine the probability of outlier fluxes.
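The Table 2 cut bumpvar > 11(1/Ledgevar + 1/Redgevar)−1 can be sketched as follows; the edge-window offsets follow the Table 1 definitions, and the lack of edge guards is our simplification:

```python
import numpy as np

def signal_to_noise_cut(A, sigma, start, blen, threshold=11.0):
    """Table 2 cut: bumpvar > threshold * (1/Ledgevar + 1/Redgevar)^-1.
    Window offsets follow Table 1; assumes the bump lies well inside
    the light curve (no edge guards)."""
    z = np.abs(A - 1.0) / sigma
    bumpvar = z[start:start + blen].sum()
    lstart = start - 6 * blen               # 2*blen points, 6*blen before bump
    ledgevar = z[lstart:lstart + 2 * blen].sum()
    rstart = start + blen + 4 * blen        # 2*blen points, 4*blen after bump
    redgevar = z[rstart:rstart + 2 * blen].sum()
    return bumpvar > threshold / (1.0 / ledgevar + 1.0 / redgevar)
```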

After applying these criteria we still find a large number of candidate bumps. Many of these bumps happen at the same time and have the same shape across a large number of light curves. These are most likely caused by systematic problems in the photometry coming from either instrumental problems in the Kepler satellite or the detrending software. We especially notice these systematic error bumps just before or after regions of "bad data", areas of light curves that have the quality flag set. To deal with these problems we compiled ranges of dates over the eight quarters that contained substantial numbers of similarly shaped bumps. These ranges of times are organized in a "bad data" file and are removed from all of the light curves before calculating any statistics.

After removing the "bad data", the variables, and the noise bumps as discussed above, we still have around 100 candidate bumps per quarter, some still caused by noise but many caused by short-term variability. Some example light curves are shown in Figures 1–4. Examination of the light curves shows that the selected bumps come in several categories. First, there are dozens of bumps that start during periods of "bad data" and then exponentially approach the mean flux over several hours or days. To eliminate these we apply an edgecriterion, demanding that our best-fit microlensing shape not start or end in a region of "bad data" or off the edge of the data; that is, we demand that the entire bump be contained in the good data.

Figure 1. Examples of bumps in Kepler light curves caused by stellar flares. The Kepler source star IDs are shown in the upper left corners; the solid blue line is the rather poor fit to the microlensing shape, and the solid green line is a (better) fit to a flare event. The top panel shows a medium-amplitude flare event; the middle panel shows a high-amplitude event; and the bottom panel shows a very low amplitude flare.

Figure 2. Examples of bumps in Kepler light curves. The top two panels show very short duration events that can be well fit with either microlensing or flare shapes. The bottom panel is an event of unknown origin that is a poor fit to the microlensing shape. The Kepler source star IDs are shown as are fits to microlensing (solid blue line) and flare (solid green line) shapes.

Figure 3. Examples of bumps in Kepler light curves caused by comet C/2006 Q1 (McNaught) during quarter 5. The Kepler source star IDs are shown, and the solid blue line is a fit microlensing model.

Figure 4. Examples of bumps in Kepler light curves caused by comets during quarter 9. The upper panel is from C/2007 Q3 (see Table 3) and the lower panel from an unidentified comet or asteroid. The Kepler source star IDs are shown as are fits to a microlensing shape.


Next, the largest category of bumps by far, containing a few hundred events per quarter, is of light curves with a highly asymmetric bump that we determined is probably due to a stellar flare caused by magnetic activity in the stellar atmosphere. Examples are shown in Figure 1. We discuss some properties of these flare events in the Appendix.

To eliminate these flare events we compare two nonlinear fits to each bump, the first using the expected microlensing shape given by formulas in Cieplak & Griest (2013) and the second using a flare event shape, which has a nearly instantaneous rise followed by an exponential drop back to the median light curve flux. For microlensing there are four parameters that determine the light curve shape: t0, the time of the peak (time of closest approach between the lens and source); t*, the duration of the event (half the time to cross the stellar limb; see Equation (11) of Paper II); Amax, the peak magnification; and εmin, the distance of closest approach between the lens and source lines of sight divided by the stellar radius in the lens plane. For the flare shape, we have t0, the peak time; Amax, the peak amplitude; and λ, the exponential decay constant. While we assume the rise is instantaneous and is followed by an exponential decay, the rise can occur at any time during the 30 minute Kepler flux integration, so the flux measurement immediately preceding the measured peak flux can have any value between the baseline and the peak value. Thus we add another degree of freedom (dof) to the flare fit: the value of the flux at the time immediately preceding the peak time. Examples of these fits are shown in Figures 1 and 2, where we show the data as well as both the microlensing and flare fits.

In order to eliminate flare events we then require that mlchi2dof < fchi2dof, where mlchi2dof is the chi-squared per dof of the microlensing fit and fchi2dof is the chi-squared per dof of the flare star fit. We fit over a region of the light curve up to 10 times longer than the bump length, where the bump length is defined as the number of 3σ high flux points we used for the level 1 trigger. This selection criterion works extremely well in removing longer duration (i.e., many hour) flare events, but for candidate events lasting only a few hours it does not always remove the flare event. See Figure 1 for longer duration events, where the fits clearly distinguish microlensing from flaring, and the top two light curves in Figure 2, where both microlensing and flare fits are satisfactory due to the small number of data points in the bump. The problem is that there is a large background of flare events that have a wide range of durations. We therefore expect many short-duration flare events with only a few points in the bump. However, our fits lose their power to distinguish flares from microlensing when the number of data points is small. In addition, we start the fit for the flare shape from the highest point but allow one previous flux to be high due to the roughly 30 minute Kepler integration time. Sometimes, due to measurement error, it is the third point in the flare event that is the highest, and then the flare-fit chi-squared is bad. With only four or five points total, there are not enough points to compensate for this one bad point, and the microlensing fit will be better than the flare fit. Thus, due to random measurement errors, bumps that are near our minimum of a total of four flux points are sometimes fit better with a microlensing shape, even when they are probably flare events.

To solve this problem, we require bumps with only four or five points to meet a more stringent chi-squared criterion, fchi2dof/mlchi2dof > 1.4, and calculate an additional asymmetry statistic, which distinguishes the symmetric microlensing shape from the asymmetric flare shape.

To calculate asymmetry we start by finding a symmetry timescale that is the larger of 1.5 times the flare-fit timescale λ and twice the microlensing timescale t*. We define asymmetry as the sum of absolute values of the differences between the flux measured at a time before the peak and the flux measured at the symmetric time after the peak, normalized by the total flux above the minimum under the peak (see Table 1 for the formulas). Requiring that asymmetry be smaller than 0.17, together with the more stringent chi-squared criterion for bumps near the minimum number of data points, effectively removes the flare events, including those of short duration.
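For evenly sampled data, the symmetric-time pairing reduces to index pairs around the peak, so the asymmetry statistic can be sketched as (function name ours, normalization following the Table 1 formula):

```python
import numpy as np

def asymmetry(A, peak, n):
    """Sum of |A(t0 - t_i) - A(t0 + t_i)| over n points on each side of
    the peak, normalized by the flux above the minimum in the window
    (sum A(t_i) - N * A_min), following the Table 1 formula."""
    left = A[peak - n:peak][::-1]        # A(t0 - t_i), i = 1..n
    right = A[peak + 1:peak + 1 + n]     # A(t0 + t_i)
    window = A[peak - n:peak + 1 + n]
    norm = window.sum() - len(window) * window.min()
    return np.abs(left - right).sum() / norm
```

A perfectly symmetric bump gives asymmetry = 0, while a flare-like instant rise gives a large value, which is what the asymmetry < 0.17 cut exploits.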

We also implement two more signal-to-noise criteria. First, we require that mlchi2dof (the chi-squared per dof of the microlensing fit defined above) be less than 3.5, which ensures that the microlensing shape is a relatively good fit. The bottom light curve in Figure 2 shows an example bump removed by this criterion. We also compare chi2in, the chi-squared per dof of the fit under the peak, with chi2out, the chi-squared per dof outside the peak. We define a region centered on the peak with duration three times the event duration t* as "under the peak" and a region six times the peak duration, excluding the above region, as "outside the peak." Our penultimate selection criterion is then to require that chi2in < 5. This eliminates rare bumps where extra noise in the peak area causes our bump criteria to be met; we do not want bumps where the fit chi-squared under the peak is much worse than the chi-squared outside the peak.

3.1.1. Foreground Moving Objects in the Kepler Field

After applying these cuts to 2 yr of data, we find 17 candidate microlensing events. These are listed in Table 3, and examples are shown in Figures 3 and 4.

Table 3. Comets in Kepler Data

Quarter Kepler ID Time^a R.A.^b Decl.^b gmag^c Comet
5 3527753 482.949 285.905 38.6395 21.3 C/2006 Q1
5 3628766 483.582 285.904 38.7161 21.4 C/2006 Q1
5 3833908 485.217 285.893 38.9171 20.8 C/2006 Q1
5 3937408 486.116 285.887 39.029 21.5 C/2006 Q1
5 3937430 486.106 285.894 39.0263 21.2 C/2006 Q1
5 3937432 486.259 285.894 39.0463 21.4 C/2006 Q1
5 4447346 490.591 285.82 39.5627 21.7 C/2006 Q1
5 4637389 492.379 285.791 39.7713 22.1 C/2006 Q1
5 4729654 492.880 285.774 39.8299 21.9 C/2006 Q1
5 7421340 530.305 282.978 43.0877 20.8 C/2006 Q1
5 7421791 528.762 283.157 43.0068 20.6 C/2006 Q1
9 1429653 872.332 290.769 37.0712 21.3 New
9 3833007 837.604 285.517 38.9324 21.9 C/2007 Q3
9 3936698 838.523 285.568 39.023 21.5 C/2007 Q3
9 4138614 840.608 285.615 39.2529 21.5 C/2007 Q3
9 4347043 842.049 285.662 39.4001 22.6 C/2007 Q3
9 4751561 879.688 293.212 39.8243 21.2 New
9 6870049^d 886.968 293.851 42.3156 21.9 New

Notes. ^a Kepler days; convert to Julian date by adding 2454833. ^b From the Kepler satellite point of view. ^c Found by adding the bump peak flux to the source star magnitude; see the text. ^d Did not pass all cuts.


Note that these events are nicely symmetric and reasonable fits to the microlensing (actually just stellar limb-darkening) shape. However, the candidate events only occur in quarters 5 and 9, and as shown in Figure 5 they occur in a pattern on the sky that makes it extremely unlikely that they are due to microlensing. Considering first the quarter 5 events as shown in Figure 5(a) and (b), we see a clear track across the Kepler field. From the times shown in Figure 5(b) we see that an object entered the Kepler field near the lower right-hand corner (Figure 5(a)) and moved at a constant rate to the upper left-hand corner over a period of around 60 days. From the times of the bump peaks as given in Table 3, it is clear that this is an object that moved through the Kepler field, leaving a track of transient brightenings in its path. Note that the probability of microlensing is so small that it is extremely unlikely that these events are all caused by microlensing by a single lens.

Figure 5. Part (a) shows right ascension (R.A.) and declination (decl.) of bumps caused by comet C/2006 Q1 as it passed through the Kepler field during quarter 5. Part (b) shows the angular distance moved by the comet vs. Kepler time. Part (c) shows R.A. and decl. of two separate objects moving through the field during quarter 9, and part (d) shows the distance moved by these two objects. The constant slopes in parts (b) and (d) imply a constant angular speed. The point circled is a bump that did not pass all of the cuts, but it is shown here so there are more than two points on the line.


From the time differences between the bump peaks we deduce an angular speed of around 16 arcsec hr−1. Assuming that the motion is due to the 30 km s−1 Kepler satellite motion around the Sun, this implies a distance to this object of around 10 AU. We thus expect all of these events in quarter 5 to be caused by a comet passing through the Kepler field.
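The distance estimate follows from the small-angle reflex-motion relation d = v/ω; a worked check:

```python
# Reflex motion: an object at distance d seen from a platform moving at
# transverse speed v appears to move at angular speed omega = v / d.
omega = (16.0 / 206265.0) / 3600.0  # 16 arcsec/hr converted to rad/s
v = 30.0                            # heliocentric orbital speed, km/s
d_km = v / omega
d_au = d_km / 1.496e8               # 1 AU = 1.496e8 km
# d_au comes out near 9.3, i.e., roughly 10 AU
```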

Using the fractional magnification from the peak flux of each event and the g magnitude of each Kepler source star (given in the Kepler light curve file header), we can calculate a magnitude that needs to be added to each star to explain each bump. We find values ranging from gmag = 20 to gmag = 21.5. The largest of these values gives a minimum magnitude of the comet.
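The conversion follows from the extra flux (Amax − 1)Fstar superposed on the star; a minimal sketch (function name ours):

```python
import math

def added_magnitude(gmag_star, amax):
    """Magnitude of the extra flux (A_max - 1) * F_star that must be
    added to a star of magnitude gmag_star to produce a bump of
    peak magnification A_max."""
    return gmag_star - 2.5 * math.log10(amax - 1.0)
```

For example, a 1% bump (Amax = 1.01) on a hypothetical gmag = 13.5 star implies an object near gmag = 18.5; smaller bumps or fainter stars push this toward the gmag ≈ 20–21.5 values quoted above.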

Using the right ascension (R.A.) and declination (decl.) of the Kepler stars and making the coordinate transformation from the Earth-trailing Kepler orbit frame to the Earth frame, we examine the objects using the Minor Planet Center (2013) software and find that comet C/2006 Q1 (McNaught) passed through the locations of our quarter 5 candidate events at the times they occurred. This comet was at a distance of around 10 AU, just as we estimated. Thus all of these quarter 5 candidate microlensing events are consistent with being caused by comet C/2006 Q1.

Similarly examining the quarter 9 candidate events as shown in Figures 4 and 5, we see evidence for two tracks that cross the field. Using the time between bumps (Table 3), we again conclude that these events are caused by two objects moving through the Kepler field at a constant angular speed. For one of the new objects, there were only two bumps that passed all of our cuts, which does not make for much of a test for constant motion, so we loosened the cuts somewhat and found more events along this track. We show one such additional bump as a circled point in Figure 5(c) and (d). Using the positions and times, we were able to identify one of these objects as comet C/2007 Q3. The other object is moving faster than the two comets and so is probably closer. We could not identify a counterpart in the Minor Planet Ephemeris (Minor Planet Center 2013), so it may be a previously unidentified comet or asteroid. Using the same method as for C/2006 Q1, we find minimum magnitudes of around gmag = 21.3 for C/2007 Q3 and gmag = 21 for the new bright object. In any case, we remove these events as microlensing candidates.

After removing the events that are due to bright moving objects from our candidate list, we are left with no PBH microlensing candidates.

4. EFFICIENCY CALCULATION

Because we found no candidates, we can place upper limits on the halo density of PBH DM. We display these as limits on the halo fraction, where a DM halo made entirely of PBHs with a local density of ρDM = 0.3 GeV cm−3 (0.0079 M☉ pc−3) would give a halo fraction of unity. In order to do this we need to calculate the number of microlensing events that our search through the Kepler data would be expected to detect if the halo consisted entirely of PBH DM. We made this estimate previously (Paper I and Paper II), but those theoretical analyses made several approximations that our actual search through the data has not substantiated. For example, we assumed Gaussian errors, whereas the actual data have many non-Gaussian excursions in flux, resulting in many noise events. We also did not know about the flare events or the frequency of other variability-induced events. Thus we need to recalculate the expected number of detections using the actual selection criteria that gave us no PBH DM candidates. We use the same halo model as in Paper I and Paper II: a constant halo density between the Earth and the Kepler field stars about 1 kpc away in a direction almost orthogonal to the Galactic center, an isotropic Maxwellian velocity distribution, and no solar motion transverse to the line of sight, because the Sun's rotation around the Galactic center is in the direction of the Kepler field stars.

Using this model, we calculate the expected number of event detections by constructing simulated PBH microlensing events and adding them into actual (nonvariable) Kepler light curves. We then analyze these simulated events using the same software and selection criteria that we used for our microlensing search (including, of course, removal of the "bad data" ranges of the light curves). To create the simulated microlensing light curves we use the full limb-darkened microlensing formula (Witt & Mao 1994; corrected in Equation (13) of Cieplak & Griest 2013), with linear limb-darkening coefficients calculated for each Kepler source star from the Sing (2010) model grid (Paper II). We select a random sample of light curves from our nonvariable list and add the simulated microlensing signals to them.
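As an illustration of the injection step, the following sketch multiplies a real light curve by a model magnification. For simplicity it uses the point-source Paczyński magnification as a stand-in for the full limb-darkened finite-source formula actually used in the search; the function names are ours:

```python
import numpy as np

def paczynski_amplification(t, t0, tE, umin):
    # Point-source-point-lens magnification; a simplified stand-in for the
    # full limb-darkened finite-source formula of Witt & Mao (1994).
    u = np.sqrt(umin**2 + ((t - t0) / tE)**2)
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def inject_event(flux, t, t0, tE, umin):
    # Add a simulated microlensing bump to a real (nonvariable) light curve
    # by multiplying the observed flux by the model magnification.
    return np.asarray(flux) * paczynski_amplification(np.asarray(t), t0, tE, umin)
```

The injected light curve is then passed through the same selection pipeline as the real data.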

In the Monte Carlo efficiency calculation, we want to cover all possible actual microlensing events, so we add simulated events spanning the full range of physical parameters. These are discussed in detail in Papers I and II; they include m, the DM mass; t0, the time of closest approach between the lens and source lines of sight; x, the distance to the lens divided by the distance to the source star; vt, the transverse speed of the lens relative to the source line of sight; and umin, the distance of closest approach scaled by the Einstein radius in the lens plane. The Einstein radius is given by

r_E = [4Gm x(1 − x)L/c^2]^(1/2) ≈ 1.35 × 10^9 cm [m9 x(1 − x)(L/1 kpc)]^(1/2),   (1)

where m9 = (m/10−9M) is the PBH mass and L is the source star distance.
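Equation (1) can be evaluated numerically; the following sketch (constants in cgs, function name ours) gives r_E ≈ 7 × 10^8 cm for m9 = 1, x = 0.5, and L = 1 kpc:

```python
import math

G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
C = 2.998e10       # speed of light, cm s^-1
MSUN = 1.989e33    # solar mass, g
KPC = 3.086e21     # kiloparsec, cm

def einstein_radius_cm(m9, x, L_kpc=1.0):
    """Einstein radius in the lens plane for a PBH of mass m9 * 1e-9 Msun,
    with the lens at fraction x of the source distance L (default 1 kpc)."""
    m = m9 * 1e-9 * MSUN
    L = L_kpc * KPC
    return math.sqrt(4.0 * G * m * x * (1.0 - x) * L) / C
```

For these small masses r_E is far smaller than the projected stellar radius, which is why finite source effects dominate the light curve shape.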

Each time a simulated event is detected we calculate the differential microlensing rate for the microlensing parameters used, and for that specific source star, using Equation (19) of Cieplak & Griest (2013):

d^3Γ/(dx dvt dumin) = [4 ρDM L r_E(x) vt^2/(m vc^2)] exp(−vt^2/vc^2),   (2)

where vc = 220 km s−1 is the assumed circular speed around the Milky Way at the Sun's position. Note that vc sets the dispersion of the assumed isotropic Maxwellian velocity distribution of the DM halo.
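As a hedged sketch of evaluating the differential rate (the normalization follows our reconstruction above, not a verbatim transcription of Equation (19) of Cieplak & Griest 2013; constants in cgs, function name ours):

```python
import math

RHO_DM = 5.35e-25   # 0.3 GeV cm^-3 expressed in g cm^-3
VC = 2.2e7          # 220 km/s in cm/s
MSUN = 1.989e33     # g
KPC = 3.086e21      # cm
G = 6.674e-8        # cm^3 g^-1 s^-2
C = 2.998e10        # cm/s

def diff_rate(m9, x, vt, L_kpc=1.0):
    """d^3(Gamma)/(dx dvt dumin): events per star per second, per unit x
    and umin, per cm/s of vt, for our reconstruction of Equation (2)."""
    m = m9 * 1e-9 * MSUN
    L = L_kpc * KPC
    rE = math.sqrt(4.0 * G * m * x * (1.0 - x) * L) / C  # Equation (1)
    return 4.0 * RHO_DM * L * rE * vt**2 / (m * VC**2) * math.exp(-(vt / VC)**2)
```

The vt^2 exp(−vt^2/vc^2) factor peaks at vt = vc, so typical detected events have transverse speeds near the halo circular speed.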

One weakness in the above calculation is our estimate of the source star distance. The header of each Kepler light curve file contains the stellar radius, R*; Sloan r and g magnitudes; the effective temperature, Teff; the star position (R.A. and decl.); and extinction parameters AV and E(BV), etc. We estimate the apparent visual magnitude from

V = g − 0.55(g − r) − 0.03   (3)

(Fukugita et al. 1996) and the stellar distance from

d = 10^[(V − MV + 5 − AV)/5] pc,  with MV = 4.74 − 2.5 log10[(R*/R⊙)^2 (Teff/T⊙)^4] − B.C.,   (4)

where B.C. is the bolometric correction. Note that we make a crude bolometric correction, using only the effective temperature and whether the source is a main sequence or giant star (Carroll & Ostlie 2007), but we include it because it slightly reduces the distances to the sources, thereby reducing the expected detection rate, and we want our calculation to be conservative. For the same reason, we also remove the very few stars with calculated distances of greater than 10 kpc from the Monte Carlo selection. Because the predicted rate is proportional to stellar distance, these few stars can make a significant, but quite uncertain, contribution.
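The magnitude and distance estimates can be sketched as follows (the g, r to V transformation is the Fukugita et al. 1996 relation; the bolometric-correction argument here is a simplified stand-in for our crude temperature-based correction, and the function names are ours):

```python
import math

TSUN = 5772.0  # solar effective temperature, K

def v_mag(g, r):
    # Johnson V from Sloan g, r (Fukugita et al. 1996 transformation).
    return g - 0.55 * (g - r) - 0.03

def distance_pc(V, r_star, teff, bc, a_v=0.0):
    """Distance from the distance modulus: M_bol from the stellar radius
    (in solar radii) and Teff, M_V = M_bol - B.C., then
    d = 10**((V - M_V + 5 - A_V)/5) pc."""
    m_bol = 4.74 - 2.5 * math.log10(r_star**2 * (teff / TSUN)**4)
    m_v = m_bol - bc
    return 10.0 ** ((V - m_v + 5.0 - a_v) / 5.0)
```

As a sanity check, a Sun-like star (R* = 1 R⊙, Teff = 5772 K, B.C. ≈ −0.08) with apparent magnitude V = 4.83 comes out at roughly 10 pc.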

By adding many millions of simulated events covering the entire allowed range of microlensing parameters, we effectively perform the efficiency-weighted integral over the above differential rate (Equation (2)), where Γ is in units of expected events per day per star. Multiplying by the number of nonvariable stars and the duration of the light curves then gives the efficiency-corrected number of expected microlensing events. The number of nonvariable stars and star years for each Kepler quarter is given in Table 4. Note that, for comparison with future similar work, we discuss and plot our calculated efficiencies as a function of PBH mass, event duration, g magnitude, and so on in the Appendix.
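Schematically, the Monte Carlo estimate of the expected event count looks like the following (a simplified sketch with hypothetical variable names, not our production code): parameters are drawn uniformly over a volume, the differential rate is recorded at each draw, and only draws recovered by the selection software contribute.

```python
def expected_events(sim_rates, sim_detected, param_volume, star_days):
    """Schematic Monte Carlo estimator of the efficiency-corrected number
    of expected events.  sim_rates holds the differential rate (per star
    per day, per unit parameter volume) at each uniform parameter draw;
    sim_detected flags whether the selection software recovered the
    simulated event."""
    detected_sum = sum(r for r, d in zip(sim_rates, sim_detected) if d)
    mean_detected_rate = detected_sum / len(sim_rates)
    return mean_detected_rate * param_volume * star_days
```

Averaging the rate over detected draws only is what folds the detection efficiency into the integral, rather than computing the efficiency as a separate factor.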

Table 4. Star Years by Kepler Quarter

Quarter   Total Stars   Nonvariable Stars   Start Time^a   End Time^a   Duration (days)   Star Years
2         152,238        94,758             169.765        258.467      88.70             23,011
3         152,453        99,528             260.244        349.495      89.25             24,320
4         156,994        80,589             352.37         442.20       89.83             19,842
5         152,376       102,323             443.50         538.16       94.66             26,519
6         145,973        96,268             539.47         629.30       89.83             23,676
7         151,345        66,468             630.20         719.55       89.25             16,242
8         154,640        84,670             735.38         802.34       66.96             15,522
9         150,647        73,856             808.52         905.93       97.41             19,697

Note. ^a Kepler days; convert to Julian date by adding 2454833.


Because we detected no events, a 95% C.L. upper limit of 3/Nexp can be set on the halo fraction, where Nexp is the number of events expected for a halo made entirely of PBHs. This is similar to the method used previously by the MACHO collaboration (Alcock et al. 1996) to set limits on low-mass MACHO DM. Because the number density of PBHs, the efficiency, and the differential rate all depend strongly on the assumed PBH mass, we perform this Monte Carlo calculation independently for each value of the PBH mass. Also, because each quarter of Kepler light curve data includes somewhat different source stars and has different noise characteristics, we calculate the expected number of events independently for each quarter and sum the totals to find our limits. Altogether we analyzed more than 500 million simulated events. Our results are plotted in Figure 6.
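The limit-setting step itself is simple; a minimal sketch (function name ours), using the fact that for zero observed Poisson events the 95% C.L. upper limit on the mean is −ln(0.05) ≈ 3:

```python
def halo_fraction_limit(n_expected_by_quarter):
    """95% C.L. upper limit on the PBH halo fraction when zero events are
    observed: f < 3/Nexp, where Nexp (summed over quarters) is the number
    of events expected for a halo made entirely of PBHs."""
    return 3.0 / sum(n_expected_by_quarter)
```

For example, if the quarters together predict 30 events for a full-PBH halo at some mass, the halo fraction at that mass is limited to f < 0.1.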


Figure 6. Upper limits (95% C.L.) on PBH DM from nonobservation of PBH microlensing in two yr of Kepler data. The solid black line is our new limit, the dashed black line is the previous best limit (Alcock et al. 1998), the blue dot–dashed line is the theoretical limit from Paper II, and the red dotted line is the femtolensing limit from Barnacka et al. (2012). The black horizontal line indicates a halo density of 0.3 GeV cm−3.


5. LIMITS ON PBH DARK MATTER AND DISCUSSION

The thick black solid line in Figure 6 shows our new limits on the possibility of PBH DM. Because these limits depend only on the mass (assuming the lens is a compact object), they apply to any MACHO and thus are robust limits on planets, nontopological solitons, etc. Our analysis shows that PBH DM with masses in the range 2 × 10−9M to 10−7M cannot make up the entirety of the DM in a canonical DM halo.

Also shown in Figure 6 is a black dashed line showing the best previous limits, from a combined MACHO–Expérience pour la Recherche d'Objets Sombres (EROS) analysis in 1998 (Alcock et al. 1998). Note that some authors (e.g., Carr et al. 2010) quote only the 2007 EROS-only limits (Tisserand et al. 2007), but these are not as strong as the earlier combined limits in this low mass range. Our new limits extend the excluded region by more than an order of magnitude in mass, for the first time eliminating masses between 2 × 10−9M⊙ and 3 × 10−8M⊙ as the entirety of the DM.

Also shown in Figure 6, as a blue dot–dashed line, are the potential theoretical limits from the analysis of eight yr of Kepler data in Paper II. Naively we would have expected our limits to be about four times weaker than these theoretical limits, since we analyzed two yr of data, one-fourth of the eight yr assumed in Paper II. The figure shows that our experimental limits are about a factor of eight weaker than expected, due to the overly optimistic efficiency assumptions made in Papers I and II. Because of the many flare events, including those of short duration, we had to increase the stringency of our cuts for bumps with only four or five points. In addition, non-Gaussian noise and systematic errors in the data required several other shape and signal-to-noise selection criteria, all of which reduced the number of microlensing events we expect to find.

Finally, the dotted red line at the left of Figure 6 shows the recent femtolensing limits from Barnacka et al. (2012), which define the lower edge of the PBH allowed mass range. We see that there are still about four orders of magnitude in mass (from 3 × 10−13M to 2 × 10−9M) where PBH DM (or MACHO DM) can make up the entirety of the DM. Future analysis of the entire Kepler data set should discover PBH DM or eliminate some portion of this range, and future missions such as Wide-Field Infrared Survey Telescope (WFIRST) or the Exoplanet Euclid Legacy Survey (ExELS) project of the EUCLID mission have the potential to cover another order of magnitude (Paper II; Penny et al. 2013).

In this paper, we have not considered the effect of having several stars blended in the same Kepler object. This blending effect has been studied in detail in some microlensing work (e.g., Thomas & Griest 2006; Thomas et al. 2005; Popowski et al. 2005) and has been shown to affect the interpretation of microlensing rates and optical depths. Blending should be less important in the Kepler field, which is far less crowded than the LMC/SMC and Galactic bulge fields previously studied. However, there are known cases of nearby variable stars causing variation in a given Kepler object. Because we eliminate all "variable" stars and use only "nonvariables," this should not affect our limits; we do not need to know from first principles what fraction of stars is "variable," so any change in that fraction does not affect us. However, in our efficiency calculation we implicitly assume that all Kepler "nonvariable" stars are unblended, and this adds uncertainty to our final results.

There are several competing effects when a blended star is lensed. First, the amplitude of the flux does not rise as high as the theoretical model predicts because of flux from the background stars. Thus we might miss an event that our Monte Carlo says we should select. This would lower our sensitivity and weaken our limits. Second, the probability of lensing is higher than in the single-star case because there is more than one star that can be lensed. This would increase our sensitivity and strengthen our limits. Finally, the duration of the event will be misunderstood because we use the Kepler object stellar radius to compute the connection with underlying microlensing parameters. This can either raise or lower our sensitivity depending upon the PBH mass and other variables. A complete study of blending is left to further work, but we suspect the combined effect will be quite small because the Kepler fields are uncrowded and we are targeting only bright stars, making any blended star more likely to be substantially fainter than the purported Kepler source.

K.G. and A.M.C. were supported in part by the US Department of Energy under grants DE-FG03-97ER40546 and DE-SC0009919. A.M.C. was supported in part by the National Science Foundation Graduate Research Fellowship under grant number DGE0707423. Some of the data presented in this paper were obtained from the Multimission Archive at the Space Telescope Science Institute (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts.

APPENDIX: DISCUSSION OF FLARE EVENTS AND EFFICIENCIES

In this Appendix, we display some figures that might be useful for comparison purposes for future workers.

A.1. Flare Events

In searching for microlensing events, we encountered a background of flare events. While a complete discussion of these short-duration stellar variables is beyond the scope of this paper, we show some information about them in Figure 7. We plot the duration of these events, as measured by the bumplen statistic, versus the amplitude, as measured by the fit Amax. The main statistic used to distinguish flare events from microlensing candidates was the ratio of the reduced chi-squares of the flare fit and the microlensing fit, R = fchi2dof/mlchi2dof (see Table 1 for definitions). In Figure 7, the light curves that are not flares but pass our cuts are shown as black circles, while events we categorize as flares are plotted as crosses, with the value of R indicated by the color. Shorter duration flare events span a wide range of amplitudes, while longer duration events are mostly of lower amplitude; the large scatter in amplitude at any given duration shows that flares come in many combinations of duration and amplitude. A complete study of the properties of these flare events is left for future work.


Figure 7. Flare events before asymmetry cuts. We plot bumplen (the number of measurements, taken at 30 minute intervals, significantly above the average flux) vs. the fit amplitude Amax for the 292 events that have bumplen > 5 and pass all cuts up to the flare-fitting and asymmetry cuts (see Table 2). The small black circles show events that pass the flare fitting and asymmetry cuts (microlensing candidates), whereas the colored crosses show the flare events that fail these cuts. Defining R = fchi2dof/mlchi2dof, black circles indicate R > 1.0, red crosses indicate 0.8 < R ⩽ 1.0, green crosses indicate 0.6 < R ⩽ 0.8, blue crosses indicate 0.4 < R ⩽ 0.6, and magenta crosses indicate R ⩽ 0.4.


A.2. Efficiency Discussion

In order to allow comparisons for future workers, we discuss here some details of our efficiency calculation and results. As mentioned in the main text, we added hundreds of millions of artificial microlensing light curves into nonvariable Kepler data and then ran our microlensing search software on them. In many cases, our selection software selected the artificial event as a microlensing candidate, but in many other cases it was not selected. We define the detection efficiency as the total microlensing rate coming from the selected events divided by the total rate we would get if every artificial light curve were selected. Note there are other possible definitions; for instance, one could count each light curve equally and just take the ratio of the number of the selected light curves to the total number of light curves sampled. But in the actual experiment, different stars have different radii and different distances and so contribute differently to the total number of expected microlensing events, so we decided to use the weighting actually used to calculate limits.
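The two efficiency definitions discussed above can be sketched as follows (function names ours):

```python
def rate_weighted_efficiency(rates, selected):
    # The paper's definition: the total microlensing rate contributed by
    # the selected simulated events divided by the total rate that would
    # result if every simulated light curve were selected.
    return sum(r for r, ok in zip(rates, selected) if ok) / sum(rates)

def count_weighted_efficiency(selected):
    # The alternative definition mentioned in the text: the fraction of
    # simulated light curves selected, counting each equally.
    return sum(selected) / len(selected)
```

The two differ whenever high-rate stars (larger radii, larger distances) are recovered at different rates than low-rate stars, which is why we use the rate weighting actually applied in the limit calculation.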

For each quarter and for each PBH mass, we randomly pick a set of nonvariable stars to add artificial microlensing events to. For each star, we know the stellar distance, the stellar radius, and the magnitude. The added events span the range of possible microlensing parameters: PBH distance scaled by the distance to the source star, x; transverse velocity, vt; the time of closest approach between the lens and source lines of sight, t0; and distance of closest approach scaled by the Einstein radius in the lens plane, umin.

In calculating the total expected number of microlensing events that we use for our limits, we actually do not calculate the efficiency as defined above. We perform the integral given in Equation (2) directly by performing the Monte Carlo described earlier. However, it is of some interest to see how the efficiency depends upon the microlensing and stellar parameters. Thus in Figure 8, we show the efficiency as a function of PBH mass, gmag, umin, and t*, where t* is found from the stellar radius, vt, x, and umin and is half the time to cross the stellar limb. Because the efficiencies depend strongly on PBH mass (see lower right panel of Figure 8), we plot a set of lines, each of which is averaged over a small range of masses. Thus, in the upper left panel we see a peak efficiency at short durations and then a drop to a roughly constant efficiency at longer durations. The different mass bins show different asymptotic values of efficiencies, with lower masses having the smallest efficiencies as expected.


Figure 8. PBH detection efficiency vs. various parameters. Lower right panel shows overall rate-weighted efficiency, defined in Equation (2) as a function of PBH mass. The other panels have curves for various mass bins: dotted (black): 10−9MmPBH ⩽ 4 × 10−9M, solid (blue): 6 × 10−9MmPBH ⩽ 1.1 × 10−8M, short-dash (red): 2 × 10−8MmPBH ⩽ 3 × 10−8M, dot-long-short (green): 6 × 10−8MmPBH ⩽ 10−7M, and short-dash-long-dash (cyan): 2 × 10−7MmPBH ⩽ 6 × 10−7M. The upper left panel shows efficiency vs. bins of t*, the input event duration (see text for definition); the upper right panel shows efficiency vs. bins of gmag, the g magnitude of the stars; and the lower left panel shows efficiency vs. umin, the microlensing impact parameter in units of Einstein radius.


Note that these efficiency plots cannot be used in the normal way; for example, the total efficiency cannot be found from the area under the efficiency versus t* plot. One of us introduced this type of efficiency plot for ordinary point-source-point-lens microlensing, where the original intention was to show the differential efficiency. That was possible because, in ordinary microlensing, the differential efficiency is a function of the experimental parameters alone and does not depend upon microlensing parameters such as the mass and x. However, with the small masses considered here and the closeness of the Kepler sources, finite source effects dominate the microlensing rate, and the efficiency is a complicated function of the stellar radius, light curve noise level, lens mass, x, etc. Thus only the efficiency versus umin plot (lower left panel) can be interpreted in the usual way (because umin is the final integration variable in our Monte Carlo integration of Equation (2)). We also note that the umin plot does not extend to umin = 0 because we did not record efficiencies for umin < 10 in the special calculation made for these plots; the actual Monte Carlo did include values from umin = 0 up to quite large umin.

Footnotes

  • In fact, to speed the processing, we fit only a linear limb-darkened shape (Equation (11) of Paper II) and not the full microlensing light curve shape. For the PBH masses considered, the small size of the Einstein ring relative to the projected size of the stellar limb means that this is a good approximation. If we find good microlensing candidates, we would, of course, fit with the complete microlensing profile.
