S6 Fermi LAT Analysis Hands On


# Creating a TS map

## 1. Select LAT data during prompt burst phase

This can either be done using a time interval ascertained from data from other instruments, e.g., using the GBM trigger time and T90 values, or it can be estimated directly from the LAT light curve. This is a short burst, so let's create a light curve for the interval (T0-1, T0+5), where T0 = 263607782.000. As in Section 2, this can be accomplished using gtbin:

```
% gtbin
This is gtbin version ScienceTools-v9r17p0-fssc-20100906
Type of output file (CCUBE|CMAP|LC|PHA1|PHA2) [LC]
Event data file name[filtered_zmax105.fits]
Output file name[lc_prompt.fits]
Spacecraft data file name[FT2.fits]
Algorithm for defining time bins (FILE|LIN|SNR) [LIN]
Start value for first time bin in MET
Stop value for last time bin in MET
Width of linearly uniform time bins in seconds[0.1]
%
```
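
For reference, the start/stop values entered at the gtbin prompts follow directly from the trigger time; a minimal sketch in python:

```python
# convert the (T0-1, T0+5) window around the GBM trigger into the MET
# start/stop values used at the gtbin prompts (t0 from the text above)
t0 = 263607782.000   # GBM trigger time (MET, s)
tstart = t0 - 1.0    # start value for the first time bin
tstop = t0 + 5.0     # stop value for the last time bin
nbins = int(round((tstop - tstart) / 0.1))   # number of 0.1 s bins
```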

Instead of using fv to plot the results, we can plot from python using pyfits and the ROOT-based RootPlot class. For this, it is generally easiest to write a little script with the plotting commands and then execute the script from the python environment. First, here is the plot_lc.py script:

```
% cat plot_lc.py
import pyfits
from RootPlot import RootPlot

t0 = 263607782.000                  # GBM trigger time to use as reference time
lc = pyfits.open('lc_prompt.fits')
data = lc['RATE'].data              # Light curve data from the RATE extension
time = data.field('TIME') - t0      # Subtract t0
timedel = data.field('TIMEDEL')     # This is needed to convert to a rate.
counts = data.field('COUNTS')       # Counts per time bin
error = data.field('ERROR')         # The error is sqrt(n + gehrels) where
                                    # gehrels = 0.75 if n < 10
                                    #         = 0 if n >= 10.
rate = counts/timedel
rate_err = error/timedel

lc_plot = RootPlot(time, rate, dy=rate_err,
                   xtitle='T - %.2f' % t0,
                   ytitle='count rate (Hz)')
lc_plot.overlay(time, rate, symbol='line', color='blue')
lc_plot.canvas.SaveAs('lc_prompt.png')
%
```
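
The error prescription quoted in the script's comments can be sketched as a small helper (this is a sketch of the formula in the comments, not code from the tutorial):

```python
import math

def count_error(n):
    """sqrt(n + gehrels), with the Gehrels term applied for small n."""
    gehrels = 0.75 if n < 10 else 0.0
    return math.sqrt(n + gehrels)
```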

Eyeballing this plot, a plausible interval for the prompt phase in the LAT data looks like T0 + (0.58, 1.7) s. We run gtselect to extract the data for exactly this time interval. Remember to set "evclsmin=1" on the command line to ensure that we retain the transient class events:

```
% gtselect evclsmin=1
Input FT1 file[FT1.fits] filtered_zmax105.fits
Output FT1 file[filtered_zmax105.fits] filtered_prompt.fits
RA for new search center (degrees) (0:360) [INDEF]
Dec for new search center (degrees) (-90:90) [INDEF]
radius of new search region (degrees) (0:180) [INDEF]
start time (MET in s) (0:) [INDEF] 263607782.58
end time (MET in s) (0:) [INDEF] 263607783.7
lower energy limit (MeV) (0:)  100
upper energy limit (MeV) (0:) 
maximum zenith angle value (degrees) (0:180) 
Done.
%
```

## 2. Run the localization tools, gtfindsrc and gttsmap

If the data are essentially background-free (i.e., for a sufficiently short interval), one can run the localization tools gtfindsrc and gttsmap directly on the FT1 file. For longer intervals where the background is significant, we can model the instrumental and celestial backgrounds using diffuse model components. For these data, the integration time is about 1 second so the diffuse and instrumental components will make a negligible contribution to the total counts, so we proceed assuming the background is negligible.

We run gtfindsrc first to find the local maximum of the log-likelihood of a point source model as well as an estimate of the error radius. We will use this information to specify the size of the TS map in order to ensure that it contains the error circles we desire.

```
% gtfindsrc
Event file[] filtered_prompt.fits
Spacecraft file[] FT2.fits
Output file for trial points[] GRB090510_gtfindsrc_0.txt
Response functions to use[P6_V3_DIFFUSE] P6_V3_TRANSIENT
Livetime cube file[none]
Unbinned exposure map[none]
Source model file[none]
Target source name[]
Enter coordinates for test source:
Coordinate system (CEL|GAL) [CEL]
Initial source Right Ascension (deg) (-360:360)  333.55
Initial source Declination (deg) (-90:90)  -26.58
Optimizer (DRMNFB|NEWMINUIT|MINUIT|DRMNGB|LBFGS) [MINUIT]
Tolerance for -log(Likelihood) at each trial point[1e-2]
Convergence tolerance for positional fit[0.01]
Best fit position: 333.501,  -26.5184
%
```

Since our source model comprises only a point source to represent the signal from the GRB, we do not provide a source model file or a target source name. Similarly, since the exposure map is used for diffuse components, we do not need to provide an unbinned exposure map. Use of a livetime cube will make the point source exposure calculation faster, but for integrations less than 1ks, it is generally not needed.

The tool reports the best-fit position in J2000 coordinates and the 68% confidence level (CL) error radius in degrees. Let's define a TS map region that can contain a circle with 4 times this radius, with 30 pixels on a side. This implies a pixel size of

```
0.051 * 8 / 30 = 0.0136
```
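
In python, this geometry choice looks like the following sketch (the 0.051 deg 68% error radius is an assumed value here, used for illustration):

```python
# TS-map geometry: choose a pixel size so the map contains a circle of
# 4 times the 68% error radius (radius of 0.051 deg assumed here)
err68 = 0.051          # 68% CL error radius in degrees (assumed)
npix = 30              # pixels per side
side = 2 * 4 * err68   # map side: diameter of the 4x error-radius circle
pixel_scale = side / npix
```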

We run gttsmap accordingly:

```
% gttsmap
Event data file[] filtered_prompt.fits
Spacecraft data file[] FT2.fits
Exposure map file[none]
Exposure hypercube file[none]
Source model file[]
TS map file name[] GRB090510_gttsmap.fits
Response functions to use[P6_V3_DIFFUSE] P6_V3_TRANSIENT
Optimizer (DRMNFB|NEWMINUIT|MINUIT|DRMNGB|LBFGS) [MINUIT]
Fit tolerance[1e-5]
Number of X axis pixels[] 30
Number of Y axis pixels[] 30
Image scale (in degrees/pixel)[] 0.02
Coordinate system (CEL|GAL) [CEL]
X-coordinate of image center in degrees (RA or l)[] 333.501
Y-coordinate of image center in degrees (Dec or b)[] -26.5184
Projection method (AIT|ARC|CAR|GLS|MER|NCP|SIN|STG|TAN) [STG]
....................!
%
```

I've attached the plot_tsmap.py script to plot the 68, 90, 95, 99% error contours and the associated error radii. It runs in python:

```
% python -i plot_tsmap.py GRB090510_tsmap.fits
TSmax = -509.062
RA(TSmax) = 333.501
DEC(TSmax) = -26.5334
err68 = 0.0414592950814
err90 = 0.0671717299909
err95 = 0.0766342951482
err99 = 0.101200862168
Info in <TCanvas::Print>: file GRB090510_tsmap.png has been created
>>>
```

The script reports the RA, Dec of the TS maximum pixel and the 68% error radius estimated from the area enclosed by the 68% contour. Here is the resulting contour plot:

# Likelihood spectral analysis

In this subsection, we'll use the same data extracted as for the localization analysis above, but we will treat it as if the residual and diffuse backgrounds need to be modeled. This means that we will include diffuse components in the model definition and that will necessitate the exposure map calculation in order for the code to compute the predicted number of events. We'll see from the fit to the data that these diffuse components do indeed provide a negligible contribution to the overall counts for this burst.

## 1. Data subselection

We will use the FT1 file, filtered_prompt.fits, from the TS map calculation above.

## 2. Model definition

The model will include a point source at the GRB location, an isotropic component (to represent the extragalactic diffuse and/or the residual background), and a Galactic diffuse component that uses the recommended Galactic diffuse model, gll_iem_v02.fit. This file is available at the LAT background models page via the FSSC Data Access page.

The easiest way to generate a simple 3-component model like this is to use the modeleditor program: the "Source" pull-down menu allows one to add point sources and various diffuse components. The source name can be set in the "Source Name:" text input box, and the source coordinates and starting spectral parameters can be set under the "value" column in the Spectrum and Spatial Model areas. For the GALPROP Diffuse Source, one should edit the "File:" entry to point to the local copy of gll_iem_v02.fit. Here is the xml model that we will use:

```
<?xml version="1.0" ?>
<source_library title="Source Library" xmlns="http://fermi.gsfc.nasa.gov/source_library">
<source name="GRB 090510" type="PointSource">
<spectrum type="PowerLaw2">
<parameter free="true" max="1000.0" min="1e-05" name="Integral" scale="1e-06" value="1.0"/>
<parameter free="true" max="-1.0" min="-5.0" name="Index" scale="1.0" value="-2.0"/>
<parameter free="false" max="200000.0" min="20.0" name="LowerLimit" scale="1.0" value="100.0"/>
<parameter free="false" max="200000.0" min="20.0" name="UpperLimit" scale="1.0" value="10000.0"/>
</spectrum>
<spatialModel type="SkyDirFunction">
<parameter free="false" max="360.0" min="0.0" name="RA" scale="1.0" value="333.501"/>
<parameter free="false" max="90.0" min="-90.0" name="DEC" scale="1.0" value="-26.5334"/>
</spatialModel>
</source>
<source name="GALPROP Diffuse" type="DiffuseSource">
<spectrum type="ConstantValue">
<parameter free="true" max="10.0" min="0.0" name="Value" scale="1.0" value="1.0"/>
</spectrum>
<spatialModel file="/home/jchiang/work/GRB_workshop/gll_iem_v02.fit" type="MapCubeFunction">
<parameter free="false" max="1000.0" min="0.001" name="Normalization" scale="1.0" value="1.0"/>
</spatialModel>
</source>
<source name="Extragalactic Diffuse" type="DiffuseSource">
<spectrum type="PowerLaw">
<parameter free="true" max="100.0" min="1e-05" name="Prefactor" scale="1e-07" value="1.6"/>
<parameter free="false" max="-1.0" min="-3.5" name="Index" scale="1.0" value="-2.1"/>
<parameter free="false" max="200.0" min="50.0" name="Scale" scale="1.0" value="100.0"/>
</spectrum>
<spatialModel type="ConstantValue">
<parameter free="false" max="10.0" min="0.0" name="Value" scale="1.0" value="1.0"/>
</spatialModel>
</source>
</source_library>
```

I've edited this file by hand to set the energy bounds for the PowerLaw2 spectrum to LowerLimit=100 and UpperLimit=10000; this means that the Integral parameter will be in units of 1e-6 ph/cm^2/s over the energy range 0.1-10 GeV. I have also edited the names of the diffuse components slightly.

## 3. Exposure map generation

Once the data are selected, we can use the gtexpmap application to generate the exposure map. Note that the exposure maps from this tool are intended for use with unbinned likelihood analysis only:

```
% gtexpmap
The exposure maps generated by this tool are meant
to be used for *unbinned* likelihood analysis only.
Do not use them for binned analyses.
Event data file[] filtered_prompt.fits
Spacecraft data file[] FT2.fits
Exposure hypercube file[] none
output file name[] expmap_prompt.fits
Response functions[P6_V3_DIFFUSE] P6_V3_TRANSIENT
Radius of the source region (in degrees) 25
Number of longitude points (2:1000)  100
Number of latitude points (2:1000)  100
Number of energies (2:100) 
Computing the ExposureMap (no expCube file given)
....................!
%
```

The radius of the source region should be larger than the extraction region in the FT1 data in order to account for PSF tail contributions of sources just outside the extraction region. For energies down to 100 MeV, a 10 degree buffer is recommended; for higher energy lower bounds, e.g., 1 GeV, 5 degrees or less is acceptable.
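
This rule of thumb can be encoded in a small helper (hypothetical function; the 1 GeV threshold is an assumption based on the guidance above):

```python
# hypothetical helper: pad the FT1 extraction-region radius by 10 deg
# for a 100 MeV lower bound, or by 5 deg for a 1 GeV lower bound
def source_region_radius(roi_radius_deg, emin_mev):
    buffer_deg = 10.0 if emin_mev < 1000.0 else 5.0
    return roi_radius_deg + buffer_deg
```

With the 15 deg extraction region used here and a 100 MeV lower bound, this reproduces the 25 deg source region entered at the gtexpmap prompt.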

Note that we did not provide an "exposure hypercube" (AKA livetime cube) file. For data durations less than about 1ks, gtexpmap will execute faster doing the time integration over the livetimes in the FT2 file directly. For longer integrations, computing the livetime cube with gtltcube will be faster. When we investigate the extended emission, we will run gtltcube first. Note also that we used P6_V3_TRANSIENT response functions, consistent with our data selection evclsmin=1.

## 4. Diffuse response calculation

For each diffuse component in the model, the gtdiffrsp tool adds a column that contains the integral over the source extent (for the Galactic and isotropic components this is essentially the entire sky) of the source intensity spatial distribution times the PSF and effective area. It computes the counts model density of the various diffuse components at each measured photon location, arrival time, and energy, and this information is used in computing the likelihood. This integral is also computed for the point sources in the model, but since those sources are delta-functions in sky position, the spatial part of the integral is trivial.
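
Schematically, the quantity added for each event i can be sketched as follows (notation assumed here: S is the diffuse source intensity, P the PSF, and A_eff the effective area):

```latex
R_i \;=\; \int d\Omega \; S(\Omega, E_i)\,
          P(x_i \mid \Omega, E_i, t_i)\,
          A_{\rm eff}(\Omega, E_i, t_i)
```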

```
% gtdiffrsp
Event data file[] filtered_prompt.fits
Spacecraft data file[] FT2.fits
Source model file[] grb090510_model.xml
Response functions to use[P6_V3_DIFFUSE] P6_V3_TRANSIENT
Working on...
filtered_prompt.fits......................!
%
```

## 5. Unbinned likelihood analysis

The python modules of the pyLikelihood package allow one to fit the LAT data interactively while also providing full scripting control of the model fitting and output. The interface is designed around "observation" and "analysis" objects, with the idea that one can create separate analysis objects to analyze a single instance of the data as encapsulated by a single observation object. To illustrate this concretely, here is a script that creates the observation and analysis objects for our data:

```
% cat analysis.py
from UnbinnedAnalysis import *

obs = UnbinnedObs('filtered_prompt.fits', 'FT2.fits',
                  expMap='expmap_prompt.fits', expCube=None,
                  irfs='P6_V3_TRANSIENT')

like = UnbinnedAnalysis(obs, srcModel='grb090510_model.xml',
                        optimizer='MINUIT')

print like.model
%
```

This imports the classes from the UnbinnedAnalysis module and creates instances of the UnbinnedObs and UnbinnedAnalysis classes with the appropriate inputs. One could create a distinct analysis object in the same python session that uses a different source model but is applied to the same observation dataset:

```
like2 = UnbinnedAnalysis(obs, srcModel='grb090510_model_2.xml',
                         optimizer='NEWMINUIT')
```

Note that both the xml model definition and optimizer choice can have different values than for the original UnbinnedAnalysis object.

The line "print like.model" is similar to the "show model" command in Xspec and displays the model components and their values:

```
% python -i analysis.py
Extragalactic Diffuse
Spectrum: PowerLaw
0      Prefactor:  1.600e+00  0.000e+00  1.000e-05  1.000e+02 ( 1.000e-07)
1          Index: -2.100e+00  0.000e+00 -3.500e+00 -1.000e+00 ( 1.000e+00) fixed
2          Scale:  1.000e+02  0.000e+00  5.000e+01  2.000e+02 ( 1.000e+00) fixed

GALPROP Diffuse
Spectrum: ConstantValue
3          Value:  1.000e+00  0.000e+00  0.000e+00  1.000e+01 ( 1.000e+00)

GRB 090510
Spectrum: PowerLaw2
4       Integral:  1.000e+00  0.000e+00  1.000e-05  1.000e+03 ( 1.000e-06)
5          Index: -2.000e+00  0.000e+00 -5.000e+00 -1.000e+00 ( 1.000e+00)
6     LowerLimit:  1.000e+02  0.000e+00  2.000e+01  2.000e+05 ( 1.000e+00) fixed
7     UpperLimit:  1.000e+04  0.000e+00  2.000e+01  2.000e+05 ( 1.000e+00) fixed

>>>
```

### Parameter manipulation

Under each model component, the spectral function type and spectral parameters are listed. For the spectral parameters, the columns are

1. index
2. parameter name
3. current value
4. error estimate (This is only non-zero after a fit has been performed and will be reset to zero if the parameter value has been changed.)
5. lower bound
6. upper bound
7. scale factor (in parentheses)
8. fixed flag
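
As a concrete illustration of these columns (a hypothetical stand-in class; the real parameter objects live in pyLikelihood):

```python
# illustrative stand-in for one row of the parameter listing above
class SpectralParameter:
    def __init__(self, index, name, value, error, lo, hi, scale, fixed):
        self.index = index
        self.name = name
        self.value = value        # displayed value
        self.error = error        # zero until a fit has been performed
        self.lo, self.hi = lo, hi # lower and upper bounds
        self.scale = scale        # factor shown in parentheses
        self.fixed = fixed        # fixed flag

    def physical_value(self):
        # the physical value is the displayed value times the scale factor
        return self.value * self.scale

# the GRB 090510 Integral row from the listing above
integral = SpectralParameter(4, 'Integral', 1.0, 0.0, 1e-05, 1e3, 1e-06, False)
```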

The index can be used to reference a parameter value interactively, either to set the parameter properties or for introspection purposes:

```
>>> print like[3]
Value:  1.000e+00  0.000e+00  0.000e+00  1.000e+01 ( 1.000e+00)
>>> like[3] = 2
>>> print like[3]
Value:  2.000e+00  0.000e+00  0.000e+00  1.000e+01 ( 1.000e+00)
>>> like[3].setBounds(0, 100)
>>> print like[3]
Value:  2.000e+00  0.000e+00  0.000e+00  1.000e+02 ( 1.000e+00)
>>> like[3].setScale(10)
>>> print like[3]
Value:  2.000e+00  0.000e+00  0.000e+00  1.000e+02 ( 1.000e+01)
>>> like[3].setScale(1)
```

In scripts, the index of a parameter may be found using the par_index(<source name>, <parameter name>) function:

```
>>> galprop_norm = like.par_index('GALPROP Diffuse', 'Value')
>>> print galprop_norm
3
>>> print like[galprop_norm]
Value:  2.000e+00  0.000e+00  0.000e+00  1.000e+02 ( 1.000e+00)
>>>
```

One can also freeze or thaw parameters, either individually or by groups:

```
>>> like.freeze(3)
>>> like.freeze((0, 1, 2, 3, 4, 5, 6, 7))
>>> like.model
Extragalactic Diffuse
Spectrum: PowerLaw
0      Prefactor:  1.600e+00  0.000e+00  1.000e-05  1.000e+02 ( 1.000e-07) fixed
1          Index: -2.100e+00  0.000e+00 -3.500e+00 -1.000e+00 ( 1.000e+00) fixed
2          Scale:  1.000e+02  0.000e+00  5.000e+01  2.000e+02 ( 1.000e+00) fixed

GALPROP Diffuse
Spectrum: ConstantValue
3          Value:  2.000e+00  0.000e+00  0.000e+00  1.000e+02 ( 1.000e+01) fixed

GRB 090510
Spectrum: PowerLaw2
4       Integral:  1.000e+00  0.000e+00  1.000e-05  1.000e+03 ( 1.000e-06) fixed
5          Index: -2.000e+00  0.000e+00 -5.000e+00 -1.000e+00 ( 1.000e+00) fixed
6     LowerLimit:  2.000e+01  0.000e+00  2.000e+01  2.000e+05 ( 1.000e+00) fixed
7     UpperLimit:  2.000e+05  0.000e+00  2.000e+01  2.000e+05 ( 1.000e+00) fixed

>>> like.thaw(range(7))
>>> like.model
Extragalactic Diffuse
Spectrum: PowerLaw
0      Prefactor:  1.600e+00  0.000e+00  1.000e-05  1.000e+02 ( 1.000e-07)
1          Index: -2.100e+00  0.000e+00 -3.500e+00 -1.000e+00 ( 1.000e+00)
2          Scale:  1.000e+02  0.000e+00  5.000e+01  2.000e+02 ( 1.000e+00)

GALPROP Diffuse
Spectrum: ConstantValue
3          Value:  2.000e+00  0.000e+00  0.000e+00  1.000e+02 ( 1.000e+01)

GRB 090510
Spectrum: PowerLaw2
4       Integral:  1.000e+00  0.000e+00  1.000e-05  1.000e+03 ( 1.000e-06)
5          Index: -2.000e+00  0.000e+00 -5.000e+00 -1.000e+00 ( 1.000e+00)
6     LowerLimit:  2.000e+01  0.000e+00  2.000e+01  2.000e+05 ( 1.000e+00) fixed
7     UpperLimit:  2.000e+05  0.000e+00  2.000e+01  2.000e+05 ( 1.000e+00) fixed

>>>
```

Note that certain parameters, e.g., LowerLimit and UpperLimit, always remain fixed.

### Performing the fit

The "fit(…)" method is used to perform the spectral fit. It takes several optional arguments:

```
fit(self, verbosity=3, tol=None, optimizer=None, covar=False, optObject=None)
```

- verbosity: provides some control over the screen output, with verbosity=0 suppressing as much output as possible.
- tol: the absolute tolerance for changes in the log-likelihood used to determine convergence. The default value is 0.001. If tol is omitted, the None value indicates that the instance default should be used; setting it in the function call overrides the default temporarily for that optimization.
- optimizer: the optimizer to use for this function call, temporarily overriding the one set when the analysis object was created.
- covar: set this to True to have the optimizer provide an estimate of the covariance matrix. For large datasets and/or complicated models this can take some time, so it should generally be used only when the covariance matrix is needed for computing errors on fluxes (see below).
- optObject: allows one to specify an "optimizer object" that can be referenced later outside of this function. Ignore this for now.

```
>>> like.fit()

<...skip output...>

>>> print like.model
Extragalactic Diffuse
Spectrum: PowerLaw
0      Prefactor:  1.000e+02  5.158e-03  1.000e-05  1.000e+02 ( 1.000e-07)
1          Index: -2.100e+00  0.000e+00 -3.500e+00 -1.000e+00 ( 1.000e+00) fixed
2          Scale:  1.000e+02  0.000e+00  5.000e+01  2.000e+02 ( 1.000e+00) fixed

GALPROP Diffuse
Spectrum: ConstantValue
3          Value:  1.000e+01  1.438e-02  0.000e+00  1.000e+01 ( 1.000e+00)

GRB 090510
Spectrum: PowerLaw2
4       Integral:  1.000e+03  1.815e-03  1.000e-05  1.000e+03 ( 1.000e-06)
5          Index: -1.649e+00  8.855e-02 -5.000e+00 -1.000e+00 ( 1.000e+00)
6     LowerLimit:  1.000e+02  0.000e+00  2.000e+01  2.000e+05 ( 1.000e+00) fixed
7     UpperLimit:  1.000e+04  0.000e+00  2.000e+01  2.000e+05 ( 1.000e+00) fixed

>>>
```

The normalizations for all 3 spectral components are pegged at their maximum values. This arises because the observed number of counts greatly exceeds the number that the model can account for given the current bounds. One can find the total number of events using

```
>>> sum(like.nobs)
108.0
>>>
```

like.nobs is an array of detected counts for a range of energy bands that is used for plotting purposes. The predicted counts for each component using the current parameter values can be found using the NpredValue(…) function:

```
>>> like.NpredValue('GRB 090510')
6.5687452492682947
>>> for item in like.sourceNames(): print item, like.NpredValue(item)
...
Extragalactic Diffuse 1.0693882205
GALPROP Diffuse 0.104181741447
GRB 090510 6.56874524927
>>>
```

Let's reset the bounds for the GRB normalization and refit:

```
>>> like[4].setBounds(0, 1e10)
>>> like.fit(0)
-69.836303135629791
>>> like.model
Extragalactic Diffuse
Spectrum: PowerLaw
0      Prefactor:  4.097e-02  1.923e+00  1.000e-05  1.000e+02 ( 1.000e-07)
1          Index: -2.100e+00  0.000e+00 -3.500e+00 -1.000e+00 ( 1.000e+00) fixed
2          Scale:  1.000e+02  0.000e+00  5.000e+01  2.000e+02 ( 1.000e+00) fixed

GALPROP Diffuse
Spectrum: ConstantValue
3          Value:  5.510e+00  4.974e+00  0.000e+00  1.000e+01 ( 1.000e+00)

GRB 090510
Spectrum: PowerLaw2
4       Integral:  1.851e+04  1.839e+03  0.000e+00  1.000e+10 ( 1.000e-06)
5          Index: -1.951e+00  8.381e-02 -5.000e+00 -1.000e+00 ( 1.000e+00)
6     LowerLimit:  1.000e+02  0.000e+00  2.000e+01  2.000e+05 ( 1.000e+00) fixed
7     UpperLimit:  1.000e+04  0.000e+00  2.000e+01  2.000e+05 ( 1.000e+00) fixed

>>> for item in like.sourceNames(): print item, like.NpredValue(item)
...
Extragalactic Diffuse 0.000438106806618
GALPROP Diffuse 9.93542203626e-08
GRB 090510 107.967369927
>>>
```

The flux in the 0.1-10 GeV band is (1.85 +/- 0.18)e-2 ph/cm^2/s and the photon index is -1.95 +/- 0.08.
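
These numbers are just the fitted Integral parameter rescaled; a minimal sketch:

```python
# how the quoted flux follows from the fit: the displayed Integral value
# and error are multiplied by the parameter's 1e-6 scale factor
value, error, scale = 1.851e4, 1.839e3, 1e-6
flux = value * scale          # ph/cm^2/s in the 0.1-10 GeV band
flux_err = error * scale
```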

Plots of the fitted counts spectrum and residuals can be obtained using

```
>>> like.plot()
```

This produces two ROOT plotting windows. The plots can be exported as .eps or other image formats.

# Xspec analysis

## 1. Generating .pha and .rsp files

For simple Xspec analysis, it suffices to generate a PHA1 file:

```
% gtbin
This is gtbin version ScienceTools-v9r17p0-fssc-20100906
Type of output file (CCUBE|CMAP|LC|PHA1|PHA2) [CMAP] pha1
Event data file name[filtered_prompt.fits]
Output file name[cmap_prompt.fits] grb090510_prompt.pha
Spacecraft data file name[FT2.fits]
Algorithm for defining energy bins (FILE|LIN|LOG) [LOG]
Start value for first energy bin in MeV 100
Stop value for last energy bin in MeV 1e4
Number of logarithmically uniform energy bins 20
%
```

The gtrspgen tool is used to generate the "DRM" (detector response matrix). Note that one should always use the "PS" response calculation method.

```
% gtrspgen
This is gtrspgen version ScienceTools-v9r17p0-fssc-20100906
Response calculation method (GRB|PS) [GRB] PS
Spectrum file name[] grb090510_prompt.pha
Spacecraft data file name[] FT2.fits
Output file name[] grb090510_prompt.rsp
Cutoff angle for binning SC pointings (degrees)[60.] 90.
Size of bins for binning SC pointings (cos(theta))[.05]
Response function to use, Handoff|DC2|DC2A|DC2FA|DC2BA|DC2FB etc[P6_V3_DIFFUSE] P6_V3_TRANSIENT
Algorithm for defining true energy bins (FILE|LIN|LOG) [LOG]
Start value for first energy bin in MeV[30.]
Stop value for last energy bin in MeV[200000.]
Number of logarithmically uniform energy bins
%
```

## 2. Backgrounds

For GRB 090510, there really isn't any background contamination. For analyses of longer integrations, one can estimate the background using off-source regions as for more traditional X-ray analyses.

## 3. Running Xspec

```
% xspec

XSPEC version: 12.6.0
Build Date/Time: Sun Jul 25 20:03:01 2010

XSPEC12>data grb090510_prompt
***Warning: No TLMIN keyword value for response matrix FCHAN column.
Will assume TLMIN = 1.

1 spectrum  in use

Spectral Data File: grb090510_prompt.pha  Spectrum 1
Net count rate (cts/s) for Spectrum:1  1.035e+02 +/- 1.059e+01
Assigned to Data Group 1 and Plot Group 1
Noticed Channels:  1-20
Telescope: GLAST Instrument: LAT  Channel Type: PI
Exposure Time: 1.033 sec
Using Response (RMF) File            grb090510_prompt.rsp for Source 1

XSPEC12>model pow

Input parameter value, delta, min, bot, top, and max values for ...
1       0.01         -3         -2          9         10
1:powerlaw:PhoIndex>
1       0.01          0          0      1e+24      1e+24
2:powerlaw:norm>

========================================================================
Model powerlaw<1> Source No.: 1   Active/On
Model Model Component  Parameter  Unit     Value
par  comp
1    1   powerlaw   PhoIndex            1.00000      +/-  0.0
2    1   powerlaw   norm                1.00000      +/-  0.0
________________________________________________________________________

Chi-Squared =   3.722025e+07 using 20 PHA bins.
Reduced chi-squared =   2.067792e+06 for     18 degrees of freedom
Null hypothesis probability =   0.000000e+00
Current data and model not fit yet.
XSPEC12>cpd /xs
XSPEC12>setplot en
XSPEC12>plot ldata chi
***Warning: Fit is not current.
XSPEC12>statistic cstat
Warning: Cstat statistic is only valid for Poisson data.
cstat statistic: variance weighted using standard weighting

C-statistic =       68910.73 using 20 PHA bins and 18 degrees of freedom.

Warning: Cstat statistic is only valid for Poisson data.

Current data and model not fit yet.

XSPEC12>fit

C-Statistic  Lvl Par #          1             2
30.3118       -3    1.91112       603.614

<...skip output...>

==============================
Variances and Principal Axes
1        2
5.6690E-05| -1.0000   0.0001
5.6252E+05| -0.0001  -1.0000
------------------------------

========================
Covariance Matrix
1           2
1.005e-02   7.496e+01
7.496e+01   5.625e+05
------------------------

========================================================================
Model powerlaw<1> Source No.: 1   Active/On
Model Model Component  Parameter  Unit     Value
par  comp
1    1   powerlaw   PhoIndex            1.91112      +/-  0.100225
2    1   powerlaw   norm                603.614      +/-  750.011
________________________________________________________________________

C-statistic =          30.31 using 20 PHA bins and 18 degrees of freedom.

Warning: Cstat statistic is only valid for Poisson data.

XSPEC12>plot
Note: When using cstat or lstat, "plot chisq" is only valid
for data sets with no background files.
XSPEC12>plot ldata resid
XSPEC12>plot ldata ratio
XSPEC12>flux 1e5 1e7
Model Flux  0.018182 photons (1.5334e-05 ergs/cm^2/s) range (1.0000e+05 - 1.0000e+07 keV)
XSPEC12>
```

# Preparing files for RMFIT

## 1. Create the PHA2 file

Before running gtbin, one must first use gtselect to restrict the FT1 data to the desired energy and time intervals. For this exercise, we will try to "reverse-engineer" the PHA2 file that Valerie provided with her lesson. In this case, we can see what data selections were made using the gtvcut tool:

```
% gtvcut lat_090510016_b.pha2 SPECTRUM
DSTYP1: POS(RA,DEC)
DSUNI1: deg
DSVAL1: CIRCLE(333.6,-26.6,15)

DSTYP2: TIME
DSUNI2: s
DSVAL2: TABLE
DSREF2: :GTI

GTIs:
263607681  263607999

DSTYP3: ENERGY
DSUNI3: MeV
DSVAL3: 100:300000

DSTYP4: EVENT_CLASS
DSUNI4: dimensionless
DSVAL4: 3:3
%
```

This prints out the so-called "data sub-space" keywords that record the event selections that have been applied with gtselect. These keywords are propagated to all downstream data products and are used to ensure consistency among files used in a given analysis.

For Valerie's file, the EVENT_CLASS range is 3:3, meaning that "diffuse" class events were selected. The time range starts 101 seconds before trigger (T0 = 263607782) and lasts 318 seconds. The sky acceptance cone is slightly different from the one we used, and no zenith angle cut was applied. We'll ignore these differences and proceed using filtered_zmax105.fits as our input FT1 file:

```
% gtselect
Input FT1 file[] filtered_zmax105.fits
Output FT1 file[] filtered_pha2_events.fits
RA for new search center (degrees) (0:360) [INDEF]
Dec for new search center (degrees) (-90:90) [INDEF]
radius of new search region (degrees) (0:180) [INDEF]
start time (MET in s) (0:) [INDEF] 263607681
end time (MET in s) (0:) [INDEF] 263607999
lower energy limit (MeV) (0:)  100
upper energy limit (MeV) (0:) 
maximum zenith angle value (degrees) (0:180) 
Done.
%
```
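
A quick sanity check of the time arithmetic quoted above:

```python
# checking the quoted offsets against the GTI reported by gtvcut
t0 = 263607782                      # trigger time (MET, s)
gti_start, gti_stop = 263607681, 263607999
pre_trigger = t0 - gti_start        # seconds before trigger
duration = gti_stop - gti_start     # total interval length in seconds
```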

We can use pyfits to inspect the structure of Valerie's PHA2 file:

```
>>> import pyfits
>>> pha2 = pyfits.open('lat_090510016_b.pha2')
>>> pha2.info()
Filename: lat_090510016_b.pha2
No.    Name         Type      Cards   Dimensions   Format
0    PRIMARY     PrimaryHDU      18  ()            uint8
1    SPECTRUM    BinTableHDU     95  3000R x 8C    [D, D, I, 30I, 30J, 30E, A, D]
2    EBOUNDS     BinTableHDU     38  30R x 3C      [I, 1E, 1E]
3    GTI         BinTableHDU     34  1R x 2C       [D, D]
>>>
```

With this info, we can run gtbin to try to reproduce the same pha2 file (modulo the ROI and zmax differences):

```
% gtbin
This is gtbin version ScienceTools-v9r17p0-fssc-20100906
Type of output file (CCUBE|CMAP|LC|PHA1|PHA2) [PHA2]
Event data file name[filtered_pha2_events.fits]
Output file name[grb090510_rmfit.pha2]
Spacecraft data file name[FT2.fits]
Algorithm for defining energy bins (FILE|LIN|LOG) [LOG]
Start value for first energy bin in MeV
Stop value for last energy bin in MeV
Number of logarithmically uniform energy bins
Algorithm for defining time bins (FILE|LIN|SNR) [LIN]
Start value for first time bin in MET
Stop value for last time bin in MET
Width of linearly uniform time bins in seconds[0.1] 0.106
%
```
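
A quick check (plain python, using the GTI and bin width above) shows why 0.106 s was chosen:

```python
# the 0.106 s bin width reproduces the 3000 spectra (3000R) seen in the
# SPECTRUM extension of Valerie's PHA2 file
duration = 263607999 - 263607681        # GTI length in seconds
n_time_bins = int(round(duration / 0.106))
```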

## 2. Create the DRM file using gtrspgen

As with the pha1 file, this file is used as input to gtrspgen for generating the time-dependent DRM file:

```
% gtrspgen
This is gtrspgen version ScienceTools-v9r17p0-fssc-20100906
Response calculation method (GRB|PS) [PS]
Spectrum file name[grb090510_prompt.pha] grb090510_rmfit.pha2
Spacecraft data file name[FT2.fits]
Output file name[grb090510_prompt.rsp] grb090510_rmfit.rsp
Cutoff angle for binning SC pointings (degrees)
Size of bins for binning SC pointings (cos(theta))[0.05]
Response function to use, Handoff|DC2|DC2A|DC2FA|DC2BA|DC2FB etc[P6_V3_TRANSIENT] P6_V3_DIFFUSE
Algorithm for defining true energy bins (FILE|LIN|LOG) [LOG]
Start value for first energy bin in MeV
Stop value for last energy bin in MeV
Number of logarithmically uniform energy bins
%
```

## 3. Modify the pha2 headers

Finally, RMFIT requires certain keywords that gtbin does not set to the desired values. We provide a script, gtrmfit.py, to make the modifications; its arguments are the input pha2 file and the trigger time:

```
% python gtrmfit.py grb090510_rmfit.pha2 263607782
```

The resulting pha2 file, grb090510_rmfit_p2.pha2, and response matrix file, grb090510_rmfit.rsp, can now be read into RMFIT just as with any GBM data.

# Examining the ARR observations

## 1. Plotting off-axis angle as a function of time

In order to get a sense of the effect of an ARR on the observations, or to see if Earth occultations or SAA passages have interrupted the pointing, it is useful to plot the off-axis angle of the source with respect to the z-axis of the instrument. This information can be ascertained from the FT2 file. Here is a script, offaxis_plot.py, that computes and plots this angle:

```
% cat offaxis_plot.py
import numpy as num
import pyfits
from RootPlot import RootPlot

def my_arccos(x):
    """Angular distance in radians corresponding to a cosine"""
    if abs(x) < 1:
        angdist = num.arccos(x)
    elif abs(x) < 1.00001:
        angdist = num.pi/2.*(1 - int(x))
    else:
        raise ValueError, "abs(x) must be less than 1.00001"
    return angdist

def dist(a, b):
    """Angular separation in degrees between two sky coordinates"""
    pi, cos, sin = num.pi, num.cos, num.sin
    ra1, dec1 = a
    ra2, dec2 = b
    ra1 = ra1*pi/180.
    dec1 = dec1*pi/180.
    ra2 = ra2*pi/180.
    dec2 = dec2*pi/180.
    mu = (cos(dec1)*cos(ra1)*cos(dec2)*cos(ra2)
          + cos(dec1)*sin(ra1)*cos(dec2)*sin(ra2) + sin(dec1)*sin(dec2))
    return my_arccos(mu)*180./pi

def offaxis_plot(ft2file, ra0, dec0, t0):
    ft2 = pyfits.open(ft2file)
    sc_data = ft2['SC_DATA'].data
    ra_scz = sc_data.field('RA_SCZ')
    dec_scz = sc_data.field('DEC_SCZ')

    sep = num.array([dist((ra0, dec0), (ra, dec))
                     for ra, dec in zip(ra_scz, dec_scz)])

    time = (sc_data.field('START') + sc_data.field('STOP'))/2. - t0

    display = RootPlot(time, sep, symbol='point',
                       xtitle='T - %.2f (s)' % t0,
                       ytitle='off-axis angle (deg)')
    return display

if __name__ == '__main__':
    ra0, dec0 = 333.55, -26.58
    t0 = 263607782.

    my_plot = offaxis_plot('FT2.fits', ra0, dec0, t0)
%
```

## 2. Aperture photometry light curve and exposure vs time

We will compute the light curve using diffuse class events and compute the exposure as a function of time at the ROI center for the given aperture using the gtexposure tool. This will also give us an estimate of the source light curve.

```
% gtselect
Input FT1 file[filtered_zmax105.fits]
Output FT1 file[filtered_diffuse.fits]
RA for new search center (degrees) (0:360) [INDEF]
Dec for new search center (degrees) (-90:90) [INDEF]
radius of new search region (degrees) (0:180) [INDEF]
start time (MET in s) (0:)  0
end time (MET in s) (0:)  0
lower energy limit (MeV) (0:) 
upper energy limit (MeV) (0:) 
maximum zenith angle value (degrees) (0:180) 
Done.
%
```

Entering zeros for both the start and end times tells gtselect not to make any time selection. In this case, the time range of the input data is the original 5 hr + 15 min interval we selected from the FSSC data server.

Here we create a counts light curve with 100 s binning.

```
% gtbin
This is gtbin version ScienceTools-v9r17p0-fssc-20100906
Type of output file (CCUBE|CMAP|LC|PHA1|PHA2) [PHA2] lc
Event data file name[filtered_pha2_events.fits] filtered_diffuse.fits
Output file name[grb090510_rmfit.pha2] lc_ARR.fits
Spacecraft data file name[FT2.fits]
Algorithm for defining time bins (FILE|LIN|SNR) [LIN]
Start value for first time bin in MET
Stop value for last time bin in MET 263625780
Width of linearly uniform time bins in seconds[0.106] 100
%
```

The gtexposure tool adds an EXPOSURE column to the light curve file, given the spacecraft data and specified response functions. The source location is determined from the DSS keywords in the lc_ARR.fits file. The exposure is computed over the energy range selected for the FT1 file, in this case 0.1-300 GeV. To compute the exposure over a broad energy band, the effective area must be weighted by the assumed spectrum of the source. We can either provide an xml model definition or just give a photon index.
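
Schematically, the spectral weighting amounts to the following (a sketch; Gamma is the photon index and [E_1, E_2] the energy range selected for the FT1 file):

```latex
\bar{A}_{\rm eff}(t) \;=\;
  \frac{\int_{E_1}^{E_2} A_{\rm eff}(E, \theta(t))\, E^{-\Gamma}\, dE}
       {\int_{E_1}^{E_2} E^{-\Gamma}\, dE}
```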

```
% gtexposure
Light curve file[] lc_ARR.fits
Spacecraft file[] FT2.fits
Response functions[P6_V3_DIFFUSE]
Source model XML file[none]
Photon index for spectral weighting[-2.1]
%
```