Usage#
The Goodman Spectroscopic Pipeline is designed to be simple to use; however, simple is not always best for everyone, so the Goodman Pipeline is also flexible.
- Getting Help
This manual is intended to be the preferred way to get help. However, the quickest option is using -h or --help.
redccd --help
will print the list of arguments along with a quick explanation and default values.
The same applies to redspec:
redspec --help
Observing Guidelines#
To process your data with the Goodman Spectroscopic Pipeline you need to follow some guidelines; we do not intend to tell you how to do your science. Here are some basic hints.
- Make sure you have a good observing plan as well as a good backup plan.
- Pay special attention to the calibration files needed for the data you are planning to obtain. For instance, you can process your spectroscopic data without bias frames because using the overscan region gives a good enough approximation, but imaging data does not have overscan, therefore you MUST obtain bias frames.
- Keep a detailed log of things that happened while you were observing: mistakes you made, repeated exposures, etc. An observing log is not an extraction of header information. Well, it can be, but then it will be useless.
- If you are unsure about the steps required to achieve your science goals, ask your PI, not the support scientist. The support scientist's job is to help you obtain good quality data, not to decide what data you need in order to achieve your scientific goals.
To use the pipeline you don't need any special file naming convention; in fact, all the information is obtained from the headers. As of version 1.2.0 you do need to follow a reference lamp naming convention, though. It applies not to the file name but to the value that goes into the OBJECT keyword. It is actually very simple:
| Lamp name | Convention |
|---|---|
| Argon | Ar |
| Neon | Ne |
| Copper | CuHeAr |
| Iron | FeHeAr |
| Mercury Argon | HgAr |
| Mercury Argon Neon | HgArNe |
This is to ensure the pipeline is able to recognize them. This will not be the case in future versions, but for now this is how it works.
Observing for Radial Velocity#
Radial velocity measurements are possible with the Goodman High Throughput Spectrograph, but you have to be careful. A very detailed description of the procedures and what you can expect is available here and here.
Please read it carefully so you don't run into any surprises when trying to reduce your data.
Prepare Data for Reduction#
If you did a good job preparing and executing the observation this should be an easy step. Either way, keep in mind the following steps:
- Remove all focus sequences.
- Remove all target acquisition or test frames.
- Using your observing log, remove all unwanted files.
- Make sure all data has the same gain (GAIN) and readout noise (RDNOISE).
- Make sure all data has the same Region Of Interest or ROI (ROI). A quick way to check the last two points is sketched after this list.
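For example, a minimal way to verify that all files share the same GAIN, RDNOISE and ROI values is to group them by these header keywords with astropy (the search pattern is illustrative; adjust it to your own files):
import glob
from collections import defaultdict
from astropy.io import fits
# Group raw files by their (GAIN, RDNOISE, ROI) header values to spot mismatches.
groups = defaultdict(list)
for filename in glob.glob('*.fits'):
    header = fits.getheader(filename)
    key = (header.get('GAIN'), header.get('RDNOISE'), header.get('ROI'))
    groups[key].append(filename)
for settings, files in groups.items():
    print(settings, len(files), 'file(s)')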
The pipeline does not modify the original files unless there are problems with FITS compliance; still, it is never a bad idea to keep copies of your original data in a safe place.
Updating Keywords#
Since version 1.3.0, if your data is older than August 6, 2019, you will need to change the following keywords:
- SLIT: Replace whitespace with underscores, remove " and make all letters uppercase. For instance, 0.45" long slit becomes 0.45_LONG_SLIT.
- GRATING: The grating's lines/mm go first and then the manufacturer. For instance, SYZY_400 becomes 400_SYZY.
- WAVMODE: Replace whitespace with underscores and capitalize all letters. For instance, 400 m1 becomes 400_M1.
- INSTRUME: Instead of the classical values 'Goodman Spectro' and 'Goodman Imaging', the AEON standard values ghts_red and ghts_blue are used for spectroscopy, and ghts_red_imager and ghts_blue_imager for imaging. This is an exception to the uppercase rule.
Note
General rules are: underscore is the only accepted separator, all letters must be uppercase, and any character that needs escaping must be removed.
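As an illustration, these keywords can be updated with astropy before running the pipeline. The file name and values below are examples only; use values that match your own configuration:
from astropy.io import fits
# Example file name; point this at your own data.
with fits.open('raw_science_frame.fits', mode='update') as hdul:
    header = hdul[0].header
    header['SLIT'] = '0.45_LONG_SLIT'   # was: 0.45" long slit
    header['GRATING'] = '400_SYZY'      # was: SYZY_400
    header['WAVMODE'] = '400_M1'        # was: 400 m1
    header['INSTRUME'] = 'ghts_red'     # AEON standard value for spectroscopy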
Processing your 2D images#
This is the first step in the reduction process; the main tasks are listed below.
- Create master bias
- Create master flats
- Apply corrections:
  - Overscan
  - Trim image
  - Detect slit and trim out non-illuminated areas
  - Bias correction
  - Normalized flat field correction
  - Cosmic ray rejection
Note
Some older Goodman HTS data has headers that are not FITS compliant. In such cases the headers are fixed, and that is the only modification done to raw data.
The 2D images are initially reduced using redccd. You can simply move to the directory where your raw data is located and do:
redccd
though you can modify the behavior in several ways.
Running redccd will create a directory called RED where it will put your reduced data. If you run it again it will prevent you from accidentally removing your already reduced data, unless you use --auto-clean; this tells the pipeline to delete the RED directory and start over.
redccd --auto-clean
A summary of the most important command line arguments is presented below, followed by an example invocation.
- --cosmic <method>: Lets you select the method for cosmic ray removal.
- --debug: Shows extended messages and plots of intermediate steps.
- --flat-normalize <method>: Lets you select the method for flat normalization.
- --flat-norm-order <order>: Sets the order of the model used for flat normalization. Default 15.
- --ignore-bias: Ignores the existence or lack of BIAS data.
- --ignore-flats: Ignores the existence or lack of FLAT data.
- --raw-path <path>: Sets the directory where the raw data is located; can be relative.
- --red-path <path>: Sets the directory where the reduced data will be stored. Default RED.
- --saturation <saturation>: Sets the saturation threshold as a percentage. There is a table with all the readout modes and the values at which saturation is reached; all pixels exceeding that value are counted. If the percentage is larger than the threshold defined with this argument, the flat is marked as saturated. The default value is 1 percent.
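For instance, a run that reads raw data from one folder, writes results to another and forces a specific cosmic ray removal method could look like this (paths and values are illustrative):
redccd --raw-path /path/to/raw --red-path /path/to/RED --cosmic dcr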
This is intended to work with both spectroscopic and imaging data, which is why the process is split in two.
Extracting the spectra#
After you are done Processing your 2D images it is time to extract the spectrum into a wavelength-calibrated 1D file.
The script is called redspec. The tasks performed are the following:
- Classifies data and matches OBJECT and COMP frames if they exist.
- Identifies targets.
- Extracts targets.
- Saves extracted targets to a 1D spectrum.
- Finds the wavelength solution automatically.
- Linearizes the data.
- Saves the wavelength calibrated file.
First you have to move into the RED directory; this is a precaution to avoid unintended deletion of your raw data. Then you can simply do:
redspec
and the pipeline should work its magic. Since this might not be the desired behavior for every user or science case, we have implemented a set of command line arguments, which are listed below along with an example.
- --data-path <path>: Folder where the data to be processed is located. Default is the current working directory.
- --proc-path <path>: Folder where the processed data will be stored. Default is the current working directory.
- --search-pattern <pattern>: Prefix for picking up files. Default is cfzst-. See File Prefixes.
- --output-prefix <prefix>: Prefix to be added to the calibrated spectrum. Default is w-. See File Prefixes.
- --extraction <method>: Selects one of the Extraction Methods. The only one implemented at the moment is fractional.
- --fit-targets-with {moffat, gaussian}: Model used to fit peaks in the spatial profile while searching for spectroscopic targets. Default moffat.
- --target-min-width <target min width>: Minimum profile width for fitting the spatial axis of spectroscopic targets. If fitting a Moffat it is reflected in the FWHM attribute of the fitted model, and if fitting a Gaussian in the STDDEV attribute.
- --target-max-width <target max width>: Maximum profile width for fitting the spatial axis of spectroscopic targets. If fitting a Moffat it is reflected in the FWHM attribute of the fitted model, and if fitting a Gaussian in the STDDEV attribute.
- --reference-files <path>: Folder where the reference lamps are located.
- --debug: Shows extended and more detailed messages.
- --debug-plot: Shows plots of intermediate steps.
- --max-targets <value>: Maximum number of targets to detect in a single image. Default is 3.
- --background-threshold <background threshold>: Multiplier for the background level used to discriminate usable targets. Default is 3 times the background level.
- --save-plots: Saves plots.
- --plot-results: Shows plots during execution.
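For example, a run that limits the number of detected targets and displays the results could look like this (values are illustrative):
redspec --max-targets 1 --plot-results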
For record purposes, the mathematical model used to define the wavelength solution is stored in the header even though the data has been linearized.
Description of custom keywords#
The pipeline adds several keywords to keep track of the process and, in general, to keep important information available. The following tables describe all the keywords added by the Goodman Pipeline, though not all of them are added to every image.
General Purpose Keywords#
These keywords are used for record-keeping purposes, except for GSP_FNAM, which is used to keep track of the current file name.
| Keyword | Purpose |
|---|---|
| GSP_VERS | Pipeline version. |
| GSP_ONAM | Original file name, first read. |
| GSP_PNAM | Parent file name. |
| GSP_FNAM | Current file name. |
| GSP_PATH | Path from where the file was read. |
| GSP_TECH | Observing technique. Imaging or Spectroscopy. |
| GSP_DATE | Date of processing. |
| GSP_OVER | Overscan region. |
| GSP_TRIM | Trim section. |
| GSP_SLIT | Slit trim section. From slit-illuminated area. |
| GSP_BIAS | Master bias file used. |
| GSP_FLAT | Master flat file used. |
| GSP_SCTR | Science target file name (for lamps only). |
| GSP_LAMP | Reference lamp used to obtain wavelength solution. |
| GSP_NORM | Master flat normalization method. |
| GSP_COSM | Cosmic ray rejection method. |
| GSP_TERR | RMS error of target trace. |
| GSP_EXTR | Extraction window at first column. |
| GSP_BKG1 | First background extraction zone. |
| GSP_BKG2 | Second background extraction zone. |
| GSP_WRMS | Wavelength solution RMS error. |
| GSP_WPOI | Number of points used to calculate RMS error. |
| GSP_WREJ | Number of points rejected from RMS error calculation. |
| GSP_DCRR | Reference paper for DCR software (cosmic ray rejection). |
Target Trace Model#
Non-linear wavelength solution#
Since writing non-linear wavelength solutions to the headers using the FITS standard (reference) is extremely complex and not necessarily well documented, we came up with the solution of simply describing the mathematical model from astropy's modeling. This allows keeping the data untouched while maintaining a reliable description of the wavelength solution.
The current implementation will work for writing any polynomial model. Reading is implemented only for Chebyshev1D, which is the default model.
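As a rough illustration of the idea, a wavelength array could be rebuilt from such a header description using astropy's modeling. The keyword names below are hypothetical placeholders, not the pipeline's actual keywords; check your own headers for the real names and model order:
import numpy as np
from astropy.io import fits
from astropy.modeling import models
# Hypothetical keyword names (HYP_*) used for illustration only.
header = fits.getheader('wecfzst_0001_science.fits')
degree = 3  # assume the model order is known or stored in the header
coefficients = [header.get('HYP_C{:03d}'.format(i), 0.0) for i in range(degree + 1)]
# Rebuild the Chebyshev1D model and evaluate it over the pixel axis.
wavelength_model = models.Chebyshev1D(degree=degree)
wavelength_model.parameters = coefficients
wavelengths = wavelength_model(np.arange(header['NAXIS1']))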
Combined Images#
Every image used in a combination is recorded in the header of the resulting image. The order is not important, but most likely the header of the first image will be used.
The combination is made using the combine() method with the following parameters:
- method='median'
- sigma_clip=True
- sigma_clip_low_thresh=1.0
- sigma_clip_high_thresh=1.0
At this moment these parameters are not user-configurable.
| Keyword | Purpose |
|---|---|
| GSP_IC01 | First image used to create combined. |
| GSP_IC02 | Second image used to create combined. |
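The combine() referred to here appears to be ccdproc's combine(); under that assumption, an equivalent standalone call would look roughly like the sketch below. The file list is illustrative, and the pipeline performs this step internally, so you normally do not need to run it yourself:
from ccdproc import combine
# Illustrative list of flats, combined with the parameters listed above.
flat_list = ['flat_001.fits', 'flat_002.fits', 'flat_003.fits']
master_flat = combine(flat_list,
                      method='median',
                      sigma_clip=True,
                      sigma_clip_low_thresh=1.0,
                      sigma_clip_high_thresh=1.0,
                      unit='adu')
master_flat.write('master_flat.fits', overwrite=True)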
Detected lines#
The reference lamp library keeps the lamps non-linearized and also records each detected line's pixel value and its equivalent in angstroms. The following table shows the keywords for a three-line lamp.
| Keyword | Purpose |
|---|---|
| GSP_P001 | Pixel value for the first line detected. |
| GSP_P002 | Pixel value for the second line detected. |
| GSP_P003 | Pixel value for the third line detected. |
| GSP_A001 | Angstrom value for the first line detected. |
| GSP_A002 | Angstrom value for the second line detected. |
| GSP_A003 | Angstrom value for the third line detected. |
Cosmic Ray Removal#
Warning
The parameters for either cosmic ray removal method are not fully understood nor fully tuned, but they work for most common instrument configurations. If your extracted spectrum shows strange features, especially if you use a custom mode, the most likely culprits are the parameters of the method you chose. Please let us know.
The argument --cosmic <method> has four options but only two real methods.
- default (default): Different methods work differently for different binnings, so if <method> is set to default the pipeline will decide as follows: dcr for binning 1x1, and lacosmic for binnings 2x2 and 3x3, though binning 3x3 has not been tested.
- dcr: As noted above, this method works better for binning 1x1. More information can be found on Installing DCR. The disadvantage of this method is that it is a program written in C, and it requires writing the file to disk, processing it and reading it back. Still, it is faster than lacosmic. The parameters for running dcr are written in a file called dcr.par; a lookup table and a file generator have been implemented, but you can pass custom parameters by placing a dcr.par file in a different directory and pointing to it with --dcr-par-file <path>.
- lacosmic: This is the preferred method for files with binning 2x2 and 3x3. It is Astroscrappy's implementation and is run with the default parameters. Future versions might include some parameter adjustment.
- none: Skips the cosmic ray removal process.
Asymmetric binnings have not been tested; the pipeline only takes the dispersion axis into consideration to decide. This does not mean that the spatial binning does not impact the performance of either method, we just don't know yet.
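For example, to override the automatic choice and force one of the methods listed above:
redccd --cosmic lacosmic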
Note
The prefix c is added to all comparison lamps, even though they are not affected by cosmic rays.
Flat Normalization#
There are three possible <method>s for normalizing master flats. For the methods that use a model, the default model order is 15; it can be set using --flat-norm-order <order>. A conceptual sketch of the simple method is shown after the list.
- mean: Calculates the mean of the image using numpy's mean() and divides the image by it.
- simple (default): Collapses the master flat along the spatial direction, fits a Chebyshev1D model of order 15, and divides the full image by this fitted model.
- full: Fits a Chebyshev1D model to every line/column (along the dispersion axis) and divides it by the fitted model. This method takes too long to process and has been left in the code for experimentation purposes only.
Extraction Methods#
The argument --extraction <method> has two options, but only fractional is implemented.
- fractional: Fractional pixel extraction differs from a simple, rough extraction in how it deals with the edges of the extraction region. See goodman_pipeline.core.core.extract_fractional_pixel(). A conceptual sketch is shown after this list.
- optimal: Unfortunately, this method has not been implemented yet.
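To illustrate the idea only (this is not the pipeline's implementation), summing a column over a window with fractional pixel limits could look like this:
import numpy as np

def fractional_window_sum(column, low, high):
    # Whole pixels inside the window contribute fully; the pixels at the
    # edges contribute only the fraction that falls inside the window.
    low_int, high_int = int(np.ceil(low)), int(np.floor(high))
    total = column[low_int:high_int].sum()          # fully covered pixels
    total += column[low_int - 1] * (low_int - low)  # fractional lower edge
    total += column[high_int] * (high - high_int)   # fractional upper edge
    return total

# Example: a 10.5-pixel-wide window centered near row 400 of a fake column.
column = np.random.random(1000)
flux = fractional_window_sum(column, 394.75, 405.25)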
File Prefixes#
Note
Overscan correction is no longer performed by default, since it caused a double bias level subtraction. Fixed since release V1.3.3 (19-04-2021).
There are several ways one could do this, but we chose adding prefixes to the file name because it is easy to add and also easy to filter in a terminal, for instance:
ls cfzst*fits
or in python
import glob
file_list = glob.glob('cfzst*fits')
So what do all those letters mean? Here is a table to explain it.
| Letter | Meaning |
|---|---|
| o | Overscan correction applied |
| t | Trim correction applied |
| s | Slit trim correction applied |
| z | Bias correction applied |
| f | Flat correction applied |
| c | Cosmic rays removed |
| e | Spectrum extracted to 1D |
| w | 1D spectrum wavelength calibrated |
So, for an original file named file.fits:
eczst_file.fits
means the spectrum has been extracted to a 1D file but the file has not been flat fielded (f missing).
Ideally after running redccd
the file should be named:
cfzst_file.fits
And after running redspec
:
wecfzst_file.fits
File Suffixes#
After extraction, suffixes may appear in newly created files. There are two scenarios where this can happen:
- More than one spectroscopic target to extract: *target_X.
- More than one comparison lamp: *ws_Y.
- Both of the above.
Let’s consider the following scenario: We start with 3 reduced files.
| File Name | Obstype | Comment |
|---|---|---|
| sci_file.fits | OBJECT | Science file with two spectra. |
| lamp_001.fits | COMP | Reference lamp valid for sci_file.fits |
| lamp_002.fits | COMP | Another valid reference lamp |
Assuming the two targets in sci_file.fits are extracted and are located approximately at positions 400 and 600 (pixels along the spatial axis), after extraction we'll end up with:
esci_file_target_1.fits
esci_file_target_2.fits
elamp_001_390-410.fits
elamp_001_590-610.fits
elamp_002_390-410.fits
elamp_002_590-610.fits
The default prefix for extraction is e
and does not have an underscore to separate it from the
file name.
After wavelength calibration, since there are two suitable lamps and the pipeline does not combine solutions, it will save two wavelength calibrated files for each target, each one solved with the respective lamp. Then:
wesci_file_target_1_ws_1.fits
wesci_file_target_1_ws_2.fits
wesci_file_target_2_ws_1.fits
wesci_file_target_2_ws_2.fits
welamp_001_390-410.fits
welamp_001_590-610.fits
welamp_002_390-410.fits
welamp_002_590-610.fits
Common issues#
No comparison lamps were found#
The latest version may introduce changes to wavelength solutions. To view the current list of available modes and lamps, please refer to the GitHub repository.
If your lamp, filter, or mode is not included in the repository mentioned
above, redspec
will not work as expected. This is particularly common
with custom modes of the Goodman Spectroscopic Pipeline. You may encounter the following
error after running redspec
:
$ redspec
...
[15:02:49][W]: No comparison lamps were provided for file ecfzst_0001_science.fits
This error occurs because the Goodman Spectroscopic Pipeline relies on specific keywords to extract spectra, such as:
| Keyword | Purpose |
|---|---|
| LAMP_HGA | Indicates if HgAr lamp is used. |
| LAMP_NE | Indicates if Ne lamp is used. |
| LAMP_AR | Indicates if Ar lamp is used. |
| LAMP_FE | Indicates if Fe lamp is used. |
| LAMP_CU | Indicates if Cu lamp is used. |
| WAVMODE | Slit and mode configuration. |
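To diagnose this, you can inspect those keywords directly in one of your files, for example with astropy (the file name is illustrative):
from astropy.io import fits
# Illustrative file name; point this at one of your own lamp or science files.
header = fits.getheader('ecfzst_0001_science.fits')
for keyword in ['LAMP_HGA', 'LAMP_NE', 'LAMP_AR', 'LAMP_FE', 'LAMP_CU', 'WAVMODE']:
    print(keyword, '=', header.get(keyword, 'not present'))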
Multiple spectra output#
If you take multiple ARC images during your observing run, they will be linked with your science data. This means that if you capture several lamp files, they will be processed alongside the science images, potentially resulting in multiple outputs of the same spectrum.