This page describes the LCC server's text light curve format. The files generated by the server are in a separator-delimited, CSV-like format with a header at the top of the file, marked out by comment characters at the beginning of each line. By default, the separator is the comma character (,) and the comment character is the octothorpe (#).

The metadata and columns included in the light curve files for this project's LCC server instance are described in the lcformat docs page.

Light curve format

The file contains the following sections.

The format descriptor at the top of the file
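As the example parser further down this page shows, the descriptor occupies the first three lines of the file: the format name, the comment character, and the separator. For a file using the defaults, this would look something like the sketch below (the exact format name string is assumed here, based on the parser function's name):

```
LCC-CSV-V1
#
,
```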


Object and light curve metadata in JSON format

# {
#   "objectid": {
#     "val": "HAT-198-0835489",
#     "desc": "object ID"
#   },
#   "ra": {
#     "val": 285.93812,
#     "desc": "RA [deg]"
#   },
#   "decl": {
#     "val": 34.7304,
#     "desc": "Dec [deg]"
#   },
# ... more metadata keys as specified by input light curve format...
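Once the comment characters are stripped from these lines and the JSON object is closed out, the standard json module parses this block directly. A minimal sketch using the sample values above (with the descriptions trimmed for brevity):

```python
import json

# the metadata block from the header, with the comment characters
# stripped from each line and the JSON object closed out
metadata_text = '''
{
  "objectid": {"val": "HAT-198-0835489", "desc": "object ID"},
  "ra": {"val": 285.93812, "desc": "RA [deg]"},
  "decl": {"val": 34.7304, "desc": "Dec [deg]"}
}
'''

metadata = json.loads(metadata_text)

# each key maps to a dict holding the value and its description
objectinfo = {key: metadata[key]['val'] for key in metadata}
print(objectinfo['objectid'], objectinfo['ra'], objectinfo['decl'])
```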

Column descriptions in JSON format

# {
#   "rjd": {
#     "colnum": 0,
#     "dtype": "f8",
#     "desc": "time of observation in Reduced Julian date (JD = 2400000.0 + RJD)"
#   },
#   "bjd": {
#     "colnum": 1,
#     "dtype": "f8",
#     "desc": "time of observation in Baryocentric Julian date (BJD_TDB)"
#   },
#   "net": {
#     "colnum": 2,
#     "dtype": "i8",
#     "desc": "network of telescopes observing this target"
#   },
# ... more column descriptions as defined in the input light curve format...
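The colnum and dtype entries in this block are enough to build the names and dtype arguments for numpy.genfromtxt. A short sketch using the three example columns above (descriptions trimmed):

```python
import json

# the column definitions block, with comment characters stripped
# and the JSON object closed out
columns_text = '''
{
  "rjd": {"colnum": 0, "dtype": "f8", "desc": "time of observation"},
  "bjd": {"colnum": 1, "dtype": "f8", "desc": "BJD_TDB time"},
  "net": {"colnum": 2, "dtype": "i8", "desc": "observing network"}
}
'''
columns = json.loads(columns_text)

# order the column names by their colnum so names and dtypes line up
colnames = sorted(columns, key=lambda c: columns[c]['colnum'])
coldtypes = ','.join(columns[c]['dtype'] for c in colnames)
print(colnames, coldtypes)
```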

The actual light curve columns

Note that missing numeric values are denoted by nan and missing string values are left blank.

... more light curve lines ...

Reading the light curves

Since the light curve files are text based, they should be readable by most editors and programming languages. By default, the files come in gzipped format and the file names are of the form:

<objectid>-csvlc.gz
One can use standard UNIX tools like zless, e.g.:

$ zless /path/to/<objectid>-csvlc.gz
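Other standard tools work too; for example, zcat and grep can strip the commented header and show only the data lines. A sketch on a made-up file (the /tmp path and its contents are purely for illustration):

```shell
# make a tiny gzipped file standing in for a real <objectid>-csvlc.gz
printf '# header line 1\n# header line 2\n57000.1,57000.2,1\n57001.1,57001.2,1\n' \
    | gzip > /tmp/demo-csvlc.gz

# drop the commented header and show only the data lines
zcat /tmp/demo-csvlc.gz | grep -v '^#'
```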

Using numpy

Using Python and numpy also works well, given the information on the column number and dtype in the file headers, e.g.:

import numpy as np
import gzip

# the column names and dtypes come from the JSON header of the file
with gzip.open('/path/to/<objectid>-csvlc.gz', 'rb') as infd:
    recarr = np.genfromtxt(infd,
                           names=['rjd', 'bjd', 'net'],
                           dtype='f8,f8,i8',
                           comments='#',
                           delimiter=',')
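For a self-contained illustration of what genfromtxt returns here, the following sketch parses a few made-up comma-separated lines (including a missing value written as nan) instead of a real file:

```python
import io
import numpy as np

# a stand-in for the data section of a light curve file
lcdata = io.StringIO(
    '# rjd,bjd,net\n'
    '57000.1,57000.15,1\n'
    'nan,57001.15,1\n'
)

recarr = np.genfromtxt(lcdata,
                       names=['rjd', 'bjd', 'net'],
                       dtype='f8,f8,i8',
                       comments='#',
                       delimiter=',')

# columns are accessible by name; nan marks the missing rjd value
print(recarr['rjd'], recarr['bjd'], recarr['net'])
```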

The astrobase hatlc module

CSV light curve reader functions are implemented in the astrobase Python package, which also contains other useful light curve handling tools. The module to use from that package is astrobase.hatsurveys.hatlc. You can use it as a standalone module by downloading hatlc.py to somewhere in your PYTHONPATH, or install the astrobase package using pip.

To read the light curve using hatlc:

# if you have the module in your current directory or PYTHONPATH
import hatlc

# or if you're using astrobase
from astrobase.hatsurveys import hatlc

# read in a light curve file to a Python dict
lcd = hatlc.read_csvlc('/path/to/<objectid>-csvlc.gz')

# get times, mags, and errs from columns and read into numpy arrays
times, mags, errs = lcd['rjd'], lcd['atf_000'], lcd['aie_000']

# get a description of the object and light curve metadata
hatlc.describe(lcd)
You can also use hatlc.py from the command line. If you downloaded just the module itself, make it executable with e.g. chmod u+x hatlc.py and then use ./hatlc.py. If you installed the astrobase package, the script will already be in your $PATH as hatlc.

$ hatlc --help

usage: hatlc [-h] [--describe] hatlcfile

read a HAT LC of any format and output to stdout

positional arguments:
  hatlcfile   path to the light curve you want to read and pipe to stdout

optional arguments:
  -h, --help  show this help message and exit
  --describe  don't dump the columns, show only object info and LC metadata

Example Python code

The following code automatically parses the header and reads the light curve into a Python dict. This is taken directly from the hatlc module:

import gzip
import json
import os

import numpy as np

## parsing the header
def parse_csv_header_lcc_csv_v1(headerlines):
    '''
    This parses the header of the LCC CSV V1 LC format.
    '''

    # the first three lines indicate the format name, comment char, separator
    commentchar = headerlines[1]
    separator = headerlines[2]

    # strip the comment character and padding from the remaining header lines
    headerlines = [x.lstrip('%s ' % commentchar) for x in headerlines[3:]]

    # next, find the indices of the various LC sections
    metadatastart = headerlines.index('OBJECT METADATA')
    columnstart = headerlines.index('COLUMN DEFINITIONS')
    lcstart = headerlines.index('LIGHTCURVE')

    # rejoin and parse the metadata and column-definition JSON blocks
    metadata = ' '.join(headerlines[metadatastart+1:columnstart-1])
    columns = ' '.join(headerlines[columnstart+1:lcstart-1])
    metadata = json.loads(metadata)
    columns = json.loads(columns)

    return metadata, columns, separator

## reading the light curve
def read_lcc_csvlc(lcfile):
    '''
    This reads a LCC CSVLC.
    '''

    # read in the file, decompressing it if necessary
    if '.gz' in os.path.basename(lcfile):
        infd = gzip.open(lcfile, 'rb')
    else:
        infd = open(lcfile, 'rb')

    lctext ='utf-8')
    infd.close()

    lctextlines = lctext.split('\n')

    # the comment character is on the second line of the file
    commentchar = lctextlines[1]

    lcstart = lctextlines.index('%s LIGHTCURVE' % commentchar)
    headerlines = lctextlines[:lcstart+1]
    lclines = lctextlines[lcstart+1:]

    metadata, columns, separator = parse_csv_header_lcc_csv_v1(headerlines)

    # break out the objectid and objectinfo
    objectid = metadata['objectid']['val']
    objectinfo = {key:metadata[key]['val'] for key in metadata}

    # figure out the column names and dtypes
    colnames = []
    coldtypes = []

    # generate the args for np.genfromtxt, with the columns in colnum order
    for k in sorted(columns, key=lambda c: columns[c]['colnum']):
        coldef = columns[k]
        colnames.append(k)
        coldtypes.append(coldef['dtype'])

    coldtypes = ','.join(coldtypes)

    # read in the LC using the names, dtypes, and separator from the header
    recarr = np.genfromtxt(lclines,
                           names=colnames,
                           dtype=coldtypes,
                           delimiter=separator)

    lcdict = {x:recarr[x] for x in colnames}
    lcdict['objectid'] = objectid
    lcdict['objectinfo'] = objectinfo
    lcdict['columns'] = colnames

    return lcdict

Use it like so:

lcdict = read_lcc_csvlc('/path/to/<objectid>-csvlc.gz')