
astrobase's Issues

Can't use checkplotserver with only 1 period

I've generated some checkplots where I only have one period per method I want to display. When I try to open these checkplots with checkplotserver I get the following error:

Uncaught exception GET /cp/dGVtcDIvNjA0NTQ2NTYzOTM2MjkzMTU4NF8wLnBrbA== (127.0.0.1)
HTTPServerRequest(protocol='http', host='localhost:5225', method='GET', uri='/cp/dGVtcDIvNjA0NTQ2NTYzOTM2MjkzMTU4NF8wLnBrbA==', version='HTTP/1.1', remote_ip='127.0.0.1')
Traceback (most recent call last):
File "/home/jwallace/py36-venv/lib/python3.6/site-packages/tornado/web.py", line 1543, in _execute
result = yield result
File "/home/jwallace/py36-venv/lib/python3.6/site-packages/tornado/gen.py", line 1099, in run
value = future.result()
File "/home/jwallace/py36-venv/lib/python3.6/site-packages/tornado/gen.py", line 1113, in run
yielded = self.gen.send(value)
File "/home/jwallace/astrobase_waqas/astrobase/astrobase/cpserver/checkplotserver_handlers.py", line 769, in get
phasedlc1plot = cpdict[key][1]['plot']
KeyError: 1

Looking into the source code, I believe the error is related to the fact that I only have one period plotted per method.

I tried to hack my way around needing a second and third period but wasn't able to do so successfully.

comments not saving to JSON/CSV from checkplotserver

The comments I record in the checkplot server are not getting saved to the JSON checkplotlist, nor are they present in the JSON or CSV file I can download directly from the checkplot server. The comments do get saved to the pickle files, though. I don't know if this is the intended behavior, so I wanted to bring it up in case it isn't.

Discrepancy in epoch between `fourier_fit_magseries` and `fourier_sinusoidal_func`

`fourier_fit_magseries()` returns an epoch that corresponds to the minimum brightness in the light curve (here). `fourier_sinusoidal_func()` makes no such assumption for the epoch (here, and line 60 where the phase gets used). In my application of the code, where I'm fitting a Fourier series to an RR Lyrae signal and then inserting an attenuated version of that signal into some other light curves, this leads to the inserted signal not having the same epoch as the original signal. What should I do to fix this?

Keep getting " 'str' object has no attribute 'isdecimal' " on line 447 of checkplotlist.py

I am trying to generate a checkplotlist but have so far not succeeded. I keep getting the error

File "<path_to>/checkplotlist.py", line 447, in main
sortkeys = [(int(x) if x.isdecimal() else x) for x in sortkeys]
AttributeError: 'str' object has no attribute 'isdecimal'

I have tried the following calls to checkplotlist:

checkplotlist pkl --search '6045*_*' --sortby 'maxspnr|desc' --outprefix 'checkplot_server_output/' checkplot_pickles_maxspnr/

checkplotlist pkl --search '6045*_*' --sortby 'objectid|desc' --outprefix 'checkplot_server_output/' checkplot_pickles_maxspnr/

checkplotlist pkl --search '6045*_*' --outprefix 'checkplot_server_output/' checkplot_pickles_maxspnr/

All fail with this same error. (A note: 'maxspnr' is a new dictionary key I added to the checkplot dicts for sorting purposes. I tried files both with and without this new key; neither worked.)
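One possible cause (an assumption on my part, not confirmed above): under Python 2, a plain str has no isdecimal() method (only unicode does), so running checkplotlist under a Python 2 interpreter would raise exactly this AttributeError at that line. A version-agnostic sketch of the sort-key coercion might look like:

def coerce_sortkeys(sortkeys):
    # illustrative only: .isdigit() exists on str in both Python 2 and Python 3,
    # while .isdecimal() is missing from Python 2 str (it is unicode-only there)
    return [int(x) if x.isdigit() else x for x in sortkeys]

print(coerce_sortkeys(['maxspnr', 'desc', '3']))  # ['maxspnr', 'desc', 3]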

plotbase.make_checkplot should also output to a JSON file

Said JSON file will contain:

  • objectinfo
  • period-search info (LSP peaks, best periods, etc.)
  • base64 or binary representations of the LSP and LC plots

This is so a future version of checkplot-viewer can read these JSON files and generate an interactive webpage instead of just showing the checkplot PNG. The webpage should have options to mark objects as interesting, etc. and output these selections and extra metadata into another JSON file (or perhaps just use the browser localstorage).
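A minimal sketch of what such an export could look like (the function name and dict keys below are illustrative, not an existing astrobase API):

import base64
import json

def write_checkplot_json(objectinfo, lspinfo, pngpath, outpath):
    # illustrative only: bundle object info, period-search results, and a
    # base64-encoded plot so a web viewer can render everything client-side
    with open(pngpath, 'rb') as infd:
        png_b64 = base64.b64encode(infd.read()).decode('ascii')
    out = {
        'objectinfo': objectinfo,
        'periodsearch': lspinfo,   # LSP peaks, best periods, etc.
        'checkplotpng': png_b64,   # embedded checkplot image
    }
    with open(outpath, 'w') as outfd:
        json.dump(out, outfd)
    return outpath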

some improvements for checkplotserver

Older stuff:

  • add sync back to checkplot-filelist.json
  • add load of already reviewed objects from checkplot-filelist.json
  • fix the annoying behavior of jumping to the phased light curve tile when it is selected; check whether the current tile is already in view and, if it is, don't move the viewport
  • add controls for redoing each type of period-finding
  • add controls for picking peaks in the periodograms to plot phased light curves for
  • add controls for calculation of variable star features (from varbase)
  • add controls for doing whitening of periodograms and masking periodic signals
  • fix bugs with periods/epochs obtained from tabs other than the first one not being registered
  • add in the ACF and AoVHM period-finders to the period-finding tab
  • enforce the new readonly rules with no tools available
  • add the fancy new neighbor stuff now in checkplot_pickles to a new neighbors tab
  • actually document the neighbor stuff in checkplot.py

High priority (ordered by increasing complexity):

  • readonly mode should disable all the textboxes and also the phased LC tile selection
  • export to PNG should not work in readonly mode?
  • put the neighbor color and mag diff info into the neighbor display
  • checkplots should use GAIA proper motions if the provided LC objectinfo doesn't have any
  • add ability to download exported checkplot PNG directly from server instead of just pointing to the file on the server (this might be as simple as returning a base64 PNG after the POST to /cp/tools that triggers the PNG export action, then turning this PNG into a downloadable file using the same trick as for the exported list CSV/JSON)
  • fix the annoying saving forever animation that happens at the end of a checkplot list traverse
  • check if the finder chart reticle is correctly done
  • clean up exported CSV to remove empty columns
  • add a cp2png script to convert directories/lists of checkplot pickles to PNGs (useful after they've all been reviewed)
  • add ability to flag objects for further review, add stuff in sidebar for this, special notation in the checkplot JSON as well as the output CSV/JSON of reviewed objects
  • add a place on the overview tab somewhere for extra object information that'd go into a checkplot pickle's 'extrainfo' key as a dict (standardize this format)
  • actually enable saving all changes in temp checkplots back to original checkplot and loading from temp checkplots all the results from run period-finders and LC tools
  • add in the 'finalized' checkplot functionality
  • add some sort of object comment and metadata history thing to checkplot pickles
  • turn the periodogram plot in the period-search tab into a canvas and enable clicking on any point to set the peak to plot a phased LC for
  • implement downloadable filtered times, mags, errs as CSV that take into account the currently active LC filters on the period-search tab
  • implement LC collection overview as a pullout tab or something; this will put the finder chart and (now interactive) overlay from make_lclist into the UI; this would involve giving checkplotlist a --lclist kwarg or something to point to the generated LC list, which the checkplotserver would pick up and use to find and load the finder base plot and xy positions of the objects in the LC collection (we should then store the finder as base64 inside the lclist pickle)
  • for the GAIA neighbors, get their proper motion ra/decl vectors from the backend if available, add checkboxes to the GAIA neighbors table, and for each checked object, add an overlay box for it and a proper motion vector to the finder. This will make it easier to see if neighbors are perhaps proper motion companions.

Later (ordered by increasing complexity):

  • add an object/variability tag cloud to the sidebar reviewed objects list
  • add back trapezoid and invgauss EB fit to the lcfit tab
  • add back starfeatures and periodicfeatures to the variability tab
  • add some sort of zoomed viewer for navigating phased LC tiles (this might be a carousel?)
  • think about moving all tools into one tab panel so we don't need to duplicate phased LC displays on every panel
  • add queuing and locking to saving and loading checkplots, with appropriate UI progress notification, so multiple people can work on a checkplot project using the same server
  • update CSS to Bootstrap v4 final

Incorrect comment?

returndict = {'wtimes':wtimes, # these are in phase order

The comment on this line states that these values are in phase order; looking through the code and at the output, though, I believe they are in time order. The data were in phase order earlier on, but they get reordered along the way.

bls_parallel_pfind() should note when both autofreq=True and stepsize are set

In bls_parallel_pfind(), if autofreq=True, the function sets its own value for stepsize and overwrites any value the user may have passed. There should be a message (maybe not an error) noting when the user has set stepsize while autofreq is True. (Perhaps the defaults should be autofreq=True and stepsize=None so that the function can tell when stepsize has actually been set.)
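A sketch of the suggested defaults and warning (illustrative only, not the current kbls code):

import logging
LOGGER = logging.getLogger(__name__)

def resolve_stepsize(autofreq=True, stepsize=None, auto_stepsize=1.0e-4):
    # illustrative only: with stepsize=None as the default, the function can
    # tell when the caller actually passed a value and warn before ignoring it
    if autofreq:
        if stepsize is not None:
            LOGGER.warning('autofreq=True: ignoring the provided stepsize=%s '
                           'and using the automatic value %s',
                           stepsize, auto_stepsize)
        return auto_stepsize
    return stepsize if stepsize is not None else auto_stepsize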

periodbase.bls_parallel_pfind: save `nphasebins`

The blsresult key of the resultdict being returned out to whatever called bls_parallel_pfind is currently a list of dictionaries returned by eebls.f, of the form

{'bestperiod': 153.33159309929854,
 'bestpower': 0.0011432965623006885,
 'power': array([ 0.00082555,  0.00086774,  0.00084304, ...,  0.00061361,
         0.00057821,  0.00059612]),
 'transdepth': 0.015760732761291304,
 'transduration': 0.00529015710163514,
 'transegressbin': 260,
 'transingressbin': 254}

These latter two keys can only be converted to epoch times if the number of bins in the folded timeseries at the bestperiod is known.

This is a one-line commit to just save nphasebins to resultdict. (But the effort of wrangling my git fork into working order to do it is not currently worth it).
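For reference, a rough sketch of how a saved nphasebins might then be used to convert the ingress/egress bin indices into a phase and duration (the names and conventions here are my assumptions about the eebls.f output, not verified code):

def transit_bins_to_phase(blsresult, nphasebins):
    # illustrative only: the bins index the phase-folded series at bestperiod
    ingress_phase = blsresult['transingressbin'] / float(nphasebins)
    egress_phase = blsresult['transegressbin'] / float(nphasebins)
    duration_phase = (egress_phase - ingress_phase) % 1.0  # handle wrap-around
    duration_days = duration_phase * blsresult['bestperiod']
    return ingress_phase, egress_phase, duration_days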

port imageutils fits reading functions?

[suggestion]

pipe-trex/imageutils.py has some FITS read functions that I use all the time, mainly just to write less code.

This issue suggests moving either (a) the entire imageutils.py function collection, or (b) just the following functions, to a new astrobase/imageutils.py file:

read_fits
read_fits_header
get_header_keyword
get_header_keyword_list
get_data_keyword
get_data_keyword_list (not implemented, but I can write it).

For TESS/Kepler/K2 processing, this would be helpful! Let me know if you think this would be useful, and I can write it up.

sigclip_magseries assumes inputs are np.array()'s

sigclip_magseries assumes that the times, mags, and errs inputs are numpy arrays right from the start, e.g. with the ftimes, fmags, ferrs = times[find], mags[find], errs[find] indexing. Maybe it should check that these actually are numpy arrays? I'm not sure what the implications are for the rest of astrobase of allowing non-array inputs.
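A minimal sketch of the kind of coercion that could go at the top of sigclip_magseries (just an illustration of the suggestion, not the actual implementation):

import numpy as np

def coerce_magseries_inputs(times, mags, errs=None):
    # illustrative only: guarantee that boolean/fancy indexing like
    # times[find] works even if the caller passed plain Python lists
    times = np.asarray(times, dtype=np.float64)
    mags = np.asarray(mags, dtype=np.float64)
    errs = None if errs is None else np.asarray(errs, dtype=np.float64)
    return times, mags, errs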

Bug running BLS depending on number of workers

Running periodbase.bls_parallel_pfind, I found that one of my light curves fails with the following error:

[2018-05-07T20:32:43Z - INFO] min P: 20.378488547282885, max P: 77.59986466999999, nfreq: 562, minfreq: 0.012886620411679645, maxfreq: 0.04907135274923677
[2018-05-07T20:32:43Z - INFO] autofreq = True: using AUTOMATIC values for freq stepsize: 6.443310205839823e-05, nphasebins: 100, min transit duration: 0.02, max transit duration: 0.55
Traceback (most recent call last):
File "generalvar_astrobase_search.py", line 234, in
main()
File "generalvar_astrobase_search.py", line 196, in main
bls = periodbase.bls_parallel_pfind(times,mags,errs,startp=pmin,endp=maximum_obs_length[obj],mintransitduration=.02,maxtransitduration=.55,nbestpeaks=3,sigclip=sigclipvalue,nworkers=njobs,verbose=True)
File "/home/jwallace/astrobase_waqas/astrobase/astrobase/periodbase/kbls.py", line 600, in bls_parallel_pfind
chunk_minfreqs = [frequencies[xchunksize] for x in range(nworkers)]
File "/home/jwallace/astrobase_waqas/astrobase/astrobase/periodbase/kbls.py", line 600, in
chunk_minfreqs = [frequencies[x
chunksize] for x in range(nworkers)]
IndexError: index 567 is out of bounds for axis 0 with size 562

However, this seems to depend on the value of nworkers. The above is with nworkers=28; below is with nworkers=20, which ran successfully:

[2018-05-07T20:32:26Z - INFO] min P: 20.378488547282885, max P: 77.59986466999999, nfreq: 562, minfreq: 0.012886620411679645, maxfreq: 0.04907135274923677
[2018-05-07T20:32:26Z - INFO] autofreq = True: using AUTOMATIC values for freq stepsize: 6.443310205839823e-05, nphasebins: 100, min transit duration: 0.02, max transit duration: 0.55
[2018-05-07T20:32:26Z - INFO] worker 1: minfreq = 0.013, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 2: minfreq = 0.015, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 3: minfreq = 0.017, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 4: minfreq = 0.018, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 5: minfreq = 0.020, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 6: minfreq = 0.022, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 7: minfreq = 0.024, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 8: minfreq = 0.026, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 9: minfreq = 0.028, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 10: minfreq = 0.030, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 11: minfreq = 0.032, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 12: minfreq = 0.033, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 13: minfreq = 0.035, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 14: minfreq = 0.037, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 15: minfreq = 0.039, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 16: minfreq = 0.041, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 17: minfreq = 0.043, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 18: minfreq = 0.045, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 19: minfreq = 0.047, nfreqs = 29
[2018-05-07T20:32:26Z - INFO] worker 20: minfreq = 0.048, nfreqs = 11
[2018-05-07T20:32:26Z - INFO] running...
[2018-05-07T20:32:26Z - INFO] auto-cadence for mag series: 0.02043
[2018-05-07T20:32:26Z - WRN!] fit bestperiod = 1.65311 may be an alias, naively calculated bestperiod is = 2.69699

It also seems to depend on the value of min P that I give. The above value, ~20.4 days, is fairly large for my run (the light curve only covers ~80 days total). If I divide min P by 10, I get a successful run even with 28 workers. Same with dividing min P by 2.

I'll send along the light curve for your reference. For the failed run, here is my full call:

bls = periodbase.bls_parallel_pfind(times,mags,errs,startp=20.378488547282885,endp=77.59986466999999,mintransitduration=.02,maxtransitduration=.55,nbestpeaks=3,sigclip=5.,nworkers=28,verbose=True)
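For context, the IndexError looks like the last workers' chunk start indices (x*chunksize) running past the end of the frequency array when nfreq is not a clean multiple of nworkers. A hedged sketch of one chunking scheme that cannot index out of range (not the actual kbls code):

import numpy as np

def chunk_frequencies(frequencies, nworkers):
    # illustrative only: np.array_split never produces out-of-range chunks,
    # even when len(frequencies) < nworkers (some chunks just come back empty)
    chunks = [c for c in np.array_split(frequencies, nworkers) if c.size > 0]
    chunk_minfreqs = [c[0] for c in chunks]
    chunk_nfreqs = [c.size for c in chunks]
    return chunk_minfreqs, chunk_nfreqs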

move notebooks to an astrobase-notebooks repository

Update them too, while we're at it. Stuff to document:

  • all the new stuff in checkplotlist: filtering, sorting, etc.
  • using lcproc to drive processing of large LC collections
  • all the new stuff in the checkplotserver UI

Will need some example data for these. Host these somewhere.

epochs in checkplot_png

It seems that the function checkplot_png() calculates the epoch only for the best period and then applies that epoch to the rest of the plotted periods, rather than calculating a separate epoch for each period. This is in contrast to checkplot_pickle(), which calculates a separate epoch for each period.

It would perhaps also be useful if checkplot_png() and checkplot_pickle() could take a list of epochs, one for each period, in case one wants to feed the epochs for all the periods directly to checkplot.py. Or perhaps an 'epoch' keyword could be read from the lsp dict, or something along those lines.

checkplotserver creating new checkplot pickles

I have some checkplot pickles that were created in a script running Python 2. I viewed them with a checkplotserver, but every time I went back to look at them my previous comments and classifications were not there. I discovered that new checkplot pickles were being created using the object IDs. I do not have this problem with my checkplot pickles that were made using Python 3.

sigma clipping whole lc vs. bins

Using prewhiten_magseries() on some RR Lyrae light curves, I found that the call to sigclip_magseries() with the default setting (sigclip=3.0) was removing some of the LC points at the peak. See attached image. I can and will set sigclip to a larger value than 3.0 to avoid this, but it may be worth considering different versions of sigma clipping (sigma clipping in bins, for example, though the steep rise of RR Lyrae stars may break that as well).
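For illustration, a rough sketch of what sigma clipping in time bins could look like (purely a suggestion; the bin size and minimum-points threshold are arbitrary):

import numpy as np

def sigclip_in_bins(times, mags, sigclip=3.0, binsize=0.5):
    # illustrative only: clip each time bin against its own median and stdev
    keep = np.zeros(times.size, dtype=bool)
    binedges = np.arange(times.min(), times.max() + binsize, binsize)
    for lo, hi in zip(binedges[:-1], binedges[1:]):
        inbin = (times >= lo) & (times < hi)
        if inbin.sum() < 3:
            keep |= inbin   # too few points to estimate a spread; keep them
            continue
        med, std = np.median(mags[inbin]), np.std(mags[inbin])
        keep |= inbin & (np.abs(mags - med) < sigclip * std)
    return times[keep], mags[keep]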

temp

add a hatdataserver.py for API access to the HAT data server

This will be a module that encapsulates all the annoying bits of authentication and API keys and allows access to:

  • the HAT footprint service
  • the HAT light curve catalog
  • the HAT object info service
  • the HAT xmatch service

Pointers to credentials and user options, etc. will be in a hatdataserver.conf file. The actual credentials will be in a .hatcreds file.

We may want to break this out into a separate package later.

astrobase.services.tesslightcurves improvements

Ideas for a hack over the next few days:

EDITS:

  • Get some good tests implemented for each of these, in tests.test_tesslightcurves

  • Fix bug in astroquery.mast for objects without HLSP matches.

lcdict KeyError in lclist_parallel_worker

In trying to run make_lclist, I am getting a KeyError in line 395 of lcproc.py:

[2018-03-20T23:24:19Z - EXC!] could not figure out columns for ../../tfa/pickled_tfa_output_lc_final_justapused/6045477978812561408_0_pickled_output.p
exception was: Traceback (most recent call last):
File "/home/jwallace/astrobase_waqas/astrobase/astrobase/lcproc.py", line 395, in lclist_parallel_worker
lcdict = lcdict[0]
KeyError: 0

It appears my lcdict doesn't have a key of 0. Should it?

Syntax error in checkplotserver source code

When I run checkplotserver I get the following error:

$ checkplotserver
Traceback (most recent call last):
File "/nfs/phs3/ar0/S/PROJ/jwallace/venv/bin/checkplotserver", line 11, in
load_entry_point('astrobase', 'console_scripts', 'checkplotserver')()
File "/nfs/phs3/ar0/S/PROJ/jwallace/venv/lib/python2.7/site-packages/pkg_resources/init.py", line 572, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/nfs/phs3/ar0/S/PROJ/jwallace/venv/lib/python2.7/site-packages/pkg_resources/init.py", line 2755, in load_entry_point
return ep.load()
File "/nfs/phs3/ar0/S/PROJ/jwallace/venv/lib/python2.7/site-packages/pkg_resources/init.py", line 2408, in load
return self.resolve()
File "/nfs/phs3/ar0/S/PROJ/jwallace/venv/lib/python2.7/site-packages/pkg_resources/init.py", line 2414, in resolve
module = import(self.module_name, fromlist=['name'], level=0)
File "/home/jwallace/astrobase_waqas/astrobase/astrobase/cpserver/checkplotserver.py", line 58, in
from . import checkplotserver_handlers as cphandlers
File "/home/jwallace/astrobase_waqas/astrobase/astrobase/cpserver/checkplotserver_handlers.py", line 2179
**lctoolkwargs,
^
SyntaxError: invalid syntax

I deleted the trailing comma in my local copy of the repo, but there's something I'm not setting up or compiling properly, because it still appears in the error message. (The traceback shows a Python 2.7 virtualenv; a trailing comma after a **kwargs argument in a call is not valid syntax on Python 2.7.)

'blsresult' key error

I just tried running the latest commit of astrobase and got the following error:

File "astrobase_search.py", line 207, in main
bls = periodbase.bls_parallel_pfind(times,mags,errs,startp=pmin,endp=maximum_obs_length[obj],nphasebins=200,mintransitduration=.02,maxtransitduration=.55,nbestpeaks=3,sigclip=sigclipvalue,nworkers=njobs,verbose=False)
File "/nfs/phs3/ar0/S/PROJ/jwallace/m4/vetting_signals/periodogram_snr_analysis/transit_inject/astrobase_aaa/periodbase/kbls.py", line 1011, in bls_parallel_pfind
verbose=verbose,
File "/nfs/phs3/ar0/S/PROJ/jwallace/m4/vetting_signals/periodogram_snr_analysis/transit_inject/astrobase_aaa/periodbase/kbls.py", line 1422, in bls_stats_singleperiod
thistransdepth = blsres['blsresult']['transdepth']
KeyError: 'blsresult'

I thought this might be connected with yesterday's changes so I wanted to mention it here.

refactor periodbase, add multiharmonic AoV and fast-chi periodograms

We should refactor periodbase so it has a set of submodules, one for each period-search algorithm. Importing periodbase will hoist the main methods up to the top-level namespace, so no existing code based on periodbase should break.
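A minimal sketch of the hoisting idea for the package __init__ (submodule and function names are examples):

# astrobase/periodbase/__init__.py (sketch)
# each algorithm lives in its own submodule, but the main entry points are
# re-exported here so existing periodbase.<func>() calls keep working
from .zgls import pgen_lsp
from .spdm import stellingwerf_pdm
from .kbls import bls_serial_pfind, bls_parallel_pfind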

In addition, we should add:

Other stuff to investigate later:

[Feature Request 🚀] Add a `CITATION.cff`

GitHub recently released a new feature where repository owners can add a CITATION.cff file, making it easy for others to cite the repository. Adding a CITATION.cff would make the attribution process very easy, and IMO it is better than the CITE.md file.

prewhiten_magseries(), possible conflicting optional args

When running prewhiten_magseries() in lcfit.py, if the argument fourierorder is specified but fourierparam is left alone, then a warning triggers at line 331 and fourierparam is used to the exclusion of fourierorder (see link below)

https://github.com/waqasbhatti/astrobase/blob/master/astrobase/varbase/lcfit.py#L331

fourierorder can be used to the exclusion of fourierparam if fourierparam is set to, e.g., an empty list. However, it seems a little problematic that setting one optional argument, without also resetting another one from its default value, leaves the freshly set argument unused. It took me a little while to notice this behavior, even though a warning does get displayed when both arguments are set. Given the current defaults (fourierparam set, fourierorder not set), it may make more sense for fourierorder to be given preference over fourierparam when both are set. Or maybe both defaults should be unset, and the code could assign fourierparam its current default value only if it detects that both are still unset.
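A sketch of the alternative resolution order suggested above, with fourierorder taking precedence and both defaults unset (parameter names follow the issue; the placeholder values are not the real lcfit defaults):

import logging
LOGGER = logging.getLogger(__name__)

def resolve_fourier_args(fourierorder=None, fourierparam=None):
    # illustrative only: prefer an explicitly set fourierorder, fall back to
    # fourierparam, and use a stand-in default only if neither was given
    if fourierorder is not None:
        if fourierparam is not None:
            LOGGER.warning('both fourierorder and fourierparam were set; '
                           'using fourierorder = %s', fourierorder)
        return [0.6] * fourierorder + [0.1] * fourierorder  # placeholder guesses
    if fourierparam is not None:
        return fourierparam
    return [0.6] * 8 + [0.1] * 8   # stand-in for the current hard-coded default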

Possible bug in CSV file update for checkplots

I was looking at the CSV file generated by the checkplot server for all the checkplots I reviewed. I noticed that the varinfo.objectisvar flag was sometimes set to a decimal number (rather than 1, 2, or 3), and that sometimes the corresponding varinfo.varperiod was set to 1, 2, or 3. It looks like the two of these are sometimes getting swapped. Here's a screenshot.

possible bls_snr() error

I didn't look at this in depth, but on line 948 in kbls.py (the bls_snr() function) I noticed something that looked a bit funny to me.

transitphase = thistransduration*period/2.0

Since in the subsequent lines we're working in phase rather than time, it seems strange to me that we're multiplying by a period here. If thistransduration is still in units of time, we should be dividing by period; if it's in units of (fractional) phase, then no need to multiply or divide by period.

Again, I didn't look any closer, but I just noticed this and figured I'd point it out.

Segfault in BLS period finding

Hi,

this simple code

import astrobase.periodbase.kbls
import numpy as np

np.random.seed(3)
# astrobase.periodbase.use_astropy_bls()
times = np.random.uniform(size=100)*100
mags = np.random.normal(0., 0.01, size=100)
errs = mags*0 + 0.01
ret = astrobase.periodbase.kbls.bls_serial_pfind(times, mags, errs)

seems to crash with a segfault on my machine, with the following backtrace:

[I 200211 15:12:00 __init__:84] An Astropy implementation of BLS is available because Astropy >= 3.1.
[I 200211 15:12:00 __init__:86] If you want to use it as the default periodbase BLS runner, call the periodbase.use_astropy_bls() function.
[I 200211 15:12:00 kbls:380] min P: 0.1, max P: 100.0, nfreq: 380425, minfreq: 0.01, maxfreq: 10.0
[I 200211 15:12:00 kbls:385] autofreq = True: using AUTOMATIC values for freq stepsize: 2.6260130709364816e-05, nphasebins: 200, min transit duration: 0.01, max transit duration: 0.4
[W 200211 15:12:00 kbls:418] the requested max P = 100.000 is larger than the time base of the observations = 95.201,  will make minfreq = 2 x 1/timebase
[W 200211 15:12:00 kbls:421] new minfreq: 0.021008104567491852, maxfreq: 10.0

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff6c589bd in __memset_sse2 () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install bzip2-libs-1.0.6-13.el7.x86_64 gdbm-1.10-8.el7.x86_64 glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64 libffi-3.0.13-18.el7.x86_64 libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64 libstdc++-4.8.5-39.el7.x86_64 libuuid-2.23.2-61.el7_7.1.x86_64 ncurses-libs-5.9-14.20130511.el7_4.x86_64 openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-18.el7.x86_64
(gdb) bt full
#0  0x00007ffff6c589bd in __memset_sse2 () from /lib64/libc.so.6
No symbol table info available.
#1  0x00007fffe73a8441 in eebls (n=<optimized out>, t=..., x=..., u=..., 
    v=..., nfreqs=<optimized out>, fmin=0.021008104567491852, 
    df=2.6260130709364816e-05, nbins=200, qmin=0.01, qmax=0.40000000000000002, 
    p=..., bper=47.600677004788423, bpow=0.0028800913220422253, 
    depth=0.0083002852006913031, qtran=0.14000000000000001, in1=55, in2=122)
    at pyeebls/eebls.f:156
        f0 = 0.021034364698201215
        i = <optimized out>
        ibi = (0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 2, 2, 0, 0, 0, 0, 0, 2, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...)
        j = 1
        jf = 2
        jn1 = 55
        jn2 = 122
        k = <optimized out>
        kk = <optimized out>
        kkmi = 5
        kma = 83
        kmi = 2
        nb2 = <optimized out>
        nbkma = 281
        p0 = 47.541250441736253
        ph = <optimized out>
        power = <optimized out>
        rn = <optimized out>
        rn1 = <optimized out>
        rn3 = 14
        s = <optimized out>
        s3 = -0.099935433816323291
        t1 = <optimized out>
        tot = <optimized out>
        y = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...)
#2  0x00007fffe73a56b6 in f2py_rout__pyeebls_eebls (capi_self=<optimized out>, 
    capi_args=<optimized out>, capi_keywds=<optimized out>, 
    f2py_func=0x7fffe73a7d90 <eebls>)
    at build/src.linux-x86_64-3.6/pyeebls/_pyeeblsmodule.c:400
        _save = 0x604c40
        capi_buildvalue = 0x0
        f2py_success = 1
        n = 100
        n_capi = 0x7ffff7d60210 <_Py_NoneStruct>
        t = 0x1e966f0
        t_Dims = {100}
        capi_t_tmp = 0x7fffec0f4a80
        capi_t_intent = 1
        t_capi = 0x7fffec0f4a80
        x = 0x1e97ac0
        x_Dims = {100}
        capi_x_tmp = 0x7fffec0f4ad0
        capi_x_intent = 1
        x_capi = 0x7fffec0f4ad0
        u = 0x1e95d70
        u_Dims = {100}
        capi_u_tmp = 0x7fffdcfd8a30
        capi_u_intent = 2
        u_capi = 0x7fffdcfd8a30
        v = 0x1e95a40
        v_Dims = {100}
        capi_v_tmp = 0x7fffec0f4b20
        capi_v_intent = 2
        v_capi = 0x7fffec0f4b20
        nfreqs = 380425
        nfreqs_capi = 0x7fffb1218450
        fmin = 0.021008104567491852
        fmin_capi = 0x7fffdcfd9be8
        df = 2.6260130709364816e-05
        df_capi = 0x7fffdcfd9ba0
        nbins = 200
        nbins_capi = 0x7fffdd4d98b0
        qmin = 0.01
        qmin_capi = 0x7fffec6b01e0
        qmax = 0.40000000000000002
        qmax_capi = 0x7fffec6b0270
        p = 0x7fffd3ab8010
        p_Dims = {380425}
        capi_p_tmp = 0x7fffdd50ee40
        capi_p_intent = 12
        bper = 47.600677004788423
        bpow = 0.0028800913220422253
        depth = 0.0083002852006913031
        qtran = 0.14000000000000001
        in1 = 55
        in2 = 122
        capi_kwlist = {0x7fffe73a9530 "t", 0x7fffe73a9553 "x", 
          0x7fffe73a9532 "u", 0x7fffe73a9534 "v", 0x7fffe73a9536 "nfreqs", 
          0x7fffe73a953d "fmin", 0x7fffe73a9542 "df", 0x7fffe73a9545 "nbins", 
          0x7fffe73a954b "qmin", 0x7fffe73a9550 "qmax", 0x7fffe73a9c35 "n", 
          0x0}
#3  0x00007ffff7960b60 in _PyObject_FastCallDict (func=0x7fffec1550a8, 
    args=<optimized out>, nargs=<optimized out>, kwargs=0x0)
    at Objects/abstract.c:2331
        tuple = 0x7fffd8154b48
        call = 0x7fffe73a6510 <fortran_call>
        result = 0x0
#4  0x00007ffff7a08f1c in call_function (
    pp_stack=pp_stack@entry=0x7fffffffcfd0, oparg=<optimized out>, 
    kwnames=kwnames@entry=0x0) at Python/ceval.c:4875
        pfunc = 0x1e98308
        func = 0x7fffec1550a8
        x = <optimized out>
        w = <optimized out>
        nkwargs = <optimized out>
        nargs = 10
        stack = <optimized out>
#5  0x00007ffff79fe087 in _PyEval_EvalFrameDefault (f=<optimized out>, 
    throwflag=<optimized out>) at Python/ceval.c:3335
        sp = 0x1e98360
        res = <optimized out>
        stack_pointer = <optimized out>
        next_instr = 0x7fffec15a7fc
        opcode = <optimized out>
        oparg = <optimized out>
        why = <optimized out>
        fastlocals = 0x1e982b0
        freevars = 0x1e98308
        retval = <optimized out>
        tstate = <optimized out>
        co = <optimized out>
        instr_ub = -1
        instr_lb = 0
        instr_prev = -1
        first_instr = <optimized out>
        names = 0x7fffa26c46c0
        consts = 0x7fffec1470c0
        opcode_targets = {0x7ffff7a07165 <_PyEval_EvalFrameDefault+37909>, 
   ....
#6  0x00007ffff7a08b5a in _PyFunction_FastCall (globals=<optimized out>, 
    nargs=<optimized out>, args=<optimized out>, co=<optimized out>)
    at Python/ceval.c:4933
        fastlocals = 0x1e982b0
        i = <optimized out>
        f = 0x1e98138
        tstate = 0x604c40
        result = <optimized out>
#7  fast_function (func=<optimized out>, stack=0x1e958e8, 
    nargs=<optimized out>, kwnames=<optimized out>) at Python/ceval.c:4968
        co = <optimized out>
        globals = <optimized out>
        argdefs = <optimized out>
        kwdefs = <optimized out>
        closure = <optimized out>
        name = <optimized out>
        qualname = <optimized out>
        d = <optimized out>
        nkwargs = <optimized out>
        nd = <optimized out>
#8  0x00007ffff7a08e93 in call_function (
    pp_stack=pp_stack@entry=0x7fffffffd200, oparg=<optimized out>, 
    kwnames=kwnames@entry=0x0) at Python/ceval.c:4872
        pfunc = 0x1e958e0
        func = 0x7fffec0ee9d8
        x = <optimized out>
        w = <optimized out>
        nkwargs = <optimized out>
        nargs = 8
        stack = <optimized out>
#9  0x00007ffff79fe087 in _PyEval_EvalFrameDefault (f=<optimized out>, 
    throwflag=<optimized out>) at Python/ceval.c:3335
        sp = 0x1e95928
        res = <optimized out>
        stack_pointer = <optimized out>
        next_instr = 0x19535be
        opcode = <optimized out>
        oparg = <optimized out>
        why = <optimized out>
        fastlocals = 0x1e95770
        freevars = 0x1e958d0
        retval = <optimized out>
        tstate = <optimized out>
        co = <optimized out>
        instr_ub = -1
        instr_lb = 0
        instr_prev = -1
        first_instr = <optimized out>
        names = 0x7fffec2a24d0
        consts = 0x7fffec30e778
        opcode_targets = {0x7ffff7a07165 <_PyEval_EvalFrameDefault+37909>, 
 ....
#10 0x00007ffff7a081d9 in _PyEval_EvalCodeWithName (_co=0x7fffec14fc90, 
    globals=<optimized out>, locals=<optimized out>, args=<optimized out>, 
    argcount=<optimized out>, kwnames=0x0, kwargs=0x65cfb0, 
    kwcount=<optimized out>, kwstep=1, defs=0x7fffec17dec0, defcount=14, 
    kwdefs=0x0, closure=0x0, name=<optimized out>, qualname=0x7ffff043e7c8)
    at Python/ceval.c:4166
        co = 0x7fffec14fc90
        f = <optimized out>
        retval = 0x0
        fastlocals = <optimized out>
        freevars = <optimized out>
        tstate = 0x604c40
        x = <optimized out>
        u = <optimized out>
        total_args = <optimized out>
        i = 0
        n = <optimized out>
        kwdict = <optimized out>
#11 0x00007ffff7a08c0a in fast_function (func=<optimized out>, stack=0x65cf98, 
    nargs=3, kwnames=<optimized out>) at Python/ceval.c:4992
        co = <optimized out>
        globals = <optimized out>
        argdefs = <optimized out>
        kwdefs = 0x0
        closure = 0x0
        name = 0x7ffff043e7c8
        qualname = <optimized out>
        d = <optimized out>
        nkwargs = <optimized out>
        nd = <optimized out>
#12 0x00007ffff7a08e93 in call_function (
    pp_stack=pp_stack@entry=0x7fffffffd4e0, oparg=<optimized out>, 
    kwnames=kwnames@entry=0x0) at Python/ceval.c:4872
        pfunc = 0x65cf90
        func = 0x7fffdcfd0950
        x = <optimized out>
        w = <optimized out>
        nkwargs = <optimized out>
        nargs = 3
        stack = <optimized out>
#13 0x00007ffff79fe087 in _PyEval_EvalFrameDefault (f=<optimized out>, 
    throwflag=<optimized out>) at Python/ceval.c:3335
        sp = 0x65cfb0
        res = <optimized out>
        stack_pointer = <optimized out>
        next_instr = 0x7ffff7f25f04
        opcode = <optimized out>
        oparg = <optimized out>
        why = <optimized out>
        fastlocals = 0x65cf90
        freevars = 0x65cf90
        retval = <optimized out>
        tstate = <optimized out>
        co = <optimized out>
        instr_ub = -1
        instr_lb = 0
        instr_prev = -1
        first_instr = <optimized out>
        names = 0x7ffff7e8a8d0
        consts = 0x7ffff0481868
        opcode_targets = {0x7ffff7a07165 <_PyEval_EvalFrameDefault+37909>, 
          0x7ffff79fe250 <_PyEval_EvalFrameDefault+1280>, 
...
#14 0x00007ffff7a09135 in _PyEval_EvalCodeWithName (qualname=0x0, 
    name=<optimized out>, closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, 
    kwstep=2, kwcount=<optimized out>, kwargs=<optimized out>, kwnames=0x0, 
    argcount=0, args=0x0, locals=0x7ffff7f3c1f8, globals=0x7ffff7f3c1f8, 
    _co=0x7ffff7f01a50) at Python/ceval.c:4166
        co = 0x7ffff7f01a50
        retval = 0x0
        fastlocals = <optimized out>
        kwdict = <optimized out>
        tstate = 0x604c40
        x = <optimized out>
        u = <optimized out>
        n = <optimized out>
        f = 0x65ce18
        freevars = <optimized out>
        total_args = 0
        i = 0
#15 PyEval_EvalCodeEx (_co=_co@entry=0x7ffff7f01a50, 
    globals=globals@entry=0x7ffff7f3c1f8, locals=locals@entry=0x7ffff7f3c1f8, 
    args=args@entry=0x0, argcount=argcount@entry=0, kws=kws@entry=0x0, 
    kwcount=kwcount@entry=0, defs=defs@entry=0x0, defcount=defcount@entry=0, 
    kwdefs=kwdefs@entry=0x0, closure=closure@entry=0x0) at Python/ceval.c:4187
No locals.
#16 0x00007ffff7a09e8b in PyEval_EvalCode (co=co@entry=0x7ffff7f01a50, 
    globals=globals@entry=0x7ffff7f3c1f8, locals=locals@entry=0x7ffff7f3c1f8)
    at Python/ceval.c:731
No locals.
#17 0x00007ffff7a907fe in run_mod (mod=mod@entry=0x6b39c0, 
    filename=filename@entry=0x7ffff04411b8, 
    globals=globals@entry=0x7ffff7f3c1f8, locals=locals@entry=0x7ffff7f3c1f8, 
    flags=flags@entry=0x7fffffffd7d0, arena=arena@entry=0x7ffff7f55228)
    at Python/pythonrun.c:1025
        co = 0x7ffff7f01a50
        v = <optimized out>
#18 0x00007ffff7941ecb in PyRun_FileExFlags (fp=fp@entry=0x6ad530, 
    filename_str=filename_str@entry=0x7ffff7efd460 "xx.py", 
    start=start@entry=257, globals=globals@entry=0x7ffff7f3c1f8, 
    locals=locals@entry=0x7ffff7f3c1f8, closeit=closeit@entry=1, 
    flags=flags@entry=0x7fffffffd7d0) at Python/pythonrun.c:978
        ret = 0x0
        mod = 0x6b39c0
        arena = 0x7ffff7f55228
        filename = 0x7ffff04411b8
#19 0x00007ffff79422b7 in PyRun_SimpleFileExFlags (fp=fp@entry=0x6ad530, 
    filename=<optimized out>, closeit=closeit@entry=1, 
    flags=flags@entry=0x7fffffffd7d0) at Python/pythonrun.c:419
        m = 0x7ffff7f3a228
        d = 0x7ffff7f3c1f8
        v = <optimized out>
        ext = <optimized out>
        set_file_name = 1
        ret = -1
        len = <optimized out>
#20 0x00007ffff794285a in PyRun_AnyFileExFlags (fp=fp@entry=0x6ad530, 
    filename=<optimized out>, closeit=closeit@entry=1, 
    flags=flags@entry=0x7fffffffd7d0) at Python/pythonrun.c:81
No locals.
#21 0x00007ffff7a97203 in run_file (p_cf=0x7fffffffd7d0, 
    filename=0x6042c0 L"xx.py", fp=0x6ad530) at Modules/main.c:340
        unicode = 0x7ffff7e88298
        bytes = 0x7ffff7efd440
        filename_str = <optimized out>
        run = <optimized out>
#22 Py_Main (argc=argc@entry=2, argv=argv@entry=0x603010) at Modules/main.c:810
        c = <optimized out>
        sts = -1
        command = 0x0
        filename = 0x6042c0 L"xx.py"
        module = 0x0
        fp = 0x6ad530
        p = <optimized out>
        skipfirstline = <optimized out>
        stdin_is_interactive = 1
        help = <optimized out>
        version = <optimized out>
        saw_unbuffered_flag = <optimized out>
        opt = <optimized out>
        cf = {cf_flags = 0}
        main_importer_path = <optimized out>
        warning_option = <optimized out>
        warning_options = <optimized out>
#23 0x0000000000400a19 in main (argc=2, argv=<optimized out>)
    at ./Programs/python.c:69
        argv_copy = 0x603010
        argv_copy2 = 0x603030
        i = <optimized out>
        res = <optimized out>
        oldloc = 0x603050 ""

I don't know if I'll have time to look at or debug it further, but it looks like a clear bug.
(I'm using Python 3.6 on Linux with the latest astrobase.)

Thanks
Sergey

Finder plot cross hair off

The cross hair in the finder plot looks like it's off slightly. The horizontal line definitely is a bit low, by perhaps a pixel or two. The vertical line seems right.

add a flare finder to varbase (or a new module that does this)

for methods, see:

This is basically a matched-filter convolution with a trend-filtered light curve. The flare finder should keep a list of all flares it finds, along with their start and end times, the integrated flux between flare and continuum, and perhaps a measure of the energy based on a rough stellar luminosity estimated from the star's color (assuming it's a dwarf).

prewhiten_magseries code doesn't like the default 'plotfit=None'

When running prewhiten_magseries() (in signal.py) with the default value for plotfit (None), I get the following error:

File "", line 178, in prewhiten_magseries
if plotfit and isinstance(plotfit, str) or isinstance(plotfit, strio):
TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types

When I give a string to plotfit, everything runs fine.

I am on version 0.2.6.
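One possible reading of this (an assumption, not confirmed): `and` binds tighter than `or`, so with plotfit=None the expression still evaluates isinstance(plotfit, strio), and if strio is not bound to a class in the installed environment, that is exactly this TypeError. A sketch of a guard that never inspects plotfit's type when it is None:

def wants_plot(plotfit):
    # illustrative only: short-circuit on None before any isinstance checks;
    # a file-like object is detected by its write() method instead of its type
    return plotfit is not None and (isinstance(plotfit, str) or
                                    hasattr(plotfit, 'write'))

print(wants_plot(None))       # False, and no TypeError
print(wants_plot('fit.png'))  # True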

checkplot_pickle() not ignoring missing information in the objectinfo dict

In using checkplot_pickle(), I have created an objectinfo dict that contains only objectid, ra, and decl as elements. However, checkplot_pickle() then starts looking for magnitudes in objectinfo and fails because it can't find any magnitude information. The specific error message I am getting is:

gaiak_colors = gaia_mags - objectinfo['kmag']
KeyError: 'kmag'

If I do not pass an objectinfo dict in to checkplot_pickle(), I do not get this error and the function runs successfully. It's only when I pass in an objectinfo dict that it starts wanting more than I'm giving it.
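A sketch of the kind of defensive lookup that would let a minimal objectinfo dict (objectid, ra, decl only) pass through; the function name here is made up for illustration:

import numpy as np

def gaia_minus_kmag(gaia_mags, objectinfo):
    # illustrative only: return NaN colors instead of raising KeyError when
    # the objectinfo dict has no 'kmag' entry
    kmag = objectinfo.get('kmag')
    gaia_mags = np.asarray(gaia_mags, dtype=float)
    if kmag is None:
        return np.full(gaia_mags.shape, np.nan)
    return gaia_mags - kmag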

color_features() preventing printing of mag/color information I do have when I don't have J H or K

The series of function calls checkplot_pickle() makes leads eventually to color_features(), which returns a dict without any values if there are no J, H, or K values, or if it can't get the 2MASS DUST extinction info.

For me, this means that even though I have U, B, g, r, and i information, this doesn't get written on the checkplot because I don't have J, H, or K values. I tried using garbage JHK values (10 for each), but then color_features() started complaining about not being able to retrieve the 2MASS DUST extinction info.

And unfortunately, 2MASS doesn't have deep enough coverage to have data for even a majority of my objects.

Checkplots showing exceptional outliers

I've appreciated in the past that the checkplots zoom in on the "important" points by automatically discounting some outlier points when determining axis limits. However, a current checkplot of mine is not doing this, even when I set sigclip to a value. The entire range of points, including the extreme outliers, is shown. See attached screenshot for an example.

move to using pickles/sqlite3 for the checkplot list instead of JSON

The checkplot list JSON grows to a huge size when we get through about 2000 objects or so with the checkplotserver, since it contains all the objectinfo, varinfo, comments, and xmatch info. It also gets progressively slower to move between checkplots. We should move to using a pickle instead.

Also think about sqlite3 (this might actually be better; can probably execute the query in the same background workers as we do to update the JSON).

astrokep.filter_kepler_lcdict

An issue with the current (commented-out) approach is that the SAP and PDC fluxes have NaNs at different positions.

The typical use-case is "I want SAP fluxes" or "I want PDC fluxes". I think it might be more sensible to have this function reorganize the lcdict, perhaps bookkeeping the same things as currently tracked in lcdict['columns'], but with two separate sub-dicts: one for PDC, one for SAP.

[minor tweak] tfa n_templates should depend on time baseline

- not more than 10% of the total number of objects in the field or

The quantity that should set "how many parameters should I fit to this light curve?" is the number of points in the light curve, rather than the number of stars in the field.

E.g., in a very crowded field with 1e6 stars, the 10% criterion would give 1e5 coefficients to detrend against in your light curve, irrespective of how many points are in the LC. For the typical few thousand points, this would lead to overfitting!
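A sketch of a cap that also ties the template count to the light-curve length (the fractions and the hard cap are arbitrary illustrative numbers):

def max_tfa_templates(nfield_objects, nlc_points,
                      field_frac=0.1, points_frac=0.1, hard_cap=200):
    # illustrative only: never fit more templates than a small fraction of
    # either the field objects or the LC points
    return int(min(field_frac * nfield_objects,
                   points_frac * nlc_points,
                   hard_cap))

# a very crowded field (1e6 stars) but only ~3000 LC points:
print(max_tfa_templates(1000000, 3000))   # 200, not 100000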

buggy BLS depths

The following tarball has a minimal working example (with dependencies from pipe-trex, + the lightcurves used):

bls_example.tar.gz

The BLS implementation from kbls seems to be doing a couple of wonky things.

  1. Sometimes the returned transit depth has the wrong sign, e.g., for the attached WASP-29b plot.
  2. Other times (more frequently), the returned depth seems to be systematically too small.

The work-around I've been using for fast transit model-fitting has been to use the BLS period as an initial guess to fit trapezoidal models. This seems to produce robust depths.

However, both (1) and (2) above are important to fix: (1) for obvious reasons, and (2) because it affects the depths we measure, and thus the measured SNR of peaks.

Attached phased-light-curve plots: WASP-4 (wasp-4b_bls_buggy), WASP-5 (wasp-5b_bls_buggy), WASP-29 (wasp-29b_bls_buggy).

"""
example showing odd BLS behavior.
"""
import numpy as np
from lcstatistics import read_tfa_lc
from astrobase.periodbase import kbls
from astrobase.varbase import lcfit

##########################################
# change things below here

### WASP-29b: fails with transit depth NEGATIVE.
tfalcfile, plname = 'HAT-633-0000416.tfalc', 'WASP-29b'
### WASP-5b: seems ok, but depth is smaller than expected
#tfalcfile, plname = 'HAT-678-0001318.tfalc', 'WASP-5b'
### WASP-4b: seems ok, but depth is smaller than expected
#tfalcfile, plname = 'HAT-717-0001830.tfalc', 'WASP-4b'

# change things above here
##########################################

lc = read_tfa_lc(tfalcfile)

time = lc['btjd']
mag = lc['TF1']
err_mag = lc['RMERR1'] # raw mag err same as TFA mag err

# these zero-points do not matter; we care about relative flux. for the error
# transformation, note
#   sigma_flux = dg/d(mag) * sigma_mag, for g=f0 * 10**(-0.4*(mag-mag0)).
mag_0 = 10
flux_0 = 1000
flux = flux_0 * 10**(-0.4 * (mag - mag_0))
err_flux = np.abs(
    -0.4 * np.log(10) * flux_0 * 10**(-0.4*(mag-mag_0)) * err_mag
)

fluxmedian = np.nanmedian(flux)
flux /= fluxmedian
err_flux /= fluxmedian

# fit BLS model; plot resulting phased LC.
endp = 1.05*(np.nanmax(time) - np.nanmin(time))/2
nworkers=8
blsdict = kbls.bls_parallel_pfind(time, flux, err_flux,
                                  magsarefluxes=True, startp=0.1,
                                  endp=endp, maxtransitduration=0.15,
                                  nworkers=nworkers, sigclip=None)
fitd = kbls.bls_stats_singleperiod(time, flux, err_flux,
                                   blsdict['bestperiod'],
                                   maxtransitduration=0.15,
                                   magsarefluxes=True, sigclip=None,
                                   perioddeltapercent=5)

blsfit_savfile = '{}_bls_buggy.png'.format(plname)
lcfit._make_fit_plot(fitd['phases'], fitd['phasedmags'], None,
                     fitd['blsmodel'], fitd['period'], fitd['epoch'],
                     fitd['epoch'], blsfit_savfile, magsarefluxes=True)

tuple should be an `and`

objectinfo['kmag'] is not None,

The if statement here seems incorrect, particularly when compared with e.g. line 1372. All the if is doing here is checking whether the tuple that this statement creates is truthy (a non-empty tuple always is), when it seems like the comma at the end of the line should be an `and` so that every condition actually gets checked.
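For reference, a quick illustration of why the trailing comma matters: it builds a tuple, and any non-empty tuple is truthy, so the condition always passes regardless of what's inside it.

objectinfo = {'kmag': None}

# the trailing comma makes this a one-element tuple, which is always truthy
print(bool((objectinfo['kmag'] is not None,)))    # True, even though kmag is None

# chaining with `and` actually evaluates each condition
print(objectinfo['kmag'] is not None and True)    # False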
