bleeding_edge's Issues

st_position_load is broken

You can reproduce the issue by running the crib sheet at:

general/examples/crib_tplot_annotation.pro

The error I'm seeing is:

% Compiled module: ST_POSITION_LOAD.
HTTP/1.1 301 Moved Permanently
Content-Type: text/html; charset=iso-8859-1
Date: Wed, 22 Feb 2023 19:42:19 GMT
Location: https://stereo.srl.caltech.edu2/Position/ahead/position_ahead_2008_GSE.txt
Server: Apache/2.4.54 () OpenSSL/1.0.2k-fips
Content-Length: 282
Connection: Close

Permanent redirect to: https://stereo.srl.caltech.edu2/Position/ahead/position_ahead_2008_GSE.txt
Request Aborted
% Compiled module: READ_ASC.
% OPENR: Error opening file. Unit: 100, File: /Users/eric/data/misc/stereo/Position/ahead/position_ahead_2008_GSE.txt
No such file or directory
% Execution halted at: READ_ASC 88 /Users/eric/trunk/general/tools/misc/read_asc.pro
% ST_POSITION_LOAD 70 /Users/eric/trunk/general/missions/stereo/st_position_load.pro
% $MAIN$ 55 /Users/eric/trunk/general/examples/crib_tplot_annotation.pro

Finding a better home for some scripts

The script that commits the latest SVN changes to this repo and the script that builds and releases the SPEDAS changelog currently run on my Raspberry Pi; we should move them to one of the UCLA or Berkeley servers.

tplot2cdf missing some variable attributes

The UNITS and COORDINATE_SYSTEM variable attributes should be set in the CDF if the corresponding entries are present in the data_att dlimits structure for the tplot variable. This would be very useful for unit testing, especially when validating Python results against IDL results.
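The intended mapping can be sketched in Python (field names like 'units' and 'coord_sys' are illustrative; the actual dlimits.data_att tag names may differ):

```python
def cdf_var_atts_from_data_att(data_att):
    """Build CDF variable attributes from a tplot data_att-style dict.

    Only emits UNITS / COORDINATE_SYSTEM when the corresponding entry
    is present and non-empty, mirroring the requested tplot2cdf behavior.
    """
    atts = {}
    if data_att.get('units'):
        atts['UNITS'] = data_att['units']
    if data_att.get('coord_sys'):
        atts['COORDINATE_SYSTEM'] = data_att['coord_sys']
    return atts
```

A unit test comparing IDL and Python CDF output could then assert that both sides produce the same attribute dictionary for the same tplot variable.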

st_swaves_load URL layout has changed on server

This STEREO dataset is now only available via HTTPS. The load routine has been updated to use the correct protocol, and to use spd_download instead of file_retrieve, but it still doesn't work because the URLs have changed.

Old style:
https://stereo-ssc.nascom.nasa.gov/data/ins_data/swaves/2008/swaves_average_20080101_a.sav

New style:
https://stereo-ssc.nascom.nasa.gov/data/ins_data/swaves/Ahead/one-minute/IDL/HFR-LFR/2008/stereo-a_swaves_hfr-lfr_average_20080101_v04.sav

It remains to be seen whether the V04 .sav files are compatible with the originals.

This is probably a low priority unless someone requests it.

Issue with append function in tplot_restore

Hi,

I am having some issues with using the append keyword in tplot_restore.

tplot_restore can restore one day of tplot save-file data. But when I try to append the second day's save file, an error message appears at line 142:

% Attempt to subscript S1 with <LONG ( 1)> is out of range.
% Execution halted at: TPLOT_RESTORE 142 C:\Users\gpoh\IDLWorkspace\Tplot\general\tplot\tplot_restore.pro
% ITPLOT_TEST 45 C:\Users\gpoh\IDLWorkspace\Default\itplot_test.pro
% $MAIN$

Please advise. Thank you.

Test new server sundial (eventual replacement for cronus)

Our server cronus is nearing end-of-support, and its replacement has arrived. The new server is accessible as sundial.ssl.berkeley.edu.
It's running RHEL version 8.7, so a few things might be different from our other Linux servers (which are set up with CentOS).

I've already asked for a few tweaks to the configuration:

  1. Create /mydisks/home/thmsoc directory (Done)
  2. Install svn, git, and mysql binaries (Done)
  3. Install include files for C++ boost libraries (Done)

Once these changes are made, we can start testing cron jobs and looking for other potential issues. Some command line tools might have changed options, syntax, or default behavior. We may need to add sundial to the list of permitted hosts on our svn, mysql, or other servers, services, or firewall configurations.

The goal is for the new server to be able to run any cron job or workflow from the other Linux servers, not just those currently on cronus. Where practical, it might be good to run the jobs interactively (switch to /bin/sh, set the THMSOC variable, then call the script on the command line), since some issues show up in the script output but never make it into the log files.

Nick: PHP scripts, file inventories, GOES/POES summary plots, etc
Jim M: L1->L2 processing, THEMIS summary plots
Cindy: GMAG processing
Jim L: Telemetry, L0, L1 processing tools; logging to mysql and email notifications, SVN client compatibility

I have confirmed that SVN working copies created on our CentOS machines need to be upgraded to be compatible with the svn client on sundial; once upgraded, they won't be usable on the other machines until we upgrade their svn clients too.

Implement Shue magnetopause model in IDL SPEDAS [Shue 1997]

gsm2lmn might have some relevant info? Also, I think this model might be included with GEOPACK.

Cindy, I think someone else at UCLA was working on this, but I can't find anything about it in my archived emails. Do you recall who it was?
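For reference, the functional form shared by the 1997 and 1998 Shue et al. papers is r = r0 (2 / (1 + cos θ))^α. A minimal numpy sketch, using the coefficient fits from the 1998 refit (the 1997 paper's fits differ slightly, so the exact coefficients to implement should be confirmed against the paper):

```python
import numpy as np

def shue_magnetopause(theta, bz_nT, pdyn_nPa):
    """Shue et al. magnetopause radius (Re) at solar zenith angle theta (rad).

    Functional form: r = r0 * (2 / (1 + cos(theta)))**alpha.
    Coefficients below are the Shue et al. (1998) fits; swap in the 1997
    values if that is the version we want in SPEDAS.
    """
    r0 = (10.22 + 1.29 * np.tanh(0.184 * (bz_nT + 8.14))) * pdyn_nPa ** (-1.0 / 6.6)
    alpha = (0.58 - 0.007 * bz_nT) * (1.0 + 0.024 * np.log(pdyn_nPa))
    return r0 * (2.0 / (1.0 + np.cos(theta))) ** alpha
```

The subsolar standoff distance is just r at θ = 0, which may make cross-checking against GEOPACK straightforward.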

Help request, Load Data panel crashing

From [email protected]

He sent a GUI history file showing the following:

(2023-03-07/13:42:26) % Dynamically loadable module failed to load: CDF.
(2023-03-07/13:42:26) % Execution halted at: CL_CSA_INIT 110 D:\Islam\SPEDAS\Spedas 5 with IDL\spedas_5_0\projects\cluster\cluster_science_archive\cl_csa_init.pro

I've replied with some suggestions to check where IDL is trying to load the DLM from, or if his IDL installation is corrupted. He is using IDL 8.2, which might cause other problems down the line. Since he's using the GUI, I suggested the VM executables as a possible option.

Issue with SITL-level FEEPS data

Relayed from Mitsuo Oka via Christine G:

Karlheinz reported that some of the "SITL-level" FEEPS data causes an error when plotting with IDL/SPEDAS. I was able to reproduce this error without using EVA, as shown in the attached script. The script has three different choices of 'trange': the first works okay, the second leads to a crash, and the third produces no variable to plot.

My impression is that the latest "SITL-level" FEEPS data is usually okay and that this issue has not disrupted SITL activity, but we would appreciate it if you could take a look.

PRO test_feeps
  compile_opt idl2

;  trange = ['2022-10-26/00:00','2022-10-27/00:00']; Works okay.
   trange = ['2022-10-26/00:00','2022-10-27/16:00']; Crashes with an error.
;  trange = ['2023-03-05/00:00','2023-03-07/00:00']; No error, but also no valid variable to plot. 
  
  mms_load_data, trange = trange, probes = '3', level = 'sitl', instrument = 'feeps', $
    data_rate = 'srvy', datatype = 'electron'
  
  tplot, '*_epd_feeps_srvy_sitl_electron_intensity_omni'
     
END

Problems loading MAVEN data from PDS/HAPI server at UCLA

There are some persistent issues with downloading MAVEN data (and probably other missions) from the PDS/HAPI server at UCLA. The UCLA POC is In Sook Moon, [email protected]. The last update from him was in November 2022; maybe it's time to ping him again?

Latest from Nick, 2023-02-23:

Using IDL SPEDAS today, I can see that it downloads some MAVEN CSV files from UCLA, but there are so many data sets on this UCLA HAPI server ... and most of them return nothing; I don't know whether this is normal behavior or not.

There are certainly some problems remaining, but I am not sure how important they are. For example, the following link, which I reported previously, should not give a 404 error:
https://pds-ppi.igpp.ucla.edu/hapi/data?id=urn:nasa:pds:cassini-caps-calibrated:data-els&time.min=2010-03-23T00:00:00.000Z&time.max=2010-03-23T02:00:00.000Z

And the following request for a non-existent data set should not give a blank page:
https://pds-ppi.igpp.ucla.edu/hapi/data?id=urn:nasa:pds:maven.static.c:data.dddd4_4d16a2m&time.min=2017-03-23T00:00:00Z&time.max=2017-03-23T02:00:00Z

In general, the HAPI server should never send any 404 or 500 responses, and never any empty responses. The HAPI server should always return a JSON response, with an error message when needed.

Here is the correct behavior, using NASA's HAPI server with a nonexistent data set dddGOES:
https://cdaweb.gsfc.nasa.gov/hapi/data?id=dddGOES15_EPS-MAGED_5MIN&time.min=2017-03-23T00:00:00Z&time.max=2017-03-24T00:00:00Z&format=json
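For test scripts, the request URLs above can be generated programmatically; a small sketch (endpoint and dataset id taken from the examples above):

```python
from urllib.parse import urlencode, quote

def hapi_data_url(server, dataset, tmin, tmax, fmt=None):
    """Build a HAPI /data request URL.

    Uses quote (with ':' marked safe) so the ISO-8601 timestamps stay
    readable, matching the URLs quoted in this report.
    """
    params = [('id', dataset), ('time.min', tmin), ('time.max', tmax)]
    if fmt:
        params.append(('format', fmt))
    return server + '/hapi/data?' + urlencode(params, safe=':', quote_via=quote)
```

Looping such URLs over the server's catalog response would make it easy to flag every dataset that returns a 404, 500, or empty body instead of the JSON error the HAPI spec calls for.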

Bad request on downloading one-hour resolution OMNI data

The function "pyspedas.omni.data" works perfectly when downloading high-resolution OMNI data, but it cannot fetch hourly OMNI data correctly.

For example, I'm trying to download data between 2000/03/01 03:00 and 2000/03/02 03:00 using:

omni_vars = pyspedas.omni.data(trange=[start_time, end_time], datatype='hourly')

This results in an error message like:

06-May-23 15:37:15: Downloading remote index: https://spdf.gsfc.nasa.gov/pub/data/omni/omni_cdaweb/hourly/2000/
06-May-23 15:37:16: No links matching pattern omni2_h0_mrg1hr_20000301_v??.cdf found at remote index https://spdf.gsfc.nasa.gov/pub/data/omni/omni_cdaweb/hourly/2000/

Checking the archive directory shows that the hourly data are stored as two CDF files per year, not one file per month. For the year 2000, the two files are omni2_h0_mrg1hr_20000101_v01.cdf and omni2_h0_mrg1hr_20000701_v01.cdf, which is why the monthly file pattern fails to match.

I am on stable version 1.4.32; is this a real issue, or an error in my inputs?
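The fix would be to snap the requested date to the start of its half-year before building the filename, rather than to the start of the month. A sketch (file-name template taken from the archive listing quoted above; the version suffix handling is an assumption):

```python
from datetime import date

def omni_hourly_file_start(d):
    """Start date of the semi-annual OMNI hourly CDF covering date d."""
    return date(d.year, 1 if d.month <= 6 else 7, 1)

def omni_hourly_filename(d, version='v01'):
    """Filename per the archive listing, e.g. omni2_h0_mrg1hr_20000101_v01.cdf."""
    start = omni_hourly_file_start(d)
    return 'omni2_h0_mrg1hr_%s_%s.cdf' % (start.strftime('%Y%m%d'), version)
```

With this, a trange starting 2000-03-01 maps to the 20000101 file instead of the nonexistent 20000301 one.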

Look into migrating to SVN over HTTP rather than svn+ssh.

This would allow outside developers to view the SVN history without needing commit privileges. SSL has a shared SVN service that might be suitable; the only outstanding issue is how to manage credentials so we can control who can modify what parts of the repository.

Possibly wrong Hanning window in wavpol

In file general/science/wavpol/wavpol.pro, the smoothing window is defined as a fixed array:

aa=[0.024,0.093,0.232,0.301,0.232,0.093,0.024] ;smoothing profile based on Hanning

And this array is later used as:
ematspec[KK,k,0,0]=TOTAL(aa[0:(nosmbins-1)]*matspec[KK,(k-(nosmbins-1)/2):(k+(nosmbins-1)/2),0,0])
ematspec[KK,k,1,0]=TOTAL(aa[0:(nosmbins-1)]*matspec[KK,(k-(nosmbins-1)/2):(k+(nosmbins-1)/2),1,0])
ematspec[KK,k,2,0]=TOTAL(aa[0:(nosmbins-1)]*matspec[KK,(k-(nosmbins-1)/2):(k+(nosmbins-1)/2),2,0])
ematspec[KK,k,0,1]=TOTAL(aa[0:(nosmbins-1)]*matspec[KK,(k-(nosmbins-1)/2):(k+(nosmbins-1)/2),0,1])
ematspec[KK,k,1,1]=TOTAL(aa[0:(nosmbins-1)]*matspec[KK,(k-(nosmbins-1)/2):(k+(nosmbins-1)/2),1,1])
ematspec[KK,k,2,1]=TOTAL(aa[0:(nosmbins-1)]*matspec[KK,(k-(nosmbins-1)/2):(k+(nosmbins-1)/2),2,1])
ematspec[KK,k,0,2]=TOTAL(aa[0:(nosmbins-1)]*matspec[KK,(k-(nosmbins-1)/2):(k+(nosmbins-1)/2),0,2])
ematspec[KK,k,1,2]=TOTAL(aa[0:(nosmbins-1)]*matspec[KK,(k-(nosmbins-1)/2):(k+(nosmbins-1)/2),1,2])
ematspec[KK,k,2,2]=TOTAL(aa[0:(nosmbins-1)]*matspec[KK,(k-(nosmbins-1)/2):(k+(nosmbins-1)/2),2,2])

where only the first nosmbins elements are used. This yields a faulty window whenever nosmbins (i.e. bin_freq) is not equal to 7: for example, if bin_freq=3 is passed, the window becomes [0.024, 0.093, 0.232], a truncated and unnormalized fragment of the 7-point profile rather than a 3-point Hanning window.
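One possible fix, sketched in Python/numpy rather than IDL: generate a window of the requested length and normalize it to unit sum, instead of truncating the fixed 7-point profile (whether wavpol intends exactly this profile shape is an assumption to verify against the original code):

```python
import numpy as np

def smoothing_window(nosmbins):
    """Hanning-style smoothing profile of length nosmbins, normalized to sum 1.

    np.hanning(n + 2)[1:-1] drops the zero endpoints so every bin contributes,
    mirroring the nonzero-ended 7-point profile hard-coded in wavpol.
    """
    w = np.hanning(nosmbins + 2)[1:-1]
    return w / w.sum()
```

By contrast, the truncated profile [0.024, 0.093, 0.232] sums to 0.349 rather than 1, so smoothing with it systematically underestimates the spectral matrix elements.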

Loading CSV files into tplot

Is there a way to implement something sensible for generic CSV files, or perhaps the specific case of CSV files returned from a HAPI server?
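As a starting point, a minimal stdlib sketch of the generic case: read a CSV whose first column is an ISO-8601 timestamp and build the {'x': times, 'y': values} structure that store_data expects (the column layout is an assumption; CSV returned from a HAPI server would also need the server's /info metadata for labels and units):

```python
import csv
import io
from datetime import datetime

def csv_to_tplot(fileobj):
    """Parse time,value,... CSV rows into a tplot-style dict.

    Returns unix times under 'x' and one list of floats per row under 'y'.
    """
    times, values = [], []
    for row in csv.reader(fileobj):
        t = datetime.fromisoformat(row[0].replace('Z', '+00:00'))
        times.append(t.timestamp())
        values.append([float(v) for v in row[1:]])
    return {'x': times, 'y': values}

# Hypothetical two-sample input, as a HAPI CSV response might look:
sample = io.StringIO('2023-01-01T00:00:00Z,1.5,2.5\n2023-01-01T00:01:00Z,3.0,4.0\n')
data = csv_to_tplot(sample)
```

A store_data wrapper around this (plus headers, fill values, and non-time-series columns) would cover the common cases.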

ROI-dependent calibration for ESA

From Vassilis:

A researcher noticed that our L2 ESA data are not providing accurate moments. The reason is that the background subtraction needs to be changed to accommodate the increased radiation environment. We can easily do that by modifying the appropriate keywords for the number of anodes and energies to average over and the scaling factor to apply. Increasing the number of bins to 5 and the scale factor to 5 should remove enough background to make the moments far more accurate.

Is this something that could be automated to happen inside a certain radius, say 6 Re, and then populate the L2 data? Can we try a few days (active days in particular) to see how it works, then a week, then a month; if it works well, we can apply it to the full mission. What do you think?

POLAR plugin in SPEDAS

We have a POLAR plugin for pyspedas, but it's not yet available in IDL SPEDAS. According to Vassilis, there may be additional data sets available at Berkeley (may or may not be in CDF format) that we would like to support in both IDL SPEDAS and pyspedas.

SSL host/peer verification problems downloading via CDAWeb

IDL 8.5.1 is crashing when attempting to load data via CDAWeb, with the following error:

SSL certificate problem: unable to get local issuer certificate, Curl Error Code = 60

We've seen this with other data providers; the HTTPS certificate for CDAWeb was recently updated, which might be why we're just now noticing the issue.

The fix is to disable SSL peer verification when constructing the IDLNetURL object, as we've done for other data providers.

flatten_spectra / flatten_spectra_multi improvements

A user requested some improvements to flatten_spectra and flatten_spectra_multi:

  1. Update flatten_spectra to support different line styles and symbols for each panel (currently, only custom colors seem to be supported)
  2. Improve flatten_spectra_multi support for multiple panels (right now, it mostly handles multiple time stamps in a single panel)

deriv_data /replace keyword doesn't work

If the /replace keyword is passed to deriv_data, it attempts to set a null suffix so that the original input variable is overwritten. But the following lines of code undo this:

if keyword_set(replace) then begin
   suffix = ''
endif
 
if ~keyword_set(suffix) then begin
  suffix = '_ddt'
endif

Annoyingly, setting a keyword to an empty string (much like setting it to 0) is not sufficient for keyword_set to return true. For anything other than boolean-ish keywords, n_elements(keyword) GT 0 is a more robust test.

Other similar routines with a /replace keyword should be checked to see if they're susceptible to the same logic error.
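The same trap exists in Python, where an empty string is falsy; a sketch of the buggy pattern and the robust test (function and variable names are illustrative, not from deriv_data itself):

```python
def add_suffix(name, suffix=None, replace=False):
    """Mimic deriv_data's suffix handling, with the fix applied."""
    if replace:
        suffix = ''          # caller wants the input variable overwritten
    # Buggy check, equivalent to IDL's keyword_set(suffix): '' is falsy,
    # so it would clobber the '' set above with the default:
    #   if not suffix: suffix = '_ddt'
    # Robust check, equivalent to n_elements(suffix) gt 0:
    if suffix is None:
        suffix = '_ddt'
    return name + suffix
```

Testing against None (rather than truthiness) preserves the deliberate empty string, so /replace actually overwrites the input variable.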

Issue with SPEDAS GUI on Ubuntu

I received a bug report regarding the SPEDAS GUI on Ubuntu 22.04 (Jammy):

  1. data downloads aren't working
  2. drop-down menus aren't working well

I'm not entirely sure, but I think they're using the VM.

GUI calendars are broken

To reproduce, go to Data -> Load Data from Plug-ins... -> Choose Date/time from calendar

This was on IDL 8.8.3, macOS 13.2.1.

[Screenshot attached: 2023-02-22, 1:17 PM]

ivmon_idpu missing from L1 housekeeping CDFs

The lzp_process_dir script fails to copy the variable thx_ivmon_idpu from the temporary data CDF to the final L1 CDF. The script should be checked for other missing variables; after testing, the updates need to be merged into production and the L1 HSK CDFs for the entire mission reprocessed.

Update: ivmon_n8va was also missing, same reason. Everything else accounted for, will push to production and reprocess.

Differing results when computing a gradient using 4-point measurements by SPEDAS and irfu-matlab packages

Copy+pasted from an email:

When computing the ion thermal pressure gradient (force) from MMS FPI data using a 4-spacecraft method, my colleague and I get differing results from the IDL SPEDAS and Matlab irfu-matlab (developed by the Institute of Space Physics, Uppsala, Sweden) packages. It appears that the X component from Matlab is the Z component from IDL, and vice versa.

I am fairly sure that irfu-matlab has it correct (I have looked at some of the code, but not thoroughly), but this should be checked more carefully.

Would it be possible for you to reproduce the ion thermal pressure gradient force components using SPEDAS?

I attach here plots from a few time intervals that could be plotted with SPEDAS. Maybe you could find out if there is something wrong with SPEDAS.

2019-09-05_2114_2119_irfumatlab
2018-08-19_1820_1826_IDLSPEDAS
2017-06-22_0525_0538_IDLSPEDAS
2021-07-07_0829_0836_IDLSPEDAS
2018-08-19_1820_1826_irfumatlab
2017-06-22_0525_0538_irfumatlab
2021-07-07_0829_0836_irfumatlab
2019-09-05_2114_2119_IDLSPEDAS

Implement masking functionality in THEMIS particle routines

We recently added solar wind masking functionality to the MMS particle routines, with some code from Terry Liu. He also sent me the code for THEMIS, so we should clean up that code, add documentation, and add this functionality to the THEMIS particle routines as well.

Issues with our Poynting flux crib sheets (THEMIS and MMS)

The crib sheet at:

projects/mms/examples/advanced/mms_poynting_flux_crib.pro

seems to have an off-by-one error, and a binning error somewhere in the calculation of the field-aligned Poynting flux in the frequency domain. I put a stop at the end of the script and found:

MMS> get_data, 'S_fac_x', data=d
MMS> help, d
** Structure <122f2448>, 3 tags, length=656368, data length=656368, refs=1:
X DOUBLE Array[318]
Y DOUBLE Array[319, 256]
V FLOAT Array[128]

The number of frequency bins in V (128) doesn't match the second dimension of the data array Y (256), and Y has one more time step (319) than the time array X (318)...
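A consistency check along these lines (sketched in Python/numpy for brevity; the structure tags mirror the IDL help output above) makes both mismatches explicit:

```python
import numpy as np

def check_spectra_dims(d):
    """Return a list of dimension mismatches in a tplot-style spectra dict."""
    problems = []
    if d['y'].shape[0] != len(d['x']):
        problems.append('y has %d rows but x has %d times'
                        % (d['y'].shape[0], len(d['x'])))
    if d['y'].shape[1] != len(d['v']):
        problems.append('y has %d columns but v has %d frequencies'
                        % (d['y'].shape[1], len(d['v'])))
    return problems

# The S_fac_x structure from the crib sheet: X[318], Y[319, 256], V[128]
d = {'x': np.zeros(318), 'y': np.zeros((319, 256)), 'v': np.zeros(128)}
```

Running such a check at the end of the crib sheet would catch regressions in the binning without eyeballing the help output.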

gse2sse incorrectly transforms velocities

If the input variable is a velocity, gse2sse attempts to correct for the relative motion of the Earth and Moon, but it appears to use the lunar position rather than the lunar velocity in the correction.
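The intended correction, sketched with numpy (function and variable names are hypothetical): position vectors get the lunar position subtracted, while velocity vectors must get the lunar velocity subtracted before any rotation into SSE:

```python
import numpy as np

def gse_to_sse_translate(data_gse, moon_pos_gse, moon_vel_gse, is_velocity):
    """Apply the translational part of the GSE-to-SSE transform.

    The reported bug is subtracting moon_pos_gse even when the input is a
    velocity; velocities need the Moon's velocity removed instead.
    """
    offset = moon_vel_gse if is_velocity else moon_pos_gse
    return data_gse - offset
```

Branching on the variable type (e.g. via its data_att metadata) keeps one code path from silently mixing kilometers with kilometers per second.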

STEREO crib st_crib.pro with unsupported keyword

It's using an unsupported keyword to the st_mag_load routine:

% Keyword TYPE not allowed in call to: ST_MAG_LOAD
% Execution halted at: $MAIN$ 145 C:\Users\james\Desktop\spdsoft_trunk\general\missions\stereo\st_crib.pro

Most of the files in this directory haven't been touched in a long time, and it's not clear whether anyone is still using them, so this is low priority unless someone asks for it.

Problems with proxy support for downloading via CDAWeb, spdfcdas library version 1.7.41 (latest available)

Christine Gabrielse reports being unable to download via CDAWeb using her proxy configuration. There are at least two issues:

There's a problem in the cdas->isUpToDate() method:

url = obj_new('IDLnetURL', $
              proxy_authentication = $
                  self.proxySettings.getAuthentication(), $
              proxy_hostname = self.proxySettings.getHostname(), $
              proxy_port = self.proxySettings.getPort(), $
              proxy_username = self.proxy.getUsername(), $
              proxy_password = self.proxy.getPassword())

There is no self.proxy member; those last two lines should probably reference self.proxySettings. This error gets caught in the catch block, and the program proceeds as if no update is needed.

Then we eventually get here:

dataviews = cdas->getDataviews()

I single-stepped through this routine (without a proxy) and didn't see any obvious issues, but for Christine (behind a proxy), cdas->getDataviews() returned a null object, so she got a warning dialog telling her to check the connection to CDAWeb.

Escalated to Bernie Harris and Bobby Candey at SPDF.

Bug report from MMS webinar crib sheet

A user reported a crash in mms_plasma_webinar_10mar21.pro

spd_mms_load_bss, trange=trange, datatype='burst', /include_labels
MMS> stop
% RESTORE: Error opening file. Unit: 100, File: mms_auth_info.sav
  No such file or directory
% Execution halted at: MMS_UPDATE_BRST_INTERVALS   32 /home/kcbarik/IDLWorkspace/Spedas/projects/mms/common/data_status_bar/mms_update_brst_intervals.pro
%                      MMS_LOAD_BRST_SEGMENTS   54 /home/kcbarik/IDLWorkspace/Spedas/projects/mms/common/data_status_bar/mms_load_brst_segments.pro
%                      SPD_MMS_LOAD_BSS   91 /home/kcbarik/IDLWorkspace/Spedas/projects/mms/common/data_status_bar/spd_mms_load_bss.pro
%                      $MAIN$          
% Stop encountered: MMS_UPDATE_BRST_INTERVALS   32 /home/kcbarik/IDLWorkspace/Spedas/projects/mms/common/data_status_bar/mms_update_brst_intervals.pro

We don't usually maintain these webinar crib sheets, but this looks like it might be a regression in spd_mms_load_bss.

Implementing masking functionality in MMS particle routines

A user is trying to apply custom masks to the DF data to remove the solar wind component from the data. Currently, the only way to do this is to manually modify the routines that return the DF data to turn those bins off. I think we should investigate adding an option to mms_part_getspec and mms_part_slice2d to allow users to apply custom masks like this without kludging the underlying routines.
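A sketch of what such an option could do internally, in numpy (the mask convention here is an assumption, not the existing mms_part_getspec API): NaN out the flagged energy/angle bins so downstream spectra and moments ignore them, instead of editing the routines that return the DF data:

```python
import numpy as np

def apply_bin_mask(df_data, mask):
    """Return a copy of distribution-function data with masked bins set to NaN.

    df_data : array of DF values, e.g. shaped (energy, azimuth, elevation)
    mask    : boolean array of the same shape; True marks bins to remove
              (e.g. the solar wind beam)
    """
    out = np.array(df_data, dtype=float, copy=True)
    out[mask] = np.nan
    return out
```

A user-supplied mask keyword of this shape would let people like Terry Liu apply custom solar wind masks without kludging the underlying routines.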

Calibrate out FGM-Z (DSL) offset in THEMIS-E

The FGM instrument on THEMIS-E has developed an offset in the DSL-Z component (about 114 nT, stable for now). The leading theory is that the feedback circuitry for that axis has failed; running the instrument in open-loop mode may restore correct functioning.

Short term: We need to update the L1->L2 calibration routines/data for THEMIS-E to remove this offset from the FGM waveforms and spin fits, reprocess data back to when the anomaly started, and watch for changes in the offset.

Longer term: This might mean breaking the FGM instrument mode out in the L1 products, in case open-loop operation changes the offset.
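As an illustration of the short-term fix (the 114 nT figure comes from the report above; the sign convention and a constant, time-independent offset are assumptions the calibration work would need to confirm):

```python
import numpy as np

def remove_dslz_offset(b_dsl, offset_nT=114.0):
    """Subtract a constant offset from the DSL-Z component of FGM data.

    b_dsl : (n, 3) array of B-field vectors in DSL coordinates.
    Returns a corrected copy; the input array is left untouched.
    """
    out = np.array(b_dsl, dtype=float, copy=True)
    out[:, 2] -= offset_nT
    return out
```

In practice the offset would live in the calibration files (possibly time-dependent, and mode-dependent if open-loop operation changes it) rather than being hard-coded.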

Print directly to PDF?

From Marcos Silveira, via help request form

He has unspecified issues working with .ps or .eps files on a Mac, wants to know if SPEDAS can output PDFs directly.
