spedas / bleeding_edge
IDL-based Space Physics Environment Data Analysis Software (bleeding edge)
Home Page: http://www.spedas.org
License: Other
Some summary plots also need reprocessing.
A user asked how to remove the ExB velocity prior to calculating 2D slices, and I think the easiest way would be to add support for custom velocity variables to mms_part_slice2d.
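The drift velocity itself is straightforward to compute before building a custom velocity variable. A minimal numpy sketch (the function name and the unit conventions, E in mV/m and B in nT yielding km/s, are assumptions for illustration, not existing SPEDAS code):

```python
import numpy as np

def exb_drift_velocity(e_mvm, b_nt):
    """Compute the E x B drift velocity in km/s.

    e_mvm: (..., 3) electric field in mV/m
    b_nt:  (..., 3) magnetic field in nT
    With these units, (E x B)/|B|^2 comes out in units of 1000 km/s,
    so we multiply by 1000 to return km/s.
    """
    e = np.asarray(e_mvm, dtype=float)
    b = np.asarray(b_nt, dtype=float)
    b2 = np.sum(b * b, axis=-1, keepdims=True)
    return 1000.0 * np.cross(e, b) / b2
```

A custom velocity variable could then be built by subtracting this drift from the bulk velocity before passing it to the slice routine.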
You can reproduce the issue by running the crib sheet at:
general/examples/crib_tplot_annotation.pro
The error I'm seeing is:
% Compiled module: ST_POSITION_LOAD.
HTTP/1.1 301 Moved Permanently
Content-Type: text/html; charset=iso-8859-1
Date: Wed, 22 Feb 2023 19:42:19 GMT
Location: https://stereo.srl.caltech.edu2/Position/ahead/position_ahead_2008_GSE.txt
Server: Apache/2.4.54 () OpenSSL/1.0.2k-fips
Content-Length: 282
Connection: Close
Permanent redirect to: https://stereo.srl.caltech.edu2/Position/ahead/position_ahead_2008_GSE.txt
Request Aborted
% Compiled module: READ_ASC.
% OPENR: Error opening file. Unit: 100, File: /Users/eric/data/misc/stereo/Position/ahead/position_ahead_2008_GSE.txt
No such file or directory
% Execution halted at: READ_ASC 88 /Users/eric/trunk/general/tools/misc/read_asc.pro
% ST_POSITION_LOAD 70 /Users/eric/trunk/general/missions/stereo/st_position_load.pro
%$MAIN$ 55 /Users/eric/trunk/general/examples/crib_tplot_annotation.pro
The script that commits the latest SVN changes to this repo, and the script that builds and releases the SPEDAS changelog, currently run on my Raspberry Pi - we should move this to one of the UCLA or Berkeley servers.
It is applying the earth-moon offsets even if /rotation_only is specified.
The UNITS and COORDINATE_SYSTEM variable attributes should be set in the CDF if the corresponding entries are present in the data_att dlimits structure for the tplot variable. This would be very useful for unit testing, especially when validating Python results against IDL results.
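A test-oriented sketch of the mapping, assuming the usual data_att.units and data_att.coord_sys fields (the helper function itself is hypothetical, written here in Python so it can double as a validation aid):

```python
def cdf_var_attrs_from_data_att(data_att):
    """Map tplot data_att entries to CDF variable attributes.

    Emits UNITS / COORDINATE_SYSTEM only when the corresponding
    data_att entries are present, as the issue requests.
    """
    mapping = {'units': 'UNITS', 'coord_sys': 'COORDINATE_SYSTEM'}
    attrs = {}
    for src, dest in mapping.items():
        value = data_att.get(src)
        if value is not None:
            attrs[dest] = value
    return attrs
```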
This STEREO dataset is now only available via HTTPS. The load routine has been updated to use the correct protocol, and to use spd_download instead of file_retrieve, but it still doesn't work because the URLs have changed.
Old style:
https://stereo-ssc.nascom.nasa.gov/data/ins_data/swaves/2008/swaves_average_20080101_a.sav
It remains to be seen whether the V04 .sav files are compatible with the originals.
This is probably a low priority unless someone requests it.
Usually the preliminary ephemeris files are transferred out of the MOC overnight, and the bfds_archive cron job picks them up at 10:30 am. When today's files are made available, they will have to be manually processed to produce today's V00 STATE CDFs.
Hi,
I am having some issues with using the append keyword in tplot_restore.
Tplot_restore is able to restore one day of tplot save-file data, but when I try to append the second day's save file, an error message appears at line 142:
% Attempt to subscript S1 with <LONG ( 1)> is out of range.
% Execution halted at: TPLOT_RESTORE 142 C:\Users\gpoh\IDLWorkspace\Tplot\general\tplot\tplot_restore.pro
% ITPLOT_TEST 45 C:\Users\gpoh\IDLWorkspace\Default\itplot_test.pro
%
Please advise. Thank you.
CDAWeb is showing multiple options for THEMIS datasets, and it's not clear which are tests vs. production-ready. We found at least one example where the HAPI data has gaps/NaNs while the CDAWeb data is complete.
Our server cronus is nearing end-of-support, and its replacement has arrived. The new server is accessible as sundial.ssl.berkeley.edu.
It's running RHEL version 8.7, so a few things might be different from our other Linux servers (which are set up with CentOS).
I've already asked for a few tweaks to the configuration:
Once these changes are made, we can start testing cron jobs and looking for other potential issues. Some command line tools might have changed options, syntax, or default behavior. We may need to add sundial to the list of permitted hosts on our svn, mysql, or other servers, services, or firewall configurations.
The goal is for the new cronus to be able to run any cron job or workflow from the other Linux servers, not just the ones currently on cronus. It might be good, where practical, to try running the jobs interactively (switching to /bin/sh, setting the THMSOC variable, then calling the script on the command line) just in case any issues show up in the script outputs that might not necessarily make it into the log files.
Nick: PHP scripts, file inventories, GOES/POES summary plots, etc
Jim M: L1->L2 processing, THEMIS summary plots
Cindy: GMAG processing
Jim L: Telemetry, L0, L1 processing tools; logging to mysql and email notifications, SVN client compatibility
I have confirmed that SVN working copies created on our Centos machines need to be upgraded to be compatible with the svn client on sundial. But then they won't be usable on the other machines until we upgrade their svn clients.
These attributes are added in the load routine. They should also be in the master CDF, and perhaps the data files too.
gsm2lmn might have some relevant info? Also, I think this model might be included with GEOPACK.
Cindy, I think someone else at UCLA was working on this, but I can't find anything about it in my archived emails. Do you recall who it was?
This can occur with some Cluster variables, for example. The updated version should be forked from ssl_check_valid_names (for backward compatibility), and should ensure that the returned values match the exact case of the values from the valid_names input.
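The case-preserving matching logic can be sketched as follows, written in Python for illustration (the function name and signature are assumptions, not the existing ssl_check_valid_names interface):

```python
def check_valid_names(requested, valid_names):
    """Match requested names against valid_names case-insensitively,
    returning matches in the exact case used in valid_names."""
    # Build a lowercase lookup that remembers the canonical casing.
    lookup = {name.lower(): name for name in valid_names}
    return [lookup[r.lower()] for r in requested if r.lower() in lookup]
```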
From [email protected]
He sent a GUI history file showing the following:
(2023-03-07/13:42:26) % Dynamically loadable module failed to load: CDF.
(2023-03-07/13:42:26) % Execution halted at: CL_CSA_INIT 110 D:\Islam\SPEDAS\Spedas 5 with IDL\spedas_5_0\projects\cluster\cluster_science_archive\cl_csa_init.pro
I've replied with some suggestions to check where IDL is trying to load the DLM from, or if his IDL installation is corrupted. He is using IDL 8.2, which might cause other problems down the line. Since he's using the GUI, I suggested the VM executables as a possible option.
Relayed from Mitsuo Oka via Christine G:
Karlheinz reported that some of the "SITL-level" FEEPS data causes an error when plotting with IDL/SPEDAS. I was able to reproduce this error without using EVA, as shown in the attached script. The script has three different choices of 'trange': the first one works okay, the second leads to a crash, and the third produces no variable to plot.
I have an impression that the latest "SITL-level" FEEPS data is usually okay and that this issue has not disrupted the SITL activity. But we would appreciate it if you could take a look.
PRO test_feeps
  compile_opt idl2
  ; trange = ['2022-10-26/00:00','2022-10-27/00:00'] ; Works okay.
  trange = ['2022-10-26/00:00','2022-10-27/16:00'] ; Crashes with an error.
  ; trange = ['2023-03-05/00:00','2023-03-07/00:00'] ; No error, but also no valid variable to plot.
  mms_load_data, trange = trange, probes = '3', level = 'sitl', instrument = 'feeps', $
    data_rate = 'srvy', datatype = 'electron'
  tplot, '*_epd_feeps_srvy_sitl_electron_intensity_omni'
END
There are some persistent issues with trying to download MAVEN data (and probably other missions) from the PDS/HAPI server at UCLA. UCLA POC is In Sook Moon, [email protected]. Last update from him was November 2022, maybe time to ping him again?
Latest from Nick, 2023-02-23:
Using IDL SPEDAS today, I can see that it downloads some MAVEN CSV files from UCLA, but there are so many data sets on this UCLA HAPI server, and most of them return nothing. I don't know if this is normal behavior or not.
There are certainly some problems remaining, but I am not sure how important they are. For example, the following link that I reported previously, should not give a 404 error:
https://pds-ppi.igpp.ucla.edu/hapi/data?id=urn:nasa:pds:cassini-caps-calibrated:data-els&time.min=2010-03-23T00:00:00.000Z&time.max=2010-03-23T02:00:00.000Z
And the following request for a non-existent data set should not give a blank page:
https://pds-ppi.igpp.ucla.edu/hapi/data?id=urn:nasa:pds:maven.static.c:data.dddd4_4d16a2m&time.min=2017-03-23T00:00:00Z&time.max=2017-03-23T02:00:00Z
In general, the HAPI server should never send any 404 or 500 responses, and never any empty responses. The HAPI server should always return a JSON response, with an error message when needed.
Here is the correct behavior using NASA's HAPI server for a nonexistent data set, dddGOES:
https://cdaweb.gsfc.nasa.gov/hapi/data?id=dddGOES15_EPS-MAGED_5MIN&time.min=2017-03-23T00:00:00Z&time.max=2017-03-24T00:00:00Z&format=json
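That expected behavior can be captured in a small response check; a minimal Python sketch (the validator function is hypothetical, and assumes JSON-format responses like the NASA example above):

```python
import json

def is_valid_hapi_response(status_code, content_type, body):
    """Return True if a HAPI data response follows the convention
    described above: never a bare 404/500 or an empty page, always
    JSON carrying a HAPI 'status' object (with an error message when
    the request fails)."""
    if status_code in (404, 500) or not body.strip():
        return False
    if 'application/json' not in content_type:
        return False
    try:
        payload = json.loads(body)
    except ValueError:
        return False
    return 'status' in payload
```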
The function "pyspedas.omni.data" works perfectly when downloading high-resolution OMNI data, but it cannot fetch hourly OMNI data correctly.
For example, I'm trying to download data between 2000/03/01 03:00 and 2000/03/02 03:00 using:
omni_vars = pyspedas.omni.data(trange=[start_time, end_time], datatype='hourly')
This results in an error message like:
06-May-23 15:37:15: Downloading remote index: https://spdf.gsfc.nasa.gov/pub/data/omni/omni_cdaweb/hourly/2000/
06-May-23 15:37:16: No links matching pattern omni2_h0_mrg1hr_20000301_v??.cdf found at remote index https://spdf.gsfc.nasa.gov/pub/data/omni/omni_cdaweb/hourly/2000/
After checking the website storage, I found that the original hourly data are stored as two separate CDF files per year, not one file per month. For the year 2000, the two files are omni2_h0_mrg1hr_20000101_v01.cdf and omni2_h0_mrg1hr_20000701_v01.cdf, which leads to the file-pattern error.
I am on the stable version 1.4.32; could this be a real issue, or is it caused by errors in my inputs?
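A fix for the pattern logic would snap the requested date to the covering half-year file start. A minimal sketch (a hypothetical helper, not actual pyspedas code; the version wildcard matches the pattern in the log above):

```python
from datetime import datetime

def omni_hourly_file_pattern(date):
    """Return the remote filename pattern for hourly OMNI data.

    Hourly files cover six months each, starting Jan 1 and Jul 1,
    e.g. omni2_h0_mrg1hr_20000101_v01.cdf and
    omni2_h0_mrg1hr_20000701_v01.cdf for the year 2000.
    """
    half_start = 1 if date.month < 7 else 7
    return 'omni2_h0_mrg1hr_%04d%02d01_v??.cdf' % (date.year, half_start)
```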
This would allow outside developers to view the SVN history without needing commit privileges. SSL has a shared SVN service that might be suitable; the only outstanding issue is how to manage credentials so we can control who can modify what parts of the repository.
In file general/science/wavpol/wavpol.pro, the smoothing window is defined as a fixed array:
bleeding_edge/general/science/wavpol/wavpol.pro
Lines 307 to 315 in cf5ae00
but only nosmbins elements of it are used. This produces a faulty window function when nosmbins (or bin_freq) is not equal to 7; for example, the window function would be [0.024, 0.093, 0.232] if bin_freq=3 is passed.
Is there a way to implement something sensible for generic CSV files, or perhaps the specific case of CSV files returned from a HAPI server?
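For the wavpol issue above, one generic replacement for the fixed seven-element array would be to compute a normalized window of whatever length nosmbins requests. A numpy sketch (assuming a Hanning-style taper, which may not match the original hard-coded coefficients exactly):

```python
import numpy as np

def smoothing_window(nosmbins):
    """Build a normalized smoothing window of length nosmbins.

    Uses a Hanning taper with nonzero endpoints (np.hanning zeroes
    its endpoints, so we generate nosmbins + 2 points and trim them)
    and normalizes so the weights sum to 1.
    """
    w = np.hanning(nosmbins + 2)[1:-1]
    return w / w.sum()
```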
I see that there may already be some support in general/missions/fast, but not sure what might be missing.
If the sizes are mismatched, there should at least be a warning.
This can be a wrapper around the cart_to_sph matrix tool. How will input be specified - r, theta, phi in same tplot variable, or separately?
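Whichever input layout is chosen (single tplot variable or separate r/theta/phi variables), the core conversion is the same. A numpy sketch, assuming theta is latitude in degrees (the angle convention is an assumption; a colatitude convention would swap the sin/cos of theta):

```python
import numpy as np

def sph_to_cart(r, theta_deg, phi_deg):
    """Convert spherical coordinates (r, latitude theta, longitude phi,
    angles in degrees) to Cartesian x, y, z."""
    th = np.radians(theta_deg)
    ph = np.radians(phi_deg)
    x = r * np.cos(th) * np.cos(ph)
    y = r * np.cos(th) * np.sin(ph)
    z = r * np.sin(th)
    return x, y, z
```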
From Vassilis:
A researcher noticed that our L2 ESA data are not providing accurate moments. The reason is that the background subtraction needs to be changed to accommodate the increased radiation environment. We can easily do that by modifying the appropriate keywords on the number of anodes and energies to average over and the scaling factor to apply. Increasing the number of bins to 5 and scale factor to 5 should remove enough background to get the moments far more accurate.
Is this something that could be automated to happen inside a certain radius, say 6 Re, and then populate the L2 data? Can we try a few days (active days in particular) to see how it works, then a week, then a month? If it works well, then apply it to the full mission. What do you think?
From @clrussell90404.
We have a POLAR plugin for pyspedas, but it's not yet available in IDL SPEDAS. According to Vassilis, there may be additional data sets available at Berkeley (may or may not be in CDF format) that we would like to support in both IDL SPEDAS and pyspedas.
The hapi directory gets put in the user's home directory; $HOME/data would probably be better. This could be a SPEDAS configuration setting, like we do for the CDAWeb panel.
IDL 8.5.1 is crashing when attempting to load data via CDAWeb, with the following error:
SSL certificate problem: unable to get local issuer certificate, Curl Error Code = 60
We've seen this with other data providers; HTTPS certificate for CDAWeb was recently updated and that might be why we're just now noticing the issue.
The fix is to disable SSL peer verification when constructing the IDLNetURL object, as we've done for other data providers.
A user requested some improvements to flatten_spectra and flatten_spectra_multi:
If the /replace keyword is passed to deriv_data, it attempts to set a null suffix so that the original input variable is overwritten. But the following lines of code undo this:
if keyword_set(replace) then begin
  suffix = ''
endif
if ~keyword_set(suffix) then begin
  suffix = '_ddt'
endif
Annoyingly, setting a keyword to an empty string is not sufficient to get keyword_set to return true (much like setting a keyword variable to 0). Testing for n_elements(keyword) GT 0 is a more robust test for anything other than boolean-ish keywords.
Other similar routines with a /replace keyword should be checked to see if they're susceptible to the same logic error.
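The same pitfall exists in Python when a check relies on truthiness; the robust pattern is a presence test, analogous to n_elements(...) GT 0 in IDL. An illustrative analogy (not SPEDAS code):

```python
def deriv_suffix(replace=False, suffix=None):
    """Decide the output suffix the way deriv_data should.

    An explicitly passed empty string must survive, so we test
    whether the argument was supplied (suffix is None) rather than
    its truthiness (which treats '' like 'not supplied').
    """
    if replace:
        suffix = ''
    if suffix is None:  # presence test, not `if not suffix:`
        suffix = '_ddt'
    return suffix
```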
I received a bug report regarding the SPEDAS GUI on Ubuntu 22.04 (Jammy):
Not entirely sure, but I think they're using the VM.
From Orlando Romeo ([email protected]):
I was trying to use the tplot_fill_time_intv routine, but it doesn't appear to work correctly. As shown below, when I use tlimit after plotting a shaded region, the shaded region extends beyond or before the time range set by tlimit.
The lzp_process_dir script fails to copy the variable thx_ivmon_idpu from the temporary data CDF to the final L1 CDF. It should be checked for other missing variables. After testing, the updates need to be merged into production, and L1 HSK CDFs for the entire mission reprocessed.
Update: ivmon_n8va was also missing, same reason. Everything else accounted for, will push to production and reprocess.
Copy+pasted from an email:
When computing the ion thermal pressure gradient (force) using a 4-spacecraft method from MMS FPI data, my colleague and I get differing results with the IDL SPEDAS and Matlab irfu-matlab (developed by the Institute of Space Physics, Uppsala, Sweden) software packages. It appears as if the X component from Matlab is the Z component from IDL, and vice versa.
I am quite sure that irfu-matlab has it correct (I have looked at some of the code, but not thoroughly), but this should be checked further.
Would it be possible for you to reproduce the ion thermal pressure gradient force components using SPEDAS?
I attach here plots from a few time intervals that could be plotted with SPEDAS. Maybe you could find out if there is something wrong with SPEDAS.
We recently added solar wind masking functionality to the MMS particle routines, with some code from Terry Liu. He also sent me the code for THEMIS, so we should clean up that code, add documentation, and add this functionality to the THEMIS particle routines as well.
The crib sheet at:
projects/mms/examples/advanced/mms_poynting_flux_crib.pro
seems to have an off-by-one error and a binning error somewhere in calculating the field-aligned Poynting flux in the frequency domain. I put a stop at the end of the script and found:
MMS> get_data, 'S_fac_x', data=d
MMS> help, d
** Structure <122f2448>, 3 tags, length=656368, data length=656368, refs=1:
X DOUBLE Array[318]
Y DOUBLE Array[319, 256]
V FLOAT Array[128]
The number of frequencies (128 in V) doesn't match the 256 data values per time step in Y, and Y seems to have one more time sample (319) than the time array X (318)...
If the input variable is a velocity, gse2sse attempts to correct for the relative motion between the earth and moon. But it looks like it's using the position rather than the velocity in the correction.
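The distinction can be sketched as follows (illustrative only; the rotation to SSE axes and any SPEDAS specifics are omitted): positions should be offset by the moon's GSE position, velocities by the moon's GSE velocity.

```python
import numpy as np

def gse_to_sse_offset(data_gse, moon_pos_gse, moon_vel_gse, is_velocity):
    """Apply the earth-moon offset step of a GSE->SSE transform.

    Positions are offset by the moon's GSE *position*; velocities
    must be offset by the moon's GSE *velocity* (the reported bug is
    using the position in both cases).
    """
    offset = moon_vel_gse if is_velocity else moon_pos_gse
    return np.asarray(data_gse, float) - np.asarray(offset, float)
```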
It's passing an unsupported keyword to the st_mag_load routine:
% Keyword TYPE not allowed in call to: ST_MAG_LOAD
% Execution halted at:
Most of the files in this directory haven't been touched in a long time, and it's not clear whether anyone is still using them, so this is low priority unless someone asks for it.
Christine Gabrielse reports being unable to download via CDAWeb using her proxy configuration. There are at least two issues:
There's a problem in the cdas->isUpToDate() method:
url = obj_new('IDLnetURL', $
proxy_authentication = $
self.proxySettings.getAuthentication(), $
proxy_hostname = self.proxySettings.getHostname(), $
proxy_port = self.proxySettings.getPort(), $
proxy_username = self.proxy.getUsername(), $
proxy_password = self.proxy.getPassword())
There is no self.proxy member -- those last two lines should probably be referencing self.proxySettings. This error gets caught in the catch block, and the program proceeds as if no update is needed.
Then we eventually get here:
dataviews = cdas->getDataviews()
I single-stepped through this routine (without a proxy) and didn't see any obvious issues, but for Christine (through a proxy), cdas->getDataviews() was returning a null object, so she was getting a warning dialog to check the connection to CDAWeb.
Escalated to Bernie Harris and Bobby Candey at SPDF.
The HAPI client needs to be updated to support the latest updates to the HAPI spec.
A user reported a crash in mms_plasma_webinar_10mar21.pro
spd_mms_load_bss, trange=trange, datatype='burst', /include_labels
MMS> stop
% RESTORE: Error opening file. Unit: 100, File: mms_auth_info.sav
No such file or directory
% Execution halted at: MMS_UPDATE_BRST_INTERVALS 32 /home/kcbarik/IDLWorkspace/Spedas/projects/mms/common/data_status_bar/mms_update_brst_intervals.pro
% MMS_LOAD_BRST_SEGMENTS 54 /home/kcbarik/IDLWorkspace/Spedas/projects/mms/common/data_status_bar/mms_load_brst_segments.pro
% SPD_MMS_LOAD_BSS 91 /home/kcbarik/IDLWorkspace/Spedas/projects/mms/common/data_status_bar/spd_mms_load_bss.pro
% $MAIN$
% Stop encountered: MMS_UPDATE_BRST_INTERVALS 32 /home/kcbarik/IDLWorkspace/Spedas/projects/mms/common/data_status_bar/mms_update_brst_intervals.pro
We don't usually maintain these webinar crib sheets, but this looks like there might be a regression in spd_mms_load_bss.
Spectral analysis tool contributed by Simone Di Matteo ([email protected])
Original source: https://zenodo.org/record/3703168
There are usage examples in the README.txt file that should be turned into a crib sheet.
A user is trying to apply custom masks to the DF data to remove the solar wind component from the data. Currently, the only way to do this is to manually modify the routines that return the DF data to turn those bins off. I think we should investigate adding an option to mms_part_getspec and mms_part_slice2d to allow users to apply custom masks like this without kludging the underlying routines.
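An option like that could accept a user-supplied boolean mask with the same shape as the distribution's bin array and blank the flagged bins before slicing. A minimal numpy sketch (the data layout here is a simplified stand-in, not the actual mms_part_* distribution structure):

```python
import numpy as np

def apply_bin_mask(dist_data, mask):
    """Return a copy of the distribution with masked bins set to NaN.

    dist_data: array of phase-space density values (any shape)
    mask: boolean array of the same shape; True marks bins to remove
    (e.g. the solar wind beam).
    """
    out = np.array(dist_data, dtype=float, copy=True)
    out[np.asarray(mask, bool)] = np.nan
    return out
```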
The FGM instrument on THEMIS-E has developed an offset in the DSL-Z component (about 114 nT, stable for now). The leading theory is that the feedback circuitry has failed for that axis. Running the instrument in open-loop mode may restore correct functioning.
Short term: We need to update the L1->L2 calibration routines/data for THEMIS-E to remove this offset from the FGM waveforms and spin fits, reprocess data back to when the anomaly started, and watch for changes in the offset.
Longer term: This might mean breaking the FGM instrument mode out in the L1 products, in case open-loop operation changes the offset.
From Marcos Silveira, via help request form
He has unspecified issues working with .ps or .eps files on a Mac, wants to know if SPEDAS can output PDFs directly.
Add ability to load, calibrate, and plot ASI data in the SPEDAS GUI. Some plotting tools are already implemented in the SECS plugin.