g-point-reduction's Issues

SW Development

  • implement band fluxes as a cost function component (SW and LW) -- this can ostensibly be done right now, but the results may not be trustworthy
  • combine band k-distributions after optimization into a single, full k-distribution
  • include more forcings (SW and LW)
  • weight g-point combinations with a scaling factor
  • deliver an end product for users that includes our cost function definition and normalization statistics, so that reduced RRTMGP can be compared apples-to-apples with other models that produce similar RRTMGP flux netCDFs for the Garand (or other) profiles
  • do something like the previous item for 1-angle vs. 3-angle RRTMGP calculations

Inconsistencies in Diagnostic Output After Perturbations

Now that Karen has this all working, a new issue. High-level explanation of the problem: at iteration 104, Karen's parabola (defined in https://github.com/pernak18/g-point-reduction/wiki/Modified-g-Point-Combining) flips from positive to negative, meaning the winner can be improved with the perturbations that Karen and Eli implement. But can we trust these flips? The diagnostic output for the perturbations is wrong at this iteration, so the costs on the parabola are likely wrong as well.

Iteration 32 is when Karen's provisions are first instantiated:

Iteration 32
band14_coefficients_LW_g09-10_iter032.nc, Trial: 186, Cost: 84.878604, Delta-Cost: 0.0000
	flux_net, band_flux_net, heating_rate, heating_rate_7, flux_net_forcing_5, flux_net_forcing_6, flux_net_forcing_7, flux_net_forcing_9, flux_net_forcing_10, flux_net_forcing_11, flux_net_forcing_12, flux_net_forcing_13, flux_net_forcing_14, flux_net_forcing_15, flux_net_forcing_16, flux_net_forcing_17, flux_net_forcing_18 = 73.6102, 92.5116, 96.7238, 93.4506, 99.4684, 99.9945, 156.5091, 100.0000, 80.9890, 101.5518, 102.5110, 103.9300, 100.0559, 98.6332, 98.7490, 100.8076, 99.9999
will change here
/global/u2/k/kcadyper/g-point-reduction/workdir_band_14
band14_coefficients_LW_g09-10_iter032.nc
  
plus
in gPointCombineSglPair
self.iCombine
32
flux_LW_g09-10_iter032_plus.nc
/global/u2/k/kcadyper/g-point-reduction/workdir_band_14/coefficients_LW_g09-10_iter032_plus.nc
in CombineBandsSgl
/global/u2/k/kcadyper/g-point-reduction/workdir_band_14/coefficients_LW_g09-10_iter032/flux_LW_g09-10_iter032_plus.nc
band13_coefficients_LW_g11-12_iter032.nc, Trial: 177, Cost: 84.878604, Delta-Cost: 0.0000
	flux_net, band_flux_net, heating_rate, heating_rate_7, flux_net_forcing_5, flux_net_forcing_6, flux_net_forcing_7, flux_net_forcing_9, flux_net_forcing_10, flux_net_forcing_11, flux_net_forcing_12, flux_net_forcing_13, flux_net_forcing_14, flux_net_forcing_15, flux_net_forcing_16, flux_net_forcing_17, flux_net_forcing_18 = 73.6102, 92.5116, 96.7238, 93.4506, 99.4685, 99.9945, 156.5091, 100.0000, 80.9890, 101.5518, 102.5110, 103.9300, 100.0559, 98.6332, 98.7490, 100.8076, 99.9999

Note that the band and g-points in the filename printed by the diagnostic code are different for the regular combination and the alternate combination, but they should be the same.

When she modifies the weights, band and g-points are correct:

mod
in gPointCombineSglPair
self.iCombine
32
flux_LW_g09-10_iter032_mod.nc
/global/u2/k/kcadyper/g-point-reduction/workdir_band_14/coefficients_LW_g09-10_iter032_mod.nc
in CombineBandsSgl
/global/u2/k/kcadyper/g-point-reduction/workdir_band_14/coefficients_LW_g09-10_iter032/flux_LW_g09-10_iter032_mod.nc
band14_coefficients_LW_g09-10_iter032.nc, Trial: 186, Cost: 84.878604, Delta-Cost: 0.0000
	flux_net, band_flux_net, heating_rate, heating_rate_7, flux_net_forcing_5, flux_net_forcing_6, flux_net_forcing_7, flux_net_forcing_9, flux_net_forcing_10, flux_net_forcing_11, flux_net_forcing_12, flux_net_forcing_13, flux_net_forcing_14, flux_net_forcing_15, flux_net_forcing_16, flux_net_forcing_17, flux_net_forcing_18 = 73.6102, 92.5116, 96.7238, 93.4506, 99.4684, 99.9945, 156.5091, 100.0000, 80.9890, 101.5518, 102.5110, 103.9300, 100.0559, 98.6332, 98.7490, 100.8076, 99.9999
delta cost
5.06755954532423e-07

Random Astronomical Cost Values When Parallelizing the Cost Function Computation

We randomly produce costs on the order of 1e7 or higher (I have seen values as high as 1e84) for random trials and for any (but not necessarily all) cost function components. This happens only in parallel, and parallelization is necessary, particularly for very large cost functions, because the computation is expensive. We get correct answers in serial (after a precision issue was solved), but serial is too slow even for this phase of the software development.

Doc for Karen

Some things that need to be documented, based on Karen's experience with the code:

  • running scripts in console instead of CLI
  • deleting the workdir files and starting from scratch
  • iter suffix of k-files should be updated after every iteration, even in bands that are not selected
  • band_k_dist and full_band_flux out of the box
  • dummy 0 in finalDS call to combineBands

Combined (multi-band) files need to be consistent with original k-distribution files

@pernak18 We're starting to deploy the reduced-resolution k-distribution files in user scenarios. One stumbling block is that the format of the files is not quite the same - variables don't have the same types (int vs. int64 for many variables; double -> int64 for temperature_Planck in the LW files). Three scalar variables are also missing from the SW file (tsi_default, mg_default, sb_default).

xarray allows you to specify encodings and we should ensure that the files are essentially identical in format.
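
A minimal sketch of this idea, assuming xarray and hypothetical file names: copy the missing scalars from the original file and reuse its on-disk dtypes as encodings when writing the reduced file.

```python
import xarray as xr

# Hypothetical file names; the real paths come from the reduction workflow.
origDS = xr.open_dataset('rrtmgp-data-sw-original.nc')
redDS = xr.open_dataset('rrtmgp-data-sw-reduced.nc')

# Copy scalar variables that exist only in the original
# (e.g., tsi_default, mg_default, sb_default in the SW file).
for var in ['tsi_default', 'mg_default', 'sb_default']:
    if var in origDS and var not in redDS:
        redDS[var] = origDS[var]

# Match each variable's on-disk dtype to the original (int vs. int64, etc.).
encoding = {name: {'dtype': origDS[name].dtype}
            for name in redDS.data_vars if name in origDS}

redDS.to_netcdf('rrtmgp-data-sw-reduced-consistent.nc', encoding=encoding)
```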

Memory Profile Iteration 1

In the past, I've noticed that this application consumes an absurd amount of memory (I believe I've seen up to 160 GB in a single iteration; most of the memory is cleared before the next iteration). For the most part, this has been OK on Cori at NERSC, but @kcadyper has experienced segmentation faults in her work on Cori, and the resource hogging is not acceptable for other systems. So we need to examine why the memory hogging is happening before releasing to the general public.

Record of Reduction

What g-points were combined for our "final" product? At this point, "final" can be the 176-iteration run we recently completed in #26.

Reduction Refinement

Currently, the merged K and P in an iteration are defined in terms of the existing (i.e., at the time of the iteration) values as:
K_i = w_i * k_i + w_{i+1} * k_{i+1}
P_i = P_i + P_{i+1}
where the normalized (i.e., w_i + w_{i+1} = 1) weights are defined in terms of the existing weights as:
w_i = w_i / (w_i + w_{i+1}) and w_{i+1} = w_{i+1} / (w_i + w_{i+1})

We choose the g-points involved in the winning trial as before.
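
A minimal sketch of the baseline merge just described, in plain NumPy with illustrative names (returning the summed weight for the merged g-point is my assumption; the document only specifies K and P):

```python
import numpy as np

def combine_g_points(k, w, planck, i):
    """Merge g-points i and i+1 per the formulas above: k is combined with
    normalized weights, the Planck term is summed."""
    wSum = w[i] + w[i + 1]
    kMerge = (w[i] * k[i] + w[i + 1] * k[i + 1]) / wSum
    pMerge = planck[i] + planck[i + 1]
    return kMerge, pMerge, wSum

# Toy example with 4 g-points; merge g-points 1 and 2
k = np.array([1.0, 2.0, 4.0, 8.0])
w = np.array([0.1, 0.2, 0.3, 0.4])
planck = np.array([0.25, 0.25, 0.25, 0.25])
print(combine_g_points(k, w, planck, 1))
```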

Define a modified merged K' as
K'_i = (w_i + Δw) * k_i + (w_{i+1} - Δw) * k_{i+1}
where Δw is set as a fraction x of the larger weight w_i:
Δw = x * w_i
Write Δw = n * x_0 * w_i.
(The Planck merger is unchanged from above.)

Call the winning trial K_0, which has corresponding cost function CF_0. (We now drop the subscript i denoting the trial.) If CF_0 represents a non-trivial (to be defined) increase in the overall CF, we evaluate two variations on the winning trial K_0: K_{-1} and K_1, where for K_n we set Δw = n * x_0 * w_i.
Let's choose x_0 = 0.05 for our first attempts at this.

We end up with three pairs of points: (n = -1, CF_{-1}), (n = 0, CF_0), (n = 1, CF_1).
----------------
Label these (x_1, y_1), (x_2, y_2), (x_3, y_3).

The minimum of the parabola through those points is
x_min = [x_3^2 * (y_2 - y_1) + x_2^2 * (y_1 - y_3) + x_1^2 * (y_3 - y_2)] / {2 * [x_3 * (y_2 - y_1) + x_2 * (y_1 - y_3) + x_1 * (y_3 - y_2)]}

Compute n_min = x_min / 0.05.
(This may need some experimentation.) If n_min is outside of the range ±1, then the final winning trial will use the K that yields the lowest of the three CFs above.
If n_min is in the range ±1, then run a variation with n = n_min = x_min / 0.05.
If the resulting change in CF (between the CF for x_min and the CF before this iteration) is negative or a trivial positive, then terminate; the final result of this trial will use n_min.
If the resulting CF is greater than the other three, then the final winning trial will use the K that yields the lowest of the three CFs above.
Check some sort of convergence criterion (TBD).
Choose the three points with the lowest CFs out of the four candidate points.
Repeat the steps following the ------- line above.
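
A sketch of the parabola step above (the three CF values and variable names here are made up for illustration; x_0 = 0.05 as proposed):

```python
def parabola_min_x(x1, y1, x2, y2, x3, y3):
    """x-coordinate of the vertex of the parabola through three points
    (the x_min expression above)."""
    num = x3**2 * (y2 - y1) + x2**2 * (y1 - y3) + x1**2 * (y3 - y2)
    den = 2.0 * (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2))
    return num / den

x0 = 0.05  # first-guess weight-scale increment

# (x, CF) pairs for n = -1, 0, +1; CF values below are placeholders
trials = [(-1 * x0, 100.7), (0 * x0, 100.5), (1 * x0, 100.6)]
xMin = parabola_min_x(*trials[0], *trials[1], *trials[2])
nMin = xMin / x0

if abs(nMin) > 1:
    # outside +/-1: keep the trial giving the lowest of the three CFs
    xBest, cfBest = min(trials, key=lambda t: t[1])
else:
    # inside +/-1: run one more variation with delta-w = nMin * x0 * w_i,
    # then compare its CF against the three above, per the steps listed
    pass
```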

User Friendly Cost Comparisons

notes from my meeting with The Honourable @RobertPincus:

  • continue separation of g-point reduction and cost calculation code
    • this includes bifurcating the processes into their own repositories, with the cost calculation code a submodule of this reduction repository
  • continue with YAML input into the Python script for cost comparisons
  • do the name-levels-weight mapping for the user in YAML (see the sketch after this list)
  • better naming convention for forcings
  • give the user the option of running an executable to produce inputs into the cost comparison
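
As an illustration only (the keys, component names, and file names below are hypothetical, not a settled interface), the name-levels-weight mapping might be expressed in YAML and read by the cost-comparison script like so:

```python
import yaml  # PyYAML

# Hypothetical config; real component names, levels, and weights TBD.
configStr = """
components:
  flux_net:           {levels: all,          weight: 0.5}
  heating_rate:       {levels: troposphere,  weight: 0.3}
  flux_net_forcing_7: {levels: surface,      weight: 0.2}
reference_nc: rrtmgp-lw-flux-reference.nc
test_nc: rrtmgp-lw-flux-reduced.nc
"""

config = yaml.safe_load(configStr)
for name, spec in config['components'].items():
    print(name, spec['levels'], spec['weight'])
```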

feel free to add, @RobertPincus

Refinements to Absolute Value and "Parabola" Implementations

From Eli in slack at 1035 today:

Here are some notes about the motivation for the modification to the g-point reduction script that rick and I discussed yesterday.

  • The script now uses the 'abs' method, as opposed to the original script, which minimized the total cost function at every iteration. We are now minimizing the change in cost function at every iteration, under the theory that we want to choose the trial that makes the least impact and, in that way, allows for more flexibility later on. This theory seems to be valid in that the change in CF from the beginning to iteration 128 went from +18 to +12 when this change was made (despite the minimum CF at any iteration in the sequence going from 84 to 100).
  • However, the abs(delta-CF) values that we're using now to choose the winner for an iteration do not include the abs(delta-CF) value that actually gets implemented for that iteration. That is because, after the winner is chosen, we modify the initial g-point combination of the winner with the parabola method. (rick edit: if the delta-cost is > 0.1)
  • So, ideally, we would modify the set of abs(delta-CF) values for all trials to include the optimal possible abs(delta-CF) values that all trials could have. I'll explain later why that isn't practical, but let me first give an example from the last run:
    • At iteration 129, the winner has a delta-CF of 1.04. The parabola code yields a modified delta-CF of 0.95, so in this case the parabola does very little to help things. A number of iterations later (n=141), the winner has a delta-CF of 2.23, but the ultimate delta-CF after parabolizing it is 0.30. At iteration 152, the initial delta-CF is 4.98 but the final delta-CF is 0.08. For this example, these three g-point combinations should have been chosen in exactly the opposite order.
  • The order may not be too important on its own, but the bigger issue is probably the g-point combinations that we do not choose that may be pretty benign. For example, in the 156-iteration sequence I just completed (100 g-points remain), the largest delta-CF is ~5 and the final CF is 142. Let's say our goal was to have 100 g-points and a final CF of 135. The current g-pt reduction would have failed to achieve this, but most likely a few of the unchosen g-pt reductions would be benign. If we could use, say, two of those unchosen ones instead of the largest two delta-CFs we did choose (say 5 + 4.5), then our final CF would be ~132, and we would have successfully achieved our goals.
  • Doing this algorithm for real would require, for each iteration, evaluating the delta-CF for both the starting g-pt combination (xWeight=0.0) and a modified combination (say xWeight=0.10), then estimating the optimal combination (xWeight_opt) and corresponding delta-CF, and choosing the winner based on the lowest abs(delta-CF). That would double the run time of the script, which I figure is impractical. (?)
  • Instead, we could maybe do that sort of evaluation every 20-30 iterations, under the theory that the difference between delta(CF) for xWeight=0 and delta(CF) for xWeight=0.10 won't change too much. That would save time, but the bookkeeping challenges might be larger, since the script would have to keep this information around for a bunch of iterations, during which the sequence of g-points changes and some of the existing g-points melt into other g-points as they get selected for combining with their neighbors.

so I wanted rick to think about how this best could be done.

But we are getting close. I think a final CF of ~140 might be OK (though I do have to relearn the skill of making statz and plotz to check "final" results like the place I stopped this week). This is subject to change, but it would be great if we could end up with a CF of ~140 while having 80-89 g-pts.

The last 10 delta-CFs in this week's sequence were 3.0, 0.5, 1.4, 3.8, 3.2, -0.2, 1.1, 5.2, 4.3, 5.0, so 3 or 4 of these are relatively benign. Let's say that 10% of the remaining g-pt combinations are relatively benign - i.e., 10 g-points. So we could apply those 10 benign g-pt combinations, bringing the g-point count from 100 down to 90 without changing the CF by much. And let's also say that, by expanding the max xWeight allowed to +0.15, we could optimize the impact of winners somewhat further and cut the CF down by (wild guess) 3. Then N_g would be 90 and CF = 139. We'd then be in range of our goal.

Thanks to you both for your work to get us to this point.

Improvements, Post-delivery to Fearless Leader

Data

  • Store fluxes as attribute/data in gCombine_kDist class and everywhere else
  • Attributes in netCDF should contain everything we need (e.g., kObj = BYBAND.gCombine_kDist(kFile) should be all I need); the fewer arguments the better
  • Also store k-distributions as data instead of having to refer to disk

Programmatic

  • TO DO items in code doc
  • Bring the cost function back out as its own function, not a method in a class, so the user can just provide a test and a reference netCDF and calculate the cost
  • open_mfdataset() in fluxCombine()? Where the band dimension is changing (see the sketch after this list)
  • fluxCombine() inefficiencies will be addressed by saving data as attributes instead of reading from disk every iteration
  • clean up methods that are clunky (e.g., calcOptFlux)
  • Modularize -- everything pretty much needs to be done in series right now
  • Less clunky parallelization of flux computation
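
For the open_mfdataset() item above, a possible direction (a sketch under an assumed file-naming pattern, not the repository's actual code) is to concatenate the per-band flux files explicitly along the changing band dimension:

```python
import glob
import xarray as xr

# Hypothetical per-band flux files produced during an iteration
bandFiles = sorted(glob.glob('workdir_band_*/flux_LW_iter*.nc'))

# Concatenate along the changing band dimension rather than merging on it
fluxDS = xr.open_mfdataset(bandFiles, combine='nested', concat_dim='band')
```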

Cosmetic

  • Rename gCombine_kDist: it's not just a k-dist, it also knows how to compute fluxes
  • kDistBand -> extractBand or something more transparent
  • DOLW inference from fields
  • Remove fluxesRRTMGP, RRTMGP test netCDF and trim other fat

Doc

  • Document better (README.md and notebook markdown cells)

Output

  • Weight addition incorrect when copying to final optimization directory, but not in working directories?
  • include attributes of original k-distribution in reduced distribution along with any new necessary metadata

Diagnostic Output with Modified Combinations

We should be providing diagnostic output like we used to:

Iteration 94
band16_coefficients_LW_g03-04_iter094.nc, Trial: 145, Cost: 100.558639, Delta-Cost: 0.1196
	flux_net, band_flux_net, heating_rate, heating_rate_7, flux_net_forcing_5, flux_net_forcing_6, flux_net_forcing_7, flux_net_forcing_9, flux_net_forcing_10, flux_net_forcing_11, flux_net_forcing_12, flux_net_forcing_13, flux_net_forcing_14, flux_net_forcing_15, flux_net_forcing_16, flux_net_forcing_17, flux_net_forcing_18 = 99.5851, 99.3616, 102.0414, 103.3084, 96.2623, 100.5032, 107.0325, 100.0000, 83.2970, 101.2200, 101.4471, 100.4604, 99.9562, 99.2261, 98.1497, 101.5854, 102.0847

... apply g-point combine modifications...results for 2 different weight scales ("plus" and "2plus"):
band16_coefficients_LW_g03-04_iter094.nc, Trial: 145, Cost: 100.517579, Delta-Cost: 0.5176
	flux_net, band_flux_net, heating_rate, heating_rate_7, flux_net_forcing_5, flux_net_forcing_6, flux_net_forcing_7, flux_net_forcing_9, flux_net_forcing_10, flux_net_forcing_11, flux_net_forcing_12, flux_net_forcing_13, flux_net_forcing_14, flux_net_forcing_15, flux_net_forcing_16, flux_net_forcing_17, flux_net_forcing_18 = 99.5145, 99.3939, 102.0424, 103.3090, 96.2623, 100.4943, 107.0325, 100.0000, 83.2970, 101.2200, 101.4471, 100.4604, 99.9562, 99.2261, 98.1497, 101.5854, 102.0847
band16_coefficients_LW_g03-04_iter094.nc, Trial: 145, Cost: 100.474838, Delta-Cost: 0.4748
	flux_net, band_flux_net, heating_rate, heating_rate_7, flux_net_forcing_5, flux_net_forcing_6, flux_net_forcing_7, flux_net_forcing_9, flux_net_forcing_10, flux_net_forcing_11, flux_net_forcing_12, flux_net_forcing_13, flux_net_forcing_14, flux_net_forcing_15, flux_net_forcing_16, flux_net_forcing_17, flux_net_forcing_18 = 99.4411, 99.4270, 102.0434, 103.3096, 96.2623, 100.4856, 107.0325, 100.0000, 83.2970, 101.2200, 101.4471, 100.4604, 99.9562, 99.2261, 98.1497, 101.5854, 102.0847

... eventual winner with real flux and cost calcs and the zero-crossing weight scale:
band13_coefficients_LW_g03-04_iter094_regr094.nc, Trial: 131, Cost: 100.438930, Delta-Cost: -0.0001
	flux_net, band_flux_net, heating_rate, heating_rate_7, flux_net_forcing_5, flux_net_forcing_6, flux_net_forcing_7, flux_net_forcing_9, flux_net_forcing_10, flux_net_forcing_11, flux_net_forcing_12, flux_net_forcing_13, flux_net_forcing_14, flux_net_forcing_15, flux_net_forcing_16, flux_net_forcing_17, flux_net_forcing_18 = 99.3352, 99.3638, 102.0301, 103.3029, 98.0105, 100.4869, 107.4082, 100.0000, 83.2970, 101.2200, 101.4471, 100.4604, 99.9562, 99.2261, 98.1497, 101.5854, 102.0847

But I have suppressed this in abs_val because there's so much of it (all modified trials, which initially is 147 trials). I have to find a way to convey this information to end users.

`conda` Environment

Eli cannot run our modified g-point code because of library inconsistencies (compared to @pernak18's environment) in the default NERSC JupyterLab kernel. This should be addressable with a shared conda environment, which needs to be linked to a kernel that can be used in a notebook -- see slide 13 of NERSC's "Using Jupyter" presentation.

Streamlining

We need to make the process easier for end users after I'm done developing. Action items:

  • single template for by-band and broadband (band dimension, flux_* vs. band_flux_*)
  • SW and LW templates need to be available -- maybe the code to generate them?
  • diffuse flux (and net diffuse) inclusion would be nice (right now we do it in the g-point reduction code)

More FL Suggestions

  • CSV output of cost components
  • Being able to go back to a different iteration after optimization is complete, e.g., if things go awry in the optimization process
  • Better organization of standard output for larger cost functions
  • Handle low-weight terms so that large degradations in cost for those components do not get ignored

Parabola Diagnostics

After reverting to fd45923, we're looking at the data going into the quadratic regression a little more closely. Some questions we have:

  • 1. Why do some trials have identical dCosts for more than one weight scale? E.g., trial 81 of iteration 95 has -0.6125413177496171 at both the 1 and 2 points (or 0.05 and 0.1, or plus and 2plus).
  • 2. Why does the weight scale not always line up with the winner's dCost? E.g., trial 29 of iter 95 is initially at -0.0046732679841312574 -- there is no zero point, so the weight scale is correctly chosen because the initial (unperturbed) dCost is the lowest in the init/plus/2plus formulation; however, the dCost of this trial after recompute() is invoked is -0.02511183448149268, which is the dCost associated with 2plus. This could be related to what I was trying to fix in c5f6d70.
  • 3. It looks like I'm assigning 1e6 to some trials ... this should happen only for the trials that optimize a given iteration, but there are multiple trials like this -- 110 and 117 of iter 95. These trials are in the same band, so band 11 must have been the winner in iteration 94, but at this point the initial dCost should have been recomputed.

Diagnostic Output Bug

When working in the LW, FL says:

I got this error at iteration 150 (the run was doing 8 iterations, from 144 to 152, and had gotten through 149 OK):

--------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-7-0c12c3ce6fa2> in <module>
     87     coObj.findOptimal()
     88     if coObj.optimized: break
---> 89     if DIAGNOSTICS: coObj.costDiagnostics()
     90     coObj.setupNextIter()
     91     with open(pickleCost, 'wb') as fp: pickle.dump(coObj, fp)
/global/u1/e/emlawer/emlawer-g-point-reduction/by_band_lib.py in costDiagnostics(self)
   1168 
   1169         outDS['trial_total_cost'] = \
-> 1170             xa.DataArray(self.totalCost, dims=('trial'))
   1171         outNC = '{}/cost_components_iter{:03d}.nc'.format(
   1172             diagDir, self.iCombine)
~/.local/cori/3.8-anaconda-2020.11/lib/python3.8/site-packages/xarray/core/dataset.py in __setitem__(self, key, value)
   1377             )
   1378 
-> 1379         self.update({key: value})
   1380 
   1381     def __delitem__(self, key: Hashable) -> None:
~/.local/cori/3.8-anaconda-2020.11/lib/python3.8/site-packages/xarray/core/dataset.py in update(self, other, inplace)
   3785         """
   3786         _check_inplace(inplace)
-> 3787         merge_result = dataset_update_method(self, other)
   3788         return self._replace(inplace=True, **merge_result._asdict())
   3789 
~/.local/cori/3.8-anaconda-2020.11/lib/python3.8/site-packages/xarray/core/merge.py in dataset_update_method(dataset, other)
    935         priority_arg=1,
    936         indexes=dataset.indexes,
--> 937         combine_attrs="override",
    938     )
~/.local/cori/3.8-anaconda-2020.11/lib/python3.8/site-packages/xarray/core/merge.py in merge_core(objects, compat, join, combine_attrs, priority_arg, explicit_coords, indexes, fill_value)
    590     coerced = coerce_pandas_values(objects)
    591     aligned = deep_align(
--> 592         coerced, join=join, copy=False, indexes=indexes, fill_value=fill_value
    593     )
    594     collected = collect_variables_and_indexes(aligned)
~/.local/cori/3.8-anaconda-2020.11/lib/python3.8/site-packages/xarray/core/alignment.py in deep_align(objects, join, copy, indexes, exclude, raise_on_invalid, fill_value)
    425         indexes=indexes,
    426         exclude=exclude,
--> 427         fill_value=fill_value,
    428     )
    429 
~/.local/cori/3.8-anaconda-2020.11/lib/python3.8/site-packages/xarray/core/alignment.py in align(join, copy, indexes, exclude, fill_value, *objects)
    341                     "arguments without labels along dimension %r cannot be "
    342                     "aligned because they have different dimension sizes: %r"
--> 343                     % (dim, sizes)
    344                 )
    345 
ValueError: arguments without labels along dimension 'trial' cannot be aligned because they have different dimension sizes: {91, 92}

The way the code in the notebook works is: cost computation, optimization determination, diagnostics, write the pickle file for the iteration, then write the flux and reduced k-distribution. In this case, the cost computation and optimization were done for iteration 149, but the failure is in the diagnostics, so no diagnostic, flux, or k-distribution netCDFs are written.
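
A guess at the failure mode, as a sketch with hypothetical names: the diagnostics dataset already carries a 'trial' dimension of one size while self.totalCost has another, so the assignment cannot align. A defensive check before the assignment would at least make the mismatch obvious at the point it occurs:

```python
import numpy as np
import xarray as xr

# Hypothetical reproduction of the mismatch: dataset built for 92 trials,
# cost vector holding 91 values (or vice versa)
outDS = xr.Dataset({'trial_cost_component': (('trial',), np.zeros(92))})
totalCost = np.zeros(91)

nTrialDS = outDS.sizes.get('trial')
if nTrialDS is not None and nTrialDS != len(totalCost):
    raise RuntimeError(
        f'trial dimension mismatch: dataset has {nTrialDS}, '
        f'totalCost has {len(totalCost)}')

# the assignment that raised the ValueError in costDiagnostics()
outDS['trial_total_cost'] = xr.DataArray(totalCost, dims=('trial',))
```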

Suggested Improvements from Fearless Leader

Branch off in repo, likely re-tag after list is exhausted:

  • How much did each band contribute to the cost function? Write a document with this information in it. Also determine contributions from levels and cost function components

  • Top priority: Eli will start with more iterations when we have more condensed output (cost function components)

  • TOP TOP PRIORITY: fewer digits in netCDF (see the sketch after this list)

  • One variable per band, with only the associated trials

  • Single file with all components

  • Attribute should display what trial is the winner

  • Start considering different cost functions and normalizations and preserving that info in filename

  • Print just the winning combination (filename and g-point combination) and cost in notebook

  • Think about a progress bar

  • Don’t use top layer in max Stratosphere HR (do this in stats and profiles plots)

  • Restarting from a specific iteration
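
For the "fewer digits in netCDF" item above, one possible approach (a sketch with a hypothetical file name, not the project's actual output code) is to use xarray's netCDF encodings to write 32-bit, lossily compressed floats:

```python
import xarray as xr

# Hypothetical diagnostics file from an iteration
ds = xr.open_dataset('cost_components_iter095.nc')

# float32 plus least_significant_digit truncation keeps stored values short;
# zlib compresses the truncated values efficiently.
encoding = {name: {'dtype': 'float32', 'zlib': True,
                   'least_significant_digit': 4}
            for name in ds.data_vars if ds[name].dtype.kind == 'f'}

ds.to_netcdf('cost_components_iter095_small.nc', encoding=encoding)
```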

Coefficients File Corrections

In bands where we have no minor contributors in one atmosphere region (lower or upper), I have the following from @RobertPincus:

in your coefficients files you have contributors_upper = 1 but this should be a multiple of the gpt dimension, i.e. 0, 15, 30…

Note that contributors_upper should be equal to minor_absorber_intervals_upper*ngpt

I can’t stress enough that it’s important to be sure the coefficients files are being properly formulated. Otherwise you’re reading coefficients for minor gases from random memory locations.

I just fear investing time in results that turn out to be flawed because the files are busted

seems like a pretty big deal
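
A small sketch of a consistency check for the rule described above (the file name is hypothetical; the dimension names are those quoted in this issue):

```python
import xarray as xr

def check_minor_dims(kFile):
    """Verify contributors_* = minor_absorber_intervals_* * ngpt in a
    reduced k-distribution (coefficients) file."""
    with xr.open_dataset(kFile) as ds:
        nG = ds.sizes['gpt']
        for region in ('lower', 'upper'):
            nContrib = ds.sizes.get(f'contributors_{region}', 0)
            nInterval = ds.sizes.get(f'minor_absorber_intervals_{region}', 0)
            if nContrib != nInterval * nG:
                print(f'{kFile}: contributors_{region} = {nContrib}, '
                      f'expected {nInterval} * {nG} = {nInterval * nG}')

check_minor_dims('band14_coefficients_LW_g09-10_iter032.nc')
```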
