gilbo / ebb
DSL for physical simulation
Home Page: http://ebblang.org
License: Other
It would be nice to be able to easily pretty-print LVectors from Lua. This is a low priority enhancement.
The rub is that the user might need to provide some extra information to ensure the values are fully typed when this happens.
Semantic checker should be updated to keep track of how global Liszt objects are being accessed in the kernel. This will require:
(Note that L_REDUCE_PLUS is used for both plus and minus, and L_REDUCE_MULTIPLY is used for both multiplication and division.)
Right now we will only detect reductions if they are in the form "lisztobj = lisztobj op expr". Specifically, the global variable being reduced must show up as the left-hand side of the binary operation.
Arbitrary expression statements are not allowed in either Lua or Terra, and since we haven't yet decided how we are going to allow function calls from a Liszt kernel, we should just remove them for now.
The vector class should be able to support arithmetic operations in lua scope (e.g. 'v1 + v2' should return a new Vector, etc.) The Vector metatable should be modified so that Vector objects support all vector operators inline.
Operator type checking:
'+' and '-' should be vector [op] vector -> vector operations
'*' is vector [op] number -> vector, or number [op] vector -> vector
'/' is vector [op] number -> vector (you cannot divide a number by a vector)
Other useful methods:
Vector.norm(v) -> returns the 2-norm of v (can also be called as v:norm())
Vector.dot(v1, v2) -> returns v1 . v2 (can also be called as v1:dot(v2))
Vector.normalize()
Vector[i] should return/write to the i'th element of the vector
Vector.init({x, x, x}) or some such.
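A minimal Lua sketch of what the metatable-based operators could look like (Vector.new and the internal representation here are assumptions for illustration, not the existing API):

```lua
local Vector = {}
Vector.__index = Vector  -- so v:norm(), v:dot() resolve to the methods below

function Vector.new(data)
  local v = setmetatable({}, Vector)
  for i = 1, #data do v[i] = data[i] end  -- Vector[i] reads/writes element i
  return v
end

function Vector.__add(a, b)  -- vector + vector -> vector
  local r = {}
  for i = 1, #a do r[i] = a[i] + b[i] end
  return Vector.new(r)
end

function Vector.__sub(a, b)  -- vector - vector -> vector
  local r = {}
  for i = 1, #a do r[i] = a[i] - b[i] end
  return Vector.new(r)
end

function Vector.__mul(a, b)  -- vector * number or number * vector
  if type(a) == "number" then a, b = b, a end
  local r = {}
  for i = 1, #a do r[i] = a[i] * b end
  return Vector.new(r)
end

function Vector:dot(other)
  local s = 0
  for i = 1, #self do s = s + self[i] * other[i] end
  return s
end

function Vector:norm()
  return math.sqrt(self:dot(self))
end
```

Division would follow the same shape as __mul but reject a number numerator, matching the typing rules above.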
Add ability to declare a single-precision floating-point constant as e.g. 0.5f.
Some alternate options:
Also worth considering, regardless of the above:
An L.NewConstant(...) statement that mirrors the L.NewGlobal declaration. This would allow us to type values from the Lua scope while still letting the compiler specialize them into the code.

Lulesh site: https://codesign.llnl.gov/lulesh.php
We have been asked by our LANL friends to port LULESH to Liszt-in-Terra. (For reference, there is an existing implementation in Liszt-in-Scala.)
Print statements on the GPU are not guaranteed to come out in any particular order with respect to thread_id, block_id, etc. To verify GPU test results, we therefore need a smarter diff: it should check that the test output contains exactly one of each expected line, in any order, and nothing else.
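The comparison itself is just a multiset equality check over lines. A pure-Lua sketch (unordered_match is a hypothetical helper name, not part of the test harness yet):

```lua
-- Return true iff `actual_lines` contains exactly the lines of
-- `expected_lines`, in any order, and nothing else.
local function unordered_match(expected_lines, actual_lines)
  local counts = {}
  for _, line in ipairs(expected_lines) do
    counts[line] = (counts[line] or 0) + 1
  end
  for _, line in ipairs(actual_lines) do
    if not counts[line] or counts[line] == 0 then
      return false  -- unexpected or over-represented line
    end
    counts[line] = counts[line] - 1
  end
  for _, c in pairs(counts) do
    if c ~= 0 then return false end  -- some expected line missing
  end
  return true
end
```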
Semantic checker should report an error when an object from Lua scope that is not of a Liszt data type (Scalar or Field) is written to.
The Vector type currently has kind "vector". This should be changed and handled accordingly during semantic checking, since a vector type should not itself be treated as an actual vector value.
When we added function polymorphism and replaced kernels with just plain Liszt functions, we also introduced support in the parser for optionally type annotating any argument to a Liszt function. However, these type annotations are currently ignored.
Work Item:
Add support to the type-checker to complain when these explicit annotations are violated.
We want a 3d grid implementation. This should be a fairly straightforward adaptation of the 2d grid code. I recommend against trying to somehow generalize the 2d/3d grid code into one form. While abstracting across 2d/3d will reduce code duplication, the implementation will also get considerably more difficult to read. (my prediction)
We want to have a way to throw type-checking errors when certain accesses use non-constant values.
We want this for Affine-Indexing, and also for indexing into vectors and matrices. In the latter case, we want to be able to have loops over constant ranges generate indices with known bounds (i.e. not constant, but something we can statically certify is OK to index with).
Also, this issue may be related to #34 which handles a special form of constants: strings, which can't be manipulated with computation/arithmetic anyway.
The initial proposal for field polymorphism ( #34 ) suggests fixing field names/identities via typechecking. This is fine. However, we would eventually like to be able to avoid re-typechecking / re-compiling when field names/identities change but their types/parent relationships remain unchanged.
(As an example, this is an important step to being able to efficiently support temporary fields; since it's not reasonable to re-compile every time the identity of a temporary field changes.)
There are multiple approaches to solving this problem. Pick one and implement it.
We did decide that it would be better to make whatever features we add for this modal, i.e. opt-in: the user needs to explicitly turn on periodic boundary condition support in the code by calling some method. This allows us to capture that user choice and throw an error if we're being deployed on GPU or Cluster. This way we can choose not to implement periodic boundary conditions on non-CPU runtimes if we want to.
HIGH PRIORITY: Ivan asked for this.
See the other issue on Field Polymorphism #34 . This issue depends on it.
Once support for string literals is added we'd like to support some of printf string formatting in Liszt.
Maybe introduce novel syntax (e.g. %v, %m ? are those conflicted?) for printing out vector or matrix values? (Don't get caught up on this though)
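Hypothetical usage once formatting lands (the %v specifier and the exact print form shown here are speculative, not existing features):

```lua
liszt kernel report ( c : cells )
  -- '%f' as in C printf; '%v' is the proposed (unconfirmed) vector specifier
  L.print("temp: %f  velocity: %v\n", c.temperature, c.velocity)
end
```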
The hexIntForce and rectangleSqueeze examples are broken because they rely on a deprecated field storage implementation. These should be fixed as time permits.
This requires building some kind of GPU write-buffer for the INSERT support. DELETE support could be improved by having a GPU-resident compaction kernel, but Defragging can maybe be ignored for the time being?
Suggested design: (Suppose a function has insert statements)
The above design seems optimal, up to some slight variations, for the following reasons:
Should we use tiny regions or futures?
Problem with tiny regions: Does Legion support this with reasonable performance or will it overload the system?
Problem with futures: Futures don't have any notion of memory residency. In the case of GPU tasks, this introduces spurious blocks on copying global data to/from GPU memory.
Here's a snippet of code with the problem:
for r = 0,3 do
t.e[i,v].stiffness[r,k] += dH[r,i]
end
So, the problem here is that the left-hand side is not correctly resolved as a field write (actually a reduction). The conversion of Assignment AST nodes into GlobalWrite or FieldWrite AST nodes relies on matching the pattern Assignment(Global, rhs) or Assignment(FieldAccess, rhs) and rewriting the Assignment into a GlobalWrite or FieldWrite node.
However, in the above snippet of code, the immediate left-hand side of the assignment is actually a SquareIndex AST node (the indexing into the stiffness matrix).
In general, the phenomenon which will trigger this problem is trying to reduce/write into only one entry of a vector or matrix field/global value. We should either disallow this entirely (not recommended) or fix it so that this is correctly detected as a write/reduce by the typechecker (recommended).
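One way the detection could be fixed, sketched in Lua (the AST constructor and field names here are assumptions about the compiler internals, not the actual code):

```lua
-- Peel off any SquareIndex wrappers to find the underlying target
-- of an assignment's left-hand side.
local function base_of(lhs)
  while lhs.kind == 'SquareIndex' do
    lhs = lhs.base
  end
  return lhs
end

-- In the Assignment-rewrite logic, match on the peeled base instead of
-- the immediate left-hand side:
--   local base = base_of(assignment.lhs)
--   if base.kind == 'FieldAccess' then
--     -- rewrite into a FieldWrite / field reduction node
--   elseif base.kind == 'Global' then
--     -- rewrite into a GlobalWrite / global reduction node
--   end
```

The rewritten node would then need to carry the index expressions along so codegen can reduce into the single entry.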
This is a language feature which is necessary to perform stencil analysis on grid code. It would involve translating all the current Grid Macro implementations (e.g. the constant relative offset c(1,2)).
This is very low priority, since it doesn't really matter until we start doing stencil analysis. However, it's a simple task for anyone who wants something simple to do.
We want to support polymorphism of Liszt functions over different possible fields. This is important for writing solvers or other kinds of generic numeric code in Liszt.
Proposed User Syntax:
-- definition of field parameters as untyped variables
-- may be possible to add a L.string type annotation
liszt foo( c : cells, in_field, out_field )
...
c[in_field]
...
end
-- calling convention
cells:map(foo, 'temperature', 'temperature_shadow')
Proposed semantics & typechecking:
Field parameters should just be string values.
The actual string value will be supplied at type-checking time. This string value will be used to type the parameter variable with the type L.string('the_actual_string'), a type which should not be coercible to any other string type.
As a result, a string-typed variable isn't actually inhabited by values; it's an opaque carrier object whose only purpose is to propagate the string type containing the constant. Whenever a string constant is expected at any point in the code, the typechecker attempts to find a string value in the type. (This same mechanism should make it possible to add formatting strings to the print statement.) To be clear, the proposal is that strings should be incorporated into Liszt, but not as proper first-class values.
Also note that this proposed design results in having to re-type-check a function for every possible assignment of string arguments. We can certainly remove this decision at a later point, but it seems reasonable and expedient for now.
This requires having some kind of function call abstraction that's consistent, along with function types of some sort in the typechecker. The function call abstraction is complicated because there are a lot of implicit arguments to handle: all of the fields that need accessing must be passed through.
The Liszt library files that try to keep imports out of their public namespaces by making them local neglect to check whether the shadowed variable existed in the global scope before it was overwritten by the imports. Thus, we may be hiding previously visible libraries in the global namespace when we set _G.runtime to nil. This is unlikely to cause problems in the near future, but should eventually be fixed.
There's currently an AST node called QuoteExpr (I think) that conflates quoted code with let expressions (since we only allow Liszt code to explicitly write let expressions via the syntactic form liszt quote stmts in expr end). It would be good to clean this up in the compiler at some point.
pipe this through from Terra, including working out something for GPUs etc.
Field reading and writing mostly works on GPUs. Following that example, extract the global phase analysis data, and set up the Bran/Germ to provide dynamic global locations to the kernel. (Another design is reasonable. Recommended to talk to Gilbert first if you're deviating from the field approach significantly)
This item requires developing a good understanding of the new codegen & execution details. (Bran/Germ, etc.) It's also easy to mostly copy an existing example (fields). So, this makes a good task to familiarize new/returning people with the code.
HIGH PRIORITY: Main obstacle to GPU being feature complete.
Right now, any error that happens while a kernel is running will cause a stack dump to report the error as originating within our compiler. That's bad for us.
There are two stages to fixing this:
Wrap the kernel invocation in an xpcall() that produces a slightly more useful stack dump. That is, the stack dump should locate the error at the kernel call site instead.

We need to add back in Insert/Delete support.
It's deprecated everywhere and not supported under Legion at all.
See #37 for getting insert/delete working on GPUs
Add support for reducing fields or globals with the reduction operators min= and max=. This requires touching a lot of different parts of the compiler very lightly. Good familiarizing task.
HIGH PRIORITY: Ivan requested
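Hypothetical usage once supported (L.NewGlobal appears elsewhere in this tracker; the L.double type name and kernel shape are assumptions):

```lua
-- a global that every cell reduces into
local max_temp = L.NewGlobal(L.double, 0.0)

liszt kernel find_max ( c : cells )
  max_temp max= c.temperature  -- proposed reduction operator
end
```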
We need to find all allocation sites in the compiler and possibly change them to eliminate unnecessary uses of Terra globals and .new declarations.

I recently learned from Zach that we should be avoiding Terra global declarations because they invoke the LLVM toolchain. Notably, they will NOT be garbage collected.
I also learned that terralib.new/ffi.new both allocate memory on the Lua heap, which is probably also not what we want to do in many cases, though it does ensure that the memory will be garbage collected.
One of the main use cases for abusing these features has been the need for a pattern like the following:
Sometimes, we may also rely on the struct persisting beyond the call-site, which frequently makes it wise to rely on garbage collection to be safe.
The following snippet of code illustrates a safer allocation pattern:
local function SafeHeapAlloc(ttype, finalizer)
  if not finalizer then finalizer = C.free end
  -- allocate on the C heap instead of the Lua heap
  local ptr = terralib.cast( &ttype, C.malloc(terralib.sizeof(ttype)) )
  -- register a finalizer so the allocation is still garbage collected
  ffi.gc( ptr, finalizer )
  return ptr
end
Talk to Gilbert for the design. Need to be able to insert particles using code like the following:
liszt kernel ( c : cells )
...
insert { cell = c, pos = c.center } into particles
...
end
Lower Priority: This is needed for full particle support, but we can kludge around it for a while still.
B.length and B.print builtins will need to be updated to generate GPU-specific code because they currently generate code that makes calls to functions in C standard libraries. I expect this will be a trivial fix.
Terra is already inferring numeric types to be doubles from the generated code, so we should add doubles to our list of accepted types and make sure that we are correctly inferring what types we end up passing to runtime functions.
Liszt kernels can only write to or perform reductions on global variables if they are a Liszt Field or Scalar object, so users must be able to create, manipulate, and access Scalars from Lua code.
The Scalar object needs to be implemented so that it:
The sphere_cloth benchmark is currently failing to run, presumably due to an interface change in the grid class. This should be fixed as time permits.
We had a reviewer recommend that on Kepler GPUs the global reduction tree might be more efficient if we replace the second kernel invocation with atomic operations.
To make this clear, consider that implementing a tree reduction in CUDA involves potentially 3 different granularities of parallelism:
Our current scheme uses a tree to perform the warp-level reduction, calls __syncthreads() at the end of the primary kernel execution to aggregate values written to shared memory, and then launches a second kernel to perform the kernel-level reduction.
This proposal is to either (a) replace the second kernel entirely by writing block-level reduction values into a common variable using atomic adds, or (b) replace both kernel and block-level by writing the result of a warp-level tree reduction directly into a global common variable using an atomic add.
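Option (a) might look roughly like the following Terra-flavored sketch (atomic_add and the per-block epilogue structure are assumptions for illustration, not existing runtime calls):

```lua
-- Per-block epilogue of the primary reduction kernel.
-- This replaces writing a per-block partial result that a second
-- kernel launch would otherwise have to aggregate.
terra block_epilogue( shared_vals : &float, global_result : &float, tid : uint )
  -- ... warp-level tree reduction into shared_vals[0], as currently done ...
  if tid == 0 then
    -- fold this block's partial sum into the global result directly
    atomic_add(global_result, shared_vals[0])
  end
end
```

Option (b) goes one step further and has each warp atomically add its tree-reduction result straight into the global, trading more atomic traffic for no shared-memory aggregation at all.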
Update parser and semantic checker to be able to recognize and build vectors from a statement like:
"var v = {1, 3, 5}" or some such.
Liszt should parse 1.0 / 12.0 as double / double, not int / int. (Terra now has extensions for parsing floats and doubles, so this bug can be fixed.)
There are some tricky issues about the write-only permission.
Block size is hardcoded right now for GPUs. For kernels that touch large fields, such as fields of small matrices and vectors, it may make sense to use a smaller block size than for kernels that use only scalars.
Ivan requested this feature. We can do it, but it will require adding new value-types to the compiler for tuples. Those have to be plumbed all the way through then.
This is an instance of the classic PL bug. The problem arises due to name re-use in combination with subtree substitution provoked by macro expansion or user-defined-function inlining.
I believe this may be solvable by translating all of the names into symbols during the specialization pass. However, (I haven't given this enough thought) it may be necessary to do a full, proper beta-renaming step to ensure correct behavior. This requires careful, careful thought.
This is relatively high priority b/c any bugs it causes will be really hard to diagnose. However, it's also somewhat unlikely to crop up soon, so we may be able to postpone fixing it a while yet.
Almost certainly Gilbert will fix this. If someone else thinks they have a good enough handle on the problem then they're welcome to give it a go, but make sure to talk to Gilbert or Zach first to make sure you understand the subtleties of why/how this problem crops up.
We stripped this out to get Legion working.
Modify the liszt script and setup.sh so that liszt works when it is not invoked from the top-level directory of the project. This includes fixing the path to terra and making sure that the proper library paths are fed to terra as options in the liszt script.
Talk to Gilbert for the design details. Need to be able to delete particles using code like the following:
liszt kernel ( p : particles )
if bad_particle(p) then
delete p
end
end
Lower Priority: This is needed for full particle support, but we can kludge around it for a while still.