The Craftr build system (v4.x)

Craftr is a Python-based meta build system focused on the compilation of C and C++ projects, with out-of-the-box support for Cython, C# and Java as well. It uses Ninja under the hood for parallel builds.

Build scripts are written in Python and are called "modules". The files are named build.craftr and gain some language extensions and additional built-ins through Node.py and the Craftr API.

A build script usually imports the Craftr API first, then declares its name and version; from that point on, targets can be declared. The code in .craftr files is preprocessed to allow for syntactic sugar, but all functionality can also be replicated with the functional API.

import * from 'craftr'            # 1)
project('myproject', '1.0-0')     # 2)

target('main', 'cxx:build',       # 3)
{
  'cxx.srcs': glob('src/*.cpp'),  # 4)
  'cxx.type': 'executable'        # 5)
})

To build and run the executable, use (6)

$ craftr -cb --variant=release main:cxx.run@="World"
Hello, World!

Explanation

  1. Import all members of the Craftr API. We only use project() and target() in this example.
  2. Call the project() function to specify the module's name and version. This is used for constructing unique target identifiers and folders in the build output directory.
  3. Declare a new target called "main" that is converted into build instructions using the cxx:build finalizer. Following are the properties of the target.
  4. Specify the cxx.srcs property with all .cpp files in the src/ directory relative to the build script's parent directory.
  5. Set the cxx.type property to "executable" in order to create an executable from the source files.
  6. The -c flag, or --configure, is used to run the build script and generate a Ninja build manifest.
    The -b flag, or --build, indicates that the build should be executed right afterwards.
    With --variant=release, you specify a release build (as opposed to the default --variant=debug).
    The main:cxx.run argument specifies the target and operator to build. This operator is generated automatically for invoking the executable that is built from the target main.
    The @="World" part appended directly to the operator is passed as an argument to the executable invoked by the main:cxx.run operator. In the example above, the executable takes its first argument and prints it as Hello, %s!.

Important Built-ins and API Members

  • BUILD (API)
  • OS (API)
  • project() (API)
  • target() (API)
  • properties() (API)
  • module (built-in)
  • module.options (built-in)
  • require() (built-in)

Installation

Craftr requires Python 3.6 or newer (preferably CPython) and can be installed like any other Python module.

$ pip install craftr-build

To install the latest version from the Craftr GitHub repository use:

$ pip install git+https://github.com/craftr-build/craftr.git -b develop

Tips & Tricks

How to show Python warnings?

The Craftr API makes some use of the Python warnings module. If you want warnings to be displayed, you can add PYTHONWARNINGS=once to the environment, or use the --pywarn [once] command-line flag. The flag is usually preferable because it does not also enable warnings raised from the Python standard library.


Copyright © 2018 Niklas Rosenstein


Issues

add run() rule

A rule to create a target that can be run via Ninja should be added. E.g.

objects = cxx.objects(sources = glob(join(project_dir, 'src/**/*.cpp')))
program = cxx.executable(filename = 'main', inputs = [objects])
run = rules.run(executable = program, pool = 'console')

I think we should add a new built-in Craftr module called rules for it, as a run() rule is not directly related to the C/C++ compiler.

Automatically detect msvc_deps_prefix

Currently, msvc_deps_prefix is set to Note: including file:, which is correct for the English version of MSVC. Some people might be using a different language version, so it would be nice if the prefix could be detected automatically.
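One way to detect the prefix automatically is to compile a probe file that includes a single known header with cl /showIncludes and strip the header's path from the resulting line. A sketch of the parsing step (the helper name and probe setup are hypothetical):

```python
def detect_msvc_deps_prefix(cl_output, sentinel_path):
    """Given the output of 'cl /nologo /showIncludes /c probe.cpp', where
    probe.cpp includes exactly one header located at sentinel_path, return
    everything on the matching line before the header path; that prefix is
    the localized equivalent of 'Note: including file:'."""
    for line in cl_output.splitlines():
        pos = line.find(sentinel_path)
        if pos > 0:
            return line[:pos].rstrip()
    return None  # the probe output did not mention the sentinel header
```

The returned string can then be written verbatim as msvc_deps_prefix into the Ninja manifest.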

Support for Craftr environment files

Craftr is usually configured via the environment, just like most other build tools. The -Dkey=value option also just appends to the process environment. There might be cases where you want a single project, or all projects you build, to always be configured with a specific environment. Also, if your project requires platform-dependent configuration, users should not need to modify the Craftfile but instead add or modify the Craftr environment file.

I want to add support for a file that is also plain Python but loaded before any other modules. This file will be loaded from the current working directory of Craftr and from the user home directory. I would call it Craftenv or .craftenv or something like that. It could look like this:

from os import environ
environ['MYO_SDK_PATH'] = '/absolute/path/to/myo/sdk'
environ['CROSS_COMPILER_PREFIX'] = 'arm-eabi-'
# etc.

Allow craftr generator artifacts to be outside the sources

With other meta build systems, such as meson, you can have the "build" directory outside of the source tree. This means you never modify the source tree, which I find convenient.

So I'd like to be able to do the following:

$ ./gen_srcs craftr test 1 1
$ mkdir build
$ cd build
$ craftr export ../test

Another idiom is that the "build" directory is inside the source tree, but all artifacts go inside that directory.

$ ./gen_srcs craftr test 1 1
$ cd test
$ mkdir build
$ cd build
$ craftr export ..

Re-add Session.env and substitute os.environ in the session context

Although usually not necessary, Craftr is built with the possibility to create multiple Session objects and execute Craftr modules encapsulated in a Session. We currently work on os.environ directly when we want to read or write global options. Session.env should be initialized with a copy of os.environ, and inside the Session context, os.environ should be the very same dictionary.
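A minimal sketch of the proposed substitution (hypothetical design; note that rebinding os.environ to a plain dict loses the real mapping's putenv() synchronization, which is acceptable when the environment is only used to read and write options):

```python
import contextlib
import os


class Session:
    """A session that owns its own environment copy. While its context is
    active, os.environ is rebound to that very same dictionary, so code
    reading or writing os.environ operates on the session's copy."""

    def __init__(self):
        self.env = dict(os.environ)

    @contextlib.contextmanager
    def context(self):
        saved = os.environ
        os.environ = self.env
        try:
            yield self
        finally:
            os.environ = saved
```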

Implement rule functions

The compiler modules should get rule functions that will generate the correct commands to build object files, libraries and executables.

C = load_module('compiler')
build_dir = join(project_dir, 'build')

C.cc_objects('Objects',
  sources = glob(join(project_dir, 'src/**/*.c')),
  includes = [join(project_dir, 'include')],
)

C.cc_executable('Bin',
  name = 'main',
  inputs = Objects,
)

Error message when passing a non-container to a merged option is not helpful

objects = cc.compile(
  sources = path.glob('src/*.c'),
  include = path.local('include'),  # Error! need a list, not a string
)

Results in an error message like this:

  File "c:\users\niklas\repos\craftr-build\craftr\craftr\__init__.py", line 303, in merge
    return self.options.get_merge(key)
  File "c:\users\niklas\repos\craftr-build\craftr\craftr\__init__.py", line 432, in merge
    'in framework {1!r}, got {0}'.format(key, fw.name, type(value).__name__))
TypeError: expected a non-string sequence for 'include' in framework '<module>', got include

The first step should be to replace <module> with the module identifier of the module that called the rule function cc.compile().

Custom compiler based targets

The Module.target() function accepts an optional keyword argument called target_class that can be passed an alternative class to be used instead of craftr.runtime.Target. This could be used as a "plugin hook".

Consider:

from craftr.utils.path import glob, join
C = load_module('compiler')

module.target('Objects',
  target_class = C.ObjectsTarget,
  inputs = glob(join(project_dir, 'source', '*.cpp')),
  base_dir = project_dir,
  build_dir = join(project_dir, 'build', 'obj'),
)

module.target('Program',
  target_class = C.LinkTarget,
  inputs = Objects.outputs,
  output_file = join(project_dir, 'build', 'main'),
)

These custom targets could also implement special support for specific backends, for example Ninja. The custom target class would know how to properly implement auto dependencies for the used compiler in Ninja.

This is just a draft of how such a target subclass could look:

from craftr.utils.path import move
import craftr
P = load_module('platform')

# ...

class ObjectsTarget(craftr.runtime.Target):

  def __init__(self, module, name, inputs, base_dir, build_dir):
    outputs = move(inputs, base_dir, build_dir, suffix=P.obj)
    command = ['g++', '-c', '%%in', '-o', '%%out', '-MMD', '-MF', '%%out.d']
    super().__init__(module, name, inputs, outputs, command=command)

  def ninja_rule(self):
    return {'deps': 'gcc', 'depfile': '$out.d'}

Add CUDA support

This issue concerns craftr 0.20.0-dev. The compiler.nvcc module has only been tested on Windows so far. It needs to be tested, and possibly adjusted, for Linux and OSX.

Invoke python functions via command-line

It's not the goal of Craftr to be a convenient interface to Ninja as it was with Creator, but it sure was convenient to be able to invoke Python functions from the command-line to execute custom tasks. It was even possible to require a target to be built before a task can be run.

In Craftr, we no longer need to decorate a Python function with a @task() decorator since the function is automatically exposed in the module. In order to build targets from a Python task, we can extend the backend interface to provide a function build that builds a number of targets.

from craftr.utils.shell import call
def run():
  session.backend.build(Program)
  binary = Program.outputs[0]
  call(binary, session)

Make Target iterable and yield the output filenames

This would simplify some syntactic constructs and would probably remove the distinction between lists or tuples of filenames and targets. The autoexpand() function should allow the expansion of Targets.

internals = cxx.objects(
  inputs = glob(join(project_dir, 'src/internal/*.cpp')),
)
public = cxx.objects(
  inputs = glob(join(project_dir, 'src/public/*.cpp')),
)

# Now we could use the targets directly instead of accessing their *outputs* attribute.
cxx.objcopy(
  inputs = [internals, public],
  outputs = addsuffix(rmvsuffix([internals, public]), '.bin'),
  additional_flags = ['-O', 'binary'],
)
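A minimal runnable sketch of the idea (the Target class and autoexpand() shown here are simplified stand-ins for the real implementations):

```python
class Target:
    """A target that yields its output filenames when iterated, so it can
    be passed wherever a list of files is expected."""

    def __init__(self, outputs):
        self.outputs = list(outputs)

    def __iter__(self):
        return iter(self.outputs)


def autoexpand(items):
    """Flatten nested lists/tuples and expand Target objects into their
    output filenames."""
    result = []
    for item in items:
        if isinstance(item, (list, tuple, Target)):
            result.extend(autoexpand(item))
        else:
            result.append(item)
    return result
```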

Add option to exclude a Target from the default build

Craftr generates a default <target> [<target> [...]] declaration at the end of a Ninja build file. There should be an option to exclude a target from this declaration, so that it is only built if another target depends on it or if it is explicitly requested via craftr -b <target> or ninja <target>.
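The emission of the default declaration could then look like this sketch (the 'explicit' flag name is hypothetical):

```python
def default_declaration(targets):
    """Build Ninja's trailing 'default' line from a list of target dicts,
    skipping any target whose 'explicit' flag is set. Returns an empty
    string when every target is excluded."""
    names = [t['name'] for t in targets if not t.get('explicit')]
    return 'default {}\n'.format(' '.join(names)) if names else ''
```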

Add build subcommand to merge export and ninja step

Currently, there are two ways to build a project. You can derive the output build directory from code, but that can sometimes be rather complicated depending on your needs and dependent modules might not take this build directory into account.

The preferred way now is to use the current working directory as the build output directory, but each module should use a folder that is named after its module name (so that no conflicts in eg. object files can occur between modules).

You usually don't want to clutter your working tree with build artifacts, so you instead create a build directory inside it and run craftr from there.

mkdir build && cd build
craftr -c .. export && ninja

A convenient subcommand should be added to merge these steps and that can be called from the working directory rather than the build directory.

craftr build

There should be options to specify the build directory name, and whether the Ninja build file should be exported even if it already exists.

Split targets and build rules, like Ninja does

If you're declaring the same target with the same command over and over again, you will have the command duplicated every time in a new rule. We should split a Target into Rule and Build, just like Ninja does it. The compiler interface could look like this instead.

objects = compiler.toolset.CxxCompiler().objects(include=[...], ...)
objects.build(path.glob('src/*.cpp'))

setdefault() should accept dotted names, introduce setoption() function

The setdefault() function would be more useful if it also accepted dotted names and resolved them in the module's global variables. Additionally, there should be a setoption() function that does basically the same but always sets the value instead of only when it is not already present. This would make it easier to set options before a module is loaded. Compare:

# Current approach:
some = get_namespace('some_fancy_module')
some.debug = True
load_module(some)

# With setoption():
setoption('some_fancy_module.debug', True)
some = load_module('some_fancy_module')
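A sketch of how setoption() could resolve dotted names; the explicit options mapping here is a stand-in for the real per-module global namespaces:

```python
def setoption(options, name, value):
    """Resolve a dotted name like 'some_fancy_module.debug' into nested
    namespaces inside *options*, creating intermediate levels as needed,
    and set the value unconditionally."""
    *parts, last = name.split('.')
    node = options
    for part in parts:
        node = node.setdefault(part, {})
    node[last] = value
```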

Add option to pass a linker script for Linux/OSX linkers

Derived from issue #3, it should be possible to pass a linker script to linkers for Linux & OSX. Something like

from craftr.ext.compilers import get_platform_toolset as tools
ld = tools().Ld()
ld.link(
  inputs = objects,
  output = 'main',
  linker_script = path.local('link.ld'),
)

Support << for appending implicit_deps to a target

In some cases, you might use a rule function to generate a target, but that rule function does not support an option for additional implicit dependencies. Doing target << [one, two, three] should use craftr.expand_inputs() and add the expanded filenames to Target.implicit_deps.
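A minimal sketch of the operator (the expand_inputs() here is a simplified stand-in for craftr.expand_inputs()):

```python
class Target:
    """Sketch of Target.__lshift__ appending implicit dependencies."""

    def __init__(self, outputs=()):
        self.outputs = list(outputs)
        self.implicit_deps = []

    def __lshift__(self, other):
        self.implicit_deps.extend(expand_inputs(other))
        return self  # allow chaining: target << a << b


def expand_inputs(value):
    """Flatten a filename, a Target, or arbitrarily nested lists thereof
    into a flat list of filenames."""
    if isinstance(value, str):
        return [value]
    if isinstance(value, Target):
        return list(value.outputs)
    result = []
    for item in value:
        result.extend(expand_inputs(item))
    return result
```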

FileNotFound error should display a proper message

If you start a subprocess and the executable cannot be found, you'll see something like this:

FileNotFoundError: [WinError 2] The system cannot find the file specified

But there is no clue about which file could not be found.
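A sketch of a wrapper that attaches the program name to the error (hypothetical helper; the real craftr shell API may differ):

```python
import subprocess


def call(command, **kwargs):
    """Run a command, but turn an unhelpful FileNotFoundError into one
    that names the missing executable."""
    try:
        return subprocess.call(command, **kwargs)
    except FileNotFoundError:
        raise FileNotFoundError(
            'program not found: {!r}'.format(command[0])) from None
```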

Specify options on the command-line

It should be possible to pass options via the command-line, like

craftr -Ddebug=True

The value should be converted into the most applicable datatype. In the example above, it should result in a boolean True value to be set, numbers should be converted to numbers and everything else should just be set as strings.
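The conversion could be sketched like this (helper name hypothetical):

```python
def parse_option_value(value):
    """Convert the value part of a -Dkey=value argument to the most
    applicable datatype: booleans first, then numbers, falling back to
    the raw string."""
    lowered = value.lower()
    if lowered in ('true', 'yes', 'on'):
        return True
    if lowered in ('false', 'no', 'off'):
        return False
    for converter in (int, float):
        try:
            return converter(value)
        except ValueError:
            pass
    return value
```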

Fix recursive "from" imports

In craftr.ext.platform:

if sys.platform.startswith('win32'):
  # ...
  from craftr.ext.compiler import msvc as toolset

In craftr.ext.compiler.msvc:

from craftr.ext import platform

This results in an ImportError. The from craftr.ext import platform looks for an attribute platform in the craftr.ext module rather than importing craftr.ext.platform (the submodule import is only attempted when the module was not already imported).

Extended compiler support for GCC, Clang and MSVC

  • Support for compiler optimization levels (speed, space, debug)
  • ASM compiler interface for GCC, Clang and MSVC
  • MSVC support for platform toolset versions
  • Auto dependency support (see also #2)
  • Support for cross-compilers
  • Automated compiler detection via CC, CXX and LD environment variables

Add OpenCL support

You can use OpenCL with craftr 0.20.0-dev on Windows using compiler.nvcc. There should be support for other vendors (Intel, AMD) on Windows as well, and also support on Linux and OSX.

I still have to find out, but I think (and hope) that there is some pkg-config magic or a standard way to find all the required information to compile OpenCL code on Unix platforms. On Windows, I have no idea how to figure out which vendor must be used, so currently the only option is to hardcode that into the Craftfile.

from craftr.ext.compiler.nvcc import NvccCompiler
nvcc = NvccCompiler()
opencl = nvcc.get_opencl()
print(opencl['libs'], opencl['libpath'], opencl['includes'])

Ideally, there should be something to automatically detect the applicable vendor and compiler for the current platform, eg.

from craftr import path
from craftr.ext.compiler import get_platform_toolset
tools = get_platform_toolset()
cc = tools.CCompiler()
objects = cc.compile(
  sources = path.glob('src/*.c'),
  includes = tools.opencl.includes,
  libs = tools.opencl.libs,
  libpath = tools.opencl.libpath,
)

Add import_file() function

While you should usually import modules by their module name, it can sometimes be useful to import by filename. The import_file() function would extract the Craftr module identifier from the file and then import it by that identifier. If a module of that name already exists and is not the exact same file, an ImportError should be raised.
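Extracting the identifier could work by parsing the '# craftr_module(<name>)' declaration that Craftfiles carry on their first line (a sketch; the helper name is hypothetical):

```python
import re


def module_identifier_from_file(filename):
    """Read the '# craftr_module(<name>)' declaration from the first line
    of a Craftfile and return the module identifier."""
    with open(filename) as fp:
        first_line = fp.readline()
    match = re.match(r'#\s*craftr_module\(([\w.]+)\)', first_line)
    if not match:
        raise ValueError('no craftr_module() declaration in {!r}'.format(filename))
    return match.group(1)
```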

Use of LD variable to determine the linker

As far as I know, you usually use gcc/g++ rather than ld to link C object files into an executable or shared library, as the compiler driver does the job of invoking the linker for you. Craftr currently does not take the LD variable into account and just uses whatever C compiler is configured as the linker.
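A sketch of the proposed lookup order (function and default names are hypothetical):

```python
import os


def detect_link_driver(default='gcc'):
    """Pick the command used to drive the link step: honor LD when set,
    otherwise fall back to the configured C compiler (which invokes the
    real linker itself), then to a default."""
    return os.environ.get('LD') or os.environ.get('CC') or default
```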

Loading an extension module that already has an empty proxy fails

I'm not sure how to best describe it in the title... Consider the following two Craftfiles:

# craftr_module(project)
def baz():
  print("hello!")

# craftr_module(project.eggs)
from craftr import session, path
session.path.append(path.local('..'))  # To find 'project'
from craftr.ext.project import baz

Now, when you execute the project.eggs module without the project module in the search path, a proxy module is created for project so that project.eggs can be loaded. Importing baz then fails because the proxy module does not contain it! The project module can be found after appending the path to session.path, though, so craftr.ext.CraftrImporter should be capable of loading the file in a delayed fashion.

Support Python functions as Ninja targets

  • Implement the ability to pipe data into the stdin of a Python function invoked with the Craftr daemon
  • Delayed execution of Craftr modules when the initial execution phase is skipped
  • Verify that it works safely with parallel requests (assuming the called functions do
    not perform operations that could influence other requests/threads)
  • Verify that it works on Windows
  • Can we avoid patching sys.stdin, etc. globally in craftr/daemon.py?

I want to be able to run a Python function defined in a Craftfile from ninja. This could be realized with sockets. Eg. the generated ninja rule for a function dance in the Craftr module walz could look like this:

rule walz.do_the_dance
  command = craftr-com walz.dance localhost $CRAFTR_PORT -- $in -o $out
  description = walz.dance()

The craftr-com program would connect to Craftr via a socket and pass the program arguments along. These arguments would be passed as a list to the Python function craftr.ext.walz.dance(). Note that a delayed execution of the Craftr module could be necessary with this technique. (Craftr skips the execution of build definitions if it is only used to invoke Ninja with the -b option.)

Inside of Craftr, it could look like this:

import argparse

def dance(args):
  parser = argparse.ArgumentParser(prog='dance()')
  parser.add_argument('infiles', nargs='*')
  parser.add_argument('-o', default=None)
  args = parser.parse_args(args)
  print(args)
  # xxx: do stuff

do_the_dance = rules.pycom(
  function = dance,
  inputs = path.glob('src/*.c'),
  outputs = ['foo'],
)

Add return_() function

Calling this function from a Craftr module will stop the module execution and jump to the importing module.

Ninja variables are being escaped on Unix

E.g. if you use $in in a command, it is escaped to '$in' by shlex.quote() (called by craftr.shell.quote()), which in turn causes $in to be passed as a single argument instead of expanding to multiple arguments.
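A sketch of a quote function that special-cases bare Ninja variables (regex and behavior are an assumption about what the fix should look like):

```python
import re
import shlex

# Matches a bare Ninja variable reference such as $in, $out or ${out}.
_NINJA_VAR = re.compile(r'^\$(\w+|\{\w+\})$')


def quote(arg):
    """Quote a shell argument, but leave bare Ninja variables untouched
    so Ninja can still expand them to multiple arguments."""
    if _NINJA_VAR.match(arg):
        return arg
    return shlex.quote(arg)
```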

extends() should (re-)execute the module contents

Currently, extending a module works by adding a reference to the child module's DataEntity that will be accessed alternatively if a member couldn't be found in the original DataEntity. It would be cleaner in all aspects to execute the contents of the module that is being extended directly (maybe even again if the module that is being extended was already loaded) but in the namespace of the child module.

Refactoring of compiler modules

The compiler modules are currently based on compiler.base.BaseCompiler, which implements the objects(), executable(), etc. rule functions; these use the command() function that must be implemented by subclasses of BaseCompiler. The design is flawed, makes the code hard to maintain, and is not elegant.

There should be a base class, maybe called Translator or CommandBuilder, that implements an interface fulfilling the following constraints:

  1. Easy to implement the generation of build commands based on options
  2. Options should be definable on an object-level and call-level (options defined on call-level should override object-level options)
  3. Options should be able to have more influence than just on the build command (eg. an option autodeps could append -MMD -MF %%out.d to the build command but as well add a meta variable depfile to the result of the call that specifies %%out.d. The caller would know that this meta variable might be set and can use it accordingly)
  4. More might be following..

Note: There is already a craftr.utils.CommandBuilder at the moment, but it isn't really what I am looking for either. It will be removed when this issue is resolved.


An early draft of a Translator subclass:

class CCompiler(Translator):

  def _handle_autodeps(self, autodeps=True, depsfile='%%out.d'):
    if autodeps:
      self.result.meta['depsfile'] = depsfile
      self.result.command.extend(['-MMD', '-MF', depsfile])

  def _handle_includes(self, includes):
    self.result.meta.setdefault('includes', []).extend(includes)
    self.result.command.extend(['-I{}'.format(x) for x in includes])

  def _handle_defines(self, defines):
    self.result.meta.setdefault('defines', []).extend(defines)
    self.result.command.extend(['-D{}'.format(x) for x in defines])

All functions that start with _handle would be called for the options passed to Translator.translate(**options). To accept only the necessary arguments, introspection would be applied to figure out each handler's accepted argument names, and only those handlers that accept one of the passed options would be called. A _handle...() function would be invoked even if only one of its arguments is present in the options. Translator.result would be set to a TranslatorResult instance for the duration of a Translator.translate() call.
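The introspection-based dispatch described above can be sketched as follows (simplified: the result is a plain dict rather than a TranslatorResult instance):

```python
import inspect


class Translator:
    """Dispatch the options passed to translate() to every _handle_*
    method that accepts at least one of the given option names."""

    def translate(self, **options):
        self.result = {'command': [], 'meta': {}}
        for name, method in inspect.getmembers(self, inspect.ismethod):
            if not name.startswith('_handle_'):
                continue
            params = inspect.signature(method).parameters
            accepted = {k: v for k, v in options.items() if k in params}
            if accepted:
                method(**accepted)
        return self.result


class CCompiler(Translator):

    def _handle_autodeps(self, autodeps=True, depsfile='%%out.d'):
        if autodeps:
            self.result['meta']['depsfile'] = depsfile
            self.result['command'] += ['-MMD', '-MF', depsfile]

    def _handle_includes(self, includes):
        self.result['command'] += ['-I' + x for x in includes]
```

Note that the order in which handlers run follows inspect.getmembers() (alphabetical), so the generated command should not rely on option ordering.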

Craftr task enhancements

Currently, we have to pass a task to be executed via -f <task_name>. It would be nice if we only had to pass <task_name> without the -f option; Craftr would automatically recognize whether a name refers to a task or a target. Also, we should re-add a -d option that enables a dry run (currently only possible by specifying the null backend). That would greatly ease the execution of a task when no export is actually desired.

Auto Dependency support for all supported compilers

GCC and Clang support the -MMD -MF <filename>.d options. This will be more complicated with MSVC, which can only output the included header files to stdout using the /showIncludes option. Ninja is capable of taking that into account as well, but we need to figure out the include prefix from the current compiler; we can't assume everybody has an English version of the compiler installed.

More information here: Ninja auto dependencies

get_assigned_name() does not work with multipart expressions

from craftr import magic
name = magic.get_assigned_name(magic.get_frame()) + '_foo'
assert name == 'name_foo' 

That is how it is supposed to work, but it doesn't. Instead, you get

  File "<stdin>", line 9, in <module>
    name = magic.get_assigned_name(magic.get_frame()) + '_foo'
  File "/home/niklas/Desktop/craftr/craftr/magic.py", line 116, in get_assigned_name
    raise ValueError(message, op.opname)
ValueError: ('expected {LOAD_NAME, LOAD_FAST, STORE_NAME, STORE_FAST}', 'LOAD_CONST')

The instruction at the point where get_assigned_name() is called is followed by additional instructions for '_foo' (namely LOAD_CONST). The current implementation, however, requires the assignment sequence to appear immediately after the last instruction.

A possible solution could be to reset when an unexpected instruction is found and continue like that until we find STORE_NAME or STORE_ATTR. By the time we find either of these two instructions, we should have captured the complete assigned name. If POP_TOP is reached at any point, the result of the expression is not assigned and a ValueError should be raised.

Support for non-input dependencies in a target

Currently, we can only pass inputs = ... to a target() definition. A new parameter requires = ... should be added that can list files that are required for the build but are not inserted in place of the %%in placeholder.
