
Trieste


Documentation (develop) | Documentation (release) | Tutorials | API reference

What does Trieste do?

Trieste (pronounced tree-est) is a Bayesian optimization toolbox built on TensorFlow. Trieste is named after the bathyscaphe Trieste, the first vehicle to take a crew to Challenger Deep in the Mariana Trench, the lowest point on the Earth’s surface: the literal global minimum.

Why Trieste?

  • Highly modular and easily customizable design. Extend it with your own models or acquisition functions. Ideal for practitioners who want to use it in their systems, and for researchers wishing to implement their latest ideas.
  • Seamless integration with TensorFlow. Trieste fully leverages TensorFlow's automatic differentiation (no more writing gradients for your acquisition functions!) and its scalability on highly parallelized modern hardware (e.g. GPUs).
  • General purpose toolbox. Advanced algorithms covering all corners of Bayesian optimization and active learning - batch, asynchronous, constrained, multi-fidelity, multi-objective - you name it, Trieste has it.
  • Versatile model support out of the box. From gold-standard Gaussian processes (GPs; GPflow) to alternatives like sparse variational GPs, deep GPs (GPflux) or deep ensembles (Keras), which scale much better with the number of function evaluations.
  • Real-world oriented. Our Ask-Tell interface allows users to apply Bayesian optimization in a range of non-standard real-world settings where control over the black-box function is only partial. Built on TensorFlow and comprehensively tested, Trieste is production-ready.

Getting started

Here's a quick overview of the main components of a Bayesian optimization loop. For more details, see our Documentation, where we have multiple Tutorials covering both the basic functionality of the toolbox and more advanced usage.

Let's set up a synthetic black-box objective function we wish to minimize, for example the popular Branin function, and generate some initial data:

from trieste.objectives import Branin, mk_observer

observer = mk_observer(Branin.objective)

initial_query_points = Branin.search_space.sample(5)
initial_data = observer(initial_query_points)

The first step is to create a probabilistic model of the objective function, for example a Gaussian process model:

from trieste.models.gpflow import build_gpr, GaussianProcessRegression

gpflow_model = build_gpr(initial_data, Branin.search_space)
model = GaussianProcessRegression(gpflow_model)

The next ingredient is to choose an acquisition rule and acquisition function:

from trieste.acquisition import EfficientGlobalOptimization, ExpectedImprovement

acquisition_rule = EfficientGlobalOptimization(ExpectedImprovement())

Finally, we run the Bayesian optimization loop for a number of steps and check the obtained minimum:

from trieste.bayesian_optimizer import BayesianOptimizer

bo = BayesianOptimizer(observer, Branin.search_space)
num_steps = 15
result = bo.optimize(num_steps, initial_data, model)
query_point, observation, arg_min_idx = result.try_get_optimal_point()

Installation

Trieste supports Python 3.7+ and TensorFlow 2.5+, and uses semantic versioning.

For users

To install the latest (stable) release of the toolbox from PyPI, use pip:

$ pip install trieste

or to install from sources, run

$ pip install .

in the repository root.

For contributors

To install this project in editable mode, run the commands below from the root directory of the trieste repository.

git clone https://github.com/secondmind-labs/trieste.git
cd trieste
pip install -e .

For how to set up an installation that can run the quality checks, as well as other details, see the guidelines for contributors.

Tutorials

Trieste has a documentation site with tutorials on how to use the library, and an API reference. You can also run the tutorials interactively. They can be found in the notebooks directory, and are written as Python scripts for running with Jupytext. To run them, first install trieste from sources as above, then install additional dependencies with

$ pip install -r notebooks/requirements.txt

Finally, run the notebooks with

$ jupyter-notebook notebooks

Alternatively, you can copy and paste the tutorials into fresh notebooks and avoid installing the library from source. To ensure you have the required plotting dependencies, simply run:

$ pip install trieste[plotting]

The Trieste Community

Getting help

Bugs, feature requests, pain points, annoying design quirks, etc.: please use GitHub issues to flag bugs/issues/pain points, suggest new features, and discuss anything else related to the use of Trieste that in some sense involves changing the Trieste code itself. We positively welcome comments or concerns about usability, and suggestions for changes at any level of design. We aim to respond to issues promptly, but if you believe we may have forgotten about one, please feel free to add a comment reminding us.

Slack workspace

We have a public Secondmind Labs slack workspace. Please use this invite link and join the #trieste channel, whether you'd just like to ask short informal questions or want to be involved in the discussion and future development of Trieste.

Contributing

All constructive input is very much welcome. For detailed information, see the guidelines for contributors.

Citing Trieste

To cite Trieste, please reference our arXiv paper, where we review the framework and describe the design. Sample BibTeX is given below:

@misc{trieste2023,
  author = {Picheny, Victor and Berkeley, Joel and Moss, Henry B. and Stojic, Hrvoje and Granta, Uri and Ober, Sebastian W. and Artemev, Artem and Ghani, Khurram and Goodall, Alexander and Paleyes, Andrei and Vakili, Sattar and Pascual-Diaz, Sergio and Markou, Stratis and Qing, Jixiang and Loka, Nasrulloh R. B. S and Couckuyt, Ivo},
  title = {Trieste: Efficiently Exploring The Depths of Black-box Functions with TensorFlow},
  publisher = {arXiv},
  year = {2023},
  doi = {10.48550/ARXIV.2302.08436},
  url = {https://arxiv.org/abs/2302.08436}
}

License

Apache License 2.0


trieste's Issues

Add static model interface for inference only

Describe the feature you'd like
I would like a static (untrainable) model interface for use with the acquisition functionality, so that I can decouple the model training from the acquisition process.

This also improves reasoning power in the acquisition functionality, because you know the model won't be modified.

Configurable optimizer history size

Describe the feature you'd like
As a user, I want to be able to configure the history size so that I can track state with minimal memory footprint. For example "save only the 5 latest steps"

This could be implemented by changing track_state to an argument max_history_size: Optional[int] and using a collections.deque. If it would make for a simpler user interface, we could then convert the deque to a list before returning it.
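One way the deque idea could look, sketched with a hypothetical track_history helper (the name and signature are illustrative, not Trieste API):

```python
from collections import deque
from typing import Iterable, Optional


def track_history(steps: Iterable, max_history_size: Optional[int] = None) -> list:
    """Keep only the latest max_history_size records (all of them if None)."""
    # a deque with maxlen discards the oldest items as new ones arrive
    history = deque(steps, maxlen=max_history_size)
    # convert back to a list for a simple user interface
    return list(history)
```

For example, tracking ten steps with `max_history_size=5` keeps only the five latest.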

Support discrete search space with points of arbitrary rank

Describe the feature you'd like
As a user, I want to be able to use a discrete search space with points of arbitrary rank, such as images.

Describe alternatives you've considered
An alternative is to add another search space for higher ranks

Comprehensive unit tests

While everything has some level of testing, unit test coverage for the toolbox is not comprehensive. The missing tests are mostly either unhappy-path tests (including error handling), or cases where integration tests are used instead of unit tests.

The first task is to go through the code, look at the tests, and check what already has comprehensive test coverage.

Comprehensive unit tests exist for (list updated as tests are checked/added):

  • acquisition
    • combination #57
    • function
      • mes (except for full functional form ... see #52 )
      • everything except mes #62 (note we need to update the QuadraticMeanAndRBFKernel utility to have non-trivial minimum and variance, and update the tests respectively, for these to be good tests, see #65 )
    • rule
      • EfficientGlobalOptimization
      • ThompsonSampling
      • TrustRegion (rather than testing this more, we could pull it out into a separate thing and the current tests will cover it, and then we'll also get the benefit of using it everywhere)
  • models
    • config
    • model_interfaces
    • optimizer
  • utils
    • misc
    • objectives
    • pareto
  • bayesian_optimizer #17
  • data #60
  • observer #58
  • space #59 #80
  • type

Investigate performance of acquisition optimizer

As a user, I want the acquisition optimizer to be as efficient as possible. We currently use gpflow's Scipy optimizer with tfp's bijector for bounds. However, @joelberkeley has found in #74 that tfp's lbfgs_minimize takes fewer steps (though each of those steps takes longer). We may alternatively be able to use gpflow's Scipy with bounds, since it uses L-BFGS-B. This may be faster than using the tfp bijector.

Task is to investigate the performance impact of each option and implement the "best" (having decided what "best" means).

Batch Bayesian Optimization available?

Hi, I can find some keywords matching "BatchAcquisitionFunction" in the API reference, but see no batch BO related things in the tutorials. Wondering if batch BO is supported by this package?
I am a beginner and I'd appreciate it if anyone could reply.
Thanks in advance!

Refactor notebook utilities

As a user, I want the tutorials to use terse and clear plotting code, so that I can focus on the content.

As a maintainer, I want the plotting utils to be typed, documented, composable, and use Trieste natives, so that it's minimal effort to plot BO progress.

This overlaps with #29

Plotting utilities

As a researcher, I would like plotting utilities available for effortlessly plotting observer data, posteriors, acquisition functions and perhaps other things over search spaces, so that I can quickly and easily analyse Bayesian optimization progress.

It may be best to start simple, with tools only for two dimensional input spaces (or one dimensional if they're more common)

There are lots of examples of plotting in the notebooks. These will need updating to make use of this new functionality, and the unused plotting functionality in the notebooks utils removed

replace requirements.txt with setup.py's extras_require

Describe the feature you'd like
As a user, I want to be able to install the dependencies for the notebooks using the notation pip install trieste[notebooks] for optional dependencies, so that I don't need to specify the requirements.txt file path.

Is your feature request related to a problem? Please describe.
The requirements.txt files are unneeded, and the installation command for the notebooks is somewhat verbose.

Write a technical report for trieste

As a contributor, I want to publish a technical report to demonstrate and advertise what trieste has to offer as a cutting edge Bayesian optimization toolbox, to increase adoption, showcase Secondmind's research capability, and probably more.

As a user, I want a technical report to gain a deeper insight into what trieste is capable of, how to use it, and why it was designed as it was.

Test with mock models with non-trivial variance and minimum

As a contributor, I want the tests to use mock models with non-trivial variance and minimum, so that tests aren't degenerate and don't pass by accident (e.g. unit variance means unit standard deviation, so it's not possible to differentiate between the two, and a zero minimum means it's not possible to differentiate between a negative and a positive posterior minimum).

We could either make the variance and minimum fixed, or configurable.

The test utils will need to be replaced, so that we don't write more tests with this problem.

Any tests that use these test utilities will need to be updated accordingly.

Add pull request templates

Describe the feature you'd like
As a contributor, I want pull request templates so I know what's expected when I raise a pull request

As a maintainer, I want pull request templates so that there is enough information in pull requests to know how to triage and review them.

Check end-to-end support for multi-output models

Describe the feature you'd like
As a user, I'd like to be confident I can use a multi-output model with trieste in a complete BO loop, without having to use any hacks. As a maintainer, I'd like to know users can do this.

Describe alternatives you've considered
Either write a test, notebook (perhaps duplicate an existing notebook to show an alternative approach), or both.

Additional context
victor's notes

it probably does but this is worth checking. In particular:

  • does the model update work?
  • does the data update work?

Maybe this PR is about writing a unit test?

Multivariate Gaussian CDF function

Describe the feature you'd like
This CDF is not analytically available and requires an SMC algorithm. The SOTA algorithms are available here in Fortran: http://www.math.wsu.edu/faculty/genz/homepage

Feature: a tensorflow routine that, for given mu [n, 1] and Sigma [n, n] tensors and a set of thresholds T [n,], returns a scalar corresponding to the CDF value (aka the orthant probability).

Definition of done:
The CDF routine + a set of tests. Values from the tests can be obtained by running the SOTA Fortran code.

Is your feature request related to a problem? Please describe.
The multivariate Gaussian CDF is critical for some existing acquisition functions in the literature, for instance batch-EI, or for some active learning acquisitions. It is also useful for fitting classification models with sharp link functions (which could be used for the failure notebook).
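As a rough stand-in for the Fortran routines, a plain Monte Carlo estimate of the orthant probability can be sketched in NumPy (a hypothetical helper, far less accurate than Genz's SOTA SMC algorithms, but useful for generating test values and intuition):

```python
import numpy as np


def mvn_cdf_mc(mu, sigma, thresholds, num_samples=100_000, seed=0):
    """Monte Carlo estimate of P(X_i <= T_i for all i) for X ~ N(mu, Sigma)."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(sigma)                 # [n, n]
    z = rng.standard_normal((num_samples, len(mu)))  # [num_samples, n]
    samples = mu + z @ chol.T                        # reparametrized MVN draws
    # fraction of draws falling inside the orthant {x : x <= T}
    return float(np.mean(np.all(samples <= thresholds, axis=-1)))
```

For two independent standard normals with both thresholds at zero, the true orthant probability is 0.25, which the estimate recovers to within Monte Carlo error.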

Add linter to project

Describe the feature you'd like
As a maintainer, I want a linter to check the code on CI for common coding errors or bad style. Example of linters being pylint, flake8.

Is your feature request related to a problem? Please describe.
Some coding mistakes like accessing private members aren't picked up by mypy or black.

Tutorials of how to use and extend toolbox functionality

Describe the feature you'd like
As a user, I would like tutorials that explain how to work with the various pieces of functionality in trieste, such as examples of how to build my own acquisition function or rule, how to use my own model.

These can be links in the docs to existing tutorials that do these things, but in cases where there are no tutorials containing these, either create a specific ticket for each tutorial, or make the tutorial

Convenient API when using one model and dataset

As a user, I want an easy, intuitive, and safe way to use the toolbox when my problem is only for one model and dataset.

At the moment, users must use a single-entry dictionary, which is inconvenient and ugly boilerplate.

We have a number of candidate designs to achieve this. They are listed below. The summaries are a work in progress

Allow BatchReparametrizationSampler to use multiple batch sizes

As a maintainer, I want users to be able to use a single BatchReparametrizationSampler with multiple batch sizes, so that they have as much flexibility as possible.

This may be possible by extending the current eps with new N(0, 1) samples if the batch size increases, or taking a subsample if it decreases, thus ensuring that eps remains fixed for any given batch size.
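The eps-extension idea can be sketched as follows (a hypothetical NumPy class, not the actual BatchReparametrizationSampler):

```python
import numpy as np


class ResizableEps:
    """Cache N(0, 1) samples so eps stays fixed for any given batch size.

    A larger batch size appends fresh samples; a smaller one takes a prefix,
    so the first k columns are identical across all calls.
    """

    def __init__(self, sample_size: int, seed: int = 0):
        self._rng = np.random.default_rng(seed)
        self._eps = np.empty((sample_size, 0))  # [sample_size, 0] to start

    def __call__(self, batch_size: int) -> np.ndarray:
        cached = self._eps.shape[1]
        if batch_size > cached:  # extend with new N(0, 1) samples
            new = self._rng.standard_normal((self._eps.shape[0], batch_size - cached))
            self._eps = np.concatenate([self._eps, new], axis=1)
        return self._eps[:, :batch_size]  # subsample if the batch size decreased
```

Requesting batch sizes 2, then 3, then 2 again returns arrays whose shared columns are identical, which is the consistency property the sampler needs.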

Improve ease of use of acquisition function reducer

Describe the feature you'd like
As a user, I want to be able to compose acquisition function builders with minimal effort and minimal code.

Is your feature request related to a problem? Please describe.
At the moment, one needs to define a new class to create a custom reducer, but since only one function needs implementing, users should be able to specify just that function.

MC-based acquisition function supporting reparametrization trick

Class of acquisition function that relies on GP sampling. The sampling uses the reparametrization trick, so that:

def sample_with_trick(at, model, Z):
    mu, sigma = model.predict(at)
    samples = mu + chol(sigma) @ Z  # matrix multiply, not elementwise
    return samples

The acquisition function uses a set of Z values fixed over a single BO iteration but renewed for a new iteration.
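The fixed-Z behaviour can be illustrated with a runnable NumPy version of the trick (a toy sketch with illustrative names, using the posterior mean and covariance directly in place of a model):

```python
import numpy as np


def sample_with_trick(mu, sigma, z):
    """Draw correlated samples mu + chol(sigma) @ z from N(mu, sigma)."""
    chol = np.linalg.cholesky(sigma)
    return mu + chol @ z  # deterministic given z, so gradients flow through mu, sigma


mu = np.zeros((2, 1))  # posterior mean, [n, 1]
sigma = np.eye(2)      # posterior covariance, [n, n]
z = np.ones((2, 3))    # z fixed over a single BO iteration
first = sample_with_trick(mu, sigma, z)
second = sample_with_trick(mu, sigma, z)  # same z => identical samples
```

Because the samples are a deterministic function of z, repeated evaluations within one BO iteration agree exactly, which is what makes the acquisition function a well-defined deterministic objective for the optimizer.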

Notebook explaining what BO is and how it works

As a user new to Bayesian optimization, I would like a notebook that explains what Bayesian optimization is and how it works, so I can start using it to solve my optimization problems with little experience.

As a maintainer, I want the above so that people with little experience of BO use the project as well as those with lots of experience, thus promoting a larger community.

GPflow wrapper update may be incompatible with tf.function()

As mentioned on #54, there might be an issue with the way Trieste updates data in GPflow models when you want to use tf.function to speed up the prediction step (currently, Trieste only uses tf.function for model optimization, and relies on eager mode for predictions).

VariationalGaussianProcess.update() sets the model's data attribute to the new (X, Y) tuple. However, if model.predict_f() is wrapped in tf.function(), it will store the values of (X, Y) when first called in the cached graph.
To enable data update with tf.function(), model.data must be set to a tuple of tf.Variables, which are then .assign()ed rather than overwritten in VariationalGaussianProcess.update().

Add bibliographic reference for Thompson Sampling

As a user, I want the Thompson sampling acquisition rule to include a reference (or references) to the literature, so that I can learn more about how the algorithm works and get general context.

Here's a conversation about which references are good

joel: what's the best paper to reference for thompson sampling in BO?
joel: ... I'm looking for a good paper to reference for TS in the toolbox
joel: the "de facto" reference if you will
sattar: This one is the first one with theoretical results: https://arxiv.org/abs/1704.00445
sattar: For general reference I would suggest:
sattar: http://proceedings.mlr.press/v84/kandasamy18a/kandasamy18a.pdf
sattar: https://arxiv.org/abs/1706.01825
sattar: https://arxiv.org/abs/1704.00445

Prettify bibliographic references in docstrings

Describe the feature you'd like
As a user, I want to be able to easily read the bibliographic references in docstrings so that I can get context easily and find the source easily, and so that reading the documentation is more pleasant.

It would be particularly nice if the result included a clickable link to the reference

Is your feature request related to a problem? Please describe.
Currently, bibliographic references are simply pasted into the docstring. This works, but it's crude and not very pretty.

Describe alternatives you've considered
One solution is to use sphinxcontrib-bibtex

Function that extracts the minima from a Dataset

Describe the feature you'd like
As a user, I would like a function, or functions, that find the minimum observation and corresponding minimizers in a Dataset, perhaps along with their location in the Dataset, so that it's quick and easy to find the best optimization data point.

This should probably not be a method on the Dataset class, as the minima are not always relevant to a Dataset: with constraint data, for example, we're interested in boundaries, not minima.

Since Datasets are immutable, it may be worth caching the results.

Describe alternatives you've considered
It may be better to provide the approximate best. For example, if the observer is noisy, we may be interested in the best point and other points with similar values

Additional context
The notebooks provide examples of how this may be used. These will need updating to make use of this new functionality.
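A minimal sketch of such a function (the name arg_minima is illustrative, and it assumes the Dataset's [N, D] query points and [N, 1] observations arrays):

```python
import numpy as np


def arg_minima(query_points, observations):
    """Return the minimizer, the minimum observation, and its index.

    query_points: [N, D] array; observations: [N, 1] array, as in a Dataset.
    """
    idx = int(np.argmin(observations))
    return query_points[idx], observations[idx], idx
```

A noise-aware variant could instead return all points whose observations fall within some tolerance of the minimum, per the alternative described above.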

ThompsonSampling assumes noise-free, but is undocumented

Describe the bug
As a user, I want to know whether the ThompsonSampling rule assumes noise-free observations or not. At the moment, it returns only the unique points, which can be fewer than the num_search_space_samples argument implies.

Expected behaviour
ThompsonSampling should document how many samples it returns, and whether it's for noise-free or noisy observers. It's up to the person fixing this to find out which is more appropriate, change how the rule works as appropriate, and update the docs

Additional context
slack conversation

joelb: i've noticed that ThompsonSampling returns an indeterminate number of query points, because it takes the n best, then takes the unique ones of those. Is that right?
victor: correct
victor: in a noise-free context this makes sense because duplicating an observation would result in singular covariance matrix
victor: in a noisy context we should allow duplicated observations and taking only the unique subset is a bad thing
maybe we should remove this feature for now - we can still add it back for a NoisefreeThompsonSampling later

Infinite lazy collection (generator) of results

As a user, I want a more composable central optimizer, so I can customize various aspects of the main BO loop.

To this end, I suggest optimize return an infinite generator of results.

For example

  • I can choose why and when to stop the loop

    for res in optimize(...):
        if res.data.observation < some_minimum_required_value:
            return res

  • I can resume the loop without a new call to optimize (reducing typing, along with the chance for mistakes)

    results = optimize(...)
    first5 = itertools.islice(results, 5)
    # I examine first5 and decide I want more
    next3 = itertools.islice(results, 3)

  • I get logging for free
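A toy sketch of the proposal, with a trivial stand-in for the BO step (names and yielded values are illustrative; a real implementation would yield full records of data, model, and acquisition state):

```python
import itertools


def optimize(observer, candidates):
    """Yield an infinite lazy stream of results: the best observation so far.

    The caller decides when to stop, and can resume iteration at any time.
    """
    best = float("inf")
    for x in candidates:  # e.g. points chosen by an acquisition rule
        best = min(best, observer(x))
        yield best


# the caller controls stopping and resumption:
results = optimize(lambda x: (x - 3) ** 2, itertools.count())
first5 = list(itertools.islice(results, 5))  # examine, then decide to continue
next3 = list(itertools.islice(results, 3))   # resumes where we left off
```

Because islice consumes from the shared generator, the second slice continues from step 5 rather than restarting the loop.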

Model unit tests are impractically slow

Describe the bug
Running the model unit tests via tox -e tests takes much longer than unit tests should.

System information

  • OS: Ubuntu 18.04
  • Python version: 3.7
  • Trieste version: develop commit 3e57035
  • TensorFlow version: 2.3.1
  • GPflow version: 2.1.3

MC-based batch-EI

A Monte-Carlo implementation of EI that works for a single query point or a batch of query points. The returned EI is always a scalar. Compared to analytical EI, the parameter eta is optional.

This acquisition function uses the reparametrisation trick.
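A hypothetical NumPy sketch of MC batch-EI under these requirements (it takes the batch posterior directly rather than a model, and uses the reparametrisation trick; names are illustrative):

```python
import numpy as np


def mc_batch_ei(mu, sigma, eta, num_samples=50_000, seed=0):
    """Monte Carlo batch expected improvement E[max(eta - min_j f(x_j), 0)].

    mu: [B] posterior mean at a batch of B query points
    sigma: [B, B] posterior covariance of the batch
    Returns a scalar, whatever the batch size.
    """
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(sigma)
    z = rng.standard_normal((num_samples, len(mu)))  # fixed z: reparametrisation
    f = mu + z @ chol.T                              # joint samples, [S, B]
    return float(np.mean(np.maximum(eta - f.min(axis=-1), 0.0)))
```

For a single query point with a standard normal posterior and eta = 0, the analytic EI is 1/sqrt(2*pi) ≈ 0.399, which the estimate matches to within Monte Carlo error.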

Add utility for creating grid of points

As a developer/researcher, I would like a utility to create a grid of points, so that I can easily evaluate functions over this grid e.g. to test a function or plot a posterior.

We already have code that does this in the notebook utils, the tests for the branin utils, and the tests for EI. These, and other examples you find, will need updating to use the new version.

It should at least be able to create grids over 1D and 2D spaces, but arbitrary dimensions would be good if not a lot of effort. Other ideas: users should be able to control the number of points and the bounds (inclusive or exclusive), which can differ along each dimension.
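A possible sketch of such a utility in NumPy (name and signature are illustrative; bounds here are inclusive, and the dimension count is arbitrary):

```python
import numpy as np


def grid(lower, upper, num_points):
    """Regular grid of points over a box, bounds inclusive.

    lower, upper: length-D sequences; num_points: points per dimension.
    Returns an array of shape [num_points ** D, D].
    """
    axes = [np.linspace(lo, hi, num_points) for lo, hi in zip(lower, upper)]
    mesh = np.meshgrid(*axes, indexing="ij")      # D arrays of shape [n] * D
    return np.stack(mesh, axis=-1).reshape(-1, len(axes))
```

For example, a 3-point-per-dimension grid over the unit square yields nine points from [0, 0] to [1, 1].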

Ensure compatibility with tf.function

Describe the feature you'd like
As a user, I want to be able to use tf.function where appropriate with confidence that the code will work, so that I can improve the performance where possible, especially in production.

Is your feature request related to a problem? Please describe.
tf.function has a number of gotchas, some of which are outlined in Google's tf.function summary. These need to be avoided throughout the codebase.

Most of the code uses tensorflow ops (as opposed to e.g. numpy ops), but the models are mutable so those will need particular attention.

Minimize memory footprint of tracked history in optimizer

Describe the feature you'd like
As a user, I want saved state in the BayesianOptimizer to use minimal memory footprint.

This can be achieved by implementing __deepcopy__ on commonly copied objects.

Describe alternatives you've considered
We could let the user specify what to copy, such as just some parts of the models, but that's more involved and may change the API
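The __deepcopy__ idea can be sketched on a toy record class (illustrative, assuming the large shared data is never mutated in place):

```python
import copy


class TrackedRecord:
    """Share large immutable data across deep copies; copy only mutable state."""

    def __init__(self, data, params):
        self.data = data      # large, never mutated in place: safe to share
        self.params = params  # small, updated each step: must be copied

    def __deepcopy__(self, memo):
        # skip copying self.data, deep-copy only the mutable parameters
        return TrackedRecord(self.data, copy.deepcopy(self.params, memo))
```

Each tracked history entry then stores only a fresh copy of the small mutable state, while later mutations of the original record leave earlier snapshots intact.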

Add method to recover optimization from failed step

Describe the feature you'd like
As a user, I want to be able to easily recover the optimization process from a failed step. Based on the relevant notebook, the following API may be a good approach

class BayesianOptimizer:
    ...
    def recover(
        self,
        total_steps: int,
        history: List[Record[S]],
        acquisition_rule: AcquisitionRule[S, SP],
    ) -> OptimizationResult[S]:
        result, new_history = self.optimize(
            total_steps - len(history),
            history[-1].datasets,
            history[-1].models,
            acquisition_rule,
            history[-1].acquisition_state,
        )
        return OptimizationResult(result, history + new_history)

though we may wish to be able to substitute in a new model/state if desired

Document that autographing is assumed

As a user, I want to know whether autograph (the tf.function graph-conversion feature) is supported or not.

Document that we don't support it (as, for example, tf.Tensor.__bool__, which we use, doesn't work with it). We can add support for it later.

Make SearchSpace __contains__ broadcast

Describe the feature you'd like
To be able to use a tensor of points in `points in search_space` (i.e. broadcast the membership test over all points in `points`).

If the last dimension always corresponds to the search space dimension, points can be any rank
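A NumPy sketch of the broadcast membership test for a box space (a hypothetical helper, not the SearchSpace API):

```python
import numpy as np


def box_contains(lower, upper, points):
    """Broadcast membership test over all points.

    points: any rank, with the last dimension the search space dimension.
    Returns booleans of shape points.shape[:-1].
    """
    points = np.asarray(points)
    return np.all((lower <= points) & (points <= upper), axis=-1)
```

Reducing over only the last axis is what lets points of any rank (e.g. batches of images flattened to their final dimension) work unchanged.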

Scaling `Dataset.observations` in each BO iterations

Describe the feature you'd like
For some optimization processes (e.g. multi-objective optimization), scaling/transforming/normalizing each objective from R to [0, 1] is preferred or even necessary (not to make GP modelling easier, but, for instance, to make each objective function equally important for later operations). Usually this could be done by scaling Dataset.observations to [0, 1] (and then building the ProbabilisticModel), but I cannot think of an easy way to do this in Trieste with minor effort.

One way I can think of is to scale the data in create_bo_model before building the GP, but since the observer still receives raw data, the inner loop in BayesianOptimizer is not scaled properly, as can be seen here.

Describe alternatives you've considered

  • Scale inside the observer: dynamically updating the scaling is not really possible, as earlier data in the model will not get updated, due to the append-only style of the data.
  • Scale GP predictions according to the data: not sure this is mathematically equivalent when considering mean and variance; also, one needs to pass the scaling coefficient deep into the _acquisition_function to support this type of operation.

Test for full functional form of MES

MES is short of a test for its full functional form. It's good to know not only that MES has the correct maximum, but that it is MES everywhere. This is valuable because we may modify it, e.g. with a reducer or constraints, where the maximum may shift to another point.

Document and check at runtime all function shapes

As a user, I want to know what shape tensors are expected where, and what shape tensor I should expect from functions and values.

As a contributor, I want arg and return value shapes documented so that I can more easily add features that conform to the design, and reason about how to fit functionality together.

This probably means adding shapes to all values and functions with tensors in their signatures, and shape checks for function arguments (shape checks for function return values are unnecessary - tests are more suitable for that). This is already done over much of the code base, but not all of it.
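A minimal sketch of a runtime shape check of the kind described (a hypothetical helper; None entries mean "any size"):

```python
import numpy as np


def check_shape(array, expected, name="tensor"):
    """Raise ValueError unless array's shape matches the expected pattern."""
    actual = np.shape(array)
    ok = len(actual) == len(expected) and all(
        e is None or e == a for e, a in zip(expected, actual)
    )
    if not ok:
        raise ValueError(f"{name} has shape {actual}, expected {expected}")
```

For example, `check_shape(query_points, (None, 2), "query_points")` documents and enforces "any number of two-dimensional points" at the function boundary.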

Module privacy is inconsistent

The acquisition rule module depends on acquisition functions, and currently has to import those from the actual function module. The function module therefore needs to be visible, but is currently hidden (by omitting the module docstring). A similar problem may be present for the models package.

Task: Find and implement a consistent import tree and approach to docs visibility that is convenient and intuitive to users

Combine Box search spaces

Describe the feature you'd like
A method to combine an arbitrary number of box spaces into a single one. This is done simply by tiling the lower and upper bounds.

Is your feature request related to a problem? Please describe.
This feature is useful for batch rules.
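The tiling idea can be sketched as follows (a hypothetical helper; each box is a (lower, upper) pair of 1-D bound arrays):

```python
import numpy as np


def product_box(*boxes):
    """Combine box spaces by concatenating ('tiling') lower and upper bounds.

    Each box is a (lower, upper) pair; the result is the product space's
    (lower, upper) pair, whose dimension is the sum of the inputs'.
    """
    lowers, uppers = zip(*boxes)
    return np.concatenate(lowers), np.concatenate(uppers)
```

For a batch rule with batch size B, combining B copies of the same box gives the joint space over which the batch acquisition is optimized.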

Document our approach to NumPy arrays

As a user, I want to know whether Trieste supports NumPy arrays, and if so, what guarantees it has about compatibility (e.g. test coverage).

As a maintainer, I want to know what level of support I should include for NumPy arrays (e.g. should the source code support them, should I test for them).

These questions need answering and documenting, probably on the Trieste homepage.

Add project scope to contributing guidelines

Describe the feature you'd like
As a maintainer, I want the project scope included in the contributing guidelines so that I can clearly and effectively lead community contributions towards the goals in the project scope.

Check notebook syntax on CI

Describe the feature you'd like
As a developer, I want the notebooks to be checked for Jupytext syntax errors on CI, so that I know my changes haven't broken the notebooks.

This is very closely related to #36

Add doc build to CI

As a developer, I want the doc build to be run on CI, so that I know my changes haven't broken the doc build.

This is very closely related to #37, but also includes docstrings and building the website

Automatic link references in documentation

As a maintainer, I want to be able to write docs with minimal effort.

As a user, I want all links/cross-references in the docs to be accurate so that I know what everything means and I can trust the docs.

In this regard, @st-- has suggested adding a Sphinx default_role of "any". This would create links where it can from doc entries like foo, raising a warning where a link can't be found, or where there are multiple possible targets. To add this, we'd need to:

  • change all links that should only be formatted as code to use double backticks, so as to avoid links where there isn't meant to be one
  • fix all doc build warnings

This was originally proposed in #131, which includes a number of comments
