
conda-forge.github.io's Introduction


Overview

This repository

  • is the home of the source code of conda-forge's documentation.
  • is the home of the source code of conda-forge's status dashboard.
  • provides an issue tracker for conda-forge related questions and issues that are not specific to individual feedstocks.

If you have questions or need help, please check out our documentation for a list of ways to interact with us.

Improving the docs

  • You can help to improve the documentation! It is version-controlled in the conda-forge.github.io repository on GitHub.

  • The docs are built on GitHub Actions, which runs the .ci_scripts/update_docs script. We are glad you would like to contribute. To build the docs locally, follow the steps below:

  1. Fork the conda-forge.github.io repository to your own GitHub user account.
  2. Clone this fork onto your computer.
  3. Go into the main folder and run the following commands:
    • conda env create -f ./.ci_scripts/environment.yml
    • conda activate conda-forge-docs
    • For live builds, npm install && npm run start
    • For production builds, run .ci_scripts/update_docs
  4. Make and commit your changes.
  5. Submit a pull request to the main repository proposing your changes.

Code of conduct

We at conda-forge adhere to the NumFOCUS Code of Conduct:

  • Be kind to others. Do not insult or put down others. Behave professionally. Remember that harassment and sexist, racist, or exclusionary jokes are not appropriate for conda-forge.
  • All communication should be appropriate for a professional audience, including people of many different backgrounds. Sexual language and imagery is not appropriate.
  • conda-forge is dedicated to providing a harassment-free community for everyone, regardless of gender, sexual orientation, gender identity and expression, disability, physical appearance, body size, race, or religion. We do not tolerate harassment of community members in any form.

Thank you for helping make this a welcoming, friendly community for all.

Reporting guidelines

If you believe someone is violating the code of conduct, please report this in a timely manner. Code of conduct violations reduce the value of the community for everyone. The team at conda-forge takes reports of misconduct very seriously and is committed to preserving and maintaining the welcoming nature of our community.

Reports should be sent to [email protected], a private mailing list only accessible by the members of the core team. If your report involves a member of the core team, please send it to NumFOCUS following these instructions.

All reports will be kept confidential. Please have a look at the Reporting guidelines.

Enforcement: What happens after a report is filed?

conda-forge's team and/or our event staff will try to ensure your safety and help with any immediate needs, particularly at an in-person event. Once we have received a report through the relevant channels, conda-forge will make every effort to acknowledge receipt and take action. Have a look at the process described in What Happens After a Report is Filed?.

conda-forge dev meetings

We hold biweekly meetings every second Wednesday from 17:00-18:00 (UTC). Feel free to stop by! Up-to-date invites are always available in the conda.org community calendar. Look for the [conda-forge] core meeting events!

Our meeting notes record important points discussed during the meetings and serve as a record for upcoming meetings. We make use of HackMD and a template to create the meeting notes.

We use a GitHub Actions workflow to create an automated PR with the meeting notes template for each session, which is automatically published to our HackMD team account. During the meeting, attendees edit the HackMD document. After the meeting, the document is saved and the PR is synced with the changes by adding the sync-hackmd-notes label. Once satisfied, the PR is merged and the website is updated with the new meeting notes.

We encourage contributors to join the meetings and learn more about and from the community.

conda-forge.github.io's People

Contributors

aaishpra, bastianzim, beckermr, chrisbarker-noaa, cj-wright, conda-forge-admin, conda-forge-coordinator, croth1, djsutherland, ericdill, forgottenprogramme, github-actions[bot], h-vetinari, isuruf, jaimergp, jakirkham, mariusvniekerk, mfisher87, msarahan, ngam, ocefpaf, pelson, pmlandwehr, prachi237, prernasingh587, scopatz, ssurbhi560, viniciusdc, xhochy, zklaus


conda-forge.github.io's Issues

GPU Support?

I've got a couple packages I'm preparing for upload that rely on GPUs. I'm not up to speed on what open-source CI solutions offer, but would building against a VM w/ GPUs be supported? If it would require pitching in or donating to the project, I'm pretty sure I can figure some way to help.

Multibranch feedstocks

GDAL was the first feedstock to use multiple branches for managing independent versions; some of the administrative scripts need updating to handle multi-branch feedstocks:

  • teams
  • feedstocks repo
  • smithy update PR script

Policy: how to compile c libs

From conda/conda-build#779 (comment)

For C dlls and such, yes, I always put in a python build dependency (and often a note saying "HACK: this is how we get the correct compiler" and then pass --python command line args (or have conda-build-all pass those args for me).

Should probably get a selector to only apply to windows: - python x.x [win]
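
In meta.yaml terms, that comment corresponds to a line selector; an illustrative sketch (the exact requirements section depends on the recipe):

```yaml
requirements:
  build:
    - python x.x  # [win]
```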

The joys of windows...

Also something about the features and when what should be enabled (haven't looked into this...)

Automatically open issue/PR on version update

The idea here is to cut down the maintenance burden of feedstocks a little more. In particular, notify the user that a new release is out and/or make the needed changes. If we do make the changes, we need to make sure we are following our guidelines ( conda-forge/staged-recipes#139 ) very closely.

This may not be practical in all cases, but I think there are a number of cases where this would be beneficial and possibly quite doable. Here are some cases to consider.

In some cases, we may be able to use a webhook. In others, we may need to do some parsing. We will need to make some decisions on how we want to handle these cases or find some other way that fits well with what we already do.

When to package software which is already in the default conda channel

(@JanSchulz brought this up in #16 (comment))

I agree with @JanSchulz that we should avoid, as much as possible, adding packages to conda-forge that are available in the default channel.

However, we already have a few redundant packages (pyproj, shapely, geos, and more to come soon). The reason for this redundancy is that those packages are partially broken in the default channel.
(And we could not find a proper channel of communication to send the recipe patch back to them.)

Maybe, when fixing a default channel package we should allow the package addition here as long as there is a plan to send that fix back to the default channel, and to remove the package from conda-forge once that happens.

Blog piece

We should write something and ask continuum to post at their blog/page.

Pinging @rsignell-usgs (who gave the idea) and @jakirkham (who is probably the most engaged member so far 😉).

Consider repos which already have build recipes

Some repos ship with a conda recipe as part of the source - it would be nice to avoid duplicating the recipe, and just use that directly - perhaps using git submodules (which are horrible, but would work for this process). Updating a recipe would then be a matter of updating the submodule location.

I did something similar in https://github.com/pelson/package_with_continuous_delivery/tree/_build.

This might also give us a way to package packages for which no recipe is needed (because the conda skeleton approach works).

New feedstocks not being created in conda-forge/feedstocks

I've diagnosed the problem to the fact that conda_smithy.feedstocks.feedstock_repos('conda-forge') is returning a paginated, or at the very least stale, list of repos.

It could be that Travis is http caching the result, but that is hard to diagnose accurately - on my own machine (with the same github token) I'm getting the correct result... 😢

Terminate AppVeyor if a package is in the channel

When re-starting AppVeyor jobs that crashed due to connection problems (or any other random problem), the whole matrix gets rebuilt. Not sure if it is possible, but maybe we should run

- 'python ci_support\upload_or_check_non_existence.py .\recipe conda-forge --channel=main'

first and, if the package is already there, cancel the build.

Develop code to manage github teams

Looking for an example of code which, given a list of github handles, a repo, and a team name, creates/updates the GH team as appropriate using their API.

def manage_team(org, team_name, members, repo):
    <code here>

With inputs such as:

manage_team('conda-forge', 'matplotlib', ['pelson', 'tacaswell'], 'matplotlib-feedstock')

Given its existence, it makes sense to use pygithub for this.
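
At its heart, such a function diffs the desired member list against the team's current membership; a minimal sketch of just that logic (the actual GitHub API calls via pygithub are omitted):

```python
def team_membership_changes(current, desired):
    """Return (to_add, to_remove) given current and desired member handles."""
    current, desired = set(current), set(desired)
    return sorted(desired - current), sorted(current - desired)

# Hypothetical usage for the matplotlib team example above:
to_add, to_remove = team_membership_changes(
    current=["pelson"], desired=["pelson", "tacaswell"])
```

manage_team would then make one GitHub API call (e.g. pygithub's Team.add_membership) per handle in to_add, and the corresponding removal per handle in to_remove.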

Periodic failure

https://travis-ci.org/conda-forge/conda-forge.github.io/jobs/117396831

Arguments passed: Namespace(code=None, force_env=False, path='./scripts/update_teams.py', quiet=False, remaining_args=['./feedstocks_repo/feedstocks'], verbose=True)
Traceback (most recent call last):
  File "/home/travis/conda/bin/conda-execute", line 9, in <module>
    load_entry_point('conda-execute==0.4.1', 'console_scripts', 'conda-execute')()
  File "/home/travis/conda/lib/python3.5/site-packages/conda_execute/execute.py", line 177, in main
    exit(execute(path, force_env=args.force_env, arguments=args.remaining_args))
  File "/home/travis/conda/lib/python3.5/site-packages/conda_execute/execute.py", line 75, in execute
    with open(path, 'r') as fh:
FileNotFoundError: [Errno 2] No such file or directory: '/home/travis/build/conda-forge/conda-forge.github.io/feedstocks_repo/scripts/update_teams.py'

If anybody else sees this, please ping the build job in the issue. Some more evidence will be useful in diagnosing this one I think.

Handling various special compilation optimizations/architectures

Related #27

Building some low-level packages benefits significantly from special compiler options like enabling SSE and/or AVX instructions. These options may or may not exist for different target architectures. Also, in some cases, these features may end up being leveraged by the OS, so smart decisions must be made to make sure we don't incur a penalty. We should really think about how we want to approach this, as it will have an effect on things like BLAS, NumPy, SciPy, and other low-level libraries that do significant computation.

conda recipe linter

It would be useful if there were a recipe linter which checked some of the common pitfalls and asserts a certain standard of recipe.

Off the top of my head, it would be useful if we can:

  • verify that package name and version are specified
  • document the use of numpy *
  • verify that an about home and license are specified
  • assert that an MD5 is defined for file sources
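
A first cut of those checks could operate on the parsed meta.yaml dictionary. A hypothetical sketch (the dict shape mirrors meta.yaml's package/about/source sections):

```python
def lint_recipe(meta):
    """Return a list of lint messages for a parsed meta.yaml dict."""
    problems = []
    package = meta.get("package", {})
    if not package.get("name"):
        problems.append("package/name is missing")
    if not package.get("version"):
        problems.append("package/version is missing")
    about = meta.get("about", {})
    for field in ("home", "license"):
        if not about.get(field):
            problems.append("about/%s is missing" % field)
    source = meta.get("source", {})
    if source.get("url") and not source.get("md5"):
        problems.append("source/md5 is missing for a file source")
    return problems

msgs = lint_recipe({"package": {"name": "demo"}, "source": {"url": "x.tar.gz"}})
```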

What is our limit in terms of low level packaging

I have opted to break this out as an issue as this is a real world problem that deserves careful consideration before proceeding. In particular, sometimes we need newer versions of system tools. This was inspired by this issue ( conda-forge/staged-recipes#300 ).

One question is whether we package assemblers. This is interesting, as we currently do package yasm. We need yasm to build other dependencies (x264, ffmpeg, possibly other things in the future). Also, in the case of yasm, it is packaged as cross-platform (all platforms, even Windows), so I don't know that there is a way around it. We may need to add nasm in the future too.

There are also some cases we have found it better to package our own build tools like m4, bison, flex, libtool, automake, autoconf, pkg-config, etc. There are many reasons for this that range from the system versions being too old (often the case with Mac, sometimes Linux too), having more consistency across platforms, having more control of the build process, etc. The line between too low and acceptable to package is still pretty fuzzy in this case. For example, should ar be packaged? It isn't that complicated and it could be useful to have a newer version in some cases. Similar arguments could be added for other binutils too.

One thought might be we don't package tools that are OS specific. However, I don't expect that to hold as I think we definitely need to package patchelf and we will want that in conda-forge so that we can ensure we have the latest version (as it still hasn't hit 1.0, but it is getting close). There also has been discussion of having a newer version of clang for Mac, which would presumably require packaging. Though that point is far from settled.

Another thought might be we don't want to package standard system development tools. However, this point could be sort of contentious as we use gcc from a package at present. I expect this will be a topic of a fair amount of debate especially as we engage other conda-recipe communities that have opted for different build strategies.

Please feel free to share your thoughts on this point.

Policy: where should binaries go

E.g. if an environment should contain a binary which should be available to Python apps that run in that environment. E.g. pandoc, which should be available for nbconvert or pypandoc.

This isn't handled consistently in the conda-recipe repo: on Linux it is installed into PREFIX/bin, but on Windows sometimes into LIBRARY_BIN and sometimes into SCRIPTS.

In my Windows Python, LIBRARY_BIN is automatically added to the path (os.environ["PATH"] contains LIBRARY_BIN) but SCRIPTS is not. For this reason I would prefer that such apps go into LIBRARY_BIN, so that they are available even if I don't activate the env but call Python there directly.

I've no clue how Linux/OSX handle this...

Handling multiple versions

Some packages like Python, NumPy, gcc, and others benefit from providing multiple versions to ensure API/ABI compatibility. We should determine how to solve this and add the necessary logic for managing these cases. One thought was to have all versions in the same repo. Another was to have separate repos for each version (major, minor); patches and other lower versions would be an upgrade in that repo.

Related: #44

Adding a status page

After having some issues yesterday during deployment (due to a package having a . in its name), and thinking back to the fact that this problem has occurred before while trying to explain it to a user who expected everything to work, I was thinking maybe we would benefit from having a status page. This way we could transparently communicate the status of different parts of our pipeline. Additionally, it would be nice if it could be combined with our automation to announce issues.

Given how much we ❤️ repos here, one possibility that I noticed was a status page that uses a repo ( https://github.com/pyupio/statuspage ). We already are quite familiar with the process of automating interactions with repos. Errors in feedstock generation could be filed as issues with one label. Errors in webpage deployment could go under another label. Errors updating the feedstocks submodules could go under yet another label (thus solving this issue #70). As labels are required for an issue to show up in the status, we don't need to worry about users opening issues against this status repo and accidentally affecting the status.

This would also do double duty, as we could be notified as these problems occur, letting us resolve them more quickly. Also, it would provide this information to our users without distracting us from trying to get out a quick fix. Once we resolve an issue, we can close it, which will update the status. Alternatively, we can automate the closing of issues too. This won't leave us dependent on yet another service to get the job done. It will just rely on GitHub and whatever CIs are relevant.

We can certainly consider other ways to do this. Right now, I think this idea sounds like the easiest to setup and will nicely integrate with our existing workflows.

Notify when feedstock is empty

When a feedstock fails to generate and ends up empty, we now issue a warning in the Travis log. However, we could really benefit from a much louder notification (see comment). For instance, opening an issue against staged-recipes or similar.

Package naming policies

My 2-cents:

  1. First try the package's original name.
    A possible conflict would be packages like beautifulsoup4 (PyPI) and beautiful-soup (Anaconda name). I strongly believe that the majority of users cannot find the Anaconda package in a CLI search and end up installing the PyPI version.
  2. When conflicts arise, like a c-lib and its python bindings with the same name, add a py<c-libname>. For example: cdo and pycdo. (Or maybe python-cdo?)
  3. When adding a new package that has libraries used by other packages avoid naming it lib<package name>.

Note that Anaconda names the netcdf package libnetcdf. However, that package is more than just the netcdf libs. Same for gdal and libgdal, but in that case gdal is even more confusing because that is the Python bindings only, and libgdal is the rest of the package. To me this behavior is a bad mix of the Linux world, which splits packages into lib, dev, headers, etc., and the Python bias that we have when packaging non-Python packages.

(See the issues raised on #16.)

Make package listing more prominent on the webpage

Would be nice to have a link at the top near "ABOUT" and "CONTRIBUTE" that says something like "PACKAGES" and links to the package listing. This seems to get buried a bit now. With nearly 200 packages, we should be advertising that so people can see the extent immediately, and those looking for it can get there quickly.

One AppVeyor account per feedstock

The build queue on AppVeyor looks pretty backed up. Given this is all being run under one account, this is not too surprising. There are a number of ways to improve this. One might be to simply increase our bandwidth on AppVeyor (paid account). Though this could solve the problem temporarily, it would likely mean ratcheting into higher and higher bandwidths, becoming prohibitively expensive. Another alternative would be to create a separate account for each feedstock. This allows our bandwidth to increase linearly with our feedstocks and thus should remain maintainable even with large numbers. The only question then is how best to set up this sort of behavior.

Should we use gcc from the default channel for Linux (and maybe OS X)?

Most of the time we are OK using the compilers installed in the CIs because we all have similar build tools pre-installed in our machines. However, every now and then someone tries to use the packages in a docker image without those tools. (For example ioos/conda-recipes#723 and ioos/conda-recipes#700.) A few questions:

  • Will that be fixed using the gcc from the default channel?
  • What would be the downside of that?
  • How about OS X? Are we relying on clang or Homebrew gcc? Or does it not matter?

Dependencies on libraries

What should the policies be about dependencies on libs?

Key question: what system libs can be depended on.

Secondary question -- what should be done with libs needed:

  • statically link ?
  • ship dll, so, etc with package
  • provide a separate package for the lib (dll, so, etc.)

@JanSchulz wrote (in issue #16 ) (with my comments):

IMO: Linux: xserver but nothing else?

Plus the core libc of course -- essentially what Anaconda is built on.

Windows: no extra installed packages but the MS compilers, e.g. repackage r-tools and libs which are needed (tk, ...). No special libpath locations (old mpl recipe in conda-recipes),

+1

no extra installations of dependencies (e.g. python-igraph needs igraph (c lib) as dependency).

I don't get this -- did python-igraph require that you separately install igraph?

This would mean that we have to put c libs into the same conda channel as the python libs.

Yes -- that is definitely what should be done. If you are packaging a Python wrapper around a lib, the lib itself should be provided as a conda package.

Also interesting: what to compile in as a static dependency: I have the feeling that on windows pretty
much everything is included in the final package and not used via a dll file...

yeah, it's looking that way -- I assumed that part of the POINT of conda was to be able to provide libs that various packages could use. But I've been wrestling with this -- having to hack in a change to the PATH environment variable at run time -- ugly! But conda may have fixed this -- I think the Library dir is now added to the path in miniconda/anaconda -- but I haven't tested yet.

But if it can work, I think we should prefer that libs are installed as dlls by a separate package -- that way they can be used by multiple other packages. But statically link if it really is a lib used only by this one package.

Dependency tracking

I have been thinking about this for a long time. We need to have a way to visualize and handle dependency relations between feedstocks both at the overview level and at the maintainer level. The problem is still a bit amorphous, at least in my mind, but I hope by putting this out there we can get a good discussion going and figure out how to handle and manage issues like these and come up with good tools to help us.

Supporting non-standard CPython VC builds on Windows

The current system of tying the MSVC version on Windows to the CPython version is troublesome for people in some cases, in particular where people need the latest compiler features (e.g. C++11 features). There may be other relevant cases as well. The main purpose of this issue is to suggest that we include some (maybe even just one) compilation of a non-standard CPython VC build on Windows to simplify things for the conda community. Of course, making this change could disrupt the current feature landscape of Windows CPython, so it needs some thought on that point. This issue is also opened to get feedback from the conda community to determine which non-standard CPython VC builds would be valuable and thus worthwhile to support. At present, I am thinking a vc14 (i.e. Visual Studio 2015) variant of CPython 2.7.x (where x will always be latest) is almost certainly worthwhile just to get C++11 support. Though I am not sure if there are other worthwhile non-standard variants to consider.

cc @ukoethe @jasongrout @SylvainCorlay @msarahan @mcg1969

AppVeyor builds are not from conda-forge

I've known about the issue for a while, but I'm finally getting around to fixing the fact that all feedstocks live under my own AppVeyor user, e.g. pelson/matplotlib-feedstock.

It turns out that all AppVeyor repos must live under a real user, so there are no organisational level concepts there. As a result, I've created a "conda-forge" user on AppVeyor, and will slowly be transferring the repositories into that user.

The steps needed:

  • Register a conda-forge user on AppVeyor
  • Create an "all-members" team on the GH conda-forge org. Give that team global "restarting" abilities for all registered AppVeyor builds (there is no finer control than that short of some pretty extreme teaming - possible, but not desirable at this point)
  • Register each feedstock under the conda-forge user
  • Submit a PR for each feedstock which updates the conda-forge.yaml to contain an appropriate AppVeyor secret (since the secret will change between users)
  • Remove all of the old pelson/*-feedstock registrations (to avoid things being built many times)
  • Update all of the shields so they point to the right place

Maintainer teams for feedstocks

Not sure exactly how this works now so if I am off base, please feel free to correct my misunderstandings.

It seems that it would be very helpful to automatically add maintainers of feedstocks to conda forge GitHub teams (named after the feedstock) so they can easily merge PRs, deal with CI issues, and do other maintenance on the feedstock directly.

44 new feedstocks ...

I can generate pull-requests or whatever makes your life easier, but I'd like to move all my python conda packages into conda-forge.

They're currently uploaded to https://anaconda.org/gus and conda recipe is in https://github.com/anguslees/conda-*

Assuming that's within the vision of conda-forge, let me know what I can do to make this easier for you.

Package list (basically openstack nova/novaclient and its dependencies):
amqp
anyjson
cliff
cmd2
debtcollector
extras
iso8601
jsonpatch
jsonpointer
kombu
librabbitmq
linecache2
monotonic
netaddr
netifaces
nova
oslo.concurrency
oslo.db
oslo.log
oslo.messaging
oslo.middleware
oslo.rootwrap
paste
posix_ipc
prettytable
python-cinderclient
python-glanceclient
python-mimeparse
python-neutronclient
python-novaclient
retrying
rfc3986
routes
simplejson
sqlalchemy-migrate
tempita
testresources
testscenarios
testtools
traceback2
unittest2
warlock
websockify
wrapt

Adding organization info to extra

This was raised by a contributor who felt it was very important to list the organization that is maintaining the recipe. @ocefpaf proposed the spec below. Seems like we should discuss and agree on some strategy for whether organizations should be included.

extra:
  recipe-maintainers:
    - 183amir
  github-organization:
    - bioidiap

Policies

Need some detail on:

  • Package naming policies. Particularly with regards to divergence from anaconda. #18
  • Requesting to become a maintainer.
  • Removal of distributions from the conda-forge channel (perhaps those which are beyond a certain age and have never been downloaded)
  • Where to submit issues
  • How to resolve a package name dispute (to avoid name squatters)
  • Contribution guidelines (including the license of the recipes)
  • Advice on how to write quality recipes
  • Dependencies on system libraries #23
  • When to package software which is already in the default conda channel #22 (Answer: we are aiming to make conda-forge the canonical recipe source, so package away!)
  • Notification of originating packaging proposal back to source repository (so that developers are aware, and can contribute to, the packaging effort)
  • Packaged data limits (some packages ship huge datasets with themselves)
  • Dealing with miscreants who compromise the conda-forge channel's integrity (e.g. by echoing the BINSTAR_TOKEN)
  • How far do we go down the packaging rabbit hole (#81)

I don't think it is healthy to discuss these individual topics in this issue, and we should spin off new issues for those discussions (I'll update this issue with the appropriate links).

selecting text on chrome

When I select text on the website (e.g. the config addition), there is no selection marker (no blue background), but it seems the text is selected and is copyable.

It would be nice to get a visual response when I want to copy something.

Usage stats

It would be interesting and helpful to pull together the following stats into the conda-forge.github.io homepage:

  • total number of conda-forge downloads
  • total number of recipe maintainers
  • total number of packages
  • total number of contributors to all feedstocks
  • top maintainers based on number of feedstocks maintained
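
Once the raw numbers are fetched (presumably from the anaconda.org and GitHub APIs), aggregating them is simple. A sketch, assuming a hypothetical per-feedstock metadata shape with name, downloads, and maintainers keys:

```python
from collections import Counter

def summarize(feedstocks):
    """Aggregate homepage stats from a list of feedstock metadata dicts."""
    maintainers = Counter()
    for f in feedstocks:
        maintainers.update(f["maintainers"])
    return {
        "packages": len(feedstocks),
        "downloads": sum(f["downloads"] for f in feedstocks),
        "maintainers": len(maintainers),
        "top_maintainers": [name for name, _ in maintainers.most_common(5)],
    }

# Toy data standing in for real API responses:
stats = summarize([
    {"name": "numpy", "downloads": 1200, "maintainers": ["a", "b"]},
    {"name": "scipy", "downloads": 800, "maintainers": ["b"]},
])
```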

When to bundle libs with the python package?

As brought up in #18, it would be good to have a guideline as to when to package a C lib with its Python wrapper, and when to package it separately.

It seems to me the guideline is that if a lib is used by more than one package then it should be independently packaged -- i.e. libjpeg, libpng, etc.

On the other hand, there are Python packages that tightly bind a given lib, and keeping them totally in sync has its advantages -- i.e. pyproj bundles proj4, even though proj4 is used by gdal, etc. as well (and is a useful command line tool).

So maybe we need to simply follow the original author's lead here (though some packages have different bundling policies on Windows than Linux). And we may be able to influence the package as well.

For example, my py_gd package wraps libgd:

https://github.com/NOAA-ORR-ERD/py_gd

As libgd is generally available on *nix systems (it's used by PHP, among other things), I decided to package the lib separately for conda:

https://github.com/NOAA-ORR-ERD/orr-conda-recipes/tree/master/libgd

but in practice, nothing else in conda is using it (though it may some day...), and I used a recent libgd which is not in the Linux distros (at least not the ones more than 6 months old...)

so maybe it would be easier and more robust to simply bundle the lib and bindings together (and probably statically link them while I'm at it...)

(and no, these are not in conda-forge yet, but I would like to put them there)

Improved templating

Conda smithy has a hardcoded number of files in https://github.com/conda-forge/conda-smithy/blob/master/conda_smithy/configure_feedstock.py which are generated into the resulting feedstock. Remove the bespoke functions, and just generate all files in the https://github.com/conda-forge/conda-smithy/tree/master/templates directory.

For feedstocks which need to override the templates, allow a templates directory in the feedstock which can inherit from conda-smithy (e.g. {% extends "conda-smithy/README.tmpl" %}) for truly bespoke customisation.

Requires some knowledge of Jinja2.
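
A feedstock-level override could then be as small as the following (illustrative only; the badges block name is an assumption about how the shared template might be structured):

```jinja
{% extends "conda-smithy/README.tmpl" %}

{% block badges %}
{{ super() }}
Additional feedstock-specific badges here.
{% endblock %}
```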

Adding a gitter channel

As it looks like use of this is going to pick up substantially, I wonder if it would be worthwhile to add a gitter channel to field questions of people trying to get stuff up and running. It may also be useful for discussing some of the bigger issues about how this works and ties in with the existing conda ecosystem as things progress.

Policy: One recipe per repo or multiple recipes per repo?

[This is mainly pulled from: https://github.com/conda-forge/conda-smithy/pull/51#issuecomment-181910644 and the following discussion.]

Single recipe per repo

Pro

  • tooling for the builds (conda-smithy and the CI scripts) scales, as it is only one recipe in each repo and build attempt
  • fine grained permissions per recipe (=repo)
  • no broken builds due to unrelated packages
  • build status per package

Con:

  • needs tooling to setup (how does an outsider without rights in the conda-forge org setup a new recipe?)
  • multi-package changes need to be coordinated
  • high cost of learning all the setup steps for new contributors
  • high complexity of the setup scripts (e.g. setting up a new recipe needs changes in the conda-forge org which are managed by conda-smithy)
  • one ends up with hundreds of github repos -> finding the right repo for the package can be error prone for the user
  • higher cost when the CI scripts need updating (or a new python version comes out) -> all repos need to update
  • High monitoring cost for maintainers, as they have to look into a lot of repos for PRs/Issues instead of just one

Both

  • needs tooling to build a website which lists all packages
  • on the local file system: confusing, either because of hundreds of checked-out git repos or because of hundreds of recipe subdirectories

Multiple recipes per repo

Pro

  • changes spanning multiple packages work out of the box (conda buildall handles the build order)
  • changes by contributors can be reviewed by maintainers in one place
  • only one repo to maintain, so CI script updates happen in a single place on GitHub
  • easier for new contributors: simply open a PR adding the new recipe in a subdirectory
  • needs less tooling, as the usual GitHub UI can be used (add a new committer with push access to the repo) -> also more similar to the rest of the GitHub workflow

~

  • can be done with hand-crafted CI scripts (i.e. without conda-smithy). This gives more flexibility, but also means everything is manual from that point on...

Con

  • not sure how many recipes per repo the CI code and conda buildall can handle. As long as each PR submits one recipe (and all earlier recipes are already built) it's probably fine, but what happens when Python 3.6 is released...?
  • unrelated changes can break the builds for all recipes in the channel
  • build badges only exist for the whole repo, not for individual recipes
  • only broad permissions: committers with push access to all recipes versus normal contributors who can only submit PRs [this could be managed by splitting into multiple repos by theme, but that reintroduces some of the disadvantages of the single-recipe-per-repo policy]

[I will add pros and cons as they become available in the comments]

What to do with "pure" pypi packages?

There are a lot of Python packages that "just work" with plain old:

pip install the_package

These are pure Python packages with uncomplicated dependencies.

So some folks use a mixture of conda and pip to install things, but this gets ugly with dependency resolution, etc.

I've dealt with this so far by making conda packages for these, but there are a LOT of them -- and as this is an easy lift, it would be doable to automate it all. I've always thought Anaconda.org should have a PyPI bridge -- someone looks for a package, it's not there, so it looks for a PyPI package, builds a conda package on the fly, and away we go!

But that would require Continuum to do it, and maybe it would be too much magic.

But maybe we could have a set of conda packages that are auto-built from PyPI (conda skeleton mostly works), plus an automated system that periodically checks whether newer versions of any of them exist and auto-updates those. In theory, all we'd need to maintain by hand is the list of packages to monitor (and probably track whether each has been added to the default channel).

I started down this track before I discovered Obvious-CI -- running conda skeleton and building the package on the fly. Then I decided it was easier to simply maintain the half a dozen packages I needed by hand. But it would be nice to cover a much larger range of packages...
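The "monitor a list of packages" part could be sketched like this. This is a hypothetical outline, not an existing tool: a real version would fetch the latest releases from the PyPI JSON API and then re-run conda skeleton for the outdated ones; here the upstream versions are passed in directly so the logic is self-contained:

```python
# Compare the versions we last built against the latest versions reported
# upstream, and report which tracked packages need a rebuild.

def parse(version):
    """Turn '1.10.2' into (1, 10, 2) so comparison is numeric, not lexical."""
    return tuple(int(part) for part in version.split("."))

def outdated(built, latest):
    """Return tracked package names whose upstream release is newer than our build."""
    return sorted(
        name for name, ver in built.items()
        if name in latest and parse(latest[name]) > parse(ver)
    )

built = {"requests": "2.9.1", "six": "1.10.0"}
latest = {"requests": "2.9.1", "six": "1.11.0"}
print(outdated(built, latest))  # -> ['six']
```

Note the naive version parsing is only for illustration -- real PyPI versions (pre-releases, post-releases) would need a proper version parser.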

Thoughts?
