
beet's People

Contributors

actions-user, dependabot-preview[bot], dependabot[bot], doublef3lix, hexnowloading, michaelbrunn3r, misode, orangeutan, ritikshah, thewii, vberlier

beet's Issues

Are there built-in functions for load and tick tags?

I opened the source in VS Code and searched for load.json but couldn't find anything. I imagine I could do it easily enough by creating a new JSON file in the minecraft namespace, but I'm not exactly sure how to create any file in a data pack that isn't a .mcfunction.
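
A sketch of how this looks with beet's typed file containers (the demo:load function name is just a placeholder):

from beet import Context, Function, FunctionTag

def beet_default(ctx: Context):
    # Register a function and hook it into the vanilla load tag, which is
    # just another JSON file living in the minecraft namespace.
    ctx.data.functions["demo:load"] = Function("say loaded")
    ctx.data.function_tags["minecraft:load"] = FunctionTag(
        {"values": ["demo:load"]}
    )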

Context.project_name is all lowercase

Context.author, Context.project_description, etc. are passed through exactly as written in the configuration file; only Context.project_name is converted to all lowercase.

This makes using the original capitalized project name awkward, since you have to add a redundant meta variable.

Example:

# beet.json
{
  "name": "TheQuickBrownFox"
}

Context.project_name results in thequickbrownfox
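
Until then, a sketch of the redundant workaround mentioned above (the display_name meta key is arbitrary); plugins can read the capitalized name back through ctx.meta["display_name"]:

# beet.json
{
  "name": "TheQuickBrownFox",
  "meta": {
    "display_name": "TheQuickBrownFox"
  }
}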

Cannot escape variable in whitelist command.

When attempting to insert a variable into the whitelist command, the output contains the raw variable instead of its value.

For example:

name = "John"
whitelist add (name)

Outputs simply as whitelist add (name) instead of whitelist add John.

beet doesn't include all files

Files named [".mcfunction", "..mcfunction", "...mcfunction"] are not copied during beet build.

In vanilla it's possible to reference such functions, so loading an existing data pack and building it with beet can produce a pack that doesn't work at all.

Here's an example:

erwan@erwan-pc-portable-ubuntu:~/Documents/Dev/beet/examples/code_void$ cat beet.yaml 
data_pack:
  load: [.]
resource_pack:
  load: [.]

output: build


pipeline:
  - add_header
erwan@erwan-pc-portable-ubuntu:~/Documents/Dev/beet/examples/code_void$ cat add_header.py 


from beet import Context, Function



def beet_default(ctx: Context):
    for f in ctx.data.functions.keys():
        ctx.data.functions[f]=Function(f"say {f}")

erwan@erwan-pc-portable-ubuntu:~/Documents/Dev/beet/examples/code_void$ tree -la
.
├── add_header.py
├── beet.yaml
├── build
│   └── code_void_data_pack
│       ├── data
│       │   └── test
│       │       └── functions
│       │           ├── a
│       │           │   ├── a.mcfunction
│       │           │   ├── a.py.git.mcfunction
│       │           │   └── .mcfunction.mcfunction
│       │           ├── a.mcfunction
│       │           ├── a.py.git.mcfunction
│       │           ├── .mcfunction.mcfunction
│       │           └── .mcfunction.mcfunction.mcfunction
│       └── pack.mcmeta
└── data
    └── test
        └── functions
            ├── a
            │   ├── a.mcfunction
            │   ├── a.py.git.mcfunction
            │   ├── ...mcfunction
            │   ├── ..mcfunction
            │   ├── .mcfunction
            │   └── .mcfunction.mcfunction
            ├── a.mcfunction
            ├── a.py.git.mcfunction
            ├── ...mcfunction
            ├── ..mcfunction
            ├── .mcfunction
            ├── .mcfunction.mcfunction
            └── .mcfunction.mcfunction.mcfunction

10 directories, 23 files
erwan@erwan-pc-portable-ubuntu:~/Documents/Dev/beet/examples/code_void$ 
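
For what it's worth, the dropped names line up with how Python's os.path.splitext detects extensions: leading dots count as part of the filename rather than as an extension separator. This is only a guess at the cause (beet may match extensions differently), but it reproduces exactly which of the files above survive the build:

import os

# Leading dots are treated as part of the name, so a filename made up of
# nothing but dots plus "mcfunction" is seen as having *no* extension.
for name in [".mcfunction", "..mcfunction", "...mcfunction",
             ".mcfunction.mcfunction", "a.mcfunction"]:
    print(f"{name!r:26} -> {os.path.splitext(name)}")

# '.mcfunction'            -> ('.mcfunction', '')
# '..mcfunction'           -> ('..mcfunction', '')
# '...mcfunction'          -> ('...mcfunction', '')
# '.mcfunction.mcfunction' -> ('.mcfunction', '.mcfunction')
# 'a.mcfunction'           -> ('a', '.mcfunction')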

Tests fail on Windows because of newlines

Setup:

  • Fork current main branch

$ git clone https://github.com/OrangeUtan/beet
$ poetry install
$ poetry run pytest -vv

Excerpt:

snapshot = snapshot, directory = 'load_templates_plugin'
@pytest.mark.parametrize("directory", os.listdir("tests/examples"))  # type: ignore
    def test_build(snapshot: Any, directory: str):
        with run_beet(directory=f"tests/examples/{directory}") as ctx:
>           assert snapshot("data_pack") == ctx.data

⠇

E                   Drill down into differing attribute content:
E                     content: 'say hello world\r\n' != 'say hello world\n'
E                     - say hello world
E                     + say hello world
E                     ?                +
tests\test_examples.py:12: AssertionError

⠇

36 failed, 59 passed, 8 skipped in 4.06s

The tests seem to fail because of the difference in newlines: Windows \r\n vs. Linux \n.

Since the project uses snapshot testing, I don't know how easily that can be fixed. If it can be, I recommend adding an OS matrix to the GitHub Actions workflow (an example from my repo).
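
One low-effort mitigation, assuming the snapshots and the build output are compared as plain text (the helper name below is made up), would be to normalize line endings before comparing:

def normalize_newlines(text: str) -> str:
    # Collapse Windows CRLF and stray CR line endings into LF so snapshot
    # comparisons behave identically on every platform.
    return text.replace("\r\n", "\n").replace("\r", "\n")

assert normalize_newlines("say hello world\r\n") == normalize_newlines("say hello world\n")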

Support for datapack advancements

Datapack advancements are a convention from the Minecraft Datapacks Discord server (their article about it). The aim is to replace installation messages in an easily viewable and non-obtrusive way by putting them on a single advancement page.

The convention is by no means standardized, but a couple of people are already using it (myself included), and I think adding built-in support would be a helpful feature. Whether this feature can be implemented as-is or should be altered is open to discussion.

Configuration

An example configuration:

name: Apple
description: An example datapack
author: Oran9eUtan
pipeline:
  - beet.contrib.datapack_advancement
meta:
  datapack_advancement:
    author_namespace: Oran9eUtan
    author_skull_owner: oran9eutan
    icon:
      item: "minecraft:apple"
      nbt: "{...}"

I'm still unsure how exactly to configure the datapack namespace, since other plugins could benefit from it. For now I've put it in the datapack_advancement config, but making it a top-level setting would be possible as well.

Implementation

Root advancement:

Path: global:root

{
    "display": {
        "title": "Installed Datapacks",
        "description": "",
        "icon": {
            "item": "minecraft:knowledge_book"
        },
        "background": "minecraft:textures/block/gray_concrete.png",
        "show_toast": false,
        "announce_to_chat": false
    },
    "criteria": {
        "trigger": {
            "trigger": "minecraft:tick"
        }
    }
}

Author namespace advancement:

Requires:

  • author: Name of author
  • author_namespace: Unique namespace for author (e.g. oran9eutan)
  • author_skull_owner: Skull owner of author

Path: global:<author_namespace>

{
    "display": {
        "title": "<author>",
        "description": "",
        "icon": {
            "item": "minecraft:player_head",
            "nbt": "{SkullOwner: '<author_skull_owner>'}"
        },
        "show_toast": false,
        "announce_to_chat": false
    },
    "parent": "global:root",
    "criteria": {
        "trigger": {
            "trigger": "minecraft:tick"
        }
    }
}

Project / Datapack advancement

Requires:

  • project_name: Name of the project
  • project_description: Description of the project
  • icon: The icon of the project, as a JSON object
  • author_namespace
{
    "display": {
        "title": "<project_name>",
        "description": "<project_description>",
        "icon": <icon>,
        "announce_to_chat": false,
        "show_toast": false
    },
    "parent": "global:<author_namespace>",
    "criteria": {
        "trigger": {
            "trigger": "minecraft:tick"
        }
    }
}

This advancement can be put anywhere. Different users may want different locations, but a default location could be decided upon.
There is the issue of namespace clashes, so I prefer <author_namespace>:<project_namespace>/installed (e.g. oran9eutan:apple/installed).
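
A rough sketch of what the plugin entry point could look like for the project advancement, assuming the example configuration above (Advancement is beet's advancement file type; the config access and naming here are illustrative, not a final design):

from beet import Advancement, Context

def beet_default(ctx: Context):
    cfg = ctx.meta["datapack_advancement"]
    namespace = cfg["author_namespace"].lower()

    # Assumes project_name is already a valid resource location;
    # a real implementation would have to normalize it.
    ctx.data.advancements[f"{namespace}:{ctx.project_name}/installed"] = Advancement(
        {
            "display": {
                "title": ctx.project_name,
                "description": ctx.project_description,
                "icon": cfg["icon"],
                "announce_to_chat": False,
                "show_toast": False,
            },
            "parent": f"global:{namespace}",
            "criteria": {"trigger": {"trigger": "minecraft:tick"}},
        }
    )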

Proposal of simple cache system for files written to disk

Hello!

While using beet in a project with 7k+ generated files, I've noticed that every time the project is built, all files are deleted and regenerated from scratch. While this is fine for small amounts of data, it can become very slow as the packs get filled with more and more content. For instance, a binary tree generated with Context.generate.function_tree() can easily consist of thousands of <1 KB files, which are slow to write to disk.

As most modifications to a pack only involve a couple of files (especially when you're tweaking something very specific, such as a single position in a block model), a caching system would avoid many unnecessary I/O operations, which could potentially make the build process much more efficient.

Below is a proposal for how the cache system could work:


Idea

During each build, construct a dict where each key is a path to an output file relative to the build directory, and each corresponding value is the MD5 hash of that file¹.

This is how the process of writing to/retrieving from this cache would work:

  • [Build started]
    • Read the cache into memory.
    • Create an empty dict to hold the new cache that will be saved at the end of this build.
    • For each file written to disk:
      • Look for the output path in the cache dict.
        • If it's not found, write the file.
        • If it is found and its hash matches the file being written, don't write the file. Remove this entry from the cache.
        • If it is found but the hash doesn't match the file being written, overwrite the file². Remove this entry from the cache.
      • Add the path: hash pair to the future cache.
    • If, after writing all files, there are still entries left over in the cache that was read at the start, delete those files.
    • Write the "future cache" to disk.
  • [Build finished]

The result is that, at the end of each build, we would have a dict storing all the files that have been written to disk during this particular build, which the output of the next build could be checked against.
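
A minimal sketch of this write loop (assuming file content has already been serialized to bytes; beet's real file types each write themselves differently, and the cache location is illustrative):

import hashlib
import json
from pathlib import Path

def write_with_cache(files: dict, output: Path, cache_path: Path):
    # `files` maps output-relative paths to serialized file content (bytes).
    old_cache = json.loads(cache_path.read_text()) if cache_path.exists() else {}
    new_cache = {}

    for rel, content in files.items():
        digest = hashlib.md5(content).hexdigest()
        if old_cache.pop(rel, None) != digest:
            # New file or hash mismatch: (over)write it.
            target = output / rel
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_bytes(content)
        new_cache[rel] = digest

    # Whatever is left over from the previous build no longer exists: delete it.
    for rel in old_cache:
        (output / rel).unlink(missing_ok=True)

    cache_path.write_text(json.dumps(new_cache))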

Notes

¹ The hash could be computed differently according to the file type. For instance, PIL images could be hashed via Image.tobytes(); JSON data can use string conversion (json.dumps()); and so on. As different file types will never have matching paths (since they have different extensions), this should be fine, as long as hashing methods are consistent within each file type.

² Deleting the file and writing it has the same practical effect as overwriting, and may be easier to achieve consistently, given that different file types are also written differently to disk.

Considerations

  • While this is not an ideal solution, as it doesn't avoid processing files that may end up never being written to disk, it's simple and less "intrusive" than other methods I could think of, such as inspecting plugins themselves to see if any code within them has changed, so it looks to me like a good middle-ground solution.

  • There's already a powerful caching system in place that could be taken advantage of. In the same fashion that the .beet_cache already contains link and template caches, a new cache type called build/content could be created for that purpose.

  • It could be possible to disable the build cache with a special CLI command or setting in the config file, for small projects where the benefits of caching wouldn't outweigh the cost.

  • The cache should ideally be cleared when linking to a Minecraft world with beet link, or otherwise changing the build directory, as it would require all files to be generated again.

    • Alternatively, the output path of the last build could be stored in a different cache entry, and checked at the start of each build; if it's different than the current path, then the cached data isn't used.
    • For the same reason, building with zipped=True would (probably) not benefit from the cache at all.
  • Moving a large group of files to a different location in the pack would still cause identical files to be deleted and regenerated. As this is unlikely to happen very often, it's acceptable for the files to regenerate in this case; detecting that those files have moved would require much more advanced behavior and a lot more checks.

  • An alternative approach would be to use the existing beet watch mechanic to copy only the files where modifications were detected. While this is conceptually simpler, it seems much more difficult to implement, as altering a single source file may have a cascading effect over the output — tracing all changed files would be a daunting task. The advantage of implementing it at a lower level is that it's easier to detect duplicates, as at that point we already know exactly what the file will look like.

Benefits

  • Speedup proportional to the number of files that don't need to be rewritten
  • Simple and straightforward; doesn't require "intrusive" techniques such as inspecting code
  • Avoids many unnecessary I/O operations
  • There's already a caching system in place that could be taken advantage of.

Limitations

  • Files that may end up never being written to disk will still be processed by beet
  • Folders that were previously present but become empty (e.g. due to the output path of a given plugin being renamed) will remain in the build output, unless some check is put in place to delete them.
  • The cache will become outdated if manual file modifications are done to the build folder, which it won't be aware of. This means the build process will no longer be guaranteed to yield the "intended" output — the only way to achieve a 100% "clean" build would be to run beet cache --clear prior to building the project.

Now, I don't know if a feature like this is planned (or even already in the works), nor whether the way I described it would be suitable to implement. I'd love to hear some feedback on the idea though, so tell me what you think =)

Support for custom save locations

beet watch --link <World> seems to look in the .minecraft folder for saves.
My saves, however, are located in a custom location (the new launcher supports that).

Is there an option to change the folder in which beet will look?

`auto_yaml` doesn't merge `.yaml` function tags properly

When auto_yaml (beet.contrib.auto_yaml) is enabled, .yaml function tags aren't merged properly and are instead replaced by one another.

Replacing the .yaml tags with .json equivalents solves the problem and the tags are merged as expected, which is why I suspect this has something to do with auto_yaml.

Beet's default merge policy for model overrides does not sort

Model overrides have to be sorted in a particular order, otherwise some entries will override the ones below them. They must be sorted by all predicate properties, with custom_model_data taking priority:

  • custom_model_data
  • damage
  • damaged
  • pull
  • pulling
  • time
  • cooldown
  • angle
  • firework
  • blocking
  • broken
  • cast
  • lefthanded
  • throwing
  • charged

These could also be sorted only when they are applicable to the base model: pull for bows, angle for compasses, etc.
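
A hedged sketch of a sort key implementing this policy (the priority order is taken from the list above; whether it should apply unconditionally or only to applicable base models is the open question):

ORDER = [
    "custom_model_data", "damage", "damaged", "pull", "pulling", "time",
    "cooldown", "angle", "firework", "blocking", "broken", "cast",
    "lefthanded", "throwing", "charged",
]

def override_key(override: dict) -> tuple:
    # Compare each predicate property in priority order; overrides missing
    # a property sort before those that have it.
    predicate = override.get("predicate", {})
    return tuple(predicate.get(prop, -1) for prop in ORDER)

overrides = [
    {"predicate": {"custom_model_data": 2}, "model": "item/b"},
    {"predicate": {"custom_model_data": 1}, "model": "item/a"},
]
overrides.sort(key=override_key)  # item/a now sorts before item/b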
