mcbeet / beet
The Minecraft pack development kit.
Home Page: https://mcbeet.dev
License: MIT License
I opened the source in VS Code and searched for `load.json`, but couldn't find anything. I imagine I could do it easily enough by creating a new JSON file in the minecraft namespace, but I'm not exactly sure how to create any file in a data pack that isn't `.mcfunction`.
`Context.author`, `Context.project_description`, etc. are all as in the configuration file; only `Context.project_name` is converted to all lowercase. This makes using the original capitalized project name awkward, since it requires adding a redundant meta variable.

beet.json:

```json
{
  "name": "TheQuickBrownFox"
}
```

`Context.project_name` results in `thequickbrownfox`.
Code to reproduce:

```python
pack_obj = beet.DataPack(path=...)

for name, file_instance in pack_obj.content:
    if isinstance(file_instance, beet.NamespaceFile):
        print(file_instance.content)
```

This prints `None` for every single file, even when the files are not actually empty.
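This behavior would be consistent with lazily loaded files, where `content` stays `None` until the file is actually deserialized. A toy illustration of that pattern (an assumption about the cause, not beet's real implementation):

```python
from pathlib import Path
import tempfile

class LazyFile:
    """Toy stand-in for a lazily loaded file: content stays None until
    something actually asks for the text (not beet's real class)."""

    def __init__(self, source_path):
        self.source_path = Path(source_path)
        self.content = None  # nothing has been read from disk yet

    @property
    def text(self):
        if self.content is None:
            self.content = self.source_path.read_text()
        return self.content

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "a.mcfunction"
    path.write_text("say hello")
    f = LazyFile(path)
    print(f.content)  # None: the file is only bound, not loaded yet
    print(f.text)     # say hello: accessing the text triggers the load
    print(f.content)  # say hello: content is now populated
```

If that's what's happening, inspecting a loading accessor instead of the raw `content` attribute would show the real data.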
The `MultiCache.clear()` method just deletes the folder and its contents, and never calls the `clear()` method of the individual caches within it, so their finalizers are never run.
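A minimal sketch of the suggested fix, with hypothetical names since this is not beet's actual code: clear each child cache first so its finalizers run, and only then delete the backing folder.

```python
import shutil
from pathlib import Path

class MultiCacheSketch:
    """Toy stand-in for MultiCache (names are hypothetical)."""

    def __init__(self, directory, caches):
        self.directory = Path(directory)
        self.caches = caches  # mapping of cache name -> cache object

    def clear(self):
        # Clear every child cache first so their finalizers run...
        for cache in self.caches.values():
            cache.clear()
        self.caches = {}
        # ...and only then delete the folder and its contents.
        shutil.rmtree(self.directory, ignore_errors=True)
```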
When attempting to insert a variable into the whitelist command, it is output raw instead of being substituted. For example:

```
name = "John"
whitelist add (name)
```

outputs simply as `whitelist add (name)` instead of `whitelist add John`.
Files named `.mcfunction`, `..mcfunction`, and `...mcfunction` are not copied during `beet build`. In vanilla it's possible to reference such functions, so an existing data pack that uses them cannot be built with beet at all.
Here's an example:

```
erwan@erwan-pc-portable-ubuntu:~/Documents/Dev/beet/examples/code_void$ cat beet.yaml
data_pack:
  load: [.]
resource_pack:
  load: [.]
output: build
pipeline:
  - add_header
erwan@erwan-pc-portable-ubuntu:~/Documents/Dev/beet/examples/code_void$ cat add_header.py
from beet import Context, Function

def beet_default(ctx: Context):
    for f in ctx.data.functions.keys():
        ctx.data.functions[f] = Function(f"say {f}")
erwan@erwan-pc-portable-ubuntu:~/Documents/Dev/beet/examples/code_void$ tree -la
.
├── add_header.py
├── beet.yaml
├── build
│   └── code_void_data_pack
│       ├── data
│       │   └── test
│       │       └── functions
│       │           ├── a
│       │           │   ├── a.mcfunction
│       │           │   ├── a.py.git.mcfunction
│       │           │   └── .mcfunction.mcfunction
│       │           ├── a.mcfunction
│       │           ├── a.py.git.mcfunction
│       │           ├── .mcfunction.mcfunction
│       │           └── .mcfunction.mcfunction.mcfunction
│       └── pack.mcmeta
└── data
    └── test
        └── functions
            ├── a
            │   ├── a.mcfunction
            │   ├── a.py.git.mcfunction
            │   ├── ...mcfunction
            │   ├── ..mcfunction
            │   ├── .mcfunction
            │   └── .mcfunction.mcfunction
            ├── a.mcfunction
            ├── a.py.git.mcfunction
            ├── ...mcfunction
            ├── ..mcfunction
            ├── .mcfunction
            ├── .mcfunction.mcfunction
            └── .mcfunction.mcfunction.mcfunction

10 directories, 23 files
```
Setup:

```
$ git clone https://github.com/OrangeUtan/beet
$ poetry install
$ poetry run pytest -vv
```

Excerpt:

```
snapshot = snapshot, directory = 'load_templates_plugin'

    @pytest.mark.parametrize("directory", os.listdir("tests/examples"))  # type: ignore
    def test_build(snapshot: Any, directory: str):
        with run_beet(directory=f"tests/examples/{directory}") as ctx:
>           assert snapshot("data_pack") == ctx.data
⠇
E       Drill down into differing attribute content:
E       content: 'say hello world\r\n' != 'say hello world\n'
E       - say hello world
E       + say hello world
E       ?                +
tests\test_examples.py:12: AssertionError
⠇
36 failed, 59 passed, 8 skipped in 4.06s
```
Tests seem to fail because of newline differences: Windows `\r\n` vs Linux `\n`. Since the project uses snapshot testing, I don't know how easily that can be fixed. In case it can, I recommend adding an OS matrix to the GitHub Action (an example from my repo).
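One possible fix (a suggestion, not the project's current approach) is to normalize line endings before the snapshot comparison:

```python
def normalize_newlines(text: str) -> str:
    # Collapse Windows ("\r\n") and old-Mac ("\r") line endings to Unix
    # ("\n") so snapshot comparisons behave the same on every OS.
    return text.replace("\r\n", "\n").replace("\r", "\n")

# The failing comparison from the excerpt would pass after normalization:
assert normalize_newlines("say hello world\r\n") == "say hello world\n"
```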
Datapack advancements are a convention from the Minecraft Datapacks Discord server (their article about the issue). The aim is to replace installation messages in an easily viewable and non-obstructive way by putting them on a single advancement page.
The convention is by no means standardized, but a couple of people are already using it (myself included) and I think adding built-in support would be a helpful feature. Whether this feature can be implemented as is or should be altered is open to discussion.
An example configuration:

```yaml
name: Apple
description: An example datapack
author: Oran9eUtan

pipeline:
  - beet.contrib.datapack_advancement

meta:
  datapack_advancement:
    author_namespace: Oran9eUtan
    author_skull_owner: oran9eutan
    icon:
      item: "minecraft:apple"
      nbt: "{...}"
```
I'm still unsure how exactly to configure the datapack namespace, since other plugins could benefit from it. For now I put it in the datapack advancement config, but making it a top-level setting would be possible as well.
Path: `global:root`

```json
{
  "display": {
    "title": "Installed Datapacks",
    "description": "",
    "icon": {
      "item": "minecraft:knowledge_book"
    },
    "background": "minecraft:textures/block/gray_concrete.png",
    "show_toast": false,
    "announce_to_chat": false
  },
  "criteria": {
    "trigger": {
      "trigger": "minecraft:tick"
    }
  }
}
```
Requires:

- `author`: Name of the author
- `author_namespace`: Unique namespace for the author (e.g. `oran9eutan`)
- `author_skull_owner`: Skull owner of the author

Path: `global:<author_namespace>`
```json
{
  "display": {
    "title": "<author>",
    "description": "",
    "icon": {
      "item": "minecraft:player_head",
      "nbt": "{SkullOwner: '<author_skull_owner>'}"
    },
    "show_toast": false,
    "announce_to_chat": false
  },
  "parent": "global:root",
  "criteria": {
    "trigger": {
      "trigger": "minecraft:tick"
    }
  }
}
```
Requires:

- `project_name`: Name of the project
- `project_description`: Description of the project
- `icon`: The icon of the project, a JSON object
- `author_namespace`
```json
{
  "display": {
    "title": "<project_name>",
    "description": "<project_description>",
    "icon": <icon>,
    "announce_to_chat": false,
    "show_toast": false
  },
  "parent": "global:<author_namespace>",
  "criteria": {
    "trigger": {
      "trigger": "minecraft:tick"
    }
  }
}
```
This advancement can be put anywhere. Different users may want different locations, but a default location could be decided upon. There is the issue of namespace clashes, so I prefer `<author_namespace>:<project_namespace>/installed` (e.g. `oran9eutan:apple/installed`).
```json
{
  "data_pack": {
    "load": ["src", "vendor/*.zip"]
  }
}
```

With this beet.json file, all the files ending with `.zip` in the `vendor` folder should be loaded.
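A minimal sketch of how such glob expansion could work (a hypothetical helper, not beet's actual loader):

```python
from pathlib import Path

def expand_load_patterns(root, patterns):
    """Expand each "load" entry: glob patterns are matched against the
    project root, and entries without matches are kept as plain paths."""
    root = Path(root)
    resolved = []
    for pattern in patterns:
        matches = sorted(root.glob(pattern))
        resolved.extend(matches if matches else [root / pattern])
    return resolved
```

With `["src", "vendor/*.zip"]` this would yield `src` plus every zip archive under `vendor`, in sorted order.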
Hello!
While using `beet` in a project with 7k+ generated files, I've noticed that every time the project is built, all files are deleted and regenerated from scratch. While this is fine for small amounts of data, it can become very slow as the packs get filled with more and more content. For instance, a binary tree generated with `Context.generate.function_tree()` can easily consist of thousands of <1 KB files, which are slow to write to disk.
As most of the modifications done to a pack only involve a couple of files — especially when you're tweaking something very specific, such as a single position in a block model —, a caching system would avoid many unnecessary I/O operations, which could potentially make the build process much more efficient.
Below is a proposal for how the cache system could work:
During each build, construct a `dict` where each key is a path to an output file relative to the build directory, and each corresponding value is the MD5 hash of that file¹.

This is how the process of writing to/retrieving from this cache would work:

- Create an empty `dict` to hold the new cache that will be saved at the end of this build.
- As each output file is handled, add its `path: hash` pair to the future cache.

The result is that, at the end of each build, we would have a `dict` storing all the files that have been written to disk during this particular build, which the output of the next build could be checked against.
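A rough sketch of the proposed mechanism, assuming text files and a JSON file for the persisted cache (all names here are hypothetical, not beet API):

```python
import hashlib
import json
from pathlib import Path

def write_with_cache(build_dir, files, cache_path):
    """Sketch of the proposed build cache: only rewrite output files
    whose MD5 differs from the hash recorded during the previous build."""
    build_dir = Path(build_dir)
    cache_path = Path(cache_path)
    old_cache = json.loads(cache_path.read_text()) if cache_path.exists() else {}
    new_cache = {}
    written = []
    for rel_path, content in files.items():
        digest = hashlib.md5(content.encode()).hexdigest()
        new_cache[rel_path] = digest
        if old_cache.get(rel_path) != digest:
            target = build_dir / rel_path
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_text(content)
            written.append(rel_path)
    # Stale entries (present in old_cache but not in new_cache) would
    # also need their files deleted; omitted here for brevity.
    cache_path.write_text(json.dumps(new_cache))
    return written
```

Running it twice with identical content writes nothing the second time, which is the whole point of the proposal.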
¹ The hash could be computed differently according to the file type. For instance, PIL images could use the `Image.tobytes()` method; JSON data can use string conversion (`json.dumps()`), and so on. As different file types will never have matching paths (since they have different extensions), this should be fine, as long as hashing methods are consistent within each file type.
² Deleting the file and writing it again has the same practical effect as overwriting, and may be easier to achieve consistently, given that different file types are also written differently to disk.
While this is not an ideal solution, as it doesn't avoid processing files that may end up never being written to disk, it's simple and less "intrusive" than other methods I could think of — such as inspecting plugins themselves to see if any code within it has changed —, so it looks to me like a good middle-ground solution.
There's already a powerful caching system in place that could be taken advantage of. In the same fashion that `.beet_cache` already contains `link` and `template` caches, a new cache type called `build` or `content` could be created for this purpose.
It could be possible to disable the build cache with a special CLI command or setting in the config file, for small projects where the benefits of caching wouldn't outweigh the cost.
The cache should ideally be cleared when linking to a Minecraft world with `beet link`, or when otherwise changing the build directory, as that would require all files to be generated again.
- Projects built with `zipped=True` would (probably) not benefit from the cache at all.
- Moving a big bunch of files to a different location in the pack would still cause deletion/regeneration of identical files. As this is unlikely to happen very often, it's acceptable to have the files regenerate in this case, as detecting that those files have moved would be very advanced behavior, involving a lot more checks.
An alternative approach would be to use the existing beet watch
mechanic to copy only the files where modifications were detected. While this is conceptually simpler, it seems much more difficult to implement, as altering a single source file may have a cascading effect over the output — tracing all changed files would be a daunting task. The advantage of implementing it at a lower level is that it's easier to detect duplicates, as at that point we already know exactly what the file will look like.
- The cache could always be reset manually by running `beet cache --clear` prior to building the project.

Now, I don't know if a feature like this is planned (or even if it's already in the works), nor if the way I described it would be suitable to implement. I'd still love to hear some feedback on the idea, so tell me what you think =)
`beet watch --link <World>` seems to look in the `.minecraft` folder for saves. My saves, however, are located in a custom location (the new launcher supports that). Is there an option to change the folder in which beet will look?
When `auto_yaml` (`beet.contrib.auto_yaml`) is enabled, `.yaml` function tags aren't merged properly and are instead replaced by one another. Replacing the `.yaml` tags with `.json` equivalents solves the problem and the tags are merged as expected, hence why it is suspected that this has something to do with `auto_yaml`.
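For reference, a minimal function tag in the `.yaml` form of the kind that triggers the problem when two packs both provide it (the `.json` equivalent merges fine); the namespace and function name here are made up for illustration:

```yaml
# data/demo/tags/functions/load.yaml
values:
  - demo:init
```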
In README.md, in the section Toolchain, there is a link to some documentation (goes to https://vberlier.github.io/beet/toolchain/) but that page doesn't exist.
Model overrides have to be sorted in a particular order, otherwise some entries will shadow the ones below them. They must be sorted by all predicate properties, with `custom_model_data` coming first. Overrides could also only be sorted by the predicates applicable to the base model: `pull` for bows, `angle` for compasses, etc.
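A sketch of what such a sort could look like (a hypothetical helper, not beet's implementation; the ordering rule follows the description above, with `custom_model_data` as the primary key):

```python
def override_sort_key(override):
    """Sort model overrides so custom_model_data is compared first,
    followed by the remaining predicate properties in name order."""
    predicate = override.get("predicate", {})
    rest = sorted((k, v) for k, v in predicate.items() if k != "custom_model_data")
    return (predicate.get("custom_model_data", 0), rest)

overrides = [
    {"predicate": {"custom_model_data": 2}, "model": "item/b"},
    {"predicate": {"custom_model_data": 1, "pull": 0.65}, "model": "item/bow_pulling"},
    {"predicate": {"custom_model_data": 1}, "model": "item/a"},
]
overrides.sort(key=override_sort_key)
# custom_model_data 1 entries now come before 2, and the plain entry
# comes before the one with the extra "pull" predicate
```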