vedalai / neuro-amongus
Among Us Plugin for Neuro-sama
License: GNU General Public License v3.0
I want to separate the functionality of the plugin into the side that people will use to record their gameplay as training data for Neuro and the side that communicates with the AI to allow Neuro to play.
I've started by moving some of the Neuro-specific functionality into the Vision class (this keeps track of what Neuro sees throughout the round to be fed into the language model). We should probably use the Recorder class for gathering Frame data.
We will also probably want a separate class in the future to interface with the Python neural network, probably just open a socket and have some sort of communication protocol. This may end up being a different plugin(?), that instead just uses the data collected from this plugin to interface with the neural network.
Ideally there shouldn't be much code in the NeuroPlugin class either; it should instead just handle setting up these individual components.
https://github.com/sinai-dev/UnityExplorer
Pretty useful thing, I used it when modifying other unity games
I tried using it in Among Us with BepInEx/MelonLoader and it doesn't work: BepInEx didn't see it and MelonLoader throws an error.
There's a way to use it standalone, but it requires injecting the DLL and its dependencies and manually creating an instance.
Feel free to just close this issue; it's just an idea in case someone can make it work.
I haven't looked into this but I imagine there are other tasks which are broken too.
How do we want to handle venting?
One idea could be: Whenever a vent is entered, cycle through vents until we find one without nearby players (using pathfinding as well as vision, to ensure they're not just around the corner) and then vent out.
I'm interested to hear other implementation ideas if there are any, keeping in mind that we aren't using the neural network for this task.
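The vent-cycling idea above can be sketched as a small pure function. Everything here is illustrative: `neighbours` and `is_safe` are hypothetical stand-ins for the real vent connectivity and the pathfinding/vision safety check.

```python
# Hypothetical sketch of the vent-cycling idea: starting from the vent that was
# entered, walk the connected vents in order and exit at the first one with no
# players nearby. `neighbours` and `is_safe` are placeholder names.

def pick_exit_vent(entry_vent, neighbours, is_safe):
    """Return the first connected vent considered safe, or None to stay put."""
    for vent in neighbours(entry_vent):
        if is_safe(vent):
            return vent
    return None  # every exit is watched; remain in the vent for now

# Toy example: vent 0 connects to 1 and 2, and only vent 2 has no one nearby.
connections = {0: [1, 2], 1: [0], 2: [0]}
watched = {1}
chosen = pick_exit_vent(0, lambda v: connections[v], lambda v: v not in watched)
assert chosen == 2
```

A real version would also need a timeout or fallback for the case where every connected vent stays watched indefinitely.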
https://github.com/VedalAI/neuro-amongus/blob/main/Neuro/zzz_excluded/Impostor/ImpostorHandler.cs
how
For the purposes of faking tasks as impostor, we need to know a completion time for each step of each task. #8 added something similar (that was later removed) but the times don't seem to be accurate, plus it doesn't take into account all steps of the tasks. If someone could help us gather the information about how long each task should be faked for that would be much appreciated.
My suggestion is that we get a min and max completion time and at runtime we generate a random number from that range.
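That suggestion is easy to sketch: keep a (min, max) range per task step and draw a random duration from it at runtime. The table entries and times below are placeholders, not measured values.

```python
import random

# Sketch of the suggestion above: each task step gets a (min, max) completion
# time, and we fake the step for a random duration in that range.
# The task names and times below are hypothetical, for illustration only.
FAKE_STEP_TIMES = {
    ("FixWiring", 0): (2.0, 4.5),
    ("FixWiring", 1): (2.0, 4.5),
}

def fake_duration(task, step):
    lo, hi = FAKE_STEP_TIMES[(task, step)]
    return random.uniform(lo, hi)

d = fake_duration("FixWiring", 0)
assert 2.0 <= d <= 4.5
```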
When attempting to close the game, the WebSocketThread for whatever reason does not exit, so it gets stuck and the game freezes up and crashes.
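A common cause of this kind of hang is a thread blocked in a socket read that nothing ever interrupts. One shutdown pattern, sketched here in Python rather than the plugin's C#: close the socket from the other side so the blocking read returns, and mark the thread as a daemon so it can never keep the process alive on its own.

```python
import socket
import threading

def reader(conn, done):
    try:
        while conn.recv(1024):  # blocks until the peer closes
            pass
    except OSError:
        pass  # socket was torn down underneath us
    done.set()

a, b = socket.socketpair()
done = threading.Event()
t = threading.Thread(target=reader, args=(a, done), daemon=True)
t.start()

# Closing our end unblocks recv(), letting the thread exit cleanly.
b.close()
t.join(timeout=2)
assert done.is_set() and not t.is_alive()
```

The C# equivalent would be closing/disposing the WebSocket before joining the thread, and treating the thread as background so game shutdown is never blocked on it.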
Here https://www.innersloth.com/among-us-mod-policy/ it is stated that the mod stamp should be shown in-game and the disclaimer should be written somewhere on the mod's page.
Feel free to close this one if it is not relevant (if this doesn't count as a mod?), but I think since the plan is to widely distribute this to collect as much data as possible, all bases should be covered.
Provide working instructions for the plugin to work when using Steam Play.
I have tried several methods from https://www.reddit.com/r/SteamDeck/comments/whwgkp/comment/ilv4z04/?utm_source=share&utm_medium=web2x&context=3 but none of them have worked with the plugin yet.
As most of the interactables are being implemented in #68 for #66, the only thing remaining in this issue is interacting with cameras and admin.
I spoke to vedal about the idea of letting neuro interact with these two and we agreed that given the neural network we have and the way we are recording the data, there's no real way to get and parse data from admin, so it would be a waste of time to try to implement it.
However, as for cams, we can actually record player data for players visible on cams, and even spot murders/vents and report them to the language model.
Now we need to figure out how we're going to implement this: what will dictate when cams are opened, scrolled, etc.?
Currently the agent seems to like going to cams on Polus, so opening cams whenever in range might not be a good solution. I also don't really like the idea of using a random range. The same problem occurs for the meeting button; see issue #73.
I didn't want to just @ you in Discord. In case you didn't know, VS has a Copilot extension. Ctrl+Alt+Enter will show the extra solutions. Saves you some back and forth between VSCode and VS.
PathfindingHandler, MovementHandler, and TasksHandler all touch movement/pathfinding fields. These should be refactored.
MinigameHandler and TasksHandler should allow each task to specify when it should be opened, using something like MinigameSolver in #26.
VisionHandler is a mess; it should be refactored to reduce duplicated code, method complexity, and similar issues.
Document which version of BepInEx is being used: BepInEx 5 plugins are unsupported by BepInEx 6 at the moment.
How do we handle choosing when to press the meeting button? I'm interested to hear any ideas anyone might have.
Currently we record data about the meeting button position and whether or not the player used the interact button. We could use that and just feed it through the neural network, but that might lead to random meetings.
Interacting with DoorConsole, OpenDoorConsole, Ladder and PlatformConsole should happen automatically if the neural network's ForcedMoveDirection points towards the console.
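"Points towards the console" can be made concrete as an angle check between the network's ForcedMoveDirection and the direction from the player to the console. This is a hedged sketch: the 45-degree threshold and the function name are assumptions, not anything from the plugin.

```python
import math

# Sketch: interact when the angle between the forced move direction and the
# direction to the console is small. The 45-degree threshold is an assumption.

def points_towards(move_dir, player_pos, console_pos, max_angle_deg=45.0):
    to_console = (console_pos[0] - player_pos[0], console_pos[1] - player_pos[1])
    dot = move_dir[0] * to_console[0] + move_dir[1] * to_console[1]
    norm = math.hypot(*move_dir) * math.hypot(*to_console)
    if norm == 0:
        return False  # no movement or already on top of the console
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= max_angle_deg

assert points_towards((1, 0), (0, 0), (5, 1))       # roughly the same heading
assert not points_towards((1, 0), (0, 0), (-5, 0))  # walking away from it
```

A distance cutoff would be needed on top of this so far-away consoles pointing the right way don't trigger interactions.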
We should look into running the pathfinding on one or more separate threads so it's more efficient and can be used to pathfind to all 10 available tasks at once.
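The fan-out part of that idea looks something like the sketch below (in Python; the plugin itself is C#). `find_path` is a stand-in for the real pathfinder; only the dispatch pattern is the point. Note that in Unity, worker threads must not touch game objects directly, so the pathfinder would have to work on a copied navigation grid.

```python
from concurrent.futures import ThreadPoolExecutor

def find_path(start, goal):
    # placeholder pathfinder: pretend the path is just the two endpoints
    return [start, goal]

def paths_to_all_tasks(start, task_positions):
    """Compute paths to every remaining task concurrently."""
    with ThreadPoolExecutor(max_workers=len(task_positions)) as pool:
        futures = {name: pool.submit(find_path, start, pos)
                   for name, pos in task_positions.items()}
        return {name: f.result() for name, f in futures.items()}

tasks = {"wires": (3, 4), "shields": (8, 1)}
paths = paths_to_all_tasks((0, 0), tasks)
assert paths["wires"] == [(0, 0), (3, 4)]
```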
Should record more roles than just impostor in case we need them in the future. Recording whether or not the player is alive will be very useful for constructing training data.
I think the main example of this is the admin table on "The Skeld"
Currently, if a lights sabotage is called, Neuro can walk past a body and not see it because the lights are off, even though the report button lights up. We should consider bodies visible if the report button is lit up.
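The proposed rule reduces to a one-line predicate: a body counts as seen when either normal vision catches it or the report button is lit. The function and parameter names below are illustrative, not the plugin's actual API.

```python
# Sketch of the proposed visibility rule during a lights sabotage: vision is
# shrunk, but the report button still lights up near a body, so treat the body
# as visible when either condition holds. Names are hypothetical.

def body_visible(distance, vision_radius, report_button_lit):
    return distance <= vision_radius or report_button_lit

# Lights are off (tiny vision radius), but the report button is lit:
assert body_visible(distance=4.0, vision_radius=0.8, report_button_lit=True)
assert not body_visible(distance=4.0, vision_radius=0.8, report_button_lit=False)
```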
I have a question regarding contributing to the training of our AI model, specifically in relation to the neural network and dataset:
Found myself thinking about this:
We will also probably want a separate class in the future to interface with the Python neural network, probably just open a socket and have some sort of communication protocol. This may end up being a different plugin(?), that instead just uses the data collected from this plugin to interface with the neural network. (From #4)
My recommendation (from personal experience) would be to send JSON (Or, msgpack, if that's chosen for the recording plugin) wrapped in a netstring:
30:{"msg":"Hello","name":"World"},
The format is simply <Length>:<Message>, where <Length> is the length of <Message>. The : and , are convention, but could be any predetermined characters. (Also, there's no need to escape the message contents, which is a huge plus.)
Implementing netstrings is pretty straightforward, and I've done it in about a dozen languages at this point. They're resilient, fast, and simple to understand. Even better, we can implement this 100% natively in both C# and Python.
If this seems like a reasonable direction, I'll write up a C# implementation and a simple Python test server as a proof of concept.
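As a rough proof of concept, a netstring codec matching the format described above fits in a few lines of Python. This is a sketch, not the implementation offered in the comment.

```python
# Minimal netstring codec: <Length>:<Message>, where Length is the byte
# length of Message.

def encode(payload: bytes) -> bytes:
    return b"%d:%s," % (len(payload), payload)

def decode(buf: bytes):
    """Return (message, remaining bytes); raise ValueError on a bad frame."""
    length, sep, rest = buf.partition(b":")
    if not sep:
        raise ValueError("incomplete length prefix")
    n = int(length)
    if len(rest) < n + 1:
        raise ValueError("incomplete message")
    if rest[n:n + 1] != b",":
        raise ValueError("missing trailing comma")
    return rest[:n], rest[n + 1:]

frame = encode(b'{"msg":"Hello","name":"World"}')
assert frame == b'30:{"msg":"Hello","name":"World"},'
msg, rest = decode(frame)
assert msg == b'{"msg":"Hello","name":"World"}' and rest == b""
```

Because the length prefix is explicit, frames can be concatenated on a socket and `decode` can be called in a loop, consuming one message at a time from the buffer.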
When you are a ghost, we don't really care about using the neural network anymore; instead we should just go straight to the remaining tasks and finish them all.
Currently, we convert all recorded data into dictionaries to avoid unnecessary load times performing conversions, while defaulted data uses the data.proto_defaults definitions where applicable.
This becomes an issue when starting to convert during training, where a recorded sabotage stored as a dictionary goes through convert_dict, but the defaulted TaskData goes through convert_taskdata, resulting in misaligned array dimensions.
Attached is a recording with some sabotage data if necessary for testing.
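One possible direction for the fix, sketched below under heavy assumptions: route defaulted values through the same dictionary-based conversion as recorded values, so both paths emit vectors of the same shape. `convert_dict`, the field list, and the default dict here are illustrative stand-ins for the real convert functions and data.proto_defaults definitions.

```python
# Hedged sketch: one conversion path for both recorded dicts and defaults,
# so the resulting arrays always have matching dimensions. All names and
# fields are hypothetical placeholders.

TASKDATA_DEFAULT = {"direction_x": 0.0, "direction_y": 0.0, "active": 0.0}
FIELDS = ["direction_x", "direction_y", "active"]

def convert_dict(d, fields):
    # flatten a dict into a fixed-order feature vector
    return [float(d[f]) for f in fields]

recorded = {"direction_x": 0.5, "direction_y": -0.5, "active": 1.0}
rec_vec = convert_dict(recorded, FIELDS)
def_vec = convert_dict(TASKDATA_DEFAULT, FIELDS)
assert len(rec_vec) == len(def_vec)  # shapes align by construction
```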
Putting this here as a reminder so I don't forget.
It would be useful if we had the ability to force the impostor for testing.
If someone wants to open a pr feel free, otherwise I will add this later.
So the idea behind recording player data is that this data will be fed to a ML algorithm, and we have to record both the input (environment info) and output (resulting action) values. It is important to figure out exactly what data we need beforehand, so I'm creating this issue as a hub for discussing it.
We currently save:
Values for input:
Is Imposter
Kill Cooldown
Direction to nearest task
Whether an emergency task is active
Direction to nearest vent
Direction to nearest body
Whether a body can be reported
Direction and position of nearby players
Values for output:
Movement direction (last saved, meaning Neuro won't be able to stay in one place)
Whether should report
Whether should vent
Whether should kill
Whether should sabotage
Whether should close doors
The things I think need changing now are:
Those are just the things I noticed, there could be more stuff that needs to be addressed
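For discussion purposes, the recorded values listed above could be modeled as a pair of data structures like the ones below. Field names and types are assumptions for illustration; the real recorder serializes its own Frame format.

```python
from dataclasses import dataclass, field

# Sketch of the recorded frame, mirroring the input/output values listed
# above. All names and types here are hypothetical.

@dataclass
class FrameInputs:
    is_impostor: bool = False
    kill_cooldown: float = 0.0
    nearest_task_direction: tuple = (0.0, 0.0)
    emergency_task_active: bool = False
    nearest_vent_direction: tuple = (0.0, 0.0)
    nearest_body_direction: tuple = (0.0, 0.0)
    can_report_body: bool = False
    nearby_players: list = field(default_factory=list)  # (direction, position) pairs

@dataclass
class FrameOutputs:
    move_direction: tuple = (0.0, 0.0)  # last saved movement direction
    report: bool = False
    vent: bool = False
    kill: bool = False
    sabotage: bool = False
    close_doors: bool = False

frame_in = FrameInputs(is_impostor=True, kill_cooldown=12.5)
assert frame_in.is_impostor and frame_in.nearby_players == []
```

Writing the schema down like this makes it easier to spot missing fields (e.g. there is currently no way to record "stand still" as an output, as noted above).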
The Recorder class takes the current game state and serializes it. How do you plan on using the information obtained from it? (I assume for training?)
It would be useful to know the purpose of this class, as well as the design process behind it, so we can maybe add more fields to the frame if they are relevant.