
neuro-amongus's People

Contributors

alekso56, alexejhero, ebro912, enderinvader, jbomhold3, js6pak, krogenth, liampwll, linkis20, morgul, oleg20111511, owobred, scrubn, taflaxx, tokidokitoky, vedal987

neuro-amongus's Issues

Separate plugin functionality

I want to separate the plugin's functionality into two sides: one that people will use to record their gameplay as training data for Neuro, and one that communicates with the AI to let Neuro play.

I've started by moving some of the Neuro-specific functionality into the Vision class (this keeps track of what Neuro sees throughout the round to be fed into the language model). We should probably use the Recorder class for gathering Frame data.

We will also probably want a separate class in the future to interface with the Python neural network, probably just open a socket and have some sort of communication protocol. This may end up being a different plugin(?), that instead just uses the data collected from this plugin to interface with the neural network.

Ideally there shouldn't be much code in the NeuroPlugin class either; it should instead just handle setting up these individual components.
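
For illustration, a minimal sketch of what that could look like (the BepInEx base-class boilerplate is elided, and NetworkBridge is a hypothetical future component, not existing code):

// Sketch only, not the current code: the plugin entry point just constructs
// and wires the components; behavior lives in Vision, Recorder, and
// (eventually) a NetworkBridge to the Python side.
public class NeuroPlugin
{
    public Vision Vision { get; private set; }
    public Recorder Recorder { get; private set; }

    public void Load()
    {
        Vision = new Vision();     // tracks what Neuro sees during the round
        Recorder = new Recorder(); // gathers Frame data for training
        // Later: NetworkBridge opening a socket to the Python neural network.
    }
}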

Consider using UnityExplorer?

https://github.com/sinai-dev/UnityExplorer

It's a pretty useful tool; I've used it when modifying other Unity games.

I tried using it in Among Us with BepInEx/MelonLoader and it doesn't work: BepInEx didn't see it and MelonLoader throws an error.
There's a way to use it standalone, but that requires injecting the DLL and its dependencies and manually creating an instance.

Feel free to just close this issue; it's just an idea in case someone can make it work.

(Discussion + PR welcome) Venting mechanic

How do we want to handle venting?

One idea could be: Whenever a vent is entered, cycle through vents until we find one without nearby players (using pathfinding as well as vision, to ensure they're not just around the corner) and then vent out.

I'm interested to hear other implementation ideas if there are any, keeping in mind that we aren't using the neural network for this task.

https://github.com/VedalAI/neuro-amongus/blob/main/Neuro/zzz_excluded/Impostor/ImpostorHandler.cs
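
A minimal sketch of the vent-cycling idea above (Vent and ConnectedVents are stand-ins for the real game types, and the nearby-player check is passed in as a delegate; none of this is the actual API):

using System;
using System.Collections.Generic;

// "Vent" is a stand-in for the real game type, not the actual API.
public class Vent
{
    public List<Vent> ConnectedVents { get; } = new();
}

public static class VentLogic
{
    // Breadth-first walk over the vent network starting from the entered
    // vent; returns the first reachable vent with no players nearby.
    public static Vent FindSafeExit(Vent entered, Func<Vent, bool> anyPlayerNear)
    {
        var visited = new HashSet<Vent>();
        var queue = new Queue<Vent>();
        queue.Enqueue(entered);

        while (queue.Count > 0)
        {
            Vent current = queue.Dequeue();
            if (!visited.Add(current)) continue;

            // anyPlayerNear should combine vision with pathfinding distance,
            // so someone just around the corner still counts as nearby.
            if (current != entered && !anyPlayerNear(current)) return current;

            foreach (Vent next in current.ConnectedVents) queue.Enqueue(next);
        }

        return entered; // no safe exit found; exit where we entered
    }
}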

Add support for Airship

  • Need to improve the "GetLocationFromPosition" function from Utils/Methods
  • Also need to fix the size of the generated pathfinding grid

Obtain completion times for all tasks (and all steps of those tasks)

For the purposes of faking tasks as impostor, we need to know a completion time for each step of each task. #8 added something similar (that was later removed), but the times don't seem to be accurate, plus it doesn't take into account all steps of the tasks. If someone could help us gather information about how long each task should be faked for, that would be much appreciated.

My suggestion is that we get a min and max completion time and at runtime we generate a random number from that range.
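
As a sketch, the runtime side could be as simple as this (names are hypothetical; the actual min/max values still need to be gathered):

using System;

public static class FakeTaskTimer
{
    private static readonly Random Rng = new();

    // Sample a fake completion time uniformly from the recorded
    // [min, max] range for a given task step.
    public static float SampleDuration(float minSeconds, float maxSeconds)
        => minSeconds + (float) Rng.NextDouble() * (maxSeconds - minSeconds);
}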

BUG: Game refuses to close

When attempting to close the game, the WebSocketThread for whatever reason does not exit, so the game hangs and eventually crashes.
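
One possible fix, assuming the thread currently blocks forever on a socket read: make it a background thread (which can never keep the process alive) and signal it to stop on shutdown. A sketch, not the actual WebSocketThread code:

using System.Threading;

var cts = new CancellationTokenSource();

var socketThread = new Thread(() =>
{
    while (!cts.Token.IsCancellationRequested)
    {
        // ... poll the websocket with a timeout instead of blocking forever
    }
})
{
    IsBackground = true // background threads never block process exit
};
socketThread.Start();

// On game shutdown:
cts.Cancel();
socketThread.Join(1000); // give it a second to clean up, then move on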

(Discussion) Interacting with cameras?

As most of the interactables are being implemented in #68 for #66, the only thing remaining in this issue is interacting with cameras and admin.

I spoke to vedal about the idea of letting neuro interact with these two and we agreed that given the neural network we have and the way we are recording the data, there's no real way to get and parse data from admin, so it would be a waste of time to try to implement it.

However, as for cams, we can actually record player data for players visible on cams, and even spot murders/vents and report them to the language model.

Now we need to figure out how we're going to implement this: what will dictate when cams are opened, scrolled, and so on?

Currently the agent seems to like going to cams on Polus, so opening cams whenever in range might not be a good solution. I also don't really like the idea of using a random range. The same problem occurs with the meeting button; see issue #73.


Old issue description
  • We should make an interactions handler to dictate which consoles to interact with.
  • The current MinigameSolver system from PR #26 can be expanded to dictate which task consoles we should interact with (to allow timer tasks to work correctly)
  • Also, we should only interact with doors if we are pathing to a point behind them.
  • Allow interaction with flying platform
  • Allow interaction with ladders
  • (Maybe?) Allow interaction with cameras and implement recording seen players' positions
  • (Maybe?) Allow interaction with the admin table (but how would that help the AI?)

Info

I didn't want to just @ you in Discord. In case you didn't know, VS has a Copilot extension; Ctrl+Alt+Enter will show the extra solutions. It should save you some back and forth between VS Code and VS.

More project refactoring

  • The current pathfinding system is all over the place. At the moment, PathfindingHandler, MovementHandler and TasksHandler all touch movement/pathfinding fields. These should be refactored.
  • Currently tasks are being opened using heinous methods; we should instead merge MinigameHandler and TasksHandler and allow each task to specify when it should be opened, using something like MinigameSolver from #26
  • VisionHandler is a mess, it should be refactored to reduce duplicated code, reduce method complexity, and stuff like that
  • The "Handler" singleton system might not be the best, some of the classes can become static, while others should have one instance per game and be reset once the game ends.
  • if the recording is meant to be separately given out to people, maybe it should be moved to it's own project or at least namespace
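
A sketch of the per-game variant mentioned above (names hypothetical): handlers register themselves and are all reset together when a game ends, instead of living as global singletons that accumulate stale state.

using System.Collections.Generic;

public abstract class GameHandler
{
    private static readonly List<GameHandler> Handlers = new();

    protected GameHandler() => Handlers.Add(this);

    // Subclasses clear their per-game state here.
    public abstract void Reset();

    // Call once when a game ends.
    public static void ResetAll()
    {
        foreach (GameHandler handler in Handlers) handler.Reset();
    }
}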

Enhancement: Documentation

Document what version of BepInEx is being used - BepInEx 5 plugins are unsupported by BepInEx 6 at the moment.

(Discussion) Interacting with the meeting button

How do we handle choosing when to press the meeting button? I'm interested to hear any ideas anyone might have.

Currently we record data about the meeting button's position and whether or not the player used the interact button. We could use that and just feed it through the neural network, but that might lead to random meetings.

Multi-thread pathfinding

We should look into running the pathfinding on one or more separate threads so it's more efficient and can pathfind to all 10 available tasks at once.
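
A rough sketch of the idea (the findPath delegate stands in for the plugin's actual pathfinding call; note that Unity objects aren't thread-safe, so the grid data would need to be snapshotted before handing it to worker threads):

using System;
using System.Collections.Generic;
using System.Linq;

public static class ParallelPathfinder
{
    // Compute a path to every target in parallel on the thread pool,
    // returning one result per target.
    public static Dictionary<TTarget, TPath> PathfindAll<TTarget, TPath>(
        IEnumerable<TTarget> targets,
        Func<TTarget, TPath> findPath)
    {
        return targets
            .AsParallel()
            .ToDictionary(target => target, findPath);
    }
}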

Implement minigame solving

  • Tasks
    • Align Engine Output (Skeld)
    • Align Telescope (Polus)
    • Assemble Artifact (MIRA)
    • Buy Beverage (MIRA)
    • Calibrate Distributor (Skeld, Airship)
    • Chart Course (Skeld, MIRA, Polus)
    • Clean O2 Filter (Skeld, MIRA)
    • Clean Toilet (Airship)
    • Clean Vent (Skeld, MIRA, Airship)
    • Clear Asteroids (Skeld, MIRA, Polus)
    • Decontaminate (Airship)
    • Develop Photos (Airship)
    • Divert Power (Skeld, MIRA, Airship)
    • Dress Mannequin (Airship)
    • Empty Chute (Skeld)
    • Empty Garbage (All maps)
      • Stage 1 (Airship)
      • Other stages (All maps)
    • Enter Id Code (MIRA, Airship)
    • Fill Canisters (Polus)
    • Fix Shower (Airship)
    • Fix Weather Node (Polus)
    • Fix Wiring (All maps)
    • Fuel Engines (All maps)
    • Insert Keys (Polus)
    • Inspect Sample (Skeld, Polus)
    • Measure Weather (MIRA)
    • Make Burger (Airship)
    • Monitor Tree (Polus)
    • Open Waterways (Polus)
    • Pick Up Towels (Airship)
    • Polish Ruby (Airship)
    • Prime Shields (Skeld, MIRA)
    • Process Data (MIRA)
    • Put Away Pistols (Airship)
    • Put Away Rifles (Airship)
    • Reboot Wifi (Polus)
    • Record Temperature (Polus)
    • Repair Drill (Polus)
    • Replace Water Jug (Polus)
    • Reset Breakers (Airship)
    • Rewind Tapes (Airship)
    • Run Diagnostics (MIRA)
    • Scan Boarding Pass (Polus)
    • Sort Records (Airship)
    • Sort Samples (MIRA)
    • Stabilize Steering (Skeld, Airship)
      • Skeld version
      • Airship version
    • Start Fans (Airship)
    • Start Reactor (Skeld, MIRA, Polus)
    • Store Artifacts (Polus)
    • Submit Scan (Skeld, MIRA, Polus)
    • Swipe Card (Skeld, Polus)
    • Unlock Manifolds (Skeld, MIRA, Polus)
    • Unlock Safe (Airship)
    • Upload Data (Skeld, Polus, Airship)
      • Other stages (Skeld, Polus, Airship)
      • Stage 2 (Airship)
    • Water Plants (MIRA)
  • Sabotages
    • Oxygen Depleted (Skeld, MIRA)
    • Reactor Meltdown (Skeld, MIRA)
    • Reset Seismic Stabilizers (Polus)
    • Avert Crash Course (Airship)
    • Comms Sabotaged (All maps)
      • Radio Frequency (Skeld, Polus, Airship)
      • Backup Code (MIRA)
    • Fix Lights (All maps)
    • Door Sabotage (Polus, Airship)
      • Flip Switches (Polus)
      • Swipe Card (Airship)
  • Other
    • Emergency Button

(Question) about dataset to help train the AI

I have a question regarding contributing to the training of our AI model, specifically in relation to the neural network and dataset:

  1. How can someone contribute to the training of the AI model, particularly in the area of the neural network?
  2. Where can I access the dataset required for training the AI model? Or, if someone wants to help in this area, should they use recordings from their own games for the dataset?

External Communication Protocol

Found myself thinking about this:

We will also probably want a separate class in the future to interface with the Python neural network, probably just open a socket and have some sort of communication protocol. This may end up being a different plugin(?), that instead just uses the data collected from this plugin to interface with the neural network. (From #4)

My recommendation (from personal experience) would be to send JSON (or msgpack, if that's chosen for the recording plugin) wrapped in a netstring:

30:{"msg":"Hello","name":"World"},

The format is, simply: <Length>:<Message>, where <Length> is the length of <Message>. The : and , are convention, but could be any predetermined characters. (Also, there's no need to escape the message contents which is a huge plus.)

Implementing netstrings is pretty straightforward, and I've done it in about a dozen languages at this point. They're pretty resilient, fast, and simple to understand. Even better, we can implement this 100% natively in both C# and Python.

If this seems like a reasonable direction, I'll do up a C# implementation and a simple python test server for it, as a proof of concept.
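
Not the final implementation, just a proof-of-concept sketch of what the C# side could look like (the Python side would mirror it):

using System.IO;
using System.Text;

public static class Netstring
{
    // Write <Length>:<Message>, where Length counts the UTF-8 payload bytes.
    public static void Write(Stream stream, string message)
    {
        byte[] payload = Encoding.UTF8.GetBytes(message);
        byte[] header = Encoding.ASCII.GetBytes(payload.Length + ":");
        stream.Write(header, 0, header.Length);
        stream.Write(payload, 0, payload.Length);
        stream.WriteByte((byte) ',');
    }

    // Read one netstring frame and return the decoded message.
    public static string Read(Stream stream)
    {
        var lengthText = new StringBuilder();
        int b;
        while ((b = stream.ReadByte()) != ':')
        {
            if (b < '0' || b > '9') throw new InvalidDataException("bad length");
            lengthText.Append((char) b);
        }

        int length = int.Parse(lengthText.ToString());
        byte[] payload = new byte[length];
        int read = 0;
        while (read < length) // Stream.Read may return fewer bytes than asked
        {
            int n = stream.Read(payload, read, length - read);
            if (n <= 0) throw new EndOfStreamException();
            read += n;
        }

        if (stream.ReadByte() != ',') throw new InvalidDataException("missing ','");
        return Encoding.UTF8.GetString(payload);
    }
}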

Sabotage data is inconsistent when training

Currently, we convert all recorded data into dictionaries to avoid unnecessary load times from performing conversions, while defaulted data uses the data.proto_defaults definitions where applicable.

This becomes an issue when converting during training: a recorded sabotage is a dictionary and goes through convert_dict, but the defaulted TaskData goes through convert_taskdata, resulting in misaligned array dimensions.

Attached is a recording with some sabotage data if necessary for testing.

133264897257692422.zip

Force impostor tab for debugging

It would be useful if we had the ability to force the impostor role for testing.

If someone wants to open a PR, feel free; otherwise I will add this later.

Recording fields discussion

So the idea behind recording player data is that this data will be fed to an ML algorithm, and we have to record both the input (environment info) and output (resulting action) values. It's important to figure out exactly what data we need beforehand, so I'm creating this issue as a hub for discussing it.

We currently save:

  • Values for input:
    Is Impostor
    Kill Cooldown
    Direction to nearest task
    Whether an emergency task is active
    Direction to nearest vent
    Direction to nearest body
    Whether a body can be reported
    Direction and position of nearby players

  • Values for output:
    Movement direction (last saved, meaning Neuro won't be able to stay in one place)
    Whether should report
    Whether should vent
    Whether should kill
    Whether should sabotage
    Whether should close doors

The things I think need changing now are:

  • Input should also contain the direction to the emergency task and whether a sabotage can currently be triggered
  • If we use a bool for sabotage and only let the ML decide when to do it, then we need to code something to choose which sabotage to do.
    If we want the ML to decide which one to do, we need to add the map to the input and replace the bool with an int representing the id of the sabotage
  • For doors, it would make sense to have, for each door, its relative position and whether it can be closed (I think there's no need to store the exact cooldown for each door).
  • Venting is kinda complicated if we want it to be handled purely by the ML model:
    We need InVent info in the input.
    We need movement inside the vent in the output.
    Each vent is in a different location and only a limited number of other vents are accessible from each one. This means we need to know the exact vent we're in, so a unique id for that vent needs to be in the input.
    It also means that, while outside a vent, we need to know not only the direction to the nearest vent but also its id, since that affects the decision too.
    (I don't know how neural networks work, so correct me if I'm wrong on this one.) Since everything happens based on information from a single frame and the AI doesn't really have memory, could the AI enter a vent just to exit it in the same place without moving, making the action meaningless? There could also be situations where it exposes itself by venting in a closed room, thinking there's no one in sight, when in fact someone was following closely behind and was seen a few frames ago.
    CONCLUSION: Venting might not be worth the effort to implement. Though it might work if, after the ML model decides to vent, we explicitly program it to jump to a random other vent and then give control back to the ML.

Those are just the things I noticed; there could be more that needs to be addressed.

Requesting information about recording

The Recorder class takes the current game state and serializes it. How do you plan on using the information obtained from it? (I assume for training?)

It would be useful to know the purpose of this class, as well as the design process behind it, so we can maybe add more fields to the frame if they are relevant.
