
plantex's Introduction

🌱 Plantex: open-world game about plants 🌲 🍃 🌿 🌴


This game was developed in a three-week programming practical at the University of Osnabrück :-)

Plantex Trailer

Everything you see is procedurally generated -- there are no static textures, meshes or worlds! A different seed will generate completely different textures, plants, stars and a different world. You can find more images further down.

Run the game

Windows binaries

Precompiled binaries for Windows x64 can be downloaded on the releases page. Latest build: v0.1.0.

Compile the game

For all other platforms you have to compile the game yourself. First make sure you have a Rust compiler and cargo installed. Then clone this repository (or download it as a ZIP file) and execute:

$ cargo build --release

After the compilation has finished, you can run the game by either executing the binary in ./target/release/ or by running cargo run --release --bin plantex.

Play the game/controls

You can move with WASD and move faster by pressing Shift. To look around, click inside the window to capture the mouse; afterwards you can use the mouse to rotate the camera. Click again to uncapture the mouse.

When the game starts, you control a ghost that can fly around freely. To toggle between ghost and player press G. Pressing Space produces an upward motion (jumping when player, increasing altitude when ghost). Pressing Ctrl produces a downward motion.

You can quickly exit the game with ESC and accelerate the time in the game by pressing +.

Images

next to a rain forest

snow biome

different plants

Documentation

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Development will probably stop after the practical has ended. If there is enough interest in the game idea, the game will probably be rewritten from scratch (the code in this repository is often far from optimal). Don't hesitate to make suggestions or file PRs, though! Just keep the status of this project in mind ...

plantex's People

Contributors

92andy, arraverz, cranc, florianjanosch, gustav1101, helenakeller, jbesteuos, jonas-schievink, jonasknerr, jovobe, karolineplum, lukaskalbertodt, mmildt, naffing, peb-adr, romanschott, toschneider, vab9


plantex's Issues

Helper functions to easily load shader files

Currently, we use include_str!() to include shader files. While this has its advantages, many groups will work on shaders in the next week and might not want to recompile the application every time a shader is changed. I'm not even sure cargo recompiles in that case, which would be even worse, because a shader change wouldn't change the compiled program at all...

So yes: we need a helper function that takes a filename stem, loads the corresponding .frag and .vert files and returns a Program.
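
A rough sketch of such a helper, assuming glium's Program::from_source and a hard-coded shader directory (the path and the error handling are placeholders):

use std::fs;
use std::path::Path;

use glium::backend::Facade;
use glium::Program;

/// Loads `<stem>.vert` and `<stem>.frag` from the shader directory and
/// compiles them into a `Program`. Path and error type are only a sketch.
pub fn load_program<F: Facade>(facade: &F, stem: &str) -> Result<Program, String> {
    let dir = Path::new("client/shader");   // placeholder path
    let vert = fs::read_to_string(dir.join(format!("{}.vert", stem)))
        .map_err(|e| format!("could not read vertex shader: {}", e))?;
    let frag = fs::read_to_string(dir.join(format!("{}.frag", stem)))
        .map_err(|e| format!("could not read fragment shader: {}", e))?;
    Program::from_source(facade, &vert, &frag, None)
        .map_err(|e| format!("shader compilation failed: {:?}", e))
}

This only avoids recompiling the Rust code; whether we also want to reload shaders at runtime is a separate question.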

Create the vertex buffer layout of hex pillars with correct normals

Currently the normals in the vertex buffer are not well suited for flat shading. We need to replace those with proper normals, by adding more vertices. Sadly we need multiple vertices for the same point in space, but with different normals. Specifically: for each point in space we need exactly three vertices, one for each face this point is a corner of.

Research more efficient World storage

Chunks in the World are currently stored in a HashMap. Rust's HashMap uses SipHash by default, a hash designed to resist collision attacks, which can be quite slow for small keys (and AxialPoint is very small).

A few different approaches should be implemented and compared:

  • Use a different hashing algorithm, but keep the HashMap
  • Use the BTreeMap
    • This requires the key to be Ord, which AxialPoint isn't. To keep AxialPoint clean, a private newtype wrapper can be implemented which implements PartialOrd and Ord (we could also just #[derive] it on AxialPoint for simplicity, but < and > on points don't really make sense).
  • Use a sparse vector holding a fixed NxN grid of chunks
    • This should be very fast, but you'll have to actually think a bit about what you're doing since the player can move around in the world (might need deeper integration than just world.rs)
  • ... any other ideas?

If the chunk storage efficiency doesn't cause any practical problems, this isn't very important. But if anyone wants to fix it anyway, feel free to; a small sketch of the BTreeMap variant follows below.
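
A minimal sketch of the BTreeMap variant with a private newtype key (the AxialPoint and Chunk definitions here are just stand-ins for the real types in base):

use std::collections::BTreeMap;

// Stand-ins for the real types.
pub struct Chunk;
#[derive(Clone, Copy, PartialEq, Eq)]
pub struct AxialPoint { pub q: i32, pub r: i32 }

/// Private key wrapper so `AxialPoint` itself doesn't have to be `Ord`.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct ChunkKey(i32, i32);

impl From<AxialPoint> for ChunkKey {
    fn from(p: AxialPoint) -> Self { ChunkKey(p.q, p.r) }
}

pub struct World {
    chunks: BTreeMap<ChunkKey, Chunk>,
}

impl World {
    pub fn chunk_at_index(&self, pos: AxialPoint) -> Option<&Chunk> {
        self.chunks.get(&ChunkKey::from(pos))
    }
}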

Maybe improve design of application by using sync primitives

I'm a bit unhappy with the current structure: we pass an immutable reference to one module (not a Rust module) to all other modules that need to use it. For example, the current main loop looks like:

try!(self.renderer.render(&self.world_view, &self.player.get_camera()));
let event_resp = self.event_manager
    .poll_events(vec![&mut CloseHandler, &mut self.player]);

We pass the world and the camera to the renderer, as well as all event receivers to the event manager. The renderer will need a whole lot more in the future, like the weather system and the sky system, so we can expect the argument count of render() to keep growing. The event manager would also be nicer with methods like add_handler().

The problem is that we, of course, can't pass immutable references (or, in the case of the event manager, mutable ones) around while still mutating the object ourselves. A possible solution would be to use an RwLock (which also takes into account that we will have threads in the future).

But the question is: what is more rusty, more efficient and the better design? The current design works and the borrow checker doesn't complain. The RwLock design adds some complexity and introduces some locking overhead. Maybe it's not that bad to have terribly long argument lists. I'm really not sure about this. Maybe at the very least we should wrap the World into an RwLock and give it to everyone?
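
For reference, the RwLock variant could look roughly like this (World is a stand-in; whether we actually need the Arc depends on how threading ends up):

use std::sync::{Arc, RwLock};

pub struct World;   // stand-in for the real type

pub struct Game {
    world: Arc<RwLock<World>>,
}

impl Game {
    fn frame(&mut self) {
        {
            // Many readers at once: renderer, physics, ... only need read access.
            let world = self.world.read().unwrap();
            // self.renderer.render(&*world, ...);
            let _ = &*world;
        }
        // Exactly one place takes the write lock, e.g. to integrate new chunks.
        let mut world = self.world.write().unwrap();
        // world.add_chunk(...);
        let _ = &mut *world;
    }
}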

I'm curious what you think, especially @jonas-schievink ...

Finish `Camera` type

The camera type is very incomplete: only hardcoded values are used. The camera type should be completed and properly documented.

Dynamic chunk loading and unloading

Currently, a fixed number of chunks at a fixed position is loaded. We need a system that dynamically loads and unloads chunks depending on the position of the player/ghost. The unloading probably only needs to happen for ChunkViews ... the Chunks don't occupy a lot of RAM and aren't drawn without a corresponding ChunkView, so they can stay. Of course, chunks that haven't been generated as a Chunk yet need to be generated, too.

Additionally there should be some clever distance-based measure to load/unload chunks. While it is easier to always load neighbor chunks, the strange shape of our chunks makes that rather inefficient: two corners of the rhombus are very far away from the player (we don't really need to load them) while the other two corners are pretty close. So the algorithm should load every chunk which is less than n meters away, measured in world coordinates.

Fix wrong indexing of Inner_Position

inner_pos.q and inner_pos.r, two chunk indices, are somehow switched. The affected method is pillar_at in base/src/world/world.rs, line 40; it is temporarily patched at lines 53-58.
Please find the mysterious origin of the switched indices and fix it at the source.

Create a system to generate chunks in another thread

Currently our application is single threaded. Probably the heaviest/slowest operation that has nothing to do with graphics is the generation or loading of Chunks (calling ChunkProvider::load_chunk()). This would be a good candidate to transfer to another thread to avoid blocking the render loop.

As @jonas-schievink mentioned, it's probably easiest and (therefore) best to use channels to send the "pls-load-this-chunk-pos" messages to the other thread and the Chunks back to the main thread. The main thread then integrates all new chunks into the world's chunk map and creates the corresponding chunk views (related to #55).

This is also related to #56 as the world must be read by different parts of the program (player physics, WorldView, ...) but we still need to mutate the world at some point. As suggested in the linked issue, we could wrap the world into an RwLock and then be careful to only write lock the world at one specific point such that locking never blocks (maybe?).
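
A sketch of the channel setup with std::sync::mpsc; the ChunkIndex/Chunk types and generate_chunk are stand-ins for whatever ChunkProvider::load_chunk really does:

use std::sync::mpsc::{channel, Receiver, Sender};
use std::thread;

// Stand-ins for the real types.
#[derive(Clone, Copy)]
pub struct ChunkIndex(pub i32, pub i32);
pub struct Chunk;

fn generate_chunk(_pos: ChunkIndex) -> Chunk { Chunk }

/// Spawns a worker thread; send it a "please load this chunk" position and
/// receive the finished chunk on the other channel.
pub fn spawn_chunk_worker() -> (Sender<ChunkIndex>, Receiver<(ChunkIndex, Chunk)>) {
    let (pos_tx, pos_rx) = channel::<ChunkIndex>();
    let (chunk_tx, chunk_rx) = channel();

    thread::spawn(move || {
        // Ends when the main thread drops its sender.
        for pos in pos_rx {
            let chunk = generate_chunk(pos);
            if chunk_tx.send((pos, chunk)).is_err() {
                break;
            }
        }
    });

    (pos_tx, chunk_rx)
}

The main loop could then call chunk_rx.try_iter() once per frame and integrate everything that has arrived, without ever blocking.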

Design `prop::Plant` while thinking about generation algorithm

This is probably pretty difficult to get right... so expect a lot of experimentation.

The task is to think about a smart way to represent plants (meaning: all possible plants in the game). This representation should be compact, kind of "scalable" (compare to: SVG instead of PNG) and possible to generate.

Currently the type contains a height and a stem_width... this is not enough to represent all plants we want to create... it can only represent very ugly plants... On the other hand, if we saved a vertex buffer directly, it wouldn't be very scalable, would be hard to generate directly, and would be too OpenGL dependent, of course.

So we need some form of smart representation which allows us to compute a vertex buffer of arbitrary precision and is more or less easy to generate. I was thinking of something like multiple curves which also save a "width" at each control point. This would let us represent the branches of a tree (plant) fairly well.
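
Just to make the idea concrete, a representation along those lines could look roughly like this (all names are placeholders, not the final design):

use cgmath::Point3;

/// One sample along a branch: a point in plant-local space plus the branch
/// radius at that point.
pub struct ControlPoint {
    pub point: Point3<f32>,
    pub width: f32,
}

/// A branch is a curve through its control points; child branches start
/// somewhere along the parent.
pub struct Branch {
    pub points: Vec<ControlPoint>,
    pub children: Vec<Branch>,
}

pub struct Plant {
    pub trunk: Branch,
}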

You should definitely talk to me in person about this. And also don't expect the first design to be final... the type will change over time, adding and removing some data.

Setup basic rendering pipeline

(this issue contains a big chunk of explanation for readers that might be unfamiliar with the topic)

While we have a mathematical "viewing pipeline" as well as an OpenGL "graphics pipeline", the word "pipeline" is also used to describe the steps in a complex rendering scenario. You'll hardly find any game that writes the final color for each pixel to the screen directly. It's rather the case that there are multiple steps each of which draws to a texture/an image which is not displayed on the screen (off-screen buffer). This is especially important for post processing effects (one easy example is color grading).

In OpenGL "things to draw to" are called frame buffers. Each step in the rendering pipeline has one or more frame buffers attached to it, to which the step will draw. The final step always draws to the default frame buffer (the screen), which is called Frame in glium.

The most important task of this issue is to implement a basic structure for rendering HDR images. High Dynamic Range means that each pixel has a large set of possible values, in contrast to the tiny 0 to 255 range we can use on our screen. Since illumination values vary a lot in the real world, we have to be able to save HDR values to do realistic rendering.

It's actually not too hard to get the HDR rendering started: instead of using 8 bits per color channel we will use 16 or even 32 bits (f32) per channel. The hard part comes later: "tone mapping" describes the process of compressing this huge float range back into [0, 255] to represent it on our screens. But doing proper tone mapping is another task -- this issue is only concerned with the setup of the basic pipeline.

In order to do that you have to:

  • create a frame buffer with a fitting HDR format
  • draw the whole world into that buffer
  • add a new rendering step which draws this frame buffer texture on the screen again while compressing the color range

The last step which will contain the tone mapping should be very simple in the beginning. For example the HDR color could just be clamped into the 0...255 range (this is pretty much what happens without HDR).
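
To make the setup concrete, creating the off-screen HDR targets with glium could look roughly like this (the formats and the separate depth texture are just one possible choice):

use glium::backend::Facade;
use glium::texture::{DepthFormat, DepthTexture2d, MipmapsOption, Texture2d, UncompressedFloatFormat};

/// Creates an off-screen HDR color buffer (16 bit float per channel) plus a
/// separate depth texture we can sample later (e.g. for SSAO).
pub fn create_hdr_buffer<F: Facade>(facade: &F, width: u32, height: u32)
    -> (Texture2d, DepthTexture2d)
{
    let color = Texture2d::empty_with_format(
        facade,
        UncompressedFloatFormat::F16F16F16F16,
        MipmapsOption::NoMipmap,
        width, height,
    ).unwrap();
    let depth = DepthTexture2d::empty_with_format(
        facade,
        DepthFormat::F32,
        MipmapsOption::NoMipmap,
        width, height,
    ).unwrap();
    (color, depth)
}

// Per frame (sketch): attach both via
// glium::framebuffer::SimpleFrameBuffer::with_depth_buffer(facade, &color, &depth),
// render the world into it, then draw a fullscreen quad that samples `color`
// and clamps/tonemaps into the default Frame.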

Later on, there can be many many more post processing effects:

  • bloom and lens flare: both are somewhat related to HDR rendering/tone mapping. Bloom describes the effect that very bright regions of an image "leak" light into neighboring pixels. Lens flare occurs in real-world footage due to imperfections in physical lenses and camera sensors; here it just means that bright spots sometimes cause other pixels to become brighter as well.
  • image smoothing: instead of using real multisampling, post-processing smoothing algorithms are used. The purpose of the smoothing is to get rid of ugly, pixelated edges. There are some rather interesting algorithms, one of which is FXAA.
  • color grading: changes the temperature or hue of the resulting image. This is often done in film to create a specific atmosphere.
  • screen space occlusion: one very interesting algorithm is SSAO (screen space ambient occlusion). This algorithm approximates indirect lighting by only looking at the depth values of the resulting image. It's a very rough approximation but can give nice results with some fine tuning.

Especially the last algorithm is something that we can implement in this practical. As already described, SSAO needs the depth of each pixel; sadly it's not yet possible to access the default depth buffer of the GPU via glium. Therefore we will create our own depth buffer (in addition to the GPU-provided one) to save the depth value for each pixel. This means that we want to use multiple frame buffers at once (one for color, one for depth).

Here are some important resources:

This is a rather complex topic, so remember to implement this step by step, with preferably very small steps 😉 I'd even suggest trying a few things in a test project (read: not in this project at all). The examples/ folder of glium could help a lot. Try to find an example that looks like it might help and play with it until you get a basic feeling for the topic. Then you can talk to me about how to do it in our project.

This issue can be subdivided also: one subgroup could think about the general structure (HDR) while the other one already thinks about either FXAA or SSAO.

I hope you can live with not contributing to the project directly but rather doing research first. If you have any questions, let me know, but remember: a big chunk of this issue is finding the necessary information and trying things out on your own.

Please also excuse any bugs in my language, it's already late enough 😦

Make the world mutable

The player should be able to change the world with various methods. Additionally we might want to simulate some effects in the world which will also mutate it.

This imposes a challenge for our current system, which only creates a WorldView out of the world in the very beginning (once!). To mutate the world (and see the result) we need to update all views accordingly. It's still a bit unclear to me how we should do this, but I guess the "easiest" way is to recreate the ChunkView completely every time a Chunk is edited. Later we can measure whether this is too slow and, if so, think of another technique. My current guess is that it's just fine...

But even with this complete regeneration of ChunkViews, I'm not sure how to structure the source code to achieve this. This issue is probably linked to #56 and #57.

Basic world rendering

The world should be rendered as hex pillars and not just an ugly triangle (as is done now). Whoever tries to solve this issue needs to dig into some dirty code (sorry, I didn't have enough time).

You need to create vertex and index buffers and do some OpenGL stuff. You can also clean up the code.

Tracking issue: gameplay

While the focus of this project (in a computer graphics programming practical) lies on computer graphics, our game should have at least a few simple gameplay elements. As mentioned in the beginning, the player should somehow be able to interact with plants (plant them, harvest them, ...) as well as modify the world.

We can certainly discuss gameplay features in the whole group, but note that this project is really not about gameplay: I want to avoid people thinking about skill trees for multiple days -- this has rather little to do with computer graphics and even computer science.

Here is a non-exhaustive list of things we might want to do:

  • Display lines around the "hex slice" we are currently facing: this is probably a good thing to start with. We need to visually highlight the hex-slice the player is looking at (the hex slice is a slice of a hex pillar with the height of PILLAR_STEP_HEIGHT).
  • Make it possible to remove (and add) hex slices. This is blocked by #58 ...
  • Think about how to interact with plants. Here we actually need to think about gameplay only: what do we gain from plants? Fruits? We need some item system for that! What good are fruits for us? Eating? Then we need a system that manages the player's hunger! Or do we get wood from trees? What is that good for? Building stuff? What stuff can we build? And additionally: how can we find out what type of reward we get from harvesting a plant, given that all plants are procedurally generated?

Choose a PRNG

To always generate the same world, even in future versions of this game, we need to stick to one pseudo random number generator. Currently we're using a xor-shift RNG. But we should discuss and finalize our decision at some point.
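
For illustration, a xorshift generator is tiny; this is a generic xorshift64 sketch with Marsaglia's standard constants, not necessarily the exact RNG we currently use:

/// Minimal xorshift64 generator: deterministic for a given seed, which is
/// exactly what reproducible world generation needs.
pub struct XorShift64 {
    state: u64,
}

impl XorShift64 {
    pub fn new(seed: u64) -> Self {
        // The state must never be zero.
        XorShift64 { state: if seed == 0 { 0x9E3779B97F4A7C15 } else { seed } }
    }

    pub fn next_u64(&mut self) -> u64 {
        let mut x = self.state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.state = x;
        x
    }

    /// Uniform float in [0, 1), built from the top 24 bits.
    pub fn next_f32(&mut self) -> f32 {
        (self.next_u64() >> 40) as f32 / (1u64 << 24) as f32
    }
}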

Think about terrain smoothing

While #61 is only about the performance aspect and has no effect on the visuals, we should decide in the team whether or not to use terrain smoothing. By that term I mean: whenever two hex pillars differ only a little in height, we smooth out the terrain by connecting points of both pillars to create a slope. Note that this smoothing is purely a graphical feature: the representation of the world (in base) doesn't change!

Technically speaking, the advantage is that we could decrease the vertex count by a noticeable amount while the disadvantage is that implementing terrain smoothing is probably pretty hard/tricky.

But as said, this also has a big visual impact. With terrain smoothing, the terrain would look a lot more like terrain in other games and the hexagon shape would be less prominent in the world. The player would still see many hexagon shapes in the game, just not as many, obviously. Whether or not we want terrain smoothing needs to be discussed in the group. A gameplay advantage of terrain smoothing is that the player could easily see whether they can just walk up a hill or need to jump.

Personally, I think we should use terrain smoothing. A realistic world with a few nice hexagons will look pretty nice IMO.

Improve event manager

We need to think about the event manager that handles all events (keys, mouse, ...).

Then there should be two types (I guess):

  • Player (can be added later, when actually playing as the player)
  • Ghost (this is the mode in which we can fly around freely)

Those types should be able to handle the inputs and change their internal camera (#14 is a blocker for this).

Replace some uses of `Vec` with `SmallVec`

Currently we use Vec for everything, including use cases in which we expect only a few items. This is pretty inefficient as Vec always heap allocates.

At some point we should fix this...
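
For illustration, assuming we pick the smallvec crate, the change is mostly mechanical:

use smallvec::SmallVec;

fn main() {
    // Stores up to 4 elements inline; only spills to the heap beyond that.
    // The inline capacity of 4 is a guess and should match the typical case.
    let mut neighbors: SmallVec<[u32; 4]> = SmallVec::new();
    neighbors.push(1);
    neighbors.push(2);
    assert!(!neighbors.spilled());
}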

Player "physics"

Tomorrow we will probably land a PR that adds a system to handle events properly. On top of that we can start to implement a Player that implements the EventHandler trait. In contrast to the Ghost control, the player is bound to the earth by the laws of physics 😛

To get started, the player's x and y coordinates could be driven by the controls while the z coordinate is just set to the world height at the player's position + 1 (or something like that).

After that some kind of gravity can be implemented and the player could gain the ability to jump (and maybe to duck).
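
A very rough sketch of that first step plus gravity (the ground query and all constants are placeholders):

// Stand-in for a world query; the real one would go through `World`.
fn ground_height_at(_x: f32, _y: f32) -> f32 { 0.0 }

pub struct Player {
    pub pos: [f32; 3],
    pub z_velocity: f32,
    pub on_ground: bool,
}

impl Player {
    /// Very simple per-frame physics: gravity pulls the player down and the
    /// ground clamps the z coordinate.
    pub fn update(&mut self, dt: f32) {
        const GRAVITY: f32 = -9.81;

        self.z_velocity += GRAVITY * dt;
        self.pos[2] += self.z_velocity * dt;

        let ground = ground_height_at(self.pos[0], self.pos[1]);
        if self.pos[2] <= ground {
            self.pos[2] = ground;
            self.z_velocity = 0.0;
            self.on_ground = true;
        } else {
            self.on_ground = false;
        }
    }

    pub fn jump(&mut self) {
        if self.on_ground {
            self.z_velocity = 5.0;   // jump impulse, placeholder value
        }
    }
}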

Complex physical behavior like collision with plants is not in the scope of this issue, though.

Basic plant rendering

The dummy plant representation should be rendered to test a few things. Whoever implements this will probably work together with whoever implements #11 (for a few days).

Optimize performance of particle system

The particle system can be improved a lot. This is an unordered list of things I noticed:

  • the small vertex buffer for the particle quad should be created only once, instead of every draw() call
  • the bigger dynamic vertex buffer shouldn't be recreated every draw() call: it's filled with a buffer map afterwards anyway! So just use the old buffer.
  • currently, the Vec for the dynamic vertex buffer is created from the particle list first, then all particles are simulated and the positions are changed in both vectors! Simulate the particles first and fill the vertex buffer afterwards with the final positions
  • we could use two dynamic vertex buffers to use "double buffering" ... I've read this can give quite a performance boost, but this is rather complicated and needs to be measured

Apart from all this: you should really try to make your code more readable and shorter. Strive for the shortest, most expressive, most self-explanatory code that solves your task at hand.

Warn about replaced chunks in `add_chunk`

Maybe the caller did not want to replace a chunk when calling add_chunk. So I think we should warn in that case and add a method replace_chunk() which states its intention more clearly.
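
A sketch of how that could look, with stand-in types and a plain println! as the warning for now:

use std::collections::HashMap;

pub struct Chunk;
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
pub struct ChunkIndex(pub i32, pub i32);

pub struct World {
    chunks: HashMap<ChunkIndex, Chunk>,
}

impl World {
    /// Adds a chunk and warns if an existing chunk was silently replaced.
    pub fn add_chunk(&mut self, pos: ChunkIndex, chunk: Chunk) {
        if self.chunks.insert(pos, chunk).is_some() {
            println!("warning: add_chunk replaced an existing chunk at ({}, {})", pos.0, pos.1);
        }
    }

    /// States the intention explicitly; returns the old chunk, if any.
    pub fn replace_chunk(&mut self, pos: ChunkIndex, chunk: Chunk) -> Option<Chunk> {
        self.chunks.insert(pos, chunk)
    }
}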

Basic world generation

We need some very basic world generation... something like sin-mountains 😸
The world should be generated on the fly, depending on the player's position.
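
Just to illustrate the "sin-mountains" idea, the height function could start out as simple as this (frequencies and amplitudes are arbitrary placeholders):

/// Placeholder terrain height: two overlaid sine waves ("sin-mountains").
pub fn height_at(x: f32, y: f32) -> f32 {
    let coarse = (x * 0.05).sin() * (y * 0.05).cos() * 20.0;
    let fine = (x * 0.3).sin() * (y * 0.3).sin() * 3.0;
    coarse + fine
}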

Implement AxialPoint::from_real

We need to convert world coordinates to axial coordinates to be able to figure out the player's position (or any object's position) in pillar/chunk coordinates, to implement things like #55. So this has very high priority, at least for world gen.

http://www.redblobgames.com/grids/hexagons/#pixel-to-hex

pub fn from_real(real: Point2f) -> Self

(this doesn't have much to do with world gen, but I tagged it S-gen-world anyway because it's basically blocked on this)
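
A sketch following the linked pixel-to-hex article, assuming pointy-top hexagons and a placeholder HEX_OUTER_RADIUS constant; the real orientation and types in plantex may differ:

const HEX_OUTER_RADIUS: f32 = 1.0;   // placeholder

pub struct Point2f { pub x: f32, pub y: f32 }
pub struct AxialPoint { pub q: i32, pub r: i32 }

impl AxialPoint {
    pub fn from_real(real: Point2f) -> Self {
        // Fractional axial coordinates (pointy-top orientation).
        let fq = (3f32.sqrt() / 3.0 * real.x - real.y / 3.0) / HEX_OUTER_RADIUS;
        let fr = (2.0 / 3.0 * real.y) / HEX_OUTER_RADIUS;

        // Round in cube coordinates (x + y + z = 0) to get the containing hex.
        let (x, z) = (fq, fr);
        let y = -x - z;
        let (rx, ry, rz) = (x.round(), y.round(), z.round());
        let (dx, dy, dz) = ((rx - x).abs(), (ry - y).abs(), (rz - z).abs());

        let (q, r) = if dx > dy && dx > dz {
            (-ry - rz, rz)      // fix x
        } else if dy > dz {
            (rx, rz)            // fix y (doesn't affect q or r)
        } else {
            (rx, -rx - ry)      // fix z
        };

        AxialPoint { q: q as i32, r: r as i32 }
    }
}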

More indexing methods for `World`

The world needs two more methods in addition to pillar_at():

  • chunk_at: also takes a world position
  • chunk_at_index: returns the chunk at the given chunk index (basically a proxy to the HashMap)

view frustum culling to improve performance #RenderWhatMatters

I'm currently working on a basic frustum implementation which can later be used to improve rendering performance by not rendering the entire scene but only the stuff inside the player's view.

  • create simple FOV culling
  • create basic frustum class
  • implement the frustum inside the renderer
  • pray that it increases performance a lot
  • improve the frustum class

current stage: here

Add features and tests to `AxialVector`

The AxialVector type is currently very incomplete. It should implement all fitting traits from cgmath, overload many operators and add a few more methods:

  • add unit_q() and unit_r() (like unit_x() from cgmath::Vector2)
  • overload many useful operators (including Index, required by Array anyway)
  • implement cgmath::{Zero, Array, MetricSpace, VectorSpace, InnerSpace}

Everything should be done similar to cgmath::Vector2.

Additionally multiple unit tests should be added to ensure correctness.

Note: this issue is similar to #2

Tracking issue: weather and wind

I will just collect a few ideas and goals here.

  • different types of particles (weight, windage, ...)
  • some kind of basic wind simulation
  • nice rendering of weather particles
    • we can't render everything, so we render all particles nearby ...
    • ... and somehow efficiently emulate particles in the distance
  • different weathers at different times in different places depending on world parameters

(this issue will probably be modified later)

Fancy chunk initialization method

Currently, at many places in our code, we have two nested for loops like this:

for q in 0..CHUNK_SIZE {
    for r in 0..CHUNK_SIZE { ... }
}

I want to replace this by

Chunk::with_pillars(|q, r| ... )

EDIT: I guess the closure should get one AxialPoint as parameter instead of q and r...
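
A sketch of how the constructor could look, with stand-in types and the closure already taking an AxialPoint:

pub const CHUNK_SIZE: u16 = 16;

pub struct AxialPoint { pub q: u16, pub r: u16 }
pub struct HexPillar;

pub struct Chunk {
    pillars: Vec<HexPillar>,
}

impl Chunk {
    /// Creates a chunk by calling the closure once per pillar position,
    /// in the same q-then-r order as the nested loops it replaces.
    pub fn with_pillars<F>(mut gen: F) -> Chunk
        where F: FnMut(AxialPoint) -> HexPillar
    {
        let mut pillars = Vec::with_capacity((CHUNK_SIZE as usize).pow(2));
        for q in 0..CHUNK_SIZE {
            for r in 0..CHUNK_SIZE {
                pillars.push(gen(AxialPoint { q, r }));
            }
        }
        Chunk { pillars }
    }
}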

Add features and tests to `AxialPoint`

The AxialPoint type is currently very incomplete. It should implement all fitting traits from cgmath and overload many operators:

  • overload many useful operators (including Index, required by Array anyway)
  • implement cgmath::{Zero, Array, MetricSpace, VectorSpace, InnerSpace}

Everything should be done similar to cgmath::Point2.

Additionally multiple unit tests should be added to ensure correctness.

Note: this issue is similar to #1

Fancy iterators over chunks and pillars

Currently, we can use Chunk::pillars() to get a slice of all pillars, but this is hardly used, because the pillars themselves don't know where they are positioned in the world. Therefore we often have two nested for loops in user code.

I want a pillars() method that returns an iterator over items of type (AxialVector, &HexPillar). A vector instead of a point, because it's not the world position. Or maybe PillarPosition ... I lost track of all the axial position types we have 😛

The same applies to the World type: we may want to iterate over all chunks while also knowing the chunk position.
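
A sketch of such an iterator, assuming the pillars are stored in a row-major Vec (the real layout may differ):

pub const CHUNK_SIZE: u16 = 16;

pub struct AxialVector { pub q: u16, pub r: u16 }
pub struct HexPillar;

pub struct Chunk {
    pillars: Vec<HexPillar>,   // assumed row-major: index = q * CHUNK_SIZE + r
}

impl Chunk {
    /// Iterates over all pillars together with their position inside the chunk.
    pub fn pillars<'a>(&'a self) -> impl Iterator<Item = (AxialVector, &'a HexPillar)> {
        self.pillars.iter().enumerate().map(|(i, pillar)| {
            let pos = AxialVector {
                q: (i as u16) / CHUNK_SIZE,
                r: (i as u16) % CHUNK_SIZE,
            };
            (pos, pillar)
        })
    }
}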

Tracking issue: world generation

This is the tracking issue for world generation -- it's not expected to be closed anytime soon. We have a few goals we'd like to reach if possible:

  • coarse-grained noise maps for various properties of the terrain
    • temperature
    • humidity
    • type of terrain height variations (hills, mountains, valleys, meadows, ...)
  • the height of the terrain
  • material of the terrain
  • generation of plants
  • rivers
  • caves

(list may be expanded)

Useful links:

Tracking issue: sky rendering

This issue is about rendering the sky box. A detailed German description can be found here.

Probably the first thing that should be implemented is a type that manages all properties of the sky, like the position of the sun. This is needed by other groups as well, e.g. for lighting and shadows.

(this issue will probably be expanded in the future)

Tracking issue: shadow

I don't have time to explain this right now, but the basic task is clear: we need sun shadows.

First you should implement the basic algorithm, which is already hard enough.

Here are some names of algorithms to search for to improve the shadow quality:

  • cascaded shadow maps
  • percentage closer filtering
  • exponential shadow maps
  • variance shadow maps

Complete `Dimension` type (+ tests)

Currently, the Dimension type doesn't offer a whole lot of features. A few methods should be added:

  • area(&self): calculates the area
  • scaled(&self, factor): returns a scaled version (Dimension::new(2, 3).scaled(2) should be (4, 6))
  • aspect_ratio(&self): returns the aspect ratio (width/height)
  • fitting(&self, into: Self): returns the biggest dimension with the same aspect ratio as self that fits into `into` (like setting your desktop wallpaper to "fit")
  • filling(&self, other: Self): similar to fitting, but returns the smallest dimension that fills other
  • maybe more, if there are more useful things to implement (don't implement common vector operations, we have another type for vectors!)

Additionally, unit tests should be added.
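
A sketch of the type with a few of these methods and one unit test; the details of fitting/filling (rounding, integer vs. float fields) still need to be decided:

#[derive(Clone, Copy, PartialEq, Debug)]
pub struct Dimension {
    pub width: u32,
    pub height: u32,
}

impl Dimension {
    pub fn new(width: u32, height: u32) -> Self {
        Dimension { width, height }
    }

    pub fn area(&self) -> u32 {
        self.width * self.height
    }

    pub fn scaled(&self, factor: u32) -> Self {
        Dimension::new(self.width * factor, self.height * factor)
    }

    pub fn aspect_ratio(&self) -> f32 {
        self.width as f32 / self.height as f32
    }

    /// Biggest dimension with `self`'s aspect ratio that fits into `into`.
    pub fn fitting(&self, into: Self) -> Self {
        if into.aspect_ratio() > self.aspect_ratio() {
            // `into` is relatively wider, so the height limits us.
            Dimension::new((into.height as f32 * self.aspect_ratio()) as u32, into.height)
        } else {
            Dimension::new(into.width, (into.width as f32 / self.aspect_ratio()) as u32)
        }
    }
}

#[test]
fn scaled_doubles_both_sides() {
    assert_eq!(Dimension::new(2, 3).scaled(2), Dimension::new(4, 6));
}

filling would be the same as fitting with the comparison flipped.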

Startup configuration

Currently, the Config type saves a few basic configurations for the game, but it's still impossible to change any of those values. This issue addresses three tasks:

  • Extend the Config type with more useful configuration parameters and their respective useful defaults.
  • Make it possible to set each parameter via CLI-args (like plantex --resolution=1280x720). To achieve this the CLI-arg-parser clap-rs should be used.
  • Make it possible to set each parameter via a config file in TOML format. The file can be specified via CLI (--config-file=<file> or something like that) or a default file can be loaded...

Furthermore, a system needs to be built that allows querying the value of a configuration key. This system needs to rank the three sources of values (default value, CLI arg, file) by importance.
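
Independent of clap and TOML parsing, the ranking itself could be as simple as merging optional values, defaults first (all names here are placeholders):

#[derive(Clone)]
pub struct Config {
    pub resolution: (u32, u32),
    // ... more parameters
}

impl Config {
    /// Built-in defaults, lowest priority.
    pub fn default() -> Self {
        Config { resolution: (800, 600) }
    }

    /// Overrides every field that the higher-priority source actually set.
    pub fn merge(self, higher_priority: PartialConfig) -> Self {
        Config {
            resolution: higher_priority.resolution.unwrap_or(self.resolution),
        }
    }
}

/// What a single source (TOML file or CLI args) managed to provide.
#[derive(Default)]
pub struct PartialConfig {
    pub resolution: Option<(u32, u32)>,
}

// Usage sketch: defaults first, then the file, then the CLI args win.
// let config = Config::default()
//     .merge(partial_from_toml_file(path))
//     .merge(partial_from_cli_args());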

When choosing useful configuration parameters, only parameters useful to the client need to be considered. The server can be ignored for now.

Milestone is set to "First week": I doubt this can be finished by the end of the first week, but we should absolutely start working on it in the first week.

Add `FallbackProvider` and `NullProvider`

The trait world::Provider abstracts over types that can potentially provide a Chunk from a game world. We should add a helper type FallbackProvider that stores two arbitrary Providers, one of which is the primary provider while the other one is the fallback.

The type should also implement Provider by first trying to get the chunk from the primary provider and using the fallback provider in case the primary provider failed to deliver the chunk.

Additionally there should be a type NullProvider that always fails to provide a chunk. Both types are important for world provider composition.
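
A sketch of both types; the actual Provider trait signature in base may differ:

// Stand-ins for the real types in base::world.
pub struct Chunk;
#[derive(Clone, Copy)]
pub struct ChunkIndex(pub i32, pub i32);

pub trait Provider {
    /// Returns `None` if this provider cannot deliver the chunk.
    fn load_chunk(&self, pos: ChunkIndex) -> Option<Chunk>;
}

/// Always fails; useful as a neutral element when composing providers.
pub struct NullProvider;

impl Provider for NullProvider {
    fn load_chunk(&self, _pos: ChunkIndex) -> Option<Chunk> {
        None
    }
}

/// Asks `primary` first and only falls back to `fallback` on failure.
pub struct FallbackProvider<P, F> {
    pub primary: P,
    pub fallback: F,
}

impl<P: Provider, F: Provider> Provider for FallbackProvider<P, F> {
    fn load_chunk(&self, pos: ChunkIndex) -> Option<Chunk> {
        self.primary.load_chunk(pos).or_else(|| self.fallback.load_chunk(pos))
    }
}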

Tracking issue: HDR and friends

We want:

  • an HDR framebuffer to draw into
    • maybe reduce its size by using F16 channel format
  • filmic tonemapping
  • Bloom
  • dynamic exposure compensation
  • eye adaptation (slow adaptation of the exposure value)
  • Bloom using the dynamically calculated exposure value
  • Tonemapping using the dynamically calculated exposure value

Lower-resolution bloom

We're apparently blurring the bloom texture 20 times in a row. We should try to use a lower-resolution bloom texture and blur it just once. Reducing the loop to only blur once (in each direction) noticeably improves performance on my desktop (GTX 770), from about 40 to more than 60 FPS.

Marking as P-high since we really need to get our performance under control :/

Dynamic Chunk loading is off by a bit

You can see it quite easily: Just fly above the map and look down while moving. In some directions, chunks are loaded closer to the player, while in others they're loaded farther away.

The chunk positions are just missing an offset so that the position we use for the calculation is in the middle of them instead of being on a corner. Easy fix, just needs a good eye to see if it's fixed. Also a good way to get into how dynamic chunk loading works if anyone's interested.

Create only one vertex buffer for each chunk

Doing a draw call for every hex-pillar section is very inefficient. The goal is to manage only one big vertex (+ index) buffer for each chunk. Then we just have to do one draw call per chunk, which results in fewer than 100 draw calls for the whole world. With that, the number of draw calls is no longer the bottleneck.

Having only one big buffer also allows us to simplify the mesh and remove redundant points and empty faces. Furthermore, we can introduce terrain smoothing if we want to.

Some quick calculation for the worst case (no two adjacent hex pillars have the same height) with only one section per pillar:

We have CHUNK_SIZE² pillars per chunk (currently 16² = 256). We have 2 * 6 points for each pillar, but need to triple those in order to have correct normals (see #60) => 36 vertices per pillar = 9216 vertices per chunk. This could be decreased a lot by terrain smoothing.

All top faces are drawn with 4 triangles (with terrain smoothing it would be 6), double that to get the bottom faces, too.
Each pillar has 6 side faces which are drawn with 2 triangles each. But luckily every side face is shared by two pillars, so we can remove roughly half of those faces. This results in 256*6/2 = 768 side faces = 1536 triangles for the sides.

In sum: 2 * 1024 + 1536 = 3584 triangles (top and bottom faces plus sides) and 9216 vertices per chunk.

These numbers seem OK to me... It's of course also worth considering not generating vertices and faces at height 0, because the player will never see them anyway.
