This Week In Veloren 80

3 minute read · 10 August 2020

Authored by AngelOnFira

This week we prepare for the 0.7 release. With the code freeze, we don't have too many new features to talk about, but we get a rundown on particle timing from @lobster.

- AngelOnFira, TWiV Editor

Contributor Work

Thanks to this week's contributors, @yusdacra, @Songtronix, @scottc, @Imbris, @IBotDEU, @Slipped, @nepo, @zesterer, @Pfau, @xMAC94x, @Varpie, @Sharp, @AngelOnFira, @Sam, @Silentium, @BottledByte, and @JudgeGriesa!

0.7: "The Progression Update" is releasing this weekend! In this version, many changes have been made to combat, animations, worldgen, and a lot more systems. The release party will be happening this Saturday at 18:00 GMT+0. Come join us on the main server to check out all the new changes!

A Solemn Quest by @Eden

@nepo added the Dullahan to the game with help from @Slipped and @Snowram. The model was created by @Gemu. @lobster wrote a basic scheduler for particles, so we have emitter-like functionality. @PlaneCat has been working on adding a weapons guide to the book. @Slipped merged swimming improvements, including controls and animations. @zesterer added @WelshPixie's furniture models into the game.

Swimming improvements

Particle Timing by @lobster

Since we got rid of our ECS particle emitter component, we also lost the timing mechanism that generated particles at the correct time. This means our particle spawn rate is now tied to the framerate, so if the frame rate stutters, it can cause changes in particle density or other strange visual artifacts.
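For illustration, here is a minimal sketch (with a hypothetical Particle type, not Veloren's actual code) of how spawning per frame couples emission to framerate:

struct Particle {
    lifetime: f32,
}

fn emit_each_frame(particles: &mut Vec<Particle>) {
    // One particle per frame: at 144 FPS this emits 144 particles per second,
    // at 30 FPS only 30, and a frame stutter leaves a visible gap.
    particles.push(Particle { lifetime: 1.0 });
}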

To fix this, we need to track the last time a particle was created for any given effect so that we have consistent timing between particles. Because we have a lot of particles, reducing the work done per particle is important: it's multiplied 10,000 times or more, which can have a big performance impact.
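As a rough sketch of that idea, assuming a single hypothetical per-effect timer (the actual scheduler below shares one timer per spawn frequency instead):

use std::time::{Duration, Instant};

/// Hypothetical per-effect timer, shown only to illustrate the idea.
struct EffectTimer {
    last_spawn: Instant,
    interval: Duration,
}

impl EffectTimer {
    /// Returns how many particles are due this tick and advances `last_spawn`,
    /// so any leftover partial interval rolls over into the next tick.
    fn due(&mut self) -> u32 {
        let cycles =
            (self.last_spawn.elapsed().as_secs_f32() / self.interval.as_secs_f32()).floor() as u32;
        self.last_spawn += self.interval.mul_f32(cycles as f32);
        cycles
    }
}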

New furniture by @WelshPixie

Since many particles are generated using the same or similar timings, we can optimise this by putting all particles that spawn at the same rate into frequency buckets. Instead of tracking the last creation time for each individual particle, we can do it once per frequency.

for _ in 0..self.scheduler.heartbeats(Duration::from_millis(10)) {
    // spawn a smoke particle
}

// ...

/// Accumulates heartbeats to be consumed on the next tick.
struct HeartbeatScheduler {
    /// Duration = heartbeat frequency/interval
    /// Instant = last update time
    /// u8 = number of heartbeats since the last update
    /// - if it's more frequent than the tick rate, it could be 1 or more.
    /// - if it's less frequent than the tick rate, it could be 1 or 0.
    /// - if it's equal to the tick rate, it could be between 2 and 0, due to
    ///   delta time variance etc.
    timers: HashMap<Duration, (Instant, u8)>,
}

impl HeartbeatScheduler {
    /// Updates the last elapsed times and elapsed counts.
    /// This should be called once, and only once, per tick.
    pub fn maintain(&mut self) {
        for (frequency, (last_update, heartbeats)) in self.timers.iter_mut() {
            // the number of frequency cycles that have occurred.
            *heartbeats =
                (last_update.elapsed().as_secs_f32() / frequency.as_secs_f32()).floor() as u8;

            // Note: we want to preserve incomplete heartbeats, and roll them
            // over into the next tick.
            *last_update += frequency.mul_f32(*heartbeats as f32);
            // alternatively expressed as: now - remaining_partial_heartbeat_time
        }
    }

    /// Returns the number of times this duration has elapsed since the last
    /// tick:
    /// - if it's more frequent than the tick rate, it could be 1 or more.
    /// - if it's less frequent than the tick rate, it could be 1 or 0.
    /// - if it's equal to the tick rate, it could be between 2 and 0, due to
    ///   delta time variance.
    pub fn heartbeats(&mut self, frequency: Duration) -> u8 {
        self.timers
            .entry(frequency)
            .or_insert_with(|| (Instant::now(), 0))
            .1
    }
}
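Tying the two snippets together, here is a hedged sketch of how a tick might drive the scheduler; spawn_smoke_particle is a hypothetical stand-in, not a real Veloren function:

fn tick(scheduler: &mut HeartbeatScheduler) {
    // Update every registered timer exactly once per tick.
    scheduler.maintain();

    // Consume the heartbeats accumulated for the 10ms smoke effect.
    for _ in 0..scheduler.heartbeats(Duration::from_millis(10)) {
        spawn_smoke_particle();
    }
}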

The newly added cultist hammer by @Pfau

Another advantage of this approach is that we now have larger and more consistent batches of particles, which lets us further avoid memory fragmentation (dead particles wasting memory) by grouping even more particles with the same lifespan together. They can then be more efficiently arranged and uploaded to the GPU by taking advantage of bulk operations, such as replacing an entire slice of memory with another set of particles.
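As a rough sketch of what that bulk replacement could look like (hypothetical types, not the actual renderer code):

/// Hypothetical CPU-side instance data; in practice this would mirror the
/// layout of the GPU instance buffer.
#[derive(Clone, Copy)]
struct ParticleInstance {
    pos: [f32; 3],
    spawn_time: f32,
    lifespan: f32,
}

/// Particles with the same lifespan live in one contiguous range, so an
/// expired batch can be overwritten in a single bulk copy instead of
/// compacting individual dead particles.
fn replace_expired_batch(
    buffer: &mut [ParticleInstance],
    batch: std::ops::Range<usize>,
    new_batch: &[ParticleInstance],
) {
    // With a GPU buffer this maps to one sub-buffer upload rather than many
    // small writes (assumes `new_batch` has the same length as the range).
    buffer[batch].copy_from_slice(new_batch);
}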

The downside to this approach is that all effects with the same frequency will be in sync, which might look odd on screen. This isn't a problem for effects with a fast particle spawn frequency, since the synchronisation is imperceptible to most humans, but for particles with slow spawn rates, such as 100ms and above, it starts to become noticeable.

Dullahan standing tall. See you next week!