Monthly update for July/August 2018

Looking at the previous entry, I think I’ve made some progress, both on the conceptual plan of the architecture and on the actual implementation.

Focusing on Rust for performance (and memory safety)

In the last post, from almost two months ago, I mentioned the possibility of multiple implementations in languages other than Rust, mainly C#, but also, for example, JavaScript. That’s certainly feasible in the sense that the system is conceptually easy to implement. However, it now seems less desirable in the context of actual performance.

Having one performant program that handles simulation processing really well, along with an interface for connecting it to other applications so they can use it as a sort of “backend”, seems preferable to multiple implementations that would inevitably end up slower.

That’s why I’ve been focusing mostly on the Rust implementation of the architecture, dubbed outcome-sim (link to the repository). It’s a library meant to be used in other Rust projects.
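To illustrate the “library first” approach, here’s a minimal sketch of what using it from another Rust project might look like. Only the crate’s role comes from the description above; the Sim type, the from_scenario constructor and the step method are hypothetical placeholders, since the actual API is still taking shape.

```rust
// Hypothetical usage sketch; the real outcome-sim API may look different.
// Assumes a Cargo dependency on the outcome-sim crate.
use outcome_sim::Sim; // hypothetical type name

fn main() {
    // Load a scenario and advance the simulation a few ticks.
    let mut sim = Sim::from_scenario("scenarios/test") // hypothetical constructor
        .expect("failed to load scenario");
    for _ in 0..10 {
        sim.step(); // hypothetical method advancing the simulation by one tick
    }
    println!("ran 10 ticks");
}
```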

Tool for working with simulations

With a basic library such as outcome-sim, we need an application that will make use of it. That’s where endgame comes in (link to the repository). It’s a command-line tool for creating and processing outcome simulations and analyzing their data.

Interfacing using a server-like structure

The idea of running a “backend” application that does the hard work of processing a simulation, while a user-facing application controls it and queries data from it, is really a “universal” concept. It enables a few desired use cases at once, such as a “multiplayer” situation where multiple users connect to the same server. I’ve outlined some of the server-mode considerations here.
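As a rough illustration of the frontend/backend split, here’s a minimal sketch of a client asking a locally running simulation server for a piece of state. The address, port and request path are made up for the example; the actual protocol the endgame server will speak is still being worked out.

```rust
// Minimal sketch of the "frontend queries a simulation backend" idea.
// The address, port and endpoint below are hypothetical.
use std::io::{Read, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Connect to a locally running simulation server (hypothetical port).
    let mut stream = TcpStream::connect("127.0.0.1:9000")?;

    // Ask the backend for some piece of simulation state (hypothetical path).
    stream.write_all(
        b"GET /state/population HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n",
    )?;

    // Read whatever the server sends back and print it.
    let mut response = String::new();
    stream.read_to_string(&mut response)?;
    println!("{}", response);
    Ok(())
}
```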

Finalizing design for first version of module files API

Core patterns for deserializing the module files are getting there. I’m striving for simplicity here: to keep the files easy to write and easy to understand, I want to avoid too much “branching” wherever possible.
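To give a flavor of what flat, branch-free declarations could look like, here’s a small sketch that deserializes one declaration into a plain struct. It assumes, purely for illustration, a YAML-like format and the serde and serde_yaml crates; the field names and the format itself are placeholders, not the final module files API.

```rust
// Sketch only: the real module file format and field names are not final.
// Assumes serde (with the "derive" feature) and serde_yaml as dependencies.
use serde::Deserialize;

// A flat, "branch-free" declaration: every field sits at one level,
// so the file stays easy to write and easy to read.
#[derive(Debug, Deserialize)]
struct VarDeclaration {
    id: String,
    #[serde(rename = "type")]
    var_type: String,
    default: f64,
}

fn main() {
    let decl_text = r#"
id: population
type: float
default: 1000.0
"#;
    let decl: VarDeclaration =
        serde_yaml::from_str(decl_text).expect("failed to parse declaration");
    println!("{:?}", decl);
}
```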

Anyway, this stabilization of the API means that the documentation will finally feature declarations written in a consistent way. More complex examples of using the API to create whole modules will also be coming online in the coming months.

Rethinking Anthropocene backend

Anthropocene is going to use the endgame server as its backend, instead of doing the processing work itself. It will make use of the data sent back from the server through HTTP messaging, as well as send messages to that server telling it what to do.

It will support both connecting to an endgame server that’s already running and spawning a new one locally based on a chosen scenario or snapshot. This means the game will need to be distributed with the endgame binary.

Communication with a locally spawned server should be quite fast. The game will spawn the endgame server with the “--no-compression” option as well, so that the server doesn’t compress its responses.
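Spawning the bundled binary could be as simple as the sketch below. The “--no-compression” flag is the one mentioned above; the “server” subcommand and the “--scenario” flag are placeholders for whatever the final endgame CLI ends up looking like.

```rust
// Sketch of spawning a local endgame server from the game, assuming the
// endgame binary ships next to the game executable.
use std::process::{Child, Command};

fn spawn_local_server(scenario: &str) -> std::io::Result<Child> {
    Command::new("./endgame")
        .arg("server")           // hypothetical subcommand
        .arg("--scenario")       // hypothetical flag
        .arg(scenario)
        .arg("--no-compression") // skip compressing responses for local use
        .spawn()
}

fn main() -> std::io::Result<()> {
    let mut server = spawn_local_server("scenarios/test")?;
    // ... connect to the server over HTTP here ...
    server.wait()?;
    Ok(())
}
```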

Designing graphical interface for endgame

I’ve done some preliminary work on endgame-gui. It will provide a graphical interface for most of endgame’s functionality. A graphical interface will also make better visualization of simulation data possible.

It’s not the highest priority right now, but it’s on the TODO list.

I’ll try to get some screenshots for next month’s update.

Planning first modules

In the beginning, much of the work here will be tied to testing and tutorials.

Beyond that, there is a need for a good plan for modelling certain systems in a way that will let them scale in the future. Basically, it will be best to split certain subsystems into their own modules, which is not an easy thing to do when you’re trying to model large systems.

It could be good to start smaller, with an already existing model. I’m thinking about using parts of the systems design from Democracy 3 as an initial test.