Ray Casting in 2D Grids

Ray casting is used in many areas of game development, most commonly for visibility, AI (often for visibility as well) and collision detection and resolution. If you are making a game, chances are you need ray casting somewhere. I started this blog post over a year ago, but decided not to write it up fully and publish it, since I thought it might be a little embarrassing, most of it being too trivial. But I still find it fairly hard to find good information or example code on this topic, and on the weekend of Global Game Jam 2016 I forgot how to do it properly, didn’t find any good resources, and ended up debugging ray casting code for a little over 3 hours (unnecessarily, I think). So if this actually helps no one, it shall remain as a memorial of shame for myself. Or as a handy reminder, so that history does not repeat itself in a way that leaves me debugging ray casting code for hours.

Grids are relevant because of the prevalence of tile based games, and even if a game is not tile based, optimization structures for collision detection/physics or other gameplay elements are often in place that relate to grids either directly (simple uniform grids) or indirectly (e.g. quadtrees).

Throughout this article, our grid will be a uniform grid with cell size grid.cellSize.
The ray will be a tuple of a point (its origin) and a vector (its direction), namely ray.startX, ray.startY and ray.dirX, ray.dirY. The set of points on the ray is then described by {ray.start + t*ray.dir | t is a real number}, so we can “address” every point on the ray with a value t.

All methods presented from here on out are implemented in this GitHub repository (rather badly most of the time, since I started this project over a year ago, so it is a little patchwork-y, but I put some work into refactoring, so it should be bearable):

https://github.com/pfirsich/Ray-casting-test

The Naive Method

The simplest, but oftentimes adequate, way to do it is stepping uniformly in world coordinates. In our case, where the ray direction is not normalized, we cannot just step uniformly in t, but must use the normalized direction vector multiplied by the step size (again: in world coordinates).
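In code, the naive stepper might look like this (a minimal sketch of my own; only grid.cellSize and the ray fields follow the conventions above, everything else is made up for illustration):

```lua
-- Naive ray cast: normalize the direction once, then step a fixed
-- world-space distance per iteration and record every cell we land in.
local function castRayNaive(grid, ray, maxDist, step)
    local len = math.sqrt(ray.dirX * ray.dirX + ray.dirY * ray.dirY)
    local nx, ny = ray.dirX / len, ray.dirY / len
    local visited = {}
    local d = 0
    while d <= maxDist do
        local x = ray.startX + nx * d
        local y = ray.startY + ny * d
        -- tile coordinates, 1-indexed as everywhere in this article
        local tileX = math.floor(x / grid.cellSize) + 1
        local tileY = math.floor(y / grid.cellSize) + 1
        visited[tileX .. "," .. tileY] = true -- visit/collide here
        d = d + step
    end
    return visited
end
```

A smaller step misses fewer corners, but costs proportionally more samples.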

Even if this is often sufficient, it is never exact in the sense that it finds all cells that intersect the ray: with finite dt, there will always be corners that can be overstepped. An example of this from the example löve program (function castRay_naive) can be seen in this screenshot:

Naive ray casting

The boxes inside the cells indicate missing or superfluous cells.

Of course this method could be “exact” if we always chose the step size to correspond to the size of a single pixel on screen, meaning we would not be able to distinguish exact and inexact at that scale, but this would imply a huge number of samples, which is most likely not efficient enough.

Line Rasterization Methods (e.g. DDA, Bresenham)

This might seem like the obvious choice, and depending on your game (or the problem at hand) it might very well be, but in general it is not sufficient either, since these algorithms operate (in our case) in tile coordinates only. As a result, there is no difference between a ray connecting the upper left corner of one tile to the lower right corner of another and a ray connecting the lower right corner of that same first tile to the upper left corner of the other one. In fact, most line rasterization algorithms rasterize a pixel when a diamond shape in the pixel’s center intersects the line connecting the centers of the start and end pixel (the “diamond rule”). An example of this, including a visualization of the diamonds just mentioned, can be found in this screenshot (also from the example program, which has both DDA and Bresenham available):

Ray casting with line rasterization - Bresenham

Here you can see that we lose the information of where exactly inside the tile our start and end points are, and we also don’t treat the tiles as axis aligned squares.

If you just need any line connecting your start and endpoint somehow, you might use a line rasterization method since they can be implemented very efficiently – and probably already have been, since they are so important for computer graphics in general. Also if your game has everything snapping to the tile grid and everything centered, this becomes an exact solution (a modern example would be Crypt of the Necrodancer)!

In my example program the functions implementing this are called castRay_DDA and castRay_Bresenham.

Exact Ray Casting

Simplified Case (direction positive)

For a more (algorithmically) efficient method we would love to not visit cells twice, i.e. make no unnecessary steps, and to never miss anything. So if you want to find all intersections of the line with the grid without missing any, at every step of the algorithm you just have to find the first intersection, given your current position and direction.

ray_cast_tile_2

If we consider the current position curX, curY, we can easily find the tile coordinates of the tile it is in by dividing by the cell size and truncating the fractional part. Beware that, because we are using Lua for example code (since the example program is in Lua), arrays and tiles will be 1-indexed, i.e. start with index 1. So we have to add 1 to both tile coordinates to get the final coordinates:

function tileCoords(cellSize, x, y)
return math.floor(x / cellSize) + 1, math.floor(y / cellSize) + 1
end

To get the next edges (assuming our direction is positive in both x and y), we only have to add grid.cellSize, or in our case, since we already added 1, do nothing but transform back to world coordinates by multiplying with grid.cellSize. The distances to these edges can then be divided by the corresponding components of our direction vector to give us these distances in units of t!

local tileX, tileY = tileCoords(grid.cellSize, curX, curY)
local dtX = ((tileX)*grid.cellSize - curX) / ray.dirX
local dtY = ((tileY)*grid.cellSize - curY) / ray.dirY

To find the first intersection, we then only have to increment our current t by the smaller dt of the two and update our current position.

Implemented as the function castRay_clearer_positive, the results are exactly what we wanted!
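Put together, the positive-direction caster might look roughly like this (my own condensed sketch, not the exact code of castRay_clearer_positive; it assumes non-negative direction components):

```lua
-- Exact traversal for rays with non-negative direction components.
local function castRayPositive(grid, ray, maxT)
    local cells = {}
    local t = 0
    while t <= maxT do
        local curX = ray.startX + t * ray.dirX
        local curY = ray.startY + t * ray.dirY
        local tileX = math.floor(curX / grid.cellSize) + 1
        local tileY = math.floor(curY / grid.cellSize) + 1
        cells[#cells + 1] = {tileX, tileY}
        -- distance to the next vertical/horizontal edge in units of t
        -- (a zero direction component yields inf here, which min() ignores)
        local dtX = (tileX * grid.cellSize - curX) / ray.dirX
        local dtY = (tileY * grid.cellSize - curY) / ray.dirY
        t = t + math.min(dtX, dtY)
    end
    return cells
end
```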

Ray casting accurate

Though, as mentioned before, negative directions (any axis) still give us some problems:

Ray casting - negative problems

The reason for this is obviously our flawed calculation of the next edge: in the current version, we add 1 to the tile coordinates, but in the negative cases we would have to add zero, since the edges we want to hit are the ones of the current tile.

This is intentionally formulated to hint at the other problem: if we jump to the next intersection, we will still be in the current tile and revisit the first intersection indefinitely! This is because math.floor maps values from [x, x+1) to x, so that a whole integer maps to itself, which is very sensible for a floor function, but not entirely what we want here.

Complete Case (all directions)

The problem of finding the next edge is trivial to solve, by introducing an offset of the tile coordinates that depends on the direction:

-- NOTE: 'cond and x or y' is a common idiom that is mostly equivalent to the ternary operator i.e. equals x if cond evaluates to true and y otherwise
local dirSignX = ray.dirX > 0 and 0 or -1
local dirSignY = ray.dirY > 0 and 0 or -1

...

local dtX = ((tileX + dirSignX)*grid.cellSize - curX) / ray.dirX
local dtY = ((tileY + dirSignY)*grid.cellSize - curY) / ray.dirY

There are multiple ways to solve the floor problem. One would be to rewrite our floor function so that it takes an additional parameter indicating which side of the interval should be open (i.e. not include the integer). Another is to add an epsilon to our t when incrementing it by either dtX or dtY, so that we barely sneak into the next cell. Choosing this epsilon is a little tricky though: by introducing it, our algorithm will start to have edges it can overstep, just like the naive method. Any non-vanishing epsilon has this as a result, and even if we make it really small (on the order of a screen pixel), we might run into issues when it simply disappears due to floating point normalization when ray casting in really big grids. This method then resembles a naive method with a really small step, which intelligently skips some samples where possible.

The best way though is probably to introduce our own tile coordinates, which are not calculated from the current position but kept track of alongside it. Then we can just decide to increment/decrement them at the right times: tileX when dtX < dtY, and tileY otherwise. This is implemented in the function castRay_clearer_alldirs_improved.
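Combining the direction-dependent offsets with explicitly tracked tile coordinates, the all-directions version can be sketched like this (my own version, not literally castRay_clearer_alldirs_improved; it assumes both direction components are non-zero):

```lua
-- Exact traversal for arbitrary directions. The tile coordinates are kept
-- alongside t instead of being re-derived via floor, which sidesteps the
-- problem of edge points mapping back onto the tile we just left.
local function castRayAllDirs(grid, ray, maxT)
    local tileX = math.floor(ray.startX / grid.cellSize) + 1
    local tileY = math.floor(ray.startY / grid.cellSize) + 1
    local dirSignX = ray.dirX > 0 and 0 or -1
    local dirSignY = ray.dirY > 0 and 0 or -1
    local stepX = ray.dirX > 0 and 1 or -1
    local stepY = ray.dirY > 0 and 1 or -1
    local cells = {}
    local t = 0
    while t <= maxT do
        cells[#cells + 1] = {tileX, tileY}
        local curX = ray.startX + t * ray.dirX
        local curY = ray.startY + t * ray.dirY
        local dtX = ((tileX + dirSignX) * grid.cellSize - curX) / ray.dirX
        local dtY = ((tileY + dirSignY) * grid.cellSize - curY) / ray.dirY
        if dtX < dtY then
            t = t + dtX
            tileX = tileX + stepX
        else
            t = t + dtY
            tileY = tileY + stepY
        end
    end
    return cells
end
```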

Final Solution

The hawk-eyed programmer might have noticed that castRay_clearer_alldirs_improved is a little more complicated than it needs to be. You have to watch a little more closely, but some terms in the calculation of dtX and dtY are constant, and maybe we can even eliminate the division. In fact we can, and with a few more transformations, noticing that all our constants are symmetric in x and y, so that we might as well write a helper function that calculates them independent of axis, we arrive at our new function castRay_clearer_alldirs_improved_transformed (thank god this article is going to end soon, these names are getting long). The necessary transformations to arrive at this function are shown in this gallery:

One branch and 5 additions per step plus a setup function (a little trickier, but constant time) is probably as good as it gets, though it is definitely not as apparent what is actually happening in this function. If we decide that we have no use for t and therefore the intersection points themselves (and just want to visit some cells, or want to know if there is a possible path at all), then we can eliminate t altogether. And because we don’t care about the actual values of dtX and dtY, we can also omit the “- dt” term, since the comparison in the next iteration will still yield the same result, so that our algorithm is reduced to one branch and two additions. This is in fact the algorithm described in “A Fast Voxel Traversal Algorithm for Ray Tracing” by John Amanatides and Andrew Woo, which inspired the transformations done to castRay_clearer_alldirs_improved.
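For reference, that reduced Amanatides/Woo-style traversal might look like this in Lua (my own sketch with my own names; it drops t, does all divisions in the setup and assumes non-zero direction components):

```lua
-- tMaxX/tMaxY hold the t of the next edge crossing per axis, tDeltaX/Y
-- the t needed to cross a whole cell. The loop itself is one comparison
-- and two additions per visited cell.
local function traverseGrid(cellSize, startX, startY, dirX, dirY, numSteps)
    local tileX = math.floor(startX / cellSize) + 1
    local tileY = math.floor(startY / cellSize) + 1
    local stepX = dirX > 0 and 1 or -1
    local stepY = dirY > 0 and 1 or -1
    local tDeltaX = math.abs(cellSize / dirX)
    local tDeltaY = math.abs(cellSize / dirY)
    -- world coordinate of the first edge the ray crosses along each axis
    local edgeX = (dirX > 0 and tileX or tileX - 1) * cellSize
    local edgeY = (dirY > 0 and tileY or tileY - 1) * cellSize
    local tMaxX = (edgeX - startX) / dirX
    local tMaxY = (edgeY - startY) / dirY
    local cells = {{tileX, tileY}}
    for _ = 1, numSteps do
        if tMaxX < tMaxY then
            tMaxX = tMaxX + tDeltaX
            tileX = tileX + stepX
        else
            tMaxY = tMaxY + tDeltaY
            tileY = tileY + stepY
        end
        cells[#cells + 1] = {tileX, tileY}
    end
    return cells
end
```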

All in all, I showed you what the problems of ray casting in 2D grids are and what kinds of solutions exist for them, or seem to exist but turn out to be a fluke. I also made a nice connection to another algorithm, which many use without understanding it properly, by deriving it from an easier to understand one and applying some transformations.

Preparation for Indie Speed Run 2015 – Screwdriver, SparklEd, Lighting

In preparation for Indie Speed Run 2015 (which went well; I will post our game when the voting period starts), a long time programming buddy of mine, Markus, and I decided to prepare by building a handful of tools that would make some things a little easier. We find ourselves discarding the same ideas time and time again, because we don’t deem them feasible in the often severely limited amount of time, or not discarding them but spending too much time on them, so we decided to prepare them beforehand.
We had a lot of stuff in mind, including a post-process library, realtime and precomputed 2D lighting, pre-built character controllers and smooth collision detection and response solutions for different scenarios we might need, a proper particle system and a corresponding editor, an editor and a library for skeletal animations, and a level editor that supplements the awesome Tiled for games that are not tile based (but either polygon or simply sprite based). As is glaringly obvious, this is quite a lot of work, and of course we did not manage to finish everything on the list, though still a considerable part of it.
Markus worked on the animation editor/library and managed to finish it in time. Sadly we didn’t have any use for it in the jam, just as we didn’t for everything I worked on. We wanted to implement the best idea we had, and coincidentally (and unexpectedly) there was no need for the tech we developed beforehand (excluding a small exception elaborated on later). The projects I took on and managed to mostly finish before the jam were the lighting system, the particle effect editor and the level editor.

The Level Editor (Screwdriver) – GitHub

This level editor uses my GUI library, kraidGUI. It is, in a sense, a GUI made to be modified: it only serves as a slim core for handling/passing events and handling properties. Look and feel are implemented in a “theme”, which makes use of a very simple backend that, as of yet, is only implemented for löve. Every part of it can be exchanged and extended easily. For Screwdriver I had to implement a color picker and was surprised how well kraidGUI did its work by making this very easy.
Both the GUI and the Editor were meant as an exercise in writing actual software. I feel like, lacking a formal education and professional experience, I tend to “make things work” first and foremost often leaving proper software design behind. This certainly also has a lot to do with me participating in game jams a lot. But GUIs and Editors (in this case especially) are meant to be used a lot more than once. kraidGUI has some things I’m still not completely content with and Screwdriver is also not free from a few sketchy ways to do things, that I would consider hacks, but all in all I think that it’s still mostly quite readable, maintainable and hack free. Both of them lack documentation, but I think it’s very common for programmers to not be particularly fond of creating these. Anyways I will try to still spend some time on documenting kraidGUI soon, since I think it’s a nice piece of software and other people might deem it useful too.

Regarding the editor itself, I made it with usage in many different games in mind. Every type of entity in the game world has an entity description, which contains metadata (modifiable or not), so a seemingly custom fitted editor can be prepared for any game without much effort. The systems corresponding to the components that make up the entities are implemented in the components themselves, which somewhat defeats the purpose of an ECS, since special cases still have to be handled inside the components, but the editor was supposed to be modular and capable of being extended with rather complicated edit modes without treating the main components specially. Therefore I’m borrowing terminology from entity component systems more than anything. Thankfully I haven’t regretted doing it this way yet, since there was no special case except shared behaviour, which could be resolved by properly breaking up functionality and using inheritance. The editor remains very modular and can be extended fairly easily with rather comprehensive editing capabilities.

An example of such an entity description file can be seen here (it should be self-explanatory):

The editor itself can be seen used in this video:

The Particle Effect Editor (SparklEd) – GitHub

After collecting some notes about the particle system I wanted to implement for this editor, I remembered that I might have heard of löve having a particle system already. I hadn’t checked, but a properly tested system that is potentially a lot more efficient (instancing, geometry shaders) should be preferred to anything I could implement in löve itself. So I could reduce the amount of work to making an editor for löve’s particle system.
I made the editor in a day, plus a few days of fixing bugs while using it. I borrowed the idea of using the mousewheel for numeric value editing to reduce the amount of GUI related work (I didn’t want to bloat a project this small with kraidGUI). Using the mousewheel while hovering over a property increases/decreases it, while holding shift decreases the amount of change. Loading is done by using the mousewheel above a property serving as a radio button, and saving via shortcut. It supports multiple emitters to make many-layered effects (explosions for example need smoke, debris, fire, etc.), as well as continuous effects (e.g. fire) and bursted effects (you can press space to emit “Emit amount” of particles). It also has brief built-in documentation in the form of tooltips. It can be seen used in this video:

Lighting

When I started on this I was very short on time, so I had to make some suboptimal decisions. I would probably do a lot of things differently if I had to do it again, so I will not go into much detail. Every light in the world has its own lightmap (normally a lot smaller than the in-game pixels it affects, because the soft shadows alleviate a lot of the visual impact of lower resolution lightmaps), which is only updated if the light changes or if the occluders it affects change (in case it casts shadows). I also implemented soft shadows by appending fins to the shadow geometry on both sides and scaling them appropriately.

Hard shadows, 1 light

Soft shadows, 1 light

Hard shadows, 11 lights

Soft shadows, 11 lights

Old videos fixed

Some of you may have noticed that most of the posts tagged with “from old blog” have broken videos. It turns out that it is not super easy to get a high quality video out of unpublished blogspot blog posts (that’s what I did with my old blog), but I managed to find most of the old ones.

The following posts now have proper videos and I encourage you to watch them in their full HD glory:

Spacewalk
Metroidlike
Feedback-Effect

Also I found videos of a few demoscene-related things I started when I was a little younger. I never really participated in the demoscene, but was a long time admirer and of course tried to do similar stuff myself, but the results were rather lame. One of these things is an attempt at a 4KB intro, which you can see here:

In retrospect I think it’s quite neat and I probably should have finished it. But I felt a little intimidated and inadequate next to these things:

pouet.net – 4k – sorted by popularity

 

SudoHack update & Replay system

So I really have to write another blog post. 120 of the 173 total commits in my Git repository came after the last blog post, and I’m starting to feel bad about putting anything new into the game, because I should have documented all the other stuff already.

SudoHack update

Notably, I started putting my changelog online, drastically changed the game once, should have posted about it, and changed it again; I am currently in the process of changing it another time. The first drastic change was losing your Bits a lot faster over time and gaining a lot more for every enemy kill. This essentially meant that you could not stand still or stop killing enemies without dying, which was a lot closer to the creative vision I had from the start. I started feeling rather confident with the game and received good feedback, but I realized: to make a full fledged game out of it that offers the scope I had in mind, I had to make a lot more levels. Taking into account that the player only sees a subset of all the possible levels in every run, I estimated I had to make about 300 maps. In conjunction with the fact that it took me at least a week to make 6 I kind of liked, I had to take a different approach if I ever wanted to finish it (which is one of the main goals of this project), prompting the second radical change. So I revisited the two level generators I had already started working on and dismissed pretty fast. And as always, you just have to throw enough sweat and man hours at something to make it work, and I arrived at something I rather like at the moment.

Screenshots:


The algorithm is essentially a random walk in 2D (a little fancier than that, but not too fancy) and, as you can see, in the limit of many steps we observe a Gaussian distribution (as expected with a random walk)! I just put them in to show that this algorithm works rather nicely for small maps (they are very similar to the ones I built myself) and also quite okay for the medium sized ones.
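The plain version of such a generator fits in a few lines (my own toy sketch, purely for illustration; the game’s generator is fancier than this):

```lua
-- Carve floor tiles out of a solid map by walking in random
-- axis-aligned steps, clamped so the outer border stays solid.
local function generateMap(width, height, steps, seed)
    math.randomseed(seed or 0)
    local map = {}
    for y = 1, height do
        map[y] = {}
        for x = 1, width do
            map[y][x] = 1 -- 1 = wall
        end
    end
    local x, y = math.floor(width / 2), math.floor(height / 2)
    local dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}}
    for _ = 1, steps do
        map[y][x] = 0 -- 0 = floor
        local d = dirs[math.random(4)]
        x = math.min(math.max(x + d[1], 2), width - 1)
        y = math.min(math.max(y + d[2], 2), height - 1)
    end
    return map
end
```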

Being able to have maps a little bigger than before, I could, of course, not resist and also changed the scale of the maps a little. Sadly though, I fairly quickly lost the stupid grin I had acquired from implementing feature after feature while seeing the simulation time per frame steadily sticking at about 1ms. Apparently things aren’t always O(1) and are sometimes even worse than O(n). But that’s why I kept a tidy list in my Trello of any optimizations I could do if need ever be, which I honestly couldn’t wait to tackle.

A lot of the collision detection was already optimized by only checking collisions with the level geometry for the tile the object is currently on and the surrounding tiles, but player-enemy and enemy-enemy collisions never had a broadphase detection step, which I then implemented. After considering numerous approaches and comparing the necessary gain with the amount of work required, I settled on the easiest solution: a grid/spatial hash. The whole grid effect was re-tweaked and optimized to only be applied to a little more than the visible tiles. Other parts of the collision detection were also optimized a little; I replaced a lot of if shape.type == “whatever” with table lookups, which proved to be exceptionally better. I don’t even know why I didn’t do it from the start, since the code also got a lot smaller and prettier. It seems I have reached the point where I have grown out of the code I wrote at the beginning of the project, meaning that I know more about the language now and get shivers running down my spine reading the early stuff. I also implemented collision detection filtering for “idle” objects, and enemy AI is now divided between “thinking” and “acting”. The former represents the current state (and substate) of the enemy and the transitions between them, usually involving the more costly computations (ray casts, a lot of distance calculations), and the latter the behaviour, mostly consisting of accelerating towards something or shooting players (outrageous!).
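The grid/spatial hash broadphase can be sketched like this (hypothetical names, point-sized objects for brevity; a real one would insert objects into every cell their extents overlap and also consider neighbouring cells):

```lua
-- Objects are bucketed by the cell their position falls into; candidate
-- pairs are only generated inside each bucket, so far-apart objects are
-- never compared at all.
local SpatialHash = {}
SpatialHash.__index = SpatialHash

function SpatialHash.new(cellSize)
    return setmetatable({cellSize = cellSize, cells = {}}, SpatialHash)
end

function SpatialHash:insert(obj)
    local cx = math.floor(obj.x / self.cellSize)
    local cy = math.floor(obj.y / self.cellSize)
    local key = cx .. ":" .. cy
    self.cells[key] = self.cells[key] or {}
    table.insert(self.cells[key], obj)
end

function SpatialHash:candidatePairs()
    local result = {}
    for _, bucket in pairs(self.cells) do
        for i = 1, #bucket do
            for j = i + 1, #bucket do
                result[#result + 1] = {bucket[i], bucket[j]}
            end
        end
    end
    return result
end
```

The narrowphase (exact shape vs. shape tests) then only runs on the candidate pairs.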

Replay system

Something I spent a lot more time on than I should have is definitely gameplay recordings. I can’t spend a 40 hour week (or more) on this game and I certainly don’t have huge testing capacities, so I wanted to maximize the information I could get from the few testers I have.

As far as I see it, there are two fundamental approaches to doing this:

  • recording user input and just “playing it back” afterwards
  • recording snapshots of the whole game state and relevant events

As far as my analysis went the pros for each method are as follows:

  •  State snapshots
    • will work even if your simulation isn’t deterministic, and is therefore less error prone considering differing floating point behaviour across architectures (and potentially different binaries). But most networked games already have this requirement, and deterministic simulations can be achieved without bending over backwards.
    • probably smaller if the dimension of your state space is rather small (few enemies, bullets, etc.)
      • Also you can choose arbitrary precision for most of your recorded data. It is not important that the player looks exactly in the direction it did while recording if it looks good enough, while it can be dangerous to truncate precision with keypress recording.
    • a lot less dependent on mini-tweaks. Changing a value a little in enemy behaviour might lead to divergences that eventually destroy the recording
    • you can record only a few seconds in the middle of the game without knowing the full history. This could also be done with keypress-recording but you would have to make a full state snapshot at the start, which is almost half the work of doing it completely with state snapshots.
    • Rewinding and seeking are relatively easy to implement. Using keystrokes this is only possible by making snapshots every N frames and skipping to them.
  • Keypresses
    • probably smaller if the dimension of your state space is rather high (only a few inputs for potentially thousands of enemies/bullets)
    • could potentially be coded once and reused for multiple projects (modulo extra meta data), given your simulations are always deterministic
    • can be ‘stapled on’ later rather easily (again, determinism provided)
    • could be used as a tool to reproduce bugs that are often difficult to produce
    • the user can be recorded in the menu or during the pause screen, and observing unwanted keypresses (the user trying out different keys because she/he doesn’t know how something is supposed to work) or just how well the user can navigate those menus is essentially free (but also mandatory). In essence: unwanted behaviour or behaviour not specifically expected might be observed, which is a useful tool for debugging.
    • Gives you another reason to use a fixed timestep, which you definitely should (for framerate independent behaviour/making sure that approximation in the integrator is valid for the used dt, because physics might get weird with too small or too big dt/because it’s crucial for networked multiplayer), though in-between frame interpolation might be needed.

In this case I thought that for my purpose, which is not actual ingame replay of something visible to the player but rather a tool for me, keypress recording should be my method of choice. My implementation is open source and can be found here:

loveDemoLib (see main.lua for a usage example)
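The core idea fits in a few lines (my own sketch; loveDemoLib additionally handles serialization and metadata): with a fixed timestep and a deterministic update function, storing the per-frame inputs is enough to reproduce the whole run.

```lua
-- Record: read the input each fixed-step frame, remember it, simulate.
local function recordRun(update, readInput, numFrames)
    local recording = {}
    for frame = 1, numFrames do
        recording[frame] = readInput()
        update(recording[frame])
    end
    return recording
end

-- Playback: feed the remembered inputs to the same deterministic update.
local function playbackRun(update, recording)
    for frame = 1, #recording do
        update(recording[frame])
    end
end
```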

My googling showed (I could have made a histogram or something and tested it myself) that regular gamepads mostly have a precision of 10-12 bits per axis, so I tried using fixed precision for these floats by writing floor(val*GAMEPAD_AXIS_FP_PREC) and reading val/GAMEPAD_AXIS_FP_PREC. Apparently even GAMEPAD_AXIS_FP_PREC being 10^5 – 1 or 10^6 – 1 is enough to throw the simulation off rather quickly. I assume this is because of binning problems, i.e. the non-homogeneous density of floating point numbers in the interval -1 to 1 in contrast to the homogeneous density of fixed point numbers in this interval, so that some fixed point numbers correspond to multiple floats and some to none.
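The quantization in question, sketched (GAMEPAD_AXIS_FP_PREC as in the text; the rest is my own illustration). The round trip is lossy, which is exactly what makes the replayed inputs differ from the live ones:

```lua
local GAMEPAD_AXIS_FP_PREC = 10^5 - 1

-- write: store the axis as a truncated fixed point integer
local function encodeAxis(val)
    return math.floor(val * GAMEPAD_AXIS_FP_PREC)
end

-- read: turn the stored integer back into a float
local function decodeAxis(stored)
    return stored / GAMEPAD_AXIS_FP_PREC
end

-- the decoded value is close to, but generally not equal to, the
-- original float the live simulation actually consumed
local original = 0.123456789
local err = math.abs(decodeAxis(encodeAxis(original)) - original)
```

One conceivable way out (an assumption on my part, not what the game does) would be to feed the simulation the already round-tripped value during recording too, so the live run and the replay see bit-identical inputs.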

I also really wanted to implement binary writing, but it is far too much of a hassle in Lua if you want to write numbers (extracting every byte yourself for integers and mantissa/exponent for floats). It’s certainly not inconceivably hard, but considering that I wrote the uploader in Python, zipping the file before uploading guarantees that the file to upload is seldom bigger than 1MB, which is totally acceptable in this day and age, I think.

Integration

First I added a pre-commit hook to my git repository that calls a Python script, which then replaces certain variables in my code. For now it’s just a timestamp of the commit (I would really like to have a hash, but of course I can’t get the hash of the commit before committing, and committing the changed file afterwards is also suboptimal), so I can (almost) uniquely identify different versions of the game and can always check out the necessary version to play back a specific recording.

Then I added an awesome duo of an uploader, written in Python:


and a Web interface for adding keys and overviewing and downloading the recordings uploaded by the testers (also I love Bootstrap):


They are far from great and there is a lot of stuff I would change if I could justify putting any more work into them, but the only people that will ever use them are probably people I personally know quite well, so I don’t feel super bad about it.

 

Of course I also did a bit of other stuff in 120 commits, but I want to write them up in a later blog post with a little more context (and screenshots and videos of course).

Tweaking values in SudoHack

It is fairly well known that tweaking values in a game by just a few percent can make or break awesome gameplay and create the difference between an iconic franchise and the kind of gameplay you often get from games that are made by some 13-year-olds and uploaded for testing on a game development board. Not that I don’t like games by 13-year-olds; it just took me a ridiculously long time to get the basic hang of making games remotely fun (and I’m still very far from being good at it). And let’s face it: if you are a game developer and you’re tweaking the friction that is applied to the player when the boss knocks him back with his extra-power-attack, you often stop after 20 minutes and call it a day. It’s tedious, and even if that’s not always the case, it is seldom any fun. Especially with a game like SudoHack, I discover a new game almost every day by just trying out different parameter spaces for all the configuration values in it.

Until this morning, for every value I wanted to tweak, I had to make changes in the code and restart the game to view its effects. There was a system in place which let me press ctrl+r, generating a tweaks.lua configuration file (it looked something like this: https://gist.github.com/pfirsich/854145171298f3175ecf) and giving me the opportunity to change these values, hit ctrl+r again and have the changes show up in the game in real time. Even though this is almost neat, it is way too complicated to have actually been used (like once or twice maybe). I remember having something like this in one of the dozen C++ game engines I started developing when I was a little younger, where I would define a TWEAK(value) macro, which would use “__FILE__” and “__COUNTER__” to uniquely identify every call of this macro. The actual implementation would look up the key-pair and, upon triggering a reload (by pressing ctrl+r for example), parse all the source files for all occurrences of TWEAK and update its values, therefore giving me the opportunity to tweak everything in the source and only tell the game to reload the tweaks. And I thought it was awesome. So I set out to do something like this in Lua/löve. The first implementation worked fairly quickly: using the already introduced tweak(defaultValue, name) functions all over the code, I update the values by parsing the source for these function calls. Because I fear that it might not be fully clear what I’m actually doing with this, here is an example work flow:

<implement new feature (for example: bit rendering)>
[...]
love.graphics.setColor(tweak(40, "bits: colR"), tweak(150, "bits: colG"), tweak(200, "bits: colB"), 255)
[...]
<be not content with the look of the bits, change the values, while leaving the game running>
[...]
love.graphics.setColor(tweak(255, "bits: colR"), tweak(100, "bits: colG"), tweak(100, "bits: colB"), 255)
[...]
<hit ctrl+r>
=> The changes are visible in the game
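The tweak mechanism behind this workflow can be sketched roughly like this (my own minimal version, not the game’s actual code; the pattern only handles numeric literals):

```lua
-- The registry holds the current value per name; tweak() returns it,
-- falling back to the default on first use.
local tweakRegistry = {}

local function tweak(defaultValue, name)
    if tweakRegistry[name] == nil then
        tweakRegistry[name] = defaultValue
    end
    return tweakRegistry[name]
end

-- Called on ctrl+r: scan the re-read source for tweak(<number>, "<name>")
-- calls and overwrite the registry with the current literals.
local function reloadTweaks(source)
    for value, name in source:gmatch('tweak%(%s*([%d%.]+)%s*,%s*"([^"]+)"%s*%)') do
        tweakRegistry[name] = tonumber(value)
    end
end
```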

This is already pretty neat, but one might think: disregarding potential future uses, why do you even need names? Isn’t the nth occurrence of a tweak() call in a source file enough to uniquely identify the values? And you’re right, they are actually quite useless, but have a look at my implementation:


Then you might formulate this problem as: can we generate the tweak-value names ourselves? And the answer is: I don’t know. I asked in #lua on freenode and people recommended looking at the Lua debug module, which is capable of building a stacktrace at any point in the file, getting the source file name and the currently executed line number, but this is not enough if you want to allow multiple tweak values in a single line.

I implemented a package loader, invoked when another module is required, that replaces every occurrence of a specific func() call with func(x), where x is a string concatenating the filename and the number of occurrences before it. For the interested:


Despite it working, I don’t really feel comfortable using it, since I don’t like changing the actual entry point of my game to a different file only to have it be included right after the tweak-loader. Especially if what I gain seems minimal. I also really don’t like messing with things like this, since löve’s filesystem sandbox might make it a little more complicated when run from a zip archive (which is, in a shipped version, the default mode of operation), but I’m not entirely sure about this.

Finally, I will just use the system that was in place five hours ago and see how much entering names will annoy me, pretending that they might have an actual use in the future. All the approaches discussed here are, of course, only really usable in small teams or, more concretely, teams where everyone who should have the possibility of tweaking game values knows her/his way around the code. Otherwise a system where code and data are more separated is probably the way to go, since the amount of overhead can be justified by expanding access to the configuration values. In that case I would also really like some Quake/Source engine like console, which makes easy changes in the game possible and gives the opportunity to spawn graphical elements like sliders/spindowns to edit them visually. For me though, the quick-to-edit and almost no-overhead solution is more desirable.