


Godot Jam Review

At the beginning of the year, I posted about giving another try to Godot, the most popular free and open-source game engine around.

I listed some pros and cons at the time. I then decided to enter a jam to motivate myself to use the engine in a real, complete project, even if only a jam-sized game.

Now it's time to write a review of the whole process.

TLDR: I failed to complete the game. I tried to create a pipeline to build a nightly version of the latest Godot, with C# support. It is partially working.


The Jam theme was “ocean”. Bonus points if:

  1. All sounds in-game are made with your mouth(s)
  2. Include a fishing mini-game
  3. Include your favorite quote or pun in-game

So I started. As previously said, I planned to reimplement an old game of mine as the main game. The advantage was that I already knew what was needed in general. Another plus was the fact that the game was abstract, so I could save a lot of resources and time on presentation. And by making the sound effects with my mouth, I could postpone that front until the end.

For the mini-game, I looked for a small board game that I could easily implement in digital form. After some research, I settled on Leaky Boat, a fast-paced pen-and-paper dice game.

So I started to code. But the problems with the C# integration were getting on my nerves. The Godot editor crashed more than 30 times on the very first night of coding. It was not a blocker, but it made everything very, very difficult.

New Version from Scratch

As a potential solution, I checked whether the ongoing development of Godot 4 (I was using the "stable" Godot 3.5) had any nightly build available. I found someone who was creating these nightly builds! But only for the original, non-C# version. The repository was open, so I checked if anything could be salvaged. Not much.

So, as a detour, I decided to build a pipeline on GitLab that would fetch the source code and compile it. Eventually, I would schedule it to run every night. However, developing a build pipeline online is very tedious and laborious: every change has to be tested by running it online. In Godot's case, triggering the compilation only to discover, 30 minutes after the build started, that it had failed because of some missing dependency in the build stack was not fun. It took me one whole day to burn through my CPU quota this way.
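A minimal sketch of what such a scheduled compile job could look like in .gitlab-ci.yml (the base image, package list, and scons flags are my assumptions; the C# variant also needs extra glue-generation steps not shown here):

```yaml
# Hypothetical scheduled job that compiles Godot from source.
build-godot:
  image: ubuntu:22.04
  before_script:
    - apt-get update && apt-get install -y build-essential scons pkg-config
      libx11-dev libxcursor-dev libxinerama-dev libgl1-mesa-dev libglu1-mesa-dev
      libasound2-dev libpulse-dev libudev-dev libxi-dev libxrandr-dev
  script:
    # Godot 4 renamed the Linux platform target to "linuxbsd".
    - scons platform=linuxbsd module_mono_enabled=yes
  artifacts:
    paths:
      - bin/
  rules:
    # Only run when triggered by a pipeline schedule (the nightly).
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```

The painful part is the feedback loop: every tweak to `before_script` costs another half-hour pipeline run.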

So, as a second detour, I decided to host a local GitLab instance on my computer. It would allow me to develop the pipeline itself. Once it was OK, I would migrate back to the online service. It took me 2 days to set this up. I first went with local Kubernetes, but it was getting too complicated. Then I migrated to a solution I am more familiar with: Docker Compose inside a virtual machine. I created it inside VirtualBox (instead of KVM) because I planned to reuse it whenever I decided to use Windows.
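The local setup boiled down to a small Compose file, roughly like this (ports, hostname, and volume paths are placeholders; GitLab's own Docker installation guide covers the full options):

```yaml
# Hypothetical docker-compose.yml for a throwaway local GitLab.
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    hostname: gitlab.local
    ports:
      - "8080:80"    # web UI
      - "2222:22"    # git over SSH
    volumes:
      - ./config:/etc/gitlab
      - ./logs:/var/log/gitlab
      - ./data:/var/opt/gitlab
  runner:
    image: gitlab/gitlab-runner:latest
    volumes:
      # Let the runner spawn sibling containers for CI jobs.
      - /var/run/docker.sock:/var/run/docker.sock
      - ./runner:/etc/gitlab-runner
```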

Downloading and building several Docker images takes a lot of space! I had to resize the VM's disk to something much bigger than originally planned to accommodate the dozen images created or downloaded.

The plan was to create a helper image with all the tools needed to compile the code, with or without C#, host it in GitLab's own container registry, and reuse it in the main pipeline. This step was working fine, but the actual build kept failing time after time.
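Publishing the helper image to the project's own registry is a standard GitLab job; a sketch using GitLab's predefined CI variables (the image name is my invention):

```yaml
# Hypothetical job: build the helper image and push it to the
# project's GitLab container registry for later jobs to reuse.
build-helper-image:
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/godot-build-tools:latest" .
    - docker push "$CI_REGISTRY_IMAGE/godot-build-tools:latest"
```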

To check whether the steps were right, I decided to run the compilation directly on my machine. I had been avoiding this so as not to pollute my PC. But it worked. Since I had "wasted" a couple of days on this detour, I decided to use this local build in my project.

New Version, Old Problems

Godot 4 renamed many of its classes and changed countless small things internally, so it took me a couple of hours to migrate to the new environment. The good thing is that I did not have much to convert. Done. The game was working the same as before.

Now it was time to continue development. But the problems continued in the same way: the editor kept crashing time after time. I managed to make both the game and the mini-game functional, with restrictions. The pace was slow because I had to investigate how to do things all the time, and the documentation was definitely not comprehensive for C# users.

After 5 days, I gave up. 😞 I could theoretically finish the game in some state, but I decided to focus my attention on other projects instead. I might try this engine again later, but for now, I will return to Unity until I finish one of my projects.

A couple of days after the end of the jam, Godot 4 alpha 1 was released. I still think that, if the devs do not provide nightly builds themselves, my project has some room.

Despite the failure, I learned a lot about Godot, GitLab, and Kubernetes, especially the latter two. I will surely use them in the future, so I do not feel the weight of failure.

All the code, even incomplete, is open source on my GitLab profile.

Also, they organize a jam every month, so I can certainly reuse all of this for the next one.


Trying Godot Engine Again

It's been about 10 years since I discovered Unity and fell in love. The editor was great, and I liked programming in C#. It allowed me to be both organized and creative.

Despite it being among the top 2 suites in the world, I'm increasingly annoyed by it. It became big, heavy spyware, full of annoyances. In addition to being super expensive (by Brazilian standards), the pricing model is much less indie-friendly than its nemesis, Epic's Unreal Engine: users pay upfront instead of paying royalties on their success.

Time to explore new grounds! To be honest, I try new stuff all the time. Some criteria to consider:

  • Open source preferred, almost required.
  • Avoid C++ (because my games would leak memory for certain). JavaScript is discarded due to its performance. Rust is hot, but any engine supporting it is probably super beta.
  • Small footprint if possible.
  • Pro developer tools, like CI/CD headless compilation.
  • Big community or organization supporting it. Without big backing, a project is an abandonment waiting to happen.

So for the past months, I have played with several options. Notably:

  • Unreal is unbearably gigantic (7 GB+), which hits especially hard on CI/CD. And the Linux editor is buggy.
  • I was excited by Stride/Xenko, but months after it went open source, it was abandoned as far as I can tell.
  • Godot has that annoying embedded scripting language, but the no-go was the lack of an equivalent of ScriptableObject to create data assets.
  • O3DE is a possibility for the future. Lua as a scripting language is a personal nostalgia.

Spark of hope

Then I read an article about creating data assets in Godot. It used C#. It was neither a trick nor complex. Pretty straightforward. I decided to try the engine again. Less than 100 MB later, with no need to install or register, I started my first project, again. The goal was to load data from an asset created with C# code, just like a ScriptableObject in Unity. The test was a success.
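A minimal sketch of what such a data asset looks like in Godot C# (the class and field names are my invention):

```csharp
using Godot;

// A custom Resource acts like Unity's ScriptableObject: a data-only
// asset that can be saved as a .tres file and linked in the editor.
public class EnemyData : Resource
{
    [Export] public string DisplayName = "Crab";
    [Export] public int MaxHealth = 10;
    [Export] public float SwimSpeed = 2.5f;
}
```

Loading it at runtime is then a one-liner like `GD.Load<EnemyData>("res://data/crab.tres")`, much like Resources.Load in Unity.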

So it's time to try to create a full prototype game! I'm planning to join one of the several jams they organize to motivate myself to finish. No prizes involved, just the challenge. Things to explore to get comfortable with:

  • Client-server multiplayer.
  • Scene streaming.
  • Animations.

Another idea is to recreate an old game of mine: PICubic. It was never commercially released, so it might be a good way to learn while still expecting results.

Some general thoughts

After a week playing with it, some thoughts:


👎 The design principle that each node has only one script attached, instead of the super common component-driven approach, is lacking. It especially hurts when designing complex systems out of small parts, like microservices in web development. I heard there is a spin-off that implements components, but it has no traction in the community.

👎 C# integration is still not good. At least on my computer, the editor crashes every 30 minutes, at random moments when I hit play. Also, the editor does not display custom C# classes in the inspector. I designed several vanilla classes to organize the code, but I had to turn them into Resources to be able to edit their data.

👎 Linking assets in the editor does not respect class restrictions. One could insert a Player asset instead of a Weapon and the editor will not complain. I have to check every external variable before using it.
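The workaround looks something like this (a sketch with invented names; EquippedItem would be an exported field that should hold a Weapon):

```csharp
// Defensive type check before using an exported reference, since the
// editor happily accepts any asset type in the slot.
if (EquippedItem is Weapon weapon)
    weapon.Fire();
else
    GD.PushWarning("Expected a Weapon in EquippedItem");
```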


😐 Referencing nodes in the hierarchy and assets in the folder are two distinct things. Nodes in the hierarchy are accessed by NodePath, while prefabs (here called PackedScenes) have a different type.

😐 GDScript: focusing on a custom language instead of a widespread one like C# or C++ is a waste of both newbies' and Godot's own developers' energy.


👍 The everything-is-a-scene approach fascinates me. I always thought this way in Unity: scenes are just a special kind of prefab.

👍 Creating an automatic build pipeline on GitLab was a breeze. Thanks to the smaller container and lower complexity, it takes less than 2 minutes to create a build for any platform. An empty Unity project takes that long just to download the 4 GB+ image, plus at least 5 more minutes to compile.
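For reference, a minimal headless export job can be this small (a sketch; barichello/godot-ci is a community-maintained image, and the preset name must match the project's export presets):

```yaml
# Hypothetical .gitlab-ci.yml job exporting a Godot 3.x project headlessly.
export-linux:
  image: barichello/godot-ci:3.4.2
  script:
    - mkdir -p build
    - godot --export "Linux/X11" build/game.x86_64
  artifacts:
    paths:
      - build/
```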

The project's development is somewhat slow for my taste, but they have been receiving more and more financial support in recent months, which might enable them to accelerate the pace. I'm especially interested in the new external language integration coming in Godot 4.


GitOps Lifestyle Conversion

I'm currently fascinated with GitLab's handbooks. I have heard of companies trying to be more open to the public, but the extent to which GitLab is doing it is unprecedented. They document everything publicly. Most, if not all, internal processes are written down for everyone to see.

  • How do they admit new people? It's there.
  • How and when do employees get bonuses? It's there too.
  • What ERP do they use? It's there.
  • In fact, what is the whole list of external software and services used? It's there too.
  • The scripts used to manage their own site? They are there too.
  • Personal information, like employees’ actual salaries? Of course, they are not there.

Too much information? Maybe. But it's definitely inspiring.

Another source of personal inspiration comes from a guy on Twitter: Keijiro Takahashi. This Japanese programmer makes several mini-tools for himself but publishes everything on GitHub under permissive licenses like MIT.

In contrast, I was browsing my LinkedIn the other day when I decided to share my GitLab and GitHub accounts. There are so many projects over there. #ButNot. Most, almost all, were private! Many game prototypes and small side projects, all locked. Some are effectively dead backups, since they have not been updated in ages. So I decided to do a couple of things:

  1. Open some of the closed projects
  2. Git-fy some of my personal and professional projects
  3. Documentation as code for my new company

The first is pretty straightforward: mostly checking a box, sometimes adding small README or LICENSE files, and only a few times making real changes.

The second is a new mindset: I have dozens of small projects, from games to personal scripts, that I have never tracked with git. Not only could I get better control over them, but I could also share them with the world. You will see more and more projects popping up on my GitLab account page.

The third partially follows GitLab's way. I'm considering documenting most of the processes in git-backed wikis. It will not only be good for sharing knowledge with other employees and partners; it is also good for tracking the business decisions that changed those processes. A rather clever approach.


Hugo Images Processing

Hugo is a fantastic static website generator, as I have told you before. Since I switched to it, I'm very confident that the site is fast and responsive.

However, my site is packed full of images. Some are personal. Some are huge. Some are PNGs and some are JPEGs. I created a gallery component just to handle posts that I want to fill with dozens of them.

Managing post images is a boring task. For every post, I have to check:

  • Dimension
  • Compression
  • EXIF metadata
  • Naming



Having an image bigger than the screen size is useless. It's a larger file to download, consuming bandwidth from both the user and the server. Google Lighthouse and other site-metric evaluators all recommend resizing images to at most the screen size.

In Hugo, I've automated this using built-in image functions:

{{ $image_new := ($image.Resize (printf "%dx" $width)) }}


[Image: lossy compression comparison]

My personal photos are, most of the time, taken in JPEG. Recently I changed my phone camera's default compression to HEIC, which provides better compression for high-resolution photos. The web, however, does not support that format.

Some pictures used to illustrate the posts are PNGs. They have better quality at the expense of being larger. Mostly, only illustrations and images with text are worth keeping in a lossless format.

Whatever the format, I would like to compress as much as possible to waste less bandwidth. I'm currently inclined to use WebP, because it can shrink the final size by a considerable amount.

{{ $image_new := ($image.Resize (printf "%dx webp" $width)) }}

EXIF metadata

Each digital image has a lot, and I mean A LOT, of metadata embedded inside the file: the day and time it was taken, camera type, phone name; even longitude and latitude might be included by the camera app. They all reveal personal information that was supposed to stay hidden.

In order to share them on the open public internet, it is important to sanitize all images, stripping out all this information. Hugo does not carry this info along when it generates new images. So, as long as every image gets at least a minimal resize, this matter is handled by default.


I would like to have a well-organized image library, and it would be nice to standardize the file names. Using the post title to rename all images would be great, even more so if combined with a caption or user-provided description.

However, Hugo does not allow renaming them. To make matters worse, it appends a hash code to each file name. A simple picture.jpeg suddenly becomes picture-hue44e96c7fa2d94b6016ae73992e56fa6-80532-850x0-resize-q75-h2_box.webp.

An incomprehensible mess. If you know a better way, let me know.

So What?

So, if most of the routines can be automated, what's the problem?

The main issue is that Hugo has to pre-process ALL images upfront. As mentioned in the previous post, it can take a considerable amount of time, especially when converting to a computationally demanding format such as WebP.

Netlify is constantly hitting the build time limit, all because of the thousands of image compressions. I am planning to revert the commits where I implemented WebP and reintroduce them little by little, allowing Netlify to build a version and cache the results.

There are some categories of images:

  • Gallery full-size images: there are hundreds of them, and they would take a lot of processing time, but I get the metadata stripped from the originals. The advantage is that they are rarely clicked and served.
  • Gallery thumbnails: the actual images shown in gallery mode. They account for the biggest chunk of the main page's overall size when a gallery is among the top 10 latest posts.
  • Post images: images that illustrate each article. They are resized to fit the whole page width, so compressing them represents a nice saving.
  • Post top banners: some posts have a top image. They are cropped to a banner-like size, so they are generally not that big.

I will, in the next couple of hours, try to implement the WebP code for each of these groups. If successfully completed, it will save hundreds of megabytes in the build.
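For the thumbnails, for instance, the change is essentially one line in the gallery template (a sketch; the dimensions and quality are placeholder values):

```go-html-template
{{/* Generate a WebP thumbnail; "q70" trades a little quality for size. */}}
{{ $thumb := $image.Fill "400x300 webp q70" }}
<img src="{{ $thumb.RelPermalink }}"
     width="{{ $thumb.Width }}" height="{{ $thumb.Height }}">
```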

Bonus Tip

Hugo copies all resources (images, PDFs, audio, text, etc.) from the content folder to the final public/ build, even if you only use the resized versions. Not only does the build become larger, but the images whose metadata you wanted to hide are still online, even if not directly referenced in the HTML.

A tip for those working with Hugo and a lot of processed images: use the following code in the content front matter to instruct Hugo not to include these unused resources in the final build.

    _build:
      publishResources: false

Let’s build.

Edit on 2021-08-25

I discovered that Netlify has a plugin ecosystem, and one of the available plugins is a Hugo caching system. It should drastically speed up build times, as well as make it possible to convert all images to WebP once and for all. I will test this feature right now and post the results later.

Edit on 2021-09-13

The plugin worked! I had to set it up using file configuration instead of the easy one-click button. Build time went from 25 minutes to just 2. The current cache size is about 3.7 GB, so that's totally understandable.

This will allow me to post much more frequent updates. Ok, to be frank: it will not be what restricts posting frequency. However, patience, inspiration, and focus are still the main constraints on blogging.

In the netlify.toml file at the project root, I added:

# Hugo cache resources plugin
# https://github.com/cdeleeuwe/netlify-plugin-hugo-cache-resources#readme
[[plugins]]
package = "netlify-plugin-hugo-cache-resources"

  [plugins.inputs]
  # If it should show more verbose logs (optional, default = true)
  debug = true

Project Curva

For the last 9 years, I have worked as a planner and controller at a multinational Brazilian oil company. The team consolidates all the planning information of the whole company, analyzes it, and reports to the company's board of directors.

For all these years, I’ve struggled to deal with some basic business scenarios:

  • At the very end of the process, someone in the information chain submits a last-minute update that cannot be ignored
  • The board decides to change the plan
  • The existence of multiple simultaneous plans, for optimistic and pessimistic scenarios
  • Changes in the organizational structure

The information systems currently used or developed by the company are too restrictive to accommodate these business cases. The general workaround is to create entire systems out of dozens of spreadsheets: a patchwork of data, susceptible to data loss, with zero control.

To address this, I decided to develop, on my own, a new system that is both flexible and powerful. The overall core propositions are:

  • Versioning: instead of overwriting data whenever there is a change request, the system should preserve the existing data and generate another version. Both should remain accessible, in order to allow comparison and auditing.
  • Branching: beyond sequential versioning (v1, v2, v3), it should allow users to keep multiple current versions. Creating scenarios or temporary exercises should be effortless.
  • Multiple dimensions: for each unit (i.e., a project in a list of projects), the user can insert future CAPEX, OPEX, production, average cost, number of workers, or any arbitrary dimension. It's all about capturing future series of values, regardless of their meaning.
  • Multiple teams: within the same organization, users can create inner teams that deal with different aspects of the business. The system should allow users to set the list of units to control (projects, employees, buildings, or whatever) and their dimensions of measurement, and then control user access to all this information. It's a decentralized way to create plans.
  • Spreadsheets as first-class citizens: small companies might not use them much, but any mid-to-big company uses spreadsheets for everything. Importing and exporting system data as Excel/LibreOffice/Google Docs files is a must.
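The versioning and branching ideas above can be sketched in a few lines of Python (a toy model with invented names, not the actual project code):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Version:
    """One immutable snapshot of a plan; edits create children instead of overwriting."""
    name: str
    data: dict                       # dimension name -> series of future values
    parent: Optional["Version"] = None

    def revise(self, name: str, **changes) -> "Version":
        """Create a child version with some dimensions replaced, keeping the original."""
        return Version(name, {**self.data, **changes}, parent=self)

# Sequential versioning: v2 supersedes v1, but v1 is preserved for auditing.
v1 = Version("v1", {"capex": [100, 120, 90]})
v2 = v1.revise("v2", capex=[100, 130, 95])

# Branching: an optimistic scenario forked from v1 coexists with v2.
optimistic = v1.revise("optimistic", capex=[110, 150, 140])

assert v2.parent is v1 and optimistic.parent is v1
assert v1.data["capex"] == [100, 120, 90]   # original left untouched
```

The parent links form the version tree, which is what makes comparison and auditing between any two versions possible.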

With this feature set in mind, I have spent my spare time over the last 3 months creating what is temporarily called Project Curva. I will post more about it in the future: the technology used, the technical challenges, and some lessons learned.

A beta is due at the end of April 2021.

Update 2021-10-18

The project is called NiwPlan and can be checked out at NiwPlan.com.