Behind the Screens: Infrastructure Development #2

Today, I want to share with you the development of our project from its humble beginnings to the current moment, as it transforms into a powerful and flexible web service. In this article, I will talk about overcoming technical challenges, choosing technologies, and making strategic decisions that were part of our project’s development process.

What was before…

It’s been a while since my last post, and I am happy to share some news with you again. Previously, I talked about our initial efforts and first steps in developing a web service for the company SPEECH. Since then, we have run into a number of problems, from the interface to modularity, and have done a lot to solve them.

Alpha Version

The first implemented version of the service

In the latest update, I described how we transformed the interface and improved user interaction. We redesigned the page using the Bootstrap framework, making the service more intuitive and user-friendly. Additionally, we reworked the logic for handling data from Speckle, which significantly eased the work of designers.

The overall concept of the SPEECH+ service interaction.

Now I want to present the new stage in our project’s evolution. Last time, I touched specifically on the frontend part, but in this section, we will delve even deeper into the server side and show how the solutions applied affect our work. Let’s go!

Initial Stage Problems

Our service was initially deployed within a traditional three-tier architecture on a single physical server. We had one virtual machine with 4 cores and 16 gigabytes of RAM, where a frontend container, a backend container, and a database were running simultaneously.

three-tier architecture

This approach had its advantages in terms of simplicity of configuration and minimal initial costs. However, it also became a source of many problems, especially when it came to scaling. Under heavy request load, the RAM quickly ran out, forcing us to apply ad hoc fixes to balance memory usage and keep it from overflowing.

Moreover, the update system was extremely inconvenient. Any changes required a full reboot of the virtual machine, which inevitably led to temporary downtime of the service. Testing new versions under such conditions became a real puzzle due to the lack of an isolated environment.

Transition to Containerization

The current architecture of the SPEECH+ service

Solving these issues required a new approach. We chose the path of containerization using Docker and Docker Compose. This allowed us to isolate different parts of our application into separate containers, which significantly simplified dependency management, deployment, and scaling of our service.
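To make this concrete, here is a minimal docker-compose.yml sketch of the kind of layout described above. The image names, ports, and the choice of PostgreSQL are illustrative assumptions, not our actual configuration.

```yaml
# Illustrative docker-compose.yml: each tier runs in its own container.
services:
  frontend:
    image: speech-frontend:latest    # hypothetical image name
    ports:
      - "80:80"
    depends_on:
      - backend

  backend:
    image: speech-backend:latest     # hypothetical image name
    environment:
      DATABASE_URL: postgresql://app:app@db:5432/speech
    depends_on:
      - db

  db:
    image: postgres:16               # stand-in for whichever database the service uses
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

With a layout like this, each service can be rebuilt, restarted, or scaled on its own without touching the others.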

Containerization also solved the memory problem, as we can now manage the resources allocated to each container more flexibly. Moreover, Docker Compose has simplified the process of configuring and launching multi-container applications, which made the deployment of new service versions simpler and less risky.

Docker container memory limits: the relationship between memory and swap (Thorsten Hans)
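As a sketch of that flexibility, Compose lets you cap memory and swap per container; the figures below are placeholders rather than our real limits.

```yaml
services:
  backend:
    image: speech-backend:latest   # hypothetical image name
    mem_limit: 512m                # hard cap on RAM for this container
    memswap_limit: 1g              # RAM plus swap; set equal to mem_limit to disable swap
```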

Load Balancing

During the development period at our architectural firm, we faced a critical issue: updating the web service for area calculations without stopping its operation. Each update meant halting calculations across all projects. This was a forced measure and a huge risk. Often, the updates were minor, and we sought a convenient moment when it seemed no architect was doing any calculations. But, as we updated, someone would inevitably send an urgent message saying “EVERYTHING IS BROKEN!” This situation also needed to be addressed.

A typical message from a user complaining about the service's stability

To address the issue of managing incoming traffic and ensuring the uninterrupted operation of our services, we selected Traefik as our load balancer. Traefik stands out for its ease of configuration and its ability to automatically discover and configure services in our container network.

This allowed us to carry out updates without stopping the service, thereby ensuring the continuous operation of our web service even during the integration of new features and updates. Thanks to Traefik, the deployment process has become less painful, and we were able to offer our users a stable service without long interruptions. Most importantly, it eliminated the need to hunt for a convenient moment to push corrections.
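Here is a rough sketch of how such a setup can look in Compose, with Traefik discovering the backend through container labels; the hostname, port, and router names are assumptions for illustration, not our production values.

```yaml
services:
  traefik:
    image: traefik:v2.11
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # lets Traefik watch containers

  backend:
    image: speech-backend:latest                       # hypothetical image name
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.backend.rule=Host(`speech.example.com`)"
      - "traefik.http.routers.backend.entrypoints=web"
      - "traefik.http.services.backend.loadbalancer.server.port=8000"
```

With a configuration like this, a new backend container can be started alongside the old one; Traefik picks it up automatically, and the old container is removed once the new one is serving traffic.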

Enhancing Security with Vault

As I mentioned earlier, we decided to implement a username and password system for our users due to security concerns. Previously, our passwords, keys, and Speckle tokens were stored in a manner that did not meet optimal security standards. Recognizing the need for secure storage, access, and management of confidential data such as passwords, tokens, and API keys, we chose HashiCorp Vault, as it offers a comprehensive approach to secret management.

Using Vault allowed us to centrally manage secrets and credentials, ensure their encryption, and control access to them. This significantly increased the overall security of our web service and reduced the risk of unauthorized access to sensitive information.
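As an illustration, this is roughly what reading a secret from Vault looks like from a Python service using the hvac client; the Vault address, secret path, and key name here are hypothetical placeholders.

```python
# Minimal sketch: fetch a Speckle token from Vault's KV v2 secrets engine.
import os

import hvac

client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "http://vault:8200"),
    token=os.environ["VAULT_TOKEN"],  # in production, prefer an auth method such as AppRole
)

# "secret/" is the default KV v2 mount; "speech/speckle" is a placeholder path.
response = client.secrets.kv.v2.read_secret_version(path="speech/speckle")
speckle_token = response["data"]["data"]["token"]
```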

Benefits of Transitioning to a Microservices Architecture

As you may have already understood, all these changes inevitably led us towards a microservices architecture. The shift from a monolithic architecture to microservices was a crucial step in the development of our project. Each service now operates independently, allowing us to scale and update them separately from one another. This not only improved system performance and resilience but also simplified the process of developing and implementing new features.

Visualizing the Differences between Monolithic and Microservices Architectures

The implementation of a microservices architecture has enhanced our system’s flexibility and responsiveness to the evolving requirements of our users. It enables us to introduce new concepts and explore emerging technologies while maintaining the system’s overall stability.

Results and Future Plans

Transforming our web service from a simple single-server system into a powerful, flexible, and secure set of microservices was not easy, but it was an extremely beneficial journey. We encountered numerous technical challenges, but each problem helped us grow and learn.

We still have many tasks ahead, and we are continuing to work on improving our service, adding new features, and enhancing usability for users. Thanks to the solid foundation laid by our technical solutions, we are ready for future challenges and opportunities. There are already so many ideas that I have enough material for articles for the year ahead.

For instance, one of our overarching objectives is to integrate PyRevit and Speckle through the intermediary of our service, SPEECH+. I’ll delve into the details of this at a later time.

As you might have noticed, this post contained more theory than practice. I am simply sharing my experience and mistakes, and I hope it will be useful to some of you or simply inspire you to create something new.

