The current state of Fountane LLC involves developers picking up developer-operations tasks when required.
The overall work involves working with the following tech:
These are the basic building blocks of how things run at Fountane. I'll go deeper into how each is being used, and then we'll look at the things you should learn to improve and simplify development at Fountane.
Most existing automations use GitLab Runners as the interface between the final artifact and the developer's workflow. In a very few cases we have a Jenkins webhook that handles talking to external services and servers, because it's easier to configure Jenkins to handle external nodes and connections. We'll get to those cases, but for now let's look at the automations in place.
Web
The web uses GitLab Runners to test, build, and deploy the artifacts to Firebase, and this is the general flow and very easily replicable. As to why we use Firebase Hosting: it lets us use Firebase Preview Channels, which have better GitLab integration than the competitors, and most of our original projects are on Google Cloud, so it was even easier to make sure the assets were encrypted and served from cloud storage as a CDN. That part is no longer required; now it's just pushing the bundled website assets to Firebase.
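As a rough illustration, such a runner pipeline might look like the following. This is a hypothetical sketch, not the actual Fountane config: the Node image, script names, and the FIREBASE_TOKEN variable are all assumptions.

```yaml
# Hypothetical .gitlab-ci.yml sketch; image, scripts, and variables are illustrative.
stages: [test, build, deploy]

test:
  stage: test
  image: node:20
  script: [npm ci, npm test]

build:
  stage: build
  image: node:20
  script: [npm ci, npm run build]
  artifacts:
    paths: [dist/]

preview:
  stage: deploy
  image: node:20
  script:
    # Deploy a Firebase Preview Channel named after the branch for review
    - npx firebase-tools hosting:channel:deploy "$CI_COMMIT_REF_SLUG" --token "$FIREBASE_TOKEN"
  rules:
    - if: '$CI_MERGE_REQUEST_IID'

deploy:
  stage: deploy
  image: node:20
  script:
    # FIREBASE_TOKEN would come from masked CI/CD variables
    - npx firebase-tools deploy --only hosting --token "$FIREBASE_TOKEN"
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
```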
Back office
Coming to the backend, this is a more complicated setup, since each project has its own architecture in terms of code, and there are differences between single-repository setups and monorepos with multiple back-office modules.
In these cases, most automations run on Jenkins, where you might need a persisted disk to run code generation and tests. If there's no need for any large codegen and you just need to run tests, you can use GitLab for this as well. Remember to create the automations as pipelines so a Test stage runs before the Deploy stage. We normally do not allow production deploys to be automatic; they must be manually triggered on GitLab / Jenkins so there's someone to monitor the result of the deploy and roll back if needed.
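A minimal skeleton of that Test-before-Deploy pattern with a manual production gate might look like this (stage names and the deploy script are illustrative, not the actual config):

```yaml
# Hypothetical pipeline skeleton: the Test stage runs first,
# and the production deploy only runs when a person triggers it.
stages: [test, deploy]

test:
  stage: test
  script: [npm ci, npm test]

deploy:production:
  stage: deploy
  script: [./scripts/deploy.sh]   # placeholder deploy script
  when: manual                    # a person triggers and watches the deploy
  environment: production
```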
Some even older projects, which never got the attention needed to move to CI, use the classic git clone technique to deploy and use PM2 to keep the service alive.
Mobile
The setup for this is mostly based on Fastlane, which is the tech that handles multi-account certificates and provisioning profiles for iOS builds. The keystore for Android is handled by adding it to a vault and then retrieving it from there. In the current setup the vault is just a repository in the project's repository group.
The rest of the automation for building and pushing is as follows: include .dev- in the tag and it creates a development build, uploads the artifact to diawi.com, and posts the link to the requested Slack channel; tag without .dev- in the name and it creates a production build.
Android has the harder flow because of the number of clients: each client's Google Play service credential file has to be managed separately, since there isn't a proper vault service at Fountane yet.
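The tag-based trigger described above could be sketched in GitLab CI rules roughly like this (the job names, Fastlane lane names, and exact tag regex are assumptions, not the real pipeline):

```yaml
# Hypothetical sketch of the tag-based mobile build trigger.
build:dev:
  stage: build
  rules:
    - if: '$CI_COMMIT_TAG =~ /\.dev-/'    # tags containing ".dev-" → development build
  script:
    - bundle exec fastlane dev            # assumed lane name
    # then upload the artifact to diawi.com and post the link to Slack

build:production:
  stage: build
  rules:
    - if: '$CI_COMMIT_TAG && $CI_COMMIT_TAG !~ /\.dev-/'  # other tags → production build
  script:
    - bundle exec fastlane release        # assumed lane name
```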
Fountane's infrastructure providers are basically the following:
When working with companies of different levels, you often have uncertainty about where things might get deployed, so Fountane tries to avoid writing anything specific to a certain platform. This avoids having to fight with the platform when migrating from dev boxes to the actual production boxes. Docker is part of this: codebases should be dockerisable so they can run in containerised boxes when the target environment is unknown.
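As an example of keeping a codebase dockerisable, a generic multi-stage Dockerfile for a Node.js service might look like this (the base image, port, and paths are illustrative, not a Fountane standard):

```dockerfile
# Hypothetical multi-stage build: compile in one image, ship only the output.
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/index.js"]
```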
To learn and understand the above, and even improve on what's already present, here's the roadmap.
Expose a /ping endpoint with a response of {"pong": true}.
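A minimal version of such a health-check endpoint, sketched here with Python's standard library (the port choice and handler name are arbitrary, and any real service would use its own framework):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PingHandler(BaseHTTPRequestHandler):
    """Answers GET /ping with {"pong": true}; everything else is a 404."""

    def do_GET(self):
        if self.path == "/ping":
            body = json.dumps({"pong": True}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind to port 0 so the OS picks any free port.
server = HTTPServer(("127.0.0.1", 0), PingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/ping"
print(urllib.request.urlopen(url).read().decode())  # {"pong": true}
server.shutdown()
```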
Fountane's version management is SemVer-compliant, which, simplified, means there are 3 possible increments: major, minor, and patch.
You can read more about it on Release Cycles.
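The three SemVer increments can be illustrated with a small helper (this is just an illustration of the rule, not a Fountane tool):

```python
def bump(version: str, part: str) -> str:
    """Increment a SemVer MAJOR.MINOR.PATCH string.

    major -> breaking change (resets minor and patch)
    minor -> backwards-compatible feature (resets patch)
    patch -> backwards-compatible bug fix
    """
    major, minor, patch = (int(n) for n in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")

print(bump("1.4.2", "minor"))  # 1.5.0
```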
You need to know how to expose the application to the world without exposing all the ports of the system, so you need to understand how proxies work. The following are the proxies Fountane uses, depending on the infrastructure of the application.
When working with simpler projects, we just use Nginx/Caddy to handle routing to the application, which runs under something like PM2 or Docker's process management.
When working with a more complicated app that has more than one back office, these proxies can act as the API gateway and load balancer, redirecting to the required service if everything runs in one VM. Otherwise, you can use the API gateway / mesh of the platform you're working with (AWS, Azure, etc.).
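A minimal sketch of the simple case, Nginx proxying to a local app process, might look like this (the domain and upstream port are placeholders):

```nginx
# Hypothetical reverse proxy: expose only port 80, not the app's own port.
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;   # app process managed by PM2 or Docker
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```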
Next up, understand how to make the server secure. You can keep the dev server relatively open, but keeping a production server open is a bad idea. Learn how to make DMZs with the platform you're working with. The name might differ, but they all have the same theoretical purpose: limiting who/what can access the network in which the app is running.
For example, AWS calls these VPCs: you add the VM, storage, and DB into the VPC so they can only talk to each other, and nothing outside the network can access them. You can also add a VPN that connects to the VPC so you can get SSH access to the VMs.
While SSH is secure, it's not very hard to replicate: you are one leaked private key away from losing everything. DMZs add friction so that a leak can't hurt you unless your IAM identity was also given access to the VPC.
This is a collection of things I wish Fountane could do in terms of automation.
It's a glorified way of saying that builds should be as close to identical as possible no matter when they're built; a.k.a., I should be able to create a build in 10 years and the output would still be the same as when it was first created. This is currently very hard because of the versioning methods used by different languages and the dependence on external services, but for the tiny core packages built at Fountane, this is something we should end up doing.
The current solution the research team is looking into is Nix, which could solve supply-chain dependencies for binaries and language toolchains by using strict hashes and environments.
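As a sketch of the idea, a pinned Nix shell might look like this; the nixpkgs revision, hash, and package list are placeholders, not a working configuration:

```nix
# Hypothetical shell.nix: pin nixpkgs to an exact revision + hash so every
# developer (and every future build) gets the same toolchain.
let
  pkgs = import (fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
    sha256 = "<sha256>";
  }) {};
in
pkgs.mkShell {
  packages = [ pkgs.nodejs_20 pkgs.git ];
}
```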
Something we love from Vercel is the ability to have isolated app archives for both backend and frontend when using Next.js. This is also possible with the server/api/client architecture, and we'd like to set that up in the future.
The current solution is to use isolated Docker setups aligned to a preview domain, which can be used for both QA and hermetic reviews.
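One way such an isolated preview setup could be wired is sketched below with Docker Compose and a Traefik-style routing label; the service names, build paths, and preview domain are all invented for illustration:

```yaml
# Hypothetical per-branch preview stack: each branch gets its own isolated
# containers, routed to a preview subdomain by the edge proxy.
services:
  api:
    build: ./server
    environment:
      - NODE_ENV=preview
  web:
    build: ./client
    labels:
      # e.g. a Traefik rule mapping this stack to a per-branch preview domain
      - "traefik.http.routers.web.rule=Host(`mybranch.preview.example.com`)"
    depends_on:
      - api
```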
We are using various platforms for documentation and wikis; the current solution might be Confluence, but that's still separate from how we write our websites and everything else.
The idea is to host something like Ghost or BCMS to handle the website content for all websites and sync it with code. The requirements are the following: