Gitlab CI
The Gitlab Continuous Integration and Continuous Delivery (CI/CD) Pipeline
Because CI/CD is a topic all unto itself, and Gitlab CI/CD a full product within a platform, suffice it to say that you should start with the Gitlab CI/CD docs if you've never touched anything like this before.
In brief, the CI/CD pipeline is a system by which we are able to store, compare, combine, test, and compile our code. Once we have that code compiled, we are then able to deliver it to customers. All this is done continuously and mostly autonomously throughout the day.
You can see a quick start guide here.
The `.gitlab-ci.yml` configuration file
In the root folder of our react-native project, you will find the `.gitlab-ci.yml` file. This YAML file is used to configure how our CI pipeline works.
The config file defines the various actions that will occur whenever something happens in the project's codebase, which is hosted in Gitlab.
If you are working on a new feature or a fix, you would usually create a branch off of our `main` branch. After committing your changes locally, you would push your branch up to the repository in Gitlab, or push your changes to an existing one. At that point, a branch pipeline will run the scripts contained inside the `.gitlab-ci.yml` file.
Similarly, once you're done working on that branch and are ready to create a Merge Request (MR) to merge it into the `main` branch, a merge request pipeline will run different scripts that are specifically meant to run only when we are merging changes into `main`, when we create a `release/` branch, or when we cherry-pick into that `release/` branch from `main`.
You can also manually trigger a pipeline run on a branch, without pushing a commit or working on a merge request, and that will run specific scripts for that scenario.
The scripts can also be run based on different conditions - for most of the above, everything will run automatically, but any uploading of built binaries needs to be done manually, by pressing a button.
Scheduled pipelines are meant to run periodically (e.g. once a day) so that a fresh build is ready for Quality Assurance testers to check new changes. Those pipelines run even the upload scripts automatically, as a scheduled pipeline is meant to be more "set and forget" than the others, which are checked every step of the way.
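As a sketch of how that split can be expressed (the job name and upload command here are illustrative, not our actual config), a scheduled pipeline can be detected via the predefined `$CI_PIPELINE_SOURCE` variable:

```yaml
# Hypothetical upload job: runs automatically on scheduled pipelines,
# but requires a manual button press everywhere else.
upload_internal_android:
  stage: upload
  script:
    - bundle exec fastlane internal   # illustrative upload command
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: on_success                # no button press needed on schedules
    - when: manual                    # every other pipeline type waits for a click
```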
Pipeline Stages and Jobs
Generally speaking, the pipelines run different scripts at different stages, and each script run inside a stage is called a job.
You can visualise the current CI Pipeline setup by going to the Pipeline Editor page of our project.
Notes:
- Danger
  - Runs for MRs only, checking that any rules specified in the dangerfile are adhered to. For more info see the Danger wiki page.
- Commit Lint
  - Runs for MRs only, checking that commits adhere to Conventional Commits.
- Code Lint
  - Runs always, using our linter to spot problems with our code.
- Code Test
  - Runs always, using Jest to check that our suite of code tests (unit and integration) is passing, potentially catching unintended code changes that break functionality or APIs.
- Build jobs
  - iOS has different targets and Android has different flavours for each brand, which is why there are multiple build jobs.
- Internal Upload jobs
  - In addition to all the targets/flavours, we also create different app bundles pointing to the UAT (internal) or Production environment at the app level. Some Android bundles are distributed through the Google Play Store; iOS is handled entirely via Apple's App Store Connect portal and the TestFlight app for previewing builds.
Each job in turn can be configured with various keywords to determine, for example:
- what Docker image to use for a particular job (iOS and Android builds need different environments to run)
- special variables for the job
- allowing a job to start early, once a specific job from a previous stage has finished, rather than waiting for the whole stage, using the `needs` keyword
- choosing when a pipeline or job should or shouldn't run
There are many more config keywords that can be utilised, and all the info can be found in the relevant Gitlab CI docs.
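A sketch of what such a job config might look like (the job name, image, variables and dependencies here are illustrative, not our actual config):

```yaml
# Hypothetical Android build job showing the keywords above.
build_android_uat:
  stage: build
  image: reactnativecommunity/react-native-android  # Docker image for Android builds
  variables:
    APP_ENV: "uat"              # job-specific variable
  needs: ["code_lint"]          # start as soon as code_lint passes,
                                # without waiting for the rest of its stage
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - ./gradlew assembleUatRelease   # illustrative build command
```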
Pipeline Variables
Gitlab provides us with a set of predefined variables that can be used within the `.gitlab-ci.yml` config file.
We can also specify our own custom variables to use in configuring the pipeline and its jobs. Those variables can be found under the Settings -> CI/CD menu in Gitlab.
For example, we currently have `$IGNORE_IOS` set to `true` while the iOS pipeline jobs are being worked on. You can also manually pass variables to any manually run pipeline, when creating that pipeline for a branch.
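To sketch how a custom variable like that can gate jobs (the job name and build command are illustrative):

```yaml
# Hypothetical iOS build job, skipped while $IGNORE_IOS is "true"
# in Settings -> CI/CD -> Variables.
build_ios_uat:
  stage: build
  rules:
    - if: '$IGNORE_IOS == "true"'
      when: never               # drop the job from the pipeline entirely
    - when: on_success          # otherwise run as normal
  script:
    - bundle exec fastlane ios build   # illustrative build command
```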
Running pipelines
Generally, pipelines are run automatically, based on certain triggers. Those triggers are all governed by a set of rules we set in the `.gitlab-ci.yml` file. All of the pipeline stages and jobs can be visualised by clicking the tab of the same name in the CI/CD Editor page.
Pipeline types
`main` and `release/` branch pipelines
The most important pipeline rules are the ones that govern what happens when a push or a merge request occurs for the `main` or a `release/` branch. Those are the lifeblood of our Trunk Based Development workflow.
In particular, the `main` and `release/` branches are the only ones that have the `upload` CI pipeline stage, in which all of the app bundles are uploaded and made available to internal stakeholders to test on the Android platform - and soon for iOS (currently a manual process, so you won't see that platform's jobs in the pipeline yet, as they are ignored by default via the `$IGNORE_IOS` variable).
When you create and push your branch (including any future commits) to the remote repository, generally no branch pipeline will be spawned and no pipeline jobs will run for that branch. Once you create an MR for your branch targeting either `main` or `release/`, you will then get `lint` and `test` pipeline stage jobs running, to check whether your changes are all ok before they can be merged into either of those branches.
Those jobs will run on what are called merge request pipelines, which are different to the regular branch pipelines, and have special CI variables available to them. You can identify these by the (detached) label on them when looking at all of the pipelines on the CI/CD page in Gitlab.
Once the MR is approved and the Merge button is used, triggering a merge commit to be pushed to `main` or `release/`, no further pipelines run; instead, scheduled pipelines run the full suite of CI stages and jobs automatically. We opted not to have successful Merge Requests run their own build and upload jobs, relying on scheduled builds instead, to save on the number of pipelines run given the sheer amount of commits and merges happening in the project.
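A sketch of how rules can express that split (the job name and command are illustrative): if no rule matches, the job is simply not added to the pipeline, so build jobs can be restricted to schedules with a single rule.

```yaml
# Hypothetical build job: only added to scheduled pipelines, so the
# post-merge push pipeline and MR pipelines skip it automatically.
build_android:
  stage: build
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'   # only scheduled pipelines build
  script:
    - ./gradlew assembleRelease   # illustrative build command
```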
Scheduled Pipelines
In the scenario where QA wants a new build to test fresh changes, it makes sense to create a scheduled pipeline that makes a fresh build, perhaps once a day - that way the QA team doesn't have to nag developers to make those builds manually.
Given that this is a very automated workflow, we generally want scheduled pipelines to run all of the jobs automatically if everything is working, without anyone having to keep an eye on them if everything is fine.
In that case, we set the `rules` for the jobs that would run in that scheduled pipeline (lint, test, build, upload internal) to all run `on_success`, which means that they will run automatically, and every next job in the pipeline will continue automatically as well, so long as the preceding job didn't fail.
If any of the jobs fails, the whole pipeline stops. This ensures both that, if everything is "Green" (working ok), the builds are produced automatically, and that if anything is "Red" (failing), the pipeline doesn't continue and needs to be addressed.
Manually run Pipelines (aka `web` pipelines)
We have the ability to manually create pipelines, without the need to push a commit or a branch to the repository, or to open an MR. Instead, we can just run a new pipeline, or schedule one to run automatically.
In both instances, we are generally making these kinds of pipelines because we want to make new builds of our app, perhaps for the QA team to test the latest changes or for a product team to do an internal showcase.
For a scenario where an internal team wants to run a showcase, this is something done more on-demand; it is likely to build code that is still work-in-progress, and can be done on any branch, without first needing to have that feature branch merged into `main`.
This allows the product stakeholders to validate feature work quicker, without slowing the team down due to a very stringent pipeline on the `main` branch. The product team can just run the pipeline on their feature branch manually, running all of the same jobs as before (lint, test, build, upload internal) - but the upload internal stage jobs are set to `manual`, meaning that the developers will need to check on the pipeline and manually trigger the uploading jobs to run.
This is primarily to avoid needlessly uploading new builds (which can be seen by the QA team and might confuse them) if there are subsequent pipeline runs due to some changes required in the build.
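A sketch of such a rule (job name and upload command are illustrative); manually created pipelines can be identified by the predefined pipeline source `web`:

```yaml
# Hypothetical internal upload job for manually started (web) pipelines:
# the build happens, but uploading waits for a button press.
upload_internal_android:
  stage: upload
  rules:
    - if: '$CI_PIPELINE_SOURCE == "web"'
      when: manual              # a developer must click to upload
  script:
    - bundle exec fastlane internal   # illustrative upload command
```

Note that when `when: manual` is used inside `rules`, Gitlab defaults `allow_failure` to `false`, so the pipeline shows as blocked until someone triggers the job.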
Using `rules` to determine when to run CI pipelines
In our `.gitlab-ci.yml` file, we use the `rules` keyword to set specific conditions for when a certain pipeline job should or shouldn't run.
To read more about Gitlab CI rules go here: https://docs.gitlab.com/ee/ci/yaml/index.html#rules
And here to read more about how to config when pipelines should run, generally, to avoid spawning duplicate pipelines.
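The usual pattern for avoiding duplicate pipelines is a top-level `workflow` section; a minimal sketch (the exact conditions are illustrative, not our actual config):

```yaml
# Global switch deciding whether a pipeline is created at all.
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
      when: never               # branch already has an MR: let the MR pipeline run instead
    - if: '$CI_COMMIT_BRANCH'   # plain branch pushes with no open MR
```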
IMPORTANT! You cannot mix the use of `only`/`except` and `rules` in the same job. Gitlab will throw an error at you, so don't even try.
As an example, we only really want the `commit-lint` job in the `lint` stage to run if we are working on an MR, because it makes no sense to run a CI job checking that commits adhere to the Conventional Commits format while our branch is still a work in progress and we may knowingly have "unclean" commit messages.
Generally, we only clean up our commits, using rebasing, when we are done and ready to open an MR. For that reason, we set the `rules` for the `commit-lint` job to only run when there is an MR for a certain branch, and that MR itself is triggering a pipeline.
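That rule boils down to checking the pipeline source; a sketch (the lint command is illustrative):

```yaml
# Hypothetical commit-lint job: only added to merge request pipelines.
commit-lint:
  stage: lint
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - npx commitlint --from "$CI_MERGE_REQUEST_DIFF_BASE_SHA"   # illustrative lint command
```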
Creating `rules` templates, using YAML anchoring
As each pipeline and each job might have different scenarios for when they should or shouldn't run, writing the `rules` for all of them can take a lot of time. Instead of repeating a lot of configuration code for similar scenarios, it can be useful to create templates. Luckily, the `.gitlab-ci.yml` file can take advantage of YAML node anchors, which can be used to create templates.
This allows us to create reusable sections of our config. A template is created using the `&` symbol, which creates an anchor. We give that anchor a name, e.g. `&anchor_example`, and can later reuse it with the `*` symbol, which defines an alias, e.g. `*anchor_example`.
Here's what that looks like in our rules. We create a pipeline job and define its `rules` section. On each `- if:` entry, we attach an anchor right before the rule itself, then specify the rule's keywords as normal.
Note that any pipeline job whose name is prefixed with `.` will be ignored by Gitlab CI, e.g. `.ignored_job`. This allows us to create whole template sections, or use such an ignored job to create templates out of sub-sections, like we do here for the rules!
```yaml
.rules_template_example_job:
  rules:
    - &this_is_our_first_template_example
      if: $CI_COMMIT_BRANCH # This is our actual rule, that we're creating a template out of
      when: never # You can add all of the other `rules` keys in the template
    - &another_rule_template_example
      if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: on_success
```
Note: The default value of the `when` keyword is `on_success`, so you can omit it when creating your rule templates - but it's a good idea to still keep it, for clarity.
To use one of the template rules, e.g. `&example_rule_template`, add it to a job like this, using the template name from before but passing it in with the `*` prefix...

```yaml
some_job:
  rules:
    - *example_rule_template
    - *some_other_rule_template
    - if: # some non-template rule
    - *another_rule_template
```
Sadly, you cannot create a single anchor out of a mix of normal and templated rules, but you can create an anchor for a whole `rules` section, like this:

```yaml
.some_specific_rule_template: &some_specific_rule_template
  rules:
    - *example_rule_template_from_before
    - if: # some non-template rule

some_CI_job:
  <<: *some_specific_rule_template # <-- This will merge in the whole rules section we created an anchor for before
```
This uses the YAML merge key, `<<:`, which will replace that entire `rules` section if it already exists.
You cannot mix this `<<:` merge and adding in specific `*rule_templates` together - it doesn't work :(
If you try to pass in something like this:

```yaml
<<: *some_rules_section_template
rules:
  - *some_specific_rule_template
```

...the explicit `rules` key will override the entire `rules` section you templated in `&some_rules_section_template` and merged in with `<<:` :-/
Order of execution for Rules
Rules are evaluated in order, from top to bottom. If a rule doesn't match, the next one is evaluated; the first rule that matches wins, and its `when` determines what the job does. Order matters - I don't know what other way to put it.
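A sketch of why order matters (job name and commands are illustrative) - swapping these two rules would change the outcome:

```yaml
# Hypothetical job: the first matching rule wins.
some_job:
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: never               # matched first on main: the job never runs there
    - if: '$CI_PIPELINE_SOURCE == "push"'
      when: on_success          # any other pushed branch runs the job
  script:
    - echo "running"
```

If the `push` rule came first, it would also match pushes to `main`, and the `when: never` rule below it would never be reached.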