Getting Started
This document describes how to write GitLab CI/CD pipelines for ONES-AI projects.
Pipeline Configuration
In your project, create a file named .gitlab-ci.yml. This file contains the configuration for your CI/CD pipelines. Your pipelines will run on our CI/CD runners.
Jobs
Stages
Identify the different stages your CI/CD pipeline should have. Common stages include build, test, and deploy. You can define custom stages based on your specific workflow.
For NEST-Compiler, we use check → build → publish → test stages.
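As a sketch, the NEST-Compiler stage order above would be declared at the top of .gitlab-ci.yml like so:

stages:
  - check
  - build
  - publish
  - test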
Inside the .gitlab-ci.yml file, define the jobs you want to run and organize them into stages. Specify the commands, scripts, and settings for each job.
Example:
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - echo "Building..."
Images
It is recommended to prepare a Docker image that contains everything required to build the project. For this, we separated NEST-Compiler and NEST-Compiler SDK: the NEST-Compiler SDK project is used only to build and publish the Docker images for the NEST-Compiler project. For more about writing SDK images, refer to “Writing SDK Images”.
Once your image is prepared, you can define the job as follows.
build:
  stage: build
  image: onesai1/nest-compiler-sdk:1.0.0
  script:
    - cmake -G Ninja ../ -DCMAKE_BUILD_TYPE=Release -DNESTC_WITH_EVTA=ON -DNESTC_EVTA_BUNDLE_TEST=ON -DGLOW_WITH_BUNDLES=ON
Scripts
In GitLab CI/CD, you can define commands with the before_script, script, and after_script keywords.
before_script
Commands specified in before_script are executed before the main script (script) of the job. It is typically used for setting up the environment, installing dependencies, or configuring settings that are common to all jobs in the pipeline. Multiple commands can be specified as a list under before_script.
script
The main commands that perform the primary tasks of the job are specified in the script section. This is where the core build, test, or deployment tasks are defined. Multiple commands can be specified as a list under script.
after_script
Commands in after_script are executed after the script section. This can be useful for cleanup tasks, reporting, or additional actions after the main job logic. It is commonly used for tasks such as cleaning up temporary files, generating reports, or performing actions based on the result of the main script. Multiple commands can be specified as a list under after_script.
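As a minimal sketch (the job name and echo commands are placeholders, not part of an actual NEST-Compiler pipeline), a job using all three keywords could look like this:

test-example:
  stage: test
  before_script:
    - echo "Setting up the environment..."   # runs before the main script
  script:
    - echo "Running tests..."                # the job's primary commands
  after_script:
    - echo "Cleaning up..."                  # runs after script, even if the job fails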
Working Directory
Since GitLab Runner dynamically determines job working directories, it is recommended not to rely on fixed build locations such as /root/dev. Instead, use the CI_PROJECT_DIR variable to determine where your scripts run. In the example in the Extend section below, several environment variables are set in the before_script section to facilitate dynamic build locations.
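As a small illustration (a sketch only: the build_ci directory name follows the Extend example below, and the cmake flags are trimmed for brevity), a build can be anchored to CI_PROJECT_DIR like this:

build:
  stage: build
  image: onesai1/nest-compiler-sdk:1.0.0
  before_script:
    # before_script and script run in the same shell, so this cd carries over into script
    - mkdir -p "$CI_PROJECT_DIR/build_ci"
    - cd "$CI_PROJECT_DIR/build_ci"
  script:
    - cmake -G Ninja "$CI_PROJECT_DIR" -DCMAKE_BUILD_TYPE=Release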
Extend
The extends keyword in GitLab CI/CD allows you to reuse configurations from other jobs or templates within your .gitlab-ci.yml file. This feature promotes code reusability, reduces redundancy, and makes it easier to maintain consistent configurations across multiple jobs or projects.
.builds:
  stage: build
  image: onesai1/nest-compiler-sdk:1.0.0
  before_script:
    - export TVM_HOME=$CI_PROJECT_DIR/tvm
    - export DMLC_CORE=$CI_PROJECT_DIR/tvm/3rdparty/dmlc-core
    - export TVM_BUILD_PATH=$CI_PROJECT_DIR/build_ci/tvm
    - export TVM_LIBRARY_PATH=$CI_PROJECT_DIR/build_ci/tvm
    - export PATH=$CI_PROJECT_DIR/build_ci/tvm:$PATH
    - export PYTHONPATH=$TVM_HOME/python:$PYTHONPATH

build:
  extends: .builds
  script:
    - cmake -G Ninja ../ -DCMAKE_BUILD_TYPE=Release -DNESTC_WITH_EVTA=ON -DNESTC_EVTA_BUNDLE_TEST=ON -DGLOW_WITH_BUNDLES=ON

build_from_aws:
  extends: .builds
  script:
    - cmake -G Ninja ../ -DCMAKE_BUILD_TYPE=Release -DNESTC_WITH_EVTA=ON -DNESTC_EVTA_BUNDLE_TEST=ON -DGLOW_WITH_BUNDLES=ON -DNESTC_USE_PRECOMPILED_BUNDLE=ON -DNESTC_USE_PRECOMPILED_BUNDLE_FROM_AWS=ON
For more information about the extends keyword, refer to this documentation.
Workflow
The GitLab workflow keyword controls when pipelines are created. For example, the following workflow creates pipelines when (1) a merge request is created, (2) a tag is created, or (3) a push event occurs. It also prevents duplicate pipelines for merge requests and branch commits.
workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_TAG
    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
      when: never
    - if: $CI_COMMIT_BRANCH
Workflow for Merge Requests
It is also possible to control the conditions under which a specific job runs. The example job below is triggered under the following conditions.
The job runs automatically for merge request events.
For regular branch commits, it is not triggered automatically if there are open merge requests for the branch (when: never). Instead, it can be triggered manually when needed (when: manual).
test-board:
  stage: test
  script:
    - cmake -DNESTC_WITH_EVTA=ON -DLLVM_DIR=/usr/lib/llvm-8.0/lib/cmake/llvm -DCMAKE_BUILD_TYPE=Release -DNESTC_USE_VTASIM=OFF -DVTA_RESNET18_WITH_SKIPQUANT0=ON -DNESTC_EVTA_RUN_ON_ZCU102=ON -DNESTC_USE_PRECOMPILED_BUNDLE=ON -DNESTC_EVTA_RUN_WITH_GENERIC_BUNDLE=ON ..
    - sudo make check_zcu102
  tags:
    - etri-board
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
      when: never
    - if: $CI_COMMIT_BRANCH
      when: manual
Workflow for Docker Builds
Build and Push
When you are building a Docker image in your project, you may also define rules like the following:
docker-build:
  stage: docker-build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  before_script:
    - echo "{\"auths\":{\"${DOCKER_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${DOCKER_USERNAME}" "${DOCKER_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/utils/docker/Dockerfile"
      --destination "${DOCKER_IMAGE}:${CI_COMMIT_REF_NAME}"
      --destination "${DOCKER_IMAGE}:latest"
  tags:
    - cpu-runner
  rules:
    - if: $CI_COMMIT_REF_PROTECTED == "true"
      changes:
        - utils/docker/Dockerfile
In this case, the job runs only when the pipeline is associated with a protected branch ($CI_COMMIT_REF_PROTECTED == "true") and changes are detected in the Dockerfile at utils/docker/Dockerfile. The rule ensures the job is triggered only on protected branches when that Dockerfile changes. We use such rules so that protected variables can be used for the Docker credentials and to avoid unnecessary duplicate Docker builds.
Build Only
On branches that are not protected, you can use the following rules to test the Docker build without pushing the image.
docker-build-no-push:
  stage: docker-build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/utils/docker/Dockerfile"
      --no-push
  tags:
    - cpu-runner
  rules:
    - if: $CI_COMMIT_REF_PROTECTED == "false"
      changes:
        - utils/docker/Dockerfile
References
For more information about job control, refer to the following documents.