
When F5’s acquisition of NGINX was finalized in May of this year, one of the first priorities for Engineering was to integrate the control‑plane development teams and adopt a common set of tools. In this post, I describe the changes we made to our code repository, development pipeline, and artifact storage.

Before the acquisition, the NGINX Controller team’s tool suite included GitHub for the code repo, Jenkins for the development pipeline, and Nexus Repository Manager OSS, Docker Registry, and Amazon S3 for artifact storage. The control‑plane team at F5 was using GitLab with its built‑in continuous integration feature (GitLab CI) for the code repo and pipeline, and JFrog Artifactory for all artifacts.

We saw lots of value in adopting the F5 tech stack for the control‑plane team as a whole. First, it’s simpler than the stack the NGINX Controller team was using. Second, the company’s IT team manages it, freeing DevOps teams from having to deal with infrastructure.

Moving the Code Repository

The first step was to move the NGINX Controller code repository from GitHub to GitLab. This turned out to be trivial, requiring just a few scripted git clone and git push operations. We were able to preserve all of the branch rules and merge checks we had been using in GitHub.
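
A minimal sketch of what that move looked like for one repository (the host, group, and repository names here are placeholders, not the real ones):

# Mirror-clone from GitHub, then mirror-push to GitLab to carry over all branches and tags
git clone --mirror git@github.com:&lt;org&gt;/&lt;repo&gt;.git
cd &lt;repo&gt;.git
git push --mirror git@gitlab.example.com:&lt;group&gt;/&lt;repo&gt;.git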

Converting to the GitLab Pipeline

This was the most important step, as it enables the integrated engineering team to build, test, and publish code.

We had defined the pipelines for NGINX Controller in Jenkinsfiles written in the Jenkins domain‑specific language (DSL). We had tried to make them as Jenkins‑agnostic as possible, though, by:

  • Storing secrets in HashiCorp Vault wherever possible
  • Limiting the use of plug‑ins
  • Running most commands explicitly in the pipeline, without wrappers
  • Making it possible to run all jobs locally on a developer’s laptop in the same way as on the Jenkins server (see the sketch after this list)
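
For example, a typical build step was just a shell script that reads its secrets from Vault at run time, so the same invocation works on a laptop and on a CI worker. Here is a minimal sketch of that pattern; the Vault path and variable names are hypothetical:

#!/usr/bin/env bash
# Hypothetical build step; runs identically on a laptop and in the pipeline
set -euo pipefail

# Secrets come from Vault at run time rather than from Jenkins credentials or plug-ins
export NPM_TOKEN=$(vault kv get -field=token secret/ci/npm)

yarn install
yarn test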

The process of translating Jenkinsfiles to GitLab CI pipelines was then mostly a matter of creating GitLab Runners that matched our Jenkins workers, and creating pipelines with the same stages by copying the commands from the previous pipelines.
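
Registering a runner to stand in for a Jenkins worker is a one-time step. As a sketch, with a placeholder URL and registration token, and the same tag our pipelines reference:

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "&lt;token&gt;" \
  --executor docker \
  --docker-image "docker:stable" \
  --tag-list "devops-runner" \
  --description "Docker runner matching our old Jenkins workers"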

The biggest task was to change the Jenkins environment variables to GitLab CI variables and to verify all the paths and file locations. To avoid having to do this, I recommend always representing paths as variables. I know that’s obvious, but sometimes we sacrifice best practices for the sake of speed.
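
For reference, the substitutions were mostly between the built-in variables on each side, plus a handful of path variables of our own (the path variables below are hypothetical examples):

# Jenkins built-ins we had used        GitLab CI equivalents
#   ${BUILD_NUMBER}                ->  ${CI_PIPELINE_ID}
#   ${BRANCH} / ${BRANCH_NAME}     ->  ${CI_COMMIT_REF_SLUG}
#   ${WORKSPACE}                   ->  ${CI_PROJECT_DIR}

# Representing paths as variables makes the next migration cheaper
variables:
  DIST_DIR: "${CI_PROJECT_DIR}/dist"
  TEST_RESULTS: "${CI_PROJECT_DIR}/test-results.xml"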

Here’s an example of one of our Jenkinsfile stages:

stage ('Test & PushToReg') {
    environment {
        NODE_ENV = 'production'
    }
    steps {
        sh '''
        VERSION=$(git tag --contains | head -1 | tr -d v )
        CTR_VERSION=${VERSION} gulp build
        '''
        sh "PLUGIN_NAME=NAME1 yarn webpack-cli --config webpack/platform/plugin.js"
        sh "PLUGIN_NAME=NAME2  yarn webpack-cli --config webpack/platform/plugin.js"
        sh "docker build -t /${BRANCH}:${BUILD_NUMBER} -f docker/Dockerfile ."
        sh "echo 'Running unit tests'"
        sh "yarn test"
        stash includes: 'test-results.xml', name: 'utest', allowEmpty: true
        sh "docker push …..”
    }
}

This was translated to the following GitLab CI code:

BuildTestPublishFE:
  stage: build_test_publish
  image: ${BUILD_IMAGE}
  tags:
    - devops-runner
  script: |
    yarn config set registry ${NPM_REGISTRY}
    yarn install --verbose > yarn.log
    yarn autoclean
    VERSION=$(git tag --contains | head -1 | tr -d v )
    CTR_VERSION=${VERSION} gulp build
    PLUGIN_NAME=NAME1 yarn webpack-cli --config webpack/platform/plugin.js
    PLUGIN_NAME=NAME2  yarn webpack-cli --config webpack/platform/plugin.js
    # Pushing to different locations depending on the branch
    if [[ ${CI_COMMIT_REF_SLUG} == "release-"* ]]; then
      DOCKER_REG=${DOCKER_REGISTRY_STG}
    else
      DOCKER_REG=${DOCKER_REGISTRY_DEV}
    fi
    docker build -t ${DOCKER_REG}/${CI_COMMIT_REF_SLUG}:${CI_PIPELINE_ID} -f docker/Dockerfile .
    echo 'Running unit tests'
    yarn test
    docker login ….
    docker push ….
  artifacts:
    expire_in: 1 day
    when: always
    paths:
      - yarn.log
      - test-results
    reports:
      junit: test-results.xml

Storing Artifacts

The NGINX Controller team was using separate tools to store different types of artifact:

  • Nexus for deb and rpm packages
  • Amazon S3 for tarballs
  • A private Docker registry for container images

Moving to the F5 infrastructure gave us the opportunity to consolidate all storage under Artifactory, so we went for it.

Artifactory natively supports all of the repository types we were using, so we just needed to configure the repository location and credentials in our new pipelines and we were ready to start publishing.
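
As a sketch, publishing a container image and a deb package to Artifactory from the pipeline looks roughly like this; the hostnames, repository names, and credential variables below are placeholders:

# Container images go to an Artifactory-backed Docker registry
docker login -u "${ARTIFACTORY_USER}" -p "${ARTIFACTORY_TOKEN}" docker.artifactory.example.com
docker push docker.artifactory.example.com/controller/${CI_COMMIT_REF_SLUG}:${CI_PIPELINE_ID}

# deb packages are uploaded over the REST API, with the Debian layout passed as matrix parameters
curl -u "${ARTIFACTORY_USER}:${ARTIFACTORY_TOKEN}" -T controller.deb \
  "https://artifactory.example.com/artifactory/debian-local/pool/controller.deb;deb.distribution=stable;deb.component=main;deb.architecture=amd64"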

There are three big advantages to having just one Artifactory repository. Obviously, it makes management easier. It also means we can scan for vulnerabilities and license compliance in just one place. Lastly, access control is centralized.


About the Author

Ismael Serrano

DevOps Engineer
