
The landscape of IT has been changing for the last decade or more – you’ve probably experienced the changes in your day-to-day work or at least read about them in a book like The Phoenix Project. I’ve been a traditional infrastructure engineer and consultant for most of my career, and in the last five or so years my role has changed dramatically. I’ve had to adapt and pick up new skills along the way.

Most roles in the modern enterprise now span both what I learned in the early years of my career about delivering high-performance, robust infrastructure platforms and the newer skills I've picked up in software development, automation, and cloud technologies. Delivering modern applications that are mostly API-driven requires a broad range of skills and an understanding of multiple business units and teams.

As more and more businesses undertake digital transformation projects and cloud adoption rises, it is becoming increasingly important to rethink how operations teams work and how development teams deliver products to the business. Although it's something of a simplification, this is essentially what gave rise to DevOps practices.

These new practices and the associated tooling are prevalent within most software development teams and are gaining ground with operations teams. Innovations like Infrastructure as Code, continuous integration/continuous deployment pipelines, and the use of code repositories for all manner of configurations – from initial infrastructure deployment to desired state configuration and application deployment – are all now commonplace in the daily work of IT professionals across the board.
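To make the configuration-as-code idea concrete, here is a minimal sketch of a check that could run as a CI step before a versioned configuration file is deployed: it validates the file pulled from the repository rather than trusting a hand-edited copy. The routes.json file name and its schema are hypothetical, purely for illustration.

```python
# A minimal configuration-as-code sketch: validate a versioned routes
# file in CI before it is ever deployed. File name and schema are
# hypothetical and only serve to illustrate the idea.
import json
import sys

REQUIRED_ROUTE_KEYS = {"path", "upstream", "methods"}

def validate(config_path: str) -> list[str]:
    """Return a list of problems found in the routes configuration."""
    errors = []
    with open(config_path) as f:
        config = json.load(f)
    for i, route in enumerate(config.get("routes", [])):
        missing = REQUIRED_ROUTE_KEYS - set(route)
        if missing:
            errors.append(f"route {i}: missing keys {sorted(missing)}")
    return errors

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "routes.json"
    problems = validate(path)
    for problem in problems:
        print(f"ERROR: {problem}")
    # A non-zero exit code fails the pipeline and blocks the deployment.
    sys.exit(1 if problems else 0)
```

In a real pipeline this kind of step would sit alongside linting and automated tests, so every change to the configuration goes through the same review gate as application code.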

These new technologies and processes enable businesses to get products to market far faster than was possible with previous methods. The tighter integration between teams that deliver products and those that maintain the production systems has increased uptime and reduced major incidents for many organizations. The ability to seamlessly deploy and test new versions of code live with techniques like canary and blue-green deployments enables your organization to get real-time feedback and metrics about changes without long-winded beta testing cycles.
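As a rough illustration of the canary idea, the sketch below sends a fixed slice of users to a new release by hashing their IDs, so each user stays pinned to the same version while you gather metrics. The backend names and the 10% weight are assumptions for the example; in practice a gateway or load balancer usually performs this split.

```python
# A minimal canary-routing sketch: hash the user ID into 100 buckets and
# send a fixed share of them to the new release. Backend names and the
# 10% weight are assumptions for illustration only.
import hashlib

CANARY_PERCENT = 10  # share of traffic routed to the new version

def choose_backend(user_id: str) -> str:
    """Pick "canary" or "stable" consistently for a given user."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return "canary" if bucket < CANARY_PERCENT else "stable"

if __name__ == "__main__":
    # Quick sanity check: roughly 10% of sample users land on the canary.
    sample = [f"user-{i}" for i in range(1000)]
    share = sum(choose_backend(u) == "canary" for u in sample) / len(sample)
    print(f"canary share: {share:.1%}")
```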

API management plays a pivotal role in the success of many deployments. If you are providing API services either internally to your business or externally to your customers, being able to deploy new API versions (coupled with the latest version of the software) quickly and efficiently through deployment pipelines is invaluable. With API versioning, you can maintain multiple versions and backward compatibility where required, which ensures an uninterrupted experience for your users.
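The sketch below shows that versioning idea in miniature: two versions of the same endpoint served side by side, with v1 left untouched for existing clients while v2 adds new fields. The paths and response shapes are hypothetical and exist only to illustrate backward compatibility.

```python
# A minimal API-versioning sketch using only the standard library: /api/v1
# keeps its original response shape while /api/v2 adds fields. Paths and
# payloads are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class VersionedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/v1/status":
            # v1 response stays frozen so existing clients keep working.
            body = {"status": "ok"}
        elif self.path == "/api/v2/status":
            # v2 extends the payload without breaking v1 consumers.
            body = {"status": "ok", "uptime_seconds": 1234, "version": "2.0"}
        else:
            self.send_error(404, "unknown API version or path")
            return
        payload = json.dumps(body).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Serve both versions locally for a quick manual check.
    HTTPServer(("127.0.0.1", 8080), VersionedHandler).serve_forever()
```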

Challenges in API Lifecycle Management

Over the last few years, I have worked with many popular cloud‑based API management solutions, and while they have been great at providing API services, they present challenges when it comes to integration into software development lifecycle (SDLC) practices.

With the systems I have used, for the most part either you can’t deploy the software locally or the control plane remains in the cloud even if local deployment is possible. Often you need multiple instances of the API management tooling to separate the different stages of the development lifecycle. Working around all these limitations can be time‑consuming and introduce additional costs.

With current conditions around the world, where members of development and operations teams are all working from home, these issues can be exacerbated by increased demand for access to resources. Small things such as poor Internet connectivity at home can hinder development if you have to connect to systems that are either cloud‑based or hosted in an on‑premises data center.

Over the course of the last few years, I have been advocating for tools that make the developer experience consistent across environments, starting with personal workstations/laptops, through shared development and test environments, and finally in production. Tying the developer experience to an Internet connection just limits productivity.

When working with containers or functions in modern application development, this is relatively easy. Virtualization, Docker Desktop, Minikube, k3s, OpenFaaS, and many other tools allow you to run scaled-down local environments on your own hardware and continue to develop local branches in most situations. With Kubernetes and Docker Compose manifests, you can tie together multiple components and call automated build and test suites, or just manually run through your application, ensuring everything works as you expect.
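As a rough sketch of that local workflow, the script below brings up a Docker Compose stack, waits for a health endpoint to respond, and tears the stack down again – the kind of smoke test you can run on a laptop without touching shared infrastructure. The compose file and the /healthz URL are assumptions for the example.

```python
# A minimal local smoke test: start a Docker Compose stack, poll a health
# endpoint, then tear everything down. The compose file contents and the
# /healthz URL are assumed for illustration.
import subprocess
import time
import urllib.request

def wait_for(url: str, timeout: float = 60.0) -> bool:
    """Poll the URL until it returns HTTP 200 or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # stack not ready yet; keep polling
        time.sleep(2)
    return False

if __name__ == "__main__":
    subprocess.run(["docker", "compose", "up", "-d"], check=True)
    try:
        healthy = wait_for("http://localhost:8080/healthz")
        print("stack healthy" if healthy else "stack failed health check")
    finally:
        subprocess.run(["docker", "compose", "down"], check=True)
```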

Using the Right Tool for the Job

NGINX Plus as an API gateway provides a huge amount of flexibility in the SDLC stages. I can consistently run the same software and configuration regardless of the environment. I can deploy a configuration using the same tools on my laptop as when deploying new APIs into production.
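One way to keep that configuration consistent is to render it from a single shared template plus per-environment values, so the laptop, test, and production deployments all start from the same source. The sketch below illustrates the idea; the template snippet, variable files, and placeholder names are all hypothetical.

```python
# A minimal sketch of rendering one shared gateway configuration template
# for different environments. The template text, env/*.json files, and
# placeholder names are hypothetical.
import json
import string
import sys

TEMPLATE = string.Template(
    "upstream api_backend { server $backend_host:$backend_port; }\n"
)

def render(env: str) -> str:
    """Fill the shared template with values for one environment."""
    # e.g. env/local.json: {"backend_host": "127.0.0.1", "backend_port": 8080}
    with open(f"env/{env}.json") as f:
        values = json.load(f)
    return TEMPLATE.substitute(values)

if __name__ == "__main__":
    environment = sys.argv[1] if len(sys.argv) > 1 else "local"
    print(render(environment))
```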

The benefits come from consistency and reliability all across the deployment process: everyone – from the developer to the test engineer and all the way through to the operations staff – can be confident that the deployed configuration will exactly match what was developed and tested. Building confidence throughout the business and demonstrating a reliable release cycle makes everyone’s lives easier.

Another great addition to the process is the NGINX Controller API Management Module. NGINX Controller pulls together features for API definition and publication, authentication, security, and real‑time monitoring. You can also provide operations teams with dashboards and integrate with popular platforms such as Prometheus and Splunk.

Bringing It All Together

What you get when you bring everything together is APIOps. It’s the ability to deliver consistent code that has been through testing and review at every stage, to provide rich metrics and analytics to operations teams, and to publish versioned APIs to your customers, whether internal or external.

Now, this nirvana can't be achieved overnight, but you have to start the journey somewhere or you'll never get there at all. Work on implementing good processes and finding the tools that work for your environment. Provide development and operations teams with the right arsenal to deliver great customer experiences.

This is an area where I am confident that NGINX can help accelerate your journey. Not only can you deploy lightning‑fast real‑time APIs, you can deliver a consistent experience from cradle to grave for your apps hosted in any location, whether they’re legacy apps running in your on‑premises data centers or new cloud‑native projects.

Try NGINX Plus as an API gateway and the NGINX Controller API Management Module – start a free 30‑day trial or contact us to discuss your use cases.

This blog is part of a series about APIs and other trends in app modernization, written especially for NetOps engineers by industry experts and published in collaboration with Gestalt IT. If you enjoyed this blog, check out the entire series.


About the Author

Jason Benedicic

Cloud/DevOps/Automation Consultant

Jason Benedicic has been a professional consultant for over 10 years, with 20 years in the IT industry, and is based in Cambridge, UK. He works with customers to design IT solutions that meet a variety of needs, including backup, virtualization, cloud storage, and containerization. As an expert in building and managing cloud services and professional services infrastructure, he has experience working in all areas of the business, from the sales cycle through to support. His additional interests include digital ethics, influencing, marketing, strategy, and business processes. Outside of the technology industry, he enjoys cycling and all forms of gaming.

About F5 NGINX

F5, Inc. is the company behind NGINX, the popular open source software. We offer a suite of technologies for developing and delivering modern applications. Our joint solutions bridge the gap between NetOps and DevOps, providing multi-cloud application services from code to customer. Visit nginx-cn.net to learn more.