First Principles for the MLOps Engineer
June 27, 2022

Taimur Rashid

Launching an airplane from an aircraft carrier is a systematic, well-coordinated process that involves reliable systems, high-performance catapults, precise navigation, and, above all, a specialized crew with distinct roles and responsibilities for managing air operations. This crew, the flight deck crew, wears colored jerseys to visually distinguish its functions; everyone on the flight deck has a specific job. Launching machine learning (ML) models into production is not entirely different, except that instead of launching a 45,000-pound plane into the air, ML teams are launching trained ML models into production to serve predictions.

Several categorizations define this function of taking trained ML models and launching them into production. One of them is MLOps engineering, which can be defined as the technical systems and processes associated with the stages of the ML lifecycle (also referred to as the MLOps cycle), from data preparation and model building through production deployment and management.

While MLOps engineering entails the provisioning, deployment, and management of the infrastructure that enables model building, data labeling, and model inference, it can go much deeper than that; it can also entail developing algorithms.
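The model-inference side of that infrastructure can be sketched as a minimal prediction service. This is a stdlib-only illustration, not a prescribed stack: the endpoint path, port, and stub model are assumptions for the example.

```python
# A minimal sketch of a model-inference service of the kind MLOps
# infrastructure hosts. The model here is a stub; in practice it would
# be a trained artifact loaded from a model registry.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(features):
    # Stand-in for a real trained model's predict() call.
    return {"prediction": sum(features) / len(features)}


class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a JSON payload like {"features": [1.0, 2.0]} and return
        # the model's prediction as JSON.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


def run(port=8080):
    # Blocking server loop; invoked by whatever deployment entry point
    # the team's infrastructure uses.
    HTTPServer(("", port), InferenceHandler).serve_forever()
```

In practice a service like this would be containerized, placed behind the team's deployment tooling, and monitored; the point is that serving predictions is ordinary infrastructure work that an MLOps Engineer provisions and operates.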

Mature IT functions like data engineering, data preparation, and data quality all have corresponding personas that perform specific tasks, or in the frequently mentioned parlance, "Jobs to Be Done."

ML engineering also has a specific persona, and that is the MLOps Engineer. What do MLOps Engineers do?

For the sake of simplicity, MLOps Engineers design, deploy, and operate the underlying systems (infrastructure) that allow data science teams to do their jobs: feature engineering, model training, model validation, and model refinement, to name a few. MLOps Engineers also automate the processes around those needs so that the work involved in launching ML models into production is streamlined, simplified, and instrumented.
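The automation described above can be sketched as a train-validate-promote gate. This is an illustrative sketch, not a prescribed implementation: the dataset, model choice, and accuracy threshold are all assumptions made for the example.

```python
# A minimal sketch of an automated train -> validate -> promote gate,
# the kind of step an MLOps Engineer wires into a pipeline so that
# only models clearing a quality bar reach production.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def train_and_validate(threshold=0.9):
    # Illustrative dataset and model; a real pipeline would pull
    # versioned training data and a configured model definition.
    X, y = load_iris(return_X_y=True)
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
    accuracy = accuracy_score(y_val, model.predict(X_val))
    # The gate: promote to production only if validation accuracy
    # clears the agreed threshold.
    promoted = accuracy >= threshold
    return model, accuracy, promoted
```

Instrumenting a step like this (logging the validation score, recording which model version was promoted) is what turns an ad hoc handoff from data science into a streamlined launch process.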

Just like any other IT role, there is a broad spectrum of functional tasks MLOps Engineers can undertake. Fundamentally, an MLOps Engineer fuses software engineering expertise with knowledge of machine learning.

While the number of tools, frameworks, and approaches continues to expand and evolve, certain skill sets transcend the specific tools and frameworks. That's why it's important to ground the discussion in first principles. There is a core list of skill sets an MLOps Engineer needs to carry out specific tasks, and while not all are required, the tasks an MLOps Engineer undertakes are a function of the existing composition, size, and maturity of the broader ML team.

Some of these first principles or core skill sets entail:

1. Programming experience

2. Data science knowledge

3. Familiarity with math and statistics

4. Problem-solving skills

5. Proficiency with machine learning and deep learning frameworks

6. Hands-on experience with prototyping

Related to these core skill sets are knowledge of and experience with programming languages, DevOps tools, and databases (relational, data warehousing, in-memory, etc.). A variety of online resources unpack the details of these skill sets, and they continue to evolve as more companies mainstream ML across their teams.

While definitions are important, the industry is still early in defining MLOps engineering and in characterizing the roles and responsibilities of an MLOps Engineer. In the journey toward understanding this domain, and the associated education and learning paths to become an MLOps Engineer, it's important not to be too dogmatic across the board. By focusing on the Jobs to Be Done, and applying that to the context of the project, company processes, and maturity of teams, companies can better structure and define the MLOps engineering crew that can launch ML models into production.

Taimur Rashid is Chief Business Development Officer at Redis
