Launching an airplane from an aircraft carrier is a systematic and well-coordinated process that involves reliable systems, high-performance catapults, precise navigation systems, and above all, a specialized crew with different roles and responsibilities for managing air operations. This crew, also known as the flight deck crew, is known for the colored jerseys that visually distinguish its functions. Everyone on the flight deck has a specific job. As a corollary, launching machine learning (ML) models into production is not entirely different, except instead of launching a 45,000-pound plane into the air, ML teams are launching trained ML models into production to serve predictions.
There are several categorizations that define this function of taking trained ML models and launching them into production. One of them is MLOps engineering, which can be defined as the technical systems and processes associated with the stages of the ML lifecycle (also referred to as the MLOps cycle), from data preparation and model building through production deployment and management.
While MLOps engineering entails the provisioning, deployment, and management of the infrastructure that enables model building, data labeling, and model inference, it can go much deeper than that. MLOps engineering can entail developing algorithms too.
Mature IT functions like data engineering, data preparation, and data quality all have corresponding personas that perform specific tasks, or in the frequently mentioned parlance, "Jobs to Be Done."
ML engineering also has a specific persona, and that is the MLOps Engineer. What do MLOps Engineers do?
For the sake of simplicity, MLOps Engineers design, deploy, and operate the underlying systems (infrastructure) that allow data science teams to do their jobs, which include feature engineering, model training, model validation, and model refinement, to name a few. MLOps Engineers also automate the processes around those specific needs so that the work involved in launching ML models into production is streamlined, simplified, and instrumented.
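To make the automation idea concrete, here is a minimal sketch, in pure Python with toy data and hypothetical names, of the kind of gated pipeline an MLOps Engineer might build: train a model, validate it against a quality bar, and only promote it to production if it clears that bar.

```python
# Hypothetical sketch of a gated train-validate-promote pipeline.
# All names (ModelArtifact, run_pipeline, the toy linear model) are
# illustrative, not from any particular MLOps framework.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class ModelArtifact:
    name: str
    weights: Tuple[float, float]   # slope, intercept of a toy linear model
    metric: float                  # validation error (lower is better)

def train(data: List[Tuple[float, float]]) -> Tuple[float, float]:
    """Ordinary least squares fit for y = a*x + b (toy 'model training' step)."""
    n = len(data)
    sx = sum(x for x, _ in data); sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data); sxy = sum(x * y for x, y in data)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def validate(weights: Tuple[float, float],
             holdout: List[Tuple[float, float]]) -> float:
    """Mean absolute error on a holdout set (toy 'model validation' step)."""
    a, b = weights
    return sum(abs(y - (a * x + b)) for x, y in holdout) / len(holdout)

def run_pipeline(train_set, holdout, max_error: float) -> Optional[ModelArtifact]:
    """Train, validate, and gate: return an artifact only if it meets the bar."""
    weights = train(train_set)
    err = validate(weights, holdout)
    if err > max_error:
        return None                 # gate failed: do not promote to production
    return ModelArtifact("linear-v1", weights, err)

artifact = run_pipeline(
    train_set=[(0, 1), (1, 3), (2, 5), (3, 7)],   # points on y = 2x + 1
    holdout=[(4, 9), (5, 11)],
    max_error=0.1,
)
print(artifact)
```

In a real system the "promote" step would push the artifact to a model registry and the gate would be wired into CI/CD, but the shape — train, validate, gate — is the same.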
Just like any other IT role, there is a broad spectrum of functional tasks MLOps Engineers can undertake. Fundamentally, an MLOps Engineer fuses software engineering expertise with knowledge of machine learning.
While the number of tools, frameworks, and approaches continues to expand and evolve, certain skill sets transcend the specific tools and frameworks, which is why it’s important to ground the discussion in first principles. There is a core list of skill sets an MLOps Engineer needs to carry out specific tasks, and while not all are required, the tasks an MLOps Engineer undertakes are a function of the existing composition, size, and maturity of the broader ML team.
Some of these first principles or core skill sets entail:
1. Programming experience
2. Data science knowledge
3. Familiarity with math and statistics
4. Problem-solving skills
5. Proficiency with machine learning and deep learning frameworks
6. Hands-on experience with prototyping
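Several of the skills above — programming, basic math and statistics, and hands-on prototyping — can be illustrated with a small framework-free prototype. The sketch below (toy data, hypothetical labels) implements k-nearest-neighbors classification and a simple evaluation loop in plain Python:

```python
# Pure-Python prototyping sketch: k-nearest-neighbors on a toy dataset.
# Data and labels are invented for illustration.
import math
from collections import Counter
from typing import List, Tuple

Point = Tuple[float, float]

def knn_predict(train: List[Tuple[Point, str]], x: Point, k: int = 3) -> str:
    """Classify x by majority vote among its k nearest training points."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((0.2, 0.1), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b"), ((1.1, 0.9), "b")]
test = [((0.05, 0.10), "a"), ((1.05, 0.95), "b")]

# Evaluate: fraction of test points classified correctly.
accuracy = sum(knn_predict(train, x) == y for x, y in test) / len(test)
print(accuracy)  # 1.0 on this tiny, well-separated toy set
```

The point is not the algorithm itself but the habit: an MLOps Engineer who can quickly prototype and evaluate a model end-to-end is better equipped to build and debug the production systems that do the same at scale.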
Related to these core skill sets are knowledge of and experience with programming languages, DevOps tools, and databases (relational, data warehousing, in-memory, etc.). A variety of online resources unpack the details of these skill sets, and they continue to evolve as more companies mainstream ML across their teams.
While definitions are important, the industry is still early in defining MLOps engineering and in characterizing the roles and responsibilities of an MLOps Engineer. In the journey toward understanding this domain, and the education and learning paths to become an MLOps Engineer, it’s important not to be too dogmatic across the board. By focusing on the Jobs to Be Done, and applying that to the context of the project, company processes, and maturity of teams, companies can better structure and define the MLOps engineering crew that can launch ML models into production.
Industry News
Perforce Software joined the Amazon Web Services (AWS) Independent Software Vendor (ISV) Accelerate Program and listed its free Enhanced Studio Pack (ESP) in AWS Marketplace.
Aembit, an identity platform that lets DevOps and Security teams discover, manage, enforce, and audit access between federated workloads, announced its official launch alongside $16.6M in seed financing from cybersecurity specialist investors Ballistic Ventures and Ten Eleven Ventures.
Hyland released Alfresco Content Services 7.0 – a cloud-native content services platform, optimized for content model flexibility and performance at scale.
CAST AI has announced the closing of a $20M investment round.
Check Point® Software Technologies introduced Infinity Global Services, an all-encompassing security solution that will empower organizations of all sizes to fortify their systems, from cloud to network to endpoint.
OpsCruise's Kubernetes and Cloud Service observability platform is certified to run on the Red Hat OpenShift Kubernetes platform.
DataOps.live released an update to the DataOps.live platform, delivering productivity for data teams.
CoreStack and Zensar announced a strategic global partnership. CoreStack will provide its AI-powered NextGen cloud governance and FinOps capabilities, complementing Zensar’s composable cloud operations offering.
Delinea introduced the Delinea Platform, a cloud-native foundation for Delinea's PAM solutions that empowers end-to-end visibility, dynamic privilege controls, and adaptive security.
Sysdig announced a new foundation that will serve as the long-term custodian of the Wireshark open source project.
Talend announced the latest update to Talend Data Fabric, its end-to-end platform for data discovery, transformation, governance, and sharing.
Descope has raised $53M in seed funding and emerged from stealth to launch a frictionless, secure, and developer-friendly authentication and user management platform.
Loft Labs announced Loft v3 with new capabilities and flexibility for platform teams to build and enable their development teams with a self-service Kubernetes.
AWS Application Composer is now generally available.