Optimizing enterprise MLOps in the cloud with Domino Data Lab and Amazon Elastic File System | Amazon Web Services – AWS Blog

Domino Data Lab is an AWS Partner Network (APN) partner that provides a central system of record for data science activity across an organization. The Domino solution delivers orchestration for all data science artifacts, including AWS infrastructure, data, and services.

As part of the solution, Domino's platform leverages the scale, security, reliability, and cost-effectiveness of AWS cloud computing coupled with Amazon Elastic File System (Amazon EFS). This approach gives data science teams a flexible, collaborative research environment with automated workflows that track model development dependencies for full reproducibility, along with enterprise-grade governance, risk management, and granular cost controls.

In this post, we interview David Schulman, Director of Partner Marketing at Domino Data Lab, and explore the Domino Data Lab Enterprise AI Platform to consider why centralizing data, AI, and machine learning operations (MLOps) initiatives into a single system of record across teams can help enterprises work faster, deploy results sooner, scale rapidly, and reduce regulatory and operational risk. In 2023, Domino surveyed artificial intelligence (AI) professionals in its REVelate State of Generative AI survey. Respondents included AI professionals leading, developing, and operating AI across Fortune 500 companies. The survey reports that 49% plan to develop generative AI in-house, while 42% plan to fine-tune commercial models. Top limitations centered on security, reliability, cost, and IP protection. Consequently, 69% are worried about data leakage, with both top leadership (82%) and IT (81%) especially concerned.

According to Ventana Research, "Through 2026, nearly all multinational organizations will invest in local data processing and infrastructure and services to mitigate against the risks associated with data transfer." Hybrid cloud environments complicate operationalizing AI/ML at scale by creating silos across data, infrastructure, and tooling. Data science teams are prevented from collaborating by siloed data, processes, and tools. Non-standardized, non-repeatable, ad hoc bespoke workflows result in sprawl across individuals' computers and systems. Data and compute resources are distributed across cloud and on-premises data centers, creating unconnected environments and silos. There are also hidden costs: data scientists spend time on DevOps and infrastructure management tasks, and infrastructure sits underutilized due to idle, always-on, and over-provisioned resources.

Over years of partnership, Domino and AWS have worked together to assist organizations such as Johnson & Johnson (J&J) in reducing analysis time by 35% for data scientists[1]. This involves integrating Domino's data science platform with essential AWS services such as Amazon EFS, which provides analytics storage with shared file access for data scientists. Applications, including open-source genomics tools and Shiny as well as Domino Data Lab itself, run on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Amazon EFS provides access to a fully managed, petabyte-scale file system supporting 500 TB of genomics sequence data. More recently, J&J has further scaled data science across its hybrid and multicloud environment, adopting AI infrastructure strategies that straddle on-premises data centers and the cloud to address concerns over cost, security, and regulatory compliance. Lilly also centralizes data science to drive value across the healthcare value chain, as discussed last spring on a panel at NVIDIA's GTC AI developer conference.

Domino's Enterprise AI Platform, integrated with key AWS services, provides a unified, collaborative, governed, end-to-end MLOps platform. The solution orchestrates the complete ML lifecycle, providing easy access to data, preferred tools, and infrastructure in any environment. By sharing knowledge, automating workflows across teams, and tracking all changes and dependencies, Domino ensures full reproducibility while fostering collaboration. It also helps maintain peak model performance in production while ensuring enterprise-grade model governance and cost savings. Domino can be deployed into your VPC, or is available as a SaaS offering on AWS Marketplace. Attributes include:

Domino is Kubernetes-native and can be deployed on Amazon EKS for ease of management across hybrid environments. This enables cloud-native scalability, multi-cloud portability, reduced costs through elastic workloads matched to underlying hardware resources, and simplified administration through integration with existing DevOps stacks.

Domino can run on a Kubernetes cluster provided by Amazon Elastic Kubernetes Service. When running on Amazon EKS, the Domino architecture uses AWS resources to fulfill the Domino cluster requirements, as shown in Figure 1.
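As an illustration, provisioning an EKS cluster for this kind of deployment is often done declaratively with eksctl. The following is a minimal sketch only; the cluster name, region, instance types, and node group sizes are hypothetical examples, not Domino's published cluster requirements (see the Domino documentation for those):

```yaml
# Hypothetical eksctl ClusterConfig for a Domino-style EKS deployment.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: domino-platform        # example cluster name
  region: us-east-1            # example region

nodeGroups:
  - name: platform             # nodes hosting platform services
    instanceType: m5.2xlarge
    desiredCapacity: 3
  - name: compute              # nodes running user workloads, scale to zero when idle
    instanceType: m5.4xlarge
    minSize: 0
    maxSize: 10
```

A cluster like this would be created with `eksctl create cluster -f cluster.yaml`; the actual node pool labels and sizing Domino expects are defined in the Domino documentation referenced in Figure 1.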

Figure 1: View Domino Documentation for Domino on Amazon EKS

Seamless collaboration and knowledge sharing is a requirement for data science teams. First, Domino Datasets, integrated with Amazon EFS, provide high-performance, versioned, and structured filesystem storage in Domino so that data scientists can build curated pipelines of data in one project and share them with colleagues for collaboration. Amazon EFS enables the sharing of data pools among multiple instances that were previously isolated from one another. This increases data science team productivity because Domino tracks not only snapshots of the data used to build models, but also all of the underlying code, packages, environments, and supporting artifacts for full reproducibility, providing rich file-difference information between revisions. Additionally, customers such as J&J value the Amazon EFS storage class feature, which enables them to automatically move data from Amazon EFS Standard to Amazon EFS Infrequent Access. By automating the process of moving data to long-term, cost-effective storage, the customer successfully reduced their storage costs.
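The automatic move from EFS Standard to Infrequent Access is configured through an EFS lifecycle policy. The sketch below shows one way to set it with the AWS SDK for Python (boto3); the file system ID is a placeholder, and the set of accepted transition windows is an assumption to validate input locally:

```python
def ia_lifecycle_policies(days: int = 30) -> list:
    """Build the LifecyclePolicies payload for put_lifecycle_configuration."""
    # Transition windows EFS accepts for TransitionToIA (assumed subset).
    valid = {1, 7, 14, 30, 60, 90}
    if days not in valid:
        raise ValueError(f"unsupported transition window: {days}")
    return [{"TransitionToIA": f"AFTER_{days}_DAYS"}]

def enable_ia(file_system_id: str, days: int = 30) -> None:
    """Move files untouched for `days` days to the IA storage class."""
    import boto3  # AWS SDK; needs credentials and a real file system at runtime
    efs = boto3.client("efs")
    efs.put_lifecycle_configuration(
        FileSystemId=file_system_id,       # e.g. "fs-0123456789abcdef0" (placeholder)
        LifecyclePolicies=ia_lifecycle_policies(days),
    )
```

Once applied, EFS tiers data down automatically; no application changes are needed because files stay at the same paths.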

With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, dynamically providing storage capacity to your applications as needed. With elastic capacity, provisioning is automatic, and you're billed only for what you use. Amazon EFS is designed to be highly scalable in both storage capacity and throughput performance, growing to petabyte scale and allowing massively parallel access to your data from compute instances. This makes it an ideal data science platform foundation for organizations such as J&J to reduce analysis time by 35%.
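To make the pay-for-what-you-use model concrete, here is a small back-of-the-envelope sketch. The per-GB rates are illustrative placeholders, not AWS's published prices; the point is only that the bill tracks stored bytes, and that tiering most of a dataset to Infrequent Access shrinks it:

```python
# Illustrative only: the rates below are placeholders, not AWS pricing.
STANDARD_PER_GB = 0.30   # hypothetical $/GB-month, EFS Standard
IA_PER_GB = 0.025        # hypothetical $/GB-month, EFS Infrequent Access

def monthly_cost(standard_gb: float, ia_gb: float) -> float:
    """Elastic billing: you pay for bytes stored, not provisioned capacity."""
    return standard_gb * STANDARD_PER_GB + ia_gb * IA_PER_GB

# A 10 TB dataset entirely in Standard, versus 80% tiered down to IA.
hot = monthly_cost(10_240, 0)        # all Standard
tiered = monthly_cost(2_048, 8_192)  # 20% Standard, 80% IA
```

Under these placeholder rates the tiered layout costs roughly a quarter of the all-Standard layout, which is the effect the J&J lifecycle setup above relies on.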

Why does this matter? With Amazon EFS, data science teams are empowered to share data across projects, scale storage elastically, and pay only for what they use.

Second, Domino Data Sources act as a structured mechanism to create and manage connection properties for external sources such as Amazon S3, Amazon Redshift, and other data stores. This reduces DevOps work for data scientists, as they get desktop-like data store connectivity without needing any coding.

Flexible model deployment options support diverse business and operational requirements. Models developed in Domino can be exported for deployment in Amazon SageMaker for scalable, low-latency hosting, while models from SageMaker and SageMaker Autopilot can be accessed and monitored inside Domino for drift and prediction performance. Models can be deployed to the cloud, in-database (to Snowflake or Databricks for predictive analytics), or to the edge (Domino supports NVIDIA Fleet Command). Models can be deployed for both batch and real-time predictions at scale, while Domino Model Sentry controls model validation, review, and approval processes for an additional governance layer.

Hybrid cloud support is a necessity for enterprise data science teams, and Domino Nexus acts as a single pane of glass for all data science workloads across hybrid and multicloud environments.

Figure 2: Domino/AWS architecture

A Domino Nexus deployment consists of a control plane (a Kubernetes cluster hosting Domino's core platform services, deployed on Amazon EKS as shown in the preceding Figure 2) and one or more data planes: distinct Kubernetes clusters that run a small set of Domino services and can execute workloads. These can be deployed in any cloud region, across multiple clouds, or in on-premises data centers.

Figure 3: Nexus Hybrid Architecture | AWS Cloud Control Plane (US East Region), AWS Cloud Data Plane (EU Central Region)

As shown in Figure 3, users connect to the Domino control plane through a browser, and connect directly to the data plane where they do their model development work in a Domino Workspace. Elastic Load Balancing (ELB) allows ingress to Domino control plane services from data planes.

This architecture (Figure 4) eliminates the possibility of inadvertently transferring region-locked data. It also allows data scientists to seamlessly burst to the cloud if they run out of on-premises compute capacity.

Figure 4: Nexus hybrid architecture

Domino's Enterprise AI Platform is proven to deliver an average of 722% ROI over three years (average NPV and ROI based on a study of Domino customers, using the Domino Business Value Assessment). This is achieved through 2x more models delivered with the same resources in the same amount of time; a 40% reduction in data scientist time wasted waiting for IT support, doing DevOps, or duplicating work; a 40% reduction in IT and cloud infrastructure costs over three years; and reduced risk of revenue loss from regulatory violations or reputational issues.

Want to go deeper into these metrics? Learn more about Domino and cost-effective AI, and read the following case studies from enterprises with Domino's platform deployed on AWS:

Although generative AI gets all of the attention, large language models (LLMs) are in fact just models. And although there are many intricacies in operating generative AI at scale, such as prompt engineering, model fine-tuning, and inference/hosting (we'll save all of that for another post), the following best practices of scalable, enterprise AI remain the same:

Modern enterprises require flexibility to build, deploy, and operate AI at scale across a variety of complex architectures. Storage also plays an important role: Amazon EFS provides a cost-effective, elastic, and highly performant solution for ML inferencing workloads. You pay only for each inference that you run and the storage your model consumes on the file system. Amazon EFS provides petabyte-scale elastic storage, so your architecture can automatically scale up and down with the needs of your workload, without manual provisioning or intervention.

Want to learn more about how Domino on AWS can accelerate responsible AI initiatives? Download the Strategies and Practices for Responsible AI TDWI playbook for insight on a proactive approach to identifying and mitigating business, legal, and ethical risks to create trust and deliver tangible business value.

Visit Domino on AWS Marketplace.

[1] AWS Storage Blog: Johnson & Johnson reduces analysis time by 35% with their data science platform using Amazon EFS
