{% extends "ai/base_ai.html" %}
{% block title %}Open source AI{% endblock %}
{% block meta_description %}
Open source AI: path to production for models. From development to deployment, get a seamless experience.
{% endblock %}
{% block meta_copydoc %}
https://docs.google.com/document/d/1HP9YdHF11yVQ9SLl0VXhLezDZREsr1GHIKNi9wT1Gv0/edit
{% endblock meta_copydoc %}
{% block body_class %}
is-paper
{% endblock body_class %}
{% block content %}
Build enterprise-grade AI projects with secure and supported Canonical MLOps. Develop on your Ubuntu workstation using Charmed Kubeflow or Charmed MLflow and scale up quickly with open source tooling in every part of your stack.
Develop machine learning models on Ubuntu workstations and benefit from management tooling and security patches.
MLOps is short for machine learning operations: a set of practices that simplify workflows and automate machine learning and deep learning deployments. It enables you to deploy and maintain models reliably and efficiently in production, at scale.
Develop and deploy models with automated workflows. Charmed Kubeflow is an end-to-end MLOps platform designed to run AI at scale. It is the foundation of Canonical MLOps and seamlessly integrates with other big data and machine learning tools.
Track your experiments and get a better overview of your model catalogue. Charmed MLflow is an open source platform used for managing machine learning workloads. It integrates with other MLOps tools to cover different functions of the machine learning lifecycle.
Charmed Spark is simply the best way to run Spark®, whether in the cloud or in your data centre. It runs on Kubernetes and includes a fully supported distribution of Apache Spark.
Charmed OpenSearch simplifies the operations of your favourite search and analytics suite. In addition, OpenSearch provides an integrated vector database that can support AI systems by serving as a knowledge base.
With NVIDIA AI Enterprise and NVIDIA DGX, Charmed Kubeflow improves the performance of AI workflows by making full use of the underlying hardware and accelerating project delivery. Charmed Kubeflow can significantly speed up model training, especially when coupled with DGX systems.
Production-grade projects require a solution that enables scalability, reproducibility and portability. Canonical MLOps speeds up AI project timelines and gives you all three.
Focus on building production grade models, while Canonical experts manage the infrastructure underneath. Work with our experts to understand your data better and deliver on your use case.
Looking for Kubeflow support? Work with our team to get support for any cloud environment or CNCF-compliant Kubernetes distribution.
University of Tasmania (UTAS) modernised its space-tracking data processing with the Firmus Supercloud, built on Canonical's open infrastructure stack.
Learn how to take models to production using open source MLOps platforms. Learn how to scale AI projects using hardware that's designed for AI workloads and certified software.
Choosing a suitable machine learning tool can often be challenging. Understand the differences between the most widely used open source solutions.
Take your models to production
with open source AI
Why choose Canonical for Enterprise AI?
Develop artificial intelligence projects on any environment
Ubuntu: the OS of choice for data scientists
Move beyond experimentation with machine learning operations (MLOps)
Open source MLOps tooling
Charmed Kubeflow
Charmed MLflow
Charmed Spark
Charmed OpenSearch
Run AI at scale with Canonical and NVIDIA
{{ image (
url="https://assets.ubuntu.com/v1/ba3a0335-Canonicla+nvidia.png",
alt="Canonical + Nvidia",
width="335",
height="41",
hi_def=True,
loading="lazy"
) | safe
}}
Use modular platforms to run AI at the edge or in large clouds
Open source AI services
Managed Canonical MLOps
AI consulting
Support
Open source AI resources