Introduction to the Lean AI framework

Overview

It is hard to build production systems that rely on bleeding-edge technology. Technology stacks change frequently, and the goal is often not a static product but one that adapts rapidly to accommodate new tools and data.

There are four important guiding principles of an AI strategy:

  1. Reduce risk with an open-source-centric AI strategy. Ensure continuous access to leading data engineers and cloud native experts who live and breathe open source.
  2. Enable a quick project start and an agile approach. There is no need to decide on a particular cloud infrastructure or commit to a firm technological roadmap before starting your project. What matters is arriving at a sustainable solution no matter where the evolving requirements take you.
  3. Retain ownership of your AI projects. With assured access to experts and engineering capacity to overcome technological roadblocks, even resource- and knowledge-constrained teams can take full ownership of AI projects.
  4. Avoid vendor lock-in. The strategic choice of cloud infrastructure provider and the ownership of data assets should not be made on the basis of the availability of particular ML tools. Sustainable AI is built with a cloud native strategy that keeps your AI solutions vendor agnostic.

The aim of Lean AI is to establish processes and technology that integrate knowledge, processes and data in machine learning workflows, supporting continuous innovation and the guiding principles above. Lean AI is designed to accelerate the adoption of AI by empowering the teams that take use cases to production services. At its core, it addresses the challenge of putting, and keeping, AI in production by providing teams with the knowledge, tools and support needed to repeatedly succeed with AI projects, ultimately delivering business value quickly and efficiently.

Course outline

1. Creating workflows for machine learning development
2. What characterizes the Lean AI process, and what are the roles in a Lean AI team?
3. What do we mean by a model in production?
4. Introduction to cloud native computing and DevOps
5. Introduction to the Lean AI stack and workflow
6. Team formation exercises

Deliverables & Master Class material

  • A high-level understanding of the Lean AI process and the associated technology stacks.
  • An understanding of the challenges involved in putting ML models in production.
  • An understanding of the roles in a Lean AI team.
  • A background in cloud native computing and DevOps as needed for Lean AI.

Master Class participants and prerequisites

Participants should have good business knowledge and be subject matter experts.

  • Current or future participants in machine learning projects – domain experts
  • Software Developers
  • Data Scientists
  • IT operations

The instructors

Morgan Ekmefjord
Morgan is CTO of Scaleout Systems, with business and product development experience from several Fortune 500 companies. He has successfully built solutions across a wide range of industries, from mission planning and fighter simulators, telecommunication planning and optimization software, and regulatory and compliance tracking systems to large-scale POS solutions for retail and food companies.
Andreas Hellander
Andreas is Chief Scientific Officer at Scaleout Systems. He holds a PhD in scientific computing and is an expert on modelling, simulation, and development of scientific applications using cloud infrastructure. He is also Associate Professor at Uppsala University where he leads a research group in computational science and engineering.
Ola Spjuth
Ola is lead scientist for machine learning & AI at Scaleout Systems and holds a PhD in bioinformatics, with many years of experience in applied machine learning on high-performance and distributed e-infrastructures. Ola is also Associate Professor at Uppsala University, where he leads a research group that studies how predictive modelling, large-scale calculations and modern e-infrastructure can aid research and development.
Salman Toor
Salman is lead scientist for distributed infrastructures at Scaleout Systems. He holds a PhD in scientific computing and is an expert on scientific data management, scalability and performance of distributed infrastructures, and solutions for data-intensive applications. Salman is also Assistant Professor at Uppsala University, where he conducts research on e-infrastructure.
