Towards a Development Methodology for Machine Learning — part I

Why ML needs its own flavour of development methodology, and a partial draft proposal for such a methodology

Assaf Pinhasi
Dec 18, 2019

TL;DR

  • ML teams working on complex projects must battle both the intrinsic challenges of ML and the friction that arises from their multi-disciplinary nature, which makes decision-making hard.
    A solid Development Methodology can help teams improve execution.
  • ML is different from standard software in its level of uncertainty, as well as the fact that models are influenced indirectly — and not engineered by design.
  • Agile doesn’t work out of the box for ML, despite being a useful mindset. For example, it assumes that small features can be designed and planned with low risk, which doesn’t hold for ML
  • Scientific research offers techniques for handling uncertainty and gaining learning, but it was not designed to deliver commercial software projects
  • A sequenced/pipeline flow doesn’t work for ML either, since ML is iterative in nature
  • I propose a partial draft for an ML Development methodology — describing a development flow, decisions and artefacts, and try to defend this proposal by explaining the rationale behind it.
  • Would be great if some of you can share your experience in complex ML projects, what methodologies you used, what worked and what didn’t, and comment about the draft methodology this post describes :-)

Introduction

Delivering complex machine learning projects requires getting many things right.

In the previous blog post, I attempted to cover some of the reasons why, for any sufficiently challenging ML project, it’s important to set up a project team that includes people with different skill sets.

But forming ML teams presents new challenges of its own: getting a multidisciplinary team to execute well together on a high-uncertainty project is hard, especially given the glaring lack of a well-understood development methodology.

Machine Learning hasn’t had its “agile moment” yet: a moment where a complete, holistic methodology emerges, providing rationale, principles, measurements, practical implementation details, techniques for scaling to large organisations, and examples of flavours and adaptations teams can choose to apply.

I’d say that most teams try to do one of the following:

  • Wing it
  • Shoehorn their project into whatever methodology the most influential people on the team know best
  • Try and cobble together their individual experience, external advice and whatever they have into a custom process

At various points in my career, I was guilty of all of these; and the companies I help as a consultant are all on this spectrum too.

Does your project need a better Development Methodology?


First, it’s possible that the answer is no! Just as some software projects don’t justify a full-fledged Agile team and dev. process, and still succeed.

Don’t fix what ain’t broken. If you’re not feeling any pain, it’s probably because your project doesn’t actually justify a team, or is close enough to a methodology you already understand well.

However, if you face frequent team arguments, an inability to plan, a lack of agreement on what to do next, or you often sense that stakeholders don’t really understand the project’s real status, you will probably benefit from a better-defined dev. methodology.

I find that there is a heuristic rule for deciding whether your project is complex enough to justify a team and a solid methodology:

The top row represents projects where there is value in refining the model’s behaviour and adapting it to a complex domain problem. If your project is in the top row, you will probably need a team and a methodology to aid the process.

Why some existing methodologies don’t work for ML

Understanding the uncertainty and the indirect nature of ML

An ML model is an approximation function, used to generate desired outputs (predictions) given certain inputs (data).

We use ML when the logic for how to create the desired output from our input cannot be explicitly coded by engineers. So instead, we use an algorithm (model training) to create this logic (in the form of a machine learning model).

This has some interesting side effects:

  • In many cases, the patterns in the data which will drive the prediction rules are not known to us in advance — and we cannot plan for them ahead of time. Uncovering them requires spending time with the data, and new insights may be gained even late in the project.
  • The training process is a generic optimisation algorithm. It is not specific to the problem domain, so it’s not possible to directly engineer functional requirements as changes to the training algorithm.
  • What control we do have over the training process is indirect, and is exercised through trial and error. Also, every re-training may introduce some regression alongside progression (e.g. the model will start making mistakes on some of the inputs it previously got right).
  • Models are, by definition, wrong on some of the inputs. Figuring out what makes the model fail on them is a very time-consuming activity.
  • The model itself, once trained, is not readable or comprehensible to humans. It’s impossible to “debug” a model by reading its code or stepping through it, and hence engineers cannot directly implement requirements inside the model.
  • And a few more…
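To make the regression-alongside-progression point concrete, here is a minimal sketch (hypothetical helper, not from any library) that diffs per-example correctness between two model versions:

```python
# Hypothetical sketch: compare per-example correctness of two model versions
# to surface regressions (newly wrong) and progressions (newly right).
def diff_model_versions(labels, preds_v1, preds_v2):
    """Return indices of examples that regressed and progressed between versions."""
    regressions = [i for i, y in enumerate(labels)
                   if preds_v1[i] == y and preds_v2[i] != y]
    progressions = [i for i, y in enumerate(labels)
                    if preds_v1[i] != y and preds_v2[i] == y]
    return regressions, progressions

labels   = [1, 0, 1, 1, 0]
preds_v1 = [1, 0, 0, 1, 0]   # v1 misses example 2
preds_v2 = [1, 1, 1, 1, 0]   # v2 fixes example 2 but breaks example 1
regressions, progressions = diff_model_versions(labels, preds_v1, preds_v2)
print(regressions, progressions)  # → [1] [2]
```

Even when aggregate accuracy improves, the regression list may be non-empty, which is exactly why re-training cannot be treated as a risk-free operation.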

In other words, ML projects introduce a higher level of:

Uncertainty — About what can be expected from the model to learn, what mistakes it’s likely to make, and how its behaviour might change when data behaviour changes.

Indirection — models cannot be directly engineered and manipulated to specification. Instead, we tweak things which influence the model until we get roughly what we wanted.

Maybe Agile can work for ML despite ML’s special needs?

TL;DR — Agile principles are solid; its practices don’t work as-is for ML

Your beautifully planned sprints, failing one after the other in an ML project. Source: https://www.redbull.com/int-en/events/red-bull-flugtag-moscow-2015

Agile methodologies brought a huge relief to software teams, and there is a significant body of knowledge and understanding in the industry regarding how to implement and tune it.

It’s no surprise that some ML teams turn to agile by default — especially in companies that have been delivering a non-ML product, and have working engineering and product delivery practices in place.

The good

The Agile principles are very sound and applicable to most projects, including ML.

For example, Agile is iterative and accepts the fact that requirements can change frequently — which is certainly useful for ML projects.

Also, Agile acknowledges that “big” requirements are hard to plan and execute with predictability and that the way to reduce risk is to break work into small steps which provide some value to the project.

The bad

Agile practices (ceremonies, planning, measurements, reporting) were carefully adapted to fit engineering projects, and don’t generalise well to ML due to ML’s uncertainty and indirection.

For example, the rationale behind user stories and their planning doesn’t really hold for ML.

In other words, Agile assumes that small functional improvements map cleanly to achievable goals with low risk and high predictability.

In ML, even “small” functionality improvements are hard to commit to — and hence Agile’s practices for planning and measuring progress cannot work out of the box in ML projects.

Can the Scientific Research Methodology solve ML needs?

TL;DR — useful methods; not a complete solution, since it wasn’t designed for commercial software release

Not every hypothesis is worth an experiment. Source: https://beerbabesandbsd.wordpress.com/2019/01/01/frankensteins-lab-part-1-migrating-linux-mint-19-1-to-devuan-ascii/

The good

The scientific method was designed to make disciplined, reproducible discoveries, in the face of unknowns.

It follows a cycle: question -> hypothesis -> experiment -> results -> analysis -> conclusion. This loop can be useful in an ML project.

Also, many data science teams are composed of people who hold advanced degrees — some fresh out of university. These team members naturally lean towards treating the project like an academic research task.

So — research methods are very useful in ML projects, where uncertainty is initially high, and where the team needs to methodically discover how the data behaves and how the model responds, raising hypotheses as to why, and how to nudge the model in the right direction.

The bad

While it offers useful techniques for dealing with uncertainty, scientific research is not a complete solution for ML projects, since it wasn’t designed for commercial software delivery.

More specifically:

  • Academic research is, in most cases, an individual sport — which isn’t conducive to teamwork
  • The goal of research is to discover something novel and write a paper — whereas commercial product development is concerned with providing the user with value as early as possible, with as few risks as possible
  • In academic research, there’s a certain amount of liberty to make assumptions and narrow down the scope in order to discover an interesting novelty. In product development, we are looking for broad solutions which can make sense of real, messy data
  • Lastly, the academic research approach doesn’t address any software delivery aspects

Note: some large companies have a pure research function, run like a professor’s lab in academia. But its goal is papers, not solutions.

Perhaps we can develop ML in phases linked in a pipeline?

Pipelines are a leaky abstraction for describing the ML development lifecycle. Source: http://www.allmach.com.au/blog-item/10-best-ever-plumbing-disasters-and-piping-fails

There are many resources out there that describe the activities involved in building an ML model, and broadly speaking they describe the following sequence:

  • Project ideation, Requirements and KPI
  • Data ingestion, exploration, cleaning, feature engineering
  • Modelling
  • Evaluation
  • Preparing inference code, testing, packaging, deploying
  • Monitoring, troubleshooting
  • Measurements, analysis, improvements
  • Repeat

At face value, this looks like a clean sequenced process — where one phase is done and the next begins, with well-defined handovers. This is appealing because it’s easy to understand; also it sounds like it can allow each phase owner to work according to their own standard, as long as the handovers are clean.

However, in reality, ML development is rather the opposite of a clean sequence — it’s an iterative process with a lot of back and forth: from models to data to requirements to metrics. The reason is that the team is trying things out and learning in the process, and each Learning may lead to a refinement of the plan and the product.
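The contrast with a clean sequence can be sketched as a toy loop. This is a deliberately tiny illustration, with a one-parameter "model" (a threshold) standing in for real training; all names are hypothetical:

```python
# Toy sketch of the iterative loop: we cannot code the decision rule directly,
# so we nudge a parameter (a threshold) by trial and error until a goal is met.
def accuracy(threshold, data):
    """Fraction of (x, label) pairs that the rule `x > threshold` gets right."""
    return sum((x > threshold) == y for x, y in data) / len(data)

def iterate_until_goal(data, goal=1.0, candidates=range(0, 10)):
    learnings = []                         # Learning is an artefact in itself
    best = None
    for threshold in candidates:           # each trial is a tiny "experiment"
        acc = accuracy(threshold, data)
        learnings.append((threshold, acc)) # record what we tried and observed
        if best is None or acc > best[1]:
            best = (threshold, acc)
        if acc >= goal:                    # goal met: release-worthy candidate
            break
    return best, learnings

data = [(1, False), (3, False), (6, True), (8, True)]
best, learnings = iterate_until_goal(data)
print(best)  # → (3, 1.0): a threshold between 3 and 6 separates the data
```

Note that the outcome (which threshold works, how many trials it takes) is only known after running the loop — which is why handover-based phase planning breaks down.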

Ingredients for an ML Dev. Methodology (draft)

So hopefully by now, we understand ML’s special requirements, we have some ideas about what we can borrow from other methodologies, and are ready to start tackling the task of defining a dev. methodology for ML.

What do we expect the methodology to include? Here are some of the ingredients:

  • Rationale and principles — these are needed in order to guide teams in decision making, and to enforce consistency across the methodology.
    As a start, we can try the agile mindset (and customise later if necessary).
  • A Flow for how to turn requirements into a product — including the activities, decision points, and transitions between activities.
    In our case, the flow will be iterative since ML is iterative, and the decision points should allow high adaptability in the face of new Learning as the team ‘tries stuff out’ and ‘learns’ the problem
  • A description of the main artefacts in the project —
    both input artefacts (e.g. things we define as part of planning) and output artefacts (which can be evaluated and accepted, signify progress, and influence the flow).
    For example, in Agile, requirements are formalised as “user stories” —
    a format carefully chosen to be consistent with the principles of the Agile method (putting the user in the centre, talking about the what and why, and leaving the how to the team, etc.).
    In ML we will need something similar.
  • We may want to incorporate techniques from Scientific research — Since they are both formal and proven to be effective in battling uncertainty and discovering information.
  • A formalisation of the result of “trying stuff out” — since it triggers many decisions, it deserves the status of an artefact and can have value in itself.
  • Project management tools — ceremonies, roles, metrics, and guidance on how to customise the process — to make the method concrete.

Machine Learning Development Lifecycle (draft)

Quick walkthrough

  • The flow is iterative in nature.
  • Product Requirements are translated into ML Goals (usually 1:n) — which are phrased precisely in the domain of the data and machine learning objectives.
    ML Goals are hard to estimate in terms of effort, but we should attempt to keep them small (including breaking them down as we learn, if needed).
  • ML Goals are achieved by trying out various directions via Experiments (again 1:n)
  • Experiments are the main execution unit. They follow the scientific method, and include a hypothesis, measurements, Analysis and Learning — and sometimes an improvement in model functionality.
  • The Learning gathered from experiments influences choices regarding the next experiment, the framing or priority of ML Goals, and even how we define the Requirements.

This means that in ML projects, Learning is a very important measure of progress, alongside new working model functionality.

  • The process involves a Release to production whenever the team feels the model is ready and that there is value in doing so.
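The walkthrough above can be sketched as data structures. This is only an illustration of the draft's artefacts; the field names are my own placeholders, not a formal schema, and the fraud-detection example is hypothetical:

```python
# Hypothetical sketch of the draft lifecycle's artefacts: an ML Goal derived
# from a Requirement, and an Experiment that produces measurements + Learning.
from dataclasses import dataclass, field

@dataclass
class MLGoal:
    """A product requirement translated into the data/ML domain (1:n)."""
    requirement: str   # the product-level need this goal serves
    objective: str     # phrased precisely as a measurable ML objective

@dataclass
class Experiment:
    """The main execution unit; follows the scientific method."""
    goal: MLGoal
    hypothesis: str
    measurements: dict = field(default_factory=dict)
    learning: str = ""  # a first-class output, not a by-product

goal = MLGoal(requirement="Flag fraudulent payments",
              objective="Recall >= 0.9 at precision >= 0.95 on the holdout set")
exp = Experiment(goal=goal,
                 hypothesis="Adding merchant-category features lifts recall")
exp.measurements = {"recall": 0.87, "precision": 0.96}
exp.learning = "Recall improved but plateaued; revisit label quality next."
print(exp.learning)
```

The point of modelling Learning as an explicit field is that an experiment which fails to improve the model can still "succeed" by producing a Learning that reshapes the next goal.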

Some of the missing pieces to make this proposal concrete

  1. Formalisation of what the artefacts should look like — both planning artefacts (Requirements, ML Goals, Experiments) and output artefacts (Learning, working software, etc.)
  2. Guidelines on how to make good decisions — such as how to translate requirements to ML goals, how to prioritise the experiments, or how to draw conclusions from the learning
  3. Propose some ceremonies, roles
  4. Methods for dealing with Sizing and planning
  5. Metrics for progress
  6. A few more…

However, I’m going to stop here — since this post is long enough.

Conclusion

Some Machine Learning projects are complex and full of uncertainty.

ML Teams need to adopt a solid dev. methodology to help them battle the uncertainty, and to help them execute effectively across disciplines.

Borrowing principles from academic research and agile development, this post attempts to outline a process for ML development which hopefully addresses ML’s unique needs. This is still an early draft but I hope some of you may find this useful in your projects.

I look forward to hearing what methodologies you follow in your ML projects, what worked and what didn’t, and what you think about the proposal above…

Until next time,

Assaf
