Focusing on Framing, Timing, and Targets
To build successful machine learning solutions, everyone involved needs to understand a few fundamental ideas. In this blog post, we look at three key early stages of the design process that managers can focus on to ensure that the project is headed toward a successful outcome.
This post presumes the reader already understands distinctions in machine learning such as supervised and unsupervised models, training and testing stages, and the overall machine learning lifecycle. Returning to the earliest stage of defining the business problem, we focus our attention on three key objectives.
Each of these objectives is introduced to some extent in data science training. However, quite often they do not receive the emphasis they deserve. In part this is because, on the surface, they seem obvious: frame the problem, decide when predictions will be made, and determine a target variable for the model to learn from.
These are three major levers a project stakeholder can use to influence the likelihood of project success. For this reason, it's worth diving deeper into each one to surface the subtleties that can make the difference between a model that solves your business problem and one that does not.
In applied machine learning, someone needs to decide how the model will interact with the business. This involves determining how the prediction made by the model will influence, or determine, a decision. Be specific and think through the exact mechanical way that the output of the model is used by the business. Does the business only act when the model produces certain values? Or does the business always take an action but have that action modified by the model output? Does the specific real value of the prediction influence the action? Or does it merely determine whether or not the team takes action?
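The two questions above can be made concrete with a small sketch. The threshold, action names, and discount cap below are invented for illustration; they are not from any real system.

```python
# Two hypothetical ways a model output can drive a business action.
# The 0.7 cut-off, action names, and 20% cap are illustrative assumptions.

def act_on_threshold(score: float) -> str:
    """The business only acts when the model output crosses a cut-off;
    the real value merely gates the action."""
    return "call_customer" if score >= 0.7 else "no_action"

def act_always_scaled(score: float) -> float:
    """The business always acts, but the model output sizes the action,
    e.g. a retention discount proportional to risk, capped at 20%."""
    return round(min(score, 1.0) * 0.20, 3)

print(act_on_threshold(0.85))   # the prediction gates the action
print(act_always_scaled(0.85))  # the prediction shapes the action
```

The choice between these two mechanics changes what the model needs to get right: the first only needs to rank customers well around the cut-off, while the second needs calibrated scores across the whole range.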
It is tempting to think that this relationship will be very simple, or that there is only one option for a given problem; however, this is usually not the case. Let's look at an example of how an alternative framing of a common problem may produce better results.
Example: Reframing Forecasting Problems
Many organisations encounter challenges related to predicting the volume of goods that will be ordered or required at a given point in time in the future. The desire to understand those numbers often relates to one key business decision. It may determine: Will there be sufficient stock on hand? Or perhaps: Will there be sufficient warehouse space?
What does this mean when framing the machine learning project?
Sometimes there is heavy emphasis on a model that attempts to predict the exact number of widgets that will be sold or delivered, whereas in reality, all the business decision needs is a model that conveys whether or not that number will exceed a given threshold. In many circumstances, building the latter model will be easier and more effective.
This is what is meant by focusing on problem framing: thinking through exactly what the business needs to know in order to achieve the outcome that solves the targeted problem.
A good rule of thumb here is that a better outcome can be achieved by bringing the output of the machine learning model closer to the specific decision the business is looking to make. This helps because the model will be focused on exactly what the business needs to know for the decision, rather than trying to model every minute detail.
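The reframing above amounts to a one-line transformation of the training target. The stock level and demand figures below are invented for illustration; the point is that the classification target encodes only what the stocking decision needs.

```python
# Hypothetical sketch: reframing "predict exact demand" (regression)
# as "will demand exceed the stock we can cover?" (classification).
# The threshold and the weekly volumes are made-up illustrative values.

STOCK_ON_HAND = 500  # assumed units available in a given week

historical_demand = [420, 610, 480, 550, 390]  # illustrative weekly orders

# Regression framing would try to learn each exact number (hard).
# Classification framing learns only the decision-relevant signal:
binary_target = [int(units > STOCK_ON_HAND) for units in historical_demand]

print(binary_target)  # 1 where demand would have exceeded stock
```

A classifier trained on `binary_target` no longer wastes capacity distinguishing 390 from 480 units; both map to the same business decision.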
If you are interested in other examples of reframing forecasting problems then I recommend you watch Charles Prosper’s excellent talk Using reinforcement learning to solve business problems on the AWS Summit Online website.
All machine learning models need to produce an output, and that output is necessarily produced at a particular point in time. The absolute time, say Thursday afternoon at 3pm, may not matter; what matters is the timing relative to the event being predicted.
For example, there is a big difference between predicting that a customer is going to churn one hour before they close their account versus predicting that churn a month before it happens. Another important factor is the action that needs to be taken. Different actions take different amounts of time to put in place and to take effect. Sending an SMS with an immediate offer to a dissatisfied customer is quick to implement and quick to take effect. Conversely, if the required action is sending staff out to visit the customer to solve their problem, that requires more time.
This may seem like an implementation detail that is separate from the task of building the model to predict the churn — but in many cases it is not. Why? The model depends on data, and data only becomes available at key points in time. If the churn model depends on information about a customer using the business's services in a reduced fashion, then, however accurate it is, it may be unable to produce a prediction early enough to be acted upon.
For all of these reasons, it’s important to carefully consider the model’s intended actions. Think about the amount of time needed for them to be performed, then ensure that the timing component of the design is sufficient to allow those actions to take effect. Communicate clearly with the data science team to ensure they only use data that will be available far enough in advance of the event being predicted to allow for the intended actions. The data science training and testing procedures should reflect this timing so that any performance metrics used to evaluate a model will provide an accurate reflection of how the model would perform in situ.
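One way to enforce this in practice is to filter candidate features by when their data becomes available relative to the event. The lead time, feature names, and dates below are illustrative assumptions, not a real schema.

```python
# Hypothetical sketch: keep only features whose data is available at
# least LEAD_TIME before the event being predicted. All names and
# dates are invented for illustration.
from datetime import date, timedelta

LEAD_TIME = timedelta(days=30)  # assumed: the retention action needs ~a month

def usable_features(availability, event_date):
    """Return the features whose data exists early enough to act on."""
    cutoff = event_date - LEAD_TIME
    return {name: available for name, available in availability.items()
            if available <= cutoff}

churn_date = date(2024, 6, 30)
availability = {
    "signup_info":     date(2024, 1, 15),  # known long in advance
    "usage_drop_flag": date(2024, 6, 25),  # only appears days before churn
}
print(usable_features(availability, churn_date))  # drops usage_drop_flag
```

A feature like `usage_drop_flag` might be highly predictive, yet useless here: by the time it exists, the window for action has closed. Applying the same cutoff during training keeps offline metrics honest.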
In most practical applications of machine learning, a historical signal needs to be decided on that will inform the models being built. In supervised machine learning, that signal is often called a target variable.
In some cases, defining that target variable can be simple, whereas in others it can be deceptively difficult. To illustrate a common source of difficulty, let's again look at an example.
Example: Defining a Target Variable that doesn’t reinforce the status quo
A large supermarket chain asks the data science team to develop an application that recommends products for loyalty members to buy. The initial temptation might be to file this project as a standard product recommender. There may be a team member with direct experience with this, which means the team can get started immediately. However, further digging might reveal that the executive pushing the project actually wants those recommendations to encourage healthy eating. Now the problem moves away from a simple recommender and becomes one of identifying products that loyalty members are both likely to purchase and that represent a healthy eating option.
In this instance, customers’ historical buying patterns could have been used to determine the signal for a recommender, but that would not have driven the right outcome. It would have simply encouraged them to keep buying what they previously bought, or products that people with similar profiles bought. Simply put, it would reinforce the status quo.
The most difficult part of defining this particular target is establishing how to determine a healthy eating option that is also something someone would potentially buy. Is there a need to manually classify some of the shopping baskets as healthy vs. unhealthy? Is it possible to extract that information from product descriptions or ingredients? Is it worthwhile to look at customers’ historical baskets and classify which contents are healthy, allowing for recommendations from their own previous healthy behaviour?
There are many ways to solve this problem, but the key takeaway is that historical data cannot blindly be used as the training set, because the outcome the business is looking to drive is different from the historical behaviour.
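The third option raised above, recommending from a customer's own past healthy purchases, can be sketched as a simple filter over basket history. The healthy-item labels and basket contents are invented for illustration; in practice the labels might come from manual classification or product metadata, as discussed above.

```python
# Hypothetical sketch: build a training signal from customers' own
# historical baskets, keeping only items already judged healthy.
# The HEALTHY set and the basket contents are illustrative assumptions.

HEALTHY = {"oats", "apples", "spinach"}  # e.g. from manual classification

basket_history = ["oats", "cola", "apples", "crisps", "oats"]

# Training on the raw history would reinforce the status quo (cola,
# crisps included). Filtering keeps only the behaviour worth reinforcing:
target_items = [item for item in basket_history if item in HEALTHY]

print(target_items)  # past purchases that were also healthy options
```

A recommender trained on `target_items` still reflects what this customer genuinely buys, but the signal now points toward the outcome the business wants rather than simply replaying the past.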
Although this is a trivial example, it is indicative of a commonly occurring problem: relying on historical data does not always lead to the outcome a business is driving towards. A best practice for machine learning project management is a rigorous target variable review session. In this session, all stakeholders can sit down together and ascertain whether the proposed target variable, and the data that encapsulates it, will actually result in a model that drives the outcome they desire.
Armed with the three core objectives of framing, timing, and target, businesses can ensure that their machine learning projects align with their goals. These topics can be used as the baseline when reviewing projects and when presenting them to the rest of the business. While they do not guarantee success, quite often defining these objectives means that even a simple model will deliver business results.