Model-assisted labelling - For better or for worse?

For many AI projects, collecting data is without a doubt the most expensive part. Labelling data such as images and text is hard and tedious work that does not scale well. If an AI project requires continuously updated or fresh data, the labelling cost can be high enough to challenge the whole business case of an otherwise great project.

There are a few strategies, though, for lowering the cost of labelling data. I have previously written about Active Learning: a data collection strategy that prioritizes labelling the data the model is least confident about first. It is a great strategy, but in most cases you still need to label a lot of data.

To speed up the labelling process, the strategy of model-assisted labelling has emerged. The idea is simple: you train an AI in parallel with the labelling work, and as the AI starts to see patterns in the data, it suggests labels to the labeller. That way, the labeller can in many cases simply approve the suggested label.
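To make the idea concrete, here is a minimal Python sketch of such a loop. Everything in it is a hypothetical placeholder: model, get_unlabelled_batch and ask_labeller stand in for your own model, data source and labelling tool.

```python
labelled = []

# A minimal model-assisted labelling loop (all names are placeholders).
while batch := get_unlabelled_batch():
    for item in batch:
        suggestion = model.predict(item)        # the model proposes a label
        label = ask_labeller(item, suggestion)  # labeller approves or corrects it
        labelled.append((item, label))
    model.fit(labelled)  # retrain between batches so suggestions keep improving
```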

Model-assisted labelling can be done either by training a model solely for the purpose of labelling, or by putting the actual production model in the labelling loop and letting it suggest the labels.

But is model-assisted labelling just a sure way to get data labelled quicker, or are there downsides to the strategy? I have worked intensively with model-assisted labelling, and I know for sure that there are both pros and cons. If you’re not careful, you can end up doing more harm than good with this strategy; if you manage it correctly, it can work wonders and save you a ton of resources.

So let’s have a look at the pros and cons.

The Pros

The first and foremost advantage is that it is faster for the person doing the labelling to work with pre-labelled data. Approving a label with a single click in most cases, and only having to select a label manually once in a while, is just way faster. Especially when working with large documents or models with many potential labels, the speed-up can be significant.

Another really useful benefit of model-assisted labelling is that you get an idea of the model’s weak points very early on. You gain a hands-on understanding of which instances are difficult for the model and which it usually mislabels. This reflects the results you should expect in production, so you get the chance to improve or work around these weak points early. Weak points also often point to a lack of data volume or quality in those areas, so they provide insight into what kind of data you should go and label more of.
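One simple way to surface those weak points is to count how often the labeller corrects each suggested label. A hypothetical sketch, assuming records is a list of (suggested_label, final_label) pairs collected during labelling:

```python
from collections import Counter

suggested_total = Counter()
suggested_wrong = Counter()

for suggested, final in records:
    suggested_total[suggested] += 1
    if suggested != final:
        suggested_wrong[suggested] += 1

# Labels whose suggestions are corrected most often are the weak points,
# and good candidates for collecting and labelling more data.
for label, total in suggested_total.items():
    print(f"{label}: {suggested_wrong[label] / total:.0%} of suggestions corrected")
```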

The Cons

Now for the cons, and as I mentioned, they can be pretty bad. The biggest issue with model-assisted labelling is that you run the risk of lowering the quality of your data. So even though you get more data labelled faster, if that data is of lower quality you can end up with a model performing worse than it would have had you not used model-assisted labelling.

So how can model-assisted labelling lower the data quality? It’s actually very simple: humans tend to prefer defaults. The second you slip into autopilot, you become more likely to choose the default or suggested label, and you start making mistakes. I have seen this time and time again; the biggest source of labelling mistakes tends to be accepting wrong suggestions. So you have to be very careful when suggesting labels.
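One way to be careful is to only surface a suggestion when the model is confident enough, so the labeller starts from a blank answer in the uncertain cases. A minimal sketch, where the threshold value is an assumption you would tune against your own quality target:

```python
CONFIDENCE_THRESHOLD = 0.9  # assumed value; tune it against your quality target

def maybe_suggest(probabilities):
    """Return a pre-label only when the model is confident enough.

    `probabilities` is assumed to map each label to its predicted
    probability. Below the threshold we return None, so the labeller
    starts from a blank answer instead of a tempting default.
    """
    label, p = max(probabilities.items(), key=lambda kv: kv[1])
    return label if p >= CONFIDENCE_THRESHOLD else None
```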

Another downside is when the pre-labelling quality is simply so low that it takes the labeller more time to correct the suggestions than it would have taken to start from a blank answer. So you have to be careful not to enable the pre-labelling too early.

A few tips for model-assisted labelling

I have a few tips for being more successful with model-assisted labelling.

The first tip is to set a target for data quality. You will never get 100% correct data anyway, so you will have to accept some number of wrong labels. If you can set a target that is acceptable to train the model from, you can monitor whether the model-assisted labelling is beginning to do more harm than good. That also works great for aligning expectations across your team in general.
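Monitoring against such a target can be as simple as re-reviewing a random sample of finished labels. A hypothetical sketch, assuming a reviewer function that returns a trusted second opinion for an item:

```python
import random

QUALITY_TARGET = 0.95  # assumed target: at least 95% of labels correct

def audit(labelled_items, reviewer, sample_size=100):
    """Re-review a random sample of (item, label) pairs and compare the
    measured quality against the agreed target."""
    sample = random.sample(labelled_items, min(sample_size, len(labelled_items)))
    correct = sum(reviewer(item) == label for item, label in sample)
    quality = correct / len(sample)
    if quality < QUALITY_TARGET:
        print(f"Quality {quality:.0%} is below target {QUALITY_TARGET:.0%} - "
              "model-assisted labelling may be doing more harm than good")
    return quality
```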

I’d also suggest labelling some samples without pre-labelling, to measure whether there’s a difference between the results you get with and without it. You simply do this by turning off the assist model for, for example, one out of every ten cases. It’s easy and will tell you a lot.
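A simple way to implement the holdout is to assign items to the no-assist group deterministically, so the split is reproducible. A sketch under assumed names and rates:

```python
import random

def with_prelabel(item_id, holdout_rate=0.1, seed=42):
    """Deterministically send roughly one in ten items to a no-assist
    control group, so labels with and without suggestions can be compared."""
    rng = random.Random(f"{seed}-{item_id}")
    return rng.random() >= holdout_rate  # False => label without suggestions
```

Comparing correction rates or audited quality between the two groups then shows you whether the pre-labelling is helping or hurting.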

Lastly, I will suggest one of my favorites: probabilistic programming models are very beneficial for model-assisted labelling. Probabilistic models are Bayesian, and as a result they express uncertainty as distributions instead of scalars (single numbers), which makes it much easier to know whether a pre-label is likely to be correct or not.
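One way to use that uncertainty is to gate suggestions on the entropy of the model’s posterior predictive distribution rather than on a single score. A minimal sketch; the distribution and the cut-off value here are assumptions for illustration:

```python
import math

def predictive_entropy(probabilities):
    """Entropy of a predictive distribution over labels; low entropy means
    the model is genuinely confident, not just outputting one high score."""
    return -sum(p * math.log(p) for p in probabilities if p > 0)

# Assumed example: a posterior predictive averaged over samples from a
# Bayesian model (e.g. MCMC draws or an ensemble), not a point estimate.
posterior_predictive = [0.93, 0.04, 0.03]

MAX_ENTROPY = 0.4  # assumed cut-off; calibrate it against audited labels
if predictive_entropy(posterior_predictive) <= MAX_ENTROPY:
    print("Show the pre-label")            # uncertainty is low enough
else:
    print("Let the labeller start blank")  # too uncertain to suggest
```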
