EditShare - Enhancing EditShare’s FLOW media asset management with Automated AI
Mobius Labs’ Few-shot Learning feature uses very little input data during the training phase.
Object detection is a core task in machine learning, and most AI applications handle it fairly well. Nevertheless, the majority of these models still require huge visual datasets in order to detect objects accurately and efficiently.
Mobius Labs’ Few-shot Learning feature stands out from the rest. This feature detects objects as well as new concepts with considerably less input data in the training phase. The underlying idea behind this technology is to use an extensive variety of visual concepts to pre-train the model.
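To make this concrete, the sketch below shows one common way few-shot recognition can be built on top of a broadly pre-trained model: embed a handful of reference images with a fixed backbone, average them into a concept prototype, and score new images by similarity to that prototype. This is an illustrative approximation, not Mobius Labs’ actual implementation, and the file names are placeholders.

```python
# Illustrative prototype-based few-shot learning (not Mobius Labs' code).
# A backbone pre-trained on a wide variety of visual concepts is reused
# as a fixed feature extractor; a new concept is "trained" from only a
# few reference images.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # strip the classification head
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    """Return an L2-normalised embedding for a single image file."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        v = backbone(x).squeeze(0)
    return v / v.norm()

# "Training" the new concept: average the embeddings of a few references.
references = ["home_office_1.jpg", "home_office_2.jpg", "home_office_3.jpg"]  # placeholder files
prototype = torch.stack([embed(p) for p in references]).mean(dim=0)

# Scoring an unseen image: similarity to the concept prototype.
score = float(embed("candidate.jpg") @ prototype)  # placeholder file name
print(f"Confidence that candidate.jpg matches the concept: {score:.2f}")
```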
Few-shot Learning is particularly useful when training models to recognize new custom concepts that are relevant to specific use-cases for different industries.
Relevant visuals are indispensable for press and broadcasting agencies that have to deliver quality content on the go. The information they relay to their audience changes every day, so the technology that assists them has to learn quickly and adapt easily.

With Few-shot Learning, creating tags to identify relevant images is quick and cost-effective. The feature allows press agencies to create new concepts that are trending in the media and carry significant socio-political weight using only a few reference images. One instance is training the machine learning model to identify all images related to the COVID pandemic: people wearing masks, empty shops, deserted streets and overcrowded hospitals.
What makes the feature particularly advantageous is that it allows even abstract concepts and ideas to be trained. In light of recent social and political events, custom concepts like ‘Home Office’ and ‘Democracy in crisis’ were successfully trained using small datasets.
Companies that work with brands and marketing agencies need to stay constantly up to date with the ever-changing brand identities of their clients. Each brand has its own unique taxonomy and therefore requires the ability to label data according to its specific visual language and style.
Few-shot Learning enables these platforms to train new models that recognize new objects, styles and abstract concepts in a matter of seconds. This could apply to new products or updated brand logos, enabling marketing departments and agencies to efficiently discover brand-friendly visuals or help brands find the most suitable influencers for their businesses. The feature is equally useful for identifying content with high commercial potential based on a few example images that perform well on these platforms.
With hundreds of images and videos being captured on mobile devices every day, tagging and managing these visual archives has become essential. Computer vision solutions are no longer limited to desktop computers: Superhuman Vision™ can be deployed easily on edge devices such as mobile phones and laptops.
Few-shot Learning can continuously improve mobile applications by adding new and refined custom tags. It allows devices to train specific models for their markets with just a few reference images. Devices can also learn from individual users, and as a result can prioritize content based on personalized recommendations.
Computer vision can help the space sector by analysing and tagging satellite images to detect everything from cars and buildings to varying cloud cover in the atmosphere. However, this is one of the industries where accurate, labelled data is scarce and expensive, so concepts need to be trained as quickly and simply as possible.
Even with satellite images, Few-shot Learning enables custom training of new concepts on the machine learning models. Although these satellite images contain very different visuals, the feature still allows concepts to be trained successfully.
You don’t need to know a thing about the complex algorithms to be able to do this!
To train any new concept, the machine learning model needs a few examples as reference. The Few-shot Learning feature ensures that training succeeds even when only a small number of reference images is available.
Let’s say you want to train the concept of Home Office. All you need is to follow three simple steps.
Once your upload is complete, you should see a processing window. The processing time varies depending on the number of images being trained on.
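For readers who want to picture this as a scripted workflow rather than a UI, the sketch below shows a generic upload-and-poll client. The base URL, endpoints, field names and token are placeholders invented for illustration; they are not the actual Mobius Labs or EditShare API.

```python
# Hypothetical upload-and-poll flow; every endpoint and field here is a
# placeholder, not a real Mobius Labs API.
import time
import requests

API = "https://api.example.com/v1"              # placeholder base URL
HEADERS = {"Authorization": "Bearer <token>"}   # placeholder credentials

# Upload a handful of reference images for the new concept.
files = [("images", open(p, "rb")) for p in ("ref1.jpg", "ref2.jpg", "ref3.jpg")]
job = requests.post(f"{API}/concepts", headers=HEADERS,
                    data={"name": "Home Office"}, files=files).json()

# Poll until processing finishes; the wait scales with the number of images.
while True:
    status = requests.get(f"{API}/concepts/{job['id']}", headers=HEADERS).json()
    if status["state"] in ("ready", "failed"):
        break
    time.sleep(5)
print("Processing finished with state:", status["state"])
```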
You’ll notice that each image in the validation set (i.e. the learnings) has a number associated with it, and the images are arranged in descending order based on these numbers. The number is the confidence score, indicating how relevant the custom concept is to that image.
In this scenario, the algorithm ranks the images by their confidence for the concept of “Home Office”: images high up in the order correlate strongly with Home Office, while images further down bear little to no relation to it.
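As a rough illustration of that ranking, assume each validation image carries a confidence score between 0 and 1; sorting by that score in descending order reproduces the arrangement described above. The image names and scores below are made up.

```python
# Made-up validation results for the concept "Home Office".
validation_set = [
    {"image": "living_room.jpg",      "score": 0.41},
    {"image": "desk_with_laptop.jpg", "score": 0.93},
    {"image": "beach.jpg",            "score": 0.07},
]

# Arrange in descending order of confidence: most relevant images first.
for item in sorted(validation_set, key=lambda x: x["score"], reverse=True):
    print(f'{item["score"]:.2f}  {item["image"]}')
```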
Invariably, there will be instances where these learnings are not the best. The validation step is there to improve upon what the algorithm has learned so far. This can be done simply by up-voting results that are correct and down-voting those that aren’t satisfactory.
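One plausible way such feedback could be folded back into the concept, continuing the hedged prototype sketch from earlier, is to nudge the concept embedding toward up-voted images and away from down-voted ones. This is an assumption made for illustration, not a description of Mobius Labs’ actual algorithm.

```python
# Hypothetical refinement of a concept prototype from user feedback;
# this continues the illustrative prototype sketch above and is not
# Mobius Labs' actual algorithm.
import torch

def refine(prototype: torch.Tensor,
           upvoted: list[torch.Tensor],
           downvoted: list[torch.Tensor],
           lr: float = 0.1) -> torch.Tensor:
    """Nudge the prototype toward up-voted embeddings and away from down-voted ones."""
    if upvoted:
        prototype = prototype + lr * (torch.stack(upvoted).mean(dim=0) - prototype)
    if downvoted:
        prototype = prototype - lr * torch.stack(downvoted).mean(dim=0)
    return prototype / prototype.norm()
```

In this sketch, re-scoring the images against the updated prototype would reorder the validation set, mirroring how the learned concept improves with each round of feedback.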