Enhancing EditShare's Flow Media Asset Management with Automated AI
As anyone familiar with modern Media Asset Management (MAM) or Digital Asset Management (DAM) systems will tell you, your image and video archives are only as valuable as your ability to search and retrieve specific content in a cost-effective, timely way that is appropriate for your business.
If you have a lot of valuable visual content, whether in an archive or a live stream, you need to extract metadata to index, organize and search that content effectively. "Tagging", "labelling" and "annotating" are generally used interchangeably; all refer to the act of adding metadata to unstructured collections of images or video at the point of ingest.
The ingest process is both time-consuming and, from an indexing perspective, highly critical. Choosing appropriate tags and applying them consistently are two extremely important considerations if content is to retain its future value, whether you are in the business of monetizing visual assets or managing your own as part of your daily workflow. In short, getting the indexing wrong at the outset can carry a very large cost.
The dream of being able to re-index your content days, weeks or even years after the original ingest with a set of entirely new, more current tags seemed pretty far-fetched, even in the not-too-distant past. But now, thanks to AI metadata and Visual DNA, this is set to become the reality for an ever-increasing number of visual archives.
Traditionally, if you wanted to add new tags to your taxonomy once you had ingested your original content, the standard (and only) option was to re-ingest all of that content, assigning new, updated tags as you went along.
With media libraries typically running to hundreds of thousands or even millions of hours, this made cloud vendors happier and richer, while the time and cost involved effectively created a massive barrier to extracting the full value of your content.
Today, innovative AI metadata solutions and Visual DNA mean that this is a thing of the past.
How is this possible? It can be a difficult concept to grasp, but the "secret" is to capture the essence of the video during the initial ingest process. Here at Mobius Labs we call this the "Visual DNA" of the content.
When Visual DNA is combined with advanced AI, it is possible to effectively turn the clock back and re-index the ingested content in seconds based on any future tags that you choose to create.
So we ingest once to decode frames and extract the Visual DNA, the essence of the video, which then serves as the basis for any future re-indexing, as sketched below.
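To make that concrete, here is a minimal sketch of the ingest step, assuming a CLIP-style joint image-text encoder. The open clip-ViT-B-32 model via sentence-transformers stands in for the real thing; Mobius Labs' actual Visual DNA representation is not public, and the helper name and archive file are illustrative only.

```python
# A minimal, illustrative sketch of "ingest once": decode frames and
# store their embeddings -- the "Visual DNA" -- so the raw video never
# has to be decoded again. The model, helper name and file path are
# assumptions, not Mobius Labs' actual implementation.
import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")  # stand-in CLIP-style encoder

def ingest(frame_paths: list[str]) -> np.ndarray:
    """Embed each decoded frame once and archive the result."""
    frames = [Image.open(p) for p in frame_paths]
    dna = model.encode(frames, normalize_embeddings=True)
    np.save("visual_dna.npy", dna)  # hypothetical archive location
    return dna
```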
This is true future-proofing in action, allowing you to turn back time and re-index with an entirely new set of tags at any future point, without having to go back through the costly and time-consuming ingest process again from scratch (see the sketch below). So, no more sleepless nights agonising over whether you got your taxonomy right at the outset!
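Under the same assumptions, re-indexing against a brand-new taxonomy touches only the stored embeddings: the new tags are encoded as text and matched by cosine similarity, with no video decoding involved. The similarity threshold below is an arbitrary illustrative value, not a recommended setting.

```python
# Re-index already-ingested content against tags that did not exist at
# ingest time. Only the stored Visual DNA is read; the threshold and
# tag phrasing are assumptions for illustration.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")  # same stand-in encoder

def reindex(dna: np.ndarray, new_tags: list[str], threshold: float = 0.25) -> dict:
    """Assign any future tag set to previously ingested frames in one pass."""
    tag_vecs = model.encode(new_tags, normalize_embeddings=True)
    scores = dna @ tag_vecs.T  # cosine similarity: both sides are unit-normalized
    return {
        frame_idx: [tag for tag, s in zip(new_tags, row) if s >= threshold]
        for frame_idx, row in enumerate(scores)
    }

# Example: tags invented years after the original ingest.
# index = reindex(np.load("visual_dna.npy"), ["drone shot", "press conference"])
```

Because the heavy decode-and-embed work happened once at ingest, this later pass is essentially a single matrix multiplication, which is why re-indexing even a large library can run in seconds rather than days.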
To discover other ways in which you can future-proof your media library, why not download our latest guide “5 Essential Steps to Future-Proof your Visual Assets with AI Metadata”.