Select specific objects in images with Meta's new AI model SAM

Meta, the owner of Facebook, has released an artificial intelligence model that can identify individual objects in an image. In a blog post, the company’s research division explained that its Segment Anything Model (SAM) can recognize objects in images and videos even when it has not been trained on those items. The company also released a dataset of image annotations, which it claims is the largest of its kind.

SAM allows users to select objects by clicking on them or by writing text prompts.
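Meta publishes SAM’s code in its open-source segment-anything package. As an illustration, the sketch below shows how a single click prompt might be passed to the model — it assumes the segment-anything package is installed and a model checkpoint (here the hypothetical filename `sam_vit_h_4b8939.pth`) has been downloaded, and the pixel coordinates are placeholder values.

```python
import numpy as np

# A click prompt is a pixel coordinate plus a label:
# label 1 marks the clicked point as foreground, 0 as background.
point_coords = np.array([[500, 375]])  # one click at (x=500, y=375)
point_labels = np.array([1])           # foreground

def segment_with_click(image, coords, labels,
                       checkpoint="sam_vit_h_4b8939.pth"):
    """Sketch: segment the object under a click prompt with SAM.

    Requires the segment-anything package and a downloaded
    checkpoint; `image` is an HxWx3 uint8 RGB numpy array.
    """
    from segment_anything import sam_model_registry, SamPredictor

    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(image)  # embed the image once, then prompt it
    masks, scores, _ = predictor.predict(
        point_coords=coords,
        point_labels=labels,
        multimask_output=True,  # SAM proposes several candidate masks
    )
    return masks[scores.argmax()]  # keep the highest-scoring mask
```

Because the image is embedded once and prompts are cheap to evaluate, the same `predictor` can answer many click prompts interactively without rerunning the heavy encoder.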

“SAM has learned a general notion of what objects are. It can generate masks for any object in any image or video, including objects and image types not encountered during training. SAM is general enough to cover a broad set of use cases and can be used out of the box on new image “domains” – whether underwater photos or cell microscopy – without requiring additional training (a capability often referred to as zero-shot transfer),” Meta wrote.

Meta internally uses technology similar to SAM for various activities, such as tagging photos, moderating prohibited content, and determining which posts to recommend to Facebook and Instagram users.

What the future holds for Meta AI model SAM

According to the company, the release of SAM will expand access to this type of technology. The SAM model and dataset are available for download under a non-commercial license. Additionally, users uploading their images to an accompanying prototype must agree to use it solely for research purposes.

“In the future, SAM could help power applications in numerous domains that require finding and segmenting any object in any image. For the AI research community and others, SAM could become a component in larger AI systems for more general multimodal understanding of the world, for example, understanding both the visual and text content of a webpage,” Meta said.

“In the AR/VR domain, SAM could enable selecting an object based on a user’s gaze and then “lifting” it into 3D. SAM can improve creative applications for content creators, such as extracting image regions for collages or video editing. SAM could also aid scientific study of natural occurrences on Earth or space, for example, by localizing animals or objects to study and track in the video. The possibilities are broad, and we are excited by the many potential use cases we haven’t even imagined.”

Will Meta release its version of ChatGPT?

Meta has hinted at several features that use generative AI, the technology popularized by ChatGPT. This type of AI creates fresh content rather than merely identifying or categorizing data as other AI systems do.

Although the company has yet to launch a product, it has showcased tools that generate surreal videos from text prompts and children’s book illustrations from prose.

Chief Executive Mark Zuckerberg has prioritized the integration of these “creative aids” that use generative AI into Meta’s apps this year.


Team Eela

