Google today announced the launch of a new multisearch feature aimed at helping with complicated searches. While searching is usually straightforward, that isn't always the case, especially when you're trying to find something like a piece of clothing in a different color or new furniture that matches what you already have at home. With multisearch, Google wants to use AI to make those searches easier (or at least more useful).
Multisearch is a new feature of Google Lens, which is itself found in the Google app on Android and iOS. It uses text and images together to help users make their queries more specific and narrow down searches that would be too vague with text or images alone. It sounds like it should offer solid functionality, though as an AI-based feature it's likely one that will get better over time.
How to use multisearch with Google Lens
To use multisearch, you first need a picture of what you want to search for. This can be a screenshot or a photo, according to a new post on Google's Keyword blog. After you select the image you want to use, swipe up and tap "+ Add to your search." From there, you can add text to refine your search. Google says multisearch lets users learn more about items they encounter out in the wild or refine searches "by color, brand, or a visual attribute." Some of the examples Google gave make an interesting case for the feature. One scenario involves taking a picture of an orange dress and using multisearch to find the same dress in green; another uses a photo of a plant to look up care instructions for it – we imagine multisearch could also be useful for identifying the plant in the first place.
While multisearch could be a useful feature, its functionality may be rather limited at first. For now, it's only rolling out in beta in the United States with support for English only, which means most of the world won't be able to use it yet. Google also says multisearch works best for shopping-related searches, though with AI behind it, it may not be long before the feature is up and running for all types of searches. The company is exploring how its Multitask Unified Model – which allows users to make very specific searches through combinations of images and questions – could work with multisearch in the future, so stay tuned for further details on that front.