
The Amazon app on your phone just got a cool AI feature

Rufus AI chatbot in Amazon app.
Amazon

Last year, Amazon CEO Andy Jassy said that every business division at the company was experimenting with AI. Today, Amazon has announced its most ambitious AI product yet: a chatbot named Rufus to assist with your online shopping.

Imagine ChatGPT, but with knowledge of every detail of every product in Amazon’s vast catalog. It is also connected to the web, which means it can pull information from the internet to answer your questions. For example, if you plan to buy a microSD card, Rufus can tell you which speed class best suits your photography needs.

Amazon says you can type all your questions in the search box, and Rufus will handle the rest. The generative AI chatbot is trained on “product catalog, customer reviews, community Q&As, and information from across the web.”

In a nutshell, Amazon wants to remove the hassle of researching products on the web before you make up your mind and then arrive on Amazon to put an item in your cart. Another benefit of Rufus is that instead of combing through a product page for a certain tiny detail, you can ask the question directly and get an appropriate response.

An AI nudge toward informed shopping

Amazon app’s Rufus AI.
Amazon

Amazon says Rufus can answer generic queries such as “What to look for before buying a pair of running shoes,” or you can simply tell it, “I need to deck out my workstation,” and it will automatically recommend the relevant products. Essentially, it’s a web-crawling recommendation machine that will also answer your questions, product-specific or otherwise.

“Customers can expand the chat dialog box to see answers to their questions, tap on suggested questions, and ask follow-up questions in the chat dialog box,” says the company’s official blog post.

For queries such as “Is this phone case reliable?” the AI bot will summarize an answer based on product reviews, Q&As, and information on the product page. At the end of the day, it’s all about making informed purchasing decisions with some help from an AI chatbot.

Rufus AI answering Amazon product questions.
Amazon

Rufus is currently limited to a small selection of Amazon mobile app users in the U.S. as part of a beta test. However, this is an early version of the product, and Amazon also warns that Rufus “won’t always get it exactly right.” In the coming weeks, the AI chatbot will be made available to a broader set of users in its home market.

Rufus seems to be one of the more thoughtful and practical implementations of generative AI I’ve seen recently, far removed from the hype machinery built around the tech with hidden caveats. Plus, it appears to be free, with no Prime membership required.

Nadeem Sarwar