Samsung Electronics on Wednesday unveiled its newest Galaxy S25 smartphones, powered by Qualcomm's chips and Google's artificial-intelligence model, hoping its upgraded AI features can reinvigorate sales and fend off Apple and Chinese rivals.
Samsung on Wednesday announced the latest additions to its Galaxy S flagship smartphone line at its annual Unpacked 2025 event. As suspected, Samsung's partnership with Google imbues the new Galaxy S25 phones with powerful AI and deep integration across its ecosystem. That's a major win.
Samsung's Unpacked conference is here, and we've got our hands on all the announcements the company is set to reveal today. And Samsung seems extra excited about this year's lineup: the company even announced Unpacked details during its CES press conference.
Samsung's updated Galaxy AI features launching alongside the Galaxy S25 series look to integrate deeply into your daily routines.
With these Galaxy AI updates, the S25 series could be the most helpful virtual assistant you've ever had in a phone.
At a Samsung event in London last week, executives demoed the company’s new take on AI assistants. The headline news is that Gemini has effectively replaced Bixby as the default Galaxy phone AI, even kicking the latter out of the quick-launch home button.
Samsung launched the Galaxy S25 series in India, competing with Apple’s iPhone 16 series. Here is a comparison of the two flagships.
Samsung is injecting another dose of artificial intelligence into its next lineup of Galaxy smartphones. Most of the hardware on the Galaxy S25 is the same as last year’s model, except for a faster chip and a more powerful ultra-wide lens on its improved camera.
Samsung’s Android phones usually feature just enough differences ... Google introduced last January is also getting new features, in the form of expanded “AI Overview” search results (which you should still double-check by following source links ...
The company says CUA’s reasoning technique, which it calls an “inner monologue,” helps the model understand intermediate steps and adapt to unexpected input. Under the hood, CUA takes screenshots of web pages and uses a virtual mouse and keyboard to navigate.
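For readers curious what that loop might look like in practice, here is a minimal sketch of a screenshot-driven agent: reason (the “inner monologue”), look at the screen, act through a virtual mouse and keyboard, repeat. Every name in it (capture_screenshot, plan_next_action, Action, AgentState) is a placeholder invented for illustration, not OpenAI's actual API.

```python
# Illustrative sketch of a screenshot-driven agent loop; all names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Action:
    kind: str                 # "click", "type", or "done"
    x: int = 0
    y: int = 0
    text: str = ""


@dataclass
class AgentState:
    goal: str
    monologue: list[str] = field(default_factory=list)  # intermediate reasoning steps


def capture_screenshot() -> bytes:
    # Placeholder: a real agent would grab the rendered page's pixels here.
    return b""


def plan_next_action(goal: str, screenshot: bytes, history: list[str]) -> tuple[str, Action]:
    # Placeholder: a real agent would send the screenshot and history to a model
    # and parse its reply into a thought plus a concrete action.
    return "nothing left to do", Action(kind="done")


def run_agent(goal: str, max_steps: int = 20) -> AgentState:
    state = AgentState(goal=goal)
    for _ in range(max_steps):
        shot = capture_screenshot()
        thought, action = plan_next_action(state.goal, shot, state.monologue)
        state.monologue.append(thought)  # the "inner monologue" carries context forward
        if action.kind == "click":
            pass  # placeholder: move the virtual mouse to (action.x, action.y) and click
        elif action.kind == "type":
            pass  # placeholder: send action.text through the virtual keyboard
        elif action.kind == "done":
            break
    return state


if __name__ == "__main__":
    print(run_agent("find today's weather").monologue)
```

The point of the sketch is the shape of the loop, not the specifics: each pass perceives the page as pixels, appends a reasoning step, and emits a single low-level input event, which is consistent with how the article describes CUA operating.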