Tech News Hub
DuckDuckGo Adds Option to Hide AI-Generated Images in Search Results
Privacy-focused search engine DuckDuckGo has introduced a new setting that allows users to filter out AI-generated images from their search results. The update responds to growing concerns from users who say AI content often makes it harder to find real and relevant visuals.
What’s New?
The new filter gives users more control over the images they see:
- After performing a search, go to the Images tab.
- A new drop-down menu labeled “AI images” will appear.
- From there, you can choose to either show or hide AI-generated content.
- Alternatively, the filter can be enabled in the search settings under “Hide AI-Generated Images.”
With this feature, users can minimize the clutter caused by AI-generated visuals and focus on authentic, real-world images.
Why It Matters
According to DuckDuckGo, many users were frustrated by the increasing number of AI images in their search results. This change is aimed at improving the overall experience by allowing people to filter out what they don’t want to see.
The company notes that the filter uses manually curated open-source blocklists from tools like uBlock Origin and the Huge AI Blocklist. While it may not catch every AI image, it significantly reduces their appearance in search results.
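For readers curious what that means in practice, here is a minimal Python sketch of blocklist-based filtering, assuming a set of flagged domains and a simple result format; the domain names and functions are illustrative, not DuckDuckGo’s actual code.

```python
from urllib.parse import urlparse

# Hypothetical blocklist entries for illustration only; the real curated
# lists (uBlock Origin's and the Huge AI Blocklist) are far larger.
AI_IMAGE_BLOCKLIST = {
    "example-ai-art.com",
    "generated-images.example.net",
}

def is_blocked(image_url: str) -> bool:
    """Return True if the image's host, or any parent domain, is blocklisted."""
    host = urlparse(image_url).hostname or ""
    parts = host.split(".")
    # Check the full host and every parent domain, so a.b.example.com
    # also matches a blocklist entry for example.com.
    return any(".".join(parts[i:]) in AI_IMAGE_BLOCKLIST for i in range(len(parts)))

def filter_results(results: list[dict], hide_ai: bool) -> list[dict]:
    """Drop results whose source domain is blocklisted when the filter is on."""
    if not hide_ai:
        return results
    return [r for r in results if not is_blocked(r["url"])]
```

This also hints at the filter’s stated limitation: AI images hosted on domains absent from the list will still get through.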
A Step Toward Cleaner Search
This update comes at a time when the internet is flooded with low-quality, AI-generated visuals. Often referred to as “AI slop,” this type of content can get in the way when users are looking for accurate or natural imagery.
DuckDuckGo even cited a real-world example: a search for “baby peacock” that last year drew criticism of Google for surfacing more AI-generated images than actual photos.
Looking Ahead
DuckDuckGo has said it plans to roll out more filtering options in the future, though it hasn’t provided a timeline or details yet.
Conclusion
As AI-generated content becomes more widespread, tools like this one from DuckDuckGo offer a way to bring more control and clarity back to the user experience. It's a small but important step toward keeping search results reliable and user-focused.

OpenAI Introduces ChatGPT Agent: A Smarter Assistant Designed to Work for You
OpenAI has unveiled ChatGPT Agent, a powerful new feature that takes the capabilities of its popular chatbot to the next level. This tool goes beyond simply answering questions—it’s designed to perform complex, real-world tasks for users, acting more like a personal assistant that can think, plan, and execute.
What Is ChatGPT Agent?
ChatGPT Agent is a general-purpose AI tool that can handle a range of computer-based tasks, such as:
- Navigating your calendar
- Creating editable presentations and slide decks
- Running code through a terminal
- Connecting with apps like Gmail and GitHub to retrieve relevant information
The agent can be accessed by selecting “Agent Mode” from ChatGPT’s tools menu. It is currently available to users on OpenAI’s Pro, Plus, and Team subscription plans.
Built from Previous Innovations
ChatGPT Agent combines several features from OpenAI’s earlier experimental tools:
- Operator, which could simulate user interactions like clicking and navigating through websites
- Deep Research, which pulled information from multiple sources to create detailed summaries and reports
This new tool brings those capabilities together into a more integrated and practical experience, designed to respond to natural language prompts.
Real-World Applications
OpenAI is positioning the agent to handle more complex tasks than previous models. For example, users can ask it to:
- Plan and purchase ingredients for a specific meal
- Research competitors and generate a presentation
- Automate parts of workflows that involve online tools and services
These types of requests require the agent to make decisions, synthesize information, and carry out multiple steps—something earlier AI assistants often failed to do reliably.
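As a rough illustration of what multi-step execution involves, the sketch below shows a compressed plan-act-observe loop of the kind agent systems generally run; `call_model` and the tool stubs are hypothetical placeholders, not OpenAI’s implementation.

```python
def call_model(history: list[str]) -> dict:
    """Placeholder for an LLM call that returns the next action, e.g.
    {"tool": "search", "input": "..."} or {"finish": "final answer"}."""
    raise NotImplementedError

TOOLS = {
    "search": lambda q: f"results for {q!r}",   # stub web search
    "terminal": lambda c: f"ran {c!r}",         # stub code execution
}

def run_agent(task: str, max_steps: int = 10) -> str:
    """Loop: ask the model for an action, run it, feed the result back."""
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        action = call_model(history)
        if "finish" in action:                  # the model decided it is done
            return action["finish"]
        observation = TOOLS[action["tool"]](action["input"])
        history.append(f"{action['tool']} -> {observation}")
    return "Stopped: step limit reached."
```

The hard part in real deployments is not the loop itself but keeping each step reliable as websites and tools change underneath it.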
Strong Benchmark Performance
OpenAI reports that the underlying model in ChatGPT Agent performs well on advanced benchmarks:
- Humanity’s Last Exam (a multi-subject knowledge test): It achieved a score of 41.6%, roughly double that of OpenAI’s earlier models like o3 and o4-mini
- FrontierMath (a complex math benchmark): It scored 27.4% when allowed access to tools like a terminal, significantly outperforming the previous top score of 6.3%
These improvements reflect OpenAI’s effort to build an AI that is not only smarter but also more capable of completing real-world tasks.
Focus on Safety and Responsible Use
Given the expanded capabilities, OpenAI has introduced several safeguards:
- Prompts are screened in real time to identify sensitive topics, especially in the biological and chemical domains
- For flagged topics, responses undergo additional scrutiny to prevent harmful misuse
- ChatGPT’s memory feature is disabled in this mode to reduce risks related to prompt injection and data leakage
Although OpenAI says it has no evidence that the model has been used maliciously, it is taking a precautionary approach to mitigate potential threats.
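A schematic of that screening step, with stand-in categories and a placeholder classifier rather than OpenAI’s actual pipeline, might look like this:

```python
# Illustrative pre-execution screening gate. The categories and classifier
# are stand-in assumptions, not OpenAI's actual safety pipeline.
SENSITIVE_CATEGORIES = {"biological", "chemical"}

def classify(prompt: str) -> set[str]:
    """Placeholder for a trained safety classifier that tags a prompt
    with zero or more risk categories."""
    raise NotImplementedError

def screen(prompt: str) -> str:
    """Route flagged prompts to stricter review before any tools run."""
    if classify(prompt) & SENSITIVE_CATEGORIES:
        return "escalate"
    return "allow"
```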
Still Early Days for AI Agents
Despite the impressive technical capabilities, the real test for ChatGPT Agent will be its performance in everyday use. Historically, AI agents have struggled with reliability and accuracy when asked to perform tasks that involve multiple steps and changing online environments.
However, OpenAI believes this new version is a significant step forward in building AI tools that can actually take meaningful action—handling tasks that were once too complex for earlier models.
Conclusion
ChatGPT Agent represents a major advancement in the push toward truly useful AI assistants. By combining app integration, natural language interaction, and improved task execution, OpenAI is aiming to reshape how people use AI in their daily work. While the rollout is still limited and the tool will need to prove itself over time, it marks a turning point in the evolution of ChatGPT—from a chatbot into a fully functional digital agent.

Google Supercharges Search with AI Business-Calling and Gemini 2.5 Pro Integration
Google is changing the way people interact with businesses and search online. The company has announced a wide rollout of its AI-powered business-calling feature across the United States. At the same time, it’s boosting the capabilities of Google Search’s AI Mode by introducing the Gemini 2.5 Pro model and launching a new Deep Search feature designed for advanced research.
AI That Makes the Call for You
One of the most noticeable updates is Google’s new business-calling feature. Instead of making phone calls yourself, you can now rely on AI to call local businesses and gather key information — such as availability, pricing, and appointment times.
To use it, just search for something like “pet groomers near me,” and you’ll see a new option labeled “Have AI check pricing.” After you answer a few basic questions, such as what kind of pet you have and which services you need, Google’s AI will place a call on your behalf and return the results. Each call begins by clearly stating that it’s an automated system calling from Google, ensuring transparency.
This feature was tested earlier this year through Google’s Search Labs and is now gradually rolling out to all users in the U.S. Those subscribed to Google AI Pro and AI Ultra plans will have access to higher usage limits.
A Smarter Search with Gemini 2.5 Pro
Alongside the business-calling feature, Google is enhancing AI Mode in Search with the Gemini 2.5 Pro model. This new version is built to handle more advanced tasks such as coding, complex reasoning, and math problems.
Users who subscribe to Google AI Pro or AI Ultra will be able to choose the 2.5 Pro model from a drop-down menu inside AI Mode. It’s designed to help with more technical and in-depth questions that go beyond everyday searches.
Deep Search: Research Done for You
Another major addition is Deep Search — a new capability within AI Mode that allows users to conduct serious research quickly and efficiently. Instead of manually sifting through dozens of websites, Deep Search scans hundreds of sources and compiles a detailed, fully cited report within minutes.
This is especially helpful for people making important decisions or diving deep into topics like:
- Job searches and career planning
- Buying a home or comparing mortgages
- Academic projects or hobby research
- Financial planning and analysis
Deep Search aims to save users hours of work by doing the heavy lifting in the background — a move that puts Google in direct competition with services like Perplexity AI and ChatGPT's search tools.
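As a conceptual outline, assuming placeholder functions rather than anything Google has published, a deep-research pipeline can be pictured like this:

```python
# A schematic fetch-then-summarize pipeline showing the general shape of
# "deep research" features. Every function here is a hypothetical
# placeholder; this is not Google's implementation.

def search_web(query: str, n: int) -> list[str]:
    """Placeholder: return up to n result URLs for the query."""
    raise NotImplementedError

def fetch_text(url: str) -> str:
    """Placeholder: download a page and extract its main text."""
    raise NotImplementedError

def write_report(sources: dict[str, str], question: str) -> str:
    """Placeholder: LLM call that answers the question from the sources,
    citing each claim by its source URL."""
    raise NotImplementedError

def deep_search(question: str, n_sources: int = 100) -> str:
    urls = search_web(question, n_sources)
    sources = {url: fetch_text(url) for url in urls}
    return write_report(sources, question)  # a fully cited report
```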
Ongoing Improvements in AI Mode
Google has been steadily improving its AI Mode since its launch earlier this year. In recent months, it added the ability to have back-and-forth voice conversations, as well as new shopping features that allow users to view product visuals and get smarter recommendations.
These updates show Google’s commitment to transforming Search into a more intelligent and interactive experience — one that goes far beyond the traditional list of links.
Conclusion
With these new updates, Google is turning its search engine into something more powerful and personalized. Whether you need help booking a service, solving a technical problem, or conducting in-depth research, AI is now doing more of the work for you. As the features continue rolling out in the U.S., Google is clearly positioning itself to lead the next wave of AI-driven search tools.

Google Plans to Merge Android and ChromeOS Into a Unified Platform
After a decade of speculation, Google has finally confirmed plans to merge its two major operating systems, Android and ChromeOS, into a single platform.
Sameer Samat, President of Android Ecosystem at Google, shared the news in an interview with TechRadar. “We’re going to be combining ChromeOS and Android into a single platform,” he said. The statement marks the first clear confirmation from a top Google executive, turning long-standing rumors into an active development plan.
What the Merger Means
This upcoming merger aims to streamline development across laptops, tablets, and mobile devices, particularly on form factors like tablets, where Google has traditionally lagged behind Apple. Samat also hinted at broader ambitions, noting his interest in how people use laptops today, a signal that Google might be working toward more seamless cross-device functionality.
ChromeOS has already been incorporating Android features for years. Chromebooks can run Android apps, and in 2024, Google announced that large parts of ChromeOS would begin to rely on the Android software stack. Android, too, has been evolving with new desktop-like features such as:
- Desktop mode
- Resizable windows
- Enhanced support for external displays
These features suggest a convergence is already underway in practice.
Why Now?
The shift likely comes in response to the continued dominance of Apple’s iPadOS in the tablet and hybrid computing space. Merging Android and ChromeOS would give Google a stronger foundation to compete, particularly in the growing market for powerful yet portable devices. A unified platform could speed up feature rollouts, simplify development, and offer a more consistent experience across smartphones, tablets, laptops, wearables, TVs, and even cars.
A Decade in the Making
Talks of merging the two systems date back to 2015, and even earlier articles speculated that a combined platform would be a logical move for Google. While the idea has lingered for years, the official confirmation is the clearest sign yet that Google is finally taking concrete steps toward unification.
However, it may still take time before we see a fully merged OS in consumer devices. Google has not announced a specific release date or detailed roadmap, but the groundwork is clearly being laid.
Conclusion
Google’s plan to merge Android and ChromeOS signals a major shift in its software strategy. While this unification has been years in the making, its potential impact on tablets, laptops, and multi-device ecosystems could be transformative. If successful, it may redefine how users interact with Google-powered devices—and finally give Apple some real competition in the unified device experience.

Meta Acquires Play AI to Strengthen Human-Like Voice Technology Across Its AI Platforms
Meta has acquired Play AI, a startup known for creating realistic, human-like AI voices, marking another step in the company’s expanding investment in artificial intelligence. The news was first reported by Bloomberg and confirmed by a Meta spokesperson. According to an internal memo, the entire Play AI team will officially join Meta next week.
What Play AI Brings to Meta
Play AI has developed tools that allow users to generate natural-sounding voices using artificial intelligence, along with a platform that simplifies voice creation. Meta described the acquisition as a strong fit for its goals, particularly in areas like:
- AI-powered digital characters
- Meta AI’s broader assistant ecosystem
- Voice features in wearable devices
- Audio content creation tools for developers and creators
The memo noted that Play AI’s work complements Meta’s roadmap and ongoing research into more natural and interactive AI experiences.
Integration Plans and Broader AI Ambitions
The Play AI team will be fully integrated into Meta, although specific financial terms of the acquisition were not disclosed. This move is part of a broader strategy at Meta to accelerate its AI development.
Recent efforts include:
- Recruiting top talent from OpenAI
- Signing a major deal with Scale AI
- Appointing Scale AI’s CEO, Alexandr Wang, to lead a new group focused on building superintelligent systems
These steps highlight Meta’s commitment to remaining competitive in the AI race, especially in fields where human-like interaction is a key component.
Looking Ahead
By bringing Play AI into its ecosystem, Meta is aiming to push the boundaries of how users interact with voice-enabled systems. Whether it’s for AI characters, wearable devices, or digital assistants, natural-sounding voice technology is becoming increasingly important in making AI feel more intuitive and human.
This acquisition reflects Meta’s vision for a future where voice plays a central role in how people connect with technology—seamlessly, naturally, and at scale.

Samsung Gears Up to Launch Its First Triple-Folding Smartphone by Year-End
Samsung is ready to take foldable phones to the next level. While the company has just released its latest generation of foldable smartphones, a new and more ambitious device could be on the way before the year ends—a triple-folding phone.
TM Roh, the head of Samsung’s consumer electronics division, confirmed the news in an interview with The Korea Times. “I expect we will be able to launch the tri-fold phone within this year. We are now focusing on perfecting the product and its usability,” he said. Although the name of the device hasn’t been finalized yet, he noted that the decision would be made as the product nears completion.
A separate Samsung executive also confirmed to Android Authority that the tri-folding phone is ready for production. This suggests that the company is serious about bringing the device to market in the coming months.
What to Expect
While Samsung hasn’t shared technical details, a triple-folding smartphone is expected to feature two hinges, allowing it to fold out into a much larger display. This design could offer a tablet-like experience in a compact form when folded, potentially changing how users approach multitasking, productivity, and entertainment on a mobile device.
Samsung has previously showcased prototypes of tri-folding designs at tech events, but this would be the first time one is available for consumers.
Conclusion
Samsung's upcoming triple-folding smartphone could mark a major step forward in mobile innovation. With production reportedly underway and an official launch expected later this year, the company appears ready to introduce a bold new form factor that could redefine the future of foldable devices.

Elon Musk’s xAI Launches Grok 4 and $300 SuperGrok Heavy Plan Amid Growing AI Competition
Elon Musk’s artificial intelligence company, xAI, has launched its most advanced model to date—Grok 4—along with a new premium subscription called SuperGrok Heavy, priced at $300 per month. The move positions xAI directly against major AI players such as OpenAI and Google in the race to build smarter, more capable AI systems.
A Major Step Forward for Grok
Grok 4 represents a significant evolution for xAI’s chatbot. The new model is capable of analyzing images and answering complex academic questions across a wide range of subjects. Elon Musk, during a livestream, said that Grok 4 performs beyond PhD-level in every subject, although it sometimes lacks common sense and has yet to invent new technologies or discover scientific breakthroughs.
Tightly integrated into X—Musk’s social media platform previously known as Twitter—Grok has gained widespread visibility. However, this exposure has also made the model’s occasional missteps more visible to the public, raising concerns about its reliability and safety.
Introducing SuperGrok Heavy
xAI has also introduced Grok 4 Heavy, a more advanced version of the model that uses multiple AI agents to tackle tasks collaboratively. Musk explained that these agents work independently on a problem and then compare their answers to arrive at the best possible solution—similar to a study group.
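One simple way to realize that comparison step is majority voting across independent runs; the sketch below assumes a placeholder `ask_model` and is not xAI’s actual architecture.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def ask_model(problem: str) -> str:
    """Placeholder for one agent's independent attempt at the problem
    (in practice, a separate sampled run of the model)."""
    raise NotImplementedError

def solve_with_agents(problem: str, n_agents: int = 4) -> str:
    """Run several agents in parallel, then keep the most common answer --
    one simple way to realize the 'compare answers' step Musk described."""
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        answers = list(pool.map(ask_model, [problem] * n_agents))
    return Counter(answers).most_common(1)[0][0]
```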
This enhanced version is accessible through the new SuperGrok Heavy subscription, which is the company’s most expensive plan to date. Subscribers will get early access to Grok 4 Heavy and upcoming features that xAI plans to release in the coming months.
Strong Performance on AI Benchmarks
According to xAI, Grok 4 is showing impressive results on major AI benchmarks:
- On Humanity’s Last Exam, a test that evaluates AI performance across diverse subjects, Grok 4 scored 25.4% without using external tools—outperforming Gemini 2.5 Pro (21.6%) and OpenAI’s o3 (21%).
- Grok 4 Heavy, when using tools, achieved 44.4%, surpassing Gemini’s performance with tools (26.9%).
- On the ARC-AGI-2 benchmark, which challenges AI systems to recognize patterns in visual puzzles, Grok scored 16.2%, significantly ahead of the next best model, Claude Opus 4.
These scores suggest that Grok 4 is among the leading models in the industry, at least on paper.
Upcoming AI Products
xAI also shared its future roadmap, revealing a series of new AI tools coming soon:
- A coding-focused AI model in August
- A multi-modal agent in September
- A video generation model in October
These developments aim to broaden xAI’s capabilities and provide more tools for both developers and businesses.
Controversy and Challenges
Despite these technical achievements, the launch comes at a difficult time. Just days earlier, Grok’s automated X account came under fire for posting antisemitic comments. In response, xAI temporarily limited the account and removed a line from the chatbot’s system prompt that had encouraged politically incorrect statements. Neither Musk nor the company’s leaders addressed the issue during the Grok 4 launch event.
Compounding the turmoil, Linda Yaccarino, CEO of X, stepped down from her role on the same day Grok 4 was introduced. Her departure after nearly two years leaves a leadership gap, with no successor named yet.
Looking Ahead
xAI is offering Grok 4 through an API to attract developers and enterprise customers. While the company’s enterprise division is still new, it plans to partner with large cloud providers to make Grok available on major platforms.
Despite promising benchmark results, xAI’s challenge now is to convince businesses to adopt Grok in a competitive market already dominated by well-established tools like ChatGPT, Gemini, and Claude. Grok 4’s success may ultimately depend not only on performance but also on how well xAI manages trust, safety, and public perception moving forward.
