Tech News Hub
OpenAI Launches GPT-5: A Unified AI Model That Redefines ChatGPT’s Power
OpenAI has officially unveiled GPT-5, its most advanced AI model yet — and it’s set to transform what ChatGPT can do. Released on Thursday, GPT-5 is OpenAI’s first “unified” AI model, merging the deep reasoning skills of its o-series with the rapid, responsive style of its GPT line.
This launch marks a major shift in OpenAI’s ambitions — moving ChatGPT beyond being just a chatbot toward becoming a true AI agent capable of completing complex tasks for users.
A Leap Beyond GPT-4
While GPT-4 brought smarter and more context-aware conversations, GPT-5 goes further. The new model can generate entire software applications, manage calendars, compile research briefs, and handle a broader range of tasks on users’ behalf.
One of GPT-5’s biggest upgrades is its real-time router, which automatically decides whether to respond instantly or take extra time to think through more complex queries — removing the need for users to tweak settings manually.
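OpenAI hasn’t detailed how the router works, but the concept can be illustrated with a toy dispatcher: a lightweight scorer estimates query difficulty and picks a model path accordingly. Everything below is hypothetical and invented for illustration, not OpenAI’s code.

```python
# Toy sketch of the routing idea behind GPT-5 (hypothetical, not OpenAI's code).
# A lightweight scorer rates each query; hard queries go to a slower reasoning
# path, easy ones to a fast responder. The user never toggles a setting.

def estimate_difficulty(query: str) -> float:
    """Hypothetical difficulty score in [0, 1]; a real router would use a model."""
    hard_markers = ("prove", "debug", "step by step", "plan", "derive")
    score = sum(marker in query.lower() for marker in hard_markers) / len(hard_markers)
    return min(1.0, score + (0.1 if len(query) > 500 else 0.0))

def route(query: str, threshold: float = 0.3) -> str:
    """Dispatch to the reasoning path only when the query looks hard enough."""
    return "reasoning-model" if estimate_difficulty(query) >= threshold else "fast-model"

print(route("What's the capital of France?"))             # fast-model
print(route("Prove this invariant and debug the loop"))   # reasoning-model
```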
OpenAI CEO Sam Altman called GPT-5 “the best model in the world” and described it as a “significant step” toward artificial general intelligence (AGI) — AI that can outperform humans at most economically valuable work.
Starting immediately, GPT-5 is the default model for all free ChatGPT users, marking the first time OpenAI has offered a reasoning-capable AI to non-paying users.
Performance: Strong Gains in Coding, Creativity, and Accuracy
OpenAI claims GPT-5 leads in several areas compared to rivals like Anthropic’s Claude Opus 4.1, Google DeepMind’s Gemini 2.5 Pro, and Elon Musk’s Grok 4 Heavy.
- Coding Excellence – On the SWE-bench Verified benchmark, GPT-5 scored 74.9%, slightly edging Claude Opus 4.1’s 74.5% and far surpassing Gemini 2.5 Pro’s 59.6%. It’s especially strong in “vibe coding” — quickly spinning up functional software from scratch.
- Science Mastery – On the GPQA Diamond (PhD-level science test), GPT-5 Pro scored 89.4%, outperforming most competitors.
- Health Accuracy – On HealthBench Hard Hallucinations, GPT-5 (with extended thinking) had a hallucination rate of just 1.6%, a dramatic improvement over GPT-4o’s 12.9% and o3’s 15.8%.
- Creativity – OpenAI says GPT-5’s creative writing and design outputs feel more natural, with “better taste” in tone and style.
Overall hallucination rates are down to 4.8%, compared to over 20% for previous OpenAI reasoning models — a huge leap in reliability.
Safer and Smarter Interactions
According to OpenAI’s safety lead Alex Beutel, GPT-5 is less deceptive, more transparent, and better at distinguishing harmful queries from harmless ones — meaning it can refuse unsafe requests while avoiding unnecessary content blocks for legitimate users.
New Features for Users and Developers
As part of the GPT-5 rollout:
- ChatGPT Personalities – Users can now choose from four preset response styles: Cynic, Robot, Listener, and Nerd.
- Subscription Benefits:
  - Free users get GPT-5 as the default model.
  - Plus ($20/month) – Higher usage limits.
  - Pro ($200/month) – Unlimited GPT-5 and access to GPT-5 Pro for more powerful reasoning.
- Developer Tools – GPT-5 comes to OpenAI’s API in three sizes (gpt-5, gpt-5-mini, gpt-5-nano) with adjustable verbosity controls. Pricing starts at $1.25 per million input tokens and $10 per million output tokens.
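For developers curious what a call might look like, here is a minimal sketch using OpenAI’s Python SDK. The Responses API shape and the exact name of the verbosity parameter are assumptions based on the announcement, not confirmed details:

```python
# Minimal sketch using the openai Python SDK. Assumes the Responses API shape;
# the text.verbosity parameter name is an assumption based on OpenAI's announcement.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5-mini",               # also available: gpt-5, gpt-5-nano
    input="Summarize the GPT-5 launch in two sentences.",
    text={"verbosity": "low"},        # announced verbosity control (assumed name)
)
print(response.output_text)
```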
The Bigger Picture
Since its debut in 2022, ChatGPT has become one of the most-used apps in the world, with over 700 million weekly users — nearly 10% of the planet’s population. GPT-5 is one of OpenAI’s most anticipated launches, and its reception could influence Big Tech competition, Wall Street sentiment, and global AI policy.
While benchmarks suggest GPT-5 is a step up in several areas, it remains on par with other frontier AI models in some tests. The real test will be how developers and users apply it in the wild — from enterprise automation to personal productivity.
Conclusion
With GPT-5, OpenAI has delivered a model that blends speed, intelligence, safety, and creativity in ways that set a new bar for AI assistants. Whether it becomes the definitive leap toward AI agents — or just another incremental step — will depend on how quickly users and developers push it to its limits.

OpenAI Releases GPT-OSS: Two Open-Weight AI Models Built for Developers
OpenAI has taken a significant step toward transparency and developer accessibility by releasing two open-weight AI reasoning models: gpt-oss-120b and gpt-oss-20b. This marks the company's first open language model release since GPT-2, which came out more than five years ago.
The models are now available for download through the Hugging Face platform. According to OpenAI, both models deliver “state-of-the-art” performance among open-weight AI systems.
What’s New in GPT-OSS
OpenAI’s open models come in two versions:
- gpt-oss-120b: A powerful model designed to run on a single Nvidia GPU.
- gpt-oss-20b: A smaller version that can run on consumer laptops with 16GB of RAM.
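Because the weights are on Hugging Face, running the smaller model locally can be sketched in a few lines with the transformers library. The openai/gpt-oss-20b repository ID follows the naming OpenAI announced, but treat the details below as an assumption rather than official instructions:

```python
# Sketch: run gpt-oss-20b locally via Hugging Face transformers (assumes the
# weights live under the openai/ organization and roughly 16GB of memory,
# per OpenAI's stated requirement for the smaller model).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    device_map="auto",   # place weights on a GPU if available, else CPU
)

out = generator("Explain mixture-of-experts in one paragraph.", max_new_tokens=200)
print(out[0]["generated_text"])
```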
Both models are designed to handle complex reasoning tasks and can connect to more advanced, closed models when additional capabilities—like image processing—are needed.
Model Performance
OpenAI reports strong benchmark results for its new models:
Codeforces (coding benchmark):
- gpt-oss-120b: Score of 2622
- gpt-oss-20b: Score of 2516
- These scores outperform DeepSeek’s R1 model but are slightly below OpenAI’s closed o3 and o4-mini models.
Humanity’s Last Exam (HLE):
- gpt-oss-120b: 19%
- gpt-oss-20b: 17.3%
- Again, these models rank just below OpenAI’s own advanced models but lead other open-source competitors like Qwen and DeepSeek.
However, one area of concern is hallucination. On OpenAI’s PersonQA benchmark:
- gpt-oss-120b hallucinated in 49% of cases.
- gpt-oss-20b hallucinated in 53% of cases.
- Both rates are significantly higher than o1’s 16% and o4-mini’s 36%.
How the Models Were Built
OpenAI used similar training processes for these open models as it does for its proprietary ones. Key features include:
- Mixture-of-Experts (MoE): Only about 5.1 billion of gpt-oss-120b’s 117 billion total parameters are activated per token, improving efficiency (see the sketch after this list).
- High-Compute Reinforcement Learning (RL): The models were trained in simulated environments to improve reasoning and decision-making.
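To make the MoE idea concrete, here is a generic top-k routing layer in PyTorch. It illustrates the technique itself, not gpt-oss’s actual architecture, and all sizes are invented for the example:

```python
# Generic top-k mixture-of-experts layer (illustrative; not gpt-oss's code).
# A router scores all experts per token, but only the top-k experts run,
# so the active parameter count per token is a small fraction of the total.
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):                    # x: (tokens, d_model)
        weights, idx = self.router(x).softmax(-1).topk(self.k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):           # run only the selected experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

layer = TopKMoE()
print(layer(torch.randn(5, 64)).shape)  # torch.Size([5, 64])
```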
Despite their strength, the models are text-only. They don’t support image, video, or audio processing.
Licensing and Access
Both models are released under the Apache 2.0 license, allowing businesses and developers to use, modify, and commercialize them freely without needing OpenAI’s permission.
However, the training data used to build the models is not being released. This decision likely stems from ongoing legal challenges regarding the use of copyrighted materials in AI training.
Focus on Safety
OpenAI delayed this release several times to ensure the models wouldn't pose major security risks. After extensive testing, including evaluations for possible misuse in cyberattacks or biological threats, OpenAI found:
- A small increase in biological capabilities was possible.
- No evidence that the models could be fine-tuned to reach dangerous thresholds.
These safety evaluations align with OpenAI’s broader commitment to responsible AI deployment.
A Strategic Move
This release comes as U.S. tech companies face increasing competition from Chinese labs like DeepSeek, Qwen, and Moonshot AI. Meanwhile, Meta’s LLaMA models, once leaders in the open AI space, have reportedly fallen behind.
There is also growing political pressure for American AI companies to open source more of their work. In July, the U.S. government encouraged developers to promote global AI adoption rooted in democratic values.
OpenAI appears to be responding to that pressure. CEO Sam Altman stated:
“To that end, we are excited for the world to be building on an open AI stack created in the United States, based on democratic values, available for free to all and for wide benefit.”
Looking Ahead
While GPT-OSS sets a high bar for open models, developers are already looking ahead to DeepSeek’s R2 release and new offerings from Meta’s Superintelligence Lab.
In the meantime, OpenAI has positioned itself as a strong contender in the open-weight space, offering powerful tools while continuing to prioritize safety and accessibility.
Conclusion
OpenAI’s return to open models with GPT-OSS reflects both a strategic shift and a response to growing global demand for transparent AI development. With strong performance and permissive licensing, the models offer valuable opportunities for innovation, while acknowledging the ongoing challenges of safety and ethics in artificial intelligence.

Apple Reportedly Developing Its Own AI-Powered 'Answer Engine' to Compete with ChatGPT
Apple is reportedly working on its own artificial intelligence-powered "answer engine", signaling a new push into generative AI and search. According to Bloomberg’s Mark Gurman, the tech giant has formed a dedicated team called Answers, Knowledge, and Information, tasked with building a tool capable of responding to user questions using real-time web data—similar in scope to OpenAI’s ChatGPT or Google’s Gemini.
A New Direction for Apple AI
While Apple has already partnered with OpenAI to integrate ChatGPT into iOS 18 and Siri, the company appears to be planning a more in-house, long-term solution. The new AI system could either launch as a standalone application or become deeply integrated across Apple services, including Siri, Safari, and Spotlight.
The aim is clear: Apple wants to create its own version of a conversational AI assistant, powered by web-based search intelligence, rather than rely entirely on third-party providers like OpenAI or Google.
Hiring Push Signals Strategic Shift
Apple’s job postings provide more clues about its intentions. The company is actively recruiting experts with backgrounds in search engine algorithms, machine learning, and large-scale information systems. This hiring push suggests Apple is serious about building a robust AI search platform that could compete with—or eventually replace—reliance on external AI providers.
Timing and Challenges
Apple has faced several delays in delivering a truly AI-enhanced Siri. While ChatGPT integration was announced as part of Apple Intelligence during WWDC 2024, a fully personalized AI assistant experience has yet to materialize.
This new "answer engine" effort may also be influenced by Google's ongoing antitrust issues, which could eventually force changes to Apple’s lucrative search deal with Google. With growing pressure to innovate and reduce dependency, Apple may see an in-house AI engine as a strategic necessity.
What Could This Mean for Users?
If successful, Apple’s AI answer engine could lead to:
- A smarter and more responsive Siri, capable of real-time, context-aware answers.
- Enhanced web search in Safari, tailored to user preferences and Apple’s privacy standards.
- Better information discovery across Apple platforms, combining AI capabilities with user-centric design.
Conclusion
Apple's move to develop its own AI-powered answer engine represents a significant step toward gaining independence in the growing AI ecosystem. As competition heats up between tech giants to build the smartest assistants and most capable AI tools, Apple’s strategy appears to be shifting from partnerships to proprietary innovation. Whether this new AI initiative will rival ChatGPT or Google’s Gemini remains to be seen—but it could redefine how Apple users interact with information across devices.

TikTok Launches ‘TikTok Pro’ in Germany, Portugal, and Spain with a Built-in Charity Program
TikTok is introducing a new version of its app in select European markets, aiming to merge entertainment with social impact. Called “TikTok Pro,” the platform has launched in Germany, Portugal, and Spain with a unique feature: a built-in charity support program known as the Sunshine Programme.
While the core content of TikTok Pro mirrors the regular app—short videos, creator clips, and trending topics—there are a few key differences in experience and purpose.
A Platform with Purpose: The Sunshine Programme
The centerpiece of TikTok Pro is the Sunshine Programme, a new feature designed to empower users to contribute to charitable causes directly through their activity on the platform.
Here’s how it works:
- Users earn “virtual sunshine” by completing in-app actions like inviting friends, liking or reposting charity-related videos, following verified charity accounts, or searching for nonprofit initiatives.
- Once users accumulate enough virtual sunshine, they can choose a charitable organization to support.
- TikTok then translates that support into a real monetary donation to the selected charity.
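As a toy model of that flow (TikTok hasn’t published point values or donation thresholds, so every number and name below is invented):

```python
# Toy model of the Sunshine Programme flow; all point values, the threshold,
# and the charity name are invented for illustration only.
ACTIONS = {"invite_friend": 5, "like_charity_video": 1,
           "follow_charity": 2, "search_nonprofit": 1}
DONATION_THRESHOLD = 100  # assumed amount of sunshine needed to unlock a donation

class SunshineAccount:
    def __init__(self):
        self.balance = 0

    def record(self, action: str) -> None:
        self.balance += ACTIONS[action]

    def donate(self, charity: str) -> str:
        if self.balance < DONATION_THRESHOLD:
            return f"Keep collecting: {self.balance}/{DONATION_THRESHOLD}"
        self.balance -= DONATION_THRESHOLD
        return f"TikTok converts your sunshine into a real donation to {charity}"

acct = SunshineAccount()
for _ in range(20):
    acct.record("invite_friend")           # 20 invites x 5 points = 100
print(acct.donate("Example Charity"))      # hypothetical recipient
```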
This initiative aims to connect everyday user engagement with meaningful social contributions, allowing people to support causes they care about without spending their own money.
What’s Different About TikTok Pro?
While the content on TikTok Pro remains the same as the standard TikTok experience, the new app offers a streamlined version stripped of livestreams, shopping features, and advertisements. This gives users a more focused, distraction-free environment.
The move also aligns with TikTok’s goal to create a more values-driven experience for those who prefer purpose-led digital interactions over commerce and monetization.
TikTok’s Mission in Europe
TikTok says the launch of TikTok Pro is part of a broader strategy to promote social good through entertainment. By offering an app that combines charitable engagement with community content, the company hopes to encourage users in Europe to take part in philanthropic action while exploring the same engaging content they love.
The launch in Germany, Portugal, and Spain appears to be an initial rollout, possibly laying the groundwork for wider availability depending on user response and impact.
Conclusion
With TikTok Pro, the platform is taking a creative approach to social responsibility by enabling users to make a difference through simple online interactions. It’s a new kind of digital experience—one that doesn’t just entertain but also empowers users to support meaningful causes. Whether this model will expand across Europe or globally remains to be seen, but it signals a shift toward more purpose-driven innovation in social media.

Google Launches Gemini 2.5 Deep Think, Redefining AI Intelligence with Multi-Agent Reasoning Model
Google has officially launched Gemini 2.5 Deep Think, a next-generation AI model that marks a major advancement in artificial intelligence. With a focus on reasoning and problem-solving, this multi-agent system represents Google’s most ambitious step yet toward building AI that can think more like humans—strategically, creatively, and collaboratively.
Now available to users subscribed to Google’s $250-per-month Ultra plan, Gemini 2.5 Deep Think is accessible through the Gemini app and sets a new benchmark for AI intelligence and real-world application.
What Is Gemini 2.5 Deep Think?
Unveiled earlier at Google I/O 2025, Gemini 2.5 Deep Think is Google’s first publicly available multi-agent AI model. Unlike traditional models that rely on a single AI agent, this system uses multiple agents to work on different ideas in parallel. The goal is to produce better answers by combining diverse reasoning paths—much like a team of experts working together to solve a problem.
This multi-agent architecture allows Gemini 2.5 Deep Think to analyze complex questions, evaluate options, and select the most effective solutions, making it far more powerful in tasks that require deep thinking, strategic planning, or step-by-step improvements.
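Google hasn’t disclosed Deep Think’s internals, but the parallel-reasoning idea resembles best-of-n sampling with a selector, sketched here with stand-in functions. All names are hypothetical:

```python
# Toy illustration of parallel multi-agent reasoning (hypothetical; Google has
# not published Deep Think's internals). Several "agents" draft answers
# concurrently, then a selector keeps the strongest reasoning path.
from concurrent.futures import ThreadPoolExecutor

def agent(question: str, strategy: str) -> str:
    """Stand-in for one reasoning agent; a real system would call a model."""
    return f"[{strategy}] answer to: {question}"

def score(candidate: str) -> float:
    """Stand-in for a verifier/judge that rates each candidate answer."""
    return float(len(candidate))  # placeholder heuristic, not a real judge

def deep_think(question: str) -> str:
    strategies = ["direct", "step-by-step", "work-backwards", "analogy"]
    with ThreadPoolExecutor() as pool:
        candidates = list(pool.map(lambda s: agent(question, s), strategies))
    return max(candidates, key=score)  # select the best-scoring candidate

print(deep_think("How many primes are below 100?"))
```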
Demonstrated Success in Competitive Environments
To showcase its real-world capabilities, Google used a version of this model at the International Math Olympiad (IMO)—where it helped secure a gold medal. This version of Gemini 2.5 Deep Think, specifically tuned for academic reasoning, took hours (instead of seconds or minutes) to process solutions and has now been made available to a select group of researchers and mathematicians.
Google hopes that by releasing this version to the academic community, the AI model can help accelerate discoveries in mathematics, science, and other research-intensive fields.
Setting New Industry Standards
Gemini 2.5 Deep Think is outperforming leading AI models across major benchmarks:
- Humanity’s Last Exam (HLE): Gemini 2.5 Deep Think scored 34.8%, compared with xAI’s Grok 4 at 25.4% and OpenAI’s o3 at 20.3%.
- LiveCodeBench 6 (competitive coding tasks): Gemini 2.5 Deep Think achieved 87.6%, while Grok 4 scored 79% and OpenAI’s o3 scored 72%.
The model also showed superior results in tasks like web development, generating more detailed and visually polished outputs than its competitors.
Built for Tool Use and Extended Reasoning
Another key strength of Gemini 2.5 Deep Think is its ability to integrate with tools like Google Search and code execution. It can produce longer, more in-depth responses that make it suitable for a wide range of tasks—from academic research and technical writing to software development and creative work.
Google has also applied novel reinforcement learning techniques to help the AI better explore and optimize its reasoning processes.
Industry-Wide Shift Toward Multi-Agent AI
The launch of Gemini 2.5 Deep Think reflects a broader trend in AI development. Other major players are also turning to multi-agent systems:
- xAI introduced Grok 4 Heavy, a multi-agent model built for performance benchmarking.
- OpenAI used a multi-agent system to compete in the IMO, although that model has not been released.
- Anthropic is deploying a multi-agent approach in its research assistant tools.
However, multi-agent models are expensive to run, which explains why they’re currently being offered through premium subscription plans by companies like Google and xAI.
What’s Next?
In the coming weeks, Google will extend access to Gemini 2.5 Deep Think via the Gemini API, initially targeting a limited group of testers. The company is aiming to explore how developers, researchers, and businesses can integrate this powerful reasoning model into their workflows and applications.
Conclusion
With the launch of Gemini 2.5 Deep Think, Google is reshaping the future of AI. By introducing a multi-agent model capable of deep reasoning and parallel thinking, the company is taking a bold step toward more intelligent, adaptable, and reliable AI systems. Whether in research, software development, or real-world problem-solving, this breakthrough points to a new era where artificial intelligence doesn't just respond — it reasons.

Google Supercharges AI Mode with Canvas, Search Live, and Desktop Integration Features
Google is enhancing its experimental AI Mode in Search with a suite of new features designed to help users research, study, and explore topics in a more organized and interactive way. The latest updates include Canvas, Search Live with Lens, and deeper integration with desktop tools—all part of Google's push to make Search a more intelligent assistant.
Canvas: Organize Study and Research with AI
One of the most useful additions is Canvas, a side panel tool that helps users build study guides or organize information over time. After clicking the new “Create Canvas” button in AI Mode, users can start forming a structured plan—ideal for preparing for exams or researching a complex topic.
Canvas lets you interact with AI to refine the content and structure based on your needs. In the coming weeks, users will also be able to upload class notes, PDFs, or a syllabus to personalize their study materials even further.
This feature is rolling out soon to users in the AI Mode Labs experiment in the United States.
Real-Time Help with Search Live and Google Lens
Google is also bringing Project Astra technology into AI Mode through a feature called Search Live. When using Google Lens in the app, users can tap the Live icon and point their camera at something to ask questions in real time.
This makes it possible to get immediate explanations or information about whatever you're seeing—like a math problem, a product label, or a confusing document. According to Google, it's like having an expert who can see what you see and help you understand it instantly.
Search Live is rolling out this week on mobile devices in the U.S. for users in the AI Mode Labs program.
AI Mode Comes to Desktop with New Capabilities
Google is also expanding AI Mode to work better on desktop. You’ll now be able to:
- Use Lens to ask questions about what’s on your screen, such as a chart or diagram on a webpage.
- Click on “Ask Google about this page” from the address bar and get an AI Overview with a summary in the side panel.
- Follow up with deeper questions using a new “Dive deeper” button or the AI Mode toggle in the Lens results.
Another major update is the ability to upload and ask questions about PDFs on desktop. Whether it’s lecture slides or research papers, AI Mode can help you understand complex content by allowing follow-up questions directly tied to the document. Later this year, AI Mode will support additional file types, including Google Drive files.
A Smarter Way to Use Search
These features reflect Google's goal of transforming Search into a more conversational and helpful AI assistant. Whether you're studying for a test, researching a new topic, or trying to understand a visual concept, AI Mode now offers a smarter, more interactive way to get the help you need.
As these tools expand, more users can expect personalized, context-aware assistance—across both mobile and desktop.

AI Magic Comes to Photoshop: Adobe Launches Harmonize, Upscale, and Better Object Removal
Adobe is pushing the boundaries of creative tools once again—this time by expanding Photoshop’s AI capabilities with a powerful trio of features: Harmonize, Generative Upscale, and a revamped Object Removal tool. Powered by Adobe’s Firefly AI models, these new tools are designed to make complex edits faster, smoother, and more realistic—whether you're working on desktop, web, or even mobile.
Harmonize: Realistic Blending for Added Objects
What began as Project Perfect Blend in 2024 is now officially released as Harmonize. This tool allows users to seamlessly insert objects into existing images and have them match the surrounding scene automatically.
Using Adobe Firefly, Harmonize adjusts lighting, color, shadow, and visual tone so that a new item—like a piece of furniture or a product—blends naturally with its environment. Backgrounds are removed in one click, and the AI handles the rest, saving creators from time-consuming manual edits.
Harmonize is available in beta for Photoshop on desktop and web and is also part of an early access rollout on the Photoshop iOS app.
Generative Upscale: Bigger and Better Images
The new Generative Upscale feature allows users to increase an image’s resolution up to 8 megapixels, making it perfect for reviving older photos, preparing assets for print, or adapting visuals for different formats—all without sacrificing clarity or quality.
By analyzing image data intelligently, Firefly enhances fine details and textures, giving creatives the freedom to work with lower-res images in professional-grade projects.
Smarter Object Removal: Clean Results, Less Effort
Adobe has also reworked its Object Removal feature, significantly improving how backgrounds are reconstructed after removing an element from a scene.
In earlier versions, removing an item could lead to visual glitches or unnatural patches. Now, the updated algorithm focuses specifically on the object’s surrounding area, producing more accurate fills and fewer artifacts. The result: cleaner, more realistic edits with minimal effort.
In a live demo, Adobe’s team showcased how the new object removal tool replaced a bowl of pico de gallo in a food photo with natural-looking table texture—where the old version would have simply added a blurry green blob.
More Control for Creators
Alongside these major updates, Adobe is introducing smaller enhancements:
- Better tools for organizing assets in shared workspaces
- The ability to choose different versions of the Firefly model when using AI features, allowing users more control over style and output
According to Shambhavi Kadam, Senior Director of Product Management for Photoshop, these features were designed based on real feedback from creative professionals:
“We spoke to photographers, designers, and content creators to understand the pain points in their workflows. These updates are about removing friction and helping them focus on creativity—not repetitive tasks.”
Now Available in Beta
All three AI-powered features—Harmonize, Generative Upscale, and the improved Object Removal—are now available in beta for Photoshop on desktop and web. Harmonize is also accessible through early access on the Photoshop iOS app.
As Adobe continues to integrate generative AI deeper into its Creative Cloud tools, Photoshop users can expect even more intelligent, intuitive, and time-saving features in the near future.
