Tech News Hub
Facebook to Share All Videos as Reels in Upcoming Platform Update
Facebook is set to make a significant change to how videos are posted on its platform. The company has announced that in the coming months, all videos — regardless of their length or orientation — will automatically be shared as Reels.
Previously, users could choose between uploading a standard video or a Reel, with different tools and workflows for each. This update removes that distinction, aiming to simplify the video-sharing experience for users and creators alike.
Key Changes Ahead
- All Videos Become Reels: Facebook will no longer separate video posts from Reels. Everything you upload will now be treated as a Reel, even longer or horizontal videos.
- Reels Tab Replaces Video Tab: The current "Video" tab on Facebook will be renamed "Reels" to reflect this new unified format.
- No Length or Format Limits: The update removes the current 90-second limit for Reels and allows for videos in both vertical and horizontal formats.
- Streamlined Publishing Tools: A single, simplified video publishing flow will be introduced, offering access to all creative tools in one place.
- Audience Control Settings: Users will be asked to confirm their audience preference — such as public, friends, or a custom group — when uploading Reels. These settings will now apply across both Feed and Reels.
Why Facebook Is Making This Move
According to Facebook, the goal is to unify the video experience on the platform and make content creation easier. This change also mirrors what Instagram did in 2022, when it started automatically converting most video uploads into Reels.
By consolidating all video content into one format, Facebook hopes to offer a more consistent user experience and provide creators with better tools and reach.
User Concerns
The update may not be welcomed by everyone. Some users have expressed concerns that horizontal videos may appear awkward in a vertical format and that combining short and long videos into one stream could lead to a cluttered viewing experience.
Facebook says it will still recommend videos based on user interests, so the types of content people see shouldn’t change — only the format they’re presented in.
Rollout Timeline
The changes will begin rolling out globally in the coming months. Facebook emphasizes that the platform will continue to support a variety of video types, including short clips, long-form content, and live videos — all under the Reels umbrella.
Conclusion
This update reflects Facebook’s effort to modernize and simplify video sharing in a format that matches today’s user habits. As Reels become the standard across the platform, users can expect a more unified experience — whether they’re uploading a quick clip or a longer video.

Adobe Brings Firefly AI to iOS and Android with Powerful Cross-Platform Features
Adobe is expanding access to its AI tools by launching a dedicated Firefly app for both iOS and Android. Previously available only on the web, Firefly now offers mobile users the ability to create, edit, and enhance visual content directly from their smartphones — using both Adobe’s in-house models and several leading third-party AI models.
This move follows Adobe’s broader push into the AI space, which began with a major redesign of the Firefly web app in April 2025. With the mobile release, Adobe is aiming to make advanced AI image and video generation more accessible and flexible for creators on the go.
Key Features of the Firefly Mobile App
- Multi-Model Support: The app supports not only Adobe's own Firefly image and video generation models but also external models including OpenAI (for GPT-powered image generation), Google (Imagen 3 and Veo 2), and Flux (Flux 1.1 Pro).
- Prompt-Based Creation: Users can enter text prompts to generate images or videos, convert existing images into video content, or use tools like generative fill to edit specific parts of an image.
- Generative Expand: Similar to Photoshop’s AI tools, users can expand an image’s boundaries with consistent results based on the original content.
- Cloud Integration: Adobe Creative Cloud subscribers can seamlessly start a project on mobile and pick it up later on the web or desktop via cloud storage.
- Expanded Third-Party Support: Adobe is also incorporating additional models such as Flux.1 Kontext (Black Forest Labs), Ideogram 3.0 (Ideogram), and Gen-4 Image (Runway), giving users even more creative flexibility.
Firefly Boards Get a Boost
Alongside the mobile launch, Adobe is enhancing its collaborative tool, Firefly Boards. This digital whiteboarding feature now includes video generation capabilities, allowing users to create videos using Adobe’s own models or third-party alternatives. It’s a step toward turning Firefly into a full-fledged creative workspace.
The Bigger Picture
According to Adobe, over 24 billion media assets have already been created using Firefly’s generative tools. This surge in creative activity has helped drive a 30% quarter-over-quarter increase in new Adobe subscribers — many of whom are first-time users drawn in by the platform’s AI features.
Conclusion
With Firefly now available on mobile, Adobe is making it easier than ever for creators to harness AI tools from their phones — whether they’re generating art, editing images, or building video content. By offering both native and third-party model support and seamless integration across devices, Adobe is positioning Firefly as a top-tier solution in the rapidly evolving world of creative AI.

WhatsApp Confirms Ads Are Coming to the Status Screen: What You Need to Know
WhatsApp, one of the world’s most popular messaging apps, is preparing to introduce ads for the first time. The company has confirmed that advertisements will soon begin appearing on the Status screen—its version of Instagram Stories.
What’s Changing?
Ads will not appear in private chats or groups. Instead, users will begin seeing them after viewing a few Status updates, similar to how ads appear between Stories on Instagram.
How Will Ads Be Targeted?
Meta, WhatsApp’s parent company, says it will use general signals to determine which ads to show. These include:
- The user's country or city
- Language preferences
- Channels the user follows
- Interaction with previous ads
Meta has made it clear that it will not use personally identifiable information—like phone numbers, message content, calls, or group activity—for ad targeting. However, if a user has linked their WhatsApp account with Meta’s Account Center, their preferences in that system may influence the ads they see.
New Business Tools: Channel Promotions and Subscriptions
Alongside ads, WhatsApp is introducing new features to support creators and businesses:
- Channel Promotion will allow creators and companies to highlight their Channels in the app’s discovery section.
- Channel Subscriptions will let select creators and businesses offer exclusive updates for a fee, with payments managed through the app stores.
These tools expand on WhatsApp’s current focus on its Channels feature and its support for business communication.
Building on Existing Revenue Streams
Until now, WhatsApp has generated revenue through services like the WhatsApp Business platform and click-to-WhatsApp ads on Facebook and Instagram. The company has often highlighted these as growing sources of income in its earnings reports.
Alice Newton-Rex, WhatsApp’s Vice President of Product, said the introduction of ads and promotional features is a natural extension of what businesses have been requesting.
"These updates felt like the next step, now that our business tools have grown. Businesses want to help people discover them directly inside WhatsApp, and this gives them that opportunity," she said in a press briefing.
Global Rollout Coming Soon
WhatsApp says these updates—including ads on Status and the new promotion tools—will begin rolling out globally over the coming months.
Conclusion
WhatsApp is entering a new phase. By adding ads to the Status screen and expanding monetization options for creators and businesses, the platform is evolving beyond simple messaging. While the core experience remains intact, users will begin to see subtle changes that align with WhatsApp’s broader strategy to become a space for both communication and business discovery.

Google Tests Audio Overviews in Search: A Hands-Free Way to Understand Queries
Google is enhancing the way people interact with its search engine by testing a new feature called Audio Overviews. Announced on Friday, the feature is part of Google Labs, the company’s experimental platform where users can try out early-stage tools.
What Are Audio Overviews?
Audio Overviews are short spoken summaries of search results. Designed for those who prefer listening over reading, the feature provides a hands-free option to understand key information. Whether you're multitasking or just prefer audio content, it offers a different way to explore your search results.
These overviews are powered by Google’s latest Gemini AI models, which help generate useful and relevant spoken summaries based on your query. If Google determines that an Audio Overview would be helpful, you’ll see an option to listen to it.
How the Feature Works
Once the feature appears, users can:
- Play or pause the audio using a built-in player
- Adjust volume and playback speed
- See source links to explore more about the topic
Google also includes the option to provide feedback. You can give a thumbs up or down on each Audio Overview to help improve the experience over time.
Expanding Beyond Search
Audio Overviews were first introduced in NotebookLM, Google’s AI-based research and note-taking tool. There, users can turn documents such as course notes or legal texts into AI-generated podcasts, offering a similar hands-free experience. The same technology was also added to Gemini in March.
Now, by bringing this tool to Google Search, the company aims to support auditory learners and users looking for more accessible ways to engage with information.
Industry Impact
While Audio Overviews may offer convenience, they also raise concerns among content creators and publishers. A recent Wall Street Journal report highlighted how Google’s AI-powered summaries are reducing traffic to news websites. With Audio Overviews, this trend could accelerate, as users may rely more on AI explanations instead of visiting the original sources.
To address this, Google includes visible links in the audio player that direct users to the original content, ensuring that the information remains traceable and verifiable.
Conclusion
Audio Overviews offer a fresh and accessible way to interact with Google Search, especially for those who benefit from listening instead of reading. While the technology shows promise in improving usability and learning, it also sparks important discussions about how AI is reshaping the way we find and engage with content online.

Meta AI Launches Video Editing Tools to Boost Creative Content Creation
Meta is expanding its AI toolkit with a major update—video editing capabilities that allow users to transform short clips with just a few taps. Announced on Wednesday, this new feature lets people edit 10-second videos using preset AI prompts to change backgrounds, clothing, and visual styles.
The update is part of Meta's broader push to compete with rivals like Google and with AI-driven editing apps such as Captions. With this rollout, Meta is making it easier for users to create engaging video content without needing advanced editing skills.
What’s New?
The new video editing tools are available on the Meta AI app, the Meta.ai website, and a new app called Edits—Meta’s take on CapCut. These tools come with 50 built-in presets that were developed based on feedback from creators. Each preset offers a quick way to transform a video’s look and feel. For example, users can give their clip a vintage comic book vibe, change a sunny background to a rainy day, or even swap a person’s outfit into something like a space suit.
For now, editing is limited to 10-second videos, but the effects are eye-catching and designed to encourage experimentation.
Easy Sharing to Facebook and Instagram
Once a video is edited, it can be shared directly to Facebook and Instagram through both the Meta AI app and Edits. This seamless integration is clearly aimed at helping creators stay within Meta’s ecosystem, instead of relying on third-party tools.
More Customization on the Way
Meta says it plans to add more editing flexibility later this year, giving users even more creative control. The goal, according to the company, is to help people make fun, expressive videos—whether it’s for reliving personal moments or building content for a larger audience.
In a blog post, Meta said, “We built this so that everyone can experiment creatively and make fun, interesting videos to share with their friends, family, and followers.”
A Strategic Move
Meta already offers AI-powered image generation across its platforms. By adding video editing, it's clearly aiming to keep creators engaged and loyal to its tools. The company is positioning Meta AI as a one-stop shop for content creation, from images to short-form videos.
Conclusion
Meta’s new video editing tools mark a significant step in making AI-powered creativity more accessible. With user-friendly presets and direct integration with its social platforms, Meta is giving creators a fast and easy way to bring their ideas to life—no complex software required.

OpenAI Launches o3-pro: Its Most Powerful Reasoning Model Yet
OpenAI has just released o3-pro, a significantly upgraded version of its earlier o3 model. Built for high-level reasoning, o3-pro is now the most capable model the company has developed, with notable improvements across science, math, education, programming, business tasks, and writing assistance.
What Makes o3-pro Different?
Unlike traditional AI models, o3-pro is designed to think step by step, which helps it solve complex problems with greater reliability. This kind of structured reasoning is particularly useful for disciplines that require precision and logical flow—like physics, advanced mathematics, and software development.
According to OpenAI, expert evaluations consistently favored o3-pro over the earlier o3 model. The reviewers highlighted major improvements in clarity, depth, accuracy, and the model’s ability to follow instructions.
Availability and Pricing
o3-pro is now available to ChatGPT Pro and Team users, replacing the older o1-pro model. OpenAI says that Enterprise and Education users will get access starting next week. Developers can also access o3-pro via the API starting today.
In the API, o3-pro is priced at:
- $20 per million input tokens (input provided to the model)
- $80 per million output tokens (content generated by the model)
To put that in perspective, one million input tokens equals roughly 750,000 words—just a little longer than Tolstoy’s War and Peace.
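To see how those per-million-token rates translate into real spend, here is a minimal sketch that estimates the cost of a single request from its token counts. It is pure arithmetic based on the rates quoted above, not an official OpenAI SDK call, and the example token counts are hypothetical:

```python
# Illustrative cost estimate for o3-pro API usage, based on the
# published per-million-token rates quoted above. Pure arithmetic,
# not an official OpenAI SDK call.

INPUT_RATE = 20.00   # USD per 1M input tokens
OUTPUT_RATE = 80.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one API request."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# Hypothetical example: a 2,000-token prompt with a 1,000-token answer
print(f"${estimate_cost(2_000, 1_000):.3f}")  # $0.120
```

At these rates, output tokens cost four times as much as input tokens, so long generated answers dominate the bill far more than long prompts do.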
Features and Capabilities
o3-pro comes with access to OpenAI’s full set of tools, including:
- Web search
- File analysis
- Visual reasoning
- Python execution
- Memory for personalized experiences
These tools make it a strong option for users who need in-depth research, coding support, or tailored responses.
Performance Highlights
In internal testing, o3-pro outperformed top competitors. It scored higher than Google’s Gemini 2.5 Pro on the AIME 2024 benchmark, which tests mathematical reasoning. It also beat Anthropic’s Claude 4 Opus on GPQA Diamond, an exam-level test that evaluates PhD-level scientific knowledge.
Limitations to Note
While powerful, o3-pro isn’t without limitations:
- Responses generally take a bit longer than those from o1-pro
- Temporary chat mode in ChatGPT is currently disabled due to a technical issue
- The model cannot generate images
- It is not yet supported in OpenAI’s Canvas workspace
Conclusion
o3-pro represents a major step forward for OpenAI, offering enhanced performance where it matters most—reasoning, precision, and adaptability. While it's not without some temporary limitations, its capabilities mark a clear upgrade for users who rely on AI for complex tasks and decision-making.

WWDC 2025: Apple Unveils iOS 26 with Liquid Glass Design, Smarter AI, a New Games App, and More
At WWDC 2025, Apple introduced one of its boldest updates in years. The event focused on major software changes across its ecosystem, including iOS 26, macOS 26, watchOS 26, and new features powered by Apple Intelligence.
From a striking new visual design called Liquid Glass to deeper AI integration and a long-awaited Games app, Apple made it clear that it's ready to evolve how users interact with their devices.
Here’s a full breakdown of everything Apple announced.
iOS 26 Redesign: Meet Liquid Glass
iOS 26 brings a significant design transformation—the biggest since iOS 7. Apple is calling this new interface Liquid Glass, and it introduces translucent, glossy, and reflective design elements that adapt based on what you're doing and the environment around you.
The interface feels more dynamic:
- Alerts now appear from where you tap.
- Menus expand into easily scrollable lists.
- The lock screen and home screen also reflect the new look, making the entire user experience feel more fluid and immersive.
This design gives iPhones a more modern, interactive, and polished feel that adapts to light and content in real time.
A Simpler Naming System
Starting this year, Apple is naming all its operating systems based on the release year. That means you’ll see:
- iOS 26
- iPadOS 26
- macOS 26 (also named macOS Tahoe)
- watchOS 26
- tvOS 26
- visionOS 26
This move should make it easier for users to know exactly which version they're using.
A Dedicated Games App for Apple Devices
Apple finally launched a Games app to bring all gaming experiences under one roof. It serves as a central hub where users can launch games, track progress, and engage with friends.
The app includes:
- A feed showing what your friends are playing
- Score comparisons
- A “Play Together” feature to send invites
- “Challenges” for friendly competitions
It’s Apple’s first serious attempt to build a social gaming layer into its ecosystem.
Apple Intelligence Takes the Lead
AI continues to play a bigger role across Apple’s software. This year’s updates show deeper integration of Apple Intelligence in daily tasks.
For example:
- macOS 26 introduces a Shortcuts app powered by AI for automating tasks.
- watchOS 26 includes a new Workout Buddy that gives personalized fitness guidance using AI.
- iPadOS and iOS benefit from smarter suggestions and context-aware actions throughout the system.
Apple seems committed to making its AI work more quietly and usefully in the background, rather than just showcasing flashy features.
What's New Across macOS, iPadOS, watchOS, and tvOS
macOS 26 (Tahoe)
The desktop experience now includes the Liquid Glass look, and the new AI-powered Shortcuts app will help users automate daily workflows more easily.
iPadOS 26
This update continues to blur the line between iPad and Mac. It includes a new Preview tool for file management, Apple Pencil markup support, AutoFill, and more options for exporting images.
watchOS 26
There’s a refreshed UI built on the Liquid Glass theme, plus a wrist-flick gesture for quicker navigation and an AI fitness coach called Workout Buddy.
tvOS 26
Apple TV is getting a more refined interface and a handy new feature—each time the device wakes, it prompts users to pick their profile. It also introduces karaoke mode, making the platform more fun for group settings.
Updates for Vision Pro
The Vision Pro headset isn’t left out. Apple announced support for new accessories:
- The Logitech Muse stylus for creative tasks
- Compatibility with the PlayStation VR2 Sense controller for gaming
These additions expand what users can do with Apple’s AR/VR device, opening up new creative and interactive possibilities.
AirPods and CarPlay Get Functional Upgrades
AirPods now support:
- Studio-quality voice recording
- A remote camera trigger, which is perfect for hands-free photography
CarPlay gets:
- Support for Live Activities and message widgets
- Tapback responses in messages
- Pinned conversations, similar to iMessage
These features aim to make driving safer and more connected without distraction.
Smaller—but Notable—Improvements
A few extra features were quietly introduced:
- Apple Wallet adds new tools for frequent travelers
- iMessage now supports polls, helping you get quick feedback from friends
- Translated lyrics and pronunciation guides enhance the music experience
- Maps now learns your commuting habits for better navigation suggestions
Conclusion
WWDC 2025 highlighted a more refined and thoughtful approach from Apple. With its new Liquid Glass design, simplified naming, stronger AI capabilities, and expanded gaming and media features, Apple is reshaping how users interact with its devices—without overwhelming them.
This year’s updates are more than cosmetic. They reflect Apple’s growing focus on personalization, intelligence, and connection across all platforms.
