Tech News Hub
Apple Working on New Vision Pro Models, Including One That Connects to Your Mac
Apple is continuing to invest in its Vision Pro mixed reality headset, despite a slower-than-expected start. According to Bloomberg’s Mark Gurman, the company is moving ahead with two new models that could address key criticisms of the original device.
A Shift in Strategy After Slow Sales
Sales of the first Vision Pro have reportedly fallen short of internal expectations, leading Apple to reconsider its direction for the product. Now, the company is planning two follow-up versions: one that’s more affordable and lighter, and another that brings tighter integration with the Mac.
What’s in the Works?
One of the new models will be a streamlined version of the Vision Pro aimed at solving two major complaints: the high $3,500 price and the bulkiness of the headset. While making the device cheaper and lighter is no small feat—especially with ongoing trade and tariff challenges—it’s a move that could make the Vision Pro more appealing to a broader audience.
The second model will reportedly connect directly to Macs. This version would let users stream content from their Mac displays into the headset, opening up possibilities for more immersive work experiences and enterprise applications.
Looking Further Ahead
According to Gurman, these updates are part of a bigger vision led by CEO Tim Cook: building lightweight augmented reality glasses that people could comfortably wear all day. While such a product may still be years away, these new Vision Pro models represent an important step in that direction.
Conclusion
Apple’s latest efforts suggest it’s not giving up on mixed reality. Instead, it’s refining its approach to make the Vision Pro more practical and appealing—whether that means reducing its price and weight or turning it into a powerful companion for Mac users. As Apple continues to shape the future of spatial computing, the Vision Pro could still find its place in everyday life.

Grok 3 API Is Live: Can Elon Musk’s AI Rival ChatGPT and Gemini?
Elon Musk’s AI company, xAI, is stepping into the big leagues with the official launch of the Grok 3 API—a major move aimed at competing with top players like OpenAI’s ChatGPT and Google’s Gemini. Despite ongoing legal tensions with OpenAI, Musk isn’t slowing down his AI ambitions.
Grok 3 and Grok 3 Mini Now Available
xAI has rolled out two versions of its model through the API:
- Grok 3
- Grok 3 Mini, a lighter and more affordable option.
Both models come with built-in “reasoning” capabilities and can understand both text and images, making them versatile tools for developers and businesses.
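For developers who want to try it, the sketch below shows roughly what a call looks like through xAI's OpenAI-compatible endpoint using the OpenAI Python SDK. The base URL, model names, and environment variable are assumptions for illustration, so check xAI's documentation for the current values.

```python
import os
from openai import OpenAI  # pip install openai

# Assumption: xAI exposes an OpenAI-compatible endpoint at api.x.ai and
# accepts model names "grok-3" / "grok-3-mini" — verify against xAI's docs.
client = OpenAI(
    api_key=os.environ["XAI_API_KEY"],
    base_url="https://api.x.ai/v1",
)

response = client.chat.completions.create(
    model="grok-3-mini",  # swap in "grok-3" for the full model
    messages=[
        {"role": "user", "content": "Explain chain-of-thought reasoning in one paragraph."},
    ],
)

print(response.choices[0].message.content)
```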
Let’s Talk Pricing
Grok 3 isn’t exactly cheap. It’s priced similarly to Anthropic’s Claude 3.7 Sonnet, but higher than Google’s Gemini 2.5 Pro, which often outperforms Grok 3 in benchmark tests.
Here’s a breakdown:
- Grok 3: $3 per million input tokens and $15 per million output tokens
- Grok 3 Mini: $0.30 per million input tokens and $0.50 per million output tokens
- Speedier versions are also available, going up to $25 per million output tokens
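To put those rates in perspective, here is a quick back-of-the-envelope cost sketch; the request sizes below are made up purely for illustration.

```python
# Illustrative cost estimate at the published per-million-token rates.
# The request volumes and sizes are hypothetical, not from xAI.
GROK_3 = {"input": 3.00, "output": 15.00}       # $ per million tokens
GROK_3_MINI = {"input": 0.30, "output": 0.50}   # $ per million tokens

def cost(rates, input_tokens, output_tokens):
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

# Example: 10,000 requests, each ~2,000 input tokens and ~500 output tokens.
print(f"Grok 3:      ${cost(GROK_3, 20_000_000, 5_000_000):,.2f}")       # $135.00
print(f"Grok 3 Mini: ${cost(GROK_3_MINI, 20_000_000, 5_000_000):,.2f}")  # $8.50
```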
While Grok offers power and flexibility, the price may be a sticking point—especially with more affordable and competitive models on the market.
What It Can (and Can’t) Do
Musk originally pitched Grok as an edgier alternative to mainstream AIs—a model that wouldn’t shy away from controversial questions or “unfiltered” responses. Grok 3 keeps that bold personality while also offering improvements in language understanding and reasoning.
However, it’s not without limitations. Despite claims that Grok 3 supports a context window of up to 1 million tokens, the API caps the context at 131,072 tokens, far less than promised. This has left some developers on X (formerly Twitter) questioning the accuracy of xAI’s promotional benchmarks.
Political Positioning Still in Question
Earlier Grok models were known for sometimes avoiding sensitive political questions or leaning left on certain social topics. Musk has promised to steer Grok toward “political neutrality”, but so far, it’s unclear how much that has changed with Grok 3. Users are still watching closely to see if the model truly delivers on that front.
The Bigger Picture
xAI's Grok 3 might not be the cheapest or the most accurate AI model available right now, but it’s certainly making waves. With Musk’s backing, built-in integration with X (Twitter), and a growing developer ecosystem, Grok has a real chance to carve out a niche—even if it’s not yet dethroning GPT-4 or Gemini.
Conclusion
Grok 3 is now officially in the hands of developers, giving them a new, bold AI tool to explore. Whether it can truly compete with giants like ChatGPT and Gemini depends on how well it performs in real-world use—and how much value users can extract at its premium price.
Let’s see if Elon Musk’s latest AI bet pays off.

ChatGPT Surpasses Instagram and TikTok as the Most Downloaded App Worldwide in March
In a major milestone for artificial intelligence, ChatGPT became the most downloaded app in the world in March 2025. For the first time ever, it beat out popular social platforms like Instagram and TikTok to claim the top spot on global app charts.
According to app intelligence firm Appfigures, ChatGPT was downloaded 46 million times during the month, marking a 28 percent increase from February. This made it not only the most downloaded app of the month (excluding games) but also ChatGPT’s best month to date.
Why the Sudden Surge?
Several updates in March played a role in the app’s rising popularity. OpenAI rolled out a major upgrade to ChatGPT’s image generation features—the first big change in over a year. The update sparked a wave of viral content, with users sharing AI-generated images and memes styled after classic Japanese animation, including artwork inspired by Studio Ghibli films like Spirited Away and My Neighbor Totoro.
OpenAI also loosened its image content restrictions and improved ChatGPT’s voice interaction features, giving users more flexibility and a more engaging experience overall.
Still, experts believe it’s not just the new features that are driving the growth.
Ariel Michaeli, founder and CEO of Appfigures, explained that ChatGPT’s name has become almost synonymous with AI for many people. “It’s starting to feel like ChatGPT is becoming a verb, much like how ‘Google’ did in the 2000s,” Michaeli said. Even when people hear about other AI tools like Grok or Claude, many still download ChatGPT because that’s the brand they recognize.
What Happened to Instagram and TikTok?
Instagram dropped to the second position after previously holding the number one spot in January and February. TikTok came in third. Earlier this year, TikTok saw a boost in downloads as users rushed to get the app over fears of a potential ban in the United States. That concern has since faded as negotiations continue between the U.S. and TikTok’s parent company, ByteDance.
Despite slipping in global rankings, Instagram remains extremely popular among American teens. A recent Piper Sandler survey showed that 87 percent of U.S. teens use Instagram every month, compared to 79 percent for TikTok and 72 percent for Snapchat.
The Rest of the Top 10
Other top-performing apps in March included Facebook, WhatsApp, CapCut, Telegram, Snapchat, Threads, and Temu. In total, the top 10 non-game apps were downloaded 339 million times in March—an increase from 299 million downloads in February.
Conclusion
ChatGPT’s rise to the top of the app charts is a clear sign that AI is becoming part of everyday life for millions of people. Whether it's being used for creative projects, casual chats, or learning something new, more users are turning to ChatGPT as their go-to tool for interacting with artificial intelligence.

Canva Unveils Powerful New AI Features, Including Coding, Spreadsheets, and Smarter Design Tools
Canva is stepping up its AI game in a big way. Despite ongoing debates around the use of AI in creative fields, the design platform is doubling down by rolling out a wide range of artificial intelligence features aimed at both everyday users and enterprise teams.
A New Era of AI-Powered Creativity
The update introduces Canva AI, a smart assistant designed to help users generate content faster and more efficiently. Whether you want to create a social media post, draft a presentation, or generate written copy, the assistant is ready to help. It can also produce visuals based on text prompts and suggest design concepts, offering inspiration with just a few clicks.
One standout addition is Canva Code, which allows users to build mini-apps using prompts — think interactive maps, custom calculators, or tools that can be embedded right into designs. This is powered in part by a partnership with Anthropic, a leading AI company. According to Cameron Adams, Canva’s co-founder and chief product officer, the goal is to make app-building as easy and visual as the rest of Canva’s platform.
Smarter Photo Editing and AI-Generated Backgrounds
Canva is also boosting its photo editing features with AI-driven tools. One lets users simply click on areas of an image to edit or replace objects, while another can generate realistic, customized backgrounds that respect lighting and layout. These updates are Canva’s response to the growing popularity of pro-level editing apps like Adobe Photoshop and Apple’s Pixelmator.
Bringing Spreadsheets to Design
As part of its push into the enterprise world, Canva is launching Canva Sheets — a new feature that blends the familiarity of spreadsheets with Canva’s signature design-first approach. Canva Sheets includes:
- Magic Insights to automatically analyze and summarize data
- Magic Charts to instantly transform data into branded visuals
- Integration with platforms like Google Analytics, HubSpot, and Statista for quick data import
These tools are designed to serve marketing, sales, and analytics teams who want to present data in a visually compelling way — without needing a separate tool for graphic design.
Artists and AI: A Growing Tension
Despite the tech-forward updates, the rollout highlights the ongoing conflict between AI development and the concerns of creative professionals. Many artists are wary of how generative AI is trained — particularly when it involves their work — and fear the impact on job security.
Still, Canva’s leadership sees AI not as a replacement for creativity, but as a catalyst for evolution. Adams believes that AI will touch every profession, including design, but in ways that augment rather than replace human work.
“Every role — from design and engineering to marketing — is going to change as AI tools become more common. We see this as a moment of massive opportunity,” Adams said.
Restructuring Amid Change
Earlier this month, Canva laid off some members of its technical writing team. While some may link this move to the growing influence of AI tools, Adams clarified that the decision was part of a broader restructuring effort, not directly related to the platform’s AI upgrades.
Conclusion
With these sweeping updates, Canva is making a bold statement: AI is not just a feature — it’s becoming central to the future of design. Whether you're a solo creator or part of a large team, Canva’s latest tools aim to bring more speed, flexibility, and creativity to your workflow.

Amazon Introduces Nova Sonic: A Next-Gen AI Voice Model That Challenges OpenAI and Google
Amazon has just launched Nova Sonic, a powerful new generative AI voice model that brings a significant upgrade to the company’s voice technology game. Designed to generate highly natural speech and handle voice interactions in real-time, Nova Sonic is Amazon’s answer to the cutting-edge voice models developed by OpenAI and Google.
What Makes Nova Sonic Stand Out?
Unlike traditional AI voice assistants like the early versions of Alexa or Siri, Nova Sonic is built for dynamic, real conversations. It supports bi-directional streaming, allowing smoother back-and-forth interaction and better understanding of speech even in noisy environments or when users mumble or mispronounce words.
Amazon claims Nova Sonic has outperformed rival models on multiple industry benchmarks. On Multilingual LibriSpeech, a test for understanding various languages, it achieved a word error rate of just 4.2% across English, French, German, Italian, and Spanish. In another test involving loud, multi-speaker conversations, Nova Sonic beat OpenAI’s GPT-4o-transcribe model by 46.7% in accuracy.
Speed and Efficiency
One of Nova Sonic’s biggest strengths is its speed. It has an average response latency of just 1.09 seconds, slightly faster than OpenAI’s GPT-4o, which clocks in at 1.18 seconds. On top of that, Amazon is marketing Nova Sonic as 80% more cost-efficient than GPT-4o — a major benefit for developers and businesses working with voice AI at scale.
Already Powering Alexa+
Nova Sonic isn’t just theoretical — parts of it are already being used in Alexa+, Amazon’s next-gen digital assistant. According to Rohit Prasad, SVP and Head Scientist of AGI at Amazon, the model is particularly good at routing user commands to different APIs and data sources, making it highly versatile for enterprise and real-time applications.
Built for Developers
Developers can access Nova Sonic through Amazon Bedrock, the company’s enterprise AI platform, via a new streaming API. The model not only speaks with timing awareness — knowing when to respond during pauses and interruptions — but also provides real-time transcripts for developers to use in custom applications.
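As a rough illustration of the general Bedrock access pattern (not Nova Sonic's dedicated speech interface), the sketch below streams a text response from a Bedrock-hosted model with boto3's model-agnostic Converse API. The model ID and region are placeholders, and Nova Sonic's real-time audio interactions go through a separate bidirectional streaming API that this sketch does not cover.

```python
import boto3  # pip install boto3

# General pattern for streaming output from a Bedrock-hosted model.
# Placeholder model ID and region — Nova Sonic's speech-to-speech calls
# use a different, bidirectional streaming API not shown here.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse_stream(
    modelId="amazon.nova-lite-v1:0",  # placeholder Bedrock model ID
    messages=[{"role": "user", "content": [{"text": "Give me a one-line greeting."}]}],
)

# The stream yields incremental text deltas as the model generates them.
for event in response["stream"]:
    delta = event.get("contentBlockDelta", {}).get("delta", {})
    if "text" in delta:
        print(delta["text"], end="", flush=True)
```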
Part of Amazon’s AGI Ambitions
Nova Sonic is just one part of Amazon’s bigger plan to build Artificial General Intelligence (AGI) — AI that can perform any computer-based task a human can do. Prasad shared that future models will expand to handle not just voice but also images, video, and other sensory data, pushing the boundaries of what AI can understand and do in the physical world.
Recently, Amazon also previewed Nova Act, another AI model built to browse the web and power smart shopping features. It’s clear that Amazon is positioning its internal AGI division at the center of its future AI roadmap.
Conclusion
With Nova Sonic, Amazon is stepping confidently into the next generation of AI-powered voice assistants. By combining fast response times, improved accuracy across languages, and deeper conversational intelligence, Amazon is showing it’s ready to compete — and maybe even lead — in the evolving world of AI voice technology.

Samsung’s Ballie Home Robot Just Got Smarter with Google’s Gemini AI
Samsung is bringing some serious brains to its home robot Ballie by teaming up with Google Cloud to integrate the Gemini AI model. This collaboration means Ballie will now be able to understand and respond using both audio and visual input — making it a much more helpful and intuitive companion around the house.
Smarter Interactions, Thanks to Gemini
By combining Samsung’s AI with Google’s Gemini, Ballie can now do things like analyze how you look and suggest outfit ideas using its built-in camera. So instead of just rolling around and playing music, it can actually see and understand what’s happening around it.
Personalized Health Tips at Home
Ballie will also use Gemini to provide wellness suggestions — things like what exercises you can do or how to improve your sleep routine. You’ll also be able to ask general knowledge questions, and the robot will draw on Gemini to give you useful answers on the spot.
Samsung’s Vision for AI at Home
Yongjae Kim, Executive VP at Samsung’s Visual Display Business, shared that this partnership is about more than just features — it's about changing how AI fits into our daily lives. The goal is to create an assistant that moves with you, understands your needs, and becomes a truly interactive part of your home environment.
When Can You Get One?
Samsung has shown off Ballie at events like CES for a few years now, and earlier this year the company announced it would officially launch in South Korea and the U.S. in the first half of 2025. So the wait may finally be over soon.
More to Come from Samsung and Google
This isn't the first time the two tech giants have teamed up — Gemini has already been integrated into Samsung’s Galaxy S24 phones. There are also reports that they’re working together on a mixed reality (XR) device, where Gemini could be a key part of the experience too.

Google AI Mode Now Understands Your Photos: Ask Complex Questions About Images
Google is stepping up its AI-powered search game with a new feature that allows users to ask detailed questions about the images they upload or take with their camera. This update brings multimodal search to AI Mode, Google's experimental search experience that supports complex, multi-part queries and follow-up questions.
With this update, users can interact with photos just as easily as they would with text. Whether it’s a picture of a bookshelf, a product, or a room, AI Mode can now analyze the entire scene to provide intelligent, in-depth answers.
Smarter Searches with Photos
The new feature uses Google Lens’ advanced image recognition technology to understand not just what’s in a photo, but how everything in the image relates to everything else. It can detect colors, materials, object relationships, and more.
For example, if you take a picture of your bookshelf and ask, “If I enjoyed these, what are some similar books that are highly rated?” — AI Mode will identify the titles, understand your reading preferences, and suggest similar books. You can then follow up with something like, “Which of these is the shortest read?” to narrow things down even further.
This is made possible by a method called query fan-out, which breaks your question into multiple parts and explores each one to offer a more thorough response than a standard search.
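Google hasn’t published how query fan-out is implemented, but the idea can be sketched as: break the question into sub-queries based on what’s in the image, resolve each one, then synthesize a single answer. The helper functions below are hypothetical stand-ins, not Google’s actual APIs.

```python
from typing import Dict, List

# Conceptual sketch of "query fan-out": split a multi-part question into
# sub-queries, resolve each independently, then synthesize one answer.
# decompose/search/synthesize are hypothetical stand-ins for the model
# and retrieval calls Google would actually use.

def decompose(question: str, objects: List[str]) -> List[str]:
    # Hypothetical: one sub-query per object recognized in the photo.
    return [f"books similar to {obj}" for obj in objects]

def search(sub_query: str) -> List[str]:
    # Hypothetical retrieval call; returns a canned result here.
    return [f"top-rated match for '{sub_query}'"]

def synthesize(question: str, findings: Dict[str, List[str]]) -> str:
    # Hypothetical: a model would merge the findings; here we just join them.
    results = [hit for hits in findings.values() for hit in hits]
    return f"{question}\n- " + "\n- ".join(results)

def query_fan_out(question: str, objects: List[str]) -> str:
    sub_queries = decompose(question, objects)
    findings = {q: search(q) for q in sub_queries}
    return synthesize(question, findings)

print(query_fan_out(
    "If I enjoyed these, what are some similar books that are highly rated?",
    ["Project Hail Mary", "The Martian"],
))
```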
Expanded Access
Until now, AI Mode was only available to Google One AI Premium subscribers. But starting this week, Google is expanding access to millions of users who are part of Google Labs, the company’s platform for testing experimental features.
AI Mode was introduced just last month and is already being compared to other advanced tools like Perplexity and OpenAI’s ChatGPT Search. Google says it plans to continue improving the experience, focusing on making searches more natural, visual, and intuitive.
Conclusion
Google’s latest update to AI Mode is a big step forward in making search more interactive and intelligent. By allowing users to ask questions about what they see—not just what they type—Google is opening up new ways to explore the world around us.
