Tech News Hub
AWS Debuts Incident Response Service to Bolster Cybersecurity
Cyberattacks are becoming more frequent and destructive, and many businesses struggle to respond to them effectively. Despite these risks, only 60% of organizations have an incident response plan, and even fewer conduct regular exercises to practice it. The International Monetary Fund projects that global cyberattack costs will surge from $8.4 trillion in 2022 to more than $23 trillion by 2027.
Amazon Web Services (AWS) recently launched a new tool called AWS Security Incident Response. It helps businesses recover quickly after an attack. Hart Rossman, VP of Global Services Security at AWS, said, "Many organizations find it hard to manage the tools and teams needed for effective incident response. This tool makes everything simpler by bringing everything into one place."
Simplifying Cybersecurity
The new AWS service integrates with tools such as GuardDuty (AWS' threat detection tool) as well as third-party security systems. It enables businesses to detect and respond to threats like ransomware, data breaches, and account takeovers. Here's what it offers:
- Automatic Threat Detection: The system rapidly identifies and categorizes threats, saving time for security teams.
- Easy Management: A centralized dashboard shows active incidents, historical data, and how long it takes to resolve issues.
- Proactive Solutions: Most threats are resolved automatically, and the complex ones are escalated to the right people in the organization.
What makes AWS stand out is its seamless integration with other AWS tools and access to its expert support team. Businesses can also work with their security vendors if they prefer.
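For teams wondering what that GuardDuty signal looks like in practice, here is a minimal sketch using the boto3 SDK to pull recent high-severity GuardDuty findings, the raw detections a response service like this one ingests and triages. Note that this exercises GuardDuty's long-standing public API, not the new service's own interface, and the region and severity threshold are arbitrary choices for illustration.

```python
import boto3

# GuardDuty is AWS' threat-detection service; its findings are the kind of
# signal an incident response workflow starts from.
guardduty = boto3.client("guardduty", region_name="us-east-1")

detector_ids = guardduty.list_detectors()["DetectorIds"]
if not detector_ids:
    raise SystemExit("No GuardDuty detector is enabled in this region.")

# Filter for severity >= 7, GuardDuty's "High" band.
finding_ids = guardduty.list_findings(
    DetectorId=detector_ids[0],
    FindingCriteria={"Criterion": {"severity": {"Gte": 7}}},
)["FindingIds"]

if finding_ids:
    findings = guardduty.get_findings(
        DetectorId=detector_ids[0], FindingIds=finding_ids[:50]
    )["Findings"]
    for finding in findings:
        print(finding["Severity"], finding["Type"], finding["Title"])
else:
    print("No high-severity findings right now.")
```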
Why It Matters
This new service gives companies already on AWS a streamlined way to strengthen their cybersecurity. The PGA Tour is one of its early adopters, and with the global incident response market projected to reach $89 billion by 2030, AWS is positioning itself to dominate the field.
Conclusion
Cybersecurity threats aren't disappearing anytime soon, but tools like AWS Security Incident Response can make managing them a whole lot easier. By automating and unifying so many tasks, the service helps businesses stay ahead of attackers. It is available today through the AWS Management Console and APIs, offering a lifeline for companies ready to take their security to the next level.
Bluesky Pledges Stronger Verification Amid Rising Impersonation Concerns
As Bluesky's expanding user base brings in more celebrities and influencers, the platform is taking a tougher approach to rising impersonation. It has introduced stricter policies and a substantially larger moderation team to crack down on handle-squatting and fake accounts.
The Bluesky Safety team announced updates to its impersonation policy, promising to be aggressive toward fake accounts. Impersonation and handle-squatting accounts will be taken down much faster, backed by a fourfold increase in the moderation team's size. Although the platform's rapid growth has left a backlog of reports, the expanded team is handling cases far more efficiently.
Verification Without Badges
Unlike traditional social platforms, Bluesky does not offer verification badges to confirm a user's authenticity. Instead, verified domains in user handles, such as "bsky.team," signal a trusted account. The platform is also actively working with high-profile users and organizations to set up verified domain handles, with the aim of enhancing trust across the network.
However, the same system has caused confusion, leaving users to wonder whether some accounts are real or parodies. In response, users have put together unofficial verification lists and badges to help the community recognize authentic accounts.
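For the technically curious, the domain-handle system is not magic: a handle such as "bsky.team" counts as verified because it resolves to the account's decentralized identifier (DID), and only the domain's owner can publish the DNS record or well-known file that the resolution checks. Here is a minimal sketch against AT Protocol's public resolveHandle endpoint; the DID shown in the comment is only an example.

```python
import requests

# A Bluesky handle resolves to a DID via a public, unauthenticated XRPC
# method. Domain handles are trustworthy because publishing the required
# _atproto DNS TXT record (or /.well-known/atproto-did file) requires
# control of the domain itself.
def resolve_handle(handle: str) -> str:
    resp = requests.get(
        "https://public.api.bsky.app/xrpc/com.atproto.identity.resolveHandle",
        params={"handle": handle},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["did"]

print(resolve_handle("bsky.app"))  # e.g. did:plc:z72i7hdynmk6r22z27h6tvur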
Transparent Rules for Parody Accounts
Bluesky allows parody and fan accounts but requires them to be transparent about their nature: such accounts must identify themselves in both their display name and bio. The platform also bars "identity churning," where users create fake accounts to collect followers and then switch to another identity. Accounts that do this will be deleted under the new policies.
Looking Ahead: Expanded Verification Options
Bluesky CEO Jay Graber hinted at plans to enhance verification further. The platform may enable other organizations or apps to offer their own verification systems, giving users more options to establish their identity.
The Bluesky Safety team echoed this sentiment, saying, “We hear your feedback: users want more ways to verify their identity beyond domain verification. We’re exploring additional options and will share updates soon.”
Conclusion
Bluesky's stricter impersonation policies and improved verification reflect its focus on maintaining user trust as its base expands. The platform is proactively working to create a safer, more authentic space for all its users.
Canadian Publishers Take Legal Action Against OpenAI Over ChatGPT Training
OTTAWA – A coalition of major Canadian news organizations is taking legal action against OpenAI, accusing the ChatGPT creator of using their copyrighted material without permission to train its generative AI system.
The group, which comprises The Canadian Press, Torstar, The Globe and Mail, Postmedia, and CBC/Radio-Canada, claims that OpenAI has been "scraping" their content to feed its AI models, a practice they argue breaches copyright law.
"OpenAI is building its business on the hard work of Canadian journalists without consent or compensation," the coalition said Friday.
Journalism Meets AI
The publishers stress that, collectively, they invest hundreds of millions of dollars annually in producing quality journalism. They welcome technological innovation but insist that it must respect intellectual property rights.
The statement emphasized that technological progress is important, but that it must not come at the expense of fair practices or in disregard of the law.
The Bigger Picture
Generative AI systems such as ChatGPT rely on huge amounts of data, frequently sourced from online content, to generate text, images, and other output. This approach has propelled the creation of groundbreaking tools, but it also raises significant questions about where that content comes from and whether its use respects copyright.
A First in Canada
This lawsuit is the first of its kind in Canada, but similar cases are unfolding in the United States, including one filed by The New York Times against OpenAI and Microsoft.
What's at Stake
The case could determine how far AI developers may go in using copyrighted content, redefining the relationship between media organizations and tech companies. The ruling may also have far-reaching effects on the wider debate over fair use in journalism, intellectual property, and the future of generative AI worldwide.
Australia Bans Social Media for Children Under 16: A Bold Move
Australia has taken a groundbreaking step to protect its youngest internet users: children under 16 will be barred from social media, addressing growing concerns over the platforms' harmful effects on kids' mental health and overall well-being.
The law takes effect in November 2025 and requires social media services to implement effective age verification. Companies that fail to comply face penalties of up to AUD 50 million.
Why Is This Important?
The decision comes as evidence mounts that social media can be hostile to children. From mental health problems to social pressures, the risks have become impossible to overlook. The law is a testament to Australia's commitment to making the internet safer for its kids.
Tech Companies Object
Not all parties agree. Big technology firms such as Google and Meta pleaded for a delay, pointing to a live age-verification trial that will not conclude until the middle of next year. The government, however, decided children's safety could not wait. One major sticking point remained: privacy.
Few people want to expose sensitive documents like a passport just to verify their age. An amendment to the bill addresses this by ensuring platforms cannot request personal information for age checks, striking a balance between child protection and users' rights.
Is This a Global Shift Waiting to Happen?
Australia's move could inspire other countries to follow suit; many nations still struggling with how to regulate social media now have a model to draw on.
Conclusion
By banning social media for children under 16, Australia is taking a strong stance for youth safety in the digital age. It’s a bold step, but one that could make a lasting difference. The question now is, will other nations join in?
Orange Teams Up with OpenAI for Early Access to AI Models in Europe
Orange, the French telecom giant, has just signed an exciting multi-year deal with OpenAI, giving the company exclusive access to pre-release AI models. This makes Orange the first telecom company in Europe to partner directly with OpenAI, and it's a big step forward for the integration of AI in the telecom industry.
Why This Partnership Matters
This is more than a technical partnership. Steve Jarrett, Orange's chief AI officer, said that OpenAI's models are among the most popular in the AI space, making it smart for Orange to work directly with the company. Orange will not only gain early access to new versions of these models but will also have an influence on their development. In addition, the models will be hosted on secure infrastructure within Europe, keeping data privacy and security at the forefront.
AI for Orange's Workforce and Customers
The partnership comes at a time when AI is rapidly changing the way businesses operate. Over 50,000 employees at Orange are already using OpenAI’s models, which means the company is already seeing firsthand how AI can boost productivity and improve services. The early access to OpenAI’s models will allow Orange to integrate them across its operations, enhancing customer support and creating new ways for the company to engage with users.
A Step Toward Including African Languages
Orange is not stopping there. In partnership with Meta and OpenAI, the company is working to bring African languages, including Wolof and Pular, into AI models so they can be used in customer support. For Orange, this means preserving and modernizing the use of African languages while supporting the broader push for inclusive AI. It is a critical initiative, stretching AI beyond purely commercial applications to help governments, universities, and startups in Africa benefit from AI-powered tools.
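Orange has not published technical details of this work, so the following is purely a hypothetical sketch of the general shape of the idea: routing a customer-support message through a hosted OpenAI model for translation, using the openai Python SDK. The model name and prompt are assumptions for illustration, not details from the announcement.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate_support_message(text: str, source_lang: str = "Wolof",
                              target_lang: str = "French") -> str:
    """Translate a customer-support message between languages."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable hosted model works here
        messages=[
            {"role": "system",
             "content": f"You translate {source_lang} customer-support "
                        f"messages into {target_lang}. Reply with the "
                        f"translation only."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Example call (replace the placeholder with a real customer message):
# translate_support_message("<message in Wolof>")
```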
What's in the Future for Telecom and AI?
This alliance sets the stage for even greater cooperation between telecom operators and AI developers. By partnering directly with OpenAI, Orange positions itself as a front-runner in the world of telecommunications, using AI to improve internal processes as well as the customer experience. It's clear this is the beginning of something much larger for the telecom industry in general, with AI taking a front-row seat in shaping connectivity and services for the future.
Conclusion
Orange's deal with OpenAI is a game-changer for both the telecom and AI industries. With early access to cutting-edge models and an exciting project to bring African languages into the AI fold, this partnership shows how telecom companies can lead the way in AI innovation. It's a big step forward for Orange, and it's sure to have a lasting impact on the future of telecom services across Europe and beyond.
Microsoft Denies Using Customer Data to Train AI Models
Microsoft is denying recent claims that it uses customer content from Microsoft 365 applications such as Word and Excel to build its AI models. The company was responding to social media discussions about its "connected experiences" feature, which left many people worried that their data was being misused without their knowledge.
Microsoft's Statement on Data Usage
In a statement, Microsoft said, "These claims are false. Microsoft does not use customer data from Microsoft 365 consumer and commercial applications to train foundational large language models." The "connected experiences" feature is meant to enable things like collaborative document editing and cloud storage, not to feed AI model training.
Privacy Concerns Persist
Though Microsoft has denied the allegations, the buzz on social media points to growing unease over how companies use our data. With AI becoming increasingly integrated into everyday tools, more and more users worry that their personal or business data might be used without permission, fueling wider debates about privacy.
Microsoft's Moves to Rebuild Trust
In its response, Microsoft assured users that it is committed to protecting their privacy. The company reiterated that it does not use customer data to train its AI models and pledged to remain transparent about its data practices. Microsoft wants users to feel confident that their information is secure and not used in ways they have not agreed to.
Conclusion
While Microsoft has been prompt in clarifying the issue, the bigger conversation about AI and data privacy remains. One thing is sure: as AI technology evolves, companies such as Microsoft will need clear data practices in place to maintain a strong relationship with their users, and the debate over how personal data should be handled is only bound to grow.
Spotify Teams up with Google's Gemini for Smarter Music Searches
Spotify has partnered with Google's generative AI assistant, Gemini, to make music discovery easier and more intuitive. This new feature allows Spotify users to search for and play music using natural language commands, elevating how users interact with their favorite tunes.
First spotted in Google app code earlier this year, the Gemini extension is now rolling out to compatible Android devices, making Spotify only the second app, after WhatsApp, to get this kind of support.
What Does the Gemini Integration Offer?
The integration lets Spotify users discover and play songs using voice or text commands. Whether it's a song title, an artist, an album, or music for an activity, the AI assistant makes it easy. There are some limitations, however: you cannot use Gemini to create playlists or radio stations on Spotify.
For users with multiple linked music services, such as YouTube Music, Gemini will first ask which service to use; once Spotify is chosen, it remembers that choice for subsequent search requests.
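Gemini's Spotify extension itself is closed, but the core step, turning a free-form request into a playable track, can be approximated with Spotify's public Web API. A rough sketch follows; the client credentials are placeholders you would create in Spotify's developer dashboard, and none of this reflects Gemini's actual internals.

```python
import requests

def get_app_token(client_id: str, client_secret: str) -> str:
    """Client-credentials flow: an app-level token, enough for public search."""
    resp = requests.post(
        "https://accounts.spotify.com/api/token",
        data={"grant_type": "client_credentials"},
        auth=(client_id, client_secret),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def search_tracks(token: str, query: str, limit: int = 5) -> list[str]:
    """Turn a free-form text query into a list of matching tracks."""
    resp = requests.get(
        "https://api.spotify.com/v1/search",
        params={"q": query, "type": "track", "limit": limit},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["tracks"]["items"]
    return [f'{t["name"]} by {t["artists"][0]["name"]}' for t in items]

# token = get_app_token("<client-id>", "<client-secret>")
# print(search_tracks(token, "upbeat music for a morning run"))
```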
How to Set Up
To use Gemini with Spotify, users must:
- Link Spotify and Google accounts.
- Enable Gemini Apps Activity, which retains AI-related queries for 72 hours.
The feature can only be enabled on Android devices with the language set to English. It's currently unavailable in Google Messages, the Gemini web app, and the Gemini app on iOS.
Why It Matters
The Gemini integration is a significant step forward in how users access and enjoy music on Spotify, tapping the power of generative AI to make the experience both smoother and more personalized. Despite its initial limitations, the partnership shows how AI-driven features can enhance daily interactions with popular apps.
As the rollout continues, users can look forward to easier music discovery and potentially more advanced integrations in the future. With Spotify and Google joining forces, the future of music streaming is set to become even smarter.