OpenAI’s AI Training Opt-Out Tool for Creators Not Releasing Anytime Soon: Report
https://thenewshub.in/2025/01/02/openais-ai-training-opt-out-tool-for-creators-not-releasing-anytime-soon-report/ – Thu, 02 Jan 2025 08:07:41 +0000

In May 2024, OpenAI announced a new machine learning (ML) tool that would let creators specify whether they want OpenAI to train its artificial intelligence (AI) models on their content. Dubbed Media Manager, the tool was said to identify copyrighted text, images, audio, and videos across multiple sources and reflect creators’ preferences. However, the company has yet to launch the tool, and as per a report, the development and release of Media Manager is not a priority.

According to a TechCrunch report, the AI firm does not view the tool as an important project internally. Unnamed people familiar with the matter told the publication that it was likely not a priority for OpenAI and that nobody was working on it. Another unnamed source reportedly highlighted that while the tool was discussed in the past, there have not been any recent updates on it.

Additionally, the company told TechCrunch that Fred von Lohmann, a member of its legal team who was working on the AI tool, transitioned to a part-time consultant role in October 2024. These developments potentially indicate that the AI tool is not part of the company’s short-term roadmap. Notably, it has been seven months since Media Manager was first mentioned.

The AI tool was the company’s way of giving creators a means to exclude their copyrighted content from being used to train OpenAI’s large language models (LLMs). The company also has a form-based process that creators can use to ask the ChatGPT maker to remove copyrighted material from its AI models’ training data. However, it is a cumbersome process: complainants must list and describe every item of their content before the AI firm will act on it.

Media Manager, by contrast, would use AI and ML processes to automatically detect content across websites and other sources and cross-check it against the names of creators who have opted out of AI training.
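The report offers no implementation details, but the described workflow (identify a piece of content, then check whether its creator has opted out) can be sketched in a minimal, purely hypothetical form. The registry, function names, and matching step below are assumptions for illustration, not anything OpenAI has published.

```python
# Purely illustrative sketch: OpenAI has not published how Media Manager
# would work. The registry, function names, and matching step below are
# assumptions used only to show the described opt-out cross-check flow.
from typing import Optional

# Assumed registry of creators who have opted out of AI training.
OPTED_OUT_CREATORS = {"example_photographer", "example_writer"}


def identify_creator(content: bytes) -> Optional[str]:
    """Placeholder for the content-identification step (e.g. fingerprinting
    or an ML matching model). Returns the matched creator's name, or None."""
    # A real system would compare fingerprints or embeddings against an index.
    return None


def allowed_for_training(content: bytes) -> bool:
    """Exclude content whose identified creator has opted out of AI training."""
    creator = identify_creator(content)
    if creator is None:
        # How unidentified content would be handled is also an assumption here.
        return True
    return creator not in OPTED_OUT_CREATORS
```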

Several domain experts reportedly expressed concerns over the efficiency of the AI tool and highlighted that even giant platforms such as YouTube and TikTok struggle with content identification at scale. Others have reportedly criticised OpenAI’s Media Manager for putting the burden of opting out on creators who might not even know about such an AI tool.

YouTube Creators Get New Option to Allow Third-Party AI Firms to Train Models on Their Videos
https://thenewshub.in/2024/12/17/youtube-creators-get-new-option-to-allow-third-party-ai-firms-to-train-models-on-their-videos/ – Tue, 17 Dec 2024 13:03:01 +0000

YouTube announced a new update on Monday that will give content creators on the platform more control over third-party artificial intelligence (AI) training. The move comes after the video-streaming giant introduced new tools to protect creators from deepfakes that imitate their likenesses, including their faces and voices. The new option will let content creators decide whether they want third-party AI firms to access their videos to train large language models (LLMs). They can also grant permission to specific AI companies while forbidding others from using their videos.

YouTube Lets Creators Decide Which AI Firm Can Train Models Using Their Videos

Companies developing LLMs are racing to source new data to train their AI models. With publicly available data largely exhausted by these AI firms, they are looking for new ways to find large deposits of high-quality data to train models and make them more capable.

While some AI companies have taken the content-partnership route, it is generally considered expensive to source such data. Another option is synthetic data, which is created by other generative AI models. However, there is a risk that such data can be low-quality, which can negatively impact the growth of newer models.

As such, companies are trying to collaborate with content creators to find new high-quality data to train AI models. For instance, Grok is currently trained on public posts on X (formerly known as Twitter), and Meta AI is trained on public posts on Facebook and Instagram.

YouTube has also become a platform of interest for AI firms, given the large amount of human-created data it hosts. With the rise of video generation models, this data becomes even more valuable. However, to protect creators, the video-streaming giant has so far disallowed companies from crawling and scraping videos without authorisation.

In a support document, the company announced a new option that will allow content creators on the platform to choose whether they want to let AI firms access their videos to train LLMs. Over the next few days, YouTube plans to roll out an update that adds a “Third-party training” section to Studio Settings.

There, creators can choose to allow specific AI companies to scrape their videos. The list of companies currently includes AI21 Labs, Adobe, Amazon, Anthropic, Apple, ByteDance, Cohere, IBM, Meta, Microsoft, Nvidia, OpenAI, Perplexity, Pika Labs, Runway, Stability AI, and xAI. Notably, creators can also open their videos to all AI companies by picking the relevant option.

YouTube highlights that only videos authorised by both the creators and the applicable rights holders will be eligible for AI training. Additionally, the company’s terms of service still apply, meaning AI firms cannot illegitimately scrape videos from the platform.

This new option does not include any mention of compensation from AI firms to the creators for using their videos. However, YouTube highlighted that it will continue to facilitate new forms of collaboration between creators and third-party companies.

Meta Reportedly Refuses to Clarify Whether Videos Captured by Ray-Ban Meta Smart Glasses Will Remain Private
https://thenewshub.in/2024/10/01/meta-reportedly-refuses-to-clarify-whether-videos-captured-by-ray-ban-meta-smart-glasses-will-remain-private/ – Tue, 01 Oct 2024 10:04:02 +0000

Meta is reportedly staying quiet on whether it collects video and image data from its artificial intelligence (AI) wearable, the Ray-Ban Meta smart glasses, to train its large language models (LLMs). The company announced a new real-time video feature for the device that lets users ask the AI to answer queries and offer suggestions based on their surroundings. However, there is no clarity on what happens to this data once the AI has responded to the query.

The feature in question is the real-time video capability that allows Meta AI to “look” at the user’s surroundings and process that visual information to answer any query a user may have. For instance, a user can ask it to identify a famous landmark, show it their closet and ask for wardrobe suggestions, or even ask for recipes based on the ingredients in the refrigerator.

However, each of these functionalities requires the Ray-Ban Meta smart glasses to passively capture videos and images of the surroundings to understand the context. Ordinarily, once the response has been generated and the user has ended the conversation, this data should remain on private servers, if not be deleted immediately, because much of it could contain private information about the user’s home and other belongings.

But Meta is reportedly not saying. When asked whether the company stores this data and trains its own AI models on it, a Meta spokesperson told TechCrunch that the company is not publicly discussing the matter. Another spokesperson reportedly said this information is not being shared externally, adding, “we’re not saying either way.”

The company’s refusal to clearly state what happens to this user data is concerning, given the private and potentially sensitive nature of the footage the smart glasses can capture. While Meta has confirmed that it trains its AI models on the public posts of its US-based Facebook and Instagram users, the data from the Ray-Ban Meta smart glasses is not public.

Gadgets 360 has reached out to Meta for a comment. We will update the story once we receive a statement from the company.
