Google, which owns YouTube, said it has “seen unconfirmed reports” of OpenAI’s activity.
Tag: OpenAI
-
Tesla CEO Elon Musk Thought That OpenAI Would Fail: OpenAI CEO Sam Altman
Elon Musk sued OpenAI and Altman, alleging they breached their original contractual agreements around AI.
-
Elon Musk Announces xAI's Plan To Open-Source Grok Chatbot Amid OpenAI Dispute
Grok became available to premium users on X, formerly known as Twitter, in December 2023.
-
Nvidia Faces Lawsuit Over AI Copyright Infringement By Authors
New Delhi: Nvidia, a leading provider of chips for artificial intelligence, finds itself embroiled in legal action after three authors, Brian Keene, Abdi Nazemian, and Stewart O'Nan, filed a lawsuit against the company. The authors allege that Nvidia used their copyrighted books without permission in the development of its NeMo platform, sparking a contentious legal battle over intellectual property rights.
The authors argue that their books were part of a dataset of 196,640 books used to train Nvidia's NeMo AI platform, which aimed to simulate everyday written language. They say the dataset was taken down in October following allegations of copyright infringement, and the legal filing suggests that Nvidia's removal of the dataset amounts to an acknowledgment of copyright violation.
In the proposed class action, the authors seek unspecified damages on behalf of people in the United States whose copyrighted works were used to train NeMo's large language models over the past three years.
The lawsuit covers various types of literary works, such as Keene's novel “Ghost Walk” from 2008, Nazemian's novel “Like a Love Story” from 2019, and O'Nan's novella “Last Night at the Lobster” from 2007.
Nvidia declined to comment on Sunday, and lawyers representing the authors did not immediately respond to requests for further comment. The lawsuit pulls Nvidia into a growing pool of legal disputes initiated by writers, as well as The New York Times, over generative AI, which creates new content from inputs such as text, images, and sounds.
Other tech giants, including OpenAI, the developer of the AI platform ChatGPT, and its collaborator Microsoft, are also facing legal issues linked to generative AI.
Despite facing legal hurdles, Nvidia continues to attract investors' favor, benefiting from the ongoing growth of AI technology. Since the close of 2022, the company's stock price has surged by nearly 600%, propelling Nvidia's market capitalization to nearly $2.2 trillion.
The lawsuit is identified as Nazemian et al v Nvidia Corp in the US District Court for the Northern District of California, under case number 24-01454.
-
Elon Musk Sues OpenAI And CEO Sam Altman Over Agreement Breach
New Delhi: Elon Musk, the CEO of Tesla and SpaceX, has sued OpenAI and its CEO, Sam Altman, accusing them of violating their initial contractual agreements concerning artificial intelligence (AI). Filed in a San Francisco court in the United States, the lawsuit centers on OpenAI's recent development of the GPT-4 natural language model.
This, the suit argues, is despite an agreement that the company's artificial general intelligence capabilities would remain non-profit and aimed at serving humanity.
“Musk has long recognized that AGI poses a grave threat to humanity — perhaps the greatest existential threat we face today,” read the lawsuit.
In Musk's lawsuit, he outlines grievances including breach of contract, violation of fiduciary duty, and unfair business practices. Musk served as a founding board member of OpenAI until 2018.
According to the lawsuit, OpenAI's initial research was performed in the “open, providing free and public access to designs, models, and code”.
When OpenAI researchers discovered that an algorithm called “Transformers,” initially invented by Google, could perform many natural language tasks without any explicit training, “entire communities sprung up to enhance and extend the models released by OpenAI”.
Altman became OpenAI CEO in 2019. On September 22, 2020, OpenAI entered into an agreement with Microsoft, exclusively licensing to Microsoft its Generative PreTrained Transformer (GPT)-3 language model.
“Most critically, the Microsoft license only applied to OpenAI's pre-AGI technology. Microsoft obtained no rights to AGI. And it was up to OpenAI's non-profit Board, not Microsoft, to determine when OpenAI attained AGI,” the lawsuit further read.
Musk said that this case is filed to compel OpenAI to “adhere to the Founding Agreement and return to its mission to develop AGI for the benefit of humanity, not to personally benefit the individual defendants and the largest technology company in the world”. (With inputs from IANS)
-
Google Pauses Gemini’s People Image Feature After Anti-‘Woke’ Backlash
New Delhi: Google has paused the image-generation function of its Gemini AI chatbot following a string of controversies. The company acknowledged that Gemini had provided “incorrect representations in certain historical contexts” and assured users that an enhanced version of the feature would be released shortly.
In a statement, Google said: “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
Images produced by Gemini spread across social media recently, leading to widespread ridicule and anger. Some users criticized Google, claiming that the company prioritizes being “woke” over truth or accuracy.
Some of the images that drew criticism included a portrayal of four Swedish women, none of whom were depicted as white, as well as scenes featuring Black and Asian soldiers dressed as Nazis.
“It’s embarrassingly hard to get Google Gemini to acknowledge that white people exist,” wrote Debarghya Das, a founding engineer at the enterprise search startup Glean, in a post on X containing several images created by Gemini.
The challenges faced by AI models, such as overlooking people of color and perpetuating stereotypes, underscore the need for continual improvement and ethical considerations in AI development.
Google’s efforts to keep pace with competitors like OpenAI since the launch of ChatGPT in 2022 have been accompanied by setbacks in the rollout of its AI products. Instances like the apology issued last year for inaccuracies in its AI chatbot Bard’s demonstration highlight the ongoing complexities in AI technology.
-
OpenAI, Meta And Other Tech Giants Sign Effort To Fight AI Election Interference
New Delhi: A group of 20 tech companies announced on Friday they have agreed to work together to prevent deceptive artificial-intelligence content from interfering with elections across the globe this year.
The rapid growth of generative artificial intelligence (AI), which can create text, images and video in seconds in response to prompts, has heightened fears that the new technology could be used to sway major elections this year, as more than half of the world’s population is set to head to the polls.
Signatories of the tech accord, which was announced at the Munich Security Conference, include companies that are building generative AI models used to create content, including OpenAI, Microsoft and Adobe. Other signatories include social media platforms that will face the challenge of keeping harmful content off their sites, such as Meta Platforms, TikTok and X, formerly known as Twitter.
The agreement includes commitments to collaborate on developing tools for detecting misleading AI-generated images, video and audio, creating public awareness campaigns to educate voters on deceptive content and taking action on such content on their services.
Technology to identify AI-generated content or certify its origin could include watermarking or embedding metadata, the companies said. The accord did not specify a timeline for meeting the commitments or how each company would implement them.
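The accord does not say how each company would watermark or tag content, and production schemes (such as C2PA-style signed metadata or statistical watermarks) are far more robust than anything this short. Still, a minimal, purely illustrative least-significant-bit sketch shows the basic idea of carrying a provenance tag inside the media itself; the tag string and functions here are invented for illustration, not any signatory's actual method:

```python
# Illustrative LSB watermarking sketch: hide a short provenance tag in the
# least-significant bits of image pixel values (one channel, values 0-255).
def embed_tag(pixels, tag):
    """Embed the UTF-8 bytes of `tag` (length-prefixed) into pixel LSBs."""
    data = bytes([len(tag)]) + tag.encode("utf-8")
    # Flatten each byte into 8 bits, low bit first.
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_tag(pixels):
    """Recover the tag by reading the LSBs back in the same order."""
    def read_byte(offset):
        return sum((pixels[offset * 8 + i] & 1) << i for i in range(8))
    length = read_byte(0)
    return bytes(read_byte(1 + j) for j in range(length)).decode("utf-8")

pixels = [200] * 1024            # stand-in for one channel of a tiny image
marked = embed_tag(pixels, "ai-generated:2024")
print(extract_tag(marked))       # the tag survives inside the pixels
```

Each pixel changes by at most 1, so the mark is invisible to the eye; the obvious weakness, and the reason real provenance systems add cryptographic signing and redundancy, is that re-encoding or resizing the image destroys the bits.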
“I think the utility of this (accord) is the breadth of the companies signing up to it,” said Nick Clegg, president of global affairs at Meta Platforms. “It’s all good and well if individual platforms develop new policies of detection, provenance, labeling, watermarking and so on, but unless there is a wider commitment to do so in a shared interoperable way, we’re going to be stuck with a hodgepodge of different commitments,” Clegg said.
Generative AI is already being used to influence politics and even convince people not to vote. In January, a robocall using fake audio of US President Joe Biden circulated to New Hampshire voters, urging them to stay home during the state’s presidential primary election.
Despite the popularity of text-generation tools like OpenAI’s ChatGPT, the tech companies will focus on preventing harmful effects of AI photos, videos and audio, partly because people tend to be more skeptical of text, said Dana Rao, Adobe’s chief trust officer, in an interview.
“There’s an emotional connection to audio, video and images,” he said. “Your brain is wired to believe that kind of media.”
-
OpenAI Can’t Register GPT As Trademark, Rules US Patent Office
As generative AI use surged last year, several AI companies added GPT to their product names.
-
What Is Sora That Creates 1-Min AI Video From Text? All About ChatGPT Maker OpenAI’s Instant Video Maker
New Delhi: ChatGPT maker OpenAI has taken another step into generative AI by unveiling Sora, an AI model that can create realistic and imaginative scenes from text instructions.
“Introducing Sora, our text-to-video model. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions,” tweeted OpenAI.
In the video accompanying the tweet, OpenAI demonstrates a short AI-generated clip produced from the prompt: “Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes.”
OpenAI has said that Sora, the text-to-video model, can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.
“In addition to being able to generate a video solely from text instructions, the model is able to take an existing still image and generate a video from it, animating the image’s contents with accuracy and attention to small detail. The model can also take an existing video and extend it or fill in missing frames,” OpenAI said.
Sora is not yet publicly available; for now, access is limited to red teamers. Sora builds on past research in DALL·E and GPT models, using the recaptioning technique from DALL·E 3, which involves generating highly descriptive captions for the visual training data.
“We’ll be taking several important safety steps ahead of making Sora available in OpenAI’s products. We are working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who will be adversarially testing the model,” OpenAI said.
-
Hackers Utilizing ChatGPT To Enhance Cyberattacks, Microsoft And OpenAI Reveal
New Delhi: Microsoft and OpenAI announced on Wednesday that hackers are utilizing large language models (LLMs) such as ChatGPT to enhance their current cyber-attack methods. The companies have identified efforts by groups supported by Russia, North Korea, Iran, and China to utilize tools such as ChatGPT for investigating targets and developing social engineering tactics.
In partnership with Microsoft Threat Intelligence, OpenAI intervened to disrupt five state-affiliated actors who aimed to utilize AI services to facilitate malicious cyber operations.
“We disrupted two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard,” said the Sam Altman-run company.
The identified OpenAI accounts associated with these actors were terminated. These bad actors sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks.
“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft said in a statement.
While attackers will remain interested in AI and probe technologies’ current capabilities and security controls, it’s important to keep these risks in context, said the company.
“As always, hygiene practices such as multifactor authentication (MFA) and Zero Trust defenses are essential because attackers may use AI-based tools to improve their existing cyberattacks that rely on social engineering and finding unsecured devices and accounts,” the tech giant noted. (With IANS Inputs)