Twitter CEO Elon Musk said Saturday that the social media platform will limit how many tweets users can read due to “extreme” levels of system manipulation and data scraping.
Musk said in a statement that Twitter had applied temporary limits, with new unverified accounts initially restricted to reading just 300 posts per day.
After subsequent increases, the limits now stand at 1,000 posts per day for existing unverified accounts, meaning ones without a blue checkmark, while verified accounts enjoy ten times that volume, 10,000 posts per day.
The quota for new unverified accounts was likewise raised to 500 posts per day.
Some users expressed disappointment about the throttling.
“Putting hard limits on reads is web 1.0 stuff,” wrote the Disclose.tv verified account, which has around 1.2 million followers.
“I may be overestimating, but it feels like I usually see more than 6,000 posts a day as part of my job,” John Junyszek, a senior community manager at 343 Industries, wrote in a tweet commenting on Musk’s posts. “It feels like it could negatively impact people who use this platform the most.”
“If this does end up causing issues for folks, would you be open to increasing the view limit?” he asked.
Then, in a follow-up post about 15 minutes after Junyszek’s message, Musk said that the rate limits would “soon” be raised to 8,000 posts per day for verified accounts, 800 for unverified accounts, and 400 for new unverified accounts.
“Rate limited due to reading all the posts about rate limits,” Musk joked in a tweet.
Earlier, Twitter announced it would require users to have an account on the social media platform to view tweets, a move that Musk on Friday called a “temporary emergency measure.”
Musk said at the time that hundreds of organizations or more were scraping Twitter data “extremely aggressively,” with a negative impact on user experience.
The Twitter chief had earlier expressed displeasure with artificial intelligence firms like OpenAI, the maker of ChatGPT, for using Twitter’s data to train their large language models.
Musk Threatens to Sue Microsoft
In April, Musk threatened to sue Microsoft, which has invested billions into OpenAI, after accusing the company of using Twitter data for training.
“They trained illegally using Twitter data. Lawsuit time,” Musk wrote on Twitter on April 19, without providing further details regarding the allegations.
While Musk did not provide evidence of Microsoft’s alleged “illegal training” and did not state what the training was for, ChatGPT is trained using reinforcement learning from human feedback (RLHF) and large bodies of text from various sources across the internet, including human conversations.
Musk’s tweet came shortly after Microsoft announced it was removing Twitter from one of its advertising platforms.
Microsoft did not respond to a request for comment from The Epoch Times on Musk’s lawsuit threat.
Earlier, Musk joined more than 1,100 individuals, including experts and industry executives such as Apple co-founder Steve Wozniak, in signing an open letter calling on all artificial intelligence labs to pause the training of systems more powerful than GPT-4 for at least six months.
The letter doesn’t call for a halt to AI development in general, only for the most advanced systems; the signatories described the pause as “merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”
Musk, along with other signatories of the letter, cited concerns over AI’s possible “risks to society and humanity.”
‘Catastrophic’ Impacts on Society
Signatories of the letter warned that AI systems with human-competitive intelligence could pose “profound risks to society and humanity” and should be planned for and managed carefully to avoid potentially “catastrophic” impacts on the world and its people.
“Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt,” the experts said.
“Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall,” they argued.
They called for AI labs and independent experts to use the six-month moratorium to develop and implement a set of safety protocols for advanced AI design that would ensure that these systems are “safe beyond a reasonable doubt.”
In his first public remarks since the letter was published, Microsoft co-founder Bill Gates said that calls to pause the development of AI won’t “solve the challenges” ahead, that a stoppage would be hard to implement globally, and that the rationale for doing so isn’t clear.
The Microsoft co-founder threw cold water on the idea of a development pause and suggested a different course of action.
“I don’t think asking one particular group to pause solves the challenges,” Gates said. “Clearly, there [are] huge benefits to these things … what we need to do is identify the tricky areas.”
Besides recommending a more surgical approach to addressing the risks of AI by presumably identifying the biggest risks and working on ways to mitigate them, Gates criticized the letter’s vague criteria for enforcement.
“I don’t really understand who they’re saying could stop, and would every country in the world agree to stop, and why to stop,” Gates told Reuters. “But there are a lot of different opinions in this area.”
While Gates didn’t specify the “tricky areas” that he had in mind for closer risk scrutiny, he noted in a recent blog post that there is the “possibility that AIs will run out of control” and decide humans are a threat.
He also acknowledged the possibility that superintelligent or “strong” AIs could, in the future, set their own goals, which could run counter to the interests of humanity.
Microsoft has been at the forefront of AI development, investing billions in OpenAI.
Katabella Roberts contributed to this report.
Update: This article has been updated to reflect the latest changes to Twitter’s viewing restrictions.