Isadora Teich

08/28/2023

All the Latest News Surrounding OpenAI’s ChatGPT

Some treat ChatGPT, and AI in general, as if it were another horseman of the apocalypse. Others say it's a bubble that's doomed to burst. Still others say it will revolutionize many industries for the better.

But what is actually going on with ChatGPT? How is it being used, what updates have been released, and what interesting issues are arising globally as we explore this brand-new innovation?

Let’s take a look!

A New Partnership

TechCrunch reports that OpenAI has partnered with Scale AI, a San Francisco-based data labeling startup.

This is part of OpenAI's plan to partner with third-party vendors. The goal is to make it easier for developers, and enterprises in particular, to fine-tune its AI models with their own custom data.

Essentially, they want to:

…Bring together Scale AI’s fine-tuning tools and OpenAI’s GPT-3.5 text-generating model. (GPT-3.5 is the predecessor to GPT-4, OpenAI’s flagship model, which understands images as well as text.)

This would give developers more control over the outputs that they receive from AI models.

For example, they can tailor generated copy to suit a particular brand voice or build models that respond more effectively in different languages.
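For a rough sense of what this kind of customization looks like in practice, here is a minimal sketch of starting a fine-tuning job on GPT-3.5 with OpenAI's Python SDK (v1.x). The file name and training data are placeholders, and the sketch shows only the generic OpenAI fine-tuning API rather than Scale AI's tooling; exact call names vary by SDK version.

```python
# A minimal sketch: fine-tuning GPT-3.5 on custom data with the OpenAI
# Python SDK (v1.x). "brand_voice_examples.jsonl" is a placeholder; real
# brand-voice data would be a JSONL file of example chat conversations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the custom training data.
training_file = client.files.create(
    file=open("brand_voice_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on top of GPT-3.5.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

print("Fine-tuning job started:", job.id)
```

Once the job finishes, the resulting custom model can be called like any other chat model, which is what makes the brand-voice use case above possible.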

News Organizations Around the World Ban ChatGPT Web Crawler

Part of the way that ChatGPT works is by drawing on huge amounts of information scraped from the web. Its web crawler, GPTBot, scans webpages to collect data that OpenAI can use to train and improve its models.

Many news organizations have decided that they don’t want ChatGPT farming their content for information. This includes:

  • The New York Times
  • Reuters
  • CNN
  • Australian Broadcasting Corporation and Australian Community Media Brands
  • The Chicago Tribune

In general, these news organizations seem to have done so to protect their intellectual property and copyright. Critics have described ChatGPT as essentially a ‘plagiarism machine,’ and publishers do not want their content scraped by AI companies without their consent to build AI models.

A spokesperson from Reuters told The Guardian:

“Because intellectual property is the lifeblood of our business, it is imperative that we protect the copyright of our content.”

The New York Times has gone even further, prohibiting data scraping by AI companies outright in its terms of service. Many major companies outside of the news sphere, including Shutterstock and Amazon, are starting to opt out of data scraping as well.
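As a technical aside, publishers typically implement this kind of block with a "User-agent: GPTBot / Disallow: /" rule in their site's robots.txt file, which OpenAI says GPTBot respects. Here is a minimal sketch, using only Python's standard library, of checking whether a given page is off-limits to GPTBot; the example.com URLs are placeholders.

```python
# A minimal sketch: check whether a site's robots.txt disallows OpenAI's
# GPTBot crawler, using only Python's standard library.
# The example.com URLs are placeholders.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

# "GPTBot" is the user-agent string OpenAI documents for its crawler.
# Publishers block it with robots.txt rules like:
#   User-agent: GPTBot
#   Disallow: /
if parser.can_fetch("GPTBot", "https://www.example.com/news/some-article"):
    print("GPTBot is allowed to crawl this page.")
else:
    print("GPTBot is blocked by robots.txt.")
```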

This raises some interesting questions. Is it ethical for AI companies to take the original work of others to ‘train their algorithms’ without the consent of those who own the content?

On the other hand, if many legitimate sources of information do not allow these AI programs to learn from their content, what will ChatGPT and other AI programs learn from instead?

After all, one of the big drawbacks of AI is that it is a blank slate: it works blindly from whatever data it is fed. If these algorithms are trained mostly on misinformation, they may create more problems than they solve.

A Potential New York Times Lawsuit on the Horizon

Currently, the New York Times is considering suing OpenAI.

Let’s take a look.

NPR reports that OpenAI and the NYT might end up in court over copyright issues.

According to NPR:

For weeks, the Times and the maker of ChatGPT have been locked in tense negotiations over reaching a licensing deal in which OpenAI would pay the Times for incorporating its stories in the tech company’s AI tools, but the discussions have become so contentious that the paper is now considering legal action.

According to inside sources, the New York Times feels that ChatGPT is competing with the paper by using the original reporting done by its staff and paraphrasing it to answer questions.

For example, if ChatGPT can answer someone's question in a sentence or less based on the Times' reporting, that person may never learn the actual source of the information or have any reason to visit the New York Times' site.

In the US, copyright infringement is a serious and expensive matter. If a court finds that OpenAI violated copyright law, the company could be ordered to destroy all of its datasets that include the New York Times' content and could face fines of up to $150,000 for each infringement committed ‘willfully.’

Currently, many authors are pursuing legal action against generative AI companies for using their works to train AI models or bolster their products without consent.

One of the best known of these is comedian Sarah Silverman, who claims AI companies have essentially stolen her 2010 memoir “The Bedwetter.”

According to a class-action lawsuit, it is common practice for generative AI firms to use pirated copies of authors’ works, scraping that data without the authors’ consent.

Matthew Butterick, one of the lawyers representing Silverman and the other authors seeking class-action status, said:

“This is an open, dirty secret of the whole machine learning industry. They love book data and they get it from these illicit sites. We’re kind of blowing the whistle on that whole practice.”

While this might seem far-fetched, the generative AI industry has been accused of cutting ethical corners in other ways as well.

For example, it has been widely reported that OpenAI tried to solve the longstanding problem of AI models producing offensive output by paying Kenyan workers less than $2 an hour to review and flag some of the most disturbing content on the web.

OpenAI Launches Enterprise Plan for Businesses

Despite its controversies, OpenAI has been working to capitalize on the viral success of ChatGPT in many ways. This includes a brand-new ChatGPT plan meant to support businesses.

ChatGPT Enterprise can do everything the regular ChatGPT can, such as generating various types of content and drafting emails. However, there are some interesting added features. According to TechCrunch:

The new offering also adds “enterprise-grade” privacy and data analysis capabilities on top of the vanilla ChatGPT, as well as enhanced performance and customization options.

That puts ChatGPT Enterprise on par, feature-wise, with Bing Chat Enterprise, Microsoft’s recently-launched take on an enterprise-oriented chatbot service.

In a blog post announcing the new offering, OpenAI said:

“We’re launching ChatGPT Enterprise, which offers enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities, customization options, and much more. We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive. Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.”

Addressing the Doom and Gloom

Many media outlets have begun reporting that OpenAI and its products are doomed to fail.

Why have we swung from wild optimism and awe to extreme pessimism?

Let’s take a look at the numbers!

ChatGPT truly did have a moment of insane virality that most businesses can only dream of. OpenAI also claims that over 80% of Fortune 500 companies have already adopted its products.

However, successful businesses make a profit. Virality doesn’t always translate to a sustainable business.

The fact of the matter is that OpenAI is losing large sums of money. It reportedly spent more than $500 million in 2022 to develop ChatGPT and only earned $30 million in revenue.

That put the company over $400 million in the red entering 2023, and keeping ChatGPT running reportedly costs around $700,000 a day.

CEO Sam Altman has reportedly told investors that OpenAI has plans to skyrocket revenue to $200 million in 2023 and $1 billion in 2024.
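As a back-of-the-envelope sketch using only the approximate figures reported above, the gap looks like this:

```python
# Back-of-the-envelope math using only the approximate figures reported above.
dev_spend_2022 = 500_000_000    # "more than $500 million" reportedly spent in 2022
revenue_2022 = 30_000_000       # reported 2022 revenue
daily_running_cost = 700_000    # reported daily cost to keep ChatGPT running

net_loss_2022 = dev_spend_2022 - revenue_2022   # ~$470M, i.e. "over $400 million in the red"
annual_running_cost = daily_running_cost * 365  # ~$255M per year just to keep the service running

print(f"2022 net loss: roughly ${net_loss_2022 / 1e6:.0f} million")
print(f"Annualized running cost: roughly ${annual_running_cost / 1e6:.0f} million")
```

Even hitting the $200 million revenue target for 2023 would not cover the annualized running cost implied by the $700,000-a-day figure, let alone development spending.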

It seems reasonable that many are skeptical, especially considering all of the legal trouble mounting around this company.

After all, how well will these AI tools work if they lose legal access to much of the high-quality data they rely on?

Final Thoughts on the Innovation and Controversy

Whether you love or hate ChatGPT, there is no denying that it’s a fascinating time in the tech world.

Over the coming years, we will all find out what AI companies are legally allowed to do, and what our systems of government consider ethical when it comes to how these innovations can operate.

If you want to learn more about how governments around the world are trying to grapple with generative AI, take a look at our blog post on the subject.

What do you think? Comment below.
