Spread of ChatGPT Fever in US Workplace Raises Concerns
A significant number of employees across the United States are turning to ChatGPT for assistance with routine tasks. Despite the apprehensions that have prompted companies like Microsoft and Google to limit its use, many workers are finding value in the AI-powered chatbot.

Businesses globally are grappling with how to effectively integrate ChatGPT, a chatbot program driven by generative AI that engages in conversations and addresses a wide array of queries. However, concerns have been raised by security firms and companies over potential leaks of intellectual property and strategic information.

Reports have emerged of individuals using ChatGPT for daily work, including composing emails, summarizing documents, and conducting initial research.

An online survey conducted between July 11 and 17 found that 28% of respondents use ChatGPT regularly at work. By contrast, only 22% said their employers explicitly permitted the use of such external tools.

The poll, which gathered insights from 2,625 adults across the United States, carried a credibility interval—an indicator of precision—of around 2 percentage points.

Approximately 10% of the survey participants said their employers outright prohibited the use of external AI tools, while about 25% were uncertain about their company's stance on the technology.

Following its debut in November, ChatGPT quickly became the fastest-growing app on record. Its rise has generated both enthusiasm and apprehension, drawing its developer, OpenAI, into clashes with regulators, particularly in Europe, where privacy watchdogs have scrutinized the company's expansive data collection practices.

It has also come to light that OpenAI employs human reviewers from external firms to read generated chats. Researchers have found instances in which similar AI systems can reproduce information absorbed during training, potentially exposing sensitive proprietary data.

Ben King, Vice President of Customer Trust at corporate security firm Okta, remarked, "People do not comprehend the intricacies of data utilization within generative AI services." King stressed that this is a critical issue for businesses: because many AI services are free offerings, users often have no contracts with them, meaning corporations may have bypassed their customary risk-assessment procedures when engaging with such services.

OpenAI declined to comment on the implications of individual employees using ChatGPT. However, the company pointed to a recent blog post assuring its corporate partners that their data would not be used for further training of the chatbot unless they granted explicit consent.

Google's Bard, for its part, collects users' data, including text, location, and usage patterns. The company gives users the option to erase past activity from their accounts and to request removal of content fed into the AI. Alphabet-owned Google (GOOGL.O) declined to provide further details when asked.

Microsoft did not respond to requests for comment.
