AI companies explore new steps to prevent online extremism

Sydney: Major artificial intelligence companies are working on new ways to prevent the spread of extremism through their platforms, as concerns grow about the social impact of advanced chatbots.

A crisis response startup called ThroughLine is now partnering with companies such as OpenAI and Anthropic to expand its work beyond mental health support into tackling online radicalisation. The company already operates in about 180 countries, helping identify users in distress and connecting them with support services.

With the rise of powerful AI tools, experts and governments have raised concerns that these systems could be misused to spread harmful ideologies or influence vulnerable users. In response, companies are looking for ways to detect early signs of extremist thinking and intervene before harm occurs.

ThroughLine is developing systems that can identify risky behaviour during conversations with chatbots. These systems aim to guide users away from harmful ideas and connect them with real-world help, such as counsellors or community organisations.

The initiative is linked to wider global efforts to reduce online extremism, including actions taken after past attacks that were connected to online radicalisation. It reflects a growing belief that technology companies must take a more active role in preventing harm, not just moderating content.

At the same time, AI companies are facing increasing pressure from governments to strengthen safety measures. There have been concerns about how chatbots respond to sensitive topics and whether current safeguards are enough.

Industry experts say that while AI can help identify risks, it cannot solve the problem alone. Effective prevention will depend on strong support systems outside the digital space, including mental health services and community engagement.

The move shows how the role of artificial intelligence is changing. Instead of simply providing information, these systems are now expected to help protect users and respond to complex social challenges.


