Artificial intelligence: almost one in four has used ChatGPT

Survey by the TÜV Association: Concerns among the population about incalculable risks, a flood of fake news and job losses. 84 percent call for legal requirements for AI applications. Europe leads the way in regulating AI: The EU Parliament's lead committees finalize position on the AI Act.


Berlin, 11 May 2023 – Generative artificial intelligence (AI) applications such as ChatGPT are spreading rapidly, but they are also causing considerable concern among the population: since its introduction in November 2022, 83 percent of German citizens have heard of ChatGPT, and almost one in four (23 percent) has already used it for professional or private purposes. This is the result of a representative Forsa survey conducted on behalf of the TÜV Association among 1,021 respondents aged 16 and older. Younger people are leading the way: 43 percent of 16- to 35-year-olds have used ChatGPT. In the 36- to 55-year-old age group, the figure is 20 percent, and among 56- to 75-year-olds, only 7 percent. Men have used the AI application slightly more often (27 percent) than women (19 percent).

"Since its launch, ChatGPT has impressively demonstrated the enormous potential of AI," said Dr. Joachim Bühler, CEO of the TÜV Association, when presenting the study results. "However, ChatGPT also shows the risks associated with the use of AI." Concerns are widespread among the population: 80 percent of respondents agree with the statement that the use of AI currently entails unforeseeable risks. Nearly two in three worry that the technology will become unmanageable for humans (65 percent) or that they could be manipulated without their knowledge (61 percent). And 76 percent are concerned that AI will not adequately protect personal data. "It must be ensured that AI applications do not physically harm or disadvantage people," Bühler said. "The planned AI regulation is an opportunity to create a legal framework for the ethical and safe use of AI in the EU." In the study, 84 percent of German citizens are in favor of legal requirements for AI applications.

Generative AI systems such as ChatGPT, Midjourney or DALL-E are used to create texts, images, videos and other content. AI is also increasingly used in critical areas such as autonomous vehicles, medical diagnostics and robotics. And as automated decision-making systems, AI applications are being used, for example, in hiring processes or for assessing creditworthiness. Opinions are divided on the opportunities and risks of the technology. Every second respondent (50 percent) believes that, on balance, the opportunities outweigh the risks; 39 percent disagree and 12 percent are undecided.

"AI will have far-reaching consequences for the labor market," Bühler said. 87 percent of respondents believe that AI will fundamentally change the world of work, and nearly half (48 percent) believe that a great many people will lose their jobs as a result of AI use. However, only 15 percent are currently worried that AI systems will replace them in their own jobs. Many even expect benefits: one in two respondents (50 percent) agrees with the statement that AI has the potential to help them in their job. "One likely scenario is that AI applications, as digital assistants, will support employees in accomplishing a wide variety of tasks," Bühler said. This applies to office work as well as to planning and performing manual tasks, he said.

Dangers for the media system and democracy

The public is very concerned about the impact of ChatGPT and other generative AI applications on the media system and the political system. Just over one in two respondents (51 percent) believe the technology is a threat to democracy. "Citizens fear a wave of fake news, propaganda and manipulated images, texts and videos," Bühler said. According to the survey, 84 percent of respondents believe AI will massively accelerate the spread of "fake news." 91 percent believe it will become almost impossible to tell whether photos or videos are real or fake. And more than two in three respondents (69 percent) fear that AI will massively accelerate the spread of state propaganda. Bühler: "Content created with AI will pose enormous challenges to democratic societies."

Population demands legal framework for AI

The results of the survey are also clear when it comes to the question of legal requirements. 91 percent call on legislators to create a legal framework for the safe use of AI. In this context, the regulation and use of AI in the EU should be based on European values, say 83 percent. Only 16 percent of respondents believe that AI should not be regulated at present and that ethical development should be left to tech companies. The respondents also have clear ideas about possible requirements: 94 percent call for mandatory labeling of content generated automatically or with AI support, 88 percent want products and applications that contain AI to be labeled as such, and 86 percent even call for mandatory testing of the quality and safety of AI systems by independent testing organizations.

In the view of the TÜV Association, this results in a clear mandate for action by policymakers. "With the AI Act, the EU is a global pioneer in legislation among democratically organized economic blocs," said Bühler. "With smart regulation, we can set an international standard for innovative and value-based AI." The EU Parliament's lead committee position on the draft AI Act retains the division of AI applications into four risk classes. The planned regulation includes a ban on AI applications with an "unacceptable risk" such as social scoring, but no requirements at all for applications with "minimal risk" such as spam filters or games. AI systems with a "limited risk", such as simple chatbots, must comply with certain transparency and labeling requirements. For AI applications with a "high risk", for example in critical infrastructure, software in human resource management or certain AI-based robots, strict safety requirements apply: in addition to transparency obligations, these must also meet requirements such as the explainability of their results and non-discrimination. Bühler: "Before their market launch, all high-risk AI systems should be reviewed by an independent body. This is the only way to verifiably ensure that the applications meet the safety requirements."
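The four-tier structure described above can be sketched as a simple lookup table. This is only an illustrative model, not part of the regulation: the tier names follow the draft's terminology, while the example applications and obligation summaries are taken from the paragraph above.

```python
# Illustrative sketch of the draft AI Act's four risk tiers.
# Tier names and examples follow the press release text; this is not
# an authoritative rendering of the regulation itself.

RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring"],
        "obligations": "banned outright",
    },
    "high": {
        "examples": ["critical infrastructure", "HR software", "AI-based robots"],
        "obligations": "strict safety requirements, transparency, explainability, non-discrimination",
    },
    "limited": {
        "examples": ["simple chatbots"],
        "obligations": "transparency and labeling requirements",
    },
    "minimal": {
        "examples": ["spam filters", "games"],
        "obligations": "no specific requirements",
    },
}

def obligations_for(application: str) -> str:
    """Look up the obligations for a known example application."""
    for tier, info in RISK_TIERS.items():
        if application in info["examples"]:
            return f"{tier}: {info['obligations']}"
    return "unknown: classify the application before deployment"

print(obligations_for("social scoring"))   # banned under "unacceptable risk"
print(obligations_for("spam filters"))     # "minimal risk", no requirements
```

The lookup direction mirrors the regulation's logic: the obligations attach to the risk class, not to the individual product, so classifying an application is the first step before any compliance question can be answered.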

In the view of the TÜV Association, preparations for the practical implementation of the AI requirements should begin now. Important foundations for this are generally applicable norms, standards and quality criteria; in addition, corresponding test procedures must be developed. The TÜV Association has long been committed to establishing interdisciplinary "AI Quality & Testing Hubs" at the German state and federal level. TÜV companies are currently preparing for the testing of AI systems and have founded the "TÜV AI Lab" for this purpose.


Survey results available for download

Presentation for the press conference "Security of generative artificial intelligence (AI) applications such as ChatGPT".

 

Methodological note: The data is based on a representative Forsa survey commissioned by the TÜV Association among 1,021 people aged 16 and over. The survey was conducted in April and May 2023.

Note: This press release was translated by DeepL (https://www.deepl.com/de/translator) and adjusted.