ChatGPT, AI and security in the enterprise – how do they fit together?
Dismissed with a weary smile by some, touted by others as the master solution of tomorrow: artificial intelligence is a key driver of the technology age. ChatGPT has impressively demonstrated how powerful intelligent systems already are. Released in November 2022, ChatGPT is a sophisticated conversational chatbot that mimics human conversation and can answer queries in multiple languages. What makes it special: the responses ChatGPT gives are virtually indistinguishable from human ones. Thanks to artificial intelligence, the bot - unlike most conventional Q&A bots - is not thematically restricted when answering questions and does not rely on predefined text modules. And not only that: even typically human traits such as creativity and humor are not alien to ChatGPT. One thing is clear, of course: as human as ChatGPT may seem, all of its actions are based on statistical calculation over large amounts of data. But thanks to its complex AI algorithms and extensive training, ChatGPT's performance is remarkable and represents a significant advance in the use of AI for human-machine communication. But what does this progress mean for enterprises?
The path from weak AI to true intelligence
Artificial intelligence is a fairly abstract umbrella term for applications intended to deliver human-like intelligence. The goal is to enable technical systems not only to receive data, process it and respond accordingly, but also to evaluate the results of previous actions and autonomously adjust future behavior to optimize the outcome. The idea itself is anything but new: extensive research and development in this field was already under way in the 1950s. Unlike back then, however, technological progress - above all the Internet, the spread of cloud services and the constant growth in computing capacity - opens up entirely new possibilities today. A basic distinction is made between weak and strong AI. So far, only weak AI has been used in companies. In contrast to strong AI, it is developed for a specific application and is meant to solve concrete problems within predefined parameters. Its approach to problem solving always remains the same; optimization lies solely in reducing the error rate. With today's possibilities, however, the step to real, strong AI is within reach for companies as well.
Artificial intelligence: opportunity or risk?
One thing is certain: AI has long since arrived in the corporate world. Whether for analyzing shopping behavior or controlling smart machines, a full 87 percent of companies in Germany consider AI solutions important or even very important for their own business success. And that importance will continue to rise: 7 out of 10 companies say they will increase their AI investments in 2023 compared to the previous year.

However, the great advances brought about by ChatGPT and other developments also harbor risks - not least for corporate security. New areas of application give rise to new issues, and this applies in particular to the step from weak to strong AI. Autonomous decisions made on the basis of previously learned knowledge are difficult to control. Moreover, human-like intelligence comes with human-like weaknesses: a system that learns from its own experience and bases future actions on it can also be manipulated or develop stereotypes. In the case of a simple chatbot, the potential negative impact may be limited - but what about AI applications that are integral to risk management or operational security? There is also a risk that cybersecurity is neglected amid all the gold-rush excitement.

Likewise, as AI continues to evolve, ethical questions become more pressing, since decisions rest not only on empirical values but also on a social and cultural context. To what extent artificially created intelligence can take this context into account remains an open question. But is the complexity of these questions reason enough to demonize AI? Not at all. Instead, companies should acknowledge that the state of AI development is far ahead of current practice. Anyone who wants to discuss the opportunities and risks of AI in a lasting way must therefore do so on the basis of tomorrow's technology rather than resting on today's realities.