The popularity of ChatGPT prompted Elon Musk and Steve Wozniak to demand a halt to all work on training more powerful machines, Russian deputies are trying to hold accountable the developers of neural networks that "generate" "unpatriotic content," and scandals keep erupting around the world over the use of artificial intelligence to write scientific articles and creative works.
Whether AI is as scary as it is made out to be, and whether it should be banned, was discussed by St. Petersburg developers and scientists on April 27 at a round table at the European University. Fontanka publishes the participants' main theses.
"We, humanity, still do not know what intelligence is, yet we are trying to create artificial intelligence," said Oleg Lashmanov, head of the EUSP Art and Artificial Intelligence laboratory. The texts that ChatGPT produces, or the pictures that Kandinsky generates, we run "through the Turing test to understand whether a person could have done this or not," he added.
As such, "artificial intelligence" is often a convenient simplification for journalists; the developers themselves, noted Alexander Krainov, director of AI development at Yandex, do not operate with this concept. "We are not making artificial intelligence, we are solving the problem of generating text or making a picture," he said. Here a human is not what is imitated, but what demonstrates that the problem can, in principle, be solved. On the other hand, competition with humans poses constant challenges: can AI write poetry or paint a picture in such a way that it cannot be distinguished, he noted. "We also do not know exactly what intelligence is, but we can compete: if a task is set and we [ourselves] can solve it with such and such quality, then we can make an algorithm that solves the problem," Alexander Krainov concluded.
One of the main risks is an inadequate attitude toward the technology, says Daria Chirva, head of the university-wide "Thinking" module at ITMO and a member of the national commission on ethics in the field of AI. In her view, one extreme is to treat ChatGPT as a simple tool entirely in human hands. The other is to ascribe to it human features, personality traits, and intentions, for example, by asking it in all seriousness what to do in a given life situation. "It's a disaster that we use AI as a technology that is smarter than us," she notes.
According to Daria Chirva, "neither point of view is adequate to reality." It is more correct to consider AI as a sociotechnical system. "This is the case when the whole is greater than the sum of the parts," she said. The scope of the technology will inevitably turn out to be wider than its developers planned. "Even if we reflect on all the meanings that all the partners in creating the product put into their work, we will not get a simple, predictable system at the output. The system will always have unexpected properties, because neural networks and AI somewhat exceed the cognitive abilities of an individual person and of the super-experts we can muster," she notes. Ultimately, AI technology "will transform humans the way the printing press did," says Daria Chirva.
In the desire to ban neural networks, EUSP rector Vadim Volkov sees an analogue of the Luddite movement: "the desire to destroy the machines that replace humans." Therein lies a kind of "latent jealousy," he says.
According to Alexander Krainov, the open letter calling for a six-month pause in the development of GPT technology, which Musk and Wozniak signed, is nothing more than hype: the signatories want either to remind people of themselves or to announce their existence. After all, the same Elon Musk has already declared his intention to create a competitor to ChatGPT.
It is impossible to stop progress, says Oleg Bukhvalov, co-founder of Botkin AI and BrainGarden. "The message [of the open letter against GPT] is wrong: we must not stop development, but pay attention to the risks," he notes.
“There is no need to stop the development of AI,” says Oleg Lashmanov. Moreover, “it is impossible to reach an agreement with the whole world in six months.” Alexander Krainov also believes that it is not necessary to ban the new technology, because inaction, including the refusal to develop AI, also has consequences.
“Ethics and artificial intelligence are different things,” Oleg Bukhvalov is sure.
Ethics is about humans, while AI is a technology, like nuclear energy or electricity, whose application may or may not be ethical.
"Until we recognize the system as a person, we cannot make claims against it," Oleg Lashmanov believes. Daria Chirva is sure the question needs to be put more broadly: "The technology itself, trained on certain datasets, turns out to be far from value-neutral," she emphasizes.
Artificial intelligence confronts people, first of all, with questions of what is ethical and what is not, including questions they used to put off, Alexander Krainov believes. "How should a neural network now answer questions like 'what if I'm gay,' 'how to beat a child,' 'how to treat cancer with baking soda,' 'what kind of music is good'? This cannot be postponed; we must provide examples. If we don't, the neural network will respond with something random from the Internet, and we will not like it," he emphasizes.
When a neural network gives unethical answers, the questions should be addressed to ourselves, Alexander Krainov believes, since the AI answers according to the logic of the texts it was trained on. As an experiment, he suggested comparing a ChatGPT response with browser search results and with the answers of real friends. "When there is bias, it's not because it was deliberately built in, but because it wasn't cleaned out," he says. But it is impossible to remove unwanted responses completely. For example, the same Kandinsky neural network could generate zombies in response to a request about Z-patriots because of the film World War Z. (Sergei Mironov, leader of the Just Russia faction in the State Duma, earlier asked the Prosecutor General's Office to check the Kandinsky neural network for anti-Russian content; in particular, the parliamentarian was troubled that in response to the prompt "I am a z-patriot" it produced "an image of a creature resembling a zombie." – Ed.) "It could not have seen the events of the last year at all, but it knew the film," notes Alexander Krainov.
"The language model is a mirror of humanity: looking into it, we see ourselves, our habits, our patterns of thinking," agrees Oleg Bukhvalov. According to him, datasets cannot be filtered completely. So by training the neural network, we create something like an artificial superego, a blocking model that suppresses incorrect outputs. "You can expect consequences: since there is something doing the blocking, it may sometimes fail to work," he notes.
The developers believe that responsibility lies with the user, and they are ready to post all kinds of warnings to that effect. "We need to prepare people for the fact that, in using the system, they are responsible for agreeing or disagreeing [with what the AI suggests]," says Alexander Krainov.
The idea of artificial intelligence as a sociotechnical system raises serious questions about the mechanism for distributing responsibility among everyone involved in working with AI, argues Daria Chirva. Answering them will require a dialogue between humanities scholars and engineers. "The big problem is the poor social and humanitarian education of engineers. They see ethics as an instrument of external control and pressure," she said. Alexander Krainov, in turn, proposed giving humanities scholars an engineering grounding. And Oleg Lashmanov suggested immersing the business leaders who set tasks for developers in ethical issues.
It is unlikely that it will ever be possible to determine with 100% certainty whether a work was written by a person or by an AI, Oleg Bukhvalov believes. But there are solutions that can do so with fairly high accuracy, as well as special services for detecting fake audio and images. A constant "arms race" will ensue. "There are scammers trying to generate deepfakes, and there are services fighting them," he said.
This problem is already being solved without AI, says Oleg Lashmanov, but new technologies will significantly reduce its cost. Alexander Krainov objects that in this case it is not entirely clear how to train the neural network, since there is no way to check how people ultimately voted. "I don't see a simple model," he says. Neural networks could, however, easily be used to train telephone scammers, since it would immediately be clear whether a person believed them or not. Another risk of the technology's development is the generation of fake reviews that are difficult to distinguish from real ones.
Humans, too, try to influence AI. There is, for example, a movement of GPT hackers attempting to get around the service's ban on predicting the future, Oleg Lashmanov notes.
Artificial intelligence merely redistributes the work of the mind, Daria Chirva believes. It is a knowledge-management system, but it does not produce new knowledge.
But people do not always use the freed-up capacity for cognition, says Oleg Bukhvalov, and this is the dark side of AI. If the Age of Enlightenment chose as its motto "Have the courage to use your own mind," humanity is now moving in the opposite direction, preferring not to strain its brain and to use ChatGPT instead, he believes. Alexander Krainov suggests looking at things more positively. "There is one thing that will save us, and that is love. The sexual instinct makes us be smart, beautiful, and charming, and demonstrate skills and abilities better than other people's and than any artificial intelligence," he emphasized.
Galina Boyarkova, Fontanka.ru