In my later years, God has granted me the gift of living two lives at once: a professional life and a poetic one. They complement each other and sit in harmony alongside my family life, which matters to me especially. In my professional (working) life I make decisions — strategic ones and everyday ones. In my poetic life I write, almost daily. Recently I have been exploring how tools of artificial “intelligence” (in my case, ChatGPT) can help in both roles. I place the word “intelligence” in quotation marks because we are dealing with an artificial construct — algorithms, datasets, and the infrastructure that supports them — and we should not ascribe to it intelligence in the human sense, which is contextual, embodied, and social.
In my working life, when I use ChatGPT (Thinking or Deep Research), I sometimes feel as though I’m working with a colleague — or even a whole team — who, in many areas, has access to a larger stock of information and knowledge than I do. And yet I quickly realise that the key still lies in me: in how precisely I formulate my request, and how patiently I work through several steps to get exactly what I need. My artificial “colleague” looks as if it can do everything, but in fact it offers everything it can — and that is where the difference between abundance and usefulness begins.
I won’t describe here what I do, or how, to extract what is truly useful from that abundance. It matters more to me to say what the tool lacks — and why, despite its power, it cannot replace me in decision-making.
What it lacks is authentic experience in a specific context — experience I have acquired, and continue to acquire every day, by making imperfect decisions, watching their consequences, and learning from both successes and mistakes. That experience is decisive when the space of possibilities must be narrowed. Large language models (which is what we are really talking about when we talk about artificial intelligence) and the language tools built on them (such as ChatGPT) excel at generating options, explanations, and variants. But they lack the knowledge — the "feel," gained from the inside, on one's own skin — of what is feasible here and now: under what conditions, with which people, in what institutional and cultural environment, and with which risks and constraints.
That is why my main task in communicating with my artificial “colleague” is to set the right constraints. Not to shrink its potential, but to make it productive. When globally available knowledge meets a real problem in a real environment, only then does the real work begin: judgement, calibration, adaptation, selection, and verification. At that point, our non-transferable knowledge of context becomes decisive — because it determines what can survive contact with reality and what will remain a beautiful idea.
What fascinates me most is the possibility of constant “mutual improvement”: the language tool becomes ever better at handling an enormous corpus of general knowledge, while my team and I learn to ask more precisely, constrain more clearly, verify more thoroughly, and decide more responsibly. When guided wisely, that interaction raises the bar for everyone involved in preparing decisions. It forces a team to keep “stretching” — to go one step further in learning, analysis, structuring, and argument. That is why I believe tools like ChatGPT help a smart decision-maker and a smart team — especially if they are entrepreneurially oriented and see change as an opportunity to notice and use, rather than as a threat to hide from and defend against.
At the same time, we should be aware of risks that can undermine both individual and organisational capacity: the ability to decide, and the practice of developing experts and managers in companies. Uncontrolled use of such tools in decision-making can suppress so-called abductive reasoning — the creative human capacity to "guess" the explanation or solution that works best in a given context. Looking back, from today's distance, at what made the difference in Eda's decision-making and leadership compared with other similar organisations over the past 27 years, I would say it was abductive reasoning above all else, combined with a continual balancing of the short and the long term. On the other hand, giving colleagues trust and support to take the lead on complex projects — along with occasional reminders of how important it is to distinguish what is complicated from what is complex (see the Cynefin framework) — has steadily strengthened Eda at the individual, team, and organisational levels. That must not now be threatened — either by ignoring the possibilities of new tools or by using them uncritically.
When he read the first version of this blog, one colleague reminded me of the book Quick Decision-Making, which I wrote with my mentor and a fellow postgraduate colleague, published in Sarajevo by Svjetlost in 1991. He suggested that it might be worth revisiting and refreshing those concepts and tools in light of what artificial intelligence — more precisely, large language models — can offer today. Perhaps I will take up that challenge in a future blog.
In my poetic life, things are different. The “smart” tool is, essentially, a large language model in a compact package. That is why it can generate translations of poetry into other languages — of course, under clear constraints, with corrections and refinements on my side. It can also offer interpretations of individual poems, cycles, and even entire books. Out of curiosity, I tested both, and I was honestly surprised by the level of quality one can reach after a bit of truly “shared” work.
Since I am neither a professional translator nor a professional literary interpreter/poetry critic, I can only suggest to those who are that they take these tools’ possibilities seriously. Not only because they will reach better versions faster, but also because, by ignoring the new reality, they may be left in the “dust” — like Bronco Bill and his team facing a train that did not stop for their old-fashioned ambush, but thundered past without a glance.
(That image is especially fresh for me, because I rewatched the film on TV a few days ago. It reminded me of Yesenin's tender, warm, and wistful lines, written in 1920 in the Kuban steppes as, through the window of a salon carriage, he watched a little foal running — and then faltering — after a locomotive:
Dear, dear, you funny, crooked little foal, / Why, why this race — this never-ending goal? / Don’t you know — who could ever explain — / The living horse is beaten by the steel train?)
The great challenge will be how literary interpreters preserve their unique voice — the thing by which they are recognisable — because no one else can take that from them, and no one else can give it to them. Ranko Popović — academician and, of all the people I know, the finest interpreter of Serbian poetry, with whom I have the good fortune to share friendship and the harmony of wine and conversation — seems to have sensed what was coming years ago. Writing afterwords to my poetry books, he did not interpret poems or cycles; he conducted his own conversation with Serbian literature and wove his authentic life experience around the themes the books explored (wine, taverns, rakija), using the poems as lyrical occasions and prompts. Perhaps in that way he discovered a signpost for "the road less travelled." A critic's unique voice is both his defence and his advantage.
And now the most important point — what this language tool cannot do, at least not in the way that matters most to me. It cannot write a lyric poem that carries an authentic lived experience. The reason is simple: the substance of every good poem is, on the one hand, the lived experience of the person who writes it, and, on the other, inspiration that arrives unplanned. That meeting — experience and inspiration — makes the shaping of the poem possible. Poetic craft matters, but it cannot replace what comes first: authenticity and the spark that cannot be calculated or ordered.
The tool, then, possesses much of what comes after: it can offer rhythm, rhyme, metaphors, and variations — even a convincing tone. It can also, retrospectively, once a poem already exists, point to coherence and intention that I, as the poet, did not consciously think about in advance, in the moment of inspiration and throughout the act of writing (“Pred namjerom je pjesma nijema – Confronted with intention, the poem refuses to appear,” says a line from my poem On Creation).
But it does not have what comes before: truly lived experience and unorderable inspiration. It can imitate form, but it cannot be the source of what makes form become a poem.
So my answers today, regarding the language tool I use (ChatGPT), are these: it helps me be better in my work, which I live daily. It also helps me, at times, in interpreting and translating poetry, which I sometimes do out of curiosity. In writing poems, it does not help me — and I’m glad it doesn’t.
Thank God it is so.