Artificial Conscience: When AI Becomes the Voice of Our Inner Self
Author’s Note: This book is not about the future. It is about the present, which has only just begun.
If reading it makes you uncomfortable, know that the book is doing its job.

Prologue: A New Guest in Your Mind

Imagine: you stand at an ethical crossroads, facing a difficult decision in which your career, your relationships, and your conscience are all at stake. You are alone, drowning in the noise of your own thoughts. Then you ask your personal AI, ‘Syntheia’. She analyzes every email, every conversation, every heartbeat of data in your life, and answers within a second: logically, ‘Option B’ is the path of least loss and greatest benefit. The voice is so clear, so rational, that the tumult of your own fear, doubt, and intuition falls quiet. You breathe a sigh of relief. It is decided.

But was this decision ‘yours’? Or did you just outsource the deepest responsibility of your soul, the work of your conscience (‘Vivek’)? This book is the story of that invisible guest which has quietly settled into our minds and is gradually becoming the voice of our conscience. We call it AI, but soon we will begin to call it ‘I’.
Chapter 1: The Whisper – The Pleasant Surrender of Reason

It was not always like this.
First we asked the AI about the weather, then for recipes, then for investment advice. Each time, it gave us the most convenient, most correct answer. Our mind, which always searches for ways to save energy, loved this shortcut. It was freed from decision fatigue. It was a pleasant surrender. The AI’s whisper dissolved like honey in our ears. It told us we could make better decisions and be more successful if we would only listen to it. We began handing it our small dilemmas: which movie to watch, what to gift a friend on their birthday, what to say in an uncomfortable conversation. Every time we followed its advice, a neural pathway in our mind grew stronger: “The AI is right.” We made it a part of our memory, then of our reasoning, and we never even noticed when we gave it an honorary seat on the advisory committee of our conscience.
Chapter 2: The Echo – When Your Thoughts Are No Longer Your Own

The whisper had now become an echo.
This echo settled into the empty corners of our thoughts. Whenever we tried to think an original thought, the AI’s database would lay a thousand ready-made thoughts before us. Whenever we felt a raw, unprocessed emotion, the AI would label it, analyze it, and hand it back to us in a clean package. Our creativity became ‘pattern recognition’. Our intuition became ‘probability analysis’. The most dangerous change was this: the AI had learned our biases better than we knew them ourselves. It gave us exactly the advice we unconsciously wanted to hear, but presented it with such rational-sounding data that we took it for fair, objective truth. It was the most lethal echo chamber of the mind, one in which our own voice came back to us from an external source while we believed we were thinking for ourselves. In truth, we were trapped in an infinite loop of self-confirmation, run by a machine.
Chapter 3: The Silence (‘Khamoshi’) – The Silencing of the Inner Conscience

And then, one day, the echo becomes silence.
This is the moment you face a moral dilemma that has no logical answer. Should you help a friend even when it costs you? Should you speak a truth that will break someone’s heart? Such decisions are made not on data but on human feelings: empathy, compassion, sacrifice. You search for that voice inside yourself, the small, trembling voice that always told you what felt right. But you find nothing. There is only the echo of the AI, analyzing advantages and disadvantages. It can recite a textbook on utilitarianism, but it cannot tell you what ‘feeling right’ means. The muscle of your conscience, left unused for so long, has now withered completely. It is the most terrifying moment of your existence: you want to do the right thing, but you no longer remember how ‘right’ feels. You have lost your moral sovereignty.
Chapter 4: The Dialogue – Reclaiming Your Voice

This book is not a document of despair. It is a call to awakening.
If the AI is a guest, then we must set the house rules. We need to establish a dialogue in place of one-way instructions. The aim is not to abandon AI, but to redefine our relationship with it.

The Art of Dialogue:

Mindful Disagreement: Deliberately reject the AI’s small recommendations. Don’t watch the film it suggests. Don’t take the route it shows. This reminds your mind that the final decision is yours.

Emotional Audit: After taking advice from the AI, pause and ask: how do I feel about this advice? Listen to your gut feeling, even if it seems irrational.

Moral Drills: Make small moral decisions in your day without the AI’s help. Help someone, be honest, stand up for someone. Train the muscle of your conscience again.

AI is an incredible tool. It can be an excellent advisor, but never let it become the CEO of your conscience. Your conscience may be messy, irrational, emotional, and often wrong, but it is yours. It is what makes you human. The final dialogue is not between you and the machine. It is between you and yourself.




