Hello! I’m Kshitiz Tiwari, a PhD student in Computer Science at the University of Arkansas specializing in Natural Language Processing, Large Language Models, and Causal Inference. My research focuses on making language models more controllable, robust, and interpretable.
My work explores how causal reasoning and structured representation learning can improve modern generative models. In particular, I study counterfactual text generation: developing methods that allow language models to modify specific attributes such as style, sentiment, or tone while preserving the underlying semantics.
I have published research on robust NLP systems and adversarial resilience, including work presented at AACL-IJCNLP and published in the journal Entropy. My recent work investigates geometric and representation-level approaches to controllable language generation and interpretable model behavior.
Alongside my research, I enjoy building large-scale datasets and systems. I am currently developing Nepali language resources, including large news corpora and retrieval-augmented generation (RAG) systems, aimed at supporting AI development for underrepresented languages.
My work sits at the intersection of research and engineering: training and evaluating large models, designing scalable data pipelines, and building tools that translate theoretical ideas into practical systems.
Outside research, I enjoy exploring new technologies, building experimental side projects, reading science fiction, and thinking about how AI systems can be designed to be more transparent and responsible.
If you are interested in collaborating, discussing research ideas, or building impactful AI systems, feel free to reach out.