Weeks 11 & 12

The last two weeks were mostly midterms and a lot of in-class learning.

Natural Language Processing

I started working on SemEval 2026 Task 5, which studies how models interpret ambiguous sentences inside short stories. It sounds simple until you realize how subtle human understanding really is.

The task asks models to rate the plausibility of word meanings: essentially, how well a particular sense of a word fits its context. The more I work on it, the more I realize how much language depends on background knowledge, assumptions, and emotion.

This project sits somewhere between linguistics, psychology, and data science. I love that balance. It’s less about raw accuracy and more about interpretation: how can a model understand the same nuance a human reader feels instantly? That question’s been living rent-free in my head all week.

Besides that, in class, we spent a lot of time on how models actually understand and generate language. We started with RNNs for sequence modeling and text classification, then moved into LSTMs and how they help remember long-term patterns.

After that came attention mechanisms and how they make translation and context handling so much better. We looked at encoder, decoder, and encoder–decoder setups before moving on to Transformers, learning about multi-head attention, transformer blocks, and how they changed everything about natural language understanding.
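The attention part finally clicked for me when I saw how little code the core operation takes. Here's a minimal numpy sketch of scaled dot-product attention, the building block behind the multi-head attention we covered in class; the shapes and random values are just my own illustration, not anything from the course materials.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return attention output and weights for query/key/value matrices."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # how well each query matches each key
    scores = scores - scores.max(axis=-1, keepdims=True)  # numeric stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights              # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))   # 3 query positions, dimension 8
K = rng.normal(size=(5, 8))   # 5 key/value positions
V = rng.normal(size=(5, 8))
out, w = scaled_dot_product_attention(Q, K, V)
# each row of w sums to 1: every query distributes its attention over the 5 keys
```

Multi-head attention just runs several of these in parallel on learned projections and concatenates the results.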

Toward the end, we talked about language modeling, masked language models like BERT, and how tasks like NER, machine translation, and conditional generation all build on those foundations. We also touched on modern topics like prompting, in-context learning, and how models learn to follow instructions or preferences.

It’s been a lot, but every topic fits together like layers in a bigger system. All about helping machines understand language a little more like we do.


Neural Networks and Deep Learning

We wrapped Module 2, an intense deep dive into the math that keeps everything running: backpropagation, activation functions, derivatives, Softmax, Cross-Entropy, and One-Hot Encoding.

Most of my time went into working through examples like the XOR problem and the Iris dataset. Seeing the math come alive in Python and watching the loss values drop with each iteration felt satisfying.
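For anyone curious what "watching the loss drop" looks like, here's a tiny from-scratch XOR network in the spirit of those exercises. The layer sizes, learning rate, and mean-squared-error loss are my own choices for the sketch, not the course's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # hidden layer, 4 units
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses, lr = [], 1.0
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))
    # backward pass: chain rule applied layer by layer
    dp = 2 * (p - y) / len(X) * p * (1 - p)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * h * (1 - h)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    # gradient descent step
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
# losses should trend downward as the network learns XOR
```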

We’ve been unpacking how feed-forward models actually learn, line by line in code. The lectures on Softmax, Categorical Cross-Entropy, and One-Hot Encoding tied a lot of scattered ideas together. The derivative proofs made more sense once I started connecting them to code instead of just formulas.
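The way those three pieces fit together is easiest to see in a few lines of numpy. This is my own minimal sketch of Softmax, One-Hot Encoding, and Categorical Cross-Entropy, not code from the lectures.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # shift for numeric stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def one_hot(labels, num_classes):
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

def cross_entropy(probs, targets):
    # average negative log-probability of the true class
    return float(-np.mean(np.sum(targets * np.log(probs + 1e-12), axis=-1)))

logits = np.array([[2.0, 0.5, -1.0]])   # raw scores for 3 classes
probs = softmax(logits)                  # rows sum to 1
t = one_hot([0], 3)                      # true class is 0
loss = cross_entropy(probs, t)
```

The part that tied the derivative proofs together for me: the gradient of this loss with respect to the logits collapses to simply `probs - t`, which is why the pairing is so standard.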

We also explored how activation functions shape learning. Why ReLU helps with vanishing gradients and why Sigmoid’s simplicity still has its place. I built a few toy networks from scratch just to see how parameter tweaks change performance, and that repetition is what’s making the theory stick.
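The vanishing-gradient point is easy to check numerically. A small sketch of my own (not from the course): Sigmoid's derivative never exceeds 0.25, so stacking many sigmoid layers multiplies gradients by small numbers, while ReLU passes gradients through at full strength for positive inputs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def d_sigmoid(z):
    s = sigmoid(z)
    return s * (1 - s)        # peaks at about 0.25 when z = 0

def d_relu(z):
    return (z > 0).astype(float)   # exactly 1 for any positive input

z = np.linspace(-5, 5, 101)
# ten stacked sigmoid layers can scale a gradient by up to 0.25**10,
# roughly 1e-6, which is the vanishing-gradient problem in miniature
sigmoid_shrink = d_sigmoid(z).max() ** 10
relu_shrink = d_relu(z).max() ** 10
```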


Information Visualization

Our latest project in Info Viz was to take bad real-world charts and make them better. It sounds simple, but it completely changes how you think about communication.

I applied Knaflic’s storytelling principles like reducing clutter, emphasizing the message over decoration, and guiding the viewer’s attention intentionally. It’s wild how small design changes can shift how people read data.

I presented my version last week: a set of before-and-after charts showing how clarity transforms perception. It reminded me why I love visualization. It’s where data meets design and meaning finally becomes visible. I personally enjoyed every second of remaking those charts; it felt creative and analytical at the same time.

If you want to check it out: INFO 5602 – Individual Project Presentation – Moukthika.


Boulder Climate Ventures

Week 2 - Electricity Demand: We heard from Allison Weis from Wood Mackenzie, Cully Cavness from Crusoe Energy, and Amanda Rohrer from Blackhorn Ventures.

They talked about how onshoring, AI, and manufacturing are reshaping the U.S. power grid. It was an interesting mix of policy, technology, and economics, and a reminder of how forecasting and data modeling directly influence energy decisions at a national scale.


Week 3 - Critical Minerals: This session featured Nathan Ratledge, Co-Founder and CEO of Alta Resource Technologies, and Leo Banchik, Director at Voyager Ventures.

They discussed the global race for minerals essential to clean energy and semiconductor production, and how data helps manage supply chains, measure sustainability, and reduce environmental impact while meeting growing industrial demand.

Later that week, I read Bill Gates’ memo, Three Tough Truths About Climate. His reminder: while climate change is serious, progress is real, and we can’t fix the planet by cutting off human development.

“It’s time to put human welfare at the center of our climate strategies.”

That line stuck with me because it captures what data should really be about: turning insight into decisions that improve lives.


Work at the Research & Innovation Office

At RIO, I’ve been helping organize data on faculty funding opportunities, building a cleaner structure for how information moves between agencies and the university. A lot of it involves documentation: compliance requirements, proposal templates, and aligning faculty projects with agency priorities.

I’ve also been working on a way to automatically pull funding opportunity listings into our website using an RSS feed. It’s a simple idea, but a really useful one: it could save a lot of time down the road and make funding opportunities easier for everyone to access. Working here has made me realize how much data exists on the administrative side of research.
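To give a sense of how little code that RSS pull needs, here's a sketch using only Python's standard library. The tag names assume a standard RSS 2.0 feed; a real funding source would need its own URL and possibly different fields, so treat this as a starting point rather than the actual implementation.

```python
import urllib.request
import xml.etree.ElementTree as ET

def parse_rss(xml_text):
    """Extract title, link, and date from standard RSS 2.0 <item> elements."""
    root = ET.fromstring(xml_text)
    return [
        {
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "published": item.findtext("pubDate", default=""),
        }
        for item in root.iter("item")
    ]

def fetch_opportunities(feed_url):
    """Download a feed and return its items as plain dicts."""
    with urllib.request.urlopen(feed_url) as resp:
        return parse_rss(resp.read().decode("utf-8"))
```

From there, rendering the returned dicts into the website or a spreadsheet is straightforward, and the whole thing can run on a schedule.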

Data in Nanomaterials

This week, I met Professor Carson Bruns and learned about the nanomaterials research happening in his lab with Aseem Visal. Their work focuses on plastics and material innovation, and even though they don’t currently have any data-heavy projects, it was fascinating to learn about what they’re doing.

I loved seeing how research in materials science connects with ideas in data analysis, even when the overlap isn’t obvious. Experiments like testing how materials react to light or pressure still generate layers of data through sensors, measurements, and simulations. Translating that kind of information into patterns and predictions feels like a natural bridge between disciplines.

It was a good reminder of how broad data science really is. It’s not confined to one space. It can connect chemistry, physics, computation, and everything in between. I’m hoping to keep exploring more opportunities that sit at that intersection.


Community & Career Growth

Had a 1-on-1 with Abigayle Peterson and joined The Netwrk Community. It’s good to have a space that blends mentorship, professional growth, and open conversation about tech.

Amid all the studying, we had an MS Data Science Field Day: volleyball, the beautiful Flatirons (I don't think I'll ever stop appreciating them), and meeting classmates outside the classroom.



If the last few months were about getting started, these two weeks were about getting grounded: learning, connecting ideas, and finding momentum again. 
